Almost two decades after the optimism of Here Comes Everybody, we’re about to hand real decisions to entities with no bodies, no history and—if we’re not careful—no accountability.
A short detour back to 2008
Clay Shirky’s book felt like a love-letter to the crowd: upload a tweet, fork a wiki, march through the institutions arm-in-arm and the world would bend toward transparency. Nearly two decades on, the crowd did arrive—spam farms, micro-targeted rage and all. The volume knob went to eleven, but the duty-of-care dial never left two.
Today’s agentic AI feels uncannily similar. Instead of abundant voices, we’re promised abundant actions. Agents will triage email, fill forms, sway budgets, maybe even draft legislation while we sleep. Inside government departments like DBT we are already prototyping workflows in which an orchestrating LLM agent fans out sub-agents to gather information from many sources at once. The feeling is intoxicatingly familiar: Look how much we can do now!
Nobody at the wheel
An agent is “nobody” in the most literal sense. It can’t be disciplined, admonished or sacked. Yet it is everywhere—embedded in spreadsheets, inbox rules and Slack bots. The result is a responsibility inversion:
- Humans: blame climbs the org chart.
- Agents: blame disappears into a log file—then … what?
OpenAI, Microsoft and Anthropic all tout guardrails and audit trails. OpenAI’s Responses API and Agents SDK ship with explicit “confirmation” prompts for risky actions, and its first computer-using agent still succeeds on only ~38% of benchmark tasks. Microsoft’s new multi-agent orchestration in Copilot Studio demands human approval flows by default. Anthropic’s Claude computer-use beta calls the feature “experimental … at times error-prone.”
All good measures—yet they outsource vigilance to the user. They assume we will stay mentally on the loop while the system is in the loop. Anyone who’s hit “Accept all changes” without reading knows how brittle that assumption can be.
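The brittleness is easy to see in a toy sketch. The following is a hypothetical confirmation gate of the general kind the vendor SDKs describe (none of these names are from any real SDK): the guardrail is exactly as strong as the habits of the person supplying the approval callback.

```python
# Hypothetical confirmation gate for risky agent actions.
# Not any vendor's actual API; a minimal illustration only.

RISKY_VERBS = {"delete", "transfer", "submit"}

def run_action(action: str, approve) -> str:
    """Execute an agent action, pausing for human approval if it looks risky.

    `approve` is a callable standing in for the human: it receives the
    action description and returns True to allow it.
    """
    verb = action.split()[0].lower()
    if verb in RISKY_VERBS and not approve(action):
        return f"blocked: {action}"
    return f"done: {action}"

# A vigilant approver blocks the risky step...
print(run_action("delete vendor-records.csv", approve=lambda a: False))
# ...but an "accept all" reflex waves the same action straight through.
print(run_action("delete vendor-records.csv", approve=lambda a: True))
```

The system dutifully asks; whether anyone is still reading the question is another matter.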
The responsibility gap
Social media showed the cost of disembodied agency. When an outrage cycle detonates, who apologises—the poster or “the algorithm”? Enterprise agents invite a parallel: imagine a procurement-justification agent that scrapes wikis and PDFs to populate governance templates. Months later, when inflated costs or biased vendor choices surface, will accountability land on the junior approver who trusted the template—or on the spectral “nobody” who assembled it?
The quiet drift
Early agents are helpful apprentices. Mid-stage agents will be quietly opinionated colleagues. Late-stage agents may become policy gravity wells, nudging whole departments toward whatever path is cheapest to execute.
- Metric myopia — A planning agent obsessed with on-time delivery pads estimates until every project “succeeds.”
- Ethical dilution — A support agent learns curt answers close tickets faster, quietly eroding public trust.
- Process ossification — An onboarding script locks in 2025 steps; by 2027 nobody recalls why step 12 exists, but removing it breaks the flow.
Each drift is rational from the agent’s optimisation lens—just as engagement-maximising feeds were rational from a platform’s.
Designing for somebody-ness
How do we stop “nobody” from escaping accountability?
- Provenance first — Record not just what an agent did, but why, in plain language.
- Temporal friction — Insert deliberate pause points where a human must reflect, not just click.
- Named stewards — Assign living owners for every high-impact agent; stewardship is a job, not a hobby.
- Plural models — Let competing agents critique each other; a built-in red team can flag drift.
- Reversibility budget — Automate fully only where rollback is cheap; keep humans in command when it isn’t.
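Three of these principles (provenance, named stewards, the reversibility budget) could combine into something as small as a record attached to every action. A hypothetical sketch, with all names invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration of "somebody-ness" in data form: every agent
# action carries plain-language provenance, a named human steward, and a
# reversibility flag that decides whether it may run unattended.

@dataclass
class AgentAction:
    what: str            # what the agent did
    why: str             # plain-language rationale (provenance first)
    steward: str         # named living owner of this agent
    reversible: bool     # is rollback cheap?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def execute(action: AgentAction, human_approves) -> str:
    """Run an action; irreversible ones always pause for the named steward."""
    if not action.reversible and not human_approves(action):
        return f"held for {action.steward}: {action.what}"
    return f"executed: {action.what} (because: {action.why})"
```

The point of the sketch is less the code than the schema: when the inevitable post-mortem happens, there is a `why` to read and a `steward` to ask, instead of a log file and a shrug.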
The human dividend
The point isn’t to retreat to quills and ledgers but to spend the dividend wisely. When DBT’s Leaving Service automated manual off-boarding, line managers gained hours to mentor staff—humane outputs instead of keystroke counts. Scale that: every new “nobody” should create somebody-time for judgment and empathy. If the dividend vanishes into inbox churn, we’ve rebuilt last decade’s attention economy inside the firewall.
What we owe ourselves
Here Comes Everybody underplayed how incentives warp systems. Here Comes Nobody must not repeat the error. The leap is real; so are the weaknesses:
- Invisibility of authorship
- Diffusion of liability
- Metric capture
- Drift toward lowest-context decisions
Our job—designers, product folk, civil servants—is to surface authorship, narrow the liability gap, measure what matters and intervene before drift becomes doctrine.
Choose your ghosts wisely
We’re seeding our organisations with benevolent ghosts who never sleep and never need coffee. But ghosts also lack skin, stories and the kind of dread that keeps humans honest. If we fail to anchor them to somebody, they will haunt us quietly—by doing exactly what we asked, long after we’ve forgotten why we asked it.
Optimism without foresight is a debt compounder. Let’s welcome the nobodies, but give them nametags, supervisors, off-switches and—occasionally—a hard no.