Your new direct report doesn't sleep
On leading teams where half the contributors are human and the other half don't care about your org chart
Jack Dorsey published a post tracing organizational design from the Roman legions to AI. Two thousand years of history, one conclusion: managers are information routers, and AI routes information better.
I read it twice.
The first time, I was nodding. The history lesson is genuinely good. The diagnosis of hierarchy as an information bottleneck is accurate. And the vision of a “world model” that maintains a continuously updated picture of the entire business, replacing layers of status meetings and alignment rituals? Anyone who’s ever sat through a three-hour quarterly review wondering who this was actually for will feel the pull.
The second time, I started counting assumptions. I ran out of fingers.
The pitch
Dorsey’s argument goes like this: humans can effectively manage three to eight people. This biological limit forces organizations to add coordination layers. Each layer slows information. AI removes the constraint by maintaining a shared model of business state that everyone can access directly. Result: fewer managers, more builders, faster decisions. Yay!
He proposes three roles. Individual contributors who build. Directly Responsible Individuals who own problems that span multiple teams for defined periods. Player-coaches who develop people while still building. No pure managers. The world model handles alignment. And pigs might fly.
If you’ve spent any time in an organization larger than a dozen people, you feel the appeal. The layers, the alignment meetings, the translated-twice-before-it-reaches-you version of a decision someone made three weeks ago. All real friction and all worth reducing.
But reducing friction and removing the people who manage friction are different operations.
Why everyone hates middle management (and why they’re wrong)
I don’t know when middle management became the villain of every organizational redesign. We show up in every consulting deck as the “layer to eliminate.” Every new model, from Holacracy to AI world models, starts with the same premise: the problem is the people in the middle.
I’m in the middle. Let me tell you what the middle actually does.
We translate strategy into work and work into strategy. We absorb pressure from above so teams can focus. We absorb frustration from below so leadership hears what's actually going on, not the version that got cleaned up on the way up. We are, in the most literal sense, the shit shovelers. The people who handle what nobody else wants to touch: the difficult conversation, the impossible deadline negotiation, the moment when the plan falls apart and someone has to figure out what happens next.
DDI surveyed 11,000 leaders across 50 countries. Trust in immediate managers dropped from 46% to 29% between 2022 and 2025. A 37% relative decline. Forty percent of leaders have considered quitting. Seventy-one percent report heightened stress. And frontline leaders are three times more likely than senior executives to have concerns about AI.
The people closest to the work are the most stressed, the least trusted, and the most worried about what’s coming. And the organizational response is: let’s remove their roles.
We all know how the Flat Earth theory ended. Flat orgs are heading the same way, just with better branding.
Flat org, meet Flat Earth
Every five years or so, someone announces a better way to organize humans. Holacracy at Zappos. Squads at Spotify. No managers at Valve. Each experiment started with genuine insight and ended the same way: when you strip out the visible structure, you strip out the map. What replaces it is informal, unaccountable, and works only for people who already know who actually calls the shots. They adapt fine. Everyone else is guessing. Ever heard of the cool kids club? It’s real.
Dorsey’s DRI model sounds great on paper. Own a problem that spans multiple teams for 90 days, pull whatever resources you need. But who picks the problems? Who decides the scope? Who steps in when 90 days isn’t enough? Those are hierarchy decisions. They just lost the job title that made them visible.
I’ve lived through what happens when “flat” and “empowered” become excuses to stop communicating.
Years back, we tried the “every team is an island” approach. Autonomous. Self-directed. No cross-team dependencies to slow anyone down. It took a couple of weeks before one team shipped a change that broke the system downstream, and the other team spent a Friday afternoon troubleshooting to a chorus of “we haven’t changed anything, what the hell happened?”
Within a month, the teams were fully us vs. them. We had to undo the islands. Rebuild the bridges. Make sure people crossed paths: in Slack, in person, in whatever worked. Because it turns out we were part of a living, connected system, and pretending otherwise just meant the surprises were bigger and the blame was louder.
I’ve also seen team members find out, by accident, about features they’d have to sell, because the empowered team building them didn’t think to tell anyone before shipping. These were the same people who thought keeping others in the loop meant asking for permission, which, if you think about it for more than five seconds, is a bit silly. Empowerment favors action. Less so communication. So they shipped fast, almost in secrecy, so no one could slow them down. Then complained that marketing had no materials and sales had no pitch.
The popular take is that hierarchy is the obstacle and flattening is progress. But the hierarchy you remove doesn’t disappear; it becomes the hierarchy you can’t see. You didn’t eliminate power dynamics, you just made them harder to challenge.
Now add agents to the mix
This is where it gets interesting, and by interesting, I mean nobody is prepared for this.
The conversation about AI and organizations splits neatly into two camps. Camp one: AI eliminates jobs (FOMO, rush to deploy, keep buying tools that don’t work yet). Camp two: AI augments jobs (it’s just a tool, relax, nothing will change). Neither camp is talking about the specific, practical, Monday morning reality of managing teams that include both humans and AI agents.
Your team now includes contributors that never get tired, never push back, and never tell you when you’re wrong. See the problem?
An agent can generate 80% of a deliverable before your human engineer has finished scoping the edge cases. The agent’s output looks complete. It passes tests. It even has comments. What it doesn’t have is context about why the third endpoint was deliberately left slow, or that the product team changed direction in a hallway conversation yesterday.
Now imagine the reality most of us actually work in. A codebase of ten thousand lines built over years, with no documented decisions about why anything was done the way it was. If you’re lucky, only half the people who wrote it have left the company. Exception on workaround on exception, with some good practices sprinkled in... and then more exceptions, because sometimes it’s customer value first, stability later.
An agent will build its own understanding of that code. It might, at some point, revert the fix you shipped last week for a critical production issue, and in the worst case commit straight to master without hesitation. Because from the agent’s perspective, why not? It doesn’t know the backstory. It isn’t always good at following instructions. And nobody told it that particular line of code was load-bearing in a way that doesn’t show up in tests.
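There’s no universal fix for that, but even crude guardrails buy you something. Here’s a minimal sketch of one, assuming a local git pre-commit hook written in Python: it refuses direct commits on a protected branch. The branch names are placeholders for whatever your repo actually protects, and real protection belongs server-side, where a `--no-verify` can’t skip it.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit guard: refuse commits made directly
# on a protected branch, so an agent (or a human in a hurry) has to go
# through a branch and a review instead.
import subprocess
import sys

PROTECTED = {"master", "main"}  # placeholder; adjust to your repo's conventions

# Ask git which branch we're currently on.
branch = subprocess.run(
    ["git", "rev-parse", "--abbrev-ref", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if branch in PROTECTED:
    # sys.exit with a string prints to stderr and exits non-zero,
    # which is what aborts the commit.
    sys.exit(
        f"pre-commit: direct commits to '{branch}' are blocked. "
        "Open a branch and a pull request instead."
    )
```

A hook like this won’t teach the agent the backstory. It just buys you the review where a human can supply it.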
Meanwhile, the humans on your team are watching. The engineer who used to be your strongest contributor at their core task is now slower than an agent at that same task. Their value hasn’t decreased. Judgment, institutional knowledge, the ability to read the room and flag something the spec missed. All still critical. But their confidence has taken a hit because the visible measure of contribution, the thing they built their identity around, isn’t special anymore.
HBR published research on what happens when AI provides confident but incorrect information: it creates “trust ambiguity.” People stop trusting the AI and start distrusting each other’s use of it. The team fragments. Nobody agreed on how much to trust the tool, and now nobody trusts each other’s judgment either.
And it’s not just the ICs. Now that leaders have AI tools too, they brainstorm with agents, get excited about the output, and send over two sentences of high-level instruction with “the details are in the linked Claude chat.” No useful context. No evaluation of whether this even makes sense. No pause to ask the agent what could go wrong or why this might not work. Before, they’d come to the team for input. Now they arrive with a solution the agent designed without knowing anything about our codebase, our constraints, or the three times we already tried this. Just vibes, a link, and an encouraging “Just do it!” Helping the team think, apparently.
This is happening now, in real organizations, with real leaders who used to ask their teams for input before making decisions. Is this the new reality? Maybe. But the trust ambiguity isn’t just about agent output anymore. It’s about whether your boss ran the idea past a human before they ran it past you.
This is where Dorsey’s “world model handles alignment” falls apart. The world model can track velocity, customer metrics, system health. What it can’t track is your best engineer getting a little quieter every week, because he’s been flagging the same risks for months, nobody’s picking them up, and the only questions he gets are “how fast?” and “why not faster?” It can’t see the reluctance to make hard calls when you know no one has your back if it goes wrong. It won’t notice that the work has quietly stopped being the kind anyone would put on a CV. It tracks the work, but it sure misses the people doing it.
The stuff that isn’t machine-readable
People are not rational agents who write everything down, communicate clearly, and follow the optimal path. Sometimes they’re tired. They run out of energy, will, or both. They don’t want to write documentation. They skip the standup because they’ve got nothing to show yet and don’t want to say that out loud. They avoid the hard conversation because it’s Thursday afternoon and they’ve already had three that week. And other times they’re on fire. Full of ideas, moving fast, shipping things, and the last thing they want is to slow down and write it up. Let’s just build! They’ll document it later. (They won’t.)
There’s a name for this kind of environment. Jamais Cascio calls it BANI: Brittle, Anxious, Nonlinear, Incomprehensible. It replaced VUCA, which described the environment from the outside. BANI describes how it feels from the inside. That’s not a strategic framework. That’s a regular day at the office.
Dorsey’s model is optimized for a world that isn’t BANI. It assumes the system is comprehensible, that causation is linear, and that the structure is resilient. BANI says: your system is brittle, your people are anxious, causation is nonlinear, and some of what’s happening is genuinely incomprehensible. Not temporarily unclear. It doesn’t fit in a model. Period.
Cascio himself said something that stuck with me: define the purpose, not the path. He calls it “flexive intent.” Get everyone pointed at the same thing, then let them figure out how to get there. Which is really just admitting that control was always an illusion. But someone still has to hold the purpose steady when the path keeps changing. Someone has to notice when the team has gone quiet. Someone has to do something about it. That’s my job.
So what do I actually do with this
Leadership theorists have a name for the archetype they think will thrive in BANI environments: the Futurecaster. Visionary, agile, innovative, relational, resilient. It reads well in a keynote abstract.
Now try selling that to a team that’s been through three strategic pivots in 18 months, watched half their colleagues get laid off, and has a leadership team that oscillates between “we need to adopt AI yesterday” and “actually, let’s wait for Q3 to decide.” Your team isn’t hungry for another vision. They’re hungry for someone who will stop changing the plan every time a CEO publishes a blog post.
Your VP is going to keep forwarding the latest AI hype article with “thoughts?” attached. The C-suite is going to keep chasing every shiny tool that promises 10x productivity. That’s not malice. That’s leadership FOMO, and it’s everywhere right now. You can’t change the people above you. You can change how you absorb what they send down.
So the competencies that matter aren’t new. They’re old competencies applied to a context that didn’t exist 18 months ago.
Cognitive flexibility is the World Economic Forum’s polite way of saying: you need to hold two contradictory ideas at breakfast and still make a decision by lunch. In practice, it looks like switching between “evaluate this agent’s output for subtle errors” and “support this human who feels their role is disappearing” in the same hour. Holding “hierarchy can be oppressive” and “hierarchy can be clarifying” at the same time, and knowing which one is true right now, for this team, for this decision. I started building this by spending time in rooms where people think differently than I do. Sitting in on sales calls when my background is engineering. Joining product discussions for the platform my team supports. Cognitive flexibility grows from problems shaped differently than the ones you’re used to. No workshop replicates that.
Emotional resilience, and beyond it, anti-fragility. Resilience is surviving stress and returning to baseline. Anti-fragility is getting stronger because of it. Your humans are anxious about the agents. The agents produce without caring about the anxiety they generate. You sit in the middle, absorbing the tension so the team can function. Nobody wrote a job description for this. You’re doing it anyway.
What I’ve learned is that vulnerability is what makes this work. When I started saying “I don’t have a playbook for human-agent team dynamics, we’re figuring this out together,” the team didn’t lose confidence. They stopped pretending they had it figured out, too. And that’s when we started actually solving problems instead of performing competence at each other. I’ve been on teams where the leader was the last to know something was going wrong. It’s never because the team didn’t see it. It’s because nobody thought telling the leader would change anything.
Sensemaking. Nobody talks about this one enough. AI has made it easier to generate options, strategies, and recommendations. It has not made it easier to determine what matters. The shift I keep coming back to: it’s not “what should we do” anymore. It’s “what’s actually happening here.” Without that, more information just creates confusion. Teams get stuck in debate mode. Executives reverse decisions. Meetings multiply. You’ve lived that week. I’ve started forcing precision before action: what decision are we actually making? What are the constraints? What are we explicitly choosing not to do? Boring questions. But the teams that ask them consistently spend less time undoing decisions nobody remembers making.
The fortitude to embrace calculated risk means deploying agent-generated work you can’t fully verify, accepting that your team will make mistakes in a category of failure that didn’t exist before, and owning outcomes you didn’t directly produce. Inc. reported that AI implementations fail because executives deploy without addressing the operational, cultural, and governance stuff first. The technology usually works fine. Everything around it doesn’t.
There’s a name for the two sides of this: FOMO (racing to adopt) and FOMI (fear of massive implosion from deploying recklessly). Nearly 94% of C-suite executives report dissatisfaction with their AI solutions, and their response is to buy more AI solutions. That anxiety rolls downhill. By the time it reaches your team, the excitement has curdled into exhaustion. So you make the call anyway. Keep the blast radius small. Be honest when something breaks.
Some weeks I feel like a sponge for organizational FOMO. Every day, another signal that everything is about to change. Another article forwarded, another tool to evaluate, another “have you seen what X company is doing with AI?” I have to be careful not to get sucked in myself. Because it’s easy. The excitement is real, the pressure is real, and falling behind feels like a thing that could actually happen to you. But if I pass all of that through to the team unfiltered, they don’t get motivated. They get whiplash. So I translate instead. This is what it means for us. This is what changes. This is what stays the same. Some days, translating is the only useful thing I do.
The WEF says 70% of job skills will change by 2030. That’s probably true. But it doesn’t mean 70% of what makes you good at your job will become irrelevant. The technical surface changes. The human part stays. Judgment, trust-building, the ability to hold contradiction, the courage to make a call when the data is incomplete. Those don’t have a half-life. They compound.
The futurecaster isn’t someone who sees the future. It’s someone who holds the team together while the future arrives in unpredictable, nonlinear, incomprehensible bursts. That’s a middle management skill. Always has been.
Where Dorsey is right
I don’t want to be unfair. The post identifies a real problem. Layers of human coordination do slow information. AI can compress that overhead. The “world model” concept, a continuously updated shared picture of business state, is genuinely useful. If your organization’s primary bottleneck is information routing, AI coordination will help.
And honestly? Parts of Dorsey’s vision would make my job easier. A system that automatically surfaces upcoming changes, pulls relevant context from scattered Slack threads and private channels, sends reminders about releases before they surprise the team next door. I’d take that. Right now, too much information lives in the heads of a few people or buried in channels most of the org doesn’t even know exist. If AI can make that visible, that’s not replacing middle management. That’s giving us better tools to do the work we’re already doing.
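None of that requires a full world model. Here’s a minimal sketch of just the surfacing piece, assuming Python with slack_sdk and a bot token that has read and write scopes. The keywords, the digest channel, and the one-week window are all invented for illustration, and pagination and error handling are left out.

```python
# A sketch, not a product. Assumes: pip install slack_sdk, and a bot token
# in SLACK_BOT_TOKEN with channels:read, channels:history, and chat:write.
import os
import time

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

KEYWORDS = ("release", "rollout", "breaking change", "deprecat")  # hypothetical
DIGEST_CHANNEL = "#eng-heads-up"  # hypothetical
LOOKBACK = 7 * 24 * 3600          # one week, in seconds

oldest = str(time.time() - LOOKBACK)
hits = []

# Walk public channels and collect recent messages that mention a change.
# (Real code would follow the response cursors to paginate; omitted here.)
for channel in client.conversations_list(types="public_channel")["channels"]:
    history = client.conversations_history(channel=channel["id"], oldest=oldest)
    for msg in history["messages"]:
        text = msg.get("text", "")
        if any(k in text.lower() for k in KEYWORDS):
            hits.append(f"#{channel['name']}: {text[:120]}")

if hits:
    client.chat_postMessage(
        channel=DIGEST_CHANNEL,
        text="Changes mentioned this week that might surprise someone:\n"
        + "\n".join(hits),
    )
```

Thirty-odd lines, and it would already catch some of the “nobody told us” surprises. Nothing about it replaces middle management; it makes the routing part of the job cheaper, which is exactly the part I’d happily give away.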
But most organizations I’ve worked in aren’t primarily bottlenecked by information routing. They’re bottlenecked by decision avoidance, because the system rewards not deciding over deciding wrong, and by the gap between what leadership says and what the organization’s operating system actually does. Those aren’t information problems. They’re human problems. And human problems don’t get solved by replacing the humans with a model.
The competency future leaders need has nothing to do with thriving without hierarchy. It’s building structures honest enough to be challenged and flexible enough to handle what’s coming. And then saying, clearly and out loud: this is new, I don’t have it figured out, and we’re going to build the answer together.
That won’t fit in a world model. But it might fit in a team.