Andy Roberts

3 April 2026

SIGNAL

Everyone is a manager now

The story you keep hearing about AI is about jobs. Which ones are safe. Which ones aren’t. Whether your role will still exist in five years.

It’s a compelling story. It’s also the wrong one.

The more important shift is already underway, in every function, at every level of every organisation. AI isn’t replacing workers yet. It’s replacing the work. And almost nobody is telling workers what their jobs actually require now.

The delegation is already happening

Walk through any team that’s actually using AI and you’ll see the same pattern, regardless of function.

The marketing manager who used to write first drafts is now reviewing them. The financial analyst who used to build models is now specifying what the model should show and interrogating the output. The software engineer who used to write the boilerplate is now directing an agent to write it, then deciding whether what came back is right.

In every case, the human has moved upstream. They’re no longer doing the work in the traditional sense. They’re directing, reviewing, correcting, and deciding.

This is happening in engineering. It’s happening in people teams, operations, finance, legal. Faster than anyone planned. Without any real preparation. Most organisations haven’t acknowledged the shift, let alone equipped their people for it.

So everyone’s a manager now, right?

The “management” analogy is everywhere. If you used to do the work and now you’re directing something else to do it, that sounds like managing. And the comparison to a manager gaining a team of tireless, fast, always-available reports is a useful mental model for getting people comfortable with the idea of delegation.

But push on it for a moment and it breaks down.

Traditional management is about people. It’s about motivation and coaching, reading the room, navigating politics and building relationships. It’s about human complexity.

None of that applies here. An agent doesn’t need coaching. It doesn’t get demoralised when its output is rejected. It doesn’t have a career trajectory you need to think about or a working style you need to accommodate.

The skills that make someone an effective people manager are largely irrelevant for directing AI. Calling it “management” isn’t just imprecise. It sends people in the wrong direction.

A new kind of work that doesn’t have a name yet

What individual contributors (ICs) actually need in an agentic world doesn’t have a clean name yet, which is part of why organisations are struggling to develop it.

The closest analogies are roles like creative director, editor, or architect: people whose job is to hold the vision, shape the output, and know when something is good enough versus when it isn’t. But even those don’t quite capture it.

Here’s what the role actually requires:

Clarity of intent. The ability to specify what good looks like precisely enough that an agent can execute against it. When you do the work yourself, the goal lives in your head. Writing it down with enough precision for someone else to act on is a skill most people have never had to develop.

Outcome thinking. The shift from “how do I do this” to “what does done look like.” These are genuinely different cognitive modes. Defining the destination before you move is uncomfortable for people whose expertise lived in the doing.

Critical evaluation. The ability to look at output and judge it. Not just “does this look right” but “is this actually right.” Agents produce errors, inconsistencies, and gaps with complete confidence. You need to know what good looks like to catch them.

Orchestration. As work involves multiple agents and multiple steps, someone has to sequence it, decide where to intervene, and know when to hand off. That’s a kind of operational thinking most ICs have never needed before.

Taste. The hardest to teach. The judgment to know when something is good versus when it’s merely fine, and whether the problem is the brief or the execution. Taste is what separates people who generate a lot of AI output from people who generate good AI output.

None of these are on anyone’s job description. None of them appear in most organisations’ competency frameworks. And almost none of the standard training or development programmes address them.

Why this is harder than it looks

Most people became good at their jobs through doing. The doing built the instincts. The repetition built the pattern recognition. The struggle built the judgment.

The agentic shift asks people to stop doing, at least in the traditional sense, and start directing. That’s a genuinely different cognitive mode, and it doesn’t come naturally to people who’ve built their careers on execution.

There’s also a confidence problem. When you do the work yourself, you know the work. When an agent does it, you have to trust your ability to evaluate it. For people who’ve always proven their worth by doing the work, that’s an uncomfortable place to be. It can feel like you’re one bad prompt away from shipping something you don’t fully understand.

That discomfort is a reasonable response to a genuine shift in what the job requires. Most people are being thrown into this transition cold, with no framework for thinking about the new role, no language for the new skills, and no explicit permission to develop them during working hours.

And it’s not just individual contributors who are struggling. Leaders are often just as underprepared, which means the people who should be helping their teams make sense of the shift don’t know how to do it either.

What leaders can do about it

The first thing is to name it.

Most organisations have adopted AI tools without articulating what changes as a result. The implicit message is: use the tools, stay productive, carry on. But people can feel the shift even when nobody’s named it. Acknowledging that what the job requires has changed, that this is a transition not a tweak, gives people permission to take it seriously.

The second thing is to create deliberate space to develop the new skills.

This doesn’t mean training programmes or workshops. It means creating feedback loops on output quality, not just volume. Having honest conversations about what “good” looks like when the agent produces the first draft. Treating clarity of intent as real craft, not an assumed capability.

The third thing is to stop measuring the wrong things.

The temptation when teams adopt AI is to measure throughput: more content, more code, more analysis, faster. Volume is easy to measure. But volume without quality is just more of the wrong thing. Reward the quality of direction and judgment, not just the quantity of output.

The challenge hiding behind the headline

The challenge isn’t that AI is replacing workers.

It’s that AI is replacing the work, and most organisations are treating that as a productivity story when it’s actually a capability story. The people who will thrive in this shift are the ones who develop genuine skill at directing, evaluating, and shaping AI output. The ones who won’t are the ones who mistake high-volume output for high-quality work, and never develop the judgment to tell the difference.

That capability doesn’t develop on its own. It needs to be named, valued, and deliberately built. The leaders who get ahead of this aren’t just going to have more productive teams. They’re going to have teams that can tell the difference between output and quality.
