  • May 14, 2026
  • By Ian Jacobs, vice president and lead analyst, Opus Research

Bots Get Agency, You Get Monitored


“Fitter. Happier. More productive.” —Radiohead (“Fitter Happier”)

The AI agent boom is more than a little strange. People in that world love to use the language of freedom while quietly putting together a new system of control. Vendors talk about autonomy, orchestration, reasoning, and digital coworkers. It all sounds very modern. But when you look closer, you realize a lot of the underlying logic would have brought a smile to the face of Frederick Taylor, the early-20th-century industrial efficiency expert.

Taylorism was about breaking work into parts, measuring every part, standardizing the best method, and shifting control over how work gets done from workers to managers. Replace the stopwatch with analytics and the foreman with an orchestration layer, and the likeness is hard to miss, at least as an analogy.

That logic now shows up in how companies describe humans in an agentic world. The machine gets to be the agent. The person gets to be the backup plan. In the sales pitch, AI agents handle routine work while people focus on higher-value moments. Sometimes that’s even true. There is real value in clearing away repetitive nonsense, preserving best practices, and helping people with tedious tasks. Few people rush into work revved up to summarize a call, hunt through six systems for the same policy, or clean up yet another case note that nobody will read unless legal gets involved.

That’s the rosier take on this story. A good system can make work less chaotic and help more people use the same knowledge. Some structure helps when the alternative is scattered tribal knowledge, inconsistent decisions, and managers who think coaching means sending a message with three question marks.

And there’s also a darker version.

As more routine work gets pushed to software, the human role narrows. People are told they will handle the exceptions, the edge cases, the emotional conversations, the messy judgment calls, and the situations where automation breaks down in public. Congratulations, you are now a human exception queue!

That creates a nasty shift in the texture of work. The relatively straightforward tasks that once gave people rhythm and breathing room start to disappear. What remains is the hard stuff all day long. More escalation. More conflict. More weirdness. More moments where the system has a hiccup and a person has to clean up the mess.

At the same time, the worker is surrounded by more measurement than ever. Real-time prompts, next-best-action guidance, automated quality scoring, coaching nudges, compliance alerts, sentiment tracking, performance dashboards. All of it framed as support. And some of it may genuinely be support. But much of it ensures the space for unscripted judgment gets smaller and smaller.

That’s the neo-Taylorist part. Human variation starts to look like operational noise. Discretion looks expensive. Judgment becomes something to capture, codify, and route around. The ideal employee starts to resemble someone who can step in gracefully when the system fails, follow the recommended path, and hand the work back without creating fresh trouble.

What keeps nagging at me is how this language reshapes the human role. In too many versions of the agentic AI story, people are being redefined as managed exception handlers, monitored decision followers, and flexible capacity inside an orchestrated system. Their judgment is still there, but it gets treated less as part of the value being created and more as something to be supervised, contained, and called on when the software gets confused.

That doesn’t mean the whole project is misguided. Some structure helps. Some automation is useful. Some of this really can make work less chaotic and less tedious. Yet the old Taylorist instinct is easy to spot. The agents keep getting described as intelligent and autonomous. The humans keep getting described as the ones who step in, follow guidance, and keep the machine running smoothly. That’s kind of a dark punch line for a future supposedly built around agency.

Ian Jacobs is vice president and lead analyst at Opus Research.

