Calling AI Agents "Employees" May Teach Clients Not To Need Consultants

Originally published on Forbes, January 25, 2026

When a consulting firm publicly declares that AI agents can now do large parts of the work consultants were hired to do, it might unintentionally be teaching its clients how to replace it.

Language matters. When McKinsey & Company announced it was deploying tens of thousands of AI agents and assigning them employee IDs and email addresses, it quickly sparked a broader conversation about AI taking over jobs. The framing alone was enough to convince many that knowledge work, as we know it, is about to disappear.

AI Is Commoditizing Knowledge Work

What’s actually happening is simpler and more uncomfortable: AI is commoditizing knowledge work by turning once-premium outputs into table stakes. Seen through that lens, McKinsey’s decision to call AI agents “employees” may actually be diluting the firm’s value. Because if AI can do the work, why pay a premium for the firm?

In its eagerness to show it is going all in on AI, McKinsey risks letting its value proposition become indistinguishable from the output it produces.

This is the white T-shirt problem.

A white T-shirt is basic. It is everywhere. Anyone can buy one. Its value is not in the fabric, the cut, or even the brand label stitched inside. Its value emerges only when it becomes part of a style, a point of view, a way of showing up in the world. On its own, it is replaceable. In context, it can be iconic.

AI is turning large parts of professional work into white T-shirts.

In finance, it is models and forecasts that used to signal expertise. In marketing, it is decks, personas and campaign analysis. In HR, it is frameworks, surveys and benchmark reports. In law, it is research, drafting and precedent reviews. Research, analysis, benchmarking, first drafts, slide creation, synthesis. These were once differentiators. They justified headcount, fees and long timelines. Now they are increasingly baseline, fast, abundant and cheap. When consulting firms frame their value around output volume rather than interpretation, they no longer explain why a client should choose them.

That is why the announcement from McKinsey & Company matters far beyond one firm or one headline. It exposes a deeper tension facing every knowledge-driven organization as knowledge work becomes increasingly automated. Across industries, organizations are discovering that what once looked like differentiation was often just effort. AI does not remove value. It exposes where organizations never clearly articulated what their value actually is.

When Output Becomes Abundant, Judgment Becomes the Product

If AI agents can produce the same analysis as junior consultants once did, then the analysis itself was never the product.

Judgment was. Context was. The ability to decide what matters, what to ignore, what is signal and what is noise, and how to act in messy, political, human systems. Those capabilities were always the real value. AI simply strips away the illusion that the rest was indispensable.

Too many professionals have built their identity around being good at producing the thing. The deck. The model. The report. The recommendation. AI does not just automate these outputs. It exposes the differentiation underneath.

AI-Heavy vs. AI-Enabled: The Leadership Choice Ahead

There is a growing gap between organizations that are AI-heavy and those that are AI-enabled — a distinction that will define which firms remain defensible in the age of AI. The first use AI to produce more work, faster. The second use AI to make fewer, harder decisions better. In the AI-heavy model, humans review, approve and refine what machines generate. This work looks efficient, but it is easy to replicate. In the AI-enabled model, human work moves up the stack toward authority: people decide what deserves attention, which signals are credible, which trade-offs are acceptable and when speed should give way to judgment.

This work is harder, not easier. It carries risk, and it requires context, organizational awareness and the willingness to own outcomes rather than activity. This is the work that remains human. And it is precisely the work clients will continue to pay for.

This is why the language matters so much.

Calling AI agents “employees” reinforces the idea that work is primarily about production. Consulting, however, was never about production alone. It was always about judgment layered on top of information. By framing AI as workers rather than as leverage, firms unintentionally shift the conversation away from what makes them valuable.

Ironically, in trying to show how advanced they are in using AI, consulting firms risk making themselves look more replaceable, not less. When the basics are free, firms need to be clear about what differentiates them.

AI makes everyone capable of producing a white T-shirt. Only some will turn it into a signature. They will be the ones who are clear about what they stand for when everyone has access to the same tools. They will be able to articulate what makes them distinct when output is abundant. They will be able to tell the client why they should trust their judgment rather than that of an algorithm.

Language is not neutral here. The way leaders talk about AI shapes how roles are designed, how value is priced and how easily people become interchangeable. Every organization adopting AI is already making a choice, whether consciously or not. Either it uses AI to amplify judgment, authority and accountability, or it uses AI to optimize output and commoditize its own people.

How leaders frame AI today will determine whether their organizations become harder to replace, or teach the market how to do without them.

That is not a technology decision. It is a leadership one. And no AI agent can answer the question leaders now have to face: when the basics are free, can you clearly explain why your people still matter?
