
Over the last year, I have sat in more meetings about AI than I can count. The pattern is familiar. Someone opens with a statement of intent: “We need to do something with AI.” Budgets are approved, vendor demos follow, pilots launch. Six months later, the question quietly changes to a more uncomfortable one: “What did we actually get out of this?”
This is not a technology failure. It is a motivation problem.
AI is increasingly treated as a destination rather than a tool. When that happens, organizations start with the solution and go looking for a problem. In the best case, they end up with an interesting proof of concept. In the worst case, they operationalize complexity, increase cost, and still fail to produce measurable value.
The most common mistake is starting with AI instead of value
I see the same root cause behind stalled AI initiatives. The starting point is rarely a clearly articulated business constraint. Instead, it is usually external pressure: competitors, the board, or vendor narratives that frame AI as mandatory for relevance.
The outcome is predictable. Teams generate “use cases” that sound impressive but are loosely connected to outcomes. They apply AI to workflows that are already inefficient. They introduce agentic systems where deterministic automation would be cheaper, faster, and safer.
That is how AI becomes a solution in search of a problem.
The uncomfortable truth is that many so‑called AI use cases do not actually require AI at all. If the task is rule‑based, repeatable, and bounded, adding reasoning models and agents only increases operational overhead. You pay for inference, monitoring, security reviews, and governance for something a standard workflow engine could do reliably at a fraction of the cost.
This is how organizations quietly waste money while believing they are “innovating.”
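To make the distinction concrete, here is a minimal sketch of a bounded, rule-based decision; the thresholds and field names are hypothetical. A plain function, or any standard workflow engine, handles this deterministically, with no inference cost and no new governance surface.

```python
# A bounded, repeatable decision expressed as a plain rule.
# Thresholds and field names are hypothetical, for illustration only.

def approve_refund(amount: float, days_since_purchase: int, prior_refunds: int) -> bool:
    """Auto-approve small, recent refunds from customers with no refund history."""
    return amount <= 100 and days_since_purchase <= 30 and prior_refunds == 0

# Deterministic, testable, auditable. Routing this same decision through
# an agent adds inference cost, latency, and a new failure mode
# (inconsistent answers) without improving any in-policy outcome.
print(approve_refund(49.99, 12, 0))   # True
print(approve_refund(250.00, 5, 0))   # False
```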
Agentic AI amplifies weak decisions
Agentic AI is powerful. It is also unforgiving.
When applied to a well‑designed process with a clear decision point, agents can scale judgment and reduce cognitive load. When applied to a poorly understood or unnecessary process, they amplify inefficiency.
I have seen environments where an agent was added simply because it was possible, not because it was needed. The process itself was never questioned. As a result, the company automated noise, scaled ambiguity, and introduced new failure modes, all while increasing operational spend.
AI does not fix broken workflows. It accelerates them.
Before introducing any agentic layer, the hard question should be: What human judgment are we trying to improve, and how will we know if it actually got better? If that question cannot be answered clearly, the agent will almost certainly disappoint.
The trap of vanity metrics
Another recurring pattern is how success is measured.
As AI adoption accelerates, many organizations fall back on metrics that are easy to collect but meaningless to the business. Number of users. Number of prompts. Token consumption. Agents deployed.
These metrics create the illusion of progress. They look impressive in steering meetings and dashboards, but they say nothing about value.
High usage can mean many things, most of them bad. Inefficient prompting. Redundant agents. Poorly designed workflows that require constant retries. In some cases, it simply reflects cultural signaling: teams want to be seen as “AI‑forward,” so they use it aggressively, regardless of outcome.
If AI success is measured primarily through activity, the organization is optimizing for noise, not results.
What value‑oriented AI measurement looks like
In cases where AI delivers real ROI, the metrics look very different. They start with outcomes, not inputs.
Instead of asking how much AI is used, leaders ask:
- Are decisions made faster without increasing rework?
- Has outcome consistency improved across regions or teams?
- Did this reduce cost, avoid risk, or unlock revenue that was previously blocked?
- Were steps removed from a workflow, not just automated?
These are harder questions. They require baseline measurement and honest comparison to non‑AI alternatives. But they are the only questions that matter.
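For readers who want “baseline measurement” made tangible, here is a minimal sketch, assuming you already log decision cycle times and rework flags. The sample numbers are invented, and a real evaluation needs proper sample sizes and controls.

```python
# Outcome-oriented measurement: compare decision speed and rework rate
# against a pre-AI baseline. All numbers are invented for illustration.
from statistics import mean

baseline = {"cycle_hours": [48, 52, 40, 61], "reworked": [0, 1, 0, 1]}
with_ai  = {"cycle_hours": [30, 28, 35, 33], "reworked": [0, 1, 1, 1]}

def summarize(label: str, sample: dict) -> None:
    rework_rate = sum(sample["reworked"]) / len(sample["reworked"])
    print(f"{label}: mean cycle {mean(sample['cycle_hours']):.1f}h, "
          f"rework rate {rework_rate:.0%}")

summarize("baseline", baseline)
summarize("with AI", with_ai)
# Faster decisions with a higher rework rate is not a win;
# both numbers have to move in the right direction.
```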
AI earns its place when it changes an economic or operational constraint. Everything else is experimentation, which is fine, as long as it is labeled as such and funded accordingly.
A healthy skepticism toward vendor narratives
Vendors are doing what vendors always do. They showcase best‑case scenarios, abstracted from organizational complexity. They rarely talk about integration pain, data readiness, governance overhead, or long‑term operating costs.
In reality, AI is not plug‑and‑play. It is a capability that must sit inside real processes, real incentives, and real accountability structures. Without that, even the most impressive model will remain a demo.
A practical rule of thumb I often share is this: if the value case cannot be explained without mentioning the technology, it is probably weak. Value should stand on its own, even if you remove the words “AI,” “agent,” or “model.”
Reframing the executive question
The most successful leadership teams I work with have stopped asking “Where should we use AI?”
Instead, they ask:
- Where is judgment slow, inconsistent, or expensive?
- Where do decisions fail more often than they should?
- Where is human attention the true bottleneck?
Only after those questions are answered does AI come into the conversation. And often, the answer is not AI at all, but process redesign, clearer ownership, or better data.
That is not a failure. That is discipline.
Closing thought
AI is becoming cheaper, faster, and more accessible every month. That makes judgment, not technology, the scarce resource.
Organizations that treat AI as a status symbol will scale cost faster than value. Organizations that treat AI as a surgical tool for removing real constraints will quietly outperform them.
The difference is not ambition. It is restraint.