Key Takeaways

AI ROI Challenges: Many organizations struggle with measuring AI ROI, often misattributing issues to management rather than implementation.

Metrics Misalignment: Current metrics focus on technology performance but fail to address the underlying people and process issues.

Long-Term Impact: Poor AI implementations can cause organizational damage that accumulates over years, hampering effectiveness.

Organizational Health: AI investment decisions need to prioritize people and process metrics alongside technology for sustainable success.

Leadership Gaps: Transformation leadership typically does not involve HR, leading to short-sighted investments that lack long-term viability.

The ROI deck always looks clean. Hours saved, headcount redirected, cycle time reduced, adoption rates climbing toward whatever benchmark the vendor suggested in the kickoff call. Green arrows. Charts moving up and to the right. The CFO nods with approval.

Eighteen months later, something is off. Attrition is up in departments that were supposed to benefit most from the change. Cross-functional collaboration has gotten clunkier. A team that used to operate with a kind of informal coherence now moves like it's marching through quicksand. Nobody can point to a cause, so it gets attributed to the usual suspects: management style, market stress, post-pandemic something or other.

The AI rollout that preceded it by six quarters doesn't come up.

This is what measuring AI ROI wrong actually costs. Not a line item. A slow degradation that arrives misattributed and gets treated with the wrong remedies.

Tech Metrics vs People Metrics

The metrics that dominate AI investment conversations — time saved, headcount redirected, cycle time reduced, and increasingly, usage — are all leading indicators of technology performance. They measure the layer of the transformation that's easiest to quantify.

What they don't measure is the organizational layer, where the real costs of bad implementation live and where they compound until they're impossible to ignore.

Usage deserves particular scrutiny, because it's assumed to be an organizational health signal when it's actually a behavioral one. A BCG analysis from late 2025 found that 60% of companies globally were generating no material value from AI despite substantial investment, and identified the culprit in part as the metrics themselves.

Organizations fixated on logins and time-spent figures as proxies for adoption learned the hard way that those measures entirely miss whether AI has become central to how people actually work.

High adoption rates tell you that people are using the tool. They tell you nothing about whether the work is going well, whether teams are functioning, whether the tacit knowledge that used to move through collaborative tasks is still moving at all. A workforce can hit every usage benchmark while the connective tissue of how it actually operates is quietly coming apart.

BCG's October 2024 report, "Where's the Value in AI?" — drawn from a survey of 1,000 CxOs across 59 countries — found that roughly 70% of AI implementation challenges stem from people and process-related issues.

The budget allocation in most implementations runs exactly counter to that: the majority goes to technology, with change management and organizational readiness treated as line items rather than load-bearing parts of the investment.

When the metrics you use to evaluate a transformation only speak to the technology layer, you are, by design, measuring the portion of the equation least likely to determine whether it succeeds.

Redefining ROI

The CFO framing isn't wrong in isolation. It applies rigorous discipline to a real question: is this investment generating return?

The problem is that it applies a capital allocation logic to what is fundamentally an organizational change problem, and the time horizons don't match.

Software ROI cycles run 12 to 18 months. Organizational damage from a poorly managed implementation accumulates over two to three years and surfaces looking like something else entirely.

How the CFO ends up holding this conversation in the first place is worth examining. Anthony Onesto, who has spent his career inside HR tech and is now building AI transformation practices for mid-market companies, describes the pressure cascade.

Boards push CEOs, CEOs look into the organization for someone to own it, and the natural first choice is the CTO or CIO. When that proves insufficient, the conversation moves to the CFO.

That’s always going to be a financial decision, which sometimes is the right decision and sometimes it’s not.

Anthony Onesto

Former Chief People Officer at Suzy

His view is that HR should be leading these transformations, not as a change management afterthought, but as the function best positioned to think about how machines and people operate together. The fact that it rarely does, he argues, is structural. Most HR leaders haven't raised their hand, and the pressure cascade doesn't naturally route to them.

The result is an ROI conversation shaped by whoever inherited accountability, and in most organizations that person's incentives run toward demonstrable short-term efficiency, not long-term organizational health.

Challenging Professional Identities

The identity disruption alone illustrates the gap. When routine tasks get automated, professional identity often goes with them. The analyst who built judgment through low-stakes, repetitive work now has higher-value work to do, in theory. 

In practice, the context in which she was developing has been removed, and nobody has built a replacement. The disruption doesn't show up at the announcement. It surfaces eight or ten months later as disengagement that reads like a culture problem and gets addressed as one, while the actual cause sits unremarked in a past implementation log.

This isn't a niche concern. It's the mechanism by which automation investments produce worse outcomes than projected despite hitting their efficiency targets. The targets were set for the wrong system.

What gets automated is rarely just a task. Routine work often carries informal functions that aren't visible on any process map: coordination rhythms between teams, mentorship embedded in collaborative workflows, professional touchpoints that give junior employees regular exposure to how senior judgment operates. 

These aren't documented anywhere because they don't need to be until they're gone. At that point, you learn what they were worth the hard way.

A weekly cross-functional report that three team members built together wasn't just producing a report. It was producing alignment, visibility, and a standing reason for those teams to be in regular contact. Automate the report and you've captured the efficiency while quietly dissolving everything else it produced.

The cost of that doesn't appear in the ROI deck. It appears eventually in how those teams perform together or stop performing together.

The case for expanding what counts as a cost isn't an argument against ROI discipline. It's an argument for applying that discipline to the full system rather than the convenient subset. If 70% of what determines transformation outcomes lives in the people and organizational layer, then a measurement framework that excludes that layer isn't conservative, it's systematically optimistic about the wrong things.

The Right Approach

What a more complete measurement framework includes: 

  • Attrition trends in affected roles, with adequate lag time built in
  • Cross-functional communication patterns before and after significant automation changes
  • Knowledge transfer rates to junior employees
  • Trust levels, measured through surveys designed to surface actual sentiment rather than satisfaction proxies

None of this is technically difficult. Most organizations aren't doing it because they've never connected automation decisions to organizational health outcomes as a deliberate practice.
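As a sketch of how low the technical bar is: the first item in that framework, attrition trends with adequate lag built in, can be computed from data HR already holds. The example below is a minimal illustration with hypothetical numbers, not a prescribed methodology; the team size, rollout month, and lag window are all invented for the purpose of the sketch.

```python
from statistics import mean

def lagged_attrition_delta(monthly_exits, headcount, rollout_month, lag_months):
    """Annualized attrition after a post-rollout lag window, minus the
    pre-rollout baseline. The lag matters because, as the article notes,
    organizational damage surfaces months after the implementation.

    monthly_exits -- exit counts per month for the affected roles
    headcount     -- average headcount, assumed roughly stable
    rollout_month -- index of the automation go-live
    lag_months    -- months to skip before measuring the "after" period
    """
    before = monthly_exits[:rollout_month]
    after = monthly_exits[rollout_month + lag_months:]

    def annualized_rate(exits):
        return 12 * mean(exits) / headcount if exits else 0.0

    return annualized_rate(after) - annualized_rate(before)

# Hypothetical team of 40: exits look flat at go-live (month 6) and only
# tick up well afterward -- exactly the pattern a 12-month ROI review misses.
exits = [1, 0, 1, 1, 0, 1,   1, 0, 1, 1, 1, 1,   2, 2, 3, 2, 3]
delta = lagged_attrition_delta(exits, headcount=40, rollout_month=6, lag_months=6)
# delta > 0 flags a post-lag rise in annualized attrition worth investigating
```

The same before/after-with-lag shape applies to the other items on the list: cross-functional message volume, junior-employee exposure hours, survey sentiment. The analysis is trivial; the deliberate practice of connecting it to automation decisions is what's missing.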

Ravin Jesuthasan, whose work on workforce architecture spans decades of enterprise transformation, puts it plainly:

I don’t know of a single organization that led with the technology, kept the human side of the equation out of play that has ever gotten to a successful result.

Ravin Jesuthasan

Global Leader for Transformation Services at Mercer

The organizations that start with the tech, he argues, develop a binary view of the problem, substituting out the human element wherever possible, and never build the organizational sustainability that makes transformation stick.

Fixing the measurement problem is partially a technical question and mostly a governance one. Someone at the table when AI investment decisions are made needs to be responsible for the organizational health side of the ledger and needs the standing to treat it as equivalent to the efficiency side, not subordinate to it.

In most organizations, that person either isn't at the table or doesn't have that standing. The CHRO is often consulted on change management as an implementation task rather than as a co-equal dimension of what success means.

Jesuthasan frames the broader failure in terms that go beyond measurement. 

The change that is being asked of leaders is bigger than anything we've seen in the last 150 years. And I don't think many of them understand it.

That's not a condemnation so much as a structural observation. The leaders making AI investment decisions were trained in a world where the technology and the organization were separable problems. They're not anymore. 

The ROI deck that keeps the two apart is a legacy document being applied to a fundamentally different situation, and the error, compounding in attrition numbers and collaboration deficits, is getting expensive.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
