Key Takeaways

AI Success Rate: Only 5% of AI initiatives achieve desired business outcomes, revealing systemic issues.

Organizational Readiness: The failure of AI projects often stems from organizations being unprepared for the transition.

Problem Diagnosis: Companies frequently misidentify the actual problem, leading to misguided AI implementation.

Stakeholder Involvement: Involving stakeholders early in the process can prevent resistance and enhance acceptance of AI tools.

Governance Literacy: Understanding responsible AI use is crucial for decision-makers who lack technical backgrounds.

You’ve no doubt heard the news by now. Gartner estimates that only 5% of AI initiatives deliver the promised business outcomes.

That number gets thrown around quite a bit when we talk about AI, either as a condemnation of the technology or an indictment of our ability to use it effectively. 

Beverly Wright has been watching this pattern repeat itself since before most of today's AI vendors existed. She started in decision sciences in the early 1990s, when there were three graduate programs in the country and maybe 20 graduates per year.


"We had a similar issue," she said at the Optimized AI Conference in Atlanta last week. "But now the issue is at scale and with automation. It's an even bigger problem."

Wright spent decades in data science roles at Southern Company and Cox Communications before moving into executive education at Georgia State University. Her diagnosis of why AI investments keep failing runs counter to what most technology vendors want to hear: the problem is organizational.

Sorry, vendors. This isn't your problem to fix.

The Real Problem Is Rarely the One You Started With

Wright opened her session with a story about elevators. Decades ago, building managers fielded complaints that elevators were too slow. The obvious fix was faster elevators. But the real problem, it turned out, wasn't speed. It was boredom. Forced to wait in silence, or worse, in the company of strangers, people grew uncomfortable, and that discomfort made the wait feel unnecessarily long.

So what solved it? Mirrors. That’s it. No engineering required.

That gap between the presenting problem and the actual problem is where most AI initiatives go sideways. Companies frame a business challenge, hand it to a data team, and expect the model to do the work of diagnosis. It doesn't. 

"Finding the actual problem is a lot harder than we think it is," Wright said.

Reaching for AI before defining what decision it needs to support is, in her view, the first of three reasons why 95% of initiatives return nothing.

The second is explainability. When a calculator gives you an answer, you can sanity-check it. You know the rough scale of what you asked. AI systems built on neural networks don't afford that kind of intuitive verification. 

This matters everywhere, but it's especially acute in regulated industries where human review is required by law. What Wright is observing in practice is that organizations check that box by using AI to evaluate AI. The human in the loop doesn't understand what the model is doing well enough to catch what's wrong.

The third failure mode is the one nobody wants to own, which is that the organization simply wasn't ready.

Wright distinguishes between technology enablement and cultural enablement, and finds that companies treat them as sequential rather than parallel. They build first, then ask whether people are prepared to use what was built. By that point, the answer is usually no, and the investment gets shelved.

She used tornadoes as a frame: only about 0.1% cause serious damage, but the fear of that fraction shapes how everyone responds to all of them. Change resistance works the same way. The perceived risk of adopting something new outweighs the demonstrated risk of staying put, even when the math runs the other direction. 

"There's usually more damage with not doing," she said. "They think they're being risk averse, but there's a higher risk of doing nothing."


There Should Never Be a Ta-Da

When Wright talks about what actually works, she says little about technology investment. What she describes instead is how organizations rebuild the relationship between insight and action.

Start with the decision, not the tool 

Wright is direct about the board pressure dynamic, where companies announce AI initiatives to move a stock price, and the announcement becomes the goal. 

"That's the wrong reason to take on an AI project," she said. "You have to start with an actual strategy that says, here's how we're going to solve problems, and we might or might not use AI."

Meet the company where it is

This cuts against the aspiration-first thinking that dominates most digital transformation conversations. Organizations build roadmaps toward a future state, then try to retrofit current operations into them. 

Wright's argument runs the other direction: understand what you actually have, and build AI solutions that benefit that reality.

Involve stakeholders from the beginning

"There should be no ta-da moment," she said. The reveal-at-the-end model, where a data team builds in isolation and presents a finished product, is a reliable path to rejection. 

By the time a tool reaches the people who have to use it, they've had no hand in shaping what it does. Resistance isn't irrational. It's the predictable result of exclusion.

The Governance Problem

Wright draws a distinction between two types of AI users now present in most organizations: those who came up through data and analytics, arriving at AI through accumulated technical fluency, and those coming to it from operational or functional backgrounds, often without the foundation to recognize when something looks wrong.

That gap is a governance problem, and it belongs to the people running the business, not the data team. If the humans tasked with reviewing AI outputs don't understand what they're reviewing, the review is theater. And theater, repeated across enough initiatives, is how a 5% success rate happens.

The fix Wright recommends is literacy, not immersion. Operations leaders don't need to understand neural networks. They need to understand what responsible use looks like, what the guardrails are, and why the guardrails exist. 

"It actually makes it impossible to screw things up," she said, describing how that framing tends to land with non-technical audiences. "And that's true for all of us."

Most organizations have spent a decade building data infrastructure. The gap between that investment and actual business decisions is organizational, and no platform upgrade closes it.

David Rice

David Rice is a longtime journalist and editor who specializes in covering human resources and leadership topics. His career has spanned a variety of industries for both print and digital publications in the United States and the UK.
