Key Takeaways

AI Adoption: Less than 30% of organizations using AI see meaningful results. Decision-making redesign is crucial.

Productivity Theater: AI improves speed but not decision-making. Organizations often fall into the 'humans-in-the-loop' trap.

Organizational Design: AI requires changing decision structures, not just implementing tools, to improve processes and results.

Behavioral Approach: Human response and process integrity are key in AI adoption, beyond just deploying technology.

Leadership Shift: AI demands adaptive leadership beyond traditional frameworks, focusing on guidance rather than control.

Fewer than 30% of organizations trying to implement AI are generating meaningful returns from it, a figure that has held stubbornly in place even as tool adoption has accelerated.

Paul Gibbons, a consultant and the author of "Adopting AI: The People-first Approach", has spent three decades watching technology implementations stumble for the same underlying reason.

The barriers are organizational. Silos, culture, mindset, skills.

Paul Gibbons

Board Advisor, Human Capital and AI Adoption

What tends not to appear in post-mortems is this: the organization never redesigned how it made decisions.


That observation sounds abstract until you look at what failure actually looks like from the inside. Jurgen Appelo, author of "Human Robot Agent" and founder of the Solo Chief newsletter, calls the most common pattern productivity theater.

Executives mandate AI tool deployments. Teams learn to use them; summaries multiply, and note-takers populate every meeting. Then, almost imperceptibly, the organization realizes that everything is faster but nothing has fundamentally changed.

Companies are sprinkling AI on top of broken workflows and it shows.

Jurgen Appelo

Founder of the unFIX Company

The decision flows running underneath those workflows (who owns a call, what data informs it, where human judgment is required and where it's just habit) have remained entirely intact.

Appelo calls this the humans-in-the-loop trap: every AI interaction still starts and ends with a human, the systems never talk to each other, and everything moves at the speed of the slowest bottleneck. In other words, the tools got smarter, but the architecture didn't change.

What does it look like when it does?

When the Tool Performs and the Organization Doesn't

Carla Catelan has run talent acquisition operations at scale for more than two decades, leading teams through hundreds of hires annually at companies including Thoughtworks, Cognizant, and Hewlett-Packard. When she talks about AI in hiring, she does not talk about it as a tool. She talks about it as a design problem.

Leadership in an AI-first era is about designing decision architectures, creating systems where people, data, and AI collaborate to produce better outcomes.

Carla Catelan

Head of Talent Acquisition Americas at Thoughtworks

That framing came from a hard-won observation.

When Thoughtworks began applying predictive analytics to its hiring funnel, Catelan's team discovered something unexpected. Cycle time, the number of days from opening a role to filling it, was already strong, consistently in the 30-to-35-day range. Speed was not the problem. So the team went looking for what was.

What they found was that variability in role definitions was driving worse hiring outcomes more reliably than candidate quality. Job descriptions that were ambiguous, inflated, or inconsistently structured were distorting the AI's learning behavior upstream, producing lower conversion rates, higher late-stage rejections, and weaker early performance regardless of who was in the candidate pool.

The Cost of Ambiguity

“Before working with predictive models, I underestimated how much ambiguous, inconsistent, or inflated job descriptions could distort hiring outcomes and AI system learning behavior,” Catelan says. “With AI, job design and job description quality determine every downstream decision.”

The fix was not a new model. Thoughtworks rebuilt its job descriptions as what Catelan calls "predictive artifacts," stripping out requirements that correlated with noise rather than success and concentrating the signal going into the model. The AI improved because the decision inputs improved.

That upstream logic (the quality of what enters the decision architecture determines the quality of what comes out) runs through every application Catelan has built.

AI is deployed to surface patterns, aggregate longitudinal data, and structure quantitative signals. Humans retain ownership of qualitative judgment and final decisions. The boundary between the two is explicit, not assumed.

Pilots Don't Scale. Architecture Does.

The most common failure Catelan sees in organizations trying to replicate this is treating AI as a series of pilots rather than a fundamental change to how work gets decided.

"Most companies invest heavily in AI tools and expect improvements," she says. "But they keep the same incentives, the same hierarchical structures, and the same low data literacy across leadership. The result is that AI becomes either underused or misused: powerful models producing insights that are not trusted, not understood, or not acted upon."

Thoughtworks addressed this by establishing a cross-functional working group dedicated specifically to AI-enabled decision design in talent acquisition.

The group operates against explicit business-level targets:

  • Increase conversion rates across the hiring funnel by 50% through predictive segmentation
  • Reduce interview hours by 30% by eliminating low-probability candidates earlier in the process

Those numbers are not aspirational. They are the mandate.

The sequencing matters as much as the targets. Before any automation is introduced, the group maps where critical decisions are made, who owns them, what data is required, and where human judgment must remain central.

Only after that architecture is clear does AI enter the picture, augmenting specific steps rather than being layered on top of unexamined processes.

"We redesign decision workflows before introducing automation," Catelan says. "Not after."


The 90% Most Rollouts Skip

Yannick Fouagou, Director of People Operations and Solutions at GreenShield, arrived at a similar conclusion from a different direction. His background is in electronics engineering and oil and gas quality systems, environments where process integrity is non-negotiable and failure carries physical consequences. When he moved into people operations, he brought that same rigor to the human side of AI adoption.

What surprised him was how little the technology mattered relative to the human response to it.

AI adoption is 10% technology and 90% human psychology. I initially assumed everyone would be equally eager to adopt these tools, but I quickly realized that AI requires the same legacy change management we’ve used for decades. If you skip the human work of addressing fear, resistance, and the change curve, even the most advanced technology will still fail.

Yannick Fouagou

Director of People Operations at GreenShield

For Fouagou, decision architecture is not purely an organizational design question. It is a behavioral one. His approach segments stakeholders by where they sit on the adoption curve, meets them there, and then codifies the new way of working into formal processes and documentation so the shift becomes structural rather than a temporary enthusiasm spike.

"We don't just roll out tools," he says. "We codify the new way of working into SOPs and policies to ensure accountability."

That codification creates a feedback loop. The system monitors what employees are asking, identifies where documentation is missing, and generates recommendations for filling the gaps. The knowledge base learns from the patterns of the people using it. This is what decision architecture looks like when the behavioral layer is built in rather than bolted on.
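A minimal sketch of what such a feedback loop could look like, assuming a simple query log rather than GreenShield's actual system (the log schema, topic labels, and function name here are all illustrative):

```python
from collections import Counter

def documentation_gaps(query_log, top_n=3):
    """Given a log of (topic, was_answered) pairs, return the topics
    employees most often ask about without getting an answer --
    candidates for new SOPs or documentation.

    The (topic, was_answered) format is a hypothetical schema,
    not any real system's data model.
    """
    unanswered = Counter(
        topic for topic, answered in query_log if not answered
    )
    return [topic for topic, _ in unanswered.most_common(top_n)]
```

The design choice is the same one Fouagou describes: the knowledge base does not just answer questions, it treats the pattern of unanswered questions as a signal about where the documented process is incomplete.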

The Loop Leaders Need to Get Out Of

The practitioners working through this at the function level have room to redesign. The harder problem is at the top.

Gibbons has run AI literacy programs for executive teams and found the results sobering. When he asked a group of twelve leaders how much formal education they had completed in the previous year, the total was three days between all of them.

"That just won't cut it," he says.

The issue is structural more than individual. Success, Gibbons argues, has made most senior leaders resistant to the kind of learning that AI adoption requires. Their authority is built on knowing things. AI destabilizes that.

Appelo frames the leadership challenge differently but arrives at the same place. When AI outperforms humans on most analytical tasks, leadership stops being about having the answers.

You Don't Have to Know The Answers

“Leadership now is about knowing which questions to ask, which constraints to set, and when to intervene,” he says. “That’s harder than knowing the answers.”

Most executives were trained on management frameworks built for conditions of relative predictability. Appelo points specifically to Transformational Leadership, a model that dates to the 1970s and assumes a leader who controls information flow, approves decisions, and sits at the center of the loop.

That model is precisely what AI adoption requires dismantling. Managers in the loop become bottlenecks. Managers above the loop, directing, adjusting, and holding judgment on the genuinely hard calls, are what the new architecture requires.

Gibbons calls the equivalent shift Adaptive Leadership. Built for pace, for emergence, and for operating without knowing what the endpoint looks like.

"We don't know what shape pilots will take or how easily they will scale," Gibbons says. "Old leadership paradigms were built for a predictability that doesn't exist anymore."

Building a Decision Architecture for AI

The practitioners above are not working from a shared methodology, but their approaches follow a consistent logic. Here is how to apply it.

Start with the decision, not the tool

Before selecting any AI application, map every significant decision in the workflow you're targeting. Who owns each decision? What data informs it? Where is human judgment genuinely required, and where is it just habit?

Catelan's team did this work across talent acquisition before introducing predictive models. The mapping revealed that the biggest source of downstream error was not the selection process but the job descriptions feeding into it.

Audit the inputs

AI learns from what you feed it. If the data entering a model is inconsistent, inflated, or poorly structured, the outputs will reflect that. Catelan's redesign of job descriptions improved model performance without touching the model itself.

Before deploying AI on any workflow, examine the quality and consistency of the inputs that workflow depends on. Garbage in scales faster now.
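As a concrete illustration of what an input audit might look like in practice, a script could flag job descriptions whose structure diverges from the rest of the set before they feed a model. The field names and thresholds below are hypothetical, not Thoughtworks' actual pipeline:

```python
# Hypothetical input audit: flag job descriptions that are missing
# structural fields or carry inflated requirement lists relative to
# the rest of the set. Illustrative only -- not a real pipeline.

REQUIRED_FIELDS = {"title", "responsibilities", "requirements"}

def audit_job_descriptions(descriptions):
    """Return a list of (index, issues) for descriptions that look
    inconsistent enough to distort a model trained on them."""
    flagged = []
    req_counts = [len(d.get("requirements", [])) for d in descriptions]
    avg = sum(req_counts) / len(req_counts) if req_counts else 0
    for i, d in enumerate(descriptions):
        issues = []
        missing = REQUIRED_FIELDS - d.keys()
        if missing:
            issues.append(f"missing fields: {sorted(missing)}")
        # "Inflated" here means far more requirements than the set's
        # average -- a crude proxy for the noise Catelan describes.
        if avg and len(d.get("requirements", [])) > 2 * avg:
            issues.append("requirement list over twice the average length")
        if issues:
            flagged.append((i, issues))
    return flagged
```

Nothing in this sketch touches a model; like the Thoughtworks redesign, it improves outcomes by cleaning what goes in.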

Set business-level targets, not process metrics

Vague goals produce vague results. Catelan's working group operates against specific numbers: a 50% increase in funnel conversion rates, a 30% reduction in interview hours.

Those targets create pressure for sustained, system-level performance rather than isolated experimentation. If you cannot articulate what a successfully redesigned AI workflow produces in business terms, the design is not ready.

Define the human-AI boundary explicitly

Every redesign requires clear answers to two questions: what does AI own, and what do humans own? Catelan's principle is that AI structures quantitative signals and surfaces insights while humans retain qualitative judgment and final decisions.

That boundary should be documented, shared, and revisited as the system matures. Without it, accountability becomes diffuse and errors become untraceable.

Embed governance before you scale

Build human override capability, bias and drift monitoring, and clear accountability for model outcomes into the architecture from the start. Fouagou codifies changes into policy rather than leaving them as informal practice.

Governance added after the fact tends to arrive after something has already gone wrong.

Treat adoption as a behavioral problem

Fouagou's 10/90 observation is the layer most organizations skip entirely. Technical deployment and human adoption are not the same event.

Segment your stakeholders. Identify where resistance is likely and address it before the rollout, not during. Measure not just efficiency gains but whether the time recovered is actually being redirected to higher-value work.

There is a gap most organizations have not yet reckoned with. It sits between the tools they have deployed and the decision logic they have left unchanged.

Catelan's framing of AI-readiness is worth sitting with.

"It means consistently asking better questions," she says. "What signals matter, where uncertainty exists, and how data can inform human judgment."

The tools surfaced the question. The design work is what answers it.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
