Key Takeaways

AI Adoption: Strong HR-IT partnerships led to 15x higher AI productivity, while 53% of organizations failed to achieve expected returns.

Workflow Integration: Embedding AI in existing tools resulted in higher adoption rates compared to standalone AI tools.

Capability Gaps: Training employees to effectively use AI proved crucial, highlighting the need for skill development.

Complexity Tax: Simplifying tech stacks and reducing reliance on consultants improved AI deployment and results.

Management Strategy: AI changed managerial roles, requiring explicit transition plans and training for effective adaptation.

The gap between AI deployment and AI adoption became the defining challenge of 2025. BCG research found that organizations with strong HR-IT partnerships achieved 15x higher productivity from AI investments than those without.

The technology worked. The organizations struggled. And by year-end, the data painted a clear picture: 53% of organizations failed to achieve their expected returns from AI investments.

The Workflow Integration Divide

The biggest separator between successful and unsuccessful transformations wasn't technology sophistication or budget size. It was whether AI lived inside existing workflows or outside them.

"The winners picked one or two real problems in existing workflows, not 'art of the possible,'" said Bhrugu Pange, AI & Digital Technology Solutions Managing Director at AArete. "They chose bottlenecks people already complained about like call wrap-up time, quote generation, responding to customer questions, invoice exceptions. Most had the 'swivel-chair' syndrome."

Companies that embedded AI into the tools employees already used—Salesforce, contact center platforms, document systems—saw adoption rates of 60-80%. Companies that launched standalone AI tools saw adoption plateau at 30-40%.

The pattern held across functions. Marketing teams showed 98% belief that AI would improve metrics, but only 27% reported wide adoption. The gap wasn't skepticism; it was friction.

"Successful firms didn't ask employees to open a separate window to access the AI," Pange explained. "They put it inside the tools people already use, extending existing platforms like Salesforce."

The unsuccessful ones built what Pange calls "off-context chatbots" and wondered why nobody used them. By late 2025, the lesson had hardened into a principle: if AI isn't where work happens, it won't stick.

The Capability Gap Nobody Expected

Early 2025 started with a widespread assumption: AI adoption would follow the pattern of previous enterprise software deployments. Buy the right tools, train people on the interface, measure adoption rates. By mid-year, that assumption had collapsed.

The capability gap showed up everywhere. Customer service teams had AI that could draft responses but couldn't evaluate which responses to send. Sales teams had AI that could summarize call transcripts but couldn't determine which insights mattered. Finance teams had AI that could generate forecasts but couldn't assess which assumptions to trust.

"Untrained workers were 6x more likely to say AI made them less productive," said Emily Mabie, Senior AI Automation Engineer at Zapier. "Teams that really want to achieve AI transformation need to invest in training."

Employees needed to learn what to delegate to AI, what to keep, how to verify AI outputs, and when to override AI recommendations. Those capabilities weren't taught in a two-hour training session.

Companies that succeeded treated AI adoption as skill development, not software deployment. They created apprenticeship models where experienced employees worked alongside AI for months, building judgment through repetition. They measured capability development, not just tool usage.


The Complexity Tax

While some companies struggled with adoption, others discovered they'd built the wrong foundation entirely. Freshworks' Cost of Complexity report found that 20% of organizations' software spend was wasted on failed implementations, underused tools, and hidden costs.

"Many organizations are running into complexity roadblocks that impede productivity, including legacy systems, disjointed systems, and siloed data," said Ashwin Ballal, CIO at Freshworks. "These challenges lead to employee fatigue, inefficiencies, and revenue leakage from delays and missed opportunities."

This resulted in employees losing an average of 6.7 hours per week navigating tool complexity instead of doing productive work, creating significant AI adoption barriers.

This created a vicious cycle. Companies brought in third-party vendors and consultants to integrate AI into their messy tech stacks, but these additions often created more friction than they removed.

Freshworks found that 12% of financial losses from software inefficiency came from unnecessary consultants and contractors.

"Businesses often bring in third-party vendors and consultants to fix legacy systems, but these additions can add friction rather than remove it," Ballal said. "Organizations often pay twice, once for complex technology, and again for consultants to make it work."

The mid-market companies that succeeded in 2025 took a different approach. Rather than adding AI on top of existing complexity, they used AI adoption as an opportunity to simplify. They consolidated redundant tools. They chose AI features built into platforms they already used instead of buying standalone products.

The lesson: you can't transform what you can't manage. Companies with simpler, more integrated tech stacks deployed AI faster and saw better results.

The Pilot Trap

The gap between pilot success and production deployment became one of 2025's most frustrating patterns.

"The actual outcomes of 2025 AI investments were far more uneven than headline adoption rates suggested," Mabie said. "Generative AI and pilot projects were everywhere at the beginning of the year, but among leaders we surveyed at Zapier, only 26% said that the majority of their AI pilots reached production."

The pattern repeated across industries. HR departments piloted AI-powered resume screening with excellent results but couldn't get hiring managers to use it. Operations teams piloted predictive maintenance AI that worked exactly as promised but couldn't expand beyond the initial plant.

Successful pilots operated under special conditions: extra support, leadership attention, motivated volunteers, protected time to learn. When companies tried to scale, those conditions disappeared.

"Successful efforts identified a member of the business to be accountable for the outcome," Pange said. "IT became not the driver or the owner, but the custodian of the AI implementation. Business owned the outcome."

Companies that successfully scaled did three things differently.

  1. They documented not just what the pilot team did but why it worked, mapping the organizational conditions that enabled success.
  2. They piloted with representative teams, including skeptical and under-resourced groups, not just volunteers.
  3. They measured organizational readiness, not just technical readiness.

The lesson: a successful pilot proves the technology works. It doesn't prove your organization can absorb it.

The Shadow AI Problem

By summer, mid-market executives faced employees using AI tools the company hadn't approved, hadn't secured, and often didn't know about. Employees uploaded customer data to ChatGPT, used consumer AI tools for sensitive business analysis, and created automated workflows with no security review.

The shadow AI problem revealed a deeper issue: companies were moving too slowly to meet legitimate employee needs. Employees weren't being reckless; they were being productive with the tools available to them.

Smart companies responded by accelerating approved AI deployment, not just tightening controls. They established rapid evaluation processes for employee-requested AI tools and created approved alternatives to common shadow AI use cases. They measured time-to-approval for AI tools and treated delays as organizational failures.

Shadow AI was a signal of unmet need, not a compliance problem to solve through policy. By year-end, the organizations managing it best had shifted from reactive prohibition to proactive enablement.

The Management Adaptation Crisis

When AI handles routine coordination work such as scheduling, status updates, and basic problem triage, the traditional management activities that filled 40-60% of a manager's day suddenly disappear. What's left is the work most managers never had time for: coaching, strategic thinking, complex problem-solving, and team development. Those demands expose leadership pipeline problems.

The best managers adapted quickly. The struggling managers felt displaced, questioned their value, and often resisted AI adoption because it threatened their identity.

This boils down to a management development challenge. Managers whose roles are changing need explicit transition plans and coaching on how to work differently; very few leaders have a clear vision for this, and almost none have mastered it. Organizations need to evaluate whether their AI strategy addresses these management transitions and redefine expectations to reward managers who successfully shift to higher-value work.

Some managers won't make the transition. Leaders can handle this in a couple of ways.

  • Create individual contributor career paths for managers who excel at work AI can't do but struggle with the new management model.
  • Provide intensive coaching, but be prepared to make the difficult decision that certain managers just aren't suited for the AI-era role.

What won't work is pretending the role hasn't changed. Organizations that deployed AI but kept management expectations the same ended up with managers doing busywork to fill time.

The Trust Architecture Breakthrough

The unexpected success story of 2025 was companies that built systematic trust in AI rather than hoping employees would just accept it.

ADP deployed their "5P Framework" across AI initiatives:

  • Purpose (why we're using AI here)
  • People (who's involved in decisions)
  • Process (how it works)
  • Performance (how we measure success)
  • Protection (what safeguards exist)

Organizations using structured trust frameworks saw 60-80% AI adoption rates compared to 30-40% for those relying on informal trust-building.

"Human-in-the-loop was ranked the top governance priority by 71% of leaders," Mabie said.

Trust doesn't emerge naturally from good technology. It requires intentional design. Companies need to answer specific questions before deployment such as:

  • Why are we using AI for this task?
  • Who has oversight?
  • How does the system make decisions?
  • What happens when it's wrong?
  • What protections exist?

Organizations that answered these questions explicitly—in documentation, in training, in ongoing communication—built trust faster and sustained it longer.

The Hidden Cost Cascade

Costs extend far beyond wasted software budgets.

One bad incident can stall adoption for months. Loss of trust became a hidden tax on future initiatives.

"AI can speed up the first draft, but if you don't redesign the review process, you create a new problem," Pange said. "Teams now spend time verifying and correcting. We call this 'verification tax.'"

Employee cynicism emerged as another hidden cost. When leadership oversold AI and under-invested in training and workflow redesign, employees resisted adoption. Worse, they used unsanctioned AI tools, creating the shadow AI problem that consumed security and compliance resources.

The St. Louis Fed's Real-Time Population Survey found that by August 2025, about half of U.S. adults reported using generative AI and over a third used it at work. The survey estimated time savings equivalent to about 1.6% of total work hours with potential productivity lift of up to 1.3% since ChatGPT's launch.

But those gains were distributed unevenly.

"While one company is stuck in pilot purgatory, another is quietly improving service levels, sales throughput or operational cycle time," Pange said. "And that advantage of incremental improvements and learnings starts to stack up."

The Measurement Maturity Shift

One of 2025's clearest evolutions was how companies measured AI success.

"Early in the year, success was framed around launching pilots and proving feasibility," Mabie said. "Slowly, leaders shifted to measuring AI fluency and ROI through business outcomes."

That most leaders were measuring ROI reflected a broader maturity. "How cool is it?" became "What's the KPI and where's the baseline?"

Moving from measuring adoption rates to measuring business impact means tracking not just whether employees use AI tools, but whether those tools improve decision quality, reduce cycle times, or free capacity for higher-value work.

The most sophisticated organizations measured both efficiency gains and capability development. They wanted to know if AI was making employees faster and better.

The Resource Allocation Wake-Up

Perhaps the most important learning from 2025: the 70-20-10 rule for AI investment proved accurate, and most companies were allocating backwards.

The rule, validated across multiple research studies: 70% of AI transformation investment should go to people and process change, 20% to infrastructure and integration, 10% to algorithms and models.

Most mid-market companies were spending 60-70% on technology and scrambling to fund the organizational change work.
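As a rough sketch, the gap between an actual AI budget and the 70-20-10 rule can be quantified with simple arithmetic. The budget figures below are hypothetical, chosen to match the technology-heavy allocation pattern described above.

```python
# Sketch: compare a hypothetical AI budget against the 70-20-10 rule.
RULE = {"people_and_process": 0.70, "infrastructure": 0.20, "models": 0.10}

def allocation_gap(budget: dict[str, float]) -> dict[str, float]:
    """Return each category's actual share minus its recommended share."""
    total = sum(budget.values())
    return {k: round(budget[k] / total - RULE[k], 2) for k in RULE}

# A technology-heavy budget like the ones described in the article
# (hypothetical figures): half the spend goes to models and algorithms.
budget = {"people_and_process": 300_000, "infrastructure": 200_000, "models": 500_000}
print(allocation_gap(budget))
# -> {'people_and_process': -0.4, 'infrastructure': 0.0, 'models': 0.4}
# People and process work is underfunded by 40 points; models are over by 40.
```

A check like this makes the mismatch concrete: the organizational change work that drives most of the value is the line item most likely to be starved.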

This means funding change management roles, protecting time for learning and adaptation, investing in training programs, communication strategies, and organizational support structures.

The companies that maintained technology-heavy investment saw their AI initiatives stall regardless of how sophisticated the technology was. Good algorithms with unprepared organizations underperformed mediocre algorithms with ready organizations every time.

What 2026 Requires

The lessons from 2025 point to several clear priorities for mid-market companies entering 2026.

  • Embed AI in existing workflows. The workflow integration divide will only widen. Stop launching standalone AI tools and start upgrading the platforms employees already use.
  • Invest in training before technology. Zapier's finding that untrained workers are 6x more likely to say AI makes them less productive should terrify every executive planning AI investments. The capability gap won't close through software licenses.
  • Simplify before adding complexity. Companies with fragmented tech stacks will struggle to deploy AI effectively. "Organizations that work with software leaders who aggressively subtract from the tech stack, integrate AI into their systems and replace tech sprawl with unified systems will gain a competitive advantage," Ballal said.
  • Build governance that enables rather than blocks. Human-in-the-loop governance ranked as the top priority for 71% of leaders. But governance that slows deployment to a crawl creates shadow AI. The goal is lean governance that makes it safe to move fast.
  • Scale orchestration capabilities. "More mid-market teams will adopt AI orchestration," Mabie said. "Because mid-market teams don't have the resources to completely rebuild platforms and teams, they'll use and connect their existing tools with AI orchestration."

Zapier's data shows 25% of leaders expect to reach full-scale AI orchestration by 2026, while 43% anticipate operating agentic workflows across functions. With that in mind, it's a good idea to:

  • Redesign management roles explicitly. AI changes what managers do. Acknowledge that change, support the transition, and accept that some managers won't make it.
  • Rebuild career frameworks for the AI era. Traditional promotion paths are breaking. Companies need new models for how people progress when AI handles work that used to be developmental.
  • Engineer trust systematically. Trust frameworks work. Informal trust-building doesn't. Answer the five Ps explicitly for every AI deployment: Purpose, People, Process, Performance, Protection.
  • Reallocate budgets to match the 70-20-10 rule. Most AI value comes from organizational change. Most AI budgets go to technology. Fix that mismatch or accept lower returns.
  • Measure business outcomes, not technology metrics. Track decision quality, cycle time reduction, and capability development—not just system utilization.

"Some trends that are likely to fade are one-off experiments, developer-only automation models, and 'AI as a feature' thinking," Mabie said.

The divide between leaders and laggards isn't about who has AI. It's about who learned from deploying it. That learning gap will widen as 2026 AI decisions determine which companies catch up and which fall further behind.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
