Key Takeaways

Perception Gap: There's a significant disconnect between executives and employees regarding understanding AI strategies.

Leadership Skills: Effective AI transformation requires specific leadership capabilities beyond technical implementation.

AI Success Factors: Empathy and understanding individual change processes enhance AI adoption success.

Strategic Patience: Many AI initiatives fail due to unrealistic timelines and insufficient support for capability building.

The playbook getting the most attention right now runs on a simple premise: make people afraid, and they'll work harder.

Layoffs framed as fiscal discipline. Return-to-office mandates positioned as productivity interventions. AI transformation announced before anyone understands what it means for their actual job.

It's generating headlines. What it's not generating is the transformation executives claim to want.

New research from BCG and Columbia Business School reveals a 51-percentage-point perception gap between executives and frontline employees on whether people understand AI strategy. Leaders think 80% of employees are well-informed. Only 29% of individual contributors agree. That's not a communication problem. It's a leadership problem.

AI transformation success has little to do with sounding tough or moving fast. It depends on seven specific leadership capabilities that have nothing to do with technology deployment and everything to do with how leaders show up when outcomes are uncertain.

Before Brian Elliott took the stage as emcee at Transform in March, he published on his Substack what he sees as four key capabilities for leaders in this era.

It resonated with me. But it also felt ripe for expansion.

What I'm going to do now is break down the capabilities as Brian did and expand on them with a few additions to the list. This is not a time for weak or toxic leadership. As he and I discussed on the podcast last year, tone-deaf leadership is proving costly in what might be the most confusing era of work in a century.

1. Empathy: Understanding How People Experience Change

GenAI excitement among U.S. employees dropped from 45% to 36% in just three months during 2024, according to Slack's research. The technology didn't get worse; the deployment approach did. Organizations rolled out tools without understanding how differently people adopt new capabilities.

When managers at BCG-studied companies adjusted their training approaches to account for individual differences in AI adoption, generative AI use increased 89%. That's not about being nice; it's about using cognitive understanding of how people experience change to design better approaches.

Employee-centric organizations have people who are 70% more likely to feel enthusiastic about AI adoption and 92% more likely to feel well-informed about strategy, according to BCG and Columbia research.

The connection to business outcomes is direct. These same organizations report AI maturity rates far higher than their peers. Employee centricity explained 36% of the variance in AI maturity—more than industry (14%), department (12%), or company size (5%) combined.

David Zierk, clinical psychologist and author of "Mind Rules," describes what he calls "connection deficit disorder"—the gap that opens when AI provides answers but leaders fail to provide context or support.

"The mind doesn't tolerate uncertainty," he explains. "We go seek certainty as quickly as possible, and AI provides relief. But anything that gives you relief has addiction potential. Leaders need to help people sit with ambiguity long enough to develop genuine understanding, not just grab the first answer AI offers."

The gap between "AI will augment, not replace" messaging and simultaneous restructuring in departments getting new tools has created exactly the distrust that undermines AI initiatives and kills honest feedback about whether implementations are working.

Before you go too far, want to test yourself on the seven capabilities? Finish the assessment and you can download a free, personalized report with development goals.

Note: This is not a scientific assessment; it's simply a thought exercise that could kickstart self-reflection or ideas for your own professional development journey. If you're looking for a more rigorous assessment, check out our story from last year on the use of personality assessments in hiring.

Join the People Managing People community for access to exclusive content, practical templates, member-only events, and weekly leadership insights—it’s free to join.


2. Presence: Getting Back to the Dance Floor

Amazon's return-to-office mandate came via memo. No conversation about how hybrid work was actually functioning. No engagement with teams who had redesigned entire workflows around distributed collaboration. Just a policy affecting daily life, delivered from distance.

That's the opposite of what BCG's 2025 AI at Work Report identifies as critical for successful transformation: leaders who understand how work actually happens. The report found that only 51% of frontline employees regularly use AI tools, compared to 75%+ of leaders and managers. That's a "silicon ceiling" created by distance from operational reality.

Elliott describes the pattern.

I had a board member who in a senior management meeting stands up and says, ‘You all need to get back in the office because I know that’s the best way to work. Based on my experience back in the 1980s, when I had that one lunch with the guy who turned out to be my mentor and coach.’ And it’s always a he, basing his experience on what worked best for him.

Brian Elliott

CEO of Work Forward

The same logic applies to AI with executives making decisions based on their own context, not operational reality.

Think About the How, Not Just the Why


Most organizations approach AI like ‘if we build it, they will come.’ But only about 40% of people are even using AI in their personal lives. You need to spend significant time and energy on how you lead this type of change.

You can't address "workslop"—AI-generated content that's technically complete but substantively useless—if you're not showing what good quality looks like. You can't demand transformation if you have no idea how the work is actually being done.

3. Product Thinking: Treating Work as Something You Design

MIT Sloan research found that 91% of data leaders say cultural challenges are blocking their AI efforts. Only 9% point to technology issues. Yet most companies are still treating AI as a technical problem, announcing efficiency targets before understanding what creates friction in actual workflows.

BCG research shows reducing toil, or what you might call soul-crushing repetitive work, drives higher odds of AI success and improves retention.

Zapier moved from 65% initial usage to 89% daily AI tool adoption by focusing on solving real problems rather than maximizing feature deployment. Their customer service team saw a 50% reduction in ticket handle time while employee engagement scores increased 20-30 points.

Brandon Sammut, Zapier's leader through this transformation, explained the shift.

We stopped asking ‘what can this AI tool do’ and started asking ‘what problems are creating toil for specific teams.’ The tools followed the problems, not the other way around.

Brandon Sammut

Chief People & AI Transformation Officer at Zapier

Organizations invest $142 billion annually in understanding customers but under $11 billion in employee experience, despite clear evidence linking engagement to business results. If you treated customers the way most companies treat employees during technology transitions, you'd be out of business.

The root problem is what Elliott calls "productivity theater" — optimizing for visual cues of activity rather than outcomes.

"Sixty-five percent of people polled said it's more important for them to respond quickly to a message than it is for them to focus on and deliver their core work," Elliott notes. "That's just sad, right? That's the visual cues of activity happening. Producing more of a thing isn't an outcome."

Elliott went on to describe the necessary shift. Leaders must abandon visual activity cues in favor of outcomes-based management. That means defining organizational goals, clarifying top priorities, establishing success metrics, and cascading that clarity throughout the company.

Done well, it creates a level playing field and delivers the results executives actually want. But it requires significant investment and represents a fundamental change in how organizations operate.

4. Courage: Having Spine When It's Risky

At a Salesforce all-hands meeting, leadership joked that ICE was waiting at the back of the room for international employees who stood up. The uproar that followed wasn't just about poor judgment; it was about the destruction of the psychological safety required for AI experimentation.

"The quickest way to deal with uncertainty is to judge," Zierk notes. "That closes the mind. The opposite of judgment is curiosity, and curiosity is what organizations need right now. But curiosity requires psychological safety, and fear-based leadership destroys that."

Courage in leadership means delivering difficult news personally with honest explanation of business drivers, not handing managers termination scripts. Especially when roles are being cut before AI has actually proven out, and conveniently before bonus checks clear.

The consensus around tough-talking leadership creates an opportunity for leaders willing to build organizations where people can do their best work through uncertainty. Not because it's kind, but because fear generates compliance while trust generates the intelligent risk-taking AI transformation requires.

5. Strategic Patience: Playing the Long Game

According to S&P Global research, 42% of companies abandon AI initiatives before they ever reach production, a number that's more than doubled from just a year ago. The pattern goes like this:

  • Announce aggressive efficiency targets to satisfy boards
  • Rush deployment to hit timelines
  • Discover the organizational capabilities don't exist
  • Quietly walk back expectations six months later

The term "reshape stage" has emerged to describe companies that are redesigning workflows end-to-end rather than just deploying tools. These orgs show 46% of employees worried about job security versus 34% at less-advanced companies.

That anxiety is the cost of genuine transformation. Strategic patience means protecting time for that anxiety to resolve through demonstrated support and capability building, not pretending transformation happens without friction.

Protect Experimentation


“The J-curve is real. You get worse before you get better when performance comes from fundamental redesign. Leaders who protect experimentation time and budget even under pressure see higher ultimate adoption and better business outcomes than those who rush to prove value before building capability.”

BCG found that 79% of employees who received more than five hours of AI training became regular users, compared to 67% of those who received less than five hours. Strategic patience is protecting that investment even when boards want immediate results.

6. Transparency: Honest About Uncertainty

The 51-point gap between what executives think employees understand and what employees actually understand doesn't come from insufficient communication—it comes from dishonest communication.

Leaders announce "AI will free you for higher-value work" without specificity about what that work is or whether it will exist at their current compensation level. They present AI strategies as fully formed plans when they're actually hypotheses being tested in real time. They claim certainty about AI's trajectory when senior leaders themselves are uncertain.

Steve Cadigan explains the dynamic.

For you to have trust, you got to build a body of consistent, reliable performance. And we just don’t have that quite yet with AI. We also have a massive narrative in the world of media right now, which is artificial intelligence is going to replace you. It’s going to take your job away. That level of fear and distrust is going to create some speed bumps around implementation.

Steve Cadigan

Former Head of HR for LinkedIn

The fear manifests in what Ethan Mollick calls "secret cyborgs"—people using AI tools but not telling their bosses, sometimes out of concern that admitting usage signals replaceability. That's the opposite of the transparent experimentation required for transformation; it's what we now know as shadow AI.

Johannes Sundlo, AI adoption expert, questions our relationship with what we know as knowledge.

When universities debate banning AI because students use it to cheat, they’re missing the real question: what is knowledge in an AI-enabled world? Leaders in organizations make the same mistake—they try to control the narrative instead of being honest about what they don’t know yet.

Johannes Sundlo

AI Strategy Advisor at Prorio AI

Transparency means separating what's certain (we're investing in AI, some roles will change) from what's uncertain (exactly which roles and how). It means sharing pilot results including failures. It means admitting "we're figuring this out together" rather than pretending to have answers you don't have.

Cadigan notes that a key element driving mistakes around AI is that it's "unlike any other technology we've seen before. We're used to being pretty clear on how this is going to apply. With AI, we don't even know a lot of what it's capable of doing and we're still learning that. The possibilities haven't fully been explored yet, but we're pretty certain there's competitive advantage here. So here we go."

That uncertainty makes traditional deployment playbooks obsolete and honest communication essential.

7. Systems Thinking: Understanding Ripple Effects

Most AI initiatives fail at the boundaries between teams. You implement an AI tool in customer service without considering impact on product development (different complaint patterns), marketing (messaging needs shift), or operations (staffing models change when AI handles routine issues but escalates complex ones).

BCG's research on AI governance shows that 52% of successful organizations now use cross-functional teams of business and technology leaders to drive strategy, up from just 5% a year prior. The shift acknowledges that AI decisions create people and process consequences that IT-led approaches can't address.

The root cause of siloed implementations, according to Cadigan, is how we did things in the past.

"Like off-the-shelf software," he says. "Plug it in, we get trained, here's how you use it. But AI requires experimentation, and that's not something we're typically used to seeing. We need to recognize that AI touches the jobs and career paths and work of all our employees in ways that require systems-level thinking."

Caldwell likens deploying general-purpose AI tools without education to handing employees a Swiss Army knife with 20 functions but only explaining three. Without understanding how their work connects to others' work, they optimize locally in ways that create system-level dysfunction.

Building Capability, Not Just Announcing Strategy

These seven capabilities—empathy, presence, product thinking, courage, strategic patience, transparency, and systems thinking—aren't soft skills. They're hard work requiring sustained practice.

The BCG and Columbia research shows companies with strong AI-focused change leadership see 3.2x higher profit margins than those using traditional IT-led approaches. Technology sophistication is wasted if leadership teams can't drive the cultural transformation AI demands.

We're in a moment where leaders and managers are more likely than frontline employees to worry about losing their jobs to AI in the next ten years. That anxiety creates pressure to move fast, sound confident, and demonstrate control. But the executive teams succeeding are doing the opposite: moving deliberately, admitting uncertainty, and building trust.

Create the Right Environment


“The leaders who look smart at the end of 2026 won’t be the ones who sounded toughest in 2025. They’ll be the ones who built organizations where people could do their best work through genuine uncertainty.”

The gap between where you are and where you need to be on these capabilities isn't fixed. It's a function of how deliberately you invest in building them. The question is whether you're willing to do the work while everyone else is generating headlines.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
