Key Takeaways

  • Invisible AI: Workers use personal chatbots without approval, forming an unrecognized AI stack in organizations.
  • Design Limitation: AI should transform work structures, not just layer onto existing ones, for effective adoption.
  • Adoption Challenges: Past IT control doesn't apply; AI tools require minimal training because people are already familiar with them.
  • Resistance to Tech: Organizations struggle with AI implementation, focusing on technology rather than workflow transformation.
  • Urgency in AI: Engage with how teams use AI now to address constraints, avoid business disruption, and foster innovation.

Executives across industries make the same mistake with AI adoption.

They treat it like a procurement decision. Buy this platform. Pilot that agent. Add this feature to the stack.

The problem runs deeper than tool selection. According to MIT's State of AI in Business 2025 report, workers at more than 90% of companies are using personal chatbot accounts for daily tasks, often without IT approval, while only 40% of companies have official LLM subscriptions. Your employees have already built the foundation of your AI stack. They just did it without asking permission.

This creates a strange reality: the technology most leaders are still planning to implement is already running in their organizations. By the time you've formed a committee to evaluate AI tools, your team has already picked their favorites and embedded them in daily workflows.

The Architecture Problem

When you approach AI as a procurement decision, you miss the redesign opportunity.

Gabriela Mauch, Chief Customer Officer and Head of Productivity at ActivTrak, sees this pattern repeatedly.

Teams can envision transformative AI use cases, but they're blocked by organizational infrastructure that wasn't designed for AI-enabled work. If leadership is only measuring tool usage, they can't see these invisible ceilings.

The result is that AI gets layered onto existing org structures rather than transforming how work gets done. Skills atrophy instead of evolving.

The alternative requires finding lanes for AI and lanes for humans. AI should handle analysis, draft reports, and manage mundane tasks. It should inform humans and escalate the questions it can't contextualize. The final decision rests with someone who can be held accountable.

Accountability matters because AI won't lose its job. In February 2024, Air Canada learned this lesson when a tribunal ruled the airline responsible for incorrect information its chatbot had given a customer about bereavement fares. Any reputational damage to the AI was temporary. The company's legal team carried the real consequences.

That's the design constraint most AI conversations avoid.

Why Experienced Executives Are Stumbling

Leaders who successfully managed cloud migration and ERP implementations are making rookie mistakes with AI. Three factors make this wave different.

  • The pace is unprecedented. Gartner predicts 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025. That's an 8x increase in one year. By the time you understand one development, the next wave offers something radically different.
  • The ethics piece is complex. Every implementation requires questioning: Can AI do this as well as humans? What would it take to get there? Should humans remain in control?
  • The adoption pattern is inverted. Previous workplace tech required training, certification, controlled rollouts. AI tools require minimal specialization because people already use them.

That's the fundamental challenge. In previous tech waves, IT controlled the rollout. Now the bottom layer of your stack is already in motion and largely invisible.


The Gap Between Announcement and Adoption

A common problem is the belief that once a new strategy is announced, people will instantly know how to work differently.

"We see this a lot with AI," says Iris Cremers, CHRO at GoodHabitz. "Once leadership says 'we're doing AI,' the assumption is that everyone will suddenly work smarter and more efficiently. In reality, people and teams need time, clear guidance, and hands-on learning."

New technology can feel complex or threatening without proper support. GoodHabitz designed a company-wide training program with their Goodlearn AI learning platform to give people a simple and safe way to build confidence and skills.

"It helped reduce anxiety and replaced it with curiosity and excitement as we introduced these new tools into our day-to-day work," Cremers said.

Sharon Steiner, CHRO at Fiverr, frames it differently.

"The biggest disconnect I see is that leadership often approaches AI as a technology or efficiency strategy, while teams experience it as a fundamental change to how work gets done," she said. "Executives ask, 'Which tools should we adopt?' but employees are asking, 'How does this change my role, my skills, and how I'm evaluated?'"

What's really happening at the team level is experimentation. People are already using AI to move faster, learn new skills, and solve problems in real time.

When Technology Decisions Ignore Organizational Reality

Mauch shared an example of a financial services customer that rolled out enterprise AI with standard technology approaches: tool access, training, use case libraries, and champions. Six months later, adoption plateaued and ROI wasn't materializing despite 70% active usage.

The firm wanted to use AI to automatically triage customer inquiries by complexity and route them appropriately, which would represent genuine workflow transformation. They couldn't because:

  • IT hadn't prioritized the API work to access customer data systems
  • Manager approval workflows hadn't changed to let AI make routing decisions
  • QA processes were still designed for human-only work
  • Team coordination patterns were still structured around individual task assignments

"Leadership kept treating these as 'change management details to work out later,'" Mauch said. "Meanwhile, they invested in more technology like advanced models, more tools, more integrations. But the constraint wasn't tech-based."

Employees showed up in adoption metrics as "active AI users" because they were still using AI for basic tasks, but they'd given up on the high-value use cases that would actually drive ROI.

The Long View Nobody's Taking

When I talk about taking the long view on AI, I mean two specific things.

Personnel changes

A professional services firm identified their top AI performers—consultants producing three times normal output—and rewarded them with promotions, bonuses, and all-hands presentations. Classic talent recognition.

Three months later, team-level AI adoption had actually declined in those superstars' groups.

"The company rewarded individual productivity gains, so the high performers hoarded their AI techniques rather than shared them," Mauch said. "Why help teammates adopt AI when that just creates more competition for the next promotion? The company missed an opportunity to create AI mentors."

They restructured incentives around team adoption rather than individual productivity. Bonuses were tied to helping teammates advance their AI skills. Promotions required demonstrating that you'd coached others.

With this shift, high performers started actively teaching and team-level adoption accelerated. Individual superstars are great, but only if their knowledge compounds across the organization rather than staying locked in as an individual competitive advantage.

Organizational evolution

Leaders love a five-year plan, but we have no idea what the world will look like at the end of this year.

Build flexibility into your planning; your talent profile and how you're developing people are what keep you adaptable. The alternative is overindexing on AI as the central component of strategy, then realizing you need people to make it work, then going into a talent market where the specific skills you need are hard to find.

Plan for two years and see what happens. The rate of change is too high for longer horizons.

How to Build When the Foundation Is Already Moving

You have to communicate.

"Stop treating adoption as binary and start diagnosing impact," Mauch said. "Most organizations have plenty of basic usage, but almost no transformative usage, which is where value materializes."

Most executives track logins, queries, and seat licenses rather than behavioral transformation. True ROI happens when AI reshapes workflows, which requires understanding if the technology has changed actual work behavior, not simply whether AI tool usage increased.

The analyst who uses AI to draft report sections hasn't transformed their workflow. The analyst who redesigned their entire analytical process around AI's continuous monitoring capabilities has.

Steiner emphasizes the learning conversation.

"Ask your teams how they're using AI today, where they want to build skills, and what support they need to grow," she said. "Give people room to experiment, upskill, and build confidence with these tools."

While you're analyzing restructuring options, some of your people are trying to learn. They're acquiring new skills without knowing if they're the right ones. You can steer them in the direction you need and help them build capabilities the business requires.

In the process, you'll understand the tools they're using, where they're extracting value, and where the organization can better support what they're doing.

This keeps you ahead of what's changing and helps you stay in control of the ethics questions. You become more confident in your own direction rather than getting sucked into the hype.

And instead of "we're behind, we have to do this to keep up," you operate from "does that shiny new thing help us cultivate the vision we have for where we want to be in two years?"

The shift from reactive to intentional requires rethinking governance entirely.

Governance as Alignment, Not Policing

When you discover an employee built their own AI workflow, frame it as collaboration.

"Treat shadow AI as market research, not a compliance problem," Mauch said. "Your employees who've adopted unapproved tools are telling you they've found solutions to real problems that your official tools don't address."


Cremers and her team at GoodHabitz took this approach when they realized employees were already using external AI tools.

"Instead of blocking everything, we chose a more open and supportive approach," she said. "We invited anyone who was already using an external tool to share it with us. We then checked if it could be used safely. If it passed the safety check, we made it available to the whole organization so others could benefit too."

This built trust, reduced risks, and showed that the goal was to help people work smarter, not to police them. And as you might expect, some employees had discovered useful tools the company wasn't aware of yet.

"Goodlearn also helped by giving everyone a basic understanding of how to handle information safely and responsibly, and how to recognize which tools could be trusted," Cremers said.

Mauch recommends a three-part framework:

Understand before you standardize

Map what's actually being used and why, not to punish it, but to learn from it. In one case, ActivTrak discovered a product team using an unapproved AI coding tool four to five hours daily.

Rather than shut it down, leadership asked why. The team had found specific capabilities their "official" tool lacked, and they'd organically developed sophisticated workflows no one else had achieved.

Differentiate risk levels

Not all shadow AI carries equal risk. The analyst using AI to brainstorm presentation structures isn't the same risk as someone processing customer PII through an unapproved tool. Create simple criteria:

  • What data is being processed?
  • What decisions can the AI output drive?
  • How mature is the user's judgment about appropriate use?

Let low-risk, high-value usage continue while you build proper guardrails.

Co-create governance with users

Bring shadow AI users into policy design. They understand the practical tradeoffs between security and utility better than anyone. Ask them:

  • What would you lose if we shut this down?
  • What concerns do you have?
  • What guardrails would you want?

This surfaces solutions like sandboxed environments where teams can experiment with unapproved tools on synthetic data before getting production approval.

"The reality is that AI is already part of how work gets done, and trying to shut that down puts companies behind, not in control," Steiner said. "Governance still matters, but in an AI-first environment it should enable progress, not slow it down."

Making these processes work requires embedding them in systems people already use. Can it be a form in your internal documentation hub? Can designated approvers rubber stamp ideas within a set timeframe in Slack? Making approval native to existing workflows reduces friction while helping people vet ideas for clarity, goal alignment, and business impact.

You have to reward transparency from employees who are experimenting. If someone shares something that isn't a good use of the technology, it can't be a punishment.

What "Too Late" Actually Looks Like

You haven't got a lot of time.

Too late looks like launching features your competitors shipped six months ago. It looks like your best people frustrated and eyeing the exit. It looks like managers feeling like confused intermediaries between strategy and execution.

Most of all, it looks like an inability to implement AI as a solution to real problems within your organization.

Some companies are already feeling this. That's why they're resorting to layoffs and short-term gains. They don't have a clear vision for long-term payoff, so they're stripping the business down while they figure it out.

Research from Reco AI found that two shadow AI tools had median usage durations of 403 and 401 days, respectively. That's well over a year of continuous use without formal approval or oversight. After 100 days of continuous use, an AI tool is no longer a trial. It's embedded in core business processes. Trying to remove it isn't just an IT task; it's a potential business disruption.

That's what too late looks like. The inability to govern what's already been adopted.

The First Move

Steiner has clear advice for executives who feel behind.

"The real question isn't whether to start using AI, it's when and figuring out how, and the answer is now and with experimentation," she said. "This isn't a moment to sit out and observe. AI is very quickly reshaping how work happens."

Start the conversation about how your teams are working with AI. What tools they're using. Where they're finding value. What they wish they could do but can't.

A question as simple as "what would you do with AI if organizational constraints weren't in the way?" can surface invisible barriers.

"We consistently see people who envision transformative applications, but are blocked by data access, approval workflows, or governance policies designed for beginner risks," Mauch said.

That conversation reveals where your invisible foundation already exists. It shows you which experiments are worth scaling and which need to be redirected. It helps you understand whether you're building on solid ground or technical debt.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
