Key Takeaways

AI Utilization: Most companies use AI but struggle to achieve significant returns beyond experimental phases.

AI Fraud: AI-generated fraudulent activities rose sharply, demanding advanced anomaly detection systems over manual inspections.

Governance Challenges: Emerging AI tools necessitate employee-level governance to manage risks and ensure effective implementation.

The gap between companies experimenting with AI and companies getting value from AI became the defining story of 2025. McKinsey found that 88% of organizations use AI in at least one function, but most remain stuck in pilot purgatory, burning budget without generating returns.

Meanwhile, the technology itself crossed a threshold. AI moved from "useful but requires constant babysitting" to "reliable enough to delegate actual work." The question shifted from "can AI do this?" to "how do I make sure it does this correctly?"

Here are the 12 developments from 2025 that matter for how you work in 2026 and what you need to do about them.

1. Deep Research Became Production-Ready

ChatGPT's Deep Research mode and Claude's equivalent capabilities moved from experimental features to legitimate workflow tools. These systems conduct multi-step research across hundreds of sources, generating comprehensive reports with citations in minutes instead of hours.

What changed: Junior analyst work for vendor research, market analysis, and competitive intelligence can now be delegated to AI. But output validation remains critical as these tools can hallucinate sources even while citing them.

What to do in 2026: Use deep research for initial exploration that gets verified before decisions. Stop assigning junior staff to compile information. Start assigning them to validate and analyze what AI compiles.

2. AI Agents Launched (And Immediately Raised Governance Questions)

OpenAI's Operator and Anthropic's Claude Code marked a fundamental shift: AI moved from advisor to actor. These agents go beyond suggesting actions and start executing them. They browse websites, write code, make purchases, and click buttons autonomously.

What changed: When AI can take action in your systems, you need employee-level governance: access controls, approval workflows, audit logging, spending limits.

What to do in 2026: Treat agents like new hires, not new software. Before deployment, define what they can access, what they can spend, and who can override them. The productivity gains are real, but so is the risk if you skip this step. Establishing AI compliance frameworks becomes essential for managing these risks effectively.
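As a minimal sketch of what "employee-level governance" could look like in code, an agent's actions might pass through a gate that enforces a tool allowlist, a spending cap, and an audit log. All names, limits, and structures here are hypothetical illustrations, not any vendor's API:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AgentPolicy:
    """Employee-style controls for an AI agent (illustrative fields)."""
    allowed_tools: set          # tools the agent may invoke
    spend_limit_usd: float      # hard cap on cumulative spend
    approver: str               # human who handles escalations

@dataclass
class AgentGate:
    policy: AgentPolicy
    spent_usd: float = 0.0
    audit_log: list = field(default_factory=list)

    def authorize(self, tool: str, cost_usd: float = 0.0) -> bool:
        """Approve or deny one action, recording it either way."""
        ok = (tool in self.policy.allowed_tools
              and self.spent_usd + cost_usd <= self.policy.spend_limit_usd)
        self.audit_log.append({
            "ts": datetime.now().isoformat(),
            "tool": tool,
            "cost_usd": cost_usd,
            "approved": ok,
            "escalate_to": None if ok else self.policy.approver,
        })
        if ok:
            self.spent_usd += cost_usd
        return ok

gate = AgentGate(AgentPolicy(
    allowed_tools={"web_search", "purchase"},
    spend_limit_usd=100.0,
    approver="ops-lead",
))
gate.authorize("purchase", 30.0)   # within allowlist and budget: approved
gate.authorize("send_wire", 10.0)  # not on allowlist: denied, escalated
```

The point is not this particular implementation; it's that every agent action is checked against explicit permissions and leaves an audit trail, exactly as you'd expect for a new hire with spending authority.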

“Ultimately, closing the gap between executive expectation and real-world execution isn’t about the technology itself,” says HR and AI consultant Reyhaneh Khalilpour. “It’s about aligning people, processes, and governance so that AI can actually deliver on its promise without creating new bottlenecks or risks.”

3. API Pricing Collapsed 60-80%

AI got dramatically cheaper in 2025. Claude's pricing dropped 67% from Opus 4.1 to Opus 4.5. GPT-4o pricing fell to a fraction of GPT-4's cost. DeepSeek proved that "expensive equals better" isn't always true.

What changed: Your AI budget projections. The cost-per-query economics shifted overnight.

What to do in 2026: Recalculate your AI costs based on current pricing. You'll likely find room to expand use cases or cut expenses. Renegotiate contracts with AI vendors—the market has moved against them.
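Recalculating is simple arithmetic once you know your token volumes. The sketch below uses hypothetical per-million-token prices purely for illustration; substitute your vendor's current price sheet:

```python
# Hypothetical $/1M-token prices (check vendors' current price sheets).
OLD_PRICE = {"input": 15.00, "output": 75.00}  # legacy model pricing
NEW_PRICE = {"input": 5.00,  "output": 25.00}  # current model pricing

def monthly_cost(price: dict, in_tokens_m: float, out_tokens_m: float) -> float:
    """Cost of one month's traffic, volumes given in millions of tokens."""
    return price["input"] * in_tokens_m + price["output"] * out_tokens_m

in_m, out_m = 120.0, 40.0  # example monthly traffic
old = monthly_cost(OLD_PRICE, in_m, out_m)  # 120*15 + 40*75 = 4800.0
new = monthly_cost(NEW_PRICE, in_m, out_m)  # 120*5  + 40*25 = 1600.0
savings_pct = 100 * (old - new) / old       # roughly 67% cheaper
```

Running this kind of back-of-envelope math per use case is usually enough to identify which contracts are worth renegotiating first.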


4. The Deloitte Disaster Set a Precedent

In July 2025, Deloitte Australia delivered a $290,000 report to the Australian government filled with AI-generated fabrications: fake academic references, non-existent court cases, incorrect quotes from federal judges. The firm was forced to refund the money and rewrite the report.

What changed: If Big Four firms with quality controls can't consistently catch AI errors, your team won't either without validation protocols. Professional liability doesn't diminish just because AI assisted.

What to do in 2026: Every AI-generated output used in decisions needs human verification. "AI-assisted" doesn't reduce your responsibility for accuracy; it just changes your workflow. Build validation checkpoints before AI outputs reach clients or executives.
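One concrete checkpoint, given that the Deloitte failures were fabricated references, is to extract every citation from a draft and hold any that a human hasn't verified. This is a toy sketch assuming a simple "(Author Year)" citation style and a hypothetical reviewer-maintained registry:

```python
import re

# Hypothetical registry of sources a human reviewer has already verified.
VERIFIED_SOURCES = {"Smith 2023", "OECD 2024"}

def flag_unverified_citations(draft: str) -> list:
    """Return citations in the draft that no human has verified yet.
    Assumes a simple '(Author Year)' citation style."""
    cited = re.findall(r"\(([A-Z][A-Za-z]+ \d{4})\)", draft)
    return [c for c in cited if c not in VERIFIED_SOURCES]

draft = "Turnover fell 12% (Smith 2023), consistent with (Jones 2025)."
flag_unverified_citations(draft)  # ['Jones 2025'] — hold for human review
```

The draft doesn't ship until the flagged list is empty; the check is trivial, but it forces the human verification step into the workflow rather than leaving it optional.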

5. AI Fraud Exploded in Finance Operations

AI-generated fake invoices and receipts surged in 2025. AppZen reported that fake AI receipts accounted for 14% of fraudulent documents by September, up from 0% in 2024. Ramp's software flagged over $1M in fraudulent invoices within 90 days.

What changed: Fraudsters are using the same AI tools to create convincing fakes. Visual inspection of receipts and invoices is now obsolete.

What to do in 2026: Upgrade to anomaly detection systems that analyze spending patterns, vendor relationships, and behavioral signals, not just document images.
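To make "anomaly detection on spending patterns" concrete, here is a deliberately simplified z-score sketch that flags payments far outside a vendor's history. Real systems layer in vendor-relationship and behavioral signals; the data and threshold here are invented for illustration:

```python
from statistics import mean, stdev

def flag_vendor_anomalies(payments: dict, z_threshold: float = 3.0) -> list:
    """Flag the latest payment to each vendor if it sits far outside
    that vendor's historical pattern. Toy z-score sketch only."""
    flags = []
    for vendor, amounts in payments.items():
        if len(amounts) < 4:
            continue  # not enough history to model
        history, latest = amounts[:-1], amounts[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latest - mu) / sigma > z_threshold:
            flags.append((vendor, latest))
    return flags

history = {
    "Acme Supplies": [410.0, 395.0, 405.0, 2900.0],   # sudden spike
    "CloudCo":       [1200.0, 1180.0, 1210.0, 1195.0],  # stable pattern
}
flag_vendor_anomalies(history)  # [('Acme Supplies', 2900.0)]
```

Even this crude version catches what visual inspection of a polished AI-generated invoice cannot: the document looks perfect, but the spending pattern doesn't.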

6. Shadow AI Became Your Biggest Governance Blind Spot

A KPMG study found that nearly 60% of employees admitted to making mistakes due to AI errors. About half use AI at work without knowing whether it's allowed. More than 40% knowingly use it improperly.

What changed: Employees are using personal AI accounts to process company data, creating exposure you're not tracking. 

What to do in 2026: Survey your team to find out what tools they're actually using. Then make the rules clear: what's allowed, what's prohibited, what requires approval. Shadow AI is a governance failure, not an employee problem.

“There’s all this shadow AI usage and then that creates capabilities. They’re learning, they’re becoming more capable, but you have no say in what they’re becoming capable of. You have no say in how they’re developing. So I think part of the problem is that currently a lot of leaders are thinking of AI purely in terms of productivity measurements. I’m curious about how we shift the conversation away from that into more imagining and thinking about what’s possible as people develop those capabilities, whether you are steering it or not.”

Charlene Li

Digital Transformation Expert

7. AI Displaced Workers Faster Than Companies Could Reskill Them

Multiple high-profile layoffs in 2025 explicitly cited AI as the reason for workforce reductions. Duolingo cut 10% of contractors and cited AI translation capabilities. IBM paused hiring for back-office roles that AI could handle. Dropbox, UPS, and others announced significant headcount reductions tied directly to strategies for automating previously human work.

What changed: The "AI will augment, not replace" narrative collapsed. Companies stopped pretending automation wouldn't cost jobs. The reskilling timelines companies promised in 2023-2024 proved wildly optimistic—most workers couldn't be retrained fast enough to keep pace with AI capabilities.

What to do in 2026: If you're in HR or workforce planning, stop treating AI transformation as a technology project. It's a workforce crisis that requires honest assessment of which roles will disappear, transparent communication about timeline, and genuine investment in transition support—not just generic "upskilling" programs. Companies that handled this transparently in 2025 maintained trust with remaining employees. Companies that didn't are dealing with morale collapse and talent flight in 2026.

8. EU AI Act Enforcement Began (Then Got Complicated)

The EU's AI Act became enforceable in 2025, with fines up to €35 million or 7% of global revenue for violations. But implementation proved so complex that the EU itself proposed simplifications by November.

What changed: The first comprehensive AI regulation globally is now active, even if enforcement is evolving. Compliance requirements exist and affect any company processing EU data.

What to do in 2026: Document your AI governance now while enforcement is still maturing. Adopt existing frameworks (NIST, ISO) rather than building from scratch. If you operate in the EU, you need recordkeeping systems that prove compliance.

9. Traditional Software Added AI (But Implementation Lagged)

Microsoft Copilot, Oracle Fusion AI, SAP Joule—all the major enterprise platforms shipped AI features in 2025. But implementation support and documentation couldn't keep pace.

What changed: "Included" doesn't mean "works." Finance and HR teams found features unusable without extensive vendor assistance that wasn't included in standard support contracts.

What to do in 2026: Before you budget for AI features from existing vendors, demand sandbox access for testing with your data, written training commitments, defined support service level agreements for AI features, and roadmap transparency. The gap between vendor demos and actual implementation was the story of 2025.

10. AI Insurance Emerged as a Risk Transfer Option

Major insurers launched dedicated AI liability policies in 2025, covering errors, hallucinations, data breaches from AI tools, and copyright violations. Premium costs correlate directly to governance maturity—companies with documented controls pay 40-60% less.

What changed: You can now transfer some AI risk through insurance. But pricing penalizes poor governance.

What to do in 2026: Evaluate AI insurance if you're deploying agents or processing sensitive data with AI. Use the underwriting process as a governance audit. It will likely reveal gaps in your controls. Better governance means better rates.

11. Most Companies Still Can't Scale Beyond Pilots

The McKinsey finding was damning: 88% of companies use AI, but most haven't scaled beyond pilot programs. The gap between "using AI" and "getting value from AI" is process redesign, not better models.

What changed: Pilots don't generate ROI. Testing without metrics is burning money. This gap has fueled a flood of Chief AI Officer roles in the job market.

What to do in 2026: Stop running AI pilots without success metrics and end dates. Cut anything that isn't producing measurable value. The gap isn't about technology, it's about change management, training, and workflow re-engineering. Budget for those, or don't budget for AI at all.

Also, take some advice from Lisa Jones, CEO of EyeMail, on how to run more impactful pilots.

“We launched micro-pilots across departments,” she said. “Each pilot included a reflection cycle: What worked? What felt off? Did the output reflect our values of joy, gratitude, and connection? This iterative approach helped us embed AI not just functionally, but intentionally.”

12. Multimodal Analysis Became Practical for Complex Work

Models can now handle images, PDFs, spreadsheets, and code simultaneously in a single conversation. Claude's context window expanded to 200K tokens. Gemini offers 1 million token windows. AI can analyze complex documents with embedded charts and tables.

What changed: Faster analysis across multiple documents, but accuracy still requires validation. Models can misread numbers or charts in complex formats.

What to do in 2026: Use multimodal capabilities for variance analysis, document reconciliation, and cross-referencing. But treat outputs as drafts that need verification, not finished work. The time savings are real, but the accuracy guarantees aren't.
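For document reconciliation specifically, "treat outputs as drafts" can be operationalized: compare every figure the model extracted against the source data before anyone acts on it. This sketch uses hypothetical line-item names and a made-up tolerance:

```python
def reconcile(ai_extracted: dict, source: dict, tolerance: float = 0.01) -> list:
    """List line items where the AI's reading disagrees with the source
    data (or is missing). Hypothetical field names for illustration."""
    mismatches = []
    for item, src_value in source.items():
        ai_value = ai_extracted.get(item)
        if ai_value is None or abs(ai_value - src_value) > tolerance:
            mismatches.append(item)
    return mismatches

source  = {"Q1 revenue": 4_210_000.0, "Q1 opex": 3_150_000.0}
ai_read = {"Q1 revenue": 4_210_000.0, "Q1 opex": 3_510_000.0}  # transposed digits
reconcile(ai_read, source)  # ['Q1 opex'] — send back for human review
```

Models misreading numbers in charts and dense tables is exactly the failure mode this catches, and it costs seconds per document.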

The Pattern That Matters

Every development on this list follows the same arc: AI became more capable in 2025, which made governance more critical, not less.

2026 isn't about having all the most advanced AI tools. It's about developing better rules for how to use them.

If you're still running pilots without ROI metrics, you're wasting money. If you haven't asked your team what AI tools they're actually using, you have a shadow AI problem. If you think "AI-assisted" reduces your liability for accuracy, the Deloitte disaster proved otherwise.

2026 isn't about testing anymore. It's about scaling what works, killing what doesn't, and building governance that protects you from the risks that matter.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
