The numbers tell a story that most executives would rather not hear. While organizations rush to purchase enterprise AI subscriptions, 98% now have employees using unsanctioned AI applications.
Among those employees, 38% admit to sharing sensitive work information with AI tools without authorization. Perhaps most troubling: 81% of employees and 88% of security leaders—the very people responsible for preventing this behavior—report using unapproved AI tools themselves.
This gap between enterprise strategy and employee reality has created what security professionals call "shadow AI," and the challenge is accelerating. Between 2023 and 2024, employee adoption of generative AI applications grew from 74% to 96%. But only 31% of organizations have established formal, comprehensive AI policies to govern that use.
For CIOs and CHROs, this represents a governance crisis that procurement alone can't solve. The traditional approach of "buy the right tool and mandate its use" fails when employees can access comparable capabilities through free consumer applications. The question facing leadership teams is how to establish control without crushing the productivity gains that made AI attractive in the first place.
Scale of Invisible AI Use
Shadow AI differs from previous shadow IT challenges in both scope and accessibility. An employee can't spin up a shadow server or procurement system from their phone during lunch. They can absolutely paste confidential customer data into ChatGPT or Claude to draft a response email, write a performance review, or analyze a spreadsheet.
In March 2024, 27.4% of the corporate data employees put into AI tools was classified as sensitive, up from 10.7% a year earlier. This isn't just about volume; it's about velocity.
The distributed nature of AI adoption makes traditional monitoring approaches insufficient. Unlike software installations that require admin privileges or SaaS tools that generate procurement paper trails, AI usage through personal accounts leaves minimal technical footprint.
Among organizations experiencing high shadow AI usage, security breaches resulted in more personally identifiable information being compromised (65%) and more intellectual property being exposed (40%) compared to organizations with lower shadow AI rates, according to IBM's 2025 Cost of a Data Breach Report. These incidents also added an average of $670,000 to breach costs.
When employees process HR data, financial information, or customer records through consumer AI tools, organizations lose chain of custody. They can't demonstrate compliance with data protection regulations, can't audit what information left the organization, and can't guarantee deletion when required by law.
The concentration of risk is particularly acute in smaller organizations. Companies with 11 to 50 employees showed the densest shadow AI usage, averaging 269 unsanctioned AI tools per 1,000 employees.
Even mid-sized organizations with 500 to 1,000 employees had about 200 shadow AI tools per 1,000 users. These aren't temporary experiments: some shadow AI applications show median usage durations exceeding 400 days without formal approval or oversight.
This is the reality in most organizations right now: AI adoption is already happening, whether leadership planned for it or not. The worst response is to shut it down; that only creates more shadow usage and erodes trust.
Speed-Versus-Security Mindset
Recent research reveals an uncomfortable truth about employee attitudes toward AI governance. Among workers, 60% agree that using unsanctioned AI tools is worth the security risks if it helps them work faster or meet deadlines. Another 21% believe their employer would "turn a blind eye" to unapproved AI tool usage as long as work gets completed on time.
This calculus shifts based on organizational level. Among presidents and C-level executives, 69% believe speed trumps privacy or security concerns. At the director or senior VP level, that figure is 66%. In contrast, just 37% of those in administrative roles and 38% in junior executive positions share this view.
The people with the most authority to change governance policies are the ones most likely to ignore them.
The data employees are sharing reflects this risk tolerance. One-third have shared research or datasets through unsanctioned AI tools, more than a quarter have shared employee data including names, payroll, or performance information, and 23% have shared financial statements or sales data.
Why Blocking Fails
Organizations attempting to solve shadow AI through restriction are discovering a fundamental problem. Workers find workarounds to access blocked applications. Blocking doesn't eliminate the behavior; it just drives it further underground and eliminates visibility.
People in your organization are using it, and they're using it all the time. So you may have to declare what I call AI amnesty: come out of the dark and show us what you're using. Use the preferred tools that are locked down, that aren't training the model, that keep everything secure. But come out of the shadows: no more shadow AI. You have amnesty; we won't hold past use against you. Bring it into the open and let's figure out how to do this together.
The knowledge gap compounds this challenge. While 40% of employees recall receiving AI training, 40% still use unapproved tools daily. More paradoxically, researchers found a positive correlation between users reporting that they understood AI security requirements and their regular use of unapproved AI tools.
As employees' knowledge of AI risks increases, so does their confidence in making judgments about that risk—even at the expense of following company policies.
This suggests that security awareness training is insufficient as a safeguard. Less than half of workers say they know and understand their companies' policies about AI usage. The policies that do exist often fail to address the practical reasons employees turn to shadow AI in the first place.
Building a Practical AI Governance Framework
The Information Systems Audit and Control Association (ISACA) has developed guidance for organizations to address the full lifecycle of AI management. According to ISACA's research, only 31% of organizations have formal, comprehensive AI policies in place despite 83% of IT and business professionals believing employees in their organizations are using AI.
"The best frameworks start with acknowledgment," Evans said. "First, an inventory of where AI is already being used and why. Second, define clear guardrails—what data is allowed, which tools are approved, and where human oversight is required. Third, assign ownership: who is accountable for outcomes when AI informs decisions. Governance should enable speed safely, not slow teams down. When done well, it turns informal experimentation into repeatable, responsible execution."
ISACA recommends adapting the COBIT framework, traditionally used for IT governance, to address AI-specific challenges. This approach works because most organizations already have COBIT-based governance structures in place, making AI governance an extension rather than a complete rebuild.
So let’s take a look at the steps this entails:
Strategic alignment
Ensures AI initiatives connect directly to business objectives rather than existing as standalone technology projects. This means asking not "what can AI do?" but "what business problems need solving, and can AI help?"
Inventory and risk classification
Catalog existing AI tools in use, both sanctioned and unsanctioned, and classify them by risk level. A customer service chatbot processing routine inquiries presents different risks than an AI tool analyzing employee performance data or financial projections. This classification determines appropriate governance levels for different tools and use cases.
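To make the inventory concrete, the sketch below shows one way a risk-classified AI tool registry might look in code. The tool names, data-type labels, and classification thresholds are illustrative assumptions, not part of ISACA's guidance.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class AIToolRecord:
    """One entry in the AI tool inventory (fields are illustrative)."""
    name: str
    owner_team: str
    sanctioned: bool
    data_types: set  # e.g. {"public"} or {"internal", "pii"}


def classify(tool: AIToolRecord) -> RiskLevel:
    """Assign a risk level from the data a tool touches and its approval status.

    The thresholds here are assumptions for illustration only.
    """
    if tool.data_types & {"pii", "financial"}:
        return RiskLevel.HIGH if tool.sanctioned else RiskLevel.CRITICAL
    if "internal" in tool.data_types:
        return RiskLevel.MEDIUM if tool.sanctioned else RiskLevel.HIGH
    return RiskLevel.LOW


registry = [
    AIToolRecord("chat-assistant", "marketing", sanctioned=False, data_types={"public"}),
    AIToolRecord("forecasting-copilot", "finance", sanctioned=True, data_types={"internal", "financial"}),
]

for tool in registry:
    print(f"{tool.name}: {classify(tool).value}")
```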
Embedding AI risk management into existing organizational processes
Integration with software development, project management and change management processes ensures risk is assessed early and often. From a privacy and data governance perspective, proper asset, data and model tracking provides a holistic view of AI usage. Third-party risk management processes must track which vendors use AI and how they are securing it.
Establishing data boundaries
Organizations need explicit policies defining what information can and cannot be processed through AI systems. These boundaries typically operate on a tiered system: public information can be used in any AI tool, internal information requires enterprise solutions with data protection agreements, and confidential information is restricted to specific, heavily monitored AI environments.
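A minimal sketch of how those tiers might be encoded as a policy check follows. The tool categories and the tier-to-category mapping are assumptions made for illustration; a real deployment would enforce this through data loss prevention tooling rather than application code.

```python
# Tier names mirror the classification above; the tool categories and
# this mapping are illustrative assumptions, not a prescribed standard.
TIER_POLICY = {
    "public": {"consumer_ai", "enterprise_ai", "restricted_ai"},
    "internal": {"enterprise_ai", "restricted_ai"},  # enterprise tools with data protection agreements
    "confidential": {"restricted_ai"},               # heavily monitored environments only
    "restricted": set(),                             # never processed by AI tools
}


def is_allowed(data_tier: str, tool_category: str) -> bool:
    """Return True if data of this tier may be sent to this category of AI tool."""
    return tool_category in TIER_POLICY.get(data_tier, set())


assert is_allowed("public", "consumer_ai")
assert is_allowed("internal", "enterprise_ai")
assert not is_allowed("confidential", "consumer_ai")
```

The default-deny shape matters: any tier or tool category that isn't explicitly listed is treated as disallowed rather than silently permitted.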
The framework emphasizes that AI governance must be comprehensive, overseeing the entire AI lifecycle. This includes capturing relevant metadata at every stage and covering all aspects of model development, deployment and monitoring.
AI governance should provide full visibility of all AI models across the enterprise ecosystem, fostering transparency and allowing stakeholders to understand how models are created, used and managed.
Practical Steps to Get Started

Organizations don't need to implement a complete governance framework overnight. ISACA's approach suggests a phased implementation:
Phase 1: Establish Visibility (Weeks 1-4)
- Conduct an AI tool audit across departments
- Create an AI registry requiring employees to document which tools they use and for what purposes
- Classify tools by risk level (low, medium, high, critical)
- Identify data types being processed through each tool
Phase 2: Define Policies and Boundaries (Weeks 5-8)
- Develop clear data classification standards (public, internal, confidential, restricted)
- Create tiered usage policies matching tool risk levels to data sensitivity
- Establish approval processes for new AI tool requests
- Define roles and responsibilities (who approves what, who monitors compliance)
Phase 3: Implement Controls (Weeks 9-16)
- Deploy technical controls (data loss prevention, access management)
- Provide sanctioned alternatives to commonly used shadow AI tools
- Launch employee training on approved tools and data handling policies
- Establish monitoring systems for anomalous AI usage patterns
Phase 4: Continuous Improvement (Ongoing)
- Conduct regular audits (quarterly recommended) for unsanctioned AI use
- Review and update policies as AI capabilities evolve
- Measure compliance rates and governance effectiveness
- Collect feedback from employees on tool gaps and policy friction
The key is treating AI governance as a product with dedicated ownership rather than a policy document. Just as organizations have Chief Security Officers with teams managing security, effective AI governance requires an owner, whether a Chief AI Officer, Chief Data Officer, or a dedicated AI governance team, who is responsible for maintaining the framework as a living system.
AI Sandboxes for Safe Experimentation
Perhaps the most significant development in shadow AI governance is the concept of "AI sandboxes": controlled environments where employees can experiment with AI capabilities without putting sensitive data at risk.
Sandboxes acknowledge a fundamental truth: employees use shadow AI because it helps them work better. Blocking access without providing alternatives doesn't eliminate the need.
Harvard University pioneered this approach when faculty and researchers needed safe access to large language models without risking data leakage to external vendors. The university launched a secure AI sandbox supporting GPT-3.5, GPT-4, Claude 2, and PaLM 2 Bison.
Over 50 pilot users leveraged the sandbox to test AI for teaching and research while protecting confidential data. The controlled tests also informed Harvard's procurement decisions for future AI integrations.
The Massachusetts state government built similar isolated sandboxes on AWS infrastructure for AI tools including chatbots and procurement systems. These systems enabled secure, low-risk experimentation, streamlining services and informing broader AI adoption strategies. New Jersey and Washington, D.C. implemented comparable approaches using Azure.
In the financial sector, the UK Financial Conduct Authority partnered with NVIDIA to create what they termed a "supercharged sandbox," giving firms access to AI models, datasets, and regulatory guidance. The sandbox facilitated innovation in fraud detection, risk management, and automation while ensuring compliance and oversight.
The sandbox concept creates designated spaces where teams can test AI tools for specific use cases using sanitized data. A finance team can experiment with AI-powered forecasting models using historical data with customer identifiers removed. An HR team can test resume screening tools using synthetic candidate profiles. Marketing can prototype campaign concepts without sharing unreleased product information.
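As a sketch of the sanitization step that makes this possible, the snippet below strips direct identifiers from records before they enter a sandbox. The field names and the pseudonymization scheme are assumptions for illustration and would need review against the organization's actual data classification and privacy requirements.

```python
import hashlib

# Fields treated as direct identifiers here are assumptions for this sketch;
# a real pipeline would derive them from the organization's data classification.
IDENTIFIER_FIELDS = {"customer_name", "email", "account_number"}


def sanitize_record(record: dict) -> dict:
    """Drop direct identifiers and replace the join key with a stable pseudonym."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    cleaned["customer_ref"] = hashlib.sha256(record["account_number"].encode()).hexdigest()[:12]
    return cleaned


history = [
    {"customer_name": "Acme Corp", "email": "ap@acme.example", "account_number": "A-1001",
     "month": "2024-01", "revenue": 48200.0},
]

sandbox_ready = [sanitize_record(r) for r in history]
print(sandbox_ready[0])  # identifiers removed before anything reaches the sandbox
```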
These environments serve dual purposes. They satisfy employee desire for AI capabilities while giving IT and compliance teams controlled testing grounds to evaluate new tools before enterprise deployment.
When a team identifies a valuable AI use case in the sandbox, the organization can vet the tool properly and negotiate appropriate data protection agreements before broader rollout.
The CHRO's Role in AI Governance
While CIOs typically lead AI governance initiatives, CHROs play an equally critical role that often goes underestimated. Shadow AI isn't just a technology problem; it's a people problem rooted in how employees work, what productivity pressures they face, and whether they trust official tools to meet their needs.
CHROs have to address the cultural dynamics that drive shadow AI adoption. When employees believe that requesting access to new AI tools will trigger a six-month procurement process, they'll find workarounds. When they perceive IT policies as obstacles to productivity rather than legitimate risk management, the same thing happens.
This requires CHROs to advocate for governance frameworks that balance security with usability. A system that requires fifteen approval steps to use an AI writing tool will fail regardless of how theoretically secure it is. Employees will simply use ChatGPT instead and never mention it. Effective governance makes the right path the easy path.
CHROs also own the change management required to shift employee behavior from shadow AI to sanctioned alternatives. This involves communication about why governance matters, training on approved tools, and creating clear channels for employees to request new capabilities when current tools don't meet their needs.
The performance management aspect can't be ignored either. Organizations must decide how to handle employees who violate AI use policies. A growing number of employees trust AI tools more than their managers or colleagues, indicating a shift in trust from human relationships to technology. This trust dynamic complicates enforcement: too harsh, and employees hide their AI use more carefully; too lenient, and policies become meaningless.
Monitoring Systems That Work
Effective AI monitoring requires balancing security needs against privacy concerns and employee trust. The most effective approaches focus on data movement rather than individual behavior. Systems monitor for large volumes of potentially sensitive information being copied to clipboards or uploaded to external sites—patterns that indicate possible data leakage rather than routine work.
Context matters enormously in monitoring. An engineer pasting code into an AI tool might be seeking debugging help or might be sharing proprietary algorithms. The monitoring system needs to understand the difference, often through data classification systems that mark sensitive code repositories differently from open-source projects.
Some organizations are implementing "just-in-time" education systems that intervene when risky behavior is detected. If an employee attempts to paste what appears to be customer data into an external AI tool, the system generates a warning explaining data protection policies and directing them to approved alternatives. This educates without punishing and provides an off-ramp before policy violations occur.
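A highly simplified sketch of that kind of just-in-time check is below. The regular expressions, the approved-tool URL, and the warning text are all placeholders; production data loss prevention relies on classification labels and far richer detectors than a few patterns.

```python
import re

# Illustrative patterns only; real DLP uses classification labels and
# far more robust detection than these regexes.
PATTERNS = {
    "national ID numbers": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card numbers": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email addresses": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

APPROVED_TOOL = "https://ai.internal.example"  # placeholder for a sanctioned assistant


def check_paste(text: str, destination: str) -> str | None:
    """Return a just-in-time warning if a paste looks sensitive, else None."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if hits and not destination.startswith(APPROVED_TOOL):
        return (
            f"This text appears to contain {', '.join(hits)}. "
            f"Policy restricts sharing it with external AI tools; "
            f"the approved assistant at {APPROVED_TOOL} can handle this data."
        )
    return None


warning = check_paste("Customer SSN 123-45-6789 disputes the charge", "https://chat.example.com")
print(warning)
```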
The Procurement Question
A significant driver of shadow AI is procurement friction. Employees can create a ChatGPT or Claude account in minutes and start working. Getting IT to approve, purchase, and deploy an enterprise AI tool can take months. This timeline gap guarantees shadow AI adoption.
Forward-thinking organizations are creating fast-track procurement processes for AI tools that meet certain criteria. If a tool doesn't require access to sensitive data, doesn't integrate with core systems, and serves a limited user base, approval can happen in days rather than months. This reduces the incentive for shadow usage while maintaining appropriate oversight for higher-risk deployments.
Some organizations are adopting "AI tool libraries": pre-approved catalogs of AI services that employees can access immediately through enterprise agreements. Rather than negotiating contracts for each new AI tool, IT establishes umbrella agreements with major providers, and employees can activate access as needed. This provides the speed of shadow AI with the governance of sanctioned tools.
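A sketch of what such a library lookup might look like follows; the tool names, agreement identifiers, and data-tier ceilings are invented for illustration rather than drawn from any vendor's catalog.

```python
# Entries, agreement IDs, and tier ceilings are hypothetical examples.
TOOL_LIBRARY = {
    "writing-assistant": {"agreement": "umbrella-2025-01", "max_data_tier": "internal"},
    "code-helper": {"agreement": "umbrella-2025-02", "max_data_tier": "internal"},
    "meeting-transcriber": {"agreement": "umbrella-2025-03", "max_data_tier": "confidential"},
}

TIER_ORDER = ["public", "internal", "confidential", "restricted"]


def request_access(tool: str, intended_tier: str) -> str:
    """Grant immediate access for pre-approved tools within their data-tier ceiling."""
    entry = TOOL_LIBRARY.get(tool)
    if entry is None:
        return f"'{tool}' is not in the library; route it to the fast-track review queue."
    if TIER_ORDER.index(intended_tier) > TIER_ORDER.index(entry["max_data_tier"]):
        return f"'{tool}' is approved only up to {entry['max_data_tier']} data."
    return f"Access granted under agreement {entry['agreement']}."


print(request_access("writing-assistant", "internal"))
print(request_access("video-generator", "public"))
```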
Where Governance Falls Short
Current AI governance frameworks struggle with several emerging challenges. The first is the increasing capability of AI tools embedded in other software. Microsoft Copilot integrates directly into Office 365, Adobe's AI features sit inside Creative Cloud, and Salesforce Einstein operates within CRM workflows. These "invisible" AI capabilities don't register as discrete tools requiring governance but process significant amounts of business data.
The second challenge involves personal devices. Many organizations have bring-your-own-device policies allowing employees to access work email and documents from personal phones and laptops. When those devices have AI assistants—Siri, Google Assistant, or third-party tools—capable of accessing work data, traditional network-based monitoring becomes ineffective.
The third challenge is distinguishing between AI use and AI dependence. Governance frameworks can control what tools employees access but struggle to address whether employees are developing appropriate critical thinking skills or becoming overly reliant on AI outputs. An employee who uses AI to draft every email may be violating no policies while developing writing skills at a fraction of their pre-AI rate.
What Success Looks Like
Organizations that successfully govern AI usage without crushing productivity share several characteristics.
- They establish clear, simple policies that employees can understand and follow without legal interpretation.
- They provide sanctioned alternatives that genuinely meet employee needs rather than inferior substitutes that drive people back to shadow tools.
- They create feedback mechanisms where employees can report gaps in approved tools or request new capabilities.
- They treat governance as an ongoing conversation rather than a static rulebook.
Regular communication about why policies exist, what risks they mitigate, and how they evolve maintains employee buy-in. When employees understand that governance protects them from liability, data breaches that could compromise their own information, and regulatory violations, compliance becomes cultural rather than enforced.
The organizations struggling with shadow AI often exhibit opposite characteristics:
- Complex policies that require legal interpretation
- Approved tools that lag consumer offerings in capability and ease of use
- One-way communication where IT mandates policies without explaining rationale or soliciting feedback
The Path Forward
Shadow AI won't disappear. Consumer AI tools will continue improving in capability and accessibility, making them perpetually attractive to employees seeking productivity gains. The governance challenge isn't eliminating shadow AI but channeling AI adoption toward managed systems that protect both organizational interests and employee productivity.
This requires partnership between CIOs, CHROs, and business leaders to create frameworks that work in practice rather than just in policy documents. It requires investment in approved tools that actually compete with consumer alternatives rather than serving as bureaucratic substitutes. It requires trust that employees will make good decisions when given clear guidance, and that leadership will provide the resources necessary to work effectively within governance boundaries.
The alternative is a growing disconnect between official policy and actual practice, where shadow AI usage continues expanding in ways organizations can neither see nor manage. In 2025, data showed that roughly one-third of enterprise generative AI projects were expected to stall due to poor data quality, inadequate risk controls, escalating costs or unclear business value. Research indicates that over 80% of AI projects fail overall.
