Rejection Reality: AI systems are making job application rejections without human review, raising legal accountability questions.
Governance Gap: AI governance lacks clear accountability, as policies don't assign ownership for decision-making impacts.
AI Accountability: Courts question who should monitor AI decisions, holding employers liable for AI-based discriminatory outcomes.
Policy Misalignment: Existing policies fail to adapt to organizational changes, leading to governance issues with AI tools.
Legal Implications: Colorado's AI Act sets precedent for AI accountability, influencing employment decisions and legal standards nationwide.
At 1:50 in the morning, Derek Mobley received a rejection. He had applied less than an hour before. No hiring manager had reviewed his résumé, no recruiter had weighed his experience.
The system processed him, rendered a verdict, and moved on before anyone at the company had started their workday. Mobley, who is Black and over forty, would apply to more than a hundred jobs through Workday's platform and be rejected every time.
The lawsuit he filed in 2023 is now a certified nationwide collective action, potentially covering hundreds of millions of applicants. Workday argues it isn't the employer, which is technically true. The employers say they weren't making the decisions, which is also technically true. A federal court has decided that those two facts, taken together, are precisely the problem.
When a platform exercises judgment, not just applying criteria but participating in the decision, it functions as an agent of the employer. Therefore, the employer inherits the liability. What courts are now pressing is not whether the AI made a bad call. It's who was supposed to be watching it.
That question is landing on CHROs' desks. Not because they volunteered for it, and certainly not because they have the authority, budget, or legal infrastructure to answer it. Because someone has to, and they're the closest thing most organizations have to an owner of the people-related risk that AI is generating at scale.
Owned Everywhere, Accountable Nowhere
Sean McIntire, Chief Legal Officer at PEBL, has watched organizations build AI policies with genuine care, only to see them dissolve somewhere between the executive suite and the people actually making decisions.
"Owned everywhere, accountable somewhere," he said during a panel on AI risk at Transform in Las Vegas this week. The construction sounds like a solution. It isn't.
He draws the comparison to GDPR. When that regulation arrived, companies scrambled. Many had no clear picture of where their personal data lived, who was processing it, or what their obligations actually were.
The chaos was real, but it was also a forcing function. Organizations that emerged from it with strong data governance programs hadn't just become compliant, they'd built infrastructure that proved durable.
McIntire's argument is that AI is at the same pre-forcing-function moment, and most companies are behaving exactly as they did before GDPR hit: reacting to the surface threat while the underlying structural problem goes unaddressed.
That structural problem, stated plainly, is this: AI tools entered the workforce primarily through the efficiency door, the one that promises faster hiring, automated performance signals, compensation benchmarking, and continuous feedback loops.
The organizations deploying them treated procurement and governance as separate problems. Legal reviewed the contracts. IT approved the security posture. Finance signed the budget. HR was handed the deployment and told to manage it.
What nobody clearly established was who owned the risk when the tool made a consequential decision that harmed someone.
McIntire's framing captures this exactly. AI governance, in most organizations, is diffuse by design. The policy exists. The working group exists. The usage guidelines exist. What doesn't exist is a named individual with both the authority and the resources to be accountable when something goes wrong.
The Committee Isn't Present at 2am
The data makes this concrete. A Gartner poll of more than 1,800 executive leaders found that 55% of organizations now report having an AI board or dedicated oversight committee in place. That sounds like progress until you look at the companion figure from McKinsey: only 28% of organizations say the CEO takes direct responsibility for AI governance oversight.
Boards fare worse, with just 17% formally incorporating AI governance into their committee charters.
The committees exist. The accountability doesn't follow.
Vittoria Reimers, who leads the people function at Juniper Square and has built one of the more operationally serious AI governance models you're likely to find at a mid-market company, doesn't put much weight on committees alone.
"Your first line of defense is your people and your processes," she said. "The governance committee is just not going to be present for the split-second decision someone is making at 2am."
What she's pointing at is an organizational reality that governance frameworks rarely account for. The decisions that generate the most risk aren't made in committee rooms. They're made by engineers building model functionality on a deadline, by managers acting on AI-generated performance flags to recommend a termination, by recruiters running candidates through screening tools whose underlying logic they don't fully understand.
The governance apparatus operates at a level of abstraction that the actual work doesn't.
Reimers built a practical response to this at Juniper Square. Her team created what they called ACE. It's roughly ten to twelve employees distributed across the organization, functioning as informal AI advisors while maintaining their regular roles.
The pitch was direct: if you're building something and don't know if it's safe or scalable, come to ACE. The uptake surprised her. People came, consistently, because they finally had a process that addressed the actual problems they were facing, not a policy document that lived above them.
"Spend a lot of time on your usage policy, your governance board, your governance committee," she said. "Then spend ten times that amount of time investing in your people: their enablement, their training, your culture."
The ACE model doesn't resolve the accountability question at the top of the organization. But it does something the formal governance structure can't: it moves ownership to where the decisions actually get made.
What HR Inherited
Matt Poepsel has been thinking about this gap from a different angle. As Vice President of Talent Optimization at The Predictive Index, he works closely with HR leaders navigating the distance between what AI promises and what it delivers when deployed without organizational context.
He tells a story about a period in his own management career when he lacked the context he needed to lead well — when he made calls he later regretted, not from bad intent, but from incomplete information.
That, he argues, is what AI is doing right now, at scale, in every organization that deployed a generic tool and expected it to understand the specifics of their workforce.
"They say we have to keep the human in the loop. I would say we have to keep human resources in the loop," he said. "I see HR struggling with the same thing I did when I was overly fixated on the technical part of the equation."
That marginalization of HR is partly structural and partly historical. HR spent years trying to earn a seat at the business table. When AI tools arrived, they were often positioned as a way for HR to demonstrate ROI through faster hiring cycles, reduced attrition costs, and automated compliance.
The framing made HR a beneficiary of AI deployment, not an architect of it. The governance conversation, where it happened at all, tended to involve legal, IT, and finance. HR got the tools. Accountability for those tools settled elsewhere, or nowhere in particular.
Poepsel's critique is pointed: generic AI doesn't know your organization. It doesn't know your culture, your behavioral dynamics, your history, or the specific populations it's making decisions about.
What it produces is plausible-sounding output that lacks the context any experienced HR professional carries by instinct. When AI flags a performance outlier, scores a candidate, or recommends a compensation adjustment, it's working from patterns that carry no knowledge of the person in front of it. The HR leader who's been in the room for three years does.
"If AI can do it, it is by definition commoditized," Poepsel said. "What you bring to it is differentiation."
That's uncomfortable, but important. HR's value in an AI-assisted organization isn't execution; the tools can do that faster. The value is judgment: knowing when the output is wrong, when context matters, when a decision carries consequences the model can't perceive.
That judgment is precisely what gets squeezed out of organizations that deploy AI without redesigning the processes and roles around it.
The Legal Forcing Function
On June 30, 2026, Colorado's AI Act takes effect, making it the first state law in the country to impose comprehensive governance obligations on both developers and deployers of AI systems that influence consequential employment decisions.
The requirements are substantial. Employers must maintain documented risk management programs, conduct annual impact assessments, notify employees when AI has influenced a decision that affected them, and report discriminatory outcomes to the state attorney general.
The law recommends the NIST AI Risk Management Framework as the operative compliance standard. It applies regardless of employer size.
Colorado may punch above its weight here. Other states are watching, and the regulatory working group Governor Jared Polis assembled reached unanimous consensus on revisions as recently as this month. The law's final shape is still in motion; its ambition is not.
The Mobley case establishes why this matters beyond Colorado's borders. The court didn't need a state AI law to find that Workday could be held liable as an agent of the employers using its platform.
The agency theory that participation in a consequential decision creates shared accountability is being applied to AI tools right now, under existing federal anti-discrimination statutes. An employer doesn't have to operate in Colorado to face the argument that it delegated a hiring decision to a system that exercised real judgment, and that judgment produced a discriminatory outcome at 1:50 in the morning while everyone was asleep.
The legal exposure runs directly back to the governance vacuum. An employer that cannot demonstrate who reviewed the AI tool before deployment, who monitors it for discriminatory patterns, and who is responsible for its outputs has no meaningful defense when a decision gets challenged. The committee had a meeting. Nobody was watching.
Redesigning Who Owns What
Chelsea Gregory, from healthcare AI company Suki, recently oversaw an employee handbook overhaul. She offered the audience at Transform a ground-level illustration of how the governance gap appears before it becomes a legal problem.
A flexible PTO policy that worked well with a smaller team started generating equity complaints as the company grew. A manager was approving extended leave for newer employees while tenured staff felt the policy was being applied inconsistently. Nothing was malicious. The documentation just hadn't kept pace with the organization's scale.
The same dynamic plays out with AI tools. Policies designed for one deployment context don't automatically transfer when the organization grows, the workforce shifts, or the tool gets updated.
"Do you have the proper documentation and processes in place?" Gregory asked. "And how are you verifying that those are accurate and serve their purpose?"
In most organizations, no one is formally responsible for checking.
Closing that gap requires something more substantive than a better policy document. McIntire argues for integration: AI risk shouldn't sit outside the enterprise risk management framework as a separate, exotic category. It should live inside the existing risk architecture, with the same ownership clarity that governs any other operational exposure.
The question then becomes: do the people running existing risk processes have the authority, training, and organizational access to apply them to AI?
Reimers and Poepsel arrive at the same conclusion from different directions. The ACE model works because it places accountability close to the work. Poepsel's HR multiplier framing works because it makes explicit what HR uniquely provides: organizational knowledge, behavioral context, the ability to test a model's output against the actual human it affects. Neither replaces formal governance. Both make formal governance mean something.
What CHROs are being asked to absorb right now is accountability for AI decisions without the authority to govern them. That is a structural failure being misread as a talent problem.
Organizations treating it as the latter will likely keep filling seats on committees that carry no real weight. The actual work is redesigning how governance authority is distributed across the C-suite, so that responsibility and power move together.
Gartner's 2025 research puts the current state plainly: fewer than one in four IT leaders say they are very confident their organization can manage governance when rolling out generative AI tools. More enterprises have formal strategies on paper than at any point in the past. Very few have successfully operationalized them.
The gap between a policy and a governance structure with real accountability is where the risk lives. It is also, based on the trajectory of the litigation, where the next set of lawsuits will be filed.
