Key Takeaways

Compliance Risk: Organizations often believe legal documentation is enough, not recognizing actual compliance risks with AI.

Regulatory Landscape: AI governance operates within fragmented laws, requiring awareness of existing legal standards and liabilities.

Bias Detection: Bias in AI stems from historical data, impacting hiring practices, often without intent or awareness.

Vendor Accountability: Relying solely on vendor compliance certifications can obscure significant organizational risks associated with AI tools.

Governance Necessity: Effective AI governance requires executive oversight and clear accountability to ensure compliance and reduce risks.

Paperwork can make compliance feel finished. Legal signs off; contracts are filed with an indemnification clause that promises protection. But those steps often create the illusion of safety rather than the reality of it.

Recently, I attended a conference with several sessions dedicated to the legal landscape around workplace AI. Practitioners, employment attorneys, and compliance experts spent the better part of two days pulling apart assumptions that organizations have built their AI governance strategies on. The picture that emerged wasn't one of impending danger. The exposure is already here.

The Regulatory Picture

The federal government has not passed comprehensive AI legislation. That's the fact some executives stop at, and it's precisely why so many are underestimating their risk. The actual landscape is a patchwork of state statutes, local ordinances, and existing federal law that doesn't require new AI-specific regulation to apply.


You hear all this ‘we’re not going to regulate this’ rhetoric from the current administration. Don’t get sucked in by that. They’re selling certainty where there is none.

Scott Kelly

Partner at Ogletree Deakins

Kelly and every other attorney who spoke at the conference were fairly direct about what that misread costs organizations. The EEOC's 2024–2028 strategic priorities, which include the Hiring Initiative to Reimagine Equity, place specific focus on facially neutral systems, meaning tools that don't explicitly discriminate but produce discriminatory outcomes.

Federal law already covers that. So does the EU AI Act, which applies to any organization with operations or employment activities in Europe. 

The now-infamous Workday case, a class action lawsuit alleging that the company's HR software discriminated against applicants by age and race, demonstrated clearly that legal exposure doesn't wait for Congress.

Join the People Managing People community for access to exclusive content, practical templates, member-only events, and weekly leadership insights—it’s free to join.


The Discoverability Problem

One of the more jarring conversations at the conference concerned documentation. Specifically, what happens when your AI decisions end up in discovery.

AI prompts can be subpoenaed. Internal logic, decision inputs, model configurations used in hiring, performance management, or compensation processes can be requested as evidence. Organizations that can't produce detailed AI decision records going back four years face significant exposure, both in litigation and in regulatory review.

This isn't theoretical. As AI agents take on more autonomous roles in HR operations, the liability question becomes sharper. When an agentic system makes or influences a consequential employment decision and something goes wrong, the law places the responsibility squarely with the employer, not the vendor.

Strict liability for agents puts the organization in the position of owning outcomes produced by tools it may not fully understand.

Document retention requirements are shifting accordingly. Governance that wasn't built with discoverability in mind will require reconstruction under pressure.

Where Bias Actually Enters

Most organizations approach AI bias as a vendor problem. They buy an AI tool, ask whether it's been audited, receive some version of yes, and move forward. The legal and compliance community has started calling out how incomplete that is.

Bias enters AI systems through historical data, success definitions, screening filters, and proxy variables. When an AI system is trained on years of who got hired, who got promoted, and what performance ratings looked like, it learns the organization's preferences. Those preferences often carry embedded discrimination that no one deliberately designed, but that the model encodes and then scales.

"Bias is amplified," Kelly noted, "transforming isolated decisions into institution-wide patterns."

The legal standard for disparate impact doesn't require intent. A plaintiff needs to demonstrate a statistical disparity, and the threshold is already defined. The four-fifths rule, also known as the 80% rule, holds that if a protected group's selection rate is less than 80% of the highest-selected group's rate, that constitutes evidence of disparate impact. 

An AI tool running thousands of resume screens a month can cross that threshold without a single person in the company meaning for it to happen.
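The four-fifths rule is simple arithmetic, which is part of why it's so easy to trip at scale. A minimal sketch of the check (the group names and selection counts below are hypothetical, illustrative numbers only):

```python
def four_fifths_check(selections):
    """Apply the four-fifths (80%) rule to per-group screening outcomes.

    selections maps group name -> (selected, total_applicants).
    Returns each group's selection rate, its ratio to the highest-rate
    group, and whether that ratio falls below the 0.8 threshold.
    """
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / top, "adverse_impact": r / top < 0.8}
        for g, r in rates.items()
    }

# Hypothetical monthly resume-screen volumes:
result = four_fifths_check({
    "group_a": (300, 2000),   # 15% selection rate
    "group_b": (180, 2000),   # 9% selection rate -> ratio 0.60, flagged
})
```

Nothing in that calculation asks about intent. A 0.60 ratio is evidence of disparate impact whether or not anyone designed for it.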

A University of Washington study cited during one session illustrated the scale of the problem. When resumes bearing names associated with Black men were compared against those associated with white men, the AI screening systems preferred the Black-associated names zero percent of the time.

The disparate treatment standard, which covers intentional discrimination, brings its own AI-specific complications. Direct inputs using protected characteristics as explicit data points are the obvious case. But hidden motives are harder to detect and harder to defend against, like selecting a vendor specifically because their screening tool tends to filter out older applicants, while framing it internally as a preference for a "fast-paced culture." 

That framing doesn't neutralize the intent from a legal standpoint.

Why Vendor-Oriented Oversight Fails

The default governance model in most organizations runs through the vendor relationship. The assumption is that if a vendor's tool passes certification or ships with audit documentation, the organization has met its obligations.

That assumption doesn't survive contact with either the law or operational reality.

"At the end of the day, the buck is going to stop with the organization," Kelly said. "All the risk is going to come out of the same bucket. You could work with any folks in compliance, risk, in legal, all working to minimize risk in their jurisdictions, but the only way to do that is for people in your organization to understand what you're using and what you're using it for. If I've missed that your recruiting team is using a technology that we're not testing for risk, our work is somewhat useless."

Chris Lippert, a director at Schellman, pointed to the vendor certification landscape as part of the problem. Many AI tools in active use across HR functions have not been independently certified. 

Greenhouse, one of the more widely adopted applicant tracking platforms, received its ISO/IEC 42001 certification only in February of this year. The implication for the field is that organizations adopted AI-enhanced functionality long before the tools were validated, and many are still running on unvetted versions.

Just because an application has AI doesn’t mean it should be used. And if it adds AI functionality, that’s the time to see something, say something. Do your company policies get updated before the technology gets rolled out? No. You’re getting exposure before it can be solidified in policy.

Chris Lippert

Director at Schellman

Lippert recommended building on privacy impact assessments organizations have likely already completed. AI risk review doesn't have to be built from scratch. The same stakeholders, the same analytical frameworks, and much of the same documentation infrastructure already exist in most compliance programs.

Treating AI governance as an extension of that work rather than a separate initiative makes it more executable and more durable.

The Governance Gap at the Top

Operational governance requirements aren't complicated to list out.

  • Human oversight checkpoints
  • Validation cadence
  • Vendor accountability
  • Outcome monitoring
  • Documentation
  • Version control

These are all well understood. What's less clear in many organizations is who owns them, and whether anyone with organizational authority is actively watching.

The general consensus on all AI risk frameworks is that it’s got to be supported from the top down. You have to have that ownership.

Dr. Camille Howard

Founder and CEO of Humanistic Power

Howard's point connects to a broader failure mode. Governance structures built at the team or functional level, without executive mandate and clear accountability, tend to be inconsistently applied and difficult to enforce.

When a business unit adopts a new AI tool without going through an established review process, the gap it creates doesn't show up until there's a complaint, a lawsuit, or a regulatory inquiry.

Model drift compounds this. AI tools change over time, sometimes through vendor updates, sometimes through shifts in the underlying data they're processing. A tool that passed a bias audit at deployment may not pass one eighteen months later.

"As models drift, the more risk you're inheriting," Howard said. "You need to empower your people to speak up when something looks off."
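One way to operationalize that is to re-run the same selection-rate check continuously rather than only at deployment. A sketch, assuming the organization logs each screening outcome as a (group, selected) pair; the window size and 0.8 threshold here are illustrative defaults, not prescribed values:

```python
from collections import deque


class DriftMonitor:
    """Rolling four-fifths check over the most recent screening outcomes.

    A tool that passed an audit at deployment can fail this check months
    later as the model or its inputs drift.
    """

    def __init__(self, window=1000, threshold=0.8):
        self.events = deque(maxlen=window)  # oldest outcomes age out
        self.threshold = threshold

    def record(self, group, selected):
        self.events.append((group, bool(selected)))

    def flagged_groups(self):
        """Groups whose selection rate is below threshold * the top rate."""
        totals, hits = {}, {}
        for group, selected in self.events:
            totals[group] = totals.get(group, 0) + 1
            hits[group] = hits.get(group, 0) + int(selected)
        rates = {g: hits[g] / totals[g] for g in totals}
        if not rates:
            return []
        top = max(rates.values())
        return [g for g, r in rates.items() if r / top < self.threshold]
```

A governance process would wire something like `flagged_groups()` into regular review, so a drifting tool surfaces as an alert instead of as a complaint or a lawsuit.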

Testing on Your Own Terms

The absence of a mandatory federal framework means organizations are largely left to design their own testing protocols. Kelly's advice on this was blunt.

"You're on your own," he said. "That's the reality."

The practical implication is that organizations need to run their own bias testing, continuously, and do it with legal protection in place. Conducting tests under attorney-client privilege allows organizations to surface problems without automatically creating discoverable evidence of those problems. 

It's an investment with a clear return, particularly for any organization using AI in hiring, where the legal exposure is highest and the documentation requirements are most demanding.

The organizations getting ahead of this have stopped assuming the vendor did the work and started treating AI governance as an operational discipline with real ownership, real testing, and real accountability.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
