Most HR leaders can tell you exactly which AI tools their organization is running. Very few can tell you what those tools are doing to each other.
When AI enters recruiting, it changes how candidates are assessed. When AI enters performance management, it changes how feedback gets generated and who delivers it. When AI enters learning and development, it shapes what skills get prioritized and for whom. These are not isolated events, but they're rarely governed as connected ones.
Leaders are making deployment decisions function by function, and almost nobody is accountable for what happens at the seams.
The people stack — recruiting, performance, L&D, benefits — is being retrofitted with AI one function at a time, each deployment managed by a different team with a different vendor and a different set of goals. The interactions between those systems rarely get monitored. That's where drift compounds.
A Thousand Flowers, No Gardener
During a panel last week at Transform in Las Vegas, the focus was on what actually changes when AI enters the people function. A recurring tension surfaced: organizations are experimenting broadly but governing narrowly.
Jevan Soo Lenox, Chief People Officer at Writer, a high-growth AI company, described what many leaders are quietly living through.
We’ve been in a world where we let 1,000 flowers bloom. Try lots of things, do lots of tools, do a hackathon every other quarter. That creates a great culture of experimentation. But it’s really insufficient for massive ROI.
The problem isn't the experimentation. The problem is the absence of visibility into what's actually winning and what's creating inconsistency across the organization.
That inconsistency is structural. When an AI tool in recruiting is scoring candidates against a competency model, and a separate AI tool in performance is generating development recommendations against a different set of criteria, and a third tool in L&D is surfacing learning content based on yet another data signal, the organization ends up with three versions of what good looks like. Nobody designed that outcome. Nobody is watching for it.
Giovanni Luperti, CEO at Humaans, who works with enterprise customers deploying HR agents at scale, framed the core distinction driving many of these decisions: decision augmentation versus decision substitution.
For repetitive, deterministic work — onboarding coordination, scheduling, answering policy questions — agents can run the process reliably and improve over time. But for anything requiring judgment, the human has to stay in the loop, not as a rubber stamp, but as the actual decision-maker.
"There's often not a right answer," Luperti said. "And so it becomes a decision augmentation piece."
The problem is that organizations aren't always making that distinction deliberately. AI gets layered into a workflow because it's available, because a vendor made a compelling pitch, because a team was moving fast.
The threshold of how much AI autonomy is appropriate for a given decision gets set implicitly, not intentionally. And when AI is touching recruiting and performance and development at the same time, those implicit thresholds start interacting with each other in ways that are very hard to audit.
The Foundation Underneath
The data layer makes this worse before it makes it better. Lenox was direct about it:
If you're not building from a great, consistent knowledge base, a data layer you can access and build on, then everything else is gonna break.
The people function has historically been one of the worst data environments in any organization: fragmented systems, inconsistent definitions, years of manual process. Deploying AI on top of that doesn't clean the foundation. It scales whatever's already there.
Kit Krugman, SVP of People and Culture at Foursquare, was blunt about what the moment actually requires.
The people function has always had a challenge having that strategic seat at the table, and AI is a revolution in what we could achieve. An orchestration layer is one of the most powerful disruptions we’ll see in the space. But you need the baseline operational layer to fit, so we need to rethink the whole operating model.
That's the part most organizations are skipping. They're deploying tools without rethinking the operating model underneath them.
The orchestration layer Krugman described — the thing that would actually coordinate AI activity across the people stack and surface what the system is doing in aggregate — doesn't exist in most companies yet. Instead, what exists is a collection of point solutions that report to different leaders, run on different data, and optimize for different outcomes.
Lenox, who has had the advantage of building Writer's people function from scratch rather than retrofitting one, described his approach to role design in that context: pause, map the actual need, and then build toward it. That patience is exactly what most organizations can't afford to skip when deploying AI across interconnected functions.
Moving with intention means something specific here: identifying where AI can drive a measurable outcome, building from a clean data layer, and establishing governance before scaling. Most organizations are skipping that sequence, not because they don't know better, but because the tools are accessible enough that the pressure to move fast outweighs the discipline to move deliberately.
What that produces is a people stack that looks modern from the outside and is inconsistent at its core. When AI is making recommendations across recruiting, performance, and development simultaneously, with no shared logic and no one watching the interactions, the system doesn't fail in obvious ways. It drifts slowly in a direction nobody chose.
