Key Takeaways

AI Ownership: No clear owner for AI literacy; it's often owned by everyone and consequently by no one.

Governance Gap: AI governance often lacks adequate involvement from those at the operational level.

Reskilling Need: Organizations lag in redefining roles, causing fear of obsolescence among employees.

HR Challenge: HR is expected to lead AI transformation but lacks the technical expertise and alignment.

Operational Alignment: There's an urgent need to bridge the gap between technical solutions and human adaptability.

Late afternoon at Transform, a room full of HR leaders is being asked a simple question with no simple answer: Who owns AI literacy in your organization?

The answers come in through a polling tool projected on the screen. "L&D." "IT." "Executive leadership." "Everyone." And then a big dose of honesty: "No one."

One of the session facilitators looks at the screen and says what everyone in the room already knows. 

"There's a huge gap. I don't think anyone's owning it." 

Another participant chimes in.

"Specific leaders determine everything, and that's a problem, because everyone ends up on completely different pages."

The polling data confirms what the room already feels. Among HR practitioners gathered specifically to discuss AI adoption, the most common answer to "who owns this" is practically a tie between everyone and no one.

This is not a conference session about failure. It is a session about the actual state of things. And the state of things, across the two biggest conferences on the spring run I've been on and across dozens of hours of conversation with executives, practitioners, consultants, investors, lawyers, and builders, is that the people responsible for deploying AI and the people responsible for absorbing it are not talking to each other in a sustained, structural way. When they do talk, they're often answering different questions.

When I set out on this journey, I mapped four conferences from early March to mid-April. The two most notable attract very different audiences, yet both promised answers about the future of work and the challenge it presents to leaders and organizations.

The Two Rooms

Transform, held in Las Vegas, is often dubbed "HR summer camp": a gathering of more than four thousand CHROs, chief people officers, heads of talent, and other people managing organizations through the disruption AI tools create.

HumanX, held in San Francisco two weeks later, attracted founders and platform builders, the people shipping the tools. The two audiences share a vocabulary (AI adoption, reskilling, agentic workflows), but the words do different work depending on which room you're in.

For the builders, the dominant question centers on capability: a constant examination of what these systems can do, how to unlock their full potential, how to move from AI that assists to AI that acts.

Ted Bailey, CEO of Dataminr, described his company as one that feeds real-time intelligence to the White House Situation Room and 30 international militaries. His philosophy for autonomous AI is precise: build the best possible intelligence picture, then leave the final decision with a human.

Linda Tong, CEO of Webflow, described giving every employee unlimited tokens and watching the company reconstitute itself around agents. 

Building, in this view, is no longer limited to engineers. Everybody is a builder.


For the predominantly people-focused sessions at Transform, the dominant question was different. Not what can it do, but who answers for it when it does something wrong?

Shawn McIntire, Chief Legal Officer at PEBL, speaking at Transform, put the accountability problem in four words: "Owned everywhere, accountable somewhere."

He was describing what happens when AI governance frameworks get built from the top down without reaching the ground level. The governance board meets. The usage policy gets written. And then, at two in the morning, someone makes a split-second decision that the governance board will never review. 

Your first line of defense is your people and your processes. Getting to the ground level of the people doing the work, interacting with the models, making that split-second decision the governance committee is just not going to be present for.

Shawn McIntire

Chief Legal Officer at PEBL

This is the accountability gap. And it runs directly through the space between the two rooms.

Who Answers for It

The builders have solved a version of this problem at the technical level.

Jyoti Bansal, CEO of Harness, walked through the layers between AI-generated code and a deployable product. Eight types of testing, eight types of security review, deployment verification, rollback protocols, cost optimization. Thirty to forty checks between code and production. 

"You cannot have trust without verification," he said.

On the same panel, Jeff Wang of Windsurf added that when an AI agent breaks something, it logs that failure and learns not to repeat it. The system is, at the technical level, self-correcting.

You can’t help but ask, what organization has built the equivalent of that for its human processes? And equally, who is running systematic checks on agent behavior across business operations? Who reviews the pattern of decisions an agent made last quarter, the same way an auditor reviews financial statements? 

The builder sessions had detailed answers to those questions for software pipelines. For organizational governance, the practitioner sessions revealed something closer to improvisation.

At Transform, Victoria Reimers described watching companies pour effort into governance committees and usage policies while underinvesting in the people actually doing the work. Her prescription is simple: spend ten times as much on your people as on your committee.

That sounds like a joke, but the model she described at Juniper Square, a team of twelve employees who became internal AI experts available to anyone unsure whether something was safe or scalable, is one of the few concrete architectures I've seen for closing the gap from the ground up rather than from the boardroom down.

GDPR comparisons came up more than once. In some ways, the data challenge that AI presents is not all that different and serves as a force for change and a vehicle to ask companies what they’re doing with the employee data they collect. 

"People did not know what to do [for GDPR]," McIntire said. "But it was a forcing function for companies to look at how they manage personal information."

The implication is that regulation may be what finally makes the accountability question impossible to defer. But Navrina Singh, founder and CEO of Credo AI, which has spent six years building AI governance infrastructure for Fortune 500 companies, argues organizations shouldn't wait for that forcing function. If they wait for an incident to happen and only then invest in AI governance, she noted, they'll already be irrelevant.

Florian Douetteau, CEO of Dataiku, made the cost case more bluntly. Fail on people, orchestration, or governance, and the whole agentic investment collapses. 

People will start to say, we spent lots of money there, and there is no ROI.

Florian Douetteau

CEO of Dataiku

Between the Pipeline and the People

Meanwhile, the reskilling conversation sits in a strange middle space between the two sides.

Research presented at Transform by Brandon Hall Group found that 65% of organizations are actively integrating AI into core workflows, yet fewer than 30% have meaningfully redefined roles or job structures to reflect that.

The technology is moving, but the human architecture around it is not keeping pace. Stated plainly, this describes organizations rebuilding their engines while driving, without telling most of the passengers what's happening or where they're going.

Amy Reichanadtner, Chief People Officer at Databricks, described her team's challenge in terms that stayed with me. 

"We don't want to build a jungle for them," she said, talking about deploying AI tools broadly without a coherent map for how they connect. "People need a road, not a machete."

There's a name for what employees feel in the absence of that road: FOBO, the fear of becoming obsolete. It is, according to numerous presenters, the single most common thing their teams hear when AI comes up.

The builder sessions treat adoption as primarily a solvable problem of access and incentive. Give everyone tokens. Run a company-wide challenge. Make building the expectation.

This works, demonstrably, in organizations where the culture of experimentation is already present and where employees understand their continued role is not in question. For everyone else, the question is more complicated.

Robin Daniels, Chief Business Officer at Zensai speaking at HumanX, made the point that “speed without clarity produces chaos rather than transformation.” 

The urgency coming from the builder side to adopt faster, deploy more, close the gap with competitors, lands differently in organizations where no one has established what comes next. 

Change fatigue becomes less exhausting if you know where we’re going. Ambiguity is exhausting.

Isaac Agbeshie-Noye

Director, Foundation Programs at SHRM

The Cobbler's Kids

The literacy workshop at Transform was, in its way, the most honest session at either conference. A room full of HR practitioners, gathered specifically to work through AI adoption, voted that AI literacy belongs to everyone, which in practice means it belongs to no one. 

One participant from a consumer packaged goods company said her organization had deliberately held back, let others make the mistakes first, and received a board mandate just last month to jump in. 

Another described a company where every department owned its own AI literacy because the alternative was firing people. A third pointed out that nobody would think to ask who owns financial literacy in an organization, and wondered if the framing itself was part of the problem.

What the session surfaced, without quite naming it, is that the HR function is being asked to lead an organizational transformation it is only partially equipped to understand. 

"We're seeing our organizations rolling out AI all around us," one of the facilitators said. "We don't even understand AI within our own function, and then we're being asked to help other business units adopt AI."

Matt Poepsel, VP of Talent Optimization at The Predictive Index, came to the same conclusion from a different direction.

He described the damage he'd caused early in his career as a manager focused on pushing team performance and obsessing over technical execution. He missed what his team was actually experiencing, and it was HR that helped him realize that what he was lacking was context.

"What I come to find now, all these years later, is that I'm seeing the same things play out in organizations, but it's happening at scale and at speed," he said. "And that's because HR is missing that critical context the same way I was missing it. You've heard people say we have to keep the human in the loop. Well, I say we have to find a way to keep human resources in the loop."

I’ve seen firsthand the damage I’ve personally caused, despite not intending to, when I was overly fixated on the technical aspects of the business. HR folks understand and care about the people components in a different way than the average manager who gets promoted to lead people. So it concerns me when I see HR being left out of the equation and being treated only transactionally or only for compliance when it comes to one of the biggest revolutions we’ve seen in a long time.

Matt Poepsel

VP of Talent Optimization at The Predictive Index

This is the cobbler's-kids problem, and it is endemic. HR is the function most responsible for workforce adaptation and least positioned, by training and by organizational habit, to lead a technical transformation.

Stacy Eng, former Chief Learning Officer at Chevron, described building an AI council that included the CIO, CFO, and a small working group, specifically to create governance around which ideas to pursue and which to shelve.

Without governance you have mess. We can’t boil the ocean.

Stacy Eng

Chief Learning & AI Enablement Officer

The structural argument tends to be that HR needs to be at that council alongside the CIO and CTO, not downstream from it. Rasmus Hulst, CEO of Zensai, made the same case at HumanX.

"Get the CHRO, the CTO, and the CFO in the same room," he said, and noted he has "almost never actually seen it happen."

The irony is that the builders at HumanX have, in many cases, solved this internally. Vijay Tella, CEO of Workato, described 28 agents deployed across his company's operations. Tong described Webflow's internal culture of universal building.

These are organizations where the technology function and the people function have effectively merged, where the CEO, the CHRO, and the CTO are, by necessity, in the same conversation. 

They are not useful analogies for the 50,000-person enterprise where the CHRO still hears about AI deployments after the fact.

The 40 Percent

The question of what happens to people sits underneath all of it.

Adit Jain, CEO of Leena AI, which deploys AI colleagues for enterprise HR and back-office functions, offered one of the more grounded assessments I heard across two weeks. In automating business processes, his company routinely sees 60% of the people in a given workflow become redundant.

He said it without drama, because it is happening. He described what the better clients do: roughly 20% of the displaced workers can be absorbed back into the process as human managers of AI systems, doing oversight and correction rather than the underlying task. He named the other 40% directly. "That's something we've had multiple conversations with customers about."

At Transform, a voice from a different direction. Van Jones, on the main stage, went at the headcount question without softening it. 

"People are going to rush to reduce headcount, replace people with bots, because their view is the people cost too much, the bots are cheaper," he said. "There will be some of that, it's inevitable. But everybody is going to have the bots soon."

His argument was structural rather than sentimental. Once the technology is commoditized, which it will be, the differentiator returns to people. Specifically, to the ones doing the invisible work, the empathy, the connection, the emotional labor of keeping teams functional. 

If you see them as headcount and get rid of them, in two years the teams that have the same bots as you and better people working together are going to eat your lunch.

Van Jones

Founder of Magic Labs

What neither conference quite resolved is the architecture for organizations that want to do something other than optimize their way through the transition. The builder sessions had detailed technical frameworks for making AI systems more reliable. The practitioner sessions had detailed frameworks for making employees more adaptable. 

The middle layer, the governance and accountability infrastructure that connects those two things, the organizational equivalent of Harness's 30-to-40-check quality pipeline, does not exist in most companies, and building it is nobody's assigned job.

Where Risk Lives

Kit Krugman, SVP of People and Culture at Foursquare, described the people function's perennial problem at Transform. It has always struggled to earn a strategic seat at the table. AI, she argued, is a genuine opportunity to change that, but only if HR can meet the moment operationally. 

An orchestration layer is one of the most powerful disruptions we’ll see in this space. But you need the baseline operational layer to work first.

Kit Krugman

SVP, People & Culture at Foursquare

That baseline of clean data, clear governance, defined accountability, and a workforce that understands what it's being asked to do and why is what most organizations are still trying to assemble. The builders are moving faster than that assembly can happen. The practitioners are doing the assembly under moving conditions. And the difference between those two speeds is where most of the risk lives.

For next year’s conference slate, it’s probably a good idea to get the two crowds together and bring both perspectives to the same table.

David Rice

David Rice is a longtime journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
