Key Takeaways

Governance Priority: AI adoption requires a structural approach with clear governance to avoid chaos and duplication.

HR Involvement: HR must assert itself in AI strategy, leading on workforce and learning aspects alongside technical leaders.

Manager Role: Middle management is crucial for AI transformation, needing context-aware managers to guide the process.

Ambiguity Impact: Understanding AI transitions as ongoing development, not disruptions, reduces employee burnout and change fatigue.

Data Foundation: Strong, consistent data infrastructure is essential for effective AI deployment and avoiding systemic issues.

The conference circuit has become its own kind of signal. What gets said in these rooms and on these stages is a distillation of what's happening in the modern workplace, what's being felt, and what's being predicted.

Transform 2026 in Las Vegas was no exception. Across three days of sessions, panels, and hallway conversations, a few things emerged as genuinely clarifying. Of course, there was the usual optimism-flavored keynote abstractions and pessimistic admissions, but there were also honest friction points from practitioners who are in it.

Here's what the week actually surfaced for me.

1. Governance is the unsexy work that determines whether any of this sticks.

No one comes to a conference to talk about policy documentation. And yet it kept surfacing as the load-bearing wall of AI adoption.

At a skills-first session on Day 1, Stacy Eng, former Chief Learning Officer at Chevron, described the moment her organization realized enthusiasm alone was going to create chaos. They had opened the floodgates to employees' ideas for what to do with AI, and the result was indeed a flood. Hundreds of ideas quickly led to the realization that they needed a framework to prioritize them.

The fix was structural: an AI council pulling together the CIO, CFO, and other senior stakeholders to sort what to pursue, what to shelve, and what to let go.

That experience echoed across sessions. One practitioner from a non-tech company described sitting out the early AI rush deliberately — "we wanted to let everybody else make the mistakes" — before a board mandate forced action.

Amy Reichanadtner, Chief People Officer at Databricks, described organizing her team's AI work in three explicit layers: leadership-sponsored strategic initiatives, a function-by-function process review, and a sandbox zone for open experimentation with guardrails.

Without that structure, she noted, you end up with duplicate tools solving the same problems and no way to know what's working.

The underlying problem is that incomplete policy documentation creates a weak foundation for any AI system drawing from it.

Start Picking Things Off

“Don’t let perfect get in the way of great,” Jamie Viramontes, CEO & Founder of Konnect, said. “Maybe you haven’t updated your handbook in two years, but start picking things off one at a time.”

2. HR's seat at the AI table isn't being offered. It has to be taken.

One of the clearest frustrations running through the week: HR keeps getting cut out of AI strategy, then asked to manage the fallout.

Isaac Agbeshie-Noye noted that a very low percentage of HR professionals understand their role in AI strategy.

"They've outsourced it to the CTO," he said. "We need these professionals to be clued in."

Eng made a similar point about organizational design. AI implementation is a workforce event rather than a technical deployment. The skills, the psychology, the learning architecture are HR's domain.

"HR as the ones who understand the workforce can build the learning and development piece. They should be part of the AI council along with the CIO, CTO, making sure they consider the human aspects of change."

Brandon Hall Group's 2026 HR Outlook found that 65% of organizations are actively integrating AI into core workflows. Fewer than 30% have meaningfully redefined what roles or job structures look like to reflect it. That gap is where HR either steps in or gets sidelined permanently.


3. The manager layer is where AI transformation is dying.

Matt Poepsel's session may have been the most honest of the week. He described managers operating without context, making decisions in a vacuum, and framed it not as an individual failure but as a systemic one: generic AI that doesn't know the organization, the people, or the behavioral dynamics.

"When we find managers cutting corners," he said, "it creates a vacuum and it's showing up in every stage of the employee lifecycle."

The consequences are compounding. Employees turn to ChatGPT for manager advice because trust has eroded to the point where they don't believe they'll get a straight answer from their actual manager.

Claude Silver, Chief Heart Officer at VaynerX, described promoting people into management roles and discovering the organization had assumed they'd know how to hold space for fear, uncertainty, and real conversation.

"We need to have managers that can create that," she said. "We seem to skip right over that."

Brian Elliott's framing as the emcee of the event was pointed: "AI transformation is a team sport that requires time and space for teams to train together. One AI champion embedded in a team that knows the team's goals is more powerful than training programs."

4. Ambiguity is burning people out faster than workload.

Change fatigue was cited repeatedly, but the more precise diagnosis came from Agbeshie-Noye.

The Power of Clarity

“Ambiguity is exhausting. The more employees are involved in the strategy conversation, the more they feel like they can plan and understand where they fit into the change.”

This has practical implications for how organizations communicate AI transitions. Eng described a messaging choice at Chevron where the focus was framing AI not as a rupture, but as the next chapter in an ongoing digital journey, making it feel continuous rather than disruptive.

Danny Guillory, Chief People Officer at Gametime, went further.

"Sometimes we need to do less in terms of what we're pushing and being conscious of the rhythms people are experiencing."

There's a difference between preparing your workforce for change and overwhelming them with it. The goal for those looking to manage this challenge centers on creating clarity about where things are going, not just enthusiasm about what's possible.

5. The data layer isn't a technical detail. It's the foundation.

Jevan Soo Lenox, Chief People Officer at Writer, said something that deserved more airtime: "If you're not building from a great consistent knowledge base, a data layer that you can access and build on, then everything else is gonna break."

He wasn't talking abstractly. He was describing the experience of watching organizations deploy AI tools on top of chaotic, incomplete, or inconsistent data infrastructure and wonder why nothing works as promised.

Giovanni Luperti, CEO of Humaans, made a related point about agentic AI: the systems will learn from what you give them. Feed them messy inputs and they'll systematize the mess. The onboarding process is one example he offered — highly repetitive, high potential for automation, but only if the underlying process is well-defined.

“You want to get agents that are gonna run this process all the time and they’re gonna learn how to do it better. But when you go into more complex authority components, I don’t think these systems are there.”

The question organizations should be asking before they build anything: what is the AI actually drawing from?

6. Leadership accountability for AI adoption needs to be measurable or it disappears.

Stacy Eng was direct about what it takes to move leaders from cheerleading to ownership.

Measure Leaders’ AI Ownership

“Once they agree to it, measure them on it. Have it show up in their performance reviews and look at how you’re demonstrating what we talk about. Make it so there’s no way to hide.”

Peter Beard, VP Policy and Programs at the U.S. Chamber of Commerce, added a related point about visibility. Keeping AI projects in the C-suite's line of sight — not to interfere, but to signal that attention is being paid — changes behavior throughout the organization. "That's what's going to drive change," he said.

This matters because the accountability gap is real. When AI adoption is framed as a culture initiative without teeth, it gets treated as optional. When it's embedded in how performance is evaluated, it becomes real work.

7. Recruiting signal is collapsing, and volume is not the answer.

The recruiting session with Aaron Wang, CEO of Alex.com, surfaced a problem that TA leaders are watching in real time. Application volumes have exploded — some ATS systems reporting 750 applicants per opening — while the percentage actually getting reviewed hovers around two to three percent.

AI has made applying frictionless. The cost to apply is now near zero. And increasingly, it's AI applying on behalf of candidates, sometimes at scale, sometimes deceptively.

The vendor argument is that AI screening fixes this by processing everyone. But the harder question, flagged in the session and not fully resolved, is what signal actually survives when both sides of the transaction are AI-mediated.

One in four applicants may now be fraudulent, according to Wang. That number, if accurate, reflects how broken the top of the funnel has become.

The deeper issue isn't volume. It's that the resume as a signal was already weakening before AI arrived, and AI has accelerated the collapse. Screening at scale doesn't solve for what good looks like when the inputs can't be trusted.

8. Human judgment is still the differentiator, but only if it's embedded, not assumed.

The week's most consistent thread, across sessions that otherwise had little in common, was this: humans remain better at judgment than AI, but that advantage is only meaningful if organizations deliberately preserve and develop it.

Matt Poepsel's framework was direct.

Human Judgment Embedded

“You bring the decision, AI sharpens it. You bring the judgment, AI embeds it. If AI can do it, it is by definition commoditized.”

Amy Reichanadtner pointed to two capabilities she believes will carry people through: thinking from first principles and being a builder.

June Dinitz, Head of People across the U.S. and Canada at EPAM Systems, described the shift in professional services toward "forward-deployed engineers": people who can sit inside a client's business and understand its full context before proposing technical solutions.

This means not treating judgment as a soft skill but as infrastructure, and building the conditions for it to be exercised. Luperti's framing of "decision augmentation versus decision substitution" is a useful lens. Anything deterministic, hand to the agent. Everything that involves context, nuance, or consequence, keep the human squarely in the loop.

9. The workforce entering now is experiencing something nobody older fully understands, and leaders need to listen instead of explain.

Michael Walters, CHRO at Samsung Semiconductors, said something that is good advice for everyone born before 2000.

"I have to understand that I don't know what it's like to be 25 in 2026, so I have to listen a lot."

Claude Silver described seeing younger workers come through and leave quickly, not because of the work, but because organizations aren't having honest conversations about AI's implications for their futures, their career paths, or their sense of belonging.

During a chat on the main stage on day 2, Van Jones cut to it.

“Don’t let yourselves use this technology to reduce headcount. Everyone is going to have the bots and then you’re back to human beings. You need to instead invest in the emotional labor to keep your teams together.”

Van Jones, CNN Host and Founder of Magic Labs Media

That's not a soft argument. It's a strategic one. If we treat AI adoption as purely an efficiency exercise, we are going to lose the people who will outlast the current wave of automation.

Building AI fluency and building human trust aren't competing priorities. At this moment, they're the same project.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
