Skills-First Shift: The skills-first movement emphasizes hiring based on capabilities rather than traditional degree requirements.
AI Disruption: AI is rapidly changing the nature of required skills, creating challenges for traditional assessment frameworks.
Governance Necessity: Clear governance structures are essential to manage ideas generated by AI adoption and avoid chaos.
Leadership Role: Leadership must actively participate in AI strategies to ensure successful skills integration and accountability.
HR Involvement: HR needs a crucial role in AI discussions to create effective, up-to-date skills frameworks.
The skills-first movement has been building for years. Drop degree requirements. Assess for capability. Hire for what people can actually do. Most organizations are still somewhere in the middle, trying to figure out how to make that work in practice.
The disruptive power of AI, however, is scrambling the underlying math.
That tension sat at the center of Monday's "Skills-First Strategies in the Age of AI" session at Transform 2026, where talent leaders from Chevron, the US Chamber of Commerce, and beyond wrestled with a straightforward problem: the frameworks most organizations use to assess and develop skills were built for a world that assumed skills were relatively stable. They are not.
On the podcast last year, Cole Napper shared research from Lightcast, which found that roughly a third of the skills required across job categories changed between 2019 and 2022. The pace only accelerated in the three years that followed.
Stacy Eng, former Chief Learning Officer at Chevron, put it plainly: putting critical skills in front of people "in tangible and meaningful ways" has become one of the core challenges of her work, precisely because those skills are not sitting still long enough for traditional frameworks to capture them.
The question the session kept returning to is foundational to any skills-first strategy: do we actually know which skills we should be pursuing?
Isaac Agbeshie-Noye argued that part of the problem is structural.
"We put too much pressure on managers to solve this," he said.
AI creates a real opportunity to help employees chart their own development within an organization, taking some of that load off managers who are already stretched. But that only works if the organization can first articulate what development should look like, and for many, that clarity doesn't exist yet.
One approach that surfaced in the session: embedding early-career employees as digital champions inside business units, then pairing them with senior leaders in working sessions.
The junior employees bring fresh perspective on how AI can reframe a problem while the senior leaders provide the business context that makes those ideas executable. The knowledge transfer runs in both directions. It doesn't replace a skills strategy, but it creates the conditions for one by putting people with different vantage points in direct conversation about how work is actually changing.
Our research shows that only a small percentage of HR professionals understand what their role is in AI strategy. Many have simply outsourced it to the CTO. People need to get clear about what their individual relationship to AI is going to be so that we can articulate the skills they need to have.
That outsourcing problem has a downstream effect. When HR isn't in the room where AI strategy is being set, the skills implications of those decisions don't get surfaced until they're already baked in. By then, the taxonomy is already behind.
Eng described what happened at Chevron when AI adoption took off without a clear governance structure. Employees generated hundreds of ideas. Enthusiasm was high. But with no framework for prioritizing, the energy became unwieldy.
"We had people coming up with hundreds of ideas, but how do you prioritize them?" she said.
Chevron's response was an AI council drawing together the CIO, CFO, and others to collect ideas, identify overlaps, and decide what to pursue, what to hold, and what to set aside.
"Without governance you have mess," Agbeshie-Noye added. "We can't boil the ocean."
The governance conversation and the skills conversation are more connected than they might appear. When 50 people are using 500 tools with no coordination, you don't get a skills strategy, you get a skills scramble. The data that could inform a coherent view of capability across the organization gets fragmented across platforms and use cases that don't talk to each other.
Peter Beard, VP of Policy and Programs at the US Chamber of Commerce, pointed to the accountability gap that tends to follow. C-suite visibility on AI projects matters, he argued, not to meddle, but to signal that the work is being taken seriously.
That's what's going to drive change. Without it, skills initiatives stay at the initiative level and never get embedded into how the organization actually operates.
Leadership accountability was a recurring theme. Eng made the case that leaders can't just sponsor AI adoption in name only. They need to be able to articulate why it's happening and model the behaviors they want to see.
None of that is a skills taxonomy. But it is the prerequisite for one that actually works. A framework for assessing skills in an AI environment requires organizational clarity about what AI is doing, who is responsible for managing its impact on the workforce, and what HR's role is in setting that direction. Right now, Agbeshie-Noye argues, many organizations lack all three.
"Change fatigue becomes less exhausting if you know where we're going," he said. "Ambiguity is exhausting. The more they are involved in the strategy conversation, the more they feel like they can plan and understand where they fit into the change."
That framing applies as much to the skills question as it does to change management broadly. Employees can't develop toward a target that hasn't been defined.
HR can't build a relevant skills framework without a seat at the table where AI decisions are being made. And organizations can't claim to be skills-first if the skills they're assessing for were identified before AI reshaped what the work actually requires.
