Key Takeaways

AI Workforce: The focus is shifting from AI's impact to preparing the workforce for AI-integrated roles.

Skills Needed: Coding, adaptability, and clarity are emphasized as key skills for AI readiness in the workforce.

Organizational Design: Current readiness frameworks focus on individuals rather than addressing systemic organizational design challenges.

Measurement Metrics: AI readiness should not rely solely on usage metrics. Success is determined by outcome improvements.

Urgency Issue: Urgency around AI adoption is emphasized, but precision and clear direction are often lacking.

Every major conference agenda this spring featured some variation of the same session title: what it really takes to build an AI-ready workforce.

The phrase was everywhere, a reliable signal that the industry has moved past the question of whether AI changes work and arrived at the harder question of how to prepare people for what comes next.

The real question turned out to be harder to answer than expected.


Across sessions at Transform in Las Vegas and HumanX in San Francisco this spring, speakers offered thoughtful, albeit somewhat high-level, takes on what workforce readiness looks like in an AI-accelerated environment.

Andrew Ng, Founder of DeepLearning.AI, made a provocative case that everyone, regardless of role, should learn to code.

Coursera CEO Greg Hart pointed to surging demand for critical thinking courses, up nearly 200% year over year on the platform, as evidence that organizations are starting to understand that technical fluency alone is not the answer.

Robin Daniels of Zensai argued that the foundational AI skill is clarity. In other words, if you cannot define what you want clearly enough for a person to do it, you certainly cannot define it clearly enough for an agent.

Sarah Franklin, CEO of Lattice, pushed back on the instinct to measure AI readiness through usage metrics, arguing that what matters is whether people are achieving better outcomes, not whether they're generating more tokens.

These are not empty observations. There is real thinking behind them. But taken together, they reveal something the conference circuit has not yet solved: "AI-ready" remains a directional phrase rather than an operational one.

You can attend three days of sessions and leave with a compelling sense of urgency and almost no way to assess where your organization actually stands.

That is worth examining, because the cost of mistaking inspiration for strategy is showing up in real organizations right now.

What the Conference Defines as Ready

The working definition of AI readiness that emerged across sessions has a few consistent components.

The first is skills, primarily technical. Ng's coding argument is the sharpest version of this. As AI makes software building more accessible, people who can work at that level will outperform those who cannot, regardless of their job title.

He described watching marketers, recruiters, and finance professionals who could code pulling away from peers who couldn't, and framed the gap as widening.

Hart reinforced the technical dimension through enrollment data. One person enrolled in an AI course on Coursera every four seconds in 2025, double the rate of 2024.

The second component is adaptability, defined loosely as the capacity to keep pace with change.

"The one human quality that is needed is adaptability," said Adit Jain, CEO of Leena AI, whose company builds AI colleagues for G&A and back-office functions.

He was describing what he sees when enterprises deploy automation. The people who survive the transition aren't necessarily the most senior or the most technically sophisticated. They're the ones willing to redefine their role around supervising and improving the AI rather than defending the work it displaced.

The third is something that resists a clean label. Franklin called it treating AI as a teammate, not a tool. Daniels called it clarity. Bianca Anghelina, CEO of AILY Labs, described it as the capacity to translate business problems into AI-addressable challenges.

Ng's coding argument, at its core, is really that coding is a vehicle for learning to think in systems, to decompose problems into components an automated process can handle.

Underneath these three components is an implied fourth, the willingness to keep learning indefinitely. Every speaker eventually landed here. The lifelong learning imperative is less a definition of readiness than an acknowledgment that no fixed skill set will remain sufficient.

As a description of the direction an organization needs to move, this is reasonable. As a framework for assessing where a given organization stands, it is nearly useless. "Be adaptable" and "keep learning" are not evaluable. They don't tell a CHRO what to measure, what to develop, or what a passing score looks like.

Additionally, what's needed as you start this process is rarely mentioned, though we have covered it on the podcast.

Where the Definition Breaks Down

The most useful moment in a HumanX session on reskilling at scale came not from a framework but from a number. Jain described what typically happens when his company automates a business process for a large enterprise customer: 60% of the people doing that work are no longer required.

Of that displaced 60%, roughly a fifth, approximately 12% of the total workforce, can be absorbed back into the process as human managers of the AI. The other 48% represent a transition problem that most organizations have not solved.
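Jain's figures can be sanity-checked with simple arithmetic. A minimal sketch of the headcount split he described (the function and field names here are mine, chosen for illustration, not terms from the talk):

```python
# Illustrative arithmetic only: 60% displaced by automation, roughly a
# fifth of those absorbed back as AI managers, the rest unresolved.
def redeployment_split(headcount: int) -> dict:
    displaced = round(headcount * 0.60)   # no longer required after automation
    absorbed = round(displaced * 0.20)    # re-absorbed as human managers of the AI
    unresolved = displaced - absorbed     # the unsolved transition problem
    return {
        "displaced": displaced,
        "absorbed_as_ai_managers": absorbed,
        "transition_problem": unresolved,
    }

print(redeployment_split(100))
# {'displaced': 60, 'absorbed_as_ai_managers': 12, 'transition_problem': 48}
```

Run against a hypothetical 100-person process, the split lands on the 60/12/48 numbers in the text, which is why the "other 48%" refers to the displaced group, not the total.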

He offered a real example of a healthcare customer redirecting HR ops staff, freed from routine ticket handling, into building and managing agents that do patient data compliance work at scale. Work that genuinely hadn't been done before because it was too resource-intensive to do with people.

The redeployment required that those employees could understand the new work, acquire new skills quickly, and accept a fundamentally different relationship with their job function.

This is where the conference definition of readiness runs into its structural problem. Readiness, as most sessions frame it, is a property of individuals: their technical skills, their adaptability, their willingness to learn.

But the 48% problem is not primarily an individual problem. It is an organizational design problem. Whether those employees can transition depends not just on their personal adaptability but on whether their organization has identified where they might go, what those roles require, what development investments will get them there, and how long the organization can sustain that transition before economic pressure pushes it toward simpler solutions.

"AI-ready" as a workforce concept does not address any of this, at least not in the way we think of it today. It focuses on the capability side of the equation while leaving the deployment side largely unexamined.

Daniels made a pointed observation about the Oracle layoffs earlier this year.

There’s a severe lack of courage at the top to go and reskill a workforce. What are you doing if you’re sitting on boatloads of cash and you don’t take the time to reskill or retrain your employees?

— Robin Daniels, Chief Business Officer at Zensai

The framing was about executive courage. But what's being described is also an organizational capability failure. Oracle almost certainly had people who were adaptable and willing to learn. What it apparently lacked was a system for identifying who those people were, where they could go, and how to develop them at the speed the transition required.

Courage matters. But courage without a diagnostic is not a strategy.

The Measurement Problem

Franklin's point about measuring AI readiness through usage metrics deserves more weight than the conference setting gave it. She was describing something organizations get wrong regularly. They track adoption, tool logins, prompt volume, and token usage, and conclude they are making progress.

The measurement is clean and easy to report upward. It is also easily gamed and only loosely correlated with the thing that actually matters.

Measuring just the usage of AI is missing the forest through the trees. You need to still measure what people are able to do, because we deeply believe that AI needs to be here in service of the success of people.

— Sarah Franklin, CEO of Lattice

This matters especially in workforce development contexts, because the instinct to measure adoption is already shaping how some organizations structure their AI training requirements. Tie AI usage to performance reviews, and you will get AI usage. You will not necessarily get better decisions, faster work, or more capable employees.

Jain described a version of this problem from the other direction: when he tried a soft approach to reskilling, asking employees to work through a course at their own pace over three months, only two of thirty did.

When he reran the same program with a 90-day deadline, a formal exam, and a pay bump for passing, more than 85% completed it. The motivational finding is interesting. But the more important implication is about what the organization was actually measuring.

Completion and compensation were trackable. Whether the people who passed were materially more capable of doing the new work was a different question.

What would make this a real assessment rather than a checkbox? That is the question the conference sessions could not fully answer.


Testing the Conference Definition Against a Harder Standard

The seven capabilities that reliably distinguish organizations succeeding at AI transformation from those generating activity without outcomes are: empathy, presence, product thinking, courage, strategic patience, transparency, and systems thinking.

They show up consistently in the research. They are also, notably, almost entirely about leadership behavior rather than employee skill sets.

Running the conference sessions against each one produces an uneven picture.

Empathy

Franklin's case for AI in performance management rested on a genuine insight about how people experience evaluation: surfacing evidence-based information before the human conversation happens removes the judgment dynamic and changes what people are willing to hear.

Her description of employees being "more comfortable" when they understand what the AI is drawing from trusted data rather than a manager's subjective impression is a real observation about how individuals experience change differently.

Coursera's finding that learners who engage with their AI coach have materially better outcomes than those who don't points in the same direction. Different people need different scaffolding, and the organizations paying attention to that are seeing better results.

The sessions didn't offer a diagnostic for assessing organizational empathy, but the directional understanding was present.

Product thinking

Anghelina's boot camp model is the clearest example: employees are given a working agent framework and a concrete business challenge, not an abstract skills curriculum and not a theory of AI capability.

Optimize marketing spend by 10%. The design choice is significant. She's treating workforce development the way a product team treats a problem: start with a real user need, build something that addresses it, iterate from there.

Jain's healthcare redeployment example follows the same logic. The freed HR ops capacity wasn't pointed at generic reskilling, it was pointed at a specific compliance problem the organization had never been able to address.

These are product thinking in practice. But neither speaker framed it as such, which means leaders watching the sessions are likely to absorb the example without extracting the principle.

Courage

This surfaced once, clearly, and then the conversation moved on. Daniels' Oracle critique was sharp: an organization with the financial resources to reskill its workforce chose not to, and he named that as a failure of executive will. The framing landed.

What it didn't produce was any discussion of how organizations build the institutional courage to absorb short-term cost for long-term workforce capability. The gap between naming the problem and developing the leadership capability to address it was left open.

Transparency

This one was implicit throughout but rarely examined directly. Franklin's comments about building organizational trust as a prerequisite for AI in performance management came closest.

But no session addressed what honest communication looks like when an organization is genuinely uncertain about which roles survive an AI transition, or how leaders can be truthful about that uncertainty without accelerating the fear that undermines adoption.

Research from Boston Consulting Group and Columbia Business School documenting the 51-percentage-point gap between what executives think employees understand about AI strategy and what employees actually understand didn't come up. It should have. That gap is not a communication problem. It's a transparency problem, and it's large enough to derail implementations that are otherwise technically sound.

Presence

In an era known for powerful people lacking self-awareness, presence was, fittingly, almost entirely absent. The sessions were pitched at a strategic level, and the question of whether leaders understand how work actually happens in their organizations, close enough to know what an AI deployment will actually change for the people doing the work, went unexamined.

Jain's numbers imply it: if 60% of the people in a business process become unnecessary after automation, someone at the top had to have a clear picture of what those people were actually doing and where equivalent work might exist.

But how organizations develop that operational closeness, especially in large enterprises where the silicon ceiling between executive teams and frontline work is already documented, was not part of the conversation.

Strategic patience

The conference's dominant register was urgency. Move now or fall behind. Pilots are over, it's time to deploy. Anghelina's one-day integration claim. Jain's warning that a CEO who doesn't move fast enough will face analysts asking why competitors have better margins.

Robin Daniels' closing corrective, "speed plus uncertainty is just chaos, take a breath," was the only counter-pressure in any of the sessions, and it arrived in the final thirty seconds before time ran out.

The research on what actually produces sustained AI adoption points in a different direction: 79% of employees who received more than five hours of AI training became regular users, compared with 67% of those who received less. The J-curve is real.

The capability building that produces durable results takes longer than conference urgency acknowledges, and the organizational cost of abandoning initiatives before they mature (S&P Global puts the abandonment rate at 42% of AI projects before they reach production) is not captured by "move fast or get left behind" framing.

Systems thinking

No session substantively addressed how AI deployment in one function creates downstream effects in others. Things like:

  • How workflow redesign in HR ops changes what customer service needs to be able to do
  • How automating loan approval changes the product management motion for the entire lending function
  • How agent deployment in one department surfaces governance questions that affect every department.

Ng's loan approval example actually contained the seed of this, describing how top-down workflow redesign unlocks new products rather than just incremental efficiency, but the systems-level implications for how organizations need to be structured to see and act on those opportunities weren't developed.

The BCG finding that 52% of AI-successful organizations now use cross-functional teams of business and technology leaders, up from 5% a year earlier, suggests this is one of the more consequential structural shifts underway. The sessions treated it as assumed background rather than active challenge.

The pattern across all seven capabilities? Leaders seem strongest on the capabilities that look most like individual skill development (empathy, product thinking) and weakest on those that require organizational redesign (systems thinking, strategic patience, presence).

That imbalance reflects something real about where the conversation is around AI right now. Individual skill development is tractable. It has vendors, curricula, and completion metrics. Organizational redesign is slower, more politically complicated, and harder to package into a 45-minute session.

So the conversation gravitates toward what it can describe concretely, and we call the result a readiness framework.

What Actionable Requires

The distinction between aspirational and actionable readiness is not about pessimism versus optimism. Both Ng and Hart were clear that the organizations doing this well are making real progress.

Hart described a large professional services firm that upskilled 5,000 employees into specialized AI roles and a global technology company where Coursera engagement correlated with a 50% improvement in retention. These are outcomes, not just intentions.

What those organizations share, from the descriptions available, is that they started from a specific business problem rather than a generic readiness goal. The professional services firm was trying to deliver better outcomes for clients in AI-enabled engagements. The technology company was trying to retain employees who were becoming more effective. In both cases, the skill development was anchored to something concrete enough to evaluate.

Anghelina made this point most directly, though in the context of those boot camps and the goal to optimize marketing spend to generate 10% more sales. The mechanism is not abstract. The outcome is measurable. The employee's job is to figure out how to apply AI to get there.

When users see the impact of AI on the business, they get more creative. The skills develop because the problem is specific enough to develop against.

— Bianca Anghelina, Founder of AILY Labs

This is where a lot of reskilling efforts are getting stuck. They start with skills because skills are legible, trainable, and trackable. The Coursera enrollment numbers are real. The micro-credential completion data is real. The 91% of learners reporting positive career outcomes within six months is real. But skills divorced from organizational context drift toward the generic. And generic readiness is not readiness for anything in particular.

Franklin made the same point from the HR direction when she said HR spends approximately 60% of its time documenting what happened in the past.

The opportunity is not to make HR better at documentation. The opportunity is to free HR capacity to work on problems that actually require human judgment, problems specific enough to need someone who understands the organization, its people, and the decisions that actually drive outcomes.

A real AI readiness strategy starts there. Not with "what skills do our people need" but with "what does this organization need to be capable of doing in 18 months that it cannot do today, and what does each role need to contribute to that?"

Working backward from that question produces something evaluable. The Seven-Capability Framework exists precisely to structure that backward mapping: translating a business capability goal into the specific human capabilities required to achieve it, and assessing the distance between current state and required state with enough precision to build a development program around it.

What the Urgency Obscures

One dynamic the conference circuit reinforces, probably unintentionally, is that urgency substitutes for precision.

Every session I've attended in the last six weeks, whether in the grand ballrooms at Transform, on the dramatically lit stage at HumanX, or in workshop sessions at smaller local conferences here in Atlanta where I live, referenced the pace of change.

Several speakers noted that being three to four months behind on the latest AI coding tools, in Ng's framing, produces material performance gaps among engineers. The implicit message is that speed of adoption is the primary variable.

It is a variable. It is not the primary one.

Daniels said it most clearly, even if the surrounding conversation somewhat buried it.

Speed plus clarity is awesome. Speed plus uncertainty is just chaos.

— Robin Daniels, Chief Business Officer at Zensai

Most of what is being called AI readiness right now is being driven at speed. The urgency is real in the sense that we all feel it, and by committing to it, we perpetuate it.

But the organizations building actual competitive advantage from AI workforce development are the ones who have defined what they are building toward specifically enough that speed has somewhere to go.

The test is simple, and the conference sessions inadvertently demonstrate it. If you took the AI readiness definition offered in any of these sessions and tried to use it to build a development program for your organization, you would immediately need to answer a set of questions the definition doesn't address.

  • Which roles matter most to our specific AI strategy?
  • What does capability look like at each level, and how do we measure it?
  • What is the development sequence that gets us from current state to required state fastest?
  • What organizational changes have to happen alongside individual skill development for any of this to work?

Those are not aspirational questions. They are operational. And answering them is what separates your ability to build AI-ready workforces from your ability to run AI training programs.

The phrase has become shorthand for a goal everyone agrees on. The strategy is the work that follows from deciding precisely what that goal means.

David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
