Workflow Insight: Traditional recruiting methods favor screening a tiny percentage of applicants, leaving most unconsidered.
AI Challenge: AI tools make screening more efficient but do not assess whether candidates can actually perform the work.
Application Cost: The ease of applying for jobs has led to increased volume, complicating the filtering process.
Signal Erosion: Common hiring indicators are losing reliability, making it harder to identify qualified candidates.
Skills Focus: Adopting a skills-first approach could improve hiring, but implementation remains a significant challenge.
For years, the workflow inside most recruiting teams has followed the same steps. Post the job, review the best applications, screen the strongest candidates, send the hiring manager the top picks.
That system was long treated as a reliable way of finding the best talent for the business.
The problem, as Tim Sackett, CEO of HRUTech.com and author of The Talent Fix: A Leader's Guide to Recruiting Great Talent, laid out at Transform earlier this month, is that this is largely fiction.
When a job posting draws 250 to 300 applications (many draw significantly more), the typical recruiter reviews 25, maybe 30. That's roughly 10% of the applicant pool getting any consideration at all, and far less when volumes run higher. The rest disappear.
"We tell our hiring managers we're out here finding the best talent in the market," he said. "When in reality, a small fraction of your applicants are actually getting into the process."
That gap between what recruiting teams promise and what they are actually able to do has existed for decades. AI didn't create it. But AI has made it significantly harder to close, and the tools organizations are rushing to deploy may be measuring exactly the wrong things.
The Measurement Problem Predates the Technology
Leanne Markus, Managing Director at Centranum Group and an organizational psychologist who works with clients on capability frameworks and competency-based assessment, draws a distinction that most hiring processes collapse: the difference between qualifications, knowledge, and competence.
Qualifications — education, certifications, training — indicate preparation and potential. Knowledge testing can indicate understanding. Neither reliably demonstrates the ability to perform in an actual work setting.
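To make the distinction concrete, here is a minimal sketch of how a screening system could track the three categories separately. The record type and field names are hypothetical, not drawn from any tool Markus describes.

```python
# Hypothetical record type separating the three categories Markus
# distinguishes; most ATS schemas collapse these into one "profile".
from dataclasses import dataclass, field

@dataclass
class CandidateEvidence:
    # Qualifications: preparation and potential (degrees, certifications).
    qualifications: list[str] = field(default_factory=list)
    # Knowledge: understanding, e.g. scores on job-specific knowledge tests.
    knowledge_scores: dict[str, float] = field(default_factory=dict)
    # Competence: demonstrated performance in context (validated work
    # samples, task history) -- the category resumes never capture.
    competence_evidence: list[str] = field(default_factory=list)
```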
"Decades of research in psychology shows that job-specific knowledge, and the ability to apply it appropriately in context, is the single biggest predictor of job performance," says Markus.
The resume, by design, measures the first category and only gestures at the second. It has never measured the third.
Organizations have historically accepted this tradeoff because the resume was at least a practical filtering device — a way to move from hundreds of applications to a manageable conversation, with the assumption that better signal would emerge in the interview.
That assumption is now breaking down at both ends.
What AI is Screening
When AI enters the hiring funnel, its advocates typically frame it as a solution to the volume problem. More candidates reviewed, faster, with less human bias introduced in the early stages. What it doesn't change is what's being measured.
"AI makes screening more efficient, but it doesn't change what's being measured," Markus says. "Most screening approaches, including AI-driven interviews, are response-based. It's all self-reported information. AI can speed up processing, but it is essentially evaluating how well a candidate can describe the work, not whether they can actually do the job successfully."
This matters more than it initially appears, because the training data underlying most AI hiring tools compounds the problem. These systems learn by correlating application inputs and hiring decisions with performance ratings. But formal performance evaluations bear little relationship to actual on-the-job performance. The AI is learning to replicate a signal that was already unreliable.
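A toy simulation makes the compounding effect visible. This is an illustrative sketch on synthetic data, not a reconstruction of any vendor's model: a screener fit to performance ratings that reward polish learns to predict the ratings, not the underlying competence.

```python
# Illustrative simulation with synthetic data (not any vendor's system):
# a screener trained on formal performance ratings learns whatever the
# ratings reward -- here, polish in describing work -- not competence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

competence = rng.normal(size=n)  # true on-the-job performance (unobserved)
polish = rng.normal(size=n)      # skill at describing work convincingly

# Resume and interview features mostly capture polish.
X = (polish + 0.1 * rng.normal(size=n)).reshape(-1, 1)

# Formal ratings reward polish far more than competence: an unreliable label.
ratings = (0.2 * competence + 1.0 * polish + 0.5 * rng.normal(size=n)) > 0

model = LogisticRegression().fit(X, ratings)
print("accuracy vs. ratings:   ", round(model.score(X, ratings), 2))        # high
print("accuracy vs. competence:", round(model.score(X, competence > 0), 2))  # ~chance
```

Under these assumptions, the model scores well against the ratings it was trained on while tracking actual competence at roughly chance level, which is the failure mode described above.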
The result is a system that rewards candidates who can describe work convincingly, which, in 2026, means candidates who can use AI to describe work convincingly.
"AI can reinforce patterns that are consistent," Markus says. "But this doesn't mean they are valid. In a black box environment, it is not clear how the various factors are weighted."
The Cost of Applying Fell to Zero
The scale of what's happened on the candidate side is still not fully absorbed by most recruiting operations. Aaron Wang, CEO of Alex AI, made the structural point plainly at Transform: the market cost to apply for a job has fallen to near zero.
A candidate with access to any major AI tool can apply to hundreds of jobs overnight, with each application customized to the specific job description, the hiring manager's background, and the employer's stated values.
The same tools can coach candidates through whatever screening format they encounter on the other side, including AI interviews. The funnel that once filtered by effort now filters almost exclusively by volume.
Wang framed this as an escalation that neither side can step away from. He described a near-future scenario where candidate AI agents negotiate and apply for jobs autonomously, meeting employer AI systems on the other side.
"That future is not only possible, it may be likely," he said. What such a system would actually be measuring is an open question.
For now, the practical effect is that the application has become even less trustworthy as a signal. Candidates who are the strongest fit for a role may not be the ones who look best on paper, particularly when the paper has been optimized to beat the ATS.
Performed Signals
The erosion of reliable signal isn't limited to applications. Sabra Sciolaro, Chief People Officer at FirstUp, points to a parallel breakdown inside organizations that has direct implications for how hiring signals should be understood.
In a recent FirstUp study of more than 3,000 U.S. and Canadian workers, between 75% and 89% of employees across roles reported being engaged. Yet between 40% and 46% of those same employees said they were actively considering leaving their jobs this year.
Engagement, like enthusiasm in an interview, is a signal that can be performed. It doesn't predict outcomes.
Many of the signals companies have relied on are easy to perform but don’t predict outcomes like retention or performance. The signals that correlate more closely with actual performance involve how people navigate complex information and how quickly they adapt when priorities shift — traits that show up over time and in context, not in a single AI-assisted interaction.
That framing applies upstream as well. The traits that make someone a strong employee are largely the same traits that matter in a candidate, and neither a resume nor a structured AI screen is well designed to surface them.
What Practitioners are Doing Instead
Ben Lamarche, General Manager at Lock Search Group, has changed where in the process he tries to get at real signal. Rather than waiting for a scheduled interview to begin genuine conversation, he pushes that interaction to day one of candidate contact.
The goal is to get out of the document review phase and into real interaction sooner, where it's much harder to rely on a polished script.
When he does review applications, his focus has shifted to what resumes reveal in the gaps between stated accomplishments.
AI can help a candidate write a strong bullet point. It cannot easily replicate the texture of a specific experience under pressure.
"When I start asking specific questions," Lamarche says, "I sense pretty quickly whether it's real or rehearsed."
He also flags a distortion that AI screening creates at scale: candidates who use AI to optimize their presentation may advance through early stages precisely because they look polished, while quieter, more capable candidates get filtered out before anyone with judgment sees them.
The signal problem isn't just noise; it's noise that systematically advantages certain presentation styles over substance.
Markus offers a more structured alternative, one she describes as a shift from predicting performance to requiring evidence of it.
The sequence she uses with clients moves through five stages:

1. Baseline filtering on qualifications and regulatory requirements.
2. Job-specific knowledge testing.
3. Structured evidence of prior applied competence, through validated work examples and task history.
4. Targeted questioning that validates that evidence.
5. In-role confirmation during onboarding and probation.
That last stage deserves more weight than most organizations give it. If the goal is validating competence in actual context, the probationary period is more than a formality: it's the most reliable assessment stage available, and most companies have stopped using it as one.
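As a rough illustration, here is a minimal sketch of that staged sequence as a filtering pipeline. The stage names follow the sequence above; the candidate fields and pass/fail checks are hypothetical placeholders, not Markus's actual instruments.

```python
# Minimal sketch of the five-stage sequence as a filter pipeline.
# Candidate fields and thresholds are hypothetical placeholders.
from typing import Callable

Candidate = dict  # e.g. {"meets_requirements": True, "knowledge_score": 0.8}

STAGES: list[tuple[str, Callable[[Candidate], bool]]] = [
    ("baseline filter", lambda c: c.get("meets_requirements", False)),
    ("knowledge test", lambda c: c.get("knowledge_score", 0.0) >= 0.7),
    ("applied evidence", lambda c: bool(c.get("work_samples"))),
    ("evidence interview", lambda c: c.get("evidence_validated", False)),
    ("probation review", lambda c: c.get("in_role_confirmed", False)),
]

def advance(candidate: Candidate) -> str:
    """Return the first stage the candidate fails, or 'hired' if all pass."""
    for name, check in STAGES:
        if not check(candidate):
            return name
    return "hired"

# A candidate with strong paperwork but no validated work samples stops
# exactly where resume-based screening would have waved them through.
print(advance({"meets_requirements": True, "knowledge_score": 0.9}))
# -> "applied evidence"
```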
Where Skills-first Lands
The skills-first hiring movement has taken significant organizational mindshare over the past several years, in part because it addresses a genuine problem: credentials and pedigree are weak proxies for what someone can actually do. Lamarche and Markus both acknowledge the direction is right.
"It forces companies to think more clearly about what the role actually requires, rather than defaulting to credentials or pedigree," Lamarche says. "That's a positive shift."
The implementation is where it breaks down.
Many implementations still rely on very generic skills taxonomies and self-declaration of skills. Skills-first hiring doesn’t help unless there is a clear link between specific role responsibilities, the competencies needed to deliver on them, and a means to validate those at the level of actual tasks. Most organizations haven’t built that link. They’ve replaced one set of weak proxies with another and called it progress.
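For contrast, here is a minimal sketch of what that link could look like as a data structure, with the role, responsibilities, and validation tasks invented purely for illustration: each responsibility maps to competencies, and each competency to a task-level check.

```python
# Hypothetical structure linking role responsibilities to competencies
# and task-level validation; all entries are invented for illustration.
from dataclasses import dataclass

@dataclass
class Competency:
    name: str
    validation_task: str  # how it is verified at the level of actual work

@dataclass
class Responsibility:
    description: str
    competencies: list[Competency]

financial_controller = [
    Responsibility(
        description="Close the quarterly books",
        competencies=[
            Competency(
                name="accrual accounting",
                validation_task="reconcile a sample ledger with seeded errors",
            ),
        ],
    ),
]
```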
The volume problem has no clean answer. Thousands of applicants cannot realistically be shortlisted through simulations or on-the-job trials. Something has to filter the funnel first, and right now that something is either human reviewers looking at a small percentage of applications or AI tools evaluating language patterns. Neither is measuring what hiring is actually supposed to measure.
Moving high-signal evaluation earlier, building assessment frameworks specific to the role, treating the document review phase as the low-signal step it is — none of that is technically difficult.
It's just slower and more deliberate than posting a job and letting an ATS sort the results. In an environment where speed has become the default metric for recruiting efficiency, deliberate is a hard sell. It's also, at the moment, the only way to find out whether the person you're hiring can actually do the job.
