When turnover is low, leadership loves to call it stability. Jay Caldwell makes the more uncomfortable point: sometimes it is just fear with better optics. In this conversation, he and David unpack why “quiet staying” can become a serious organizational liability in an AI era—especially when people are still hitting goals, still showing up, and still slowly draining the place of experimentation, risk-taking, and fresh thinking.
They also get into the deeper workforce consequences of AI adoption: why broad rollouts often create anxiety instead of momentum, why the most AI-engaged employees may be the most likely to leave, and why cutting entry-level hiring might solve a short-term budget problem while quietly wrecking your future talent pipeline.
What You’ll Learn
- Why low turnover can mask stagnation, fear, and declining innovation
- Why AI adoption is a culture and workflow change problem, not just a software rollout
- How general-purpose AI and targeted AI use cases create very different employee experiences
- Why heavy AI users may become more likely to look elsewhere for growth
- How reduced entry-level hiring can weaken long-term leadership and capability pipelines
- Why hiring for curiosity, adaptability, and judgment requires more than updating job descriptions
- How performance management may need to shift from measuring execution to evaluating judgment
Key Takeaways
- “If you build it, they will come” is not an AI strategy. Leaders may be excited about AI, but employees are not adopting it at the same pace. Turning on the tool is the easy part; changing habits, confidence, and workflows is the real work.
- Low turnover is not the same thing as a healthy workforce. People can stay, hit targets, avoid trouble, and still stop contributing real energy. That is not retention as a win. It is stagnation with a nice dashboard.
- Quiet staying can be more expensive than quitting. When people stay mainly for safety, they are less likely to experiment or take risks. In an environment that depends on adaptation, that creates a compliance culture right when you need invention.
- AI power users may outgrow slow-moving organizations. As employees get better with AI, they start to see what faster, better work could look like. If the organization keeps moving at half-speed, that gap between potential and reality becomes hard to ignore.
- Targeted use cases build confidence faster than vague enterprise rollouts. A broad AI tool can feel like handing everyone a piano and expecting a symphony. Targeted use cases give employees a clearer win, a clearer purpose, and a more measurable outcome.
- Entry-level hiring still matters, even if AI compresses basic work. Remove the bottom rung from the ladder and people fall. Early-career talent does not just fill tasks; they bring fresh ways of thinking, including AI-native instincts that many organizations still lack.
- Skills-based hiring requires system change, not slogan change. Dropping degree requirements means very little if the rest of the hiring and promotion system still runs on pedigree. New language on paper does not fix old infrastructure.
- Judgment is becoming the real differentiator. If AI handles more of the middle, the human value shifts to the bookends: asking better questions and knowing when the answer is wrong. That is harder to score, but it is where the work is going.
Chapters
- 00:00 — Quiet Staying
- 01:30 — AI Adoption Myths
- 03:03 — AI Is a Culture Shift
- 06:12 — Why AI Users Leave
- 10:50 — Faster Work, More Burnout
- 11:36 — Hidden Risks of Low Turnover
- 15:18 — General vs. Targeted AI
- 19:30 — Building Confidence With AI
- 20:53 — Entry-Level Hiring Crunch
- 21:33 — Rise of AI-Native Graduates
- 25:55 — Hiring for Curiosity
- 28:35 — Testing Creativity
- 30:23 — Skills-Based Hiring Reality
- 32:13 — Developing Adaptability
- 32:56 — From Execution to Judgment
- 37:46 — Experience vs. AI Output
- 38:31 — Measuring AI-Era Performance
- 39:08 — Why Judgment Matters
Meet Our Guest

Jay Caldwell is the Chief Talent Officer at ADP, where he leads enterprise-wide talent and people strategies designed to strengthen leadership capability, employee engagement, and workforce development across the organization. With more than a decade of leadership experience at ADP, he has held several senior HR roles—including Division Vice President of Human Resources and Vice President of Talent Solutions—helping shape global talent programs and performance practices. Earlier in his career, Jay worked in organizational effectiveness and leadership consulting, and he holds a master’s degree in Industrial/Organizational Psychology from the University of New Haven and a bachelor’s degree in Psychology from Quinnipiac University.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Check out this episode’s sponsor: Intuit QuickBooks Payroll
- Connect with Jay:
Related articles and podcasts:
David Rice: Your turnover numbers look great. People are staying, engagement is stable, and leadership is celebrating. And you're building the most expensive problem that you've ever had because staying is not the same thing as contributing. And right now your workforce is quietly staying, showing up, hitting their numbers, trying not to cause any trouble. They look engaged on paper, but they're not experimenting. They're not pushing boundaries, they're not taking risks. They're preserving themselves.
Today's guest is Jay Caldwell. He is the Chief Talent Officer at ADP. And we're gonna dig into why low turnover in this environment might actually be a warning sign rather than a win. Because here's the thing that we all have to keep in mind—the people who feel safest are taking the fewest risks. And in this environment, not taking risks is probably the riskiest thing that you can do.
So today we're gonna cover why "if you build it, they will come" doesn't work for AI adoption, the quiet staying problem and how compliance mindsets might be replacing innovation, how to tell if low turnover is masking stagnation and fear, and general-purpose AI versus targeted use cases and which one builds confidence. We're gonna look at the career ladder crisis and why AI-native graduates are about to challenge everything that you're doing.
I'm David Rice. This is People Managing People. And if your turnover numbers look healthy, but innovation feels dead, this conversation explains exactly what's happening underneath the surface. Let's go.
All right, Jay, welcome to the show.
Jay Caldwell: Thank you, David. Happy to be here.
David Rice: I wanted to kind of start the conversation around, like, we're a few years into this AI wave now, and I think the initial excitement a lot of people had, right? It's given way to real-world consequences. I think we're one day after the Block layoff announcement.
I'm curious though, what's the biggest misconception in your opinion, that leaders are still holding onto about integrating AI into their workforce?
Jay Caldwell: There's probably a lot of misconceptions. I'd love to hear some of yours as well, but I think one of the biggest I still see in many organizations is that everyone is as equally excited about AI as all the leaders are.
You know, I think what the recent statistics show is that, even in the US, approximately 40% of people are using AI in their home lives, right? Even outside of work. So it's still relatively early in terms of AI adoption, although 40% is quite a bit. I think there's, as usual with enterprise tools, a little bit of a perception that, you know, if we build it, everybody's gonna come and everybody's gonna use it and they're gonna love it as much as we do.
But I don't think that's the case. Certainly not yet. You might have a lot of really interested and excited adopters. You'll probably have just as many that are completely resistant to adopting things like AI, and then most of your people somewhere in the middle. So I don't think it's as easy as, you know, if you turn it on, everyone will come. You need to spend a lot of time and energy on how you lead this type of change and transformation for your organization.
David Rice: Yeah, that definitely resonates. I think from our perspective, you know, the one that we talk about all the time is that leaders are treating it essentially like a tool that you deploy, like a software. But I always kind of come back to something I read, that it's more like introducing a new colleague into every team simultaneously.
So it's gonna change dynamics, power structures, who feels competent, who feels threatened. So it's really more of a culture project. I don't think people are against AI. Some are, but on an ethical level. But I think a lot of people are curious about it, but they just have not been told a story where it's clear where they fit in the future and that's a gap that the C-Suite narrative, you know, that whole AI will free you up for higher value work.
And what people actually experience, that's often out of alignment. We really are getting to a place now where, in terms of how people are experiencing work, it's really just a faster pace. There's less clarity about what good is now. So it's created a real tension there.
And I think the other misconception that I see all the time is that adoption essentially is integration. There's a perception that you roll out the tools and people already know how to use 'em, and it's great, you know. But really, I think we've gotta rewire how people think about their work.
Like what their value is and what their career trajectory looks like. That's really the job now of a lot of leaders and most orgs haven't kind of hit it.
Jay Caldwell: I very much agree. I mean, the other stat, just on general AI usage, is that most people use it as a search engine, right? By the way, it's a really good search engine.
David Rice: Yeah. It is. It is.
Jay Caldwell: I don't wanna disparage that usage, but I don't think that's what everyone's hoping we use AI for. So yeah, I think as you mentioned, culture change. Also, just even at the individual level, it's habit changing. It's great that I have these tools, but for example, I do a lot of my work in Outlook, like I'm sure a lot of corporate employees do.
It takes me a moment to say, wait a minute, before I jump into this email, maybe I should spend some time going over to the AI tool and thinking a little bit more creatively about what I wanna say or how I wanna say it. That's a habit shift that not everyone just gravitates towards immediately.
So you're right, it's a culture, skillset, mindset. There's a lot to it.
David Rice: Sometimes I'm like, oh, I guess I should ask it that. Let's just see what it says. Make sure I'm still…
Jay Caldwell: I love the analogy you used. It's like you have a new colleague on the team, but sometimes I forget that colleague's even here, so I gotta remind myself: if we have this new expertise on the team, I should go tap that person.
David Rice: Maybe that's why I read somewhere recently that remote workers are more, you know, naturally inclined to use it. Maybe that's because physical presence isn't something that they've been relying on.
Jay Caldwell: Yeah. Yeah. I worked from home for about five years, and I think just having a buddy was also helpful. That's probably a good use case as well, where some of the companionship use cases of AI are probably helping people that feel a little bit more physically isolated.
David Rice: You shared that heavier AI users, they may be more likely to explore other opportunities. And on the surface that sounds like a simple supply and demand story. They've built this valuable skill, so they have options. Right. But I'm wondering if something deeper is happening. Does working closely with AI change how people see their own potential or what they expect from their employer?
Jay Caldwell: It is interesting. Yeah. We at ADP saw some of that in our own research around retention risk and AI adoption, and it is really fascinating to think through all the layers of why that might be. I think, to your point, one is definitely just that you've got the new hot skill that you're developing, and where else might you be able to apply that?
How does that help your employability? I definitely think that's a big chunk, but I agree, there's more to that story as well when you start to unpack it. As an example, your power users of AI might actually start to feel constrained by the organization in terms of its pace and how fast the work process around that individual is or is not changing, right?
Like, you might be able to see the future and the opportunity and the ability to change work process faster, but if you're just one piece of the puzzle, it takes more than just yourself to be able to do so. So some of it is frustration: I'm not sure my organization is moving fast enough here, and I could get a lot more done somewhere else.
That could be it as well. And some of it also might just be, you know, fear of the unknown. As people get their hands on AI, if an organization doesn't communicate the purpose of AI really well, I think that can create anxiety. Maybe I start to fear for my own employment at this company because of the power I can see these AI tools actually have.
And then maybe it's more of a survival mode, David. It's not just that I have a hot skill, it's that I wanna remain gainfully employed, and I'm not sure that's gonna be here given the tools that we have. So again, I think a lot of that goes back to what I was saying around the change management you put around these deployments.
David Rice: It feels like a bit of an uncomfortable combination, right? From a talent perspective. The folks who are really engaged with AI, they're not disengaged with their jobs, but they are the people shopping around, right? Because maybe they've outgrown their role, or in some cases they've outgrown the org.
If you're working with AI all the time, it's almost like a consciousness-expanding, or horizon-expanding, experience for your career. I think I've experienced this a little bit personally. You start seeing what's possible, and then you're in a company where you realize your company is moving at half the speed that you know is possible.
Then suddenly the gap between your potential and your environment becomes intolerable for some people.
Jay Caldwell: Yeah. It could be. It could be. And you know what's important to keep in mind as it relates to engagement, which leads to retention, right? Someone's willingness to stay with your organization. Generally speaking, engagement's low in the workforce.
That varies by company for sure, but generally speaking, call it 19 to 20% of the US workforce is fully engaged.
David Rice: Historic lows, I think that's what I read there.
Jay Caldwell: So does it take much to push someone over the edge in this situation, in terms of thinking about what else could be out there, how they could grow their careers differently?
No, probably not. So AI maybe is a catalyst for some of those people to really start thinking about what potential options are out there.
David Rice: Absolutely. Especially if, like you said, you're already kind of at the edge anyways. A lot of people are, and then they look around and they see a lot of bureaucracy, approval chains, maybe rigid role definitions.
You're feeling more capable with this technology, and it's almost like you're in a developmental cage now, whereas you feel like, if I go over the edge, this thing's gonna help me fly, you know?
Jay Caldwell: That's right. Yeah. You're busting outta the lid. One other possibility here, beyond the developmental cage or opportunity, is just how fast your skillset is moving and wanting to apply it in different ways. But also, some of the research we're seeing is, you know, we hope AI makes everyone more productive, for sure.
That's a goal, but we're also seeing a lot of data showing it's not necessarily giving time back. People are actually working more and harder when they have AI, because they're getting more done, they're moving at a faster pace, and their colleagues are also moving at a faster pace. So work may be getting more intense right now, which I also think increases retention risk.
David Rice: I've seen this quite a bit. People are just burning out and they're leaving, and they don't have a plan, you know, because they just feel like they can't keep up with the pace of it. I had a guest on recently and she said, I just feel like in a lot of cases we're just doing more, but it's not that it's better or that it's more productive; we're just doing more.
A lot of people seem to be feeling that way. If turnover's down but engagement isn't rising, are we looking at a workforce that's kind of just staying for safety rather than any other reason? In an AI-driven environment where experimentation and adaptability are clearly the things that matter, right?
Is this sort of quiet staying more corrosive than simply quitting, because it looks stable on paper?
Jay Caldwell: This is what worries me quite a bit right now, given what's happening with all these tools and technologies and, you know, the state of the job market for some individuals. I mean, the market is very different depending on which function or industry you're in.
But yeah, you're seeing more and more of these terms fly around, whether it's quiet quitting or job hugging or whatever they are. I think the trend is the same: if churn isn't happening in the organization, then that's gonna be a challenge. It's a different challenge from a few years ago when we had the Great Resignation, right?
Where it was all about how do we retain people as effectively as we can, which was painful for employers. But for employees, that's a great environment, even if you're not leaving your organization, because if turnover's up, that gives you more opportunity to move up in the organization or into new roles.
There's just more chances to take on different types of job responsibilities. So if those opportunities lessen because turnover in your organization is lower, then I think that creates the risk of people starting to feel a bit stagnated and stuck, or like they're not growing as much as they could.
I think the risk, to your point, is if that drives engagement lower, that's gonna drive innovation and creativity lower. And to me, the true promise of AI is innovation and creativity. So if you don't have the mental energy to drive forward with that, what's the point, really? I agree, I think there's great risk there, and I would recommend each organization in this environment not get too complacent about turnover looking low and healthy, because there's a lot more hiding under the surface right now.
David Rice: I couldn't agree more. I think we've been so obsessed with measuring retention and turnover that we've kind of forgotten to ask whether staying is the same thing as contributing. Low turnover in this environment might actually be a warning sign rather than a win.
Jay Caldwell: I agree. And you know, the value of bringing external people into your organization, bringing in new ideas, new ways of thinking, new perspectives, is also significant. So I think a lot of intentional thought around how you keep the pipelines flowing is critical in this environment, in addition to, even if people aren't moving, how you keep them really emotionally committed and contributing the most energy they possibly can in the role that they're in today.
David Rice: That's a great point, 'cause the quiet staying thing might be a really expensive problem. They show up, they don't cause trouble, they hit their numbers, and it looks good, right? But they're not experimenting. They're not pushing the boundaries of how the team works or anything. They're not taking risks.
And we know that the big reward with AI adoption really comes out of taking a bit of risk, right? But they're just kind of preserving themselves, and they're staying outta fear, whether it's fear of the job market, or fear of having to learn elsewhere, or, in the case of a lot of mid-career folks, feeling like they're starting over.
And so we're kind of building this workforce around a compliance mindset, right? It's not really innovation. I think that's one of the costs. It's sort of ironic, right? 'Cause the people who feel safest are the ones taking the fewest risks, and in this environment, not taking risks is the riskiest thing that you can do.
Jay Caldwell: Yeah, that's a really good point. When you talk about people and unpack their careers, especially successful people, you do see a lot of risk in the moves they've made, you know, the pivots they've made or going into a completely different function. And yeah, I think that's a great point. I hadn't thought as much about the personal impact of that, because it may feel riskier to make those kinds of jumps right now.
David Rice: When orgs roll out broad, sort of general-purpose AI tools, it often kind of feels like the message is, everyone, go figure this out. But targeted AI use cases tend to be more structured and outcome-driven, right?
I'm curious, in your opinion, how do the two approaches differently shape employee confidence, or learning curves, maybe even performance pressure?
Jay Caldwell: Great question. I've seen organizations approach strategies differently, right? Some started with general purpose and then went targeted later; some started targeted and then went general later.
But you're right, I think they are different strategies. I should start here: in both cases, it's about not underinvesting in the communication and training that comes with these tools. And to what we were talking about earlier, not just training on the how-to, but also the mindset change, how to approach your work differently.
And I think that's more important for the general purpose tools, because you're giving someone a Swiss Army knife. I remember having a Swiss Army knife when I was a kid. There were still probably only like three things on that tool of 20 tools that I actually knew what they were for. There's a magnifying glass, a knife, and a nail file, and there's a bunch of other things I still have no idea about.
Like a toothpick or something. It was a toothpick. And that's what general purpose is very much like. There are so many possibilities of how you can use it, so you do have to really help people understand what the possibilities are. And if you just turn it on and hope people are gonna figure that out, it's not gonna go well.
So the organizations I've seen do this effectively, they learn out loud, connecting people in similar roles to just talk about, oh, this is what I tried and it failed, so just avoid that one. Or, oh, I tried brainstorming and then launched into an application of what I found and it went great, and half the room says, I would've never even thought of that.
I think on the other side of the spectrum, when things are more targeted use cases, it is more about clarity of how this affects my role today, how I need to change my workflows, and really thinking about the AI in the flow of work. So the communication or training can be much more targeted than for the general use case. But that also means the interpretation of the impact may be much more targeted, whether good or bad.
If that targeted use case is heavily driving productivity, taking a lot of the tasks that I do and automating them, then a lot more energy and attention on caring for those individuals, and helping them understand the safety of their position, or, as we were talking about, their careers within the organization, is really important.
David Rice: Intuit QuickBooks Payroll knows that HR has so many moving pieces that could lead to chaos. It's why they sync payroll, HR, time tracking, and finance into one powerful platform. And this summer, QuickBooks Payroll is evolving into something bigger to support the entire team lifecycle. Soon, businesses will be able to onboard employees in one seamless flow that feeds directly into payroll, configure automated HR workflows for things like promotions or offboarding, track performance, time off, and benefits alongside payroll.
See what a difference QuickBooks Payroll can make for your business today and get ready for what's coming next. Learn more by going to quickbooks.com/workforce. That's quickbooks.com/workforce.
I like your Swiss Army knife analogy. Now it's got me wondering what the AI equivalent is of those little pairs of scissors, you know, the flimsy ones.
Jay Caldwell: Oh my God, that's great. The spring always broke on those. It's unbelievable. Yeah.
David Rice: The everyone-figure-it-out philosophy is kind of a way of saying, oh, we believe this technology's gonna make this big difference, but we have no strategy and we don't know what to do with it.
Jay Caldwell: Yeah.
David Rice: It is. You give everybody a piano and expect a symphony.
Jay Caldwell: Yeah, but there's so much excitement about it. I think that's the rush. And it's so intuitive, 'cause a lot of the general purpose tools are really just chat interfaces, so it's intuitive to figure out how to use it. It's more about the creativity of what should I do with this, and what are the possibilities that are difficult to discover on your own.
David Rice: Well, I think the targeted use cases are really helpful in that scenario, because they do something that's important psychologically for how you're gonna approach AI. We're all kind of looking at it, you see what it's capable of, and it can be a little bit scary, right? Depending on your role.
So what it does psychologically to have a targeted use case is it gives you a win. You solve a specific problem, you feel confident, and you build confidence more quickly. General purpose tools sometimes give people anxiety, 'cause there is no finish line. You're never done using it and you're never using it well enough.
There's always some guy on LinkedIn that's gonna look more awesome using it than you, you know? So there's the performance pressure, right? I think there's another problem too: if it's general and undefined, your manager doesn't know how to measure it, so they default to, are you using it at all?
Which is a pretty meaningless metric, right? Targeted deployment at least lets you measure outcomes rather than activity. I think the learning curve is real here too.
Jay Caldwell: Yeah, it's a great point. From a management perspective, I mean, these are big investments, so you're right. The value of having some of those targeted cases is you're going in to solve a very specific opportunity, and so you can very specifically measure the return on that.
That to your point, is not just using the tool, it's what are you getting from it.
David Rice: I just published a story today from a guest contributor about, you know, the sort of compression of entry-level work and what's happening in entry-level roles. And it's not hard to see that if we stop hiring at the base of the talent pyramid, you're kind of dismantling your future leadership bench.
We've talked a little bit about this on the podcast in the past, but I think it will have consequences for everything from institutional knowledge to succession planning and socioeconomic mobility over time. But I'm curious, what do you see orgs doing to get out ahead of this a little bit?
Are you seeing a shift from focusing on short-term productivity to thinking more about long-term capability at all?
Jay Caldwell: I do think that is an evolving conversation, because out of the gate, with some of the use cases for AI, like when AI passed the bar exam, probably the immediate question was, well, what are we doing with paralegals from here on forward?
So I think the immediate reaction was slowing those hires down. I do think companies are now wrapping their heads around this, just as you describe it, David: we can't do that entirely. One, because you will rip a rung out of your career ladder, and that's a big problem. People will start falling.
But two, I think organizations are also starting to see that not everyone is adopting AI at the same pace. I think that's especially true for people that have been in their careers longer. I'm generalizing quite a bit, but they may not adopt these new tools or be willing to change their work practices as quickly. And we're now at a point where the college graduates of today are AI native.
They started using gen AI tools when they entered college. So there's a lot we can learn from that population, who have really gone deep into these tools, hopefully for all the right reasons. They've gone in deeply to understand the potential and the purpose, and they can come into our organizations and challenge us: hey, why aren't you using AI to do this?
That's strange. Why are we still doing this manually? AI can easily solve that for us. So the other downside is that you're losing an opportunity to bring in, call it, more modernized skill sets or ways of thinking to the organization. But you also have to think about, with that segment of your workforce, how you help them build the right level of judgment and expertise, so that they're using the AI well and using it appropriately.
So I still think we have to have a really healthy mix across all career bands or tenure bands to make sure we've got all the right capabilities in our workforce. The volume may be changing a little bit, because, as we were talking about, the demand isn't as high as it used to be.
Turnover's a little bit lower, so there may be some tempering in terms of the volume of hiring at the early-career level, but it's still just as important. I think there was a bit of an overreaction out of the gate, and as I said, companies are coming back to the realization that they have to keep those pipelines open and strong.
David Rice: It helps that, you know, there's a lot of stuff in the AI conversation that C-suite leaders are maybe baffled by. They don't actually understand it on a very technical level, but every one of them understands this. They started at the bottom. They made mistakes in low stakes environments. They learned how organizations actually work from the inside, like the politics, all of it.
And it doesn't take a genius to realize that if you eliminate that experience, those entry points, you're kind of limiting where all this is going. I think a lot of C-suite leaders don't wanna talk about the socioeconomic piece, but we're all aware of it. Entry-level white-collar jobs have been the primary vehicle for economic mobility for decades.
And if those disappear, we've got to replace 'em with something. So there's a social contract at play here as well, and I think in the next year that may become a board-level concern too. What are we doing to ensure that we're still providing value to our people, getting them in and helping them grow?
Jay Caldwell: And the key word there is values, or value, right? I think that's where a lot of organizations need to step back and look: what are your values, and how do you let that drive your strategy? You know, at ADP, we are a values-driven organization. One of our values is each associate counts, and we are laser focused on keeping the human in the middle of all of this work change and making sure we're supporting all of our associates through the change, no matter where they are in their careers.
So I agree. I think it's a critical focus, especially when you think about like how you treat your employees through this transformation. You know, with things like Glassdoor and Reddit, it's very public. So looking at this, as you said, with a long range view is really critical.
David Rice: One of the things we're hearing, you know, a lot of people say is like, curiosity and adaptability matter more than expertise.
And of course that sounds great philosophically, and I would agree. But structurally, our hiring systems historically have been built around pedigree, and depth in many cases, I would say. So for all the talk about moving to skills-based hiring, and I think there are some orgs doing some cool things on that, a lot of companies have been maybe a little bit confused by it.
They've not been able to figure it out. So what has to change about how we evaluate talent so that we don't default back to the comfort of specialization?
Jay Caldwell: A lot of the mindset around what we're looking for has to change first, and this is change in your leadership, right? So I think that's the focus of the change is how do your leaders think about identifying and selecting talent?
Where, to your point, like choosing on pedigree or call it credentials is an easy way to do it because it's on paper, you can see it. It's hard to argue it. You know, if someone graduated with a four year degree, you can't argue that they had the capability and the energy to do so.
So I think what has to change is looking at alternative sources of, call it credentializing, whether that's more assessments where you can dig into the skills that someone has. And this is where AI also offers just immense opportunity to assess skills, whether they're technical skills or communication skills or creativity skills. A lot of that can be scaled using AI, but it has to be done in a very careful way because of the legal considerations around selection. So that's one opportunity for change: how do we really start assessing these things a little bit differently? But also the mindset and criteria for what we're looking for.
That helps our leaders feel more confident in making these really important decisions, because there's nothing more important than who you hire to your team. How do we help build their confidence in making a decision that's not just based on one single credential on a resume? And unfortunately, a lot of managers aren't the best at interviewing, at really digging into someone's experience or creativity levels. So I think we're gonna have to help our managers practice that as well, or give them some simple behavioral-based approaches that can help them assess something like creativity or innovation or breadth of experience. Ironically, I think it's getting back to some core behavioral-based interviewing that we've had for decades but probably haven't been relying on quite enough.
David Rice: It's funny sometimes the answer is just being old fashioned, you know?
Jay Caldwell: Yeah, exactly.
David Rice: But yeah, I was talking to a professor at the University of Michigan, like a year ago, and he was like, I'm just going back to proctored exams and just having 'em write it, you know, they can't have any device.
He's like, that way I can test whether or not they actually know it. And I'm like, that's fair. I mean, it sounds awful, because I hated those tests, but.
Jay Caldwell: There's something called, I don't know how much it's used these days, but the alternative uses test. You familiar with those?
David Rice: Yeah, I've heard of it.
Jay Caldwell: I mean, here's the simplest form of the test: David, here is a pen. In the next three minutes, tell me as many use cases for this pen as you possibly can, and just report those back. It almost feels like a family game or something like that, but it's a way to test for divergent thinking, like how many different ways can we look at a situation? Maybe a proxy for curiosity. But those are tests or methods from like 40, 50 years ago. So to your point, it's that back to the basics. I'm sure AI can do that for you really well, and it'll give us way more use cases than we need.
But if you just sit down with your own brain, how good are you at performing that?
David Rice: I gotta ask, 'cause obviously ADP's footprint in the talent landscape is huge, and I'm curious to get your thoughts on this. Is this a talent infrastructure problem? Like, is it down to how we use and what we expect from our ATS systems, how we've trained recruiters, the interview frameworks we're using? Even our language is built around the pedigree model, right?
So I don't think skills-based hiring is just a policy change. It's an entire infrastructure overhaul, and I think a lot of folks are trying to run new software on old hardware. That's the mentality they're kinda stuck in at the moment.
Jay Caldwell: I think it requires also really defining, if you say skills-based, what do you mean by that and what changes are you making associated with that?
Like I said, if skills-based means you're not doing a degree requirement for particular roles anymore, great. Okay. But now, looking at the system end to end, that's what's important: what else might you need to change about your hiring practices? What might you also need to change about your upstream practices?
So, as an example, if your company eliminates degree requirements for entry-level roles but still has them for senior leadership roles, how do you help people close that gap over the course of their careers? Or should you change both? That's maybe the better question. But you've gotta look at the full system and think long term versus just this one little isolated change.
David Rice: The other thing I always think about, you know, say we're all kind of saying the same thing, right? Like curiosity and adaptability, those are core competencies that we wanna hire for. But how do you screen for that? 'Cause you can't put adaptability on a resume and then show it. I think about this 'cause I was helping a buddy who's looking for a new job, and I'm trying to think of ways to help him show his portfolio and do these things.
And adaptability is one of those things where it's like, well, you just kind of have to show it, and you really don't get to show it in a 30-minute interview. You can't test curiosity in a short interview, right? So I think our methods fundamentally are not matched up well to what we're saying we value.
AI tools could help us evaluate potential and those sorts of things a little bit better, whether it's through analyzing work samples, assessing a problem-solving approach, maybe even identifying something like transferable skills. But it feels like a lot of us are using it to do the old thing faster.
Jay Caldwell: Yeah. I think we're talking about hiring right now, but put that same question to development.
You can develop creativity skills and adaptability skills, and it's another great use case for AI. For example, we've been looking at how to integrate AI into our leadership development training. And part of the purpose behind that is so we can give people more reps, more repetitions in leadership, faster. So I could set up a simulation for you to deal with a really difficult direct report and give them really difficult feedback.
And then 10 other situations that'll help you build more adaptability as a leader. So I think that's some of the promise as well. It's not just that we wanna hire for it; we have an opportunity to develop it.
David Rice: If AI handles things like depth analysis, maybe even some elements of decision making, human performance really shifts from execution to judgment. Given that, how do our current performance frameworks have to be sort of adjusted so that we're measuring the right things?
Jay Caldwell: Exactly. I wish I had an answer to this. Wish I had an easy answer to this question.
David Rice: I know, I, I only do easy questions. It's all softballs on this show.
Jay Caldwell: Is it? It's kinda like, how do you measure the performance of a creative job, like an artist?
Is it sales? I don't know. A lot of artists don't actually sell until after they're gone, right? And that's when they become really popular. So it's really tricky. I think it starts, though, with reframing some of the mindset of how someone views their own performance.
For example, I feel super productive when I clear my inbox. I can see it, I can measure it. I know when it starts. I know when it's done. And if AI is helping me do that in the future, I've lost something that makes me feel productive and that I'm performing. Or answering a certain amount of calls or, you know, summarizing a certain amount of notes that I've had from meetings all week.
Like, these things help me feel like I'm making progress. In jobs that are just by nature more creative, it's harder to see that you're making progress, right? The milestones aren't as clear, the timelines are longer. So I think that's first: if someone's job is fundamentally changing from production to creativity, that's a complete reframe in terms of how you view your own performance.
The measurement side is a whole other challenge, like how you think about performance management processes is a big challenge. So I think looking at things like more adaptable goal setting, measuring performance on potentially a more frequent basis for managers, maybe simpler, lighter ways to get manager perspective more frequently is really important.
And also emphasizing not just the results, but also the inputs, the behaviors, the skill sets. How is somebody showing up? You know, the old debate about what you're measuring: the what or the how? I think there's gonna be more emphasis put on the how moving forward, because those typically are the inputs to good creative, strategic work, or how you're collaborating with your peers, as an example.
So I think it's rethinking, individually, what does it actually mean to perform, but also, as an organization, thinking about how do you transform the measurements that you use in something like annual reviews. It's gonna be a very timely one as AI has an impact, but now's the time to think through what those potential implications are.
David Rice: Yeah, absolutely. You mentioned from production to creativity, and I often think of it as from execution to judgment. That's sort of the big thing. But how do you measure judgment? I'll give you an example. In my job I create a lot of content, and I use AI to help create some of that.
At least gimme a first draft. It'll say some things that sound really convincing, but I'm always just naturally skeptical of what it gives me. So I'm always chasing down what sounds off to me, judgment-wise, what doesn't sound right. And I catch it lying or fabricating things all the time, where it's like, that's not what that statistic says, or that's not at all what that means.
But what am I gonna put on my quarterly scorecard? Made us not look like idiots 27 times this month? It's a hard one to measure. There are a lot of times where it's good at creating an argument around data, but you always have to chase that data. Like, where did it come from?
What did it actually say? Is it reframing it in a different way, or is it reporting it as it was? Because my job isn't to reframe it or to essentially translate it unless it's very complex. It's to cite it in a lot of cases and to present it as, here's a thought, here's part of why I think this.
Now, if it's data that we've produced, yeah, we're there to translate it, that's fine. But sometimes it'll take a number and just make something completely different out of it, and you're like, that's not what that report said. And so I've been doing that for almost two years now, and it's getting better.
Certain platforms are better at it than others. I think, for example, Perplexity has sort of a citation specialization. That's something it does really well, but it's just not as good at some of the other things that, say, Claude is. You kind of have to find your tool. You're like, for this I'm gonna use this, and for that I'm gonna use that, kind of thing. But there are cases all the time where it says something and I'm like, I remember that data point, that was from two years ago, and it's framing it like it's from last month.
Jay Caldwell: Back to our discussion on how you help early career talent: this is the key. How do you build that kind of judgment? 'Cause you know that because you're experienced.
David Rice: Exactly. I've been doing this for a long time, so it's natural for me to think about that. But if I was 24, fresh outta school, and I'd never seen this stuff before, and I'm just trying to analyze the outputs, there's no way I'm gonna know that. Or, you know, I could maybe go do the digging, but then it's just changing my job from creator to fact checker. Is that the value? I don't know if that's inherently valuable. So yeah, it's finding ways to use these tools that make sense, still keeping judgment at the center and high value, but allowing you to develop the skills.
Jay Caldwell: It's another way to think about how you measure performance, because there are like three phases there. You input a certain question or prompt, the AI does its magic, and you evaluate it at the end. As you look at performance in a world with AI, it's gonna be really important to evaluate the human on those bookends.
How good are they at asking the right questions, massaging the prompt in the right way? And then how good are they at evaluating and using the output? The middle is pretty efficient and automated by AI, but it's that cycle of how you work through those uses that's really important.
David Rice: That's a great way to put it, 'cause it's sort of a shift in power, right? The people with the best judgment aren't always the ones with the biggest titles. Sometimes they're just the people that sit in the right place, or they can be genuinely disruptive to an organizational hierarchy. But it's so hard to understand where judgment has to sit, and what level of experience or knowledge, or ability to ask the right question at the right time, it requires. That's where we're at.
Well, Jay, it's been really good talking to you. I've enjoyed this one.
Jay Caldwell: Yeah, you as well. I think we could probably talk about this all day. It's an exciting space and an exciting moment that we're in, but we're all learning together, so I appreciate you doing podcasts like this, because it's really helpful for thinking it through. We're all in the same boat right now; no one's far ahead of anybody else, so the more sharing we can do, the better. So I appreciate it, David.
David Rice: Absolutely. Well, listeners, you know, if you haven't done so already, go on over to the website, sign up for the People Managing People Newsletter. You'll get episodes like this as well as all the content that we create on People Managing People straight to your inbox.
And until next time, judgment, that's where it's at.
