If you’re looking for a quick how‑to on rolling out AI in your org and actually seeing adoption—not just flashy pilots—you’ll want to stick with today’s episode. Glen Cathey joins us to get real about what it takes to move from “hey, we launched a chatbot” to a workforce that defaults to AI, and why most companies trip themselves up at the starting line.
We peel back the usual “let’s train everyone” playbook and instead ask: what happens when leadership doesn’t live what it preaches? How do you build habits, not just certifications? And how do you get everybody (yes, including your tenured folks) to think of AI as a real teammate instead of a toy? Expect a mix of hard truths, practical frameworks, and a few punches at our collective complacency.
What You’ll Learn
- Why leadership isn’t just “nice to have” but the lever that drives real AI fluency in organizations.
- How training by itself fails, and what behaviour change actually looks like when AI becomes part of the workflow.
- Why “manage AI like a human” isn’t a gimmick — it’s a metaphor that helps you design execution, feedback loops, and accountability.
- What fear of replacement tells us about corporate culture, and how ignoring that human side sabotages your tech rollout.
- Where roles and processes get exposed in the AI era (e.g., recruiting), and how to decide what gets automated vs what stays human.
Key Takeaways
- Leadership models behaviour. If the C‑suite says “AI is strategic” but doesn’t use it themselves, you’ll still get pilots dominating shelf space—not workflows.
- Training ≠ fluency. Consider it like watching a fitness video versus actually hitting the gym. You need repetition, role‑relevance, and reinforcement.
- Prompts & workflows > open‑ended freedom. Give people “five points in your workflow where AI should be used” + starter prompts. But also teach them to spot their own opportunities.
- Psychological safety is non‑negotiable. Slowing down to experiment is uncomfortable under quota pressure. But if you don't build a safe space, people will sneak back into old habits.
- Experienced workers can be stuck. Their “unconscious competence” (they do good work, no thought) becomes baggage when the paradigm shifts. Ask them to regress to “consciously incompetent” and relearn the work.
- Managing AI is management. If you give a human vague instructions and expect excellence—you won’t get it. Same goes for AI. Context + goals + feedback = much better output.
- Governance isn’t just IT’s job. When you distribute AI to thousands of employees, you’re effectively doubling your workforce. Who manages this asset? How do you review its work?
- Automation reveals redundancy. In recruiting, for example, AI tools exist today that source, message, and screen candidates. So you need to ask: what part of the recruiter role still adds human value?
- Fluency looks like habits, not events. An org is fluent when employees redesign their own workflows, lean into AI by default, and talk about “how we use AI” as naturally as they talk about email.
- The conversation is human, not just technical. 64% of senior leaders say fear of replacement by AI is stifling adoption — yet only 24% call employee resistance a major barrier. If you ignore the fear, you ignore the people.
Chapters
- 00:00 – Moving Beyond Pilot Mode
- 00:40 – Teaching AI Leadership Skills
- 01:10 – Closing the Training–Skills Gap
- 02:42 – Embedding AI in Daily Work
- 06:01 – Experience as Operational Baggage
- 10:51 – Developing Workforce Management Skills
- 13:47 – Strengthening Feedback Loops with AI
- 15:33 – The Hidden Cost of AI Scale
- 19:26 – Fear, Adoption, and Organizational Reality
- 29:06 – Breaking Out of Pilot Paralysis
- 31:44 – Defining AI Fluency in Practice
- 35:29 – Rethinking Traditional Roles
Meet Our Guest

Glen Cathey currently serves as Senior Vice President and Consulting Principal of Talent Advisory and Digital Strategy at Randstad Enterprise, bringing over 25 years of experience in global staffing, RPO, and digital recruitment innovation. He is a globally recognized sourcing and recruitment thought-leader known for his expertise in passive talent engagement, search-and-match technologies, and the ethical application of AI in hiring. Glen began his career as an IT recruiter and has since built and led large-scale sourcing teams, developed industry-leading training content, and spoken at numerous international talent events — all underscoring his commitment to elevating the future of work and talent acquisition.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Glen on LinkedIn
- Check out Randstad Enterprise
David Rice: There are a lot of companies that get stuck in pilot mode. They roll out training modules, they call it a day. What's the biggest lever that actually moves an organization toward fluency?
Glen Cathey: The single biggest lever is leadership. If you're going to create a culture of defaulting to AI, it's leaders that demonstrate. First and foremost, they set the vision. They actually live it and show it so they can serve as examples. You can have all the training in the world, but if you don't have leadership that's fully aligned, you're not gonna drive a culture where more people are defaulting to AI and exploring all the ways it can actually improve their work.
David Rice: Most people have never been managers. How do we teach 80% of the workforce to suddenly become AI people leaders?
Glen Cathey: We've never really solved management as a skill. It's also not something where you can just take a course and say, I'm now magically a fantastic manager of people, or of AI. It's something people have to turn into somewhat of a habit to be able to start thinking about those steps.
What am I trying to solve? What resources do I have? What are the capabilities? How do I provide enough instructions so that they can actually do a good job for me? And then how do I provide feedback?
David Rice: Do you think we're clinging to certain roles almost out of tradition rather than necessity?
Glen Cathey: Let's look at a few solutions that are relatively new, like LinkedIn's hiring assistant. It has a separate model for searching, outreach, screening. You can enter information in natural language. We're talking about the automation of tasks in recruitment: understanding a job, translating that job into search and match requirements. So you have solutions available in the market today that can actually do that. And then it makes me wonder, why does the role of the recruiter exist anyway?
David Rice: Welcome to the People Managing People Podcast — the show where we help leaders keep work human in the era of AI. I am your host, David Rice. And on today's episode, I'm joined by Glen Cathey. He is the SVP Talent Advisory within Randstad Enterprise. We're gonna be talking about the journey to AI fluency and building real skills to get there.
Glen, welcome!
Glen Cathey: Thank you very much. I'm happy to be here. Looking forward to the discussion.
David Rice: Absolutely. So I wanted to start us off talking about online training. Obviously this is something that everybody's thinking about, because we've gotta develop all these skills that we don't quite have, or in some cases we don't even know which ones we need yet.
So when we were talking before this, you said to me that online training creates knowledge, but not necessarily skills. So my question for you is: what would it take for companies to actually close that gap, and how do we stop treating AI like a course to pass when what we really need is to think of it more like a culture that we're gonna live?
Glen Cathey: I love that framing. It makes sense for any company to really lean heavily into online training because it's the easiest to scale to any organization. You know, you create a course and then you can deploy it to hundreds of thousands of people. But as you mentioned, I do believe that online learning primarily teaches people knowledge, and we're looking to give people not just knowledge but ability; there's behavioral change that's actually critical as a part of this.
So I don't think it's wise to treat generative AI as a tech deployment. It really is more of a change management exercise because it involves getting people to think differently about how they work. And it involves them changing their behaviors and then eventually you want those behaviors to turn into habits.
And you do that mostly through experience, like hands-on learning. So I tend to recommend, and something that we've done internally and externally as well, is essentially developing hands-on experiential learning workshops where people roll up their sleeves, so to speak, get their hands dirty, and get repetitions in so that they're actually working with it through role-relevant exercises.
So it's not just an idea or a concept; it becomes real. It's still a challenge, because training is just an event. Whether it's online or a workshop, you really have to focus on reinforcement, and that's how you build mindset first, then behavioral change, and then turn that into habits.
David Rice: Yeah, it's interesting. It's like watching a fitness video doesn't make you fit, right? You gotta do it. I don't know why in some cases we don't really think of it the same way here, but we ought to. We've gotta build it into daily workflows, right? This isn't something that HR can just force on people.
You've gotta embed AI practices. Like we were saying about embedding AI prompts in project templates, just something simple like that, making it a part of the everyday experience at your company. You know, and then rewarding experimentation when it happens. I think all that is kinda key to how we get to that point.
Glen Cathey: So I'm a member of the World 50 AI Roundtable that meets once a month. I've been doing this for almost two years now, and there was a really nice quote that I won't be able to quote exactly, but the essence of it was, you know, the magic of generative AI comes when you thoughtfully embed it into workflows.
And so, you know, the word that I tend to use is being prescriptive. Sometimes it's still difficult even for me to wrap my head around how to explain the challenge of rolling out a gen AI toolset to your employees, like a Copilot or a Gemini or ChatGPT, which is the fact that these tools are capable of literally anything when it comes to knowledge and cognitive tasks.
Now that sounds exciting on one hand, but it poses a challenge on the other, which is: where do I start? So like you talked about prompt libraries, it is very helpful for people to have access to things and say, oh, well, that's an interesting idea, I hadn't thought of that. But it's also important to help people think about their work in a way where they will pause, reflect, and realize: this is an opportunity for me to use AI.
So on one hand, it's helpful to be very prescriptive and say, hey, in this particular workflow for this particular role, here are the five points where we expect you to use AI, and here are some starter prompts. That's very important. I think it's equally important to help people develop the skill of identifying opportunities to leverage AI on their own.
So it's not an either or. It's an and.
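To make that prescriptive approach concrete, here's a minimal sketch of what a role-specific starter-prompt library could look like in code. Everything in it, the role, the workflow points, and the prompt wording, is an invented example for illustration, not an actual library Glen's team uses.

```python
# A minimal sketch of a "prescriptive" starter-prompt library: for each role,
# name the points in the workflow where AI use is expected and pair each with
# a starter prompt employees can copy into their gen AI tool. The role,
# workflow points, and prompt text below are invented examples.
STARTER_PROMPTS: dict[str, dict[str, str]] = {
    "recruiter": {
        "job_intake": "Summarize this job description into five must-have skills and three nice-to-haves.",
        "sourcing": "Suggest search strings and adjacent job titles for finding candidates with these skills.",
        "outreach": "Draft a short, personalized first message to a passive candidate, referencing their background.",
        "screening_prep": "Write five screening questions for the must-have skills, with what a strong answer sounds like.",
        "interview_debrief": "Turn these raw interview notes into a structured scorecard with evidence per skill.",
    },
}


def starter_prompt(role: str, workflow_point: str) -> str:
    """Look up the expected starter prompt for a role at a given workflow point."""
    try:
        return STARTER_PROMPTS[role][workflow_point]
    except KeyError:
        raise KeyError(f"No starter prompt defined for {role!r} at {workflow_point!r}") from None


if __name__ == "__main__":
    print(starter_prompt("recruiter", "job_intake"))
```

The point of a structure like this isn't the code itself; it's that "use AI in your workflow" becomes five named, expected touchpoints instead of an open-ended invitation.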
David Rice: Yeah, absolutely. And they gotta have the psychological safety to feel like it's okay for me to just try this and see what happens, and if I fail, it's not that big of a deal. That's kind of how knowledge becomes skill, right?
Glen Cathey: Yeah. People learn by doing, and there's a really interesting book that shows people actually learn most, even at the brain level, if there's some struggle involved, which is kind of fascinating. I think the book I read on that was called The Talent Code, and it talks about how people actually build skills. Skills are often built through trial and error, which means you have to make some mistakes along the way, and by making the mistakes and then overcoming them, you're really solidifying your knowledge of how to do that particular thing.
To me, that's very fascinating. I love the fact that you touched on psychological safety. What we find is that there's an existing pressure for people to perform against their current goals, and if you're going to take the time to slow down in how you think about your work, invite AI in, and do some experimentation, it will slow you down.
And many people are nervous: if I slow down and start experimenting with this, which isn't always going to work, I'm not gonna be an expert at it right away. That's a concern. People might say, well, I don't really feel safe spending extra time when I'm supposed to be as productive as possible.
So I think it's really important for companies to recognize that when there is change involved, people are gonna have to slow down to speed up. And like you said, they're going to have to experiment, and no one is going to go from a noob to an expert in a day. It's a process, and you have to give your people space and time to experiment safely so that they can actually go from where they are to where you want them to be.
But I feel like that's something a lot of companies don't focus enough on: making sure that people have the space to experiment, that it's okay to do so, and that they have time to do it. And if it slows them down in the beginning, that's also okay, because it's expected.
David Rice: It's interesting too, 'cause we've had a few guests on where we've talked about the fact that people are very attached to tasks, because we've always taught them that's their value, right? Their ability to complete tasks.
We're moving into this era where it's really more about learning, and kind of what you can be rather than what you are. And I think part of the challenge here is unlearning. We all have habits, things that we do in our work, and we've gotta kind of unlearn that and then relearn it in a different way with AI.
And I'm curious, in your opinion, is experience sort of a type of baggage? How do you convince seasoned employees that their hard-won habits might actually hold them back?
Glen Cathey: I love that framing, the baggage. You know, we hadn't talked about that before. It's interesting because I think it's unconscious baggage.
I don't know if we've chatted about this before: the four levels of competence, right? At the top, most people, if you're not new to your role and you've been doing your job for a while, you are unconsciously competent, which means you're not actually thinking about what you do; you're in the flow of work, you get it done.
That's how we all pretty much work, unless, again, you're new to a role or new to the workforce. The reality is, when you implement a tool or a solution like generative AI, people have to kick themselves back down to the first level. The four levels of competence actually start with being unconsciously incompetent, which means people don't even know what they don't know. Then they have to move to the next level, consciously incompetent, which is like, okay, I'm now aware of AI, but I realize I'm not very good at using it or incorporating it into my daily work.
And then you move to the next level, consciously competent, which is: I'm kind of good at it, but I have to really think about using it properly. The goal really is to get everyone off autopilot, that unconscious competence where everyone's just doing their work every day. To me, I actually would call that a kind of baggage that you have to get rid of, because you really have to rethink the way you work.
It is a rethinking of the way you work. You have a resource that's available 24/7 to help you with any knowledge or cognitive task at any time. That's something people have never had: a resource just sitting next to you, waiting to be guided, delegated to, collaborated with. And it's a huge mental leap for people to go from the way they currently work, where they're not even consciously thinking about their work process, to slowing down, becoming conscious of it, and then realizing they might have to ask some key questions about how AI can help them improve the quality, not just efficiency. It could also be the quality of the work, or the decisions and outcomes, or even customer experience.
So there are lots of other variables beyond efficiency and cutting down on tedious tasks.
David Rice: I think there's a reframe on learning too, right? The first thing it triggers when people hear it is this sense that they're gonna abandon their expertise or something.
But I think it's more like positioning it as a software update: you're not erasing muscle memory, you're just adding some new reflexes. The research shows that folks who know how to do this stuff and are now applying AI tend to be more successful in their roles and have more impact than folks who are either just getting started or have simply outsourced their work to it.
So I think it's kind of just creating a better perception around what it means to unlearn and relearn. I'm curious, you know, working with AI agents has some commonalities with managing a person at times, right? Delegating tasks, setting goals, giving feedback, things like that. If most people have never been managers, how do we teach 80% of the workforce to suddenly become, I guess we'd say, AI people leaders?
Glen Cathey: Yeah, that's been top of mind for me for the past few months, making that connection. I think the right way to look at working with AI is that it's a resource to be managed and collaborated with. It is challenging if the majority of your workforce has never managed anything before.
Certainly people may have collaborated with other people, but it's not like you can collaborate with another person any second of the day; you usually have to set up meetings for that. So it does require some rethinking of your work, and also training people on what management skills are.
And the first one is problem identification, and even deconstruction, which sounds super fancy, but it's really: what are we actually trying to do? To our discussion a moment ago, most people are just kind of on autopilot. They know what they're focused on every day. This requires you to slow down and figure out, what am I actually trying to solve?
And then, what resources do I have available to me, like a manager would ask? Well, now we have AI as a resource that I can either collaborate with or delegate to. And if you have this resource, you have to provide expert-level instructions. You have to provide enough context for it to understand what it is you're trying to do, who you are, what you need, why you want it, how you want it.
And then once it actually performs some work, you have to provide feedback like a manager would to an employee: you did this work, some of this is okay, I need you to focus a little more on this area. These are things the individual contributor has never done before. So although we've never really solved management as a skill, it's also not something where you can just take a course and say, I'm now magically a fantastic manager of people, or of AI.
It's something people have to turn into somewhat of a habit and a routine, to start thinking about those steps: What am I trying to solve? What resources do I have? What are their capabilities? How do I provide enough instruction so they can actually do a good job for me? And then how do I provide feedback so we can continue to get better and better output?
David Rice: Absolutely. And teaching the workforce to do that, I think there are some opportunities around little simulations where employees coach an AI to improve its outputs, just like a manager giving feedback, and you can give feedback on how they're coaching it and build that skill.
You know, the clarity, the specificity, the overall feedback loop. It's the same traits that good managers have, but it's kind of ironic that it might be AI that teaches us to do it better in some ways.
Glen Cathey: I agree. And it's funny, it wasn't until you said that that it gave me an idea: there is a way to create a prompt, or, if it's Gemini, a Gem, or if it's ChatGPT, a custom GPT, where you give it instructions so that it's there to ask you questions. So it isn't just waiting for your input; it's actually there to kind of coach you through it. That's something you could totally implement today with almost any commercially available solution,
basically to help you manage it better, and it is kind of ironic. The reality is, sometimes when I do trainings in this area, I think people need to fully appreciate how powerful these solutions are, how capable they are, how human-like, and sometimes even superhuman in terms of human skills like empathy and emotional awareness, which sounds strange, that AI can actually outperform people.
But multiple studies have shown that in blind research, people will judge the output of AI to be more empathetic and more emotionally aware, which is absolutely mind-blowing. That does mean that if you have this resource you can work with, it's not just technologically strong, not just strong from a reasoning and intelligence standpoint; it can actually help you be even better as a person, in the soft skills you would never have assumed AI could help you with.
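As a concrete illustration of the "custom GPT that coaches you" idea Glen describes, here's a minimal sketch using an OpenAI-style chat API. The system-prompt wording and model name are assumptions for illustration, not the exact configuration from the episode; the same instructions could just as easily be pasted into a Gem or a custom GPT.

```python
# Minimal sketch of a "coaching" assistant: a system prompt instructs the
# model to interview you, one question at a time, before producing output.
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the
# environment; prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

COACH_INSTRUCTIONS = (
    "You are a work coach. Before doing any task I give you, ask me "
    "clarifying questions one at a time (audience, goal, format, "
    "constraints, what 'good' looks like). Only produce the final output "
    "once you have enough context to do your absolute best work."
)


def run_coach() -> None:
    """Simple REPL loop: the model asks questions before it answers."""
    messages = [{"role": "system", "content": COACH_INSTRUCTIONS}]
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_input})
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Coach: {answer}")


if __name__ == "__main__":
    run_coach()
```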
David Rice: It's an interesting time to be alive. I just keep saying that to myself, like, hey, never thought I'd see this. So when we were talking before this, you mentioned to me that deploying AI at scale is like doubling your workforce overnight. But if that's true, what's the hidden cost, or risk, I guess, of that? Suddenly having twice as many workers, half of whom never sleep, but also never think without being prompted, never think independently, right?
Glen Cathey: I find that way of looking at it helpful for motivating companies, for getting them to understand that this isn't just deploying software licenses that people may or may not use.
I feel that if you look at it this way, as if you're doubling your workforce, it creates a little more, I'd say awareness and maybe a feeling of responsibility to actually make sure that these resources are working. Because if not, they're just sitting idle. And I try to use that framing because if you really were to double your workforce, you would have a lot of pressure and anxiety around making sure that your new employees are as productive as possible.
But I don't feel that companies are thinking about that when it comes to working with generative AI solutions. They should have that mentality. And that goes back to the management training: if you're gonna deploy this to people, with the majority of your employees being individual contributors, the missing link is figuring out that you have hired, quote unquote, these new resources. They're AI, and they can actually perform real work today, both augmenting people and, from an agent perspective, automating work.
But you have to be thinking: we have these new resources. Who is managing them? Who's capable of managing them? And how do we upskill people to make sure these resources get up to speed? This framing for me, until I find a better one, is really about creating anxiety around the fact that we're underutilizing a workforce asset that we have.
There's probably a better way of looking at it, but so far that seems to resonate with some people: okay, if I really did double my workforce, I would be responsible for making sure those resources were up to speed and as productive as possible. Whereas right now in a lot of companies, that capacity is sitting idle, and even though it's much less expensive than a person, that doesn't mean you shouldn't be thinking about it and having that level of anxiety about making sure your people are leveraging this tool.
I'll also say, you know, more and more companies are deploying these at enterprise scale, and that means having access to the tools is not the competitive advantage anymore. Which then means your competitive differentiator is how well you're enabling your people to use the tech, which goes back to the whole point of our discussion. Hopefully that also creates a little bit of pressure and anxiety for people: hey, are we doing enough in this space to make sure our people are enabled to get maximum benefit from these tools we're deploying?
David Rice: Yeah, and I couldn't echo that harder. The idea that you get a competitive advantage out of it is disappearing a little bit because it's becoming so standard. The thing I would challenge folks on is what you mentioned there, making sure that they're prepared.
You've gotta think of this as management overhead, right? Every new agent, every new thing that you deploy adds cognitive load somewhere. There are more things to brief, more things to check and trust. You wanna build it into something you can trust, because it sometimes has false confidence.
It never sleeps, right, but it also never pushes back. So it can generate wrong answers very confidently, and if you don't take the time to check, you might put them out there. It's one of those things, especially for me, since I write a lot, that I'm always aware of. But I think there's an opportunity there in terms of governance, thinking about it like AI operations, the same way you think about any operation, right?
There are rules for quality control and ownership. It's about putting those things in place and thinking of it in that framework.
Glen Cathey: Yeah, totally agree. And there's centralized governance, and then there's also, I think, governance that extends to every employee using these types of tools.
That goes back to training, making sure that people understand the dos and don'ts of using these types of technologies. And yes, there are opportunities for hallucinations, or what people call AI slop. But what I would say when it comes to AI slop is that it's not an AI problem; that's a people problem.
Because if you are perpetuating slop, that means you're allowing it to get out. And I think that just calls out that one of the most critical things people bring to the equation is critical thinking. That involves review, and it goes back to thinking as a manager: if you were to delegate work to an employee, you're gonna review that work.
You're gonna make sure that before it goes out externally or is used for some important purpose, someone who knows better has an opportunity to review it, make sure the quality is there, and make sure it's representative of what you're trying to achieve. But when you think of AI being distributed to a lot of your employees, you're placing a lot of trust in every individual to follow that practice.
That's why training is important, and it's not just, you know, I like to say training is an event. You might go through a course or a workshop, but you also have to perpetuate that kind of culture, to make sure you're always over-communicating what we could call governance: constant reminders to make sure people are checking on quality,
fact-checking anything that needs to be fact-checked, but also making sure the output is the appropriate quality and tone of voice for however you're using it. It's very easy to write a simple prompt and get somewhat decent output these days, but that's gonna be somewhat generic.
And you know, I hesitate to call it AI slop, but I see a lot of comment around that. I try to help people see that it's not AI's fault; it's our fault. If it gets out, it's because you allowed it to get out.
David Rice: Welcome to this week's Data Bite. The number that I want you to keep in mind this week is 64%. According to EY's latest AI Pulse survey, 64% of senior leaders say that fear of replacement by AI is stifling adoption within their organizations. Yet only 24% identify employee resistance as a major barrier. So let's sit with that contradiction for a minute. What this tells us is that leaders know their people are afraid. They see it, they acknowledge it.
But they're not naming it as the core problem. Instead, they're talking about data readiness, cybersecurity, and regulatory concerns. What I see really happening here is that we're watching a transformation where the human cost is being coded as a technical challenge. Two thirds of leadership recognize that their workforce fears obsolescence, but that fear isn't being treated as the crisis that it actually is.
The question we should be asking isn't, how do we overcome resistance to AI? It's, what does it say about our organizations that our people are terrified of being replaced and we're still barreling forward? Because here's the uncomfortable truth: when 64% of leaders see fear, but only 24% treat it as a barrier worth addressing, we're not solving for human wellbeing.
We're solving around it. The truth is, this isn't a change management problem. It's a fundamental question about what role humans will play in a world we're actively making less dependent on them. And right now, honestly, I don't think we're having that conversation. And with that, back to the show.
I always say it's never actually the technology; it's how we choose to use it. And if you choose to use it in a lazy way, well, you're gonna get lazy results. You know, you're gonna get slop, which is the same thing you'd get from a lazy employee in whatever position; it doesn't matter, they're gonna turn in sloppy work, and it's gonna be the same thing.
So, you know, it's about learning how to prompt it, how to make sure that it's getting you the result that you want.
Glen Cathey: I'm glad you said the word lazy, because that's something I try to impress upon others. And I try to be very transparent with myself; I'm on the same journey everyone else is, trying to figure out the best way to use these tools. And there are times when I'm working with AI where I start to get annoyed with how long it's taking me to provide enough context in a prompt.
And I always have to catch myself and realize: yes, I'm getting annoyed because it feels like it's slowing me down, but I have to remember, and I'm even reminding myself every day, that just a few more minutes of appropriate context will yield much better results. So I try to tell people that you should think of yourself as a manager of these tools.
And as such, don't be a lazy manager. I'd also say that if you have managed people, most good managers take the time to make sure, when they're delegating tasks or assigning work, that people fully understand what it is they're looking to do. They probably even have examples of what good looks like, right?
You probably also allow them to ask questions: I've explained what I want you to do, is there anything you need to know? What questions do you have for me? That's good management. That's another funny thing we do in our trainings, reminding people that you can have AI ask questions. It's such a simple, non-technical aha unlock: at the end of your prompts, you can always just say, you know, ask me a few questions, one at a time, that would help you provide the best output for me.
And again, it's treating it more like a person. So I just feel like if you're gonna manage, be a good manager; you need to manage AI as well as you would manage a good employee.
And I feel that some people are using AI but not really managing AI: minimal inputs, typing in, you know, one sentence. You'll get okay results, but you're not gonna get the best results. That's the same as giving that little context to a human; they would never be able to do a good job nine times outta ten.
So you have to hold it to the same standard, I think, is what's helpful for people to keep in mind.
David Rice: Yeah. I think a lot of people have this kneejerk reaction when you say treat it like a human, where they're like, I don't wanna do that. But I don't mean you talk to it about your problems, really.
I mean you gotta give it the same level of understanding that you would give a human. It's not inherently a genius. Yes, it has a massive amount of computing power and it can think really fast and do all these things, but it still needs the context. It still needs to understand why you want this. Otherwise it's just gonna blindly spit something out that it thinks you want, but it may not be that thing.
Glen Cathey: To me, that still is fascinating. You know, I used to try to tell people that it's almost garbage in, garbage out when it comes to prompting. That's not really true, because even with a basic prompt, gen AI tools today, and they'll only get better tomorrow, still produce something strong for the minimal input you give.
The issue is that it's not their full capability. So I'd say garbage in actually gets you better than garbage out, but it's not the best output. And like you were saying, you have to provide enough information for it to do its best job. I still think it's nice, and hopefully anybody listening to this will use that concept: just ask AI to ask you questions. Say, what else do you need to know to do your absolute best on this particular task?
And I will guarantee you, you will be surprised by the thoughtful questions, the kind where you immediately understand, of course, that's helpful for it to know if it's going to do this task for me, but you didn't think of it first. It had to pull it out of you by asking questions. To me, that's a fun little thing: just have AI ask you questions. If you had to do the best job on this, what else would you need to know? It will ask you really good, insightful questions, which in turn allows it to do the absolute best work that it can do. But most people aren't even aware of that.
One more thing I'll slip in: I tend to tell people that if you're ever unimpressed with the output, or you think it's average or mediocre, you have to look in the mirror; you're probably at least 50% of the problem.
So take ownership. If the output is like, ah, it didn't really do a good job, you need to ask, well, what could I have done better to help it do a better job? I feel that not a lot of people are thinking about it from an accountability perspective, how responsible they need to be in terms of providing input. But it's bizarre in a positive way: we're at the stage where we can use natural language to communicate with systems. Prior to three years ago, yes, you had voice-to-text, but it wasn't like generative AI. I feel that people do get a little bit lazy with their minimal inputs without realizing that, yeah, just take a few more minutes and provide it with all the context it needs to do a fantastic job for you, whether it's typed or voice.
Something I still feel some people are not taking advantage of with these tools is talking to it like a person, actually using the microphone so you don't have to sit there and type. If you feel like you're slowing down to type, just click the microphone and start talking to it.
David Rice: It's an incredible ability that it has.
Glen Cathey: It really is. Yeah. We take it for granted now, but in the future it'll seem kind of crazy. Like, we have mobile phones now, and some of us are old enough to remember corded phones that stuck you to the wall in your house.
David Rice: Car phones.
Glen Cathey: That's how it actually used to be, right? Or records, and now we have digital music. At some point I think we'll be laughing at the fact that we actually used our fingers to touch little buttons to communicate with computers.
That's a significant barrier when you really think about it: touching keys is a barrier to communication, and now that barrier is literally gone, 'cause you don't have to type anymore. You can just speak.
David Rice: It doesn't mind long voice notes like all your friends do, so.
Glen Cathey: That's true. It never seems to get annoyed if you ramble a little bit.
And I'll also say it does a really good job even with fast and furious typed notes in meetings. Even if you use a note taker, sometimes I still like to take my own notes, and it's like, this is not gibberish, but it's not well outlined. It's amazing how well AI can understand even those things and then polish them up into something that looks like, yes, this is exactly what I wanted them to be.
But if you looked at my original notes, you wouldn't be able to make sense of them. AI can. It's just absolutely fascinating.
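For listeners who want to try the messy-notes use case, here's a minimal sketch under the same assumptions as the earlier example (the openai Python SDK, an illustrative model name, and invented instruction wording): it sends raw notes with a polishing instruction and returns a clean outline.

```python
# Minimal sketch of the "polish my messy meeting notes" use case described
# above. Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the
# environment; model name and instruction wording are illustrative.
from openai import OpenAI

client = OpenAI()

POLISH_INSTRUCTIONS = (
    "You clean up raw meeting notes. Keep every fact and name, fix structure "
    "and spelling, and return a short outline with decisions and action items."
)


def polish_notes(raw_notes: str) -> str:
    """Turn fast, messy typed notes into a clean, outlined summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": POLISH_INSTRUCTIONS},
            {"role": "user", "content": raw_notes},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(polish_notes("q3 budget?? maria owns deck - ship fri / ask IT re copilot seats"))
```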
David Rice: There are a lot of companies that kind of stop at literacy though, right? They get stuck in pilot mode. I've been hearing this a lot lately, and we even recently launched a free community account that you can create, where people list their reason for joining. A lot of people are in the stage of running AI pilots, and we're seeing a lot of people get stuck there.
They roll out training modules, they call it a day, right? I'm curious, in your opinion, what's the single biggest lever that actually moves an organization toward fluency?
Glen Cathey: The single biggest lever, I would probably have to say, is leadership. Because if you're going to create a culture of defaulting to AI, or making sure that people are more conscious about bringing AI into their work thoughtfully, where it does make a difference, that's not going to be accomplished just through training.
So you can have an online training course, you can have your experiential hands-on learning workshops, which I think are absolutely necessary; that would be my number two. But number one would be leadership, because it's leaders that demonstrate. First and foremost, they set the vision: this is what we're trying to do.
They actually live it and show it so they can serve as examples, and then it's leadership's chain of command to hold people accountable at all levels. So you have leaders who lead leaders, holding them accountable for people actually making changes in their daily work. You can have all the training in the world, but if you don't have leadership that's fully aligned and holding all the different levels of leadership accountable, from the very top all the way down to the line managers who manage all the individual contributors, you're not gonna drive a culture where more people are defaulting to AI and exploring all the ways it can actually improve their work.
David Rice: Yeah, and I think it's understanding the characteristics of fluency too. Leaders modeling it is gonna be the thing that creates fluency, where employees are willing to redesign a workflow on their own, for example, without being told to. And part of it is moving the mentality away from completion rates, that productivity framing,
toward a capability shift: who's using it? What for? What changed in the outcomes? There are just some mentality shifts that have gotta happen in leadership, and part of it is just using it, to be honest. I've talked to multiple people now who have said the same thing: they talk to leaders who say, no, we're an AI organization, but I don't use it. Huh?
Glen Cathey: Well, and people see it. I think when you have any large population of employees, whether it's hundreds or tens of thousands or hundreds of thousands, you're gonna have a normal distribution. You're gonna have your early adopters; they're going to lean into it anyway, regardless of any training. They're gonna get it. Then you have people that don't trust AI. Even if you're really enthusiastic about it, I should remember there are people on the other end of the spectrum who are doubtful, distrustful, worried about it, or maybe just reluctant to learn something new. And you have a lot of people in the middle.
And to be able to move that middle, again, training is great, but leadership is not just communicating it and demonstrating it; it's holding people accountable. And although that sounds hardcore, sometimes it can be as simple as making sure it's part of every team meeting: discussing how we're using AI, giving people opportunities to share some of their use cases so other people can hear about them,
documenting those use cases, sharing them with others. You can make it part of performance reviews as well, and I don't mean that in a scary way. It's not just a stick; it's also a carrot. If you tell me, hey, Glen, when we have our performance reviews, I'm going to be asking you about how you're using AI,
as simple as that, now I know I'm going to be asked, and so I'm going to do things because I know I'm going to be asked. Sometimes it's as simple as that. It doesn't have to be anything more complex, but it's that leadership chain that starts to help pull people towards the right behaviors,
'cause they know it's gonna be discussed and talked about, and people are gonna be asked, are you using it? Because if not, it'll come out, and then I would say it doesn't really look good for you. It's almost like saying, no, I don't wanna use the internet, back when the internet was coming out. At some point you just have to say, this is the future, and I have to lean in.
David Rice: Yeah. I mean, especially if you have a job where you sit behind a computer, this is just the reality, you know? It's like saying that you don't want to use the internet. You wouldn't even think that today; it would be absurd. The absurdity of it would be hard to even comprehend.
So I think we're gonna get to that point with this as well.
Glen Cathey: I hope so. And I think it's like this with any technology. Sometimes I use Excel as an example. Excel's been out for a long time, but there are still things about Excel that 99% of people don't know about. There's so much you can do.
There are people who make careers just training people on how to use Excel. And I don't wanna say it's "just Excel"; it's a very powerful technology and tool. Most people can get by using 10% of the functionality and it does what they need. You just have to realize there's 90% of other functionality there.
So I feel like we will get more people to be AI native, and I don't love that term, but I can't think of a better one at the moment, where people are defaulting to it more. But just because they're using it doesn't mean they're using all of it. Just like in Excel or a Word doc, there are still features.
There are Windows shortcuts I just learned like two months ago. I'm like, how did I not know that? So this will never be done, just like leadership hasn't been solved. Just because there's training doesn't make everyone magically, you know, awesome leaders. There's still growth for everybody in that area.
I think it'll be very similar with AI, as long as people are leaning into it and realizing this is the new way of working. And I think fear of being left behind can be leveraged positively, because it's also reality. I know I'm using old-school analogies, but imagine when Microsoft Office came out,
if people were just like, I don't wanna use spreadsheets and word processing. At some point you're left behind. You made a choice not to see that this is something that's gonna be integral to work, and that's going to hurt you. It's the same thing with AI; it's no different than MS Office. It's just gonna be a skill that everyone's gonna have to have.
So if you don't lean into it, I feel like that's something people are gonna do to their detriment.
David Rice: When we look at something like recruiting, for example, AI could take over tasks from sourcing to shortlisting, even messaging people. Where's the line between augmentation and replacement?
'Cause that's something a lot of people fear, right? And I'm wondering, do you think we're clinging to certain roles almost out of tradition rather than necessity?
Glen Cathey: That is a great question. I have some thoughts, and it might be controversial for some. I'm still thinking this through, like many people are. But first off, let's look at a few solutions that are relatively new, like LinkedIn's hiring assistant.
LinkedIn's hiring assistant is a legitimate multi-agent model. It has a separate model for searching, a separate model for outreach, a separate model for screening. You can enter information in natural language, and there are other solutions that do this too, right? So we're talking about the automation of tasks in recruitment: understanding a job, translating that job into search and match requirements, finding people, engaging people, pre-screening people.
So you have solutions available in the market today that can actually do that. And then it makes me wonder, why does the role of the recruiter exist anyway? This is at least my journey: you have really small companies, like startups, that don't have dedicated recruiters, right?
So who hires? Managers do; they recruit their own people. But you get to a point where, as a manager of whatever it is you're managing, let's say software engineering, you don't have time to be a full-time recruiter. That creates the need for a person to take over those tasks.
What happens when you have technology that's capable of performing those tasks? You could have a scenario in the future, and I'm not saying it will come, but you could, and the technology already exists today, where who is to say the end user of LinkedIn's hiring assistant isn't the hiring manager?
I know that sounds scary, but people need to be aware. We can't put our heads in the sand and say, no, it can't do that. Technically, that's totally possible today, right? So let's be aware of it. We're not saying it's going to happen. But then you have to figure out: if I'm a recruiter, where do I fit in the future?
In that type of scenario, not all hiring managers are going to want to self-serve using technology, so many of them will still want to outsource to a human recruiter. But if I'm a recruiter and I manage this truly multi-agent system, where do I add value? It sounds cliché, but it's true.
It's relationship building. The actual recruiting, which is persuasion and influence: how do I take a passive candidate who wasn't really thinking about making a change, or someone who's actively interviewing at other companies, listen to them empathetically, and match their skills, motivations, and aspirations to the next opportunity?
And how do I align what I'm hearing them tell me with how our company is actually, perhaps, the best match for them, if that's actually true? Relationships with hiring managers, being consultative. I don't know if we'll get to the point where people are a hundred percent relying on AI to be a talent advisor.
It could happen. But I still think there's definitely a role for people to play from a talent advisor perspective, both on the hiring manager side and the candidate side. So those are some of my thoughts in that space: realize that the capabilities are there, and to say that we can't be replaced is actually not accurate.
But just because we can doesn't mean we should. And the last thing I'll say on this is that this will be, I think, a strategic decision for every company. One company might say, we're going to automate this portion and use AI here, but we're gonna use people here. Every company is gonna figure out what their fingerprint is, so to speak, in that space: where do we want AI, and where do we want people?
And where you put people might actually end up being one of your competitive advantages from an experience perspective, but where you use AI could also be a competitive advantage from an experience perspective. Because not everybody even gets feedback, right? Nobody has enough recruiters to provide feedback to all applicants.
So AI could be perfect at doing that, filling a gap we can't fill with people. Those are just a few of my thoughts on that subject.
David Rice: I agree with you, 'cause I think we still somewhat underestimate how much of the work is mechanical, in this particular case the recruiter's work, right? AI can handle quite a bit, but the question is still: where does the human add value?
It's the judgment, the storytelling, the empathy, that kind of thing. And it's weird, because the role could persist because of identity rather than utility. But what is the value in that? I think the future value is what you mentioned there: not every candidate gets feedback, but candidate experience would be a lot better if that's what they were getting.
So I think reshaping candidate experience, and thinking about how you use AI to do that, is a really high-value use case. It's not "is it gonna replace me"; it's "what part of my work deserves more attention, and how is this gonna allow me to give it that?" I've come to this realization in my own work, right?
I'm realizing, oh, actually, maybe some of that stuff I was really attached to in the past, or that I spent a lot of time doing, I don't really have to do that much anymore. I can use these prompts, get this result, and then I can pour more into the things I always wanted to give more attention, the things that actually need my specific attention, versus the stuff anybody can do.
And it's like, okay, well, now I finally can. So it's finding those moments and opportunities to do that.
Glen Cathey: I agree. I think there's an opportunity to not just look at the existing recruiting lifecycle, or even the talent management lifecycle, and ask where we can apply AI; that's legitimate. I also think we have an opportunity to reimagine things, which means you do have to wipe the slate clean at some point and say: now that we have these additional capabilities, how would we design this experience?
How would we design this process? That's a different thing than saying, here's our existing process and we're inserting AI. Now it's, well, if we were to reimagine it, what would it look like? That really captures my imagination, because there are already companies thinking about doing that. And as time moves on, more and more companies are gonna be forced to say, yeah, we can't just keep doing things the old way and inserting AI and expect it to work.
It's not that it won't work. But it won't be, I think, the full realization of what could be until you say, maybe we do have to rethink everything, because we have completely new capabilities we didn't have before. And that does change things. So it's not just the insertion of tech into our old process;
it also, I think, involves the reimagination of what a new process should look like with our new capabilities.
David Rice: Burn it down and start over. Right. Well, Glen, I want to thank you for coming on the show today. I really appreciate it. Thanks for giving us some of your time and your insights.
Glen Cathey: This is awesome. Thank you.
David Rice: Well listeners, follow Glen over on LinkedIn. If you haven't done so already, head on over to People Managing People, create that free community account. Get signed up for the newsletter. We'll see you next time.
