AI is supposed to free people up for “higher-value work.” Fine. But what, exactly, is that work? In this episode, David Rice talks with cyberpsychology researcher and psychotherapist Dr. Rachel Wood about the part of AI adoption most organizations keep skimming past: the human cost of automating too much, too quickly, without a real philosophy for what should remain deeply, stubbornly human.
Their conversation cuts through the usual AI optimism and gets to the more uncomfortable truth. Some friction should absolutely go away. Nobody needs to spend an hour copying and pasting spreadsheet data. But some friction is the job: disagreement, discernment, hard conversations, learning by getting things wrong, and figuring out who you are when your expertise is suddenly easier to imitate. This episode is really about that distinction, and why leaders need to stop treating AI as a software rollout and start treating it as a human development challenge.
What You’ll Learn
- Why “be more strategic” and “be more collaborative” are outcomes, not instructions
- The difference between administrative friction and relational friction
- How overusing AI can erode basic communication and thinking habits
- Why AI adoption is often blocked by ambivalence, not a lack of training
- How automation can trigger identity loss, grief, and vocational uncertainty
- Why leaders need an AI philosophy, not just an AI policy
- How to use AI as a rehearsal tool to strengthen human judgment and communication
Key Takeaways
- Not all friction is bad. Administrative drudgery is fair game for automation. Relational friction is not. If people lose their tolerance for disagreement, nuance, and discomfort, they do not become more human at work. They become less capable of doing the human part of work well.
- Basic skills decay when we stop practicing them. Offloading every simple email or low-stakes communication task might feel harmless, until six months later when people are less clear, less thoughtful, and less able to say what they actually mean.
- Training alone will not solve AI resistance. Rachel argues that many teams are not short on information. They are holding mixed feelings, fear, excitement, uncertainty, and identity disruption. That ambivalence has to be named before adoption gets easier.
- Grunt work did more than waste time. It also taught context, judgment, and how organizations actually function. You do not need nostalgia for inefficiency to admit that struggling through the reps built leadership capacity.
- This is partly a grief process. When AI starts approximating the thing someone spent years mastering, that is not just a workflow change. It can feel like a personal and professional loss. Pretending otherwise does not make it disappear.
- Leaders should go first. Before asking teams to adapt, leaders need to examine their own 2:00 AM questions about AI. Anxiety has a way of leaking into culture when it is left unnamed.
- Use AI as a rehearsal, not a replacement. One of the most practical ideas in this episode: use a chatbot to role-play difficult conversations, sales calls, or interpersonal scenarios. Let it expose blind spots and help you practice. Then go do the real human work yourself.
- A policy helps, but a philosophy matters more. People need clarity on what kinds of AI use are acceptable. But beyond that, organizations need to define what behaviors they want to encourage, what judgment they want preserved, and what they believe should remain meaningfully human.
Chapters
- 00:00 – AI & communication decline
- 02:26 – Defining higher-value work
- 04:59 – Supporting AI adoption
- 06:05 – When not to use AI
- 09:09 – Good vs. bad friction
- 12:14 – The ambivalence barrier
- 14:24 – The value of grunt work
- 16:50 – Identity disruption at work
- 21:47 – Naming grief at work
- 25:47 – Productive struggle
- 29:02 – AI as rehearsal
- 32:42 – Policy vs. philosophy
- 34:32 – Pace of change
- 35:36 – Protecting human development
Meet Our Guest

Dr. Rachel Wood is the Founder of the AI Mental Health Collective, a cross-disciplinary community advancing responsible conversations and collaboration at the intersection of artificial intelligence and mental health. With a PhD in cyberpsychology and experience as a licensed counselor, she is a speaker, advisor, and consultant focused on how emerging technologies are reshaping human connection, decision-making, and emotional wellbeing. Known for her balanced, nuanced perspective, Rachel brings together clinicians, technologists, and researchers to explore both the opportunities and ethical complexities of AI in mental health and the future of human relationships.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Rachel:
- AI Mental Health Collective
- Rachel’s website
David Rice: Your team has been using AI to write emails for six months. Simple ones, quick responses, nothing complicated. And you've noticed the quality is starting to slip. Messages feel generic. There's less personality. Less clarity about what actually matters. They're not losing the ability to communicate, but they are losing the practice of thinking through what they actually want to say.
Today's guest is Dr. Rachel Wood, a cyberpsychology researcher and psychotherapist who sits at the intersection of AI and mental health. And she's gonna talk about something a lot of us are feeling, but maybe not naming. We are lowering our tolerance for friction, all friction. Even the beneficial kind.
There are two types of friction in work. Administrative friction: copying and pasting spreadsheets for an hour. That's stuff AI should eliminate. But then there's also relational friction. The tension when a colleague thinks differently than you do. The negotiation required when colleagues disagree. The discomfort of difficult conversations.
This is problematic, because if we automate away all discomfort, we won't be freed up for the more human work. The more human work, by nature, includes friction. It requires tolerating that your coworker sees the project differently. It demands learning to work together as a team, which is a skill that doesn't just happen. But nobody's handing people a playbook for how to actually get better at the human side of their job.
Leaders keep saying, be more strategic, be creative, be collaborative. But those are outcomes, not instructions. And meanwhile, adults are losing patience for learning new tools, kids are struggling to focus on reading. Something structural is changing in how we handle difficulty. So today we're gonna cover how to use AI to help you get better at being human, not replace your humanity.
The communication skills that erode when we stop practicing them. Why you need an AI philosophy, not just an AI policy. Whether the rate of change is faster than the human psyche can adapt to. And how to differentiate friction worth removing from friction worth preserving.
I'm David Rice. This is People Managing People. And if you've been treating AI like a tool deployment instead of a human development challenge, hopefully this conversation is gonna help reframe everything for you. Let's get into it.
All right. Well Rachel, welcome to the show. It's good to have you.
Rachel Wood: Thank you so much. I've been looking forward to this. It's good to be here.
David Rice: So where I wanted to start with this conversation: I often think leaders have spent the better part of the last two years telling their teams that AI will free them up for higher-value work. And I still hear this repeated sometimes. I thought by now it's gotta have become a cliche, right?
But I still hear it being said, and I'd say most never defined what the higher-value work actually is. You have a pretty clear position about what it should be. So take us through that: what that work is, and why.
Rachel Wood: I hear this a lot too, David, and I do think that it's somewhat context specific. There isn't a one-size-fits-all, because different industries and different teams are gonna be focusing on specific things, but some of that work can really look like focusing on enhancing creativity and innovation.
And so when you remove some of the burden of the administrative tasks, there's just more cognitive space to actually be imagining, creating, trying new things, thinking about how to be innovative in the work. And another thing that I think we're freed up to do is to learn to work better with each other, so that there's a congruence within teams that maybe there wasn't cognitive space for before. Now you can really learn: how do you work together as a team well?
Because it's a skill. It doesn't just happen when you get the right players on the team; you then have to learn to work with each other. And so it can be learned, it can be taught, but that's another thing there's more space and time to be focusing on now, I think.
David Rice: Yeah, I mean, I would agree. It's funny 'cause the whole thing kind of resonates with me. I've heard it said to me personally, and I keep seeing this pattern where companies deploy AI and then tell people to be more strategic or more creative or more collaborative, and all of those are outcomes.
They're not really instructions. I feel like nobody's handing someone a playbook for how to actually get better at the human side of their job, or how to maximize their unique contribution by using AI to address their weaknesses. And so I think what's interesting is you're saying the tool itself could maybe be part of the playbook, which came up when we were talking before this, and I think that's something a lot of people haven't considered.
Rachel Wood: I completely agree with that. And I think one thing that is really important: when you have a budget line item for your AI initiative, whatever that is, there needs to be a concurrent line item for how to support people in using it, because you're paying X amount for the technology itself.
Okay, but really that investment is only bolstered and ensured by training people on how to engage with it, what to do, what not to do. And like you said, there's this idea of using AI itself to help you get better at things. I mean, wouldn't it be interesting, David, just to plug into whatever chatbot you use and say: hey, so you're helping me offload some of my admin work.
What should I be doing as a human, in my specific job, that can really make me better? The things you, the chatbot, can't do, but I, the human, can do. So, I mean, it's interesting: let's use AI to help us do things better.
David Rice: Absolutely. Oh, it's funny too, 'cause I was at a conference this week and everybody's talking about being agitated with their coworkers or their direct reports using it to write emails.
And I'm like, yeah, that is a little infuriating, you know? You can't just write an email? And if you ask the AI, should I use you to write an email, I think Claude would tell you that's not really a task you need it for. So I think we need to start thinking about it like that.
Now, Claude might say, if you have a dense response that you need help making sense of, or there's a lot of nuance and context and you need me to rephrase it, that's different. And I think we would all agree that is like a valid use case. Like you are wanting to communicate something that's really hard to communicate and maybe it can help you articulate that in a different way.
Especially if it's a really important message, something really important to the business. Okay, fine. But if it's just getting back to somebody about why the report was late? No, come on. Let's use our heads here.
Rachel Wood: Well, there's this huge level of discernment that we have to employ in order to know when it's best to use it and when not.
And if we're non-discriminatory, in the sense that we use it for everything, well, over time, some of the smaller tasks, we actually won't be able to do them anymore. Our ability to do them will erode. So your example, David, this idea of having a chatbot write a very simple email. It may sound like not a big deal to keep offloading that, but come back to me in six months and tell me what it's like to write a very simple email on your own.
So I think we have to be discerning about what we're willing to not know how to do anymore after six months or a year of not practicing it, versus collaborating with AI in such a way that it helps our thinking as opposed to replacing it.
David Rice: I think this is a bigger issue than a lot of people realize. What we're talking about here, even with a simple email, that's communication. It's learning how to communicate with your coworkers and say things to them in the way they need to hear it.
And that's a human skill, something we all learn by trial and error a lot of the time, reading body language and feeling out the situation and its context. We've been hearing from leaders that something strange is starting to happen, and we see it in ourselves.
Even when I talk with colleagues in different industries, I think adults are losing the patience for learning new tools. You give up after a couple of minutes, you switch to something else. The context switching is crazy. Kids are struggling to focus on reading. We see that all the time.
So, from a psychological perspective, is this just a preference shift, or is something structurally changing in how we handle difficulty?
Rachel Wood: I think that we have to look at the history of what's really built up to this moment in time. So think about social media. I mean, even going back to the invention of the internet and then coming into social media, all these things have contributed to where we're at today.
And so we can't just point a finger at AI in and of itself as if it's causing certain things, but we can see that it's amplifying some of the existing structures in place, which is that low ability to maintain attention or to focus. And I also think there's this part of what you're saying, David, that's about friction.
You're saying that people have less tolerance for friction, and there are two kinds of friction I really wanna differentiate here. The first is administrative friction: in order to get this spreadsheet correct, I have to copy and paste, and I have to spend an hour copying from this spreadsheet to that one.
This is friction that we want removed, because this is something AI can do. So the administrative friction is something we want lowered and removed. However, there's a beneficial friction, which is more of a relational friction with others. It's tolerating that, David, you think differently than I do, and you have a different idea about the project than I have.
And so there's a little bit of friction between us that we're gonna have to negotiate and compromise and communicate on. Now, I don't know about you, but I've never been in any kind of relationship in my life, with a colleague, a partner, a friend, a family member, where there hasn't been some level of friction.
So the reality is, if we continue to lessen our tolerance for friction, we actually won't be freed up to do the more human things, because part of the human things includes friction and navigating difficult conversations and different points of view. So part of our work is actually building our muscle for navigating difficulties with one another in the workplace.
David Rice: Yeah, I think that friction distinction is key, right? I'm thinking about some of the conversations I've had lately, and some I've heard from other people. There's a bit of an assumption in leadership that if people are struggling with a new tool, it's essentially a process problem or a training problem, and you've gotta throw more education at it.
But if we're thinking about friction, it's not that people lack information. It's the capacity to sit with it and do the long, hard work, and that capacity will start to erode. And that connects to something I've been thinking about a lot, which is what happens to people coming up through organizations right now.
The ones who never really had to do it the hard way.
Rachel Wood: Yeah. Let me say one thing before I answer that, David, because that was such a good, meaty chunk. Part of what you're saying is we can't just train this out of people; it's not like more training is gonna fix this. In some cases, training might help, but it's not always the answer.
I think part of the answer, and this is part of what I research and work with organizations on, is how do we navigate the ambivalence that people feel toward AI. When I say the word ambivalence, what I mean is mixed feelings or conflicting thoughts. Like, I feel kind of excited about AI, and I'm also pretty nervous and maybe have some concerns about it.
So most people are holding some degree of ambivalence toward AI, and this doesn't get trained out of you. Training on the tool doesn't help this, but there are ways to talk about the ambivalence that can actually increase AI adoption. Because right now, if your team is sitting with a lot of AI ambivalence, which most of them are, that's a barrier to adoption, and the training isn't gonna help.
But if you focus on the ambivalence, and if you facilitate conversation around how people are experiencing the vocational identity shift they're in, that conversation is actually what lowers the barrier to adoption and brings people in to engage with it in ways they couldn't before.
David Rice: Now, there's a bit of a counterargument here that says the years of grunt work people used to do were actually inefficient. I think this is along the same line of thinking Sam Altman was on when he compared the energy a human uses eating for 20 years to the energy of an AI prompt.
But in this line of thinking, the years of grunt work were inefficient, and AI lets people skip to doing the good stuff faster. I'm curious, though, from your perspective: what does psychology tell us about why struggling through something difficult builds a kind of confidence you just can't replicate by being handed something?
Rachel Wood: Yeah. Okay. Let's frame this in terms of the career ladder. Think about it: you're fresh out of college, you get your first entry-level job, and you know what you're doing. You're getting coffee, you're going to the fax machine to, you know, send faxes.
You're picking up the donuts from the local shop. You're doing this kind of grunt work that may seem to have nothing to do with your job itself. But while you're doing that, you're really picking up, in almost an ambient way, how everything in that particular niche and industry works.
You're overhearing what the leadership says. You're watching how the leadership deals with things. You're learning the ropes, and of course, you're also doing the work of the business. So when that goes away, let's say we don't have that role anymore and AI is filling some of it, well, the people coming into a new industry or a new business have missed a crucial part of the grunt work that builds the capacity to be a good leader later in their career.
And so it's important that we go through the reps on our own, because it makes us more knowledgeable and appreciative, and it helps us know how to run things when we become leaders later. Skipping that is actually skipping quite a bit.
David Rice: Yeah, I mean, I think that's the piece that gets lost in the efficiency conversation, right?
Nobody's arguing for inefficiency just for its own sake. But when I talk to senior leaders, the ones who are really honest about it, they're gonna tell you that the reason they trust their own judgment so much is because they've been wrong enough times to know what being right feels like.
And I don't think you can shortcut that. You can't download it. I'd ask you this, just to take it a little deeper, because it's not just about skills and confidence. We've been hearing from people that this hits something more personal, right? I keep coming back to this idea of identity.
What this is really challenging is your identity, and so much of the AI challenge is mindset. The people who feel like they're thriving with this, who feel like they're really doing some interesting things, what are some of the traits you see in them that support that feeling?
Rachel Wood: Yeah. And let me also add a little bit here, because you said something great: you've been wrong enough times that you know what being right feels like. Essentially, part of what you're saying there is that you have knowledge plus experience, and what AI has is knowledge without experience.
So if an AI gets something wrong, it doesn't have any kind of repercussion within itself to recalibrate, other than, oh, I need to please the user because I gave the wrong output. With us, though, when we get something wrong, we learn and we build character. We grow, and maybe we have to apologize, or we're humbled or embarrassed. There are all these psychological things we go through.
And so knowledge plus experience is what's so important as we develop leaders over the years, and that obviously isn't present with AI. So, getting back to this concept of identity: our work identity. David, I'm sure both of us agree that who we are professionally is a big part of who we are as a whole.
We spend a lot of time at work, all of us do, and hopefully a lot of us love what we do, so we're passionate about it. We're invested. We've been educated. We've maybe gone through trainings. We've put a lot into this, and a huge portion of the pie chart of our identity is founded in who we are professionally.
So now all of that is being disrupted in different ways. We're not only looking at a professional disruption; we're looking at a disruption of our meaning and purpose. And meaning and purpose, we know, drive a lot of who we are on this earth. If you find meaning and you find purpose, you've found a really good thing.
So when that gets disrupted, or discombobulated, or even unsettled by the injection of AI, we're not just learning a new tool. We're learning a new way of who we are and how we walk upon the earth in terms of our meaning and our purpose and our identity.
David Rice: I spoke with one leader recently who was describing how her senior engineers reacted to junior people using AI to do a lot of their job.
She pointed to that and said it wasn't really a skills issue; it was an identity issue. This thing they'd spent years mastering suddenly could be approximated. I've been trying to think of whether there's a term for that, but I don't think there is.
I just keep coming back to how it causes this sort of existential crisis within us. You mentioned embarrassment as a motivator, because that's really what it is, right? We've all been wrong and been embarrassed about it, or it cost the team and you feel guilty because you did something foolish.
AI doesn't have that. It also doesn't have this existential-crisis feeling that we're dealing with. It's an interesting thing, because you have to interact with it, but I think there's something in that emotion that allows you to provide a unique value and a unique contribution to whatever comes out of it.
Rachel Wood: Well, it's so true. We really are motivated a lot by our failures, and that's a good thing. When we get something wrong, we get to learn from it and move on. But here's another example: think about a loan officer where, all of a sudden, overnight, AI can determine who can and cannot get a loan.
Well, then the loan officer walks into their job, and the one thing that was their purpose is gone. So you have this massive internal struggle of, what am I here for? What am I doing? Do I have to reinvent myself now? And reinventing yourself can be a difficult thing at times.
Although a lot of us like to progress and evolve, it still is quite a bit of work. So I just agree with you that it's really fascinating to hone in on the psychological impacts of what happens when your expertise becomes automated.
David Rice: Well, your expertise can be automated, but your experience can't.
And I think that's what we have to figure out: how is my experience gonna create value? Because if I can figure that out, I'll be able to find the experiences and the work I'm looking for. It's powerful too, because organizations right now are treating it like a reskilling challenge.
But it's at least partly a grief process. That's what you were describing there: they're mourning a version of their professional self that may not exist anymore.
Rachel Wood: Yes, that's exactly right, David. And that's why having people who are actually well versed in psychology and that realm at the table is so important for AI initiatives, because this is more than learning a new skill.
Essentially, like you said, there's a grief and a loss when what used to be your bread-and-butter expertise is now different or gone or shifted. And I think this is why it's so important that when we're learning to use AI, we're also reimagining our future professional identity roadmap.
And part of what's baked into that process, David, is learning how to collaborate with AI in a way that doesn't offload our skills. Essentially, we need to do it in a way that transforms us into the next evolution of who we are vocationally.
David Rice: It's interesting. If you're a chief people officer or a CEO, maybe at first you weren't aware this was happening under the surface as you rolled out the technology, but you start to read the signals you're getting from your team, and you realize they're not resistant to change; they're processing this sort of loss of what they did and who they were.
I guess the question becomes: okay, if we know this is happening, what do we actually do about it? Because you can't just preserve the struggle and then leave it there.
Rachel Wood: So what's really important, and I think this can be overlooked: I think people, and maybe leaders especially, don't wanna talk about this deeply because they're afraid that if you open this can of worms, everything is gonna get derailed.
If we start talking about how people maybe feel grief in the workplace, oh my goodness, we're gonna become, you know, something we don't wanna be, and we're gonna be bogged down by our feelings. And I just wanna be very clear right now that actually the opposite happens when you come into an organization and you have a structured way of facilitating a conversation that names the grief, that names the ambivalence.
When you simply name some of the things people are going through, there is a relief that ripples through teams where they go, oh, okay, good, this is what we're all feeling. It's normalized. It's okay that this is happening. And instead of holding onto all of the grief and all of the ambivalence as a barrier to moving forward, once you name it, it actually drops.
I mean, not that it completely goes away, but I'm telling you, the energy your team has been holding, all that grief and ambivalence, when that is named and released, they're freed up to move forward with the initiative. And so it is costing you more to not name it than it is to name it.
David Rice: I think too, part of it is you can't eliminate productive struggle, right? Work still has to feel like work. If people come to work and feel like they're just automating themselves out of a job, that's gonna cause a level of anxiety, and even depression, when they think about the future.
Because it's like, well, where do I go from here? You start asking yourself some really heavy questions that don't have clear answers, and that's hard for people to navigate, especially when many of them have worked really hard to have some clarity and direction. I think there's an argument that organizations need to preserve that productive struggle.
I mean, don't make work artificially hard, but be intentional about learning experiences that are gonna build judgment. What does that actually look like? There's a line here that we've gotta walk, between difficulty that helps people develop and pressure that's gonna destroy them.
Right? So how does a leader know the difference, and where the edge is?
Rachel Wood: Yeah. I think there's a huge call right now upon leaders to help facilitate their teams working through the productive struggle of a professional identity shift. If that work gets addressed, and it's an ongoing process, that is your only way to then move on to the next layer of productive work and the next layer of productive struggle, and the next layer. It's an iteration.
But I think right now, David, leaders are trying to bypass and circumvent the hard conversation about the pain and the struggle that some teams are in. They think they can circumvent it, but it's always gonna come back to you. So face it head on.
Get somebody in there who can help facilitate a really good conversation, and then you'll be surprised at how you can move forward from there at a better clip.
David Rice: And I think this is important for people to hear, 'cause I can already imagine a leader listening to this and thinking: okay, great, so now I have to make work harder on purpose while also making sure nobody burns out.
Sounds impossible. But I think what we're really saying is that this is about intention. It's about knowing which experiences we're trying to preserve and why, and what the overall impact on people is gonna be. We're not just adding friction for no reason, right? I had Bob Sutton on the podcast a few years ago.
He had a book called The Friction Project, and it's about how there is such a thing as good friction; it needs to exist in certain places. And I think this is where it gets really practical, 'cause you've thought a lot about how AI itself can be part of a solution here.
Rachel Wood: And to get really practical about what the next steps are here, for the leader listening right now: instead of thinking about how do I facilitate something for my teams, just start with yourself. When you think about AI, what are you thinking about at 2:00 AM when you can't sleep? I don't know; that's personal to you.
Are you afraid it's gonna automate a lot of your workforce and you're gonna have to change their flow? Are you afraid it's changing your job? Are you engaging with it? Are you personally hesitant because you're not quite sure how to use it in your workflow? I mean, start with you as a leader.
Write down your own concerns, and maybe your own excitement about it, and work through that process of looking at those questions. Then you'll be more equipped to hold space, knowing that your team has the same 2:00 AM questions you do. But you can't help with theirs until you look at your own.
David Rice: Yeah. You have a framework: AI as a rehearsal for relationship, not a replacement. We've talked about a lot of the risks we face with this technology and how we collaborate. Walk me through what it actually looks like for a manager to use AI in that way, practically, step by step. Let's say they've got a hard conversation coming up with a direct report. What do they do?
Rachel Wood: Yeah. Okay. I love this. This gets me excited, 'cause this is of course a fun way we can use AI to really help our teams. We know that things like emotional intelligence and interpersonal communication are massive parts of a successful company. They are not just side soft skills.
Any team that's gonna really succeed knows how to communicate and has a degree of emotional intelligence that helps them be good at relationships. You may be thinking, well, how do we train that? Here's a cool way to use AI to do it. So like you said, you have a difficult conversation on the horizon.
Maybe it's with a colleague, a direct report, whoever it is, and you want it to go well; you want it to be successful. There are two ways you could approach this with AI. The first is using AI as a replacement for relationships: you're talking to a chatbot about some different things, saying what a difficult time you're having with this colleague, and that's a dead end.
But if you use it as a practice ground, what you would do is put into the chatbot: hey, here's a situation with my direct report, X, Y, and Z. Can you role-play this difficult conversation with me? You say things like, show me my blind spots. Help me practice how to listen to this person, and help me learn how to make this conversation successful.
Then you role-play back and forth with the chatbot, take that practice, and do it in real life. There are going to be beneficial outcomes, because you feel more confident going into the conversation, and you may have insights from the role-play that you didn't have before, which, again, help it be more successful.
So overall, the framework I love to present is using AI as a rehearsal instead of a replacement, and that rehearsal is a training ground for interpersonal success within teams.
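For listeners who want to try the rehearsal idea hands-on, here's one minimal sketch of what that role-play loop could look like in Python. It assumes the OpenAI Python SDK with an API key in your environment; the model name, the scenario, and the [COACH] feedback instruction are illustrative choices, not anything prescribed in the episode.

```python
# A minimal rehearsal loop: the chatbot plays your direct report and
# coaches you between turns. Scenario and model are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {
        "role": "system",
        "content": (
            "Role-play as my direct report, who is frustrated about "
            "feedback on a late report. Stay in character. After each "
            "of my turns, add one line prefixed [COACH] pointing out "
            "blind spots in how I listened or phrased things."
        ),
    }
]

print("Rehearsal started. Type 'quit' to end.")
while True:
    turn = input("You: ").strip()
    if turn.lower() == "quit":
        break
    messages.append({"role": "user", "content": turn})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```

Because every turn is appended to `messages`, the model keeps the whole rehearsal in context, so the back-and-forth reads as one continuous conversation rather than isolated prompts. The same pattern would work for the sales-call role-play Rachel mentions below; only the system prompt changes.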
David Rice: Absolutely. I mean, yeah, I've been seeing this come up a little bit more, and I think it's cool, 'cause people need permission to use the tools that way, right?
The default right now is: summarize this, draft that, automate that workflow, do the things you'd otherwise just do yourself. What you're describing, though, is a different relationship with the technology, where it's not doing the work for you. It's trying to make you better at work that only you can do.
And that to me is the big unlock that most organizations haven't found yet. So it's like: okay, I wanna make this conversation more personal. I wanna make this sound right but not cross a line. Help me do that.
And that sort of conceptual thinking around it, even just in conversation or in relationship development, I think can be very powerful.
Rachel Wood: Yeah. I mean, think about this, David, in terms of sales teams. Set up your chatbot to role-play your sales calls. How amazing would that be? You can fail 20 times in a row with your chatbot, and there's zero consequence.
There are so many use cases for role-playing and practicing your real world conversations by using a chatbot.
David Rice: We talked to Taylor Blake recently. He works for Degreed, and part of what he's doing internally is helping develop some of that, making people go through some really odd scenarios, and I think that's great.
That was the promise of it all along, really: that it could help us think of things we never thought of before.
Rachel Wood: Yes, exactly. You mentioned something that brought another practical thing to mind. You were saying people need to feel permission to use AI. The research has shown that the baseline thing you need in place, if you don't have it already in your organization, is an AI policy.
It doesn't have to be fancy, doesn't have to be huge, but the research shows that if your company has some sort of AI policy, your employees are more likely to engage. If they're wondering, oh my gosh, can I do this, can I not, then they're gonna steer away or they're gonna hide it.
It's gonna be shadow use. So another very practical step is making sure you have that in place in your company.
David Rice: Definitely have an AI policy. I was thinking about this as I sat at that conference last week: everybody has a compensation philosophy. Where's your AI philosophy?
What are the behaviors you wanna reward, and how are you going to reward them? How do you want people to think about the technology, where do you want them to use it, and what mindset should they bring to it? Those are all things I think we've gotta define more clearly, and then we'll start to get more and more out of the tools, some of it stuff we never expected.
Rachel Wood: Exactly. And this is where it's an exciting frontier, you know? I can feel both of us kind of bubbling up, like there's some cool stuff happening. So you can wade through some of the concerns and move into a space of: let's really go after this in a way that's totally gonna support the benefit of the company and of the individual employees.
David Rice: Yeah. Let's all start getting a little glass half full about it, right?
Rachel Wood: And let's experiment. Like, let's not be afraid to try things, to try and use it in different ways and just see what works.
David Rice: Well, it's interesting, 'cause we're sitting here on that mentality piece, and you sit at the intersection of AI and mental health every single day.
And honestly, I wanted to ask: is the rate of change right now faster than the human psyche can healthily adapt to, or are we more resilient than we think? 'Cause the one thing we always come back to is change exhaustion. People are tired, and this just represents constant change.
Rachel Wood: Yeah. Yeah. Oh, David, you know, I'm someone who lives this, as you said, day and night. This is what I breathe, and I can barely keep up. There's just so much that it can feel overwhelming, and that's why it's important to find just a few things you're gonna experiment with and try those, because you can't keep up with it all.
You know, with AI, we can't really talk about it in year cycles. We really have to talk about it in day and week and month cycles, because it's changing and iterating so rapidly.
David Rice: I think this is a good place to land this conversation, 'cause sometimes I feel like I come off as anti-AI.
I'm really not. I just don't like the way we're navigating it at the moment. We really need to be intentional about protecting human development. We've gotta be thinking, from a social perspective: is this the right thing to use this technology for? Whether that's the impact it's gonna have on human beings, or remembering that every time you prompt this thing, it uses an amount of energy we should probably take into account.
And as we wrap this show up, if leaders are listening to this, there's one thing I hope you take from it: the struggle isn't an obstacle. This kind of conversation, where it's a little bit philosophical, that's not an obstacle either. This sort of struggle we're having right now, this is where our growth is gonna happen.
And your job is just to make sure people still get access to it and that they're growing with you. Yeah, I love this kind of stuff.
Rachel Wood: I so agree with you, David. Like this is the way through. And then on the other side of this, we can enter into all these ways of using it and deploying it.
And like you said, it's so important for us to be talking about, you know, using it without losing our critical thinking or our autonomy or our agency. All of these things are really important. And talking about it simply gets us through to being able to deploy it.
David Rice: Well, Rachel, I love having you on the show.
This was great. I really enjoyed the chat.
Rachel Wood: Yeah, this has been so fun. David, I am so grateful to be able to talk with you about this.
David Rice: Absolutely.
Well, listeners, if you haven't done so already, make sure you go over to the website, peoplemanagingpeople.com/subscribe. Get signed up for the newsletter, and you'll get this podcast, plus all the content we're developing nonstop, straight into your inbox.
And until next time, get into it. Right? Think philosophically about it, have a little fun with it. Ask these hard questions.
