The advent of AI is transforming the landscape of leadership in the modern workplace. As technology continues to evolve, leaders must strike a delicate balance between leveraging AI’s capabilities and maintaining the irreplaceable human touch.
In this episode, host David Rice is joined by Christopher Lind—VP, Chief Learning Officer at ChenMed—to delve into the complexities of navigating leadership in the AI age.
Interview Highlights
- AI in the Workplace: Challenges and Realities [00:55]
  - Christopher discourages firing the marketing team and replacing them with AI.
  - AI struggles with exceptions and the interpersonal aspects of marketing that are crucial for success.
  - Instead, Christopher suggests collaborating with the marketing team to identify repetitive tasks for automation.
  - This frees up the team to focus on strategic and creative aspects of marketing that AI can't handle.
  - Firing the team and relying on AI likely leads to generic marketing messages in a crowded market.
  - Scaling a startup with AI is not a shortcut.
  - AI might accelerate processes or provide innovative solutions, but core steps are still crucial.
  - Skipping steps due to AI hype can lead to problems later.
- Leadership in the AI Era [04:39]
  - Leaders in the AI age need the same core skills as before, but to a greater degree.
  - This means understanding your team and its needs, not assuming competence.
  - Collaboration is key, even if it requires vulnerability and admitting you don't have all the answers.
  - Leaders are responsible for the well-being of their teams, including self-worth and professional identity.
  - Leaders should involve their teams in the journey, not dictate it from above.
We aren’t just messing with people’s work anymore. If you’re a leader and you’re leading a team, you are messing with their self-worth, professional identity, and much more. You have a responsibility to guide and support them well.
Christopher Lind
- The Future of AI: Opportunities and Risks [07:50]
  - AI will disrupt some aspects of society, but core human values and needs will remain important.
  - Success still requires following each step in the process and building strong interpersonal relationships, which AI can't replace.
  - Companies that tried to overhaul everything for AI are already backtracking.
  - Just as the pandemic forced flexibility in work arrangements, AI will force adaptation, but not wholesale change.
  - We'll learn what works with AI and what needs to stay human-centric.
  - AI's positive impact: solving problems beyond human capacity, like disease diagnosis and dyslexia treatment.
  - Untapped potential of AI: understanding humans better to create solutions that improve our lives.
  - AI shouldn't replace human connection: AI companions won't fulfill the need for intimacy.
  - Courts are holding companies liable for AI actions, highlighting the need for responsible implementation.
  - The best use of AI combines its transactional strengths with human empathy for complex situations.
At the end of the day, our companies are built on people, filled with people, and our customers are people. There is a unique aspect where you can’t skip steps in a process for success; you can’t just jump from A to Z. The same is true for interpersonal relationships, which are essential for a company that functions well internally and for developing products that work well for human consumers.
Christopher Lind
Meet Our Guest
Christopher Lind is the Vice President and Chief Learning Officer at ChenMed, where he leads enterprise learning strategy. He's also the host of Learning Tech Talks, a podcast that explores the landscape of learning technology; an advisory board member of the CLO Exchange; and the founder of Learning Sharks, which offers organizations comprehensive advisory services on all things learning innovation and technology. Previously, Lind was head of global digital learning at GE Healthcare.

We are always looking to avoid accountability and seek all the benefits without any responsibility. That’s one of the false promises AI gives: it offers benefits without accountability.
Christopher Lind
Related Links:
- Join the People Managing People community forum
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Christopher on LinkedIn
- Check out ChenMed and Christopher’s Substack
- Check out Christopher’s podcast: Future-Focused
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos as the bot isn’t correct 100% of the time.
Christopher Lind: We aren't just messing with people's work anymore. If you're a leader and you're leading a team, you are messing with their self-worth, you're messing with their professional identity, you're messing with a lot of stuff and you have a responsibility.
David Rice: Welcome to the People Managing People podcast. We're on a mission to build a better world of work and to help you create happy, healthy, and productive workplaces. I'm your host, David Rice.
We're going to be picking up where we left off with Christopher Lind, Chief Learning Officer at ChenMed. This is a sort of part two to our previous conversation where we were speaking about AI. We're going to get into the realities of managing people through this next era of work and what we look forward to with this technology adoption that's coming our way. So without further delay, let's just get right into it.
So a business owner comes to you and they want to fire the whole marketing team because they think they can just get ChatGPT to do it. What do you tell them?
Christopher Lind: I tell them they can do it, but at their own peril. Here's the thing with it: never in human history have we had to deconstruct work to the level that we have to now. And I think that's the problem. There's a lot of wiggle room with people, because people are contextual, they're problem solvers, they're really unique creatures.

And we often take that for granted. So let's use your example, and this isn't me picking on marketing, to anybody who's in marketing that's listening to this. But somebody could look at marketing and go, I see these activities happening. That seems really robotic. Why don't we just have AI do that, teach it how to do those things, and then we can get rid of the people? But what you're not seeing is all the interpersonal work and the problem solving and the exceptions, because that's another thing AI is just terrible at. It's great when things go right. It's terrible when something is an exception to the rule. It just blows up. You don't believe me?

Go find an autonomous car and put a road cone on the hood. It can't function, because it's like, oh my gosh, what do I do about this? It can't solve its way out of it. And I think that's the kind of stuff we take for granted. A wiser way to do it would be to actually deconstruct the work with your marketing experts and say, what are the things that are very robotic in nature that we could maybe do away with or automate with minimal oversight? And what would that free you up to do? Because that's my biggest criticism when it comes to people and their approach to AI: all they're thinking about is how to get more efficient and optimize what they're doing today.

And I'm like, that's certainly one thing you can do, but what are all the things you know you should do, or would love to do, that you just can't because you're limited on resources or time or whatever? But sure, go ahead, fire your marketing team, hire an AI bot.

You'll have a bunch of noise created by AI in a saturated AI market. You won't differentiate. And when you ask it to do something different, it'll go, here's another generic way we can do the same thing. And then you'll wonder why you can't keep up.
David Rice: We talk about scale all the time because we cover the startup world. So you're trying to scale your operations, and I'm thinking to myself, this is going to change the way people approach that. But I'm not really sure how much it can change the trajectory. Maybe it can make you go a little bit faster, but you're still going to need to hit all the steps along the way.
Christopher Lind: Yeah. Well, that's the thing, people think it's like the easy button. Hey, how do we get this product to market faster? Can we just hit the button and then we're in market? Nope. You still have to go through the other things. Maybe you can accelerate it. Maybe you can do it differently. Maybe you can innovate on it.

But you can't just skip steps. That's, I think, the risk we're running right now with how fast we're moving: there's a lot of hype around, hey, you can go from A to Z and get the same result. No, you won't. Maybe for a short period of time, but it's going to blow up in your face, and it's going to hurt when you try to clean it up.

David Rice: And it gets at our need for instant gratification, which comes from all this other technology we've had feeding us that idea.

Christopher Lind: Right. Why should I have to wait? I should just be able to do it right now. But no, it doesn't work that way.
David Rice: For leaders who are navigating this next era of work, we're in this really interesting time where leaders have to think about what the workplace is going to look like or feel like, or what a given job is going to become. Like you said, we have to deconstruct work, and then they go out into what we've been calling the AI rabbit hole, right?

And they're seeing all these different perspectives. You can hear that it will do everything in your business, and you can also hear that it will enslave your entire workforce and destroy the world. That's the range of possibilities people will present to you. So what would you advise for leaders who are thinking about this and don't want to get caught up in sensationalist nonsense or cavalier thinking? They want to have a measured response.
Christopher Lind: So what's funny is the wisdom I give to leaders now in the AI age is the same wisdom I would have given them pre-AI. But to me, it's exponentially more important than it's ever been. I think a lot of leaders, because they had people, just assumed that things would be taken care of.

Now, more than ever, you need to get to know the ins and outs of your team, and you actually need to lean on your people. This is an age where that attitude of "I'm a leader, I have all the answers, I know all the details" doesn't hold up. I mean, it was never true. It's not like a hundred years ago leaders had all the answers and knew everything, but you could posture and get away with a little bit more.

Now you do that at your own peril. You actually need to trust and empower your people, bring them into the fold, and say, help me understand this. What are we dealing with here? What does this look like? And do it collaboratively, which can be uncomfortable for some leaders, because you have to be vulnerable and admit you don't have all the answers. You have to invite people in where, historically, you might have pushed decisions down and said, here's how this has to work, and maybe that worked.

Now you can't do that. Or, I mean, you can, but it's not going to go well for you. So I think that's one thing. The other is that being a leader, and when I say leader, I mean someone who cares deeply about the people following them, has never been more important, because we aren't just messing with people's work anymore.

If you're a leader and you're leading a team, you are messing with their self-worth. You're messing with their professional identity. You're messing with a lot of stuff, and you have a responsibility to shepherd that well. And I think that means actually taking the time to care for the people underneath you.

That goes back to inviting them in, helping them see, hey, here's where we want to go, I want you to be part of this journey, instead of, I'm going to try to figure it all out in the corner office and then tell you how it's going to play out. Not only is that going to end poorly operationally, but you're going to destroy people along the way.

It's a big, important step leaders should always have been taking, but now more than ever, it's paramount.
David Rice: The other thing that comes up whenever we talk about this, and the topic floats around a lot, is that it ends up becoming an existential conversation, right?

It's easy to look at some things and say, well, that's just a tool, no big deal. But the tools we have today are just the tip of the iceberg, right? We know more is coming, that it will get more sophisticated. So, since we started talking about speed, if it moves too fast, do you think it threatens the way we've structured society, the things we do that have value, the things human beings do that have value for economies, society, business? You know what I mean?
Christopher Lind: Yeah, I know what you're getting at.
David Rice: Yeah.
Christopher Lind: So I do think there are some things that are just going to rip apart as we go through this journey. But here's what's going to be interesting, and I'm already seeing some of this: there are certain things that endure, because at the end of the day, our companies are built on people, they're filled with people, and our customers are people.

So there is this unique component where you can't skip steps in a process and still succeed. You can't just jump from A to Z. And the same thing is true of the interpersonal relationships a company requires to work well internally and to develop products that work well for human consumers.

So when I think about this, I'm already seeing companies say, hey, with this AI age, the whole thing's going to have to change or the world's going to burn. And I'm like, hmm, actually, no. I've seen some companies take that approach, and they're already stepping back from it, going, yeah, you know what, it turns out there are some things that are really durable and consistent that we have to stay focused on. But I think there are other things that are going to break at the seams.
And we're already seeing that. We saw that through the pandemic. I mean, to me, the whole idea that work is proximity based, come on. I can't believe we're still having conversations about where people sit and do their work in many regards. But we're still learning that lesson. And I think AI is the same one where there's going to be some things we hold tightly to, and then there's going to be other things we just throw out the window and go, ah, we don't need to worry about that anymore.
And some of it is going to blow up, and we'll go, oh, I guess we need to dial that one back.
David Rice: What's something you're really looking forward to as this technology develops, something you think is going to have a really positive impact on society? And then what's one unintended consequence that you think is definitely in our future?
Christopher Lind: So, we'll start positive, because we've had a heavy conversation. There are some things that we just literally do not have the human capacity to process. We just can't. In all of human history, it didn't matter how many people you put in a room.

There was just no way you could ever do the analysis and the problem solving to fix this stuff, to identify the patterns. And I think there are patterns hidden everywhere. We're going to be able to do things that were formerly impossible. One example: on my podcast, I just talked to somebody, and they've cured dyslexia, because before, you couldn't see the linguistic patterns to understand the connection to people's brains and how it was breaking down and all this stuff.

It wasn't necessarily observable by a person, and you couldn't observe it at scale to actually get to that point. Another one: I talked to a CEO whose company is diagnosing diseases by having people cough into an app. It can pick up the pattern of your cough. We never would have gotten there without artificial intelligence, because how many coughs, David, would we have had to listen to, to finally pick up on that?

And could we have even heard the auditory cues to pick it out? Probably not. So I think about the new things we haven't tapped into that I find very exciting, where we go, that really is robotic; it's truly just a scale problem that we have. And I don't even think we've scratched the surface, because everybody's too busy using generative AI to write more articles and create more video content to cram the internet with junk.

I don't think we've really even fully tapped into understanding human beings better so that we as people can do better things with it. That's the part I get excited about. The part that makes me nervous is when we start substituting things we shouldn't be substituting. We shouldn't be using aspartame when we should be using real sugar.
And so, just one example of this: we're in a loneliness crisis. People desire deep connection. They want relationships, and they're struggling with that. They don't know where to find them, their skills at building relationships with others aren't great, and there are all these other factors.

And artificial intelligence offers, hey, you don't need friends, you have me. And that doesn't end well. Intimacy doesn't happen between a conscious and a non-conscious being. That's one of the things that worries me, because there are a lot of forces pulling people toward that, and there are a lot of implications to it.

When you start looking for human connection from a non-human entity, I personally don't think we should be mashing those two things together, or at least we should do it very, very carefully.
David Rice: Yeah. It can get Black Mirror on you really quick, like a couple of episodes of that show.
Christopher Lind: On a lighter note, I did an update about this last week, because I think sometimes you can go really far down the rabbit hole, but even on a practical level, did you hear about the Air Canada thing with their chatbot?

The chatbot gave a customer an answer that wasn't right, the customer sued, and the court said, your chatbot did it, you're on the hook for it. I think what we see there is that certain things are just transactional in nature, and artificial intelligence is great for that.

If you just need something taken care of that's always the same type of thing, fine. But when I look at that case, that person was going through a bereavement. They were dealing with the death of a loved one. There were all these other factors, so this was not just a transaction, not a "hey, I need to return this present I got for Christmas" type of thing.

There was something deeper going on there, and you can see that trying to fully automate it didn't go very well. But you could have bifurcated it: there was a transactional component, so maybe a customer service person could have understood the situation, had a conversation, and then let AI do the actual transaction part.
I think they could have solved for that.
David Rice: That's just one example that I've heard. I was talking to a guy for a story I wrote, and he was telling me they have one now that promises it can text your mother for you. It'll respond to her. And I'm like, don't do that to mom, it's your mother.

Just text her back. But okay, maybe that feels a little bit extreme. I think you're right, though, about the isolation and loneliness factor; people will start to use it in different ways. And then I think there's also a consequence where, if we start having it answer emails for us, well, did it make a commitment in that email that I now have to be aware of? I just think we're opening up a whole world of...
Christopher Lind: Well, and it goes back to accountability. We are always looking to avoid accountability, like, I want all the benefit with none of the accountability. And I think that's one of the false promises AI gives. It's, hey, you can have all this benefit, no accountability. And it's, no. I mean, you start having an AI texting your mom back.

I don't have to draw that line out very far before I go, oh no. Eventually mom finds out, and she's like, you don't even respond to my texts yourself? And there's distrust in the relationship, and you're like, really? All because you didn't want to take the time. There are certain things we just should not shortcut.
David Rice: I agree.
Well, we're coming up on time, so one of the things I like to do before I have anybody sign off is give you a chance to do a little promo. Where can people find you and keep up with what you've got going on? Plug your podcast.
Christopher Lind: Yeah, so I'm very active on LinkedIn, so I do a lot of content out there and for me it's human authored.
I know with a lot of social media, people just say, hey, ask ChatGPT, and copy and paste. I spend a lot of time thinking deeply about this stuff. So I'm on LinkedIn a lot. I also have a podcast called Future-Focused, where my goal is to help people make sense of this stuff, because it is overwhelming, and it can be frightening.

And I really just want people to be equipped. You still have to make the decision; I'm not going to tell you what to do. But I do want you to at least understand what you're dealing with so you can make wise, discerning decisions. So those really are the big ones. And for the people who are like, I don't have time to listen to another podcast:

Last year, I started a Substack where I do a "hey, if you didn't catch the whole thing, here are the biggest things you should walk away with" recap. So those are the places you can find me.
David Rice: And the last thing is I've started a little tradition here on the podcast where I give you a chance to ask me any question you want. So, shoot.
Christopher Lind: I'm curious, where do you sit on this? Are you excited? Terrified? A mix of both? What are your feelings about where we are right now?
David Rice: It's a little bit of a mix. I maybe lean a little bit towards being skeptical of the whole thing. I wouldn't say terrified by any means, because I think it's like anything else in human history, right?
There's going to be some good things and there's going to be some bad things. Someone invented the game of football, and it's great because we all get entertainment out of it, except there's this entire class of human beings that now has brain damage. You know what I mean?

Yes, they make a lot of money, but there are good and bad sides to every single thing we put into motion in the universe. So with this, I think the possibilities in medicine are going to be amazing. But if it's not handled properly, it could have major social implications that are very damaging and that maybe only serve a very small group of people. It has the potential to do that.

And the way we've been going as a society, that group tends to be the one that gets served most of the time, and a lot of other people don't. That's the part where I have a little bit of concern, where I think that might not be good. But hopefully the goodness in people will show, and we will learn and change and grow and evolve and continue to adapt to whatever comes our way.

So I think people are resilient, and we'll find our way with it. I just have no idea what that looks like. I don't think it's going to be Minority Report or Terminator 2 or something like that.
Christopher Lind: It's funny, somebody asked me that the other day and I was like, honestly, even if it is, I don't care because they're going to be super efficient.
So, they're not going to waste any time. I'm not even going to know what hit me.
David Rice: It's going to happen so fast at the rate we're moving.
Well, thank you for coming on today. When I have this conversation, there are a lot of folks who, when you don't have record pressed, will engage in this big conversation about, like you said, the existential questions and the bigger ideas.

But the moment you hit record, it's just, well, it's fantastic. And I'm like, where's the conversation we were just having?
Christopher Lind: Well, I think it goes back to this: anybody who says they know is lying. Anybody who's like, I know where this is all going, I know how this is all going to play out, they're lying.

And I think that's an uncomfortable place for a lot of people to be right now, to say, there's a lot of complexity to this, and I can't necessarily make sense of all of it. That's a very vulnerable position to be in. So I can see where some people would be a little uncomfortable being on the record going, I don't know.

I mean, maybe Eliezer's right, and we won't be here in two years because it's just going to vaporize us all. I don't know, but I'm optimistic that won't happen.
David Rice: I agree. It's one of those things where I can understand people's discomfort because I don't even know how it works.
You know what I mean? I can't even explain the basic mechanics of it, so why would I know where it's going to go? And I think for a lot of people, especially when you talk about business leaders, these are entrepreneurs. They're not necessarily technologists in a lot of cases.
They're looking at it as confused as I am at times, so.
Christopher Lind: Well, I think that's, if anything, a good thing. Because to your point about being optimistic about what we'll do with it, I hope it brings us together. I think there are a lot of things right now that are tearing us apart, but I'm hoping that at some point we're going to go, wow, this is awful.

And it's going to pull us back together, because we need each other. That's the thing. We all need each other to navigate this. Nobody's going to solve this on their own. If you think GPT-5 and you in a boardroom by yourself are going to solve all of this, it's not going to work that way, because there's just so much stuff that you don't know, and AI doesn't have the context.
David Rice: Absolutely. And listeners, as we start to sign off here, we don't want to scare anybody. Don't let this influence the big decisions of your life, like the career trajectory you want to take, your dreams, whether or not you're going to have children or buy a house. Don't base any of these things on what's happening with AI.

Because I've seen some pretty reactionary thoughts on how people are going to live their lives, and I'm like, I don't really think you should change your personal goals or how you want to live. If you want to own a home, own a home. Don't hold back because you think, well, I may not have a job.
Christopher Lind: The thing with that, and one of the ways I've navigated it, and I'd encourage people to think this way: I grew up in a funeral home. And that reality is true every day of your life. Tomorrow is not guaranteed. People say they want to live their life well, "but I don't know where this is going to take things."

Well, you also don't know if you're going to have a heart attack in the middle of the night, or walk out your door, slip, and crack your head. I've seen so many people go through life worrying about the what-ifs that they actually miss out on living right now. And that's where I completely agree with you. "Don't buy a home?" I mean, I've got seven kids. I sometimes get the question, do you think it's wise to have kids right now? I'm like, yes, I do. And I love every single one of them, and I wouldn't change it for the world.
David Rice: Right, if you want them, why wouldn't it be? If this is a sincere desire.
Christopher Lind: Right. I'm not going to go, Oh, I'm worried about what AI may do.
David Rice: We're going to go ahead and sign off, but Chris, thanks for joining us today. I appreciate it.
Christopher Lind: Yeah. Thanks so much for having me, David.
David Rice: Alright listeners, if you want to keep up with all things people operations and HR, and you're not already subscribed to the newsletter, head on over to peoplemanagingpeople.com/subscribe and get signed up.
If you want to keep up with the latest developments in AI and technology, and AI is getting implemented into almost every piece of HR software these days, you're in luck: we're your source for software reviews and the latest of what's going on in the HR software world. So head on over to our tools section, check out the in-depth reviews and our best-of lists, and know that we're going to be covering software a whole heck of a lot more in 2024.
So until next time, enjoy the summer heat, go to the zoo. Everybody loves a zoo. It always makes you happy. Cheers.