As we navigate through the ever-changing landscape of technology and work, understanding how AI intersects with our daily lives and professions has never been more critical.
In this episode, host David Rice is joined by Christopher Lind—VP, Chief Learning Officer at ChenMed—to offer valuable insights into how AI is reshaping work and life.
Interview Highlights
- Meet Christopher Lind [01:25]
  - Christopher’s career path wasn’t linear. Originally interested in computers, he found them boring and shifted his focus to people and the impact technology has on them (late 90s/early 2000s).
  - He was never easily categorized as a technologist or anything else, but thrived at the intersection of business and technology.
  - He found his niche in helping businesses achieve their goals through people and technology.
  - He has had a dynamic career working with various companies on unusual projects.
  - His current role as Chief Learning Officer at ChenMed allows him to develop employees in a unique way, not following the typical mold of corporate learning leaders.
- The Impact of AI on the Workplace [03:56]
  - AI is developing faster than any previous technology.
  - This rapid advancement creates a challenge because infrastructure cannot adapt as quickly.
  - There are concerns that the societal and ethical implications of AI are not being considered carefully enough.
  - Experts have significantly underestimated AI development timelines; milestones expected to take years have arrived in months.
  - There are two extreme viewpoints about the dangers of AI: imminent destruction of humanity vs. inevitable human adaptation.
  - A middle-ground view suggests that society will make significant mistakes along the way as it integrates AI.
- AI Development Concerns [09:47]
  - Unlike with social media’s rise, many AI researchers themselves are warning about the rapid development and potential negative impacts.
  - Christopher compares the situation to the development of the nuclear bomb, where people recognized the dangers and made efforts to slow it down.
  - AI is seen as potentially far more powerful, and far more accessible, than a nuclear bomb.
  - Christopher acknowledges some fear-mongering from those involved in AI development.
  - He believes in learning from past mistakes but also highlights humanity’s ability to adapt.
  - There’s a need for critical thinking before blindly embracing every new AI technology.
- The Future of Learning and Specialization [11:52]
  - Christopher warns against the pressure to constantly learn everything and be the best at everything.
  - He emphasizes the value of specializing in a niche and becoming an expert in that area.
  - There’s no clear answer on whether brain chip implants will be necessary, but Christopher doubts the technology’s stability.
  - He believes people will need to get comfortable with constant change and adaptation.
  - He uses the typewriter repairman as an example of how some jobs will be replaced by technology.
I’ve observed how unstable technology is. It’s built by humans, and we are unstable, fractured, and imperfect. Expecting AI to be perfect and solve all our problems, to me, is foolishness.
Christopher Lind
- Advice for the Next Generation [15:19]
  - As a parent and leader, Christopher advises against pursuing specialized AI degrees.
  - He believes a focus on durable skills like problem-solving, creativity, and communication will be more valuable.
  - Applying those skills to specialize in one’s area of passion is key, regardless of the field (marketing, manufacturing, etc.).
  - Encouraging children’s imagination and innovation is important for future success.
There are some skills today that are completely durable. You might need to apply them differently, but even in the AI age, skills like problem solving, creativity, decision making, communication, and interpersonal skills will remain essential. If anything, you will need these skills even more.
Christopher Lind
- Specialization vs. Generalization in the Age of AI [17:47]
  - Christopher argues against becoming a generalist – AI can handle general tasks better.
  - He emphasizes deep specialization in a domain, not just an activity (e.g., understanding computer science, not just a specific coding application).
  - Losing this deep understanding means ceding too much control to machines, with no one able to fix them when they malfunction.
  - Specialists who understand their domain can make informed decisions, predict problems, and mitigate the risks of rapid technological change.
Meet Our Guest
Christopher Lind is the Vice President and Chief Learning Officer at ChenMed, where he leads enterprise learning strategy. He’s also the host of Learning Tech Talks, a podcast that explores the landscape of learning technology; an advisory board member of the CLO Exchange; and the founder of Learning Sharks, which offers organizations comprehensive advisory services on all things learning innovation and technology. Previously, Lind was Head of Global Digital Learning at GE Healthcare.

AI can perform general tasks a thousand times better than any human. So if you want to be ubiquitous and a jack of all trades but master of none, you can follow that path, but I don’t think it will fare well for you.
Christopher Lind
Related Links:
- Join the People Managing People community forum
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Christopher on LinkedIn
- Check out ChenMed
Related articles and podcasts:
- About the People Managing People podcast
- How To Use AI To Empower Your Employees & Transform Your Org
- Is AI A Better Trainer Than You?
- How Will AI Impact Human Resources?
- The Cognitive Cost Of Convenience: AI Will Impact Our Brains
- Should You Be Worried About AI Taking Your Job?
- AI is Part of Your Teams, Not Just for Your Teams
Read The Transcript:
We’re trying out transcribing our podcasts using a software program. Please forgive any typos, as the bot isn’t correct 100% of the time.
Christopher Lind: What are you really good at? What are you really passionate about? But let's talk about how AI is changing that and, you know, what is sustainable about that? Because as I look at skills, there are some skills today that are completely durable.
David Rice: Welcome to the People Managing People podcast. We're on a mission to build a better world of work and to help you create happy, healthy, and productive workplaces. I'm your host, David Rice.
My guest today is Christopher Lind. He's the Chief Learning Officer at ChenMed. In his role, Chris hits the intersection of learning, business, and the human experience. We're going to be talking about AI today, what it means for the human experience at work and in life.
Chris, welcome!
Christopher Lind: Hey, thanks for having me on David. I will say based on our pre-recording session chat, we're going to have fun today, to say the least. It's going to be a good way to end a Monday.
David Rice: Oh, yeah. No, I mean, give listeners full transparency. We haven't like predetermined what we're going to talk about here aside from just like a general sort of theme to keep it around.
Christopher Lind: Which is actually good, because had you done that, we wouldn't have followed it anyway.
David Rice: Yeah, that's true.
Christopher Lind: It's kind of my style. It's like, we're going to talk about this. Well, okay, I guess we'll see.
David Rice: Yeah, exactly. We're just, we would have thrown it out the window after the first question.
Which is, of course, as a starting point, I always want to give our guests a chance to introduce themselves. So tell us a little bit about your career, how you got to where you are, and what you've been up to lately?
Christopher Lind: Yeah, so a little bit of background into me. So I actually, I say I'm a husband and father first. So I've got seven little kids. My oldest just turned 13 yesterday. So as you can imagine, that keeps me very, very busy. And to me, that's far more important than anything I do on the professional side. But my background is kind of a unique one in that I really was never in HR.
I actually wasn't ever in HR until my role as the Chief Learning Officer, because I originally was a total computer geek and everybody thought I was going to go build some software company or whatever, and I hated it. I thought it was really boring. And I was fascinated by people, but was really fascinated by the way technology was changing people.
And this was, back in the late 90s, early 2000s, when it was like the internet was just coming, personal computing was just happening, cell phones were just coming out. And I was going, this is nuts. What's going to happen in the world? And so I was, I found myself always as the person, nobody really knew what to do with me.
They're like, I don't know, you're not really a technologist, but you're not really like something else. So I don't know. So I kind of bumped around in my career, but what I found was I was always the person that business leaders called when they were like, here's what we're trying to do, how do we get people to do it and how do we use technology to do it as best as we possibly can?
And that was my sweet spot. And so I've had a pretty dynamic career with some crazy companies doing some crazy stuff, but I always was integrated into a business function or connected to a business function. And it was only recently, it's been three years now, that ChenMed called and they were like, do you want to lead the way we do enterprise development of our employees? And I was like, I mean, that sounds kind of interesting, so long as I can do it through my own flavor, because I definitely don't fit the mold of most corporate learning leaders.
Let's just say that. That's just the politically correct way that I'll put it.
David Rice: I can tell. I've talked to a few and you definitely have a unique perspective.
Christopher Lind: Yeah. Let's just say I don't get invited to a lot of parties.
David Rice: All right. So obviously, like I said, we're talking about AI today. As somebody who's a leader in L&D, you're looking closely at this all the time: how it impacts the workplace, obviously, but also how it impacts us as humans.
And we can kind of get into all that, but where I wanted to kind of start was when we were talking, the thing that like we kept coming back to was essentially the speed at which this thing is coming into the fold for the workplace, for society as a whole, right?
Like when we talk about it, people always say to me things like, this isn't the first groundbreaking technology that changes the way we live. I've had people say that, well, cars change everything. I don't hear you saying we need to roll back and move slower on cars. And I'm like...
Christopher Lind: Go back to the horse-drawn carriages.
David Rice: Yeah, that's a reach, pulling that one out. And I'm like, okay, but let's unpack that for a moment. It took 60 years to go from the Model T to the Ford Mustang. And in that time, the interstate highway system was built, the autobahn was constructed, cities were redesigned, and asphalt roads proliferated.
Right? Like the technology itself didn't just change. The infrastructure of our world changed to accommodate it. And like that has a huge knock on effect, right? For accessibility, for environmental effects and on and on and on. Every action has a reaction. When you set something in motion, you don't necessarily know what that's going to be.
And I'm just wondering at the pace, like we were talking about the pace things are moving at, how concerned are you that like the infrastructure of essentially our world, how we have come to do things?
Christopher Lind: Is it sustainable or are we just going to blow ourselves up?
David Rice: Yeah.
Christopher Lind: Yeah. So it's really funny, because I've been doing my podcast for almost five years now.
And it was because I just was having all these conversations with technologists and business leaders, and something has shifted dramatically, right around the time of the pandemic is the best assessment I can make. Things had been moving fast for a while and the pace of change had accelerated, but nothing quite like what we've seen in the last three years.
Now, granted, I think a lot of the forced disruption that came out of the pandemic was actually much needed. There was change that hadn't been happening; we were holding tightly to the way things had been done, and that needed to change. Now, the unfortunate part is that it kind of shuffled the cards and threw them all over the kitchen floor.
But the AI pace, what concerns me the most about it is even the most integrated people I know in this space, people who are way closer and way deeper in the rabbit hole than I am, they keep going, I'm a little nervous about this, because we keep saying it's going to be five years before... And again, it's not a vague, oh, things are moving. That's where I get nervous, when people just say, oh, things are changing too fast.
It's like, well, what do you mean? Because if you don't have a targeted measure, then it's as ambiguous as anything. But where I see the technologists that I know saying, these were milestones we had actually established as it's going to take us this long to get to this point and we've cut those things in like a 10th.
Oh, it's going to be 10 years before we actually implant a chip into someone's head and it successfully connects to a computer. Oh wait, it wasn't 10 years, it was 3 months. And granted, things have been building longer than that, but with some of these things, the assumption that the natural progression rate was going to be this has suddenly gone that way. And one of the things I don't see is us taking the time to go, what are the implications of this stuff that we're not taking into consideration?
And I think that is where you combine those things together. And I mean, I do have concerns. I guess when I sit on the spectrum of things, you hear folks like, what's his name, Eliezer Yudkowsky. I talked about this last week. He came out and he said the end of humanity is coming in as soon as two years.
And I went, well, that's one extreme you can jump on is that, by the two years from now, we're just going to destroy ourselves. Then you see the other end of the spectrum where everybody goes, ah, this is the same as it's always been. We're just going to quietly adapt and everything will move forward.
I tend to sit somewhere in the middle, where I go, we're going to make some colossal mistakes and we're going to do some really stupid stuff. And I've already advised some people where I'm like, you're not going to want to do that, because it's going to blow up in your face, and been told, nah, I think the technology is there.
And then it explodes. But I think the other foolish mistake is to think, Oh no, it's not, we're fine. Cause I don't think we're fine.
David Rice: Yeah. There's like this range of attitudes about it. Like some people are super blasé. I wrote this article for the website called The Cognitive Cost Of Convenience. And I was trying to examine how our brains essentially interact with technology, and where is this going to go?
Because I'm not saying technology has been all bad for our brains, but it's been sort of a path toward laziness, right? In a lot of ways.
Christopher Lind: I mean, laziness and narcissism are two of the things that technology has been really good at creating in people. It's, I hate anybody who doesn't think or act like I do. And I should get what I want, when I want it, as soon as possible.
And I shouldn't have to do anything that makes me uncomfortable or that I don't really want to do. Those things are not good for the human soul.
David Rice: Or our ability to perform logic and reason, right?
Christopher Lind: Yeah, just let the machines decide. What could possibly go wrong?
David Rice: Yeah. Well, you go down the AI research rabbit hole and, like you said, it's people within the industry, people who are helping to create it.
I think that's the part that I keep coming back to. I'm like, I've never seen anything like it. When they were developing different social media platforms, I don't remember anybody going, this is really going to damage young girls, or this is going to have a negative impact in this way. But we're seeing folks in this field go, no, we're moving way too fast.
And actually asking government entities to try to put some kind of slowdown on it. I don't think I've ever seen anything like that before.
Christopher Lind: No, I think the best comparison, and I wasn't even alive for this, was the nuclear bomb, right? I think that was like the one historical moment where we went down a path and people went, whoa, I think we need to dial this back.
I think this could go sideways. But when you think about that, I mean, you couldn't have a 17-year-old in their garage building an atom bomb. So it was a little easier to govern. Now it's like, hey, we've given you something a thousand times more powerful than that. Maybe even a million times more powerful than that.
And you have unrestricted access to it. Like, here you go. And I think that's the part where you see some of these folks who are seeing this. Now granted, I think you have to take it with a grain of salt, because some of it is people selling books. It's like, oh, you created all this, now conveniently you're leaving and now you're warning everybody.
It's a little odd. So I think you do have to take it with a grain of salt, and you also have to look at the trajectory of human history and go, we've made some really stupid mistakes in the past. We have adapted. There are things that we will do. But going back to it, I think they also have very good reason.
And I'm in the same camp where I go, there are some legitimate concerns that people should be thinking critically about before they just go, let's just go down this technology path and hell or high water, see what happens.
David Rice: Yeah, when it comes to learning and we think about like our hardware and software, so to speak, right? Like the brain and its processes, our ability to keep pace with this, we've already seen some of this transhumanist stuff that's coming out, right?
And so like, I guess my question is, it's like in the future of the workplace, like how do you do learning and help people stay competitive without essentially asking them to adopt things that might make them pretty uncomfortable in terms of like, do you need a chip in your brain to be successful in the future workplace? Like, I don't know, you know.
Christopher Lind: So I think this is where I did a conversation last week where I talked about the fact that one of the downsides to this, and this is where I caution people getting so sucked into the treadmill of this hype. It's put a lot of people in a position where they feel like they have to know everything and be the best at absolutely everything.
And you gotta be running on 110% all the time or you're just gonna vaporize into oblivion. That's actually the fear mongering that's not helpful and it's not true. That actually is not the case. Even as I study and talk to these folks, there are things where, and I said it last week, there has never been a better time to specialize in your niche.
And actually figure out, what is it that I do uniquely really, really, really well? And I understand the complexity of it, I understand all the different nuances of it and all that. And I think that's one of the things that actually is most helpful for people to recognize: look, you're not going to keep pace with everything that's happening over there, but you can know your space really well and figure out how to navigate that. And I think that can be really helpful for folks who are trying to go, do I need a chip in my brain to be able to survive in 2027?
Well, I don't have the answer to that at this point, because I haven't seen how we're going to make decisions on things. But what I can say with confidence is how unstable this technology is. I mean, it's built by human beings. We're unstable, fractured, broken. Like, we blow up, we're not perfect.
So to think that AI would be perfect and solve all our problems, to me, is just foolishness. And that's where I think there is this cosmic thing where you are going to have to get comfortable with being uncomfortable. I mean, I've had conversations with people where I go, if you keep doing what you're doing now, you will be obsolete, two years tops. I could replace you with a chatbot in 12 months if you're doing what you're doing right now. And I get that makes people really uncomfortable, but somebody went to the typewriter repairman and said, the PC's here, you better adapt or you're going to have to close up shop.
And I'm sure some of those people went, how dare you? I am a pillar of the community repairing everyone's typewriters. I will be here till the end of time. And it was like, well, no, you won't. So I think it is that paradox.
David Rice: Yeah, no, I couldn't agree more. It's sort of the natural order of things to continue progressing, and for us to have to challenge ourselves in different ways. I mean, you think about a job like search engine optimization, right? When I was a kid, that job didn't exist. So it's only natural that it continues.
So like, one of the things I think is interesting, though, and you as a parent would probably have an interesting perspective on this: you've got kids coming out of high school right now. They spent years telling kids, you've got to learn to code.
You've got to learn to code. And now this thing is going to be able to write its own code in five years, like, no time. So, like...
Christopher Lind: It's already coding itself.
David Rice: Yeah, exactly. So, like, what do we tell the kids to do? I was like, well, how are you approaching this as, like, a, as a parent and as a leader?
Christopher Lind: So this is a really interesting one in that I made a comment about this because one of the big headlines was, all the universities now are coming out with bachelor's and master's degrees in artificial intelligence. And this is the next thing, like, everybody needs to get a degree in this and specialize in this.
And maybe I'm a contrarian, but I'm like, that'll be the biggest mistake everybody can make is if they say, everybody go get a degree in artificial intelligence and think that you have to do that. Because, similar to everything else, it's like, well, there's a hype cycle, and that's like, nobody went and got a degree in the internet.
I mean, you just didn't do it, because it was like, well, it's so ubiquitous. It's more like, well, what do you do with it? Will there be a space for specializing in that? Even going back to electricity, did everybody specialize in electricity? No. But they thought about how to use electricity to specialize in their wheelhouse.
And I think the same is going to be true when I talk to my kids. We don't talk about like, okay, you all now need to, master machine learning and learn how to code algorithms. Because if you don't, you're doomed. No. What are you really good at? What are you really passionate about? But let's talk about how AI is changing that.
And, what is sustainable about that? Because as I look at skills, there are some skills today that are completely durable. Now, you might have to apply them in different ways. But even as I look at the AI age, problem solving, creativity, decision making, communications, interpersonal skills, you're like, you're going to need to do that. If anything, you're going to need to do it more.
Now can you do that as a marketer, as a manufacturing lead, as a, what, of course you can. But I think specializing in those durable skills, that's where I spend time talking with my kids and let them explore their imagination. I hope they create something that doesn't exist. That'd be my dream.
David Rice: It's funny that you said that, that this is really a good time to specialize, because I've heard some people say, well, you want to have your knowledge be more holistic now, because the really specialized stuff, this thing will probably influence it in a way where a lot of the tasks within it aren't going to look the same.
Like the career will actually require you to be more holistic. What are your thoughts on that?
Christopher Lind: So it's funny because this is where I get into it sometimes with people where they're like, no, no, no, you need to be the generalist. I'm like, this is not the time to be the generalist because I can tell you right now, AI can do general things a thousand times better than any human being can.
So if you just want to be ubiquitous and, the jack of all trades, but master of none, you can go down that path, but I don't think it's going to fare well for you. And going back to the specialization, this is where I think what we don't understand and where I think there's room for this is, we have a tendency to get hung up on the activity as the thing we need to specialize in.
And what I would say is, yeah, don't specialize in how you do it. That would be very foolish right now, to be like, I am the master at this application. Okay, you might not want to go that far, 'cause that app may be completely gone by the time you actually get anywhere with it. But understanding the depth of that, even going back to coding as an example: having people who are highly skilled in computer science and understanding logic and how this stuff fits together, that's still going to be a critical skill.
Should everybody have it? No. But what was amiss in the hype cycle was, everybody go learn to code. Not everybody needed to learn how to code. Everything turned into a graphical user interface, and generative AI can do it. But the deeper underside, how does computer science work? How do these pieces fit together?
How do you do that? If we lose that, the problem is that we hand things over to the machines. When it makes mistakes, and it will, no one will know how to fix it, because it'll be like, boy, that thing just blew up. Anybody know how to fix this thing? No, because I'm a generalist. I can speak to it, but I don't really know how it works.
David Rice: That's funny because you mentioned, right before we hit record here, the ChatGPT thing having a little bit of a meltdown last week and, like, no one knows why. It's gonna keep happening.
Christopher Lind: That's just it. It's going to keep happening. And if you don't have the people who specialize in their domain, and less on the how, like I said, less on the how, you don't need to know the intricacies of every single thing, because that will change.
But for what's underneath that, we need people, so that they can make good decisions, predict what's around the corner, all these kinds of things we're talking about, you know, about moving too fast. Well, part of the reason we're moving too fast is people don't necessarily know the domain well enough to know the implications of the decisions they're making.
They're like, I'm a generalist. Sounds good. What could possibly go wrong? We need more people who understand the depth of it who go, hey, no, no, no, no, no, no, because this is connected to this is connected to this. We break this, the whole house comes down and somebody goes, wow, okay, good to know. How do we deconstruct that?