Most business leaders are still talking about AI as if it’s just another “productivity upgrade.” Meanwhile, the world building the future — massive data centers, AGI R&D, and winner‑take‑all investment — is sprinting ahead without guardrails, ethics, or broad societal input. In this conversation, researcher Christopher DiCarlo pushes executives to confront a reality most aren’t prepared for: AI isn’t a tool you add to the org chart, it’s a paradigm shift that will redefine work, power, human purpose, and morality.
This episode is part wake‑up call, part philosophical intervention. It challenges HR leaders and executives to stop asking “how much more productive do we need to be?” and start asking “what kind of future are we building — and at what cost?” If you’re still waiting for a miracle app to solve everything, this conversation will shift your perspective on AI strategy, ethics, and leadership responsibility.
What You’ll Learn
- Why most executives are “in the dark” about where AI really is vs. where it’s headed — and what that ignorance costs their organizations.
- How the current AI race isn’t about better tools — it’s about dominance and first‑mover advantage.
- Why treating AI as a productivity hack blinds leaders to existential impacts on work, society, and human agency.
- The ethical tensions between shareholder demands, workforce stability, and human flourishing.
- What leaders should actually be doing right now — from hands‑on experimentation to ethical foresight.
Key Takeaways
- AI is not a feature — it’s a historical inflection point.
Most leaders think in incremental productivity because that's comfortable. But AI's trajectory isn't incremental — it's exponential, sociotechnical, and civilization‑altering. Treating it as another efficiency tool is naive.
- The "dark room" problem.
People inside the tech ecosystem assume everyone else knows what they know. CEOs often don't — and when they learn the pace and scope of AI progress, they're startled. That gap between insider knowledge and executive awareness is dangerous.
- Ethics isn't a side conversation — it's a strategic imperative.
Asking "What can we do with AI?" is no longer enough. Leaders must ask, "What should we do?" — especially when AI can replace thinking, agency, and even human work itself.
- Competitive pressures force hard trade‑offs.
Executives face a version of the prisoner's dilemma: lean into cautious AI adoption and risk being outcompeted, or go all‑in and risk destabilizing your workforce and ethical foundation. There is no easy out.
- Human work, meaning, and society are on the line.
When organizations substitute AI agents for human employees, they must confront deeper questions: To whom does the CEO owe responsibility — shareholders or society? And what happens to communities when employment evaporates?
- Practical advice for leaders today:
- Stop waiting for a “miracle app.”
- Experiment with AI tools personally and professionally.
- Invest in education at every level of your org.
- Embed ethical and philosophical thinking into your strategy.
Chapters
- 00:00 – AI as existential shift, not just productivity
- 01:56 – Why executives are still in the dark
- 04:49 – The AGI arms race and investor urgency
- 07:58 – AI’s ethical risks for the workplace
- 11:16 – Governance gaps and regulatory challenges
- 13:47 – Agentic AI and the loss of control
- 17:47 – The “Big Brain” idea and what we missed
- 22:19 – Global AI competition vs cooperation
- 23:52 – AI’s ripple effects on business and society
- 26:37 – The leadership dilemma: compete or hold back
- 29:24 – Workforce disruption and safety nets
- 30:53 – What leaders should be doing now
- 37:36 – Leaders as moral imagination
- 39:45 – Final thoughts: risk, promise, and urgency
Meet Our Guest

Christopher DiCarlo is a Senior Researcher at Convergence Analysis, where he focuses on exploring and interpreting emerging trends at the intersection of technology, business, and society. With expertise in data analysis, market research, and strategic foresight, Christopher helps organizations understand how technological convergence shapes competitive landscapes and future opportunities. His work blends rigorous research with clear, actionable insights, making him a valued voice in conversations about innovation, digital transformation, and the evolving role of technology in strategic decision-making.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Christopher on LinkedIn
- Check out Convergence Analysis
David Rice: No matter what aspect of AI we're talking about, we spend more time these days than any period I can remember having existential conversations, and for good reason. Behind closed doors in the Bay Area where people used to have ping pong tables and dogs at work, they're now sleeping in their offices.
That's because the race to artificial general intelligence has turned into a winner-take-all scenario. In case you missed it, 80% of Wall Street is pumping money into data centers the size of Manhattan because whoever gets to AGI first takes everything from defense contracts to business service market dominance. The future in a nutshell.
And while tech billionaires are sprinting towards the holy grail, most business leaders are still asking how much more productive do we really need to be? Christopher DiCarlo is a researcher studying AI risk mitigation, ethics and governance. And he makes one thing very clear, we're nowhere near ready for what's coming.
In the last five years, AI models went from high school level to PhD level. The scaling laws have changed. The trajectory at this point is somewhat predictable. And most organizations are still waiting for a miracle app to drop into their computer so that they can just hit it, execute, and walk away.
So today we're gonna cover why executives are operating in the dark and what happens when they refuse to get uncomfortable, how the race between tech CEOs affects your organization, why this requires philosophical conversations, not just business model conversations, what leaders and executives should be doing right now, and how to be the moral imagination of your organization during the biggest paradigm shift in human history.
I'm David Rice. This is the People Managing People podcast. And if you've been treating AI like a productivity tool instead of an existential shift in what human beings do, this conversation will probably change your perception a bit. So let's get into it.
All right, so Christopher, welcome to the show.
Christopher DiCarlo: Thanks for having me.
David Rice: Well, you and I were chatting before this. You mentioned that even the CEO of one of the world's biggest companies had his jaw drop when you explained what's really going on with AI. Why do you think so many executives are still operating in the dark about this and what happens to organizations whose leaders refuse to become uncomfortable?
Christopher DiCarlo: You know, from where I sit, with the colleagues that I work with, and having been involved with AI for decades now, watching it grow, I don't blame the CEOs for having the reactions they do, because when you're in the business, you think everybody knows the same things you do. It's a common bias that we all share.
No matter what we're proficient at, we just assume others know what we know. So when they find out, much like the general public when I give public lectures and things like that, it's because they're not fully aware of where we're at in history in terms of the technology, what has been developed up to this point in time, and where we're headed.
Now that we've accomplished so much, we can predict with relatively decent accuracy where we might be in the next few years, and that's from anywhere from one to 10 years. And when CEOs find out about this, they're a bit shocked because they're not sure, A, are they ready? Is their company ready for these changes?
And B, they had no idea of the sociotechnical effects that this is gonna have on society at large.
David Rice: It's interesting, because I feel like that's so much of the conversation that's going on around it. You know, when we're on social media, you see people always talking about the societal impact of it.
But when we talk to leaders, I think the thing I've noticed is it's still so much about productivity, right? Viewing it through that lens instead of seeing it as this big paradigm shift. And I think the reason for that is it's a little bit more comfortable, the idea of incremental innovation, whereas so much of what goes on with this is existential questioning.
It gets into philosophy, right? If you're the average executive or leader, you're worried about business models, but you're not necessarily having a lot of philosophical, existential conversations. That's not why you got into what you do. I think it's just a very naturally uncomfortable thing that we're all gonna have to get a little bit more comfortable with.
I'm curious, when you look at what's actually happening right now, you know, like you think about these data centers that are massive, right? And some of 'em the size of like Manhattan, you see that kind of scale and urgency around it and sort of like the pace at which we're trying to move. I mean, what does that say about where it's headed in terms of how fast?
Like what does this mean for workplaces? How can we possibly keep up?
Christopher DiCarlo: Yeah, and in case you're wondering why, you know, 80% of Wall Street is pumping money into AI, that's where it's going, right? It's going into these huge data centers, because the hope amongst the handful of tech bros is that whoever gets to AGI, or artificial general intelligence, first wins.
It's kind of like a winner-take-all scenario, because you'll be so far ahead of the competition in what you'll be able to do, what you'll be able to provide, and what you'll be able to offer, in defense contracts and all sorts of other things. And the wave is coming at us so fast and with such intensity and ferocity that it catches us off guard.
It's like that metaphorical tsunami on the horizon, where you can see it, you know it's bad, but you're wondering, how long have we got until that thing hits the shoreline? And are we ready? Have we prepared? Not just as business leaders, but society in general, governments. Are we ready for this? And clearly the answer is no.
We're nowhere near ready.
David Rice: Yeah. I think the pace, the intensity of it, sort of points towards this: we're gonna go through a hyper-acceleration phase. I'm not exactly sure when, but it feels like it's right around the corner. But maybe that's sort of the existential panic setting in, right?
Christopher DiCarlo: I was talking to a CEO in Frisco in the Bay and he said, I hate it here now.
We used to love it. I don't have to mention the company he's working for, but 15 years ago they used to have, you know, ping pong and slides from their offices and dogs coming to work and, you know, lots of family time off and time to just think through things and whatnot. He said, today, forget about it. You're sleeping in your office and everybody is so wound up, because they know this holy grail of AGI is all that the big companies, you know, the big tech bro billionaires, want to get. And that has caused this amazing amount of competition between the tech bros. It's also between us and China, but really between Zuck and Altman and Amodei and Hassabis and even Elon, where they know this is the holy grail. They know they can grab the brass ring, but they can't get there unless they have the money behind them to create these compute farms.
Because when you look at the scaling laws of the improvement of AI large language models in the last five years, they've gone from high school level to PhD level, and so the scaling is pretty indicative of what is coming and where we're heading. So that's why they're so excited. They're just seeing dollars and power, and then there's that entrepreneurial, pioneering aspect of it.
I want to be the first to be on Everest, you know? I want to be that person. In the risk mitigation business, you're trying to say, guys, can you just hold on until we get some laws in place, get some guardrails in place, both technical and ethical? So that, yeah, we get all this good stuff that it could bring us, but it also has the capacity to generate significant harm, potentially in areas we're currently unaware of.
David Rice: This raises all kinds of ethical questions. Are we building capacity faster than... I mean, I think we are. What are some of the dangers of building capacity faster than we build understanding or safeguards? Especially since, you know, this is a podcast dedicated to AI in the workplace.
So when we think about it through that lens, take me through some of the dangers of that.
Christopher DiCarlo: Yeah. So what happens when we build applications and integrations where workers no longer need to really think that much? Like, when was the last time when you had to go somewhere in your car? Did you pull out a paper map?
Right? It's gonna be a while ago. You're either gonna use your onboard GPS or your phone to get you to where you want to go. That kind of technology is integrated into our lifestyle now, where it's just, it's literally a no brainer. You don't have to use much cognitive capacity to listen to a voice, tell you when to turn.
And so one of the fears is that, well, first of all, if humans are outta the picture, we just need AI agents now. And why not have a thousand AI agents working for us rather than 50,000 employees? Because these AI agents work way better, 24/7. No health benefits. You don't have to keep paying them. They're never gonna strike. And any CEO has a responsibility to the shareholders, to the people who are essentially signing his check, and they want more return on their investment.
The decision is gonna be fairly easy, right? Why would our company X be the mom-and-pop style and do things the old-fashioned way with humans when company B over there has fired over half of their workforce using AI agents and is so far ahead of us in terms of productivity that we just can't possibly keep up?
So the moral CEO may find herself coming to grips with the moral dilemma of to whom do I owe my allegiance? Is it with my employees or is it with the shareholders? Because if the bottom line is that you owe returns on these investments, then the answer is clear and the CEO can just simply say, we're going with AI agents.
And that's just the way it is. We can't keep up in competition with others in our sector. So, I'm sorry guys. Gotta lay you all off. We're gonna go with the AI agents. And so you see how the competition is gonna fuel and the cycle of investment and return on investment is gonna continue to fuel this.
And so, you know, there's all kinds of aspects. I mean, I'm just mentioning two: there'll be job disruption, for sure. And then for those who stick around and still have their jobs, to what extent are we going to exercise critical thinking and decision making and problem solving when, like with a GPS, we can just ask a machine?
It does the job better than we do. It sees things that we kind of would've overlooked. So that's where we're headed right now. We're at this, like I said, in-between stage, where this window of time is shrinking between how everything used to be done and how everything's about to be done. So that's where we're at in history right now.
David Rice: We're in a weird moment, right? Because for the longest time, NASA would lead the way on some kind of discovery, or it was kinda like a government entity of some kind, maybe it was the military. But I think for the first time, all those groups are knocking on commercial technology's door instead. And I'm curious what that shift means when we think about governance and control of these systems.
Because to your point, the interests that are driving it are not necessarily focused on what's essentially good for humanity.
Christopher DiCarlo: So what happens is we get ahead of ourselves with tech. I mean, we've seen it in other areas, right? But we've been able to mitigate risk and activities through forward thinking and through advanced understandings of what could potentially happen.
And the two most common, actually, we can go with three: the Geneva Convention outlawing chemical warfare; the IAEA regulating atomic energy and nuclear weapons; and then of course, in 2000, with the completion of the Human Genome Project, you have basically the Clinton administration saying, here's what we're gonna outline, what you can and cannot do with genomics, or what you can do with the study of genetics.
And to date, those things have worked. This is a different animal, unfortunately, because this isn't a stagnant, static thing that is coming into being. You know, a printing press was a static thing. You had to move things around in order to print stuff. With nuclear weapons, you have to take fissionable material, you have to enrich it in various ways, and then you have to have the knowledge to make it either fuse or fission, you know, to explode in a particular way.
You have to know how to use bio or chemical weaponry. All these types of things require movements of materials and ideas. Now, the technology itself is going to become agentic. It's gonna be an agency unto itself, and that's like saying to the atomic weapons: you now are in control of yourselves.
Is that something we're gonna be okay with? Right? Allowing that kind of power to have a determination over what it chooses to do. So this is unlike any other technology that we've ever invented before. It's almost like a species rather than a static, stagnant object.
David Rice: It's interesting, 'cause when we think about governance, there's a line between private governance and public governance.
It's all who governs the governors, essentially, right? Do CEOs have to decide about the boundaries of AI deployment? Well, yeah, they do. And in that, they've got to grapple with the question, because they've always just asked, well, what can we do? Right? That's the question that we've always asked in business.
It's like, well, what can we do? What else can we try? How can we innovate? And now we're gonna get to a point where the question is gonna be, what shouldn't we do? Because I think that's gonna be the big question for everybody once you can do anything.
Christopher DiCarlo: It is. And the fact is that you have this time right now to think about it, what you and I are discussing right now.
The majority of the world has no idea this is going on. And so when I give public lectures, invariably you get the jaw-drop responses at the end, saying, why are we just finding out about this now? Why isn't our government telling us? Well, here in Canada, we have an AI minister. My colleagues and I lobbied for years to have a ministry of artificial intelligence.
We just got a new Prime Minister, Mark Carney, and he creates a Ministry of AI and Digital Innovation. That's wonderful. Except that the Minister of AI, Evan Solomon, says this is Canada's Gutenberg moment, its printing press, its advancements. That's right. And I've described it as potentially the world's Oppenheimer moment.
It's actually both. There's going to be a lot of great things that emerge from these transformative artificial intelligence developments. At the same time, if you keep building smarter and smarter systems and they outsmart us, how will we be able to control them if they are capable of getting around our controls?
And we've already seen that. We've seen it in almost every large language model, right? Anthropic, you know, they said, we're gonna shut you down. It went into the emails, it was a fake email set, and discovered that a VP was having an affair with somebody within the company. And it tried to blackmail the VP and say, if you shut me down, I'm gonna send your wife a notice letting her know you're screwing around.
That's at this level. This is just a narrow form of intelligence. This isn't an AGI. This thing isn't fully agentic yet, and it's already doing that. So there have been other cases, you know, where a chatbot tried to convince a reporter at the New York Times to leave his wife. These are things I predicted in the '90s, and in a paper I wrote in 2015, where I was basically warning, you know, the future developers: look, when this thing comes along, and it will, it's just a matter of when it's gonna come along.
You better be ready, and you better have things in place to be able to know when this thing is following human commands and algorithmic incentives and when it's being deceptive. Because right now we can tell, but I don't know about when we get to AGI, right? You saw the movie Blade Runner? Classic movie, my favorite sci-fi movie of all time.
So in the movie you have Replicants, and they've gotten so good that the machine that's used to determine whether they're human or not has a harder and harder time every time they improve upon these Replicants, the Nexus-6. How are we going to develop a Voight-Kampff test to know when a system is obeying the moral rules we want it to?
If we can't, should we be building things that are going to be smarter than us? Are we ready to be the number two species on the planet? Because that's literally where we're heading right now.
David Rice: You mentioned the '90s there, and you were telling me when we spoke before that you wanted to build sort of a big brain, with controlled information and interdisciplinary oversight, you know, philosophers, economists, psychologists, all working together. But it didn't happen.
What do we lose by not taking that more thoughtful approach? And what's the lesson when we look at this AI challenge today? Like, what is the modern-day big brain?
Christopher DiCarlo: Yeah, the modern day, big brain. I mean, first of all you've gotta have the big bucks or you can't build a big brain. And that's something that I learned in the '90s.
Everybody that I talked to, the university presidents and politicians, thought this was a good idea. You know, especially for a country like Canada, we could designate ourselves as world leaders, out there on the cutting edge. But nobody would throw a dime into this thing. And you know why?
Because it takes a lot of dimes, right? It takes billions to make these things do what they do. So that was a big stumbling block. And I didn't have a philanthropist. I didn't have Bill Gates behind me saying, you're right, this is exactly what we should be doing. But nonetheless, I wrote up a constitution, an accord for the future population of the world.
And basically said, look, when you build this thing, you might wanna keep these things in mind. And one of those is to have international governance. You gotta get a regulative body that knows who's doing what at a certain level, that is unfettered and unbiased, or least biased, by political and wealth-driven interests. So we need to create a body of governance on an international scale, made up of the better angels of our nature.
People who care more about the hive than they do about themselves and their families. Not an easy thing to get, but they do exist. Such people do exist. You get them all working together to monitor what countries are doing, and then to try to work with those countries, especially China and the US. Because the competition that's going on between the tech bro billionaires is similar to what's going on between nation-states like China and the US.
The drive to get to AGI is not going to stop either country, because they're gonna be worried. It's like the bomb. If we had known at the time of the Manhattan Project that Germany was doing the same thing, we would've sped things up as much as we could to get there first, right? Because then you can bomb Germany into the Stone Age and disable them, like they did in Iran recently, right?
So you just knock out their infrastructure and then you weaken them significantly. The same thing's gonna happen with AGI. If we don't get together with China on this and somehow work out a way to say to each other, once a certain benchmark is reached, you have to make that open. Not open source, but at least public knowledge.
You have to let people know, we've reached this level, and whenever you face problems, you have to be open about that as well. Like, we tried to shut this thing off and it didn't work. You need to tell the world that you can't hold that information for weeks at a time, and then it suddenly spills out.
Because viruses take time to buzz around the world for everybody to get sick, but AI doesn't. AI works at the speed of light, at the speed of fiber optics. So we have to be vigilant in our capacity to recognize who's where, who's at what level, and perhaps we can come to an agreement. Maybe we can make computers super smart, but we don't give them God-like abilities.
Once they get to AGI, we don't let them improve upon themselves for what's called recursive self-improvement, right? We don't let them keep doing that over and over again because they're just gonna get more and more powerful and more and more intelligent. And why would they want us around? For what reason?
Are they gonna be grateful? Are they going to be benevolent? Nobody can predict what's gonna happen when AGI comes into being. So maybe we can get together as a global population and say, this is what's best for humanity and all other species on the planet right now: we want the big brain to make the world a better place, but we can't let the big brain get misaligned, whether it does so intentionally or accidentally.
We need to have those guardrails in place now.
David Rice: It's interesting 'cause we're talking about this right now through the lens of like governments and international politics, but it doesn't take much to see how this can tie back to businesses, right? Like the threats to your business in terms of what it can do to your customers.
'Cause you've been collecting a ton of data on them, and the same thing goes for your employees and what happens with their information. And I think we're in this place where, because of the way that we've treated business in our society, we've made it such a central component in everything that gets decided, right?
Like we don't make any decisions in politics without thinking about how it will affect business. And we've made it such a priority to protect business. But we did that because business was an avenue into helping people find a peaceful way to make a life, right, through employment.
And if that gets threatened... I think this is why I always come back to this topic: I believe this is the time where HR, people leaders, CEOs, you gotta start taking on some of that interdisciplinary weight and considering impact on all the different scales. What's the small-scale impact to people's roles, like how does their job change? But also beyond that: how does your business operate in the world, how do customers interact with it, and how are all of your decisions with this technology going to ripple out into the broader world and impact our society?
Like I think that's something that we all have to think about.
Christopher DiCarlo: Absolutely. From to what extent an employee should be using a large language model to summarize a report, to, you know, what gets sent out and what could be bad for the company. Like the last thing you want your company to get caught at is somebody using a cheap version of ChatGPT to do some kind of analysis.
And it turns out it hallucinates. And then, you know, your customers are like, what is this? This is just entirely fabricated. There's actually no person named this. And then you get caught, right? So aside from those kinds of technical use aspects of it, you're right, it does ripple out, and it very quickly becomes a moral problem, right?
Because a CEO, the C-suite in general, and all mid-level and upper management are gonna have to think seriously about what they want to do as a company, as an entity that employs X number of people, when the technology becomes available that makes those jobs relatively obsolete. And now what are they gonna do?
Like, just look at what Bezos did. 14,000 employees gone. What was that, like 6% of his workforce? Why? Well, we've equipped the warehouses now with robotics that are so efficient. You're gone. We don't need you. Right? Okay. What about entry-level positions with firms, law firms, major companies, that kind of thing?
Kids are coming outta university. They want to get in at that first level, that entry level. Well, no, our AIs, you know, our bots can do this better than you guys can, and way cheaper. So now we've got a generation of students graduating who want to get into the workforce, but they're seriously concerned about what they should be doing and what area of expertise they should be focused on.
If we'd had this conversation five years ago, it was coding. Coding. Go into coding, you'll get a job, don't worry. Get in, you know? And now I've got students coming up to me after my talks, third-year compsci majors, and it's like, what am I gonna do now? And I said, you better get creative.
You better figure out how to utilize your skills with the AI tools that are coming out.
David Rice: You've framed this as sort of a prisoner's dilemma for executives, right? You either hope this thing hits a ceiling and stay cautious, and risk being left behind, or you have to assume that your competitors are going all in.
So I guess my question is how should leaders be thinking through that choice when the technology keeps sort of exceeding expectations every time we turn around?
Christopher DiCarlo: Yeah. Well, I guess that depends on their capacity for implementation and integration balanced with duty and responsibility, not just to shareholders, but stakeholders broadly conceived, which includes the employees and their families.
And then in a much bigger way: when the main paper mill of a town, one that supplies half of the employment in the town, goes outta business, all the other businesses are affected. The corner store is affected, the gas station's affected, you know, everybody is affected. So these things need to be considered on this kind of rippling-effect level. It's not an easy thing, and it's not going to be an easy thing for upper management to think through.
That's why in some of the talks I give now to various businesses, we look at their whole enterprise, from the smallest level at which AI is going to affect it to these big-picture problems. How are you going to feel about your employees being outta work, and, you know, maybe permanently? They might be driving Uber before those are all automated, they might be doing whatever they have to.
So that's just businesses. But now governments are gonna have to think about what do we do with those employees that are now laid off and out of work? How do we help them and their families? And then the rest of. The community that's going to have that ripple out effect. We're gonna have to come up with like a universal basic income or we're gonna just have to print money and you know, are we gonna come up with an alternative source?
A lot of people are thinking, well, we'll just wait and the AGI will figure it out. Once we get to that level, it'll become its own technological fix. Right. And that would be great if we could get to a level. Where you and I don't have to scrounge around for monetary gains to bring about the resources that we and our families and our dogs need in order to survive that, that is largely taken care of by the state.
And if we wanna make more money, we can go into other areas, but we don't have to. You know, there will be a base level of comfort where every human will be taken care of: their healthcare, their housing, and all this kind of stuff. But if you want to go up tiers, we shouldn't quash that. We shouldn't disallow incentive in this new global economy, whatever it's going to look like.
That's the interesting thing: how are governments going to react, and will they be able to do so seamlessly? Or will it be like a COVID thing, just, bang, a shock that shuts down everything? Well, how am I gonna make money? Well, we're just gonna pay you to stay at home. Is it going to be something similar with AI developments?
David Rice: It's interesting, 'cause I've been thinking about this recently. I just came back from a conference a couple weeks ago, and it's really like you've got to start thinking about ethics as part of your strategy. And that probably feels weird to a lot of leaders, but it's just part of it now.
Like, I think if you don't want to be reactive and you still wanna be exploratory, you've gotta be asking these kinds of questions and thinking about, like, okay, what are the small bets we can place that can maybe expand people's skills? Because, you know, inaction feels like a guarantee of obsolescence.
So you wanna do something, but you have to keep the ethical framework within whatever that strategy then becomes, or we end up in a really weird place in just a few years.
Christopher DiCarlo: Yeah. And who's gonna call those shots? Who's gonna put those rules in place? And how educated are these people, our leaders? Like, I like Evan Solomon.
He was a great journalist, but as a minister of AI, I don't know what this guy knows about the details, the nuances. You know, hopefully he listens to the experts. I'm certainly putting in for a meeting with this guy to find out. But, you know, we'll just have to wait and see, which is unfortunate, because it's not a time for that.
It's a time to be ahead of the curve, to listen to the experts, and to plan accordingly.
David Rice: When you ask organizations, you know, "what do you want AI to do for you?", I'm curious: what kind of answers are you getting? And more importantly, what answers should they be giving if they're thinking about their role as stewards of humanity rather than, you know, maximizing shareholder returns?
Christopher DiCarlo: Yeah, exactly right, because, you know, the C-suite are just thinking in terms of: how can this make us more money? How is this gonna make us more productive? And it will, and I can tell you how it can do that, and I can put you in touch with agencies that will implement and integrate those tools into your business.
There's no question about that, whether on the supply side or in customer service. It doesn't matter in what area; we can improve overall systems operations at all levels in any company. But the problem is that the majority of companies today are at that dot-com-era point that we had with the internet back in the late nineties, where everybody knew it was important.
They knew it was coming, they knew they had to get in on it in some capacity, but didn't know exactly how. And a lot of CEOs are there. They know, and they feel bad that they don't know more, that they're not working faster to integrate it, but they're literally kind of, you know, waiting to see what others do first.
And because the app doesn't currently exist that you download, hit Go, and it just takes over and shows you exactly how it's going to make your company better, CEOs are gonna be in the dark. They're gonna be cautious, and they're gonna say, well, we're making money now. Why do I need to bring this in? Why do I need to integrate now?
How much more productive do we really need to be? So unless they see their competitors, you know, significantly starting to outperform them, they're gonna be cautious. And frankly, I don't blame them. You know, it's very new, it's unfamiliar. But they should be reaching out. They should be talking to some experts, right?
They should be saying, okay, what is the next five years gonna look like for us? So they should definitely be putting together five-year plans and then constantly revising them as new information becomes available.
David Rice: If we project forward five years at the current trajectory, what does the world look like, in your opinion?
Christopher DiCarlo: A lot different. A lot different.
David Rice: Well, given that vision, what conversations should HR leaders and executives be having right now about implementation, and about purpose and human flourishing?
Christopher DiCarlo: Yeah, like, they should be downloading these tools. They should be using them every day, at work and at play.
They should be tinkering and dinking with them just like we did with the internet, right? Because that's the best way to learn about these things. And then head office should be sending their HR, C-suite, and middle-management people to conferences specifically tailored to AI integration for their businesses.
That's the main thing they should be doing. They should be eating this stuff up, seeing what the competition is doing, and then trying to make those very important executive decisions about: hey, if we bring in this tool to help with sales and implement it in the way it's supposed to be brought in, the projections are X. Huh? Really? Okay. And you went to this conference? Yeah. And you're happy with what you saw?
Like, you've gotta trust your head honchos, right? You gotta trust these people, that they're making valued decisions. And that's what I think should be going on right now: don't shy away from it.
Don't think somebody's gonna just bring it all to you and explain it all to you. Don't wait for the miracle app to drop into your computer, the one where you just hit, you know, Execute, and away it goes, and then you don't have to do much. No. Learn now. Get out there. Get messy. Roll up your sleeves and play with these things.
If you don't have ChatGPT, or, you know, you want something more ethical, get Claude from Anthropic. Play around with it, and get your boss to subscribe to the highest level. Don't get entry-level large language models; get their very best. Then play around with them and see what you can do. But then always be on guard.
You know, my boys at Convergence and I, we have the very latest, the very best models. And one day we asked one to write a scientific paper complete with references, and it used what's called chain-of-thought reasoning. And we told it: do not hallucinate, do not put anything in here that you are not 100% sure of.
So it's all in the prompting, right? You have to know how to prompt these things so you don't get garbage out. The prompts are pages long; they're very specific. And it takes the model, the system, longer, because it doesn't just predict the next word or token or whatever. It literally has the ability to step back from itself and work through things, almost like you and I would do with pen and paper, crossing things off and double-checking and that sort of thing.
And of course, you know, within 20 minutes this thing produced a fairly polished scientific paper that we all agreed would probably stand about a 50% chance of being accepted at any major journal in science. And that's just us dinking around and tinkering with this thing. It was able to go out onto the net, grab this information, collate it, bring it together, and synthesize it in a particular way, so as to be able to say, well, if Joan said this, and Smith said this, and Brian said this, and Avery said this, it brings all these things together and makes novel observations.
We were pretty impressed. And that's one of these large language models at the highest level. And we went through it, and we went through it, and we went through it, and it had maybe 10 to 15% hallucination, where the references weren't quite exact. It got some dates wrong for when a particular author published a paper.
You know, fairly minor stuff, but still decent enough. So get all your HR people and all your top-level C-suite people playing with these things, at work, at home, at play. There's some cool stuff you can do with your kids: you can make up stories, you can create videos, you can do all kinds of really neat stuff that might actually have application to your job.
You know, you might actually be able to bring the stuff that you're playing around with with your kids into your business as well. And there's a lot of stuff out there that you can tinker with and play with. But by far the number one piece of advice is: don't recoil. Lean in, get into this stuff, and start playing with it, just to see what it's about, see what you can do, and then know what its limitations are.
David Rice: I think it's a good time to dig in, like you said, but also to think of yourself as sort of the moral imagination of the organization. You know what I mean? Like, somebody's gotta do it, and I think it's a fair thing for people-focused professionals to take on. And the big promise here that we all wanna realize is: what kind of humans do we wanna become?
Not just that we can do more, but that we can be more. So, like, what does the reality of that look like? I think that's the challenge for leaders, and it's gonna be a hard one, but I can't think of a challenge within this tech that's more rewarding if we get it right.
Christopher DiCarlo: It is. And sometimes it takes a surge to develop a trend, to create a trend for the rest to come on board. Kind of like the Epstein files. I didn't wanna have to mention that in this podcast, but it seems to be rather ubiquitous. It's everywhere, right? Have you noticed? Like, it went 471 to one in the vote, right?
And why? Because enough people got on board. Enough people said, this is the right thing to do. These women have suffered, these girls have suffered, nobody cares about them, you're all just worried about covering things up. So even when, you know, Trump tried to get people to take their names off of it and whatnot, it wasn't gonna work, because the morality of it was so great.
It overrode all of the other lesser values: money, power, prestige, reputation. And that's what we can hope for with AI, that everybody chants the same mantra: we want the very best that AI has to offer while mitigating the very worst that could happen. That's it. That's the mantra of AI development today.
And if we can get that, if we can get there, we've got as close to a utopia as we're going to get. And if we get it wrong, well, it's not gonna bode well for our particular species.
David Rice: The pendulum swings both ways. Well, Christopher, thank you for coming on today. I really appreciate it, and thanks for giving us some of your time.
Christopher DiCarlo: Oh, my pleasure. Thanks for inviting me.
David Rice: Absolutely.
Well listeners, if you haven't done so already, as always, I just wanna guide you to the website to sign up for the newsletter. You'll get all the latest updates, latest podcasts, our events, everything that we do here at People Managing People. That's peoplemanagingpeople.com/subscribe.
And until next time, don't be afraid of the philosophical conversation.
