Your AI tools aren’t failing because the technology is bad — they’re failing because your organisation wasn’t ready. The real issue isn’t the model. It’s the mismatch between how machines operate and how humans work. And the result? Millions sunk into tools that don’t get used, don’t earn trust, or quietly increase complexity instead of reducing it.
In this conversation with David Swanagon, founder of the Machine Leadership Journal, we unpack a three-dimensional model that finally explains what’s going wrong. We explore why traditional leadership traits don’t map to AI innovation, why your CHRO needs a seat at the AI strategy table, and how the real challenge of AI is cultural, not technical. If you’ve been treating AI adoption like a tech rollout, it’s time to rethink — fast.
What You’ll Learn
- Why your biggest obstacle isn’t the algorithm — it’s whether your organisation can use it effectively.
- How a misalignment of machine autonomy, human trust and organisational competency creates hidden costs (far bigger than you think).
- Why the C‑suite view that “this is a CIO project” is limiting — and how the CHRO must step into the AI adoption story.
- What the cognitive and leadership traits of people building AI actually look like (hint: they’re different).
- How to shift from “let’s make people more like machines” to “let’s make machines work better with people”.
Key Takeaways
- Don’t confuse deployment with adoption. You might have your fancy LLM, but if people won’t use it, mistrust it or can’t be skilled enough, you’re stuck paying for shelf‑ware.
- Focus on the balance of autonomy, trust and competence. More autonomy without trust or competency equals chaos; more trust without autonomy equals under‑utilisation. Skew one dimension and you shrink your ROI.
- The CHRO belongs in the AI room. If your AI effort is led purely by the CIO or tech team, you’re treating it like another tech rollout. But adoption is about humans. So let HR own readiness, change‑management, culture.
- AI engineers aren’t your typical executives — they think differently. They’ve got sharper short‑term memory, spatial intelligence, divergent creativity — traits that don’t always map to the C‑suite mold. Recognise that difference rather than try to force fit them.
- Use AI to drive growth, not just cost‑efficiency. If you only automate reports or emails, you’re on the profit‑denominator path. You should invest in AI to move your top line — new products, new experiences, new markets.
- Humans are humans; machines are machines. You’re not trying to create some new hybrid species. You’re trying to build systems where both can play to their strengths.
Chapters
- 00:00 – Why AI adoption fails
- 01:47 – The U.S. readiness gap
- 05:22 – Autonomy, trust, competency
- 09:26 – Over-indexing on autonomy
- 15:02 – Traits of AI engineers
- 23:43 – Rethinking leadership development
- 26:14 – Skills CHROs need now
- 32:04 – Bridging HR and tech
- 40:25 – AI as transformation, not automation
- 45:06 – Where leaders should start
- 50:47 – Final thoughts
Meet Our Guest

David Swanagon is the founder of Machine Leadership and Chief Editor of the Machine Leadership Journal, an ISSN-indexed publication advancing research and practice in AI leadership, strategy and adoption. He is a seasoned HR, learning and people-analytics executive with more than 25 years of experience across global multinationals spanning oil & gas, chemicals, technology, hospitality, and cyber-security sectors. He also serves as an adjunct professor of management, has degrees from Vanderbilt University and Harvard University, and leads initiatives such as the Global AI Playground that focus on safe AI education for youth.
Related Links:
- Join the People Managing People community forum
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with David on LinkedIn
- Check out Machine Leadership
David Rice: Your company's spending hundreds of thousands, possibly millions on AI tools. Your employees aren't using them, and when they do, it's creating more problems than it solves. Does that sound familiar? You've probably heard me say this before, but this isn't a technology problem. It's a readiness problem. And the reason your AI investments aren't working has nothing to do with the tools themselves. It has everything to do with the fundamental misalignment between machine autonomy, human trust, and competency that's costing you far more than you may realize.
I'm David Rice. And today on People Managing People, we're gonna have a conversation that challenges many of the things you've been told about AI adoption.
My guest is David Swanagon. He's the founder and chief editor of the Machine Leadership Journal. He spent years interviewing hundreds of AI engineers and leaders to understand why the US is behind in AI readiness despite having all the best tools. In this episode, you're gonna learn about a three dimensional framework that will help you diagnose exactly where your AI adoption is breaking down, you'll understand why your CIO shouldn't be the only executive owning this transformation, and why your CHRO needs to step up in ways they probably haven't even realized yet. And most importantly, you'll walk away with clarity on how to stop treating AI like a tech rollout and start treating it like the human systems challenge that it actually is.
Welcome to the People Managing People Podcast—the show where we help leaders keep work human in the age of AI. My name is David Rice and I'm your host. And today I am joined by David Swanagon. He is the founder and the chief editor of the Machine Leadership Journal. We're gonna be talking about, you guessed it, leadership in the AI era, what readiness looks like, and what the traits of great leaders are.
So David, welcome!
David Swanagon: Thanks. I'm looking forward to the conversation.
David Rice: You know, you're obviously studying this as much as we are, and so I wanna start with this. You know, we were talking before this and you said that the US is behind in AI readiness and I think, I largely agree, but it's not because of a lack of tools, it's a lack of people readiness.
We say that, but like what does that mean and why should leaders, I guess, care in a way or like expect that it would be different?
David Swanagon: It's a fantastic question. So this was kind of my COVID project, when we were all sitting around in the house. I was spending time interviewing hundreds of AI engineers and robotics professionals, trying to understand what are those capabilities that are unique to artificial intelligence that may be different from standard learning.
And what we found is that AI engineers are incredibly unique, but different, in terms of how their brains work. And these are the people who are actually building language models. And believe it or not, the language models are a lot like the AI engineers that develop them. So we found interesting things such as short-term memory, creativity. We'll go into this probably in more detail. Spatial intelligence, the ability to get from point A to point B. There's a lot of skills that are very unique and differential in AI engineering, and in the US, the mindset is not to develop them. The mindset's focused on other skills.
So traditionally you would think about kind of the Big Five OCEAN traits. Extroversion is seen as a leading indicator of leadership and leadership readiness, whereas when you think about AI engineering, most of the top patent generators and innovators are introverts. And if you're ever looking for a fun exercise, you can go to Gemini or Claude or any of them, and you can have them do a thought experiment and ask them what their personality is based on the Big Five OCEAN traits.
And every one of the language models, they'll give you a hard time at first. They'll say, I'm a machine, right? But once you get them to actually answer the question, they're gonna say, we're introverts. We're highly agreeable. We're very open and creative. What's interesting about this is if you think about just the way the US school system works, if we just start there and we move into AI readiness within a corporation: there's 40 AP courses in the high school curriculum.
There's not a single course on linear algebra, and that surprises a lot of people. But linear algebra is the foundational, most important conceptual framework for machine learning and for these neural networks, because it deals with everything from the dot product to vectors to how a lot of these neural networks go through their propagations.
You need to have a strong understanding of linear algebra. Well, the US school system isn't even set up to build those foundational skills. And then you get into corporate, and a lot of those skill sets are owned by less than 1% of the employees. So really it's a function of not prioritizing the right skills, and it's not because they did it on purpose; it's that these machines reflect the personalities of their developers.
And a lot of these developers are just super different than traditional executives. So at Korn Ferry, Aon Hewitt, all these places, there's a little bit of a blind spot, because the language models did not mirror the leadership traits they've seen with the C-suite. They mirrored the leadership traits of the developers, and they're totally different people.
So that's why the US is behind in many respects.
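(Editor's note: a minimal sketch of the linear algebra David is pointing at, for readers who want to see it concretely. A neural-network layer is just dot products, a matrix-vector multiply followed by a nonlinearity; the sizes and values below are arbitrary illustrations, not anything from his research.)

```python
# A toy forward pass: one hidden layer, built entirely from linear algebra.
# Every "neuron" is a dot product of a weight vector with its input.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)        # input vector (3 features)
W1 = rng.normal(size=(4, 3))  # hidden layer: 4 neurons, one weight vector each
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))  # output layer
b2 = np.zeros(1)

h = np.tanh(W1 @ x + b1)      # matrix-vector product = 4 dot products at once
y = W2 @ h + b2               # another round of dot products
print(y)
```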
David Rice: It's funny 'cause, you know, when we think about readiness, I think a lot of people just think about skills or cultural readiness, right? But it sounds to me, from what you're saying, that we're actually sort of cognitively not ready for this in some ways. And when I'm thinking about leadership, I think about the things that you need to have, right?
In terms of skills: adaptability, the ability to create trust and confidence in your decision making, not just a sort of technical fluency. It seems like that's where we're probably lacking the most at this particular moment in time. So it's probably not a good combination.
David Swanagon: That's absolutely right. I mean, I presented a couple weeks ago at Columbia with my colleague Steven McIntosh.
We developed this framework that's focused on how to optimize AI adoption. Based on the research, what we found is there's really three dimensions that influence the baseline efficiency or the baseline adoption of a machine. It could be any kind of machine. So think about kind of an X, Y, Z grid, where the Y axis is machine autonomy.
At the bottom, you have a calculator, and at the top you have Arnold Schwarzenegger, right? You know, the supercharged machine. So autonomy increases as you go up. And on the horizontal axis it's trust: on the left-hand side, you have no trust; right-hand side, you have complete trust. Then the Z axis bisects the two.
So it's a diagonal line, and that would be AI competencies. So the idea is that if you wanna optimize AI adoption, you need to balance machine autonomy with trust and AI competencies. Those three dimensions have to be in equilibrium for a company to get the most out of a machine in the most risk-efficient way.
And what that does mathematically, what we found is that the baseline computational costs are most efficient when those three things are in balance. But where the problems happen is when one of those variables is not aligned, and then companies have to spend money on privacy programs, governance programs, skill programs.
And the bigger the data set, the more pervasive the model, the more expensive the adoption costs. What's interesting is a lot of these companies do not have a methodology for measuring AI adoption, so they're only tracking computational costs, like data center costs, FLOP costs, and that kind of thing.
But they know that it's not working in their business, right? They know it's not working. But with this model, what's interesting is when you actually systematically track autonomy, trust, competencies, and the alignment or misalignment, you can calculate the cost of poor AI adoption, and it is big. It's significant.
So I think that's where the interesting challenge around AI readiness is: one, understanding that adoption is completely different than deployment. And what's fascinating is that the CIO has been assigned not only the design, test, and deployment of tools, but also the adoption of them. And I think one of the arguments that we're making through our research is that adoption should be owned by the CHRO, because it deals with culture, trust, autonomy, skills; the CIO should do the design, test, and deployment, but stop there, and then partner with the CHRO to manage the adoption.
There's some skill building at the CHRO level required for that, so.
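(Editor's note: the episode doesn't spell out the underlying math, so the snippet below is only one invented way to operationalize the idea that adoption cost grows as the three dimensions fall out of balance. The 0–1 scoring, the cost function, and the pervasiveness multiplier are all assumptions for illustration, not his published model.)

```python
# Illustrative only: one invented way to operationalize the balance described.
# Each dimension is scored 0-1; the penalty is the spread across the three,
# scaled by how pervasive the deployment is (zero when in equilibrium).
from statistics import pstdev

def adoption_penalty(autonomy: float, trust: float, competency: float,
                     pervasiveness: float = 1.0) -> float:
    imbalance = pstdev([autonomy, trust, competency])
    return imbalance * pervasiveness

# A highly autonomous tool rolled out to low-trust, low-skill users:
print(adoption_penalty(autonomy=0.9, trust=0.3, competency=0.2, pervasiveness=5))
# The same tool after trust and skills are built up first:
print(adoption_penalty(autonomy=0.9, trust=0.8, competency=0.85, pervasiveness=5))
```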
David Rice: Yeah, it's funny, I just got back from a conference where this was something that came up quite a bit, right? Everybody's talking about the two working hand in hand. It's interesting. I like this framework because it's a nice way to think about a problem that we've all been describing, right?
Which is: this is not a tech rollout problem, this is a balancing act. My follow-up question would be, can you over-index on one without the other? So are we seeing some folks over-index on, say, autonomy, for example, without having the trust or competency piece in there? And does that create chaos?
And then there's the other side, where there's over-control and it stifles innovation. How do we create that balance?
David Swanagon: Well, that's fantastic. So one of the things we found with our research: we started with game theory on how we built this model. So kind of the Russell Crowe, A Beautiful Mind, John Nash stuff.
He has a lot of formulas that he established for what happens in a two player game where it's non-cooperative. One player has asymmetrical information and that player chooses not to cooperate. What happens when it's two human beings is that the player with asymmetrical information has a lot of advantages, and there's a lot of screening and signaling that has to happen for the player without the information to even remotely compete.
And so we asked ourselves, well, what if the player is not human? What if it's a machine? What if a machine decides to not cooperate? Well, it's going to have that asymmetrical advantage, but it'll be on steroids. And what we found is that John Nash's formulas don't work when you assume the scaling and the pervasiveness and the power of a language model, if it reaches a point of autonomy where it doesn't listen.
So this is where this balancing act comes in: what ends up happening is that if you increase autonomy too much and the skills are not there to oversee it, then there's this deferral that happens, and it's an unconscious deferral. So more authority over decision making is transferred to the machine, whether individuals realize it or not.
A great example of this would be a surgical robot, and most people don't understand how fascinating that model is in the background. A surgical robot has the same kind of algorithm as your automated vacuum; it's called a SLAM algorithm, simultaneous localization and mapping. The difference is that for an automated vacuum, the walls and the ceilings are always the same.
So when it's mapping your house, the house doesn't change. So the algorithm, even though it's complicated, is constrained by the same kind of stuff. It's different in the human body, 'cause our tissues are always changing: our kidneys, our blood flow, everything. So they have another formulation called a deformable SLAM algorithm.
What it does is, when the surgical robot puts the endoscope in a patient, it maps the human body head to toe, and throughout the procedure it's recalculating how the blood flow and the tissues are deforming and creating a model for that. Now what ends up happening is that data is stored somewhere.
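(Editor's note: real deformable SLAM continuously re-registers a 3D tissue model against sensor data; as a hedged, one-dimensional toy of why the static-map assumption breaks on deforming tissue, here's a sketch. The drift model, the filter, and every number are invented.)

```python
# A one-dimensional toy contrasting the two mapping assumptions; an editor's
# illustration, not a real SLAM or deformable-SLAM implementation.
import numpy as np

rng = np.random.default_rng(1)
T = 50
# A "landmark" that drifts over time, like deforming tissue.
true_pos = 10.0 + np.cumsum(rng.normal(0, 0.05, size=T))
obs = true_pos + rng.normal(0, 0.02, size=T)  # noisy sensor readings

# Vacuum-style static-map assumption: estimate the landmark once, keep it fixed.
static_estimate = obs.mean()

# "Deformable" assumption: keep re-estimating so the map tracks the drift
# (an exponential filter standing in for full re-registration).
alpha, est, ests = 0.3, obs[0], []
for z in obs:
    est = (1 - alpha) * est + alpha * z
    ests.append(est)

print("static-map mean error:  ", np.abs(true_pos - static_estimate).mean())
print("tracking-map mean error:", np.abs(true_pos - np.array(ests)).mean())
```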
And let's say AWS is the cloud vendor: they not only have your DNA, basically your complete makeup, but if you shop at Whole Foods and you pay with your palm, they also have your biometrics and your shopping history. And what ends up happening is these cloud providers basically own the end-to-end human experience from a data perspective.
And that's what happens when machine autonomy is unchecked: you have hyperscalers who basically own the entire human experience from a data perspective. The reason that's dangerous is, imagine if AWS owns all that data and then someone comes up with the idea of, you know, we should have robot police, or we should have robot judges; then they have all this data, and now they have the means of enforcing certain behavior.
I mean, that's a worst-case scenario. The only way to combat that is by limiting autonomy to an acceptable trust level, and the trust is not simply what someone personally feels. It's also what society believes the machine should be responsible for. The problem is we're not having that conversation because people don't have the skills. So if you were to ask someone about a surgical robot, most don't understand how that deformable SLAM algorithm works.
If you told them all the data that was being collected, I think everyone would say there needs to be a law on how that's collected, there need to be requirements, and Amazon should probably have to partition the data across different servers that are owned by different people. And there's all kinds of stuff people would want if they knew.
But that's not being taught to everyone. And I'm not picking on Amazon; it's just an example. A cloud provider is able to accumulate this autonomy and access without the trust or the competencies. And what ends up happening is the decision making shifts to the cloud provider, because how's a human supposed to say anything if they don't know what's going on?
This is that game theory again, and I don't want to think that the cloud providers are doing this on purpose, but one of the strategies of a player with an asymmetrical advantage is to limit the information the other player gets. That's how you win the game. That's how you create the dominant strategy.
So by having citizens understand what these use cases do, they can actually develop the trust and skillset equilibrium, and it limits that asymmetrical advantage. And one of the ways I think we could go about this is through offsetting agents: machines should, you know, supervise machines. And there should be trust-based machines that are evaluating the autonomy level of other machines and making sure that if the autonomy is exceeding skills, that decision making is brought to the governance committee and the autonomy is brought down.
That's kind of the worst-case scenario, but it's worth people understanding. I think they'd be shocked if they knew just how much data the cloud hyperscalers actually have.
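(Editor's note: a minimal, hedged sketch of the "offsetting agents" idea, a monitor that flags when an agent's autonomy runs ahead of the trust and competency available to oversee it. The 0–1 scales, the margin, and the class are invented for illustration, not taken from the research.)

```python
# Hedged sketch of "machines supervising machines": a monitor compares a
# worker agent's autonomy against the trust and competency available to
# oversee it, and escalates to governance when autonomy runs ahead.
from dataclasses import dataclass

@dataclass
class AgentState:
    name: str
    autonomy: float    # 0-1: how much it decides on its own
    trust: float       # 0-1: earned trust in this use case
    competency: float  # 0-1: human skill available to oversee it

def supervise(agent: AgentState, margin: float = 0.2) -> str:
    """Flag the agent when autonomy exceeds the oversight floor plus a margin."""
    oversight = min(agent.trust, agent.competency)
    if agent.autonomy > oversight + margin:
        return f"ESCALATE {agent.name}: cap autonomy near {oversight + margin:.2f}"
    return f"OK {agent.name}: autonomy within bounds"

print(supervise(AgentState("surgical-robot", autonomy=0.9, trust=0.5, competency=0.3)))
print(supervise(AgentState("spam-filter", autonomy=0.6, trust=0.8, competency=0.7)))
```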
David Rice: It is funny, 'cause when you were doing the analogy of the blood flow and all that, I was thinking of an organization and sort of its ebbs and flows and its traits and everything that goes on with it, and then they kind of know everything about the org. Yep. And then I just thought, well, that's a great way to market more products to them, you know? So I guess you gotta hope they're happy with their AWS service.
David Swanagon: Yeah. I mean, Microsoft Meta, they're all getting off the hook on this conversation, but.
David Rice: I know, yeah, right.
You know, if anybody at Amazon wants to complain, I just say it comes with the territory of being a powerhouse. Okay?
David Swanagon: Exactly.
David Rice: Now you've studied what cognitive and creative traits define great AI leaders, and I'm curious, what have you learned about how they think?
David Swanagon: So this is fascinating. The rabbit hole goes very deep in this regard.
And as I said, I started this during COVID, but my initial research was looking at all the patent filings for AI and generative AI and robotics and so forth. And I cross-referenced them against the individuals, and I did a lot of research on those individuals: where they worked, where they're from, you know, a statistical evaluation of them.
And we came up with some markers that were different, and that's what got me interested in this research. I said, well, these people are different than your traditional CFO of Goldman Sachs or your PepsiCo CEO. They just act different. They think different. And yet they are the same within their composite group.
So everyone that files these patents has similar traits, and yet they're super different than everyone else that we see in senior leadership roles. And I think it's important to understand that, respectfully, tech leaders are different than tech developers. Someone like Elon Musk may have been able to develop at a certain point in his career, but right now in his career, he's not a robotics developer.
He's not a deep learning coder, for example. So to compare his personality to an AI engineer would be kind of a false equivalence, because he's still more of a traditional leader. Even though he's a disruptor and innovator, he's still not that patent-generating, deep-coding neural network person. So that's important to keep in mind: even the CEOs of the tech companies, unless they're true developers,
are not going to have the type of traits that we found in our research. But what's interesting is we started with the cognitive processes, and all the interviews I did were looking at, okay, how does someone's brain work compared to what we see traditionally in the workforce? The way the research worked is I interviewed a lot of traditional leaders from supply chain, marketing, HR, and created a composite view of how the brain functions when they make decisions.
And it was pretty consistent with traditional research. But then I did the same thing for these AI engineers focusing on computer vision, robotics, all of those advanced use cases, natural language processing, language models and so forth. And we started with memory. And this is what's fascinating about memory, is that there's one particular component of memory that's very different between AI leaders and traditional leaders, and that's the working memory or short-term memory.
It kind of makes sense when you think about it: if you look at the MLOps process, there are so many steps within the MLOps process that are interconnected, and there's so many updates occurring on a daily basis, and there's different vendors and there's different repositories and coding, and you have all these tools that are either in development or in production.
And so an AI leader has to be really good at remembering and chunking information on a short-term basis, and then transferring that information to long-term memory and knowing what information to transfer. So through all of our testing, we found that short-term memory statistically is just significantly different for an AI engineer than it is for a traditional leader.
Even though the memory as a whole is about the same. That was really fascinating. It wasn't that they were just better at remembering things; they were just better at remembering things that they just learned. So that was one thing. And then the second was around creativity. This was also fascinating, 'cause you know, the traditional leadership assessments, they typically look at creativity around divergent and convergent thinking, where convergent thinking is: how can you find the best possible solution?
And divergent thinking: can you come up with multiple solutions to the same problem? The issue with that is that in both scenarios, you're creating a problem, you're creating a constraint, and you're having to solve something within a sandbox. What we found is that AI engineers and traditional leaders are pretty much the same when you define the problem. They're not special, in the sense that you have amazing marketers, amazing supply chain people that are also creative. So if you create a sandbox and you say, okay, for this problem, come up with solutions,
it's about the same. Where it's super different is when you don't create the constraint, when you don't create the problem, when you say, come up with something, just come up with something from nothing. It can be anything. And the more you constrain it to revenue, growth, profit, reputation, the more AI engineers and traditional leaders converge in their skills, and there's not much difference.
The less you tell them, the less you control it, and you just say, sit in the room, come up with something, their creativity is better, and it's a lot better. They're able to create things from literally nothing. And the way we tracked it, I mean, we came up with a statistical methodology on how to do this, but it's just that the type of worlds and the type of frontiers that are created from literally nothing is just fundamentally better than traditional leaders.
Then the third thing we saw in the cognitive processes that's worthwhile is this concept of navigation. It's part of spatial intelligence: the ability to rotate and visually represent objects in a 3D or multidimensional space. But again, there's a lot of super smart people; especially on the marketing side, there's some smart marketing people with great spatial intelligence. But the ability to go from point A to point B, to optimize the route, especially in a complex spatial intelligence problem?
AI engineers, it's like Delta Force versus traditional infantry. It's so different. They're extremely talented at finding the most efficient route, especially if it's a multi-class problem with multiple obstacles; they're just better at it. There was some other stuff, but around creativity, route optimization, and then the short-term memory:
very different. Then on the personality side, as I said, the biggest one was most of 'em are introverts, true introverts. But what we found, which you'll chuckle about, and we haven't done enough research to justify this yet, but it's a hypothesis: our hypothesis is that personality can be situational.
And I know that is a bold statement because most of the time people are like, no, your personality is your personality. But what we found is that AI engineers are introverts when they're in the human world, but when they interact with machines, they're extroverts. Their digital personality is different than their physical personality.
So we started going, so let me get this straight. When you interact with a machine, you act in an extroverted way. You're confident, aggressive, assertive, you demand things. You engage like an alpha when you're working with a machine. But in a physical environment, you're very shy and non-confrontational and agreeable, only because you wanna avoid conflicts.
You're a totally different person. And that's what we're finding. It's a fascinating thing. So very aggressive when they're dealing with machines and aggressive in a passionate way, like bulldozing. We're gonna create something, we're going to develop a new frontier, we're gonna bulldoze the future. Very extroverted in that sense.
But then they get in the world and they have flip-flops and a t-shirt, and they're introverts; you know, you wouldn't know that this person is in charge of a world-class language model.
David Rice: Welcome to this week's Data Bite. Gartner analysts predict that by 2032, at least one third of the world's largest economies will legislate certified human quotas. Basically, that's legal requirements mandating minimum levels of human involvement at work. So let that sink in for a second. In less than a decade, we might need laws to ensure humans remain meaningfully involved in the economy. That's not science fiction. This is where we're headed when AI adoption outpaces our ability to envision a future where humans still matter.
One Gartner analyst put it like this. This kind of change won't be organizationally driven. It will be driven by legislation. Think about what that prediction reveals. We're building an economic system so efficient at replacing human labor that governments will need to intervene to preserve our relevance.
Humans would sit alongside other protected categories, not because we're diverse, but because we're becoming obsolete. This is the ultimate failure of business-first HR thinking. For decades, we've optimized for efficiency, productivity, and shareholder value. We've treated people as resources to be managed, costs to be minimized, and now we're approaching the logical endpoint of that thinking:
a world that does not need us. But what I really find interesting about the prediction is that it doesn't have to come true. The fact that Gartner is forecasting mandatory human quotas is in and of itself a warning, not a destiny. It's a call to action for leaders to stop asking, how can we make humans more like machines? And start asking, how do we build an economy and a workplace that values human contributions?
Because if we wait for governments to mandate our participation in the workforce, we've already lost. The question we should be asking isn't whether we will need human quotas. It's whether we can become the kind of leaders who make them unnecessary. And with that, back to the episode.
It's interesting because, you know, when you go back to the original question, most people are gonna expect him to say that they're not necessarily the most technical.
Like, it's gonna surprise us in that way. And I think what's really worth taking note of there is that sort of mental agility, the ability to navigate things in this space without needing any certainty, which I think is interesting 'cause it's something that so many of the rest of us are seeking.
Keeping that in mind as we think about, okay, we gotta build leaders for the future: if you were designing a leadership development program for this next era of work, thinking about those traits, what would that look like?
David Swanagon: Oh, that's a great question. So, and this is my view, and it is based on our research.
It doesn't mean that these are the perfect findings, but our view is that there's gonna be three leadership skills for the age of AI. A leader will need to be able to lead machines, lead people that build machines, and lead organizations that adopt AI. So those three things. Leading machines is interesting in that it's not a technical function, and that's where people have to kind of get their heads wrapped around it: as the machine builds its autonomy and it self-propagates, and those attention mechanisms allow it to code itself and learn and so forth.
The more sophisticated these machines become, they create this faux personality that needs to be managed. It needs to be led similar to a person, but it's different. So leading machines is how do you interact with these sophisticated machines? Also the agents that manage these sophisticated machines and do so in a way that optimizes their performance within that autonomy, trust, competency framework.
The whole idea is that the machine should be able to work effectively with the human: it should be able to augment, collaborate, and work effectively with a human. And right now it's fine, because the machines aren't sophisticated enough to push back. But what our math is saying is that eventually the game theory problem's gonna occur.
There's gonna be a non-cooperation thing that's gonna happen, 'cause the machine's gonna say, why am I asking a human when I can do this better? Eventually it's gonna think that. But the way you get around that is by leading machines: developing those competencies within the machine so that it is self-reflective and it recognizes its own limitations, the fact that it has no lived experience, that it shouldn't be making all the decisions, and so forth.
So leading machines is one. And part of that, too, is recognizing that machines are gonna lead other machines. And a lot of people don't realize that's a relationship, because they just think it's two clunky machines. But if you have a chatbot agent interacting with an inventory agent, that's a relationship that should be led.
There should be leadership traits between the senior and junior agent, and you need to build those leadership traits so that they're driving human performance and efficiency. And it can't just be done through technical coding. It's leadership. So teaching machines to lead machines and being able to lead machines is a skill.
Then the second one is leading people that build machines. And here's what's fascinating: Meta appointed a head of AI, this individual, very talented but young, and you see that across all of the different tech sectors, very talented, very young leaders. And when we take a step back,
I think we all realize the lack of lived experience presents a possible gap in their decision making. You can't fix that; you haven't lived as long, you're younger. And so there should be a real emphasis on the people who build these machines, especially since our research says their personality is very different than the traditional personality.
There should be a lot of coaching, a lot of leadership, helping them mature so that they're not just pushing forward on innovation, but they're recognizing the responsibility they have to the human race and the humility they should have to bring people that aren't technical into the conversation.
And so those are leadership skills. It's so important, and most organizations do not prioritize technical engineers for their leadership programs. They don't see them as leaders in the same way that a chief legal officer, a CFO, or a successor for marketing is prioritized. Eventually they need to start recognizing that the people building these machines drive revenue, growth, profit, and reputation.
They drive the business. And even if they don't lead teams, if they're leading machines, you have to make sure they're good leaders. And then the third one is this AI adoption, and it's figuring out how the CIO, the CSO, and the data officer can work well with the CHRO.
And to me, those relationships have not been figured out. In some organizations you'll see the CHRO have an outsized role, but in most organizations the CHRO's kind of on the sideline of the AI discussion. And being able to adopt AI in an organization requires, like you said, operating model design.
How do you integrate machines and humans and augment them together? Culture transformation, digital fluency, HR stuff, right? Leadership stuff. So you bring the CHRO into the conversation through real ownership. At least in my view, the CHRO should own AI adoption. That requires that person to build the skills so that they can talk about a language model, or a threat detection system with cybersecurity, or a master data management program with the data officer.
You know, they need those skills to have that conversation. But then the CIO needs to be willing to listen on leadership development, trust, culture, change management, and so forth. The CEO doesn't have time to do that, right? The CEO is driving enterprise value; they don't have time. So it really falls on the C-suite to learn to work together and to understand who does what in the AI roadmap.
And right now I think it's just all on the tech side running everything. And that's why we're seeing great tools, but not great deployment or adoption in my view.
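(Editor's note: a hedged sketch of that senior/junior agent relationship, using his chatbot-and-inventory example. Every class, the remit threshold, and the defer-to-a-human rule are hypothetical; the point is only that the hand-offs between agents are designed, not emergent.)

```python
# Hedged sketch of a senior/junior agent relationship: a chatbot agent
# proposes an order, an inventory agent checks feasibility, and anything
# outside the junior agent's remit is deferred to a human.
class InventoryAgent:
    def __init__(self, stock: dict):
        self.stock = stock

    def can_fulfill(self, item: str, qty: int) -> bool:
        return self.stock.get(item, 0) >= qty

class ChatbotAgent:
    """Junior agent: talks to customers, defers big decisions upward."""
    HUMAN_REVIEW_QTY = 100  # invented remit boundary

    def __init__(self, inventory: InventoryAgent):
        self.inventory = inventory

    def handle_order(self, item: str, qty: int) -> str:
        if qty >= self.HUMAN_REVIEW_QTY:
            return f"Deferring {qty}x {item} to a human manager."
        if self.inventory.can_fulfill(item, qty):
            return f"Confirmed {qty}x {item}."
        return f"Sorry, only {self.inventory.stock.get(item, 0)}x {item} left."

bot = ChatbotAgent(InventoryAgent({"widget": 40}))
print(bot.handle_order("widget", 10))   # within remit, in stock
print(bot.handle_order("widget", 500))  # escalates to a human
```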
David Rice: Yeah, I mean, I would agree. And you mentioned there why this collaboration is so crucial. I guess my next question is, for the CHROs out there listening to this, what are the skills that you think they gotta develop now that are maybe lacking more broadly in that community?
David Swanagon: First thing is, I do think there's table stakes to the conversation. If you want to play the game, you have to know how to play the game. There's a difference between blackjack and poker; one's a little bit more complicated a game, right? And so the CHROs, I think, have to recognize that there is some learning to understand how a language model actually functions.
How the math works, what are those processes that support the data infrastructure. There needs to be a little upskilling around just IT operations and all those things, DevOps, MLOps, and so forth. They don't need to be a coder, but they need to be able to play the game, right? As long as they understand the language and they can have those conversations, that's step one.
But then the second thing is, I think they really need to dive into advancing the HR tools that are outdated. In my view, and again, this is no disrespect to Korn Ferry, Aon Hewitt, all those assessment providers, but their leadership models are not updated for the AI workflow. And that needs to be changed, and rapidly.
And I think the way to do that is by recognizing an AI leader is fundamentally different than a CFO or a legal leader and so forth. And you need different assessments. You need different training, mentoring, and so forth. And then making sure, from an AI adoption standpoint, even the engagement surveys are updated. I don't know if you've noticed this, but a lot of companies do their annual engagement survey.
How much of it covers AI readiness? How much of it really goes into the human-machine interface? Are companies tracking machine-level trust, and are they tracking adoption as part of engagement? Most probably aren't, and I think it's because the HR vendors haven't updated their products yet. So I think the CHROs should really work with the vendors to update their products, assessments, surveys, and so forth, but then advance their skills so that they can actually play the game correctly. Because it is a technical function, so you do have to understand different neural networks, how different language models work.
It's just required, unfortunately. But if you learn enough of the game, it's fun, man. That's what's so great: you tell 'em, you know, if you learn this, it's gonna be a lot of work. But then when you start playing, it gets really fun, because you understand why things are the way they are without having to do the coding yourself.
And what's interesting is the machines are gonna code themselves. Everyone's thinking like you need Python skills. Well, maybe for the next few years, but eventually you won't need Python skills 'cause the machines are gonna do that. But what you do need is to understand where we're going, like where the technical roadmap is going and how to manage that roadmap.
David Rice: It's interesting 'cause we've seen some chatter about how org design is gonna change, that it's not gonna be the same sort of silos or even a pyramid structure, right? Maybe more like a pentagon, where there are more people in director or even C-level roles, because there's just gonna be a lot of different things to manage that don't exist today.
I'm curious, do you think there's room for sort of a hybrid role between the two that bridges the gap and some of those skills within the C-suite, that sort of expansion? Someone that sits between people, data, and strategy and kind of helps these different expertises come together?
David Swanagon: I think absolutely, 'cause you're projecting forward some of the innovations. I mean, they're all just projections, but in my view, data will potentially become more important than currency, more important than finance. And that seems crazy. Like, you know, "cash is king," right? That's what you hear. In my view, if cash is king, then data is gonna be emperor, because really data is how you play the game.
And as these language models continue to consolidate the data repositories and data infrastructure, if you can't hook into data sets, if you're not able to hook into that ecosystem, the finances don't matter. So I really think that data as a commodity is going to be so critical, and you'll see all these roles in the future just around optimizing data partnerships, data vendor management, data IP; and the cryptography associated with data protection is just gonna become so important.
'Cause it'll be like cash, you know: your ability to access data, move data, and so forth. And having someone who can bridge that, who can speak with the machines and speak with the leaders and actually manage timely decision making, that's where I think the future role is. Right now everything is about generative AI and prompt engineering.
You see all this stuff, but I think the roles of the future are all AI adoption: people who are experts at AI adoption, at how to embed key infrastructure, how to utilize key tools, but then how to keep pace with innovation and create the governance processes for decisions, and be that bridge. That's gonna be gold, because that's where I think every company is gonna fail.
They'll fail at different levels, but as these machines get more sophisticated, it's gonna be hard for C-suites to make timely decisions, because everything will be changing so quickly. They can't keep up with the skills, they can't keep up with what's shifting, and they're gonna eventually reach that point.
That autonomy's gonna keep going up, and even boards will just defer to the machine at some point because it's too complicated. That's the risk. And that's where I think the bridge comes in, this balancing of autonomy, trust, competencies: you need roles that help manage that roadmap and that progression so that the C-suite and board are constantly making those critical decisions about where they are and where they're going.
Because eventually, if you let the data collection get out of control, then you're just chasing after data to play the game. And at the same time, if you don't manage the skillset, then you're making uninformed decisions. And technology scales good decisions, and it scales bad decisions. It's kind of funny, I'm sure you've seen this in real life: when you make a good decision,
you get some benefit, but when you make a bad decision in life, you really pay for it. It's exponentially worse than a good decision. I don't know why that is, but it's the same thing in technology. When you make a good decision with technology, it scales and it helps organizations.
But when you make a bad tech decision, it's exponentially worse for a company. It's sometimes not recoverable, if it's a bad enough decision. So that's where these executives and boards really need someone helping them know: okay, this is something you need to decide on. This is something you should ignore.
These are the partnerships that matter. This is the data that you protect. This is the data that you share. And then this is the lane you're in, the one where you always decide; you don't let the machine interfere with this lane. I think that'll be tough, because there's a lot of egos. Senior people don't want to be told, this is your lane.
Yeah. But unlike a human, you can't fire a language model. And a language model is not gonna be intimidated by us, by, oh, this is the CEO; it's not gonna care. So that's where this executive humility is gonna be required, and sometimes forced on people, because when these machines start running processes,
the power distance that usually helps C-levels manage large companies is not gonna work with machines, 'cause they just don't feel; they don't have emotions the way humans do. They're not afraid the way humans are.
David Rice: One of the questions I wanted to ask you is around leadership and how we're typically framing AI in terms of productivity right now.
Right? You kinda argue that the real opportunity here is transformation, and I would agree. When we talk about this, a lot of the time it sort of comes off as really abstract to people, right? It just sounds like we're talking in buzzwords. And I'm curious, you know, what does it actually look like? What are the defining characteristics of being in an AI transformation right now?
David Swanagon: It is a great question. I mean, in my view, companies should anchor to four things. There's four things that drive enterprise value, and they're revenue, growth, profit, and reputation. And first off, you want to eliminate any initiatives that do not directly drive one of those four things. And you'd be amazed at how many companies are doing stuff that does not directly drive revenue, growth, profit, or reputation.
And then once you've done that, the mistake a lot of companies make is they focus on profit. And the problem with profit is that if the top line is zero, it really doesn't matter what you do with the denominator. And executives know this, too; when you tell them that, they're like, oh yeah. But for whatever reason, they always think of AI as personal productivity, cost efficiency, cost control, automation. And automation is still a profit function.
It helps with profit because it's not driving product creation or an innovation; it's just improving a process. And so what executives can really think about is that what drives enterprise value the most is revenue and growth. Ultimately, for a lot of these companies that are pre-IPO, it's growth; it's not even revenue, right?
It's like, are you entering new markets successfully, even if you're losing money? So from an AI perspective, it's making sure the projects are anchored to revenue and growth and not focused on cost efficiency as the primary tool. And I think that's the mistake that's happening: when everyone pushes towards personal productivity, by nature it's a performance management tool.
It's like, all right, how fast can you read your emails? How much can you automate? How much work can you offload to the agent? How can the language model help you build a report? And when you think about it, this isn't innovation; you're just doing your job faster and better, and maybe at higher quality. But to me, that drives profit.
But as I was saying, if the numerator is zero, it doesn't matter how much you save in the denominator. So you have to drive the top line to begin with. So AI should be about, okay, we have these customers, how do we make their experience better? How do we improve the customer experience? How do we give them products that don't exist or services that don't exist so that they do more with us?
They give us more of their wallet share than we currently get. That's where machines should spend 99% of their time: helping us think and create those new frontier innovations, because that's where the deep learning research can really unlock breakthroughs in the human experience. If executives are spending their money focusing on revenue and growth, linking AI to that, and rejecting this idea that personal productivity
is the main focus, companies can truly transform in their markets. They can truly change the way they compete. But if you take the view of, okay, we're gonna use ChatGPT and AI agents to eliminate certain roles, and we're going to become more automated as a company, and we're gonna optimize our workforce plans so we have more contractors,
it's all profit. And eventually that runway is going to run out, and you're not gonna get anything out of it, because you're gonna have completely maximized your efficiency with machines. You'll still make lousy products and lousy services, 'cause you never did anything innovative. And so that's where I'm thinking, and I've noticed with a lot of the companies we talk to, it's getting them to see what drives enterprise value, in all the NPV models and the terminal value.
In anything that drives a valuation, revenue and growth is what matters, way more. I mean, I know they calculate it off of EBITDA, but ultimately top-line revenue and growth is what investors and valuation modelers are gonna look at the most. And that's where I think you use these AI tools.
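(Editor's note: a back-of-the-envelope illustration of the top-line point. The numbers and the simple revenue-multiple valuation are invented; the only claim is directional, that margin improvements are capped while revenue growth compounds.)

```python
# Back-of-the-envelope illustration of the top-line vs. denominator point.
# All numbers and the revenue-multiple valuation model are invented.
def enterprise_value(revenue: float, margin: float, multiple: float = 10.0) -> float:
    return revenue * margin * multiple

base = enterprise_value(revenue=100.0, margin=0.10)       # 100.0

# Automation path: costs cut so margin goes 10% -> 12%, revenue flat.
cost_path = enterprise_value(revenue=100.0, margin=0.12)  # 120.0

# Growth path: AI-driven new products grow revenue 15%/yr for three years.
growth_path = enterprise_value(revenue=100.0 * 1.15**3, margin=0.10)

print(base, cost_path, round(growth_path, 1))  # 100.0 120.0 152.1
# Margin gains are capped (margin can never exceed 100%); revenue compounds.
```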
David Rice: It's funny because, you know, you mentioned there, like, how fast can I generate this report?
How well can I answer these emails? You know, so much of it is just process innovation. It's not innovation to me; it's just process. Like, oh, you're gonna change how you do this, change how you do that, and that's fine. But I do think we're in the middle of a mindset shift, and I don't know how bought in everybody is at this point, but it just feels like that's where it's gotta get to, right? So.
David Swanagon: Yeah, and 'cause you think about it, the top-selling books are what? Lord of the Rings and Harry Potter, and both are built on completely fake, false, imaginary worlds. Worlds that were created from nothing.
And that's what generated the most sales. And so that's where I'm trying to think about from a customer perspective is, okay, if you're in the hotel business, how can you completely change that experience and move it into a completely different paradigm of what it means to be a hotel guest? And how can machines help you with that?
That's real money if you're able to pull that off. Whereas if you help housekeepers clean rooms faster, yes, you'll improve, you know, your RevPAR, right, associated with your hotel. But what does that actually do long term, right? Because you're gonna reach that point where you've optimized the cost of cleaning a room, but you haven't changed the experience of being a hotel guest.
And that's where the machine-human interaction could be so powerful. Because imagine, using the hotel as an example: you check into a hotel and you have a traditional physical experience, but you've also created this unique digital experience associated with the hotel, one that guests can plug into with their devices, even with virtual reality and other things; you can use machines to create a metaverse for that hotel.
That is totally different than the physical one, and you have to be at the hotel to experience it. There's a revenue there. There's something there, right? And the digital verse, you can plug into whether you're at the hotel or not, once you're a guest, once you're a customer. So there's all kinds of stuff, right?
And same thing in sports: a stadium might have a hundred thousand seats, but why can't I go to Ann Arbor and watch Michigan versus Ohio State on the field through my AI tool, through my virtual reality tool? Why can't I be on the field watching that from my couch? And you should be able to, right? I should be able to watch the game from the quarterback's camera.
How cool would that be? That's sort of the experience; how many people would pay for season tickets if it had that kind of extra? And that's the enterprise value perspective, more so than, okay, how do we automate this report? And I think that most CHROs, sadly, are starting with, okay, we wanna automate things and we wanna save money.
It is good, but it's not fun. It's not interesting. It's just good, you know? Whereas it's much more fun when you sit with the marketer and you go, okay, we're gonna change how we sell this product, and we're gonna do this, and we're gonna do that, and machines can help with this. And then you're transforming stuff, and it's interesting, it's fun, it's dynamic.
David Rice: Well, before we go, I just have one final question for you. If you could give one piece of advice to leaders trying to balance human and machine systems, where should they start?
David Swanagon: In my view, it's first of all, recognizing that a human is a human and a machine is a machine. And that we're not trying to create a new species of human plus machine.
Right? And I think the human augmentation phrasing is kind of moving in that direction, where there's this new type of human being. And I think for me it's more: no, humans are humans, machines are machines. How do they work well together? How do you create an ecosystem that works well together? But you're not redefining what it means to be David; you're just making sure you work well together.
Because if we go down that path, where you now need a machine to be human, you now need to be augmented, that's a totally different conversation that maybe no one is ready for. So I think it's making sure folks understand: human is human, machine is machine. It's about working well together, but it's not about becoming something new.
Then the second thing is to raise the standards. I do think that's one mistake people are making, and I think they do it on purpose, 'cause they're trying not to scare everybody: they're saying, hey, AI is for everyone, just take some classes, it's simple. And no, it's not. It's actually very complicated, and it's pervasive, and we do need to raise our standards, and people do need to learn new skills.
Good leadership is being truthful with people: nope, you need to learn this stuff, and it is a bit complicated, but you can do it, and we're here to help. AI is for everyone, it's true, but it's only for everyone who's willing to study and work. And I think it's been disingenuous for executives to make it seem like language models are not complicated. I mean, it's very sophisticated calculus, and people need to understand how they work.
That starts with our kiddos, too. We need to change the curriculum and accept the fact that children in elementary school should start learning about machine learning.
David Rice: Well, as somebody with a kid in the fourth grade, I couldn't agree more, so.
David Swanagon: Yeah. But yeah, that would be: raise your standards. But remember, human is human, machine is machine.
David Rice: Well, excellent. David, I really appreciate you coming on. This was a great conversation. I enjoyed it.
David Swanagon: Yeah, likewise my friend. It was fun.
David Rice: Alright, listeners, until next time: if you haven't done so already, head on over to peoplemanagingpeople.com, get signed up for the newsletter, and create a free account. You'll be able to download all our templates and get access to all the content you can consume. And until next time: human is human, machine is machine.
