For the past 18–24 months, HR teams have been asked a blunt question: What are you doing with AI? The response has often been activity without strategy — policies, guardrails, governance frameworks — busy work that signals compliance rather than creates value. Dr. Dieter Veldsman joins David to diagnose this pattern of urgency → paralysis → compliance, explain why it’s holding HR back, and show how a deeper sense‑making process is the real lever for progress.
Dieter argues — and the data backs it up — that most HR orgs focused on what can’t be done with AI, not what should be done to drive business value. The root isn’t technology ignorance, it’s cultural: HR has treated AI like a tech project when it’s fundamentally a people and organizational transformation. This episode walks through how to break the compliance trap, structure experimentation with purpose, and expand the CHRO role into the architect of the human‑machine ecosystem.
What You’ll Learn
- Why urgency without clarity produces paralysis. The rush to be “doing AI” replaced strategic purpose with busy signals.
- The compliance trap. Too many HR teams have built guardrails before defining the destination.
- Disruption exposes organizational wiring. How teams respond to uncertainty reveals culture, not capability.
- AI fluency is a mindset shift, not a skills checklist. It’s about learning how to think with these tools, not just use them.
- The CHRO’s evolving role. From policy guardian to chief architect of human‑technology work ecosystems.
Key Takeaways
- Start with value, not tools. Too many teams bought licenses without knowing what problem they were solving. Strategy begins with why.
- Sense-making matters. HR needs to move beyond fear and hype to help organizations understand what AI is, what it does, and what it doesn’t.
- Clarity over speed. It’s fine to say “we’re still figuring this out” — what matters is offering direction and shortening the loop between decision and learning.
- Build fluency through use. Start with simple, familiar tasks. Fluency comes from using AI thoughtfully, not perfectly.
- Encourage responsible use. Just because AI can do something doesn’t mean it should. Set clear boundaries without discouraging experimentation.
- Leaders must model experimentation. Normalize learning out loud. If leaders don’t show how they’re using AI, teams won’t feel safe doing it either.
- CHROs should lead, not follow. AI is a people transformation — HR belongs at the center of the conversation, not playing catch-up.
Chapters
- 00:00 – Introduction
- 01:47 – Overreaction, Compliance & the Value Gap in HR
- 04:34 – Why HR Defaults to Safety, and What That Costs
- 06:49 – Organizational Response to Disruption
- 09:55 – Clarity, Experimentation, and Strategic Focus
- 12:27 – Chasing Tools vs. Solving Problems
- 16:06 – Are We Confusing Buzz with Value?
- 18:10 – HR’s Deliberate Pace: Boon or Barrier?
- 20:48 – Adoption Rates & the Catch-Up Curve
- 22:27 – Low-Stakes Entry Points for AI Fluency
- 28:39 – Behavioral Shifts that Enable AI Experimentation
- 35:15 – The CHRO’s Role in the AI Future
- 38:24 – Integrating Humans & Machines: A Cultural Imperative
- 39:31 – Episode Close & Final Advice
Meet Our Guest

Dieter Veldsman is the Chief Scientist at the Academy to Innovate HR (AIHR), where he leads research and thought leadership on the future of work, people analytics, organizational design, and HR strategy. With a background in industrial and organizational psychology, Dieter works at the intersection of science and practice, helping HR leaders apply evidence-based insights to solve complex people challenges. He is a sought-after speaker, researcher, and advisor known for translating rigorous research into practical frameworks that enable organizations to build more human-centric, high-performing workplaces.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Dieter on LinkedIn
- Check out AIHR (Academy to Innovate HR)
David Rice: Probably about 18 months ago, your CEO asked you what you were doing with AI. So you did something, you wrote policies, you built guardrails, you established governance frameworks. And now you're stuck. 66% of HR organizations have focused on compliance, or in other words, what you can't do with AI. But only about 30% have clarity on the value you're actually trying to create. So you built safety rails. Nobody really built the road.
Today's guest is Dr. Dieter Veldsman. He's the Chief Scientist at AIHR, and he's gonna walk us through the whiplash pattern he's seeing in HR right now: the overreaction to be seen doing something with AI, followed by the overcorrection into safe mode, and why that leaves you stuck in the middle of the pack while marketing, sales, and product development all race ahead.
What he really wants you to understand is something very similar to what we've been saying on the show, and that's that companies are treating AI as a technology transformation when in actuality it's a culture and people transformation. And that means HR shouldn't be playing catch-up. You should be leading the conversation.
So today we're gonna cover why being busy with AI became more important than being strategic, how to break out of the compliance trap without reckless experimentation, the CHRO's evolving role as chief architect of the human-machine ecosystem, and how to shift from "what can we safely use" to "what value are we creating."
I'm David Rice. This is the People Managing People podcast. And if you've been playing defense on AI when you should be orchestrating the transformation, well, this conversation is your reset. So, let's go.
Dieter, welcome!
Dieter Veldsman: Thanks so much, David. Lovely to be here.
David Rice: When we were talking beforehand, you'd mentioned this kind of overreaction, then overcorrection pattern that you're seeing in HR, specifically when it comes to AI. Can you unpack that a bit? Like what's really going on there? What's driving that?
Dieter Veldsman: You know, I think similar to other functions, if you roll back time 18 to 24 months ago, there was this big pressure of, but what are you doing with AI? You need to be seen doing something with AI.
And I think it did result in a bit of, I don't want to call it mindless experimentation, but people were asking, what tools do I have? How am I going to utilize these tool sets? What are the use cases I should be focusing on? So we definitely saw a frenzy on the one side and this overreaction towards, you just need to be busy with AI, whether it's valuable or aligned to what you actually want to drive as an HR team or not.
That was a little bit second place and irrelevant. It was more about being seen to be busy. I think there was then a second phase that pulled it back quite significantly and said, okay, hang on, we can't just experiment mindlessly going forward. Let's rather talk about what value we are driving, how secure we are, what the governance looks like, whether we're using the right things, and what the guardrails entail.
So I think we've definitely seen that shift over the last, like I said, 18 months or so. And you know, we recently did a study of about 337 HR organizations, and what the data showed, which was quite interesting, was that almost 66% of them have focused a lot on the compliance aspect. So what's the governance?
What do the policies look like? Guardrails, what can we use, what can't we use? But only 30% of them actually say there's clarity around what value is going to look like for us through the use of AI. Which is why, when I talk about the overreaction, there was definitely this push to be seen doing something with AI, while the slower question of what we want to do with AI, and why, has tended to play second fiddle over the last couple of months.
David Rice: Yeah, I think that's right. I've seen this whiplash between urgency and avoidance, but like you said, you're told to do something and do it fast, yet without any guidance or specific goal you really want to achieve, you sort of end up freezing.
And then the default, and it's not just an HR thing, but I'd say it gets exaggerated in the HR community, is you go into safe mode. It's the compliance mindset, right? And it feels like HR wanted to respond strategically, but then, whether it was due to capacity or permission to experiment without being judged, it just never really quite happened.
Is there also a significant fear of just, I guess, getting it wrong? Or maybe it's even something deeper. Like there's this pressure to be seen as doing something, but there's no structure to back it up, so you end up kind of just feeling helpless.
Dieter Veldsman: No, I think you're spot on. And I think there were a couple of things that happened. I mean, I'm a very passionate HR professional myself, right?
So I can definitely identify with that initial feeling where, you know, we are a little bit more risk averse on the HR side of things. It's not just related to AI; in general, we also know what the real risks are when things do go wrong. So we saw HR being middle of the pack in terms of AI adoption, right?
We were lagging functions such as product development, marketing, and sales, but we were still ahead of some of the other, can I call them more legal-governance types of functions? So legal, risk, compliance, et cetera. We were kind of there in the middle. I think there was also the general, and I don't know why HR always gets targeted,
I mean, that's maybe a conversation for a different day, but whenever new technology comes along, it's, oh, it's the end of the HR function, right? And that's never proven to be the case; it's changed a couple of times. But there was that AI anxiety: oh, is it finally here? What is it going to do to us? And I think that's a much more reasonable conversation today, to say, realistically, AI will take some tasks, activities, and workflows away.
But what does that mean in terms of the real expectation of what HR work is going to be in future? The last point that you make there: I do think there was a realization in a lot of organizations that first treated AI very much as a technology transformation and then missed the boat a bit, because it's much more of a culture and a people transformation, which is why HR should be around the table guiding and leading that conversation.
And the companies that we've seen do this really well have a multidisciplinary approach to whatever AI strategy they're adopting. They say, I actually want technical voices around the table, I want the HR voice around the table, I want the business operations and strategy voices around the table, to tell us what AI capability we require for the future.
And I think in a lot of organizations, HR was a little bit late to that conversation, whether that's, to your point, due to not being authorized to be there, not seeing the need, or not being equipped to have that conversation. I think the reasons vary slightly.
David Rice: You mentioned there, you know, you're an HR guy, right? You've been inside several companies trying to figure this kind of stuff out. I'm curious, what's one of the biggest things you've noticed about how organizations actually respond to disruption? I mean, there's what we say about disruption and adaptability, but then there's the reality of it.
Dieter Veldsman: I quite like your earlier comment about how sometimes the default position is safety. I think when something really disruptive enters the organization, you definitely find three camps, and we saw that with AI as well. There was the early adopter camp saying, this is absolutely amazing, we just need to jump on it.
Don't worry, this is going to be the silver bullet that absolutely changes everything. So almost that over-enthusiasm, that hype. Sometimes that gets you into trouble, because I think people embark on things without really understanding them properly. And then on the other side, you typically also find the people who are skeptics right from the get-go and tend to withdraw a little bit, saying, not my first rodeo.
I've seen this big transformation narrative before, nothing's really going to change, I just need to ride this out a little bit. I think what made AI slightly different was just the scale at which it hit organizations, due to the fact that for the first time everybody had access to generative AI tools.
Literally, if you had an internet connection, you had access; it wasn't something restricted to a particular audience. So I think the scale at which it happened heightened a lot of the anxiety around it. But when I look at the disruption, I think we are now further down the line, where organizations are going through a bit of a sense-making process: what does this really mean?
What can it really do? Where is it really useful, and where is it just interesting and novel but not necessarily that impactful for us as an organization? So for me, when we talk about disruption, there has to be that sense-making cycle in organizations, and I think not all organizations go through it at the same pace, where we say, okay, let me understand what this is.
Let me understand what that means for me in my context. Let me decide what I wanna do with it, and let me think about how I do that sustainably and not just as a flash-in-the-pan type of event. And I think that tells me a lot about an organization's DNA and culture, how they approach disruption or big changes that they see coming.
'Cause I think your real colors come out when there's uncertainty you need to deal with, and that's what we see in terms of how organizations respond to it. So you always have your early adopters, and you'll have your people who are slower to adopt. For me, the sense-making piece is the really important one that leaders should focus on a little bit more, to make sure they equip their organizations for what they want to do.
David Rice: I love that you use that phrase, sense-making cycle. If disruption does anything, it exposes your organizational wiring, right? You might have one image of your team, and then you find out, actually our default is to over-engineer everything.
We spend so much time thinking, or we stall out because we don't have a shared language or any clarity around this topic. And it seems to me that for the orgs that are really doing the best with this transformation, it's not about resourcing necessarily, right? Because like you said, everybody's got access to generative AI.
Instead, it's about having the clearest sense you can of how fast you're really willing to go, what your comfort level is as an organization, and then what you're willing to try, like how deep the experimentation is gonna go. Is that sort of what you've seen?
Dieter Veldsman: It's spot on, right? That sense of clarity, for me: organizations at the moment that are much clearer about what they're willing to try and not willing to try are doing much better.
You know, to make it practical: we had a client the other day, and I was talking with their chief AI officer, and he made this comment at the outset: I want to tell you that we want to be a very strong second follower in the whole AI revolution.
I was like, excuse me, did I hear you right? Everybody's always first mover. He says, we don't want to be first mover. There's going to be a massive amount of mistakes that people are going to make. We are comfortable to trot along slightly slower than the rest, pace ourselves, learn from others' mistakes. And when we get into the market with whatever we want to do, we do it well and we do it in our own time and style. Which I thought was a really counterintuitive message compared to what I hear from a lot of organizations.
'Cause I think we've almost engineered these responses that we want to be first, we want to be the best, we want to be the most innovative. Sometimes it's okay to choose to be more of the solid citizen in how you're going to move with these types of things. To get back to your point, for me it's about clarity: create clarity at every conceivable moment.
And clarity might just mean saying, I don't know what we're going to do with this yet; as such, we're first going to explore further, try this small pilot, make this use case our starting point. So the more you can shorten that sense-making cycle to give clarity along the way, the better organizations do, because then they stretch that resilience muscle they have, versus, you know, it's a massive crisis, our people are in turmoil, we're never going to get there. 'Cause I think if you don't own the narrative around the disruption, and especially the narrative around the AI transformation in your organization, you're running into trouble from a culture point of view.
You run into trouble with people's levels of anxiety and how excited they are about what the future holds. And then it becomes this big thing you need to sell to people, as opposed to something really ingrained into a way of work over time. And that's kind of what we've started to notice a little bit more.
David Rice: It's an interesting time, right? 'Cause we've moved from the sort of SaaS-driven era where everybody's chasing tools, and now we're in this space where we're all using common tools, but what you're chasing is value. And that sounds obvious, right? But clearly, from these lessons, it's not easy.
We still see some folks treating it like they're chasing tools. I guess, what's making it so hard for people to realize what they're actually trying to do here?
Dieter Veldsman: I think there are a couple of things for me that stand out, right? The one is, to be very honest, chasing tools is easy, and you can immediately show that you're doing something about it. So I can say, sure, we've rolled out licenses to 90% of our people, and that's how we're adopting AI. And I think it's a very natural tendency, first, to deal with the fear of being left behind and the fear of becoming obsolete by being seen to be doing something.
I also think, and there's a fine balance here, we've been pushing a narrative over the last couple of years that experimentation is a good thing, you need to be a lot more agile in what you do, fail forward, whatever you want to call it. And I think sometimes we still slip into the trap of forgetting that moving quickly does not mean I don't think through what I want to do, and that there's a little bit of method behind the madness of putting things in place.
Because what I've seen with AI, you've already mentioned: we started with tools. It was quite interesting. The next movement was use cases. Everybody was like, okay, if it's not about tools, then surely it's about the use cases I can utilize the tools for. Whereas it's actually not; you need to roll it back one level.
And when we talk about value, I get the question a lot where people say, come help me with my AI strategy. And I'm like, why do you want an AI strategy? And they say, no, but that's the big thing we're hearing; that's what's going to unlock future value. I said, I think you need a business strategy. And as part of that, AI is most definitely something that needs to be in there as a value lever or as a pillar, but why do you separate these things out so much?
Again, what is your competitive advantage as an organization? What is your unique contribution, and how can AI enable you to amplify that? That's the question you should be looking at. It's a simpler question, but a much more difficult, nuanced answer than, do I buy license X or license Y, or tool X, or do I focus on use case Y?
So I think it's about elevating that a little bit more strategically. But also, what our data tells us, which is quite interesting, is that you want a bit of a two-pronged approach. You want the leadership team to have this conversation to set the guardrails and the direction, and you do want some experimentation to happen at the grassroots level, right?
In terms of where people are utilizing these things day to day. But you wanna give them guardrails around what value we are trying to unlock. Because just telling people, go use AI, become more productive, that's not good enough. What do we do with the productivity? What is it that we are trying to unlock?
So again, that clarity and direction you can give to people there, I think, is extremely valuable. It's my earlier point: go own the narrative. Go own the narrative around what AI means for us as a business, how we're going to try and apply it, and to what gains, or what value, or what impact.
I think it frames the conversation for people in a vastly different way.
David Rice: Yeah, I agree. I think part of the challenge here is there's just so much noise right now in the AI space, right? It's hard to even know what value looks like. It was funny, I was at a conference and there was this guy speaking, and he said, don't let AI just be a buzzword in your organization.
And there was this woman to my right, and she sort of groaned and goes, what does that even mean? And I thought about it, and I was like, I mean, she's right, 'cause it was just this vague statement said on a stage, you know? But I thought, well, if I had to try to give that a meaning, it would be this idea that you're confusing being busy with AI with actually doing something with AI.
I've seen this with other tools in the past. I remember I worked in a marketing function once, and we adopted this huge tool, and we spent months doing all this training and setting up backend work.
And then when you actually looked at what we produced out of it, it felt like we did a lot of work to do very little. So I think there's this emotional attachment to the idea that tools can be these huge differentiators, but sometimes you gotta let go of a bad fit, or of something that's just not strategic enough.
What is the broader impact of that on your culture, your workflows, everything?
Dieter Veldsman: To your point there, I think there's also a part around calling things what they are. I come out of the financial services industry, and whenever we were doing big change management projects on technology, I used to smile and say, what are we changing?
And they'd say, no, we are re-engineering the way our consumers engage with us in channel in a more personalized way. I said, I've got no idea what that means. What are we doing? They'd say, no, in online banking, we're moving that button there because our UI is going to be more intuitive. And I'm like, then say that's what we're doing.
There's nothing wrong with calling things out in terms of what the focus is. And we do the same with AI. We sell this big golden pipe dream of what it's going to provide us, when it adds value at the task and activity level. So say, you know, we're going to utilize it to do one, two, three, four for us, because it's important for these reasons, and that's okay.
People will buy into it. But the fuzzier we make it and the more buzzworthy we try to make it, like your earlier example, the more difficult it is for people to relate to it and to find any attachment to its value.
David Rice: We shouldn't play down small changes, right? If the FAANG companies have taught us anything, it's that a small, tiny change can make a huge difference in what a product does.
You think about Netflix: it used to be all categorized, and when they changed the menu, people discovered new types of content differently, and it completely changed the way you navigate the app. That generates a new sort of experience, and new thinking about, well, what kind of content should we create?
So one small change can make a huge difference, and we shouldn't play that down. I like the example you gave before about the company that said, we're happy to be second. I thought that was interesting. And when we look at HR, it had a bit of a slower start on AI, right? That's maybe given leaders space and a chance to be more deliberate.
Do you think that's been good for the function, to avoid some of those early mistakes other teams have made? Is it helping folks, especially the compliance-mindset, safety-positioning folks we talked about who kinda live in that space? Is it helping them feel more comfortable, the fact that it's happened slower?
Dieter Veldsman: It's a very interesting question. So 12 months ago we ran a study and asked HR professionals, how many of you are using it? And I think the number we had was something like 38%. We did a similar thing 12 months later, very recently, and that number is up to like 88%. So I think the last 12 months in the HR space, specifically related to AI, have been quite significant, and there's been a lot of catch-up happening.
I think it's been beneficial, on the one side, for people to cut through some of the initial noise a little bit and make sense of exactly what this is and what it means for us. And that's always a good thing: in hindsight, the more you know, the better you can adopt and avoid some of the gaping holes and mistakes other people have made.
On the other side, though, there has been a bit of a downside, in that there's a leapfrogging effect that still needs to happen in HR around some of the adoption pieces we're looking at. Having said that, where HR professionals have inserted themselves into the bigger organizational conversation, I don't think we're lagging at all.
I think we're actually in a very good position to guide the organization on some of the decisions we're yet to make around how we take AI forward. To also be fair, what I hear from a lot of people is that it's not necessarily HR that's held the organization back. Sometimes it's very much about access to a lot of these initiatives, or the ability to actually start playing in safe environments, and all those things.
I do think that what has happened a bit is that HR has had the opportunity to reflect on some of that initial hype, what was hype versus what was actually real, and to cut through that a little bit more. What HR did struggle with, and we're in a better position now, is to really define our role in the broader AI play in organizations.
And we see it as twofold. The one is you have to look inwards into HR yourself: AI fluency will become an important skill for HR professionals in the future, along with changing our own models and workflows. And you have to be in the bigger conversation around what culture we're going to establish, where AI is one of the things we do in how we work, but it still leads towards the things we've been talking about, the business value.
So I think HR caught up. What our data tells us, as I've mentioned, is that it's actually moved extremely quickly in the last 12 to 14 months, and I think that's a good thing. And maybe it's a thing that comes with age, but I'm also okay not being first in a lot of things.
Maybe it's perfectly fine to learn from others and see where they land, as long as we remain involved in the conversation. I think there was a big component of the HR community that didn't want to engage with AI at all, for all the fears we've mentioned. That's a little bit of a risk, because then it's difficult to catch up. But I think we've done really well.
If you'd asked me this question two years ago, I would've given you a very different answer.
David Rice: I think you're right, though. HR gets dinged a lot for lagging, but that space allows for being more thoughtful. I think rushing to adopt often leads to things like scope creep, where all of a sudden people are using the tool for things it wasn't intended for, or using features and you're going, that's not even why we bought this tool.
And so the deliberate pace gives HR a chance to focus more on the things they actually want to focus on, right? Behavior, value, alignment, trust, things like that. And we've got this huge trust problem; we were talking about it before AI was even this big in the workplace, right?
I can go back to late 2023, 2024: we were talking about trust issues all the time, just as a management challenge. You add AI into it, and you've got another layer of trust problems, and then you've got shadow usage and all these things. So having a little bit of space before you just go throwing it into your own stuff, or putting it on other people's teams, I think is ultimately a good thing.
You made that point about fluency when we were talking earlier, and that it was less about the tech and more about having the guts to try new things, right? You mentioned adoption has increased significantly over the last 12 months, but when I go to conferences, I talk to folks who are like, I still don't know where to start with this thing.
So what are some low-stakes ways for HR teams to start experimenting with AI right now, just to build confidence and get moving?
Dieter Veldsman: We refer to it as fluency and not literacy because it's like learning a new language, right? If you think about learning a new language, you first need vocabulary, you have the grammar, you need to understand the context, and that's the same way you need to go about educating yourself on AI.
So a safe place to start, and what I tell a lot of HR professionals, is take a process in your day to day that you already know really well, and identify one task or activity in that process where you can apply AI, where you know what the output needs to look like. See how you can apply it in something you already know.
I think what a lot of people do is immediately jump into the really sexy and fancy stuff that gets sold around what AI can do, but in domains and territories where they actually don't know whether the output they're getting, or the outcomes they're achieving, are correct or not.
So start with something you know really well, get to know the limitations of how you interact and engage with AI, and learn the language from a fluency point of view. Obviously that's an easy thing to say, and I think everybody's fallen into prompt design and prompt engineering just by accident.
But find out how to speak to these tool sets in the right way. Apply it to a process you practice day to day, in a safe environment, and start with a very low-stakes type of task or activity. And what I mean by that is not something where a very big decision rests on it. So, for example, I wouldn't use it, the very first time, to influence a hiring decision you're trying to make. That's a high-stakes, high-risk decision if you get it wrong. Use it for something where it guides you, recommends something for you, as part of your normal, natural workflow.
And then the second part I would say is, you know what's interesting for me? A lot of people learn AI through self-discovery and self-exploration: I play around, I hear somewhere that somebody's using it for this, and then I try it out. Try to get into a bit more of a community where you can see how other people are using it and learn from them, like an HR community, because everybody is experimenting with various different things.
But it is important to cut through the noise a bit. I smiled a while ago about agentic AI being the next frontier. Everybody was talking about massive things, and I was at a conference saying, okay, but what actual practical use cases do we have that already work? And everybody was like, no, a lot of these things are still experimental.
I'm like, okay, but that's a different conversation we need to have. So bring it back to the reality of where it is now. To answer your question, HR professionals shouldn't fear it, right? There's a very natural tendency to fear it, but when we look at the skills required to be AI fluent, there are a lot of human-related skills that HR is actually really good at.
It's much less about the data and digital skills, which tend to lag a bit in the HR domain. It's a lot more about critical thinking, being able to think through things systemically, the language skills I need to apply, how I evaluate outputs in a critical manner. So start with something you know. Start with something small.
Start with something low stakes. Build the confidence there, and then over time make it broader and bigger. We also see, very realistically, that application happens at three levels. There's the individual level: just me in my work day to day, my tasks, my activities. Start there. That's the first place you need to start.
The second level, once you feel more confident over time, is using it with others, thinking about how AI can do things I've never been able to do before, enhancing my capabilities as a colleague working with others. And then the third layer, which not a lot of organizations have gotten to yet, is where it starts operating a little more autonomously, with just human oversight.
But that will come, especially in the HR space, slightly later. So I would start with what you know. Start with something simple, something quick to turn around, a lower-stakes type of thing, and build the confidence there.
David Rice: Yeah, you gotta think of it like the gym, right? You gotta get your reps in.
Dieter Veldsman: That's it. Yeah.
David Rice: But you know, you don't have to wait until things are strategic to start trying stuff. I think that mindset's gonna delay your path to fluency. So whether it's the first draft of a policy that you're gonna edit, or summarizing exit interviews,
I know it doesn't sound transformational, but ultimately the transformation exists mostly in your mindset; it's not totally in your practices, you know what I mean? So I think you gotta start there and start transforming how you think about how to use this thing. And there's a lot of low lift in some of these tools now. I remember in 2023, the first time I put something into ChatGPT, I was like, all right, it was like asking an intern to do it. But now I'm like, oh wow, that's actually pretty good.
And I look at some of the other models, like Claude. And that's the other thing: I love experimenting with a new tool. I find a new platform and I'm like, let's see what this thing can do. And sometimes I'm really blown away.
It's worth just checking it out and seeing what else is going on.
Dieter Veldsman: And to your point, the experimentation is worth it: go play around with a variety of different things, see that these things are all different, they do different things really well, and start understanding what you use what for.
It was explained to me the very first time we started using gen AI in exactly that way: it was like having 50 interns at your fingertips, all super motivated. They kind of know what they're doing, but they don't have any experience in really being able to see things through end to end.
That has changed quite significantly now, I think, with how these tools have been built, et cetera. So jump into it. And, you know, your gym analogy: you're not going to run a 10K on day one, and that's fine, but do the hard yards to run one or two kilometers, and over time you'll get to the really good stuff.
For me, the aha moment was that the devil lives in the detail a little bit. AI is really valuable when you get down to the task and activity level to start changing the day-to-day things you do. There's a lot of value for you to unlock there and to build your confidence, and the bigger things will then come.
Don't search for that magic silver-bullet thing. It's all these little increments that collectively, I think, make a difference and move the dial in the long term.
David Rice: This is really a culture and mindset shift, which is kind of what we're saying here. It often gets talked about as a skills gap, right?
So, well, we gotta train people. But I don't know if that's the case, 'cause folks are using it, they know what it does, and they're learning how to prompt, increasingly in their personal lives, not just at work. So if this really is that kind of culture shift about what we do at work, what kinds of behavior should we be normalizing if we want to encourage these higher levels of experimentation in the workplace?
Dieter Veldsman: So I think there is a balance we need to normalize around trying new things but not taking unnecessary risks. For me, the behavior we need to normalize is how people identify where to apply this and where not to, because just because AI can does not mean it should.
So how do we teach people what responsible application is? Not to go into a major ethical debate every single time we want to utilize it, but to intuitively start to understand, as a human being, there are certain things I don't want to utilize AI for, because they're damaging, harmful, or risky.
So it's about building that type of behavior in a way that people still have the confidence to try new things, so we don't use it to strangle the creativity and innovation people want to apply when they start utilizing these different tools. It sounds a bit counterintuitive, but there's a fine balance, which is why I'm a very strong believer in setting the guardrails out explicitly: go play in this space, and let's see what we can do there; but these are the things we are going to avoid regardless of how good the models become; we don't want to get there.
I think the second one is to also change the narrative, because in some places people still feel that using AI is cheating. You walk by and they're like, oh, I don't want you to see that I've got the AI tool open, because I want to showcase that it was me that did this. Normalize the fact that we use AI at work and in our work to produce the types of things that we do, but teach people where it's good to use it.
There was an interesting study that came out that said, if I use AI or generative AI to do the thinking for me, I never attach any value to the work I delivered. And I don't retain the memory of what I did, I don't learn from it, and I don't build my credibility over time, or my skillset, my proficiency.
Whereas if I use it as a sparring partner to help me refine things further down the line, I still retain the ownership of the work, the pride in the work, the learning that took place that I need. So there's that behavior of teaching people that it's not there to replace you, it's there to augment you; but what does augmentation look like, and how do you work with it responsibly?
For me, it's like teaching people the behaviors the first time they get into managing others. It's the same type of behavior, where you need to learn: what do I delegate to the tool? What do I do myself? What decisions do I want to keep with me? What are the things I don't want to delegate to the team, for various reasons? And what does our scope look like for how we're going to play together? Those things are quite important and tangible. It builds on the normal behavioral things: critical thinking is super important and will remain important. Can you fact check? Can you think through things?
Don't take things at face value; know how to evaluate the outputs of these things, and know where to put in the controls to avoid things like bias. But for me, those are level-one things around general use. If you get these other things right, I think you start building something that is a lot more scalable and sustainable, and that will give you a lot more value in the future.
David Rice: It's funny, 'cause maybe I've just worked with too many creatives over the years, but I've seen this romanticized perfection a lot. One of the things I've noticed with the AI stuff is that now a lot of folks feel more pressure, because it's almost like, well, I had this all-knowing machine help me build this, right?
So it's gotta be perfect. And I'm like, actually, you should romanticize iteration and experimentation as much as anything. It's gotta be part of how we lead, though. Leaders have to show you that, saying, well, I don't know, we're gonna try something new with this workflow, we're gonna see what we learn, and I'm using AI to help us drive this change. I mean, that's how the culture shift happens, and that's how people go, well, if he can do it with this workflow, why can't I do it with this graphic, or whatever I'm working on? And I think, yeah, it's just gotta be modeled top down.
But there are old habits ingrained in so many of us. If you're over the age of 30, you've probably had some experience with what I'm talking about, this romanticization of perfection. And I think we've gotta let it go, 'cause we're into a new era where things move faster, and it really is down to how humans use these tools.
'Cause these tools will spit out very similar-looking things, but you are still the thing that makes it unique. And so it's still going to be iterative, and it's not gonna be perfect a lot of the time. It's gonna be based on taste and measures. So it's okay, just go with it and see where you get to.
Dieter Veldsman: No, you're spot on. I always smile in organizations whenever everybody talks about innovation. I say, okay, what's important? No, innovation, right? We want to innovate as a company. I say, okay, what do you do when people make a mistake? No, quality is so important, we immediately correct mistakes, because we have a very high perfection threshold.
We hold people to a very high standard. Those are two counterintuitive behaviors. If you want people to innovate, you need to allow them to make mistakes, and then, to your point, you need to create an environment where that is tolerated, an environment where that is actually encouraged, an environment where people don't fear they need to hide mistakes because there's going to be some sort of consequence. And you're spot on.
It's the same here. Things are not gonna be perfect. Just because I put something into spellcheck does not mean it gets everything every single time, and it's not going to fix the quality of my document; it can only do certain things. I think what's important is to think about the unique human contribution you still bring to the table,
because ultimately you are still in the pilot seat around how and where you utilize these things, string them together, and connect them to the real problem you're trying to solve or the opportunity you're trying to pursue. That's the important piece to always bear in mind: you are the sense-making vehicle that orchestrates this over time.
And, you know, to get back to your intern analogy: you can have a hundred interns running around doing amazing things for you, but if they're not coordinated and busy with the right things, which I think is the human contribution you make, it's just busy work, pointless work. It's being seen to be doing something, as opposed to really shifting things in the right direction.
And if we use AI well, and maybe this is me on my soapbox a little bit, I think it's going to celebrate our humanity and our authenticity a lot more, our creativity. That's going to require a major mindset shift around what it does and what it doesn't do, and how we fit into that new picture. Because we are on the cusp of a new type of work environment, new types of work models, new activities coming in, and it's happening quite quickly, which is why we need that sense-making piece I spoke about earlier, for people to do in their individual capacity as well, and not just in the organizational capacity.
David Rice: Well, my final question for you: there's been a lot of talk over the last several months about the shifting role of the CHRO as a result of AI becoming so omnipresent in the workplace. How do you think the focus changes in 2026 for this group, chief people officers included, obviously?
Dieter Veldsman: I love this question, right? 'Cause I think the chief people officer or CHRO role has been changing quite significantly over the last couple of years, and that's very much due to the fact that the mandate HR carries in organizations has been changing, quite significantly, from guardians of governance and personnel-function servicing.
Those things have shifted a lot towards the strategic partnership and transformation type of mandate it carries today. For me, there are two things that are important. The first is to remember that a CHRO is a business executive first and an HR executive second. What I mean by that is that you still carry business responsibility for the things you need to do, and you need to wear a business hat.
HR then becomes the domain you bring into the business boardroom for having those conversations and connecting people to it. With AI specifically, and this is the crux of our conversation: if it really is a culture change in the organization, the CHRO has to carry that mandate of how AI is going to help us become the type of organization we want to be,
because it's just one of the things we're busy with, as opposed to just having the technology conversation. So I think that's something changing quite significantly in the CHRO role: what this transformation really is, what we should be careful of, and what we should be pushing and punting a little bit more.
And ultimately, for me, it's about taking a holistic approach to AI transformation that involves the skills angle, the people angle, the technology angle, the data angle, the safety angle, and the strategy angle, 'cause that's what I think will lead to success. And I think the CHRO is a little bit of an orchestrator, bringing all those parties around the table and getting them to talk.
It's also something CHROs seem to be pretty good at: connecting people towards a higher-level conversation on what this means for who we want to be as a business. And, you know, when we look back on our AI transformation in a couple of years' time, what's the story our people are gonna tell?
Is it one that we will be proud of? And is it one our customers would also want to hear? I think that's the key role, slightly different, that the CHRO is stepping into.
David Rice: I agree. And I heard a term recently, I think it was "behavioral economist" or something like that, you know?
And I think that's something your CHRO will become, because you're stewards of not just people policy, but of systems that shape human decision-making alongside these machines. The role's evolving into something more like a chief human strategist, right?
You understand the interface between human capacity, organizational culture, and this technology, and what it all means. That sounds incredibly complicated. It's just funny, 'cause you talk to people analytics folks and they'll tell you, well, you gotta have a solid data background, and it's gonna be this and that.
And then you talk to people who are engagement focused, and it's very human-centric. So everybody's got their different little thing about it, but I think it's actually gonna be the whole ecosystem. You gotta think about what humans and machines are both capable of, and how this impacts our culture and our workflows.
Yeah, like what does the technology demand of that? That's kind of complicated, but it's gonna be interesting to see it evolve like that.
Dieter Veldsman: Yeah, I still think, you know, whenever I do career coaching and things, and I'm an ex-CHRO, when I talk to new CHROs I ask, why did you want to be a CHRO? Is this just the next step for you?
It's interesting to hear the reasons and the motivations for why people want to step into these roles. And I think if you can harness that to become, I like your term, that ecosystem role, call it the chief architect who connects these things, the strategist,
I think there's a lot of meaning in there for CHROs and a lot of value for organizations if they leverage it in the right way. But it is a balanced role of understanding all those different elements. And in my view, the biggest thing is: can you deal with the complexity of how these things fit together, and can you simplify it for people so that they can do something with it?
I think it's very easy to become part of the stuckness where people don't know what to do. Can you help them wade through that and cut through it a little bit, to say, sure, we don't know everything, but for where we are now, the best decision we can make is Z. Let's go try that, learn from it, move on. It's going to be okay.
So it's kind of steering through the storm with a little bit of calmness, and maybe a touch of wisdom if you can.
David Rice: Absolutely. Well, Dieter, thank you so much for joining us on the podcast today. I really appreciate it.
Dieter Veldsman: Thanks so much, David. Really enjoyed the conversation.
David Rice: All right. Well, folks, until next time, be sure to head on over and sign up for the newsletter if you haven't already, and check out our AI Transformation Explorer. There's always a lot of cool stuff going on here at People Managing People. We've got events coming up, so be sure to keep an eye out for that.
And yeah, until next time, keep experimenting.
