AI is everywhere—and yet, in most organizations, it’s nowhere. People are intrigued, but unsure. Leaders evangelize, but workflows stay the same. Curiosity, it turns out, is not a strategy. In this episode, we’re joined by Justin Angsuwat, Chief People Officer at Culture Amp, to unpack how they flipped the script. In just six weeks, they moved nearly 80% of their workforce from passively curious to actively confident in using AI—without top-down mandates or perfectionist paralysis.
We get into the nuts and bolts of the “Accelerate” program, the value of separating exploration from expectation, and why confidence—not usage metrics—is the better north star for early-stage AI adoption. Justin also shares the uncomfortable truths about integrating AI into daily work, especially for senior employees whose identities are tied to outdated workflows. If your team is hovering in AI limbo, this conversation is your blueprint for action.
What You’ll Learn
- Why confidence—not curiosity or usage—is the key metric for AI enablement
- How structured, low-stakes experimentation builds momentum
- The real barriers to integrating AI into daily workflows
- Why junior employees often adapt faster than senior leaders
- The difference between AI that looks smart and AI that adds value
- How implicit feedback loops make AI products more useful over time
Key Takeaways
- Confidence > Compliance: Empowering people to experiment without fear of failure unlocks more progress than top-down mandates. No one builds muscle memory by watching demos.
- Separate Exploration from Expectation: By decoupling learning from deliverables, Culture Amp created a safe space for real experimentation. No performance pressure, just play.
- Probabilistic > Deterministic Tasks: AI isn’t just a faster spreadsheet. The real value lives in interpreting ambiguity—summarizing feedback, surfacing emotional patterns, simulating scenarios.
- Don’t Wait for Use Cases: Most breakthroughs happen during experimentation, not before it. Start anywhere and let relevance emerge.
- Unlearning Is Harder Than Learning: Senior leaders often struggle more than juniors because efficiency threatens their legacy workflows.
- Context Is King: Generic AI feels hollow. Real impact comes from layering in organizational context, behavioral data, and role-specific nuance.
Chapters
- 00:00 – From Curiosity to Confidence
- 02:00 – Launching the “Accelerate” Program
- 06:00 – The Aha Moment: Exploration Without Pressure
- 10:00 – Faster vs. Smarter: Why Efficiency Isn’t Enough
- 13:00 – Deterministic vs. Probabilistic Workflows
- 16:30 – Building the AI Coach: Context at Every Layer
- 20:00 – SaaS vs. AI Companies: Implicit Feedback & Delegation
- 25:00 – Why Junior Employees Adapt Faster
- 30:00 – Illusion vs. Impact in AI
- 33:00 – Mapping AI Maturity: Culture Amp’s Honest Take
Meet Our Guest

Justin Angsuwat is the Chief People Officer at Culture Amp, where he leads the People Experience and People Science teams to help organizations around the world improve employee engagement and workplace culture. With a rich background in human resources and people strategy, Justin has driven cultural transformation for high-growth companies and brings deep expertise in using data and analytics to optimize employee experience. Prior to Culture Amp, he served as Chief People Officer and Operating Principal at Blackbird Ventures, held executive HR leadership roles at Google and Thumbtack, and has been a trusted advisor on people and culture to numerous global organizations — all grounded in his passion for building inclusive, sustainable, and high-performing workplaces.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Check out this episode’s sponsor: Deel
- Connect with Justin on LinkedIn
- Check out Culture Amp
David Rice: So your team is curious about AI. They've watched the demos. They've read the articles. They think that it's interesting. And they're doing absolutely nothing with it, because curiosity without confidence is just procrastination with better optics.
My guest today is Justin Angsuwat. He is the Chief People Officer at Culture Amp, and he is gonna walk us through how his organization got nearly 80% of their employees confident using AI in just six weeks.
Not curious, confident. Here's what they did differently. They separated exploration from expectation. No pressure to deliver polished outcomes, no waiting for permission, just a structured path from understanding to experimenting, to embedding. But there are some uncomfortable parts that Justin's dealing with now.
Confidence doesn't automatically translate to integration. People still stare at the blank canvas of their daily workflows and freeze. And sometimes the senior employees who are supposed to bring AI into the organization are the ones struggling the most, because their identity is wrapped up in the workflow the AI just made obsolete.
So today we're gonna cover why they focused on employee confidence, not usage metrics or automation targets; the weekly exercises that drove momentum; how to move people from deterministic tasks to probabilistic ones, which is where the real impact lives; why junior employees can be more AI-native than the executives leading the transformation; and what just south of 80% confidence actually looks like in practice.
I'm David Rice. This is People Managing People. And if you are stuck between curiosity and actual implementation, this conversation is gonna show you exactly how one company bridged that gap. So let's get into it.
Justin, welcome to the show.
Justin Angsuwat: Thanks, David. Good to be here. Excited to chat.
David Rice: Good to have you.
It's good to follow up on our conversation in Orlando; I've been looking forward to this ever since. So before we dig too deep into Coach and all that you've built, walk me through how your teams sort of actually started building with AI. Like, what got them past the curiosity stage?
Justin Angsuwat: Well, if we start with AI in general with the organization, we started first of all with creating this aha moment.
And that in itself was a learning for us. We started with our product teams, then moved to other teams, which meant starting with a goal, which for us was employee confidence using AI. I know there are a lot of other goals you could use, like usage or automation, but for us we were pretty clear that it was about the humans behind the technology.
And so our goal was to improve employee confidence using AI, 'cause it's kind of hard to measure an aha moment. And so we kicked off this six-week program, which was designed to separate exploration from expectation. It was intentionally about learning, trying things, just having a crack, really. It wasn't about delivering some polished outcome, which really took the pressure off and made people feel much more engaged with what was pretty new technology at the time.
David Rice: What I've seen so far with the companies we've looked at or talked to is that curiosity is pretty easy, right? People wanna explore. But real movement usually needs some sort of structure or clarity, or ideally a shared win early on. And I'm curious what that transition looked like for your team, like you mentioned that aha moment.
Was there sort of a nudge or an early win that helped people move from "yeah, let's learn about this" or "this is interesting" to "here's what we're gonna do with it"?
Justin Angsuwat: Yeah. That was pretty critical as you just said, like we weren't so clear about what it is that we're gonna be able to do with it.
And that wasn't the aha moment. What was interesting was that people's aha moments all looked quite different from each other's, and so it was quite hard to standardize that. So what we did was the six-week program I mentioned before, which we called Accelerate, with "AI" in the middle of the word: AccelerAIte.
Because, you know, you can't have a program without AI in the name of it. We set phases which broadly helped people move along that journey, from understanding to experimenting to then embedding, or using it day-to-day. Although that last part is the hardest, and we're still trying to crack the code on that one. It wasn't perfect, but it really helped our employees feel supported rather than rushed or really exposed, like, everyone else is creating things and I don't know how to create something. But it was also a balance of not letting them get too comfortable with the status quo.
Like we kept moving from that understanding to experiment, to embedding. And we did that through a lot of content, a lot of learning sessions where people could go into the tools and actually go create things. Everything from a computer game to a video. And then we shared small wins to drive more momentum.
And so I'd launch these weekly exercises or competitions across the company, where it'd be something like create an AI-generated video for our employment brand, or that game Gandalf, where you try to convince Gandalf to give you the password, and see who could get to that last stage and what you learned.
And it was really fun to see, 'cause I remember one employee got so pumped about creating this agent in just a few minutes. They went home and told their partner, and their partner was like, "You did that?" And they were like, "I did that." And then their confidence lifted, and the whole team's confidence around them lifted.
'cause they were like, I can do this thing. And so that was what was really important in kind of driving this momentum throughout the six weeks is this kind of rinse and repeat of helping people go through understanding, experimenting, embedding, and in that embedding part how to make it more relevant to their day-to-day work.
And so I mentioned before, we have a lot to do on that front, but our starting point was at least trying to use the same tools that people were able to use in the workplace, whether it be Miro, Glean, Gemini, Copilot, whatever else. We partnered with them, invited them in. And at the end of the day, when we launched this program, we actually didn't wait or ask for permission.
We just kind of launched the program and kept moving it along. I recently shared something about that on LinkedIn, and what was interesting was someone summed it up with a really great comment that gets to your point, David. It was something along the lines of: I see a lot of companies jump to the latter first and say, you know, we want to see all the great ideas for improving our work with AI, but they haven't really started with equipping people with that kind of foundational confidence to get going. And so that's been our key focus at the start.
David Rice: Well, the other thing is helping people understand what is actually helpful. I'm curious, why do you think that worked so well? Because so many other approaches stall, right? People run into this sort of "well, I can do anything," and then it's the blank canvas thing, like you don't know where to start, or you just go, "I don't really need it."
Take me through getting people to ask the right questions, because it needs nuance and context, right? Helping people understand what to give the AI in order to get the most from it.
Justin Angsuwat: Yeah, that's been hard. I mean, that's gonna be our last stage, which is how do I use it in my day-to-day and kind of integrate it into my day-to-day workflows.
I think, well, we're still working on that. What made step one successful was that we were hyper-focused on the first part, which was: how do I make people really confident in using AI without even finding all the use cases yet? I think there was this mental barrier that a lot of folks had, which is, it's too hard.
And what was interesting was even as we went through this six week program, some people followed along and they would do the little prompts to go create their computer game, and then you'd go into a regular day-to-day workflow and then they were intimidated again. They were looking at that blank page.
So for some teams we actually launched like a small hackathon where they went through and did that. And in small teams even then they were like, well, I'm on a small team of three. Do I have to do the prompting? Can someone else do it? Like where do I start? But that was a much safer environment and in some ways it was kind of that nudge that they needed to be like, oh, okay, I guess I'm sitting in front of the screen.
Let me just try. And their aha moment was simply: it wasn't as hard as I thought it would be. So I don't think we've cracked it. Stage one is, how do we create that confidence, and we got there. We measured it after we ran this program, and confidence in using AI in day-to-day work was just south of 80% across our entire organization, with only about 4% negative.
We also measured this other question over time, "We explore and adopt new technologies like AI," and saw a 24-percentage-point improvement, to around 84%. So there's strong momentum and strong belief in using AI. To your point, the next part is cracking the code of how to integrate it into day-to-day usage.
And that's where you and I had that conversation in Orlando, about using it not just as a better calculator but as something quite different.
David Rice: I mean, there's so much noise right now, so everybody's overwhelmed by the choice of how am I gonna use it. And then oftentimes they end up underwhelmed by the results, because they don't really know the best information to feed it.
And I think what most people need is that thing to start with that makes the most sense, and then keep moving from there. And it sounds to me like the framework that you put in place gave people permission to at least stop waiting for the perfect use case, right? So, yeah, I think it's interesting that you were able to unlock some momentum that way.
Justin Angsuwat: Oh, I was gonna say, yeah, you're totally right. What we see out there on the internet is a lot of people doing some really cool things, and that can feel really intimidating. Like, well, I'm not using AI to do that, I'm still trying to learn how to prompt this thing. So you're right, there is a lot of noise, but that means there's a lot of good stuff out there.
But it can be really intimidating for people wanting to start, to go, well, I'm not even close to what that thing is, or, "that was creative, how did you think about using AI to go do that thing?" And I don't even know how to do any prompt engineering. So it was just trying to boil away some of the noise and say, just pay attention to this six-week program and we'll get you to a place of confidence, even if we can't point to exactly the things you should be using AI on right now.
David Rice: You mentioned there the calculator, like stop using it as just a better calculator and I love that sort of analogy. I'm curious what actually happens or you know, changes when people sort of make that mental shift. Like what was your experience when you had that moment?
Justin Angsuwat: I'm sure there's a lot of AI frameworks out there and maturity curves about how people think about this, and I know you and I talked about that.
I'll boil it down to the simple, like how I see it version, which is again, quite simplistic. I think that in organizations where I see AI transform the humans inside and the work that's being done, I see kind of these two levels of AI change. First level being, doing something faster or better. The second level being unlocking something new.
You can tell I made this up, because it's not catchy and it's not an alliteration. But I know we talked about the calculator, and revisiting that calculator conversation, I think about when spreadsheets first came to be. I wasn't there, but from everything I've understood about that time, it was basically a better calculator.
And this gets to that first phase of doing something faster or better, you know, improving what we already knew. Back then, accountants would use paper ledgers, so if they wanted to change one number in a budget or somewhere on that ledger, they had to manually erase and recalculate every single dependent variable, and it took hours.
With a spreadsheet, you could go balance the books in, you know, five, 10 minutes instead of five hours. So you've kind of got this efficiency, boom, do something faster or better. And I'm seeing a lot of people get that from their AI now. Like, how can I automate something that I've been doing over and over again?
But the aha moment, I think, comes from the unlock of what happens next. And if you think about the spreadsheet: once these calculations became instant, those accountants weren't just number crunchers anymore. They could now run scenarios, basically. So instead of just very quickly readjusting every number on this spreadsheet or ledger, the unlock was that they could model answers to questions. Now you could ask a question like, well, if we lowered our prices by 10% but increased our volume by 20%, what does that do to our gross margins?
So they stopped being these bookkeepers, which was very backwards-looking, just keeping records, and became scenario planners or strategic modelers or strategic finance people, looking forward. And they ended up solving a problem they didn't even know they had. And that's what changes, I think, when people make that mental shift.
It's getting to that first stage, which is, how can I make some of the things I'm currently doing faster or better? But the second phase is, what does this unlock that I've never been able to imagine before? And it's really hard to describe that until you've gone through that process.
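Justin's spreadsheet unlock can be made concrete with a quick what-if. The numbers below are invented purely to illustrate the scenario question he quotes; this is a sketch, not anything from the episode's tooling.

```python
# Illustrative only: "lower prices 10%, grow volume 20%, what happens to
# gross margin?" modeled as a tiny what-if calculation. All figures made up.

def gross_margin(price, volume, unit_cost):
    """Gross margin as a fraction of revenue."""
    revenue = price * volume
    cost_of_goods = unit_cost * volume
    return (revenue - cost_of_goods) / revenue

# Baseline: $100 price, 1,000 units, $60 unit cost -> 40% margin
base = gross_margin(price=100, volume=1000, unit_cost=60)

# Scenario: price down 10%, volume up 20%, same unit cost
scenario = gross_margin(price=90, volume=1200, unit_cost=60)

print(f"base margin: {base:.1%}, scenario margin: {scenario:.1%}")
# -> base margin: 40.0%, scenario margin: 33.3%
```

Note that the margin falls even though revenue actually rises (8% here): the price cut comes straight out of the margin while the cost per unit stays fixed. That forward-looking "model the answer" move is the unlock Justin is pointing at.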
David Rice: Yeah. Well it's interesting too 'cause like I think that's some of the reason why there's like so much hesitancy in people functions, right? It's 'cause when you have the aha moment, you're like, oh, I could do all these things. And then you're like immediately having a little bit of a nightmare about like the legal team getting in touch with you.
Yeah. So I wonder what it's gonna be for some people that helps them over that moment. Was there something for you, as you started playing with it, where you thought, well, I don't know about this?
Justin Angsuwat: Well, you raise a really good point about legal, and I know that we'll joke about that, but there is some caution that we have to use with AI, to your point, because it is easier to verify a spreadsheet than it is to rely on AI for a specific answer. And we talked a bit about this concept, I think we called it "impressive theater" during our conversation: illusion versus impact. And it got me thinking again about this, and it gets to your point around relying a lot on AI for some of these answers and legal coming knocking at your door.
And I think a good example of that impressive theater is this notion of probabilistic versus deterministic tasks and what we do with them. So for those that kind of don't know, I'll summarize it using my very simple brain. Deterministic tasks usually have one correct answer, like that mathematical calculation, or what is legal and not legal, even though that can still have some gray areas. But probabilistic tasks have, you know, many good answers.
Like how to draft an email or summarize a meeting; you can do it a lot of different ways. Using AI solely for deterministic tasks can be good, like "tell me my revenue in Q4," but in many ways you're using a probabilistic engine to do a deterministic problem. And so the challenge with that is it's slower.
It might have hallucinations; it might actually be less accurate than doing, like, a SQL query about your revenue, or doing it in Excel, or just going and looking it up. So it might look cool, but it doesn't really use AI to its full capacity, and it might make mistakes. And I think that's where we get into some of those conversations around legal, versus using AI for the messy probabilistic tasks.
Like, go read these 500 pieces of employee feedback and tell me the top three emotional frustrations they're feeling, not just the exact topics, but the emotional frustrations. You know, a spreadsheet or formula can't just go calculate emotional frustration. So to maximize impact, I'd focus AI more on handling the interpretation of those probabilistic tasks rather than on the deterministic tasks on their own.
And that gets to some of the conversations you were just talking about around involving legal. Sometimes we use AI too heavily on the deterministic tasks, and we're like, tell me the answer of what I should do in this situation, when there are only one or two correct answers. And that's where I think we can get into a little bit of trouble, so I'm a little bit wary whenever we use it in those situations.
I just kind of double- and triple-check answers before we go with what it says.
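As a rough sketch of the split Justin describes: route the deterministic question to plain computation, and reserve the model call for the interpretive work. The data, function names, and the `llm` placeholder below are all invented for illustration; nothing here is Culture Amp's actual tooling.

```python
# Illustrative routing of deterministic vs probabilistic tasks.

transactions = [
    {"quarter": "Q4", "amount": 1200},
    {"quarter": "Q3", "amount": 900},
    {"quarter": "Q4", "amount": 800},
]

def q4_revenue(rows):
    """Deterministic: one correct answer, cheap to compute and verify.
    A plain sum (or SQL query) beats asking a probabilistic model."""
    return sum(r["amount"] for r in rows if r["quarter"] == "Q4")

def top_frustrations(feedback, llm=None):
    """Probabilistic: many good answers, so an LLM is the right tool.
    `llm` is a stand-in for whatever model client you actually use."""
    if llm is None:
        # Crude offline stand-in so the sketch runs without a model:
        # pretend the longest comments carry the most frustration.
        return sorted(feedback, key=len, reverse=True)[:3]
    return llm(f"Summarize the top three emotional frustrations in: {feedback}")

print(q4_revenue(transactions))  # -> 2000
```

The point of the sketch is the routing, not the implementation: the revenue question has one verifiable answer, so hallucination risk buys you nothing there, while the feedback question genuinely benefits from interpretation.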
David Rice: That's good advice. I can't underline that one enough.
Growing a global team shouldn't mean juggling five different systems for HR, payroll, and IT. Deel brings it all together so you can hire, onboard, pay, and equip people anywhere without the usual chaos.
Whether you're bringing on employees in 10 countries, or managing contractors across time zones, Deel handles compliance, benefits, and payroll in one place, which means less tool sprawl, fewer headaches, and more time focusing on your people. Ready to see what hiring without borders actually looks like?
Visit deel.com/pmp. That's deel.com/pmp to book a demo. That's deel.com/pmp. Deel. Hire, manage, and pay anyone, anywhere.
Now you all have built an AI coach. Talk to us a little bit about, you know, what essentially your goals were with that.
Justin Angsuwat: The goal was to have kind of this always-on AI coach, like a people scientist in your pocket.
And we had a few conversations with folks in Orlando, where you were, where people were like, "I don't need a coach," or "coaches are really expensive." And so I'd say Coach is more like having a people scientist in your pocket that you can ask questions of, like, how do I have this difficult feedback conversation?
You know, say David has to give you really difficult feedback, and he's nervous because you will not receive that feedback very well. Like, talk me through that. And so in some ways it's helping our HR business partners, or the HR team, or the people team give advice to managers on demand across the entire organization.
But it's contextually aware, and I think that's the big difference here. Context makes all the difference, 'cause if you ask a generic AI model how to respond to a situation, it's gonna give you the average of the internet. It'll give you advice that's probably technically correct but not that useful to your specific situation. Kinda like reading a horoscope: it's vague enough to apply to everyone, so it really helps no one specifically. And I think about the layers of context that go into something like an AI coach that make it really helpful. There are really three layers.
You start with the foundational layer, which is how the coach is trained. So let's just say you're prompting the coach, David, and asking a question, and you haven't given it any more context. With this foundational layer, we're not training our coach on the average of the internet or Reddit threads or random blogs.
We've trained it on people science. We have about a billion and a half data points on all things people, culture, and performance right now, gathered over a decade across thousands of workplaces. And we have teams of people scientists, PhDs, and these are the people prompting and designing the coach.
And so the foundational layer is built on solid science in coaching and on organizational data points. Then you layer on top of that organizational context, so the coach knows the context of your company, and you can very quickly upload that. It could be anything from your vision, strategy, even your annual goals, your values, the behaviors that sit behind the values, even the types of feedback frameworks that you use.
You know, some organizations use Radical Candor, some use SBI, whatever else. And so all of a sudden this coach knows about your company. Then you layer on top of that this performance layer: things like engagement data, performance data, one-on-ones, anytime feedback, whatever it is. So it starts to know about your team; it starts to know about you.
And so when you put that context stack together, you don't get generic advice like "you should listen more." You get quite specific advice, like: based on your team's low engagement scores (that's the performance layer), and our company's new goal of radical candor and the fact that we're expanding into a new country as part of our strategy (that's the organizational layer), let me ask you some questions to understand the situation, which is what a coach does, and then use this specific script developed by people science.
Let's go role-play that. And that's really the foundational layer. So those three layers of context start to become really important, and that's what differentiates an AI coach from using an LLM or a generative AI tool to just give you advice.
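The three-layer context stack Justin walks through could be sketched, very loosely, as prompt assembly. The layer names follow his description, but the function, fields, and data below are hypothetical illustrations, not Culture Amp's actual schema or implementation.

```python
# Hypothetical sketch of a three-layer "context stack" for an AI coach.

def build_coach_prompt(question, org_context, performance_context):
    # Foundational layer: how the coach is trained/instructed (a system-style
    # instruction standing in for people-science grounding).
    foundational = "You are a coach grounded in people-science research."

    # Organizational layer: company-specific context.
    organizational = (
        f"Company values: {', '.join(org_context['values'])}. "
        f"Feedback framework: {org_context['framework']}."
    )

    # Performance layer: team- and person-level signals.
    performance = f"Team engagement score: {performance_context['engagement']}%."

    return "\n".join([foundational, organizational, performance,
                      f"Manager's question: {question}"])

prompt = build_coach_prompt(
    "How do I deliver difficult feedback to a defensive report?",
    org_context={"values": ["courage", "curiosity"], "framework": "SBI"},
    performance_context={"engagement": 62},
)
print(prompt)
```

Even in this toy form, the point survives: the same question produces different, more specific guidance once the organizational and performance layers are present, which is what separates a coach from "the average of the internet."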
David Rice: I think we've all had the experience of using a tool that technically works, but it doesn't know anything about you, so it just feels like, I mean, I don't know why I'm talking to this thing. You know what I mean?
Justin Angsuwat: You must see that a lot. Also, you're working with a lot of HR leaders, building a lot of things. You seeing that?
David Rice: Yeah. No, and I've built a few myself where I was like, what is this thing talking about? Just forget it. You end up just going, okay, you know, this is not helping.
When you talk about layering in the context, things like team dynamics, org history, individual strengths, I mean, that really flips the model of what I think a lot of us start with. So what are some of the most powerful signals, I guess, that make this AI advice feel really relevant?
Justin Angsuwat: Couple of things.
One, it is built on the people science and knows your organizational context. And what that means is that as a manager interacting with an AI coach, you're not having to explain the context every single time. I dunno if you've gone through this journey or if it's just me, but after a while you're like, oh wow, that's a lot of context I've gotta give this coach every time.
Like, you know, do it in this tone, think about this, or here's this person I've spoken to. It'd be wonderful if the AI coach were like, you know what, you as a manager have had this conversation several times with this individual; what's not landing? Let's go through this again.
So one thing that's quite powerful is that the context is already there. But the other thing, and we talked a bit about this, about the difference between a SaaS and an AI company, is that the coach keeps learning. It looks at the gap between what's useful and not useful, and that continues to train the coach to become even more powerful. So happy to talk about that, but I think that's a nuance that's not always seen, which is that the coach does become better from negative feedback to the coach.
David Rice: I was listening to someone talk recently and they said that they were talking about they wanted their company to operate like an AI company and not just a SaaS company. I'm curious though, like when you hear that, like what does that mean when you're in the office on a Tuesday?
Justin Angsuwat: What I do know is that operating like an AI company doesn't mean buying a bunch of Copilot licenses.
You know, it's less about the software you buy and more about how you build and work. That shows up in many ways, but I'll pick one way that's been fascinating to me, based on the topic we just talked about with the AI coach. And that's where I think there's a big shift: moving from explicit to implicit feedback. I'll explain what that means.
I think in the SaaS world, we usually rely on users explicitly telling us things. We see this with, you know, "rate this feature one to five stars," or that "was this article helpful?" question I'm sure we've all seen. Yes or no. And so in that SaaS world, by relying on users explicitly telling us things, the problem is that humans rarely click on those buttons unless they're very happy or very unhappy.
Well, maybe that's just me. But we don't see humans interact much with those types of questions, and so there's a lot of data left on the table when we're looking for that explicit feedback: tell me if this is good or not. Versus operating with an AI mindset, which means obsessing over implicit feedback.
So take something as simple as editing a draft, like deleting some text. When AI drafts something like an email, or maybe even does a meeting summary, and the user goes in and deletes a sentence, through an explicit-feedback lens it might look like a fail, like we got this wrong, they didn't like it. Or, best-case scenario, it just looks like an edit.
But operating like an AI company means I see implicit feedback, I see a signal in that. And the signal is: the user's telling the model that its prediction for this context was wrong. And that gap between what the AI originally wrote and what the human was finally happy with is such valuable training data. As an AI company, knowing what is wrong is more useful than knowing what is right.
So I'm gonna oversimplify it: SaaS companies really want your attention. They want you on and using the platform, and they rely on that explicit feedback. Did you like using the platform? Did you like this feature? Tell me how to make it better. AI companies don't want your attention.
They want your delegation. They wanna be so good that you just sign off. And so they use this kind of negative space to understand where things are missing or where they need to retrain the model. They use that as a signal, and they celebrate when they find those, because over time the AI is just gonna know you much, much better.
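The implicit-feedback idea, treating the gap between the AI's draft and the human's final edit as a training signal, can be sketched with a simple diff. The record shape and example text are invented for illustration; no claim is made about how any real product mines these signals.

```python
# Illustrative: diff the AI draft against the human's final version and
# log every non-matching span as implicit feedback.
import difflib

def edit_signals(ai_draft, human_final):
    a, b = ai_draft.split(), human_final.split()
    matcher = difflib.SequenceMatcher(None, a, b)
    signals = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # every non-match is a signal, not a failure
            signals.append({
                "op": op,
                "model_wrote": " ".join(a[i1:i2]),
                "human_kept": " ".join(b[j1:j2]),
            })
    return signals

draft = "The meeting covered budget and we are excited about next steps"
final = "The meeting covered budget and headcount"
for s in edit_signals(draft, final):
    print(s["op"], "| model:", s["model_wrote"], "| human:", s["human_kept"])
```

Here the deleted pleasantry and the substituted word are exactly the "negative space" Justin describes: no star rating was ever clicked, but the edit itself says what the model got wrong for this context.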
David Rice: That's fascinating. I mean, the reason I pivot to that question is 'cause I think it's a cool ambition, but a lot of people get stuck on the part where you have to turn it into operational behavior. And I think tools like Coach are something that can help you do that, right?
When I think of the difference between the two types of companies, I think it's a little more experimentation on the AI side, more real-time decisions, less polish, more speed. I don't know, maybe I'm wrong here, but when you strip out the hype, what actually changes for teams when they start to live that idea?
Justin Angsuwat: Well, for me, that was only one piece of it, the how-you-think-about implicit versus explicit feedback. It's also just how you develop features. You go from the very traditional waterfall way of software development to one that can move much more quickly.
For some of our software engineers and product managers, the aha moment was actually when they started building and realized, wow, I built something in a week that could have taken months to build, so let's just keep going. Or even just your first time-to-prototype is much, much shorter.
And so go prototype it. Go build it very quickly and see what a working prototype looks like and go get that in the hands of some of our testers and users. To see what that's gonna look like rather than, again the traditional like, go scope out the project, go build in the timelines, go do all of these things.
And so that, that's been a big shift for us in terms of how we even do software development. But I'm curious, as you work with a lot of AI companies as well, is that what you see?
David Rice: Yeah, I think so. You said something to me previously that really hit home, right? You made this point about illusion versus impact, that just because a workflow is technically sophisticated doesn't mean it's necessary.
And when I think about that, I realize I've been stuck in that. There's a lot of AI happening right now. It looks slick, it's cool, but does it really solve a problem that needs solving? I think there are also some things in the workplace where it's okay that it's a little bit complicated.
It's called being at work. You should have to do a little bit of work at some point in the middle of it. Just that idea of thinking about where AI genuinely adds value versus where it's impressive theater. Right.
Justin Angsuwat: Yeah, and I think, I mean, this loops back to part of our conversation before. I do think it's that, I mean, it depends on each organization as to what is kind of impressive theater versus not.
But I think one thing that's really easy to do is, again, take those deterministic tasks where there's one correct answer and use AI to go search for the answer. I think that's helpful, but you're not really using AI to its full advantage. It's basically a faster spreadsheet.
It's, as we know, it's not always that great at spreadsheets anyway. And so instead it's like how do you tackle the, I think where the impressive theater comes from was the kind of the two phases of AI maturity that we talked about before, which is like how do you make things faster and how do you unlock something you never knew existed?
But I think there's a certain limit on making things faster. You can only make something so much faster, and then you hit diminishing returns. I think the theater comes from, let's put another front end on this, let's add more AI to this task that's relatively not that painful and not that business critical. And it looks nice, it's cool.
You get to type into a chat box and it comes back with an answer. And don't get me wrong, a bunch of those will be incredibly useful, but there is a point of diminishing returns where you see people put more AI front ends on other AI front ends, versus starting to tackle the things AI can unlock, where it's very good at probabilistic tasks.
Which we've never really used a lot of technology for in the past. So what can we unlock that we haven't even thought about yet that's gonna help us going forward? I think about chatting to some folks in internal communications, where a version of that is using an AI tool to write your scripts for internal communications and channel presentations.
What's really cool is watching these teams leverage AI for probabilistic tasks: go and pull every single all-hands that CEO has done across the company, analyze it, and give advice as if you're an internal comms expert. And I remember once hearing the advice come out of Claude, at the time, saying to one CEO:
You know what? When you deliver really good news, you lead with the good news, but when you deliver bad news, you lead with the values of the company first. And the leader's like, whoa, I had no idea I did that. Everyone's watching this leader, so if they lead with values, people go, uh-oh, bad news is coming.
But being able to do those types of things, the things that were unimaginable before. It would take a comms person remembering all of those presentations, and it's a nuance that never would've been picked up. So again, I think impressive theater would be continuing to iterate on a better engine for writing speeches, which is a valuable thing, or searching my speeches to give me snippets, which is a very deterministic task.
Versus starting to use it for real impact, which is the probabilistic tasks. You know, tell me something I had no idea about before. Go analyze thousands of hours of presentations and give me a summary of what I do well and not well. And I think that's where the power's gonna be in AI.
David Rice: I mean, when you were talking about that, I was thinking, it's easy to confuse complexity with effectiveness, right? And I think that's gonna be one of the big leadership challenges we face moving forward: how do you keep people focused on value when the temptation to show off what AI can do is so strong?
And I don't know if that one's going anywhere right now in 2026. It's hard.
Justin Angsuwat: I mean it's, you know, it's solving for the business problem first. So what's the problem to solve? And then how do I build a tool to go solve that problem? But the hard part with AI is it can solve problems we've never known it to solve before.
And so it's really hard to just sit there going, I wish you could solve this problem. 'cause you don't even know that problem exists until you start prompting. I remember one CEO saying to his leadership team, you know what, like if you haven't prompted, this was very early in the AI journey, but if you haven't done a hundred prompts in AI, like your opinion doesn't really hold much weight to me right now when you're talking about AI.
'Cause you just don't know. That's why for us, we started with confidence. You just won't know the problems you can solve until you get in there and go, oh, that's the unlock, it can solve this problem I never even knew it could solve, or a problem I never even knew existed.
And that's gonna be a really important journey for people to go through.
David Rice: You're mapping AI maturity, not just for customers, but for your own organization. I'm curious what's catching you off guard about that, because I think a lot of companies are trying to assess this, and I don't know how it's going for a lot of folks.
Justin Angsuwat: We're still pretty early in our journey on this. You know, we're working hard at it, but it seems to be just, you know, you and I talked about this, it seems to be progressing in leaps and bounds. Every time I look back at it, there's just, there's something new, there's a new paper being published or whatnot.
So we're still very early in our journey, and one thing that's not entirely catching me off guard, but is probably in that realm, is something I'm seeing across a lot of organizations: the assumption that senior or high-performing folks, whatever you wanna call it, would be the first to jump on and master AI. That assumption hasn't always come to fruition, because I think unlearning how you've done things is actually pretty hard. It's sometimes harder than learning itself. If it's your first job, you kind of don't care how the work gets done, really. You've never done it any other way. You just want to get the answer.
So what's my shortest path from A to B? I just want to get to the answer. If you've been doing the same thing for 20 years, part of your identity is wrapped up in how you get to that answer. You know, you spent so much of your life kind of perfecting that workflow to get there, and AI can render that obsolete like pretty quickly.
So it's an interesting time where you can see junior employees or like people new to the workforce operating in a more advanced way with AI than senior employees. Senior employees are the ones responsible for bringing AI into the organization. But senior employees now also bring a different skill set to bear, which is, you know, take the engineering side.
Scaling and architecture become much more important for a senior engineer now than coding, as an example. And so I don't think we've really cracked the code on that culture shift yet. We're still in that messy middle of working it out. But that's something I'm seeing across a lot of organizations: how do we grapple with new, early-career employees who are very AI native, alongside more senior or high-performing employees who have a lot of validity in why they do things a certain way, but also need to shift to accommodate what AI can provide?
David Rice: Yeah, it is a challenge, right?
And I love that you're living it, 'cause it is humbling. We're going through it ourselves at times, you know what I mean? We think we're ahead of the curve, and then you're like, well, I'm not even sure what the curve is at this point.
Justin Angsuwat: I'm out ahead of the curve, but I'm the only person on this curve. Like, what happened? Where's the rest of the curve?
David Rice: So I love to hear like other stories of other folks who are going through it and living it. And yeah, I want to thank you for sharing that today. That was really helpful.
Justin Angsuwat: Thanks, David. Thanks for having me.
I really enjoyed our last conversation. I loved this conversation as well, and yeah, we're very much living it ourselves and trying to work it out one day at a time. We don't have a year-long plan on how we're gonna crack the code on this, so we're just working it out, and always happy to share whatever we learn.
The good, the bad, and the ugly.
David Rice: Absolutely. Just trying to be adaptable every day. Right. Well, again, thank you for coming on. It was great having you.
Justin Angsuwat: Thanks again, David. Cheers.
David Rice: Alright listeners, until next time. If you haven't done so already, make sure you're signed up for the newsletter, peoplemanagingpeople.com/subscribe. Also, check out our AI Transformation Explorer.
And until next time, keep living it. We'll all figure it out eventually.
