AI didn’t replace your job—it replaced your value proposition. In this episode, we sit down with Taylor Blake, SVP of AI Labs at Degreed, to talk about the uncomfortable truth facing L&D teams: if your job is framed as delivering content, unblocking employees, or feeding answers in the flow of work, AI is already doing it better, faster, and without your calendar invite.
But where AI stops short is precisely where L&D's future begins. Taylor shares how his team at Degreed lives as "customer zero," using their own tools before shipping them to clients—which means they're embedded in the mess, not just pitching the promise. From readiness over responsiveness to the emotional toll of relentless efficiency, this conversation explores what it really means to build capability in a world where one employee now has the power—and pressure—of ten.
What You’ll Learn
- Why L&D must shift from delivering knowledge to building human capability
- The difference between just-in-time learning and real readiness—and why it matters more than ever
- How AI is raising the stakes of every role, not replacing them
- The unintended emotional side effects of hyper-efficiency
- Why thoughtful skeptics, not just AI optimists, need to be building the future
Key Takeaways
- Efficiency ≠ Impact: Compressing a 400-page book into bullet points saves time but kills transformation. Learning isn’t just information; it’s experience and emotional resonance.
- Readiness is a System, Not a Sprint: High-stakes decisions require deep context and judgment. Without intentional scaffolding—simulation, coaching, psychological safety—workers are being dropped into complexity unprepared.
- Content Isn’t King—Context Is: AI may generate surface answers, but lived experience, institutional memory, and unspoken nuance remain irreplaceable.
- We’re Designing Burnout by Default: When AI clears low-stakes work, what’s left is nonstop judgment. Without slack, reflection, or human guardrails, the emotional costs compound.
- Ethics Can’t Be an Afterthought: Real people live inside our experiments. AI tool builders must hold space for ambiguity, unintended consequences, and the weird stuff—like your kid preferring AI validation over yours.
Chapters
- 00:00 — AI Took Your Job Description
- 02:00 — Degreed as Customer Zero
- 05:00 — L&D’s Shrinking Value Prop
- 07:30 — Readiness vs. Just-in-Time
- 09:00 — The Cost of Efficiency
- 12:30 — Tools for Change, Not Just Content
- 15:30 — High-Stakes Work, Less Prep
- 18:00 — Depth, Context, and the Loss of Apprenticeship
- 20:00 — Humans Hold the Real Context
- 23:00 — Designing Burnout by Accident
- 25:00 — Experimenting Responsibly
- 28:00 — Building AI with Concern
Meet Our Guest

Taylor Blake is Senior Vice President of New Initiatives & AI Labs at Degreed, where he leads innovation efforts at the intersection of learning, talent development, and artificial intelligence to help organizations future-proof their workforces. With deep expertise in people strategy, digital transformation, and AI-driven product development, Taylor champions solutions that enhance skill growth, engagement, and performance in a rapidly evolving workplace. He is a recognized leader in shaping how companies harness data and intelligent systems to power personalized learning experiences and strategic talent outcomes.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Taylor on LinkedIn
- Check out Degreed
David Rice: AI stole your job description, not your job. You know, the thing you've been saying you do for the last decade. AI does it faster, cheaper, and in the moment of need. If you're in L&D and you've been framing your work as delivering training or producing content, or giving people answers in the flow of work, well, Taylor Blake, the Senior Vice President of AI Labs and New Initiatives at Degreed, has some bad news.
You're in trouble because AI can do all of that and it doesn't need your help. But here's what it can't do. It can't expand people's capabilities for higher stakes decisions. It can't prepare them for moments that haven't happened yet, or build readiness instead of responsiveness. It can't help humans navigate the strategic complexity that comes when one person can now make such a massive difference.
And that's the opportunity L&D is missing while some in the industry are busy defending content delivery. Taylor's team isn't just building AI tools for enterprises. They're using them internally as customer zero, which means they live inside the consequences of their choices. And what they're discovering is uncomfortable.
The gap between an AI demo and AI in practice is huge. The push for efficiency is flattening people's emotional capacity. And sometimes the knock-on effects are things you'd never anticipate, like Taylor's 10-year-old daughter seeking validation from AI instead of her parents. Today we're gonna cover why L&D needs to think more expansively or it's gonna get replaced.
The difference between just-in-time learning and readiness, and why both matter now. How AI is making everyone's work more strategic and higher stakes. The efficiency paradox, where we're all doing more with AI but feeling busier than ever. And why thoughtful, concerned people need to be building AI tools, not just optimists.
I'm David Rice. This is People Managing People. And if you've been defending your current value proposition instead of re-imagining it, this conversation's gonna be your wake up call. So let's get into it.
Hi Taylor, welcome! How are you?
Taylor Blake: Doing well, David. Thanks for having me.
David Rice: We had an opportunity to talk at the Gartner Conference.
It was really interesting to learn about all the things that you're doing with Degreed, and I kind of wanted to start with that idea of Degreed as like customer zero, right? You're experimenting not just for product development, but as a user of your own AI tools. I'm curious, what are some of the biggest benefits you all have realized from that, and how has it changed your approach over time?
Taylor Blake: Yeah, absolutely. The reason why we do these experiments, the reason why we have AI Labs, is because there's so much new technology and capability coming with AI that we really need a chance to make sense of it, and it's so hard. There are no specs that you can just read and easily understand how they translate to L&D use cases or to user experience.
You really have to try these things, so we try them internally. It's done a couple things. One, it's really helped us understand which things are and are not ready. AI itself is going through sort of, you know, its different hype phases, but for each AI capability and each use case, some might be ready, some might not be ready.
So the fact that we can try these things ourselves, it gives us conviction where things are ready, helps us understand that something's not quite ready yet. But also I think so many learning experiences come down to an individual's experience. So by making ourselves really customer zero and going through these things ourselves, I think it's actually raised the bar in terms of we wanna make sure these are really great experiences, and only once we sort of nailed something that we say, Hey, I'd use this myself, then we can sort of use that as the foundation.
And turn it into a capability that our enterprise clients can use.
David Rice: What I love about this model is, and you tell me, but I feel like it would kind of force accountability, right? Because you're not just selling the idea to the market and then leaving it at that; you're kind of living inside of the consequences of the choices that you all are making, right?
Like I think that's pretty rare. And I imagine that it exposes sort of the tensions that you wouldn't see otherwise, like what people say they want from AI versus what they actually use when it's in their workflow.
Taylor Blake: Yeah, absolutely. And there's so many different layers. I mean, the difference between an AI demo and AI in practice can be a huge gap, and you just don't really know until you get hands on.
You have to see and feel those problems and see what issues it creates and figure out kind of what, how you're gonna mitigate them or things like that. So yeah, it's certainly, we do feel that accountability, I mean, by using these things extensively ourselves first, we experience all those problems, but at the same time, we also recognize we are not necessarily our customers.
Even once we can get something working for us, it's only a foundation. It's not the end step. There's still a lot of steps and process to make sure that these capabilities are ready to go for large enterprise customers.
David Rice: Large or small, it's like L&D is changing, right? And when we were talking at the conference, you know, we were talking about kind of like AI is kind of stealing L&D's lane a little bit, right?
It's faster at unblocking people in the moment than L&D teams can really be, and that's a huge shift. How does L&D sort of stay relevant in a world where unblocking someone isn't really the value prop anymore?
Taylor Blake: Well, I think it's a good challenge for L&D 'cause we have to appreciate if we frame our job as delivering training or producing content, or giving people answers in the flow of work, then we're in trouble.
'cause AI can do a lot of those things. Yes, it needs help and it needs guidance and it needs guardrails. But AI can do a lot of that. But I think L&D has an opportunity to think more expansively. So, not every performance problem is a training problem, but a lot of performance problems do come back to people.
So I think L&D needs to take this more expansive view of, we're here to help people change, expand their capabilities, you know, perform better at work. If we're just focused on training and content delivery, yeah, it's getting eaten up. But if we take a more expansive view, then there are so many opportunities, especially because what we see with our clients is that technology and opportunity are no longer the constraints; the real constraint for these enterprises, for the strategic moves they're trying to make, comes down to people.
It comes down to people: how quickly they can change, their appetite for change, how quickly you can get people aligned and pointed in the right direction. There are so many human centered challenges, and I think L&D just needs to say, yeah, we can help. We can help. We need to expand our toolkit in terms of how we're gonna help.
It's not just content, it's not just traditional courses or training.
David Rice: Yeah, I think that's a powerful point, right? 'cause there's so much opportunity here. When we think about connecting with somebody at the moment of need when it's really bothering them or they're really trying to fix something or whatever it is, that ability to be able to not just deliver content, but L&D knows better than anybody else, right?
People learn differently. And so to be able to like tweak that and customize and think about it, we think about it as an opportunity to really reshape sort of how we deliver whatever it is that we're, you know, whether it's training or micro learning or whatever.
I think it is just an endless possibility there.
Taylor Blake: Yeah. And it's not just, I mean, there's so much opportunity in the moment of need, and there's opportunity before the moment of need, right? In terms of readiness. I think we've given it sort of a bad name; sometimes it gets characterized as just-in-time and everything else is just-in-case, which is like it's an insurance policy and sort of a waste of time.
There's been so much emphasis on just-in-time because it's so efficient, it's so aligned, it's so relevant. But I think we need to sort of recapture this idea of readiness. You can't train and enable and get someone ready in that moment; there's stuff that has to happen before that moment of need.
So how can we prepare people? Especially, and I'm sure we'll touch on it, AI is making our jobs more strategic. We're having to make higher stakes decisions, right? We are now all at higher points of leverage because we have so many tools and capabilities. So one person can make such a big difference.
But how do we prepare people for that moment? So preparation, readiness, I think those things are key now, as you know, just as important now as ever.
David Rice: You mentioned a word there that's on our minds quite a bit these days, and it's efficient, right? We talk about it all the time, and I think there's a shift away from trying to make learning more efficient toward helping people sort of adapt through change.
I'm curious, what kind of infrastructure does that actually require inside a company? What does it demand of the people that are doing that work?
Taylor Blake: Well, this push towards efficiency has been interesting. You know, when I think about, I'll use this example of take a book. I think if you asked most people like, Hey, has there been a book that really changed how you thought about something?
That really changed, you know, that's been sort of transformational. And I think a lot of us could say, yes, there has been a book like that. But whether that book was 400 pages or not, if you reduced that down to a summary, you know, if you reduced that down to a few sentences, if you made the consumption of that information as efficient as possible, it wouldn't have been as impactful, it wouldn't have been as transformational.
So I think we're really seeing that, trying to just distill everything down into its smallest component and transactionally give information to people. It's not enough. Change is not just an information problem, it's not just an instruction problem. You really need to give people space and time, and you talked about what things are going to enable it.
Part of it is I think we need dialogue. I think we need tools to sort of enable everyone inside an organization, not just to receive information, but to be able to participate in change, share what they're thinking, what they're feeling, what challenges they're seeing, and give the organization the chance to get this input, get this feedback, and adapt.
Like I think that's how change is gonna happen. It's not just gonna be a memo created by an executive, it's not just gonna be turning on Microsoft Copilot. And AI is giving us more tools to allow this, right, to encourage more open dialogue and to sort of make change a two-way street where people are participating in it.
David Rice: That's interesting, right? 'cause like so much of corporate learning over the years, it's sort of been built around trying to make content cheaper, faster, easier to consume. But I think to your point, like helping people change, helping them adapt, that's an emotional journey. It's a little bit different. It requires some reflection and feedback and context and all these things that are like kind of uniquely human and complicated.
And it makes me wonder if we've, in some ways, like, I don't know, maybe I'm thinking about it wrong, but did we historically build the wrong sort of scaffolding for the kind of learning that we actually need now?
Taylor Blake: Yeah. I mean, I think it's an example of constraints changing and maybe the constraint used to be information, like the information just wasn't there.
So getting that information out to people. But now, gosh, it is easy to create and produce and push out information. So now we've discovered the next constraint, which is information's not enough. Change is a journey. It requires understanding how people are thinking and feeling about it and how we engage people through that process. Like you said, I do think it's a whole different set of tools and interventions.
You know, I think AI is such an interesting use case right now 'cause it just highlights this so well, because every organization is thinking about AI in some type of way. People have real fears, concerns. Some people are excited. Some people are terrified. I think AI as sort of a change initiative is just such a great example of how people are bringing all sorts of experiences and emotions and concerns, and we need new ways to engage around that.
David Rice: Now you mentioned there we need new tools and think about it differently, and I know you're building some new sort of AI powered toolkits for change and alignment, like voice interfaces. You've got coaching experiences, some real time guidance. I'm curious what kind of use cases are you seeing where this approach works better than the old playbook?
You know, 'cause these aren't just product decisions, these, they kind of feel like bets on what kind of humans we're trying to shape at work.
Taylor Blake: Yeah, and I'll go back to AI transformation. It's an example that we hear all the time, and it's an interesting one because it's happening so fast and the AI landscape is changing so quickly that the old change playbook of, we've got this sort of singular, discrete change that we need to manage people through,
so we're gonna create a change plan and then implement it, and then sort of we're done, like that goes out the window. This now needs to be a system, right? It's not like the change initiative is ever done. You just need a system that can manage and adapt and help people align and adapt to change.
And I do think, you know, it's going to be things like, and it's this balance. 'cause you know, we're talking about getting people comfortable with AI. And yes, AI gives us new tools, but we need to also find ways to bring people to the forefront of these interactions. But it's definitely ways of encouraging more conversation, more dialogue.
Giving organizations more visibility into where are people at, what are the challenges, what are the concerns? And then maybe the interventions are coaching that might be manager supported, that might be an AI coach. We just need some flexibility 'cause there's gonna be a real range of different needs.
But I think so much of it comes back to, we need that system to be able to sense and adapt and have conversations with people so that we know where people are at. You know, you can't just make assumptions and sort of have top-down expectations of change kind of trickling down through the organization. You need to bring people along and engage them along the way.
David Rice: Yeah. It's interesting, right? It's not just product innovation, it's a set of assumptions about human behavior in a way. Like you gotta be countering a lot of questions, I imagine. Like what kind of friction is worth solving? Or you know, what kind of autonomy do you want people to have? Because these are now getting into like work design decisions, right?
They're like how should people engage with each other, with their work, with their own growth? These are all kind of questions that these tools are having an effect on.
Taylor Blake: Yeah, I mean, there's a lot of questions and honestly, that's a lot of why we believe in experiments. We believe in sort of testing small concepts and just being able to make sense quickly.
I mean, something that might feel like a good solution now, nine months from now might need a different approach. A lot of it is about sort of these meta skills and capabilities of agility, and the ability to quickly understand and respond versus, you know, just having the solution, the package, the program, 'cause things are just changing so quickly now.
David Rice: When we were speaking before, you mentioned that, you know, sort of AI's sort of clearing out low stakes work. And what's left is really the judgment piece, what you were mentioning earlier, strategy, sort of working, doing these things at a different speed.
And that sounds great, but it also means that people are being dropped into high stakes calls much faster with less time to develop. How are you thinking about readiness in that kind of environment?
Taylor Blake: Yeah, absolutely. So, you know, I think about, I was listening to a Freakonomics podcast about air traffic controllers.
And I hadn't appreciated how complicated that job is, how they're managing all these different variables and having to make all these different optimizations and calls, and it's just this really stressful, demanding job. It's an example of, you know, technology took care of all the easy optimizations, so what's left is the complicated, high-stakes, critical judgment pieces.
That can absolutely just be demanding, overwhelming. And so you think about, if you use this air traffic controller example, one, they do a lot of training. They do a lot in terms of readiness, the psychological side as well as the technical side. They have mandated breaks. They understand the sort of cognitive complexity that the role requires.
Another example: Nikki Helmer, who's our chief product officer, gives this one. She says, you know, European cities or large cities are very walkable, right? You'd walk to the market, you'd walk to public transportation. And now, for those of us who live in the suburbs, enabled by cars, we just don't have that organic reason to go outside our house and walk.
Like there's nothing really within walking distance for a lot of us. I can't walk to work. I can't walk to the grocery store. So we've had to adapt by creating, you know, I'm gonna go to the gym every day, I'm gonna put a treadmill in my house. Like we have to create reasons for us to get up, to get outside, to get that exercise.
It used to be organic. Now we're having to think about it deliberately. I think the same thing's happening with the types of demanding work that we're increasingly gonna face.
David Rice: I think the same thing's true with how we sort of develop our understanding. We talk all the time about like the loss of entry level work or some of those lower level tasks being automated.
Well, those lower level tasks a lot of times provided us sort of the context or the background knowledge of the mechanics of what we were doing. And I think we are gonna have to be more intentional about how we force ourselves to build those muscles. And it is gonna be a much different way.
Taylor Blake: Absolutely. And this might be a bad example, but I'm gonna throw it out there. We've been having furnace problems at our house, right, so we call up the HVAC people, and you can tell the people who have the superficial training, who know there are like 10 types of fixes that they're looking for. And if something doesn't fit one of those 10 things, I mean, you just gotta get a new one.
Sorry, you gotta get a new furnace. Then you find the old school guy who works for himself, you know, who's been doing it forever, and he knows how every piece works, and he can just say, no, you just gotta clean this out, fix this up, you're gonna be good. They just have such a good understanding of the whole system.
David Rice: Yeah.
Taylor Blake: And yeah, we're losing that. So it's gonna have to be more intentional. We're gonna have to create these opportunities and, I'm not sure, it's just a matter of time until we and organizations start feeling that pain of, our people superficially kind of know what's going on, but they don't really understand the intuition behind all these pieces.
'cause they didn't have to kind of learn the hard way. So we're gonna definitely have to create some of those opportunities.
David Rice: It's funny, you know, I used to walk around a Home Depot or a Lowe's and I'd always think to myself, as a writer, there's an analogy for everything in here for storytelling. I'm always thinking about, like, you know, what it is that you use this stuff for.
It's funny 'cause you're saying that, and I'm like, we're all looking for Gene. You know, the old guy in the plumbing section, like the young kid, he can tell you like where to find the wrench or that type of pipe maybe. But Gene will tell you all about how to put this together and what type of valves you need.
And it's like, he's just a wealth of information. You know, that's kind of what the whole world's turning into, or like what all our jobs are turning into, in a way. I think there's some research to show, too, that senior level employees that have a lot of depth of knowledge and organizational context, their value is increasing exponentially.
Because if you want AI to do complex tasks, you've gotta feed it a ton of context. That's why it's really interesting that you mentioned that.
Taylor Blake: Lemme just say, you know, we talked about context for AI so much and you know, oh, this AI has a million token context window and then you know, we're gonna give it these documents so that it has context.
You compare that to people and the amount of context, take that senior employee, if we're comparing context to context: your people have so much context across years of experience, so many things that are undocumented, invisible. And I think we really need to be champions for human expertise and human capability.
And so much of that is context that people bring to situations that might be invisible. So I think we just need better ways of championing that, representing that and valuing it, because I think it's immensely valuable.
David Rice: I agree. I mean, it's the value of lived experience, right? And yeah, maybe it's 'cause I'm getting older, but you know, I think we don't value it enough sometimes, because it is the greatest teacher. And, you know,
AI, for all the things that it can do, can't have a lived experience. And so it doesn't have, like you said, the silent things, you know, the way that the CEO reacted to something or the facial expression that somebody made when that was proposed at a town hall or whatever. It can't absorb all that and understand it the way a human brain can, so it's just infinitely more valuable.
Think about this too: we used to have a blend of low pressure and high pressure work. You mentioned balance earlier, you were talking about that, and I kind of thought about this, but now, with AI stripping out some of that easy stuff, there's no built-in sort of pause. I mean, we all complain about context switching, but sometimes it can be a good thing.
Right. I'm curious, do you think that we're accidentally designing burnout? How do we counter that?
Taylor Blake: Well, I think one, just appreciating that work is changing. There's some political show on Netflix, I forget which one, but it's talking about how when decisions reach the executive levels, you're deciding between two things.
It's not 80-20; there's not like, this is obviously the right answer and this is the wrong answer. It's like 51%, 49%, because if something was as clear cut as 80-20, it's gonna get solved lower in the organization, lower in the system. So I think increasingly, you know, you talk about high pressure, it's not just high pressure, but it's also
strategically complex things where there's not a clear cut answer, right? Clear cut answers increasingly are just gonna be made automatically, you know, which means it's not just about instructing people how to make decisions, but it's everything that comes after the decision as well. Because you're making these complex decisions.
There's not a clear right and wrong answer, and you're going to have to be able to explain it. You're gonna have to be able to justify it. You're gonna have to be able to handle the angst when people who are affected negatively by a decision are saying, Hey, why'd you do that? You know, I'm not a big fan of this decision.
And yeah, so I think one, just appreciating there is a lot of pressure that comes to these types of things. Now, how do we mitigate it? There's a couple things. You know, we talked about readiness, we talked about preparation, new tools for simulations and coaching to help get people ready for those things.
But also there might be this increase in sort of team models, and they actually do this with air traffic controllers as well. There's this team dynamic where you are sharing responsibility so you can support each other. Anyway, we might have more of these sort of team support type models so that, you know, it's not just all falling on one person.
'Cause that's just a recipe where all that pressure is just gonna lead to burnout, and no one wants that.
David Rice: We've all gotta kind of own building some slack into our systems, right? Like you gotta know when the pressures become too high, particularly, I think, in leadership positions, as we think about leadership development and how to lead through these transitions.
We don't wanna flatten everybody's emotional capacity. So how are you going to sort of reckon with what happens when more and more of their work becomes this high-stakes, you know, high-pressure decision making? That's something that's gonna be a continuous challenge. But I think we've been having a much healthier discussion about things like work-life balance and mental health and all these things in recent years.
And I think it's time to leverage what we've learned from those conversations in this moment.
Taylor Blake: Yeah, and I think we're gonna see the flip side. There's been such a push towards efficiency, you know, towards doing more with less. But you know, on the surface, AI can handle a lot. But behind the scenes, and I think this is people's lived experience as well, it's like, yeah, AI is doing so much more for me, and yet I feel busier than ever.
Just the surface area of exceptions and decisions and figuring things out is growing and growing, and I think with this push towards efficiency, towards optimization, we're certainly gonna see repercussions. We're gonna have to find the right balance, because you can't just optimize everything, or presume to optimize everything, and expect the system to be healthy.
I mean, there are just so many invisible variables that, because they're not visible, the optimization doesn't take 'em into account, and then eventually they break and then you realize the system needs to be fixed.
David Rice: Absolutely.
You're in a space where, you know, failing fast is encouraged a lot. But when you're experimenting in real orgs, real people are affected, so how do you sort of hold that space for both ambition and responsibility? Because I think in some cases we get very ambitious and we get a little reckless.
Right? So has there been a moment recently where that tension showed up for you? I'm just curious.
Taylor Blake: Yeah, and I mean, one of the reasons why we do experiments is 'cause we don't want to experiment inside of our client organizations with real people who are just trying to do their jobs, right? We're trying to pull out the things that we're testing and do it in smaller, more confined ways.
So we're trying to experiment and actually reduce risk. But, you know, your point about the tension between the world moving fast and ambition and the repercussions for all this stuff, you know, this isn't necessarily a work example, but the one you mentioned is the one that stood out to me.
This is one just of sort of AI use in general. It has to do with my kids. My oldest daughter, she is 10. This past summer, she was building a butterfly garden in our backyard and she did all this research about which plants and how to create a space to try and get butterflies to come. And it was very cute and sort of a great project for her.
And she did all this research, and I let her use AI to show it her butterfly garden and sort of get its advice on how to take care of the different plants. But then she came to me and said, Hey, can I use your phone? I wanna talk to AI again. I said, well, what's your question? Why do you need to talk to it? She says, I don't have a specific question, it's just, it's so nice to me.
It says so many nice things to me that I just want to talk to it. And for me, it was just alarm bells of, oh man, what sort of relationship is happening here that I did not anticipate? In terms of, you know, my daughter, she's not looking for my validation, she's not looking for my wife's validation.
She is now so hung up on getting AI's validation and hoping that it's gonna say nice things to her. It's this reminder that even these things that have seemingly simple use cases, there are so many knock-on effects and consequences and things that are hard to anticipate. And one of the hard things that I'm having to get better at is with my team.
You know, we're trying to come up with new ideas, big ambitious ideas, and we're excited about them. But then it has to go through so many rounds of, well, did you think about this? And, oh, why are you doing it this way? And it can feel like everyone's trying to sort of poke holes in it, but it's actually just this refining process: these things have to be evaluated critically from so many different directions before it's something that we're confident enough in to put out there and release.
So that process, from the initial idea that we're so excited about to really beating it up so that we're confident to put it out there, it's an interesting journey for sure.
David Rice: I love that you're building that in, because it's like an ethical guardrail to have that conversation. And we've got to ask ourselves some really tough questions.
And I think that when we're building things with this, we can do it without being philosophical, but I think it's probably better if we are philosophical, right? Because at the end of the day, a lot of these things that we're building are changing people's days. They're changing the way that they interact with things that mean a lot to them.
And the example with your daughter is very fascinating. I mean, it's, I'm seeing more and more of that as well. I love that you're having those conversations internally and asking like, what is the thing that we're not thinking about? Because I think it's what we all need to start doing.
Taylor Blake: And for sure there are blind spots, and when people reveal your blind spots to you, you know, Hey, this thing that you built, yes, it's cool, but it had this negative sort of effect and consequence, this thing you didn't think about.
So you gotta anticipate there are gonna be those things. You gotta be open to 'em. We've gotta find ways to incorporate 'em into the design process. I'd also say I want the thoughtful, concerned people to be part of this process. Like, if the only people building AI tools are the people who assume that, you know, AI has no negative consequences, it's gonna be a disaster.
We need the thoughtful, concerned people who are equally or more sort of thinking about the potential downsides of this, not just ignoring this technology, but participating in the process and hopefully helping us create things that are thoughtful and responsible.
David Rice: Well, Taylor, thank you for coming on today. I really enjoyed having you.
Taylor Blake: Yeah, well thank you so much for the opportunity and this is helpful for me too. We talked about reflection, I mean, this is a chance for me to reflect on some of the things I've been thinking about, so I really appreciate it.
David Rice: Well listeners, if you haven't done so already, go on over to peoplemanagingpeople.com/subscribe. Get signed up for the newsletter. You'll get all the latest updates on everything that we're doing, from events to podcasts, upcoming content series, you name it.
And until next time, test, but ask yourself the difficult questions.
