AI is transforming the workplace—but how do we separate real, practical use cases from overhyped trends that don’t deliver results? And what ethical risks should we watch out for?
Host David Rice talks with Jonathan Conradt—former Principal AI Scientist at Amazon and Management Board Advisor at Synerise—about AI’s real impact on HR and leadership. They explore how AI is shaping hiring, employee wellness, and decision-making, plus the crucial role of responsible AI. Jonathan also reveals why companies often mis-invest in AI—and how leaders can make smarter choices.
Interview Highlights
- Meet Jonathan Conradt [00:52]
- Jonathan has a long background in technology.
- Worked on the original Chrome team at Google, focusing on Mac and Linux versions.
- Managed marketing and integrated AI/ML at eBay.
- Spent 12 years at Amazon, where he noticed AI projects were often misjudged.
- Amazon’s leadership struggled to assess AI investments due to lack of understanding.
- Spent his last year at Amazon educating VPs and directors on AI/ML.
- Left Amazon to help businesses better understand AI opportunities and risks.
- AI in HR: Practical Applications [02:37]
- Generative AI is popular, but traditional machine learning has also advanced.
- One major advancement is automatic machine learning (AutoML).
- AutoML automates repetitive tasks in model creation.
- Open-source package AutoGluon makes AI accessible to those with basic SQL and Python skills.
- AutoML enables quick creation of high-quality models that previously took months.
- AI is useful in HR for tasks like employee classification, attrition prediction, and leadership training identification.
- Helps with numerical predictions, like hiring needs and workforce planning.
- Supports time series analysis for detecting seasonal trends.
- Can build a world-class model with just three lines of code.
- Ethical Considerations in AI [05:03]
- HR plays a crucial role in AI development, ensuring legal and ethical compliance.
- Scientists often lack expertise in HR-related laws and biases.
- AI models can unintentionally learn and reinforce historical biases (e.g., favoring candidates who played lacrosse).
- Responsible AI focuses on fairness, transparency, and ethical handling of data.
- HR professionals are well-equipped to guide AI fairness, especially in hiring and promotion models.
- AI bias in promotion predictions (e.g., favoring Ivy League graduates or males) must be mitigated.
- Regulations on AI ethics are emerging, with the EU leading and the U.S. considering legislation.
- HR can help scientists understand and implement responsible AI practices.
- AI models like ChatGPT do not directly use user inputs for training to avoid corrupting valuable data.
- They learn from user interactions, such as rephrased questions signaling dissatisfaction with a response.
- Bias can emerge if interactions reflect societal biases (e.g., gender, income, geographic preferences).
- A rumored example suggests OpenAI’s models adopted linguistic patterns resembling those from Kenyan contractors involved in reinforcement learning.
- Human feedback biases, like preferring longer responses, influenced model behavior.
- AI can unintentionally favor the speech patterns, topics, or preferences of dominant user groups.
- Biases in AI can emerge in unexpected ways, reinforcing patterns from the most engaged user demographics.
To do the right thing in terms of responsible AI, you need to remove the factors that shouldn’t be considered. Don’t let the model see or make inferences based on things you wouldn’t consider in an actual promotion.
Jonathan Conradt
- Guardrails and Executive Education [12:36]
- AI systems use guard models to filter out harmful or unethical queries before processing.
- Employees will find ways to use AI tools if companies don’t provide secure, sanctioned access.
- Uploading sensitive corporate data to public AI models poses serious security risks.
- Corporate AI accounts can offer controlled environments to mitigate risks.
- Executives, despite their expertise, may struggle to distinguish between secure and unsecured AI tools.
- Providing training and official AI tools can help prevent security breaches.
- AI adoption strategies vary:
- Early adopters: Invested heavily, faced high risks, and often struggled.
- Never evers: Skeptical and resistant, risking obsolescence.
- Fast followers: Learning from early adopters, adopting AI at lower costs, and likely to succeed.
- AI will reach a performance plateau, allowing fast followers to catch up with early adopters.
- Understanding AI Training & Adoption [20:03]
- Many people, including technical professionals, lack a deep understanding of machine learning.
- AI is fundamentally a mathematical function that takes input, processes it, and produces output.
- AI does not “think” or “remember” between uses—it exists only when called upon.
- The impressive capabilities of AI come from additional code and data organization.
- Training should be tailored to different roles within an organization (users, technicians, executives, etc.).
- A one-size-fits-all training approach won’t work.
- HR must determine appropriate AI training strategies for various employee groups.
- Workers, including non-technical staff, need AI education to reduce fear and enable adoption.
- Executives require different AI training than frontline workers.
- Employee Wellness and AI [23:20]
- Jonathan is working with researchers on AI-driven employee wellness measurement and improvement.
- AI is patient and can provide actionable feedback to employees.
- His past work with Gallup’s StrengthsFinder aimed to improve manager-employee relationships for overall well-being.
- Traditional assessments rely on forced-choice answers, which may not fully capture employee sentiment.
- The new approach allows employees to express themselves openly, adding valuable qualitative data.
- AI can convert text into mathematical vectors, enabling better analysis.
- A past experiment using AI to analyze research papers improved predictions by 14%.
- Applying similar AI techniques to employee satisfaction surveys could significantly enhance understanding.
- Future of AI in the Workplace [26:35]
- AI agents will likely be used by HR professionals to help manage complex tasks, gather information, and simplify processes.
- AI can act as an executive assistant, organizing tasks, reading emails, and providing suggestions, while leaving final decisions to humans.
- AIs are particularly good at information synthesis, making them effective for roles like vice presidents who need a broad view of the organization.
- AI-powered assistants (e.g., Microsoft’s co-pilots) will save time and improve business understanding.
- Companies can use AI to enhance productivity, but should avoid downsizing; instead, they should focus on increasing output with the same workforce.
- Businesses that adopt AI to improve efficiency will outpace competitors who reduce staff but maintain the same level of output.
The companies that choose to downsize and remove people while maintaining the same level of work with fewer employees will be crushed by the companies that keep the same number of people but dramatically increase the amount of work they can get done and the value they can deliver to customers.
Jonathan Conradt
Meet Our Guest
Jonathan Conradt is a seasoned expert in artificial intelligence and machine learning, currently serving as a Management Board Advisor at Synerise, a company specializing in AI and big data solutions. Before this role, he was a Principal AI and Machine Learning Scientist at Amazon, where he developed and taught curriculum for Amazon’s Machine Learning University. His extensive experience includes positions at eBay as Director of Internet and Email Engineering Product Management, at Google as an engineering product manager for the Chrome web browser, and at Microsoft as a product manager for a television over IP product. Jonathan holds eight patents related to machine learning and marketing and was one of the creators of the popular management tool StrengthsFinder. He is also recognized for his contributions to automated machine learning, particularly with tools like AutoGluon.

One of the great things about AI is that it’s very patient. It can be a valuable source of information because you can provide it with accurate and actionable data, and the AI can respond to people, offering them those kinds of insights.
Jonathan Conradt
Related Links:
- Join the People Managing People community forum
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Jonathan on LinkedIn
- Check out Synerise
Related Articles and Podcasts:
- About the People Managing People podcast
- AI’s Role in Talent, Trust, and the Future of Employee Experience
- Leveraging AI Pioneers To Transform Your Company’s L&D
- The Pace Of AI Advancement And What It Means For The Workplace
- Navigating Leadership In The Age Of AI
- Getting Ahead of AI: How HR Leaders are Adapting Their Tools & Systems
- How Gen Z Will Drag Companies Into the AI Age
Jonathan Conradt: [00:00:00] At its core, an AI is a function call. It's like in Excel: you have the SUM function, and you all use that, and you give the list of columns or rows that you want to add, and it takes that as an input and it produces an output. And AI, that is all it is.
David Rice: Welcome to the People Managing People podcast. We're on a mission to build a better world of work and to help you create happy, healthy and productive workplaces. I'm your host, David Rice.
My guest today is Jonathan Conradt. He recently left Amazon where he worked as a Principal AI scientist. We're going to be talking about AI, machine learning, what needs to happen in terms of education around these technologies that are being folded into the work that people are doing every day.
So Jon, welcome.
Jonathan Conradt: Thanks.
David Rice: So first, tell us a little bit about you, how you got to where you are and what's the biggest challenge you're working on tackling right now?
Jonathan Conradt: Yeah, so I've been in technology for a long time.
I was on the original Chrome team at Google, so I joined the team when there were about [00:01:00] 40 of us. And if you've ever used the Mac or Linux versions of Chrome, that was my baby. I then was at eBay, where I managed their marketing and brought AI and machine learning into that, both on site and off site.
And then for the last 12 years, I was at Amazon. So eventually at Amazon, what I discovered was that there were great AI projects that weren't being approved, and there were kind of bad AI projects that were being approved. And the principal problem is that Amazon has really smart people, really smart vice presidents.
They're well educated. They come from great backgrounds. They know their business inside and out, but AI was coming out of left field to them. They didn't really know how to deal with it or what it involved. And they were struggling to make good decisions about what should we invest in. And so they were missing out on great ideas and sometimes investing in things that weren't going to work out very well.
So about my last year at Amazon, most of what I did was [00:02:00] prepare curriculum and teach vice presidents and directors worldwide at Amazon about machine learning and AI. And then ultimately I decided: if it's this big of a mess inside of Amazon, with these people, inside of a tech company, what is it like outside of Amazon?
And so I decided to leave Amazon and do things like this, try to help people understand what AI is, where they need to be cautious, where they need to be aggressive, and help them understand how to make the best of it.
David Rice: It's an interesting and very poignant time to do that.
There's a lot of things that I think AI and machine learning can do. And some of what companies have cast their eye on is sort of tied to it being the shining new thing, right? Folks have some really big ideas, but as you and I were talking about before we got on this call, there are a lot of solutions that are much more practical and can help companies tackle immediate needs.
I was wondering, can you give us some good examples of this and where can HR implement something for immediate impact?
Jonathan Conradt: Yeah, so [00:03:00] generative AI is that kind of shiny ball that's catching everybody's eye, right? But it turns out traditional machine learning has also made big advances in the last few years.
One of those things is called automatic machine learning. Basically, what happened is scientists were looking at the work that they were doing, and they realized that they were basically following the same steps every time, and there was a potential to just automate all of that. And there's a really good open source package called AutoGluon.
And it's quite remarkable. When I looked at it, I had a bit of an existential crisis, because I realized that here was a package where someone who had decent SQL skills, who knew a little bit of Python, could produce a world class model overnight that would have taken me potentially months to create.
That's really useful. So in HR, there's a variety of things you need to do. You might want to classify things. For example, you might want to identify people that would benefit from, say, leadership training, or employees who are [00:04:00] at risk of attriting. Those kinds of things. And these models can do that for you.
There's also types of predictions you want to make that relate to numbers. How many resumes do we expect to have? Or how many people do we need to man the warehouse in two weeks, right? Those kinds of predictions of numbers it also does really well. And it can also do time series. Time series is useful for things like understanding the seasonality of data.
So, I'm not sure how that relates directly to HR, but I'm sure there are some things that are seasonal in HR that they want to be aware of. So, what's remarkable about this is in three lines of code, someone who has the data can basically create a world class model. I think that's just remarkable.
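For readers who want to try what Jonathan describes, here's a minimal sketch of the AutoGluon tabular workflow. The employee data and column names are invented for illustration, and real use would need far more rows:

```python
# A minimal sketch of the AutoGluon workflow described above
# (pip install autogluon). The HR data and column names are
# hypothetical, invented purely for illustration.
import pandas as pd
from autogluon.tabular import TabularPredictor

train = pd.DataFrame({
    "tenure_years":    [1, 6, 2, 9, 3, 4, 1, 8],
    "salary_band":     [2, 4, 2, 5, 3, 3, 1, 5],
    "manager_changes": [3, 0, 2, 1, 3, 1, 4, 0],
    "attrited":        [1, 0, 1, 0, 1, 0, 1, 0],  # the label to predict
})

# The celebrated "three lines": create a predictor, fit, predict.
predictor = TabularPredictor(label="attrited").fit(train)
current_employees = train.drop(columns=["attrited"]).head(2)
print(predictor.predict(current_employees))
```

AutoGluon handles the model selection, tuning, and ensembling a scientist would otherwise do by hand, which is exactly the step-automation Jonathan is describing.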
David Rice: Yeah, that's super interesting. Of course HR is getting more and more data all the time, right? Especially as we keep implementing all these new tools.
And I know you've spoken a little bit in the past about responsible AI, and you've kind of said that this is [00:05:00] tricky when it comes to HR. Talk to us a little bit about the biggest ethical considerations with AI and HR.
And what are some of the things we have to solve for in these models before getting it more involved in what HR does?
Jonathan Conradt: Yeah. In Amazon we have HR business partners, right? So they're tied to your team, and they're great people to go to for advice. As we started working on some machine learning models, these models were going to use customer data to predict things about customers, and HR and legal turned out to be really important business partners for us, because computer scientists and scientists, we maybe took a human research class a long time ago in college. For HR, these kinds of laws and concerns are top of mind. And so there's a unique opportunity for scientists and people doing models to incorporate HR and ask them good questions.
Things like how we want to manage gender, and what sensitivities we have around race. Those kinds of things HR people know inside and out, [00:06:00] right? But the other thing that could happen is within the HR project itself. So let's say an HR organization wants to go and do something. All of those issues are very sensitive.
So the classic example is: you want to automate resumes as they're coming into the organization, and you want to classify them or sort them in some way to be more efficient. And there's dangers there, because machine learning models, they cheat. They're just incessant cheaters. They're going to find the shortest path to the answer that they can get.
So you can see things like: the model will learn that if people mention in their resume that in high school they played lacrosse, those people historically got hired and promoted within the company. And that could be a historic bias. And it's something that you don't want as part of your filters for hiring or interviewing people.
And so, you wouldn't necessarily expect that word to end up being important to a model. But like I said, models cheat, and they're going to find all of those little things. And [00:07:00] so responsible AI is a whole suite of ideas around how we best manage the relationship between our goals and the people that are involved, right?
Maybe they're customers or employees or job applicants. How do we manage their data? How are we fair across subgroups? How do we honestly and transparently expose to them what the machine learning model is doing? And so this is an interesting area of ethics. And again, HR professionals spend a lot of time thinking about ethics and how to treat people well and how to treat them correctly.
So responsible AI is a fascinating area for HR, and it touches upon a lot of HR solutions. Let's say you wanted to build a model that was going to predict who of your current employees is going to become a vice president in six years. That's a useful model, right? Because you want to retain those people, you want to develop those people.
That's a good model. At the same time, as you're creating that model, you need to be very [00:08:00] careful, because, let's say, all of your existing vice presidents that are your training data mostly graduated from Ivy League schools, or maybe they're male. All of those kinds of factors are in your training data.
And so to do the right thing in terms of responsible AI, you need to remove the factors that shouldn't be considered. Don't let the model see or make inferences based on things you wouldn't consider in an actual promotion. Yeah, responsible AI is really interesting. And there's classes about it online.
There's lots of papers about it online, and regulation is coming. It's already in the EU. It's quite possibly coming to the U.S.; it's been introduced in Congress a couple of times. And this is an area, again, where HR is probably ahead of the curve for a lot of the scientists, and so you can be aware of it, and you can help them understand.
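As one concrete illustration of "don't let the model see it," here is a minimal sketch in pandas; the table and every column name are hypothetical:

```python
# Sketch: keep protected attributes (and known proxies) out of the
# feature set entirely, so the model cannot see or infer from them.
# The promotion-history table and column names are invented.
import pandas as pd

history = pd.DataFrame({
    "years_at_company":  [4, 7, 3, 6],
    "performance_score": [3.8, 4.5, 4.1, 4.7],
    "gender":            ["F", "M", "M", "F"],
    "alma_mater":        ["State U", "Ivy A", "Ivy B", "State U"],
    "hs_sport":          ["soccer", "lacrosse", "lacrosse", "track"],
    "promoted_to_vp":    [0, 1, 1, 0],
})

# Factors a fair promotion review would never consider stay out entirely.
EXCLUDED = ["gender", "alma_mater", "hs_sport"]
features = history.drop(columns=EXCLUDED + ["promoted_to_vp"])
label = history["promoted_to_vp"]
```

Dropping columns is only the first step; as the lacrosse example shows, proxies for those factors can hide in free text and other features, so fairness audits across subgroups are still needed.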
David Rice: Yeah, I think that's super important. Like you said, it's certain data perpetuating bias. I was talking to someone recently, and they were referencing a study that said women were like [00:09:00] 75 percent less likely to experiment with AI than men. And my first thought was, well, what data does that create?
It could then almost perpetually learn this bias that women don't have certain skills. I don't know. It's like everything, every action has a reaction, right?
Jonathan Conradt: Yeah, it's interesting. So take, let's say, ChatGPT, OpenAI's large language model. They don't use the things that you type for training the model directly.
Because people type all kinds of ridiculous, terrible, awful things, right? And your training data is your most valuable resource as an AI company. So you don't want to dilute it or add garbage to it. So you're very careful with it. But there's signals that they pick up from how people interact. One of the signals is: did you restate the question?
Let's say you ask OpenAI a question about something, I don't know, the Korean War, and it gave you an answer, and then you basically ask the same question in a slightly different way. That's a good signal to them that, oh, we didn't do a good job. [00:10:00] And the user eventually asked the question and got an answer that they appeared to like.
So that's a good signal too. But of course what happens is, if there's a gender bias, or income bias, or geographic bias in those interactions, the system is naturally learning to do the thing that the dominant group likes. A great example of that, actually, and I'm not sure this is true, but this is the rumor:
OpenAI's models had some peculiar language that they were using. There were words that were happening more frequently in their responses than you and I would use in a normal conversation. And someone finally said, hey, this looks like the way people speak in Kenya. I think it was Kenya, but it might have been another African country.
But it was some African country. What happened was during the process of creating these models, there's a process called reinforcement learning with human feedback. And so what you do is you ask a [00:11:00] question, you generate five responses from the AI, and then you choose which one is best. And a couple of things were happening.
One was people were choosing the longest one, which wasn't necessarily the best, but it just seemed like it probably should be the best. And so what happened was the model started producing longer and longer responses. The other thing that happened was people would naturally click on the ones that sounded like the way they speak. And so we had this large group of contractors in an African country, and they appeared to be responding to the language usage and tone that was familiar to them in their own country, which is slightly different than American English and slightly different than British English.
And so this got picked up, and people were starting to create tools that could pick out AI responses based on this weird distribution of words. I mean, weird within the U.S. context, right? So there you have an example of how biases can creep into AI in a whole variety of ways, right? So if more men are [00:12:00] using it, it might end up responding in a way that is more satisfying to men than women, perhaps.
Or it could take a deeper interest in responding to things that the more popular, the more common population is talking about. Or it could end up mimicking back to you the dialects of the trainers.
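To make the mechanism concrete, here's a toy sketch of the preference-collection step in reinforcement learning from human feedback. Real pipelines use human raters and a trained reward model; the "rater" here mechanically picks the longest answer, simulating the length bias Jonathan describes, and every name is hypothetical:

```python
# Toy sketch of RLHF preference collection: generate N candidates for a
# prompt, record which one the rater prefers. Picking the longest one
# stands in for the documented human bias toward longer answers.
import random

def toy_generate(prompt: str) -> str:
    # Stand-in for the real model: answers vary in length.
    return prompt + " ... " + "because reasons " * random.randint(1, 10)

def collect_preference(prompt: str, n: int = 5) -> dict:
    candidates = [toy_generate(prompt) for _ in range(n)]
    chosen = max(candidates, key=len)  # the rater's (biased) choice
    rejected = [c for c in candidates if c is not chosen]
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

print(collect_preference("What caused the Korean War?")["chosen"])
```

Train on enough preferences collected this way and the model drifts toward whatever the raters systematically favored, length, dialect, or topic alike.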
David Rice: Fascinating.
One of the things that comes up all the time is guardrails, right? So in terms of how people are using it, the guardrails question comes up around employees in particular. But sometimes I wonder if there's more guardrails around executives that need to be put in place. I don't know, what I mean by that is maybe preventing them from over-investing in the wrong things.
So in terms of the tech itself, I guess my question to you is: how much can we actually put guardrails on it with employees? And is there an element of executive education around these tools that needs to happen before anything gets implemented and handed to employees, essentially?
Jonathan Conradt: There's a lot to unpack there. First of all, I think it's useful to tell you that on the science side, we [00:13:00] talk about guardrails as well. There'll be guard models, so that when someone types in, what is the best formula for building a bomb with fertilizer?
When that question comes in, there are guard models, and they're small, they're fast, and they're highly tuned to identifying violent, illegal, or unethical things, and they just take the question and drop it on the floor, because there's no point doing the expensive processing after that; we're not allowed to answer those types of questions, right? And then there's another guard after the AI responds, and this guard's job is to make sure that you didn't trick the AI into telling you something it shouldn't tell you, like how to build a bomb. And so there it attempts to identify the fact that the AI has gone off the rails, and it drops the response on the floor. So that's the technical meaning of the term.
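A rough sketch of that two-guard pattern, with a toy keyword check standing in for the small, fast trained guard models Jonathan mentions; every name here is hypothetical:

```python
# Sketch of a guarded pipeline: a cheap pre-filter before the expensive
# model and a post-filter on its output. The keyword check is a toy
# stand-in for a trained guard model; call_main_model is a placeholder.
BLOCKLIST = {"bomb", "weapon", "explosive"}
REFUSAL = "Sorry, I can't help with that."

def flags(text: str) -> bool:
    return any(word in text.lower() for word in BLOCKLIST)

def call_main_model(prompt: str) -> str:
    return f"(expensive model's answer to: {prompt})"  # placeholder

def answer(prompt: str) -> str:
    if flags(prompt):                   # input guard: drop it on the floor
        return REFUSAL
    response = call_main_model(prompt)
    if flags(response):                 # output guard: catch jailbreaks
        return REFUSAL
    return response

print(answer("How do I reset my password?"))
```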
Now, guardrails for employees is an interesting problem, because everybody has a phone. In my own life at home, I've built a machine that can run an AI, and I've [00:14:00] got a way that I can get to it on my phone, and I can do anything I want with that machine.
I could talk to it about anything. I could send it any documents. And of course, if you completely cut people off and provide them with no way to get to these types of tools, they will just find a way. Because the tools are too valuable to them. It turns the 20 minutes of writing an email into five minutes of writing an email, because they can get it started and then go back and edit it.
People are going to want to do that. The big problem with cutting people off completely and not providing any outlet is also that they're going to use models that you have no access to or control over or visibility to. And they're going to do really dumb stuff, like they're going to upload documents from your company to these models.
If you go look at the terms and conditions for OpenAI and Google's model, and probably all the other ones as well, it says anything that you enter into this model can be used by the company in perpetuity, for free, for improving or developing the model. So you really don't want, you know, someone [00:15:00] uploading your business plans for the next year, or your budget, or lists of employees. That would be a nightmare.
Fortunately, there are ways to provide nice guardrails. Like, you can get corporate accounts now for most of these AIs. My former employer, Amazon, has a really nice way for you to have basically your own AI that everybody can get to, and it's sectioned off so things aren't being uploaded and shared with Amazon.
And I think that's a really good way forward. Most companies are going to provide the same thing. But what about executives? So now we've got the problem of executives. Well, first of all, we know that executives are just as likely to upload sensitive documents to AI as everybody else. In fact, they might be worse than others at it.
And there's a limited understanding of, like, where is that AI actually sitting? It can be hard for people to tell whether, say, it's within our own IT, it's protected, it's safe, or it's a random third party on the internet, right? It can be hard for executives to figure that out, because they're experts at their [00:16:00] work.
They're well trained, they're smart, they're well educated, but they're not IT people, right? So then you run into the issue of, okay, once we've trained them, it's sort of like the phishing training that we all have to go through, right? Once you train them: don't click on that link, don't upload that document.
It's a really good idea to try to provide them access to something that they can use, that's useful for them, that's well protected, that's official, that's logged, and all that kind of good stuff. But those aren't projects. Projects are, hey, we're going to replace our entire customer service department with this AI.
Those kinds of initiatives that some companies have already attempted, right? So there's those early adopter companies that ran really fast, spent an enormous amount of money, had enormous difficulties, and probably weren't successful, but they were early adopters. The companies that are thinking about these projects now, I would term to be more fast followers.
All right. So they have an opportunity to look at, okay, what is working in the marketplace? And the prices have come down dramatically, so they're not going to spend quite as much. [00:17:00] And there's been some good research about how we measure the impact of these kinds of things. But let's just take the customer service thing.
I don't know anybody that's like, I really hope they push me off to chat so I can talk to a machine about my problem. I don't know anybody that is looking forward to that, and definitely not your best customers. And so while there is an opportunity to maybe make some things more efficient and use AI in those realms, you're going to have to be careful about it.
And I would suggest that one of the first things you want to do is have a good model in place where you can differentiate between an expensive, serious problem that you need to recover from, and these calls that are costing you a lot of money but are really simple things that are easy to answer.
Like, how do I change my password? Those kinds of things. But yeah, executive guardrails, that's a real problem. That's one of the reasons, again, why I left Amazon: so I could have those conversations with companies and have that deep conversation about, okay, well, what data do you have?
What experience do you have? What problem are you trying to [00:18:00] solve? Are you trying to make more money with this? Are you trying to save money with this? What are your goals? And have you tried something simpler? That's also a possibility. And so I invite people to put 50 minutes on my schedule, and we can have those conversations and decide if that's something that I can help them with.
But yeah, it's going to be very hard. I think there's three kinds of companies. Like I said, there's the early adopters. They've been trying stuff, spending a huge amount of money, taking big risks, and probably not succeeding. The second group are the never evers. They're the ones that are convinced that, oh, this is another crypto.
It's a big fad. It's not going to work out. We've avoided these things in the past. We're going to keep doing everything the way we've been doing it. Those companies in particular are in big trouble, because it's not that; this is more like the coming of electricity. And then there's the fast follow organizations, who are just now looking at the data, figuring out how to measure what they're going to do, starting pilots.
They saved a lot of money because the prices have come down, and now they're kind of getting into it. I think [00:19:00] those are the interesting groups that are going to succeed and do really well. And by the way, they're going to catch up with all of those early adopters, because the performance of the models is going to hit a peak.
Okay, this is as good as they're going to get for a long time, right? There's going to be this plateau they're going to hit. The early adopters are going to get to that plateau first. But even if you started late, the rate at which these things are improving is so steep that a fast follow company is going to get there in plenty of time.
David Rice: Yeah, the learning curve and sort of the growing pains with it won't be quite as severe, right? Because a lot of that will have already happened.
So sometimes I wonder, do people have enough of a fundamental understanding of how machine learning, for example, works to understand how to make the best use of it? Would you say that's true? And what can we do to change that in the near term when it comes to training and development?
Jonathan Conradt: Yeah, absolutely. They don't have a great understanding of it. I mean, there's a lot of technical people at companies that don't have a great understanding of it. [00:20:00] I was at the University of Pennsylvania at Wharton, and I gave a talk to their graduate students about what machine learning is. It's about a half hour long, and I get to a point where I explain that, look, at its core, an AI is a function call.
It's like in Excel: you have the SUM function, and you all use that, and you give the list of columns or rows that you want to add, and it takes that as an input, and it produces an output. And AI, that is all it is.
So if you think about that SUM function, in between times that you use it, it's not wondering about what it said most recently. It's not plotting against you. It's not daydreaming. It doesn't exist, in some ways, right? It only exists in that moment that it's being used, and then it goes away.
AI is fundamentally a mathematical object, a function. It takes an input, processes it, produces an output. And a lot of the amazing, magical things that [00:21:00] we see it can do are actually augmentations and code that we've written around the AI to organize data and things for it. And that's all complicated.
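One way to picture that statelessness: any "memory" a chatbot appears to have is application code replaying the conversation into each call, roughly as in this hypothetical sketch:

```python
# Sketch: the model itself is a pure function; conversation "memory"
# lives in the surrounding application, which replays prior turns.
def model(prompt: str) -> str:
    # Stand-in for the real network: input in, output out, no state kept.
    return f"echo: {prompt.splitlines()[-1]}"

history: list[str] = []  # lives in the app, not in the model

def chat(user_msg: str) -> str:
    history.append("User: " + user_msg)
    reply = model("\n".join(history))  # full context re-sent every call
    history.append("AI: " + reply)
    return reply

print(chat("Hello"))
print(chat("What did I just say?"))  # answerable only because the app replayed it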
It's a little bit like expecting everyone to understand exactly how a fuel injection system works. Most people don't even know what that means. And yet they can drive, and they can do their lives, and they've got these reliable systems. And so I think what we need to do is, for each group or subgroup within an organization, we need to help them get to the right level of understanding, right?
Some people are going to be users. Some people are going to be technicians. Some people are going to be deep in the weeds with these things. And some people are going to be just beneficiaries. They never interact with the system, but it's just doing good things on their behalf. So that's going to be tricky.
HR is usually responsible for training, right? They've got to figure out who to bring in to train and what kind of training to invest in. And it's going to be tricky. But you know, having a peanut butter solution across the organization, like everybody's going to get AI training and we're all going to provide the same thing, [00:22:00] won't work.
When I was at Amazon, one of the things I worked on was teaching these vice presidents and these senior vice presidents. One of the questions was, what could I do for the people that drive our trucks? The people that put the products in boxes. A lot of these people don't necessarily have a college degree. They're hardworking, they're good people, they're smart people, but you know, they're not technical.
How do I bring them up to speed on AI so that, number one, they don't fear it, right? The company needs to adopt machine learning and AI, right? You don't want to be left behind because you were scared of it or your workers were terrified of it, right? So to get over that, there has to be training that's appropriate for them.
It meets their needs, it answers their questions, but also, there's that training at the executive level, which is quite a bit different.
David Rice: One of the things I think there's some real interest in deploying this technology around is employee wellness. I spoke with somebody recently who had an interesting use of voice technology to flag burnout or [00:23:00] stress, and we've got all kinds of data that could provide behavioral indicators, I would say.
Where do you see this changing the way we approach employee wellness in the coming years?
Jonathan Conradt: Yeah, I'm particularly interested in that. Right now I'm working with two researchers on a way to not only measure employee wellness, but also help improve it over time. One of the great things about AI is it's very patient.
It can be a valuable source of information, because you can provide it with accurate and actionable information, and the AI can respond to people and provide them with those kinds of inputs. A long time ago, I was at the Gallup organization, and I helped develop StrengthsFinder. And so this was an instrument that was about helping managers understand their employees, and through that deeper relationship, the goal was that everybody in the organization would benefit. There would just be a better understanding of each other, greater support for each other, and a lot of goodness comes out of that. Today, I think we would refer to that as part of wellness.
[00:24:00] But yeah, what we're trying to do with AI is, first of all, in a more interactive way, learn from people about how they're currently doing. One of the things that's frustrating in a lot of assessments is you have to click the boxes. And at the end of the day, I usually feel like I'm not sure they're going to interpret my clicks the way I was thinking about it.
And so what we're doing is we're opening it up, where besides that kind of forced choice data, which is still useful, we give employees an opportunity to just express themselves. It's, okay, as you went through that last section, kind of talk to me about what it is you were thinking about.
What struck you as most important? What do you wish we were learning from what you were just saying? And this text is super valuable. One of the ways that I saw how text was super valuable was I created a machine learning model that would take research papers, and it would try to predict whether I would find them interesting.
And so I had several hundred papers that I had marked as [00:25:00] interesting out of thousands. And it worked okay. And then I used a part of an AI. So AIs don't understand words. So you have to take the words and convert them into basically a mathematical object called a vector. And you might remember that from high school.
A vector is a mathematical object that has a direction and a magnitude, right? And so you take words or you take sentences or even entire paragraphs and convert them into these mathematical things. With basically no effort on my part, I just took all the titles and abstracts from all these papers. And I used the first part of the AI to create a vector.
And now I had more data. And so I provided that to my same model, and it improved by 14%. That's a big jump. And likewise, if you think about it, if we have an employee satisfaction survey, which is part of what we're working on, and you can understand the open ended responses, and it gives you a 14 percent better understanding of your employees, [00:26:00] that could be a really big deal, right?
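Jonathan doesn't name the tools he used, but here's a minimal sketch of the same idea with sentence-transformers, one common open-source embedding library; the papers and tabular features are invented:

```python
# Sketch: convert titles/abstracts into vectors and append them to
# existing tabular features (pip install sentence-transformers).
# Jonathan's actual tooling isn't named; this library is one common choice.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

abstracts = [
    "AutoML systems for tabular prediction tasks.",
    "A survey of reinforcement learning from human feedback.",
]
text_vectors = encoder.encode(abstracts)        # one fixed-length vector per text

tabular = np.array([[2019, 12], [2023, 48]])    # toy year/citation-count features
combined = np.hstack([tabular, text_vectors])   # richer inputs, same downstream model
print(combined.shape)
```

The same trick applies to open-ended survey responses: embed the free text and feed the vectors into whatever model already consumes the forced-choice data.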
David Rice: Yeah. I guess with that in mind, what is some of the untapped potential in terms of the data that we have and the tools that people are gaining more access to? The untapped potential, really, in helping people and how they view their jobs, the company, their experience, those kinds of things.
Can we use it to change how leaders approach those things?
Jonathan Conradt: One of the most common questions I get as I talk to people is, can I outsource HR or legal to an agent? And the answer that I give is, I think the people most likely to use HR agents are going to be existing HR professionals. And an HR professional's job is very complicated.
There's a lot of stuff that's going on. They often need to look up more information, consolidate a lot of information. An agent is an AI that has the ability to go and do things on its own. Ask for information, maybe hit a database, hit an [00:27:00] application, do a web search, right? That kind of stuff. And so what I think is going to happen is people are going to gain, essentially, executive assistants.
Imagine everybody in the whole company has an executive assistant. And that executive assistant manages all of the kind of overhead of being at work and simplifies things like when you get an email that's making a request, the agent reads the email, organizes the information you need to respond and says, here's some things to think about in the response.
And maybe you could say something like this. But you, as a human, make the decisions, right? You can look at the data and say, Okay, I want to describe it a different way, or there's a reason that we want to say something different, or, the AI misunderstood it, whatever. I think those are going to be really powerful instruments.
And we're going to see that everywhere. Here's the funny thing: AIs are best at replacing vice presidents. Because if you think about a vice president's job, it's about information synthesis. There's the entire organization that's gathering information, and that [00:28:00] information bubbles up through the organization.
And then you have this vice president who has to make these kinds of strategic decisions based on this really wide view of the entire organization. And it turns out that type of information synthesis is where AIs are much stronger than humans. I think we start off with these kinds of, Microsoft calls them co-pilots.
You might call them an executive assistant or an AI assistant. I think they're going to be really useful and powerful. They're going to save you time and allow you to have a better understanding of the business. By the way, I think it's a mistake for an organization to say, okay, we gained this efficiency, right?
So now let's look at computer programmers. If my computer programmers use AI to help them write their code, their efficiency goes up pretty dramatically. And there's two ways you could kind of respond to that. One is you could say, whoa, okay, we don't need as many programmers to get the same amount of work done.
The other way is to recognize that in every tech company I've ever worked at, there comes a time when you prioritize [00:29:00] everything you want the programmers to do. And then you have to draw a line and say, well, we don't have the time or money to do anything below this line. So you have to get rid of things that would benefit customers, would benefit the company.
Well, if AI increases productivity, essentially what it does is it pushes that line down. So the smart companies are going to be the ones that are like, look, let's make everybody more efficient. Let's help them get their stuff done faster. Let's help them reduce the overhead of just being at work. And let's produce more for our customers.
And those companies are going to accelerate past their competitors. So the companies that choose to downsize and remove people and stay at the same level of work with fewer people are going to get crushed by the companies that keep the same number of people but dramatically improve the amount of work they can get done and the amount of things they can do for their customers.
David Rice: Love it.
Jonathan, I want to thank you for joining us today. Before we go, there's always just two things I like to do with every guest. The [00:30:00] first is I want to give you a chance to tell everybody where they can connect with you. If there's anything that you're working on that you want to plug, feel free.
Jonathan Conradt: Yeah. So we'll provide a link where you can just have a 15 minute conversation with me about anything you want. Tell me about the company you work for and the problems you're trying to solve, and I'd love to talk to you about it. I love learning about new companies. And this was a big part of what I did at Amazon. Amazon is this gigantic company that does all kinds of different things.
So I would go and meet with random teams about what they're doing and help them understand: this is a machine learning process that you could use to make your life better. This is how you can apply AI. So I'd love for people to go on Calendly and schedule some time with me.
David Rice: That'll be linked in the description for this episode. So feel free to check that out, get some time with Jon. And of course, connect with him on LinkedIn.
The second thing is we've started a little tradition here on the podcast. You get to ask me a question. It could be anything you want. It could be about the topic. It could be about something random.
It's up to you. So I'll just turn it over to you.
Jonathan Conradt: Yeah. So I'm curious, AI does [00:31:00] a great job with video editing and things like that. How has AI impacted you and the podcast?
David Rice: With the podcast, I mean, it's crazy what it can do with audio sometimes. It can really help us streamline the audio a bit.
I don't know if it's the AI that's doing it or some of our editors, but sometimes it has a weird effect on somebody's voice. The biggest thing with my job, though, is just in content creation, how quickly it's changed how we sort of organize that, from the base level of here's an idea, here's how we're going to outline it.
Here's the process that you're going to go through to do basically the whole thing. I mean, that's really the biggest thing that has changed. It's sort of changed the nature of content creation. In some ways, I wouldn't say it's easier, because we're still in that learning phase of figuring out how to get the most out of it, and what you would actually want it to say, or, particularly with your audience profile, depending on what it is, getting it to match that. I cannot believe how far it's come since the first time I typed something into ChatGPT. I [00:32:00] mean...
Jonathan Conradt: It's remarkable, isn't it?
David Rice: It's changed so fast.
Jonathan Conradt: I guess soap opera editing has been just revolutionized, seemingly overnight.
Because they record so much video, and editing it is such an enormous job. I suppose reality TV is the same way. And these tools just dramatically reduce the amount of time it takes. Basically, they said that the AI can do the first rough edit automatically. That's amazing.
David Rice: That's incredible. Yeah. I mean, I'm sure that every major film studio is investing heavily.
Jonathan Conradt: Yeah. Well, it was a pleasure meeting you, David.
David Rice: Yeah. It was a great chat. I appreciate you coming on. Hopefully we'll get to do it again sometime.
Jonathan Conradt: Yeah. Thank you.
David Rice: All right listeners, if you haven't already, head on over to peoplemanagingpeople.com/subscribe, get signed up for the newsletter. And until next time, keep [00:33:00] experimenting.