Generative AI has crossed the threshold from novelty to necessity—but most organizations still haven’t caught up. In this episode, I talk with Kenneth Corrêa, global AI educator and author of Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents, about what it actually means to be AI fluent. Kenneth breaks down how leaders can move from scattered experimentation to systems-level adoption, why uploading your financials to a free chatbot isn’t “innovation,” and how education—not fear—is the key to responsible implementation.
We unpack the shift from predictive to generative AI, the cultural lag that keeps leaders from seeing tangible ROI, and why the real competitive advantage comes from empowered humans—not replaced ones. For anyone trying to make AI a force multiplier rather than a security nightmare, this episode’s a roadmap.
What You’ll Learn
- Why the biggest AI risk isn’t bad prompts—it’s data leakage.
- How to move from “AI curiosity” to true “AI fluency.”
- Why most orgs still treat AI like it’s 2015, not 2025.
- How HR and operations teams can use GenAI for analysis, not just content creation.
- The anatomy of a functional, non-Frankenstein AI stack.
- What leadership skills will matter most in an AI-literate workforce.
Key Takeaways
- Education beats enforcement. You can’t monitor every prompt, but you can train people to think critically about privacy, ethics, and proper tool use.
- Start small, start smart. Automate repetitive and simple tasks first—then move up the value chain.
- Hands-on or hands-off? Hands-on. Leaders who don’t personally experiment with AI are flying blind.
- Bias isn’t theoretical—it’s historical. HR’s caution around AI stems from real issues in past predictive models; responsible use demands vigilance, not avoidance.
- Avoid the Frankenstack. Choose one ecosystem—Microsoft, Google, OpenAI, etc.—and train deeply before expanding.
- Fluency is repetition. You don’t read your way into it; you talk, test, and refine until it’s second nature.
- The new leadership skill is judgment. Knowing which tasks belong to humans—and which don’t—will define effective management in the AI era.
Chapters
- [00:00] The Real Risk: Convenience vs. Confidentiality
- [02:15] Why AI Fluency Still Feels Out of Reach
- [05:10] Education as the First Line of Defense
- [08:00] Generative vs. Predictive: The New Productivity Curve
- [13:40] Rethinking HR’s Relationship with AI
- [20:20] Where to Start: Low-Hanging Workflows for Automation
- [24:30] Escaping the Frankenstack: Building a Clean AI Stack
- [27:40] Redefining Leadership in the Age of AI Agents
- [32:40] Avoiding the Overpromise Trap
- [36:50] Getting Fluent: Why You’ve Got to Get Your Hands Dirty
Meet Our Guest

Kenneth Corrêa is the Head of Strategy at 80 20 Marketing, a professor at Brazil’s leading business school FGV, and an international speaker with over 150 lectures to his name, including two TEDx talks. He brings more than 15 years of experience in marketing and technology to his work, specializing in innovation, AI, metaverse, and data-driven strategy for large enterprises across Latin America.
Related Links:
- Join the People Managing People community forum
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Kenneth on LinkedIn
- Check out 80 20 Marketing
- Kenneth’s book — Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents
David Rice: What are the risks when it comes to AI that leaders are underestimating just because, frankly, it's easy for people to understand how to use it now?
Kenneth Corrêa: I think the major mistake I see companies doing is not exactly using the tool by itself. It's actually sharing private, confidential files. Somebody just had an idea, so the person uses her own phone to upload a spreadsheet with the financial results for the quarter, for the free version of ChatGPT.
David Rice: You noted your own company saw a 15% increase in productivity and you didn't increase head count at all. Why would you say more companies aren't following suit?
Kenneth Corrêa: People are teaching and talking about AI as if it is this 70-year-old technology that still needs lots of data and databases and data scientists. Now generative AI, that's something that is easy to use, very easy to implement, very easy to get this 20 to 40% increase in productivity.
David Rice: What are some of the pitfalls you typically see in orgs looking to become AI fluent? How do you avoid unreliable and overpromising output?
Welcome to the People Managing People Podcast, the show where we help leaders keep work human in the era of AI. I'm your host, David Rice. On today's episode, I'm joined by Kenneth Corrêa, a global AI educator, speaker and author of the book, Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents.
In this conversation, Kenneth lays out a path for companies to move from AI curiosity to true AI fluency: breaking down what that means, why most leaders are still on the sidelines, and how to avoid the Frankenstack trap of tool overload. We talk about the shift from predictive to generative AI, what leaders are getting wrong about implementation, and why the future of team leadership may look more like workflow orchestration than task delegation.
If you're trying to empower your people instead of replace them, and you want a more grounded systems level approach to AI, this one's for you. So without further ado, let's get into it.
Alright. So Ken, welcome!
Kenneth Corrêa: Alright, thank you very much, David. It's a pleasure to be here with you.
David Rice: Yeah, absolutely. So you're in Brazil, right?
Kenneth Corrêa: Yeah, definitely, in São Paulo, the largest city in South America.
David Rice: Oh wow. Yeah. That's cool. I've got some colleagues down that way. I always wanted to visit.
Kenneth Corrêa: It's a big city, not as big as New York, but it's our own New York down here in the south.
David Rice: So yeah, we're gonna be talking about AI fluency and the path to get there essentially, 'cause I think a lot of us are headed down that path already, but not really sure what all the steps are. Right. And like how do you measure fluency, so?
Kenneth Corrêa: We're still kind of building the stairs, so there will be steps that we'll find out halfway through it, right? The airplane's being built while it's flying.
So those are the metaphors we use nowadays because everything happens so fast, right?
David Rice: Absolutely. I think one of the things that is just so different about this, right, is like, you think about the modern AI interface. So it's kind of removed a lot of barriers to experimentation. Like you don't have to be a computer scientist to interact with it and to test what it can do.
Thinking about when we look back in different times in history, like Excel comes out, people had to take courses at first for like, how do I use Excel? And then now it's pretty much sort of common knowledge. By the time you leave college, you've got some level of experience with Excel. But that's also led to like misuse of spreadsheets, right?
Like I know people who have spreadsheets to track the amount of cleaning supplies they have in a cupboard. But I'm curious, when we come back to AI, what are the risks that leaders are maybe underestimating about org-wide AI usage, just because, quite frankly, it's easy for people to understand how to use it now?
Kenneth Corrêa: Like point and click, or you start even talking to the tool and it will reply back to you with natural language.
So yes, but again, I really like the way you put it, because I would always advertise that as a feature, but maybe you're pointing out where it may be a bug, right? In the sense that people can jump right in. Having a good interface, making it easy for the person to interact, is amazing for getting a person to jump in. But I think the major mistake I see companies making is not exactly the use of the tool itself, the act of using it. It's actually sharing private, confidential files.
Right? We're talking about information from the company that should be kept secret, that is usually behind lots of firewalls and passwords and user authentications. And now somebody just had an idea, and maybe because the company blocked the use of ChatGPT, the person uses their own phone to upload this spreadsheet with the financial results for the quarter to the free version of ChatGPT.
So I think probably the major risk is private information being spread around without the user even knowing.
David Rice: Are there ways to sort of audit how people are using it, or kind of create some compliance checklist, maybe a sort of framework for your people to understand how not to use it? And what are some of the things that you've seen really work in that area?
Kenneth Corrêa: Yeah, I don't see any sort of, let's check out people's prompts or let's read their conversations. We don't have space for that in 2025. But on my side, what I try to do with my own company, and also for other companies as well, is education. It's the same issue with cybersecurity, right?
We're not going to be able to block it all. There are new URLs and new tricks that are used to hack people's accounts and all that. So we hope that educating people on how to use it, on the limits of the tool, on what it means to use the free version on your own cell phone and how that's different from the corporate version that has blocks,
so information is kept inside those fences. So I think education should be the major path in that sense, because of course we don't have enough time to check everybody's answers. What some companies still do is create their own assistant: they take the technology, GPT or Gemini or whatever, but create an assistant with that company's name, so now they can add extra guardrails.
So depending on what's asked of the assistant, "I'm not able to help you with that" is the kind of answer you could get, or "this is the kind of information you are not allowed to share." So if the company goes as far as creating their own assistant, they can install those extra guardrails to try to keep that from happening.
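The guardrail Kenneth describes, a company-branded assistant that refuses certain requests, can be approximated with a simple pre-filter in front of the model. This is a hedged sketch, not any vendor's API: the pattern list, function names, and canned refusal are invented for illustration, and a real deployment would get its policy list from the security team and pair it with system-prompt instructions.

```python
import re

# Illustrative only: a real policy list would come from your security team.
CONFIDENTIAL_PATTERNS = [
    r"\bsalar(y|ies)\b",
    r"\brevenue\b",
    r"\bconfidential\b",
    r"\bpassword\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-shaped numbers
]

def flags_confidential(prompt: str) -> bool:
    """Return True if the prompt looks like it contains data that
    should stay behind the company's own fences."""
    text = prompt.lower()
    return any(re.search(p, text) for p in CONFIDENTIAL_PATTERNS)

def guarded_reply(prompt: str) -> str:
    # The refusal mirrors the canned answer Kenneth describes.
    if flags_confidential(prompt):
        return "I'm not able to help you with that."
    return "OK, forwarding to the assistant."
```

Keyword matching alone is easy to evade, which is exactly why Kenneth puts education, not enforcement, first: the filter catches accidents, training prevents intent.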
David Rice: It's interesting you brought up the cybersecurity angle, you know, the kind of comparison there.
Kenneth Corrêa: Usually when we're talking about innovation, we try to connect it to whatever we knew before, right, David? Because if I'm talking about something that is totally new and totally different, as if I had to take your head off your neck and start talking to you as if everything is totally new,
people will never adopt it. That's something we learned back in the sixties. There was this guy, Raymond Loewy, who created a concept called MAYA. I don't know if you've heard of it. It's "most advanced yet acceptable." So when you're creating a new product, you try to keep it as advanced as you can, of course, new tech and possibilities and features and all that.
But you have to create some sort of familiarity, so the person can connect to it and understand. I was in San Francisco two weeks ago and I was riding Waymo cars. So now, if somebody asks me, what's a Waymo? Waymo is an Uber without a driver. So I make it familiar. You know, you can get an app, you can call for a ride, but the driver will not be behind the wheel.
That's the difference, and the rest is all the same. You put in a credit card, you pay for the distance that you travel and all that. So I think creating that sort of familiarity always makes it easier for new information and ideas to sink in.
David Rice: We just got those Waymos here in Atlanta and I don't think they're ready for this city. It's just, they keep getting stuck.
Kenneth Corrêa: They're getting stuck. I saw some violence in I think Los Angeles, right? There was this...
David Rice: Oh yeah, when people were lighting 'em on fire and all that. Yeah.
Kenneth Corrêa: So I don't know if the car is not ready for the city or the city is not ready for the car. Some sort of paradox, you know? Right.
David Rice: Might be a little bit of both. Leaders are wanting teams to get more done, right? Productivity is really driving a lot of what people are aiming for with AI, and they're largely getting that, I think. A 20 to 40% boost is sort of the norm from some of the studies that we've seen, and I believe you noted when we spoke before that in your own company you saw a 15% increase in productivity and you didn't increase head count at all.
So, why would you say more companies aren't following suit? Like what's holding folks back? Is it just a lack of understanding of AI? Is it sort of a scope creep problem, where the AI gets used so much that no one is using it for the same things, and it just creates a lot of confusion or differing levels of effectiveness? What is the big hurdle, I guess, in your opinion?
Kenneth Corrêa: Yeah, I think it sounds absolutely crazy that companies are not jumping right in, because I'm doing that for my own company, so I'm seeing the results. I found money on the table, is how I like to put it. But I think there are many factors at play.
The first one is people are teaching and talking about AI as if it is this 70-year-old technology that still needs lots of data and databases and data lakes and data scientists. So it scares the hell out of people that don't have the technical expertise for that. I think one of our responsibilities is we're still not making it clear that whatever AI was before November 2022 is different from what has happened in almost three years now. That was classical AI, predictive AI: hard, technical, amazing, but it costs a lot and takes a lot of time to implement. Now generative AI, that's something that is easy to use, as you mentioned before. So easy that people are uploading data they shouldn't.
Very easy to implement, very easy to get this 20 to 40% increase in productivity. Which means, remember, if we're talking about five days a week of work, 20% is like one day. So we're talking about accomplishing the same production, the same results, with one or two fewer days in the week. So what happens is the companies, the leaders, the people in charge, are not using it personally.
And the first tip that I would like to give your audience, and that's something that I say to leaders all around the world, is: you have to put your hands on it. You only see the value once you start using the apps. There's ChatGPT, there's DeepSeek, there's Gemini, there's Claude; you can download the app and use it for personal stuff.
So for example, you get home, it's Sunday night, you don't wanna cook anything, but you gotta eat. You can take a photo of your refrigerator, of your kitchen cabinet, and say: these are the ingredients that I have, what can I cook in five minutes or less? And now you're gonna see that this tool is able to see a photo, to understand the objects,
sometimes count the objects as well, reach into a database of recipes, and suggest whatever you could cook combining those items. And this usually sparks something, like in Brazil we say, the head explodes, right? Possibilities and ideas. And then, when you're back in your own business and you look at your processes, you look at the activities that people are doing every day, I'm pretty sure a lot of possibilities will pop out.
And again, David, now is the first time that we have a technology where the technology itself can teach you how to use it. So you can say: ChatGPT, I lead a team of 14 people. Those are all salespeople, visiting clients in their offices, and I want to know how ChatGPT could help me out. Now this tool will not give you a generic answer, as long as you explain the dynamics of your team.
Whatever you sell, what's your strategy, where you're going, you're gonna get really good advice on what to do. So I think really starting to use the tools is one of the steps that should be taken, and companies haven't done it yet. Of course, when you look at the media, it's all about the fear and the risks and how AGI and Skynet and AI are gonna destroy us all.
Again, I dunno if we're reading the news or a science fiction book, but that's how it feels sometimes. So I think that's probably one of the main reasons companies haven't jumped on it yet. Now, it's a technology, it has its limitations, and I'm pretty sure it's important for us to understand the limitations.
This tool will hallucinate, will create content which is not based or grounded in reality. And this is like a big no, right? I don't wanna use a tool that is making stuff up. So that's why I suggest people start with ChatGPT, but try to incorporate Perplexity, for example, so now you get answers grounded in sources.
We're not talking about absolute truths, but at least there is some grounding involved. In that sense, I think you can mature, you can evolve into better ways of working. You can learn the best practices, just as long as you're on board.
David Rice: Yeah, absolutely. Yeah. It's interesting there you'd mentioned people feeling like maybe it's not safe or they're not qualified to use it.
And I think we've seen some research that shows, like folks in HR operations, sometimes they do have a little bit of fear attached that they don't feel like technical experts, you know, they are maybe thinking of sort of those predictive AI solutions. In some cases, other folks have maybe experimented a little bit with Gen AI, but they don't really trust it yet.
And you know, I'm curious what you think, because what those folks will inherently derive value from in terms of what they can do with Gen AI is maybe a little bit different from other teams. Where do you think somebody working in an HR function, for example, or operations, what are some of their biggest areas of opportunity?
Kenneth Corrêa: That's awesome. I'll jump right into that, but just before I'll mention, do you know why HR people are afraid of using Gen AI?
David Rice: Why is that?
Kenneth Corrêa: In 2016, there was this paper that came out, and then every outlet covering human resources mentioned it, about algorithmic bias. I hope I'm saying that right.
Because there were companies using predictive AI to help them sort through resumes. And they found out that the predictive AI tool saw the data on IT teams and said: okay, only men are working in IT, so as a predictive tool, I predict that only men are good at working in IT. That's a horrible thing to say, right?
But again, AI has no ethics, has no morals. It's only working on the data it was provided. So that's a real issue. It's still not a hundred percent solved, so there are a lot of steps you have to take regarding responsible AI, and this is one of those. But a lot of the folks in the HR space created a thing with the term AI.
So whenever you say AI, they're like, oh no, there's bias there. And of course, if I use ChatGPT, of course there's bias. Every person that I know has their own biases. I have my own biases as well, and software trained on human data will carry human biases. So that's just one thing that I wanted to mention about why HR people are not really into it.
But I've spoken at two large events in Brazil, one of those with 3,000 people from HR, and I was showing and demoing some of the use cases that we've done with AI. One of those was a case study that I did with a solutions provider I work with, because my company is 95 people, that's the size of the team, and we used it to evaluate and poll our employees about their performance.
And regarding how they're feeling about the company, how we're treating them, how their bosses are treating them. So it's 360 feedback, analysis and all that. So now get it: 95 people are evaluating 95 people. We're talking about 95 times 95; it's about 9,000 pieces of content to analyze. It's impossible to find the time.
If we had started on January 1st, we'd still be halfway through it, right? But we have to give people feedback as well. So what we did is use Gemini, which has a large context window of 2 million tokens, which means the 9,000 answers to the form could be input as data. Now we're using that to analyze patterns,
to check out outliers. We asked Gemini to pinpoint for us whichever topics were more urgent, so we could act on them. So one example for HR people is: sometimes you have dashboards, you have data, you have answers and forms, but you don't have the time to analyze it all. Some people in HR don't know Microsoft Excel, are not good with formulas, not good with numbers.
Gemini can do a pretty decent job at organizing and analyzing data and numbers for you. So that's something that we have been doing for HR as well. So that's the first example for HR people. The second would be: if you think of an HR workflow, you've got to have a job description, so you know who you're hiring.
Then you have the resumes, so you know who you're interviewing, and then you have the selection process. In that sense, again, we have to worry about the bias that I mentioned before. But still, you create a prompt with the job description: you say, I'm Kenneth, I'm the person in charge of hiring for this job description, and then you paste the whole job description.
This could be done in any tool: ChatGPT, Gemini, DeepSeek, Claude, or you can paste it into Microsoft Copilot. Now you upload the first resume and say: okay, this is the first candidate, how does this person fit this job? And you can do that for every resume. But sometimes you're going to run interviews, you're gonna talk to these people, and if you get their authorization, which is sometimes a sensitive point, you can get a transcription of that interview.
So now you say: I have the job description, this is the resume, this is the transcript of the interview. Now, please, Gemini, help me understand the positives and negatives of hiring this person for this position. Now you're doing that for every interview that you run. Now you get summaries, you get comparisons.
Of course, you're gonna have your own impressions; you ran the interview. You can even ask Gemini or ChatGPT to suggest questions you could ask that very candidate, based on the resume they sent you, and now you're gonna get feedback. Now, when you add your own perceptions to whatever the AI came up with from the resume, the job description, and the transcription,
I'm pretty sure you are way more empowered to make a better selection in that case. So this is thinking of the whole talent acquisition space for HR. That's all stuff I've seen companies doing already in the United States, Brazil, Chile, and also India.
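The screening workflow Kenneth walks through, job description plus resume plus (optionally) the interview transcript, pasted into whichever assistant your company standardized on, is really just disciplined prompt assembly. Below is a minimal sketch; the function name and section labels are my own invention, and the resulting text is meant to be pasted into, or sent via API to, any of the tools he lists.

```python
def build_screening_prompt(job_description: str,
                           resume: str,
                           transcript: str = "") -> str:
    """Assemble the candidate-evaluation prompt Kenneth describes:
    job description first, then the resume, then the optional
    interview transcript, then the actual ask."""
    parts = [
        "I'm the person in charge of hiring for this role.",
        "JOB DESCRIPTION:\n" + job_description.strip(),
        "CANDIDATE RESUME:\n" + resume.strip(),
    ]
    if transcript:
        parts.append("INTERVIEW TRANSCRIPT:\n" + transcript.strip())
    parts.append(
        "Help me understand the positives and negatives of hiring "
        "this person for this position, and suggest questions I "
        "could ask them based on their resume."
    )
    return "\n\n".join(parts)
```

Running this once per candidate gives you comparable, same-shaped prompts across the whole pool, which is what makes the side-by-side summaries he mentions possible.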
David Rice: For a lot of companies where they've started with AI, especially Gen AI. You know, they've basically been automating content and communications, right? A lot of folks, they're not touching the ops. They're not touching planning or internal decision workflows. Leaders aren't necessarily using AI to gain better visibility or clarify priorities or support faster decision cycles in a lot of cases.
So where should they start if they wanna apply AI beyond these sort of normal tasks? I mean we touched on HR there, but we expand that out into leadership overall. Maybe it's the CEO, maybe it's operations. What internal workflows would you say are kinda low hanging fruit?
Kenneth Corrêa: Awesome. Awesome question.
I really like this one. Remember, the name of my company is 80 20, so that's based on the Pareto rule, right? We really like the idea of prioritization, the ability to choose where you should act first, because surely there's a lot to do. You could act, or attack, on many different fronts. But once you think of prioritization, and this is a matrix that I created in the book, I say you have simple tasks,
you have complex tasks, you have unique tasks, and you have repetitive tasks. The first place you should go is tasks that are both simple and repetitive. So we're not talking about creating an AI solution to make better strategic planning. Okay, that's fine, but you can start with accounts payable and accounts receivable. You can start with analyzing customer phone calls.
You can start with job interview transcriptions. This is stuff that's happening many times every day, so if you get a 5% or 10% increase in productivity on those tasks, you're gonna collect those results the very next day. So go for the low-hanging fruit: the simple and repetitive tasks. And that's not something new.
That's not only for AI. Every time we're talking about automation, as a more general word, we use this very same matrix. But now, with the large language models, what's happened, David, is the models, the AI agents, are able to understand information provided to them. They are able to read text.
They can open CSVs, those are spreadsheet files. They can look at images. There's a company I work with that uses AI to look at dashboards, because now every company has their own dashboards with all those indicators, KPIs, and information about what's going on. But the employees never take the time to look at the dashboard.
So what good is a dashboard if you're not looking at it? And we found out that a lot of people, like you mentioned in HR, don't know technically how to analyze this data. What does this KPI mean? How do I make a different decision based on what I saw on the dashboard? Now, if you explain to Microsoft Copilot, for example, what every indicator there is, if you have a manual, a description, sometimes the dashboard itself has a description of the indicators,
now you just take a photo of the dashboard and Copilot will read it to you. So I think this is the sort of automation that really makes our lives a lot easier. I have a sales team of 45 people in another company that I own. It's a health tech back in Brazil, where we have people visiting doctors' offices.
So they're in waiting rooms, and when they get into the actual doctor's office, they have like four to five minutes to make a sale. It's a very dynamic market we work in. The United States has 50 states; Brazil has 27, so it's about half the size of the United States, and we are in 17 states. So we have people spread all around,
and it's very hard to know what's going on in a company this big. So now, analyzing data through the dashboard, AI has no problem. AI is not lazy at all. It will look into every detail in the dashboard.
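Kenneth's two-by-two, simple versus complex and repetitive versus unique, can be written down as a tiny scoring helper. This is a hedged sketch: the numeric scores and labels are invented for illustration, and the only point is that simple-and-repetitive work sorts to the top of the automation queue.

```python
def automation_priority(complexity: str, frequency: str) -> int:
    """Rank a task for automation on Kenneth's matrix:
    3 = do first (simple + repetitive) ... 0 = leave to humans for now."""
    score = 0
    if complexity == "simple":
        score += 2  # easy to specify, easy to verify
    if frequency == "repetitive":
        score += 1  # gains compound every single day
    return score

# Examples from the conversation, ranked by the matrix:
tasks = [
    ("accounts payable",        "simple",  "repetitive"),
    ("interview transcription", "simple",  "repetitive"),
    ("strategic planning",      "complex", "unique"),
]
ranked = sorted(tasks,
                key=lambda t: automation_priority(t[1], t[2]),
                reverse=True)
```

Plugging your own backlog into a list like `tasks` is a quick way to operationalize the "start small, start smart" takeaway from the show notes.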
David Rice: So when we talk about this idea of fluency, right, there's this little part of me that thinks it means knowing where tools fit into your operations, right?
And how to design a workflow around it. Most orgs, they wanna jump on a cool tool when they see it, but they maybe never build the connective tissue between teams, tools, tasks, right? So I'm curious, you know, in your opinion, what does a basic but functional AI stack look like for a company that's under a hundred people, let's say?
How do you avoid that sort of Frankenstack that tends to form after a minute?
Kenneth Corrêa: Love it. Love it. Frankenstack. Yeah. There are a lot of companies that have a Frankenstack, in the sense that they are working with maybe 10 or 20 different tools, and every tool has a different way to prompt it and a different result that comes out of it.
You asked me about companies with fewer than a hundred employees, but what I see from larger companies is they try to keep it to one or two. So if they're a Microsoft company, their stack is Microsoft and they're going for Copilot. If they're a Google company, they're going for Gemini.
If they are more tech savvy, let's say a software development company creating digital products, they'll probably go with either Anthropic, so that's Claude, or with OpenAI for GPT. And I'd recommend that as a start: try to go for one single tool and train the team, education and education, because
really, when you compare, at this very minute we're talking, there are seven tools that I consider state of the art. And that means it doesn't really matter if you're using ChatGPT or Gemini; they both have the same kind of interface you can interact with and the same power, let's say.
So I think for now, mid-2025: Copilot from Microsoft, ChatGPT from OpenAI, Gemini from Google, Grok 4 from X, Llama from Meta, DeepSeek would be the sixth, and there's one extra, Perplexity, that I really like. I think those seven AI assistants are now at the very same level. So for a company, it's gonna be a lot easier to manage risks,
a lot easier to control access, and a lot easier to train if you're focused on going with one single tool. But that's regarding the end user, every employee in the company having a copilot. When you go to the backend, when you're talking about really automated processes, implementing networks of agents and orchestrating them, then you have to go to about three or four different providers.
It is very hard to stick to one single provider, because there's so much stuff going on that you don't want to be left behind. Every time there's a new model or a new tech, you have to be able to switch between those providers. But this is all the more technical side, the CTO-side conversation.
David Rice: Gotcha. So a big question right now that a lot of leaders have, when it comes to our people, is around skills. How do we transition folks into what comes next, right? Right now it still feels like we're having to reframe efficiency to mean not fewer humans,
but better-leveraged humans, right? So people becoming orchestrators and validators of AI outputs, not necessarily task monkeys. It seems like the future role of a team leader might be focused more on quality control of agents and strategic routing of workflows, right? So what new roles or skill sets should leadership be developing now to manage AI-literate teams?
Kenneth Corrêa: Yes. The idea of being AI literate, in the sense of generative AI, has everything to do with using the tools. That's amazing. But for leaders it's very important to have, and this is hard, what we call judgment: the ability to choose wisely. Is this task supposed to be executed by a human, or is this a task that a computer would do best?
And there's no problem with that. We're not talking about any taboos here. The idea is there are tasks that computers are already doing better. So if you have to check stuff against a checklist, if you have to analyze imagery, if you have to check specific information in documents, if you have to scout for parts of text
or a piece of information, if you want to summarize: those are all things that computers, at this very moment, do best. So as a leader, if you understand that and use it as a starting point, like, this is something that I don't want my humans to be working on, right? Because on the other side, you have creativity; you have judgment in the sense of making decisions, making the call, and being responsible for that decision as well.
Also being able to see paradox, being able to understand complexity in the decision making. Those are all areas where humans are thriving a lot more than AI, and AI is behind in this space. You could try to use AI to create a plan for you; it will suggest maybe dates and a schedule, but it's
very low quality when compared to human work. So I think deciding which way to go for every task, either computers or humans, is one of the abilities these managers will need. And I have to say, David, that's what inspired me to write the book Cognitive Organizations, because I wanted to give managers a handbook so they're able to stay aligned with the reality that computers are always getting more and more powerful.
Computers are getting better by the day. Every day, every week there's a new solution out, and that's all right. We are not able to keep up with everything. Of course, it's impossible to follow. I try to keep up with what's going on, but we are already thirty minutes in here, so I don't know, maybe in the last thirty minutes something big happened. So you're not keeping up with everything. But
if you understand that this is a reality, if you understand that a 20 to 40% increase in productivity cannot be ignored, in the sense that your competitors are already using it, I think managers will make good decisions. And again, remember that we have an ongoing discussion in the world about job displacement, about people being replaced by AI.
Actually, when I speak to companies, their problem is the reverse: they are not finding talent willing to work. So what I need to do, and that's what I did for my own company, as you mentioned earlier, is keep my talent. I have 95 people in the company, and for the past three years I've grown 15% per year
with that very same 95-person team. And I'm doing that because I am empowering my own people. And again, that's the mindset I had to change when I saw that this is a reality: I need to get everybody on board. And I'll be honest with you, not everybody jumped on board. A lot of people still wanted to work like they did before, but I don't think you can work in 2025 the same way you worked in 2021, because something happened
in that period. I'm not talking about COVID, right? I'm talking about the rise of generative AI. So I think when you change your mindset to: I have talent in my company that I need to empower, and I need to leverage the best I can out of what they can do with AI tools, that's when you get the icing on the cake, you know?
David Rice: Yeah, I agree. There are a lot of people who maybe struggle to come along on this journey, but the way I've been putting it lately is: we can't be attached to our tasks. The task is not what gives us value; it's our abilities, it's our knowledge, it's our unique human traits, like you mentioned before, that are really where our value is derived from.
So maybe it's just about letting go of the tasks as the engine for those traits to come out, you know? I'm curious, what are some of the pitfalls you typically see in orgs looking to become AI fluent? Especially as they're in those curious stages, starting to experiment. How do you avoid getting, I guess, the quote-unquote rug pulled out from under you by unreliable outputs and overpromising on what the tech can do?
Kenneth Corrêa: Yeah, I think this is a bad problem with AI, because usually we say, AI is, and then you fill in the blanks: AI does this, AI does that, AI can do this, AI can do that. And the thing with that is we start to think of AI as some kind of entity, as if it's something floating around the globe that understands everybody, and that's not true, right?
We're talking about multiple different companies. We're talking about different technologies. All those models are trained with different data. They have different guardrails, because each company is deciding how they're going to block everything a model can do that it's not supposed to do. And sometimes that's even the problem.
So if we're talking about AI, we have to remember that it's just a tool. It's a tool that's helping ignite a revolution, yes, a revolution in productivity, a revolution in how we do business, but it's still just a tool. So the way to get fluent is repetition, is usage, is trying and making mistakes. Because again, remember that when AI tells you to do something, you don't need to do it.
You just need to read it, and you decide what you do with that. The more you use it, the easier it gets, again, back to the judgment call, to understand where it will fit and work fine. There's a large company in North America called CH Robinson. They operate all around the world in third-party logistics, and those guys have to handle something like 3,000 emails per hour.
Those are emails from clients that want to move packages from point A to point B. I need to move something from Beaverton, Oregon, all the way to Austin, Texas. How much does it cost? How soon can you get to the spot to pick it up? All these interactions were previously done by human operators.
So this is people taking calls, reading short messages or emails on their computers. And now AI is doing wonderful work at the triage. It's selecting: this is a simple routine call with all the specific data already organized, I can generate a quote on top of that automatically, no human interaction needed.
Okay, now this one is a little bit different. This has to be taken by refrigerated truck. This has to leave in the next six hours, which is faster than our usual turnaround. So, okay, now let's take that to a human. You start developing fluency when you see more and more use cases.
Remember, CH Robinson has been working with that for a year now, and AI is only handling 10% of those calls; 90% of the calls are still handled by humans. There's another story with Klarna, the Swedish fintech company, and its customer assistance service, right? Those guys went all in on gen AI.
When they saw the technology, they said, okay guys, this is going to save us, we're going to lay off everybody, right? That was their approach to it. But now it kicked back, because they realized that AI is not a human; it is not going to be able to deal with a hundred percent of customer service situations.
And now they've switched to a more balanced approach, and I'm not making up the numbers here, it's 80/20. So 80% of the work is handled by AI, and the 20% are the exceptions, the specific cases. Those are triaged and taken to human assistants. So I think, again, it's impossible to read a book about that.
So we were talking about fluency. There is no grammar book you can read. You have to talk to other people, right? That's how you get more fluent in a language. I hope my English is not bad today, so.
David Rice: It's ingrained.
Kenneth Corrêa: Fluent, right, when we're talking about it. So you could talk to your folks, to your fellows, to people in other companies that are already using these tools. And especially, again, very important: get your hands dirty.
I think I made my point on that. You get fluent the more you speak the language.
David Rice: Excellent. Well, Kenneth, thank you for coming on today. I really appreciate you giving us some of your insights.
Kenneth Corrêa: That's awesome, man. I'm glad you liked it. I liked it as well. I loved your very complex questions.
You're trying to get me, and I have to find a way to bring out the side from which I look at things, because sometimes it can feel like I'm all excited and you should go all in and there are no limits, no problems with that. But there are downsides, there are points we have to take care of.
But we will only learn when we jump on board. So that's what I'm trying to help people understand: that 20% productivity boost is impossible to ignore.
David Rice: Well again, thank you and looking forward to seeing how your conversations evolve over the next couple years. I think it's gonna be interesting.
Kenneth Corrêa: That's awesome.
David Rice: Listeners, if you wanna check out Kenneth's book, it's called Cognitive Organizations: Leveraging the Full Power of Generative AI and Intelligent Agents. You can pick it up on Amazon, be sure to check that out. And until next time, get your hands dirty.