So yeah—your dashboards look great. Your team’s shipping faster, summarizing more, “getting leverage” with AI… and all the while you might be quietly trading away the one asset you can’t buy back on a subscription plan: human judgment.
In this episode, I’m joined by Dr. Vivienne Ming—neuroscientist, entrepreneur, and an “AI realist” who has zero patience for utopian hype or Skynet fan fiction. Vivienne lays out a clean fork in the road: cognitive automation (AI does the thinking for you) vs. cognitive augmentation (AI makes you think better—often by making the work harder). If your AI strategy is mostly about convenience, this is your gentle-ish warning that convenience is not a strategy. It’s a sedative.
What You’ll Learn
- Why “AI makes life easier” is one of the most dangerous pitches in modern business
- How low-quality data can derail AI systems—and why humans have a filtering “superpower” LLMs still lack
- The difference between AI that replaces thinking and AI that builds thinking
- Why the best AI tutors don’t give answers (and what that implies for workplace AI)
- How “hybrid collective intelligence” can outperform either humans or AI alone—but only with enough human capital
- Why efficiency metrics miss the real drivers of productivity inside organizations
Key Takeaways
- If AI is doing the “boring parts,” don’t assume the humans get the “creative parts.” Vivienne’s point is blunt: the more boring work AI does, the more boring work often expands (hello, infinite email loops where bots write spam to other bots). The “efficiency boost” can become productivity cocaine—great short-term, ugly long-term.
- Automation is when you get the output. Augmentation is when you get better. If you’re better while the tool is on, but worse when it’s off, that’s not progress—it’s dependence. Vivienne uses medical examples (AI-assisted diagnostics) to underline the risk: skill atrophy is real, and it’s not a “training issue.” It’s how brains work.
- Build (or configure) AI to be the “loyal opposition,” not the yes-man. The best example here is her “nemesis prompt”: she writes the chapter herself, then has AI tear it apart—find what’s wrong, what’s weak, what’s easily disproven, and how to improve it. That’s augmentation: using AI to force depth, not avoid it.
- Your company is a 500,000-person neural network—and you’re measuring the wrong outputs. Vivienne describes research where a relatively small slice of employees drove a huge share of productivity that wasn’t captured by individual metrics. The people who mattered most weren’t always the “stars.” They were the ones who helped—sometimes “inefficiently”—and made everyone around them better. Most productivity systems don’t even have a box to check for that.
- Entry-level work isn’t just cheap labor; it’s the knowing-to-understanding pipeline. Her line is sticky: new grads often “know everything and understand nothing.” Early roles are where people learn what to do with knowledge—judgment, ethics, tradeoffs, context. If AI strips out the “knowing” work completely, don’t be shocked when you end up with a workforce that can produce outputs but can’t explain, defend, or improve decisions.
- Leaders decide whether this becomes a skill-building revolution or a deprofessionalized “Jiffy Lube economy.” Vivienne’s nightmare scenario isn’t mass unemployment—it’s jobs getting hollowed out into low-skill wrapper roles around AI, because it’s cheaper and “good enough.” Great for CFO optics. Terrible for capability, trust, and society.
Chapters
- 00:00 – Productivity up, skills down
- 01:48 – AI “brain rot” and bad data
- 04:57 – What AI is (and isn’t)
- 07:00 – The danger of “easy” work
- 10:30 – Using AI to make work harder
- 12:38 – Why this isn’t the Industrial Revolution
- 17:03 – Humans + AI: when it actually works
- 22:04 – The efficiency myth at work
- 27:29 – The people who make everyone better
- 30:22 – What we lose without entry-level work
- 39:51 – Why giving answers kills learning
- 43:25 – Deprofessionalization and the “Jiffy Lube” future
- 46:18 – Automation vs. augmentation
- 47:50 – GPS, convenience, and cognitive loss
- 53:57 – The leadership choice that matters
Meet Our Guest

Dr. Vivienne Ming is a visionary theoretical neuroscientist, entrepreneur, author, and self-described “professional mad scientist” whose work spans human potential, artificial intelligence, and societal transformation. She is the founder and executive chair of Socos Labs and co-founder/chief scientist of ventures like The Human Trust and Dionysus Health, applying cutting-edge machine learning and neuroscience to education, health, inclusion, and human development. Known for her engaging keynotes and thought leadership on technology’s role in amplifying human capability, Vivienne has been featured by BBC’s 100 Women, Financial Times, The Atlantic and Quartz, and continues to inspire global audiences to rethink how people and intelligent systems can co-create a more equitable future.
Related Links:
- Join the People Managing People Community
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Vivienne on LinkedIn
David Rice: So your team's using AI to write reports, analyze data, and make decisions faster. Productivity is up, right? Your quarterly metrics, they probably look pretty great. But guess what? You're quietly making your people dumber. I'm not being dramatic about that. A recent study of medical professionals using AI-assisted diagnostic tools found they got so dependent on the technology that when the AI was turned off, they performed dramatically worse than before they ever started using it. That's not a training problem. That's brain atrophy.
Today's guest on the podcast is Dr. Vivienne Ming. She's a theoretical neuroscientist and an entrepreneur who 15 years ago predicted that GPS navigation would causally increase dementia rates. Today, there's empirical evidence proving that she was right. But that's not what she's here to tell you.
What she's here to tell us is that we have a choice right now between two completely different paths—cognitive automation, where AI does the work for you and you get quick wins but long-term loss, or cognitive augmentation, where AI makes the work harder in ways that make your people genuinely better.
We're gonna cover why "AI makes life easier" is the most dangerous pitch in business right now, the difference between automation that erodes skills and augmentation that builds them, how to redesign AI tools so people are better when they finish than when they started, and why leaders who choose the lazy version will see short-term gains and strategic failure.
I'm David Rice. This is People Managing People. And if you've been celebrating productivity wins without asking what you're losing in the process, this conversation is a bit of a wake up call. Let's go.
All right, Vivienne, welcome to the show.
Vivienne Ming: It's great to be here.
David Rice: Before we hopped on, I was combing through the news as I tend to sometimes, and I saw an article. Nature published a study last week saying that when fed information, particularly from social media, AI struggles to make sense of anything. I just wanted to get your reaction to that, 'cause the actual words used were, too much social media gives AI chatbots "brain rot."
Vivienne Ming: Yep.
David Rice: So this content isn't good for anyone?
Vivienne Ming: Brain rot is the term of art of the moment. Brain rot and work slop. I can give you a very nerdy answer about what's happening and what we ought to do about it. But for a general audience, we should be concerned, right? It actually takes very little low-quality information to send an LLM off in a wild direction.
And even if you go back with what's called human-in-the-loop fine-tuning, reinforcement learning, it has a really hard time ever getting back to its original potential. So a lot of people talk about the importance of good-quality data, and this is one of those examples where even a fairly small amount of garbage, unintentional or in some cases intentional, can do it. Intentionally inject the garbage into the system and it's actually a way to hack the brain.
So this isn't even brain rot, this is like brain worm. These are things to be concerned about. Here is my somewhat nerdy response, which I've written a paper I'm gonna submit about. Listen, human beings are imperfect. We are also full of biases. And it also turns out you expose us to the internet and our brains rot too.
But having said those genuine and honest things, humans as a rule do something that's really important. If we get a piece of information that's crazy, off-the-reservation nonsense, we ignore it. Obviously there's caveats to that. But I mean, in the case of the brain rot research reported in that study, it was often gibberish.
No human being would come across a page of gibberish and think, oh my God, I gotta memorize this. So we have this superpower that current AI really doesn't have. We have a model of the world. The AI world talks a lot about world models, but nah, not really. We look at the world, we apply that model, and things that don't fit, for good and bad,
we say, you know what? That's less important and I'm not going to encode that as deeply. And we need to think about how to get AIs to do that as well. The idea that we have to hand-tune these perfect data sets, and that you could never show an AI one bad X-ray or one bit of garbage off the internet or it will get brain rot, is a real weakness in how we're training it today.
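If you want to see the shape of that idea in code, here's a minimal sketch of a data filter in the spirit of the world-model check Vivienne describes: down-weight or drop inputs that don't look like anything a human would bother to encode. The heuristics and thresholds here are invented for illustration; real pre-training pipelines are far more sophisticated.

```python
import math
import re
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_gibberish(text: str,
                         min_word_ratio: float = 0.4,
                         entropy_ceiling: float = 5.0) -> bool:
    """Crude stand-in for the human 'this doesn't fit my world model' filter.

    Flags text whose tokens rarely look like words, or whose character
    distribution is close to uniform noise. Thresholds are illustrative only.
    """
    tokens = text.split()
    if not tokens:
        return True
    wordlike = sum(bool(re.fullmatch(r"[A-Za-z'-]{2,}", t)) for t in tokens)
    if wordlike / len(tokens) < min_word_ratio:
        return True
    return char_entropy(text) > entropy_ceiling

# Down-weight instead of memorize: keep a document only if it passes the filter.
corpus = ["The meeting moved to Tuesday at noon.", "xq3 zv#9 8kk2 lorpt9 q!x"]
clean = [doc for doc in corpus if not looks_like_gibberish(doc)]
print(clean)  # -> ['The meeting moved to Tuesday at noon.']
```

A real system would learn this filter rather than hard-code it, which is her point: the model of the world has to come from somewhere.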
David Rice: You hear people say that, you know, we've created a brain, it's as sophisticated as ours. I've heard this before, and I'm like, well, I don't know about that. I've often, you know, objected to the term brain. But then when I saw this, I thought, well, maybe it is a brain. Social media content is having the same effect on AI as on us, so.
Vivienne Ming: Yeah. You know, I like to take a measured approach. I am an AI realist. The world is full of bullshit AI utopians that think it's gonna fix everything and full of dystopians that think we're building Skynet. I wish we had a Skynet problem, only in the sense that I wish we had any idea how to build something like that.
We do not. We have no idea. But what we built is genuinely intelligent. It is artificial intelligence. It's just, it has one particular form of intelligence, statistical learning. It's the thing we make massive use of ourselves, and it shares that with us. And in that sense, there are a lot of similarities between what particularly large language models and human beings do.
But we have all these other forms of intelligence, low-level computational ideas of intelligence, high-level, broad strokes. This famous guy, Henry Molaison, had surgery to prevent epilepsy early in his life, and then he never formed a new long-term memory for the rest of his life. People like me just know him as the famous subject of all this research.
He knew how to play a guitar but couldn't remember ever having learned how. In that sense, there's different kinds of intelligence, right? He couldn't form new memories, but he could still learn how to play guitar. So there are different strengths that machines bring and that humans bring. If we would stop trying to build artificial humans and just celebrate what artificial intelligence does well, then we'd be in a much better space right now.
David Rice: I agree with you and it is funny 'cause it kinda leads into where I was thinking this conversation was going to go today, which is, you know, there's this sort of notion that AI is gonna make everything easier. It's gonna be more convenient, more efficient, productive, profitable.
I think that's the one that a lot of executives seem to be entertaining themselves with these days. But when we talked before this, you challenged that narrative to some extent, kinda asking, well, what if AI doesn't make life easier, but makes it harder, in ways that make us better? So I'm curious, in your words, what does that look like?
Vivienne Ming: I actually just shared some completely AI unrelated research this weekend in my newsletter, and one of them in particular was a finding that people are more likely to invest, let's say, a venture capitalist, investing in an entrepreneur. Controlling for all sorts of things. You know, race and gender and age and so forth, the more likely to invest in people whose faces are more similar.
So it's a cool experiment, in some sense one you could only do with AI: you could literally measure the numerical similarity of faces and then say, wow, you are 3% more likely to get an investment if you pitch a VC who looks like you, and yet you are actually 7% less likely to produce a return. The reason I bring that up is because, like I said earlier, we're full of our own biases.
How does that help? We have this bias to invest in people that look like us, and yet it produces worse returns. We wanna think we're perfect and rational, and we're not. And it turns out one of the ways in which we are wildly irrational, and this is more true of some people than others, but we should be honest with ourselves,
it's true of most people, is that we wanna take it easy. In psychology, we talk about shallow and deep processing. Most of our time, even the most brilliant people, we're really kind of going through life a little shallowly, and that's okay. It really genuinely is. But you gotta go deep every now and then.
And the funny thing is, we've built the perfect tool to spend the entire rest of your life as shallow as you can imagine. And then you layer on top of that that business leaders' decision making can be imperfect. It can be, let's hire the guy that looks like me rather than the one who's the best for the job.
Another paper I shared this week revealed that the Peter Principle, shock of all shocks, is real, and that in sales teams you get promoted because of the quality of your sales, not because of your potential as a manager, and that has real economic consequences inside companies. So we do all of these things while these same human beings are making decisions in promotions, in investing, and in how to deploy AI.
We're really buying into this pitch that the value of AI is to make our life easy. It'll do all the boring work; you get to do the super cool creative work. In my research, it turns out the more boring work is done with AI, the more boring work exists for everybody. Take an obvious example, as I have seen many companies tout internally to their teams: have AI read all of your emails, have AI write all of your emails, save all of that wasted communication time.
It turns out very clearly, if AI is producing a whole bunch of garbage spammy emails, then you get more emails, right? Not fewer that you have to deal with. So this idea that the purpose of AI is these quick, easy efficiency boosts, you know, that it's kind of like cocaine for productivity, is, in my genuine opinion, a complete dead end.
Not least because it demonstrably erodes human capital as it substitutes for it. Again, there's more research I could cite there. So yeah, this idea that AI does all the boring work and somehow magically we become super creative is simply not true. If you want better human beings and you're excited about AI, like I have been for 30 years,
you need to build AI that explicitly and unambiguously supports creativity, and it turns out the best AI for that doesn't make your life easier. It makes it harder. It says, that's not a good idea. It says, here are three things that say that you're wrong. It's the loyal opposition. In my upcoming book, I write about this and I call it the Nemesis Prompt.
I used it for writing the book. I'd write a chapter of the book, and, boy, do I wish I could just outsource it to somebody else, but it's a nerdy technical book with my own horrible sense of humor deeply embedded in it. I could not get a ghostwriter or an AI to do it. So instead I go through the brutally hard, for me, process of writing a chapter.
And then I would say: Gemini, you are my nemesis, my worst lifelong enemy. Here is the new chapter of my book. Find everything I've said that's wrong, every bad idea, every easily disprovable fact I have shared. Explain it to me in detail, and if you could, just for once, tell me how I could make it better.
I didn't use AI to make it easier to write my book. I used it to make it harder, as you said, in the ways that made me better.
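For anyone who wants to try a nemesis prompt on their own writing, here's a minimal sketch. Vivienne uses Gemini; this version goes through the OpenAI Python SDK for concreteness, the model name is just a placeholder, and the prompt wording is a paraphrase of her description rather than her exact text.

```python
# A minimal "nemesis prompt" sketch. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; the prompt is a paraphrase, not a quote.
from openai import OpenAI

client = OpenAI()

NEMESIS_SYSTEM = (
    "You are my nemesis, my worst lifelong enemy. Do not flatter me and do "
    "not summarize. Find everything I have said that is wrong, every bad "
    "idea, every easily disprovable fact, and explain it in detail. Then, "
    "just for once, tell me how I could make it better."
)

def critique(draft: str, model: str = "gpt-4o") -> str:
    """Ask the model to attack a draft rather than polish it."""
    response = client.chat.completions.create(
        model=model,  # placeholder; swap in whatever model you actually use
        messages=[
            {"role": "system", "content": NEMESIS_SYSTEM},
            {"role": "user", "content": f"Here is a new chapter of my book:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

# Usage: write the chapter yourself first, then invite the attack.
# print(critique(open("chapter.md").read()))
```

The point of the design is the direction of effort: the tool adds friction and depth after you've done the hard part, instead of removing the hard part.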
David Rice: This shows up all over the culture around AI, right? We see it as this shortcut to productivity or a convenience tool that will make you more efficient, but we're not hearing enough of it framed as a catalyst for growth or, you know, a way of evolving how we train leaders.
Or even, like you said, challenging our ideas and asking better questions, like with the people who were promoted for the quality of their sales, right? We don't really frame it like that. We tend to frame it like we framed every other piece of technology for the last 50 years, which is, it's easy and it's gonna make you more comfortable, which I don't think has done us a lot of good in other areas of our lives either, so.
Vivienne Ming: When I sort of partially transitioned, I still am fundamentally a scientist, but when I stepped away from academia and started my first companies, that was also when people started asking me to give keynotes or be on a panel about how the future of work is gonna change.
And having started AI-in-education companies and AI in the workforce, as well as AI in medicine, you know, in some weird sense, who better? So I get on this stage and the person sharing the stage with me just says, it's just like the Industrial Revolution, we don't have to worry about any of this, it'll create more jobs than it destroys, it'll be this amazing future. As though saying "it's just like the Industrial Revolution" is somehow a mathematical proof that is irrefutable.
And I remember one time being on stage with the biggest, prickliest mansplainer I've ever shared a stage with, a guy who was just pulling made-up facts out of his various biological orifices throughout the entire event, and I was so frustrated. I got off and I just wrote: this is not the Industrial Revolution.
I ended up writing a hundred thousand words. It became a chapter in that yellow book behind me. And what it really was, was: one, do we even really understand what the Industrial Revolution was and what it did? That's a nerdy history-of-economics question. But also, is this moment, is AI, just a factory line?
Is it just electricity or the internal combustion engine? You know, we point at these moments in history where things changed, and it's worth understanding them, absolutely. But AI doesn't lift from the bottom. In fact, to AI, there is no bottom. You can ask it to write the stupidest tweet in the world for you, or generate a throwaway image, or you can ask it not just to diagnose cancer
but to come up with a whole new idea of how cancer might be caused and how we could change it forever. AI doesn't care, doesn't know the difference between those two things. They're both a run against, you know, somebody's household energy budget for the day, and the result is you've either enriched the world with
some ephemeral joke that's funny to a hundred people for 35 milliseconds, or you have changed the future of humanity. That is not the factory line. It's not the Agricultural Revolution. It's none of these events that have happened before. That doesn't mean it's gonna be bad, but it doesn't mean it's gonna be good.
And I think until we can just say humans are messy, and AI is a little bit like us, it's messy too, just messy in a little bit different ways. You plug those two together right, something amazing happens. You plug 'em together wrong, and gastroenterologists get worse at diagnosing cancer, and students writing essays learn less and have degraded brain activity over time.
Like it's not good. It's not bad. In the end, it's a big, complicated, dynamic system and it's how we use it that matters. Well, I mean way more so than some Second Amendment argument we might have. Guns don't kill people. AI doesn't make the world worse. YouTube algorithms don't make the world worse, but it's the interaction between the choices we make and these algorithms that's gonna produce that future.
If we don't lean into going deep now, I'm worried.
David Rice: The thing I always come back to is when everybody talks about these previous historical events, I'm like, yeah, but they didn't unfold at this pace. The brain is not necessarily hardwired to deal with the level of change that we're talking about, right?
Like the changes that are gonna come with this, you can already see it kind of in people's ability to discern like what's real and what's, you know, when they look at content on the internet, right? Like they can't tell sometimes that it's not AI or that it is AI, or they're saying that it is and it's not, and they can't tell what's real anymore.
And that never happened before, where we had to challenge our reality like this, I don't think. So it is a much different task for us as human beings. And then the all-important thing is how we use it. If we're having a hard time understanding it and we're using it anyway, there's a lot of space there for us to use it poorly.
Vivienne Ming: I'm actually working on a pilot study right now, and in this I wanna understand hybrid collective intelligence: the good, the bad, and the ugly of how humans and machines interact and what they could do. And it's early days. So if any of you wanna nerd out and read this paper, you're just gonna have to sit on your hands for a little bit, until I have a chance to really run this whole study and be confident in the results.
But if you'll bear with me that this is deeply preliminary: I thought there's a great opportunity to measure people's deep impact on the world by looking at how well people do making predictions. And there's something like this already. There's this website called Polymarket. You can go there and literally place monetary bets on the outcomes of the world, all sorts of things, including, for example, a little bit of press they got recently on the Mamdani election,
where the market showed a weird blip, which seemed erroneous. So what if I took a bunch of sort of classic university experiment participants, they come in teams of three to five, I give 'em an hour, and in that hour let's give them a whole bunch of Polymarket predictions to make. They will collectively make a choice.
They're not experts. Admittedly, they don't have money in the game. But eventually those predictions will come true, and we'll know the result. We'll know how Polymarket did, I'll know how my participants did. I can run the participants, and then I can run Gemini and GPT and see how they do. Then I can see how humans and machines together do when they interact.
And then finally, I have a very special case, to be discussed. The interesting thing is, these naive humans don't do great. They certainly don't do as well as the pros on Polymarket. The AIs do pretty well, not as well as Polymarket itself does, but they have good, solid predictions. Largely, when the obvious idea was right,
then these AIs are right. When it kind of goes in weird directions, they do less well. When you put humans and machines together, it turns out things get really complicated and messy. And it isn't the AI that predicts how well they'll do. You could be using GPT-5, you could be using an off-the-shelf, you know, open-source Llama model.
It's the human capital that predicts how well they'll do, because it turns out the human capital predicts how they engage with the AI. When the human capital is low, they essentially just do what the AI tells them to do. So the results of these hybrid teams are basically the same as the results of the AI alone.
When you have high human capital, they don't simply take the AI's advice. A dynamic builds up: they ideate and the AI refines, then the humans ideate and the AI refines, and it goes through a few cycles of this. And, very provocatively, to be confirmed: these naive human beings, in an hour, in a hybrid collective intelligence, are doing comparably to Polymarket.
What's really exciting is they do best when the result is an outlier, when the result is the most unpredictable. Hybrid intelligence seems to do the best, but it's about how we're engaging. And I know that's nerdy and a little bit technical for some of the audience, but what I'm getting at is a reflection of what you're saying.
It's only when we are deeply engaging with the system, not outsourcing to it as our collective brain but incorporating it in our innovation process, that the real benefits are there. And just believe me, if it doesn't resonate with you, this is a huge finding. The smartest thing on the planet today is hybrid intelligence.
It's not some super AI somewhere. It's not some genius you've read about. It's not Terence Tao, as smart as he is. It is even a modestly above-average group of people empowered with an AI. Almost nothing can compare. It's really amazing.
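For the curious, here's a sketch of how those three conditions, humans alone, AI alone, and hybrid teams, might be scored against each other once the predictions resolve, using the Brier score (mean squared error of probabilistic forecasts; lower is better). Every number below is a placeholder, not a result from her pilot.

```python
# Compare forecasting conditions with the Brier score. All forecasts and
# outcomes here are illustrative placeholders, not data from the study.
from statistics import mean

def brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return mean((p - y) ** 2 for p, y in zip(forecasts, outcomes, strict=True))

outcomes = [1, 0, 0, 1, 1]  # how five market questions actually resolved

conditions = {
    "humans_alone": [0.6, 0.5, 0.4, 0.5, 0.6],
    "ai_alone":     [0.7, 0.3, 0.4, 0.6, 0.7],
    "hybrid_team":  [0.8, 0.2, 0.3, 0.7, 0.8],  # humans ideate, AI refines, repeat
}

for name, forecasts in conditions.items():
    print(f"{name:>12}: Brier = {brier(forecasts, outcomes):.3f}")
```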
David Rice: This is fascinating. It's got my brain going on ideas,
'cause I read about Polymarket just this last weekend, actually. It was the first time I'd ever heard of it, and I was like, oh, this is interesting. And there's more of them popping up too. I'm starting to see all these things where you can just bet on anything, and I was like, this is so weird that this is becoming the thing.
But as we kind of move along, I wanted to talk about, you know, the efficiency myth, right? It's a big one. We see examples of it all over the workplace, and it's sort of this lie we're all living, I think, in a lot of ways: the, you know, efficiency within your workflows and everything. Do you think this narrative around efficiency is sort of derailing AI's potential to impact the workplace?
Vivienne Ming: The efficiency assumption assumes a bunch of things. One is that we have a good model of efficiency, that lines of code written by your engineers or deal flow in your sales team is really capturing, holistically, what makes your company great. These are kind of esoteric ideas, but if you actually look at most of the economic growth in the United States over, say, the last 30 years,
a number of people, and I believe I'm not misattributing, but even recent Nobel Prize winner Daron Acemoglu and others, have talked about this sort of intangible capital, a term that was briefly popular, sort of talking about super firms: most of this growth isn't a broad lift in the economy, it's a subset of companies that are just transformative.
It's hard to put your finger on exactly what it is they're bringing into the world, but they are so much more productive. If you look at their profit margins, particularly at these tech companies, though the lines are blurring there, their per-employee numbers are huge, and you couldn't look at that, to my mind, and say it's because those people are faster at writing their emails,
or they write faster code, or they close deals quicker. In fact, internal studies at Google showed really great teams were actually kind of inefficient, in the sense that they spent little time interacting with one another. They spent most of their time interacting with other people in the organization, and then they'd come back together briefly.
They had a lot of trust that each other were doing things right, they didn't require a lot of top-down observation, and they just collaborated really well. My own research looked at what I call higher-order productivity: how does one person make the other people around them better? Almost all of our measures of efficiency and productivity are about personal metrics or team-level metrics, which maybe implicitly capture a little of this, but not really.
So I said, let's be crazy mad scientists about this. What if a giant company of 500,000 people was a neural network with 500,000 individual artificial neurons in it? Well, if that were true, then we would have the math to do what's called, in neural networks, credit assignment: how does changing one person affect the global output of the institution? There's a lot more to doing this than that, but that's the basic idea.
There's a lot more to that doing this than that. But that's the basic idea. So I applied this to a real company and we found that about 11% of employees accounted for 80% of productivity that wasn't tracked by individual metrics. So a relatively small but still meaningful population made everyone around them better.
And the cool thing is, then we got to look, because 70,000 of those employees volunteered to share day-to-day information, and we found that for the people that had these big impacts, what differentiated them from the other employees is they just helped. If anything, they were inefficient. There were parts of the day where they saw someone or something needed help that was not a part of their job, and they just helped, and they made everyone around them better.
What metric of efficiency captures that? And I will say this was a huge untracked driver of total factor productivity inside these companies. And we are tracking none of it. We're not aware of it. You needed these very fancy mathematical models to pick up on it, and yet that's what's there. If you replace your call center workers with chatbots, you're right, you just cut out the cost of a human being, and that is a meaningful bottom-line issue.
But if that generates more issues down the line with handling caseloads of customers, if, as people have pointed out, fully automating code generation just means senior-level coders have to spend more time correcting errors,
you haven't made anything better. What we find is that when those high-human-capital individuals use AI to engage more deeply with their code, they're actually slower in the moment, but the longer-term positive impact of their code is higher and better. There are longer-term speedups that happen, but really it's better explained that the quality goes up enough to much more than account for these sorts of decreases in traditional metric efficiency.
Again, I'm being kind of nerdy here, but what I'm getting at is that our intuitions about what efficiency is and how you measure it are way off, in a world where we should be leaning into people making productive errors and exploring alternate ways of doing things, augmented by AI, rather than just letting AI do the exact thing we already know how to do at lower cost on a regular basis.
'Cause we are totally leaning into the latter, and it's getting us nowhere.
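Her neural-network framing can be made concrete with a toy. The sketch below models employees as nodes whose output is boosted by help from others, then ablates each person to estimate their effect on total output, a crude stand-in for credit assignment. The names, weights, and output rule are all invented; her actual models are far richer.

```python
# Toy credit assignment over an organization: ablate each "neuron" (employee)
# and measure the drop in total output. All values are invented for illustration.

solo_output = {"ana": 0.6, "ben": 1.5, "cruz": 0.9, "dee": 0.8}

# help_graph[a][b] = how much a boosts b's effectiveness.
help_graph = {
    "ana":  {"ben": 0.4, "cruz": 0.3},  # weak solo metrics, helps a lot
    "ben":  {},                          # star individual contributor
    "cruz": {"ben": 0.1},
    "dee":  {"ana": 0.2},
}

def total_output(active: set[str]) -> float:
    """Each active person's output is their solo output times (1 + help received)."""
    total = 0.0
    for person in active:
        received = sum(
            weight
            for helper, boosts in help_graph.items() if helper in active
            for target, weight in boosts.items() if target == person
        )
        total += solo_output[person] * (1 + received)
    return total

everyone = set(solo_output)
baseline = total_output(everyone)
for person in sorted(everyone):
    impact = baseline - total_output(everyone - {person})
    print(f"{person:>4}: solo={solo_output[person]:.1f}  network impact={impact:.2f}")
```

Run it and "ana", the helper with the worst solo metric, shows the second-largest network impact, which is exactly the kind of contribution individual metrics never see.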
David Rice: I love that you brought up the, you know, when certain people fell out, it had such a drastic impact. I mean, we've all been on teams where like a certain person left and maybe from the outside you wouldn't think that would have such a big impact, but it's like what they call in sports, like a glue guy in a locker room, you know, like keeps everybody together and it's like that one person leaves and the team just does not function right for a while.
We've all seen that. But how do you quantify that sort of thing in data, you know what I mean?
Vivienne Ming: Yeah. And you know, this is something that, yes, in sports people have taken really seriously. So you can look at things like value over replacement or these other metrics. Sure, now you have these four crazy guys in basketball that are obviously better than everybody else, but when you run those metrics, versus who scores the most goals or the most baskets, really interesting and unexpected people pop up, and the people you think are amazing fall out of the leaderboards, because you are running a community.
You know, you don't have to be totally touchy-feely about it. You really just have to think, right? There's all these complex parts that have to fit together. You take the complexity of your supply chain seriously. Why aren't you taking the complexity of your human capital seriously, and appreciating that someone that may be great in one team isn't gonna be great in another, that there are right fits for people?
That every now and then you have, again, a guy who didn't score any points but made things work better. You know, you have your Djokovics coming in, and suddenly a team that is solid but otherwise unexceptional are perennial world-championship contenders, because this one guy makes everyone else better.
And what we saw in our data is, yeah, some of the people that make everyone else better are obvious. They're across-the-board stars. But there isn't a strong correlation with individual metrics, so you have people that actually have rather bad-looking individual metrics, but if you could understand how they made everyone else better, you'd be insane to let them go.
And, you know, modeled that way, human potential gets really exciting. Maybe we could all geek out like we're in scenes from Moneyball, you know, like we could be Brad Pitt doing all these baseball deals behind the scenes, 'cause we can see how all these guys are gonna fit together and make the team better.
But that's only for the sports stats nerds. I think what it really gets down to is, when you're thinking about AI, you have to think about how it makes people better, not how it substitutes for them.
David Rice: You had given the example of, you know, the 500,000-person neural network. When I think about it like that, you know, there's a lot of points of view that you just can't get any other way than by having all those different people in the network.
And there's a lot of concern right now around loss of entry level roles. I think everybody that listens to this show regularly will know sort of the cost of cutting entry level experience as a learning pathway and damage to sort of the succession planning. But I think it also gets into that like fresh perspective, those outside eyes.
I'm wondering if there's a bigger cognitive cost, like a bigger price tag, around how we come to understand our place in organizations and society. In a world where AI knows everything and understands nothing, as I've heard you say before, what do you think we're losing if we eliminate our first experience in the workplace, our entry-level roles?
Vivienne Ming: That phrase you just used, that I shared with you before, let me give it a little context. My graduate students, at UC Berkeley and elsewhere, are some of the smartest people I've ever worked with. Even though they're young, they are brilliant, and I mean, on the subject that they are studying, their dissertation focus,
they know as much about it as anyone else on the whole planet, and, to be clear, more than me. It's at best one hour a week of my life. For them, for six to seven years, it is 24/7. They are living and breathing this, and they're brilliant. So why am I there? They could learn all this. I mean, believe it or not, libraries still exist.
They could go to the library and learn all of the facts. All these papers are published. They could go read all the original research and understand it. They know the math. They know the brains. They know the AI. Why am I in the room? They know everything and they understand nothing. And my job, not only in the moment, is providing understanding.
My job isn't teaching them more facts. My job is teaching them understanding. How do you be a scientist? How do you explore the brain? Which, long callback, is way more complicated than GPT-5, way more complicated than the most elaborate system that exists today. And that's not a vain brag. Brains are just phenomenally complicated.
They have all of that at their fingertips. What we're doing as scientists is exploring the unknown. The whole job is, having learned all of those facts, to figure out: now what do we do? Which isn't even, what's the next fact? It's, what's the next question? What are we even gonna ask about the world? Well, it turns out working with an LLM, for me, is a lot like working with my grad students, except it knows everything about everything, right?
I can ask it questions about the law. I can ask it questions about brains, about engineering, mathematics, what have you, in Sanskrit, God forbid. And it'll give me answers. Will it occasionally hallucinate an answer? Yeah, sure. But guess what? So do human beings. But while it knows everything,
it understands nothing. It knows nothing about what it doesn't know, by the very nature of how we build them. And so my job, the job of those human beings in the experiment with Polymarket, is to explore the unknown, to push in that direction. That's what makes humans great. So, to finally show why I am a terrible radio interview and actually get to your question: if you're early in your career, when we look at someone who's just come out of law school or medical school, or maybe not even that, someone who, you know, has an undergrad in communications and is doing that first marketing job or something,
It's not what they know that is valuable. We're kind of renting that knowledge from them so that we senior people don't have to do it ourselves. Go do case prep, do contract review. Go put in your time as a resident working in the emergency room. 'cause I can't cover all of these patients at the same time.
And what I'm going to do for you is teach you, not the law, but what to do about the law. And I had that experience early. I gave a talk at a private event in New York for a bunch of chief legal officers and heads of internal counsel at these big companies. I thought that'd be a really cynical crowd.
But no, they really cared about this stuff. And one of them came up afterwards and said: listen, I block out the first two weeks after hiring, 'cause, right, everyone comes outta law school at the same time. This is a famous place, they only hire elite performers, so they're hiring from these top schools, and they come in. Actually, they block out the final two weeks of the first month, because that gives all these new hires time to come in and read every contract
that the company's ever been involved in, because they're crazy, they're all a bunch of overachievers coming out of these elite schools. They read everything. Every single one of them then books an hour of their boss's time, and they come in and say: this is amazing. I read through all the contracts and, look, there's a mistake here, and we can do this, and we can take advantage of this competitor in this way, and we can do all this stuff.
I give them about half an hour to work through it all, and then I say: that's great. I'm glad you put in the time. But we're not gonna do that, because it's wrong. For one thing, if we did that to them, they'd do it right back to us. We already knew all those flaws were there. It isn't desirable that they're there, but we're well aware.
It's what you do about it. That's your job now. Or at least the job you're going to want to get, because you don't wanna be doing 80 hours of case review every week for the rest of your life. Someday you're gonna be wanting to make the decisions and to make the decisions you need to know both about case law and what to do about it.
You need to know things and you have to understand things. And if we take away the knowing-things parts of people's careers, how are they ever gonna understand? How are they ever going to apply it? People aren't gonna magically come through. I will admit there's one context in which maybe this holds, which is the YouTube-slash-TikTok creator community, where a kid comes in and they just start creating, and a millionth of one percent of them connects and suddenly they're an influencer, or, to date this process a little bit, they're Andy Samberg and the Lonely Island.
They have that breakthrough and they hit the zeitgeist. But wow, is that the future of careers? A million people throw themselves into this massive, undifferentiated network where they have to figure everything out for themselves, and a tiny anointed few get selected to move on, rather than us figuring out how to deliver this into everyone's professional life.
I don't think we want the creator-community vision of how your career gets built, which is largely a lottery ticket, because there isn't even a lot of evidence that it's the quality of the work that predicts your success. It's a lot of chance: hard work, and chance. That shouldn't be the way it works, because it's simply societally and economically bad.
So we need to figure that out. But here's the great thing, and I know I'm already running long. Let me put it this way. It turns out the smartest decision gets made not when the AI analyzes the spreadsheet and then you make the decision. It's when you and the AI analyze the spreadsheet together, and you say, no, let's try this.
Let's do this. What if we did that? Okay, now I got it. Here's what I think is happening. And it turns out that is more efficient in a traditional sense than you just spreadsheeting by yourself. 'cause boy, it can write routines, it can do tests that you would've had to do, and that truly can speed things up.
But it works because you're part of it. You're iterating, an active part of that analysis process. You are not outsourcing it. Not as what I'm gonna call a centaur, where some jobs are AI jobs, some jobs are human jobs, and work becomes centaur work: the AI is the horse part and it does the horsey stuff. No, we're cyborgs.
Everything is a combined job. It speeds it up a little bit, but the quality of work is transformed and it gives everyone a chance to learn.
David Rice: There's a lot to unpack there, but I always come back to: it has no context for the world. Like you said before, it has no context for your human experience. And practical judgment is something that you form over time, through just little experiences that informed how you started to view your role or your workplace or your boss or whatever it is, right? So I think, yeah, my big concern is growth loss, sort of losing the process that it takes for a human being to grow in that way. And I don't know if that's overblown.
Vivienne Ming: Here's a cool-seeming paradox. Cool for a nerdy scientist, not cool for the future of humanity.
One of the areas in which AI research has gone on with the most depth for the longest is education. People have been building AI tutors for a long time, and if there is one golden rule to AI tutors, a rule which has been replicated with bleeding-edge LLMs, it is that if the tutor ever gives students the answer, they never learn anything.
And recent work at MIT, even some interesting work inside Anthropic, supports that this holds true. And again, early career: if the LLM is giving you the answers, and I just extrapolate from 50 years of research here, you'll never learn how to do your job. Or, to be more fair and honest about it, most people will never learn.
Some people are going to go deep. They're going to engage. We already see that it's a small minority of people who do this by default. So here's the seeming paradox. The best AI tutors never give students the answers, right? They're capable of doing so, but they never do, not even after forcing the student to think about it first.
It's pure Socrates: well, what do you think about this? How would you solve the problem? Did you read this paper? Did you listen to what this person said about the problem? It's not arbitrary. What these LLMs are saying is leading someone to their own insight, but never just giving it to them.
They're having to produce it themselves, and the LLM essentially is designed to, what's called, scaffold them to these moments. What's interesting is this ties into my own work and passions around neuroprosthetics, cyborgs, which is: technology should not only make us better while we're using it. Yeah, most findings say that on average we're a little more creative,
we write better, we come up with more ideas. Not the best people, but the median person gets all these benefits from interacting with an AI. But it should be not only that we're better while we're using it; we should be better than where we started when we turn it off again. And there is copious research, my own included, that shows that is not what's happening.
For the average person interacting with an LLM, or a generative model broadly, they become worse at the very things they're trying to accomplish in using these tools. It doesn't have to be that way. Again, AI isn't the villain here. It's the interaction between us that is problematic. If we can fix that, it could be
supply side: we build AI that doesn't give the answers, if you will. It could be demand side: we build up that human capital so that, as in my own experiment with Polymarket, people choose to go deep and engage. 'Cause we see that spontaneously in a little less than 10% of university students already. What if we make it 20%, or 50?
I mean, I'm not gonna be delusional, it's not going to be everybody. But what if we could build the AI to just give you what you need to be better, not what you want? You know, that's the ethic behind it for me.
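A minimal sketch of that "never give the answer" rule, written as a system prompt over a generic chat-completions call. The wording is our paraphrase of the design principle, not a production tutor; real systems add far more scaffolding than one instruction.

```python
# Socratic-tutor sketch: the system prompt forbids answers and scaffolds
# instead. Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

TUTOR_SYSTEM = (
    "You are a Socratic tutor. Never state the answer, even if asked "
    "directly. Respond only with questions, hints, or pointers to things "
    "the student already knows, each one moving them a step closer to "
    "their own insight. If the student is stuck, shrink the step; do not "
    "remove it."
)

def tutor_turn(history: list[dict], student_msg: str, model: str = "gpt-4o") -> str:
    """One round of tutoring; the conversation history carries the scaffold."""
    history.append({"role": "user", "content": student_msg})
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": TUTOR_SYSTEM}, *history],
    )
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# Usage: the student produces the answer; the tutor only scaffolds.
# history: list[dict] = []
# print(tutor_turn(history, "Why does my loop never terminate?"))
```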
David Rice: I agree. You used a phrase previously when we spoke: the Jiffy Lube economy. I sort of thought of it as deprofessionalization, right?
Vivienne Ming: Yeah.
David Rice: Take me through what you mean.
Vivienne Ming: I mentioned this earlier, talking about whether this is just like the Industrial Revolution or not. One of these statements is: AI will create more jobs than it destroys, just like looms gave rise to the fashion industry a hundred years later, after having decimated the economies of India and China.
But what about this moment in history? And I actually agree: in theory, AI should create way more jobs than it destroys. This is where AI is different than a factory line or a loom or a steam engine or any of these grand inventions in history, because the minute it produces that new job, if that job is, I'm gonna call it well-posed, if that job is regular, doesn't matter how sophisticated it is, doesn't matter if it would require an advanced degree to do, if that job is regular and economically valuable, within six months there will be an AI to do that job too.
So the question isn't, will it create more jobs than it destroys? It will be, who will be qualified for the jobs AI cannot do itself? And right now that is a disturbingly small number of people. Because instead of creating an economy in which, and you know, this was the AI bait and switch, this was the long-term promise:
imagine every doctor, your GP, you can just go in to them and they can diagnose everything, and they'll be collecting all of your smartwatch data and your health records, and now we have superdoctors. And, by the way, conceptually achievable: some of the best work in hybrid collective intelligence is in diagnostics and in medicine, and it's super exciting.
But is that what we are doing? Or is this the Jiffy Lube colonoscopy, where a kid with a high school diploma and the kind of conscientiousness to make certain they show up every day for work hooks you up to a computer, because, you know, it's not worth building a robot to do this, but for the cost of a lower-middle-class wage, instead of the cost of a very expensive doctor, you get scoped, and you get 80% of the value for 20% of the cost.
A lot of CFOs would see that as a huge win. So here we don't see jobs disappearing. We see jobs getting deprofessionalized. And there may actually be lots of low-skill jobs out there getting created by AI essentially not being worth doing the job, which is, wow, what a profound split in society: to know you have a job simply because it isn't worth building a robot to do it.
David Rice: That's rough. We spend a lot of time these days talking about augmentation, and that's what a lot of people think they want from AI, it seems. But oftentimes it will sort of veer into what we might call cognitive automation. I'm curious, for the audience: what's the difference between cognitive automation and cognitive augmentation, and why does that distinction matter for leadership today?
Vivienne Ming: Oh God, do you want me to give the super nerdy answer, the cognitive neuroscience answer? What parts of your brain are lighting up? I mean, one way to think about it is, if you're really using your brain, it's effortful. If you're using the things that make us human, your prefrontal lobe, you see all this activity go off: you are managing emotions, you're dealing with complex cognitive issues,
you're planning over time. We have this thing in neuroscience we call gamma activity, this sort of high-frequency cellular activity that indicates you are thinking hard. I vaguely alluded earlier to an MIT study. In that study they found that a sort of proxy of that was really decreased in students that used LLMs to write essays,
versus students that either just got Google or nothing at all. The ones that had nothing at all had to think hard about it. The punctuation in their essays was worse, the quality of writing was a little worse, but you saw greater activation of this sort of proxy of gamma activity in their brain. And then later,
they knew what they had written. They felt like they had ownership of it, where the ones that used the LLM could barely quote anything that they had supposedly written. They didn't really feel like it was their words. And it turns out, interestingly enough, in a related study, those essays were less persuasive to third parties.
So when we're looking at cognitive augmentation versus automation, let's go to something almost everybody probably uses at some point across their life, which is GPS navigation. You know, it's pretty common in the modern world. For years and years, neuroscientists studied London cabbies. They had to memorize all the addresses.
They had to memorize the flow of all the streets. They knew everything, and it turned out they had these crazy memories. Their hippocampus, the part of the brain involved in memory, was way outside the norm. Super exciting. And they had healthier cognitive lives later in life. Now you get into an Uber or a Lyft and they're almost always just following a map.
Well, they are better when they're using the map than when they're not. I use Google Maps all over the world when I hit a town I've never been in before. I don't like to take cars, I like to walk, so I use it so I can walk and get to know the city, like a native. The thing is, about 15 years ago, I made a prediction that the use of automated navigation would causally increase dementia rates.
Because then you don't have to think about it. That active frontal activity I was talking about is absent in someone that's just following a map. I'm better when I'm using the technology, but it turns out, beyond my predictions, there is empirical evidence now: you're worse when you get there, because you robbed yourself of future cognitive health by not using your brain.
Cognitive automation got me there; cognitive augmentation would simply have given me what I needed to get there. So I actually challenge my class at UC Berkeley to do this. I give lectures there and I give them this example. I tell 'em they can work on any project they want to, but let's start with GPS.
How would you redesign Google Maps or Waze or Apple Maps or any of it, so that not only do you get to where you wanna go, but you're better than where you started when you arrive? I give them this challenge, and we come up with fancy AI versions and fancy maps, all sorts of ones. Here's a simple one, for anyone wondering right now: are you hurting yourself with Google Maps?
Yeah, probably a little, on some level, but we're all a little different, so it's hard to show in the wash. But here's a simple thing that I do all the time. I turn on Google Maps, I say I'm going to the supermarket, it gives me directions, and then I use what I uniquely know to beat it there. What do I know about Berkeley, California, or LA, or London, or these cities?
I can take that recommendation, see all the traffic patterns, and then think to myself: oh, but I know that left turn it wants me to make is gonna be a disaster, because there's a Bears game going on. Or I know that I'm able to cut through this courtyard that's a public space, but it's not a street, so it isn't on the map.
I know things it doesn't. I'm using my brain while still benefiting from the AI. That may seem like a really simple example, but that is cognitive augmentation. My brain is getting used, and yet I am benefiting from AI. And the cool thing is, I'll spare you the nerdy context, but we can see how to explicitly build AI systems that augment us and make us better.
But it's not really the default build for your average LLM right now.
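One toy answer to that classroom challenge: a navigator that reports only the compass bearing and straight-line distance to the destination, leaving the route-finding, and the mental map, to you. The formulas are the standard haversine distance and initial bearing; the coordinates are arbitrary Berkeley-ish points.

```python
# Augmentation-style navigation sketch: give orientation, not turn-by-turn.
import math

def bearing_and_distance(lat1: float, lon1: float,
                         lat2: float, lon2: float) -> tuple[float, float]:
    """Initial compass bearing (degrees) and haversine distance (km)."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return bearing, distance

# "Head roughly north-east, about 1.5 km" leaves the actual route to your brain.
b, d = bearing_and_distance(37.8716, -122.2727, 37.8800, -122.2590)
print(f"bearing {b:.0f} degrees, distance {d:.2f} km")
```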
David Rice: That's interesting. I moved cities a few years ago and I was committed to building an internal map in my head, and so after a couple months I was like, I'm gonna stop using it, and I shut it off. What I found, though, is there's still one thing it's good for:
I can go this way or I can go this way, but which one's faster right now? And so I'll just have it tell me that and then I go do that. But yeah, it's interesting.
Vivienne Ming: And you navigate on your own. But think about it. I vaguely referred to this earlier: there's this study in Portugal with gastroenterologists,
you know, the doctors that study your guts, and they found that for people doing colonoscopies with an AI-assisted system, if you turn the AI off, they are dramatically worse than they used to be before they were using the AI system. It's making them better when they're using it, but then they are dramatically worse afterwards. A typical response is, yeah, but they're better when they're using it, and that's good enough. Except, of course, if it's eroding their skills,
you might actually see long-term degradation. Also, as I said before, artificial and human intelligence make different kinds of errors and have different kinds of strengths. If you're just outsourcing your brain to the AI, that's cognitive automation, and then you're not leveraging that anymore. You're not leveraging this unique human capacity to say: my spidey sense is tingling.
This is different than the training data of the AI. I think that little polyp there looks suspicious. I'm gonna grab it even though the AI says it's fine. And that is exactly where the human-AI hybrid intelligence domain really starts to take off: that interaction, playing on these differential strengths.
You get none of that with the automation.
David Rice: Well, this has been a fascinating chat Vivienne, and I wanna thank you for joining us today. I love this particular topic. I love talking about sort of what we're seeing actually happen around the brain and AI and how we're interacting with it, and of course what the long-term effects might be.
So I appreciate you coming on.
Vivienne Ming: I appreciate being here. And if I'm gonna leave you with one thing: I talk about all this collective intelligence, human and AI, but ultimately the humans that are gonna matter here are leaders. If they go for the lazy version of this and use it to substitute for people, you'll get a quick boost to your productivity and long-term loss. If you are willing to invest, 'cause it will be an investment, in the human capital side of this, you will see real returns. But that takes courage. You gotta be willing to expend some political capital inside your company to make that happen.
David Rice: Absolutely. I just gave a talk last week and I was talking to HR leaders and I was just saying that I know that you didn't sign up to get into this industry to have philosophical conversations necessarily, but it's time to have one.
We're gonna all have to have one. That's just the way that it is. Get comfortable with it. Thank you for joining us and thank you for that final note then.
Vivienne Ming: Real pleasure. Thanks so much.
David Rice: Well listeners, if you haven't done so already, be sure you're signed up for the newsletter. Head on over to peoplemanagingpeople.com, check that out. We have our AI Transformation Explorer you might wanna check out as well. If you haven't created a free account with us, do that. It'll give you access to the Transformation Explorer.
And until next time, it's the humans shaping this, and you all are leading the way.
