- Overreliance: Relying too heavily on AI tools can lead to significant negative workforce outcomes.
- Skills atrophy: Dependence on AI may erode critical thinking skills, reducing employees' ability to analyze information independently.
- Workforce impact: Organizations must evaluate AI deployment effects on employees as part of their technology decision-making process.
Something is happening to employees as AI tools spread across organizations, and at some point the question becomes: does your leadership team have language for what's happening?
Fret not. A report published last month by MIT Sloan Management Review and Boston Consulting Group does.
The paper, "Beyond the Model: Why Responsible AI Must Address Workforce Impact," is the fifth annual responsible AI study from the two institutions, built on global executive surveys and curated expert panels.
In prior years, the research focused on governance maturity, third-party risk, and accountability structures. This year, the researchers turned toward something more immediate: what enterprise AI deployment does to the people using it.
They named five effects.
- Overreliance
- Skills atrophy
- Disempowerment
- Work intensification
- "AI brain fry."
The argument from authors Elizabeth Renieris, David Kiron, Steven Mills, and Anne Kleppe is direct: these are not hypothetical risks or edge cases. They are measurable workforce outcomes that should factor into go/no-go decisions before an organization commits to any specific AI tool.
That product evaluation language is intentional. They are saying workforce impact belongs in the same analysis as technical performance, compliance fit, and total cost, and that it currently isn't there.
What the Research Shows
The most precisely measured of the five effects is what they're calling "AI brain fry," a term that entered the research literature in March through a BCG and University of California, Riverside study published in Harvard Business Review.
The researchers defined it as mental fatigue from excessive use of, interaction with, or oversight of AI tools beyond an employee's cognitive capacity.
In that study, 14% of workers using AI reported having experienced it.
Workers with brain fry made 39% more major mistakes than those who didn't. Productivity began declining when employees juggled more than three AI tools simultaneously. The most cognitively taxing form of AI engagement wasn't the complexity of prompts or the volume of outputs; it was oversight. Employees monitoring AI systems reported 12% more mental fatigue than those who weren't, and 19% greater information overload.
The groups most affected were in marketing, software development, human resources, finance, and IT. Those are not peripheral functions.
The paradox the researchers identified is significant for any COO or CHRO thinking about workforce planning. The employees most at risk for brain fry tend to be the early adopters, the people most enthusiastic about AI.
They're using more tools, overseeing more outputs, and absorbing more cognitive load in the process. The technology designed to reduce their burden is, in practice, intensifying it.
Eliza Jackson, Chief Operating Officer at ButcherBox, described the phenomenon from the inside.
You end up thinking, ‘What did I even do? Did I just respond to 800 chats? Did I make any decisions that weren’t thoughtful? Did I think about anything I decided?’
She added that her organization is actively trying to build in more reflective time, but acknowledged they're still working out how.
Skills atrophy works on a slower timeline. A Microsoft Research and Carnegie Mellon University study, presented at the CHI Conference last year and drawing on 936 real-world examples from 319 knowledge workers, found that higher confidence in AI tools was consistently associated with less critical thinking.
Workers weren't opting out of analysis. They were trusting the output and moving on. The study also found that AI use shifts the character of critical thinking, from generating analysis to verifying it. That might sound like adaptation, but it's a narrowing. When the AI is wrong, the question is whether employees still have enough independent judgment to catch it.
The clinical version of this dynamic has already been documented in practice. Vivienne Ming, a theoretical neuroscientist and AI researcher, points to a study of gastroenterologists using AI-assisted colonoscopy systems. When those systems were turned off, the doctors performed measurably worse than they had before using the AI at all.
It’s making them better when they’re using it, but then they are dramatically worse afterwards.
The risk isn't just present performance; it's the capability that erodes in the background while the tool does the work.
Overreliance is the pattern underneath both. When workers defer to AI outputs even when the tool is unreliable or operating outside its competence, individual errors compound.
At the individual level, that's a performance problem. Across a workforce using the same tools toward the same outputs, it's a systemic one.
The Governance Question
The MIT Sloan report makes a recommendation that has structural implications for how CHROs and COOs are positioned in AI decisions. Workforce impacts must be evaluated as part of the business case for AI use, built into deployment plans, and considered before procurement. That's not how most decisions currently work.
The researchers frame this as an obligation, not a suggestion. Organizations should communicate openly with workers about how AI will reshape or eliminate their tasks, what transition support looks like, and how those decisions were made. Not as a courtesy, they write, but as a core governance responsibility.
The accountability gap that creates is real. CHROs typically own people outcomes but are rarely seated at the table when AI tools are selected. Technology leaders own the stack but not the downstream skill erosion in the workforce using it.
Operations leaders track performance but often lack a framework for monitoring the capability degradation that precedes performance decline.
The terms MIT Sloan has named are useful precisely because they're specific enough to assign ownership and granular enough to measure.
Overreliance is not the same risk as disempowerment. Each requires a different monitoring approach, a different set of questions at deployment time, and a different owner inside the organization.
For most companies, none of those owners exist yet.
The research doesn't prescribe an org chart. What it does, clearly and with the weight of five years of responsible AI data behind it, is name the gap.
Go/no-go decisions are where these questions should surface. Before the contract is signed, before the rollout begins, before the first training session, someone in the organization needs to be asking what this tool will do to the judgment of the people running it.
Right now, most organizations don't have an answer for who that someone is.
