AI Feedback: AI-generated feedback improves employee performance more than human feedback if its origin remains undisclosed.
Disclosure Effect: Revealing AI as the feedback source causes resistance and worsens performance due to trust issues.
Trust Crisis: Employee trust in employers' AI use is low, with many seeing it as increasing bias.
Transparency Strategy: Openly disclosing AI in feedback processes is advised to prevent long-term trust issues.
Long-term Impact: The choice between transparency and opacity reflects organizational values and affects resilience.
The best performance feedback systems work like a magic trick in one sense: they're most effective when you don't know how the trick is done.
A 2021 study published in the Strategic Management Journal found that AI-generated feedback improves employee performance more than human manager feedback. The catch? It only worked if employees thought a human wrote it. Once you reveal the method, the magic disappears.
The feedback was more consistent, more personalized, and objectively more effective at improving performance, but once workers learned that algorithms generated their performance insights, the gains evaporated.
The same feedback that improved outcomes when attributed to a human manager triggered resistance, lower engagement, and worse performance when employees knew its true source. The researchers termed this the "disclosure effect", a measurable performance drop caused purely by knowing AI was involved.
AI in performance management is increasingly common, just as AI tools such as ChatGPT, Claude, and Gemini have become embedded in other areas of work and life. But rising usage does not mean rising trust.
For HR leaders navigating AI adoption, the paradox those researchers uncovered in 2021 creates an impossible position. Deploy AI feedback systems transparently and sacrifice performance gains. Hide AI involvement and risk catastrophic trust violations when the truth inevitably emerges.
The decision reveals something fundamental about what organizations optimize for. Is it this quarter's numbers or long-term transformation capacity?
The Trust Crisis Context
The disclosure effect isn't happening in a vacuum. Recent research from SHL shows that only 27% of workers fully trust their employers to use AI responsibly in HR decisions. Even more troubling, 59% believe AI is making workplace bias worse, not better.
This puts the paradox in sharper relief. The technology that could deliver better, less biased feedback is distrusted by the majority of employees who would receive it. And the research suggests their distrust becomes self-fulfilling. Knowing AI is involved makes them perform worse, regardless of whether the feedback itself is objectively superior.
Dr. Ryne Sherman, Chief Science Officer at Hogan Assessments, which works with over 75% of Fortune 500 companies, sees two forces driving employee resistance.
The first is the assumption that AI cannot appreciate the nuances of the workplace. Employees assume a human judge of performance has a better sense of organizational context, and more appreciation for work that doesn't show up in things a computer can easily count.
The second factor cuts deeper. It's not about whether AI can accurately assess performance but about what the choice to use AI signals about how an organization views its people.
"The lack of human input makes workers feel like they are simply being processed as literal human resources, without empathy or concern for their well-being as humans," Sherman says. "Our research shows that the most effective leaders show genuine concern for their employees. Using AI to conduct performance reviews suggests that the organization's leaders really do not care about their employees."
Whether these assumptions are accurate matters less than the fact that they exist. Employee perception shapes behavior, and behavior shapes performance.

Why Deception Doesn't Work
Organizations hoping to capture AI's performance benefits while avoiding the disclosure effect face a practical problem: employees already know.
Edie Goldberg, an industrial and organizational psychologist and author of the forthcoming book Performance Enablement, points to the telltale signs that give AI-generated feedback away.
Most employees, she notes, can tell when feedback was written by a computer, with all those em dashes, and it often lacks the specificity that a manager could provide.
The stylistic patterns of AI-generated text have become recognizable enough that attempting to hide its involvement may be futile. Even when organizations don't explicitly disclose AI use, employees spot the algorithmic fingerprints.
This undermines the entire hide-it strategy. If employees suspect AI involvement but aren't told directly, organizations get the worst of both worlds: the performance drop from perceived AI use combined with the trust violation from attempted deception.
Goldberg's recommended approach sidesteps the paradox entirely: use AI to gather and synthesize the inputs, and keep the manager as the author and deliverer of the feedback itself.
The distinction matters. AI can integrate feedback from multiple perspectives, such as peers, customers, and internal communication platforms, and combine it with hard data from internal systems.
"All of these factors make the feedback inherently more fair because it is not just one person's perspective," Goldberg says, referencing the extensive research on manager bias in performance evaluations.
But the fairness advantage only materializes if employees trust the process. Goldberg frames this as a division of labor problem rather than a transparency problem. Use AI for aggregation and pattern detection, where it excels. Keep humans responsible for context, specificity, and the relationship aspects that employees need to believe feedback is genuine.
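To make that division of labor concrete, here is a minimal sketch, assuming a simple in-memory pipeline rather than any real product's API: the AI layer only aggregates and groups signals from multiple sources, and the manager reads the resulting digest and writes the feedback. The source names, themes, and `aggregate_for_manager` helper are all hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    source: str   # e.g. "peer", "customer", "project_system"
    theme: str    # e.g. "communication", "delivery", "collaboration"
    text: str

def aggregate_for_manager(items: list[FeedbackItem]) -> dict[str, list[str]]:
    """AI-side step: group raw signals by theme so no single
    perspective dominates. Aggregation and pattern detection only,
    no judgment and no authored review text."""
    themes: dict[str, list[str]] = defaultdict(list)
    for item in items:
        themes[item.theme].append(f"[{item.source}] {item.text}")
    return dict(themes)

# Human-side step: the manager reads the digest and writes the actual
# feedback, supplying the context and specificity the AI lacks.
digest = aggregate_for_manager([
    FeedbackItem("peer", "collaboration", "Unblocked the API migration twice."),
    FeedbackItem("customer", "communication", "Status updates were clear and timely."),
    FeedbackItem("project_system", "delivery", "Hit 4 of 5 milestones on schedule."),
])
for theme, notes in digest.items():
    print(theme, "->", notes)
```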
In the end, the solution isn't to hide AI's role but to redefine it.
The Transparency-First Position
Kate O'Neil, CEO of performance management platform Opre, built her entire product around rejecting the hide-the-AI approach. All feedback Opre users receive is clearly identified as AI-generated, and neither managers nor employees can manipulate it to appear human-authored.
"There should never be confusion about what is AI-generated and what is human-generated," O'Neil says. "After all, how can we ‘keep a human in the loop’ if no one knows what is human and what is AI in the loop? Lack of clarity about who generates feedback ultimately only serves to muddy managerial responsibility."
Opre's design philosophy centers on what O'Neil calls a "manager-and-employee vs. Opre" dynamic. The system sends performance insights to both parties simultaneously, creating a shared reference point for discussion rather than a top-down judgment.
In their next one-on-one, manager and employee discuss what Opre got right, what it missed, and what it might be misunderstanding about context.
"This replaces the far less productive dynamic we often see today: manager vs. employee, especially when performance is low or performance conversations are being avoided," O'Neil explains.
But O'Neil acknowledges she doesn't have comparison data. "I don't know if employees knowing it's AI-generated impacted anything because I don't have a population of employees who didn't know it's AI-generated to compare it to."
When asked whether she's leaving performance gains on the table by prioritizing transparency, O'Neil doesn't hedge.
"I don't think knowing it's AI-generated tanks the effectiveness. I think the illusion that feedback is genuine when it's not tanks the trust and social contracts between managers and their teams. Tanked trust tanks performance."
She goes further on the bias question. Even if AI-generated feedback objectively reduces bias in the short term, hiding its source creates a trust problem that ultimately increases bias over time.
"If AI-generated content actually reduces bias, but employees perceive that it increases bias, the long-term impact will be that trust is degraded," O'Neil explains. "Managers passing off AI-generated feedback as their own degrades trust between them and their teams, and that leads to increased bias later."
This is why Opre's system prevents managers from editing or softening AI-generated insights. Both manager and employee receive identical feedback simultaneously, eliminating the possibility of deception. Managers and employees can add context or request clarification from the system, but they cannot modify what it produces.
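The article doesn't describe Opre's internals, so the following is a hypothetical sketch, not Opre's actual implementation: a data model with the three properties O'Neil describes, an unmodifiable AI-generated insight with explicit provenance, simultaneous delivery to both parties, and append-only context notes. All class and field names here are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: the generated insight cannot be edited
class Insight:
    text: str
    generated_by: str = "AI"  # provenance is always explicit
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class FeedbackRecord:
    """One immutable insight, delivered to manager and employee at once.
    Either party may append context; neither may rewrite the insight."""

    def __init__(self, insight: Insight, manager: str, employee: str):
        self.insight = insight
        self.recipients = (manager, employee)  # both see the same text
        self.annotations: list[tuple[str, str]] = []  # append-only log

    def add_context(self, author: str, note: str) -> None:
        if author not in self.recipients:
            raise PermissionError("Only the manager or the employee may annotate.")
        self.annotations.append((author, note))

record = FeedbackRecord(
    Insight("Code review turnaround slowed noticeably in Q3."),
    manager="manager_a",
    employee="employee_b",
)
record.add_context("employee_b", "Two reviewers were on leave in September.")
```

Making the insight object immutable pushes the no-editing rule into the data model itself, so "managers cannot soften the feedback" becomes a property the system enforces rather than a policy it requests.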
Her position also rests on a calculation about time horizons. Organizations can optimize for immediate performance metrics or for the capacity to transform over years. The two goals, in her view, are incompatible when it comes to AI deployment.
"Opacity is not a performance strategy," she says. "It's a short-term extraction strategy. You might get a burst of effort while employees scramble to figure out the rules. But soon after, you lose your best people, exhaust the rest, and damage your ability to attract new talent."
When Trust Breaks
Both Goldberg and Sherman warn that deception around AI use carries consequences that far outweigh any short-term performance gains.
"It is easy to break trust, and very hard to rebuild," Goldberg says. "There is no benefit here, only risk."
That warning isn't just ethical posturing. Both Goldberg and Sherman operate in environments where they see the aftermath of broken trust play out across multiple organizations. Their certainty that deception fails comes from pattern recognition rather than philosophy.
"It is disastrous for organizations to engage in deceptive performance review practices," Sherman warns. "First, in any organization of any size, there is no hope whatsoever that the deception will not come to light. All organizations have informational leaks, and this kind of information will absolutely get out."
The aftermath, in Sherman's assessment, would be severe.
"Once this information becomes known, trust in the organization will hit absolute rock bottom," he says. "Staff will protest, quit, and perhaps even worse, become totally disengaged."
His prescription for organizations caught in such a scenario pulls no punches. Employees will demand accountability, which likely means removing senior leaders involved in the deception.
"Even then," he says, "the organization will need to strongly consider transforming their entire performance review process to one of radical transparency to start to regain trust."
Goldberg's advice for organizations that find themselves in this position is simpler, but it cuts to the same conclusion she reaches about deception in general: there is no benefit, only risk.
What the Choice Reveals
The decision about whether to disclose AI involvement in performance management is rarely framed as a choice between performance and trust. More often, it's presented as a technical implementation question or a change management challenge. But the research suggests it's actually a litmus test for organizational values.
Companies optimizing for short-term performance metrics have a clear incentive to hide AI involvement. The disclosure effect is real and measurable.
Companies building for long-term transformation can't afford the trust violation, even if it means accepting lower immediate performance.
There's also a troubling class dimension that practitioners rarely address directly. In organizations that partially disclose AI use, who gets to know? Are executives receiving transparent AI systems while frontline workers get hidden AI feedback? The trust gap between leadership and employees is already wide. Selective disclosure could crater it entirely.
Sherman's point about inevitable discovery deserves emphasis. In an era where employees screenshot internal communications, post anonymously to company review sites, and organize via private Slack channels, the idea that AI involvement in performance reviews could stay secret is implausible.
Add Goldberg's observation that employees can already recognize AI-generated text by its patterns, and the hide-it approach looks less like a strategy and more like a delay tactic.
The question becomes not whether employees will find out, but what happens when they do.
A Different Application
Goldberg sees AI's real promise in performance management coming from a different direction entirely: real-time feedback rather than annual or quarterly reviews.
"AI systems can help provide a lot of feedback in the moment, when it is the most beneficial," she explains. "But AI can integrate all these points of feedback over a performance period to provide a more balanced and less biased picture of individual performance. This is the power of AI in performance management, not simply writing the review."
This reframes the disclosure question. If AI provides immediate, contextual feedback on specific actions — flagging when project timelines slip, noting when collaboration patterns change, highlighting when communication frequency drops — the source matters less than the timeliness. Employees can see the direct connection between their actions and the feedback, which builds trust in the system's accuracy rather than undermining it.
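Reduced to its simplest possible form, that kind of real-time observation is threshold rules over activity data, firing a note the moment a signal crosses a line. The metrics, thresholds, and `realtime_feedback` function below are illustrative assumptions, not a description of any known system.

```python
from dataclasses import dataclass

@dataclass
class ActivitySnapshot:
    days_behind_schedule: int
    messages_this_week: int
    messages_weekly_baseline: int

def realtime_feedback(snap: ActivitySnapshot) -> list[str]:
    """Rule-based observations tied to current, verifiable data.
    Each note names its trigger, so the employee can check the
    connection between action and feedback for themselves."""
    notes = []
    if snap.days_behind_schedule > 2:
        notes.append(
            f"Project timeline has slipped {snap.days_behind_schedule} days; "
            "worth surfacing blockers now rather than at review time."
        )
    if snap.messages_this_week < 0.5 * snap.messages_weekly_baseline:
        notes.append("Communication frequency is about half your usual baseline.")
    return notes

print(realtime_feedback(ActivitySnapshot(3, 4, 10)))
```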
The annual performance review written by AI still triggers the disclosure effect because it's trying to replicate human judgment in a context where relationship and authority matter.
Real-time feedback from AI operates in a different register. It's not pretending to be human wisdom; it's providing data-driven observation that humans would struggle to deliver at scale.
Whether this approach avoids the disclosure-driven performance drop remains, as far as I can find, untested at scale, and perceptions may shift as familiarity with how AI works grows. But it suggests that the hide-versus-disclose debate may be asking the wrong question about the wrong application of the technology.
The Unresolved Question
For leaders making decisions about AI deployment today, the research offers no comfortable options.
The transparency advocates — O'Neil, Goldberg, Sherman — argue that hiding AI involvement guarantees a trust catastrophe when discovered, and employees can already detect AI-generated text anyway. Better to build AI-ready cultures through honesty, even if it costs performance in the short term.
The counterargument is harder to dismiss than it seems. If AI feedback objectively reduces bias, delivers more consistent evaluations, and improves outcomes, is transparency an expensive luxury that primarily serves to make HR leaders feel ethical?
Neither side can point to longitudinal data showing what happens three years into transparent AI feedback deployment. The technology has evolved too quickly and the deployments are too recent. But the choice reveals what leaders believe about their workforce and what they're willing to sacrifice to get results.
Organizations that choose opacity are betting that the performance gains will be worth the eventual trust crisis. Those that choose transparency are betting that an AI-ready culture can only be built on honesty about AI's role, even when that honesty costs performance in the short term.
The research shows what happens in the immediate aftermath of disclosure. What we can't answer with as much certainty yet is which approach builds more resilient, higher-performing organizations over time.
For now, that remains a matter of philosophy, risk tolerance, and what kind of organization leaders are trying to build.
