Key Takeaways

AI Value Insight: Most organizations lack a clear AI philosophy guiding human-centered use and development.

Human Judgment: AI should enhance judgment, not replace it; effective use demands self-reflection and critical thinking.

Training Necessity: Effective AI use means training models to reflect your personal voice and standards, not just chasing efficiency.

Last week, I attended a conference on Human Resources in the Age of AI right here in Atlanta, where I live.

One of the more striking things said during an interactive panel came from a senior HR executive who described uploading his personal journal to an AI model and asking it for honest feedback about himself.

"It was tough to read," he said. "But it was valuable. It made me think about what it had to say. And that's how we can use it… to make us better."

That story struck me as unusual because it represents exactly the kind of AI use that almost never gets discussed inside organizations. We talk endlessly about productivity gains, workflow automation, and prompt engineering. We rarely talk about what it means to use this technology in a way that actually reflects who you are, sharpens your thinking, and leaves your human capabilities more intact than before.

There's a reason for that. Most organizations are approaching AI adoption as a policy problem when it's actually a values problem.

The Philosophy You're Missing

Most organizations have a compensation philosophy. It's a codified set of values and principles that guides how the company thinks about pay, equity, and the relationship between contribution and reward. It doesn't just set rules; it reflects what the organization believes about its people and what it owes them.

An AI philosophy should work the same way. Not a list of prohibited use cases or a vague directive to "use AI responsibly," but a genuine articulation of what this technology means for your people, your culture, and the work you collectively believe is worth doing.

As I write this, I don't think most organizations have done that work. They've handed people a tool with enormous range and enormous risk and said, broadly, "go figure it out." Some employees are using it well. Many are using it in ways that quietly hollow out their own development. And leaders, largely, aren't sure what to say about either.

Kathy Eastwood, founder of E Equals Why, put her finger on something important when she observed that results-at-all-costs pressure has left organizations without human-centered leaders.

"We need to start pushing back. Yes, we need efficiency. But more so, we need effectiveness."

That distinction matters enormously here. Efficiency asks how fast. Effectiveness asks whether the right person with the right judgment made the right call. AI can help with the first. Humans are still the only answer to the second.

The Email Is a Symptom

As I sat in the crowded room of HR professionals from across industries, one thing was palpable: frustration. Not with the technology itself, but with how their people are using it.

The clearest sign that organizations haven't worked this out yet is sitting in everyone's inbox.

AI-drafted emails have become a kind of organizational background radiation. People recognize them immediately: the bloated structure, the familiar grammatical tics, the diplomatic hedging, and the absence of anything direct.

They read like a document that's been translated into another language and back again. The meaning survives. The person behind it doesn't.

This is where the conversation about AI in organizations tends to land: a critique of bad outputs. 

But the real issue is upstream. When someone reaches for an AI model to write a three-sentence reply to a colleague, it would be a major stretch to suggest that's a productivity decision. What they're really doing is making a judgment about their own voice, and they're getting it wrong.

An organization that doesn't have a clear AI philosophy leaves people without any framework for that judgment. So they default to whatever feels like less friction in the moment.

Over time, they lose the practice of writing clearly, thinking out loud on paper, and communicating as themselves. Younger employees, who may never have fully developed those habits in the first place, are especially vulnerable.

The fix isn't banning AI from email. It's giving people the tools to ask themselves some important questions before reaching for the model: 

  • Does this task actually need augmentation?
  • What do I lose by outsourcing it?
  • What do my colleagues lose when they encounter the AI's voice instead of mine?

This might sound like a trivial example, or, as some CEOs might put it, "foofy." It's not. It's a bad habit with consequences for the development of the skills we'll need most in the coming years. Exacerbating the problem are the bad examples coming from the top. As one exec put it: "there's a ton of intellectual laziness in leadership right now."

What a Real AI Philosophy Looks Like

It's important not to think of an AI philosophy as a prohibition list. It's a set of honest questions the organization commits to asking together, and a set of values that guide the answers. Some of the questions you need to answer include:

  • What capabilities do we believe our people should develop and own, regardless of what AI can do?
  • Where does augmentation genuinely serve the work, and where does it serve only the appearance of productivity?
  • What does it cost, in human development and organizational trust, when our people stop doing the things that make them good at their jobs?
  • At a broader level: is this particular use worth the environmental cost and the cognitive trade-off?

These aren't rhetorical questions. They're the foundation of a philosophy that treats AI as an extension of human judgment, not a replacement for it. And that distinction has to be explicit, because the tool itself won't make it.

The organizations that are navigating this most thoughtfully are treating AI adoption the way they treat other significant cultural moments, with leadership that models the behavior, communication that sets expectations clearly, and genuine investment in helping people build the judgment to use new tools well. 

HR has a real role here, and arguably an advantage. HR leaders are accustomed to operating through ambiguity. As one CHRO in attendance reflected: “There was no pandemic module in graduate school or your certification courses, but HR found its way through. The same instincts apply with this.”

Training the Tool to Sound Like You

There's a phrase that kept coming to mind as I listened to the crowd discuss the pitfalls of AI use that damage critical thinking, and the organizational lack of what the group defined as "common sense."

"Don't be an avatar." 

If recent times have taught us anything, it’s the power of perception and how it can shape our reality.

When people use AI without training it to reflect their voice, their values, and their judgment, they become the opposite of what they intended. They don't appear more capable. They become a human interface for the model's outputs, an avatar for the technology’s voice. 

And the colleagues who receive those outputs know where they originated. They just don't usually say anything.

Using AI in a way that genuinely extends your capabilities rather than replacing them takes real work. It requires feeding the model your context, your voice, and your standards. It requires iterating, questioning, and rejecting outputs that don't reflect your thinking.

That kind of use is demanding in exactly the right way, because it keeps the human in the loop as an active agent, rather than a passive editor.

This is the version of AI use that organizations should be teaching. Not "here is how to write a better prompt," but "here is how to develop yourself professionally while using this tool effectively." The difference is significant. One produces slightly better outputs. The other produces people who are growing in their capability, not ceding it.

The journal exercise is a useful model for this kind of thinking. The executive who uploaded his journal to an AI wasn't outsourcing his self-reflection. He was using the tool to see himself more clearly. He did the hard work; the model was just a mirror.

That's exactly the kind of use that makes people better, and it's exactly the kind of thinking an organizational AI philosophy should be actively cultivating.

But that intellectual laziness will have to be overcome, because those who get this right are those willing to do the harder thing. They're not just handing their people a tool; they're helping them understand what kind of people they want to be while using it.

David Rice

David Rice is a longtime journalist and editor who specializes in covering human resources and leadership topics. His career has seen him cover a variety of industries for both print and digital publications in the United States and the UK.
