- Background: Paul Gibbons has had a diverse career, focusing on management and organizational change for 35 years.
- AI Excitement: Gibbons is enthusiastic about AI's potential and its impact on productivity and learning.
- Leadership Changes: Traditional leadership models need to evolve to accommodate AI's rapid advancement in organizations.
- People-First: Gibbons advocates a people-first approach to AI adoption to foster trust and increase ROI.
- AI Literacy: AI literacy in organizations should include concepts, skills, ethics, civics, and critical thinking.
Paul Gibbons is a generalist who has worked in countless fields, but his primary focus over the past 35 years has been on management and organizational change. He's a consultant and the author of The Science of Organizational Change and Adopting AI, among seven other titles.
We sat down with Paul to understand how organizations can improve AI adoption and literacy. He told us about his people-first approach.
A full-stack perspective
My father says, "Just as you are about to become world-class at something, you go and do something else." That is painfully true.
I was the youngest person to get into the University of Wisconsin's medical school — ever. And then, I got seduced by Wall Street and had a meteoric rise.
The problem was that, along the way, I developed severe addictions, accompanied by depression. I spent half a decade bouncing around, pursuing PhD programs and playing bridge professionally, all while I was effectively homeless.
At 32, I cleaned up, joined PwC, and helped build their Organizational Transformation and Change method. I then jumped ship again and started my own firm. It's now in its 25th year, though I sold it a decade ago.
Since then, I've been a consultant, professor, keynote speaker, and author of nine books. What do they say? "Life begins at fifty..."
Two of my books were home runs, or "sixes," as we say in cricket. The Science of Organizational Change was ranked one of the top five change-management books of all time. And Adopting AI was reviewed as the best AI book of 2025 — not, I hasten to add, by the New York Times, but on LinkedIn, which matters 1,000 times less.
I suppose, if there is a blessing in all this chaos, it's that I see our current context from multiple perspectives. You could say, "Jack of all trades, master of none," but more charitably, you could say I have a "full-stack perspective."
Why this organizational change expert got off the fence
I personally have an OMFG moment about AI at least once a week. I get a sense of child-like wonder with every new tool, new perspective, and new model breakthrough.
But I was very much on the fence about AI in early 2025. It has amazing potential for humanity, but it's ethically fraught. Then something changed. I'm not sure whether I drank too much of Sam Altman's Kool-Aid or I just felt the potential more viscerally, but I dived deeply into AI — even building LLMs on my home rig.
And right now, I'm trying to merge my abundant street cred in org change and leadership with the AI world.
How AI supercharges productivity and learning

I have never, ever been more excited. Nor have I ever been so productive.
AI's ability to create decks, for example, has changed everything for me. It may seem minor, even trivial. But as a 35-year management consultant, I've made hundreds of decks, each of which took a week. That is years of my life making PowerPoints.
Then, I discovered Gamma. I can stand up something world-class in a day.
I'm also learning faster than ever. To learn a topic well, you probably need to read 20 books or 50 academic papers. NotebookLM makes mincemeat of that. Within seconds, you can have a mind map distilled from 50 sources.
Case in point, I recently discovered an MIT paper on something called Recursive Language Models. It was dense and deep. Much of it was technically beyond me, but I knew just enough to think, "Wow, this is important." So I uploaded it to Gemini and asked, "Does this matter? Give me the two-page summary." Then I asked if I could build one on my home desktop and how long it would take. Two hours later, I had a mini LLM running in my living room.
That changes things. This tech is democratizing and emancipatory.
Why organizations must focus on more than efficiency gains

With that said, we can't be over-focused on the productivity gains of AI. That's the equivalent of asking Einstein to fold laundry.
I'm guilty of it. When I first used AI in my own business, it was for efficiency alone. I wanted to save 10-20 hours of drudgery per week. But doing the same work faster is really limiting.
We need to focus on new capabilities. I should have asked, "What does this allow me to do that I could not do before?"
This is critical for Fortune 500s.
Why leadership frameworks must evolve for the AI era

AI isn't changing leadership, and that is a problem.
The most common leadership framework is Transformational Leadership. That's from the 70s. Old leadership paradigms were built for a predictability that doesn't exist anymore.
We need to change to the Adaptive Leadership framework. It recognizes the pace of change, but also the emergent layer of AI adoption. Because we don't know what "done" looks like. We can't easily predict what shape pilots will take or how easily they will scale.
What an AI tech stack looks like for change leaders
My AI stack is always changing, but here's a snapshot:
- Descript — Video editor
- Granola — Note taker
- NotebookLM — Research and writing assistant
- Gamma — PowerPoint creator
- Claude — General purpose
- ChatGPT — General purpose
- Gemini — General purpose
- Obsidian — Notes
- Leonardo — Image creator
- Ollama — Local model runner
- VSCode — Coding
- Hugging Face — Community building
- LangChain — LLM app framework
- PostgreSQL — Database
As I mentioned above, Gamma and NotebookLM, in particular, were game changers for me.
Why different AI adoption paradigms will shape organizational culture
There are several paradigms for AI adoption.
Right now, Accenture is mandating AI logins — arguably, a coercive and blunt tool. In contrast, Walmart is using a "pull" approach, offering free access to AI education for all.
The question for Accenture is whether compelling people will damage creativity and psychological safety. Time will tell.
Why people-first AI adoption drives trust, literacy, and ROI
Something like 88% of organizations are trying to adopt AI, but fewer than 30% are making money from it. The numbers are awful.
It's not surprising. As someone who has been on the human side of tech adoption for three decades, I know the gap between "shiny new tech" and usefulness takes years.
The barriers are organizational. Silos, culture, mindset, skills, and so on. People are scared of it. If you try to bang in a technology with that as the context, good luck.
My shtick, and a concept I birthed, is "people-first adoption." That's both a moral claim and a tactical approach. My hypothesis, which is used by two Fortune 500 companies, is that if you first empower people and develop use cases that support human flourishing, their fear and mistrust will be resolved. They will quickly adopt tech that improves their lives.
Then, with fluency, trust, literacy, and skills built, you can go hard for ROI.
What AI literacy looks like in practice
There are five dimensions to my research-based AI literacy model. Here's the high level:
- Concepts: A surface-level understanding of what is under AI's hood
- Skills: Knowing how to prompt and, later, build
- Ethics: A huge area, and nobody should be flying blind here
- Civics: How AI is affecting news, politics, parenting, and schools
- Critical thinking: It is smarter than you, but also much dumber sometimes. Thinking critically with AI is rarely taught, and most people suck at it.
Each of these must be taught if you want true AI literacy in your organization.
Why success can become a barrier to AI learning
Here's my take on leadership right now: Nobody can keep up, but they need to try.
AI is the most complex topic in the history of humanity, from models to algos to massive disruptive forces in enterprises, economies, and geopolitical flows.
The most frightening thing, though, comes from a story. Back in 1995, I was called in to investigate a derivatives fraud — small beer by today's standards, $100M. I was there to interrogate an options trader. Because I came from that world, I understood gamma, theta, and smile curves.
What amazed me was that nobody at the bank — not the CEO, the CFO, or the Chief Risk Officer — had a clue what this guy was talking about. They had billions of risk onboard, and no clue about derivatives.
AI is way more complicated, and C-suites are flying blind. That scares the bananas out of me. Ignorance at the top is dangerous.
But who has time? I ran a program for a bunch of execs and asked, "How much formal education did you do last year?" Three days, total. Between twelve of them. That just won't cut it.
Leaders need to set aside time for learning. They also have to remove their default attitude of, "If I don't know it, it can't be that important." Because currently, their power and success are enemies of learning.
Why leaders must make time to tinker
Now more than ever, leaders need to upskill. They need to become AI literate.
It is wholly hypocritical to have an AI education program for your workers without also upskilling yourself.
I expect most leaders would agree with this. But almost none will do it. They're already working 60 hours a week. They need support. They need structured programs. And they need time to tinker — maybe an hour or two per week, minimum.
Tinkering is non-negotiable. The books haven't been written, and even if they were, you can't learn these skills from a book.
Follow along
You can follow Paul Gibbons' work on LinkedIn, YouTube, and Substack. And check out Paul Gibbons Advisory.
More expert interviews to come on People Managing People.
