- Leadership Shift: AI requires leaders to focus on strategic questions, not just being the smartest in the room.
- AI Autonomy: The human-on-the-loop model lets AI handle tasks independently, enhancing organizational efficiency.
- Skills Integration: AI tools like Claude can encode skills, turning patterns into automated practices within agents.
- Productivity Theater: Many companies apply AI superficially, leading to inefficiencies instead of true productivity gains.
- AI Dependency: Professionals must maintain critical thinking to avoid over-reliance on AI for decision-making.
Jurgen Appelo is an author, speaker, and founder. His most recent book, Human Robot Agent, explores leadership in the age of AI. He's spoken internationally about how AI is transforming organizations and how to avoid productivity theater. He also runs a newsletter, The Solo Chief, which is all about orchestrating humans and AI agents.
We caught up with him for an illuminating chat about what happens when AI stops being a buzzword and starts doing real work. Here's what he had to say. 👇
Throwing darts at an org chart

My career path looks like someone threw darts at an org chart. I've been a software developer, team leader, project manager, development manager, CIO, entrepreneur, intrapreneur, and now, the creator of a Substack newsletter who speaks internationally about what happens to organizations when AI stops being a PowerPoint talking point and starts doing work.
I've also written several books on management and leadership, including Management 3.0, Managing for Happiness, and most recently, Human Robot Agent, which is about leadership in the age of AI.
Why leadership isn't about being the smartest person in the room anymore
With AI, leadership is being hollowed out and rebuilt simultaneously.
I used to think leadership meant being the smartest person in the room. But today, AI is the smartest entity in the room on about 90% of topics.
Leadership, now, is about knowing which questions to ask, which constraints to set, and when to intervene. That's harder than knowing the answers.
Why AI must shift from human-in-the-loop to human-on-the-loop
I'm betting my career on agentic organizations. Yes, I said that publicly, and I mean it.
The parts of management primarily involved in controlling information flow, approving things, and sitting in decision loops? Those are evaporating. AI can process information, generate options, and execute workflows faster than any manager can review their inbox.
For decades, organizations have funneled all intelligence — both human and artificial — through human bottlenecks. Even today, most "AI-powered" companies have employees prompting ChatGPT all day, sharing clever prompts in Slack channels, and celebrating AI wins in all-hands meetings. But every AI interaction still starts and ends with a human. The AIs never talk to each other. They execute nothing independently.
That's what we call humans-in-the-loop. And it's a massive constraint. Everything slows down to the speed of the slowest bottleneck.
AI-native competitors don't operate this way. They have AI agents handling entire value streams while humans focus on strategy, ethics, and edge cases. The throughput difference is staggering.
The takeaway: Managers do not create value by being in the loop.
The shift I'm advocating is humans-on-the-loop. Let AI agents handle workflows autonomously while humans focus on strategic direction, ethics, and the messy judgment calls that machines can't make. Different intelligences, different speeds, different infrastructure.
How patterns and practices can live inside of AI (instead of you)
Let me tell you about my "AlphaZero" moment.
Last month, I had eight conversations with readers of my Substack. I refined my Job-to-Be-Done, aka the actual problems my audience wrestled with. I downloaded all the meeting transcripts, stored them in Google Drive, grabbed a JTBD skill from a public skills marketplace, and asked Claude to analyze everything.
Claude performed the full ritual: functional needs, emotional needs, social needs, the whole Christensen framework. It then generated a report. I asked it to turn that report into a new custom skill optimized for my Substack, a JTBD evaluator that can now analyze future article drafts and tell me how well they align with my readers' pains and gains.
Twenty minutes of work. Maybe less. And you could do it with your team or your customers.
I sat there realizing: Clayton Christensen's framework no longer lives in my head; it now lives inside Claude. I don't need to know how it works. Claude performed the analysis the human way, using human language and following human-designed patterns. I was just the orchestrator giving an agent a task.
When patterns and practices live in the agent instead of in you, something fundamental shifts. Skills are the new patterns and practices. The question becomes: "When the agent gets it wrong, can I tell?"
Why productivity theater hinders AI potential
The biggest barrier between AI's promise and actual outcomes? Productivity theater. Full stop.
The pattern is everywhere: Executives go to a conference, nod at a keynote, and suddenly they're "all in on AI." They deploy Microsoft Copilot or some internal chatbot with a heroic name and a dashboard nobody asked for. They tell teams this will deliver a golden age of productivity.
What happens? Colleagues send AI note-takers to meetings they don't attend. Bots generate summaries nobody reads. People auto-reply with fluffy paragraphs that say nothing. AI-generated research reports multiply like TikTok videos. Slide decks bloom and die in shared drives. Nobody reads anything. Everyone just summarizes the translations and translates the summaries.
The media has noticed. Headlines like "Majority of CEOs Alarmed as AI Delivers No Financial Returns" and "AI-Generated Workslop Is Destroying Productivity" have been stacking up all year.
Companies are sprinkling AI on top of broken workflows, and it shows.
Where real acceleration happens — Novartis shrinking clinical trial site selection from weeks to hours, insurance companies automating 400,000 claims from three-week turnarounds to minutes, solo developers shipping full-stack apps over a weekend — the pattern is clear. These workflows didn't get 10% faster. They completely imploded. Years became weeks. Weeks became minutes.
How technostress impacts AI adoption and productivity

I'm subscribed to 100+ AI newsletters on Substack alone. Every new tool announcement triggers the same response: a small spike of anxiety followed by the urge to investigate. Multiply that by thirty announcements per week, and I wasn't learning anymore. I was shaking. Doomscrolling my to-do list like it was a LinkedIn news feed.
I was surprised by how quickly my nervous system adapted to continuous partial activation and how destructive that was.
Technostress. FOBO — Fear Of Becoming Obsolete. I actually made ChatGPT, Gemini, and Claude debate each other about how to deal with the problem they caused. Fair's fair, right?
The counterintuitive insight I got was that the more "up-to-date" you are, the more replaceable you become. If your value is knowing the latest features of Claude Code, you're exactly as valuable as a $20/month subscription. Nothing depreciates faster than AI knowledge.
In the age of AI, calmness is a unique selling point.
Why mastering fewer AI tools is a strategic advantage
Tool mastery is the wrong game. Leaders should focus on tool immunity. Your competitive advantage is delivering value with fewer technologies than your anxiety suggests.
That said, here's what I use daily:
- Claude (Anthropic) — My primary AI. I use Claude Projects extensively, with custom skills defined as markdown files. I've built a personal ghostwriter skill, a JTBD evaluator, a format improver, an SEO optimizer, and a marketing evaluator. Claude is my thinking partner, editor, analyst, and occasionally my therapist when the latest AI launch sends my cortisol through the roof. Assessment: the best reasoning model for long-form content work. Projects with skills changed my workflow more than any other single feature in AI.
- ChatGPT (OpenAI) — I use it for a second opinion. When I have a particularly important problem, I use multiple models and have them critique each other's feedback. ChatGPT is also good for quick, structured analysis. I run AI debates between ChatGPT, Gemini, and Claude and publish the results.
- Gemini (Google) — Third voice in my multi-model debates. Useful because my entire workspace is in Google, so it has native access to my documents and email. Snarkier than the other two, which I appreciate.
- Make — My automation backbone. I've built workflows that connect Claude to my content pipeline: article drafts flow through formatting checks, SEO analysis, social media post generation, and newsletter distribution. This is where workflow compression happens.
- Nano Banana Pro — For AI image generation. I have no clue how the model makes those images so amazing. It just works.
How Claude Projects revolutionized my workflow
My favorite tool is Claude Projects with custom Skills. No contest.
Here's why. A Claude Skill is a markdown file that teaches Claude how to perform a specific task. Each skill encodes my preferences, my standards, and my accumulated judgment.
What makes this different from just prompting? Persistence and accumulation. Every skill I build makes the next one better. They're becoming a system, an interconnected set of codified expertise that compounds over time.
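To make this concrete, here is a minimal sketch of what such a skill file might look like. The file name, frontmatter fields, and referenced `audience-jtbd.md` document are illustrative assumptions, not Jurgen's actual files:

```markdown
---
name: jtbd-evaluator
description: Score an article draft against the audience's jobs-to-be-done.
---

# JTBD Evaluator

When given an article draft:

1. Identify the functional, emotional, and social needs the draft addresses.
2. Compare them against the reader pains and gains listed in `audience-jtbd.md`
   (an assumed reference document of accumulated interview findings).
3. Report alignment as strong / partial / missing for each need, with one
   suggested revision per gap.
```

Because the skill is plain markdown, it can be versioned, reviewed, and refined like any other text file, which is what makes the accumulation Jurgen describes possible.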
My frameworks are now living inside the agents. That's both thrilling and terrifying. Thrilling because I'm 10x faster. Terrifying because if I stop practicing the underlying craft, I'll lose the ability to judge whether the agent is doing it right.
How to create an agentic advisory board
For strategy and decision-making, I use an "agentic advisory board" approach. When facing a strategic decision, I pose the same question to Claude, ChatGPT, and Gemini, then have them critique each other's responses.
This provides three distinct analytical perspectives, plus the friction of disagreement, exactly what a good advisory board should offer. The debates are often more rigorous than conversations I've had with human advisors.
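The orchestration behind such a board is simple enough to sketch. This is a hypothetical illustration, not Jurgen's actual setup: model calls are stubbed as plain callables so the two-round logic stands alone; in practice each callable would wrap a real API client (Anthropic, OpenAI, Google).

```python
from typing import Callable, Dict

def advisory_board(question: str,
                   advisors: Dict[str, Callable[[str], str]]) -> Dict[str, dict]:
    """Pose one question to every advisor, then cross-critique the answers."""
    # Round 1: each advisor answers the question independently.
    answers = {name: ask(question) for name, ask in advisors.items()}

    # Round 2: each advisor critiques every *other* advisor's answer,
    # producing the friction of disagreement a good board should offer.
    board = {}
    for name, ask in advisors.items():
        critiques = {
            other: ask(f"Critique this answer to '{question}':\n{answer}")
            for other, answer in answers.items()
            if other != name
        }
        board[name] = {"answer": answers[name], "critiques": critiques}
    return board
```

Swapping in real models is a matter of replacing each callable with a function that sends the prompt to that provider's chat endpoint and returns the text.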
Why most organizations are not AI ready
I spend a lot of time thinking about AI literacy for my readers and the organizations I advise.
Here's my honest take: Most AI literacy programs are garbage. They teach people which buttons to click in ChatGPT and call it "upskilling." That's like teaching someone to use a hammer and calling them an architect.
"AI-ready" means three things:
- Understanding what AI can and can't do at a conceptual level — not memorizing features that'll change next month, but grasping what pattern matching, language generation, and agentic workflows mean for your work.
- Identifying which workflows to redesign around AI rather than just sprinkling with it.
- Exercising judgment to know when the AI is wrong. This requires maintaining your own domain expertise.
Why professionals must not become dependent on AI
I fear waking up one day unable to write or think critically because I delegated my cognitive muscles to agents for too long. The single wringable neck becomes a single atrophied muscle.
I actively work against this. I still write every article myself. I still read physical books. I still do the thinking before asking Claude to check my work, not after. But the pull toward dependency is real, and it's constant.
I'm afraid of becoming the thing I warn others about: someone who looks productive but quietly outsources the capacity for original thought. That fear is useful. It keeps me practicing.
And I'm not going to let it happen.
Why leaders must wake up to the reality of AI

Stop chasing tools. Stabilize your principles, constrain your execution surface, and time-box your curiosity.
That's my one-sentence advice. Here's the longer version:
During the 2008 credit crisis, I once grabbed my CEO by the shoulders. Literally. The company was heading straight for a financial wall. Revenue dropping, runway shrinking. And he just froze like a deer in headlights, calmly preparing to sleepwalk the entire business off a cliff through inaction.
I told him, "Wake up, for God's sake! Our runway is evaporating. We have to act."
He did. Later, he thanked me.
I think about that moment whenever an executive says, "AI isn't generating returns." Of course it isn't, not the way you're doing it. You're treating AI like a note-taking assistant when you should be treating it as a workflow reinvention engine. You're sending brainstorm ideas to committees for approval instead of giving people permission to experiment.
If your leadership is turning AI adoption into a suggestion box and a governance workflow, someone needs to shake them awake before they walk the organization off a cliff.
Why the organizations that survive will be agentic
As for what's to come, AI is turning every professional into a one-person operation, whether they like it or not. The people who learn to orchestrate AI agents, maintain their own judgment, and deliver real value with minimal overhead will thrive.
The organizations that survive will be agentic — dual-lane systems where AI handles execution at machine speed while humans focus on strategy, ethics, and the genuinely hard problems.
The organizations that keep funneling all intelligence through human bottlenecks will discover, probably too late, that their competitors were building ring roads while they were still stuck in traffic.
Follow Along
You can subscribe to Jurgen Appelo's free twice-a-week newsletter, The Solo Chief, where he drops sharp, iconoclastic essays that cut through the hype. Or follow him on LinkedIn and find out more at his site.
More expert interviews to come on People Managing People!
