Leaders as System Designers: The future of leadership is not controlling work, but designing the human–AI system: clarifying what stays human, what AI amplifies, and how small, empowered teams make faster, better decisions together.
AI-EQ Drives Adoption: Successful AI transformation depends less on tools and more on emotional environments. Leaders must address fear, protect human identity, channel over-enthusiasm, and create conditions where people feel valued, confident, and aligned in an AI-augmented world.
Redefine Value, Not Efficiency: Most AI programs fail because they chase automation or faster workflows. The real edge comes from using AI to reshape decision-making, redesign workflows, and build adaptive cultures — shifting identity, behavior, and habits to unlock new forms of strategic value.
In this interview, Miriam shares how the most effective leaders are shifting from directing tasks to designing systems, and why the key to AI success isn’t better tools, but better emotional environments.
Building human-centered, AI-ready organizations
I’m Miriam Gilbert, founder of Coincidencity, where my focus has always been to #rehumanizework.
My background spans the Big Four and a CFO seat, giving me deep exposure to high-stakes boardroom decisions and the realities of leading transformation under pressure. I’ve worked across industries, navigating mergers, large-scale restructures, and the early adoption of new technologies, often in moments where pace and precision mattered most. That experience gave me an insider’s view of both the opportunities and blind spots leaders face when steering organizations through change.
As an expert in driving peak performance, I’ve always aimed to move teams away from the “one-size-fits-all” approach to managing people and instead create systems, structures, and dynamics that allow people to flourish — and performance to soar.
Since 2022, we’ve been helping organizations harness AI in ways that make both people and performance stronger, applying those proven principles.
At Coincidencity, we now partner with senior leaders and their teams to shape AI-augmented ways of working that are faster, more aligned, and deliver real strategic business value while keeping the human element at the center.
How AI is redefining leadership and decision-making
Leadership in an AI-first world shifts from directing work to designing the human–AI system.
My own shift has been moving beyond “Why are we doing this?” to also asking, “What must remain human work — and what can AI reliably take on?” I treat creative and ethical judgment, context, and relationship-building as protected human domains, while using AI for patterning, simulation, and first-pass synthesis. For generating these AI scenarios, I typically work with enterprise-ready assistants such as OpenAI’s ChatGPT, Amazon Q, Zoom AI Companion, Anthropic’s Claude, and Google’s Gemini.
At an organizational level, putting this into practice means moving away from rigid structures and toward smaller, empowered teams that can adapt quickly. Leaders are no longer gatekeepers of information but designers of environments where people and AI complement each other. Practically, that means letting go of the idea that hierarchy and more reporting lines create better decisions.
One simple example: Instead of asking for “more reports, faster,” the leader frames the decision to be made, specifies the variables that matter, sets acceptance thresholds and risk bounds, and asks AI for three scenario options with trade-offs. The team then pressure-tests those options against live context and commits to a short, pre-agreed experiment. The output isn’t a thicker deck; it’s a decision with rationale and next action.
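As an illustration only (not something Miriam prescribes), the decision frame described above can be captured as a simple structured prompt template. The field names, wording, and example values here are all assumptions:

```python
# Sketch: turning a decision frame into a scenario-generation prompt.
# The structure (decision, variables, acceptance thresholds, risk bounds,
# three options with trade-offs) mirrors the steps described above;
# all names and wording are illustrative assumptions.

def build_scenario_prompt(decision, variables, acceptance, risk_bounds, n_options=3):
    """Assemble a prompt asking an AI assistant for scenario options with trade-offs."""
    lines = [
        f"Decision to make: {decision}",
        "Variables that matter: " + ", ".join(variables),
        "Acceptance thresholds: " + "; ".join(f"{k}: {v}" for k, v in acceptance.items()),
        "Risk bounds: " + "; ".join(f"{k}: {v}" for k, v in risk_bounds.items()),
        f"Propose {n_options} scenario options. For each, spell out the trade-offs",
        "against the variables above and state which thresholds it risks breaching.",
    ]
    return "\n".join(lines)

# Hypothetical decision frame, for illustration only.
prompt = build_scenario_prompt(
    decision="Enter the mid-market segment next quarter?",
    variables=["time to revenue", "support load", "brand risk"],
    acceptance={"payback period": "under 12 months"},
    risk_bounds={"maximum budget": "250k"},
)
print(prompt)
```

The point of the sketch is the discipline, not the code: the leader supplies the frame (decision, variables, thresholds, bounds) before any AI output is requested, so the team reviews a small set of bounded options instead of an open-ended report.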
How leaders can use AI to create strategic clarity
One of the most powerful overhauls I’ve seen has been in AI-supported decision-making.
A leadership team I worked with was stuck in a cycle of producing more and more reports, hoping that volume would equal clarity. It only slowed decisions down and created frustration.
We shifted the process by using AI not to generate more data, but to frame better strategic choices. The exact setup varies depending on the AI solution the client already has in place — in this case, it was Copilot. Leaders started by defining the decision that mattered, clarifying the variables in play, and setting boundaries for risk and acceptance. From there, AI was used to generate a handful of scenarios with trade-offs spelled out. The team could then focus its energy on interrogating those scenarios, applying judgment, and aligning around the path forward.
The result was faster, more strategic decisions — not because AI gave “the answer,” but because it created a structured way to move past information overload and into purposeful action. That change in workflow also reduced the noise for teams downstream, who no longer had to scramble to produce endless decks, and it reinforced a culture where AI is seen as a strategic partner, not just a productivity tool.
How emotional insight reshapes AI leadership and adoption
One moment that stands out was working with a leadership team that had invested heavily in AI but was frustrated that adoption felt slow and fragmented. The instinct at first was to push for more training, maybe another hackathon.
But we decided on a different approach. Instead of defining the success of adoption by output volume — or worse, the number of AI logins — we looked at how people were experiencing AI in their daily roles.
Some felt threatened, others were unsure how their contribution still mattered. Some were frustrated by what they saw as shortcomings in the tools, while others were disillusioned by AI being forced into workflows that brought no real strategic value. By bringing those emotions into the open, leaders could investigate what needed to change, re-evaluate processes, and reframe AI not as a replacement but as a partner — freeing people to apply judgment, context, and creativity where it counted most.
A number of changes happened as a result:
- Several workflows were redesigned to let AI do the basic work.
- Success KPIs were reviewed — moving away from purely quantitative evaluation of task performance to include qualitative measures, too.
For me, the revelation was clear: AI adoption isn’t accelerated by more pressure or technical training. It requires cultivating the emotional conditions for people to feel confident and valuable in an AI-augmented system. That shift changed not only how I guide others, but also how I lead myself: alignment and performance followed naturally once the human layer was addressed.
Why leaders need emotional intelligence to guide teams through AI transformation
Developing what I call “AI-EQ” starts with treating it as a core leadership capability for the AI era, not a nice-to-have.
It’s about how leaders respond to uncertainty, how they guide their people through identity shifts, and how they hold space for both curiosity and discomfort when AI changes the way work gets done.
In practice, that means building three things:
- Awareness — noticing where emotions like threat, resistance, or disillusionment show up in teams.
- Discernment — asking what should remain human work and what AI can reliably amplify.
- Reframing — helping people see AI not as a replacement but as a partner that elevates their judgment and creativity.
I develop this in my own leadership through reflection and identity work, and I help others build it through our AI frameworks — simple practices like scenario-framing with AI, role clarification, and structured conversations that surface the human experience of change. We use tools like futurecasting, pre-mortem analysis, destination graphing, roadmapping, identity mapping, and gap analysis. For example, combining futurecasting and pre-mortem analysis helped a team of engineers realize they were worried that AI would undermine their identity as “innovators.”
The shift is subtle but powerful: once leaders build AI-EQ, teams move from compliance with tools to real ownership of how AI strengthens their contribution.
Why most AI transformations fail — and how leaders can fix them
Organizations often treat AI as a way to make the existing machine run faster.
Everyone gets trained to use it for meeting notes, first-draft emails, or quicker reports. Useful, yes, but that doesn’t create strategic value or competitive advantage. If every company does the same, the outcome is uniform efficiency and a race to the bottom.
It’s the difference between Blockbuster building a website to show store hours and Netflix asking a different question: “What new business model does this technology make possible?”
That’s why so many AI programs feel underwhelming. It’s not that adoption is low; it’s that “adoption” is the wrong goal, focused on the wrong horizon. Measuring logins or counting outputs gives the illusion of progress while leaving the core business unchanged.
What matters is whether AI is used to redefine how value is created, how teams align, and how decisions are made under pressure. That requires leaders to move beyond output and into AI-EQ — developing the mindset to distinguish what should remain human work from what AI can amplify. Coupled with our AI frameworks, this shifts teams from “doing the same, faster” to designing new ways of operating that competitors can’t easily copy.
The frameworks are based on creating three shifts:
- An identity shift: How do team members perceive that they contribute value?
- A behavior shift: What are the behaviors that will contribute to better value outcomes?
- Habit formation: We coach the team to make the new behaviors the status quo using behavioral science-based prototypes.
When leaders reframe AI this way, they stop chasing efficiency and start building advantage: speed of alignment, clarity of decision-making, and the ability to move first when opportunities emerge. That’s where the real gap is, and where the future of leadership lies.
How leaders can manage both fear and over-enthusiasm in AI adoption
Resistance is expected with any change or new technology; fear of replacement and skepticism about the tools are both common. But what I didn’t anticipate was the degree to which positive emotions can create unexpected challenges, too.
For example, I worked with a team where several early adopters were genuinely excited about AI. They were experimenting everywhere — drafting documents, automating steps, even redesigning parts of their workflows. But because there was no shared framework or alignment to priorities, their enthusiasm quickly turned into frustration. Experiments piled up, but few translated into real value, and colleagues began tuning out what they saw as “AI noise.”
That experience showed me that leading with AI-EQ isn’t just about addressing fear; it’s also about channeling enthusiasm into purposeful use. By helping that team identify where AI would free them for higher-value work — and where guardrails were needed — their excitement became a catalyst rather than a distraction. Those guardrails tend to be highly specific to each client. For instance, one client created a checklist for “working backward” from business outcomes before experimenting with AI to solve particular problems.
The surprise was realizing that every emotional response, not just resistance, has to be acknowledged and directed if organizations want AI to deliver lasting value.
Why system designers will lead the next era of AI-augmented organizations
Within five years, the most effective leaders will be judged less on what decisions they make and more on how they design the conditions for decisions to be made.
In an AI-augmented world, the advantage won’t come from having more data or faster analysis — AI will level that playing field. The edge will belong to leaders who can orchestrate the human–AI system: shaping the conversation on what must remain human, what can be augmented, and how to blend the two into something greater than either alone.
This will require a profound shift in leadership identity. Leaders will need to evolve from being decision-owners to context-setters — shaping the frameworks, boundaries, and emotional conditions that allow their teams to move quickly with AI while staying aligned to purpose and values.
In practice, that means the boardrooms of the future won’t be full of people poring over endless decks, nor will they be exercises in cold automation. They’ll be environments where leadership is democratized, and people ask sharper questions, explore more scenarios, and commit to shorter cycles of action. Leadership will be less about hierarchy and more about curating adaptability at scale.
The leaders who thrive will be those who embrace AI-EQ — who understand not only the technical possibilities of AI but also the human emotions, identities, and cultural shifts it unleashes. They’ll know that trust, alignment, and purpose are what turn AI from a tool into a true competitive advantage.
How leaders can navigate AI transformation with confidence and clarity
Think of this as a moment of strategic imagination, not just operational adjustment.
Here's my advice:
- First, leaders shouldn’t wait for “perfect” AI. Instead, treat the tools you have today as prototypes for tomorrow’s strategy. Use current AI capabilities to explore—not deliver—by running parallel experiments that probe questions like, “What type of insight could reshape our value proposition?” or “How might partial automation open new markets?” Those experiments don’t need scale; they need clarity of purpose.
- Second, build transparency into the design of AI-augmented workflows and organizational structures. Invite your people into the construction of those tools—even at the prompt level. When they co-create AI questions and models—when they see the “hands behind the tool”—trust forms. And once trust exists, AI becomes something people shape, not something that happens to them.
- Third, invest in emergent leadership: people who emerge as connectors between team purpose and AI potential. These aren’t always your formal managers—they’re the people asking “what if?” across boundaries. Spot them, amplify them, and give them small mandates to experiment. They don’t need policy carve-outs—they need a license to test what the future could look like and to bring back learnings.
- Finally, shift your rhythm. Stop thinking in big AI bets that require board approval. Start thinking in micro-cycles with meaning — weekly check-ins focused solely on “What did AI reveal to us about our work, customer, or process?” Over time, those micro-learning moments build depth in your strategy. You don’t just catch up to competitors—you leap ahead.
Follow along
You can follow Miriam’s work on LinkedIn, her website, and her Substack newsletter as she continues to design and scale human-first, AI-powered systems.
More expert interviews to come on The People Managing People!
