- AI Makes Leadership More Human: Lind emphasizes that while technology is evolving fast, leadership fundamentals remain rooted in empathy, adaptability, and understanding people.
- Slow Down to Scale Faster: Rushing AI adoption without understanding workflows leads to scaling inefficiency. Lind advises leaders to deconstruct processes first, identify where AI truly adds value, and integrate it intentionally.
- Build AI Literacy as a Measurable Discipline: Lind’s six-part AI effectiveness framework—Intentionality, Discernment, Ethical Alignment, Technical Fluency, Workflow Integration, and Delegation Discernment—helps organizations assess and improve how people use AI.
We sat down with him to get a sense of what workflows he's overhauling with AI, and the frameworks he uses to ensure AI literacy in organizations. Here's what he shared.
From accidental leader to strategic transformer
I’ve had a dynamic career that can best be described as living at the intersection of business, tech, and people. On paper, it looks like a career in corporate training, but in practice, it’s been driving business transformation through the people that businesses run on, and optimizing that transformation through technology.
I never had aspirations for leadership. I just wanted to change things for the better and use whatever skills and expertise I had to pull it off. However, that led to me running teams at some of the most recognizable companies in the world, like GE Healthcare and HR.com, getting involved with projects I could never have dreamed of, and having an industry voice that an introverted guy like me would have dreaded years ago.
Why AI is making leadership more human — not less
While my methods are shifting rapidly with AI, I would say the foundation of leadership hasn’t actually moved.
My role and leadership approach have always been focused on identifying root problems — solving them by surrounding myself with incredible people and bringing out their best, while optimizing the solution with tech. There are simply more options and tools we can pull in now.
And, if anything, I think the AI age is actually reinforcing how important that foundation is. In fact, I’d go so far as to say that AI will make leadership more human than it’s ever been.
There is enormous pressure to “do more” and “go faster,” but speeding up and producing more of the wrong things will wreck you. The leaders who thrive will be those who prioritize their understanding of what the people around them are capable of, and then build adaptability and capability in those people.
The leaders who recklessly chase vanity metrics and short-term gains will quickly find themselves struggling, and if they’re not careful, completely out of work.
Why you can't just throw AI at any problem
AI is only effective if you take the time to deconstruct the problem, identify all the disparate parts, surgically figure out where AI makes sense, and then reconstruct a hybrid solution that takes the best of what AI can do and pairs it with the best of what humans can do.
Right now, many leaders are operating under the assumption that they can throw AI at anything and it’s so smart that it will figure out the problem, design a solution, and successfully execute that solution with minimal errors.
That’s not how it works.
You have to take time to talk to the people who are currently doing the work and study the data around it. There’s a lot of work happening that isn’t initially apparent, but is absolutely critical to the outcome. And when that hidden work gets overlooked, that’s when AI goes wrong — fast.
How AI can scale all the wrong things
Similarly, AI amplifies whatever it touches, both good and bad. If you apply it to the right things, you’ll see improvement faster and at a greater scale than you can imagine.
Unfortunately, the same is true for the opposite: If you rush or miss something, it will exploit that and blow it up to a scale you cannot imagine. You also won’t have the usual time to react and respond. Mistakes can become catastrophic before you even realize they were made.
One example sticks with me. I was brought into a company after they’d replaced their customer service representatives with AI. Within weeks, customers were furious and canceling their service left and right. Even worse, customers had learned how to exploit the AI customer service and were talking the AI into giving them free products and services.
It was a complete disaster. We had to go back and completely rethink the role of customer service — redefining where AI should handle things, when issues should be escalated to a person, and what skills the human agents needed to make the system work.
How leaders are using AI for everything from busywork to organizational redesign
I’m constantly experimenting with AI and improving processes based on where the technology is maturing. To do that, I map out every process and look for repetitive, low-risk steps that can be automated. That’s why I always say there aren’t specific “AI areas” — it’s an augmentation to everything we do.
In leadership, specifically, there’s a lot of robotic, repetitive work that eats up time and pulls you away from people. That work needs to be automated, because if you spend your time on the repeatable, robotic parts of leadership, you make yourself obsolete. That’s why I'm always pushing my teams to bring AI alongside them as a copilot — so they can focus more on the things that matter.
Three everyday workflows every leader should offload to AI
Here are three simple examples of repetitive workflows that I regularly offload to AI:
- Meeting notes: I use Fireflies.ai to transcribe my meetings, send me summaries, and then email me key highlights before the next meeting. This note-taking app lets me focus on the person I’m talking to instead of trying to jot down notes or keep track of action items during the meeting.
- Meeting knowledge base: I drop transcripts from Fireflies.ai into NotebookLM (knowledge base software) and keep a notebook for each person or project, which lets me ask direct questions like: “What did I miss?”, “What decisions are pending?”, “What follow-ups are needed?”, and “What opportunities are in front of us next?” It turns conversations into an actionable, searchable knowledge base (a simplified sketch of this pattern follows this list).
- AI leadership simulations: For a lot of orgs, I use Relativ.ai to design AI leadership sims built around key leadership behaviors, using technology similar to my AI-effectiveness assessment. They give leaders an objective measure of capability and a clear development plan.
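To make the knowledge-base workflow above concrete, here is a minimal sketch of the underlying pattern: one folder of transcripts per person or project, plus a crude keyword lookup over them. It is an illustration only, not how Fireflies.ai or NotebookLM actually work; the folder layout, the project name, and the ask() helper are assumptions invented for this example.

```python
# Illustrative sketch only: a bare-bones stand-in for "a notebook per project,"
# using plain text files and keyword matching instead of NotebookLM.
from pathlib import Path

TRANSCRIPT_ROOT = Path("transcripts")  # assumed layout: transcripts/<project>/<meeting>.txt

def load_project_notes(project: str) -> list[tuple[str, str]]:
    """Return (filename, text) pairs for every transcript saved under a project."""
    folder = TRANSCRIPT_ROOT / project
    return [(p.name, p.read_text(encoding="utf-8")) for p in sorted(folder.glob("*.txt"))]

def ask(project: str, question: str) -> list[str]:
    """Naive retrieval: surface transcript lines that share keywords with the question."""
    keywords = {w.lower().strip("?.,") for w in question.split() if len(w) > 3}
    hits = []
    for name, text in load_project_notes(project):
        for line in text.splitlines():
            words = {w.lower().strip("?.,") for w in line.split()}
            if keywords & words:
                hits.append(f"{name}: {line.strip()}")
    return hits

if __name__ == "__main__":
    # "q3-launch" is a made-up project folder name for illustration.
    for hit in ask("q3-launch", "What follow-ups are needed?"):
        print(hit)
```

A tool like NotebookLM replaces the keyword lookup with an actual language model grounded in the uploaded transcripts, but the organizing principle is the same: keep a closed, per-project set of source documents and query only that set.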
Three data-heavy tasks where AI delivers real leverage
And here are three examples of data-intensive tasks that, while less regular, would have been difficult to handle without AI:
- Consolidation and org redesign: With one org, we fed job role data into AI and asked it to identify where it saw overlap in jobs — in other words, where it seemed like two jobs were doing the same thing in different places. It was a large org, so this kind of clustering work would have taken a long time, but AI made it easy. Then, we followed up in those areas to understand whether or not it was actually duplicated work. Many of the examples AI gave us weren't actually duplicated work — either the function had a distinct need or someone had just pulled an easy-to-use JD from the system to get a req opened and filled. In those cases, we updated things to more accurately reflect what was true. But some of AI's findings did lead to consolidation and org redesign. (A rough sketch of this kind of overlap check follows this list.)
- Filling skill gaps: With another org, we identified a skill gap on the sales team related to a specific customer interaction. Normally, we would have deployed some content and encouraged managers to work with their direct reports, but realistically, it would have been physically impossible to scale a simulation to help people develop the skills required to do it well. Instead, we used AI to deploy a mobile conversation with an AI bot that simulated the situation, then layered on AI conversational analytics to evaluate performance, provide feedback, and deliver a complete dashboard of where everyone landed. That allowed leadership to lean in and provide surgical solutions to the problem, measurably closing the gap. For that project, I used Relativ.ai for the simulations and assessments, built on top of OpenAI and NotebookLM to power the underlying models and knowledge base.
- Decision-making: When it comes to AI in big decision-making, I'll feed it significant amounts of data, but I'll use it more like a thoughtful partner. I'll come up with the key things we're examining and ask how the data aligns with those categories. I'll also ask things like, "What are we not considering?" or, "What hidden patterns do you see?" But it's never anything I just act on — it's more the jumping-off point to validate or seek more detail from people closer to the work, who can either confirm, clarify, or deny.
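For readers who want to picture the overlap analysis from the first example, here is a rough sketch of one way to flag similar job descriptions with text similarity. It is not the process Lind describes; the roles, descriptions, and cutoff are invented, and it uses TF-IDF with cosine similarity from scikit-learn as a stand-in for whatever the actual AI did. As in the example above, every flagged pair still needs human follow-up.

```python
# Illustrative sketch only: flag potentially overlapping roles by comparing
# job description text. Data and threshold are made up for the example.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_descriptions = {
    "Sales Ops Analyst (NA)": "Builds pipeline reports, maintains CRM data, and supports forecasting.",
    "Revenue Operations Analyst (EMEA)": "Maintains CRM hygiene and builds forecasting and pipeline reports.",
    "Field Marketing Manager": "Plans regional events and campaigns with the sales team.",
}

titles = list(job_descriptions)
vectors = TfidfVectorizer(stop_words="english").fit_transform(job_descriptions.values())
similarity = cosine_similarity(vectors)

OVERLAP_THRESHOLD = 0.5  # arbitrary cutoff chosen for this sketch
for i, j in combinations(range(len(titles)), 2):
    if similarity[i, j] >= OVERLAP_THRESHOLD:
        # A flagged pair is a conversation starter, not a conclusion.
        print(f"Possible overlap ({similarity[i, j]:.2f}): {titles[i]} <-> {titles[j]}")
```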
How to measure AI literacy across your team — and close the capability gap
The key step to AI literacy is identifying the gaps. To do this, I’ve broken down AI effectiveness into six core disciplines that can actually be measured:
- Intentionality: Intentionality is about using AI with purpose, not just for convenience. It measures whether your use of AI aligns with strategic goals rather than chasing trends or efficiency for efficiency’s sake. Tip: Pause before automating anything. Ask, “What problem am I actually trying to solve?” Purpose before process always wins.
- Discernment: Discernment measures your ability to know when to trust AI — and when not to. It’s about applying judgment to evaluate AI’s suggestions, accuracy, and fit within the context of human decision-making. Tip: Build friction into your workflow. Don’t accept the first AI output. Compare, question, and calibrate it before acting.
- Ethical Alignment: Ethical Alignment evaluates whether your AI usage respects organizational values, privacy, fairness, and integrity. It’s not about compliance checklists — it’s about consistent moral alignment. Tip: Define your non-negotiables. Decide what shouldn’t be outsourced to AI before exploring what can.
- Technical Fluency: Technical Fluency is your comfort level with AI tools, language, and limitations. It’s not coding — it’s comprehension. The more fluent you are, the more responsibly and creatively you can leverage AI. Tip: Don’t chase every new tool. Pick one or two and go deep, understanding their real capabilities and constraints.
- Workflow Integration: Workflow Integration measures how seamlessly AI fits into your daily processes and collaboration rhythms. Great AI use isn’t added on — it’s embedded. Tip: Look for bottlenecks or repetitive tasks that create drag. Start small and replace friction, not function.
- Delegation Discernment: Delegation Discernment gauges your ability to decide what to hand off to AI — and what to keep human. It’s the discipline of defining boundaries in partnership with technology. Tip: When in doubt, delegate low-stakes, data-heavy work to AI, but always keep the judgment, empathy, and accountability human.

After defining these disciplines, I built an assessment tool to help my teams understand where they stand in those six areas. People have a conversation with an AI bot describing how they use AI in their work. That conversation is analyzed, and they get a personalized report showing their effectiveness across each discipline, plus practical steps for closing the gaps — all in about 10-15 minutes. And then we can redeploy the assessment over time to see ongoing progress — which makes it a living development framework, not a one-time evaluation.
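As a rough illustration of what a per-discipline scorecard could look like in data terms, here is a small sketch that represents the six disciplines, flags focus areas below a target score, and compares two assessment runs. This is not Lind's actual assessment tool (which he built on Relativ.ai); the 1-5 scale, the target threshold, and the example scores are assumptions made up for the sketch.

```python
# Illustrative sketch only: a minimal per-person scorecard for the six
# disciplines, with gap detection and progress between two assessment runs.
from dataclasses import dataclass, fields

@dataclass
class Scorecard:
    """One person's scores across the six disciplines (1-5 scale assumed)."""
    intentionality: float
    discernment: float
    ethical_alignment: float
    technical_fluency: float
    workflow_integration: float
    delegation_discernment: float

    def gaps(self, target: float = 3.5) -> list[str]:
        """Disciplines scoring below the target level, i.e. focus areas for development."""
        return [f.name for f in fields(self) if getattr(self, f.name) < target]

def progress(before: Scorecard, after: Scorecard) -> dict[str, float]:
    """Per-discipline change between two assessment runs."""
    return {f.name: round(getattr(after, f.name) - getattr(before, f.name), 2) for f in fields(before)}

# Made-up example scores: a baseline run and a follow-up run months later.
baseline = Scorecard(3.2, 2.8, 2.1, 3.9, 3.0, 2.6)
follow_up = Scorecard(3.6, 3.4, 3.8, 4.0, 3.5, 3.1)

print("Focus areas at baseline:", baseline.gaps())
print("Change since baseline:", progress(baseline, follow_up))
```

Aggregating these scorecards across a team is what turns the assessment into the "big picture" view leadership used in the example that follows.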
The transformative, real-world results of an AI assessment
Here's an example of this type of AI assessment in practice. One org had rolled out Microsoft Copilot and saw it being used a lot, but had no real gauge of what people were doing with it or whether it was adding any value.
So, we rolled out that assessment tool with a few key objectives: Measurably identify how effective people were at using it, identify trends in what kinds of things people were using AI for, and provide specific development and growth plans for measurable improvement within six months.
Everyone in the company got a personal scorecard, which was then tied to a performance management goal around improving their AI effectiveness. And leadership was able to look at the big picture.
We first identified some big gaps in "ethical alignment" relative to the company values, which led to new corporate standards that got baked into performance management — not just around AI ethics, but around how people were expected to conduct themselves in general.
We also identified high-risk areas where AI was being used in work that needed to stay human. The leaders involved were able to drill into the teams doing that work and help them course-correct.
Over the course of six months, we ran the assessment two more times and saw measurable growth in all six areas, with a major improvement in ethical alignment thanks to the enterprise-wide focus on those new standards. Many of the leaders also ended up using the data that came back to help focus their teams' work, because we found that a lot of people were unclear on their strategic priorities.
Christopher Lind's two-tier AI tool stack
When it comes to tools, I think about them in two categories: AI tools and AI-enhanced tools.
For tools that are purely AI:
- My go-tos are ChatGPT and Google Gemini. But I lean more heavily on ChatGPT because of its multimodal capabilities.
- I’d also add NotebookLM. When I need to organize and reference specific datasets, that’s a must-have. I’ve got a notebook for every project I work on. All the resources, every transcript — it all goes in there. That gives me a closed dataset I can search and build from.
- I use Fireflies.ai for all my meeting transcriptions.
- And Relativ.ai does all my simulation work. It's also what my AI-effectiveness assessment was built on.
On the AI-enhanced side:
- I’m obsessed with a video editing tool called Descript. They’ve done a fantastic job integrating AI capabilities into creative workflows. And in the past, I used CapCut quite a bit.
- And I use Grammarly a lot as well.
Why your AI tool stack matters less than your AI strategy
With that said, when it comes to AI, most people are focused on the wrong thing. They're asking what tools people are using and which ones they should have. That's the wrong question.
People need to be examining which capabilities they need to strengthen and which ones can be easily automated. Then bend whatever AI tools they already have in their portfolios to do it.
The people who will succeed aren’t the ones who pick the right tools. They’re the ones who are clear on what they need to do and then work wisely with the AI tools around them to get it done.
So when I talk about the tools I use, it’s just to give context — not a checklist. I could accomplish everything I’m doing with a completely different set of tools because I know what capabilities I need from AI and can quickly assess which of my AI tools will get the job done.
Advice for leaders navigating AI transformation: Slow down to scale faster
Here's my advice: Slow down. Take the time to understand what you’re really trying to accomplish and how work gets done today.
Everyone is rushing to do something with AI without fully understanding their current workflows. As a result, they’re throwing AI at broken processes and scaling waste.
If you take the time to get it right on the front end, equip people to be successful with the tech, and define desired outcomes clearly, you’ll succeed.
You also need to be willing to say "No" to great ideas if they’re pulling you away from the things that matter most.
Follow along
To follow Christopher Lind's journey and learn from his experience of driving business transformation through people and technology, visit his website at christopherlind.co, subscribe to his insights on Substack, or connect with him on LinkedIn.
More expert interviews to come on People Managing People!
