
Nirit Cohen held senior HR and leadership roles at Intel, where she worked in a fast-paced technological environment for three decades before pursuing thought leadership. Now, she is a future-of-work strategist, Forbes columnist, and the creator of The Future of Less Work newsletter and podcast.

We sat down with Nirit to understand how work is shifting with the increased prevalence of AI, and what HR leaders must do to navigate this transition effectively. Here's what she had to say.

Leading the Future of Work

I spent nearly three decades at Intel, holding senior HR and leadership roles across the U.S., Europe, and Asia. Working inside a company where technology was reshaping industries in real time meant constantly rethinking how work was organized, how people created value, and how leaders made decisions under pressure.


That experience shaped my leadership lens. I didn’t come to the future of work from theory or trend watching, but from being accountable for real outcomes, redesigning roles as technology evolved, scaling talent across regions, leading through uncertainty, and helping executives balance business priorities with human impact.

About fifteen years ago, as smartphones and mobile technology began to fundamentally change where and how people could work, I started asking a different question: "What will it feel like to work in the future?" That question pulled me into the future of work conversation long before it became mainstream.

By 2012, I led a project that published one of our first whitepapers on the future of knowledge work, looking ahead to 2025. What interested me then still drives my work today: how technology reshapes work itself, how leadership must evolve alongside it, and how organizations can design roles, systems, and cultures that keep humans relevant as automation accelerates.

Today, I work as a future-of-work strategist, writer, and advisor. Through my Forbes column and my podcast, The Future of Less Work, I explore the social and technological forces reshaping leadership, organizations, and careers. With one foot in decades of leadership decision-making and the other in shaping the narrative of how work is changing, I help leaders, HR teams, and solution providers make sense of it all and what to do about it.

Why AI Shifts Org Charts to Work Charts

I see the shift to AI-first work as more than a technology upgrade. It’s a redesign of how we organize human effort.

For decades, organizations were built around structures, jobs, and headcount. AI forces a different logic.

The real challenge now is orchestrating intelligence and capabilities across humans and machines around the work itself. This challenges us to define, with greater clarity than before, the uniquely human contribution to work.


For HR and leadership, this is a profound shift. It affects not only people processes, but many foundational structures we’ve relied on for decades.

Traditional org charts and fixed roles are too rigid for a world where work constantly shifts. Instead, we see the emergence of work charts: temporary, outcome-driven configurations in which teams assemble around problems rather than titles.

Early in my career, I believed clarity came primarily from structure: clear roles, stable hierarchies, and well-defined processes. I’ve learned that in environments shaped by AI and constant change, too much structure can reduce clarity. Alignment now comes from shared intent, trust, and ongoing conversation, not from control.

This means we must treat work as a portfolio that can evolve with both business needs and employee aspirations. It means moving away from job descriptions as the primary unit of work toward a more fluid understanding of outcomes, tasks, and capabilities.

We need to break down, recombine, and continuously reshape work, so people can focus on where human judgment and growth matter most. And the people processes that once powered static structures must now evolve to power dynamic workflows.

Why AI is Changing Skills and Responsibilities

This shift also changes the skills and responsibilities required across the organization. Many capabilities we once labeled as “management skills” are quickly becoming core work skills.

Anyone working with AI effectively manages digital agents: assigning tasks, reviewing outputs, and deciding what to act on. As a result, we can no longer teach leadership skills like delegation, evaluation, and feedback only at promotion. They must become foundational capabilities from day one.

At the same time, this changes what we expect from people managers and leaders. They no longer solely own the answers. When intelligence is widely available, the leader’s value lies in judgment, context, and choice. Leaders decide which questions are worth asking, where human oversight is essential, and when efficiency should give way to quality, ethics, or trust.

I’ve had to let go of the idea that experience alone provides a reliable roadmap. In a fast-changing landscape, experience is less a map and more a compass. Its value lies in judgment and the ability to navigate uncertainty, not in repeating what worked before.

Redefining Your Unique Value Changes How You Lead with AI

As a thought leader, I had to confront a difficult question:

What happens to a value model built on expertise when AI can generate, summarize, or explain much of what people seek from me?

And I think that question applies to all leaders now.

I always experiment with tools early, and I was already using AI for what I call “work about the work.” Things like:

  • Synthesizing research
  • Drafting outlines
  • Summarizing conversations
  • Preparing first versions before applying my judgment and perspective

But I realized I needed to deliberately redraw the boundary between my unique value and machine work. And this is more true than ever with the prevalence of agentic workflows.

I became intentional about how I work with AI. I start with an idea, brainstorm it with AI, ask it to gather inputs and create a first-pass synthesis, and then stress-test assumptions and angles by going back and forth.

I treat AI explicitly as a thinking partner, and I've been surprised by the depth of strategic reflection I gain from working with AI in this way. In the past, if I had walked out of a coaching session with a top-tier mentor having gained that level of insight, challenge, and reframing, I would have considered it money very well spent. I did not expect that from technology.

But importantly, AI is not a decision maker. I review, reframe, and consciously choose every output to ensure I don’t lose my voice or point of view.

That shift changed my attention allocation. I became more deliberate about where my human contribution truly matters:

  • Framing the narrative
  • Deciding what not to say
  • Connecting patterns across contexts
  • Shaping ideas for specific audiences

Ultimately, AI forced me to be more honest about my actual role and more disciplined about letting go of work where I no longer add unique value.

An Intentionally Fluid AI Stack

My AI tool stack is intentionally fluid. I regularly use several general-purpose generative AI tools, including ChatGPT, Claude, Gemini, and Perplexity. I switch between them depending on the task and the specific strengths I need.

Some excel at synthesis and structured thinking, others at language and tone, and others at surfacing sources or challenging assumptions. Using more than one tool helps me avoid overfitting my thinking to a single model’s patterns.

For research-heavy work, I use tools like Perplexity and NotebookLM to organize large volumes of information, compare perspectives, and surface connections I might otherwise miss. This significantly changed how I prepare for writing, podcasts, and advisory work because I engage with more material while reserving my attention for interpretation and judgment.

Currently, I’m exploring Notion, particularly its integration of AI into personal workflows and knowledge management.

Beyond specific tools, a key change in the last year involves how I discover and evaluate each tool. I actively follow newsletters, researchers, and practitioners who identify emerging tools and use cases.

I skim these daily, not to adopt everything, but to stay informed about what’s becoming possible. In particular, I closely watch the space of personal AI helper agents, such as ClawdBot, as I look forward to a time when I can hand over more of the processes around my work.

Most tools don’t make it into my regular workflow, and that’s intentional. Experimentation is constant, adoption is selective.

Why Organizations Fail to Get the Full Value of AI

Organizations tend to introduce AI as a technology initiative, but its real impact is far more fundamental.

Because organizations typically operate through roles, org charts, and fixed success metrics, most companies layer AI onto existing structures. Jobs stay the same. Workflows stay the same. Performance expectations stay the same. Leaders expect AI to “transform” work without confronting which work still matters, how it should move through an AI-enabled organization, or what success should look like when intelligence is no longer scarce.

Leaders expect productivity gains, but they don’t change decision rights, approval layers, or how they measure outcomes. We rarely question whether our performance systems encourage learning for tomorrow’s work, or simply reward delivery based on yesterday’s assumptions.

When I work with organizations, I address this by starting with people-related processes, not technology. We look for the processes that either enable transformation or block it.

A simple example is how managers review ongoing work. Do they accept work done the way it’s always been done, or are they willing to pause, push back, and ask teams to redo it using new tools, even if that means it takes longer in the short term?

The way we measure performance, recognize effort, and evaluate success makes a real difference in whether people feel permitted to slow down and learn.

Another example is how organizations treat early adopters. Every organization has people who naturally experiment with new tools and ways of working. These individuals are critical to spreading practical knowledge across teams, but that only happens if organizations give them permission, formal roles, time, and recognition.

Too often, organizations treat this work as extracurricular, when it may be some of the most important work happening in the organization.

How Roles and Departments Will Evolve

Five years from now, most of us will not do the same work in the same way we do today, regardless of role, function, or industry. Not the what, and not the how.

Expertise Will Not Equal Value

At a personal level, this means a fundamental shift in identity. Value will no longer come from expertise alone or from owning a function. It will come from the ability to make sense of complexity, ask better questions, connect signals across domains, and help others navigate decisions that have no clear precedent.

This also means that departments, as we know them today, will matter less than the work they enable. Boundaries between HR, strategy, operations, and technology will continue to blur because designing work, developing people, and deploying intelligence will become inseparable activities.

In that sense, the future is about recognizing that none of us gets to stay exactly where we are, and deciding who we want to become next.

Why Each HR Leader Must Personally Own the AI Transition

The pace and scale of change are too fast for anyone else to figure this out. No universal playbook exists for which tools to use, which processes to change, or where your personal value will sit in an AI-shaped world. That work is now deeply individual, and each of us must own it.

This means deliberately making room for it. You must set aside time, attention, and even budget to understand what’s happening, explore, test, and experiment.

In the short term, this often slows delivery. You may produce less, or take longer, because you’re learning to work differently. But that investment is unavoidable. Waiting for clarity or for someone else to design the answer is no longer an option.

What matters is not just what you offload to AI, but how you redesign the value you create in its place. The real work involves understanding where your judgment, perspective, and experience make a difference and then intentionally building your role around that, rather than around tasks becoming easier to automate every day.

For leaders more broadly, the advice is similar but amplified. You can’t delegate this moment. Leaders must model the behavior they want to see. Make space for learning, allow short-term inefficiencies in service of long-term capability, and be explicit about what kinds of work deserve human attention.

By David Rice

David Rice is a long time journalist and editor who specializes in covering human resources and leadership topics. His career has seen him focus on a variety of industries for both print and digital publications in the United States and UK.
