If you’re out there being told to slap AI tools onto everything and call it “digital transformation,” this episode is your reality check. I sat down with Darren Murph—yes, the remote‑work oracle behind GitLab’s all‑remote strategy—to pull back the curtain on what needs to exist before you ever type “chatbot” or “LLM integration” into your roadmap.
We dug into why good documentation isn’t optional anymore, why remote‑work lessons are now directly relevant to AI adoption, and how companies that rushed ahead without building infrastructure are setting themselves up for a trust disaster. In short: if your data, your knowledge systems, and your culture aren’t ready for AI, this technology is not your solution—it’s your liability.
What You’ll Learn
- Why knowledge infrastructure matters more than tools when you’re layering AI onto an organization.
- How the fundamentals of remote work (async workflows, transparency, writing culture) set the stage for good AI adoption.
- The human dimensions of this transition—how to lead teams who feel threatened, how to align incentives, and how to govern innovation without killing it.
- What “culture in a distributed world” actually looks like when you’re also bringing AI into the mix—not as HR fluff, but as operational truth.
Key Takeaways
- Documentation ≈ trust capital. If your team pulls up an AI tool only to hit bad or misleading data repeatedly, you’ve just eroded trust—not just in the tech, but in management. As Darren says: “If AI leads you astray four or five times in a row… you’re going to be much less trusting of that technology.”
- Remote work = writing culture. The shift to asynchronous workflows over the past few years wasn’t just about Zoom fatigue—it trained companies in self‑service knowledge, clear docs, and distributed decision‑making. Those habits map directly into what AI adoption demands.
- Top-down + bottom-up both matter. You need vision and governance for AI (where are we going, what are we trying to achieve) and incentives and infrastructure that let people play and experiment. If you only have mandates and no permission to experiment, you’ll get compliance, not creativity.
- Incentives still matter. Changing the “what you get rewarded for” is essential. If you tell a team: “Use AI to save 20% time,” but you reward them for hours logged or face‑time in the office, things don’t change—they just adopt a new tool without behavior change.
- Hybrid or remote, you’re distributed. Whether you’re entirely remote, hybrid, or office‑based—if you’re using AI and modern workflows, you’re effectively distributed. The patterns carry over: async, documentation, transparency.
- Humanity isn’t optional. With AI taking over polished, routine work, the human stuff becomes more visible: improvisation, empathy, culture. Encourage your people to bring their full selves — and your systems to reflect that.
- Governance = two prongs, not one. The best companies aren’t either “free-for-all” or “AI command-center locked down.” They define a shared vision (we’re going here) and let people roam within it. That gives you both boundary and freedom.
Chapters
- 0:00 – Intro & question: What foundational work do organizations need before layering AI?
- 2:10 – Darren: Many organizations rush to layer AI over what they currently have; they should step back.
- 4:49 – Discussion: How documentation hygiene is essential for trust in AI.
- 6:54 – Assessment: How ready are most organizations for this shift?
- 9:08 – Lessons from remote work that carry into AI adoption: transparency & async.
- 12:02 – Why AI reinforces the need for distributed‑friendly practices.
- 14:22 – Teams’ fear of AI: How to guide them through the transition.
- 16:07 – Incentives and operationalizing AI gains: Profit‑sharing, hackathons, dedicated teams.
- 18:12 – Governance: Balancing creativity and guardrails in AI use.
- 21:18 – Purpose, clarity & structure: Why they matter now more than ever.
- 22:53 – Culture in an AI‑driven remote workplace: Embracing humanity outside work and letting it back in.
- 25:04 – Outro & final thoughts.
Meet Our Guest

Darren Murph is a globally recognized Future of Work Architect, consultant, speaker, and author, celebrated as an “oracle of remote work” by CNBC and featured among Forbes’ Future of Work 50 for his trail-blazing contributions to distributed work and organizational design. Drawing on 15+ years of experience leading remote and hybrid teams, Darren co-authored the GitLab Remote Playbook and guides companies in building high-performance, resilience-driven operations through inclusivity, asynchronous workflows, and strategic communication.
Related Links:
- Join the People Managing People community forum
- Subscribe to the newsletter to get our latest articles and podcasts
- Connect with Darren on LinkedIn
- Check out Darren’s website
Related articles and podcasts:
David Rice: A lot of people out there, they're rushing to adopt AI. What foundational work do organizations need to do before layering AI into existing systems?
Darren Murph: I see a lot of organizations rushing to layer it over whatever they currently have, looking for sparks of efficiency without stepping back and building a great knowledge infrastructure, creating a one star experience for all employees.
David Rice: How does AI make documentation hygiene even more essential for trust and adoption?
Darren Murph: In a distributed world, you trust information even more than people. If AI leads you astray four or five times in a row, you're going to be much less trusting of that technology, not just that day, but every day going forward.
David Rice: What specific lessons from distributed work do you see kind of carrying over into how companies will successfully adopt AI?
Darren Murph: Two major first principles — transparency and asynchronous workflows. The more that you've shifted your culture into a writing culture, the better off you are now that we are moving from the remote era to the AI era. I would encourage leaders to take a step back and make sure that you have the right foundations in project management and knowledge management.
David Rice: Welcome to The People Managing People Podcast — the show where we help leaders keep work human in the era of AI. I'm your host, David Rice. And on today's show I'm joined by Darren Murph. Now you may know him as the go-to guy on all things remote work. He's the former head of remote at GitLab and the founder of Page 52 Consulting.
We're gonna be chatting about how AI is inherently a remote work tool and how you can use it to build remote work cultures.
So Darren, welcome!
Darren Murph: Hey, thanks for having me.
David Rice: Alright, so just wanna jump right into it. You know, a lot of people are out there, they're rushing to adopt AI and of course it's everybody's talking like garbage in, garbage out.
That rule always applies. Certainly applies here. What foundational work do you think organizations need to do before layering AI into existing systems, especially when we think about remote work environments?
Darren Murph: Yeah, this is a great question. I see a lot of organizations rushing to adopt AI and layer it over whatever they currently have, and then they're looking for sparks of efficiency or movement or some sort of silver bullet for the future.
But I would encourage leaders to take a step back, take a pause, and make sure that you have the right foundations in project management and knowledge management. And the knowledge one is the most important. So I wanna share two examples to kind of paint the picture of what I mean here.
Imagine that your company is a widget that anyone in the world can buy on Amazon. It gets shipped to their mailbox, they open it and shake it out, and your widget plops on the counter, and they say, how am I supposed to operate this thing? So then they look in the envelope and shake it some more, expecting an operating manual to fall out.
Now in the absence of an operating manual, how do you think people are going to know how to use the thing that you've built? Well, a lot of trial and error. This is kind of like onboarding into most companies right now that have not codified the operating rhythms of their company. So what does this look like in the age of AI?
Well, everything compounds, so if you have everything well documented, that's going to compound into AI having a lot of great resources to extract from. It compounds in the opposite direction too. So imagine you're a ticketing platform and you sell concert tickets, and there's this upcoming concert on your platform and tens of thousands of people show up to buy tickets.
In addition to buying tickets, they wanna use AI to figure out what's the best value for where I can sit. And they also want to take things into consideration, like where is the sunlight going to be coming in at the time of day that this concert is scheduled for? And because I'm a bass player, I actually want to know which seats I should optimize for based on where the band will be set up.
Is the bass player going to be on stage left or stage right? And many other things that you could imagine AI helping with. So imagine they show up to this ticketing site, but you, the leader, have built infrastructure that has an old seating chart with an old incarnation of the band and the wrong layout.
Some of the seats aren't even mapped yet. So as a consumer trying to use AI to navigate this, you're going to have an incredibly frustrating experience. And it's likely that you'll not only never come back to this platform, but you'll find any and all areas on the internet to leave a one-star review.
And so I worry that for leaders who are layering AI now, without stepping back and building a great knowledge infrastructure, you're actually creating a one star experience for all of your employees.
David Rice: That's a great analogy, and I really liked, you know, sort of the picture that it paints. And like remote work, you know, this era has taught companies a lot about the importance of documentation and single sources of truth. But I'm curious, you know, how does AI make documentation hygiene maybe even more essential for trust and adoption?
Darren Murph: We talk about trust as the speed at which your business is going to run. In a primarily co-located world, where most people go to the same office to do work together, you really want to be able to build trust with the other humans that you are in close proximity to. But as the world becomes more distributed, you need to be able to trust information even more than people. And this is especially true in the age of AI.
I mean, you jokingly mentioned hallucinations, but seriously, when you are engaging with AI, if it leads you astray four or five times in a row, and your reputation or the outcome of a project depends on that, you're going to be much less trusting of that technology, not just that day, but every day going forward.
And so, while documentation was perhaps a nice-to-have in the past, because you could always use synchronous meetings and verbalization to patch over communication gaps, that's no longer true with AI. You can't ask AI for input on a certain piece of information and then say, whatever gaps you find in the knowledge base, go ask Tim, and then come back with your complete analysis.
It can't just phone Tim up and fill in those gaps. At least not yet. And even if it could, I would say that's probably not the most efficient way to scale information.
David Rice: Well, you bring up an interesting point there. There's a lot of bad data floating around in, I would say, the majority of organizations, quite frankly.
And so if we're looking at what information you can trust, and what information you can feed it and trust that what you're gonna get back is real, I'd say we've got a lot of work to do on the back end to get ourselves ready. Like, how would you grade most organizations' readiness in this area?
Darren Murph: Great question. So I work with organizations all the time on building out this infrastructure, and before we do any work, we actually do an analysis, and the output of it puts them on a curve. And it never fails: they think they're much further along on the curve than they actually are. And so in most cases, they are ready to adopt these infrastructure improvements.
They recognize and see the need of it. They're not saying that this is unimportant, but they have precious little actually implemented. And a telltale sign is do you treat knowledge like a product? Do you actually have someone internally that is either a chief documentarian or a chief librarian, or do you have a number of people that are running this as a product?
Think about security in an organization. This isn't something you just wing. Most organizations have a chief security officer, or they have systems in place. They have whole teams in place, because you can't assume that every employee joining your company is going to be a master of security. The same is true with knowledge and documentation. Unless you came up with a journalism degree, most people aren't trained in how to think about taxonomy and knowledge management, but in the age of AI, this is so important within a company. Most people who are doing ChatGPT searches on the internet can get great results because it's looking at the entirety of the internet.
But when you scope that down to just what's in your organization, you can't go outside of those walls. It's on the company to say, we need to take knowledge as seriously as we do security. It's time to actually invest in a knowledge base and put some systems, and potentially people, around building that out.
David Rice: Yeah, it's interesting you say that about orgs. It's like a lot of us, right? We all think we're more advanced in our use of AI than maybe we are, so it's very fitting, actually, that it extrapolates out to the org level. I'm curious, the remote work wave that came with COVID maybe was the best preparation for this next era of work that we could have had.
I'm wondering, what specific lessons from distributed work, and that time we were all forced into it, do you see kind of carrying over into how companies will successfully adopt and deploy AI?
Darren Murph: Two major first principles of remote work that carry over well are transparency and asynchronous first workflows.
So companies quickly found when they all went remote that they had to be much more transparent than before because everyone needed information and you couldn't all be in the same place at all times. And so you had to quickly figure out, how do we codify all of this information that's in our minds and make it accessible and searchable?
This is paying dividends for companies that really invested in that during COVID, now that you layer AI on top of it. Their output is much better because they've already done the upfront work of codifying what was in people's minds. The other part of this is asynchronous workflows. I'm sure you've seen so much data over the pandemic on the multiple-hundred-percentage-point uptick in the number of team meetings that people engaged in, because that was the only way that they knew to get work done, and leaders quickly realized that this was not sustainable.
They implemented things like project management tools, Asana or ClickUp. These are great examples of moving work out of synchronous meetings into tools that are actually designed to move large projects forward. And so for organizations that did that, guess what: all of those projects are codified in a tool instead of just being out in the ether in a meeting.
Now AI can look into that as well. Said another way: the more that you've codified, the more that you've shifted your culture into a writing culture, the better off you are now that we are moving from the remote era to the AI era.
David Rice: Imagine having access to the world's best talent, whether it's that engineer in São Paulo, that head of sales in Dublin, or that incredible designer in Cape Town. Your next great hire could be anywhere in the world. With Oyster, they don't have to be the one that got away. Oyster helps companies hire talent globally, run accurate and on-time payroll, and stay compliant every step of the way. Build your dream team and grow with confidence because the world truly is your oyster.
It's so interesting every time I hear you talk about this and about a writing culture. It's funny, 'cause I think back to when I was coming outta school and people were like, ah, you know, your skillset might not be that valuable in 10 years. And actually, it turned out to be quite valuable.
It is actually sort of the key, I think, to this next chapter, which I'm grateful for. Even in companies calling employees back to the office, you know, a lot of 'em are still distributed across different cities and time zones. How does AI kinda reinforce the need for distributed-friendly practices? Because even in this new reality, even when we're co-located, we're still separate so much of the time, hybrid in different cities.
So I, I'm curious what you think about that.
Darren Murph: Yeah, look at it this way. If you removed the physical office from most multinationals, business would go on, but if you removed the internet from most multinationals, their entire business would grind to a halt, probably within a week. So which one is most important?
And so I chuckle a bit at this, because even organizations that are making a huge deal about enforcing and mandating a return to office, unless they literally have one office where everyone is on one floor, they need to embrace remote-first principles. There's been research done on this.
Even if you're in one skyscraper, the employee on floor seven and the employee on floor 41 almost never engage with each other in person, so they might as well be oceans away. And most multinationals have multiple offices, and these span different cultures and languages and time zones. And so whether they want to identify as such or not, the vast majority of organizations in 2025 and beyond are distributed.
And so ignoring that reality just means that they're falling further and further behind the curve. When you think about how people engage with AI, even outside of their organization, it is not a physical place that they go. They don't get in a vehicle and drive down to the local Lions club in order to engage with AI.
They just use whatever device is in front of them. And so the further along we go, I would posit that we are going to become a more distributed working society, not a less distributed one, and AI is only furthering that reality.
David Rice: I feel like that's the premise for a movie. You know, you go down to the Lions Club to talk to the AI. It's like Zoltar in Big, you know?
Darren Murph: Yeah. It's the new version of the payphone. Like, we actually have to drive somewhere to make a phone call. I don't know, I think there's another season of The Wire in there.
David Rice: I love it. Some employees, when we think about AI adoption, some folks are fearing it, right? And they see it as sort of like they're training their replacement.
And I think it's a very logical, sort of valid concern for a lot of folks. It seems to be sort of, we're in this phase of shifting our value away from tasks toward learning, adaptability, creativity. I'm curious, how do you see leaders guiding teams through that transition, especially in remote settings?
Darren Murph: Yeah. This is a tough challenge because it's not about the ones and zeros at this point. It's about the human heart and the human mind, and psychology is complicated. For people that have made their careers out of being great at doing a task, and then creating visibility around doing that task, this is a bit of an unsettling time, because now the future prize is actually going to be: who is the most innovative in discovering solutions, using new technologies that enable people other than themselves to accomplish tasks? And so I think the evolution that's happening is really as simple as what I just articulated, but for the human mind, people need a leader to guide them and coax them, and honestly, give them some grace as they go through this transition.
I would actually look to the great Charlie Munger, who said: show me the incentives, and I'll show you the outcome. And so for leaders who are trying to galvanize a team around doing things differently, think about the incentives. If your current organization is incentivized around tasks, you can't just implement this massive sea change around wanting people to think about their work differently and yet not change the incentives.
And so as you're guiding people and coaching people through the change, also go back to your total rewards and ask yourself, hey, do we have the right systems in place to make sure that we're incentivizing people to go along with this versus resisting it?
David Rice: In some of your work, obviously, you've talked quite a bit about culture, right, and cultural shifts. And I'm curious, when you talk about incentives like that, do we need new, sort of AI-era incentive models? Maybe it's profit sharing, or anything that can reduce fear and encourage innovation?
Darren Murph: Yeah. This is a great example of not reinventing the wheel. Speaking of Charlie Munger, he has this wonderful example from Xerox many decades ago, where they introduced a new model, and a year later they realized that it was selling incredibly poorly. And they thought, is it marketing? Are we not conveying to the market that the new model is better than the old model?
But then they went and looked at the sales incentive structure, and they realized that it had never been updated. And so the sales team was still incentivized to make more from selling the old unit. So somehow they were convincing people that the old model was even better than the new model, in order to align themselves with the incentives that were in place.
And so this is just quintessential profit sharing. When I look at leaders saying, hey, we want everyone here to use AI to make themselves 20% more efficient, or to save the company 20% in whatever metric it is: have you considered giving some of that back to the individual? This is a very simple incentive structure.
Companies do this all the time with hackathons. They bring all of their engineers together for a week and they say, whoever can build the next major iteration of our app, you get a bonus. And so it galvanizes a team around, Hey, now we are aligned towards a goal. Let's make progress. It really is to me as simple as consider profit sharing.
If a company or an individual or a team generates a 10 or 15% templatized, repeatable type of efficiency gain, give some of that back to them. Whether that's time back in their day or more flexibility or a monetary bonus. There's lots of ways that you can reward someone. But give some of that back. I don't think you should ask your team to make all of these gains for you, and then the organization just absorbs all of it.
David Rice: Yeah, I'd agree. And I've seen a lot of hackathons. It's interesting too, 'cause so much of what comes outta those often struggles to be operationalized. And maybe part of that incentive structure is figuring out how we're going to operationalize it, so that it gets incentivized even outside of the engineering team, in that example.
Darren Murph: One other point that I would mention here, and it is a bit wild to think that this is out-of-the-box thinking, but consider spinning up a dedicated team to find these gains. I mean, imagine how ludicrous it would be if you put out to the entire organization: hey, make our company 20% more secure. Do a lot of your own homework on security protocols of the day, and phishing, and all of these things that barely have anything to do with your job. The company is going to be much more efficient if it actually puts the security team on that task. And I would say that there's probably a lot of people in your company right now that are just huge personal fans of AI, and they are already spending a lot of their personal time becoming experts in this.
And so you may not necessarily need to hire an external expert, or maybe you partner with one that can foster a team within your company, but there is something to be said about letting the experts be the experts.
David Rice: Absolutely. It seems to me like AI adoption to this point has been a case of two extremes, right? Leaders who either let people go at it, and they're sort of just off on their own, or they kind of over-govern AI use and get very concerned with guardrails. Right?
And I'm curious, what does balanced governance look like, that allows creativity without chaos and suits whatever work model you're in, whether it's remote or, you know, you're hybrid, that kind of thing?
Darren Murph: I would take a two-pronged approach. One would be laying out a clear vision of what we as an organization are trying to achieve, and the other would be creating an incentive structure for people to innovate and expand beyond what the leaders are currently capable of thinking of. So this allows both of those things to happen.
You do allow people to "go wild," but you have an end in mind. They are incentivized around specific metrics, maybe efficiency, for example. So at least they're going wild toward a certain goal that you've agreed on. And then on the other side of that, you govern around what we are trying to achieve as an organization.
How does this shape our strategy? And then how can we cascade that to everybody that works here, give them some sort of vision or hope, some sort of inkling of what the future may hold? Not just, hey, let's get better at AI, or let's make sure that we have AI because everyone else is doing it and we're falling for social proof.
So I think if you do those two things in parallel, you'll unlock the true innovators to run wild in a direction that you want them to. And for everyone else, there's less fear involved, because they can say, ah, I see where we are going as a team, and I understand how AI, like any technology, is going to be one additional thing that's going to get us closer to achieving that.
David Rice: Yeah. We've talked a lot in recent months about purpose, right? And that sense of, am I contributing to the bigger goal, and does what I'm doing matter? When you can communicate that, you get not just buy-in; the commitment level to executing is so much higher.
Darren Murph: It reminds me of a study that Google did, I think it was over 15 years ago, where they looked into the five key dynamics of an effective team, and one of them was clarity and structure.
And I look at teams today, and I would say maybe that's the most important one when you're in a distributed environment. If you give people clarity and structure on what are we aiming for, it allows them to align their purpose with the team and company purpose. And so they feel less adrift and they're more motivated to present their innovation to the company if they know that it's going in a direction that has been clearly communicated.
And interestingly enough, this loops all the way back to the beginning of our conversation: why have a writing culture? Why write things down? Why insist on rigor around documentation? Because that's where clarity and structure come from. And there have been so many studies done that show people thrive on this, they love this, and if something is codified and written down, now you have a baseline for making it better.
So even if it's imperfect, even if it's a work in progress, things that are written down can be changed and evolved with more input.
David Rice: My final question for you is just sort of around, you know, with AI handling the polished and the routine, it seems like spontaneous, imperfect human moments are kind of becoming a different currency in the workplace, right?
So how do organizations encourage humanity in an increasingly AI driven remote workplace?
Darren Murph: So funny, right? The "soft skills" are becoming all the more important. When AI can write your emails, like, what's left for you to do? But to me, I think this unlocks a stance that I've had for many years, which is that, increasingly, culture is going to be built outside of the workplace.
Progressive organizations are going to not only accept that, they're going to empower it, and then they're going to create avenues for that culture to be funneled back into the organization. What does that mean? It means that in a distributed world where AI is potentially a colleague, it is less likely that we're going to get all of our culture, all of our social impact, from work.
And this is a big pill to swallow for people leaders who have always wanted to control that experience, but you have to let go. An example that I'll share with you: during the pandemic, a lot of people came to me and they said, hey, the Zoom happy hour worked the first time, but then no one showed up for the second one.
How do we actually build team culture without forcing people into a Zoom call? And I said, well, if that hour is already a sunk cost, just deploy everybody in your organization out into their local environment for that same hour each week. And just tell them to do something that's meaningful to them, whether that's volunteering at an orphanage or volunteering for a food bank or Habitat for Humanity.
I said, ask them to wear company swag, wear your logo, and then maybe take a selfie of themselves while they're engaged in that work. And then when they come back to their next team meeting, spend the first 10 minutes talking about what everyone did. Share those selfies, share those aha moments. That is real, genuine, authentic connection.
And so, although that culture was actually built outside of the workplace, the organization is responsible for building a system to enable it to come back in. And I would argue that is much more genuine and authentic, and this is the opportunity that we have to stop manufacturing fun and actually enable people to bring that into the workplace.
David Rice: Yeah, and when you're doing these things right, purpose takes care of itself in a lot of ways. Right?
Darren Murph: When you let humans be humans, they really enjoy being themselves.
David Rice: Exactly. Well, Darren, I wanna thank you for coming on the show today. It's been great as always chatting with you.
Darren Murph: Absolutely, man. It's a pleasure.
And look, if any of your listeners are interested in what I'm building, I have a great team that would love to get you to some of the areas that we've discussed. And so reach out to me on LinkedIn or darren@darrenmurph.com.
David Rice: Excellent.
Thank you for joining us today on The People Managing People Podcast. If today's conversation gave you a new perspective, make sure to follow the show on Apple Podcasts, Spotify, or whatever you use to listen to your podcasts. You can also find more practical frameworks, case studies, and tools for modern leadership at peoplemanagingpeople.com, and of course in our newsletter, so get signed up for that.
And until next time, build a writing culture.
