Company A spent $8 million on AI transformation last year. Company B spent half that. Yet by every measure that matters—adoption rates, productivity gains, employee satisfaction, and return on investment—Company B is running circles around its better-funded competitor.
The difference isn't in the technology. Both companies licensed the same enterprise LLMs. Both hired consultants. Both announced their AI initiatives with the fanfare executives love. The difference lies in a single decision made at the start: how they allocated their budgets.
Company A followed the well-worn path: 60% on technology, 15% on training, 25% on "everything else." Company B inverted the pyramid entirely: 70% on people and processes, 20% on infrastructure, 10% on algorithms.
The pattern Company B followed wasn't invented in a vacuum. It emerged from research by Boston Consulting Group and MIT Sloan examining hundreds of AI transformations. Their finding was stark: the companies seeing real results from AI weren't the ones buying the most sophisticated tools. They were the ones investing the most in their people.
But as always, context is key. That research studied enterprises with $10 million-plus AI budgets: companies with dedicated transformation teams, mature data infrastructure, and armies of specialists.
For mid-market companies with 5,000 employees or fewer, the constraints look fundamentally different. Many lack basic data infrastructure. They're buying AI through SaaS subscriptions where the algorithm costs are baked into monthly fees. And the same percentages translate into very different dollar figures when you're working with $2 million instead of $20 million.
So does the 70-20-10 framework still apply? Or do mid-market companies need a different playbook entirely?
Unpacking the Framework
The 70-20-10 rule, at its core, describes where successful AI transformations actually spend their money and energy. The numbers represent allocation across three fundamental buckets:
- The 70%: People and Processes. This is change management programs, role-specific training, workflow redesign, governance frameworks, communication infrastructure, manager coaching, and user support. It's the unsexy work of clarifying who owns what, how decisions flow differently with AI in the mix, and what incentives need to shift.
- The 20%: Technology Infrastructure. This covers data preparation, integration layers, security frameworks, monitoring systems, and scalability architecture. It's everything that needs to work before the AI tools can actually function in your environment.
- The 10%: Algorithms and Models. These are the AI tools themselves—licenses, API costs, model fine-tuning, and vendor selection.
The pattern emerged from hard data. BCG's research showed that companies following this allocation saw dramatically higher adoption rates and measurable productivity gains. MIT Sloan found that 70% of AI's value depends on complementary investments in people and process, not on the sophistication of the technology.
The reason is that AI models are rapidly commoditizing. Any mid-market company can access the same frontier models from OpenAI, Anthropic, or Google. But organizational change capacity? That's not for sale.
The ability to redesign workflows, train managers to coach AI adoption, and build governance frameworks that enable rather than restrict is the differentiator.
Yet when mid-market leaders look at this framework, a reasonable question emerges: Can companies without mature data infrastructure really spend only 20% there? What about organizations buying SaaS AI products where the algorithm costs are bundled into subscription fees? Does the math still work?
Whitney Munro, founder of FLEX Partners, which guides companies through AI transformations, sees this tension play out constantly.
"The 70-20-10 framing is generally right," she says, "but where you see the damage start is a lack of sequencing and intent. Skipping the 'unsexy' work of clarifying ownership, purpose, incentives, workflows, etc. before even getting to the tools."
The framework isn't meant to be rigid; it's a principle. AI transformation is primarily a people and organizational challenge, not a technology challenge. For mid-market companies, that principle holds, but the specific percentages may need adjustment based on their starting point.
Why Companies Get the Allocation Wrong
The pressure to invest in technology first comes from every direction. AI vendors, naturally, sell technology. Board members want to see dashboards and tools, visible proof that the company is "doing AI." Investors ask about AI initiatives in earnings calls. IT departments speak the language of infrastructure and models.
What's harder to see, harder to measure, and harder to sell is the slow work of capability building. Training programs don't make for exciting board presentations. Workflow redesign doesn't photograph well for the annual report. Governance frameworks sound bureaucratic rather than innovative.
For mid-market companies specifically, several traps make the technology-first approach even more seductive:
- Resource scarcity creates a false shortcut. "We can't afford a dedicated AI transformation team, so let's just buy the tool and figure it out as we go." The logic seems sound until you realize the tool sits unused because no one redesigned the workflows around it.
- Speed pressure overwhelms thoughtful planning. Scale-ups especially feel the need to move fast. Taking three months to train managers and redesign processes feels slow compared to flipping on a new AI tool next week.
- Vendor dynamics obscure true costs. When you buy an enterprise SaaS AI product, the algorithm development and much of the infrastructure are bundled into a monthly subscription. This makes it seem like you're spending less on technology than you actually are, and makes it easier to underspend on the people side.
- Role overload creates an impossible situation. In mid-market companies, the same people who need to maintain current operations are the ones expected to transform them. Without dedicated resources for change management, transformation becomes a side project that never gets proper attention.
Clint Riley, COO at Globe Midwest, has led customer experience operations through multiple technology deployments. He sees the pattern repeat.
"Leaders often pour dollars into AI infrastructure and see limited results. The tools sit unused because teams aren't prepared, don't trust them, or stick to old habits. The real difference comes from investing first in people and processes," he says.
The cost misconception runs deep. Training feels expensive in the moment: $300,000 for focused AI training for 500 employees sounds like a lot. But that investment can cut routine errors by 40% in the first year.
Another $200,000 to redesign key workflows with AI integrated delivers efficiency improvements you can measure quarter over quarter. Compare that to $800,000 spent on AI tools that achieve 15% adoption because no one invested in the change management to support them.
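To make that comparison concrete, here's a minimal back-of-envelope sketch in Python. The assumption that the $800K tooling spend covers the same 500 employees is introduced only for illustration; the other figures mirror the example above.

```python
# Back-of-envelope comparison of the two spending patterns described above.
# Assumption (illustrative only): the $800K tooling spend covers the same
# 500 employees as the training example.

EMPLOYEES = 500

# People-first spend: training plus workflow redesign
training_cost = 300_000
workflow_redesign_cost = 200_000
people_first_total = training_cost + workflow_redesign_cost
cost_per_employee_reached = people_first_total / EMPLOYEES        # $1,000 each

# Technology-first spend: tools that only 15% of employees actually adopt
tools_cost = 800_000
adoption_rate = 0.15
active_users = EMPLOYEES * adoption_rate                          # 75 people
cost_per_active_user = tools_cost / active_users                  # ~$10,667 each

print(f"People-first: ${cost_per_employee_reached:,.0f} per employee reached")
print(f"Tech-first:   ${cost_per_active_user:,.0f} per active user")
```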
The shadow AI phenomenon that emerged in 2025 provides the clearest evidence that technology-first approaches create the problems they're meant to solve. Research shows 90% of employees are using AI tools, but only 40% of enterprises have officially provided them.
This gap exists because companies deployed technology without support, leaving employees to figure it out on their own and creating exactly the security and governance risks that leaders worry about.
What 70% on People Actually Looks Like
The 70-20-10 framework sounds straightforward until you try to operationalize it. What does it actually mean to spend 70% of your AI budget on "people and processes"?
For a mid-market company with a $2M AI transformation budget over the first year, a people-first allocation might look like this:
$1.2M (60%) on People & Processes:
- $400K: Dedicated transformation resources (2-3 FTEs who understand both change management and AI capabilities)
- $300K: External expertise for change management, workflow design, and governance frameworks
- $200K: Training program development and delivery (not one-time sessions, but ongoing enablement)
- $150K: Manager enablement and coaching infrastructure
- $150K: Communication systems, feedback mechanisms, and iteration cycles
$600K (30%) on Infrastructure:
- $300K: Data preparation, integration, and quality improvement
- $150K: Security, compliance, and monitoring systems
- $150K: Analytics infrastructure to track both technical and human outcomes
$200K (10%) on AI Tools/Algorithms:
- Enterprise LLM subscriptions
- Specialized AI applications
- API costs and usage fees
Note the mid-market adjustment: this is 60-30-10 rather than the pure 70-20-10 framework. Why? Mid-market companies often have larger infrastructure gaps than enterprises with modern data foundations. Legacy systems, data silos, and technical debt push the infrastructure share higher.
The key principle remains: people investment must equal or exceed technology investment. If you're spending 60% on tools and infrastructure combined, with only 40% on people and processes, the research suggests you're leaving significant value on the table.
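The arithmetic itself is simple enough to sanity-check in a few lines of Python. The sketch below uses the $2M example above; the bucket names and the people-versus-technology check are just a direct translation of the principle, not a tool from the research.

```python
# Minimal sketch: turn a percentage split into dollar allocations for the
# $2M example above, then check the people-vs-technology principle.

def allocate(budget: float, split: dict) -> dict:
    """split maps bucket name -> fraction of budget (fractions must sum to 1)."""
    assert abs(sum(split.values()) - 1.0) < 1e-9, "split must sum to 100%"
    return {bucket: budget * share for bucket, share in split.items()}

budget = 2_000_000
mid_market_split = {
    "people_and_processes": 0.60,  # adjusted down from 70% for infrastructure gaps
    "infrastructure": 0.30,
    "algorithms": 0.10,
}

allocation = allocate(budget, mid_market_split)
for bucket, dollars in allocation.items():
    print(f"{bucket:22s} ${dollars:,.0f}")

# People investment should equal or exceed technology investment
technology = allocation["infrastructure"] + allocation["algorithms"]
print("people >= technology:", allocation["people_and_processes"] >= technology)
```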
What the "People" Investment Actually Buys:
When executives see "$400K for transformation resources," they often ask: what are those people actually doing?
Dedicated transformation resources:
- Conducting workflow analysis before technology deployment
- Designing role-specific training (a finance analyst needs different AI skills than a customer service rep)
- Building governance frameworks that enable rather than block
- Creating feedback loops to iterate based on actual usage
- Coordinating across IT, HR, and business units
Manager enablement:
- Preparing managers to coach teams on AI tools (not just use them personally)
- Helping managers understand how performance evaluation shifts when AI assists work
- Supporting managers through their own identity shift as work changes
- Creating manager communities to share challenges and solutions
Communication and feedback infrastructure:
- Regular all-hands communications about transformation progress
- Channels for employees to surface issues and concerns
- Systems to capture what's working and what's not
- Transparent decision-making about which AI tools get deployed and why
Training that works:
- Not one-time workshops, but ongoing learning programs
- Hands-on practice environments where failure is safe
- Just-in-time learning tied to actual work needs
- Peer support networks for troubleshooting
Skimp on these investments and the results are fairly predictable: low adoption rates, employee resistance, shadow AI proliferation, and ultimately wasted technology spending. For C-suite leaders or board members who dismiss this as "foo-foo" or as being too "nice to employees," the critical reframe is that these investments are about capturing the value the technology makes possible but can't deliver on its own.
"Boards love seeing new tech roll out," Riley acknowledges. "So frame the people focus as smart risk management with strong, compounding returns: improved productivity, better retention, and business gains that build over time. Tech alone tends to peak quickly then level off."
The Mid-Market Reality Check
But that allocation assumes a relatively mature starting point. What about companies where the numbers need to shift more dramatically?
Alix Gallardo, Chief Product Officer at Invent, helps mid-market companies navigate exactly these scenarios. Her perspective on SaaS AI changes the calculation.
"When you buy SaaS AI, the vendor's already done the heavy lifting, the tough and expensive model work plus a lot of infrastructure. So your question should be: 'With all that done, where do we focus our limited internal time and budget?'"
Her breakdown for a mostly off-the-shelf SaaS setup still puts people first, but adjusts for the reality that infrastructure challenges often loom larger for mid-market companies.
"Mid-market setups tend to be messy," Gallardo observes. "Lots of disconnected SaaS products, legacy systems, native integrations, patchy data flows, and small IT teams. So it's normal for infrastructure to take up to 25–35% early on."
Consider three scenarios where the allocation needs adjustment:
Scenario 1: The Infrastructure-Starved Company
A 2,000-employee manufacturing company has been around for 30 years. Data lives in silos across multiple systems. Customer information sits in one database, production data in another, quality metrics in spreadsheets. The IT team is five people.
For this company, the allocation might need to be 50% people, 40% infrastructure, 10% algorithms—at least initially. Without basic data infrastructure, AI tools can't function effectively. But even here, the people investment can't be sacrificed. It just needs to be strategic, focused on building the internal capability to make better infrastructure decisions and on preparing the workforce for changes once the infrastructure is in place.
Scenario 2: The Agile Scale-Up
An 800-employee software company has been cloud-native from day one. Systems talk to each other. The team is technically sophisticated. Infrastructure isn't the bottleneck.
Here, the allocation might be 75% people, 15% infrastructure, 10% algorithms. The company's rapid growth means culture and processes need extra investment to scale. With a strong technical foundation, the constraint is organizational, not technological.
Scenario 3: The Risk-Averse Mid-Market Company
A 4,500-employee financial services firm operates in a heavily regulated industry. The culture is change-resistant, risk management is paramount, and compliance requirements are extensive.
This company might allocate 65% to people, 20% to infrastructure, and 15% to algorithms. The higher algorithm spend isn't about chasing the latest models—it's about investing in AI tools that come with built-in governance features, audit trails, and transparency mechanisms that reduce compliance risk.
The diagnostic isn't complicated. Rate your company on four dimensions from 1 to 10:
- Current data infrastructure quality
- Organization's change capacity
- Technical talent density
- Regulatory and compliance complexity
Lower infrastructure and talent scores suggest you'll need more investment there. Higher change capacity means you can put more into people and processes. High regulatory complexity might justify more investment in sophisticated tools with governance built in.
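As a rough illustration, the sketch below turns those four scores into a suggested starting split. The weights are assumptions made up for demonstration, not figures from BCG, MIT Sloan, or any of the practitioners quoted here; treat the output as a conversation starter, not a prescription.

```python
# Rough, illustrative heuristic only. The weights below are assumptions for
# demonstration, not figures from the research cited in this article.
# Each score runs from 1 (weak) to 10 (strong); the output is a starting split.

def suggested_split(data_infrastructure, change_capacity,
                    talent_density, regulatory_complexity):
    people, infra, algorithms = 0.70, 0.20, 0.10   # 70-20-10 baseline

    # Weak data infrastructure or thin technical talent pulls budget toward infrastructure.
    infra_gap = (10 - data_infrastructure) + (10 - talent_density)   # 0..18
    shift = 0.01 * infra_gap
    infra += shift
    people -= shift

    # Heavy regulatory burden justifies somewhat more spend on governed tools.
    shift = 0.005 * regulatory_complexity
    algorithms += shift
    people -= shift

    # Strong change capacity lets you lean further into people and processes.
    if change_capacity >= 8:
        people += 0.05
        infra -= 0.05

    return {"people": round(people, 2),
            "infrastructure": round(infra, 2),
            "algorithms": round(algorithms, 2)}

# The infrastructure-starved manufacturer from Scenario 1, roughly scored:
print(suggested_split(data_infrastructure=3, change_capacity=5,
                      talent_density=4, regulatory_complexity=4))
# -> {'people': 0.55, 'infrastructure': 0.33, 'algorithms': 0.12}
```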
But across all scenarios, one principle holds: people investment must equal or exceed technology investment.
As Munro puts it: "Most organizations don't fail because of the AI itself, they fail because they are investing in prompts and expecting AI to be a magic solution rather than what they should be doing, which is ensuring their people know what AI is, what's possible, how to use it, what tools to use for their roles, and how to use it with integrity."
The Hidden Cost Problem
One reason companies get allocation wrong is that many don't actually know what they're spending in each bucket. The costs are distributed across departments and hidden in different budget lines.
Technology costs are usually visible: software licenses, cloud computing fees, vendor contracts. These show up clearly in IT budgets.
Infrastructure costs are sometimes hidden. Data engineering time gets buried in the IT department's operational budget. Integration work happens as part of "maintaining systems." Security reviews are someone's side project. Monitoring systems are an afterthought.
But people and process costs are often completely invisible. Employee time in training represents opportunity cost (work not done while people are learning). Manager coaching time doesn't appear in any budget. Process redesign efforts happen in conference rooms without clear allocation. Communication overhead is everywhere and nowhere. Governance meetings take time but rarely get counted as AI transformation costs.
Gallardo identifies three warning signs that your allocation is off, regardless of what your official budget says:
"A few power users love the AI tool, but most others ignore it. There's no clear change in decisions or workflows because of AI, just cool features but no real impact on how work gets done. New AI experiments are scattered all over the place with no shared standards, playbooks, metrics, case studies, or governance."
When you see these symptoms, "shifting your budget toward product management, operations, change management, and creating 'AI champions' moves the needle way more than throwing money at another integration project," she says.
Riley sees the same pattern: "Even when employees are quietly using AI tools on their own, the answer isn't stricter controls. It's guiding that energy with practical training and clear guidelines. Over-investing in people is rare, many companies actually do too little here, not too much."
To understand your true allocation, you need an honest audit. For the last quarter, account for:
- All AI-related software licenses, cloud costs, and vendor fees
- Time spent by IT and data teams on AI infrastructure, integration, and security
- Time employees spent in AI training (multiply hours by average hourly cost)
- Manager time spent coaching AI adoption
- Meeting time for governance, process redesign, and AI initiative planning
- External consultants and advisory costs
Categorize everything into the three buckets: algorithms, infrastructure, or people and processes. Most companies discover they're spending far more on technology and far less on people than they thought.
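A spreadsheet works fine for this, but for illustration here is a minimal sketch of the same audit in Python. The line items and dollar figures are hypothetical; the point is simply that the hidden people costs get totaled alongside the visible technology spend.

```python
# Minimal sketch of the allocation audit above. Line items and dollar figures
# are hypothetical; the point is that hidden people costs (training hours,
# manager coaching, governance meetings) get totaled alongside visible tech spend.

from collections import defaultdict

# (description, bucket, quarterly cost in dollars)
line_items = [
    ("LLM subscriptions and API fees",           "algorithms",           45_000),
    ("Vendor SaaS AI contracts",                 "algorithms",           60_000),
    ("Data engineering and integration time",    "infrastructure",       90_000),
    ("Security and compliance reviews",          "infrastructure",       25_000),
    ("Employee hours spent in AI training",      "people_and_processes", 70_000),
    ("Manager time coaching adoption",           "people_and_processes", 30_000),
    ("Governance and process-redesign meetings", "people_and_processes", 20_000),
    ("External change-management advisors",      "people_and_processes", 40_000),
]

totals = defaultdict(float)
for _, bucket, cost in line_items:
    totals[bucket] += cost

grand_total = sum(totals.values())
for bucket, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{bucket:22s} ${cost:>9,.0f}  ({cost / grand_total:.0%})")
```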
Why Getting It Right Matters
The performance differential between companies that get allocation right and those that don't is stark.
Research shows that organizations with human-centric approaches see 2.3 times more engaged employees and 1.5 times higher performance.
Companies where CHROs and CIOs work as true partners report being 15 times more productive in their AI initiatives. But this doesn't happen as often as you might think. The problem starts with a fundamental confusion about what CIOs should own versus what CHROs should own in AI transformation.
"What's fascinating is that the CIO has been assigned not only the design, test and deployment of tools, but also the adoption of them," observes David Swanagon, founder of the Machine Leadership Journal, based on his research into organizational AI readiness. "And I think one of the arguments that we're making through our research is that adoption should be owned by the CHRO, because it deals with culture, trust, autonomy, skills. The CIO should do the design test deployment, but stop there and then partner with the CHRO to manage the adoption."
This distinction is structural. When CIOs own both deployment and adoption, the result is predictable: technically sound solutions that nobody uses, or that create organizational friction because they weren't designed with culture and capability in mind.
The cost of getting allocation wrong compounds quickly. Klarna made headlines with its AI efficiency gains, then quietly reversed course when the costs to people and culture became clear.
Shadow AI proliferates when companies don't provide supported tools and training, creating the security risks leaders fear. Industry data shows 80% of AI projects fail to scale, often because they were deployed without the organizational support needed for adoption—sometimes leading to AI becoming a scapegoat for broader organizational failures.
Employee burnout and resistance become the hidden taxes on technology-first approaches. When tools are deployed without proper training and workflow redesign, employees experience AI as something done to them rather than for them. The tools add to their workload rather than reducing it. Resistance hardens.
"It's extremely dysfunctional and concerning to see the bulk of companies that reach out to us for help think that once they 'turn on' a tool, the magic will spontaneously begin," Munro says. The dysfunction isn't just wasteful, it's actively harmful to the organization's ability to transform.
Yet mid-market companies that get the allocation right have a structural advantage over enterprises. Shorter communication lines mean changes propagate faster. Faster decision cycles enable rapid iteration. Closer leadership-employee connections allow for more authentic change management. More agile process adaptation means workflows can evolve as the organization learns.
"As a COO who has led CX operations through multiple tech deployments, I approach AI the same way: people first, then strong processes to support them, and technology as the enabler," Riley explains. "When the human element leads, the whole transformation sticks and delivers the sustainable results that matter most."
The ROI is transformative. That $300,000 in focused AI training that cuts errors by 40%? That's not a one-time gain. It's a capability that improves quarterly as people get more skilled. The $200,000 in workflow redesign? Those improvements compound because you've built a muscle for process innovation.
From Rule to Principle
The 70-20-10 rule isn't a rigid formula to be followed blindly. It's a principle derived from observing which companies actually succeed with AI and the reality that transformation is primarily a people and organizational challenge, not a technology challenge.
For mid-market companies, this principle demands translation but not abandonment. Your infrastructure gaps might require 25-35% of budget rather than 20%. Your SaaS AI tools might bundle algorithm costs into subscription fees that shift how you calculate the 10%. Your specific industry constraints might require adjustments.
But the core truth remains: if you're spending more on technology than on people, you're almost certainly leaving significant value on the table.
Riley's framing is pragmatic: "For a $2M starting budget, I'd allocate roughly 65% to people via targeted training and change management support. About 25% should go to essentials like data access and basic cloud setup, with only 10% on algorithms, mostly selecting strong SaaS rather than custom builds."
Gallardo's perspective accounts for the SaaS reality most mid-market companies face: "Yes, the breakdown stays the same even if you never touch the model itself. For a mostly off-the-shelf setup: ~10% goes to algorithms, ~20% goes to infrastructure, ~70% goes to people and processes. This is where you should be investing the bulk of your budget."
The insight both share is that success shows up in adoption rates, employee confidence, and day-to-day results, not just in dollars spent. The technology you buy matters less than what you do to help people use it effectively.
The executive action is straightforward: Conduct your allocation audit this month. Calculate what you're truly spending across algorithms, infrastructure, and people when you include all the hidden costs. If technology exceeds people, you've found your problem.
Then do the work.
- Invest in the unsexy work of change management before deploying the next tool.
- Build role-specific training programs.
- Establish AI champions.
- Enable your managers to coach adoption.
- Redesign workflows before implementing technology.
- Create governance frameworks that enable rather than restrict.
Remember Munro's warning: "Where you see the damage start is a lack of sequencing and intent. Skipping the 'unsexy' work of clarifying ownership, purpose, incentives, workflows, etc. before even getting to the tools."
The technology will be there when you're ready for it. Your people won't wait forever.
