Companies are pouring 93% of their AI budgets into technology and only 7% into people. Deloitte CTO Bill Briggs called this finding "stunning" — and pointed to it as the defining management failure of the current AI era. Given what the data shows is happening as a result, the descriptor fits.
Most executives have been treating AI as a technology problem. They buy platforms, negotiate enterprise licenses, stand up infrastructure, and wait for the productivity gains to appear. The gains mostly don't. And a mounting pile of evidence — from Deloitte, McKinsey, MIT, and Wharton — now points to a single explanation: the technology isn't where the investment is needed most.
For mid-market companies specifically, this budget imbalance isn't just inefficient. It's dangerous. Unlike Fortune 500 firms with the organizational slack to absorb failed rollouts and retry with a different vendor, mid-sized companies generally need their first wave of AI investments to land. The stakes are different, and so is the calculus.
The 93/7 Split Is Already Backfiring
Briggs's observation comes from consistent patterns across Deloitte's surveys and client work: companies treat AI like a hardware purchase. They buy the thing, deploy it, and assume people will adapt.
They don't. Or at least, not in ways that deliver value.
The consequences are specific. According to Deloitte's data, trust in AI drops sharply from the C-suite to the frontlines, from roughly 70% at the executive level down to about 7% among front-line employees. That trust collapse triggers what Briggs calls "shadow AI": employees bypassing official tools and either abandoning AI entirely, or using unapproved tools in ways that create compliance and security risks. Pilots stall. Costs run over. Companies that applied AI on top of already-broken processes end up, in Briggs's words, "weaponizing inefficiency."
McKinsey's 2025 State of AI survey found that 88% of organizations are using AI in at least one business function, yet only 6% have achieved enterprise-wide transformation with measurable bottom-line impact. An MIT study put it even more starkly: 95% of AI projects deliver zero return on investment. The technology is widely deployed. The value is largely absent. That gap has one explanation: organizations aren't investing in their own capacity to use what they're buying.
Why Mid-Market Companies Face Different Stakes
Large enterprises have buffers that mid-market companies don't. A Fortune 500 firm with 80,000 employees can absorb a failed AI rollout. Write it off, regroup, try again with a different vendor — it hurts, but it doesn't threaten the core business. A mid-sized company with 500 to 2,000 employees doesn't have that luxury. Wasted AI investment is a strategic setback that can take years to recover from, especially when competitors are executing better on the first try.
There's a meaningful upside to the mid-market position, though. A joint Wharton School and GBK Collective report found that smaller enterprises report stronger AI ROI than their larger peers. The reason appears to be agility: smaller organizations move faster from experimentation to actual workflow integration. They don't need six layers of approval between "this AI tool could help us" and "this AI tool is embedded in how we work." That speed advantage is real and significant — but only when the organizational investment matches the tech investment. Right now, most mid-market companies are making the same budget mistake as the Fortune 500s, without the Fortune 500's capacity to absorb the fallout.
Most mid-market leadership teams can't answer this question off the top of their heads: of every dollar we're spending on AI this year, how much is going toward helping our people actually change how they work? The teams that can answer it — and don't like the answer — are already ahead of the ones who haven't thought to ask.
What the 7% Actually Needs to Fund
Most companies treat the "people side" of AI as a training event. Send employees to a half-day workshop, hand them a login, call it done. That's not people investment. That's the illusion of it.
Role Redefinition
AI changes which tasks matter, not just which tasks get automated. When an AI tool can draft a first-pass analysis in seconds, the analyst's job shifts from creating the analysis to pressure-testing it, contextualizing it, and acting on it. Without explicit guidance about how roles are evolving, employees either ignore the AI tool or use it in ways that create new problems.
Role redefinition means sitting down with function-level managers and answering basic questions: What does a good day's work look like now? What decisions stay with humans? Where does AI handle the first draft and a human handles the judgment call? This work doesn't happen in a licensing agreement. It happens in conversations, updated job descriptions, and revised performance metrics.
Prompt Literacy Programs
Most employees who "don't like AI" have actually encountered it at its worst: vague outputs from vague inputs. Prompt literacy is a learnable skill, and it dramatically changes the quality of results people get from these tools. A well-designed program can be delivered in a few sessions, customized by role, and tied directly to the tools employees are already supposed to use.
This isn't about turning everyone into a prompt engineer. Keep programs short and practical, and they clear the biggest source of early skepticism faster than any internal memo or executive announcement could.
Process Reengineering (Not Just AI-on-Top)
This is the one most companies skip, and it's where the real money is.
One of Briggs's sharpest warnings is about what happens when companies bolt AI onto existing broken processes. The AI doesn't fix the process. It accelerates the dysfunction. If your invoicing workflow has seven redundant approval steps, an AI tool that pushes those approvals faster isn't progress. It's faster chaos.
Real process reengineering means identifying high-friction, high-volume workflows and redesigning them with AI built in from the start, not bolted on afterward. This is uncomfortable work. It forces teams to question how they've been operating, surfaces turf battles, and exposes legacy habits that no one has been willing to confront. Companies that do it well consistently report it as the highest-return item in their AI budgets — and the one most executives postpone because it means having hard conversations about how work actually gets done.
Start by mapping three to five workflows where volume is high and friction is visible. Not theoretical pain points — actual ones where employees regularly complain or workarounds have quietly become standard operating procedure. Those are the reengineering targets worth acting on.
Internal AI Champions
A GitHub Enterprise study on AI champion programs found that top-down mandates don't drive adoption. Employees who feel AI was done to them rather than with them find quiet ways to ignore it.
Internal AI champions are employees — typically mid-level, cross-functional, and respected by their teams — who get early access to AI tools, dedicated time to experiment, and organizational backing to share what they learn. They're not evangelists reading from a corporate script. They're real users solving real problems, and their credibility with peers is exactly what makes them effective. For mid-market companies with limited HR bandwidth, a champion network of even 5-10 people spread across departments will do more for adoption than any training deck.
Identifying the first cohort of AI champions before tools go live matters more than most executives realize. Give them early access and a direct line to report what's working and what isn't — that feedback loop shapes everything that comes after.
AI Governance
Governance sounds like a compliance box to check. In practice, it's the infrastructure that keeps AI investments from becoming a liability. Without clear policies on which tools are approved, what data can flow through them, how outputs should be reviewed, and who's accountable when something goes wrong, employees will fill the vacuum themselves. Sometimes wisely, often not.
According to HBR, only 21% of companies are confident in their AI governance models. That gap tends to surface at the worst possible moments: a data breach, a regulatory audit, or a public-facing output that should never have been approved. A lightweight governance framework needs to answer five questions: What tools are approved? What data is off-limits? How do we audit AI-assisted decisions? Who owns AI-related incidents? What's the process for adding new tools? That's the minimum. Not having those answers documented is an active, ongoing risk.
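Documenting those five answers doesn't require heavy tooling. As a minimal sketch, a lightweight governance record can live in a simple internal registry like the one below (all tool names, data classes, and owners here are hypothetical examples, not recommendations):

```python
# Illustrative sketch of a lightweight AI governance record.
# Every tool name, data class, and owner below is a made-up example.

GOVERNANCE = {
    "approved_tools": {"CopilotX", "DraftAssist"},      # What tools are approved?
    "restricted_data": {"PII", "customer_financials"},  # What data is off-limits?
    "audit_process": "quarterly review of AI-assisted decisions",
    "incident_owner": "Director of IT",                 # Who owns AI-related incidents?
    "new_tool_intake": "security review, then pilot with champion cohort",
}

def can_use(tool: str, data_class: str) -> bool:
    """Allow a use only if the tool is approved and the data isn't restricted."""
    return (tool in GOVERNANCE["approved_tools"]
            and data_class not in GOVERNANCE["restricted_data"])
```

The point isn't the code; it's that the five questions become checkable rules rather than tribal knowledge, so an unapproved tool or an off-limits data class fails loudly instead of quietly.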
A Practical Reallocation Framework
None of this means cutting tech investment. The technology matters. But every dollar spent on AI tooling needs a corresponding investment in adoption infrastructure, or the tooling spend is mostly wasted.
A workable rebalancing target for mid-market companies: aim for at least a 70/30 technology-to-people split, working toward 60/40 over two years.
In practice, that looks something like this:
- For every $100,000 in AI software and infrastructure, budget at least $40,000-50,000 toward people-side investments: training design, role redesign consulting, champion program coordination, and governance framework development.
- Treat the first 90 days of any AI deployment as an adoption phase, not a post-launch coast. Expect usage data to be low and plan for active intervention.
- Set adoption metrics alongside performance metrics: what percentage of intended users are engaging with the tools regularly, and what are they producing with them? This question matters as much as whether the tools technically work.
- The champion cohort deserves its own line: identify them before tools go live, not after. Give them early access, dedicated time, and a direct channel to report what's working. Their feedback shapes the rollout in ways no vendor implementation guide ever will.
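The arithmetic behind those targets is simple enough to sanity-check in a few lines. A quick sketch, using the ratios and dollar figures from the list above (function names are illustrative, not a prescribed tool):

```python
def people_budget(tech_spend: float, tech_share: float = 0.70) -> float:
    """People-side budget implied by a tech/people split.

    tech_share is the technology fraction of the total AI budget:
    0.70 for a 70/30 split, 0.60 for 60/40.
    """
    return tech_spend * (1 - tech_share) / tech_share

def adoption_rate(active_users: int, intended_users: int) -> float:
    """Share of intended users engaging with the tools regularly."""
    return active_users / intended_users if intended_users else 0.0

# A 70/30 split implies roughly $42,900 of people investment per
# $100,000 of tooling, inside the $40,000-50,000 range above;
# at 60/40 the figure rises to about $66,700.
print(round(people_budget(100_000)))        # 70/30 split
print(round(people_budget(100_000, 0.60)))  # 60/40 split
```

Running the same kind of check on adoption metrics (say, 45 of 60 intended users active in a given month yields a 75% rate) turns "are people using this?" from a hallway impression into a number a leadership team can track quarter over quarter.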
The target ratio matters less than the principle behind it: every technology investment needs a corresponding investment in organizational capacity to use it.
The Real Doomsday Scenario
Most of the anxiety around AI focuses on the wrong risks — job displacement, sentient models, regulatory capture. Those conversations matter. But the more immediate problem for mid-market companies is quieter and less dramatic. It's spending real money on AI and getting nothing back, because no one invested in helping people work differently.
Deloitte's Briggs put it in terms worth holding onto: companies are treating AI like new technology when they should be treating it like new employees. New employees get onboarding. They get training. They get feedback loops and performance expectations. They don't get handed a login and left alone to figure out the organization.
The companies getting genuine ROI from AI right now don't necessarily have better technology. They have better onboarding, clearer role definitions, and people who were actually prepared before the tools went live.
Most mid-market executives already understand that people investment matters in theory. What they're actually deciding — whether they admit it or not — is whether to make that investment before the pilot graveyard fills up, or after the CFO starts asking questions that don't have good answers.