Three out of four mid-sized companies are running AI experiments. Only one in four is doing anything with the results.
That's the finding buried inside new economic research from the Centre for Economics and Business Research (Cebr), commissioned by HSBC UK. It's a more useful number than the headline that's been making the rounds. Yes, £105 billion in additional revenue for UK mid-sized firms by 2030 is attention-grabbing. But the real story is that just 24% of those firms qualify as "productive adopters." The other 76% are running pilots, drafting emails with ChatGPT, and telling their boards they have an AI strategy.
They don't.
What "Productive Adoption" Actually Means
The HSBC/Cebr research covers Britain's mid-sized businesses: roughly 35,000 companies with annual revenues between £15 million and £300 million. In 2025, these firms generated 23% more value per employee than the wider economy. They're not failing. But they're about to split into two very different groups.
AI adoption across the segment jumped from 35% to 55% in two years. Most executives are pointing to that number as progress, and it is — but the research cuts through the self-congratulation. There's a difference between using AI and integrating it.
Experimenters use AI for peripheral work: polishing emails, summarizing documents, generating first drafts of things that would have taken an hour. Useful, sure. But it doesn't touch how the business runs.
Productive adopters are doing something different. AI is inside forecasting models, supply chain decisions, reporting pipelines, customer engagement workflows. It's in the processes that actually generate or protect revenue. And the gap in results is measurable: firms that cross this line see an average revenue-per-employee increase of around 4%. For a typical mid-sized company, that's roughly £4.5 million in additional revenue over four years, plus £1.3 million in economic value, compared to a peer that doesn't make the move.
A competitor running AI inside their pricing model for two years isn't just more efficient. They've made better decisions every week you haven't.
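The £4.5 million figure can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming a hypothetical "typical" firm revenue of £28 million (the report gives only the £15m–£300m segment range, so this midpoint-ish figure is an assumption, not a number from the research):

```python
# Back-of-envelope check on the reported uplift.
# ANNUAL_REVENUE is an assumed illustrative figure, not from the report.
ANNUAL_REVENUE = 28_000_000   # assumed typical mid-sized firm revenue, GBP
UPLIFT = 0.04                 # ~4% revenue uplift cited in the research
YEARS = 4                     # the report's four-year horizon

extra_revenue = ANNUAL_REVENUE * UPLIFT * YEARS
print(f"£{extra_revenue:,.0f}")  # prints £4,480,000 — close to the £4.5m cited
```

Under that assumed revenue, a flat 4% annual uplift compounded over four years lands within rounding distance of the report's per-company figure, which suggests the headline number describes a mid-range firm rather than an outlier.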
The Same Story, 4,000 Miles Away
What makes the HSBC/Cebr data hard to dismiss is that an independent research effort on the other side of the Atlantic landed on nearly identical conclusions, in industries with no obvious connection to the UK services sector.
BMO's March 2026 Business Outlook for the Midwest covered mid-market manufacturers, distributors, and industrial companies across Illinois, Wisconsin, Minnesota, and Indiana. The headline: "AI and automation shift from pilots to practical deployment." BMO found that 2026 has become a year of AI execution for U.S. mid-market firms, particularly where tight labor markets make doing more with fewer people a survival question, not just a productivity goal.
"Across the Midwest, companies are shifting decisively from planning to execution," said Tony Sciarrino, Head of BMO Commercial Bank, U.S. "The focus isn't on expansion at any cost. It's on putting capital and technology to work in ways that deliver measurable results."
The overlap between the UK and U.S. findings is specific enough to be useful. The companies crossing from experimentation to integration are pulling ahead. The companies still modeling the ROI case are losing ground relative to them in real time, even if it doesn't show up on this quarter's P&L.
The Moat Builds Quietly
More than 2,700 mid-sized businesses entered productive AI adoption in 2025 alone, according to HSBC's analysis. That cohort is projected to generate £30 billion in additional revenue over the next four years. But the revenue number understates what's actually happening.
Every quarter those companies run with AI embedded in their core decisions, they're accumulating proprietary data, tuning models to their specific business context, and building workflows that can absorb the next generation of tooling without starting from scratch. None of that shows up on a balance sheet. All of it shows up in competitive position.
Research on compounding AI advantage makes the mechanism clear: early deployments generate productivity gains, those gains fund more ambitious AI investments, which generate returns that fund the next round. After a few cycles, this stops looking like a project portfolio and starts looking like infrastructure. Infrastructure is expensive to replicate from scratch.
The firms ahead today aren't just running more efficiently. They're building something that late movers will struggle to match regardless of budget, because the relevant asset isn't the software. It's the accumulated learning.
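The reinvestment loop described above can be sketched as a toy model. Every number here is an illustrative assumption (initial budget, per-cycle return, reinvestment share), not data from the research; the point is the shape of the curve, not the values:

```python
# Toy model of compounding AI advantage: each cycle's productivity gain
# is partly reinvested, growing the budget for the next cycle.
# All parameters are hypothetical assumptions for illustration.
budget = 100_000          # assumed initial AI investment, GBP
roi = 0.30                # assumed per-cycle productivity return
reinvest_rate = 0.5       # assumed share of gains reinvested

total_gain = 0.0
for cycle in range(1, 7):
    gain = budget * roi           # productivity gain this cycle
    total_gain += gain
    budget += gain * reinvest_rate  # fund the next, more ambitious round
    print(f"cycle {cycle}: budget £{budget:,.0f}, "
          f"cumulative gain £{total_gain:,.0f}")
```

Because the reinvested share grows the base each cycle, the budget compounds geometrically (here, 15% per cycle). A late mover starting from the same initial budget faces a gap that widens every cycle the leader runs.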
The World Economic Forum estimates that mid-market firms globally could capture at least $2 trillion in AI-driven value, roughly the size of Canada's GDP. That value doesn't distribute equally. It flows toward the firms that integrate, not the ones that pilot.
Four Traits That Separate Integrators From Experimenters
The HSBC/Cebr research, the BMO data, and a cluster of mid-market implementation studies all point to the same traits in companies that actually cross from experimentation to integration.
They start with a specific operational constraint, not a general ambition. "Let's explore what AI can do for us" produces demos. Productive adopters start with a named problem: a forecasting process that requires four analysts and four days, a customer onboarding flow that generates 200 avoidable support tickets a month, an inventory system running 15% excess stock. AI applied to a defined friction point has a measurable outcome. AI applied to "digital transformation" has a slide deck.
They assign a named owner with a budget and a deadline. Helm & Nagel's analysis of mid-market AI failures keeps returning to the same pattern: ambitious workshop output followed by no clear accountability for next steps. Projects that have a named executive, a budget, and a hard deadline move from pilot to production. Projects that live in a cross-functional working group tend to generate increasingly polished presentations until someone quietly stops scheduling the meetings. The "named owner" variable turns out to matter more than almost any technology choice.
They measure in business terms. RSM's 2025 Middle Market AI Survey found that 92% of mid-market executives hit implementation challenges, and 62% found generative AI harder to deploy than anticipated. The firms that pushed through were tracking revenue per employee, customer retention, and margin — not "AI usage metrics" or "pilot completion rates." Operational measures create real accountability. Technology measures create quarterly updates.
They spend on people, not just software. Companies spend 93% of AI budgets on technology and 7% on training and change management, according to data cited in Deloitte's research. That ratio almost guarantees underperformance, not because the technology is bad but because 59% of employees end up saying AI tools slow them down when no one is managing the transition. BMO's Midwest findings show the same pattern: firms seeing real gains are pairing technology deployment with explicit change management, not assuming adoption will happen because the software is good.
A Quick Benchmark
Before deciding whether to accelerate, it's worth taking an honest look at where things actually stand. Three questions cut through most of the noise:
Is AI in your core revenue or cost processes? If it's only in marketing, admin, and content functions, you're experimenting. If it's in demand forecasting, pricing, supply chain, or customer success, you're moving toward integration.
Is there a named owner with a budget and a deadline? Not a steering committee. Not a center of excellence that meets monthly. A person who is personally accountable for specific business outcomes by a specific date.
Can you name a number that moved in the last 90 days because of AI? Not a completion milestone. An actual metric that improved. If the answer is no, the deployment isn't operational yet.
Most mid-market leadership teams score two out of three on a good day. That gap is where the work starts.
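The three questions above amount to a simple checklist. A hypothetical sketch of it as code — the questions come from the benchmark itself; the scoring thresholds and the intermediate "transitioning" label are assumptions added for illustration:

```python
# Hypothetical scoring of the three-question benchmark.
# "productive adopter" and "experimenter" echo the article's terms;
# "transitioning" is an assumed middle category.
def benchmark(core_processes: bool, named_owner: bool, metric_moved: bool) -> str:
    score = sum([core_processes, named_owner, metric_moved])
    if score == 3:
        return "productive adopter"
    if score == 2:
        return "transitioning"
    return "experimenter"

# The "two out of three on a good day" leadership team:
print(benchmark(core_processes=True, named_owner=True, metric_moved=False))
# → transitioning
```

The value of writing it down this way is that each input is a yes/no question a board can answer in one meeting, with no technology assessment required.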
The Decision That's Actually Being Made
The HSBC/Cebr report was released alongside a £5 billion AI & Productivity Financing Initiative. That's a signal worth reading: major financial institutions are categorizing AI investment as a capital allocation question, not a technology question.
The executives still waiting for a cleaner ROI picture are implicitly making a capital allocation decision — choosing to leave roughly £4.5 million per company in additional revenue on the table while competitors compound advantages. Every quarter the analysis continues, the catch-up cost grows and the differentiation window shrinks.
HSBC's modelling projects AI adoption among mid-sized firms reaching 65% by 2030. As that number climbs, productive adoption stops being a differentiator and becomes table stakes. The firms moving now are capturing the premium while it still exists.
The question worth putting to a board isn't whether to do more AI. It's what another year of deliberation costs relative to what the 24% are building right now. That's a capital allocation question, which means it belongs in the room where capital gets allocated.
Revenue and economic output figures from the HSBC/Cebr report are in GBP. The £105 billion headline converts to approximately $130–135 billion at current exchange rates.