The Mid-Market AI Skills Gap Is a Bigger Threat Than the Technology Itself

Shahar

Picture a mid-sized company that bought three AI subscriptions last year. One sits in the marketing team's browser, half-used. One was trialled by finance, then quietly shelved when the person who championed it left. The third generates weekly reports that nobody quite trusts because the underlying data feeds are messy. The company says it's "using AI." Technically, it's not wrong.

This is the story hiding inside the headline numbers. Across the mid-market, AI adoption is genuinely accelerating. MYOB's 2026 Autonomous Business Report, based on a survey of more than 1,000 decision-makers in Australian and New Zealand mid-sized businesses, found that three-quarters of leaders report meaningful productivity gains from AI. Among the most advanced firms, that figure climbs to 92%. These are real improvements. Not rounding errors.

The same report found that while 77% of businesses claim AI is embedded in core processes, only 66% have structured staff training in place. Nearly four in ten respondents cite workforce skills and capacity for change as a top barrier. A third point directly to governance requirements as a constraint on progress. And older technology systems are preventing wider adoption for a significant chunk of the market.

The AI productivity narrative tells you how fast the winners are pulling ahead. It doesn't tell you why the rest are falling behind. The gap isn't the technology. The technology works. The gap is everything the technology sits inside: the skills, the governance, the systems. Most mid-market leaders don't have a clear plan for any of it.

The Real Barriers Are Organizational, Not Technical

The MYOB data maps mid-sized businesses across five foundations: data quality and integration, core systems and ERP, AI strategy and governance, workforce capability and training, and process digitisation. Businesses that score well across all five — the "Accelerating" cohort, representing 43% of the market — achieve 92% productivity improvement rates and see measurable commercial gains. The "Reacting" cohort, making up 16% of the market, reports only 37%.

That 55-point gap in productivity outcomes isn't explained by which AI tools each group purchased. It's explained by what surrounds those tools.

Cybersecurity and data privacy concerns top the barriers list at 45%, which is understandable. But look at what comes next: workforce skills and capacity for change at 37%, governance and compliance requirements at 36%. These are organizational problems. Buying a better AI product doesn't solve them. Signing an enterprise agreement doesn't solve them. They require investment in capability, policy, and process, and that's exactly where most mid-market companies are underinvesting.

The Deloitte 2026 State of AI in the Enterprise report reinforces the point. It found that insufficient worker skills are the single biggest barrier to integrating AI into existing workflows, ahead of data quality, budget, and technology readiness. Fifty-three percent of organizations are adjusting their talent strategy by educating the broader workforce to raise AI fluency. Only a third are redesigning career paths and roles. Most are adding training as a layer on top of existing structures rather than rethinking the structures themselves. That's a patch, not a fix.

Why Buying More Tools Won't Fix This

There's a pattern that repeats across mid-market AI adoptions: a tool gets selected, often by IT or an enthusiastic functional leader, deployed in one team or workflow, and declared a win when early results look promising. The pilot expands slightly. A few more teams start using it. Then growth stalls, because no one has figured out how the tool connects to the CRM, whether the data feeding it meets quality standards, who's responsible when it produces something wrong, or how to train employees who've never touched it.

Treat AI adoption as a procurement decision and you get exactly that: a contract, an onboarding session, and eventually a stalled pilot. Treat it as an organizational transformation and the signed contract is where the work begins, not where it ends. Someone still needs to handle workforce planning, redesign roles, write governance policies, and do the unglamorous work of integration architecture and learning infrastructure. It requires a leader who owns both the technology budget and the capability-building agenda.

Research on mid-market AI procurement failures points to three consistent failure modes: data readiness failure (the systems feeding the AI are unreliable), organizational absorption failure (the company can't operationalize what the AI produces), and change adoption failure (employees don't use it, or use it inconsistently). None of these is solved by upgrading to a better model.

The MYOB data makes this concrete. Among the lowest-maturity firms in their survey, the challenges aren't about lacking access to AI tools. They're about lacking the foundations to use them well. Leaders in the "Reacting" segment are largely stuck because they haven't addressed the prerequisites: clean data, integrated systems, trained staff, clear governance.

What a Capability-First Approach Looks Like

DBS Bank's Spark GenAI programme offers a useful contrast to the procurement trap. Originally launched in 2024 with Enterprise Singapore and IMDA, the programme was enhanced in April 2026 after data from the 2026 DBS Business Pulse Check Survey showed that nearly two in five SMEs (39%) were seeking expert guidance on how to meaningfully integrate AI, not just which tools to buy.

The enhanced programme takes a three-tier approach tied explicitly to organizational maturity:

  • Start targets firms just beginning with AI, focused on ready-to-deploy tools with immediate practical value
  • Accelerate moves businesses into targeted use cases, with group-based consultancy and more customised solutions
  • Scale is the deepest tier — upskilling, bespoke one-to-one consultancy, and backend systems integration for firms pursuing full operational embedding

The model meets companies where they are, treating skills, governance, and integration as core deliverables rather than optional add-ons. Over 380 companies joined in the first phase, and it was the demand for structured guidance, rather than just tool access, that drove the 2026 enhancement. The bottleneck for most businesses isn't finding the right AI product. It's building the organizational capability to use any AI product well.

The MYOB report makes the same point from a different angle: "Pair technology investment with a workforce investment," it advises. "Replace reactive investment with a defined AI strategy. Build governance before it becomes a constraint."

That's the pattern separating the 43% of businesses achieving the strongest AI outcomes from the 16% still reacting.

The Governance Gap Is Quietly Expensive

Governance sounds abstract until something goes wrong. Then it sounds like a board meeting nobody wanted.

Without governance, employees make their own decisions about which data they feed into AI tools — sometimes customer records, financial data, or confidential contracts. There's no standard for reviewing AI outputs before they influence decisions. When the AI produces something wrong, there's no clear accountability structure. The liability is real; it just hasn't materialized yet for most companies.

A 2026 analysis of mid-market AI governance puts it plainly: most mid-market companies are using AI without a governance framework in place. That gap may feel manageable right now. But regulatory pressure is increasing, customers are scrutinizing how their data is handled, and every quarter without governance in place narrows the window for a clean, proactive rollout.

Good governance doesn't need to be a 200-page policy document. A practical mid-market governance framework distills it to six components: an AI use inventory, data classification rules, vendor evaluation criteria, human oversight requirements, employee usage guidelines, and a quarterly review cadence. A COO or IT director can implement this baseline without a dedicated compliance team.
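That baseline can be as lightweight as a shared spreadsheet. As a purely illustrative sketch (the field names and example records here are hypothetical, not drawn from the MYOB report or any specific framework), the AI use inventory and human-oversight components might look like this:

```python
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    tool: str             # what the tool is, in plain language
    owner: str            # the accountable person, not a team
    data_classes: list    # data classifications the tool may touch
    human_review: bool    # are outputs reviewed before they influence decisions?

def governance_gaps(inventory):
    """Flag records with no named owner or no human oversight --
    the two gaps a quarterly review would catch first."""
    return [r for r in inventory if not r.owner or not r.human_review]

# Hypothetical inventory for a mid-sized firm
inventory = [
    AIUseRecord("weekly report generator", "COO", ["internal"], True),
    AIUseRecord("customer chat assistant", "", ["customer"], False),
]

flagged = governance_gaps(inventory)
print(len(flagged))  # the unowned, unreviewed chat assistant
```

Nothing about this requires a compliance team; the point is that an inventory plus an owner plus a review cadence turns "governance" from an abstraction into a quarterly checklist.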

The Deloitte research found that enterprises where senior leadership actively shapes AI governance achieve measurably better outcomes than those leaving it to technical teams alone. Governance that lives only in the IT department doesn't translate into the workflow changes and cultural norms that actually protect the business. This is a strategic choice, not an IT ticket.

Three Questions Mid-Market Leaders Should Ask Right Now

1. Do your employees actually trust AI outputs?

Not "do they have access to AI tools," but do they have the skills to use those tools effectively and the confidence to act on what they produce? The MYOB data shows that only 66% of businesses have structured training in place despite 77% claiming AI is embedded in core processes. That 11-point gap represents people working with tools they haven't been properly trained on, which means they're either underusing them or using them in ways that introduce risk.

This is probably the most overlooked gap in the mid-market, and the hardest to see from the top. Usage dashboards tell you how often employees click into an AI tool — not whether anyone trusts the outputs enough to actually change a decision. Those are different things.

A useful diagnostic: ask five employees at different levels to walk you through how they actually use AI in a typical work week. Ask whether they've ever caught the tool being wrong, and what they did about it. What you hear will tell you more than any survey.

2. Who owns AI governance right now?

If the honest answer is "nobody" or "it varies by team," you have a gap. Governance doesn't need to be complex, but it does need to exist and be owned by someone with enough authority to enforce it. Start with: What data are employees permitted to share with AI tools? Who reviews outputs before they influence significant decisions? What happens when something goes wrong?

The Deloitte report is unambiguous: governance shaped by senior leadership delivers measurably better business value than governance left to technical teams alone.

3. Is your AI connected to your core systems, or floating alongside them?

A standalone AI tool working off its own data, disconnected from your CRM, ERP, or financial systems, can deliver local efficiency gains. It can't deliver enterprise-level productivity improvement. The MYOB report is explicit that the businesses achieving the strongest outcomes are those that have embedded AI into core processes on top of strong data and systems foundations, not those that have deployed AI tools alongside legacy infrastructure.

The question to ask isn't "do we have AI?" It's "does our AI know what our business knows?" If the answer is no, the path forward is less about buying new tools and more about integrating what you already have.


The mid-market is not behind on AI adoption. It's behind on AI readiness.

Most leaders who read this will recognize the distinction and still spend the next quarter evaluating another tool rather than fixing their training program, appointing a governance owner, or auditing their systems integration. The technology keeps improving regardless. The organizational infrastructure only improves if someone makes it a priority.

The technology, for the most part, is good enough. The question is whether the organization around it is.
