From Pilots to Companywide Execution: What Novo Nordisk's OpenAI Deal Teaches Every Mid-Market Executive

Shahar

Picture a company with a dozen AI pilots running simultaneously. Sales has a chatbot. Finance is testing a forecasting tool. Someone in operations built something in a spreadsheet that technically uses a model. Leadership is calling it an "AI strategy."

It isn't.

This is the state of AI at most mid-market organizations right now, and there's no judgment in that observation. Pilots are how you learn. The problem is that pilots have become where the learning stops. According to MIT's 2025 State of AI in Business report, more than 95% of companies are seeing little or no measurable return on AI investments, despite widespread experimentation. Over 80% have piloted tools like ChatGPT or Copilot. Fewer than 5% have moved custom AI solutions into production.

That gap is exactly where Novo Nordisk just planted a flag.

What Actually Happened Between Novo Nordisk and OpenAI

On April 14, 2026, Novo Nordisk announced a strategic partnership with OpenAI to integrate AI across its entire business: drug discovery, supply chain, manufacturing, and commercial operations, with pilot programs kicking off in 2026 and full integration targeted by year-end. The financial terms weren't disclosed, but the scope of the rollout was.

This wasn't a press release about a chatbot deployment. Novo Nordisk committed to embedding AI as an operational layer across multiple business functions simultaneously, alongside a structured workforce upskilling program and data governance and human oversight commitments that were spelled out in the partnership from day one.

That last detail is the one most coverage glossed over.

The reflex in business media is to report the headline partnership and move on. But the structural choices Novo Nordisk made, the how of this rollout, are where the real signal lives for any exec who's spent the last two years trying to figure out why their AI pilots haven't compounded into anything.

AI as an Operational Layer, Not a Department Tool

Novo Nordisk didn't scope this partnership to a single function. They embedded it across R&D, supply chain, manufacturing, and corporate operations at once. That approach runs counter to how most mid-market companies deploy AI, and that's the point.

The typical pattern goes like this: one department sees a problem, sponsors a pilot, gets results, and then tries to convince the rest of the organization to adopt. That model produces departmental wins and organizational friction. The next team doesn't want to use the first team's tool. IT has concerns about integration. Legal hasn't reviewed the data sharing agreements. The pilot calcifies.

What Novo Nordisk is doing instead is treating AI as infrastructure, more like a cloud migration than a software rollout. The goal is to analyze complex biological and clinical datasets, test hypotheses faster, shorten drug development timelines, and optimize supply chain logistics and demand forecasting under one coordinated deployment.

For a mid-market manufacturer, distributor, or healthcare organization, the translation looks like this: if you're running AI in three places but none of those places share data or operate under a common framework, you're still in pilot mode. The shift to execution happens when AI becomes a shared capability any function can draw from, rather than a custom solution each team has to build on its own.

Research from McKinsey has found that only 8% of mid-market companies have deployed AI operationally, meaning AI that runs embedded in core business processes rather than sitting adjacent to them. The gap between 80% piloting and 8% operational comes down to one thing: companies are treating AI as a tool rather than a layer.

Workforce Upskilling Has to Be Mandatory

Here's where most AI rollouts quietly fail. You can deploy the best model in the world. If the people using it don't understand what it does well, what it does poorly, and how to work with it effectively, it becomes shelfware.

Novo Nordisk built workforce upskilling into the OpenAI partnership as a first-class deliverable, not an afterthought. OpenAI will directly assist in upskilling Novo Nordisk's global workforce, building AI literacy across scientific, technical, and operational teams so employees can integrate AI into daily workflows, not just use a new tool occasionally.

AI literacy is not the same as AI familiarity. Familiarity means your team knows the product exists. Literacy means they understand what inputs produce reliable outputs, where the model's judgment shouldn't be trusted, how to structure a prompt for a process task versus a research task, and when to escalate a model's suggestion to a human decision-maker. Those are trainable skills, but they require actual training.

A 2026 study tracking agentic AI found that 57% of mid-market firms are still in the pilot stage, with only 15% having successfully scaled across business functions. When researchers traced back why pilots stalled, one leading cause was leadership misalignment: central AI teams owning pilots while functional leaders had no ownership of adoption outcomes. The workforce wasn't prepared to take the tool and run with it, so they didn't.

Upskilling isn't just a training program. It's an organizational commitment that someone, usually the C-suite, has to mandate and fund. Novo Nordisk made that commitment by writing it into the OpenAI partnership structure. Most mid-market companies leave upskilling to happen organically, which means it usually doesn't happen at all.

One mid-size MedTech company that successfully scaled AI deployment cut AI costs by 50% and completed RFPs 70% faster, but only after investing in structured training that brought non-technical staff up to the point where they could use tools independently. Before that investment, results depended on a handful of power users; everyone else was waiting in line for help. The bottleneck wasn't the technology. It was that only a small group of people could actually get value out of it.

Build Governance Before You Deploy, Not After

The most common governance mistake isn't ignoring governance. It's scheduling it for later.

The reasoning is understandable. Governance feels like friction when you're trying to move fast on a promising pilot. So teams defer it: "We'll sort out the data policies once we know this is going to work." By the time the pilot succeeds, the governance debt is real and painful, and scaling becomes blocked by problems that should have been solved at the design stage.

Novo Nordisk built strict data governance and human oversight protocols directly into the partnership structure before any pilot programs launched. That's pouring the foundation before you raise the walls, which is the order it's supposed to happen in.

For regulated industries like pharma, healthcare, and financial services, governance built after deployment isn't governance; it's catch-up. But even mid-market manufacturers and distributors operating outside heavy regulatory frameworks face the same underlying failure modes when they try to scale without it. Inconsistent data quality and unclear data ownership are the most common. No audit trail for model decisions is the one that creates the most organizational risk. And when a model drifts from the conditions it was validated on, there's often no process in place to catch it until something has already gone wrong.

A 2026 survey of enterprise AI deployments found that 76% of organizations reported their governance frameworks lagged behind their AI adoption. Those are the companies whose governance will buckle when pilots hit production load, not because the technology fails, but because the policy around it was never ready.

The practical fix is to treat data governance as a launch prerequisite, not a post-launch task. Before any pilot expands to production, answer four questions: Who owns this data? How do we track what the model is doing with it? Who reviews outputs before they affect decisions? What happens if the model starts behaving differently than it did in testing? These are much easier to answer at the design stage than after 10,000 decisions have already been made by a system you're just now trying to audit.

The Honest Self-Assessment: Are You Piloting or Executing?

Most executives overestimate where they are on this curve. Here's a straightforward test.

You're still piloting if:

  • Your AI tools live in one or two departments and don't share data with each other
  • Adoption depends on individual champions; remove one of them and the initiative stalls
  • You have no documented governance policies covering data inputs, model outputs, and human review
  • Upskilling has amounted to a product walkthrough, not a structured program tied to actual job functions

You're executing if:

  • AI is embedded in at least one core business process, not sitting adjacent to it
  • Multiple functions operate under a shared framework with common data policies and clear accountability
  • Governance predated the first deployment expansion, not the other way around
  • Functional leaders, not just the data team, own adoption outcomes in their areas

The gap between these two states isn't primarily a technology gap. Roughly 88% of observed AI proofs of concept never reach wide-scale deployment, and the root causes are organizational, not technical: integration, governance, and workforce readiness account for most of the failures.
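The self-assessment above can be sketched as a toy scoring function. The signal names mirror the bullets, but the tallying logic is an illustrative assumption, not a validated maturity model:

```python
# Pilot-stage signals, paraphrased from the checklist above.
PILOT_SIGNALS = [
    "tools siloed in one or two departments",
    "adoption depends on individual champions",
    "no documented governance policies",
    "upskilling limited to product walkthroughs",
]

# Execution-stage signals, paraphrased from the checklist above.
EXECUTION_SIGNALS = [
    "AI embedded in a core business process",
    "shared framework across functions",
    "governance predated deployment expansion",
    "functional leaders own adoption outcomes",
]

def assess(observed: set[str]) -> str:
    """Classify an organization by which checklist it matches more."""
    pilot = sum(s in observed for s in PILOT_SIGNALS)
    execution = sum(s in observed for s in EXECUTION_SIGNALS)
    if execution > pilot:
        return "executing"
    if pilot > execution:
        return "piloting"
    return "mixed"

print(assess({"adoption depends on individual champions",
              "no documented governance policies"}))
```

An honest inventory of which signals actually describe your organization is the whole exercise; the scoring is just a way to force the list onto paper.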

What Mid-Market Companies Can Actually Do With This

Novo Nordisk is a global pharma company with resources most mid-market organizations don't have. That's fair. But the decisions they made — embedding AI across functions rather than within them, pairing deployment with mandatory upskilling, locking in governance before the rollout started — don't require a global workforce to implement.

Start with one core business process, not a department tool. Ask what it would take to run AI as part of that process natively, embedded in the workflow itself rather than sitting alongside it. That's your first real execution target, and it's the cleanest test of whether you're building something operational or just adding another pilot to the collection.

The governance question needs to happen before the expansion, not after it. Who owns the data feeding this model? What oversight is in place before its outputs affect a decision? Document the answers. That documentation becomes the template you'll copy for every subsequent deployment and prevents you from rebuilding the policy from scratch each time.

Budget for upskilling the same way you budget for software licenses — which is to say, as a line item, not an aspiration. According to McKinsey research on AI in distribution, only about 30% of distributors report having sufficient talent to scale their AI efforts. That talent gap doesn't close on its own. Spending on tools without building the capability to use them well is a reliable way to produce expensive underutilization, and no org chart fixes that problem after the fact.

Novo Nordisk's partnership will generate plenty of coverage about drug discovery timelines and supply chain results as the year progresses. Those outcomes will be real. By the time the numbers come out, though, the most important part of this story will already be 12 months old. The choices that will determine whether any of it worked were made before a single pilot launched.

That's where execution starts.
