Why 88% Are Using AI, But Only 7% Are Winning

November 13, 2025

A new McKinsey Global Survey reveals a striking disconnect in the AI landscape: Nearly nine out of ten organizations are now using AI—yet only 7% have fully scaled the technology across their enterprises to capture transformative value.


This isn't a technology problem. It's a strategy problem.

The Pilot Purgatory Problem

Here's what the data shows: While 88% of organizations report regular AI use in at least one business function, two-thirds remain stuck in experimentation or piloting phases. Even more telling, only 39% report any measurable EBIT impact at the enterprise level.

The pattern is clear—companies are deploying AI tools, but they're not embedding them deeply enough into workflows and processes to realize material business benefits.

This is the challenge we see repeatedly in our workshops at Magnetiz: Organizations rushing to "do AI" without a clear framework for validating which opportunities will actually drive business outcomes.

What Separates the 7% from Everyone Else

McKinsey's research identified a small group of "high performers"—organizations reporting both significant EBIT impact (5%+) and substantial value from AI. These companies aren't just using more AI. They're using it differently.

Three patterns stand out:

1. They think transformation, not just efficiency

While 80% of all respondents say their companies set efficiency as an AI objective, high performers are 3.6 times more likely to aim for transformative change to their business. They're asking "How can AI fundamentally reshape how we create value?" not just "How can AI make this process 10% faster?"

This aligns directly with what we emphasize in our AI validation framework: Before selecting a technology solution, you need to understand whether you're solving for incremental improvement or strategic transformation. The approach, investment, and success metrics are completely different.

2. They redesign workflows, not just automate tasks

High performers are nearly three times more likely (55% vs 20%) to fundamentally redesign workflows in their AI deployments. They're not simply dropping AI into existing processes—they're reimagining the process itself.

This is where many AI pilots fail. Organizations identify a use case, build a proof of concept, and then struggle to scale because they haven't addressed the workflow and change management required to make the AI solution actually usable in practice.

In our consulting work, this is why we focus heavily on the "usability" dimension of AI validation—ensuring that solutions fit into how people actually work, not how we wish they worked.

3. They have the right scaffolding in place

The survey reveals that high performers consistently implement specific management practices across six dimensions: strategy, talent, operating model, technology, data, and adoption. The most impactful practices include:

  • Defined processes for determining when AI outputs need human validation
  • Technology infrastructure that supports core AI initiatives
  • Clear AI roadmaps aligned with business strategy
  • Senior leadership actively engaged in driving adoption

Notably, high performers are also nearly five times more likely to invest more than 20% of their digital budgets in AI technologies.

The Use Case Validation Challenge

Here's what the data doesn't explicitly state but strongly implies: Most organizations are struggling to identify which AI opportunities are worth pursuing.

The report shows that while cost benefits are being realized in certain functions (particularly software engineering, manufacturing, and IT), revenue impact is less common—and enterprise-wide EBIT impact remains elusive for most.

This suggests that many organizations are selecting AI use cases based on technical feasibility or competitive pressure rather than strategic business value.

This is precisely the problem our AI Validation Framework is designed to solve. We help operations leaders systematically evaluate AI opportunities across four critical dimensions:

  • Value: Will this actually move our business metrics?
  • Usability: Can this realistically integrate into our workflows?
  • Feasibility: Do we have the data, tech, and talent to execute this?
  • Viability: Does this align with our strategy and regulatory environment?

Without a structured approach to these questions, organizations end up with AI pilots that demonstrate technical capability but fail to scale or capture meaningful business value.
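
To make this concrete, here's a minimal sketch of what a validation scorecard can look like in practice. The use cases, scores, and threshold below are hypothetical illustrations (not our actual tooling); the point is simply that every candidate gets rated on all four dimensions, and a single weak dimension blocks the pilot rather than being averaged away.

```python
# Hypothetical scorecard: rate each candidate use case 1-5 on the four
# validation dimensions; a use case only advances to pilot when every
# dimension clears the bar.
from dataclasses import dataclass

DIMENSIONS = ("value", "usability", "feasibility", "viability")

@dataclass
class UseCase:
    name: str
    scores: dict  # dimension -> 1-5 rating from the evaluation workshop

    def weakest(self) -> str:
        return min(DIMENSIONS, key=lambda d: self.scores.get(d, 0))

    def ready_to_pilot(self, threshold: int = 3) -> bool:
        # Every dimension must clear the threshold; a high average
        # cannot compensate for one fatal weakness.
        return all(self.scores.get(d, 0) >= threshold for d in DIMENSIONS)

candidates = [
    UseCase("Invoice triage agent", {"value": 4, "usability": 2, "feasibility": 4, "viability": 4}),
    UseCase("Forecasting copilot",  {"value": 5, "usability": 4, "feasibility": 3, "viability": 4}),
]

for uc in candidates:
    status = "pilot" if uc.ready_to_pilot() else f"rework ({uc.weakest()})"
    print(f"{uc.name}: {status}")
```

The "every dimension must clear the bar" rule is deliberate: a use case with enormous value but no realistic path into the workflow is exactly the kind of pilot that never scales.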

The Agent Acceleration

The report also highlights growing interest in AI agents—systems that can autonomously plan and execute multiple workflow steps. Sixty-two percent of organizations are at least experimenting with AI agents, and 23% are scaling them somewhere in their enterprise.

This is where the transformation potential gets real. But it's also where the validation framework becomes even more critical.

Agentic AI isn't just a more powerful tool—it represents a fundamentally different approach to work design. The questions you need to answer before deploying agents are more complex:

  • What level of autonomy is appropriate for this process?
  • How do we validate agent decisions at scale?
  • What does "human in the loop" actually mean in this context?
  • How do we redesign roles and responsibilities around agent capabilities?

High performers, the survey shows, are 3+ times more likely than their peers to have scaled AI agents across functions. They're not getting there by deploying more agents—they're getting there by being more strategic about which processes benefit from agentic approaches.
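
To show what answering those questions can look like operationally, here's a minimal sketch of a per-process autonomy policy with a human-in-the-loop gate. The process names, autonomy levels, and confidence threshold are illustrative assumptions, not a prescription:

```python
# Hypothetical autonomy policy: low-risk, high-confidence agent actions
# execute automatically (and are logged for audit); everything else is
# routed to a human reviewer or left as a draft for a human to act on.
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 1     # agent drafts, human always acts
    REVIEW_REQUIRED = 2  # agent acts only after human approval
    AUTO_WITH_AUDIT = 3  # agent acts, humans sample-audit afterwards

def route_action(process: str, confidence: float, policy: dict) -> str:
    level = policy.get(process, Autonomy.SUGGEST_ONLY)
    if level is Autonomy.AUTO_WITH_AUDIT and confidence >= 0.9:
        return "execute (logged for audit)"
    if level in (Autonomy.AUTO_WITH_AUDIT, Autonomy.REVIEW_REQUIRED):
        return "send to human reviewer"
    return "draft only: human executes"

policy = {
    "invoice_matching": Autonomy.AUTO_WITH_AUDIT,
    "customer_refunds": Autonomy.REVIEW_REQUIRED,
}

print(route_action("invoice_matching", 0.95, policy))  # execute (logged for audit)
print(route_action("invoice_matching", 0.70, policy))  # send to human reviewer
print(route_action("customer_refunds", 0.99, policy))  # send to human reviewer
```

Notice that the policy is defined per process, not per agent: the same agent capability can be fully autonomous in one workflow and suggestion-only in another, depending on the risk of the decision.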

From Experimentation to Transformation: A Practical Path

If your organization is among the 88% using AI but not yet capturing transformative value, here's what the research suggests you should focus on:

Start with strategic clarity

Before launching your next AI pilot, step back and ask: Are we aiming for incremental efficiency or business transformation? The answer should shape everything from budget allocation to success metrics to timeline expectations.

Validate before you build

Use a structured framework to evaluate potential AI use cases across value, usability, feasibility, and viability dimensions. This prevents the costly pattern of building technically impressive solutions that don't scale or deliver business impact.

Redesign workflows, not just tasks

When you identify a promising AI use case, invest time in understanding how it will change the broader workflow. Who needs to be involved? What new handoffs are created? What decisions need human validation? Map the future state before you build the solution.

Build the right scaffolding

The survey is clear: High performers implement specific management practices across strategy, talent, operating model, technology, data, and adoption dimensions. You don't need to be perfect across all six—but you do need intentional practices in each area.

Measure what matters

Track both use-case-level metrics (cost savings, revenue impact) and enterprise-level outcomes (EBIT impact, innovation velocity, competitive differentiation). The gap between these levels tells you whether you're truly scaling or just accumulating pilots.
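
A quick worked example, with made-up numbers, shows why that gap matters:

```python
# Hypothetical figures: roll up the value claimed by individual use cases
# and compare it to the EBIT impact the finance team can actually observe.
# A low realization rate suggests pilots are accumulating, not scaling.
pilot_savings = {                 # annualized value claimed by each use case
    "ticket_triage": 120_000,
    "demand_forecasting": 250_000,
    "contract_review": 80_000,
}

claimed_value = sum(pilot_savings.values())
ebit_impact_observed = 150_000    # illustrative figure from finance

realization_rate = ebit_impact_observed / claimed_value
print(f"Claimed pilot value:  ${claimed_value:,.0f}")
print(f"Realized EBIT impact: ${ebit_impact_observed:,.0f}")
print(f"Realization rate:     {realization_rate:.0%}")  # ~33% here
```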

The Bottom Line

The AI adoption paradox—widespread use but limited transformation—isn't a mystery. It's the predictable result of treating AI as a technology deployment challenge rather than a business transformation challenge.

The 7% who are winning with AI haven't cracked some secret technical code. They've simply been more disciplined about connecting AI capabilities to business strategy, validating opportunities before pursuing them, and redesigning work to capture value at scale.

The question for the other 93% isn't whether to use AI—it's how to stop experimenting and start transforming.

Want Help?

The AI Ops Lab helps operations managers identify and capture high-value AI opportunities. Through process mapping, value analysis, and solution design, you'll discover efficiency gains worth $100,000 or more annually.

Apply now to see if you qualify for a one-hour session, where we'll help you map your workflows, calculate the value of automation, and visualize your AI-enabled operations. Limited spots available. Want to catch up on earlier issues? Explore our resource hub.

Magnetiz.ai is your AI consultancy. We work with you to develop AI strategies that improve efficiency and deliver a competitive edge.
