Many firms are stuck in "pilot purgatory," launching numerous small, siloed AI tests. While individually successful, these experiments fail to integrate into the broader business system, creating an illusion of progress without delivering strategic, enterprise-level value.


New McKinsey research reveals a significant AI adoption gap. While 88% of organizations use AI, nearly two-thirds haven't scaled it beyond pilots, so companies stuck at the pilot stage are not actually behind their peers. This widespread stall explains why only 39% report enterprise-level EBIT impact. True high performers succeed by fundamentally redesigning workflows, not just experimenting.

Companies that experiment endlessly with AI but fail to operationalize it face the biggest risk of falling behind. The danger lies not in ignoring AI, but in lacking the change management and workflow redesign needed to move from small-scale tests to full integration.

Large enterprises navigate a critical paradox with new technology like AI. Moving too slowly cedes the market and leads to irrelevance. However, moving too quickly without clear direction or a focus on feasibility results in wasting millions of dollars on failed initiatives.

Instead of making one large, transformative bet on AI, Macy's is testing it across numerous departments (supply chain, HR, marketing) in small trials. This "pokers in the fire" approach allows for broad learning and discovery of value without overinvesting before the technology is fully mature or scaled.

Enterprises struggle to get value from AI due to a lack of iterative data-science expertise. The winning model for AI companies isn't just selling APIs, but embedding "forward deployment" teams of engineers and scientists to co-create solutions, closing the gap between prototype and production value.

Organizations fail when they push teams directly into using AI for business outcomes ("architect mode"). Instead, they must first provide dedicated time and resources for unstructured play ("sandbox mode"). This experimentation phase is essential for building the skills and comfort needed to apply AI effectively to strategic goals.

Headlines about high AI pilot failure rates are misleading: because it's so easy to start a project, the denominator of attempts is inflated. Robust, successful AI implementations are happening, but they require 6 to 12 months of serious effort, not the quick wins promised by hype cycles.

Teams that treat generative AI as a silver bullet are destined to fail. True success comes from teams that remain "maniacally focused" on user and business value, using AI with intent to serve that purpose, not as the purpose itself.

The excitement around AI capabilities often masks the real hurdle to enterprise adoption: infrastructure. Success is not determined by the model's sophistication, but by first solving foundational problems of security, cost control, and data integration. This requires a shift from an application-centric to an infrastructure-first mindset.

According to Salesforce's AI chief, the primary challenge for large companies deploying AI is harmonizing data across siloed departments, like sales and marketing. AI cannot operate effectively without connected, unified data, making data integration the crucial first step before any advanced AI implementation.