Arvind Krishna predicts that the largest AI models will become commodities with low switching costs. This belief underpins IBM's strategy to *not* compete in building frontier models, but rather to partner with providers and focus on smaller, specialized enterprise models where it can build a moat.

Related Insights

The assumption that enterprise API spending on AI models creates a strong moat is flawed. In reality, businesses can and will easily switch between providers like OpenAI, Google, and Anthropic. This makes the market a commodity battleground where cost and on-par performance, not loyalty, will determine the winners.

Microsoft's decision to promote Anthropic models on Azure as aggressively as OpenAI's reflects a core belief from CEO Satya Nadella. He anticipates AI models will become commoditized, making the underlying intelligence interchangeable and the cloud platform the primary point of differentiation and value capture.

The common practice of model distillation suggests that AI capabilities will eventually be commoditized. As smaller models can cheaply mimic larger ones, differentiation will shift away from raw performance to product integration and price, likely triggering a massive price war among providers.
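
To make the mechanism concrete, here is a minimal sketch of the standard soft-label distillation objective, assuming a PyTorch setup; the tensor shapes and temperature value are illustrative placeholders, not details from the episode.

```python
# Hypothetical sketch of knowledge distillation: a small "student" model is
# trained to match the softened output distribution of a large "teacher".
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2, as is conventional, to keep gradient magnitudes comparable.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Toy usage: random logits stand in for real model outputs over a 32k vocabulary.
student_logits = torch.randn(4, 32000)
teacher_logits = torch.randn(4, 32000)
print(distillation_loss(student_logits, teacher_logits))
```

The economic point follows directly: once the teacher's behavior can be copied this cheaply, raw capability stops being a durable differentiator.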

Leading AI models are becoming increasingly similar in capability. This rapid convergence suggests the underlying technology is becoming a commodity, and competitive advantage will likely shift to user interface, distribution, and specific applications rather than the core model itself.

If AI makes intelligence cheap and universally available, its economic value may collapse. This theory suggests that selling raw AI models could become a low-margin, utility-like business. Profitability will depend on building moats through specialized applications or regulatory capture, not on selling base intelligence.

AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.

Arvind Krishna forecasts a 1000x drop in AI compute costs over five years. This won't just come from better chips (a 10x gain). It will be compounded by new processor architectures (another 10x) and major software optimizations like model compression and quantization (a final 10x).
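
The arithmetic behind the forecast is simple compounding. The sketch below just multiplies out the three rough 10x factors cited in the insight; the stage names and multipliers come from the claim itself, not from measured data.

```python
# Illustrative compounding of the forecast's three ~10x cost reductions.
stages = {
    "better chips": 10,
    "new processor architectures": 10,
    "software optimizations (compression, quantization)": 10,
}

total = 1
for name, factor in stages.items():
    total *= factor
    print(f"after {name}: {total}x cheaper")
# -> after better chips: 10x cheaper
# -> after new processor architectures: 100x cheaper
# -> after software optimizations (compression, quantization): 1000x cheaper
```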

The true commercial impact of AI will likely come from small, specialized "micro models" solving boring, high-volume business tasks. While highly valuable, these models are cheap to run and cannot economically justify the current massive capital expenditure on AGI-focused data centers.

Contrary to the 'winner-takes-all' narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed each other's model capabilities, the underlying Large Language Models (LLMs) risk becoming commodities, making it difficult for any single player to justify stratospheric valuations long-term.

As AI models become commoditized, a slight performance edge isn't a sustainable advantage. The companies that win will be those that build the best systems for implementation, trust, and workflow integration around those models. This robust, trust-based ecosystem becomes the primary competitive moat, not the underlying technology.