LLMs are becoming commoditized. Like gasoline from different stations, models can be swapped based on price or marginal performance differences. This means competitive advantage doesn't come from the model itself, but from how you use it with proprietary data.

Related Insights

Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.
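As a rough illustration of what such a harness might look like, here is a minimal sketch in Python: a task-specific system prompt and a proprietary tool wrapped around whatever model client happens to be plugged in. The `ModelClient` interface, `lookup_orders` tool, and "Acme Corp" prompt are hypothetical stand-ins, not any particular vendor's API.

```python
from typing import Protocol


class ModelClient(Protocol):
    """Hypothetical interface any swappable model provider could satisfy."""

    def complete(self, system: str, user: str) -> str: ...


def lookup_orders(customer_id: str) -> str:
    """Hypothetical proprietary tool: data no competitor can query."""
    return f"Open orders for {customer_id}: #1042 (delayed), #1077 (shipped)"


SYSTEM_PROMPT = (
    "You are a support agent for Acme Corp. Answer only from the order "
    "context provided and never invent order details."
)


def answer_ticket(model: ModelClient, customer_id: str, question: str) -> str:
    # The 'harness': task framing plus proprietary context wrapped around
    # whichever commodity model is plugged in today.
    context = lookup_orders(customer_id)
    return model.complete(
        system=SYSTEM_PROMPT,
        user=f"Order context:\n{context}\n\nCustomer question: {question}",
    )
```

The point of the sketch is that nothing above depends on which model sits behind `ModelClient`; the differentiation lives entirely in the prompt, the tool, and the data they expose.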

According to Salesforce CEO Marc Benioff, the idea that a proprietary LLM provides a sustainable competitive advantage is a 'fantasy.' He frames them as the 'new disk drives': commoditized infrastructure that businesses will simply swap for the cheapest and best-performing option, preventing any single provider from establishing a long-term moat.

Airbnb's CEO argues that access to powerful AI models will be commoditized, much like electricity. Frontier models are available via API, and slightly older open-source versions are nearly as good for most consumer use cases. The long-term competitive advantage lies in the application, not the underlying model.

Since LLMs are commodities, sustainable competitive advantage in AI comes from leveraging proprietary data and unique business processes that competitors cannot replicate. Companies must focus on building AI that understands their specific "secret sauce."

The assumption that enterprise API spending on AI models creates a strong moat is flawed. In reality, businesses can and will easily switch between providers like OpenAI, Google, and Anthropic. This makes the market a commodity battleground where cost and on-par performance, not loyalty, will determine the winners.

Salesforce CEO Marc Benioff claims large language models (LLMs) are becoming commoditized infrastructure, analogous to disk drives. He believes the idea of a specific model providing a sustainable competitive advantage ('moat') has 'expired,' suggesting long-term value will shift to applications, proprietary data, and distribution.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.
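One way to picture that "promiscuity" is a thin adapter layer that hides each vendor behind a common interface, so switching is a routing decision rather than a migration. The sketch below is illustrative only: the adapter classes and prices are made up, and a real implementation would call the vendors' actual SDKs inside the subclasses.

```python
from dataclasses import dataclass


@dataclass
class Quote:
    provider: str
    price_per_1k_tokens: float  # hypothetical list prices, refreshed as vendors reprice


class ProviderAdapter:
    """Thin wrapper; a real subclass would call the vendor's SDK here."""

    def complete(self, prompt: str) -> str:
        raise NotImplementedError


class VendorA(ProviderAdapter):
    def complete(self, prompt: str) -> str:
        return f"[vendor A] response to: {prompt[:30]}..."


class VendorB(ProviderAdapter):
    def complete(self, prompt: str) -> str:
        return f"[vendor B] response to: {prompt[:30]}..."


def route(prompt: str, adapters: dict[str, ProviderAdapter], quotes: list[Quote]) -> str:
    # No loyalty: every request goes to whichever provider is cheapest today.
    cheapest = min(quotes, key=lambda q: q.price_per_1k_tokens)
    return adapters[cheapest.provider].complete(prompt)


adapters = {"a": VendorA(), "b": VendorB()}
quotes = [Quote("a", 0.50), Quote("b", 0.35)]
print(route("Summarize this support ticket.", adapters, quotes))
```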

AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.
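A minimal sketch of that "keep your context, rent the model" pattern, assuming a naive keyword-overlap retriever over an in-memory document store (a real system would use embeddings and a vector index; the document contents here are invented):

```python
def retrieve(query: str, private_docs: dict[str, str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval over proprietary documents."""
    terms = set(query.lower().split())
    scored = sorted(
        private_docs.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]


def grounded_prompt(query: str, private_docs: dict[str, str]) -> str:
    # The data stays with the company; the commodity model only ever sees
    # the narrow slice of context needed for this one request.
    context = "\n".join(retrieve(query, private_docs))
    return f"Company context:\n{context}\n\nQuestion: {query}"


docs = {
    "pricing": "Enterprise tier renewals are negotiated every March.",
    "churn": "Churn spikes when onboarding takes longer than 14 days.",
}
print(grounded_prompt("Why do enterprise customers churn?", docs))
```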

Unlike traditional SaaS where high switching costs prevent price wars, the AI market faces a unique threat. The portability of prompts and reliance on interchangeable models could enable rapid commoditization. A price war could be "terrifying" and "brutal" for the entire ecosystem, posing a significant downside risk.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.