According to Salesforce CEO Marc Benioff, the idea that a proprietary LLM provides a sustainable competitive advantage is a 'fantasy.' He frames LLMs as the 'new disk drives'—commoditized infrastructure that businesses will simply swap for the cheapest and best-performing option, preventing any single provider from establishing a long-term moat.

Related Insights

The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

The historical advantage of being first to market has evaporated. It once took years for large companies to clone a successful startup, but AI development tools now enable clones to be built in weeks. This accelerates commoditization, meaning a company's competitive edge is now measured in months, not years, demanding a much faster pace of innovation.

AI capabilities strongly differentiate a product against human-powered alternatives. However, they are not a sustainable moat against competitors who can use the same AI models. Lasting defensibility still comes from traditional moats like workflow integration and network effects.

The long-held belief that a complex codebase provides a durable competitive advantage is becoming obsolete due to AI. As software becomes easier to replicate, defensibility shifts away from the technology itself and back toward classic business moats like network effects, brand reputation, and deep industry integration.

Salesforce CEO Marc Benioff claims large language models (LLMs) are becoming commoditized infrastructure, analogous to disk drives. He believes the idea of a specific model providing a sustainable competitive advantage ('moat') has 'expired,' suggesting long-term value will shift to applications, proprietary data, and distribution.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer 'promiscuity.' This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT), where they can own the end user.

Unlike the cloud market with its high switching costs, LLM workloads can be moved between providers with a single line of code. This creates volatile market dynamics in which millions of dollars in spend can shift overnight based on model performance or cost, posing a substantial risk to the LLM providers themselves.
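The 'single line of code' claim can be illustrated with a minimal sketch. Many LLM vendors expose broadly OpenAI-compatible chat endpoints, so switching providers often reduces to changing one configuration string; the provider names, URLs, and model IDs below are illustrative assumptions, not a definitive integration guide.

```python
# Hypothetical illustration of low LLM switching costs: the application
# code never changes, only the provider selection does. All entries here
# are assumed placeholders, not verified endpoints or model names.
PROVIDERS = {
    "provider_a": {"base_url": "https://api.provider-a.example/v1", "model": "model-a-large"},
    "provider_b": {"base_url": "https://api.provider-b.example/v1", "model": "model-b-large"},
}

def client_config(provider: str) -> dict:
    """Return the request settings for a provider.

    Everything downstream (prompt construction, retries, logging)
    stays identical regardless of which provider is chosen.
    """
    cfg = PROVIDERS[provider]
    return {"base_url": cfg["base_url"], "model": cfg["model"]}

# Moving the workload is literally a one-line change:
active = client_config("provider_a")      # today's choice
# active = client_config("provider_b")    # tomorrow, if cheaper or better
```

Because the switch is a config edit rather than a migration project, spend can follow price and benchmark results almost immediately, which is the dynamic the insight above describes.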

AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.

While personal history in an AI like ChatGPT seems to create lock-in, it is a weaker moat than for media platforms like Google Photos. Text-based context and preferences are relatively easy to export and transfer to a competitor via another LLM, reducing switching friction.

Salesforce CEO Claims LLMs Are Commodity 'Disk Drives,' Not Defensible Moats | RiffOn