
Investor Dan Sundheim sees LLM companies as running a hybrid business model. Like Netflix, they spend heavily upfront on a fixed asset (the model) that can then be sold at high margins. Like Spotify, their defensibility comes from personalizing an otherwise commoditized product, creating a sticky user experience that commands pricing power.

Related Insights

As AI model performance converges, the key differentiator will become memory. The accumulated context and personal data a model holds on a user creates a high switching cost, making it too painful to move to a competitor even when that competitor temporarily offers superior features.

The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

Salesforce CEO Marc Benioff claims large language models (LLMs) are becoming commoditized infrastructure, analogous to disk drives. He believes the idea of a specific model providing a sustainable competitive advantage ('moat') has 'expired,' suggesting long-term value will shift to applications, proprietary data, and distribution.

Unlike traditional APIs, LLMs are hard to abstract away. Users develop a preference for a specific model's 'personality' and performance (e.g., GPT-4 vs. 3.5), making it difficult for applications to swap out the underlying model without users noticing and pushing back.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.

While AI will accelerate hyperscaler growth short-term, Dan Sundheim believes their business models will degrade. Their customer base will concentrate around a few LLM providers which, once cash-flow positive, will likely in-source compute. This shift from a fragmented customer base to a concentrated one erodes the hyperscalers' pricing power and long-term defensibility.

Sam Altman argues that beyond model quality, ChatGPT's stickiest advantage is personalization. He believes as the AI learns a user's context and preferences, it creates a valuable relationship that is difficult for competitors to displace. He likens this deep-seated loyalty to picking a toothpaste brand for life.

Dan Sundheim argues that the biggest threat to LLM companies is not their addressable market, which is nearly infinite, but the temptation to pursue too many verticals at once. Spreading a fixed-cost asset (the model) across many markets is economically rational, but history shows that companies rarely succeed when they simultaneously attack consumer, enterprise, and science without a focused A-team on each.

AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.