Gurley notes that major AI model providers like OpenAI and Anthropic are shifting from solely selling API access to building their own applications. This move up the stack signals a fear that being a pure model provider is not a defensible moat and could lead to commoditization.

Related Insights

The inconsistency and "laziness" of base LLMs are a major hurdle. The best application-layer companies differentiate themselves not by merely wrapping a model, but by building a complex harness that reliably applies the right amount of intelligence to a specific user task, creating a defensible product.

Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as they are replicated by lower-cost open-source alternatives. This forces model providers to evolve into more defensible application companies to survive.

Specialized SaaS companies like Writer and Intercom are moving beyond simply wrapping OpenAI or Anthropic APIs. They are now training their own foundation models to create more defensible, vertically integrated AI products, signaling a shift away from platform dependency toward bespoke AI stacks.

The assumption that enterprise API spending on AI models creates a strong moat is flawed. In reality, businesses can and will switch easily between providers like OpenAI, Google, and Anthropic. This makes the market a commodity battleground where cost and on-par performance, not loyalty, determine the winners.

Companies like Anthropic and OpenAI are shifting from being API providers to building first-party "super apps." This creates a conflict where they might reserve their most powerful models for internal use, giving smaller, distilled versions to API customers, thus undermining the third-party ecosystem they helped create.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.

Top-tier coding models from Google, OpenAI, and Anthropic are functionally equivalent and similarly priced. This commoditization means the real competition is not on model performance, but on building a sticky product ecosystem (like Claude Code) that creates user lock-in through a familiar workflow and environment.

Leading AI companies like Anthropic are positioning themselves as the infrastructure layer for intelligence, akin to how AWS provides infrastructure for computing. Their strategy is to partner with and enable existing SaaS companies, not to destroy them by competing directly at the application level.

The common critique of AI application companies as "GPT wrappers" with no moat is proving false. The best startups are evolving beyond using a single third-party model. They are using dozens of models and, crucially, are backward-integrating to build their own custom AI models optimized for their specific domain.

As AI models become commoditized, a slight performance edge isn't a sustainable advantage. The companies that win will be those that build the best systems for implementation, trust, and workflow integration around those models. This robust, trust-based ecosystem becomes the primary competitive moat, not the underlying technology.