Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as lower-cost open-source alternatives replicate their capabilities within months. This forces model providers to evolve into more defensible application companies in order to survive.
Airbnb CEO Brian Chesky argues that access to powerful AI models will be commoditized, much like electricity. Frontier models are available via API, and slightly older open-source versions are nearly as good for most consumer use cases. The long-term competitive advantage lies in the application, not the underlying model.
While US firms lead in cutting-edge AI, the impressive quality of open-source models from China is compressing the market. As these free models improve, more tasks become "good enough" for open source, creating significant pricing pressure on premium, closed-source foundation models from companies like OpenAI and Google.
The assumption that enterprise API spending on AI models creates a strong moat is flawed. In reality, businesses can and will easily switch between providers like OpenAI, Google, and Anthropic. This makes the market a commodity battleground where cost and on-par performance, not loyalty, will determine the winners.
The assumption that startups can safely build on frontier model APIs may hold only temporarily. Emad Mostaque predicts that once models are sufficiently capable, labs like OpenAI will cut off API access and use their superior internal models to outcompete businesses in every sector, fulfilling their AGI mission.
The long-held belief that a complex codebase provides a durable competitive advantage is becoming obsolete due to AI. As software becomes easier to replicate, defensibility shifts away from the technology itself and back toward classic business moats like network effects, brand reputation, and deep industry integration.
The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.
In the SaaS era, a 2-year head start created a defensible product moat. In the AI era, new entrants can leverage the latest foundation models to instantly create a product on par with, or better than, an incumbent's, erasing any first-mover advantage.
Fears of a single AI company achieving runaway dominance are proving unfounded, as the number of frontier models has tripled in a year. Newcomers can use techniques like synthetic data generation to effectively "drink the milkshake" of incumbents, reverse-engineering their intelligence at lower costs.
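The "milkshake-drinking" mechanism above is essentially distillation: a newcomer queries an incumbent's model to generate synthetic training data, then fine-tunes a cheaper student model on those pairs. A minimal sketch of the data-collection step follows; `teacher_complete` is a hypothetical stand-in for a real frontier-model API call, and the prompts are illustrative.

```python
# Toy sketch of synthetic data generation for distillation:
# query a "teacher" model, collect (prompt, response) pairs,
# and persist them in the JSONL format most fine-tuning
# pipelines accept as supervised training data.
import json


def teacher_complete(prompt: str) -> str:
    # Placeholder for an API call to an incumbent's frontier model.
    return f"Teacher's answer to: {prompt}"


def build_synthetic_dataset(prompts: list[str]) -> list[dict]:
    """Collect (prompt, response) pairs for fine-tuning a student model."""
    return [{"prompt": p, "response": teacher_complete(p)} for p in prompts]


prompts = ["Explain network effects.", "Summarize SaaS economics."]
dataset = build_synthetic_dataset(prompts)

# One JSON object per line, ready to feed to a fine-tuning job.
jsonl = "\n".join(json.dumps(row) for row in dataset)
print(f"{len(dataset)} synthetic examples collected")
```

At scale the same loop runs over millions of prompts, which is why the technique compresses an incumbent's training investment into a fraction of the cost.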
Despite billions in funding, large AI models face a difficult path to profitability. The immense training cost is undercut by competitors creating similar models for a fraction of the price and, more critically, by the ability of others to distill or reverse-engineer the capabilities of existing models, eroding any competitive moat.
Contrary to the 'winner-takes-all' narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed each other's model capabilities, the underlying Large Language Models (LLMs) risk becoming commodities, making it difficult for any single player to justify stratospheric valuations long-term.