Airbnb's CEO argues that access to powerful AI models will be commoditized, much like electricity. Frontier models are available via API, and slightly older open-source versions are nearly as good for most consumer use cases. The long-term competitive advantage lies in the application, not the underlying model.
The founder predicts that hyper-specific vertical AI solutions will prove too easy to replicate: they may find initial traction, but they lack a durable moat. The stronger long-term business is building horizontal tools that empower users to solve their own complex problems.
As startups build on commoditized AI models like GPT, product differentiation becomes less of a moat. Success now hinges on cracking growth faster than rivals. The new competitive advantages are proprietary data for training models and the deep domain expertise required to find unique growth levers.
The fear that large AI labs will dominate all software is overblown. The competitive landscape will likely mirror Google's history: winning in some verticals (Maps, Email) while losing in others (Social, Chat). Victory will be determined by superior team execution within each specific product category, not by the sheer power of the underlying foundation model.
Salesforce CEO Marc Benioff claims large language models (LLMs) are becoming commoditized infrastructure, analogous to disk drives. He believes the idea of a specific model providing a sustainable competitive advantage ("moat") has "expired," suggesting long-term value will shift to applications, proprietary data, and distribution.
Counter to fears that foundation models will render all apps obsolete, AI startups can build defensible businesses by embedding AI into unique workflows, owning the customer relationship, and creating network effects. This mirrors how top App Store apps succeeded despite Apple's platform dominance.
Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.
The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.
Fears of a single AI company achieving runaway dominance are proving unfounded, as the number of frontier models has tripled in a year. Newcomers can use techniques like synthetic data generation to effectively "drink the milkshake" of incumbents, reverse-engineering their intelligence at lower costs.
The AI value chain flows from hardware (NVIDIA) to apps, with LLM providers currently capturing most of the margin. The long-term viability of app-layer businesses depends on a competitive model layer: competition drives down API costs, limits model providers' pricing power, and lets apps build sustainable businesses.
Brian Chesky applies the classic "overestimate in a year, underestimate in a decade" framework to AI. He argues that despite the hype, daily life hasn't changed much yet. The true shift will occur in 3-5 years, once the top 50 consumer apps are rebuilt as AI-native products.