While personal history in an AI assistant like ChatGPT seems to create lock-in, it is a weaker moat than the one enjoyed by media platforms like Google Photos. Text-based context and preferences are relatively easy to export, and an LLM can even automate the transfer to a competitor, reducing switching friction.

Related Insights

Traditional SaaS switching costs were based on painful data migrations, which LLMs may now automate. The new moat for AI companies is creating deep, customized integrations into a customer's unique operational workflows. This is achieved through long, hands-on pilot periods that make the AI solution indispensable and hard to replace.

The notion of building a business as a 'thin wrapper' around a foundational model like GPT is flawed. Truly defensible AI products, like Cursor, build numerous specific, fine-tuned models to deeply understand a user's domain. This creates a data and performance moat that a generic model cannot easily replicate, much like Salesforce was more than just a 'thin wrapper' on a database.

While today's focus is on text-based LLMs, the true, defensible AI battleground will be in complex modalities like video. Generating video requires multiple interacting models and unique architectures, creating far greater potential for differentiation and a wider competitive moat than text-based interfaces, which will become commoditized.

AI capabilities offer strong differentiation against human alternatives. However, this is not a sustainable moat against competitors who can use the same AI models. Lasting defensibility still comes from traditional moats like workflow integration and network effects.

The long-held belief that a complex codebase provides a durable competitive advantage is becoming obsolete due to AI. As software becomes easier to replicate, defensibility shifts away from the technology itself and back toward classic business moats like network effects, brand reputation, and deep industry integration.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.
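The interchangeability argument rests on a simple architectural fact: when application code talks to models only through a thin common interface, swapping vendors is a one-line change. A minimal sketch of that pattern, using hypothetical provider stubs rather than any real vendor SDK:

```python
from typing import Protocol


class ChatModel(Protocol):
    """Provider-agnostic interface: prompt in, completion out."""
    def complete(self, prompt: str) -> str: ...


class ProviderA:
    # Stand-in for one vendor's API client (hypothetical, not a real SDK).
    def complete(self, prompt: str) -> str:
        return f"[provider-a] response to: {prompt}"


class ProviderB:
    # Stand-in for a competing vendor (also hypothetical).
    def complete(self, prompt: str) -> str:
        return f"[provider-b] response to: {prompt}"


def run_workflow(model: ChatModel, prompt: str) -> str:
    # Application logic depends only on the interface, never on a
    # specific vendor, so the model behind it can be swapped freely.
    return model.complete(prompt)


# Switching providers requires no change to run_workflow itself:
print(run_workflow(ProviderA(), "Summarize Q3 results"))
print(run_workflow(ProviderB(), "Summarize Q3 results"))
```

This is exactly the structure that makes the model layer "promiscuous": the switching cost lives entirely in the adapter, not in the application, which is why defensibility has to be built above this layer.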

Creating a basic AI coding tool is easy. The defensible moat comes from building a vertically integrated platform with its own backend infrastructure, such as databases, user management, and integrations. This is extremely difficult for competitors to replicate, especially if they rely on third-party services like Supabase.

Despite its massive user base, OpenAI's position is precarious. It lacks true network effects, strong feature lock-in, and control over its cost base since it relies on Microsoft's infrastructure. Its long-term defensibility depends on rapidly building product ecosystems and its own infrastructure advantages.

Unlike traditional SaaS where high switching costs prevent price wars, the AI market faces a unique threat. The portability of prompts and reliance on interchangeable models could enable rapid commoditization. A price war could be "terrifying" and "brutal" for the entire ecosystem, posing a significant downside risk.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.