Even when model quality is comparable, user experience details create significant product stickiness for LLMs. Google's Gemini feels noticeably slower than ChatGPT, while ChatGPT's mobile app adds satisfying haptic feedback. That faster-feeling, more polished UX is a key differentiator that pulls users who try competitors back to ChatGPT.
As AI makes 'good enough' software easy to generate, a merely functional product is no longer a moat. The new advantage is an experience so delightful that users prefer it over a custom-built alternative. Design becomes the primary driver of value, the thing that sets premium software apart from an endless supply of generated alternatives.
Though it rarely appears in formal business frameworks, speed of execution is the most important initial moat for an AI startup. Large incumbents are slowed by process and bureaucracy; startups like Cursor exploit this by shipping features on daily cycles, a pace incumbents cannot match.
Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.
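As a concrete illustration, here is a minimal sketch of such a harness: a system prompt, one tool, and a dispatch loop wrapped around a generic call_model function. Everything in it (call_model, search_codebase, the message format) is a hypothetical placeholder, not any particular vendor's SDK.

```python
from typing import Callable

SYSTEM_PROMPT = "You are a coding assistant. Prefer the provided tools over guessing."

def search_codebase(query: str) -> str:
    """Hypothetical tool: return snippets matching `query` from the user's repo."""
    return f"(stub) results for {query!r}"

TOOLS: dict[str, Callable[[str], str]] = {"search_codebase": search_codebase}

def run_harness(user_message: str, call_model: Callable) -> str:
    """Wrap the raw model in scaffolding: inject the system prompt, execute
    any tool call it requests, and feed the result back until it answers."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
    while True:
        reply = call_model(messages)  # assumed to return a dict such as
                                      # {"tool": ..., "arguments": ...} or {"content": ...}
        if reply.get("tool") in TOOLS:
            result = TOOLS[reply["tool"]](reply.get("arguments", ""))
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]
```

The moat lives in this layer: the prompts, tools, and loop can be tuned per task even when the underlying model is a commodity.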
Despite access to state-of-the-art models, most ChatGPT users defaulted to older versions. The cognitive load of using a "model picker" and uncertainty about speed/quality trade-offs were bigger barriers than price. Automating this choice is key to driving mass adoption of advanced AI reasoning.
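One way to picture that automation is a simple per-request router. The sketch below uses assumed model names (fast-chat, deep-reasoner) and an illustrative heuristic, not any vendor's actual routing logic.

```python
FAST_MODEL = "fast-chat"           # hypothetical low-latency default model
REASONING_MODEL = "deep-reasoner"  # hypothetical slower, higher-quality model

REASONING_HINTS = ("prove", "debug", "step by step", "plan", "why does")

def pick_model(prompt: str) -> str:
    """Choose a model so the user never has to: long or analytical prompts
    go to the reasoning model, everything else stays on the fast default."""
    lowered = prompt.lower()
    if len(prompt) > 1200 or any(hint in lowered for hint in REASONING_HINTS):
        return REASONING_MODEL
    return FAST_MODEL

print(pick_model("What's a good name for a cat?"))              # fast-chat
print(pick_model("Debug this stack trace step by step: ..."))   # deep-reasoner
```

A production router would likely use a learned classifier or latency and cost budgets, but the point stands: the decision belongs in the product, not in a dropdown.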
Users will switch from an incumbent if a competitor makes the experience feel effortless. The key is to shift the user's feeling from maneuvering a complex 'tractor' to seamlessly riding a 'bicycle,' creating a level of delight that overcomes the high costs of switching.
While ChatGPT is still the leader with 600-700 million monthly active users, Google's Gemini has quickly scaled to 400 million. This rapid adoption signals that the AI landscape is not a monopoly and that user preference is diversifying quickly between major platforms.
Unlike traditional APIs, LLMs are hard to abstract away. Users develop a preference for a specific model's 'personality' and performance (e.g., GPT-4 vs. 3.5), which makes it difficult for applications to swap out the underlying model without users noticing and pushing back.