Despite constant new model releases, enterprises don't frequently switch LLMs. Prompts and workflows become highly optimized for a specific model's behavior, creating significant switching costs. Performance gains of a new model must be substantial to justify this re-engineering effort.
LLMs are becoming commoditized. Like gas from different stations, models can be swapped based on price or marginal performance differences. This means competitive advantage doesn't come from the model itself but from how you use it with proprietary data.
Recognizing that there is no single "best" LLM, AlphaSense built a system to test and deploy different models for different tasks. This lets them optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.
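As a rough illustration, model selection of this kind starts with running the same task through each candidate and scoring the outputs. The sketch below assumes the `openai` Python SDK; the candidate model names, the test case, and the toy overlap metric are placeholders, not AlphaSense's actual system.

```python
# Toy harness for comparing candidate models on one task before deployment.
# Candidate names, the test case, and the overlap metric are placeholders;
# this shows the general shape of the idea, not any vendor's real pipeline.
from openai import OpenAI

client = OpenAI()

CANDIDATES = ["gpt-4o", "gpt-4o-mini"]
CASES = [("Summarize: <earnings call transcript>", "revenue guidance margin")]

def score(output: str, reference: str) -> float:
    """Toy metric: keyword overlap. Real evals use graders or LLM judges."""
    ref = set(reference.lower().split())
    out = set(output.lower().split())
    return len(ref & out) / max(len(ref), 1)

for model in CANDIDATES:
    total = 0.0
    for prompt, reference in CASES:
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        total += score(resp.choices[0].message.content or "", reference)
    print(f"{model}: {total / len(CASES):.2f}")
```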
Simply offering the latest model is no longer a competitive advantage. True value is created in the system built around the model—the system prompts, tools, and overall scaffolding. This 'harness' is what optimizes a model's performance for specific tasks and delivers a superior user experience.
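To make "harness" concrete, here is a minimal sketch assuming an OpenAI-style chat-completions client. The system prompt and the `search_filings` tool are hypothetical; a production harness would add retrieval, tool-call execution, output validation, and retries around this core.

```python
# Minimal "harness" sketch: the model is an interchangeable input, while the
# system prompt, tool definitions, and surrounding logic carry the product
# value. Uses the openai Python SDK; the prompt and the search_filings tool
# are hypothetical illustrations, not any vendor's actual setup.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a financial research assistant. Cite a document ID for every "
    "claim, and decline to answer when the corpus has no support."
)

# One hypothetical tool the harness exposes to the model.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_filings",
        "description": "Full-text search over the customer's document corpus.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def answer(question: str, model: str = "gpt-4o") -> str:
    """One harnessed call; note the model name is just a swappable parameter."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        tools=TOOLS,
    )
    # A real harness would also execute tool calls and validate citations here.
    return response.choices[0].message.content or ""
```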
Traditional SaaS switching costs were based on painful data migrations, which LLMs may now automate. The new moat for AI companies is creating deep, customized integrations into a customer's unique operational workflows. This is achieved through long, hands-on pilot periods that make the AI solution indispensable and hard to replace.
Unlike traditional APIs, LLMs are hard to abstract away. Users develop a preference for a specific model's 'personality' and performance (e.g., GPT-4 vs. GPT-3.5), making it difficult for applications to swap out the underlying model without users noticing and pushing back.
Enterprises will shift from relying on a single large language model to using orchestration platforms. These platforms will allow them to 'hot swap' various models—including smaller, specialized ones—for different tasks within a single system, optimizing for performance, cost, and use case without being locked into one provider.
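The core mechanic of such a platform is a routing table from task type to model. A minimal sketch, again assuming the `openai` SDK: the task-to-model mapping below is a hypothetical policy, and real orchestration platforms layer fallbacks, cost tracking, and continuous evals on top of this idea.

```python
# Sketch of per-task model 'hot swapping' inside one system. The task-to-model
# table is a hypothetical policy chosen for illustration only.
from openai import OpenAI

client = OpenAI()

MODEL_BY_TASK = {
    "summarize": "gpt-4o-mini",  # cheap, fast tier for routine work
    "extract": "gpt-4o-mini",
    "reason": "gpt-4o",          # stronger model only where it pays off
}

def run(task: str, prompt: str) -> str:
    model = MODEL_BY_TASK.get(task, "gpt-4o-mini")  # default to the cheap tier
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""
```

Because the model name is just data in a table, swapping a model for one task is a config change rather than a re-architecture.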
An enterprise CIO confirms that once a company invests time training a generative AI solution, the cost to switch vendors becomes prohibitive. This means early-stage AI startups can build a powerful moat simply by being the first vendor to get implemented and trained.
For consumer products like ChatGPT, models are already good enough for common queries. However, for complex enterprise tasks like coding, performance is far from solved. This gives model providers a durable path to sustained revenue growth through continued quality improvements aimed at professionals.
Unlike the cloud market with its high switching costs, LLM workloads can be moved between providers with a single line of code. This creates volatile market dynamics in which millions of dollars in spend can shift overnight based on model performance or cost, posing a huge risk to the LLM providers themselves.
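The "single line of code" claim is close to literal: many providers expose OpenAI-compatible endpoints, so moving a workload often amounts to changing the client's base URL and the model name. The provider-B URL and model below are illustrative assumptions; check each provider's docs for current values.

```python
# Many providers expose OpenAI-compatible endpoints, so "moving the workload"
# is often just the client constructor plus the model name. The provider-B
# URL and model name are illustrative assumptions, not verified values.
from openai import OpenAI

client = OpenAI(api_key="...", base_url="https://api.openai.com/v1")
MODEL = "gpt-4o"

# Switching providers: comment out the two lines above, uncomment these two.
# client = OpenAI(api_key="...", base_url="https://api.together.xyz/v1")
# MODEL = "meta-llama/Llama-3.3-70B-Instruct-Turbo"

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize our Q3 risks."}],
)
print(response.choices[0].message.content)
```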
While new large language models boast superior performance on technical benchmarks, the practical impact on day-to-day product-manager (PM) productivity is hitting a point of diminishing returns. The leap from one version to the next doesn't unlock significant new capabilities for common PM workflows.