Lightspeed justifies investing in competing LLM companies (xAI, Anthropic, Mistral) by viewing them as distinct software platforms targeting different markets (consumer, enterprise, open-source), not as interchangeable competitors. This framing enables a portfolio approach to the foundational AI layer.
Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy various models for different tasks. This allows them to optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.
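To make the pattern concrete, here is a minimal Python sketch of per-task model routing. The registry, model names, and task labels are all hypothetical stand-ins, not AlphaSense's actual system:

```python
# Hypothetical per-task model routing: all model names, task labels,
# and the registry itself are illustrative, not AlphaSense's system.
from dataclasses import dataclass

@dataclass
class RouteConfig:
    model: str         # which model to call for this task
    temperature: float

# Different tasks, and even different audiences for the same task,
# map to different models.
ROUTES = {
    "earnings_summary_buyside":   RouteConfig(model="model-a", temperature=0.2),
    "earnings_summary_corporate": RouteConfig(model="model-b", temperature=0.4),
    "sentiment_tagging":          RouteConfig(model="small-open-model", temperature=0.0),
}

def route(task: str) -> RouteConfig:
    """Pick the model configuration for a task, with a sensible default."""
    return ROUTES.get(task, RouteConfig(model="model-b", temperature=0.3))
```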
Top AI labs like Anthropic are simultaneously taking massive investments from direct competitors like Microsoft, NVIDIA, Google, and Amazon. This creates a confusing web of reciprocal deals for capital and cloud compute, blurring traditional competitive lines and creating complex interdependencies.
While OpenAI pursues a broad strategy across consumer, science, and enterprise, Anthropic is hyper-focused on the $2 trillion software development market. This narrow focus on high-value enterprise use cases is allowing it to grow revenue significantly faster than its more diversified rival.
The fear that large AI labs will dominate all software is overblown. The competitive landscape will likely mirror Google's history: winning in some verticals (Maps, Email) while losing in others (Social, Chat). Victory will be determined by superior team execution within each specific product category, not by the sheer power of the underlying foundation model.
Venture investors aren't concerned when a portfolio company launches products that compete with their other investments. This is viewed as a positive signal of a massive winner—a company so dominant it expands into adjacent categories, which is the ultimate goal.
Rather than committing to a single LLM provider like OpenAI or Google (Gemini), Hux uses multiple commercial models. They've found that different models excel at different tasks within their app. This multi-model strategy allows them to optimize for quality and latency on a per-workflow basis, avoiding a one-size-fits-all compromise.
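A hedged sketch of what per-workflow optimization could look like in code: pick the cheapest model whose measured quality and latency clear the workflow's bar. The candidate models, scores, and thresholds below are invented for illustration, not Hux's real numbers:

```python
# Illustrative per-workflow selection: choose the cheapest model whose
# measured quality and latency meet the workflow's requirements.
# All model names, scores, latencies, and costs are made up.
CANDIDATES = [
    # (model, quality_score, p95_latency_ms, cost_per_1k_tokens)
    ("small-fast-model", 0.78, 300, 0.0005),
    ("mid-tier-model",   0.86, 900, 0.0030),
    ("frontier-model",   0.93, 2500, 0.0150),
]

def pick_model(min_quality: float, max_latency_ms: int) -> str:
    """Return the cheapest candidate meeting this workflow's quality/latency bar."""
    viable = [
        (cost, model)
        for model, quality, latency, cost in CANDIDATES
        if quality >= min_quality and latency <= max_latency_ms
    ]
    if not viable:
        raise ValueError("no model meets this workflow's requirements")
    return min(viable)[1]  # cheapest viable option

# Interactive features need low latency; batch analysis can trade
# latency for quality.
print(pick_model(min_quality=0.75, max_latency_ms=500))   # small-fast-model
print(pick_model(min_quality=0.90, max_latency_ms=5000))  # frontier-model
```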
Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.
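One way to see why switching costs are so low: because the major providers all expose similar chat-style APIs, a thin abstraction turns the choice of provider into a one-line configuration change. The classes below are illustrative stand-ins, not real SDK calls:

```python
# Sketch of why model APIs are "easily interchangeable": a thin common
# interface makes the provider a config choice rather than an
# architectural commitment. Provider classes are illustrative stubs.
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In practice: call the OpenAI chat completions API here.
        return f"[openai] {prompt}"

class AnthropicProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # In practice: call the Anthropic messages API here.
        return f"[anthropic] {prompt}"

# Swapping providers is a config change, not a re-architecture,
# unlike migrating stateful infrastructure off AWS or GCP.
provider: ChatProvider = AnthropicProvider()
print(provider.complete("Summarize this filing."))
```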
Initially, even OpenAI believed a single, ultimate 'model to rule them all' would emerge. This thinking has completely changed to favor a proliferation of specialized models, creating a healthier, less winner-take-all ecosystem where different models serve different needs.
Anthropic is making its models available on AWS, Azure, and Google Cloud. This multi-cloud approach is a deliberate business strategy to position itself as a neutral infrastructure provider. Whereas rivals might build apps that compete with their own customers, this signals to customers that Anthropic aims to be a partner, not a competitor.
The AI value chain flows from hardware (NVIDIA) through model providers to apps, with LLM providers currently capturing most of the margin. The long-term viability of app-layer businesses depends on a competitive model layer. This competition drives down API costs, preventing model providers from having excessive pricing power and allowing apps to build sustainable businesses.