Like Kayak for flights, a model aggregator delivers superior value to users who want the best tool for each specific job. Big tech companies are restricted to their own models, creating an opening for startups to win by offering a 'single pane of glass' across all available models.
The AI market is becoming "polytheistic," with numerous specialized models excelling at niche tasks, rather than "monotheistic," where a single super-model dominates. This fragmentation creates opportunities for differentiated startups to thrive by building effective models for specific use cases, as no single model has mastered everything.
The "AI wrapper" concern is mitigated by a multi-model strategy. A startup can integrate the best models from various providers for different tasks, creating a superior product. A platform like OpenAI is incentivized to only use its own models, creating a durable advantage for the startup.
Most successful SaaS companies weren't built on new core technology; they packaged existing technology (databases, CRMs) into solutions for specific industries. AI is no different: the opportunity lies in unbundling a general tool like ChatGPT and rebundling its capabilities into vertical-specific products.
Rather than committing to a single LLM provider such as OpenAI or Google (Gemini), Hux uses multiple commercial models, having found that different models excel at different tasks within its app. This multi-model strategy lets the team optimize for quality and latency on a per-workflow basis instead of accepting a one-size-fits-all compromise.
Enterprises will shift from relying on a single large language model to using orchestration platforms. These platforms will allow them to 'hot swap' various models—including smaller, specialized ones—for different tasks within a single system, optimizing for performance, cost, and use case without being locked into one provider.
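The routing layer behind this kind of hot-swapping can be very thin. The sketch below is a minimal, hypothetical illustration in Python: the model names, task names, and stub functions are all assumptions, and a real orchestration platform would wrap actual provider SDKs and add fallbacks, retries, and cost/latency telemetry.

```python
# Minimal sketch of per-task model routing ("hot swapping").
# Model identifiers and stubs are hypothetical, not any real provider's API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Route:
    model: str                   # hypothetical model identifier
    call: Callable[[str], str]   # provider-specific client behind a common signature


def summarize_stub(prompt: str) -> str:
    # Stand-in for a small, cheap model tuned for summarization.
    return f"[small-model summary of: {prompt[:40]}...]"


def codegen_stub(prompt: str) -> str:
    # Stand-in for a frontier model used only where quality demands it.
    return f"[frontier-model code for: {prompt[:40]}...]"


# The routing table is plain configuration, so swapping the model behind a
# task is a one-line change rather than a re-architecture.
ROUTES: dict[str, Route] = {
    "summarize": Route("small-fast-model", summarize_stub),
    "codegen":   Route("frontier-model", codegen_stub),
}


def run(task: str, prompt: str) -> str:
    return ROUTES[task].call(prompt)


if __name__ == "__main__":
    print(run("summarize", "Quarterly report text goes here..."))
    print(run("codegen", "Write a function that parses CSV rows."))
```

The point of the pattern is that model choice lives in configuration, so performance, cost, and provider can be tuned per task without touching application logic.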
The initial rush for every company to build its own proprietary model is over. The new winning strategy, seen with firms like Adobe, is to leverage existing product distribution by integrating multiple best-in-class third-party models, enabling faster and more powerful user experiences.
The belief that a single, god-level foundation model would dominate has proven false. Horowitz points to successful AI applications like Cursor, which uses 13 different models. This shows that value lies in the complex orchestration and design at the application layer, not just in having the largest single model.
Perplexity's CEO argues that building foundation models is not necessary for success. By focusing on the end-to-end consumer experience and leveraging increasingly commoditized models, startups can build a highly valuable business without needing billions in funding for model training.
The common critique of AI application companies as "GPT wrappers" with no moat is proving false. The best startups are evolving beyond using a single third-party model. They are using dozens of models and, crucially, are backward-integrating to build their own custom AI models optimized for their specific domain.
Instead of exposing a model selector, a company can offer a proprietary, branded model that chains different specialized models under the hood for various sub-tasks (e.g., search, generation). This not only improves overall performance but also insulates the business from the pricing and launch cycles of any single frontier model lab.
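As a rough illustration of that chaining idea, the sketch below hides a two-step pipeline (retrieval, then generation) behind a single product-facing function. Both steps are stubs and every name here is hypothetical; in practice each step would call a different underlying model or provider.

```python
# Minimal sketch of chaining specialized models behind one branded entry point.
# retrieve() and generate() are stand-ins for separate underlying models.

def retrieve(query: str) -> list[str]:
    # Stand-in for a search/embedding model returning relevant snippets.
    return [f"snippet relevant to '{query}'"]


def generate(query: str, context: list[str]) -> str:
    # Stand-in for a generation model conditioned on retrieved context.
    return f"Answer to '{query}' grounded in {len(context)} snippet(s)."


def branded_model(query: str) -> str:
    """Single product-facing interface; the chain underneath can change
    without users ever seeing a model selector."""
    return generate(query, retrieve(query))


if __name__ == "__main__":
    print(branded_model("How do I export my data?"))
```

Because users only ever see `branded_model`, any stage of the chain can be swapped for a cheaper, faster, or in-house model without a visible product change.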