A key value proposition of vertical AI applications is model agnosticism. They act as a strategic layer for enterprises, routing tasks to the best available LLM at any given time. This de-risks an enterprise AI strategy from lock-in to a single model provider whose performance may be surpassed.

Related Insights

Recognizing there is no single "best" LLM, AlphaSense built a system to test and deploy various models for different tasks. This allows them to optimize for performance and even stylistic preferences, using different models for their buy-side finance clients versus their corporate users.

The "AI wrapper" concern is mitigated by a multi-model strategy. A startup can integrate the best models from various providers for different tasks, creating a superior product. A platform like OpenAI, by contrast, is incentivized to use only its own models, which creates a durable advantage for the startup.

Navan's CEO sees the debate over which LLM is best as unimportant because the infrastructure is becoming a commodity. The real value is created in the application layer. Navan's own agentic platform, Cognition, intelligently routes tasks to different models (OpenAI, Anthropic, Google) to get the best result for the job.

Rather than committing to a single LLM provider like OpenAI or Gemini, Hux uses multiple commercial models. They've found that different models excel at different tasks within their app. This multi-model strategy allows them to optimize for quality and latency on a per-workflow basis, avoiding a one-size-fits-all compromise.

Enterprises will shift from relying on a single large language model to using orchestration platforms. These platforms will allow them to 'hot swap' various models—including smaller, specialized ones—for different tasks within a single system, optimizing for performance, cost, and use case without being locked into one provider.
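The "hot swap" idea above reduces to a routing table that maps task types to model identifiers, so swapping models is a configuration change rather than a code change. A minimal sketch, using hypothetical model names and task labels (none of these identifiers come from the source; real orchestration platforms layer fallbacks, cost tracking, and evaluation on top):

```python
from dataclasses import dataclass, field

@dataclass
class ModelRouter:
    # Route table: task type -> model identifier. Editing this mapping
    # is the "hot swap" -- the application code never changes when a
    # better or cheaper model becomes available for a given task.
    routes: dict[str, str] = field(default_factory=lambda: {
        "summarize": "small-specialist-v1",  # smaller, specialized model
        "reasoning": "frontier-model-a",     # strongest general model
        "chat": "frontier-model-b",          # tuned for conversational style
    })
    default: str = "frontier-model-a"

    def pick(self, task: str) -> str:
        """Return the model assigned to a task, falling back to the default."""
        return self.routes.get(task, self.default)

router = ModelRouter()
print(router.pick("summarize"))  # small-specialist-v1
print(router.pick("translate"))  # frontier-model-a (no route, uses default)
```

In practice the route table would live in external configuration and could key on cost or latency budgets as well as task type, which is how a single system optimizes per use case without provider lock-in.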

Successful vertical AI applications serve as a critical intermediary between powerful foundation models and specific industries like healthcare or legal. Their core value lies in being a "translation and transformation layer," adapting generic AI capabilities to solve nuanced, industry-specific problems for large enterprises.

While the "bitter lesson" suggests powerful general models will dominate, vertical AI solutions can thrive by deeply integrating with a company's specific data, workflows, and project context. The model can't know this proprietary information; value is created by the application that bridges this gap.

Like Kayak for flights, being a model aggregator provides superior value to users who want access to the best tool for a specific job. Big tech companies are restricted to their own models, creating an opportunity for startups to win by offering a 'single pane of glass' across all available models.

Instead of building its own models, Razer's strategy is to be model-agnostic. It selects different best-in-class LLMs for specific use cases (Grok for conversation, ChatGPT for reasoning) and focuses its R&D on the integration layer that provides context and persistence.

Perplexity's core advantage is its model-agnostic orchestration. Unlike vertically integrated competitors (Google, OpenAI), it can select the best model for any task—whether from GPT, Claude, or open-source alternatives—to offer a superior, specialized "orchestra" of AI capabilities.