As foundational AI models become commoditized 'intelligence utilities,' the economic value moves up the stack. Orchestrators like OpenClaw, which can intelligently route tasks to the most efficient model based on cost or use case, are positioned to capture the margin that the underlying model providers cannot.

Related Insights

To survive against subsidized tools from model providers like OpenAI and Anthropic, AI applications must avoid a price war they cannot win. Instead, the winning strategy is to focus on superior product experience and to serve as a neutral orchestration layer that lets users choose the best underlying model.

While AI automates tasks, it also generates new economic activity. Building and deploying these AI systems requires a new layer of infrastructure services (e.g., Vercel, Render, Cloudflare). This means economic value is shifting to the platforms that enable AI automation.

Enterprises will shift from relying on a single large language model to using orchestration platforms. These platforms will allow them to 'hot swap' various models—including smaller, specialized ones—for different tasks within a single system, optimizing for performance, cost, and use case without being locked into one provider.
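The 'hot swap' mechanism above can be sketched as a cost-aware router: given a catalog of models with cost and capability metadata, pick the cheapest model that clears the task's quality bar. The catalog entries, prices, and capability tiers below are illustrative assumptions, not real provider pricing.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD; illustrative, not real pricing
    capability: int            # coarse 1-5 quality tier (assumed scale)

# Hypothetical catalog mixing a small specialist, a mid-tier
# generalist, and a frontier model.
CATALOG = [
    Model("small-specialist", 0.0002, 2),
    Model("mid-generalist", 0.002, 3),
    Model("frontier", 0.015, 5),
]

def route(task_difficulty: int) -> Model:
    """Return the cheapest model whose capability meets the task's bar."""
    eligible = [m for m in CATALOG if m.capability >= task_difficulty]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# Easy tasks go to the cheap specialist; hard ones to the frontier model.
print(route(2).name)  # → small-specialist
print(route(5).name)  # → frontier
```

Because the routing policy lives outside any single provider's API, swapping a model in or out is just a catalog edit, which is the lock-in-avoidance point the paragraph makes.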

The current oligopolistic state of AI labs resembles 'Cournot' competition, in which firms compete on output and capacity rather than price. It will eventually shift to 'Bertrand' competition, where labs undercut each other directly on price. This happens once the frontier commoditizes and models become 'good enough,' producing a market structure similar to today's cloud providers like AWS and GCP.

If AI makes intelligence cheap and universally available, its economic value may collapse. This theory suggests that selling raw AI models could become a low-margin, utility-like business. Profitability will depend on building moats through specialized applications or regulatory capture, not on selling base intelligence.

Like Kayak for flights, being a model aggregator provides superior value to users who want access to the best tool for a specific job. Big tech companies are restricted to their own models, creating an opportunity for startups to win by offering a 'single pane of glass' across all available models.

Obsessing over linear model benchmarks is becoming obsolete, akin to comparing dial-up modem speeds. The real value, and the locus of competition, is moving to the 'agentic layer.' Future performance will be measured by the ability to orchestrate tools, memory, and sub-agents into complex outcomes, not merely by the quality of generated tokens.

Open source AI models don't need to become the dominant platform to fundamentally alter the market. Their existence alone acts as a powerful price compressor. Proprietary model providers are forced to lower their prices to match the inference cost of open-source alternatives, squeezing profit margins and shifting value to other parts of the stack.

The AI value chain flows from hardware (NVIDIA) to apps, with LLM providers currently capturing most of the margin. The long-term viability of app-layer businesses depends on a competitive model layer. This competition drives down API costs, preventing model providers from having excessive pricing power and allowing apps to build sustainable businesses.

In a world where AI makes software cheap or free, the primary value shifts to specialized human expertise. Companies can monetize by using their software as a low-cost distribution channel to sell high-margin, high-ticket services that customers cannot easily replicate, like specialized security analysis.