Open-source AI models don't need to become the dominant platform to fundamentally alter the market. Their existence alone acts as a powerful price compressor. Proprietary model providers are forced to lower their prices to match the inference cost of open-source alternatives, squeezing profit margins and shifting value to other parts of the stack.
Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as lower-cost open-source alternatives quickly replicate their capabilities. This forces model providers to evolve into more defensible application companies to survive.
Chinese AI leaders like Moonshot AI have lower valuations than their US peers, in part because their models are often open-source. Whereas closed-source providers (OpenAI with ChatGPT, Anthropic with Claude) can capture essentially all of the value their models generate, open-source projects hope to capture just 10-20% through hosted services, leading to a "missing zero" in their funding rounds.
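To see where the "missing zero" comes from, here is a minimal back-of-the-envelope sketch in Python. The value-created figure, capture rates, and revenue multiple are hypothetical assumptions chosen only to illustrate the arithmetic; none of them come from the article.

```python
# Hypothetical "missing zero" arithmetic: identical value created,
# very different value captured.

value_created = 10_000_000_000   # $10B of value generated by the model (hypothetical)
revenue_multiple = 10            # valuation as a multiple of captured revenue (hypothetical)

capture_rate = {
    "closed_source": 1.00,  # provider monetizes essentially all usage via its API/product
    "open_source": 0.15,    # ~10-20% captured through hosted services, support, etc.
}

for model_type, rate in capture_rate.items():
    captured_revenue = value_created * rate
    implied_valuation = captured_revenue * revenue_multiple
    print(f"{model_type}: captures ${captured_revenue / 1e9:.1f}B "
          f"-> implied valuation ~${implied_valuation / 1e9:.0f}B")

# Approximate output: closed_source ~$100B vs. open_source ~$15B,
# i.e. roughly one "missing zero" at the same level of underlying value created.
```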
While US firms lead in cutting-edge AI, the impressive quality of open-source models from China is compressing prices across the market. As these free models improve, more tasks become "good enough" for open source, putting significant pricing pressure on premium, closed-source foundation models from companies like OpenAI and Google.
To avoid a future where a few companies control AI and hold society hostage, the underlying intelligence layer must be commoditized. This prevents "landlords" of proprietary models from extracting rent and ensures broader access and competition.
Software has long commanded premium valuations because the marginal cost of serving another user is near zero. AI breaks this model: the significant, variable cost of inference means expenses scale with usage, fundamentally altering software's economic profile and pushing valuations down toward those of traditional industries.
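As a rough illustration of how usage-scaling inference costs erode software margins, here is a hedged Python sketch; the per-request prices and costs are invented for the example, not taken from the text.

```python
# Hypothetical unit economics: traditional SaaS vs. an AI product
# whose cost of goods scales with usage (inference).

def gross_margin(price_per_request: float, cost_per_request: float) -> float:
    """Gross margin as a fraction of revenue for a single request."""
    return (price_per_request - cost_per_request) / price_per_request

# Traditional software: serving one more request costs almost nothing.
saas_margin = gross_margin(price_per_request=0.010, cost_per_request=0.001)

# AI product: every request pays for GPU inference (hypothetical figures).
ai_margin = gross_margin(price_per_request=0.010, cost_per_request=0.006)

print(f"SaaS-style gross margin: {saas_margin:.0%}")  # ~90%
print(f"AI-style gross margin:   {ai_margin:.0%}")    # ~40%

# Structurally lower margins are one reason AI revenue may deserve
# lower valuation multiples than classic software revenue.
```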
While the US leads in proprietary, closed-source models like OpenAI's, Chinese companies now dominate the open-source leaderboards. Because these models are cheaper and easier to deploy, they are seeing rapid global uptake, challenging the US's perceived lead in AI through wider diffusion and application.
The AI value chain runs from hardware (NVIDIA) through model providers to applications, with LLM providers currently capturing most of the margin. The long-term viability of app-layer businesses depends on a competitive model layer: competition drives down API prices, limits model providers' pricing power, and lets apps build sustainable businesses.
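The dependence of app-layer economics on model-layer competition can be made concrete with a small hypothetical sketch; the per-task price, token count, and API prices below are assumptions chosen only to show the direction of the effect.

```python
# Hypothetical app-layer economics: how the margin on one user-facing task
# changes as model-layer competition pushes API prices down.

PRICE_TO_USER = 0.05     # what the app charges per task (hypothetical)
TOKENS_PER_TASK = 5_000  # tokens of LLM usage per task (hypothetical)

def app_margin(api_price_per_1k_tokens: float) -> float:
    """App's gross margin per task at a given model API price."""
    api_cost = TOKENS_PER_TASK / 1_000 * api_price_per_1k_tokens
    return (PRICE_TO_USER - api_cost) / PRICE_TO_USER

# A pricey model layer vs. one facing open-source price pressure.
for label, api_price in [("pricey API ($0.008/1k tokens)", 0.008),
                         ("commoditized API ($0.002/1k tokens)", 0.002)]:
    print(f"{label}: app gross margin {app_margin(api_price):.0%}")

# pricey API:       5k tokens cost $0.04 -> ~20% margin per task
# commoditized API: 5k tokens cost $0.01 -> ~80% margin per task
```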
The idea that one company will achieve AGI and dominate is challenged by current trends. The proliferation of powerful, specialized open-source models from global players suggests a future where AI technology is diverse and dispersed, not hoarded by a single entity.
Contrary to the "winner-takes-all" narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed one another's capabilities, the underlying large language models (LLMs) risk becoming commodities, making it difficult for any single player to justify stratospheric valuations over the long term.
Misha Laskin, CEO of Reflection AI, says large enterprises turn to open-source models for one of two key reasons: to dramatically cut the cost of high-volume tasks, or to fine-tune performance on niche data where closed models are weak.