Sam Altman famously laughed off the idea that a new venture could compete with OpenAI. Soon after, China's DeepSeek emerged with a comparable, and in some cases superior, AI model built on a shoestring budget, proving that incumbency and capital aren't insurmountable moats.

Related Insights

OpenAI, the initial leader in generative AI, is now on the defensive as competitors like Google and Anthropic copy and improve upon its core features. This race demonstrates that being first offers no lasting moat; in fact, it provides a roadmap for followers to surpass the leader, creating a first-mover disadvantage.

While US firms lead in cutting-edge AI, the impressive quality of open-source models from China is compressing the market. As these free models improve, more tasks become "good enough" for open source, creating significant pricing pressure on premium, closed-source foundation models from companies like OpenAI and Google.

The emergence of high-quality open-source models from China drastically shortens the innovation window of closed-source leaders. This competition is healthy for startups, providing them with a broader array of cheaper, powerful models to build on and preventing a single company from becoming a chokepoint.

The rise of Chinese AI models like DeepSeek and Kimi in 2025 was driven by the startup and developer communities, not large enterprises. This bottom-up adoption pattern is reshaping the open-source landscape, creating a new competitive dynamic where nimble startups are leveraging these models long before they are vetted by corporate buyers.

Monologue, built by a single developer with less than $20,000 invested, highlights how AI tools have reset the startup playing field. This lean approach enabled rapid development and product-market fit where heavily funded competitors have struggled, proving capital is no longer the primary moat.

Sam Altman argues that the key to winning is not a single feature but the ability to repeatedly innovate first. Competitors who copy often replicate design mistakes and are always a step behind, making cloning a poor long-term strategy for them.

Despite its early dominance, OpenAI's internal "Code Red" in response to competitors like Google's Gemini and Anthropic's Claude demonstrates a critical business lesson: an early market lead is no guarantee of long-term success, especially in a field evolving as rapidly as artificial intelligence.

To escape platform risk and high API costs, startups are building their own AI models. The strategy involves taking powerful, state-subsidized open-source models from China and fine-tuning them for specific use cases, creating a competitive alternative to relying on APIs from OpenAI or Anthropic.

Contrary to the 'winner-takes-all' narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed each other's model capabilities, the underlying Large Language Models (LLMs) risk becoming commodities, making it difficult for any single player to justify stratospheric valuations long-term.

A bigger risk than OpenAI's tech plateauing is its business model being destroyed by competition. If rivals like Google make their LLMs free, OpenAI's high valuation and massive spending become unsustainable as it would be forced to compete on price, not performance.