Arthur Mensch argues that the core knowledge for training advanced AI models is limited and circulates quickly among top labs. This diffusion of knowledge prevents any single company from creating a sustainable IP-based lead, which is accelerating performance convergence and commoditization across the industry.
LLMs are becoming commoditized. Like gasoline from different stations, models can be swapped based on price or marginal performance differences. Competitive advantage therefore comes not from the model itself but from how you apply it to proprietary data.
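A minimal sketch of what that swappability can look like in code, using a hypothetical `ModelBackend` abstraction with made-up prices and stubbed providers (no real API calls): the model is picked on price once it clears a quality bar, while the proprietary documents folded into the prompt are the part that actually differentiates.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical abstraction: every provider is reduced to a priced completion function.
@dataclass
class ModelBackend:
    name: str
    usd_per_million_tokens: float   # illustrative, made-up prices
    quality_score: float            # e.g. an internal eval score in [0, 1]
    complete: Callable[[str], str]

def pick_backend(backends: List[ModelBackend], min_quality: float) -> ModelBackend:
    """The 'gas station' logic: any model above the quality bar, chosen on price."""
    adequate = [b for b in backends if b.quality_score >= min_quality]
    return min(adequate, key=lambda b: b.usd_per_million_tokens)

def answer(question: str, proprietary_docs: List[str], backend: ModelBackend) -> str:
    """The durable part: proprietary data folded into the prompt.
    The model behind backend.complete is interchangeable."""
    words = [w.lower() for w in question.split()]
    context = "\n".join(d for d in proprietary_docs if any(w in d.lower() for w in words))
    return backend.complete(f"Context:\n{context}\n\nQuestion: {question}")

# Toy usage with stubbed providers:
backends = [
    ModelBackend("model_a", 15.0, 0.92, lambda p: f"[model_a] {p}"),
    ModelBackend("model_b", 3.0, 0.90, lambda p: f"[model_b] {p}"),
]
chosen = pick_backend(backends, min_quality=0.85)   # model_b wins on price
print(answer("What was Q3 churn?", ["Q3 churn fell to 2.1%"], chosen))
```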
Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as lower-cost open-source alternatives quickly replicate their capabilities. This pushes model providers to evolve into more defensible application companies to survive.
Marc Andreessen observes that once a company demonstrates a new AI capability is possible, competitors can catch up rapidly. This suggests that first-mover advantage in AI may be less durable than in previous tech waves, as seen with xAI matching state-of-the-art models in under a year.
In a group of 100 experts training an AI, the top 10% will often drive the majority of the model's improvement. This creates a power law dynamic where the ability to source and identify this elite talent becomes a key competitive moat for AI labs and data providers.
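A toy simulation of that power-law claim, assuming per-expert contributions follow a Pareto distribution with an arbitrarily chosen shape parameter (the numbers are illustrative, not figures from the source):

```python
import random

# Toy illustration of the claimed power-law dynamic. Per-expert "contribution" is
# drawn from a heavy-tailed Pareto distribution (shape alpha=1.2 is an assumed
# parameter), then we measure the share produced by the top 10 of 100 experts.
random.seed(0)
contributions = sorted((random.paretovariate(1.2) for _ in range(100)), reverse=True)
top_decile_share = sum(contributions[:10]) / sum(contributions)
print(f"Top 10% of experts account for {top_decile_share:.0%} of the total contribution")
```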
The cost for a given level of AI capability has decreased by a factor of 100 in just one year. This radical deflation in the price of intelligence requires a complete rethinking of business models and future strategies, as intelligence becomes an abundant, cheap commodity.
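A back-of-the-envelope sketch of what a 100x price drop means for an inference budget, with an assumed workload and starting price chosen purely for illustration:

```python
# Back-of-the-envelope view of a 100x drop in the price of a fixed capability.
# The workload size and starting price below are assumptions for illustration only.
tokens_per_month = 5_000_000_000            # hypothetical workload: 5B tokens/month
price_then = 30.00                          # assumed $ per million tokens a year ago
price_now = price_then / 100                # the cited ~100x deflation

cost_then = tokens_per_month / 1_000_000 * price_then
cost_now = tokens_per_month / 1_000_000 * price_now
print(f"Monthly inference bill: ${cost_then:,.0f} -> ${cost_now:,.0f}")
# A line item that once dominated the budget becomes a rounding error, which is why
# strategies built on intelligence being scarce and expensive need rethinking.
```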
Fears of a single AI company achieving runaway dominance are proving unfounded, as the number of frontier models has tripled in a year. Newcomers can use techniques like synthetic data generation to effectively "drink the milkshake" of incumbents, distilling their models' capabilities at a fraction of the cost.
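A schematic sketch of that distillation-via-synthetic-data loop, with the incumbent's API stubbed out and a generic prompt/completion JSONL format assumed (adapt it to whatever schema your fine-tuning stack expects):

```python
import json

# Schematic of the "milkshake" play: query an incumbent (teacher) model, collect its
# answers as synthetic training data, and fine-tune a cheaper student model on them.
# query_teacher is a stub standing in for a real API call.

def query_teacher(prompt: str) -> str:
    """Placeholder for a call to the incumbent's model API."""
    return f"(teacher's answer to: {prompt})"

def build_distillation_set(seed_prompts, path="synthetic_train.jsonl"):
    """Write prompt/completion pairs generated by the teacher to a JSONL file."""
    with open(path, "w") as f:
        for prompt in seed_prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")
    return path

seeds = ["Explain gradient checkpointing.", "Summarise the attention mechanism."]
dataset_path = build_distillation_set(seeds)
# The resulting file feeds a standard supervised fine-tuning run on a smaller base
# model: capability copied at a small fraction of the incumbent's training cost.
```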
Despite billions in funding, large AI models face a difficult path to profitability. The immense training cost is undercut by competitors creating similar models for a fraction of the price and, more critically, by the ability of others to replicate a model's behavior through output-based distillation, eroding any moat built on the weights themselves.
In an age where AI can quickly commoditize features, traditional moats like data are weakening. Miro's CEO argues the only sustainable competitive advantage is an organization's speed of learning—its ability to rapidly identify market signals, separate them from noise, and act decisively.
The idea that one company will achieve AGI and dominate is challenged by current trends. The proliferation of powerful, specialized open-source models from global players suggests a future where AI technology is diverse and dispersed, not hoarded by a single entity.
Contrary to the 'winner-takes-all' narrative, the rapid pace of innovation in AI is leading to a different outcome. As rival labs quickly match or exceed one another's model capabilities, the underlying LLMs risk becoming commodities, making it difficult for any single player to justify stratospheric valuations over the long term.