Large, centralized AI models are vulnerable to "distillation attacks," in which a smaller model can be trained cheaply by querying the larger one and imitating its outputs. This technical reality, combined with the perceived hypocrisy of model creators restricting copying after themselves scraping the internet for training data, strongly suggests a future dominated by decentralized, open-source models.
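The mechanics of a distillation attack can be sketched in a few lines. This is a toy illustration, not any lab's actual pipeline: the "teacher" here is a stand-in linear function representing an opaque, query-only API, and the "student" is fit purely from the teacher's answers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher: an opaque model we can only query, standing in
# for a frontier model behind a paid API.
W_teacher = rng.normal(size=(8, 1))

def query_teacher(x):
    """Simulates per-query access to the large model's outputs."""
    return x @ W_teacher

# Distillation: send queries, record the teacher's answers, and fit a
# cheap student to imitate the input->output mapping.
X = rng.normal(size=(1000, 8))   # queries we send
y = query_teacher(X)             # answers we harvest

# Student: closed-form least-squares fit to the teacher's behavior.
W_student, *_ = np.linalg.lstsq(X, y, rcond=None)

# On unseen inputs, the student now reproduces the teacher.
X_test = rng.normal(size=(100, 8))
err = np.abs(query_teacher(X_test) - X_test @ W_student).max()
```

For this noiseless toy teacher the copy is near-exact; real distillation of a neural model is lossier, but the attacker's cost is still only queries plus a small training run, far below the teacher's original training cost.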

Related Insights

As powerful AI models become capable of running offline on local devices, they challenge the centralized, platform-based model of companies like Google and Facebook. This shift towards decentralized intelligence could fundamentally disrupt the digital economy by removing the need for gatekeepers.

Contrary to fears of a monopoly, the AI market is heading toward a diverse ecosystem. The proliferation of open-weight models and specialized tooling allows companies to build and control their own differentiated AI systems rather than simply renting intelligence token-by-token from a handful of large labs.

Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as they are quickly copied or replicated by lower-cost open-source alternatives. This forces model providers to evolve into more defensible application companies to survive.

China is gaining an efficiency edge in AI by using "distillation"—training smaller, cheaper models on the outputs of larger ones. This "train the trainer" approach is much faster and challenges the capital-intensive US strategy, highlighting how inefficient and "bloated" current Western foundation models are.

Fears of a single AI company achieving runaway dominance are proving unfounded, as the number of frontier models has tripled in a year. Newcomers can use techniques like synthetic data generation to effectively "drink the milkshake" of incumbents, reverse-engineering their intelligence at lower costs.

As developers increasingly use AI coding assistants like Claude Code, they flood public repositories like GitHub with high-quality, AI-generated outputs. This effectively turns the internet into a massive, unavoidable training dataset for competing models, making it difficult to police "distillation" as a violation of terms.

To avoid a future where a few companies control AI and hold society hostage, the underlying intelligence layer must be commoditized. This prevents "landlords" of proprietary models from extracting rent and ensures broader access and competition.

Despite billions in funding, large AI models face a difficult path to profitability. The immense training cost is undercut by competitors creating similar models for a fraction of the price and, more critically, by the ability of others to reverse-engineer existing models—replicating their capabilities from their outputs without ever accessing the weights—eroding any competitive moat.

Block's CTO believes the key to building complex applications with AI isn't a single, powerful model. Instead, he predicts a future of "swarm intelligence"—where hundreds of smaller, cheaper, open-source agents work collaboratively, with their collective capability surpassing any individual large model.
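The statistical intuition behind the swarm claim is Condorcet-style aggregation: many weak agents that err independently can, by majority vote, far outperform any one of them. The sketch below assumes independent errors—a strong and often unrealistic assumption, which is exactly what real multi-agent systems must engineer toward.

```python
import numpy as np

rng = np.random.default_rng(1)

# 101 weak agents, each answering 2000 binary tasks correctly
# with probability 0.6, independently of one another.
n_agents, n_tasks, p_correct = 101, 2000, 0.6
votes = rng.random((n_agents, n_tasks)) < p_correct  # True = correct

# The swarm's answer is the majority vote across agents.
swarm_correct = votes.sum(axis=0) > n_agents / 2

single_acc = votes[0].mean()       # one agent: ~0.6
swarm_acc = swarm_correct.mean()   # the swarm: far higher
```

The gap widens as agents are added, which is why collective capability can plausibly exceed a single large model—provided the agents' mistakes are genuinely decorrelated.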

The idea that one company will achieve AGI and dominate is challenged by current trends. The proliferation of powerful, specialized open-source models from global players suggests a future where AI technology is diverse and dispersed, not hoarded by a single entity.