We scan new podcasts and send you the top 5 insights daily.
Similar to how blockchain protocols like Bitcoin and Ethereum accrued more value than the apps built on them, AI foundation models are getting "fatter." They are absorbing more capabilities, allowing users to perform complex tasks in a single step within the base model and reducing the need for specialized application-layer companies.
For vertical AI applications, foundation models are now sufficiently intelligent. The primary challenge is no longer model capability but building the surrounding software infrastructure—tools, UIs, and workflows—that lets models perform useful work reliably and earn user trust.
Performance gains increasingly come from the "harness"—the surrounding system of tools, data connections, and agentic workflows—not the underlying model. Stanford's "meta-harness" concept shows a 6x performance gap on the same model, suggesting the product layer is where real innovation and competitive advantage now lie.
Frontier model companies can raise more capital than the entire application layer built on top of them. This unique financial power allows them to systematically expand and absorb the value of their ecosystem, a dynamic not seen in previous platforms like cloud computing.
The AI value stack has evolved from chips (NVIDIA) to models (OpenAI). The next critical phase is the application layer. It's unclear if value will be captured by new application companies or if the underlying model providers will absorb all the profits, a key question for investors and founders.
As large AI models absorb functions of traditional SaaS products, investors and entrepreneurs are shifting focus back to tech-enabled services. Integrating AI deeply into physical services and workflows is now seen as creating more defensible, lasting value than pure software, reversing a years-long trend.
The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.
Leading AI companies like Anthropic are positioning themselves as the infrastructure layer for intelligence, akin to how AWS provides infrastructure for computing. Their strategy is to partner with and enable existing SaaS companies, not to destroy them by competing directly at the application level.
Foundation model companies like OpenAI won't dominate the enterprise application layer. Similar to how AWS became infrastructure for a software explosion, LLMs will do the same for AI apps. Their core business and GTM motion are fundamentally different from what's required to sell complex enterprise solutions.
Unlike traditional software, which is bottlenecked by engineering headcount, AI models scale with capital. A frontier model company can raise more than its entire app ecosystem combined, then use that capital to launch competitive first-party apps and subsume third-party developers.
The battleground for AI startups is constantly shrinking like the map in Fortnite. Foundation models like Anthropic's Claude are aggressively absorbing features, turning what was a standalone product into a native capability overnight. This creates extreme existential risk for application-layer companies.