As base models improve, simple vertical AI co-pilots are a dangerous investment. Dave Morin advises that defensible opportunities lie in the orchestration layer (managing multiple agents) and in applications that generate unique, proprietary data through real-world interaction, like robotics.
The founder argues that hyper-specific vertical AI solutions are too easy to replicate. While they may find initial traction, they lack a durable moat. The stronger long-term business is building horizontal tools that empower users to solve their own complex problems.
Early-stage AI startups should resist spending heavily on fine-tuning foundation models. With base models improving so rapidly, the defensible value lies in building the application layer, workflow integrations, and enterprise-grade software that makes the AI useful, allowing the startup to ride the wave of general model improvement.
Since LLMs are commodities, sustainable competitive advantage in AI comes from leveraging proprietary data and unique business processes that competitors cannot replicate. Companies must focus on building AI that understands their specific "secret sauce."
The future of valuable AI lies not in models trained on the abundant public internet, but in those built on scarce, proprietary data. For fields like robotics and biology, this data doesn't exist to be scraped; it must be actively created, making the data generation process itself the key competitive moat.
The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.
To avoid being made obsolete by the next foundation model (e.g., GPT-5), entrepreneurs must build products that anticipate model evolution. This involves creating strategic "scaffolding" (unique workflows and integrations) or combining LLMs with proprietary data, like knowledge graphs, to create a defensible business.
Value in the AI stack will concentrate at the infrastructure layer (e.g., chips) and the horizontal application layer. The "middle layer" of vertical SaaS companies, whose value is primarily encoded business logic, is at risk of being commoditized by powerful, general AI agents.
Beyond AI infrastructure providers (NVIDIA, AWS), a key opportunity lies in the "layer below": companies like Uber and Spotify. They leverage big tech's tools but dominate specific verticals because they possess superior, niche-specific user data, which AI can then supercharge for personalization and monetization.
The durable investment opportunities in agentic AI tooling fall into three categories that will persist across model generations. These are: 1) connecting agents to data for better context, 2) orchestrating and coordinating parallel agents, and 3) providing observability and monitoring to debug inevitable failures.
Permira's AI strategy uses a clear framework: invest in the 'picks and shovels' of compute (data centers) and in applications with unique, proprietary data sets. They deliberately avoid the hyper-competitive model layer, viewing it as a scale game best left to venture capital and strategic giants.