The middle layer of the AI stack (software infrastructure for data movement or frameworks) is a difficult place to build a company. Foundation models are incentivized to absorb capabilities from below, leaving little room for defensible platforms between the models and the applications built on top of them.

Related Insights

Creating frontier AI models is incredibly expensive, yet their value depreciates rapidly as they are replicated by lower-cost open-source alternatives. This forces model providers to evolve into more defensible application companies to survive.

In a new, high-risk category, betting on infrastructure ('shovels') isn't necessarily safer. If the category fails, both the application and infrastructure layers lose. But if it succeeds, the application layer captures disproportionately more value, making infrastructure a lower-upside bet for the same level of existential risk.

Counter to fears that foundation models will obsolete all apps, AI startups can build defensible businesses by embedding AI into unique workflows, owning the customer relationship, and creating network effects. This mirrors how top App Store apps succeeded despite Apple's platform dominance.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.

The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.

Unlike traditional SaaS, AI applications have a unique vulnerability: a step-function improvement in an underlying model could render an app's entire workflow obsolete. What seems defensible today could become a native model feature tomorrow (the 'Jasper' risk).

The founder of Stormy AI focuses on building a company that benefits from, rather than competes with, improving foundation models. He avoids over-optimizing for current model limitations, ensuring his business becomes stronger, not obsolete, with each new release such as GPT-5. This strategy is key to building a durable AI company.

The primary reason multi-million dollar AI initiatives stall or fail is not the sophistication of the models, but the underlying data layer. Traditional data infrastructure creates delays in moving and duplicating information, preventing the real-time, comprehensive data access required for AI to deliver business value. The focus on algorithms misses this foundational roadblock.

The excitement around AI capabilities often masks the real hurdle to enterprise adoption: infrastructure. Success is not determined by the model's sophistication, but by first solving foundational problems of security, cost control, and data integration. This requires a shift from an application-centric to an infrastructure-first mindset.

An AI app that is merely a wrapper around a foundation model is at high risk of being absorbed by the model provider. True defensibility comes from integrating AI with proprietary data and workflows to become an indispensable enterprise system of record, like an HR or CRM system.