While AI labs could build competing enterprise apps, the required effort (sales teams, customizations) is massive. For a multi-billion-dollar company, the resulting revenue is a rounding error, making it an illogical distraction from its core model-building business.

Related Insights

Instead of competing with OpenAI's mass-market ChatGPT, Anthropic focuses on the enterprise market. By prioritizing safety, reliability, and governance, it targets regulated industries like finance, legal, and healthcare, creating a defensible B2B niche as the "enterprise safety and reliability leader."

When evaluating AI startups, don't just consider the current product landscape. Instead, visualize the future state of giants like OpenAI as multi-trillion-dollar companies. Their "sphere of influence" will be vast. The best opportunities are "second-order" companies operating in niches these giants are unlikely to touch.

Startups like Cognition Labs find their edge not by competing on pre-training large models, but by mastering post-training. They build specialized reinforcement learning environments that teach models specific, real-world workflows (e.g., using Datadog for debugging), creating a defensible niche that larger players overlook.

In the current market, AI companies see explosive growth through two primary vectors: attaching to the massive AI compute spend or directly replacing human labor. Companies merely using AI to improve an existing product without hitting one of these drivers risk being discounted as they lack a clear, exponential growth narrative.

The enduring moat in the AI stack lies in what is hardest to replicate. Since building foundation models is significantly more difficult than building applications on top of them, the model layer is inherently more defensible and will naturally capture more value over time.

AI favors incumbents more than startups. While everyone builds on similar models, true network effects come from proprietary data and consumer distribution, both of which incumbents own. Startups are left with narrow problems, and even there, high-quality incumbents are moving fast enough to capture those opportunities.

For consumer products like ChatGPT, models are already good enough for common queries. However, for complex enterprise tasks like coding, performance is far from solved. This gives model providers a durable path to sustained revenue growth through continued quality improvements aimed at professionals.

For enterprise AI, the ultimate growth constraint isn't sales but deployment. A star CEO can sell multi-million dollar contracts, but the "physics of change management" inside large corporations—integrations, training, process redesign—creates a natural rate limit on how quickly revenue can be realized, making 10x year-over-year growth at scale nearly impossible.

The AI value chain flows from hardware (NVIDIA) to apps, with LLM providers currently capturing most of the margin. The long-term viability of app-layer businesses depends on a competitive model layer. This competition drives down API costs, preventing model providers from having excessive pricing power and allowing apps to build sustainable businesses.

Investing in startups directly adjacent to OpenAI is risky, as OpenAI will inevitably build those features itself. A smarter strategy is backing "second-order effect" companies applying AI to niche, unsexy industries that sit outside the core focus of top AI researchers.