
By building a feature that competes directly with startups using its own API, Anthropic demonstrates the "platform risk" inherent in the AI ecosystem. Like Amazon with its AmazonBasics private-label line, foundation model companies can observe usage, identify valuable applications, and integrate them, creating a kill zone for dependent companies.

Related Insights

AI companies built to fill feature gaps on top of foundation models are at high risk. As core models rapidly improve, they often absorb these adjacent features, disintermediating the "wrapper" companies. Their early-adopter customers are also the quickest to switch to better tools.

Developers using OpenAI's API have been warned that the company will analyze their usage data to identify and build competing features. This follows the classic playbook of platform owners like Microsoft and Facebook, who studied third-party developers and then absorbed the most valuable use cases.

Widespread anxiety from founders before OpenAI's Developer Day highlights a key challenge for AI startups. The fear is not a new competitor, but that the underlying platform (OpenAI) will launch a feature that completely absorbs their product's functionality, making their business obsolete overnight.

Startups are becoming wary of building on OpenAI's platform due to the significant risk of OpenAI launching competing applications (e.g., Sora for video), rendering their products obsolete. This "platform risk" is pushing developers toward neutral providers like Anthropic or open-source models to protect their businesses.

Startups building on top of AI models, like coding assistant Cursor, are extremely vulnerable. As foundation model companies like Anthropic improve their own native capabilities (e.g., Claude Code), they can quickly capture the market and render specialized tools obsolete.

Unlike traditional software, which is bottlenecked by engineering headcount, AI models scale with capital. A frontier model company can raise more money than its entire app ecosystem combined, then use that capital to launch competitive first-party apps and subsume third-party developers.

The battleground for AI startups is constantly shrinking like the map in Fortnite. Foundation models like Anthropic's Claude are aggressively absorbing features, turning what was a standalone product into a native capability overnight. This creates extreme existential risk for application-layer companies.

Startups like Cursor that are built on foundation models face existential platform risk. Their supplier (e.g., Anthropic) could limit access, degrade service, or copy their product, effectively killing their business, much like the scorpion stinging the frog mid-river.

A growing movement in the startup community avoids building on OpenAI's API altogether. Founders fear that OpenAI, in its push for revenue, will release services that directly compete with and kill startups built on its platform, echoing Microsoft's historical "embrace, extend, extinguish" strategy.