We scan new podcasts and send you the top 5 insights daily.
Leading AI companies like Anthropic are positioning themselves as the infrastructure layer for intelligence, akin to how AWS provides infrastructure for computing. Their strategy is to partner with and enable existing SaaS companies, not to destroy them by competing directly at the application level.
By building a feature that competes directly with startups using its own API, Anthropic demonstrates the "platform risk" inherent in the AI ecosystem. Like Amazon with its Basics line, foundation model companies can observe usage, identify valuable applications, and integrate them, creating a "kill zone" for dependent companies.
Specialized SaaS companies like Writer and Intercom are moving beyond simply wrapping OpenAI or Anthropic APIs. They are now training their own foundation models to create more defensible, vertically-integrated AI products, signaling a shift away from platform dependency toward bespoke AI stacks.
Contrary to fears that AI will destroy enterprise software, Jensen Huang predicts the opposite. He argues that enterprise software companies are poised to become a massive value-added reseller channel for foundation models from companies like Anthropic and OpenAI, driving an exponential expansion of the AI market through their existing go-to-market channels.
Counter to fears that foundation models will obsolete all apps, AI startups can build defensible businesses by embedding AI into unique workflows, owning the customer relationship, and creating network effects. This mirrors how top App Store apps succeeded despite Apple's platform dominance.
By investing billions in Anthropic while selling AWS compute to the entire ecosystem, Amazon wins either way: it benefits enormously if Anthropic's models become dominant, and it still profits from every competitor renting its infrastructure if they don't. This positions AWS as the ultimate "picks and shovels" play in the AI gold rush.
Foundation model companies like OpenAI won't dominate the enterprise application layer. Similar to how AWS became infrastructure for a software explosion, LLMs will do the same for AI apps. Their core business and GTM motion is fundamentally different from what's required to sell complex enterprise solutions.
While AI labs could build competing enterprise apps, the required effort (sales teams, customizations) is massive. For a multi-billion-dollar lab, the resulting revenue would be a rounding error, making it an illogical distraction from the core model-building business.
Anthropic is making its models available on AWS, Azure, and Google Cloud. This multi-cloud approach is a deliberate business strategy to position itself as a neutral infrastructure provider. Unlike rivals that build their own applications on top of their models, Anthropic signals to customers that it aims to be a partner, not a competitor.
The fundamental shift from AI isn't about replacing foundational model companies like OpenAI. Instead, AI creates a new technological substrate—productized intelligence—that will engender an entirely new breed of software companies, marking the end of the traditional SaaS playbook.
While OpenAI battles Google for consumer attention, Anthropic is capturing the lucrative enterprise market. Its strategy focuses on API spend and developer-centric tools, which are more reliable and scalable revenue generators than consumer chatbot subscriptions facing increasing free competition.