While Apple's public-facing AI strategy involves Google, its internal product development and tooling are heavily powered by Anthropic's Claude. Apple runs custom versions of the model on its own servers, indicating a deep, non-public integration with a key AI player.
Unlike competitors that feel pressure to build proprietary AI foundation models, Apple can simply partner with providers like Google. This reveals that Apple's true moat isn't the model itself but its massive hardware distribution network, which gives it leverage to integrate best-in-class AI without the high cost of in-house development.
Apple initially planned to rebuild Siri around Anthropic's Claude AI model. However, Anthropic demanded "a crap ton of money" (several billion dollars a year, with the price set to double over time), which caused Apple to abandon the deal and partner with Google's Gemini instead.
Microsoft is not solely reliant on its OpenAI partnership. It actively integrates competitor models, such as Anthropic's, into its Copilot products to handle specific workloads where they perform better, like complex Excel tasks. This pragmatic "best tool for the job" approach diversifies its AI capabilities.
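To make that routing idea concrete, here is a minimal sketch of how a Copilot-style product might send different workloads to different providers. The model identifiers, task categories, and the `pick_model` helper are illustrative assumptions, not Microsoft's actual routing logic.

```python
# Illustrative only: a tiny "best tool for the job" router.
# Model names and task categories are hypothetical stand-ins,
# not Microsoft's real configuration.

ROUTES = {
    "spreadsheet_analysis": "anthropic/claude-sonnet",  # e.g. complex Excel-style reasoning
    "general_chat": "openai/gpt-4o",                    # default conversational workloads
}

def pick_model(task_type: str) -> str:
    """Return the model to use for a given workload, falling back to the default."""
    return ROUTES.get(task_type, ROUTES["general_chat"])

if __name__ == "__main__":
    print(pick_model("spreadsheet_analysis"))  # -> anthropic/claude-sonnet
    print(pick_model("email_draft"))           # -> openai/gpt-4o
```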
Apple isn't trying to build the next frontier AI model. Instead, its strategy is to become the primary distribution channel by compressing and running competitors' state-of-the-art models directly on devices. This play leverages its hardware ecosystem to offer superior privacy and performance.
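As a rough illustration of what "compressing" a model for on-device use can mean in practice, the sketch below applies 8-bit dynamic quantization to a toy PyTorch network. The toy model is a stand-in for a real LLM; nothing here reflects Apple's actual tooling.

```python
import torch
import torch.nn as nn

# Toy stand-in for a much larger model; this is NOT Apple's pipeline,
# just one common compression technique (8-bit dynamic quantization).
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# Convert the Linear layers' float32 weights to int8; activations are
# quantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, roughly 4x smaller weights
```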
Apple is avoiding massive capital expenditure on building its own LLMs. By partnering with a leader like Google for the underlying tech (e.g., Gemini for Siri), Apple can focus on its core strength: productizing and integrating technology into a superior user experience, which may be the more profitable long-term play.
Legora pivoted its core model provider from OpenAI to Anthropic, driven by a strategic belief that Anthropic is aligning more closely with enterprise needs while OpenAI increasingly targets the B2C market. This signals a potential bifurcation in the foundation model landscape based on end-market focus.
In a major strategic move, Apple is white-labeling Google's Gemini model to power the upcoming, revamped Siri. Apple will pay Google for this underlying technology, a tacit admission that its in-house models are not yet competitive. This partnership aims to fix Siri's long-standing performance issues without publicly advertising its reliance on a competitor.
The Apple-Google AI deal isn't a simple API call. Apple is incorporating Gemini models directly, allowing it to adapt them for products like Siri while processing data within its own on-device or "private cloud" infrastructure. This structure is key to upholding its stringent user privacy standards.
Brex spending data reveals a key split in LLM adoption. While OpenAI wins on broad enterprise use (e.g., ChatGPT licenses), startups building agentic, production-grade AI features into their products increasingly prefer Anthropic's Claude. This indicates a market perception of Claude's suitability for reliable, customer-facing applications.
ChatGPT Apps are built on the Model Context Protocol (MCP), invented by Anthropic. This means tools built for ChatGPT can theoretically run on other MCP-supporting models like Claude. This creates an opportunity for cross-platform distribution, as you aren't just building for OpenAI's ecosystem but for a growing open standard.
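To show why that portability matters, here is a minimal MCP server sketch using the official `mcp` Python SDK (the `word_count` tool is just an example I've made up). Because it speaks the open protocol over stdio, the same server can in principle be attached to any MCP-capable client, whether that's Claude or a ChatGPT Apps host.

```python
# Minimal MCP server: the same tool definition can be exposed to any
# MCP-capable client rather than being tied to a single vendor's plugin format.
# Requires the official Python SDK:  pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a passage of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```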