ChatGPT Apps are built on the Model Context Protocol (MCP), invented by Anthropic. This means tools built for ChatGPT can theoretically run on other MCP-supporting models like Claude. This creates an opportunity for cross-platform distribution, as you aren't just building for OpenAI's ecosystem but for a growing open standard.

Related Insights

Agent Skills and the Model Context Protocol (MCP) are complementary, not redundant. Skills package internal, repeatable workflows for 'doing the thing,' while MCP provides the open standard for connecting to external systems like databases and APIs for 'reaching the thing.'
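The "reaching the thing" role is concrete at the wire level: MCP messages are JSON-RPC 2.0 envelopes, with methods like `tools/list` for discovery and `tools/call` for invocation. A minimal sketch of those message shapes, built by hand with the standard library (in practice you would use an MCP SDK; the tool name `query_database` and its arguments are hypothetical):

```python
import json

def mcp_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: a client asks an MCP server which tools it exposes.
list_tools = mcp_request(1, "tools/list")

# Step 2: the client invokes one of those tools with arguments.
call_tool = mcp_request(2, "tools/call", {
    "name": "query_database",          # hypothetical tool name
    "arguments": {"sql": "SELECT 1"},  # hypothetical arguments
})

print(json.dumps(call_tool, indent=2))
```

The same envelope format works over stdio or HTTP transports, which is part of why the standard travels so easily between vendors.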

OpenAI has quietly launched "skills" for its models, following the same open standard as Anthropic's Claude. This suggests a future where AI agent capabilities are reusable and interoperable across different platforms, making them significantly more powerful and easier to develop for.

OpenAI integrated the Model Context Protocol (MCP) into its agentic APIs instead of building its own equivalent. The decision was driven by Anthropic treating MCP as a truly open standard, complete with a cross-company steering committee, which fostered trust and made adoption easy and pragmatic.

Microsoft is not solely reliant on its OpenAI partnership. It actively integrates competitor models, such as Anthropic's, into its Copilot products to handle specific workloads where they perform better, like complex Excel tasks. This pragmatic "best tool for the job" approach diversifies its AI capabilities.

MCP was born from the need for a central dev team to scale its impact. By creating a protocol, they empowered individual teams at Anthropic to build and deploy their own MCP servers without the core team becoming a bottleneck. This decentralized model is so successful that the core team is unaware of roughly 90% of internal servers.

MCP acts as a universal translator, allowing different AI models and platforms to share context and data. This prevents "AI amnesia" where customer interactions start from scratch, creating a continuous, intelligent experience by giving AI a persistent, shared memory.

Unlike sticky cloud infrastructure (AWS, GCP), LLMs are easily interchangeable via APIs, leading to customer "promiscuity." This commoditizes the model layer and forces providers like OpenAI to build defensible moats at the application layer (e.g., ChatGPT) where they can own the end user.

In a significant strategic move, OpenAI's Evals product within AgentKit allows developers to test results from non-OpenAI models via integrations like OpenRouter. This positions AgentKit not just as an OpenAI-centric tool, but as a central, model-agnostic platform for building and optimizing agents.
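What makes this practical is that OpenRouter exposes an OpenAI-compatible chat completions endpoint, so an eval harness can fan the same request out to models from different vendors just by swapping the model ID. A minimal sketch under that assumption; the model IDs, prompt, and helper below are illustrative, not from the source:

```python
import json

# OpenRouter's OpenAI-compatible endpoint (the harness would POST here
# with an Authorization header; omitted since this sketch builds payloads only).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_eval_request(model_id, prompt):
    """Build one OpenAI-style chat request for a given model under test."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic-leaning output for comparable evals
    }

# The same eval prompt targeted at an OpenAI and a non-OpenAI model.
prompt = "Summarize MCP in one sentence."
requests = [build_eval_request(m, prompt)
            for m in ("openai/gpt-4o", "anthropic/claude-3.5-sonnet")]

for req in requests:
    print(req["model"], json.dumps(req["messages"]))
```

Because only the `model` string changes between runs, the eval logic itself stays provider-agnostic, which is exactly the property that makes the tool model-agnostic rather than OpenAI-centric.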

Unlike the failed GPT Store which required users to actively search for apps, the new model contextually surfaces relevant apps based on user prompts. This passive discovery mechanism is a massive opportunity for developers, as users don't need to leave their natural workflow to find and use new tools.

Brex spending data reveals a key split in LLM adoption. While OpenAI wins on broad enterprise use (e.g., ChatGPT licenses), startups building agentic, production-grade AI features into their products increasingly prefer Anthropic's Claude. This indicates a market perception of Claude's suitability for reliable, customer-facing applications.