Skills and MCP are not competitors but complementary layers in an agent's architecture. Skills provide vertical, domain-specific knowledge (e.g., how to behave as an accountant), while MCP provides the horizontal communication layer to connect the agent to external tools and data sources.
The first authentication spec was unusable for enterprises because it combined the authorization server (like Okta) and the resource server into one, which prevented integration with central Identity Providers (IdPs). The spec was fixed by separating the two, making the MCP server a pure resource server.
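To make the separation concrete, here is a minimal sketch of an MCP server behaving as a pure resource server: it never issues tokens itself and only validates access tokens minted by an external IdP. It assumes an Express-based HTTP transport and the `jose` library; the issuer, audience, port, and JWKS path are placeholders, not values from the spec.

```typescript
// Sketch: an MCP server acting as a *pure resource server*.
// It never runs a login flow; it only validates bearer tokens issued by an
// external authorization server (Okta, Entra, etc.) using that IdP's JWKS.
import express from "express";
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://example.okta.com/oauth2/default"; // external auth server (placeholder)
const AUDIENCE = "https://mcp.example.com";               // this resource server (placeholder)
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/v1/keys`));

const app = express();

// Every MCP request must carry a token minted by the IdP, not by this server.
app.use(async (req, res, next) => {
  const token = (req.headers.authorization ?? "").replace(/^Bearer /, "");
  try {
    await jwtVerify(token, JWKS, { issuer: ISSUER, audience: AUDIENCE });
  } catch {
    // Point the client back at the external auth server instead of
    // trying to handle authentication ourselves.
    res
      .status(401)
      .set(
        "WWW-Authenticate",
        `Bearer resource_metadata="${AUDIENCE}/.well-known/oauth-protected-resource"`
      )
      .end();
    return;
  }
  next();
});

// ...the MCP HTTP transport would be mounted below this middleware...
app.listen(3000);
```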
MCP's primitives are not directly shaped by current model limitations. Instead, the protocol was designed with the expectation that models would improve exponentially. For example, "progressive discovery" was built in from the start, anticipating that models could be trained to fetch context on demand, heading off future context-bloat problems.
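A rough sketch of what progressive discovery implies on the client side, using two hypothetical helpers (`listToolNames`, `getToolSchema`) that stand in for whatever discovery endpoints a server exposes: only the schemas the model actually asks for ever enter the context window.

```typescript
// Sketch of "progressive discovery": the agent does not preload every tool
// schema into context; it fetches full definitions on demand.
type ToolSchema = { name: string; description: string; inputSchema: unknown };

// Hypothetical discovery surface, not a real MCP SDK interface.
interface DiscoveryClient {
  listToolNames(): Promise<string[]>;               // cheap: names only
  getToolSchema(name: string): Promise<ToolSchema>; // expensive: full schema
}

// Resolve only the tools the model has decided it needs for the current task.
export async function resolveTools(
  client: DiscoveryClient,
  wanted: string[]
): Promise<ToolSchema[]> {
  const available = new Set(await client.listToolNames());
  const hits = wanted.filter((name) => available.has(name));
  return Promise.all(hits.map((name) => client.getToolSchema(name)));
}
```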
Unlike traditional internet protocols that matured slowly, AI technologies are advancing at an exponential rate. An AI standards body must operate at a much higher velocity. The Agentic AI Foundation is structured to facilitate this rapid, "dog years" pace of development, which is essential to remain relevant.
Placing MCP within a neutral foundation like the AAIF is a strategic move to build industry confidence. It guarantees the protocol will remain open and not be controlled or made proprietary by a single company (like Anthropic). This neutrality is critical for encouraging widespread, long-term investment and adoption.
"Code Mode" is not an alternative to MCP but a more efficient way to use it. Instead of multiple sequential tool calls, the model generates a single script that executes multiple actions in a sandbox. MCP still provides the core benefits of authentication, discoverability, and a standardized, LLM-friendly API.
The evolution of a protocol like MCP depends on a tight feedback loop with real-world implementations. Open source clients such as Goose serve as a "reference implementation" to test and demonstrate the value of new, abstract specs like MCP-UI (for user interfaces), making the protocol's benefits concrete.
The MCP transport protocol requires holding state on the server. That is fine for a single server, but it becomes a problem at scale: when requests are distributed across multiple pods, a shared state layer (like Redis or Memcached) becomes necessary so that every pod can access the same session data.
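A minimal sketch of externalizing that session state, assuming the `ioredis` client and an illustrative session shape; with state in Redis, any pod behind the load balancer can serve the next request in a session.

```typescript
// Sketch: moving MCP session state out of pod memory so any pod can serve
// any request. The SessionState shape here is illustrative only.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");
const TTL_SECONDS = 60 * 60; // expire idle sessions after an hour

interface SessionState {
  protocolVersion: string;
  clientCapabilities: Record<string, unknown>;
}

// One pod persists the session after the handshake...
export async function saveSession(sessionId: string, state: SessionState): Promise<void> {
  await redis.set(`mcp:session:${sessionId}`, JSON.stringify(state), "EX", TTL_SECONDS);
}

// ...and whichever pod receives the next request can load it back.
export async function loadSession(sessionId: string): Promise<SessionState | null> {
  const raw = await redis.get(`mcp:session:${sessionId}`);
  return raw ? (JSON.parse(raw) as SessionState) : null;
}
```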
Real-world adoption in specific verticals like finance is shaping the MCP protocol. For example, legal contracts that mandate attribution of third-party data have prompted a "financial services interest group" to define extensions. This shows how general-purpose protocols must adapt to niche industry compliance needs.
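As a purely hypothetical illustration of the kind of extension such a group might define, a tool result could carry mandatory attribution in its `_meta` field. The `io.example.finserv/*` key and its contents are invented for this sketch; only the general `_meta` mechanism comes from MCP itself.

```typescript
// Hypothetical: a tool result that attaches mandatory third-party data
// attribution via the generic `_meta` extension point on results.
export const toolResult = {
  content: [{ type: "text", text: "AAPL last close: 229.87" }],
  _meta: {
    "io.example.finserv/attribution": {
      provider: "Example Market Data Inc.",
      license: "redistribution-with-attribution",
      retrievedAt: "2025-01-15T21:00:00Z",
    },
  },
};
```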
To simplify implementation, MCP made the client's return stream optional. This backfired: most clients never built it, leaving server-initiated features like elicitation and sampling unusable because the channel they depend on didn't exist. This is a key lesson in protocol design.
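For contrast, here is a sketch of a client that does provide the reverse channel: it declares the `sampling` and `elicitation` capabilities during the initialize handshake and registers handlers for the corresponding server-to-client requests. The capability and method names follow the MCP spec; the handler wiring and return values are illustrative, not a real SDK surface.

```typescript
// Sketch of the lesson applied: a client that actually provides the return channel.
export const initializeParams = {
  protocolVersion: "2025-06-18", // spec revision that includes elicitation
  clientInfo: { name: "example-client", version: "1.0.0" },
  capabilities: {
    // Omitting these is what "optional" turned into in practice:
    // servers couldn't rely on the reverse channel existing at all.
    sampling: {},    // server may ask the client's model to generate a message
    elicitation: {}, // server may ask the user for additional input
  },
};

// Declaring the capabilities is not enough; the client must also handle the
// corresponding server-to-client requests.
type ServerRequestHandler = (params: unknown) => Promise<unknown>;

export const serverRequestHandlers: Record<string, ServerRequestHandler> = {
  "sampling/createMessage": async () => {
    // Run the client-side model and return its reply (details omitted).
    return {
      role: "assistant",
      content: { type: "text", text: "..." },
      model: "example-model",
    };
  },
  "elicitation/create": async () => {
    // Prompt the user and return their answer (details omitted).
    return { action: "accept", content: {} };
  },
};
```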
MCP shouldn't be thought of as just another developer API like REST. Its true purpose is to enable seamless, consumer-focused pluggability. In a successful future, a user's mom wouldn't know what MCP is; her AI application would just connect to the right services automatically to get tasks done.
The AI space moves too quickly for slow, consensus-driven standards bodies like the IETF. MCP opted for a traditional open-source model with a small core maintainer group that makes final decisions. This hybrid of consensus and dictatorship enables the rapid iteration necessary to keep pace with AI advancements.
MCP was born from the need for a central dev team to scale its impact. By creating a protocol, they empowered individual teams at Anthropic to build and deploy their own MCP servers without the central team becoming a bottleneck. This decentralized model is so successful that the core team doesn't know about 90% of the internal servers.
