The MCP protocol made the client-to-server return stream optional to simplify implementation. This backfired: most clients never built it, leaving server-initiated features like elicitation and sampling unavailable because the communication channel simply didn't exist. The lesson for protocol design is that an optional capability which other features depend on tends not to get built.
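The consequence can be sketched in a few lines. This is a hypothetical illustration (the function names and strings are invented, not the MCP SDK's API): a server must check what the client declared before issuing a server-initiated request, and degrade gracefully when the return channel was never implemented.

```python
# Hypothetical sketch: guard server-initiated features on the client's
# declared capabilities, since the return channel is optional in the spec.

def supports_sampling(client_capabilities: dict) -> bool:
    """True only if the client declared the 'sampling' capability."""
    return "sampling" in client_capabilities

def request_completion(client_capabilities: dict, prompt: str) -> str:
    if not supports_sampling(client_capabilities):
        # No return stream exists: the server cannot reach the client's model.
        return "sampling unavailable: client did not implement the return channel"
    return f"(would send a sampling request with prompt: {prompt})"
```

When most clients send an empty capability set, every server hits the first branch, which is exactly how the feature died in practice.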

Related Insights

The evolution of a protocol like MCP depends on a tight feedback loop with real-world implementations. Open source clients such as Goose serve as a "reference implementation" to test and demonstrate the value of new, abstract specs like MCPUI (for user interfaces), making the protocol's benefits concrete.

The MCP transport protocol requires the server to hold session state. That is fine for a single server but becomes a problem at scale: once requests are distributed across multiple pods, a shared state layer (such as Redis or Memcached) is needed so that any pod can access the same session data.
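A minimal sketch of the pattern, with invented names: hide session state behind a small store interface so the single-pod in-memory version can later be swapped for a shared backend without touching the request-handling code.

```python
# Illustrative sketch (class and method names are assumptions, not an MCP API):
# session state behind a two-method store interface.

class InMemorySessionStore:
    """Works only while one pod handles every request for a given session."""
    def __init__(self):
        self._sessions = {}

    def get(self, session_id):
        return self._sessions.get(session_id)

    def put(self, session_id, state):
        self._sessions[session_id] = state

# A Redis-backed store would implement the same get/put pair (e.g. storing
# JSON-serialized state under the session ID), so a session created on one
# pod can be resumed by any other pod behind the load balancer.
```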

The first authentication spec was unusable for enterprises because it combined the authorization server (such as Okta) and the resource server into one, preventing integration with central Identity Providers (IdPs). The spec was fixed by separating the two roles, making the MCP server a pure resource server.
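With the roles separated, the resource server's job shrinks to validating tokens minted elsewhere. A hedged sketch, assuming JWT-style claims (the issuer and audience URLs are placeholders, and a real deployment would also verify the token's signature against the IdP's published keys):

```python
# Sketch: the MCP server as a pure resource server. It never issues tokens;
# it only checks that a token came from the enterprise IdP and was minted
# for this server. URLs below are made-up placeholders.

EXPECTED_ISSUER = "https://example.okta.com"   # the central IdP
EXPECTED_AUDIENCE = "https://mcp.example.com"  # this resource server

def validate_claims(claims: dict) -> bool:
    """Accept only tokens from the trusted issuer, scoped to this audience."""
    return (claims.get("iss") == EXPECTED_ISSUER
            and claims.get("aud") == EXPECTED_AUDIENCE)
```

Because issuance lives entirely in the IdP, enterprises keep their existing login, MFA, and policy machinery; the MCP server just enforces the result.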

The MCP protocol's primitives are not directly shaped by current model limitations. Instead, the protocol was designed on the expectation that models would improve exponentially. For example, "progressive discovery" was built in, anticipating that models could be trained to fetch context on demand, heading off future context-bloat problems.
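The progressive-discovery idea can be shown with a toy example (all tool names, descriptions, and helper functions here are invented): the model's baseline context carries only a compact index, and full schemas are fetched only when a tool is actually about to be used.

```python
# Toy sketch of progressive discovery: a cheap always-visible index plus
# on-demand detail lookup. Tool names and schemas are hypothetical.

TOOL_INDEX = {
    "search": "Search documents",
    "billing": "Manage invoices",
}

TOOL_DETAILS = {
    "search": {"params": {"query": "string", "limit": "int"}},
    "billing": {"params": {"invoice_id": "string"}},
}

def list_tools() -> dict:
    """Compact summary the model always sees, regardless of tool count."""
    return TOOL_INDEX

def describe_tool(name: str) -> dict:
    """Full schema, fetched only when the model decides to use the tool."""
    return TOOL_DETAILS[name]
```

The context cost of the index grows with one line per tool rather than one full schema per tool, which is what lets the catalog scale without bloating every conversation.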

MCP was born from the need for a central dev team to scale its impact. By creating a protocol, they empowered individual teams at Anthropic to build and deploy their own MCP servers without the central team becoming a bottleneck. This decentralized model is so successful that the core team doesn't know about 90% of internal servers.

A major unsolved problem for MCP server providers is the lack of a feedback mechanism. When an AI agent uses a tool, the provider often doesn't know if the outcome was successful for the end-user. This "black box" makes iterating and improving the tools nearly impossible.

Real-world adoption in specific verticals like finance is shaping the MCP protocol. For example, legal contracts requiring mandatory attribution of third-party data are leading to a "financial services interest group" to define extensions. This shows how general-purpose protocols must adapt to niche industry compliance needs.

Saying yes to numerous individual client features creates a 'complexity tax'. This hidden cost manifests as a bloated codebase, increased bugs, and high maintenance overhead, consuming engineering capacity and crippling the ability to innovate on the core product.

Tasklet's experience shows AI agents can be more effective calling HTTP APIs directly, guided by scraped documentation, than going through MCP servers. This "direct API" approach is reliable enough that users prefer it over official MCP integrations, challenging the assumption that structured protocols are inherently superior.

Exposing a full API via the Model Context Protocol (MCP) overwhelms an LLM's context window and reasoning. This forces developers to abandon exposing their entire service and instead manually craft a few highly specific tools, limiting the AI's capabilities and defeating the "do anything" vision of agents.
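The trade-off is easy to quantify with a back-of-the-envelope sketch (the operation names, curated tools, and per-tool token figure are all assumptions for illustration): advertising every raw API operation to the model costs orders of magnitude more context than a handful of hand-crafted, task-shaped tools.

```python
# Illustrative contrast: raw API surface vs. a curated tool set.
# All names and the 150-tokens-per-tool estimate are hypothetical.

FULL_API = [f"operation_{i}" for i in range(500)]   # 500 raw endpoints as tools

CURATED_TOOLS = {
    "find_customer": "Look up a customer by name or email",
    "create_refund": "Refund a charge, with an optional reason",
}

def context_cost(tools, tokens_per_tool=150):
    """Rough token cost of advertising these tools in the model's context."""
    return len(tools) * tokens_per_tool
```

At 150 tokens per tool description, the full surface consumes 75,000 tokens before the user says a word, while the curated pair costs 300, which is why developers narrow the surface even though it limits what the agent can do.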

Making Client-Side Bidirectional Streaming Optional Effectively Killed Server-Side Features | RiffOn