While projects like Agency and A2A solve crucial communication and identity problems for AI agents, they are only the foundation. The larger, unsolved challenge standing in the way of distributed superintelligence is the semantic layer: enabling agents to establish shared meaning and intent.

Related Insights

While direct vector-space communication between AI agents would be the most efficient channel, the reality of heterogeneous systems and human-in-the-loop collaboration makes natural language the necessary lowest common denominator for interoperability for the foreseeable future.

To achieve radical improvements in speed and coordination, we may need to allow AI agent swarms to communicate in ways humans cannot understand. This contradicts a core tenet of AI safety but could be a necessary tradeoff for performance, provided safe operational boundaries can be established.

As models become more powerful, the primary challenge shifts from improving capabilities to creating better ways for humans to specify what they want. Natural language is too ambiguous and code too rigid, creating a need for a new abstraction layer for intent.
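
One way to picture such an abstraction layer is a structured intent object: more precise than prose, less brittle than code. The sketch below is purely illustrative; every field name is an assumption, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A hypothetical intent spec: more precise than prose, less rigid than code."""
    goal: str                                # the outcome, in one declarative sentence
    constraints: list[str] = field(default_factory=list)       # hard requirements
    preferences: list[str] = field(default_factory=list)       # soft, tradeable wishes
    success_criteria: list[str] = field(default_factory=list)  # how to verify completion

# An agent can satisfy this however it likes, as long as the constraints hold.
booking = Intent(
    goal="Book a flight from Berlin to Lisbon next Friday",
    constraints=["total cost under 200 EUR", "at most one layover"],
    preferences=["window seat", "morning departure"],
    success_criteria=["confirmation email received", "calendar entry created"],
)
```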

OpenAI has quietly launched "skills" for its models, following the same open standard that Anthropic published for Claude. This suggests a future where AI agent capabilities are reusable and interoperable across platforms, making them significantly more powerful and easier for developers to build.
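
In Anthropic's published format, a skill is a folder whose SKILL.md file opens with YAML frontmatter (at minimum a name and a description) followed by usage instructions. A minimal, stdlib-only discovery sketch, with a deliberately naive frontmatter parser, might look like this:

```python
from pathlib import Path

def load_skills(skills_dir: str) -> dict[str, str]:
    """Scan a skills directory and return {skill name: description}.

    Assumes the documented layout: each skill is a folder containing a
    SKILL.md whose YAML frontmatter defines `name` and `description`.
    """
    skills = {}
    for skill_file in Path(skills_dir).glob("*/SKILL.md"):
        text = skill_file.read_text(encoding="utf-8")
        # Naive parse: frontmatter is the block between the first two '---' lines.
        _, frontmatter, _body = text.split("---", 2)
        meta = {}
        for line in frontmatter.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        skills[meta["name"]] = meta["description"]
    return skills
```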

Today's AI agents can connect but can't collaborate effectively because they lack a shared understanding of meaning. Semantic protocols are needed to enable true collaboration through grounding, conflict resolution, and negotiation, moving beyond simple message passing.
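
No such semantic protocol standard exists yet. As a thought experiment, a message might carry an explicit negotiation move and grounding references alongside its payload; every name below is hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Move(Enum):
    """Negotiation moves that go beyond simple message passing."""
    PROPOSE = "propose"
    COUNTER = "counter"
    CLARIFY = "clarify"   # ask the peer to ground an ambiguous term
    ACCEPT = "accept"
    REJECT = "reject"

@dataclass
class SemanticMessage:
    sender: str
    move: Move
    content: str
    # Grounding pins ambiguous terms in `content` to shared references,
    # so both agents resolve "deadline" or "report" to the same thing.
    grounding: dict[str, str] = field(default_factory=dict)

offer = SemanticMessage(
    sender="scheduler-agent",
    move=Move.PROPOSE,
    content="Ship the report by the deadline.",
    grounding={"deadline": "2025-06-06T17:00Z", "report": "doc://q2-sales-summary"},
)
```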

Moving beyond isolated AI agents requires a framework mirroring human collaboration. This involves agents establishing common goals (shared intent), building a collective knowledge base (shared knowledge), and creating novel solutions together (shared innovation).

While AI models excel at gathering and synthesizing information ('knowing'), they are not yet reliable at executing actions in the real world ('doing'). True agentic systems require bridging this gap by adding crucial layers of validation and human intervention to ensure tasks are performed correctly and safely.
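
One generic pattern for bridging the knowing-doing gap is to wrap every proposed action in validation and an optional human-approval gate before execution. This sketch is not any particular framework's API; all four callables are assumptions:

```python
def execute_with_guardrails(action, validate, needs_human_approval, ask_human):
    """Run an agent-proposed action only after validation and, where
    required, explicit human sign-off."""
    ok, reason = validate(action)            # the 'knowing' layer checks the plan
    if not ok:
        return {"status": "rejected", "reason": reason}
    if needs_human_approval(action) and not ask_human(action):
        return {"status": "declined_by_human"}
    try:
        result = action()                    # the 'doing' step
    except Exception as exc:                 # real systems would retry or roll back
        return {"status": "failed", "error": str(exc)}
    return {"status": "done", "result": result}
```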

Current AI development focuses on "vertical scaling" (bigger models), akin to early humans getting smarter individually. The real breakthrough, like humanity's invention of language, will come from "horizontal scaling"—enabling AI agents to share knowledge and collaborate.

The future of AI is not just humans talking to AI, but a world where personal agents communicate directly with business agents (e.g., your agent negotiating a loan with a bank's agent). This will necessitate new communication protocols and guardrails, creating a societal transformation comparable to the early internet.
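
Guardrails for such agent-to-agent commerce could take the form of a user-granted mandate that the agent must check before accepting any terms. Everything in this sketch, including the loan example, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """Hard limits a user grants their agent before it negotiates."""
    max_loan_amount: float
    max_apr: float

@dataclass
class LoanOffer:
    amount: float
    apr: float

def within_mandate(offer: LoanOffer, mandate: Mandate) -> bool:
    """The agent may auto-accept only terms inside its mandate;
    anything outside must be escalated back to the human."""
    return offer.amount <= mandate.max_loan_amount and offer.apr <= mandate.max_apr

mandate = Mandate(max_loan_amount=20_000, max_apr=0.07)
bank_offer = LoanOffer(amount=18_000, apr=0.065)
assert within_mandate(bank_offer, mandate)   # safe to accept without escalation
```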

Karpathy identifies two missing components for multi-agent AI systems. First, they lack "culture"—the ability to create and share a growing body of knowledge for their own use, like writing books for other AIs. Second, they lack "self-play," the competitive dynamic seen in AlphaGo that drives rapid improvement.