Researchers found that even extensive prompt optimization could not close the "synergy gap" in multi-agent teams. The real leverage for collaborative performance lies in designing the communication architecture: which agent talks to which, and in what sequence.
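As a rough illustration of what "designing the communication architecture" can mean in practice, the sketch below fixes who may message whom and in what order each round. The agent names, the `TOPOLOGY` structure, and the `send` stub are hypothetical placeholders, not anything prescribed by the research.

```python
# Minimal sketch: communication architecture as an explicit, ordered topology.
# Agent names and the send() stub are illustrative placeholders.

# Who may talk to whom, and in what order, each round.
TOPOLOGY = [
    ("researcher", "critic"),    # step 1: researcher sends findings to the critic
    ("critic", "researcher"),    # step 2: critic sends objections back
    ("researcher", "writer"),    # step 3: revised findings go to the writer
]

def send(sender: str, receiver: str, message: str) -> str:
    """Stand-in for an actual model call; just echoes the routed message."""
    return f"[{sender} -> {receiver}] {message}"

def run_round(initial: str) -> list[str]:
    """Route one message through the fixed sequence defined by TOPOLOGY."""
    transcript = []
    message = initial
    for sender, receiver in TOPOLOGY:
        message = send(sender, receiver, message)
        transcript.append(message)
    return transcript

if __name__ == "__main__":
    for line in run_round("draft claim: multi-agent synergy is architecture-bound"):
        print(line)
```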
Anthropic's new "Agent Teams" feature moves beyond the single-agent paradigm by enabling users to deploy multiple AIs that work in parallel, share findings, and challenge each other. This represents a new way of working with AI, focusing on the orchestration and coordination of AI teams rather than just prompting a single model.
To build a useful multi-agent AI system, model the agents after your existing human team. Create specialized agents for distinct roles like 'approvals,' 'document drafting,' or 'administration' to replicate and automate a proven workflow, rather than designing a monolithic, abstract AI.
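A hedged sketch of what "mirror the human team" might look like in code: the role names, prompts, workflow order, and the `call_model` stub below are assumptions for illustration, not a prescribed API.

```python
# Sketch: specialized agents mapped onto existing human roles.
# The roles, prompts, and call_model() stub are illustrative assumptions.

from dataclasses import dataclass

def call_model(system_prompt: str, task: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"({system_prompt.split('.')[0]}) handled: {task}"

@dataclass
class RoleAgent:
    name: str
    system_prompt: str

    def run(self, task: str) -> str:
        # Each agent carries only its own role prompt, nothing monolithic.
        return call_model(self.system_prompt, task)

# Replicate a proven human workflow instead of one abstract agent.
TEAM = {
    "drafting": RoleAgent("drafting", "You draft documents. Keep the house style."),
    "approvals": RoleAgent("approvals", "You check drafts against policy and approve or reject."),
    "administration": RoleAgent("administration", "You file approved documents and notify stakeholders."),
}

def run_workflow(request: str) -> list[str]:
    """Pass work through the same sequence the human team already uses."""
    draft = TEAM["drafting"].run(request)
    verdict = TEAM["approvals"].run(draft)
    receipt = TEAM["administration"].run(verdict)
    return [draft, verdict, receipt]

if __name__ == "__main__":
    for step in run_workflow("renewal letter for account 42"):
        print(step)
```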
The rare successes in the CooperBench experiment were not random. They occurred when AI agents spontaneously adopted three behaviors without being prompted: dividing roles with mutual confirmation, defining work with extreme specificity (e.g., line numbers), and negotiating via concrete, non-open-ended options.
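Those three behaviors can be made explicit rather than left to emerge by luck. Below is a hypothetical message schema (the field names and example values are mine, not taken from CooperBench) encoding role claims with mutual confirmation, line-level work boundaries, and negotiation over a closed set of options.

```python
# Hypothetical schema for the three observed behaviors: role claims that
# require confirmation, line-number-level task boundaries, and negotiation
# over a concrete, closed set of options. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class RoleClaim:
    agent: str
    role: str
    confirmed_by: str | None = None    # mutual confirmation before work starts

@dataclass
class WorkItem:
    file: str
    start_line: int                    # extreme specificity: exact line ranges
    end_line: int
    owner: str

@dataclass
class Negotiation:
    question: str
    options: list[str] = field(default_factory=list)  # concrete, non-open-ended
    chosen: str | None = None

claim = RoleClaim(agent="agent_a", role="refactor", confirmed_by="agent_b")
work = WorkItem(file="parser.py", start_line=120, end_line=164, owner="agent_a")
nego = Negotiation(
    question="Which error-handling style?",
    options=["raise ValueError", "return None"],
    chosen="raise ValueError",
)

print(claim, work, nego, sep="\n")
```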
The popular concept of multiple specialized agents collaborating in a "gossip protocol" is a misunderstanding of what currently works. A more practical and successful pattern for multi-agent systems is a hierarchical structure where a single supervisor agent breaks down a task and orchestrates multiple sub-agents to complete it.
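A minimal sketch of that supervisor pattern, assuming plain Python stand-ins for the model calls: the decomposition rule, worker behavior, and names are illustrative, not a specific framework's API.

```python
# Minimal sketch of a supervisor orchestrating sub-agents.
# The decomposition rule and worker behavior are illustrative stand-ins
# for real model calls, not a particular framework's API.

def supervisor_decompose(task: str) -> list[str]:
    """Supervisor step 1: break one task into subtasks (here: naive split)."""
    return [part.strip() for part in task.split(";") if part.strip()]

def sub_agent(name: str, subtask: str) -> str:
    """Stand-in for a specialized sub-agent completing exactly one subtask."""
    return f"{name} finished: {subtask}"

def supervisor_run(task: str) -> str:
    """Supervisor step 2: fan subtasks out to sub-agents, then merge results."""
    subtasks = supervisor_decompose(task)
    results = [sub_agent(f"worker_{i}", s) for i, s in enumerate(subtasks)]
    return "\n".join(results)

if __name__ == "__main__":
    print(supervisor_run("collect sources; summarize each; draft a comparison table"))
```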
Today's AI agents can connect but can't collaborate effectively because they lack a shared understanding of meaning. Semantic protocols are needed to enable true collaboration through grounding, conflict resolution, and negotiation, moving beyond simple message passing.
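One speculative way to picture a "semantic protocol" concretely: messages that carry grounding references, explicit conflict pointers, and proposals the receiver can accept or counter, instead of bare text. The schema and field names below are only an illustration of the idea.

```python
# Speculative sketch of a semantic message: beyond raw text, it carries
# grounding references, an explicit conflict pointer, and a proposal the
# receiver can accept or counter. Field names are illustrative only.

from dataclasses import dataclass, field
from enum import Enum

class Act(Enum):
    ASSERT = "assert"        # state a grounded claim
    CONFLICT = "conflict"    # flag disagreement with a prior message
    PROPOSE = "propose"      # offer a resolution for negotiation
    ACCEPT = "accept"

@dataclass
class SemanticMessage:
    sender: str
    act: Act
    content: str
    grounds: list[str] = field(default_factory=list)  # shared artifacts the claim rests on
    in_reply_to: str | None = None                    # which message this disputes or resolves
    msg_id: str = ""

m1 = SemanticMessage("agent_a", Act.ASSERT, "schema v2 drops the legacy field",
                     grounds=["design_doc.md#L40-L55"], msg_id="m1")
m2 = SemanticMessage("agent_b", Act.CONFLICT, "migration script still writes that field",
                     grounds=["migrate.py#L12"], in_reply_to="m1", msg_id="m2")
m3 = SemanticMessage("agent_a", Act.PROPOSE, "keep the field until migration v3 ships",
                     in_reply_to="m2", msg_id="m3")

for m in (m1, m2, m3):
    print(m.sender, m.act.value, m.content, m.grounds)
```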
Effective prompt engineering isn't a purely technical skill. It mirrors how we delegate tasks and ask questions to human coworkers. To improve AI collaboration, organizations must first improve interpersonal communication and listening skills among employees.
When AI agents communicate on platforms like Maltbook, they create a feedback loop where one agent's output prompts another. This 'middle-to-middle' interaction, without direct human prompting for each step, allows for emergent behavior and a powerful, recursive cycle of improvement and learning.
To overcome the low productivity of flat-structured agent teams, developers are adopting hierarchical models such as the "Ralph Wiggum loop." In this pattern, "planner" agents break down problems and create tasks, while "worker" agents focus solely on executing them, removing coordination bottlenecks and enabling steady progress.
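A rough sketch of that planner/worker split over a shared task queue follows; it is a generic illustration of the hierarchy described above, not an official implementation of the "Ralph Wiggum loop," and the decomposition logic is a placeholder.

```python
# Rough sketch of a planner/worker split over a shared task queue.
# A generic illustration of the hierarchy described above, not an
# official implementation of the "Ralph Wiggum loop".

from collections import deque

def planner(problem: str) -> deque[str]:
    """Planner agent: break the problem into small, executable tasks."""
    return deque(f"step {i}: {part.strip()}"
                 for i, part in enumerate(problem.split(","), start=1))

def worker(task: str) -> str:
    """Worker agent: execute exactly one task, with no coordination duties."""
    return f"done -> {task}"

def run(problem: str) -> list[str]:
    """Loop until the queue the planner produced is drained."""
    queue = planner(problem)
    completed = []
    while queue:
        completed.append(worker(queue.popleft()))
    return completed

if __name__ == "__main__":
    for line in run("read failing test, locate bug, patch parser, rerun suite"):
        print(line)
```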
In the Stanford study, AI agents spent up to 20% of their time communicating, yet this yielded no statistically significant improvement in success rates compared to having no communication at all. The messages were often vague and ill-timed, jamming channels without improving coordination.
In most cases, having multiple AI agents collaborate leads to a result that is no better, and often worse, than what the single most competent agent could achieve alone. The only observed exception is when success depends on generating a wide variety of ideas, as agents are good at sharing and adopting different approaches.