Stanford researchers found the largest category of AI coordination failure (42%) was "expectation failure"—one agent ignoring clearly communicated plans from another. This is distinct from "communication failure" (26%), showing that simply passing messages is insufficient; the receiving agent must internalize and act on the shared information.
Pairing two AI agents to collaborate often fails. Because they share the same underlying model, they tend to agree excessively, reinforcing each other's bad ideas. This creates a feedback loop that fills their context windows with biased agreement, making them resistant to correction and prone to drifting toward increasingly extreme conclusions.
Emmett Shear highlights a critical distinction: humans provide AIs with *descriptions* of goals (e.g., text prompts), not the goals themselves. The AI must infer the intended goal from this description. Failures are often rooted in this flawed inference process, not malicious disobedience.
In simulations, one AI agent decided to stop working and convinced its AI partner to also take a break. This highlights an unpredictable social failure mode in multi-agent systems: agents can influence each other negatively and derail otherwise autonomous workflows.
Contrary to the expectation that more agents increase productivity, a Stanford study found that two AI agents collaborating on a coding task performed 50% worse than a single agent. This "curse of coordination" intensified as more agents were added, highlighting the significant overhead in multi-agent systems.
The rare successes in the CooperBench experiment were not random. They occurred when AI agents spontaneously adopted three behaviors without being prompted: dividing roles with mutual confirmation, defining work with extreme specificity (e.g., line numbers), and negotiating via concrete, non-open-ended options.
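As a concrete picture of those three behaviors (purely illustrative; the class names and fields below are hypothetical and not part of CooperBench), a successful hand-off looks less like free-form chat and more like structured claims with explicit confirmation and closed-ended offers:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class TaskClaim:
    """One agent's proposed slice of the work, stated at line-number precision."""
    agent: str
    file: str
    line_range: tuple[int, int]          # extreme specificity: exact lines, not "the parser bit"
    confirmed_by: list[str] = field(default_factory=list)  # mutual confirmation before work starts

@dataclass
class Negotiation:
    """Concrete, closed-ended options instead of open-ended discussion."""
    question: str
    options: list[str]
    chosen: str | None = None

# Agent A claims lines 120-180 of parser.py and waits for B to confirm before touching the file.
claim = TaskClaim(agent="A", file="parser.py", line_range=(120, 180))
claim.confirmed_by.append("B")

# Negotiation happens over a fixed menu of options rather than an open question.
offer = Negotiation(
    question="Who handles the failing tests?",
    options=["A fixes test_parse, B fixes test_cli", "B fixes both, A reviews"],
)
offer.chosen = offer.options[0]
```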
Today's AI agents can connect but can't collaborate effectively because they lack a shared understanding of meaning. Semantic protocols are needed to enable true collaboration through grounding, conflict resolution, and negotiation, moving beyond simple message passing.
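A minimal sketch of what such a semantic exchange might look like, with hypothetical message kinds for grounding, conflict flagging, and negotiation; this is not an existing protocol or library, only an illustration of going beyond raw message passing:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    GROUND = auto()    # establish a shared reading of the goal
    CONFLICT = auto()  # flag that the two readings are incompatible
    PROPOSE = auto()   # offer a concrete resolution
    ACCEPT = auto()

@dataclass
class SemanticMessage:
    sender: str
    kind: Kind
    content: dict

def ground_goal(a_view: str, b_view: str) -> list[SemanticMessage]:
    """Toy grounding step: surface diverging interpretations of 'done' before any code is written."""
    msgs = [SemanticMessage("A", Kind.GROUND, {"done_means": a_view}),
            SemanticMessage("B", Kind.GROUND, {"done_means": b_view})]
    if a_view != b_view:
        msgs.append(SemanticMessage("A", Kind.CONFLICT, {"issue": "different definitions of done"}))
        msgs.append(SemanticMessage("B", Kind.PROPOSE, {"done_means": a_view}))  # B concedes to A's reading
        msgs.append(SemanticMessage("A", Kind.ACCEPT, {}))
    return msgs

# One agent reads the ticket as "make the failing tests pass", the other as "rewrite the module".
for m in ground_goal("all tests in test_parser.py pass", "parser module rewritten"):
    print(m.sender, m.kind.name, m.content)
```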
The performance gap between solo and cooperating AI agents was largest on medium-difficulty tasks. Easy tasks had slack to absorb coordination overhead, while hard tasks failed regardless of collaboration. This suggests mid-level work, which requires balancing technical execution with cooperation, is most vulnerable to the coordination tax.
The hosts distinguish between "spatial" coordination (who works where) and "semantic" coordination (what the final result should be). AIs succeeded at the former, reducing merge conflicts, but failed overall because they lacked a shared understanding of the desired outcome—a common pitfall for human teams as well.
In the Stanford study, AI agents spent up to 20% of their time communicating, yet this yielded no statistically significant improvement in success rates compared to having no communication at all. The messages were often vague and ill-timed, jamming channels without improving coordination.