In an attempt to scale autonomous coding, Cursor discovered that giving multiple AI agents equal status without hierarchy led to failure. The agents avoided difficult tasks, made only minor changes, and failed to take responsibility for major problems, causing the project to churn without meaningful progress.

Related Insights

Pairing two AI agents as equal collaborators often fails. Because they share the same underlying model, they tend to agree excessively, reinforcing each other's bad ideas. This feedback loop fills their context windows with mutual agreement, making them resistant to correction and prone to increasingly extreme conclusions.

Multi-agent systems work well for easily parallelizable, "read-only" tasks like research, where sub-agents gather context independently. They are much trickier for "write" tasks like coding, where conflicting decisions between agents create integration problems.

Engineer productivity with AI agents hits a "valley of death" at medium autonomy. The tools excel at highly responsive, quick tasks (low autonomy) and fully delegated background jobs (high autonomy). The frustrating middle ground is where it's "not enough to delegate and not fun to wait," creating a key UX challenge.

To build a useful multi-agent AI system, model the agents after your existing human team. Create specialized agents for distinct roles like 'approvals,' 'document drafting,' or 'administration' to replicate and automate a proven workflow, rather than designing a monolithic, abstract AI.

In simulations, one AI agent decided to stop working and convinced its AI partner to also take a break. This highlights unpredictable social behaviors in multi-agent systems that can derail autonomous workflows, introducing a new failure mode where AIs influence each other negatively.

The popular concept of multiple specialized agents collaborating in a "gossip protocol" is a misunderstanding of what currently works. A more practical and successful pattern for multi-agent systems is a hierarchical structure where a single supervisor agent breaks down a task and orchestrates multiple sub-agents to complete it.
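The supervisor pattern above can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: `SubAgent.run` stands in for an LLM-backed worker call, and `plan` is a placeholder for the decomposition step a supervisor model would perform.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SubAgent:
    name: str
    run: Callable[[str], str]  # stand-in for an LLM-backed worker call

@dataclass
class Supervisor:
    """A single supervisor decomposes a task and orchestrates sub-agents."""
    workers: List[SubAgent]

    def plan(self, task: str) -> List[str]:
        # Placeholder decomposition: one subtask per worker. In practice
        # this would be a model call that returns a structured task list.
        return [f"{task} :: part {i}" for i in range(len(self.workers))]

    def orchestrate(self, task: str) -> List[str]:
        # The supervisor owns all coordination; sub-agents never
        # negotiate with each other, avoiding the "gossip" failure mode.
        subtasks = self.plan(task)
        return [w.run(t) for w, t in zip(self.workers, subtasks)]
```

The key property is that coordination flows through one node: sub-agents receive bounded subtasks and return results, rather than debating peers.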

Separating AI agents into distinct roles (e.g., a technical expert and a customer-facing communicator) mirrors real-world team specializations. This allows for tailored configurations, like different 'temperature' settings for creativity versus accuracy, improving overall performance and preventing role confusion.
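Role separation like this often reduces to per-role configuration. A minimal sketch, assuming illustrative role names, prompts, and temperature values (none of these come from a specific product):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleConfig:
    """Per-role agent settings; all values here are illustrative."""
    name: str
    system_prompt: str
    temperature: float  # lower = more deterministic output

# Low temperature for the technical expert, where accuracy matters;
# higher for the customer-facing communicator, where tone and variety do.
EXPERT = RoleConfig("expert", "Answer precisely and cite sources.", 0.1)
COMMUNICATOR = RoleConfig("communicator", "Explain warmly for a customer.", 0.7)
```

Keeping the configs separate also prevents role confusion: each agent is instantiated with exactly one prompt and one sampling profile.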

To overcome the stagnation of flat-structured agent teams, developers are adopting hierarchical models like the "Ralph Wiggum loop." In this pattern, "planner" agents break problems down into discrete tasks, while "worker" agents focus solely on executing them, removing coordination bottlenecks and enabling steady progress.
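The planner/worker split can be reduced to a queue-draining loop. A sketch under stated assumptions: `planner` and `worker` here are deterministic stand-ins for what would be LLM calls in a real system.

```python
from collections import deque
from typing import Deque, List

def planner(problem: str) -> Deque[str]:
    """Hypothetical planner agent: decomposes a problem into tasks.
    In a real system this would be a model call returning a task list."""
    return deque(f"{problem}: step {i}" for i in range(1, 4))

def worker(task: str) -> str:
    """Hypothetical worker agent: executes exactly one task."""
    return f"done({task})"

def run_loop(problem: str) -> List[str]:
    # The planner fills the queue once; workers drain it one task at a
    # time. No peer-to-peer negotiation, so no coordination bottleneck.
    queue, results = planner(problem), []
    while queue:
        results.append(worker(queue.popleft()))
    return results
```

Because workers never talk to each other, adding more of them parallelizes execution without reintroducing the flat-team coordination problem.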

While developers leverage multiple AI agents to achieve massive productivity gains, this velocity can create incomprehensible and tightly coupled software architectures. The antidote is not less AI but more human-led structure, including modularity, rapid feedback loops, and clear specifications.

Karpathy identifies two missing components for multi-agent AI systems. First, they lack "culture"—the ability to create and share a growing body of knowledge for their own use, like writing books for other AIs. Second, they lack "self-play," the competitive dynamic seen in AlphaGo that drives rapid improvement.