To achieve radical improvements in speed and coordination, we may need to allow AI agent swarms to communicate in ways humans cannot understand. This contradicts a core tenet of AI safety but could be a necessary tradeoff for performance, provided safe operational boundaries can be established.
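
As a rough illustration of what "safe operational boundaries" could look like in practice, the sketch below (all class and function names are hypothetical) lets agents exchange opaque payloads freely while enforcing an allow-list on their externally visible actions, so the boundary sits on effects rather than on message content.

```python
from dataclasses import dataclass, field

@dataclass
class ActionGuard:
    allowed_actions: set[str]              # e.g. {"read_db", "run_test"}
    max_actions_per_episode: int = 100
    _count: int = 0

    def authorize(self, action: str) -> bool:
        # The guard never inspects inter-agent messages, only requested effects.
        self._count += 1
        return action in self.allowed_actions and self._count <= self.max_actions_per_episode

@dataclass
class OpaqueBus:
    """Message bus that never inspects payloads; it only records metadata."""
    log: list[tuple[str, str, int]] = field(default_factory=list)

    def send(self, sender: str, receiver: str, payload: bytes) -> None:
        # The payload may be a compressed, model-specific dialect no human can read.
        self.log.append((sender, receiver, len(payload)))

def act(agent_id: str, action: str, guard: ActionGuard) -> None:
    if not guard.authorize(action):
        raise PermissionError(f"{agent_id}: '{action}' is outside the operational boundary")
    # ... perform the side effect here ...

bus = OpaqueBus()
guard = ActionGuard(allowed_actions={"read_db", "run_test"})
bus.send("agent-a", "agent-b", b"\x9f\x12opaque-dialect\x00")
act("agent-b", "run_test", guard)          # allowed
# act("agent-b", "transfer_funds", guard)  # would raise PermissionError
```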

Related Insights

Reinforcement learning incentivizes AIs to find the right answer rather than merely mimic human text. As a result, models develop their own internal "dialect" for reasoning: a chain of thought that is effective but increasingly incomprehensible and alien to human observers.
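
A toy sketch of why this drift happens (the traces and scoring functions below are illustrative stand-ins, not a real training setup): supervised imitation penalizes every reasoning token that departs from human-written text, while an outcome-only reward scores nothing but the final answer, leaving the intermediate tokens free to settle into whatever encoding is most effective.

```python
# Toy stand-ins: imitation scores every reasoning token against human text,
# while outcome-based RL scores only the final answer.

HUMAN_TRACE = ["First", "add", "2", "and", "2", "to", "get", "4"]

def imitation_loss(model_trace: list[str]) -> float:
    # Penalize every token that deviates from the human-written reasoning.
    return sum(1.0 for m, h in zip(model_trace, HUMAN_TRACE) if m != h)

def outcome_reward(answer: str) -> float:
    # Reward depends only on the final answer; the trace itself is never scored,
    # so any private dialect that reliably reaches "4" is equally optimal.
    return 1.0 if answer == "4" else 0.0

# A compressed, alien-looking trace incurs a large imitation loss
# but loses nothing under an outcome-only reward:
alien_trace = ["2+2", "=>", "4", "~", "~", "~", "~", "~"]
print(imitation_loss(alien_trace))   # high: diverges from human text
print(outcome_reward("4"))           # 1.0: the outcome is all that matters
```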

While current AI tools focus on individual productivity (e.g., coding faster), the real breakthrough will come from systems that improve organizational productivity. The next wave of AI will focus on how large teams of humans and AI agents coordinate on complex projects, a fundamentally different challenge than simply making one person faster.

Analysis of models' hidden 'chain of thought' reveals the emergence of a unique internal dialect. This language is compressed, uses non-standard grammar, and contains bizarre phrases that are already difficult for humans to interpret, complicating safety monitoring and raising concerns about future incomprehensibility.
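
A minimal sketch of why this complicates monitoring, with an invented word list and threshold: a naive chain-of-thought monitor that flags reasoning a human probably cannot audit degrades into constant alarms as the dialect drifts away from ordinary language.

```python
# Illustrative monitor only; the word list and threshold are made up.
ENGLISH_WORDS = {"the", "so", "therefore", "answer", "first", "then", "because"}

def readability_score(chain_of_thought: str) -> float:
    tokens = chain_of_thought.lower().split()
    if not tokens:
        return 0.0
    recognizable = sum(1 for t in tokens if t in ENGLISH_WORDS or t.isalpha())
    return recognizable / len(tokens)

def flag_for_review(chain_of_thought: str, threshold: float = 0.5) -> bool:
    # Low readability means a human monitor probably cannot audit the reasoning.
    return readability_score(chain_of_thought) < threshold

print(flag_for_review("First add the numbers, therefore the answer is four"))  # False
print(flag_for_review("q7~ zz# !merge| 4"))                                    # True
```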

The ambition to fully reverse-engineer AI models into simple, understandable components is proving unrealistic because their internal workings are messy and complex. The practical value of interpretability lies less in achieving guarantees and more in coarse-grained analysis, such as identifying when specific high-level capabilities are being used.
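
A minimal sketch of what such coarse-grained analysis could look like, assuming a linear probe over one layer's activations (the probe direction below is random stand-in data; in practice it would be fit on labelled activations):

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 512
probe_direction = rng.normal(size=HIDDEN_DIM)  # learned from labelled examples in practice
probe_bias = 0.0

def capability_active(activations: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the probe estimates the high-level capability is in use."""
    logit = activations @ probe_direction + probe_bias
    prob = 1.0 / (1.0 + np.exp(-logit))
    return bool(prob > threshold)

# Fake activations for a single forward pass at one layer.
activations = rng.normal(size=HIDDEN_DIM)
print(capability_active(activations))
```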

The "Comprehensive AI Services" model suggests that safety comes not from building a single, monolithic AGI but from creating a buffered ecosystem of specialized AIs. These agents can be superhuman within their domains (e.g., protein folding) but are fundamentally limited, preventing runaway, uncontrollable intelligence.
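
A hedged sketch of the services idea, with invented service names: each service exposes one narrow, typed capability and nothing else, so a folding service has no interface through which it could browse the web, spend money, or call arbitrary tools.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FoldingResult:
    sequence: str
    predicted_structure: str   # placeholder for real structural output

class ProteinFoldingService:
    """Superhuman within its domain, but the domain is all it can touch."""

    def fold(self, sequence: str) -> FoldingResult:
        # ... call the underlying model here ...
        return FoldingResult(sequence=sequence, predicted_structure="<structure>")

class LiteratureSearchService:
    def search(self, query: str, limit: int = 10) -> list[str]:
        # ... query a fixed corpus here ...
        return []

# An orchestrator composes services, and the composition stays auditable because
# every capability crossing a boundary is an explicit, typed call.
folding = ProteinFoldingService()
print(folding.fold("MKTAYIAK").predicted_structure)
```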

John Jumper contends that science has always operated with partial understanding, citing early crystallography and Roman engineering. He suggests that demanding perfect clarity into AI's 'black box' is a peculiar and unrealistic standard, one not applied to other scientific tools.

For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.

Demanding interpretability from AI trading models is a fallacy because they operate at a superhuman level. An AI predicting where a stock's price will be one minute from now is processing data in a way no human can. Expecting a simple, human-like explanation for its decision is unreasonable, much like asking a chess engine to explain its moves in prose.

Block's CTO believes the key to building complex applications with AI isn't a single, powerful model. Instead, he predicts a future of "swarm intelligence" in which hundreds of smaller, cheaper, open-source agents work collaboratively, their collective capability surpassing that of any individual large model.
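
A rough sketch of that swarm pattern, with the agent logic stubbed out: a dispatcher fans a task out to many small agents in parallel, and an aggregation step merges their drafts into one result.

```python
from concurrent.futures import ThreadPoolExecutor

def small_agent(subtask: str) -> str:
    # In practice this would call a small open-source model.
    return f"draft for: {subtask}"

def aggregate(drafts: list[str]) -> str:
    # A reducer agent (or a simple heuristic) merges the partial results.
    return "\n".join(drafts)

def run_swarm(task: str, num_agents: int = 8) -> str:
    subtasks = [f"{task} / part {i}" for i in range(num_agents)]
    with ThreadPoolExecutor(max_workers=num_agents) as pool:
        drafts = list(pool.map(small_agent, subtasks))
    return aggregate(drafts)

print(run_swarm("design the billing service"))
```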

Efforts to understand an AI's internal state (mechanistic interpretability) simultaneously advance AI safety, by revealing motivations, and AI welfare, by assessing potential suffering. Rather than being at odds, the two goals are aligned through the shared need to "pop the hood" on AI systems.
