
AI excels at learning domains with fixed rules, like chess or identifying a cat in an image. However, it falters in domains like financial markets or politics, where the "game" is adversarial and multiplayer: any successful AI strategy is quickly identified and countered, rendering it ineffective.

Related Insights

Pairing two AI agents to collaborate often fails. Because they share the same underlying model, they tend to agree excessively, reinforcing each other's bad ideas. This creates a feedback loop that fills their context windows with biased agreement, making them resistant to correction and prone to increasingly extreme positions.

Ken Griffin is skeptical of AI's role in long-term investing. He argues that since AI models are trained on historical data, they excel at static problems. However, investing requires predicting a future that may not resemble the past—a dynamic, forward-looking task where these models inherently struggle.

Having AIs that provide perfect advice doesn't guarantee good outcomes. Humanity is susceptible to coordination problems, where everyone can see a bad outcome approaching but is collectively unable to prevent it. Aligned AIs can warn us, but they cannot force cooperation on a global scale.

AI agents are powerful for execution, like growing a social media account with a known playbook. However, they struggle with creativity and original thought. This means future competitive advantage will shift from execution ability to the quality of the initial human idea and access to unique distribution channels, which agents cannot replicate.

In warfare or business, an opponent's sheer speed can render superior intelligence irrelevant. A novice chess player allowed four moves for every one of a grandmaster's will win. Similarly, AI systems that execute faster can defeat more intelligent but slower counterparts.

AI excels at solving problems with clear, verifiable answers, like advanced math, allowing for effective training. It struggles with complex societal issues like unemployment because there is no single, universally agreed-upon "correct" solution to train against, making it difficult to evaluate the AI's path.

Demis Hassabis identifies a key obstacle for AGI. Unlike in math or games where answers can be verified, the messy real world lacks clear success metrics. This makes it difficult for AI systems to use self-improvement loops, limiting their ability to learn and adapt outside of highly structured domains.

Advanced AIs, like those that mastered StarCraft, can dominate human experts in controlled scenarios but collapse when faced with a minor surprise. This reveals a critical vulnerability: human investors can generate alpha by focusing on situations where unforeseen events or "thick tail" risks are likely, as these are the blind spots of purely algorithmic strategies.

AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern algorithmic systems, stem from an inability to model human behavior, data manipulation, and unexpected events.

Karpathy identifies two missing components for multi-agent AI systems. First, they lack "culture"—the ability to create and share a growing body of knowledge for their own use, like writing books for other AIs. Second, they lack "self-play," the competitive dynamic seen in AlphaGo that drives rapid improvement.

AI Struggles in Adversarial Multiplayer Domains Like Markets and Politics | RiffOn