For Unsearchable Games like StarCraft, AIs Train "Best Response" Policies Against Fixed Opponents

In games too complex for a clean search tree (e.g., StarCraft), AIs use 'neural fictitious self-play.' They train specialized model-free RL agents to be a 'best response' against specific, fixed opponents. These specialists are then distilled into a single, robust policy that averages across many opponents.
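
A minimal Python sketch of that loop, with hypothetical `train_best_response` and `distill` helpers standing in for the real model-free RL and distillation machinery:

```python
import random

def neural_fictitious_self_play(initial_agent, train_best_response, distill,
                                n_rounds=10):
    """Sketch of neural fictitious self-play.

    Each round: pick a frozen opponent from the population, train a fresh
    model-free RL agent as a best response to it, then distill the whole
    population into a single averaged policy.
    """
    population = [initial_agent]          # frozen snapshots of past policies
    average_policy = initial_agent
    for _ in range(n_rounds):
        opponent = random.choice(population)        # specific, fixed opponent
        specialist = train_best_response(opponent)  # e.g. PPO vs. this opponent
        population.append(specialist)
        average_policy = distill(population)        # one robust averaged policy
    return average_policy
```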

Related Insights

Go's search space is larger than the number of atoms in the universe, making exhaustive search impossible. AlphaGo's core breakthrough was using neural networks to intelligently guide its search, evaluating only the most promising moves and making an intractable problem solvable.
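
The guiding idea in sketch form: a PUCT-style score, as used in AlphaGo-family search, ranks moves by value plus a prior-scaled exploration bonus. The constant `c_puct` and the toy numbers are illustrative assumptions:

```python
import math

def puct(total_value, visits, prior, parent_visits, c_puct=1.5):
    """Rank a move by its mean value plus an exploration bonus scaled by
    the policy network's prior, so search budget flows to promising moves."""
    q = total_value / visits if visits else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

# Toy example: three candidate moves as (total_value, visits, network_prior).
children = [(3.0, 5, 0.5), (0.0, 0, 0.3), (1.0, 4, 0.2)]
parent_visits = sum(n for _, n, _ in children)
best = max(range(len(children)), key=lambda i: puct(*children[i], parent_visits))
print(best)  # the unvisited move with a strong prior wins the next expansion
```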

Simulating strategies with memory (like "grim trigger") or with multiple players causes an exponential explosion of simulation branches. This can be solved by having all simulated agents draw from the same shared sequence of random numbers, which forces all simulation branches to halt at the same conceptual "time step."
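
A toy illustration of that shared-randomness trick (common random numbers): both simulated strategies consume the same pre-drawn noise sequence, so every branch sees identical shocks at identical time steps. The dynamics here are made up for illustration:

```python
import numpy as np

def simulate(strategy, shared_noise):
    """Run one simulation branch, drawing randomness ONLY from the shared
    pre-drawn sequence, so every branch sees the same shock at step t."""
    state, total = 0.0, 0.0
    for t, u in enumerate(shared_noise):
        action = strategy(state, t)
        state = state + action + (u - 0.5)   # identical shocks across branches
        total += state
    return total

rng = np.random.default_rng(0)
shared_noise = rng.random(100)               # one sequence shared by all branches

cooperate = lambda s, t: 0.1
grim_trigger = lambda s, t: 0.1 if s >= 0 else -1.0  # punish forever once wronged

# Branches differ only because strategies differ, never because one branch
# drew extra random numbers -- all branches halt at the same time step.
print(simulate(cooperate, shared_noise), simulate(grim_trigger, shared_noise))
```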

Reinforcement learning achieves superhuman results not by inventing alien concepts, but by surfacing and combining rare behaviors that are already possible within a model's vast pre-trained distribution. The goal of pre-training is to make this search for novel solutions more efficient and less random.
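
In sketch form, this is the familiar best-of-N sample-and-reinforce pattern; `policy.sample`, `reward`, and `fine_tune` are hypothetical placeholders, not any lab's actual recipe:

```python
def surface_rare_behaviors(policy, reward, prompts, n_samples=64,
                           fine_tune=None):
    """Best-of-N filtering: sample widely from the pre-trained distribution,
    keep the rare high-reward behaviors, and shift mass toward them."""
    winners = []
    for prompt in prompts:
        candidates = [policy.sample(prompt) for _ in range(n_samples)]
        winners.append(max(candidates, key=reward))  # rare, but already possible
    return fine_tune(policy, winners)  # search gets cheaper as the policy improves
```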

MCTS acts like the DAgger (Dataset Aggregation) algorithm in robotics. For every state in a game, even one on a losing path, MCTS provides a 'better' action. This teaches the policy not just the optimal path, but also how to recover and get back to it from suboptimal states, creating a more robust agent.
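
A sketch of that DAgger-style loop, with hypothetical `play_episode`, `mcts_expert`, and `retrain` helpers:

```python
def mcts_dagger(policy, mcts_expert, play_episode, retrain, n_iters=20):
    """DAgger-style loop: the learner visits states (including bad ones),
    the search labels each with a better action, and the aggregated
    dataset teaches the policy how to recover, not just how to win."""
    dataset = []
    for _ in range(n_iters):
        states = play_episode(policy)                     # may wander off-path
        dataset += [(s, mcts_expert(s)) for s in states]  # expert label per state
        policy = retrain(policy, dataset)                 # aggregate, never discard
    return policy
```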

Beyond supervised fine-tuning (SFT) and human feedback (RLHF), reinforcement learning (RL) in simulated environments is the next evolution. These "playgrounds" teach models to handle messy, multi-step, real-world tasks where current models often fail catastrophically.
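
The "playground" pattern, sketched with the Gymnasium API; the toy environment is a stand-in for a messy multi-step task, and the random action is where a trained model's policy would go:

```python
import gymnasium as gym

env = gym.make("FrozenLake-v1")      # toy stand-in for a messy multi-step task
obs, info = env.reset(seed=0)
done = False
while not done:
    action = env.action_space.sample()   # replace with the model's policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated       # episodes span many fallible steps
env.close()
```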

By removing all human game data and learning only from self-play, AlphaZero first rediscovered human strategies and then discarded them for superior, 'alien' ones. This showed that relying solely on human data can limit an AI's potential, anchoring it to existing knowledge and cognitive biases.

Instead of training on the single best action from its search (a one-hot label), AlphaGo's policy network learns to imitate the entire probability distribution of moves from MCTS. This 'soft label' contains far more information, enabling a much more effective and sample-efficient form of knowledge distillation.
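
The difference between the two targets, sketched in PyTorch with stand-in tensors for the network output and the search distribution:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(361)            # stand-in policy-net scores, one per move
pi_mcts = torch.softmax(torch.randn(361), dim=0)  # stand-in MCTS visit distribution
best_move = pi_mcts.argmax()

# Hard target: only the single best move carries any signal.
hard_loss = F.cross_entropy(logits.unsqueeze(0), best_move.unsqueeze(0))

# Soft target: every move's relative strength under search is a signal.
soft_loss = -(pi_mcts * F.log_softmax(logits, dim=0)).sum()
```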

Monte Carlo Tree Search (MCTS) acts as a 'policy improvement operator.' After the search finds a better move distribution, the policy network is trained to directly predict this improved distribution. This distills the expensive search process into the network itself, making it stronger over time.
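
One generation of that loop in sketch form; `self_play_with_mcts` and `fit` are hypothetical placeholders for the data-generation and training steps:

```python
def train_generation(policy_net, self_play_with_mcts, fit):
    """One round of search-as-policy-improvement."""
    # 1. Improve: MCTS, guided by the current net, yields a better move
    #    distribution than the raw net for every position it visits.
    positions, improved_targets = self_play_with_mcts(policy_net)
    # 2. Distill: train the net to predict those improved distributions,
    #    so the next generation's search starts from a stronger prior.
    return fit(policy_net, positions, improved_targets)
```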

Humans stop analyzing a game when they intuit a winning or losing position. AlphaGo’s value function mimics this by predicting the eventual outcome from any board state. This allows the search to be drastically shortened, as it doesn't need to play out every possibility to the very end.
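
A sketch of value-truncated look-ahead; the helpers are injected placeholders, and the negamax sign convention is an assumption for illustration:

```python
def evaluate(state, depth, value_net, children_of, is_terminal, game_result):
    """Depth-limited negamax: stop early and trust the value network's
    'intuition' instead of playing every line out to the final move."""
    if is_terminal(state):
        return game_result(state)    # exact outcome when the game truly ends
    if depth == 0:
        return value_net(state)      # predicted outcome replaces a full rollout
    return max(-evaluate(child, depth - 1, value_net, children_of,
                         is_terminal, game_result)
               for child in children_of(state))
```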

Unlike typical reinforcement learning, which learns from sparse win/loss signals, AlphaGo's method is remarkably stable. It uses MCTS to generate an 'improved' move for every state, turning the problem into a simple supervised learning task of imitating a better version of itself, avoiding high-variance gradients.
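
The contrast in sketch form, with stand-in tensors: under REINFORCE one noisy scalar credits every move in the game, while MCTS supplies a dense per-state target to imitate:

```python
import torch
import torch.nn.functional as F

n_states, n_moves = 50, 361
logits = torch.randn(n_states, n_moves, requires_grad=True)     # stand-in policy
played = torch.randint(n_moves, (n_states,))                    # moves actually played
pi_mcts = torch.softmax(torch.randn(n_states, n_moves), dim=1)  # search targets
outcome = 1.0                                                   # one win/loss scalar

log_probs = F.log_softmax(logits, dim=1)
# REINFORCE: a single noisy outcome credits every move -> high variance.
reinforce_loss = -outcome * log_probs[torch.arange(n_states), played].sum()
# MCTS targets: a dense per-state distribution to imitate -> stable supervision.
supervised_loss = -(pi_mcts * log_probs).sum(dim=1).mean()
```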
