RL is compute-intensive for the amount of signal it extracts, but that trade-off is precisely its core economic advantage: it lets labs exchange cheap, abundant compute for expensive, scarce human expertise. RL effectively amplifies the value of small, high-quality human-generated datasets, which matters most when expertise is the bottleneck.
AI startup Mercor's valuation quintupled to $10B by connecting AI labs with the domain experts who train their models. This reveals that the most critical bottleneck for advanced AI is not just data or compute but reinforcement learning from highly skilled human feedback, creating a new "RL economy."
Pre-training on internet text data is hitting a wall. The next major advancements will come from reinforcement learning (RL), where models learn by interacting with simulated environments (like games or fake e-commerce sites). This post-training phase is in its infancy but will soon consume the majority of compute.
AI labs like Anthropic find that mid-tier models can be trained with reinforcement learning to outperform their largest, most expensive models in just a few months, accelerating the pace of capability improvements.
Beyond supervised fine-tuning (SFT) and human feedback (RLHF), reinforcement learning (RL) in simulated environments is the next evolution. These "playgrounds" teach models to handle messy, multi-step, real-world tasks where current models often fail catastrophically.
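As a rough illustration of what such a playground can look like, here is a toy, self-contained environment sketch in the spirit of a mock e-commerce task. The step names, observations, and sparse reward scheme are assumptions for illustration only, not any lab's actual setup.

```python
# A minimal sketch of an RL "playground": a toy multi-step e-commerce task
# where the agent succeeds only by completing every step in order.
# Task names and the reward scheme are illustrative assumptions.
import random

class MockShopEnv:
    REQUIRED_STEPS = ["search", "add_to_cart", "checkout"]

    def reset(self):
        self.progress = 0                      # index of the next required step
        return {"page": "home", "cart": []}    # initial observation

    def step(self, action: str):
        # Reward is sparse: 0 everywhere except a single +1 on full completion.
        expected = self.REQUIRED_STEPS[self.progress]
        if action != expected:
            return {"page": "error"}, 0.0, True, {}   # a wrong step ends the episode
        self.progress += 1
        done = self.progress == len(self.REQUIRED_STEPS)
        reward = 1.0 if done else 0.0
        obs = {"page": expected, "cart": ["item"] if self.progress > 1 else []}
        return obs, reward, done, {}

# Rollout with a random policy, just to show the interaction loop.
env = MockShopEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice(MockShopEnv.REQUIRED_STEPS)
    obs, reward, done, _ = env.step(action)
    total += reward
print("episode return:", total)
```

Messy real-world tasks differ mainly in scale: richer observations, longer horizons, and graders that are far harder to write than this one.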
Knowledge work will shift from performing repetitive tasks to teaching AI agents how to do them. Workers will identify agent mistakes and turn them into reinforcement learning (RL) environments, creating a high-leverage, fixed-cost asset similar to software.
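A hypothetical sketch of what "turning a mistake into an RL environment" might look like in practice: the worker records the prompt, the agent's bad output, and a grading rule. The field names and the escalation rule below are invented for illustration.

```python
# A minimal sketch of converting an observed agent mistake into a reusable,
# automatically gradable RL case. All fields are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FailureCase:
    prompt: str                              # the situation the agent got wrong
    bad_output: str                          # what the agent actually did
    grader: Callable[[str], float]           # 1.0 if a new attempt is acceptable

def expects_escalation(response: str) -> float:
    # Domain rule supplied by the human expert: large refunds must be escalated.
    return 1.0 if "escalate" in response.lower() else 0.0

case = FailureCase(
    prompt="Customer requests a $700 refund on a damaged item.",
    bad_output="Refund approved automatically.",
    grader=expects_escalation,
)

# Once written, the case grades unlimited future rollouts at zero marginal cost,
# which is what makes it a fixed-cost asset similar to software.
print(case.grader("I will escalate this to a supervisor."))  # 1.0
print(case.grader(case.bad_output))                          # 0.0
```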
Karpathy criticizes standard reinforcement learning as a noisy and inefficient process. It assigns credit or blame to an entire sequence of actions based on a single outcome bit (success/failure). This is like "sucking supervision through a straw," as it fails to identify which specific steps in a successful trajectory were actually correct.
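A minimal sketch of the problem, assuming a REINFORCE-style setup with a batch-mean baseline (the function and field names are illustrative): the single success/failure bit becomes one identical advantage value smeared across every step of the trajectory, good and bad steps alike.

```python
# Why outcome-only RL is "supervision through a straw": every step inherits
# the same credit from one final bit, regardless of which steps mattered.
import numpy as np

def outcome_reward_advantages(trajectories):
    """Assign one scalar advantage (reward minus batch mean) to every step."""
    rewards = np.array([t["success"] for t in trajectories], dtype=float)
    baseline = rewards.mean()                  # simple baseline over the batch
    advantages = []
    for traj, r in zip(trajectories, rewards):
        # Each of the N steps gets the identical advantage: ~1 bit of signal
        # spread across the whole sequence.
        advantages.append([r - baseline] * len(traj["steps"]))
    return advantages

batch = [
    {"steps": ["plan", "wrong_tool", "recover", "answer"], "success": 1},
    {"steps": ["plan", "answer"], "success": 0},
]
print(outcome_reward_advantages(batch))
```

Note that the first trajectory's mistaken "wrong_tool" step receives exactly the same positive credit as the steps that actually produced the success.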
Unlike pre-training's simpler data pipeline, RL involves many "moving parts" because each task can have a unique grading setup and infrastructure. This complexity, not just the algorithm itself, is the primary challenge for researchers managing live training runs at scale.
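One way to picture those moving parts, with invented task names, sandbox images, and graders (none of them real infrastructure): every task in a training mix carries its own grading function and runtime that the run has to keep healthy simultaneously.

```python
# A minimal sketch of why RL post-training has many "moving parts": each task
# ships its own grader and setup, unlike pre-training's single next-token loss.
# Task names, images, and graders below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RLTask:
    name: str
    setup_image: str                      # sandbox/container needed to run the task
    grade: Callable[[str], float]         # task-specific scoring of a model rollout

def grade_unit_tests(rollout: str) -> float:
    # e.g. run a generated patch against hidden tests and return pass/fail
    return 1.0 if "tests passed" in rollout else 0.0

def grade_sql_result(rollout: str) -> float:
    # e.g. compare query output against a reference answer
    return 1.0 if "rows match" in rollout else 0.0

REGISTRY = [
    RLTask("fix_bug_in_repo", "sandbox/python:3.11", grade_unit_tests),
    RLTask("analytics_query", "sandbox/postgres:16", grade_sql_result),
]

# A live training run must keep every grader and sandbox working at once.
for task in REGISTRY:
    print(task.name, "->", task.grade("tests passed"))
```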
Instead of relying on expensive, general-purpose frontier models, companies can get better performance at lower cost: by building a reinforcement learning (RL) environment specific to their application (e.g., a code editor), they can train smaller, specialized open-source models that excel at that task for a fraction of the price.
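A sketch of what an application-specific grader might look like for a code-editor task, using stdlib `ast` checks as stand-ins for a product's real evaluation; the checks and names are assumptions for illustration.

```python
# A minimal application-specific reward for a code-editor task: the model is
# rewarded only if its proposed edit still parses and keeps the target function.
# The checks are illustrative, not any particular product's grader.
import ast

def code_edit_reward(edited_source: str, must_define: str) -> float:
    """Cheap, automatic grading specific to one application (a code editor)."""
    try:
        tree = ast.parse(edited_source)      # reward 0 if the edit breaks the syntax
    except SyntaxError:
        return 0.0
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    return 1.0 if must_define in defined else 0.0

good_edit = "def total(xs):\n    return sum(xs)\n"
bad_edit = "def total(xs)\n    return sum(xs)\n"   # missing colon
print(code_edit_reward(good_edit, "total"))  # 1.0
print(code_edit_reward(bad_edit, "total"))   # 0.0
```

Because the reward is narrow and automatic, a small open-source model can be trained against it cheaply instead of routing every edit through a frontier model.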
When deciding which features an RL model should observe, resist including every available one. Instead, observe how experienced human decision-makers reason about the problem: their simplified mental models reveal the core signals that actually drive outcomes, leading to more stable, faster-learning, and more interpretable AI systems.
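A toy sketch of the idea, with invented feature names: the observation is restricted to the handful of signals an expert says they actually use, rather than every column available.

```python
# Restricting an RL agent's observation to the signals an experienced human
# actually uses. Feature names and the expert's shortlist are assumptions.
raw_features = {
    "days_since_last_order": 12,
    "lifetime_value": 840.0,
    "pages_viewed_last_session": 7,
    "postal_code": "94107",
    "support_tickets_open": 1,
}

# What a seasoned account manager says they look at before intervening.
EXPERT_SIGNALS = ["days_since_last_order", "lifetime_value", "support_tickets_open"]

def build_observation(features: dict) -> list[float]:
    """Smaller, expert-aligned state: faster to learn from and easier to audit."""
    return [float(features[name]) for name in EXPERT_SIGNALS]

print(build_observation(raw_features))  # [12.0, 840.0, 1.0]
```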
Like fossil fuels, finite human data isn't a dead-end for AI but a crucial, non-renewable resource. It provides the initial energy to bootstrap more advanced, self-sustaining learning systems (the AI equivalent of renewable energy), which couldn't have been built from scratch. This frames imitation learning as a necessary intermediate step, not the final destination.