When determining what data an RL model should consider, resist including every available feature. Instead, observe how experienced human decision-makers reason about the problem. Their simplified mental models reveal the core signals that truly drive outcomes; building the state representation around those signals yields more stable, faster-learning, and more interpretable AI systems.
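A minimal sketch of the idea, with hypothetical feature names: the observation handed to the agent is projected onto the handful of signals an experienced operator actually watches, rather than every raw field the system exposes.

```python
import numpy as np

# Hypothetical: the system exposes hundreds of raw fields...
ALL_FEATURES = [f"raw_field_{i}" for i in range(200)]

# ...but an experienced operator reasons over only a few signals.
EXPERT_SIGNALS = ["queue_depth", "sla_hours_remaining", "order_priority"]

def make_observation(record: dict) -> np.ndarray:
    """Project a raw record onto the compact state an expert actually uses."""
    return np.array([record[k] for k in EXPERT_SIGNALS], dtype=np.float32)
```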
Reinforcement learning incentivizes AIs to find the right answer, not just mimic human text. As a result, models develop their own internal "dialect" for reasoning: a chain of thought that is effective but increasingly incomprehensible and alien to human observers.
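The incentive is easy to see in a toy outcome-based reward: only the final answer is scored, so the intermediate reasoning tokens are unconstrained and free to drift into whatever form maximizes correctness. The answer-on-the-last-line convention below is an assumption for illustration, not any lab's actual setup.

```python
def outcome_reward(sampled_text: str, reference_answer: str) -> float:
    """Score only the final answer; the chain of thought itself is never graded."""
    # Assumption: the model is prompted to put its final answer on the last line.
    lines = sampled_text.strip().splitlines()
    if not lines:
        return 0.0
    return 1.0 if lines[-1].strip() == reference_answer.strip() else 0.0
```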
AI models show impressive performance on evaluation benchmarks but underwhelm in real-world applications. This gap exists because researchers, focused on evals, create reinforcement learning (RL) environments that mirror test tasks. This leads to narrow intelligence that doesn't generalize, a form of human-driven reward hacking.
Even with vast training data, current AI models are far less sample-efficient than humans. This limits their ability to adapt and learn new skills on the fly. They resemble a perpetual new hire who can access information but lacks the deep, instinctual learning that comes from experience and weight updates.
Training AI agents to execute multi-step business workflows demands a new data paradigm. Companies create reinforcement learning (RL) environments—mini world models of business processes—where agents learn by attempting tasks, a more advanced method than simple prompt-completion training (SFT/RLHF).
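As a rough sketch of what such an environment looks like in code, the standard Gymnasium reset/step interface gives the agent a state, lets it attempt an action, and rewards only completion of the full workflow. The invoice-approval stages and actions below are hypothetical placeholders, not any vendor's product.

```python
import gymnasium as gym
from gymnasium import spaces

class InvoiceApprovalEnv(gym.Env):
    """Toy multi-step business workflow: actions must be taken in order."""
    ACTIONS = ["request_po", "match_po", "resolve_mismatch", "approve_payment"]

    def __init__(self):
        self.observation_space = spaces.Discrete(len(self.ACTIONS) + 1)  # current stage
        self.action_space = spaces.Discrete(len(self.ACTIONS))

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.stage = 0
        return self.stage, {}

    def step(self, action):
        # Hypothetical transition rule: stage i only advances on action i.
        if action == self.stage:
            self.stage += 1
        terminated = self.stage == len(self.ACTIONS)
        reward = 1.0 if terminated else 0.0  # reward only for finishing the workflow
        return self.stage, reward, terminated, False, {}
```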
Reinforcement Learning from Human Feedback (RLHF) is a popular term, but it's just one method. The core concept is reinforcing desired model behavior using various signals. These can include AI feedback (RLAIF), where another AI judges the output, or verifiable rewards, like checking whether a model's answer to a math problem is correct.
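A verifiable reward is the simplest of these signals to write down. The sketch below assumes an "Answer: <number>" output convention and checks the parsed value against a known solution, with no preference model or AI judge involved.

```python
import re

def verifiable_math_reward(model_output: str, ground_truth: float, tol: float = 1e-6) -> float:
    """Return 1.0 iff the model's stated numeric answer matches the known solution."""
    match = re.search(r"Answer:\s*(-?\d+(?:\.\d+)?)", model_output)
    if match is None:
        return 0.0
    return 1.0 if abs(float(match.group(1)) - ground_truth) <= tol else 0.0
```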
Focusing on which reinforcement learning algorithm is best (e.g., PPO vs. DPO) is misguided. The more critical factor is the quality and verifiability of the input data signal itself, which exists on a spectrum from subjective human preference (RLHF) to objective, verifiable truth.
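One way to see why the signal dominates the algorithm: the same scoring function, wherever it sits on that spectrum, can be packaged either as scalar rewards for a PPO-style optimizer or as (chosen, rejected) pairs for DPO. The helpers below are illustrative, not any library's API.

```python
from typing import Callable, List, Tuple

# Anything from a learned human-preference model to a verifiable programmatic check.
ScoreFn = Callable[[str], float]

def to_ppo_rewards(rollouts: List[str], score: ScoreFn) -> List[float]:
    """Scalar reward per rollout, as a PPO-style trainer would consume."""
    return [score(r) for r in rollouts]

def to_dpo_pairs(rollouts: List[str], score: ScoreFn) -> List[Tuple[str, str]]:
    """(chosen, rejected) pairs built from the same signal, as DPO would consume."""
    ranked = sorted(rollouts, key=score, reverse=True)
    return [(ranked[0], ranked[-1])] if len(ranked) >= 2 else []
```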
Much RL research from 2015-2022 has not proven useful in practice because academia rewards complex, math-heavy ideas. Such ideas provide implicit "knobs" for overfitting benchmarks, while simpler, more generalizable approaches get passed over for lacking intellectual novelty.
Karpathy identifies the AI community's 2010s focus on reinforcement learning in games (like Atari) as a misstep. These environments were too sparse and disconnected from real-world knowledge work. Progress required first building powerful representations through large language models, a step that was skipped in early attempts to create agents.
As reinforcement learning (RL) techniques mature, the core challenge shifts from the algorithm to the problem definition. The competitive moat for AI companies will be their ability to create high-fidelity environments and benchmarks that accurately represent complex, real-world tasks, effectively teaching the AI what matters.
The central challenge for current AI is not merely sample efficiency but a more profound failure to generalize. Models generalize 'dramatically worse than people,' which is the root cause of their brittleness, inability to learn from nuanced instruction, and unreliability compared to human intelligence. Solving this is the key to the next paradigm.