At his startup NextGenVest, Will Falcon treated conversations about financial aid as a game. Using reinforcement learning, the system suggested questions to human agents to maximize the financial aid secured for a student. The system helped students obtain $60 million in aid, showcasing a novel, impactful use of RL.
AI startup Mercor's valuation quintupled to $10B by connecting AI labs with domain experts to train models. This reveals that the most critical bottleneck for advanced AI is not just data or compute, but reinforcement learning from highly skilled human feedback, creating a new "RL economy."
Low-cost AI tools create a new paradigm for entrepreneurship. Instead of the traditional "supervised learning" model where VCs provide a playbook, we see a "reinforcement learning" approach. Countless solo founders act as "agents," rapidly testing ideas without capital, allowing the market to reward what works and disrupting the VC value proposition.
Pre-training on internet text data is hitting a wall. The next major advancements will come from reinforcement learning (RL), where models learn by interacting with simulated environments (like games or fake e-commerce sites). This post-training phase is in its infancy but will soon consume the majority of compute.
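As a rough illustration of what such a simulated environment can look like, here is a minimal, hypothetical sketch of a toy "fake e-commerce" task with reset/step methods. The task, actions, and reward scheme are invented for illustration and are not taken from any lab's actual setup.

```python
# Hypothetical toy environment: the agent must add the requested item to a
# cart on a fake shopping site. Everything here is invented for illustration.
import random

class FakeShopEnv:
    ITEMS = ["laptop", "headphones", "keyboard"]

    def reset(self):
        self.target = random.choice(self.ITEMS)
        self.cart = []
        self.steps = 0
        return f"Task: buy a {self.target}. Cart: {self.cart}"

    def step(self, action: str):
        # Actions are plain strings, e.g. "add laptop" or "checkout".
        self.steps += 1
        if action.startswith("add "):
            self.cart.append(action.removeprefix("add "))
        done = action == "checkout" or self.steps >= 10
        # Sparse reward: +1 only if the right item is in the cart at the end.
        reward = 1.0 if (done and self.target in self.cart) else 0.0
        return f"Cart: {self.cart}", reward, done

# A policy (e.g. an LLM agent) loops: observation -> action -> env.step(action).
# The reward signal from episodes like this is what RL post-training optimizes.
```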
RL is compute-intensive relative to the amount of signal it extracts, but that trade-off is precisely its economic advantage: it lets labs exchange cheap, abundant compute for expensive, scarce human expertise. RL effectively amplifies the value of small, high-quality human-generated datasets, which is crucial when expertise is the bottleneck.
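To make the amplification point concrete, here is a back-of-envelope sketch with purely hypothetical numbers: a handful of expert-verified answers can be reused to score an arbitrary number of sampled rollouts, so extra compute multiplies the reach of each expert label.

```python
# Hypothetical illustration: N expert items yield N * K reward-labeled rollouts,
# so sampling compute substitutes for additional expert labels.
expert_dataset = [  # small and expensive: written and verified by domain experts
    {"problem": "integrate x^2 from 0 to 3", "answer": "9"},
    {"problem": "derivative of sin(x) at 0", "answer": "1"},
]
ROLLOUTS_PER_PROBLEM = 64  # cheap: just more sampling compute

def sample_model_answer(problem: str) -> str:
    """Placeholder for sampling an answer from the model being trained."""
    return "9"

reward_labeled = []
for item in expert_dataset:
    for _ in range(ROLLOUTS_PER_PROBLEM):
        attempt = sample_model_answer(item["problem"])
        reward = 1.0 if attempt.strip() == item["answer"] else 0.0
        reward_labeled.append((item["problem"], attempt, reward))

# 2 expert-written items become 128 reward-labeled examples for the RL step.
print(len(reward_labeled))
```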
Training AI agents to execute multi-step business workflows demands a new data paradigm. Companies create reinforcement learning (RL) environments—mini world models of business processes—where agents learn by attempting tasks, a more advanced method than simple prompt-completion training (SFT/RLHF).
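A hedged sketch of the difference in data shape, with invented field names: an SFT example is a static prompt-completion pair, while an RL episode is a multi-step trajectory inside a mock business system, graded by whether the end state is correct.

```python
# SFT/RLHF-style data: a static prompt -> completion pair.
sft_example = {
    "prompt": "Draft a refund email for order #1234.",
    "completion": "Dear customer, your refund has been processed...",
}

# RL-environment data: a multi-step episode inside a mock business system,
# graded on outcome. Actions and state keys here are invented for illustration.
rl_episode = {
    "goal": "Issue a refund for order #1234 in the mock CRM",
    "trajectory": [
        {"action": "open_order", "args": {"order_id": "1234"}},
        {"action": "check_refund_policy", "args": {}},
        {"action": "issue_refund", "args": {"order_id": "1234", "amount": 49.99}},
    ],
}

def grade_episode(mock_crm_state: dict) -> float:
    """Reward comes from the environment's end state, not from imitating a reference answer."""
    return 1.0 if mock_crm_state.get("order_1234_refunded") else 0.0
```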
AI labs like Anthropic find that mid-tier models can be trained with reinforcement learning to outperform their largest, most expensive models in just a few months, accelerating the pace of capability improvements.
Many AI projects fail to reach production because of reliability issues. The vision for continual learning is to deploy agents that are 'good enough,' then use RL to correct their behavior based on real-world errors, much like training a human. This solves the last-mile reliability problem and could unlock a vast market.
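One way to picture that loop is the sketch below, in which every helper (deploy, collect_feedback, rl_update) is a placeholder for a team's own serving, logging, and fine-tuning infrastructure.

```python
# Hypothetical continual-learning loop: deploy, log real-world outcomes,
# convert them into rewards, fine-tune with RL, redeploy.
def continual_learning_loop(agent, deploy, collect_feedback, rl_update, num_cycles=10):
    """deploy, collect_feedback, and rl_update are placeholders, not a real API."""
    for _ in range(num_cycles):
        episodes = deploy(agent)                 # run the 'good enough' agent in production
        rewarded = []
        for episode in episodes:
            feedback = collect_feedback(episode)  # human correction, ticket outcome, etc.
            reward = 1.0 if feedback == "success" else -1.0
            rewarded.append((episode, reward))
        agent = rl_update(agent, rewarded)        # reinforce successes, suppress failures
    return agent
```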
Reinforcement Learning from Human Feedback (RLHF) is a popular term, but it's just one method. The core concept is reinforcing desired model behavior using various signals. These can include AI feedback (RLAIF), where another AI judges the output, or verifiable rewards, like checking whether a model's answer to a math problem is correct.
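Two of those signals are easy to sketch. The judge model and its prompt below are assumptions for illustration, not a reference implementation.

```python
# 1) Verifiable reward: programmatically check a math answer.
def math_reward(model_answer: str, correct_answer: str) -> float:
    try:
        return 1.0 if abs(float(model_answer) - float(correct_answer)) < 1e-6 else 0.0
    except ValueError:
        return 0.0

# 2) RLAIF-style reward: another model grades the output.
# `judge_model` is a placeholder for whatever judge LLM is used.
def ai_feedback_reward(prompt: str, response: str, judge_model) -> float:
    verdict = judge_model(
        f"Rate 0-10 how helpful and harmless this response is.\n"
        f"Prompt: {prompt}\nResponse: {response}\nReply with a number only."
    )
    return float(verdict) / 10.0

print(math_reward("42", "42"))  # 1.0: a verifiable reward needs no human in the loop
```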
Instead of relying on expensive, general-purpose frontier models, companies can get better performance at lower cost. By building a reinforcement learning (RL) environment specific to their application (e.g., a code editor), they can train smaller, specialized open-source models to excel at that task for a fraction of the price.
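As a hedged sketch of what an application-specific reward might look like for a code-editor agent (the file layout and test command are placeholders): apply the model's proposed edit in a scratch workspace, run the tests, and reward a pass.

```python
# Hypothetical reward for a code-editor environment: the edit is good if the
# project's tests pass. Workspace layout and test command are placeholders.
import pathlib
import subprocess
import tempfile

def code_edit_reward(edited_source: str, test_command: list[str]) -> float:
    workspace = pathlib.Path(tempfile.mkdtemp())
    (workspace / "module.py").write_text(edited_source)
    result = subprocess.run(test_command, cwd=workspace, capture_output=True)
    return 1.0 if result.returncode == 0 else 0.0

# A small open-source model trained against a reward like this only has to be
# good at this one application's tasks, which is where the cost advantage comes from.
```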
Companies building infrastructure to A/B test models or evaluate prompts have already built most of what's needed for reinforcement learning. The core mechanism of measuring performance against a goal is the same. The next logical step is to use that performance signal to update the model's weights.
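The step from "measure" to "update" can be sketched as a REINFORCE-style objective. In the sketch below, `generate` and `run_eval` stand in for the sampling and evaluation infrastructure such companies already have, and the loss is a simplification of what production RL fine-tuning actually uses.

```python
# Sketch of turning an existing eval score into a weight update (REINFORCE-style).
# `generate` and `run_eval` are placeholders for existing serving/eval infrastructure.
import torch

def rl_step(model, optimizer, prompts, generate, run_eval):
    """One policy-gradient step that reuses the eval score as the reward."""
    optimizer.zero_grad()
    loss = torch.tensor(0.0)
    for prompt in prompts:
        response, log_prob = generate(model, prompt)  # sampled text + summed token log-probs
        reward = run_eval(prompt, response)           # the score an A/B test or eval suite records
        loss = loss - reward * log_prob               # raise the probability of high-scoring outputs
    loss.backward()
    optimizer.step()
```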