Just as neural networks replaced hand-crafted features, large generalist models are replacing narrow, task-specific ones. Jeff Dean notes the era of unified models is "really upon us." A single, large model that can generalize across domains like math and language is proving more powerful than bespoke solutions for each, a modern take on the "bitter lesson."
The path to a general-purpose AI model is not to tackle the entire problem at once. A more effective strategy is to start with a highly constrained domain, like generating only Minecraft videos. Once the model works reliably in that narrow distribution, incrementally expand the training data and complexity, using each step as a foundation for the next.
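A minimal sketch of that staged expansion, assuming a hypothetical pipeline: the stage names, quality bars, and train_on/evaluate helpers below are placeholders, but the control flow shows the idea of clearing one distribution before widening to the next.

```python
# Hedged sketch of "start narrow, then expand": train to a quality bar on a
# constrained distribution before widening the data mix. All names and numbers
# are hypothetical placeholders, not a real training pipeline.
STAGES = [
    {"data": "minecraft_videos", "quality_bar": 0.90},   # narrow, well-controlled domain
    {"data": "all_game_videos", "quality_bar": 0.85},    # broader but still structured
    {"data": "general_web_video", "quality_bar": 0.80},  # open-ended distribution
]

def train_on(model, dataset_name):
    """Placeholder for one training run on the named data mix."""
    return model

def evaluate(model, dataset_name) -> float:
    """Placeholder for held-out evaluation on the same distribution."""
    return 1.0

def staged_training(model):
    for stage in STAGES:
        # Keep training on the current distribution until it clears the bar,
        # then use that checkpoint as the foundation for the next, broader stage.
        while evaluate(model, stage["data"]) < stage["quality_bar"]:
            model = train_on(model, stage["data"])
        print(f"cleared {stage['data']}; expanding the distribution")
    return model

staged_training(model=None)
```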
AI development history shows that complex, hard-coded approaches to intelligence are often superseded by more general, simpler methods that scale more effectively. This "bitter lesson" warns against building brittle solutions that will become obsolete as core models improve.
A remarkable feature of the current LLM era is that AI researchers can contribute to solving grand challenges in highly specialized domains, such as winning an IMO Gold medal, without possessing deep personal knowledge of that field. The model acts as a universal tool that transcends the operator's expertise.
Current AI models resemble a student who grinds 10,000 hours on a narrow task. They achieve superhuman performance on benchmarks but lack the broad, adaptable intelligence of someone with less specific training but better general reasoning. This explains the gap between eval scores and real-world utility.
The "bitter lesson" in AI research posits that methods leveraging massive computation scale better and ultimately win out over approaches that rely on human-designed domain knowledge or clever shortcuts, favoring scale over ingenuity.
The AI arms race will shift from building ever-larger general models to creating smaller, highly specialized models for domains like medicine and law. General AIs will evolve to act as "general contractors," routing user queries to the appropriate specialist model for deeper expertise.
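One way to picture the "general contractor" pattern is a thin routing layer in front of domain specialists. The sketch below is purely illustrative: the specialist names, the classify_domain heuristic, and the call_model stub stand in for whatever real classifier and inference API such a system would actually use.

```python
# Hedged sketch of the "general contractor" pattern: a general model routes each
# query to a specialist. SPECIALISTS, classify_domain, and call_model are
# illustrative placeholders, not a real product or API.
from typing import Dict

SPECIALISTS: Dict[str, str] = {
    "medicine": "med-specialist-model",
    "law": "law-specialist-model",
    "general": "generalist-model",
}

def classify_domain(query: str) -> str:
    """Stand-in for the general model deciding which domain a query belongs to."""
    q = query.lower()
    if any(w in q for w in ("diagnosis", "symptom", "dosage")):
        return "medicine"
    if any(w in q for w in ("contract", "liability", "statute")):
        return "law"
    return "general"

def call_model(model_name: str, query: str) -> str:
    """Placeholder for an actual inference call to the chosen model."""
    return f"[{model_name}] answer to: {query}"

def general_contractor(query: str) -> str:
    return call_model(SPECIALISTS[classify_domain(query)], query)

print(general_contractor("What is a typical ibuprofen dosage?"))
```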
The pace of AI model improvement outstrips the pace at which task-specific tools can be shipped. By building lower-level, generalizable tools instead, developers create a system that automatically becomes more powerful and adaptable as the underlying AI gets smarter, without requiring re-engineering.
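As an illustration of that trade-off, compare a narrow tool with a lower-level one. Both function names below are hypothetical; the point is that the general tool's interface stays fixed while the model that calls it improves.

```python
# Hedged sketch contrasting a narrow, task-specific tool with a lower-level,
# general one. The generic tool's surface area never changes; only the model
# that decides what to do with it gets smarter.
import subprocess
import sys

def summarize_quarterly_report(path: str) -> str:
    """Narrow tool: hard-codes one workflow and must be re-engineered per task."""
    ...

def run_python(code: str, timeout: int = 10) -> str:
    """General tool: execute arbitrary Python and return its output.
    A stronger model writes better code; the tool itself never changes."""
    result = subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout or result.stderr

print(run_python("print(2 + 2)"))  # the model decides what code to run
```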
The most fundamental challenge in AI today is not scale or architecture, but the fact that models generalize dramatically worse than humans. Solving this sample efficiency and robustness problem is the true key to unlocking the next level of AI capabilities and real-world impact.
The 2017 introduction of the transformer architecture revolutionized AI. Instead of relying on a fixed representation of each word, models learn the contextual relationships between tokens through self-attention. This lets them predict the next token in a sequence from context alone, with no formal dictionary or hand-coded linguistic rules, and it laid the foundation for today's more generalist capabilities.
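To make "contextual relationships" concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of the transformer; the toy dimensions and random weights are purely illustrative, not a real model.

```python
# Minimal sketch of scaled dot-product self-attention, the mechanism a
# transformer uses to learn contextual relationships between tokens.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, W_q, W_k, W_v):
    """tokens: (seq_len, d_model) embeddings; W_*: (d_model, d_head) projections."""
    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to every other
    weights = softmax(scores, axis=-1)       # contextual mixing weights
    return weights @ V                       # each row is a context-aware representation

rng = np.random.default_rng(0)
d_model, d_head, seq_len = 16, 8, 5
tokens = rng.normal(size=(seq_len, d_model))  # stand-in for token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(tokens, W_q, W_k, W_v).shape)  # (5, 8): one contextual vector per token
```

Each output row mixes information from every other token in proportion to the attention weights, which is what makes the representation context-dependent rather than dictionary-like.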
The central challenge for current AI is not merely sample efficiency but a more profound failure to generalize. Models generalize 'dramatically worse than people,' which is the root cause of their brittleness, inability to learn from nuanced instruction, and unreliability compared to human intelligence. Solving this is the key to the next paradigm.