We scan new podcasts and send you the top 5 insights daily.
Google AI leader Jeff Dean highlighted "continual learning"—a model's ability to learn from new inputs post-training—as a key step toward AGI. That leaders are discussing it publicly suggests a breakthrough is near, which could rapidly accelerate AI capabilities and lead to a "fast takeoff" scenario.
The popular conception of AGI as a pre-trained system that knows everything is flawed. A more realistic and powerful goal is an AI with a human-like ability for continual learning. This system wouldn't be deployed as a finished product, but as a "super-intelligent 15-year-old" that learns and adapts to specific roles.
The concept that AIs can build better AIs, creating an accelerating feedback loop, is no longer theoretical. Leaders from Anthropic, OpenAI, and Google DeepMind have publicly confirmed they are actively using current AI models to develop the next generation, making recursive self-improvement (RSI) a practical engineering pursuit.
Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.
While desirable for adaptability, models that learn continuously risk a winner-take-all dynamic in which one company's model pulls insurmountably ahead. This also represents a risky "depth-first search" toward AGI: prematurely committing to the current transformer paradigm without exploring safer alternatives.
Demis Hassabis identifies critical capabilities missing from today's AI systems. The biggest hurdles are continual learning (the ability for a trained model to learn new things without retraining) and hierarchical, long-term planning. This suggests that simply scaling current architectures may not be enough to achieve AGI.
Demis Hassabis argues that current LLMs are limited by their "goldfish brain"—they can't permanently learn from new interactions. He identifies solving this "continual learning" problem, where the model itself evolves over time, as one of the critical innovations needed to move from current systems to true AGI.
The key to a truly intelligent enterprise AI is not a static model, but one that uses reinforcement learning (RL) to continuously update its own weights overnight based on daily interactions, a concept known as "continuous learning".
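The overnight weight-update idea can be sketched as a toy loop. Everything here, from the reward-labeled log format to the linear update rule, is an illustrative assumption rather than any vendor's actual pipeline:

```python
# Toy sketch of an overnight "continuous learning" pass.
# The reward-weighted update rule and log format are illustrative
# assumptions, not a specific product's training pipeline.

def nightly_update(weights, interactions, lr=0.01):
    """One overnight pass over the day's logs: nudge each weight
    toward feature patterns the user rewarded (+1) and away from
    those they penalized (-1), REINFORCE-style on a linear model."""
    new_weights = list(weights)
    for features, reward in interactions:
        for i, x in enumerate(features):
            new_weights[i] += lr * reward * x
    return new_weights

# One simulated day of interaction logs: (feature vector, reward)
day_log = [([1.0, 0.0], +1), ([0.0, 1.0], -1), ([1.0, 1.0], +1)]

w = nightly_update([0.0, 0.0], day_log)
print(w)  # weights shift toward the rewarded feature patterns
```

In a real system the linear scorer would be a large model and the update would run through an RL fine-tuning stack, but the shape of the loop, batch up a day's interactions and fold them back into the weights, is the same.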
Google's new AI coding "Strike Team," with personal involvement from Sergey Brin, is focused on improving its models for internal Google engineers first. The goal is to create a feedback loop where AI helps build better AI, a concept Brin calls "AI takeoff," treating any friction in this process as a top-priority blocker for achieving AGI.
A major flaw in current AI is that models are frozen after training and don't learn from new interactions. "Nested Learning," a new technique from Google, offers a path for models to continually update, mimicking a key aspect of human intelligence and overcoming this static limitation.
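The frozen-versus-updating contrast can be shown with a deliberately simple toy. This sketches only the general idea of a model that rewrites its own state at inference time; it is not the Nested Learning technique itself, and all names are hypothetical:

```python
# Conceptual toy contrasting a frozen model with one that folds new
# information back into its own parameters. Illustrative only; this
# is NOT Google's Nested Learning algorithm.

class FrozenModel:
    """Knowledge fixed at deploy time; new facts are never retained."""
    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)

    def answer(self, question):
        return self.knowledge.get(question, "unknown")

class ContinualModel(FrozenModel):
    """Writes each observed fact back into its own state."""
    def observe(self, question, fact):
        self.knowledge[question] = fact  # the model itself changes

frozen = FrozenModel({"capital of France": "Paris"})
learner = ContinualModel({"capital of France": "Paris"})

# Both encounter the same new information after deployment...
learner.observe("user's name", "Ada")

print(frozen.answer("user's name"))   # "unknown": frozen post-training
print(learner.answer("user's name"))  # "Ada": retained for next time
```

The frozen model's only way to use new facts is to be handed them again in context on every query; the continual one keeps them, which is the static limitation the insight above describes.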