Recursive aims to build superintelligence by creating an AI that can apply the scientific method to its own improvement. The goal is to automate the cycle of ideation, implementation, and validation of new AI research, enabling the system to recursively self-improve in an open-ended fashion.

Related Insights

Kyle Corbitt argues that the AI development cycle of experimentation and bottleneck-solving is already a form of recursive self-improvement, one currently constrained by human intelligence. Once AIs become better than humans at directing this process, he expects progress to accelerate rapidly.

The idea that AIs can build better AIs, creating an accelerating feedback loop, is no longer theoretical. Leaders from Anthropic, OpenAI, and Google DeepMind have publicly confirmed they are using current AI models to develop the next generation, making recursive self-improvement (RSI) a practical engineering pursuit.

Molly Gibson's venture, Lila Sciences, aims for AI that doesn't just analyze data but autonomously executes the entire scientific method. By connecting generative models to automated labs, the AI can formulate hypotheses, run physical experiments, and learn from the results in a continuous loop, in pursuit of a superhuman pace of discovery.

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just two to four years away. In their view, this shift from superintelligence as an abstract concept to a concrete research goal signals an imminent acceleration in AI capabilities and the risks that come with them.

Unlike any prior tool, AI can be applied directly to improving its own creation. It designs more efficient computer chips, writes better training code, and automates research, creating a recursive self-improvement loop that could rapidly outpace human oversight and control.

Beyond just coding, improving AI models requires subtle skills such as designing effective reinforcement learning environments and managing feedback from human experts. Newman gauges how close we are to recursive self-improvement by asking whether AIs can automate these tasks, which rely on nuanced "taste and judgment" rather than raw computational ability.

Companies like OpenAI and Anthropic are not just building better models; their strategic goal is an "automated AI researcher." The ability for an AI to accelerate its own development is viewed as the key to getting so far ahead that no competitor can catch up.

The ultimate goal isn't just modeling specific systems (like protein folding), but automating the entire scientific method. This involves AI generating hypotheses, choosing experiments, analyzing results, and updating a 'world model' of a domain, creating a continuous loop of discovery.
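
As a rough illustration of that loop, here is a minimal Python sketch. Everything in it is hypothetical: WorldModel, propose_hypothesis, run_experiment, and update are invented names standing in for the generative model, the automated lab, and the belief-updating step described above, and the "domain" is reduced to estimating a single hidden number.

```python
import random

GROUND_TRUTH = 10.0  # hidden property of the toy "domain"

class WorldModel:
    """Toy stand-in for a learned model of a scientific domain."""

    def __init__(self):
        self.estimate = 0.0  # current belief about the hidden property

    def propose_hypothesis(self):
        # 1. Ideation: predict a value worth testing, based on current
        #    beliefs (a generative model would do this in a real system).
        return self.estimate + random.uniform(-1.0, 1.0)

    def update(self, observation):
        # 3. Analysis: move beliefs toward what the experiment showed.
        self.estimate += 0.5 * (observation - self.estimate)

def run_experiment(hypothesis):
    # 2. Stand-in for an automated lab: a noisy measurement of the
    #    hidden ground truth, which confirms or corrects the hypothesis.
    return GROUND_TRUTH + random.gauss(0.0, 0.1)

model = WorldModel()
for step in range(50):  # the continuous discovery loop
    hypothesis = model.propose_hypothesis()
    observation = run_experiment(hypothesis)
    model.update(observation)

print(f"final estimate: {model.estimate:.2f}")  # lands near 10.0
```

Running the loop drives the model's estimate toward the ground truth; the point is only the shape of the cycle, with each pass through hypothesize, experiment, and update leaving the world model slightly better than before.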

For leading labs, the endgame isn't just creating AGI but automating the process of AI research itself. By replacing human researchers with millions of "AI researchers," they aim to trigger a "fast takeoff" driven by recursive self-improvement. This makes automating high-level programming a key strategic milestone.

Sam Altman's goal of an "automated AI research intern" by 2026 and a full "researcher" by 2028 is not about simple task automation. It is a direct push toward recursively self-improving systems: AI that can discover new methods of improving AI models, in pursuit of an "intelligence explosion."