
Jack Clark of Anthropic estimates a 60% probability of achieving end-to-end automated AI R&D by 2028. This recursive self-improvement (RSI), in which AI designs better AI, would mark a critical threshold, potentially triggering an intelligence explosion and a future that is nearly impossible to forecast.

Related Insights

The AI development cycle of experimentation and bottleneck-solving is already a form of recursive self-improvement. Kyle Corbitt argues this loop is currently constrained by human intelligence. Once AIs become better at directing this process, progress will accelerate rapidly.

Coined by the statistician I. J. Good in 1965, the "intelligence explosion" describes a runaway feedback loop. An AI capable of conducting AI research could use its intelligence to improve itself; the enhanced system would be even better at AI research, driving exponential, potentially uncontrollable growth in capability. Such a "fast takeoff" could leave humanity far behind in a very short period.
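The runaway loop can be sketched as a toy recurrence in which the improvement rate scales with current capability. Everything here (the capability unit, the gain, the number of generations) is an illustrative assumption, not an estimate from the episode:

```python
# Toy model of an intelligence-explosion feedback loop (illustration, not a forecast).
# Each "generation", capability grows in proportion to current capability --
# the defining feature of the runaway loop described above.

def run_loop(capability=1.0, gain=0.5, generations=10):
    """Return capability after each generation when the improvement
    rate scales with capability itself."""
    history = [capability]
    for _ in range(generations):
        capability += gain * capability  # better AI -> better AI research -> better AI
        history.append(capability)
    return history

history = run_loop()
# Growth is geometric: each step multiplies capability by (1 + gain),
# so after n generations capability is (1 + gain) ** n times the start.
```

The key qualitative point the model captures is that there is no plateau built in: as long as the gain per generation stays positive, the curve only steepens.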

The concept that AIs can build better AIs, creating an accelerating feedback loop, is no longer theoretical. Leaders from Anthropic, OpenAI, and Google DeepMind have publicly confirmed they are actively using current AI models to develop the next generation, making RSI a practical engineering pursuit.

Silicon Valley insiders, including former Google CEO Eric Schmidt, believe AI capable of improving itself without human instruction is just 2-4 years away. This shift in focus from the abstract concept of superintelligence to a specific research goal signals an imminent acceleration in AI capabilities and associated risks.

Unlike any prior tool, AI can be directly applied to improve its own creation. It designs more efficient computer chips, writes better training code, and automates research, creating a recursive self-improvement loop that rapidly outpaces human oversight and control.

AI's ability to perform software engineering tasks that would take a human hours is doubling every 4-6 months. This rapid, exponential progress suggests a near-term future where AI can automate its own research and development. This self-improvement loop is the critical inflection point that could trigger a massive, unpredictable leap in AI capabilities.
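The cited doubling rate compounds quickly, which is the whole argument. A minimal extrapolation, assuming a hypothetical one-hour starting task horizon and the midpoint five-month doubling time (both numbers are placeholders, not figures from the episode):

```python
# Extrapolating the "task horizon doubles every 4-6 months" claim (illustrative only).
# Assumes a hypothetical 1-hour starting horizon and a 5-month doubling time.

def task_horizon_hours(months_elapsed, start_hours=1.0, doubling_months=5.0):
    """Task length (in human work-hours) completable by AI, under pure doubling."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

# After two years (24 months) at a 5-month doubling time:
print(round(task_horizon_hours(24), 1))  # → 27.9
```

Under these assumptions, a one-hour horizon becomes roughly a month-long research project's worth of work within four years; that compounding is why the claim reads as a near-term inflection rather than gradual progress.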

Companies like OpenAI and Anthropic are not just building better models; their strategic goal is an "automated AI researcher." The ability for an AI to accelerate its own development is viewed as the key to getting so far ahead that no competitor can catch up.

Sam Altman's goal of an "automated AI research intern" by 2026 and a full "researcher" by 2028 is not about simple task automation. It is a direct push toward creating recursively self-improving systems—AI that can discover new methods to improve AI models, aiming for an "intelligence explosion."

OpenAI CEO Sam Altman has publicly stated a timeline for AI to conduct AI research autonomously, aiming for an intern-level researcher by 2026 and a fully automated one by 2028. This could massively accelerate AI progress and lead to an intelligence explosion.

The true takeoff point for AGI, the "intelligence explosion," occurs when AI systems can conduct AI research faster and more effectively than humans. This creates a recursive self-improvement cycle operating at digital timescales.