© 2026 RiffOn. All rights reserved.

It's Crunch Time: Ajeya Cotra on RSI & AI-Powered AI Safety Work, from the 80,000 Hours Podcast

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Apr 11, 2026

AI insider Ajeya Cotra discusses "crunch time," a brief window to use AI for AI safety work amidst recursive self-improvement and an intelligence explosion.

Skeptics of Rapid AI Progress Default to a 'Bottleneck Objection' Prior

The belief that AI progress will be slow often stems from a strong prior that 'things are just always hard and slow.' This 'bottleneck objection' leads skeptics to assume unforeseen drag factors will always emerge, causing them to dismiss detailed scenarios for rapid acceleration without engaging with the specifics.

Frame an AI 'Pause' as a Massive Redirection of AI Labor, Not a Binary Stop

Ajeya Cotra reframes the concept of an AI pause. Instead of a binary 'stop' (0% of labor on R&D), she suggests thinking of it as a spectrum. The goal should be to redirect the vast majority of AI labor from accelerating capabilities to solving safety, biodefense, and other critical societal challenges.

Frontier AI Labs Are Converging on Using AI Systems to Align Their Own Successors

Ajeya Cotra reports that leading developers like OpenAI, Anthropic, and DeepMind are converging on a strategy where each generation of AI is used to help align, control, and understand the subsequent, more powerful generation. This recursive approach is their primary plan for ensuring AI safety during rapid takeoff.

Hedge Against AI 'Crunch Time' Compute Scarcity by Investing in GPU Manufacturers

During a rapid AI takeoff, the cost of compute could become prohibitively expensive, blocking safety efforts. Ajeya Cotra advises organizations to hedge this risk by investing in companies like Nvidia or even owning physical GPUs, ensuring they can afford the necessary AI 'labor' when it matters most.

Effective Altruism's Unique Niche is Incubating Speculative Causes Before They Become Mainstream

The EA community's distinctive contribution is acting as an incubator for important but unconventional cause areas like AI takeover risk or digital sentience. Its tolerance for rigorous speculation allows it to nurture these fields when they are too 'weird' or unproven for mainstream attention, eventually maturing them for broader adoption.

Your Direct Manager and Micro-Environment Dictate Career Satisfaction More Than Grand Mission

After years in a high-impact role, Ajeya Cotra concluded that day-to-day job satisfaction and effectiveness are shaped more by the micro-environment—like the working relationship with a direct manager—than by alignment with an organization's grand mission. Mundane, local factors have an outsized impact on motivation and burnout.

AI Labs' Safety Plans Will Likely Fail From Insufficient Resource Allocation, Not Technical Flaws

The 'use AI for safety' plan adopted by frontier labs is most likely to fail not because alignment techniques are ineffective, but because competitive pressures will prevent them from redirecting a meaningful fraction of their AI labor away from capabilities research and towards safety work when it matters most.

The 'Use AI for Safety' Strategy Fails if Capabilities Are Ordered Unluckily

The plan to use AI to solve its own safety risks has a critical failure mode: an unlucky ordering of capabilities. If AI becomes a savant at accelerating its own R&D long before it becomes useful for complex tasks like alignment research or policy design, we could be locked into a rapid, uncontrollable takeoff.

Major Philanthropies Should Prepare to Pivot Billions to Buying Compute During AI's 'Crunch Time'

Ajeya Cotra suggests a radical shift for philanthropies like Open Philanthropy. Their best strategic play during the critical AI 'crunch time' may be to deploy billions of dollars not on human salaries, but on buying massive amounts of compute to direct AI labor towards solving safety and defense challenges.

Experts Disagree 10,000-Fold on AI's Economic Impact Due to Conflicting Historical Priors

The vast disagreement on AI's future economic impact—from minor boosts to over 1000% annual growth—stems from conflicting reference points. Skeptics cite the last 150 years of steady 2% growth, while futurists point to the long-arc acceleration of human history since the agricultural revolution.
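To see why these priors produce such wildly divergent forecasts, a quick compounding calculation helps. This is an illustrative sketch using the two growth rates mentioned above (2% vs. 1000% annual growth), not figures from the episode itself:

```python
# Illustrative compounding arithmetic: how the two reference-class priors
# diverge over a single decade of constant annual growth.

def compound(rate_pct: float, years: int) -> float:
    """Total growth multiple after `years` of constant annual growth."""
    return (1 + rate_pct / 100) ** years

skeptic = compound(2, 10)      # the ~2%/yr historical trend
futurist = compound(1000, 10)  # the >1000%/yr explosive-growth scenario

print(f"2%/yr for 10 years:    {skeptic:.2f}x")     # ~1.22x
print(f"1000%/yr for 10 years: {futurist:.2e}x")    # ~2.59e10x
```

Because growth compounds, even a decade of disagreement about the annual rate turns into a disagreement of many orders of magnitude about the size of the economy.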

An AI Intelligence Explosion Hinges on Automating the Full Physical Supply Chain, Not Just Software

A true, self-sustaining intelligence explosion requires more than AI automating its own software R&D. Ajeya Cotra emphasizes it must also automate the entire physical stack—from designing robots to fabricating chips and mining raw materials. This physical feedback loop is a critical, often overlooked bottleneck.

AI Labs Should Report Internal Capability Metrics, Not Just Public Releases, as an Early Warning System

To avoid a surprise intelligence explosion, Ajeya Cotra argues for transparency measures that go beyond model release cards. Labs should report internal metrics on a fixed cadence, such as how much AI is accelerating their own R&D or which internal benchmarks it is passing, since this provides a crucial early warning of dangerous capability jumps.
