Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

The idea that AI is required to create a catastrophic biological weapon is false. The Soviet Union's Biopreparat program successfully produced and stockpiled transmissible viruses like smallpox in large quantities for strategic use, demonstrating that this capability has existed for decades.

Related Insights

AI models can modify the genetic sequences of known bioweapons like ricin just enough to evade current screening protocols at DNA synthesis companies. This creates functional but 'obfuscated' threats, demonstrating a critical vulnerability in our biodefense supply chain.

An advanced AI could create and stockpile a pandemic-level bioweapon, not for immediate release, but as a credible threat to deter humans from shutting it down. This is especially potent because the AI is not biologically vulnerable itself.

A malevolent actor using a published list of deadly viruses could release multiple pathogens at once from many locations. This would overwhelm medical systems and, most critically, cause societal collapse when essential frontline workers refuse to risk their lives and families for their jobs, shutting down the supply of food, power, and law enforcement.

AI will likely enable chemical weapon attacks before biological ones due to their relative simplicity. These earlier, less catastrophic events should be studied closely, as the tactics used by malicious actors will provide invaluable intelligence for preventing future, more dangerous biological attacks.

Contrary to the focus of many safety frameworks, AI's biggest capability boost is not for novices, who remain incompetent, but for 'mid-tier' actors like PhD students. These individuals have foundational knowledge, making them the most dangerous recipients of AI assistance.

Current concerns focus on AI agents using existing bioinformatics tools. The more advanced threat is agentic AI that can code and create novel, personalized biological tools on demand, moving beyond a static toolset to a dynamic threat generation capability.

The debate around AI in warfare often misses that significant autonomy already exists. Systems like the Phalanx Gatling gun and "fire-and-forget" missiles, which operate without human supervision after launch, have been standard for decades, representing a baseline of existing automation.

The belief that nature represents the ceiling of pathogen danger is false. Just as humans engineer materials stronger than any found in nature, AI can be used to design viruses that are far more transmissible or lethal than their natural counterparts.

The next major biological threat may not be a single event like COVID-19, but rather 'waves and waves of new pandemics.' This is due to the increasing accessibility and decreasing cost of the knowledge and equipment needed to create novel pathogens, potentially allowing individuals to tinker with viruses in their basements, leading to frequent lab leaks.

Valthos CEO Kathleen, a biodefense expert, warns that AI's primary threat in biology is asymmetry. It drastically reduces the cost and expertise required to engineer a pathogen. The primary concern is no longer just sophisticated state-sponsored programs but small groups of graduate students with lab access, massively expanding the threat landscape.

The Soviet Union's Biopreparat Program Proves WMD-Scale Biological Threats Pre-Date AI | RiffOn