We scan new podcasts and send you the top 5 insights daily.
The next major biological threat may not be a single event like COVID-19, but rather 'waves and waves of new pandemics.' The knowledge and equipment needed to create novel pathogens are becoming cheaper and more accessible, which could let individuals tinker with viruses in their basements and make lab leaks a frequent occurrence.
A core flaw in virus hunting is that it moves pathogens from isolated natural environments to labs in dense population centers. Despite biosafety ratings, labs of every category have a history of leaks, and without a uniform reporting system the true failure rate is unknown, making labs a riskier container than nature.
A malevolent actor using a published list of deadly viruses could release multiple pathogens at once from many locations. This would overwhelm medical systems and, most critically, trigger societal collapse when essential frontline workers refuse to risk their lives and families for their jobs, cutting off food, power, and law enforcement.
Instead of trying to control open-source AI models, which is intractable, the proposed strategy is to control the small, expensive-to-produce functional datasets they train on. This preserves the beneficial open-source ecosystem while preventing the dissemination of dangerous capabilities like viral design.
Current biosecurity screens for threats by matching DNA sequences to known pathogens. However, AI can design novel proteins that perform a harmful function without any sequence similarity to existing threats. This necessitates new security tools that can predict a protein's function, a concept termed "defensive acceleration."
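The blind spot described above can be illustrated with a toy sketch of similarity-based screening. The sequences below are made up for illustration, and real screens use far more sophisticated alignment tools; the point is only that matching shared subsequences (k-mers) catches close variants of known threats but, by construction, cannot flag a dissimilar sequence regardless of what function it encodes.

```python
# Toy sketch of sequence-similarity screening. Sequences are
# hypothetical stand-ins, not real pathogen data; real screens use
# alignment tools (e.g. BLAST-style search), but the matching
# principle and its blind spot are the same.

def kmers(seq, k=5):
    """Return the set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flags_as_threat(query, known_threats, k=5, threshold=0.5):
    """Flag the query if it shares enough k-mers with any known threat."""
    q = kmers(query, k)
    for threat in known_threats:
        overlap = len(q & kmers(threat, k)) / max(len(q), 1)
        if overlap >= threshold:
            return True
    return False

# Stand-in "known pathogen" database with a single entry.
known = ["ATGCGTACGTTAGCATGCGT"]

# A near-identical variant of a known sequence is caught:
print(flags_as_threat("ATGCGTACGTTAGCATGCGA", known))  # True
# A dissimilar sequence passes, even if it (hypothetically)
# encoded a protein with the same harmful function:
print(flags_as_threat("GGTTCCAAGGTTCCAAGGTT", known))  # False
```

A function-prediction screen of the kind the insight calls for would have to score what a sequence *does* rather than what it *looks like*, which is why it requires new tooling rather than a bigger threat database.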
Deep Vision's plan to publish the genomes of deadly viruses would effectively give the "killing power of a nuclear arsenal" to an estimated 30,000 unvetted individuals with synthetic biology skills. In the bio-age, openly publishing certain information can be a greater security threat than physical weapons.
Research that made bird flu transmissible between mammals is not illegal. Since the COVID-19 pandemic, governments have broadly defunded it, but private labs face little oversight, leaving a significant biosecurity blind spot.
In a significant shift, leading AI developers began publicly reporting that their models crossed thresholds where they could provide 'uplift' to novice users, enabling them to automate cyberattacks or create biological weapons. This marks a new era of acknowledged, widespread dual-use risk from general-purpose AI.
AI is reducing the cognitive overhead required to navigate biological knowledge, blurring the line between professional labs and motivated individuals. This actualizes Freeman Dyson's 2007 prediction that biotechnology, like computing, would move from large institutions to individual creators and become a decentralized, creative craft. It also counters the dominant narrative of AI as a purely centralizing force.
Valthos CEO Kathleen, a biodefense expert, warns that AI's primary threat in biology is asymmetry: it drastically reduces the cost and expertise required to engineer a pathogen. The concern is no longer just sophisticated state-sponsored programs but small groups of graduate students with lab access, massively expanding the threat landscape.