
While 80% of DNA synthesis companies voluntarily screen orders for dangerous pathogen sequences, screening is not mandatory. This leaves a glaring loophole: a malicious actor can simply place an order with the 20% of companies that do not perform this critical safety check.

Related Insights

Models designed to predict and screen out compounds toxic to human cells have a serious dual-use problem. A malicious actor could repurpose the exact same technology to search for or design novel, highly toxic molecules for which no countermeasures exist, a risk the researchers initially overlooked.

China's binding regulations mean companies focus safety efforts on the 31 specific risks defined by the government. This compliance-driven approach can leave them less prepared for emergent risks like CBRN or loss of control, as resources are directed toward meeting existing legal requirements rather than proactive, voluntary measures.

A core flaw in virus hunting is moving pathogens from isolated natural environments to labs in dense population centers. Despite security ratings, labs of every biosafety category have a history of leaks, and the lack of a uniform reporting system means the true failure rate is unknown. This makes labs potentially riskier containment than leaving pathogens where they are in nature.

The danger of AI creating harmful proteins is not in the digital design but in its physical creation. A protein sequence on a computer is harmless. The critical control point is the gene synthesis process. Therefore, biosecurity efforts should focus on providing advanced screening tools to synthesis providers.

Instead of trying to control open-source AI models, which is intractable, the proposed strategy is to control the small, expensive-to-produce functional datasets they train on. This preserves the beneficial open-source ecosystem while preventing the dissemination of dangerous capabilities like viral design.

Current biosecurity screens for threats by matching DNA sequences to known pathogens. However, AI can design novel proteins that perform a harmful function without any sequence similarity to existing threats. This necessitates new security tools that can predict a protein's function, a concept termed "defensive acceleration."
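To make the limitation concrete, here is a minimal, hypothetical sketch of similarity-based screening. Real screening tools use alignment algorithms (e.g. BLAST-style search) against curated pathogen databases; a toy k-mer overlap score stands in for that idea here, and all sequences and thresholds are invented for illustration.

```python
def kmers(seq: str, k: int = 5) -> set[str]:
    """Return the set of length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query: str, reference: str, k: int = 5) -> float:
    """Jaccard overlap of k-mer sets: 1.0 = identical k-mer content, 0.0 = none shared."""
    a, b = kmers(query, k), kmers(reference, k)
    return len(a & b) / len(a | b) if a | b else 0.0

def flags_order(query: str, known_threats: list[str], threshold: float = 0.5) -> bool:
    """Flag an order if it closely resembles any known threat sequence."""
    return any(similarity(query, t) >= threshold for t in known_threats)

# Hypothetical "known threat" and two hypothetical orders.
known = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]
near_copy = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"  # one residue changed: caught
novel = "GSGSLLNNAAGRTTFWEPKDDVVMMQQHHCCYY"    # no shared k-mers: slips through

flags_order(near_copy, known)  # True: near-identical sequence is flagged
flags_order(novel, known)      # False: a dissimilar design evades the screen
```

The last line is the gap the paragraph describes: an AI-designed protein with no sequence similarity to any database entry scores near zero and passes, even if it performs a harmful function. Function-predicting screens would have to score the query on predicted behavior rather than sequence overlap.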

Deep Vision's plan to publish the genomes of deadly viruses would effectively give the "killing power of a nuclear arsenal" to an estimated 30,000 unvetted individuals with synthetic biology skills. In the bio-age, openly publishing certain information can be a greater security threat than physical weapons.

Research that made bird flu transmissible between mammals is not illegal. Since the COVID-19 pandemic, it has been broadly defunded by governments, but private labs face little oversight, creating a significant biosecurity blind spot.

A biosecurity data-level (BDL) framework, modeled after biosafety levels for labs, would keep 99% of biological data open-access. Only the top 1% of data—that which links pathogen sequences to dangerous properties like transmissibility—would face restrictions like requiring use-approval.

Valthos CEO Kathleen, a biodefense expert, warns that AI's primary threat in biology is asymmetry. It drastically reduces the cost and expertise required to engineer a pathogen. The primary concern is no longer just sophisticated state-sponsored programs but small groups of graduate students with lab access, massively expanding the threat landscape.