Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

The popular idea of a government 'sign-off' before an AI model's release is based on a false premise. Risk isn't a one-time event at launch; it's continuous, existing during model development, internal use, and post-release updates. Effective oversight must reflect this ongoing reality.

Related Insights

Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.

The traditional government model of setting a regulation and waiting years to assess it is obsolete for AI. A new approach is needed: a dynamic board of government, industry, and academic leaders collaborating to make and update rules in real-time.

The emphasis on long-term, unprovable risks like AI superintelligence is a strategic diversion. It shifts regulatory and safety efforts away from addressing tangible, immediate problems like model inaccuracy and security vulnerabilities, effectively resulting in a lack of meaningful oversight today.

When addressing AI's 'black box' problem, lawmaker Alex Boris suggests regulators should bypass the philosophical debate over a model's 'intent.' The focus should be on its observable impact. By setting up tests in controlled environments—like telling an AI it will be shut down—you can discover and mitigate dangerous emergent behaviors before release.

Jones warns that the standard tech development model ('build, break, iterate') is catastrophic when applied to AI. Unlike other technologies, AI's tail risk could involve billions of lives, yet there is zero risk management, a sharp contrast to the discipline required in financial markets.

Treating AI risk management as a final step before launch leads to failure and loss of customer trust. Instead, it must be an integrated, continuous process throughout the entire AI development pipeline, from conception to deployment and iteration, to be effective.

Traditional regulation is ill-equipped for AI's complexity and opacity. The podcast proposes a new model inspired by the Federal Reserve's oversight of banks: embedding technically-expert supervisors full-time inside major AI labs. This would allow for proactive monitoring of internal risk models and decisions, rather than just reacting to disasters after they occur.

Shift the view of AI from a singular product launch to a continuous process encompassing use case selection, training, deployment, and decommissioning. This broader aperture creates multiple intervention points to embed responsibility and mitigate harm throughout the lifecycle.

An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.

Slowing public releases of AI models for government review may not slow overall progress. This creates a scenario where labs advance internally for months, giving government agencies exclusive access while delaying public commercialization and the next cycle of investment.

AI Regulation Based on Pre-Release Vetting is Flawed Because Risk is Continuous | RiffOn