Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

A16z advocates for a "gap analysis" approach to AI regulation. Instead of assuming a legal vacuum exists, lawmakers should first examine how existing, technology-neutral laws—like consumer protection or civil rights statutes—already apply to AI harms. New legislation should only target clearly identified gaps.

Related Insights

A key distinction in AI regulation is to focus on making specific harmful applications illegal—like theft or violence—rather than restricting the underlying mathematical models. This approach punishes bad actors without stifling core innovation and ceding technological leadership to other nations.

Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.

India is taking a measured, "no rush" approach to AI governance. The strategy is to first leverage and adapt existing legal frameworks—like the IT Act for deepfakes and data protection laws for privacy—rather than creating new, potentially innovation-stifling AI-specific legislation.

When addressing AI's 'black box' problem, lawmaker Alex Boris suggests regulators should bypass the philosophical debate over a model's 'intent.' The focus should be on its observable impact. By setting up tests in controlled environments—like telling an AI it will be shut down—you can discover and mitigate dangerous emergent behaviors before release.

A16z proposes a federalist approach to AI governance. The federal government, under the Commerce Clause, should regulate AI *development* to create a single national market. States should focus on regulating the harmful *use* of AI, which aligns with their traditional role in areas like criminal law.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.

Policymakers confront an 'evidence dilemma': act early on potential AI harms with incomplete data, risking ineffective policy, or wait for conclusive evidence, leaving society vulnerable. This tension highlights the difficulty of governing rapidly advancing technology where impacts lag behind capabilities.

A16z argues we are in the "Wright Brothers moment" of AI. Regulating foundational models now—which are essentially just math—would stifle fundamental discovery, akin to trying to regulate flight experiments before airplanes existed. The focus should be on application-level harms, not the underlying technology development.

The UK's strategy of criminalizing specific harmful AI outcomes, like non-consensual deepfakes, is more effective than the EU AI Act's approach of regulating model size and development processes. Focusing on harmful outcomes is a more direct way to mitigate societal damage.

There is a temptation to create a flurry of AI-specific laws, but most harms from AI (like deepfakes or voice clones) already fall under existing legal categories. Torts like defamation and crimes like fraud provide strong existing remedies.