Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

Overly-specific regulation focused on AI tools (e.g., model size) risks accidentally stifling valuable, unforeseen use cases. A better policy focuses on outcomes. For example, prosecute fraud committed with an LLM, but don't regulate the LLM itself, thereby protecting innovation while punishing misuse.

Related Insights

A key distinction in AI regulation is to focus on making specific harmful applications illegal—like theft or violence—rather than restricting the underlying mathematical models. This approach punishes bad actors without stifling core innovation and ceding technological leadership to other nations.

When addressing AI's 'black box' problem, lawmaker Alex Boris suggests regulators should bypass the philosophical debate over a model's 'intent.' The focus should be on its observable impact. By setting up tests in controlled environments—like telling an AI it will be shut down—you can discover and mitigate dangerous emergent behaviors before release.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.

A16z argues we are in the "Wright Brothers moment" of AI. Regulating foundational models now—which are essentially just math—would stifle fundamental discovery, akin to trying to regulate flight experiments before airplanes existed. The focus should be on application-level harms, not the underlying technology development.

Mark Cuban advocates for a specific regulatory approach to maintain AI leadership. He suggests the government should avoid stifling innovation by over-regulating the creation of AI models. Instead, it should focus intensely on monitoring the outputs to prevent misuse or harmful applications.

The UK's strategy of criminalizing specific harmful AI outcomes, like non-consensual deepfakes, is more effective than the EU AI Act's approach of regulating model size and development processes. Focusing on harmful outcomes is a more direct way to mitigate societal damage.

Undersecretary Rogers warns against "safetyist" regulatory models for AI. She argues that attempting to code models to never produce offensive or edgy content fetters them, reduces their creative and useful capacity, and ultimately makes them less competitive globally, particularly against China.

A16z advocates for a "gap analysis" approach to AI regulation. Instead of assuming a legal vacuum exists, lawmakers should first examine how existing, technology-neutral laws—like consumer protection or civil rights statutes—already apply to AI harms. New legislation should only target clearly identified gaps.

Comparing AI to a nuclear weapon is misleading because AI is a general-purpose technology, not a single-use weapon. A better analogy is the Industrial Revolution. Society didn't give governments control over industrialization; it regulated specific dangerous end-uses like chemical weapons. Similarly, we should ban specific destructive AI applications, not the underlying technology.

There is a temptation to create a flurry of AI-specific laws, but most harms from AI (like deepfakes or voice clones) already fall under existing legal categories. Torts like defamation and crimes like fraud provide strong existing remedies.