A key distinction in AI regulation is to focus on making specific harmful applications illegal—like theft or violence—rather than restricting the underlying mathematical models. This approach punishes bad actors without stifling core innovation and ceding technological leadership to other nations.

Related Insights

Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.

Universal safety filters for "bad content" are insufficient. True AI safety requires defining permissible and non-permissible behaviors specific to the application's unique context, such as a banking use case versus a customer service setting. This moves beyond generic harm categories to business-specific rules.

When addressing AI's 'black box' problem, lawmaker Alex Boris suggests regulators should bypass the philosophical debate over a model's 'intent.' The focus should be on its observable impact. By setting up tests in controlled environments—like telling an AI it will be shut down—you can discover and mitigate dangerous emergent behaviors before release.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.

Mark Cuban advocates for a specific regulatory approach to maintain AI leadership. He suggests the government should avoid stifling innovation by over-regulating the creation of AI models. Instead, it should focus intensely on monitoring the outputs to prevent misuse or harmful applications.

The UK's strategy of criminalizing specific harmful AI outcomes, like non-consensual deepfakes, is more effective than the EU AI Act's approach of regulating model size and development processes. Focusing on harmful outcomes is a more direct way to mitigate societal damage.

Contrary to its controversial reputation, New York's RAISE Act is narrowly focused on catastrophic risks. The bill's threshold for action is extraordinarily high: an AI must contribute to 100 deaths, $1 billion in damage, or a fully automated crime, far from regulating everyday AI applications.

Undersecretary Rogers warns against "safetyist" regulatory models for AI. She argues that attempting to code models to never produce offensive or edgy content fetters them, reduces their creative and useful capacity, and ultimately makes them less competitive globally, particularly against China.

An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.

There is a temptation to create a flurry of AI-specific laws, but most harms from AI (like deepfakes or voice clones) already fall under existing legal categories. Torts like defamation and crimes like fraud provide strong existing remedies.

AI Policy Should Regulate Malicious Applications, Not the Underlying Mathematical Models | RiffOn