Contrary to its controversial reputation, New York's RAISE Act is narrowly focused on catastrophic risks. The bill's threshold for action is extraordinarily high: an AI must contribute to 100 deaths, $1 billion in damage, or a fully automated crime, far from regulating everyday AI applications.

Related Insights

When lobbying against New York's RAISE Act for AI safety, the industry's own estimate of the compliance burden was surprisingly low. They calculated that a tech giant like Google or Meta would only need to hire one additional full-time employee, undermining the argument that such regulation would be prohibitively expensive.

The emphasis on long-term, unprovable risks like AI superintelligence is a strategic diversion. It shifts regulatory and safety efforts away from addressing tangible, immediate problems like model inaccuracy and security vulnerabilities, effectively resulting in a lack of meaningful oversight today.

When addressing AI's 'black box' problem, lawmaker Alex Boris suggests regulators should bypass the philosophical debate over a model's 'intent.' The focus should be on its observable impact. By setting up tests in controlled environments—like telling an AI it will be shut down—you can discover and mitigate dangerous emergent behaviors before release.

Instead of trying to legally define and ban 'superintelligence,' a more practical approach is to prohibit specific, catastrophic outcomes like overthrowing the government. This shifts the burden of proof to AI developers, forcing them to demonstrate their systems cannot cause these predefined harms, sidestepping definitional debates.

The bill regulates not just models trained with massive compute, but also smaller models trained on the output of larger ones ('knowledge distillation'). This is a key technique Chinese firms use to bypass US export controls on advanced chips, bringing them under the regulatory umbrella.

Drawing from the nuclear energy insurance model, the private market cannot effectively insure against massive AI tail risks. A better model involves the government capping liability (e.g., above $15B), creating a backstop that allows a private insurance market to flourish and provide crucial governance for more common risks.

Other scientific fields operate under a "precautionary principle," avoiding experiments with even a small chance of catastrophic outcomes (e.g., creating dangerous new lifeforms). The AI industry, however, proceeds with what Bengio calls "crazy risks," ignoring this fundamental safety doctrine.

An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.

Assemblyman Alex Boris argues against copying California's AI safety bill (SB53). Unlike state-specific data privacy laws, such a bill wouldn't grant new rights to New Yorkers, as any company large enough to be affected in New York is already subject to the California law, making the effort redundant.

An anonymous CEO of a leading AI company told Stuart Russell that a massive disaster is the *best* possible outcome. They believe it is the only event shocking enough to force governments to finally implement meaningful safety regulations, which they currently refuse to do despite private warnings.