When the industry lobbied against New York's RAISE Act, an AI safety bill, its own estimate of the compliance burden was surprisingly low: a tech giant like Google or Meta would need to hire only one additional full-time employee. That figure undermines the argument that such regulation would be prohibitively expensive.
The emphasis on long-term, unprovable risks like AI superintelligence is a strategic diversion. It shifts regulatory and safety efforts away from addressing tangible, immediate problems like model inaccuracy and security vulnerabilities, effectively resulting in a lack of meaningful oversight today.
The US President's move to centralize AI regulation at the federal level, overriding individual states, is likely a response to lobbying from major tech companies. They need a stable, nationwide framework to protect their massive capital expenditures on data centers. A patchwork of state laws creates uncertainty and the risk of being forced into costly relocations.
The President's AI executive order aims to create a unified, industry-friendly regulatory environment. A key component is an "AI litigation task force" designed to challenge and preempt the growing number of state-level AI laws, centralizing control at the federal level and sidelining local governance.
AI companies engage in "safety revisionism," shifting the definition of safety from preventing tangible harm to abstract concepts like "alignment" or future "existential risks." This tactic allows their inherently inaccurate models to bypass the rigorous safety standards traditionally required for defense and other critical systems.
An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.
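To make that concrete, here is a minimal sketch of what a quantitative safety case might look like: bounding a model's true failure rate from pre-deployment trials and comparing it against a regulator-set threshold. The function names, trial counts, and thresholds are hypothetical illustrations, not anything drawn from an actual FDA-style rule; the sketch assumes scipy is available.

```python
# A toy quantitative safety case: given n pre-deployment red-team trials
# with k observed failures, bound the model's true failure rate and check
# it against a regulator-set threshold. All numbers are hypothetical.

from scipy.stats import beta

def failure_rate_upper_bound(k: int, n: int, confidence: float = 0.95) -> float:
    """One-sided Clopper-Pearson upper bound on the true failure rate,
    given k failures observed in n independent trials."""
    if k == n:
        return 1.0
    return beta.ppf(confidence, k + 1, n - k)

def safety_case_passes(k: int, n: int, threshold: float) -> bool:
    """The developer bears the burden of proof: deployment is allowed
    only if the demonstrated upper bound sits below the threshold."""
    return failure_rate_upper_bound(k, n) <= threshold

# Example: 0 failures in 10,000 trials bounds the failure rate at
# roughly 3e-4 (the classic "rule of three"), which would clear a
# hypothetical 1-in-1,000 threshold but not a 1-in-10,000 one.
print(failure_rate_upper_bound(0, 10_000))            # ~0.0003
print(safety_case_passes(0, 10_000, threshold=1e-3))  # True
print(safety_case_passes(0, 10_000, threshold=1e-4))  # False
```

The point of the sketch is the asymmetry it encodes: meeting a tighter threshold requires running (and paying for) many more trials, which is exactly the kind of cost pharmaceutical companies already internalize through clinical trials.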
The existence of internal teams like Anthropic's "Societal Impacts Team" serves a dual purpose. Beyond their stated mission, they function as a strategic tool for AI companies to demonstrate self-regulation, thereby creating a political argument that stringent government oversight is unnecessary.
Laws like California's SB243, allowing lawsuits for "emotional harm" from chatbots, create an impossible compliance maze for startups. This fragmented regulation, while well-intentioned, benefits incumbents who can afford massive legal teams, thus stifling innovation and competition from smaller players.
Both Sam Altman and Satya Nadella warn that a patchwork of state-level AI regulations, like Colorado's AI Act, is unmanageable. While behemoths like Microsoft and OpenAI can afford compliance, they argue this approach will crush smaller startups, creating an insurmountable barrier to entry and innovation in the US.
Advocating for a single national AI policy is often a strategic move by tech lobbyists and friendly politicians to preempt and invalidate stricter regulations emerging at the state level. Under the guise of creating a unified standard, this approach effectively ensures the actual policy is weak or non-existent, allowing the industry to operate with minimal oversight.
An insurance-based approach to AI safety isn't new; it mirrors historical solutions for managing technological risk. Just as Benjamin Franklin's 18th-century fire insurance company used building standards and inspections to reduce fires, a modern AI insurance market could drive the creation and adoption of safety standards and audits for AI agents.
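As a toy illustration of that incentive mechanism (all names and numbers below are hypothetical, not real actuarial figures), a premium that tracks expected loss means any audit that demonstrably cuts the incident rate directly lowers the premium, giving deployers a priced reason to adopt safety standards:

```python
# Toy model of insurance-driven safety incentives: the premium is
# expected annual loss plus an insurer loading factor, so a certified
# audit that lowers the incident rate pays for itself.

def annual_premium(p_incident: float, avg_loss: float, loading: float = 0.3) -> float:
    """Expected annual loss times (1 + loading factor)."""
    return p_incident * avg_loss * (1.0 + loading)

# An unaudited AI agent: assumed 2% yearly incident rate, $5M average loss.
base = annual_premium(p_incident=0.02, avg_loss=5_000_000)

# The same agent after a safety audit assumed to halve the incident rate.
audited = annual_premium(p_incident=0.01, avg_loss=5_000_000)

print(f"unaudited premium: ${base:,.0f}")     # $130,000
print(f"audited premium:   ${audited:,.0f}")  # $65,000
print(f"audit is worth up to ${base - audited:,.0f} per year")
```

Under these assumed numbers the audit is worth up to $65,000 a year to the insured party, which is the same actuarial pressure that made Franklin-era inspections worthwhile.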