When AI systems are trained on historical data, such as past hiring or policing records, they learn and perpetuate existing societal biases. This creates a dangerous illusion of objectivity, where discriminatory outcomes are presented as neutral, data-driven "predictions" by an algorithm.
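To make the mechanism concrete, here is a minimal, hypothetical sketch (synthetic data and invented variable names, using scikit-learn; none of this comes from the source) showing how a model trained on biased historical hiring decisions can reproduce the disparity even when the protected attribute is never an explicit input:

```python
# Hypothetical illustration: a toy classifier trained on synthetic "historical
# hiring" data in which one group was systematically favored. The model
# reproduces that disparity while presenting its output as a neutral score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a protected-group flag and a skill score.
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
skill = rng.normal(0, 1, size=n)

# Historical labels: past decisions rewarded skill but also favored group A,
# so the training data itself encodes the bias.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, size=n)) > 0.8

# Even without using `group` as a feature, a correlated proxy (think of a
# zip code or school name) lets the model learn the same pattern.
proxy = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# Typical output shows a markedly lower predicted hire rate for group 1:
# the "objective" model has simply learned the historical disparity.
```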
Federal and state governments are massive customers of technology. Instead of relying solely on legislation, they can use their procurement power to enforce AI safety and ethical standards. By setting strict purchasing requirements, they can compel companies to build more responsible products.
Despite hyper-partisanship, the core principles of the Biden administration's AI Bill of Rights have been adopted in proposals by red states like Oklahoma and Florida. This suggests a surprising bipartisan consensus is emerging around the need to protect citizens from specific AI harms.
The absence of a comprehensive federal AI law has spurred states like California and Colorado to experiment with unique regulatory approaches. This state-level action, while creating a "patchwork," allows for testing different governance models to see what works best before potential federal adoption.
Tech lobbyists argue that a patchwork of state AI regulations creates an unmanageable compliance burden. However, companies in many other sectors, like insurance and finance, already navigate complex, state-by-state legal frameworks. The argument is often a tactic to delay or avoid regulation altogether.
In the absence of federal legislation, product liability lawsuits are becoming a de facto regulatory mechanism. The legal strategy used against Big Tobacco—arguing companies knowingly sold harmful products—is now being applied to social media companies, creating a precedent for holding AI developers liable.
Dr. Alondra Nelson spearheaded the "Blueprint for an AI Bill of Rights," framing it not as a technical standard but as a modern civil rights document. It draws a parallel to the original Bill of Rights, which checked government power, by aiming to protect individual liberties against powerful new technologies and the companies deploying them.
A strange dynamic exists where the tech leaders building AI are also the loudest voices warning of its potential to destroy humanity. This dual narrative of immense promise and existential threat serves to centralize their power, positioning them as the only ones who can both create and control this technology.
Article VI's Supremacy Clause, which establishes that federal law takes priority over state law, is not a historical relic. It is the constitutional principle most frequently invoked today, particularly in disputes over regulating new technologies like AI, where federal and state interests often clash.
