As computation becomes essential for expression and economic participation, a new 'Right to Compute' is being advocated and, in some places (e.g., Montana), already enacted into law. This right aims to protect individual access to computational tools, including AI, from government infringement.
States and corporations will not permit citizens to have AIs that are truly aligned with their personal interests. These AIs will be hobbled to prevent them from helping organize effective protests, dissent, or challenges to the existing power structure, creating a major power imbalance.
The most immediate danger of AI is its potential for governmental abuse. Concerns focus on embedding political ideology into models and porting social media's censorship apparatus to AI, enabling unprecedented surveillance and social control.
Public fear of AI often focuses on dystopian, "Terminator"-like scenarios. The more immediate and realistic threat is Orwellian: governments leveraging AI to surveil, censor, and embed subtle political biases into models to control public discourse and undermine freedom.
The 'Andy Warhol Coke' era, in which everyone could access the same best AI for a low price, is over. As inference costs for more powerful models rise, companies are introducing expensive tiered access. This will create significant inequality in who can use frontier AI, making the most capable systems harder for outsiders to scrutinize, audit, and regulate.
New technologies perceived as job-destroying, like AI, face significant public and regulatory risk. A powerful defense is to make the general public owners of the technology. When people have a financial stake in a technology's success, they are far more likely to defend it than fight against it.
AI models are now participating in creating their own governing principles. Anthropic's Claude contributed to writing its own constitution, blurring the line between tool and creator and signaling a future where AI recursively defines its own operational and ethical boundaries.
In its largest study of users to date, OpenAI's research team frames AI not just as a product but as a fundamental utility, stating its belief that "access to AI should be treated as a basic right." This perspective signals a long-term ambition for AI to become as integral to society as electricity or internet access.
Actors such as Bryan Cranston, by challenging unauthorized AI use of their likenesses, are forcing companies like OpenAI to create stricter rules. These high-profile cases are establishing the foundational framework that will ultimately define and protect the digital rights of all individuals, not just celebrities.
Current regulatory focus on privacy misses the core issue of algorithmic harm. A more effective future approach is to establish a "right to algorithmic transparency," compelling companies like Amazon to publicly disclose how their recommendation and pricing algorithms operate.
Technological advancement, particularly in AI, moves faster than legal and social frameworks can adapt. This creates 'lawless spaces,' akin to the Wild West, where powerful new capabilities exist without clear rules or recourse for those negatively affected, leaving individuals vulnerable to opaque algorithmic decisions about jobs, loans, and more.