AI doesn't have an inherent moral stance. It is a tool that amplifies the intentions of its wielder. If used by those who support democracy, it can strengthen it; if used by those who oppose it, it can weaken it. The outcome is determined by the user, not the technology itself.
Public fear of AI often focuses on dystopian, "Terminator"-like scenarios. The more immediate and realistic threat is Orwellian: governments leveraging AI to surveil, censor, and embed subtle political biases into models to control public discourse and undermine freedom.
AI provides a structural advantage to those already in power by automating the machinery of government. Where human bureaucracies resist rapid top-down change, an automated one does not: an executive who can adjust an AI system's parameters can instantly impose their will across every level of government, concentrating power.
A common misconception is that a super-smart entity would inherently be moral. However, intelligence is merely the ability to achieve goals. It is orthogonal to the nature of those goals, meaning a smarter AI could simply become a more effective sociopath.
When a state's power derives from AI rather than human labor, its dependence on its citizens diminishes. This creates a serious political risk: a government that no longer needs its people's work or consent loses the incentive to serve them, opening the door to authoritarian regimes that are effectively immune to popular revolt.
Dr. Li rejects both utopian and purely fatalistic views of AI. Instead, she frames it as a humanist technology—a double-edged sword whose impact is entirely determined by human choices and responsibility. This perspective moves the conversation from technological determinism to one of societal agency and stewardship.
AI represents a fundamental fork in the road for society. It can be a tool for mass empowerment, amplifying individual potential and freedom. Or it can perfect the top-down, standardized, paternalistic control of Frederick Taylor's scientific management, cementing a digital panopticon. The outcome depends on our values, not the technology itself.
Problems like astroturfing (manufacturing the appearance of grassroots support) and disinformation existed long before modern AI. AI acts as a powerful amplifier, making these tactics cheaper and more scalable, but it does not invent them. The solutions are therefore often political and societal, not purely technical.
While making powerful AI open-source creates risks from rogue actors, it is preferable to centralized control by a single entity. Widespread access acts as a deterrent on the logic of mutually assured destruction: when comparable capability is in many hands, no one group can use AI as a tool for absolute power.
Effective AI policies focus on establishing principles for human conduct rather than just creating technical guardrails. The central question isn't what the tool can do, but how humans should responsibly use it to benefit employees, customers, and the community.
Viewing AI as a straightforward technological progression, or as merely a problem of humans adapting to it, is a mistake. It is a "co-evolution": the technology's logic reshapes human systems, while human priorities, rivalries, and malevolence in turn shape how the technology is developed and deployed, creating unforeseen risks and opportunities.