The tech industry often builds technologies first imagined in dystopian science fiction, inadvertently realizing their negative consequences. To build a better future, we need more utopian fiction that provides positive, ambitious blueprints for innovation, guiding progress toward desirable outcomes.
People increasingly consume real-life events as passive entertainment. AI can make mass-market interactive media economically viable, letting user choices produce different outcomes. This could help teach that the future is contingent on our collective decisions, not a pre-written script to be watched.
Instead of defaulting to skepticism and looking for reasons why something won't work, the most productive starting point is to imagine how big and impactful a new idea could become. After exploring the optimistic case, you can then systematically address and mitigate the risks.
The discourse around AI risk has matured beyond sci-fi scenarios like Terminator. The focus is now on immediate, real-world problems such as AI-induced psychosis, the impact of AI romantic companions on birth rates, and the spread of misinformation, which demand a different approach from builders and policymakers.
Aligning AI with a specific ethical framework is fraught with disagreement. A better target is "human flourishing," as there is broader consensus on its fundamental components like health, family, and education, providing a more robust and universal goal for AGI.
The scarcest resource in AI is a positive vision for the future. Non-technical individuals can have an outsized impact by writing aspirational fiction. Stories like the movie 'Her' inspire developers and can steer the trajectory of the entire field, making imagination a critical skill.
The most profound innovations in history, like vaccines, PCs, and air travel, distributed their value broadly across society rather than letting a few corporations capture it. AI could follow this pattern, benefiting the public more than a handful of tech giants, especially as geopolitical pressures force commoditization.
Dr. Li rejects both utopian and fatalistic views of AI. Instead, she frames it as a humanist technology: a double-edged sword whose impact is determined entirely by human choices and responsibility. This perspective shifts the conversation from technological determinism to societal agency and stewardship.
The overwhelming majority of AI narratives are dystopian, creating a vacuum of positive visions for the future. Crafting concrete, positive fiction is a uniquely powerful way to influence societal goals and guide AI development, as demonstrated by pioneers who used fan fiction to inspire researchers.
AI will create negative consequences, just as the internet spawned the dark web. However, its potential to solve major problems like disease and energy scarcity makes its development a net positive for society, justifying the risks that must be managed along the way.
An analogy to the younger children of European nobility, who enjoyed great wealth without inheriting titles, frames a realistic, cautiously optimistic post-AGI world: humans may lose their central role in driving progress but will enjoy immense wealth and high living standards, finding meaning outside of economic production.