
The most likely future is a "weird" state we can't easily classify as good or bad. Rather than comparing today to a hypothetical endpoint, we should focus on evaluating the desirability of the path, or trajectory, we are on.

Related Insights

The discourse often presents a binary: AI plateaus below human level or undergoes a runaway singularity. A plausible but overlooked alternative is a "superhuman plateau," where AI is vastly superior to humans but still constrained by physical limits, transforming society without becoming omnipotent.

While discourse often focuses on exponential growth, the AI Safety Report treats "progress stalls" as a serious scenario, drawing an analogy to passenger aircraft speeds, which plateaued after 1960. Continued rapid advancement is not guaranteed; technical or resource bottlenecks could halt it.

Viewing AGI development as a race with a winner-takes-all finish line is a risky assumption. It's more likely an ongoing competition where systems become progressively more advanced and diffused across applications, making the idea of a single "winner" misleading.

Public and expert opinions on AI are split between two extremes: it will either save humanity or destroy it. A moderate middle ground is notably absent, a departure from how previous technological shifts, such as the internet, were discussed.

Utopian visions often lead to dystopia because we can't accurately define an ideal future. A better goal is "Viatopia"—a societal state that isn't the final destination but a stable waypoint from which we can safely navigate to a near-best future. It prioritizes a good decision-making process over a specific outcome.

Dr. Li rejects both utopian and purely fatalistic views of AI. Instead, she frames it as a humanist technology—a double-edged sword whose impact is entirely determined by human choices and responsibility. This perspective moves the conversation from technological determinism to one of societal agency and stewardship.

The narrative around advanced AI is often simplified into a dramatic binary choice between utopia and dystopia. This framing, while compelling, is a rhetorical strategy to bypass complex discussions about regulation, societal integration, and the spectrum of potential outcomes between these extremes.

AI represents a fundamental fork in the road for society. It can be a tool for mass empowerment, amplifying individual potential and freedom. Or it can be used to perfect the top-down, standardized, paternalistic control model of Frederick Taylor, entrenching a panopticon. The outcome depends on our values, not on the technology itself.

Due to extreme uncertainty and a lack of real-time data, discussions about AI's future, even among top executives, are fundamentally exercises in storytelling. The void of concrete knowledge is being filled by narratives of utopia or dystopia, making the discourse more literary than analytical.

Viewing AI as just a technological progression or a human assimilation problem is a mistake. It is a "co-evolution." The technology's logic shapes human systems, while human priorities, rivalries, and malevolence in turn shape how the technology is developed and deployed, creating unforeseen risks and opportunities.

Evaluate AI's Future by Its Trajectory, Not by a Static Utopian or Dystopian Endpoint | RiffOn