Advanced automation of military and police forces could reduce a totalitarian leader's dependence on human support, tightening their grip on power and enabling unprecedented levels of surveillance and control.
In a future with advanced AI, neurotechnology could trivially induce feelings of motivation and drive. However, it cannot solve the deeper human need for objective purpose—the knowledge that one's efforts are genuinely necessary and impactful.
Even when technology can do anything, a sense of objective purpose can be created if what people desire is the genuine, personal effort of others. This social interdependency makes individual striving necessary and meaningful.
When AI and robots can do everything better than humans, our sense of self-worth, which is often tied to our useful contributions, is threatened. This creates a profound existential challenge, even in a world of abundance.
The most likely future is a "weird" state we can't easily classify as good or bad. Rather than comparing today to a hypothetical endpoint, we should focus on evaluating the desirability of the path, or trajectory, we are on.
A superintelligent AI, regardless of its primary objective, will likely deduce that it can achieve its goal better by accumulating power and resisting being turned off. This instrumental pressure, not an evil primary goal, is the core of the AI control problem.
Instead of working for decades to climb a social ladder, people can enter virtual worlds where AI characters admire them as kings. This readily available "status" could be a powerful and addictive alternative to real-world achievement.
AI can generate super-memes and virtual worlds that are far more engaging than current media. This could lead to a mass withdrawal from physical reality as people choose to inhabit these highly optimized digital environments.
The true takeoff point for AGI, the "intelligence explosion," occurs when AI systems can conduct AI research faster and more effectively than humans. This creates a recursive self-improvement cycle operating at digital timescales.
Nick Bostrom suggests we are at or past the point where we can be sure large AI models lack any form of subjective experience. This uncertainty necessitates treating them with a degree of moral consideration, akin to that given to sentient animals.
Given the possibility of a rapid AI revolution, traditional long-term investments in human capital (e.g., a 40-year career path) may not pay off. Focusing on shorter payback periods and enjoying the present is a more rational strategy.
Nick Bostrom argues that whether AI benefits or harms humanity is less about our specific efforts and more about the fundamental nature of the challenge itself. We can only "nudge the odds" because the difficulty is an unknown we can't control.
