Infrastructure designed to be unstoppable, like the Internet Computer, presents a fundamental dilemma: it could enable rogue AIs, but it also offers a crucial check against concentrated power from governments or large corporations.
A key, informal safety layer against AI doom is the institutional self-preservation of the developers themselves. It's argued that labs like OpenAI or Google would not knowingly release a model they believed posed a genuine threat of overthrowing the government, opting instead to halt deployment and alert authorities.
While mitigating catastrophic AI risks is critical, the argument for safety can be used to justify placing powerful AI exclusively in the hands of a few actors. This centralization, intended to prevent misuse, simultaneously creates the monopolistic conditions for the Intelligence Curse to take hold.
A ban on superintelligence is self-defeating because enforcement would require a sanctioned, global government body to build the very technology it prohibits in order to "prove it's safe." This paradoxically creates a state-controlled monopoly on the most powerful technology ever conceived, posing a greater risk than a competitive landscape.
Major tech companies view the AI race as a life-or-death struggle. This "existential crisis" mindset explains their willingness to spend astronomical sums on infrastructure, prioritizing survival over short-term profitability. Their spending is a defensive moat-building exercise, not just a rational pursuit of new revenue.
Pausing or regulating AI development domestically is futile. Because AI offers a winner-take-all advantage, competing nations like China will inevitably claim to slow down while continuing development in secret. Unilateral restraint is therefore a form of self-sabotage.
As AI capabilities accelerate toward an "oracle that trends to a god," the consequences of its actions become increasingly serious. A blockchain-based trust layer can provide verifiable, immutable records of AI interactions, establishing guardrails and clear attribution of fault when things go wrong.
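To make the idea concrete, here is a minimal sketch of such a trust layer as a hash-chained, append-only log of AI interactions. It is an illustration under assumptions, not a real blockchain: the `TrustLog` and `InteractionRecord` names are hypothetical, and a deployed system would distribute the ledger and anchor these hashes on an actual chain rather than keep them in a single process.

```python
# Hypothetical sketch of a tamper-evident "trust layer" for AI interactions,
# modeled as a simple hash chain rather than a full blockchain.
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class InteractionRecord:
    """One AI interaction: the prompt, the model's response, and when it happened."""
    prompt: str
    response: str
    model_id: str
    timestamp: float = field(default_factory=time.time)
    prev_hash: str = ""    # hash of the previous record in the chain
    record_hash: str = ""  # hash of this record, filled in when appended


class TrustLog:
    """Append-only log where each record commits to the one before it."""

    def __init__(self) -> None:
        self.records: list[InteractionRecord] = []

    def append(self, prompt: str, response: str, model_id: str) -> InteractionRecord:
        prev_hash = self.records[-1].record_hash if self.records else "genesis"
        record = InteractionRecord(prompt, response, model_id, prev_hash=prev_hash)
        record.record_hash = self._hash(record)
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past record breaks the chain."""
        prev_hash = "genesis"
        for record in self.records:
            if record.prev_hash != prev_hash or record.record_hash != self._hash(record):
                return False
            prev_hash = record.record_hash
        return True

    @staticmethod
    def _hash(record: InteractionRecord) -> str:
        # Hash everything except record_hash itself, so tampering is detectable.
        payload = json.dumps(
            [record.prompt, record.response, record.model_id,
             record.timestamp, record.prev_hash]
        )
        return hashlib.sha256(payload.encode()).hexdigest()


if __name__ == "__main__":
    log = TrustLog()
    # Hypothetical interaction being recorded.
    log.append("Approve this trade?", "Yes, within risk limits.", "model-x")
    print(log.verify())              # True: the chain is intact
    log.records[0].response = "No."  # tamper with history
    print(log.verify())              # False: tampering is detected
```

The point of the sketch is only that an immutable record makes after-the-fact disputes about what an AI did, and who relied on it, resolvable by verification rather than by trust.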
The fundamental challenge of creating safe AGI is not about specific failure modes but about grappling with the immense power such a system will wield. The difficulty of truly imagining and "feeling" that future power is a major obstacle for researchers and the public alike, hindering proactive safety measures. The core problem is simply "the power."
The fear of killer AI is misplaced. The more pressing danger is that a few large companies will use regulation to create a cartel, stifling innovation and competition—a historical pattern seen in major US industries like defense and banking.
While making powerful AI open-source creates risks from rogue actors, it is preferable to centralized control by a single entity. Widespread access acts as a deterrent based on mutually assured destruction, preventing any one group from using AI as a tool for absolute power.
The AI safety community fears losing control of AI. However, achieving perfect control of a superintelligence is equally dangerous. It grants godlike power to flawed, unwise humans. A perfectly obedient super-tool serving a fallible master is just as catastrophic as a rogue agent.