The suspicious death of an MIT fusion researcher echoes historical patterns, such as the alleged suppression of Nikola Tesla's work, in which breakthrough technologies that threaten established industries (e.g., energy) face violent opposition from powerful incumbents like 'Big Oil'.

Related Insights

It's futile to debate *whether* transformative technologies like AI and robotics should be developed. If a technology offers a decisive advantage, it *will* be built, regardless of the risks. The only rational approach is to accept its inevitability and focus all energy on managing its implementation to stay ahead.

New technologies perceived as job-destroying, like AI, face significant public and regulatory risk. A powerful defense is to make the general public owners of the technology. When people have a financial stake in a technology's success, they are far more likely to defend it than fight against it.

Like railroads, AI promises immense progress but also concentrates power, creating public fear of being controlled by a new monopoly. The populist uprisings by farmers against railroad companies in the 1880s offer a historical playbook for how a widespread, grassroots political movement against Big Tech could form.

It's exceptionally rare for a company to make fundamental changes once its founders are gone. They become "frozen in time," like 1950s Havana. This institutional inertia explains why established industries, like legacy auto manufacturers, were unable to effectively respond to a founder-led disruptor like Elon Musk's Tesla.

Regulating technology based on anticipating *potential* future harms, rather than known ones, is a dangerous path. This 'precautionary principle,' common in Europe, stifles breakthrough innovation. If applied historically, it would have blocked transformative technologies like the automobile or even nuclear power, which has a better safety record than oil.

While AI may eventually create a world of abundance in which energy and labor are effectively free, the transition will be violent. Job displacement on an unprecedented scale, coupled with a societal loss of meaning, will likely produce significant bloodshed and upheaval before any utopian endpoint is reached.

Whether in old industries like oil or new ones like AI, amassing massive wealth attracts a personality type willing to eliminate threats to protect their power. This dynamic is about the psychology of power itself, not the specific industry a company operates in.

New technology can ignite violent conflict by making ideological differences concrete and non-negotiable. The printing press did this with religion, leading to one of Europe's bloodiest wars. AI could do the same by forcing humanity to confront divisive questions like transhumanism and the definition of humanity, potentially leading to similar strife.

The history of nuclear power, where regulation transformed an exponential growth curve into a flat S-curve, serves as a powerful warning for AI. It suggests that AI's biggest long-term hurdle may not be technical limits but regulatory intervention that stifles any "fast takeoff" and slows adoption to a crawl.

The moment an industry organizes in protest against an AI technology, it signals that the technology has crossed a critical threshold of quality. The fear and backlash are a direct result of the technology no longer being a gimmick, but a viable threat to the status quo.