When people perceive that their political participation is futile and that corporations can simply lobby their way past regulation, they are more likely to support violence. A sense of political efficacy is a powerful antidote to radicalization.
Research shows that people anticipating downward mobility, such as job loss from AI, enter a psychological "domain of loss." This makes them risk-seeking and more likely to support or commit violent acts, because they feel they have less to lose.
Offering universal basic income (UBI) confirms the public's fear that their labor has no future value. It reinforces a power dynamic that casts tech leaders as "moral agents" and the public as passive "moral patients," stripping people of dignity and provoking resentment rather than gratitude.
The public’s anxiety about AI didn’t form in a vacuum. Industry leaders consistently framed AI as an imminent, dangerous, job-destroying force. The public has now taken them at their word, with some reacting violently to the perceived threat.
Psychological theory suggests that the public "typecasts" powerful figures such as CEOs as moral agents, perceiving them as less capable of suffering. At the same time, people see themselves as moral patients, victims of the system, which helps explain the lack of empathy when elites are attacked.
If one truly believes AI poses a non-trivial extinction risk, utilitarian ethics can lead to an alarming conclusion: that extreme actions, including violence, are justified to prevent a catastrophically greater harm. This presents a core philosophical paradox for the AI safety movement.
