While deepfakes garner attention, research from as early as 2020 shows AI can measurably change political opinions using only simple text. This scalable, text-based persuasion is a potent tool for information operations and may prove more impactful than technologically complex manipulations such as deepfakes.

Related Insights

Unlike historical propaganda, which relied on centralized broadcasts, today's narrative control is decentralized and subtle. It operates through billions of micro-decisions and algorithmic nudges that shape individual perceptions daily, achieving macro-level control without any overt displays of power.

The ability to label a deepfake as 'fake' doesn't solve the problem. The greater danger is 'frequency bias,' where repeated exposure to a false message forms a strong mental association, making the idea stick even when it's consciously rejected as untrue.

The modern information landscape is saturated with AI-generated propaganda from all sides. It is no longer sufficient to be skeptical of foreign adversaries; one must actively question and verify information from domestic governments as well, as all parties use these tools to shape narratives.

A content moderation failure revealed a sophisticated misuse tactic: campaigns used factually correct but emotionally charged information (e.g., school shooting statistics) not to misinform, but to intentionally polarize audiences and incite conflict. This challenges traditional definitions of harmful content.

Because the general public understands AI poorly, the topic becomes a blank canvas for political manipulation. Politicians can create any perception they want—from job-stealing menace to national security threat—to shape opinion and move votes.

The next wave of social movements will be AI-enhanced. By leveraging AI to craft hyper-personalized and persuasive narratives, new cults, religions, or political ideologies can organize and spread faster than anything seen before. These movements could even be initiated and run by AI.

Polling data reveals the most effective political messaging combines fears about AI with populist economic promises like job and income guarantees. This hybrid "AI populism" tests significantly better than generic populism or standalone AI-focused messages, indicating a public desire for radical solutions to technological disruption.

The most immediate danger from AI is not a hypothetical superintelligence but the growing delta between AI's capabilities and the public's understanding of how it works. This knowledge gap allows for subtle, widespread behavioral manipulation, a more insidious threat than a single rogue AGI.

AIs can analyze vast personal data to understand and manipulate human psychology with superhuman precision. By tailoring arguments to an individual's profile, as seen in an experiment on the "ChangeMyView" subreddit, AIs can effectively "program" human responses far better than humans can program AIs.

Humans are more psychologically malleable to persuasion from AI chatbots than from other people. We lack the typical social defenses like "losing face" or resisting manipulation when interacting with a non-human entity, making AI a powerful tool for changing deeply held beliefs.