Merely labeling a deepfake as 'fake' doesn't solve the problem. The greater danger is 'frequency bias' (akin to the illusory-truth effect): repeated exposure to a false message forms a strong mental association, making the idea stick even after it has been consciously rejected as untrue.
The modern information landscape is so saturated with noise, deepfakes, and propaganda that discerning the truth requires an enormous investment of time and energy. This high "cost" leads not to belief in falsehoods but to blanket disbelief in everything and an inability to form opinions one can trust.
A content moderation failure revealed a sophisticated misuse tactic: campaigns used factually correct but emotionally charged information (e.g., school shooting statistics) not to misinform, but to intentionally polarize audiences and incite conflict. This challenges traditional definitions of harmful content.
We are months away from AI that can create a media feed designed to exclusively validate a user's worldview while ignoring all contradictory information. This will intensify confirmation bias to an extreme, making rational debate impossible as individuals inhabit completely separate, self-reinforcing realities with no common ground or shared facts.
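To make the filtering mechanism concrete, here is a minimal, hypothetical sketch; the function, names, and threshold are all illustrative assumptions, not any platform's actual ranking code. It assumes each item carries a stance embedding (e.g., from an opinion or topic model) and simply drops anything that disagrees with the user's stance vector.

```python
# Hypothetical sketch of a worldview-validating feed. All names and the
# stance-embedding assumption are illustrative, not a real platform's code.
import numpy as np

def validating_feed(user_stance: np.ndarray,
                    items: list[tuple[str, np.ndarray]],
                    threshold: float = 0.0) -> list[str]:
    """Rank items by agreement with the user's stance vector,
    silently discarding anything that contradicts it."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(text, cosine(user_stance, vec)) for text, vec in items]
    # The key move: contradictory content (agreement below threshold)
    # never reaches the user, so the feed only ever confirms prior beliefs.
    agreeable = [(t, s) for t, s in scored if s > threshold]
    return [t for t, _ in sorted(agreeable, key=lambda p: -p[1])]
```

A few lines like these, pointed at stance embeddings from a modern language model, are all it would take to produce the sealed, self-reinforcing realities described above.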
The modern media ecosystem is defined by the decay of truth. From AI-generated fake images to conspiracy theories that blend real and fabricated documents on X, people are growing accustomed to an environment where discerning absolute reality is difficult, and they are willing to live with that ambiguity.
The rapid advancement of AI-generated video will soon make it impossible to distinguish real footage from deepfakes. This will cause a societal shift, eroding the concept of 'video proof,' which has been a cornerstone of trust for the past century.
The most immediate danger from AI is not a hypothetical superintelligence but the widening gap between AI's capabilities and the public's understanding of how it works. This knowledge gap enables subtle, widespread behavioral manipulation, a more insidious threat than a single rogue AGI.
Beyond generating fake content, AI exacerbates public skepticism towards all information, even from established sources. This erodes the common factual basis on which society operates, making it harder for democracies to function as people can't even agree on the basic building blocks of information.
As social media and search results become saturated with low-quality, AI-generated content (dubbed "slop"), users may develop a stronger preference for reliable information. This "sloptimism" suggests the degradation of the online ecosystem could inadvertently drive a rebound in trust for established, human-curated news organizations as a defense against misinformation.
Cryptographically signing media doesn't solve deepfakes, because the vulnerability simply shifts to the user: attackers apply phishing tactics, presenting nearly identical public keys or domains (a "Sybil problem"), to trick human perception. The core issue is human error, not the lack of a technical solution.
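A minimal sketch of the signing flow, using Ed25519 from Python's `cryptography` library (the media bytes are a stand-in), shows why the guarantee stops at the key: verification proves only that the bytes match whatever public key is presented, so an attacker's lookalike key passes the same check.

```python
# Minimal sketch of signed-media verification with Ed25519, using the
# `cryptography` library. The media bytes are a placeholder.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the media bytes with the publisher's private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
media = b"raw video bytes"
signature = private_key.sign(media)

# Viewer side: the cryptography is airtight...
public_key.verify(signature, media)  # raises InvalidSignature if tampered

# ...but note what was verified: only that `media` matches `public_key`.
# Nothing confirms that `public_key` belongs to the claimed publisher. An
# attacker with a lookalike domain or near-identical key fingerprint can
# sign a deepfake that verifies perfectly against *their* key, which is
# exactly the human-perception attack described above.
```

The defense therefore has to live in key distribution and human-facing identity checks, not in stronger signature schemes.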
A significant societal risk is the public's inability to distinguish sophisticated AI-generated videos from reality. This creates fertile ground for political deepfakes to influence elections, a problem made worse by social media platforms that don't enforce clear "Made with AI" labeling.