Counterintuitively, as AI makes it easy to fake any video or audio, the power of "gotcha" recordings will diminish. The plausible deniability of "it could be a deepfake" may free people from the social surveillance state created by smartphone cameras.
The development of advanced surveillance in China required training models to distinguish real humans from synthetic media. This push inadvertently accelerated advances in both deepfake generation and face detection worldwide, which were then repurposed for consumer applications like AI-generated face filters.
As AI makes it easy to fake video and audio, blockchain's immutable, decentralized ledger offers a countermeasure. Creators can "mint" their original content on-chain, producing a verifiable, timestamped record of authenticity that no party, not even a government or corporation, can alter.
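To make the "minting" step concrete, here is a minimal sketch, assuming SHA-256 content hashes and using a local append-only JSONL file as a stand-in for an actual chain; the `mint` and `verify` helpers are hypothetical names, not any real platform's API.

```python
import hashlib
import json
import time
from pathlib import Path

LEDGER = Path("ledger.jsonl")  # stand-in for an on-chain record

def mint(media_path: str, creator: str) -> dict:
    """Hash the media file and append a timestamped record to the ledger.

    Only the digest is published; the file itself stays with the creator.
    """
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    record = {
        "sha256": digest,
        "creator": creator,
        "timestamp": int(time.time()),
    }
    with LEDGER.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def verify(media_path: str) -> dict | None:
    """Re-hash the file and look for a matching minted record."""
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    if not LEDGER.exists():
        return None
    for line in LEDGER.read_text().splitlines():
        record = json.loads(line)
        if record["sha256"] == digest:
            return record  # content existed unaltered at that timestamp
    return None  # no record: unverified, not necessarily fake
```

The design point worth noting: the ledger proves a file existed in a given form at a given time, but the absence of a record marks content only as unverified, never as fake.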
Adam Mosseri’s public statement that we can no longer assume photos or videos are real marks a pivotal shift. He suggests moving from a default of trust to a default of skepticism, effectively admitting platforms have lost the war on deepfakes and placing the burden of verification on users.
The modern media ecosystem is defined by the decomposition of truth. From AI-generated fake images to conspiracy theories blending real and fake documents on X, people are becoming accustomed to an environment where discerning absolute reality is difficult and are willing to live with that ambiguity.
Politician Alex Boris argues that expecting humans to spot increasingly sophisticated deepfakes is a losing battle. The real solution is a universal metadata standard (like C2PA) that cryptographically proves whether content is authentic or AI-generated, making unverified content inherently suspect, much like an insecure HTTP website today.
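C2PA itself embeds COSE-signed manifests inside the media file; the sketch below shows only the underlying idea, binding a content hash to signed metadata with Ed25519 keys from Python's `cryptography` package, so that editing either the file or its metadata breaks verification. All function names here are illustrative.

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_manifest(content: bytes, claim: dict,
                  key: ed25519.Ed25519PrivateKey) -> dict:
    # Bind the content hash and the provenance claim into one signed payload.
    payload = json.dumps(
        {"sha256": hashlib.sha256(content).hexdigest(), **claim},
        sort_keys=True,
    ).encode()
    return {"payload": payload.decode(), "signature": key.sign(payload).hex()}

def check(content: bytes, manifest: dict,
          pub: ed25519.Ed25519PublicKey) -> bool:
    payload = manifest["payload"].encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False  # metadata tampered with, or signed by someone else
    claimed = json.loads(payload)["sha256"]
    return claimed == hashlib.sha256(content).hexdigest()  # content unedited?

# Usage: content without a valid manifest is simply "unverified", the
# analogue of plain HTTP in the browser.
key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
manifest = make_manifest(photo, {"captured_by": "camera-model-x"}, key)
assert check(photo, manifest, key.public_key())
assert not check(photo + b"edited", manifest, key.public_key())
```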
The rapid advancement of AI-generated video will soon make it impossible to distinguish real footage from deepfakes. This will force a societal shift, eroding the concept of "video proof," which has been a cornerstone of trust for the past century.
As AI makes creating complex visuals trivial, audiences will become skeptical of content like surrealist photos or polished B-roll. They will increasingly assume it is AI-generated rather than the result of human skill, leading to lower trust and engagement.
Cryptographically signing media doesn't solve deepfakes, because the vulnerability shifts to the user. Attackers use phishing tactics with look-alike public keys or domains (a "Sybil problem") to exploit human perception. The core issue is human error, not the lack of a technical solution.
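To see why eyeballing key material fails, here is a small illustrative experiment: an attacker who only needs to match the first four hex characters of a SHA-256 key fingerprint can brute-force a look-alike key in seconds. The four-character threshold and all names are assumptions for the demo.

```python
# Sketch: why "eyeballing" short key fingerprints fails. An attacker can
# brute-force a key whose truncated fingerprint matches the victim's, so a
# user who compares only the first few characters is easily phished.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def fingerprint(key: ed25519.Ed25519PublicKey) -> str:
    raw = key.public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return hashlib.sha256(raw).hexdigest()

victim = ed25519.Ed25519PrivateKey.generate().public_key()
prefix = fingerprint(victim)[:4]  # what a hurried user actually checks

attempts = 0
while True:
    attempts += 1
    candidate = ed25519.Ed25519PrivateKey.generate()
    if fingerprint(candidate.public_key()).startswith(prefix):
        break  # ~16^4 = 65,536 tries on average: trivial for an attacker

print(f"Matched 4-hex-char prefix {prefix!r} in {attempts} keygen attempts")
```

The signature math is sound throughout; the attack succeeds entirely at the layer where a human decides which key to trust.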
The current debate focuses on labeling AI-generated content. However, as AI content floods the internet and becomes the default, the more efficient system will be to label the smaller, scarcer category: authentic, human-created content.