Cryptographically signing media doesn't solve deepfakes, because the vulnerability shifts to the user. Attackers use phishing tactics, registering nearly identical public keys or lookalike domains (an impersonation problem closer to typosquatting than a true "Sybil" attack), so a fake carries a perfectly valid signature under the wrong identity. The core issue is human error in checking who signed, not a lack of a technical solution.
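A minimal sketch of that failure mode in Python, assuming the 'cryptography' package; the keys and media bytes here are purely illustrative:

```python
# Why a valid signature alone doesn't stop impersonation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

real_creator = Ed25519PrivateKey.generate()
impostor = Ed25519PrivateKey.generate()  # attacker's own, distinct key

fake_clip = b"deepfake video bytes"
signature = impostor.sign(fake_clip)

# The signature is cryptographically perfect -- for the impostor's key.
# verify() raises no exception, because the math checks out.
impostor.public_key().verify(signature, fake_clip)

# The attack lives entirely in the human step: did the viewer confirm that
# the verifying key actually belongs to the claimed creator? A lookalike
# key fingerprint or domain is enough to pass casual inspection.
```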

Related Insights

The development of advanced surveillance in China required training models to distinguish between real humans and synthetic media. This technological push inadvertently propelled deepfake and face detection advancements globally, which were then repurposed for consumer applications like AI-generated face filters.

As AI makes it easy to fake video and audio, blockchain's immutable, decentralized ledger offers a solution. Creators can 'mint' a cryptographic hash of their original content, creating a verifiable, timestamped record of authenticity that nobody, not even governments or corporations, can alter.
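A minimal sketch of the fingerprinting step that minting relies on; the actual chain transaction is out of scope and only hinted at in the comments:

```python
import hashlib, json, time

def content_record(media: bytes) -> dict:
    """The record a creator mints: a content fingerprint plus a timestamp."""
    return {
        "sha256": hashlib.sha256(media).hexdigest(),
        "created_at": int(time.time()),
    }

# On a real chain this record would go into a transaction that stores it
# immutably. Only the fixed-size hash goes on-chain, never the media itself.
record = content_record(b"original video bytes")
print(json.dumps(record))
```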

The proliferation of AI-generated content has eroded consumer trust to a new low. People increasingly assume that what they see is not real, creating a significant hurdle for authentic brands, which must now work harder than ever to prove they are genuine and cut through the skepticism.

Politician Alex Boris argues that expecting humans to spot increasingly sophisticated deepfakes is a losing battle. The real solution is a universal provenance standard (like C2PA) whose metadata cryptographically attests to whether content is authentic or AI-generated, making unverified content inherently suspect, much like an insecure HTTP website today.
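A minimal sketch of that default-suspect policy; classify and its flags are illustrative stand-ins for a real C2PA validator, not an actual API:

```python
from enum import Enum

class Trust(Enum):
    VERIFIED = "provenance verified"
    SUSPECT = "unverified (treat like plain HTTP)"

def classify(has_manifest: bool, signature_valid: bool) -> Trust:
    """The policy Boris argues for: the absence of valid provenance
    metadata is itself the signal, so the default state is 'suspect'."""
    if has_manifest and signature_valid:
        return Trust.VERIFIED
    return Trust.SUSPECT

# A stripped manifest and a never-attached one land in the same bucket:
print(classify(has_manifest=False, signature_valid=False))  # Trust.SUSPECT
```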

The rise of convincing AI-generated deepfakes will soon make video and audio evidence unreliable. The solution will be the blockchain, a decentralized, unalterable ledger. A hash of each piece of content will be "minted" on-chain to provide a verifiable, timestamped record of authenticity that no single entity can control or manipulate.
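The verification side of the same scheme is a sketch under the same assumption (SHA-256 fingerprints); matches_record is an illustrative helper, not a real chain API:

```python
import hashlib

def matches_record(media: bytes, minted_sha256: str) -> bool:
    """Re-derive the fingerprint and compare it to the on-chain record.
    Any edit to the media, however small, changes the hash completely."""
    return hashlib.sha256(media).hexdigest() == minted_sha256

original = b"original video bytes"
record = hashlib.sha256(original).hexdigest()  # what was minted earlier

print(matches_record(original, record))         # True: untouched
print(matches_record(original + b"x", record))  # False: tampered
```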

The rapid advancement of AI-generated video will soon make it impossible to distinguish real footage from deepfakes. This will cause a societal shift, eroding the concept of 'video proof' which has been a cornerstone of trust for the past century.

As AI makes creating complex visuals trivial, audiences will become skeptical of content like surrealist photos or polished B-roll. They will increasingly assume it is AI-generated rather than the result of human skill, leading to lower trust and engagement.

By presenting AI-generated video in an intentionally low-resolution format, like doorbell-camera footage, creators can mask imperfections. This sidesteps the uncanny valley effect, where near-perfect but subtly flawed CGI is unsettling, making the content feel more authentic and more likely to go viral.
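An illustrative Pillow sketch of that degradation pass; the resolution and JPEG quality values are arbitrary, and the file names are placeholders:

```python
from PIL import Image

def doorbell_look(src: str, dst: str, width: int = 320, quality: int = 20) -> None:
    """Simulate a low-fi 'doorbell cam' pass that hides rendering artifacts."""
    img = Image.open(src)
    # Downscale hard, then re-encode with aggressive JPEG compression;
    # both steps destroy the fine detail where CGI flaws would show.
    small = img.resize((width, int(img.height * width / img.width)))
    small.convert("RGB").save(dst, "JPEG", quality=quality)

doorbell_look("ai_frame.png", "doorbell_frame.jpg")
```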

The current debate focuses on labeling AI-generated content. However, as AI content floods the internet and becomes the default, the more efficient system will be to label the smaller, scarcer category: authentic, human-created content.

A significant societal risk is the public's inability to distinguish sophisticated AI-generated videos from reality. This creates fertile ground for political deepfakes to influence elections, a problem made worse by social media platforms that don't enforce clear "Made with AI" labeling.