Adam Mosseri’s public statement that we can no longer assume photos or videos are real marks a pivotal shift. He suggests moving from a default of trust to a default of skepticism, effectively admitting platforms have lost the war on deepfakes and placing the burden of verification on users.
The proliferation of AI-generated content has driven consumer trust to a new low. People increasingly assume that what they see is not real, creating a significant hurdle for genuine brands, which must now work harder than ever to prove their authenticity and cut through the skepticism.
The modern information landscape is so saturated with noise, deepfakes, and propaganda that discerning the truth demands an enormous investment of time and energy. That high "cost" leads not to belief in falsehoods but to a blanket disbelief in everything and an inability to form trusted opinions.
The modern media ecosystem is defined by the decomposition of truth. From AI-generated fake images to conspiracy theories blending real and fake documents on X, people are becoming accustomed to an environment where discerning absolute reality is difficult and are willing to live with that ambiguity.
Politician Alex Boris argues that expecting humans to spot increasingly sophisticated deepfakes is a losing battle. The real solution is a universal metadata standard (such as C2PA) that cryptographically proves whether content is authentic or AI-generated, making unverified content inherently suspect, much as an insecure HTTP website is today.
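To make the mechanism concrete, here is a minimal sketch of provenance-style signing, not the actual C2PA specification: a publisher signs the media bytes with a private key, and anyone holding the trusted public key can confirm the file is unmodified since signing. The `sign_media`/`verify_media` helpers and the Ed25519 choice are illustrative assumptions, using Python's `cryptography` package.

```python
# Minimal sketch of provenance-style verification (NOT the C2PA spec):
# a publisher signs the media bytes, and anyone holding the trusted
# public key can check the file is byte-identical to what was signed.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Produce a detached signature over the raw media bytes."""
    return private_key.sign(media_bytes)


def verify_media(
    public_key: Ed25519PublicKey, media_bytes: bytes, signature: bytes
) -> bool:
    """Return True only if the media is unchanged since signing."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


# Demo: a signed photo verifies; a single altered byte does not.
key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
sig = sign_media(key, photo)
print(verify_media(key.public_key(), photo, sig))                # True
print(verify_media(key.public_key(), photo + b"tampered", sig))  # False
```

Under this model, content arriving without a valid signature carries no provenance claim at all, which is exactly what would make it "inherently suspect" by default.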
Instagram head Adam Mosseri observes a major cultural shift on the platform away from the high-saturation, heavily photoshopped look. The content now driving cultural relevance is its opposite: raw, unprocessed 'photo dumps.' In a world of hyper-production, users crave content that feels more authentic.
The rise of convincing AI-generated deepfakes will soon make video and audio evidence unreliable. The solution will be the blockchain, a decentralized, unalterable ledger. Content will be "minted" on-chain to provide a verifiable, timestamped record of authenticity that no single entity can control or manipulate.
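As a toy illustration of what "minting" buys (this is a hash-linked log, not a real blockchain, and every name here is invented for the example): record the content's SHA-256 with a timestamp, chain each entry to the previous one, and anyone can later re-verify that no past record was silently rewritten.

```python
# Toy illustration of on-chain "minting" (not a real blockchain): each
# entry stores the content's SHA-256 plus a timestamp, and is chained to
# the previous entry so the log cannot be silently rewritten.
import hashlib
import json
import time


def mint(ledger: list[dict], content: bytes) -> dict:
    """Append a timestamped record of the content's hash to the ledger."""
    prev = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry


def verify_chain(ledger: list[dict]) -> bool:
    """Recompute every link; any edit to a past entry breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or digest != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True


ledger: list[dict] = []
mint(ledger, b"raw video bytes")
print(verify_chain(ledger))  # True
ledger[0]["timestamp"] = 0   # tamper with a past record
print(verify_chain(ledger))  # False
```

A real chain adds decentralized consensus on top of this structure, which is what removes the single entity able to rewrite the log.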
The rapid advancement of AI-generated video will soon make it impossible to distinguish real footage from deepfakes. This will cause a societal shift, eroding the concept of 'video proof,' which has been a cornerstone of trust for the past century.
As AI makes creating complex visuals trivial, audiences will become skeptical of content like surrealist photos or polished B-roll. They will increasingly assume it is AI-generated rather than the result of human skill, leading to lower trust and engagement.
Cryptographically signing media doesn't solve deepfakes, because the vulnerability simply shifts to the user. Attackers use phishing tactics, presenting nearly identical public keys or look-alike domains (a "Sybil problem"), to exploit human perception rather than break the cryptography. The core issue is human error, not the absence of a technical solution.
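A small demonstration of the perceptual gap this argument rests on (the domain names are fabricated for the example): two identifiers that look the same to a human are entirely different identities to a verifier, so the cryptography succeeds while the user is still deceived.

```python
# Two domain names that render almost identically to a human but are
# entirely different identities to a verifier: the second swaps the Latin
# 'a' for the Cyrillic 'а' (U+0430), a classic homoglyph phishing trick.
import hashlib

real_domain = "instagram.com"
fake_domain = "instagr\u0430m.com"  # Cyrillic 'а' in place of Latin 'a'

print(real_domain, fake_domain)    # look near-identical on screen
print(real_domain == fake_domain)  # False: not the same identity

# Any certificate or key bound to these names fingerprints differently,
# so the cryptography "works" correctly; the failure is perceptual.
print(hashlib.sha256(real_domain.encode()).hexdigest()[:16])
print(hashlib.sha256(fake_domain.encode()).hexdigest()[:16])
```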
The current debate focuses on labeling AI-generated content. However, as AI content floods the internet and becomes the default, the more efficient system will be to label the smaller, scarcer category: authentic, human-created content.