© 2026 RiffOn. All rights reserved.


Henry Ajder, Latent Space Advisory: Deepfakes and the Crisis of Digital Trust
The Road to Accountable AI · Apr 23, 2026

Deepfake expert Henry Ajder discusses the evolution of synthetic media, the crisis of digital trust, and the challenges of detection and regulation.

Deepfake Harms Persist as Deception, Doubt, and Degradation, Unchanged by Tech Advances

While the realism, efficiency, and accessibility of deepfake technology have exploded, the fundamental ways it causes harm have not. The core malicious vectors remain scamming, humiliating, and deceiving people. This consistency provides a stable framework for understanding and combating the threat.

Public Media Literacy Training for Deepfake Spotting Is Actively Harmful

Telling the public to "look for the tells" in AI media is counterproductive. As generative models rapidly improve, these tips become obsolete, giving people a dangerous and false sense of their ability to discern real from fake. This false confidence makes them more vulnerable, not less.

Today's Deepfake Tech Solutions Are Too Flawed for Widespread Public Trust

Current responses to deepfakes are insufficient. Detection is an endless cat-and-mouse game with high error rates. Watermarking can be compromised. Provenance systems struggle with explainability for complex media edits. None provide the categorical confidence needed to solve the crisis of digital trust.
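The fragility of watermarking can be illustrated with a toy scheme. The sketch below (all pixel values and the watermark are invented for illustration) hides bits in the least significant bit of each pixel, then shows how ordinary re-quantization, of the kind lossy compression performs, erases the mark:

```python
# Toy illustration: a least-significant-bit (LSB) watermark, one of the
# simplest invisible watermarking schemes, does not survive the kind of
# re-quantization that lossy compression applies. Values are hypothetical.

def embed_lsb(pixels, bits):
    """Hide watermark bits in the lowest bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    """Read the watermark back out of the low bits."""
    return [p & 1 for p in pixels[:n]]

def requantize(pixels, step=4):
    """Simulate lossy compression: snap values onto a coarser grid."""
    return [round(p / step) * step for p in pixels]

watermark = [1, 0, 1, 1, 0, 0, 1, 0]
image = [52, 55, 61, 66, 70, 61, 64, 73]  # one row of grayscale pixels

marked = embed_lsb(image, watermark)
assert extract_lsb(marked, 8) == watermark  # survives a clean copy

recompressed = requantize(marked)
print(extract_lsb(recompressed, 8))  # low bits scrambled; mark unrecoverable
```

Production watermarks embed their signal more robustly (for instance, spread across frequency space), but the dynamic is the same: any cheap transformation an attacker can apply threatens the mark.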

The Open-Source Nature of AI Made the Proliferation of Deepfakes Largely Unavoidable

The opportunity to "solve" the deepfake problem early on never truly existed. Because the underlying tools are often open-source and built from accessible software libraries, restricting them is like trying to regulate mathematics. This makes top-down control and early intervention nearly impossible.

Ethically Ambiguous 'Gray Fakes' Pose Tougher Societal Questions Than Malicious Content

While harms like fraud are clearly bad, a vast middle ground of "gray fakes" exists. Applications like synthetically resurrecting the deceased or AI satire unsettle us without a clear ethical consensus. This ambiguity creates complex challenges for platforms and policymakers.

The Startup Ecosystem Systemically Punishes AI Companies for Prioritizing Safety Over Speed

From an entrepreneurial perspective, delaying a product launch to invest in safety testing is strategically unsound. While it may be the moral high ground, it doesn't secure the next funding round. The market fundamentally rewards speed over caution, creating a systemic barrier to responsible AI development.

AI Developers Can Use Forums Like 4chan for Low-Cost Threat Intelligence

Instead of relying solely on expensive, formal red-teaming, developers can monitor online communities where users actively try to jailbreak and misuse AI tools. Observing their techniques provides invaluable, real-world insight into how a tool may be weaponized, which developers can reverse-engineer into safety measures before attacks spread.
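As a minimal sketch of what such monitoring could look like, the snippet below flags forum posts that mention jailbreak-related phrases and surfaces them for human review. The posts and keyword list are invented for illustration; a real pipeline would ingest data from forum APIs or scrapes and use richer signals than keyword matching:

```python
# Minimal sketch of low-cost threat intelligence: scan a feed of forum
# posts for jailbreak-related phrasing and surface candidates for review.
# Terms and post data are hypothetical.

JAILBREAK_TERMS = ("bypass filter", "jailbreak", "ignore safety")

def flag_posts(posts):
    """Return the id and matched terms for posts mentioning any known term."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        hits = [t for t in JAILBREAK_TERMS if t in text]
        if hits:
            flagged.append({"id": post["id"], "matched": hits})
    return flagged

posts = [
    {"id": 1, "text": "Anyone know how to jailbreak the new image model?"},
    {"id": 2, "text": "Loving the latest checkpoint, great results."},
    {"id": 3, "text": "This prompt lets you bypass filter restrictions."},
]

for hit in flag_posts(posts):
    print(hit)  # posts 1 and 3 get surfaced for a human to examine
```

The value is not in the matching itself but in the feedback loop: flagged techniques become test cases for the developer's own safety evaluations.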
