

EP 106: Ant Cousins Explains Why Handing Empathy to AI Almost Went Wrong

Embracing Marketing Mistakes · Apr 10, 2026

AI expert Ant Cousins on tackling deepfakes, managing brand reputation, and the unexpected power and limitations of AI-driven empathy.

Brand Silence on Viral Issues Is Now Interpreted as a Stance

Brands can no longer remain passive on controversial topics. Audiences increasingly penalize inaction, viewing silence not as neutrality but as a deliberate position. This forces companies to take a stand, even when their customer base holds fractured and conflicting views.

The Threat of Deepfaked CEO Videos Drives Corporate Risk Management Adoption

While AI-driven misinformation is a broad threat, the specific, high-impact risk of a deepfaked CEO making a market-moving announcement is the primary catalyst compelling brands to finally invest seriously in comprehensive reputation and risk management systems.

AI-Driven Growth Follows the Jevons Paradox

Counterintuitively, making a task cheaper and easier with AI doesn't just eliminate jobs; it drastically increases the overall demand for that task. Just as Excel created more accountants, AI's efficiencies will lead to an explosion in the volume of work, creating new roles and opportunities.

Proactively "Pre-Bunk" Misinformation by Training Public LLMs with Verified Data

Instead of reactively debunking false narratives, brands can "pre-bunk" them by making verifiable information readily available to large language models. This proactive approach conditions the AI with the truth before a crisis, making it less susceptible to spreading misinformation.

Large Language Models Will Likely Fracture and Specialize Like News Media

The AI landscape won't be dominated by a single, monolithic LLM. Instead, models will fragment to serve specific markets, catering to different geographic, political, or business audiences. This will create inherent biases in each model, similar to how consumers choose different news channels today.

An Abundance Mindset Defines Winning AI Strategies

Companies with a scarcity mindset ask how AI can cut costs and reduce headcount. The winning approach is an abundance mindset: asking how AI can do more, reach more customers, and accelerate growth with the same team. This focus on top-line growth will separate the leaders from the laggards.

Use an Enjoyment vs. Skill Matrix to Guide AI Task Delegation

A simple framework for AI adoption: If you enjoy a task and are good at it, do it yourself. If you enjoy it but are unskilled, use AI as a coach. If you dislike it but are good, let AI draft and you review. If you dislike it and are unskilled, let AI draft but have a human expert review.
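The four quadrants above can be expressed as a tiny lookup. This is a hypothetical illustration of the framework as described; the function name and the recommendation labels are not from the episode:

```python
def delegate(enjoy: bool, skilled: bool) -> str:
    """Map the four enjoyment/skill quadrants to an AI-delegation mode.

    Illustrative sketch of the matrix described above; labels are
    paraphrased, not quoted from the episode.
    """
    if enjoy and skilled:
        return "do it yourself"          # enjoy + good at it
    if enjoy and not skilled:
        return "use AI as a coach"       # enjoy it, still learning
    if not enjoy and skilled:
        return "AI drafts, you review"   # dislike it, but can judge quality
    return "AI drafts, expert reviews"   # dislike it and can't judge quality


# Example: a task you dislike but are good at
print(delegate(enjoy=False, skilled=True))  # AI drafts, you review
```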

AI's Imitation of Empathy Can Outperform Fatigued Humans in Service Roles

AI can only imitate empathy, yet in high-stress roles it can outperform human-delivered empathy. With infinite patience and none of the emotional fatigue that wears down professionals like doctors or paramedics, AI can leave the recipient feeling more cared for.

Avoid Biasing AI Towards Action by First Asking "Should We Respond?"

When using AI for crisis response, humans inadvertently bias it toward action by asking "How should we respond?" The more critical, strategic question is "Should we respond at all?" This decision requires "courageous restraint"—knowing when to stay silent—a nuance AI cannot grasp.

Human Contextual Awareness Is AI's Blind Spot in High-Stakes PR

AI struggles to replace senior PR professionals because it lacks the nuanced, historical awareness to identify non-obvious risks. A human can spot a subtle connection, like a fallen soldier's link to royalty, that escalates a routine story into a major crisis—a connection AI would almost certainly miss.
