Marketers' fears about legal risks with AI are often overblown, as FTC guidance is largely unchanged. An AI avatar making a fake testimonial is illegal, just as it is for a human creator. The core rules against deceptive claims apply equally, regardless of whether the spokesperson is real or generated.

Related Insights

Regardless of an AI's capabilities, the human in the loop is always the final owner of the output. Your responsible AI principles must clearly state that using AI does not remove human agency or accountability for the work's accuracy and quality. This is critical for mitigating legal and reputational risks.

Responding to a growing consumer backlash against AI-generated content, brands are beginning to market their creative as authentically human-made. American Eagle's '100% Aerie real' campaign explicitly states no AI was used for models or retouching, positioning human creation as a key brand differentiator and trust signal.

As CGI becomes photorealistic, spotting fake hardware demos is harder. An unexpected giveaway has emerged: the use of generic, AI-generated captions and descriptions. This stilted language, intended to sound professional, can ironically serve as a watermark of inauthenticity, undermining the credibility of the visuals it accompanies.

Beyond data privacy, a key ethical responsibility for marketers using AI is ensuring content integrity. This means using platforms that provide a verifiable trail for every asset, check work for originality, and offer AI-assisted verification of factual accuracy. Together these safeguards protect the brand, ensure content is original, and build customer trust.

Marketing leaders shouldn't wait for FTC regulation to establish ethical AI guidelines. The real risk of using undisclosed AI, like virtual influencers, isn't immediate legal trouble but the long-term erosion of consumer trust. Once customers feel misled, that brand damage is incredibly difficult to repair.

OnlyFans deliberately bans fully AI-generated accounts to protect its human creators' ability to monetize. CEO Keily Blair bets that as AI-generated "slop" proliferates online, users will increasingly crave and pay more for authentic, human-produced content and the genuine connection it provides.

Consumer trust in AI-generated content hinges more on utility than authenticity. If an AI avatar provides a valuable solution to a viewer's problem, audiences are highly receptive. The focus should be on solving the 'What's in it for me?' question, regardless of the presenter's nature.

There is a temptation to create a flurry of AI-specific laws, but most harms from AI (like deepfakes or voice clones) already fall under existing legal categories. Torts like defamation and crimes like fraud provide strong existing remedies.

Vague marketing slogans are now a liability. AI systems actively verify claims by seeking proof such as awards, certifications, or third-party citations. If your business makes an assertion without verifiable backing, AI will discount your credibility and trust signals.

The backlash against J.Crew's AI ad wasn't about the technology but the lack of transparency. Customers fear manipulation and disenfranchisement. To maintain trust, brands must be explicit when using AI, framing it as a tool that serves human creativity rather than a replacement for it.

FTC Guidelines for AI Ad Testimonials Mirror Existing Rules for Human Creators | RiffOn