Effective aggressive ads, like Apple's 'Get a Mac' campaign, are rooted in verifiable truths about competitors. In contrast, 'dirty' ads, like political attack ads, rely on creating deceptive impressions. Anthropic's ads are argued to be closer to the latter, since they depict a future of ad-saturated AI that isn't grounded in any competitor's announced product plans.
Sam Altman states that OpenAI's first principle for advertising is to avoid putting ads directly into the LLM's conversational stream. He calls the scenario depicted in Anthropic's ads a 'crazy dystopic, bad sci-fi movie,' suggesting ads will be adjacent to the user experience, not manipulative content within it.
Sam Altman counters Anthropic's ads by reframing the debate. He positions OpenAI as a champion for broad, free access for the masses ("billions of people who can't pay"), while painting Anthropic as an elitist service for the wealthy ("serves an expensive product to rich people"), shifting the narrative from ad ethics to accessibility.
The campaign's simple 'keep thinking' message subtly reframes Anthropic's AI as a human-augmenting tool. This marks a significant departure from the company's public reputation for focusing on existential AI risk, suggesting a deliberate effort to build a more consumer-friendly and less threatening brand.
By dropping its attack ads just before the Super Bowl and OpenAI's planned ad launch, Anthropic left OpenAI no time to craft and run a response ad. This maximized the campaign's unchallenged impact, muddying the waters at a critical moment.
A smaller competitor can attack the market leader without naming them. Everyone assumes the criticism targets the dominant player, allowing the challenger to land hits on the category as a whole, which disproportionately harms the leader. This is a classic challenger-marketing maneuver.
Dario Amodei's public criticism of advertising and "social media entrepreneurs" isn't just personal ideology. It's a strategic narrative to position Anthropic as the principled, enterprise-focused AI choice, contrasting with consumer-focused rivals like Google and OpenAI who need to "maximize engagement for a billion users."
The conflict between AI labs has moved beyond a 'cold war' of poaching talent to a public battle for perception. Anthropic’s ads represent a 'gloves off' moment, using what the hosts call 'fear-mongering' and 'propaganda' to directly attack a competitor's business model on a massive stage like the Super Bowl.
Anthropic's ads lack a call-to-action, indicating their primary goal isn't consumer downloads. Instead, they use fear-mongering to "muddy the water" around OpenAI's upcoming ad product, aiming to make enterprise decision-makers and regulators wary of ad-supported AI models before they launch.
By attacking the concept of ads in LLMs, Anthropic may not just hurt OpenAI but also erode general consumer trust in all AI chatbots. This high-risk strategy could backfire if the public becomes skeptical of the entire category, including Anthropic's own products, especially if the company ever decides to introduce advertising itself.
Anthropic's ads imply OpenAI's upcoming ad integration will compromise AI responses with biased, low-quality suggestions. This is a "dirty" but effective tactic, creating fear and doubt about a competitor's product by attacking the category leader without naming them.