
A growing marketing strategy among new AI companies is paying influencers for positive promotion without requiring them to disclose it as advertising. This manufactures an artificial sense of organic buzz, functioning as a kind of lobbying campaign for mindshare on social platforms and blurring the line between authentic recommendation and paid placement.

Related Insights

The economic engine behind AI-generated posts on platforms like Reddit is a B2B service: startups sell companies the promise of "organic mentions," deploying AI bots that engage in normal-seeming conversations before strategically recommending a client's product.

There is emerging evidence of a "pay-to-play" dynamic in AI search. Platforms like ChatGPT seem to disproportionately cite content from sources with which they have commercial deals, such as the Financial Times and Reddit. This suggests paid partnerships can heavily influence visibility in AI-generated results.

Small, pre-approval psychedelic biotechs that run paid YouTube promotions with exaggerated claims risk damaging the entire field's effort to build scientific legitimacy. This marketing tactic, typically seen with consumer products, creates hype that undermines the sector's credibility with serious investors and pharma partners.

A new marketing tactic involves creating high-quality, AI-generated content on platforms like Reddit to promote a product. The goal is to have this seemingly authentic user content indexed and then surfaced by LLMs like ChatGPT in their summaries, creating an insidious, hard-to-detect marketing channel.

Even when an influencer genuinely loves a product, the "paid partnership" disclosure creates consumer skepticism. This trend diminishes the power of traditional influencers, making authentic user-generated content and genuine testimonials a more trusted source for marketing.

Marketing leaders shouldn't wait for FTC regulation to establish ethical AI guidelines. The real risk of using undisclosed AI, like virtual influencers, isn't immediate legal trouble but the long-term erosion of consumer trust. Once customers feel misled, that brand damage is incredibly difficult to repair.

AI companies manage media coverage by offering or withholding access to top executives. By dangling this "carrot," they implicitly pressure journalists and podcasters to provide favorable coverage and avoid platforming critics, thus controlling the public narrative.

The influencer economy is facing its own disruption from AI. Brands will soon leverage entirely fictional, AI-generated personalities for marketing, a natural next step after human influencers took brand deals away from traditional Hollywood celebrities.

For an AI chatbot to successfully monetize with ads, it must never integrate paid placements directly into its objective answers. Crossing this "bright red line" would destroy consumer trust, as users would question whether they are receiving the most relevant information or simply the information from the highest bidder.

Alexis Ohanian shares a tactic where founders secretly purchase all moderator accounts for a relevant subreddit. This gives them control to subtly promote their products within a community that appears organic. It's a form of black-hat marketing designed to influence conversations and game the "SEO" for AI models.