
Meta's core ad-targeting algorithm is not a neutral party in platform fraud; it is an active accelerant. By design, the system identifies vulnerable users (e.g., the elderly). Once a user clicks a single scam ad, the algorithm learns to flood their feed with more, creating a vicious, automated cycle of exploitation for profit.

Related Insights

Platforms follow a predictable cycle called 'enshittification.' First, they offer a great user experience to achieve scale. Next, they squeeze users to benefit advertisers. Finally, they squeeze advertisers to maximize their own profits. This model explains why platforms inevitably prioritize profit over user well-being and safety.

Marketers should reframe AI-driven scams, especially those using deepfakes in paid ads, as direct competitors. These are not just security risks; they are sophisticated marketing funnels bidding against your own efforts to capture the same customers and divert revenue, directly impacting campaign success.

Internal Meta documents show the company knowingly accepts that its scam-related ad revenue will lead to regulatory fines. However, it calculated that the profits from this fraud ($3.5B every six months from high-risk ads alone) 'almost certainly exceeds the cost of any regulatory settlement'.

Rather than simply failing to police fraud, Meta perversely profits from it by charging higher rates for ads its systems suspect are fraudulent. This 'scam tax' creates a direct financial incentive to allow illicit ads, turning a blind eye into a lucrative revenue stream.

Previously, marketers told Meta whom to target. With the new AI algorithm, marketers supply diverse creative, and the AI uses that creative to find the right audience. Targeting control has shifted from human to machine, fundamentally changing how ads are built and optimized.

Internal Meta documents revealed the company knowingly earned 10% of its revenue (approx. $16B annually) from scam ads. Leadership performed a cold calculation, concluding these massive profits would far exceed any potential regulatory fines. This reframes platform safety failures not as negligence, but as a deliberate, profit-maximizing business strategy where penalties are just a cost of doing business.

The real danger of algorithms isn't their ability to personalize offers based on taste. The harm occurs when they identify and exploit consumers' lack of information or cognitive biases, leading to manipulative sales of subpar products. This is a modern, scalable form of deception.

An 11-year Meta veteran explains that Facebook's ad value shifted from demographics to interest targeting, and now to a sophisticated AI. Today, the best strategy is often to remove granular targeting and let the system's machine learning find the right audience automatically.

Internal Meta documents project that 10% of the company's total annual revenue, or $16 billion, comes from advertising for scams and banned goods. This reframes fraud not as a peripheral problem but as a significant, core component of Meta's advertising business model.

While roughly 10% of Meta's revenue reportedly comes from fraud, the company's anti-fraud team was blocked from taking any action that would impact more than 0.15% of total revenue. This minuscule 'revenue guardrail' all but guaranteed that anti-fraud efforts could never meaningfully reduce scam revenue.
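To make the scale mismatch concrete, here is a back-of-envelope sketch using only the figures reported above (the ~$160B total revenue is inferred from "10% ≈ $16B"; the exact internal accounting is an assumption):

```python
# Back-of-envelope comparison of the scam revenue vs. the enforcement
# "revenue guardrail", using the figures from the reported documents.
total_revenue = 16e9 / 0.10            # implied annual revenue: ~$160B
scam_revenue = 0.10 * total_revenue    # ~$16B from scam / banned-goods ads
guardrail = 0.0015 * total_revenue     # 0.15% cap on revenue impact: ~$240M

# Fraction of the scam revenue the anti-fraud team was allowed to affect:
max_share = guardrail / scam_revenue   # 0.015, i.e. 1.5%
```

Under these numbers, the team could touch at most about 1.5% of the fraud-derived revenue in any given action, which is why the guardrail functions as a ceiling on enforcement rather than a safety measure.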