
Internal Meta documents revealed the company knowingly earned 10% of its revenue (approx. $16B annually) from scam ads. Leadership performed a cold calculation, concluding that these massive profits would far exceed any potential regulatory fines. This reframes platform safety failures not as negligence, but as a deliberate, profit-maximizing business strategy in which penalties are just a cost of doing business.

Related Insights

Platforms follow a predictable cycle called 'enshittification.' First, they offer a great user experience to achieve scale. Next, they squeeze users to benefit advertisers. Finally, they squeeze advertisers to maximize their own profits. This model explains why platforms inevitably prioritize profit over user well-being and safety.

Businesses and financial institutions intentionally accept a certain level of fraud. The friction required to eliminate it entirely would block too many legitimate transactions, ultimately costing more in lost revenue (lower conversion) than the fraud itself. It is a calculated trade-off between security and usability.
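That trade-off can be sketched as a simple expected-value comparison; the figures below are entirely hypothetical and only illustrate why a "lax" policy can net more revenue than a strict one:

```python
# Hypothetical figures: why firms tolerate some fraud rather than
# eliminate it. Friction that blocks fraud also blocks legitimate
# transactions, so net revenue is gross revenue minus both losses.

def net_revenue(gross_revenue, fraud_rate, friction_loss_rate):
    """Revenue after subtracting fraud losses and conversions lost to friction."""
    fraud_loss = gross_revenue * fraud_rate
    friction_loss = gross_revenue * friction_loss_rate
    return gross_revenue - fraud_loss - friction_loss

# Scenario A: light controls -> 2% fraud, 0.5% of conversions lost to friction
lax = net_revenue(100_000_000, fraud_rate=0.02, friction_loss_rate=0.005)

# Scenario B: strict controls -> 0.1% fraud, 4% of conversions lost to friction
strict = net_revenue(100_000_000, fraud_rate=0.001, friction_loss_rate=0.04)

print(f"lax:    ${lax:,.0f}")     # $97,500,000 -- higher net despite more fraud
print(f"strict: ${strict:,.0f}")  # $95,900,000
```

Under these made-up numbers, accepting 2% fraud nets more than driving it to 0.1%, which is the calculated trade-off the insight describes.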

Marketers should reframe AI-driven scams, especially those using deepfakes in paid ads, as direct competitors. These are not just security risks; they are sophisticated marketing funnels bidding against your own efforts to capture the same customers and divert revenue, directly impacting campaign success.

Internal Meta documents show the company knowingly accepts that its scam-related ad revenue will lead to regulatory fines. However, it calculated that the profits from this fraud ($3.5B every six months from high-risk ads alone) 'almost certainly exceeds the cost of any regulatory settlement'.

Meta's core ad-targeting algorithm is not a neutral party in platform fraud; it is an active accelerant. By design, the system identifies vulnerable users (e.g., the elderly). Once a user clicks a single scam ad, the algorithm learns to flood their feed with more, creating a vicious, automated cycle of exploitation for profit.

Rather than simply failing to police fraud, Meta perversely profits from it by charging higher rates for ads its systems suspect are fraudulent. This 'scam tax' creates a direct financial incentive to allow illicit ads, turning a blind eye into a lucrative revenue stream.

Many social media and ad tech companies benefit financially from bot activity that inflates engagement and user counts. This perverse incentive means they are unlikely to solve the bot problem themselves, creating a need for independent, verifiable trust layers like blockchain.

After an internal team successfully slashed problematic ad revenue from China by 50%, Meta CEO Mark Zuckerberg personally intervened. Following his input, the effective anti-scam team was disbanded, as its success was negatively impacting the company's $18 billion in Chinese ad sales.

Internal Meta documents project that 10% of the company's total annual revenue, or $16 billion, comes from advertising for scams and banned goods. This reframes fraud not as a peripheral problem but as a significant, core component of Meta's advertising business model.

While 10% of Meta's revenue comes from fraud, the company's anti-fraud team was blocked from taking any action that would impact more than 0.15% of total revenue. This minuscule 'revenue guardrail' was an explicit internal directive to ensure anti-fraud efforts would not succeed.
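Taking the section's own figures at face value ($16B in fraud-linked revenue as 10% of the total), the scale of that guardrail can be made concrete with simple arithmetic:

```python
# Arithmetic on the figures reported above: a 0.15% revenue guardrail
# versus a 10% fraud share of revenue. No new data, just the implication.

fraud_share = 0.10                            # scam/banned-goods share of revenue
fraud_revenue = 16e9                          # $16B annually, per the documents
total_revenue = fraud_revenue / fraud_share   # implies ~$160B total revenue

guardrail = 0.0015                            # actions capped at 0.15% of total revenue
max_enforceable = total_revenue * guardrail   # dollars anti-fraud work could touch

print(f"total revenue:            ${total_revenue / 1e9:.0f}B")    # $160B
print(f"enforcement cap:          ${max_enforceable / 1e9:.2f}B")  # $0.24B
print(f"share of fraud reachable: {max_enforceable / fraud_revenue:.1%}")  # 1.5%
```

In other words, the cap meant enforcement could address at most about 1.5 cents of every fraud dollar, which is why the insight reads it as a directive that anti-fraud efforts not succeed.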