Medium's CEO frames the fight over AI training data as a classic prisoner's dilemma. Because AI companies chose an "antisocial" path of scraping without collaboration, platforms are now forced to defect as well, blocking crawlers and threatening data poisoning to create leverage and bring the AI companies to the negotiating table.
The NYT's seemingly contradictory AI strategy is a deliberate two-pronged approach. Lawsuits enforce intellectual property rights and prevent unauthorized scraping, while licensing deals demonstrate a clear, sustainable market and fair value exchange for its journalism.
Medium's CEO revealed that the company that provided data for a critical Wired article about "AI slop" was simultaneously trying to sell its AI-detection services to Medium. This highlights a potential conflict of interest: a data source may benefit directly from negative press about the very company it is trying to sell to.
Content creators are in an impossible position. They can block Google's crawlers and lose their primary traffic source, effectively committing "business suicide." Alternatively, they can allow access, thereby providing the content that fuels the very AI systems undermining their business model.
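For concreteness, the crawler-by-crawler blocking both sides wield here is usually expressed in a site's robots.txt. The sketch below is a hypothetical publisher policy, not Medium's or any real site's file; it uses Python's standard urllib.robotparser and the publicly documented crawler tokens (OpenAI's GPTBot, Common Crawl's CCBot, Google's Google-Extended training opt-out) to show how a publisher can refuse AI training crawlers while leaving ordinary Googlebot indexing, and the search traffic it brings, intact. The example.com URL is a placeholder.

```python
# A minimal sketch of per-crawler blocking at the robots.txt level.
# The user-agent tokens are real, publicly documented ones; the policy
# itself is hypothetical, not any actual publisher's file.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot (OpenAI), CCBot (Common Crawl), and Google-Extended (Google's
# AI-training token) are refused; plain Googlebot search indexing is not.
for bot in ("GPTBot", "CCBot", "Google-Extended", "Googlebot"):
    print(bot, "->", parser.can_fetch(bot, "https://example.com/article"))
```

Note that robots.txt is advisory: a crawler can simply ignore it. That gap is why blocking gets paired with harder levers like the data-poisoning threats described above.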
Medium's CEO argues that the true measure of success against spam is not how much "AI slop" a platform receives, but how little of it reaches readers. The fight is won through sophisticated recommendation and filtering algorithms that protect the reader experience, rather than just blocking content at the source.
ChatGPT's inability to access The Wirecutter, owned by the litigious New York Times, exemplifies how corporate conflicts create walled gardens. Legal battles over proprietary data can be as significant a barrier as technical challenges, limiting the real-world effectiveness of AI agents and forcing users back to traditional web browsing for even simple questions.
The market reality is that consumers and businesses prioritize the best-performing AI models, regardless of whether their training data was ethically sourced. This dynamic incentivizes labs to use all available data, including copyrighted works, and treat potential fines as a cost of doing business.
AI services crawl web content but answer users directly instead of sending them to the source, breaking the traditional model where creators earn revenue from traffic. Without compensation, the incentive to produce quality content diminishes, putting the web's business model at risk.
Major AI players treat the market as a zero-sum, "winner-take-all" game. This triggers a prisoner's dilemma where each firm is incentivized to offer subsidized, unlimited-use pricing to gain market share, leading to a race to the bottom that destroys profitability for the entire sector and squeezes out smaller players.
Unlike Google Search, which drove traffic to publishers, AI tools like Perplexity summarize content directly, destroying publisher business models. This forces companies like the New York Times to take a hardline stance and demand direct, substantial licensing fees. Perplexity's actions are thus accelerating the shift to a content-licensing model for all AI companies.