Amid The New York Times' lawsuit against OpenAI, ChatGPT cannot access content from Wirecutter, the Times-owned product-review site. This highlights how legal battles over proprietary data are creating "walled gardens," limiting the capabilities of AI agents and forcing users back to traditional web browsing for specific tasks.

Related Insights

The industry has largely exhausted the public web data used to train foundational AI models, a point captured in the refrain "we've already run out of data." The next leap in AI capability and business value will come from harnessing the vast, proprietary data currently locked behind corporate firewalls.

The NYT's seemingly contradictory AI strategy is a deliberate two-pronged approach. Lawsuits enforce intellectual property rights and prevent unauthorized scraping, while licensing deals demonstrate a clear, sustainable market and fair value exchange for its journalism.

In an AI-driven ecosystem, data and content need to be fluidly accessible to various systems and agents. Any SaaS platform that feels like a "walled garden," locking content away, will be rejected by power users. The winning platforms will prioritize open, interoperable access to user data.

This conflict is bigger than business; it’s about societal health. If AI summaries decimate publisher revenues, the result is less investigative journalism and more information power concentrated in a few tech giants, threatening the diverse press that a healthy democracy relies upon.

The idea of a truly "open web" was a brief historical moment. Powerful, proprietary "organizing layers" like search engines and app stores inevitably emerge to centralize ecosystems and capture value. Today's AI chatbots are simply the newest form of these organizing layers.

The OpenAI-Disney partnership establishes a clear commercial value for intellectual property in the AI space. This sets a powerful market benchmark that will weigh on ongoing lawsuits (like NYT v. OpenAI), pressuring all other LLM developers to license content rather than scrape it for free and formalizing the market.

Amazon is suing Perplexity because Perplexity's AI agent can autonomously log into users' Amazon accounts and make purchases on their behalf. This isn't just a legal spat over terms of service; it's the first major corporate conflict over AI agent-driven commerce, foreshadowing a future where brands must contend with non-human customers.

OpenAI's platform strategy, which centralizes app distribution through ChatGPT, mirrors Apple's iOS model. This creates a 'walled garden' that could follow Cory Doctorow's 'enshittification' pattern: initially benefiting users, then locking them in, and finally exploiting them once they cannot easily leave the ecosystem.

Unlike Google Search, which drove referral traffic to publishers, AI tools like Perplexity summarize content directly, undermining publisher business models. This forces companies like The New York Times to take a hardline stance and demand direct, substantial licensing fees. Perplexity's actions are thus accelerating the shift to a content licensing model for all AI companies.