ChatGPT's inability to access The Wirecutter, owned by the litigious New York Times, exemplifies how corporate conflicts create walled gardens. It limits the real-world effectiveness of AI agents and shows that a business dispute can be as significant a barrier as any technical challenge, ultimately preventing users from getting simple answers.

Related Insights

The industry has already exhausted the public web data used to train foundation models, a point captured in the remark that "we've already run out of data." The next leap in AI capability and business value will come from harnessing the vast proprietary data currently locked behind corporate firewalls.

While tech giants could technically replicate Perplexity, their core business models—advertising for Google, e-commerce for Amazon—create a fundamental conflict of interest. An independent player can align purely with the user's best interests, creating a strategic opening that incumbents are structurally unable to fill without cannibalizing their primary revenue streams.

The NYT's seemingly contradictory AI strategy is a deliberate two-pronged approach. Lawsuits enforce intellectual property rights and prevent unauthorized scraping, while licensing deals demonstrate a clear, sustainable market and fair value exchange for its journalism.

Content creators are in an impossible position. They can block Google's crawlers and lose their primary traffic source, effectively committing "business suicide." Alternatively, they can allow access, thereby providing the content that fuels the very AI systems undermining their business model.
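For context on what "blocking crawlers" means in practice, the only broadly adopted opt-out mechanism is robots.txt, a voluntary convention keyed to each crawler's user-agent token. The sketch below is illustrative only: the robots.txt body and the user-agent names (Googlebot for search, GPTBot for OpenAI's training crawler) are assumptions to verify against each vendor's current documentation, and compliance is entirely at the crawler's discretion.

```python
from urllib import robotparser

# Illustrative robots.txt: allow the search crawler, disallow an AI
# training crawler. The user-agent tokens here are examples, not advice.
EXAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Allow: /
"""

def build_parser(robots_txt: str) -> robotparser.RobotFileParser:
    """Parse a robots.txt body directly, without fetching it over the network."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp

if __name__ == "__main__":
    rp = build_parser(EXAMPLE_ROBOTS_TXT)
    for agent in ("Googlebot", "GPTBot"):
        # can_fetch() reports whether the rules permit this agent to crawl the URL.
        allowed = rp.can_fetch(agent, "https://example.com/reviews/best-blender/")
        print(f"{agent}: fetch allowed = {allowed}")
```

Run as written, this reports that the search crawler may fetch the page while the AI crawler may not, which is the split publishers would prefer. The dilemma described above persists because these rules are honored voluntarily, and to the extent a single crawler feeds both search indexing and AI features, there is no clean way to keep the traffic while withholding the content.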

This conflict is bigger than business; it’s about societal health. If AI summaries decimate publisher revenues, the result is less investigative journalism and more information power concentrated in a few tech giants, threatening the diverse press that a healthy democracy relies upon.

The idea of a truly "open web" was a brief historical moment. Powerful, proprietary "organizing layers" like search engines and app stores inevitably emerge to centralize ecosystems and capture value. Today's AI chatbots are simply the newest form of these organizing layers.

The current trend toward closed, proprietary AI systems is a misguided and ultimately ineffective strategy. Ideas and talent circulate regardless of corporate walls. True, defensible innovation is fostered by openness and the rapid exchange of research, not by secrecy.

The main barrier to AI's impact is not its technical flaws but the fact that most organizations don't understand what it can actually do. Advanced features like 'deep research' and reasoning models remain unused by over 95% of professionals, leaving immense potential and competitive advantage untapped.

OpenAI's platform strategy, which centralizes app distribution through ChatGPT, mirrors Apple's iOS model. This creates a 'walled garden' that could follow Cory Doctorow's 'enshittification' pattern: first benefiting users, then locking them in, and finally exploiting them once they cannot easily leave the ecosystem.

Unlike Google Search, which drove traffic, AI tools like Perplexity summarize content directly, destroying publisher business models. This forces companies like the New York Times to take a hardline stance and demand direct, substantial licensing fees. Perplexity's actions are thus accelerating the shift to a content licensing model for all AI companies.