The initial AI boom was fueled by scraping the public internet. Cuban predicts the next phase will be dominated by exclusive data deals. Content owners, like medical journals, will protect their IP and auction it to the highest-bidding AI companies, creating valuable data silos.

Related Insights

The industry has already exhausted the public web data used to train foundational AI models, a point summed up in the observation that "we've already run out of data." The next leap in AI capability and business value will come from harnessing the vast, proprietary data currently locked behind corporate firewalls.

LLMs have hit a wall by scraping nearly all available public data. The next phase of AI development and competitive differentiation will come from training models on high-quality, proprietary data generated by human experts. This creates a booming "data as a service" industry for companies like Micro One that recruit and manage these experts.

Public internet data has been largely exhausted for training AI models. The real competitive advantage and source for next-generation, specialized AI will be the vast, untapped reservoirs of proprietary data locked inside corporations, like R&D data from pharmaceutical or semiconductor companies.

Mark Cuban warns that patenting work makes it public, allowing any AI model to train on it instantly. To maintain a competitive data advantage, he suggests companies should increasingly rely on trade secrets, keeping their valuable IP out of the public domain and away from competitors' models.

A key competitive advantage for AI companies lies in capturing proprietary outcomes data by owning a customer's end-to-end workflow. This data, such as which legal cases are won or lost, is not publicly available. It creates a powerful feedback loop where the AI gets smarter at predicting valuable outcomes, a moat that general models cannot replicate.

Cuban identifies a massive, overlooked opportunity: acquiring the intellectual property (patents, data, designs) from millions of defunct businesses. This "dead IP" could be aggregated and sold at a high premium to foundational model companies desperate for unique training data.

For years, access to compute was the primary bottleneck in AI development. Now that public web data is largely exhausted, the limiting factor is access to high-quality, proprietary data from enterprises and human experts. The focus shifts from building massive infrastructure to securing data partnerships and domain expertise.

With the public internet fully indexed, LLMs now require net-new, high-fidelity data to improve. This has created a booming market for domain experts in fields like law, finance, and medicine to work as freelance "AI trainers." This new job category involves creating complex, proprietary data sets, often for high compensation.

The future of valuable AI lies not in models trained on the abundant public internet, but in those built on scarce, proprietary data. For fields like robotics and biology, this data doesn't exist to be scraped; it must be actively created, making the data generation process itself the key competitive moat.

As algorithms become more widespread, the key differentiator for leading AI labs is exclusive access to vast, private data sets. xAI has X (formerly Twitter), Google has YouTube, and OpenAI has user conversations, creating unique training advantages that are nearly impossible for others to replicate.