The proliferation of sensors, especially cameras, will generate massive amounts of video data. This data must be uploaded to cloud AI models for processing, making robust upstream bandwidth—not just downstream—the critical new infrastructure bottleneck and a significant opportunity for telecom companies.
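To make the bandwidth claim concrete, here is a minimal back-of-envelope sketch; the camera count and per-stream bitrate are illustrative assumptions, not figures from the source.

```python
# Illustrative estimate of aggregate upstream bandwidth needed to stream
# camera video to cloud AI models. The camera count and per-stream bitrate
# below are assumptions for illustration only.

def upstream_gbps(num_cameras: int, bitrate_mbps: float) -> float:
    """Aggregate upstream bandwidth in Gbit/s for continuously streaming cameras."""
    return num_cameras * bitrate_mbps / 1_000

# e.g. 100,000 city cameras, each streaming 1080p at ~4 Mbit/s
print(f"{upstream_gbps(100_000, 4):.0f} Gbit/s sustained upstream")  # -> 400 Gbit/s
```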

Related Insights

A new wave of startups, such as Parallel, founded by Twitter's former CEO, is attracting significant investment to build web infrastructure specifically for AI agents. Instead of ranking links for humans, these systems deliver optimized data directly to AI models, signaling a fundamental shift in how the internet will be structured and consumed.

Direct AI disruption is a minimal concern for telecom companies. The more significant threat comes from hyperscalers like AWS and Azure, which already dominate Europe's B2B cloud market with an 85% share. The real risk is these giants leveraging their cloud infrastructure to enter the B2C telecom space via virtualized networks.

Today's AI is largely text-based (LLMs). The next phase involves Visual Language Models (VLMs) that interpret and interact with the physical world for applications such as robotics and surgery. This transition requires a 50-1000x increase in compute, underpinning the long-term AI infrastructure build-out.
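
A rough way to see why visual workloads multiply compute is to compare token throughput; the patch count, frame rate, and text rate in this sketch are illustrative assumptions, not figures from the source.

```python
# Minimal sketch of why visual workloads multiply compute: a video stream,
# tokenized frame by frame, yields orders of magnitude more tokens than text.
# The patch count, frame rate, and text rate are illustrative assumptions.

TOKENS_PER_FRAME = 256   # assumed visual tokens per frame (ViT-style patching)
FRAMES_PER_SEC = 10      # assumed sampling rate fed to the model
TEXT_TOKENS_PER_SEC = 5  # rough rate of a text conversation

video_tokens_per_sec = TOKENS_PER_FRAME * FRAMES_PER_SEC   # 2,560 tokens/s
ratio = video_tokens_per_sec / TEXT_TOKENS_PER_SEC         # ~512x more input to process
print(f"video feeds ~{ratio:.0f}x more tokens per second than text")
```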

The current AI infrastructure build-out is structurally safer than the late-90s telecom boom. Today's spending is driven by highly rated, cash-rich hyperscalers, whereas the telecom boom was fueled by highly leveraged, barely investment-grade companies; the risk today sits with far stronger balance sheets and is more safely distributed.

Unlike the speculative "dark fiber" buildout of the dot-com bubble, today's AI infrastructure race is driven by real, immediate, and overwhelming demand. The problem isn't built capacity sitting idle; it's a constant struggle to build supply fast enough to meet customer needs.

For years, access to compute was the primary bottleneck in AI development. Now, as public web data is largely exhausted, the limiting factor is access to high-quality, proprietary data from enterprises and human experts. This shifts the focus from building massive infrastructure to forming data partnerships and securing domain expertise.

Unlike the dot-com bubble's finite need for fiber optic cables, the demand for AI is infinite because it's about solving an endless stream of problems. This suggests the current infrastructure spending cycle is fundamentally different and more sustainable than previous tech booms.

The primary reason multi-million dollar AI initiatives stall or fail is not the sophistication of the models, but the underlying data layer. Traditional data infrastructure creates delays in moving and duplicating information, preventing the real-time, comprehensive data access required for AI to deliver business value. The focus on algorithms misses this foundational roadblock.
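
One way to picture the data-layer delay is to compare the staleness of a nightly batch copy with a streaming feed; the 24-hour window and 2-second lag below are illustrative assumptions, not figures from the source.

```python
# Minimal sketch of the staleness problem described above: data duplicated on
# a nightly batch schedule can be many hours old when an AI system reads it,
# while streaming access keeps it near real time. Both figures are assumptions.

BATCH_WINDOW_H = 24   # assumed nightly ETL copy
STREAM_LAG_S = 2      # assumed end-to-end streaming latency

worst_case_batch_staleness_h = BATCH_WINDOW_H      # record written just after a copy
avg_batch_staleness_h = BATCH_WINDOW_H / 2
print(f"batch copy: up to {worst_case_batch_staleness_h} h stale "
      f"(avg ~{avg_batch_staleness_h:.0f} h); streaming: ~{STREAM_LAG_S} s")
```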

The infrastructure demands of AI have caused an exponential increase in data center scale. Two years ago, a 1-megawatt facility was considered a good size. Today, a large AI data center is a 1-gigawatt facility—a 1000-fold increase. This rapid escalation underscores the immense and expensive capital investment required to power AI.
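
The arithmetic behind that jump, with an assumed per-accelerator power draw to hint at what a gigawatt-class site implies; the 1 kW figure is an illustrative assumption, not from the source.

```python
# Scale factor from a 1 MW facility to a 1 GW facility, plus a rough sense of
# device count at an assumed ~1 kW per accelerator (including overhead).

MW = 1_000_000        # watts
GW = 1_000_000_000    # watts

scale_factor = GW / MW              # 1,000x
gpu_power_w = 1_000                 # assumed per-accelerator draw, incl. overhead
accelerators = GW / gpu_power_w     # ~1,000,000 devices under that assumption
print(f"{scale_factor:.0f}x jump; ~{accelerators:,.0f} accelerators at {gpu_power_w} W each")
```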

The excitement around AI capabilities often masks the real hurdle to enterprise adoption: infrastructure. Success is not determined by the model's sophistication, but by first solving foundational problems of security, cost control, and data integration. This requires a shift from an application-centric to an infrastructure-first mindset.