The Sora team views video as having lower "intelligence per bit" than text. However, the total volume of available video data is vastly larger and far less tapped. This suggests that, unlike LLMs facing a data crunch, video models can keep scaling on new data for a long time to come.

Related Insights

AI generating high-quality animation is more impressive than photorealism because of the extreme scarcity of training data (thousands of hours of animation exist versus millions of hours of live-action video). Sora 2's success here suggests a fundamental improvement in learning efficiency, not just a brute-force data advantage.

The computational requirements for generative media scale dramatically across modalities. If a 200-token LLM prompt costs 1 unit of compute, a single image costs 100x that, and a 5-second video costs another 100x on top of that, a 10,000x increase overall. Rendering that video in 4K adds another 10x multiplier.
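
As a back-of-the-envelope check, the multipliers above simply compound. The sketch below chains them; the base unit is arbitrary, and the ratios are the paragraph's own rough estimates, not measured benchmarks.

```python
# Chain the rough compute multipliers described above. The base unit is
# arbitrary; the ratios are the paragraph's estimates, not benchmarks.
llm_prompt = 1                   # 200-token LLM prompt: 1 unit
image = llm_prompt * 100         # single image: ~100x the prompt
video_5s = image * 100           # 5-second video: ~100x the image
video_5s_4k = video_5s * 10      # same clip in 4K: ~10x more

for name, units in [("LLM prompt", llm_prompt), ("image", image),
                    ("5s video", video_5s), ("5s 4K video", video_5s_4k)]:
    print(f"{name:>12}: {units:>9,}x")
# LLM prompt: 1x, image: 100x, 5s video: 10,000x, 5s 4K video: 100,000x
```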

While today's focus is on text-based LLMs, the true, defensible AI battleground will be in complex modalities like video. Generating video requires multiple interacting models and unique architectures, creating far greater potential for differentiation and a wider competitive moat than text-based interfaces, which will become commoditized.

Sora doesn't process pixels or frames individually. Instead, it uses "space-time tokens" — small cuboids of video data combining spatial and temporal information. This voxel-like representation is the fundamental unit, enabling the model to understand properties like object permanence through global attention.
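
Sora's tokenizer is not public, so the following is only a minimal NumPy sketch of what cuboid tokenization could look like: it carves a video tensor into patches that each span several frames and a small spatial window. The patch sizes are illustrative, and real systems patch a compressed latent rather than raw pixels.

```python
import numpy as np

def spacetime_tokens(video, pt=4, ph=16, pw=16):
    """Split a video of shape (T, H, W, C) into cuboid patches.

    Each token spans `pt` frames and a `ph` x `pw` spatial window,
    flattened into one vector, so attention operates over units that
    mix space and time. Patch sizes here are illustrative assumptions.
    """
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    v = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    # Group the cuboid axes together: (nT, nH, nW, pt, ph, pw, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)
    return v.reshape(-1, pt * ph * pw * C)  # (num_tokens, token_dim)

tokens = spacetime_tokens(np.zeros((16, 256, 256, 3)))
print(tokens.shape)  # (1024, 3072): 4*16*16 tokens of 4*16*16*3 values
```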

The generative video space is evolving so rapidly that a model ranked in the top five has a half-life of just 30 days. This extreme churn makes it impractical for developers to bet on a single model, driving them towards aggregator platforms that offer access to a constantly updated portfolio of models.

A critical weakness of current AI models is their inefficient learning process. They require orders of magnitude more experience, sometimes 100,000 times the data a human encounters in a lifetime, to acquire comparable skills. This highlights a key difference from human cognition and a major hurdle for developing more advanced, human-like AI.
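
For a sense of where a figure like 100,000x can come from, here is the rough arithmetic under loudly illustrative assumptions: a human's lifetime linguistic exposure on the order of a billion words, and a frontier pretraining corpus on the order of 10^14 tokens.

```python
# Order-of-magnitude arithmetic only; both inputs are assumptions.
human_lifetime_words = 1e9     # lifetime linguistic exposure, ~10^9 words
training_corpus_tokens = 1e14  # frontier-scale pretraining corpus, ~10^14 tokens

ratio = training_corpus_tokens / human_lifetime_words
print(f"model experience / human experience ~ {ratio:,.0f}x")  # ~100,000x
```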

To analyze video cost-effectively, Tim McLear uses a cheap, fast model to generate captions for individual frames sampled every five seconds. He then packages these low-level descriptions together with the audio transcript and sends them to a powerful reasoning model, whose job is to synthesize all the data into a high-level summary of the video.
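
The specific models McLear uses aren't named, so this is only a sketch of the two-stage pattern; `caption_frame` and `reasoning_model` are hypothetical callables standing in for whatever cheap captioner and powerful LLM you plug in.

```python
def summarize_video(frames, transcript, caption_frame, reasoning_model,
                    fps=30, sample_every_s=5):
    """Two-stage analysis: cheap per-frame captions, then one expensive pass.

    `caption_frame(image) -> str` and `reasoning_model(prompt) -> str` are
    hypothetical stand-ins for a fast captioning model and a reasoning LLM.
    """
    # Stage 1: caption one frame every `sample_every_s` seconds.
    step = fps * sample_every_s
    captions = [f"[{i // fps:>4}s] {caption_frame(frames[i])}"
                for i in range(0, len(frames), step)]

    # Stage 2: ship all low-level descriptions plus the transcript to the
    # reasoning model, whose only job is high-level synthesis.
    prompt = ("Write a high-level summary of this video from the frame "
              "captions and audio transcript below.\n\n"
              "Frame captions:\n" + "\n".join(captions) +
              "\n\nAudio transcript:\n" + transcript)
    return reasoning_model(prompt)
```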

The primary challenge in creating stable, real-time autoregressive video is error accumulation. Like early LLMs getting stuck in loops, video models degrade frame by frame until the output is useless. Overcoming this compounding error, not just raw processing speed, is the core research breakthrough required for long-form generation.
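
A toy simulation makes the failure mode concrete. Each generated frame inherits the previous frame's error plus fresh noise; if off-distribution frames also degrade the next prediction (the `feedback` factor below, an illustrative assumption), error compounds geometrically instead of merely adding up.

```python
import random

def rollout(n_frames, eps=0.01, feedback=1.0):
    """Toy autoregressive generation. Each frame carries the previous
    frame's error forward (scaled by `feedback`) plus fresh noise in
    [0, eps). feedback = 1.0: errors merely accumulate; feedback > 1.0:
    off-distribution frames worsen the next prediction, so errors compound."""
    error = 0.0
    for _ in range(n_frames):
        error = error * feedback + random.uniform(0, eps)
    return error

random.seed(0)
# 600 frames is ~20 seconds at 30 fps.
print(f"additive drift:    {rollout(600, feedback=1.00):.2f}")   # ~3: slow decay
print(f"compounding drift: {rollout(600, feedback=1.02):.2e}")   # explodes
```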

The primary performance bottleneck for LLMs is memory bandwidth (moving large weights), making them memory-bound. In contrast, diffusion-based video models are compute-bound, as they saturate the GPU's processing power by simultaneously denoising tens of thousands of tokens. This represents a fundamental difference in optimization strategy.
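
A roofline-style back-of-the-envelope shows why. Arithmetic intensity for one pass over the weights scales with the number of tokens processed per pass, so single-token decoding sits far below a GPU's ridge point while a denoising step over tens of thousands of tokens sits far above it. Hardware figures below are approximate H100-class specs, and the 10B-parameter model size is illustrative.

```python
# Roofline-style sketch. Hardware numbers approximate an H100-class GPU;
# the 10B-parameter model size is illustrative, not a specific system.
PEAK_FLOPS = 1e15              # ~1000 TFLOP/s (bf16, rough)
HBM_BANDWIDTH = 3.35e12        # ~3.35 TB/s
RIDGE = PEAK_FLOPS / HBM_BANDWIDTH   # FLOPs/byte needed to saturate compute

def intensity(tokens_per_pass, params=10e9, bytes_per_param=2):
    """Arithmetic intensity of one forward pass over the weights:
    ~2 FLOPs per parameter per token, with the weights read once."""
    flops = 2 * params * tokens_per_pass
    bytes_moved = params * bytes_per_param
    return flops / bytes_moved

print(f"ridge point:           {RIDGE:8.0f} FLOPs/byte")
print(f"LLM decode, 1 token:   {intensity(1):8.0f} FLOPs/byte (memory-bound)")
print(f"diffusion, 30k tokens: {intensity(30_000):8.0f} FLOPs/byte (compute-bound)")
```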

Google's strategy involves building specialized models (e.g., Veo for video) to push the frontier in a single modality. The learnings and breakthroughs from these focused efforts are then integrated back into the core, multimodal Gemini model, accelerating its overall capabilities.