Google's research head distinguishes innovation—the continuous, iterative process of improvement applied across product and research—from true breakthroughs: fundamental shifts that solve problems previously considered unsolvable in principle, such as the Transformer architecture that underpins modern AI.

Related Insights

While more data and compute yield linear improvements, true step-function advances in AI come from unpredictable algorithmic breakthroughs like the Transformer. These creative ideas are the hardest to produce deliberately, and they represent the highest-leverage, yet riskiest, area for investment and research focus.

Unlike traditional engineering, breakthroughs in foundational AI research often feel binary. A model can be completely broken until a handful of key insights are discovered, at which point it suddenly works. This "all or nothing" dynamic makes it impossible to predict timelines, as you don't know if a solution is a week or two years away.

Conventional innovation starts with a well-defined problem. Nubar Afeyan argues this is limiting: a more powerful approach is to search for new value pools by exploring problems and potential solutions in parallel, allowing for unexpected discoveries that problem-first thinking would miss.

Instead of a linear handoff, Google fosters a continuous loop where real-world problems inspire research, which is then applied to products. This application, in turn, generates the next set of research questions, creating a self-reinforcing cycle that accelerates breakthroughs.

The era of guaranteed progress by simply scaling up compute and data for pre-training is ending. With massive compute now available, the bottleneck is no longer resources but fundamental ideas. The AI field is re-entering a period where novel research, not just scaling existing recipes, will drive the next breakthroughs.

While language models are becoming incrementally better at conversation, the next significant leap in AI is defined by multimodal understanding and the ability to perform tasks, such as navigating websites. This shift from conversational prowess to agentic action marks the new frontier for a true "step change" in AI capabilities.

Contrary to the prevailing "scaling laws" narrative, leaders at Z.AI believe that simply adding more data and compute to current Transformer architectures yields diminishing returns. They operate under the conviction that a fundamental performance "wall" exists, necessitating research into new architectures for the next leap in capability.

Nubar Afeyan argues that companies should pursue two innovation tracks. Continuous innovation should build from the present forward. Breakthroughs, however, require envisioning a future state without a clear path and working backward to identify the necessary enabling steps.

The true measure of a new AI model's power isn't just improved benchmarks, but a qualitative shift in fluency that makes using previous versions feel "painful." This experiential gap, where the old model suddenly feels worse at everything, is the real indicator of a breakthrough.

Cohere's CEO believes that if Google had kept the Transformer paper private, another team would have arrived at the same architecture within 18 months. The key ideas were already circulating in the research community, making the discovery a synthesis whose time had come rather than a singular stroke of genius.

Innovation is Constant Improvement; Breakthroughs Solve Problems Previously Deemed Unsolvable | RiffOn