We scan new podcasts and send you the top 5 insights daily.
The "low-hanging fruit" argument for diminishing returns in science is flawed because it assumes a static problem space. Progress is often explosive when entirely new fields, like computer science, emerge from other domains, opening up a fresh landscape of easy problems where rapid breakthroughs are once again possible.
While more data and compute yield steady, predictable improvements, true step-function advances in AI come from unpredictable algorithmic breakthroughs like Transformers. These creative leaps are the hardest to produce on demand, making them the highest-leverage, yet riskiest, area for investment and research focus.
A 10x increase in compute may only yield a one-tier improvement in model performance. This appears inefficient but can be the difference between a useless "6-year-old" intelligence and a highly valuable "16-year-old" intelligence, unlocking entirely new economic applications.
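The "10x compute for one tier" dynamic follows from power-law scaling: each multiplicative jump in compute buys only a constant fractional reduction in loss. A minimal sketch, where the constants `a` and `alpha` are illustrative placeholders rather than fitted values from any real model family:

```python
def power_law_loss(compute, a=1.0, alpha=0.05):
    """Hypothetical scaling-law loss: L(C) = a * C**(-alpha).

    a and alpha are assumed illustrative constants, not measurements.
    """
    return a * compute ** (-alpha)

# A 10x jump in compute shrinks loss by the same constant factor
# (10**-alpha, ~0.89 here) no matter where you start on the curve:
for c in (1e20, 1e21, 1e22):
    ratio = power_law_loss(10 * c) / power_law_loss(c)
    print(f"C = {c:.0e}: loss ratio after 10x compute = {ratio:.3f}")
```

The flat ratio is the "inefficiency": ten times the spend moves the loss only modestly. The insight above is that a modest loss drop can still cross a capability threshold that unlocks disproportionate economic value.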
Difficult challenges often remain unsolved because they are consistently approached with the same tools and viewpoints. True progress requires introducing a novel perspective, a new tool, or temporarily shifting focus to a more tractable problem.
Unlike fields with finite demand, the appetite for scientific discovery is infinite. Therefore, automating science won't displace scientists. Instead, it will create more questions and opportunities, transforming the scientist's role into a manager or 'wrangler' of AI systems that explore hundreds of ideas simultaneously.
Citing Leopold Aschenbrenner's essay, the hosts argue that AI progress isn't linear. It relies on "unhobblings"—fundamental advances like new attention mechanisms that unlock massive, non-linear gains, defying simple extrapolation of current trends.
Ilya Sutskever argues the 'age of scaling' is ending. Further progress towards AGI won't come from just making current models bigger. The new frontier is fundamental research to discover novel paradigms and bend the scaling curve, a strategy his company SSI is pursuing.
The massive investment in AI mirrors the HFT speed race. Both are driven by a fear of falling behind and operate on a logarithmic curve of diminishing returns, where each incremental gain requires exponentially more resources. The strategic question in both fields becomes how far to push.
Broad improvements in AI's general reasoning are plateauing due to data saturation. The next major phase is vertical specialization. We will see an "explosion" of different models becoming superhuman in highly specific domains like chemistry or physics, rather than one model getting slightly better at everything.
The era of guaranteed progress by simply scaling up compute and data for pre-training is ending. With massive compute now available, the bottleneck is no longer resources but fundamental ideas. The AI field is re-entering a period where novel research, not just scaling existing recipes, will drive the next breakthroughs.
Contrary to the "bitter lesson" narrative that scale is all that matters, novel ideas remain a critical driver of AI progress. The field is not yet experiencing diminishing returns on new concepts; game-changing ideas are still being invented and are essential for making scaling effective in the first place.