
Liskov notes that criticism of her Turing Award often came from people who took her contributions, like data abstraction, for granted. The ideas were so deeply integrated into modern programming that younger generations couldn't imagine a time before they existed, making the invention itself invisible—a testament to its profound impact.
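
For readers who have only ever programmed with it, data abstraction is that now-invisible idea: a type is defined by the operations it offers, not by its stored representation. A minimal Python sketch (the IntSet class here is illustrative, echoing the int_set example from Liskov's CLU work):

```python
class IntSet:
    """An abstract data type in the CLU tradition: callers use insert
    and contains, and never touch the representation."""

    def __init__(self):
        self._elements = []            # representation is hidden...

    def insert(self, x: int) -> None:
        if x not in self._elements:
            self._elements.append(x)

    def contains(self, x: int) -> bool:
        return x in self._elements     # ...and could be swapped for a
                                       # sorted array or hash table
                                       # without changing any caller

s = IntSet()
s.insert(3)
print(s.contains(3))  # True
```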

Related Insights

Sci-fi predicted parades when AI passed the Turing test, but in reality, it happened with models like GPT-3.5 and the world barely noticed. This reveals humanity's incredible ability to quickly normalize profound technological leaps and simply move the goalposts for what feels revolutionary.

Intel's team viewed their first microprocessor as an incremental improvement for building calculators, not a world-changing invention. The true revolution was sparked by outsiders who applied the technology in unforeseen ways, like building the first personal computers. This highlights that creators often cannot predict the true impact of their inventions.

Liskov developed her famous substitution principle by analyzing inheritance in Smalltalk. Her research group focused on defining modules by their specified behavior, not their internal implementation. That perspective let her solve a problem the implementation-focused OOP community was struggling with: a subclass must behave like its superclass to be substitutable for it.
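
A minimal Python sketch of that substitutability requirement (class names hypothetical): code written against a superclass's specified behavior must keep working when handed any subclass.

```python
from abc import ABC, abstractmethod

class Bag(ABC):
    """Specification: put(x) always succeeds and size() grows by one."""
    @abstractmethod
    def put(self, item): ...
    @abstractmethod
    def size(self) -> int: ...

class ListBag(Bag):
    def __init__(self):
        self._items = []
    def put(self, item):
        self._items.append(item)
    def size(self) -> int:
        return len(self._items)

class BoundedBag(Bag):
    """Compiles fine, but strengthens put()'s precondition, so it
    violates the superclass's specified behavior."""
    def __init__(self, limit=1):
        self._items, self._limit = [], limit
    def put(self, item):
        if len(self._items) >= self._limit:
            raise OverflowError("bag full")
        self._items.append(item)
    def size(self) -> int:
        return len(self._items)

def fill(bag: Bag, items):
    # Written against Bag's specification; correct for any true subtype.
    for item in items:
        bag.put(item)
    return bag.size()

print(fill(ListBag(), [1, 2, 3]))   # 3, as the Bag spec promises
try:
    fill(BoundedBag(), [1, 2, 3])   # BoundedBag is not a behavioral subtype
except OverflowError as e:
    print("substitution failed:", e)
```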

Liskov chose academia for the freedom to pursue any research direction she found interesting. However, she calls this a "gift and a curse." The gift is total autonomy; the curse is that your success, including tenure, is ultimately decided by how the broader research community values the problems you choose to solve and your contributions.

The definition of a top-tier individual contributor can change as a company matures. At Mozilla, the "Distinguished Engineer" role evolved from recognizing deep knowledge of the internal codebase to rewarding those who drove world-changing impact on industry standards and web technologies.

John Martinis's 1985 experiment demonstrating quantum mechanics at a macroscopic scale was noteworthy but not seen as a Nobel-worthy breakthrough at the time. Its significance grew over decades as it became the foundation for the burgeoning field of quantum computing, showing the long-tail impact of foundational research.

Amjad Masad draws a parallel between modern AI-powered coding in English and Grace Hopper's creation of the compiler. Both were forms of abstraction met with skepticism from purists who believed developers needed to work at a lower level (machine code then, traditional coding now).

Thomas Peterffy frames AI not as a separate category of technology but as a natural evolution in programming. He sees it as the ultimate high-level language: the progression runs from machine code to assembler to natural language, with each step qualitatively part of the same developmental path.

Great ideas like deep learning were not immediately recognized; their value emerged over time as others built upon them. This suggests an idea's fruitfulness is a product of its context and cultural adoption, not just its isolated brilliance, making it difficult for an AI to judge a new idea's ultimate impact.

The foundational concept behind modern LLMs, the attention mechanism, originated with an intern, Dzmitry "Dima" Bahdanau, in Yoshua Bengio's lab. The idea was so compelling that its potential for success was apparent as soon as it was explained, before any code was written.
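
The mechanism itself fits in a few lines. A minimal NumPy sketch of attention in its modern scaled dot-product form (Bahdanau's original 2014 formulation used an additive score, but the core idea, a learned weighted average over inputs, is the same):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value by how well its key matches the query,
    then return the weighted average of the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # attention-weighted values

# Toy example: 2 queries attending over 3 key/value pairs, dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)   # (2, 4)
```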