The world has never been truly deterministic, but slower cycles of change made deterministic thinking a less costly error. Today, the rapid pace of technological and social change means that acting as if the world is predictable gets punished much more quickly and severely.

Related Insights

The legal system, for all its formal structure, is fundamentally non-deterministic and shaped by human factors. Layering equally non-deterministic AI systems onto this already unpredictable human process poses a deep philosophical challenge to the notion of law as computable and deterministic.

With past shifts like the internet or mobile, we understood the physical constraints (e.g., modem speeds, battery life). With generative AI, we lack a theoretical understanding of its scaling potential, making it impossible to forecast its ultimate capabilities beyond "vibes-based" guesses from experts.
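
To make the contrast concrete, here is an illustrative sketch (not a real fit) of why even empirical power-law scaling curves of the form loss ≈ a·N^(−b) don't settle the question: disagreements over the fitted exponent that are well within measurement noise produce wildly different forecasts once you extrapolate several orders of magnitude. All constants below are assumptions.

```python
# Illustrative only: an empirical power-law scaling curve of the kind fit
# to model training runs. The constants a and b are made up; the point is
# how sensitive long-range extrapolation is to the fitted exponent.

def loss(n_params: float, a: float = 10.0, b: float = 0.07) -> float:
    """Hypothetical loss as a power law in parameter count N."""
    return a * n_params ** -b

for b in (0.05, 0.07, 0.09):    # disagreements well within fitting noise
    measured = loss(1e9, b=b)   # regime where we have real data points
    forecast = loss(1e13, b=b)  # four orders of magnitude beyond it
    print(f"b={b:.2f}: loss at 1e9 params = {measured:.2f}, "
          f"at 1e13 params = {forecast:.2f}")
```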

Seemingly sudden crashes in tech and markets are not abrupt events but the result of "interpretation debt"—when a system's output capability grows faster than the collective ability to understand, review, and trust it, leading to a quiet erosion of trust.
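
A toy model makes the dynamic visible; every rate and threshold below is an assumption chosen purely for illustration. Output capability compounds multiplicatively while review capacity grows linearly, so the debt accumulates quietly for years before it crosses a trust threshold all at once.

```python
# Toy model of "interpretation debt". Every number is an assumption:
# output capability compounds 50% per quarter, review capacity adds a
# fixed increment, and trust breaks when the gap crosses a threshold.

output, review, threshold = 1.0, 1.0, 50.0

for quarter in range(1, 21):
    output *= 1.5           # what the system can produce compounds
    review += 0.5           # what humans can understand grows linearly
    debt = output - review  # unreviewed output quietly accumulates
    if debt > threshold:
        print(f"Quarter {quarter}: debt {debt:.1f} crosses the threshold -- "
              "the 'sudden' crash was building the whole time")
        break
```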

In the current AI landscape, knowledge and assumptions become obsolete within months, not years. This rapid pace of evolution creates significant stress, as investors and founders must constantly re-educate themselves to make informed decisions. Relying on past knowledge is a quick path to failure.

In the AI era, the pace of change is so fast that by the time academic studies on "what works" are published, the underlying technology is already outdated. Leaders must therefore rely on conviction and rapid experimentation rather than waiting for validated evidence to act.

The most pressing AI safety issues today, like "GPT psychosis" or AI companions affecting birth rates, were not the doomsday scenarios predicted years ago. The field is less about solving for predictable, sci-fi-style risks than about reacting to unforeseen "unknown unknowns," which makes proactive defense incredibly difficult.

Vinod Khosla's core philosophy is that only improbable, black-swan events create significant change. Since you can't predict which improbable event will matter, the correct strategy is to build maximum agility and adaptability to seize opportunities as they arise.

Unlike other industries accustomed to deterministic software, the finance world is already familiar with non-deterministic systems through stochastic pricing models and market analysis. This cultural familiarity gives financial professionals a head start in embracing the probabilistic nature of modern AI tools.
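
As a reminder of how routine this mindset is in finance, here is a minimal Monte Carlo pricing sketch of the familiar textbook kind (parameters are illustrative, not calibrated to any market): simulate geometric Brownian motion terminal prices and average the discounted payoff of a European call.

```python
# Minimal Monte Carlo sketch of stochastic pricing: terminal prices under
# geometric Brownian motion, discounted expected payoff of a European call.
# All parameters are illustrative, not calibrated to any real market.
import math
import random

def mc_call_price(s0=100.0, strike=105.0, r=0.03, sigma=0.2, t=1.0, n=100_000):
    payoffs = 0.0
    for _ in range(n):
        z = random.gauss(0.0, 1.0)  # one random draw of the market
        s_t = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoffs += max(s_t - strike, 0.0)
    return math.exp(-r * t) * payoffs / n  # discount the average payoff

print(mc_call_price(), mc_call_price())  # two runs, two nearby answers
```

Two runs return two slightly different prices, and practitioners treat that as normal: the output is a distribution estimate with error bars, which is exactly the posture probabilistic AI tools demand.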

Quoting G.K. Chesterton, Antti Ilmanen highlights that markets are "nearly reasonable, but not quite." This is a trap for purely logical investors: the market's apparent precision is obvious, while its underlying randomness stays hidden. It underscores the need for deep humility when forecasting financial markets.

AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior and to anticipate data manipulation and unexpected events.