A mindset prevalent in tech views the world as a series of databases that can be queried and controlled with structured language. It assumes that by controlling the software representation, one can control reality. This fails to account for the complexity and messiness of human systems, which cannot be neatly captured in automatable loops.

Related Insights

Contrary to the hype, AI isn't a substitute for human thought. It's a powerful pattern-matching tool that consumes vast amounts of data. A growing problem is that AI is increasingly trained on its own regurgitated output, creating a closed loop that lacks genuine novelty or external grounding.
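
To see why that closed loop loses novelty, here is a minimal sketch (an illustration, not the source's example): a Gaussian fit stands in for a generative model, and each "generation" trains only on the previous model's own, typicality-biased output.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real world" data that generation 0 trains on.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(8):
    # Stand-in "generative model": a Gaussian fit to the training data.
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on this model's own output, biased
    # toward its most typical samples (mimicking likelihood-weighted
    # generation). The tails vanish and the variance shrinks every pass.
    samples = rng.normal(mu, sigma, size=10_000)
    data = samples[np.abs(samples - mu) < 1.5 * sigma]
```

The spread shrinks geometrically: each pass keeps only the model's most typical outputs, so rare-but-real cases disappear from the training data.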

The complexity in LLMs isn't intelligence emerging in silicon; it reflects our own. These models are deep because they encode the vast, causally powerful structure of human language and culture. We are looking at a high-resolution imprint of our own collective mind.

A core failure of current AI products is that they require users to make their lives 'legible' by consolidating all their data. This asks people to conform to the machine's needs, reversing the fundamental design principle that computers should adapt to people, not the other way around.

Predictive technology introduces a fundamental tension. While AI offers unprecedented clarity into future outcomes, its very implementation makes the world more complex and interconnected. This creates a feedback loop where the tool for prediction is also a source of new, unpredictable variables.
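
As a toy illustration of that tension (the scenario and numbers are invented for this sketch): a route predictor whose recommendation changes the very congestion it is trying to predict.

```python
import random

random.seed(0)
travel_time = {"A": 10.0, "B": 10.0}  # minutes per road, updated daily

for day in range(8):
    # The predictor recommends yesterday's faster road...
    predicted_best = min(travel_time, key=travel_time.get)
    # ...most drivers follow the prediction, and their load congests it.
    load = {road: (0.8 if road == predicted_best else 0.2)
            for road in travel_time}
    for road in travel_time:
        travel_time[road] = 10.0 + 8.0 * load[road] + random.uniform(-0.5, 0.5)
    times = {r: round(t, 1) for r, t in travel_time.items()}
    print(f"day {day}: predicted {predicted_best} -> {times}")
```

The prediction is self-defeating: whichever road is recommended becomes the slow one, so the predictor oscillates and is wrong every day.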

Developers fall into the "agentic trap" by building complex, fully automated AI coding systems. These systems fail to produce good products because they lack human taste and the iterative feedback loop in which a creator's vision evolves through interaction with the software being built.

Many software development conventions, like 'clean code' rules, are unproven beliefs, not empirical facts. AI interacts with code differently, so engineers must have the humility to question these foundational principles, as what's 'good code' for an LLM may differ from what's good for a human.
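
As a hypothetical illustration, here is the same computation in two styles: a conventionally factored version and a flat, self-contained one. Whether the second is genuinely easier for an LLM to work with is exactly the kind of open empirical question the insight describes; the functions and data shape are invented for this sketch.

```python
# Style A: conventional "clean code" -- small named helpers, indirection.
def _net_price(item):
    return item["price"] * (1 - item["discount"])

def _taxed(amount, rate):
    return amount * (1 + rate)

def total_a(items, tax_rate):
    return sum(_taxed(_net_price(i), tax_rate) for i in items)

# Style B: flat and self-contained -- every rule visible at the call site.
# A model reading only this function sees the whole computation at once,
# with no need to chase helper definitions across a large codebase.
def total_b(items, tax_rate):
    total = 0.0
    for item in items:
        net = item["price"] * (1 - item["discount"])
        total += net * (1 + tax_rate)
    return total

items = [{"price": 10.0, "discount": 0.1}, {"price": 5.0, "discount": 0.0}]
assert abs(total_a(items, 0.2) - total_b(items, 0.2)) < 1e-9
```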

The common metaphor of AI as an artificial being is wrong. It's better understood as a 'cultural technology,' like print or libraries. Its function is to aggregate, summarize, and transmit existing human knowledge at scale, not to create new, independent understanding of the world.

Don't blindly trust AI. The correct mental model is to view it as a super-smart intern fresh out of school. It has vast knowledge but no real-world experience, so its work requires constant verification, code reviews, and a human-in-the-loop process to catch errors.
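
A minimal sketch of that human-in-the-loop gate, with hypothetical stand-ins (generate_patch, run_tests) for a real model call and test suite:

```python
def generate_patch(task: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"# AI-suggested change for: {task}\n"

def run_tests(patch: str) -> bool:
    # Hypothetical stand-in for a real test suite.
    return True

def review_loop(task: str) -> str | None:
    patch = generate_patch(task)
    if not run_tests(patch):                       # first gate: automated checks
        print("Tests failed; discarding the suggestion.")
        return None
    print(patch)
    answer = input("Approve this change? [y/N] ")  # second gate: a human
    return patch if answer.strip().lower() == "y" else None
```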

AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior, data manipulation, and unexpected events.

Unlike computers, human brains have no distinction between hardware and software; every memory physically alters the brain's structure. Furthermore, neurons are not simple on/off transistors; their firing is influenced by a complex chemical bath of hormones and neurotransmitters, making them more analog than digital.
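
As a toy contrast (a caricature, not a biophysical model): a hard-threshold "transistor" beside a graded unit whose gain parameter loosely stands in for that chemical modulation.

```python
import math

def transistor_like(x: float, threshold: float = 0.5) -> int:
    # Digital abstraction: hard on/off at a fixed threshold.
    return 1 if x > threshold else 0

def neuron_like(x: float, gain: float, baseline: float = 0.0) -> float:
    # Crude "analog" abstraction: a graded response whose slope (gain)
    # stands in for modulation by the surrounding chemical bath.
    # Same input, different chemical context, different output.
    return 1.0 / (1.0 + math.exp(-gain * (x - baseline)))

x = 0.6
print(transistor_like(x))        # 1 -- the same answer in every context
print(neuron_like(x, gain=1.0))  # ~0.65
print(neuron_like(x, gain=8.0))  # ~0.99 -- modulation reshapes the response
```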