
We shouldn't view technology as separate from ourselves. It is the unique way humans substantiate mental constructs in the world, from language to machines. We co-evolve with these creations, and understanding them is key to understanding ourselves.

Related Insights

The complexity in LLMs isn't intelligence emerging in silicon; it reflects our own. These models are deep because they encode the vast, causally powerful structure of human language and culture. We are looking at a high-resolution imprint of our own collective mind.

AI, like the microscope or telescope, will fundamentally alter human epistemology—how we acquire and understand knowledge. By changing our relationship with tools like language, AI will evolve our concepts of self, reality, and what is logically possible, reshaping philosophy and the very nature of thought.

Technology isn't a cold, separate discipline; it's the manifestation of our deepest desires and dreams. This is why we instinctively give it mythological names (e.g., the Apollo space program) and frame it in epic narratives. It's how we make sense of our own creations.

The current state of AI development parallels early human evolution. Just as the invention of language enabled a step-function change in human collaboration and intelligence, AI agents now require their own 'language'—a set of shared protocols—to move beyond individual tasks and unlock collective problem-solving.

The common metaphor of AI as an artificial being is wrong. It's better understood as a 'cultural technology,' like print or libraries. Its function is to aggregate, summarize, and transmit existing human knowledge at scale, not to create new, independent understanding of the world.

We often think of "human nature" as fixed, but it's constantly redefined by our tools. Technologies like eyeglasses and literacy fundamentally changed our perception and cognition. AI is not an external force but the next step in this co-evolution, augmenting what it means to be human.

Thomas Peterffy frames AI not as a separate category of technology, but as a natural evolution in programming. He sees it as the ultimate high-level language: the progression runs from machine code to assembler to high-level languages and finally to natural language, each step qualitatively part of the same developmental path.
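That abstraction ladder can be made concrete with a single operation expressed at each level. This is an illustrative sketch, not from the episode; the machine-code bytes and assembly mnemonic are one possible x86 encoding shown purely for comparison.

```python
# The same operation, "add two numbers," at rising levels of abstraction:
#
# 1. Machine code (x86, illustrative): 01 D8        ; raw bytes
# 2. Assembly:                         add eax, ebx ; named registers
# 3. High-level language:
def add(a: int, b: int) -> int:
    return a + b

# 4. Natural language (the "ultimate high-level language" in Peterffy's framing),
#    as one might phrase it to an AI system:
prompt = "Add two and three."

print(add(2, 3))  # → 5
```

Each rung hides more of the machine; in this framing, natural-language prompting is simply the next rung, not a break in the sequence.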

Neurological studies show the human brain maps a tool's tip as if it were our hand. This implies that a powerful physical intelligence should not be tied to a specific body (e.g., a humanoid) but should be a general "brain" capable of controlling any embodiment, from a bulldozer to a multi-fingered hand.

Drawing from the theory of Cultural Materialism, technological infrastructure dictates a society's values. For instance, yoking an ox changed views on animal sanctity. As AI makes human economic output obsolete, our societal value system may shift to see humans as inefficient or even parasitic.

Viewing AI as just a technological progression or a human assimilation problem is a mistake. It is a "co-evolution." The technology's logic shapes human systems, while human priorities, rivalries, and malevolence in turn shape how the technology is developed and deployed, creating unforeseen risks and opportunities.