Philosophy trains entrepreneurs to think crisply about what's possible and to form theories of human nature. This is crucial for imagining new products and services that can change how people behave and interact with the world.
Thought experiments like the trolley problem artificially constrain choices to isolate a specific intuition. They posit perfect knowledge and ignore the most human response: searching for a third option, such as braking to stop the trolley, that avoids the forced choice entirely.
While geological and biological evolution are slow, cultural evolution—the transmission and updating of knowledge—is incredibly fast. Humans' success stems from shifting to this faster clock. AI and LLMs are tools that dramatically accelerate this process, acting as a force multiplier for cultural evolution.
To improve LLM reasoning, researchers feed them data that inherently contains structured logic. Training on computer code was an early breakthrough, as it teaches patterns of reasoning far beyond coding itself. Textbooks are another key source for building smaller, effective models.
We often think of "human nature" as fixed, but it's constantly redefined by our tools. Technologies like eyeglasses and literacy fundamentally changed our perception and cognition. AI is not an external force but the next step in this co-evolution, augmenting what it means to be human.
Wittgenstein grounded language games in a shared biological reality. LLMs raise a fascinating question: are they part of our "form of life"? They are trained on human data, but they are not biological and learn differently, which may mean their "truth functions" are fundamentally alien to ours.
To sharpen your thinking, use ChatGPT as a Socratic partner. Feed it your argument and ask it to generate both supporting points and strong counterarguments. This dialectical process helps you anticipate objections and refine your position, leading to a more robust final synthesis.
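This dialectical loop can be automated. Below is a minimal sketch assuming the `openai` Python SDK (`pip install openai`) and an `OPENAI_API_KEY` environment variable; the model name and prompt wording are illustrative choices, not prescribed by any source.

```python
def build_socratic_prompts(argument: str) -> list[dict]:
    """Build two prompts: one steel-manning the argument, one attacking it."""
    return [
        {"role": "user",
         "content": f"Give the three strongest points supporting this argument:\n{argument}"},
        {"role": "user",
         "content": f"Give the three strongest objections to this argument:\n{argument}"},
    ]

def socratic_round(argument: str, model: str = "gpt-4o-mini") -> list[str]:
    """Run one support/objection round against a chat model.

    The model name is a placeholder; substitute whatever model you use.
    """
    from openai import OpenAI  # lazy import; requires `pip install openai`
    client = OpenAI()
    replies = []
    for prompt in build_socratic_prompts(argument):
        resp = client.chat.completions.create(model=model, messages=[prompt])
        replies.append(resp.choices[0].message.content)
    return replies
```

Reading the objections before the supporting points is one way to force yourself to confront the strongest counterarguments first, then revise the argument and run another round.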
LLMs initially operate like philosophical nominalists (truth from language patterns), a model that proved more effective than early essentialist AI attempts. Now, we are trying to ground them in reality, effectively adding essentialist characteristics—a Hegelian synthesis of opposing ideas.
Early Wittgenstein's "logical space of possibilities" mirrors how LLM embeddings map words into a high-dimensional space. Late Wittgenstein's "language games" explain their core function: next-token prediction and learning through interactive feedback (RLHF), where meaning is derived from use and context.
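The two ideas can be sketched in miniature: tokens as points in an embedding space (a "logical space of possibilities"), and next-token prediction as scoring every token against the current context. The three-dimensional vectors and tiny vocabulary below are invented purely for illustration; real models use learned embeddings with thousands of dimensions.

```python
import math

# Toy "logical space": each token is a point in a 3-d embedding space.
# These vectors are made up for illustration only.
EMBED = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.75, 0.80],
    "apple": [0.10, 0.90, 0.20],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity of two points in the embedding space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def next_token_probs(context_vec: list[float]) -> dict[str, float]:
    """Next-token prediction in caricature: score every vocabulary token
    by similarity to the context, then softmax into a distribution."""
    scores = {tok: cosine(context_vec, v) for tok, v in EMBED.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {tok: math.exp(s) / z for tok, s in scores.items()}
```

In this caricature, a context near "king" assigns more probability to "queen" than to "apple" because they sit closer together in the space; RLHF would then further reshape such distributions using human feedback, which is where the "meaning from use" analogy comes in.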
Philosophy should have been central to AI's creation, but its academic siloing led to inaction. Instead of engaging with technology and building, philosophers remained focused on isolated cogitation. AI emerged from engineers who asked "What can I make?" rather than only "What is a mind?"
