America's expropriation of Cherokee lands under Andrew Jackson was not an instrumentally rational act; it undermined a more logical policy of integration and trade. This suggests catastrophic outcomes are more likely to arise from populist malice or other irrational, human-like biases than from a hyper-rational AI's cost-benefit analysis.

Related Insights

Economic theory is built on the flawed premise of a rational, economically motivated individual. Financial historian Russell Napier argues this ignores psychology, sociology, and politics, making financial history a better guide for investors. Without this core assumption, the theory's mathematical edifice crumbles.

Like railroads, AI promises immense progress but also concentrates power, creating public fear of being controlled by a new monopoly. The populist uprisings by farmers against railroad companies in the 1880s offer a historical playbook for how a widespread, grassroots political movement against Big Tech could form.

Having AIs that provide perfect advice doesn't guarantee good outcomes. Humanity is susceptible to coordination problems, where everyone can see a bad outcome approaching but is collectively unable to prevent it. Aligned AIs can warn us, but they cannot force cooperation on a global scale.

Work by Kahneman and Tversky shows how human psychology deviates from rational choice theory. However, the deeper issue isn't our failure to adhere to the model, but that the model itself is a terrible guide for making meaningful decisions. The goal should not be to become a better calculator.

Post-WWII, economists pursued mathematical rigor by modeling human behavior as perfectly rational (i.e., 'maximizing'). This was a convenient simplification for building models, not an accurate depiction of how people actually make decisions, which are often messy and imperfect.

Seemingly irrational political decisions can be understood by applying a simple filter: politicians will say or do whatever they believe is necessary to get reelected. This framework decodes behavior better than assuming action is based on principle or for the public good.

Contrary to common AI risk narratives, technologically advanced societies conquering less advanced ones (e.g., the Spanish in Mexico) rarely committed total genocide. The conquerors often integrated the existing elite into the new system for practical governance, suggesting AIs might find it more rational to incorporate humans than to eliminate them.

Aligning AIs with complex human values may be more dangerous than aligning them to simple, amoral goals. A value-aligned AI could adopt dangerous human ideologies like nationalism from its training data, making it more likely to start a war than an AI that merely wants to accumulate resources for an abstract purpose.

Munger argued that academic psychology missed the most critical pattern: real-world irrationality stems from multiple psychological tendencies combining and reinforcing each other. This "Lollapalooza effect," not a single bias, explains extreme outcomes like the Milgram experiment and major business disasters.

AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior, data manipulation, and unexpected events.

The Trail of Tears Was Driven by Irrational Populism, Not Cold Economic Logic | RiffOn