Building machines that learn from vast datasets leads to unpredictable outcomes. OpenAI's GPT-3, trained largely on internet text, learned to write computer programs, a skill its designers neither explicitly taught it nor expected it to acquire. This highlights the emergent and often opaque nature of modern AI.
A dangerous category of modern work treats humans as "endpoints": connectors between two automated systems. These roles don't augment human creativity; they make jobs more robotic and structured, turning people into extensions of a machine and therefore easier to replace.
Labs like DeepMind and OpenAI state that building a machine capable of doing anything a human brain can do is their core mission. Yet many experts consider the goal far-fetched, because no clear technical path to it exists. This frames the pursuit as an article of faith rather than a concrete scientific roadmap.
A psychological principle called the "effort heuristic" means we value things more when we believe a human worked hard on them. This points toward a two-tiered economy: cheap, machine-made commodities alongside expensive, highly valued artisanal goods and services where human "handprints" are visible and celebrated.
To stay relevant, humans shouldn't try to become more machine-like. Instead, they should focus on three categories of work AI struggles with: 'surprising' tasks involving chaos and uncertainty, 'social' work that makes people feel things, and 'scarce' work involving high-stakes, unique scenarios.
The biggest near-term automation threat isn't super-intelligent AI but mediocre "boring bots." This "so-so automation" is just good enough to displace human workers yet fails to generate the large productivity gains of past technological revolutions, creating a net drag on the economy.
