AI systems often collapse because they are built on the flawed assumption that humans are logical and society is static. Real-world failures, from Soviet economic planning to modern systems, stem from an inability to model human behavior, data manipulation, and unexpected events.
The feeling of deep societal division is an artifact of platform design. Algorithms amplify extreme voices because they generate engagement, creating a false impression of widespread polarization. Set aside those amplified extremes, and most people's views on contentious topics are quite moderate.
An AI tool can map citation or patent networks to find unexplored "blank spots" bordered by heavy research activity. These gaps represent high-potential opportunities for superstar papers or valuable patents, as any discovery there will connect and influence many adjacent fields.
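One way to sketch this gap-finding idea is as link prediction on an undirected network: a "blank spot" is a pair of areas that are not yet connected but share many heavily cited neighbors. The graph below and its field names are invented for illustration, and real systems would use far richer signals than common-neighbor counts.

```python
from itertools import combinations

def blank_spots(graph):
    """graph: dict mapping node -> set of neighbors (undirected citation network).

    Returns (shared_neighbor_count, node_a, node_b) tuples for unlinked pairs,
    ranked so the gaps bordered by the most activity come first.
    """
    candidates = []
    for a, b in combinations(sorted(graph), 2):
        if b in graph[a]:
            continue  # already connected, not a gap
        shared = len(graph[a] & graph[b])  # "heavy research activity" proxy
        if shared:
            candidates.append((shared, a, b))
    return sorted(candidates, reverse=True)

# Toy citation network (illustrative only)
citations = {
    "ml": {"stats", "optim", "vision"},
    "bio": {"stats", "optim", "chem"},
    "stats": {"ml", "bio"},
    "optim": {"ml", "bio"},
    "vision": {"ml"},
    "chem": {"bio"},
}
print(blank_spots(citations))  # top entries are gaps with two busy shared neighbors
```

Here the unlinked "ml"/"bio" pair ranks near the top because both border the same well-trodden areas, which is exactly the kind of gap the paragraph describes.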
Data has become a primary means of production alongside capital and labor. Following historical parallels with agricultural co-ops and labor unions, communities will likely form "data unions" to pool their data, enabling collective bargaining with large corporations and restoring individual power.
In a Washington D.C. study, citizens expressed a desire for personal AI agents to help them navigate complex regulations and paperwork. This reveals a key user need: people want AI as a personal advocate against systemic complexity, not just as a tool for institutional optimization.
A simple, on-premise AI can act as a "buddy" by reading the internal documents that employees are too busy to read. It can then offer contextual suggestions, like how other teams approach a task, to foster cross-functional awareness and improve company culture, especially for remote and distributed teams.
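The core of such a buddy is a retrieval step: match what an employee is doing against internal documents and surface the most relevant one. A minimal sketch using plain keyword overlap follows; the document titles and contents are invented, and a production system would use embeddings rather than word sets.

```python
def tokenize(text):
    return set(text.lower().split())

def suggest(task, documents):
    """documents: dict mapping doc title -> text.

    Returns the title of the best keyword match, or None if nothing overlaps.
    """
    task_words = tokenize(task)

    def overlap(title):
        return len(task_words & tokenize(documents[title]))

    best = max(documents, key=overlap)
    return best if overlap(best) > 0 else None

# Invented internal documents for illustration
docs = {
    "Platform team: deploy checklist": "staging deploy rollback checklist",
    "Data team: onboarding notes": "warehouse access onboarding schema",
}
print(suggest("how do we deploy to staging", docs))
```

The on-premise constraint matters here: everything runs locally over internal documents, so nothing sensitive leaves the company.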
Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.
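A tamper-evident audit trail can be as simple as a hash chain: each entry commits to the previous one, so any after-the-fact edit is detectable, in the spirit of financial transaction logs. This is a minimal sketch with illustrative field names, not a proposed standard.

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    log.append({
        "prev": prev_hash,
        "record": record,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return log

def verify(log):
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev_hash, "record": entry["record"]},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"model": "v1", "decision": "loan approved"})
append_entry(log, {"model": "v1", "decision": "loan denied"})
print(verify(log))   # chain is intact
log[0]["record"]["decision"] = "loan denied"
print(verify(log))   # tampering is detected
```

The design choice is the point: regulators never need to pre-approve the system, only to be able to trust the log when a harm surfaces.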
Nobel laureate Daniel Kahneman estimated that 95% of human behavior is learned by observing others. AI systems should be designed to complement this "social foraging" nature, acting as advisors that provide context rather than assuming users are purely logical decision-makers.
The cost of a given level of AI performance halves every 3.5 months, a rate roughly 10 times faster than Moore's Law. This exponential improvement means entrepreneurs should pursue ideas that seem financially or computationally infeasible today, as they will likely become practical within 12-24 months.
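Taking the stated rate at face value, the claim implies cost(t) = cost_0 · 0.5^(t/3.5) with t in months. A quick check of what that means over the 12-24 month horizon mentioned above:

```python
def relative_cost(months, halving_period=3.5):
    """Fraction of today's cost after `months`, halving every `halving_period` months."""
    return 0.5 ** (months / halving_period)

for months in (12, 24):
    print(f"after {months} months: {relative_cost(months):.1%} of today's cost")
# after 12 months: 9.3% of today's cost
# after 24 months: 0.9% of today's cost
```

So a workload that is 10x too expensive today clears within a year at this rate, and a 100x-too-expensive one within two, which is the arithmetic behind the advice to entrepreneurs.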
