A core failure of current AI products is that they ask users to make their lives 'legible' by consolidating all their data. This demands that people conform to the machine's needs, inverting the fundamental design principle that computers should adapt to people, not the other way around.
This mindset, prevalent in tech, views the world as a series of databases that can be manipulated with structured language: control the software representation and you control reality. That assumption fails to account for the complexity and messiness of human systems, which resist being captured in neat, automatable loops.
The backlash against AI is moving from digital discourse to the physical world. Politicians opposing data centers are winning elections, while supporters have faced violent attacks. This indicates the tech industry has failed to earn the 'social permission' for its massive infrastructure and resource consumption.
The tech industry believes better marketing can solve AI's unpopularity, but the real issues are the public's negative experiences and the feeling of being dehumanized into data. You cannot advertise people out of their own lived experiences, and the belief that you can reveals a fundamental disconnect between tech and society.
Tech leaders often equate the law's structured language with deterministic software code, believing it can be straightforwardly automated. They miss that our legal system requires ambiguity to function, leaving room for interpretation and argument. Applying a purely computational model to the law therefore collides with that deliberate ambiguity.
Contrary to the belief that familiarity breeds positivity, polling data shows Gen Z uses AI more than any other group but holds the most negative views. Their anger towards AI is growing year-over-year, suggesting that increased exposure to current tools is driving dissatisfaction, not enthusiastic adoption.
