AI doesn't have an inherent moral stance. It is a tool that amplifies the intentions of its wielder. If used by those who support democracy, it can strengthen it; if used by those who oppose it, it can weaken it. The outcome is determined by the user, not the technology itself.
California's CalMatters uses an AI tool called 'Tip Sheet' to analyze politicians' public records, including speeches, votes, and campaign contributions. The AI flags anomalies and potential stories but publishes nothing itself: every lead goes to human journalists to investigate and verify, creating a powerful human-AI partnership.
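A minimal sketch of what such a human-in-the-loop flagging pipeline might look like. Everything here is illustrative and assumed, not CalMatters' actual system: the `Record` fields, the donation threshold, and the `flag_anomalies` heuristic are invented to show the shape of the idea, namely that the machine surfaces leads and humans make every editorial decision.

```python
from dataclasses import dataclass

@dataclass
class Record:
    legislator: str
    vote: str                     # "yes" / "no" / "abstain"
    bill_topic: str
    donations_from_topic: float   # dollars from interests tied to the topic

def flag_anomalies(records, donation_threshold=10_000):
    """Flag votes that coincide with large donations from affected interests.

    Purely illustrative heuristic: a real newsroom tool would use far
    richer signals (speech transcripts, voting history, timing).
    """
    leads = []
    for r in records:
        if r.vote == "yes" and r.donations_from_topic >= donation_threshold:
            leads.append(
                f"{r.legislator}: voted yes on {r.bill_topic} after "
                f"${r.donations_from_topic:,.0f} in related contributions"
            )
    return leads

# The tool only surfaces leads; journalists decide what to investigate.
if __name__ == "__main__":
    sample = [
        Record("A. Smith", "yes", "water rights", 25_000.0),
        Record("B. Jones", "no", "water rights", 0.0),
    ]
    for lead in flag_anomalies(sample):
        print("LEAD FOR REVIEW:", lead)
```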
Professions like law and medicine rely on a pyramid structure where newcomers learn by performing basic tasks. If AI automates this essential junior-level work, the entire model for training and developing senior experts could collapse, creating an unprecedented skills and experience gap at the top.
Problems like astroturfing (faking grassroots movements) and disinformation existed long before modern AI. AI acts as a powerful amplifier, making these tactics cheaper and more scalable, but it doesn't invent them. The solutions are often political and societal, not purely technological fixes.
Public perception of AI is skewed by headline-grabbing chatbots. However, the most widespread and impactful AI applications are the invisible predictive algorithms powering daily tools like Google Maps and TikTok feeds. These systems have a greater cumulative effect on daily life than their conversational counterparts.
The benchmark for AI performance shouldn't be perfection, but the existing human alternative. In many contexts, like medical reporting or driving, imperfect AI can still be vastly superior to error-prone humans. The choice is often between a flawed AI and an even more flawed human system, or no system at all.
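To make that comparison concrete, here is a toy calculation; the error rates are invented for illustration, not measured figures. The point is that an AI with an obviously nonzero error rate can still prevent thousands of errors relative to the human baseline.

```python
# Toy comparison: expected errors over a fixed caseload.
# All rates below are assumed for illustration only.
cases = 100_000
human_error_rate = 0.05   # assumed: humans err on 5% of cases
ai_error_rate = 0.02      # assumed: the AI errs on 2% of cases

human_errors = cases * human_error_rate
ai_errors = cases * ai_error_rate

print(f"Human system: {human_errors:,.0f} expected errors")   # 5,000
print(f"AI system:    {ai_errors:,.0f} expected errors")      # 2,000
# The AI is far from perfect, yet on this caseload it avoids:
print(f"Errors avoided: {human_errors - ai_errors:,.0f}")     # 3,000
```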
The concentration of AI power in a few tech giants is a market choice, not a technological inevitability. Publicly funded, non-profit models, like one from Switzerland's ETH Zurich, demonstrate that competitive, ethically trained AI can be built without corporate control or a profit motive.
As Cory Doctorow observes, the immediate risk for workers isn't being replaced by a competent AI, but by an incompetent one. AI only needs to be good enough to convince a manager to fire a human, producing a lose-lose outcome of job loss and declining work quality.
