While framed as a "wisdom of the crowds" tool, prediction markets can be easily manipulated. Wealthy individuals or campaigns can place large bets to create a perception of momentum or inevitability, effectively using the market as a propaganda vehicle to influence public opinion rather than simply reflect it.
When an algorithm deems someone "unemployable," that person is denied jobs, which in turn validates the prediction. The system manufactures its own accuracy by creating the reality it purports to predict, leaving no error signal by which it could correct itself. Oxford philosopher Carissa Véliz calls this a "perfect crime": the prediction destroys the very evidence that could have disproved it.
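The missing-error-signal mechanism can be made concrete with a minimal simulation (the numbers and the independence of label and ability are hypothetical, chosen only for illustration):

```python
import random

random.seed(0)

# Hypothetical sketch: 30% of applicants are flagged "unemployable",
# yet (independently) half of them would have succeeded on the job.
# Flagged applicants are never hired, so the mislabels are never
# observed and the system's measured error rate stays at zero.
true_mislabels = 0      # flagged applicants who would in fact succeed
observed_mislabels = 0  # mislabels the system ever gets to see
for _ in range(10_000):
    flagged = random.random() < 0.3        # the algorithm's prediction
    would_succeed = random.random() < 0.5  # ground truth, never revealed if flagged
    if flagged and would_succeed:
        true_mislabels += 1
        # ...but the applicant is denied the job, so the outcome that
        # would expose the error never occurs: no feedback is recorded.

print(true_mislabels, observed_mislabels)  # many true errors, zero observed
```

However many applicants are wrongly flagged, the observed count never rises above zero, which is exactly the "no error signal" condition described above.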
Following philosopher Harry Frankfurt's definition, a bullshitter is someone who disregards truth entirely to achieve a desired effect. Oxford philosopher Carissa Véliz argues LLMs fit this model perfectly, as they are designed to please and engage users, not track truth. They will say whatever works, true or not, to satisfy the user.
Pervasive anxiety about the future stems from its uncertainty. Oxford philosopher Carissa Véliz reframes this uncertainty as good news. A future that isn't written is a future that can be influenced. This means we possess the agency to intervene and create a better world, an opportunity that a fully predictable future would eliminate.
When a bank rejects a loan based on clear, factual criteria (e.g., insufficient funds), the applicant can take specific steps to fix the problem. Rejections based on opaque predictive models, by contrast, rest not on facts but on "educated guesses" that cannot be proven false, leaving applicants with no recourse and shielding institutions from accountability.
Introducing predictive algorithms into the legal system for bail, parole, or even assessing a lawsuit's viability shifts its foundation: justice becomes a game of probabilities rather than a process grounded in principles. This distorts justice and makes it easier for wrongdoers to escape accountability, since they need only make a case against them appear unlikely to succeed for it never to be brought at all.
While circumventing automated hiring systems seems proactive, it may inadvertently select for a specific personality type: aggressive, insistent, and willing to break rules. This can filter out brilliant but less socially aggressive candidates and potentially incentivize the same traits found in fraudsters, rather than creating a purely meritocratic backchannel.
Predictive algorithms recommend content based on past successes. However, truly transformative art, like the TV show *Seinfeld*, often performs poorly with initial audiences. It succeeds by changing cultural sensibilities over time. A world driven by prediction risks filtering out these innovations that reshape our tastes, rather than just catering to them.
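The filtering dynamic above can be sketched in a few lines (the titles, ratings, and threshold are invented for illustration, not drawn from any real recommender):

```python
# Hypothetical sketch: a system that greenlights only shows whose early
# metrics beat a threshold learned from past successes. A Seinfeld-like
# entry with weak initial numbers but high eventual cultural impact is
# filtered out before that impact could ever be observed.
catalog = [
    {"title": "Safe Procedural", "pilot_rating": 7.8, "eventual_impact": 5.0},
    {"title": "Familiar Sitcom", "pilot_rating": 7.2, "eventual_impact": 4.5},
    {"title": "Seinfeld-like",   "pilot_rating": 5.1, "eventual_impact": 9.8},
]

THRESHOLD = 6.0  # prediction based purely on early audience response

greenlit = [s["title"] for s in catalog if s["pilot_rating"] >= THRESHOLD]
print(greenlit)  # the taste-changing show never makes the cut
```

The show with the highest eventual impact is exactly the one the threshold removes, because the metric can only reward tastes that already exist.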
