Using AI models to simulate voter responses isn't a replacement for traditional polling. These AI personas are trained on existing polling data, making their outputs a less reliable, second-hand interpretation rather than a source of new, authentic public opinion.

Related Insights

Despite the hype, AI-moderated user interviews are not yet a reliable tool. Even Anthropic, creators of Claude, ran a study with their own AI moderation tool that produced unimpressive, low-quality questions, highlighting the immaturity of the technology.

Attempts to use AI for "synthetic customer calls" failed because the models are overly agreeable, expressing a 10/10 purchase intent for any idea. This "sycophancy mode" makes them useless for genuine validation, proving there is no substitute for talking to real, nuanced humans.

A UK startup has found that LLMs can generate accurate, simulated focus group discussions. By creating diverse digital personas, the AI reproduces the nuanced and often surprising feedback that typically requires expensive and slow in-person research, especially in politics.

AI models are not optimized to find objective truth. They are trained on biased human data and reinforced to provide answers that satisfy the preferences of their creators. This means they inherently reflect the biases and goals of their trainers rather than an impartial reality.

Contrary to the narrative that prediction markets make polling obsolete, they heavily rely on polling data as a fundamental input. Without polls, these markets would be based on "vibes and fundraising numbers," lacking a crucial data-driven foundation.

The online world, particularly platforms like X (formerly Twitter), is not a true reflection of the real world. A small percentage of users, many of them bots, generates the vast majority of content. This creates a distorted and often overly negative picture of public sentiment that does not represent the majority view.

Traditional surveys on AI adoption suffer from response bias. A more accurate method, borrowed from political polling, is to ask business leaders about their competitors' or peers' AI usage, not their own. This removes self-reporting bias and reveals truer market penetration.

Synthetic models don't merely inherit human biases, because they are trained on vast datasets that have already been processed, scrubbed, and validated by researchers. The AI learns from this 'corrected' view of public opinion, not from the raw, biased inputs of individual survey takers.

An experiment showed that human opinion on smartphones was easily swayed by preceding positive or negative questions. Qualtrics' synthetic AI panel maintained a consistent sentiment, demonstrating resistance to 'priming' bias. This allows it to provide a more stable, and arguably more 'honest', baseline reading.

Generative AI models are trained on existing human-generated text, causing them to reflect and amplify mainstream thought. When prompted on contrarian topics, they will either omit them or frame them as fringe ideas. AI is a tool for understanding the consensus view, not for generating truly original, non-consensus insights.