
There is a major disconnect between how people view AI in the abstract and how they actually use it. While polls show older demographics dislike AI, their behavior on platforms like Facebook—which rely heavily on AI for recommendations, ads, and content generation—demonstrates a strong preference for AI-driven consumption experiences.

Related Insights

Surveys show public panic about AI's impact on jobs and society. However, revealed preferences—actual user behavior—show massive, enthusiastic adoption for daily tasks, from work to personal relationships. Watch what people do, not what they say.

Polling data reveals a significant divide: people who regularly use AI are far less negative about it than non-users. This suggests the most effective way to combat public fear is to encourage hands-on interaction and demonstrate tangible benefits, rather than relying solely on messaging.

Public perception of AI is skewed by headline-grabbing chatbots. However, the most widespread and impactful AI applications are the invisible predictive algorithms powering daily tools like Google Maps and TikTok feeds. These systems have a greater cumulative effect on daily life than their conversational counterparts.

The public readily accepts "invisible" AI in platforms like Instagram or Google Search. The backlash is specifically targeted at generative AI, which is perceived as a direct threat to knowledge work. This highlights a crucial distinction in how different AI applications are perceived based on their visibility and impact on labor.

Social media feeds should be viewed as the first mainstream AI agents. They operate with a degree of autonomy to make decisions on our behalf, shaping our attention and daily lives in ways that often misalign with our own intentions. This serves as a cautionary tale for the future of more powerful AI agents.

The common belief that AI can't truly understand human wants is debunked by existing technology. Adam D'Angelo points out that recommender systems on platforms like Instagram and Quora are already far better than any individual human at predicting what a user will find engaging.

Despite negative polling, individuals who fear the abstract concept of "AI" often simultaneously rely on specific applications like ChatGPT. This highlights a cognitive dissonance where the overarching technology is feared, but its practical tools are valued, suggesting a branding and education problem for the industry.

Public opinion on AI is surprisingly negative, with favorability ranking below that of most political entities. This is driven by media focus on risks like job loss and resource consumption, which overshadows the tangible benefits experienced by millions of users. People's positive experiences with ChatGPT often coexist with a general, media-fueled distrust of "AI."

While media coverage suggests public disdain for AI-generated ads, Coca-Cola's consumer data shows high approval scores. This highlights a critical gap between the sentiment of a threatened media industry and actual consumer behavior, suggesting audiences care more about the final product than its AI origin.

Non-tech professionals often judge AI by obsolete limitations like six-fingered images or knowledge cutoffs. They don't realize they already consume sophisticated AI content daily, creating a significant perception gap between the technology's actual capabilities and its public reputation.