© 2026 RiffOn. All rights reserved.

How to design AI products that users trust - Nina Olding (Gemini, Meta, Weights & Biases)

The Product Experience · Nov 19, 2025

Build trustworthy AI by applying the 'Awareness, Agency, and Assurance' framework. Counter growing user distrust and create better products.

AI Adoption and User Distrust Are Growing in Tandem

Contrary to expectations, wider AI adoption isn't automatically building trust. User distrust has surged from 19% to 50% in recent years. This counterintuitive trend means that failing to proactively implement trust mechanisms is a direct path to product failure as the market matures.

Confidently Wrong AI Destroys Trust; Design for "Humility" Instead

An AI that confidently provides wrong answers erodes user trust more than one that admits uncertainty. Designing for "humility" by showing confidence indicators, citing sources, or even refusing to answer is a superior strategy for building long-term user confidence and managing hallucinations.
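The humility pattern described above can be sketched as a simple confidence gate: refuse below one threshold, hedge below another, and always cite sources. This is an illustrative sketch, not code from the episode; the `Answer` structure, threshold values, and wording are all assumptions.

```python
# Hypothetical sketch of "designing for humility": gate an AI answer on a
# confidence score and fall back to an honest refusal or a hedged reply.
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    confidence: float  # 0.0-1.0, e.g. derived from model log-probs or a verifier
    sources: list[str] = field(default_factory=list)

def present(answer: Answer, refuse_below: float = 0.3, hedge_below: float = 0.7) -> str:
    """Render an answer with humility: refuse, hedge, or answer with citations."""
    if answer.confidence < refuse_below:
        # Better to admit uncertainty than to be confidently wrong.
        return "I'm not confident enough to answer that reliably."
    cited = f"{answer.text} (sources: {', '.join(answer.sources) or 'none'})"
    if answer.confidence < hedge_below:
        return f"I'm not certain, but: {cited}"
    return cited

print(present(Answer("Paris is the capital of France.", 0.95, ["encyclopedia"])))
print(present(Answer("The meeting is at 3pm.", 0.2)))
```

The point is that the refusal and hedging branches are product decisions, cheap to add on top of whatever confidence signal the model already exposes.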

Build AI Trust Incrementally, Not Through Massive Corporate Initiatives

Implementing trust isn't a massive, year-long project. It's about developing a "muscle" for small, consistent actions like adding a badge, clarifying data retention, or citing sources. These low-cost, high-value changes can be integrated into regular product development cycles.

ChatGPT's Model Picker Evolution Proves the 'Simple Default, Deep Optionality' UX Pattern

OpenAI initially removed ChatGPT's model picker, angering power users. They fixed this by creating an "auto picker" as the default for most users while allowing advanced users to override it. This is a prime case study in meeting the needs of both novice and expert user segments.

Prioritize Transparency for Nondeterministic AI, Not Just Any Algorithm

The need for explicit user transparency is most critical for nondeterministic systems like LLMs, where even creators don't always know why an output was generated. Unlike a simple rules engine with predictable outcomes, AI's "black box" nature requires giving users more context to build trust.

Google DeepMind Alum's 'Three A's' Framework Builds User Trust in AI Products

To build trust, users need Awareness (know when AI is active), Agency (have control over it), and Assurance (confidence in its outputs). This framework, from a former Google DeepMind PM, provides a clear model for designing trustworthy AI experiences by mimicking human trust signals.

The New York Times's AI Disclosures Build Trust Even When AI Isn't Used

The New York Times is so consistent in labeling AI-assisted content that users trust that any unlabeled content is human-generated. This strategy demonstrates how the "presence of disclosure makes the absence of disclosure comforting," creating a powerful implicit signal of trustworthiness across an entire platform.
