AI Leaders Must Build Trust Through Human Agency; It Cannot Be Outsourced to the Machine

Dr. Fei-Fei Li asserts that trust in the AI age remains a fundamentally human responsibility that operates on individual, community, and societal levels. It's not a technical feature to be coded but a social norm to be established. Entrepreneurs must build products and companies where human agency is the source of trust from day one.

Related Insights

Regardless of an AI's capabilities, the human in the loop is always the final owner of the output. Your responsible AI principles must clearly state that using AI does not remove human agency or accountability for the work's accuracy and quality. This is critical for mitigating legal and reputational risks.

AI is a "hands-on revolution," not a technological shift like the cloud that can be delegated to an IT department. To lead effectively, executives (including non-technical ones) must personally use AI tools. This direct experience is essential for understanding AI's potential and guiding teams through the transformation.

To trust an agentic AI, users need to see its work, just as a manager would with a new intern. Design patterns like "stream of thought" (showing the AI reasoning) or "planning mode" (presenting an action plan before executing) make the AI's logic legible and give users a chance to intervene, building crucial trust.
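A minimal sketch of the "planning mode" pattern in Python (all class, function, and task names here are hypothetical, not from any particular framework): the agent surfaces its proposed steps up front and executes nothing until the user explicitly approves, which is exactly the intervention point described above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlannedStep:
    description: str            # human-readable summary shown to the user
    action: Callable[[], None]  # the operation that runs only after approval

def run_with_planning_mode(plan: list[PlannedStep]) -> None:
    """Show the full plan first; execute only on explicit approval."""
    print("Proposed plan:")
    for i, step in enumerate(plan, start=1):
        print(f"  {i}. {step.description}")

    # The human stays in the loop: nothing runs without a "yes".
    if input("Approve this plan? [yes/no] ").strip().lower() != "yes":
        print("Plan rejected; no actions were taken.")
        return

    for step in plan:
        print(f"Running: {step.description}")
        step.action()

# Example usage with harmless placeholder actions.
plan = [
    PlannedStep("Draft reply to customer ticket", lambda: print("  draft created")),
    PlannedStep("File ticket under 'billing'", lambda: print("  ticket filed")),
]
run_with_planning_mode(plan)
```

The same skeleton accommodates the "stream of thought" variant: instead of only listing steps up front, print each step's reasoning as it executes, so the user can interrupt mid-run.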

The primary focus for leaders should be fostering a culture of safe, ethical, and collaborative AI use. This involves mandatory training and creating shared learning spaces, like Slack channels for prompt sharing, rather than just focusing on tool procurement.

Dr. Li rejects both utopian and purely fatalistic views of AI. Instead, she frames it as a humanist technology—a double-edged sword whose impact is entirely determined by human choices and responsibility. This perspective moves the conversation from technological determinism to one of societal agency and stewardship.

To effectively leverage AI, treat it as a new team member. Take its suggestions seriously and give it the best opportunity to contribute. However, just like with a human colleague, you must apply a critical filter, question its output, and ultimately remain accountable for the final result.

The most effective AI user experiences are skeuomorphic, emulating real-world human interactions. Design an AI onboarding process the way you would hire a personal assistant: start with small tasks, verify its work to build trust, and then grant more autonomy and context over time.
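One way to encode that graduated-trust onboarding in software is a sketch like the following, assuming made-up tier names and promotion thresholds (the specific numbers are illustrative, not prescribed by the source): the assistant starts at a tier where every output is reviewed, earns more autonomy as verified successes accumulate, and falls back to full review on a verified failure.

```python
from enum import Enum

class AutonomyTier(Enum):
    REVIEW_EVERYTHING = 1  # every output is checked before use
    SPOT_CHECK = 2         # outputs are sampled for verification
    AUTONOMOUS = 3         # trusted for routine tasks, audited periodically

# Hypothetical thresholds: verified-correct tasks required at each tier
# before the assistant is promoted to the next one.
PROMOTION_THRESHOLDS = {
    AutonomyTier.REVIEW_EVERYTHING: 20,
    AutonomyTier.SPOT_CHECK: 100,
}

class AssistantTrust:
    """Tracks verified successes and grants autonomy gradually."""

    def __init__(self) -> None:
        self.tier = AutonomyTier.REVIEW_EVERYTHING
        self.successes_at_tier = 0

    def record_verified_task(self, correct: bool) -> None:
        if not correct:
            # A verified failure resets trust: back to full review.
            self.tier = AutonomyTier.REVIEW_EVERYTHING
            self.successes_at_tier = 0
            return
        self.successes_at_tier += 1
        threshold = PROMOTION_THRESHOLDS.get(self.tier)
        if threshold is not None and self.successes_at_tier >= threshold:
            self.tier = AutonomyTier(self.tier.value + 1)
            self.successes_at_tier = 0

trust = AssistantTrust()
for _ in range(20):
    trust.record_verified_task(correct=True)
print(trust.tier)  # AutonomyTier.SPOT_CHECK after 20 verified tasks
```

The reset-on-failure rule mirrors how trust works with a human hire: a serious mistake sends the work back under full review until confidence is rebuilt.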

Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to "care." This "organic alignment" emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.

Effective AI policies focus on establishing principles for human conduct rather than just creating technical guardrails. The central question isn't what the tool can do, but how humans should responsibly use it to benefit employees, customers, and the community.

Dr. Fei-Fei Li warns that the current AI discourse is dangerously tech-centric, overlooking its human core. She argues the conversation must shift to how AI is made by people, how it impacts them, and how it should be governed by them, with a focus on preserving human dignity and agency amid rapid technological change.