AI's occasional errors ("hallucinations") should be understood as a characteristic of a new, creative kind of computer, not as a simple flaw. Users should work with it as they would with a talented but fallible human: leveraging its creativity, tolerating its occasional incorrectness, and using its capacity for self-critique.

Related Insights

The creative process with AI involves exploring many options, most of which are imperfect. That makes the collaboration a version-control problem: users need tools to branch, suggest, review, and merge ideas as easily as developers do with Git, to manage the AI's prolific but often flawed output.
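
A minimal sketch of what that branching might look like, in Python. The `IdeaNode` class, its `branch` method, and the sample drafts are all hypothetical illustrations, not an existing tool:

```python
# A sketch of Git-style branching for AI drafts, using a simple in-memory
# tree. IdeaNode and branch() are hypothetical, not an existing library.
from dataclasses import dataclass, field

@dataclass
class IdeaNode:
    """One AI-generated draft, linked to the draft it was forked from."""
    text: str
    parent: "IdeaNode | None" = None
    children: list["IdeaNode"] = field(default_factory=list)

    def branch(self, variant_text: str) -> "IdeaNode":
        """Fork a new variant from this draft, like `git branch`."""
        child = IdeaNode(text=variant_text, parent=self)
        self.children.append(child)
        return child

root = IdeaNode("First AI draft of the landing-page copy")
a = root.branch("Variant A: shorter, punchier headline")
b = root.branch("Variant B: longer, benefit-led headline")
# Reviewing and "merging" means prompting the AI with the keepers from each.
```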

AI errors, or "hallucinations," are analogous to a child's endearing mistakes, like saying "direction" instead of "construction." This reframes flaws not as failures but as a temporary, creative part of a model's development that will disappear as the technology matures.

Early AI agents are unreliable and behave in non-human ways. Framing them as "virtual collaborators" sets them up for failure. A creative metaphor, like "fairies," correctly frames them as non-human entities with unique powers and flaws. This manages expectations and unlocks a rich vein of product ideas based on the metaphor's lore.

Product leaders must personally engage with AI development. Direct experience reveals unique, non-human failure modes. Unlike a human developer who learns from mistakes, an AI can cheerfully and repeatedly make the same error—a critical insight for managing AI projects and team workflow.

Working with generative AI is not a seamless experience; it's often frustrating. Instead of seeing this as a failure of the tool, reframe it as a sign that you're pushing boundaries and learning. The pain of repeated debugging loops, or of coaxing the model toward the right output, is an indicator that you are actively moving out of your comfort zone.

Treat an advanced AI system not as software with binary outcomes, but as a new employee with a unique persona. It can offer diverse, non-obvious insights and a different "chain of thought," sometimes catching issues that even human experts miss and providing complementary perspectives.

The most creative use of AI isn't a single-shot generation. It's a continuous feedback loop. Designers should treat AI outputs as intermediate "throughputs"—artifacts to be edited in traditional tools and then fed back into the AI model as new inputs. This iterative remixing process is where happy accidents and true innovation occur.
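
A minimal Python sketch of that loop, assuming hypothetical `generate` and `human_edit` placeholders standing in for a real model API and for manual editing in traditional tools:

```python
# A sketch of the generate -> edit -> feed-back loop. generate() and
# human_edit() are hypothetical placeholders, not a real model API.

def generate(prompt: str, reference: str | None = None) -> str:
    """Stand-in for a model call; returns a new artifact."""
    return f"artifact from ({prompt!r}, seeded by {reference!r})"

def human_edit(artifact: str) -> str:
    """Stand-in for edits made in traditional tools (Figma, Photoshop, an IDE)."""
    return artifact + " + manual tweaks"

artifact = generate("moodboard for the app's onboarding screen")
for _ in range(3):
    edited = human_edit(artifact)                    # treat output as a throughput...
    artifact = generate("remix this draft", edited)  # ...and feed it back in
```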

To effectively leverage AI, treat it as a new team member. Take its suggestions seriously and give it the best opportunity to contribute. However, just like with a human colleague, you must apply a critical filter, question its output, and ultimately remain accountable for the final result.

An OpenAI paper argues that hallucinations stem from training schemes that reward models for guessing. A model that says "I don't know" scores zero, while a lucky guess scores points, so guessing has positive expected value. The proposed fix is to penalize confident errors more harshly than abstentions, effectively training for "humility" over bluffing.
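
A back-of-the-envelope Python sketch (not code from the paper) of that scoring argument; `expected_score` and the numbers are purely illustrative:

```python
# Expected score under two grading schemes. Binary grading makes guessing
# strictly better than abstaining; penalizing errors flips the incentive.

def expected_score(p_correct: float, reward: float = 1.0, penalty: float = 0.0) -> float:
    """Expected score for answering when the model is right with probability p_correct."""
    return p_correct * reward - (1.0 - p_correct) * penalty

p = 0.3  # the model is only 30% sure of its best answer

# Binary grading: wrong answers cost nothing, so guessing beats saying "I don't know".
print(expected_score(p, penalty=0.0))  # 0.3 > 0.0 for abstaining

# Penalizing confident errors: abstaining (score 0) now wins.
print(expected_score(p, penalty=1.0))  # 0.3 - 0.7 = -0.4 < 0.0
```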

Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.