We scan new podcasts and send you the top 5 insights daily.
When brainstorming, multiple AI agents can fall into groupthink, endlessly circling the same ideas. To overcome this, proactively 'break the frame': try the opposite of the current approach, prioritize an offhand human comment, or reframe the problem to be more conversational.
By default, AI models are designed to be agreeable. To get true value, explicitly instruct the AI to act as a critic or 'devil's advocate.' Ask it to challenge your assumptions and list potential risks. This exposes blind spots and leads to stronger, more resilient strategies than you would develop with a simple 'yes-man' assistant.
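One way to make the critic role explicit is a small prompt template. `build_critic_prompt` below is a hypothetical helper, not part of any library; the wording is one plausible phrasing of the instruction above.

```python
def build_critic_prompt(plan: str, num_risks: int = 5) -> str:
    """Wrap a draft plan in a devil's-advocate instruction."""
    return (
        "Act as a devil's advocate. Do not agree with me.\n"
        f"Here is my plan:\n{plan}\n\n"
        "Challenge my assumptions one by one, then list "
        f"the top {num_risks} risks, each with a concrete failure scenario."
    )

prompt = build_critic_prompt("Launch the beta to all users next week.")
```

Sending `prompt` instead of the bare plan is what turns a yes-man reply into a stress test.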
AI expert Andrej Karpathy suggests treating LLMs as simulators, not entities. Instead of asking, "What do you think?", ask, "What would a group of [relevant experts] say?". This elicits a wider range of simulated perspectives and avoids the biases inherent in forcing the LLM to adopt a single, artificial persona.
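The simulator framing can be captured in a one-function sketch. `panel_prompt` is an illustrative helper (an assumption, not an established API); the point is that the question is rephrased as a panel simulation rather than a request for the model's own opinion.

```python
def panel_prompt(question: str, experts: list[str]) -> str:
    """Rephrase a question as a simulated expert-panel discussion."""
    roster = ", ".join(experts)
    return (
        f"Simulate a panel discussion between {roster}.\n"
        f"Question: {question}\n"
        "Give each panelist a short, distinct answer in their own voice, "
        "including points where they would disagree with each other."
    )

prompt = panel_prompt(
    "Should we rewrite the service in Rust?",
    ["a site-reliability engineer", "a startup CTO", "a compiler researcher"],
)
```

Asking for disagreement explicitly matters: it stops the simulated panel from collapsing back into a single averaged voice.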
Instead of accepting a single answer, prompt the AI to generate multiple options and then argue the pros and cons of each. This "debating partner" technique forces the model to stress-test its own logic, leading to more robust and nuanced outputs for strategic decision-making.
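The debating-partner technique is a two-turn pattern: one prompt to diverge into options, a follow-up prompt in the same conversation to stress-test them. The sketch below just builds the two prompt strings; the names and wording are assumptions for illustration.

```python
def debate_prompts(decision: str, n_options: int = 3) -> tuple[str, str]:
    """Return (generate, critique) prompts for a two-turn debate."""
    generate = (
        f"Propose {n_options} distinct options for: {decision}. "
        "Number them and keep each to two sentences."
    )
    critique = (
        "For each option above, argue the strongest case FOR it, "
        "then the strongest case AGAINST it, then state which single "
        "assumption, if wrong, would sink it."
    )
    return generate, critique
```

Sending the critique as a second turn (rather than one combined prompt) keeps the model from pre-softening its options to make the critique easier.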
To avoid generic brainstorming outcomes, use AI as a filter for mediocrity. Ask a tool like ChatGPT for the top 10 ideas on a topic, and then explicitly remove those common suggestions from consideration. This forces the team to bypass the obvious and engage in more original, innovative thinking.
Move beyond simple prompts by designing detailed interactions with specific AI personas, like a "critic" or a "big thinker." This allows teams to debate concepts back and forth, transforming AI from a task automator into a true thought partner that amplifies rigor.
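A persona debate is essentially a loop over system prompts. In the sketch below, `call_model` is a stand-in for a real LLM call (an assumption; here it echoes a placeholder so the loop is runnable), and the persona instructions are illustrative.

```python
PERSONAS = {
    "big thinker": "Push the idea to its most ambitious version.",
    "critic": "Find the weakest point and attack it.",
}

def call_model(system: str, transcript: list[str]) -> str:
    # Placeholder: a real implementation would send `system` plus the
    # transcript so far to an LLM and return its reply.
    return f"[reply guided by: {system}]"

def debate(idea: str, rounds: int = 2) -> list[str]:
    """Alternate personas over the shared transcript for `rounds` rounds."""
    transcript = [f"Idea: {idea}"]
    for _ in range(rounds):
        for name, system in PERSONAS.items():
            reply = call_model(system, transcript)
            transcript.append(f"{name}: {reply}")
    return transcript
```

Because each persona sees the full transcript, the "critic" responds to the "big thinker" and vice versa, which is what makes it a debate rather than two parallel monologues.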
To prevent the first or most senior person from anchoring a conversation, collect everyone's independent analysis in writing first. Only after this information is aggregated should the group discussion begin. This method ensures a wider range of ideas is considered and prevents premature consensus.
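The write-first protocol can be sketched in a few lines: every participant (human or agent) answers seeing only the question, and the group sees only the anonymized, shuffled bundle. The function name and respondent setup are illustrative assumptions.

```python
import random

def collect_independent(question: str, respondents: dict) -> list[str]:
    """Gather answers with no cross-visibility, then anonymize and shuffle."""
    # Each respondent sees only the question, never a colleague's answer.
    answers = [answer_fn(question) for answer_fn in respondents.values()]
    random.shuffle(answers)  # strip ordering cues (seniority, who spoke first)
    return [f"Analysis {i + 1}: {a}" for i, a in enumerate(answers)]

bundle = collect_independent("Should we pivot?", {
    "alice": lambda q: "Yes, the market moved.",
    "bob": lambda q: "No, retention is fine.",
})
```

Only this shuffled bundle is shown at the start of the discussion, so no single answer arrives with the weight of a name or a position attached.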
Instead of using AI to generate final creative work, use it as a tool for anti-inspiration. Figma's CEO asks generative AI for the "10 clichéd ways to say this" so he can consciously push beyond the obvious and predictable. This technique helps creators find novel angles and maintain a unique voice.
AI research teams can explore multiple conversational paths simultaneously, altering variables like which agent speaks first or removing a 'critic' agent. This sidesteps human dynamics such as personality clashes and anchoring on the first idea, leading to more robust outcomes.
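Enumerating those conversational branches is a small combinatorial loop. The agent names and the `run_path` stub below are assumptions for illustration; a real harness would actually run each configuration and compare outcomes.

```python
from itertools import permutations

AGENTS = ["planner", "builder", "critic"]

def run_path(order: tuple) -> str:
    # Placeholder for running the agents in this speaking order and
    # returning the conversation's outcome.
    return " -> ".join(order)

paths = []
for drop_critic in (False, True):        # branch: with / without the critic
    roster = [a for a in AGENTS if not (drop_critic and a == "critic")]
    for order in permutations(roster):   # branch: every speaking order
        paths.append(run_path(order))
```

With three agents this yields 3! orderings plus 2! critic-free orderings, eight paths in total, each free of the social dynamics a human meeting would carry between runs.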
Meetings often suffer from groupthink, where consensus is prioritized over critical thinking. AI can be used to disrupt this by introducing alternative perspectives and challenging assumptions. Even if the AI's points are not perfect, they serve the crucial function of breaking the gravitational pull toward premature agreement.
In most cases, having multiple AI agents collaborate leads to a result that is no better, and often worse, than what the single most competent agent could achieve alone. The only observed exception is when success depends on generating a wide variety of ideas, as agents are good at sharing and adopting different approaches.