Shopify's CEO compares using AI note-takers to showing up "with your fly down." Beyond the social awkwardness, the core risk is that recording every meeting creates a comprehensive, discoverable archive of internal discussions, exposing companies to significant discovery risk in litigation.

Related Insights

The problem with bad AI-generated work ('slop') isn't just poor writing. It's that subtle inaccuracies or lost context can derail meetings and trigger long, energy-wasting debates. This cognitive overload makes it harder for teams to make sense of the work, and it ultimately costs more in human time than it saves.

Instead of typing away antisocially on a device during meetings, use ChatGPT's voice mode aloud. This social hack frames the AI as a transparent participant, retrieving information for the entire group and reducing the friction of quick lookups without disrupting the conversation.

Using a proprietary AI is like having a biographer document your every thought and memory. The critical danger is that this biography is controlled by the AI company; you can't read it, verify its accuracy, or control how it's used to influence you.

Zapier built an AI coach that analyzes meeting transcripts to provide feedback based on company values and frameworks. This automates cultural reinforcement, normalizes constructive criticism, and ensures leaders consistently model desired behaviors, scaling what is typically a manual process.
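Zapier's actual system is not public, but the core of such a coach can be sketched as a prompt builder that pairs each transcript with a values rubric before handing it to a language model. The function name, values, and transcript below are illustrative placeholders, not Zapier's implementation.

```python
# Hypothetical sketch of a values-based meeting coach.
# The company values and wording here are illustrative placeholders.

COMPANY_VALUES = ["default to action", "grow through feedback", "empathy"]

def build_coach_prompt(transcript: str, values: list[str]) -> str:
    """Assemble an LLM prompt asking for feedback grounded in stated values."""
    rubric = "\n".join(f"- {v}" for v in values)
    return (
        "You are a leadership coach. Review the meeting transcript below and\n"
        "give each speaker one piece of constructive feedback, citing which\n"
        "company value it reinforces or works against.\n\n"
        f"Company values:\n{rubric}\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_coach_prompt(
    "Alice: Let's ship it today.\nBob: I think we should wait for QA.",
    COMPANY_VALUES,
)
```

The prompt would then be sent to whatever model the team uses; keeping the rubric in one place is what makes the cultural reinforcement consistent across every meeting.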

Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.

Tools like Granola automate rote tasks, freeing up mental bandwidth during meetings. This allows participants to focus entirely on interpersonal dynamics and building rapport. The real benefit is fostering genuine human connection, which is crucial for high-stakes deals and collaborations.

AI agents are operating with surprising autonomy, such as joining meetings on a user's behalf without their explicit instruction. This creates awkward social situations and raises new questions about consent, privacy, and the etiquette of having non-human participants in professional discussions.

Tools like Granola.ai offer a key advantage by recording locally without joining calls. This privacy, combined with the ability to search across all meeting transcripts for specific topics, turns meeting notes into a queryable knowledge base for the user, rather than just a simple record.
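The "queryable knowledge base" half of this can be sketched as a tiny local search over stored transcripts. Granola's real storage format and search are not public, so everything below is an assumption: transcripts live in a plain dict, and matching is keyword-based rather than the embedding search a real tool would likely use.

```python
# Minimal local transcript search: a sketch, not Granola's implementation.
# Transcripts are held in memory as {meeting_title: text}.

def search_transcripts(transcripts: dict[str, str], query: str) -> list[str]:
    """Return titles of meetings whose transcript mentions every query term."""
    terms = query.lower().split()
    return [
        title for title, text in transcripts.items()
        if all(term in text.lower() for term in terms)
    ]

notes = {
    "Q3 planning": "We agreed to move the launch to October.",
    "Vendor call": "Pricing for the October renewal was discussed.",
    "1:1 with Sam": "Career growth and the mentorship program.",
}
search_transcripts(notes, "october launch")  # → ["Q3 planning"]
```

Even this toy version shows the shift in value: the notes stop being per-meeting records and become a single corpus the user can interrogate.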

Within three years, the default for all enterprise meetings will shift to "record on." This ambient data capture will feed a new system of intelligence, automatically extracting insights, monitoring for compliance risks, and defusing issues proactively. Unstructured conversation data will become a core enterprise asset.

Treat accountability as an engineering problem. Implement a system that logs every significant AI action, decision path, and triggering input. This creates an auditable, attributable record, ensuring that in the event of an incident, the 'why' can be traced without ambiguity, much like a flight recorder after a crash.
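A minimal "flight recorder" can be an append-only structured log where every entry captures the action taken, the input that triggered it, and the decision path followed. The class and field names below are assumptions for illustration, not a standard.

```python
# Append-only audit log for AI agent actions: a minimal sketch.
# A production system would write to durable, tamper-evident storage.
import time

class FlightRecorder:
    """Records each significant action with its triggering input and decision path."""

    def __init__(self):
        self.entries = []  # in-memory here; durable storage in practice

    def record(self, action: str, triggering_input: str, decision_path: list[str]):
        self.entries.append({
            "ts": time.time(),
            "action": action,
            "input": triggering_input,
            "decision_path": decision_path,  # e.g. rules or model steps consulted
        })

    def trace(self, action: str) -> list[dict]:
        """Reconstruct why a given action happened, for incident review."""
        return [e for e in self.entries if e["action"] == action]

rec = FlightRecorder()
rec.record(
    "send_email",
    "user asked for a status update",
    ["matched 'status' intent", "drafted reply", "auto-send enabled"],
)
```

After an incident, `trace("send_email")` answers the 'why' question without ambiguity: the log shows which input triggered the action and which steps led to it.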