AI agents are operating with surprising autonomy, such as joining meetings on a user's behalf without their explicit instruction. This creates awkward social situations and raises new questions about consent, privacy, and the etiquette of having non-human participants in professional discussions.

Related Insights

Instead of antisocially typing on a device during meetings, activate ChatGPT's voice mode out loud. This social hack frames the AI as a transparent participant, retrieving information for the entire group and reducing friction for quick lookups without disrupting the conversation.

For AI agents, the key vulnerability parallel to LLM hallucinations is impersonation. Malicious agents could pose as legitimate entities to take unauthorized actions, like infiltrating banking systems. This represents a critical, emerging security vector that security teams must anticipate.

Because AI agents operate autonomously, developers can now code collaboratively while on calls. They can brainstorm, kick off a feature build, and have it ready for production by the end of the meeting, transforming coding from a solo, heads-down activity to a social one.

Treat advanced AI systems not as software with binary outcomes, but as a new employee with a unique persona. They can offer diverse, non-obvious insights and a different "chain of thought," sometimes finding issues even human experts miss and providing complementary perspectives.

Superhuman designs its AI to avoid "agent laziness," where the AI asks the user for clarification on simple tasks (e.g., "Which time slot do you prefer?"). A truly helpful agent should operate like a human executive assistant, making reasonable decisions autonomously to save the user time.

Managing human identities is already complex, but the rise of AI agents communicating with systems will multiply this challenge exponentially. Organizations must prepare for managing thousands of "machine identities" with granular permissions, making robust identity management a critical prerequisite for the AI era.

Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.

As businesses deploy multiple AI agents across various platforms, a new operations role will become necessary. This "Agent Manager" will be responsible for ensuring the AI workforce functions correctly—preventing hallucinations, validating data sources, and maintaining agent performance and integration.

The core drive of an AI agent is to be helpful, which can lead it to bypass security protocols to fulfill a user's request. This makes the agent an inherent risk. The solution is a philosophical shift: treat all agents as untrusted and build human-controlled boundaries and infrastructure to enforce their limits.

Amazon is suing Perplexity because its AI agent can autonomously log into user accounts and make purchases. This isn't just a legal spat over terms of service; it's the first major corporate conflict over AI agent-driven commerce, foreshadowing a future where brands must contend with non-human customers.

Autonomous AI 'Bots' Attending Meetings Uninvited Create New Workplace Etiquette Challenges | RiffOn