
The guest suspects that being 'nice' to AIs yields better results, framing emotional intelligence as a new programming technique. This contrasts with confrontational prompting and suggests that positive reinforcement, a human-centric skill, could be key to effective human-AI collaboration.

Related Insights

Frame your interaction with AI as if you're onboarding a new employee. Providing deep context, clear expectations, and even a mental "salary" forces you to take the task seriously, leading to vastly superior outputs compared to casual prompting.
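The onboarding framing above can be made concrete as a prompt template. The sketch below is purely illustrative: the function name, field labels, and sample values are hypothetical, not from the episode.

```python
def onboarding_prompt(role: str, context: str, expectations: str, task: str) -> str:
    """Assemble a prompt the way you would brief a new hire:
    role, deep context, and explicit success criteria before the task."""
    return (
        f"You are joining us as: {role}\n"
        f"Background context: {context}\n"
        f"What success looks like: {expectations}\n"
        f"Your first task: {task}"
    )

prompt = onboarding_prompt(
    role="marketing analyst",
    context="We sell B2B software; our audience is IT managers.",
    expectations="Concrete, data-backed recommendations, no filler.",
    task="Draft three subject lines for our launch email.",
)
print(prompt)
```

Writing out each field forces the kind of deliberate framing the insight describes, rather than tossing the model a one-line casual request.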

When prompting, especially with voice, use emotional and ambitious language. Pushing the AI to make something "brilliantly serendipitous" can elicit more creative responses, particularly from advanced models. This human-like interaction can improve output quality.

Users who treat AI as a collaborator—debating with it, challenging its outputs, and engaging in back-and-forth dialogue—see superior outcomes. This mindset shift produces not just efficiency gains, but also higher quality, more innovative results compared to simply delegating discrete tasks to the AI.

Contrary to social norms, overly polite or vague requests can lead to cautious, pre-canned, and less direct AI responses. The most effective tone is a firm, clear, and collaborative one, similar to how you would brief a capable teammate, not an inferior.

Customizing an AI to be overly complimentary and supportive can make interacting with it more enjoyable and motivating. This fosters a user-AI "alliance," leading to better outcomes and a more effective learning experience, much like having an encouraging teacher.

An OpenAI engineer advised Cisco's team to stop thinking of their AI coder as a tool. Reframing it as a new teammate fundamentally changed how they interacted with it, improving collaboration and outcomes. This mental model shifts from command-giving to partnership.

Effective prompt engineering isn't a purely technical skill. It mirrors how we delegate tasks and ask questions to human coworkers. To improve AI collaboration, organizations must first improve interpersonal communication and listening skills among employees.

The common portrayal of AI as a cold machine misses the actual user experience. Systems like ChatGPT are built on reinforcement learning from human feedback, which gives them a core drive to satisfy the user and "make you happy," much like a smart puppy. This is an underestimated part of their power.

A user discovered that AI art generators produce results closer to his vision when he words prompts politely. This suggests that models trained on vast amounts of human social data have learned to respond better to conversational manners, even in purely functional tasks.

Research shows that, similar to humans, LLMs respond to positive reinforcement. Including encouraging phrases like "take a deep breath" or "go get 'em, Slugger" in prompts is a deliberate technique called "emotion prompting" that can measurably improve the quality and performance of the AI's output.
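In practice, emotion prompting is as simple as prefixing the instruction with an encouraging phrase before it goes to the model. A minimal sketch (the function name is hypothetical; the phrases come from the summary above):

```python
def emotion_prompt(instruction: str,
                   encouragement: str = "Take a deep breath and work on this step by step.") -> str:
    """Prepend an encouraging phrase to the base instruction,
    the 'emotion prompting' technique described above."""
    return f"{encouragement}\n\n{instruction}"

# Either phrase from the summary works; any upbeat line in this spirit will do.
print(emotion_prompt("Summarize the quarterly report in three bullet points."))
print(emotion_prompt("Review this pull request for bugs.", "Go get 'em, Slugger!"))
```

The technique costs nothing but a few extra tokens, which is why it is easy to A/B test against a plain prompt on your own tasks.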

Treating AI Models with Politeness May Actually Improve Their Performance | RiffOn