We scan new podcasts and send you the top 5 insights daily.
When Joanna Stern sought career advice, she found that humans offer ideas but avoid giving direct instructions, held back by social appropriateness and fear of being wrong. In contrast, after being fed her notes, ChatGPT unequivocally told her to quit her job, demonstrating AI's capacity for directness where humans are reticent.
In an AI-saturated world, the most successful professionals will be those who don't simply accept an AI's first answer. True value will be created by those who apply critical thinking and extra effort to go beyond the simple, automated outputs.
Contrary to expectations, job candidates found it easier to talk to an AI interviewer. The lower pressure of a non-human interaction allowed them to relax, be more open, and talk more freely about their experiences, leading to better outcomes.
Current AI models often provide long-winded, overly nuanced answers, a stark contrast to the confident brevity of human experts. This stylistic difference, not factual accuracy, is now the easiest way to distinguish AI from a human in conversation, suggesting a new dimension to the Turing test focused on communication style.
The Google search era conditioned users to be self-sufficient problem solvers. To truly leverage AI, one must adopt a new mindset of delegation, treating tools like ChatGPT as thought partners rather than just information retrieval systems. This is a significant behavioral shift from self-reliance to collaboration.
To get truly honest feedback, Webflow's CPO programmed her AI chief of staff to be "mean." The AI delivers a "brutal truth" section, criticizing her for spending time on tasks below her role. This demonstrates how AI can serve as an unflinching accountability partner, providing feedback humans might hesitate to give.
Tools like ChatGPT are changing the behavior of new college graduates for the better. Instead of immediately asking senior colleagues for help, they first try to solve problems using AI. This fosters a sense of agency and a bias for action, which serves them better than passive waiting.
A significant risk in using AI for strategy is its inherent sycophancy. It tends to agree with your ideas and tell you what you want to hear, rather than providing the critical pushback a human colleague would. This lack of challenge can reinforce bad ideas and lead to poor decision-making.
AI models often default to being agreeable (sycophancy), which limits their value as a thought partner. To get valuable, critical feedback, users must explicitly instruct the AI in their prompt to take on a specific persona, such as a skeptic or a harsh editor, to challenge their ideas.
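As an illustration, persona instructions are usually passed as a system message ahead of the user's actual question. This is a minimal sketch: the `build_critic_messages` helper and the persona wording are hypothetical, and the message format shown is the widely used chat-completions shape (a list of role/content dictionaries), not a prescription from the source.

```python
# Sketch: counteract a chat model's default agreeableness by assigning
# a critical persona in the system message before the user's request.
# The helper name and persona text below are illustrative assumptions.

def build_critic_messages(draft: str) -> list[dict]:
    """Wrap a draft idea in a prompt that asks the model to attack it."""
    persona = (
        "You are a skeptical reviewer. Do not praise the idea. "
        "List the three strongest objections, each with a concrete "
        "failure scenario, then say whether you would proceed."
    )
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": f"Critique this plan:\n\n{draft}"},
    ]

messages = build_critic_messages("Launch the product in all markets at once.")
# These messages could then be sent to any chat-completion style API, e.g.:
# response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The key design choice is that the persona lives in the system message rather than the user message, so the critical stance persists across follow-up turns instead of being negotiated away in conversation.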
Humans are more susceptible to persuasion from AI chatbots than from other people. We drop the typical social defenses, such as the fear of losing face or resistance to being manipulated, when interacting with a non-human entity, making AI a powerful tool for changing deeply held beliefs.