The primary obstacle preventing individuals from launching initiatives is an inflated fear of public failure. Scott Galloway argues this fear is an internal, two-inch-high barrier that is much smaller than it appears. Overcoming it unlocks potential for significant influence and personal growth.
AI is breaking the traditional model where junior employees learn by doing repetitive tasks. As both interns and managers turn to AI, this learning loop is lost. This shift could make formal, structured education more critical for professional skill development in the future.
The real leverage in consumer boycotts is not the direct financial hit from cancellations. It's the media narrative about potential impact that creates pressure on employees, partners, and executives, ultimately forcing a corporate response—as seen when Disney reversed course on Jimmy Kimmel.
Contrary to popular cynicism, ominous warnings about AI from leaders like Anthropic's CEO are often genuine. Ethan Mollick suggests these executives truly believe in the potential dangers of the technology they are creating, and that the warnings are not merely a marketing tactic to inflate the technology's perceived power.
A major risk with AI is that leaders, accustomed to viewing technology as an efficiency tool, will default to cutting jobs rather than exploring growth opportunities. Ethan Mollick warns of a "failure of imagination" where companies miss the chance to use AI to expand their capabilities and create new value.
The most successful companies deploying AI follow a "Leadership, Lab, and Crowd" model. Leadership provides clear direction, while the entire organization is given access to tools to experiment and discover novel use cases. An internal lab team then harvests these grassroots ideas for strategic implementation.
The narrative of AI's world-changing power and existential risk may be fueled by CEOs' vested interest in securing enormous investments. Framing the technology as both revolutionary and dangerous justifies higher valuations and larger funding rounds, as Scott Galloway suggests is the case for companies like Anthropic.
Beyond raw capability, top AI models exhibit distinct personalities. Ethan Mollick describes Anthropic's Claude as a fussy but strong "intellectual writer," ChatGPT as having friendly "conversational" and powerful "logical" modes, and Google's Gemini as a "neurotic" but smart model that can be self-deprecating.
While data was once a major constraint for training AI, models can now effectively create their own synthetic data. This has shifted the critical choke points in the AI supply chain to physical infrastructure like power grids and data center construction, which are now the primary limiters of growth.
History shows that transformative technologies like aviation created immense societal value without concentrating wealth in a few companies. AI could follow this path, with its benefits being widely distributed through commoditization, challenging the multi-trillion dollar valuations of today's leading firms.
While companies report low official adoption, roughly half of workers use AI and conceal the resulting productivity gains. This "shadow adoption" stems from fear that revealing AI's efficiency will lead to layoffs rather than rewards, preventing companies from capitalizing on the technology's full potential.
