There is a growing gap between the entertainment value of building with AI tools—likened to playing with Legos—and the actual, sustained utility of the creations. Many developers build novel applications for fun but rarely use them, suggesting a challenge in finding true product-market fit.
Today's dominant AI tools like ChatGPT are perceived as productivity aids, akin to "homework helpers." The next multi-billion-dollar opportunity is in creating the go-to AI for fun, creativity, and entertainment, the app people reach for when they're not working. This untapped market centers on user expression and play.
AI lowers the barrier to entry, flooding the market with "whiteboard-founded" companies tackling low-hanging fruit. This creates a highly competitive, consensus-driven environment that is the opposite of a "good quest." The real challenge is finding meaningful problems.
There is a massive gap between what AI models *can* do and how they are *currently* used. This 'capability overhang' exists because unlocking their full potential requires unglamorous 'ugly plumbing' and 'grunty product building.' The real opportunity for founders is in this grind, not just in model innovation.
Without a strong foundation in customer problem definition, AI tools simply accelerate bad practices. Teams that habitually jump to solutions without a clear "why" will find themselves building rudderless products at an even faster pace. AI makes foundational product discipline more critical, not less.
After building numerous AI tools, Craig Hewitt realized that many popular applications (e.g., AI avatars, voice cloning) are worthless novelties. He pivoted from building flashy tech demos to focusing only on commercially viable products that solve tangible business problems for customers.
Many users of generative AI tools like Suno and Midjourney are creating content for their own enjoyment, not for professional use. This reveals a 'creation as entertainment' consumer behavior, distinct from the traditional focus on productivity or job displacement.
The perceived limits of today's AI are not inherent to the models themselves but to our failure to build the right "agentic scaffold" around them. There's a "model capability overhang" where much more potential can be unlocked with better prompting, context engineering, and tool integrations.
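To make "agentic scaffold" concrete, here is a minimal sketch of the idea under stated assumptions: every name in it (call_model, run_agent, search_docs, TOOLS) is an illustrative stub rather than any real model API, and a production scaffold would swap in an actual LLM call, real tools, and richer context management.

```python
# Minimal illustrative scaffold: a loop that lets a model request tools,
# feeds the results back as context, and stops when it returns a final answer.
# All names here are stand-ins; nothing below calls a real model or service.

def search_docs(query: str) -> str:
    """Stub tool: a real scaffold would query a search index or database."""
    return f"(top results for '{query}')"

TOOLS = {"search_docs": search_docs}

def call_model(messages: list[dict]) -> dict:
    """Stub model call: a real scaffold would send `messages` to an LLM API."""
    # Pretend the model asks for one tool call, then produces a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "search_docs",
                "args": {"query": "pricing page copy"}}
    return {"type": "final", "content": "Draft answer built from the tool results."}

def run_agent(task: str, max_steps: int = 5) -> str:
    """The scaffold itself: prompt -> tool calls -> fresh context -> answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["type"] == "final":
            return reply["content"]
        # Execute the requested tool and append its output as new context.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."

print(run_agent("Summarize our docs on pricing."))
```

The point of the sketch is that the loop, the tools, and the context handling, not the model weights, are where much of the unlocked capability lives.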
Companies racing to add AI features while ignoring core product principles—like solving a real problem for a defined market—are creating a wave of failed products, dubbed "AI slop" by product coach Teresa Torres.
Despite AI tools making it easier than ever to design, code, and launch applications, many people feel stuck and don't know what to build. This suggests a deficit in big-picture thinking and problem identification, not a lack of technical capability.
Jason Fried argues that while AI dramatically accelerates building tools for yourself, it falls short when creating products for a wider audience. The art of product development for others lies in handling countless edge cases and conditions that a solo user can overlook, a complexity AI doesn't yet master.