Computer grading technology far more precise than human graders has existed for decades, yet the collectibles industry has rejected it. Collectors behave like gamblers, preferring the subjective nature of human grading and the chance to "win" a higher grade by resubmitting an item.
Users are dissatisfied with purely AI-generated creative output, such as interior design, dismissing it as "slop." This creates an opportunity for platforms that blend AI's efficiency with human taste and curation, for which consumers are willing to pay a premium.
Premium loyalty programs, like airline status tiers, are monetized systems for accessing favorable human judgment and exceptions to standard rules. This is a powerful market-based argument that pure, rigid AI automation has a value ceiling: people pay to escape it.
As AI commoditizes execution and intellectual labor, the only remaining scarce human skill will be judgment: the wisdom to know what to build, why, and for whom. This shifts economic value from sheer effort to discernment and taste.
Concepts like good taste or judgment aren't magical human traits but are a form of "embedded measurement" in our brains. This data, collected through unique, lived experiences (especially edge cases), is not yet digitized and thus remains a key differentiator from AI models trained on public data.
Studies show that people often prefer AI-generated art when judging on quality alone, but the preference flips to the human-created version once they learn the source. This reveals a deep-seated bias for human effort and poses a "Catch-22" for marketers, who risk losing audience appreciation if their AI use is discovered.
A world where AI agents followed policies perfectly would be brittle and frustrating. Human systems work because they carry an implicit allowance for discretionary non-compliance. People value, and will pay for, the possibility that a human can bend the rules for them in a messy situation.
Even when AI plays chess at a superhuman level, humans still gravitate toward watching other imperfect humans compete. This suggests our engagement stems from fallibility, surprise, and the shared experience of making mistakes, qualities that perfectly optimized AI lacks. That gap limits how far AI can culturally displace human performance.
National tests in Sweden revealed that human evaluators of oral exams were shockingly inconsistent, sometimes performing worse than random chance. While AI grading has its own biases, those biases can be identified and systematically adjusted, unlike hidden human subjectivity.
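To make "identified and systematically adjusted" concrete, here is a minimal Python sketch of one standard approach: score a calibration set with both the AI and a human reference panel, measure the mean offset per group, and subtract it from future AI scores. The dataset, the dialect grouping key, and all function names are illustrative assumptions, not details from the episode.

```python
from statistics import mean

def estimate_bias(records, group_key):
    """Per-group mean difference between AI scores and reference scores."""
    diffs = {}
    for r in records:
        diffs.setdefault(r[group_key], []).append(r["ai_score"] - r["ref_score"])
    return {group: mean(d) for group, d in diffs.items()}

def debias(record, bias, group_key):
    """Subtract the measured systematic offset from a new AI score."""
    return record["ai_score"] - bias.get(record[group_key], 0.0)

# Hypothetical calibration set: the AI over-scores dialect group "A" by 0.5.
calibration = [
    {"dialect": "A", "ai_score": 4.5, "ref_score": 4.0},
    {"dialect": "A", "ai_score": 3.5, "ref_score": 3.0},
    {"dialect": "B", "ai_score": 4.0, "ref_score": 4.0},
]
bias = estimate_bias(calibration, "dialect")  # {"A": 0.5, "B": 0.0}
corrected = debias({"dialect": "A", "ai_score": 4.2}, bias, "dialect")
print(round(corrected, 2))  # 3.7
```

The point of the contrast with human graders: once the offset is measured, the same correction applies to every future submission, whereas an individual examiner's subjectivity cannot be audited or patched this way.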
A major cost in card grading is two-way shipping plus manual inspection. A disruptive model would let users submit high-resolution scans through a proprietary app: AI performs the initial grade, and the company ships only the final, high-quality display slab, cutting both cost and turnaround time.
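A rough sketch of that flow under the assumptions above, where the only physical logistics left is the outbound slab. Every name and value here is hypothetical, not a real grading company's API.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    card_id: str
    scan_path: str  # high-resolution scan captured and uploaded via the app

def ai_grade(scan_path: str) -> float:
    """Stub for the vision model that would score centering, corners,
    edges, and surface from the scan."""
    return 9.5

def process(sub: Submission) -> dict:
    grade = ai_grade(sub.scan_path)
    # Inbound shipping and manual inspection drop out of the flow;
    # the single outbound slab shipment is all that remains.
    return {"card_id": sub.card_id, "grade": grade, "next_step": "ship_display_slab"}

print(process(Submission("card-0001", "scans/card-0001.png")))
```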
Customers are so accustomed to the perfect accuracy of deterministic, pre-AI software that they reject AI solutions that are anything less than flawless. They would rather do an entire task manually than accept an AI assistant that is 90% correct, a mindset that serial entrepreneur Elias Torres finds dangerous for businesses.