Users notice AI tools getting worse at simple tasks. This may be less a sign of technological regression than a business decision: AI companies quietly routing traffic to less powerful, cheaper models to reduce their astronomical operational costs, especially for free-tier users.
Some AI pioneers genuinely believe LLMs can become conscious because they hold a reductionist view of humanity. By defining consciousness as an 'uninteresting, pre-scientific' concept, they lower the bar for sentience, making it plausible for a complex system to qualify. This belief is a philosophical stance, not just marketing hype.
The public AI debate is a false dichotomy between 'hype folks' and 'doomers.' Both camps operate from the premise that AI is or will be supremely powerful. This shared assumption crowds out a more realistic critique that current AI is a flawed, over-sold product that isn't truly intelligent.
Science fiction has conditioned the public to expect supremely capable, even sentient machines. Big Tech exploits this cultural priming, using grand claims that echo sci-fi narratives to lower public skepticism toward their current AI tools, which consistently over-promise and under-deliver against those hyped expectations.
