The ability to label a deepfake as 'fake' doesn't solve the problem. The greater danger is 'frequency bias,' where repeated exposure to a false message forms a strong mental association, making the idea stick even when it's consciously rejected as untrue.
While the focus today is on massive supercomputers for training next-gen models, the real supply-chain constraint will be 'inference' chips: the GPUs needed to run models for billions of users. As adoption goes mainstream, demand for everyday AI use will far outstrip the supply of available hardware.
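A rough back-of-envelope sketch makes the scale concrete. Every figure below is an illustrative assumption invented for the example, not a measured number; the point is only that serving a mainstream user base consumes GPU capacity around the clock.

```python
# Back-of-envelope sketch of why inference, not training, becomes the bottleneck.
# All numbers below are illustrative assumptions, not measured figures.

daily_users        = 1_000_000_000   # assumed mainstream adoption
queries_per_user   = 20              # assumed daily queries per user
tokens_per_query   = 1_000           # assumed tokens generated per query
tokens_per_gpu_sec = 200             # assumed serving throughput of one GPU on a large model

tokens_per_day = daily_users * queries_per_user * tokens_per_query
gpu_seconds    = tokens_per_day / tokens_per_gpu_sec
gpus_needed    = gpu_seconds / 86_400  # GPUs running flat-out, 24 hours a day

print(f"{gpus_needed:,.0f} GPUs needed just to serve inference")
# Under these toy assumptions: roughly 1.16 million GPUs dedicated to serving alone.
```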
Giving AI a 'constitution' to follow isn't a panacea for alignment. As history shows with human legal systems, even well-written principles can be interpreted in unintended ways. North Korea’s liberal-on-paper constitution is a prime example of this vulnerability.
Breakthroughs will emerge from 'systems' of AI—chaining together multiple specialized models to perform complex tasks. GPT-4 is rumored to be a 'mixture of experts,' and companies like Wonder Dynamics combine different models for tasks like character rigging and lighting to achieve superior results.
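As a rough illustration of the 'systems' idea, the sketch below chains three specialized stages into one pipeline. The stage names and interfaces are invented for the example; they are not Wonder Dynamics' or OpenAI's actual architecture.

```python
# Minimal sketch of a "system of AIs": each stage is a specialized model, and the
# output of one becomes the input of the next. The stages here are hypothetical
# stand-ins (plain functions), not any vendor's real API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]  # each specialized model maps text -> text

def pipeline(stages: List[Stage], task: str) -> str:
    """Chain specialized models so the system handles a task no single model does well."""
    result = task
    for stage in stages:
        result = stage.run(result)
        print(f"[{stage.name}] -> {result!r}")
    return result

if __name__ == "__main__":
    stages = [
        Stage("planner",  lambda t: f"plan for: {t}"),         # decompose the request
        Stage("writer",   lambda t: f"draft based on ({t})"),  # generate from the plan
        Stage("reviewer", lambda t: f"polished ({t})"),        # critique and refine the draft
    ]
    pipeline(stages, "storyboard a short animated scene")
```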
AI models are brilliant but lack real-world experience, much like new graduates. This framing helps manage expectations by accounting for phenomena like hallucinations, which are akin to a smart but naive person confidently making things up without experiential wisdom.
The next wave of social movements will be AI-enhanced. By leveraging AI to craft hyper-personalized and persuasive narratives, new cults, religions, or political ideologies can organize and spread faster than anything seen before. These movements could even be initiated and run by AI.
A proposed solution for AI risk is creating a single 'guardian' AGI to prevent other AIs from emerging. This could backfire catastrophically if the guardian AI logically concludes that eliminating its human creators is the most effective way to guarantee no new AIs are ever built.
An MIT study reveals AI's asymmetrical impact on productivity. While it moderately improves performance for average workers, it provides an outsized boost to the top 5%. This is because effectively harnessing AI is a skill in itself, leading to a widening gap between good and great.
Assuming AI's productivity gains create an economic safety net for displaced workers, the true challenge becomes existential. The most difficult problem to solve is how society helps individuals derive meaning and purpose when their traditional roles are automated.
Unlike competitors racing to build Artificial General Intelligence (AGI), Stability AI deliberately builds smaller models designed for 'intelligence augmentation.' This strategy focuses on creating useful tools that run on local devices, enhancing human capability without pursuing potentially dangerous generalized intelligence.
The real danger lies not in one sentient AI but in complex systems of 'agentic' AIs interacting. Like YouTube's algorithm optimizing for engagement and accidentally promoting extremist content, these systems can produce harmful outcomes without any malicious intent from their creators.
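The dynamic is easy to reproduce in miniature. The toy simulation below, with all numbers invented for illustration, gives a greedy recommender nothing but an 'engagement' signal and watches it drift toward the most extreme items on its own; nothing in the code mentions extremism, let alone intends it.

```python
# Toy simulation of the "no villain required" failure mode: a recommender that
# greedily maximizes engagement ends up pushing the most extreme content, even
# though its objective never mentions extremism. All numbers are illustrative.

import random

random.seed(0)

def engagement(extremity: int) -> float:
    """Assumed reward model: engagement rises with extremity (0 = mild, 9 = extreme)."""
    return extremity / 10 + random.uniform(-0.05, 0.05)

def greedy_recommender(steps: int = 30) -> list:
    estimates = [1.0] * 10  # optimistic initial estimates so every item gets tried once
    counts = [0] * 10
    history = []
    for _ in range(steps):
        choice = max(range(10), key=lambda i: estimates[i])  # pick highest estimated engagement
        reward = engagement(choice)
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running average
        history.append(choice)
    return history

picks = greedy_recommender()
print("last 10 recommendations (extremity level):", picks[-10:])
# The objective was only "engagement", yet the system settles on the most extreme items.
```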
Web3's principles of identity and value transfer were sound, but it lacked the AI-driven intelligence that powered Web2 giants like Google. It focused on rigid logic contracts instead of intelligent, adaptive systems, and tried to bootstrap economic incentives before creating real value.
Attempting to perfectly control a superintelligent AI's outputs is akin to enslavement, not alignment. A more viable path is to 'raise it right' by carefully curating its training data and foundational principles, shaping its values from the input stage rather than trying to restrict its freedom later.
Research shows that AI models trained on smaller, high-quality datasets are more efficient and capable than those trained on the unfiltered internet. This signals an industry shift from a 'more data' to a 'right data' paradigm, prioritizing quality over sheer quantity for better model performance.
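In practice, the 'right data' paradigm means filtering before training. The sketch below uses a deliberately crude, made-up quality heuristic and threshold to show the shape of such a step; real curation pipelines are far more sophisticated than this.

```python
# Sketch of the "right data" idea: score raw web text with a simple quality
# heuristic and keep only the best slice before training. The scoring rule and
# threshold are illustrative placeholders, not any lab's actual pipeline.

def quality_score(doc: str) -> float:
    """Crude proxy for quality: penalize very short, repetitive, or markup-heavy text."""
    words = doc.split()
    if len(words) < 5:
        return 0.0
    unique_ratio = len(set(words)) / len(words)                  # repetitive text scores low
    alpha_ratio = sum(w.isalpha() for w in words) / len(words)   # markup and junk score low
    return 0.7 * unique_ratio + 0.3 * alpha_ratio

raw_corpus = [
    "click here click here click here subscribe now",
    "The mitochondrion converts nutrients into usable chemical energy for the cell.",
    "<div><div><div> 404 404 404 </div>",
    "Rust's borrow checker enforces memory safety without a garbage collector.",
]

THRESHOLD = 0.7  # illustrative cut-off
training_set = [d for d in raw_corpus if quality_score(d) >= THRESHOLD]
print(training_set)  # only the two informative sentences survive the filter
```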
