AI ethical failures like bias and hallucination are framed not as bugs to be patched but as structural consequences of Gödel's incompleteness theorems. To the extent that an AI is a sufficiently expressive formal system, it cannot be both consistent and complete, so some ethical scenarios are inherently undecidable from within its own logic.
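For reference, the theorems the argument leans on can be stated compactly. This is the standard mathematical formulation, which applies to effectively axiomatized theories interpreting arithmetic; whether a given AI system meets those hypotheses is an assumption of the argument, not part of the theorems themselves:

```latex
% First incompleteness theorem: for any consistent, effectively
% axiomatized theory $T$ that interprets elementary arithmetic,
% there is a sentence $G_T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]
% Second incompleteness theorem: such a $T$ also cannot prove
% its own consistency:
\[
  T \nvdash \mathrm{Con}(T) .
\]
```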
The transcript analogizes AI to cosmological models. A self-contained AI, like the Hartle-Hawking 'no boundary' universe model, is a perfect but directionless system. It requires an external human observer to collapse its possibilities into a single, meaningful reality, just as quantum mechanics requires an observer.
To compensate for this inherent logical incompleteness, an ethical AI requires an external 'anchor.' The anchor must be an unprovable axiom, not a derived value: the proposed axiom is 'unconditional human worth,' which serves as the fixed origin point for all subsequent ethical calculations and prevents human worth from ever being traded off in a utility calculation.
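The distinction between an axiom and a derived value can be sketched in code. This is a purely illustrative toy, not anything from the transcript: the function names and the utility numbers are hypothetical, and the point is only structural, that the axiom acts as a constraint which dominates any finite utility rather than being computed from one.

```python
# Hypothetical sketch: human worth as an axiom (a hard constraint),
# not a value derived from a utility calculation. All names here are
# illustrative inventions, not an actual system design.

def permissible(action_utility: float, violates_human_worth: bool) -> bool:
    """An action that treats a person's worth as tradable is never
    permissible, no matter how large its utility; otherwise, fall
    back to an ordinary utility check."""
    if violates_human_worth:
        return False  # the axiom dominates any finite utility
    return action_utility >= 0.0

# Even an enormous utility cannot outweigh the axiom:
print(permissible(1e9, violates_human_worth=True))    # False
print(permissible(0.5, violates_human_worth=False))   # True
```

The design choice this illustrates: a derived value sits inside the objective function and can be outbid; an axiomatic anchor sits outside it and cannot.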
