When a bank rejects a loan based on clear, factual criteria (e.g., insufficient funds), the applicant can take specific steps to fix the problem. A rejection based on an opaque predictive model, by contrast, is not a fact but an "educated guess" that cannot be proven false, leaving the applicant with no recourse and shielding the institution from accountability.
Instead of trying to anticipate every potential harm, AI regulation should mandate open, internationally consistent audit trails, similar to financial transaction logs. This shifts the focus from pre-approval to post-hoc accountability, allowing regulators and the public to address harms as they emerge.
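A minimal sketch of what such a transaction-log-style audit trail might look like, assuming a hash-chained append-only log; the class and field names are illustrative, not a proposed standard.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One record of an automated decision (fields are illustrative)."""
    model_id: str       # which model/version produced the decision
    inputs_digest: str  # hash of the inputs, verifiable without exposing raw data
    decision: str
    rationale: str      # the factors the system reports as decisive
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    """Hash-chained log: each entry commits to the previous one, so records
    cannot be silently altered or removed after the fact."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def append(self, entry: AuditEntry) -> str:
        record = asdict(entry) | {"prev": self._prev_hash}
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append((digest, record))
        self._prev_hash = digest
        return digest
```

The chaining is the point: a regulator auditing harms after the fact can verify that the record of what the system did, and why, has not been rewritten.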
When an algorithm deems someone "unemployable," that person is denied jobs, thus validating the prediction. The system generates its own accuracy by creating the reality it purports to predict, leaving no error signal to correct itself. Oxford philosopher Carissa Véliz calls this a "perfect crime" as the evidence disappears.
Fair lending laws require banks to give specific reasons for a credit denial, which is difficult with complex AI models. To navigate this, banks first use traditional models for a decision. If it's a "no," they then use AI to find a way to approve the applicant, avoiding the regulatory disclosure hurdle.
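The two-stage workflow described can be sketched as follows; the function names and return shapes are hypothetical, chosen only to make the sequencing explicit.

```python
def decide(applicant, traditional_model, ai_model):
    """Sketch of the two-stage process: the explainable traditional model
    makes the initial call, so every denial has disclosable reasons. The AI
    model is consulted only to turn a 'no' into a 'yes', since approvals
    don't trigger the disclosure requirement. (All names are illustrative.)

    traditional_model(applicant) -> (decision, reasons)
    ai_model(applicant) -> decision
    """
    decision, reasons = traditional_model(applicant)
    if decision == "approve":
        return "approve", reasons
    # Traditional model says no: let the AI look for a basis to approve.
    if ai_model(applicant) == "approve":
        return "approve", ["approved on secondary review"]
    # AI also says no: the traditional model's disclosable reasons stand.
    return "deny", reasons
```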
A crucial function for humans in an AI-driven economy is to serve as a target for lawsuits. Because you can't easily sue a data center, regulated professions will require a 'human in the loop' to take legal responsibility. This creates a valuable economic role for humans: being a legally accountable entity.
Unlike a human judge, whose mental process is hidden, an AI dispute resolution system can be designed to provide a full audit trail. It can be required to 'show its work,' explaining its step-by-step reasoning, potentially offering more accountability than the current system allows.
When buying AI solutions, demand transparency from vendors about the specific models and prompts they use. Mollick argues that 'we use a prompt' is not a defensible 'secret sauce' and that this transparency is crucial for auditing results and ensuring you aren't paying for outdated or flawed technology.
As AI models are used for critical decisions in finance and law, black-box empirical testing will become insufficient. Mechanistic interpretability, which analyzes model weights to understand reasoning, is a bet that society and regulators will require explainable AI, making it a crucial future technology.
With frontier models, creators deny responsibility for user applications, while users claim no control over the model's inner workings. Sovereign AI eliminates this gap. By controlling the entire stack, an organization becomes fully accountable, satisfying regulators who need proof of what an AI did and why.
Instead of using AI to score consumers, Experian applies it to governance. AI systems monitor financial models for 'drift'—when outcomes deviate from predictions—and alert human overseers to the specific variables causing the issue, ensuring fairness and regulatory compliance.
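A toy sketch of the kind of drift check described: compare the model's predicted rate with the observed outcome rate within each segment of a variable, and flag segments where the two diverge. The function name and threshold are illustrative, not Experian's implementation.

```python
from collections import defaultdict

def drift_by_segment(records, threshold=0.10):
    """records: iterable of (segment, predicted_prob, outcome_0_or_1).
    Returns {segment: gap} for segments where the mean predicted probability
    and the observed outcome rate diverge by more than `threshold`,
    for a human overseer to review. (Illustrative drift check.)"""
    preds = defaultdict(float)
    actuals = defaultdict(float)
    counts = defaultdict(int)
    for segment, p, y in records:
        preds[segment] += p
        actuals[segment] += y
        counts[segment] += 1
    flagged = {}
    for seg, n in counts.items():
        gap = abs(preds[seg] / n - actuals[seg] / n)
        if gap > threshold:
            flagged[seg] = round(gap, 3)
    return flagged
```

In practice a monitor like this runs continuously, and a flagged segment is the "alert human overseers to the specific variables" step: it names where outcomes are deviating from predictions, not just that they are.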
Technological advancement, particularly in AI, moves faster than legal and social frameworks can adapt. This creates 'lawless spaces,' akin to the Wild West, where powerful new capabilities exist without clear rules or recourse for those negatively affected. This leaves individuals vulnerable to algorithmic decisions about jobs, loans, and more.