"Ban the box" policies preventing employers from asking about criminal records early in the hiring process backfired. Unable to see an applicant's record, some employers resorted to guessing based on demographics, which increased discrimination against young Black men who had no criminal record. This highlights the need to 'fail fast' and test policies before wide implementation.

Related Insights

A company found its top engineers were "difficult." Before changing hiring criteria to favor this trait, they checked their worst-performing engineers and found they were also difficult. The trait was common to all engineers, not a signal of success, revealing a classic survivorship bias.

By limiting the hiring pool to specific demographics (e.g., requiring a "woman of color"), institutions from fire departments to the Vice Presidency are no longer selecting from the most qualified candidates overall. Carolla argues this is a form of meritocracy decay that guarantees a lower-quality outcome.

Risk assessment tools used in courts are often trained on old data and fail to account for societal shifts in crime and policing, creating "cohort bias." This leads to massive overpredictions of an individual's likelihood to commit a crime, resulting in harsher, unjust sentences.

The belief that simply 'hiring the best person' ensures fairness is flawed because human bias is unavoidable. A true merit-based system requires actively engineering bias out of processes through structured interviews, clear job descriptions, and intentionally sourcing from diverse talent pools.

Many leaders hire defensively, trying to avoid a costly mistake. This fear-based mindset leads to negative assumptions and misinterpretations of candidate signals. Shifting to an abundance mindset—believing the right person is out there—fosters curiosity and leads to better evaluation and hiring outcomes.

While President Biden's AI executive order explicitly pushed for DEI, states like Colorado are achieving the same goal using subtler language. By prohibiting 'algorithmic discrimination' and 'disparate impact,' they effectively force AI companies to build DEI-centric bias layers into their models.

When police departments face severe staffing shortages amid cultural vilification, they may lower hiring standards. This can lead to hiring individuals with criminal backgrounds, who then commit heinous acts as officers, further eroding public trust and exacerbating the original problem.

Young professionals' offensive or foolish online posts become a permanent liability. HR departments conduct Google searches and will discard applicants with problematic histories without ever telling them why, effectively closing doors to future opportunities based on past digital indiscretions.

While reducing evictions helps current tenants, it creates an unintended consequence: landlords become more cautious when selecting new tenants. Knowing it is harder to remove a problematic tenant, landlords increase screening scrutiny, which can lead to discriminatory practices against applicants perceived as higher risk, making it harder for newcomers to find housing.

For over four decades, a 1981 consent decree effectively banned technical assessments in federal hiring due to fears of disparate impact lawsuits. This forced a reliance on self-reported skills, crippling the government's ability to evaluate technical talent. The recent reversal of this decree finally allows for modern, merit-based hiring.