We scan new podcasts and send you the top 5 insights daily.
While more data seems better, comprehensive imaging scans can be problematic. Each measurement carries a false positive risk, so the cumulative probability of receiving a disruptive, incorrect result becomes material, leading to unnecessary stress and follow-up procedures.
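The cumulative risk the insight describes compounds quickly. A minimal sketch, using an illustrative 5% per-test false-positive rate and 20 measurements (assumed numbers, not figures from the podcast):

```python
# Cumulative false-positive risk across independent measurements.
# The 5% per-test rate and count of 20 are illustrative assumptions.
def prob_at_least_one_false_positive(per_test_fp_rate: float, n_tests: int) -> float:
    """P(at least one FP) = 1 - (1 - p)^n, assuming independent tests."""
    return 1 - (1 - per_test_fp_rate) ** n_tests

p = prob_at_least_one_false_positive(0.05, 20)
print(f"{p:.0%}")  # prints "64%" -- a roughly two-in-three chance of at least one scare
```

Even with a modest per-measurement error rate, a comprehensive scan panel makes at least one incorrect, disruptive finding more likely than not.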
Contrary to trends in wellness, a full-body MRI doesn't catch cancer early. A mass visible on an MRI already contains billions of cells and may have spread. Furthermore, it often leads to a rabbit hole of invasive tests for benign abnormalities, causing unnecessary harm.
Dr. Deb Schrag suggests the main challenge for new molecular cancer screening technologies is not invention, but implementation. The critical task will be deploying these tools at a population scale and effectively managing the logistical challenge of distinguishing true positives from false alarms.
While PSMA PET scans are more sensitive, they create a clinical dilemma because pivotal trials defining treatment efficacy were based on conventional imaging (CT/bone scans). This forces oncologists to either re-image patients with older technology to match trial criteria or make treatment decisions based on PET data that lacks a clear evidence-based framework for response assessment.
Hims' expansion into selling non-FDA-approved multi-cancer early detection tests raises concerns among researchers. Offering these to its relatively young, low-risk user base could lead to false positives, triggering unnecessary and costly 'diagnostic odysseys' for patients who are merely worried.
AI latches onto the most statistically convenient correlation in its training data, even when it is spurious. One system learned to associate rulers in medical images with cancer, not the lesion itself, because clinicians often photograph suspicious spots alongside a measuring scale. This highlights the profound risk of deploying opaque AI systems in critical fields.
Our cognitive wiring prefers making harmless errors (false positives, e.g., seeing a predator that isn't there) over fatal ones (false negatives). This "better safe than sorry" principle, as described by Michael Shermer, underlies our susceptibility to misinformation and snap judgments.
While wearables generate vast amounts of health data, the medical system lacks the evidence to interpret these signals accurately for healthy individuals. This creates a risk of false positives and incidental findings ('incidentalomas'), causing unnecessary anxiety and hindering adoption of proactive health tech.
A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.
AI platforms can analyze existing medical images, like CT scans ordered for a cough, to find subtle, early signs of cancers. This repurposes vast amounts of routine diagnostic data into a powerful, passive screening tool, allowing for incidental discoveries of diseases like pancreatic cancer without new procedures.
Individual early-detection tests like blood-based liquid biopsies or MRIs are imperfect, producing both false positives and false negatives. The next step in diagnostics is a "multimodal" approach, layering different screening types, such as genomic blood tests and imaging, to build a more accurate and comprehensive picture of a patient's health.
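The layering argument can be sketched with a Bayesian update: a positive first test raises the probability of disease, and a second, independent modality confirms or tempers it. All numbers here (1% prevalence, the sensitivities and specificities) are illustrative assumptions, not figures from the podcast, and the sketch assumes the two tests err independently:

```python
# Sketch: why layering imperfect screens improves accuracy (assumed numbers).
def posterior_after_positive(prior: float, sensitivity: float, specificity: float) -> float:
    """Bayes update: P(disease | positive result), assuming test independence."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.01  # assumed 1% disease prevalence in the screened population
after_blood = posterior_after_positive(prior, 0.80, 0.95)       # blood test alone
after_both = posterior_after_positive(after_blood, 0.85, 0.90)  # plus confirmatory imaging
print(f"{after_blood:.0%} -> {after_both:.0%}")  # prints "14% -> 58%"
```

A single positive blood test still leaves the patient more likely healthy than not; stacking a second modality shifts the odds enough to justify invasive follow-up, which is the core case for multimodal screening.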