
The study reported that no patients in the "before 3 PM" group ever received a dose after 3 PM over four cycles. In a busy, real-world cancer center, such perfect adherence is practically impossible due to logistical issues. This flawless data suggests the study might be a retrospective analysis of curated data rather than a truly prospective trial.

Related Insights

To demonstrate a long-term survival benefit without a new trial, Neuvivo hired a research firm to track down patients from the original study. By collecting "last date alive" information in a blinded fashion, they generated statistically significant survival data years after the trial concluded.

Despite compelling data from trials like PATINA, some patients with ER+/HER2+ breast cancer refuse maintenance endocrine therapy due to side effects. This highlights a real-world gap between clinical trial evidence and patient adherence, forcing oncologists to navigate patient preferences against optimal treatment protocols.

A key red flag was the study's ClinicalTrials.gov entry using past-tense language for its inclusion criteria (e.g., "patients received immunotherapy"). This, along with an exclusion criterion for "loss to follow up," strongly suggests the study was a retrospective analysis of existing patient data, not a prospective trial as presented.

The study utilized "interruption-free survival" as a primary endpoint, a pragmatic measure derived from real-world data. This serves as a valuable surrogate for treatment toxicity, as clinicians typically pause treatment in response to adverse events, providing a quantifiable measure of a drug's real-world tolerability.
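To make the endpoint concrete, here is a minimal sketch of how "interruption-free survival" could be derived from dosing records. The gap thresholds, patient data, and function name are illustrative assumptions, not details from the study:

```python
from datetime import date

def interruption_free_days(dose_dates, scheduled_gap_days=21, grace_days=7):
    """Days from first dose until the first treatment interruption.

    An "interruption" is a gap between consecutive doses exceeding the
    scheduled interval plus a grace window (both thresholds are
    illustrative assumptions). Returns (days, interrupted); when
    interrupted is False, the observation is censored at the last dose.
    """
    dates = sorted(dose_dates)
    for prev, nxt in zip(dates, dates[1:]):
        gap = (nxt - prev).days
        if gap > scheduled_gap_days + grace_days:
            # Event: count time up to the end of the scheduled window
            return (prev - dates[0]).days + scheduled_gap_days, True
    return (dates[-1] - dates[0]).days, False

# Hypothetical patient: dosed every 21 days, then a 40-day gap
doses = [date(2024, 1, 1), date(2024, 1, 22), date(2024, 2, 12), date(2024, 3, 23)]
print(interruption_free_days(doses))  # → (63, True)
```

Because clinicians record dose dates routinely, this event time can be computed from pharmacy data alone, which is what makes the endpoint pragmatic for real-world datasets.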

Immunotherapy antibodies bind to immune cells for weeks or months, a pharmacodynamic (PD) effect far longer than their pharmacokinetic (PK) half-life. This long-lasting binding suggests that minor variations in infusion timing for subsequent doses are unlikely to impact overall outcomes, casting doubt on the study's core hypothesis.

In the ASCENT-07 trial, investigators may have prematurely switched patients from the standard chemotherapy arm to superior, commercially available ADCs at the first hint of progression. This real-world practice can mask an experimental drug's true benefit on progression-free survival.

The study presented three different datasets over a short period. While efficacy endpoints like PFS and OS changed, the toxicity data remained identical. This is highly unusual, as resolving censored patient data for efficacy should also lead to updated toxicity information, suggesting a rushed or incomplete analysis process.

Experts believe the stark difference in complete response rates (5% vs 30%) between two major ADC trials is likely due to "noise"—variations in patient populations (e.g., more upper tract disease) and stricter central review criteria, rather than a fundamental difference in the therapies' effectiveness.

The study's progression-free survival (PFS) curve was unusually smooth, lacking the stepwise drops expected from scheduled scans in oncology trials. More alarmingly, the "numbers at risk" table showed more patients remaining than were represented on the graph at certain time points—a statistical impossibility suggesting a significant reporting or programming error.
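The impossibility described above can be checked mechanically: for any censoring pattern, the number at risk at time t can never exceed the total cohort size times the plotted survival fraction S(t), and at-risk counts must be nonincreasing. A minimal sketch of such a sanity check, run on hypothetical digitized values (the function name and numbers are assumptions for illustration):

```python
def check_km_consistency(n_total, times, survival, at_risk):
    """Sanity-check a published Kaplan-Meier curve against its
    numbers-at-risk table.

    n_total * S(t) is an upper bound on the at-risk count under any
    censoring pattern, and at-risk counts must be nonincreasing.
    Any violation flags a reporting or programming error.
    """
    problems = []
    for i, t in enumerate(times):
        bound = round(n_total * survival[i])
        if at_risk[i] > bound:
            problems.append(f"t={t}: {at_risk[i]} at risk but curve implies <= {bound}")
        if i and at_risk[i] > at_risk[i - 1]:
            problems.append(f"t={t}: at-risk count increased")
    return problems

# Hypothetical digitized data: at month 12 the table shows 60 at risk,
# but the plotted curve (S=0.50 of 100 patients) allows at most 50.
print(check_km_consistency(100, [0, 6, 12], [1.00, 0.70, 0.50], [100, 68, 60]))
# → ["t=12: 60 at risk but curve implies <= 50"]
```

A check like this takes minutes with digitized curve values, which is why inconsistencies between a graph and its at-risk table are treated as serious red flags in post-publication review.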

The PSMAddition trial's fixed six-cycle lutetium regimen, designed nearly a decade ago, is now seen as suboptimal. This illustrates how the long duration of clinical trials means their design may not reflect the latest scientific understanding (e.g., adaptive dosing) by the time results are published and debated.