The risk of developing myeloid neoplasms from PARP inhibitors in the frontline ovarian cancer setting is very low, around 1%. However, it is critical to adhere to the recommended 2- to 3-year treatment duration and then stop therapy to avoid unnecessary long-term risk.

Related Insights

The introduction of ADCs into frontline ovarian cancer treatment creates a new challenge: conflicting biomarkers. A patient's tumor might both express HER2 (an ADC target) and harbor a BRCA mutation (a PARP inhibitor target), forcing clinicians to choose between two effective targeted therapies without clear guidance.

The traditional six-month timeframe for defining platinum sensitivity is being challenged. An emerging theory holds that tumors progressing while on a PARP inhibitor have a distinct biology that responds poorly to subsequent platinum, pointing to a potential need to move directly to therapies like ADCs.

While not yet validated for this purpose, some clinical experts are using ctDNA as a de-escalation tool to provide confidence when stopping long-term maintenance therapies like PARP inhibitors. This novel application focuses on reducing treatment burden rather than solely detecting disease progression.

Real-world data show that in platinum-sensitive ovarian cancer patients who have progressed on a PARP inhibitor, subsequent platinum-based chemotherapy yields a surprisingly low response rate of only 20%. This quantifies a significant opportunity for highly active ADCs to replace platinum in this growing patient population.

The benefit of PARP inhibitors in metastatic breast cancer is modest compared with that seen in ovarian cancer, likely because of resistance induced by prior exposure to DNA-damaging agents such as anthracyclines. This explains the clinical rationale for moving PARP inhibitors to earlier treatment settings, such as neoadjuvant or adjuvant therapy, before resistance develops.

Lutetium-PSMA therapy faces criticism for its fixed 6-cycle regimen, which may be suboptimal as the PSMA target diminishes with ADT. However, this critique is rarely applied to other drugs such as PARP inhibitors, which are given until progression. This highlights a double standard and the tension between using a fixed regimen for regulatory approval and finding the optimal dose in practice.

Selecting between PARP inhibitors such as olaparib and niraparib is not one-size-fits-all. It is a personalized decision based on patient preference for dosing frequency (once-daily niraparib vs. twice-daily olaparib), tolerance for side effects such as niraparib-associated hypertension, and potential drug-drug interactions.

To combat the significant myelosuppression from the standard 28-day venetoclax cycle in AML, many clinicians are adopting a strategy of performing a bone marrow biopsy around day 21 and, if blast clearance is achieved, pausing the drug to allow hematologic recovery.

The development of PARP-1 selective inhibitors like seriparib signals a shift in drug innovation. Instead of only chasing higher efficacy, these new agents aim for a more favorable toxicity profile (less GI toxicity, fewer dose discontinuations) to improve patient quality of life and treatment adherence.

Giving adjuvant olaparib to BRCA-mutated patients who have already achieved a pathologic complete response (pCR) after neoadjuvant platinum-based chemotherapy is discouraged. Their prognosis is already excellent, so adding a PARP inhibitor offers little potential benefit while exposing them to unnecessary toxicity risks such as MDS/AML.