Assessing the Feasibility of Utilizing Artificial Intelligence-Segmented Dominant Intraprostatic Lesion for Focal Intraprostatic Boost with External Beam Radiation Therapy - Beyond the Abstract

Recently, the accurate delineation of the dominant intraprostatic gross tumor volume (GTV) on prostate multi-parametric magnetic resonance imaging (mpMRI) has become a critically important issue for many prostate radiation oncologists. The FLAME randomized controlled trial reported that a focal intraprostatic boost with external beam radiation therapy was associated with improved biochemical progression-free survival, without a marked increase in clinically significant toxicity.1 However, accurate and reproducible delineation of the intraprostatic GTV remains challenging due to variations in scanning parameters, image quality, intraprostatic lesion characteristics, and image interpretation.

Over the past few years, deep-learning artificial intelligence (AI) algorithms have shown tremendous promise in segmenting complex lesions from medical images. We became interested in evaluating whether such algorithms could help radiation oncologists (ROs) in delineating intraprostatic GTV from mpMRI. In particular, we wanted to better understand how AI-delineated GTVs compared against RO-delineated GTVs with respect to lesion detectability and geometric accuracy. We were also interested in whether AI-delineated GTVs provided a similar degree of intraprostatic boost dose to a reference GTV as those provided by RO-delineated GTVs.

As such, we curated a dataset of 35 patients with PI-RADS 4-5 lesions. We were fortunate to find 5 ROs willing to independently provide GTV delineations based on radiology reports. We also obtained GTV delineations from the nnU-Net AI algorithm, a self-configuring network that was previously trained on mpMRI from 89 different patients.2 We found that the AI algorithm had a lesion-based sensitivity of 82.6% and a positive predictive value (PPV) of 86.4%. Both values were lower than those observed for the ROs (84.8-95.7% for sensitivity; 95.1-100.0% for PPV). However, among the 30 GTVs mutually identified by the AI and all ROs, the AI GTVs had a Dice coefficient (median 0.82; IQR: 0.69, 0.85) that was not significantly different from that of any of the ROs.
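For readers less familiar with these metrics, the following minimal sketch shows how lesion-based sensitivity, PPV, and the Dice coefficient are typically computed. The function names, example masks, and counts below are illustrative only and are not taken from the study's data or code.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Ranges from 0 (no overlap) to 1."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def lesion_detection_stats(true_positives: int,
                           false_negatives: int,
                           false_positives: int) -> tuple:
    """Lesion-based sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
    sensitivity = true_positives / (true_positives + false_negatives)
    ppv = true_positives / (true_positives + false_positives)
    return sensitivity, ppv

# Illustrative example: two 1-D "masks" with partial overlap,
# and hypothetical lesion-level detection counts.
print(dice_coefficient(np.array([1, 1, 0, 0]), np.array([0, 1, 1, 0])))  # 0.5
print(lesion_detection_stats(8, 2, 2))  # (0.8, 0.8)
```

Note that sensitivity and PPV are counted per lesion (was a lesion found at all?), whereas Dice is computed per voxel on mutually identified lesions (how well do the contours overlap?).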

We used an automated treatment planning algorithm to generate intraprostatic boost plans for each RO- and AI-delineated GTV. All plans were generated using the dose constraints from the FLAME trial (PTV 77 Gy in 35 fractions; GTV dose escalated up to 95 Gy; rectum D1cc < 77 Gy; bladder D1cc < 80 Gy). We found that the AI-delineated GTV achieved a reference GTV D98% that was no different from those provided by 3 of the 5 ROs. However, the presence of false negative lesions was associated with a decreased reference GTV D98%. Furthermore, we found that the D98% of the GTV used for creating the intraprostatic boost plan was associated with that GTV's mean proximity to the rectum and bladder, and inversely associated with GTV volume. Regarding this latter point, the mean GTV D98% was 82.2 Gy for GTV volumes >= 3 mL, compared with 94.4 Gy for GTV volumes < 3 mL. For larger intraprostatic lesions, greater dose escalation may be achieved with brachytherapy approaches.
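The dose-volume metrics cited above (D98%, D1cc) summarize a structure's dose distribution: DXX% is the minimum dose received by the hottest XX% of the structure's volume. As a rough illustration of how such a metric can be computed from per-voxel doses, here is a minimal sketch; the function name and inputs are illustrative and are not from our planning system.

```python
import numpy as np

def dose_at_volume(dose_values: np.ndarray, percent_volume: float) -> float:
    """DXX% metric: the minimum dose received by the hottest XX% of a
    structure's voxels (e.g. D98% = dose covering 98% of the volume).
    Assumes equal-volume voxels flattened into a 1-D dose array."""
    # Sort doses from hottest to coldest; D98% sits at the point where
    # 98% of the voxels have been covered.
    sorted_dose = np.sort(np.asarray(dose_values, dtype=float))[::-1]
    idx = int(np.ceil(percent_volume / 100.0 * sorted_dose.size)) - 1
    return float(sorted_dose[idx])

# Illustrative example: a uniform 77 Gy structure has D98% = 77 Gy,
# while a cold spot in the coldest voxels pulls D98% down.
print(dose_at_volume(np.full(10, 77.0), 98))        # 77.0
print(dose_at_volume(np.linspace(1, 100, 100), 98)) # 3.0
```

A volume-based metric such as D1cc works the same way, except the cutoff is an absolute volume (1 cm^3 worth of voxels) rather than a percentage of the structure.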

Our study suggests that AI-delineated GTVs from mpMRI hold promise for creating intraprostatic boost plans. However, further prospective study is required before AI can be fully implemented in the clinic. We would highly recommend prospective radiology review of AI-generated lesions, given the negative dosimetric consequences of false negative lesions, as well as the potential for increased organ-at-risk exposure from false positive lesions.

Written by: Martin T. King, MD, PhD, Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, MA

References:

  1. Kerkmeijer LGW, Groen VH, Pos FJ, et al. Focal Boost to the Intraprostatic Tumor in External Beam Radiotherapy for Patients With Localized Prostate Cancer: Results From the FLAME Randomized Phase III Trial. J Clin Oncol. 2021;39(7):787-796.
  2. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18(2):203-211.