Does deep learning software improve the consistency and performance of radiologists with various levels of experience in assessing bi-parametric prostate MRI?


Arslan A., ALİS D. C., Erdemli S., Seker M. E., Zeybel G., Sirolu S., et al.

INSIGHTS INTO IMAGING, no. 1, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Publication Date: 2023
  • DOI Number: 10.1186/s13244-023-01386-w
  • Journal Name: INSIGHTS INTO IMAGING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, EMBASE, Directory of Open Access Journals
  • Affiliated with Acıbadem Mehmet Ali Aydınlar University: Yes

Abstract

Objective: To investigate whether commercially available deep learning (DL) software improves the consistency of Prostate Imaging-Reporting and Data System (PI-RADS) scoring on bi-parametric MRI among radiologists with various levels of experience, and whether it improves their performance in identifying clinically significant prostate cancer (csPCa).

Methods: We retrospectively enrolled consecutive men who underwent bi-parametric prostate MRI on a 3 T scanner due to suspicion of PCa. Four radiologists with 2, 3, 5, and > 20 years of experience evaluated the bi-parametric prostate MRI scans with and without the DL software. Whole-mount pathology or MRI/ultrasound fusion-guided biopsy served as the reference standard. The area under the receiver operating characteristic curve (AUROC) was calculated for each radiologist with and without the DL software and compared using DeLong's test. In addition, inter-rater agreement was assessed using kappa statistics.

Results: In all, 153 men with a mean age of 63.59 ± 7.56 years (range 53-80) were enrolled in the study; 45 men (29.80%) had clinically significant PCa. When reading with the DL software, the radiologists changed their initial scores in 1/153 (0.65%), 2/153 (1.3%), 0/153 (0%), and 3/153 (1.9%) of the patients, yielding no significant increase in the AUROC (p > 0.05). Fleiss' kappa among the radiologists was 0.40 without and 0.39 with the DL software (p = 0.56).

Conclusions: The commercially available DL software does not increase the consistency of bi-parametric PI-RADS scoring or the csPCa detection performance of radiologists with varying levels of experience.
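The statistical workflow described in the Methods (per-reader AUROC for csPCa detection plus Fleiss' kappa for inter-rater agreement) can be sketched roughly as below. This is an illustrative sketch only, not the authors' code: the reader scores and outcomes are synthetic placeholders, and because DeLong's test has no standard scipy/sklearn implementation, a paired bootstrap of the AUROC difference stands in for the with/without-DL comparison here.

```python
# Illustrative sketch (synthetic data, not the study's analysis code).
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)

# Hypothetical data: 153 patients, csPCa ground truth, and PI-RADS scores (1-5)
# from 4 readers, read without and then with the DL software.
n_patients, n_readers = 153, 4
y_true = rng.integers(0, 2, n_patients)                       # csPCa yes/no
scores_without = rng.integers(1, 6, (n_patients, n_readers))  # PI-RADS 1-5
scores_with = scores_without.copy()                           # aided readings

# Per-reader AUROC, treating the ordinal PI-RADS score as the rating.
auc_without = [roc_auc_score(y_true, scores_without[:, r]) for r in range(n_readers)]
auc_with = [roc_auc_score(y_true, scores_with[:, r]) for r in range(n_readers)]

def paired_bootstrap_auc_diff(y, s_unaided, s_aided, n_boot=2000):
    """Bootstrap the paired AUROC difference (stand-in for DeLong's test)."""
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:  # resample must contain both classes
            continue
        diffs.append(roc_auc_score(y[idx], s_aided[idx])
                     - roc_auc_score(y[idx], s_unaided[idx]))
    return np.mean(diffs), np.percentile(diffs, [2.5, 97.5])

# Inter-rater agreement across the 4 readers (categories = PI-RADS 1-5).
counts_without, _ = aggregate_raters(scores_without)  # patients x categories
kappa_without = fleiss_kappa(counts_without)
counts_with, _ = aggregate_raters(scores_with)
kappa_with = fleiss_kappa(counts_with)
```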