Tubule-U-Net: a novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer



Tekin E., Yazıcı Ç., Kusetogullari H., Tokat F., Yavariabdi A., Iheme L. O., et al.

Scientific Reports, vol. 13, no. 1, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 13 Issue: 1
  • Publication Date: 2023
  • DOI: 10.1038/s41598-022-27331-3
  • Journal Name: Scientific Reports
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, BIOSIS, CAB Abstracts, Chemical Abstracts Core, EMBASE, MEDLINE, Veterinary Science Database, Directory of Open Access Journals
  • Affiliated with Acıbadem Mehmet Ali Aydınlar University: Yes

Abstract

The tubule index is a vital prognostic measure in breast cancer tumor grading and is visually evaluated by pathologists. In this paper, a computer-aided, patch-based deep learning tubule segmentation framework, named Tubule-U-Net, is developed and proposed to segment tubules in Whole Slide Images (WSI) of breast cancer. Moreover, this paper presents a new tubule segmentation dataset consisting of 30,820 polygon-annotated tubules in 8225 patches. The Tubule-U-Net framework first applies a patch enhancement technique such as reflection (mirror) padding and then employs an asymmetric encoder-decoder semantic segmentation model. The encoder is built from deep learning architectures such as EfficientNetB3, ResNet34, and DenseNet161, whereas the decoder is similar to that of U-Net, yielding three models: EfficientNetB3-U-Net, ResNet34-U-Net, and DenseNet161-U-Net. The proposed framework with these three models, together with the U-Net, U-Net++, and Trans-U-Net segmentation methods, is trained on the created dataset and tested on five different WSIs. The experimental results demonstrate that the proposed framework with the EfficientNetB3 model, trained on patches obtained using reflection padding and tested on overlapping patches, provides the best segmentation results on the test data, achieving dice, recall, and specificity scores of 95.33%, 93.74%, and 90.02%, respectively.
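The two computational steps named in the abstract, reflection (mirror) padding of patches and evaluation with the dice score, can be sketched as follows. This is a minimal illustration, not the authors' code: the 256×256 target patch size and the function names `pad_patch` and `dice_score` are assumptions for the example, and NumPy's `reflect` padding mode is used as one plausible reading of "reflection or mirror padding".

```python
import numpy as np

def pad_patch(patch, target=256):
    """Pad a patch up to target x target with reflection (mirror) padding.

    Border pixels are mirrored into the padded region instead of being
    filled with zeros, so edge tubules are not bordered by artificial
    black margins. `patch` is H x W (x C); target size is an assumption.
    """
    h, w = patch.shape[:2]
    pad_h, pad_w = max(0, target - h), max(0, target - w)
    # Pad only bottom/right; extra trailing axes (channels) get no padding.
    widths = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (patch.ndim - 2)
    return np.pad(patch, widths, mode="reflect")

def dice_score(pred, truth, eps=1e-7):
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

For example, a 200×200×3 patch padded this way becomes 256×256×3, with the padded rows mirroring the rows just inside the border; a predicted mask identical to the ground truth yields a dice score of 1.0, while disjoint masks yield 0.0.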