Analyzing Components of a Transformer under Different Dataset Scales in 3D Prostate CT Segmentation

Yicong Tan, Prerak Mody*, Viktor van der Valk, Marius Staring, Jan van Gemert

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Literature on medical imaging segmentation claims that hybrid UNet models containing both Transformer and convolutional blocks perform better than purely convolutional UNet models. This recently touted success of hybrid Transformers warrants an investigation into which of their components contribute to performance. Moreover, previous work is limited to analysis at fixed dataset scales and to unfair comparisons with models whose parameter counts are not equivalent. Here, we investigate the performance of a hybrid Transformer network, i.e., the nnFormer, for organ segmentation in prostate CT scans. We do this in the context of replacing its various components and by constructing learning curves that plot model performance at different dataset scales. To compare with the literature, the first experiment replaces all the shifted-window (swin) Transformer blocks of the nnFormer with convolutions. Results show that convolution prevails as the data scale increases. In the second experiment, to reduce complexity, the self-attention mechanism within the swin-Transformer block is replaced with a similar albeit simpler spatial mixing operation, i.e., max-pooling. We observe improved performance for max-pooling at smaller dataset scales, indicating that the window-based Transformer may not be the best choice at either small or large dataset scales. Finally, since convolution has an inherent local inductive bias of positional information, we conduct a third experiment to impart such a property to the Transformer by exploring two kinds of positional encodings. The results show only insignificant improvements after adding positional encoding, indicating the hybrid swin-Transformer's deficiency in capturing positional information given our dataset at its various scales. Through this work, we hope to motivate the community to use learning curves under fair experimental settings to evaluate the efficacy of newer architectures like Transformers for their medical imaging tasks. Code is available at https://github.com/prerakmody/window-transformer-prostate-segmentation.
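To make the second experiment concrete, below is a minimal PyTorch sketch of the kind of substitution described: a Transformer-style block whose windowed self-attention sub-layer is swapped for a parameter-free 3D max-pooling token mixer (in the spirit of PoolFormer). This is an illustration only, not the authors' nnFormer implementation; class names, normalization choice, and hyperparameters here are assumptions, and the actual code is in the linked repository.

```python
# Illustrative sketch (not the authors' code): replacing window self-attention
# with 3D max-pooling as the spatial mixing operation in a Transformer block.
import torch
import torch.nn as nn

class MaxPoolMixer3D(nn.Module):
    """Parameter-free spatial mixing over a (B, C, D, H, W) feature map,
    standing in for windowed self-attention."""
    def __init__(self, pool_size: int = 3):
        super().__init__()
        # Stride 1 with symmetric padding keeps the spatial size unchanged.
        self.pool = nn.MaxPool3d(pool_size, stride=1, padding=pool_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Subtracting the input keeps only the neighborhood signal, mirroring
        # the residual formulation used in PoolFormer.
        return self.pool(x) - x

class MixerBlock3D(nn.Module):
    """Transformer-style block with the attention sub-layer swapped out."""
    def __init__(self, dim: int, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)  # channel-wise LayerNorm analogue
        self.mixer = MaxPoolMixer3D()
        self.norm2 = nn.GroupNorm(1, dim)
        self.mlp = nn.Sequential(          # pointwise feed-forward sub-layer
            nn.Conv3d(dim, dim * mlp_ratio, kernel_size=1),
            nn.GELU(),
            nn.Conv3d(dim * mlp_ratio, dim, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x

# Usage: a (batch, channels, depth, height, width) CT feature map.
feats = torch.randn(1, 96, 8, 32, 32)
print(MixerBlock3D(dim=96)(feats).shape)  # torch.Size([1, 96, 8, 32, 32])
```

Because the mixer has no learnable weights, such a swap also reduces parameter count, which is why comparisons in the paper are made at equivalent parameter budgets.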

Original language: English
Title of host publication: Medical Imaging 2023
Subtitle of host publication: Image Processing
Editors: Olivier Colliot, Ivana Isgum
Publisher: SPIE
Number of pages: 13
ISBN (Electronic): 9781510660335
DOIs
Publication status: Published - 2023
Event: Medical Imaging 2023: Image Processing - San Diego, United States
Duration: 19 Feb 2023 – 23 Feb 2023

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12464
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2023: Image Processing
Country/Territory: United States
City: San Diego
Period: 19/02/23 – 23/02/23

Bibliographical note

Green Open Access added to the TU Delft Institutional Repository as part of the Taverne project 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • 3D Swin-Transformer
  • Convolution
  • Pooling
  • Positional Encoding
  • Learning curves
  • Radiotherapy
  • Segmentation
