Room: AAPM ePoster Library
Purpose: To implement and evaluate two deep network architectures for the segmentation of glioblastoma (GBM) from magnetic resonance (MR) images.
Methods: The FLAIR (Fluid-Attenuated Inversion Recovery) images of 111 patients with GBM imaged at our institution were used for training and validation. The training and validation sets consisted of 100 and 11 patients, respectively. An external cohort of 50 FLAIR images from the TCGA-GBM dataset was used as the testing set. All FLAIR MRIs were annotated with expert-validated delineations of the tumor. Two convolutional network architectures were trained – the U-Net and the multiple resolution residual network (MRRN). The MRRN simultaneously integrates features computed at multiple feature and image resolution levels through a number of residually connected feature streams to compute the segmentation. This representation enlarges the semantic context available to the network and improves its ability to detect and segment structures that vary in size and shape, such as tumors. We compared the MRRN against the well-known U-Net, which uses a series of convolution and pooling layers with skip connections to pass image information at various resolutions for refining the segmentation. Training and validation were performed with a total of 17,523 and 840 images, respectively, of size 256x256. The best model from validation was selected for testing on the TCGA-GBM dataset. Performance was measured using the Dice Similarity Coefficient (DSC).
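The DSC used for evaluation measures the overlap between a predicted mask and the expert delineation. A minimal sketch of the standard formulation (2·|A∩B| / (|A|+|B|)) is shown below; the function name and the empty-mask convention are illustrative choices, not part of the study's pipeline.

```python
import numpy as np

def dice_similarity_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary segmentation masks.

    Implements DSC = 2 * |pred AND truth| / (|pred| + |truth|).
    Illustrative only; the convention of returning 1.0 when both masks
    are empty is an assumption, not taken from the abstract.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example on two small binary masks: overlap of 2 voxels,
# 3 foreground voxels in each mask -> DSC = 2*2/(3+3)
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_similarity_coefficient(pred, truth))  # -> 0.666...
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates no overlap, which is how the median values in the Results should be read.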
Results: The median DSC for GBM segmentation was 0.87 (IQR: 0.85-0.90) with the MRRN and 0.85 (IQR: 0.81-0.88) with the U-Net.
Conclusion: Our MRRN-based approach for the automatic segmentation of glioblastoma tumors from FLAIR images achieved more accurate segmentations than the standard U-Net method.