Room: AAPM ePoster Library
Purpose: Brain tumor segmentation from multi-modality MRI scans is critical for disease diagnosis, surgical planning, and treatment assessment. We propose a brain tumor segmentation method that improves on previous methods by better handling dataset imbalance and suppressing false-positive classifications.
Method: The data used in the experiments come from the BraTS 2019 training set. We use 80% of the dataset for training (207 HGG, 60 LGG) and the remaining 20% for validation (52 HGG, 16 LGG). Pre-processing includes N4BiasFieldCorrection and z-score normalization within the brain region; non-brain regions are removed. Each pre-processed image is divided into 27 patches of size 64×64×64. We use a cascaded 3D U-Net to segment the brain tumors. The first 3D U-Net takes all four modality images as inputs and outputs the mask of the whole tumor (WT). The second 3D U-Net uses only the T1ce, T2, and FLAIR images, and only patches that contain all three tumor classes are kept for its training; it segments the WT into three substructures: edema (ED), tumor core (TC), and enhancing tumor (ET). The depth of each 3D U-Net is 4. PReLU and focal loss serve as the activation and loss functions, respectively, allowing activation of negative features and reducing the relative loss for well-classified examples. The batch size is 2, and group normalization is used to stabilize the computed statistics.
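The pre-processing and patching steps above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the 3×3×3 overlapping-grid placement used to obtain 27 patches, and the epsilon guard are assumptions for clarity.

```python
import numpy as np

def zscore_normalize(volume, brain_mask):
    # Z-score normalize intensities inside the brain mask; non-brain voxels stay 0.
    voxels = volume[brain_mask > 0]
    out = np.zeros_like(volume, dtype=np.float32)
    out[brain_mask > 0] = (voxels - voxels.mean()) / (voxels.std() + 1e-8)
    return out

def extract_patches(volume, patch_size=64, grid=3):
    # Split a 3D volume into grid**3 (here 27) overlapping cubic patches by
    # spacing patch start indices evenly along each axis.
    starts = [np.linspace(0, s - patch_size, grid).astype(int) for s in volume.shape]
    patches = []
    for x in starts[0]:
        for y in starts[1]:
            for z in starts[2]:
                patches.append(volume[x:x + patch_size,
                                      y:y + patch_size,
                                      z:z + patch_size])
    return np.stack(patches)
```

For a standard BraTS volume of shape 240×240×155, this yields an array of 27 patches, each 64×64×64.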
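The role of focal loss in down-weighting well-classified examples can be illustrated with a short sketch. This is a generic multi-class focal loss in NumPy under assumed defaults (gamma = 2, no class weighting); it is not the authors' exact formulation.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    # Focal loss: scale the cross-entropy of each sample by (1 - p_t)**gamma,
    # so confident (well-classified) samples contribute little to the loss.
    # probs: (N, C) softmax probabilities; targets: (N,) integer class labels.
    p_t = probs[np.arange(len(targets)), targets]
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + 1e-8)))
```

Because the modulating factor (1 - p_t)**gamma shrinks toward zero as p_t approaches 1, the loss concentrates on hard, misclassified voxels, which helps with the class imbalance between tumor substructures and background.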
Result: The mean Dice scores on the training and validation datasets are (0.729, 0.911, 0.823) and (0.731, 0.908, 0.822) for ET, WT, and TC, respectively. The Hausdorff95 distances on the training and validation datasets are (4.790, 5.550, 5.870) and (4.986, 5.620, 6.339) for ET, WT, and TC, respectively.
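The Dice scores reported above can be computed per tumor class from binary masks; a minimal sketch (the function name and edge-case handling for two empty masks are illustrative assumptions):

```python
import numpy as np

def dice_score(pred, target):
    # Dice similarity coefficient: 2*|P ∩ T| / (|P| + |T|) for binary masks.
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

In BraTS evaluation, this is applied separately to the ET, WT, and TC regions derived from the predicted label map.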
Conclusion: The two-step framework offers two distinct advantages. The initial segmentation of the WT suppresses false-positive classifications in non-tumorous areas, and, combined with the use of focal loss, the proposed method mitigates the effect of unbalanced data.
Image Analysis, Segmentation, 3D