Purpose: To present an algorithm developed using convolutional neural networks (CNN) to automatically segment organs at risk (OAR) in thorax computed tomography (CT) images.
Methods: Radiation treatment planning for lung and esophageal cancer patients begins with the delineation of the tumor along with the surrounding normal tissues on CT images. Manual delineation of these organs is a tedious process subject to large inter- and intra-observer variability. Due to significant variations in shape and the low CT contrast of some organs (e.g. esophagus), manual segmentation can be especially challenging. In this study we address this problem using neural networks and automatically segment four OARs in the thorax: aorta, esophagus, heart and trachea. Our neural network architecture, consisting of ~17 million trainable parameters, is similar to U-Net, with five blocks of down-sampling and up-sampling layers and two 2D convolutions per block. The output of each down-sampling block is concatenated with the corresponding up-sampling block to restore the spatial resolution of the segmentation. The network was trained on the publicly available dataset of 40 patients from the SegTHOR challenge and tested on 20 patients withheld during training. To evaluate the performance of the model, the Dice coefficient and the Hausdorff distance were computed for each organ separately.
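The architecture described above (five encoder/decoder blocks, two 2D convolutions per block, skip connections by concatenation) can be sketched as follows. This is a minimal illustration, not the authors' exact model: the channel widths, kernel sizes, and class count here are our assumptions and will not reproduce the stated ~17 million parameters exactly.

```python
# Sketch of a 2D U-Net-style network: five down-sampling and five
# up-sampling blocks, each with two 3x3 convolutions, and skip
# connections that concatenate encoder features into the decoder.
# Widths and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, as in each U-Net block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet2D(nn.Module):
    def __init__(self, in_ch=1, n_classes=5, widths=(32, 64, 128, 256, 512)):
        super().__init__()
        self.encoders = nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.encoders.append(conv_block(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(widths[-1], widths[-1] * 2)
        self.upconvs = nn.ModuleList()
        self.decoders = nn.ModuleList()
        ch = widths[-1] * 2
        for w in reversed(widths):
            self.upconvs.append(nn.ConvTranspose2d(ch, w, 2, stride=2))
            self.decoders.append(conv_block(w * 2, w))  # *2 for skip concat
            ch = w
        self.head = nn.Conv2d(widths[0], n_classes, 1)  # 4 OARs + background

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)   # saved for the decoder skip connection
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.upconvs, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))  # restore spatial detail
        return self.head(x)

model = UNet2D()
out = model(torch.zeros(1, 1, 256, 256))  # one single-channel CT slice
print(tuple(out.shape))  # (1, 5, 256, 256): per-pixel logits for 5 classes
```

With five pooling steps the input side length must be divisible by 32, which is why CT slices are typically cropped or resampled before inference.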
Results: The Dice coefficients on the test dataset were 0.90 ± 0.04, 0.75 ± 0.08, 0.92 ± 0.03 and 0.86 ± 0.05 for the aorta, esophagus, heart and trachea, respectively. The Hausdorff distances were 0.43 ± 0.39, 0.78 ± 0.51, 0.35 ± 0.24 and 0.76 ± 0.68 mm for the corresponding organs. On average, contouring a single patient CT takes 20-25 s with this approach.
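For reference, the two metrics reported above can be computed per organ from binary masks as sketched below. The function names and toy masks are ours; the paper does not specify its implementation, and the Hausdorff distance here is in voxel units (multiply by the voxel spacing to obtain millimetres).

```python
# Sketch of the Dice coefficient and symmetric Hausdorff distance for
# binary segmentation masks. Illustrative implementation, not the
# authors' code.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the two masks' voxel sets."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Toy example: a 4x4 prediction shifted down one pixel from ground truth.
gt = np.zeros((10, 10), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((10, 10), dtype=bool); pred[3:7, 2:6] = True
print(round(dice(pred, gt), 3))  # 0.75
print(hausdorff(pred, gt))       # 1.0
```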
Conclusion: We present a neural network approach to segmenting OARs in thoracic CT images using a 2D CNN architecture. Our model is significantly faster than manual contouring and achieves accuracy comparable to expert delineation.