Student Beats the Teacher: Deep Learning Using a 3D Convolutional Neural Network (CNN) for Augmentation of CBCT Reconstructed From Under-Sampled Projections

Z Jiang*, F Yin , L Ren , Duke University Medical Center, Durham, NC

Presentations

(Monday, 7/15/2019) 7:30 AM - 9:30 AM

Room: 221AB

Purpose: CBCT reconstructed with the FDK algorithm from under-sampled projections suffers from prominent noise and streak artifacts. The purpose of this study is to augment the quality of under-sampled CBCT by developing a novel deep-learning method based on a 3D symmetric residual convolutional neural network (3D-SRCNN).

Methods: The 3D-SRCNN consists of symmetrically stacked, fully-connected 3D convolution layers, 3D deconvolution layers, and a residual connection. Twenty-four breath-hold lung CBCT datasets were used for training. Under-sampled CBCT images were reconstructed by FDK from 127 projections retrospectively extracted from the roughly 900 projections acquired in each clinical CBCT scan, and were fed into the 3D-SRCNN, which was trained to learn the restoring pattern from under-sampled CBCT to fully-sampled CBCT. For testing, the trained 3D-SRCNN was used to augment the under-sampled CBCT of a new patient not included in the training data. Results were compared to the reference fully-sampled CBCT images qualitatively and quantitatively using the structural similarity index (SSIM).
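
For illustration only, the PyTorch sketch below shows one way the architecture described above could be organized: symmetrically stacked 3D convolution and 3D deconvolution layers with a residual skip, mapping an under-sampled FDK volume toward the fully-sampled CBCT. The abstract does not report the actual 3D-SRCNN hyperparameters, so the depth, channel width, and kernel size here are assumed values, not the authors' settings.

# Minimal sketch of a symmetric residual 3D CNN (PyTorch). Depth, channel
# width, and kernel size are illustrative assumptions; the abstract does not
# specify the actual 3D-SRCNN configuration.
import torch
import torch.nn as nn


class SRCNN3D(nn.Module):
    """Symmetric stack of 3D conv / 3D deconv layers with a residual skip."""

    def __init__(self, channels=32, depth=3):
        super().__init__()
        encoder = []
        for i in range(depth):
            encoder += [nn.Conv3d(1 if i == 0 else channels, channels,
                                  kernel_size=3, padding=1),
                        nn.ReLU(inplace=True)]
        decoder = []
        for i in range(depth):
            last = (i == depth - 1)
            decoder += [nn.ConvTranspose3d(channels, 1 if last else channels,
                                           kernel_size=3, padding=1)]
            if not last:
                decoder += [nn.ReLU(inplace=True)]
        self.encoder = nn.Sequential(*encoder)
        self.decoder = nn.Sequential(*decoder)

    def forward(self, x):
        # Residual connection: the network learns a correction to the noisy,
        # under-sampled FDK volume rather than synthesizing the volume directly.
        return self.decoder(self.encoder(x)) + x


# Toy example on a small patch; a full volume in the study is 256x256x96 voxels.
model = SRCNN3D()
under_sampled = torch.randn(1, 1, 32, 64, 64)   # (batch, channel, D, H, W)
restored = model(under_sampled)
print(restored.shape)                            # torch.Size([1, 1, 32, 64, 64])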

Results: The 3D-SRCNN substantially reduced noise and streaks in the FDK-based under-sampled CBCT images while restoring image details well. Quantitatively, the 3D-SRCNN-augmented images had higher SSIM than the FDK-based reconstruction (0.77 for 3D-SRCNN versus 0.44 for FDK). Augmentation of a 256×256×96 volume took about 6.3 seconds, making the technique practical for clinical use. One interesting finding is that images augmented by the proposed 3D-SRCNN showed even fewer streaks than the reference CBCT, even though the model was trained to match the reference CBCT. This “student beats the teacher” phenomenon indicates that the 3D-SRCNN model learned the true restoring pattern despite imperfections in the training target, and consequently produced results superior to the reference data.
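
A quantitative comparison of this kind can be reproduced with an off-the-shelf SSIM implementation. The sketch below uses scikit-image with a volume-wise SSIM at default settings, which is an assumption; the abstract does not state the exact SSIM configuration used in the study, and the arrays here are placeholders rather than real CBCT data.

# Sketch of the SSIM comparison against the fully-sampled reference (scikit-image).
# Volume-wise SSIM with default parameters is an assumption; the abstract does
# not give the exact settings.
import numpy as np
from skimage.metrics import structural_similarity


def ssim_to_reference(test_volume, reference_volume):
    """SSIM of an FDK or 3D-SRCNN-augmented volume against the fully-sampled CBCT."""
    data_range = float(reference_volume.max() - reference_volume.min())
    return structural_similarity(test_volume, reference_volume, data_range=data_range)


# Placeholder arrays; in practice these are the reconstructed CBCT volumes.
reference = np.random.rand(96, 256, 256).astype(np.float32)
fdk = reference + 0.3 * np.random.rand(96, 256, 256).astype(np.float32)
print(ssim_to_reference(fdk, reference))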

Conclusion: The proposed 3D-SRCNN method is effective and efficient in augmenting the image quality of FDK-based under-sampled CBCT. It can reduce imaging dose in 3D/4D-CBCT while maintaining high image quality, which is valuable for image-guided radiotherapy.

Funding Support, Disclosures, and Conflict of Interest: This work was supported by NIH grant R01 CA-184173.

Keywords

Cone-beam CT, Reconstruction

Taxonomy

IM- Cone Beam CT: Image Reconstruction

Contact Email