Room: AAPM ePoster Library
To evaluate the impact of training a deep-learning (DL) neural-network-based reconstruction algorithm for sparse-view CT with cues from the third dimension, and to evaluate the impact of sparse-view sampling on image quality.
A DL neural network based on the U-Net architecture was constructed to reconstruct images for sparse-view CT. The network was trained with data simulated from customized Shepp-Logan phantoms, as well as publicly available CT data from The Cancer Imaging Archive (TCIA). During training, the network takes as input each low-quality CT slice reconstructed by the filtered back-projection (FBP) algorithm and also attends to cues from its neighboring CT slices, resulting in a 3D training strategy. After training, the network was used to predict high-quality images from a testing set at various sparse-view levels. The quality of the predicted images was evaluated in terms of root-mean-square error (RMSE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR).
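The 3D training strategy described above (pairing each FBP slice with its neighbors so the network can draw on through-plane cues) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the `stack_neighbors` helper and the RMSE/PSNR definitions are assumptions chosen for clarity, and SSIM is omitted for brevity.

```python
import numpy as np

def stack_neighbors(volume, k=1):
    """Build 2.5D network inputs: for each slice, stack its k neighbors
    above and below as extra channels (edge slices are replicated).
    volume: (num_slices, H, W) FBP-reconstructed sparse-view volume.
    Returns an array of shape (num_slices, 2*k + 1, H, W), where the
    middle channel is the slice itself.
    """
    n = volume.shape[0]
    padded = np.concatenate([volume[:1]] * k + [volume] + [volume[-1:]] * k, axis=0)
    return np.stack([padded[i:i + n] for i in range(2 * k + 1)], axis=1)

def rmse(pred, ref):
    """Root-mean-square error between predicted and reference images."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def psnr(pred, ref):
    """Peak signal-to-noise ratio, taking the reference maximum as the peak."""
    return float(20.0 * np.log10(ref.max() / rmse(pred, ref)))
```

Each 2.5D input produced this way would be fed to the U-Net, with the matching full-view slice as the training target; setting `k=0` recovers the purely 2D (no third-dimension cues) baseline.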
Examination of image quality shows that, compared to full-view CT, good quality can be maintained in sparse-view CT with as few as one-sixth of the full sampling views. As sparsity increases further, it becomes more challenging to obtain good image quality. At the same sparsity level, training the neural network with and without cues from the third dimension makes a significant difference in image quality.
This work indicates the potential to recover lost in-plane information in sparse-view CT by incorporating information from the third dimension during training of the DL neural network for image reconstruction. With prior information, such as known anatomy of the imaged objects (e.g., head or lung), an even higher level of sparsity may be achievable. The DL neural-network-based reconstruction algorithm for sparse-view CT will find use in future imaging system development with lower radiation dose and faster imaging speed.