Room: Exhibit Hall | Forum 6
Purpose: Traditionally, a tomographic image is formulated as the solution of an inverse problem for a given set of measured data from different angular views. Here we propose a deep learning strategy for tomographic X-ray and other related imaging modalities with ultra-sparse sampling.
Methods: We develop hierarchical neural networks for imaging with ultra-sparse views and a structured training process for deep learning to bridge the dimensionality gap between 2D projections and 3D volumes in X-ray or optical imaging. The essence of our approach is the introduction of a novel feature-domain transformation and a robust encoding/decoding framework. The performance of the proposed approach is evaluated using digital phantoms, where projection images are digitally produced from CT images using a geometry consistent with a clinical on-board cone-beam CT system for radiation therapy. Data augmentation, such as simulated organ deformation, is used to produce annotated data pairs that mimic different imaging situations. The ultra-sparse-view approach is also applied to optical imaging and tested in phantom studies.
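The encoding/decoding idea described above can be sketched in miniature: a single 2D projection is encoded into a latent feature vector, transformed in the feature domain, and decoded into a 3D volume. The sketch below is an untrained, illustrative assumption (random linear maps, arbitrary layer sizes), not the authors' actual network architecture.

```python
import numpy as np

# Illustrative sketch only: random, untrained weights standing in for
# learned layers. All sizes (H, W, D, latent) are assumptions.
rng = np.random.default_rng(0)

H, W = 16, 16   # size of the 2D projection
D = 8           # depth of the reconstructed volume
latent = 32     # feature-domain dimensionality

W_enc = rng.standard_normal((H * W, latent)) * 0.1      # 2D encoder
W_feat = rng.standard_normal((latent, latent)) * 0.1    # feature-domain transform
W_dec = rng.standard_normal((latent, D * H * W)) * 0.1  # 3D decoder

def reconstruct(projection_2d):
    """Map one 2D projection of shape (H, W) to a 3D volume of shape (D, H, W)."""
    z = np.maximum(projection_2d.reshape(-1) @ W_enc, 0.0)  # encode + ReLU
    z = np.maximum(z @ W_feat, 0.0)                         # feature-domain step
    return (z @ W_dec).reshape(D, H, W)                     # decode to 3D

volume = reconstruct(rng.standard_normal((H, W)))
print(volume.shape)  # (8, 16, 16)
```

In a trained model the three linear maps would be replaced by deep convolutional stages learned from the annotated projection/CT pairs; the point here is only the dimensionality bridge from a single (H, W) view to a (D, H, W) volume.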
Results: The deep learning model is deployed on several test cases and the single-view reconstruction results are compared with ground truth. For case 1, the MAE/RMSE/SSIM/PSNR values averaged over all testing samples for single-view reconstruction are 0.018, 0.177, 0.929, and 30.523, respectively. The corresponding indices for case 2 are 0.025, 0.385, 0.838, and 27.157. Both qualitative and quantitative results demonstrate that the model achieves high-quality 3D image reconstruction even with only a single or a few 2D projections. Similar success was achieved in optical imaging with ultra-sparse views.
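For reference, three of the reported metrics (MAE, RMSE, and PSNR) can be computed as below; this is a generic illustration on synthetic data, not the authors' evaluation code, and SSIM is omitted because it requires a windowed implementation (e.g. from scikit-image).

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two same-shaped arrays."""
    return float(np.mean(np.abs(x - y)))

def rmse(x, y):
    """Root-mean-square error between two same-shaped arrays."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to data_range."""
    err = np.mean((x - y) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / err))

# Synthetic example: a ground-truth volume and a slightly perturbed "reconstruction".
rng = np.random.default_rng(1)
gt = rng.random((8, 16, 16))
recon = np.clip(gt + 0.01 * rng.standard_normal(gt.shape), 0.0, 1.0)
print(mae(gt, recon), rmse(gt, recon), psnr(gt, recon))
```

Lower MAE/RMSE and higher SSIM/PSNR indicate closer agreement with ground truth, which is how the case 1 and case 2 figures above should be read.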
Conclusion: We propose a novel deep learning framework for volumetric imaging with ultra-sparse data sampling. This work pushes the boundary of tomographic imaging to the single-view limit and presents a useful solution for many tomographic imaging modalities.