Purpose: To develop and investigate a multi-scale residual dense network (MS-RDN) and to compare it with the residual encoder-decoder convolutional neural network (RED-CNN) for deep learning-driven sparse-view image reconstruction in cone-beam dedicated breast computed tomography (BCT), with the goal of enabling radiation dose reduction.
Methods: De-identified cone-beam projection datasets from BIRADS 4/5 women (n=34), acquired on a clinical prototype BCT system (300 projections, full scan, 49 kV, 1.4 mm Al HVL, 12.6 mGy mean glandular dose [MGD]) in a prior IRB-approved, HIPAA-compliant study, were reconstructed with the FDK algorithm and served as the reference. Sparse-view datasets (100 projections, full scan, 4.2 mGy MGD) were reconstructed with the ramp-filtered FDK algorithm and served as inputs to the deep learning networks. Four network configurations (MS-RDN and RED-CNN, each with single 2D slices and with five contiguous 2D slices as inputs) were trained with the reference FDK reconstructions as labels. The training/validation/testing split, in number of cases (2D slices), was 20 (8346)/5 (1920)/9 (4056). Normalized mean squared error (NMSE) and bias were computed with respect to the reference FDK reconstruction to assess performance. Generalized linear models (repeated-measures ANOVA) were used to test whether the metrics differed among the configurations, followed by Bonferroni-corrected pairwise t-tests.
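The exact metric definitions are not stated in the abstract; a common formulation consistent with the reported units (dB for NMSE, cm^-1 for bias, since BCT voxel values are linear attenuation coefficients) would be

    \mathrm{NMSE\ (dB)} = 10 \log_{10}\!\left( \frac{\sum_i \left( \hat{x}_i - x_i^{\mathrm{ref}} \right)^2}{\sum_i \left( x_i^{\mathrm{ref}} \right)^2} \right), \qquad \mathrm{Bias} = \frac{1}{N} \sum_i \left( \hat{x}_i - x_i^{\mathrm{ref}} \right),

where \hat{x} is the network output, x^{ref} is the reference FDK reconstruction, and N is the number of voxels. Under this convention, a lower (more negative) NMSE indicates closer agreement with the reference.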
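Note that the sparse-view protocol retains one in three projections, a threefold dose reduction (4.2 vs. 12.6 mGy MGD). The following is a minimal sketch of the input preparation described above (hypothetical NumPy code; the evenly spaced angle subsampling, array layouts, and edge-slice handling are assumptions, not taken from the study):

    import numpy as np

    # Hypothetical sketch: keep 100 of 300 full-scan projection angles.
    # proj has shape (300, rows, cols); an evenly spaced subset is assumed.
    def subsample_projections(proj: np.ndarray, keep: int = 100) -> np.ndarray:
        idx = np.round(np.linspace(0, proj.shape[0] - 1, keep)).astype(int)
        return proj[idx]

    # Hypothetical sketch: assemble five contiguous slices centered on slice k
    # from a sparse-view FDK volume of shape (n_slices, H, W); indices are
    # clamped at the volume edges.
    def five_slice_input(volume: np.ndarray, k: int) -> np.ndarray:
        idx = np.clip(np.arange(k - 2, k + 3), 0, volume.shape[0] - 1)
        return volume[idx]  # shape (5, H, W): one input channel per slice

Each such stack would be paired with the reference FDK reconstruction as the label; whether the label is the center slice alone or all five slices is not specified in the abstract.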
Results: Overall, MS-RDN produced sharper images than its RED-CNN counterparts. NMSE and bias differed significantly among the four network configurations (P<0.0001). Both MS-RDN and RED-CNN trained with single slices exhibited artifacts in the sagittal and axial planes. Multi-slice MS-RDN significantly reduced bias (P<0.0001; mean reduction: 0.817975×10^-4 cm^-1) and improved NMSE (P<0.0001; mean improvement: 0.144 dB) compared with multi-slice RED-CNN.
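For scale, assuming the logarithmic NMSE convention sketched above, a 0.144 dB difference corresponds to a factor of 10^{0.144/10} ≈ 1.034, i.e., roughly a 3.4% relative change in NMSE.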
Conclusion: Multi-slice training suppresses artifacts in the sagittal and axial planes. Multi-slice MS-RDN outperformed multi-slice RED-CNN on the quantitative metrics and provided sharper images. Multi-slice MS-RDN is thus a better alternative for deep learning-driven sparse-view image reconstruction in cone-beam BCT and warrants further investigation.
Funding Support, Disclosures, and Conflict of Interest: Supported in part by the National Cancer Institute (NCI) of the National Institutes of Health (NIH) grants R01CA199044 and R21CA134128. The contents are solely the responsibility of the authors and do not necessarily reflect the official views of the NCI or the NIH.