Room: 221CD
Purpose: Gadoxetate uptake rate (k₁) quantified from liver DCE-MRI is a promising measure of regional liver function. Clinical exams typically have low temporal resolution (LTR) compared with high temporal resolution (HTR) experimental acquisitions, and clinical demands incentivize shortening the exams. This study evaluates the error in k₁ estimation from a neural network based approach compared with a linearized single-input two-compartment (LSITC) model.
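For context, one common linearization of a single-input two-compartment model takes the form sketched below; the notation (C_t for liver tissue concentration, C_b for the blood input function, k₂ for the efflux rate) and the exact formulation used in this study are assumptions not given in the abstract. Starting from the compartmental differential equation

    \frac{dC_t(t)}{dt} = k_1\, C_b(t) - k_2\, C_t(t),

integrating both sides from 0 to t yields an expression linear in (k_1, k_2),

    C_t(t) = k_1 \int_0^t C_b(\tau)\, d\tau \; - \; k_2 \int_0^t C_t(\tau)\, d\tau,

which can be fit voxel-wise by ordinary linear least squares.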
Methods: Liver HTR DCE-MRI data were acquired in 22 patients, with at least 16 minutes of post-contrast data sampled at least every 13 seconds. A simple neural network (NN) with 4 hidden layers was trained on voxel-wise LTR data to predict k₁. The LTR data were created by subsampling the HTR data to 6 time points, replicating the characteristics of clinical LTR exams. Both the total length and the placement of time points in the training data were varied considerably to encourage robustness to such variation. Training used 3 million randomly selected voxels, with 3/5 held for training, 1/5 for validation, and 1/5 for testing. An additional 1 million voxels generated with a GAN were split between the training and validation sets to augment the data. The performance of the NN was compared to direct application of LSITC to both LTR and HTR data. Error was assessed for acquisition lengths subsampled from 16 down to 4 minutes, enabling assessment of robustness to acquisition length.
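As a concrete illustration of the training setup, the sketch below shows how an HTR curve might be subsampled to 6 time points and fed to a 4-hidden-layer regressor. All names (subsample_to_ltr, K1Net), the layer width, the activation, and the input encoding (sample times concatenated with signal values) are hypothetical; the abstract specifies only the number of hidden layers.

    import numpy as np
    import torch
    import torch.nn as nn

    def subsample_to_ltr(times, signal, n_points=6, rng=None):
        """Randomly pick n_points samples from an HTR curve to mimic a
        clinical LTR acquisition; placement is randomized, as in training."""
        rng = rng or np.random.default_rng()
        idx = np.sort(rng.choice(len(times), size=n_points, replace=False))
        return times[idx], signal[idx]

    class K1Net(nn.Module):
        """4-hidden-layer MLP mapping 6 (time, signal) pairs to a scalar k1
        estimate per voxel. Width and activation are assumptions."""
        def __init__(self, n_points=6, hidden=64):
            super().__init__()
            layers, in_dim = [], 2 * n_points  # times and signals concatenated
            for _ in range(4):
                layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
                in_dim = hidden
            layers.append(nn.Linear(hidden, 1))
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return self.net(x)

Feeding the sample times alongside the signal values is one plausible way to let a single network cope with the varied point placement and acquisition lengths used during training.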
Results: For 16-minute data, the NRMSE of k₁ was 0.85 for LSITC applied to HTR data, 2.18 for LSITC applied to LTR data, and 0.95 for the NN applied to LTR data. As the acquisition length shortened, errors increased greatly for the LSITC approaches. For acquisition lengths shorter than 15 minutes, the NN approach outperformed LSITC even when LSITC was applied to HTR data.
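NRMSE here denotes root-mean-square error normalized by a scale of the reference k₁ values; the abstract does not state the normalization, so the form below, normalized by the mean of the reference over N voxels, is an assumption:

    \mathrm{NRMSE} = \frac{\sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\bigl(\hat{k}_{1,i} - k_{1,i}\bigr)^2}}{\overline{k_1}},

where \hat{k}_{1,i} is the estimated and k_{1,i} the reference uptake rate for voxel i. Under such a normalization, NRMSE can exceed 1 when errors are large relative to the normalizing scale, consistent with the reported LTR LSITC value of 2.18.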
Conclusion: Shortening the acquisition time substantially increased the error of directly applied LSITC, whereas the implemented NN was much more robust to acquisition length.
Keywords: Quantitative Imaging, Image-guided Therapy, Pharmacokinetic Modeling