Quality-Guided Deep Reinforcement Learning for Parameter Tuning in Iterative CT Reconstruction

C Shen*, Y Gonzalez, L Chen, S Jiang, X Jia, innovative Technology Of Radiotherapy Computations and Hardware (iTORCH) Laboratory, and Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX

Presentations

(Sunday, 7/14/2019) 2:00 PM - 3:00 PM

Room: Stars at Night Ballroom 2-3

Purpose: Regularization parameters in iterative CT reconstruction control the trade-off between data fidelity and regularization, and their values critically affect the resulting image quality. These parameters are usually adjusted manually. Such a parameter-tuning process is not only tedious but also becomes impractical when a large number of parameters is involved. To address this problem, we propose a novel quality-guided deep reinforcement learning (QDRL) framework to intelligently evaluate image quality and automatically adjust the parameters in a human-like fashion.
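
To make the setting concrete (the exact objective used in this work is not stated in the abstract; a quadratic fidelity term and a generic pixel-wise regularizer are assumed here for illustration), reconstruction with pixel-wise regularization parameters can be written as

x^* = \arg\min_x \frac{1}{2} \| A x - y \|_2^2 + \sum_i \beta_i R_i(x),

where A is the projection operator, y the measured projection data, R_i a regularization term local to pixel i, and \beta_i the regularization parameter assigned to pixel i. It is this very large set of \beta_i values that the proposed framework tunes automatically.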

Methods: Aiming at encoding human intelligence for parameter tuning in an artificial intelligence system, we design a QDRL framework that simultaneously establishes a parameter-tuning policy network (PTPN) and a quality assessment network (QAN). We consider an example problem of iterative CT reconstruction with pixel-wise regularization parameters. By observing each image patch extracted from the reconstructed CT image, PTPN determines a parameter-tuning action (direction and magnitude of change) for the pixel at its center. QAN evaluates the quality of an image patch, outputting the probability that the patch is of high quality. PTPN and QAN are trained simultaneously in an end-to-end DRL training process, with the output of QAN serving as the reward that guides the reinforcement learning of PTPN.
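
A minimal sketch of this training scheme is given below, assuming a DQN-style update for PTPN; the patch size, action set, network shapes, discount factor, and the binary-label supervision of QAN are illustrative assumptions not specified in the abstract, and the actual implementation may differ.

import torch
import torch.nn as nn

PATCH = 33                          # assumed size of the patch observed by PTPN and QAN
# Assumed action set: multiplicative changes applied to a pixel's regularization parameter.
ACTIONS = torch.tensor([2.0, 1.5, 1.0, 1.0 / 1.5, 0.5])

def make_cnn(out_dim):
    # Small CNN used for both PTPN (one Q-value per action) and QAN (quality logit).
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )

ptpn = make_cnn(len(ACTIONS))       # parameter-tuning policy network
qan = make_cnn(1)                   # quality assessment network
opt_p = torch.optim.Adam(ptpn.parameters(), lr=1e-4)
opt_q = torch.optim.Adam(qan.parameters(), lr=1e-4)
gamma = 0.9                         # assumed discount factor

def ptpn_update(patch, action, next_patch):
    # One Q-learning step; QAN's quality score of the post-adjustment patch is the reward.
    with torch.no_grad():
        reward = torch.sigmoid(qan(next_patch)).squeeze(1)            # reward in [0, 1]
        target = reward + gamma * ptpn(next_patch).max(dim=1).values
    q = ptpn(patch).gather(1, action.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, target)
    opt_p.zero_grad(); loss.backward(); opt_p.step()
    return loss.item()

def qan_update(patch, is_high_quality):
    # Supervise QAN with binary quality labels (an assumption; the abstract does not
    # state how QAN itself is trained).
    loss = nn.functional.binary_cross_entropy_with_logits(
        qan(patch).squeeze(1), is_high_quality.float())
    opt_q.zero_grad(); loss.backward(); opt_q.step()
    return loss.item()

At deployment, one plausible loop is: extract the patch centered at each pixel, select the argmax action from PTPN, rescale that pixel's regularization parameter accordingly, re-run the iterative reconstruction, and repeat until PTPN selects the "no change" action everywhere or an iteration limit is reached.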

Results: End-to-end QDRL training was successfully performed to establish both PTPN and QAN. The trained PTPN guides the parameter-tuning process and yields high-quality reconstructions, with mean errors of 5.82% and 6.06% for training and testing cases, respectively, compared with 6.31% and 6.45% for reconstructions using manually tuned parameters. QAN is capable of evaluating image quality, assigning higher scores to images of visually better quality.

Conclusion: We have successfully developed a novel QDRL framework that achieves automatic regularization parameter tuning in an iterative CT reconstruction problem involving a number of parameters clearly beyond the capability of manual adjustment.

Keywords

Not Applicable / None Entered.

Taxonomy

Not Applicable / None Entered.
