The Science of Safety - From Brute Force QA to Focused Effort

S Mutic (1)*, A Kalet (2)*, J Lamb (3)*, (1) Washington University School of Medicine, St Louis, MO, (2) University of Washington, Seattle, WA, (3) University of California, Los Angeles, Los Angeles, CA

Presentations

(Wednesday, 7/17/2019) 7:30 AM - 8:30 AM

Room: Stars at Night Ballroom 2-3

In this symposium, we will focus on the scientific aspects of safety efforts in radiotherapy. To begin, we will describe the potential uses of safety data, spanning applications in individual clinics to what can be learned from multi-institutional databases. For more than a decade there have been efforts to systematically collect safety data in radiation oncology, both on an institutional basis and at national and international levels. With the surge of interest in big data and artificial intelligence in our field, historical safety data have become a valuable resource for improving safety. Both conventional systems engineering and more novel approaches have been used to analyze safety data and develop improvements. We will examine practical clinical approaches as well as more recent research and development projects. Next, we will investigate the use of an artificial intelligence method for automated radiotherapy treatment plan verification. This NIH-supported work focuses on applying probabilistic reasoning (Bayesian inference) to mimic some of the clinical judgment clinicians use when performing error checking. This novel method applies machine learning to historical data to generate the knowledge base that is later used to verify new plans. Results to date will be presented, along with a discussion of the role of probabilistic reasoning in developing a QA tool that aids clinicians in the task of plan QA. Finally, we will survey recent work on the use of automation and standardization to promote radiotherapy safety. These results will be placed in the larger context of the study of automation and human-machine interaction across the medical enterprise and beyond. We will discuss the risks of automation, such as over-reliance and alert fatigue, and approaches to understanding the interaction of technology with human factors, from Reason's theory of latent errors to cognitive biases in medical decision making. Practical approaches to implementing new automation-based safety systems in the clinic will be described.
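
As an illustration of the probabilistic-reasoning idea described above, the following minimal Python sketch checks a new plan's parameters against conditional frequencies learned from historical plans and flags combinations that are improbable for the treatment site. It is not the NIH-supported implementation discussed in this symposium; the toy historical records, the feature names (treatment site, fractionation), the smoothing constant, and the flagging threshold are all illustrative assumptions.

from collections import Counter, defaultdict

# Toy "historical" plan records standing in for a knowledge base learned
# from previously verified plans (illustrative values only).
historical_plans = [
    ("prostate", "44x1.8Gy"), ("prostate", "44x1.8Gy"),
    ("prostate", "28x2.5Gy"), ("lung", "30x2.0Gy"),
    ("lung", "4x12.5Gy"), ("lung", "4x12.5Gy"),
]

# Count how often each fractionation scheme appears for each treatment site.
counts = defaultdict(Counter)
for site, fx in historical_plans:
    counts[site][fx] += 1

def conditional_probability(site, fx, alpha=1.0):
    """Smoothed estimate of P(fractionation | site) from historical plans."""
    vocab = {f for c in counts.values() for f in c}   # all schemes ever seen
    site_counts = counts[site]
    total = sum(site_counts.values()) + alpha * len(vocab)
    return (site_counts[fx] + alpha) / total

def verify_plan(site, fx, threshold=0.20):
    """Flag a new plan whose parameters are improbable given past practice."""
    p = conditional_probability(site, fx)
    status = "OK" if p >= threshold else "REVIEW: unusual for this site"
    return p, status

if __name__ == "__main__":
    for plan in [("prostate", "44x1.8Gy"), ("prostate", "4x12.5Gy")]:
        p, status = verify_plan(*plan)
        print(f"{plan}: P = {p:.2f} -> {status}")

A full Bayesian-network approach would model dependencies among many more plan parameters; the thresholded conditional probability here simply stands in for that inference step.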

Learning Objectives:
1. Describe how historical incident learning data from multi-institutional databases can be used to improve patient safety
2. Understand how probabilistic reasoning applies to error checking in radiation oncology
3. Understand how to overcome some of the risks of automation in radiotherapy

Funding Support, Disclosures, and Conflict of Interest: Dr. Mutic discloses grants and consulting fees from Varian Medical Systems. Dr. Kalet discloses funding from the National Institutes of Health (1R41CA217452). Dr. Lamb discloses research funding and consulting fees from ViewRay, and research funding from the Agency for Healthcare Research and Quality (1R01HS026486).
