The recent ACR Virtual 2020 Imaging Informatics Summit showed how far radiology AI has come, its extraordinary promise, and also the hard work being done to make these tools reliable and valuable for widespread use by radiologists. It also illustrated ACR’s major role in making this technology useful for its members.
When radiologists think of radiology AI, or machine learning (ML), typically we think of evaluating images—using patterns in the pixels to detect, categorize, and predict clinical pathology. While advances in these areas are occurring at light speed, the stumbling blocks to generalizing AI are formidable. There is no guarantee that AI will work well in the myriad clinical settings that currently exist in our practices, with our different makes and models of imaging machines, our varied protocols, and local variations in populations and disease prevalence. This is not to say these problems are unsolvable; almost daily, new papers demonstrate the enormous volume of research being done by radiology informaticists as well as academic and industry computer scientists, bioengineers, and biostatisticians.
Several issues continue to slow AI’s use in medical imaging
Pixel-evaluating ML tools learn in large part by seeing radiology exams. In general, they work best when trained on large numbers of exams, with the widest variety of settings possible. Lack of access to such a wide variety of exams, from many different machines and using various protocols and techniques, is one of the main limiting factors to producing robust, generalizable, radiology AI.
Another thorny issue is patient privacy. It is prudent for radiologists to assume that any data released outside the firewalls of our institution will be used in ways we have not anticipated, and those uses could potentially cause harm to patients or to us. So ideally, we want to keep our data inside our own firewalls, where we have better control over their use. At the same time, we want to be able to train ML not only on our own exams, but also on exams from many different sites.
How is the ACR developing technology to help solve the problem?
The ACR is uniquely situated to resolve this challenging, vexing dichotomy, and has made a major commitment to solving it. Most of us are familiar with TRIAD, a user-friendly app we use to submit images to ACR for accreditation. A number of radiologists have also used ACR Connect®, the next generation of TRIAD, which will interface with registries such as the National Radiology Data Registry (NRDR) and the Qualified Clinical Data Registry (QCDR) for the physician quality reporting system. The ACR is leveraging these technologies to expand ACR Connect to empower Federated Learning (FL) for radiology ML.
Rather than sending our exams outside our firewalls in order to train an ML algorithm, FL infrastructure sends the ML algorithm to each of us, inside our own separate firewalls. Each of us then trains that particular ML algorithm on our data, and once we’re done training it on our own data, we send the algorithm back to a central location. Through multiple different techniques, the central site combines each version of the algorithm to build what is hopefully an extremely robust, generalizable, final model, which will work reliably in all of our settings.
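The workflow described above is, in essence, federated averaging: each site trains a copy of the model locally and only model parameters leave the firewall, never patient data. The following is a minimal sketch of that idea using a toy linear model trained by gradient descent; the helper names, the data, and the weighting-by-exam-count aggregation are illustrative assumptions, not the actual ACR Connect implementation, which combines site models through multiple different techniques.

```python
import numpy as np

def local_train(weights, site_data, lr=0.1, epochs=5):
    """One site trains the shared model on its own exams,
    entirely behind its own firewall (hypothetical helper)."""
    X, y = site_data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w, len(y)                        # only weights leave the site

def federated_average(site_models):
    """Central site combines the site-trained weights, here
    weighted by each site's exam count (FedAvg-style aggregation)."""
    total = sum(n for _, n in site_models)
    return sum(w * (n / total) for w, n in site_models)

# Toy demo: three "sites" with private data drawn from the same model
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                         # federated rounds
    site_models = [local_train(global_w, s) for s in sites]
    global_w = federated_average(site_models)

print(np.round(global_w, 2))                # converges near true_w
```

In each round the central site only ever sees weight vectors, so the raw exams at each site are never exposed; the combined model still benefits from all three datasets.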
The ACR goal is to build this infrastructure to be secure, user friendly, and radiologist focused. The first version is being tested now with seven sites, with more sites scheduled to join soon. Another goal is to provide enough support that even small imaging settings that lack sophisticated IT support will be not only able, but strongly encouraged, to participate.
What AI tools are available to use today?
While this new iteration of ACR Connect should empower new ML products in the future, today a number of algorithms are already FDA cleared for use. At the Informatics Summit, several speakers discussed what to look for when purchasing an AI model. Returning to the generalizability issue, the first thing radiologists should do is methodically and proactively evaluate how any AI product they plan to purchase, or have already purchased, works on their own data. Does it work on every CT scanner you have? Will it work with any protocol or technique variations you might use routinely? What happens when it gets a case with motion artifact, or significantly unusual findings?
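One concrete way to run the evaluation described above is to stratify a local validation set by scanner and protocol and check agreement with the radiologist's read separately for each group. This is only a sketch of that bookkeeping; the field names, scanner labels, and the simple agreement-rate metric are illustrative assumptions, not a vendor's or the ACR's validation procedure.

```python
from collections import defaultdict

def accuracy_by_group(cases, key):
    """Group logged validation cases by a metadata field and report
    the AI-radiologist agreement rate for each group separately."""
    groups = defaultdict(list)
    for case in cases:
        groups[case[key]].append(case["ai_correct"])
    return {g: sum(v) / len(v) for g, v in groups.items()}

# Toy validation log: did the AI output match the radiologist's read?
cases = [
    {"scanner": "CT-A", "protocol": "routine",  "ai_correct": True},
    {"scanner": "CT-A", "protocol": "low-dose", "ai_correct": True},
    {"scanner": "CT-B", "protocol": "routine",  "ai_correct": False},
    {"scanner": "CT-B", "protocol": "routine",  "ai_correct": True},
]
print(accuracy_by_group(cases, "scanner"))
```

Breaking the numbers out this way makes it obvious when a tool that looks fine overall is quietly underperforming on one scanner model or one protocol variation.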
Once an AI product is incorporated into workflow, the next step is to develop a formal, written plan for regular, continuous, quantitative evaluation of the tool, to be sure it continues to work at the same level of accuracy as it did when you started using it.
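The continuous quantitative evaluation above can be as simple as periodically recomputing sensitivity and specificity against the radiologists' final reads and flagging the tool when either metric drifts below its deployment baseline. This sketch assumes the practice logs each AI flag alongside the confirming read; the baselines, tolerance, and helper names are hypothetical choices, not part of any specific product.

```python
def evaluate_batch(cases):
    """cases: list of (ai_positive, radiologist_positive) booleans
    from one monitoring period (e.g., one month of exams)."""
    tp = sum(a and r for a, r in cases)
    tn = sum((not a) and (not r) for a, r in cases)
    fp = sum(a and (not r) for a, r in cases)
    fn = sum((not a) and r for a, r in cases)
    sensitivity = tp / (tp + fn) if tp + fn else None
    specificity = tn / (tn + fp) if tn + fp else None
    return sensitivity, specificity

def check_drift(sens, spec, baseline_sens, baseline_spec, tol=0.05):
    """Flag the tool for review if either metric falls more than
    `tol` below the baseline recorded at deployment."""
    alerts = []
    if sens is not None and sens < baseline_sens - tol:
        alerts.append("sensitivity below baseline")
    if spec is not None and spec < baseline_spec - tol:
        alerts.append("specificity below baseline")
    return alerts

# Toy month of logged cases: (AI flagged?, radiologist confirmed?)
month = ([(True, True)] * 40 + [(True, False)] * 10 +
         [(False, False)] * 45 + [(False, True)] * 5)
sens, spec = evaluate_batch(month)
print(check_drift(sens, spec, baseline_sens=0.95, baseline_spec=0.85))
```

Writing thresholds like these into the monitoring plan up front turns "is it still working?" from a gut feeling into a routine, auditable check.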
Most radiology ML products commercially available today are adjuncts to radiologists’ workflow, so if a product is later found to no longer be working as well as expected, speakers reported no significant issues when turning the product off and returning to the original reading process.
This was but a small slice of the tsunami of useful information presented at the Informatics Summit. The AI-related talks and in-depth recordings by subject matter experts will be made available to ACR members in coming months. If you have questions about recording availability, contact firstname.lastname@example.org.
J. Raymond Geis, MD | Senior Scientist, ACR Data Science Institute | Adjunct Associate Professor of Radiology, National Jewish Health, Denver, CO
As radiologists, we strive to deliver high-quality images for interpretation while maintaining patient safety, and to deliver accurate, concise reports that will inform patient care. We have improved image quality with advances in technology and attention to optimizing protocols. We have made a stronger commitment to patient safety, comfort, and satisfaction with research, communication, and education about contrast and radiation issues. But when it comes to radiology reports, little has changed over the past century.