Assessing and Measuring the Usability of AI in Clinical Environments

How can institutions, vendors, and physicians work together to implement AI algorithms that have been critically assessed in routine clinical practice? As 2019 begins, I’m excited to see us getting closer to the patient-centered, technology-driven future in medical imaging we’ve imagined for years. In October 2018, ACR’s Data Science Institute (DSI) announced the launch of our first 50 use cases in our Define-AI Directory. These use cases are the starting point for building a framework to facilitate the development and implementation of AI applications that help radiology professionals with disease detection, characterization, and treatment. Many more use cases, including those for non-interpretive applications of AI in our specialty, will soon follow.

While use cases are the beginning, significant challenges remain: integrating these algorithms seamlessly into clinical workflow and validating, assessing, and measuring the performance of these potentially groundbreaking technologies in actual clinical practice.

For this reason, we at the ACR DSI have begun collaborating on the next steps in creating an AI ecosystem, including building out the Assess-AI process and registry.

Verifying that an algorithm not only works well at a single institution but is broadly applicable to routine practice presents unique challenges for AI developers. Using data collected in the ACR National Radiology Data Registry, Assess-AI combines specific information about an algorithm’s effectiveness, as reported by radiologists at the point of care, with metadata about the exam as specified in the Define-AI use case. It then provides developers with regular reports of their algorithms’ performance across multiple sites. Clinical sites receive reports as well, showing algorithm performance at their site compared with national benchmarks.
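To make that roll-up concrete, here is a minimal sketch of how a registry record of this kind might be structured and aggregated into site-versus-national comparisons. The RegistryRecord class, its field names, and the agreement_rates helper are illustrative assumptions for this column, not the actual Assess-AI schema or API.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical registry record: field names are illustrative only,
# not the actual Assess-AI data model.
@dataclass
class RegistryRecord:
    site_id: str              # submitting clinical site
    algorithm_id: str         # AI model being monitored
    exam_metadata: dict       # use-case-specified exam details (modality, body part, ...)
    ai_finding: bool          # algorithm's output for the target finding
    radiologist_agrees: bool  # point-of-care feedback from the interpreting radiologist

def agreement_rates(records):
    """Per-site radiologist-agreement rate plus a pooled national benchmark."""
    per_site = defaultdict(lambda: [0, 0])  # site -> [agreements, total exams]
    for r in records:
        per_site[r.site_id][0] += r.radiologist_agrees
        per_site[r.site_id][1] += 1
    national = sum(a for a, _ in per_site.values()) / sum(n for _, n in per_site.values())
    return {site: a / n for site, (a, n) in per_site.items()}, national

# Example: two sites reporting feedback on the same algorithm.
records = [
    RegistryRecord("site_A", "ich_detect_v1", {"modality": "CT", "body_part": "head"}, True, True),
    RegistryRecord("site_A", "ich_detect_v1", {"modality": "CT", "body_part": "head"}, True, False),
    RegistryRecord("site_B", "ich_detect_v1", {"modality": "CT", "body_part": "head"}, False, True),
]
site_rates, national_rate = agreement_rates(records)
print(site_rates, national_rate)
```

A real registry would of course capture far richer metadata and outcome measures; the point is simply that pairing point-of-care feedback with exam metadata is enough to benchmark each site against the pooled national rate.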

By monitoring algorithm performance in clinical practice and capturing real-world data in a clinical data registry, Assess-AI provides developers with longitudinal algorithm performance data. As a result, it paves a pathway to meeting any FDA post-market surveillance requirements and, ultimately, smooths the road to deploying AI in clinical settings.
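Because those reports accumulate over time, the same feedback can be viewed longitudinally. The sketch below, again using hypothetical names (monthly_agreement, flag_drift) and an arbitrary 90 percent performance floor, shows one way a developer might watch a monitored algorithm for drift.

```python
from collections import defaultdict
from datetime import date

# Hypothetical longitudinal view: (exam_date, radiologist_agrees) pairs for one
# algorithm. The monthly roll-up and the drift threshold are illustrative
# choices, not part of any published Assess-AI specification.
def monthly_agreement(events):
    """Aggregate point-of-care feedback into a per-month agreement rate."""
    buckets = defaultdict(lambda: [0, 0])  # (year, month) -> [agreements, total]
    for exam_date, agrees in events:
        key = (exam_date.year, exam_date.month)
        buckets[key][0] += agrees
        buckets[key][1] += 1
    return {k: a / n for k, (a, n) in sorted(buckets.items())}

def flag_drift(trend, floor=0.90):
    """Flag months where agreement drops below a chosen performance floor."""
    return [month for month, rate in trend.items() if rate < floor]

events = [
    (date(2019, 1, 5), True), (date(2019, 1, 20), True),
    (date(2019, 2, 3), True), (date(2019, 2, 11), False), (date(2019, 2, 25), False),
]
trend = monthly_agreement(events)
print(trend, flag_drift(trend))
```

A time-bucketed trend like this is exactly the kind of evidence post-market surveillance asks for: not a single validation number, but sustained performance across months of routine use.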

Assess-AI is a vendor-neutral tool designed to work for all AI developers and reporting platforms. At RSNA 2018 in November, DSI announced our collaboration with Aidoc and Nuance Healthcare to use Assess-AI at the University of Rochester Medicine, testing and refining the processes for data gathering. We are excited that developers are recognizing the long-term importance of documenting the longitudinal performance of AI in routine clinical practice.

These are exciting times, as we move from talking about what AI might do to assessing the performance of algorithms in real-world settings and evaluating their impact on patient care. In 2019, the question on radiologists’ minds should no longer be “Will we be replaced by AI?” but “How can institutions, vendors, and physicians work together to implement AI that has been critically assessed in routine clinical practice?”



By Bibb Allen, Jr., MD, FACR, ACR DSI Chief Medical Officer; Diagnostic Radiologist, Grandview Medical Center, Birmingham, AL
