Overcoming Technical Roadblocks to AI Deployment with Standardization

One of the frustrations of implementing AI applications has been the development of platforms and interfaces needed to bring AI tools into clinical workflows. Custom interfaces and infrastructure have had to be developed for each institution, and the lack of standardization has made this process cumbersome and inefficient.

In June 2021, DICOM Working Group 23 released its initial work, for review by Working Group 6, on a supplement to the DICOM Standard for Service Discovery and Control. Supplement 224 proposes a way to standardize the management of AI processing services and applications, which will help overcome current barriers to adoption faced by AI developers and implementers by enabling simpler distribution and deployment and by improving federation capabilities. While there is still work to do, including public comment and letter ballot versions of the supplement to be developed during upcoming Working Group 23 meetings, the goals of this supplement are a cause for enthusiasm.

This effort is likely to result in several substantial changes for those who want to deploy AI tools. By breaking applications down into components, a hosting platform will have key information about what an application is capable of, what components comprise it, their settings, and how they are invoked. These descriptive schemas are written in an application's manifest, which also includes traits called entry points that describe how data is accepted and results are emitted.
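To make the idea concrete, here is a minimal sketch of what such a manifest might look like. The field names and structure below are purely illustrative assumptions for this article, not the actual Supplement 224 schema, and the application name is hypothetical:

```python
# Hypothetical manifest sketch in the spirit of Supplement 224.
# All field names here are illustrative, not the real schema.
manifest = {
    "application": "chest-ct-nodule-detector",  # hypothetical example app
    "version": "1.0.0",
    "components": [
        # Each component lists its settings so a hosting platform
        # knows what the application contains and how to configure it.
        {"name": "nodule-segmentation", "settings": {"min_nodule_mm": 4}},
    ],
    "entry_points": [
        # Entry points describe how data is accepted and results emitted.
        {"name": "analyze-study", "accepts": ["CT"], "emits": ["SR"]},
    ],
}

def entry_point_names(manifest: dict) -> list:
    """Return the entry points a hosting platform could invoke."""
    return [ep["name"] for ep in manifest.get("entry_points", [])]

print(entry_point_names(manifest))  # -> ['analyze-study']
```

A compliant platform could read a manifest like this to discover, configure, and invoke an application without any custom integration code, which is the core of the standardization benefit described below.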

Supplement 224 provides mechanisms that fit the constructs described in Integrating the Healthcare Enterprise’s AI Workflow for Imaging and AI Results. The supplement also provides an entry point suitable for use by the recently released ACR Data Science Institute’s Model Application Programming Interface.

How will this make a difference?

For AI developers, having a standard rubric to describe applications, AI models, and more will allow them to focus on creating applications to meet clinical needs, such as those in ACR Define-AI use cases, rather than on custom interfaces or infrastructure for each installation. Not only will developers benefit, but institutions are likely to find it easier to share custom and "homegrown" applications. And institutions with a compliant hosting platform will find a marketplace of best-in-class AI applications available, allowing for more timely deployment of AI.

Pay-as-you-go and trial/test usage of AI also become much easier, as applications become plug-and-play with compliant platforms and can take the form of hosted services, containers, or executables. Additionally, the headaches of de-identification will be reduced by bringing applications inside an institution's private network. This allows AI models to be more easily deployed and verified against an institution's own data to help guide AI purchases and investments. For research purposes, models can be shared among institutions for federated or distributed learning, and better models will result from the diverse datasets used in training.

ACR AI-LAB™ has shown what we gain from standardization

When ACR’s AI-LAB is used as a platform for AI development, it provides the same interface for applications during the research and development phase as is used in production. It also provides a sharing mechanism for sites on the network. This allows for federated or distributed learning and validation from a platform that mimics commercial production platforms.

Researchers and developers can transition to production without building custom interfaces. As a result, smaller institutions can participate in AI initiatives as part of a federation. They can integrate their unique data for training, testing, and validation activities such as ACR's Assess-AI or Certify-AI. DICOM Supplement 224 is expected to deliver many of the same benefits and remove some of the barriers that have prevented AI from spreading in medical imaging.

While the change won't happen overnight, once DICOM Supplement 224 is implemented, the customized, one-off, and proprietary interfaces for containerized processing, service-oriented architectures, and microservices that have proliferated in the medical imaging space will become a thing of the past. Radiologists, researchers, and AI developers will be able to focus their efforts on developing better AI tools for patient care and will see more AI deployment in diverse healthcare settings.

Safwan Halabi, MD, Associate Professor of Radiology, Northwestern University Feinberg School of Medicine | Vice-Chair of Imaging Informatics, Ann & Robert H. Lurie Children's Hospital of Chicago
