The lifecycle of a healthcare AI algorithm can be divided into four sequential phases: algorithm development, FDA clearance, clinical integration, and post-deployment monitoring. Success in each of these phases is necessary for an algorithm to cross innovation’s “valley of death” and achieve widespread adoption. In this blog post, we focus on the clinical integration phase with plans to explore post-deployment monitoring in subsequent posts.
Over the past several years, AI developers have demonstrated their ability to navigate the first two of these phases: algorithm development and FDA clearance. Evidence of their success is documented in the 343 AI/ML-enabled medical devices that have achieved FDA clearance, the vast majority cleared in just the last three years.
Why algorithms have trouble crossing the valley
Despite the growing number of FDA-cleared/approved healthcare AI applications, why are so few radiologists using AI? The answer lies in the latter two phases of the healthcare AI lifecycle: clinical integration and post-deployment monitoring. These two phases are particularly daunting because they require extensive collaboration between developers and algorithm adopters. In contrast, the first two phases, algorithm development and FDA clearance, can be completed with relatively little input from end users.
Clinical integration, or the integration of technology into clinical workflows, is essential to any healthcare IT application, but it is uniquely central to the value proposition underlying the decision to purchase a healthcare AI algorithm. Most AI algorithms are not directly reimbursed. Their value proposition rests on promises of enhanced diagnostic capabilities and increased radiologist efficiency. Without demonstrated success in clinical integration, those promises of better outcomes and more efficient staffing are difficult to believe, even amid today's radiologist shortage.
For healthcare organizations, achieving clinical integration begins with appropriate product selection. Ideally, new AI-enabled applications should be evaluated in an environment that mirrors their own IT ecosystem. In reality, due to both vendor- and user-side limitations, pre-purchase demos of these applications might be restricted to an unrealistic environment that doesn’t accurately highlight the real-world challenges of integration.
Once an application has been selected, the user's ability to integrate it with existing IT products may prove unexpectedly limited, in sharp contrast to the seamless integration within the demo environment. This risk is best mitigated by installing a trial version of the application onsite, but that is a time-consuming process requiring significant IT resources. An alternative is for the purchaser to ask the developer for an introduction to an existing customer using the system in a similar IT environment.
Once an application has been installed and deployed in a clinical environment, it risks becoming a very expensive unused icon unless appropriate training is provided. Ensuring utilization of novel applications, such as radiology AI algorithms, is particularly challenging, since they introduce a new step into the workflow rather than replacing a prior process. Technical training on how to use the tool must be accompanied by a credible explanation of why the tool should be used in the first place. In our current pilot at Vanderbilt, this stage of clinical integration has required a multi-pronged communications strategy with distinct messaging to technologists, radiologists, and the clinicians ordering the studies.
Several issues make the AI developer’s side tricky
Healthcare AI developers face a different set of challenges in clinical integration, including regulatory constraints. Once an AI-enabled application is FDA cleared or approved, its flexibility is significantly limited by regulation. While the FDA envisions granting AI developers some flexibility via a "predetermined change control plan" in the premarket submission, the scope of these changes will likely be limited. This differs from many healthcare IT applications, such as the EMR, where substantial changes can be made with minimal or no regulatory oversight.
Developers also face limited flexibility in integrating with other healthcare IT systems, including EMR, PACS, and dictation software vendors. While the AI developers need to integrate with the larger platforms to create efficient and safe workflows, the same is not true for the larger vendors. Investing in an integration with a small AI company, whose long-term survival is uncertain, can be a tough sell, particularly when integration could pose health IT security risks.
Significant resource and bureaucratic constraints within the healthcare organizations that are attempting to purchase and deploy healthcare AI applications also create hurdles for AI developers. From security and architectural review committees to staffing limitations for software installations, each customer has a unique set of challenges. Finding resources for the deployment and support of new applications is almost always more difficult than getting resources for upgrading or replacing pre-existing systems.
Preparing for the challenges ahead is key
While the challenges threatening successful clinical integration are daunting, once they are identified they can be addressed. A frequent obstacle is ignoring, or significantly underestimating, the problem in the first place. Developers looking to sell their application are understandably unlikely to dwell on the challenges of clinical integration until the purchase is completed. Healthcare organizations need to acknowledge this dynamic and develop processes to overcome the multiple addressable threats to clinical integration, from ensuring a realistic assessment of the product to developing an adequate training process to support its deployment.
While the first two phases of the lifecycle of a healthcare AI application are predominantly driven by developers, the final two phases place significant responsibility on healthcare organizations. Acknowledging this and putting appropriate strategies in place will pay dividends in value creation and ultimately drive better patient outcomes.
Collin G. Howser, MD, IR/DR Resident | Vanderbilt University Medical Center Department of Radiology | Nashville, TN
Brent V. Savoie, MD, JD | Vice-Chair, Radiology Informatics and Section Chief, Cardiothoracic Imaging | Vanderbilt University Medical Center | Nashville, TN
Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices | FDA