Op-Ed: When Healthcare AI Passes the Turing Test


Keith Dreyer, DO, PhD, FACR

ACR DSI Chief Science Officer
Chief Data Science Officer, Massachusetts General Hospital and Brigham and Women’s Hospital
Boston, MA 

 

Generative artificial intelligence (AI) has the potential to revolutionize the healthcare sector by facilitating more accurate diagnostics, personalized treatment plans, and improved patient care (Jiang et al., 2017). However, AI applications in healthcare face stringent regulatory hurdles, with the U.S. Food and Drug Administration (FDA) frequently classifying them as medical devices. This article examines whether a true human-equivalent AI could pass current regulatory requirements for clinical use and, if not, proposes alternative approaches to regulating generative AI in healthcare.

The classification of generative AI as a medical device presents several challenges. Regulatory requirements for approving a medical device to work autonomously are not well-established, leading to uncertainty in the approval process. In contrast, human healthcare professionals have a clear pathway to achieve ‘autonomy’ in their field (Wartman & Combs, 2018). This disparity raises the question of whether the existing regulatory framework for AI in healthcare is adequate or, in regulatory speak, ‘safe and effective’. 

The Turing Test, proposed by Alan Turing in 1950, is a method for determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human. It involves a human evaluator conversing with a machine and another human without knowing which is which. If the evaluator cannot reliably distinguish the machine from the human, the machine is considered to have passed the test.

Let's reverse this Turing test experiment and apply it to medical device acceptance. Suppose a human physician is isolated in another room, akin to the computer in the Turing Test. This human is treated as an AI-based medical device and subjected to the FDA’s 'Software as a Medical Device' (SaMD) pathways to be approved for clinical work. 

"It's not man versus machine, it's man with machine versus man without" - Pedro Domingos, 2014

Given the stringent requirements for software validation, cybersecurity, and risk management in the FDA’s SaMD approval process, it would be incredibly difficult, if not impossible, for any human to pass the required tests and be deemed acceptable for autonomous clinical use. This thought experiment highlights the potential shortcomings of the current regulatory approach to AI in healthcare.

Now, let's consider an alternative approach: allowing an AI algorithm to use the 'human pathway' for approval as a physician. To pass the equivalent human tests, the AI would need to meet the requirements for state medical licensure set by the Federation of State Medical Boards, earn board certification from the American Board of Medical Specialties, and obtain hospital credentialing under The Joint Commission's standards.

Recent advancements in AI have demonstrated the potential for algorithms to achieve a level of expertise comparable to or exceeding that of human physicians in various medical domains. However, it is important to consider the ethical, legal, and social implications of granting AI algorithms the same status as human physicians.

As generative AI continues to rapidly evolve, our approach to approving AI for clinical use must adapt in tandem. The current regulatory framework may stifle innovation and prevent the healthcare industry from fully benefiting from the potential of AI. It may be time to rethink our classification of generative AI as a medical device and consider a more equitable approval process that reflects the unique capabilities of both human physicians and AI algorithms.

Developing a new regulatory framework that accommodates the rapidly evolving landscape of AI in healthcare is essential for ensuring the safe and effective integration of AI into medical practice. This framework should be based on principles of transparency, accountability, and robust validation, allowing AI algorithms to contribute to a better healthcare system that provides more accurate diagnoses, targeted treatments, and personalized patient care.

One potential approach to regulating AI in healthcare is to create a separate pathway that combines elements of both human physician and medical device approval processes. This hybrid pathway would assess the AI algorithm's performance and safety, as well as its ethical, legal, and social implications. It could involve a tiered approval system, similar to the medical licensure process for human physicians, requiring AI algorithms to demonstrate proficiency in specific medical domains before being granted autonomy. 

Additionally, as with maintenance of certification, this new regulatory framework should incorporate ongoing monitoring and evaluation of AI algorithms in practice. This would ensure that the algorithms continue to perform at a high standard, adapt to new medical knowledge, and remain aligned with ethical guidelines.

Collaboration between AI developers, healthcare professionals, regulatory bodies, and patient advocacy groups is crucial for the successful development and implementation of such a regulatory framework. By fostering a multidisciplinary dialogue, we can create a more comprehensive and nuanced understanding of the potential benefits and risks associated with AI in healthcare, ultimately leading to better decision-making and safer, more effective patient care.

Considering the potential benefits of AI in healthcare, allowing artificial intelligence to operate in a trainee mode under the supervision of a licensed professional, similar to how human trainees participate in patient care today, could be a viable approach. This would facilitate AI's gradual integration into medical practice while ensuring patient safety and allowing professionals to closely monitor and evaluate its performance and effectiveness in real-world scenarios.

The current approach to regulating generative AI in healthcare may not adequately address the unique capabilities and challenges posed by AI algorithms. As AI continues to evolve and demonstrates its potential to revolutionize medical practice, it is imperative that we develop a regulatory framework that ensures the safe and effective integration of AI into healthcare. By rethinking our classification of generative AI as a medical device and considering a more equitable approval process, we can pave the way for AI to contribute to a better healthcare system that provides more accurate diagnoses, targeted treatments, and personalized patient care. 


About the Author

Dr. Dreyer is the ACR DSI Chief Science Officer and the Chief Data Science Officer and Chief Imaging Information Officer at Mass General Brigham. He is also an Associate Professor of Radiology at Harvard Medical School. He has authored hundreds of scientific papers, presentations, chapters, articles, and books, and has lectured worldwide on clinical data science, cognitive computing, clinical decision support, clinical language understanding, digital imaging standards, and the implications of technology for healthcare quality and payment reform initiatives.
