I had a CT scan during my recent emergency room visit. Afterward, the doctor informed me that some incidental findings should be biopsied. I had no idea whether that conclusion was reached by an AI algorithm or by a human. The human who gave me the result explained the situation and the next steps. Would I have felt differently if I had known the conclusion was arrived at by AI? Was there any reason I should know? Who's checking the AI's work? Is my scan going to be used in future AI development? Who will profit if it is?
When I take my car in for repair, it doesn’t care what’s done to it. Attach it to a computer for diagnosis—no problem. Make adjustments based on what the computer says—fine. But, as the owner of the car, I want assurance that the computer is making accurate diagnoses, and I want to know what has been done, why, and what it will cost.
Humans aren’t cars
We humans are a bit different: the car is us! We are animate and opinionated, with motives, values, and goals that are an essential part of who we are and what we want from healthcare. In short, we have individual differences in our bodily mechanics and in what we want, and will do, in the service of those bodies' health.
Radiology plays a critical and growing role in accurate diagnosis. We rely on the acquired skills of the radiologist to identify abnormalities and diagnose them accurately. What does a patient make of the notion that imaging results might be read by a machine that learned its trade via AI?
At the end of the day, patients care most about accurate evaluation of their situation and effective treatment. If a learning machine can do that better than a trained human, why would we object? But how can we know whether the learning machine is more accurate? Ultimately, we have to put our confidence in its creators, interpreters, and communicators, just as we must with our doctor.
Can machines replace doctors?
We might know or suspect that our physician has cultural, gender, or age biases that need to be considered in diagnosis and treatment. Can we take comfort in knowing that the AI algorithm assessing our case is free of such biases, and that the reporting or implementation of its findings will fairly account for those issues?
And yet, humans are social creatures whose very being depends on complex human interactions. Taking that away from medicine diminishes the humanness of both providers and patients.
Humans are not passive actors in medical dramas. We make decisions about our health and care, and we integrate what we learn from medical professionals with what we already know and believe. We decide when to seek care, how to care for ourselves, and when to follow medical recommendations. We want our providers to understand us and our situation. We want to be at least equal partners in decisions about our health and care.
The Turing Test proposed that if a human interacting with a machine could not tell whether the exchange was with a machine or a person, the machine could be credited with human-level intelligence. Yet even if such a state were achieved in radiological imaging and reporting, most of us would still want a human connection.
For the foreseeable future, doctors will remain far superior to AI at incorporating personal context into clinical decisions and making treatment recommendations. Right now, AI systems are becoming good at doing one thing well: narrow tasks. They are not as good at synthesizing information the way doctors do, or at answering anything that begins with "should we." Will the day come when AI avatars can do that as well as humans? That remains to be seen. Will the time come when many, most, or all patients are just as happy getting their AI-derived results from an AI-created avatar as from a doctor? Perhaps, but there is much work to be done before we should assume any answers, especially universal ones.
What does the future hold?
For more than half a century, studies have shown that statistical predictions are often more accurate than clinical judgments made by humans. Even so, patients will continue to want the best that AI can provide joined with the best of human clinicians' abilities, as clinicians bring their personal human qualities to medical decisions. The ACR is continually looking for ways AI can support doctors in improving patient care. Doctors, along with patients, should be the ones answering the kinds of questions raised here.
David Andrews, Patient Advocate
As radiologists, we strive to deliver high-quality images for interpretation while maintaining patient safety, and to produce accurate, concise reports that inform patient care. We have improved image quality through advances in technology and careful attention to optimizing protocols. We have strengthened our commitment to patient safety, comfort, and satisfaction through research, communication, and education about contrast and radiation issues. But when it comes to radiology reports, little has changed over the past century.