Radiologists Are the Original Cyborgs
When Wilhelm Roentgen first used his machine to extract new information about what was happening inside a patient in 1895, he defined the original cyborg radiologist. This human-machine combination has evolved steadily ever since, with new machines building on the old to increase the information we obtain.
Machine learning (ML) is the latest step in this progression. These mathematical methods can extract more information from the pixel data of images than humans can find, even people who have spent decades learning how to read radiology exams. Simply put, there are patterns in the data that are too subtle for humans to perceive.
Because it’s relatively easy to build simple AI models that demonstrate this, it’s no wonder computer scientists and others outside medicine predict the end of radiologists. However, extracting information and knowing what that information means are two different things.
Radiologists are integral to bridging that gap.
The devil is in the details, though. Translating AI models into real, useful tools is devilishly difficult. ML algorithms that find granular patterns in pixel data are often easily fooled by noise, and they often require more consistent and uniform imaging exams than are currently produced.
Any variation in how pixels are generated may affect the algorithm. Which variations are significant, however, is neither well understood nor predictable. Even if it were understood, each equipment manufacturer generates pixels slightly differently, and each software or hardware update, reconstruction kernel modification, or even environmental deviation may alter pixel values.
On top of this, radiologists vary endlessly in how they image patients, from MR sequence tweaks to radiography technique tables to thresholds for 3D reconstructions. How will any of these affect a particular ML algorithm? We don’t really know.
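To make this fragility concrete, here is a minimal, entirely hypothetical sketch (not any real clinical algorithm): a toy classifier that separates two synthetic "tissue" classes by mean pixel intensity works perfectly on data that matches its training conditions, then fails on one class when a small, uniform calibration shift, of the kind a scanner or reconstruction-kernel update might introduce, is added to every pixel. The class means, threshold, and offset are all invented for illustration.

```python
import random

random.seed(0)

def make_image(mean, n=64):
    # A synthetic "exam": n pixel values scattered around a class-specific mean
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Hypothetical classes: tissue 0 images average ~100, tissue 1 averages ~103
data = [(make_image(100), 0) for _ in range(50)] + \
       [(make_image(103), 1) for _ in range(50)]

THRESHOLD = 101.5  # decision boundary "learned" under the original conditions

def predict(img):
    # Classify by mean pixel intensity
    return 1 if sum(img) / len(img) > THRESHOLD else 0

def accuracy(images, offset=0.0):
    # offset simulates a uniform calibration shift applied to every pixel,
    # e.g. after a scanner software update
    correct = sum(predict([p + offset for p in img]) == label
                  for img, label in images)
    return correct / len(images)

print(accuracy(data))              # perfect under the training conditions
print(accuracy(data, offset=2.5))  # the shift pushes class 0 over the boundary
```

The pixel patterns are still there after the shift; the algorithm simply no longer sees them where it expects to, which is the sense in which "any variation in how pixels are generated may affect the algorithm."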
While computers may be good at finding new patterns in pixel data, machines don’t necessarily know what those new patterns mean in terms of making predictions or directing actions. Currently, ML algorithms almost always rely on humans to describe what the pixel patterns mean.
Up to now, the popular media has been filled with clickbait articles such as “Computer Better than Doctor?” at any variety of carefully defined tasks that play to a computer’s strengths. What hasn’t happened yet, but is likely coming, is a flurry of articles such as “Computer Kills Grandma” or “Computer Discriminates Against (take your pick), and Nobody Knows Why.”
To help head that off, we radiologists need to communicate better with data scientists, for example by collaborating to define specific use cases and build standardized data structures. Together, we will discover how best to use ML to extract more information, figure out which parts machines do well and which parts expert human radiologists do well, and make ourselves even better cyborgs.
By Raym Geis, MD, Senior Scientist, ACR Data Science Institute; Adjunct Associate Professor of Radiology, National Jewish Health, Denver, CO