My Department's Experience with AI
Artificial Intelligence (AI) was a major emphasis at RSNA in 2017, and I had an opportunity to check out the latest technology from several AI start-ups. Hands-on experiences with simulated PACS workstations quickly convinced me that AI technology could improve our accuracy and efficiency and reduce turnaround time.
The next step was to find an AI technology partner for the radiology department at the University of Rochester’s Medical College. It was a learning process for us, but we navigated it successfully. Here are the steps we followed to bring AI to my institution.
How we chose an AI vendor
One limitation of current AI software is how it interacts with a radiologist's work list. While all AI software is able to identify findings on images, not all AI software creates notifications for high-priority cases. Notifications were a priority for us, and we worked with the vendor to implement this feature.
Compliance and workflow were also concerns. AI image processing can be performed either on a server located onsite or on a cloud-based system. To be HIPAA-compliant, patient-identifying information has to be removed before images are sent to the cloud, then re-attached when results are sent back to the PACS.
Though cloud-based systems are more cost-effective, they come with a built-in issue: sending sensitive information over the internet. As we evaluated potential AI partners, we looked for flexibility in how their inference models would integrate with our existing PACS system, create work list notifications, and implement a cloud-based system in a HIPAA-compliant fashion. We considered these among the most important factors in choosing a partner.
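The de-identification step described above can be sketched in a few lines. This is an illustrative example only, not the vendor's actual software: the field names, the token scheme, and the in-memory map are all assumptions, and real DICOM de-identification must scrub many more tags than shown here. The key idea is that the identifier map never leaves the hospital network.

```python
import uuid

# Hypothetical identifying fields; real DICOM de-identification
# covers many more tags (names, IDs, dates, institution, etc.).
IDENTIFYING_FIELDS = ("patient_name", "patient_id", "birth_date")

# Local map from anonymous token -> stripped identifiers.
# This stays inside the hospital network and is never sent to the cloud.
_token_map = {}

def deidentify(study: dict) -> dict:
    """Replace identifying fields with an anonymous token before upload."""
    token = uuid.uuid4().hex
    _token_map[token] = {k: study[k] for k in IDENTIFYING_FIELDS if k in study}
    stripped = {k: v for k, v in study.items() if k not in IDENTIFYING_FIELDS}
    stripped["token"] = token
    return stripped

def reidentify(result: dict) -> dict:
    """Re-attach identifiers when the AI result returns from the cloud."""
    token = result.pop("token")
    restored = dict(result)
    restored.update(_token_map.pop(token))
    return restored
```

An onsite server avoids this round trip entirely, which is part of the cost-versus-compliance trade-off discussed above.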
After returning from RSNA, I spoke with my chairperson about a potential collaboration between a developer partner and our department. The response was enthusiastic, and I arranged a series of conference calls and onsite meetings with the vendor to discuss expectations and goals.
Because the vendor I selected was interested in an opportunity to test its software at a multicenter radiology department — and our goal was to gain experience using AI software and conduct research — we agreed to work together. We decided the first application would be detection of intracranial hemorrhage on head CT.
Our first project
The AI tool we deployed was designed to improve our workflow. Before launching the test, we had to work through a few sticking points. One of our initial concerns was how to best differentiate stat exams ordered by referring physicians from routine exams tagged by the AI software as urgent. Our solution was to tag the AI-identified cases with a yellow badge symbol, while stat exams were marked in red. As a result, cases tagged with a yellow badge to indicate the presence of an intracranial hemorrhage were read with the same priority as stat cases. Consequently, the time interval from CT scan completion to notifying the ordering clinician was reduced.
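The badge scheme above amounts to a simple sort rule: red (stat) and yellow (AI-flagged) cases share the same priority tier, ahead of routine cases, with ties broken by scan time. The sketch below illustrates that rule only; the class, field names, and ranks are hypothetical, not the vendor's work list API.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative priority ranks: red (stat) and yellow (AI-flagged)
# share the same tier, so AI-flagged bleeds are read like stat cases.
PRIORITY = {"red": 0, "yellow": 0, None: 1}

@dataclass
class Exam:
    accession: str
    completed: datetime
    stat_order: bool = False   # ordered stat by the referring physician
    ai_flagged: bool = False   # tagged urgent by the AI software

    @property
    def badge(self):
        if self.stat_order:
            return "red"      # stat exams keep their red badge
        if self.ai_flagged:
            return "yellow"   # AI-identified hemorrhages get yellow
        return None

def worklist_order(exams):
    """Urgent cases (red or yellow) first, then oldest scan first."""
    return sorted(exams, key=lambda e: (PRIORITY[e.badge], e.completed))
```

Keeping the two badge colors distinct, even though they sort at the same priority, is what lets the radiologist see at a glance whether the urgency came from the ordering physician or from the AI tool.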
Our new AI tool was extremely accurate at finding both obvious bleeds and extremely subtle bleeds. As expected, it did not determine the cause of the bleed or identify any other types of findings unrelated to the intracranial hemorrhage. The end result was a useful AI tool on our PACS workstation that can detect subtle bleeds, which can easily be missed by junior residents or even by experienced faculty when the caseload gets heavy.
Where we are today
Many of the attendings and residents in my department have embraced AI technology. We now have several active research projects evaluating turnaround time, accuracy of the tool, and its utility in a mobile stroke unit. We have also instituted weekly research meetings that include attendings, residents, and scientists employed by our vendor.
In the near future, we will deploy additional AI tools to identify pulmonary embolism on CT pulmonary angiograms, free air in the abdomen on CT, and cervical spine fracture on CT. All of these tools are examples of narrow AI, which is specialized for a single task, in this case finding a single abnormality on an image. Despite the millions of dollars being poured into research, broad AI, capable of solving problems across multiple tasks, has not yet been developed. Perhaps someday it will be able to identify all abnormalities on medical imaging, but that is not the case today.
Our department is asking many important questions about the use of AI, including:
• How can we best demonstrate the value of AI in radiology and justify the added expense?
• In what way should we allow private companies access to the “big data” that resides in our PACS archives?
• Who owns our data?
We are addressing these questions by developing a dedicated team consisting of radiology faculty, radiology residents, and IT staff to find the answers. In addition, we are collaborating with ACR’s Data Science Institute to help understand the role AI can play in improving diagnostic accuracy and efficiency, the value and reimbursement for the technology, and the potential limitations of AI technology.
As a mid-career academic radiologist, I have seen significant changes in the practice of radiology. Prior to AI, the biggest change was moving from film to PACS. All would agree that we are much more efficient and accurate with PACS. Based on our initial results with AI, it is clear that this technology will result in even greater improvements in efficiency and accuracy, while at the same time reducing work fatigue. Ultimately, work satisfaction will be improved as we are freed up to spend more time doing the work that drew us to this field.
Eric P. Weinberg, MD, FACR | University of Rochester Medical College, Professor of Clinical Imaging Sciences and Medical Director of University Medical Imaging