Produce Multimedia Reports that are Easier to Understand

Purpose

To automatically create a radiology report with embedded images and videos based on a radiologist’s plain text report.

Tag(s)

Non-Interpretative

Panel

Patient Facing

Define-AI ID

19110003

Originator

Patient Facing Panel
Lead: Morgan McBee

Panel Chair

Michael Moore

Non-Interpretative Panel Chairs

Alexander J Towbin, Adam Prater

Panel Reviewers

Patient Facing Panel

License

Creative Commons 4.0

Status

Public Comment

Clinical Implementation


Value Proposition

While the technology exists to create multimedia-enabled radiology reports with embedded images and video clips, adoption has been limited because current implementations require a significant amount of manual input. Machine learning algorithms could generate multimedia-enabled reports automatically from plain-text reports. These reports would be much more user-friendly for both patients and referring clinicians.


Narrative(s)

A 58-year-old man presents to his primary care physician with abdominal pain. The physician orders an abdominal CT scan. The radiologist interpreting the study finds no explanation for the patient’s pain but notes an indeterminate 1 cm lesion in the left adrenal gland. The patient is quite technologically savvy but has limited health literacy. He logs into the patient portal to read the radiologist’s report. He is relieved to see that there are no acute abnormalities, but he does not know what to make of the 1 cm lesion in his left adrenal gland. He is a very visual person but has no desire to scroll through the hundreds of images that make up his abdominal CT. He is, however, very eager to see the adrenal lesion. Because his patient portal is not image-enabled, he must wait until his next doctor’s appointment to see the images.


Workflow Description


The algorithm should process the report after the radiologist finalizes it. Using natural language processing (NLP) or other techniques at the developer’s discretion, it should parse the report for the series and image numbers dictated by the radiologist and embed the corresponding images or video clips into the report. These techniques should also be able to localize findings from their descriptors when image numbers are not included in the report. This could be accomplished in several ways (a minimal parsing sketch follows the list below): 

  1. by further parsing the text of the report (e.g., “9 mm lesion in the upper pole of the right kidney”) and then attempting to locate an appropriate image within the study. This could be accomplished with other machine learning algorithms that identify the relevant pathology based on the text of the report.

  2. by using contextual clues within the imaging data itself (such as an arrow annotation or a measurement) to locate an appropriate image within the study. Any such annotations should be retained on the image embedded in the report.
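
The following minimal sketch, in Python, illustrates how explicitly dictated image references might be extracted from a finalized plain-text report. The “series X, image Y” phrasing, the function name find_image_references, and the returned fields are illustrative assumptions; real dictation styles vary, and a production system would likely rely on NLP rather than a single regular expression.

```python
import re

# A minimal sketch, assuming the radiologist dictates explicit references such
# as "series 3, image 45". The pattern, function name, and returned fields are
# illustrative assumptions, not part of any existing standard or library.
REFERENCE_PATTERN = re.compile(
    r"series\s*(?P<series>\d+)\s*[,;]?\s*image\s*(?P<image>\d+)",
    re.IGNORECASE,
)


def find_image_references(report_text: str) -> list[dict]:
    """Return the series/image numbers dictated in a finalized report."""
    references = []
    for match in REFERENCE_PATTERN.finditer(report_text):
        references.append({
            "series_number": int(match.group("series")),
            "image_number": int(match.group("image")),
            "report_offset": match.start(),  # where the reference appears in the text
        })
    return references


if __name__ == "__main__":
    sample = ("FINDINGS: Indeterminate 1 cm lesion in the left adrenal gland "
              "(series 3, image 45). No acute abnormality.")
    print(find_image_references(sample))
```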


Considerations for Dataset Development


The algorithm could be used for any study type, as all radiologic studies result in text-based radiology reports. In the narrative above, representative images and/or video clips of the adrenal glands, including the adrenal lesion in question, would be embedded in the report. The report must be viewable by patients, and the most accessible way to accomplish this is to make it web-viewable so that it is available through the patient portal.
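
As one possible approach to web viewability, the sketch below combines the finalized report text and a set of exported key images into a single self-contained HTML page that a patient portal could serve. The function name render_web_report and the assumption that key images have already been exported as PNG files are hypothetical.

```python
import base64
import html
from pathlib import Path

# A minimal sketch, assuming the selected key images have already been exported
# as PNG files. The function name and file-handling choices are assumptions;
# a production system would also need to handle video clips and accessibility.


def render_web_report(report_text: str, key_images: list[Path]) -> str:
    """Return a single self-contained HTML page with key images embedded inline."""
    figures = []
    for image_path in key_images:
        encoded = base64.b64encode(image_path.read_bytes()).decode("ascii")
        figures.append(
            f'<figure><img src="data:image/png;base64,{encoded}" alt="Key image"/>'
            f"<figcaption>{html.escape(image_path.stem)}</figcaption></figure>"
        )
    # Embedding the images as data URIs keeps the report a single file that a
    # patient portal can serve without exposing the full imaging archive.
    return (
        "<html><body>"
        f"<pre>{html.escape(report_text)}</pre>"
        + "".join(figures)
        + "</body></html>"
    )
```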

Technical Specifications


Inputs

Radiologist's Report

Definition

The finalized plain-text report dictated by the radiologist.



Primary Outputs

Image or video files embedded in the report

RadElement ID


Definition

An array of image and/or video files corresponding to the imaging findings described in the report, with a reference to each file embedded at the relevant location within the report.

Data Type

N/A

Value Set

N/A

Units

N/A
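
Because the primary output is described only as an array of media files with references into the report, one plausible way to represent it is sketched below. The class and field names are assumptions for illustration and do not correspond to any existing RadElement entry or standard.

```python
from dataclasses import dataclass, field

# A minimal sketch of one possible output structure. The class and field names
# are assumptions for illustration, not part of any existing standard.


@dataclass
class EmbeddedMedia:
    """One image or video clip linked to a specific finding in the report."""
    finding_text: str           # sentence or phrase the media illustrates
    series_number: int          # series the key image or clip was taken from
    image_number: int           # instance number of the key image
    media_uri: str              # location of the rendered image or video file
    media_type: str = "image"   # "image" or "video"


@dataclass
class MultimediaReport:
    """Primary output: the report text plus the array of embedded media."""
    report_text: str
    media: list[EmbeddedMedia] = field(default_factory=list)
```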

Future Development Ideas


For this algorithm to be successful, its output must be viewable by end users (patients and referring clinicians), and standards are needed before widespread implementation is possible.