Radiology and Pathology Report Correlation

Purpose

Correlation of Radiology and Pathology reports

Tag(s)

Non-Interpretative

Panel

Reading Room

Define-AI 

19120001

Originator

Ross Filice, Namita Gandhi
Lead

Ross Filice, Namita Gandhi

Panel Chair

Ben Wandtke
Non-Interpretative Panel Chairs

Alexander J Towbin, Adam Prater

Panel Reviewers

Reading Room Subpanel

License

Creative Commons 4.0

Status

Public Commenting

Clinical Implementation


Value Proposition

Correlating radiology with pathology has long been of interest to radiologists: it supports peer learning and continuing education, promotes multidisciplinary patient care, and helps meet regulatory requirements [Sorace, Murphey]. Historically, this has been a manual process requiring cumbersome paper-based documentation, but with the widespread use of electronic health records, this information is now available in a computer-consumable format. Early work on rad-path correlation using language modeling has been performed [Filice], but further work is needed to develop mechanisms for identifying discrepancies and providing timely user feedback. Modern artificial intelligence and machine learning techniques can help realize the benefits of rad-path correlation without the manual effort.

Narrative(s)

A doctor notices that the prostate-specific antigen (PSA) level of a 55-year-old male is higher than normal and suspects cancer. The doctor orders a screening magnetic resonance imaging (MRI) examination of the prostate. The radiologist reports a prostate imaging reporting and data system (PI-RADS) score of 4 for a lesion identified on the MRI examination, indicating a high likelihood of cancer. The patient undergoes a targeted prostate biopsy because of these imaging findings. The pathology report confirms the presence of cancer. The radiology and pathology reports are automatically correlated as relevant and presented to the radiologist for review. The application flags potential discrepancies between the radiology and pathology reports. The radiologist reviews both reports and provides feedback on whether the matching algorithm worked and whether any potential discrepancy was appropriately or inappropriately flagged; this feedback is used to iteratively improve the matching algorithm. The radiologist also provides feedback on whether the pathology report was concordant with their imaging interpretation; if not, further workup may be required to ensure proper patient care.

Workflow Description

Radiology and pathology reports are consumed electronically through some mechanism such as Health Level 7 (HL7) report extraction, a Fast Healthcare Interoperability Resources (FHIR) interface, or a custom database query. Reports are likely pre-filtered to ensure, at a minimum, that they belong to the same patient, and potentially limited to a reasonable time period (perhaps 30-60 days); these pre-filtering steps are subject to debate, and it may be valuable to consider longer time periods if model performance characteristics and radiologist interest allow. The reports are then processed by the AI model to determine whether they should be considered relevant matches. Note that candidate matching can be many-to-many: more than one pathology report can be relevant to a candidate radiology report, and more than one radiology report may match a candidate pathology report.
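To make the pre-filtering step concrete, the following is a minimal Python sketch. The Report record, its field names, and the 60-day default window are illustrative assumptions rather than part of the use case; a real system would populate such records from an HL7 feed, a FHIR interface, or a database query.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from itertools import product

    @dataclass
    class Report:
        # Illustrative record; a real system would populate these fields
        # from an HL7 feed, a FHIR interface, or a database query.
        report_id: str
        patient_id: str
        timestamp: datetime
        text: str

    def candidate_pairs(rad_reports, path_reports, window_days=60):
        # Pre-filter: same patient, pathology report dated within a
        # configurable window after the radiology report. Pairs are
        # many-to-many by construction: one radiology report can pair
        # with several pathology reports and vice versa.
        window = timedelta(days=window_days)
        for rad, path in product(rad_reports, path_reports):
            if rad.patient_id != path.patient_id:
                continue
            if timedelta(0) <= path.timestamp - rad.timestamp <= window:
                yield rad, path

Only pairs that survive this filter would be passed to the AI model for relevance scoring.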

Relevant simply means that the pathology report (or reports) makes sense in the context of what was described in the radiology report (or reports). For example, for a CT of the chest, pertinent pathology reports could include a lung biopsy, pleural fluid analysis, or chest wall sampling; a brain biopsy result would be clearly irrelevant. There will also be subjective cases that some users find interesting and others do not, such as peripheral blood smear results, pericardial fluid analysis, or coronary artery plaque analysis. Because of this subjectivity, developing a model that can be tuned to an institutional or even individual-level preference would be valuable. The converse many-to-many relationship should also be considered: the lung biopsy result may be relevant not only to the chest CT but also to the whole-body PET and the CT-guided biopsy procedure, with similar potential subjectivity.
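Because relevance is partly subjective, one simple way to expose an institutional or individual preference is a tunable decision threshold over a continuous relevance score. The sketch below is purely illustrative; score_fn stands in for whatever text-pair model (e.g., a fine-tuned language model) a developer chooses.

    def is_relevant(rad_text, path_text, score_fn, threshold=0.5):
        # score_fn is a placeholder for any text-pair relevance model
        # returning a value in [0, 1]. The threshold is the tunable
        # knob: lowering it surfaces borderline matches (peripheral
        # blood smears, pericardial fluid); raising it keeps only
        # unambiguous tissue correlates.
        return score_fn(rad_text, path_text) >= threshold

A site could store one threshold per institution, or per radiologist, and apply it at display time without retraining the underlying model.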

These proposed matches are displayed through an interface in which the radiologist can review them and provide feedback. Saliency or explainability features would help the radiologist understand why the reports are considered matches. Natural language processing or other electronic systems would preprocess the imaging and pathology reports, assessing for potential concordance or discrepancy, and the radiologist would confirm or remove any inappropriate flagging.
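The feedback loop described above implies some persistent record of each review. A minimal sketch of such a record follows; the field names are assumptions, not part of the use case definition.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class MatchFeedback:
        # Illustrative review record. Stored records serve two purposes
        # described above: labels for retraining the matching model, and
        # documentation of whether the radiologist judged the pathology
        # result concordant with the imaging interpretation.
        rad_report_id: str
        path_report_id: str
        proposed_match: bool                  # what the algorithm proposed
        radiologist_agrees: bool              # was the proposal correct?
        concordant: Optional[bool] = None     # None until reviewed
        discrepancy_flag_correct: Optional[bool] = None
        reviewed_at: datetime = field(default_factory=datetime.now)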

Potential future directions may apply labels such as concordant, discordant, adequate (in terms of biopsy sample adequacy), inadequate, and perhaps others. Correlation may be drawn to other outcomes in addition to pathology such as surgical procedures or morbidity/mortality measures.

Considerations for Dataset Development


What procedures might prompt this algorithm?

While this may be subject to debate or depend on the developer's interest, recent work [Kelahan, Dane, Filice] has shown reasonable success considering all radiology reports, both diagnostic and interventional, generated within a radiology department or practice. Generally speaking, it is recommended that all radiology reports be considered as candidates for this algorithm.

Are there different types of reports that should be considered when training the algorithm? 

As above, it is recommended that all radiology reports be considered as candidates. Matches would only be proposed if the same patient has a subsequent pathology report that can be evaluated and is considered relevant.

Automatic trigger or radiologist-initiated launch?

The algorithm should run automatically. Whether a radiologist is notified automatically, and by what means and at what frequency, is subject to debate; these preferences will likely be set locally.


Technical Specifications


Inputs

Radiology Report

Procedure

All

Views

N/A

Data Type

Text (HL7, FHIR response, or database query)

Modality

All

Body Region

All

Anatomic Focus

All

 

Pathology Report

Procedure

All

Views

N/A

Data Type

Text (HL7, FHIR response, or database query)

Modality

All

Body Region

All

Anatomic Focus

All
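Both inputs arrive as text over HL7, FHIR, or a database query. As one hedged illustration, the sketch below retrieves DiagnosticReport resources from a FHIR R4 server; the endpoint URL is a placeholder, and the category codes ("RAD" for radiology, "SP" for surgical pathology, from HL7 v2 table 0074) may differ from what a local system emits.

    import requests

    FHIR_BASE = "https://fhir.example.org/R4"  # placeholder endpoint

    def fetch_reports(patient_id, section_code):
        # Standard FHIR search on DiagnosticReport, filtered by patient
        # and diagnostic service section (e.g., "RAD" or "SP").
        resp = requests.get(
            f"{FHIR_BASE}/DiagnosticReport",
            params={"patient": patient_id, "category": section_code},
            timeout=30,
        )
        resp.raise_for_status()
        bundle = resp.json()
        # The report narrative may appear in text.div or as base64
        # content in presentedForm; handling both is left to the caller.
        return [entry["resource"] for entry in bundle.get("entry", [])]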



Primary Outputs

 

Radiology and Pathology Report Matches

Definition

Whether a pathology report is relevant to a radiology report

Data Type

Boolean

Value Set

[Match, Nonmatch]

Units

N/A
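As a hedged illustration of how this output might be serialized for a downstream display or audit system, the following sketch emits one JSON record per report pair; the key names are assumptions, not part of the definition above.

    import json

    def serialize_match(rad_report_id, path_report_id, is_match):
        # The primary output is a Boolean decision over the value set
        # [Match, Nonmatch] for one radiology-pathology report pair.
        return json.dumps({
            "radiology_report_id": rad_report_id,
            "pathology_report_id": path_report_id,
            "result": "Match" if is_match else "Nonmatch",
        })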

Figures

Future Development Ideas


This use case is limited to determining whether a pathology report is relevant to a radiology report, as defined above. However, there are many directions in which one could build on such a platform, including automatically determining whether a pathology result is concordant with the radiology procedure or interpretation, automatically determining whether a biopsy sample was adequate, and correlating with other outcomes such as surgery, morbidity, and mortality.

References


[1] Sorace J, Aberle DR, Elimam D, Lawvere S, Tawfik O, Wallace D. Integrating pathology and radiology disciplines: an emerging opportunity? BMC Med. 2012;10:100. Available at: https://bmcmedicine.biomedcentral.com/articles/10.1186/1741-7015-10-100

[2] Rubin DL, Desser TS. A data warehouse for integrating radiologic and pathologic data. J Am Coll Radiol. 2008;5:210-7.

[3] Kelahan LC, Kalaria AD, Filice RW. PathBot: a radiology-pathology correlation dashboard. J Digit Imaging. 2017;30:681-6.

[4] Kohli MD, Kamer AP. Story of Stickr – design and usage of an automated biopsy follow up tool. Presented on December 3, 2014, at the Radiological Society of North America Annual Meeting; Chicago, IL.

[5] Dane B, Doshi AD, Gyftopoulos S, Bhattacharji P, Recht M, Moore W. Automated radiology-pathology module correlation using a novel report matching algorithm by organ system. Acad Radiol. 2018;25:673-80.

[6] Murphey MD, Madewell JE, Olmsted WW, Ros PR, Neiman HL. A history of radiologic pathology correlation at the Armed Forces Institute of Pathology and its evolution into the American Institute for Radiologic Pathology. Radiology. 2012;262(2):623-34.