BOLD AIR Summit

Thought-provoking discussions about the ethical and legal issues surrounding AI in radiology.

As one of the representatives working on the Ethics of AI in Radiology: European and North American Multi-Society Statement, I was excited to attend BOLD AIR — the Bioethics, Law, and Data-sharing: AI in Radiology Summit — held at New York University (NYU) Langone Medical Center on April 24, 2019. The one-day event was co-sponsored by the ACR, the Radiological Society of North America, Massachusetts General Hospital, the Stanford Center for Artificial Intelligence in Medicine and Imaging, and the Center for Advanced Imaging Innovation and Research. The Summit was organized by the departments of radiology at NYU Langone and Stanford University School of Medicine.

My background as a thoracic radiologist with a strong interest in AI, along with my work on the Ethics of AI in Radiology statement, has made it clear to me that AI raises many difficult ethical considerations without a clear path to solutions. By attending the Summit, I hoped to find clarity and answers through discussions with experts in bioethics, law, and imaging from around the country, who had gathered to examine the ethical and legal issues raised by the rapidly developing field of AI in radiology.

The Current State of AI

Yvonne W. Lui, MD, a neuroradiologist at NYU Langone, kicked off the event, stressing that AI is poised to have a significant impact on the healthcare industry. Applications for AI in radiology already include algorithms for lesion detection, tumor segmentation, and faster MR image acquisition.

Key to advances in deep learning research is access to large public datasets, although regulatory, legal, and ethical issues surrounding data sharing and governance need to be addressed. Her message resonated with those of us tackling some of these challenges as researchers within our own institutions, as well as with those in the technology sector who hope to derive value and create AI algorithms from untapped healthcare data.

Guidance on Fairness and Ethics in AI in Radiology

David B. Larson, MD, MBA, a radiologist from Stanford, discussed the stages of transformation by which features or characteristics become useful data, data are processed into information, and information becomes generalizable knowledge through model derivation.

Dr. Larson emphasized that the use of clinical imaging data for AI should be guided by the general code of ethics in medicine:

1. Respect for persons or autonomy
2. Beneficence and non-maleficence
3. Justice and fairness
4. Obligation to contribute to the common purpose of improving future care

While this is sound general guidance, implementing an accurate and representative model requires balancing the needs of every entity involved. For example, the needs of healthcare organizations, technology companies developing AI algorithms, and society as a whole may not align with the needs of individual patients. When a healthcare organization seeks improved efficiency so that it can serve more patients, while each patient has a personal interest in maximizing the quality of their own care, an inherent conflict can arise for those developing AI.

Raym Geis, MD, FACR, presented a summary of the Ethics of AI in Radiology statement. The paper was intended to be aspirational rather than prescriptive, examining ethics in three major areas: data, algorithms, and practice. He explored issues related to privacy, transparency, data value and transactions, data use agreements, ground truth, and bias in AI. The statement’s assertion that “To improve the common good, we have a moral duty to use AI to extract the most information possible about patients and their disease” seems simple to aspire to, but the interplay of factors such as patient privacy and the monetization of data requires thorough consideration.

Though many of the ethical principles introduced in the statement are widely accepted in Western cultures, not all are considered the norm around the world. For example, there are cultures in which the benefits of discoveries that help society as a whole are considered far more important than the privacy of individuals.

There is little doubt that healthcare-related AI research can advance more quickly in countries with less stringent regulation of data acquisition. However, extracting the “most information possible” can lead to unintended consequences, such as misuse of the private information that patients have shared with their healthcare providers. An ethical approach requires regulations or boundaries that limit extraction to clinically useful data.
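As one concrete example of such a boundary, imaging data shared for research is commonly de-identified so that only the clinically useful pixel data and acquisition parameters travel with the file. The minimal Python sketch below uses the pydicom library to blank a few identifying fields; the tag list is illustrative only, and a production pipeline would follow a complete profile such as the de-identification profile in DICOM PS3.15.

```python
# Minimal, illustrative DICOM de-identification sketch using pydicom.
# The tag list is NOT exhaustive; a real pipeline should implement a
# complete profile (e.g., the DICOM PS3.15 de-identification profile).
import pydicom

# Identifying attributes to blank out (illustrative subset only).
IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName",
    "InstitutionName", "AccessionNumber",
]


def deidentify(in_path: str, out_path: str) -> None:
    """Blank common identifying fields and drop vendor-private tags,
    keeping pixel data and acquisition parameters for research use."""
    ds = pydicom.dcmread(in_path)
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""  # blank, keep element present
    ds.remove_private_tags()                 # strip private vendor tags
    ds.save_as(out_path)


# Example (hypothetical file names):
# deidentify("chest_ct_001.dcm", "chest_ct_001_deid.dcm")
```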

Bioethical Issues Related to the Use of Data for AI in Imaging

Jennifer E. Miller, PhD, an Assistant Professor at Yale University School of Medicine, proposed models for using and sharing data responsibly. Her proposal includes methods such as data visiting, liquid democracy, and data escrow — with the goal of allowing patients and institutions more control over their data, while simultaneously providing algorithm developers with data for training and testing.

Traditionally, privacy and consent must be balanced with the benefits of research. In the era of AI, there are additional challenges of maximizing the benefits of data collected for other purposes while, at the same time, protecting patients and their data from misuse by healthcare systems and algorithm developers.

As David Magnus, PhD, of Stanford discussed, missing data from under-represented populations or biases in the training data can lead to flaws in the resulting algorithm — creating self-fulfilling predictions that reinforce existing disparities.

One possible solution is for an AI algorithm to alert users to its limitations and applicability when a patient’s demographics or characteristics are under-represented in the training dataset. Increasing the diversity of training datasets can also minimize potential biases.
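As a purely hypothetical sketch of how such an alert could work, the Python snippet below compares an incoming patient’s demographic stratum against summary counts from an algorithm’s training cohort and warns when that stratum is rare. The field names, strata, and the one-percent threshold are invented for illustration and are not drawn from any system discussed at the Summit.

```python
# Hypothetical sketch: warn when a patient's demographic stratum is
# under-represented in an algorithm's training dataset.
# Fields, strata, and the 1% threshold are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Patient:
    age: int
    sex: str        # e.g., "F" or "M"
    ethnicity: str  # e.g., "Hispanic" or "Non-Hispanic"


def age_band(age: int) -> str:
    """Bucket ages into decade-wide bands so strata stay meaningful."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"


def build_strata_counts(training_cohort: list[Patient]) -> Counter:
    """Count occurrences of each (age band, sex, ethnicity) stratum."""
    return Counter(
        (age_band(p.age), p.sex, p.ethnicity) for p in training_cohort
    )


def representation_warning(patient: Patient, strata: Counter,
                           total: int, min_fraction: float = 0.01):
    """Return a caution string if the patient's stratum makes up less
    than min_fraction of the training data; otherwise return None."""
    key = (age_band(patient.age), patient.sex, patient.ethnicity)
    fraction = strata.get(key, 0) / total
    if fraction < min_fraction:
        return (f"Caution: stratum {key} is only {fraction:.2%} of the "
                f"training data; output may be less reliable.")
    return None


# Example: a tiny illustrative cohort, then a patient outside it.
cohort = [Patient(64, "M", "Non-Hispanic"), Patient(58, "F", "Non-Hispanic")]
strata = build_strata_counts(cohort)
message = representation_warning(Patient(27, "F", "Hispanic"), strata, len(cohort))
if message:
    print(message)
```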

Legal and Regulatory Issues Related to AI

Studying the limitations and applicability of AI algorithms is key to clearing them for clinical use. Attorney John J. Smith, MD, JD, discussed the legal and regulatory issues developers face. The FDA currently treats imaging AI as a medical device: an algorithm must be frozen and then tested on a validation dataset to obtain FDA approval.

Requirements differ by device category. Computer Assisted Triage (CADt) alerts radiologists or other healthcare professionals to the potential presence of an abnormality without marking the image; CADt devices can be cleared with a lower threshold of standalone performance testing. Computer Assisted Detection (CADe), the more traditional CAD, marks images with potential abnormalities and requires both standalone performance testing and a reader study for FDA approval, a more demanding process. Computer Assisted Diagnosis (CADx), which provides output that includes a diagnosis and/or confidence level, requires standalone performance testing and might also require a reader study if images are marked.

Evaluating an algorithm for FDA approval requires a dataset with established ground truth, which involves a significant investment of time and money. The current paradigm does not allow for expanded indications for use or for the evolving nature of AI: any improvement to an algorithm would require new validation and a new FDA submission. While attempts are underway to adapt regulation to the new landscape of AI, much work is needed to improve the process.

The FDA is now proposing to introduce a “predetermined change control plan” in premarket submissions, in which anticipated modifications and the associated methodology for implementing those changes can be included in an “Algorithm Change Protocol.” More details can be found on the FDA website. This approach would allow patients to benefit from the iterative improvement of AI software through user feedback and increased exposure to additional data.

Needs and Modes of Collaboration

To wrap up the day, AI thought leaders — including Thomas M. Grist, MD, FACR, Shannon Werb, Keith J. Dreyer, DO, MD, PhD, FACR, and Kevin Lyman — discussed the importance of and challenges in building partnerships to create AI solutions to address clinical questions and improve radiology workflow. Dr. Dreyer described the ACR Data Science Institute’s (DSI) recent efforts to empower radiologists to participate in the development of AI algorithms by launching the ACR AI-LAB™.

As radiologists, we are increasingly confronted with ethical and legal issues related to data ownership and access, partnerships with for-profit entities, and algorithm development and deployment. No single event can offer solutions to all of the complex ethical and legal issues surrounding AI in radiology. However, I left BOLD AIR with a clear vision that awareness of existing and potential ethical and legal issues — and continued open dialogue among all stakeholders — are vital in guarding against missteps and ensuring future advances in AI in radiology.

It is incumbent on those of us involved in radiology-based AI research and implementation to accelerate awareness and discussion of these issues among our colleagues and collaborators. The ACR’s 2019 Imaging Informatics Summit will examine many of these issues in addition to new strategies for implementing AI in practice.

Carol C. Wu, MD | Associate Professor of Radiology at UT MD Anderson Cancer Center