Spotlight on AI Manufacturer GE HealthCare

We sat down with Parminder Bhatia, Chief AI Officer at GE HealthCare, to learn more about GE HealthCare's mission, its products, and his thoughts on transparency in medical imaging artificial intelligence. Bhatia leads the cross-company AI organization with a focus on foundation models and generative AI, as well as the integration of AI into the development, manufacturing, and operation of medical devices. He is also responsible for overseeing the strategic implementation of AI technologies to improve patient care, operational efficiency, and outcomes.


Q: Can you tell us about GE HealthCare and its mission?

Parminder Bhatia: GE is a 130-year-old company, but it's also a one-year-old company, because that's when we became a separate entity. GE HealthCare is a leading global innovator in medical technology, pharmaceutical diagnostics, and digital solutions. Our focus is on a wide array of digital problems and solutions: providing our customers with integrated solutions, services, and data analytics that make hospitals more efficient, clinicians more effective, and, on the therapy side of things, treatments more precise.

A lot of our devices focus on providing the right level of granularity, which helps streamline the process toward better, more precise therapies and, eventually, makes patients healthier and happier.

As part of GE HealthCare, we focus on three major pillars. The first is smarter devices: how to make our CT, MR, ultrasound, all those machines, smarter. The second is how to connect and improve ecosystems. A patient will go through multiple screenings, diagnoses, and devices in that ecosystem. A patient going through prostate cancer might start with a screening or assessment on an ultrasound device, then move to MR and CT, and follow-ups as well. So the whole idea is, how do we create that connective tissue where devices can actually interact and talk to each other in a meaningful way? Starting with devices, we need to make those smarter, and then focus on the ecosystem to connect the dots and make the ecosystem smarter too. The third pillar is digital and AI. How do we integrate so that an MR machine can talk to an ultrasound machine, or even a pathology device? That's where the digital cloud and AI become important components. How do we want to deliver precision care from a GE HealthCare perspective? And as we are on this journey, how do we build digital solutions and technologies that are scalable enough to go across a wide array of applications, starting with a few diseases but then scaling to many?


Q: What is your key product and how can it help radiologists?

PB: For the second year running, GE HealthCare has the highest number of FDA-cleared AI applications, with 58+ AI-enabled applications. One of our products is called AIR Recon DL, which addresses the problem of MR scan time: using AI and machine learning, we are able to reduce scan time by 50% while providing the same quality of images, really improving the workflow. If you're thinking about a neurodegenerative disease such as Alzheimer's, patients might have difficulty remaining in a machine for a long time. So by providing that knob, you're able to reduce or increase scan time depending on the level of image quality you want to capture. That's one of our flagship products; it's been out there for the last three years, and it's widely used. Not only does it cut scan time nearly in half through automation without compromising image quality, but it has also served over 10 million patients since we introduced the technology early in the pandemic.

We also had a recent 510(k) clearance for auto segmentation for CT. It streamlines the process of delineating, or identifying, the organs that might be at risk during radiation therapy. In radiation therapy, you want to make sure you maximize the radiation onto the tumor and minimize the impact on the surrounding organs, and this technology helps you do that. If you look at the entire workflow, radiologists have to manually mark which organs they need to avoid and where the therapy needs to go.

With machine learning and AI, this streamlines the process by automatically identifying organs at risk, so that across the entire cycle, what used to take three hours is cut down to 15-20 minutes. That way clinicians can spend more time on diagnosis and with patients as well. One of the key components for this is building foundation models. In the past, we would have done one anatomy, or a few organs, at a time. Where we are moving is figuring out how to build larger models that can capture not just one anatomy but multiple anatomies, so that eventually, instead of five products that would have come out over two to three years, we are bringing them all together in a single shot. It reduces the time for product development, and it also provides more value to customers right from the beginning.
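As a rough, hypothetical illustration of the multi-anatomy idea (a sketch, not GE HealthCare's implementation): a single model can emit one output channel per organ, so supporting a new anatomy means adding a channel rather than shipping a separate single-organ product.

```python
import numpy as np

# One model, one output channel per organ at risk; the network itself is
# stubbed out with random probabilities since this is only a sketch.
ORGANS = ["liver", "left_kidney", "right_kidney", "spinal_cord", "heart"]

def segment_ct(volume: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network: per-organ probability maps with
    shape (num_organs, depth, height, width)."""
    rng = np.random.default_rng(0)
    return rng.random((len(ORGANS), *volume.shape))

def organs_at_risk(volume: np.ndarray, threshold: float = 0.5) -> dict:
    """Binarize each channel into a mask, keyed by organ name."""
    probs = segment_ct(volume)
    return {organ: probs[i] > threshold for i, organ in enumerate(ORGANS)}

masks = organs_at_risk(np.zeros((64, 128, 128)))   # placeholder CT volume
print({organ: int(m.sum()) for organ, m in masks.items()})
```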

And then on ultrasound: about a year to a year and a half ago, we acquired a company called Caption Health. During an ultrasound, a sonographer has to take scans of a patient. You'd have sonographers with anywhere from one year of experience to 17 years of experience, so there will be a huge difference in quality, because there is a learning curve. This tool provides guidance along that learning curve as you get into the technology. Think of a novice sonographer who wants to take a diagnostic-quality ultrasound. It gives you guardrails: "hey, you need to move slightly left; hey, you need to move slightly down," guiding you to a better-quality image. That makes it a lot easier, but it also provides capabilities where you can now think about some of these technologies in remote or even home environments.


Q: What makes GE HealthCare unique?

PB: As a digital industry leader, we serve more than 1 billion patients a year, and that is powered by these AI-enabled applications, which are going to have real impact. Our focus now is going from making smarter devices to making the entire care pathway a smarter ecosystem. By being able to go from screening to early assessment to diagnosis, we can make sure that patients get the right treatment sooner, which has an impact on patient outcomes. For instance, with prostate cancer, if it's caught early the survival rate is 97%, so I think the impact is huge. The cost-benefit goes to the hospitals as well, because being able to identify the precise treatment sooner helps in that entire process. That's where it becomes really useful.

We have these devices and the ecosystem, and we work with our customers to really streamline that workflow. I think that's what differentiates us. In the last year, we have placed a lot of emphasis on strategic partnerships. For instance, we've been doing research partnerships with institutions like MGB and Vanderbilt University, and with Mayo Clinic we recently announced a partnership in the field of theranostics. These partnerships become really important because you need partners with whom you can build these capabilities and even evaluate some of these technologies together.

"As a digital industry leader, we serve more than 1 billion patients a year and that is powered by these AI-enabled applications." 


Q: What are your thoughts on the Transparent-AI program?

PB: When we heard about ACR launching responsible AI tools, we knew we needed to be involved. Especially as one of the leaders in the space, we had a responsibility to support that. By last year's RSNA, we had five applications that were already part of the Transparent-AI initiative, and 12 other applications are on the AI Central website. As we build different capabilities, we're thinking about how to make our devices and our ecosystem smarter. While all of our teams practice responsible AI, in my group we have a separate effort dedicated just to responsible AI. Transparency becomes key, and even evaluation of these models becomes key, because for a lot of these technologies, training takes about 10-20% of the time, and 80-90% of the time is actually spent on validation. Any time we bring out a product, there are months of multi-site validation just to make sure the technology is performing as expected, and then we iterate and improve on that. By bringing these capabilities back in the form of AI Central and other venues, we are confident in showing those components.

As we know, a lot of this technology, as it's being built out, operates as a black box, which can make it difficult for healthcare professionals to understand how the output decisions are generated. There's a huge emphasis on making things transparent and, even beyond that, on making models explainable: what the outputs are and why they are the way they are. In the end, it's always the clinician in the loop who has to make the call. But now they not only see the output but also why it came out that way, and they can reason about whether it aligns with their thinking, or whether it might not be the right output in a given instance, and provide their own expertise.


Q: What can we expect from GE HealthCare in the next few years? Any exciting developments or projects on the horizon?

PB: Oh, yeah, a lot. An example we'll start with is research work. I talked about foundation models; we recently published research on a model called SonoSAM. Imagine you have a smartphone, and there's a moving object, like someone running. If I need to take their photo, I might need to adjust the phone. Maybe I need to tap on the person to zoom in, and then the smartphone's focus should be able to follow them as well. This research goes in a similar direction in the ultrasound space: with just a few clicks, you'll be able to highlight the important areas in the ultrasound, ensuring doctors can see exactly what you want them to see, like a lesion, a tumor, or an organ they're interested in, and it provides real-time updates. A tumor might not stay stationary; it might move in 4D as well. So the model is not only able to identify it, but once you identify it, you can actually track it. The reason we call this technology a foundation model is that it's been trained on huge amounts of data and is able to generalize to things it hasn't even seen. For instance, this model never saw a breast lesion, but it was able to perform on one with really high accuracy. And I think that's the advantage with these technologies: they can learn from different modalities. It was able to learn things from CT data, MR data, and other data as well. There's a lot of cross-learning between different modalities. I think this technology is going to be quite revolutionary across the care path.
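To make the click-to-segment idea concrete, here is a minimal sketch of how a promptable, SonoSAM-style model might be driven; the `segment` stand-in and every name below are illustrative assumptions, not GE HealthCare's actual code or API.

```python
import numpy as np

# Illustrative stand-in for a promptable segmentation network; a real
# model would be a trained network, not this toy similarity test.
def segment(frame: np.ndarray, prompt_xy: tuple) -> np.ndarray:
    """Return a binary mask for the structure around the clicked point."""
    px, py = prompt_xy
    ys, xs = np.ogrid[:frame.shape[0], :frame.shape[1]]
    near = (xs - px) ** 2 + (ys - py) ** 2 < 40 ** 2    # local region
    similar = np.abs(frame - frame[py, px]) < 0.1       # toy "similarity"
    return near & similar

def track(frames, first_click):
    """Propagate one click through a cine loop: each frame's mask
    centroid becomes the prompt for the next frame."""
    prompt = first_click
    for frame in frames:
        mask = segment(frame, prompt)
        ys, xs = np.nonzero(mask)
        if len(xs):                                     # re-center prompt
            prompt = (int(xs.mean()), int(ys.mean()))
        yield mask

frames = [np.random.rand(256, 256) for _ in range(8)]  # fake cine loop
masks = list(track(frames, first_click=(128, 128)))
```

The point of the sketch is the interaction pattern Bhatia describes: a clinician supplies a click, the model returns a mask in real time, and the prompt is re-derived from each new mask so the structure is tracked as it moves.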

And obviously, there's a lot of work happening that we'll be doing over the next few years. There's photon counting CT, which is going to come out; it's already in the research phase, and there's a lot of work happening there so you can get really granular, precise images from CT scans. There's a lot of work happening across our devices, with ultrasound, with MR, with CT, but I think this foundation model will actually become the glue across all of them.

The other area I'm optimistic we'll get to as an industry is the model feedback loop. Today, let's say I have an iPhone: there are 50,000+ engineers behind it, and every three weeks you get an update on the device. With AI algorithms right now, there's no such feedback loop. So imagine a model three to five years down the line: yes, it did really well at a particular point, but what happens in two months, six months, nine months? Models can drift, and the software might not perform as expected in nine months. The first thing we need is a feedback loop where we are able to see whether the model is performing as expected. Once you're able to do that, you can close the loop: maybe we need to improve the model, or maybe we need to restrict it to the settings where it works. Having those guardrails is going to be really important. And again, it goes back to the transparency point: yes, we can provide the inputs on what a model was trained on, but you need to have the same loop on how it's used in real-world settings as well. Moving in that direction is going to be a really important area for us as an industry.
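As a rough illustration of the feedback loop he describes, here is a minimal drift-watch sketch, assuming a deployed model's outputs are periodically scored against clinician-confirmed ground truth; the metric, thresholds, and function names are all hypothetical.

```python
from collections import deque

BASELINE_DICE = 0.92   # hypothetical multi-site validation result
TOLERANCE = 0.05       # allowed drop before we flag drift
WINDOW = 200           # number of recent cases to average over

recent_scores = deque(maxlen=WINDOW)

def record_case(dice_score: float) -> None:
    """Log one clinician-reviewed case and check the rolling average."""
    recent_scores.append(dice_score)
    if len(recent_scores) == WINDOW:
        rolling = sum(recent_scores) / WINDOW
        if rolling < BASELINE_DICE - TOLERANCE:
            flag_drift(rolling)

def flag_drift(rolling: float) -> None:
    # In practice this would notify the site and the manufacturer,
    # feeding the "improve the model or restrict its settings" decision.
    print(f"Drift flagged: rolling Dice {rolling:.3f} "
          f"vs baseline {BASELINE_DICE:.3f}")
```

Closing the loop is then a policy question: retrain, recalibrate, or narrow the model's approved settings based on what the monitoring shows.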

And the last one: another advantage of these large foundation models is that they're multimodal in nature, and there's no problem as multimodal as healthcare, where you have structured data, unstructured data, imaging coming in, pathology data coming in. How do you combine them? That's where these models become really useful.


Q: What do you envision for the future of AI and radiology?

PB: I think a future that is personalized, prevention-oriented, and affordable can make a difference in the lives of patients, radiologists, and clinicians by offering prescriptive, predictive, AI-driven insights that improve overall clinical and operational workflows. We touch on clinical workflow a lot, but I think operational workflow becomes really important as well. For instance, one of our products, the intelligent imaging protocol manager, takes a protocol from the clinician to the radiologist, and from the radiologist to the technologist, and then sends the right protocol to the right scanner. There are a lot of steps, which means it is time consuming for a lot of people, and there can be bottlenecks. Maybe some places have a shortage of radiologists, some might have a shortage of technologists, but because there are so many steps, even taking out one step that is a blocker will make the entire pipeline smoother.

"I think a future that is personalized, prevention-oriented, and affordable can make a difference in the lives of patients, radiologists, and clinicians [...]"

So how do we automate these things in a meaningful way? If a radiologist is spending one to two hours just on this one thing, that means they're already burnt out. We already have less capacity, so how do we augment that capacity? By giving them time to spend with patients, time to spend on the things that can really be more impactful, and by reducing the administrative and cognitive overload in a lot of those areas. Foundation models that are multimodal in nature are another important aspect. In the last five years there's been a lot of discussion around aggregating data. So, assuming we can aggregate, I have a data store that holds pathology data, imaging data, EHR, MR; everything is there. What do I do with that? I think that's the next question. If your data is not combined in the right way, you cannot get meaningful insights out of it. Technology should be used to synthesize or combine that data in a meaningful way as well. Once you're able to synthesize the data efficiently, then diverse hospital staff can use chatbots to do things like query the patient record for the right information, which could be in the imaging data, in the unstructured clinical notes, or in the structured part. Just streamlining that process provides more avenues for collaboration, because you have radiologists, technologists, radiation oncologists, clinicians, a wide array of users who are going to use the same data.

We're seeing so much with ChatGPT, but I think it opens avenues for new interfaces, like: how do we bring in voice as an interface? We talked about MR. If you went through an MR, you might see a technologist who would go in and set you up, and then they have their workstation outside as well. They need to set the protocols and get those things aligned. But these things can be automated; they need to spend more time with the patient. Can they use voice and other interfaces? "Okay, this is the protocol; these are the settings." That way you're able to automate the workflow so they spend more time with the patient. That's going to be important for radiology and even the entire workflow, because you're reducing rescans; a lot of the time, errors happen, and figuring out how to reduce errors is going to be important. That's where voice and these technologies as an interface become really important. Another aspect is health equity: how do we bring healthcare professionals onto the same level playing field? With these technologies, there's initial work that is really promising. I think there's a lot of acceptance of what I talked about with Caption Health, and of how customers are interacting with it, because that's technology that's really going to help them augment their skills and bring a lot of value. That's an area which is going to have an impact across the globe, because you can think of distributing these technologies anywhere in the world.

It comes with caveats. One aspect is going to be change management, because some of these technologies might require a bit of tuning, or raise questions about how we implement them into existing workflows. We need to make sure they're not disruptive. How do we provide training on these things to healthcare professionals, integrating AI seamlessly and ensuring that AI complements human decision-making? That's going to be an important area, because the technology is going to grow, but bringing everyone onto it in a cohesive way is going to be essential for this technology to actually be used at that scale.


To see their full range of medical imaging AI products, visit GE HealthCare on AI Central.


This interview is part of a series interviewing medical imaging AI manufacturers on AI Central to help our members better understand the imaging AI marketplace. Since its inception in 2018, the ACR Data Science Institute® AI Central database has evolved from a short online list of FDA-cleared imaging AI products to the most complete and up-to-date online, searchable directory of commercially available imaging AI products in the United States. More than 200 FDA-cleared software as a medical device (SaMD) products from more than 100 manufacturers have been curated, and thousands of radiologists access the site each month in search of suitable AI solutions. Learn more at AICentral.org.