Abdul Hamid Halabi is the head of healthcare at NVIDIA, and he offers an interesting look into the technology behind Clara, a platform that brings graphics processing unit (GPU)-enabled imaging instruments to the medical world for faster reconstruction and higher quality images, with up to an 80 percent reduction in radiation dose to the patient. In addition to better graphics and higher resolution images, Halabi discusses the powerful upgrade in scientific computing brought by this technology, which allows for the development of algorithms for faster detection of hemorrhage, the ability to prioritize worklists for radiologists according to urgency, and algorithms that can automatically measure anatomy and produce reports from numerous images, whether generated by ultrasound, CT, MRI, or X-ray. Interested in learning more? Press play for an informative conversation that sheds light on the exciting future of technological advancement in medical imaging.
Visit https://developer.nvidia.com/clara for more information.
Richard Jacobs: Hello, this is Richard Jacobs with the Future Tech and Future Tech Health podcast, and I have Abdul Hamid Halabi. He's the head of healthcare at NVIDIA. I think a lot of people have heard of NVIDIA; I didn't realize that they were even into healthcare very much. But we're going to talk about the next-gen imaging technology called Clara. So Abdul, thanks for coming. How are you doing today?
Abdul Hamid: Hey Richard, thank you so much for having me. Really excited to be with you.
Richard Jacobs: Oh, good. So tell me about the Clara technology. What's the overall premise of it? Then we'll get into some of the details.
Abdul Hamid: Well, Clara is our healthcare platform at NVIDIA. A lot of people know NVIDIA from our gaming days, and for those who play games, keep playing, because those are really important technologies being created that turn out to be useful for AI. That's part of our history and how Clara came about. Back in 1999, roughly, NVIDIA decided to invest in programmable shaders, which allow the GPU to become programmable so you can create these amazing games, or make a movie, or simulate a car before you actually build it so you can be more efficient. That same technology turned out to be really useful for high-performance computing and scientific computing as well. So in areas of healthcare like medical instruments, which take a signal such as X-ray and turn it into an image, that is actually a scientific computing problem, and it has benefited from GPUs for a very long time. Then in 2012 we became the platform for deep learning and AI, and over the last 18 months or so we invested in other AI technologies, like machine learning with support vector machines, through another platform called RAPIDS. So you have these four components: the visualization we introduced in 1999, scientific computing, deep learning, and machine learning, all of which use our platform across many, many industries for AI. We decided to go vertical in two of those industries. One was transportation, with our effort in self-driving cars, and the other was healthcare, and the platform for that healthcare vertical, for developers, is Clara. A little bit about Clara: what may help before we get there is to understand where NVIDIA has been in healthcare. We've been in that space for about 12 years now. It started with the use of GPUs for rendering, that first vertical I described within our platform. So when you looked at an ultrasound machine and saw that 3D image of a baby for the first time, there was a GPU, a graphics processing unit, behind that. Then the next time you got into a CT scanner and the scan was much faster and you were exposed to 80% less radiation than usual, that was because GPUs enabled the instrument manufacturers to innovate with algorithms such as iterative reconstruction, which requires far fewer X-rays and still gets you the same quality image. That happened a while ago, and obviously now with deep learning there are a lot of use cases. If you combine the use cases in medical imaging, from instruments and reconstruction to visualization to AI, Clara is the platform to help bring all of these technologies as easily as possible to developers so they can take advantage of them. We did this by introducing two toolkits; we announced the second one just recently. One toolkit is targeted at the adoption of AI, to make it easier to build, manage, and deploy AI, and the other is targeted at instruments, to enable them with GPU technology for faster reconstruction, better visualization, and obviously to bring AI.
Richard Jacobs: Okay. So tell me about Clara. Why the focus on improving the imaging? Is it because NVIDIA is really strong in visual representation with its graphics cards for games, and now that's going to translate over to medical imaging and improve the resolution of the features you mentioned, or is it something else?
Abdul Hamid: No, absolutely. I think the power of the graphics processor is in its parallel processing. If you think about gaming and why we did really well in gaming, it's because when a game is running really, really fast, you don't want to be calculating each pixel on your screen one at a time; you want all of these pixels to be calculated at the same time, very quickly. So by design the graphics processor underneath is a parallel processor, and it turns out that a lot of the use cases within the medical imaging world can benefit from this parallelism. So using the hardware we already had in medical imaging was a natural progression, and it was apparent in the use cases that came out of deep learning first; a lot of them were focused on radiology and medical imaging.
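To make that per-pixel parallelism concrete, here is a minimal sketch using CuPy, a NumPy-compatible GPU array library. The image data and the window/level values are illustrative and have nothing to do with Clara itself; the point is that one array expression transforms every pixel in parallel on the GPU rather than looping over them.

```python
import cupy as cp  # NumPy-compatible arrays that live on the GPU

# Stand-in for one acquired frame (e.g. a 512x512 grayscale image).
image = cp.random.rand(512, 512).astype(cp.float32)

# A simple window/level mapping applied to every pixel at once; the GPU
# spreads the per-pixel work across thousands of parallel threads.
windowed = cp.clip((image - 0.2) / 0.6, 0.0, 1.0)

print(float(windowed.mean()))  # bring a single scalar back to the CPU
```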
Richard Jacobs: What are some of the specific applications where NVIDIA has improved the resolution of particular imaging? Is it in X-ray, is it in CT or MRI, or is it across the board? And have any results come from the improvement of the imaging; are you at the stage yet where this is actually deployed in medicine?
Abdul Hamid: Oh yeah, for sure. It's really difficult today to come across a CT scanner, for example, that does not include a GPU. The same applies to ultrasound, which GPUs have revolutionized, I would say, by moving to software-defined beamforming; to MR, by enabling compressed sensing techniques for faster scanning; X-ray uses it; mammographic imaging uses it. So across the medical imaging industry there is very high adoption, and now with AI, adoption is increasing even more. You'll see FDA-cleared algorithms such as AiCE from Canon, for example, that improve reconstruction for high-quality imaging. You'll see algorithms from GE using AI for hemorrhage detection, so you can prioritize the worklist right out of the scanner. You'll see examples from Siemens where they're automatically measuring anatomy and producing an automatic report using AI, with GPUs behind it.
Richard Jacobs: So are there any particular images that you've seen, or that NVIDIA has put forth, as a before-and-after example, non-Clara versus Clara technology, showing improved resolution?
Abdul Hamid: Yeah, sometimes it's about the resolution, and you will see images like that. Some of the most beautiful images you'll see are in cinematic rendering, actually, where it almost feels as if, even though you're not inside the body and you're using X-rays to look into it, you're seeing it for real. Siemens is one of the leaders in that space. But I think Clara brings a little bit more than graphics. One of the verticals is graphics and higher resolution imaging, and you will see that. What we also bring that's really powerful is scientific computing. Instead of acquiring more and more data, think about taking a picture with your phone. In the past, you really needed to have this amazing lens, this massive camera, and this crazy technique in order to get a beautiful photograph of whatever you were trying to photograph, that tree or cherry blossom; I just came back from Japan. But if you look at the images we're getting with our cell phones today, they're getting really, really close, even though we don't have the hardware and equipment of those SLR cameras. Those still take beautiful pictures, but you're able to get very similar images using just your cell phone. That's possible because we moved the burden of creating really amazing images out of the hardware, the specific lenses and all that, and into a software problem: given the data I can get from this sensor, the camera, or the X-ray machine, can I actually produce more amazing images? And that happens to be what the GPU is really, really good at. So when I talked about getting the same CT-quality image using 80% less X-ray dosage, that's because we moved the problem from being a hardware problem to being a software problem, and in software we can innovate much faster. The instrument becomes software defined, just like your phone: every other day you're getting better images out of the phone even though you're not really upgrading the hardware. That's the capability that Clara brings. GPUs enable this software-defined development and innovation, and GPUs make it possible inside the instruments.
Richard Jacobs: Yeah, that's interesting. If you can reduce the radiation someone would experience from a CT, for instance, by up to 80%, that's tremendous. That would take away a lot of the fear of having to get several CTs in a short period of time if you have an unfortunate condition, and you still get better imagery. I mean, that's fantastic. A huge result.
Abdul Hamid: That's true, and what's really cool is that this is being done by our partners utilizing our developer platform. Another great example is MR. MR is this amazing modality where you can look inside the brain and inside the body and diagnose a lot of diseases early, and you don't have to worry about X-ray radiation at all. The problem with MR is that it takes a very long time and you have to hold your breath; it's challenging for young kids and challenging for elderly folks. So you can imagine AI coming through, and we've already seen examples of this, where we can shrink the exam time significantly, so it becomes a useful modality for more and more people, which is something I'm really excited about.
Richard Jacobs: How long is the exam time, typically? We have to lie there and, you know, hold our breath multiple times or not move.
Abdul Hamid: So it depends. It could range anywhere from 10 or 15 minutes all the way up to an hour, depending on the exam you're having. But if you're able to shrink that exam by, say, half, that would be phenomenal, and those are some of the results we've seen from some of the really great startups we're working with.
Richard Jacobs: Is this just for MRI, or is this also for CT or PET scans?
Abdul Hamid: Mostly MR for the 50% faster scans, anyway. I think the focus on the CT side has been on dosage reduction, and the focus on the MR side has been on reducing the time it takes to get scanned.
Richard Jacobs: Yeah, I mean, I've had some of these scans, and it is very difficult to lie there and not move for, you know, 30 or 40 minutes; it's really hard. So all of these improvements will make it a much better experience.
Abdul Hamid: Yeah, absolutely. I think the potential of getting better images is something we all want, and the ability to analyze these images more automatically is something that AI opens up; we're really excited about it.
Richard Jacobs: Yeah, also with the reduced time to do an MRI, you can cycle the machine more. A given clinic that's able to use it eight times a day could use it 12 times a day; they'll get a lot more use out of it and benefit more as well.
Abdul Hamid: Yeah, absolutely. We need higher efficiency, and that's especially true when you think globally. We tend to be in a great area where we have access to scanners, but if you look at developing countries where these scanners are not as available, they're typically running 24/7, as opposed to the 12 hours a day we have them running here in the United States. There is a massive opportunity on a global scale.
Richard Jacobs: Well, beyond what's been announced, are there any other benefits that you're shooting for? You mentioned the interpretation of the scans. Can you talk a little bit about how that's improving and changing or becoming more automated?
Abdul Hamid: Yeah, absolutely. AI has the potential of improving the imaging itself, as we talked about, but in assisting radiologists and technologists in getting higher quality images and getting more information out of those images, I think it has also shown a ton of promise. One of the things radiologists and technologists often have to do is quickly go through the list of studies they need to look at and decide which ones to start with. That's a task AI could help with by prioritizing the list. So if a patient comes in and we think they have a brain hemorrhage, they're bleeding in the head, you'd want the radiologist to be alerted to read that study first. These are some of the use cases we're seeing. Once you've optimized the worklist for the radiologist, AI also has the promise of calculating things on their behalf, extracting information out of the images really quickly. For example, if somebody is doing a cardiac study, perhaps you can have the computer look at that study and automatically calculate the ejection fraction, which is the percentage of blood pumped by the heart on each cycle. These are things that take time from the radiologist or the technologist, and they really don't need to be doing them; we can teach a computer to do them. So we're certainly excited about that use case as well.
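As a rough sketch of the ejection fraction calculation mentioned above: the standard formula is stroke volume (end-diastolic volume minus end-systolic volume) divided by end-diastolic volume. In practice the ventricular volumes would come from an upstream segmentation of the cardiac study; the function and example numbers below are purely illustrative.

```python
def ejection_fraction(end_diastolic_volume_ml: float, end_systolic_volume_ml: float) -> float:
    """Percentage of blood pumped out of the ventricle on each heartbeat."""
    stroke_volume = end_diastolic_volume_ml - end_systolic_volume_ml
    return 100.0 * stroke_volume / end_diastolic_volume_ml

# Example volumes such as a segmentation model might report for one study.
print(round(ejection_fraction(120.0, 50.0), 1))  # 58.3, within the typical normal range
```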
Richard Jacobs: I guess you could also do initial filtering, where the AI looks at all the images and only certain features show up to be passed on to a radiologist for final clarification. That may help as well, you know, with grading or sorting, and that may make the radiologist's work a lot faster too.
Abdul Hamid: It's a massive use case, I think. If you think about it, the radiologist on average has about two seconds per image in order to diagnose a disease, which is a really, really short time. They work really hard trying to help patients, and if we're able to have them look at the cases that are more difficult earlier in the day, or as soon as they come in, the quality of the work goes up for both the physician and the patient. An example is Ohio State University. Dr. Luciano Prevedello, Dr. Richard White, and the team over there produced an algorithm that can detect stroke and hemorrhage, the most critical diseases of the head, and using Clara they were able to deploy it into their workflow immediately. So now every study coming through Ohio State University gets processed, and the ones they believe have a high likelihood of a brain injury or a critical finding go to the top of the list for the Ohio State University radiologists to read. So the sickest patients get read a lot sooner, which is a big deal.
Richard Jacobs: Yeah it is.
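A minimal sketch of that kind of worklist triage, assuming a hemorrhage-detection model that outputs a probability per study. The Study fields, the threshold, and the example values are illustrative, not the Ohio State implementation.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str                 # study identifier
    arrival_order: int             # position in the original first-come worklist
    hemorrhage_probability: float  # output of a detection model, 0..1

def prioritize(worklist: list[Study], threshold: float = 0.5) -> list[Study]:
    """Flagged studies jump the queue; everything else keeps its arrival order."""
    flagged = sorted([s for s in worklist if s.hemorrhage_probability >= threshold],
                     key=lambda s: s.hemorrhage_probability, reverse=True)
    routine = sorted([s for s in worklist if s.hemorrhage_probability < threshold],
                     key=lambda s: s.arrival_order)
    return flagged + routine

worklist = [Study("A1", 1, 0.02), Study("A2", 2, 0.91), Study("A3", 3, 0.10)]
print([s.accession for s in prioritize(worklist)])  # ['A2', 'A1', 'A3']
```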
Abdul Hamid: Yeah, the challenging part, Richard, and something we're really focused on, is recognizing that there are going to be a lot of these algorithms: use cases which improve the workflow, use cases which improve quality, use cases which reduce cost or increase access, all of them. There are going to be a lot of them. In fact, if you do the simple math, there are something like 10 modalities, you know, X-ray, CT, MR, and so on; there are probably 10 organ systems, between cardiovascular, musculoskeletal, the brain; within each of those organ systems there are 10 organs; and within each of those organs there are 10 diseases you can diagnose. Multiply that out and you're looking at something on the order of 10,000 algorithms, thousands if not tens of thousands, that need to be created, that can be created, and that can benefit the patients, the hospitals, the doctors, and the system overall. How do you do that? How do you create thousands and thousands of algorithms? How do you enable physicians to get involved in that process? That's a challenge we're thinking about as well, recognizing that an algorithm developed in one hospital may not actually work at a different hospital. We had an example of this where an algorithm was created at MGH, and the algorithm was basically used to study how healthy your heart is. It was done on a cardiac CTA, measuring the thickness of the wall of the left ventricle, which is a useful measure of how healthy your heart is. The algorithm was able to measure that thickness with over 96% accuracy at MGH. Then, as part of a pilot, we decided to take the algorithm and move it to Ohio State University, and when we did, its accuracy dropped to something like 87%. When we analyzed it a little, we realized that the patient population at Ohio State University had a higher prevalence of high blood pressure than the population at MGH. As a result, the left ventricular wall was thicker at Ohio State University, and the algorithm had not seen that at MGH. So when it saw it for the first time at Ohio State University, it didn't know what to do with it.
Richard Jacobs: Why?
Abdul Hamid: Because it's new. The bottom line is that these algorithms, deep learning, learn from examples; they learn by seeing things. You show them examples and they learn from those, and when they see something new, they don't know how to extrapolate really well. The best example of this is self-driving. Think about a car that is trained to drive phenomenally well. If I showed it to you and asked, hey, would you be willing to get into this car, in general most people, about 60 or 70 percent, will say yes, I'd like to try the self-driving car. However, how would you feel about it if you knew that this car was trained to drive in the UK?
Richard Jacobs: Well, so when you deployed this in the new hospital, did you run it through a training set, or did you feel like, oh, you don't need to do that, let's just put it in production?
Abdul Hamid: No, so the typical path is that you usually try to put it straight into production, but what we learned is that when you bring it to a new hospital, you actually need to do a little bit of re-training; it's called transfer learning. So what we asked Dr. Richard White to do was: would you actually annotate a few of these high blood pressure examples? And they did. With a fraction of the number of cases that originally trained that network, maybe 10% of the cases re-annotated at Ohio State University, we were able to retrain the algorithm and bring it back to the 96% accuracy it had at MGH. What we learned from this is that there's a huge need for localization of the algorithm. If we follow the idea of the self-driving car that was trained in the UK, that car would be very dangerous in the US; even though it knows how to drive, it would drive on the wrong side of the street. However, with minor retraining, since it still knows how to drive, you can teach it, hey, you need to start driving on the right side of the street, which is exactly what we need to do with medical imaging. So what Clara AI provides, and we announced this a little while ago, is the ability to bring AI to the annotation process so you can create datasets locally, the ability to train locally so that once you have a large algorithm created somewhere else you can adapt it for your own patients and your own practice, and then the ability to deploy it quickly in the hospital. Those are the three functionalities we bring with Clara AI.
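A minimal sketch of the transfer learning step described here, using PyTorch with an ImageNet-pretrained ResNet standing in for the original MGH-trained model (which is not public). The frozen-backbone approach, the two-class head, the data loader, and the hyperparameters are all illustrative assumptions, not the actual Clara workflow.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a model trained at the original site.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the already-learned feature extractor ...
for param in model.parameters():
    param.requires_grad = False

# ... and retrain only the final classifier on the small locally annotated set
# (e.g. normal vs. thickened left-ventricular wall).
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(local_loader, epochs: int = 5) -> None:
    """local_loader yields (image_batch, label_batch) from the new hospital's re-annotated cases."""
    model.train()
    for _ in range(epochs):
        for images, labels in local_loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
```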
Richard Jacobs: Well, once you bring it from hospital A to hospital B and you retrain it a little bit and run it, have you tried going back to hospital A with the new, retrained version, and would that not work? Does the algorithm literally have to go into each setting, get retrained, and then stay there? I guess another way to ask this: do you get any benefit from having the algorithm be used in multiple settings and then integrating all of that learning into one master algorithm? Or is that not how it works?
Abdul Hamid: Oh, absolutely. We're also working on a technique called federated learning, where the idea is: let me use data from different centers, different populations, different scanners, and use all of it collectively to create new algorithms. That's proving really successful; we're working with King's College London right now, along with a bunch of other universities. So yes, there is a benefit, absolutely.
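A minimal sketch of one round of federated averaging, the common baseline behind the federated learning idea mentioned here: each site trains a copy of the shared model on its own data, and only the weights, never the patient data, travel back to be averaged. The model, loaders, and hyperparameters are placeholders, not the King's College London setup.

```python
import copy
import torch

def federated_round(global_model, site_loaders, local_epochs=1, lr=1e-3):
    """One round of federated averaging across participating hospitals."""
    loss_fn = torch.nn.CrossEntropyLoss()
    local_states = []
    for loader in site_loaders:                      # each hospital's private data loader
        local_model = copy.deepcopy(global_model)    # start from the shared weights
        optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
        local_model.train()
        for _ in range(local_epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss_fn(local_model(images), labels).backward()
                optimizer.step()
        local_states.append(local_model.state_dict())  # only weights leave the site

    # Average the weights element-wise and load them back into the shared model.
    averaged = {
        key: torch.stack([state[key].float() for state in local_states])
                  .mean(dim=0).to(local_states[0][key].dtype)
        for key in local_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```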
Richard Jacobs: Interesting. Okay. So what's the near-term future of the technology over the next year or two? What milestones are you looking to achieve?
Abdul Hamid: I think the most important thing, in my mind, is to recognize that everybody sees the promise of AI in medical imaging. We know a world with AI is so much better than one without it, because we're more efficient, we're reducing cost while increasing quality and reducing error rates, and we're making physicians and their expertise available to everybody around the world, especially in areas that don't have access to that knowledge. What's important to recognize is that radiologists need to be involved. At the end of the day, what AI is doing is taking somebody's knowledge and putting it into an application that others can use. One of our biggest goals is to enable radiologists to create and use as much of that AI as possible, which is why we created the Clara platform: enabling them to annotate much faster within their own workflow, enabling them to bring algorithms from other institutions and localize them within their hospital while keeping their data inside the hospital, and enabling them to deploy really quickly. If we keep going down that path, we can imagine a flourishing ecosystem of AI development. We announced a partnership with the American College of Radiology just last Monday; we've actually been working together for over a year to bring this technology, through their member network, to every hospital. So I really look forward to making this technology as robust as possible and making it available to all of our partners, from physicians to the vendor community to the startup community, to keep AI going.
Richard Jacobs: Okay, very good. So, Abdul, what's the best way for people to find out more about NVIDIA in general, and then the Clara technology platform?
Abdul Hamid: You can go to NVIDIA.com and look at the Clara platform and our other platforms.
Richard Jacobs: Very good. Okay. Well, Abdul, thanks for coming on the podcast and explaining the technology yourself. I think you made it very clear and understandable what it's doing. So thank you.
Abdul Hamid: All right. Well, thank you, Richard, for your time.