The Power of Artificial Intelligence in Urology - Jodi Maranchie

November 16, 2023

Jodi Maranchie delivers a presentation on the impact of artificial intelligence (AI) in urologic oncology. She begins by explaining the basics of machine learning and its progression to deep learning, emphasizing how AI excels in pattern recognition, a key aspect of medical diagnosis. Dr. Maranchie showcases AI's revolutionary role in pathology, particularly in tissue segmentation and tumor assessment, where it achieves high accuracy in identifying relevant areas and automating pathology reports. 

Biographies:

Jodi Maranchie, MD, FACS, University of Pittsburgh/UPMC, Pittsburgh, PA



Gordon Brown: We're going to kick this session off with Dr. Jodi Maranchie. She is currently an associate professor of urology at the University of Pittsburgh School of Medicine. She comes to the University of Pittsburgh after serving as the director of urologic oncology at the University of Massachusetts, and she completed her training as an AFUD scholar at the National Cancer Institute. She's going to kick us off with a discussion of artificial intelligence in urology. Welcome.

Jodi Maranchie:
Thank you. Yeah. And thanks so much for inviting me to give this exciting and fun talk about where we are with artificial intelligence and how it's changing the face of urologic oncology. I'm going to give a brief overview of machine learning and then talk about how AI is revolutionizing the way we diagnose, prognosticate, and treat GU cancers. And India's going to follow up with how AI is helping to lessen the burden on healthcare workers and improve our quality of life on a daily basis.

So what is artificial intelligence? There we go. Perhaps like me, you grew up with this, the tricorder, this handy little device. You didn't even have to touch the patient; you just waved it from head to toe, it told you what was wrong and what to do, and you didn't question it. The computer just knew. Imagine how much time that would've saved on training. As physicians, a lot of what we do is pattern recognition: the arrangement of cells on a slide, the appearance of soft tissue on imaging, and discrete sets of signs and symptoms that quickly focus us down onto the problem. And this is something that computers do very well. So can we train a computer to do much of this sorting for us? That's the essence of machine learning. If your job was to sort photos of ducks and trucks and you did that all day long, you might want to streamline the process by teaching a computer to do it for you.

And you would start out by figuring out what features you typically use to distinguish them. You might say, okay, anything with a flat bill, webbed feet, and a pointy tail is a duck, and anything with a windshield, tires, and headlights is a truck. Then you start feeding the pictures in, and it'll do a really good job of sorting until you get something like this, and it doesn't recognize any of your features. So you go back to the drawing board and say, okay, I've got to add more features, and then it'll be better. And each time you go back and add features, you're training it to be a more effective algorithm.
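
To make that concrete, here is a toy sketch in Python of what this kind of hand-engineered sorting rule looks like; the feature names are entirely hypothetical, and the point is that every feature was chosen by a human:

```python
# Toy sketch of hand-engineered classification: every feature below was
# chosen by a human, so the program can never outgrow its author.
DUCK_FEATURES = {"flat bill", "webbed feet", "pointy tail"}
TRUCK_FEATURES = {"windshield", "tires", "headlights"}

def classify(photo_features: set) -> str:
    """Sort a photo by the features a human decided to look for."""
    if photo_features & DUCK_FEATURES:
        return "duck"
    if photo_features & TRUCK_FEATURES:
        return "truck"
    return "unknown"  # back to the drawing board: add more features

print(classify({"flat bill", "webbed feet"}))  # -> duck
print(classify({"rubber duck", "bathtub"}))    # -> unknown
```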

But you can see that's labor-intensive. It takes a lot of human hours of input, and it's going to do a good job, but if you think about it, it's never going to do better than the human who trained it in the first place, because it's just using the same features. That's where deep learning comes in. Rather than telling the computer what features to look for, with deep learning you just give it a whole bunch of labeled pictures, run it through, and say, sort these into groups, and you let the computer decide which features matter. It saves a lot of hours on input and teaching, and in the end, it has the potential not only to sort things for you but to find clinically relevant things in the data that we can't even see.
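
By way of contrast, here is a minimal sketch of the deep-learning version of the same task, assuming PyTorch, with random tensors standing in for the labeled photos. Notice that no feature list appears anywhere in the code:

```python
import torch
import torch.nn as nn

# A small convolutional network: the layers learn their own features.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),    # two classes: duck vs. truck
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a batch of labeled 64x64 RGB photos.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)  # only images and labels go in...
loss.backward()                        # ...the features come out on their own
optimizer.step()
```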

A major disadvantage of deep learning is that it really is a bit of a black box, so you have to take a leap of faith. You're letting the computer sort things. If you ask a computer to group things into groups, it will group things into groups, and you know that not every association that's real is relevant. Unfortunately, with AI as it currently stands, it can't really tell us what features it has decided to use. So we throw it into the black box and see what comes out. The more data you put in, the finer the algorithm, the better it's going to learn and the better it's going to do. And with the explosion of digital images in today's world, we are primed for this.

So it's not surprising that pathologists were the pioneers in AI. They're dealing with images all the time, and they can now scan whole slide images into digital format and feed them through a computer that has been trained to segment parts of the slide. Usually, the pathologist sits all day looking at glass slides under the microscope, trying to find the relevant areas. So the AI can first mark each of the relevant areas: where the mucosa is, where the muscularis propria is, where the nuclei are. It can even segment the nucleus from the cytoplasm, calculate the nuclear-to-cytoplasmic ratio, and identify all of these things. Hundreds of man-hours went into teaching the computer how to segment these areas, but in general, it can now identify these things with about 98% accuracy. It can even characterize the roundness of a nucleus and automate much of the work for the pathologists.
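
Once you have those segmentation masks, features like the nuclear-to-cytoplasmic ratio and nuclear roundness are simple pixel arithmetic. A rough sketch, assuming binary masks from some upstream segmentation model (numpy and scikit-image):

```python
import numpy as np
from skimage.measure import label, regionprops

def nc_ratio(nucleus_mask: np.ndarray, cytoplasm_mask: np.ndarray) -> float:
    """Nuclear-to-cytoplasmic area ratio from binary segmentation masks."""
    return float(nucleus_mask.sum()) / float(cytoplasm_mask.sum())

def nuclear_roundness(nucleus_mask: np.ndarray) -> float:
    """4*pi*area / perimeter^2, which equals 1.0 for a perfect circle."""
    region = regionprops(label(nucleus_mask))[0]
    return 4 * np.pi * region.area / region.perimeter ** 2
```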

Once we have accurate tissue segmentation, the next clear step is to train the artificial intelligence to assess the depth of the tumor and give us a stage for a bladder tumor or to automatically grade tumors using a model that's based on nuclear size, cytoplasmic color, nuclear shape, and other patterns in the connective tissue, all with remarkable accuracy. And your artificial intelligence can even put together a synoptic pathology report using all of those above features. As the volume of annotated digital images continues to grow, we're starting to see a shift towards deep learning. Essentially asking the question, is there more diagnostic or prognostic information in these images than we are already aware of?
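
Structurally, such a grading model is a classifier over per-tumor morphometric features, trained against pathologist-assigned grades. A hedged sketch with synthetic stand-in data (all feature names hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-tumor features: nuclear size, cytoplasmic color,
# nuclear shape, and a connective-tissue pattern score (synthetic here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = rng.integers(1, 4, size=500)  # pathologist-assigned grade, 1 to 3

grader = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(grader.predict(X[:5]))  # automated grade suggestions for the report
```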

In this study by Lucas and colleagues from the University of Amsterdam, the artificial intelligence was trained on images of superficial bladder cancers from patients who had either recurred within five years or had not. The AI started by segmenting all of the features we just talked about, then studied the images and ultimately came up with a black-box sorting algorithm of about 200 features that could distinguish between the two sets. When that was applied to a second set of patients, the AI was able to predict five-year recurrence with an area under the curve of 0.72, markedly higher than prediction from the clinical features we already had (age, stage, grade). Whatever features it had latched onto during its sorting were clearly prognostic beyond the traditional pathology report and other clinically available features.
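
The validation pattern itself is simple to sketch: fit on one cohort, then report the area under the ROC curve on a second, unseen cohort. Here synthetic data stands in for the ~200 black-box features (scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for ~200 learned features per patient.
X_train, y_train = rng.normal(size=(300, 200)), rng.integers(0, 2, 300)
X_test, y_test = rng.normal(size=(100, 200)), rng.integers(0, 2, 100)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # first cohort
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])   # unseen cohort
print(f"5-year recurrence AUC: {auc:.2f}")  # the study reports 0.72
```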

And here's another very cool example, I think. This is from Saltz and colleagues, where the AI was able to examine images and identify the tumor-infiltrating lymphocytes: not just that they were there and what cell types they were, but where they were located spatially. Were they distributed diffusely throughout, scattered only rarely, or surrounding areas of interest? Based on the spatial pattern of these tumor-infiltrating lymphocytes, they were able to predict which of these bladder tumors were going to progress. So again, an example of finding information in images we already have that we didn't even think or dream about, and that probably would have been inaccessible to humans without thousands and thousands of man-hours.
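
One way to see how a spatial pattern becomes a number: given (x, y) coordinates of detected lymphocytes (from a hypothetical upstream detector), a nearest-neighbor statistic separates diffuse from clustered infiltrates. A minimal sketch with scipy:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(til_xy: np.ndarray) -> float:
    """Mean distance from each TIL to its nearest neighbor:
    small values suggest clustering, large values a sparse scatter."""
    tree = cKDTree(til_xy)
    dists, _ = tree.query(til_xy, k=2)  # k=2: the first hit is the point itself
    return float(dists[:, 1].mean())

rng = np.random.default_rng(0)
clustered = rng.normal(0, 1, size=(50, 2))    # tight cluster of TILs
diffuse = rng.uniform(0, 100, size=(50, 2))   # spread across the slide
print(mean_nn_distance(clustered), mean_nn_distance(diffuse))
```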

Switching to radiology. This is another area where we have reams of digital images and an explosion of data, and when combined with deep learning, we've now got the field of radiomics. With the naked eye, we can't reliably differentiate a clear cell kidney tumor from a chromophobe or an oncocytoma, which is why there's such a role for biopsy and surveillance. Can the artificial intelligence glean more information from those same pictures we're looking at that will help classify tumors or predict outcomes? It turns out that the computer can rapidly extract hundreds of radiographic nuances, including tumor shape, the intensity of the images, the heterogeneity of the intensity within the image, and overall image texture, down to the pixel level.
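
To give a flavor of what extracting those radiographic nuances means in practice, here is a sketch of a few first-order radiomic features from a segmented lesion; real pipelines (e.g., PyRadiomics) compute hundreds of such features, including shape and texture matrices:

```python
import numpy as np

def first_order_features(ct: np.ndarray, lesion_mask: np.ndarray) -> dict:
    """A few first-order radiomic features from the voxels inside a lesion."""
    voxels = ct[lesion_mask > 0]               # intensity values in the lesion
    hist, _ = np.histogram(voxels, bins=64)
    p = hist[hist > 0] / hist.sum()
    return {
        "mean_intensity": float(voxels.mean()),
        "heterogeneity": float(voxels.std()),       # spread of intensities
        "entropy": float(-(p * np.log2(p)).sum()),  # texture-like disorder
    }

# Synthetic stand-in for a CT volume and its lesion segmentation.
ct = np.random.default_rng(0).normal(40, 15, size=(64, 64, 64))
mask = np.zeros_like(ct)
mask[20:40, 20:40, 20:40] = 1
print(first_order_features(ct, mask))
```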

Computed tomography texture analysis allows for quantification of lesion heterogeneity in the area of interest. If you combine this with some fancy statistical modeling, the AI can cluster tumors with surprising accuracy and can actually distinguish papillary from clear cell from oncocytoma. I just want to say that this particular study was done with a small set of images, and given the way machine learning works, the more images that are fed into it, the more refined the algorithm will be; it will probably prove to be even more effective and change the way we practice renal tumor management.
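
The clustering step described here can be sketched in a few lines: unsupervised grouping of tumors by their texture-feature vectors, with synthetic features standing in for real CT texture data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic texture-feature vectors for 90 renal masses (30 per putative type).
features = np.vstack([rng.normal(loc, 0.5, size=(30, 8)) for loc in (0, 2, 4)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
# The hope: clusters line up with clear cell vs. papillary vs. oncocytoma.
print(np.bincount(labels))
```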

Radiomics has also been used to predict Fuhrman grade, finding the more aggressive tumors. And I love this one: they've shown that with radiomics, you can look at a set of clear cell renal cancers and distinguish the ones with VHL mutations from those with PBRM1 mutations and BAP1 mutations, which is frankly incredible, because they all look exactly the same, not only on the radiographic images but also histologically. They have also been able to predict response to TKI and IO therapy, to help designate which way to go with these tumors, and there have been early forays into predicting metastasis and survival after treatment.

And one more example now into prostate cancer. This is work that was presented by Andrew Armstrong just this year at ASCO. He and his colleagues from Duke asked if artificial intelligence could be used to identify which high-risk or intermediate-risk prostate cancer patients who were going to be treated with primary radiation therapy would benefit from the full 28 months of ADT versus the shorter course of just four months. This was a clinical study that they had done, and so they had the answer from their original clinical trial.

They started the AI project with six completed RTOG trials, training the algorithm on the core needle biopsies plus the clinical information from those six clinical trials, and used unsupervised deep learning to develop a biomarker for which patients benefit from the additional hormone therapy. Once the biomarker was in place, they applied it to RTOG 9202. I'll tell you, there were clinical features added to this algorithm, but in the end, they said that 40% of the weight of the biomarker came from the pre-treatment biopsy core images. And there's the black box that I wanted to show you. We don't know what it was looking at in those core biopsies, but in the end, it distinguished them: it said either this is a patient who is going to benefit, or this is a patient who's not.

These are the actual clinical outcomes from the 20-year clinical follow-up of RTOG 9202. The original trial had demonstrated a hazard ratio of 0.64 favoring long-term ADT in the entire cohort of intermediate- and high-risk prostate cancer patients. They found that about 26% of the population had a cancer-specific death with short-term ADT, versus 17% with the longer-term ADT, but they couldn't really distinguish for which patients it mattered. When they went back and applied the AI biomarker for benefit, they found that the patients who were biomarker negative, as predicted, had essentially no benefit from the additional two years of ADT and could have been spared the additional cost and toxicity of being on hormone treatment.
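
The subgroup analysis described here follows a standard pattern: fit a Cox proportional-hazards model within each biomarker stratum and read off the hazard ratio for long- versus short-term ADT. A sketch with the lifelines package and synthetic data (all column names hypothetical):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "time": rng.exponential(10.0, n),          # follow-up time (years)
    "event": rng.integers(0, 2, n),            # cancer-specific death
    "long_term_adt": rng.integers(0, 2, n),    # 28 vs. 4 months of ADT
    "biomarker_positive": rng.integers(0, 2, n),
})

# One Cox model per biomarker stratum; the trial reports HR ~0.55 in positives.
for stratum, grp in df.groupby("biomarker_positive"):
    cph = CoxPHFitter().fit(grp[["time", "event", "long_term_adt"]],
                            duration_col="time", event_col="event")
    hr = cph.hazard_ratios_["long_term_adt"]
    print(f"biomarker_positive={stratum}: HR = {hr:.2f}")
```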

In contrast, for the patients who were biomarker positive, the curves spread even further apart. You've now got a hazard ratio of 0.55, and there's a clear benefit in this group to receiving the additional two years of ADT. There was something in those core biopsies that the computer was able to see that we can't glean with the naked eye, and we aren't even really clear on what it's seeing, but it's finding something. And if this biomarker had been applied, we would have spared the additional ADT in one-third of the high- and very high-risk patients, and we would have identified more than 40% of the intermediate group who actually did benefit from the additional ADT.

So in summary, we are just at the very start of the AI revolution in medicine and oncology, which will not only streamline our workflow and efficiency but probably extract additional information from existing data sets to help develop prognostic patterns we haven't even dreamed of. Ultimately, this will benefit our patients by reducing toxicity and the burden of healthcare. Thank you.