The Future of Prostate Cancer Diagnosis: Combining Non-Invasive Tests for Personalized Care - Eric Kim

May 10, 2024

Preston Sprenkle discusses with Eric Kim the integration of MRI and genomic classifiers in prostate cancer management. They explore the potential of artificial intelligence (AI) to enhance diagnostic accuracy and personalization of treatment. Dr. Kim reflects on his project, which investigates correlations between MRI findings and genomic data from a large cohort, noting the challenges of consistency across different racial groups and the non-alignment of MRI with certain genomic indicators. The discussion highlights the evolving role of AI in reducing variability and improving outcomes by potentially offering non-invasive diagnostic options that could minimize the need for invasive procedures like biopsies. The conversation points towards a future where AI could provide more precise, equitable cancer care.


Eric Kim, MD, Urologist, The University of Nevada, Reno School of Medicine, Reno, NV

Preston Sprenkle, MD, Associate Professor of Urology, Director of the Urologic Oncology Fellowship and Research, Yale School of Medicine, New Haven, CT

Read the Full Video Transcript

Preston Sprenkle: Good morning. I'm Preston Sprenkle. I'm a urologic oncologist at Yale University. We're here at the AUA. I'm speaking with Dr. Eric Kim, a urologic oncologist just recently from Wash U and soon to be at the University of Nevada at Reno.

Eric Kim: Thanks for having us and happy to discuss our project.

Preston Sprenkle: What do you think the future is? I mean, I know talking with Mark Emberton and some of their group, he's very much, "If you can see it, it's okay to treat it, but if you can't see it, there's not the need to treat it." I mean, we're seeing some of that borne out in your abstract.

Eric Kim: Yeah, I think he's a real believer. I still fear the false positive rate for MRI, especially with maybe less experienced radiologists. Maybe they're a little tentative, afraid to call a negative because they don't want it to be on them if something's missed. Hopefully one day, maybe with 7 Tesla MRIs coming to market, use of AI, that's something we've been studying at Wash U and hope to continue at the University of Nevada, but using AI on diffusion-based sequences to do a better job predicting what's the biologic signal. So yeah, hopefully one day, the ideal thing would be to have a non-invasive tool tell you as much as you can, and then you could really restrict the biopsies to the people that you think would benefit from knowing that they have prostate cancer.

Preston Sprenkle: Yeah. So how are we going to get there? It seems like we keep adding more tests to keep substratifying and improving our understanding of risk. We all, I think, want one test. I'm putting you on the spot here. What would you envision that being? Do you think we're going to be able to get to one of these exams being adequate on its own?

Eric Kim: I hope so. If you look at, I think it's ArteraAI for digital path, it's incredible that these AI platforms, these AI models are picking out nuances that the human pathologist isn't. Not to say that the pathologists aren't good, but they're human. Just like we're human. We're not perfect. So the fact that the models are able to pick out something that's predictive of metastasis, maybe even stronger than Gleason score, is remarkable, because Gleason score has held up incredibly well. So credit to Gleason for getting that right. But yeah, I think in an ideal world, you'd have almost the same type of information that Decipher gives you, maybe even more than that, coming from a non-invasive test, a combination of blood, urine, and imaging that immediately tells you, "Hey, this patient is the one that's going to die of prostate cancer if you don't intervene, and then everyone else you can leave alone."

Preston Sprenkle: Or the potential of AI that is going to somehow whip all this stuff and see patterns that we don't and make it work for us. But I agree, I think that's the hope. That's where we all would like this to go.

Eric Kim: That's the hope. Our R01-funded lab, what we've been doing is, again, a novel diffusion-based sequence using an AI model to map those parameters to Gleason score. We have preliminary data right now for the first, I think, 200 to 250 patients, and our AUC for the model is around 0.9.

So I think the future is closer than we realize. And that's not perfect. Once you know how the cookie's made or the hamburger's made or whatever, you start to think, "Man, there's a lot of nuance and a lot of slippery slopes here." But no, I think we're on the same page. I think that's the future of prostate cancer care at least. But maybe everything. We're already kind of there with kidney tumors. We don't necessarily biopsy. I don't know what the standard is at Yale, but at least at Wash U, the standard has really not been to get a biopsy except for specific situations.

Preston Sprenkle: Yeah, well, there's also new imaging technology that's going to potentially have a big impact on that as well. I think you're highlighting a lot of the important advances that are going to be changing how we fundamentally approach the management of these cancers, and I think it's very exciting. Everyone's using the term AI when it's not AI. I'm sure you, as someone involved in it, it sort of rankles you a little bit too. But I think the opportunity with these machine learning models to be able to identify patterns that we are not able to see is alluring.

Eric Kim: Oh, for sure. Yeah. No, and it turns a subjective read, whether that's the radiologist's interpretation, which again, I'm not poo-pooing the radiologists. It's just they're human like we're human. So it turns a subjective read into a quasi-objective readout. Same thing with the digital path. Maybe that's 3 to 5 years off. I think in the meantime, my takeaway from the data we've pulled is we like to get MRIs on most of our patients. I think it's helpful information. I think that our field agrees. And I think Decipher is very helpful information as well. Again, this is something you guys put out a while ago, the fact that they don't correlate directly tells you that there's independent information you're getting from both. The nice thing, again, not to pump Decipher, but unlike some of the other genomic tests, it is an independent variable. So you as a clinician can integrate the information as you see fit without accidentally double-counting.

Preston Sprenkle: Definitely. I'm going all in on AI here a little bit because I just find it fascinating. One of the things that you mentioned is it's somewhat objective. So I think that's one of the main things we hypothesize for AI, that it's going to be more reproducible. I've always thought of it as, "Okay, we're going to have something that is at least tightening the confidence intervals, even if it's not as good as the expert pathology that you have, that we have." But not everyone has that. So if we're able to be more confident that we're labeling things the same, at least not have that 30% cross-read discordance that we're getting currently, that's going to be better. But do we actually know with these machine learning models that they're actually going to be giving you the same consistent outcome?

Eric Kim: Yeah, that's a fair point, right? I'm right there with you. I think the ideal is that you reduce variability. And again, these AI models are trash in, trash out. So you have to make sure, if you're doing a supervised model, you have to be very, very careful about the information you're providing the model. And again, some of these unsupervised models may be a different story. We haven't gone that route in terms of what we're working on.

But for these highly supervised models, I think our personal experience, what's happening is we have a radiologist, Joseph Ippolito, who's part of our study team, and he's an incredible MRI reader. Just like some surgeons are really good at cystectomy and some are really good at cystoscopy, some radiologists, I think, are just really good at reading MRIs, better than their peers. And so I think what's happening for us is the model is representing him. And like you said, Joe, great guy, workaholic. He can't read every freaking MRI at Wash U, much less America. But if you train the model to be as good as him, well, then you've really raised the floor. It's almost like robotics democratizing prostatectomy. You raise the bar such that very accurate, quasi-objective MRI interpretation is available to everybody, not just at high-volume academic centers.