(UroToday.com) The 2023 American Urological Association Annual Meeting included a surgical technology and simulation session featuring work from Dr. Abhinav Khanna and colleagues, who presented results of their study investigating the use of artificial intelligence (AI) to annotate surgical videos, specifically in robotic-assisted radical prostatectomy (RARP). Reviewing video is a useful tool for improvement in many fields, from refining technique in sports to polishing public speaking. The same holds true in surgery: analyzing surgical videos allows surgeons to gain valuable insight into their practice and to continually refine their skills for the benefit of their patients. The opportunities range from identifying key pitfalls to revisiting intraoperative decision-making. Despite these advantages, video review is time-intensive and laborious, which limits its routine, widespread use. Dr. Khanna and colleagues therefore sought to develop an algorithm for the automated identification of key surgical steps during RARP.
Under the supervision of two fellowship-trained urologic oncologists, a team of medical image annotators manually annotated retrospective surgical videos from RARP performed at a tertiary-care academic referral center. These full-length surgical videos were annotated with the following steps of surgery: preparation; adhesiolysis; lymph node dissection; Retzius space dissection; anterior bladder neck transection; posterior bladder neck transection; seminal vesicle and posterior dissection; lateral (including neurovascular bundle) and apical dissection; urethral transection; urethrovesical anastomosis; and specimen retrieval and final inspection.
Of a total of 107 full-length RARP videos, 70 cases were used to train the computer vision algorithm to perform automated video annotation, and an additional 14 videos were used for internal validation. The remaining 23 videos were used for the testing stage, in which the accuracy of automated annotation was determined by comparing it against manual human annotation. Overall, the algorithm achieved 87.6% accuracy relative to manual human video annotation. Interestingly, while accuracy was lowest for the final inspection and extraction step (63.0%), it was highest for the urethrovesical anastomosis step (98.6%).
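The accuracy figures above come from comparing the model's predicted step labels against the human annotations over each video. As a rough illustration only (not the authors' actual evaluation code), per-step and overall agreement could be computed at the frame level along these lines; the function and the toy labels below are hypothetical:

```python
from collections import defaultdict

def step_accuracy(human_labels, ai_labels):
    """Frame-level agreement between human and AI step annotations.

    Both inputs are equal-length lists of step names, one entry per
    video frame (a simplified stand-in for real annotation data).
    Returns (overall accuracy, per-step accuracy dict).
    """
    assert len(human_labels) == len(ai_labels)
    correct = defaultdict(int)
    total = defaultdict(int)
    for human, ai in zip(human_labels, ai_labels):
        total[human] += 1
        if human == ai:
            correct[human] += 1
    overall = sum(correct.values()) / len(human_labels)
    per_step = {step: correct[step] / total[step] for step in total}
    return overall, per_step

# Hypothetical toy example: 6 frames spanning two steps
human = ["adhesiolysis"] * 3 + ["urethral transection"] * 3
ai = ["adhesiolysis"] * 3 + ["urethral transection"] * 2 + ["final inspection"]
overall, per_step = step_accuracy(human, ai)
```

In this toy case the algorithm agrees on 5 of 6 frames, so overall accuracy is about 83%, with perfect accuracy on adhesiolysis and a lower score on urethral transection, mirroring how per-step accuracies can diverge from the aggregate figure.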
To conclude the presentation, Dr. Khanna emphasized that automated surgical video analysis has practical applications in retrospective video review by surgeons and in surgical training. It can also play a key role in quality assessment and in the development of future algorithms that associate perioperative and long-term outcomes with intraoperative surgical events.
Interested in further surgical applications of this technology, one of the moderators asked about additional uses for automated video analysis. Dr. Khanna responded by challenging the audience to think of surgical video as structured data. With time stamps and an understanding of how long each step takes, applications could include tracking whether a surgeon's efficiency on particularly difficult steps improves over time. Comparing these key points across surgeons could also provide valuable insight, highlighting differences in technique by years of experience. Intrigued by the finding that the algorithm was only 63% accurate for the final inspection and extraction step, I posed my own question to Dr. Khanna, asking why he thought this was the case and what could be done to improve it in future iterations of the computer vision algorithm. He explained that this step is the most “subjective” in terms of when it starts and how it is defined; even a group of surgeons reviewing the same video may disagree. Additionally, Dr. Khanna noted that, as the least substantive step of the operation, a slight loss of model fidelity here is relatively inconsequential to the algorithm's overall purpose. In future studies, it may be interesting to train the model to annotate other types of surgery and to compare the steps in which the model's accuracy drops.
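Dr. Khanna's idea of treating surgical video as structured data can be sketched simply: once each step carries start and end timestamps, per-step durations fall out directly and can be compared across a surgeon's cases over time. The segment data and helper below are purely hypothetical:

```python
def step_durations(segments):
    """Compute the duration (in seconds) of each annotated surgical step.

    `segments` is a list of (step_name, start_sec, end_sec) tuples,
    as might be produced by an automated video annotation model.
    """
    return {name: end - start for name, start, end in segments}

# Hypothetical annotated case: timestamps in seconds from video start
case = [
    ("preparation", 0, 300),
    ("adhesiolysis", 300, 900),
    ("urethrovesical anastomosis", 900, 2100),
]
durations = step_durations(case)
# Tracking these durations across a surgeon's cases over time could
# reveal whether particularly difficult steps are becoming faster.
```

Aggregating such duration records across surgeons or across a single surgeon's case history is what would enable the efficiency-tracking and technique-comparison applications described above.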
Presented by: Abhinav Khanna, MD, Department of Urology, Mayo Clinic, Rochester, Minnesota

Written by: Kelvin Vo, Department of Urology, University of California Irvine, @kelvinvouci on Twitter, during the 2023 American Urological Association (AUA) Annual Meeting, Chicago, IL, April 27 – May 1, 2023
References:
- Khanna A, Antolin A, Zohar M, Bar O, Ben-Ayoun D, Boorjian SA, Frank I, Shah P, Sharma V, Thompson RH, Asselmann D, Wolf T, Tollefson M. Artificial Intelligence-Enabled Automated Identification of Key Steps in Robotic-Assisted Radical Prostatectomy [abstract]. In: American Urological Association Annual Meeting; April 28-May 1, 2023; Chicago, Illinois.