AUA 2018: Crowdsourcing Robotics

San Francisco, CA (UroToday.com) Thomas Lendvay, MD, from the University of Washington introduced three one-minute videos of surgeons performing a step of a robotic prostatectomy and had AUA members (polled one week prior), the panelists, and the attendees of the plenary session (the crowd) rate them on bimanual dexterity, depth perception, efficiency, sensitivity, and robotic control.

After viewing the first video, the attendees scored it as good on all five skills, which was similar to the AUA members' scores. These scores also corresponded with Michael Stifelman, MD's scores. Stifelman took to the stage and began explaining why crowdsourcing could be beneficial. First, there are no standardized US credentialing guidelines. Typically, over 1,700 hospitals use a simple paper form that looks at the surgeon's case volume/time, readmission rate, mortality rate, and proctor sign-off, and an arbitrary cutoff in these parameters determines the surgeon's skill level. This method is subjective and does not specifically assess the surgeon's skill. Crowdsourcing, on the other hand, provides a score that appears to reliably differentiate a novice from an expert surgeon. Moreover, Stifelman explained that crowdsourcing can be further used for lifelong learning, providing focused areas for improvement, ranking surgeons in quartiles and focusing improvement efforts on the bottom quartile, and finally as a resident teaching tool.
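To make the idea of crowd-based scoring concrete, the sketch below shows one hypothetical way per-domain ratings from many reviewers could be averaged and compared against an expert's scores; the domain list mirrors the session, but the numbers and the aggregation scheme are illustrative assumptions, not the actual crowdsourcing pipeline used in the study.

```python
# Hypothetical sketch: averaging crowd ratings on five skill domains
# and comparing them with a single expert's scores. All numbers are
# made up for illustration.
from statistics import mean

DOMAINS = ["bimanual dexterity", "depth perception", "efficiency",
           "sensitivity", "robotic control"]

# Each crowd reviewer rates every domain on a 1-5 scale (invented data).
crowd_ratings = [
    {"bimanual dexterity": 4, "depth perception": 4, "efficiency": 3,
     "sensitivity": 4, "robotic control": 4},
    {"bimanual dexterity": 5, "depth perception": 4, "efficiency": 4,
     "sensitivity": 3, "robotic control": 4},
    {"bimanual dexterity": 4, "depth perception": 5, "efficiency": 4,
     "sensitivity": 4, "robotic control": 5},
]

expert_ratings = {"bimanual dexterity": 4, "depth perception": 4,
                  "efficiency": 4, "sensitivity": 4, "robotic control": 4}

# Crowd score per domain is simply the mean across reviewers.
crowd_means = {d: mean(r[d] for r in crowd_ratings) for d in DOMAINS}

for d in DOMAINS:
    diff = crowd_means[d] - expert_ratings[d]
    print(f"{d:20s} crowd={crowd_means[d]:.2f} expert={expert_ratings[d]} diff={diff:+.2f}")

# An overall score (mean across domains) could then feed a quartile ranking.
print(f"overall crowd score: {mean(crowd_means.values()):.2f} / 5")
```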

Lendvay then showed another video and polled the audience on the same five skills. Overall, the surgeon did a good job but had areas that needed improvement, and these areas were similarly identified by both the expert and the AUA members. Chandru Sundaram, MD, took to the stage and questioned crowdsourcing: although it is good, could we do better? He brought up the idea of neural networks, which see beyond the naked eye and use machine learning to assess competence during surgery. Sundaram cited a study comparing automated performance metrics against manually observed metrics (scored by two experts) to assess surgical skill. Interestingly, the automated and manual observers were in agreement and both able to distinguish expert from novice surgeons. Another study assessing cognitive workload during surgery identified EEG as the single best modality for assessing workload, and EEG correlated with performance on the simulator. Sundaram concluded that machine learning may be the future objective measure of surgeon performance, and that real-time physiological measurement of surgeon workload could correlate with performance.
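As a rough illustration of how automated performance metrics might be used to separate experts from novices, here is a minimal sketch using a simple logistic regression on invented metrics (task time, instrument path length, camera moves); the features, data, and model are assumptions for illustration and do not reproduce the cited study.

```python
# Hypothetical sketch: classifying expert vs. novice from automated
# performance metrics. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: task time (s), instrument path length (cm), camera moves.
X = np.array([
    [310, 120, 8],   # expert-like cases
    [295, 110, 6],
    [330, 135, 9],
    [520, 240, 20],  # novice-like cases
    [560, 260, 24],
    [495, 225, 18],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = expert, 0 = novice

clf = LogisticRegression().fit(X, y)

# Score an unseen case; in practice such metrics would come from the
# robot's system logs rather than manual entry.
new_case = np.array([[340, 140, 10]])
print("predicted label:", clf.predict(new_case)[0])
print("P(expert):", round(clf.predict_proba(new_case)[0, 1], 2))
```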

Finally, the third video was presented, and the audience again scored the skills similarly to the AUA members: the surgeon was found to be adequate but had a few areas for improvement. Interestingly, James Peabody, MD, scored this video slightly higher than the audience or the AUA members. He began by stating that a one-minute video will not tell you everything about a surgeon; however, it does provide quick, valuable information. The video provides a structured skills assessment, which in turn can identify areas for improvement and allow an intervention, such as a skills workshop or coaching, to improve those skills.

Lendvay concluded the session by stating that technique matters, our peers can provide valuable feedback, skills can be elucidated with minimal review time, and scalable assessments are possible.


Presented by: Thomas Lendvay, MD, University of Washington; Chandru Sundaram, MD, Indiana University School of Medicine; James Peabody, MD, Henry Ford Hospital; and Michael Stifelman, MD, Hackensack University Medical Center

Written by: Egor Parkhomenko, Department of Urology, University of California-Irvine, medical writer for UroToday.com at the 2018 AUA Annual Meeting, May 18-21, 2018, San Francisco, CA, USA