
TAIPEI, TAIWAN (UroToday.com) - Introduction and Objectives: Crowdsourcing is the practice of obtaining services from a large group of people, typically from an online community such as the Amazon.com Mechanical Turk project. We hypothesized that the 'crowd' could score videotaped dry-lab laparoscopic skill tasks from the AUA BLUS curriculum validation project comparably to expert surgeons.

Methods: 24 candidate videos of laparoscopic skill tasks performed by surgeons of varying levels of laparoscopic case experience (12 suturing and 12 pegboard transfer performances) were evaluated by 5 faculty experts and at least 60 Amazon.com Mechanical Turk crowd-workers. Each rater provided responses to the same multi-domain rating scale from the Global Objective Assessment of Laparoscopic Skills (GOALS) tool. We compared mean global performance scores provided by experts and crowd-workers using Cronbach's alpha and estimated performance-specific passing probabilities with cut-offs established by receiver operating characteristic (ROC) curves.
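Agreement between rater groups in this kind of design is often summarized with Cronbach's alpha, treating each rater as an "item" scored across the same set of videos. A minimal sketch of that calculation, using hypothetical GOALS-style global scores rather than the study's data:

```python
# Sketch of Cronbach's alpha across raters (illustrative data only;
# the study's actual expert and crowd scores are not reproduced here).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(ratings):
    """ratings: one inner list per rater, aligned over the same videos."""
    k = len(ratings)                                 # number of raters
    item_vars = sum(variance(r) for r in ratings)    # per-rater variances
    totals = [sum(col) for col in zip(*ratings)]     # per-video total score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical global scores from three raters over five videos:
raters = [
    [18, 22, 15, 25, 20],
    [17, 23, 14, 24, 21],
    [19, 21, 16, 25, 19],
]
print(round(cronbach_alpha(raters), 3))  # high alpha = consistent raters
```

Values approaching 1.0 indicate that the raters rank the performances consistently, which is the sense in which crowd scores can be said to track expert scores.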

Results: Within 48 hours we received 1,840 crowd-worker ratings, of which 1,438 (78.2%) passed analysis eligibility criteria based on discrimination questions used to assess the integrity of the scorer's responses. Faculty experts completed the reviews in 10 days. C-SATS ratings provided excellent discrimination between passing and failing video performances as defined by faculty experts (area under ROC curve = 96.9%; 95% CI: 90.3%–100%).
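The area under the ROC curve quoted here has a direct interpretation: it is the probability that a randomly chosen expert-passed video receives a higher crowd score than a randomly chosen expert-failed one. A minimal sketch of that rank-based (Mann-Whitney) computation, with hypothetical scores rather than the study's data:

```python
# Sketch: ROC AUC as the probability that a crowd score for an
# expert-passed video exceeds one for an expert-failed video
# (illustrative scores only; not the study data).

def roc_auc(pass_scores, fail_scores):
    """Rank-based AUC: fraction of (pass, fail) pairs ordered correctly,
    counting ties as half-wins."""
    wins = 0.0
    for p in pass_scores:
        for f in fail_scores:
            if p > f:
                wins += 1.0
            elif p == f:
                wins += 0.5
    return wins / (len(pass_scores) * len(fail_scores))

passed = [21.5, 23.0, 19.8, 24.1]   # hypothetical mean crowd scores
failed = [14.2, 16.0, 19.9]
print(roc_auc(passed, failed))
```

An AUC near 1.0, as reported in the abstract, means crowd scores almost always rank expert-passed performances above expert-failed ones.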

Conclusions: A properly sized and qualified crowd can score laparoscopic skill performances on par with faculty experts. Crowd-based ratings may be an efficient method for assessing passing/failing performances and for measuring change in performance after training.

Source of Funding: None

 
Listen to an interview with Thomas Lendvay, one of the authors of this study.

 

Presented by Thomas Lendvay,1 Bryan Comstock,1 Timothy Averch,2 Geoffrey Box, Bodo Knudsen,3 Timothy Brand,4 Michael Fernandino,5 Jihad Kaouk,6 Jaime Landman,7 Benjamin Lee,9 Elspeth McDougall,9 Ashleigh Menhadji,8 Bradley Schwartz,10 Robert Sweet, Timothy Kowalewski11 at the 32nd World Congress of Endourology & SWL - September 3 - 7, 2014 - Taipei, Taiwan

1University of Washington, USA
2University of Pittsburgh, USA
3Ohio State University, USA
4Madigan Army Medical Center, USA
5Duke University, USA
6Cleveland Clinic Foundation, USA
7University of California, Irvine, USA
8Tulane University, USA
9University of British Columbia, Canada
10Southern Illinois University, USA
11University of Minnesota, USA

 
