Clinical practice guidelines (CPGs) are increasingly used by leading organizations to help clinicians make high-quality, evidence-based healthcare decisions. They hold promise for improving the quality, appropriateness, and cost-effectiveness of medical therapies. They are also becoming the basis of quality-of-care measures that are likely to affect urologists' reimbursements, with pay-for-performance measures on the horizon. The methodological quality of CPGs developed by different organizations, however, varies considerably. These differences reflect each organization's specific mission, size, financial resources, membership, and target audiences. Therefore, before specific recommendations from CPGs are implemented into clinical practice, their underlying methodology and quality of evidence should be critically reviewed.
In our study, we assessed published CPGs on the treatment of localized prostate cancer to evaluate the rigor, applicability, and transparency of their recommendations. To do so, we searched for CPGs in English on the therapeutic management of prostate cancer from leading organizations over a 15-year study period (1999 through 2014). CPGs that were limited to early detection, screening, staging, and/or diagnosis were excluded.
To evaluate the quality of the CPGs in our study, we applied structured data abstraction. Four independent reviewers used the validated AGREE II instrument to assess the quality of CPGs in six domains: (1) scope and purpose, (2) stakeholder involvement, (3) rigor of development, (4) clarity of presentation, (5) applicability, and (6) editorial independence. The reviewers' scores were then expressed as standardized domain scores on a percentage scale (0% to 100%). We calculated the domain scores by adding up all the scores of the individual items in a domain, then scaling the total as a percentage of the maximum possible score for that domain.
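The scaling step above can be illustrated with a short sketch. The AGREE II user's manual standardizes each domain score relative to both the minimum and maximum totals possible (each item is rated 1 to 7 by each appraiser); the item ratings and three-item domain below are purely hypothetical, not data from the study.

```python
# Sketch of AGREE II scaled domain scoring. Item ratings here are
# hypothetical, for illustration only.
# Each appraiser rates every item in a domain on a 1-7 scale; the
# scaled score expresses the obtained total as a percentage of the
# range between the minimum and maximum possible totals:
#   scaled = (obtained - min) / (max - min) * 100

def scaled_domain_score(ratings):
    """ratings: one list of 1-7 item scores per appraiser."""
    n_appraisers = len(ratings)
    n_items = len(ratings[0])
    obtained = sum(sum(appraiser) for appraiser in ratings)
    min_possible = 1 * n_items * n_appraisers  # all items rated 1
    max_possible = 7 * n_items * n_appraisers  # all items rated 7
    return 100 * (obtained - min_possible) / (max_possible - min_possible)

# Hypothetical example: four appraisers rating a three-item domain.
ratings = [
    [5, 6, 4],
    [6, 7, 5],
    [4, 5, 4],
    [6, 6, 5],
]
print(round(scaled_domain_score(ratings), 1))  # -> 70.8
```

Because the minimum possible total is subtracted before scaling, a guideline rated 1 on every item scores 0% rather than roughly 14%, which keeps the reported 0%-100% range meaningful.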
Thirteen CPGs met our inclusion criteria. The highest median scores in our study were in the following AGREE II domains: clarity of presentation, editorial independence, and scope and purpose. The lowest median score was for applicability, at 28.1%. Even though the median score for editorial independence was high, at 85.4%, variability was substantial, with an interquartile range of 12.5% to 100%. We also noted significant deficiencies in the domains of applicability and stakeholder involvement, as well as marked heterogeneity in the methodological quality of CPGs developed by different organizations (Figure 1). Of all the guidelines, the United Kingdom's National Institute for Health and Care Excellence (NICE) CPG achieved the highest scores: above 80% for all domains except applicability (domain 5), which was still high at 77.1%. The American Urological Association (AUA) CPG also scored consistently above 80%, but its score for applicability (domain 5) was only 11.5%.
These results show that current published CPGs on the treatment of prostate cancer were of variable methodological quality and transparency and frequently fell short of current standards. In addition, we found major shortcomings in the domains of stakeholder involvement and applicability that potentially undermine the validity of CPGs, raising concerns about their dissemination and impact. In the domain of applicability (domain 5), for example, our evaluation demonstrated that most guidelines neither addressed the barriers and facilitators to implementing their recommendations nor provided ways of auditing their effective implementation. As a result, clinicians may have difficulty enacting the clinical recommendations set forth in CPGs, and the organizations proposing these guidelines may have limited means to monitor how their recommendations are being put into practice.
Clinicians' use of CPGs represents the final hurdle in translating scientific research into patient care. The inconsistencies and lack of methodological rigor identified in our study threaten to undermine the common goal of advancing high-quality patient care. Given that CPGs are designed to guide clinician behavior and provide explicit recommendations for the treatment of typical index patients based on the best available research evidence, our study highlights the importance of urologists carefully reviewing the quality of CPGs before applying them. Urologists should be familiar with the defining characteristics of clinical guidelines that deserve the label "evidence-based." Meanwhile, guideline developers in urology should strive to raise the bar by adopting a transparent, methodologically rigorous, and ideally unified framework for rating the quality of evidence and moving from evidence to recommendations.
Figure 1: Boxplot of AGREE II scores for prostate cancer guidelines (n=13), grouped by domain.
Mohit Gupta, MD and Philipp Dahm, MD
Department of Urology, University of Florida, Florida; and Department of Urology, Minneapolis, Minnesota