A comparison of methods for identifying hospital performance outliers in cardiac surgery



Short title: Identifying hospital performance outliers

Xiaoting Wu, Ph.D.1; Min Zhang, Ph.D.2; Richard L. Prager, M.D.1,3; Donald S. Likosky, Ph.D.1,3; for the Michigan Society of Thoracic and Cardiovascular Surgeons Quality Collaborative

(1) Department of Cardiac Surgery, University of Michigan, Ann Arbor, Michigan; (2) Department of Biostatistics, University of Michigan; (3) Michigan Society of Thoracic and Cardiovascular Surgeons Quality Collaborative

Presented at the QCOR 2017 Scientific Sessions, Quality of Care and Outcomes Research, Arlington, VA, April 2 – 3, 2017

Abstract

Background: A number of statistical approaches have been advocated and implemented to estimate adjusted hospital outcomes for public reporting or reimbursement. The ability of these methods to identify hospital performance outliers in support of quality improvement has not been fully investigated.

Methods and Results: We leveraged data from patients undergoing coronary artery bypass grafting surgery between 2012 and 2015 at 33 hospitals participating in a statewide quality collaborative. We applied 5 different statistical approaches (1: indirect standardization with standard logistic regression models; 2: indirect standardization with fixed effect models; 3: indirect standardization with random effect models; 4: direct standardization with fixed effect models; 5: direct standardization with random effect models) to estimate hospital post-operative pneumonia rates, adjusting for patient risk. Unlike the standard logistic regression models, both the fixed effect and random effect models accounted for hospital effect. We applied each method to each year and subsequently compared the methods in their ability to identify hospital performance outliers. Pneumonia rates ranged from 0% to 26.2%. For 2013-2015, the standard logistic regression models had c-statistics of 0.73-0.75, the fixed effect models had c-statistics of 0.81-0.83, and the random effect models had c-statistics of 0.80-0.83. The methods differed in their ability to identify performance outliers. In direct standardization, random effect models stabilized the hospital rates by shrinking the estimated rates toward the average, whereas fixed effect models produced larger standard errors of the hospital effect (e.g., at low case volume hospitals). In indirect standardization, the three models showed high agreement on their derived observed/expected ratios. Indirect standardization with fixed or random effect models identified similar hospital performance outliers in each year.
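As a rough illustration of the indirect standardization approach (method 1 above), the sketch below fits a patient-level logistic risk model without a hospital term and derives each hospital's observed/expected (O/E) ratio; multiplying O/E by the overall event rate yields a risk-adjusted rate. The data, the single age covariate, the hospital effect, and all numbers here are synthetic assumptions for illustration only and do not come from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def indirect_standardization(hospital, risk, event):
    """O/E ratios from a patient-level risk model that ignores hospital.

    Fits a logistic risk model on patient covariates only, then compares
    each hospital's observed event count to the sum of its patients'
    predicted probabilities (the "expected" count).
    """
    X = risk.reshape(-1, 1)
    model = LogisticRegression().fit(X, event)
    expected = model.predict_proba(X)[:, 1]        # patient-level expected risk
    overall_rate = event.mean()
    results = {}
    for h in np.unique(hospital):
        m = hospital == h
        oe = event[m].sum() / expected[m].sum()    # observed / expected
        results[h] = (oe, oe * overall_rate)       # O/E ratio, risk-adjusted rate
    return results

# Illustrative synthetic cohort: 10 hypothetical hospitals, age as the
# only risk factor, and an artificial elevated effect at hospital 0.
rng = np.random.default_rng(0)
n = 5000
hospital = rng.integers(0, 10, size=n)
age = rng.normal(65, 10, size=n)
logit = -4.0 + 0.04 * (age - 65) + 0.3 * (hospital == 0)
event = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

results = indirect_standardization(hospital, age, event)
```

In practice, a hospital whose O/E interval estimate excludes 1 would be flagged as a performance outlier; the fixed and random effect variants described above would additionally include a hospital term in the model before computing expected counts.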

Conclusions: The surveyed approaches varied in their ability to identify performance outliers. Given their higher sensitivity to outlier hospitals and more stable estimates of hospital effects, indirect standardization methods with random effect models may best support quality improvement activities.
