ProPublica has released the Surgeon Scorecard, their study (with accompanying app and news report) on complications in surgery. There have been many such national studies of Medicare and other administrative databases; this one is novel in that it evaluates the complication rate of individual surgeons, rather than grouping the complication rate by hospital. When ProPublica announced last week that they’d be releasing these reports, there was much gnashing of teeth regarding the interpretation of data, correction factors for operating on more complex patients, and public response to the findings. This is likely to become a pretty big story this week (at least); USA Today has already picked it up, and it’s right up NPR’s alley as well.
First, disclaimers: I’m not listed in the report. I almost exclusively perform emergency surgery, and only elective surgeries were included in the analysis. Additionally, the surgeons included from my institution come out looking fine; every one is in the “medium” group in The Ohio State University Hospitals report. My conflict of interest here comes not from my affiliation with Ohio State, but rather my role as the Medical Director for Process Improvement in the Department of Surgery here. As such, I’m more involved in systematic care than individual performance, but have evaluated the latter as well—just never regarding cholecystectomies, the only general surgery procedure ProPublica evaluated.
I read the article. While I was frankly quite surprised it identified individual surgeons by name, it was less sensational than I’d feared. The article quite reasonably described concerns about individual surgeon performance, about transparency and the difficulties in normalizing data and interpreting results, and about the “captain of the ship” approach to surgery in both its faults and its benefits. The authors describe outliers that are both positive and negative, although it’s unlikely anyone other than the outliers themselves will remember the names of the positive ones. Tales of patients who died or suffered serious complications after yet another surgery by an apparently subpar surgeon who wasn’t investigated by his (every surgeon named is male) employer are plentiful and horrible.
The methods, as usual, will be the most hotly contested portion of the study. Using administrative data is fraught with known complications and inaccuracies; including only Medicare patients removes a huge portion of the population undergoing elective surgery; and including only a single general surgery procedure (cholecystectomy) while completely ignoring plastic surgery, otolaryngology, and gynecology makes the report's generalizability difficult to fathom. A few methodology questions, despite ProPublica’s best efforts, do remain. (Was outpatient cholecystectomy included? It’s not entirely clear.) [See the note below.] Finally, the data itself isn’t as plentiful as one might hope: Skeptical Scalpel coyly writes that “general surgeons can relax,” as so few of them appear in the scorecard because too few of their procedures made the cut. Indeed, for my own institution, only a single surgeon is included for performing laparoscopic cholecystectomies—and I don’t know him.
ProPublica had good reasons for choosing the procedures they did, and must be congratulated not only for the excellent writing in the report, but also for including independent experts in patient safety as part of the methods, for publicly releasing the exact methods used, and for entering into a dialog with readers of the report. Author Marshall Allen not only held a Twitter chat (#surgeonscorecard) on the day of the report’s release, answering plenty of questions about methods and intent (and politely replying to some who were quite rude to him), but also invited “experts” to add their own comments about the article. These include critical remarks from Johns Hopkins Vice President Peter Pronovost and praise from UCSF Professor Robert Wachter, both big names in the patient safety movement.
I suggested prior to its release that surgeons’ response to the scorecard would be extraordinarily important. Surgeons are not underdogs. Talking about the inevitability of surgical complications, even deaths, does not endear our profession to the public. Discussing methods is boring and doesn’t make for the sound bites needed to combat criticism in the media. Instead—and not simply for the sake of public opinion—we must embrace the report, discuss it intelligently, even critically, but note that it will make us better rather than complaining that it makes us look worse than we are.
I get it. Data can be misinterpreted. Results can be spun, and reports can be skewed. In the strange new world that is patient safety and complication transparency, we will sometimes have to explain complication rates to patients and their families. Rather than complaining about this, however, we really can use the data ourselves to improve those rates.
I’m not listed in the scorecard, and that’s a shame. My patients have had complications, and presumably some have been preventable. I know which ones have been readmitted to my hospital, but often don’t hear about the ones who go elsewhere. The scorecard could be helpful to me, and hopefully will be useful to many—as long as they approach it appropriately instead of instinctively going to battle stations.
I am the product of my failures, not my successes. That I may learn from them, that I may learn from those experienced by others, and that I may keep from experiencing the same failures again, that is my hope, and that is the hope I have for my patients. If others can learn from failures as well, perhaps they and their patients won’t have to go through them.
Brief Update, 14 July 2015 8:46 PM EST: Shortly after I posted this, ProPublica editor Olga Pierce kindly responded to a post on Twitter by my colleague Luke Selby and clarified that the “patient cohort included only Medicare inpatient stays”; outpatient or “same-day” cholecystectomies weren’t counted as part of the analysis. Additionally, her Twitter feed shows a plethora of responses to questions like Luke’s, identification of “a quirk in the data that was excluding cancer centers”, and even a note that they’ll be putting their code on Github. As you might expect from my comments above and my previous writing about open data, I couldn’t be happier with their responsiveness and ongoing plans for this project. This would be a bit of that “practice what you preach” idea.