Hospital Rankings Get Serious

*Read accompanying posts from Leapfrog and Consumer Reports.*

After years of breaking down, my sedan recently died.  Finding myself in the market for a new car, I did what most Americans would do: I went to the web.  Reading reviews and checking rankings, it quickly became clear that each website emphasized something different: some valued fuel efficiency and reliability, while others made safety the primary concern.  Still others clearly put a premium on style and performance.  It was enough to make my head spin, until I stopped to consider: what really mattered to me?  I decided that safety and reliability were my primary concerns, and that how fun a car was to drive was an important, if somewhat distant, third consideration.

For years, many of us have complained about the lack of similarly accessible, reliable information about healthcare.  These issues are particularly salient when we consider hospital care. Despite a long-standing belief that all hospitals are the same, the truth is startlingly different: where you go has a profound impact on whether you live or die, and on whether you are harmed.  There is an urgent need for better information, especially as consumers spend more money out of pocket on healthcare.  Until recently, this type of transparent, consumer-focused information simply didn’t exist.

Over the past couple of months, things have begun to change. Three major organizations recently released groundbreaking hospital rankings.  The Leapfrog Group, a well-respected organization focused on promoting safer hospital care, assigned hospitals grades (“A” through “F”) based on how well they cared for patients without harming them.*  Consumer Reports, famous for testing cars and appliances, came out with its first “safety score” for hospitals.  U.S. News & World Report, probably the most widely known source for rating colleges, released its annual hospital rankings, but this time with an important change in methodology.  While others have also gotten into this game, these three organizations bring the greatest care, transparency, and credibility to the task.

What is particularly striking, though, is how differently hospitals fare across the three sets of rankings.   Just like the car rating websites that each emphasize different things, these three organizations have taken approaches very different from one another.  If we want to understand why some hospitals get an “A” on Leapfrog or are rated one of America’s Best Hospitals but get an abysmal score on Consumer Reports, we have to look “under the hood.”

It is worth noting that there are important similarities across the rating systems.  They each include hospital infection rates, though to varying degrees.  All three incorporate hospital mortality rates, though they weight them differently.  In accompanying blogs, colleagues from the Leapfrog Group and Consumer Reports each explain their methods and approach in far greater detail than I can, so I highlight what each emphasizes, and focus on their differences.

Let’s start with Leapfrog, whose primary focus is ensuring that when you walk into the hospital, your chances of being hurt or killed by the medical care you receive (as opposed to dying from the underlying condition that brought you in) are minimized.  Each year, as many as 100,000 people die in U.S. hospitals due to medical errors, a stunning number that has changed very little over the past decade.  The most common causes of death and injury in hospitals are medication errors and healthcare-associated infections, though there is no shortage of ways that you can be injured.  Leapfrog creates a composite safety score, emphasizing whether a hospital has computerized prescribing, how well it staffs its intensive care units, and its infection rates.  Your chances of being injured in the hospital are generally lower at an “A” hospital than at a “C” hospital.

Consumer Reports (CR) also calculates a “safety score,” but its focus is very different.  Like Leapfrog, CR incorporates infection rates.  It also emphasizes three areas Leapfrog does not.  The first is “double” CT scans (which, thankfully, are rare), which expose patients to unnecessary radiation.  The second is patients’ reports on how well their providers communicated about medications and explained what would happen after discharge.  I’m a huge fan of patient-reported measures (they’re the one chance we consumers get to rate hospitals).  The challenge is that they capture both patient expectations and hospital performance.   The latest evidence suggests that patients’ expectations, including their tendency to give high scores, vary a lot based on who they are and where they live.  The final area of emphasis is readmissions, a very complicated metric.  The best evidence to date suggests that only a small proportion of readmissions are preventable by hospitals.  In fact, good social support at home and the ability to get in and see your primary care physician probably matter much more than what the hospital does.  The other factors in the CR score that are weighted less heavily (including mortality) can be found here.

U.S. News has been rating hospitals for a while, though they made a very important methodological change recently.  Three things make up their score:  mortality (1/3), reputation (1/3) and other stuff (1/3).  The other stuff is a collection of safety and technology indicators.  The reputation score comes from a survey of specialists in each field and is meant to capture the intangible knowledge about hospital care that the rest of us don’t have.  Finally, and most importantly, they now emphasize mortality rates, which in many ways are “the bottom line” for a hospital’s performance.  Their risk-adjusted mortality rate captures both how well the hospital provides the right treatments (a.k.a. “effectiveness”) and how well it avoids harming patients (a.k.a. “safety”).
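To make the equal-thirds arithmetic concrete, here is a minimal sketch of a composite of the kind U.S. News describes. The 0–100 subscores and the example values are illustrative assumptions, not the actual U.S. News inputs or scaling:

```python
# Hypothetical equal-weights composite: mortality, reputation, and
# "other" safety/technology indicators each contribute one third.
# Subscores are assumed to be on a 0-100 scale for illustration only.

def composite_score(mortality: float, reputation: float, other: float) -> float:
    """Return the equal-weights (1/3 each) composite of three subscores."""
    return (mortality + reputation + other) / 3

# A hospital strong on mortality but with little national reputation
# still lands in the middle of the pack:
print(round(composite_score(mortality=90, reputation=40, other=70), 1))  # 66.7
```

The point of the sketch is that a hospital's final rank is as much a product of the chosen weights as of its underlying performance; shift a third of the weight to reputation, and big academic centers rise.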

Implications:  Who wins and who loses?

Different approaches lead to different winners and losers (see table below).  In the Leapfrog scores, there aren’t large variations by hospital type (i.e., size, teaching status).  There aren’t large regional variations either: in every part of the country, there are lots of winners and losers.  It is worth noting that safety-net hospitals (as we have previously defined them) generally do worse on Leapfrog (19% of those who got an “A” were safety-net hospitals, compared with 31% of those who got a “C”).

Because CR emphasizes infections but also patient experiences and readmissions, big “winners” on their score are small, community-based hospitals with very few minority or poor patients.  Hospitals in the Midwest do particularly well.  Major teaching hospitals do extremely poorly (three times more likely to end up in the worst quartile).  These worst quartile hospitals also have a lot more minority and poor patients – most likely because hospitals that care for poor patients have higher readmission rates (remember, poor social supports at home likely drive this) and worse patient experience scores.

U.S. News top-ranked hospitals are usually big, academic teaching hospitals.  On this metric, at least, it’s nearly impossible for a small community hospital to be rated as one of the best in the country.

What’s a consumer to do?

If you’re lucky enough to find a hospital that gets rated highly by all three organizations, I’d take that in a heartbeat.  It’s like finding a car that drives well, looks stylish, and is reliable and safe!  No-brainer.  Unfortunately, those hospitals are rare.  In Boston, Massachusetts General Hospital was ranked the #1 hospital in the country by U.S. News.  It got an “A” from Leapfrog. Yet it was near the bottom of Massachusetts hospitals in the CR rating, receiving a score of just 45 out of 100.

It can be easy to decide whether the safety, the style, or the performance of a car is most important to you. Unfortunately, choosing what’s most important in health care forces us to ask difficult and seemingly unreasonable questions: Is my primary goal to survive my hospitalization, to avoid infections and medication errors, or to have a reasonably good experience?  Every individual has to decide what matters most.  If a low mortality rate is most important, U.S. News is your best bet.  If you care most about patient safety, then Leapfrog is the way to go.  Consumer Reports emphasizes infections, unnecessary radiation, and patient experience.  If those matter most, CR is your best bet.

My personal list ranks mortality as most important (by far), followed by safety, with patient experience an important but distant third.  Others will make different choices. This is why we need different lists and different types of hospitals. It’s time for consumers to use this new information to make better choices.  I know that there is little precedent for consumers choosing healthcare providers based on quality.  I’ve always believed it is because they lacked good data.  In an era of greater transparency, if consumers vote with their feet, it’ll make hospital care better for everyone.

*Please note that I served on a blue-ribbon panel that advised the Leapfrog Group on their methodology.  I received no financial compensation for this effort.  The final choice of metrics and the scoring methodology for the Leapfrog Safety Score was made by the Leapfrog Group and not by the blue-ribbon panel.

Here’s how the rankings break out:

12 Replies to “Hospital Rankings Get Serious”

    1. Have you seen the (choice) tools of the Dutch cancer-patient organisations? E.g., for colorectal cancer. (In Dutch…) No ranking; instead, information about what people may expect of colorectal cancer care (according to the patient organisations) and about the differences between hospitals.

      It is absolutely not complete yet; step by step we offer more information. In 2013/2014: more outcomes, including patient-reported outcomes (PROMs)!

  1. These measures are incomplete, to say the least. Who would want to buy a car based solely on safety ratings, reliability, or other people’s opinions, including those of professionals?

    Don’t get me wrong, these measures are useful, but how about some objective measures of the risk-adjusted outcomes one may expect from a hospital and its care team (physician/surgeon and other clinical staff)? I think it’s a shame they don’t already exist.

    Remember, there are many ways to bake a cake, but at the end of the day the only thing that really matters is whether it tastes good or not.

  2. Realistically, most of us have only a couple of hospital choices, and the nation-wide variations are irrelevant. But several features stand out:
    * If a hospital truly deserves an “F,” or 4-out-of-100, why is it allowed to operate? Yes, each state has incompatible and hidden standards of its own, but one might reasonably wonder whether the watchdogs’ interests are more in selling subscriptions than in safeguarding our health.
    * Sure, there are different methodologies used, but if they actually represent a notion of general “quality,” they should deliver scores that are more alike than different. How well do the scores correlate nationally?
    * How likely is it that a national health organization could break out the impacts as you do here, e.g., estimating the effect of exogenous influences such as expectations in a more systematic way, to allow individuals to get a report card scored the way they want?

  3. Great post! I wrote a story about this topic for The Greenfield Recorder recently. Had tried to contact you for an interview, but our schedules didn’t line up. Just one correction: Mass. General got a 45 out of 100 from CR; I suspect you just forgot the second digit when writing the post. It was indeed near the bottom of the pack among Massachusetts hospitals; one of the interesting findings was that the bottom six in CR’s ratings all received an “A” from Leapfrog.

    1. Chris — thank you for catching the error. Not sure how that slipped through but really appreciate it. Sorry I didn’t get back to you earlier when you wrote me.

    1. Brad — great reminder that we obviously don’t know how to do hospital wide mortality very well yet. Thanks for your comments…and my car is driving great.

  4. Patient safety is strongly influenced by the quality of an institution’s nurses. Both the Institute of Medicine Future of Nursing report and the ANCC Magnet accreditation address the education, training, and interdisciplinary requirements for exemplary nursing care. Overlooking this most critical group of health care providers is short-sighted and incomplete.

  5. Judith,

    Agree wholly that the quantity, and certainly the quality, of nursing care are hugely important to patient safety. We still don’t have good metrics around either (certainly not as it pertains to the quality of nursing care). Without widely available metrics on nursing care, it’s going to be hard for those who build hospital rankings to include it.

    1. Leapfrog measures and publicly reports magnet status and a series of nursing workforce safe practices. For hospitals that report to Leapfrog, this has impact on their Hospital Safety Score. We have no other publicly available information on nursing, so for hospitals that don’t report there is no weighting of nursing.

  6. No one is doing autopsies to see where the most misdiagnoses are made and where fatalities were characterized as resulting from an ailment that didn’t exist or that wasn’t the cause of the death.
    No one is keeping track of the crime rate: unnecessary operations, abuses, etc. Patients cannot get such events into the record even when they survive to scream about them.
    No one is checking to see how often injured patients are sued, or threatened with suits, to silence them.
    No one is checking to see how many injured patients are refused readmission and find themselves unable to get anyone local even to diagnose, let alone treat, their injuries – a common occurrence for injured patients.

    BTW, the number you quote, “100,000 people die in U.S. hospitals due to medical errors,” is the number of people who die from hospital-acquired infections alone, according to the CDC (who say 99,000 for infections alone), not counting other unnecessary sources of death. And many of those infections are difficult to characterize as errors. When caregivers don’t get paid for preventing them, don’t have time to prevent them, and routine policies are enacted that allow a certain percentage of patients to get infected because it would be too expensive to prevent it, it is difficult to define those deaths as accidents rather than as the provider’s concept of acceptable costs of doing business.
    If you add other sources of accidental death beyond infections, like sin, incompetence, other results of negligence, and other acceptable costs of doing business, the number of unnecessary deaths is considerably higher than 100,000. More than one study suggests it is more in the range of 300,000.
    Whatever the actual number is, it certainly is higher than 100,000 (see:
