
The Stage 2 Meaningful Use of EHRs Final Rules: Still No Surprises but Important Steps Forward

Six months to the day after the Centers for Medicare and Medicaid Services (CMS) released the “preliminary rules” for Meaningful Use, the final rules are in.  For clinicians and policymakers who want to see Electronic Health Records (EHRs) play a key role in driving improvements in the healthcare system, there’s a lot to like here.

For the Office of the National Coordinator (ONC), the agency that oversees the federal health information technology incentive program, the Meaningful Use rules are a balancing act. On one hand, ONC wants to get as many clinicians and hospitals on board with simply adopting EHRs (and thus, the need to set a low bar). On the other hand, they want to ensure that once people start using EHRs, they are using them in a “meaningful” way to drive improvements in care (and thus, the need to set a high bar).  I think ONC got that balance just about right.

Let me begin with a little background.  In 2009, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act, setting aside about $30 billion for incentives for ambulatory care providers and acute-care hospitals to adopt and “meaningfully use” EHRs.  Congress specified that the executive branch would define Meaningful Use (MU) and would do so in three stages.  The first stage was finalized in 2010 and its goals were simple – start getting doctors and hospitals on board with the use of EHRs.  By most metrics, stage 1 was quite successful.  The proportion of doctors and hospitals using EHRs jumped in 2011, and all signs suggested continued progress in 2012.  Through July 2012, approximately 117,000 eligible professionals and 3,600 hospitals have received some sort of incentive payment.

When Stage 2 preliminary rules were released in March 2012, I blogged that there were no surprises.  ONC had foreshadowed many of the additions: that a majority of prescriptions be written electronically, that quality measures be generated from the EHR, and that there be a greater demonstration of health information exchange.  These additions remain in the final rules.  Yet, as I skimmed through the 600+ pages of federal register that answer all the comments and describe the final rules, I was struck by the importance of Stage 2 – an importance that can be lost in the details.

A gentle start at shifting the way doctors use EHRs.  Most experts will view Stage 2 as simply building on Stage 1 (e.g., capturing more data in electronic format).  This is true, but it’s not really what’s interesting here.  What I think is so important is that Stage 2 begins, gently at first, to get doctors to start using EHRs differently. Six months ago, I said that CMS would likely drop the requirement that patients actually engage with the EHR to download or otherwise share their data.  Holding doctors and hospitals accountable for what their patients do seemed untenable to me.  I’m happy to say that I was wrong. ONC set the bar very low:  providers only have to show that 5 percent of their patients are doing it.  It’s a small number, but it’s really important.  The new rule holds providers accountable for what their patients do, and that’s a fundamental shift.

There are other important shifts, such as getting providers to actually start sharing data with other providers.  Provider organizations have guarded clinical data jealously, seeing it as a tool to keep their patients coming back.  This begins to change in Stage 2.  Finally, automating clinical quality measures is the holy grail of quality measurement and improvement.  Having EHRs automatically generate quality measures can facilitate larger efforts geared toward payment and delivery system reform.  The Stage 2 rules move us forward.

Some providers are still left out and others are lagging.  There are challenges ahead to be sure, many of which have to do with the broader incentive program.  HITECH leaves thousands of providers out of the incentive program altogether – nursing homes, rehabilitation facilities, and psychiatric facilities are just some of the institutions that will not be getting financial resources for using EHRs.  As we have written previously in Health Affairs, these providers are starting off from a miserable baseline of adoption and are likely falling further behind.  Given that so many of the sickest, costliest patients end up at these institutions, not having their clinical data available in electronic format will make care improvement much, much harder.  There are others that seem to be falling behind as well:  small, rural hospitals, for instance, are struggling, as are providers in small practices.  Regional Extension Centers are supposed to be helping these entities, but early evidence of their impact is not as encouraging as it could be.

It is worth remembering that the point of Meaningful Use, at least as I see it, is to be the infrastructure for broader healthcare reform.  EHRs unto themselves may not be a panacea for the cost and quality problems that plague our healthcare system, but they are fundamentally important to addressing those problems.  Want to run an Accountable Care Organization without using interoperable EHRs?  Good luck.  What Stage 2 does is subtle but important:  it slows down the on-ramp enough to give doctors and hospitals a chance to get on board.  But, it also shifts the focus so that providers start using EHRs in ways that engage their patients, share their data, and measure quality in a robust way.

Will it work?  Given my track record on predictions, I’d rather not say.  But if CMS and other payers can do their job of creating the right incentives for broader payment and delivery system changes, ONC can claim credit for having created the infrastructure necessary to make it work.

Ashish K. Jha, MD, MPH, The Stage 2 Meaningful Use of EHRs Final Rules: Still No Surprises but Important Steps Forward, Health Affairs Blog, 24 August 2012. Copyright ©2012 Health Affairs by Project HOPE – The People-to-People Health Foundation, Inc.

Profits, Quality, and U.S. Hospitals

The recent articles in the New York Times about the Hospital Corporation of America (HCA) have once again raised important questions about the role of for-profit hospitals in the U.S. healthcare system.  For-profits make up about 20% of all hospitals, and many of them are part of large chains (such as HCA). Critics of for-profit hospitals have argued that these institutions sacrifice good patient care in their search for better financial returns.  Supporters argue that there is little evidence that their behavior differs substantially from that of non-profit institutions or that their care is meaningfully worse.

To me, this is essentially an empirical question. Yet, as I read through the articles, I was struck by the dearth of data provided on the quality of care at these hospitals.  Based on the comments that followed the stories, it was clear that many readers came away thinking that these hospitals had sacrificed quality in order to maximize profits.  Here, I thought an ounce of evidence might be helpful.

Measuring quality:

There is no perfect way to measure the quality of a hospital.  However, the science of quality measurement has made huge progress over the past decade.  There is increasing consensus around a set of metrics, many of which are now publicly reported by the government and even are part of pay-for-performance schemes.  While one can criticize every one of these metrics as imperfect, taken together, they paint a relatively good, broad picture of the quality of care in an institution.  We focused on five metrics with widespread acceptance:

  1. Patient experience (as measured by HCAHPS, the national metric for grading hospitals)
  2. Process quality (whether hospitals are adherent to evidence-based guidelines)
  3. Mortality rates (proportion of people who die within 30 days of hospitalization, taking into account the “sickness” of the patient)
  4. Readmission rates (proportion of people who are readmitted within 30 days of discharge, taking into account the “sickness” of the patient)
  5. Hospital Safety Score (a measure of how effective a hospital likely is at preventing medically induced harm to patients)

An important caveat: The NY Times article highlighted terrible, unethical practices by some physicians at HCA hospitals who appear to have implanted cardiac stents when there was no clinical indication.  We don’t have the data to examine whether this practice occurs more often at HCA hospitals than at other institutions. Therefore, I’ve decided to focus more broadly on hospital quality.  Most of the metrics above have been approved by the National Quality Forum, are widely regarded by “experts” as being good, and are used by Medicare to judge and pay for quality.

How we analyzed the data:

We examined all U.S. hospitals in four groups:  privately owned non-profit hospitals, government-owned public hospitals, for-profit hospitals that were not part of the HCA chain, and HCA hospitals.  In our analysis we “adjusted” for characteristics that are beyond the hospital’s control, such as size, teaching status, urban versus rural location, and region of the country.  (Adjusting is important:  imagine that all the for-profit hospitals were large, and that large hospitals generally had better quality.  Without adjustment, we’d conclude that for-profit hospitals were better and, therefore, that we should encourage more of them.  With adjustment, we hold size differences constant and examine the actual relationship between quality and the profit status of the hospital.)
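
For readers who want to see what such an adjusted comparison looks like in practice, here is a minimal sketch in Python using pandas and statsmodels.  The file name, column names, and the single outcome shown are hypothetical stand-ins; the published analysis may well have used different software and model specifications.

```python
# Hedged sketch of an adjusted comparison of quality by ownership, with
# non-profit hospitals as the reference group. All file and column names
# are hypothetical stand-ins for whatever the actual analysis used.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_quality.csv")  # hypothetical input file

# Code ownership so that "nonprofit" is the reference category.
df["ownership"] = pd.Categorical(
    df["ownership"],
    categories=["nonprofit", "public", "forprofit_other", "hca"],
)

# Model one quality metric, adjusting for characteristics beyond the
# hospital's control: size, teaching status, location, and region.
model = smf.ols(
    "process_score ~ C(ownership) + bed_size + teaching + urban + C(region)",
    data=df,
).fit()

# Each ownership coefficient is the adjusted difference from non-profits;
# flag the differences that are statistically significant at p < 0.05.
for term in model.params.index:
    if "ownership" in term:
        flag = "significant" if model.pvalues[term] < 0.05 else "n.s."
        print(f"{term}: {model.params[term]:+.2f} ({flag})")
```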

What we found:

In the table below, we use “non-profit” hospitals as the reference group because it’s the largest group of hospitals.  All the scores that are statistically different (at a p-value < 0.05) are highlighted either in red if they are significantly worse or in green if they are significantly better.

Interpretation:

The best part of looking at data is that you get to draw your own conclusions.  Here are mine. Public hospitals are struggling on nearly every metric. For-profit hospitals outside the HCA chain are a mixed bag – they do worse on patient experience (as we’ve found before), better on process measures, and somewhat worse on mortality and readmission rates.  They are about average on the Leapfrog safety score.

However, HCA hospitals look pretty good. They tend to have good patient experience scores, really excellent process quality (adherence to guidelines) and are average or above average on mortality and readmissions (pneumonia mortality does appear to be high, though not statistically significant). They do very well on the Leapfrog safety score* (nearly half got an “A”).

My takeaway is that although which hospital you go to has a profound impact on whether you live or die, whether the hospital is “for-profit” or “not for profit” has very little to do with it.  What really matters is leadership, focus on quality, and a dedication to improvement.  That appears to happen equally well (or badly, depending on your perspective) in both for-profit and non-profit hospitals.

So, when it comes to quality, it’s time to stop thinking about it as an issue of “for-profit versus non-profit” hospitals.  Instead, it’s time to start talking about the large number of relatively poor-performing hospitals where patients are being hurt or killed unnecessarily.  Those hospitals come in all sizes, shapes, and, yes, ownership structures, and we have to figure out how to make them better.

Finally, these analyses were run by Sidney T. Le, a terrific young analyst in our group.  You should follow him on Twitter (@sidtle), although his love of Stanford sports can be a challenge. Consider yourself warned.

*The Leapfrog safety score was developed by a group of experts (full disclosure:  I was on that panel – but don’t worry, there were many people much smarter than me on the panel).

Hospital Rankings Get Serious

*Read accompanying posts from Leapfrog and Consumer Reports

After years of breaking down, my sedan recently died.  Finding myself in the market for a new car, I did what most Americans would do – went to the web.  As I read reviews and checked rankings, it quickly became clear that each website emphasized something different: some valued fuel efficiency and reliability, while others made safety the primary concern.  Others clearly put a premium on style and performance.  It was enough to make my head spin, until I stopped to consider: what really mattered to me?  I decided that safety and reliability were my top two concerns, and that how fun a car was to drive was an important, if somewhat distant, third.

For years, many of us have complained about the lack of similarly accessible, reliable information about healthcare.  These issues are particularly salient when we consider hospital care. Despite a long-standing belief that all hospitals are the same, the truth is startlingly different:  where you go has a profound impact on whether you live or die, whether you are harmed or not.  There is an urgent need for better information, especially as consumers spend more money out of pocket on healthcare.  Until recently, this type of transparent, consumer-focused information simply didn’t exist.

Over the past couple of months, things have begun to change. Three major organizations recently released groundbreaking hospital rankings.  The Leapfrog Group, a well-respected organization focused on promoting safer hospital care, assigned hospitals grades (“A” through “F”) based on how well each cared for patients without harming them.*  Consumer Reports, famous for testing cars and appliances, came out with its first “safety score” for hospitals.  U.S. News & World Report, probably the most widely known source for rating colleges, released its annual hospital rankings, but this time with an important change in methodology.  While others have also gotten into this game, these three organizations bring the greatest care, transparency, and credibility to the task.

What is particularly striking, though, is how differently the hospitals come out in the recommendations.   Just like the different car rating websites that emphasize different things, these three have also taken approaches very different from each other.  If we want to understand why some hospitals get an “A” on Leapfrog or are rated one of America’s Best Hospitals but get an abysmal score on Consumer Reports, we have to look “under the hood.”

It is worth noting that there are important similarities across the rating systems.  They each include hospital infection rates, though to varying degrees.  All three incorporate hospital mortality rates, though they weight them differently.  In accompanying blogs, colleagues from the Leapfrog Group and Consumer Reports each explain their methods and approach in far greater detail than I can, so I highlight what each emphasizes, and focus on their differences.

Let’s start with Leapfrog, whose primary focus is ensuring that when you walk into the hospital, your chances of being hurt or killed by the medical care you receive (as opposed to dying from the underlying condition that brought you in) are minimized.  Each year, as many as 100,000 people die in U.S. hospitals due to medical errors, a stunning number that has changed very little over the past decade.  The most common causes of death and injury in hospitals are medication errors and healthcare-associated infections, though there is no shortage of ways you can be injured.  Leapfrog creates a composite safety score, emphasizing whether a hospital has computerized prescribing, how well it staffs its intensive care units, and what its infection rates are.  Your chances of being injured in the hospital are generally lower at an “A” hospital than at a “C” hospital.
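
For the curious, here is a hedged sketch of the general technique behind composites like this: standardize measures that live on different scales (yes/no adoption of computerized prescribing, staffing levels, infection rates), orient them in the same direction, and then average them.  The measures, values, and equal weighting below are invented; Leapfrog’s actual score uses its own measure set and weights.

```python
# Generic composite-score sketch: z-score heterogeneous safety measures,
# orient them so higher always means safer, then take an average.
# Measures and weights are hypothetical, not Leapfrog's actual ones.
import pandas as pd

df = pd.DataFrame({
    "cpoe": [1, 0, 1, 1],                     # computerized prescribing (1 = yes)
    "icu_staffing": [0.9, 0.4, 0.7, 0.8],     # staffing intensity, higher is better
    "infection_rate": [1.2, 3.4, 0.8, 2.0],   # per 1,000 patient-days, lower is better
})

# Standardize each measure to mean 0, standard deviation 1.
z = (df - df.mean()) / df.std()

# Flip measures where lower raw values mean safer care.
z["infection_rate"] *= -1

# Equal-weighted composite; real scores weight measures differently.
df["safety_composite"] = z.mean(axis=1)
print(df["safety_composite"])
```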

Consumer Reports (CR) also calculates a “safety score,” but its focus is very different.  Like Leapfrog, CR incorporates infection rates, but it also emphasizes three areas Leapfrog does not.  The first is “double” CT scans, which, thankfully, are rare but represent unnecessary radiation.  The second is patients’ reports on how well their providers communicated about medications and how well providers explained what would happen after discharge.  I’m a huge fan of patient-reported measures (it’s the one chance we consumers get to rate hospitals).  The challenge is that they capture patient expectations as well as hospital performance.   The latest evidence suggests that patients’ expectations, including their tendency to give high scores, vary a lot based on who they are and where they live.  A final area of emphasis is readmissions, a very complicated metric.  The best evidence to date suggests that only a small proportion of readmissions are preventable by hospitals.  In fact, good social support at home and the ability to get in and see your primary care physician probably matter much more than what the hospital does.  The other factors in the CR score that are emphasized less (including mortality) can be found here.

U.S. News has been rating hospitals for a while, though it made a very important methodological change recently.  Three things make up its score:  mortality (1/3), reputation (1/3), and other stuff (1/3).  The other stuff is a collection of safety and technology indicators.  The reputation score comes from a survey of specialists in each field and is meant to capture the intangible knowledge about hospital care that the rest of us don’t have.  Finally, and most importantly, U.S. News now emphasizes mortality rates, which in many ways are “the bottom line” for a hospital’s performance.  Its risk-adjusted mortality rate captures both how well the hospital provides the right treatments (a.k.a. “effectiveness”) and how well it avoids harming patients (a.k.a. “safety”).
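
As a rough illustration of how a weighted composite like this works, here is a small sketch that combines the three components using the one-third weights described above.  The component values and the 0–100 scale are invented for illustration; U.S. News’s actual scoring is considerably more involved.

```python
# Illustrative weighted composite in the spirit of the U.S. News score:
# mortality, reputation, and "other" each contribute one third. The
# component values and the 0-100 scale are hypothetical.
WEIGHTS = {"mortality": 1 / 3, "reputation": 1 / 3, "other": 1 / 3}

def composite_score(components: dict) -> float:
    """Weighted average of component scores (each assumed to be 0-100)."""
    return sum(WEIGHTS[name] * value for name, value in components.items())

# Example: strong risk-adjusted mortality, middling reputation.
example = {"mortality": 92.0, "reputation": 60.0, "other": 75.0}
print(round(composite_score(example), 1))  # -> 75.7
```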

Implications:  Who wins and who loses?

Different approaches lead to different winners and losers (see table below).  In the Leapfrog scores, there aren’t large variations by hospital type (e.g., size, teaching status).  There aren’t large regional variations either – in every part of the country, there are lots of winners and losers.  It is worth noting that safety-net hospitals (as we have previously defined them) generally do worse on Leapfrog: 19% of the hospitals that got an “A” were safety-net hospitals, compared with 31% of those that got a “C”.

Because CR emphasizes not just infections but also patient experience and readmissions, big “winners” on its score are small, community-based hospitals with very few minority or poor patients.  Hospitals in the Midwest do particularly well.  Major teaching hospitals do extremely poorly (three times more likely to end up in the worst quartile).  These worst-quartile hospitals also have a lot more minority and poor patients – most likely because hospitals that care for poor patients have higher readmission rates (remember, poor social supports at home likely drive this) and worse patient experience scores.

U.S. News top-ranked hospitals are usually big, academic teaching hospitals.  On this metric, at least, it’s nearly impossible for a small community hospital to be rated as one of the best in the country.

What’s a consumer to do?

If you’re lucky enough to find a hospital that gets rated highly by all three organizations, I’d take that in a heartbeat.  It’s like finding a car that drives well, looks stylish, and is both reliable and safe!  No-brainer.  Unfortunately, those hospitals are rare.  In Boston, Massachusetts General Hospital was ranked the #1 hospital in the country by U.S. News.  It got an “A” from Leapfrog. Yet it was near the bottom of Massachusetts hospitals in the CR rating, receiving a score of just 45 out of 100.

It can be easy to decide if the safety, or the style, or the performance of a car is most important to you. Unfortunately, choosing what’s most important in health care can make us ask difficult and seemingly unreasonable questions. Is my primary goal to survive my hospitalization, avoid infections and medication errors, or have a reasonably good experience?  Every individual has to decide what matters most.  If a low mortality rate is most important, U.S. News is your best bet.  If you care most about patient safety, then Leapfrog is the way to go.  Consumer Reports emphasizes infections, unnecessary radiation and patient experience.  If those matter most, CR is your best bet.

My personal list ranks mortality as most important (by far), followed by safety, with patient experience an important but distant third.  Others will make different choices. This is why we need different lists and different types of hospitals. It’s time for consumers to use this new information to make better choices.  I know that there is little precedent for consumers choosing healthcare providers based on quality.  I’ve always believed it is because they lacked good data.  In an era of greater transparency, if consumers vote with their feet, it’ll make hospital care better for everyone.

*Please note that I served on a blue-ribbon panel that advised the Leapfrog Group on their methodology.  I received no financial compensation for this effort.  The final choice of metrics and the scoring methodology for the Leapfrog Safety Score was made by the Leapfrog Group and not by the blue-ribbon panel.

Here’s how the rankings break out:


Consumer Reports accompaniment to “Hospital Rankings Get Serious”

Guest post by John Santa, MD, MPH, accompanying “Hospital Rankings Get Serious”

Including Consumers In the Safety Journey

For more than a decade, those in health care have known that serious safety problems were present in our health systems. Many of the country’s most prominent health care leaders have done their best to make safety improvement a priority. Patients and families have shared their stories, pleading for more attention to the errors that have become so commonplace. Published research has shown dramatic improvements can occur when the culture of a hospital changes in a way that prioritizes safety and reduction of harm. Many physicians and hospitals have stepped up to these changes, demonstrating that striking improvement can occur. But much of our health care system has not made these improvements, and in many cases has not been transparent with consumers about these risks.

In early July, Consumer Reports published its first Hospital Safety Ratings, covering 1,159 hospitals in 44 states across the U.S. Hospital errors and mistakes contribute to the deaths of 180,000 hospital patients a year, according to 2010 figures from the Department of Health and Human Services. And another 1.4 million patients on Medicare are seriously hurt by their hospital care.

But hospital safety information is difficult to come by; the U.S. government doesn’t track it the way it does, say, automobile crashes, and it becomes especially difficult for consumers to know and interpret how well hospitals in their community are doing. Our safety ratings covered just 18 percent of all U.S. hospitals, but included some of the largest and best-known hospital systems. The ratings focused on six safety-related categories: infections, readmissions, communication, CT scanning, complications, and mortality. We did not rate hospitals on how much consumers liked being there, on other aspects of their hospital experience, or even on the benefit they gained from being at a certain hospital. We focused entirely on safety.

Our safety composite score was developed after several years of working with publicly reported hospital quality data. Beyond using data reported to federal or state governments, we selected measures for which there was enough information to include as many hospitals as possible, that addressed the areas with the greatest effect on patient safety, that focused on outcomes rather than processes, and that were valid and reliable, as assessed by our internal statisticians and our external experts. All of the information we published was the most current available, and most of it has been reported long enough for hospitals to implement improvement efforts. Finally, we looked at areas where consumers could take some sort of action to protect themselves. We realize that our safety composite measures some aspects of safety but not all. We are working on developing more safety measures to add to the composite. We envision this work as a long-term journey, not a set destination.

Why did we focus entirely on safety? Because the accidental harm inflicted upon patients in a hospital setting is nearly epidemic and, in many cases, almost entirely avoidable. We believe that safety should be a top priority for hospitals, and to underscore that point, we set a high bar in determining the cut-offs that defined our Ratings. We included risk adjustments in several areas. But we do not believe making more extensive adjustments to the data is appropriate, especially for errors that can always be prevented. For example, in the case of hospital infections, we believe that zero infections is a reasonable goal. And we believe that is the case for other events too, such as pressure sores.

We acknowledge that this approach challenges hospitals and researchers. Yet given the slow rate of system-wide improvement in safety and errors, we think it is appropriate to stimulate new approaches and more critical thinking.

We think that good science is done by independent teams who are transparent in their methodology and use data accessible to all. We have had multiple interactions, conversations, and presentations involving hospitals, and we know that many hospitals would disagree with our decisions and the subsequent ratings. While we consulted multiple experts and researchers, and reviewed multiple studies, we developed our safety composite independent of any other rating effort, research team or existing strategy. We made all the final decisions.

We think the best science includes input from those who are most likely to benefit and those most at risk. We are fortunate to have input from consumers who understand the benefits and risk of health care. We have urged consumers to use all resources available, including other publications, to assess the benefits and risks of a health care intervention. Our priority was always—and remains—a focus on what can best improve the health of, and reduce the harm to, consumers.

Leapfrog accompaniment to “Hospital Rankings Get Serious”

Guest post by Leah Binder, accompanying “Hospital Rankings Get Serious”

Here are some stories from my summer reading list. In one story, a woman is stopped by security at the airport when the metal detector goes off. Security guards can find no cause for this and eventually let her board the flight. But this makes her wonder, so she calls her doctor. Later, an X-ray reveals a metal retractor in her abdomen: a surgical instrument the size of a crowbar, somehow left in her body after recent surgery.

Here’s another story from the same book. We’ve all laughed about doctors’ handwriting and wondered how pharmacists learn to read those scribbles. In this story, a doctor in a hospital handwrites a prescription. The prescription is misread because of the handwriting. As a direct result, the patient dies.

You won’t find this book at the beach, though it’s every bit as harrowing as anything Dean Koontz or Stephen King could dream up: I found these stories in a premier textbook on patient safety, Understanding Patient Safety, by Dr. Robert Wachter. Throw away your memories of dry, highlighter-burned textbooks from your school days; this one is a shocking page-turner.

Chilling as the patient stories are, here’s what really raises the goosebumps on your arm: these stories are all true, and worse, they aren’t particularly unusual. There is one medication error per day per hospitalized patient – more in the ICU. One in four Medicare patients admitted to a hospital suffers some form of unintended harm during the stay. An estimated 6,000 “never events” – like that retained retractor – happen every month to Medicare beneficiaries in the U.S.

Despite this, Dr. Wachter’s textbook is oddly reassuring, because it bursts with 450+ pages of solutions: great ideas, excellent case studies, well-researched protocols, and interesting evidence about what hospitals can successfully do to save lives. We have a wealth of evidence and a plethora of dedicated, motivated people in health care who have already demonstrated they can avoid many of these terrible errors.

But if the problems are clear and the solutions abundant, then why are we still losing so many lives? The employers and other purchaser members of Leapfrog concluded that what’s missing is market pressure. Given the many competing priorities hospitals face in this time of turbulent change, consumers and purchasers need to make clear that safety is the priority. That means consumers need to insist on the importance of safety when they talk with their doctors and nurses, and when possible they should vote with their feet to protect themselves and their families from harm in an unsafe hospital. Purchasers need to structure their contracting and benefits to favor and reward safety.

All of this rests on one critical resource: transparency. Consumers and purchasers need to have information about safety in a format that they can use. That is why Leapfrog, a national nonprofit with a membership of employers and other purchasers, launched the Hospital Safety Score.

The Leapfrog Board modeled the Score on the restaurant safety inspection policies recently enacted in Los Angeles and New York City. In those cities, health department inspectors give restaurants a letter grade rating their safety, and restaurants are required to post the grade on their front entryway. Within one year of implementation, a poll found that two-thirds of New Yorkers consulted the letter grade before choosing a restaurant. Distilling complex data into one comprehensible letter grade clearly helped the dining public, so Leapfrog hopes it might work for the hospitalized public. On June 6, 2012, we issued letter grades rating the safety of over 2,600 general hospitals across the country (www.hospitalsafetyscore.org).
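
To make the letter-grade idea concrete, here is a minimal sketch of how a numeric safety score might be binned into an “A” through “F” grade. The scale and cutoffs below are invented for illustration; Leapfrog’s actual methodology uses its own scale and cut points.

```python
# Hypothetical mapping from a numeric safety score to a letter grade.
# The scale and cutoffs are illustrative, not Leapfrog's actual cut points.
from bisect import bisect_right

CUTOFFS = [1.0, 2.0, 3.0, 4.0]      # grade boundaries on a made-up scale
GRADES = ["F", "D", "C", "B", "A"]  # one more grade than there are cutoffs

def letter_grade(score: float) -> str:
    """Return the letter grade for a numeric score (higher is better)."""
    return GRADES[bisect_right(CUTOFFS, score)]

print(letter_grade(3.2))  # -> "B"
```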

To calculate the score using the best evidence, Leapfrog sought advice from a blue-ribbon panel of experts that included nine of the nation’s top researchers in patient safety. Dr. Wachter served on the panel, along with three leading researchers from Harvard and others from Johns Hopkins, Vanderbilt, Michigan, and Stanford. These experts advised Leapfrog on which publicly available measures of safety to consider using, and offered guidance in calculating scores using those measures. Leapfrog considered this advice in calculating the grades for hospitals.

Leapfrog’s Hospital Safety Score focuses exclusively on errors, accidents, injuries, and infections in hospitals—the unintended sometimes deadly events that no patient ever seeks out from a hospital stay. There are many other issues affecting the performance of a hospital that Leapfrog did not consider, such as mortality rates for certain procedures or patient experience reports. Other ratings in other places offer perspectives on those issues for consumers.  The Hospital Safety Score rates hospitals on whether they have the procedures and protocols in place to prevent harm and death, as well as the rate of actual harm and death to patients from accidents and errors.

The good news is that hundreds of hospitals are demonstrating excellent performance in safety, and earned an A. But not all hospitals perform as well, and consumers and purchasers deserve to know which is which.

The Hospital Safety Score is one tool among many consumers should use to choose a hospital. We link to other resources on our website, including the Consumer Reports safety ratings and others. Just as consumers can consult several different reviews before making a major purchase like a car, so should consumers consider different views about different aspects of hospital performance. Personally, I will welcome the opportunity to consult a variety of reviews if and when my family faces the critical decision about admission to a hospital. But for me, safety will always come first.

I have met many people who suffered egregious, unnecessary harm in American hospitals, and without exception they tell their story publicly for one reason: to make sure what happened to them doesn’t happen to others. Beneath the Hospital Safety Score and the Consumer Reports rating are the stories of hundreds of thousands of such victims. We calculated our scores because their experience counts.