
Judging Health Systems: Focusing on What Matters

Austin Frakt and Aaron Carroll recently approached me about a New York Times UpShot piece aiming to rank eight healthcare systems they had chosen: Australia, Canada, France, Germany, Singapore, Switzerland, the United Kingdom, and the United States. This forced me to think about a pretty fundamental question: what do we want from a healthcare system?

What Does an Ideal Healthcare System Look Like?

I would argue that most people want a healthcare system where they can get timely access to high-quality, affordable care – and one that also promotes innovation in new tests and treatments. But underlying these sentiments are a lot of important issues that need unpacking. First, what does it mean to be able to access care when you need it? A simple way to think about this is being able to see a doctor (or other healthcare professional) quickly and easily, and, when there are follow-on tests, procedures, and treatments, being able to get them without much delay. This brings up an important point: while experts often discount the importance of timeliness, regular people generally don’t. Anyone who has waited weeks or months for a follow-up after an abnormal test result, or for a needed surgery, knows that waiting times are not just an inconvenience. Delayed access can be stressful, agonizing, and, in some instances, downright harmful.

Beyond access, of course, we want care we can afford. Almost all of us need some sort of insurance to pay for an unexpected, catastrophic healthcare expense (like spending a few weeks in an ICU). Most of us need some sort of financial coverage for other, slightly less expensive services such as an MRI or a knee replacement. And still, some of us will struggle to pay for simpler things like doctors’ visits and need financial help there as well. There is broad consensus that we want a healthcare system where people aren’t denied the services they need because they can’t afford them.

While accessibility, timeliness, and affordability are key, there are other aspects of care that get less attention but are just as important: we want care that is safe and effective and produces the best outcomes possible.  It’s great if you can have timely cardiac surgery and pay little or nothing out of pocket. But if you die unnecessarily from a preventable error, you didn’t get what you needed from the healthcare system.

Finally, we want a healthcare system that creates new knowledge so that we get better at caring for sick people. One of my earliest memories of medical school is caring for a young woman, an artist with two small children, who died of a complication of chronic myelogenous leukemia after a bone marrow transplant. Today, her disease could be managed with a simple daily pill that has turned CML into a chronic yet manageable condition. A system that generates new therapies that save lives is critical, and its importance is often overlooked when assessing health system performance.

Health System Organization

So what is the ideal way to organize a healthcare system to accomplish these goals? One school of thought believes that market-based systems are the solution because they rely on competition, customize care for individuals, keep prices down, and allow the highest quality providers to flourish. For others, the answer is a government-run, single-payer system where everyone has equal access, gets comparable quality, and patients don’t have to worry about costs because the government takes care of it. Either approach can be supported with selected data and facts. But as I have looked at health systems from around the globe, one theme becomes obvious: systems organized very differently can achieve comparable levels of performance, and no single approach consistently outperforms the others.

So which countries have the best systems? As the UpShot piece outlines, we did a tournament-style competition where in each round, we had to pick winners and losers. At the end, we were also asked to rank the selected 8 countries based on our overall assessment. To do so, my approach was simple: health systems should be judged not by how they are organized (i.e. markets or government) but by what they produce. How well does the system do what a healthcare system ought to do? That’s the approach I took.

Evaluating Health Systems

That leads us to the next question: what metrics should we use? If you made it to the first day of a health policy 101 class, you learned about two metrics: per capita spending and life expectancy. If you made it to the second class, you learned that, unfortunately, these are far too crude to tell you much about health system performance and do not help generate an actionable set of policy prescriptions. Healthcare spending is driven by many factors, including what is encompassed in spending calculations (research and development? medical education?) and prices (if one country pays its nurses half as much as another, does that mean the first is twice as efficient?). Life expectancy is even more complicated, as it is driven in large part by the behavior, lifestyle, and genetics of the underlying population. As Irene Papanicolas and I point out in our recent JAMA piece, drawing these boundaries when comparing healthcare systems is important.

So if we can’t just look at those metrics, what else should we examine? While one could evaluate literally hundreds of metrics, I prioritized 16 (see Table 1).

None of these is perfect, but they seemed reasonable to me – a few each on access, quality, cost, and innovation. Ultimately, I was interested in assessing performance in areas that are clearly within the purview of the healthcare system: how many people are covered, and covered for what? How quickly can you see someone when you’re sick? How good is the system at taking care of you when something terrible happens, like a stroke or a heart attack? Does the system generate lots of innovation so that everyone’s care gets better over time? I tried not to weigh any one of these too heavily, looking at them holistically instead.

My Rankings

Based on these measures (for country data, see Table 2), my ranking of the selected health systems is as follows:

  1. Switzerland
  2. Germany
  3. U.S.A.
  4. U.K.
  5. France
  6. Australia
  7. Canada
  8. Singapore

A few caveats. First, these are all very good healthcare systems – we’re generally comparing systems that are far superior to much of the rest of the world. Second, there was rarely a clear winner in head-to-head competitions. Switzerland and Germany both have excellent systems, and reasonable people could draw a different conclusion from the same data. I struggled among the U.S., France, Australia, and the U.K., all of which had clear strengths and clear challenges. Singapore lagged behind in large part because there is so little data about its performance – and a lack of data means it might be better than it looks, or it could be worse. I just don’t know.

The ranking of the U.S. above places like France and the U.K. may be surprising. Some people will point, rightly, to the fact that the U.S. has the highest spending in the world yet still has people who are uninsured. The healthcare spending problem of the U.S. is largely a political choice – we have extraordinarily high prices on everything from physician salaries to pharmaceuticals. While some of these high prices may spur innovation (e.g. pharmaceuticals), the cost of spending nearly 20% of our GDP on healthcare means less money for everything else. We could do better with different policy choices.

On the issue of universal coverage, things are a bit more complicated. While it’s narrowly true that the U.S. is the only country here without universal coverage, that framing is too simplistic. First, 91% of Americans are now insured (thanks in part to the ACA), and some countries have universal coverage for their citizens but not necessarily for immigrants or other groups. Second, it is important to consider what is actually covered. While most Americans can get access to the latest treatments, in many countries, access to the most expensive therapies can be difficult or non-existent. I don’t know if we will get to 100% coverage, but we are inching towards it, and I hope that with the next set of policy reforms, we can get into the high 90s. And that would be good.

Finally, if you take a big step back and look at the data, Americans do better than average in timely access, especially to specialty services and “elective” surgery (which is often not that elective).  They tend to be among the leaders in acute care quality, when healthcare means the difference between life and death, although the quality of primary care could surely be better.  And America is the innovation engine of the world, pumping out new drugs and treatments that benefit the whole world.  All of that earns America a high rank in my book – behind Switzerland and Germany but ahead of others. You can disagree but overall, while the U.S. healthcare system has a lot of work ahead, we should not overlook its strengths – and they are sizeable.

So here’s the big picture: when it comes time to measure health system performance, it’s important to think about boundaries (what is the responsibility of the healthcare system and what isn’t).  It’s also important to consider whether the system is delivering what people need: coverage of a broad range of services, especially those that are important for the sickest among us, timely access to affordable, high quality care, and innovation that ensures care gets better over time.  For most people, whether the system is market-based or government-run matters a lot less than whether it’s meeting their needs. And that’s the way it should be.


Readmissions, Observation, and Improving Hospital Care

Reducing Hospital Use

Because hospitals are expensive and often cause harm, there has been a big focus on reducing hospital use. This focus has been the underpinning for numerous policy interventions, the most notable of which is the Affordable Care Act’s Hospital Readmissions Reduction Program (HRRP), which penalizes hospitals for higher than expected readmission rates. The motivation behind HRRP is simple: the readmission rate, the proportion of discharged patients who return to the hospital within 30 days, had been more or less flat for years, and reducing this rate would save money and potentially improve care. So it was big news when, as the HRRP penalties kicked in, government officials started reporting that the national readmission rate for Medicare patients was declining.
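The readmission rate itself is a simple computation. Here is a minimal sketch in Python; the records and their layout are illustrative assumptions, not CMS’s actual claims format:

```python
from datetime import date

# Illustrative discharge records: (patient_id, discharge_date, next_admission or None)
discharges = [
    ("A", date(2015, 1, 2), date(2015, 1, 20)),  # back on day 18 -> counts
    ("B", date(2015, 1, 5), None),               # never returned
    ("C", date(2015, 1, 10), date(2015, 3, 1)),  # back after 50 days -> doesn't count
    ("D", date(2015, 2, 1), date(2015, 2, 15)),  # back on day 14 -> counts
]

def is_30day_readmission(discharge_date, next_admission):
    """A return to inpatient care within 30 days of discharge counts as a readmission."""
    return next_admission is not None and (next_admission - discharge_date).days <= 30

readmitted = sum(is_30day_readmission(d, n) for _, d, n in discharges)
rate = readmitted / len(discharges)
print(f"30-day readmission rate: {rate:.0%}")  # 2 of 4 discharges -> 50%
```

The real CMS measure is condition-specific and risk-adjusted, but the underlying quantity is just this fraction – which is why reclassifying returns as "observation" could, in principle, move the number without changing care.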

Rising Use of Observation Status

But during this time, another phenomenon was coming into focus: increasing use of observation status.  When a patient needs hospital services, there are two options: that patient can be admitted for inpatient care or can be “admitted to observation”. When patients are “admitted to observation” they essentially still get inpatient care, but technically, they are outpatients.  For a variety of reasons, we’ve seen a decline in patients admitted to “inpatient” status and a rise in those going to observation status.  These two phenomena – a drop in readmissions and an increase in observation – seemed related.

I – and others – spoke publicly about our concerns that the drop in readmissions was being driven by increasing observation admissions. An analysis by David Himmelstein and Steffie Woolhandler in the Health Affairs blog suggested that most of the drop in readmissions could be accounted for both by increases in observation status and by increases in returns to the emergency department that did not lead to readmission.  Two months later, a piece by Claire Noel-Miller and Keith Lund, also in the Health Affairs blog, found that the hospitals with the biggest drop in readmissions appeared to have big increases in their use of observation status.  It seemed like much of the drop in readmissions was about reclassifying people as “observation” and administratively lowering readmissions without changing care.

New Data

Now comes a terrific, high quality study in the New England Journal of Medicine that takes this topic head on.  The authors examine directly whether the hospitals that lowered their readmission rates were the same ones that increased their observation status – and find no correlation.  None.  If you’re ever looking for a scatter plot of two variables that are completely uncorrelated, look no further than Figure 3 of the paper.  The best reading of the evidence prior to the study did not turn out to be the truth.  It reminds me of the period we were all convinced, based on excellent observational data, that hormone replacement therapy was lifesaving for women with cardiovascular disease.  And that became the standard of care – until someone conducted a randomized trial, and found that HRT provided little benefit to these patients.  That’s why we do research – it moves our knowledge forward.


Where are we now?

So where does this leave us? Is the ACA’s readmissions policy a home run? Here’s what we know: first, the HRRP has most likely (we have no controls) led to fewer patients being readmitted to the hospital. Second, the HRRP does not seem responsible for the increase in observation stays.

Here’s what we don’t know: is a drop in readmissions a good thing for patients? It may seem obvious that it is, but if you think about it, you realize that the readmission rate is a utilization measure, not a patient outcome. It’s a measure of how often patients use inpatient services within 30 days of discharge. Utilization measures, in and of themselves, don’t tell you whether care is good or bad. So the real question is: has the HRRP improved the underlying quality of care? It might be that we have improved care coordination, communication between hospitals and primary care providers, and follow-up after discharge. That likely happened in some places. Alternatively, it might be that we have just made it much harder for that older, frail woman with heart failure sitting in the emergency room to get admitted if she was discharged in the last 30 days. That too has likely happened in some places. But how much of it is the former versus the latter? Until we can answer that question, we won’t know whether care is better or not.

Beyond understanding why readmissions have fallen, we also don’t know how HRRP has affected the other things that hospitals ought to focus on, such as mortality and infection rates. If your parent was admitted to the hospital with pneumonia, what would be your top priority? Most people would say that they would like their parent not to die. The second might be to avoid serious complications like a healthcare associated infection or a fall that leads to a hip fracture. Another might be to be treated with dignity and respect. Yes, avoiding being readmitted would be nice – but for me at least, it pales in comparison to avoiding death and disability.  We know little about the potential spillover effects of the readmission penalties on the things that matter the most.

So here we are – a good news study that says readmissions are down because fewer people are being readmitted to the hospital, not because people are being admitted to observation status. That’s important.  But the real challenge is in figuring out whether patients are better off.  Are they more likely to be alive after hospitalization? Do they have fewer functional limitations? Less pain and suffering? Until we answer those questions, it’ll be hard to know whether this policy is making the kind of difference we want. And that’s the point of science – using data to answer those questions. Because we all can have our opinions – but ultimately, it’s the data that counts.

Misunderstanding ProPublica: transparency, confidence intervals, and the value of data

In July the investigative journalists at ProPublica released an analysis of 17,000 surgeons and their complication rates. Known as the “Surgeon Scorecard,” it set off a firestorm. In the months following, the primary objections to the scorecard have become clearer and were best distilled in a terrific piece by Lisa Rosenbaum. As anyone who follows me on Twitter knows, I am a big fan of Lisa – she reliably takes on health policy groupthink and incisively reveals that it’s often driven by simplistic answers to complex problems.

So when Lisa wrote a piece eviscerating the ProPublica effort, I wondered: what am I missing? Why am I such a fan of the effort when so many people I admire – from Rosenbaum to Peter Pronovost and, most recently, other authors of a RAND report – are highly critical? When it comes to views on the surgeon scorecard, reasonable people see it differently because they begin with differing perspectives. Here’s my effort to distill mine.

What is the value of transparency?

Everyone supports transparency. Even the most secretive of organizations call for it. But the value of transparency is often misunderstood. There’s strong evidence that most consumers haven’t, at least until now, used quality data when choosing providers.  But that’s not what makes transparency important. It is valuable because it fosters a sense of accountability among physicians for better care. We physicians have done a terrible job policing ourselves. We all know doctors who are “007s” – licensed to kill. We do nothing about it. If I need a surgeon tomorrow, I will find a way to avoid them, but that’s little comfort to most Americans, who can’t simply call up their surgeon friends and get the real scoop. Even if patients won’t look at quality data, doctors should and usually do.

Data on performance changes the culture in which we work. Transparency conveys to patients that performance data is not privileged information that we physicians get to keep to ourselves. And it tells physicians that they are accountable. Over the long run, this has a profound impact on performance. In our study of cardiac surgery in New York, transparency drove many of the worst surgeons out of the system – they moved, stopped practicing, or got better. Not because consumers were using the data, but because when the culture and environment changed, poor performance became harder to justify.

Aren’t bad data worse than no data?

One important critique of ProPublica’s effort is that it represents “bad data” – that its misclassification of surgeons is so severe that it’s worse than having no data at all. Are ProPublica’s data so flawed that they represent “bad data”? I don’t think so. Claims data reliably identify who died or was readmitted. ProPublica used these two metrics – death and readmissions due to certain specific causes – as surrogates for complications. Are these metrics perfect measures of complications? Nope. As Karl Bilimoria and others have thoughtfully pointed out, if surgeon A discharges patients early, her complications are likely to lead to readmissions, whereas surgeon B, who keeps his patients in the hospital longer, will see the complications in-house. Surgeon A will look worse than surgeon B while having the same complication rate. While this may be a bigger problem for some surgeries than others, the bottom line is that for the elective procedures examined by ProPublica, most complications are diagnosed after discharge.

Similarly, Peter Pronovost pointed out that if I am someone with a high propensity to admit, I am more likely to readmit someone with a mild post-operative cellulitis than my colleague, and while that might be good for my patients, I am likely to be dinged by ProPublica metrics for the same complication. But this is a problem for all readmissions measures. Are these issues limitations in the ProPublica approach? Yes. Is there an easy fix that they could apply to address either one of them? Not that I can think of.

But here’s the real question: are these two limitations, or any of the others listed by the RAND report, so problematic as to invalidate the entire effort? No. If you needed a surgeon for your mom’s gallbladder surgery and she lived in Tahiti (where you presumably don’t know anyone) – and Surgeon A had a ProPublica “complication rate” of 20% and surgeon B had a 2% complication rate, without any other information, would you really say this is worthless? I wouldn’t.

A reality test for me came from that New York State cardiac surgery study I mentioned. As part of the study, I spoke to about 30 surgeons with varying performance. Not one said that the report card had mislabeled a great surgeon as a bad one. I heard about how a surgeon had been under stress, or that transparency wasn’t fair, or that mortality wasn’t a good metric. I heard about the noise in the data, but no denials of the signal. In today’s debate over ProPublica, I see a similar theme: lots of complaints about methodology, but no evidence that the results aren’t valuable.

But let’s think about the alternative. What if the ProPublica reports are so bad that they have negative value? While I think this is not true – what should our response be?  It should create a strong impetus for getting the data right. When risk-adjustment fails to account for severity of illness, the right answer is to improve risk-adjustment, not to abandon the entire effort. Bad data should lead us to better data.

Misunderstanding the value of p-values and confidence intervals

[Figure: an example of the confidence intervals displayed for complication rates on the Surgeon Scorecard]

Another popular criticism of the ProPublica scorecard is that its confidence intervals are wide – a line of reasoning that I believe misunderstands p-values and confidence intervals. Let’s return to your mom, who lives in Tahiti and still needs gallbladder surgery. What if I told you that I was 80% sure that surgeon A was better than average, and 80% sure that surgeon B was worse than average? Would you say that is useless information? Because the critique – that the 95% confidence intervals in the ProPublica reports are wide – requires that we be 95% sure about an assertion to reject the null hypothesis. That threshold has a long historical context and is important when the goal is to avoid a type 1 error (don’t label someone as a bad surgeon unless you are really sure he or she is bad). But if you want to avoid a type 2 error (which is what patients want – don’t get a bad surgeon, even if you might miss out on a good one), a p-value of 0.2 and 80% confidence intervals look pretty good. Of course, the critique about confidence intervals comes mostly from physicians, who can get to very high certainty by calling their surgeon friends and finding out who is good. It’s a matter of perspective. For surgeons worried about being mislabeled, 95% confidence intervals seem appropriate. But for the rest of the world, a p-value of 0.05 and 95% confidence intervals are way too conservative.
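To make the confidence-interval point concrete, here is a back-of-envelope sketch. It uses a simple normal-approximation interval – not ProPublica’s actual model – and the case counts and national average are hypothetical:

```python
import math

def complication_ci(complications, cases, z):
    """Normal-approximation confidence interval for an observed complication rate."""
    p = complications / cases
    half_width = z * math.sqrt(p * (1 - p) / cases)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical surgeon: 12 complications in 60 cases (20%), versus a
# hypothetical national average of 10%.
for label, z in [("95% CI", 1.96), ("80% CI", 1.28)]:
    low, high = complication_ci(12, 60, z)
    print(f"{label}: {low:.1%} to {high:.1%}")
# The 95% interval dips just below the 10% average (so we "can't be sure"
# the surgeon is worse), while the narrower 80% interval sits entirely above it.
```

Same data, two conclusions: by the 95% standard the surgeon is indistinguishable from average; by the 80% standard, the one a patient might reasonably prefer, the surgeon looks worse than average.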

A final point about the Scorecard – and maybe the most important:  This is fundamentally hard stuff, and ProPublica deserves credit for starting the process. The RAND report outlines a series of potential deficiencies, each of which is worth considering – and to the extent that it’s reasonable, ProPublica should address them in the next iteration.  That said – a key value of the ProPublica effort is that it has launched an important debate about how we assess and report surgical quality. The old way – where all the information was privileged and known only among physicians – is gone. And it is not coming back. So here’s the question for the critics: how do we move forward constructively – in ways that build trust with patients, spur improvements among providers, and don’t hinder access for the sickest patients?  I have no magic formula. But that’s the discussion we need to be having.

Disclosure: In the early phases of the development of the Surgeon Scorecard, I was asked (along with a number of other experts) to provide guidance to ProPublica on their approach. I did, and received no compensation for my input.

The ProPublica Report Card: A Step in the Right Direction

A controversial report card

[Image: ProPublica's surgeon report card]

Last week, Marshall Allen and Olga Pierce, two journalists at ProPublica, published a surgeon report card detailing complication rates of 17,000 individual surgeons from across the nation. A product of many years of work, it benefited from the input of a large number of experts (as well as folks like me). The report card has received a lot of attention … and a lot of criticism. Why the attention? Because people want information about how to pick a good surgeon. Why the criticism? Because the report card has plenty of limitations.

As soon as the report was out, so were the scalpels. Smart people on Twitter and blogs took the ProPublica team to task for all sorts of reasonable and even necessary concerns. For example, it only covered Medicare beneficiaries, which means that for many surgeries, it missed a large chunk of patients. Worse, it failed to examine many surgeries altogether. But there was more.

The report card used readmissions as a marker of complications, which has important limitations. The best data suggest that while a large proportion of surgical readmissions are due to a complication, readmissions are also affected by other factors, such as how sick the patient was prior to surgery (the ProPublica team tried to account for this), the patient’s race, ethnicity, and social supports – and even the education and poverty level of the patient’s community. I have written extensively about the problems of using readmissions after medical conditions as a quality measure. Surgical readmissions are clearly better, but hardly perfect. The ProPublica team even narrowed the causes of readmission, using expert input, to improve the measure – but even so, it’s hardly ideal. ProPublica produced an imperfect report.

How to choose a surgeon

So what to do if you need a surgeon? Should you use the ProPublica report card? You might consider doing what I did when I needed a surgeon after a shoulder injury two years ago: ask colleagues. After getting input about lots of clinicians, I homed in on two orthopedists who specialized in shoulders. I then called surgeons who had operated with these guys and got their opinions. Both were good, I was told, but one was better. Yelp? I passed. Looking them up on the Massachusetts Registry of Medicine? Seriously? Never crossed my mind.

But what if, just by chance, you are not a physician? What if you are one of the 99.7% of Americans who didn’t go to medical school? What do you do?  If your insurance covers a broad network and your primary care physician is diligent and knows a large number of surgeons, you may get referred to someone right for you. Or, you could rely on word of mouth, which means relying on a sample size of one or two.

So what do patients actually do?  They cross their fingers, pray, and hope that the system will take care of them. How good is that system at taking care of them? It turns out, not as good as it should be. We know that mortality rates vary three-fold across hospitals. Even within the same hospital, some surgeons are terrific, while others? Not so much. Which is why I needed to work hard to find the right orthopedist. Physicians can figure out how to navigate the system. But what about everyone else?

I was on service recently and took care of a guy, Bobby Johnson (name changed, but a real guy), who was admitted yet again for an ongoing complication from his lung surgery. He had missed key events – including his daughter’s wedding – because he was in the hospital with a recurrent infection. He wondered if he would have done better with a different hospital or a different surgeon. I didn’t know how to advise him.

And that’s where ProPublica comes into play. The journalists spent years on their effort, getting input from methodologists, surgeons, and policy experts. In the end, they produced a report with a lot of strengths, but no shortage of weaknesses. But despite the weaknesses, I never heard them question whether the endeavor was worth it at all.  I’m glad they never did.

Because the choice wasn’t between building the perfect report card and building the one they did. The choice was between building their imperfect report card and leaving folks like Bobby with nothing. In that light, the report card looks pretty good. Maybe not for experts, but for Bobby.

A step towards intended consequences

Colleagues and friends that I admire, including the brilliant Lisa Rosenbaum, have written about the unintended consequences of report cards. And they are right. All report cards have unintended consequences. This report card will have unintended consequences. It might even make, in the words of a recent blog post (a smart, witty, but quite critical piece on the ProPublica report card), “some Morbidity Hunters become Cherry Pickers”. But asking whether this report card will have unintended consequences isn’t the right question. The right question is: will it leave Bobby better off? I think it will. Instead of choosing based on a sample size of one (his buddy who also had lung surgery), he might choose based on a sample size of 40 or 60 or 80. Not perfect. Large confidence intervals? Sure. Lots of noise? Yup. Inadequate risk-adjustment? Absolutely. But better than nothing? Yes. A lot better.
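The jump from a sample of one to a sample of 40 or 80 is worth quantifying. Under a rough binomial assumption, with a hypothetical true complication rate of 10%, the standard error of the observed rate shrinks with the square root of the number of cases:

```python
import math

p = 0.10  # hypothetical true complication rate
for n in [1, 40, 60, 80]:
    se = math.sqrt(p * (1 - p) / n)  # standard error of the observed rate
    print(f"n = {n:>2}: standard error = {se:.1%}")
```

A single anecdote carries a standard error of 30 percentage points; 80 cases shrink that to about 3.4 – still noisy, but in a different league.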

All of this gets at a bigger point raised by Paul Levy:  is this really the best we can do? The answer, of course, is no. We can do much better, but we have chosen not to. We have this tool—it’s called the National Surgical Quality Improvement Program (NSQIP). It uses clinical data to carefully track complications across a large range of surgeries and it’s been around for about twenty years. Close to 600 hospitals use it (and about 3,000 hospitals choose not to). And no hospital that I’m aware of makes its NSQIP data publicly available in a way that is accessible and usable to patients. A few put summary data on Hospital Compare, but it’s inadequate for choosing a good surgeon. Why are the NSQIP data not collected routinely and made widely available? Because it’s hard to get hospitals to agree to mandatory data collection and public reporting. Obviously those with the power of the purse—Medicare, for instance—could make it happen. They haven’t.

Disruptive innovation, a phrase coined by Clay Christensen, is usually a new product that, to experts, looks inadequate. Because it is. These innovations are not, initially, as good as what the experts use (in this case, their network of surgeons). They initially dismiss the disrupter as being of poor quality. But disruptive innovation takes hold because, for a large chunk of consumers (i.e. patients looking for surgeons), the innovation is both affordable and better than the alternative. And once it takes hold, it starts to get better. And as it does, its unintended consequences will become dwarfed by its intended consequences:  making the system better. That’s what ProPublica has produced.  And that’s worth celebrating.

Readmissions Penalty at Year 3: How Are We Doing?

A few months ago, the Centers for Medicare and Medicaid Services (CMS) put out its latest year of data on the Hospital Readmissions Reduction Program (HRRP). As a quick refresher – HRRP is the program within the Affordable Care Act (ACA) that penalizes hospitals for higher than expected readmission rates. We are now three years into the program and I thought a quick summary of where we are might be in order.

I was initially quite unenthusiastic about the HRRP (primarily feeling like we had bigger fish to fry), but over time, have come to appreciate that as a utilization measure, it has value. Anecdotally, HRRP has gotten some hospitals to think more creatively, focusing greater attention on the discharge process and ensuring that as patients transition out of the hospital, key elements of their care are managed effectively. These institutions are thinking more carefully about what happens to their patients after they leave the hospital. That is undoubtedly a good thing. Of course, there are countervailing anecdotes as well – about pressure to avoid admitting a patient who comes to the ER within 30 days of being discharged, or admitting them to “observation” status, which does not count as a readmission. All in all, a few years into the program, the evidence seems to be that the program is working – readmissions in the Medicare fee-for-service program are down about 1.1 percentage points nationally. To the extent that the drop comes from better care, we should be pleased.

HRRP penalties began 3 years ago by focusing on three medical conditions: acute myocardial infarction, congestive heart failure, and pneumonia. Hospitals that had high rates of patients coming back to the hospital after discharge for these three conditions were eligible for penalties. And the penalties in the first year (fiscal year 2013) went disproportionately to safety-net hospitals and academic institutions (note that throughout this blog, when I refer to years of penalties, I mean the fiscal years of payments to which penalties are applied. Fiscal year 2013, the first year of HRRP penalties, refers to the period beginning October 1, 2012 and ending September 30, 2013). Why? Because we know that when it comes to readmissions after medical discharges such as these, major contributors are the severity of the underlying illness and the socioeconomic status of the patient. The readmissions measure tries to adjust for severity, but the risk-adjustment for this measure is not very good. And let’s not even talk about SES. The evidence that SES matters for readmissions is overwhelming – yet CMS has become convinced that if a wayward hospital discriminates by providing lousy care to poor people, SES adjustment would give them a pass. It wouldn’t. As I’ve written before, SES adjustment, done right, won’t give hospitals credit for providing particularly bad care to poor folks. Instead, it’ll just ensure that we don’t penalize a hospital simply because it cares for more poor patients.

Surgical readmissions appear to be different. A few papers now have shown, quite convincingly, that the primary driver of surgical readmissions is complications. Hospitals that do a better job with the surgery and the post-operative care have fewer complications and therefore, fewer readmissions. Clinically, this makes sense. Therefore, surgical readmissions are a pretty reasonable proxy for surgical quality.

All of this gets us to year 3 of the HRRP. In year 3, CMS expanded the conditions for which hospitals were being penalized to include COPD as well as surgical readmissions, specifically knee and hip replacements. This is an important shift, because the addition of surgical readmissions should be helpful to hospitals that provide high quality surgical care. Therefore, I would suspect that teaching hospitals, for instance, would fare better under the expanded program than they did when it covered only medical conditions. But we don’t know.

So, with the release of year 3 data on readmissions penalties by individual hospital, we were interested in answering three questions: first, how many hospitals have managed to sustain penalties across all three years? Second, which hospitals have been penalized consistently (all three years), and which have not? And finally, do the penalties appear to be targeting a different group of hospitals in year 3 (when CMS included surgical readmissions) than they did in year 1 (when CMS just focused on medical conditions)?

Our Approach

We began with the CMS data released in October 2014, which list, for each eligible hospital, the penalties it received in each of the three years of the penalty program. We linked these data to several databases with detailed information about hospital characteristics, including size, teaching status, Disproportionate Share Hospital (DSH) Index (our proxy for safety-net status), ownership, region of the country, etc. We ran both bivariate and multivariable models. We show the bivariate models because, from a policy point of view, those are the most salient (i.e. who got the penalties versus who didn’t).

Our Findings

Here’s what we found:

About 80% of eligible U.S. hospitals received a penalty for fiscal year 2015, and 57% of eligible hospitals were penalized in each of the three years. The penalties were not evenly distributed. While 41% of small hospitals received penalties in each of the three years, more than 70% of large hospitals did. There were large variations by region in the likelihood of being penalized every year: 72% of hospitals in the Northeast versus 27% in the West. Teaching hospitals and safety-net hospitals were far more likely to be penalized consistently, as were the hospitals with the lowest financial margins (Table 1).

Table 1: Characteristics of hospitals receiving readmissions penalties all 3 years.


Consistent with our hypothesis, while penalties went up across the board, we found a shift in the relative level of penalties between 2013 (when the HRRP only included medical readmissions) and 2015 (when the program included both medical and surgical readmissions). This really comes out in the data on major teaching hospitals: in 2013, the average penalty for major teaching hospitals was 0.38% (compared with 0.25% for minor teaching and 0.29% for non-teaching hospitals). By 2015, that gap had not just closed but reversed: the average penalty for major teaching hospitals was 0.44% versus 0.54% for non-teaching hospitals. Teaching hospitals received relatively lower readmission penalties in 2015, presumably because of the addition of the surgical readmission measures, which tend to favor high quality hospitals. In the same way, the gap in penalty levels between safety-net hospitals and other institutions narrowed between 2013 and 2015 (Figure).

(Figure panels: hospital size, teaching status, safety-net status)

Figure: Average Medicare payment penalty for excessive readmissions in 2013 and 2015

Note that “Safety-net” refers to hospitals in the highest quartile of disproportionate share index, and “Low DSH” refers to hospitals in the lowest quartile of disproportionate share index.


Your interpretation of these results may differ from mine, but here’s my take. Most hospitals got penalties in 2015, and a majority have been penalized all three years. Who is getting penalized seems to be shifting – away from a program that primarily targets teaching and safety-net hospitals towards one where the penalties are more broadly distributed, although the gap between safety-net and other hospitals remains sizeable. It is possible that this reflects teaching and safety-net hospitals improving more rapidly than others, but I suspect that the surgical readmission measures, which favor high quality (i.e. low mortality) hospitals, are balancing out the medical readmission measures, which, at least for some conditions such as heart failure, tend to favor lower quality (higher mortality) hospitals. Safety-net hospitals are still getting bigger penalties, presumably because they care for more poor patients (who are more likely to come back to the hospital), but the gap has narrowed. This is good news. If we can move forward on actually adjusting the readmissions penalty for SES (I like the approach MedPAC has suggested) and continue to make headway on improving risk-adjustment for medical readmissions, we can then evaluate and penalize hospitals on how well they care for their patients. And that would be a very good thing indeed.


Finding the stars of hospital care in the U.S.

Why do star ratings?

Now we’re giving star ratings to hospitals? Does anyone think this is a good idea? Actually, I do. Hospital rating schemes have cropped up all over the place, and sorting out what’s important and what isn’t is difficult and time consuming. The Centers for Medicare & Medicaid Services (CMS) runs the best known and most comprehensive hospital rating website, Hospital Compare. But, unlike most “rating” systems, Hospital Compare simply reports data on a large number of performance measures – from processes of care (did the patient get the antibiotics in time?) to outcomes (did the patient die?) to patient experience (was the patient treated with dignity and respect?). The measures it focuses on are important, generally valid, and usually endorsed by the National Quality Forum. The one big problem with Hospital Compare? It isn’t particularly consumer-friendly. With so many data points, it might take consumers hours to sort through all the information and figure out which hospitals perform well on which measures and which do not.

To address this problem, CMS just released a new star rating system, initially focusing on patient experience measures. It takes a hospital’s scores on a series of validated patient experience measures and converts them into a single star rating (rating each hospital 1 star to 5 stars). I like it. Yes, it’s simplistic – but it is far more useful than the large number of individual measures that are hard to follow. There was no evidence that patients and consumers were using any of the data that were out there. I’m not sure that they will start using this one – but at least there’s a chance. And, with excellent coverage of this rating system from journalists like Jordan Rau of Kaiser Health News, the word is getting out to consumers.

Our analysis

To understand the rating system a little better, I asked our team’s chief analyst, Jie Zheng, to help us examine who did well and who did badly under the star ratings. We linked the hospital rating data to the American Hospital Association annual survey, which has data on structural characteristics of hospitals. She then ran both bivariate and multivariable analyses looking at a set of hospital characteristics and whether they predict receiving 5 stars. Given that for patients the bivariate analyses are the most straightforward and useful, we only present those data here.

Our results

What did we find? We found that large, non-profit, teaching, safety-net hospitals located in the northeastern or western parts of the country were far less likely to be rated highly (i.e. to receive 5 stars) than small, for-profit, non-teaching, non-safety-net hospitals located in the South or Midwest. The differences were big. There were 213 small hospitals (those with fewer than 100 beds) that received a 5-star rating. Number of large hospitals with a 5-star rating? Zero. Similarly, there were 212 non-teaching hospitals that received a 5-star rating. The number of major teaching hospitals (those that are part of the Council of Teaching Hospitals)? Just two – the branches of the Mayo Clinic located in Jacksonville and Phoenix. And safety-net hospitals? Only 7 of the 800 hospitals (less than 1%) with the highest proportion of poor patients received a 5-star rating, while 106 of the 800 hospitals with the fewest poor patients did. That’s a 15-fold difference. Finally, another important predictor? Hospital margin – the highest-margin hospitals were about 50% more likely to receive a 5-star rating than hospitals with the lowest financial margins.

Here are the data:



There are two important points worth considering in interpreting the results. First, these differences are sizeable. Huge, actually. In most studies, we are delighted to see 10% or 20% differences in structural characteristics between high and low performing hospitals. Because of the approach of the star ratings, especially with the use of cut-points, we are seeing differences as great as 1500% (on the safety-net status, for instance).

The second point is that this is only a problem if you think it’s a problem. The patient surveys, known as HCAHPS, are validated, useful measures of patient experience and important outcomes unto themselves. I like them. They also tend to correlate well with other measures of quality, such as process measures and patient outcomes. The star ratings nicely encapsulate which types of hospitals do well on patient experience, and which ones do less well. One could criticize the methodology for the cut-points that CMS used for determining how many stars to award for which scores. I don’t think this is a big issue. Any time you use cut-points, there will be organizations right on the bubble, and surely it is true that someone who just missed being a 5 star is similar to someone who just made it. But that’s the nature of cut-points – and it’s a small price to pay to make data more accessible to patients.

Making sense of this and moving forward

CMS has signaled that it will be doing similar star ratings for other aspects of quality, such as hospital performance on patient safety. The validity of those ratings will be directly proportional to the validity of the underlying measures used. For patient experience, CMS is using the gold standard. And the goals of the star ratings are simple: motivate hospitals to get better – and steer patients towards 5-star hospitals. After all, if you are sick, you want to go to a 5-star hospital. Some people will be disturbed by the fact that small, for-profit hospitals with high margins are getting the bulk of the 5 stars while large, major teaching hospitals with a lot of poor patients get almost none. It feels like a disconnect between what we think are good institutions and what the star ratings seem to be telling us. When I am sick – or if my family members need hospital care – I usually choose these large, non-profit academic medical centers. So the results will feel troubling to many. But this is not really a methodology problem. It may be that sicker, poorer patients are less likely to rate their care highly. Or it may be that the hospitals that care for these patients are generally not as focused on patient-centered care. We don’t know. But what we do know is that if patients start really paying attention to the star ratings, they are likely to end up at small, for-profit, non-teaching hospitals. Whether that is a problem or not depends wholly on how you define a high quality hospital.

JAMA Forum: Innovating Care for Medicare Beneficiaries: Time for Riskier Bets and Embracing Failure

Of all the pressing challenges in the US health care system, lack of innovation in delivery may be the most important. Indeed, as we come upon the 50th anniversary of Medicare, a few facts seem apparent. What we do for patients—whether they have infectious diseases, heart disease, or cancer—has changed dramatically. Yet, how we do those things—the basic structure of our health care delivery system—has changed very little.

Read more at JAMA Forum…

Changing my mind on SES Risk Adjustment

I’m sorry I haven’t had a chance to blog in a while – I took a new job as the Director of the Harvard Global Health Institute and it has completely consumed my life.  I’ve decided it’s time to stop whining and start writing again, and I’m leading off with a piece about adjusting for socioeconomic status. It’s pretty controversial – and a topic where I have changed my mind.  I used to be against it – but having spent some more time thinking about it, it’s the right thing to do under specific circumstances.  This blog is about how I came to change my mind – and the data that got me there.

Changing my mind on SES Risk Adjustment

We recently had a readmission – a straightforward case, really.  Mr. Jones, a 64 year-old homeless veteran, intermittently took his diabetes medications and would often run out.  He had recently been discharged from our hospital (a VA hospital) after admission for hyperglycemia.  The discharging team had been meticulous in their care.  At the time of discharge, they had simplified his medication regimen, called him at his shelter to check in a few days later, and set up a primary care appointment.  They had done basically everything, short of finding Mr. Jones an apartment.

Ten days later, Mr. Jones was back — readmitted with a blood glucose of 600, severely dehydrated and in kidney failure.  His medications had been stolen at the shelter, he reported, and he’d never made it to his primary care appointment.  And then it was too late, and he was back in the hospital.

The following afternoon, I spoke with one of the best statisticians at Harvard, Alan Zaslavsky, about the case.  This is why we need to adjust quality measures for socioeconomic status (SES), he said.  I’m worried, I said. Hospitals shouldn’t get credit for providing bad care to poor patients.  Mr. Jones had a real readmission – and the hospital should own up to it.  Adjusting for SES, I worried, might create a lower standard of care for poor patients and thus, create the “soft bigotry of low expectations” that perpetuates disparities.  But Alan made me wonder: would it really?

To adjust or not to adjust?

Because of Alan’s prompting, I re-examined my assumptions about adjustment for SES. As he walked me through the data, I concluded that the issue of adjustment was far more nuanced than I had appreciated.

Here’s the key: effective socio-economic adjustment doesn’t reward providers for giving bad care to poor patients. It just ensures that they aren’t penalized for taking care of more of them. In my clinical example, if people like Mr. Jones had a higher readmission rate, adjusting for SES wouldn’t give hospitals credit for lower quality care to poor patients. Done right, it would give credit to hospitals for having more poor patients, and that’s an important difference. Consider three scenarios of hospital performance on readmission rates (modified from our JAMA piece).


In scenarios 1 and 2, let’s assume that patients are readmitted 20% of the time on average, whether or not they’re poor. In scenario 1, Hospital A (a safety-net hospital) has higher readmission rates for everyone. It may have more poor patients, but its readmission rate is high for both poor and non-poor patients. So, compared to Hospital B, it looks worse in both unadjusted and adjusted scores. Adjustment doesn’t help.

In scenario 2, Hospital A has higher readmission rates for its poor patients and therefore has an overall readmission rate of 25%.  Hospital B doesn’t suffer from readmitting its poor patients too often – hence its readmission rate is 20%.  In this case, safety-net hospitals look worse than Hospital B in both unadjusted and adjusted analyses.  Again, adjustment doesn’t help.

In scenario 3, Hospital A and B both struggle with readmissions for their poor patients – as does the rest of the country.  The only thing that differentiates Hospital A from Hospital B is the proportion of poor patients in the hospital.  In this case, adjustment makes a big difference.  By adjusting, we account for the different proportions of poor patients between Hospital A and B.  Adjustment ensures that organizations are judged by how well they care for their patients, not by how many poor patients they have.
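The arithmetic behind scenario 3 can be made concrete with a small sketch. The numbers and the adjustment method (indirect standardization, i.e. an observed-to-expected ratio) are my own illustrative assumptions, not figures from the scenarios above:

```python
# Illustrative sketch of scenario 3 (all numbers are hypothetical): both
# hospitals readmit poor patients at 30% and non-poor patients at 20% --
# exactly the assumed national rates -- and differ only in patient mix.

NATIONAL = {"poor": 0.30, "non_poor": 0.20}  # assumed national readmission rates

def unadjusted_rate(mix, rates=NATIONAL):
    """Crude readmission rate for a hospital with patient mix {group: share}."""
    return sum(share * rates[group] for group, share in mix.items())

def oe_ratio(mix, hospital_rates, national=NATIONAL):
    """Observed/expected ratio: the hospital's crude rate divided by the rate
    expected if it performed at national rates for its own patient mix."""
    observed = sum(share * hospital_rates[g] for g, share in mix.items())
    expected = sum(share * national[g] for g, share in mix.items())
    return observed / expected

hospital_a = {"poor": 0.50, "non_poor": 0.50}  # safety-net: half its patients are poor
hospital_b = {"poor": 0.10, "non_poor": 0.90}

print(round(unadjusted_rate(hospital_a), 2))  # 0.25 -> A looks worse unadjusted
print(round(unadjusted_rate(hospital_b), 2))  # 0.21
# Each hospital performs exactly at the national rate for each group,
# so after adjustment the two look identical:
print(round(oe_ratio(hospital_a, NATIONAL), 2))  # 1.0
print(round(oe_ratio(hospital_b, NATIONAL), 2))  # 1.0
```

With these assumed numbers, the unadjusted comparison (25% vs. 21%) makes Hospital A look worse purely because of its patient mix, while the observed-to-expected ratios are identical: adjustment, done right, doesn’t excuse bad care for poor patients, it simply neutralizes differences in how many poor patients a hospital serves.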

One Size Does Not Fit All

The debate about whether to adjust for socioeconomic status needs to be far more nuanced than it has been to date. Specifically, we must recognize that quality measurement has multiple purposes, and we need to think about each one when deciding whether to adjust or not. If the goal is transparency – letting patients know how they are likely to fare – then the best approach is stratified data. In scenario 3 (where adjustment makes a difference), a poor patient will do about as well at both hospitals – and unadjusted numbers are misleading, because they tell poor patients that Hospital B is better. If Hospital B has a larger co-pay or is out-of-network, you have done real harm by pushing a patient to a more expensive place that doesn’t provide better care.


To push hospitals to improve quality, unadjusted numbers are best.  In all three scenarios, Hospital A should be more motivated to get better than Hospital B because for its patients, it tends to have worse performance.  But in each scenario, the hospitals need stratified data. Without it they will have no idea where to target their efforts.

For penalties, we should use adjusted data.  It will make no difference in scenarios 1 and 2.  But, in scenario 3, it makes little sense to penalize the safety net hospital compared to other hospitals just for taking care of more poor patients.  That’s not a smart policy.  Penalties for bad care for poor patients?  Sure.  Penalties just for caring for more poor patients?  Not so sure.

 A way forward

The bottom line is that the care of poor patients is not evenly distributed across all U.S. hospitals.  Some hospitals have a lot more patients like Mr. Jones than others have.  And caring for people like him, who are homeless and without a social network, is challenging.  None of us are very good at it.  Why penalize the safety-net hospitals just for taking care of more poor patients?

Given the concern that safety-net hospitals may be disproportionately penalized, a bipartisan group of Senators (3 Democrats and 3 Republicans) has signed on to a bill that would require CMS to account for SES when it doles out penalties under the HRRP (Senate Bill 2501). It’s an excellent start.

Adjusting for SES is an acknowledgement that medicine is not the only factor – and indeed may be a relatively minor factor – in health outcomes. For Mr. Jones, homelessness and poverty clearly contributed to his readmission to the hospital. Bad medical care did not. We should have no qualms penalizing safety-net hospitals for providing sub-standard care.  But we just shouldn’t penalize them simply because they have more poor patients.

Penalizing Hospitals for Being Unsafe

Why Adverse Events are a Big Problem

Adverse events – when bad things happen to patients because of what we as medical professionals do – are a leading cause of suffering and death in the U.S. and globally. Indeed, as I have written before, patient safety is a major issue in American healthcare, and one that has gotten far too little attention. Tens of thousands of Americans die needlessly because of preventable infections, medication errors, surgical mishaps, and so forth. As I wrote previously, according to the Office of Inspector General (OIG), when older Americans walk into a hospital, they have about a 1 in 4 chance of suffering some sort of injury during their stay. Many of these injuries are debilitating, life-threatening, or even fatal. Things are not much better for younger Americans.

Given the magnitude of the problem, many of us have decried the surprising lack of attention and focus on this issue from policymakers.  Well, things are changing – and while some of that change is good, some of it worries me.  Congress, as part of the Affordable Care Act, required Centers for Medicare and Medicaid Services (CMS) to penalize hospitals that had high rates of “HACs” – Hospital Acquired Conditions.  CMS has done the best it can, putting together a combination of infections (as identified through clinical surveillance and reported to the CDC) and other complications (as identified through the Patient Safety Indicators, or PSIs).  PSIs are useful – they use algorithms to identify complications coded in the billing data that hospitals send to CMS.  However, there are three potential problems with PSIs:  hospitals vary in how hard they look for complications, they vary in how diligently they code complications, and finally, although PSIs are risk-adjusted, their risk-adjustment is not very good — and sicker patients generally have more complications.

So, HACs are imperfect – but the bottom line is, every metric is imperfect.  Are HACs particularly imperfect?  Are the problems with HACs worse than with other measures?  I think we have some reason to be concerned.

HACs – Who Gets Penalized?

Our team was asked by Jordan Rau of Kaiser Health News to run the numbers. He sent along a database that listed CMS’s calculation of the HAC score for every hospital, flagging the worst-scoring 25%, which were likely to be penalized. So, we ran some numbers, looking at characteristics of hospitals that do and do not get penalized:


These are bivariate relationships – that is, major teaching hospitals were 2.9 times more likely to be penalized than non-teaching hospitals. These comparisons do not simultaneously adjust for the other characteristics because, as a policy matter, it’s the unadjusted value that matters. If you want to understand to what degree academic hospitals are being penalized because they also happen to be large, you need multivariable analyses. So we went ahead and ran a multivariable model (a logistic model including each of the variables above). Even in the multivariable model, the results are qualitatively similar, although not all the differences remain statistically significant.
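For readers curious about the mechanics, a bivariate statement like “2.9 times more likely to be penalized” is simply a risk ratio computed from a 2×2 table (penalized vs. not, teaching vs. non-teaching). The counts below are hypothetical, chosen only to illustrate the calculation; the real figures come from the CMS database described above:

```python
# Minimal sketch of a bivariate risk ratio from a 2x2 table.
# All counts here are made up for illustration, not actual CMS/KHN data.

def risk_ratio(penalized_a, total_a, penalized_b, total_b):
    """Relative risk of penalty in group A versus group B."""
    return (penalized_a / total_a) / (penalized_b / total_b)

# Hypothetical counts: 145 of 250 teaching hospitals penalized (58%),
# 500 of 2,500 non-teaching hospitals penalized (20%).
rr = risk_ratio(145, 250, 500, 2500)
print(round(rr, 1))  # 2.9 with these illustrative counts
```

A multivariable logistic model answers a different question – whether teaching status predicts penalties after holding size, region, and the other characteristics constant – which is why the post reports both.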

What Does This Mean?

So how should we interpret these data?  A simple way to think about it is this:  who is getting penalized?  Large, urban, public, teaching hospitals in the Northeast with lots of poor patients.  Who is not getting penalized?  Small, rural, for-profit hospitals in the South.  Here are the data from the multivariable model:  The chances that a large, urban, public, major teaching hospital that has lots of poor patients (i.e. top quartile of DSH Index) will get the HAC penalty?  62%.  The chances that a small, rural, for-profit, non-teaching hospital in the south with very few poor patients will get the penalty? 9%.

Is that a problem? You could make the argument that these large, Northeastern teaching hospitals are terrible places to get care – while the hospitals that are really doing it well are the small, rural, for-profit hospitals in the South. Maybe. But I suspect this is much more about the underlying patient population and vigilance than actual safety. Beth Israel Deaconess Medical Center (BIDMC) in Boston is one of the very few hospitals in the country with exceptionally low mortality rates across all three publicly reported conditions, and a hospital that I have written about as having great leadership and a laser focus on quality. And yet, it is being penalized as one of the hospitals with, according to the HAC metric, a poor record on safety. So is Brigham and Women’s (though I’m affiliated there, so watch my bias) – a pioneer in patient safety whose chief quality and safety officer is David Bates, one of the nation’s foremost safety gurus. So are the Cleveland Clinic and Barnes Jewish, RWJF Medical Center, LDS Hospital in Salt Lake, and Indiana University Hospital, to name a few.

So what are we to do?  Is this just whining that our metrics aren’t perfect?  Don’t we have to do something to move the needle on patient safety?  Absolutely.  But, we are missing a great opportunity to do something much more useful.  Patient safety as a field has been stuck.  It’s been 15 years since the IOM’s To Err is Human report came out – and by all counts, progress has been painstakingly slow.  Therefore, I am completely on board with the sentiment behind Congressional intent and CMS’s efforts.  We have to do something – but I think we should do something a little different.

If you look across the safety landscape, one thing becomes clear: when we have good measures, we make progress. We have made modest improvements in hospital acquired infections – because of tremendous work by the CDC and its clinically-based National Healthcare Safety Network, which collects good data on patient safety and feeds it back to hospitals. We have also made some progress on surgical complications, partly because a group of hospitals is willing to collect high quality data and feed it back to their institutions. But the rest of the field of patient safety? Not so much. What we need are good measures. And, luckily, there is still a window of opportunity if we are willing to make patient safety a priority.

How to Move Forward

This gets us to the actual solution: harnessing the power of meaningful use in the Electronic Health Records incentive program. We need clinically-based, high quality patient safety metrics. Electronic health records can capture these far more effectively than billing codes can. The federal government is giving out billions of dollars to doctors and hospitals that “meaningfully use” certified EHRs. A couple of years ago, David Classen and I wrote a piece in NEJM that outlined how the federal government, if it wanted to be serious about patient safety, could require that EHR systems measure, track, and feed back patient safety events as part of certification and the requirements for meaningful use. The technology is there. Lots of companies have developed adverse event monitoring tools. It just requires someone to decide that improving patient safety is important – and that clinically-based metrics are useful.

So here we are – HACs.  Well intentioned – and a step forward, I think, in the effort to make healthcare better.  Everyone I know thinks HACs have important limitations – but reasonable people disagree over whether their flaws make them unusable for financial incentives or not.  The good news is that all of us can agree that we can do much better.  And now is the time to do it.

Launching PH555X: Improving Global Health: Focusing on Quality and Safety

Last year, about 43 million people around the globe were injured by the hospital care that was intended to help them; as a result, many died and millions suffered long-term disability. These seem like dramatic numbers – could they possibly be true?

If anything, they are almost surely an underestimate. These findings come from a paper we published last year, funded by and done in collaboration with the World Health Organization. We focused on a select group of “adverse events” and used conservative assumptions to model not only how often they occur, but also with what consequences to patients around the world.

Our WHO-funded study doesn’t stand alone; others have estimated that harm from unsafe medical care is far greater than previously thought.  A paper published last year in the Journal of Patient Safety estimated that medical errors might be the third leading cause of deaths among Americans, after heart disease and cancer.  While I find that number hard to believe, what is undoubtedly true is this:  adverse events – injuries that happen due to medical care – are a major cause of morbidity and mortality, and these problems are global.  In every country where people have looked (U.S., Canada, Australia, England, nations of the Middle East, Latin America, etc.), the story is the same. Patient safety is a big problem – a major source of suffering, disability, and death for the world’s population.

The problem of inadequate health care, its global nature, and the common set of causes that underlie it motivated us to put together PH555X. It’s a HarvardX online MOOC (Massive Open Online Course) with a simple focus: health care quality and safety from a global perspective. I believe that this will be a great course – not because I’m teaching it, but because we have assembled a team of terrific experts. But, let me be clear: putting this MOOC together is unlike any educational experience I have had before.

First, you get to assemble the faculty – and here, I had almost no constraints.  Want to learn about quality measurement?  We have Jishnu Das (World Bank economist whose ground-breaking work includes sending trained, fake patients into doctors’ offices in Delhi) and Niek Klazinga (a Dutch physician who led the creation of the Health Care Quality Indicators for the OECD).  These two guys have thought more deeply and broadly about quality measurement than almost anyone else in the world.  What about the role of leadership?  We have Agnes Binagwaho (Minister of Health, Rwanda) and Julio Frenk (former Minister of Health, Mexico and current dean of the Harvard School of Public Health) speaking about what leadership in quality looks like from a health minister’s perspective.  We have T.S. Ravikumar, the CEO of a massive public hospital system in Pondicherry, India talking about how his decision to prioritize quality transformed his institution.

Sometimes, when you want the best people in the world, you don’t even have to go very far. On patient safety, we only had to cross the street for David Bates, Chief Quality Officer at Brigham and Women’s Hospital and patient safety maven. When we wanted to learn about the empirical basis for the role of management in improving quality, we went across town to Harvard Business School to spend time with Raffaella Sadun. And when we wanted to learn about quality improvement, we only had to cross the Charles River to find Maureen Bisognano, CEO of IHI.

Beyond getting to assemble a world-class faculty, the MOOC is a completely different approach to education.  Because this course has never been offered before, we had the freedom to write a fresh syllabus specifically for online learners. This is not a live course copied onto a web platform. These are not hour-long lectures videotaped from the back of a classroom.  Our lectures are short, pithy conversations on pressing topics.  Instead of asking Professor Ronen Rozenblum, an Israeli expert on patient experience, to lecture about how and why we might measure patient-reported outcomes, we are having a meaningful discussion – back and forth, where I get to challenge his assumptions and let him articulate why patient experience should be considered an integral part of quality and, more importantly, why he cares.

Beyond the discussions, we have interactive sessions where students create content.  One of my favorites? Through this course we will crowdsource the first global “atlas” of healthcare quality.  Let’s be honest: it’s one thing for me to point to individual studies on hospital infections in Canada or India, but right now we have no place to turn if we want to really understand key issues in healthcare quality around the globe and how countries compare to one another.  The goal of this exercise is as simple as it is ambitious. By the end of the course, we will draft a resource that maps out where the world is on the journey toward safe, effective, patient-centered healthcare systems.  It will be created by the collective energy and creativity of the people in the course – students, providers, policy folks, and people simply passionate about improving the delivery of healthcare. It will be a public good for us all to use and improve.

Finally, we have a few enticements to keep everyone engaged.  The attrition rate in these courses tends to be high, so we have a few carrots.  First, halfway through the course we will hold a series of live discussions in which expert faculty will help students solve pressing quality and safety problems in their own institutions.  Have a problem with high infection rates in your ICU?  We will get an expert on nosocomial infections to help you think it through and figure out how to begin to solve it.  Wondering how to keep your family members safe during their hospital visit?  We will have healthcare consumer experts help you navigate those waters.  Second, at the end of the course, students will have the opportunity to submit a 1,200-word thought piece on the importance of improving quality and safety in their own context, whether as a clinician, patient, or health policy expert.  The top three pieces will be published in BMJ Quality & Safety, arguably the most influential global quality and safety journal.

This is a grand experiment in a new way of teaching, engaging, and creating information on the quality and safety of healthcare.  I’m sure there are parts that won’t work, but we will learn along the way.  I’m also sure that the pressing issues facing the US – healthcare that is not nearly as safe, effective, or patient-centered as it should be – are similar to those facing not just other high-income countries, but also low- and middle-income countries.  Thinking globally about these issues, and their adaptable solutions, can help us all deliver better care.

Quality needs to be on the global health agenda.  Don’t believe me?  Take the course.