Public reporting of the quality of care delivered by physicians, hospitals, and other health care organizations has been around for a while. Some of the earliest efforts began in the 1990s, when the New York State Department of Health began reporting risk-adjusted mortality rates for surgeons performing cardiac surgery in that state. The early reports could be obtained by mailing a request to the Department of Health, which would send along a paper copy of the latest data.
Over time, as technology improved, so did the breadth and depth of public reporting. By 2004, the Centers for Medicare and Medicaid Services (CMS) was reporting performance data for nearly every hospital in the country, dozens of states were reporting their own data, and many private entities were publicly grading hospitals. Despite the proliferation of public reporting websites, CMS’ Hospital Compare uses the most validated set of metrics available and has remained the most comprehensive resource.
There’s been just one problem: it’s unclear whether anyone actually uses it. Physicians and hospitals seem to use it to see how they compare with their competitors, but there’s no evidence that consumers use it. And that’s not surprising, because Hospital Compare is difficult to navigate, presenting performance data based on dozens of metrics in ways that are technically correct but incomprehensible for most consumers.
Making Reporting Accessible
So in an effort to make Hospital Compare data more accessible, CMS launched the stars program. The notion was simple: grade all the hospitals using a 1- to 5-star rating, in the way we grade restaurants or hotels. Although a restaurant’s most important qualities can be boiled down to 1 or 2 things (food and service), how do we best capture the multifaceted nature of hospital quality? CMS began by focusing on 1 domain (patient experience), with the expectation that it would build from there. In 2015, the agency released its first ratings, assigning each hospital 1 to 5 stars based on patient experience scores.
The approach has been controversial, with some arguing that patient experience is a poor measure of quality and others suggesting that stars oversimplify the complexity of hospital care. Thus, a crucial question remains: is this rating useful for consumers? Or might it actually do more harm than good?
To address this issue, my colleagues and I recently published a study in JAMA Internal Medicine that answered a simple question: if a patient used the star ratings, would he or she end up at a lower mortality hospital? Although there are many features of hospital quality that matter, it’s hard to imagine one that is more important than risk-adjusted mortality. Patients want to avoid complications and to be treated with dignity and respect, but nothing matters as much as avoiding premature death. And some hospitals are better at that than others. Risk-adjusted mortality rates are the best measure of hospital quality we currently have.
We examined whether, holding all other factors constant, picking a 5-star hospital would lead patients to a hospital with lower mortality than that of a 1-star hospital. It turns out that it does, by a lot. The effect size is substantial: there is a 1.4 percentage point difference in mortality between a 5-star hospital (mortality rate 9.8%) and a 1-star hospital (mortality rate 11.2%), with a monotonic relationship (more stars, lower mortality). For every 70 patients shifted from a 1-star hospital to a 5-star hospital, we would save 1 life. That's an important effect.
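The arithmetic behind that "1 life per 70 patients" figure is the familiar number-needed-to-treat calculation: the reciprocal of the absolute difference in mortality rates. A minimal sketch, using only the rates quoted above:

```python
# Mortality rates quoted in the study discussed above
one_star_mortality = 0.112   # 11.2% at 1-star hospitals
five_star_mortality = 0.098  # 9.8% at 5-star hospitals

# Absolute risk difference: 1.4 percentage points
risk_difference = one_star_mortality - five_star_mortality

# Patients who must shift from a 1-star to a 5-star hospital
# to avert one death (analogous to number needed to treat)
patients_per_life_saved = 1 / risk_difference

print(round(risk_difference, 3))       # 0.014
print(round(patients_per_life_saved))  # 71, i.e. roughly 70
```

The reciprocal comes out to about 71, which the article rounds to 70.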
When Are Stars Useful? And When Are They Not?
A few more things worth reflecting on. Our model held a lot of factors constant. We adjusted for hospital size, teaching status, location (urban vs rural), and even local health care markets (as measured by hospital referral region). This is likely the most patient-centered view. Patients aren’t likely to use stars in a vacuum and are likely choosing among a small subset of similar hospitals.
But what if patients ignored everything else and just focused on the stars? Would they still be useful? When we reran the analysis without accounting for size, teaching status, and other factors, mortality rates adjusted only for risk were virtually identical for all hospitals (10.8% to 10.9%). We saw no relationship between number of stars and mortality rates.
This means that stars aren’t a substitute for other information. For example, we know that for certain major conditions, large, teaching hospitals may have better outcomes. Patients shouldn’t ignore that. But when choosing among large teaching hospitals in their region, for instance, stars can be helpful. Holding those other factors constant, stars help patients identify lower-mortality hospitals.
Why are stars, which are based on patient experience, so helpful? Stars likely measure the effectiveness of an organization's underlying management and culture. Hospitals' performance on patient experience is influenced by a variety of factors, such as patients' socioeconomic status or severity of illness. That's why it's not useful to compare the patient experience scores (and thus, the star ratings) of a small rural hospital to an urban teaching hospital. They care for very different populations. But among urban teaching hospitals, for instance, those with better patient experience are likely better managed and appear to have lower mortality.
New Star Ratings Coming
CMS recently announced that the agency plans to release a new star rating that will be far more comprehensive, combining approximately 60 different measures into a single star rating. Will it be as useful to consumers? It depends on how well it holds up to examination. The ratings combine useful measures, such as mortality and patient experience, with flawed ones, such as patient safety indicators calculated from claims data. CMS might have done better to stop with the patient experience stars; running contradictory star ratings side by side risks creating more confusion and diluting the progress made so far.
In the complex world of measuring hospital quality, CMS’ system of hospital star ratings based on patient experience scores has been a good step forward. If used correctly, it can help steer patients to the right hospitals. But it has to be understood and used in context—namely, when comparing similar hospitals. The new ratings CMS is about to launch are far more comprehensive, including many more metrics. This may seem like a good idea, but it’s worth remembering that when it comes to quality measures, as in so many things in life, more isn’t better. Better is better. We need to focus on what we can measure well and, most important, focus on what matters most to patients.
About the author: Ashish K. Jha, MD, MPH, is K. T. Li Professor of International Health and Health Policy at the Harvard T. H. Chan School of Public Health and a practicing internist at the Veterans Affairs Boston Healthcare System. He received his doctor of medicine from Harvard Medical School and was trained in internal medicine at the University of California, San Francisco. He received his master’s in public health from Harvard School of Public Health. Dr Jha’s major research interests lie in improving the quality and costs of health care. His work has focused on 4 primary areas—public reporting, pay for performance, health information technology, and leadership—and the roles they play in fixing the US health care system.
About The JAMA Forum: JAMA has assembled a team of leading scholars, including health economists, health policy experts, and legal scholars, to provide expert commentary and insight into news that involves the intersection of health policy and politics, economics, and the law. Each JAMA Forum entry expresses the opinions of the author but does not necessarily reflect the views or opinions of JAMA, the editorial staff, or the American Medical Association.