Author Insights: Studies That Show Large Treatment Effects Are Usually Wrong

An analysis by John P. A. Ioannidis, MD, DSc, of Stanford University, and colleagues suggests that physicians and patients should view studies reporting large treatment effects with skepticism.

Most medical interventions have modest effects, and studies that suggest big effects are usually small and are eventually proven wrong, according to an analysis of the medical literature published today in JAMA.

Studies that appear to demonstrate that a medication or other therapy has a large effect on a health condition often make headlines and may lead clinicians and patients to embrace these interventions. A team of researchers recently scoured the medical literature for reports of clinical trials with promising results to assess the strength of such studies and the durability of their findings. They found that most studies reporting a large treatment effect are very small, which increases the odds that the findings are due to chance, and in the long run the large effect is usually not validated; subsequent studies usually show a much more modest effect. Additionally, few of these promising trials indicate that the intervention under study does much to prolong life; instead, they tend to find large effects on laboratory measures of health.

John P. A. Ioannidis, MD, DSc, of Stanford University, discussed his findings with news@JAMA.

news@JAMA: Why did you and your colleagues decide to conduct this study?

Dr Ioannidis: It’s an effort to see whether medical interventions can have large effects. Even though we see lots of single trials with large effects, if other studies try to replicate the results, the effects are smaller. To me it’s a lesson of modesty about medical interventions. The large majority of medical interventions have small or modest effects.

news@JAMA: Why do you think so few large effects were replicated?

Dr Ioannidis: I think it’s mostly a combination of luck and statistics. If you have thousands of studies, it’s possible by chance that some will give a large effect; it’s an exaggerated result that is unlikely to retain its strength in subsequent studies. Many large effects are statistical artifacts. Some may be the result of bias in the trial.
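
As a rough illustration of how chance alone can produce large apparent effects, the minimal Python sketch below simulates many small two-arm trials of a treatment whose true benefit is modest; the event rates, trial size, and the cutoff used to call an effect "large" are assumed for illustration and are not taken from the study.

```python
# Minimal sketch (assumed numbers, not from the JAMA analysis): simulate many small
# two-arm trials of a treatment with a modest true benefit and count how often an
# apparently large effect shows up anyway.
import random

random.seed(1)

TRUE_CONTROL_RISK = 0.20   # assumed baseline event rate
TRUE_TREATED_RISK = 0.18   # assumed modest true benefit (true odds ratio ~0.88)
N_PER_ARM = 50             # a small trial
N_TRIALS = 10_000          # number of simulated trials

def count_events(risk, n):
    """Number of patients in one arm who have the event."""
    return sum(random.random() < risk for _ in range(n))

large_effects = 0
for _ in range(N_TRIALS):
    a = count_events(TRUE_TREATED_RISK, N_PER_ARM)   # events, treated arm
    c = count_events(TRUE_CONTROL_RISK, N_PER_ARM)   # events, control arm
    # Odds ratio with a 0.5 continuity correction to avoid division by zero.
    odds_ratio = ((a + 0.5) * (N_PER_ARM - c + 0.5)) / ((N_PER_ARM - a + 0.5) * (c + 0.5))
    if odds_ratio < 0.5:   # assumed cutoff for an "apparently large" benefit
        large_effects += 1

print(f"True odds ratio ~0.88; share of small trials with an odds ratio "
      f"below 0.5: {large_effects / N_TRIALS:.1%}")
```

Repeating one of those extreme trials with more patients would, on average, pull the estimate back toward the modest true effect, which matches the pattern Dr Ioannidis describes of effects shrinking in subsequent studies.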

news@JAMA: Why were large effects mostly demonstrated in small studies?

Dr Ioannidis: I think it’s mostly a statistical effect. In a small study, if just a couple more people have an event, you may get a huge odds ratio. But in a large trial, the same difference is not going to produce a large effect. Small studies also may be more easily affected by bias. But statistics alone would do the trick.
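
To make that arithmetic concrete, here is a small hypothetical example (the counts are assumed, not drawn from any trial in the analysis): the same absolute difference of four events yields a dramatic odds ratio in a tiny trial but an essentially null one in a large trial.

```python
# Hypothetical counts for illustration only: how the same absolute difference in
# events translates into very different odds ratios at different trial sizes.
def odds_ratio(events_a, n_a, events_b, n_b):
    """Odds ratio comparing arm A with arm B (assumes no zero cells)."""
    return (events_a / (n_a - events_a)) / (events_b / (n_b - events_b))

# Tiny trial: 2 vs 6 events out of 20 per arm (4 extra events in the control arm).
print(round(odds_ratio(2, 20, 6, 20), 2))          # ~0.26, an apparently huge benefit

# Large trial: 200 vs 204 events out of 2000 per arm (again 4 extra events).
print(round(odds_ratio(200, 2000, 204, 2000), 2))  # ~0.98, essentially no effect
```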

news@JAMA: You found few studies that demonstrated a treatment could extend life. Were there any notable exceptions?

Dr Ioannidis: For mortality, we could find only 1 situation with a validated large treatment effect: extracorporeal membrane oxygenation for severe respiratory failure in newborns. This was really the exception. There may be other interventions with equally huge mortality benefits, but no one has dared test them. For example, a randomized trial [to evaluate the effects on mortality] of resuscitating someone who has stopped breathing is not really possible. But it is unlikely that there are many interventions with a benefit so clear-cut that no one is daring to test them.

news@JAMA: Why do you think so few studies showed that an intervention could extend life?

Dr Ioannidis: The most likely reason is that medical interventions don’t have an impact on mortality most of the time, and when they do, their benefits are more modest.

news@JAMA: If these interventions have small effects, are they worthwhile?

Dr Ioannidis: It’s not that we’re not making progress, but that newer interventions increase survival in incremental amounts. With treatment of myocardial infarction, we have seen a gradual decline of mortality. But it has been achieved with incremental steps, with about a dozen interventions that each decrease mortality by 10% to 15%. Same thing with cancer. Maybe we should be happy with these incremental benefits and not be misled by large treatment effects in studies.
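
As a back-of-the-envelope sketch of how such increments add up, and assuming only for illustration that the relative risk reductions combine multiplicatively (an assumption of this sketch, not a claim made in the interview):

```python
# Rough illustration with an assumed multiplicative combination of relative risks.
low_rr, high_rr = 0.90, 0.85   # per-intervention relative risk (10% and 15% reductions)
n_interventions = 12           # "about a dozen interventions"

print(f"Combined relative risk: {low_rr ** n_interventions:.2f} "
      f"to {high_rr ** n_interventions:.2f}")
# Roughly 0.28 down to 0.14 of the original mortality: a large cumulative benefit
# built from individually modest steps.
```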

news@JAMA: How do you think clinicians and patients should use this information?

Dr Ioannidis: Clinicians and patients should be a little conservative when they see a small trial with a very large effect. They may need to wait to see if the result is validated by other studies. Most of the time the effect size will be smaller in future studies.

news@JAMA: What about researchers? How should they use this information?

Dr Ioannidis: For researchers, this means maybe we should try to run large trials with better designs and less bias. It’s not that we don’t have many clinical trials—there are millions—but they are usually small. It’s not always possible to run megatrials, but modestly sized trials may be better. Otherwise, it becomes kind of a random toss.

Categories: Evidence-Based Medicine, Statistics and Research Methods
