Author Insights: Best Type of Anesthesia for Hip Fracture Surgery? New Study Sheds Light on Debate

Mark Neuman, MD, MSc, discusses the potential benefits of regional anesthesia over general anesthesia during hip fracture surgery. Image from author.

Because hip fractures are a major cause of disability and mortality in the elderly population, determining what constitutes optimal surgical and postsurgical care for patients who undergo hip fracture surgery remains a key focus of clinical research. One area of debate concerns whether patients undergoing hip fracture surgery fare better with regional (spinal or epidural) or general anesthesia.

Today, new findings released in JAMA suggest that compared with general anesthesia, regional anesthesia may be associated with a shorter hospital stay but not with a lower mortality rate.

To study the association between the type of anesthesia used during hip fracture surgery and surgical outcomes (namely, the length of hospital stay and the risk of death within 30 days of surgery), researchers from the University of Pennsylvania collected information on more than 50 000 patients who underwent hip fracture surgery in New York state from 2004 through 2011. However, this question is difficult to study because patients who receive regional anesthesia tend to be sicker than those given general anesthesia (regional anesthesia is thought to be safer and to have fewer adverse effects than general anesthesia), and this imbalance may bias the results.

To reduce this potential bias in their study, the investigators used a special analytic method called instrumental variable analysis. Their analysis used the distance the patients lived from the hospital where they received surgery as the “instrument” of comparison, so that instead of comparing outcomes for regional vs general anesthesia per se, the results compared outcomes for those who lived near hospitals specializing in regional anesthesia vs those who lived near hospitals specializing in general anesthesia. Comparisons using this distance as an “instrumental variable” theoretically control for unknown imbalances between the 2 patient populations, something that is often difficult to do in observational research when randomization is not possible.
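To make the logic of this approach concrete, the sketch below simulates toy data and recovers a treatment effect with two-stage least squares (2SLS), the standard estimator behind instrumental variable analysis. It is a minimal illustration only: the variable names, effect sizes, and simulated data are assumptions for the example and are not taken from the study or its actual analysis.

```python
# Minimal, hypothetical sketch of instrumental variable analysis via
# two-stage least squares (2SLS). All data are simulated; the variable
# names and effect sizes are illustrative, not the study's actual data.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Unmeasured confounder: sicker patients are steered toward regional
# anesthesia and also tend to stay longer, biasing a naive comparison.
illness = rng.normal(size=n)

# Instrument: relative distance to a hospital specializing in regional
# anesthesia. Assumed unrelated to illness severity (the key IV assumption).
distance = rng.normal(size=n)

# Treatment assignment depends on both the instrument and the confounder.
regional = (0.8 * distance + illness + rng.normal(size=n) > 0).astype(float)

# True (simulated) effect of regional anesthesia on length of stay: -0.6 days.
length_of_stay = 6.0 - 0.6 * regional + 1.5 * illness + rng.normal(size=n)

# Naive comparison: biased because sicker patients receive regional anesthesia.
naive = length_of_stay[regional == 1].mean() - length_of_stay[regional == 0].mean()

# Stage 1: predict treatment from the instrument alone.
X1 = np.column_stack([np.ones(n), distance])
regional_hat = X1 @ np.linalg.lstsq(X1, regional, rcond=None)[0]

# Stage 2: regress the outcome on the predicted treatment.
X2 = np.column_stack([np.ones(n), regional_hat])
effect = np.linalg.lstsq(X2, length_of_stay, rcond=None)[0][1]

print(f"naive difference: {naive:+.2f} days")   # biased: regional looks worse
print(f"2SLS estimate:    {effect:+.2f} days")  # close to the true -0.6 days
```

In this toy example, the naive comparison makes regional anesthesia look worse even though its simulated effect is beneficial, mirroring the selection-bias problem the investigators describe; the instrument-based estimate recovers the effect because the simulated distance is independent of illness severity.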

Results showed that there was no difference in mortality between individuals who lived near a hospital specializing in regional anesthesia vs one specializing in general anesthesia (5.4% vs 5.8%, respectively). However, a difference in length of hospital stay was observed: regional anesthesia was associated with a 0.6-day shorter stay than general anesthesia.

Lead author Mark Neuman, MD, MSc, discusses these findings with news@JAMA.

news@JAMA: Why did you perform this study?

Dr Neuman: The issue of what kind of anesthesia is best for patients undergoing hip fracture surgery has been a question for a long time. A lot of people are interested in this question because over 300 000 hip fractures a year happen in the United States, and over a million happen worldwide—it’s a common problem. Furthermore, almost all hip fracture patients get surgery, which means they need anesthesia. There has been some research suggesting that regional anesthesia may be better for hip fracture surgery, but the question remains unresolved.

news@JAMA: In your study, you used a unique method called instrumental variable analysis to look at this question. Can you briefly describe what this type of analysis is and why you decided to do it?

Dr Neuman: Instrumental variable analysis has been used in economics research in the past and is now becoming more popular in clinical research. The basic issue that it tries to address is one of selection bias. In this case, the problem in comparing outcomes from regional versus general anesthesia is that patients who get regional anesthesia tend to be sicker than those who get general anesthesia, as it’s thought that there are fewer risks and side effects with regional anesthesia compared with general. So, if you were just to look at outcomes, regional anesthesia would probably be associated with worse outcomes. Instrumental variable analysis is a way of addressing that kind of selection bias by using an unrelated “instrument” as a natural source of randomization.

In our study, we used the distance from where patients lived to the closest hospital as our instrument. Patients don’t choose to live in a certain neighborhood based on what kind of anesthesia the closest hospital in their neighborhood specializes in. Therefore, patients who live closer to hospitals specializing in regional anesthesia should not be sicker than those living closer to hospitals specializing in general anesthesia. Using distance as an instrument for comparison thus allows for a less biased comparison of the 2 types of anesthesia.

news@JAMA: Using this analysis, the results of your study showed no difference in mortality associated with the 2 types of anesthesia, but a slightly decreased length of stay associated with the regional anesthesia group. Were these results surprising?

Dr Neuman: The results were a bit surprising to us. The fact that the length of stay was shorter by about half a day was interesting. Even though this may not seem like a big difference, with instrumental variable analyses, the bar for finding any difference at all is really set quite high. So the fact that there was even this amount of difference is encouraging that there may be a real difference in outcomes.

Dr Neuman: In terms of mortality, we didn’t find a difference favoring regional anesthesia, as some studies in the past have. But I don’t think the results rule out the possibility that a difference exists, given that the direction of the results still favored regional anesthesia, and again, the bar with instrumental variable analysis is set fairly high. Therefore, this study adds a piece of information to this body of evidence, but the evidence is far from definitive. It highlights the need for a well-designed randomized trial to further study this issue.

news@JAMA: What would your take home message be at this point for patients who need surgery for hip fracture and have questions about what type of anesthesia they should get?

Dr Neuman: I would say that the bottom line is that we still have incomplete information. The evidence from our study and other studies suggests that regional anesthesia might offer benefits, but it is not strong enough to make a definitive statement. The decision about the type of anesthesia really depends on the hospital, the surgeon, and the culture of practice. In the United States, about 80% of patients get general anesthesia and 20% get regional anesthesia for hip fracture surgery. In the United Kingdom, it’s about 50/50.

news@JAMA: So whose choice should it be? What is the ultimate goal of doing a randomized trial to answer this question?

Dr Neuman: It should always be the informed patient’s choice. At the end of the day, even if we prove with a randomized trial that regional anesthesia has lower mortality than general anesthesia, some patients might still say, “I don’t feel comfortable with regional anesthesia, and I will take that mortality risk,” which is perfectly reasonable. What we’re trying to determine is the true magnitude of these risks and benefits, so we can present these numbers accurately to patients when providing them with the opportunity for informed decision making.

Author Insights: Published Studies Often Conflict With Results Reported to ClinicalTrials.gov

Joseph S. Ross, MD, MHS, of Yale University School of Medicine and his colleagues found discrepancies between the reporting of results in journals and in ClinicalTrials.gov. Image: Yale University

Study results published in major medical journals often conflict with the data their authors have submitted to ClinicalTrials.gov, according to an analysis published in JAMA today.

The ClinicalTrials.gov registry, maintained by the National Library of Medicine, was created to help improve transparency in the medical literature by ensuring that all results of clinical trials, whether published or not, are archived in a single repository. A 2007 law mandated that researchers post results of studies on all products regulated by the US Food and Drug Administration (FDA) within 12 months of study completion. Many journals have also pledged to require their authors to report their findings in the registry. But numerous problems with the registry have been documented since its creation, including a failure of many researchers to report their results and sloppy data entry by investigators.

A new analysis by Joseph S. Ross, MD, MHS, an assistant professor of medicine at Yale University School of Medicine, and his colleagues raises questions about the accuracy of what is reported in the registry and in the medical literature. The team compared the results of 96 trials published in top-tier medical journals, including JAMA, the New England Journal of Medicine, and the Lancet, with the results of those trials reported in ClinicalTrials.gov. They found at least 1 discrepancy in the results reported for 93 of the trials. Results matched in both the registry and the journal article in only about half the cases.

Ross discussed the findings with news@JAMA.

news@JAMA: Why did you choose to do this study?

Dr Ross: Our research group is interested in thinking of ways to improve the quality of clinical research. When the Food and Drug Administration Amendments Act was passed, requiring results reporting [to the ClinicalTrials.gov registry], we were interested in how that would play out. There have been studies about how compliant researchers are with this requirement. We wanted to look at how accurate the reported findings are. By comparing the reported results to published trials, we wanted to see how well it was working. What we found was a surprise.

news@JAMA: Why were the results surprising?

Dr Ross: We found important discrepancies between the results reported in ClinicalTrials.gov and the published results. We don’t know which is right. There were lots of end points reported in 1 source that weren’t reported in the other.

news@JAMA: Can you give an example?

Dr Ross: We started by looking at the primary end points published in high-impact journals and what end points were reported in ClinicalTrials.gov. Of 90-some-odd trials, there were 150 to 160 primary end points; 85% were described in both sources, 9% only in ClinicalTrials.gov and 6% only in the publications.

For the more than 2000 secondary end points, 20% were reported only in ClinicalTrials.gov and 50% only in publications. Only 30% were described in both sources.

You see that only part of the information is available in 1 source. We need to make the sources as complete as possible. The publications need to link back to ClinicalTrials.gov because they often don’t include all the end points.

news@JAMA: Why might there be such a difference?

Dr Ross: There are a lot of potential explanations.

More end points were reported in the published papers than in ClinicalTrials.gov. This suggests that authors are reporting end points in the paper that weren’t predetermined but that make the results look better. That can skew the literature.

news@JAMA: Could edits made by the journals, such as requests for more information or new analyses, or typographical errors account for some discrepancies?

Dr Ross: It could be editing. An authorship team submits the results and these are publications that have strong editorial staffs. There could be slightly different approaches in analysis submitted to the 2 sources.

Some are typographical errors. For example, 1 study reported a hazard ratio of 4 in ClinicalTrials.gov instead of the hazard ratio of 2 in the study [the hazard ratio and standard deviation were transposed]. That perverts the study result.

news@JAMA: What can be done to improve the accuracy of results reporting?

Dr Ross: These results are increasingly being used by researchers and in meta-analyses; we want them to be accurate. The journals pay a large staff of full-time editors to make sure these studies don’t have errors, but ClinicalTrials.gov has a relatively small staff. We may need a larger endeavor than what the National Library of Medicine originally envisioned.

A third of the discordant results led to a different interpretation of the trial. This is a problem we need to be attending to. We studied the highest-tier journals, so this is likely the best-case scenario. These are likely the highest-achieving researchers. Who knows what’s happening with lower-tier journals?

Author Insights: Rapid Drug Approvals Leave Many Safety Questions Unanswered

An analysis by Thomas J. Moore, AB (above), a senior scientist at the Institute for Safe Medication Practices, and Curt Furberg, MD, PhD, of the Wake Forest School of Medicine in Winston-Salem, North Carolina, found that studies used for expedited drug approvals are so small they may be unable to answer key safety questions.

The US Food and Drug Administration (FDA) has created fast-track approval processes to speed certain drugs to market, but an analysis of these expedited approvals finds they often leave important safety questions unanswered. The analysis was published today in JAMA Internal Medicine.

To help expedite approval of drugs, the FDA has created processes that waive some of the requirements that are part of a standard drug approval. These expedited reviews, popular with industry and patient groups, are used for drugs that the FDA determines represent “a significant therapeutic advance” or that fill unmet needs. The Obama administration has also proposed additional ways to speed the pace of drug approval.

But an analysis of the differences between standard and fast-track reviews by Thomas J. Moore, AB, a senior scientist at the Institute for Safe Medication Practices (ISMP) in Alexandria, Virginia, and Curt Furberg, MD, PhD, of the Wake Forest School of Medicine in Winston-Salem, North Carolina, found that although fast-track approvals may shave about 2½ years off approval time, they also provide less information about the safety and efficacy of the drugs.

Moore and Furberg examined 20 drugs approved by the FDA in 2008, including 8 that received expedited reviews and 12 that received standard reviews, finding that the expedited drugs took a median of 5.1 years of clinical development to reach approval compared with 7.5 years for the drugs undergoing standard approval. But the expedited drugs were tested on far fewer patients—a median of 104 patients, compared with a median of 580 patients for the standard review drugs. Safety problems emerged after approval for drugs in both categories, but many safety questions that might have been resolved by postmarketing studies by the FDA remain unanswered, as less than one-third of 85 such studies had been completed by 2013.

Moore discussed the implications of the study’s findings with news@JAMA.

news@JAMA: Why did you decide to do this study?

Thomas Moore: The question this study was trying to address is: are novel drugs today tested enough? We found that many questions are left unanswered.

news@JAMA: What kinds of questions are going unanswered with the expedited reviews?

Thomas Moore: As trials get smaller or shorter, you know less about critical issues, like whether there is target organ toxicity, whether certain subpopulations develop adverse events, what the contraindications are, and how the response differs between women and men. The more you look, the more problems you identify.

news@JAMA: How long does it take to get these answers after approval?

Thomas Moore: For novel treatments, the drugs are developed and approved very quickly, in about 5 years. But answers to unanswered safety questions come slowly after approval. Two studies, one by ISMP and one by the FDA, have found that significant safety warnings emerge at a median of 11 years after approval. We are approving drugs quickly, but we are taking a very long time to address the questions we left on the table.

news@JAMA: What implications do your findings have for proposals to further speed the approval process?

Thomas Moore: The FDA is advancing several proposals that would further reduce the amount of preapproval testing. Is this movement in the wrong direction? This is an issue for the medical community to debate and think about. Physicians and patients need to think about whether we want drugs quicker or whether we want more study so we know how to use them wisely.

news@JAMA: What do you think physicians and patients should know about drugs that have undergone expedited approval?

Thomas Moore: Physicians need to be aware that when they use novel drugs approved under expedited review, the safety information available is much more limited than for other drugs they use. That means they should use these drugs with more caution and extra vigilance.

What patients need to realize is that even experienced physicians can’t always tell if a drug is working. We have one really good tool to assess drug effectiveness and safety: randomized controlled clinical trials. The sad story we have witnessed is that often patients want new drugs and are willing to take risks to get them, but then hundreds of thousands of dollars may be spent on the drug before we realize it doesn’t work or patients [are harmed]. Patients need to be aware of the importance of clinical testing so they can benefit from drugs and reduce their chance of getting hurt.

The wise and safe use of drugs requires clinical testing.

Clinicaltrials.gov Database Reveals “Sausage Making” in Clinical Research

Deborah Zarin, MD, director of clinicaltrials.gov, raised concerns during a conference this week about the “casualness” of some clinical investigators about research involving humans. Image: teekid/iStockphoto.com

With more than 150 000 clinical trials now registered, clinicaltrials.gov, the database created to increase transparency in clinical research, is providing some unflattering insights, according to research findings reported at the Seventh International Congress on Peer Review and Biomedical Publication in Chicago this week.

No one anticipated the role of clinicaltrials.gov “as a window into the sausage factory,” said Deborah Zarin, MD, director of clinicaltrials.gov, which is run by the US National Library of Medicine (NLM) at the National Institutes of Health.

The database, which began including a summary of study results in 2009, was intended to increase access to data from clinical trials of medical interventions, to reduce selective publication of results to make an intervention look more effective, and to hold researchers more accountable. According to Zarin, the effort has made it possible to probe how closely scientists are following their protocols and to assess the reliability of results reported in the clinical literature—research that was impossible to carry out before the advent of the database.

For example, Jessica E. Becker, of Yale University School of Medicine, compared study results for 95 clinical trials reported in the clinicaltrials.gov database with corresponding study results published in major medical journals. She found that for 1 in 5 of the studies, there was a discrepancy between the results reported in the database and the published findings. In 6 of the studies, the differences were great enough to alter the interpretation of the study’s results. Becker suggested a few explanations for these problems, including reporting errors, typos in the journals, or intentional distortion of results to present more favorable findings in the publication.

“Discrepancies between different sources of summary results data raise concerns about the reliability and validity of the summarization process,” said Zarin in an e-mail after the meeting. “This has led to calls for access to participant-level data as a way of providing the capacity for independent replication or audit of the results reports.”

Zarin discussed these calls in a recent editorial, but noted after the meeting that clinicaltrials.gov is not set up to accept individual results. Zarin said she is “eager to watch as experience is gained with participant-level data, and to see how clinicaltrials.gov and NLM can contribute to the overall endeavor.”

Medical writer Serina Stretton, of ProScribe Medical Communications in Noosaville, Australia, presented an analysis of publication agreements between authors and study funders reported in clinicaltrials.gov. Although most of the more than 300 trials examined noted that a publication agreement existed (74% of these agreements met commonly accepted standards), 8 trials involving 776 human participants had a so-called gag order that prevented publication of the results. Stretton suggested that patients and investigators refuse to participate in such trials.

Zarin noted that so far, clinicaltrials.gov has proven it is possible to create a national clinical trials results database and that it can be a useful resource. But further research is needed on its usefulness and whether it ultimately improves clinical research, she added.

She reported that for about 66% (6000) of the more than 9000 trials that have reported summary results in the database, there is no other source for the public to access the data. The researchers are also required to report in the database all serious adverse events associated with an intervention, Zarin noted. This may provide the public with a more comprehensive listing of potential risks associated with an intervention and may provide journal editors with a way to verify that authors are fully disclosing risks in their publications, she suggested.

Working with authors submitting to the database has also caused Zarin to question the role of primary authors in some studies. Many authors have been unable to explain their study data when staff members from the NLM called with questions. Protocols are often sloppy, as well, she added. She said her experiences have raised concerns about “the casualness with which many people experiment on humans.”

Please note: The blog was updated Wednesday, Sept. 11, 2013, based on a clarification by Dr Zarin of her views concerning the reporting of individual-level data.

Author Insights: Studies That Show Large Treatment Effects Are Usually Wrong

An analysis by John P. Ioannidis, MD, DSc, of Stanford University, and colleagues suggests that physicians and patients should view studies that suggest large treatment effects with skepticism. Image: Stanford School of Medicine Office of Communications & Public Affairs

Most medical interventions have modest effects, and studies that suggest big effects are usually small and are eventually proven wrong, according to an analysis of medical studies published in JAMA today.

Studies that appear to demonstrate that a medication or other therapy can have a large effect on health conditions often make headlines and may lead clinicians and patients to embrace these interventions. But a team of researchers recently scoured the medical literature for reports of clinical trials with promising study results to assess the strength of such studies and the durability of their findings. As it turns out, they found that most studies that find a large treatment effect are very small, which increases the odds that the findings are due to chance—and in the long run, the findings of a large effect are usually not validated. Subsequent studies usually show a much more modest effect, the authors found. Additionally, few of these promising trials indicate that the medical intervention under study does much to prolong life; instead, they find that interventions may have a big effect on laboratory measures of health.

Chimpanzees Aren’t Necessary for Most Research, Says IOM

Retiring from research? Chimpanzees are not necessary for most biomedical research funded by the National Institutes of Health, according to a new report. (Image: Peter-John Freeman/iStockphoto)

The era of the chimpanzee as a research tool in federally funded biomedical studies has, for the most part, come to an end.

After 9 months of meetings, workshops, and an unprecedented number of public comments, an Institute of Medicine (IOM) report said today that chimpanzees aren’t necessary for most biomedical research that’s funded by the National Institutes of Health (NIH).

“There is… decreasing scientific need for chimpanzees due to the emergence of nonchimpanzee models and [new] technologies,” said Jeffrey Kahn, PhD, MPH, chair of the IOM committee that produced the report and a senior faculty member at the Johns Hopkins University Berman Institute of Bioethics in Baltimore.

The committee made it clear that its recommendations are not an outright ban on using the animals in NIH research. But the report outlines the first criteria ever established to determine whether chimpanzees, which are genetically and behaviorally similar to humans, should be used in a given research project.

Changes in Access to Death Data May Impede Medical Research

An important source of death records, which are used for epidemiological and longitudinal outcomes studies, may no longer be available to researchers. (Andreas Reh/iStockphoto.com)

Certain types of biomedical research will be compromised by a change by the US Social Security Administration (SSA) in its policies regarding public release of death records, said Eugene H. Blackstone, MD, director of clinical investigations in the department of thoracic and cardiovascular surgery at the Cleveland Clinic.

Blackstone said the SSA’s Death Master File has been a fairly accurate and inexpensive source of data about all deaths in the United States, used by many researchers for epidemiological and long-term outcomes studies. The changes to the Death Master File will make it useless to researchers, but few are aware of this impending problem, he said.