Study results published in major medical journals often conflict with the data their authors have submitted to ClinicalTrials.gov, according to an analysis published in JAMA today.
The ClinicalTrials.gov registry, maintained by the National Library of Medicine, was created to help improve transparency in the medical literature by ensuring that all results of clinical trials, whether published or not, are archived in a single repository. A 2007 law mandated that researchers post results of studies on all products regulated by the US Food and Drug Administration (FDA) within 12 months. Many journals have also pledged to require their authors to report their findings in the registry. But numerous problems with the registry have been documented since its creation, including a failure of many researchers to report their results and sloppy data entry by investigators.
A new analysis by Joseph S. Ross, MD, MHS, an assistant professor of medicine at Yale University School of Medicine, and his colleagues raises questions about the accuracy of what is reported in the registry and in the medical literature. The team compared the results of 96 trials published in top-tier medical journals, including JAMA, the New England Journal of Medicine, and the Lancet, with the results of those trials reported in ClinicalTrials.gov. They found at least 1 discrepancy in the results reported for 93 of the trials. Results matched in both the registry and journal article in only about half the cases.
Ross discussed the findings with news@JAMA.
news@JAMA: Why did you choose to do this study?
Dr Ross: Our research group is interested in thinking of ways to improve the quality of clinical research. When the Food and Drug Administration amendments were passed requiring results reporting [to the ClinicalTrials.gov registry], we were interested in how that would play out. There have been studies about how compliant researchers are with this requirement. We wanted to look at how accurate the reported findings are. By comparing the reported results to published trials, we wanted to see how well it was working. What we found was a surprise.
news@JAMA: Why were the results surprising?
Dr Ross: We found important discrepancies between the results reported in ClinicalTrials.gov and the published results. We don’t know which is right. There were lots of end points reported in 1 source that weren’t reported in the other.
news@JAMA: Can you give an example?
Dr Ross: We started by looking at the primary end points published in high-impact journals and what end points were reported in ClinicalTrials.gov. Of 90-some-odd trials, there were 150 to 160 primary end points; 85% were described in both sources, 9% only in ClinicalTrials.gov and 6% only in the publications.
For the more than 2000 secondary end points, 20% were reported only in ClinicalTrials.gov and 50% only in publications. Only 30% were described in both sources.
You see that only part of the information is available in 1 source. We need to make the sources as complete as possible. The publications need to link back to ClinicalTrials.gov because they often don’t include all the end points.
news@JAMA: Why might there be such a difference?
Dr Ross: There are a lot of potential explanations.
More end points were reported in the published papers than in ClinicalTrials.gov. This suggests authors are reporting end points in the paper that weren't prespecified and that make the results look better. That can skew the literature.
news@JAMA: Could edits made by the journals, such as requests for more information or new analyses, or typographical errors account for some discrepancies?
Dr Ross: It could be editing. An authorship team submits the results and these are publications that have strong editorial staffs. There could be slightly different approaches in analysis submitted to the 2 sources.
Some are typographical errors. For example, 1 study reported a hazard ratio of 4 in ClinicalTrials.gov instead of the hazard ratio of 2 in the study [the hazard ratio and standard deviation were transposed]. That distorts the study result.
news@JAMA: What can be done to improve the accuracy of results reporting?
Dr Ross: These results are increasingly being used by researchers and in meta-analyses; we want them to be accurate. The journals pay a large staff of full-time editors to make sure these studies don’t have errors, but ClinicalTrials.gov has a relatively small staff. We may need a larger endeavor than what the National Library of Medicine originally envisioned.
A third of the discordant results led to a different interpretation of the trial. This is a problem we need to be attending to. We studied the highest-tier journals, so this is likely the best-case scenario. These are likely the highest-achieving researchers. Who knows what's happening with lower-tier journals?