About half of high-risk cardiac devices approved by the US Food and Drug Administration (FDA) were approved without comparative data on whether they provide better outcomes than other treatment options, found an analysis published online in JAMA.
Advances in medical technology have led to the emergence of devices such as stents, defibrillators, and mechanical heart valves to support or sustain the life of individuals with cardiovascular conditions. These devices are considered high risk because their implantation may require invasive procedures or because once implanted, they are necessary to sustain life. Because of this, many physicians and patients may be surprised to learn that such FDA-approved devices do not have to meet the same high standards of evidence required for drugs used to treat cardiovascular disease. Companies that seek FDA approval for a drug, for example, must conduct 2 randomized controlled trials to compare the effects of their product with a placebo or with another treatment (a so-called active comparator or active control) for the condition. But device makers are not required to submit study data involving an active control.
To find out how often devices are approved without an active control, Rita Redberg, MD, a professor of medicine at the University of California, San Francisco, and colleagues analyzed data from the FDA on all high-risk cardiac devices approved via the most stringent premarket approval pathway between January 1, 2000, and December 31, 2011. They found that 48% of devices (58 of 121) were approved based on at least 1 study involving an active control and 35% (42 of 121) were approved without any control group or even a comparison with performance benchmarks for the product.
“We were quite surprised at how few had an active control group,” Redberg said. “It’s important to know if using a device is better than the alternative.”
Dr Redberg, who is also the editor of the JAMA Network’s Archives of Internal Medicine, discussed these findings with news@JAMA.
news@JAMA: Why hasn’t the FDA required device makers to use comparator groups in studies submitted to the agency, as it has for drug makers?
Dr Redberg: The FDA began regulating drugs in 1938. Devices came along later; the agency only began regulating devices in 1976. At the time, there were fewer devices and they were simpler. There wasn’t as much consideration of high-risk devices. Medicine has changed and there is a lot more technology. The growth in device technology has way outpaced the regulations.
news@JAMA: Why is it important to have information about the comparative effectiveness of a device before approval?
Dr Redberg: These are implanted devices. Stents can’t just be removed. There are a lot more risks associated with these devices than with drugs. A drug you can just stop, but you can’t just stop using a device. For example, the Sprint Fidelis implantable cardioverter-defibrillator lead was found to be prone to fracture 3 years after its approval and has since been recalled (Dhruva SS and Redberg RF. PLoS Med. 2012;9:e1001277). Now, hundreds of thousands of patients have these devices, which are prone to fracture and have been linked to deaths, but it is dangerous to remove them.
news@JAMA: Are there cases when it may not be possible to have a comparator for a device?
Dr Redberg: For ventricular assist devices, it would be difficult to have an active comparator. But in most cases, you can have a control group for a device because it would be whatever medical therapy or device you were using before the new device came along.
news@JAMA: You noted in your article that some device studies are using historical or other comparison groups. What are these?
Dr Redberg: I wasn’t aware you could use a historical control until this study. In the case of a defibrillator or stent, instead of having a randomized control group, you take data collected from a control group used in a previous study of these devices. That allows for bias in the selection of the control group.
About half didn’t use any control group; they used objective performance criteria or a single-arm trial. Performance criteria might tell you if the device was successfully inserted but not whether the patient is better off compared to alternatives.
news@JAMA: Have there been any recent examples of high-risk cardiovascular devices that have been shown to be less effective after marketing?
Dr Redberg: Those studies are not often done. Most of the studies done after approval are done to expand the groups you’d use the device in. When stents were compared to medical therapy, they were found to be no better. But overall, the studies are not being done and we need them. Registries such as the National Cardiovascular Data Registry and observational data also can play an important role here, as devices are often used in a different (and sicker) population than the participants included in the FDA approval study.
news@JAMA: How do you think this oversight can be improved?
Dr Redberg: We need to improve the evidence criteria and require high-quality evidence of device safety and effectiveness. For drugs, we require 2 high-quality clinical trials. We could require randomized trials with controls for devices. It is important to strengthen premarket and postmarket evidence. If a device is approved on 6 months of trial data, we need to continue collecting data over the long term after approval, because these devices are going to be implanted for a long time. Are they working as we thought they were? The FDA should also be able to act on such postapproval data.
news@JAMA: What is the main take-home message for physicians and patients?
Dr Redberg: The important questions to ask are: What are the safety and effectiveness data for this device compared with the alternative treatments? And what are the harms compared with the alternative treatments? Devices are a great improvement if they are shown to be better, but unfortunately, our study found that we can’t always answer those questions.