As I alluded to yesterday, a whopper of a study has just appeared in the New England Journal of Medicine. It tracked every antidepressant study submitted to the FDA, comparing the results as seen by the FDA with the data published in the medical literature. The FDA uses raw data from the submitting drug companies for each study. This makes great sense, as FDA statisticians can then compare their own analyses to the companies' analyses, in order to make sure that the drug companies were analyzing their data accurately.
After studies are submitted to the FDA, drug companies then have the option of submitting data from their trials for publication in medical journals.
Unlike the FDA, journals are not checking raw data. Thus, it is possible for drug companies to selectively report their data. An example of selective data reporting would be to assess depression using four measures. Suppose that two of the four measures yield statistically significant results in favor of the drug. In such a case, the two measures that did not show an advantage for the drug might simply not be reported when the paper was submitted for publication. This is called "burying data," "data suppression," "selective reporting," or other, less euphemistic terms. In this example, the reader of the final report in the journal would assume that the drug was highly effective because it was superior to placebo on two of two depression measures, completely unaware that on two other measures the drug had no advantage over a sugar pill. Sadly, we know from prior research that data are often suppressed in this manner. In less severe cases, one might just shift the emphasis placed on various outcome measures: if a measure shows a positive result, devote a lot of text to discussing that result and barely mention the negative results.
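To make that mechanism concrete, here is a rough simulation, not from the NEJM paper and with every number invented for illustration, of how reporting only the measures that happen to clear the significance bar inflates the apparent drug-placebo difference:

```python
# Hypothetical illustration of selective outcome reporting.
# All parameters below are invented; nothing here comes from the NEJM study.
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.15   # assumed true drug-placebo difference, in standard-deviation units
N_PER_ARM = 100      # assumed patients per arm
N_MEASURES = 4       # four depression scales, as in the example above
N_TRIALS = 2000      # number of simulated trials

def observed_effect():
    """Observed standardized drug-placebo difference on one outcome measure."""
    drug = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    return statistics.mean(drug) - statistics.mean(placebo)

all_effects, reported_effects = [], []
threshold = 1.96 * (2 / N_PER_ARM) ** 0.5   # rough two-sided significance cutoff (~1.96 standard errors)
for _ in range(N_TRIALS):
    effects = [observed_effect() for _ in range(N_MEASURES)]
    all_effects.extend(effects)
    # "Bury" every measure that did not reach significance; report only the winners.
    reported_effects.extend(e for e in effects if e > threshold)

print(f"Mean effect across ALL measures:           {statistics.mean(all_effects):.2f}")
print(f"Mean effect across REPORTED measures only: {statistics.mean(reported_effects):.2f}")
```

The reported-only average comes out far larger than the honest all-measures average, even though nothing about the underlying drug changed; the only thing that changed is which results made it into print.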
But wait, there's an even better way to suppress data. Suppose that a negative study is submitted to the FDA. There is no commercial value in presenting negative results on a product. Indeed, it makes no sense from a commercial vantage point to submit for publication in a medical journal a clinical trial that shows no advantage for one's drug. While publishing it might earn a bit of good PR for honesty, it would of course hurt sales of the drug, which would not please shareholders.
From an amoral, purely financial view, there is no reason to publish negative trial results. On the other hand, there is science. One of the first things that any medical student hopefully learns is that scientists should report all of their results so that other scientists, physicians, the media, and the general public have an up-to-date and comprehensive understanding of all scientific findings. Yes, this may sound naive, but this is how science is supposed to work in an ideal world.
Back to the NEJM study.
Were manufacturers of antidepressants playing by the rules of science or the rules of the almighty dollar? Take a look at this table excerpted from the study...
The FDA concluded that 38 studies yielded positive results; 37 of these 38 studies were published. The FDA found mixed or "questionable" results in 12 studies. Of these 12 studies, six were not published, and six others were published as if they were positive findings. Of the 24 studies that the FDA concluded were negative, three were published accurately, five were published as if they were positive findings, and 16 were not published.
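As a quick sanity check, here is a small sketch, using only the counts quoted above, of how differently the same set of trials looks from the FDA's vantage point versus that of a journal reader:

```python
# Tally of the counts described above (FDA verdicts vs. how the trials appeared in print).
fda_verdicts = {"positive": 38, "questionable": 12, "negative": 24}
total_trials = sum(fda_verdicts.values())      # 74 trials in all

published_as_positive = 37 + 6 + 5             # includes questionable/negative trials spun as positive
published_as_negative = 3                      # negative trials published accurately
total_published = published_as_positive + published_as_negative

print(f"FDA's view:        {fda_verdicts['positive']}/{total_trials} trials positive "
      f"(~{fda_verdicts['positive'] / total_trials:.0%})")
print(f"Literature's view: {published_as_positive}/{total_published} published trials read as positive "
      f"(~{published_as_positive / total_published:.0%})")
```

Run the numbers and the picture is stark: roughly half of the trials were positive in the FDA's analysis, yet about 94 percent of the trials that made it into journals read as positive, which lines up with the authors' own summary quoted further down.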
To summarize, positive studies were nearly always reported while mixed and negative studies were nearly always either not published or published in a manner that spun the results unreasonably. How does one turn a questionable or negative finding into a positive one? As mentioned above, report the results that are favorable to your product and sweep the remaining results under the rug.
Overall, how do the statistics for this group of trials as prepared by the FDA compare to the statistics in medical journal publications? Remember, physicians are trained to highly value medical journals, as they are the storehouse for "evidence-based medicine." I'll borrow a quote from the study authors:
> For each drug, the effect-size value based on published literature was higher than the effect-size value based on FDA data, with increases ranging from 11 to 69%
Well, that's not very reassuring. Effect size refers to the magnitude of the difference between the drug and placebo.
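For readers who want the nuts and bolts, the effect sizes in question are standardized mean differences; the sketch below shows the generic form (the study's actual estimator, a small-sample-corrected variant, may differ in the details, so treat this as illustrative):

```latex
% Standardized mean difference: how many pooled standard deviations separate drug from placebo.
d = \frac{\bar{X}_{\mathrm{drug}} - \bar{X}_{\mathrm{placebo}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

So an increase of "11 to 69%" means that, drug by drug, the published papers made that drug-placebo gap look that much wider than the FDA's own calculations did.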
Note that for every single drug, the effect size as reported in the medical literature (the foundation of "evidence-based medicine") was greater than the effect size calculated from the FDA's data. Remember, the FDA's data are based on the raw numbers submitted by drug companies, and are thus much less subject to bias than data that the drug companies manipulate prior to submission for publication in a medical journal. Other highlights from the authors:
> Not only were positive results more likely to be published, but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome... we found that the efficacy of this drug class is less than would be gleaned from an examination of the published literature alone. According to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results. The statistical significance of a study's results was strongly associated with whether and how they were reported, and the association was independent of sample size.
I'll say it one more time: Every single drug had an inflated effect size in the medical literature in comparison with the data held by the FDA. To move into layman's terms for a moment, manufacturers of every single drug appear to have cheated. This is not some pie in the sky statistics review -- this is the medical literature (the foundation of "evidence-based medicine") being much more optimistic about the effects of antidepressants than is accurate. This is marketing trumping science.
The drugs whose apparent effects were inflated as a result of selective publication and/or data manipulation:
- Bupropion (Wellbutrin)
- Citalopram (Celexa)
- Duloxetine (Cymbalta)
- Escitalopram (Lexapro)
- Fluoxetine (Prozac)
- Mirtazapine (Remeron)
- Nefazodone (Serzone)
- Paroxetine (Paxil)
- Sertraline (Zoloft)
- Venlafaxine (Effexor)
That is every single drug approved by the FDA for depression between 1987 and 2004. Just a few of many tales of data suppression and/or spinning can be found below:
Props to the Wall Street Journal (David Armstrong and Keith Winstein in particular) and the New York Times (Benedict Carey) for quickly getting on this important story.
Some people seem unmoved by this story; indeed, some are crying that it is an unfair portrayal of the drug industry. More on their curious take on the situation later.
I'll close with a question: What does this say about the key opinion leaders whose names appear as authors on most of these published clinical trials in which the data are reported inaccurately?