Thursday, May 03, 2007

Uh-Oh Chuck, They STILL Out to Get Us, Man

The ARISE-RD study, which examined Risperdal (risperidone) as an add-on treatment for persons who were not responding well to antidepressants, has been discussed several times on this site. I had some suspicions about this study, one of which was recently validated. If you are already familiar with the background, please feel free to skip to the bold heading “Change in Findings.”

Background. The study had the following phases:
1) Participants who had not responded, over a period of more than six weeks, to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro) were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks
2) Patients who failed to respond to citalopram were then assigned to open-label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks
3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or placebo + citalopram, and the risperidone and placebo groups were compared on relapse of depression.

Please read the linked posts for more detail on the following…

Conflicts of Interest. Nearly all of the authors failed to disclose their conflicts of interest. One of the authors who failed to disclose relevant conflicts was Charles Nemeroff, who was also the editor of the journal (Neuropsychopharmacology) in which the study was published, so he cannot claim ignorance of the rules. In the ARISE-RD study, Nemeroff also republished data he had already published elsewhere, which is a no-no. Read the linked post for more details.

Authorship. The authorship was switched around as the study went from earlier abstract form to final copy. Especially curious was the addition of key opinion leader Martin Keller to the authorship line. Why switch authors? My take is that the more big names one can slap on a study, the thicker its veneer of academic credibility. If one followed the trail of this study, one would have to believe that Keller designed the study after it was already completed; something is fishy here… Read both linked posts (1 and 2) for background.

Statistical Legerdemain. In an earlier report, the authors declared the measures they would use to assess the efficacy of treatment. Several of these measures were reported incompletely, or not at all, in the final published version of the paper. This looks a lot like burying negative data. In addition, some of their analyses seemed to yield results that could best be described as magical. Read the linked post for background on the statistical issues.

Change in Findings. I speculated earlier that one of their findings was bogus: namely, that risperidone warded off the return of depression to a significantly greater extent than placebo among patients who had shown no response at all to initial antidepressant treatment. Well, the authors just published a brief corrigendum in Neuropsychopharmacology in which they state that

Following the publication of this article, the authors noted that in the abstract and in the next to last paragraph of the results section, a P-value for part of one of the post hoc analyses was incorrectly reported. A significant P-value was reported for both the difference in time to relapse and for relapse rates in a subgroup of patients fully non-responsive to citalopram monotherapy. Although the P-value for time to relapse was correctly reported, the correct P-value for the comparison of relapse rates is not significant (P=0.4; CMH test). This change does not alter the major findings of the study nor any of the conclusions of the report. We appreciate the assistance of a diligent reader in identifying this error.

First off, since I noted in a prior post that their original test looked suspiciously wrong, I suppose that I may have been the diligent reader. If so, you’re quite welcome. If it was someone else who brought it to their attention, then thanks for doing so.

What they are basically saying, for the statistically uninitiated, is that they had reported risperidone to be effective for a particular group of people, but their analysis was wrong. Done correctly, the analysis shows that risperidone was not effective in preventing the return of depression in persons who…

(a) initially showed no response to antidepressant treatment,
(b) then improved while taking risperidone as an add-on treatment, and
(c) continued taking risperidone for six months

…in comparison with people who, after showing the same improvement (b), were switched to placebo. In other words, in this group risperidone did no better than placebo at preventing relapse into depression.
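For anyone curious about what the corrected comparison involves, here is a minimal sketch of the test named in the corrigendum, a Cochran-Mantel-Haenszel (CMH) comparison of relapse rates across strata, written in Python with statsmodels. Every count below is hypothetical (the paper's cell counts are not reproduced here); the point is only to show what such a test looks like when the two arms barely differ.

# A made-up example of the Cochran-Mantel-Haenszel (CMH) test cited in the
# corrigendum. None of these counts come from the paper.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Each 2x2 table is one hypothetical stratum (e.g., a study site):
# rows = risperidone arm, placebo arm; columns = relapsed, did not relapse.
strata = [
    np.array([[12, 9],
              [13, 7]]),
    np.array([[10, 9],
              [13, 8]]),
]

result = StratifiedTable(strata).test_null_odds()
print(f"CMH statistic = {result.statistic:.2f}, p = {result.pvalue:.2f}")
# With arm differences this small, the p-value lands far above 0.05: no
# detectable difference in relapse rates between risperidone and placebo.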

In the abstract of their paper, it is stated that

Open-label risperidone augmentation substantially enhanced response in treatment-resistant patients.

Later in their paper, it is stated that

Our secondary analysis revealed that patients who were least responsive to citalopram monotherapy may be those most likely to benefit from continuation therapy with risperidone.

Great, except that this is precisely the result they just retracted. In other words, if you showed no response at all to the initial antidepressant, then whether you were allocated to placebo or Risperdal in the final study phase made no difference: you were equally likely to experience a relapse of depression. So, despite their claim to the contrary, this does, in fact, change one of their major conclusions.

Was a Stork Involved? Where do incorrect findings come from? Two sources, generally: Honest Mistake-Ville and corporate headquarters. Given the large number of authors on this paper, it seems pretty odd to me that nobody caught the error. I have no problem with honest mistakes, but given all the other issues surrounding authorship (1, 2), undisclosed conflicts of interest, and statistical/data reporting, I’m very suspicious. When you take this latest finding away, look what happens. Here are the main findings from the abstract comparing risperidone to placebo, quoted directly and edited to show the most recent correction…

Median time to relapse was 102 days with risperidone augmentation and 85 days with placebo (NS); relapse rates were 53.3% and 54.6%, respectively. In a post-hoc analysis of patients fully nonresponsive to citalopram monotherapy, median time to relapse was 97 days with risperidone augmentation and 56 with placebo (p=0.05); relapse rates were 56.1% and 64.1%, respectively (p≤0.05) [corrected: p=0.4].

So risperidone beats placebo on only one of the four analyses, and it is the weakest of the bunch: a post-hoc look at only a subgroup of the participants, which finds that you get a little more time, on average, before you relapse, but that you are still about as likely to become depressed again as if you had taken placebo.
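As a back-of-the-envelope illustration of how little is behind those relapse-rate numbers, here is a quick calculation using the reported rates but purely hypothetical arm sizes (the subgroup counts are not quoted above):

# Rough precision check on an ~8-point difference in relapse rates.
# Arm sizes are hypothetical, NOT taken from the paper.
import math

p_risp, p_plac = 0.561, 0.641   # reported relapse rates
n_risp = n_plac = 40            # hypothetical arm sizes

diff = p_plac - p_risp
se = math.sqrt(p_risp * (1 - p_risp) / n_risp + p_plac * (1 - p_plac) / n_plac)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"rate difference = {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
# With ~40 patients per arm, the interval runs from roughly -13% to +29%,
# far too wide to claim that risperidone prevents relapse better than placebo.

At sample sizes like these, an eight-point gap in relapse rates is statistical noise, which is what the corrected p-value of 0.4 is telling us.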

When the study was published, press releases were issued attesting to the drug’s efficacy in treatment-resistant depression. Now that a second correction has been made to the paper (the first concerned the undisclosed conflicts of interest), there will be no press release correcting the earlier, overly optimistic ones. Many physicians have likely received a copy of this study in their mailboxes, but I am certain they will not receive a follow-up notice informing them that the results were incorrect.

This reminds me of another post that described similar problems occurring throughout the so-called scientific investigation process.

The ARISE-RD study is now officially nominated for a Golden Goblet Award. Not sure which authors merit individual nomination, though both Nemeroff (1, 2) and Keller (1, 2) have appeared multiple times on this site regarding other issues.
