Tuesday, November 28, 2006

Uh Oh Chuck They Out To Get Us Man: Stats

This is the third (and perhaps final) post in a series on a study recently published in Neuropsychopharmacology, which used risperidone (Risperdal) as an add-on treatment for depression. The study had three phases, as follows:

1) Participants who had not responded to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro), each taken for at least six weeks, were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks

2) Patients who failed to respond to citalopram were then assigned to open-label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks

3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or placebo + citalopram, and the two groups were compared on depressive relapse.

Let’s start by examining the differences between the trial report found on clinicaltrials.gov and the trial as published in Neuropsychopharmacology. The clinicaltrials.gov report indicated that the primary outcome measures were: a) change in Montgomery-Asberg Depression Rating Scale (MADRS) score; b) time to relapse, as measured by Hamilton Rating Scale for Depression and Clinical Global Impression (CGI) scores.

Secondary measures included: a) response rate, defined as at least a 50% improvement in MADRS score; b) change in Hamilton Rating Scale for Depression (HAM-D) score; and c) change in Clinical Global Impressions (CGI) scale score.

Now, to the journal report. Under the results for the open-label risperidone augmentation, on page 9 of the early online version of the study, the MADRS is described as “the primary measure used to assess depression severity.” Yet nowhere are results for the MADRS response criterion (>= 50% improvement in MADRS score) reported. Where did this go? If this was a predetermined test of treatment response, shouldn’t it be reported? While means and standard deviations of the MADRS are reported, the alleged measure of treatment response is strangely missing.
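For the record, this response criterion is trivial to compute, which makes its absence all the more puzzling. Here is a minimal sketch of how a responder is classified, using invented MADRS scores, since the paper reports only group means and standard deviations:

```python
# Sketch: classifying MADRS responders (>= 50% improvement from baseline).
# Scores below are invented for illustration; the paper does not report
# individual patient data.
baseline = [32, 28, 35, 30, 26]   # hypothetical baseline MADRS scores
endpoint = [14, 20, 15, 22, 11]   # hypothetical endpoint MADRS scores

responders = [
    (pre - post) / pre >= 0.5     # response = at least 50% score reduction
    for pre, post in zip(baseline, endpoint)
]
response_rate = sum(responders) / len(responders)
print(f"Response rate: {response_rate:.0%}")  # 3 of 5 -> 60%
```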

It’s also unclear what happened to the CGI scores: although the paper mentions that this instrument was used as one measure of relapse, its means and standard deviations are not reported anywhere.

Under the results from the double-blind continuation phase, we can see that the rate of relapse was 53.3% for risperidone and 54.6% for placebo. The time to relapse was 102 days with risperidone augmentation and 85 days with placebo augmentation, a difference with an associated p-value of .52. But a post-hoc analysis found the difference in time to relapse significant at p < .05. The authors state that this result emerged because they switched to a linear ranks test. I’m no expert on this test, so I can’t make a judgment, but I can say that I’m suspicious any time a p-value drops from .52 to below .05 just by switching statistical tests. At the very least, an explanation in the article is in order, as it is noteworthy that merely switching tests changed the results so much.
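To see how much the choice of test can matter, here is a minimal sketch in Python using the lifelines library. The relapse times below are simulated stand-ins, not the study’s data, and the Wilcoxon weighting is only one guess at what the authors’ “linear ranks test” might have been; the point is simply that two rank-based survival tests can return different p-values on the same data:

```python
# Sketch: two rank-based survival tests applied to the same simulated data.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
FOLLOW_UP = 168  # 24 weeks of double-blind follow-up, in days

# Hypothetical relapse times per arm; times past follow-up are censored
risp = np.minimum(rng.exponential(scale=130, size=75), FOLLOW_UP)
plac = np.minimum(rng.exponential(scale=105, size=75), FOLLOW_UP)
risp_event = (risp < FOLLOW_UP).astype(int)  # 1 = relapsed, 0 = censored
plac_event = (plac < FOLLOW_UP).astype(int)

plain = logrank_test(risp, plac, event_observed_A=risp_event,
                     event_observed_B=plac_event)
weighted = logrank_test(risp, plac, event_observed_A=risp_event,
                        event_observed_B=plac_event, weightings="wilcoxon")
print(f"log-rank p = {plain.p_value:.3f}")
print(f"Wilcoxon-weighted p = {weighted.p_value:.3f}")
```

The weighted variant emphasizes early events, so when the survival curves cross or separate only early on, it can easily disagree with the unweighted test. That is precisely why a paper should justify the switch.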

Post-hoc analysis part 2. An additional post-hoc analysis was conducted on the subgroup of patients who were fully nonresponsive to citalopram monotherapy. In other words, the people who showed the poorest response to SSRI treatment were examined in separate analyses. Their median time to relapse and relapse rate were reported as significantly different, in favor of the risperidone group. The relapse rate was 56% in the risperidone group and 64% in the placebo group, with an associated p-value reported as .05. However, I conducted my own analysis and came up with chi-square = .922 and a p-value of .337. The paper mentions earlier that the authors used the Cochran-Mantel-Haenszel (CMH) test, which explains how the p-value shrank so drastically: the CMH test stratifies by treatment site, which I believe would then account for differences due to treatment site. What this would appear to mean is that relapse depended substantially on the site where patients received treatment. If treatment response really did vary to a significant extent across sites, that bears mention in the article, but such a discussion is nowhere to be found. Again, a post-hoc analysis changed the results substantially, yet the authors did not discuss the reasons behind these large discrepancies.
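To make the mechanics concrete, here is a minimal sketch of both analyses, a naive pooled chi-square and a site-stratified CMH test, using hypothetical per-site counts chosen only to roughly match the reported 56% and 64% relapse rates (the paper does not report site-level tables):

```python
# Sketch: pooled chi-square vs. site-stratified CMH on hypothetical counts.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical (relapsed, not relapsed) counts per site;
# rows: risperidone arm, placebo arm. Not the study's actual data.
site_tables = [
    np.array([[12, 10], [16, 6]]),   # site 1
    np.array([[14, 12], [15, 9]]),   # site 2
    np.array([[13, 9], [14, 10]]),   # site 3
]

# Naive pooled 2x2 chi-square, ignoring site
pooled = sum(site_tables)   # [[39, 31], [45, 25]]: ~56% vs ~64% relapse
chi2, p, dof, _ = chi2_contingency(pooled, correction=False)
print(f"pooled chi-square = {chi2:.3f}, p = {p:.3f}")

# Cochran-Mantel-Haenszel test, stratified by site
cmh = StratifiedTable(site_tables).test_null_odds(correction=False)
print(f"CMH statistic = {cmh.statistic:.3f}, p = {cmh.pvalue:.3f}")
```

With enough between-site variation in relapse rates, the pooled and stratified tests can disagree noticeably, which is exactly the behavior that deserved an explanation in the paper.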

Table 2. The results here are quite interesting. This refers to the double-blind section of the study, in which patients who had shown symptom resolution while receiving risperidone were randomly assigned to continue risperidone or to receive placebo. On the MADRS, patients receiving risperidone gained, on average, 11.2 points (i.e., their depression worsened by 11.2 points), whereas patients on placebo gained 10.4 points: a slight, and certainly not significant, difference in which the placebo group actually worsened less than the risperidone group. On the HAM-D, patients worsened by an average of 7.6 points on risperidone versus 7.9 points on placebo. Between the two measures, it is clear that, on average, there was very little difference between risperidone and placebo. However (and take out your notepads, please), patients in both groups got significantly worse over time in the third phase of the study. Thus, the scenario for the average patient is that he/she sees a relatively brief improvement in symptoms while taking risperidone, then returns to a period of moderate depressive symptoms. The authors do not discuss the fact that mean scores did not differ at all between groups in the third phase of the study.

The only evidence to emerge from this study, really, is that an open-label treatment resulted in a decrease of symptoms. If Janssen really wanted to impress, they would have included an active comparator: say, an older antipsychotic, a so-called “mood stabilizer,” or perhaps another atypical antipsychotic. Or, if not feeling daring, they could at least have added a placebo arm to the mix. Based on the study results, we cannot even conclude that risperidone augmentation worked better than adding a sugar pill to SSRI treatment.

In summary, it is unknown what happened to some of the secondary outcome measures (CGI scores, MADRS response rate), and the statistical analyses used in some cases required more explanation, as their choice led to a big change in the interpretation of the results.

So what do we have here? I believe this is an excellent example of a study conducted for marketing purposes. I bet that Janssen has purchased many reprints of this article, which will be passed on by cheerleaders, er, drug reps, to physicians in a ploy to market Risperdal as an adjunctive treatment for depression. Additionally, there are likely “key opinion leaders,” perhaps including some of the study authors, who are willing to stump for Risperdal as an adjunctive treatment for depression at conferences, meetings, and dinners. With this study now published in Neuropsychopharmacology, there can be little doubt that such marketing strategies now have a glimmer of scientific sparkle on their side, although upon closer examination the scientific evidence is very weak at best. Yet too few doctors will bother to examine closely the meager science behind the marketing as the atypical antipsychotics continue their march toward rebranding as “broad spectrum psychotropic agents,” as Risperdal was referred to in this press release regarding the present study.

I encourage interested readers to also check out my earlier posts regarding the questionable authorship of the paper (possibly involving magic!) as well as the rather blatant undisclosed conflicts of interest associated with the study. This is so distressing that I think I’ll have to chill out with a couple of Quaaludes, er, earlier versions of broad spectrum psychotropic agents.

3 comments:

  1. I'm late to the game for your writings, but I remember reading about this 'research' last year. It is scary what people pass off nowadays as research, and what the vast majority of research consumers miss when they read everything at face value.

  2. Thanks Tim. Consumers should indeed possess a reasonable degree of skepticism.

  3. Sometime, just for chuckles, have a look at the number of publications for which Nemeroff and his biomedical power buddies claim authorship. Pushing 4 figures? You bet! What fraction of the papers these "scientists" have "authored" do you think they've actually even read? Still, journals, universities and NIH continue to reward the count more than the content - both science and society suffer.

    Been there, wouldn't do that.
