Study Design. Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who failed to respond were then assigned to Abilify + antidepressant or placebo + antidepressant; those who did respond during the initial eight weeks were dropped from the study. So we've already established that antidepressant + placebo didn't work for these people -- yet they were then assigned to six more weeks of the very same treatment (!) and compared to those assigned antidepressant + Abilify. The antidepressant + placebo group thus started at a huge disadvantage, because it was already established that they did not respond well to that regimen. No wonder Abilify came out on top (albeit by a modest margin).

Well, these experts in study design decided that once was just not enough, so they ran the exact same study twice. Same huge design flaw. Similar results. The results are published in the Journal of Clinical Psychopharmacology. By a statistically significant, though not overwhelming, margin, those on Abilify + antidepressant improved more than those on antidepressant + placebo. Or did they?
Here's an analogy. A group of 100 students is tutored in math by Tutor A for eight weeks. The 50 students whose math skills improve are sent on their merry way, leaving 50 students who did not improve under Tutor A's tutelage. Tutor B then comes along to tutor 25 of these students, while Tutor A sticks with the other 25. Tutor B's students do somewhat better than Tutor A's students on a math test six weeks later. Is Tutor B better than Tutor A? Not really a fair comparison between Tutor A and Tutor B, is it?
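The selection effect in that analogy is easy to demonstrate with a quick simulation. The numbers below are entirely hypothetical (a toy model, not the actual trial data): every patient gets a latent response probability to treatment A and to treatment B, drawn so that the two treatments are identical on average. After we screen out A-responders and compare the remainder, B "wins" anyway:

```python
import random

random.seed(0)
N = 100_000

# Each simulated patient has a latent response probability for treatment A
# and treatment B. Marginally the two treatments are IDENTICAL: both
# probabilities are drawn Uniform(0, 1), independently.
patients = [(random.random(), random.random()) for _ in range(N)]

# Phase 1: everyone receives A; responders leave the study.
nonresponders = [(pa, pb) for pa, pb in patients if random.random() >= pa]

# Phase 2: the A-non-responders are split between "stay on A" and "switch to B".
half = len(nonresponders) // 2
stay_a = nonresponders[:half]
switch_b = nonresponders[half:]

rate_a = sum(random.random() < pa for pa, _ in stay_a) / len(stay_a)
rate_b = sum(random.random() < pb for _, pb in switch_b) / len(switch_b)

print(f"Phase 2 response rate, stayed on A:   {rate_a:.2f}")  # about 0.33
print(f"Phase 2 response rate, switched to B: {rate_b:.2f}")  # about 0.50
```

The "stay on A" arm responds at roughly 33% while the "switch to B" arm responds at roughly 50%, even though the two treatments are equally effective in the full population -- screening out A-responders first guarantees the stay-on-A arm is stacked with people unlikely to respond to A. That is the disadvantage the antidepressant + placebo arm carried into the comparison.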
Dear Patients: We Don't Care What You Say. On depression measures rated by clinicians, patients did modestly better on Abilify. But on measures completed by the patients themselves, there was no statistically significant difference between Abilify + antidepressant and placebo + antidepressant. So the patients didn't actually perceive themselves as being any less depressed -- um, shouldn't the opinion of the patients matter? The point the authors are implicitly making is that the opinions of the clinical raters matter much more than the opinions of patients, which strikes me as ludicrous. These people are depressed, not floridly psychotic, so I think they would have a pretty decent idea of their own mental health status.
The authors attempted to explain this inconvenient finding away as follows:
These [patient-rated] scales were included for exploratory purposes, and the lack of emphasis on these ratings may have contributed to increased variance. The corresponding clinician-rated versions were not included, which may have hindered patients in responding accurately to the self-rated version.

I'm really confused here -- what do they mean, that a "lack of emphasis" resulted in increased variance? And as for accusing the patients of not completing the ratings accurately, that just sounds like sour grapes to me. Had there been a significant difference favoring Abilify, you can bet your life savings that the authors would not have accused the patients of reporting inaccurately. Only when a drug is shown not to work do we accuse patients of inaccurate reporting, because all new drugs must work; such is the dogma of modern-day psychopharmacology gone wild.
The authors close their paper with the following jewel:
Given the public health challenge of antidepressant nonresponse, this is a significant clinical finding.

We're getting knee-deep in bogus public health claims these days. Even though patients didn't perceive themselves as improving any more on Abilify than on placebo, this is a significant benefit to public health? You bet. If someone has a defense for designing a study in such a manner, I'm all ears, but this really looks like a blatantly biased study that still managed to find no benefit (according to the patients) in using Abilify.