In the "you're kidding me" category, we have a report from Forbes that Abilify (aripiprazole) is headed for FDA priority review as a depression treatment. I was able to track down exactly one placebo-controlled study using this drug as an antidepressant. Participants who did not show a satisfactory response to an antidepressant trial were assigned to receive either Abilify or a placebo in addition to their antidepressant. As you'll see, this is a study worthy of close examination.
Study Results. I read the study results and was underwhelmed. The authors (via their ghostwriter(s), to some unknown extent) reported that the difference between add-on Abilify and add-on placebo was three points on the Montgomery-Asberg Depression Rating Scale (MADRS). For perspective, the MADRS has 10 questions, each rated from zero to six, for a maximum score of 60. So suppose three of those ten questions each improve by one point. Whoopee. But keep reading -- it gets bizarre.
Study Design. Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who responded during this initial phase were eliminated from the study; those who failed to respond were randomized to Abilify + antidepressant or placebo + antidepressant. So we've already established that antidepressant + placebo didn't work for these people -- yet half of them were then assigned to 6 more weeks of the same treatment (!) and compared to those assigned antidepressant + Abilify. The antidepressant + placebo group thus started at a huge disadvantage, because it was already established that they did not respond to that regimen. No wonder Abilify came out on top (albeit by a modest margin).
Here's an analogy. A group of 100 students is assigned to be tutored in math by Tutor A. The students are all tutored for 8 weeks. The 50 students whose math skills improve are sent on their merry way. That leaves 50 students who did not improve under Tutor A's tutelage. So Tutor B comes along to tutor 25 of these students, while Tutor A sticks with the other 25. Tutor B's students do somewhat better than Tutor A's students on a math test 6 weeks later. Is Tutor B better than Tutor A? Not really a fair comparison, is it?
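The selection bias in the analogy can be sketched with a quick simulation. All numbers here are made up for illustration: two "tutors" (or drugs) are built to be truly identical on average, and a run-in phase drops anyone who responds to the first one.

```python
# Hypothetical simulation of run-in selection bias: drug A and drug B
# are equally effective, but only non-responders to A are randomized.
import random

random.seed(0)

def respond(match, noise_sd=1.0, threshold=0.5):
    """A subject 'responds' if their stable drug-match plus
    visit-to-visit noise clears a threshold."""
    return match + random.gauss(0, noise_sd) > threshold

n = 100_000
a_responses, b_responses = [], []

for _ in range(n):
    # Stable, subject-specific "match" with drug A and with drug B;
    # both drawn from the same distribution, so neither drug is better.
    match_a = random.gauss(0, 1)
    match_b = random.gauss(0, 1)

    # Run-in phase: everyone gets A; responders are dropped.
    if respond(match_a):
        continue

    # Randomized phase: half stay on A, half switch to B.
    if random.random() < 0.5:
        a_responses.append(respond(match_a))
    else:
        b_responses.append(respond(match_b))

rate_a = sum(a_responses) / len(a_responses)
rate_b = sum(b_responses) / len(b_responses)
print(f"stay-on-A response rate:  {rate_a:.3f}")
print(f"switch-to-B response rate: {rate_b:.3f}")
```

Even though A and B are equally effective by construction, B comes out ahead in the randomized phase, because the run-in already filtered out everyone with a good stable response to A.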
Ghostwriter Watch: Yep, the study acknowledged "Editorial support for the preparation of this manuscript was provided by Ogilvy Healthworld Medical Education; funding was provided by Bristol-Myers Squibb Co." Of the study authors, all but one were employees of BMS or Otsuka (which is also involved in marketing Abilify).
Unless BMS is hiding data somewhere, this is hardly the stuff breakthrough treatments are made of. Not that the FDA has a history of expecting much in terms of efficacy, but seriously -- can we have a study without a ridiculously biased design before we jump on the Abilify for depression bandwagon?
Oh, the wonders of "evidence-based medicine." This one reminds me of the ARISE-RD study of Risperdal as a depression treatment.
Update: I forgot to mention that this is not the first time I've been puzzled by Abilify's claims. For information on how Abilify is supposedly a great long-term treatment for bipolar disorder, you really have to get the story from Furious Seasons, who had a great post on the topic in December 2006. Get ready for more flimsy evidence from BMS.
"What?!"
Such was my utterance upon reading the first sentence of your thoughtful piece.
Thanks Maria. I was actually going to title my post "Abilify for Depression: Say WHA??" -- we must've been on the same wavelength -- but my desire to include a reference to RUN-DMC lyrics in the title won out.
The tutor analogy explained things quite clearly for me. Thanks for that.
I'm not completely familiar with Abilify. Are there issues surrounding it akin to Seroquel and Zyprexa? (I've noticed issues with atypicals, in general, but I'm not clear on Abilify specifics.)
Marissa,
I think Abilify is generally clean with regard to weight gain and diabetes, at least in comparison to the drugs you mentioned. The rate of akathisia (restlessness, possibly agitation) seems pretty high with Abilify, however.
Sorry, maybe I am being naive, but isn't the point of an adjunct study to show that the drug works when added to another drug? Having Abilify adjunct to an antidepressant vs. the antidepressant alone is essentially a placebo-controlled study. And they appear to be doing it in patients who don't respond to antidepressants, which suggests that they want to develop Abilify for more severely depressed patients. As a psychiatrist (which I am not), how else would you do this study to test it as a 2nd/3rd treatment option? Given the atypicals' safety records, I would rather see them do this study than one in less severe patients as a first option for treatment...
Anon,
That's not exactly what happened. First, patients had to fail treatment with antidepressant plus placebo; then some patients received antidepressant plus Abilify, while others remained on antidepressant plus placebo. So essentially patients were prescreened to not respond to antidepressant plus placebo. That's the main issue.
A study that took patients who failed antidepressant treatment, assigned half to adjunctive placebo and half to adjunctive Abilify -- that would have been a better design.
OK, so they were getting rid of the placebo effect before randomization, giving themselves a better chance of success? So here's another question - why would they get an accelerated FDA review? Isn't that reserved for "unmet needs"? Considering the number of antidepressants on the market, it seems a little odd.
Anonymous,
Yes, exactly -- getting rid of all placebo responders prior to randomization seems iffy. Placebo run-in periods often occur in trials, but not of the length that occurred in this trial.
They're playing up the treatment-resistant depression angle as the "unmet need," and we'll see if the FDA takes the bait ...
Amazing stuff. I'm going to add this to my list of case examples -- "is this research misconduct or not?" It may fall into the category of deliberately deceptive design. I suppose the question is: "if an impartial, true scientist designed such a study, is this what they would have designed?" If not, this would appear to me to constitute both research misconduct and a violation of the trust of the participants. If it wasn't deliberate, then it's plain stupid. Either way, the scientific credentials of Bristol-Myers Squibb don't seem to have been enhanced, if this is all as it seems.
Don't understand the title though.
Aubrey,
Thanks for the comment. The study does indeed raise more questions than it answers.
The title is a reference to an irrelevant but fantastic song from Run-D.M.C. ("It's Tricky").
What is the power for detecting a difference in change of three points between the placebo and treated arms here? The within-arm change is obviously entirely irrelevant, particularly in view of the design (which would lead to massive regression-to-the-mean effects).
I suppose in a sense the add-on design may be appropriate if the results are restricted only to the precise clinical situation (i.e., the aim of this trial was to prove that Abilify "works" in patients who have shown no response to the particular antidepressant X, given for exactly Y weeks, with non-response defined exactly as Z). I suppose it could be approved for those patients only (if a 3-point improvement is deemed relevant).
Still worried about the statistics. P < 0.001 for N = 180 per arm, at a difference in change of 3 points, seems to me near impossible if my understanding of the design is correct.
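Whether p < 0.001 is plausible hinges on the standard deviation of MADRS change, which isn't quoted here. A quick normal-approximation sketch, with the SD assumed (values of roughly 8-10 points are common in antidepressant trials, but these are assumptions, not the trial's actual numbers):

```python
# Back-of-envelope check: p-value for a 3-point difference in mean
# MADRS change with ~180 patients per arm, under assumed SDs.
import math

def two_sided_p(diff, sd, n_per_arm):
    """Normal-approximation two-sided p-value for a difference in means
    between two independent, equal-sized arms with a common SD."""
    se = sd * math.sqrt(2.0 / n_per_arm)
    z = diff / se
    return math.erfc(z / math.sqrt(2.0))  # P(|Z| > z)

for sd in (8.0, 9.0, 10.0):
    p = two_sided_p(diff=3.0, sd=sd, n_per_arm=180)
    print(f"assumed SD {sd:>4}: p ≈ {p:.4f}")
```

With an assumed SD of 8, p falls below 0.001; with an SD of 10, it is closer to 0.004. So p < 0.001 is not impossible for a 3-point difference at this sample size, but it requires a fairly tight spread in the change scores.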