Tuesday, January 27, 2009

Abilify For Depression: I'm Not the Only Skeptic


In April 2008, findings were published in the Journal of Clinical Psychopharmacology claiming that the atypical antipsychotic aripiprazole (Abilify) was an effective add-on treatment for depression. I heartily disagreed with the study's conclusions, noting that the patient-rated depression measure did not demonstrate an advantage over placebo, an inconvenient result that the authors tried to explain away as if it were unimportant. I also pointed out that the study design was biased in favor of Abilify:
Study Design. Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who failed to respond to treatment were assigned to Abilify + antidepressant or placebo + antidepressant. Those who responded during the initial 8 weeks were then eliminated from the study. So we've already established that antidepressant + placebo didn't work for these people -- yet they were then assigned to treatment for 6 weeks with the same treatment (!) and compared to those who were assigned antidepressant + Abilify. So the antidepressant + placebo group started at a huge disadvantage because it was already established that they did not respond well to such a treatment regimen. No wonder Abilify came out on top (albeit by a modest margin).

Here's an analogy. A group of 100 students is assigned to be tutored in math by Tutor A. The students are all tutored for 8 weeks. The 50 students whose math skills improve are sent on their merry way. That leaves 50 students who did not improve under Tutor A's tutelage. So Tutor B comes along to tutor 25 of these students, while Tutor A sticks with the other 25. Tutor B's students do somewhat better than Tutor A's students on a math test 6 weeks later. Is Tutor B better than Tutor A? Not really a fair comparison, is it?
Some commenters agreed with my take on the matter while others did not. Two letters to the editor published in the latest Journal of Clinical Psychopharmacology raised concerns about the study. Alexander Tsai, from UCLA, wrote that he was concerned that the advantage for Abilify was small (2.8 points on the Montgomery-Asberg Depression Rating Scale) and that the study design was biased in favor of Abilify (agreeing with my earlier point).

Dr. Bernard Carroll wrote in his letter that:
  • The advantage of Abilify over placebo was small
  • There was no advantage on the patient-rated measure
  • Due to the notable side effect profile of Abilify, clinical raters could likely distinguish patients who were taking Abilify from those who were taking placebo, which could have biased their ratings. Thus, he questions if the study was truly double-blind.
  • The authors did not report whether several side effects were more common on Abilify than placebo. Dr. Carroll calculated that akathisia, fatigue, restlessness, and insomnia were all significantly more common on Abilify and wondered why the authors did not include such analyses in their report.
  • The authors did not note the relationship between akathisia (severe restlessness/tension) and suicide, which is concerning given that Abilify produces akathisia in droves.
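The sort of significance calculation Dr. Carroll performed is straightforward to reproduce in principle: a Fisher exact test on a 2x2 table of adverse event counts. Here is a minimal sketch in Python, using hypothetical round-number counts (not the study's actual data) roughly in the ballpark of the ~25% akathisia rate discussed below:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability of seeing a count of `a` or higher in the first cell,
    given fixed row and column totals (a hypergeometric tail sum)."""
    n = a + b + c + d
    row1 = a + b              # size of the first group (e.g., drug arm)
    col1 = a + c              # total events across both groups
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        # Hypergeometric probability of exactly k events in the drug arm
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Hypothetical counts: 45/180 patients with akathisia on drug vs. 7/175 on placebo
p = fisher_one_sided(45, 135, 7, 168)
print(f"p = {p:.2e}")  # far below 0.05 for counts this lopsided
```

With a spread that wide between arms, the test is overwhelmingly significant, which is presumably how Dr. Carroll could establish the difference even from the summary percentages the authors did publish.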
The Defense: Robert Berman from Bristol-Myers Squibb wrote back to defend the study. His points were not impressive. Noting that Abilify did not outperform placebo on the self-report measure in the trial, he wrote that "this may be due to the lower sensitivity" of the measure. So the drug wasn't the failure -- blame the rating scale instead. The people at BMS picked the scale, and when it didn't give results they liked, suddenly it became a poor measure of depression. I bet Dr. Berman would not have complained about the instrument had it yielded results in favor of Abilify.

Adverse Events: As for not reporting adverse events, well, there's a perfectly good explanation hidden somewhere in here...
...we have clearly reported rates of spontaneously reported treatment-emergent events that occurred at a rate of 5% or greater in any treatment group. As this study is not designed to collect adverse events in a systematic manner, statistical comparison between treatment groups is not appropriate.
So let me get this straight. They discussed "spontaneously reported" events, which would refer to the events reported by the patients without much questioning. Everyone knows that spontaneous reports are a joke because most side effects are not spontaneously reported. Based on spontaneous reports, the rate of sexual side effects with SSRIs is quite low. But when you bother to ask people taking SSRIs questions about their sexual functioning, the rates of sexual problems increase drastically. So when Dr. Berman goes on to write that no suicide-related adverse events were reported in the study, keep in mind that the study investigators were not asking about such events. So it may be more accurate to say that nobody committed suicide during the study, but nobody was tracking suicidal ideation unless patients reported such problems themselves. Yes, suicidal ideation was covered a little bit by measures used in the study, but a more systematic assessment would have been helpful. To give the authors credit, at least they did include a couple of measures of extrapyramidal symptoms, from which we gathered that akathisia happened in 25% of patients. Yikes.

Saying that the study was not designed to collect adverse event data in a systematic manner is frightening. If adverse event collection was not systematic, then the authors' statement in the study report that "adverse events were generally mild to moderate" is meaningless. You can't say that adverse event data were not collected in any systematic manner and then also claim, as the authors do in their paper, that the treatment is "safe." This is the definition of duplicitous. In any case, the authors should have reported that several adverse events were significantly more likely to occur on Abilify than placebo rather than making the ridiculous claim that comparing adverse event rates between treatment and placebo is not appropriate.

Dr. Berman does not address the less-than-3-point benefit for Abilify over placebo. Nor does he offer any real response to the concern, raised by both Dr. Tsai and myself, that the study design was biased in favor of Abilify.

Kudos to Dr. Carroll and Dr. Tsai for taking the time to write excellent letters addressing quite problematic issues in this study. Every time I see a commercial pimping Abilify for depression, I cringe. It's good to know that some people in the medical community are seeing through the weak research that "supports" the use of Abilify as an antidepressant.

Citation for the offending study below:

Ronald N. Marcus et al. (2008). The efficacy and safety of aripiprazole as adjunctive therapy in major depressive disorder. Journal of Clinical Psychopharmacology, 28(2), 156-165.

Friday, January 16, 2009

Zyprexa: Lilly Admits Guilt, But Blame the Physicians Too

In February 2007, I wrote a post in which I described evidence that Lilly's antipsychotic olanzapine (Zyprexa) was marketed off-label for dementia. The evidence I discussed was based on documents generously and bravely hosted at Furious Seasons. At the time, I was careful to avoid labeling the practices as illegal -- they were definitely unethical, but I couldn't say for sure whether a law had been broken. However, a law firm known to represent Lilly was regularly visiting my website at the time, which made me think that Lilly was seriously concerned about legal troubles. I suppose they had good reason to be worried.

I can now officially say that the off-label marketing of Zyprexa for dementia was criminal. Lilly just admitted to committing a crime in the off-label marketing of the drug for dementia and settled legal charges for a cool $1.4 billion. And there are more cases still on the books.

For a really interesting take on this situation, listen to New York Times reporter Gardiner Harris. You can find his talk embedded in the New York Times story from January 14, 2009, which is linked here. The plea agreement in the latest case is available here.

It is important to remember that pimping Zyprexa for dementia is far from a victimless crime. Antipsychotics, including Zyprexa, have been linked to an increased rate of death in elderly patients and have also been shown to be of little to no more benefit than a placebo in reducing dementia-related symptoms (1, 2). For a disturbing account of the widespread inappropriate use of such medications, read this post and weep.

This is truly a case where lust for profits likely led to the early demise of who-knows-how-many patients. And we're just talking about dementia, not the other cases where Lilly went berserk with marketing Zyprexa (1, 2).

Blame the Physicians Too: While much of the blame for the overuse of antipsychotics in the elderly can be placed on corporations such as Lilly, it is also true that Lilly does not directly administer the drugs. Physicians need to ask themselves how prescribing drugs that have been found to offer little benefit but are linked to patient deaths can count as legitimately practicing medicine. First, do no harm?? Yes, I know that dementia is a hell of a difficult condition to handle. But does that mean we should be doling out ineffective and potentially deadly treatments to "manage" persons with dementia? Yes, reps from Lilly (and likely others) wined and dined physicians, "educating" them about the benefits of Zyprexa and other antipsychotics. That's their job -- to positively spin their products. No different from a used car salesperson, except that drug reps are typically much better looking.

Doctors need to use critical thinking skills -- you don't just listen to a drug rep or skim a drug-company-provided journal article reprint and then jump on the Zyprexa bandwagon. How about learning how to evaluate evidence so that junky marketing disguised as science does not persuade you to write inappropriate scripts? Yes, we can be outraged that Lilly and others pimp ineffective and dangerous treatments, but the physicians are the most important link. If they cannot be better educated to understand clinical trial results, and cannot take time to critically review the scientific literature, then this pattern will repeat itself over and over again. It takes tricky pharmaceutical marketing in combination with an audience that is unwilling to think critically for this type of tragedy to occur. And occur again, it will.

Unfortunately, the published scientific literature is quite biased, as negative studies tend to vanish rather than grace the pages of our journals. But it's still a much better idea for prescribers to actually read journals and critically examine their findings, as opposed to relying on marketing alone. Better yet would be for research data on medications (negative and positive) to be available for all to see.

Monday, January 12, 2009

The Budget Crisis, Universities, and Key Opinion Leaders

Everyone knows that state budgets across the United States are in a crunch. All state-supported universities are looking for sources of income outside of taxpayer funds. As state legislatures look to cut money, many state universities are in for a big budget hit. So if the state is going to pony up less money, how can a university survive...?

Perhaps by seeking to entice industry funding. Set up a few clinical trials and see what happens. There is nothing inherently wrong with university faculty working on industry-sponsored research. In an ideal world, all goes according to plan and everyone benefits from such collaboration. Universities love industry collaboration because it brings in good money. Researchers like to collaborate with industry for some altruistic motives, such as funding to investigate treatments that might bring about better lives for people struggling with various ailments. And because incoming funding makes the university administration happy, it also makes life at a university medical center much more pleasant for those who bring in the bucks.

But how do things really work? Sometimes, they go well. But there are also nondisclosure agreements, in which an "independent" academic researcher gives away any right to discuss the data from clinical trials that he/she is working on unless approval is given by industry. As Graham Emslie, key opinion leader in the field of child psychiatry, can attest, there are certainly many cases where negative results were found for a drug, but the negative data were buried to avoid any untoward publicity. Academics often farm out the writing of joint work with industry to ghostwriters, who spin the final product to pimp a product rather than accurately describe the results. As regular readers know, this is just the tip of the iceberg.

If academics are willing to oversee industry-sponsored research, have substantial input into writing the final presentation of the results, and actually review the data from these joint ventures with industry, then academic-industry collaboration can be fruitful. However, if academics are simply used to recruit patients for clinical trials, stamp their names on papers consisting of data with which they are entirely unfamiliar, and are complicit in hiding negative data, then the current sad state of affairs will continue unabated.

Given the current financial situation, universities will be strongly encouraging faculty to seek external funding for their work, and we can only hope that academics will behave responsibly when such collaborations occur.

Wednesday, January 07, 2009

Sowing the Seeds of Lexapro


I'm reading an article with my jaw completely agape and I thought I'd share the pain. The good people at Forest Pharmaceuticals have put together a tragic waste of journal space. The editorial board at the journal Depression and Anxiety should call an emergency meeting to see how this thing got published. Any peer reviewer who put a stamp of approval on this should be forced to listen to Michael Bolton's Greatest Hits at maximum volume for 12 hours straight.

OK, so what am I having a fit about? Here's what happened in this so-called study. 109 primary care doctors were recruited to participate, for which they were doubtless paid a decent chunk per patient (not discussed in the manuscript). The lucky depressed patients of these physicians then received escitalopram (Lexapro) for six months. The manuscript mentions that the "investigators" (the primary care docs) "were not required to have previous clinical research experience to be selected for this study." Yeah, no kidding.

There was no control group, and there had already been dozens of studies on the effects of Lexapro in depression, so how are we getting any new information out of this study? Maybe because it investigates Lexapro in primary care settings; maybe there was no research on that beforehand. Well, no. The manuscript states that "The efficacy and tolerability of escitalopram in MDD have been extensively evaluated in primary-care settings," citing four relevant studies. So the study is actually not an attempt to answer a scientific question. What, exactly, is it then?

Looks and smells like a seeding trial, about which Harold Sox and Drummond Rennie wrote:
This practice—a seeding trial—is marketing in the guise of science. The apparent purpose is to test a hypothesis. The true purpose is to get physicians in the habit of prescribing a new drug. Why would a drug company go to the expense and bother of conducting a trial involving hundreds of practitioners— each recruiting a few patients—when a study based at a few large medical centers could accomplish the same scientific purposes much more efficiently? The main point of the seeding trial is not to get high-quality scientific information: It is to change the prescribing habits of large numbers of physicians. A secondary purpose is to transform physicians into advocates for the sponsor’s drug. The company flatters a physician by selecting him because he is “an opinion leader” and incorporates him in the research team with the title of “investigator.” Then, it pays him good money: a consulting fee to advise the company on the drug’s use and another fee for each patient he enrolls. The physician becomes invested in the drug’s future and praises its good features to patients and colleagues. Unwittingly, the physician joins the sponsor’s marketing team. Why do companies pursue this expensive tactic? Because it works.
So these primary care doctors now feel like "researchers," even though their investigation had essentially zero scientific merit. That probably makes these "investigators" feel important -- and that association between feeling important/scientific and prescribing Lexapro is exactly what Forest was banking on to increase Lexapro prescriptions in Canada.

Findings: So what did this extremely important piece of seeding, er, research find? Get ready... Lexapro is safe and effective. To quote the authors: "Escitalopram was well tolerated, safe, and efficacious. Escitalopram can be used with confidence to treat patients with MDD in Canadian primary-care settings." And "As adherence to antidepressant treatment is paramount to achieving long-term recovery, the present results suggest that escitalopram should be considered among the first-line choices of antidepressant used in primary care." So with no control group, we can determine that a Lexapro prescription should be among the first things that come to mind when treating depression. This is mind-boggling. This journal often publishes good work, but this is among the most uninformative pieces of research I have read. Unless one is thinking about marketing, in which case it is very enlightening.

Citation: Pratap Chokka, Mark Legault (2008). Escitalopram in the treatment of major depressive disorder in primary-care settings: an open-label trial. Depression and Anxiety, 25(12). DOI: 10.1002/da.20458

Monday, January 05, 2009

We're All Mentally Disordered: College-Age Edition

A study in the December 2008 issue of the Archives of General Psychiatry concluded that almost half of college-aged Americans suffered from a DSM-IV disorder over a one-year timeframe. Yes, I am behind the curve on this one -- Furious Seasons was all over this last month (1, 2). Rather than rant about the very odd idea that half of young adults are suffering from a mental disorder, I want to start by mentioning one aspect of the study -- perhaps the most important one. Let's look at how the diagnoses were assigned. To quote from the study:

All of the diagnoses were made according to DSM-IV criteria using the National Institute on Alcohol Abuse and Alcoholism Alcohol Use Disorder and Associated Disabilities Interview Schedule–DSM-IV version, a valid and reliable fully structured diagnostic interview designed for use by professional interviewers who are not clinicians.

If the interviewers are not clinicians, on what basis are they trained to distinguish truly significant distress that might justify a mental health diagnosis from milder symptoms that do not constitute a mental disorder? Here's some information from a different study that used a different slice of the same overall dataset on which the December 2008 study was based:

Approximately 1800 lay interviewers from the US Bureau of the Census administered the NESARC using laptop computer–assisted software that included built-in skip, logic, and consistency checks. On average, the interviewers had 5 years’ experience working on census and other health-related national surveys. The interviewers completed 10 days of training. This was standardized through centralized training sessions under the direction of NIAAA and census headquarters staff.

So the figures that will be trotted out in the media ad infinitum about the shoddy mental health of American youth are based on laptop-assisted interviews conducted by people who apparently have no formal training in mental health. Maybe mental health and related disability are really so easy to assess that we don't need experienced, formally trained interviewers. If that's the case, maybe we should just have Census Bureau interviewers provide initial mental health assessments in clinical care settings -- after all, if they are such good mental disorder detectors, couldn't we just train a bunch of interviewers rather than spend millions of dollars training and paying mental health professionals? Think of the savings!

I mean no disrespect toward the Census Bureau interviewers. They are performing important work that in many instances helps us to better understand the health of the nation. All I'm saying is that we might want to avoid uncritically accepting judgments of our nation's mental health based on interviewers who lack mental health training and experience.

Friday, January 02, 2009

KOL Continues to Vanish

Charles Nemeroff, about whom I have written much, continues to disappear. His latest vanishing act: from a psychiatric research gathering in Berlin in late November 2008. The congress website now reads: "Dr. Charles B. Nemeroff (Atlanta, the USA) called his participation off in the congress and its scientific contributions." We can only hope that they had another key opinion leader of his stature to replace him.

Back story.