Wednesday, March 25, 2009

APA Monitor: We Don't Need No Stinking Evidence

The American Psychological Association publishes two monthly publications for members: the well-regarded journal American Psychologist and the APA's magazine, Monitor on Psychology. I've had issues with the Monitor for as long as I can remember. At times, I think the magazine makes claims that are not at all substantiated by evidence, which really bothers me. Why? Because psychology is supposed to be a science; reliance on evidence is what separates psychologists from life coaches or snake oil salesmen. I usually skim the Monitor for about 30 seconds per month, but when I saw the cover of this month's issue, my intuition told me to look out for voodoo. The title: Brain imaging: New technologies for research and practice.

So I browsed through the glossy pages, looking for something to catch my eye. Then, on page 36, there it was...

A pacemaker for your brain? Electric brain stimulation may give hope to people with unremitting depression

Oooh. Sounded promising, so I gave it my full attention. Keep in mind that this was in the "Science Watch" section. The article begins:

It's about the size of the letter "o" in this sentence and may have the power to lift deep, unrelenting depression.

OK, there's the attention-grabber. It then goes on to describe deep brain stimulation (DBS). Before long, I ran across:

Since 2005, more than 60 people worldwide have received DBS for treatment-resistant mood disorders. For about 60 percent of them, there's a "striking improvement in their symptoms of depression," says Andres Lozano, MD, PhD, a neuroscientist at the University of Toronto who performs DBS surgery.

Well, that practically screams "valid scientific findings," asking a surgeon if his technique works. What was he gonna say, "Nah, I think DBS is a bunch of hooey. I only do it because it pays really well"? I'm willing to bet that physicians who practiced bloodletting were also quite confident that the majority of their patients showed "striking improvement," which is why we conduct controlled trials rather than rely on subjective opinion. Later in the article, the author notes that the results from DBS are "dramatic and promising." The author also notes that

A number of other behavioral and mood disorders might also benefit from DBS. Benjamin Greenberg, MD, PhD, a psychiatrist at Brown University in Providence, R.I., is using DBS to treat obsessive-compulsive disorder, with success rates similar to [Helen] Mayberg's and Lozano's. Also similar is Greenberg's claim that OCD people who've had DBS are then able to tolerate and respond to behavioral therapy.

This broad success leads Mayberg to believe that DBS is establishing itself as an important tool for treating disorders that otherwise won't budge.

OK, so Lozano claims that 60% of people make "striking improvement"; what about others? As mentioned above, Helen Mayberg has done some research on this topic. The article describes one of her studies. Here comes the most convincing evidence I've ever witnessed:

The initial trial included six people who met diagnostic criteria for major depressive disorder. The two researchers and their colleagues implanted electrodes in the white matter adjacent to their patients' subgenual cingulate cortexes and fired up their pacemakers. All the patients, who were awake during the procedure, reported a "sudden calmness or lightness," Mayberg and Lozano reported in the paper.

The researchers followed up with the patients by administering monthly depression scales. After six months, four of the six showed significantly fewer depressive symptoms. To make sure they weren't getting a placebo effect, Mayberg and Lozano secretly switched off the electrodes in their best-responding patient. After about two weeks, the patient's scores began to drop. After about a month, his depressive symptoms had returned. The researchers switched it back on and six weeks later he was back up to non-depressive levels.

So the author of the article, based on the subjective opinions of a psychiatrist and a neurosurgeon, along with an uncontrolled study of six people, concludes that DBS:

  • Has shown "broad success"
  • "A number of other behavioral and mood disorders might also benefit from DBS"
  • "May have the power to lift deep, unrelenting depression"
  • Has shown "dramatic and promising" results

The author threw in a few caveats about side effects (though he essentially gave the procedure a clean bill of health) and noted that DBS should be reserved for patients with longstanding depression who have not responded to other treatments. So the article stopped short of being a blanket endorsement of DBS, yet it really did make it sound like a fantastic treatment for longstanding depression despite the very meager evidence cited in its support. I often complain about poorly designed studies, suppression of negative data, or misinterpreted results leading to drugs being touted as unrealistically safe and effective. But this article shows that it doesn't necessarily take drug company involvement to pimp a treatment well beyond the scientific evidence.

For all I know, DBS may turn out to be The Holy Grail in treating depression of all shapes and sizes. I cast no aspersions on the researchers mentioned in the article, as searching for ways to treat seemingly intractable cases of depression is doing God's work. But the writer overblew the evidence in favor of DBS to a horrendous degree. This kind of article feeds the popular notion that psychologists are a bunch of flakes who know nothing about science. The APA Monitor can do much better than this.

Friday, March 20, 2009

Seroquel, Haldol, and The Full Court Media Press

I was very pleased to have been acknowledged in a recent story in the St. Paul Pioneer Press. The reporter, Jeremy Olson, wrote the following in his story:

An Internet psychiatry blog first raised questions March 2 about the research Schulz presented at the APA conference and why it lacked any of the company's findings. "It raises troubling questions when an independent academic author presents results that are in direct opposition to the underlying data," wrote the blogger, an anonymous academic.

He didn't cite my blog by name -- the unwieldy, overlong name I stupidly chose for the site may be responsible for that -- but I'm nonetheless grateful that my site was acknowledged for its work on this story. He is referencing my post in which I noted that a University of Minnesota psychiatry professor (Charles Schulz) had stated in a press release that Seroquel was "more effective" than Haldol. This was based upon his analysis of data comparing Seroquel to the much older antipsychotic drug Haldol in the treatment of schizophrenia. Yet an internal AstraZeneca analysis found that Haldol was actually more effective than Seroquel. Both the Pioneer Press and the Star Tribune, the two big papers in the Minneapolis-St. Paul area, ran stories on the controversy.

When Schulz was asked about his lavish praise for Seroquel in the press release, the Pioneer Press reported:

In an interview with the Pioneer Press last week, Schulz defended his research and presentation of Seroquel as accurate and ethical. However, he acknowledged the corporate press release from his APA presentation might have exaggerated in calling Seroquel "significantly superior."

"You know," he said, "I can't disagree with that."

Schulz said the following in the Star Tribune:

In an interview this week, Schulz said the pharmaceutical company never shared its doubts about Seroquel, which went on to become a blockbuster, with annual sales of $4.5 billion today. "I don't recall anybody calling up and saying, oh my goodness, we have this problem," he said. At the same time, Schulz acknowledged that his own study did not really show that Seroquel was more effective than the older drug. "That's a bit of a misunderstanding," he said. "I think the overall message is that it works about the same."

Thanks to a helpful reader, I was able to track down what appears to be Schulz's presentation from 2000. It says "...quetiapine was clearly statistically significantly superior to placebo as well as to haloperidol..." This appears to contradict his statement that Haldol and Seroquel "work about the same." Again, the data from Schulz's presentation don't match AstraZeneca's internal analysis. Schulz is obviously backing away from his earlier praise for Seroquel, for which he deserves some credit. The problem was that Schulz, along with a laundry list of researchers in psychiatry, was caught up in a tidal wave of unbridled enthusiasm for the atypical antipsychotics: first as wonder drugs for schizophrenia, then as the Next Big Thing in bipolar disorder, then moving into the world of depression and anxiety disorders in the absence of decent supportive evidence.

Interesting side note: While Schulz was presenting on the wonders of Seroquel, he was likely quite unaware that AstraZeneca had conducted a study (Study 15) which found that Seroquel compared unfavorably to Haldol in preventing psychotic relapse among patients with schizophrenia who began the study in full or partial symptom remission. Furious Seasons has some additional reporting on this study. It is a near certainty that Schulz was not informed of this study's results, since knowing them could have changed his lofty opinion of Seroquel. This points to the problem of researchers relying on data collected by drug companies -- how are researchers to know they are receiving all of the data?

Note to key opinion leaders: If you don't realize it by now, you are pawns. You are being used to place an academic veneer on the marketing of drugs. The drugs that you are marketing as major breakthroughs typically offer little to no benefit over existing treatments and may cause a slew of nasty side effects. Decide if you want to be a scientist or a marketer. Don't try to do both at the same time, because the odds are pretty good that your scientific credentials will end up being tarnished. Just ask this guy. Now that the media are paying much closer attention to the conflicts of interest and skewed science that sadly underlie much of psychiatry these days, it would be a good idea to maintain appearances.

Tuesday, March 10, 2009

Abilify, Depression, and the Memory Hole

The Primary Care Companion to the Journal of Clinical Psychiatry has a piece on Abilify for depression that illustrates many of psychiatry's woes. Full text of the article is here. The article is titled "Examining the efficacy of adjunctive aripiprazole in major depressive disorder: A pooled analysis of two studies," and it combines data from two previously published studies which examined the addition of Abilify to existing antidepressant treatment (1, 2). One of psychiatry's big-name academics, Michael Thase, signed on as lead author. I'm hoping that he didn't actually write the paper. In fact, there are eleven authors, which seems a little ridiculous given that the paper is an analysis of data that had already been collected for two previously published clinical trials. Seven of the authors are employees of Bristol-Myers Squibb (BMS) or Otsuka, the companies that market Abilify. Wait... If you look closely, you can see my favorite disclosure... in the fine print on the first page...

In case you can't read the fine print: in partial defense of Thase and the other academic authors, they may not have actually written any of the paper. Much or all of the writing appears to be creditable to Ogilvy Healthworld Medical Education. On its site, Ogilvy notes that it performs:
Clinical Development and Publications Management
Experienced medical writers work closely with authors, editors and publishers to provide our clients with a full range of publishing options.
Whatever BMS/Otsuka paid you for this one simply was not enough. Why? Because whoever wrote this thing did an admirable job of focusing on the positive and completely ignoring the negative.

Erasing the Patients' Opinions: Remember, the article's title states that it examines the efficacy of adjunctive Abilify (adding Abilify to existing antidepressant treatment). So you'd think the article would mention all of the relevant depression data from the two studies. Well, no. In the two studies discussed in the article, patients were assessed on depression using the following measures:
  • Montgomery Asberg Depression Rating Scale (MADRS)
  • Inventory of Depressive Symptomatology-Self Report (IDS)
  • Quick Inventory of Depressive Symptomatology-Self Report (QIDS)
Using the MADRS, the authors conclude that adding Abilify to antidepressant treatment is more effective than adding placebo to antidepressant treatment. OK, fine, though it's not by a particularly huge margin. Mysteriously, the authors do not even mention that the self-report scales (IDS and its subscale, the QIDS) were used in the two trials. And why would they? In both trials, Abilify was not significantly better than placebo on these measures. A letter to the editor pointed out this glaring weakness in Abilify's claims of efficacy, the response to which was weak:
In the response, Dr. Berman noted that Abilify did not outperform placebo on the self-report measure in the trial and wrote that "this may be due to the lower sensitivity" of the measure. So the drug wasn't the failure -- blame the rating scale instead. The people at BMS picked the scale, and when it doesn't give results they like, suddenly it's a poor measurement of depression. I bet Dr. Berman would not have complained about the instrument had it yielded results in favor of Abilify.
In the publications of each of the two clinical trials, the authors tried to downplay the fact that Abilify was no better than placebo according to patient self-reports. Then, when publishing an analysis that combined the results of the two trials, the authors went a step further by not even mentioning that patients completed a self-report measure. Right down the memory hole. In my opinion, any reasonable academic author writing about such research would want to note the strengths and limitations of Abilify in treating depression. The lack of benefit on patient-rated measures is a major weakness. Yet several big-time academics signed off on this paper despite its complete scrubbing of negative data. For that, I hereby nominate each author for a coveted Golden Goblet Award. And I credit the ghostwriter at Ogilvy with a fantastic job of serving his/her corporate clients. You, sir or ma'am, deserve kudos for a marketing job well done.
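
For anyone fuzzy on what a "pooled analysis" involves, the idea is simple: stack the patient-level outcome data from the two trials into one combined sample and compare drug against placebo on that larger sample. The sketch below is purely illustrative -- the sample sizes, scores, and effect size are invented, not taken from the Abilify trials -- but it shows why pooling two studies makes it easier for a modest difference on a scale like the MADRS to reach statistical significance.

```python
# A minimal sketch of a "pooled analysis" of two placebo-controlled trials:
# stack the patient-level change scores from both studies, then compare
# drug vs. placebo on the combined sample. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_trial(n_per_arm, drug_effect):
    """Simulate MADRS change scores (more negative = more improvement)."""
    placebo = rng.normal(loc=-6.0, scale=8.0, size=n_per_arm)
    drug = rng.normal(loc=-6.0 - drug_effect, scale=8.0, size=n_per_arm)
    return drug, placebo

# Two hypothetical trials, each with a modest ~3-point drug effect on the MADRS
trial1 = simulate_trial(n_per_arm=180, drug_effect=3.0)
trial2 = simulate_trial(n_per_arm=190, drug_effect=3.0)

# Pool the patient-level data across the two studies
drug_pooled = np.concatenate([trial1[0], trial2[0]])
placebo_pooled = np.concatenate([trial1[1], trial2[1]])

t_stat, p_value = stats.ttest_ind(drug_pooled, placebo_pooled)
diff = drug_pooled.mean() - placebo_pooled.mean()
print(f"Pooled difference (drug - placebo): {diff:.1f} MADRS points, p = {p_value:.4f}")
```

Of course, nothing about pooling forces an author to report only some of the scales that were administered; which measures make it into the paper is an editorial choice, and that choice is precisely what this post is complaining about.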

The instructions for authors who submit to the Primary Care Companion to the Journal of Clinical Psychiatry state: "Conclusions should flow logically from the data presented, and methodological flaws and limitations should be acknowledged." Um, does completely scrubbing negative data count as failing to acknowledge limitations? I can see that the peer reviewers and/or editor really paid close attention to this paper.

Safety: The authors note that "adjunctive aripiprazole is relatively well-tolerated in patients with MDD." Relatively? Relative to what -- being hit with a baseball bat repeatedly? They note that akathisia occurred in 25% of patients on Abilify compared to 4% of patients on placebo. Restlessness: 12% vs. 2%; insomnia: 8% vs. 3%; fatigue: 8% vs. 4%; blurred vision: 6% vs. 1%. The authors report that akathisia resolved in 52% of affected patients by the end of the study, which means that the other 48% were still stuck with it when the study ended. But don't worry, it's "relatively well-tolerated."
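
To put those percentages in concrete terms, here is some back-of-the-envelope arithmetic on the rates quoted above. The risk differences and "number needed to harm" figures are my own calculations from the published percentages, not numbers reported by the authors.

```python
# Back-of-the-envelope arithmetic on the side-effect rates quoted above.
# Risk difference and "number needed to harm" (NNH) are standard ways to
# express the gap between drug and placebo rates.
rates = {
    "akathisia":      (0.25, 0.04),
    "restlessness":   (0.12, 0.02),
    "insomnia":       (0.08, 0.03),
    "fatigue":        (0.08, 0.04),
    "blurred vision": (0.06, 0.01),
}

for effect, (drug_rate, placebo_rate) in rates.items():
    risk_diff = drug_rate - placebo_rate
    nnh = 1 / risk_diff  # patients treated per one extra case of the side effect
    print(f"{effect}: risk difference {risk_diff:.0%}, NNH ~ {nnh:.0f}")

# For akathisia, 1 / (0.25 - 0.04) is roughly 5: about one extra case of
# akathisia for every five patients given adjunctive Abilify.
```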

Overall, another example of a "research" publication being little more than a puff piece in favor of a drug, with big-name academics signed on as authors to add credibility and just a fine-print mention of a ghostwriter.

I thank an anonymous reader for alerting me to this study.

Citation:

Thase ME, Trivedi MH, Nelson JC, Fava M, Swanink R, Tran Q, Pikalov A, Yang H, Carlson BX, Marcus RN, Berman RM (2008). Examining the efficacy of adjunctive aripiprazole in major depressive disorder: A pooled analysis of 2 studies. Primary Care Companion to the Journal of Clinical Psychiatry, 10, 440-447.

Friday, March 06, 2009

Seroquel, Weight Gain, And the Pursuit of GAD and Depression Indications

Jim Edwards at BNET dug through the Seroquel documents and found many instances of AZ employees noting that Seroquel causes weight gain. Yet the company seemed bent on keeping this information hidden. As I mentioned last week, this sure seems a lot like Zyprexa redux, except with more sex scandals and perhaps more buried data. I suggest that everyone head over to BNET and see the details.

Despite all the bad news, AZ is pressing onward with its application for FDA approval of Seroquel for both generalized anxiety disorder and depression. Yikes. I broke the story earlier this week about the "scientific literature" claiming that Seroquel worked better than Haldol in the treatment of schizophrenia, even though internal company data showed Haldol as superior to Seroquel in reducing schizophrenia symptoms. Between the discrepant data, the apparent hiding of negative clinical trials, and the effort to keep doctors distracted from data indicating that Seroquel causes weight gain, I think Seroquel's luck may have run out -- my bet is that the FDA won't approve the drug for depression or GAD. But I've been wrong before; the FDA did approve Abilify as an add-on treatment for depression based on pretty flimsy evidence.

Monday, March 02, 2009

Internal Documents Suggest that Seroquel Data Were Not Presented Accurately

A document dated March 9, 2000, titled "BPRS meta-analysis" shows that AstraZeneca, maker of the antipsychotic drug quetiapine (Seroquel), was fully aware that its drug did not relieve schizophrenia symptoms to the same extent as its older, generic competitor haloperidol (Haldol). The document provides results of a meta-analysis, a statistical analysis that combines the results of several individual studies. The authors used the Brief Psychiatric Rating Scale (BPRS) as their main measure of efficacy. The BPRS rates a variety of psychiatric symptoms relevant to schizophrenia; more details on the BPRS can be seen here. A total of ten clinical trials were included in the meta-analysis, which variously compared Seroquel to placebo, Haldol, and several other antipsychotic medications. Four trials compared Seroquel to Haldol. Several subscales of the BPRS were included in the analysis.
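
For readers who haven't waded through one of these before, the basic machinery of a meta-analysis is straightforward: each trial contributes an estimated treatment difference and a measure of its precision, and the estimates are combined using weights that favor the larger, more precise trials. The sketch below uses made-up numbers purely to show the mechanics; it is not a reconstruction of AstraZeneca's BPRS analysis.

```python
# A fixed-effect (inverse-variance) meta-analysis in miniature. Each trial
# contributes a treatment difference and a standard error; trials are
# combined with weights of 1/SE^2 so larger, more precise trials count more.
# The numbers below are made up for illustration only.
import math

# Hypothetical per-trial differences in BPRS change (Seroquel minus comparator);
# positive values favor the comparator drug.
trials = [
    # (difference, standard_error)
    (2.1, 1.0),
    (1.4, 1.3),
    (0.6, 1.6),
    (2.8, 1.2),
]

weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled difference: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# A pooled difference whose confidence interval sits entirely above zero is
# the kind of result the AstraZeneca document describes as significantly
# favoring the comparator (e.g., Haloperidol) over Seroquel.
```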

When examining the amount of change on the BPRS, Seroquel consistently outperformed placebo, both on the BPRS total score and on several of the BPRS subscales. However, in several analyses, Seroquel was outperformed by Haldol and by risperidone (Risperdal; Janssen's antipsychotic). The document states: "Against 'all doses' of Seroquel, each of the three significant p-values generated was in favour of Haloperidol (Total BPRS, Factor V, and Hostility Cluster). There was no evidence of significant differences between the treatments when Haloperidol was compared to high-dose Seroquel." This is a plain admission that Haldol outperformed Seroquel on several outcomes, but that high-dose Seroquel yielded approximately equivalent results to Haldol. Only one trial compared risperidone to quetiapine, and the results clearly favored risperidone. The document stated: "Comparisons against Risperidone using all doses of Seroquel showed significant improvements for Risperidone on total BPRS, Factor V scores, and the Hostility Cluster. Against high-dose Seroquel only, the Anxiety item, Factor I, and Mood cluster scores were also significantly in favor of Risperidone." Risperidone beat Seroquel, and did so on even more measures when the comparison was against high-dose Seroquel.

The author of the document, Rob Hemmings, summarizes the results in a table, which appears below. It is described as such: "The following table is an attempt to simplify the claims that could be obtained from these results. A ✔ is entered for those comparisons where we have a statistically significant benefit, be it with 'all doses' or with high dose Seroquel... A x marks those comparisons where a comparator has demonstrated significant superiority compared to Seroquel."
The table demonstrates that, according to an analysis by AstraZeneca's own employees, Seroquel was shown to outperform only placebo, while several other medications demonstrated better efficacy than Seroquel.

Under the heading "Conclusions," the document states, in part:
In terms of generating positive claims for Seroquel, these analyses seem somewhat disappointing. Although some trends in favour of Seroquel were observed in the Factor I and Mood cluster items, there was no evidence in these analyses of a significant benefit for using Seroquel over any of the active agents assessed.
The internal analysis clearly indicates that, based on several clinical trials, Seroquel offered no benefits over the competition in terms of reducing schizophrenia symptoms. Indeed, other drugs tended to outperform Seroquel.

How Can These Data Be Managed? Shortly after the internal meta-analysis was completed, AstraZeneca employees discussed how to handle the negative results. An AstraZeneca publications manager, John Tumas, wrote in an email:
The data don't look good. I don't know how we can get a paper out of this. My guess is that we all (including Schulz) saw the good stuff, ie the meta-analysis of responder rates that showed we were superior to placebo and haloperidol and then thought further analyses would be supportive and that a paper was in order. What seems to be the case is that we were only highlighting the good stuff and that our own analysis support the "view out there" that we are less effective than haloperidol and our competitors.
It would appear that an earlier analysis provided positive results which did not hold up during the internal meta-analysis. "Schulz" almost certainly refers to Dr. Charles Schulz, a psychiatrist at the University of Minnesota. In a press release from the year 2000, Dr. Schulz was quoted:
I hope that our findings help physicians better understand the dramatic benefits of newer medications like SEROQUEL because, if they do, we may be able to help ensure patients receive these medications first. The data suggest that SEROQUEL is an effective first-choice antipsychotic.
This press release was based on Schulz's presentation at the American Psychiatric Association convention in May 2000. The email from John Tumas discussed earlier noted that a group at AstraZeneca needed to meet soon "because Schulz needs to get a draft ready for APA and he needs any additional analyses we can give him well before then." It is unclear if Schulz ever received the analyses that showed Seroquel was less effective than Haldol. Regardless, in the press release, he was also quoted as saying: "Almost 50 years later, however, many patients are still taking these medications [such as Haldol], even though more effective treatments like Seroquel exist." While he was stumping for Seroquel in a press release, AstraZeneca's internal data painted a completely different picture.

Schulz, in his role as primary author, would typically be expected to demonstrate a solid understanding of the data underlying his presentation. It raises troubling questions when an independent academic author presents results that are in direct opposition to the underlying data. Such issues have been mentioned previously on this site.

The documents regarding Seroquel are available at Furious Seasons. Reporting on other facets of the documents can be found at the St. Petersburg Times, Bloomberg, New York Times, and the Wall Street Journal.