Wednesday, March 25, 2009

APA Monitor: We Don't Need No Stinking Evidence

The American Psychological Association publishes two monthly periodicals for members: the well-regarded journal American Psychologist and the APA's newspaper, the Monitor on Psychology. I've had issues with the Monitor for as long as I can remember. At times, I think the magazine makes claims that are not at all substantiated by evidence, which really bothers me. Why? Because psychology is supposed to be a science; that is what separates psychologists from life coaches and snake oil salesmen. I usually skim the Monitor for about 30 seconds per month, but when I saw the cover of this month's issue, my intuition told me to look out for voodoo. The title: Brain imaging: New technologies for research and practice.

So I browsed through the glossy pages, looking for something to catch my eye. Then, on page 36, there it was...

A pacemaker for your brain? Electric brain stimulation may give hope to people with unremitting depression

Oooh. Sounded promising, so I gave it my full attention. Keep in mind that this was in the "Science Watch" section. The article begins:

It's about the size of the letter "o" in this sentence and may have the power to lift deep, unrelenting depression.

OK, there's the attention-grabber. It then goes on to describe deep brain stimulation (DBS). Before long, I ran across:

Since 2005, more than 60 people worldwide have received DBS for treatment-resistant mood disorders. For about 60 percent of them, there's a "striking improvement in their symptoms of depression," says Andres Lozano, MD, PhD, a neuroscientist at the University of Toronto who performs DBS surgery.

Well, that practically screams "valid scientific findings," asking a surgeon if his technique works. What was he gonna say, "Nah, I think DBS is a bunch of hooey. I only do it because it pays really well." I'm willing to bet that physicians who practiced bloodletting were also quite confident that the majority of their patients showed "striking improvement," which is why we conduct controlled trials rather than rely on subjective opinion. Later in the article, the author notes that the results from DBS are "dramatic and promising." The author also notes that

A number of other behavioral and mood disorders might also benefit from DBS. Benjamin Greenberg, MD, PhD, a psychiatrist at Brown University in Providence, R.I., is using DBS to treat obsessive-compulsive disorder, with success rates similar to [Helen] Mayberg's and Lozano's. Also similar is Greenberg's claim that OCD people who've had DBS are then able to tolerate and respond to behavioral therapy.

This broad success leads Mayberg to believe that DBS is establishing itself as an important tool for treating disorders that otherwise won't budge.

OK, so Lozano claims that 60% of people make "striking improvement"; what about others? As mentioned above, Helen Mayberg has done some research on this topic. The article describes one of her studies. Here comes the most convincing evidence I've ever witnessed:

The initial trial included six people who met diagnostic criteria for major depressive disorder. The two researchers and their colleagues implanted electrodes in the white matter adjacent to their patients' subgenual cingulate cortexes and fired up their pacemakers. All the patients, who were awake during the procedure, reported a "sudden calmness or lightness," Mayberg and Lozano reported in the paper.

The researchers followed up with the patients by administering monthly depression scales. After six months, four of the six showed significantly fewer depressive symptoms. To make sure they weren't getting a placebo effect, Mayberg and Lozano secretly switched off the electrodes in their best-responding patient. After about two weeks, the patient's scores began to drop. After about a month, his depressive symptoms had returned. The researchers switched it back on and six weeks later he was back up to non-depressive levels.

So the author of the article, based on the subjective opinions of a psychiatrist and a neurosurgeon, along with an uncontrolled study of six people, concludes that DBS:

  • Has shown "broad success"
  • "A number of other behavioral and mood disorders might also benefit from DBS"
  • "May have the power to lift deep, unrelenting depression"
  • Has shown "dramatic and promising" results

The author threw in a few caveats about side effects (though he essentially gave DBS a clean bill of health) and noted that DBS should be reserved for patients with longstanding depression who have not responded to other treatments. So the article stopped short of a blanket endorsement of DBS, yet it really did make DBS sound like a fantastic treatment for longstanding depression despite the very meager evidence cited in its support. I often complain about poorly designed studies, suppression of negative data, or misinterpreted results leading to drugs being touted as unrealistically safe and effective. But this article shows that it doesn't necessarily take drug company involvement to pimp a treatment well beyond the scientific evidence.

For all I know, DBS may turn out to be The Holy Grail in treating depression of all shapes and sizes. I cast no aspersions on the researchers mentioned in the article, as searching for ways to treat seemingly intractable cases of depression is doing God's work. But the writer did a horrendous job of overblowing the evidence in favor of DBS. This kind of article feeds the popular notion that psychologists are a bunch of flakes who know nothing about science. The APA Monitor can do much better than this.

Friday, March 20, 2009

Seroquel, Haldol, and The Full Court Media Press

I was very pleased to have been acknowledged in a recent story in the St. Paul Pioneer Press. The reporter, Jeremy Olson, wrote the following in his story:

An Internet psychiatry blog first raised questions March 2 about the research Schulz presented at the APA conference and why it lacked any of the company's findings. "It raises troubling questions when an independent academic author presents results that are in direct opposition to the underlying data," wrote the blogger, an anonymous academic.

He didn't cite my blog by name -- the unwieldy name I stupidly chose for the site may be responsible for that -- but I'm nonetheless grateful that my site was acknowledged for its work on this story. He is referencing my post in which I noted that a University of Minnesota psychiatry professor (Charles Schulz) had stated in a press release that Seroquel was "more effective" than Haldol. This was based upon his analysis of data comparing Seroquel to the much older antipsychotic drug Haldol in the treatment of schizophrenia. Yet an internal AstraZeneca analysis found that Haldol was actually more effective than Seroquel. Both the Pioneer Press and the Star Tribune, the two big papers in the Minneapolis-St. Paul area, ran stories on the controversy.

When Schulz was asked about his lavish praise for Seroquel in the press release, the Pioneer Press reported:

In an interview with the Pioneer Press last week, Schulz defended his research and presentation of Seroquel as accurate and ethical. However, he acknowledged the corporate press release from his APA presentation might have exaggerated in calling Seroquel "significantly superior."

"You know," he said, "I can't disagree with that."

Schulz said the following in the Star Tribune:

In an interview this week, Schulz said the pharmaceutical company never shared its doubts about Seroquel, which went on to become a blockbuster, with annual sales of $4.5 billion today. "I don't recall anybody calling up and saying, oh my goodness, we have this problem," he said. At the same time, Schulz acknowledged that his own study did not really show that Seroquel was more effective than the older drug. "That's a bit of a misunderstanding," he said. "I think the overall message is that it works about the same."

Thanks to a helpful reader, I was able to track down what appears to be Schulz's presentation from 2000. It says "...quetiapine was clearly statistically significantly superior to placebo as well as to haloperidol..." This appears to contradict his statement that Haldol and Seroquel "work about the same." Again, the data from Schulz's presentation don't match AstraZeneca's internal analysis. Schulz is obviously backing away from his earlier praise for Seroquel, for which he deserves some credit. The problem is that Schulz, along with a laundry list of researchers in psychiatry, was caught in a tidal wave of unbridled enthusiasm for the atypical antipsychotics: first as wonder drugs for schizophrenia, then as the Next Big Thing in bipolar disorder, then moving into the world of depression and anxiety disorders in the absence of decent supportive evidence.

Interesting sidenote: While Schulz was presenting on the wonders of Seroquel, he was likely quite unaware that AstraZeneca had conducted a study (Study 15) which found that Seroquel compared unfavorably to Haldol in preventing psychotic relapse among patients with schizophrenia who began the study in full or partial symptom remission. Furious Seasons has some additional reporting on this study. It is a near certainty that Schulz was not informed about this study's results, which could have changed his lofty opinion of Seroquel. This points to the problem of researchers relying on data collected by drug companies -- how are researchers to know they are receiving all of the data?

Note to key opinion leaders: If you don't realize it by now, you are pawns. You are being used to place an academic veneer on the marketing of drugs. The drugs that you are marketing as major breakthroughs typically offer little to no benefit over existing treatment and may cause a slew of nasty side effects. Decide if you want to be a scientist or a marketer. Don't try to do both at the same time, because the odds are pretty good that your scientific credentials will end up being tarnished. Just ask this guy. Now that the media are paying much closer attention to the conflicted interests and skewed science that sadly underlie much of psychiatry these days, it would be a good idea to maintain appearances.

Tuesday, March 10, 2009

Abilify, Depression, and the Memory Hole

ResearchBlogging.org
The Primary Care Companion to the Journal of Clinical Psychiatry has a piece on Abilify for depression that illustrates many of psychiatry's woes. Full text of the article is here. The journal published an article titled "Examining the efficacy of adjunctive aripiprazole in major depressive disorder: A pooled analysis of two studies." The paper combines data from two previously published studies which examined the addition of Abilify to existing antidepressant treatment (1, 2). One of psychiatry's big-name academics, Michael Thase, signed on as lead author. I'm hoping that he didn't actually write the paper. Actually, there are eleven authors of the paper, which seems a little ridiculous given that the paper is an analysis of data which had already been collected for two previously published clinical trials. Seven of the authors are employees of Bristol-Myers Squibb (BMS) or Otsuka, which both market Abilify. Wait... If you look closely, you can see my favorite disclosure... In the fine print on the first page...

In case you can't read the fine print: In defense of Thase and the other academic authors, they may not have actually written any of the paper. Much or all of the writing appears to be credited to Ogilvy Healthworld Medical Education. On their site, they note that they perform:
Clinical Development and Publications Management
Experienced medical writers work closely with authors, editors and publishers to provide our clients with a full range of publishing options.
Whatever BMS/Otsuka paid you for this one simply was not enough. Why? Because whoever wrote this thing did an admirable job of focusing on the positive and completely ignoring the negative.

Erasing the Patient's Opinions: Remember, the article's title states that it examines the efficacy of adjunctive Abilify (adding Abilify to existing antidepressant treatment). So you'd think the article would mention all of the relevant depression data from the two relevant studies. Well, no. In the two studies discussed in the article, patients were assessed on depression using the following measures:
  • Montgomery Asberg Depression Rating Scale (MADRS)
  • Inventory of Depressive Symptoms-Self Report Scale (IDS)
  • Quick Inventory of Depressive Symptoms Self-Report Scale (QIDS)
Using the MADRS, the authors conclude that adding Abilify to antidepressant treatment is more effective than adding placebo to antidepressant treatment. OK, fine, though it's not by a particularly huge margin. Mysteriously, the authors do not even mention that the self-report scales (IDS and its subscale, the QIDS) were used in the two trials. And why would they? In both trials, Abilify was not significantly better than placebo on these measures. A letter to the editor pointed out this glaring weakness in Abilify's claims of efficacy, the response to which was weak:
Noting that Abilify did not outperform placebo on the self-report measure in the trial, Dr. Berman wrote that "this may be due to the lower sensitivity" of the measure. So the drug wasn't the failure -- blame the rating scale instead. The people at BMS picked the scale, and when it doesn't give results they like, suddenly it's a poor measure of depression. I bet Dr. Berman would not have complained about the instrument had it yielded results in favor of Abilify.
In the publications of each of the two clinical trials, the authors tried to downplay the fact that Abilify was no better than placebo according to patient self-reports. Then, when publishing an analysis that combined the results of the two trials, the authors go a step further by not even mentioning that patients completed a self-report. Right down the memory hole. In my opinion, any reasonable academic author writing about such research would want to note the strengths and limitations of Abilify in treating depression. The lack of benefit on patient-rated measures is a major weakness. Yet several big-time academics signed off on this paper despite its complete scrubbing of negative data. For that, I hereby nominate each author for a coveted Golden Goblet Award. And I credit the ghostwriter at Ogilvy with a fantastic job of serving his/her corporate clients. You, sir or ma'am, deserve kudos for a marketing job well done.

The instructions for authors who submit to the Primary Care Companion to the Journal of Clinical Psychiatry state: "Conclusions should flow logically from the data presented, and methodological flaws and limitations should be acknowledged." Um, does completely scrubbing negative data count as failing to acknowledge limitations? I can see that the peer reviewers and/or editor really paid close attention to this paper.

Safety: The authors note that "adjunctive aripiprazole is relatively well-tolerated in patients with MDD." Relatively? Relative to what -- being hit with a baseball bat repeatedly? They note that akathisia occurred in 25% of patients on Abilify compared to 4% of patients on placebo. Restlessness: 12% vs. 2%; insomnia: 8% vs. 3%; fatigue: 8% vs. 4%; blurred vision: 6% vs. 1%. The authors report that akathisia resolved in 52% of patients by the end of the study, which also means that 48% of patients with akathisia were still stuck with it at the end of the study. But don't worry, it's "relatively well-tolerated."
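To get a feel for how lopsided those akathisia numbers are, here is a quick sketch of a standard two-proportion z-test. The 25% vs. 4% rates come from the article; the arm sizes are hypothetical round numbers I picked for illustration (the post doesn't give the pooled Ns), so treat the result as a ballpark, not a reanalysis.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled rate under the null of no difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical arms of 370 patients each; 92/370 ~ 25%, 15/370 ~ 4%
z, p = two_proportion_z(92, 370, 15, 370)
print(f"z = {z:.1f}, p = {p:.2g}")
```

With any plausible sample sizes in this range, a 25%-vs-4% split is wildly significant, which is presumably why Dr. Carroll could do the arithmetic himself and wonder why the authors didn't.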

Overall, another example of a "research" publication being little more than a puff piece in favor of a drug, with big-name academics signed on as authors to add credibility and just a fine-print mention of a ghostwriter.

I thank an anonymous reader for alerting me to this study.

Citation:

Thase ME, Trivedi MH, Nelson JC, Fava M, Swanink R, Tran Q, Pikalov A, Yang H, Carlson BX, Marcus RN, Berman RM (2008). Examining the efficacy of adjunctive aripiprazole in major depressive disorder: A pooled analysis of 2 studies. Primary Care Companion to the Journal of Clinical Psychiatry, 10, 440-447.

Friday, March 06, 2009

Seroquel, Weight Gain, And the Pursuit of GAD and Depression Indications

Jim Edwards at BNET dug through the Seroquel documents and found many instances of AZ employees noting that Seroquel causes weight gain. Yet the company seemed bent on keeping this information hidden. As I mentioned last week, this sure seems a lot like Zyprexa redux, except with more sex scandals and perhaps more buried data. I suggest that everyone head over to BNET and see the details.

Despite all the bad news, AZ is pressing onward with its application for FDA approval for Seroquel in both generalized anxiety disorder and depression. Yikes. I broke the story earlier this week about the "scientific literature" claiming that Seroquel worked better than Haldol in the treatment of schizophrenia, even though internal company data showed Haldol as superior to Seroquel in reducing schizophrenia symptoms. Between the discrepant data, the apparent hiding of negative clinical trials, and the effort to keep doctors distracted from data indicating that Seroquel caused weight gain, I think that Seroquel's luck may have run out -- my bet is that the FDA won't approve the drug for depression or GAD. But I've been wrong before; the FDA did approve Abilify as an add-on treatment for depression based on pretty flimsy evidence.

Monday, March 02, 2009

Internal Documents Suggest that Seroquel Data Were Not Presented Accurately

A document dated March 9, 2000 titled "BPRS meta-analysis" shows that AstraZeneca, maker of the antipsychotic drug quetiapine (Seroquel), knew full well that its drug did not relieve schizophrenia symptoms to the same extent as its older, generic competitor haloperidol (Haldol). The document provides results of a meta-analysis, a statistical analysis that combines the results of several individual studies. The authors used the Brief Psychiatric Rating Scale (BPRS) as their main measure of efficacy. The BPRS rates a variety of psychiatric symptoms relevant to schizophrenia. More details on the BPRS can be seen here. A total of ten clinical trials were included in the meta-analysis, which variously compared Seroquel to placebo, Haldol, and several other antipsychotic medications. Four trials compared Seroquel to Haldol. Several subscales of the BPRS were included in the analysis.
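For readers unfamiliar with how a meta-analysis like this works mechanically, here is a minimal sketch of inverse-variance fixed-effect pooling, the textbook way to combine per-trial effect estimates. The four trial effects and standard errors below are entirely invented for illustration -- they are not AstraZeneca's numbers, which the document does not reproduce in that form.

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooling of per-trial effect estimates.

    effects: per-trial mean differences (e.g., Seroquel minus Haldol on BPRS change)
    ses:     their standard errors
    """
    weights = [1 / se**2 for se in ses]  # precise trials count for more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    z = pooled / pooled_se
    return pooled, pooled_se, z

# Invented numbers for four hypothetical Seroquel-vs-Haldol trials
effects = [-1.2, 0.4, -2.1, -0.8]   # negative = Haldol reduced BPRS more
ses = [0.9, 1.1, 1.0, 0.8]
pooled, se, z = fixed_effect_meta(effects, ses)
print(f"pooled difference = {pooled:.2f} (SE {se:.2f}), z = {z:.2f}")
```

The point of the exercise: individual trials can wobble in either direction, but pooling sharpens the estimate, which is presumably how AstraZeneca's analysts ended up with statistically significant p-values favoring Haldol.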

When examining the amount of change on the BPRS, Seroquel consistently outperformed placebo, both on the BPRS total score and on several of the BPRS subscales. However, in several analyses, Seroquel was outperformed by Haldol and by risperidone (Risperdal; Janssen's antipsychotic). The document states: "Against 'all doses' of Seroquel, each of the three significant p-values generated was in favour of Haloperidol (Total BPRS, Factor V, and Hostility Cluster). There was no evidence of significant differences between the treatments when Haloperidol was compared to high-dose Seroquel." This is a plain admission that Haldol outperformed Seroquel on several outcomes, but that high-dose Seroquel yielded approximately equivalent results to Haldol. Only one trial compared risperidone to quetiapine, and the results clearly favored risperidone. The document stated: "Comparisons against Risperidone using all doses of Seroquel showed significant improvements for Risperidone on total BPRS, Factor V scores, and the Hostility Cluster. Against high-dose Seroquel only, the Anxiety item, Factor I, and Mood cluster scores were also significantly in favor of Risperidone." Risperidone beat Seroquel, and did so by a wider margin when a high dose of Seroquel was used.

The author of the document, Rob Hemmings, summarizes the results in a table, which appears below. It is described as such: "The following table is an attempt to simplify the claims that could be obtained from these results. A ✔ is entered for those comparisons where we have a statistically significant benefit, be it with 'all doses' or with high dose Seroquel... A x marks those comparisons where a comparator has demonstrated significant superiority compared to Seroquel."
The table shows that, according to an analysis by AstraZeneca's own employees, Seroquel outperformed only placebo, while several other medications demonstrated better efficacy than Seroquel.

Under the heading "Conclusions," the document states, in part:
In terms of generating positive claims for Seroquel, these analyses seem somewhat disappointing. Although some trends in favour of Seroquel were observed in the Factor I and Mood cluster items, there was no evidence in these analyses of a significant benefit for using Seroquel over any of the active agents assessed.
The internal analysis clearly indicates that, based on several clinical trials, Seroquel offered no benefits over the competition in terms of reducing schizophrenia symptoms. Indeed, other drugs tended to outperform Seroquel.

How Can These Data be Managed? Shortly after the internal meta-analysis was completed, AstraZeneca employees discussed how to handle the negative results. An AstraZeneca publications manager, John Tumas, wrote in an email:
The data don't look good. I don't know how we can get a paper out of this. My guess is that we all (including Schulz) saw the good stuff, ie the meta-analysis of responder rates that showed we were superior to placebo and haloperidol and then thought further analyses would be supportive and that a paper was in order. What seems to be the case is that we were only highlighting the good stuff and that our own analysis support the "view out there" that we are less effective than haloperidol and our competitors.
It would appear that an earlier analysis provided positive results which did not hold up during the internal meta-analysis. "Schulz" almost certainly refers to Dr. Charles Schulz, a psychiatrist at the University of Minnesota. In a press release from the year 2000, Dr. Schulz was quoted:
I hope that our findings help physicians better understand the dramatic benefits of newer medications like SEROQUEL because, if they do, we may be able to help ensure patients receive these medications first. The data suggest that SEROQUEL is an effective first- choice antipsychotic.
This press release was based on Schulz's presentation at the American Psychiatric Association convention in May 2000. The email from John Tumas discussed earlier noted that a group at AstraZeneca needed to meet soon "because Schulz needs to get a draft ready for APA and he needs any additional analyses we can give him well before then." It is unclear if Schulz ever received the analyses that showed Seroquel was less effective than Haldol. Regardless, in the press release, he was also quoted as saying: "Almost 50 years later, however, many patients are still taking these medications [such as Haldol], even though more effective treatments like Seroquel exist." While he was stumping for Seroquel in a press release, AstraZeneca's internal data painted a completely different picture.

Schulz, in his role as primary author, would typically be expected to demonstrate a solid understanding of the data underlying his presentation. It raises troubling questions when an independent academic author presents results that are in direct opposition to the underlying data. Such issues have been mentioned previously on this site.

The documents regarding Seroquel are available at Furious Seasons. Reporting on other facets of the documents can be found at the St. Petersburg Times, Bloomberg, New York Times, and the Wall Street Journal.

Friday, February 27, 2009

Seroquel Becomes Zyprexa, Part 2. But With More Sex.

I had a big post on Abilify ready to go for today, but I'll sit on it for a few days because Seroquel is the new Zyprexa, and that is the big news of the week. Well, that and Forest getting probed for allegedly marketing Celexa and Lexapro off-label for depression in kids. But more on that later. In the meantime, check out Jim Edwards' nice piece on the emerging scandal.

Back to the 'Quel. First off, a big-time round of applause for Philip Dawdy at Furious Seasons. He's been covering the unfolding Seroquel mess like a hawk, which is exactly what he did during the days of the Zyprexa documents scandal, which is still costing the admittedly criminal corporation of Lilly billions. According to legal documents, Wayne Macfadden, former U.S. Medical Director for Seroquel, admitted to sexual relationships with a British researcher at the Institute of Psychiatry (IOP) who participated in Seroquel research. Incredibly, Macfadden was also apparently entangled in a sexual relationship with a ghostwriter who wrote up results of Seroquel studies. The attorneys who are suing AstraZeneca claim that "The IOP researcher suggested that Macfadden would punish her if she even looked at studies that were favorable to Seroquel's competitors." Better yet, Macfadden was alleged to have "promised sexual favors in exchange for intelligence on AstraZeneca's competitors." It would seem a relevant conflict of interest to note that one was engaged in sexual relations with the Seroquel Medical Director, wouldn't it? I don't typically care about people's sex lives and am in favor of respecting people's privacy. Except when it is potentially related to poor science and/or poor care of patients.

So that's a little weird. And then... according to the Wall Street Journal, internal documents from AstraZeneca suggest that AZ hid concerns that the drug caused diabetes. Gee, that sounds like a page from the Zyprexa playbook. AZ sales reps were instructed to inform physicians that there was no causal link between Seroquel and diabetes. However, according to the WSJ, "In a 2000 position paper about the safety of Seroquel sent to Dutch regulatory authorities, an AstraZeneca doctor named Wayne Geller wrote that there was a relationship between the drug and diabetes. 'There is reasonable evidence to suggest that Seroquel therapy can cause impaired glucose regulation including diabetes mellitus in certain individuals,' Dr. Geller wrote." Expect a few more stories to appear in the mainstream press, followed by AZ doling out decent chunks of change to settle lawsuits. This may kill Seroquel's chances of FDA approval for depression, generalized anxiety disorder, and the common cold (OK, I made that one up). Let's hope the documents make their way to the internet so that bloggers such as myself and Philip Dawdy can dig through and go into more depth than the mainstream press. Just like we did with Zyprexa (1, 2, 3).

Can we call this the Sex-o-quel scandal or is that too cheesy?

By the way, Furious Seasons is currently running a fundraiser. I will be making my donation today, and you should do the same if you are in favor of mental health journalism that breaks important stories and is bold enough to cover a wide variety of important issues, regardless of their level of controversy.

Friday, February 13, 2009

What's Next for Schatzberg?

An advertisement in the Psychiatric Times (page 34 of .pdf) calls for applicants for the Chair of the Department of Psychiatry and Behavioral Sciences at Stanford University, a position now filled by Dr. Alan Schatzberg. The Stanford Psychiatry Department webpage currently states, in part:

Under the direction of the Chairman and Chief Alan F. Schatzberg, M.D., the Stanford University Department of Psychiatry and Behavioral Sciences, a center for the advancement of psychiatric practice, research and education, has three goals:

  • To advance the understanding of the etiologies of psychiatric or sleep disorders and to lay the foundation for new treatment development.
  • To develop innovative treatments and to deliver comprehensive services on a continuum of care to patients in a high quality efficient and compassionate manner.
  • To train medical students, residents and clinical and research fellows in the science and practice of psychiatry and sleep medicine.
Looks like Schatz is out. I have noted previously that Schatzberg was deeply involved with a duplicate publication that pimped Cymbalta. Schatzberg's close involvement with Corcept, maker of mifepristone (Corlux), has also raised eyebrows. Mifepristone has been an utter failure in clinical trials, but the manufacturer has attempted to spin the data in ways that should be obvious to anyone with a smidgen of critical thinking skills. Charles Grassley has hit Schatzberg as part of the investigation into the tangled web of conflicted interests involving psychiatrists and drugmakers. There is also some evidence that Schatz was involved in the launch of Zyprexa for bipolar disorder.

Schatz is apparently out as department chair. I wonder who will take his place...

Thanks to an alert reader for the tip.

Tuesday, January 27, 2009

Abilify For Depression: I'm Not the Only Skeptic

ResearchBlogging.org

In April 2008, findings were published in the Journal of Clinical Psychopharmacology which claimed that the atypical antipsychotic aripiprazole (Abilify) was an effective add-on treatment for depression. I heartily disagreed with the study's conclusions, noting that the patient-rated depression measure did not demonstrate an advantage over placebo, an inconvenient result that the authors tried to explain away as if it were unimportant. I also pointed out that the study design was biased in favor of Abilify:
Study Design. Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who failed to respond to treatment were assigned to Abilify + antidepressant or placebo + antidepressant. Those who responded during the initial 8 weeks were then eliminated from the study. So we've already established that antidepressant + placebo didn't work for these people -- yet they were then assigned to treatment for 6 weeks with the same treatment (!) and compared to those who were assigned antidepressant + Abilify. So the antidepressant + placebo group started at a huge disadvantage because it was already established that they did not respond well to such a treatment regimen. No wonder Abilify came out on top (albeit by a modest margin).

Here's an analogy. A group of 100 students is assigned to be tutored by Tutor A regarding math. The students are all tutored for 8 weeks. The 50 students whose math skills improve are sent on their merry way. That leaves 50 students who did not improve under Tutor A's tutelage. So Tutor B comes along to tutor 25 of these students, while Tutor A sticks with 25 of them. Tutor B's students do somewhat better than Tutor A's students on a math test 6 weeks later. Is Tutor B better than tutor A? Not really a fair comparison between Tutor A and Tutor B, is it?
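The tutor analogy can be made concrete with a toy simulation. The assumptions here are mine, not the study's: each student is either responsive or unresponsive to each tutor independently, the two tutors are equally good in the full population, and phase-1 improvers leave. The punchline is that after filtering, the stay-with-Tutor-A group is enriched for A-non-responders, so Tutor B "wins" phase 2 despite being no better overall.

```python
import random

random.seed(1)

def improves(responsive, p_respond=0.7, p_fluke=0.05):
    """A student improves with high probability if responsive to the tutor,
    and only rarely otherwise."""
    return random.random() < (p_respond if responsive else p_fluke)

N = 100_000
second_phase_a, second_phase_b = [], []
for _ in range(N):
    # Latent responsiveness to each tutor, drawn independently and equally often:
    # the two tutors are identical in the full population.
    resp_a = random.random() < 0.5
    resp_b = random.random() < 0.5
    if improves(resp_a):
        continue  # phase-1 improvers leave the study
    # The remaining pool is enriched for students unresponsive to Tutor A.
    if random.random() < 0.5:
        second_phase_a.append(improves(resp_a))  # stay with Tutor A
    else:
        second_phase_b.append(improves(resp_b))  # switch to Tutor B

rate_a = sum(second_phase_a) / len(second_phase_a)
rate_b = sum(second_phase_b) / len(second_phase_b)
print(f"Tutor A phase-2 improvement: {rate_a:.1%}")
print(f"Tutor B phase-2 improvement: {rate_b:.1%}")
```

Under these made-up parameters, Tutor B improves roughly a third of students while Tutor A improves about a fifth, purely because of who was left in the pool. Swap "Tutor A" for antidepressant + placebo and "Tutor B" for antidepressant + Abilify and you have the design complaint in miniature.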
Some commenters agreed with my take on the matter while others did not. Two letters to the editor published in the latest Journal of Clinical Psychopharmacology raised concerns about the study. Alexander Tsai, from UCLA, wrote that he was concerned that the advantage for Abilify was small (2.8 points on the Montgomery-Asberg Depression Rating Scale) and that the study design was biased in favor of Abilify (agreeing with my earlier point).

Dr. Bernard Carroll, wrote in his letter that:
  • The advantage of Abilify over placebo was small
  • There was no advantage on the patient-rated measure
  • Due to the notable side effect profile of Abilify, clinical raters could likely distinguish patients who were taking Abilify from those who were taking placebo, which could have biased their ratings. Thus, he questions if the study was truly double-blind.
  • The authors did not report whether several side effects were more common on Abilify than placebo. Dr. Carroll calculated that akathisia, fatigue, restlessness, and insomnia were all significantly more common on Abilify and wondered why the authors did not include such data in their report.
  • The authors did not note the relationship between akathisia (severe restlessness/tension) and suicide, which is concerning given that Abilify produces akathisia in droves.
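The kind of check Dr. Carroll describes is straightforward to reproduce once the raw counts are in hand. As a sketch (with made-up counts, since the post does not reproduce the trial's actual numbers), a one-sided Fisher's exact test on a 2x2 table of adverse-event counts needs nothing beyond the Python standard library:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table
        [[a, b],   # drug:    events, no events
         [c, d]]   # placebo: events, no events
    Returns P(>= a events in the drug arm) under the null of equal rates
    (hypergeometric upper tail)."""
    n1, n2 = a + b, c + d        # arm sizes
    k, n = a + c, a + b + c + d  # total events, total patients
    tail = sum(comb(n1, x) * comb(n2, k - x) for x in range(a, min(k, n1) + 1))
    return tail / comb(n, k)

# Hypothetical counts for illustration only:
# akathisia in 45 of 180 patients on drug vs. 7 of 170 on placebo.
p = fisher_one_sided(45, 135, 7, 163)
print(f"one-sided p = {p:.1e}")
```

With a split that lopsided, the p-value is vanishingly small, which is exactly why "we didn't compare the arms" is such a strange position for the authors to take.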
The Defense: Robert Berman from Bristol-Myers Squibb wrote back to defend the study. His points were not impressive. Noting that Abilify did not outperform placebo on the self-report measure in the trial, he wrote that "this may be due to the lower sensitivity" of the measure. So the drug wasn't the failure -- blame the rating scale instead. The people at BMS picked the scale, and when it didn't give the results they liked, suddenly it became a poor measure of depression. I bet Dr. Berman would not have complained about the instrument had it yielded results in favor of Abilify.

Adverse Events: As for not reporting adverse events, well, there's a perfectly good explanation hidden somewhere in here...
...we have clearly reported rates of spontaneously reported treatment-emergent events that occurred at a rate of 5% or greater in any treatment group. As this study is not designed to collect adverse events in a systematic manner, statistical comparison between treatment groups is not appropriate.
So let me get this straight. They discussed "spontaneously reported" events, meaning events reported by patients without direct questioning. Spontaneous reports are a joke because most side effects are simply never volunteered. Based on spontaneous report, the rate of sexual side effects on SSRIs is quite low; but when you bother to ask people taking SSRIs about their sexual functioning, the rates of sexual problems increase drastically. So when Dr. Berman goes on to write that no suicide-related adverse events were reported in the study, keep in mind that the study investigators were not asking about such events. It may be more accurate to say that nobody committed suicide during the study, but nobody was tracking suicidal ideation unless patients reported such problems themselves. Yes, suicidal ideation was covered a little by measures used in the study, but a more systematic assessment would have been helpful. To give the authors credit, at least they did include a couple of measures of extrapyramidal symptoms, from which we gathered that akathisia occurred in 25% of patients. Yikes.

Saying that the study was not designed to collect adverse event data in a systematic manner is frightening. If adverse event collection was not systematic, then the authors' claim in the study report that "adverse events were generally mild to moderate" is meaningless. You can't say that adverse event data were not collected systematically and then also declare the treatment "safe," as the authors do in their paper. That is the definition of duplicitous. In any case, the authors should have reported that several adverse events were significantly more likely on Abilify than on placebo rather than making the ridiculous claim that comparing adverse event rates between treatment and placebo is not appropriate.

Dr. Berman does not address the less-than-3-point benefit for Abilify over placebo. Nor does he offer any real response to the concern, raised by both Dr. Tsai and myself, that the study design was biased in favor of Abilify.

Kudos to Dr. Carroll and Dr. Tsai for taking the time to write excellent letters addressing quite problematic issues in this study. Every time I see a commercial pimping Abilify for depression, I cringe. It's good to know that some people in the medical community are seeing through the weak research that "supports" the use of Abilify as an antidepressant.

Citation for the offending study below:

Ronald N. Marcus et al. (2008). The efficacy and safety of aripiprazole as adjunctive therapy in major depressive disorder. Journal of Clinical Psychopharmacology, 28(2), 156-165.

Friday, January 16, 2009

Zyprexa: Lilly Admits Guilt, But Physicians Share the Blame

In February 2007, I wrote a post in which I described evidence that Lilly's antipsychotic olanzapine (Zyprexa) was marketed off-label for dementia. The evidence I discussed was based on documents generously and bravely hosted at Furious Seasons. At the time, I was careful to avoid labeling the practices as illegal -- they were definitely unethical, but I couldn't say for sure whether a law had been broken. However, a law firm known to represent Lilly was regularly visiting my website at the time, which made me think that Lilly was seriously concerned about legal trouble. I suppose they had good reason to be worried.

I can now officially say that the off-label marketing of Zyprexa for dementia was criminal. Lilly just admitted to committing a crime in the off-label marketing of the drug for dementia and settled legal charges for a cool $1.4 billion. And there are more cases still on the books.

For a really interesting take on this situation, listen to New York Times reporter Gardiner Harris. You can find his talk embedded in the New York Times story from January 14, 2009, which is linked here. The plea agreement in the latest case is available here.

It is important to remember that pimping Zyprexa for dementia is far from a victimless crime. Antipsychotics, including Zyprexa, have been linked to an increased rate of death in elderly patients and have also been shown to be of little to no more benefit than a placebo in reducing dementia-related symptoms (1, 2). For a disturbing account of the widespread inappropriate use of such medications, read this post and weep.

This is truly a case where lust for profits likely led to the early demise of who-knows-how-many patients. And we're just talking about dementia, not the other cases where Lilly went berserk with marketing Zyprexa (1, 2).

Blame the Physicians Too: While much of the blame for the overuse of antipsychotics in the elderly can be placed on corporations such as Lilly, it is also true that Lilly does not directly administer the drugs. Physicians need to ask themselves: how is prescribing drugs that offer little benefit and are linked to patient deaths legitimately practicing medicine? First, do no harm? Yes, I know that dementia is a hell of a difficult condition to handle. But does that mean we should be doling out ineffective and potentially deadly treatments to "manage" persons with dementia? Yes, reps from Lilly (and likely others) wined and dined physicians, "educating" them about the benefits of Zyprexa and other antipsychotics. That's their job -- to positively spin their products. No different from a used car salesperson, except that drug reps are typically much better looking.

Doctors need to use critical thinking skills -- you don't just listen to a drug rep or skim a drug-company-provided journal article reprint and then jump on the Zyprexa bandwagon. How about learning how to evaluate evidence so that junky marketing disguised as science does not persuade you to write inappropriate scripts? Yes, we can be outraged that Lilly and others pimp ineffective and dangerous treatments, but the physicians are the most important link. If they cannot be better educated to understand clinical trial results, and cannot take the time to critically review the scientific literature, then this pattern will repeat itself over and over again. It takes tricky pharmaceutical marketing in combination with an audience that is unwilling to think critically for this type of tragedy to occur. And occur again, it will.

Unfortunately, the published scientific literature is quite biased, as negative studies tend to vanish rather than grace the pages of our journals. But it's still a much better idea for prescribers to actually read journals and critically examine their findings, as opposed to relying on marketing alone. Better yet would be for research data on medications (negative and positive) to be available for all to see.

Monday, January 12, 2009

The Budget Crisis, Universities, and Key Opinion Leaders

Everyone knows that state budgets across the United States are in a crunch. All state-supported universities are looking for sources of income outside of taxpayer funds. As state legislatures look to cut money, many state universities are in for a big budget hit. So if the state is going to pony up less money, how can a university survive...?

Perhaps by seeking to entice industry funding. Set up a few clinical trials and see what happens. There is nothing inherently wrong with university faculty working on industry-sponsored research. In an ideal world, all goes according to plan and everyone benefits from such collaboration. Universities love industry collaboration because it brings in good money. Researchers collaborate with industry for some altruistic motives, such as funding to investigate treatments that might bring about better lives for people struggling with various ailments. And because external funding makes the university administration happy, it also makes life at a university medical center much more pleasant for those who bring in the bucks.

But how do things really work? Sometimes, they go well. But there are also nondisclosure agreements, in which an "independent" academic researcher gives away any right to discuss the data from clinical trials that he/she is working on unless approval is given by industry. As Graham Emslie, key opinion leader in the field of child psychiatry, can attest, there are certainly many cases where negative results were found for a drug, but the negative data were buried to avoid untoward publicity. And the write-up of joint academic-industry work is often farmed out to ghostwriters who spin the final product to pimp a product rather than accurately describe the results. As regular readers know, this is just the tip of the iceberg.

If academics are willing to oversee industry-sponsored research, have substantial input into writing the final presentation of the results, and actually review the data from these joint ventures with industry, then academic-industry collaboration can be fruitful. However, if academics are simply used to recruit patients for clinical trials, stamp their names on papers built on data with which they are entirely unfamiliar, and remain complicit in hiding negative data, then the current sad state of affairs will continue unabated.

Given the current financial situation, universities will be strongly encouraging faculty to seek external funding for their work, and we can only hope that academics will behave responsibly when such collaborations occur.

Wednesday, January 07, 2009

Sowing the Seeds of Lexapro

ResearchBlogging.org

I'm reading an article with my jaw completely agape and I thought I'd share the pain. The good people at Forest Pharmaceuticals have put together a tragic waste of journal space. The editorial board at the journal Depression and Anxiety should call an emergency meeting to see how this thing got published. Any peer reviewer who put a stamp of approval on this should be forced to listen to Michael Bolton's Greatest Hits at maximum volume for 12 hours straight.

OK, so what am I having a fit about? Here's what happened in this so-called study. 109 primary care doctors were recruited to participate, for which they were doubtlessly paid a decent chunk per patient (not discussed in the manuscript). The lucky depressed patients of these physicians then received escitalopram (Lexapro) for six months. The manuscript mentions that the "investigators" (the primary care docs) "were not required to have previous clinical research experience to be selected for this study." Yeah, no kidding.

There was no control group, and there had already been dozens of studies on the effects of Lexapro in depression, so how are we getting any new information out of this study? Maybe because it investigates Lexapro in primary-care settings; perhaps there was no research on that beforehand. Well, no. The manuscript states that "The efficacy and tolerability of escitalopram in MDD have been extensively evaluated in primary-care settings," citing four relevant studies. So the study is not actually an attempt to answer a scientific question. What, exactly, is it then?

Looks and smells like a seeding trial, about which Harold Sox and Drummond Rennie wrote:
This practice—a seeding trial—is marketing in the guise of science. The apparent purpose is to test a hypothesis. The true purpose is to get physicians in the habit of prescribing a new drug. Why would a drug company go to the expense and bother of conducting a trial involving hundreds of practitioners— each recruiting a few patients—when a study based at a few large medical centers could accomplish the same scientific purposes much more efficiently? The main point of the seeding trial is not to get high-quality scientific information: It is to change the prescribing habits of large numbers of physicians. A secondary purpose is to transform physicians into advocates for the sponsor’s drug. The company flatters a physician by selecting him because he is “an opinion leader” and incorporates him in the research team with the title of “investigator.” Then, it pays him good money: a consulting fee to advise the company on the drug’s use and another fee for each patient he enrolls. The physician becomes invested in the drug’s future and praises its good features to patients and colleagues. Unwittingly, the physician joins the sponsor’s marketing team. Why do companies pursue this expensive tactic? Because it works.
So these primary care doctors now feel like "researchers," even though their investigation had essentially zero scientific merit. That probably makes these "investigators" feel important -- and the association between feeling important/scientific and Lexapro is a feeling Forest was banking on to increase Lexapro prescriptions in Canada.

Findings: So what did this extremely important piece of seeding, er, research find? Get ready... Lexapro is safe and effective. To quote the authors: "Escitalopram was well tolerated, safe, and efficacious. Escitalopram can be used with confidence to treat patients with MDD in Canadian primary-care settings." And "As adherence to antidepressant treatment is paramount to achieving long-term recovery, the present results suggest that escitalopram should be considered among the first-line choices of antidepressant used in primary care." So with no control group, we can determine that a Lexapro prescription should be among the first things that come to mind when treating depression. This is mind-boggling. This journal often publishes good work, but this is among the most uninformative pieces of research I have read. Unless one is thinking about marketing, in which case it is very enlightening.


Citation: Pratap Chokka, Mark Legault (2008). Escitalopram in the treatment of major depressive disorder in primary-care settings: an open-label trial. Depression and Anxiety, 25(12). DOI: 10.1002/da.20458

Monday, January 05, 2009

We're All Mentally Disordered: College-Age Edition

A study in the December 2008 issue of the Archives of General Psychiatry concluded that almost half of college-aged Americans suffered from a DSM-IV disorder over a one-year timeframe. Yes, I am behind the curve on this one -- Furious Seasons was all over this last month (1, 2). Rather than rant about the very odd idea that half of young adults are suffering from a mental disorder, I want to start by mentioning one aspect of the study -- perhaps the most important one. Let's look at how the diagnoses were assigned. To quote from the study:

All of the diagnoses were made according to DSM-IV criteria using the National Institute on Alcohol Abuse and Alcoholism Alcohol Use Disorder and Associated Disabilities Interview Schedule–DSM-IV version, a valid and reliable fully structured diagnostic interview designed for use by professional interviewers who are not clinicians.

If the interviewers are not clinicians, on what basis are they trained to distinguish the truly significant distress that might justify a mental health diagnosis from milder symptoms that do not constitute a mental disorder? Here's some information from a different study that used a different slice of the same overall dataset on which the December 2008 study was based:

Approximately 1800 lay interviewers from the US Bureau of the Census administered the NESARC using laptop computer–assisted software that included built-in skip, logic, and consistency checks. On average, the interviewers had 5 years’ experience working on census and other health-related national surveys. The interviewers completed 10 days of training. This was standardized through centralized training sessions under the direction of NIAAA and census headquarters staff.

So the figures that will be trotted out in the media ad infinitum about the shoddy mental health of American youth are based on laptop-assisted interviews conducted by people who apparently have no formal training in mental health. Maybe mental health and related disability are really so easy to assess that we don't need experienced, formally trained interviewers. If that's the case, maybe we should just have Census Bureau interviewers provide initial mental health assessments in clinical care settings -- after all, if they are such good mental disorder detectors, couldn't we just train a bunch of interviewers rather than spend millions of dollars training and paying mental health professionals? Think of the savings!

I mean no disrespect toward the Census Bureau interviewers. They are performing important work that in many instances helps us to better understand the health of the nation. All I'm saying is that we might want to avoid uncritically accepting judgments of our nation's mental health based on interviewers who lack mental health training and experience.

Friday, January 02, 2009

KOL Continues to Vanish

Charles Nemeroff, about whom I have written much, continues to disappear. His latest vanishing act: from a psychiatric research gathering in Berlin in late November 2008. The congress website now reads: "Dr. Charles B. Nemeroff (Atlanta, the USA) called his participation off in the congress and its scientific contributions." We can only hope that they found another key opinion leader of his stature to replace him.

Back story.

Wednesday, December 17, 2008

The Incredible Vanishing Key Opinion Leader

Charles Nemeroff, former chair of psychiatry at Emory University and key opinion leader extraordinaire, has vanished. Not quite from the face of the Earth, but from Medscape CME and now from a Georgia mental health commission. According to an investigation by Senator Charles Grassley, Nemeroff failed to disclose a whole boatload of money he received from Big (and little) Pharma. For example, it appears that Nemeroff received about $20,000 in cash from GlaxoSmithKline in one month in exchange for promoting GSK products to his peers.

I have previously written about a number of, um, "interesting" behaviors on the part of Nemeroff, which I recommend you read in order to understand that Nemeroff has, on several occasions, engaged in behavior that certainly appears to have placed the causes of his corporate sponsors over science. Not good for an "independent" researcher.

And now, it seems that Chuck Nemeroff is vanishing. Dr. Bernard Carroll noted that Nemeroff's continuing medical education offerings had vanished from Medscape and offered the following:
Well, good for Medscape. They came in for their share of criticism, here and here, a while back. Now they deserve credit for displaying ethical standards. Meanwhile, we are waiting for another company called CME Outfitters to get the message. Dr. Nemeroff is slated to moderate a raft of new programs for this company in the coming weeks, sponsored by corporations like Pfizer, AstraZeneca, and Ortho-McNeil Janssen. CME Outfitters' logo, after all, is Education with Integrity. Sooner or later the pharmaceutical corporations, like the CME companies, will understand that they are not helping themselves by trotting out a shopworn and sleazy KOL figurehead like Nemeroff for their marketing efforts. And other KOLs who up to now were willing to "wet their beaks" in these CME forums controlled by the Boss of Bosses Nemeroff will now be leery of associating with him.
Well, CME Outfitters is still rolling with Nemeroff. For example, he has an upcoming program called "Atypical Antipsychotics in Major Depressive Disorder: When Current Treatments Are Not Enough," which is a scary thought given that he appears to have been pulling data from thin air for a prior CME exercise in which he pimped risperidone as a treatment for refractory depression. Specifically, Nemeroff's presentation claimed that risperidone improved sexual function in a clinical trial, when the published article based on the trial's results said no such thing. In addition, Nemeroff's claim that risperidone had shown efficacy in a short-term study versus placebo for depression was also unsupported. So I'm thinking the upcoming program on antipsychotics for depression might be a fantastic example of marketing beating the crap out of science.

Georgia appointed a commission to address several issues within the public mental health system. They have completed a report. Interestingly...

The final version also does not contain the name of commission member Charles Nemeroff, an Emory psychiatry professor who has been a subject of a U.S. Senate Finance Committee investigation into whether drug company money paid to doctors and academics compromises medical research and scholarship. Nemeroff, an internationally known expert on depression, did not attend recent commission meetings.

But Nemeroff was appointed to the commission with some fanfare. The press release listing Nemeroff's accomplishments is pretty lengthy. The Georgia state legislator who appointed Dr. Nemeroff said, "I am confident that Charles will be an asset to this commission and will serve as a strong advocate for the people of Georgia being served [by] our mental health systems."

Yet Nemeroff's name was not on the final report. If it weren't for his work with CME Outfitters, I would be worried that we might need to file a missing persons report for Dr. Nemeroff.

Update (12-18-08): The Wall Street Journal Health Blog has two interesting posts on Dr. Nemeroff (1, 2). Read them and feel free to file them under "bizarre."

Wednesday, December 10, 2008

Treatment Guidelines and GSK's Open Disclosure

Last week, I noted that a recently published article had found that studies favoring GSK's "mood stabilizer" Lamictal tended to get published in medical journals while articles reaching less favorable conclusions tended to remain unpublished. I wrote that "GSK worked the system expertly and it paid off." A reader commented that he thought my characterization of GSK as hiding negative data on Lamictal was inaccurate. I appreciate his well-written critical comments, which are linked here and are partially reproduced below:
Acute Depression - All of the acute depression studies (there were 5 not 3 as you reported) were presented at scientific meetings over the years and were recently published in Bipolar Disorders (Calabrese et al. 2008). Why so long to publish? The paper was rejected twice and took 3 years to get accepted because journal reviewers did not find the data of interest.
I responded via comment that, if his history is accurate, then the reviewers should be flogged. He added that GSK had provided negative Lamictal data to numerous authors who wrote review articles on Lamictal. In some cases, this appears to be true. However, in at least one notable case, either GSK failed to provide the data or the authors completely ignored it. The case in point is a 2004 "academic highlight" (i.e., lowlight) in the Journal of Clinical Psychiatry. Of relevance, the article was funded by an "unrestricted educational grant" from GSK. The article bashes antidepressant treatment in bipolar disorder as unsupported by evidence. Then the expert panel of authors/key opinion leaders put together their guidelines for treating bipolar disorder.

The article begins by discussing bipolar depression. Lithium is discussed first and receives a positive review. Then comes Lamictal, GSK's mood stabilizer. They discuss, in detail, the positive results from Calabrese et al. The authors then discuss some positive long-term findings for lamotrigine before moving on to olanzapine and olanzapine/fluoxetine. They conclude that lithium and Lamictal have the best evidence for treating bipolar depression as can be seen here:

Category 1 evidence is the best evidence, so hooray for lamotrigine/Lamictal! But what don't they discuss in their "expert" review of the data? How about two negative studies -- SCA40910 (completed in 2002) and SCAB2001 (completed in 1997) -- GSK titles of studies that both showed negative results for Lamictal in treating depression in bipolar disorder. A reader tracked these down and sent them -- you can find them if you head to GSK's clinical trial registry. Given that these "International Consensus Guidelines" were published in February of 2004, you'd think the authors would have included data from both of GSK's unpublished studies unless:
A. They didn't know about their existence (and why would they unless GSK told them)
B. They knew about them but opted to not include them in this "expert review"

Given that a GSK employee has told me how open and honest GSK has been with their data, I'd be interested in seeing his response as to which of the above he believes took place. Keep in mind that the Journal of Clinical Psychiatry, in which this so-called "academic highlight" appeared, is a very widely read journal. According to Google Scholar, this piece has been cited 46 times, many of which have doubtlessly recycled the inaccurate claim that Lamictal is an effective treatment for acute bipolar depression.

The same pattern as usual: Company conducts research, selectively publishes positive results, funds "educational" pieces such as "academic highlights" to paint an overly rosy picture of treatment effectiveness and/or safety, and physicians, based upon the "evidence base," delude themselves into thinking that they are writing prescriptions based on the best scientific data.

Thursday, December 04, 2008

Lamictal: Break Out the Shovel

ResearchBlogging.org


GlaxoSmithKline, manufacturer of lamotrigine (Lamictal), the antiepileptic drug used widely for bipolar disorder, happily hid clinical trial results which found Lamictal was no better than a placebo. Given recent findings about how often pharmaceutical companies selectively push positive results to publication in medical journals while suppressing negative results, this can hardly be considered a surprise. It is nonetheless instructive to examine how the published data on Lamictal paint a much rosier picture of the drug's efficacy compared to unpublished data.

Nassir Ghaemi, a psychiatrist at Tufts University Medical Center, dug through GSK's online database of information, and found that several negative Lamictal studies (studies which failed to show a benefit for Lamictal over placebo on the primary outcome measure) were quietly residing on the site. Why did GSK post such information on their site? Not out of the goodness of their hearts; rather, because they were forced to post data about clinical trial outcomes as a result of a legal agreement. Here's what Ghaemi found in GSK's database:

Acute mania: Two studies compared lithium, Lamictal, and placebo. Both found that Lamictal did not beat a placebo. Neither study was published.

Acute bipolar depression: Three studies were conducted. All three were negative on the primary outcome. Two were not published. In the third, there was a positive result for Lamictal on a secondary outcome measure, and the paper was written to emphasize the positive outcomes, stating that "Lamotrigine monotherapy is an effective and well-tolerated treatment for bipolar depression."

Rapid cycling bipolar: Two studies were completed; both were negative on the primary outcome. However, one study showed favorable outcomes for Lamictal on several secondary measures. The obviously negative study was not published while the study that showed a number of benefits for Lamictal was published.

Prophylaxis (Prevention of future episodes): Two studies were conducted, both of which showed that patients on Lamictal went longer between episodes than did placebo patients. Both studies were published.
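The cumulative effect of this publish-the-positives pattern is easy to see in a toy simulation (all numbers hypothetical): even when a drug has exactly zero true benefit, the subset of trials that happen to clear a significance threshold will, on average, show a healthy-looking effect.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # the drug is truly no better than placebo
N_TRIALS = 1000
N_PER_ARM = 100
SD = 8.0            # within-arm standard deviation of improvement scores

def run_trial():
    """Return the observed drug-minus-placebo difference in mean improvement."""
    drug = [random.gauss(TRUE_EFFECT, SD) for _ in range(N_PER_ARM)]
    placebo = [random.gauss(0.0, SD) for _ in range(N_PER_ARM)]
    return statistics.mean(drug) - statistics.mean(placebo)

results = [run_trial() for _ in range(N_TRIALS)]

# "Publish" only trials whose difference clears roughly p < .05 one-sided
# (1.645 standard errors; SE of the difference is SD * sqrt(2 / N_PER_ARM)).
threshold = 1.645 * SD * (2 / N_PER_ARM) ** 0.5
published = [r for r in results if r > threshold]

print(f"mean effect, all {N_TRIALS} trials: {statistics.mean(results):+.2f}")
print(f"mean effect, published only ({len(published)} trials): "
      f"{statistics.mean(published):+.2f}")
```

A literature built only from the "published" subset would conclude the drug works. That is the whole game of selective publication, in about twenty lines.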

Well, I'm shocked, shocked, that GSK would simply bury a slew of negative data on their product. Who woulda thunk it? So what does this mean for Lamictal? Dr. Ghaemi was interviewed by Dr. Daniel Carlat (of Carlat Psychiatry Blog and the Carlat Psychiatry Report). There were many pieces of Ghaemi's interview that were interesting (see February 2008 issue of Carlat Psychiatry report; sorry, no link available), but the most interesting piece was:
Carlat: My understanding is that you wrote up your discovery of the negative Lamictal data and submitted the paper to some journals. What has been the response?

Ghaemi: I first submitted to JAMA because I knew they were sympathetic to this kind of critique. Their reaction was, "We already publish many papers like this; this is old news; there is nothing new here." They recommended that I send it to a psychiatric journal. So then I sent it to the American Journal of Psychiatry, but they rejected it as well, saying that they were doubtful that this type of negative publication bias was common among other companies marketing medications for bipolar disorder.

Carlat: Do you think there is much suppressed negative data about other drugs?

Ghaemi: It's very hard to get this information. Companies are not required to disclose it. And if they do publish it, they will sometimes delay publication for two or three years, and then publish it in an obscure journal that is less likely to be read.
Ghaemi also did some digging on other drugs used for bipolar disorder and found that negative studies for Seroquel and Abilify were also lurking in the unpublished zone. However, it appears that Lamictal is the worst offender of the bunch. Is it just me, or is anyone else getting flashbacks to GSK's handling of suicide data regarding its antidepressant Paxil?

Thanks to an anonymous reader for helping to track down relevant information on this and an upcoming post on this topic. The forthcoming post will deal with the misleading scientific literature on Lamictal. Key opinion leaders will likely be mentioned. The usual stuff, just on a different drug and plugging in the names of other academics who apparently deemed it acceptable to mislead their fellow physicians about the efficacy of lamotrigine. GSK worked the system expertly and it paid off.

S. Nassir Ghaemi, Arshia A. Shirzadi, Megan Filkowski (2008). Publication bias and the pharmaceutical industry: the case of lamotrigine for bipolar disorder. Medscape Journal of Medicine, 10(9), 211.

Tuesday, November 25, 2008

Key Opinion Leader With A Very Short Fuse

Psychiatrist Joe "Short Fuse" Biederman of Harvard University is really in hot water now. The sordid details can be seen in a fantastic article by Gardiner Harris of the New York Times. Here's just one snippet:

In a November 1999 e-mail message, John Bruins, a Johnson & Johnson marketing executive, begs his supervisors to approve a $3,000 check to Dr. Biederman as payment for a lecture he gave at the University of Connecticut. “Dr. Biederman is not someone to jerk around,” Mr. Bruins wrote. “He is a very proud national figure in child psych and has a very short fuse.” Mr. Bruins wrote that Dr. Biederman was furious after Johnson & Johnson rejected a request that Dr. Biederman had made for a $280,000 research grant. “I have never seen someone so angry,” Mr. Bruins wrote. “Since that time, our business became non-existant (sic) within his area of control.”

Mr. Bruins concluded that unless Dr. Biederman received a check soon, “I am truly afraid of the consequences.”

A series of documents described the goals behind establishing the Johnson & Johnson Center for the study of pediatric psychopathology, where Dr. Biederman serves as chief. A 2002 annual report for the center said its research must satisfy three criteria: improve psychiatric care for children, have high standards and “move forward the commercial goals of J.& J.,” court documents said.

And from Bloomberg,

Biederman “approached Janssen multiple times to propose the creation of a Janssen-MGH center,” according to an e-mail from a J&J executive. The center would “generate and disseminate data supporting the use” of Risperdal in children, the e-mail said. Pediatric use was approved by U.S. regulators in August 2007.

Wow. And the plot sickens, er, thickens from there. Normally, being caught with one's hands this deep into the cookie jar would lead me to write a much more blistering piece, but the day job shows no signs of abating in its workload. Fortunately, Philip Dawdy is rolling with the story at Furious Seasons (1, 2).

Let's see if Biederman's defenders can defend him in another op-ed as they did a few months ago. Or maybe we can leave Joe to defend himself. Here's what he said a few months ago when facing criticism:

Biederman dismisses most critics, saying that they cannot match his scientific credentials as co author of 30 scientific papers a year and director of a major research program at the psychiatry department that is top-ranked in the "US News & World Report" ratings.

"The critics 'are not on the same level. We are not debating as to whether [a critic] likes brownies and I like hot dogs. In medicine and science, not all opinions are created equal,' said Biederman, a native of Czechoslovakia who came to Mass. General in 1979 after medical training in Argentina and Israel.

Nope, most of his critics cannot match his credentials of apparently shaking down hundreds of thousands of dollars from Johnson & Johnson. But maybe I just like brownies and he likes hot dogs. Another key opinion leader whose reputation is deservedly shot to shreds. Nemeroff, Biederman, and the list goes on.

Thursday, November 20, 2008

Staying Alive

The day job has been merciless lately and promises little relief in the near future. Thanks for the emails. I am surviving and hope to write something here relatively soon (emphasis on relatively).

Friday, October 31, 2008

You Really Can Report Safety Data


A new study concluded that the combination of sertraline (Zoloft) and cognitive-behavioral therapy (CBT) worked better than either treatment alone for children with anxiety disorders. There was even a nonsignificant trend for Zoloft to outperform CBT, which was quite surprising to me. But that's not really the point of this post. The study can be read at the New England Journal of Medicine website.

I'd like to commend the researchers on doing something that is exceedingly rare in psychopharmacology and psychotherapy trials -- they gave a detailed report of adverse events. And we find that a greater percentage of kids showed suicidal ideation on... CBT. It was not a statistically significant difference, but it was nonetheless surprising. Zoloft, however, was related to significantly more disinhibition, irritability, restlessness, and poor concentration than CBT. This may have been a fluke, but two participants on Zoloft had "homicidal ideation" compared to none on CBT. I have bitched several times about missing/mysterious data on adverse events in psychiatric drug trials, and some have also complained that psychotherapy trials do a poor job of tabulating adverse event data. Again, kudos to the study authors for reporting adverse events; imagine if reporting safety data in such a manner were commonly practiced.

Source: J. T. Walkup, A. M. Albano, J. Piacentini, B. Birmaher, S. N. Compton, J. T. Sherrill, G. S. Ginsburg, M. A. Rynn, J. McCracken, B. Waslick, S. Iyengar, J. S. March, P. C. Kendall (2008). Cognitive behavioral therapy, sertraline, or a combination in childhood anxiety. New England Journal of Medicine. DOI: 10.1056/NEJMoa0804633