
Friday, May 14, 2010

Eli Lilly: Our Drug Failed, So it Has Serious Potential

These folks at Lilly must think we are exceptionally stupid. As in can't tie our own shoes. A study in the Journal of Psychiatric Research recently found that their experimental antidepressant LY2216684 was no better than placebo. Here are a couple of quotes from the abstract:
LY2216684 did not show statistically significant improvement from baseline compared to placebo in the primary analysis of the Hamilton depression rating scale (HAM-D17) total score. Escitalopram demonstrated significant improvement compared to placebo on the HAM-D17 total score, suggesting adequate assay sensitivity.
On the primary outcome measure, the experimental drug failed whereas Lexapro worked to some extent. I know what you're thinking - "the sample size was probably too small to find a significant effect." Um, you're wrong. How about 269 people on the Lilly drug, 138 on placebo, and 62 on Lexapro?

But wait, here comes the good news...
Both LY2216684 and escitalopram showed statistically significant improvement from baseline on the patient-rated QIDS-SR total score compared to placebo... The results of this initial investigation of LY2216684’s efficacy suggest that it may have antidepressant potential.
The good news for Lilly is that most people who claim to "read journal articles" really just browse the abstract without actually looking at the full text of the paper. For the select few who have nothing better to do than read Lilly propaganda, take a look at Table 2. A total of 12 secondary outcome measures are listed. The Lilly drug beat placebo on... ONE of them. Lilly doesn't say much about how much better their drug was than placebo on the QIDS-SR measure besides throwing around that often meaningless term "statistically significant." People on the drug improved by 10.2 points whereas placebo patients improved by 8.3 points. So about a 20% difference. If you bother to calculate an effect size, it is d = .24, which is quite small and clinically insignificant. So on the ONE measure where the drug was better than placebo, it was by a small margin, and it missed the mark on the 11 other secondary measures as well as on the primary outcome measure. But "it may have antidepressant potential." Hell yes, I've never been so excited about a new drug.
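For the arithmetic-minded, here is a minimal sketch of where that d = .24 comes from. The paper's standard deviations aren't quoted in this post, so the pooled SD of roughly 8 points below is back-calculated from the reported effect size; it is an assumption for illustration, not a number taken from Table 2.

```python
# Minimal sketch of the effect-size arithmetic (not code from the paper).
# Mean QIDS-SR improvements of 10.2 (drug) and 8.3 (placebo) are the figures
# quoted above; the pooled SD of ~8 points is back-calculated from the
# reported d = .24 rather than taken from the article.

drug_improvement = 10.2
placebo_improvement = 8.3
pooled_sd = 8.0  # assumed, implied by the reported effect size

cohens_d = (drug_improvement - placebo_improvement) / pooled_sd
print(round(cohens_d, 2))  # ~0.24 -- "small" by the usual conventions
```

A difference of about a quarter of a standard deviation is small enough that an individual patient or clinician would be unlikely to notice it.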

By the way, Lilly is apparently trying this wonder drug out in at least five trials. The journal in which this article appeared has published other dubious Eli Lilly research in the past. The editorial review process is clearly working wonders over at the Journal of Psychiatric Research. Sad, really. The journal publishes some really good work, but then runs this kind of junk as well.

Depression Self-Report Sidebar: The self-report measure on which the drug had an advantage, the Quick Inventory of Depressive Symptomatology (QIDS-SR), is apparently really awesome, according to Lilly. Remember, it's the only measure on which their experimental failure of a drug beat placebo. The authors wrote: "Self-reported depression symptoms, such as those obtained by the QIDS-SR, may be more sensitive than clinician-administered scales for signal detection in clinical studies of depression."

What does Bristol-Myers Squibb think? In three trials of Abilify for depression, self-reports of depression were unfavorable, so the publications for those studies made sure to downplay the depression self-reports, claiming that these measures were not sensitive enough to pick up improvements in depression.

So if a self-report provided positive results, then BAM, it's an awesome measure of depression. But if it provided negative results, then it's a horrendously inaccurate measure and should never have been used in the first place.

Citation below. Yes, one of the authors' last names is Kielbasa.

Dubé, S., Dellva, M., Jones, M., Kielbasa, W., Padich, R., Saha, A., & Rao, P. (2010). A study of the effects of LY2216684, a selective norepinephrine reuptake inhibitor, in the treatment of major depression. Journal of Psychiatric Research, 44(6), 356-363. DOI: 10.1016/j.jpsychires.2009.09.013

Saturday, September 19, 2009

Lend Me Your Name

Journalism regarding the horrors of ghostwritten papers in medical journals is all the rage these days (1, 2, 3). Here's my very small contribution. The document shown below from a medical writing company has been described elsewhere, but it is worth seeing in all its glory firsthand. The document is from Wyeth's ghostwriting firm, DesignWrite. It was part of the Premarin/hormone replacement therapy disaster (see below). Perhaps you remember the era when hormone replacement therapy was being prescribed for all sorts of people because it was supposedly a wonder treatment. So what if it increased risk for breast cancer and perhaps other conditions as well? Not to worry, DesignWrite could get around that...

In layman's terms, it goes like this... Wyeth -- you give us some hints about the marketing spin you'd like us to put on your studies. We'll then write up the studies accordingly and have big-name academics sign off as if they had something to do with our oh-so-objective "research". And don't worry, Wyeth, you get to review all papers we write up to make sure we market your drug appropriately.


We now know that several academics participated in this program. To quote one ethicist, regarding the academics who lent their names as authors: "They sold their credentials for false credit and money." DesignWrite's current slogan is: "Where we put clinical data to work." Hmmm. DesignWrite gets paid, Wyeth gets paid, and the academics who lend their names get paid and/or get another publication to boost their stock in the academic world.

Oh, and patients -- what did they get out of this? Breast cancer. But who cares about them anyway -- patients are just little buckets of money; it's not like they're real human beings.

A summary of the results that led to the downfall of hormone replacement therapy

Three years after stopping hormone therapy, women who had taken study pills with active estrogen plus progestin no longer had an increased risk of cardiovascular disease (heart disease, stroke, and blood clots) compared with women on placebo. The lower risk of colorectal cancer seen in women who had taken active E+P disappeared after stopping the intervention. The benefit for fractures (broken bones) in women who had taken active E+P also disappeared after stopping hormone therapy. On the other hand, the risk of all cancers combined in women who had used E+P increased after stopping the intervention compared to those on placebo. This was due to increases in a variety of cancers, including lung cancer. After stopping the intervention, mortality from all causes was somewhat higher in women who had taken active E+P pills compared with the placebo.

Based on the findings mentioned above, the study’s global index that summarized risk and benefits was unchanged, showing that the health risks exceeded the health benefits from the beginning of the study through the end of this three year follow-up. The follow-up after stopping estrogen plus progestin confirms the study’s main conclusion that combination hormone therapy (E+P) should not be used to prevent disease in healthy, postmenopausal women. The most important message to women who have stopped this hormone therapy is to continue seeing their physicians for rigorous prevention and screening activities for all important preventable health conditions.

I'm glad to see that ghostwriting is now the topic du jour in health journalism. But in a few weeks, the attention will vanish as the drug industry and its associated writing firms will agree to allegedly stringent guidelines that ensure this never happens again. And nothing will actually change. I mean, seriously, do you think academic researchers are going to write their own papers? Do you think drug companies are going to stop hiring writers to expertly spin the data? The current system works too well for it to simply go away.

Thanks to an alert reader for sending this document along. You can search for more documents at the Drug Industry Document Archive, including those from Wyeth and DesignWrite. Happy digging!

Thursday, December 04, 2008

Lamictal: Break Out the Shovel

GlaxoSmithKline, manufacturer of lamotrigine (Lamictal), the antiepileptic drug used widely for bipolar disorder, happily hid clinical trial results which found Lamictal was no better than a placebo. Given recent findings about how often pharmaceutical companies selectively push positive results to publication in medical journals while suppressing negative results, this can hardly be considered a surprise. It is nonetheless instructive to examine how the published data on Lamictal paint a much rosier picture of the drug's efficacy compared to unpublished data.

Nassir Ghaemi, a psychiatrist at Tufts University Medical Center, dug through GSK's online database of information, and found that several negative Lamictal studies (studies which failed to show a benefit for Lamictal over placebo on the primary outcome measure) were quietly residing on the site. Why did GSK post such information on their site? Not out of the goodness of their hearts; rather, because they were forced to post data about clinical trial outcomes as a result of a legal agreement. Here's what Ghaemi found in GSK's database:

Acute mania: Two studies compared lithium, Lamictal, and placebo. Both found that Lamictal did not beat a placebo. Neither study was published.

Acute bipolar depression: Three studies were conducted. All three showed negative results on the primary outcome. Two were not published. In the remaining study, there was a positive result for Lamictal on a secondary outcome measure, and the published report was written to emphasize the positive outcomes, stating that "Lamotrigine monotherapy is an effective and well-tolerated treatment for bipolar depression."

Rapid cycling bipolar: Two studies were completed; both were negative on the primary outcome. However, one study showed favorable outcomes for Lamictal on several secondary measures. The obviously negative study was not published while the study that showed a number of benefits for Lamictal was published.

Prophylaxis (Prevention of future episodes): Two studies were conducted, both of which showed that patients on Lamictal went longer between episodes than did placebo patients. Both studies were published.

Well, I'm shocked, shocked, that GSK would simply bury a slew of negative data on their product. Who woulda thunk it? So what does this mean for Lamictal? Dr. Ghaemi was interviewed by Dr. Daniel Carlat (of Carlat Psychiatry Blog and the Carlat Psychiatry Report). Many parts of Ghaemi's interview were interesting (see the February 2008 issue of the Carlat Psychiatry Report; sorry, no link available), but the most interesting exchange was:
Carlat: My understanding is that you wrote up your discovery of the negative Lamictal data and submitted the paper to some journals. What has been the response?

Ghaemi: I first submitted to JAMA because I knew they were sympathetic to this kind of critique. Their reaction was, "We already publish many papers like this; this is old news; there is nothing new here." They recommended that I send it to a psychiatric journal. So then I sent it to the American Journal of Psychiatry, but they rejected it as well, saying that they were doubtful that this type of negative publication bias was common among other companies marketing medications for bipolar disorder.

Carlat: Do you think there is much suppressed negative data about other drugs?

Ghaemi: It's very hard to get this information. Companies are not required to disclose it. And if they do publish it, they will sometimes delay publication for two or three years, and then publish it in an obscure journal that is less likely to be read.
Ghaemi also did some digging on other drugs used for bipolar disorder and found that negative studies for Seroquel and Abilify were also lurking in the unpublished zone. However, it appears that Lamictal is the worst offender of the bunch. Is it just me, or is anyone else getting flashbacks to GSK's handling of suicide data regarding its antidepressant Paxil?

Thanks to an anonymous reader for helping to track down relevant information on this and an upcoming post on this topic. The forthcoming post will deal with the misleading scientific literature on Lamictal. Key opinion leaders will likely be mentioned. The usual stuff, just on a different drug and plugging in the names of other academics who apparently deemed it acceptable to mislead their fellow physicians about the efficacy of lamotrigine. GSK worked the system expertly and it paid off.

Ghaemi, S. N., Shirzadi, A. A., & Filkowski, M. (2008). Publication bias and the pharmaceutical industry: The case of lamotrigine for bipolar disorder. Medscape Journal of Medicine, 10(9), 211.

Thursday, September 25, 2008

The Cymbalta Schatz-Storm: Duplicate Publication and Lying by Omission

This post details the duplicate publication of data on the antidepressant duloxetine (Cymbalta). Marketing and "science" collide to produce hideous offspring: an experimercial that pimps Lilly's bogus "Depression Hurts" marketing for Cymbalta using the exact same (weak) data twice. Data were published in the Journal of Clinical Psychiatry (JCP), and then the same data were published a second time in the Journal of Psychiatric Research (JPR), a blatant violation of JPR policy. Oh, and Alan Schatzberg, president-elect of the American Psychiatric Association, is involved in the story.

The study: Lilly conducted a rather uninteresting study of Cymbalta, in which patients who had not shown a treatment response to an SSRI were then assigned to either a) Direct switch: Switch to Cymbalta and immediately discontinue the SSRI medication or b) Start-Taper-Switch: taper the SSRI over a 2 week period while also starting Cymbalta. Note that there was not a control group of any sort, an issue that the authors dance around (i.e., essentially ignore) in the papers based on this study's data.

Publication #1 -- Journal of Clinical Psychiatry: Data from this study were published in the January 2008 issue of the Journal of Clinical Psychiatry. The findings were that, in essence, there were no notable differences between patients who were directly switched to Cymbalta as opposed to those who did the start-taper-switch method. But what do the authors conclude?

Despite the lack of a control group, the authors get the message out that not only was depression improved, so were "painful physical symptoms." As anyone who has a television has probably noticed, Lilly has been pushing hard for quite some time to convince patients and physicians that Cymbalta will relieve depression and pain in depressed patients. So if the marketing points can be pushed in one journal, why not pimp the same idea using the same data in another journal?

Publication #2 -- Journal of Psychiatric Research: Data from the same study were published online (to appear in print soon) in the Journal of Psychiatric Research (JPR). And I mean the exact same data appear again in this paper. This is a huge scientific no-no. Findings are supposed to be published once, not over and over again. Journals are struggling to find space for new and interesting findings, so there is no need to waste space on duplicate data. In fact, to quote from JPR's website
Submission of a paper to the Journal of Psychiatric Research is understood to imply that it is an original paper which has not previously been published, and is not being considered for publication elsewhere. Prior publication in abstract form should be indicated. Furthermore, authors should upload copies of any related manuscript that has been recently published, is in press or under consideration elsewhere. The following circumstances indicate that a paper is related to the manuscript submitted to the Journal: a) any overlap in the results presented; b) any overlap in the subjects, patients or materials the results are based on.
So it's pretty clear -- don't submit data that has already been published. Here is a figure from the Journal of Clinical Psychiatry (JCP) article mentioned above:
And here is the same data, in a figure in JPR:
But wait -- that's just the beginning. How about the data tables... From JCP:
And the right-side half of this table in JCP:
And the exact same data appearing in JPR:
To be fair to these "researchers" in JPR, they reported data from subscales of two measures not reported in JCP. But the vast majority of the data are simply reprinted from the JCP article, which completely trounces journal policy and, more importantly, conveys Lilly's marketing messages to the audiences of two different journals. Unfortunately, they apparently did not consider that some people might actually read both journals and notice that essentially the same article had appeared twice. Or Lilly considered this prospect and said, "Who cares?" I'll leave it to my readers to decide if they care.

Authors: The JCP paper was authored by David Perahia, Deborah Quail, Derisala Desaiah, Emmanuelle Corruble, and Maurizio Fava. The JPR paper was "authored" by Perahia, Quail, Desaiah, Angel Montejo, and Alan Schatzberg. So to re-publish the same data, it was out with Corruble and Fava -- in with Montejo and Schatzberg. Why Schatzberg? We're almost there...

JPR describes the contributions of each author. The two authors who were not credited on the JCP paper (Schatzberg and Montejo) were both described as "involved in data review and interpretation, including the development of this manuscript." How could they have been involved in data review and interpretation when the vast majority of the data had already been analyzed, interpreted, and written up by other researchers in the JCP paper? Did they write the paper? Apparently not, since the JPR article mentions that "Dr. Desaiah worked with Dr. Perahia to draft the manuscript..." So Montejo and Schatzberg could not conceivably have played any significant role in data analysis, interpretation, or writing. If Desaiah and Perahia "drafted" the manuscript, the most Montejo and Schatzberg could have done is review the paper.

So why is Schatzberg on the paper? Well, it just so happens, I'm sure by sheer coincidence, that Schatzberg is the co-editor in chief of JPR. So he'd be in a good position to help a paper that essentially republishes data from JCP with only minor additions make it into publication against his own journal's policies.

Nice work, Schatzberg. That's pimpin' it hard. That, my friend, is worthy of nomination for a coveted Golden Goblet Award. Congratulations. It is not the first time Schatzberg's "scientific" behavior has been noted. He has been stumping (in the face of much contradictory data) in favor of his pet drug RU-486/Corlux in the treatment of psychotic depression for some time. Between the bad science surrounding Corlux and Schatzberg's myriad conflicts of interest, much has been written (1, 2, 3, 4, 5) -- add another chapter to the chronicles of the storied American Psychiatric Association leader. This reminds me of an earlier incident involving Charles Nemeroff.

Discussion: As I've noted previously, the discussion section of a journal article often contains key marketing points, science being relegated to secondary status at best. The JPR article provides a few good examples of Cymbalta's talking points:
The current paper focuses on pain-related outcomes, demonstrating that a switch of SSRI non- or partial-responders to duloxetine was associated with a significant improvement in all pain measures including six VAS pain scales, the SQ-SS and its pain subscale, and the SF-36 bodily pain domain.

Switch of SSRI non- and partial-responders to duloxetine resulted in mean improvements on all pain measures regardless of the switch method used.

Duloxetine, an SNRI, has previously been shown to be effective in the treatment of PPS associated with depression, and it is also effective in the treatment of chronic pain such as diabetic peripheral neuropathic pain (DPNP) for which it is approved in the US, Europe and elsewhere, so duloxetine’s effects on pain in our sample of SSRI non- or partial-responders was not unexpected.

Patients with MDD present with a broad range of symptoms including those related to alteration of mood and PPS, all of which may contribute to global functional impairment. Effective treatment of both mood symptoms and PPS associated with depression may therefore optimize the chances of functional improvement. Recent findings that residual PPS in depressed patients may be associated with impaired quality of life (Wise et al., 2005, 2007), decreased productivity and lower rates of help seeking (Demyttenaere et al., 2006) and a lower likelihood of attaining remission (Fava et al., 2004), further demonstrate the importance of effective treatment of PPS in patients with MDD, so duloxetine’s effects on PPS are reassuring.

Improvements in pain are consistent with previously reported studies demonstrating duloxetine’s efficacy for pain, either as part of depression, or as part of a chronic pain condition such as DPNP.
Where do I start? How about by mentioning that JPR states:

7. Discussion: The results of your study should be placed in the appropriate context of knowledge, with discussion of its limitations and implications for future work.
So if there were research questioning Lilly's talking points about Cymbalta relieving pain in depression, such research should be discussed. Well, it just so happens that there is such research: a meta-analysis of Lilly's own clinical trials found that Cymbalta was no better than a placebo or Paxil in treating pain in depression. That meta-analysis was published in January 2008, yet the JPR article, which was originally received by JPR on March 26, 2008, did not mention the negative data. Hmmm, that doesn't exactly sound like placing the findings "in the appropriate context of knowledge," does it? All this talk about Cymbalta's fantastic analgesic effects despite Lilly's own data showing that Cymbalta is at best close to useless in treating pain among depressed patients. Another study that claimed to show Cymbalta was a helluva painkiller was also smacked down in a letter to the editor a few months ago -- and the authors of the Lilly-sponsored trial conceded defeat by refusing to reply to the critiques of their study.

Better Than "Weak" SSRIs (Not Really): The JPR paper mentions that the evidence for SSRIs in treating pain is "weak." No disagreement on my end. But see, once SSRI patients switched to Cymbalta, their pain magically went away, because Cymbalta, unlike SSRIs, relieves pain. Never mind the lack of a control group, which was allotted a grand total of 15 words in the discussion as a potential limitation of the study. The authors also failed to note that prior research showed Cymbalta to be no better than Paxil in treating pain in depressed patients. And Perahia, the lead author of the JCP and JPR "studies," is certainly aware of the research showing that Cymbalta works no better than a "weak" SSRI, since he was the lead author on one such study! So he knows that Cymbalta has never been shown superior to Paxil in treating pain, yet he accurately describes research indicating that SSRIs are "weak" pain treatments while neglecting to mention that Cymbalta failed to demonstrate superiority to Paxil in treating pain in depression. This is called lying by omission.

I may pass along my concerns to the Journal of Psychiatric Research. My prior experience in passing along such concerns to journals via my blog identity is that they either a) ignore my concerns entirely or b) instruct me to write a letter to the editor, which would be considered for publication, with the stipulation that I use my real identity. Sorry, but a published letter to the editor is not worth blowing my cover.

Call for Action: Rather than my running into point b) from the last paragraph, how about one or more scientifically inclined readers submit their concerns to the journal, under the following condition: Make sure you read the original papers first to judge whether my concerns are valid. Then, if you feel similarly, why not send a letter to the editor? This is bad science which does nothing to advance patient care -- it seeks only to advance sales of Cymbalta by pimping it as a painkiller in depression while ignoring all contradictory data. So let's try a little research of our own -- see if JPR is willing to address these issues or if they will be swept under the rug.

Reference to JPR article:

Perahia, D., Quail, D., Desaiah, D., Montejo, A., & Schatzberg, A. (2008). Switching to duloxetine in selective serotonin reuptake inhibitor non- and partial-responders: Effects on painful physical symptoms of depression. Journal of Psychiatric Research. DOI: 10.1016/j.jpsychires.2008.07.001

Update: Also see an excellent follow-up post on the topic at Bad Science.

Tuesday, August 19, 2008

Investigative Journalism Par Excellence

I am a little late in reporting this story, but there is a must-read post from Jonathan Leo over at Chemical Imbalance that I want to bring to your attention. Many bloggers have chimed in about The Infinite Mind's radio broadcast about SSRIs. Most writers have focused, understandably, on the myriad unreported conflicts of interest of the guests on the show. But the conflicts of interest are not the most important part of this saga -- the terribly misleading information on the program, which aired on National Public Radio outlets, is the main problem.

Leo compares the data on SSRIs and suicide to the blatantly false statements made by The Infinite Mind commentators. He notes, for example, that it is utter BS to state that nobody committed suicide in antidepressant trials submitted to the FDA -- in children there were no suicides, but among adults there certainly were. And kids who dropped out of the studies due to poor response or side effects, well, who knows what happened to them?

Leo also notes that the commentators were dead wrong about their alleged evidence linking decreased prescriptions of SSRIs to an increase in suicides. I also noted the same problem. He then proceeds to make point after point about the commentators overstating the efficacy of antidepressants.

As I've written before, conflicts of interest are important. But rather than just noting that people have conflicts, it is important to show the data -- are people with conflicts of interest misstating the evidence in a manner that reflects the conflict of interest? In the case of The Infinite Mind, the answer is a clear yes. Leo's post is quite lengthy, but well worth the time.

Update (08-31-08): My mistake. I had earlier called the program All in The Mind, which is vastly incorrect. The program was The Infinite Mind (as has been corrected above). This post has absolutely nothing to do with All in The Mind, which is a program which airs on Australia's Radio National. In fact, I've listened to a couple of All in the Mind broadcasts previously and found them to be well-done. Thanks to a commenter for catching my error.

Thursday, June 26, 2008

Conflicts, Bad Science, and Corlux

Recently, the watchful eyes of Charles Grassley have been peering into the bank accounts of big name psychiatrists. Melissa DelBello and Joe Biederman (1, 2) from the Wonderful World of Child Bipolar were first, and now Alan Schatzberg has been hit. Schatzberg is the Chair of Psychiatry at Stanford University. He is also the President of the American Psychiatric Association. In other words, he's kind of a big deal.

Pharmalot covers the details, but the gist is that Schatzberg is deeply involved with Corcept Therapeutics, a company for which he chairs the scientific advisory board and in which he holds a large amount of stock. According to Grassley, Schatzberg did not disclose some of his stock sale profits or the magnitude of his multimillion-dollar stock holdings in the company. Additionally, Schatzberg allegedly underreported income received from other drug companies. It appears that Schatzberg was not actually required to disclose some of this information, so based on my brief review, it is quite possible that he has broken no rules. Now, whether the rules need to be changed is a different story. No offense to Grassley, but I was well ahead of him on part of this story, noting in April 2007 that Schatzberg had a mega-conflict of interest going with Corcept. I also noted previously that Schatzberg was on the Zyprexa bandwagon, helping to "educate" fellow physicians about the Lilly wonder drug.

The Real Problem: But amidst all this discussion of conflicts of interest, I am afraid that we are getting a bit diverted from the main problem, that of shoddy science. It is admittedly interesting noting that Schatzberg is somehow supposed to be an independent, disinterested scientist while standing to make an absolute truckload of money if his sponsored product succeeds. But it runs deeper. While Schatzberg is a bigwig at Corcept, let's review how Corcept's main product mifepristone (RU-486; yes, the abortion pill) has done.

Mifepristone (aka Corlux) is intended to work as a treatment for psychotic depression. One main problem: It doesn't relieve depressive symptoms. In multiple trials, it has failed to demonstrate antidepressant properties. The CEO of Corcept and another member of their scientific advisory board have previously tried to spin away such inconvenient data by painting negative results as positive. To give Corcept credit, their scientists are consistent spinmeisters, seemingly always able to dredge a positive from obviously negative findings. Schatzberg has been an author on a couple Corlux-related papers that were shredded by independent analysts, who found statistical problems and overly optimistic interpretations of the study results. As the senior member of the Scientific Advisory Board, I assume that Schatzberg had some input on the other study reports that also overstated the efficacy of Corlux.

Could his millions of dollars in Corcept holdings bias Schatzberg, either subconsciously or overtly? You be the judge. But remember that this is not just about conflicts of interest -- this is about science. There is hard evidence that the research on Corlux, which is tightly linked to Schatzberg, has been misinterpreted for the sake of marketing. Conflicts of interest sometimes lead to bad science, but rather than focus just on conflicts of interest, we need to dig a layer deeper and see the poor science -- the shoddy evidence that is used as the foundation for "evidence based medicine" in many cases.

Note also that David Healy has written an interesting piece on the topic of conflicts of interest and bad science, pointing out that a larger problem is lack of access to company-owned data. Think Paxil and suicide. He concludes:
If I were employed in a company marketing department I would much prefer to have the field think that all that is wrong is that a few corrupt academics fail to declare competing interests than to have the field think that company practices that restrict access to data while still claiming the moral high ground of science are the real source of the problem.
I'd love to know what American Psychiatric Association members think about this. The news had already broken about Schatzberg overstating the efficacy of Corlux before he was elected APA president. Do APA members not care that their president has a documented record of putting product promotion before scientific evidence?

Monday, April 28, 2008

Paxil, Lies, and the Lying Researchers Who Tell Them

A bombshell has just appeared in the International Journal of Risk & Safety in Medicine. The subject of the paper is Paxil study 329, which examined the effects of the antidepressant paroxetine in adolescents. The study findings were published in the Journal of the American Academy of Child and Adolescent Psychiatry in 2001. These new findings show that I was wrong about Paxil Study 329. You know, the one that I said overstated the efficacy of Paxil and understated its risks. The one that I claimed was ghostwritten. Turns out that due to legal action, several documents were made available that shed more light on the study. The authors (Jureidini, McHenry, and Mansfield) of the new investigation have a few enlightening points. Let's look at the claims and you can then see how wrong I was, for which I sincerely apologize. The story is actually worse than I had imagined. Here's what I said then:

Article [quote from the study publication]: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

I went on to bemoan how the authors took differences either based on arbitrary cutoff scores or from measures that assessed something other than depression to make illegitimate claims that paroxetine was effective. Based upon newly available data from the study, here's what happened.
  • The protocol for the study (i.e., the document laying out what was going to happen in the study) called for eight outcome measurements. To quote Jureidini et al: "There was no significant difference between the paroxetine and placebo groups on any of the eight pre-specified outcome measures." So I was wrong. Paxil was not better on 4 of 8 measures -- it was better on ZERO of eight measures. My sincerest apologies.
  • Another quote from Jureidini and friends: "Overall four of the eight negative outcome measures specified in the protocol were replaced with four positive ones, many other negative measures having been tested and rejected along the way."
Let's break this thing down for a minute. The authors planned to look eight different ways for Paxil to beat placebo. They went zero for eight. So, rather than declaring defeat, the authors went digging for some way in which Paxil was better than a placebo. By devising various cutoff scores on various measures on which victory could be declared, as well as examining individual items from various measures rather than entire rating scales, the authors managed to pull out a couple of small victories. In the published version of the paper, there is no hint that such data dredging occurred. Change the endpoints until you find one that works out, then declare victory.
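To see why "keep digging until something turns up significant" is such a problem, here is a back-of-the-envelope sketch. It assumes independent outcomes tested at the usual .05 threshold -- an oversimplification, since Study 329's endpoints were surely correlated -- and it uses the 27 known outcomes that Jureidini and colleagues tally (quoted below).

```python
# Back-of-the-envelope look at data dredging: if a drug does nothing and you
# test many outcomes anyway, how often does at least one come up
# "statistically significant" by chance?  Assumes independent tests at
# alpha = .05, which is an illustration, not a model of the actual trial.

alpha = 0.05
n_outcomes = 27  # the count of known outcomes per Jureidini et al.

p_at_least_one_false_positive = 1 - (1 - alpha) ** n_outcomes
print(round(p_at_least_one_false_positive, 2))  # ~0.75
```

In other words, under these assumptions an utterly ineffective drug would still produce at least one "positive" outcome about three times out of four, which is why pre-specified endpoints matter.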

How About Safety?

I was incensed about the coverage of safety, particularly the magical writing that stated that a placebo can make you suicidal, but Paxil could not. I wrote:
It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” [i.e., suicidal] on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring.
Turns out I missed a couple of things. Based on an internal document and some calculations, Jureidini et al. found that serious adverse events were significantly more likely to occur in patients taking paroxetine (12%) than placebo (2%). Likewise, adverse events requiring hospitalization were significantly more common with paroxetine (6.5% vs. 0%). Severe nervous system side effects -- same story (18% vs. 4.6%). The authors of Study 329 did not conduct analyses to see whether the aforementioned side effects occurred more commonly on drug vs. placebo.
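Running that kind of comparison is not exactly heavy lifting, which makes its absence from the published paper all the more striking. Here is a sketch of the obvious analysis using Fisher's exact test; the arm sizes (roughly 93 on paroxetine, 87 on placebo) and the event counts implied by the quoted percentages are my approximations for illustration, not numbers copied from the internal document.

```python
# Sketch of the analysis the Study 329 authors never reported: a Fisher's
# exact test on serious adverse events.  Counts are approximations derived
# from the percentages quoted above (about 12% of ~93 paroxetine patients
# vs. about 2% of ~87 placebo patients), not the paper's own table.
from scipy.stats import fisher_exact

paroxetine = [11, 93 - 11]  # [patients with a serious adverse event, without]
placebo = [2, 87 - 2]

odds_ratio, p_value = fisher_exact([paroxetine, placebo])
print(round(p_value, 3))  # comes out around .02 with these assumed counts
```

Whatever the exact counts, the point stands: the same team that tested at least 27 efficacy outcomes somehow never got around to a two-by-two table.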

Funny how they had time to dredge through every conceivable efficacy outcome but couldn't see whether the difference in severe adverse events was statistically significant.

One quote from the discussion section of the paper sums it all up:
There was no significant efficacy difference between paroxetine and placebo on the two primary outcomes or six secondary outcomes in the original protocol. At least 19 additional outcomes were tested. Study 329 was positive on 4 of 27 known outcomes (15%). There was a significantly higher rate of SAEs with paroxetine than with placebo. Consequently, study 329 was negative for efficacy and positive for harm.
But the authors concluded infamously that "Paroxetine is generally well-tolerated and effective for major depression in adolescents."

Enter Ghostwriters. Documentary evidence indicates that the first draft of the study was ghostwritten. This leaves two roles for the so-called academic authors of this paper:
  • They were willing co-conspirators who committed scientific fraud.
  • They were dupes who dishonestly represented that they had a major role in the analysis of data and writing of the study, when in fact GSK operatives were working behind the scenes to manufacture these dubious results.
Remember, this study was published in 2001, and there has still been no apology for the fictional portrayal of its results, wherein a drug that was ineffective and unsafe was portrayed as safe and effective. Physicians who saw the authorship line likely thought "Gee, this is a who's who among academic child psychiatrists -- I can trust that they provided some oversight to make sure GSK didn't twist the results." But they were wrong.

By the way, Martin Keller, the lead "independent academic" author of this tragedy of a study, said, when asked what it means to be a key opinion leader in psychiatry:
You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.
So is completely misrepresenting the data from a study "honorable"? Is Keller's opinion "worth considering?" As you know if you've read this blog for long, such behavior is, sadly, not a fluke occurrence. Many others who should be providing leadership are leading us on a race to the scientific and ethical bottom. What will Brown University, home of Keller, do? Universities don't seem to care at all about scientific fraud, provided that the perpetrators of bad science are bringing home the bacon.

Not one of the "key opinion leaders" who signed on as an author to this study has said, "Yep, I screwed up. I didn't see the data and I was a dupe." Nobody. Sure, I don't expect that every author of every publication can vouch for the data with 100% certainty. I understand that. But shouldn't the lead author be taking some accountability?

This is a Fluke (?) Some may be saying: "But this is just a fluke occurrence." Is it? I've seen much evidence that data are often selectively reported in this manner -- sadly, it looks like it takes a lawsuit for anyone to get a whiff of the bastardization of science that passes for research these days. If GSK had not been sued, nobody would have ever known that the published data from Study 329 were negative. A reasonably educated person could see that the writeup of the study was a real pimp job -- lots of selling the product based on flimsy evidence -- but nobody would have seen the extent of the fraud. Apparently lawyers need to police scientists because scientists are incapable of playing by some very basic rules of science.

See for Yourself. Documents upon which the latest Jureidini et al. paper is based can be found here. Happy digging.

Friday, April 18, 2008

Key Opinion Leaders, Osteoporosis, Vioxx, Psychiatry, Science, and Patients

Remember Richard Eastell? To summarize briefly, he is a professor at Sheffield University who was lead author on a publication that reported positive results for the osteoporosis drug Actonel. One problem: the data did not actually provide good news for Actonel. In a key graph in the published paper, 40% of the patient data was missing. Now that's an interesting form of science: just eliminate the pesky 40% of the data that don't go along with your hypothesis and POOF!, you get exactly the results you are looking for. A thorough writeup of the situation can be seen in Jennifer Washburn's excellent piece in Slate. Making the plot more interesting, Eastell did not have the raw data; Procter & Gamble's (Actonel's sponsor) statisticians were in charge of the analysis. Hence the missing 40% of the data, which helped to cast Actonel in a more positive light. Read more on the topic here. When all data are included, the analysis does not support Actonel's marketing points. Eastell signed off on the original (misleading) paper saying that he had seen all of the data, which was, of course, not true.

I noted in October 2006 that Eastell was chairing a session on osteoporosis, one that charged a hefty registration fee. The website promoting the session at the time mentioned: "This course is suitable for pharmaceutical industry personnel from clinical through to marketing disciplines." I suppose that Eastell is a key opinion leader in his field. Being willing to put one's name on a paper where the key graph knocks out 40% of the data is a good step toward becoming an influential academic these days. I suppose Eastell could at least claim ignorance, since he was unfamiliar with the underlying data.

In psychiatry, Charles Nemeroff, a key opinion leader, put his name on a continuing medical education presentation in which the data don't match the published article based on the same data set. In the CME presentation, the medication (risperidone) outperformed placebo, although the published report indicated that it did not; the CME presentation also claimed that risperidone improved sexual functioning, which was never mentioned in the published article.

Eastell and a colleague recently received a roughly $7.5 million grant. Good for them. I've got nothing against the guy personally; I just find it interesting that he is getting rewarded nicely despite the whole Actonel fiasco. And I've only described a wee bit of that strange saga. The Scientific Misconduct Blog has much, much more -- like the part where Eastell told Blumsohn to stop bothering Procter & Gamble about the data because P&G was a good source of income for the university. I've got no problem with excellence being rewarded. Perhaps Eastell has done many excellent things. However, during the P&G/Actonel fiasco, Eastell was willing to let the sponsors push him around, even as science was being bastardized in the process. Their money meant more than good science. And if patients took Actonel thinking that it was more effective than it actually was, who cares -- they're not the ones providing the research funding, right?

Think about this for a second. Many people have been up in arms about the recently unveiled Vioxx ghostwriting scandal. For a fantastic take on the scandal, see Health Care Renewal or Hooked. Briefly, Merck and its associated medical writers wrote manuscripts that said nice things about Vioxx. Then academic authors/key opinion leaders were found to review the papers and stick their names on as lead authors. Mind you, "reviewing" the papers often meant simply making minimal edits, if even putting in that much effort. Did they see the data? They saw tables and figures provided by Merck, but did they see the raw data? In most cases, apparently not. Doesn't that make them information launderers? They take industry data and clean it up with their academic reputation. Oh, Dr. So-and-So is at Sheffield or Emory or Harvard -- he must have made sure that the sponsoring drug company is portraying the data accurately. A veneer of credibility. And an extra publication for the key opinion leader, which makes the KOL that much more important in the academic world, where publication envy runs rampant.

This system is not exactly set up to benefit patient outcomes, is it?

Wednesday, March 26, 2008

Genetic Testing for Bipolar: Are You Kidding Me?

Academics Dr. John Kelsoe and Kurt May have fired the warning shot: Genetic testing for mental disorders is on its way. Like much else in the mental health field, I fear that marketing may yet again trump science. Kelsoe and May's new test is out, and it claims to assess the risk for bipolar disorder (sort of) for a fee of $399.

Both Furious Seasons and Daniel Carlat have already opined wisely on the topic. The first issue is the science behind such testing -- if the science does not support the validity of the test in determining whether someone actually has a mental disorder, then the test is a sham. So what does the science say? According to an article in Science, one genetic variant used in the test was associated with a tripling of risk for bipolar disorder. The catch: the variant was found in only 3% of individuals with bipolar disorder and 1% of people without bipolar disorder. A genetic variant possessed by only 3% of people with bipolar disorder can hardly be considered widely useful. A combination of five variants in another study was found in 15% of individuals with bipolar disorder compared to 5% of those without the condition. As I understand it, the current test, as put forth by Kelsoe and May through the company Psynomics, tests for a combination of the previously mentioned variants. Again, the variants they are using are not very common even among people with bipolar disorder. So even if you are bipolar, the odds are high that this test would not label you as such. In the world of testing, this is called low sensitivity, and a test with sensitivity this low is nothing to cheer about.
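To put those percentages in perspective, here is a rough Bayes sketch of what a "positive" result on the five-variant combination would actually tell you. The 15% and 5% figures come from the Science article discussed above; the prevalence of bipolar disorder is my own round assumption for a general population, and the numbers would shift somewhat (though the sensitivity problem would not go away) in a pre-screened clinical sample.

```python
# Rough Bayes sketch of what a positive result on the five-variant panel
# would mean.  The 15% / 5% figures are from the Science piece quoted above;
# the 2.5% prevalence is an assumed general-population figure, not a number
# from Psynomics.

sensitivity = 0.15          # variants present in 15% of people with bipolar disorder
false_positive_rate = 0.05  # present in 5% of people without it
prevalence = 0.025          # assumed prevalence of bipolar disorder

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(round(ppv, 2))  # ~0.07: most positive results would be false positives
```

And sensitivity alone tells the other half of the story: 85% of people who actually have bipolar disorder would walk away with a reassuring negative result.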

Additionally, according to the Science piece, other researchers were unable to replicate Kelsoe's findings, making the test yet more questionable.

The thing about bipolar disorder is that it can be diagnosed by (drum roll please)... interviewing a patient thoroughly! That's right, a well-trained interviewer can simply ask questions to determine whether an individual has bipolar disorder. Imagine that. There is often a hullabaloo made over patients with bipolar disorder being initially misdiagnosed as depressed -- the way to solve this problem is not to perform a fairly useless genetic test, but rather to actually spend time with patients, perform a thorough assessment, and listen to them. How's that for a wild idea? If your response is: "But there's no time to actually talk with the patients," then no cookie for you! It is likely true that many people later diagnosed with bipolar were initially seen in primary care settings for a brief appointment, in which they were diagnosed as depressed (the underlying bipolar piece was missed). Again, giving a scientifically dubious test because "Gee, it's based on genetics so it has to be accurate" rather than training physicians to improve interviewing skills will only worsen the problem.

When I have more time, I will post again on the topic. This idea of genetic testing for mental disorders certainly needs much more attention. When academics go into marketing, strange things can happen, as I have documented here on many occasions.

Wednesday, March 05, 2008

Nemeroff Confirms Kirsch: SSRIs Offer Little Benefit


This post will discuss how the latest meta-analysis claiming to show public health benefits for Effexor actually also showed that antidepressants aren't up to snuff. Part 1 detailed how the study authors found a very small advantage for Effexor over SSRIs, which they then suggested meant that Effexor offered significant benefits for public health over SSRIs. Ghostwriters, company statisticians, questions about transparency, etc. Even the journal editor jumped on board. All the usual goodies.

Bad News for SSRIs: But now, on to part deux. Remember that the authors used a Hamilton Depression Rating Scale score of 7 or less as indicative of remission, which was the one and only outcome measure of import in their analysis. In their database of studies analyzed in the meta-analysis, there were nine studies that had an Effexor group, an SSRI group, and a placebo group. In these studies, there was a 5.5% difference in remission rates for SSRIs versus placebo. Read it again: there was a 5.5% difference in remission rates for SSRIs versus placebo. You should be shaking your head, perhaps cursing under your breath or even aloud. Using the number needed to treat statistic that the authors used in their analysis of Effexor versus SSRIs, that means you would have to treat 18 people with an SSRI instead of a placebo to get one additional remission that you would not get if all 18 had received a placebo. Damn -- that is pathetic! In these same nine trials, the difference between Effexor and SSRIs was 13%, for a number needed to treat of 8. One might conclude that Effexor was more than twice as effective as SSRIs based on these figures, but one would be wrong. Please see my prior post for why depression remission should absolutely not be used as the only judgment of a drug's efficacy. Granted, the numbers for SSRIs were based on nine trials, which limits the generalizability of the findings, but the findings sure fit well with the Kirsch series of meta-analyses that found only a small difference for SSRIs over placebo in all but the most severe cases.
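For readers who want to see where those numbers come from, the number-needed-to-treat figure is nothing more exotic than the reciprocal of the absolute difference in remission rates. A minimal sketch using the percentages quoted above:

```python
# Minimal sketch of the number-needed-to-treat arithmetic.  NNT is the
# reciprocal of the absolute difference in remission rates; the 5.5% and 13%
# differences are the figures quoted in this post.

def nnt(rate_difference):
    """Number of patients to treat for one additional remission."""
    return 1 / rate_difference

print(round(nnt(0.055)))  # ~18: SSRIs vs. placebo
print(round(nnt(0.13)))   # ~8: the 13% Effexor difference quoted above
```

The NNT framing makes the marketing problem obvious: an NNT of 18 for SSRIs over placebo is hard to spin as impressive, so the paper instead leans on the Effexor-versus-SSRI comparison.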

If you told most people that you would have to treat 18 depressed patients with an SSRI rather than a placebo to get one additional remission in depressive symptoms, you'd get laughed out of the room, but that is exactly what Nemeroff et al. found. Do the authors conclude with: "The findings confirm earlier work by Kirsch and colleagues showing that the benefits of SSRIs over placebo are quite modest"? Not exactly. Here is their interpretation:
To achieve one remission more than with placebo, 8 patients would need to be treated with venlafaxine (NNT = 8) compared with 18 patients who would need to be treated with an SSRI (NNT = 18). From this perspective, the magnitude of the advantage of SSRIs versus placebo in the placebo-controlled dataset (NNT=18) is similar to the advantage of venlafaxine relative to SSRIs in the combined data set (NNT = 17).
This is right after the authors wrote about how an NNT of 17 was possibly important to public health (see part 1), which was about the time I fell out of my chair laughing. A more plausible interpretation is that SSRIs yielded very little benefit over placebo and that Effexor, in turn, yielded very little benefit over SSRIs (in fact, a statistically significant benefit over only Prozac). But that sort of interpretation does not lead to good marketing copy or press releases that tout the benefits of medication well beyond what is reasonable. What if the press release for this study read: "Nemeroff confirms findings of Kirsch: Antidepressants offer very little benefit over placebo"? That would have been refreshing.

Sidebar: Here is my standard statement about antidepressants -- they work. Huh? Yeah, the average person (surely not everyone) on an antidepressant improves by a notable amount. The problem is that the vast majority (about 80%) of such improvement is due to the placebo effect and/or the depression simply getting better over time. Give someone a pill and that person will likely show some improvement, but nearly all of the improvement is due to something other than the drug. If most improvement is due to the placebo effect, couldn't we usually get such improvement using psychotherapy, exercise, or something else, which might avoid some drug-induced side effects? Moving on...

Key Opinion Leaders: But notice how this Wyeth/Advogent authored piece featuring Charles Nemeroff as lead author (as well as Michael Thase as last author) throws down a major spin job regarding the efficacy of antidepressants. As reported previously, their measure of efficacy was quite arbitrary. It could have been supplemented with other measures, as Wyeth is in possession of such relevant data, but such analyses were not conducted. But even using their questionable measure of efficacy, antidepressants put on a poor performance. Similarly, Effexor's advantage over SSRIs was meager. Yet the authors (remember, three medical writers worked on this paper) conclude that venlafaxine offers a public health benefit over SSRIs. Maybe the authors were afraid of being sued for writing anything negative in their paper? Or perhaps they just know who is buttering their bread. It is also possible that the authors truly cannot envision the idea that SSRIs offer such a meager advantage over placebo and that Effexor yields very little (if any) benefit over SSRIs. And that is the problem. The "key opinion leaders" are all stacked on one side of the aisle -- drugs are highly effective and each new generation of medications is better than the last. So plug in the name of the next drug here, and you'll see a key opinion leader along with a team of medical writers rushing out to show physicians that the latest truly is the greatest. Since we don't really train physicians to understand clinical trials or statistics particularly well, you can also expect many physicians targeted by such marketing efforts to simply lap up unsupported claims of "public health benefit."

Hey, is there a counter-detailer in the room somewhere?

Monday, March 03, 2008

HipSaver: Diss Us and We'll Sue You

In an amazing and highly troubling move, HipSaver, a corporation that manufactures hip protection gear, is suing the authors of a study who had the temerity to write in their article: "These results add to the increasing body of evidence that hip protectors, as currently designed, are not effective for preventing hip fracture among nursing home residents."

Though this is not my area of expertise, my loose familiarity with the research indicates that the above statement appears to be true. The study in question did not examine the HipSaver product, and the offending statement was made in the discussion section, where authors offer opinions about their findings.

HipSaver said that such claims are a slander upon the field of hip protectors. If we are going to start suing authors based on the discussion sections of their articles, then we may as well stop doing science immediately. Of course, much of what passes for science these days is iffy, so maybe nobody would notice if we just stopped doing clinical trials.

Read more at the WSJ Health Blog. Thanks to the reader who alerted me to this bizarre development.

Tuesday, February 19, 2008

Why I Love the Discussion Section

A recent study in the Journal of Clinical Psychopharmacology found that aripiprazole (Abilify) offered no benefit over placebo in treating bipolar depression. Well, at least that's what the results showed, but the discussion section told a bit of a different story. At the end of eight weeks, Abilify failed to beat placebo on either the Montgomery-Asberg Depression Rating Scale or the Clinical Global Impressions -- Bipolar Severity of Illness Scale.

It is rare that an industry-sponsored article reports negative results and it would be nigh-impossible to find a published industry-sponsored study that failed to put a happy spin on the negative results. Sure, the results were negative in this study, but if the dosing was different, the treatment could have worked. There's always a loophole, some possibility that results would have been dandy if something were different. Check this out:
It is possible that the dosing regimen used in the current studies may have been too high for this patient group, or that titration was too rapid. Specifically, the unexpectedly high rates of discontinuation caused by any reason or because of AEs suggest that the aripiprazole starting dose (10 mg/d) may have been too high and that the dose titration (weekly adjustments in 5-mg increments according to clinical response and tolerability) may have been too rapid...

However, because preliminary data indicate that aripiprazole may have a potential value as adjunctive therapy in patients with bipolar depression, future studies that focus on the use of aripiprazole as adjunctive therapy using a better-tolerated dosing schedule with a more conservative escalation may be of greater value for the treatment of patients with bipolar depression...
And my favorite part...
Although the improvements in MADRS total scores in the current aripiprazole studies did not separate statistically significantly from placebo at end point, the significant effects observed with aripiprazole monotherapy within the first 6 weeks are clinically meaningful and similar to the effects seen with olanzapine monotherapy and lamotrigine monotherapy in patients with bipolar depression.
OK, so the argument is that while treatment did not work at the end of 8 weeks, the effects after 6 weeks were really super-duper impressive. Gimme a break. The authors did not present the actual numbers on the MADRS (the primary manner in which depression was assessed); rather, the data were presented in figures. Um, isn't science supposed to be based on numbers -- shouldn't they be provided in the text of the paper? At 6 weeks, the difference in scores between Abilify and placebo looks to be a little more than 2 points on the MADRS, a rating scale that spans from 0 to 60. And if a drug makes a person 2 points better relative to placebo, then the findings are "clinically meaningful"? Keep lowering that bar, fellas. While the discussion reaches out to rescue the reputation of Abilify, it does (to be fair) also point out on a couple of occasions that Abilify was not particularly efficacious at the end of 8 weeks and was associated with a worse safety/tolerability profile than placebo. In fact, relative to some other studies I've dissed for their sunny presentation of unimpressive results (like this one), the current Abilify article is a model of fair discussion.
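For a rough sense of how small that 2-point separation is, here is a back-of-the-envelope effect size sketch. Since the paper presents the MADRS data only in figures, every number below is a hypothetical stand-in: a roughly 2-point gap between groups and a pooled standard deviation of change scores of about 10 points, which is in the ballpark of what antidepressant trials typically report.

# Back-of-the-envelope effect size for a ~2-point MADRS difference.
# All numbers are hypothetical stand-ins, not the study's actual data.
drug_change = 12.0      # assumed mean MADRS improvement on aripiprazole
placebo_change = 10.0   # assumed mean MADRS improvement on placebo
pooled_sd = 10.0        # assumed pooled SD of change scores (typical for MADRS)

cohens_d = (drug_change - placebo_change) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")  # prints 0.20 -- conventionally a "small" effect

By that yardstick, a 2-point separation works out to somewhere around d = 0.2, which sits at the very bottom of the conventional "small effect" range -- hardly the stuff of clinical meaningfulness.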

Side note: Akathisia was reported by about a quarter of patients taking Abilify. The funny thing about akathisia is that it is not well-defined in this study or in many others. Is it a problem with movements, mental tension, something else, or what? It would seem important to know, given that Abilify apparently causes akathisia in droves. Do a PubMed search for aripiprazole and akathisia and you'll see what I mean. A couple of descriptions of akathisia follow:
  • Increased tenseness, restlessness, insomnia and a feeling of being very uncomfortable
  • On the first day of treatment he reacted with marked anxiety and weepiness, on the second day felt so terrible with such marked panic at night that the medication was cancelled
  • A movement disorder characterized by a feeling of inner restlessness and a compelling need to be in constant motion as well as by actions such as rocking while standing or sitting, lifting the feet as if marching on the spot and crossing and uncrossing the legs while sitting. People with akathisia are unable to sit or keep still, complain of restlessness, fidget, rock from foot to foot, and pace.

Wednesday, October 10, 2007

Blumsohn, History, and Fire

Wow. I noted yesterday in a History (and Future) Lesson that Aubrey Blumsohn posted a list of scientific misconduct-related goodies that occurred on October 8th across various years. Proving that October 8th was not an anomaly, he's now posted a list of scientific misconduct-related events that have occurred on October 9th over the years.

His posts from the last two days are mandatory reading. In fact, I declare that the Scientific Misconduct Blog is ON FIRE! Today, I will post nothing more so that you can run to the Scientific Misconduct Blog and read the excellent posts noted above.

Tuesday, October 02, 2007

Peer Review Is Mediocre at Best

Regarding bias in "science" and the utter balderdash that passes for peer-reviewed science, I sometimes feel like a lone voice in the wilderness. Well, thank God -- another blogger has thrown down the gauntlet on the topic. The Last Psychiatrist has a great post on the topic in which he notes a few huge problems with medical journals, of which I'll highlight a few in upcoming posts. Let's start with peer review...
"Most people think peer review is some infallible system for evaluating knowledge. It's not. Here's what peer review does not do: it does not try to verify the accuracy of the content. They do not have access to the raw data. They don't re-run the statistical calculations to see if they're correct. They don't look up references to see if they are appropriate/accurate."
Couldn't agree more with the Last Psychiatrist. We just assume the raw data are accurate. Every study likely contains some small data entry or calculation errors, but what if the whole paper is based on a significant misrepresentation of the raw data? Wouldn't that be a large problem? What is reported and what is not reported? To put it in layman's terms, anybody can make up whatever the hell they want, and the peer reviewers assume that it is true. We're working on the honor system here, and who knows how often the final paper reflects the real data, or whether we are dealing with undisclosed errors due to sloppiness, an accident, greed, or just wanting to cover up the bad news, like in the following...

There is no way that even the world's greatest peer reviewer would catch this, as without access to raw data, we're trusting that relevant information is presented in the manuscript. Reviewers might catch an obvious statistical error, but they sometimes miss the most blatant errors, such as a paper that makes an important conclusion based on no evidence whatsoever.
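To make concrete what "re-running the statistical calculations" would even involve: with nothing more than the summary statistics a paper reports (group means, standard deviations, sample sizes), a reviewer could at least check whether a claimed p-value is arithmetically plausible. Here is a minimal sketch with made-up numbers; it is not a check anyone performed on any particular paper.

# Sanity-check a reported between-group comparison from summary statistics alone.
# Means, SDs, and group sizes below are invented for illustration.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=12.0, std1=9.5, nobs1=150,   # drug group: mean improvement, SD, n
    mean2=10.0, std2=9.5, nobs2=150,   # placebo group
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If the p-value in the manuscript is wildly inconsistent with this,
# something is off. Reviewers almost never run even this simple check.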

What do peer reviewers do?

Again, quoting from The Last Psychiatrist...
They look for study "importance" and "relevance." You know what that means? Nothing. It means four people think the article is important. Imagine your four doctor friends were exclusively responsible for deciding what you read in journals. Better example: imagine the four members of the current Administration "peer reviewed" news stories for the NY Times.
And what determines whether they think an article is important? Their bias, hopefully mixed with some basic understanding of research methodology and statistics. In mainstream psychiatry, you are raised to believe that meds are safe and effective and that newer meds are better than older ones. Is this based on evidence? Sometimes, but quite often it is based on marketing that would make a used car dealer proud. Peer reviewers often function as an echo chamber that reflects whatever hooey marketing is going on lately. For example, consider that many poorly done studies on second-generation antipsychotics were featured prominently in journals -- the mainstream psychiatry culture was overflowing with excitement about heavily promoted new antipsychotic meds, to the point where peer reviewers were willing to gloss over the flawed studies (for example, often using a control group receiving an unreasonable dose of an older antipsychotic) -- and now we have these drugs selling at a clip of well over $10 billion annually in the U.S.

No, I'm not claiming I don't have my own bias. Duh. You can see the cards I'm holding pretty clearly if you read this site with much regularity. The point is that peer reviewers need to recognize their bias and take a better, more objective look at the research they are reviewing. Too many industry-cheerleading pieces in journals lead to uncritical acceptance of treatments that nearly always fail to live up to their initial hype. After all, once a few trials have been published (even if poorly done and/or overstating efficacy and understating risks), the drugs are now based on "science," which leads to yet more marketing. Check the actual track record of benzos, SSRIs, Depakote, and atypical antipsychotics if you doubt me. Each treatment "revolution" is closely linked to peer review. So if you are pleased as punch with the current state of affairs in mental health treatments, then please make sure to send letters to your favorite medical journal editors thanking them for the present system. Don't let it change.

Or maybe the whole system needs a fundamental overhaul. More on that later.

Promo Time. I'll take yet another lead from the LP and humbly suggest that you promote this post via Digg, Reddit, or any other favorite service. I'd even more strongly suggest you hit up the LP's post and promote it. While you're sharing posts with the world, you should read my take on SSRI's, Suicide and Dunce Journalism and send it to all of your friends. And while I'm in promotion mode, give your money to Philip Dawdy if you like good journalism on mental health issues. If you want to pledge money to support the operations of this unpaid anonymous blogger (and you know you do!), thanks, but take it and give it to Philip. Now!

Friday, May 25, 2007

Convenient Honesty and Zoloft

Recently, a study was published which cast doubt on the efficacy of sertraline (Zoloft) for PTSD, finding that the drug was no better than a placebo.

The kicker is that the patent has expired for Zoloft, which is why the data are now flowing more freely. I’ll make the case here that data were buried until they would no longer hurt sales to any meaningful extent, at which point data were published, at least partially as a public relations move to show just how “honest” the companies are with sharing both positive and negative results with the psychiatric community.

The Research: The latest study, which appears in the May 2007 Journal of Clinical Psychiatry, showed no benefit for the drug over a 12-week period. Placebo tended to outperform Zoloft on the majority of outcome measures, though the differences were small and not statistically significant. Patients were significantly more likely to drop out of treatment on Zoloft. It was unclear whether there were any serious adverse events (e.g., suicide attempts, notable aggression) because the article did not mention them at all. Patients started this study between May 1994 and September 1996. The original draft of the study was received by the journal in March 2006. Nearly 10 years passed between study completion and writing up the data for publication.

Two prior studies found positive results for Zoloft and were published quickly, while these negative results languished until the Zoloft patent had expired. One earlier positive study did not list the dates during which the study occurred, but it seems clear that it was rushed to publication much more quickly than the negative study. Another positive study was conducted between May 1996 and June 1997 and was published in 2000. It's quite obvious why the positive studies were rushed to press and the negative study languished, is it not?

Do keep in mind that the magnitude of the positive effect for Zoloft over placebo, even in the positive studies, was small to moderate. When even the positive findings for antidepressants in treating PTSD show only modest improvement relative to placebo, one should tread cautiously.

Change of Heart: Drug companies have been criticized widely for failing to disclose clinical trial data (1, 2). In an effort to shore up the support of the medical community and the public at large, what could possibly make more sense than publishing negative trial results? Gee, look at how honest we are – we share the good news and the bad news! Of course, when the positive results are published as quickly as possible and the negative results are published after a 10-year delay, well after they could pose any threat to corporate profits, I'm not impressed by their newfound dedication to transparency.

Note: If you are a journalist, this is the kind of story that would merit a broad audience. The plot is pretty simple to follow and it reeks of corporate malfeasance, a subject that is not new to Pfizer and its former cash cow antidepressant.

Thursday, March 15, 2007

Procter & Gamble: Purple Haze


The Procter & Gamble – Aubrey Blumsohn saga has officially turned into tragicomedy to the 7th power. As you may know, Blumsohn was performing research for P & G regarding its osteoporosis drug Actonel. To make a long story short, Blumsohn discovered that P & G's data analysis strongly appeared to differ from reality. When Blumsohn attempted to make this knowledge public, he nearly lost his job. But worry not: the poorly done data analyses resulted in several scientific presentations and a publication in the Journal of Bone and Mineral Research that has yet to be retracted. So the official scientific record still paints an unrealistically favorable picture of P & G's Actonel.

Latest Installment: Dr. Blumsohn has decided to present the results of some of the real data analyses (i.e., data not, um, creatively analyzed by Procter & Gamble) so that the scientific and medical communities may become familiar with what appears to be the real story of Actonel rather than the PR currently posing as the official scientific record.

Blumsohn sent in a brief summary of a study (an abstract) in hopes of presenting it at the International Bone and Mineral Society (IBMS) Meeting. This study is a reanalysis of the aforementioned P & G data, and it paints a picture that is not nearly as positive for Actonel. The abstract contains the statement: "Study funded by Procter & Gamble Pharmaceuticals." This is true; P & G funded the study from which all the data came, so indeed it is appropriate to indicate as much, even though, as we'll see shortly, P & G wanted nothing to do with Blumsohn's subsequent analyses.

Enter Dr. Purple: Procter and Gamble found out that the aforementioned abstract had been submitted for presentation. A man named Dr. Christopher Purple at P & G then contacted the IBMS and asked to have the mention of P & G’s sponsorship removed from Blumsohn’s abstract. Mind you, Dr. Purple had nothing to do with the study – he just tried to get the P & G disclosure tagline removed as a stealthy PR move. The IBMS people then replied to Dr. Purple that the P & G line would indeed be removed.

Unfortunately for Dr. Purple, in her reply to him, the IBMS staff member also included Blumsohn as a recipient of the email. Blumsohn was naturally less than pleased, and he quickly convinced the IBMS correspondent that P & G had done this in an underhanded manner, without the permission of Blumsohn or his coauthor. The P & G disclosure tagline was then re-added to the abstract.

Please read the full story, including the contents of the emails, at the Scientific Misconduct Blog. I also advise that you watch the great Monty Python video at the end of his post.

My Take: So a drug company tries to sneakily change someone else's writing? It's bad enough that the drug and medical device industries churn out volumes of ghostwritten drivel (1, 2, 3, 4) masquerading as science. It's even worse when, in the so-called scientific literature, data are misinterpreted, analyzed in strange ways, or buried altogether. Yet this, I believe, is an even more bizarre and odious form of misconduct – attempting to edit the content of a scientific presentation by an independent researcher. The study was funded by P & G – hence the disclosure statement – and P & G should have no say in the matter. This is not altogether new; David Healy has reported that one of his articles underwent some magical changes. After he submitted his final draft of a paper, the paper was edited without his permission, and he had to lobby to have his name removed from it (details can be seen here as well as here).

Perhaps I’ll email the good Dr. Purple and see if he has an opinion he’d like to share on the matter.