
This post details the duplicate publication of data on the antidepressant duloxetine (Cymbalta). Marketing and "science" collide to produce hideous offspring: an
experimercial that pimps Lilly's bogus "Depression Hurts" marketing for Cymbalta using the exact same (weak) data twice. Data were published in the Journal of Clinical Psychiatry (JCP), and then the same data were published a second time in the Journal of Psychiatric Research (JPR), a blatant violation of JPR policy. Oh, and Alan Schatzberg, president-elect of the American Psychiatric Association, is involved in the story.
The study: Lilly conducted a rather uninteresting study of Cymbalta, in which patients who had not shown a treatment response to an SSRI were then assigned to either a) Direct switch: Switch to Cymbalta and immediately discontinue the SSRI medication or b) Start-Taper-Switch: taper the SSRI over a 2 week period while also starting Cymbalta. Note that there was not a control group of any sort, an issue that the authors dance around (i.e., essentially ignore) in the papers based on this study's data.
Publication #1 -- Journal of Clinical Psychiatry: Data from this study were published in the
January 2008 issue of the Journal of Clinical Psychiatry. The findings were that, in essence, there were no notable differences between patients who were directly switched to Cymbalta as opposed to those who did the start-taper-switch method. But what do the authors conclude?
Despite the lack of a control group, the authors get the message out that not only was depression improved, so were "painful physical symptoms." As anyone who has a television has probably noticed, Lilly has been pushing hard for quite some time to convince patients and physicians that Cymbalta will relieve
depression and pain in depressed patients. So if the marketing points can be pushed in one journal, why not pimp the same idea
using the same data in another journal?
Publication #2 -- Journal of Psychiatric Research: Data from the same study were
published online (to appear in print soon) in the Journal of Psychiatric Research (JPR). And I mean the
exact same data appear again in this paper. This is a huge scientific no-no. Findings are supposed to be published
once, not over and over again. Journals are struggling to find space for new and interesting findings, so there is no need to waste space on duplicate data. In fact, to quote from
JPR's website: "Submission of a paper to the Journal of Psychiatric Research is understood to imply that it is an original paper which has not previously been published, and is not being considered for publication elsewhere. Prior publication in abstract form should be indicated. Furthermore, authors should upload copies of any related manuscript that has been recently published, is in press or under consideration elsewhere. The following circumstances indicate that a paper is related to the manuscript submitted to the Journal: a) any overlap in the results presented; b) any overlap in the subjects, patients or materials the results are based on."
So it's pretty clear -- don't submit data that have already been published. Here is a figure from the JCP article mentioned above:
And here is the same data, in a figure in JPR:
But wait -- that's just the beginning. How about the data tables... From JCP:
And the right-side half of this table in JCP:
And the exact same data appearing in JPR:
To be fair to these "researchers" in JPR, they reported data from subscales of two measures not reported in JCP. But the
vast majority of the data are simply reprinted from the JCP article.
This completely trounces journal policy and, more importantly, conveys Lilly's marketing messages to the audiences of two different journals. Unfortunately, they apparently did not consider that some people might actually read
both journals and notice that essentially the same article had appeared twice. Or perhaps Lilly considered this prospect and said, "Who cares?" I'll leave it to my readers to decide if they care.
Authors: The JCP paper was authored by David Perahia, Deborah Quail, Derisala Desaiah, Emmanuele Corruble, and Maurizio Fava. The JPR paper was "authored" by Perahia, Quail, Desaiah, Angel Montejo, and Alan Schatzberg. So to re-publish the same data, it was out with Corruble and Fava -- in with Montejo and Schatzberg. Why Schatzberg? We're almost there...
JPR describes the contributions of each author. The two authors who were not credited on the JCP paper (Schatzberg and Montejo) were both described as "involved in data review and interpretation, including the development of this manuscript." How could they have been involved in data review and interpretation when the vast majority of the data had already been analyzed, interpreted, and written up by other researchers in the JCP paper? Did they write the paper? Apparently not, since the JPR article mentions that "Dr. Desaiah worked with Dr. Perahia to draft the manuscript..." So Montejo and Schatzberg could not conceivably have played any significant role in data analysis, interpretation, or writing the paper. If Desaiah and Perahia "drafted" the manuscript, then the most Montejo and Schatzberg could have done is
review the paper.
So why is Schatzberg on the paper? Well, it just so happens, I'm sure by sheer coincidence, that Schatzberg is the co-editor-in-chief of JPR. So he'd be in a good position to help a paper that essentially republishes data from JCP, with only minor additions, make it into print against his own journal's policies. Nice work, Schatzberg. That's pimpin' it hard. That, my friend, is worthy of nomination for a coveted
Golden Goblet Award. Congratulations. This is not the first time Schatzberg's "scientific" behavior has been noted. He has been stumping (in the face of much contradictory data) in favor of his pet drug RU-486/Corlux in the treatment of psychotic depression for some time. Between the bad science surrounding Corlux and Schatzberg's myriad conflicts of interest, much has been written (
1,
2,
3,
4,
5) -- add another chapter to the chronicles of the storied American Psychiatric Association Leader. This reminds me of an earlier incident involving
Charles Nemeroff.
Discussion: As I've noted previously, the discussion section of a journal article often contains key marketing points, science being relegated to secondary status at best. The JPR article provides a few good examples of Cymbalta's talking points:
The current paper focuses on pain-related outcomes, demonstrating that a switch of SSRI non- or partial-responders to duloxetine was associated with a significant improvement in all pain measures including six VAS pain scales, the SQ-SS and its pain subscale, and the SF-36 bodily pain domain.
Switch of SSRI non- and partial-responders to duloxetine resulted in mean improvements on all pain measures regardless of the switch method used.
Duloxetine, an SNRI, has previously been shown to be effective in the treatment of PPS associated with depression, and it is also effective in the treatment of chronic pain such as diabetic peripheral neuropathic pain (DPNP) for which it is approved in the US, Europe and elsewhere, so duloxetine’s effects on pain in our sample of SSRI non- or partial-responders was not unexpected.
Patients with MDD present with a broad range of symptoms including those related to alteration of mood and PPS, all of which may contribute to global functional impairment. Effective treatment of both mood symptoms and PPS associated with depression may therefore optimize the chances of functional improvement. Recent findings that residual PPS in depressed patients may be associated with impaired quality of life (Wise et al., 2005, 2007), decreased productivity and lower rates of help seeking (Demyttenaere et al., 2006) and a lower likelihood of attaining remission (Fava et al., 2004), further demonstrate the importance of effective treatment of PPS in patients with MDD, so duloxetine’s effects on PPS are reassuring.
Improvements in pain are consistent with previously reported studies demonstrating duloxetine’s efficacy for pain, either as part of depression, or as part of a chronic pain condition such as DPNP.
Where do I start? How about by mentioning that
JPR states:
7. Discussion: The results of your study should be placed in the appropriate context of knowledge, with discussion of its limitations and implications for future work.
So maybe if there were research questioning Lilly's talking points about Cymbalta relieving pain in depression, such research should be discussed. Well, it just so happens that there
is research, which analyzed Lilly's own clinical trials and found that
Cymbalta was no better than a placebo or Paxil in treating pain in depression. This meta-analysis of Cymbalta trials was published in
January 2008, yet the JPR article, which was originally received by JPR on March 26, 2008, did not mention the negative data. Hmmm, that doesn't exactly sound like placing the findings "in the appropriate context of knowledge," does it? All this talk about Cymbalta's fantastic analgesic effects comes despite Lilly's own data showing that Cymbalta is at best close to useless in treating pain among depressed patients. Another study that claimed to show Cymbalta was a helluva painkiller was also smacked in a
letter to the editor a few months ago -- and the authors of the Lilly-sponsored trial conceded defeat by refusing to reply to the critiques of their study.
Better Than "Weak" SSRIs (Not Really): The JPR study mentions that the evidence for SSRIs in treating pain is "weak." No disagreement on my end. But see, once SSRI patients switched to Cymbalta, their pain magically went away because Cymbalta, unlike SSRIs, relieves pain. Never mind the
lack of a control group, which was allotted a grand total of 15 words in the discussion as a potential limitation of the study. The authors also failed to note that prior research showed Cymbalta was
no better than Paxil in treating pain in depressed patients. And Perahia, the lead author of the JCP and JPR "studies," is certainly aware of this research, since he was the
lead author on one such study! He accurately describes research indicating that SSRIs are "weak" pain treatments, yet neglects to mention that Cymbalta has never been shown superior to a "weak" SSRI (Paxil) in treating pain in depression.
This is called lying by omission. I may pass along my concerns to the Journal of Psychiatric Research. My prior experiences in passing along such concerns to journals via my blog identity are that they either a) ignore my concerns entirely or b) instruct me to write a letter to the editor which would be considered for publication, with the stipulation that I use my real identity. Sorry, but a published letter to the editor is not worth blowing my cover.
Call for Action: Rather than running into point b from the last paragraph myself, how about one or more scientifically inclined readers submit their concerns to the journal, under one condition: read the original papers first and judge whether my concerns are valid. Then, if you feel similarly, why not send a letter to the editor? This is bad science which does nothing to advance patient care -- it seeks only to advance sales of Cymbalta by pimping it as a painkiller in depression while ignoring all contradictory data. So let's try a little research of our own -- see if JPR is willing to address these issues or if they will be swept under the rug.
Reference to JPR article:
Perahia, D., Quail, D., Desaiah, D., Montejo, A., & Schatzberg, A. (2008). Switching to duloxetine in selective serotonin reuptake inhibitor non- and partial-responders: Effects on painful physical symptoms of depression. Journal of Psychiatric Research. doi:10.1016/j.jpsychires.2008.07.001
Update: Also see an excellent follow-up post on the topic at Bad Science.