Let’s go through what happened with Study 329, which became a publication in JAACAP in July 2001 on which Dr. Martin Keller (see here) was the lead author. The study was submitted to JAACAP (after the Journal of the American Medical Association had rejected it – good for them), and Panorama nicely documented a couple of the reviewer comments. They included:
Overall, results do not clearly indicate efficacy – authors need to clearly note this.
The relatively high rate of serious adverse effects was not addressed in the discussion.
Given the high placebo response rate… are (these drugs) an acceptable first-line therapy for depressed teenagers?
Remember that journals receive manuscripts and then send them out to researchers in the field, who review them for quality. These reviews are generally taken very seriously in deciding what changes should be made to a paper and whether the manuscript will be published at all.
Yet the paper was not only accepted and published in JAACAP; the editor also seems to have ignored the suggestions of the individuals who reviewed it. The issues raised in the reviews were obviously not addressed – feel free to read the actual journal article and you can see that the efficacy of paroxetine was pimped well beyond what the data showed, and the safety data were likewise painted to show a picture contrary to the study’s own results. Again, please feel free to read my earlier post comparing the study’s data with how those data were reported and interpreted in the journal article.
Read this carefully – we all make mistakes. When someone points out that a mistake was made, it is natural to become defensive – that’s okay. However, several years after the fact, one should be able to admit fault and learn from one’s errors; at least that is my opinion.
Dr. Dulcan was asked whether she regretted allowing Keller et al.’s Paxil/Seroxat study to be published – her response was less than I had hoped for:
I don’t have any regrets about publishing [the study] at all – it generated all sorts of useful discussion which is the purpose of a scholarly journal.
Let’s follow this train of logic. If a study is either particularly poorly done or badly misinterprets its own data, then researchers and critics will raise an outcry and point out its numerous flaws. This could, of course, be interpreted as “useful discussion,” which I suppose is what the editor means.
Of further interest, Jon Jureidini and Anne Tonkin had a letter published in JAACAP in May 2003. In their letter, they stated:
…a study that did not show significant improvement on either of two primary outcome measures is reported as demonstrating efficacy (p. 514).
The tone of their letter was perhaps a bit catty, as it discussed how Keller et al. seem to have spun their interpretation well out of line with the actual study data. I can hardly blame them for their snippiness, however. Another nugget from their letter:
We believe that the Keller et al. study shows evidence of distorted and unbalanced reporting that seems to have evaded the scrutiny of your editorial process (p. 514).

Thank you to Jureidini and Tonkin for speaking up.
Disclaimer: I watched Panorama and took copious notes. I believe all quotes are accurate but please let me know if you think I transcribed something incorrectly.
Update (1/29/08): My apologies. I should have typed Mina Dulcan, not Mina Duncan. Sorry for the misspelling.