
Wednesday, January 31, 2007

Journal Editor Unapologetic Over Paxil/Seroxat Article

Dr. Mina Dulcan is the editor of the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), which, as she noted on Panorama, is very widely read among child and adolescent psychiatrists. So, in this prestigious journal, one would expect high editorial standards.

Let’s go through what happened with study 329, which became a JAACAP publication in July 2001 with Dr. Martin Keller (see here) as the lead author. The study was submitted to JAACAP (after the Journal of the American Medical Association had rejected it – good for them), and Panorama nicely documented a couple of the reviewer comments. They included:

Overall, results do not clearly indicate efficacy – authors need to clearly note this.

The relatively high rate of serious adverse effects was not addressed in the discussion.

Given the high placebo response rate… are (these drugs) an acceptable first-line therapy for depressed teenagers?

Remember that journals receive manuscripts, and then send them to be reviewed by researchers in the field as to their quality. These reviews are generally taken very seriously when considering what changes should be made to a paper and whether the manuscript will be published.

Yet, the paper was not only accepted and published in JAACAP, but the editor seems to have ignored the suggestions of the individuals who reviewed the paper. These issues mentioned in the review were obviously not addressed – feel free to read the actual journal article and you can see that the efficacy of paroxetine was pimped well beyond what the data showed and the safety data were also painted to show a picture contrary to the study’s own data. Again, please feel free to read my earlier post regarding the study’s data versus how such data were reported and interpreted in the journal article.

Read this carefully – we all make mistakes. When someone points out that a mistake was made, it is natural to become defensive – that’s okay. However, several years after the fact, one should be able to admit fault and learn from one’s errors; at least that is my opinion.

Dr. Dulcan was asked if she regretted allowing Keller et al.’s Paxil/Seroxat study to be published – her response was less than I hoped for:

I don’t have any regrets about publishing [the study] at all – it generated all sorts of useful discussion which is the purpose of a scholarly journal.

Let’s follow this train of logic. If a study is particularly poorly done or badly misinterprets its own data, then researchers and critics will raise an outcry and point out its numerous flaws. This could, of course, be interpreted as “useful discussion,” which I suppose is what Dulcan meant happened in the case of this article. After all, several letters to the editor expressed frustration with the study and how Keller et al. interpreted their data. So, according to my interpretation of Dulcan’s logic, we should publish studies with as many flaws as possible so that we can “usefully discuss” them.

Of further interest, Jon Jureidini and Anne Tonkin had a letter published in JAACAP in May 2003. In their letter they stated:

…a study that did not show significant improvement on either of two primary outcome measures is reported as demonstrating efficacy (p. 514).

The tone of their letter was perhaps a bit catty as it discussed how Keller et al seem to have spun their interpretation well out of line with the actual study data. I can, however, hardly blame them for their snippiness. Another nugget from their letter:

We believe that the Keller et al. study shows evidence of distorted and unbalanced reporting that seems to have evaded the scrutiny of your editorial process (p. 514).

Thank you to Jureidini and Tonkin for their contribution to the “useful discussion” – indeed, their comments were likely the most useful of all those contributed to the discussion. I give credit to Dulcan for publishing their letter. I would be more impressed if she were willing to state that there were some problems with the editorial process in the case of this article, but I suppose you can’t win them all.

Disclaimer: I watched Panorama and took copious notes. I believe all quotes are accurate but please let me know if you think I transcribed something incorrectly.

Update (1/29/08): My apologies. I should have typed Mina Dulcan, not Mina Duncan. Sorry for the misspellings.

Tuesday, January 30, 2007

Keller, Bad Science, and Seroxat/Paxil

I will focus on Dr. Martin Keller and some seriously poor science in this post. Panorama did an excellent job of profiling Keller’s role in helping to promote paroxetine (known as Paxil in the USA and Seroxat in the UK). Note this is a lengthy post and that the bold section headings should help you find your way.

Who is Martin Keller? He is chair of psychiatry at Brown University. According to his curriculum vitae, he has over 300 scientific publications. People take his opinions seriously. He is what is known as a key opinion leader or thought leader in academia and by the drug industry. What does that mean? Well, on videotape (see the Panorama episode from 1-29-07), Keller said:

You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.

Keller and Study 329: GlaxoSmithKline conducted a study, numbered 329, examining the efficacy and safety of paroxetine versus placebo in the treatment of adolescent depression. Keller was the lead author on the article reporting the study’s results (Journal of the American Academy of Child and Adolescent Psychiatry, 2001, 762-772).

Text of Article vs. the Actual Data: We’re going to now examine what the text of the article said versus what the data from the study said.

Article: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

Data on safety: Emotional lability occurred in 6 of 93 participants on paroxetine compared to 1 of 87 on placebo. Hostility occurred in 7 of 93 patients on paroxetine compared to 0 of 87 on placebo. In fact, on paroxetine, 7 patients were hospitalized due to adverse events, including 2 from emotional lability, 2 due to aggression, 2 with worsening depression, and 1 with manic-like symptoms. This compares to 1 patient who had lability in the placebo group, but apparently not to the point that it required hospitalization. A total of 10 people had serious psychiatric adverse events on paroxetine compared to one on placebo.
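For readers who want to check the arithmetic themselves, here is a quick back-of-the-envelope sketch (my own, not from the study or Panorama) applying a one-sided Fisher exact test to the serious psychiatric adverse event counts quoted above – 10 of 93 on paroxetine versus 1 of 87 on placebo – using only the Python standard library:

```python
# Back-of-the-envelope check (my own arithmetic, not from the study):
# a one-sided Fisher exact test on serious psychiatric adverse events,
# 10 of 93 on paroxetine vs. 1 of 87 on placebo.
from math import comb

def fisher_one_sided(a, n1, b, n2):
    """P(at least `a` events in group 1) under the hypergeometric null,
    given a + b total events spread across groups of size n1 and n2."""
    k_total = a + b
    denom = comb(n1 + n2, k_total)
    return sum(comb(n1, k) * comb(n2, k_total - k)
               for k in range(a, min(k_total, n1) + 1)) / denom

p = fisher_one_sided(10, 93, 1, 87)
print(f"one-sided p ~= {p:.4f}")  # comfortably below 0.05
```

In other words, a 10-to-1 split of serious events between groups of roughly equal size is quite unlikely to be chance – which makes the article’s breezy treatment of safety all the more striking.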

What exactly were emotional lability and hostility? To quote James McCafferty, a GSK employee who helped work on Study 329, “the term emotional lability was [a] catch all term for ‘suicidal ideation and gestures’. The hostility term captures behavioral problems, most related to parental and school confrontations.” According to Dr. David Healy, who certainly has much inside knowledge of raw data and company documents (background here), hostility covered “homicidal acts, homicidal ideation and aggressive events.”

Suicidality is now lability and overt aggression is now hostility. Sounds much nicer that way.

Conveniently defining depression: On page 770 of the study report, the authors opined that “…our study demonstrates that treatment with paroxetine results in clinically relevant improvement in depression scores.” Yet the only measures that favored paroxetine were either based on an arbitrary cutoff (and the researchers could, of course, opt for whatever cutoff yielded the results they wanted), a global measure of improvement that paints an optimistic view of treatment outcome, or cherry-picked single items from longer questionnaires – none of which are valid, comprehensive measures of depression.
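It is worth pausing on how easily a battery of outcome measures can generate a few “significant” findings by chance alone. A toy calculation (my own, assuming eight independent measures purely for illustration – the study’s measures were surely correlated, which changes the exact number but not the point):

```python
# Multiple-comparisons illustration (my own arithmetic, not from the study):
# if an ineffective drug is tested on 8 independent outcome measures, each
# at the conventional 0.05 significance level, the chance of at least one
# spuriously "significant" result is substantial.
alpha, n_measures = 0.05, 8
p_at_least_one = 1 - (1 - alpha) ** n_measures
print(f"chance of >= 1 false positive: {p_at_least_one:.0%}")  # about 1 in 3
```

So roughly a one-in-three chance of at least one false positive, even if the drug did nothing at all – exactly why the failed primary outcomes, declared in advance, are the results that matter.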

Also, think about the following for a moment. A single question on any questionnaire or interview obviously cannot cover the many facets of depression. Claiming that paroxetine is superior in treating depression because a single interview item showed an advantage is utterly invalid. Such logic is akin to finding that a patient with the flu coughs less often on a medication than on placebo, and then declaring the medication superior for managing flu despite its not working better on any of the many other symptoms that comprise influenza.

Whitewashing safety data: It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring. David Healy has also discussed this fallacious assumption that drugs cannot cause harm.

Did Keller Know the Study Data? I’ll paraphrase briefly from Panorama, which had a video of Keller discussing the study and his role in examining and analyzing its data. He said he had reviewed data analytic tables, but then he mentioned soon after that on some printouts there were “item numbers and variable numbers and they don’t even have words on them – I tend not to look at those. I do better with words than symbols. [emphasis mine].”

Ghosted: According to Panorama (and documents I’ve obtained), the paper was written by a ghostwriter. Keller’s response to the ghostwriter after he saw the paper? “You did a superb job with this. Thank you very much. It is excellent. Enclosed are some rather minor changes from me, Neal, and Mike. [emphasis mine].” And let’s remember that Keller apparently did not wish to bother with looking at numbers. It would also appear that he did not want to bother much with the words based upon those numbers.

Third Party Technique: This is a tried and true trick – get several leading academics to stamp their names on a study manuscript and suddenly it appears as if the study was closely supervised in every aspect, from data collection to data analysis to study writeup, by independent academics. Thus, it is not GlaxoSmithKline telling you that their product is great; it is “independent researchers” from such bastions of academia as Brown University, the University of Pittsburgh, the University of Texas Southwestern Medical Center, and the University of Texas Medical Branch at Galveston who are stamping their approval on the product. More on this in future posts.

Keller’s Background: It is relatively well-known that Keller makes much money from his consulting and research arrangements with drug companies. In fact, several years ago, it was documented that Keller pulled in over $500,000 in a single year through these lucrative deals. When looking at how he stuck his name on a study he did not write, endorsing conclusions that were clearly far from the actual study data, can one seriously believe that Keller operated as an independent researcher? Can you believe that this is an isolated incident?

See, for example, Keller’s involvement in a study examining the effects of Risperdal (risperidone) for the treatment of depression. This study was presented a number of times, and he never appeared as an author of any of the presentations. Yet when the study was published, his name appeared as an author. The real kicker was that he allegedly helped to design the study, according to the published article. If he had played a major role in the study, he would have been acknowledged earlier (via being listed as a presentation author), so he apparently helped design the study after it was completed, which is obviously a major feat! The whole story is here. Why put his name on the paper? So that readers would believe more strongly in the study due to his big name status.

In addition, Keller wrote about how Effexor reduces episodes of depression in the long term, though he clearly misinterpreted the study’s findings. To be fair, many other researchers have made the same mistake in believing that SSRIs prevent the return of depression. To quote an earlier post:

In other words, because SSRIs and similar drugs (e.g., Effexor) have withdrawal symptoms that sometimes lead to depression, it looks like they are effective in preventing depression because people often get worse shortly after stopping their medication. The drug companies (Wyeth, in the case of Effexor) would like you to believe that this means antidepressants protect you from re-experiencing depression once you get better, that they are a good long-term treatment. A more accurate statement is that antidepressants protect you from their own substantial withdrawal symptoms until you stop taking them.

Again, Keller is way off from the study data.

Keller on Camera: Keller’s response to being asked about the increased suicidality among participants taking paroxetine in Study 329 was interesting:

None of these attempts led to suicide and very few of them led to hospitalization.

So I suppose a huge increase in suicidal thoughts and gestures is okay, then? This is the commentary of an “opinion leader” – if statements such as the above shape opinions among practicing psychiatrists, then we really are in trouble.

Next: Well, consider this post just the start regarding Paxil/Seroxat. The way the data were pimped by GSK merits further discussion, as does the role of allegedly detached academics in this debacle.

Paxil/Seroxat: WOW

I just finished watching the Panorama investigation that aired yesterday on the BBC. I ever so highly recommend it. You can check it out here. It lifts the curtain on the usual set of lies and does an excellent job of exposing how allegedly independent researchers served as puppets for GlaxoSmithKline. I will write more about it soon. Nice to see some good investigative journalism.