I will focus on Dr. Martin Keller and some seriously poor science in this post.
Panorama did an excellent job of profiling Keller’s role in helping to promote paroxetine (known as Paxil in the USA and Seroxat in the UK).
Note this is a lengthy post and that the bold section headings should help you find your way.
Who is Martin Keller? He is chair of psychiatry at Brown University. According to his curriculum vitae, he has over 300 scientific publications. People take his opinions seriously. He is what is known as a key opinion leader or thought leader, both in academia and in the drug industry. What does that mean? Well, on videotape (see the Panorama episode from 1-29-07), Keller said:
You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.
Keller and Study 329: GlaxoSmithKline conducted a study, numbered 329, which examined the efficacy and safety of paroxetine versus placebo in the treatment of adolescent depression. Keller was the lead author of the article reporting the study’s results (J American Academy of Child and Adolescent Psychiatry, 2001, 762-772).
Text of Article vs. the Actual Data: Let’s now examine what the text of the article claimed versus what the study data actually showed.
Article: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).
Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improvement of 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo, but the margin of superiority was small to moderate at best. On the whole, the most accurate summary is that paroxetine was either no better or only slightly better than placebo.
Data on safety: Emotional lability occurred in 6 of 93 participants on paroxetine compared to 1 of 87 on placebo. Hostility occurred in 7 of 93 patients on paroxetine compared to 0 of 87 on placebo. In fact, 7 patients on paroxetine were hospitalized due to adverse events: 2 from emotional lability, 2 due to aggression, 2 with worsening depression, and 1 with manic-like symptoms. This compares to 1 patient in the placebo group who had emotional lability, though apparently not to the point of requiring hospitalization. In total, 10 patients had serious psychiatric adverse events on paroxetine compared to 1 on placebo.
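For readers who want a sense of how unlikely such an imbalance is under chance alone, here is a back-of-the-envelope calculation of my own (it is not part of the original paper, and a formal analysis would use a proper two-sided test). With only 7 hostility events in the entire study, the one-sided exact tail probability is simply the chance that all 7 events land in the paroxetine arm:

```python
from math import comb

# Hostility events in Study 329 as reported above:
# 7 of 93 paroxetine patients vs. 0 of 87 placebo patients.
events, n_parox, n_placebo = 7, 93, 87
total = n_parox + n_placebo  # 180 patients overall

# One-sided exact (hypergeometric) tail: since there are only 7
# events in total, the most extreme possible table is the observed
# one, with all 7 events in the paroxetine arm.
p_one_sided = comb(n_parox, events) * comb(n_placebo, 0) / comb(total, events)

print(f"P(all 7 hostility events in the paroxetine arm by chance) = {p_one_sided:.4f}")
```

Under this rough calculation the probability comes out below 1%, which is hard to square with the paper’s reassuring tone about safety.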
What exactly were emotional lability and hostility? To quote James McCafferty, a GSK employee who worked on Study 329, “the term emotional lability was [a] catch all term for ‘suicidal ideation and gestures’. The hostility term captures behavioral problems, most related to parental and school confrontations.” According to Dr. David Healy, who certainly has much inside knowledge of the raw data and company documents (background here), hostility covered “homicidal acts, homicidal ideation and aggressive events.”
Suicidality is now lability and overt aggression is now hostility. Sounds much nicer that way.
Conveniently defining depression: On page 770 of the study report, the authors opined that “…our study demonstrates that treatment with paroxetine results in clinically relevant improvement in depression scores.” Yet the only measures that showed a significant advantage for paroxetine were not valid measures of depression in their own right: they were either based on an arbitrary cutoff (and the researchers could of course opt for whatever cutoff yielded the results they wanted), a global measure of improvement that paints an optimistic view of treatment outcome, or cherry-picked single items from longer questionnaires.
Also, think about the following for a moment. A single question on a questionnaire or interview obviously cannot cover the many facets of depression. Claiming that paroxetine’s advantage on a single interview item shows it is superior in treating depression is utterly invalid. Such logic is akin to finding that a patient with the flu coughs less often on a medication than on placebo and then declaring the medication superior to placebo for managing flu, even though it works no better on any of the many other symptoms that comprise influenza.
Whitewashing safety data: It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of those adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” on placebo did have that event attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine was incapable of making participants worse and simply had to drum up some other explanation for why these serious events were occurring. David Healy has also discussed this fallacious assumption that drugs cannot cause harm.
Did Keller Know the Study Data? I’ll paraphrase briefly from Panorama, which had a video of Keller discussing the study and his role in examining and analyzing its data. He said he had reviewed data analytic tables, but then he mentioned soon after that on some printouts there were “item numbers and variable numbers and they don’t even have words on them – I tend not to look at those. I do better with words than symbols. [emphasis mine].”
Ghosted: According to Panorama (and documents I’ve obtained), the paper was written by a ghostwriter. Keller’s response to the ghostwriter after he saw the paper? “You did a superb job with this. Thank you very much. It is excellent. Enclosed are some rather minor changes from me, Neal, and Mike. [emphasis mine].” And let’s remember that Keller apparently did not wish to bother with looking at numbers. It would also appear that he did not want to bother much with the words based upon those numbers.
Third Party Technique: This is a tried-and-true trick: get several leading academics to stamp their names on a study manuscript and suddenly it appears that the study was closely supervised in every aspect, from data collection to data analysis to the study write-up, by independent academics. Thus, it is not GlaxoSmithKline telling you that its product is great; it is “independent researchers” from such bastions of academia as Brown University, the University of Pittsburgh, the University of Texas Southwestern Medical Center, and the University of Texas Medical Branch at Galveston who are stamping approval on the product. More on this in future posts.
Keller’s Background: It is relatively well known that Keller makes a great deal of money from his consulting and research arrangements with drug companies. In fact, several years ago it was documented that Keller pulled in over $500,000 in a single year through these lucrative deals. Looking at how he stuck his name on a study he did not write, endorsing conclusions that were clearly far from the actual study data, can one seriously believe that Keller operated as an independent researcher? And can one believe that this is an isolated incident?
See, for example, Keller’s involvement in a study examining the effects of Risperdal (risperidone) for the treatment of depression. The study was presented a number of times, and he never appeared as an author on any of the presentations. Yet when the study was published, his name appeared as an author. The real kicker: according to the published article, he helped to design the study. Had he played a major role in the study, he would have been credited earlier (by being listed as a presentation author), so he apparently helped design the study after it was completed, which is obviously quite a feat! The whole story is here. Why put his name on the paper? So that readers would believe more strongly in the study because of his big-name status.
In addition, Keller wrote about how Effexor reduces episodes of depression in the long term, though he clearly misinterpreted the study’s findings. To be fair, many other researchers have made the same mistake of believing that SSRIs prevent the recurrence of depression. To quote an earlier post:
In other words, because SSRIs and similar drugs (e.g., Effexor) have withdrawal symptoms that sometimes lead to depression, it looks like they are effective in preventing depression because people often get worse shortly after stopping their medication. The drug companies (Wyeth, in the case of Effexor) would like you to believe that this means antidepressants protect you from re-experiencing depression once you get better, that they are a good long-term treatment. A more accurate statement is that antidepressants protect you from their own substantial withdrawal symptoms until you stop taking them.
Again, Keller’s conclusions are way off from the study data.
Keller on Camera: Keller’s response to being asked about the increased suicidality among participants taking paroxetine in Study 329 was interesting:
None of these attempts led to suicide and very few of them led to hospitalization.
So a huge increase in suicidal thoughts and gestures is okay, then? This is the commentary of an “opinion leader.” If statements such as the above shape opinions among practicing psychiatrists, then we really are in trouble.
Next: Consider this post just the start regarding Paxil/Seroxat. The way GSK pimped the data merits further discussion, as does the role of allegedly detached academics in this debacle.