Showing posts with label Martin Keller. Show all posts

Friday, April 03, 2009

Leading Psychiatrist Slammed in Leading Journal

In the latest American Journal of Psychiatry appears a review of Alison Bass's book Side Effects. As many of my readers undoubtedly recall, the book details the saga of the antidepressant drug paroxetine (Paxil) and the troubled line of "research" used to support its use in children (among other points). The reviewer clearly liked the book, which is not necessarily newsworthy. What is notable is that a book review appearing in perhaps the world's leading psychiatry journal slams a leading member of the psychiatry profession. The reviewer, Dr. Spencer Eth, writes the following:
More recently, psychiatrists have been greeted in the morning with front-page newspaper exposés of huge sums being directed by these same drug companies to the physician leaders of our field. In Side Effects: A Prosecutor, a Whistleblower, and a Bestselling Antidepressant on Trial, journalist Alison Bass has written the powerful story of a leading medication, its manufacturer, and a favored psychiatrist, whose driving force was profit not treatment.
Ouch. Though not naming the psychiatrist directly, it is clearly a reference to Martin Keller, bigwig at Brown University, whose work on one particular study regarding Paxil was the subject of a lengthy prior post. For the collection of my posts related to Dr. Keller, please click here.

Back to the review...
This well-told cautionary tale lacks the excitement of a novel but instead informs the reader with an actual case study with the real names of psychiatrists we know. We can see exactly how corporate greed, drug-company-sponsored clinical research, and mental health care become a toxic mix that inevitably damages our patients’ well-being, our colleagues’ reputations, and our profession’s good name.
It was a refreshing surprise to see Martin Keller's goose get cooked in this review. I don't mean to sound vindictive or mean-spirited. Keller has done a lot of work over the course of his career, much of which likely has some redeeming value. That being said, there can be little doubt that some of his "science" is quite dubious. And for a major psychiatry journal to run anything, even a book review, that directly goes after a "key opinion leader" who appears quite culpable in performing bad science -- that's a good sign.

Wednesday, July 16, 2008

Round Up The Usual Suspects

Senator Grassley's investigation of the connections between Big Pharma and psychiatry continues to target individuals who have been featured on this blog. The latest, according to Pharmalot: Martin Keller of Brown University. Those of you who have any familiarity with GlaxoSmithKline's work with Paxil in youth will recall that Keller was the "lead author", using the term as loosely as possible, of the infamous Study 329 paper which claimed inaccurately that Paxil was safe and effective in treating depression. Recently, a team of researchers published a bombshell of a paper that pointed out in detail how the Study 329 manuscript was doctored to paint an unrealistically favorable picture of the study's findings.

A recent transcript became available in which Keller discussed his role in "authoring" the Study 329 paper (pgs. 242-266 in particular). The CliffsNotes version: Keller claimed on one hand that he couldn't recall what happened and on the other hand said that he always played a key role in developing the main points to be communicated in every manuscript on which he was lead author. If we take Keller at his word, that he really developed the main ideas for the paper, then he was either a) negligent of the actual study data or b) actively participating in covering up the unfavorable results from the study. I noted earlier that Keller did not seem particularly familiar with the actual study data, so I suppose option A may be more likely. In any case, being the lead author on a study that claims a drug is safe and effective when the study data show that the drug is dangerous and ineffective -- that's nothing to brag about.

An interesting coincidence: There is a new Dean of Medicine (Edward J. Wing) at Brown University. Aubrey Blumsohn of the Scientific Misconduct Blog wrote a letter to the incoming Dean. So far, Blumsohn says he has received no reply. Blumsohn expressed concerns with Keller and with the handling of David Kern, a Brown University faculty member who was canned for apparently nefarious reasons. My bet: Nothing will change. Keller brings in a boatload of money to the university and is hence highly valued by the administration. He is also a "big name" in psychiatry, though between Study 329 and ARISE-RD, another strange study in which Keller seems to have designed the study after it was already completed, his standing is certainly not spotless. Let me be clear: I'm not advocating that the new Dean do anything in particular. I'm not calling for Keller to be canned or anything of the like. However, if I were a Dean (God forbid), then I would be concerned about well-documented issues with one of my big-name faculty, particularly because these issues go to the heart of scientific integrity.

Monday, April 28, 2008

Paxil, Lies, and the Lying Researchers Who Tell Them

A bombshell has just appeared in the International Journal of Risk & Safety in Medicine. The subject of the paper is Paxil study 329, which examined the effects of the antidepressant paroxetine in adolescents. The study findings were published in the Journal of the American Academy of Child and Adolescent Psychiatry in 2001. These new findings show that I was wrong about Paxil Study 329. You know, the one that I said overstated the efficacy of Paxil and understated its risks. The one that I claimed was ghostwritten. Turns out that due to legal action, several documents were made available that shed more light on the study. The authors (Jureidini, McHenry, and Mansfield) of the new investigation have a few enlightening points. Let's look at the claims and you can then see how wrong I was, for which I sincerely apologize. The story is actually worse than I had imagined. Here's what I said then:

Article [quote from the study publication]: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

I went on to bemoan how the authors took differences either based on arbitrary cutoff scores or from measures that assessed something other than depression to make illegitimate claims that paroxetine was effective. Based upon newly available data from the study, here's what happened.
  • The protocol for the study (i.e., the document laying out what was going to happen in the study) called for eight outcome measurements. To quote Jureidini et al: "There was no significant difference between the paroxetine and placebo groups on any of the eight pre-specified outcome measures." So I was wrong. Paxil was not better on 4 of 8 measures -- it was better on ZERO of eight measures. My sincerest apologies.
  • Another quote from Jureidini and friends: "Overall four of the eight negative outcome measures specified in the protocol were replaced with four positive ones, many other negative measures having been tested and rejected along the way."
Let's break this thing down for a minute. The authors planned to look eight different ways for Paxil to beat placebo. They went zero for eight. So, rather than declaring defeat, the authors then went digging to find some way in which Paxil was better than a placebo. Devising various cutoff scores on various measures on which victory could be declared, as well as examining individual items from various measures rather than entire rating scales, the authors were able to grasp and pull out a couple of small victories. In the published version of the paper, there is no hint that such data dredging occurred. Change the endpoints until you find one that works out, then declare victory.

How About Safety?

I was incensed about the coverage of safety, particularly the magical writing that stated that a placebo can make you suicidal, but Paxil could not. I wrote:
It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” [i.e., suicidal] on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring.
Turns out I missed a couple things. Based on looking at an internal document and doing some calculations, Jureidini et al. found that serious adverse events were significantly more likely to occur in patients taking paroxetine (12%) vs. placebo (2%). Likewise, adverse events requiring hospitalization were significantly disadvantageous to paroxetine (6.5% vs. 0%). Severe nervous system side effects -- same story (18% vs. 4.6%). The authors of Study 329 did not conduct analyses to see whether the aforementioned side effects occurred more commonly on drug vs. placebo.
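For readers who want to run this sort of check themselves, here is a minimal sketch of a one-sided Fisher exact test in plain Python. The counts below are my reconstruction from the percentages above, assuming the published group sizes of 93 (paroxetine) and 87 (placebo); they are approximations, not the exact tallies from Jureidini et al.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test on the 2x2 table [[a, b], [c, d]]
    (events / non-events per group): the probability, under no true
    difference, of seeing 'a' or more events in the first group
    given the fixed row and column totals."""
    n1 = a + b            # size of group 1 (drug)
    k = a + c             # total events across both groups
    N = n1 + c + d        # total sample size
    p = 0.0
    for j in range(a, min(n1, k) + 1):
        # hypergeometric probability of exactly j events in group 1
        p += comb(k, j) * comb(N - k, n1 - j) / comb(N, n1)
    return p

# Serious adverse events: roughly 11/93 on paroxetine vs. 2/87 on placebo
print(fisher_one_sided(11, 82, 2, 85))
# Hospitalization: roughly 6/93 vs. 0/87
print(fisher_one_sided(6, 87, 0, 87))
```

With these reconstructed counts, both comparisons fall below the conventional 0.05 threshold, which is consistent with the point that the original authors could easily have run this analysis themselves.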

Funny how they had time to dredge through every conceivable efficacy outcome but couldn't see whether the difference in severe adverse events was statistically significant.

One quote from the discussion section of the paper sums it all up:
There was no significant efficacy difference between paroxetine and placebo on the two primary outcomes or six secondary outcomes in the original protocol. At least 19 additional outcomes were tested. Study 329 was positive on 4 of 27 known outcomes (15%). There was a significantly higher rate of SAEs with paroxetine than with placebo. Consequently, study 329 was negative for efficacy and positive for harm.
But the authors concluded infamously that "Paroxetine is generally well-tolerated and effective for major depression in adolescents."
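The arithmetic in that quote is worth pausing over, because it shows why endpoint-switching all but guarantees a "win." As a rough illustration (assuming, unrealistically, that the 27 outcomes are independent), the chance of at least one spurious p < 0.05 result when no real effect exists is:

```python
# Chance of at least one spurious "significant" result among m
# independent tests at level alpha, when no real effect exists.
def p_any_false_positive(m, alpha=0.05):
    return 1 - (1 - alpha) ** m

print(round(p_any_false_positive(27), 2))  # about 0.75
```

Real outcome measures are correlated, so the true figure is lower, but the lesson holds: test enough endpoints and something will come up "significant" by chance alone.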

Enter Ghostwriters. Documentary evidence indicated that the first draft of the study was ghostwritten. This leaves two possible roles for the so-called academic authors of this paper:
  • They were willing co-conspirators who committed scientific fraud.
  • They were dupes, who dishonestly represented that they had a major role in the analysis of data and writing of the study, when in fact GSK operatives were working behind the scenes to manufacture these dubious results.
Remember, this study was published in 2001, and there has still been no apology for the fictional portrayal of its results, wherein a drug that was ineffective and unsafe was portrayed as safe and effective. Physicians who saw the authorship line likely thought "Gee, this is a who's who among academic child psychiatrists -- I can trust that they provided some oversight to make sure GSK didn't twist the results." But they were wrong.

By the way, Martin Keller, the lead "independent academic" author of this tragedy of a study said, when asked about what it means to be a key opinion leader in psychiatry:
You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.
So is completely misrepresenting the data from a study "honorable"? Is Keller's opinion "worth considering?" As you know if you've read this blog for long, such behavior is, sadly, not a fluke occurrence. Many others who should be providing leadership are leading us on a race to the scientific and ethical bottom. What will Brown University, home of Keller, do? Universities don't seem to care at all about scientific fraud, provided that the perpetrators of bad science are bringing home the bacon.

Not one of the "key opinion leaders" who signed on as an author to this study has said, "Yep, I screwed up. I didn't see the data and I was a dupe." Nobody. Sure, I don't expect that every author of every publication can vouch for the data with 100% certainty. I understand that. But shouldn't the lead author be taking some accountability?

This is a Fluke (?) Some may be saying: "But this is just a fluke occurrence." Is it? I've seen much evidence that data are often selectively reported in a manner like this -- looks like (sadly) it takes a lawsuit for anyone to get a whiff of the bastardization of science that passes for research these days. If GSK had not been sued, nobody would have ever known that the published data from Study 329 were negative. A reasonably educated person could see that the writeup of the study was a real pimp job -- lots of selling the product based on flimsy evidence, but nobody would have seen the extent of the fraud. Apparently lawyers need to police scientists because scientists are incapable of playing by some very basic rules of science.

See for Yourself. Documents upon which the latest Jureidini et al. paper is based can be found here. Happy digging.

Wednesday, February 13, 2008

Key Opinion Leaders and Information Laundering: The Case of Paxil

Joseph Glenmullen’s testimony regarding GlaxoSmithKline’s burial of suicide data related to Paxil, which was discussed briefly across the blogosphere last week (Pharmalot, Furious Seasons, for example), was quite interesting in many respects.

One important aspect that needs public airing is how key opinion leaders in psychiatry were used by GSK to help allay fears that Paxil might induce suicidal thoughts and/or behaviors. When GSK issues statements indicating that Paxil is not linked to increased suicide risk, many people will think “Gee, of course GSK will say Paxil is not linked to suicide – it’s their product, after all.” But when purportedly independent academic researchers make the same claims regarding the alleged safety of Paxil, then people tend to think “Well, if these big-name researchers say it’s safe, then I suppose that there’s no risk.” But what if GSK simply hands these big-name researchers (aka “key opinion leaders") charts with data, and then the “independent” researchers go about stating that Paxil is safe? Mind you, the researchers in question don’t see the actual raw data – just tables handed to them from GSK – in other words, they simply take GSK’s word that the data is accurate. In essence, these researchers are serving as information conduits for GSK.

But wait a second, what if the charts and data tables handed to them by GSK are not an accurate representation of the raw data; what if GSK is lying? Well, of course, it turns out that GSK was lying in a big way for several years. This post will not go into depth on the suicide data, as it has been covered elsewhere (1, 2, 3 ) -- even GSK now admits that Paxil is related to an increased risk of suicidality.

My main question in this post is how we are supposed to trust our "key opinion leaders" in psychiatry if they are willing to simply look at data tables from GSK (and others), then make pronouncements regarding the benefits and safety of medication without ever examining the raw data. To put this in layman's terms, suppose an election occurs and candidate A wins 70% of votes while candidate B wins 30% of votes. As the vote counter, I then rig the results to say that candidate B won the election by a 55% to 45% margin. Suppose that the election certification board shows up later and I show them a spreadsheet that I created which backs up my 55% vote tally for candidate B. The election board is satisfied and walks away, not knowing that the vote counting was a sham. Obviously, the election board should have checked the ballots (the raw data) rather than simply examining the spreadsheet (the data table). In much the same way, these so-called thought leaders in psychiatry should have checked the raw data before issuing statements about Paxil.

What did these key opinion leaders say about Paxil? Some quotes from Glenmullen's testimony follow, based upon documents he obtained in GSK's archive. Here's what David Dunner (University of Washington) and Geoffrey Dunbar (of GSK) reportedly said at a conference:
Suicides and suicide attempts occurred less frequently with Paxil than with either placebo or active controls.
John Mann of Columbia University, regarding how data were collected:
We spent quite a bit of time gathering data from various drug companies and formulating it into the publication of the committee's findings
The committee he references is a committee from the American College of Neuropsychopharmacology, the same organization that issued a dubious report blessing the use of antidepressants in kids.

More from Mann, after being asked if he saw raw data or just data summarized in tables:
To be perfectly honest, I can't recall how much of the statistical raw data we received at the time that we put these numbers together...No, I think we all went through the tables of data that were provided at the time.
To use the analogy from above, the election board did not actually see the ballots. Stuart Montgomery is next. He was an author, along with Dunner and Dunbar, on a paper in the journal European Neuropsychopharmacology that stated:
Consistent reduction in suicides, attempted suicides, and suicidal thoughts, and protection against emergent suicidal thoughts suggest that Paxil has advantages in treating the potentially suicidal client.
Did Dunner see any raw data?
Dunner: I didn't see the raw data in the case report forms. I did see the tables. I work with the tables. The tables came before any draft, as I recall. We -- we created the paper from the tables.

Attorney: And -- and you never questioned, did you, or did you not question the validity of the data in Table 8?

Dunner: No
The above-mentioned paper that gave a clean slate to Paxil? According to a GSK document examined by Glenmullen, it was used by GSK to help convince physicians that they need not worry about Paxil inducing suicidality.

If you are an academic researcher, and you simply take data tables from drug companies then reproduce them in a report and/or publication, you are not doing research -- you are laundering information. People think that you have closely examined the data, but you have not, and you are thus doing the public a disservice.

I am unaware of any of the above researchers ever issuing a public apology. I can respect the context of the times; researchers may not have been aware of how pharmaceutical companies fool around with data in the early 90's. So if anyone wants to issue a mea culpa, I'd respect such an apology, but I have a feeling that not a single one of the above named individuals (nor this guy) will make an apology. Instead, it will be more business as usual, as these key opinion leaders, knowing who butters their bread, will continue to launder information and tell the public that everything will be fine and dandy if they just take their Paxil, Seroquel, or whatever hot drug of the moment is burning up the sales charts.

Friday, November 02, 2007

Paxil's "Advantages"

Paxil and its advantages. Yeah, that's what this blog is about. I just recently retitled the blog; it was formerly known as the Paxil Pimp's Paradise. What am I talking about? I received an email a couple of days ago, to which I will reply in this post. Don't worry. In sticking with my informal confidentiality policy, I'll not reveal the identity of the person or his/her employer. Here is the email:
Respected Sir / Madam,

I read your review on website, please if you can provide me the the reviews for Advantages of Paroxetine for depression & anxiety. It would be more interesting if it would consist of recent data i.e. in year 2007.

I expect you [sic] early reply

Thank You.
Yes, this person works for a drug company. That's all I will reveal about the author of the email. Here is my reply...

Dear Sir/Madam,

Please see the following posts for a detailed explanation of the "advantages" of paroxetine (Paxil/Seroxat) as discussed previously on my site...
  • Advantage 1: Increases suicide attempts in patients.
  • Advantage 2: Potentially increases obesity in patients, though research is preliminary.
  • Advantage 3: Increase in birth defects for children whose mothers were taking Paxil while pregnant.
  • Advantage 4: Excellent marketing, both for social phobia and depression. Excellent use of misleading writing in so-called scientific journals when writing about the "advantages" of Paxil, including using euphemisms for unpleasantries like suicide attempts.
  • Advantage 5: Major discontinuation symptoms. Take Paxil for a while, try to stop and let me know what "advantage" you notice. See references at bottom of this post for a start. There are many more studies documenting clearly the difficulties with paroxetine withdrawal.
  • Advantage 6: Those wonderful sexual side effects. And they might last for a long time even after one stops taking the medication.
I hope you find this information useful in your search for the advantages of Paxil. I am flattered by your interest in my opinion on this matter. For additional information on paroxetine, you may want to consult Martin Keller, who has a somewhat different take than myself, but who is a potential recipient of the prestigious Golden Goblet Award for his excellent scientific work on paroxetine. Karen Wagner, another Golden Goblet Nominee, may also be an excellent source. You may also wish to consult the following websites:
Should I be able to assist further, please let me know. There are other sources with which you will want to be familiar. You may also want to contact Philip Dawdy regarding the advantages of atypical antipsychotics, and please see Aubrey Blumsohn regarding the advantages of Actonel in treating osteoporosis. Also, I hope you contact Jack Friday, Ed Silverman, or Peter Rost to provide industry cheerleading. For any questions regarding the excellent Rozerem advertising campaign, please see John Mack. Last but absolutely not least, for any advice regarding how to outsource your scientists, fake your clinical trials, and abuse your employees, please take advice from the sage Pharma Giles.

Sincerely Yours,

Paxil Pee-Yimp #1

Tuesday, April 03, 2007

GSK, Key Opinion Leaders, and Used Cars


GSK -- More Documents. Oh boy. The good folks at Healthy Skepticism have posted a slew of documents pertaining to the infamous GSK Study 329, in which Paxil was described in a 2001 journal article as safe and effective, yet the data showed some rather heinous side effects occurring much more frequently in the Paxil group (such as significant aggression and suicidal behavior) than in the placebo group. The data also showed, at best, a small advantage for Paxil over placebo, an advantage that was more than outweighed by the significant incidence of serious side effects.

I've looked at a few of the newly posted documents and hope to post my take on them in the near future. In the meantime, I refer you to the documents on Healthy Skepticism's excellent website.

Hat tip to Philip Dawdy at Furious Seasons for beating me to the punch and linking to the above documents. Also beating me to the story, he mentioned that in the latest American Journal of Psychiatry, there is an editorial on adolescent bipolar disorder written by Boris Birmaher, one of the authors of the now discredited GSK Study 329 article. Birmaher states that it is quite important for bipolar disorder to be increasingly recognized and treated in youth.

Dawdy essentially asks why we should trust Birmaher given his involvement in the scandalous GSK Study 329 (please read this link for background info). Despite several years passing since the publication of the study's results, not a single one of the "independent" academic authors has apologized or spoken out against the way the data were manipulated and misinterpreted.

Key Opinion Leaders or Used Car Salespeople?
If academic psychiatry wants some credibility, then it is high time for the so-called opinion leaders to issue a mea culpa -- it's time to admit some fault. Here's my message to the big-name academics in psychiatry, which likely applies to the academic bigwigs in many other branches of medicine as well: Rather than pimping drugs in corporate press releases, taking cushy consulting gigs, and rubber stamping your name on ghostwritten articles (based on data you have never actually seen) and infomercials labeled as "medical education," turn over a new leaf. Have you been used? Are you really performing science or are you just a tool of a marketing division? What good is your research actually doing for patients? Does selectively reporting only positive data and burying the negative data really help people struggling with mental anguish?

How is hiding the faulty mechanics of a 1986 Ford Tempo as a car salesperson any different from hiding safety and efficacy data on a medication as a researcher? The same rule applies -- Tell to Sell. In other words, if it ain't going to help sell (the car or the drug), then keep your mouth shut.

Must...take..."broad spectrum psychotropic agent"...too outraged to function...

Thursday, February 01, 2007

Seroxat (Paxil) Can Make People Suicidal?

… not by the hair of my chinny chin chin, sayeth Martin Keller, or so it would appear.

Not sure how I neglected to mention this in a prior post, but here goes: Keller (on video), was asked about a study which showed placebo was superior to paroxetine (Seroxat or Paxil). In response, Keller, while smiling, reached out and stroked the reporter under the chin. I don’t really know what that gesture meant, but it struck me (and some individuals who have emailed me about it) as pretty strange.

Wednesday, January 31, 2007

Journal Editor Unapologetic Over Paxil/Seroxat Article

Dr. Mina Duncan is the editor of the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), which, as she noted on Panorama, is very widely read among child and adolescent psychiatrists. So, in this prestigious journal, one would expect high editorial standards.

Let’s go through what happened with study 329, which turned into a publication in JAACAP in July 2001 upon which Dr. Martin Keller (see here) was the lead author. The study was submitted (after the Journal of the American Medical Association had rejected it – good for them) to JAACAP, and Panorama nicely documented a couple of the reviewer comments. They included

Overall, results do not clearly indicate efficacy – authors need to clearly note this.

The relatively high rate of serious adverse effects was not addressed in the discussion.

Given the high placebo response rate… are (these drugs) an acceptable first-line therapy for depressed teenagers?

Remember that journals receive manuscripts, and then send them to be reviewed by researchers in the field as to their quality. These reviews are generally taken very seriously when considering what changes should be made to a paper and whether the manuscript will be published.

Yet, the paper was not only accepted and published in JAACAP, but the editor seems to have ignored the suggestions of the individuals who reviewed the paper. These issues mentioned in the review were obviously not addressed – feel free to read the actual journal article and you can see that the efficacy of paroxetine was pimped well beyond what the data showed and the safety data were also painted to show a picture contrary to the study’s own data. Again, please feel free to read my earlier post regarding the study’s data versus how such data were reported and interpreted in the journal article.

Read this carefully – we all make mistakes. When someone points out that a mistake was made, it is natural to become defensive – that’s okay. However, several years after the fact, one should be able to admit fault and learn from one’s errors; at least that is my opinion.

Dr. Duncan was asked if she regretted allowing Keller et al.’s Paxil/Seroxat study to be published – her response was less than I hoped for:

I don’t have any regrets about publishing [the study] at all – it generated all sorts of useful discussion which is the purpose of a scholarly journal.

Let’s follow this train of logic. If a study is either particularly poorly done or misinterprets its own data to a large extent, then there will be an outcry from researchers and critics who will point out the numerous flaws. This could, of course, be interpreted as “useful discussion,” which I suppose is what Duncan meant happened in the case of this article. After all, there were several letters to the editor expressing frustration with the study and with how Keller et al. interpreted their data. So, according to my interpretation of Duncan’s logic, we should publish studies with as many flaws as possible so that we can “usefully discuss” them.

Of further interest, Jon Jureidini and Anne Tonkin had a letter published in JAACAP in May 2003. In their letter they stated

…a study that did not show significant improvement on either of two primary outcome measures is reported as demonstrating efficacy (p. 514).

The tone of their letter was perhaps a bit catty as it discussed how Keller et al seem to have spun their interpretation well out of line with the actual study data. I can, however, hardly blame them for their snippiness. Another nugget from their letter:

We believe that the Keller et al. study shows evidence of distorted and unbalanced reporting that seems to have evaded the scrutiny of your editorial process (p. 514).
Thank you to Jureidini and Tonkin for their contribution to the “useful discussion” – indeed, their comments were likely the most useful of all that were contributed to the discussion. I give credit to Duncan for publishing their letter. I would be more impressed if she was willing to state that there were some problems with the editorial process in the case of this article, but I suppose you can’t win them all.

Disclaimer: I watched Panorama and took copious notes. I believe all quotes are accurate but please let me know if you think I transcribed something incorrectly.

Update (1/29/08): My apologies. I should have typed Mina Dulcan, not Mina Duncan. Sorry for the misspellings.

Tuesday, January 30, 2007

Keller, Bad Science, and Seroxat/Paxil

I will focus on Dr. Martin Keller and some seriously poor science in this post. Panorama did an excellent job of profiling Keller’s role in helping to promote paroxetine (known as Paxil in the USA and Seroxat in the UK). Note this is a lengthy post and that the bold section headings should help you find your way.

Who is Martin Keller? He is chair of psychiatry at Brown University. According to his curriculum vitae, he has over 300 scientific publications. People take his opinions seriously. He is what is known as a key opinion leader or thought leader in academia and by the drug industry. What does that mean? Well, on videotape (see the Panorama episode from 1-29-07), Keller said:

You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.

Keller and Study 329: GlaxoSmithKline conducted a study, numbered 329, in which it examined the efficacy and safety of paroxetine versus placebo in the treatment of adolescent depression. Keller was the lead author on the article reporting the study’s results (Journal of the American Academy of Child and Adolescent Psychiatry, 2001, 762-772).

Text of Article vs. the Actual Data: Let’s now compare what the text of the article claimed with what the study data actually showed.

Article: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo, but its margin of superiority was always small to moderate at best. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

Data on safety: Emotional lability occurred in 6 of 93 participants on paroxetine compared to 1 of 87 on placebo. Hostility occurred in 7 of 93 patients on paroxetine compared to 0 of 87 on placebo. In fact, on paroxetine, 7 patients were hospitalized due to adverse events, including 2 from emotional lability, 2 due to aggression, 2 with worsening depression, and 1 with manic-like symptoms. This compares to 1 patient who had lability in the placebo group, but apparently not to the point that it required hospitalization. A total of 10 people had serious psychiatric adverse events on paroxetine compared to one on placebo.
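For readers who want to check the arithmetic on the hostility numbers above (7 of 93 on paroxetine vs. 0 of 87 on placebo), here is a quick sketch of a one-sided Fisher exact test. This calculation is mine, not anything from the Keller et al. paper, and it assumes nothing beyond the raw counts quoted above:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]]: the probability of seeing at least `a`
    events in the first group, given fixed margins."""
    n = a + b + c + d
    group1 = a + b      # size of the first group
    events = a + c      # total events across both groups
    p = 0.0
    for k in range(a, min(group1, events) + 1):
        p += comb(group1, k) * comb(n - group1, events - k) / comb(n, events)
    return p

# Hostility in Study 329: 7 of 93 on paroxetine, 0 of 87 on placebo
p = fisher_one_sided(7, 86, 0, 87)
print(f"one-sided Fisher exact p = {p:.4f}")
```

Under this calculation the imbalance is quite unlikely to be chance (p under 0.01), which makes the paper's dismissal of these events all the more striking.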

What exactly were emotional lability and hostility? To quote James McCafferty, a GSK employee who helped work on Study 329, “the term emotional lability was catch all term for ‘suicidal ideation and gestures’. The hostility term captures behavioral problems, most related to parental and school confrontations.” According to Dr. David Healy, who certainly has much inside knowledge of raw data and company documents (background here), hostility counted for “homicidal acts, homicidal ideation and aggressive events.”

Suicidality is now lability and overt aggression is now hostility. Sounds much nicer that way.

Conveniently defining depression: On page 770 of the study report, the authors opined that “…our study demonstrates that treatment with paroxetine results in clinically relevant improvement in depression scores.” Yet the only measures that favored paroxetine were either based on an arbitrary cutoff (and the researchers could of course opt for whatever cutoff yielded the results they wanted), a global measure of improvement that paints an optimistic view of treatment outcome, or cherry-picked single items from longer questionnaires – none of which is a valid standalone measure of depression.

Also, think about the following for a moment. A single question on a questionnaire or interview obviously cannot cover the many facets of depression. Implying that paroxetine is superior in treating depression because it showed an advantage on a single interview item is utterly invalid. Such logic is akin to finding that a patient with the flu reports coughing less often on a medication compared to placebo, and then declaring the medication superior to placebo for managing flu despite it working no better on any of the many other symptoms that comprise influenza.
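The cherry-picking problem has a simple statistical face. This back-of-the-envelope calculation (mine, not the paper’s, and assuming for illustration that the eight measures are independent and each tested at the conventional 0.05 level) shows how likely at least one “significant” result is even for a drug with no real effect:

```python
# Family-wise false-positive rate when 8 outcome measures are
# each tested at alpha = 0.05 and the drug truly does nothing.
# Independence of the measures is an illustrative assumption;
# real depression scales are correlated, which lowers the rate
# somewhat but does not eliminate the problem.
alpha = 0.05
n_measures = 8
fwer = 1 - (1 - alpha) ** n_measures
print(f"family-wise false-positive rate: {fwer:.1%}")
```

In other words, roughly a one-in-three chance of a spurious “win” somewhere among eight measures, which is why reporting only the measures that happened to favor the drug is so misleading.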

Whitewashing safety data: It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” on placebo had that event attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine was incapable of making participants worse, and they simply had to drum up some other explanation for why these serious events were occurring. David Healy has also discussed this fallacious assumption that drugs cannot cause harm.

Did Keller Know the Study Data? I’ll paraphrase briefly from Panorama, which had a video of Keller discussing the study and his role in examining and analyzing its data. He said he had reviewed data analytic tables, but then he mentioned soon after that on some printouts there were “item numbers and variable numbers and they don’t even have words on them – I tend not to look at those. I do better with words than symbols. [emphasis mine].”

Ghosted: According to Panorama (and documents I’ve obtained), the paper was written by a ghostwriter. Keller’s response to the ghostwriter after he saw the paper? “You did a superb job with this. Thank you very much. It is excellent. Enclosed are some rather minor changes from me, Neal, and Mike. [emphasis mine].” And let’s remember that Keller apparently did not wish to bother with looking at numbers. It would also appear that he did not want to bother much with the words based upon those numbers.

Third Party Technique: This is a tried and true trick – get several leading academics to stamp their names on a study manuscript and suddenly it appears as though the study was closely supervised in every aspect, from data collection to data analysis to writeup, by independent academics. Thus, it is not GlaxoSmithKline telling you that their product is great; it is “independent researchers” from such bastions of academia as Brown University, the University of Pittsburgh, the University of Texas Southwestern Medical Center, and the University of Texas Medical Branch at Galveston who are stamping approval of the product. More on this in future posts.

Keller’s Background… It is relatively well-known that Keller makes much money from his consulting and research arrangements with drug companies. In fact, several years ago, it was documented that Keller pulled in over $500,000 in a single year through these lucrative deals. When looking at how he stuck his name on a study he did not write, endorsing conclusions that were clearly far from the actual study data, can one seriously believe that Keller operated as an independent researcher? Can you believe that this is an isolated incident?

See, for example, Keller’s involvement in a study examining the effects of Risperdal (risperidone) for the treatment of depression. This study was presented a number of times, and he never appeared as an author of any of the presentations. Yet when the study was published, his name appeared as an author. The real kicker was that he allegedly helped to design the study, according to the published article. If he had played a major role in the study, he would have been acknowledged earlier (via being listed as a presentation author), so he apparently helped design the study after it was completed, which is obviously a major feat! The whole story is here. Why put his name on the paper? So that readers would believe more strongly in the study due to his big name status.

In addition, Keller wrote about how Effexor reduces episodes of depression in the long term, though he clearly misinterpreted the study’s findings. To be fair, many other researchers have made the same mistake in believing that SSRIs prevent the recurrence of depression. To quote an earlier post:

In other words, because SSRIs and similar drugs (e.g., Effexor) have withdrawal symptoms that sometimes lead to depression, it looks like they are effective in preventing depression because people often get worse shortly after stopping their medication. The drug companies (Wyeth, in the case of Effexor) would like you to believe that this means antidepressants protect you from re-experiencing depression once you get better, that they are a good long-term treatment. A more accurate statement is that antidepressants protect you from their own substantial withdrawal symptoms until you stop taking them.

Again, Keller is way off from the study data.

Keller on Camera: Keller’s response to being asked about the increased suicidality among participants taking paroxetine in Study 329 was interesting:

None of these attempts led to suicide and very few of them led to hospitalization.

So a huge increase in suicidal thoughts and gestures is okay, then? This is the commentary of an “opinion leader” – if statements such as the above shape opinions among practicing psychiatrists, then we really are in trouble.

Next: Consider this post just the start regarding Paxil/Seroxat. The way the data were pimped by GSK merits further discussion, as does the role of allegedly detached academics in this debacle.

Monday, January 22, 2007

Effexor For Life

In a study that parallels an earlier study (and accompanying post – Lexapro for Life), Dr. Martin Keller (presumably a primary investigator on this study) found that long-term venlafaxine (Effexor) use reduces risk for depression. In fact, he said,

these data showed that venlafaxine extended release can help prevent new episodes of depression -- providing an option to the millions of adult patients with depression who have experienced a disappointing setback or who are still seeking symptom relief.

Please note that I have text of a document from PR Newswire indicating that he made the above statement, but I don’t have a link to the text.

His comments are not all that different than those of Dr. Susan Kornstein, who was quoted as saying the following about the long-term effects of Lexapro:

These findings indicate the importance of maintenance therapy for patients with recurrent major depressive disorder beyond four to six months of improvement, even if a patient’s depressive symptoms appear to be resolved

This study: The Effexor results have not been published to my knowledge, but from the study description, it appears this study examined people who were on Effexor and showed some sort of therapeutic response while taking it. Then, some of them kept taking Effexor and some were assigned to take a placebo (of course, not knowing they had been switched to placebo). Those who kept taking Effexor were significantly less likely to experience a recurrence of their depressive symptoms than those on placebo. So, apparently, you should be on Effexor forever. As noted above, Keller (and in a similar study, Kornstein) say this means you should stay on your meds long-term because they prevent depression.

Withdrawal from Effexor: Just like Kornstein, Keller has absolutely misinterpreted the evidence. For example, in one small study, discontinuation of venlafaxine was associated with adverse events in 78% of patients, compared to 22% of patients who stopped taking a placebo. Another, larger study similarly found that Effexor was associated with higher rates of discontinuation symptoms than placebo. There are frequent reports to a national medication hotline in the UK of discontinuation symptoms when patients stop taking Effexor, and there are also case reports of shock-like sensations during venlafaxine withdrawal; for more on those, see this brief report in the British Medical Journal. In addition, it is now known that healthy (not depressed) volunteers at times experienced depression upon stopping paroxetine (Paxil), another antidepressant. Given the similar mechanism of action between Effexor and Paxil, one would expect a similar result for Effexor.

A quote from Dr. David Healy helps to summarize this fundamental manipulation of evidence by drug companies (and their allied “independent” academics):

It is clear now that the companies must have known that a certain proportion of these patients re-randomised to placebo, who subsequently complained of depressive and anxiety symptoms, will have been suffering from withdrawal problems. These withdrawal problems however appear to have been used as a basis for claiming that continued SSRI intake had a prophylactic effect against nervous and depressive problems. Based on such studies companies sought and have received licences to make these claims regarding prophylaxis.

In other words, because SSRIs and similar drugs (e.g., Effexor) have withdrawal symptoms that sometimes lead to depression, it looks like they are effective in preventing depression because people often get worse shortly after stopping their medication. The drug companies (Wyeth, in the case of Effexor) would like you to believe that this means antidepressants protect you from re-experiencing depression once you get better, that they are a good long-term treatment. A more accurate statement is that antidepressants protect you from their own substantial withdrawal symptoms until you stop taking them.

See No Withdrawal, Mention No Withdrawal: I do not say this as a personal affront to Keller, Kornstein, or any other academic who has made public statements regarding the long-term efficacy of antidepressants, but it seems odd that anyone, particularly anyone with research credentials, could ignore the solid evidence of substantial withdrawal symptoms associated with antidepressants. Many researchers apparently continue to toe the drug companies’ line that “it’s the depression, not the drug” when depression returns.

The Current Verdict on Long-Term Outcomes: Depression is indeed a nasty condition that often returns after it is successfully treated, or after it simply fades with the passage of time. So yes, we should treat it. However, let’s keep in mind that antidepressants are barely more effective than a placebo in the short term and, as we see here, can lead to problems in the long term: one can either stay on an antidepressant for years (perhaps indefinitely) or run a good chance of some heinous withdrawal effects. Psychotherapy is much better in the long run for depression, though it still fails to help many people. Like it or not, psychotherapy for depression has the best (though limited) long-term success rate, perhaps simply because it causes no withdrawal effects, or at least causes them at a far lower rate than meds.