Psychiatric medications, science, marketing, psychiatry in general, and occasionally clinical psychology. Questioning the role of key opinion leaders and the use of "science" to promote commercial ends rather than the needs of people with mental health concerns.
Tuesday, April 03, 2007
GSK, Key Opinion Leaders, and Used Cars
GSK -- More Documents. Oh boy. The good folks at Healthy Skepticism have posted a slew of documents pertaining to the infamous GSK Study 329, in which Paxil was described in a 2001 journal article as safe and effective, yet the data showed some rather heinous side effects occurring much more frequently in the Paxil group (such as significant aggression and suicidal behavior) than in the placebo group. The data also showed, at best, a small advantage for Paxil over placebo, an advantage that was more than outweighed by the significant incidence of serious side effects.
I've looked at a few of the newly posted documents and hope to post my take on them in the near future. In the meantime, I refer you to the documents on Healthy Skepticism's excellent website.
Hat tip to Philip Dawdy at Furious Seasons for beating me to the punch and linking to the above documents. Also beating me to the story, he mentioned that in the latest American Journal of Psychiatry, there is an editorial on adolescent bipolar disorder written by Boris Birmaher, one of the authors of the now-discredited article on GSK Study 329. Birmaher states that it is quite important for bipolar disorder to be increasingly recognized and treated in youth.
Dawdy essentially asks why we should trust Birmaher given his involvement in the scandalous GSK Study 329 (please read this link for background info). Despite several years passing since the publication of the study's results, not a single one of the "independent" academic authors has apologized or spoken out against the way the data were manipulated and misinterpreted.
Key Opinion Leaders or Used Car Salespeople? If academic psychiatry wants some credibility, then it is high time for the so-called opinion leaders to issue a mea culpa -- it's time to admit some fault. Here's my message to the big-name academics in psychiatry, which likely applies to the academic bigwigs in many other branches of medicine as well: Rather than pimping drugs in corporate press releases, taking cushy consulting gigs, and rubber-stamping your name on ghostwritten articles (based on data you have never actually seen) and infomercials labeled as "medical education," turn over a new leaf. Have you been used? Are you really performing science, or are you just a tool of a marketing division? What good is your research actually doing for patients? Does selectively reporting only positive data and burying the negative data really help people struggling with mental anguish?
How is it different for a car salesperson to hide the faulty mechanics of a 1986 Ford Tempo than for a researcher to hide safety and efficacy data on a medication? The same rule applies -- Tell to Sell. In other words, if it ain't going to help sell (the car or the drug), then keep your mouth shut.
Must...take..."broad spectrum psychotropic agent"...too outraged to function...
Back in the '70s, the clinical director (a psychiatrist) of the community mental health center where I worked required the entire clinical staff to watch two films. One was "Depression in the Anxious Patient" and the other was "Anxiety in the Depressed Patient." And, lo and behold, the solution for both was Tofranil. That was the beginning of my deep skepticism.
I really wouldn't read too much into this. It may just be a few rotten orchards...
you wrote "admit some fault".
That is some funny stuff there.
Thanks for the positive comments about the Healthy Skepticism site. For those who are really into the science, a vital link (to the study protocol) wasn't working on our Paxil Study 329 page, but is now fixed. Apologies. http://www.healthyskepticism.org/documents/protocol329.pdf
Dear CL,
Incredible posting! Thanks to all those who contributed information. I have way too many comments for a comment, and would love to take this off line, CP, but I will summarize as follows:
1. I participated as a statistician on many studies like this, from 1984 on. But not willingly. Don't make the assumption that all involved did so because they wanted to. I was a single mother, supporting myself and my child alone and working my way through my Ph.D. program by being a statistician. I did resign from my first PHARMA paper, but over time they WORE ME DOWN. I never did stats this uninformed, but not much better. WHY??? Because after I finish my document, it leaves me, I never see it again, and the statistics are corrupted. Yeah... I can prove it.
2. The results of ANY one study alone prove nothing... pedantic, I know :)
3. BACK IN THE DAY, when this study was run and analyzed, the methodology employed and the statistical techniques used were not at all out of line.
4. Now, these types of statistics are no longer used in any top medical journal. JAMA has strict statistical guidelines for RCT data presented in any of their publications. This ANOVA without covariates, but testing for investigator interactions, is not wrong. I would have done it differently, and now I would never do one like this, but overall, this is not incorrect. Few journals today do NOT force you to use all available data for longitudinal RCTs. JAMA in particular will only accept random coefficient longitudinal analyses for this type of data, WITH the appropriate covariates and controls for error. Also, clustering, imputation, and many types of propensity weighting are encouraged and required for publication... IS THIS BETTER??? DOES IT MAKE DATA MORE OR LESS TRANSPARENT???
5. I wonder if anyone has the RAW DATA. I will be glad, free of charge, to reanalyze the data using the appropriate statistical measures, and we can see what's up.
6. The NEW methods are WORSE!!!! Now, instead of using the observed data, ANY amount of RCT data is imputed... don't even get me started. The reader is never actually shown the real amount of missing data, due to fancy rewording.
7. The goal is publication of positive results. We work the data very hard to achieve this. It makes us statisticians very ILL.
8. The statistics were not OPTIMAL even given the time, AND the interpretation outreaches the findings.
9. Why don't you calculate the adjusted effect sizes...
10. The side effect profile is not shockingly bad.
11. OF COURSE PAPERS ARE GHOST WRITTEN... WOW.. WHAT A SHOCKER!! It used to be, when you were a site on a clinical trial such as this THEY WOULD NOT LET YOU KEEP OR ANALYZE ANY OF YOUR DATA OR THE OTHER SITES DATA. In fact, I have had a PHARMA CEO threaten to SUE me and report me to the FEC for NOT turning over data, on their drug that we ran independently.
12. I hate all the stuff that MDs get for promoting PHARMA; however, most MDs working in academic medicine are taking a HUGE pay cut in comparison to private practice or private hospital work. As such, they believe, because I hear this weekly, that they need to supplement their incomes with these advisory boards, panels, talks, consulting, legal testimony, etc. It's crap. They could live on their salaries... IF they didn't have HUGE loans, the size of house loans, from med school.
13. I challenge all those who have ACCESS to RAW, I mean the actual HAM-Ds over time and the group assignment and demos, to talk to CP... and on to me. I am willing to give any data a rigorous analysis.
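Since point 9 above asks about adjusted effect sizes, here is a minimal sketch of the simplest version of that calculation: a standardized mean difference (Cohen's d) computed on endpoint HAM-D scores. The numbers below are made up purely for illustration; they are NOT the Study 329 data.

```python
import math

# Illustrative endpoint HAM-D scores for two groups (made up, NOT Study 329 data)
paxil   = [10, 12, 8, 15, 9, 11, 14, 7, 13, 10]
placebo = [12, 14, 11, 16, 13, 12, 15, 10, 14, 13]

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    # Pooled standard deviation for two independent groups
    va = sum((x - mean(a)) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (len(b) - 1)
    return math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2))

def cohens_d(a, b):
    # Standardized mean difference: group mean difference / pooled SD
    # (lower HAM-D = less depressed, so treatment minus placebo is negative here;
    # we report placebo minus treatment for a positive "benefit" direction)
    return (mean(b) - mean(a)) / pooled_sd(a, b)

print(round(cohens_d(paxil, placebo), 2))  # prints 0.93 for these toy numbers
```

An "adjusted" effect size in a real reanalysis would further control for baseline severity and site, as the comment above suggests, but the raw data would be needed for that.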
Thanks again for allowing this rant on your comments.
Dr. BK
Cheryl,
Today's anxiety is tomorrow's depression and vice versa, and whatever is on patent at the time is the cure for both of them, conveniently enough!
Robyn,
Keep up the great work at Healthy Skepticism.
BK,
Excellent comments. My main problems with the Paxil study were
1. The whitewashing of suicidal behavior and aggression.
2. The effect sizes were small across the board, and treatment did not show a statistically significant impact on several of the DVs. The DVs that did show an effect did not seem to be the best depression measures.
3. This was highly ghosted -- did any of the authors see the raw data?
4. The conclusions in the article are not even close to an accurate reflection of the study data.
My earlier posts go into more detail, and I much appreciate your added commentary as well.
Excellent comments, y'all.