
Monday, April 28, 2008

Paxil, Lies, and the Lying Researchers Who Tell Them

A bombshell has just appeared in the International Journal of Risk & Safety in Medicine. The subject of the paper is Paxil study 329, which examined the effects of the antidepressant paroxetine in adolescents. The study findings were published in the Journal of the American Academy of Child and Adolescent Psychiatry in 2001. These new findings show that I was wrong about Paxil Study 329. You know, the one that I said overstated the efficacy of Paxil and understated its risks. The one that I claimed was ghostwritten. Turns out that due to legal action, several documents were made available that shed more light on the study. The authors (Jureidini, McHenry, and Mansfield) of the new investigation have a few enlightening points. Let's look at the claims and you can then see how wrong I was, for which I sincerely apologize. The story is actually worse than I had imagined. Here's what I said then:

Article [quote from the study publication]: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

I went on to bemoan how the authors took differences either based on arbitrary cutoff scores or from measures that assessed something other than depression to make illegitimate claims that paroxetine was effective. Based upon newly available data from the study, here's what happened.
  • The protocol for the study (i.e., the document laying out what was going to happen in the study) called for eight outcome measurements. To quote Jureidini et al: "There was no significant difference between the paroxetine and placebo groups on any of the eight pre-specified outcome measures." So I was wrong. Paxil was not better on 4 of 8 measures -- it was better on ZERO of eight measures. My sincerest apologies.
  • Another quote from Jureidini and friends: "Overall four of the eight negative outcome measures specified in the protocol were replaced with four positive ones, many other negative measures having been tested and rejected along the way."
Let's break this thing down for a minute. The authors planned to look eight different ways for Paxil to beat placebo. They went zero for eight. So, rather than declaring defeat, the authors then went digging to find some way in which Paxil was better than a placebo. Devising various cutoff scores on various measures on which victory could be declared, as well as examining individual items from various measures rather than entire rating scales, the authors were able to grasp and pull out a couple of small victories. In the published version of the paper, there is no hint that such data dredging occurred. Change the endpoints until you find one that works out, then declare victory.
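For the numerically inclined, here is a rough, hypothetical sketch of why this kind of endpoint-shopping is so misleading. Assume each outcome is tested independently at the conventional 0.05 threshold (a simplification, since real outcome measures are correlated); the chance of stumbling onto at least one spurious "win" climbs quickly with the number of outcomes tried. The 8 and 27 figures come from the Jureidini et al. findings quoted in this post.

```python
# Back-of-the-envelope illustration (not from the paper): probability of at
# least one spurious "significant" finding when many outcomes are tested,
# assuming independent tests at alpha = 0.05. Real outcomes are correlated,
# so treat this as an intuition pump rather than an exact calculation.
alpha = 0.05

for n_outcomes in (8, 27):   # 8 pre-specified outcomes; 27 known outcomes in all
    p_spurious = 1 - (1 - alpha) ** n_outcomes
    print(f"{n_outcomes:2d} outcomes tested -> "
          f"P(at least one chance 'win') = {p_spurious:.0%}")

# Roughly 34% for 8 outcomes and about 75% for 27 -- dredge long enough and
# "victories" will appear even for a drug that does nothing.
```

That is the whole trick: test enough endpoints and something will look good by chance alone.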

How About Safety?

I was incensed about the coverage of safety, particularly the magical writing that stated that a placebo can make you suicidal, but Paxil could not. I wrote:
It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” [i.e., suicidal] on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring.
Turns out I missed a couple things. Based on looking at an internal document and doing some calculations, Jureidini et al. found that serious adverse events were significantly more likely to occur in patients taking paroxetine (12%) vs. placebo (2%). Likewise, adverse events requiring hospitalization were significantly disadvantageous to paroxetine (6.5% vs. 0%). Severe nervous system side effects -- same story (18% vs. 4.6%). The authors of Study 329 did not conduct analyses to see whether the aforementioned side effects occurred more commonly on drug vs. placebo.

Funny how they had time to dredge through every conceivable efficacy outcome but couldn't see whether the difference in severe adverse events was statistically significant.
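For what it's worth, the test they skipped takes about five lines of code. Below is a minimal sketch, not anything from the paper itself: the group sizes (93 on paroxetine, 87 on placebo) are the published ones, but the event counts are my own reconstruction from the 12% vs. 2% serious adverse event rates reported by Jureidini et al., so treat them as approximations.

```python
# A minimal sketch (my reconstruction, not the study authors') of the kind of
# significance test that was never reported for the safety data. Counts are
# approximated from the percentages in Jureidini et al. (12% vs. 2% serious
# adverse events) and the published group sizes (93 paroxetine, 87 placebo).
from scipy.stats import fisher_exact

serious_paroxetine, n_paroxetine = 11, 93   # ~12%
serious_placebo, n_placebo = 2, 87          # ~2%

table = [
    [serious_paroxetine, n_paroxetine - serious_paroxetine],
    [serious_placebo, n_placebo - serious_placebo],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.3f}")
# With these assumed counts the p-value lands well under 0.05: the excess of
# serious adverse events on paroxetine is very unlikely to be chance alone.
```

The contrast between this five-line check and the exhaustive efficacy re-analyses speaks for itself.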

One quote from the discussion section of the paper sums it all up:
There was no significant efficacy difference between paroxetine and placebo on the two primary outcomes or six secondary outcomes in the original protocol. At least 19 additional outcomes were tested. Study 329 was positive on 4 of 27 known outcomes (15%). There was a significantly higher rate of SAEs with paroxetine than with placebo. Consequently, study 329 was negative for efficacy and positive for harm.
But the authors concluded infamously that "Paroxetine is generally well-tolerated and effective for major depression in adolescents."

Enter Ghostwriters. Documentary evidence indicates that the first draft of the study was ghostwritten. This leaves two roles for the so-called academic authors of this paper:
  • They were willing co-conspirators who committed scientific fraud.
  • They were dupes, who dishonestly represented that they had a major role in the analysis of data and writing of the study, when in fact GSK operatives were working behind the scenes to manufacture these dubious results.
Remember, this study was published in 2001, and there has still been no apology for the fictional portrayal of its results, wherein a drug that was ineffective and unsafe was portrayed as safe and effective. Physicians who saw the authorship line likely thought "Gee, this is a who's who among academic child psychiatrists -- I can trust that they provided some oversight to make sure GSK didn't twist the results." But they were wrong.

By the way, Martin Keller, the lead "independent academic" author of this tragedy of a study said, when asked about what it means to be a key opinion leader in psychiatry:
You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.
So is completely misrepresenting the data from a study "honorable"? Is Keller's opinion "worth considering?" As you know if you've read this blog for long, such behavior is, sadly, not a fluke occurrence. Many others who should be providing leadership are leading us on a race to the scientific and ethical bottom. What will Brown University, home of Keller, do? Universities don't seem to care at all about scientific fraud, provided that the perpetrators of bad science are bringing home the bacon.

Not one of the "key opinion leaders" who signed on as an author to this study has said, "Yep, I screwed up. I didn't see the data and I was a dupe." Nobody. Sure, I don't expect that every author of every publication can vouch for the data with 100% certainty. I understand that. But shouldn't the lead author be taking some accountability?

This is a Fluke (?) Some may be saying: "But this is just a fluke occurrence." Is it? I've seen much evidence that data are often selectively reported in a manner like this -- looks like (sadly) it takes a lawsuit for anyone to get a whiff of the bastardization of science that passes for research these days. If GSK had not been sued, nobody would have ever known that the published data from Study 329 were negative. A reasonably educated person could see that the writeup of the study was a real pimp job -- lots of selling the product based on flimsy evidence, but nobody would have seen the extent of the fraud. Apparently lawyers need to police scientists because scientists are incapable of playing by some very basic rules of science.

See for Yourself. Documents upon which the latest Jureidini et al. paper is based can be found here. Happy digging.

Wednesday, April 02, 2008

Does GSK Love Bad Publicity?

Bob Fiddaman, who runs a website that frequently discusses issues associated with the antidepressant drug paroxetine (Paxil/Seroxat), reports that he recently received an intimidating letter from GlaxoSmithKline's attorneys. Fiddaman posted a YouTube video in which he compared comments from GSK employee Alastair Benbow to statements and data gleaned from other sources. Benbow's statements in the video are frequently in disagreement with other sources. According to Fiddaman, GSK was upset because he used their logo without permission and because Benbow has allegedly experienced "serious distress by such unwarranted harassment."

From reading Fiddaman's post, I was unable to ascertain exactly what types of statements were made by GSK, though my impression is that getting the attorneys involved is a way to attempt to bully Fiddaman into silence. Unfortunately for GSK, such a tactic is a very stupid decision. Why? Well, because those of us who blog about the drug industry tend to keep a close eye on each other's work, and when we notice someone is feeling intimidated, we think it is newsworthy, so we write about it.

Philip Dawdy, author of the popular Furious Seasons blog, has opined in part that:
Basically, GSK used lawyers to intimidate an activist into shutting up...
Aubrey Blumsohn, of the excellent Scientific Misconduct Blog, wrote that:
Their questions are about science. Many of those critics are our patients. They question the quality, transparency and honesty of our science, and they do so with good reason. We ignore these patients and these questions at our peril. That such patients should be threatened is a disgrace.
Seroxat Secrets has kept a close eye on Seroxat issues and noted:
Rather than take damaged patients on in court, GlaxoSmithKline would do better to meet them and begin to try to understand why some people suffered Seroxat addiction and then undertake some meaningful research into the problem: there is something wrong with Seroxat and it causes problems for many patients.
In my opinion, it seems that GSK's goal was to get Fiddaman to shut up, so that his video would be seen by fewer people. But the funny thing is that by sending a letter through attorneys, GSK has aroused the ire of several people, resulting in the video getting much more attention. Not a good move.

As for the video, it can be seen here:

Wednesday, February 13, 2008

Key Opinion Leaders and Information Laundering: The Case of Paxil

Joseph Glenmullen’s testimony regarding GlaxoSmithKline’s burial of suicide data related to Paxil, which was discussed briefly across the blogosphere last week (Pharmalot, Furious Seasons, for example), was quite interesting in many respects.

One important aspect that needs public airing is how key opinion leaders in psychiatry were used by GSK to help allay fears that Paxil might induce suicidal thoughts and/or behaviors. When GSK issues statements indicating that Paxil is not linked to increased suicide risk, many people will think “Gee, of course GSK will say Paxil is not linked to suicide – it’s their product, after all.” But when purportedly independent academic researchers make the same claims regarding the alleged safety of Paxil, then people tend to think “Well, if these big-name researchers say it’s safe, then I suppose that there’s no risk.” But what if GSK simply hands these big-name researchers (aka “key opinion leaders") charts with data, and then the “independent” researchers go about stating that Paxil is safe? Mind you, the researchers in question don’t see the actual raw data – just tables handed to them from GSK – in other words, they simply take GSK’s word that the data is accurate. In essence, these researchers are serving as information conduits for GSK.

But wait a second, what if the charts and data tables handed to them by GSK are not an accurate representation of the raw data; what if GSK is lying? Well, of course, it turns out that GSK was lying in a big way for several years. This post will not go into depth on the suicide data, as it has been covered elsewhere (1, 2, 3 ) -- even GSK now admits that Paxil is related to an increased risk of suicidality.

My main question in this post is how we are supposed to trust our "key opinion leaders" in psychiatry if they are willing to simply look at data tables from GSK (and others), then make pronouncements regarding the benefits and safety of medication without ever examining the raw data. To put this in layman's terms, suppose an election occurs and candidate A wins 70% of votes while candidate B wins 30% of votes. As the vote counter, I then rig the results to say that candidate B won the election by a 55% to 45% margin. Suppose that the election certification board shows up later and I show them a spreadsheet that I created which backs up my 55% vote tally for candidate B. The election board is satisfied and walks away, not knowing that the vote counting was a sham. Obviously, the election board should have checked the ballots (the raw data) rather than simply examining the spreadsheet (the data table). In much the same way, these so-called thought leaders in psychiatry should have checked the raw data before issuing statements about Paxil.

What did these key opinion leaders say about Paxil? Some quotes from Glenmullen's testimony follow, based upon documents he obtained from GSK's archive. Here's what David Dunner (University of Washington) and Geoffrey Dunbar (of GSK) reportedly said at a conference:
Suicides and suicide attempts occurred less frequently with Paxil than with either placebo or active controls.
John Mann of Columbia University, regarding how data were collected:
We spent quite a bit of time gathering data from various drug companies and formulating it into the publication of the committee's findings
The committee he references is a committee from the American College of Neuropsychopharmacology, the same organization that issued a dubious report blessing the use of antidepressants in kids.

More from Mann, after being asked if he saw raw data or just data summarized in tables:
To be perfectly honest, I can't recall how much of the statistical raw data we received at the time that we put these numbers together...No, I think we all went through the tables of data that were provided at the time.
To use the analogy from above, the election board did not actually see the ballots. Stuart Montgomery is next. He was an author, along with Dunner and Dunbar, on a paper in the journal European Neuropsychopharmacology that stated:
Consistent reduction in suicides, attempted suicides, and suicidal thoughts, and protection against emergent suicidal thoughts suggest that Paxil has advantages in treating the potentially suicidal client.
Did Dunner see any raw data?
Dunner: I didn't see the raw data in the case report forms. I did see the tables. I work with the tables. The tables came before any draft, as I recall. We -- we created the paper from the tables.

Attorney: And -- and you never questioned, did you, or did you not question the validity of the data in Table 8?

Dunner: No
The above-mentioned paper that gave a clean slate to Paxil? According to a GSK document examined by Glenmullen, it was used by GSK to help convince physicians that they need not worry about Paxil inducing suicidality.

If you are an academic researcher, and you simply take data tables from drug companies then reproduce them in a report and/or publication, you are not doing research -- you are laundering information. People think that you have closely examined the data, but you have not, and you are thus doing the public a disservice.

I am unaware of any of the above researchers ever issuing a public apology. I can respect the context of the times; researchers may not have been aware of how pharmaceutical companies fool around with data in the early 90's. So if anyone wants to issue a mea culpa, I'd respect such an apology, but I have a feeling that not a single one of the above named individuals (nor this guy) will make an apology. Instead, it will be more business as usual, as these key opinion leaders, knowing who butters their bread, will continue to launder information and tell the public that everything will be fine and dandy if they just take their Paxil, Seroquel, or whatever hot drug of the moment is burning up the sales charts.

Friday, January 11, 2008

Zetia, Paxil, Medical Journals, Fraud, Etc.

I've been busy wiping up tears after the Frontline episode on medicating children with a wide variety of psychiatric medicines. Well worth watching. There are many thoughtful comments over at Furious Seasons. Feel free to add your voice. I may post on some of the highlights and lowlights of the Frontline piece later. Suffice it to say for now that it sure is depressing that the media keeps up the dunce journalism of linking decreased SSRI prescriptions to an increase in suicide as if this were some sort of reliable finding. Please read my earlier posts (1, 2) for details on this constantly repeated yet incorrect interpretation of events.

Here are a few other posts worth reading:
  • Is "symptom remission" a realistic or even desirable goal when treating depression? A very interesting battle of letters in the American Journal of Psychiatry receives excellent coverage at Furious Seasons.
  • Roy Poses at Health Care Renewal demolishes an op-ed piece by Robert Goldberg (from the infamous Drug Wonks site). Also check out an incredible tale of kickbacks to a physician from multiple companies. If your hunger for bizarre tales in healthcare is not yet satiated, read about CellCyte, a company whose main product is apparently fraud.
  • Are medical journals asleep at the wheel regarding problems with Zetia? Aubrey Blumsohn seems to think so, and I think he might have a point. It would not be the first time that a medical journal dropped the ball.
  • Paxil for life. Go ahead, try to quit. What, you can't quit? A large group of individuals suing GlaxoSmithKline believe they cannot quit Paxil without significant problems. Worry not, friends, GSK said: "We believe there is no merit in this litigation... Seroxat has benefited millions of people worldwide who have suffered from depression." Read more about Paxil/Seroxat's special benefits. H/T: PharmaGossip.
  • While you can catch up on the national presidential derby from many sources, there is little coverage of the race for American Psychiatric Association president. Daniel Carlat (who is popping up everywhere these days, which is a good thing) provides his take on the upcoming APA election. To nobody's surprise, some have noted an issue with one candidate's potential conflicts of interest.
  • Pfizer = McDonald's + Estee Lauder?

Friday, November 02, 2007

Paxil's "Advantages"

Paxil and its advantages. Yeah, that's what this blog is about. I just recently retitled the blog; it was formerly known as the Paxil Pimp's Paradise. What am I talking about? I received an email a couple of days ago, to which I will reply in this post. Don't worry. In sticking with my informal confidentiality policy, I'll not reveal the identity of the person or his/her employer. Here is the email:
Respected Sir / Madam,

I read your review on website, please if you can provide me the the reviews for Advantages of Paroxetine for depression & anxiety. It would be more interesting if it would consist of recent data i.e. in year 2007.

I expect you [sic] early reply

Thank You.
Yes, this person works for a drug company. That's all I will reveal about the author of the email. Here is my reply...

Dear Sir/Madam,

Please see the following posts for a detailed explanation of the "advantages" of paroxetine (Paxil/Seroxat) as discussed previously on my site...
  • Advantage 1: Increases suicide attempts in patients.
  • Advantage 2: Potentially increases obesity in patients, though research is preliminary.
  • Advantage 3: Increase in birth defects for children whose mothers were taking Paxil while pregnant.
  • Advantage 4: Excellent marketing, both for social phobia and depression. Excellent use of misleading writing in so-called scientific journals when writing about the "advantages" of Paxil, including using euphemisms for unpleasantries like suicide attempts.
  • Advantage 5: Major discontinuation symptoms. Take Paxil for a while, try to stop and let me know what "advantage" you notice. See references at bottom of this post for a start. There are many more studies documenting clearly the difficulties with paroxetine withdrawal.
  • Advantage 6: Those wonderful sexual side effects. And they might last for a long time even after one stops taking the medication.
I hope you find this information useful in your search for the advantages of Paxil. I am flattered by your interest in my opinion on this matter. For additional information on paroxetine, you may want to consult Martin Keller, who has a somewhat different take than myself, but who is a potential recipient of the prestigious Golden Goblet Award for his excellent scientific work on paroxetine. Karen Wagner, another Golden Goblet Nominee, may also be an excellent source. You may also wish to consult the following websites:
Should I be able to assist further, please let me know. There are other sources with which you will want to be familiar. You may also want to contact Philip Dawdy regarding the advantages of atypical antipsychotics, and please see Aubrey Blumsohn regarding the advantages of Actonel in treating osteoporosis. Also, I hope you contact Jack Friday, Ed Silverman, or Peter Rost to provide industry cheerleading. For any questions regarding the excellent Rozerem advertising campaign, please see John Mack. Last but absolutely not least, for any advice regarding how to outsource your scientists, fake your clinical trials, and abuse your employees, please take advice from the sage Pharma Giles.

Sincerely Yours,

Paxil Pee-Yimp #1

Monday, September 24, 2007

Shyness: Pathological or Normal Experience

SmithKline Beecham/GlaxoSmithKline, the psychiatric elites who devised the Diagnostic and Statistical Manual of Mental Disorders, and social phobia. An interesting combination. I read a fascinating op-ed in the New York Times by Christopher Lane, an English professor at Northwestern University, that discussed the growth of social phobia, especially among kids. Here are some highlights...

"How much credence should we give the diagnosis? Shyness is so common among American children that 42 percent exhibit it. And, according to one major study, the trait increases with age. By the time they reach college, up to 51 percent of men and 43 percent of women describe themselves as shy or introverted. Among graduate students, half of men and 48 percent of women do. Psychiatrists say that at least one in eight of these people needs medical attention.

"But do they? Many parents recognize that shyness varies greatly by situation, and research suggests it can be a benign condition. Just two weeks ago, a study sponsored by Britain’s Economic and Social Research Council reported that levels of the stress hormone cortisol are consistently lower in shy children than in their more extroverted peers. The discovery upends the common wisdom among psychiatrists that shyness causes youngsters extreme stress. Julie Turner-Cobb, the researcher at the University of Bath who led this study, told me the amounts of cortisol suggest that shyness in children “might not be such a bad thing.” [Not sure that this finding in itself is strongly suggestive of anything important, but it's interesting.]

Lane goes on to write about his perception that the diagnostic criteria are too loose for social phobia. Then, enter Paxil.

Then, having alerted the masses to their worrisome avoidance of public restrooms, the psychiatrists needed a remedy. Right on cue, GlaxoSmithKline, the maker of Paxil, declared in the late 1990s that its antidepressant could also treat social anxiety and, presumably, self-consciousness in restaurants. Nudged along by a public-awareness campaign (“Imagine Being Allergic to People”) that cost the drug maker more than $92 million in one year alone ($3 million more than Pfizer spent that year promoting Viagra), social anxiety quickly became the third most diagnosed mental illness in the nation, behind only depression and alcoholism. Studies put the total number of children affected at 15 percent — higher than the one in eight who psychiatrists had suggested were shy enough to need medical help.

This diagnosis was frequently irresponsible, and it also had human costs. After being prescribed Paxil or Zoloft for their shyness and public-speaking anxiety, a disturbingly large number of children, studies found, began to contemplate suicide and to suffer a host of other chronic side effects. This class of antidepressants, known as S.S.R.I.’s, had never been tested on children. Belatedly, the Food and Drug Administration agreed to require a “black box” warning on the drug label, cautioning doctors and parents that the drugs may be linked to suicide risk in young people.

You might think the specter of children on suicide watch from taking remedies for shyness would end any impulse to overprescribe them. Yet the tendency to use potent drugs to treat run-of-the-mill behaviors persists, and several psychiatrists have already started to challenge the F.D.A. warning on the dubious argument that fewer prescriptions are the reason we’re seeing a spike in suicides among teenagers. [Note that I tackled this recently.]

The op-ed closes with...

With so much else to worry about, psychiatry would be wise to give up its fixation on a childhood trait as ordinary as shyness.

To view the diagnostic criteria for social phobia, please go here. Here is a key symptom:

"The avoidance, anxious anticipation, or distress in the feared social or performance situation(s) interferes significantly with the person's normal routine, occupational (academic) functioning, or social activities or relationships, or there is marked distress about having the phobia."

The diagnosis depends to a large extent on what the doctor considers "interferes significantly" or "marked distress." When Paxil was being pushed, I'd be willing to bet that the reps were given scripts that helped to expand the boundaries of social anxiety disorder. When words like "significantly" or "marked" are used, one has to wonder what they mean. Who shapes physicians' judgment on these matters? To a notable extent, physician perceptions are influenced by commercials, er, continuing medical education and cheerleaders, er, drug reps.

A great piece from the New Republic in 1999 relevant to the expansion of social phobia can be found here. The points raised in the article ring true today. Let me be clear: I've seen real social phobia -- it exists and it is painful. But does it really affect 13% of Americans? I think not. I'm quite glad that Dr. Lane is stepping into the fray. I'm not sure I agree with him wholeheartedly (I'll have to read his upcoming book first), but I know that I'm glad someone is willing to bring these issues to the fore. At the very least, this is a subject worthy of debate and discussion, not blind acceptance of the current orthodoxy that social phobia (like everything else) is underdiagnosed and undertreated.

Wednesday, May 16, 2007

Blumsohn is Right On Target

Attacking the small problems whilst the larger villainy remains uninvestigated and unpunished -- that's what the British regulatory agencies seem to be doing, to paraphrase a recent post from Dr. Aubrey Blumsohn. To that, I say trudat. Here's a snippet...
The emphasis on decorum and status explains why the BBC had to conduct its own investigation [Link] of the worrying events surrounding clinical trials of the drug Seroxat and the company GlaxoSmithKline (GSK) as the medicines regulator (the MHRA) simply dragged its feet for years conducting an internal investigation of its own collusion with the deception. And to cap it all, key figures within the MHRA are previous employees of GSK.
Blumsohn points out that there sure is a lot of emphasis on style (i.e., oh, be nice and proper) as opposed to substance (um, this drug can be dangerous OR this professor is getting railroaded by the administration, etc.). Also, those in the highest roles of authority/"prestige" are off-limits -- they can do whatever they want without consequence. Do read his entire post -- it's well worth your time.

Friday, April 27, 2007

Paxil and Pimping


A preliminary ruling has indicated that GlaxoSmithKline should pay out $63.8 million to make amends for making misleading claims about its antidepressant Paxil (Seroxat) in kids. Of course, the company admits no wrongdoing.

I wonder if the authors who stamped their names on the main ghostwritten "scientific" publication for Paxil in kids should also be shelling out some cash. After all, it was the paper (chock full of HUGE misinterpretations of the study data) with their names on it that was doubtlessly used as part of the Paxil in kids marketing campaign. Were these "independent" academics innocent parties who were misled by the corporate meanies at GSK? Or, conversely, were these academics an integral part of the marketing team, and should they also be held accountable for making false claims? Like these claims made in the infamous GSK study 329 (from an earlier post)...

Article: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

Data on safety: Emotional lability occurred in 6 of 93 participants on paroxetine compared to 1 of 87 on placebo. Hostility occurred in 7 of 93 patients on paroxetine compared to 0 of 87 on placebo. In fact, on paroxetine, 7 patients were hospitalized due to adverse events, including 2 from emotional lability, 2 due to aggression, 2 with worsening depression, and 1 with manic-like symptoms. This compares to 1 patient who had lability in the placebo group, but apparently not to the point that it required hospitalization. A total of 10 people had serious psychiatric adverse events on paroxetine compared to one on placebo.

What exactly were emotional lability and hostility? To quote James McCafferty, a GSK employee who helped work on Study 329, “the term emotional lability was catch all term for ‘suicidal ideation and gestures’. The hostility term captures behavioral problems, most related to parental and school confrontations.” According to Dr. David Healy, who certainly has much inside knowledge of raw data and company documents (background here), hostility counted for “homicidal acts, homicidal ideation and aggressive events.”

Suicidality is now lability and overt aggression is now hostility. Sounds much nicer that way.

What counts as safe and effective on Planet Paxil passes as ineffective and dangerous to us Earthlings. So while I'm glad to see that it appears GSK will be shelling out some dough to compensate its consumers, the systemic problems of ghosted science and outright lying are not addressed. What is $64 million to GSK? Roughly a drop in the bucket. And the "key opinion leaders" who pimped Paxil escape unscathed.

Please read the fine post at Health Care Renewal about how academics might be brought to think twice before becoming drug pimps. I also think that giving out sarcastic awards to academics who participate in ghosted science is a good idea, so here goes...

Awards? I'd like to nominate Dr. Karen Wagner for a Golden Goblet, or perhaps a Krusty the Klown award for her pimping of Paxil as well as her stalwart work on the "SSRIs are great for kids" report authored on behalf of the American College of Neuropsychopharmacology. Her reports of clinical trials that overstated the efficacy and hid serious side effects of Zoloft also merit special mention. Overstating the efficacy of citalopram in kids was also nice work. While she was co-authoring the report saying that SSRIs are great for kids, she was also busy conducting trials of SSRIs for kids. Sound like a conflict of interest? Worry not, because she was also sitting on the conflict of interest committee at her own university!

Please feel free to add your nominations.

Oh, and as for the latest version of the "SSRIs are great for kids" report (in JAMA): I'll hopefully get to that sometime in the near future. Suffice it to say for now that it is highly misleading.

Tuesday, April 03, 2007

GSK, Key Opinion Leaders, and Used Cars


GSK -- More Documents. Oh boy. The good folks at Healthy Skepticism have posted a slew of documents pertaining to the infamous GSK Study 329, in which Paxil was described in a 2001 journal article as safe and effective, yet the data showed some rather heinous side effects occurring much more frequently in the Paxil group (such as significant aggression and suicidal behavior) than in the placebo group. The data also showed, at best, a small advantage for Paxil over placebo, an advantage that was more than outweighed by the significant incidence of serious side effects.

I've looked at a few of the newly posted documents and hope to post my take on them in the near future. In the meantime, I refer you to the documents on Healthy Skepticism's excellent website.

Hat tip to Philip Dawdy at Furious Seasons for beating me to the punch and linking to the above documents. Also beating me to the story, he mentioned that in the latest American Journal of Psychiatry, there is an editorial on adolescent bipolar disorder written by Boris Birmaher, one of the authors of the now discredited GSK Study 329 article. Birmaher states that it is quite important for bipolar disorder to be increasingly recognized and treated in youth.

Dawdy essentially asks why we should trust Birmaher given his involvement in the scandalous GSK Study 329 (please read this link for background info). Despite several years passing since the publication of the study's results, not a single one of the "independent" academic authors has apologized or spoken out against the way the data were manipulated and misinterpreted.

Key Opinion Leaders or Used Car Salespeople?
If academic psychiatry wants some credibility, then it is high time for the so-called opinion leaders to issue a mea culpa -- it's time to admit some fault. Here's my message to the big-name academics in psychiatry, which likely applies to the academic bigwigs in many other branches of medicine as well: Rather than pimping drugs in corporate press releases, taking cushy consulting gigs, and rubber stamping your name on ghostwritten articles (based on data you have never actually seen) and infomercials labeled as "medical education," turn over a new leaf. Have you been used? Are you really performing science or are you just a tool of a marketing division? What good is your research actually doing for patients? Does selectively reporting only positive data and burying the negative data really help people struggling with mental anguish?

How is a car salesperson hiding the faulty mechanics of a 1986 Ford Tempo any different from a researcher hiding safety and efficacy data on a medication? The same rule applies -- Tell to Sell. In other words, if it ain't going to help sell (the car or the drug), then keep your mouth shut.

Must...take..."broad spectrum psychotropic agent"...too outraged to function...

Friday, March 30, 2007

Link-A-Thon

Good posts that warrant your attention include:
  • In terms of efficacy, antidepressants + mood stabilizer = placebo + mood stabilizer in bipolar disorder according to a new study. I've not yet read it, so I'll refer you to Furious Seasons for an analysis (1, 2).
  • Oh, the University of Medicine and Dentistry of New Jersey (UMDNJ). It seems like nary a week goes by without some sort of serious ethical/corruption issue rearing its ugly head. For more on the situation, check out Health Care Renewal. Maybe that's why Lilly partnered with them to create a Center of Excellence for Psychiatry?
  • Antipsychotics linked to death in the elderly, again. Thanks to PharmaGossip for pointing this out.
  • Pharmalot wonders why the FDA is patting itself on the back for protecting public health at the same time that it is performing less enforcement.
  • John Mack basically says this blog is close to useless according to his rating scale. That hurts, Mr. Mack.
  • Seroxat/Paxil is an easy target, but that doesn't mean we should stop kicking it while it's down. Fiddaman scores points with a good post on how GSK covered up its data on the drug, while Seroxat Secrets has been ablaze with posts pointing out the patently false statements made by GSK employees about Seroxat/Paxil.

Friday, March 16, 2007

Blog Props

Here's a handful of props to some good posts written in the recent past. Enjoy.

  • The Daubert challenge is dissected at the AHRP blog. Any time a legal challenge is based on "expert consensus" in a field, one should be afraid, since we all know that expert consensus tends to dovetail oh-so-nicely with the Big Pharma party line.
  • AHRP also slams what appears to be the overprescription of psych meds for children in foster care.
  • Philip Dawdy has taken the lead on pointing out that the great state of Montana is accusing Lilly of doing some pretty bad things in its promotion of Zyprexa.
  • Brandweek NRX notes that a Lilly spokesperson named Marni Lemons is attempting to make "lemonade" out of the Zyprexa situation (an apt analogy), and that she has proven her acting mettle in a venue outside of her professional role.
  • Health Care Renewal takes a couple of shots at the University of California system for poor use of resources as well as corruption.
  • The Great Beaver Controversy continues -- starting with John Mack, then moving to PharmaGiles and PharmaGossip (here and here).
  • Pharmalot indicates that the Supreme Court may review the practice of Big Pharma paying off generic manufacturers to not introduce generic competitors to the market. Back story here and here.
  • Seroxat Secrets raises some questions about Alastair Benbow and the Big Lies about Paxil/Seroxat.

Friday, March 09, 2007

Lots of Good Reading

Many excellent posts have emerged this week that you simply must read.

  • My vote for informed rant of the week easily goes to Philip Dawdy, who covers many topics in his post about a man with schizophrenia who is running into problems with, to use Dawdy's words, the "Nanny State."
  • Regarding Deborah Powell, Dean of the University of Minnesota Medical School, taking a seat on the board of PepsiAmericas, please check out Health Care Renewal and the Periodic Table. In fact, Health Care Renewal has been bubbling with a slew of good stuff this week.
  • Pharma Giles is now, at least in my world, the undisputed king of acronyms! Check out the TAMPONED initiative and what kind of TARTS make such a program possible. Under the TARTS link, make sure to read the section about SSRIs.
  • Bipolar Blast has a good post regarding problems related to withdrawing from benzodiazepines.
  • Speaking of drug withdrawal, Fiddaman has an excellent picture that summarizes the problem (well, one of many problems) with Paxil/Seroxat quite nicely.
  • On the same topic, Seroxat Secrets has also been going to town about Seroxat's problems.
  • In addition, Depression Introspection wonders if there's something fishy about the Mood Disorders Questionnaire.
  • And as always, check out PharmaGossip and Pharmalot to catch the latest breaking news on almost everything of note in the Pharma world.

Friday, February 02, 2007

Then What Happens...

Recently, a reader left an excellent comment to which I responded. The content of these comments seemed like it might be of general interest, so it is reposted below, with very slight trimming of both the reader's comment and my response. All emphases are added in this version...

nab said...

The next logical step from Healy's accurate description:

For all of those patients who have been betrayed - directly or indirectly - or any of us who are on the "outside" should/do not really care whether this was complicity or whether many were (as I think) hoodwinked.

Accountability must be demanded from the entire system (academic - research - clinical): we don't care how you do it, you just need to not have these results. If you are a clinician and you get fooled, then I don't feel you to be gaining personally, but I would like to see the scrutiny that people like you are demanding, or else, how do we know this sort of thing won't keep happening.

That trust is fragile, and that is why (a) I truly appreciate the outrage we get from you - CL Psych and the likes of Healy, Avorn, etc., and (b) am completely frustrated and disturbed by the general lack of such a response from the mainstream medical community.

For a profession that demands - and is mostly granted - autonomy in its decision-making, I don't really care - to a large extent - how or why bad knowledge was propagated. Obviously those directly responsible should individually be held to account, but at the institutional or professional level, these are examples of a systemic failure.

I feel that many doctors reach Healy's conclusion, shrug their shoulders, and say, "damn, those bastards fooled us, but I didn't do anything personally wrong so whatayagonnado?" Why do we not hear more outrage? Where is the outrage about Vioxx for example?

Regardless, I know Healy means well, but it is quite an indictment of the entire system - not just the pharmaco. or specific academics and clinicians involved. I feel that organized medicine (and doctors generally), love to point the finger at the insurance companies, the pharmaceutical companies, hospitals, anyone besides themselves.

Unfortunately, this causes them to ignore the obvious fact that the practicing doctors on the frontline and the honest and honorable academic researchers have the most power and could be the most effective at remedying these problems. And they ignore this at their own peril because if they don't demand accountability then ultimately someone has to.

My reply
More scrutiny from practitioners would indeed be a good step. I suspect that a large majority of clinicians have no idea of the degree to which the system has been corrupted.

How can anyone practice Evidence Based Medicine when the Evidence Base is full of half-reported data that is often sold like a used car in such forums as industry-sponsored consensus guidelines, continuing medical education, journal supplements, doctor dinners, and conferences which resemble Disneyland more than a scientific learning environment?

My thoughts: Med schools need to step up their ethics training. Likewise, because the vast majority of physicians are not trained in research design or statistics during med school, they are not well equipped to sniff out the BS in studies.

I believe, perhaps naively, that a class in research/stats and a class that details the numerous examples of what can go wrong when industry and science mix would really awaken med students. That would allow them to have a chance at bucking the system. Reforming the infomercial continuing medical education system would also be a nice touch.

There are surely other ways to go about this, but those are my initial thoughts.
Feel free to add your two cents. Someone has to come up with some answers.

Thursday, February 01, 2007

Seroxat (Paxil) Can Make People Suicidal?

… not by the hair of my chinny chin chin, sayeth Martin Keller, or so it would appear.

Not sure how I neglected to mention this in a prior post, but here goes: Keller (on video) was asked about a study which showed placebo was superior to paroxetine (Seroxat or Paxil). In response, Keller, while smiling, reached out and stroked the reporter under the chin. I don’t really know what that gesture meant, but it struck me (and some individuals who have emailed me about it) as pretty strange.

Wednesday, January 31, 2007

Journal Editor Unapologetic Over Paxil/Seroxat Article

Dr. Mina Duncan is the editor of the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), which, as she noted on Panorama, is very widely read among child and adolescent psychiatrists. So, in this prestigious journal, one would expect high editorial standards.

Let’s go through what happened with study 329, which turned into a publication in JAACAP in July 2001 upon which Dr. Martin Keller (see here) was the lead author. The study was submitted (after the Journal of the American Medical Association had rejected it – good for them) to JAACAP, and Panorama nicely documented a couple of the reviewer comments. They included

Overall, results do not clearly indicate efficacy – authors need to clearly note this.

The relatively high rate of serious adverse effects was not addressed in the discussion.

Given the high placebo response rate… are (these drugs) an acceptable first-line therapy for depressed teenagers?

Remember that journals receive manuscripts, and then send them to be reviewed by researchers in the field as to their quality. These reviews are generally taken very seriously when considering what changes should be made to a paper and whether the manuscript will be published.

Yet, the paper was not only accepted and published in JAACAP, but the editor seems to have ignored the suggestions of the individuals who reviewed the paper. These issues mentioned in the review were obviously not addressed – feel free to read the actual journal article and you can see that the efficacy of paroxetine was pimped well beyond what the data showed and the safety data were also painted to show a picture contrary to the study’s own data. Again, please feel free to read my earlier post regarding the study’s data versus how such data were reported and interpreted in the journal article.

Read this carefully – we all make mistakes. When someone points out that a mistake was made, it is natural to become defensive – that’s okay. However, several years after the fact, one should be able to admit fault and learn from one’s errors; at least that is my opinion.

Dr. Duncan was asked if she regretted allowing Keller et al.’s Paxil/Seroxat study to be published – her response was less than I hoped for:

I don’t have any regrets about publishing [the study] at all – it generated all sorts of useful discussion which is the purpose of a scholarly journal.

Let’s follow this train of logic. If a study is either particularly poorly done or misinterprets its own data to a large extent, then there will be an outcry of researchers and critics who will point out the numerous flaws that occurred. This could, of course, be interpreted as “useful discussion,” which I suppose is what Duncan meant happened in the case of this article. After all, there were several letters to the editor that expressed their frustration with the study and how Keller et al interpreted their data. So, according to my interpretation of Duncan’s logic, we should publish studies with as many flaws as possible so that we can “usefully discuss” them.

Of further interest, Jon Jureidini and Anne Tonkin had a letter published in JAACAP in May 2003. In their letter they stated

…a study that did not show significant improvement on either of two primary outcome measures is reported as demonstrating efficacy (p. 514).

The tone of their letter was perhaps a bit catty as it discussed how Keller et al seem to have spun their interpretation well out of line with the actual study data. I can, however, hardly blame them for their snippiness. Another nugget from their letter:

We believe that the Keller et al. study shows evidence of distorted and unbalanced reporting that seems to have evaded the scrutiny of your editorial process (p. 514).
Thank you to Jureidini and Tonkin for their contribution to the “useful discussion” – indeed, their comments were likely the most useful of all that were contributed to the discussion. I give credit to Duncan for publishing their letter. I would be more impressed if she was willing to state that there were some problems with the editorial process in the case of this article, but I suppose you can’t win them all.

Disclaimer: I watched Panorama and took copious notes. I believe all quotes are accurate but please let me know if you think I transcribed something incorrectly.

Update (1/29/08): My apologies. I should have typed Mina Dulcan, not Mina Duncan. Sorry for the misspellings.

Tuesday, January 30, 2007

Keller, Bad Science, and Seroxat/Paxil

I will focus on Dr. Martin Keller and some seriously poor science in this post. Panorama did an excellent job of profiling Keller’s role in helping to promote paroxetine (known as Paxil in the USA and Seroxat in the UK). Note this is a lengthy post and that the bold section headings should help you find your way.

Who is Martin Keller? He is chair of psychiatry at Brown University. According to his curriculum vitae, he has over 300 scientific publications. People take his opinions seriously. He is what is known as a key opinion leader or thought leader, both in academia and in the drug industry. What does that mean? Well, on videotape (see the Panorama episode from 1-29-07), Keller said:

You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.

Keller and Study 329: GlaxoSmithKline conducted a study, numbered 329, in which it examined the efficacy and safety of paroxetine versus placebo in the treatment of adolescent depression. Keller was the lead author on the article reporting the results of this study (Journal of the American Academy of Child and Adolescent Psychiatry, 2001, 762-772).

Text of Article vs. the Actual Data: We’re going to now examine what the text of the article said versus what the data from the study said.

Article: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

Data on safety: Emotional lability occurred in 6 of 93 participants on paroxetine compared to 1 of 87 on placebo. Hostility occurred in 7 of 93 patients on paroxetine compared to 0 of 87 on placebo. In fact, on paroxetine, 7 patients were hospitalized due to adverse events, including 2 from emotional lability, 2 due to aggression, 2 with worsening depression, and 1 with manic-like symptoms. This compares to 1 patient who had lability in the placebo group, but apparently not to the point that it required hospitalization. A total of 10 people had serious psychiatric adverse events on paroxetine compared to one on placebo.

What exactly were emotional lability and hostility? To quote James McCafferty, a GSK employee who helped work on Study 329, “the term emotional lability was catch all term for ‘suicidal ideation and gestures’. The hostility term captures behavioral problems, most related to parental and school confrontations.” According to Dr. David Healy, who certainly has much inside knowledge of raw data and company documents (background here), hostility counted for “homicidal acts, homicidal ideation and aggressive events.”

Suicidality is now lability and overt aggression is now hostility. Sounds much nicer that way.

Conveniently defining depression: On page 770 of the study report, the authors opined that “…our study demonstrates that treatment with paroxetine results in clinically relevant improvement in depression scores.” The only measures that showed an advantage for paroxetine were either based on some arbitrary cutoff (and the researchers could of course opt for whatever cutoff yielded the results they wanted) or were not actually valid measures of depression. Specifically, the significant results came either from a global measure of improvement, which paints an optimistic view of treatment outcome, or from cherry-picked single items on longer questionnaires.

Also, think about the following for a moment. A single question on any questionnaire or interview is obviously not going to broadly cover symptoms of depression. A single question cannot cover the many facets of depression. Implying that a single question on an interview which shows an advantage for paroxetine shows that paroxetine is superior in treating depression is utterly invalid. Such logic is akin to finding that a patient with the flu reports coughing less often on a medication compared to placebo, so the medication is then declared superior to placebo for managing flu despite the medication not working better on any of the many other symptoms that comprise influenza.

Whitewashing safety data: It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring. David Healy has also discussed this fallacious assumption that drugs cannot cause harm.

Did Keller Know the Study Data? I’ll paraphrase briefly from Panorama, which had a video of Keller discussing the study and his role in examining and analyzing its data. He said he had reviewed data analytic tables, but then he mentioned soon after that on some printouts there were “item numbers and variable numbers and they don’t even have words on them – I tend not to look at those. I do better with words than symbols. [emphasis mine].”

Ghosted: According to Panorama (and documents I’ve obtained), the paper was written by a ghostwriter. Keller’s response to the ghostwriter after he saw the paper? “You did a superb job with this. Thank you very much. It is excellent. Enclosed are some rather minor changes from me, Neal, and Mike. [emphasis mine].” And let’s remember that Keller apparently did not wish to bother with looking at numbers. It would also appear that he did not want to bother much with the words based upon those numbers.

Third Party Technique: This is a tried and true trick – get several leading academics to stamp their names on a study manuscript and suddenly it appears like the study was closely supervised in every aspect, from data collection to data analysis, to study writeup, by independent academics. Thus, it is not GlaxoSmithKline telling you that their product is great, it is “independent researchers” from such bastions of academia as Brown University, the University of Pittsburgh, the University of Texas Southwestern Medical Center, and the University of Texas Medical Branch at Galveston who are stamping approval of the product. More on this in future posts.

Keller’s Background… It is relatively well-known that Keller makes much money from his consulting and research arrangements with drug companies. In fact, several years ago, it was documented that Keller pulled in over $500,000 in a single year through these lucrative deals. When looking at how he stuck his name on a study he did not write, endorsing conclusions that were clearly far from the actual study data, can one seriously believe that Keller operated as an independent researcher? Can you believe that this is an isolated incident?

See, for example, Keller’s involvement in a study examining the effects of Risperdal (risperidone) for the treatment of depression. This study was presented a number of times, and he never appeared as an author of any of the presentations. Yet when the study was published, his name appeared as an author. The real kicker was that he allegedly helped to design the study, according to the published article. If he had played a major role in the study, he would have been acknowledged earlier (via being listed as a presentation author), so he apparently helped design the study after it was completed, which is obviously a major feat! The whole story is here. Why put his name on the paper? So that readers would believe more strongly in the study due to his big name status.

In addition, Keller wrote about how Effexor reduces episodes of depression in the long term, though he clearly misinterpreted the study’s findings. To be fair, many other researchers have made the same mistake in believing that SSRIs reduce depression. To quote an earlier post:

In other words, because SSRIs and similar drugs (e.g., Effexor) have withdrawal symptoms that sometimes lead to depression, it looks like they are effective in preventing depression because people often get worse shortly after stopping their medication. The drug companies (Wyeth, in the case of Effexor) would like you to believe that this means antidepressants protect you from re-experiencing depression once you get better, that they are a good long-term treatment. A more accurate statement is that antidepressants protect you from their own substantial withdrawal symptoms until you stop taking them.

Again, Keller is way off from the study data.

Keller on Camera: Keller’s response to being asked about the increased suicidality among participants taking paroxetine in Study 329 was interesting:

None of these attempts led to suicide and very few of them led to hospitalization.

Well then I suppose a huge increase in suicidal thoughts and gestures is okay, then? This is the commentary of an “opinion leader” – if statements such as the above shape opinions among practicing psychiatrists, then we really are in trouble.

Next: Well, consider this post just the start regarding Paxil/Seroxat. The way the data were pimped by GSK merits more discussion, as does the role of allegedly detached academics in this debacle.

Paxil/Seroxat: WOW

I just completed watching the Panorama investigation that aired yesterday on BBC. I ever so highly recommend it. You can check it out here. This lifts the curtain on the usual set of lies, and does an excellent job of exposing how allegedly independent researchers served as puppets for GlaxoSmithKline. I will write more about it soon. Nice to see some good investigative journalism.