Monday, April 28, 2008

Paxil, Lies, and the Lying Researchers Who Tell Them

A bombshell has just appeared in the International Journal of Risk & Safety in Medicine. The subject of the paper is Paxil Study 329, which examined the effects of the antidepressant paroxetine in adolescents. The study findings were published in the Journal of the American Academy of Child and Adolescent Psychiatry in 2001. These new findings show that I was wrong about Paxil Study 329. You know, the one that I said overstated the efficacy of Paxil and understated its risks. The one that I claimed was ghostwritten. Turns out that, due to legal action, several documents were made available that shed more light on the study. The authors of the new investigation (Jureidini, McHenry, and Mansfield) have a few enlightening points. Let's look at the claims and you can then see how wrong I was, for which I sincerely apologize. The story is actually worse than I had imagined. Here's what I said then:

Article [quote from the study publication]: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (Hamilton Rating Scale for Depression [HAM-D] mean change and HAM-D final score < 8 and/or improved by 50% or more), paroxetine was not statistically superior to placebo. On four of eight measures, paroxetine was superior to placebo. Note, however, that its superiority was always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

I went on to bemoan how the authors took differences either based on arbitrary cutoff scores or from measures that assessed something other than depression to make illegitimate claims that paroxetine was effective. Based upon newly available data from the study, here's what happened.
  • The protocol for the study (i.e., the document laying out what was going to happen in the study) called for eight outcome measurements. To quote Jureidini et al: "There was no significant difference between the paroxetine and placebo groups on any of the eight pre-specified outcome measures." So I was wrong. Paxil was not better on 4 of 8 measures -- it was better on ZERO of eight measures. My sincerest apologies.
  • Another quote from Jureidini and friends: "Overall four of the eight negative outcome measures specified in the protocol were replaced with four positive ones, many other negative measures having been tested and rejected along the way."
Let's break this thing down for a minute. The authors planned to look eight different ways for Paxil to beat placebo. They went zero for eight. So, rather than declaring defeat, the authors went digging for some way in which Paxil was better than a placebo. By devising various cutoff scores on various measures on which victory could be declared, and by examining individual items from various measures rather than entire rating scales, the authors managed to pull out a couple of small victories. In the published version of the paper, there is no hint that such data dredging occurred. Change the endpoints until you find one that works out, then declare victory.
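Just to show how reliable that recipe is, here's a toy simulation. It's entirely my own illustration -- the arm size is assumed and nothing here comes from the actual Study 329 data -- but it captures the logic: when a drug is truly no better than placebo, testing 27 outcomes (the tally Jureidini et al. arrive at below) will usually hand you a "win" by chance alone.

```python
# Toy simulation (my own illustration, not Study 329's data): if a drug is
# truly no better than placebo, testing enough outcomes will still produce
# "significant" wins by chance alone.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_sims = 2000        # simulated no-effect trials
n_per_arm = 90       # assumed arm size, for illustration only
n_outcomes = 27      # outcomes eventually tested, per Jureidini et al. (quoted below)

false_wins = 0
for _ in range(n_sims):
    p_values = []
    for _ in range(n_outcomes):
        # Both arms drawn from the SAME distribution: no true drug effect.
        drug = rng.normal(0.0, 1.0, n_per_arm)
        placebo = rng.normal(0.0, 1.0, n_per_arm)
        p_values.append(ttest_ind(drug, placebo).pvalue)
    if min(p_values) < 0.05:
        false_wins += 1

print(f"No-effect trials with >= 1 'significant' outcome: {false_wins / n_sims:.0%}")
```

With 27 independent looks, you expect roughly 1 - 0.95^27, or about 75%, of truly null trials to produce at least one "significant" outcome. Real outcome measures are correlated, which softens the number somewhat, but this arithmetic is exactly why the pre-specified endpoints are the ones that count.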

How About Safety?

I was incensed about the coverage of safety, particularly the magical writing according to which a placebo could make you suicidal but Paxil could not. I wrote:
It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” [i.e., suicidal] on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring.
Turns out I missed a couple of things. Based on an internal document and some calculations of their own, Jureidini et al. found that serious adverse events were significantly more likely to occur in patients taking paroxetine (12%) than placebo (2%). Likewise, adverse events requiring hospitalization were significantly more frequent on paroxetine (6.5% vs. 0%). Severe nervous system side effects -- same story (18% vs. 4.6%). The authors of Study 329 never ran analyses to see whether these side effects occurred more commonly on drug than on placebo.

Funny how they had time to dredge through every conceivable efficacy outcome but never got around to checking whether the difference in severe adverse events was statistically significant.
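For what it's worth, the check they skipped takes a few lines. Here's a minimal sketch: the arm sizes (93 paroxetine, 87 placebo) are my assumption for illustration, while the rates are the ones Jureidini et al. report.

```python
# A minimal sketch of the significance check the Study 329 authors never ran.
# The arm sizes are my assumption for illustration; the rates are the ones
# Jureidini et al. report.
from scipy.stats import fisher_exact

n_parox, n_placebo = 93, 87  # assumed arm sizes
comparisons = {
    "serious adverse events": (0.12, 0.02),
    "adverse events requiring hospitalization": (0.065, 0.0),
    "severe nervous system side effects": (0.18, 0.046),
}

for label, (rate_drug, rate_pbo) in comparisons.items():
    drug_events = round(rate_drug * n_parox)
    pbo_events = round(rate_pbo * n_placebo)
    table = [
        [drug_events, n_parox - drug_events],
        [pbo_events, n_placebo - pbo_events],
    ]
    _, p = fisher_exact(table)
    print(f"{label}: p = {p:.3f}")
```

A few lines of arithmetic that somehow never made it into the published paper.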

One quote from the discussion section of the paper sums it all up:
There was no significant efficacy difference between paroxetine and placebo on the two primary outcomes or six secondary outcomes in the original protocol. At least 19 additional outcomes were tested. Study 329 was positive on 4 of 27 known outcomes (15%). There was a significantly higher rate of SAEs with paroxetine than with placebo. Consequently, study 329 was negative for efficacy and positive for harm.
But the authors concluded infamously that "Paroxetine is generally well-tolerated and effective for major depression in adolescents."

Enter Ghostwriters. Documentary evidence indicates that the first draft of the study was ghostwritten. This leaves two possible roles for the so-called academic authors of this paper:
  • They were willing co-conspirators who committed scientific fraud.
  • They were dupes, who dishonestly represented that they had a major role in the analysis of data and writing of the study, when in fact GSK operatives were working behind the scenes to manufacture these dubious results.
Remember, this study was published in 2001, and there has still been no apology for the fictional portrayal of its results, wherein a drug that was ineffective and unsafe was portrayed as safe and effective. Physicians who saw the authorship line likely thought "Gee, this is a who's who among academic child psychiatrists -- I can trust that they provided some oversight to make sure GSK didn't twist the results." But they were wrong.

By the way, Martin Keller, the lead "independent academic" author of this tragedy of a study, said the following when asked what it means to be a key opinion leader in psychiatry:
You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.
So is completely misrepresenting the data from a study "honorable"? Is Keller's opinion "worth considering?" As you know if you've read this blog for long, such behavior is, sadly, not a fluke occurrence. Many others who should be providing leadership are leading us on a race to the scientific and ethical bottom. What will Brown University, home of Keller, do? Universities don't seem to care at all about scientific fraud, provided that the perpetrators of bad science are bringing home the bacon.

Not one of the "key opinion leaders" who signed on as an author to this study has said, "Yep, I screwed up. I didn't see the data and I was a dupe." Nobody. Sure, I don't expect that every author of every publication can vouch for the data with 100% certainty. I understand that. But shouldn't the lead author be taking some accountability?

This is a Fluke (?) Some may be saying: "But this is just a fluke occurrence." Is it? I've seen much evidence that data are often selectively reported in this manner -- it looks like (sadly) it takes a lawsuit for anyone to get a whiff of the bastardization of science that passes for research these days. If GSK had not been sued, nobody would ever have known that the published data from Study 329 were negative. A reasonably educated person could see that the writeup of the study was a real pimp job -- lots of selling the product based on flimsy evidence -- but nobody would have seen the extent of the fraud. Apparently lawyers need to police scientists, because scientists seem incapable of playing by some very basic rules of science.

See for Yourself. The documents upon which the latest Jureidini et al. paper is based can be found here. Happy digging.

Thursday, April 24, 2008

Military Analysts and Key Opinion Leaders

As noted at the Carlat Psychiatry Blog, it sure is strange to see the similarities between "independent" military analysts and "independent" scientists/key opinion leaders. What do they have in common? Both pass along talking points from an outside source, lending these marketing messages an air of independence and credibility. Check out the New York Times for the story on the military analysts. Then check out this story on "independent scientists" and information laundering for just one of many, many examples of scientists selling out.

And see the clip below for a few examples of military analysts bravely repeating talking points in the name of their Pentagon Masters, er, defense contractors, er, Patriotism and the Defense of Freedom.

Monday, April 21, 2008

Don't Believe the CPA's Hype About Not Believing the Hype

The Canadian Psychiatric Association has cast its lot with the SSRIs. So sayeth Dr. Patrick White, CPA President, in an "Important Message to Physicians." The title of the article reads, "Don't believe the media hype surrounding the inefficacy of SSRIs," and it's a doozy. It critiques the Kirsch et al. study in PLoS Medicine, which concluded that antidepressant benefits over placebo were generally small. Quotes from his Important Message follow, along with my commentary.
It is unfortunate the media coverage obscured the fact that the article does reinforce that antidepressants are in fact effective for persons with severe depression.
OK, fair enough. I'm too lazy to track down all the media coverage of the study, but my recollection is that a few outlets did mention that Kirsch et al. found antidepressants work better than placebo for severe depression. But for mild and moderate depression, what was the score? That goes unmentioned in the CPA piece, but as you can see in my prior post, the meds were not looking good for mild depression.
The review combines data from all submissions received by the (US) FDA before drugs are introduced into the US market. Authors do not discriminate between studies which include doses (in dose-finding studies) below the anticipated therapeutic threshold, and studies with more conventional dosing levels. Combining studies in this manner ignores elementary pharmacology, and reduces the ability to discriminate between the active ingredient and the placebo. This criticism has also been voiced about their previous publications.
This critique might hold water, except that another meta-analysis, published in the New England Journal of Medicine with a bit of media splash (so I assume Dr. White might have read it), noted that even when only approved doses were included, the advantage of antidepressants over placebo was small. So this critique is lame at best.
The main thesis of the article—that there are many failed clinical trials of antidepressants in the FDA database that are not reported in publications—has been known for many years. Such trials are conducted for a variety of regulatory reasons, including dose finding. To show whether antidepressants work in clinical practice requires different studies, which are not included in this article.
I like this. It could be read as: "Sure, pharma doesn't publish a lot of their results -- who cares?" Going back to the Turner et al. study from NEJM: the average effect size for antidepressants vs. placebo was d = .15 (meager) in unpublished studies and d = .37 (small) in published studies. Selective publication is not just a matter of dose-finding; it keeps negative information buried, a point not mentioned anywhere in the Important Message. The trials with very small effects are, unsurprisingly, the ones that went unpublished.
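To make the arithmetic concrete, here's a back-of-the-envelope sketch. The d values are the ones cited above; the split between published and unpublished trials is my assumption, so treat the exact inflation figure as illustrative.

```python
# Back-of-the-envelope sketch of what burying trials does to the apparent
# effect size. The d values are the ones cited above from Turner et al.;
# the study counts are my assumption, for illustration only.
d_published, d_unpublished = 0.37, 0.15
n_published, n_unpublished = 50, 24  # assumed counts

# Simple study-weighted average over the whole evidence base:
d_all = (n_published * d_published + n_unpublished * d_unpublished) / (
    n_published + n_unpublished
)

print(f"Effect size a journal reader sees:    d = {d_published:.2f}")
print(f"Effect size across ALL trials:        d = {d_all:.2f}")
print(f"Inflation from selective publication: {d_published / d_all - 1:.0%}")
```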
As we know in clinical practice, a substantial number of people do not respond to an antidepressant either:
  • at the first dosage they are given, or
  • within the usual six-week time frame of many of these studies, or indeed
  • to the first antidepressant prescribed.
Therefore, testing any single antidepressant for a short space of time will bias the results towards diminished clinical efficacy. This point, highlighted by many of those who have commented on the report, has been ignored by the authors and any subsequent media coverage.
Wait a second -- so we should just assume that drugs work better in the long term than they do in short-term studies? No data are cited to support this point. Actually, not a single citation is offered in the entire piece -- apparently this Message was too Important to bother with data. The STAR*D research on antidepressants in clinical practice did not exactly give cause for celebration regarding antidepressant efficacy: if someone did not respond to an initial course of medication, switching or augmenting was not particularly helpful for most.
Our national mortality from suicide is greater than that from motor vehicle accidents and HIV combined.
Sad. And where's the evidence suggesting that antidepressants reduce suicide more than a placebo does? Could it be that providing any sort of intervention that matches a placebo (i.e., is credible, delivered by a caring professional, etc.) might possibly reduce suicide? And isn't it also possible that our treatments don't do much to reduce suicide? Intervention in a time of crisis may save lives. But overall, I have not been convinced that we are saving lives in droves through the massive prescription of antidepressants. I know, that is heresy, but if we are going to claim that we are White Knights riding in to save lives, we should have a little more solid data on our side.

Thanks to the anonymous reader who passed along this Important Message, which explains that antidepressants really work tremendously well and that any research daring to challenge this point is, by fiat, invalid.

Friday, April 18, 2008

Key Opinion Leaders, Osteoporosis, Vioxx, Psychiatry, Science, and Patients

Remember Richard Eastell? To summarize briefly: he is a professor at Sheffield University who was lead author on a publication that reported positive results for the osteoporosis drug Actonel. One problem: the data did not actually provide good news for Actonel. In a key graph in the published paper, 40% of the patient data were missing. Now that's an interesting form of science: just eliminate the pesky 40% of the data that don't go along with your hypothesis and POOF!, you get exactly the results you are looking for. A thorough writeup of the situation can be found in Jennifer Washburn's excellent piece in Slate. Making the plot more interesting, Eastell did not have the raw data; Procter & Gamble's (Actonel's sponsor) statisticians were in charge of the analysis. Hence the missing 40% of the data, which helped to cast Actonel in a more positive light. Read more on the topic here. When all the data are included, the analysis does not support Actonel's marketing points. Eastell signed off on the original (misleading) paper saying that he had seen all of the data, which was, of course, not true.
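For a sense of how that trick works, here's a toy simulation -- my own construction, with made-up numbers bearing no relation to the actual Actonel data. A response that plateaus looks like a steadily growing benefit once the inconvenient 40% of the data disappears:

```python
# Toy simulation (mine, not the Actonel data) of how discarding 40% of the
# data can hide a plateau and make a benefit look like it keeps on growing.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
marker = rng.uniform(0, 100, n)  # hypothetical biomarker change per patient
# True response improves up to a plateau, then flattens out:
response = np.minimum(marker, 60) + rng.normal(0, 5, n)

# Linear fit on ALL the data vs. only the 60% below the plateau:
slope_all = np.polyfit(marker, response, 1)[0]
keep = marker < np.percentile(marker, 60)  # quietly drop the top 40%
slope_trimmed = np.polyfit(marker[keep], response[keep], 1)[0]

print(f"Slope, all data:       {slope_all:.2f}  (the plateau drags it down)")
print(f"Slope, 40% discarded:  {slope_trimmed:.2f}  (steady 'benefit', as marketed)")
```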

I noted in October 2006 that Eastell was chairing a session on osteoporosis, one that charged a hefty registration fee. The website promoting the session at the time mentioned: "This course is suitable for pharmaceutical industry personnel from clinical through to marketing disciplines." I suppose that Eastell is a key opinion leader in his field. Being willing to put one's name on a paper where the key graph knocks out 40% of the data is a good step toward becoming an influential academic these days. I suppose Eastell could at least claim ignorance, since he was unfamiliar with the underlying data.

In psychiatry, Charles Nemeroff, a key opinion leader, put his name on a continuing medical education presentation in which the data don't match the published article based on the same data set. In the CME presentation, the medication (risperidone) outperformed placebo, although the published report indicated that risperidone did not beat placebo; the CME presentation also claimed that risperidone improved sexual functioning, which was never mentioned in the published article.

Eastell and a colleague recently received a roughly $7.5 million grant. Good for them. I've got nothing against the guy personally; I just find it interesting that he is being rewarded nicely despite the whole Actonel fiasco. And I've only described a wee bit of that strange saga. The Scientific Misconduct Blog has much, much more -- like the part where Eastell told Aubrey Blumsohn to stop bothering Procter & Gamble about the data because P&G was a good source of income for the university. I've got no problem with excellence being rewarded, and perhaps Eastell has done many excellent things. During the P&G/Actonel fiasco, however, Eastell was willing to let the sponsors push him around even as science was being bastardized in the process. Their money meant more than good science. And if patients took Actonel thinking it was more effective than it actually was, who cares -- they're not the ones providing the research funding, right?

Think about this for a second. Many people have been up in arms about the recently unveiled Vioxx ghostwriting scandal. For a fantastic take on the scandal, see Health Care Renewal or Hooked. Briefly, Merck and its associated medical writers wrote manuscripts that said nice things about Vioxx. Then academic authors/key opinion leaders were found to review the papers and stick their names on as lead authors. Mind you, "reviewing" the papers often meant simply making minimal edits, if even that much effort went in. Did they see the data? They saw tables and figures provided by Merck, but did they see the raw data? In most cases, apparently not. Doesn't that make them information launderers? They take industry data and clean it up with their academic reputation. Oh, Dr. So-and-So is at Sheffield or Emory or Harvard -- he must have made sure that the sponsoring drug company portrayed the data accurately. A veneer of credibility. And an extra publication for the key opinion leader, which makes the KOL that much more important in an academic world where publication envy runs rampant.

This system is not exactly set up to benefit patient outcomes, is it?

Tuesday, April 15, 2008

Academics, Atypicals, and Marketing

Ahhhh, there is nothing like the sweet smell of investigative journalism in the morning. Robert Farley published a whale of a piece on how atypical antipsychotics were marketed in the St. Petersburg Times on Saturday. I will discuss some of the tasty tidbits from the article, but you'd be a fool not to read the entire piece yourself.

Farley notes that the manufacturers of atypical antipsychotics needed to spread the word that their drugs worked better than older antipsychotics. The one slight problem: there was no solid evidence (outside of biased studies) showing that the new drugs were superior. So if the companies could not advertise this point directly, they needed to enlist third parties to say it for them. In other words, it was time for some information laundering. In what has become standard operating procedure for the field, "independent" academics were enlisted to recommend the new drugs over the old ones.

So hire a few academics as consultants, fly them off to a "consensus conference," and have them generate treatment guidelines. Would the guidelines be biased? Well, yeah, but that's pretty much the point -- science be damned, it's about market share, baby. Like the Texas Medication Algorithm Project (TMAP), which helped to propel the atypicals to first-line treatment (and second-line, for that matter), and other TMAP clones across the nation. Throw in a few studies of the effectiveness of TMAP, then misinterpret their results, and BAM, you've now established (based on little to no credible evidence) that atypical antipsychotics are the new wonder drugs. And with the wind at your back, hey, why not see if you can market these drugs for everything? After all, you've got the support of the "independent" academic community...

Also see Psych Central's take on the story.

Monday, April 14, 2008

Antidepressant PR Gone Wild

As noted at Furious Seasons, a recent broadcast of "The Infinite Mind" went absolutely wild in its reach to cover up risks associated with SSRIs. Oy. It was almost as if a PR consultant for the drug industry was involved with the show... Oh, wait, a PR consultant for the drug industry was involved -- Peter Pitts from Drug Wonks appeared on the program. You may recall that Pitts works at a PR firm (Manning Selvage & Lee) that does much business with the drug industry.

More coming later on the Canadian Psychiatric Association's unscientific dismissal of evidence that antidepressants have limited efficacy.

Friday, April 11, 2008

Comment Rejection

Oops. I accidentally rejected a comment from Jeffrey Dach. I meant to accept it, but clicked the wrong link. Please re-post.

Thursday, April 10, 2008

Key Opinion Leader Is Unfairly Disparaged

Or so she said. I've written about key opinion leader and University of Cincinnati child psychiatrist Melissa DelBello a few times (here, here, and here). One key point: she was quoted as saying "Trust me. I don't make much" in regard to income received from AstraZeneca for giving favorable talks about its antipsychotic drug Seroquel. I had missed that, in 2007, she claimed she had been misquoted, as recounted in an interesting piece on Inside Higher Ed:

[University of Cincinnati spokesperson] Puff said that DelBello’s comment in May that she did not “make much” money from drug companies had actually come in response to the reporter’s question “about how much money she was given for making a single, individual presentation. Her comment was misrepresented and then repeated by Sen. Grassley.” Added DelBello: “I was and have been misquoted by the NYT.” (The Times reporter, Gardiner Harris, could not be reached Sunday to respond to the suggestion that he had misrepresented DelBello’s comment.)

Puff also said that “the implication of what Sen. Grassley said was that she was disingenuous in what she was paid. She has been completely open in disclosing her payments. She’s made complete disclosures to the university and its IRB. Furthermore, she’s made full disclosure to the Senate Finance Committee.... Additionally, Dr. DelBello has disclosed her funding at all speaking engagements and she’s disclosed in the patient consents of her studies.”

I wonder if she has made disclosures about her company (MSZ Associates), which Senator Grassley's investigation claims was set up for "personal financial reasons" and was well funded by AstraZeneca. Also, does the above mean that DelBello disclosed, in the consent forms for her studies, that she has personally received hundreds of thousands of dollars from AstraZeneca and other sources? I have to admit I'm pretty skeptical about that, but I could be wrong. As for full disclosure to the Senate, Grassley's most recent findings seem to contradict this claim. So either Grassley is just making things up and DelBello is being unfairly persecuted, or her story is simply not adding up.

Why am I making such a big deal about this? Well, such a gigantic hidden conflict of interest doesn't exactly engender my faith, and DelBello is a person who can take at least partial responsibility for the widespread treatment of children with antipsychotic medications. Owing to her research findings, which some claim support the use of antipsychotics in kids, and her many marketing speeches for AstraZeneca and others, the landscape for badly behaving children is changing, and likely not for the better (1, 2, 3).

Pharmalot reports that the University of Cincinnati has been unresponsive to its requests for comment. Perhaps they're going for the time-honored tradition of remaining silent in the belief that this publicity cannot possibly last much longer.

Tuesday, April 08, 2008

Bipolar Child Key Opinion Leader: I Get Money

As reported on the Wall Street Journal Health Blog, Dr. Melissa DelBello's tight financial ties to AstraZeneca are again under scrutiny. This should come as no surprise to my readers, as I noted in March that, in 2003-2004, DelBello had been the recipient of $180,000 from AstraZeneca (makers of Seroquel). I gleaned this information from results of an investigation by Senator Charles Grassley. The WSJ Health Blog noted that Grassley's investigation has continued, revealing that:
DelBello, who also has received NIH grants, also reported $100,000 in outside income between 2005 and 2007. But when Grassley asked AstraZeneca directly, the total value of its payments to DelBello during those three years came to $238,000.
So she claimed initially that she received $100k from 2005-2007, but she actually pulled in $238k from a single company and who knows how much from other outside entities. In fact, it is clear that DelBello has received funding from several other corporate interests. To quote her disclosure from a continuing medical education exercise:
Dr. DelBello has disclosed the following relevant financial relationships: AstraZeneca, Bristol-Myers Squibb, Eli Lilly, and Pfizer: Consultant; AstraZeneca, GlaxoSmithKline, Pfizer: Speakers’ Bureau; and Abbott Laboratories, AstraZeneca, Bristol-Myers Squibb, Eli Lilly, Janssen, Johnson and Johnson, Pfizer, and Shire: Research Support Recipient.
But wait, there's more! According to Grassley's investigation, DelBello has also established a company for "personal financial purposes." The company is called MSZ Associates and AstraZeneca put $60,000 in the coffers of the company. The address of MSZ Associates, according to Grassley, is the University of Cincinnati Department of Psychiatry (where DelBello works).

Again, as I've said earlier, I don't know Dr. DelBello, but from this information, I do indeed feel comfortable nominating her for a Golden Goblet Award. For background, read here and here. PharmaGossip's interesting visual representation of the situation can be seen here.

This is how one sets out to become a key opinion leader. DelBello quite likely has a mortgage and bills to pay, but is this confluence of commercial and academic interests really the best we can do for our patients?

Being a key opinion leader has one pleasant side effect: You Gets Mad Money.

(Warning: Video contains adult language)

Monday, April 07, 2008

The Lingering Stain of Paxil Study 329

Neal Ryan has an editorial in this month's American Journal of Psychiatry. Dr. Ryan is a noted academic child psychiatrist who played a role in the infamous Paxil Study 329, which obfuscated its own findings that Paxil possessed little if any efficacy relative to placebo and carried more risk than placebo in terms of suicidal thoughts/attempts and aggressive behavior.

A summary of the excellent Panorama expose on the study included the following snippet:

Child psychiatrist Dr Neal Ryan of the University of Pittsburgh was paid by GSK as a co-author of Study 329.

In 2002 he also gave a talk on childhood depression at a medical conference sponsored by GSK.

He said that Seroxat could be a suitable treatment for children and later told Panorama reporter Shelley Jofre that it probably lowered rather than raised suicide rates.

In amongst the archive of emails in Malibu, Shelley was surprised to find that her own emails to Dr Ryan from 2002 asking questions about the safety of Seroxat had been forwarded to GSK asking for advice on how to respond to her.

She also found an email from a public relations executive working for GSK which said: "Originally we had planned to do extensive media relations surrounding this study until we actually viewed the results.

"Essentially the study did not really show it was effective in treating adolescent depression, which is not something we want to publicise."

But now Ryan has changed his tune. He writes in the latest American Journal of Psychiatry about a trial of fluoxetine (Prozac) in the treatment of children and adolescents with depression. In the piece, he begins with some background information:
Other, newer antidepressants have at most a single controlled trial showing efficacy in the treatment of major depression in youths, and thus far studies of several antidepressants have not shown statistical superiority to placebo in well-designed trials in this population
He cites one source (Bridge et al., 2007) in support of this statement. I checked the source, and it is clear from its table (eTable 1, to be precise) that the single trial showing superiority of drug over placebo is not the Paxil study he coauthored.

Ryan in 2001: "Paroxetine is generally well-tolerated and effective for major depression in adolescents" (p. 762). Yet now he is saying that antidepressants such as Paxil have not shown efficacy in treating depression in this population.

Note that Dr. Ryan does not specifically retract his earlier claims about the efficacy of Paxil, but his current statement certainly contradicts his prior writing about the wonders of Paxil. Well, to be fair, his 2001 statement about the efficacy of paroxetine could very well have been ghostwritten, as we now know that the great majority of the paper was written by ghostwriters in the employ of GlaxoSmithKline.

And at the end of his current editorial, it is stated that:
Dr. Ryan has received research support from NIMH. He reports no other competing interests.
Did Dr. Ryan participate in Paxil Study 329 for free? I would guess that he indeed received funding from GSK for his efforts on the study, but perhaps I'm wrong. But that is actually a peripheral point.

The main point is that Ryan has never disavowed the overstatement of efficacy or the minimization of risk in the published report of the Paxil 329 findings, yet he now pens an editorial stating that no antidepressant except Prozac has demonstrated efficacy in treating child/adolescent depression. Either Paxil works or it does not, and Dr. Ryan cannot have it both ways.

Still waiting for the first key opinion leader to admit frankly that the efficacy claims made in Paxil 329 were inaccurate...

Friday, April 04, 2008

Vioxx Goes The Way Of Zyprexa

An interesting website (www.vioxxdocuments.com) provides a trove of documents related to controversies surrounding Vioxx. Some of them are hot items, containing statements like this:
The much broader issues, which surfaced at the American College of Rheumatology meetings, were most disturbing and involve suppression of data by Merck and a consistent pattern of intimidation of investigators by Merck staff...
Zyprexa documents, Vioxx documents, Neurontin documents, what's next? If there are any bored researchers, journalists, or bloggers out there, start reading!

Hat Tip: PharmaGossip.

What Tangled Webs We Weave...

Roy Poses of Health Care Renewal did some digging and turned up gold. One really has to wonder to whom some of our "leaders" in healthcare truly pledge their allegiance. Read the paragraph below and tell me that it does anything other than make you curse under your breath (or aloud):
So a director of Eli Lilly that was accused of responsibility for the company's poor performance, poor performance which presumably included its mis-marketing of Zyprexa, also turns out to be responsible for the management of the University of Texas Southwestern Medical Center, currently under fire for maintaining an "A-list" of favored patients, and letting its top executives live the high life on donated funds, practices that go against its mission.
Want the full story? Head over to Health Care Renewal and check it out. You'll also find some background material there -- by the time you're done reading it all, you are guaranteed to have smoke coming from your ears. File under outrageous.

Wednesday, April 02, 2008

Does GSK Love Bad Publicity?

Bob Fiddaman, who runs a website that frequently discusses issues associated with the antidepressant drug paroxetine (Paxil/Seroxat), reports that he recently received an intimidating letter from GlaxoSmithKline's attorneys. Fiddaman had posted a YouTube video in which he compared comments from GSK employee Alistair Benbow to statements and data gleaned from other sources; Benbow's statements in the video are frequently at odds with those other sources. According to Fiddaman, GSK was upset because he used their logo without permission and because Benbow has allegedly suffered "serious distress by such unwarranted harassment."

From reading Fiddaman's post, I was unable to ascertain exactly what types of statements GSK made, though my impression is that getting the attorneys involved is an attempt to bully Fiddaman into silence. Unfortunately for GSK, such a tactic is a very stupid decision. Why? Because those of us who blog about the drug industry tend to keep a close eye on each other's work, and when we notice someone being intimidated, we think it is newsworthy, so we write about it.

Philip Dawdy, author of the popular Furious Seasons blog, has opined in part that:
Basically, GSK used lawyers to intimidate an activist into shutting up...
Aubrey Blumsohn of the excellent Scientific Misconduct Blog, wrote that:
Their questions are about science. Many of those critics are our patients. They question the quality, transparency and honesty of our science, and they do so with good reason. We ignore these patients and these questions at our peril. That such patients should be threatened is a disgrace.
Seroxat Secrets has kept a close eye on Seroxat issues and noted:
Rather than take damaged patients on in court, GlaxoSmithKline would do better to meet them and begin to try to understand why some people suffered Seroxat addiction and then undertake some meaningful research into the problem: there is something wrong with Seroxat and it causes problems for many patients.
It seems that GSK's goal was to get Fiddaman to shut up so that his video would be seen by fewer people. But the funny thing is that by sending a letter through attorneys, GSK has aroused the ire of several people, resulting in the video getting much more attention. Not a good move.

As for the video, it can be seen here:


Tuesday, April 01, 2008

Abilify for Depression: Second Round a Lot Like the First Round

In July 2007, I posted about a very strangely designed study that claimed to show Abilify was an effective treatment for depression when added to antidepressant medication. Here is what I wrote about it then...
Study Design. Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who responded during the initial eight weeks were eliminated from the study. Those who failed to respond were assigned to Abilify + antidepressant or placebo + antidepressant. So we've already established that antidepressant + placebo didn't work for these people -- yet some were then assigned to six more weeks of the very same treatment (!) and compared to those assigned antidepressant + Abilify. The antidepressant + placebo group thus started at a huge disadvantage, because it was already established that they did not respond well to that treatment regimen. No wonder Abilify came out on top (albeit by a modest margin).

Here's an analogy. A group of 100 students is assigned to be tutored in math by Tutor A. The students are all tutored for 8 weeks. The 50 students whose math skills improve are sent on their merry way. That leaves 50 students who did not improve under Tutor A's tutelage. So Tutor B comes along to tutor 25 of these students, while Tutor A sticks with the other 25. Tutor B's students do somewhat better than Tutor A's students on a math test 6 weeks later. Is Tutor B better than Tutor A? Not really a fair comparison between Tutor A and Tutor B, is it?
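To put numbers on the tutor analogy, here's a toy simulation -- entirely my construction with assumed parameters, not the trial's data. Give every simulated patient a stable "placebo responsiveness," screen out the responders during the lead-in, and then pit a pharmacologically inert add-on against continued placebo, assuming only that getting something new carries a small nonspecific boost:

```python
# Toy simulation (my construction, not the published trial's data) of why
# comparing an add-on against a regimen patients have ALREADY failed is loaded.
# Key assumption: switching to anything new carries a small nonspecific boost
# (new pill, new hope, side effects that unblind) -- call it "novelty".
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
trait = rng.normal(10.0, 6.0, n)  # each patient's stable response to AD + placebo
novelty = 3.0                     # assumed nonspecific boost from any change
drug_effect = 0.0                 # the add-on itself is pharmacologically inert

# Lead-in phase: everyone gets AD + placebo; responders leave the study.
lead_in = trait + rng.normal(0.0, 4.0, n)
stay = lead_in < 10.0             # roughly the bottom half fail to respond

# Phase 2: randomize the non-responders to continued placebo vs. the add-on.
t = trait[stay]
addon = rng.random(t.size) < 0.5
placebo_arm = t[~addon] + rng.normal(0.0, 4.0, (~addon).sum())
addon_arm = t[addon] + novelty + drug_effect + rng.normal(0.0, 4.0, addon.sum())

print(f"Continued AD + placebo improvement: {placebo_arm.mean():.1f}")
print(f"AD + inert add-on improvement:      {addon_arm.mean():.1f}")
```

Under these assumptions, the inert "drug" beats continued placebo by the novelty margin every time, despite having zero pharmacological effect. That is precisely the objection: this design cannot distinguish a real drug effect from the effect of merely changing something.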
Well, these experts in study design decided that once was just not enough, so they ran the exact same study a second time. Same huge design flaw. Similar results. The results appear in the Journal of Clinical Psychopharmacology: by a statistically significant, though not overwhelming, margin, those on Abilify + antidepressant improved more than those on antidepressant + placebo. Or did they?

Dear Patients: We Don't Care What You Say. On depression measures rated by clinicians, patients did modestly better on Abilify. But on measures completed by the patients themselves, there was no statistically significant difference between Abilify + antidepressant and placebo + antidepressant. So the patients didn't actually perceive themselves as less depressed -- um, shouldn't the opinion of the patients matter? The message the authors are sending is that the opinion of the clinical raters matters much more than the opinion of patients, which strikes me as ludicrous. These people are depressed, not floridly psychotic, so I think they would have a pretty decent idea of their own mental health status.

The authors attempted to explain this inconvenient finding away as follows:
These [patient-rated] scales were included for exploratory purposes, and the lack of emphasis on these ratings may have contributed to increased variance. The corresponding clinician-rated versions were not included, which may have hindered patients in responding accurately to the self-rated version.
I'm really confused here -- what do they mean, a "lack of emphasis" resulted in increased variance? And as for accusing the patients of not completing the ratings accurately, that just sounds like sour grapes to me. If there had been a significant difference favoring Abilify, you can bet your life savings that the authors would not have accused the patients of reporting inaccurately. Only when a drug fails to beat placebo do we accuse patients of inaccurate reporting, because all new drugs must work; such is the dogma of modern-day psychopharmacology gone wild.

The authors close their paper with the following jewel:
Given the public health challenge of antidepressant nonresponse, this is a significant clinical finding.
We're getting knee-deep in bogus public health claims these days. Even though patients didn't perceive that they improved any more on Abilify than on placebo, this is a significant benefit to public health? You bet. If someone has a defense for designing a study in this manner, I'm all ears, but this really looks like a blatantly biased study that still managed to find no benefit (according to the patients) of adding Abilify.