Showing posts with label stealth marketing. Show all posts

Saturday, September 19, 2009

Lend Me Your Name

Journalism regarding the horrors of ghostwritten papers in medical journals is all the rage these days (1, 2, 3). Here's my very small contribution. The document shown below has been described elsewhere, but it is worth seeing firsthand, in all its glory. It comes from DesignWrite, Wyeth's ghostwriting firm, and was part of the Premarin/hormone replacement therapy disaster (see below). Perhaps you remember the era when hormone replacement therapy was being prescribed for all sorts of people because it was supposedly a wonder treatment. So what if it increased risk for breast cancer and perhaps other conditions as well? Not to worry, DesignWrite could get around that...

In layman's terms, it goes like this... Wyeth -- you give us some hints about the marketing spin you'd like us to put on your studies. We'll then write up the studies accordingly and have big-name academics sign off as if they had something to do with our oh-so-objective "research". And don't worry, Wyeth, you get to review all papers we write up to make sure we market your drug appropriately.


We now know that several academics participated in this program. To quote one ethicist, regarding the academics who lent their names as authors: "They sold their credentials for false credit and money." DesignWrite's current slogan is: "Where we put clinical data to work." Hmmm. DesignWrite gets paid, Wyeth gets paid, and the academics who lend their names get paid and/or get another publication to boost their stock in the academic world.

Oh, and patients, what did they get out of this... breast cancer. But who cares about them anyway -- patients are just little buckets of money; it's not like they're real human beings.

A summary of the results that led to the downfall of hormone replacement therapy

Three years after stopping hormone therapy, women who had taken study pills with active estrogen plus progestin no longer had an increased risk of cardiovascular disease (heart disease, stroke, and blood clots) compared with women on placebo. The lower risk of colorectal cancer seen in women who had taken active E+P disappeared after stopping the intervention. The benefit for fractures (broken bones) in women who had taken active E+P also disappeared after stopping hormone therapy. On the other hand, the risk of all cancers combined in women who had used E+P increased after stopping the intervention compared to those on placebo. This was due to increases in a variety of cancers, including lung cancer. After stopping the intervention, mortality from all causes was somewhat higher in women who had taken active E+P pills compared with the placebo.

Based on the findings mentioned above, the study’s global index that summarized risk and benefits was unchanged, showing that the health risks exceeded the health benefits from the beginning of the study through the end of this three year follow-up. The follow-up after stopping estrogen plus progestin confirms the study’s main conclusion that combination hormone therapy (E+P) should not be used to prevent disease in healthy, postmenopausal women. The most important message to women who have stopped this hormone therapy is to continue seeing their physicians for rigorous prevention and screening activities for all important preventable health conditions.

I'm glad to see that ghostwriting is now the topic du jour in health journalism. But in a few weeks, the attention will vanish as the drug industry and its associated writing firms will agree to allegedly stringent guidelines that ensure this never happens again. And nothing will actually change. I mean, seriously, do you think academic researchers are going to write their own papers? Do you think drug companies are going to stop hiring writers to expertly spin the data? The current system works too well for it to simply go away.

Thanks to an alert reader for sending this document along. You can search for more documents at the Drug Industry Document Archive, including those from Wyeth and DesignWrite. Happy digging!

Thursday, May 07, 2009

Phase V, Abilify, and Vanishing Akathisia

If you've been reading about Abilify for depression on this site, you've probably noticed that I've been down on Abilify for causing akathisia in a frighteningly high percentage of patients. In two recent trials, akathisia occurred in 25% of Abilify patients compared to 4% of placebo patients. What, exactly, is akathisia? That's still a matter of some debate. Let's turn to a recent Journal of Clinical Psychiatry article on the topic. Entitled "Akathisia: An Updated Review Focusing on Second-Generation Antipsychotics," the paper purports to provide "a review of the literature on the incidence of drug-induced akathisia associated with the use of second-generation antipsychotics (SGAs) and first-generation antipsychotics (FGAs)."

It provides a few different characteristics associated with acute akathisia, including:
  • "Intense dysphoria
  • Awareness of restlessness
  • Complex and semipurposeful motor fidgetiness"
It mentions that "...suicidal behavior has been described in patients with akathisia in case reports, both in patients receiving antipsychotic medication and in patients receiving selective serotonin reuptake inhibitors (SSRIs)." A couple of descriptions from another journal:
  • Increased tenseness, restlessness, insomnia and a feeling of being very uncomfortable
  • On the first day of treatment he reacted with marked anxiety and weepiness, on the second day felt so terrible with such marked panic at night that the medication was cancelled
So we can all agree that akathisia does not sound like fun.

Now back to the Journal of Clinical Psychiatry review article. What did the authors conclude? "The comparative incidence of akathisia among the newer antipsychotic agents remains poorly characterized." And "...SGAs are generally associated with a lower propensity for movement disorders compared with their FGA counterparts, an emerging body of comparative literature shows that second-generation medications are not completely free from inducing akathisia."

The authors go through a long list of second-generation antipsychotic medications. The drug that receives the least attention is aripiprazole (Abilify). The authors conclude that "in studies comparing aripiprazole with placebo, akathisia rates in the aripiprazole arm were similar in some studies, and higher in others. As with other SGAs, akathisia rates with aripiprazole were lower than those of FGAs." So Abilify causes less akathisia than older medications and it's unclear if it causes more akathisia than placebo. But, wait, wasn't Abilify linked to much higher rates of akathisia than placebo in treating depression? Fortunately, the authors had a little trick to erase that inconvenient piece of evidence; they only examined trials involving people diagnosed with schizophrenia or bipolar disorder. So the depression studies -- POOF -- vanished, along with their damning data.

Why would the authors want to censor negative data about Abilify? Well, one author is an employee of Otsuka America Pharmaceutical, Inc., and another is an employee of Bristol-Myers Squibb, companies that market Abilify. And the other authors: All but one of them have a financial relationship with Bristol-Myers Squibb. The best part:
Editorial support provided by Maria Soushko, Ph.D., Phase Five Communications, Inc., New York, N.Y., with funding provided by Bristol-Myers Squibb.
So a paper that excludes the most inconvenient evidence regarding akathisia on Abilify had major parts of the writing done by... a medical writer hired by Bristol-Myers Squibb. If one goes to Phase Five's website, the first animation that pops up says "Spinning Your Science Into Gold." I'd say that this article was indeed 24 karat gold. I hereby nominate all authors of the study for a much coveted Golden Goblet Award.



Citation Below:

Kane, J., Fleischhacker, W., Hansen, L., Perlis, R., Pikalov, A., & Assunção-Talbott, S. (2009). Akathisia: An Updated Review Focusing on Second-Generation Antipsychotics. The Journal of Clinical Psychiatry. DOI: 10.4088/JCP.08r04210

Update: See a related post at the Carlat Psychiatry Blog. A partial quote:
Publishing an article that was carefully crafted to draw attention away from Abilify's main liability was shameful, and is exactly the kind of deceptive editorial practice that we as a society can no longer tolerate.

Wednesday, April 29, 2009

Abilify Runs Amok, Runs Stealth Safety Campaign in Medical Journal

Furious Seasons has a rather distressing piece of news from a recent Bristol-Myers Squibb conference call. To sum it up quickly, BMS claims that 10.6% of depressed patients are now receiving atypical antipsychotics. Of those 10.6%, 21.7% are taking Abilify. So that would mean roughly 10-11 in 100 depressed patients are taking antipsychotics and 2 of them are on Abilify. I shudder to think how many are on Seroquel. Or Zyprexa. It made me think of a post I wrote a few weeks ago in which I described the marketing of Abilify for depression. A huge market of depressed people just ripe for the picking.

Going along with this, BMS is pushing back on the issue of akathisia, the side effect that has garnered the drug much bad publicity (at least in the blog world; 1, 2, 3), via a medical journal article that distracts attention from Abilify as an akathisia-inducer. More on that to come soon. Ghostwriters, ignored contradictory evidence -- basically, an attempt to completely obscure the evidence on the topic. It's not the first time BMS has successfully placed a study with major flaws into a medical journal (1, 2). Details will be forthcoming.

Wednesday, January 07, 2009

Sowing the Seeds of Lexapro


I'm reading an article with my jaw completely agape and I thought I'd share the pain. The good people at Forest Pharmaceuticals have put together a tragic waste of journal space. The editorial board at the journal Depression and Anxiety should call an emergency meeting to see how this thing got published. Any peer reviewer who put a stamp of approval on this should be forced to listen to Michael Bolton's Greatest Hits at maximum volume for 12 hours straight.

OK, so what am I having a fit about? Here's what happened in this so-called study. 109 primary care doctors were recruited to participate, for which they were doubtlessly paid a decent chunk per patient (not discussed in the manuscript). The lucky depressed patients of these physicians then received escitalopram (Lexapro) for six months. The manuscript mentions that the "investigators" (the primary care docs) "were not required to have previous clinical research experience to be selected for this study." Yeah, no kidding.

There was no control group, and there had already been dozens of studies on the effects of Lexapro in depression, so how are we getting any new info out of this study? Maybe because this is investigating Lexapro in primary care settings; maybe there was no research on that beforehand. Well, no. The manuscript states that "The efficacy and tolerability of escitalopram in MDD have been extensively evaluated in primary-care settings," citing four relevant studies. So the study is not actually an attempt to answer a scientific question. What, exactly, is it then?

Looks and smells like a seeding trial, about which Harold Sox and Drummond Rennie wrote:
This practice—a seeding trial—is marketing in the guise of science. The apparent purpose is to test a hypothesis. The true purpose is to get physicians in the habit of prescribing a new drug. Why would a drug company go to the expense and bother of conducting a trial involving hundreds of practitioners— each recruiting a few patients—when a study based at a few large medical centers could accomplish the same scientific purposes much more efficiently? The main point of the seeding trial is not to get high-quality scientific information: It is to change the prescribing habits of large numbers of physicians. A secondary purpose is to transform physicians into advocates for the sponsor’s drug. The company flatters a physician by selecting him because he is “an opinion leader” and incorporates him in the research team with the title of “investigator.” Then, it pays him good money: a consulting fee to advise the company on the drug’s use and another fee for each patient he enrolls. The physician becomes invested in the drug’s future and praises its good features to patients and colleagues. Unwittingly, the physician joins the sponsor’s marketing team. Why do companies pursue this expensive tactic? Because it works.
So these primary care doctors now feel like "researchers," even though their investigation had essentially zero scientific merit. That probably makes these "investigators" feel important -- and the association between feeling important/scientific and Lexapro is a feeling Forest was banking on to increase Lexapro prescriptions in Canada.

Findings: So what did this extremely important piece of seeding, er, research find? Get ready... Lexapro is safe and effective. To quote the authors: "Escitalopram was well tolerated, safe, and efficacious. Escitalopram can be used with confidence to treat patients with MDD in Canadian primary-care settings." And "As adherence to antidepressant treatment is paramount to achieving long-term recovery, the present results suggest that escitalopram should be considered among the first-line choices of antidepressant used in primary care." So with no control group, we can determine that a Lexapro prescription should be among the first things that come to mind when treating depression. This is mind-boggling. This journal often publishes good work, but this is among the most uninformative pieces of research I have read. Unless one is thinking about marketing, in which case it is very enlightening.


Citation: Pratap Chokka, Mark Legault (2008). Escitalopram in the treatment of major depressive disorder in primary-care settings: an open-label trial. Depression and Anxiety, 25 (12). DOI: 10.1002/da.20458

Wednesday, December 17, 2008

The Incredible Vanishing Key Opinion Leader

Charles Nemeroff, former chair of psychiatry at Emory University and key opinion leader extraordinaire, has vanished. Not quite vanished from the face of the Earth, but from Medscape CME and now from a Georgia mental health commission. According to an investigation by Senator Charles Grassley, Nemeroff failed to disclose a whole boatload of money he received from Big (and little) Pharma. For example, it appears that Nemeroff received about $20,000 in cash from GlaxoSmithKline in one month in exchange for promoting GSK products to his peers.

I have previously written about a number of, um, "interesting" behaviors on the part of Nemeroff, which I recommend you read in order to understand that Nemeroff has, on several occasions, engaged in behavior that certainly appears to have placed the causes of his corporate sponsors over science. Not good for an "independent" researcher.

And now, it seems that Chuck Nemeroff is vanishing. Dr. Bernard Carroll noted that Nemeroff's continuing medical education offerings had vanished from Medscape and offered the following:
Well, good for Medscape. They came in for their share of criticism, here and here, a while back. Now they deserve credit for displaying ethical standards. Meanwhile, we are waiting for another company called CME Outfitters to get the message. Dr. Nemeroff is slated to moderate a raft of new programs for this company in the coming weeks, sponsored by corporations like Pfizer, AstraZeneca, and Ortho-McNeil Janssen. CME Outfitters' logo, after all, is Education with Integrity. Sooner or later the pharmaceutical corporations, like the CME companies, will understand that they are not helping themselves by trotting out a shopworn and sleazy KOL figurehead like Nemeroff for their marketing efforts. And other KOLs who up to now were willing to "wet their beaks" in these CME forums controlled by the Boss of Bosses Nemeroff will now be leery of associating with him.
Well, CME Outfitters is still rolling with Nemeroff. For example, he has an upcoming program called "Atypical Antipsychotics in Major Depressive Disorder: When Current Treatments Are Not Enough," which is a scary thought given that he appears to have been pulling data from thin air for a prior CME exercise in which he pimped risperidone as a treatment for refractory depression. Specifically, Nemeroff's presentation claimed that risperidone improved sexual function in a clinical trial, when the published article based on the trial's results said no such thing. In addition, Nemeroff's claim that risperidone had shown efficacy in a short-term study versus placebo for depression was also unsupported. So I'm thinking the upcoming program on antipsychotics for depression might be a fantastic example of marketing beating the crap out of science.

Georgia appointed a commission to address several issues within the public mental health system. They have completed a report. Interestingly...

The final version also does not contain the name of commission member Charles Nemeroff, an Emory psychiatry professor who has been a subject of a U.S. Senate Finance Committee investigation into whether drug company money paid to doctors and academics compromises medical research and scholarship. Nemeroff, an internationally known expert on depression, did not attend recent commission meetings.

But Nemeroff was appointed to the commission with some fanfare. The press release listing Nemeroff's accomplishments is pretty lengthy. The Georgia state legislator who appointed Dr. Nemeroff said, "I am confident that Charles will be an asset to this commission and will serve as a strong advocate for the people of Georgia being served [by] our mental health systems."

Yet Nemeroff was not on the final report. If it weren't for his work on CME Outfitters, I would be worried that we might need to file a missing persons report for Dr. Nemeroff.

Update (12-18-08): The Wall Street Journal Health Blog has two interesting posts on Dr. Nemeroff (1, 2). Read them and feel free to file them under "bizarre."

Monday, October 06, 2008

A Month in The Life of Chuck "High Life" Nemeroff

The psychiatry world is belatedly exhibiting outrage toward a man whose ability to lure pharma cash seems to know no bounds. He may be the textbook case of a key opinion leader. Of course, I speak of Charles "Bling Bling" Nemeroff. Rather than list the many questionable-at-best behaviors he has exhibited, each of which has called into question his standing as a scientist as opposed to a blatant drug marketer, I just want to a) direct everyone to a detailed list of his speaking engagements from GlaxoSmithKline and b) discuss a month of living the High Life, Nemeroff Style.

As is well known by now (1, 2), Nemeroff appears to have not been particularly forthcoming about the huge amounts he was making while moonlighting for every drug company on the planet (see below) despite requirements that he do so. According to psychiatrist Danny Carlat:
From 2000 to 2006, GSK paid Nemeroff a total of $960,488. Note that this was not research grant money, or money for Emory's psychiatry department. These were fees that went into his personal bank account, which he earned by either sitting on GSK's Advisory Board, or speaking to doctors about GSK products. His typical fee for a talk was $3500 plus expenses, but sometimes he made more.

Of this $960,488, the total amount he disclosed to Emory [his employer, to whom he was required to report such income] was $34,998.
According to a GSK document hosted by Senator Charles Grassley, Nemeroff took in over $20 grand in one month from speaking engagements for GSK. Not bad work if you can get it, eh? And this month doesn't seem unusual for Nemeroff. These are only his speeches for GSK -- he also gave speeches for several other companies. The document goes on and on -- 39 pages of paid speech listings, nearly all of them featuring Nemeroff. I just picked 03-30-00 to 04-30-00 because they were on the first pages of the document, which covers expenses from 2000 to 2008 for Dr. Bling Bling.

Nemeroff GSK Honoraria from March 30, 2000 to April 30, 2000

Date          Speaking Fee
03/30/2000    $4,000
04/12/2000    $2,500
04/19/2000    $4,000
04/20/2000    $4,175 (includes some "expenses"; I suspect $4,000 was the speaking fee)
04/27/2000    $4,000
04/30/2000    $2,500
TOTAL         $21,175 (probably $21,000 excluding travel expenses)

Imagine making $20k in a month for basically reading, a few times over, slides that were quite possibly written entirely by a drug company. And many of these talks were accompanied by posh meals, the kind that I and most of my readers might eat once or twice a year.

Here's a Nemeroff disclosure from a recent journal article:

Dr Nemeroff has received grants from or performed research for the American Foundation for Suicide Prevention, AstraZeneca, Bristol-Myers Squibb, Forest Laboratories, Inc, Janssen Pharmaceutica, NARSAD: TheMental Health Research Association, the National Institute of Mental Health, Pfizer Pharmaceuticals, and Wyeth-Ayerst Laboratories; has been a consultant to Abbott Laboratories, Acadia Pharmaceuticals, Bristol-Myers Squibb, Corcept Therapeutics, Cypress Bioscience, Cyberonics, Eli Lilly and Co, Entrepreneur’s Fund, Forest Laboratories, Inc, GlaxoSmithKline, i3 DLN, Janssen Pharmaceutica, Lundbeck, Otsuka America Pharmaceutical, Inc, Pfizer Pharmaceuticals, Quintiles Transnational, UCB Pharma, and Wyeth-Ayerst Laboratories; has been on the speakers bureau for Abbott Laboratories, GlaxoSmithKline, Janssen Pharmaceutica, and Pfizer Pharmaceuticals; is a stockholder in Acadia Pharmaceuticals, Corcept Therapeutics, Cypress Bioscience, and NovaDel Pharma Inc; is on the board of directors of the American Foundation for Suicide Prevention, the American Psychiatric Institute for Research and Education, the George West Mental Health Foundation, NovaDel Pharma Inc, and the National Foundation for Mental Health; holds patents on a method and devices for transdermal delivery of lithium (US 6,375,990 B1) and on a method to estimate serotonin and norepinephrine transporter occupancy after drug treatment using patient or animal serum (provisional filing April 2001); and holds equity in Reevax, BMC-JR LLC, and CeNeRx.
No, I didn't make that up. As Ed Silverman wrote at Pharmalot, "It also raises a question - when he did find time to do anything else?"

Wednesday, October 01, 2008

Prialt Pushed Through Duplicate Publication

Apparently, the same data on Elan's pain medication Prialt (ziconotide) was published twice. Same data set. No reference in the second publication to the first publication. As I noted last week in a post about Cymbalta, that's not supposed to happen. It's the sort of thing that leads physicians to believe that a medication has a lot of supporting evidence -- "Of course I prescribe it; I've seen two positive clinical trials" -- when in fact it's just the same data being repackaged in another journal. The full story is contained in two posts at The MacGuffin (1, 2). An infomercial passing itself off as continuing medical education is also involved in the plot. Count The MacGuffin as an official must-read blog.

Monday, August 18, 2008

Sowing the Seeds of Vioxx

A new article in the Annals of Internal Medicine lays bare how research was commandeered by marketing in the promotion of Vioxx, Merck’s former all-star painkiller and personkiller. The article is based on a chunk of internal Merck documents revealed during legal proceedings (not unlike the infamous Zyprexa documents). Merck set up the study known as ADVANTAGE, in which 2785 arthritis patients were given Vioxx and 2772 arthritis patients took naproxen. Physicians across the nation were recruited to enroll patients to participate in the study, which was initiated during the FDA approval process for Vioxx.

The study, however, was conceived and conducted by Merck’s marketing department. Why? As Vioxx was about to come to market, Merck needed to develop a need for their product. By hiring physicians to participate as “investigators” on this trial, Merck was exposing its product to an important group of potential customers. To quote Merck:

The objectives were to provide product trial among a key physician group to accelerate uptake of VIOXX as the second entrant in a highly competitive new class and gather data important to this customer group. The trial was designed and executed in the spirit of the Merck marketing principles.

Other snippets from a Merck marketing memo, with pithy commentary added free of charge:

  • "...the trial was targeted to a select group of critical customers.” So the main qualification to be a “research investigator” in this trial was to be a customer that Merck wanted to win over.
  • "The sales force nominated potential investigators and completed intake forms, allowing a very large number of sites to be evaluated and enrolled and ensuring equal distribution of investigators across the business groups.” Again, the sales force chose the physicians, apparently based on how easily they could be swayed to prescribe Vioxx as a result of participating in this study.
  • "An analysis performed at 6 months post launch demonstrated a significantly higher level of prescribing for VIOXX among primary care ADVANTAGE investigators compared to a control group of VIOXX 99 prescribers (see attached). Feedback from the field has been overwhelmingly positive about their ability to access key customers and the influence that being involved in the trial has had on their perceptions of VIOXX and Merck.” The program apparently had its desired results – more docs prescribing Vioxx as a result of their participation in Merck marketing-designed “research.” As for the patients dying due to taking Vioxx, well, what’s a little collateral damage when there are quarterly sales goals to be met?

The name for such an exercise in marketing is a “seeding trial,” referring to a company planting seeds in physicians to use their product under the guise of research.

What about getting the results out?

According to Merck: “Preparations are now underway for analysis and publication of the data, which will utilize key investigators as authors and advisors.” Turns out that worked pretty well. The lead author on the main ADVANTAGE publication told the New York Times: “Merck designed the trial, paid for the trial, ran the trial. Merck came to me after the study was completed and said, ‘We want your help to work on the paper.’ The initial paper was written at Merck, and then it was sent to me for editing.” It was sent to him, that is, to place a veneer of academic credibility on Merck’s marketing-run trial.

Even Merck’s head of research said that trials such as ADVANTAGE are “intellectually redundant” – as they tend to focus on results that are already well-established, such as showing Vioxx to be somewhat easier on the gastrointestinal tract than naproxen.

Harold Sox and Drummond Rennie, also writing in the Annals of Internal Medicine, in a critique of Vioxx’s marketing wrote:

This practice—a seeding trial—is marketing in the guise of science. The apparent purpose is to test a hypothesis. The true purpose is to get physicians in the habit of prescribing a new drug. Why would a drug company go to the expense and bother of conducting a trial involving hundreds of practitioners— each recruiting a few patients—when a study based at a few large medical centers could accomplish the same scientific purposes much more efficiently? The main point of the seeding trial is not to get high-quality scientific information: It is to change the prescribing habits of large numbers of physicians. A secondary purpose is to transform physicians into advocates for the sponsor’s drug. The company flatters a physician by selecting him because he is “an opinion leader” and incorporates him in the research team with the title of “investigator.” Then, it pays him good money: a consulting fee to advise the company on the drug’s use and another fee for each patient he enrolls. The physician becomes invested in the drug’s future and praises its good features to patients and colleagues. Unwittingly, the physician joins the sponsor’s marketing team. Why do companies pursue this expensive tactic? Because it works.

This is hardly the biggest problem with Vioxx, as its tendency to kill people en masse via heart problems is obviously a far more important issue. And we all know at this point that Merck made sure to underplay the risks of Vioxx, and that the medical community was also asleep at the wheel when it came to examining published studies on the risks of Vioxx. Nonetheless, using the guise of science to recruit naive (or greedy) physicians to serve as Vioxx pushers is contemptible.

Merck was out pushing how wonderfully safe and well tolerated Vioxx was via a “study” designed solely to turn “investigators” into top Vioxx prescribers, while at the same time, more and more people were meeting an early end due to this purportedly safe new drug.

For more hot Vioxx action, check out this site.

Update (8-19-08): Check out an interview with Dr. Kevin Hill, lead author of the investigation of the ADVANTAGE trial, over at the Carlat Psychiatry Blog. As always, Pharmalot also has a good post on the topic.

Monday, May 05, 2008

In the Name of Science and Charity

Philip Dawdy at Furious Seasons has noted that Eli Lilly released a short report describing the funding it provided to a variety of organizations. All in the name of science and charity, of course. The report names a number of large recipients and is itself well worth checking out. One will note that Lilly is kindly funding a lot of "education" about fibromyalgia just as it tries to move Cymbalta for all things pain-related. The amount of "education" regarding bipolar disorder is also instructive. Um, Viva Zyprexa?

Read some of the details at Furious Seasons and read Lilly's report as well. To Lilly's credit, at least they are making an attempt at disclosure; their industry colleagues are more than welcome to follow suit. Remember that the figures from Lilly's report are from the first quarter of 2008 only.

Monday, April 14, 2008

Antidepressant PR Gone Wild

As noted at Furious Seasons, a recent broadcast of "The Infinite Mind" went absolutely wild in its reaching to cover up risks associated with SSRIs. Oy. It was almost as if a PR consultant for the drug industry was involved with the show... Oh, wait, a PR consultant for the drug industry was involved -- Peter Pitts from Drug Wonks appeared on the program. You may recall that Pitts works at a PR firm (Manning Selvage & Lee) that does much business with the drug industry.

More coming later on the Canadian Psychiatric Association's unscientific dismissal of evidence regarding antidepressants' poor efficacy.

Tuesday, March 18, 2008

Zyprexa and Key Opinion Leaders

Since the Zyprexa trial is ongoing in Alaska, I thought I should return to the wonderful world of Zyprexa. I encourage readers to follow the Zyprexa coverage at Furious Seasons, Pharmalot, and PharmaGossip. Today, I will discuss the link between key opinion leaders and the marketing of Zyprexa. To preface, a coveted Golden Goblet Nomination could be handed out to several individuals based on their involvement in Zyprexa marketing...

In March 2000, Zyprexa received FDA approval for treatment of manic episodes. One document laid out the multipronged marketing maneuvers that Lilly utilized to move Zyprexa shortly after its approval. Some of the details of this document have been well-covered in a terrific piece of investigative journalism at Furious Seasons. This post will provide some coverage of the link between Zyprexa and the key opinion leaders who helped popularize the drug across the nation.

Once approved, Lilly utilized several tactics to market Zyprexa for bipolar disorder, including a satellite conference beamed to about 6,000 physicians and 8,000 treatment team members in 1,000 facilities. The faculty providing this educational service included many of the big names in academic psychiatry, including Paul Keck, Jan Fawcett, Hagop Akiskal, and Alan Schatzberg.

Alan Schatzberg, you say? Yes, the same Alan Schatzberg who is set to become president of the American Psychiatric Association. Some have been less than pleased with his election as APA president, considering his background as a physician-marketer, a key opinion leader with large conflicts of interest. The same Alan Schatzberg who has been involved in marketing passing itself off as continuing medical education.

Lilly also bankrolled dinner meetings, anticipated to draw 150-400 physicians per sitting. Dr. Schatzberg was also listed as a speaker for such dinners. One mental health service provider was impressed enough with this excellent medical education that he listed it on his CV.

In the document outlining Zyprexa's big marketing launch, Paul Keck's name appears in the following contexts:
  • Satellite symposium provider
  • Trainer of "local speakers." I believe this means he would train local physicians in various markets to then discuss Zyprexa with their colleagues.
  • Faculty for bipolar weekend symposia
  • Faculty for audio conferences
  • Faculty for a satellite CME workshop
  • Faculty for "dissemination of Bipolar information to 30,000 customers"
  • Faculty on a "closed symposium" resulting in a CME newsletter and a CME audiotape, both of which were mailed to 30,000 individuals
  • Author of two journal supplement articles
Paul Keck was also a member of a task force chartered by the American Psychiatric Association that served to revise the organization's guidelines to provide a more favorable view of atypical antipsychotics (including Zyprexa) in the treatment of bipolar disorder. No conflict of interest there, eh?

From a 2002 interview:
"Often," Keck said, "patients with bipolar disorder require complex treatment regimens to manage all phases of their illness, creating a compliance challenge for patients and a management challenge for clinicians. These studies suggest that physicians may be able to use olanzapine as a foundation to simplify patients’ treatment regimens, and the combination of olanzapine and fluoxetine could be an effective treatment choice."
It is likely that Keck was not performing all of his "educational" functions for Lilly in exchange for lollipops. He was likely receiving a healthy dose of cold, hard cash. Yet in the article, nothing is written about his financial links to Lilly. Keck has also appeared in press releases saying nice things about Symbyax (fluoxetine/olanzapine combination).

To be fair, Keck has also stumped for Pfizer's Geodon in press releases. Oh, and he also said nice things about Abilify in a press release. I suppose that if one is going to be a true key opinion leader, a real mover and shaker, one should be prepared to say nice things about whatever new drug is released, since each new drug naturally represents an "important" treatment option. Keck, like Alan Schatzberg and Charles Nemeroff, is also currently listed as a member of the clinical advisory board for Neuroscience CME, a for-profit entity awash in drug industry money. Dr. Daniel Carlat has previously written that the "educational" content produced by this organization is biased, and I find that easy to believe. It's not hard to find examples of poorly done industry-funded CME. In fact, you might be interested in reading about a CME activity in which Nemeroff seems to have pulled data out of thin air.

In sum, the usual fun and games were in play when Zyprexa was initially being pushed for bipolar disorder. Some of the biggest names in psychiatry left their fingerprints all over the marketing of Zyprexa and one of these key opinion leaders recently won the presidential election for the American Psychiatric Association. I suppose, then, that American psychiatrists are generally either unaware of conflicts of interest or don't care about them.

The beautiful thing about being a key opinion leader is that one's name recognition is huge. Among psychiatrists, I bet that Schatzberg's name is better known than that of Bill Clinton, since Schatzberg's byline appears on journal supplements and CME so frequently. That can't hurt when running for president of the national professional organization. I will be very interested to see how Schatzberg handles questions about conflicts of interest and drug industry influence on his profession. Don't be expecting any major efforts at reform in the near future.

Friday, March 07, 2008

Link-O-Rama, Early March Edition

A few pieces of interesting news...
  • Dr. Daniel Carlat has been busy. He aptly notes that Pristiq is an Effexor copycat that apparently provides no special benefits over soon-to-be-generic venlafaxine. Hey, didn't I just write a piece or two about Effexor? In addition, Carlat continues to hammer the corrupting, er, continuing medical education industry. He also documents the use of deceptive "surveys" to market antipsychotics. Excellent work -- keep it up! Dr. Grohol at PsychCentral pointed out another set of potential problems with the surveys.
  • Furious Seasons puts forth Ye Olde Pimp Slappe on antidepressant use in bipolar disorder with a side dish of I Told You So. He has indeed questioned the use of antidepressants in bipolar disorder, and the latest data continue to question the utility and safety of such practices. Philip Dawdy notes accurately that he is the only person in the USA to host the infamous Zyprexa documents online. He also broke a number of excellent stories on said documents. All for a salary of zero dollars. So why not send him some money? He's running a fundraiser currently. You can donate here. Hey, I'd like to rake in some donations for myself. I think I provide a somewhat valuable service, and my day job doesn't exactly make me rich. But when Dawdy is doing such work much more productively than I am, and he doesn't even have a day job (file under journalism in crisis), I think he deserves your financial consideration, not I. So if you ever had the kindhearted intention of sending me cash to support my work, send your money to Philip Dawdy.
  • Health Care Renewal is chronically excellent, as y'all know already. Recent stories include a fat conflict of interest involving the head of the Obesity Society, yet another chapter in the sordid University of Medicine and Dentistry of New Jersey affair, and a take on the baseless lawsuit from HipSaver.
  • Speaking of HipSaver, Aubrey Blumsohn has also written eloquently on this case, which I hope receives scrutiny from many sources. He also reports unfavorably about the sham investigation of GSK. Bob Fiddaman and Seroxat Secrets were similarly unimpressed.
  • Peter Rost is on the job market.
  • As usual, Pharmalot and PharmaGossip have continued to provide all the news that's fit to print. Of particular interest to my readers (I think) was the marketing of Abilify. Perhaps yet more interesting, the 6th episode of RX -- Sex, Drugs, and Quarterly Goals is up. Everyone should check out all six episodes. I'm hooked.
  • Pharma Giles has been generating his usual brand of dead-on satire. I was particularly amused by his take on the most recent Kirsch antidepressant meta-analysis.
  • In "who cares?" news, bifeprunox is apparently dead in the water. Better hope that Pristiq sells in droves, Wyeth...
As for myself, I have at least one piece in the works involving a key opinion leader and cash. Stay tuned.

Wednesday, March 05, 2008

Nemeroff Confirms Kirsch: SSRIs Offer Little Benefit


This post will discuss how the latest meta-analysis claiming to show public health benefits for Effexor actually also showed that antidepressants aren't up to snuff. Part 1 detailed how the study authors found a very small advantage for Effexor over SSRIs, which they then suggested meant that Effexor offered significant benefits for public health over SSRIs. Ghostwriters, company statisticians, questions about transparency, etc. Even the journal editor jumped on board. All the usual goodies.

Bad News for SSRIs: But now, on to part deux. Remember that the authors used a Hamilton Depression Rating Scale score of 7 or less as indicative of remission, which was the one and only outcome measure of import in their analysis. In their database of studies analyzed in the meta-analysis, there were nine studies that had an Effexor group, an SSRI group, and a placebo group. In these studies, there was a 5.5% difference in remission rates for SSRIs versus placebo. Read it again: there was a 5.5% difference in remission rates for SSRIs versus placebo. You should be shaking your head, perhaps cursing under your breath or even aloud. Using the number needed to treat statistic that the authors used in their analysis of Effexor versus SSRIs, that means you would have to treat 18 people with an SSRI instead of a placebo to get one additional remission that you would not get if all 18 had received a placebo. Damn -- that is pathetic! In these same nine trials, the difference between Effexor and SSRIs was 13%, for a number needed to treat of 8. One might conclude that Effexor was more than twice as effective as SSRIs based on these figures, but one would be wrong. Please see my prior post for why depression remission should absolutely not be used as the only judgment of a drug's efficacy. Granted, the numbers for SSRIs were based on nine trials, which limits the generalizability of the findings, but the findings sure fit well with the Kirsch series of meta-analyses that found only a small difference for SSRIs over placebo in all but the most severe cases.
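The NNT arithmetic above is just the reciprocal of the absolute difference in remission rates. Here's a minimal sketch using the percentages quoted in the post (the rounding to whole patients is my convention, not the authors'):

```python
def nnt(remission_diff):
    """Number needed to treat: reciprocal of the absolute
    difference in remission rates between two treatments."""
    return 1 / remission_diff

# SSRIs vs. placebo: 5.5% difference in remission rates
print(round(nnt(0.055)))  # -> 18: treat 18 with an SSRI to gain one extra remission

# Effexor vs. SSRIs in the same nine trials: 13% difference
print(round(nnt(0.13)))   # -> 8
```

Same formula both times; the only thing that changes is which comparison the drug company would rather you look at.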

If you told most people that you would have to treat 18 depressed patients with a SSRI rather than a placebo to get one additional remission in depressive symptoms, you'd get laughed out of the room, but that is exactly what Nemeroff et al found. Do the authors conclude with: "The findings confirm earlier work by Kirsch and colleagues showing that the benefits of SSRIs over placebo are quite modest"? Not exactly. Here is their interpretation:
To achieve one remission more than with placebo, 8 patients would need to be treated with venlafaxine (NNT = 8) compared with 18 patients who would need to be treated with an SSRI (NNT = 18). From this perspective, the magnitude of the advantage of SSRIs versus placebo in the placebo-controlled dataset (NNT=18) is similar to the advantage of venlafaxine relative to SSRIs in the combined data set (NNT = 17).
This is right after the authors wrote about how a NNT of 17 was possibly important to public health (see part 1), which was about the time I fell out of my chair laughing. A more plausible interpretation is that SSRIs yielded very little benefit over placebo and that Effexor, in turn, yielded very little benefit (in fact, a statistically significant benefit over only Prozac) over SSRIs. But that sort of interpretation does not lead to good marketing copy or press releases that tout the benefits of medication well beyond what is reasonable. What if the press releases for this study read: "Nemeroff confirms findings of Kirsch: Antidepressants offer very little benefit over placebo." That would have been refreshing.

Sidebar: Here is my standard statement about antidepressants -- they work. Huh? Yeah, the average person (surely not everyone) on an antidepressant improves by a notable amount. The problem is that the vast majority (about 80%) of such improvement is due to the placebo effect and/or the depression simply getting better over time. Give someone a pill and that person will likely show some improvement, but nearly all of the improvement is due to something other than the drug. If most improvement is due to the placebo effect, couldn't we usually get such improvement using psychotherapy, exercise, or something else, which might avoid some drug-induced side effects? Moving on...

Key Opinion Leaders: But notice how this Wyeth/Advogent authored piece featuring Charles Nemeroff as lead author (as well as Michael Thase as last author) throws down a major spin job regarding the efficacy of antidepressants. As reported previously, their measure of efficacy was quite arbitrary. It could have been supplemented with other measures, as Wyeth is in possession of such relevant data, but such analyses were not conducted. But even using their questionable measure of efficacy, antidepressants put on a poor performance. Similarly, Effexor's advantage over SSRIs was meager. Yet the authors (remember, three medical writers worked on this paper) conclude that venlafaxine offers a public health benefit over SSRIs. Maybe the authors were afraid of being sued for writing anything negative in their paper? Or perhaps they just know who is buttering their bread. It is also possible that the authors truly cannot envision the idea that SSRIs offer such a meager advantage over placebo and that Effexor yields very little (if any) benefit over SSRIs. And that is the problem. The "key opinion leaders" are all stacked on one side of the aisle -- drugs are highly effective and each new generation of medications is better than the last. So plug in the name of the next drug here, and you'll see a key opinion leader along with a team of medical writers rushing out to show physicians that the latest truly is the greatest. Since we don't really train physicians to understand clinical trials or statistics particularly well, you can also expect many physicians targeted by such marketing efforts to simply lap up unsupported claims of "public health benefit."

Hey, is there a counter-detailer in the room somewhere?

Monday, March 03, 2008

Effexor Beats SSRIs (Kind of, Sort of, In a maybe meaningless way...)

A recent study in the journal Biological Psychiatry claimed to show that Effexor's (venlafaxine's) alleged advantages over SSRIs "may be of public health relevance." Unstated in the article, but a more accurate reading of their findings, is that antidepressants yield little benefit over a placebo. I'm breaking this into two parts. The current post deals with the authors' claims regarding venlafaxine's superiority over SSRIs. A second post will examine their understated finding that antidepressants are not particularly impressive compared to placebo.

The study was a meta-analysis, where data from all clinical trials comparing Effexor to an SSRI were pooled together. The authors used remission on the Hamilton Rating Scale for Depression (HAM-D) as their measure of treatment effectiveness. On the HAM-D, a score of less than or equal to 7 was used to define remission. They found that remission rates on Effexor were 5.9% greater than remission rates on SSRIs. Thus, one would need to treat 17 depressed patients with Effexor rather than an SSRI to yield one remission that would not have occurred had all 17 patients received an SSRI. Not a big difference, you say? Here's what the authors said:
...the pooled effect size across all comparisons of venlafaxine versus SSRIs reflected an average difference in remission rates of 5.9%, which reflected a NNT of 17 (1/.059), that is, one would expect to treat approximately 17 patients with venlafaxine to see one more success than if all had been treated with another SSRI. Although this difference was reliable and would be important if applied to populations of depressed patients, it is also true that it is modest and might not be noticed by busy clinicians in everyday practice. Nonetheless, an NNT of 17 may be of public health relevance given the large number of patients treated for depression and the significant burden of illness associated with this disorder. [my emphasis]
Public Health Relevance/Remission: The public health claim is pretty far over the top. If one had to treat 17 patients with Effexor to prevent a suicide or homicide that would have occurred had SSRIs been used, then yes, we'd be talking about a significant impact on public health. But that's not what we're dealing with in this study. The outcome variable was remission on the HAM-D, which is a soft, squishy measure of convenience. The authors state that remission rates are "the most rigorous measure of antidepressant efficacy," but to my knowledge there is no evidence supporting their adoption of the magic cutoff score of 7 on the HAM-D as the definition for depressed/not depressed. Are people who scored 8 or 9 on the HAM-D really significantly more depressed than people who scored 6 or 7? Take a look at the HAM-D yourself and make your own decision. I know of not a single piece of empirical data stating that such small differences are meaningful. So I'm not buying the public health benefit -- in fact, I think it is patently ridiculous.

Outcome measures can be either categorical (e.g., remission or no remission) or continuous (e.g., change in HAM-D scores from pretest to posttest). Joanna Moncrieff and Irving Kirsch discuss how using cut-off scores (categorical measures) rather than mean change (continuous measures) can make a treatment appear much more effective than the continuous measures would suggest. Applied to this case, one wonders why the data on mean improvement were not provided. One can make a very weak case that Effexor works better than SSRIs based on an arbitrary categorical measure, but not one shred of data was presented to show superiority on a continuous measure. If the data supported Effexor on both categorical and continuous measures, then I'd bet they would have been discussed in this article, as it was funded by Wyeth (patent holder for Effexor). Thus, the absence of data on continuous measures (e.g., the difference in mean HAM-D improvement between Effexor-treated and SSRI-treated patients) is suspicious.
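To see how a cutoff can inflate an apparent difference, here is a toy illustration in the spirit of the Moncrieff and Kirsch argument. Every number in it (the means and the standard deviation) is a hypothetical assumption of mine, not a figure from the study:

```python
from statistics import NormalDist

# Hypothetical posttest HAM-D score distributions -- illustrative
# assumptions only, not data from the meta-analysis.
effexor = NormalDist(mu=8.5, sigma=4.0)
ssri = NormalDist(mu=9.5, sigma=4.0)  # just 1 point worse on average

CUTOFF = 7  # the "remission" threshold used in the meta-analysis

remit_effexor = effexor.cdf(CUTOFF)  # fraction of patients scoring <= 7
remit_ssri = ssri.cdf(CUTOFF)

print(f"{remit_effexor:.1%} vs {remit_ssri:.1%}")            # ~35.4% vs ~26.6%
print(f"categorical gap: {remit_effexor - remit_ssri:.1%}")  # ~8.8 points
```

A 1-point mean difference -- trivial on a 60-point scale -- shows up as a nearly 9-point gap in "remission" rates. That is exactly why reporting only the categorical measure, with no continuous data, deserves suspicion.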

Even if the authors decided to use only categorical measures, it would have been nice had they opted to use multiple measures. They could have used the equally arbitrary 50% improvement criterion (HAM-D scores drop by 50% during treatment), for example. However, such data were not provided. So the authors decided to use one highly arbitrary measure, on which they found a very small benefit for venlafaxine over SSRIs. Whoopee.

I received an email from a respected psychiatrist (who shall remain anonymous) about this study. He/she opined:
...it would have been interesting if the authors had used other cutoffs for the Hamilton scale besides 7 to define remission; i.e., if they had done a sensitivity analysis. Apparently, Wyeth has all the raw data from the studies, so a lot of interesting science could be done with this very large aggregate database. For example, there are robust factor analyses of the Hamilton scale that indicate reasonably independent dimensions of depressed mood, agitation/anxiety, diurnal variation, etc., and it would be of great interest to determine the relative effects of the various drugs on these different illness dimensions
In other words, the authors could have attempted to see if there were meaningful differences between Effexor and SSRIs on important variables, yet they opted to not undertake such analysis. A skeptical view is that they analyzed the data in such a fashion, found nothing, and thus just reported the "good news" about Effexor. I don't know if they conducted additional analyses that were not reported. However, it would seem to me that someone at Wyeth would have run such analyses at some point, perhaps as part of this meta-analysis, because any advantage over SSRIs would make for excellent marketing copy. In fact, Effexor has been running the "better than SSRIs" line for years, based on rather scant data. If there were more impressive data, they would have been reported by now.

Prozac and the Rest: The findings showed that Effexor was superior to a statistically significant degree (i.e., we'd not expect such differences by chance alone) only when compared to Prozac (fluoxetine). The authors, to their credit, pointed this out on multiple occasions. However, their reporting seems a little contradictory when, on one hand, they report that venlafaxine was superior to SSRIs as a class (see quote toward the top of the post), but then note that the differences were only statistically significant when compared to Prozac. The percentage difference in remission favoring Effexor over Zoloft (sertraline) was 3.4%, over Paxil (paroxetine) was 4.6%, over Celexa (citalopram) was 3.9%, and over Luvox (fluvoxamine) was 14.1%. I think just about anyone would concur that the difference versus fluvoxamine seems too high to be credible, and it was based on only one study, making a fluke more plausible. Again, the advantage of Effexor over all SSRIs except Prozac was not statistically significant. Even if these differences were statistically significant, would the authors claim that needing to treat 26 patients with Effexor rather than Celexa to achieve one additional depression remission would improve public health? Small differences on a soft, squishy, arbitrary endpoint combined with not performing (or not reporting) more meaningful analyses = Not news.
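For what it's worth, the drug-by-drug differences quoted above convert to NNTs as follows. The remission-rate differences are from the post; the conversion is just 1/difference, rounded:

```python
# Remission-rate differences favoring Effexor, as quoted above.
# Only the fluoxetine comparison reached statistical significance.
differences = {
    "Zoloft (sertraline)": 0.034,
    "Paxil (paroxetine)": 0.046,
    "Celexa (citalopram)": 0.039,
    "Luvox (fluvoxamine)": 0.141,  # single study -- likely a fluke
}

for drug, diff in differences.items():
    print(f"vs. {drug}: NNT = {round(1 / diff)}")
# -> NNTs of 29, 22, 26, and 7, respectively
```

Treat 22 to 29 patients with Effexor rather than a given SSRI to gain one extra remission on an arbitrary cutoff, with none of those comparisons statistically significant: that is the "public health relevance" on offer.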

The Editor Piles On: In a press release, the editor of the journal in which this article appears jumped on board in a big way:

Acknowledging the seemingly small advantage, John H. Krystal, M.D., Editor of Biological Psychiatry and affiliated with both Yale University School of Medicine and the VA Connecticut Healthcare System, comments that this article “highlights an advance that may have more importance for public health than for individual doctors and patients.” He explains this reasoning:

"If the average doctor was actively treating 200 symptomatic depressed patients and switched all of them to venlafaxine from SSRI, only 12 patients would be predicted to benefit from the switch. This signal of benefit might be very hard for that doctor to detect. But imagine that the entire population of depressed patients in the United States, estimated to be 7.1% of the population or over 21 million people, received a treatment that was 5.9% more effective, then it is conceivable that more than 1 million people would respond to venlafaxine who would not have responded to an SSRI. This may be an example of where optimal use of existing medications may improve public health even when it might not make much difference for individual doctors and patients."
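Krystal's back-of-the-envelope arithmetic does check out, for whatever that's worth; the quarrel is with the endpoint, not the multiplication. A quick check (the 300 million US population figure is my rough assumption; the 7.1% prevalence and 5.9% difference are his):

```python
us_population = 300_000_000           # assumed rough US figure
depressed = 0.071 * us_population     # editor's 7.1% prevalence
extra_remissions = 0.059 * depressed  # applying the 5.9% remission gap

print(f"depressed patients: {depressed / 1e6:.1f} million")      # ~21.3 million
print(f"extra remissions: {extra_remissions / 1e6:.2f} million") # ~1.26 million

# The "average doctor" version: 200 patients switched to venlafaxine
print(round(200 * 0.059))  # -> 12 patients predicted to benefit
```

Multiply a possibly meaningless 5.9% by a large enough population and you get a large, equally possibly meaningless number.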

Seeing a journal editor swallow the Kool-Aid is not encouraging. Again, the 5.9% difference is based on an endpoint that may well mean nothing.

Ghostwriter Watch: Who wrote the study and who conducted the analyses? The authors are listed as Charles Nemeroff, Richard Entsuah, Isma Benattia, Mark Demitrack, Diane Sloan, and Michael Thase. Their respective contributions are not listed in the text of the article. The contribution of Wilfrido Ortega-Leon for assistance with statistical analysis is acknowledged in the article, as are the contributions of Sherri Jones and Lorraine Sweeney of Advogent for "editorial assistance."

Ortega-Leon appears to be an employee of Wyeth. So did an employee of Wyeth run all of the stats, then pass them along to the authors for writeup? Last time I checked, there were sometimes problems associated with having a company-funded statistician run the stats then pass them along without any independent oversight. I don't know what happened, but my questions could have been easily resolved: Describe each author's contributions in a note at the end of the article.

Sherri Jones and Lorraine Sweeney have served in an "editorial assistant" role for other studies promoting Effexor, such as this one. I suspect that they are familiar with the key marketing messages for the drug. An important question: What does "editorial assistance" mean? Did Jones and Sweeney simply spell-check the paper and make sure the figures looked pretty? Did they consult the authors to get the main points, then fill in a few gaps? Or did they write the whole paper then watch the purported authors rubber-stamp their names on the author byline? Simply listing "editorial assistance" is not transparency. I have no problem with medical writers helping with a manuscript, depending on what "helping" means. Many researchers are not skilled writers and cleaning up their writing is a good idea for all parties. But having a medical writer who is paid by a drug company to make sure that key marketing messages are included in the paper can lead to problems.

Stay tuned for Part 2, regarding the unemphasized, but important, finding from this study that antidepressants yield mediocre benefits over placebo.

Update (03-03-08): See comments. A wise reader has pointed out that there are actually three authors from Advogent. Well, um, one author and two editorial assistants. A skeptical person would add that the presence of three medical writers and a Wyeth statistician who appears in a footnote at the end of the study obviates the need for those pesky academic authors except for the need to lend the study a stamp of approval from "independent scientists." Is that too cynical?

Wednesday, February 20, 2008

Medical Bribery: We Want Details

When drug companies provide kickbacks and bribes to physicians, they sometimes make the news for a brief spin around the news cycle, followed by shock when the same thing happens again a few news cycles later. But the point of this post is not to describe the amnesia that has befallen the media, but to wonder why nobody calls out the recipients of such lucre.

To give credit where it is certainly due, Health Care Renewal and some other blogs keep a close eye on such behavior. But my take is that the occasional sense of outrage regarding bribes, kickbacks, and other goodies tends to be shoved nearly entirely in the direction of the drug/medical device industry. Don't get me wrong -- they deserve some serious blame and shame, but if physicians wouldn't take the enticements, then there would be no problem to begin with. As the hackneyed phrase goes, it takes two to tango.

And these legal deals work out great. Merck or Bristol-Myers Squibb or whomever can simply settle the claims with the feds, pay out a sh*tload of cash, but admit no wrongdoing. And the doctors who were bribed -- we rarely know much about them. They seem to get a free pass. Yet when we catch an occasional whiff of what these doctors are up to, it is quite telling.

To quote from my post in May 2007 (which I humbly suggest that you should read in its entirety)...

"Anya’s doctor, George Realmuto, gave several educational marketing speeches for Concerta, manufactured by Johnson & Johnson, which also makes Risperdal. He had the following to say (and I hope he was misquoted) when asked why he gives marketing speeches for drugs.

“To the extent that a drug is useful, I want to be seen as a leader in my specialty and that I was involved in a scientific study,” he said. [i.e. I wanna be a key opinion leader???]

The money is nice, too, he said. Dr. Realmuto’s university salary is $196,310. “Academics don’t get paid very much,” he said. “If I was an entertainer, I think I would certainly do a lot better.”

Hey, can someone fetch me the Kleenex? Making $196,310 per year is a sign that he does not “get paid very much.” Cry me a river. In-blanking-credible."

And I'm just referencing legalized bribery, the kind where docs take cash to become product spokespersons. It would be quite tantalizing if we actually had a better idea of what, exactly, these bribes and kickbacks entailed. Yeah, we know that drug companies blatantly buy off some doctors in developing countries, providing cars, air conditioners, cameras and a wide variety of other products. We also know that drug companies must be a bit more subtle in how they bribe doctors in the so-called developed world. Sure, we have the vacations disguised as "educational meetings", the speaking engagements, seeding trials, and the like -- dressing up bribery as a form of education and/or research. Please feel free to add a few more tricks of the trade in the comments section. I've hit my limit for press releases which mention legal settlements and bribing doctors, yet fail to mention what "bribing" actually means.

Tuesday, February 19, 2008

Why I Love the Discussion Section

A recent study in the Journal of Clinical Psychopharmacology found that aripiprazole (Abilify) offered no benefit over placebo in treating bipolar depression. Well, at least that's what the results showed, but the discussion section told a bit of a different story. At the end of eight weeks, Abilify failed to beat placebo on either the Montgomery-Asberg Depression Rating Scale or the Clinical Global Impressions -- Bipolar Severity of Illness Scale.

It is rare that an industry-sponsored article reports negative results and it would be nigh-impossible to find a published industry-sponsored study that failed to put a happy spin on the negative results. Sure, the results were negative in this study, but if the dosing was different, the treatment could have worked. There's always a loophole, some possibility that results would have been dandy if something were different. Check this out:
It is possible that the dosing regimen used in the current studies may have been too high for this patient group, or that titration was too rapid. Specifically, the unexpectedly high rates of discontinuation caused by any reason or because of AEs suggest that the aripiprazole starting dose (10 mg/d) may have been too high and that the dose titration (weekly adjustments in 5-mg increments according to clinical response and tolerability) may have been too rapid...

However, because preliminary data indicate that aripiprazole may have a potential value as adjunctive therapy in patients with bipolar depression, future studies that focus on the use of aripiprazole as adjunctive therapy using a better-tolerated dosing schedule with a more conservative escalation may be of greater value for the treatment of patients with bipolar depression...
And my favorite part...
Although the improvements in MADRS total scores in the current aripiprazole studies did not separate statistically significantly from placebo at end point, the significant effects observed with aripiprazole monotherapy within the first 6 weeks are clinically meaningful and similar to the effects seen with olanzapine monotherapy and lamotrigine monotherapy in patients with bipolar depression.
OK, so the argument is that while treatment did not work at the end of 8 weeks, the effects after 6 weeks were really super-duper impressive. Gimme a break. The authors did not present the actual numbers on the MADRS (the primary manner in which depression was assessed); rather, the data were presented in figures. Um, isn't science supposed to be based on numbers -- shouldn't they be provided in the text of the paper? At 6 weeks, the difference in scores between Abilify and placebo looks to be a little more than 2 points on the MADRS, a rating scale that spans from 0 to 60. And if a drug makes a person 2 points better relative to placebo, then the findings are "clinically meaningful"? Keep lowering that bar, fellas. While the discussion reaches out to rescue the reputation of Abilify, it does (to be fair) also point out on a couple of occasions that Abilify was not particularly efficacious at the end of 8 weeks and was related to a worse safety/tolerability profile than placebo. In fact, relative to some other studies I've dissed regarding their sunny presentation of unimpressive results (like this one), the current Abilify article is a model of fair discussion.

Side note: Akathisia was reported by about a quarter of patients taking Abilify. The funny thing about akathisia is that it is not well-defined in this study or in many others. Is it a problem with movements, mental tension, something else, or what? It would seem important to know, given that Abilify apparently causes akathisia in droves. Do a Pubmed search for aripiprazole and akathisia and you'll see what I mean. A couple descriptions of akathisia follow:
  • Increased tenseness, restlessness, insomnia and a feeling of being very uncomfortable
  • On the first day of treatment he reacted with marked anxiety and weepiness, on the second day felt so terrible with such marked panic at night that the medication was cancelled
  • A movement disorder characterized by a feeling of inner restlessness and a compelling need to be in constant motion, as well as by actions such as rocking while standing or sitting, lifting the feet as if marching on the spot, and crossing and uncrossing the legs while sitting. People with akathisia are unable to sit or keep still, complain of restlessness, fidget, rock from foot to foot, and pace.

Wednesday, February 13, 2008

Key Opinion Leaders and Information Laundering: The Case of Paxil

Joseph Glenmullen’s testimony regarding GlaxoSmithKline’s burial of suicide data related to Paxil, which was discussed briefly across the blogosphere last week (Pharmalot, Furious Seasons, for example), was quite interesting in many respects.

One important aspect that needs public airing is how key opinion leaders in psychiatry were used by GSK to help allay fears that Paxil might induce suicidal thoughts and/or behaviors. When GSK issues statements indicating that Paxil is not linked to increased suicide risk, many people will think "Gee, of course GSK will say Paxil is not linked to suicide -- it's their product, after all." But when purportedly independent academic researchers make the same claims regarding the alleged safety of Paxil, then people tend to think "Well, if these big-name researchers say it's safe, then I suppose there's no risk." But what if GSK simply hands these big-name researchers (aka "key opinion leaders") charts with data, and then the "independent" researchers go about stating that Paxil is safe? Mind you, the researchers in question don't see the actual raw data -- just tables handed to them by GSK -- in other words, they simply take GSK's word that the data are accurate. In essence, these researchers are serving as information conduits for GSK.

But wait a second, what if the charts and data tables handed to them by GSK are not an accurate representation of the raw data; what if GSK is lying? Well, of course, it turns out that GSK was lying in a big way for several years. This post will not go into depth on the suicide data, as it has been covered elsewhere (1, 2, 3) -- even GSK now admits that Paxil is related to an increased risk of suicidality.

My main question in this post is how we are supposed to trust our "key opinion leaders" in psychiatry if they are willing to simply look at data tables from GSK (and others), then make pronouncements regarding the benefits and safety of medication without ever examining the raw data. To put this in layman's terms, suppose an election occurs and candidate A wins 70% of votes while candidate B wins 30% of votes. As the vote counter, I then rig the results to say that candidate B won the election by a 55% to 45% margin. Suppose that the election certification board shows up later and I show them a spreadsheet that I created which backs up my 55% vote tally for candidate B. The election board is satisfied and walks away, not knowing that the vote counting was a sham. Obviously, the election board should have checked the ballots (the raw data) rather than simply examining the spreadsheet (the data table). In much the same way, these so-called thought leaders in psychiatry should have checked the raw data before issuing statements about Paxil.
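The analogy above can be sketched in a few lines of code. All the names and numbers here are hypothetical, but the point generalizes: re-deriving the summary from the raw data is cheap, and it catches the fraud that merely inspecting the summary table never will:

```python
# Toy version of the election-board check: recount the "ballots" (raw data)
# rather than trusting the summary spreadsheet handed to you.
from collections import Counter

ballots = ["A"] * 70 + ["B"] * 30      # the raw data: A actually won 70-30
reported = {"A": 45, "B": 55}          # the rigged summary table

recount = Counter(ballots)
total = sum(recount.values())
actual = {cand: 100 * count // total for cand, count in recount.items()}

print(actual)               # {'A': 70, 'B': 30}
print(actual == reported)   # False -- the summary does not match the raw data
```

The key opinion leaders in question skipped the recount step entirely; they worked only from the "spreadsheet."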

What did these key opinion leaders say about Paxil? Some quotes from Glenmullen's testimony follow, based upon documents he obtained from GSK's archive. Here's what David Dunner (University of Washington) and Geoffrey Dunbar (of GSK) reportedly said at a conference:
Suicides and suicide attempts occurred less frequently with Paxil than with either placebo or active controls.
John Mann of Columbia University, regarding how data were collected:
We spent quite a bit of time gathering data from various drug companies and formulating it into the publication of the committee's findings
The committee he references is a committee from the American College of Neuropsychopharmacology, the same organization that issued a dubious report blessing the use of antidepressants in kids.

More from Mann, after being asked if he saw raw data or just data summarized in tables:
To be perfectly honest, I can't recall how much of the statistical raw data we received at the time that we put these numbers together...No, I think we all went through the tables of data that were provided at the time.
To use the analogy from above, the election board did not actually see the ballots. Stuart Montgomery is next. He was an author, along with Dunner and Dunbar, on a paper in the journal European Neuropsychopharmacology that stated:
Consistent reduction in suicides, attempted suicides, and suicidal thoughts, and protection against emergent suicidal thoughts suggest that Paxil has advantages in treating the potentially suicidal client.
Did Dunner see any raw data?
Dunner: I didn't see the raw data in the case report forms. I did see the tables. I work with the tables. The tables came before any draft, as I recall. We -- we created the paper from the tables.

Attorney: And -- and you never questioned, did you, or did you not question the validity of the data in Table 8?

Dunner: No
The above-mentioned paper that gave Paxil a clean bill of health? According to a GSK document examined by Glenmullen, it was used by GSK to help convince physicians that they need not worry about Paxil inducing suicidality.

If you are an academic researcher, and you simply take data tables from drug companies then reproduce them in a report and/or publication, you are not doing research -- you are laundering information. People think that you have closely examined the data, but you have not, and you are thus doing the public a disservice.

I am unaware of any of the above researchers ever issuing a public apology. I can respect the context of the times; researchers may not have been aware of how pharmaceutical companies fool around with data in the early '90s. So if anyone wants to issue a mea culpa, I'd respect such an apology, but I have a feeling that not a single one of the above-named individuals (nor this guy) will make an apology. Instead, it will be more business as usual, as these key opinion leaders, knowing who butters their bread, will continue to launder information and tell the public that everything will be fine and dandy if they just take their Paxil, Seroquel, or whatever hot drug of the moment is burning up the sales charts.