Thursday, January 31, 2008

Peer Review, GSK, Cash, and Limp Noodles

Stephanie Saul has a quite interesting story in the New York Times about a peer reviewer who really dropped the ball.

A key member of the Senate said Wednesday that a prominent diabetes expert leaked an unpublished and confidential medical journal article to GlaxoSmithKline last year, tipping the company to the imminent publication of safety questions involving the company’s diabetes drug Avandia.

The doctor, Steven M. Haffner of the University of Texas Health Science Center in San Antonio, faxed the article to the drug maker after agreeing to read it as part of the peer-review process for the New England Journal of Medicine, according to a statement Wednesday by Senator Charles E. Grassley...

An article on the matter that was published online Wednesday by the journal Nature quoted Dr. Haffner: “Why I sent it is a mystery,” he said. “I don’t really understand it. I wasn’t feeling well. It was bad judgment.”

OK, I have to give Haffner credit for admitting his error. However, "I wasn't feeling well" is not a great excuse. According to a GSK spokesperson, Haffner sent the article to GSK for "advice from experienced statisticians." The spokesperson denied that GSK provided any feedback to Haffner. Um, couldn't Haffner have found a statistician who did not work for GSK? As you might guess, this type of behavior is a no-no; the New England Journal of Medicine, like virtually all journals, prohibits peer reviewers from sharing the content of papers under review with anyone else.

Why did Haffner go to GSK? Well, here's one possible reason...

Dr. Haffner has previously disclosed that he has conducted research and served as a paid speaker for Glaxo. [Iowa Senator] Mr. Grassley said that Dr. Haffner had received $75,000 in consulting and speaking fees from GlaxoSmithKline since 1999.

Maybe it was his relationship with GSK, maybe not. Haffner is apparently not a fan of medical journals, as can be seen in the quote below:

“The three major medical journals are becoming more like British tabloid newspapers — all they lack is a bare-chested woman on page 3.”
Apparently, if the journals changed their peer review process so that papers critical of industry were submitted for "objective" review by precisely the companies being criticized, that would help to de-tabloid them??

At the end of the article, Saul notes a case of limp noodle punishment for a similar violation...

Last year the New England Journal sanctioned another physician, Dr. Martin B. Leon, for commenting on a study before its publication. Dr. Leon, who was a reviewer of a journal article on the effectiveness of heart stents, disclosed at a medical conference that the study’s findings were negative before the article appeared. As a result, the journal barred Dr. Leon from reviewing articles for five years, and said he could not submit commentary for publication in the journal during that period.

Oh, THAT will teach him a lesson! He can't spend his free time reviewing articles for a journal. Ouch, that might leave a mark. And he can't publish an editorial for five years in one journal?? With hundreds of other journals to choose from, how will he survive? See some interesting comments on the Leon case here.

Hat Tip: Furious Seasons

Tuesday, January 29, 2008

Fred Hassan's Hindquarters: Meet Peter Rost's Boot

Schering-Plough CEO Fred Hassan gets served (link contains adult language) by Peter Rost. Badly. While some have hailed Hassan as a genius in past years, Rost offers a much less flattering take on his record. Here's Rost's lead-in to the story:
In this article BrandweekNRX will take you behind the glossy numbers and show you what really happened during Hassan’s tenure as CEO of two major drug companies and how he managed to fool almost everyone into thinking he had created successful turn-arounds.
Gee, Peter, that's not very nice. Then again, neither is this:
And that’s not all--there is also evidence that a significant part of Pharmacia sales were based upon selling drugs to its wholesalers ahead of demand, “stuffing the channels,” resulting in revenue of $500 million from such sales.
It just gets even more intriguing from there. I've not posted on the Schering-Plough sinking ship until now, but I've been watching the story, which just keeps looking worse for the company. I thank Rost for keeping a close eye on this saga and look forward to reading more about it at Brandweek.

The Serotonin Monster Strikes Again

Last Halloween, I discussed the legendary Serotonin Monster, that is, the alleged chemical imbalance that causes depression, anxiety, aggression, and God only knows what else. I ranted briefly and referred interested readers to an excellent article in PLoS Medicine from Jeffrey Lacasse and Jon Leo.

Turns out that Leo and Lacasse are still on the case of the Serotonin Monster. They have a newer article, recently published in the journal Society, that sheds further light on this mysterious creature. The full text of the article is freely available online; I encourage everyone to read it. They are concerned that most people get their information from magazines or newspapers, which they believe present the overly simplistic "chemical imbalance" theory of depression as if it were based on solid science. Here's what they did to investigate how the media presented such theories:
To determine the evidence behind the media’s claims about chemical imbalances, for approximately 1 year we performed weekly Internet searches of the media for “chemical imbalances” and sent e-mails to the authors asking them for the evidence they were basing their statements on.
Can you guess their results? I bet you can. Think for a minute. OK, here's one example of what they found:
In an article for the Sacramento Bee (3/9/07), about how to handle teenagers with depression, the author states: “Act promptly and accept that they may have a chemical imbalance [italics added] or need help with coping skills.” In reply to our questions, the author mentioned that: psychiatrists would be the best people to talk with about chemical imbalances; mental illnesses have been linked to chemical imbalances; psychiatrists are trained to figure this out through a variety of tests; and that “numerous studies have been done” and “the research is definitely available.” We pointed out to her that, if there are “numerous studies” which are “definitely available,” then it should be relatively easy to cite at least one article. She did not reply. We also mailed a copy of our e-mails to an editor at the Sacramento Bee.
Another example:
In another New York Times article (6/19/07), “On the Horizon, Personalized Depression Drugs,” Richard Friedman, the chairman of psychopharmacology at the Weill Cornell Medical College, stated: “For example, some depressed patients who have abnormally low levels of serotonin respond to SSRIs, which relieve depression, in part, by flooding the brain with serotonin.” For his evidence he supplied a 2000 paper by Nestler titled, “Neurobiology of Depression,” which focuses on the hypothalamic pituitary system but not on serotonin.
One more:
The Bradenton Herald (3/24/07) published an article entitled, “Seniors Sought for Depression Study.” The primary source for the article was Dr. Andrew Cutler, the director of the Florida Clinical Research Center, who is extensively quoted and referred to by the reporter. “True depression,” Cutler says, “has its roots in a chemical imbalance in the brain.” Neither the reporter nor Dr. Cutler replied to e-mails.
Sidebar: Dr. Cutler. Andrew Cutler has appeared previously on this site.
  • One appearance was about a misleading statement he made when stumping for Seroquel. He spoke of a Seroquel trial as having several advantageous features, including its inclusion of Bipolar II patients. The problem, unacknowledged by Dr. Cutler, was that Seroquel was no better than placebo for this group of patients.
  • I think that if a researcher makes a statement that "true depression has its roots in a chemical imbalance in the brain," that researcher should respond to emails asking him to back up his argument.
  • I also found it curious that he noted that he keeps his placebo response low when running clinical trials. Some of his patient recruitment practices also raised potential ethical questions, at least in my mind. These were just questions -- I'm not saying what he was doing was either right or wrong.
  • Standard disclaimer: For all I know, Cutler is a great guy. I am not issuing a personal attack here -- Just noticing a few things that raised questions for me.

Back to the Article. In the piece, there were several other examples of bogus claims about chemical imbalances in the media. In not a single instance was a journalist able to drum up convincing evidence to support the claims regarding a serotonin deficit. Here are some questions for the media to ponder...
Considering the media’s inability, or unwillingness, to cite evidence in support of their own statements, can the same group really be expected to go one step further and actively investigate these issues? The solution is not simply for the media to modify, or tone down, their own statements about the chemical imbalance theory, but is for them to take a more analytical approach with those who promote the chemical theory as ineluctable truth. In other words, rather than us questioning the media, shouldn’t the media be doing the questioning? It’s almost as if these reporters are blinded by the term, “peer reviewed,” and operate under the mistaken assumption that the words are some sort of stamp declaring that the results are unquestionable and that they can check their skeptical radars at the door when given a press-release mentioning a peer-reviewed article.
The article then mentions that Pfizer tried to pimp the chemical imbalance theory to a reporter who was interested in Lacasse and Leo's prior investigation. To encourage my audience to read the article, I won't discuss it here, but will mention that Leo and Lacasse's dissection of Pfizer's evidence is intriguing, as it involves conflicts of interest, buried data, and other such tricks that have been discussed often on this blog.

Enter Judith Miller. As I was reading this article, I had thoughts of Judy Miller, so I was apparently on the same wavelength as the authors when they wrote toward the end of their paper:
A comparison of the media’s reporting about mental illness to the biased reporting in the New York Times about the events leading up to the Iraq War does not seem far-fetched. In hindsight, as the Times editors now acknowledge, Judith Miller’s war coverage was overly one-sided. Her fundamental flaw could be described as a lack of professional skepticism toward the Bush administration, as she willingly parroted what those pushing for war were saying, while giving little credence to the stance of the other side. Writing in the New York Review of Books, Michael Massing commented that the Times and Miller’s reporting were examples of media “submissiveness.”

This depiction could just as well apply to the media’s reporting of mental health issues. As just one example, in some cases, the media still go to the people responsible for the original problems. For instance, several of the researchers involved with the studies of SSRIs in children are still cited in the press even though the following information has come out about their published studies: they downplayed the suicide risk; they exaggerated the benefits; and the papers published under their names were actually written by ghostwriters paid by the pharmaceutical industry.
Indeed, SSRIs for child/adolescent depression will make fodder for the ages; the story of key opinion leaders/leading lights in child psychiatry acting as either "dupes" or willing co-conspirators should be read by EVERYONE (1, 2, 3, 4, 5).

In sum, another nice piece of work from Leo and Lacasse. I understand that being a journalist is not an easy job. Essentially, one is tasked with writing on a wide variety of topics, generally outside of one's areas of expertise. Thus, journalists must rely on their sources, but at the same time, it is quite important that journalists do more than simply accept whatever their sources tell them without any sort of thoughtful evaluation.

Friday, January 25, 2008

Bribing Physicians: Where My Money At?

An utterly fascinating article in the Wall Street Journal by Vanessa Fuhrmans notes that insurers are moving to bribe doctors for prescribing generics. Snippet below, but you really should read the whole article.

Health plans are drawing scrutiny for offering financial incentives to entice doctors to prescribe cheaper generic medicines, including paying doctors $100 each time they switch a patient from a brand-name drug.

Pharmaceutical companies have long gone to great lengths to try to get doctors to prescribe their brand-name pills. They spend billions of dollars, plying physicians with samples, educational lunches and speaker fees. But as the patents for a growing number of blockbuster medicines expire, some health insurers are trying to trump those perks with bonuses or higher reimbursements for writing more generic prescriptions.

The idea, health plans say, is to save everyone -- patients, employers and insurers -- money. And many doctors argue that it's only right to reimburse them for spending time evaluating whether a cheaper generic alternative is better or as good for a patient.

But the more aggressive approaches, such as cash rewards for each patient switched from a given list of drugs, are coming under fire for injecting financial incentives into what some patient advocates and legislators say should be a purely medical decision. Medical societies are also concerned that such rewards may put doctors in the ethically questionable position of taking a payment that patients know nothing about.

There is little doubt that many newer medications offer little to no benefit over generics, and this site and others have frequently noted that prescription practices often appear quite irrational. In psychiatry, for example, the movement to place everyone on Depakote and/or atypical antipsychotics for bipolar disorder was a marketing miracle, considering that the evidence base never showed superior efficacy relative to lithium. The same story goes for treating schizophrenia with atypical antipsychotics rather than conventional antipsychotics (while ignoring psychosocial interventions), based on a set of obviously flawed studies. Or for treating depression with newer antidepressants rather than generics or (especially) psychotherapy, which is generally linked to better long-term outcomes.

Of course, as has been documented here and many other places, we know that Big Pharma utilizes a variety of methods to ensure that physicians prescribe newer drugs, even if such prescriptions are irrational. If we just consider this as a battle of mega-industries who want to maximize their profits (Pharma vs. Insurance), then maybe this is the unfettered free market at its best?

On one hand, we have Pharma using a variety of tricks, including: buying meals, providing all sorts of gifts, infomercials disguised as medical education, tricky statistics, burying negative findings, and just being sooooo good looking, and on the other we now have insurers providing kickbacks to doctors for prescribing generics. Both pharma and insurers are attempting to influence prescribing through methods far outside of providing objective medical information to physicians. I know that some favor a pure free market approach and if so, then I suppose that this is just the latest and greatest maneuver in which companies attempt to pimp their wares (Pharma) or buy physician loyalty to a different set of products (Insurers).

As for the patients, um, who is looking out for their interests? I realize that physicians genuinely want their patients to improve (well, maybe not this one), but is a system of competing interests trying to irrationally manipulate physicians' prescribing practices really the best way to ensure patient wellness?

By the way, how much $$$ could insurers save? See below.
Hat Tip: PharmaGossip

To sum up the current state of affairs in four words: Dolla Dolla Bill Y'All

Thursday, January 24, 2008

My Political Leanings Are Out in the Open (Sort of)

As noted on the Scientific Misconduct Blog and PharmaGossip, there is an online test available (the Political Compass) that claims to measure one's political leanings. If you suspect that a writer's political views might influence his or her writing, then my potential bias is revealed in the graph below. I encourage everyone who blogs about pharma, mental health, or related topics to take the test and post the results on your blog. If you're not a blogger, it is still an interesting test that is probably worth your time.

Disclaimer: I can't attest to the validity of this test. It is not intended to diagnose, treat, cure, or prevent any disease.

Wednesday, January 23, 2008

Key Opinion Leaders, Continuing Medical Education, and Utter B.S.

Psychiatrist Bernard Carroll has another brilliant post on corrupt, er, continuing medical education (CME) and how the process has been co-opted by various commercial interests. His post a few days ago was certainly great, and in combination with his current post, I officially declare that Bernard Carroll is ON FIRE!

Here's a bit of what he had to say. Commit this paragraph to memory:
Medical journals are not the only compromised medium. Continuing Medical Education (CME) is a second front in the campaign to expand the AAP [atypical antipsychotic] drug market. The standard formula calls for corporate sponsorship channeled through an “unrestricted educational grant” to a medical education communications company (MECC). The MECC employs writers to prepare the “educational content,” and academic KOLs are recruited to deliver this content. The KOLs are chosen for their willingness to be “on message” for the corporate sponsor. If they go “off message” they know they will not be invited back. The talk of “unrestricted grants” is window dressing. The MECC also secures the imprimatur of a nationally accredited CME sponsor, typically an academic institution. The sponsor is paid to certify that the CME program meets the standards of the Accreditation Council on Continuing Medical Education (ACCME). Everybody turns a buck: the MECC and its staff are handsomely paid (CME is now a multi-billion dollar business); the KOLs are generously rewarded with honoraria and perquisites; the academic sponsor is well paid by the MECC; the ACCME receives dues from the academic sponsor; the audience obtains free CME credits rather than having to pay for these required educational experiences; and the corporate sponsor gets what it considers value for its marketing dollar.
Guess what... Charles Nemeroff is also featured -- regular readers will note that his name has appeared on a few occasions on my site. Carroll takes apart a recent CME exercise in which Nemeroff presented information that appears to be false. In fact, I detailed some of the problems with this CME exercise here. Carroll's post has a number of updates. There were chances for this CME exercise to be corrected in some form, but misinformation apparently prevailed yet again.

Regarding another of the atypical antipsychotics discussed in this wonderful CME piece, Carroll wrote (in part) the following:
When discussing aripiprazole for nonresponding depression, Dr. Nemeroff once again was economical with the truth. Note that Bristol-Myers Squibb, the marketer of aripiprazole, sponsored this PeerView/UCLA program. To document his claims about aripiprazole, Dr. Nemeroff cited one Abstract from the American Psychiatric Association meeting in May 2007. That does not meet ACCME standards of documentation for learners, most of whom would be unable to access the cited Abstract (not that it would tell them much even if they could). For some reason, Dr. Nemeroff did not inform learners that the complete report of the aripiprazole study had appeared in June 2007 (Berman RM et al. J Clin Psychiatry 2007;68: 843-853), fully 5 months before the CME event went on-line. From that readily available report it is clear that the Number Needed to Treat (NNT) for response with aripiprazole is 10, which compares unfavorably with a NNT of 4 for lithium, the best established augmenting option in placebo-controlled trials. A NNT of 10 means a clinician would need to treat 10 patients with aripiprazole before obtaining one remission that would not have occurred anyway with placebo. That does not constitute compelling clinical benefit. Dr. Nemeroff did not candidly discuss these troubling data. Dr Nemeroff provided his CME audience none of the remission or response data from the published aripiprazole study, though these data were readily available. These omissions of published, highly relevant information signify disrespect for his audience by Dr. Nemeroff, incompetence by the MECC, and failure of due diligence by the accrediting institution, UCLA, to ensure that accurate, balanced information and adequate documentation are provided.
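Carroll spells out what an NNT of 10 means in practice; for anyone who wants the formula behind it, NNT is simply the reciprocal of the absolute risk reduction, i.e., the difference in response rates between drug and placebo. Here is a minimal sketch; the response rates below are made-up placeholders chosen only to reproduce the NNTs of 10 and 4 that Carroll cites, not the actual trial numbers.

    # NNT = 1 / ARR, where ARR = response rate on drug - response rate on placebo.
    def nnt(rate_drug, rate_placebo):
        """Number of patients one must treat to get one extra response over placebo."""
        return 1.0 / (rate_drug - rate_placebo)

    # Hypothetical response rates chosen only to match the NNTs quoted above:
    print(nnt(0.35, 0.25))  # ARR = 0.10 -> NNT = 10 (aripiprazole, per Carroll)
    print(nnt(0.50, 0.25))  # ARR = 0.25 -> NNT = 4  (lithium, per Carroll)

The larger the NNT, the weaker the drug's edge over placebo.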
I shall not steal any more of Carroll's thunder. Head over to Health Care Renewal and check it out in full.

Tuesday, January 22, 2008

Defending the Hiding of Negative Clinical Trial Data

In the wake of the New England Journal of Medicine study that revealed a sizable discrepancy between the raw data on antidepressants and the data published in medical journals, a few people have jumped to the defense of drug companies. In this post, I will examine their arguments and compare them with evidence. As an extra feature, the DrugWonks' disconcerting love of Vioxx will be discussed toward the end of the post.

"Nathan", who posts frequently on the excellent Pharmalot blog, made several comments. Here's one of them:

For those of you that haven’t written up a scientific article for publication, I’ll make you aware of a few things:
1) Journals don’t accept an article just because it is written. Journal editors scrutinize it and submit it to reviewers who further scrutinize it — very frequently rejecting articles that aren’t conclusive or are poorly written.
2) Scientists write scientific articles — not public relations representatives.
3) Scientists generally don’t waste weeks (or months) of their time writing up articles that don’t prove anything and probably won’t be accepted by the journal. A negative result is generally meaningless in science. Do you think Thomas Edison wrote up articles about the 900 filaments that failed to light a light bulb?

Now I’ll ask again: Is it any surprise that the negative clinical trials were not written up as publications?

Here’s one example: The Paxil trial may have tried doses of 10, 25, and 50 mg. The 10 and 25 mg dose failed to show an effect. The 50 mg dose did show an effect. Why should anyone waste their time writing up a journal article explaining why the 10 and 25 mg dose failed? It’s obvious that drug levels just weren’t high enough to observe an effect.

The problems with the above post:

1. Well, let's face it, peer review does not exactly catch everything. In fact, peer review often results in the publication of very poor studies, like the infamous Paxil Study 329. But it actually appears that Nathan's point is that studies that "aren't conclusive or are poorly written" don't get published. So is he saying that the positive studies of SSRIs were more likely to be well written than studies that found negative results? Not sure that makes much sense. Perhaps he is implying that a negative study is not "conclusive" -- well, neither is a positive study, for that matter. When one considers the small benefit of antidepressants over placebo, even the positive studies have not exactly yielded conclusive evidence of anything particularly impressive.

2. Nathan is either naive on this point or is being dishonest. It is well known that ghostwriters are frequently used in the publication process (1, 2, 3, 4, 5). This is not a problem when they are just helping a poor writer piece together a readable manuscript, but it is often a huge problem given that ghostwriters have a glaring conflict of interest: they are hired by drug companies to pen papers that will paint the product in a favorable light.

3. "A negative result is meaningless in science." That is an amazing comment. Truly amazing. So if 10 trials are conducted on a drug, and one turns up positive, then I suppose everyone should prescribe that drug because the nine negative results were "meaningless." Yeah, that makes a lot of sense.

3.5. The point about dosing would make sense except that it is irrelevant to the case at hand. Remember, Nathan is commenting specifically on the antidepressant studies that did not get published, and he is hypothesizing that those studies tested doses that were never approved as effective. Perhaps he missed page 253 of the article, which states:
We included data pertaining only to doses later approved as safe and effective; data pertaining to unapproved dosages were excluded.
Oh, so the authors did not include data for unapproved doses. So much for that critique. Reading a study before offering critiques of it: A good idea.

Nathan in another comment:
I understand how this APPEARS to be a smoking gun. But I’ll state again why I believe it is not: Let’s say I do a clinical trial with Paxil at 10 mg (once per day), 20 mg (once per day), 20 mg (twice per day), and 50 mg (twice per day). That’s a total of 4 trials. Only the 50 mg dose works. Are the other three trials counted as “negative” evidence that the drug doesn’t work? Of course not.
See above for why this critique holds no water.

Nathan:

As Bob pointed out, editors like to accept articles that make their journal look good. NEJM is probably not going to accept a bunch of articles about clinical trials that didn’t work. That doesn’t mean the editors are in some sort of under-the-table deal with the company. It simply means that they are looking out for the best interest of the JOURNAL, not the public.


I think we’re all missing the point. Of course scientists like to publish things that work, not things that fail. The point is that ALL clinical trial data (the good, the bad, and the ugly) is available to the public, the FDA, and anyone else interested. Whether or not the data makes it into a journal is irrelevant.

All clinical trial data is available to the public? Cue the laugh track. Turn it up a little. No, a little more still. Ah, there we go. Who is he kidding? There is no comprehensive public repository of clinical trial data. That is part of the reason why the problem with ghosted science exists in the first place. There are clinical trial registries, but go ahead and check out clinicaltrials.gov. One can outline a study on such a registry and then report the results in whatever manner one deems fit, or not at all.

Enter the DrugWonks: And, as you would have guessed, the dependable, fair and balanced DrugWonks weighed in as well. Here are pieces of Robert Goldberg's post:
The NEJM of medicine recycles the old story that many of negative studies about antidepressants were not published. That doesn't affect whether the drugs work or not. It does add to the distortion of what a negative study is and why they are negative. Most of the time they are negative because they simply confirm the hypothesis. Other times they are poorly designed or small studies of little statistical power. They don't prove that the drugs fail. There is a difference. Taken together they can often help guide who responds to what medicines or why not...which again is why we need the Critical Path. To suggest that the failure to publish negative studies is part of a coverup is wrong and leads to fearmongering once again. We have been down this road. And journalists are once again raising unfounded fears about the safety and efficacy of drugs...leading people to die because they stop taking medicines because of the fearmongering the media has engaged in regarding vaccines, SSRIs, Avandia, Vioxx and Vytorin
As for the argument that studies were negative simply because their sample sizes were small, please read the following from the article (page 254):
The difference between the sample sizes for the published studies (median, 153 patients) and the unpublished studies (median, 146 patients) was neither large nor significant.
In other words, Goldberg's analysis is wrong. As I stated above: Reading a study before offering critiques of it: A good idea. As for "Most of the time they are negative because they simply confirm the hypothesis." -- What the hell does that mean? Confirm what hypothesis? This is the sort of seemingly random statement that I have run across frequently on the DrugWonks site. As for the negative studies being poorly designed (so the positive studies were designed better?), this seems wrong as well. These studies aren't particularly complex. One group gets the drug, the other gets placebo. Participants are assigned to groups randomly. In theory, participants are unaware of whether they are receiving drug or placebo. It appears that there were not large differences in how these studies were designed. In any case, Goldberg provides not one single shred of evidence to support his claims. Nathan, mentioned above, also provided no evidence to support his critiques of the NEJM study.

Bring on the Vioxx Love. Goldberg then goes to a new level. He wrote, in part:
And journalists are once again raising unfounded fears about the safety and efficacy of drugs...leading people to die because they stop taking medicines because of the fearmongering the media has engaged in regarding vaccines, SSRIs, Avandia, Vioxx and Vytorin
Let's just focus on one of these medicines: Vioxx. I should mention that as I type this, I am quaking with rage. Goldberg is either incredibly ignorant or incredibly dishonest. For a person who claims expertise in matters of drugs, he should know much better than this. He is claiming that people died because they stopped taking Vioxx. He is apparently serious. Lord Jesus, tell me that this man is joking... First of all, how many people have died because they switched from one minor painkiller to another? Bob, please provide the data. PLEASE.

But (and this is where the rage is coming from), Bob, since you are a Drug Wonk, which would imply at the very least a firm understanding of important and highly publicized findings from the medical literature, how many people died because they did take Vioxx? According to a study by David Graham and colleagues published in the Lancet:
Using the relative risks from the abovementioned randomised clinical trials and the background rates seen in NSAID risk studies, an estimated 88,000–140,000 excess cases of serious coronary heart disease probably occurred in the USA over the market-life of rofecoxib. [If even a third of these people died, well, you do the math...]
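To spell out the math that bracketed comment invites: a third of 88,000 to 140,000 excess cases works out to roughly 29,000 to 47,000 deaths.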
But taking this drug off the market resulted in people dying? Are you *!@! kidding me? Because Graham is a "fearmonger," one of the favorite epithets thrown around on the DrugWonks site, how about we not take his word for it... What have other researchers found? One meta-analysis found an increased risk for heart attacks with COX-2 inhibitors relative to placebo and to some other drugs. Another meta-analysis also found elevated heart attack risk for Vioxx. How about this meta-analysis, which found increased risk for both cardiovascular and renal events on Vioxx? Another meta-analysis found cardiovascular events were related to Vioxx use.

So, Bob, there's my evidence on Vioxx. Where's yours? It's clear that Bob and I differ on a number of points. I think reasonable people can disagree on many items, but Vioxx is not one of them. A drug that has been clearly and repeatedly linked to serious health problems -- and Bob is defending it, while calling those who criticize it "fearmongers." Nice game plan.

A More Reasonable Critique. The Last Psychiatrist offers a more reasonable analysis, suggesting that academic authors did not want to publish the negative studies because doing so would make them look incompetent -- in the mainstream culture of medicine, the question would be: who runs a trial and finds negative results? And even if the negative studies had been submitted to a journal, they were likely to be rejected by peer reviewers and editors.

This may be true to an extent, but I tend to believe that some negative studies would have been published. There have always been journal editors with some ability to think critically and a willingness to publish material that runs counter to mainstream medical opinion. Sure, some of the studies would have been rejected a time or two, but I think they would have been published at some point. While I think his analysis is intelligent (and it thankfully does not involve the term "fearmonger"), I think it offers only a partial explanation.

Missing the Boat. I could certainly have missed something, but I have not seen even a lame attempt to critique the part of the study that is most damning. Remember, the study found that drug companies changed their primary outcome measures and statistical analyses between submitting to the FDA and submitting for journal publication. This resulted in inflated effect sizes for every antidepressant. Kind of a big deal, as the medical literature ends up suggesting the drugs are more beneficial than they actually are. Not even DrugWonks has mentioned this rather major point.

Oh, and one more thing for Bob Goldberg: Before engaging in tactics that border on slander against Roy Poses of Health Care Renewal (in a recent, often nonsensical post that I won't honor with a link), you might want to improve your own credibility.

If you missed it, read my post about the NEJM study that some folks are now critiquing.

Friday, January 18, 2008

Risperdal for Depression: A Depressing Look

Bernard Carroll, a psychiatrist who is known for laying down the smack on drug industry spin, has a fantastic post at Health Care Renewal that should be read by all. It deals with the marketing and "science" surrounding the use of Risperdal for treatment-resistant depression, a topic I've discussed at length on this site previously.

Teaser:
The campaign aims to shape a favorable climate of opinion for the drug through experimercials (commercially strategic clinical trials) and journal publications that are really infomercials. The stakeholders are some major corporations, “key opinion leaders” (KOLs), leading medical journals, and several million patients who suffer from nonresponsive depression in the US. The winners are the KOLs and the corporations, while the big losers are the patients.
The plot thickens:
You begin to see the picture: we have the appearance of editorial self-dealing, including product placement for a corporate client of the editor; an incompetent or possibly dishonest journal review process; and the appearance that somebody went out of his way in two places to insert a false claim of efficacy into the report of a negative clinical trial.
The depth of reporting is excellent. I can vouch for Dr. Carroll's analysis of the situation and note that it closely matches my own investigation into the Risperdal for depression studies. When industry, academia, and buckets of money collide, the outcome is predictable.

Thursday, January 17, 2008

Antidepressants: Hiding and Spinning Negative Data

As I alluded to yesterday, a whopper of a study has just appeared in the New England Journal of Medicine. It tracked every antidepressant trial submitted to the FDA, comparing the results as seen by the FDA with the data published in the medical literature. The FDA receives raw data from the submitting drug companies for each study. This makes great sense, as FDA statisticians can then run their own analyses and compare them to the drug companies' analyses, to make sure the companies analyzed their data accurately.

After studies are submitted to the FDA, drug companies then have the option of submitting data from their trials for publication in medical journals. Unlike the FDA, journals are not checking raw data. Thus, it is possible that drug companies could selectively report their data. An example of selective data reporting would be to assess depression using four measures. Suppose that two of the four measures yield statistically significant results in favor of the drug. In such a case, it is possible that the two measures that did not show an advantage for the drug would simply not be reported when the paper was submitted for publication. This is called "burying data," "data suppression," "selective reporting," or other less euphemistic terms. In this example, the reader of the final report in the journal would assume that the drug was highly effective because it was superior to placebo on two of two depression measures, left completely unaware that on two other measures the drug had no advantage over a sugar pill. Sadly, we know from prior research that data are often suppressed in such a manner. In less severe cases, one might just switch the emphasis placed on various outcome measures. If a measure shows a positive result, allocate a lot of text to discussing that result and barely mention the negative results.
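To see how powerful this maneuver is, consider a quick simulation. The sketch below is purely illustrative (the effect size, sample sizes, and number of measures are my assumptions, not figures from any actual trial): give a drug a modest true effect, measure it on four scales per trial, and "publish" only the measures that reach p < .05.

    # Illustrative simulation of selective outcome reporting.
    # Assumptions (not from any real trial): true effect d = 0.3,
    # 100 patients per arm, 4 outcome measures per trial, 2000 trials.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, n_per_arm, n_trials, n_measures = 0.3, 100, 2000, 4

    all_effects, reported_effects = [], []
    for _ in range(n_trials):
        for _ in range(n_measures):
            drug = rng.normal(true_d, 1.0, n_per_arm)
            placebo = rng.normal(0.0, 1.0, n_per_arm)
            d_obs = drug.mean() - placebo.mean()  # SDs are 1, so roughly Cohen's d
            _, p = stats.ttest_ind(drug, placebo)
            all_effects.append(d_obs)
            if p < 0.05:  # only the "significant" measures get written up
                reported_effects.append(d_obs)

    print(f"true effect:              {true_d:.2f}")
    print(f"mean over all measures:   {np.mean(all_effects):.2f}")
    print(f"mean over reported only:  {np.mean(reported_effects):.2f}")

Nobody fabricates a single number, yet the mean of the "reported" measures comes out noticeably above the true effect. That is the whole trick.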

But wait, there's an even better way to suppress data. Suppose that a negative study is submitted to the FDA. There is no commercial value in presenting negative results on a product. Indeed, it makes no sense from a commercial vantage point to submit a clinical trial that shows no advantage for one's drug for publication in a medical journal. While it earns a bit of good PR for being honest, it would of course hurt sales for the drug, which would not please shareholders. From an amoral, purely financial view, there is no reason to publish negative trial results.

On the other hand, there is science. One of the first things that any medical student hopefully learns is that scientists should report all of their results so that other scientists, physicians, the media, and the general public have an up-to-date and comprehensive understanding of all scientific findings. Yes, this may sound naive, but this is how science is supposed to work in an ideal world.

Back to the NEJM study. Were manufacturers of antidepressants playing by the rules of science or the rules of the almighty dollar? Take a look at the numbers from the study...

The FDA concluded that 38 studies yielded positive results. 37 of these 38 studies were published. The FDA found mixed or "questionable" results in 12 studies. Of these 12 studies, six were not published, and six others were published as if they were positive findings. Of the 24 studies that the FDA concluded were negative, three were published accurately, five were published as if they were positive findings, and 16 were not published. To summarize, positive studies were nearly always reported while mixed and negative studies were nearly always either not published or published in a manner that spun the results unreasonably. How does one turn a questionable or negative finding into a positive one? As mentioned above, report the results that are favorable to your product and sweep the remaining results under the rug.
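Run the totals and the shape of the problem is obvious: by the FDA's reckoning, 38 of 74 trials (about half) were positive; but of the 51 trials that made it into journals, 48 (roughly 94%) read as positive.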

Overall, how do the statistics for this group as prepared by the FDA compare to the statistics in medical journal publications? Remember, physicians are trained to highly value medical journals, as they are the storehouse for "evidence-based medicine." I'll borrow a quote from the study authors:
For each drug, the effect-size value based on published literature was higher than the effect-size value based on FDA data, with increases ranging from 11 to 69%.
Well, that's not very reassuring. Effect size refers to the magnitude of the difference between the drug and placebo. Note that for every single drug, the effect size as reported in the medical literature (the foundation for "evidence-based medicine") was greater than the effect size calculated from the FDA's data. Remember, the FDA's data is based on raw data submitted by drug companies, and is thus much less subject to bias than data that the drug companies manipulate prior to submitting for publication in a medical journal. Other highlights from the authors:
Not only were positive results more likely to be published, but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome... we found that the efficacy of this drug class is less than would be gleaned from an examination of the published literature alone. According to the published literature, the results of nearly all of the trials of antidepressants were positive. In contrast, FDA analysis of the trial data showed that roughly half of the trials had positive results. The statistical significance of a study’s results was strongly associated with whether and how they were reported, and the association was independent of sample size.
I'll say it one more time: Every single drug had an inflated effect size in the medical literature in comparison with the data held by the FDA. To move into layman's terms for a moment, manufacturers of every single drug appear to have cheated. This is not some pie in the sky statistics review -- this is the medical literature (the foundation of "evidence-based medicine") being much more optimistic about the effects of antidepressants than is accurate. This is marketing trumping science.
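For those keeping score, the effect size being inflated here is a standardized mean difference: roughly (mean improvement on drug minus mean improvement on placebo) divided by the pooled standard deviation. Inflating that number by 11 to 69% directly exaggerates how much better than placebo each drug appears.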

The drugs whose apparent effects were inflated through selective publication and/or data manipulation:
  • Bupropion (Wellbutrin)
  • Citalopram (Celexa)
  • Duloxetine (Cymbalta)
  • Escitalopram (Lexapro)
  • Fluoxetine (Prozac)
  • Mirtazapine (Remeron)
  • Nefazodone (Serzone)
  • Paroxetine (Paxil)
  • Sertraline (Zoloft)
  • Venlafaxine (Effexor)
That is every single drug approved by the FDA for depression between 1987 and 2004. Just a few of many tales of data suppression and/or spinning can be found below:
Props to the Wall Street Journal (David Armstrong and Keith Winstein in particular) and the New York Times (Benedict Carey) for quickly getting on this important story.

There are some people who seem unmoved by this story. Indeed, some people are crying that this is an unfair portrayal of the drug industry. More on their curious take on the situation coming later.

I'll close with a question: What does this say about the key opinion leaders whose names appear as authors on most of these published clinical trials in which the data is reported inaccurately?

Wednesday, January 16, 2008

Antidepressants: You Dropped a Bomb on Me

According to a hot-off-the-press article in the New England Journal of Medicine, publication bias in the antidepressant literature has been quite substantial. Much as I've documented in a recent post, the researchers found a great deal of misinterpreted research and buried studies.

Hats off to the researchers for their impressive and thorough analysis. Unfortunately, I have no time to comment on the study in more depth now. In the meantime, read David Armstrong's excellent piece in the Wall Street Journal. I plan to offer my take on this sordid tale in the near future.

If you've read my blog for long, you are aware that I am qualified to say "TOLD YOU SO!" on this topic. I hope that the bomb dropped on the drug industry from this study's results is heard loudly and clearly across the world.

Zetia: Just the Latest Chapter in Hiding Data

There have been many interesting posts written about how data regarding Zetia were buried for quite some time. One of the main storylines in this saga is that it took about two years after the study was completed to analyze and release the data. The most disappointing aspect of this story is that few if any outlets are noting that this is not a fluke event.

Clinical trials are a huge part of how drugs are marketed. Physicians who examine clinical trial data before prescribing a drug believe they are engaging in evidence-based medicine. Granted, most physicians have little training in statistics or research design, which are key to understanding clinical trial evidence. But that's not the point of this post...

The point is that Zetia is just the latest chapter in a lengthy volume of hidden clinical trial data. Here's one study in which it appears that data were reported on 1 of 15 participants. There was also a study examining Zoloft for PTSD in which data were reported about 10 years after the end of the study. How about suicide attempts apparently vanishing from a study report on Prozac? And a 5-6 year delay in reporting results on Effexor for depression in youth?

The above reports on hiding data were all based on studies I encountered randomly. I did not go fishing for studies that published their data many years after collection or reported only a partial picture of their results. I was just looking through journals and happened to run across the studies mentioned above. Publication bias does not occur only when negative results go unpublished (which seems a fairly common practice); it also occurs when negative results are published after a long delay. Delaying negative data means raking in more cash before those data reduce prescriptions for a product.

So you can be outraged by the Zetia story if you'd like, but please don't act surprised. Similar events will happen again and again and again.

Update: Welcome to those of you who have clicked the link from the Wall Street Journal. Please take a look around to find a series of documented incidents where science has been overrun by marketing. Add comments as you deem appropriate.

Friday, January 11, 2008

Zetia, Paxil, Medical Journals, Fraud, Etc.

I've been busy wiping up tears after the Frontline episode on medicating children with a wide variety of psychiatric medicines. Well worth watching. There are many thoughtful comments over at Furious Seasons. Feel free to add your voice. I may post on some of the highlights and lowlights of the Frontline piece later. Suffice it to say for now that it sure is depressing that the media keeps up the dunce journalism of linking decreased SSRI prescriptions to an increase in suicide as if this were some sort of reliable finding. Please read my earlier posts (1, 2) for details on this constantly repeated yet incorrect interpretation of events.

Here are a few other posts worth reading:
  • Is "symptom remission" a realistic or even desirable goal when treating depression? A very interesting battle of letters in the American Journal of Psychiatry receives excellent coverage at Furious Seasons.
  • Roy Poses at Health Care Renewal demolishes an op-ed piece by Robert Goldberg (from the infamous Drug Wonks site). Also check out an incredible tale of kickbacks to a physician from multiple companies. If your hunger for bizarre tales in healthcare is not yet satiated, read about CellCyte, a company whose main product is apparently fraud.
  • Are medical journals asleep at the wheel regarding problems with Zetia? Aubrey Blumsohn seems to think so, and I think he might have a point. It would not be the first time that a medical journal dropped the ball.
  • Paxil for life. Go ahead, try to quit. What, you can't quit? A large group of individuals suing GlaxoSmithKline allege that they have been unable to quit Paxil without significant problems. Worry not, friends, GSK said: "We believe there is no merit in this litigation... Seroxat has benefited millions of people worldwide who have suffered from depression.'' Read more about Paxil/Seroxat's special benefits. H/T: PharmaGossip.
  • While you can catch up on the national presidential derby from many sources, there is little coverage of the race for American Psychiatric Association president. Daniel Carlat (who is popping up everywhere these days, which is a good thing) provides his take on the upcoming APA election. To nobody's surprise, some have noted an issue with one candidate's potential conflicts of interest.
  • Pfizer = McDonald's + Estee Lauder?

Monday, January 07, 2008

Big Pharma: Marketing Ain't Cheap

A fantastic post on Hooked summarizes the most recent study in PLoS Medicine that estimates drug marketing expenditures at roughly $57.5 billion annually. It is also discussed nicely on Pharmalot.

Here's my two cents. We've all heard that high drug costs are secretly a blessing because these costs support the development of newer, better drugs. Here's one example, from the Cato Institute:
It's easy to believe that drugs cost too much. At least it is if you aren't the member of my church who just died of stomach cancer; my next-door neighbor and running partner who has been diagnosed with multiple sclerosis; my friend who endured experimental chemotherapy to fight breast cancer; and my journalistic colleague killed by liver cancer last year.

For all of them, drugs don't cost nearly enough, since a higher cost would bring forth more and better means of fighting cancer, multiple sclerosis and other diseases. Yet legislators seem dedicated to restricting the availability of such pharmaceuticals.

Other scare stories are not uncommon. Even respected psychiatrists hold similar views about the kind, loving drug industry. I recall sitting in an auditorium, chowing down on lunch with a colleague as we listened to the sales pitch for Abilify. The lunch was fairly tasty and kept me mostly distracted from the slides that claimed to show that Abilify was a new drug offering terrific benefits and little risk. To be honest, I remember little of what was actually said in the slides; I attended many such activities, and the information blurred into a haze conveying the basic message that incredible progress was constantly being made in psychopharmacology. At the end of the videoconferenced presentation, I was getting ready for the remainder of my day when I noticed a very high-ranking official take the podium with a highly concerned expression. This official, who was supposedly an objective, highly regarded clinical scientist, then let loose with a brief speech.

He said something to the effect of: "You know, it's important to remember that it costs -- did you know that it costs 800 million dollars to make a drug like this? 800 million dollars. These companies are taking a big risk to bring us these drugs and we really need to appreciate all the effort, risk, and cost that they put into developing these great medications..." He might have uttered a sentence or two afterward, but I was stuck in a state of shock and would not have noticed. Now, if this highly reputed independent psychiatrist had bothered to do a little research, he would have known that this figure was bogus. And speeches like his are not even counted as part of the $57.5 billion that drug companies spend on marketing.

What's my point? When drug companies stump about how they channel much more money into research and development than into marketing, they are lying. Badly. I give drug companies credit for doing an excellent job at marketing. When so many of their newer products offer little to no benefit versus generic medications, it is indeed impressive that they can constantly generate blockbuster after blockbuster. The only explanation for this phenomenon is that drug companies do an amazing job of marketing. Whether in the form of direct-to-consumer ads, allegedly independent academics stumping for products based on bogus "science" (1, 2, 3, 4), disease mongering (1, 2), inaccurate medical journal ads, sexy drug reps, or just good old-fashioned payola, there is no doubt that drug companies do a fantastic job of selling their wares.

The drug company beancounters state that the best way to make money in the short term is to spend a lot of cash on marketing. That works out just fine in the short term, but it looks like the model falls apart in the long term. Might I suggest funneling a bit more cash into science and less into marketing if you want to thrive in the long term?

Thanks to a couple of anonymous readers for digging up some of the research cited in this post and for the terrific Dilbert cartoon.

Inaccurate Advertising Hurts

I'm late to the game on this post, and this material has been covered well on other sites. In case you've missed it, a recent meta-analysis indicated that the effect of Cymbalta on pain in depression relative to placebo was somewhere between nothing and minimal. This was noted on Furious Seasons, the WSJ Health Blog, and Pharmalot. According to the Pharmalot post, it also appears that Lilly has not fully disclosed all relevant data in Cymbalta's clinical trials, which contradicts Lilly's pledge to share all data openly.

This is apparently another example of how we cannot trust that pharmaceutical advertising is any more accurate than advertising for quick weight-loss programs, exercise equipment, or get-rich-quick schemes. Caveat emptor.

Props to John Mack for noting many months ago that the Depression Hurts campaign reeked of off-label marketing.

Friday, January 04, 2008

Drug Company Marketing Expenditures

How about $57 billion or so annually? More coming in a few days. In the meantime, read this and see what you think.

Thursday, January 03, 2008

Mandatory Mental Health Screening for Massachusetts Medicaid Kids

As reported initially by the Boston Globe, then covered by the AHRP Blog and Furious Seasons, Massachusetts has implemented a mental health screening process for its children on Medicaid. It would make sense that if mental health is to be examined, one would look for symptoms of mental illness. To assist in the process, one of eight questionnaires is to be used by doctors to identify mental health issues. Here are some of the issues mentioned on one of the mental health screenings:
  • Complains of aches and pains
  • Is less interested in school
  • Is absent from school
  • Refuses to share
  • Blames others for his or her troubles
  • Teases others
  • Does not understand other people's feelings
  • Does not show feelings
  • Gets hurt frequently
  • Wants to be with you [the parent] more than before
There are a few others that seem iffy, but I think the above are the worst offenders. So what? Apparently, it may be a sign of some yet-to-be-defined mental illness that a child would want to be with mommy more often, refuse to share toys with a sibling, and not show interest in school. 'Cuz kids should show little interest in their mothers, school is pretty exciting, and most kids love letting their brother share their coolest toy.

Yes, I understand that any of the above could possibly be linked to a mental health issue. If I counted right, there were 35 of these issues listed on the questionnaire. Who is going to spend time going over each of these 35 issues? Nobody. It would seem that if we are going to screen kids for mental health problems, we might want to stick to the more important issues rather than whether a kid likes to share toys.

It is certainly a good idea for doctors to pay attention to the mental health of their patients. However, I'm not sure that this sort of overly inclusive checklist of potential issues is going to help much.

Mental Health Problems: An Epidemic? There is a black undercurrent here of labeling relatively normal developmental behavior as indicative of a mental disorder and sticking kids on all sorts of psychotropic meds that (in many cases) have little data to support their use.

But there is more to it than drugs. It's our culture. We've come to accept that there is an epidemic of autism, depression, anxiety, ADHD, bipolar disorder, and who knows what's next in our kids. While the drug industry certainly played a role in these developments, it says something about our culture that we are readily willing to buy into the idea that mental illness has spread like a plague throughout American society. Have we bought into these disorders hook, line, and sinker because:
  • It absolves parents of any responsibility for their children's behavior
  • It lets kids off the hook for their behavior (I couldn't help it -- I have ADHD)
  • It adds yet more drama to the teen years (Gina is, like, so moody. I bet she is, like, bipolar)
  • It seems so scientific. We uncover yet more diagnoses with each edition of the DSM and we then think that we have a better understanding of human behavior.
I'm not claiming that these are especially deep thoughts, but there is something about the interaction of science, marketing, and American culture that seems to have gone awry here.

Wednesday, January 02, 2008

Risperdal for Depression Study Hammered Yet Again

A letter to the editor in the journal Neuropsychopharmacology has slapped the ARISE-RD study, which examined Risperdal as a treatment for depression. I have written about this study on several occasions. The letter to the editor notes many of the same concerns that I have discussed on my site, including: 1) the study reported data that had previously been published, a violation of journal policy, and 2) a claim regarding the drug's efficacy was withdrawn because the statistics were done incorrectly.

In addition, the letter (by Bernard Carroll) notes that data regarding weight gain are not reported in full, a troubling omission given that risperidone was apparently related to more weight gain than placebo. This borderline significant to statistically significant difference (depending on what analysis is used) was reported in a prior iteration of the study, but not in the final version as published in the journal.

Please see several prior posts regarding this study (1, 2, 3, 4, 5). It's a doozy. Mark Rapaport, the lead author of the study, in his reply to Carroll's letter, wrote the following:
The paper repeatedly states in Abstract, Methods and in Discussion that continuation of risperidone augmentation therapy was not more beneficial than placebo, and hence the working hypothesis was disproven...

I would like to thank the reviewers and the editors of Neuropsychopharmacology for having the courage to allow us to publish this negative finding.
A couple things. First, does it really take all that much courage to publish a negative finding? Should a Nobel Prize or a Bronze Star be awarded? There should be little honor attached -- to paraphrase Chris Rock: Why should you get mega-credit for doing things you're supposed to do? Oooh, you published a negative finding. What do you want, a cookie? You're supposed to publish a negative finding you low expectation' havin' "independent scientist." The fact that somebody thinks props should be doled out because a negative finding was published shows how sad-sack the system has become.

To top it off, the study, as originally published, hardly painted its findings as negative, as can be seen in my aforementioned links to the study. Yes, there were a couple of small caveats about efficacy in the paper, but take a look at a snippet from the press release that accompanied the study:
In the first large-scale study of its kind, researchers at Cedars-Sinai found that people suffering from resistant major depressive disorder who don’t respond to standard antidepressants can benefit when the drug therapy is augmented by a broad spectrum psychotropic agent, even when treated for a brief period of time.

Does that sound like a "negative finding" to you? If I am following this correctly, it went something like this: The study supports the efficacy of Risperdal for depression until a number of problems are found with the study, at which point the lead author indicates that they never said the findings were positive. I'm a little confused.

I hope to never again write about this study, as I feel that the proverbial dead horse has been thoroughly beaten.

Oh, and Happy New Year. I expect to be writing about similar stories throughout the year because while the names may change, the storyline remains the same.