
Tuesday, April 15, 2008

Academics, Atypicals, and Marketing

Ahhhh, there is nothing like the sweet smell of investigative journalism in the morning. Robert Farley published a whale of a piece on how atypical antipsychotics were marketed in the St. Petersburg Times on Saturday. I will discuss some of the tasty tidbits from the article, but you'd be a fool not to read the whole thing yourself.

Farley notes that the manufacturers of atypical antipsychotics needed to spread the word that their drugs worked better than older antipsychotics. The one slight problem: there was no solid evidence (outside of biased studies) that the new drugs were superior. So if the companies could not advertise this point directly, they needed to enlist third parties to say it for them. In other words, it was time for some information laundering. In what has become standard operating procedure for the field, "independent" academics were enlisted to recommend the new drugs over the old ones.

So hire a few academics as consultants, fly them off to a "consensus conference," and have them generate treatment guidelines. Would the guidelines be biased? Well, yeah, but that's pretty much the point -- science be damned, it's about market share, baby. Hence the Texas Medication Algorithm Project (TMAP), which helped propel the atypicals to first-line treatment (and second-line, for that matter), and the TMAP clones across the nation. Throw in a few studies of the effectiveness of TMAP, misinterpret their results, and BAM -- you've now established (based on little to no credible evidence) that atypical antipsychotics are the new wonder drugs. And with the wind at your back, hey, why not see if you can market these drugs for everything? After all, you've got the support of the "independent" academic community...

Also see Psych Central's take on the story.

Monday, March 17, 2008

Who's Bankrolling Your "Trusted" Medical Journals?

Note: This is a guest post from Susan Jacobs (see byline at bottom of post)

It is easy to point our fingers at greedy pharmaceutical companies when it comes to the rising costs of our prescription meds. However, the average citizen probably isn't aware of just how much these companies control our lives.

A perfect example of this control can be found on every other page in a leading medical journal. I'm speaking, of course, about the copious amounts of ad space.

It is easy for people to presume that the scientific evidence presented in various medical journals is based on unbiased information. Nothing could be further from the truth, unfortunately. Just as a network television channel strives to please its sponsors at the expense of a program's content, a medical journal that is filled with ads will always be at the mercy of its financial backers.

In his article "Under the Influence: Drug Companies, Medical Journals, and Money," Kent Sepkowitz writes:

Just as pharmaceuticals fund studies and pay doctors to give lectures, so too do they buy journal ads and reprints of favorable articles—lots of them. Often a drug company may find one of its products featured in a scientific article while another of its products is dolled up in a high-gloss ad a few pages later. Yet the journals keep quiet about these financial arrangements.

So, just how much money is the integrity of a medical journal (not to mention the mental and physical well-being of its readers) worth a year? According to The Social Policy Research Institute, the New England Journal of Medicine receives approximately $18 million a year from pharmaceutical companies, while JAMA, the Journal of the American Medical Association, receives around $27 million.

It was the New England Journal of Medicine that brought the most attention to this problem in recent years, after publishing a favorable study of the "safe" drug, Vioxx. Of course, we now know just how wrong they were. (It is worth noting that 2 of the 13 people involved with that study were actually employees of Merck.)

When Boston Magazine's Karen Donovan questioned the Journal's editor about the Vioxx scandal, he replied, “I am not a person who wants to make more rules. I just want people to behave.” It is frightening to think that this increasingly corrupt industry is being held to an honor system of sorts, particularly one that is so indelibly damaged.

By-line:

Susan Jacobs is a teacher, a freelance writer, and a regular contributor for NOEDb, a site helping students obtain an online nursing degree. Susan invites your questions, comments, and freelancing job inquiries at her email address susan.jacobs45@gmail.com.

If you have feedback for Susan, please post a comment and/or send her an email.

Wednesday, September 05, 2007

Rost Busts Pfizer and Journalists

Pfizer is proudly touting a new observational study that found patients who switched from Lipitor to a generic medication had an increased risk of heart problems. Fantastic -- avoid the generic and use Lipitor. Oh, but wait, as Peter Rost points out...

There is only one teeny weeny problem. The most common reason for switching drugs is because the therapy doesn't work; when the drugs don't have the desired effect. So it is completely expected that patients who were forced to switch had a worse outcome. They may simply be treatment resistant.

And Pfizer knows this.

That's the reason they use a weasel-sentence in their press release, hidden deep inside the text, saying "As with all observational studies, the findings should be regarded as hypothesis generating."

But that has not stopped the so-called health media from running with the story from the "Lipitor saves, generics kill" angle (see several sources on Rost's site). Here's more fuel for the fire:

"The bottom line on this particular study is that the data tell us such switching may not be without consequences," said Michael Berelowitz, senior vice president of Pfizer's global medical division, in a phone interview.

With all due respect, Mr. Berelowitz, it would appear that you are either ignorant on this point or you are lying. An analogy in the mental health field: suppose patients who tried Effexor and then switched to a generic tricyclic antidepressant (say, imipramine) were found to have worse depression outcomes than patients who stayed on Effexor. Duh! Maybe the people who dropped Effexor were treatment-resistant and/or had more severe depression -- medications don't work as well for them. It would be a pretty stupid leap to conclude that those patients were acting dangerously by switching medications, wouldn't it?
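
Rost's point is easy to demonstrate with a toy model. Below is a minimal simulation sketch -- the numbers are entirely mine and hypothetical, not from the Pfizer study -- in which treatment-resistant patients are both more likely to switch and more likely to have cardiac events. The switchers look worse even though the generic is, by construction, identical to the branded drug:

```python
import random

random.seed(0)

# Hypothetical toy population: 30% of patients are "treatment-resistant"
# and have worse cardiac outcomes no matter which statin they take.
N = 10_000
patients = [{"resistant": random.random() < 0.30} for _ in range(N)]

for p in patients:
    # Resistant patients are far more likely to be switched off the brand,
    # because the most common reason for switching is that therapy isn't working.
    p["switched"] = random.random() < (0.60 if p["resistant"] else 0.10)
    # Outcomes depend ONLY on resistance -- the generic is identical by construction.
    risk = 0.20 if p["resistant"] else 0.05
    p["event"] = random.random() < risk

def rate(group):
    return sum(p["event"] for p in group) / len(group)

switchers = [p for p in patients if p["switched"]]
stayers = [p for p in patients if not p["switched"]]

print(f"cardiac event rate, switched to generic: {rate(switchers):.3f}")  # ~0.16
print(f"cardiac event rate, stayed on brand:     {rate(stayers):.3f}")    # ~0.07
# Switchers look much worse -- confounding by indication, not a drug effect.
```

That is confounding by indication in a nutshell, and it is exactly why Pfizer's own press release concedes such findings are "hypothesis generating."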

Wednesday, July 18, 2007

Abilify: It's Tricky to Rock the FDA

In the "you're kidding me" category, we have a report from Forbes that Abilify (aripiprazole) is going to be going up for FDA priority review as a depression treatment. I was able to track down exactly one placebo-controlled study using this drug as an antidepressant. Participants who did not show satisfactory response to an antidepressant trial were assigned to receive either Abilify or a placebo in addition to their antidepressant. As you'll see, this was a study worthy of close examination.

Study Results. I read the study results and was underwhelmed. The authors (via their ghostwriter(s) to some unknown extent) reported that the difference between those receiving add-on Abilify vs. add-on placebo was three points on the Montgomery-Asberg Depression Rating Scale (MADRS). For perspective, the MADRS has 10 questions, each rated from zero to six, for a total score range of 0 to 60 -- so a three-point difference amounts to 5% of the scale. Put differently, suppose three of those ten questions show an improvement of one point each. Whoopee. But keep reading -- it becomes bizarre.

Study Design. Patients were initially assigned to receive an antidepressant plus a placebo for eight weeks. Those who responded during those eight weeks were eliminated from the study; those who failed to respond were randomized to Abilify + antidepressant or placebo + antidepressant. So we had already established that antidepressant + placebo didn't work for these people -- yet half of them were then assigned to six more weeks of the very same treatment (!) and compared to those assigned antidepressant + Abilify. The antidepressant + placebo group started at a huge disadvantage because it was already established that they did not respond well to that regimen. No wonder Abilify came out on top (albeit by a modest margin).

Here's an analogy. A group of 100 students is assigned to be tutored by Tutor A regarding math. The students are all tutored for 8 weeks. The 50 students whose math skills improve are sent on their merry way. That leaves 50 students who did not improve under Tutor A's tutelage. So Tutor B comes along to tutor 25 of these students, while Tutor A sticks with 25 of them. Tutor B's students do somewhat better than Tutor A's students on a math test 6 weeks later. Is Tutor B better than Tutor A? It's hardly a fair comparison, is it?
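
To make the analogy concrete, here is a minimal simulation sketch (my own toy numbers, not the study's) of two equally effective treatments, where the second is tested only in patients already known to have failed the first:

```python
import random

random.seed(1)

# Toy model: treatments A and B each work for 40% of patients, independently.
# By construction, neither treatment is better overall.
N = 10_000
patients = [{"helped_by_A": random.random() < 0.40,
             "helped_by_B": random.random() < 0.40} for _ in range(N)]

def responds(treatment_works):
    # A patient responds if the treatment works for them, or via spontaneous
    # remission (10%), which affects both arms equally.
    return treatment_works or random.random() < 0.10

# Phase 1: everyone gets A; responders leave the study.
non_responders = [p for p in patients if not p["helped_by_A"]]

# Phase 2: known A-failures are randomized to continue A or switch to B.
random.shuffle(non_responders)
half = len(non_responders) // 2
stay_A, get_B = non_responders[:half], non_responders[half:]

rate_A = sum(responds(p["helped_by_A"]) for p in stay_A) / len(stay_A)
rate_B = sum(responds(p["helped_by_B"]) for p in get_B) / len(get_B)

print(f"phase-2 response, continued on A: {rate_A:.2f}")  # ~0.10 (spontaneous only)
print(f"phase-2 response, switched to B:  {rate_B:.2f}")  # ~0.46
# B "wins" decisively even though A and B are, by construction, equally good.
```

The design guarantees the new treatment a head start; the only open question is the size of its "victory."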

Ghostwriter Watch:
Yep, the study acknowledged "Editorial support for the preparation of this manuscript was provided by Ogilvy Healthworld Medical Education; funding was provided by Bristol-Myers Squibb Co." Of the study authors, all but one were employees of BMS or Otsuka (which is also involved in marketing Abilify).

Unless BMS is hiding data somewhere, this is hardly the stuff breakthrough treatments are made of. Not that the FDA has a history of expecting much in terms of efficacy, but seriously -- can we have a study without a ridiculously biased design before we jump on the Abilify for depression bandwagon?

Oh, the wonders of "evidence based medicine." This one reminds me of the ARISE-RD study for Risperdal as a depression treatment.

Update: I forgot to mention that this is not the first time I've been puzzled by Abilify's claims. For information on how Abilify is supposedly a great long-term treatment for bipolar disorder, you really have to get the story from Furious Seasons, who had a great post on the topic in December 2006. Get ready for more flimsy evidence from BMS.

Friday, May 25, 2007

Convenient Honesty and Zoloft

Recently, a study was published which cast doubt on the efficacy of sertraline (Zoloft) for PTSD, finding that the drug was no better than a placebo.

The kicker is that the patent on Zoloft has expired, which is why the data are now flowing more freely. I'll make the case here that the data were buried until they could no longer hurt sales to any meaningful extent, then published, at least partly as a public relations move to show just how "honest" the companies are about sharing both positive and negative results with the psychiatric community.

The Research: The latest study, which appears in the May 2007 Journal of Clinical Psychiatry, showed no benefit for the drug over a 12-week period. Placebo tended to outperform Zoloft on the majority of outcome measures, though the differences were small and not statistically significant. Patients were significantly more likely to drop out of treatment on Zoloft. It was unclear whether there were any serious adverse events (e.g., suicide attempts, notable aggression) because the article did not mention them at all. Patients entered the study between May 1994 and September 1996, yet the original draft was not received by the journal until March 2006 -- nearly 10 years passed between study completion and writing up the data for publication.

Two prior studies found positive results for Zoloft and were published quickly, while these negative results languished until the Zoloft patent had expired. One earlier positive study did not list the dates during which it was conducted, but it seems clear that it reached publication much more quickly than the negative study. Another positive study was conducted between May 1996 and June 1997 and was published in 2000. It's quite obvious why the positive studies were rushed to press while the negative study languished, is it not?

Do keep in mind that the magnitude of the advantage for Zoloft over placebo, even in the positive studies, was small to moderate. When even the good news for antidepressants in treating PTSD shows only modest improvement relative to placebo, one should tread cautiously.

Change of Heart: Drug companies have been criticized widely for failing to disclose clinical trial data (1, 2). In an effort to shore up the support of the medical community and the public at large, what could possibly make more sense than publishing negative trial results? Gee, look at how honest we are -- we share the good news and the bad news! Of course, when the positive results are published as quickly as possible and the negative results appear after a 10-year delay, well after they could pose any threat to corporate profits, I'm not impressed by the newfound dedication to transparency.

Note: If you are a journalist, this is the kind of story that would merit a broad audience. The plot is pretty simple to follow and it reeks of corporate malfeasance, a subject that is not new to Pfizer and its former cash cow antidepressant.

Wednesday, February 28, 2007

Bias in Research: The Word is Spreading

Dr. John Grohol over at PsychCentral has issued an excellent post on bias in research. He aptly describes some of the many biases that enter into the research process and why we'd better be quite careful before we jump on the evidence based medicine bandwagon. A teaser follows, in which he discusses evidence based medicine:
It’s like buying a 14-chapter book expecting to get the entire story. But instead of getting the entire story, you find chapters 10-14 are missing, and that chapters 3-9 were written by an author that didn’t appear on the front cover of the book. But it’s not quite so obvious as that. Nobody tells you that chapters 3-9 were written by someone else, and nobody mentions that it’s actually a 14-chapter book that’s missing 5 chapters. It’s no wonder you come away from the book feeling a little confused and betrayed. It’s nothing like you expected or were promised.
Very well said! I salute his excellent post and encourage all readers to check it out. For a somewhat similar, and certainly windier, take on bias in the research process, please read my earlier post.

Hat Tip: Furious Seasons

Tuesday, February 27, 2007

Prepare to NOT be Shocked: Clinical Trials are Biased

Dr. Jeffrey Peppercorn and colleagues recently conducted an analysis of cancer drug trials to examine whether industry funding was associated with a greater likelihood of finding positive results. What did they find?
In 2003, the likelihood of positive results was 84 percent for studies with pharmaceutical involvement, versus 54 percent in studies without clear industry connections. Our analysis was small, consisting only of 140 trials, but similar associations have been documented before, in stroke trials, psychiatry trials, cardiovascular trials and several other areas of clinical research.
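For what it's worth, an association that lopsided is easy to check. Here is a minimal sketch of the arithmetic -- note that the quoted passage does not report the group sizes, so the even 70/70 split below is purely a hypothetical illustration:

```python
from scipy.stats import chi2_contingency

# Counts chosen to approximate the reported percentages, assuming
# (hypothetically -- the article gives no group sizes) an even
# 70/70 split of the 140 trials into industry vs. independent.
industry    = [59, 11]  # ~84% positive results
independent = [38, 32]  # ~54% positive results

chi2, p, dof, _ = chi2_contingency([industry, independent])
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # ~13.4, p well below 0.05

# Relative likelihood of a positive result given industry involvement
print(f"relative likelihood: {(59 / 70) / (38 / 70):.2f}x")  # ~1.55x
```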
An earlier post detailed how the whole publication process is biased, and this latest story appears to be just another cog in the machine.

Hat Tip: eDrugSearch Blog.

Thursday, February 08, 2007

Evidence Biased Medicine (To the Core)

The Last Psychiatrist wrote an excellent comment regarding a post describing how "scientific" data and/or its analysis and interpretation are often cooked by ghostwriters and/or friendly academics. He discussed how the whole process of publishing research is biased, a point that will be discussed in depth throughout this lengthy post. The Last Psychiatrist said:
Sure Pharma puts pressure on doctors, and forces through studies that are helpful to them (and suppresses those that hurt them).

But the real problem in medicine is the academic centers. Their bias is dangerous because it's so subtle and pervasive.

If Astra Zeneca does a Seroquel study, I think we can guess the bias. But when Assistant Professor Jones does a Seroquel study-- funded by the NIH-- is that study magically free of bias? What about Jones's beliefs on medications (he thinks pharmacotherapy is a gold mine, or is he anti-drugs and pro therapy?; maybe he's pro-seizure drugs (Depakote, Lamictal) and anti-antipsychotics (or the other way around?) Maybe his mentor gets AZ money (which is used to pay his salary through the university?) Maybe NIH has a stake in getting expensive drugs like Seroquel to look bad (e.g. CATIE?)

And journals are worse: think that the editors of a journal don't have biases-- even direct pharma ones?

And the three peer reviewers?

Ever wondered what articles don't get accepted for publication, and why (and I say this as someone who has a pretty high rate of publication success).

And why do those journals-- which publish public data-- cost $1000/yr and can't be accessed by the public?

The first and most important step to fixing medicine is abandoning the journal system. All articles, including the raw data that generated them, photos, scientific notebooks, etc, should go online. Let the world vet the data.
I agree with the great majority of what he said, though I find it hard to believe that NIH is against expensive medications, since many NIH folks have ties to drug companies, which are, of course, pushing newer and more expensive meds.

We have to keep in mind that the whole "scientific" peer-review process includes a lot of bias.

Let's review how studies go from a set of numbers into a published manuscript. It may sound like a dull process, but this is the foundation of our so-called evidence base in medicine, so it is actually very important to understand.

How is it biased? Let me count the ways...

Step 1. Analyzing data and writing the paper. Researchers transform a bunch of numbers into a paper.

1) The researchers may never have seen the raw data.
It's anyone's guess whether the authors have actually seen the data upon which they base their writings; in some cases, it is unlikely that they have. So the company could have already made some alterations to the data -- it's unclear how often this happens, but it is certainly a possibility in some (hopefully rare?) instances.

2) The company can analyze the data in any way it sees fit.
Go to Aubrey Blumsohn's site for an excellent example of why this can be problematic. Company statisticians can cook the books either overtly or in a more subtle manner (like they did with the Seroquel data -- here and here).

3) The company can interpret the numbers in any way it wants.
For example, if someone committed suicide while taking a drug, the drug couldn't have caused it, but if the patient committed suicide on a placebo, then the placebo caused it. Even when the data are not favorable, positive conclusions are reached in most instances (here, for example).

4) The company can bury any unfavorable data.
Suppose that depression was measured in five different ways. If a couple of those measures yielded unfavorable results, toss them aside and act as if they never existed. Don't even mention in the article that they existed. (See the sketch after this list for just how effective this tactic is.)

5) When all else fails, deep-six the study.
If the data still fail to prove favorable, just bury the entire thing -- don't publish it. When lawyers and/or researchers get their hands on unpublished data, it quite often shows unfavorable results which the sponsoring company thought best to bury.
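
A quick bit of arithmetic shows why the multiple-measures trick in point 4 works so well. This sketch assumes five independent outcome measures and a drug with no real effect at all:

```python
# A drug with NO true effect, five outcome measures, and the usual p < 0.05
# threshold. If the sponsor reports only whichever measure "works," the odds
# of a false positive are far higher than 5%. (Assumes independent measures;
# correlated measures shrink the inflation somewhat.)
alpha = 0.05
n_measures = 5

p_at_least_one = 1 - (1 - alpha) ** n_measures
print(f"P(at least one 'significant' measure by chance) = {p_at_least_one:.3f}")
# ~0.226 -- better than a 1-in-5 chance of a reportable "win" from pure noise
```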

Step 2. Peer Review. The paper is then sent off to "experts" for peer review. As the Last Psychiatrist said earlier, these folks (including me) have their biases. Indeed, one of my peers has called the peer review process "a Rorschach test of the reviewers," meaning that you can easily see their biases through their reviews. Most reviewers for psychiatry journals have ties to industry, which have likely shaped their beliefs to roughly the following: "Drugs are safe and effective," though biases will vary.

The comments of these expert reviewers are quite important in determining whether the study will get published.

Here's what one former journal editor, Richard Smith (British Medical Journal), had to say about peer review:
The problem with peer review is that we have good evidence on its deficiencies and poor evidence on its benefits. We know that it is expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud. We also know that the published papers that emerge from the process are often grossly deficient.
Hmmm. No, this is not sour grapes on my part -- I've little to complain about in terms of being published. But I, like many other researchers, am often befuddled by the whole process -- it often seems that reviewers are unhelpful.

Does peer review help, at least a little bit? I think so. Does it solve the problem of low-quality papers hitting journals, which are then turned into marketing copy by the drug and device industry? Obviously not.

Step 3. Editorial Decision. The editor chooses whether to accept the paper (usually after some revisions are made).

Journal editors frequently have huge ties to industry. Just google the names of many editors and you'll find that they have received funding from a lot of different sources. We also know that sometimes peer reviewers make good comments, yet the editor chooses to ignore them.

Note that nearly all journals are for-profit entities. How do they make money? Advertising, subscriptions, and reprints. If a journal runs an article favorable to industry (saying that vagus nerve stimulation is great for depression, for example), then it is likely that the company will buy thousands of reprints for dissemination to physicians. The journal makes good money on each reprint and can take in tens of thousands of dollars, or even up to a million, from reprints of a single study. A study that is unfavorable or irrelevant to industry is not going to generate that revenue. So from a business standpoint, it makes more sense to print studies (like this or this) slanted in favor of a product than to run something less industry-friendly.

What to do?

Start by making all trial information publicly available.
The Last Psychiatrist said it, Richard Smith said it, and I agree. I don't think we should abolish journals altogether -- that seems extreme -- but making data publicly available is an excellent idea.

Penalize those who engage in misconduct
As Fiona Godlee (editor of the British Medical Journal) stated recently:

So what can we do to change the blind-eye culture of medicine? In the interests of patients and professional integrity I suggest intolerance and exposure.

--SNIP--

And if journals discover authors who are guests on their own papers, they should report them to their institution, admonish them in the journal and probably retract the paper.

Reputations for sale are reputations at risk. We need to make that risk so high it's not worth taking.

Other thoughts?

Update (2-9-07): Quite a few people have been reading this via reddit; see the reddit comment thread for further discussion of this post.