Showing posts with label journals. Show all posts

Friday, May 14, 2010

Eli Lilly: Our Drug Failed, So it Has Serious Potential

These folks at Lilly must think we are exceptionally stupid. As in can't tie our own shoes. A study in the Journal of Psychiatric Research recently found that their experimental antidepressant LY2216684 was no better than placebo. Here are a couple of quotes from the abstract:
LY2216684 did not show statistically significant improvement from baseline compared to placebo in the primary analysis of the Hamilton depression rating scale (HAM-D17) total score. Escitalopram demonstrated significant improvement compared to placebo on the HAM-D17 total score, suggesting adequate assay sensitivity.
On the primary outcome measure, the experimental drug failed whereas Lexapro worked to some extent. I know what you're thinking - "the sample size was probably too small to find a significant effect." Um, you're wrong: 269 people were on the Lilly drug, 138 on placebo, and 62 on Lexapro.

But wait, here comes the good news...
Both LY2216684 and escitalopram showed statistically significant improvement from baseline on the patient-rated QIDS-SR total score compared to placebo... The results of this initial investigation of LY2216684’s efficacy suggest that it may have antidepressant potential.
The good news for Lilly is that most people who claim to "read journal articles" really just browse the abstract without actually looking at the full text of the paper. For the select few who have nothing better to do than read Lilly propaganda, take a look at Table 2. A total of 12 secondary outcome measures are listed. The Lilly drug beat placebo on... ONE of them. Lilly doesn't say much about how much better their drug was than placebo on the QIDS-SR measure besides throwing around that often-meaningless term "statistically significant." People on the drug improved by 10.2 points whereas placebo patients improved by 8.3 points -- about a 20% difference. If you bother to calculate an effect size, it is d = .24, which is quite small and clinically insignificant. So on the ONE measure where the drug was better than placebo, it was by a small margin, and it missed the mark on the 11 other secondary measures as well as on the primary outcome measure. But "it may have antidepressant potential." Hell yes, I've never been so excited about a new drug.
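For the curious, the back-of-the-envelope arithmetic above is easy to reproduce. A minimal sketch, assuming a pooled standard deviation of roughly 7.9 QIDS-SR points (inferred from the reported d = .24; the paper's actual SDs are not given in this post):

```python
# Rough check of the effect size discussed above.
# ASSUMPTION: pooled_sd is back-inferred from the reported d; it is
# not stated here. Only the two mean improvements come from the post.

def cohens_d(mean_1, mean_2, pooled_sd):
    """Cohen's d: difference in mean improvement divided by pooled SD."""
    return (mean_1 - mean_2) / pooled_sd

drug_improvement = 10.2      # mean QIDS-SR improvement on LY2216684
placebo_improvement = 8.3    # mean QIDS-SR improvement on placebo
pooled_sd = 7.9              # assumed, inferred from the reported d

d = cohens_d(drug_improvement, placebo_improvement, pooled_sd)
relative = (drug_improvement - placebo_improvement) / placebo_improvement

print(round(d, 2))             # ~0.24: "small" by conventional benchmarks
print(round(relative * 100))   # ~23% relative difference, i.e. "about 20%"
```

By Cohen's conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), d = .24 sits right at the bottom of the "small" range, which is the point being made.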

By the way, Lilly is apparently trying this wonder drug out in at least five trials. The journal in which this article appeared has published other dubious Eli Lilly research in the past. The editorial review process is clearly working wonders over at the Journal of Psychiatric Research. Sad, really. The journal publishes some really good work, but then runs this kind of junk as well.

Depression Self-Report Sidebar: The self-reported measure on which the drug had an advantage, the Quick Inventory of Depressive Symptoms (QIDS) - it's really awesome, according to Lilly. Remember, it's the only measure on which their experimental failure drug had an advantage over placebo. So the authors wrote "Self-reported depression symptoms, such as those obtained by the QIDS-SR, may be more sensitive than clinician-administered scales for signal detection in clinical studies of depression."

What does Bristol-Myers Squibb think? In three trials of Abilify for depression, self-reports of depression were unfavorable. So the publications for these studies made sure to downplay these depression self-reports by saying that these measures were not sensitive, that they weren't picking up improvements in depression.

So if a self-report provided positive results, then BAM, it's an awesome measure of depression. But if it provided negative results, then it's a horrendously inaccurate measure and should never have been used in the first place.

Citation below. Yes, one of the authors' last names is Kielbasa.

Dubé, S., Dellva, M., Jones, M., Kielbasa, W., Padich, R., Saha, A., & Rao, P. (2010). A study of the effects of LY2216684, a selective norepinephrine reuptake inhibitor, in the treatment of major depression. Journal of Psychiatric Research, 44(6), 356-363. DOI: 10.1016/j.jpsychires.2009.09.013

Tuesday, March 16, 2010

Editorial Support, CME, and the Primary Care Companion


By now, everyone who has been paying attention should know that a journal article which lists "editorial support" is an article that was ghostwritten. Yet the average reader of these articles is apparently uninformed enough not to care. Why else would so many articles get published which feature "editorial support provided by [insert name of ghostwriter here]"? One of my favorite journals, under the "so bad, it's good" category, is the Primary Care Companion to the Journal of Clinical Psychiatry. Good articles certainly make their way into the journal, perhaps by accident, but the journal can always be counted on to provide a steady supply of utter garbage.

Here's the acknowledgements section from one recent piece in the journal: "Editorial support was provided by George Rogan, MSc, Phase Five Communications Inc, New York, New York. Mr. Rogan reports no other financial affiliations relevant to the subject of this article." And in case you're wondering, "Funding for editorial support was provided by Bristol-Myers Squibb." If you've somehow guessed that this is an advertorial for Abilify, you win. Other ghostwritten pieces of fluff paid for by BMS include an article discussing the safety profile of Abilify in depression. It states that "In conclusion, this post hoc analysis extends previous findings demonstrating that aripiprazole is safe and generally well tolerated as an augmentation strategy to standard ADT in patients with MDD with a history of an inadequate response to antidepressant medication." But Abilify caused akathisia in a quarter of patients - I think that's a problem.

But wait... there's more. An article based on data from two trials, which showed (allegedly) that Seroquel improves anxiety in patients with bipolar disorder. This piece also acknowledges that it was ghostwritten. And we know that AstraZeneca, manufacturer of Seroquel, has cooked the books on Seroquel in the past. Feel free to look through the journal every month and have a giggle at some of the ridiculous pieces that make their way into print.

CME

You can get your continuing medical education (CME) from the Primary Care Companion as well. One particularly awesome piece of medical wisdom pimped Abilify, er, educated physicians about the best ways to manage resistant depression. This one is a beauty. It was supported by cash from BMS, which features prominently in the "treat aggressively" message of the piece. The article features none other than Michael Thase as the leading discussant -- the same guy who was the lead author on a paper which allegedly showed the wonders of Abilify for depression, despite the pesky fact that patients said it didn't work.

Back to the CME... Thase starts off by stating that only a third of patients achieve remission of depressive symptoms during treatment. Given that Abilify is being marketed for treatment-resistant depression, this is a perfect way to start off this infomercial, er, educational piece. He adds that failure to achieve remission increases the risk of suicide and puts people at risk for more depression, worse psychiatric outcomes, and all sorts of other bad things. So we'd better get rid of all symptoms of depression. Thase suggests that clinicians should closely monitor patients to see if their symptoms are remitting.

In particular, "Relying on the global statement “I’m definitely better” from the patient overlooks persistent, minor, or residual symptoms. Dr Thase recommended using a standardized symptom assessment measure and keeping track of the patient’s levels of symptom burden." So even if the patient says he or she is much better, don't believe it. Have the patient fill out rating scales and if any symptoms at any level are present, keep treating. In Thase's words, "If the current treatment is well tolerated and the individual has made significant symptom improvement but is still experiencing residual symptoms, then it may be necessary to adjust the treatment dose, add another medication, or combine pharmacotherapy and psychotherapy." Note that adding psychotherapy comes after adding another medication.

Then a series of other objective, expert psychiatrists chime in. Dr. Gaynes offers his wisdom, which includes "Dr Gaynes concluded that incomplete remission requires aggressive identification and management." Don't be afraid - be aggressive. The unspoken message: Hey, using an antipsychotic like Abilify for depression may seem freakin' crazy. But don't worry, you need to be aggressive. Dr. Trivedi then comments about using rating scales to measure side effects. I don't have much to say about his section, but things get worse momentarily...

Dr. Papakostas then checks in. "A meta-analysis of randomized, double-blind, placebo controlled studies found that augmentation of various antidepressants with the atypical antipsychotic agents olanzapine, risperidone, and quetiapine was more efficacious than adjunctive placebo therapy. In addition, Dr Papakostas noted that the atypical antipsychotic aripiprazole was recently approved by the US Food and Drug Administration (FDA) for use as an adjunctive therapy to antidepressants in MDD. Augmenting with atypical antipsychotics has so far been the best studied strategy for managing treatment-resistant depression, said Dr Papakostas." Dr. P was the coauthor of a meta-analysis that provided "considerable evidence" regarding the wonders of antipsychotic therapy for depression. The only problem was that the analysis actually did not find convincing evidence that the drugs were particularly effective, which I discussed in December 2009.

Next comes Dr. Shelton. Time to be aggressive, again: "Thus, said Dr Shelton, the long-term management of depression should be viewed in the context of acute treatment and the need for early aggressive management to get the patient as well as possible." Be aggressive by adding Abilify to the antidepressant regimen. If not, your patient won't achieve full remission and will suffer needlessly... "Dr Shelton advised clinicians to be aggressive in treatment and stay active over time, asking themselves if everything has honestly been done to help the patient." Psychotherapy is given a brief mention in this section, but let's face it -- most physicians think of "be aggressive" as upping the dosage and/or adding medications, not as "let's be aggressive by adding psychotherapy."

Then there's the exam at the end. Write up your answers, mail them in, and get your medical education credit. Here's one of the questions...
3. Scores on both patient- and clinician-rated scales found that Ms B is still experiencing residual depressive symptoms. You optimize her current SSRI dose, which produces some improvement. She has not reported any problems with side effects. What course of action to improve her outcome has the most comprehensive efficacy data?
a. Increase the dose of her current SSRI again
b. Augment her current SSRI with another SSRI
c. Switch her to a serotonin-norepinephrine reuptake inhibitor
d. Augment her current SSRI with an atypical antipsychotic

If you guessed that D is the correct answer, you're one step closer to CME credit. And one step closer to writing a prescription for Abilify despite the fact that it is as likely to induce akathisia as to induce remission of depressive symptoms. Or that its advantage over placebo is small on several measures and nonexistent on a patient-rated measure of depression. But D is still the "correct" answer.

ResearchBlogging.org

The offending educational piece is cited below:
Thase, M., Gaynes, B., Papakostas, G., Shelton, R., & Trivedi, M. (2009). Tackling partial response to depression treatment. The Primary Care Companion to The Journal of Clinical Psychiatry, 11(4), 155-162. DOI: 10.4088/PCC.8133ah3c

Friday, April 03, 2009

Leading Psychiatrist Slammed in Leading Journal

In the latest American Journal of Psychiatry appears a review of Alison Bass's book Side Effects. As many of my readers undoubtedly recall, the book details the saga of the antidepressant drug paroxetine (Paxil) and the troubled line of "research" used to support its use in children (among other points). The reviewer clearly liked the book, which is not necessarily newsworthy. What is notable is that a book review appearing in perhaps the world's leading psychiatry journal slams a leading member of the psychiatry profession. The reviewer, Dr. Spencer Eth, writes the following:
More recently, psychiatrists have been greeted in the morning with front-page newspaper exposés of huge sums being directed by these same drug companies to the physician leaders of our field. In Side Effects: A Prosecutor, a Whistleblower, and a Bestselling Antidepressant on Trial, journalist Alison Bass has written the powerful story of a leading medication, its manufacturer, and a favored psychiatrist, whose driving force was profit not treatment.
Ouch. Though not naming the psychiatrist directly, it is clearly a reference to Martin Keller, bigwig at Brown University, whose work on one particular study regarding Paxil was the subject of a lengthy prior post. For the collection of my posts related to Dr. Keller, please click here.

Back to the review...
This well-told cautionary tale lacks the excitement of a novel but instead informs the reader with an actual case study with the real names of psychiatrists we know. We can see exactly how corporate greed, drug-company-sponsored clinical research, and mental health care become a toxic mix that inevitably damages our patients’ well being, our colleagues’ reputations, and our profession’s good name.
It was a refreshing surprise to see Martin Keller's goose get cooked in this review. I don't mean to sound vindictive or mean-spirited. Keller has done a lot of work over the course of his career, much of which likely has some redeeming value. That being said, there can be little doubt that some of his "science" is quite dubious. And for a major psychiatry journal to run anything, even a book review, that directly goes after a "key opinion leader" who appears quite culpable in performing bad science -- that's a good sign.

Monday, March 17, 2008

Who's Bankrolling Your "Trusted" Medical Journals?

Note: This is a guest post from Susan Jacobs (see byline at bottom of post)

It is easy to point our fingers at greedy pharmaceutical companies when it comes to the rising costs of our prescription meds. However, the average citizen probably isn't aware of just how much these companies control our lives.

A perfect example of this control can be found on every other page in a leading medical journal. I'm speaking, of course, about the copious amounts of ad space.

It is easy for people to presume that the scientific evidence presented in various medical journals is based on unbiased information. Nothing could be further from the truth, unfortunately. Just as a network television channel strives to please its sponsors at the expense of a program's content, a medical journal that is filled with ads will always be at the mercy of its financial backers.

In his article "Under the Influence: Drug Companies, Medical Journals, and Money," Kent Sepkowitz writes:

Just as pharmaceuticals fund studies and pay doctors to give lectures, so too do they buy journal ads and reprints of favorable articles—lots of them. Often a drug company may find one of its products featured in a scientific article while another of its products is dolled up in a high-gloss ad a few pages later. Yet the journals keep quiet about these financial arrangements.

So, just how much money per year is the integrity of a medical journal (not to mention the mental and physical well-being of its readers) worth? According to The Social Policy Research Institute, the New England Journal of Medicine receives approximately $18 million a year from pharmaceutical companies, while JAMA, the Journal of the American Medical Association, receives around $27 million.

It was the New England Journal of Medicine that brought the most attention to this problem in recent years, after publishing a favorable study of the "safe" drug, Vioxx. Of course, we now know just how wrong they were. (It is worth noting that 2 of the 13 people involved with that study were actually employees of Merck.)

When Boston Magazine's Karen Donovan questioned the Journal's editor about the Vioxx scandal, he replied, “I am not a person who wants to make more rules. I just want people to behave.” It is particularly frightening to think that this increasingly corrupt industry is being held to an honor system of sorts, particularly one that is so indelibly damaged.

By-line:

Susan Jacobs is a teacher, a freelance writer, and a regular contributor to NOEDb, a site helping students obtain an online nursing degree. Susan invites your questions, comments and freelancing job inquiries at her email address susan.jacobs45@gmail.com.

If you have feedback for Susan, please post a comment and/or send her an email.

Thursday, October 04, 2007

Clinical Trial Registry of WHAT?

Worried about data being hidden in clinical trials, with negative data regarding a drug's safety and/or efficacy being buried? Worry no longer. At least that's what the super-optimistic authors in the New England Journal of Medicine would have us believe. Here's what they wrote:
Of special interest to us, an additional provision of the act requires sponsors of all clinically directive therapeutic trials to register their studies, at inception, in a public database sponsored by the National Library of Medicine. Although some aspects of this provision are not ideal, such as the delayed public availability of registration information on device trials and the noninclusion of phase 1 trials, mandatory registration represents a critical advance in making clinical trials of new treatments public knowledge.
AND
A decade ago a clinical trial could be conducted in secret. The trial’s sponsor, claiming proprietary rights, could keep all information about it, including its very existence, private. Thus, if a drug had important adverse effects, this information might never be made public. Legislators believed that such a possibility was not in the best interests of the American people. Once a clinical trial is mounted, the sponsor has an ethical obligation to publicly acknowledge the contribution of the participants and the risk they have taken by ensuring that information about the conduct of the trial and its principal results are in the public domain. With the FDA Revitalization Act, the United States joins other countries in recognizing that the human volunteers needed to complete a trial are more precious than the money required to mount it.
Wow, everything is looking up now! All data will be reported and all will be well. I read what I think is a reasonably up-to-date version of said legislation, and I did not see anything that requires all results to be reported for every trial. Nothing even close to that is described, unless I missed something. Data reporting requirements should be pretty simple: data on every single efficacy and safety measure must be reported in full. Nothing less is acceptable. Otherwise, sponsors can continue to fund research from which the positive data are reported and the negative data are minimized.

The Present System is Broken. Here's an example of why the present Clinical Trials database is close to useless for learning about study results. First come snippets from a registered clinical trial, then comes the publication based upon the results. Snippets from trial protocol (what was going to happen in the study):
This is a study of the effectiveness of adding Abilify (aripiprazole), an atypical antipsychotic medication, to ongoing SSRI antidepressant treatment for depressed outpatients who are not responding fully to SSRI treatment alone. It is hypothesized that patients’ functioning will improve after 12 weeks of treatment with Aripiprazole and SSRI medication...

Total Enrollment: 15...

Primary Outcome Measures:
  • Hamilton Depression Rating Scale (HDRS)
Secondary Outcome Measures:
  • Clinical Global Impressions Scale (CGI)
  • Global Assessment of Functioning Scale (GAFS)
  • Beck Depression Inventory (BDI)
Study chairs or principal investigators

David J. Hellerstein, MD, Principal Investigator, NY State Psychiatric Institute, and St. Luke's - Roosevelt Hospital Center

Publications
Hellerstein DJ. Aripiprazole as an adjunctive treatment for refractory major depression. Prog Neuropsychopharmacol Biol Psychiatry. 2004 Dec;28(8):1347-8. No abstract available.
Clinical Trial Entry vs. The Publication. You can see the details above. 15 patients to be enrolled in the study using four measures of outcome. Let's see what that cited publication had to say about the results...
Ms. V. is a 46-year-old single female with a 5-year history of severe depression... What about the 14 other patients?

At that point [venlafaxine + mirtazapine not working], aripiprazole was added, initially at 15 mg/day then increased to 30 mg/day. Within a month, Ms. V. noted that her mood and concentration were improved, and she was no longer anhedonic. She began socializing with family members again, began gardening and was able to concentrate on reading and movies. After 3 months on the venlafaxine extended-release, mirtazapine, and aripiprazole, Ms. V. noted that her appetite remained good, she was sleeping 7 h per night, her mood was much better, and she had begun to seek a new job, sending resumes and phoning prospective employers... What happened to those measures of outcome mentioned above?
The linked publication discusses one case though the trial was to study 15 people. Did the other 14 all jump off a bridge? What happened? If we are going to have a clinical trial registry that requires reporting of results, is this what you want? I hope not. Next, the obligatory pro-Abilify propaganda, such as...
Furthermore, aripiprazole’s benign side effect profile [including minimal weight gain, sedation and parkinsonism, resulting from its low affinity for alpha-1-adrenergic, histamine (H1) and muscarinic (M1) receptors] suggests that it may be tolerable for refractory-depressed patients, who may need adjunctive treatment for a long period of time. The case of Ms. V. suggests the possible benefit of aripiprazole in treatment-resistant depression as an adjunct to other antidepressant medications.
When there is no clinical trial data to support your discussion, one can always fall back on discussing various receptors and how, theoretically, tweaking them in one way or another is going to lead to positive results.

The case study ends with:
Further studies of aripiprazole as an adjunctive medication for treatment-resistant depression are indicated.
Yeah, but what happened to the 14 other patients in this study?? I did a search on Medline to find if there were additional studies by Hellerstein using Abilify in depression and I could find nothing that seemed to fit the bill. Please alert me if I missed something in the search. One could argue that perhaps the study is in the midst of being written up for publication -- if so, it's sure taking a while...


There's More.
If I had more time, I'd go through the clinicaltrials.gov registry, and I'm sure I'd find more interesting discrepancies between clinical trial protocols (where researchers say what is going to happen) and what was actually published (the official record of what actually happened). In fact, I've done this before, noting that a published study of Risperdal for depression was clearly changed in its data reporting somewhere between the clinical trial registry and the eventual publication. A rather thorough investigation into the topic found that:

"Overall, 50% of efficacy and 65% of harm outcomes per trial were incompletely reported. Statistically significant outcomes had a higher odds of being fully reported compared with nonsignificant outcomes for both efficacy (pooled odds ratio, 2.4; 95% confidence interval [CI], 1.4-4.0) and harm (pooled odds ratio, 4.7; 95% CI, 1.8-12.0) data. In comparing published articles with protocols, 62% of trials had at least 1 primary outcome that was changed, introduced, or omitted. Eighty-six percent of survey responders (42/49) denied the existence of unreported outcomes despite clear evidence to the contrary."

Who Cares? Well, if you don't mind receiving only part of the efficacy and safety data, then don't do anything. Send no emails, tell none of your friends, don't bother your local journal editor, and don't ask your doctor if he/she is aware that the so-called bedrock of evidence based medicine (i.e., results published in scientific journals) is based on selectively reported information.

The Kool-Aid. I'm really stunned that the folks at the New England Journal of Medicine would just rush off and publish an essay about how great things are now going to be. People have been calling for data to be shared openly for quite some time, and now, magically, with the passing of some vague legislation, all data will be made publicly available. The drug industry is suddenly just going to publish all information on all of its clinical trials, overnight? And some bigwigs at NEJM are willing to drink this Kool-Aid? That is pathetic.

Monday, July 09, 2007

JAACAP Strikes Again

Aubrey Blumsohn has detailed how the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP) is standing behind its work, despite some huge questions -- See Chapter 2 below.

Chapter 1: Paxil 329. You might recall my hissy-fit over JAACAP editor-in-chief Mina Dulcan's declaration regarding Paxil Study 329:
I don’t have any regrets about publishing [the study] at all – it generated all sorts of useful discussion which is the purpose of a scholarly journal.
This comment was especially interesting given that the published study had the following issues:
  • Renamed suicidality as "lability" and overt aggression as "hostility."
  • Declared superiority over placebo in treating depression when, in fact, the study data did not support such an assertion
  • Determined (by magic?) that a placebo can make you suicidal while Paxil could not. The folks who became suicidal on Paxil -- it was not because of Paxil, but the one patient who became suicidal on placebo became suicidal due to the placebo. How does that work?
  • Study was ghostwritten
  • Lead author Marty Keller appears not to have read the study data
Read more about all of the above here. Despite the study being somewhat of a joke, which was noted to some extent by reviewers, Dr. Dulcan published it anyway and said she had no regrets about it.

Chapter 2: The Gillberg Affair

To make a long story short, another study that was published in JAACAP has come under a pile of scrutiny. The study claimed that a syndrome called DAMP (Deficits in Attention, Motor control and Perception) was quite common. There was concern that the data were fraudulent, as can be seen by following this link. My point is not to discuss whether the data were or were not cooked -- I have not given this enough investigation to weigh in.

The point is that the lead researcher Christopher Gillberg refused to provide the data to defend against charges that the data were fraudulent. Mind you, no identifying information was being requested. Just the numbers. The plot thickens substantially from there, but one key point worthy of mention here is that a Swedish court insisted that the DAMP study data be provided to investigators. The data were then destroyed. That would be a no-no in science. If nobody can see the numbers, how do we know they weren't cooked?

Aubrey Blumsohn has engaged in email correspondence with JAACAP staff that I found illuminating. JAACAP's point seems to be that they don't care if the study data were destroyed -- it's not their problem.

No, journals, which serve as the official scientific record, don't need to hold study authors accountable to making sure their study data are accurate. That would be just too much to ask, apparently.

Friday, February 09, 2007

Puff Pieces and Ghostwriting

The journal European Neuropsychopharmacology ran a supplement issue in September 2006 titled:

From the Clinic to the Community: Treating the Whole Schizophrenic Patient and Innovation in Psychiatric Therapy: The Promise of the New Antipsychotics

What is a supplement issue? It is a journal issue paid for by drug companies (GlaxoSmithKline, in the case of this particular supplement) which control the content contained in this issue. Isn't that just an advertisement, you ask? I'm not sure what else you could call it, but the words independent and scholarly certainly do not apply.

Siegfried Kasper has an advertisement, er, article in which he says (with my emphasis): "The use of atypical agents to address the full range of psychotic symptoms with minimal adverse effects should ensure improved functionality and an improved patient quality of life in patients with schizophrenia: both can be regarded as positive reinforcers for long-term compliance."

Who is Siegfried Kasper? Let's find out. David Healy was given a ghostwritten article a few years ago to which he was expected to attach his name. However, he made several changes to the paper. In fact, it was altered to the extent that it no longer served its originally intended purpose as an advertisement for milnacipran. So, the paper that was originally ghostwritten and sent to Healy was forwarded to an Austrian psychiatrist. The psychiatrist was Siegfried Kasper and he attached his name to the paper, making not a single change. In other words, someone wrote an article intended to serve as an advertisement for milnacipran, and Kasper affixed his name to it as if it were his own work. In fact, to quote from the Guardian...

… the original, ghostwritten article which contained what they described as "the main commercially important points" was to be there too. "Siegfried Kasper has kindly agreed to author this one," they said. The name of Professor Kasper of the University of Vienna, editor of the journal, duly appeared on the unaltered, published article, complete with the original references to Dr Healy's work. Professor Kasper told the Guardian that he was happy with the content of the article.

According to one source, Kasper has “authored over 800 research reports and reviews.” One is left to wonder how many of those were actually written by him versus written by ghostwriters and rubber stamped under his name.

When academics are willing to sell their names and reputations, how can the university system function as an independent check on the claims made by drug manufacturers?

Thursday, February 08, 2007

Evidence Biased Medicine (To the Core)

The Last Psychiatrist wrote an excellent comment regarding a post describing how "scientific" data and/or its analysis and interpretation are often cooked by ghostwriters and/or friendly academics. He discussed how the whole process of publishing research is biased, a point that will be discussed in depth throughout this lengthy post. The Last Psychiatrist said:
Sure Pharma puts pressure on doctors, and forces through studies that are helpful to them (and suppresses those that hurt them).

But the real problem in medicine is the academic centers. Their bias is dangerous because it's so subtle and pervasive.

If Astra Zeneca does a Seroquel study, I think we can guess the bias. But when Assistant Professor Jones does a Seroquel study-- funded by the NIH-- is that study magically free of bias? What about Jones's beliefs on medications (he thinks pharmacotherapy is a gold mine, or is he anti-drugs and pro therapy?; maybe he's pro-seizure drugs (Depakote, Lamictal) and anti-antipsychotics (or the other way around?) Maybe his mentor gets AZ money (which is used to pay his salary through the university?) Maybe NIH has a stake in getting expensive drugs like Seroquel to look bad (e.g. CATIE?)

And journals are worse: think that the editors of a journal don't have biases-- even direct pharma ones?

And the three peer reviewers?

Ever wondered what articles don't get accepted for publication, and why? (And I say this as someone who has a pretty high rate of publication success.)

And why do those journals-- which publish public data-- cost $1000/yr and can't be accessed by the public?

The first and most important step to fixing medicine is abandoning the journal system. All articles, including the raw data that generated them, photos, scientific notebooks, etc, should go online. Let the world vet the data.
I agree with the great majority of what he said. I do find it hard to believe that NIH is against expensive medications, since many NIH folks have ties to drug companies, which are, of course, pushing newer and more expensive meds.

We have to keep in mind that the whole "scientific" peer-review process includes a lot of bias.

Let's review how studies go from a set of numbers into a published manuscript. It may sound like a dull process, but this is the foundation of our so-called evidence base in medicine, so it is actually very important to understand.

How is it biased? Let me count the ways...

Step 1. Analyzing data and writing the paper. Researchers transform a bunch of numbers into a paper.

1) The researchers may never have seen the raw data.
It's anyone's guess whether the authors have actually seen the data on which they base their writings; in some cases, it is unlikely that they have. So the company could already have altered the data -- it's unclear how often this happens, but it is certainly a possibility in some (hopefully rare) instances.

2) The company can analyze the data in any way it sees fit.
Go to Aubrey Blumsohn's site for an excellent example of why this can be problematic. Company statisticians can cook the books either overtly or in a more subtle manner (like they did with the Seroquel data -- here and here).

3) The company can interpret the numbers in any way it wants.
For example, if someone committed suicide while taking a drug, the drug couldn't have caused it, but if the patient committed suicide on a placebo, then the placebo caused it. Even when the data are not favorable, positive conclusions are reached in most instances (here, for example).

4) The company can bury any unfavorable data.
Suppose that depression was measured in five different ways. If a couple of those measures yielded unfavorable results, toss them aside and act as if they never existed. Don't even mention that they existed in the article.

5) When all else fails, deep-six the study.
If the data still fail to prove favorable, just bury the entire thing -- don't publish it. When lawyers and/or researchers get their hands on unpublished data, it quite often shows unfavorable results which the sponsoring company thought best to bury.

Step 2. Peer Review. The paper is then sent off to "experts" for peer review. As The Last Psychiatrist said earlier, these folks (including me) have their biases. Indeed, one of my peers has called the peer review process "a Rorschach test of the reviewers," meaning that you can easily see reviewers' biases in their reviews. Most reviewers for psychiatry journals have ties to industry, which have likely shaped their beliefs toward something like "drugs are safe and effective," though biases will vary.

The comments of these expert reviewers are quite important in determining whether the study will get published.

Here's what Richard Smith, former editor of the British Medical Journal, had to say about peer review:
The problem with peer review is that we have good evidence on its deficiencies and poor evidence on its benefits. We know that it is expensive, slow, prone to bias, open to abuse, possibly anti-innovatory, and unable to detect fraud. We also know that the published papers that emerge from the process are often grossly deficient.
Hmmm. No, this is not sour grapes on my part -- I've little to complain about in terms of being published. But I, like many other researchers, am often befuddled by the whole process -- it often seems that reviewers are unhelpful.

Does peer review help, at least a little bit? I think so. Does it solve the problem of low-quality papers hitting journals, which are then turned into marketing copy by the drug and device industry? Obviously not.

Step 3. Editorial Decision. The editor decides whether to accept the paper (usually after some revisions are made).

Journal editors frequently have extensive ties to industry. Just Google the names of many editors and you'll find they have received funding from a wide variety of sources. We also know that peer reviewers sometimes make good comments, yet the editor chooses to ignore them.

Note that nearly all journals are for-profit entities. How do they make money? Advertising, subscriptions, and reprints. If a journal runs an article favorable to industry (saying that vagus nerve stimulation is great for depression, for example), the company will likely buy thousands of reprints for dissemination to physicians. The journal makes good money on each reprint; a single study can bring in tens of thousands of dollars, even up to a million, in reprint sales. A study that is unfavorable or irrelevant to industry is not going to generate that revenue. So from a business standpoint, it makes more sense to print studies (like this or this) written with a slant favoring a product than to run something less industry-friendly.

What to do?

Start by making all trial information publicly available.
The Last Psychiatrist said it, Richard Smith said it, and I agree. I don't think we should abolish journals altogether -- that seems extreme -- but making data publicly available is an excellent idea.

Penalize those who engage in misconduct.
As Fiona Godlee (editor of the British Medical Journal) stated recently:

So what can we do to change the blind-eye culture of medicine? In the interests of patients and professional integrity I suggest intolerance and exposure.

--SNIP--

And if journals discover authors who are guests on their own papers, they should report them to their institution, admonish them in the journal and probably retract the paper.

Reputations for sale are reputations at risk. We need to make that risk so high it's not worth taking.

Other thoughts?

Update (2-9-07): Quite a few people have been reading this via reddit. To see comments regarding this post at reddit, click the following link.

Wednesday, January 31, 2007

Journal Editor Unapologetic Over Paxil/Seroxat Article

Dr. Mina Dulcan is the editor of the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), which, as she noted on Panorama, is very widely read among child and adolescent psychiatrists. So, in this prestigious journal, one would expect high editorial standards.

Let’s go through what happened with study 329, which became a publication in JAACAP in July 2001 on which Dr. Martin Keller (see here) was the lead author. The study was submitted to JAACAP (after the Journal of the American Medical Association had rejected it – good for them), and Panorama nicely documented a couple of the reviewer comments. They included:

Overall, results do not clearly indicate efficacy – authors need to clearly note this.

The relatively high rate of serious adverse effects was not addressed in the discussion.

Given the high placebo response rate… are (these drugs) an acceptable first-line therapy for depressed teenagers?

Remember that journals receive manuscripts, and then send them to be reviewed by researchers in the field as to their quality. These reviews are generally taken very seriously when considering what changes should be made to a paper and whether the manuscript will be published.

Yet the paper was not only accepted and published in JAACAP; the editor also seems to have ignored the suggestions of the individuals who reviewed it. The issues raised in the reviews were obviously not addressed – feel free to read the actual journal article and you can see that the efficacy of paroxetine was pimped well beyond what the data showed, and the safety data were likewise painted to show a picture contrary to the study’s own findings. Again, please feel free to read my earlier post comparing the study’s data with how those data were reported and interpreted in the journal article.

Read this carefully – we all make mistakes. When someone points out that a mistake was made, it is natural to become defensive – that’s okay. However, several years after the fact, one should be able to admit fault and learn from one’s errors; at least that is my opinion.

Dr. Dulcan was asked if she regretted allowing Keller et al.’s Paxil/Seroxat study to be published – her response was less than I had hoped for:

I don’t have any regrets about publishing [the study] at all – it generated all sorts of useful discussion which is the purpose of a scholarly journal.

Let’s follow this train of logic. If a study is particularly poorly done or badly misinterprets its own data, researchers and critics will cry out and point out its numerous flaws. This could, of course, be interpreted as “useful discussion,” which I suppose is what Dulcan meant happened in the case of this article. After all, several letters to the editor expressed frustration with the study and with how Keller et al. interpreted their data. So, by my reading of Dulcan’s logic, we should publish studies with as many flaws as possible so that we can “usefully discuss” them.

Of further interest, Jon Jureidini and Anne Tonkin had a letter published in JAACAP in May 2003. In their letter they stated

…a study that did not show significant improvement on either of two primary outcome measures is reported as demonstrating efficacy (p. 514).

The tone of their letter was perhaps a bit catty, as it discussed how Keller et al. seem to have spun their interpretation well out of line with the actual study data. I can, however, hardly blame them for their snippiness. Another nugget from their letter:

We believe that the Keller et al. study shows evidence of distorted and unbalanced reporting that seems to have evaded the scrutiny of your editorial process (p. 514).
Thank you to Jureidini and Tonkin for their contribution to the “useful discussion” – indeed, their comments were likely the most useful of all that were contributed. I give Dulcan credit for publishing their letter. I would be more impressed if she were willing to admit that there were some problems with the editorial process in the case of this article, but I suppose you can’t win them all.

Disclaimer: I watched Panorama and took copious notes. I believe all quotes are accurate but please let me know if you think I transcribed something incorrectly.

Update (1/29/08): My apologies. The editor’s name is Mina Dulcan, not Mina Duncan.