Wednesday, October 31, 2007

Ghosts, Goblins, and Serotonin: Boo!

In an earlier post, I noted that I thought a key opinion leader had contradicted himself across two articles regarding the role of serotonin in depression. One reader posted a comment that challenged my assertion, to which I reply via this post. The reader, “Alan,” stated, in part, that
He only said what he said -- that [serotonin] is clearly disordered and deficient in many if not most people with depression. And he is right. There's overwhelming evidence for that. (There's also great evidence for the therapeutic value of serotonergic interventions in depression, which he did not mention.) That's not to say that other things are not playing a role, or that serotonin is the sole problem area -- the "single fundamental neurobiological defect". He only said what he said. And this blogger is jumping all over him. Why?

Okay. Two issues here. One: did the key opinion leader (Charles Nemeroff) contradict himself? Two: is serotonin deficient in depression? This post deals with issue two -- people can read the old post and decide for themselves whether Nemeroff's statements were contradictory or whether I was in error.

Part 1: Does a Serotonin Deficiency Cause Depression?

Let’s break this thing down to make it really simple to understand. Statement 1 from Alan: [serotonin] is clearly disordered and deficient in many if not most people with depression... There's overwhelming evidence for that.

Really? Drug companies certainly use serotonin to market their antidepressants, but is there solid evidence for a serotonin imbalance in depression? Actually, no.

Despite making excellent marketing copy, studies have found no consistent abnormality in serotonin in depressed people. Doubt me? Read this excellent article by Lacasse and Leo (published in PLoS Medicine) that describes the gap between the marketing of serotonin in depression and the scientific literature.

One quote from the PLoS Medicine article:

Consider the medical textbook, Essential Psychopharmacology, which states, “So far, there is no clear and convincing evidence that monoamine deficiency accounts for depression; that is, there is no ‘real’ monoamine deficit” [44]. Like the pharmaceutical company advertisements, this explanation is very easy to understand, yet it paints a very different picture about the serotonin hypothesis.

But SSRIs affect depression and also affect serotonin, so depression must be due to a serotonin deficiency, right? Um, no. Again, I'll leave it to Lacasse & Leo:

With direct proof of serotonin deficiency in any mental disorder lacking, the claimed efficacy of SSRIs is often cited as indirect support for the serotonin hypothesis. Yet, this ex juvantibus line of reasoning (i.e., reasoning “backwards” to make assumptions about disease causation based on the response of the disease to a treatment) is logically problematic – the fact that aspirin cures headaches does not prove that headaches are due to low levels of aspirin in the brain. Serotonin researchers from the US National Institute of Mental Health Laboratory of Clinical Science clearly state, “[T]he demonstrated efficacy of selective serotonin reuptake inhibitors…cannot be used as primary evidence for serotonergic dysfunction in the pathophysiology of these disorders” [12].

I could quote the article extensively, but I’d prefer that you read it yourself. Whether you have a scientific background or not, it’s easy to understand and it shows that the serotonin emperor is wearing no clothes. Don’t take my word for it. After you’ve read the article, if you’d like to do your own independent investigation on the topic, go ahead. Please report your findings showing a strong link between serotonin dysfunction and depression right here in the comment section. I’m waiting.

It is true that variations in the serotonin transporter gene can predispose people toward experiencing depression. I don’t deny that. And given that our understanding of the brain is still rather primitive, there may be some point where we figure out that serotonin plays a certain role in depression. But at this point, there is no evidence supporting a specific serotonin deficiency in depression. Again, please correct me if you disagree.

Part 2: Do SSRIs Work?

Another piece from Alan’s comment:

There's also great evidence for the therapeutic value of serotonergic interventions in depression.

Like what? Consider that about 80% of the drug effect is duplicated by placebo -- there is only about a 20% difference in efficacy between placebo and antidepressant (Kirsch et al., 2002). Is that "great evidence" of efficacy? It's more encouraging than 0% better than placebo, but I remain less than fully convinced. And then there are the sexual side effects and the increased risk of suicidal thinking and suicide attempts... If depression were really due to poor serotonin function, one would expect treatments that increase serotonin transmission to have a much stronger advantage over placebo.
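The "80% duplicated by placebo" figure is just a ratio of mean improvements, which a couple of lines of arithmetic make concrete. The numbers below are hypothetical (my own illustration, not figures from Kirsch et al.), chosen only to show how such a percentage is computed:

```python
# Hypothetical mean improvements on a depression rating scale (illustrative only)
drug_improvement = 10.0     # mean improvement in the antidepressant arm (made up)
placebo_improvement = 8.0   # mean improvement in the placebo arm (made up)

# Share of the drug response duplicated by placebo, per Kirsch-style accounting
placebo_share = placebo_improvement / drug_improvement  # 0.8 -> "80% of the drug effect"
drug_advantage = 1.0 - placebo_share                    # 0.2 -> the ~20% difference

print(f"placebo duplicates {placebo_share:.0%}; drug adds {drug_advantage:.0%}")
```

Whether a 20% relative advantage justifies the side-effect burden is exactly the question at issue.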

I appreciate Alan's comment, as well as its timing. It is only fitting that on Halloween I discuss the serotonin-depression link, as it is about as well supported scientifically as the ghost stories often told on such a holiday.

Friday, October 26, 2007

A Blog on Atypical Antipsychotics? SWEET!

I have discovered a blog (simply titled Atypical Antipsychotics) that I believe has some serious potential. It has the following things I like in a blog:
  1. Sarcasm
  2. Unwillingness to blindly accept marketing department-generated BS
  3. It disses Invega
Not sure what more anyone could want in a blog, really. Only a few posts so far, but you can bet that I'll be watching it like a hawk. And I invite you to do likewise.

SSRIs, Anxiety, Kids, Suicide, and Credible Evidence

I wrote a while ago about Christopher Lane's assertions that social anxiety was overdiagnosed and overtreated, particularly among children. Many people disagreed with Lane. One person who disagreed was Dr. Ronald Pies, a psychiatrist at SUNY Upstate Medical Center, who wrote in the New York Times that
... there is no credible evidence to support Mr. Lane’s implication that S.S.R.I. antidepressants are linked with increased risk of suicide in children prescribed these medications for social anxiety. The Food and Drug Administration’s initial concerns stemmed from studies in children with major depression, not anxiety disorders, and the latest evidence has not supported a strong link between S.S.R.I.’s and risk of suicide.
I re-read the latest summary of evidence regarding SSRIs and suicide in kids. Mind you, the article that I referenced (Bridge et al., 2007, in JAMA) came to decidedly pro-SSRI conclusions -- I didn't get my evidence dropped to me from a black helicopter. Based on trials submitted to the FDA, as reported by Bridge and colleagues, there were data that pertained directly to Dr. Pies' assertion. Here are the data regarding SSRIs and suicidality in children and adolescents with anxiety disorders.

Note: AD represents Antidepressant; PL represents Placebo

OCD
  Suicidal ideation -- AD: 3 of 362; PL: 1 of 339
  Suicide attempt/preparatory action -- AD: 1 of 362; PL: 0 of 339

Non-OCD Anxiety Disorders
  Suicidal ideation -- AD: 5 of 573; PL: 0 of 582
  Suicide attempt/preparatory action -- AD: 1 of 573; PL: 0 of 582

Total for Anxiety Disorders
  Suicidal ideation -- AD: 8 of 935; PL: 1 of 921
  Suicide attempt/preparatory action -- AD: 2 of 935; PL: 0 of 921

Compare the odds of having suicidal ideation on drug to the odds of having suicidal ideation on placebo. Kind of a large difference, eh? I realize that the odds of developing suicidal ideation are still small, even on medication, but they are substantially higher than for a child taking placebo.
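To put a rough number on that difference, here is a quick back-of-the-envelope sketch (my own arithmetic on the pooled anxiety-disorder totals above, not an analysis from Bridge et al.):

```python
# Pooled suicidal-ideation counts for the anxiety-disorder trials (from the table above)
ad_events, ad_n = 8, 935   # antidepressant arm
pl_events, pl_n = 1, 921   # placebo arm

# Risk (proportion with ideation) in each arm
ad_risk = ad_events / ad_n          # ~0.0086, i.e., under 1%
pl_risk = pl_events / pl_n          # ~0.0011

risk_ratio = ad_risk / pl_risk      # ~7.9: roughly eight times the risk on drug

# Odds ratio, the measure usually reported in meta-analyses
odds_ratio = (ad_events / (ad_n - ad_events)) / (pl_events / (pl_n - pl_events))

print(f"risk ratio ~ {risk_ratio:.1f}, odds ratio ~ {odds_ratio:.1f}")
```

So the absolute risk is small in either arm, but the relative difference is large, which is exactly the tension described above.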

While one could correctly point out that the difference is not "statistically significant," I think one would be foolish to fall back on that argument. We have seen in both adults and children that SSRIs are related to more suicide attempts, and this finding is fairly consistent across trials, at least among children and young adults. When events occur rarely, we need exceedingly large samples in order to be quite certain that the event (such as suicidal ideation in SSRI trials for anxiety in kids) is not an anomaly. But when kids are being treated for disorders that are very rarely associated with suicidality, yet show a much higher rate of suicidal ideation on a drug compared to a placebo, does it not make sense to warn patients about such potential hazards? One could fall back on the argument that fewer SSRI prescriptions lead to more suicides, but that hasn't really held up well scientifically.
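The "rare events need big samples" point can be made concrete with the standard two-proportion sample-size formula. This is a rough normal-approximation sketch of my own (not a calculation from the paper): to reliably detect a difference as small as 8/935 vs. 1/921 at the usual 5% significance level with 80% power, you would need more children per arm than these pooled trials actually had.

```python
# Rough two-proportion sample-size estimate (normal approximation):
# n per group = (z_alpha/2 + z_beta)^2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2)^2
p1 = 8 / 935    # ideation rate on antidepressant (pooled anxiety trials)
p2 = 1 / 921    # ideation rate on placebo

z_alpha = 1.96  # two-sided alpha = 0.05
z_beta = 0.84   # power = 0.80

n_per_group = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(f"~{n_per_group:.0f} children needed per arm")  # ~1344, more than the ~930 per arm pooled here
```

In other words, even pooling every pediatric anxiety trial submitted to the FDA leaves the comparison underpowered, so "not statistically significant" is a weak reassurance.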

In my eyes, the above data represent "credible evidence" that SSRIs can indeed lead to an increase in suicidal thoughts among kids with anxiety disorders. Either Dr. Pies was unfamiliar with the above evidence or he believes it is not credible.

No actual suicides were recorded during the trials. Of course, if someone got worse during the study, then quit the study and killed himself/herself, who knows whether such data were included. Perhaps such events occurred -- I don't know. And there was much more supervision of these kids in a clinical trial than you'd see in real life, which could have kept some people from suicide. Further, let's suppose that the drug causes a child with social anxiety to become suicidal. He makes no attempt on his own life, but he is suicidal for a month. Doesn't prior suicidal thinking predict later suicidal thinking and later suicide attempts? So even if the child makes no immediate attempt on his life, couldn't he be at higher risk down the line? Maybe I'm losing my marbles, but I think it's a reasonable question.


Tuesday, October 16, 2007

Invega: Just in the Nick of Time?

In a Bloomberg report, it was noted that:
Johnson & Johnson cut costs as it faces generic competition to its best-selling prescription drug, the antipsychotic Risperdal, which generated $4.2 billion last year.
Phew, it's a good thing that Invega (Son of Risperdal) is on the market to save the day for J & J. And there is some preliminary (read: probably bogus) research suggesting that it works better than Seroquel in treating schizophrenia. See my recent post to understand my skepticism regarding the latest results. Invega is entering a crowded market (Abilify, Seroquel, Zyprexa, generic risperidone, Geodon, etc.) and I don't think it is going to fare particularly well unless there is some pretty darned impressive marketing. Which is not entirely out of the question. I humbly suggest taking a page from Pfizer's Geodon campaign.

Monday, October 15, 2007

Son of Risperdal Beats Seroquel

Janssen, manufacturer of Invega (son of Risperdal) funded a study comparing Invega, Seroquel, and placebo in the treatment of schizophrenia. Results were as follows:
"After two weeks, those on Invega had a greater reduction in symptoms as measured by a standard test called Positive and Negative Syndrome Scale for Schizophrenia, or Panss. The test measures symptoms such as disorganized thoughts and uncontrolled hostility. The score for Invega patients declined 23.4 points, 17.1 points for Seroquel and 15 points for placebo, according to J&J."
By the way, note the rather paltry advantage of Seroquel over placebo. These results fit nicely into a pattern: a study published in the American Journal of Psychiatry in 2006 found that the best predictor of which antipsychotic would be shown superior in a head-to-head comparison was who funded the study. Also feel free to examine their table pointing out the biases in these various comparative studies.
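For what it's worth, the quoted PANSS changes translate into drug-versus-placebo advantages with simple subtraction (my own arithmetic on the numbers J&J reported, not an analysis from the study):

```python
# PANSS score reductions reported in the J&J-funded trial quoted above
invega_drop = 23.4
seroquel_drop = 17.1
placebo_drop = 15.0

invega_advantage = invega_drop - placebo_drop      # 8.4 points better than placebo
seroquel_advantage = seroquel_drop - placebo_drop  # 2.1 points better than placebo -- the "paltry" gap

print(f"Invega: +{invega_advantage:.1f} vs placebo; Seroquel: +{seroquel_advantage:.1f} vs placebo")
```

A 2.1-point edge over placebo on a roughly 30-to-210-point scale is the kind of number that makes the sponsor's chosen comparator look conveniently weak.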

Pharma claims to spend bazillions of dollars on research -- but these studies aren't research in the classic sense of a scientific endeavor undertaken to gain knowledge. They are exercises in marketing: set up a study with some sort of bias favoring your drug, then rush out the drug reps with low-cut blouses and fistfuls of reprints of studies showing your drug is superior to the competition. The competition then retaliates with a study whose design is biased in favor of their drug. And the cycle goes on and on. Pharma counts these "studies" as research expenditures and waxes on about its dedication to developing lifesaving medications -- as if studies done purely for marketing purposes had anything to do with developing lifesaving medications.


Friday, October 12, 2007

Best *!&@! Article Ever

Well, at least the best article on cursing. While I've somehow received a G rating for this site, I think the politics of cursing are fascinating. Look at how much time and emotion we invest in the idea of cursing. No, you can't say that word -- that'll get you a parental advisory warning. If you said that word in class, you could get fired. Et cetera. I'm not advocating for a cuss-like-a-gangsta-rapper society, nor for prudishness. What I am advocating is that y'all read an incredibly interesting article on cursing by Dr. Steven Pinker, available at the New Republic's website.

Hey, it can't be all drugs, "science," and PR all the time on this site, can it? Actually, I have a tie-in: consider how many times, while doing my investigative work for this site, I've ended up thinking "WTF is going on here??!?!?!"

Hat Tip: Mind Hacks.

Wednesday, October 10, 2007

Blumsohn, History, and Fire

Wow. I noted yesterday in A History (and Future) Lesson that Aubrey Blumsohn had posted a list of scientific-misconduct-related goodies that occurred on October 8th in various years. Proving that October 8th was not an anomaly, he has now posted a list of scientific-misconduct-related events that occurred on October 9th over the years.

His posts from the last two days are mandatory reading. In fact, I declare that the Scientific Misconduct Blog is ON FIRE! Today, I will post nothing more so that you can run to the Scientific Misconduct Blog and read the excellent posts noted above.

Tuesday, October 09, 2007

A History (And Future) Lesson

When science, industry, and government collide, the results are often less than pretty. Aubrey Blumsohn offers a glimpse into several episodes, all apparently tied to October 8 of various years. Either October 8 is a very bad day, or these incidents occur with regularity on many other days of the year as well. I strongly suggest (nay, I insist) that you educate yourselves over at the Scientific Misconduct Blog.

When done, feel free to head over to Furious Seasons for a reminder of how the "patient advocate group" known as the National Alliance for the Mentally Ill touted the second-generation antipsychotics as life-saving. That's all fine and dandy, until one notes there are no data showing that schizophrenia outcomes have improved by a single iota since these drugs were foisted upon the public.

But don't worry, these miracle antipsychotic drugs are now prescribed for bipolar disorder, Mega-Watered Down Bipolar Disorder, autism, depression, and whatever else you can imagine. So, the gap in lifespan between people with schizophrenia and the rest of us continues to increase, yet these drugs are still pimped as a huge improvement over older treatments.

My Guaranteed Prediction: When the next bandwagon of psychiatric treatments comes out, count on them being touted as safer and more effective than the drugs they replace. The same companies that are currently pushing atypical antipsychotics will eventually push other antipsychotic drugs and will then denigrate the very treatments they now claim are life-saving. NAMI and others that claim to advocate for patients will state unequivocally that the new treatments save lives and make the world a better place. The old treatments may even be labeled as causing dependence, which of course will not be true of the newer treatments.

Of course, at the anemic rate at which psychiatric drugs are being developed these days, it may be a few years before the prior paragraph comes true, but come true it will. Mark my words. I have no special powers of prediction -- all one needs to do is notice a pattern and note that there are currently no real obstacles (besides having very few drugs in the pipeline) to the current script being replayed over and over again.

If you think the media or a clinical trial registry are going to fix things, consider yourself a sunny optimist.

Friday, October 05, 2007

Lilly Updates Label: A Little Too Late

The good news is that Lilly has updated its warnings for Zyprexa (olanzapine) and Symbyax (olanzapine/fluoxetine combo). Here's a bit from the PR release:
Specifically, the changes include new warnings for weight gain and hyperlipidemia (elevation of triglycerides and cholesterol) and updated information in the warning for hyperglycemia (elevated blood sugar), including additional language on a greater association of increases in glucose levels with olanzapine than with some other atypical antipsychotics.
Here's the tragicomic part:
"Today's communication is part of Lilly's historical and ongoing commitment to inform doctors and patients about updated prescribing information," said Sara Corya, M.D., global medical director, Lilly.


"Lilly continues to recommend that clinicians consult expert guidelines for treating people with antipsychotics, particularly the monitoring of lipids and blood glucose, regardless of the medication prescribed," Dr. Corya said. "Over the last several years, the company has been actively informing healthcare professionals about these recommendations."
Yes, Lilly is all about honestly sharing information. Please read my prior post on the incredible shifting Zyprexa glucose data here. Read a whale of a great post from Furious Seasons on how Lilly tried to play weight gain on Zyprexa off as a benefit of treatment (!) here. Interesting questions about Lilly's handling of the glucose discussion can also be seen here. If you have some time to burn, look through the above posts, then tell me Lilly has a "historical and ongoing commitment" to sharing data openly and honestly.

As for the expert guidelines, yeah -- great idea! Like TMAP -- read my post about how the research supporting said guidelines for treating bipolar disorder is flimsy at best, yet these "expert guidelines" are oft-cited as a great example of the good that comes from expert guidelines. Oh, and did I mention that the "expert guidelines" are often authored with the help of industry?

Here's the bad news. Really bad news. Philip Dawdy has posted an interview with the mother of a man who took Zyprexa, apparently piled on the pounds, and then allegedly died of "profound hyperglycemia." It is a sad, sad story and well worth your time to read it.

While you're there, look around the Furious Seasons blog and ask yourself, "Is there anywhere else where I can find this type of mental health coverage?" I bet you'll say no. If you want continuing coverage of these issues, I suggest that you contribute whatever you can to Philip Dawdy, author of Furious Seasons.

Thursday, October 04, 2007

Shove Your Gifts Where The Sun Don't Shine

There is a great rant posted by Dr. Daniel Carlat, which ends with
We don't need gifts from drug reps, nor do we need the biased "education" they provide during their visits. Let's stop pretending that gifts are anything other than influence-peddling.
Read his rant and, while on his site, check out many other excellent posts. It is no secret that I am a big fan of his work. We need many more psychiatrists who are capable of seeing through the marketing blitzkrieg that has come to dominate modern psychiatry.

You GO Bill

One for the ages by Bill Maher on Pharma, junk food, and laziness.

Hat Tip: Cary Byrd.

Clinical Trial Registry of WHAT?

Worried about data being hidden in clinical trials, with negative data regarding a drug's safety and/or efficacy being buried? Worry no longer. At least that's what the super-optimistic authors in the New England Journal of Medicine would have us believe. Here's what they wrote:
Of special interest to us, an additional provision of the act requires sponsors of all clinically directive therapeutic trials to register their studies, at inception, in a public database sponsored by the National Library of Medicine. Although some aspects of this provision are not ideal, such as the delayed public availability of registration information on device trials and the noninclusion of phase 1 trials, mandatory registration represents a critical advance in making clinical trials of new treatments public knowledge.
A decade ago a clinical trial could be conducted in secret. The trial’s sponsor, claiming proprietary rights, could keep all information about it, including its very existence, private. Thus, if a drug had important adverse effects, this information might never be made public. Legislators believed that such a possibility was not in the best interests of the American people. Once a clinical trial is mounted, the sponsor has an ethical obligation to publicly acknowledge the contribution of the participants and the risk they have taken by ensuring that information about the conduct of the trial and its principal results are in the public domain. With the FDA Revitalization Act, the United States joins other countries in recognizing that the human volunteers needed to complete a trial are more precious than the money required to mount it.
Wow, everything is looking up now! All data will be reported and all will be well. I read what I believe is a reasonably current version of said legislation, and I did not see anything that requires all results to be reported for every trial -- nothing even close, unless I missed something. Data reporting requirements should be pretty simple: data on every single efficacy and safety measure must be reported in full. Nothing less is acceptable. Otherwise, sponsors can continue to fund research from which the positive data are reported and the negative data are minimized.

The Present System is Broken. Here's an example of why the present Clinical Trials database is close to useless for learning about study results. First come snippets from a registered clinical trial, then comes the publication based upon the results. Snippets from trial protocol (what was going to happen in the study):
This is a study of the effectiveness of adding Abilify (aripiprazole), an atypical antipsychotic medication, to ongoing SSRI antidepressant treatment for depressed outpatients who are not responding fully to SSRI treatment alone. It is hypothesized that patients’ functioning will improve after 12 weeks of treatment with Aripiprazole and SSRI medication...

Total Enrollment: 15...

Primary Outcome Measures:
  • Hamilton Depression Rating Scale (HDRS)
Secondary Outcome Measures:
  • Clinical Global Impressions Scale (CGI)
  • Global Assessment of Functioning Scale (GAFS)
  • Beck Depression Inventory (BDI)
Study chairs or principal investigators

David J. Hellerstein, MD, Principal Investigator, NY State Psychiatric Institute, and St. Luke's - Roosevelt Hospital Center

Hellerstein DJ. Aripiprazole as an adjunctive treatment for refractory major depression. Prog Neuropsychopharmacol Biol Psychiatry. 2004 Dec;28(8):1347-8. No abstract available.
Clinical Trial Entry vs. The Publication. You can see the details above: 15 patients were to be enrolled in the study, with four measures of outcome. Let's see what that cited publication had to say about the results...
Ms. V. is a 46-year-old single female with a 5-year history of severe depression... What about the 14 other patients?

At that point [venlafaxine + mirtazapine not working], aripiprazole was added, initially at 15 mg/day then increased to 30 mg/day. Within a month, Ms. V. noted that her mood and concentration were improved, and she was no longer anhedonic. She began socializing with family members again, began gardening and was able to concentrate on reading and movies. After 3 months on the venlafaxine extended-release, mirtazapine, and aripiprazole, Ms. V. noted that her appetite remained good, she was sleeping 7 h per night, her mood was much better, and she had begun to seek a new job, sending resumes and phoning prospective employers... What happened to those measures of outcome mentioned above?
The linked publication discusses one case though the trial was to study 15 people. Did the other 14 all jump off a bridge? What happened? If we are going to have a clinical trial registry that requires reporting of results, is this what you want? I hope not. Next, the obligatory pro-Abilify propaganda, such as...
Furthermore, aripiprazole’s benign side effect profile [including minimal weight gain, sedation and parkinsonism, resulting from its low affinity for alpha-1-adrenergic, histamine (H1) and muscarinic (M1) receptors] suggests that it may be tolerable for refractory-depressed patients, who may need adjunctive treatment for a long period of time. The case of Ms. V. suggests the possible benefit of aripiprazole in treatment-resistant depression as an adjunct to other antidepressant medications.
When there is no clinical trial data to support your discussion, one can always fall back on discussing various receptors and how, theoretically, tweaking them in one way or another is going to lead to positive results.

The case study ends with:
Further studies of aripiprazole as an adjunctive medication for treatment-resistant depression are indicated.
Yeah, but what happened to the 14 other patients in this study?? I searched Medline for additional studies by Hellerstein using Abilify in depression and could find nothing that seemed to fit the bill. Please alert me if I missed something in the search. One could argue that perhaps the study is in the midst of being written up for publication -- if so, it's sure taking a while...

There's More.
If I had more time, I'd go through the registry, and I'm sure I'd find more interesting discrepancies between clinical trial protocols (where researchers say what is going to happen) and what was actually published (the official record of what actually happened). In fact, I've done this before, noting that a published study of Risperdal for depression clearly changed its data reporting somewhere between the clinical trial registry entry and the eventual publication. A rather thorough investigation into the topic found that:

"Overall, 50% of efficacy and 65% of harm outcomes per trial were incompletely reported. Statistically significant outcomes had a higher odds of being fully reported compared with nonsignificant outcomes for both efficacy (pooled odds ratio, 2.4; 95% confidence interval [CI], 1.4-4.0) and harm (pooled odds ratio, 4.7; 95% CI, 1.8-12.0) data. In comparing published articles with protocols, 62% of trials had at least 1 primary outcome that was changed, introduced, or omitted. Eighty-six percent of survey responders (42/49) denied the existence of unreported outcomes despite clear evidence to the contrary."

Who Cares? Well, if you don't mind receiving only part of the efficacy and safety data, then don't do anything. Send no emails, tell none of your friends, don't bother your local journal editor, and don't ask your doctor if he/she is aware that the so-called bedrock of evidence based medicine (i.e., results published in scientific journals) is based on selectively reported information.

The Kool-Aid. I'm really stunned that the folks at the New England Journal of Medicine would just rush off and publish an essay about how great things are now going to be. People have been calling for data to be shared openly for quite some time, and now, magically, with the passage of some vague legislation, all data will be made publicly available? The drug industry is suddenly going to publish all information on all of its clinical trials, overnight? And some bigwigs at NEJM are willing to drink this Kool-Aid? That is pathetic.

Wednesday, October 03, 2007

The Zyprexa Whitewash

Furious Seasons has another tidbit from the Zyprexa documents. It involves the terms "diabetes" and "whitewash." You can read the internal Lilly document yourself at Furious Seasons and decide for yourself.

If you've been living in a cave for the past 10 months or so, here are some other features regarding the infamous Zyprexa documents:
And then there is Abilify. Read Brandweek NRX's piece here. Oh, and Seroquel? Read here and here. The lawsuits will keep coming, and there will be large payouts. I'm not a fan of suing the pants off everyone, but if there is no other way to fight off-label marketing and other dubious promotion tactics, then so be it.

Tuesday, October 02, 2007

Peer Review Is Mediocre at Best

Regarding bias in "science" and the utter balderdash that passes for peer-reviewed science, I sometimes feel like a lone voice in the wilderness. Well, thank God -- another blogger has thrown down the gauntlet on the topic. The Last Psychiatrist has a great post on the topic in which he notes a few huge problems with medical journals, of which I'll highlight a few in upcoming posts. Let's start with peer review...
"Most people think peer review is some infallible system for evaluating knowledge. It's not. Here's what peer review does not do: it does not try to verify the accuracy of the content. They do not have access to the raw data. They don't re-run the statistical calculations to see if they're correct. They don't look up references to see if they are appropriate/accurate."
Couldn't agree more with the Last Psychiatrist. We just assume the raw data are accurate. Every study likely contains some small data entry or calculation errors, but what if the whole paper is based on a significant misrepresentation of the raw data? Wouldn't that be a large problem? What is reported and what is not reported? To put it in layman's terms, anybody can make up whatever the hell they want, and the peer reviewers assume it is true. We're working on the honor system here, and who knows how often the final paper reflects the real data, or whether we are dealing with undisclosed errors due to sloppiness, accident, greed, or a simple desire to cover up the bad news.

There is no way that even the world's greatest peer reviewer would catch such problems: without access to the raw data, we're simply trusting that all relevant information is presented in the manuscript. Reviewers might catch an obvious statistical error, but they sometimes miss the most blatant problems, such as a paper that makes an important conclusion based on no evidence whatsoever.

What do peer reviewers do?

Again, quoting from The Last Psychiatrist...
They look for study "importance" and "relevance." You know what that means? Nothing. It means four people think the article is important. Imagine your four doctor friends were exclusively responsible for deciding what you read in journals. Better example: imagine the four members of the current Administration "peer reviewed" news stories for the NY Times.
And what determines whether they think an article is important? Their bias, hopefully mixed with some basic understanding of research methodology and statistics. In mainstream psychiatry, you are raised to believe that meds are safe and effective and that newer meds are better than older ones. Is this based on evidence? Sometimes, but quite often it is based on marketing that would make a used car dealer proud. Peer reviewers often act as an echo chamber that reflects whatever marketing hooey is circulating lately. For example, consider that many poorly done studies of second-generation antipsychotics were featured prominently in journals -- the mainstream psychiatry culture was overflowing with excitement about heavily promoted new antipsychotic meds, to the point where peer reviewers were willing to gloss over flawed designs (for example, control groups receiving an unreasonable dose of an older antipsychotic) -- and now these drugs sell at a clip of well over $10 billion annually in the U.S.

No, I'm not claiming I don't have my own bias. Duh. You can see the cards I'm holding pretty clearly if you read this site with any regularity. The point is that peer reviewers need to recognize their bias and take a better, more objective look at the research they are reviewing. Too many industry-cheerleading pieces in journals lead to uncritical acceptance of treatments that nearly always fail to live up to their initial hype. Once a few trials have been published (even if poorly done and/or overstating efficacy while understating risks), the drugs rest on "science," which leads to yet more marketing. Check the actual track records of benzos, SSRIs, Depakote, and atypical antipsychotics if you doubt me. Each treatment "revolution" is closely linked to peer review. So if you are pleased as punch with the current state of affairs in mental health treatment, please send letters to your favorite medical journal editors thanking them for the present system. Don't let it change.

Or maybe the whole system needs a fundamental overhaul. More on that later.

Promo Time. I'll take yet another lead from the LP and humbly suggest that you promote this post via Digg, Reddit, or any other favorite service. I'd even more strongly suggest you hit up the LP's post and promote it. While you're sharing posts with the world, you should read my take on SSRIs, Suicide and Dunce Journalism and send it to all of your friends. And while I'm in promotion mode, give your money to Philip Dawdy if you like good journalism on mental health issues. If you want to pledge money to support the operations of this unpaid anonymous blogger (and you know you do!), thanks, but take it and give it to Philip. Now!