Note that there is, indeed, a small benefit for drug over placebo. Is the small benefit worth the side effects? Well, that's a different question... An even better question is "Please define the term 'small benefit'..." The benefits for medication over placebo appear to be an underwhelming 1.8 points on the Hamilton Depression Rating Scale. Considering that it is a 52-point scale, with many of those points being determined by ratings of sleep and anxiety, any advantage for a drug relative to a placebo might be unrelated to the core symptoms of depression.
What this study adds is that the most severely depressed patients appear to show somewhat more benefit on antidepressants relative to placebo. Their analyses indicate that the placebo response tends to decline among the most severe cases of depression while the antidepressant effect remains about the same. But most people who take antidepressants are not severely depressed. And, shock of all shocks, Kirsch and colleagues found that data from some trials showing no advantage for drug over placebo were simply not available.
Warning: This paragraph is a bit wonky, so you might want to skip ahead. The authors adopted the standard, used by the National Institute for Clinical Excellence (NICE) in the UK, that anything under an effect size of .50 is not clinically significant. According to conventional criteria (e.g., Cohen's), an effect size of .50 is a moderate effect and an effect size of .20 is a small effect. The average effect in this analysis for drug over placebo was .32. Such an effect is certainly not impressive, but it should not be confused with no effect. The problem (as noted above) is that we don't really know what a small effect means -- on which items of the rating scale was there typically a difference between drug and placebo? Were those items relevant to depression? In any individual study, one can cherry-pick items from a rating scale and show a difference favoring a drug, but I'd be more interested in what a large meta-analysis such as Kirsch's most recent study would show on the individual items of the HAM-D or other rating scales. One more thing: the effect for antidepressants looks especially bad at the lowest end of severity. Go to Table 1 in the study and look at the effect sizes for the studies where the baseline depression rating is under 24. My own back-of-the-envelope calculation, factoring in sample size, gives an effect size of about .10, which equates to about nothing for the least depressed folks.
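For the curious, the kind of back-of-the-envelope calculation I mean is just a sample-size-weighted average of the per-study effect sizes. Here is a minimal sketch; the d values and sample sizes below are invented for demonstration and are not the actual figures from Table 1.

```python
# Sample-size-weighted mean effect size across a set of trials.
# NOTE: the (d, n) pairs below are hypothetical, for illustration only --
# they are NOT the real numbers from Table 1 of Kirsch et al.

def weighted_mean_effect(trials):
    """trials: list of (effect_size_d, sample_size_n) pairs.
    Returns the n-weighted mean of d."""
    total_n = sum(n for _, n in trials)
    return sum(d * n for d, n in trials) / total_n

# Three made-up low-severity trials (baseline HAM-D under 24):
trials = [(0.05, 120), (0.15, 80), (0.10, 200)]
print(round(weighted_mean_effect(trials), 3))  # prints 0.095
```

Larger trials pull the average toward their own effect size, which is why a couple of big null results can drag a pooled estimate down near zero.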
Fortunately, The Independent has a nice little story on the topic, though the headline is a little obnoxious. I quote as follows, ending with the responses from the pharma companies, which provide the usual nonspecific and bogus mumbo-jumbo:
Alternative treatments for depression, such as counselling or physical exercise, should be tried first, Professor Kirsch said. The pharmaceutical companies had withheld data that was available to the licensing authorities so that doctors and patients did not understand the true efficacy, or lack of it, of the drugs.
"This has been the frustration. It has made it very difficult to answer the question of whether the drugs work. The pharmaceutical companies should be obliged when they get a drug licensed to make all the data available to the public. When you analyse all the trials of these SSRIs, both published and unpublished, it leads you to more sober conclusions," he said.
Tim Kendall, deputy director of the Royal College of Psychiatrists' research unit, said the findings, if proved true, would not be surprising. As head of the National Collaborating Centre for NICE guidelines on mental health, he said it had proved impossible to get access to unpublished trials in the past.
"The companies have this data but they will not release it. When we were drawing up the guidelines on prescribing antidepressants to children [in 2004] we wrote to all the companies asking for it but they said no. The Government pledged in its manifesto to compel the drug companies to give access to their data but that commitment has not been met."
GlaxoSmithKline, makers of Seroxat, said the authors of the study had "failed to acknowledge" the very positive benefits of SSRIs and their conclusions were "at odds with the very positive benefits seen in actual clinical practice." A spokesperson added: "This one study should not be used to cause unnecessary alarm for patients."
Lilly said in a statement: "Extensive scientific and medical experience has demonstrated that fluoxetine [Prozac] is an effective antidepressant.
Wyeth said: "We recognise the need for both pharmacological and non-pharmacological treatments for depression."
If there is such "extensive" evidence about the "very positive" effects of these medications, why wouldn't a single one of these companies cite a single study? Oh, right, because Kirsch already examined the relevant studies. This study, combined with the recent study in the New England Journal of Medicine showing how every single drug company with an antidepressant on the market twisted its efficacy data, should serve as a wake-up call for those who have not been paying attention to the issues of mediocre antidepressant efficacy and how inconvenient data are buried. Overplay the positive data; hide or lie about the negative data. As for depressed patients: Let Them Eat Prozac.
Also see discussion at Furious Seasons.