Friday, October 31, 2008

You Really Can Report Safety Data

A new study concluded that the combination of sertraline (Zoloft) and cognitive-behavioral therapy (CBT) worked better than either treatment alone for children with anxiety disorders. There was even a nonsignificant trend for Zoloft to outperform CBT, which was quite surprising to me. But that's not really the point of this post. The study can be read at the New England Journal of Medicine website.

I'd like to commend the researchers on doing something that is exceedingly rare in psychopharmacology and psychotherapy trials -- they gave a detailed report of adverse events. And we find that a greater percentage of kids showed suicidal ideation on... CBT. It was not a statistically significant difference, but it was nonetheless surprising. Zoloft, however, was related to significantly more disinhibition, irritability, restlessness, and poor concentration than CBT. This may have been a fluke, but two participants on Zoloft had "homicidal ideation" compared to none on CBT. I have bitched several times about missing/mysterious data on adverse events in psychiatric drug trials, and some have also complained that psychotherapy trials do a poor job of tabulating adverse event data. Again, kudos to the study authors for reporting adverse events; imagine if reporting safety data in such a manner were common practice.

Source: Walkup, J. T., Albano, A. M., Piacentini, J., Birmaher, B., Compton, S. N., Sherrill, J. T., Ginsburg, G. S., Rynn, M. A., McCracken, J., Waslick, B., Iyengar, S., March, J. S., & Kendall, P. C. (2008). Cognitive behavioral therapy, sertraline, or a combination in childhood anxiety. New England Journal of Medicine. DOI: 10.1056/NEJMoa0804633


Anonymous said...

The business and science of patient safety is complex as described in the following article:
Safety in Numbers

The article features the following comment which seems particularly relevant:

Drug safety is a paradox to many in the industry. If you look for drug safety concerns, you will always find them -- there are practically endless ways novel substances can interact with the diversity and complexity of the human system. If you don't look for safety concerns, you obviously create a whole different set of more serious problems. So there is an invisible line each company tries to walk with respect to transparency and thoroughness -- looking, but not looking too deeply; communicating, but not communicating too broadly.

Anonymous said...

I wish I were quite as impressed as you are, although it's true this level of detail has not been exhibited in many studies. What really concerns me about this trial, to be honest, is the failure to provide more detail about concomitant medication and/or prior treatment. The study did not rule out children diagnosed with ADHD on "stable" doses of stimulants, and nearly 12% of the kids had ADHD. Over 2% of the kids had "tic disorder," which can be a red flag for treatment with stimulants, neuroleptics, or even antidepressants.

They did exclude children who had not had "adequate responses" to two prior courses of SSRIs, which means they could have been including children who did have "adequate responses," or who had had just one prior course of SSRIs -- raising the possibility that at least some of the adverse events in the trial were related to withdrawal rather than anxiety. As far as I'm concerned, I'm afraid the failure to provide adequate detail on these factors and how they related to adverse events nearly voids the whole study. I realize that this practice of ignoring concomitant (to say nothing of prior) treatment is very standard in academic research, but that doesn't make it right.

75% of the kids in the trial were under 12, and the mean age was 10. Personally, I think it's criminal to give children this age Zoloft. The 12-week window is, of course, the "honeymoon" period on antidepressants. This study is going to go out to 6 months, and I suspect the results will be more favorable to therapy alone at that time. But after 6 months on Zoloft, these kids will be well on their way to a diagnosis of "bipolar" and additional cocktails of medication.

CL Psych said...


I am not saying the trial was perfect. My focus was on the reporting of adverse events, and I still think that that was done reasonably well. But was the study design perfect? Perhaps not -- I would need to re-read the study, paying more attention to the issues you mentioned in order to render an opinion on that.

And after discontinuation of treatment, I think CBT would outperform meds for these kids since research has generally supported that idea.

Anonymous said...

They reported "much improved" rates on the CGI of 60% for CBT alone, 55% for sertraline, and 24% for placebo. However, using a categorical outcome like "improvement" can be misleading. Say the primary goal is to run a mile in 10 minutes. If, out of 100 people running, people wearing green shirts do it, on average, in 9 minutes 45 seconds, and people wearing red shirts do it in 10 minutes 15 seconds, you could have a result where 60% of the green shirts make the goal versus 25% of the red shirts -- which sounds like a big deal, even though their average times differ by only about 5%.
Although there was a difference in "responders" (much improved on the CGI: 60% versus 24% for placebo), when you look at the actual data, scores on the Pediatric Anxiety Rating Scale, a 30-point scale, went from 18.8 at baseline to 9.8 in the Zoloft group, and from 19.6 to 12.6 in the placebo group -- a difference of about 9% of the scale, which, again, was not reported as statistically significant because it was not, in fact, very different. The combination wasn't significant either; only CBT alone was.
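To make the shirt-color point concrete, here is a quick simulation. The numbers are purely hypothetical (chosen to mimic the analogy, not drawn from the trial), and with a common spread of about 60 seconds a 30-second (~5%) difference in mean times still turns into a roughly 20-percentage-point gap in "goal met" rates once you dichotomize:

```python
# Toy illustration (hypothetical numbers, NOT the trial's data):
# a small difference in a continuous outcome can look dramatic
# once it is dichotomized into "made the goal" vs. "missed it".
import random
import statistics

random.seed(42)

GOAL = 600.0   # the 10-minute mile, in seconds
N = 100_000    # large samples so the proportions are stable

# Two groups whose mean times differ by only 30 seconds (~5%).
green = [random.gauss(585, 60) for _ in range(N)]  # mean 9:45
red = [random.gauss(615, 60) for _ in range(N)]    # mean 10:15

pct_green = sum(t <= GOAL for t in green) / N
pct_red = sum(t <= GOAL for t in red) / N

print(f"mean times: {statistics.mean(green):.0f}s vs {statistics.mean(red):.0f}s")
print(f"'made the goal': {pct_green:.0%} vs {pct_red:.0%}")
```

The continuous difference (585s vs. 615s) is modest, but the categorical split lands near 60% vs. 40% -- the cutoff sits right where both distributions are densest, so tiny shifts in the mean move lots of people across the line.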

Anonymous said...

Great blog. Somehow I found you while looking for info on our son's birth defect, esophageal atresia. I wish you the best. Also, I was wondering if there is any way you would be willing to exchange links? I would be so grateful. Thanks so much; I wish you nothing but the best.

Anonymous said...

RE: CBT increases suicidal ideation in anxious kids (compared to Zoloft). "Not statistically significant" means that the difference could be attributable to chance -- that is, if you did the study again with a different sample, you shouldn't be surprised to find the opposite result. Why bother testing for significance at all if people are going to home in on non-significant results?
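To illustrate: with a rare event occurring at the same true rate in both arms (hypothetical rates, not the trial's), the observed direction of the difference swings from replication to replication, which is exactly why a non-significant gap shouldn't be read as a real signal:

```python
# Toy simulation (hypothetical event rates, NOT the trial's data):
# two arms with an IDENTICAL 5% true rate of a rare event, sampled
# repeatedly, show "arm A worse" and "arm B worse" about equally often.
import random

random.seed(0)

TRUE_RATE = 0.05   # same true rate in both arms
N_PER_ARM = 100
REPLICATIONS = 2_000

a_higher = b_higher = ties = 0
for _ in range(REPLICATIONS):
    a = sum(random.random() < TRUE_RATE for _ in range(N_PER_ARM))
    b = sum(random.random() < TRUE_RATE for _ in range(N_PER_ARM))
    if a > b:
        a_higher += 1
    elif b > a:
        b_higher += 1
    else:
        ties += 1

print(f"arm A higher: {a_higher}, arm B higher: {b_higher}, ties: {ties}")
```

Even with no real difference at all, nearly every replication shows one arm "worse" than the other, and which arm that is flips roughly half the time.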

CL Psych said...


Yeah, I'm aware of what "not statistically significant" means. And I noted in my post that the difference was not statistically significant. I then also discussed some statistically significant results.

Kevin P. Miller said...

My film GENERATION RX, about psychiatric medications, COI, ADHD, and much more, was just released on Nov. 11th. If interested, please go to


Kevin P. Miller