The abstract of the study includes the following...
"In both the United States and the Netherlands, SSRI prescriptions for children and adolescents decreased after U.S. and European regulatory agencies issued warnings about a possible suicide risk with antidepressant use in pediatric patients, and these decreases were associated with increases in suicide rates in children and adolescents."
So less SSRIs = more suicides, according to the authors. Let’s see if this study actually shows such a relationship…
Look closely at the above graphs from the article. Note that the decrease in SSRI prescriptions from 2003 to 2004 was very slight across the 0-10, 11-14, and 15-19 age groups, which is the timeframe in which suicide rates for those aged 5-19 increased notably. The larger declines in SSRI prescribing for youth occurred from 2004 to 2005, which happens to be when the suicide rate for those aged 15-24 appears to have decreased from 10.3 per 100,000 (see Table 9, page 28 here) to 9.8 per 100,000 (see Table 7 here). Yes, I know I am comparing data for ages 15-24 to data for ages 5-19, but I think this makes sense when one considers that the suicide rate for those 14 and under is much lower than for those aged 15-24. In fact, grouping suicide data for ages 5-19 makes little sense to me, given the vast differences in suicide rates within this age group.
It is important to note that the authors of the paper did not have data from 2005, but nothing in the 2003-2004 U.S. SSRI prescription data cited in their paper even suggests a relationship between decreased SSRI use in youth and an increased suicide rate, as the decrease in prescriptions was minimal. Pay close attention: The authors ran a total of zero statistical analyses to examine the relationship between SSRI prescription rates and suicide rates in the United States.
In the discussion, the authors state: “While only a small decrease in the SSRI prescription rate for U.S. children and adolescents occurred from 2003 to 2004, the public health warnings may have left some of the most vulnerable youths untreated.” This is unadulterated speculation, which, as I just mentioned, is not supported by a single statistical analysis in their paper. One can only wonder to what magical time-traveling extent an FDA warning issued in mid-October could have increased suicide rates earlier in the year. This is so mind-bogglingly obvious that, again, the peer reviewers were possibly inebriated during the review process, or the editor published the paper over the objections of the reviewers. Am I being too nasty? I'm just trying to figure out how it got published, and "good science" is not the answer.
The authors then proposed the following:
…we estimate that if SSRI prescriptions in the United States were decreased by 30% for all patients, there would be an increase of 5,517 suicides per year…
In children 5-14 years of age, a 30% reduction in SSRI prescriptions would lead to an estimated increase of 81 suicides per year… Given that SSRI prescriptions for children under age 15 already underwent a reduction of approximately 17% from 2003 to 2005, we expect an increase of 0.11 suicides per 100,000 children in this age group. Since there are approximately 40 million children in this age group, we would expect 44 additional deaths by suicide in 2005 relative to 2003, or an increase of 18% in this age group.
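To their credit, the quoted back-of-envelope arithmetic is at least internally consistent, which is easy to check. The figures below are taken directly from the quoted passage; the "implied baseline" at the end is my own inference from their 18% claim, not a number from the paper:

```python
# Sanity check of the paper's back-of-envelope arithmetic for ages 5-14.
# All input figures come from the quoted passage; nothing here is new data.

rate_increase_per_100k = 0.11   # predicted extra suicides per 100,000 children
population = 40_000_000         # approximate number of U.S. children aged 5-14

extra_deaths = rate_increase_per_100k / 100_000 * population
print(extra_deaths)  # ≈ 44, matching the paper's "44 additional deaths"

# The claimed 18% relative increase would imply a baseline of roughly
# 44 / 0.18 ≈ 244 suicides per year in this age group (my inference).
implied_baseline = extra_deaths / 0.18
print(round(implied_baseline))
```

Of course, internally consistent arithmetic is not evidence; the problem is the assumptions it rests on.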
Preliminary 2005 suicide data indicate a suicide rate in 5-14 year olds of 0.7 per 100,000, holding steady from 2004. This does not support the predictions of Gibbons and colleagues. Granted, the 2005 data are preliminary, but I’d be surprised if the final figures showed a large change in the direction that Gibbons, Mann, and their team predicted.
Again, let me state that these are only correlational data, and that data from clinical trials, as well as other sources, trump these types of studies in any case. At the very least, when doing correlational research, try to control for covariates (other variables of interest), examine trends over a longer time period than one year, and maybe actually run some statistics. Oh, and avoid conclusions that require belief in time travel. There are even more potential problems, but the authors missed so many glaring basic issues that it makes no sense to go any deeper.
If correlational data are going to be trotted out to scare physicians into prescribing more SSRIs, then we should at least examine whether the correlations provide even preliminary support for the idea that SSRIs might reduce suicide. I've criticized many studies on this site for a variety of concerns (like here and here, among many examples), and I think the present study is among the worst offenders against basic research methodology. Until we clean up the "science," don't expect much real progress in the mental health treatment world.
Major Hat Tip: Furious Seasons.