Showing posts with label suicide. Show all posts

Friday, October 31, 2008

You Really Can Report Safety Data


A new study concluded that the combination of sertraline (Zoloft) and cognitive-behavioral therapy (CBT) worked better than either treatment alone for children with anxiety disorders. There was even a nonsignificant trend for Zoloft to outperform CBT, which was quite surprising to me. But that's not really the point of this post. The study can be read at the New England Journal of Medicine website.

I'd like to commend the researchers on doing something that is exceedingly rare in psychopharmacology and psychotherapy trials -- they gave a detailed report of adverse events. And we find that a greater percentage of kids showed suicidal ideation on... CBT. It was not a statistically significant difference, but it was nonetheless surprising. Zoloft, however, was related to significantly more disinhibition, irritability, restlessness, and poor concentration than CBT. This may have been a fluke, but two participants on Zoloft had "homicidal ideation" compared to none on CBT. I have bitched several times about missing/mysterious data on adverse events in psychiatric drug trials, and some have also complained that psychotherapy trials do a poor job of tabulating adverse event data. Again, kudos to the study authors for reporting adverse events; imagine if reporting safety data in such a manner was commonly practiced.

Source: J. T. Walkup, A. M. Albano, J. Piacentini, B. Birmaher, S. N. Compton, J. T. Sherrill, G. S. Ginsburg, M. A. Rynn, J. McCracken, B. Waslick, S. Iyengar, J. S. March, P. C. Kendall (2008). Cognitive Behavioral Therapy, Sertraline, or a Combination in Childhood Anxiety. New England Journal of Medicine. DOI: 10.1056/NEJMoa0804633

Tuesday, August 19, 2008

Investigative Journalism Par Excellence

I am a little late in reporting this story, but there is a must-read post from Jonathan Leo over at Chemical Imbalance that I want to bring to your attention. Many bloggers have chimed in about the radio program The Infinite Mind broadcast about SSRIs. Most writers have focused, understandably, on the myriad unreported conflicts of interest of the guests on the show. But the conflicts of interest are not the most important part of this saga -- the terribly misleading information on the program, which aired on National Public Radio outlets, is the main problem.

Leo compares the data on SSRIs and suicide to the blatantly false statements made by the The Infinite Mind commentators. He notes, for example, that it is utter BS to state that nobody committed suicide in antidepressant trials submitted to the FDA -- in children there were no suicides, but among adults there certainly were. And kids who dropped out of the studies due to poor response or side effects, well, who knows what happened to them?

Leo also notes that the commentators were dead wrong about their alleged evidence linking decreased prescriptions of SSRIs to an increase in suicides. I also noted the same problem. He then proceeds to make point after point about the commentators overstating the efficacy of antidepressants.

As I've written before, conflicts of interest are important. But rather than just noting that people have conflicts, it is important to show the data -- are people with conflicts of interest misstating the evidence in a manner that reflects the conflict of interest? In the case of The Infinite Mind, the answer is a clear yes. Leo's post is quite lengthy, but well worth the time.

Update (08-31-08): My mistake. I had earlier called the program All in The Mind, which is vastly incorrect. The program was The Infinite Mind (as has been corrected above). This post has absolutely nothing to do with All in The Mind, a program that airs on Australia's Radio National. In fact, I've listened to a couple of All in the Mind broadcasts previously and found them to be well-done. Thanks to a commenter for catching my error.

Monday, April 14, 2008

Antidepressant PR Gone Wild

As noted at Furious Seasons, a recent broadcast of "The Infinite Mind" reached absurdly far to cover up risks associated with SSRIs. Oy. It was almost as if a PR consultant for the drug industry was involved with the show... Oh, wait, a PR consultant for the drug industry was involved -- Peter Pitts from Drug Wonks appeared on the program. You may recall that Pitts works at a PR firm (Manning, Selvage & Lee) that does much business with the drug industry.

More coming later on the Canadian Psychiatric Association's unscientific dismissal of evidence linking antidepressants to poor efficacy.

Friday, February 01, 2008

Mood Stabilizers and Suicide

As reported on Pharmalot and Furious Seasons, the antiepileptic/"mood stabilizer" class of drugs has been stuck with a suicide warning by the FDA. Granted, the FDA wasn't exactly quick to come to their conclusion -- these drugs have been on the market for quite some time, but better late than never. Others have noted similar concerns prior to the FDA's interest in the topic. Hey, does this remind anyone else of the SSRI-suicide story (1, 2, 3, 4, 5, 6, 7, 8)?

But wait a minute, aren't these supposed to be "life saving" medications? Wait, I can hear it coming now... Drug Wonks and others with similar views will be complaining that because the FDA is irresponsibly "fearmongering," people will stop taking their life saving medications and masses of suicide will ensue.

Friday, January 11, 2008

Zetia, Paxil, Medical Journals, Fraud, Etc.

I've been busy wiping up tears after the Frontline episode on medicating children with a wide variety of psychiatric medicines. Well worth watching. There are many thoughtful comments over at Furious Seasons. Feel free to add your voice. I may post on some of the highlights and lowlights of the Frontline piece later. Suffice it to say for now that it sure is depressing that the media keep up the dunce journalism of linking decreased SSRI prescriptions to an increase in suicide as if this were some sort of reliable finding. Please read my earlier posts (1, 2) for details on this constantly repeated yet incorrect interpretation of events.

Here are a few other posts worth reading:
  • Is "symptom remission" a realistic or even desirable goal when treating depression? A very interesting battle of letters in the American Journal of Psychiatry receives excellent coverage at Furious Seasons.
  • Roy Poses at Health Care Renewal demolishes an op-ed piece by Robert Goldberg (from the infamous Drug Wonks site). Also check out an incredible tale of kickbacks to a physician from multiple companies. If your hunger for bizarre tales in healthcare is not yet satiated, read about CellCyte, a company whose main product is apparently fraud.
  • Are medical journals asleep at the wheel regarding problems with Zetia? Aubrey Blumsohn seems to think so, and I think he might have a point. It would not be the first time that a medical journal dropped the ball.
  • Paxil for life. Go ahead, try to quit. What, you can't quit? A large group of individuals suing GlaxoSmithKline say they've been unable to quit Paxil without significant problems. Worry not, friends, GSK said: "We believe there is no merit in this litigation... Seroxat has benefited millions of people worldwide who have suffered from depression." Read more about Paxil/Seroxat's special benefits. H/T: PharmaGossip.
  • While you can catch up on the national presidential derby from many sources, there is little coverage of the race for American Psychiatric Association president. Daniel Carlat (who is popping up everywhere these days, which is a good thing) provides his take on the upcoming APA election. To nobody's surprise, some have noted an issue with one candidate's potential conflicts of interest.
  • Pfizer = McDonald's + Estee Lauder?

Thursday, November 08, 2007

NIMH Gets Dunce Journalism Award



I have pilloried the study published in the American Journal of Psychiatry in September 2007 that purported to show that fewer SSRI prescriptions for American youth were associated with an increase in the youth suicide rate. I quote from my earlier post below:

Look closely at the above graphs (click to enlarge) from the article. Note that the decrease in SSRI prescriptions from 2003 to 2004 was very slight across the 0-10, 11-14, and 15-19 age groups, which is the timeframe in which suicide rates for those aged 5-19 increased notably. The larger declines in SSRI prescribing for youth occurred from 2004-2005, which happens to be when the suicide rate for those aged 15-24 appears to have decreased from 10.3 per 100,000 (see Table 9; page 28 here) to 9.8 per 100,000 (see Table 7 here). Yes, I know I am comparing data for ages 15-24 to data on ages 5-19, but I think this makes sense when one considers that the suicide rate for those 14 and under is much lower than for those aged 15-24. Actually, grouping suicide data for ages 5-19 makes little sense to me given the vast differences in suicide rate within this age group.


It is important to note that the authors of the paper did not have data from 2005, but there is nothing from the 2003-2004 U.S. SSRI prescription data cited in their paper that even suggests a relationship between decreased SSRI use in youth and an increased suicide rate, as the decrease in prescriptions was minimal. Pay close attention: The authors ran a total of zero statistical analyses to examine the relationship between SSRI prescription rates and suicide rates in the United States. That’s right, zero. So they put up a couple of figures without a single shred of statistical evidence, then claim that declining SSRI prescriptions are associated with an increase in suicide rates. Any peer reviewer who was not drunk or on a high dose of Seroquel should have noticed this gigantic flaw.
Various newspapers and websites dumbly ran with this story using descriptions such as:
Warnings that antidepressants may increase teen suicides appear to have backfired, a new study suggests...

Suicide rates for preteens and teenagers increased sharply when the Food and Drug Administration slapped a "black box" warning on anti-depressants and doctors started writing fewer prescriptions for young people, according to federal data released Thursday
...and others. Please read my earlier post regarding idiotic media coverage of this article for details.

Enter NIMH: In a piece datelined September 19, 2007, Jules Asher wrote a story for the NIMH website. As of today, it is still available. It mentions that
...based on mathematical models using previous years' data, the authors predicted an 18 percent increase in youth suicides between 2003 and 2005.
And, as mentioned above, this prediction turned out to be incorrect. Youth suicide rates in 2005 showed little change from 2004. Perhaps the NIMH writing staff could throw that little tidbit of information into an updated version of the article? Earlier in the piece, it is mentioned that
NIMH grantees Robert Gibbons, Ph.D., University of Illinois at Chicago and J. John Mann, M.D., Columbia University, and colleagues, make a case for a possible link between changes in prescription patterns, regulatory warnings and suicide rates in the September, 2007 issue of the American Journal of Psychiatry.
Again, note that they did not make much of a case in that the only statistical analysis they presented was from the Netherlands, yet they apparently believe such data generalize to the US as well. And remember that their own graphs contradict their argument and that the 2005 preliminary data on suicides also contradict their arguments. The reason I am ranting/raving here is that I expect better from an allegedly nonpartisan organization that is dedicated to science. Why publish a story on the NIMH website pushing results from a study that is so full of holes that I could drive a fleet of Mack Trucks through it? Did the NIMH run stories publicizing findings of more credible research showing a link between SSRIs and increased suicide attempts? Nope. Kind of makes one wonder to what extent NIMH is an objective organization dedicated solely to advancing science, doesn't it?

Major hat tip to an anonymous reader who passed along the link to this wonderful article.

Friday, November 02, 2007

Paxil's "Advantages"

Paxil and its advantages. Yeah, that's what this blog is about. I just recently retitled the blog; it was formerly known as the Paxil Pimp's Paradise. What am I talking about? I received an email a couple of days ago, to which I will reply in this post. Don't worry. In keeping with my informal confidentiality policy, I'll not reveal the identity of the person or his/her employer. Here is the email:
Respected Sir / Madam,

I read your review on website, please if you can provide me the the reviews for Advantages of Paroxetine for depression & anxiety. It would be more interesting if it would consist of recent data i.e. in year 2007.

I expect you [sic] early reply

Thank You.
Yes, this person works for a drug company. That's all I will reveal about the author of the email. Here is my reply...

Dear Sir/Madam,

Please see the following posts for a detailed explanation of the "advantages" of paroxetine (Paxil/Seroxat) as discussed previously on my site...
  • Advantage 1: Increases suicide attempts in patients.
  • Advantage 2: Potentially increases obesity in patients, though research is preliminary.
  • Advantage 3: Increase in birth defects for children whose mothers were taking Paxil while pregnant.
  • Advantage 4: Excellent marketing, both for social phobia and depression. Excellent use of misleading writing in so-called scientific journals when writing about the "advantages" of Paxil, including using euphemisms for unpleasantries like suicide attempts.
  • Advantage 5: Major discontinuation symptoms. Take Paxil for a while, try to stop and let me know what "advantage" you notice. See references at bottom of this post for a start. There are many more studies documenting clearly the difficulties with paroxetine withdrawal.
  • Advantage 6: Those wonderful sexual side effects. And they might last for a long time even after one stops taking the medication.
I hope you find this information useful in your search for the advantages of Paxil. I am flattered by your interest in my opinion on this matter. For additional information on paroxetine, you may want to consult Martin Keller, who has a somewhat different take than myself, but who is a potential recipient of the prestigious Golden Goblet Award for his excellent scientific work on paroxetine. Karen Wagner, another Golden Goblet Nominee, may also be an excellent source. You may also wish to consult the following websites:
If I can be of further assistance, please let me know. There are other sources with which you will want to be familiar. You may also want to contact Philip Dawdy regarding the advantages of atypical antipsychotics, and please see Aubrey Blumsohn regarding the advantages of Actonel in treating osteoporosis. Also, I hope you contact Jack Friday, Ed Silverman, or Peter Rost to provide industry cheerleading. For any questions regarding the excellent Rozerem advertising campaign, please see John Mack. Last but absolutely not least, for any advice regarding how to outsource your scientists, fake your clinical trials, and abuse your employees, please take advice from the sage Pharma Giles.

Sincerely Yours,

Paxil Pee-Yimp #1

Friday, October 26, 2007

SSRIs, Anxiety, Kids, Suicide, and Credible Evidence

I wrote a while ago about Christopher Lane's assertions that social anxiety was overdiagnosed and overtreated, particularly among children. Many people disagreed with Lane. One person who disagreed was Dr. Ronald Pies, a psychiatrist at SUNY Upstate Medical University, who wrote in the New York Times that
... there is no credible evidence to support Mr. Lane’s implication that S.S.R.I. antidepressants are linked with increased risk of suicide in children prescribed these medications for social anxiety. The Food and Drug Administration’s initial concerns stemmed from studies in children with major depression, not anxiety disorders, and the latest evidence has not supported a strong link between S.S.R.I.’s and risk of suicide.
I re-read the latest summary of evidence regarding SSRIs and suicide in kids. Mind you, the article that I referenced (Bridge et al., 2007 in JAMA) came to decidedly pro-SSRI conclusions -- I didn't get my evidence dropped to me from a black helicopter. Based on trials submitted to the FDA, as reported by Bridge and colleagues, there were data that pertained directly to Dr. Pies' assertion. Here are the data regarding SSRIs and suicide in children and adolescents with anxiety disorders.

Note: AD represents Antidepressant; PL represents Placebo

Condition                    | Suicidal Ideation          | Suicide Attempt/Preparatory Action
OCD                          | AD: 3 of 362; PL: 1 of 339 | AD: 1 of 362; PL: 0 of 339
Non-OCD Anxiety Disorder     | AD: 5 of 573; PL: 0 of 582 | AD: 1 of 573; PL: 0 of 582
Total for Anxiety Disorders  | AD: 8 of 935; PL: 1 of 921 | AD: 2 of 935; PL: 0 of 921

Compare the odds of having suicidal ideation on drug to the odds of having suicidal ideation on placebo. Kind of a large difference, eh? I realize that the odds of developing suicidal ideation are still small, even on medication, but they are substantially higher than for a child taking placebo.

While one could point out correctly that the difference is not "statistically significant," I think one would be foolish to fall back on that argument. We have seen in adults and children that SSRIs are related to more suicide attempts and that this finding is pretty consistent across trials, at least among children and young adults. When events occur rarely, we need exceedingly large samples in order to be quite certain that the event (such as suicidal ideation in SSRI trials for anxiety in kids) is not an anomaly. But when kids are being treated for disorders that are very rarely associated with suicidality, yet the children show a much higher rate of suicidal ideation on a drug compared to a placebo, does it not make sense to warn patients about such potential hazards? One could retreat to the "fewer SSRIs mean more suicides" argument, but that hasn't really held up so well scientifically.
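To make the size of that difference concrete, here is a rough sketch of the odds ratio and risk ratio implied by the pooled anxiety-disorder counts (8 suicidal ideation events out of 935 on drug vs. 1 out of 921 on placebo). This is an illustration only, not a substitute for a proper exact test on such sparse counts:

```python
# Rough sketch: odds ratio and risk ratio for suicidal ideation,
# using the pooled anxiety-disorder counts from the table above.
# AD (antidepressant): 8 events out of 935; PL (placebo): 1 event out of 921.

def odds(events, n):
    """Odds of an event: events divided by non-events."""
    return events / (n - events)

ad_events, ad_n = 8, 935
pl_events, pl_n = 1, 921

odds_ratio = odds(ad_events, ad_n) / odds(pl_events, pl_n)
risk_ratio = (ad_events / ad_n) / (pl_events / pl_n)

print(f"Odds ratio: {odds_ratio:.1f}")  # roughly 7.9
print(f"Risk ratio: {risk_ratio:.1f}")  # roughly 7.9
```

With a single placebo event, both estimates are extremely imprecise (the confidence interval is wide and crosses 1, which is why the difference is not "statistically significant"), but the point estimate is an eight-fold difference, not a trivial one.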

In my eyes, the above data represent "credible evidence" that SSRIs can indeed lead to an increase in suicidal thoughts among kids with anxiety disorders. Either Dr. Pies was unfamiliar with the above evidence or he believes it is not credible.

No actual suicides were recorded during the trials. Of course, if someone got worse during the study, then quit the study and killed himself/herself, then who knows if such data were included. Perhaps such events occurred -- I don't know. And there was much more supervision of these kids in a clinical trial than you'd see in real life, which could have kept some people from suicide. Further, let's suppose that the drug causes a child with social anxiety to become suicidal. He does not make an attempt on his own life, but he is suicidal for a month. Doesn't prior suicidal thinking predict later suicidal thinking and later suicide attempts? So even if the child makes no immediate attempt on his life, couldn't he be at higher risk down the line? Maybe I'm losing my marbles, but I think it's a reasonable question.

Related posts on SSRIs and suicide:

Friday, September 21, 2007

SSRIs, Suicide, and Dunce Journalism

Earlier in the week, I noted that a much-ballyhooed study purporting to show a relationship between an increase in youth suicide and a decrease in SSRI prescriptions for youth actually did no such thing.

Here's what the media are saying about the study. WARNING: If you don't want to be shocked by examples of terribly poor journalism, please do not read the remainder of the post.

From the esteemed British Medical Journal:
"Numbers of suicides among Americans aged under 19 years rose by 14% from 2003 to 2004, the study says, the biggest annual increase since systematic recording began in 1979. The same year saw a 22% decrease in the number of SSRI prescriptions to this age group."
The increase in suicide rates appears to be accurate. As I pointed out earlier, the youth suicide rate then apparently dropped slightly in 2005, which is when SSRI prescriptions for youth fell steeply. As for SSRI prescriptions dropping 22% -- that number is inaccurate. Look at the chart (pardon the crappy image quality) and note that the SSRI prescription rate for youth in the U.S. was down only slightly in 2003-2004, not by 22%. Bad journalism.

From the Washington Post:
The trend lines do not prove that suicides rose because of the drop in prescriptions, but Gibbons, Insel and other experts said the international evidence leaves few other plausible explanations.
Again, if you folks want to rely solely on correlational data (which is a stupid idea in any case), then you may want to make sure that a statistical analysis is actually run which shows a relationship between SSRI prescriptions and suicide rates. Read the whole WaPo piece if you'd like -- it's not terrible overall.

From WebMD:
Warnings that antidepressants may increase teen suicides appear to have backfired, a new study suggests...

"The FDA has overestimated the effect of antidepressant medications on suicidality and dramatically underestimated the efficacy of antidepressants in the treatment of childhood depression," Gibbons told WebMD in April 2007.
Oh, he must be referring to the efficacy that shows, at best, a small effect over placebo. Indeed, the recent meta-analysis by Bridge and colleagues that claimed the benefits of SSRIs outweighed the risks showed only a very small treatment effect favoring SSRIs in depression for youth. In fact, most SSRIs did not show any advantage over placebo. How does one "dramatically underestimate" a treatment that provides an apparently pretty small benefit over placebo? And if you really love to rely on correlational, epidemiological data, then try this on for size -- the data do not indicate that SSRIs decrease suicide.

How about the Chicago Tribune?
Suicide rates for preteens and teenagers increased sharply when the Food and Drug Administration slapped a "black box" warning on anti-depressants and doctors started writing fewer prescriptions for young people, according to federal data released Thursday.
This one will be covered in a moment...

The headline in the San Francisco Chronicle:

Suicide rise follows antidepressant drop: Study finds dramatic increase after 'black box' warning

As I noted earlier, the black box warning occurred in October 2004, and I know of not one shred of data that can track that particular time point to a "dramatic increase" in suicides. Hell, Gibbons and colleagues did not even attempt to link an increase in suicides to that particular date, yet the media latch onto this point as if it were grounded in solid scientific data.

And one more from the Los Angeles Times:
The study, which includes data from the Netherlands, provides the strongest evidence yet that the drugs are useful in preventing suicide, Gibbons said.
So, to make this clear, the "strongest evidence yet" is based upon a report of correlational data where, for the main population studied (the United States), there was not even a single statistical analysis done to relate a decrease in SSRI prescriptions with an increase in suicide rate? This, my friends, is bad science that has now become "the truth" thanks to science writers who either don't know a damn thing about science or are unwilling to challenge the opinions of the scientists whom they interview. It also becomes "the truth" when Gibbons, in interviews, is making statements that run far past what his own data show. Scientists can have opinions, but one needs to separate what is based on solid data from what is speculation.

By all means, read my prior post that dissects this latest study and let me know if I missed something. In my opinion, the Gibbons study was uninformative at best and appears to have led to a large number of poorly reported stories. At this point, a few bright individuals appear to have indicated that my take on the article is accurate (1, 2, 3).

To be fair, The Boston Globe and New York Times get a pass -- their coverage of the issue was excellent.

For a great brief read on another youth suicide study, visit The Last Psychiatrist.

Update (9-25-07): Via Furious Seasons -- An op-ed in the Boston Globe by Alison Bass blasts the latest media blitz regarding the alleged link between declining SSRI prescriptions and increased suicides. Furious Seasons wisely notes a couple of small errors in the Globe piece, but the overall thrust is well worth a read.

Monday, September 17, 2007

Peer Review, SSRIs, Suicide, and Booze

The recent study in the American Journal of Psychiatry by Gibbons, Mann, and colleagues regarding the relationship between SSRI usage and suicides reads more like an exercise for undergraduate students to find obvious errors than it does a real peer-reviewed study. Sounds mean, but keep reading.

The abstract of the study includes the following...

"In both the United States and the Netherlands, SSRI prescriptions for children and adolescents decreased after U.S. and European regulatory agencies issued warnings about a possible suicide risk with antidepressant use in pediatric patients, and these decreases were associated with increases in suicide rates in children and adolescents."

So less SSRIs = more suicides, according to the authors. Let’s see if this study actually shows such a relationship…

[Graphs from the article omitted]
Look closely at the above graphs (click to enlarge) from the article. Note that the decrease in SSRI prescriptions from 2003 to 2004 was very slight across the 0-10, 11-14, and 15-19 age groups, which is the timeframe in which suicide rates for those aged 5-19 increased notably. The larger declines in SSRI prescribing for youth occurred from 2004-2005, which happens to be when the suicide rate for those aged 15-24 appears to have decreased from 10.3 per 100,000 (see Table 9; page 28 here) to 9.8 per 100,000 (see Table 7 here). Yes, I know I am comparing data for ages 15-24 to data on ages 5-19, but I think this makes sense when one considers that the suicide rate for those 14 and under is much lower than for those aged 15-24. Actually, grouping suicide data for ages 5-19 makes little sense to me given the vast differences in suicide rate within this age group.

It is important to note that the authors of the paper did not have data from 2005, but there is nothing from the 2003-2004 U.S. SSRI prescription data cited in their paper that even suggests a relationship between decreased SSRI use in youth and an increased suicide rate, as the decrease in prescriptions was minimal. Pay close attention: The authors ran a total of zero statistical analyses to examine the relationship between SSRI prescription rates and suicide rates in the United States. That’s right, zero. So they put up a couple of figures without a single shred of statistical evidence, then claim that declining SSRI prescriptions are associated with an increase in suicide rates. Any peer reviewer who was not drunk or on a high dose of Seroquel should have noticed this gigantic flaw.

In the discussion, the authors state: “While only a small decrease in the SSRI prescription rate for U.S. children and adolescents occurred from 2003 to 2004, the public health warnings may have left some of the most vulnerable youths untreated.” This is unadulterated speculation, which, as I just mentioned, is not supported by a single statistical analysis in their paper. One can only wonder to what magical time-traveling extent an FDA warning issued in mid-October could have increased suicide rates earlier in the year. This is so mind-bogglingly obvious that, again, the peer reviewers were possibly inebriated during the review process, or the editor published the paper over the objections of the reviewers. Am I being too nasty? I'm just trying to figure out how it got published, and "good science" is not the answer.

The authors then proposed the following:

…we estimate that if SSRI prescriptions in the United States were decreased by 30% for all patients, there would be an increase of 5,517 suicides per year…

In addition…

In children 5 – 14 years of age, a 30% reduction in SSRI prescriptions would lead to an estimated increase of 81 suicides per year… Given that SSRI prescriptions for children under age 15 already underwent a reduction of approximately 17% from 2003 to 2005, we expect an increase of .11 suicides per 100,000 children in this age group. Since there are approximately 40 million children in this age group, we would expect 44 additional deaths by suicide in 2005 relative to 2003, or an increase of 18% in this age group.

Preliminary 2005 suicide data indicate a suicide rate in 5-14 year olds of .7 per 100,000, holding steady from 2004. This does not support the predictions of Gibbons and colleagues. Granted, the 2005 data are preliminary, but I’d be surprised if they showed a large change in the direction that Gibbons, Mann, and their team predicted.
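A quick back-of-the-envelope check makes the gap between prediction and reality concrete, using only the figures quoted above (0.11 predicted extra suicides per 100,000 in a population of roughly 40 million children aged 5-14, against a preliminary 2005 rate of 0.7 per 100,000 that held steady from 2004):

```python
# Sanity check of the Gibbons et al. extrapolation quoted above,
# using only the numbers given in the text.

population = 40_000_000          # approximate U.S. population aged 5-14
predicted_rate_increase = 0.11   # predicted extra suicides per 100,000

# The paper's prediction: 0.11 extra per 100,000 across ~40 million kids.
predicted_extra_suicides = predicted_rate_increase * population / 100_000
print(predicted_extra_suicides)  # 44 extra deaths, the paper's own figure

# Preliminary 2005 data: 0.7 per 100,000, unchanged from 2004 --
# i.e., roughly zero extra deaths, not 44.
observed_rate_change = 0.7 - 0.7
observed_extra_suicides = observed_rate_change * population / 100_000
print(observed_extra_suicides)
```

The arithmetic is trivial, which is rather the point: the prediction was 44 extra deaths in this age group, and the preliminary data show essentially none.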

Again, let me state that these are only correlational data and that data from clinical trials as well as other sources trump these types of studies in any case. At the very least, when doing correlational research, try to control for covariates (other variables of interest), examine trends over a longer time period than one year, and maybe actually run some statistics. Oh, and avoid conclusions that require belief in time travel. There are even more potential problems, but the authors missed so many glaring basic issues that it makes no sense to go any deeper.

If data based on correlations are going to be trotted out to scare physicians into prescribing more SSRIs, then we should at least ask whether the correlations provide even preliminary support for the idea that SSRIs might reduce suicide. I've criticized many studies on this site for a variety of concerns (like here and here, among many examples), and I think the present study is among the worst offenders of basic research methodology. Until we clean up the "science," don't expect much real progress in the mental health treatment world.

Background here and here.

Major Hat Tip: Furious Seasons.

Friday, September 14, 2007

Less SSRI's, More Suicide -- Apparently Not

Now that the 2005 suicide data are available from the CDC (as mentioned yesterday), one can see that despite SSRI prescriptions falling, there was apparently a very slight decrease in suicides. That does not lend credence to the story that decreased SSRI use leads to more suicides. The New York Times (Alex Berenson and Ben Carey) has some nice reporting on the story, including some telling quotes. Here's what Thomas R. Ten Have, a biostatistics professor at the University of Pennsylvania, had to say regarding the latest study that claimed to show a link between decreased SSRI usage and increased suicide rates:
There doesn’t seem to be any evidence of a statistically significant association between suicide rates and prescription rates provided in the paper.
Yet here's what Dr. John Mann, one of the "experts" on the topic and coauthor of the previously mentioned study had to say:
The most plausible explanation is a cause and effect relationship: prescription rates change, therefore suicides change
Too bad the "most plausible explanation" just got shot down. This is just the tip of the iceberg regarding SSRIs and suicide. More to come at a later date. In the meantime, always be wary when someone notes that two variables are related, then claims that one variable causes another. Be especially wary when it turns out that the correlation is inconsistent or does not even exist, or may perhaps even go in the other direction. More to come another time.

Hat Tip: Furious Seasons.

Thursday, September 13, 2007

SSRIs, CDC, and Suicide

Though some people have been asserting with confidence that a decline in SSRI prescriptions has led to an increase in the suicide rate, Furious Seasons has the story that, um, suicide rates were slightly down in 2005 according to data from the Centers for Disease Control. Link to the CDC document here and link to an excellent post at Furious Seasons here.

Many researchers, bloggers, and others have been slamming the FDA for daring to put a black box warning on SSRI's linking the drugs to a potential increase in suicidal ideation. If fewer people take SSRI's, more people die. Or so the argument goes.

There are indeed some correlational data linking decreased SSRI prescriptions with increased suicide rates, as well as some correlational data finding no such relationship. Mind you, there is a reason we all learn in introductory research methods that correlation does not prove that a change in one variable causes a change in another. There are much stronger sources of evidence, which will be discussed at a later date. For now, it is interesting that the suicide rate appears to have fallen slightly in 2005 despite estimates that SSRI prescriptions fell significantly.
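To make the methodological point concrete, here is a minimal sketch (the numbers are invented, purely for illustration) of why a strong correlation between prescription trends and suicide trends proves nothing by itself: any two series that merely share a time trend will correlate strongly, whether or not either one drives the other.

```python
# Invented illustrative data: two series that share a time trend.
# Neither causes the other, yet they correlate almost perfectly.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient, standard library only."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

rx_millions  = [10.0, 11.0, 12.0, 13.0, 14.0]   # hypothetical SSRI scripts per year
suicide_rate = [11.0, 10.9, 10.8, 10.6, 10.5]   # hypothetical rate per 100,000

r = pearson(rx_millions, suicide_rate)
print(f"r = {r:.2f}")  # near -1, yet it says nothing about what causes what
```

The same near-perfect r would appear if both series were driven by some third factor (economic conditions, reporting changes, you name it), which is exactly why correlational arguments like the ones above need much stronger designs behind them.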

Friday, March 09, 2007

FDA Testimonies on SSRIs and Suicide

You may recall the FDA's meeting on SSRI's and suicide. You can now access the presentations of the speakers, including David Healy, Robert Gibbons, Sara Bostock, Vera Sharav, Robert Valuck, and many more.

Transcripts of the meeting (part 1 and part 2) are also available. Kudos to the FDA for their willingness to post all this material online.

Hat Tip: ShrinkRap.

Saturday, March 03, 2007

Suicide Update

Psychiatric News, the newspaper of the American Psychiatric Association, has caused me a traumatic brain injury. How? Well, after reading the following, I hit my head against a wall so hard that I probably damaged at least 85% of my brain. Why?

The paper is beating the drum over the supposed relationship between a) the FDA placing a warning about the link between suicidality and SSRI usage and b) the subsequent increase in teen suicides. To arouse the ire of its audience, the piece stated:

"This is very disturbing news," said David Fassler, M.D., an APA trustee-at-large and a child and adolescent psychiatrist in Vermont. "The current data suggest that the decreased use of these medications is, in fact, associated with an increase in actual deaths attributable to suicide."

David Shern, Ph.D., president and CEO of Mental Health America, echoed Fassler's concern.
In the article, nobody with an opposing view was interviewed. Better yet, no data were presented in the article showing that the FDA warning actually led to a decrease in SSRI prescriptions for youth. As I have noted in two prior posts on this issue, the data do not actually show that SSRI prescriptions for youths declined when the FDA issued its warning. So if prescription rates did not go down, it's kinda hard to say that declining prescriptions led to more suicides.

Yet note how the mainstream psychiatric press and the mainstream media have reported this issue -- it's a pro-drug circus where science is omitted so that a panic can better be created. Funny how there is never a lack of "key opinion leaders" willing to step up and opine on these issues despite having no data to back their assertions.

Update: Turns out there were data indicating a decline in SSRI prescriptions, though this information did not become public knowledge until much later. But, it appears that despite decreased SSRI usage, suicide rates fell slightly in 2005.

Wednesday, February 14, 2007

SSRI's and Suicide: Updated Update

I found more information regarding SSRIs and suicide in youth. I'm going to present two sets of statistics from an article and then illustrate how it is impossible to say that decreasing rates of SSRI prescription led to more suicides, contrary to what many "experts" are saying. From the Seattle Times:
The suicide rate climbed 18 percent from 2003 to 2004 for Americans under age 20, from 1,737 deaths to 1,985. Most suicides occurred in older teens, according to the data — the most current to date from the federal Centers for Disease Control and Prevention.

--SNIP--

Data from Verispan, a prescription tracking firm, show that 3 million antidepressant prescriptions were written for kids through age 12 in 2004, down 6.8 percent from 2003. Among 13- to 19-year-olds, the number dropped less than 1 percent to 8.11 million in 2004.
So, SSRI prescriptions were essentially unchanged in 2004 (a decrease of less than 1%) among older teens, who are much more likely to commit suicide than younger children. Logically, how could a less-than-one-percent decrease in SSRI prescriptions among older teens lead to a significant increase in suicides? Seriously, folks!
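A back-of-envelope check, using only the figures quoted above from the Seattle Times, makes the mismatch obvious: the death count jumped about 14% while older-teen prescriptions moved by, at most, a sliver.

```python
# Figures quoted from the Seattle Times article above.
deaths_2003, deaths_2004 = 1737, 1985
death_change = (deaths_2004 - deaths_2003) / deaths_2003   # relative rise in deaths

rx_2004_teens = 8.11e6   # prescriptions for 13- to 19-year-olds (Verispan)
max_decline   = 0.01     # "dropped less than 1 percent"

# Upper bound on the 2003 figure implied by a sub-1% decline
rx_2003_upper = rx_2004_teens / (1 - max_decline)
max_rx_drop   = rx_2003_upper - rx_2004_teens

print(f"Deaths rose {death_change:.1%}; older-teen prescriptions fell by "
      f"at most {max_rx_drop:,.0f} scripts (under 1%).")
```

In other words, a roughly 14% jump in deaths is being pinned on a change of well under one percent in the prescriptions written for the very age group doing most of the dying.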

I thank the Seattle Times for at least presenting some data, as other sources (such as ABC News) have just taken it as fact that SSRI prescriptions plummeted without presenting any information.

See a prior post on this topic here, which cites somewhat different data, but essentially comes to the same conclusion that there is no scientific data that link the 2004 increase in suicides to decreasing SSRI prescriptions.

Note: Please see the comments. A couple of readers provided some additional information that was very interesting. Note that my conclusion on this matter remains unchanged.

Tuesday, February 13, 2007

Less SSRI's, MORE Suicide (?)

Some "key opinion leaders" were in the papers again last week stating that the increase in suicide rates for teens was related to a lower prescription rate of SSRIs. Of note, in the news stories I've seen on the topic, no data have actually been provided to show that antidepressant prescription rates went down when the suicide rate increased. The lack of data, naturally, did not prevent the media from running with the story, much in the same way that children sometimes run with scissors.

For examples of reporting on the topic, try MedPage or ABC. The AHRP blog dug up information from the American Psychiatric Association that stated:

In 2003, U.S. physicians wrote 15 million antidepressant prescriptions for patients under age 18, according to FDA data. In the first six months of 2004, antidepressant prescriptions for children increased by almost 8 percent, despite the new drug labeling.

The point here is that antidepressant prescription rates were actually rising when suicide rates were rising, so it is a bit hard to see how FDA warnings were leading to fewer prescriptions which were, in turn, leading to more suicides.

So how does this kind of story gain traction?

Enter Chuck. Dr. Charles Nemeroff, a "key opinion leader" in psychiatry (background here and here), was quoted by ABC News as follows:

"I have no doubt that there is such a relationship," said Dr. Charles Nemeroff, chairman of the department of psychiatry and behavioral sciences at the Emory University School of Medicine.

"The concerns about antidepressant use in children and adolescents has paradoxically resulted in a reduction in their use, and this has contributed to increased suicide rates."

It would appear that Nemeroff has either seen some data nobody else has seen or that he is making things up. Given his cozy relationship with a plethora of drug companies, I'm guessing it's the latter. Even if there were data showing a decrease in SSRI prescriptions as suicide rates increased, surely Nemeroff would know that there could be numerous other factors involved. As is stated in every introductory research class, correlation does not imply causation. Of course, this point appears to be moot, as I've yet to see any evidence that SSRI prescription rates went down as youth suicide rates increased.

It would appear that this latest scare over SSRI deficiency causing suicide is another case of pseudoevidence based medicine.

Hat Tip: AHRP, Hooked.

Update: Nemeroff indeed had some data indicating that SSRI prescriptions have fallen. Yet it now appears that while SSRI usage fell, suicides did not increase. Nemeroff's statement above thus appears incorrect.

Wednesday, February 07, 2007

Suicide? Not a Problem

A study in International Clinical Psychopharmacology compared three doses of agomelatine to paroxetine (Paxil or Seroxat) and placebo in the treatment of depression. I will write about the efficacy of agomelatine soon, but this post focuses on the alleged safety of the medications.

In the abstract, the authors wrote, “Agomelatine, whatever the dose, showed good acceptability with a side-effect profile close to that of placebo.” In the discussion, they state “Agomelatine is very well tolerated with an adverse event profile close to placebo.”

Now we are going to compare the rate of suicide and suicide attempts on medication to the rate of suicides and suicide attempts on placebo. Here is the data from the study, which is quoted directly (with added emphases):

During the study, two participants committed suicide, one [of 147 patients] on paroxetine after 11 days of treatment, one [of 137 patients] on agomelatine 25 mg of treatment after 10 days of treatment (both deaths being unrelated to treatment according to the investigator's opinion)... There were seven suicide attempts, one [of 141 patients] on agomelatine 1mg, three [of 147 patients] on agomelatine 5 mg, one on agomelatine 25 mg, two on paroxetine and none [of 139 patients] on placebo (p. 244).

Suicide and suicide attempts are never mentioned again in the article. Imagine that the data had gone the other direction – suicide attempts and completed suicides on placebo but none on medication. In such a case, there would almost certainly have been further discussion of how the medication seemed to offer a protective effect against suicide. I give the authors credit for at least presenting the relevant data, but it is quite odd that data on such a serious matter were simply brushed aside.
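For what it's worth, tabulating the counts quoted above as per-arm attempt rates shows just how lopsided the brushed-aside numbers were (a quick sketch, using only the figures quoted from the study):

```python
# Suicide-attempt counts per arm, as quoted from the study above.
arms = {
    "agomelatine 1 mg":  (1, 141),
    "agomelatine 5 mg":  (3, 147),
    "agomelatine 25 mg": (1, 137),
    "paroxetine":        (2, 147),
    "placebo":           (0, 139),
}

for name, (attempts, n) in arms.items():
    print(f"{name:17s} {attempts}/{n} = {attempts / n:.1%}")

active_attempts = sum(a for arm, (a, n) in arms.items() if arm != "placebo")
print(f"All {active_attempts} attempts (and both completed suicides) "
      "occurred on active drug; none occurred on placebo.")
```

With counts this small, no firm statistical conclusion is possible either way -- which is precisely why the signal deserved discussion in the paper rather than silence.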

This reminded me a bit of the Paxil/Seroxat study #329, in which suicide attempts were much more common on drug than placebo, but the study investigators deemed that none of these serious events were related to the medication. Same old story: The drugs don’t cause suicidal ideation and actual suicide – only depression can do that. Does Traci Johnson ring a bell for anyone?

Whitewashing of safety data such as in the case of agomelatine (and here and here) should raise suspicions. More on agomelatine (the "ideal antidepressant?") to come.

Tuesday, January 30, 2007

Keller, Bad Science, and Seroxat/Paxil

I will focus on Dr. Martin Keller and some seriously poor science in this post. Panorama did an excellent job of profiling Keller’s role in helping to promote paroxetine (known as Paxil in the USA and Seroxat in the UK). Note this is a lengthy post and that the bold section headings should help you find your way.

Who is Martin Keller? He is chair of psychiatry at Brown University. According to his curriculum vitae, he has over 300 scientific publications. People take his opinions seriously. He is what is known as a key opinion leader or thought leader in academia and in the drug industry. What does that mean? Well, on videotape (see the Panorama episode from 1-29-07), Keller said:

You’re respected for being an honorable person and therefore when you give an opinion about something, people tend to listen and say – These individuals gave their opinions; it’s worth considering.

Keller and Study 329: GlaxoSmithKline conducted a study, numbered 329, in which it examined the efficacy and safety of paroxetine versus placebo in the treatment of adolescent depression. Keller was the lead author on the article (J American Academy of Child and Adolescent Psychiatry, 2001, 762-772) which appeared regarding the results of this study.

Text of Article vs. the Actual Data: We’re going to now examine what the text of the article said versus what the data from the study said.

Article: Paroxetine is generally well-tolerated and effective for major depression in adolescents (p. 762).

Data on effectiveness: On the primary outcome variables (mean change on the Hamilton Rating Scale for Depression [HAM-D] and a HAM-D final score < 8 and/or improvement of 50% or more), paroxetine was not statistically superior to placebo. On four of eight secondary measures, paroxetine was superior to placebo, though always by a small to moderate (at best) margin. On the whole, the most accurate take is that paroxetine was either no better or slightly better than a placebo.

Data on safety: Emotional lability occurred in 6 of 93 participants on paroxetine compared to 1 of 87 on placebo. Hostility occurred in 7 of 93 patients on paroxetine compared to 0 of 87 on placebo. In fact, on paroxetine, 7 patients were hospitalized due to adverse events, including 2 from emotional lability, 2 due to aggression, 2 with worsening depression, and 1 with manic-like symptoms. This compares to 1 patient who had lability in the placebo group, but apparently not to the point that it required hospitalization. A total of 10 people had serious psychiatric adverse events on paroxetine compared to one on placebo.
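A formal check the authors could have run on their own serious-adverse-event counts (10 of 93 on paroxetine vs. 1 of 87 on placebo) takes only a few lines. This is a one-sided Fisher's exact test, sketched here with the standard library rather than any statistics package:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(at least `a` events in group 1) for the 2x2 table [[a, b], [c, d]],
    with row and column totals held fixed (the hypergeometric tail)."""
    n1, events, total = a + b, a + c, a + b + c + d
    return sum(
        comb(n1, k) * comb(total - n1, events - k) / comb(total, events)
        for k in range(a, min(n1, events) + 1)
    )

# 10/93 serious psychiatric events on paroxetine vs. 1/87 on placebo
p = fisher_one_sided(10, 83, 1, 86)
print(f"one-sided p = {p:.4f}")  # well under 0.05
```

Even without the raw data, the published counts alone suggest the imbalance in serious psychiatric events was unlikely to be chance, which makes the blanket "unrelated to drug" attributions discussed below all the more striking.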

What exactly were emotional lability and hostility? To quote James McCafferty, a GSK employee who helped work on Study 329, “the term emotional lability was catch all term for ‘suicidal ideation and gestures’. The hostility term captures behavioral problems, most related to parental and school confrontations.” According to Dr. David Healy, who certainly has much inside knowledge of raw data and company documents (background here), hostility counted for “homicidal acts, homicidal ideation and aggressive events.”

Suicidality is now lability and overt aggression is now hostility. Sounds much nicer that way.

Conveniently defining depression: On page 770 of the study report, the authors opined that “…our study demonstrates that treatment with paroxetine results in clinically relevant improvement in depression scores.” Yet the only measures that showed a significant advantage for paroxetine were based on some arbitrary cutoff (and the researchers could, of course, opt for whatever cutoff yielded the results they wanted), were a global measure of improvement that paints an optimistic view of treatment outcome, or were cherry-picked single items from longer questionnaires -- none of them valid, comprehensive measures of depression.

Also, think about the following for a moment. A single question on a questionnaire or interview obviously cannot cover the many facets of depression. Implying that paroxetine is superior in treating depression because it beat placebo on a single interview item is utterly invalid. Such logic is akin to finding that a patient with the flu coughs less often on a medication than on placebo and then declaring the medication superior for managing the flu, despite it working no better on any of the many other symptoms that comprise influenza.

Whitewashing safety data: It gets even more bizarre. Remember those 10 people who had serious adverse psychiatric events while taking paroxetine? Well, the researchers concluded that none of the adverse psychiatric events were caused by paroxetine. Interestingly, the one person who became “labile” on placebo – that event was attributed to placebo. In this magical study, a drug cannot make you suicidal but a placebo can. In a later document, Keller and colleagues said that “acute psychosocial stressors, medication noncompliance, and/or untreated comorbid disorders were judged by the investigators to account for the adverse effects in all 10 patients.” This sounds to me as if the investigators had concluded beforehand that paroxetine is incapable of making participants worse and they just had to drum up some other explanation as to why these serious events were occurring. David Healy has also discussed this fallacious assumption that drugs cannot cause harm.

Did Keller Know the Study Data? I’ll paraphrase briefly from Panorama, which had a video of Keller discussing the study and his role in examining and analyzing its data. He said he had reviewed data analytic tables, but then he mentioned soon after that on some printouts there were “item numbers and variable numbers and they don’t even have words on them – I tend not to look at those. I do better with words than symbols. [emphasis mine].”

Ghosted: According to Panorama (and documents I’ve obtained), the paper was written by a ghostwriter. Keller’s response to the ghostwriter after he saw the paper? “You did a superb job with this. Thank you very much. It is excellent. Enclosed are some rather minor changes from me, Neal, and Mike. [emphasis mine].” And let’s remember that Keller apparently did not wish to bother with looking at numbers. It would also appear that he did not want to bother much with the words based upon those numbers.

Third Party Technique: This is a tried and true trick -- get several leading academics to stamp their names on a study manuscript, and suddenly it appears as though the study was closely supervised in every aspect, from data collection to data analysis to writeup, by independent academics. Thus, it is not GlaxoSmithKline telling you that its product is great; it is "independent researchers" from such bastions of academia as Brown University, the University of Pittsburgh, the University of Texas Southwestern Medical Center, and the University of Texas Medical Branch at Galveston who are stamping approval on the product. More on this in future posts.

Keller’s Background… It is relatively well-known that Keller makes much money from his consulting and research arrangements with drug companies. In fact, several years ago, it was documented that Keller pulled in over $500,000 in a single year through these lucrative deals. When looking at how he stuck his name on a study he did not write, endorsing conclusions that were clearly far from the actual study data, can one seriously believe that Keller operated as an independent researcher? Can you believe that this is an isolated incident?

See, for example, Keller’s involvement in a study examining the effects of Risperdal (risperidone) for the treatment of depression. The study was presented a number of times, and he never appeared as an author of any of the presentations. Yet when the study was published, his name appeared as an author. The real kicker: according to the published article, he allegedly helped to design the study. If he had played a major role, he would have been acknowledged earlier by being listed as a presentation author -- so he apparently helped design the study after it was completed, which is obviously a major feat! The whole story is here. Why put his name on the paper? So that readers would believe more strongly in the study due to his big-name status.

In addition, Keller wrote about how Effexor reduces episodes of depression in the long term, though he clearly misinterpreted the study’s findings. To be fair, many other researchers have made the same mistake in believing that SSRI’s prevent the return of depression. To quote an earlier post:

In other words, because SSRIs and similar drugs (e.g., Effexor) have withdrawal symptoms that sometimes lead to depression, it looks like they are effective in preventing depression because people often get worse shortly after stopping their medication. The drug companies (Wyeth, in the case of Effexor) would like you to believe that this means antidepressants protect you from re-experiencing depression once you get better, that they are a good long-term treatment. A more accurate statement is that antidepressants protect you from their own substantial withdrawal symptoms until you stop taking them.

Again, Keller is way off from the study data.

Keller on Camera: Keller’s response to being asked about the increased suicidality among participants taking paroxetine in Study 329 was interesting:

None of these attempts led to suicide and very few of them led to hospitalization.

Well, I suppose a huge increase in suicidal thoughts and gestures is okay, then? This is the commentary of an “opinion leader” -- if statements such as the above shape opinions among practicing psychiatrists, then we really are in trouble.

Next: Consider this post just the start regarding Paxil/Seroxat. The way the data were pimped by GSK merits more discussion, as does the role of allegedly detached academics in this debacle.