
Friday, October 31, 2008

You Really Can Report Safety Data


A new study concluded that the combination of sertraline (Zoloft) and cognitive-behavioral therapy (CBT) worked better than either treatment alone for children with anxiety disorders. There was even a nonsignificant trend for Zoloft to outperform CBT, which was quite surprising to me. But that's not really the point of this post. The study can be read at the New England Journal of Medicine website.

I'd like to commend the researchers on doing something that is exceedingly rare in psychopharmacology and psychotherapy trials -- they gave a detailed report of adverse events. And we find that a greater percentage of kids showed suicidal ideation on... CBT. The difference was not statistically significant, but it was nonetheless surprising. Zoloft, however, was related to significantly more disinhibition, irritability, restlessness, and poor concentration than CBT. This may have been a fluke, but two participants on Zoloft had "homicidal ideation" compared to none on CBT. I have bitched several times about missing/mysterious data on adverse events in psychiatric drug trials, and some have also complained that psychotherapy trials do a poor job of tabulating adverse event data. Again, kudos to the study authors for reporting adverse events; imagine if reporting safety data in such a manner were common practice.

Source: J. T. Walkup, A. M. Albano, J. Piacentini, B. Birmaher, S. N. Compton, J. T. Sherrill, G. S. Ginsburg, M. A. Rynn, J. McCracken, B. Waslick, S. Iyengar, J. S. March, P. C. Kendall (2008). Cognitive Behavioral Therapy, Sertraline, or a Combination in Childhood Anxiety. New England Journal of Medicine. DOI: 10.1056/NEJMoa0804633

Wednesday, September 19, 2007

The Drug Safety Blindfold

A recent study in the Archives of Internal Medicine found that serious adverse drug events reported to the FDA increased 2.6-fold from 1998 to 2005. A major problem with any such investigation, one acknowledged by the authors, is that adverse events are only rarely reported when they occur. Their findings are thus almost certainly an underestimate, likely by a large margin.

Why the upsurge? The authors stated:

The increase over time was largely explained by increases of just 1 type of report – expedited reports from manufacturers of new, serious events not on the product label. Of the increase of 54,876 additional events in 2005 compared with 1998, expedited reports accounted for 48,080 (87.6%) of these events.

Wait a second -- a large chunk of these reports came from manufacturers and concerned serious events that are not on the product label, meaning events the label does not even warn about? I really hope I am missing something here. At first glance, it would appear that drug labels are missing a great deal of relevant information!

Furious Seasons has a long post regarding the psych meds listed in the report, so I won’t steal his thunder except to say that the usual suspects were linked to a large number of deaths. It is important to note that these reported deaths were not necessarily caused by the drug; rather, whoever reported the event thought a relationship between the death and the drug might exist.

It is a sobering article that really deepened my curiosity about how much we actually know about the safety of our medicines. Prior research has thoroughly documented that clinical trials do a very poor job of reporting safety outcomes, so I suppose the latest study is actually not particularly surprising. For example, as reported in the American Journal of Psychiatry, across a reasonably large sample of psychotropic drug trials:

On average, drug trials devoted one-tenth of a page in their results sections to safety, and 58.3% devoted more space to the names and affiliations of authors than to safety.

Bummer. And, from the Journal of the American Medical Association regarding clinical trials for a wide variety of interventions:

Overall, the median space allocated to safety results was 0.3 page. A similar amount of space was devoted to contributor names and affiliations… Only 39% of trials had adequate reporting of clinical adverse effects and only 29% had adequate reporting of laboratory-determined toxicity.

I’m not trying to incite a panic, but it is at least a little scary that clinical trials don’t provide adequate safety information and that, apparently, drug labels are missing a significant number of relevant adverse effects. But hey, what’s a few dead people when there are buckets of money to be made?

Update: John Grohol at Psych Central has some intelligent comments about the Archives of Internal Medicine study, mentioning that we need to know how many people are taking these drugs in order to put each drug’s adverse event tally in context -- a raw count of reports means little without the number of people taking each medication (see the quick sketch below). I agree with his comment and also believe that we need to be vigilant -- drug safety reporting is a joke and needs to change.
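To make Grohol’s point concrete, here is a minimal sketch (in Python, using made-up numbers that are not from the study or the FDA database) of how a raw count of reports only becomes interpretable once you divide by an estimate of how many people are exposed to each drug:

```python
# Illustrative only: these counts are invented, not taken from the
# Archives of Internal Medicine study or from FDA data.
reported_deaths = {"Drug A": 1200, "Drug B": 300}
estimated_users = {"Drug A": 10_000_000, "Drug B": 500_000}

for drug, deaths in reported_deaths.items():
    # Normalize the raw report count by the (estimated) exposed population.
    rate_per_100k = deaths / estimated_users[drug] * 100_000
    print(f"{drug}: {deaths} reports, {rate_per_100k:.0f} per 100,000 users")

# Drug A has four times as many raw reports, but Drug B's reporting rate
# is five times higher once exposure is taken into account.
```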