As the trend toward medicalizing (diagnosing and treating) ever more aspects of human mental functioning continues, it should come as no surprise that the latest issue of the Journal of Clinical Psychiatry features a study examining the use of sertraline (Zoloft) to treat PMS.
The study concluded that “intermittent, luteal phase dosing with low doses of sertraline is an effective treatment for PMS” (p. 1630). The lower dose of the medication (25 mg) actually appeared to be as effective as, or more effective than, the higher dose (50 mg). The authors also tested a couple of other dosing schedules, finding all of them to be somewhat effective.
One problem I noted (but the authors generally did not) is that the effects are all in the small-to-moderate range. The Quality of Life Enjoyment and Satisfaction Questionnaire showed virtually no change on drug compared to placebo, yet a quality-of-life measure should be an important part of such a study. After all, if PMS is such a huge problem, then surely a treatment that works should notably improve quality of life? Hmmm…
There was a huge statistical problem in this study as well. The authors used 12 measures to assess outcome, two different dosages of the drug (25 mg versus 50 mg), and three dosing schedules (luteal phase, at symptom onset, and continuous). That works out to 12 × 2 × 3 = 72 comparisons of drug versus placebo. Of these 72 comparisons, I counted 36 that were statistically significant, meaning that fully half of the comparisons found no significant advantage for medication.
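For the skeptics keeping score at home, the comparison count is easy to verify (a minimal sketch in Python; the counts come straight from the article):

```python
# The comparison grid described in the article.
measures = 12    # outcome measures used to assess outcome
doses = 2        # 25 mg and 50 mg
schedules = 3    # luteal phase, symptom onset, continuous

comparisons = measures * doses * schedules
print(comparisons)                 # 72 drug-vs-placebo comparisons

significant = 36                   # comparisons I counted as significant
print(significant / comparisons)   # 0.5 -- half found no drug advantage
```

But wait, it gets fishier…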
Despite making 72 comparisons, the authors employ their own unique definition of statistical significance. Instead of using the conventional cutoff of p < .05, they opt for p < .08 on some measures, a threshold that is generally not accepted in the scientific community. If we discard the seven results that reached only p < .08, we are left with 29 of 72 comparisons showing a significant (p < .05) advantage over placebo. When making so many comparisons, one expects to find some differences by chance alone (Type I error, in statistical jargon), yet the authors made no adjustment for multiple comparisons. This, friends, is data dredging.
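To see why uncorrected multiple comparisons matter, here is a minimal sketch (it assumes the 72 tests are independent, which the study’s overlapping measures almost certainly are not, but it conveys the point):

```python
alpha = 0.05   # the conventional significance cutoff
n_tests = 72   # drug-vs-placebo comparisons made in the study

# If the drug did nothing at all, we would still expect this many
# "significant" results by chance alone (Type I errors):
expected_false_positives = n_tests * alpha    # 3.6

# Probability of at least one false positive across all 72 tests:
p_at_least_one = 1 - (1 - alpha) ** n_tests   # ~0.975

# A simple Bonferroni correction would demand a far stricter
# per-test cutoff before calling any single result significant:
bonferroni_cutoff = alpha / n_tests           # ~0.0007

print(expected_false_positives)
print(round(p_at_least_one, 3))
print(round(bonferroni_cutoff, 5))
```

The authors did none of this; every result in the paper is judged against the uncorrected (and, on some measures, inflated) cutoff.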
It is a time-honored strategy: use a lot of outcome measures and a variety of dosing schedules so that your drug, which seems to have only a mild effect, comes up statistically superior to placebo on at least some measures, which in turn lets you fill a few pages discussing its efficacy.
Ghostwriter Watch: “The authors also wish to acknowledge Edward Schweizer, M.D., Paladin Consulting Group, New York, N.Y., for assistance in the preparation of the manuscript” (p. 1631). I don’t know much about Dr. Schweizer, but I am assuming his consulting firm was paid by Pfizer and wrote a good deal of the paper. Let me know if I’m wrong. Good deal: Pfizer pays for the study (this is acknowledged in the article), performs a variety of statistical gymnastics in the analysis, then pays a ghostwriter to write a good chunk of the manuscript.