Well, I have since gone back to the BOLDER I study, and I can see now that the same trick was in place. We're talking about effect sizes calculated via last observation carried forward (LOCF; the traditional method) versus mixed-model repeated measures (MMRM; the 'new' method). It may sound boring and wonky, but the effect size is the backbone of interpreting how strong an effect the treatment generated. For more details, please see my earlier post.
In BOLDER II, I could at least calculate the traditional last observation carried forward effect size from the data provided in the article. In BOLDER I, this was not possible. To calculate the size of the treatment effect, one must know the means and standard deviations of all groups, and this information was not provided in full: no standard deviations were reported. Given that reporting means and standard deviations is standard practice in medical journals, this is quite odd.
So I decided to borrow the standard deviations from the BOLDER II study, whose groups seemed fairly comparable, to see what would happen. Yes, the standard deviations of the groups in the two studies are likely to differ somewhat, but since the authors didn't report them in BOLDER I, I had little other choice. For the 300 mg Seroquel group, I found an effect size of .49, a moderate treatment effect. The BOLDER I authors, using the alternative MMRM method, found a treatment effect of .67, which is moderate to large. Their method appeared to boost the effect size by 37%. For the 600 mg Seroquel group, I found an effect size of .52 (moderate), whereas the study authors came up with .81, which would typically represent a large treatment effect. With the authors' method, the effect size apparently increased by 56%.
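For readers who want to see what the traditional calculation actually involves, here is a minimal sketch of a standardized mean difference (Cohen's d) computed from group means and standard deviations. The numbers are purely hypothetical for illustration; they are not from either BOLDER study, whose raw summary statistics (as noted above) were not fully reported.

```python
import math

def cohens_d(mean_tx, sd_tx, n_tx, mean_pl, sd_pl, n_pl):
    """Effect size: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_pl - 1) * sd_pl**2)
                          / (n_tx + n_pl - 2))
    return (mean_tx - mean_pl) / pooled_sd

# Hypothetical example: the treatment group improves 12 points on a
# symptom scale, placebo improves 7, both with SD of 10 and 150 per arm.
d = cohens_d(12.0, 10.0, 150, 7.0, 10.0, 150)
print(round(d, 2))  # prints 0.5 -- a "moderate" effect by Cohen's rough conventions
```

The point is how sensitive this number is to its ingredients: the standard deviations sit in the denominator, which is exactly why a study that withholds them makes the traditional effect size impossible to compute.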
The authors did not even bother to report the standard deviations or the effect sizes calculated the traditional way. When similar concerns were raised about BOLDER II, one of the authors replied, essentially, that they thought nobody would be interested in the traditional analysis. Call me old-fashioned, but when a new statistical analysis comes on the scene that dramatically increases the apparent magnitude of treatment effects, making treatments appear more powerful just because the data are analyzed differently, I'm suspicious. When the old method, used to calculate effects for decades, is simply tossed out the window as outdated, one can't help but wonder whether these drug-company-funded statisticians are licking their chops because, through statistical maneuvering, they have figured out how to make their sponsor's products look better.
So, when you see that the apparent effects of treatments have grown, keep in mind that you may well be staring at the effects of newfangled statistics rather than the actual treatment.