Spelling error in title intentional. I got my hands on the BOLDER II study, which found that quetiapine (Seroquel) was an effective treatment for bipolar depression. As is often the case with clinical trials research, I found something that made me uneasy. The abstract states that the therapeutic effect size (ES) was .61 for quetiapine 300 mg/d and .54 for quetiapine 600 mg/d on the Montgomery–Åsberg Depression Rating Scale (MADRS). The authors, however, calculated effect size differently from how it is typically done, and this alternative method inflated the apparent effect of Seroquel by a decent margin. In this post, I’ll compare effect size calculated the conventional way to the effect size reported in the BOLDER II results.
If you’re not interested in how I calculated my stats, please skip the next paragraph!
I calculated the effect sizes from information in Table 2 (pg. 604), which shows that, on the MADRS, quetiapine 300 mg/d was associated with an average improvement of 16.94 points, while placebo was associated with an average improvement of 11.93 points. That nets a difference of 5.01 points between the groups, according to my math. I calculated the standard deviation (SD) for each group from the reported standard error (SE) for each group, using SE = SD / √n, where n is the number of participants in the group. This yields an SD of 12.56 for the placebo group and an SD of 12.33 for the Seroquel 300 mg/d group. I then averaged the two SDs (technically, I weighted the placebo group slightly more heavily because there were slightly more placebo participants than Seroquel 300 mg/d participants), which gave a pooled standard deviation of 12.45. To get the effect size, I divided 5.01 by 12.45.
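The arithmetic above can be sketched in a few lines of Python. The exact group sizes aren't restated in this post, so this sketch pools the two SDs with equal weight rather than weighting the placebo group slightly more heavily as I did; the result still matches to two decimal places.

```python
from math import sqrt

def cohens_d(mean_diff, sd1, sd2):
    """Effect size: difference in mean change divided by the pooled SD.
    Assumes roughly equal group sizes, so the two SDs are pooled with
    equal weight (the hand calculation weighted the slightly larger
    placebo group a bit more heavily)."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return mean_diff / pooled_sd

# MADRS improvement: 16.94 (quetiapine 300 mg/d) vs. 11.93 (placebo)
mean_diff = 16.94 - 11.93            # 5.01 points
d = cohens_d(mean_diff, 12.33, 12.56)
print(round(d, 2))                   # prints 0.4
```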
On the MADRS, the effect size for Seroquel 300 mg/d versus placebo is .402. I assure you that the methods I used are those conventionally used in the field. The study authors, however, used mixed model repeated measures (MMRM) analysis to calculate their effect size. This is not how ES is conventionally calculated, and an ES of .61 is about 50% larger than the ES of .402 obtained with conventional methods. The authors did not provide a rationale for using MMRM analysis to calculate effect size; when an unusual method is used, an explanation should certainly be provided. My guess (and I could well be wrong) is that the sponsor looked at how both methods turned out and decided that reporting the MMRM effect size made for better publicity. An ES of .61 is moderate by most standards, while an effect of .40 is often considered small to moderate. Through a newfangled analysis, the effect grows substantially and the drug looks more efficacious than it would have had the conventional method been used.
Hey, I’m all for progress. If MMRM really is a better analytic method, so be it. But the study authors provided absolutely no rationale for using it. They should have given a rationale and reported ES calculated with both MMRM and traditional methods so that readers could compare the figures. Instead, the higher ESs are reported without any justification, leaving the reader to think there is no controversy regarding the reported effects of Seroquel in this study.
As for the 600 mg/d dose, my calculations yield an ES of .37, again in the small-to-moderate category, whereas the authors reported .54. The effect size goes up 46% this time when the unconventional method is used.
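As a quick sanity check on these inflation figures, dividing each reported MMRM effect size by the conventionally calculated one reproduces the percentages quoted above:

```python
# Reported (MMRM) ES divided by conventionally calculated ES,
# expressed as percent inflation
inflation_300 = (0.61 / 0.402 - 1) * 100   # quetiapine 300 mg/d
inflation_600 = (0.54 / 0.37 - 1) * 100    # quetiapine 600 mg/d

print(round(inflation_300))   # prints 52 (i.e., "about 50% larger")
print(round(inflation_600))   # prints 46
```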
To summarize, the authors used an unconventional statistical method to calculate effect size, which resulted in effect sizes about 50% larger than those yielded when conventional methods are used. The results using conventional methods are not provided in the published report and no rationale is laid out for the unconventional method.
There’s more to come on my interpretation of BOLDER II. Stay tuned.