Friday, May 30, 2008

BOLDER Update: Lilly Started It

Some of my longstanding readers probably remember that I long ago wrote about a statistical issue in the Seroquel trials for bipolar depression (known by the corny acronym BOLDER). It was just a minor issue, you know, the kind that would make a drug look about 50% more effective than a placebo depending on which type of analysis you chose to use. No biggie.

Lilly Started It: It just so happens that Philip Dawdy (who has apparently been christened as Dr. Dawdy) at Furious Seasons recently had a letter published in the Journal of Clinical Psychopharmacology on this issue of statistics. Dawdy noted that the authors' use of a statistical method known as mixed models repeated measures (MMRM) rather than the more conventional last observation carried forward (LOCF) resulted in a major inflation in effect size. As I mentioned earlier, the choice of methods to calculate the effect size (the magnitude of difference between drug and placebo) had a big impact. Dawdy aptly noted that the authors should have reported the effect sizes calculated by both methods so that readers could note how one method made Seroquel look better than did the other method. To quote Dawdy, "...the authors should also have reported the LOCF effect sizes so that the readers would have been aware of how the method impacted the findings." I was flattered to see that my blog was cited in Dawdy's letter. I heard through the grapevine that another author attempted to cite my blog in a letter to the editor, but that the journal struck the citation to my site in the final version of the published letter. If some of y'all researchers who read this blog wanna cite my site, go ahead.
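For readers new to the jargon: the "effect size" in these trials (the magnitude of difference between drug and placebo) is usually Cohen's d, the difference in mean improvement between the two arms divided by the pooled standard deviation. Here's a minimal sketch with invented numbers (nothing from the actual BOLDER data) showing how the same formula can spit out a noticeably bigger d depending on which set of endpoint scores you feed it:

```python
# A toy sketch of how an effect size (Cohen's d) is computed.
# All numbers below are invented for illustration; they are NOT the BOLDER data.
from statistics import mean, stdev

def cohens_d(drug, placebo):
    """Drug-placebo difference in mean improvement, divided by the pooled SD."""
    n1, n2 = len(drug), len(placebo)
    pooled_var = ((n1 - 1) * stdev(drug) ** 2 + (n2 - 1) * stdev(placebo) ** 2) / (n1 + n2 - 2)
    return (mean(drug) - mean(placebo)) / pooled_var ** 0.5

# Hypothetical per-patient improvements (points dropped on a depression scale),
# once with LOCF-style endpoints and once with model-estimated (MMRM-style) endpoints.
drug_locf, placebo_locf = [10, 2, 14, 5, 8, 12, 3, 9], [7, 1, 11, 4, 6, 9, 2, 8]
drug_mmrm, placebo_mmrm = [11, 3, 16, 6, 9, 14, 4, 11], [7, 1, 11, 4, 6, 9, 2, 8]

print(f"LOCF-style d: {cohens_d(drug_locf, placebo_locf):.2f}")   # ~0.5
print(f"MMRM-style d: {cohens_d(drug_mmrm, placebo_mmrm):.2f}")   # ~0.8
```

The imputation method changes the inputs (the per-patient endpoints), not the formula, which is why reporting both sets of results would have cost the authors next to nothing.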


I'm not saying that Seroquel was a dud, but that it did get a boost from the analysis used in the study. When the authors play by a new rule for calculating the differences between drug and placebo, it would make sense to report the results under both the old rules and the new ones. In his response, Michael Thase of the BOLDER team wrote that "It is my understanding that mixed model repeated measurement (MMRM) analysis was chosen to compute effect sizes in the BOLDER studies because it would permit direct comparison with the results of the study of the only other treatment approved for bipolar depression, the combination of olanzapine and fluoxetine (OFC). Thus, in plain and simple terms, we were attempting to facilitate an apples to apples comparison between quetiapine monotherapy and OFC." So because Lilly did it, we did it. Um, OK. But is there some kind of law against reporting the results from both the newfangled MMRM analysis and the old-fashioned LOCF analysis? Just wondering. And if Lilly started saying it was okay to market Zyprexa off-label for various conditions, would that mean all antipsychotics could be marketed off-label for all sorts of issues? (Hypothetically speaking, of course.)

Stats: Thase goes on to note that there is some research suggesting that MMRM does not overinflate effect sizes; rather, LOCF underestimates them. I know a bit about stats, but I'm not a statistician. Basically, the differences between the methods boil down to how data are handled for people who dropped out of a study. The best solution is to track down the dropouts and assess how they are actually functioning, rather than having a statistical model guess at their mental well-being, but that requires extra effort and time and is sometimes not possible. The LOCF model makes some assumptions that are quirky at best, while MMRM seems to handle missing data better in many situations. All that being said, in many trials where a drug beats placebo, MMRM appears to generate effect sizes that are higher than LOCF does, which leads to the question: "Geez, have we been underestimating the effects of drugs by 50%?" Um, that seems a little hard to swallow. I'm not quite ready to buy into that.
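To make the dropout issue concrete, here is a toy sketch (again, invented numbers, not the BOLDER trials) of what LOCF actually does with a patient who leaves the study early, compared with a completer-only analysis. MMRM, by contrast, fits a mixed-effects model to every visit that was actually observed and estimates the endpoint from the modeled trajectory; I haven't attempted to reproduce that here.

```python
# Toy depression-scale scores by visit; None marks a missed visit (dropout).
# Values are invented purely to show the mechanics of the imputation.
patients = {
    "drug_1":    [30, 24, 18, 12],      # completed all four visits
    "drug_2":    [32, 28, None, None],  # dropped out after visit 2
    "placebo_1": [31, 27, 24, 22],
    "placebo_2": [29, 26, None, None],
}

def locf_endpoint(scores):
    """Last observation carried forward: the final non-missing score
    stands in for the unobserved endpoint."""
    return [s for s in scores if s is not None][-1]

def completer_endpoint(scores):
    """Completer analysis: only patients with an actual final visit count."""
    return scores[-1]  # None means the patient is excluded

for name, scores in patients.items():
    baseline = scores[0]
    locf_change = locf_endpoint(scores) - baseline
    final = completer_endpoint(scores)
    comp_change = (final - baseline) if final is not None else "excluded"
    print(f"{name}: LOCF change = {locf_change}, completer change = {comp_change}")
```

Notice that the dropouts get frozen at their barely improved early scores under LOCF; whether that shrinks or stretches the drug-placebo gap depends on who drops out and when, which is exactly why the two methods can disagree.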

4 comments:

Jeff said...

In studies like this, it would be preferable to see multiple analyses reported. This could consist of only a summary sentence or two, or one table, and then a reference to a website where the additional analyses would be published in detail. I would like to see LOCF, MMRM, and most importantly, a straightforward report of how those who actually completed the study did. [In other words, a statistical analysis that consists only of non-imaginary patients].

Furthermore, what would be really ideal is to see the raw data posted so that other scientists could analyze it.

Jeff Lacasse

Anonymous said...

Congratulations on hitting the big time! lol

Bernard Carroll said...

CUCKOO'S EGGS IN THE NEST OF SCIENCE

Jeff Lacasse is right. I recently came across a splendid metaphor that applies here. In reference to the persistent questions about biased or incomplete or suppressed analyses in the reporting of experimercials, David Healy remarked that these studies are in fact not scientific reports at all – they are cuckoo’s eggs in the nest of science. “If companies want to market their product under the banner of science, they can be required to conform to the norms of science.” That includes making the data available for independent review and analysis. The full article is titled One Flew Over the Conflict of Interest Nest, published in World Psychiatry 2007; 6: 27-28.

soulful sepulcher said...
This comment has been removed by the author.