In this post, I will discuss statements made by a couple of academic thought leaders regarding the BOLDER I study and how these statements compare with the actual evidence.
Cutler: Dr. Andrew Cutler, one of the authors on the BOLDER I study to which I referred earlier, had some nice things to say about the study’s findings in Clinical Psychiatry News. For example, he said “It’s a landmark study.” In addition, the article paraphrased Cutler as saying, “The study used one dose each evening, and had several other unique features, he noted. Almost all previous data on treating bipolar depression focus on bipolar I patients, but this study included rapid cyclers and bipolar II patients.”
What Dr. Cutler failed to note is that Seroquel was not more effective than placebo for bipolar II patients at the end of the eight-week trial. If Cutler is going to introduce bipolar II participants into the mix, then he has an obligation to mention that treatment was not significantly more effective than placebo for this group. He may have been misquoted – I don’t know – but it reads like PR for the drug, going well beyond the findings of the study he is discussing.
It is worth noting that Cutler runs a private research firm, and it can’t hurt his standing among the drug companies that hire his firm to run clinical trials when he makes statements such as those mentioned above. I found it interesting that his firm’s website states, “We are also noted for low placebo response.” How do you get a low placebo response besides not treating participants very well? I’m not making an accusation – I am just genuinely curious. If placebo response has to do with expectations of getting better, and these expectations are influenced by how the study personnel treat participants, then what’s the deal? Maybe you can treat participants well and still quash the placebo response; I just wish I knew more about how this was done.
Calabrese: Dr. Joseph Calabrese, BOLDER I’s primary investigator, had the following to say about the study: “There was a dramatic response within eight days of beginning treatment in patients who were symptomatic with bipolar depression.” I’m not sure if “dramatic” is the right word. Looking at Figure 2 in the article, I see that after week one, placebo patients seem to have changed by 5 points on the Montgomery-Asberg Depression Rating Scale (MADRS), while those on Seroquel seem to have changed by about 9.5 points. That’s an average difference of about 4.5 points on a rating scale where the average patient had a score of about 30 coming in. So is a score of 25 (on placebo) “dramatically” different than a score of 20.5 (on medication)? Methinks that the language itself is a little dramatic. Don’t get me wrong – it’s good that the medication was better than placebo after a week. But let’s not get hyped up beyond what is reasonable.
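To make the back-of-the-envelope math above explicit, here is a small sketch. Note that the input numbers are my approximate readings of the week-one values from Figure 2 of the published article, not exact published statistics:

```python
# Approximate values read off Figure 2 of BOLDER I (my estimates, not exact data)
baseline = 30.0          # rough mean MADRS score at study entry
placebo_change = 5.0     # approximate week-one improvement on placebo
seroquel_change = 9.5    # approximate week-one improvement on Seroquel

placebo_week1 = baseline - placebo_change      # score after one week on placebo
seroquel_week1 = baseline - seroquel_change    # score after one week on Seroquel
difference = placebo_week1 - seroquel_week1    # between-group gap in points

print(f"Placebo week-one score:   {placebo_week1}")   # 25.0
print(f"Seroquel week-one score:  {seroquel_week1}")  # 20.5
print(f"Between-group difference: {difference}")      # 4.5
```

A 4.5-point gap on a scale where patients started around 30 is the whole basis for the word “dramatic,” which is the point at issue.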
It is troubling when academics are willing to make PR statements that go beyond the evidence and that, as in Cutler’s remarks regarding bipolar II, are demonstrably false. Academic thought leaders are frequently featured in news articles, review articles, PR releases, and continuing medical ‘education’ events making these types of statements that go well beyond the evidence. It seems that too few professionals in the mental health arena have picked up on the fact that marketing and science have largely melded into a witches’ brew in which what is good for a product’s sales often trumps the actual data on its efficacy and safety.
Before anyone gets upset because I mentioned people’s names here, keep in mind that I’m not saying either Cutler or Calabrese is a “bad person.” I don’t know them personally. They may well conduct top-notch research and possess saintly personalities. All I am talking about here is how their remarks seem to differ from the studies those remarks are based upon, and how widespread such practice is.
Major Hat Tip: Depression Introspection.