Wednesday, November 29, 2006
PharmaGossip has written about this issue on occasion (such as here), pointing out that there is only so long that the drug industry can run around in circles. Research money needs to start going toward developing new meds that add a significant benefit beyond existing products rather than creating me-too's or new drug classes that actually add little to existing treatments.
Remember, when you hear PhRMA waxing poetic about how much money is spent on R & D, a large chunk of that money is going to conduct studies showing that existing product X is roughly as effective as drugs A, B, C, and/or D in treating condition Y. That is the kind of research that generally does little to improve patient outcomes.
The full title of the article published in BMC Psychiatry is “Even More Suicide Attempts in Clinical Trials with Paroxetine Randomised Against Placebo.” The authors are Aursnes, Tvete, Gaasemyr, & Natvig. They performed a Bayesian analysis (a form of statistical analysis) on whether paroxetine (Paxil/Seroxat) is related to an increase in suicides in clinical trials. Here’s what they had to say, starting with some background:
“Last year we wrote a paper ‘Suicide attempts in clinical trials with paroxetine randomised against placebo’ that hit the front pages of newspapers worldwide [2,3]. Our publication demonstrated an increased intensity of suicide attempts per year when using paroxetine compared to placebo, and caused GlaxoSmithKline (GSK) to come up with a comment. Since then GSK has provided additional data to the American Food and Drug Administration (FDA), as the agency required new documentation on paroxetine. This also resulted in a Briefing Document from GSK in which they admit that there is an increased risk for suicide attempts associated with paroxetine.”
Then the authors mentioned that, in the current study, “We analyzed the data GSK presented in their latest report by the same Bayesian approach used by us in our article. We included only the double blind, parallel design studies with patients randomized to either paroxetine or placebo, as recognized by GSK in the Briefing Document. These 19 studies contained 3455 and 1978 patients to the treatment and placebo groups, respectively. They resulted in 11 and 1 suicide attempts, respectively, as compared to 7 and 1 in our study. The studies lasted 6 – 12 weeks, and we obtained 601 and 333 patient years in the treatment and placebo groups, respectively.”
What did they find?
“We found that the posterior probability that medication with paroxetine is associated with an increased intensity per year of a suicide attempt is 0.99 with the pessimistic prior, 0.98 with the slightly optimistic prior and 0.99 with the slightly pessimistic prior. Hence, we can be at least 98% sure that paroxetine increases suicide attempts. This is stronger evidence than the p value equal to 0.058 given in the Briefing Document.”
I am not a Bayesian statistician, so I can’t critique their method in depth. I can tell you that these conclusions match fairly well with evidence marshaled by Healy and others that paroxetine (like other SSRIs) is indeed related to an increased suicide risk. Feel free to read other posts (here, here, here, and here) and their related sources. At that point, I think you will see that, based on clinical trial data, there is indeed a greater risk for suicide on SSRIs than on placebo.
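For readers curious about the mechanics, the gist of the Bayesian approach can be approximated with a short sketch. Assuming Poisson-distributed suicide attempts and a single vague Gamma prior on each arm's rate (an illustrative choice on my part; the paper's specific optimistic/pessimistic priors are not reproduced here), the reported counts yield a posterior probability in the same neighborhood:

```python
import random

random.seed(0)

# Counts reported in the study: suicide attempts and patient-years
events_drug, years_drug = 11, 601
events_placebo, years_placebo = 1, 333

# Vague Gamma(shape=0.5, rate=0.0001) prior on each Poisson rate;
# this is an illustrative choice, NOT the paper's elicited priors
shape, rate = 0.5, 0.0001

# A Gamma prior with a Poisson likelihood gives a Gamma posterior:
# Gamma(shape + events, rate + patient-years)
def posterior_samples(events, years, n=200_000):
    return [random.gammavariate(shape + events, 1 / (rate + years))
            for _ in range(n)]

drug = posterior_samples(events_drug, years_drug)
placebo = posterior_samples(events_placebo, years_placebo)

# Posterior probability that the paroxetine rate exceeds the placebo rate
p = sum(d > pl for d, pl in zip(drug, placebo)) / len(drug)
print(round(p, 2))
```

With these counts the probability lands close to the 0.98–0.99 the authors report, though the exact value depends on the prior chosen.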
Tuesday, November 28, 2006
This is the third (and perhaps final) post in a series on a study recently published in Neuropsychopharmacology which used risperidone (Risperdal) as an add-on treatment for depression. The study had three phases, as follows:
1) Participants who had not responded to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro) for > six weeks were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks
2) Patients who failed to respond to citalopram were then assigned to open label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks
3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or citalopram + placebo and the differences between risperidone and placebo for depressive relapse were examined.
Let’s start with examining the differences between the trial report found on clinicaltrials.gov and the trial as published in Neuropsychopharmacology. The clinical trials report indicated that the primary outcome measures were: a) change in Montgomery-Asberg Depression Rating Scale (MADRS); b) time to relapse, as measured by Hamilton Rating Scale for Depression and Clinical Global Impression (CGI) scores.
Secondary measures included: a) response rate, measured by at least a 50% improvement in MADRS score; b) change in Hamilton Rating Scale for Depression (HAM-D) scores; and c) Clinical Global Impressions (CGI) scale scores.
Now, to the journal report. Under the results for the open-label risperidone augmentation, on page 9 of the early online version of the study, it is stated that the MADRS was “the primary measure used to assess depression severity.” Nowhere are the results of the MADRS response criterion (at least a 50% improvement in MADRS score) reported. Where did this go? If this was a predetermined test of treatment response, shouldn’t it be reported? While means and standard deviations of the MADRS are reported, the alleged measure of treatment response is strangely missing.
It’s also unclear what happened to the CGI scores, as means and standard deviations for this instrument are not reported anywhere. It’s mentioned that scores on this measure were used as one measure of relapse, but the means and standard deviations are missing.
Under the results from the double-blind continuation phase, we can see that the rate of relapse was 53.3% for risperidone and 54.6% for placebo. The time to relapse was 102 days for risperidone augmentation and 85 days for placebo augmentation, with an associated p-value of .52. But a post-hoc analysis found that the difference in time to relapse was significant at p < .05. The authors state that this difference emerged because they switched to a linear ranks test. I’m no expert on this test, so I can’t make a judgment, but I can say that I’m suspicious any time a p-value goes from .52 to .05 just by switching statistical tests. At the very least, an explanation in the article is in order, as it is noteworthy that merely switching statistical tests made such a change in the results.
Post-hoc analysis part 2. An additional post-hoc analysis was conducted using the subgroup of patients who were fully nonresponsive to citalopram monotherapy. In other words, the people who showed the poorest response to SSRI treatment were examined in separate analyses. Their median time to relapse and relapse rate were reported as significantly different, in favor of the risperidone group. The relapse rate was 56% in the risperidone group and 64% in the placebo group. The associated p-value was reported as .05. However, I conducted my own analysis and came up with Chi-Square = .922 and a p-value of .337. It is mentioned earlier in the paper that the authors used the Cochran-Mantel-Haenszel test, which explains how the p-value shrank so drastically. Again, a post-hoc analysis was conducted which changed the results substantially, yet the authors did not discuss reasons behind these large discrepancies. What this would appear to mean is that relapse differed substantially more than chance depending on the site where patients received treatment. The CMH test stratified by treatment site, which would account for differences due to treatment site. If treatment response really was dependent to a significant extent on the treatment site, this bears mention in the article, but such a discussion is nowhere to be found.
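For readers who want to run this kind of check themselves, here is a minimal sketch of the unstratified Pearson chi-square on a 2x2 relapse table. The subgroup sizes are not reproduced here, so the counts below are hypothetical, chosen only to illustrate the calculation at roughly the reported 56% vs. 64% relapse rates:

```python
import math

# Hypothetical 2x2 relapse table at ~56% vs ~64% relapse, n = 50 per arm
# (the paper's actual subgroup sizes are not reproduced here)
table = [[28, 22],   # risperidone: relapsed, not relapsed
         [32, 18]]   # placebo: relapsed, not relapsed

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)

# Pearson chi-square without continuity correction
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# For df = 1, the chi-square survival function is erfc(sqrt(x / 2))
p = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 3), round(p, 3))
```

On these illustrative counts the unstratified test is nowhere near significant, which is the broader point: if stratifying by site (as CMH does) moves a p-value this far, the site effects doing the work deserve discussion.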
Table 2. The results here are quite interesting. This refers to the double-blind section of the study in which patients who had shown symptom resolution while receiving risperidone were randomly assigned to continue risperidone or to receive placebo. On the MADRS, patients receiving risperidone, on average, gained 11.2 points (i.e., their depression worsened by 11.2 points), whereas patients on placebo gained 10.4 points. Thus, there was a slight, though certainly not significant, difference favoring placebo: patients on placebo worsened less than patients on risperidone. On the HAM-D, patients receiving risperidone worsened by an average of 7.6 points whereas they worsened by an average of 7.9 points on placebo. Between the two measures, it is clear that on average, there was very little difference between risperidone and placebo. However (and take out your notepads, please), patients in both groups got significantly worse over time in the third phase of the study. Thus, the scenario for the average patient is that he/she sees a relatively brief improvement in symptoms while taking risperidone and then returns to a period of moderate depressive symptoms. The authors do not discuss that mean scores between groups did not differ at all in the third phase of the study.
The only evidence to emerge from this study, really, is that an open-label treatment resulted in a decrease of symptoms. If Janssen really wanted to impress, they would have included an active comparator. Say, an older antipsychotic, a so-called “mood stabilizer,” or perhaps another atypical antipsychotic. Or, if not feeling daring, at least add a placebo to the mix. Based on the study results, we cannot even conclude that risperidone augmentation worked better than adding a sugar pill to SSRI treatment.
In summary, it is unknown what happened to some of the secondary outcome measures (CGI scores, MADRS response rate) and the statistical analyses used in some cases required more explanation, as their use led to a big change in interpretation of the results.
So what do we have here? I believe this is an excellent example of a study conducted for marketing purposes. I bet that many reprints of this article have been purchased by Janssen, which will be passed on by cheerleaders, er, drug reps, to physicians in a ploy to market Risperdal as an adjunctive treatment for depression. Additionally, there are likely “key opinion leaders,” perhaps including some of the study authors, who are willing to stump for Risperdal as an adjunctive treatment for depression at conferences, meetings, and dinners. With this study now published in Neuropsychopharmacology, there can be little doubt that such marketing strategies now have a glimmer of scientific sparkle on their side, although upon closer examination, the scientific evidence is very weak at best. Yet too few doctors will bother to perform a closer examination of the meager science behind the marketing as the atypical antipsychotics continue their march toward rebranding as “broad spectrum psychotropic agents,” as Risperdal was referred to in this press release regarding the present study.
I encourage interested readers to also check out my earlier posts regarding the questionable authorship of the paper (possibly involving magic!) as well as the rather blatant undisclosed conflicts of interest associated with the study. This is so distressing that I think I’ll have to chill out with a couple of Quaaludes, er, earlier versions of broad spectrum psychotropic agents.
"Akzo Nobel NV said its unit Organon and Pfizer Inc agreed to end their collaboration in the further development of antipsychotic drug asenapine, but this will have no effect on the planned stock market flotation of Organon.
Pfizer's decision to discontinue its participation in the asenapine development program 'is an outcome of a commercial analysis of the compound as a part of its overall portfolio,' the companies said."

Organon will continue to develop the product regardless of Pfizer's lack of collaboration. According to www.Clinicaltrials.gov, there are 19 studies that have investigated the drug or are currently recruiting patients. Looks like Organon is swinging for both bipolar disorder and schizophrenia. However, isn't this compound late to the game? The atypical market is flooded, and I'm going to boldly predict that asenapine will work no better than any existing product. It's going to take a whale of a marketing effort to push this one to importance. For Organon's sake, I hope they can get a better market share with asenapine than they did with mirtazapine (Remeron).
Monday, November 27, 2006
Over at Furious Seasons, you’ll find a good synopsis of some of the larger issues surrounding atypicals. I believe many readers will find his points of interest. Teaser below…
"This is now the third study in about a year to knockdown the prevailing orthodoxy that atypicals reduce symptoms better than first-generation antipsychotics and that the atypicals are so kinder and gentler with the side effects. I have discussed the CATIE study here and here.
All of these studies combined raise serious questions. Here are a few:
Why do pharma companies continue to charge anywhere from 8 to 20 times as much for atypicals as they do for older antipsychotics? Because they can and no one will question them on it.
Why do doctors continue to insist, in the face of compelling data, that atypicals are great? Because they can and no one will question them on it.
Why did NAMI National put out a press release and organize a teleconference for reporters soon after this Archives of General Psychiatry study called the status of atypicals into account? Because they can and no one will question them on it. And, NAMI National gets a lot of money each year from pharma companies. Any connection?
Why have these same atypicals suddenly become frontline treatments in treating bipolar disorder, despite a profound lack of independent evidence showing that these meds are good for schizophrenics and that those poor folks can barely tolerate taking them? Why would they suddenly become so "good" for bipolars? Hell, they don't even reduce re-hospitalization rates compared to only taking a mood stabilizer. Bipolars don't particularly fancy these meds, either, as I pointed out last year.
Why are we now giving them to children? Why are their parents going along for the ride?"
For much more, interested readers should take a hike over to the post at Furious Seasons.
For my take on the CUtLASS study, feel free to read here.
The FDA warning, issued on Nov. 16, stated that Seroquel sales material distributed by the company was misleading because it minimized information about certain risks contained in the drug's labeling.
AstraZeneca said in an emailed statement that the sales material was accompanied by a copy of the FDA-approved product labeling, which includes the complete warnings and precautions.
"AstraZeneca takes FDA's letter seriously. We will work with the FDA to resolve the matter," the company said.
Seroquel, which is used to treat schizophrenia and bipolar disorder, is AstraZeneca's second best-selling drug, with sales of $2.8 billion in 2005."

Source: MarketWatch
Friday, November 24, 2006
"Two top executives have exited Cyberonics, after it disclosed that its stock options problems are much broader than previously reported.
On Monday, investors bid up shares of the Houston-based medical device maker, which said Chairman and CEO Robert Cummins and Chief Financial Officer Pamela Westbrook resigned.
The duo was replaced, on an interim basis, by three people: Tony Coelho as chairman, Reese Terry Jr. as chief executive and John Riccardi as chief financial officer. George Parker was appointed as interim chief operating officer.
The personnel changes came as Cyberonics reported widespread stock option problems in a filing Friday with the U.S. Securities and Exchange Commission..."
"It's funny how questionable financial practices seem to go hand-in-hand with dodgy marketing and strange science."
Indeed. Check out the full story here. No word on how this may impact Charles Nemeroff.
In an earlier post, I mentioned that it appears that antipsychotic use among kids has risen drastically in the past few years. Well, another study (here and here) indicated that AP use among kids increased 500% from 1993-1995 to 2002. Yeah, you read that correctly. Five Hundred Percent. Five-Fold. No, I'm not kidding. And their use continues to rise.
This is not based on much evidence that these meds are more effective than older meds, like lithium or older antipsychotics. Nor is the safety data particularly compelling.
"Last year in the United States, about 1.6 million children and teenagers — 280,000 of them under age 10 — were given at least two psychiatric drugs in combination, according to an analysis performed by Medco Health Solutions at the request of The New York Times. More than 500,000 were prescribed at least three psychiatric drugs. More than 160,000 got at least four medications together, the analysis found.
Many psychiatrists and parents believe that such drug combinations, often referred to as drug cocktails, help. But there is virtually no scientific evidence to justify this multiplication of pills, researchers say. A few studies have shown that a combination of two drugs can be helpful in adult patients, but the evidence in children is scant. And there is no evidence at all — “zero,” “zip,” “nil,” experts said — that combining three or more drugs is appropriate or even effective in children or adults."
"The use of two-medicine combinations in children is on much shakier ground. Even for single drugs, the effectiveness of some psychiatric medications in younger patients is questionable: most trials of antidepressants in depressed children, for instance, fail to show any beneficial effect. But hardly any studies have examined the safety or the effectiveness of medicine combinations in children. A 2003 review in The American Journal of Psychiatry found only six controlled trials of two-drug combinations. Four of the six failed to show any benefit; in a fifth, the improvement was offset by greater side effects.
“No one has been able to show that the benefits of these combinations outweigh the risks in children,” said Dr. Daniel J. Safer, an associate professor of psychiatry at Johns Hopkins University and an author of the 2003 review. [To read a great article by Dr. Safer regarding the influence of drug companies on research outcomes, check this out.]
If the evidence for two-drug combinations is minimal, for three-drug combinations it is nonexistent, several top experts said.
“The data is zip,” Dr. Hyman said."
The article mentions a few cases, one of which is mentioned below...
"Fate Riske, 3, of Fond du Lac, Wis., takes two antipsychotics and a sleeping medicine to control what her mother, Elizabeth Klein-Riske, said were hours-long tantrums, a desire to watch the same movies repeatedly and an insistence on eating the meat, cheese and bread in her sandwiches separately.
On a recent visit, Fate played sweetly for four hours as her parents, who both have trouble walking, sat in front of a television. Sucking on a pacifier, Fate showed off her pink dress and matching shoes.
Mrs. Klein-Riske credited the drugs for Fate’s cherubic behavior during the visit. But a few weeks on a different antipsychotic led Fate to become aggressive, talk rapidly and “run around wild, totally out of control,” said Mrs. Klein-Riske, who receives government financial and child-care assistance because her daughter is considered mentally ill.
Fate’s weight ballooned in five months to 48 pounds from 30."
So a three-year-old is taking two antipsychotics? Sounds kind of like the kid is acting like a three-year-old to me! OK, her behavior probably is worse than the average kid's, but there should be more behavioral interventions before digging into the polypharmacy chest, don't you think?
"Antidepressants are commonly paired with stimulants, but antidepressant use has declined over the last year after the F.D.A. warning about suicide risk. In their place, physicians are prescribing combinations that include antipsychotic and anticonvulsant drugs, according to Medco. From 2001 to 2005, the use of antipsychotic drugs in children and teenagers grew 73 percent, Medco found. Among girls, antipsychotic use more than doubled."
Read the whole thing here.
According to the warning letter posted on the FDA Web site Wednesday, the sales material "minimizes the risk of hyperglycemia and diabetes mellitus and fails to communicate important information regarding neuroleptic malignant syndrome, tardive dyskinesia and the bolded cataracts precaution."
...and Procter & Gamble keeps hiding (data, that is). To very briefly summarize, Blumsohn would like P & G to retract a misleading (at best) paper and provide the journal in which the data were published with relevant study data, yet P & G refuses to do so. As Blumsohn states in his latest post, "As stated, the main purpose of this letter is to revisit your earlier refusal to allow the data provided in April to be scrutinized in an open manner. Your refusal to allow the data to be transmitted to a journal editor accompanying a properly corrected manuscript is not appropriate. A journal editor can request raw data from an author at any time, and such refusal (particularly under the circumstances of this case) would be inappropriate." Basically, the evidence indicates that the P & G data are incorrect in the published report referenced by Blumsohn. And not just off by a bit -- we're talking a drastic difference here!
In addition, the Journal of Bone and Mineral Research (where the disputed paper was published) appears to have handled the situation quite poorly, as they continue to drag their feet on retracting the manuscript (see here and here) despite a good deal of evidence (here, for example) that the manuscript was fraudulent.
Spread the word, folks! This stinks on many levels and those involved should receive as much negative publicity as possible. Hopefully, added attention will lead P & G to start behaving responsibly. Blumsohn has voluminous documentation regarding this case at his site, which I think everyone should be reading regularly. Check out his latest post for a copy of his latest letter to P & G:
Scientific Misconduct Blog: Procter & Gamble - Let's take the high road
Wednesday, November 22, 2006
"The former chief pharmacist for the state Public Welfare Department, who earned extra income from sources that included two drug manufacturers, was charged Tuesday with crimes that carry potential prison time.
Steven J. Fiorello, of Palmyra, was fined more than $27,000 last year by the State Ethics Commission for using his position to get consulting work. He was arraigned Tuesday on criminal charges for the same activity.
"Pennsylvania law very clearly prohibits state officials from using their public positions for personal financial gain," said state Attorney General Tom Corbett. "Accepting illegal payments and then failing to report them is not only a conflict of interest, but also a violation of the public trust."
Fiorello was arraigned Tuesday on two felony counts of conflict of interest, which each carry a maximum five-year prison term and $10,000 fine, and misdemeanor counts of accepting honoraria and failing to disclose income on annual statements of financial interest.
Fiorello, 59, served as pharmacy director for the welfare department's Office of Mental Health and Substance Abuse Services for several years. He left state government and now works in the pharmacy industry as a consultant, his lawyer said. He already paid Ethics Commission fines totaling $27,269 in April 2005.
Fiorello allegedly accepted more than $10,000 for consulting work he did and trips he took between 1998 and 2003 for various companies, including the Pfizer and Janssen drug companies, according to court papers."
Nice work if you can find it, especially if you can get away with it, which is looking increasingly doubtful for Mr. Fiorello.
Reading more, I found that...
"Fiorello was employed as the Director of Pharmacy for the Pennsylvania Department of Public Welfare's Office of Mental Health and Substance Abuse Services. As part of his responsibilities, Fiorello served on a committee that decided which drugs would be used for mental health treatment in all state hospitals - decisions which guided more than $9 million in annual drug purchases by the Commonwealth."
And a third source said that:
"According to former investigator turned whistleblower, Allan Jones, PA taxpayers are saddled with an expensive drug treatment model known as PennMap, for the treatment of mentally ill persons in state care.
"This model is part of a large pharmaceutical marketing scheme designed to infiltrate public institutions and influence treatment practices," he explains, "Pennsylvania is paying tens of millions of dollars for patented drugs that have no proven advantage over cheaper generic drugs." [See here and here for examples on this point]
As part of the overall scheme, on July 27, 2001, Tom Ridge appointed Gerald Radke, an Eli Lilly Marketing Director, to head the PA Office of Mental Health and Substance Abuse. With Radke at the helm, PA Medicaid funded sales of Lilly’s Zyprexa rose from approximately $26.5 million in 2000 to $34.2 million in 2001, and reached $39.2 million in 2003. In state hospitals, hundreds of patients had their medications switched in the absence of medical need or indication, to comply with administrative decisions."
OK, the third source is an article by Evelyn Pringle, with whom I have some credibility issues (see here), but if her story is even close to accurate, then it looks like the money given to Fiorello by drug companies was a GREAT investment. Pay him a few measly thousand dollars and suddenly atypical antipsychotics are everywhere in the PA mental health system, which is of course worth millions.
Here's what I found on Walgreens.com. These are monthly costs.
Olanzapine (Zyprexa) 15mg: $554.98
Risperidone (Risperdal) 4mg: $331.09
Quetiapine (Seroquel) 500mg: $519.77
Aripiprazole (Abilify) 20mg: $518.99
Ziprasidone (Geodon) 100mg: $483.08
Haloperidol (Haldol) 5mg : $10.89
Perphenazine 16mg: $29.83
Looking like a lot more than a 10-fold difference in price, eh?
Now, let's add in benztropine to control for some of the side effects commonly seen on haloperidol or perphenazine.
Benztropine 2mg: $12.09
We get a total for Haldol + benztropine of $22.98 per month and for perphenazine + benztropine of $41.92. Thus, the cheapest new AP (Risperdal) is roughly eight times as pricey as perphenazine + benztropine. Most of the comparisons with Haldol suggest that the price difference, even including benztropine as an adjunctive medication, is at least 20-fold. Against perphenazine + benztropine, the difference for the other atypicals is roughly 11- to 13-fold.
The doses I used were gleaned from those used in clinical trials.
We know, however, that the atypical antipsychotics are not 10-20 times more effective than older meds; indeed, there is little reason to suspect they are much more effective at all in comparison to older meds, as can be seen here, here, here, and here.
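For anyone who wants to redo the arithmetic, here is a quick sketch of the ratio calculations using the Walgreens prices quoted above (illustrative back-of-the-envelope math, not a formal cost analysis):

```python
# Monthly costs as quoted above from Walgreens.com (late 2006)
atypicals = {
    "risperidone (Risperdal) 4mg": 331.09,
    "ziprasidone (Geodon) 100mg": 483.08,
    "aripiprazole (Abilify) 20mg": 518.99,
    "quetiapine (Seroquel) 500mg": 519.77,
    "olanzapine (Zyprexa) 15mg": 554.98,
}

benztropine = 12.09
haldol_combo = 10.89 + benztropine        # haloperidol + benztropine = $22.98
perphenazine_combo = 29.83 + benztropine  # perphenazine + benztropine = $41.92

# Price ratio of each atypical against the older-drug combinations
for name, cost in atypicals.items():
    print(f"{name}: {cost / haldol_combo:.1f}x Haldol combo, "
          f"{cost / perphenazine_combo:.1f}x perphenazine combo")
```

Risperidone comes out at about 8x the perphenazine combination and 14x the Haldol combination; the other atypicals run roughly 11x to 13x and 21x to 24x, respectively.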
Monday, November 20, 2006
This post (similar to a previous post) centers on a study for risperidone as an add-on to SSRI treatment for depression.
The current post focuses on the failure of some authors to disclose their conflicts of interest. When the advance online publication of the article is examined, the only author listing any financial support is lead author Mark Hyman Rapaport, who lists four grants and a chairmanship. Janssen funded the study according to an earlier abstract version of the study, so it is curious that Rapaport did not list Janssen as a financial supporter. Rapaport was not alone in his failure to disclose. Nemeroff (the last author) and Keller (fifth author) clearly had conflicting interests that should have been declared.
Let’s start with Nemeroff. He is the editor of the journal (Neuropsychopharmacology) in which this article appeared, so he should be familiar with the journal's conflict of interest policy, which states in part: “At the time of submission, each author must disclose any involvement, financial or otherwise, that might potentially bias their work. The information should be listed in the Acknowledgements that appear at the end of the manuscript and noted in the authors’ cover letter.” The policy is pretty clear – so does Nemeroff have a significant conflict of interest in this case?
In the Journal of Clinical Psychiatry Supplement 8 from 2005, the conflicts of interest section mentions that, among Nemeroff’s quite numerous funding sources, Nemeroff has received grant/research support from Janssen, is a consultant for Janssen, and is a member of the speakers bureau for (you guessed it) Janssen, which is the company marketing Risperdal. In the same supplement, which was derived from a “planning roundtable…supported by an educational grant from Janssen Medical Affairs,” Nemeroff penned a review article that reflected favorably upon risperidone, as well as some other drugs. So it’s pretty clear that there was a conflict of interest here – it’s just that editor Nemeroff did not enforce his journal’s policies upon himself. Of course, this is not the first time such behavior has occurred. You can read about a similar failure to enforce editorial policies involving Nemeroff here and here.
But wait, there’s more! Nemeroff actually violated another of his journal’s policies, the one about duplicate publication of data.
On the Neuropsychopharmacology author instructions page, right under Nemeroff’s name as editor of the journal, you can see the following: “Submission is a representation that neither the manuscript nor its data have been previously published (except in abstract) or are currently under consideration for publication.” Yet in the aforementioned 2005 Journal of Clinical Psychiatry Supplement 8, Nemeroff wrote no fewer than five paragraphs describing the risperidone add-on study’s data, which were later published in Neuropsychopharmacology. So the journal’s editorial policy is that study data should not have been published earlier except in abstract form, but Nemeroff wrote about the data at far greater length than an abstract, in a supplement paid for by Janssen, flouting his own journal’s policy on prior publication. This, of course, comes in addition to an egregious failure to disclose conflicts of interest.
What is the penalty for such behavior, one might ask? “An accusation that an Editor…has violated the conflict of interest policy shall be referred to the ACNP Ethics Committee for consideration and investigation. The Ethics Committee shall report its findings and recommendations to the Publications Committee and Council for action… an Editor…found guilty of violating the conflict of interest policy is subject to sanction, including forfeiture of the editorship.”
Don’t worry – Nemeroff is one step ahead of the game here – he chose to resign his editorship over the previous scandal involving his pimping of vagus nerve stimulation therapy, which you can feel free to read about here. No, Nemeroff did not state that he was leaving the editor position as a result of the VNS debacle, but the timing seems like more than a coincidence.
To summarize briefly, Nemeroff had a blatant conflict of interest which he did not declare. He is also the editor of the journal in which the article appeared where he did not disclose the COI. In addition, he ignored his journal’s prohibition on prior publication of data. As the editor, he should obviously know much better. Indeed, it is difficult to believe that this was an oversight. It appears that Nemeroff was playing the role of marketer for risperidone as opposed to carrying out his duties as an editor.
How about Martin Keller? In that same 2005 supplement of the Journal of Clinical Psychiatry mentioned above, Keller is listed as having received honoraria from Janssen and as being an advisory board member for Janssen. Keep in mind that whatever work he conducted at the “planning roundtable” upon which the supplement was based was also funded by Janssen. Yet no mention of any financial support from Janssen is provided in the Neuropsychopharmacology article.
Apparently, perhaps due to Nemeroff’s earlier brush with the spotlight regarding his marketing of VNS therapy in the journal he edits [an article in which blatant conflicts of interest were not disclosed], the authors thought better of the conflict of interest issue. A corrigendum (correction) listing disclosures for Nemeroff, Keller, and Rapaport appears in the November print edition of Neuropsychopharmacology. But if you obtain the article through online access (as most readers likely do), you won’t find the correction, because it is not included in the pages of the article itself. Eventually the correction will be picked up on Medline, but many readers will never notice it.
Add the failure to disclose conflicts of interest to the shifting authorship line mentioned earlier and you can see why I am feeling a little skeptical. Of course, given some of Nemeroff’s past ethical issues (here and here), this is not entirely surprising. The last chapter in this tale, regarding the risperidone augmentation study’s data analysis will be told shortly.
..."Looking at the new report, one sees in the Acknowledgements that the first author was supported by Corcept. One also sees that a co-author, E. Ronald de Kloet, failed to disclose his relationship to the company: he is a member of Corcept’s scientific advisory board and, unless he has sold any, the owner of 60,000 shares of Corcept stock. One also sees that this basic science article is careful to follow the company’s marketing message and branding language on the putative efficacy of mifepristone for PMD. For instance, it states, “The glucocorticoid receptor antagonist mifepristone has been shown to rapidly and effectively ameliorate symptoms of psychotic major depression.” These basic scientists also stated, “recent clinical studies have shown that the glucocorticoid-receptor (GR) antagonist mifepristone relieves symptoms of psychotic depression after a remarkably brief treatment period of 4 or 8 days.” None of the cited studies shows anything of the sort. We then read, “… similarly to its clinical efficacy, mifepristone’s effects on adult neurogenesis are rapid and positive, and may therefore be important for its mechanism of action.”
What is the deal with "similarly to its clinical efficacy" -- there is no proven clinical efficacy. The post goes on to discuss how the article makes for great marketing copy (which was likely its intent all along). A basic scientist can play marketing waterboy as well as the clinical trials folks!
Link to the excellent HC Renewal post here. More on mifepristone here and here.
Over at Furious Seasons, you’ll find an excellent post about the clinical trial mania (pardon the bad pun) regarding Seroquel. Basically, Philip Dawdy went to the government’s clinical trials site and saw what was happening with Seroquel trials. The drug industry always talks about how much money it spends on research and development, yet if this is what counts as R & D, it is no wonder many people are not swayed by the industry’s claim that R & D consistently yields great new medications. Why develop new meds when you can just market existing drugs ad nauseam? Here are some of the trials that he found…
I’d write more, but I’m not one to steal Dawdy’s well-deserved thunder on this one. Please read his full post. You’ll be glad you did. Since we’re talking about Seroquel, anyone want a sponsored editorial with that?
Sunday, November 19, 2006
Currently, Lamictal is undergoing phase III trials as an add-on treatment for schizophrenia. Here’s what a recent review had to say about research done to this point on lamotrigine as an add-on treatment…
“We found five relevant trials (total n=537), but no usable data on service outcomes, general functioning, behaviour, engagement with services, satisfaction with treatment or economic outcomes. Overall, reporting of data was poor. Those data we were able to use suggested that equal proportions of people allocated lamotrigine or placebo had no global response (n=208, 1 RCT, RR 1.06 CI 0.73 to 1.54). There was no significant difference between groups in the proportions of people whose mental state did not improve (n=297, 3 RCTs, RR 1.26 CI 0.81 to 1.97). There was, however, a significant reduction in the PANSS total scores (n=67, 2 RCTs, WMD -16.88 CI -25.18 to -8.57, p=0.0001), positive symptom sub-scale scores (n=65, 2 RCTs, WMD -5.10 CI -8.86 to -1.34) and negative symptom sub-scale scores (n=67, 2 RCTs, WMD -5.25, CI -7.07 to -3.43). Most cognitive measures showed no differences (n=329, 2 RCTs, RR of not attaining BACS composite score of 0.5: 1.10, CI 0.59 to 2.04).”
So the PANSS shows change, which is good, but on the other hand there was no usable data on a number of important variables. This brings up the issue of what is clinically relevant – what is the relationship between PANSS scores and real-life patient functioning? I’m not claiming to have the answer. If PANSS scores are changing yet mental state (however it was measured) is not also improving, what's going on? The question is important – if we have nothing but a single measure indicating improvement, does this mean a drug really benefits patients? Alan Kazdin discussed this point much more brilliantly than I could in an article in the American Psychologist recently. What made this of particular concern to me was the fact that no usable data were reported, across five trials, on a variety of other aspects of patient functioning – was this just a lack of curiosity on the part of investigators or were data suppressed? I’ve not dug deeply on this one, but it is a question worth thinking about.
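To make the review's summary figures above a bit more concrete: a result like "RR 1.06 CI 0.73 to 1.54" means the ratio of non-response risks between the lamotrigine and placebo arms is 1.06, with a confidence interval that straddles 1.0 (i.e., no significant difference). Here is a quick sketch of the standard log-scale calculation. The review reports only summary statistics, so the event counts below are invented purely for illustration:

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio and 95% CI via the usual log(RR) standard-error method."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 53/104 non-responders on drug vs 50/104 on placebo
rr, lo, hi = risk_ratio_ci(53, 104, 50, 104)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # → RR = 1.06, 95% CI 0.81 to 1.40
```

Since the interval contains 1.0, a reviewer would call this difference non-significant, which is exactly how the "no global response" outcome reads.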
Friday, November 17, 2006
Op-ed contributor Andrew Solomon’s words are in quotes and italics followed by my thoughts. I would have liked to post all of his piece, but I don’t think the NYT would approve (copyright issues), so I’ll try to be selective.
“Depression is the leading cause of disability worldwide, according to the World Health Organization. It costs more in treatment and lost productivity than anything but heart disease.”
So far, so good.
“Despite medical advances in the last 20 years that have greatly improved our ability to help those who suffer from depression, we lack an effective system for administering care.”
I wonder which medical advances he is speaking of. Maybe SSRIs? Or Effexor? Or Cymbalta? In any case, there is no evidence that these new treatments have brought more than perhaps a minimal advantage over treatments that existed prior to 1986 (see here and here).
“Only a very small percentage of depressives who seek help receive appropriate treatment for their condition. Research often stalls short of being translated into useful medicine. Depressives continue to be stigmatized, which makes their lives even more difficult and lonely.”
Um, I think that if Mr. Solomon is so concerned about people with depression avoiding stigmatization, he may want to stop referring to them as “depressives,” as the term indicates that he is defining people with a label, and a rather negative one at that. I can assure you that nobody who is dealing with depression wants to be called a ‘depressive.’
“These problems are similar to those cancer patients once faced, and the best way to address them might be similar as well. We need a network of depression centers, much like the cancer centers established in the 1970s.”
“Following this model, the National Institute of Mental Health should coordinate and subsidize a national network of depression centers, ideally based at research universities with good hospitals and departments devoted to the subject.”
“Among the thousands of depressed people I have met with, the majority have sought treatment but feel that they are not getting good care. Many of them have been prescribed antidepressants by family doctors who lack training in psychiatry and have conducted only cursory interviews before rendering their diagnoses. Antidepressants vary in their chemistry and effects; and human brains vary as much as human minds. To treat the most complicated organ in the body appropriately demands considerable expertise.”
Apparently Mr. Solomon has never heard of psychotherapy. In case he’s wondering, it has a pretty good track record with ‘depressives,’ better than the meds he seems to be touting.
“(Full disclosure: my father is the chief executive of a pharmaceutical company that manufactures antidepressants.)”
Well, that clears things up a bit on my end.
“Before the cancer centers came around, cancer was as taboo as depression is now. But as antibiotics and vaccines for other illnesses lengthened life expectancy, cancer became more pervasive and less shameful. Depression, too, is becoming more widespread and more frequently diagnosed. Depression and bipolar illness will affect some 20 percent of Americans during their lives, and yet the stigma endures. People often come up to me after lectures to whisper about their affliction, as though everyone else in the room weren’t grappling with precisely the same thing.
It is neither wise nor feasible for a large proportion of the population to be trying to keep a secret. A national network that helped to medicalize depression in the public imagination would reduce sufferers’ shame. The very waiting rooms of depression centers would provide incontrovertible proof of the ubiquity of the illness and ease the isolation of sufferers. Within the centers, patients would find themselves the focus of an elite community of insight and support.”
Yeah, we should medicalize it! Call it a disease! I can see it now: someone is going to say just like a person with diabetes needs insulin, ‘depressives’ need serotonin. Oh, wait, that’s been done. It’s played – there is clearly no reliable biological marker for depression, but selling depression as a “disease” sure sells those pills – just ask Mr. Solomon’s father, the drug company executive (Forest Labs)!
“…As it is established that these mental illnesses are not character defects, but instead can be characterized in terms of brain symptoms, the false distinctions between them and cancer or heart disease will become impossible to sustain…”
“We’ve made stellar progress in treating mental illness since the Prozac revolution but there is a catastrophic divide between research and practice. We must come up with a seamless way to support scientific progress and to administer the treatments we have, in order ultimately to alleviate as much suffering as possible.”
Indeed, if we are going to fix the gap between science and practice, I’d suggest 1) how about less polypharmacy (doling out a bunch of meds simultaneously), which has a very meager evidence base, and 2) how about psychotherapy first and maybe meds if psychotherapy does not work.
Mr. Solomon is clearly far out of touch with the evidence base, yet he writes books and is featured in the New York Times. That makes me feel like a “depressive.” Full text of his writing here, but it will not be available free online for long. His book has received rave reviews, though I’ve not read it, and I want to be clear that my comments apply only to his writing in the NYT today. One more thing: “Prozac revolution” – did Mr. Solomon read Listening to Prozac one too many times?
SEC Filing Below...
Item 3.01 Notice of Delisting or Failure to Satisfy a Continued Listing Rule or Standard; Transfer of Listing.

Nasdaq has notified Corcept Therapeutics Incorporated that the Company is not in compliance with continuous listing standards for inclusion on the Nasdaq Global Market because (i) pursuant to Nasdaq Marketplace Rule 4450(a)(5), the Company’s price per share for its Common Stock had closed below the minimum $1.00 per share requirement for 30 consecutive business days, and (ii) pursuant to Marketplace Rule 4450(a)(3), the Company’s stockholders’ equity reported on its Form 10-Q for the period ending September 30, 2006 did not comply with the minimum $10 million requirement.
The Nasdaq notifications were provided in two letters dated November 10, 2006. On November 16, 2006 the Nasdaq staff determined pursuant to Marketplace Rule 4814(b) to adjust the period of time required for Corcept to disclose the receipt of the two letters to no later than the close of business on November 22, 2006.
Pursuant to Marketplace Rule 4450(e)(2), the company has a 180 day grace period to regain compliance with Nasdaq’s minimum bid price requirement. In order to regain compliance, the bid price of the Company’s common stock must close at $1.00 or more per share for a minimum of 10 consecutive business days anytime before May 9, 2007.
Nasdaq has advised the Company that under Marketplace Rule 4803, the Company has until December 4, 2006 to provide Nasdaq a specific plan to achieve and sustain compliance with the minimum stockholders’ equity standard.
But seriously, the problems continue to stack up. In an article published online in Neuropsychopharmacology, a journal at which Nemeroff is the editor, the following occurred:
1) A sizable authorship switch
2) Failure to disclose conflicts of interest
3) Bobbing and weaving on data analyses
This centers on a study of risperidone as an add-on to SSRI treatment for depression. The study had three phases, as follows:
1) Participants who had not responded to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro) for > six weeks were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks
2) Patients who failed to respond to citalopram were then assigned to open label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks
3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or citalopram + placebo and the differences between risperidone and placebo for depressive relapse were examined.
This post focuses solely on an authorship switch. In 2004, results from this study were presented in abstract form. In this form, the authors read as follows:
Nemeroff, Gharabawi, Canuso, Mahmoud, Loescher, Turkoz, Rapaport, Gharabawi. You might think that there were two different Gharabawis, but they were both listed as George M Gharabawi, so he’s either the 2nd or 8th author – someone made an obvious typo here.
Who’s on the final published manuscript in Neuropsychopharmacology? In order: Rapaport, Gharabawi, Canuso, Mahmoud, Keller, Bossie, Turkoz, Lasser, Loescher, Bouhours, Dunbar, Nemeroff.
As if by magic, Nemeroff goes from first to last author. Rapaport moves from seventh author to first, Turkoz gets bumped down a couple spots. Keller appeared out of thin air. What did he do to get on the study? Keller is credited with “study concept and design,” which I would deem impossible since, if he really conceived and designed the study, he would have appeared as an author on the earlier abstract. Yet he is listed fourth on the list of people who designed the study. He is also credited, along with all of the authors, with “analysis and interpretation of the data” and critical revision of the manuscript for “important intellectual content.” Is it possible that he did a great job of helping to revise the manuscript? I suppose, but it seems there were plenty of other people who were also involved with the writing of the paper. Note that Keller was not credited with “drafting of the manuscript.” So Keller did not recruit participants, provided no administrative support, did not provide statistical expertise, and did not draft the manuscript, but apparently helped design the study after it was completed! Very impressive indeed.
But wait, there’s more! In a press release, it is stated that “Dr. Mark Hyman Rapaport was the study’s principal investigator. Co-principal investigators were Charles B. Nemeroff, Ph.D., M.D. and Martin B. Keller, M.D.” So Keller, who played no major role in designing the study or running patients, was a co-principal investigator. Remember, he would have been listed as an author on the initial abstract describing the study results if he had helped design the study.
What am I implying? There’s no doubt that Keller is a big name in psychiatry. He has, according to his CV from August of 2006, over 300 journal publications to go with dozens of book chapters. So it certainly adds credibility to the study to tack him on as an author. As for Nemeroff moving from 1st to last, that’s interesting. My thought is that with an authorship list of 12, nobody is going to remember authors 6-11, so tacking him on as last author makes the name stand out more. Just speculation on my part. And Rapaport making the jump from last to first? Well, I think that, again, we’re talking about name recognition here. Rapaport is likewise a pretty big name. Now, mind you, I’m not implying at all that Rapaport did not have a major role; indeed, the author contributions section of the paper indicates that he did quite a bit of work on the project and he absolutely appears to deserve first author credit.
There are varying standards for the ordering of authorship. In some disciplines, it just goes in descending order (which makes the most sense) – he/she who contributed most gets first authorship while he/she who contributed least gets last authorship. In others, the lab supervisor, who may have done very little on the study, gets last authorship or sometimes first authorship. In any case, the first author and the last stick out most in memory and I’m sure it doesn’t hurt to throw in a bigwig like Keller in the middle of the mix. I’m guessing it would have been better publicity to move Keller higher on the list, but there’s only so much credit a guy can receive for apparently doing magic (designing the study after it was completed) and making comments on the manuscript. Of course, inappropriate authorship is widespread, so these results come as no surprise.
More on other issues with the study later. I assure you that the authorship switch is the least of the study’s problems.
"The report documented nearly $36 million in illegal Medicare and Medicaid payments for procedures on hundreds of patients, in exchange for payments of $5.7 million to physicians since 2002 in exchange for sending their heart patients to UMDNJ's University Hospital. 'Unfortunately, this scheme reached well into all levels of the hospital and University Central Administration, who were complicit first in forming and expediting this illegal plan, and later in covering it up,' said Stern in his report, who said the illegal activity 'persists to this day…”
Sound interesting? Check out the link to the full story here.
Thursday, November 16, 2006
“Between 2000 and 2003, researchers evaluated approximately 200 participants at 28 centers in the
“Patients who were switched to placebo showed a significantly higher rate of depression recurrence (65 percent), compared to those who stayed on escitalopram (27 percent),” said Kornstein. “This was true even though the patients showed a full resolution of their depression at the start of maintenance treatment.” The medication was found to be safe and well tolerated throughout the study, she said.
“These findings indicate the importance of maintenance therapy for patients with recurrent major depressive disorder beyond four to six months of improvement, even if a patient’s depressive symptoms appear to be resolved,” she said.
This work was funded by Forest Research Institute.”
Your first impression may be: Lexapro (escitalopram) for life! Let’s look at the flip side of the coin, shall we?
First, discontinuing medication for one group and putting them on placebo sets up an inflated rate of depression recurrence due to discontinuation/withdrawal effects from the medication.
Second, and more importantly, a meta-analysis by De Maat, Dekker, Schoevers, & de Jonghe regarding long-term treatment outcomes for psychotherapy and medication found that the longer term relapse rate for medication was 57% compared to 24% for psychotherapy. In the studies they examined, both medication and psychotherapy were provided in the short-term, then discontinued, and the long-term results were then analyzed.
I suppose it may be true that if you continue people on medication for a longer time, they may maintain lower symptom levels. But, when the medication is taken away, look what happens – likely, it’s relapse.
So is it cost-effective to keep people on so-called maintenance medication therapy indefinitely as compared to providing psychotherapy, which provides superior long-term results without the need for maintenance treatment? I think the answer is an obvious no.
In addition, there is precious little evidence regarding the long-term effects of antidepressants, as can be seen in an interesting article here. Despite the widespread long-term use of antidepressants, surprisingly little is known about their impact and what happens upon discontinuation of long-term treatment.
To summarize: We know antidepressants fare much worse than psychotherapy when both treatments are provided short-term then discontinued and outcomes are examined over the long-term. We don’t know the long-term effects of keeping people on indefinite maintenance pharmacotherapy. I have a feeling some people will make a big deal about Kornstein’s findings; please refer them to this post.
Hat tip to the anonymous reader for informing me about the July sponsored editorial.
Wednesday, November 15, 2006
I just saw a brief headline that...
"Corcept Therapeutics announces $3 mln private equity financing Co announces that it has entered into a definitive agreement with certain accredited investors for the private placement of 3 mln shares of its common stock at a price of $1.00 per share. Pursuant to the agreement, the investors, who are led by Paperboy Ventures, have irrevocably committed to purchase the shares."
There are only three logical explanations:
1) These people are looking for a tax write-off when Corcept collapses due to the poor performance of its Corlux (mifepristone/RU-486) product in treating psychotic depression. When the FDA determines the drug is not approvable, they’ll drop a tax bracket or two.
2) These investors have been snowed by someone; they don't know what they're doing. They haven't read posts such as this, this, and this. They don't know that the efficacy of Corlux is nothing to write home about, as it appears to help little with depression and not particularly well with psychotic symptoms either.
3) They know something that I don't. Maybe the FDA, as with vagus nerve stimulation, is going to approve Corlux regardless of its actual efficacy.
"A Lowell couple is suing pharmaceutical giant AstraZeneca in federal court for failing to disclose the true dangers of its popular anti-psychotic drug Seroquel...
"The lawsuit, filed last week in U.S. District Court in Hammond, says Randall Waugaman developed "diabetes and/or diabetes-related injuries" while taking the prescription drug.
Jim Minnick, a spokesman for AstraZeneca, declined to comment on the Waugamans' case, but said in general the company is disputing the claims in the swell of litigation filed in federal courts across the country.
"The safety of patients who use our medications is our highest priority," Minnick said Tuesday. "(Seroquel) is a safe and effective medication when used as directed as a prescribed medication."
Seroquel was approved by the U.S. Food and Drug Administration in 1997. A promotional news release says Seroquel is the most popular "atypical antipsychotic" prescribed drug in the United States, with global sales of almost $2.8 billion last year."
"Like many other litigants, the Waugamans claim that AstraZeneca covered up the results of its own studies on the drug that found it also could affect weight gain and hyperglycemia, potentially causing diabetes."
Well, well, well. Although it's fairly clear that Seroquel isn't a diabetes/weight gain inducer to the extent of Zyprexa, there is indeed evidence that Seroquel is often not good for one's weight (as can be seen here and here, among several others). Add this to additional lawsuits regarding the safety and marketing of Seroquel, and it makes one wonder whether Seroquel will continue to be a cash cow.
Tuesday, November 14, 2006
In the October Journal of Clinical Psychiatry appears a “sponsored editorial.” Last time I checked, editorials often reflected the informed opinions of the editorial board or perhaps a knowledgeable guest. But, no, this editorial reflects the opinion of AstraZeneca, maker of Seroquel.
You can see what it looks like to the right. How far do we want to blur the line between marketing and science? If the claims made in this advertorial are true, then perhaps someone should write them up in more detail and submit an article on the topic, rather than giving the impression that the editorial board approves of this non-peer-reviewed message. Maybe the editors of the Journal of Clinical Psychiatry no longer have standards, or maybe it was an oversight. In any case, it is well beyond the standards of acceptable scientific journal editing to allow an advertisement to be labeled as an editorial.
... says US Deputy Health Secretary Alex Azar! From the Guardian...
"The White House is lobbying British ministers to allow the world's main drug companies unrestricted access to the NHS as part of a package of free market reforms for the service. The
He made it clear that he was also in favour of the drug companies being allowed to advertise directly to patients. At the moment they may only advertise to doctors.
He also wanted to share the
"How are we making sure that we don't take steps on cost containment that are short-sighted and prevent the investment in long-term biomedical research and development and innovation, so that when my kids are senior citizens we have the next generation and next, next, next generation of drugs?"
"The White House arguments will increase the mounting pressure on Nice, which is regularly castigated by patient groups and drug companies when it rejects a new medicine from use in the NHS on cost grounds."
""In all of our systems it is so easy to make the decision to cut costs today by going after drug prices, and to not focus on what will be the impact on long-term innovation," he [Azar] said.
My View: Yeah, I am sure that the Brits would LOVE a Medicare boondoggle like ours! There is no doubt that seniors across the UK are begging for an Americanized system of health care.
Then Azar has the audacity to say that these "market reforms" will cause price competition? Find me one iota of evidence to support such a bald-faced lie. The American government decided, nah, we don't need to negotiate prices -- we'll pay whatever y'all good folks in the drug industry would like us to pay. Despite all the free market rhetoric, this is the kind of thing that would make Adam Smith turn over in his grave! In a free market, prices are determined through negotiation, not by fiat.
Arguably, my favorite statement from Azar was the time-honored scare tactic of, to paraphrase, "if drug prices drop, how will they ever have enough money to conduct research to develop new products?" I'd buy that if three things were true:
1) If drug company research was devoted to truly discovering new drugs, rather than copycat me-too meds that add no benefit to patients
2) If drug company research was NOT frequently devoted to conducting trials that simply showed an additional indication for an existing drug in an already crowded market. Risperidone for depression is an example of such (more on that later) -- how many drug treatments do we really need for depression? Or, how about Seroquel for anxiety?
3) If drug company cash was not devoted so highly to marketing as opposed to research
Oh, and as for the new drugs save lives argument, please see this excellent post at the incomparable Pharma Marketing Blog.
Here’s what Healy says: “… many, including the regulators who approve the drugs on the basis of such trials, regard antidepressant trials as assay systems aimed at demonstrating a treatment signal from which a presumption of efficacy can be drawn, rather [than] as efficacy trials… If these trials are simply assay systems, it can be reasonable to discount and leave unpublished evidence from failed trials in which an active treatment fails to distinguish from placebo, on the basis that the trial lacked assay sensitivity... Alternatively, if we regard antidepressant trials as efficacy trials then both those demonstrating a treatment effect and those not demonstrating a treatment effect should be thrown into the meta-analytic hopper, and if this is done the degree of superiority of active treatment may be little more than 5%, or a mean of 2 points on the Hamilton Rating Scale for Depression scores, or no greater than it was shown to be in paediatric antidepressant trials (Khan et al., 2000; Kirsch et al, 2002).”
Counting trials as “assays” is ridiculous. Patently absurd, in fact. To do so is to endorse the idea that only one’s successes count. This is akin to boxer A fighting boxer B 10 times. In 7 fights, boxer A and boxer B fight to a draw – the judges cannot reach a decision. In two fights, boxer A wins by knockout, and in a third, boxer A wins via a split decision of the judges. We would logically conclude that boxer A is a bit better than boxer B, but clearly not superior by a large margin. However, under the “assay sensitivity” rule, we’d count only the times when boxer A won and conclude that he is a far superior fighter to boxer B, which is clearly a mockery of the evidence based on their ten bouts.
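The boxing arithmetic is easy to check. A minimal sketch, with made-up effect sizes chosen to mirror the analogy (a draw counts as 0.0, a knockout as 1.0, a split decision as 0.5):

```python
# Ten hypothetical "bouts" (trial effect sizes for treatment A over B):
# 7 draws, 2 knockouts, 1 split decision.
effects = [0.0] * 7 + [1.0, 1.0, 0.5]

pooled_all = sum(effects) / len(effects)                 # count every bout
positive_only = [e for e in effects if e > 0]
pooled_assay = sum(positive_only) / len(positive_only)   # "assay sensitivity" rule

print(f"All trials pooled:    {pooled_all:.2f}")    # 0.25 -> a modest edge
print(f"Positive trials only: {pooled_assay:.2f}")  # 0.83 -> looks dominant
```

Dropping the seven null trials more than triples the apparent effect, which is precisely the bias the "assay sensitivity" rule builds in.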
In fact, a meta-analysis reached the same conclusion – that including only studies with “assay sensitivity” results in biased conclusions. The authors state, “Unless evidence is gathered to support the hypothesis that using the AS method reduces bias [of which there is none currently], meta-analysts should make quality judgments that are based on study methods, and that are independent of outcome.”
This essentially means that existing meta-analyses of antidepressant efficacy are biased, because, with the exception of the few meta-analyses that included unpublished studies, meta-analyses have relied on only published studies (which, almost by definition, yielded at least some significant advantage for the drug).
More from Healy:
“From the RCT data cited above, it appears that when people improve during antidepressant trials, 80-90% of the response can be attributed to the natural history of the disorder, or to the effect of seeking help, or to the benefit of any lifestyle advice or problem-solving offered by the clinician, or to what has been called countertransference or related aspects of the therapeutic encounter.”
I believe what Dr. Healy is referring to is what we in psychology call the “common factors,” including the therapeutic relationship. I recall that 80-82% of the antidepressant effect was accounted for by placebo in Kirsch’s work, and that when one looks at active placebos (placebos with side effects similar to antidepressants, which then keeps participants blind to their treatment condition more consistently), the number gets closer to 90% or above. Moncrieff, Wessely, & Hardy conducted a meta-analysis on the active placebo topic that bears this out.
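The arithmetic behind an "80% of the response is placebo" figure is simple. The point values below are rounded illustrations in the spirit of the Kirsch-style findings just mentioned, not numbers pulled from any specific table:

```python
# Illustrative values only: mean Hamilton-scale improvement in each arm
drug_improvement = 10.0     # points improved on antidepressant
placebo_improvement = 8.0   # points improved on placebo

# The drug-specific effect is what remains after subtracting placebo response
drug_specific = drug_improvement - placebo_improvement
share_nonspecific = placebo_improvement / drug_improvement

print(f"Drug-specific effect: {drug_specific:.1f} points")        # 2.0 points
print(f"Share duplicated by placebo: {share_nonspecific:.0%}")    # 80%
```

Under these illustrative numbers, only 2 Hamilton points separate drug from placebo, which is in the same ballpark as the "mean of 2 points" figure Healy cites above.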
So if the active effect of the drug (i.e., the effect we can attribute to the drug over placebo and other factors) is very small, what should we do?
Healy opines as follows:
"But what should happen if the combined non-drug components contribute four times more of the eventual response to treatment in standard cases than does the active drug? If the money and culture are to follow the evidence in this scenario, where should they go? One possibility is to modify the APA statement to say that psychiatrists rather than antidepressants can save lives. For example, we might expect lives to be saved in the case of clinical practice, informed by the evidence, that restricts antidepressant use to cases in which it is clear that the condition has not resolved of its own accord, efforts at problem-solving have not led to a resolution, and hazards such as suicide arising from the severity of the condition have shifted the risk–benefit ratio in favour of a closely monitored drug intervention with informed patients, rather than non-intervention. Aside from the scientific and clinical merits of this position, there is a political case for reading the data this way, in that if there is no evidence that antidepressants pose risks [which is of course untrue but is often stated by "opinion leaders"] and if antidepressants rather than physicians save lives [likewise untrue but believed by many], then in a brave new world in which healthcare is being segmented, it is not difficult to foresee a future in which depression screening and treatment might be undertaken by non-medical personnel."
If I'm understanding him correctly, then I don't think he could be more correct. The psychiatrist him/herself accounts for a significant part of treatment outcome. Poor interpersonal skills? Can't form solid relationships with your patients? My bet is that your outcomes are poor, regardless of the pharmacological regimens you employ. In both psychotherapy and pharmacotherapy, the therapist/physician influences outcome regardless of the treatment provided. Instead of focusing on the type of antidepressant, which we know does not make much of a difference in influencing outcomes, we should be figuring out what therapist behaviors and/or personality traits are related to good outcomes -- there's more action there than sifting through a bunch of antidepressants which yield little benefit over a placebo in any case.
But figuring out what types of therapists are effective does not reward shareholders, so that research will take decades to conduct (who's going to fund it?). Meanwhile, clinical trials of medication will plug along in a direction where we can be assured that the sponsor's desired outcomes will be found (except when the pesky government gets involved in research, as with CUtLASS 1), while patients in the real world benefit little to not at all from the results. Creating yet more drugs that benefit shareholders, corporate executives, and allied academic researchers, yet fail to yield any benefit to patients over existing regimens – the cycle continues.