Showing posts with label ARISE-RD study.

Wednesday, January 02, 2008

Risperdal for Depression Study Hammered Yet Again

A letter to the editor in the journal Neuropsychopharmacology has slapped the ARISE-RD study, which examined Risperdal as a treatment for depression. I have written about this study on several occasions. The letter to the editor notes many of the same concerns that I have discussed on my site, including: 1) the study reported data that had been previously published, a violation of journal policy, and 2) a claim regarding the drug's efficacy was withdrawn because the statistics were done incorrectly.

In addition, the letter (by Bernard Carroll) notes that data regarding weight gain are not reported in full, a troubling omission given that risperidone was apparently related to more weight gain than placebo. This borderline significant to statistically significant difference (depending on what analysis is used) was reported in a prior iteration of the study, but not in the final version as published in the journal.

Please see several prior posts regarding this study (1, 2, 3, 4, 5). It's a doozy. Mark Rapaport, the lead author of the study, in his reply to Carroll's letter, wrote the following:
The paper repeatedly states in Abstract, Methods and in Discussion that continuation of risperidone augmentation therapy was not more beneficial than placebo, and hence the working hypothesis was disproven...

I would like to thank the reviewers and the editors of Neuropsychopharmacology for having the courage to allow us to publish this negative finding.
A couple of things. First, does it really take all that much courage to publish a negative finding? Should a Nobel Prize or a Bronze Star be awarded? There should be little honor attached -- to paraphrase Chris Rock: Why should you get mega-credit for doing things you're supposed to do? Oooh, you published a negative finding. What do you want, a cookie? You're supposed to publish a negative finding, you low-expectation-havin' "independent scientist." The fact that somebody thinks props should be doled out because a negative finding was published shows how sad-sack the system has become.

To top it off, the study, as originally published, hardly painted its findings as negative, as can be seen in my aforementioned links to the study. Yes, there were a couple of small caveats about efficacy in the paper, but take a look at a snippet from the press release that accompanied the study:
In the first large-scale study of its kind, researchers at Cedars-Sinai found that people suffering from resistant major depressive disorder who don’t respond to standard antidepressants can benefit when the drug therapy is augmented by a broad spectrum psychotropic agent, even when treated for a brief period of time.

Does that sound like a "negative finding" to you? If I am following this correctly, it went something like this: The study supports the efficacy of Risperdal for depression until a number of problems are found with the study, at which point the lead author indicates that they never said the findings were positive. I'm a little confused.

I hope to never again write about this study, as I feel like the proverbial dead horse-beating has begun.

Oh, and Happy New Year. I expect to be writing about similar stories throughout the year because while the names may change, the storyline remains the same.

Thursday, May 03, 2007

Uh-Oh Chuck, They STILL Out to Get Us, Man

The ARISE-RD study, which examined the addition of Risperdal (risperidone) as an add-on treatment for persons who were not responding well to antidepressant treatment, has been discussed several times on this site. I had some suspicions about this study, one of which was recently validated. If you are already familiar with the background, please feel free to skip to the bold heading “Change in Findings.”

Background. The study had the following phases:
1) Participants who had not responded to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro) for greater than six weeks were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks
2) Patients who failed to respond to citalopram were then assigned to open label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks
3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or citalopram + placebo and the differences between risperidone and placebo for depressive relapse were examined.

Please read the linked posts for more detail on the following…

Conflicts of Interest. Nearly all of the authors failed to disclose their conflicts of interest. One of the authors who failed to disclose relevant conflicts was Charles Nemeroff, who was also the editor of the journal (Neuropsychopharmacology) in which the study was published, so he cannot claim ignorance of the rules. In the ARISE-RD study, Nemeroff was also republishing data he had previously published, which is a no-no. Read the linked post for more details.

Authorship. The authorship was switched around as the study went from earlier abstract form to final copy. Especially curious was the addition of key opinion leader Martin Keller to the authorship line. Why switch authors? My take is that the more big names one can slap on a study, the thicker its veneer of academic credibility. If one follows the trail of this study, one would have to believe that Keller designed the study after it was already completed – something is fishy here… Read both linked posts (1 and 2) for background.

Statistical legerdemain was at work. The authors, in an earlier report, declared the measures they would be investigating to assess the efficacy of treatment. Several of these measures were reported incompletely or not at all in the final published version of the paper. This looks a lot like burying negative data. In addition, some of their analyses seemed to yield results that could best be described as magical. Read the linked post for background on the statistical issues.

Change in Findings. I speculated earlier that their finding that risperidone warded off depression to a significantly greater extent than placebo for patients who did not respond at all to initial antidepressant treatment was bogus. Well, the authors just published a brief corrigendum in Neuropsychopharmacology in which they state that

Following the publication of this article, the authors noted that in the abstract and in the next to last paragraph of the results section, a P-value for part of one of the post hoc analyses was incorrectly reported. A significant P-value was reported for both the difference in time to relapse and for relapse rates in a subgroup of patients fully non-responsive to citalopram monotherapy. Although the P-value for time to relapse was correctly reported, the correct P-value for the comparison of relapse rates is not significant (P=0.4; CMH test). This change does not alter the major findings of the study nor any of the conclusions of the report. We appreciate the assistance of a diligent reader in identifying this error.

First off, since I noted in a prior post that their original test looked suspiciously wrong, I suppose that I may have been the diligent reader. If so, you’re kindly welcome. If it was someone else who brought it to their attention, then thanks for doing so.

What they are basically saying, to the statistically uninitiated, is that they reported that risperidone appeared to be effective for a group of people but that, in fact, their analysis was wrong. The analysis, done correctly, shows that risperidone was not effective in preventing the return of depression in persons who…

(a) initially showed no response to antidepressant treatment, and
(b) whose depression improved while taking risperidone as an add-on treatment and then
(c) took risperidone for six months

…in comparison with people who were switched to a placebo after showing improvement in symptoms (b). In other words, risperidone did not prevent relapse into depression.

In the abstract of their paper, it is stated that

Open-label risperidone augmentation substantially enhanced response in treatment-resistant patients.

Later in their paper, it is stated that

Our secondary analysis revealed that patients who were least responsive to citalopram monotherapy may be those most likely to benefit from continuation therapy with risperidone.

Great – except that this is the result they just retracted. In other words, if you showed no response at all initially to the antidepressant, then whether you were allotted to receive a placebo or Risperdal in the final study phase made no difference; you were equally likely to experience a relapse of depression. So, despite their claim to the contrary, this does, in fact, change one of their major conclusions.

Was a Stork Involved? Where do incorrect findings come from? Two sources, generally. One is from Honest Mistake-Ville. Second is from corporate headquarters. Given the large number of authors on this paper, it seems pretty odd to me that nobody would have caught the error. I have no problem with honest mistakes being made, but given the large number of other issues surrounding authorship (1, 2), failure to disclose conflicts of interest, and statistical/data reporting issues, I’m very suspicious. When you take this latest finding away, look what happens. Here are the main findings from the abstract that compare risperidone to placebo, quoted directly, edited to show the most recent correction…

Median time to relapse was 102 days with risperidone augmentation and 85 days with placebo (NS); relapse rates were 53.3% and 54.6%, respectively. In a post-hoc analysis of patients fully nonresponsive to citalopram monotherapy, median time to relapse was 97 days with risperidone augmentation and 56 with placebo (p=0.05); relapse rates were 56.1% and 64.1%, respectively (p≤0.05) (p=.4).

So risperidone wins on only one of four analyses in comparison to placebo. And it is the weakest analysis of the bunch, one that looks at only a subgroup of the participants, and finds that you get a little more time, on average, before you relapse, but you still have essentially the same likelihood of becoming depressed as if you had taken placebo.

When the study was published, press releases were issued that attested to the drug’s efficacy in helping patients with treatment-resistant depression. Now that the second correction has been made to the paper (the first was related to not disclosing relevant conflicts of interest), there will be no press releases correcting the earlier, overly optimistic press releases. Many physicians have likely received a copy of this study in their mailboxes, but I am certain they will not receive a follow-up notice to inform them that the results were incorrect.

This reminds me of another post that described similar problems occurring throughout the so-called scientific investigation process.

The ARISE-RD study is now officially nominated for a Golden Goblet Award. Not sure which authors merit individual nomination, though both Nemeroff (1, 2) and Keller (1, 2) have appeared multiple times on this site regarding other issues.

Saturday, December 09, 2006

Uh Oh Chuck They Out to Get Us Man: Authorship AGAIN

You may have thought I had nothing more to say about the ARISE-RD study, in which risperidone (Risperdal) was used as an adjunctive (add-on) treatment for depression, but, incredibly, there is more information pertinent to its discussion. If you are already familiar with the issues surrounding the authorship of the paper, please skip ahead two paragraphs.

You may recall the authorship being altered from when this study was presented in abstract form (presented at the American College of Neuropsychopharmacology conference) to when it was published in final form in Neuropsychopharmacology. I took exception to Keller (a big name in psychiatry) being added to the paper as an author, as he did not appear on the previous abstract as an author. I hypothesized that Keller was added because his big name gives the study an air of scientific credibility. Especially given the study’s unimpressive results (and clear use of statistical legerdemain), adding a big name was an excellent marketing ploy by Janssen.

To quote from my earlier piece on the authorship of this paper: “What did he [Keller] do to get on the study? Keller is credited with “study concept and design,” which I would deem impossible since, if he really conceived and designed the study, he would have appeared as an author on the earlier abstract. Yet he is listed fourth on the list of people who designed the study. He is also credited, along with all of the authors, with “analysis and interpretation of the data” and critical revision of the manuscript for “important intellectual content.” Is it possible that he did a great job of helping to revise the manuscript? I suppose, but it seems there were plenty of other people who were also involved with the writing of the paper. Note that Keller was not credited with “drafting of the manuscript.” So Keller did not recruit participants, provided no administrative support, did not provide statistical expertise, and did not draft the manuscript, but apparently helped design the study after it was completed! Very impressive indeed.”

So now, to the point of this post. I discovered that this study had previously been presented on two or three other occasions outside of the ACNP conference. It was presented at the 2003 American Psychiatric Association conference (possibly in two different forms, though I am not sure on this point) as well as at the 2004 American Psychiatric Association conference and the NCDEU 2003 conference. See here on page 138 for the NCDEU reference and here for the 2003 and 2004 American Psychiatric Association references.

Both Keller and Nemeroff are conspicuously absent from the 2003 and 2004 APA and NCDEU presentations, yet both appear on the final manuscript as authors. Nemeroff appears as the lead author on the 2004 ACNP abstract for the study despite not appearing at all on prior presentations. Keller, who allegedly helped design the study, only appears on the final manuscript -- he appears on not a single presentation of the study data. To top it off, a press release touted Keller and Nemeroff as co-principal investigators on the study. Obviously, if they were really principal investigators, in any meaningful sense of the term, their names would have appeared on all earlier presentations of the data. Before doing this additional research, it seemed Keller's contributions were minimal at best, and now it appears that Nemeroff's work can also be called into question, as he appears on no presentation of the data prior to the ACNP abstract.

At this point, I would say the evidence is quite clear that the changing authorship of the paper had little to do with which authors were responsible for conducting the study, running analyses, and writing the paper. But it sure makes for excellent marketing when big names like Nemeroff and Keller stick their names in the authorship line. Please feel free to read the entire story, starting with a more detailed description of the authorship saga, followed by the covered-up conflicts of interest, and, finally, the tricky use of statistics.

Tuesday, November 28, 2006

Uh Oh Chuck They Out To Get Us Man: Stats

This is the third (and perhaps final) post in a series on a study recently published in Neuropsychopharmacology which used risperidone (Risperdal) as an add-on treatment for depression. The study had three phases, as follows:

1) Participants who had not responded to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro) for > six weeks were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks

2) Patients who failed to respond to citalopram were then assigned to open label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks

3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or citalopram + placebo and the differences between risperidone and placebo for depressive relapse were examined.

Let’s start with examining the differences between the trial report found on clinicaltrials.gov and the trial as published in Neuropsychopharmacology. The clinical trials report indicated that the primary outcome measures were: a) change in Montgomery-Asberg Depression Rating Scale (MADRS); b) time to relapse, as measured by Hamilton Rating Scale for Depression and Clinical Global Impression (CGI) scores.

Secondary measures include: a) Response rate, measured by at least a 50% improvement in MADRS score; b) Change in Hamilton Rating Scale for Depression (HAM-D) and c) Clinical Global Impressions (CGI) scale scores.

Now, to the journal report. Under the results for the open-label risperidone augmentation, on page 9 of the early online version of the study, it is stated that the MADRS was “the primary measure used to assess depression severity.” Nowhere are the results of the MADRS response criteria (>= 50% change in MADRS scores) reported. Where did this go? If this was a predetermined test of treatment response, shouldn’t it be reported? While means and standard deviations of the MADRS are reported, the alleged measure for treatment response is strangely missing.

It’s also unclear what happened to the CGI scores, as means and standard deviations for this instrument are not reported anywhere. It’s mentioned that scores on this measure were used as one measure of relapse, but the means and standard deviations are missing.

Under the results from the double-blind continuation phase, we can see that the rate of relapse was 53.3% for risperidone and 54.6% for placebo. The time to relapse was 102 days for risperidone augmentation and 85 days for placebo augmentation, for which the associated p-value was .52. But a post-hoc analysis found that the p-value for the difference in time to relapse was < .05. The authors state that this difference was found because they switched to a linear ranks test. I'm no expert on this test, so I can't make a judgment, but I can say that I'm suspicious any time a p-value goes from .52 to .05 just by switching statistical tests. At the very least, an explanation in the article is in order, as it is noteworthy that merely switching statistical tests made such a change in results.
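For what it's worth, the sensitivity involved here is easy to demonstrate. Below is a rough sketch of a two-sample weighted log-rank test, where a single `weight` switch moves between the standard (unweighted) log-rank test and a Gehan-style weighted version that emphasizes early relapses. The relapse times are entirely invented for illustration -- they are not the ARISE-RD data -- and the linear ranks test the authors used may differ in detail from the Gehan weighting shown here.

```python
# Sketch: how the choice of rank test can change a survival comparison.
# All relapse times below are invented; this is NOT the ARISE-RD data.
import numpy as np
from scipy.stats import chi2

def weighted_logrank(times1, events1, times2, events2, weight="logrank"):
    """Two-sample weighted log-rank test.

    weight="logrank" uses unit weights; weight="gehan" weights each
    event time by the number still at risk, emphasizing early
    differences. Returns (chi-square statistic, p-value).
    """
    times = np.concatenate([times1, times2]).astype(float)
    events = np.concatenate([events1, events2]).astype(int)
    group = np.concatenate([np.zeros(len(times1)), np.ones(len(times2))])

    obs_minus_exp = 0.0
    variance = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t                      # still in the study at t
        n = at_risk.sum()                         # total at risk
        n1 = (at_risk & (group == 0)).sum()       # group 1 at risk
        d = ((times == t) & (events == 1)).sum()  # relapses at time t
        d1 = ((times == t) & (events == 1) & (group == 0)).sum()
        w = float(n) if weight == "gehan" else 1.0
        obs_minus_exp += w * (d1 - n1 * d / n)
        if n > 1:
            variance += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)

    stat = obs_minus_exp**2 / variance
    return stat, chi2.sf(stat, df=1)

# Invented data: the second group relapses earlier; later experience is similar.
t_risp = [10, 14, 20, 25, 40, 44, 48, 50, 50, 50]
e_risp = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # 0 = censored
t_plac = [4, 6, 8, 12, 30, 36, 42, 50, 50, 50]
e_plac = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

stat_lr, p_lr = weighted_logrank(t_risp, e_risp, t_plac, e_plac)
stat_g, p_g = weighted_logrank(t_risp, e_risp, t_plac, e_plac, weight="gehan")
print(f"log-rank: chi2={stat_lr:.3f}, p={p_lr:.3f}")
print(f"gehan:    chi2={stat_g:.3f}, p={p_g:.3f}")
```

On data like these, the two tests will generally return different p-values from the exact same observations, which is precisely why a paper should specify in advance which test will be used and explain any later switch.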

Post-hoc analysis part 2. An additional post-hoc analysis was conducted using the subgroup of patients who were fully nonresponsive to citalopram monotherapy. In other words, the people who showed the poorest response to SSRI treatment were examined in separate analyses. Their median time to relapse and relapse rate were reported as significantly different, in favor of the risperidone group. The relapse rate was 56% in the risperidone group and 64% in the placebo group. The associated p-value was reported as .05. However, I conducted my own analysis and came up with Chi-Square = .922 and a p-value of .337. It is mentioned earlier in the paper that the authors used the Cochran-Mantel-Haenszel test, which explains how the p-value shrank so drastically. Again, a post-hoc analysis was conducted which changed the results substantially, yet the authors did not discuss the reasons behind these large discrepancies. What this would appear to mean is that relapse rates differed substantially more than chance would predict depending on the site where patients received treatment. The CMH test stratified the data by treatment site, which I believe would then account for differences due to treatment site. If treatment response really was dependent to a significant extent on the treatment site, this bears mention in the article, but such a discussion is nowhere to be found.
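For readers who want to see what the unadjusted comparison looks like, here is a minimal sketch of a 2x2 chi-square test. The cell counts are hypothetical, chosen only so the row percentages land near the reported 56% and 64% relapse rates (the actual subgroup sizes are not reproduced here), so the statistic will not match the paper's numbers exactly.

```python
# Hypothetical 2x2 chi-square test on relapse counts. The counts are
# invented to give roughly 56% vs 64% relapse; they are NOT the actual
# ARISE-RD subgroup data.
from scipy.stats import chi2_contingency

#        relapsed  no relapse
table = [[23, 18],   # risperidone augmentation: 23/41 ~ 56%
         [25, 14]]   # placebo:                  25/39 ~ 64%

chi2_stat, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2_stat:.3f}, p = {p_value:.3f}, df = {dof}")
```

The site-stratified (CMH) analogue pools a separate 2x2 table per treatment site -- in Python it is available as `StratifiedTable` in statsmodels, if one has the site-level tables. The point is simply that an unadjusted test on counts like these comes nowhere near significance.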

Table 2. The results here are quite interesting. This refers to the double-blind section of the study in which patients who had shown symptom resolution while receiving risperidone were randomly assigned to continue risperidone or to receive placebo. On the MADRS, patients receiving risperidone, on average, gained 11.2 points (i.e., their depression worsened by 11.2 points), whereas patients on placebo gained 10.4 points. Thus, there was a slight, but certainly not significant, difference in favor of persons on placebo worsening less than persons on risperidone. On the HAM-D, patients receiving risperidone worsened by an average of 7.6 points whereas they worsened by an average of 7.9 points on placebo. Between the two measures, it is clear that on average, there was very little difference between risperidone and placebo. However (and take out your notepads, please), patients in both groups got significantly worse over time in the third phase of the study. Thus, the scenario for the average patient is that he/she sees a relatively brief improvement in symptoms while taking risperidone then returns to a period of moderate depressive symptoms. The authors do not discuss that mean scores between groups did not differ at all in the third phase of the study.

The only evidence to emerge from this study, really, is that an open-label treatment resulted in a decrease of symptoms. If Janssen really wanted to impress, they would have included an active comparator. Say, an older antipsychotic, a so-called “mood stabilizer,” or perhaps another atypical antipsychotic. Or, if not feeling daring, at least add a placebo to the mix. Based on the study results, we cannot even conclude that risperidone augmentation worked better than adding a sugar pill to SSRI treatment.

In summary, it is unknown what happened to some of the secondary outcome measures (CGI scores, MADRS response rate) and the statistical analyses used in some cases required more explanation, as their use led to a big change in interpretation of the results.

So what do we have here? I believe this is an excellent example of a study conducted for marketing purposes. I bet that many reprints of this article have been purchased by Janssen, which will be passed on by cheerleaders, er, drug reps, to physicians in a ploy to market Risperdal as an adjunctive treatment for depression. Additionally, there are likely “key opinion leaders,” perhaps including some of the study authors, who are willing to stump for Risperdal as an adjunctive treatment for depression at conferences, meetings, and dinners. With this study now published in Neuropsychopharmacology, there can be little doubt that such marketing strategies now have a glimmer of scientific sparkle on their side, although upon closer examination, the scientific evidence is very weak at best. Yet too few doctors will bother to perform closer examination of the meager science behind the marketing as the atypical antipsychotics continue their march toward rebranding as “broad spectrum psychotropic agents,” as Risperdal was referred to in this press release regarding the present study.

I encourage interested readers to also check out my earlier posts regarding the questionable authorship of the paper (possibly involving magic!) as well as the rather blatant undisclosed conflicts of interest associated with the study. This is so distressing that I think I’ll have to chill out with a couple of Quaaludes, er, earlier versions of broad spectrum psychotropic agents.

Monday, November 20, 2006

Uh Oh Chuck They Out To Get Us Man: COI

This post (similar to a previous post) centers on a study for risperidone as an add-on to SSRI treatment for depression.

The current post focuses on the failure of some authors to disclose their conflicts of interest. When the advance online publication of the article is examined, the only author listing any financial support is lead author Mark Hyman Rapaport, who lists four grants and a chairmanship. Janssen funded the study according to an earlier abstract version of the study, so it is curious that Rapaport did not list Janssen as a financial supporter. Rapaport was not alone in his failure to disclose. Nemeroff (the last author) and Keller (fifth author) clearly had conflicting interests that should have been declared.

Let’s start with Nemeroff. He is the editor of the journal (Neuropsychopharmacology) in which this article appeared, so he should be familiar with the journal's conflict of interest policy, which states in part: “At the time of submission, each author must disclose any involvement, financial or otherwise, that might potentially bias their work. The information should be listed in the Acknowledgements that appear at the end of the manuscript and noted in the authors’ cover letter.” The policy is pretty clear – so does Nemeroff have a significant conflict of interest in this case?

In the Journal of Clinical Psychiatry Supplement 8 from 2005, the conflicts of interest section mentions that, among Nemeroff’s quite numerous funding sources, Nemeroff has received grant/research support from Janssen, is a consultant for Janssen, and is a member of the speakers bureau for (you guessed it) Janssen, which is the company marketing Risperdal. In the same supplement, which was derived from a “planning roundtable…supported by an educational grant from Janssen Medical Affairs,” Nemeroff penned a review article that reflected favorably upon risperidone, as well as some other drugs. So it’s pretty clear that there was a conflict of interest here – it’s just that editor Nemeroff did not enforce his journal’s policies upon himself. Of course, this is not the first time such behavior has occurred. You can read about a similar failure to enforce editorial policies involving Nemeroff here and here.

But wait, there’s more! Nemeroff actually violated another of his journal’s policies, the one about duplicate publication of data.

On the Neuropsychopharmacology author instructions page, right under Nemeroff’s name as editor of the journal, you can see the following: “Submission is a representation that neither the manuscript nor its data have been previously published (except in abstract) or are currently under consideration for publication.” Nemeroff, in the aforementioned 2005 Journal of Clinical Psychiatry Supplement 8, wrote no fewer than five paragraphs describing the risperidone add-on study’s data, which were later published in Neuropsychopharmacology. So Neuropsychopharmacology’s editorial policy is that study data must not have been published earlier except in abstract form, yet Nemeroff wrote about the data at much greater length than an abstract, in a supplement paid for by Janssen, freely flouting his own journal’s policy on prior publication. This, of course, comes in addition to an egregious failure to disclose conflicts of interest.

What is the penalty for such behavior, one might ask? “An accusation that an Editor…has violated the conflict of interest policy shall be referred to the ACNP Ethics Committee for consideration and investigation. The Ethics Committee shall report its findings and recommendations to the Publications Committee and Council for action… an Editor…found guilty of violating the conflict of interest policy is subject to sanction, including forfeiture of the editorship.”

Don’t worry – Nemeroff is one step ahead of the game here – he chose to resign his editorship over the previous scandal involving his pimping of vagus nerve stimulation therapy, which you can feel free to read about here. No, Nemeroff did not state that he was leaving the editor position as a result of the VNS debacle, but the timing seems like more than a coincidence.

To summarize briefly, Nemeroff had a blatant conflict of interest which he did not declare, and he is the editor of the very journal in which the article with its undisclosed COI appeared. In addition, he ignored his journal’s prohibition on prior publication of data. As the editor, he should obviously know much better. Indeed, it is difficult to believe that this was an oversight. It appears that Nemeroff was playing the role of marketer for risperidone rather than carrying out his duties as an editor.

How about Martin Keller? In that same 2005 supplement of the Journal of Clinical Psychiatry mentioned above, Keller is listed as having received honoraria from Janssen and as being an advisory board member for Janssen. Keep in mind that whatever work he conducted at the “planning roundtable” upon which the supplement was based was also funded by Janssen. Yet no mention of any financial support from Janssen is provided in the Neuropsychopharmacology article.

Apparently, perhaps due to Nemeroff’s earlier brush with the spotlight regarding his marketing of VNS therapy in the journal he edits [an article in which blatant conflicts of interest were not disclosed], the authors thought better of the conflict of interest issue. A corrigendum (correction) is displayed in the November print edition of Neuropsychopharmacology that lists disclosures for Nemeroff, Keller, and Rapaport. But if you are obtaining the article through online access (which is likely true for most people), then you won’t find the correction because it is not included in the pages of the article. Eventually the correction will be picked up on Medline, but many readers will not notice it.

Add the failure to disclose conflicts of interest to the shifting authorship line mentioned earlier and you can see why I am feeling a little skeptical. Of course, given some of Nemeroff’s past ethical issues (here and here), this is not entirely surprising. The last chapter in this tale, regarding the risperidone augmentation study’s data analysis, will be told shortly.

Friday, November 17, 2006

Uh Oh Chuck They Out to Get Us Man: Authorship

If nothing else, the continuing stories regarding Nemeroff and company give me a chance to recite some excellent Public Enemy lyrics in post titles.

But seriously, the problems continue to stack up. In an article published online in Neuropsychopharmacology, a journal at which Nemeroff is the editor, the following occurred:
1) A sizable authorship switch
2) Failure to disclose conflicts of interest
3) Bobbing and weaving on data analyses

This centers on a study for risperidone as an add-on to SSRI treatment for depression. The study had three phases, as follows:
1) Participants who had not responded to 1-3 antidepressants other than (es)citalopram (Celexa or Lexapro) for > six weeks were assigned to open-label citalopram (Celexa) treatment for 4-6 weeks
2) Patients who failed to respond to citalopram were then assigned to open label risperidone (Risperdal) augmentation (add-on) treatment for 4-6 weeks
3) Patients whose depression remitted were then assigned to 24 weeks of either risperidone + citalopram or citalopram + placebo and the differences between risperidone and placebo for depressive relapse were examined.

This post focuses solely on an authorship switch. In 2004, results from this study were presented in abstract form. In this form, the authors read as follows:
Nemeroff, Gharabawi, Canuso, Mahmoud, Loescher, Turkoz, Rapaport, Gharabawi.
You might think that there were two different Gharabawis, but they were both listed as George M Gharabawi, so he’s either the 2nd or 8th author – someone made an obvious typo here.

Who’s on the final published manuscript in Neuropsychopharmacology? In order: Rapaport, Gharabawi, Canuso, Mahmoud, Keller, Bossie, Turkoz, Lasser, Loescher, Bouhours, Dunbar, Nemeroff.

As if by magic, Nemeroff goes from first to last author. Rapaport moves from seventh author to first, Turkoz gets bumped down a couple spots. Keller appeared out of thin air. What did he do to get on the study? Keller is credited with “study concept and design,” which I would deem impossible since, if he really conceived and designed the study, he would have appeared as an author on the earlier abstract. Yet he is listed fourth on the list of people who designed the study. He is also credited, along with all of the authors, with “analysis and interpretation of the data” and critical revision of the manuscript for “important intellectual content.” Is it possible that he did a great job of helping to revise the manuscript? I suppose, but it seems there were plenty of other people who were also involved with the writing of the paper. Note that Keller was not credited with “drafting of the manuscript.” So Keller did not recruit participants, provided no administrative support, did not provide statistical expertise, and did not draft the manuscript, but apparently helped design the study after it was completed! Very impressive indeed.

But wait, there’s more! In a press release, it is stated that “Dr. Mark Hyman Rapaport was the study’s principal investigator. Co-principal investigators were Charles B. Nemeroff, Ph.D., M.D. and Martin B. Keller, M.D.” So Keller, who played no major role in designing the study or running patients, was a co-principal investigator. Remember, he would have been listed as an author on the initial abstract describing the study results if he had helped design the study.

What am I implying? There’s no doubt that Keller is a big name in psychiatry. He has, according to his CV from August of 2006, over 300 journal publications to go with dozens of book chapters. So it certainly adds credibility to the study to tack him on as an author. As for Nemeroff moving from 1st to last, that’s interesting. My thought is that with an authorship list of 12, nobody is going to remember authors 6-11, so tacking him on as last author makes the name stand out more. Just speculation on my part. And Rapaport making the jump from seventh author to first? Well, I think that, again, we’re talking about name recognition here. Rapaport is likewise a pretty big name. Now, mind you, I’m not implying at all that Rapaport did not have a major role; indeed, the author contributions section of the paper indicates that he did quite a bit of work on the project and he absolutely appears to deserve first author credit.

There are varying standards for the ordering of authorship. In some disciplines, it just goes in descending order (which makes the most sense) – he/she who contributed most gets first authorship while he/she who contributed least gets last authorship. In others, the lab supervisor, who may have done very little on the study, gets last authorship or sometimes first authorship. In any case, the first author and the last stick out most in memory and I’m sure it doesn’t hurt to throw in a bigwig like Keller in the middle of the mix. I’m guessing it would have been better publicity to move Keller higher on the list, but there’s only so much credit a guy can receive for apparently doing magic (designing the study after it was completed) and making comments on the manuscript. Of course, inappropriate authorship is widespread, so these results come as no surprise.

More on other issues with the study later. I assure you that the authorship switch is the least of the study’s problems.