Friday, September 28, 2007

Psychiatrist Gives Herpes to Patients

Tucked into a report that slams the FDA for lax oversight of clinical trials is a gem regarding a psychiatrist named Dr. David Linden.
Last November, the Oklahoma Board of Medical Licensure and Supervision suspended Dr. Linden’s license for three months because he had sex with two patients and gave them genital herpes infections, according to board records. Dr. Linden, who also owns a psychiatric center in Las Vegas, did not return repeated telephone messages.
WOW! He's also run into trouble from the FDA for poor conduct of trials -- giving herpes to patients was just icing on the cake. The whole story is at the New York Times. On that note, have a good weekend. In fact, I now officially proclaim the upcoming weekend as No Sex With Patients Weekend.

I hope this is not too tabloid-like for this site. I don't care about people's personal failings -- we all have our flaws. It's when these errors in judgment start impacting patients that they really bother me. I've not seen the records indicating that Dr. Linden engaged in such behavior, but I'm willing to trust the reporting of Gardiner Harris on this story.

Thursday, September 27, 2007

Zyprexa for Youth: What Marketing Plans?

As reported on Bloomberg, Pharmalot and Furious Seasons, an FDA official has overruled FDA reviewers to put Zyprexa on track for approval for treating adolescent schizophrenia and bipolar disorder.

Lilly claims it has no plans for a major marketing campaign. Yet given the glut of atypical antipsychotics, their frequent use among kids already, and the concerns associated with Zyprexa's side effects, Lilly will have to market aggressively in order to win scripts. Bloomberg also reported that Abilify and Seroquel may win FDA approval for the youth market before long, and with Risperdal going generic soon, Lilly will either have to market their drug like hell (i.e., Operation Restore Confidence Part Two?) or accept a minor piece of the market. But perhaps they don't care -- maybe the studies were just an attempt to generate good publicity and to extend the patent for six months, an extension that was estimated to be worth a billion dollars in itself.

Zyprexa for Teen Bipolar


Zyprexa for teen mania is great, except for the Fat Camp Factor -- from the study abstract:
The mean baseline-to-endpoint weight change was significantly greater for patients receiving olanzapine relative to patients receiving placebo (3.7 kg versus 0.3 kg), and the incidence of treatment-emergent weight gain ≥7% of baseline was higher for olanzapine-treated patients (41.9% versus 1.9%)
The drug did appear to be much more effective than placebo according to the abstract, but I'll have to read the article before I can weigh in on efficacy more confidently. Gaining 7% of your body weight in three weeks? Um, that ain't good. Gimme a D, Gimme an I, Gimme an A (can you see where I'm going here -- diabetes, anyone?)...

Maybe these kids should have been drinking diet soda -- that will save them from weight gain, at least that's what Lilly might tell them. There were 13 "authors" on the study -- apparently being good at recruiting patients counts as authorship, since there is no way that 13 people made a legit contribution to the study if recruitment is left out of the equation.

More at Furious Seasons.

Wednesday, September 26, 2007

Another Key Opinion Leader Contradicts Himself

It appears that Lindsay DeVane, who called his own continuing medical education article (appearing in CNS Spectrums) a "commercial piece of crap," has retracted his story (via the excellent Carlat Psychiatry Blog). Apparently, his take on the former "crap" piece has now changed to "there should be no question about the integrity of the CNS Spectrums publication as a CME activity." The article went from, in his own words, a "ridiculous text" to an article that reflects "the inherent limitations in providing practicing clinicians with fundamental descriptions of complicated issues." Is he implying that practicing clinicians lack the intellectual fortitude to understand "complicated issues," so he had to dumb it down to meet their limited capacity? Perhaps there is another interpretation.

He also changed his tune to "all three co-authors were heavily involved in multiple edits before agreement was reached on a final manuscript" from stating originally that he had not actually read the manuscript. That is quite a change indeed. One can only wonder which individuals pressed DeVane to change his story. Here's what I don't understand. DeVane has been in the game for a long time. Does he really have that much to lose by pointing out the joke that is today's continuing medical education system? I want to know who spoke with him and how he was persuaded to change his mind. This is such a ridiculous turnaround in stories that it makes Larry Craig look like a straight shooter. I am 99.9% doubtful that DeVane would have changed his story without significant influence from others. Drs. Charles Nemeroff and Sheldon Preskorn were the coauthors. I can't help but wonder if one or both of them took exception to DeVane's labeling of the piece as "crap" and read him the riot act. Does DeVane not realize that this turnaround in story is farcical?

Read the full story here. Two further glittering examples of continuing medical education in psychiatry gone awry can be read here and here. To see another key opinion leader contradict himself, go here.

Tuesday, September 25, 2007

Thinking Blogger Award Nominations


I've been nominated again for a Thinking Blogger award. This meme was floating about a few months ago and has appeared again. Big thanks to Dr. John Grohol at Psych Central for the nomination. In his nomination, he wrote:
CL Psych provides excellent analysis into the clinical psychology/psychiatry study of the week (sometimes more than one a week), often pointing out how naked the emperor really is. We enjoy an academic who knows his statistics and research design and doesn’t hold back in his critique of poorly designed studies that draw ridiculous conclusions that reflect the researcher’s own bias more than the data.
That was quite flattering and I appreciate the feedback greatly. I am especially honored given that Psych Central is truly the granddaddy of psychology websites. It has been around a long time, is linked to by everyone, and is very well-respected. The site is the biggest psychology presence on the internet -- yes, I say bigger and more important than the American Psychological Association website. Thank God the APA is sinking a bazillion dollars (actually, $7.6 million) into renovating their site.

The Rules

1. If, and only if, you get tagged, write a post with links to 5 blogs that make you think,
2. Link to this post so that people can easily find the exact origin of the meme

Please, remember to tag blogs with real merits, i.e. relative content, and above all — blogs that really get you thinking.

This all started at: http://www.thethinkingblog.com/2007/02/thinking-blogger-awards_11.html

The blog that tagged me is: http://psychcentral.com/blog/

The entry in which I was tagged is here.

To avoid the appearance of quid pro quo, I won't nominate Psych Central and I won't re-nominate the excellent sites I nominated a few months ago. In random order:

Carlat Psychiatry Blog
Daniel Carlat is a psychiatrist who regularly blasts the continuing medical education industry for its sham attempts to "educate" physicians. It's truly breathtaking to consider the degree to which physicians are blasted with commercial propaganda that passes for education.

Pharmalot
Frankly, Ed Silverman (author of Pharmalot) scares me. The man is a workhorse who covers the drug industry in a nearly compulsive fashion. He cracks stories on his own and links to virtually every drug industry-related story worthy of discussion. The man is a machine.

Mind Hacks
Vaughan and the crew at Mind Hacks present neuroscience in a manner that is fascinating and digestible for the general public, which is no easy feat. I learn something new every time I visit.

Seroxat Secrets
The author of this site regularly performs investigative work, often regarding the relationship between consumer advocacy groups and the drug industry. Whatever drug companies and their related PR firms are attempting to slip under the radar is often exposed to daylight on this site.

Pharma Giles
There are a few good pharma parody sites. Of them, Pharma Giles gets the nod because of the sheer volume of hilarious posts and how closely they mirror reality. The combination of tragedy and comedy on his site helps to drive home the need for change in a way that makes me want to laugh and cry simultaneously.

No disrespect intended to the excellent sites that I did not nominate!

Monday, September 24, 2007

Shyness: Pathological or Normal Experience

SmithKlineBeecham/GlaxoSmithKline, the psychiatric elites who devised the Diagnostic and Statistical Manual of Mental Disorders, and social phobia. An interesting combination. I read a fascinating op-ed in the New York Times by Christopher Lane, an English professor at Northwestern University, that discussed the growth of social phobia, especially among kids. Here are some highlights...

"How much credence should we give the diagnosis? Shyness is so common among American children that 42 percent exhibit it. And, according to one major study, the trait increases with age. By the time they reach college, up to 51 percent of men and 43 percent of women describe themselves as shy or introverted. Among graduate students, half of men and 48 percent of women do. Psychiatrists say that at least one in eight of these people needs medical attention.

"But do they? Many parents recognize that shyness varies greatly by situation, and research suggests it can be a benign condition. Just two weeks ago, a study sponsored by Britain’s Economic and Social Research Council reported that levels of the stress hormone cortisol are consistently lower in shy children than in their more extroverted peers. The discovery upends the common wisdom among psychiatrists that shyness causes youngsters extreme stress. Julie Turner-Cobb, the researcher at the University of Bath who led this study, told me the amounts of cortisol suggest that shyness in children “might not be such a bad thing.” [Not sure that this finding in itself is strongly suggestive of anything important, but it's interesting.]

Lane goes on to write about his perception that the diagnostic criteria are too loose for social phobia. Then, enter Paxil.

Then, having alerted the masses to their worrisome avoidance of public restrooms, the psychiatrists needed a remedy. Right on cue, GlaxoSmithKline, the maker of Paxil, declared in the late 1990s that its antidepressant could also treat social anxiety and, presumably, self-consciousness in restaurants. Nudged along by a public-awareness campaign (“Imagine Being Allergic to People”) that cost the drug maker more than $92 million in one year alone ($3 million more than Pfizer spent that year promoting Viagra), social anxiety quickly became the third most diagnosed mental illness in the nation, behind only depression and alcoholism. Studies put the total number of children affected at 15 percent — higher than the one in eight who psychiatrists had suggested were shy enough to need medical help.

This diagnosis was frequently irresponsible, and it also had human costs. After being prescribed Paxil or Zoloft for their shyness and public-speaking anxiety, a disturbingly large number of children, studies found, began to contemplate suicide and to suffer a host of other chronic side effects. This class of antidepressants, known as S.S.R.I.’s, had never been tested on children. Belatedly, the Food and Drug Administration agreed to require a “black box” warning on the drug label, cautioning doctors and parents that the drugs may be linked to suicide risk in young people.

You might think the specter of children on suicide watch from taking remedies for shyness would end any impulse to overprescribe them. Yet the tendency to use potent drugs to treat run-of-the-mill behaviors persists, and several psychiatrists have already started to challenge the F.D.A. warning on the dubious argument that fewer prescriptions are the reason we’re seeing a spike in suicides among teenagers. [Note that I tackled this recently.]

It goes on to close with...

With so much else to worry about, psychiatry would be wise to give up its fixation on a childhood trait as ordinary as shyness.

To view the diagnostic criteria for social phobia, please go here. Here is a key symptom:

"The avoidance, anxious anticipation, or distress in the feared social or performance situation(s) interferes significantly with the person's normal routine, occupational (academic) functioning, or social activities or relationships, or there is marked distress about having the phobia."

The diagnosis depends to a large extent on what the doctor considers "interferes significantly" or "marked distress." When Paxil was being pushed, I'd be willing to bet that the reps were given scripts that helped to expand the boundaries of social anxiety disorder. When words like "significantly" or "marked" are used, one has to wonder what they mean. Who shapes physicians' judgment on these matters? To a notable extent, physician perceptions are influenced by commercials, er, continuing medical education and cheerleaders, er, drug reps.

A great piece from the New Republic in 1999 relevant to the expansion of social phobia can be found here. The points raised in the article ring true today. Let me be clear: I've seen real social phobia -- it exists and it is painful. But does it really affect 13% of Americans? I think not. I'm quite glad that Dr. Lane is stepping into the fray. I'm not sure I agree with him wholeheartedly (I'll have to read his upcoming book first), but I know that I'm glad someone is willing to bring these issues to the fore. At the very least, this is a subject worthy of debate and discussion, not blind acceptance of the current orthodoxy that social phobia (like everything else) is underdiagnosed and undertreated.

Friday, September 21, 2007

SSRIs, Suicide, and Dunce Journalism

Earlier in the week, I noted that a much-ballyhooed study purporting to show a relationship between an increase in youth suicide and a decrease in SSRI prescriptions for youth actually did no such thing.

Here's what the media are saying about the study. WARNING: If you don't want to be shocked by examples of terribly poor journalism, please do not read the remainder of the post.

From the esteemed British Medical Journal:
"Numbers of suicides among Americans aged under 19 years rose by 14% from 2003 to 2004 , the study says, the biggest annual increase since systematic recording began in 1979. The same year saw a 22% decrease in the number of SSRI prescriptions to this age group."
The increase in suicide rates appears to be accurate. As I pointed out earlier, the youth suicide rate then apparently dropped slightly in 2005, which is when SSRI prescriptions for youth fell steeply. As for SSRI prescriptions dropping 22% -- that number is inaccurate. Look at the chart (pardon the crappy image quality) and note that the SSRI prescription rate for youth in the U.S. was down only slightly in 2003-2004, not by 22%. Bad journalism.

From the Washington Post:
The trend lines do not prove that suicides rose because of the drop in prescriptions, but Gibbons, Insel and other experts said the international evidence leaves few other plausible explanations.
Again, if you folks want to rely solely on correlational data (which is a stupid idea in any case), then you may want to make sure that a statistical analysis is actually run which shows a relationship between SSRI prescriptions and suicide rates. Read the whole WaPo piece if you'd like -- it's not terrible overall.

From WebMD:
Warnings that antidepressants may increase teen suicides appear to have backfired, a new study suggests...

"The FDA has overestimated the effect of antidepressant medications on suicidality and dramatically underestimated the efficacy of antidepressants in the treatment of childhood depression," Gibbons told WebMD in April 2007.
Oh, he must be referring to the efficacy that shows, at best, a small effect over placebo. Indeed, the recent meta-analysis by Bridge and colleagues, which claimed to show that the benefits of SSRIs outweighed the risks, found only a very small treatment effect favoring SSRIs in youth depression. In fact, most SSRIs did not show any advantage over placebo. How does one "dramatically underestimate" a treatment that provides an apparently pretty small benefit over placebo? And if you really love to rely on correlational, epidemiological data, then try this on for size -- the data do not indicate that SSRIs decrease suicide.

How about the Chicago Tribune?
Suicide rates for preteens and teenagers increased sharply when the Food and Drug Administration slapped a "black box" warning on anti-depressants and doctors started writing fewer prescriptions for young people, according to federal data released Thursday.
This one will be covered in a moment...

The headline in the San Francisco Chronicle:

Suicide rise follows antidepressant drop: Study finds dramatic increase after 'black box' warning

As I noted earlier, the black box warning occurred in October 2004, and I know of not one shred of data that can track that particular time point to a "dramatic increase" in suicides. Hell, Gibbons and colleagues did not even attempt to link an increase in suicides to that particular date, yet the media latch onto this point as if it were grounded in solid scientific data.

And one more from the Los Angeles Times:
The study, which includes data from the Netherlands, provides the strongest evidence yet that the drugs are useful in preventing suicide, Gibbons said.
So, to make this clear, the "strongest evidence yet" is based upon a report of correlational data where, for the main population studied (the United States), there was not even a single statistical analysis done to relate a decrease in SSRI prescriptions with an increase in suicide rate? This, my friends, is bad science that has now become "the truth" thanks to science writers who either don't know a damn thing about science or are unwilling to challenge the opinions of the scientists whom they interview. It also becomes "the truth" when Gibbons, in interviews, is making statements that run far past what his own data show. Scientists can have opinions, but one needs to separate what is based on solid data from what is speculation.

By all means, read my prior post that dissects this latest study and let me know if I missed something. In my opinion, the Gibbons study was uninformative at best and appears to have led to a large number of poorly reported stories. At this point, a few bright individuals appear to have indicated that my take on the article is accurate (1, 2, 3).

To be fair, The Boston Globe and New York Times get a pass -- their coverage of the issue was excellent.

For a great brief read on another youth suicide study, visit The Last Psychiatrist.

Update (9-25-07): Via Furious Seasons -- An op-ed in the Boston Globe by Alison Bass blasts the latest media blitz regarding the alleged link between declining SSRI prescriptions and increased suicides. Furious Seasons wisely notes a couple of small errors in the Globe piece, but the overall thrust is well worth a read.

Marketing, Lying, Geodon, Whatever...

Pharma Giles has a wonderful post about a fake antipsychotic and its dubious marketing which is similar to a real ad campaign for Geodon. The post on Pharma Giles is hilarious, and the FDA letter that slaps Pfizer for its false marketing of Geodon is worth a read as well. Geodon was featured in a medical journal ad that offended the sensibilities of the FDA. Here are some highlights...
Specifically, the journal ad fails to include the warnings for neuroleptic malignant syndrome, tardive dyskinesia, and hyperglycemia and diabetes mellitus. The journal ad does mention “movement disorders” and “low EPS,” and while we do not object to these claims, the presentations are insufficient to communicate the risk concepts associated with, and the seriousness of, tardive dyskinesia. Additionally, the professional journal ad fails to include important precautions, specifically, rash, orthostatic hypotension, and seizures. By omitting these risks, the journal ad misleadingly suggests that Geodon for Injection is safer than has been demonstrated.
The ad made a claim that:
Proven advantages over haloperidol IM -- "twice the improvement as measured on the BPRS"
To which the FDA letter said:
This presentation is misleading because it implies that Geodon for Injection is more effective than haloperidol IM when this has not been demonstrated by substantial evidence or substantial clinical experience. The single study cited for this claim was an open-label study, which is not an appropriate study design to evaluate subjective endpoints, such as those measured by the Brief Psychiatric Rating Scale anchored version (BPRS), because of the potential for evaluator bias. In fact, FDA is not aware of any substantial evidence to support this claim.
Nice going Pfizer. Keep up the B.S. advertising and the shoddy science (1, 2, 3).

Wednesday, September 19, 2007

The Drug Safety Blindfold

A recent study in the Archives of Internal Medicine found that serious adverse drug events reported to the FDA were up by a large margin (260%) from 1998 to 2005. A major problem with any such investigation, and one acknowledged by the authors, is that adverse events are only rarely reported when they occur. Thus, their findings are almost certainly an underestimate, likely by a large margin.

Why the upsurge? The authors stated:

The increase over time was largely explained by increases of just 1 type of report – expedited reports from manufacturers of new, serious events not on the product label. Of the increase of 54,876 additional events in 2005 compared with 1998, expedited reports accounted for 48,080 (87.6%) of these events.

Wait a second -- a large chunk of these reports are from the manufacturer regarding events that are not on the product label, meaning events that the manufacturer claims do not happen while taking the drug? I really hope I am missing something here. At first glance, it would appear that the labels on drugs are surely missing a great deal of relevant information!

Furious Seasons has a long post regarding the psych meds listed in the report, so I won't steal his thunder except to say that the usual suspects were linked to a large number of deaths. It is important to note that these reported deaths were not necessarily caused by the drug, but that whoever reported the event thought a relationship between the death and the drug might exist.

It is a sobering article that really reinforced my curiosity about how much we actually know regarding the safety of our medicines. It has previously been documented thoroughly that clinical trials do a very poor job of reporting safety outcomes, so I suppose the latest study is actually not particularly surprising. For example, as reported in the American Journal of Psychiatry, across a reasonably large sample of psychotropic drug trials:

On average, drug trials devoted one-tenth of a page in their results sections to safety, and 58.3% devoted more space to the names and affiliations of authors than to safety.

Bummer. And, from the Journal of the American Medical Association regarding clinical trials for a wide variety of interventions:

Overall, the median space allocated to safety results was 0.3 page. A similar amount of space was devoted to contributor names and affiliations… Only 39% of trials had adequate reporting of clinical adverse effects and only 29% had adequate reporting of laboratory-determined toxicity.

I’m not trying to instill a panic, but it is at least a little scary that clinical trials don’t provide adequate information and, apparently, the labels of drugs are missing a significant number of relevant adverse drug effects. But hey, what’s a few dead people when there are buckets of money to be made?

Update: John Grohol at Psych Central has some intelligent comments about the Archives of Internal Medicine study, mentioning that we need to know how many people are taking said drugs in order to compare the adverse event reports for each drug to the number of people taking each medication. I agree with his comment and also believe that we need to be vigilant -- drug safety reporting is a joke and needs to change.
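To make Grohol's point concrete, here is a minimal sketch of the kind of normalization he is asking for. The drug names and figures are invented for illustration; they are not taken from the Archives study.

```python
# Toy illustration: raw adverse event report counts mean little without knowing
# how many patients were exposed to each drug. All figures below are made up.
reports = {"Drug A": 1200, "Drug B": 300}
patients_exposed = {"Drug A": 2_000_000, "Drug B": 150_000}

for drug in reports:
    rate = reports[drug] / patients_exposed[drug] * 100_000
    print(f"{drug}: {reports[drug]} reports, {rate:.0f} per 100,000 patients exposed")

# Drug A generates four times as many reports, but Drug B has the higher
# reporting rate once exposure is taken into account.
```

Without the exposure denominator, a widely prescribed drug will always look more dangerous than a rarely prescribed one, which is exactly the trap Grohol warns against.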

Tuesday, September 18, 2007

Zyprexa: No Longer The Gold Standard?

Decision Resources has now reported that Zyprexa is no longer the market leader in schizophrenia. In March, I noted that the same company proclaimed boldly that Zyprexa was the "gold standard" in treating schizophrenia and would remain so until 2015. So in a matter of months, the same company has gone from labeling Zyprexa as the gold standard for treating schizophrenia to indicating that Zyprexa is an also-ran. The rather quick about-face of Decision Resources makes me wonder about the credibility of its reports.

According to the latest report, concerns about side effects have hurt Zyprexa's market share. Nah, really?

Hat Tip: Furious Seasons, a blog that has been absolutely ablaze with great material.

Monday, September 17, 2007

Peer Review, SSRIs, Suicide, and Booze

The recent study in the American Journal of Psychiatry by Gibbons, Mann, and colleagues regarding the relationship between SSRI usage and suicides reads more like an exercise for undergraduate students to find obvious errors than it does a real peer-reviewed study. Sounds mean, but keep reading.

The abstract of the study includes the following...

"In both the Unites States and the Netherlands, SSRI prescriptions for children and adolescents decreased after U.S. and European regulatory agencies issued warnings about a possible suicide risk with antidepressant use in pediatric patients, and these decreases were associated with increases in suicide rates in children and adolescents."

So less SSRIs = more suicides, according to the authors. Let’s see if this study actually shows such a relationship…

[Figures from the article: SSRI prescription rates by age group (0-10, 11-14, and 15-19) and suicide rates for ages 5-19]

Look closely at the above graphs (click to enlarge) from the article. Note that the decrease in SSRI prescriptions from 2003 to 2004 was very slight across the 0-10, 11-14, and 15-19 age groups, which is the timeframe in which suicide rates for those aged 5-19 increased notably. The larger declines in SSRI prescribing for youth occurred from 2004-2005, which happens to be when the suicide rate for those aged 15-24 appears to have decreased from 10.3 per 100,000 (see Table 9; page 28 here) to 9.8 per 100,000 (see Table 7 here). Yes, I know I am comparing data for ages 15-24 to data on ages 5-19, but I think this makes sense when one considers that the suicide rate for those 14 and under is much lower than for those aged 15-24. Actually, grouping suicide data for ages 5-19 makes little sense to me given the vast differences in suicide rate within this age group.

It is important to note that the authors of the paper did not have data from 2005, but there is nothing from the 2003-2004 U.S. SSRI prescription data cited in their paper that even suggests a relationship between decreased SSRI use in youth and an increased suicide rate, as the decrease in prescriptions was minimal. Pay close attention: The authors ran a total of zero statistical analyses to examine the relationship between SSRI prescription rates and suicide rates in the United States. That’s right, zero. So they put up a couple of figures without a single shred of statistical evidence, then claim that declining SSRI prescriptions are associated with an increase in suicide rates. Any peer reviewer who was not drunk or on a high dose of Seroquel should have noticed this gigantic flaw.

In the discussion, the authors state: "While only a small decrease in the SSRI prescription rate for U.S. children and adolescents occurred from 2003 to 2004, the public health warnings may have left some of the most vulnerable youths untreated." This is unadulterated speculation, which, as I just mentioned, is not supported by a single statistical analysis in their paper. It is also hard to imagine how an FDA warning issued in mid-October 2004 could have increased suicides that occurred earlier in the year -- that would require time travel. This is so mind-bogglingly obvious that, again, the peer reviewers were possibly inebriated during the review process, or the editor published the paper over the objections of the reviewers. Am I being too nasty? I'm just trying to figure out how it got published, and "good science" is not the answer.

The authors then proposed the following:

…we estimate that if SSRI prescriptions in the United States were decreased by 30% for all patients, there would be an increase of 5,517 suicides per year…

In addition…

In children 5 – 14 years of age, a 30% reduction in SSRI prescriptions would lead to an estimated increase of 81 suicides per year… Given that SSRI prescriptions for children under age 15 already underwent a reduction of approximately 17% from 2003 to 2005, we expect an increase of .11 suicides per 100,000 children in this age group. Since there are approximately 40 million children in this age group, we would expect 44 additional deaths by suicide in 2005 relative to 2003, or an increase of 18% in this age group.

Preliminary 2005 suicide data indicate a suicide rate in 5-14 year olds of .7 per 100,000, holding steady from 2004. This does not support the predictions of Gibbons and colleagues. Granted, the 2005 data are preliminary, but I’d be surprised if they showed a large change in the direction that Gibbons, Mann, and their team predicted.
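For what it's worth, the paper's own numbers are easy to sanity-check with a little arithmetic. Here is a minimal sketch using only the figures quoted above (the 40 million population figure is the paper's own approximation):

```python
# Back-of-envelope check of the Gibbons et al. estimate for ages 5-14,
# using only the numbers quoted above.
population_5_to_14 = 40_000_000
extra_rate_per_100k = 0.11                     # the paper's predicted increase per 100,000

extra_deaths = extra_rate_per_100k / 100_000 * population_5_to_14
print(extra_deaths)                            # 44.0 -- matches the paper's "44 additional deaths"

# Describing 44 extra deaths as an "18%" increase implies a baseline of
# roughly 44 / 0.18 = ~244 suicides, or about 0.6 per 100,000.
implied_baseline_rate = (extra_deaths / 0.18) / population_5_to_14 * 100_000
print(round(implied_baseline_rate, 2))         # ~0.61 per 100,000

# The preliminary CDC figure for 2005 is 0.7 per 100,000, unchanged from 2004,
# i.e., no sign of the predicted jump of roughly 0.1 per 100,000.
```

The arithmetic is internally consistent; the problem is simply that the observed 2005 rate did not move.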

Again, let me state that these are only correlational data and that data from clinical trials as well as other sources trump these types of studies in any case. At the very least, when doing correlational research, try to control for covariates (other variables of interest), examine trends over a longer time period than one year, and maybe actually run some statistics. Oh, and avoid conclusions that require belief in time travel. There are even more potential problems, but the authors missed so many glaring basic issues that it makes no sense to go any deeper.
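To make concrete what "actually run some statistics" might look like, here is a minimal sketch of the simplest analyses one could report: a correlation between the two annual series, and a regression that controls for the secular time trend. The numbers below are placeholders invented for illustration, not the prescription or suicide data from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical annual series -- placeholder values only, for illustration.
years = np.array([1999, 2000, 2001, 2002, 2003, 2004], dtype=float)
ssri_rx_per_1000_youth = np.array([18.0, 19.5, 21.0, 22.0, 21.5, 21.2])
suicides_per_100k_youth = np.array([4.7, 4.6, 4.5, 4.4, 4.3, 4.9])

# The most basic analysis: is there any association at all between the series?
r, p = stats.pearsonr(ssri_rx_per_1000_youth, suicides_per_100k_youth)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Slightly better: regress the suicide rate on the prescription rate while
# controlling for the time trend, so two series that merely drift together
# do not masquerade as cause and effect.
X = np.column_stack([np.ones_like(years),
                     ssri_rx_per_1000_youth,
                     years - years.min()])
coefs, *_ = np.linalg.lstsq(X, suicides_per_100k_youth, rcond=None)
print("intercept, prescription effect, year trend:", np.round(coefs, 3))
```

Even this is still just correlational, of course; it simply makes the presence or absence of an association explicit instead of leaving readers to eyeball a couple of figures.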

If data based on correlations is going to be trotted out to scare physicians into prescribing more SSRIs, then it should be examined whether the correlations provide even preliminary support for the idea that SSRIs might reduce suicide. I've criticized many studies on this site for a variety of concerns (like here and here, among many examples), and I think the present study is among the worst offenders of basic research methodology. Until we clean up the "science," don't expect much real progress in the mental health treatment world.

Background here and here.

Major Hat Tip: Furious Seasons.

Friday, September 14, 2007

Key Opinion Leader Contradicts Himself


In depression, is there a serotonin deficiency or not? Let’s ask a key opinion leader. Dr. Charles Nemeroff stated in a continuing medical education piece released in March 2007 that

There is a large body of evidence that the serotonin system is awry in depression in many, if not most, patients. There is truly a real deficiency of serotonin in depressed patients.

In the same piece, he stated that

Taking this together, one would suggest that the overwhelming evidence is of a relative deficiency of serotonin in the brains of patients with depression.

Yet in an article published in the Journal of Psychiatric Research in April 2007 (accepted for publication in May 2006), Nemeroff states

It is likely that no single fundamental neurobiological defect underlies severe depression.

Oh, so there is a serotonin deficiency and there is likely not a serotonin deficiency. Now I get it. That clears it up. Sounds like doublethink.

How could one contradict oneself on such an issue? This is a core problem in medicine. If these are the leaders of medicine, the scientific gurus whose opinions are thought to influence the practice of physicians throughout the world, then shouldn’t their thoughts be consistent from one day to the next? My humble guess is that this instance was due to one or both pieces being ghostwritten and the author not checking the final version of the paper. If it has your name on it, then shouldn’t you be responsible for the content of the piece? This is a lesson recently learned through the “commercial piece of crap” incident reported first on the excellent Carlat Psychiatry blog, with a similar incident being discussed on this site. The CME piece mentioned in this post is the same article on which Dr. Nemeroff did not disclose a highly relevant conflict of interest, as reported here.

For more on Dr. Nemeroff, please see this post.

Less SSRI's, More Suicide -- Apparently Not

Now that the 2005 suicide data are available from the CDC (as mentioned yesterday), one can see that despite SSRI prescriptions falling, there was apparently a very slight decrease in suicides. That does not lend credence to the story that decreased SSRI use leads to more suicides. The New York Times (Alex Berenson and Ben Carey) has some nice reporting on the story, including some telling quotes. Here's what Thomas R. Ten Have, a biostatistics professor at the University of Pennsylvania, had to say regarding the latest study that claimed to show a link between decreased SSRI usage and increased suicide rate:
There doesn’t seem to be any evidence of a statistically significant association between suicide rates and prescription rates provided in the paper.
Yet here's what Dr. John Mann, one of the "experts" on the topic and coauthor of the previously mentioned study had to say:
The most plausible explanation is a cause and effect relationship: prescription rates change, therefore suicides change
Too bad the "most plausible explanation" just got shot down. This is just the tip of the iceberg regarding SSRIs and suicide. More to come at a later date. In the meantime, always be wary when someone notes that two variables are related, then claims that one variable causes another. Be especially wary when it turns out that the correlation is inconsistent or does not even exist, or may perhaps even go in the other direction. More to come another time.

Hat Tip: Furious Seasons.

Thursday, September 13, 2007

SSRIs, CDC, and Suicide

Though some people have been asserting with confidence that a decline in SSRI prescriptions has led to an increase in the suicide rate, Furious Seasons has the story that, um, suicide rates were slightly down in 2005 according to data from the Centers for Disease Control. Link to the CDC document here and link to an excellent post at Furious Seasons here.

Many researchers, bloggers, and others have been slamming the FDA for daring to put a black box warning on SSRI's that links the drugs to potential increased suicidal ideation. If fewer people take SSRI's, more people die. Or so the argument goes.

There is indeed some correlational data linking decreased SSRI prescription with increased suicide rates, as well as some correlational data finding no such relationship. Mind you, there is a reason that we all learn in introductory research methods that correlation does not prove that change in one variable causes change in another variable. There are much stronger sources of evidence, which will be discussed at a later date. For now, it is interesting that the suicide rate appears to have fallen slightly in 2005 despite estimates that SSRI prescriptions fell significantly.

Wednesday, September 12, 2007

WikiScanner: Covington & Burling Cleans Up


The law firm Covington & Burling, which has represented both big Tobacco and big Pharma (see here), cleaned up its reputation on Wikipedia. Here's what they deleted, according to a search on WikiScanner...

Mad Cows and Toxic Smoke

In April 2004, the Washington DC newspaper The Hill reported: "Creekstone Farms Quality Beef, which has been battling the U.S. Department of Agriculture to get permission to test its cattle for mad cow disease, has hired Covington & Burling to help it make its case."[2]

At the time, Creekstone was one of two U.S. beef producers who were seeking to resume exports to Japan, South Korea and other countries by testing every head of cattle they processed for mad cow disease.

According to a September 2003 press release from the firm, Covington & Burling successfully argued on behalf of the Southern Peru Copper Corporation to drop a lawsuit brought against it under the Alien Tort Claims Act (ATCA) by Peruvian citizens charging the copper company with polluting communities and causing health problems. ATCA has been used to address serious human rights violations in places like Burma and East Timor. In their release, Covington & Burling decried the "aggressive, expansionist plaintiffs' litigation" under ATCA.[3]

Covington & Burling also served as corporate affairs consultants to the Philip Morris group of companies, according to a 1993 internal budget review document which indicated the firm was paid $280,000 to "serve as general counsel to the Consumer Products Company Tort Coalition, agree the legal objectives with member company litigators, draft legislation and amendments, prepare lobby papers and testimony for legislative committees and administer the coalition's budget". [4]

During the $280 billion U.S. federal lawsuit against big tobacco, Covington & Burling partner John Rupp, a former lawyer with the industry-funded Tobacco Institute, testified that "the industry sought out scientists and paid them to make an 'objective appraisal' of whether secondhand smoke was harmful to non-smokers, a move they hoped would dispel the 'extreme views' of some anti-smoking activists." He "said the scientists, who came from prestigious institutions such as Georgetown University and the University of Massachusetts, did not consider themselves to be working 'on behalf' of cigarette makers even though they were being paid by the industry." Rupp said, "We were paying them to share their views in forums where they would be usefully presented," according to Reuters. [5]

...they also deleted the following...

Halliburton's Lobbying Partner

In 2003 Halliburton hired the firm to lobby Washington on behalf of its KBR Government Operations division, the same division being pummeled by the media, the Pentagon and Congress for its handling of Iraq contracts. Covington & Burling was paid $520,000 to handle "inquiries concerning company's construction and service contracts in Iraq," the firm said in a filing.

According to the filing, Covington & Burling listed the following people as lobbyists for Halliburton/KBR: Roderick A. DeArment, who was chief of staff to now-retired Sen. Bob Dole (R-KS); Martin B. Gold, former counsel to Senate Majority Leader Bill Frist (R-TN); Stuart E. Eizenstat, U.S. ambassador to the European Union during the Clinton administration; Alan A. Pemberton, coordinator of the firm's government contracts practice; David M. Marchick, who served in various posts in the Clinton administration; Jack L. Schenendorf; Peter Flanagan; Jennifer Plitsch; Benjamin J. Razi; and Allegra Lane.

Halliburton's lobbying expenses are disclosed in documents submitted under the Lobbying Disclosure Act of 1995, which requires congressional and executive branch lobbyists to disclose their lobbying activities twice per year. Each year the information is disclosed at the Senate Office of Public Records.

Covington & Burling was kind enough to leave the following text on the Wikipedia site:

Covington & Burling LLP is a leading international law firm with more than 600 lawyers practicing in Brussels, London, New York, San Francisco, and Washington. Founded in 1919, the firm advises leading multinationals on many of their most significant transactional, litigation, regulatory, and public policy matters. The firm has long emphasized the strength of its Corporate and Litigation Practices derived from the firm's industry expertise acquired through its broad regulatory expertise. Representative clients include The National Football League, Microsoft, PBS, and The Washington Post. Covington's pro bono program has been recognized as preeminent in the legal community. As part of its pro bono program, the firm has rotation programs, which allow attorneys and staff to work for six months at three local legal services organizations - Neighborhood Legal Services Program (NLSP), the Children's Law Center (CLC), or Bread for the City (BFTC).

Can you say "whitewash," anyone? To discover more Wikipedia edits, do your own investigation at WikiScanner. In fact, I strongly encourage more people to take a few minutes out of their day and start digging. C'mon, Peter Rost, (among others) you know you want to do some WikiScanner searching!

Thanks to an anonymous reader for passing along the tip on C & B.

Tuesday, September 11, 2007

Links of Note and a Preview

Many good items have appeared of late and I pass them to you below...
  • Adverse drug event reports skyrocket. Furious Seasons has the story.
  • AHRP goes after Lilly's potential blockbuster for schizophrenia. AHRP's take on Dr. Lieberman seems a little harsh, but it's still a good read.
  • The Last Psychiatrist scores points with a hilarious bit on calculating a commonly used medical statistic (featuring the Flock of Seagulls) and also weighs in on Lilly's hopeful new schizophrenia drug (no Zyprexa pun intended).
  • How 'bout some antidepressants for babies? Pharmalot has the scoop.
  • The discredited chemical imbalance theory of depression rears its head again, courtesy of GSK, as reported by Fiddaman.
  • A long overdue link to the Pharma Girls of Reality TV, courtesy of Cary Byrd.
Coming Attractions: There were many other posts of note, and I hope to get to them later. Also, more WikiScanner goodness to pass along in the next day or two. My take on the latest, greatest antipsychotic from Lilly and the next generation of antidepressants, of which "we can expect therapeutic benefits to appear four to five times more rapidly" than current medications, according to one researcher. I also hope to tackle the SSRI-suicide issue in more depth, but that may take some time. Oh, and an example of why I hate psychotherapy research. All that and more (maybe) to come relatively soon.

Monday, September 10, 2007

You REALLY Don't Wanna Publish That, Right?

Due to a lot of hits on a post from March over the past few days, I am providing a link to it since it is apparently becoming a hot commodity. The post is about Zyprexa, and the email of one Lilly employee in which different ways to suppress the study's results were discussed. Naughty. Very naughty.

And Lilly is painting David Egilman as the bad guy for his role in disseminating the now-infamous Zyprexa documents? Gimme a friggin' break! Oh, and did I mention the aforementioned post is based on one of those documents?

CME, Key Opinion Leaders, and Responsibility

Continuing medical education continues to get slammed (1, 2), and somehow the name of Charles Nemeroff keeps finding its way into these incidents. I wrote last week that Nemeroff co-authored a CME piece on which he failed to disclose a conflict of interest regarding CeNeRx, for which he co-chairs the scientific advisory board. The conflict of interest that was not disclosed was quite relevant, as the CME article was a cheer piece for MAOI's, and it just so happens that CeNeRx is in the MAOI business.

Daniel Carlat noted that another CME article upon which Nemeroff was an author has been criticized harshly; this one was dissed by one of its own authors. C. Lindsay DeVane, a coauthor, called it "a commercial piece of crap." Let me state this ever-so-clearly: An author called his own article a commercial piece of crap. This should send shivers up and down your spine -- if authors can't even trust the work upon which their name appears, how the hell are physicians supposed to trust it? Taking it a logical step further, how are patients supposed to trust their physicians if M.D.'s are receiving "education" that is "commercial crap"?

I should mention that to Nemeroff's credit, on the "crap" article, he discloses an interest in CeNeRx.

What is authorship, anyway? Everyone knows that CME articles are quite often ghostwritten to reflect key marketing points. It's actually fairly comical that a lot (perhaps virtually all?) of the CME litter-ature is written by ghostwriters, and then key opinion leaders such as Nemeroff, DeVane, Keller, et al sign off on them. To be fair, it's not necessarily all that different from clinical trial literature, which is also frequently ghostwritten by industry-friendly writers (1, 2 ).

So we're left with a new definition of "author" on a scientific paper or CME piece:

Author: Someone who stamps his/her name on a paper to lend extra scientific credibility to the marketing of whatever product is discussed most positively in the manuscript. Having read the paper, written the paper, or having anything to do with the paper/study whatsoever is entirely optional.


Time: Some have said that the "authors" of these CME pieces are not to blame because those mean CME outfits send the proofs of the articles to be approved by the authors so quickly that the authors don't have time to review them. That is perhaps the WORST argument I've heard in a while. If that happens to an author once, I can understand...

Perhaps you're a well-meaning scientist who is hoping to provide something of value to educate your colleagues in a CME piece. Great. Then the ghostwriters throw together something that might be described as, um, "a piece of commercial crap," email you the manuscript to approve in 24 hours or else their version stands as the final version. At that point, you have hopefully learned your lesson. If you are repeatedly performing this exercise, lending your name to commercials passing for education, in which your own scientific views are not represented by the CME pieces, then you have nobody to blame but yourself.

Of course, there may be some whose views are accurately represented in CME pieces. Good for them. Have dozens of CME articles under your name, by all means. But, by God, let's not have any more statements like "the article with my name on it does not reflect my own views." I sincerely applaud Dr. DeVane for admitting that the CME article in CNS Spectrums is a joke. Now let's hope that he never finds himself in such a position again. Fool me once, shame on you; fool me twice... The worst forms of CME, which are marketing points covered with a very thin veneer of science, can only exist so long as "key opinion leaders" continue to sign their names on such pieces.

On a final note, I'd like to know if Nemeroff and Preskorn, the other two authors on the CNS Spectrums piece, likewise view the article as "crap" and if they would be willing to disavow it. Here's betting they will have not a word to say on the topic.

Also read Pharmalot's great piece on the topic.

Friday, September 07, 2007

Drug Wonks Loses Remaining Credibility

Robert Goldberg of Drug Wonks appears to have flipped his lid. His latest writing, which is purportedly about comparative effectiveness trials, is so riddled with errors that it's bad even by Drug Wonks standards! Making it worse is that Goldberg's tirade appears in the Washington Times as an op-ed. Granted, op-eds are not held to the same standard as regular news, but even throwing in some "truthiness" would have been a nice touch.

For all the gruesome details, head over to Health Care Renewal. You won't believe it. For background on the blog for which Goldberg writes, please read an earlier post.

Thursday, September 06, 2007

Equal Opportunity (For Mormons Only)

An alert reader passed along a job advertisement that blew my mind. It's a fairly standard job description for a faculty position in psychology at Brigham Young University. At the end of the ad is the equal opportunity statement, which reads as follows:
Brigham Young University is an Equal Opportunity Employer sponsored by The Church of Jesus Christ of Latter-Day Saints and requires observance of Church standards. Preference is given to members of the sponsoring Church.
So does this mean that whether you are a white Mormon or a black Mormon, you have equal opportunity, but if you're not Mormon, you're screwed? This does not sound very "equal opportunity" to me. Am I missing something?

Wednesday, September 05, 2007

Rost Busts Pfizer and Journalists

Pfizer is proudly touting a new observational study which found that patients who switched from Lipitor to a generic medication had an increased risk of heart problems. Fantastic -- avoid the generic and use Lipitor. Oh, but wait, as Peter Rost points out...

There is only one teeny weeny problem. The most common reason for switching drugs is because the therapy doesn't work; when the drugs don't have the desired effect. So it is completely expected that patients who were forced to switch had a worse outcome. They may simply be treatment resistant.

And Pfizer knows this.

That's the reason they use a weasel-sentence in their press release, hidden deep inside the text, saying "As with all observational studies, the findings should be regarded as hypothesis generating."

But that has not stopped the so-called health media from running with the story from the "Lipitor saves, generics kill" angle (see several sources on Rost's site). Here's more fuel for the fire:

"The bottom line on this particular study is that the data tell us such switching may not be without consequences," said Michael Berelowitz, senior vice president of Pfizer's global medical division, in a phone interview.

With all due respect, Mr. Berelowitz, it would appear that you are either ignorant on this point or you are lying. An analogy in the mental health field would be if patients who tried Effexor and then switched to a generic tricyclic antidepressant (say, imipramine) were found to have worse depression outcomes than patients who stayed on Effexor. Duh! Again, maybe people who dropped Effexor are treatment-resistant and/or had more severe depression -- medications don't work as well for them. So it would be a pretty stupid comparison to say that those who switched antidepressants were acting dangerously by switching medications, wouldn't it?

Child Bipolar: Youth Gone Wild?

This one has been covered by many other outlets (Furious Seasons, Washington Post, Psych Central, etc.) and I have admittedly little to add. A recent study in the Archives of General Psychiatry found that diagnosis of child bipolar disorder has increased 4000%. No, that is not a misprint. Here's a quote from the article:
While the diagnosis of bipolar disorder in adults increased nearly 2-fold during the 10-year study period, the diagnosis of bipolar disorder in youth increased approximately 40-fold during this period
This is the change from 1994 to 2003. A 40-fold increase in child bipolar diagnoses. I advise all readers to check out the limitations of the study pointed out at Psych Central. Even with some caveats, the results can be accurately described as stunning.

Well, Joe Biederman (1, 2), are you happy about this development? Perhaps the child bipolar crew at Mass General can write another op-ed and defend how work by Biederman and friends has helped to push "awareness" (or is it misdiagnosis?) of child bipolar disorder.

In the meantime, is it the Youth Gone Wild or the Treatment of Youth Gone Wild?

Subthreshold Bipolar: Told You So!


I Said: This is one of those "you heard it here first" moments that you might read about at times on a site like Peter Rost's. On May 10 of this year, I wrote a lengthy post about an Archives of General Psychiatry article that pushed the common existence of a new form of bipolar disorder, which the authors called "subthreshold bipolar disorder." Not only was this condition relatively common -- it required treatment! I strongly encourage you to read the post. Others also chimed in wisely, including Furious Seasons, Polarcoaster, and Dr. X. Ruth introduced us to a hilarious song regarding the new diagnosis. Others were less pleased and threatened to rip me and others new orifices.

I pointed out several notable issues with the study and its conclusions. One of my main issues of contention was as follows:
Now pay close attention. Only 3.2% of people who had a subthreshold diagnosis during their lifetime, but had not experienced an episode during the past year received “appropriate medication maintenance” treatment. WHAT?? Back up. There is scant, if any, data, saying that people with this newfangled diagnosis of “subthreshold” bipolar benefit from short-term treatment and there is not a *blanking* shred of evidence to say that people with “subthreshold” bipolar benefit from treatment with antipsychotics, mood stabilizers, or lithium in the long-term. How the hell did this section sneak through peer review? So it is now officially “appropriate” for people to receive Zyprexa or Seroquel for their “subthreshold” bipolar disorder in the long-term, even when they are experiencing no symptoms? Incredible. The paper also implies that people with bipolar II should receive constant treatment – again, where is the data to support such a recommendation. The long-term data on bipolar I treatment is also not great, but it dwarfs the data on bipolar II and “subthreshold” BP.
They Said: The September issue of the Archives of General Psychiatry contains a correction from the authors, which reads in part:
"...the reference to inappropriate pharmacological treatment of bipolar disorder should have been restricted to bipolar disorders I and II and not included subthreshold bipolar disorder."
In other words, they were wrong to imply that subthreshold bipolar disorder required pharmacological treatment. Well, thank you very much -- I told you so. The authors also noted a couple of small errors in some of the tables, and that disclosures were left out for some of the authors. In addition, I noted earlier that the authors mentioned that "preparation of [the] article was supported by AstraZeneca." The authors have now indicated that "AstraZeneca did not provide any financial or scientific support for this study."

I heartily thank the authors for making the corrections. Errors in tables are easily understandable, but I am still struck that nobody caught the subthreshold bipolar treatment issue, including the authors, the peer reviewers, and the editor. Perhaps that says something about the state of academic psychiatry or perhaps it was just an oversight. One more thank you to deliver. This one goes to Dr. Bernard Carroll, who is credited in the correction with having alerted the study authors to the problem with their statements that subthreshold bipolar required pharmacological treatment.

The Bad News: Here's the problem. News stories have already circulated indicating that subthreshold bipolar is real and requires treatment. Not a single news story will cover the latest turn of events, in which the study authors retract their conclusion that subthreshold bipolar requires drug treatment. The damage has already been done. This reminds me of a post I wrote in June in which I wondered how we could amplify news such as this. A major conclusion of a study is withdrawn, but who will ever know? Let's face it, not many people read the Archives of General Psychiatry (even including psychiatrists), and those who do read it don't usually skim through the bottom of a page at the end of a reference section (where the current correction is located) looking for corrections. Might journals want to post corrections in a more accessible manner? Might the so-called health media want to report on these corrections?