Showing posts with label Neuronetics. Show all posts

Friday, October 30, 2009

Transcranial Magnetic Stimulation for Depression: Not so Effective, but FDA Approved

Apparently, the FDA will approve just about anything as an antidepressant. Despite patients indicating that they don't perceive Abilify to work as an antidepressant, the FDA approved it, likely leading to tens of thousands of Americans being able to enjoy a taste of akathisia while getting all the psychological benefits of a placebo. Good work, FDA. The shift of antipsychotics into antidepressants has been documented in many places and is, ironically, very depressing (1, 2, 3, 4).

The FDA's "anything goes" attitude regarding antidepressants apparently extends to mediocre medical devices. In 2007, a paper in Biological Psychiatry presented results from a large trial comparing TMS to sham TMS. The article concluded that the treatment was a fantastic option for depression. Well, close to that, anyway. The authors actually wrote that "Transcranial magnetic stimulation was effective in treating major depression with minimal side effects reported. It offers clinicians a novel alternative for the treatment of this disorder."

Before all of us poor depressed souls get in line for some sweet magnetic stimulation, maybe we should, like, look at the evidence. On the primary outcome measure, the Montgomery-Asberg Depression Rating Scale, the results weren't quite statistically significant. So the sponsor tried to convince the FDA Neurological Devices Panel that the secondary measures showed super-impressive results. The problem: they didn't. The FDA review panel noted several concerns (its review can be read in its entirety here):
  • The Panel’s consensus was that the efficacy was not established; some stated that the device’s effectiveness was “small,” “borderline,” “marginal” and “of questionable clinical significance.” The Study 01 endpoint with a p value of 0.057 per se was not considered a fatal flaw in the study analysis. The Panel did not believe that clinical significance was demonstrated with these results.
  • In general, the panel believed that the analyses of the secondary effectiveness endpoints did not contribute significant information to help establish the effectiveness of the device.
  • The Panel agreed that unblinding was greater in the active group and, considering the magnitude of the effect size, may have influenced the study results. (35.8% of people receiving TMS reported pain at the application site, compared to only 3.8% in the sham TMS group. That is a quick way to unblind a study, as people experiencing pain could logically surmise that they were receiving TMS.)
  • The Panel stated that there were too many non-random dropouts to reliably interpret these results. The Panel’s consensus was that the Week 6 data was of limited value and did not provide supportive data for establishing effectiveness. (After week 4, patients who did not show adequate improvement were given the option to quit the double-blind study; over half of patients departed the study after week 4).
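The unblinding concern in the third bullet can be made concrete with a quick back-of-the-envelope Bayes calculation. The 35.8% and 3.8% pain rates come from the panel's summary above; the assumption of 1:1 randomization is mine, a rough sketch rather than anything from the FDA documents:

```python
# How informative is application-site pain about treatment assignment?
# Pain rates are from the FDA panel summary; equal arm sizes are assumed.
p_pain_active = 0.358  # pain rate in the active TMS arm
p_pain_sham = 0.038    # pain rate in the sham arm
prior_active = 0.5     # assumed 1:1 randomization

# Bayes' rule: P(active arm | patient reports pain)
posterior = (p_pain_active * prior_active) / (
    p_pain_active * prior_active + p_pain_sham * (1 - prior_active)
)
print(f"P(active | pain) = {posterior:.2f}")  # -> P(active | pain) = 0.90
```

In other words, under these assumptions a patient who felt pain at the application site could be about 90% sure they were getting real TMS, which is exactly the kind of leak that undermines a sham control.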
One more doozy. The following quote comes from a letter to the editor in Biological Psychiatry in which TMS is taken to task:
The authors note that some patient outcome measures were collected in the trial but omitted from the article. Of the 15 secondary end points the authors included in the paper, 11 were statistically significant. Of 11 secondary end points not included, 2 were statistically significant. Thus, the published end points were three times more likely to be statistically significant than the unpublished ones.
TMS was denied FDA approval in January 2007. But in October 2008, the FDA had a change of heart and approved the device. I'm not quite sure what changed the FDA's mind.

The following disclaimer on the device's website is a bit funny:
NeuroStar TMS Therapy has not been studied in patients who have not received prior antidepressant treatment. Its effectiveness has also not been established in patients who have failed to receive benefit from two or more prior antidepressant medications at minimal effective dose and duration in the current episode.
So it's only demonstrated (weak) efficacy in people who have failed one (not zero, not more than one) antidepressant trial. Impressive, eh? To summarize, the sponsor and its affiliated academics wrote a paper in a major psychiatry journal in which positive outcomes were three times as likely to be reported as negative outcomes. The efficacy data were unimpressive according to an FDA panel -- and these panels are not known for being particularly choosy about efficacy data. It seemed that TMS was dead in the water, only to be resurrected in the form of a surprising FDA approval. And if being resurrected from the grave doesn't make for a great Halloween post, then what does?
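The selective-reporting pattern is stark enough to check numerically. Here is a minimal sketch using only the endpoint counts from the Yu and Lurie letter; the Fisher exact test is my own addition, not something the letter computed:

```python
from math import comb

# Significant vs non-significant secondary endpoints, from Yu & Lurie's letter
published = (11, 4)     # 11 of 15 published endpoints were significant
unpublished = (2, 9)    # 2 of 11 unpublished endpoints were significant

rate_pub = published[0] / sum(published)        # ~0.73
rate_unpub = unpublished[0] / sum(unpublished)  # ~0.18
print(f"published: {rate_pub:.0%}, unpublished: {rate_unpub:.0%}")

# Two-sided Fisher exact test for the 2x2 table [[11, 4], [2, 9]]
def fisher_exact_2x2(a, b, c, d):
    """P-value: sum of hypergeometric probabilities <= that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    def p(x):  # probability of a table whose top-left cell is x
        return comb(row1, x) * comb(n - row1, col1 - x) / denom
    p_obs = p(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

print(f"Fisher exact p = {fisher_exact_2x2(11, 4, 2, 9):.3f}")  # -> 0.015
```

The 73% vs 18% rates line up with the letter's "three times more likely" framing (published endpoints were roughly four times as likely to be significant, i.e., about a 300% increase), and the association between being published and being significant is itself statistically significant.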

Offending Study:
O’Reardon, J., Solvason, H., Janicak, P., Sampson, S., Isenberg, K., Nahas, Z., McDonald, W., Avery, D., Fitzgerald, P., & Loo, C. (2007). Efficacy and Safety of Transcranial Magnetic Stimulation in the Acute Treatment of Major Depression: A Multisite Randomized Controlled Trial. Biological Psychiatry, 62(11), 1208-1216. DOI: 10.1016/j.biopsych.2007.01.018

Letter to Editor:
Yu, E., & Lurie, P. (2009). Transcranial Magnetic Stimulation Not Proven Effective. Biological Psychiatry. DOI: 10.1016/j.biopsych.2009.03.026

Thursday, March 01, 2007

Advertising as Education: CME

When physicians become licensed to practice medicine, they must continue to stay informed regarding the wide variety of treatments and procedures available to their patients. To ensure that doctors stay informed, it is required that they receive “continuing medical education,” which theoretically keeps physicians updated about the latest developments in their specialty area. So far, so good. But what, exactly, is continuing medical education (CME)?

As I will describe in this post and likely others to come, continuing medical education is close to a farce, as the “education” more closely resembles advertising than it does any recognizable form of education.

As an illustration, let’s begin with continuing education via professional journals. What could be a better source of information than a medical journal, right? These journals are supposedly the beacons of science, yet they prostitute their standards in a manner that leads to the miseducation of physicians, which in turn likely leads them to prescribe more expensive (and at times riskier) treatments that offer few, if any, benefits over older treatments.

Case in Point: Journal of Clinical Psychiatry. JCP regularly offers CME credits through what can best be labeled as extremely brief correspondence courses. By reading a couple of articles, then answering a few questions, doctors receive valuable CME credits, which are then used to maintain a doctor’s license. JCP is far from the only journal that engages in this practice.

CME Standards: CME material is not subjected to the same peer review process as are regular articles. Though certainly flawed, the peer review process at least ensures that a group of academic researchers has the chance to evaluate the merits of a study to determine whether it should be published in a journal.

One of the standards regarding the commercial sponsorship of CME states:

The content or format of a CME activity or its related materials must promote improvements or quality in healthcare and not a specific proprietary business interest of a commercial interest.

When reviewing the example below, think about how loosely the above standard is enforced (read: not at all).

An Example -- Transcranial Magnetic Stimulation (TMS): In the February 2007 supplement to the Journal of Clinical Psychiatry, one of the CME options, which appears quite ironically under the heading of “Academic Highlights,” is titled: Transcranial Magnetic Stimulation: Potential New Treatment for Resistant Depression.

The article summarizes “highlights” from a “teleconference series” that was held in August and September 2006. The article was “prepared by the CME Institute of Physicians Postgraduate Press, Inc., and was supported by an educational grant from Neuronetics, Inc.”

The teleconferences were chaired by Alan Schatzberg of Stanford, and the faculty at these teleconferences were: Mark Demitrack of Neuronetics [which manufactures the NeuroStar TMS device], John O’Reardon of the University of Pennsylvania, Elliot Richelson of the Mayo Clinic, and Michael Thase of the University of Pittsburgh.

Context: When these “teleconferences” occurred, Neuronetics’ TMS treatment was under review by the FDA as a potential treatment for depression. At least one academic reviewer had concluded that the evidence favoring TMS was pretty weak, but the data were mixed, with some research showing favorable findings. Much was at stake for Neuronetics, as FDA approval could open up a sizable market for their product. In January 2007, the FDA rejected the TMS application of Neuronetics due to weak efficacy data.

Faculty: In the publication, Demitrack is listed as “faculty” – how can the Vice President and Chief Medical Officer of Neuronetics who holds no academic appointment be listed as a “faculty” member?

Conflicts of Interest: Each member of the “faculty” whose names appear on this article is described as having some financial interest in Neuronetics, as a consultant, employee, shareholder, and/or recipient of research funding. Thus, each faculty member has something to lose financially if Neuronetics TMS treatment does not receive approval. Should Neuronetics falter financially, the company would be less able to fund research, would see a declining stock value, and would have less cash to offer consultants. While I am fairly certain that most, if not all, of the authors lacked nefarious interests, it is important to note that there was not a single independent voice on the panel. In CME articles such as this, however, this is just par for the course.

Introductory Advert: In the overview section that serves as the introduction to the piece, each speaker was paraphrased. Demitrack (Chief Medical Officer of Neuronetics) was paraphrased as saying:

Transcranial magnetic stimulation has shown promise within the device-based platform of interventions because it is an effective, noninvasive procedure; however, at the present time, TMS therapy has not yet received U.S. Food and Drug Administration approval.

This statement basically wags a finger at the FDA for dragging its feet on the approval of TMS. Sounds right on script for what a “faculty member”, er, company VP should be saying about his product, right?

Richelson is paraphrased as saying:

Modulating neurotransmission to specific brain areas through highly focused magnetic pulses (rTMS) may reduce or even eliminate the depressive symptoms associated with specific brain areas.

This statement goes well beyond the data – there are no hard data showing conclusively that any treatment eliminates the depressive symptoms associated with specific brain areas. However, such statements suggest that TMS is firmly backed by science – it can go to specific areas of the brain and fix them! It's just a newer version of the hackneyed chemical imbalance theory of depression – we know exactly what is wrong with your brain, and our treatment can fix it. Same story, different treatment.

Body of Article: Among other points, the article suggests that TMS should be considered as a treatment option for depressed patients who have not seen improvement in symptoms after trying a couple of different medications. My favorite statement in the article was based on comments from “faculty member” Demitrack:

TMS seems to provide the promise of at least equivalent efficacy and, in some instances, perhaps better efficacy and an improved tolerability profile compared with continued, more complex pharmacotherapy.

His statement is pure speculation – there is no research directly comparing medication (or psychotherapy) to TMS, but that did not get in the way. To be clear, I am not stumping for drug treatment here – I have written on several occasions about the limitations of drug treatment for depression (1, 2, 3, 4, 5). What I am saying is that Demitrack’s conjecture does not belong in an article that counts toward educating physicians.

Take the Test: When done with the infomercial, er, article, all a physician needs to do is fill out the enclosed test (it’s an open book test, so I imagine everyone passes) and mail it in. Physicians can even complete the test online.

Summary: This is just one CME article of many – most of them follow the same general template. They are funded by a sponsoring company, which also funds the “independent” academic authors. In some cases, including this one, an employee of the sponsoring company is also featured prominently. A medical writer may then write up much or all of the article.

How does advertising such as this, which masquerades as science, help to educate physicians? Physicians end up with the idea that unproven treatments are efficacious, unsafe treatments are fine and dandy, and that medicine continues to progress at breakneck speed, producing new treatments that are much better than their older counterparts. And this helps patients… HOW?

Friday, January 26, 2007

Neuronetics Swings... AND MISSES!

Neuronetics' transcranial magnetic stimulation was rejected by the FDA's advisory panel today. For a couple of stories on the panel's discussion and recommendation, try here and here.

According to one report:
The majority of the panel—made up of an engineer, several psychiatrists and neurologists, and a statistician—had no problem with rTMS's risks. There are almost none. The biggest worry with it is that it might accidentally spark a seizure, but that did not happen even once out of the 155 patients treated. The problem was that Neuronetics couldn't prove any benefit. Treated patients got a little better, but so did those patients that underwent a sham treatment.
According to another source:

But the panel was generally unimpressed with the company's data, which showed a slight statistical advantage in depression symptoms over dummy therapy after six weeks of treatment. Several panelists expressed dismay that patients showed no improvement on some depression scales and only minor improvement on ones that showed a difference.

"The panel seems to be in consensus that the primary analysis did not establish efficacy," said Thomas Brott, the committee's chairman.

"Perhaps a reasonable person could question whether there has been an effect at all," said Brott, a neurologist from Mayo Medical School in Jacksonville, Fla.

The panel did not formally recommend to FDA whether or not the machine should be approved. But the agency scientists suggested at a public hearing that they were also uneasy with the company's results.

Ann Costello, an FDA medical reviewer, questioned whether the mixed evidence of effectiveness in Neuronetics' studies contained "any clinically relevant information."

Peter Lurie, deputy director of Public Citizen's Health Research Group, told the panel that Neuronetics did not show that its device was substantially equal to ECT, a standard that many medical devices must meet for FDA approval. He focused on the fact that patients actively treated with the machine showed mild improvement on only one of three depression scales.

"The magnitude of the finding is trivial from a clinical point of view," he said in an interview.

Maybe the FDA panel learned something from the VNS approval (here and here)?