Tuesday, September 30, 2008

Update: And Now, A Drug Recommendation From Our Sponsor

A few days ago in this post (1), I highlighted the favoritism shown to the new drug ziconotide (Prialt) in the 2007 Polyanalgesic Consensus Conference (2). I decided to do more research. In the 2003 Polyanalgesic Consensus Conference article (3), ziconotide is mentioned 21 times, compared to 84 times in the 2007 article. Ziconotide also received a special mention in the 2003 article (although not in the abstract or conclusion). This was before the drug was FDA approved, yet it received this endorsement:

"This drug is undergoing review by the U.S. FDA. If it becomes available in the United States and elsewhere, its position within the algorithm is likely to evolve as experience in the clinical setting accumulates. The panel noted that the extensive preclinical and clinical data obtained as part of a formal drug development program exceeds the data available for other drugs used for intrathecal infusion and will probably lead to placement of ziconotide on an upper line of the algorithm, unless accumulating experience suggests a narrow therapeutic index in practice."

"Extensive preclinical and clinical data." In this 24 page article, only three ziconotide articles are cited. One was conducted on rats. One is a case study. There is only one controlled study, which I will talk about later. Even though I have not read the two other studies, rats and a case study hardly constitute "extensive" data. There is one more reference, but it's useless as it's this "84. Unpublished data." There's no way I can double check that one. I guess I'll just take their word for it.

In the 2007 article, one rationale for ziconotide being "recommended as a Line 1 drug in this algorithm...comes from substantial data from preclinical and clinical studies." So the data have grown from "extensive" to "substantial." Or were they downgraded from "extensive" to "substantial"? I'm not quite sure which is better.

Clearly, the excitement for ziconotide was brewing before it was FDA approved. The question is, was all this enthusiasm based on the quality of the available research? I'm not convinced. A search on clinicaltrials.gov (4) yields only three trials. I don't think that qualifies as either "extensive" or "substantial."

In December 2005, The Medical Letter (5) reviewed the literature on ziconotide available at the time. Only one study reviewed by The Medical Letter is also reviewed in the 2007 Polyanalgesic Conference article. All the other studies in the 2007 article were published in 2006. One is a case study of a 13-year-old girl (6). Another is a literature review (7). Yet another is a series of case reports (2 patients; 8). That's hardly gold-standard research (but is it substantial?). Only three of the studies cited are randomized controlled studies (9, 10, 11). So how does a group of people not receiving money from Elan feel about ziconotide?

"Ziconotide (Prialt), a new intrathecal nonopioid analgesic, lowered pain scores in some patients when added to standard therapy for refractory severe chronic pain. Development of tolerance to the analgesic effects of ziconotide was not reported during clinical trials, but its long-term effectiveness and tolerability are unknown. Serious psychiatric and CNS adverse effects have occurred and may be slow to resolve after discontinuation of the drug." [my emphasis]

That's hardly a resounding endorsement. However, The Medical Letter issue appeared about one year before those three randomized trials were published. Is there any reason to doubt their findings? Two were funded by Elan. The third was funded by both Neurex (which was bought by Elan) and Medtronic. As I mentioned before, industry-sponsored studies are more likely to generate positive results than independently funded studies (12). Remember that Elan provided "generous financial support" for the 2007 conference where ziconotide was anointed as a 1st-line agent. What about the 2003 conference? Who was the sponsor of that? Medtronic! The same Medtronic that co-sponsored this study (9), which has a very interesting history, by the way.

The lead author is Peter Staats (don't forget that name). It was published in JAMA in 2004. However, the study was conducted between 1996 and 1998. For a reason I still can't figure out, the data went unpublished for over 5 years. Strangely, during that five-year period, multiple articles on ziconotide were published, including this study (13), published in 2000. In this study by Vandana S. Mathur, two studies that were unpublished at the time are reviewed. One was a study on "malignant" (i.e., cancer & AIDS) pain and the other on "nonmalignant" pain. Interestingly, the study published by Staats et al in 2004 (9) is on people with cancer and AIDS, and this study (Wallace et al, 11) is on patients with nonmalignant pain. In Mathur (2000), the malignant study has 112 patients, as opposed to the 111 in the Staats et al study. The Wallace et al study lists 255 subjects, while the Mathur study lists 256. The change in VASPI scores from baseline to the end of the initial titration period is the primary measure in each study. In the Staats study, the mean improvement in VASPI was 53.1% for ziconotide and 18.1% for placebo. In the Mathur study, the results are 53.1% for ziconotide and 18.1% for placebo. In the Wallace et al study, mean improvement was 31.2% for ziconotide and 6% for placebo. In the Mathur study, mean improvement was 31% for ziconotide and 6% for placebo. Isn't it interesting how similar these results are? It's as if these data were published more than once. As I learned last week, that's a big no-no (14).
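
To make the overlap easier to see, here's a minimal Python sketch laying the figures side by side. It uses only the numbers quoted in this post, not anything taken from the papers themselves:

```python
# Side-by-side of the reported primary outcomes (mean % improvement in
# VASPI, drug vs. placebo), using only the numbers quoted in this post.
reports = {
    "malignant pain":    {"Mathur 2000": (53.1, 18.1), "Staats 2004": (53.1, 18.1)},
    "nonmalignant pain": {"Mathur 2000": (31.0, 6.0), "Wallace 2006": (31.2, 6.0)},
}

for population, pair in reports.items():
    (name_a, vals_a), (name_b, vals_b) = pair.items()
    verdict = "identical" if vals_a == vals_b else "nearly identical"
    print(f"{population}: {name_a} {vals_a} vs. {name_b} {vals_b} -> {verdict}")
```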

As it turns out, these data were published more than once (15). Dr. Staats' reply is here (16). In his reply he stated, "I was involved from the outset of the ziconotide trial, including the design, collection, and analysis of data and the drafting of the initial manuscript." He continued, "The initial draft of our manuscript was sent to Neurex/Elan (the sponsors of our trial) by my coauthors and me around 1998, well before Dr Mathur, a former employee of Elan, published her review in June 2000." He added, "Her submission was sent without my permission or notification and without citing my coauthors or me."

As it turns out, though, "Neurex Pharmaceuticals apparently condoned the publication of Mathur’s article since it provided access to the data file and signed off on the article by Mathur" (15). Moreover, "the article by Staats et al does include a citation to the article by Mathur." A citation? How could that be? Well, according to Staats, "our in-house manuscript used a reference to an abstract by Brose et al, and in our successive revision process the reference was augmented by the citation of the article by Mathur. When this occurred I assumed that the Mathur reference had been added simply to update the Brose reference we continued to use, adding nothing new. My assumption was not correct; I should have read the manuscript." [my emphasis] In a previous post (17) I highlighted the importance of reading the studies you cite. If only this blog had existed back then (sigh...).

Staats continued with "The system for preventing these infractions broke down...in the present case the previous publication was in a nonindexed medium and none of the authors knew that the review included data from our work." I'm not sure what a "nonindexed medium" is (presumably a journal not indexed in databases like MEDLINE). Regardless, I'm not sure how valid an excuse that is, since Staats is one of the many co-authors of the 2003 and 2007 Polyanalgesic Conference articles. Shouldn't Dr. Staats have come across that study when he was supposedly reviewing the literature on ziconotide in preparation for the 2003 conference?

At the bottom of Staats' article, this is printed regarding the sponsor: "The sponsor was responsible for the overall conduct of the study and the collection, analysis, and interpretation of the data obtained...the preparation and review of the manuscript were a joint effort among the authors, the sponsor, and a contract medical writer." I wonder how involved the "authors" actually were in writing and reviewing this article.

As it turns out, Mark Wallace was one of the many co-authors of Staats' study and of the 2007 polyanalgesic article. He was also the lead author of the 2006 nonmalignant study. If only Wallace had been on the 2003 polyanalgesic committee, perhaps he would have seen that Mathur study before he re-published those data. (To date, I have not found any "duplicate publication" notices for this study, 11.) The funny part is, Wallace cites two studies on which Mathur is an author, yet somehow her 2000 article containing his data was not included.

So perhaps ziconotide is not the wonder drug that some have made it out to be. Also, according to The Medical Letter, a 30-day supply of ziconotide costs approximately $4200. If you don't have an intrathecal device installed, that procedure costs approximately $20,000. That's quite a price tag. Good thing all these decisions about ziconotide were made by unbiased people who check and re-check their work and base their decisions on hard science..., right?

Update: From Bad Science (12)

Saturday, September 27, 2008

And Now, A Drug Recommendation From Our Sponsor

Today I received my summer issue of The Pain Practitioner (1), and just like a box of Cracker Jack (2), there was a prize inside. Actually, there were two prizes (Oh goodie!). I received two CME booklets (Score!). At the bottom of each booklet was this sentence: "Supported by an unrestricted education grant from..."

For those of you who may not be well versed in pharmaceuticalese, when the phrase "unrestricted education grant" is translated into English, it means restricted (i.e., biased) education grant. Kind of the same way "bad" means "good" and "it's not you, it's me" means "it's you."

In the CME booklet that was "supported by an unrestricted education grant from Elan," two articles are included. The second article is titled "Interventional Modalities for the Treatment of Refractory Neuropathic Pain," by Lynn R. Webster (can you guess the drug company to which he is a consultant?). Anyway, this article is about implantable therapies for people with severe chronic back pain. In the last section of the article, he reviews the recommendations from the 2007 Polyanalgesic Consensus Conference (3).

The purpose of this conference was "to update previous recommendations and to form guidelines for the rational use of intrathecal opioid and nonopioid agents." These recommendations were made by an "expert" (I use that word loosely) panel of physicians and nonphysicians in the field of intrathecal therapies (i.e., spinal injection).

As a neuropsychologist, I often assess people's reasoning skills, specifically deductive reasoning. Here's your test. Don't worry, it's only one question long. In the polyanalgesic article, the respective literatures of 20 different drugs were reviewed. In addition to opioid therapy, a new drug was christened as a first-line monotherapy. What is the new drug that was recommended as a monotherapy?

Here are a couple of hints. This sentence appears in the acknowledgments section: "The authors would like to acknowledge Elan Pharmaceuticals for its most generous financial support of the consensus conference and 'hands off' approach to the final writing of this article."

Second hint: on Elan's website, there are four drugs marketed in the U.S. (4). It's one of those four. Still not sure which drug to choose? I'll list the top 5 drugs (out of the 20 reviewed in the polyanalgesic article) by number of mentions.

1. Morphine - mentioned over 160 times
2. Prialt (ziconotide) - 84 mentions
3. Clonidine - 58 mentions
4. Hydromorphone - 50 mentions
5. Adenosine - 50 mentions

If you guessed Prialt (ziconotide), then you're correct. Now, I'll admit that I have not read this article in its entirety (it's 29 pages), and I have not reviewed all the literature on the effectiveness of ziconotide; however, I'm not making any claims about its effectiveness or utility (that's outside the scope of my practice). It very well could be worthy of its first-line treatment status. However, a few curiosities continue to bug me.

Morphine has been around for a long time, so 160 mentions in the article makes sense. Its literature is quite extensive. Ziconotide was approved by the FDA in December of 2004, yet it received the second-highest number of mentions at 84 (repetition is the key to memorization; repetition is the key to memorization; repetition is the key to memorization...). If you average the number of mentions across the remaining 15 drugs, you get 19.7; ziconotide's count is more than four times that (yes, I actually counted all mentions for all 20 drugs). The arithmetic is sketched below.
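
For the curious, here's that ratio as a trivial Python check; it uses only the counts listed above (morphine's exact count isn't given, just "over 160"):

```python
# Mention counts reported above (top 5 of the 20 drugs reviewed).
mentions = {
    "morphine": 160,       # listed only as "over 160"
    "ziconotide": 84,
    "clonidine": 58,
    "hydromorphone": 50,
    "adenosine": 50,
}

avg_remaining = 19.7  # stated average for the other 15 drugs
print(mentions["ziconotide"] / avg_remaining)  # ~4.26, i.e., more than 4x
```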

Moreover, it is the only drug mentioned in the abstract, which states "Of note is that the panelists felt that ziconotide, based on new and relevant literature and experience, should be updated to a line one intrathecal drug." I see, so the decision had everything to do with the "relevant literature" and clinical "experience," and nothing to do with the "generous financial support" from Elan.

Here is where they use psychology on their readers: of the 20 drugs reviewed, ziconotide was reviewed last, taking advantage of the recency effect. Also, its section is the longest of any drug's (morphine's was a close second). Did I mention that ziconotide is also the only drug mentioned in the conclusions section? I didn't? Well, it is. Maybe Elan was a little more hands-on than the authors are willing to admit. In the CME article written by Dr. Webster, ziconotide is the only drug that receives its own section as well (remember, this is an "unrestricted" education grant).

At the end of the CME booklet there are 15 questions. The last 7 questions are derived from Dr. Webster's article. Can you guess which drug had its own questions?

Question 14 reads, "According to the expert panelist of the 2007 Polyanalgesic Consensus Conference, which of the following is a recommended first-line agent for pain?"
a. clonidine
b. bupivacaine
c. ziconotide (<- correct answer; like you didn't know that one)
d. baclofen

Question 15 reads, "According to this article, which of the following is true regarding ziconotide?"
a. It must be titrated slowly (<- correct answer)
b. It must be discontinued slowly so that the patient does not experience withdrawal syndrome.
c. The primary side effects are cardiovascular.
d. The primary side effects are respiratory.

For some strange reason, I'm not quite convinced by this "expert" panel's recommendations. I know that Elan supposedly had a "hands off" approach to the "final" writing of the article, but nothing is said about the conference itself. How "hands off" was Elan during the planning of that? The fact that their four-year-old drug made it to the top based on "expert" opinion is a hard pill to swallow (technically, it's not a pill, it's a liquid). I guess that means if Elan had not provided "generous financial support" for that conference, the "expert" recommendations would have been the same. Right?

Wednesday, September 24, 2008

Proof of Evolution

This is a short documentary about the psychopharmacologist.

The psychiatrist (lt. shockus electricus) is an endangered species. The environmental mechanism of their demise is still unknown. While there have been some efforts to save this animal, their numbers continue to dwindle. However, a new species seems to have evolved from the psychiatrist. This new creature is called the psychopharmacologist (lt. prescribus pillus). While psychiatrists are scattered throughout the North American continent and still appear to be thriving in some parts of Europe, psychopharmacologists have developed large breeding populations around coastal cities, as they seem to thrive in urban environments.

Seriously, psychopharmacologists are the only mainstream doctors (I'm lying, that's actually not true; see comments) whose title reflects how they treat (i.e., drugs) instead of what they treat (i.e., mental illness). An endocrinologist does not prescribe endocrines to patients. An immunologist is someone who studies the immune system. Psychopharmacology, on the other hand, is the study of drug-induced changes in mood, sensation, thinking, and behavior. That is quite different from psychiatry, which studies how to prevent and treat mental illnesses. At least the title "psychopharmacologist" tells us where their interests lie (it's in the drugs, not the patients).

News Flash: Hot Flashes Are Treated By...Everything!

...Well, I don't know that for certain, but that's what I thought when I read this headline: "Acupuncture Reduces Side Effects Of Breast Cancer Treatment As Much As Conventional Drug Therapy, Study Suggests" (1). According to this "first-of-its-kind study," acupuncture is as "effective and longer-lasting in managing the common debilitating side effects of hot flashes, night sweats, and excessive sweating (vasomotor symptoms) associated with breast cancer treatment." What is the conventional drug therapy to which acupuncture was compared? Effexor, of course (that's his slave name; he prefers to be called venlafaxine).

Are there any other scientifically validated treatments for hot flashes and other vasomotor symptoms? Well, according to these people (2), "Soy seems to have modest benefit for hot flashes, but studies are not conclusive. Isoflavone preparations seem to be less effective than soy foods. Black cohosh may be effective for menopausal symptoms, especially hot flashes, but the lack of adequate long-term safety data (mainly on estrogenic stimulation of the breast or endometrium) precludes recommending long-term use. Single clinical trials have found that dong quai, evening primrose oil, a Chinese herb mixture, vitamin E, and acupuncture do not affect hot flashes; two trials have shown that red clover has no benefit for treating hot flashes." [my emphasis]. Did I read that correctly? Acupuncture did not affect hot flashes? That must be some sort of fluke. Except that these people (3) and these people (4) both found that acupuncture was no better than sham treatment.

Assuming that my thought process is linear: acupuncture is as effective as venlafaxine, yet acupuncture is no better than sham treatment. So that would mean..., venlafaxine is no better than sham either. So what does the evidence say? Well, venlafaxine has these three open-label trials (5, 6, 7), all of which were...drum roll please...positive! (I'm shocked.)

Science Lesson 1: Open-label studies are pointless. 98% of all open-label trials are positive. Sounds pretty high, right? That's probably because I just made that statistic up. But when you have no comparison group, and all parties involved know about the treatment, positive results are the rule, not the exception.

Anyway, I was able to find this one randomized controlled study (8), which lasted for a staggering 4 weeks and showed that venlafaxine was superior to placebo pill. Side effects of the venlafaxine treatment included "mouth dryness, decreased appetite, nausea, and constipation."

Science Lesson 2: When a drug has side effects, patients and doctors can accurately guess whether or not the patients were given placebo, thus breaking the blinding. Secondly, not all placebos are created equal. Pill placebo is less effective than capsule placebo, which is less effective than injection placebo. Also, if you put a sticker price on the placebo, the more expensive placebo outperforms the cheaper placebo (9). Lastly, another way to boost the placebo effect is to give a placebo that actually has side effects (10).

So quit jerking me around: does venlafaxine work or not? And while we're at it, do fluoxetine (11) and paroxetine (12) work too? Well, according to this meta-analysis (13) published in 2006, there are a total of 7 trials that compared either SSRIs or SNRIs to placebo. In only 3 of those 7 trials (43%) was the drug superior to placebo. And those are only the published studies. Who knows what negative studies have not been published (14). Well, what about the science? There has to be some sort of biological explanation, right? According to the former psychiatrist and current psychopharmacologist Stephen Stahl, "It may be that actions on both the serotonergic and noradrenergic systems are required to improve these (vasomotor) symptoms" (pg. 626). There's just one problem with that theory. In the controlled trial of venlafaxine, the highest dose given was 150mg. At that dose, venlafaxine is barely an SNRI. And since the 150mg dose was no better than the 75mg dose, that does not lend much support to Dr. Stahl's theory.

Science Lesson 3: A theory is supposed to lead to a hypothesis, which leads to data, which lead to revising the theory. In pharmacology, however, you discover that a drug has an effect by accident; then you assume that the mechanism of that drug is why there was an effect. The thought process seems to stop there, as these people tend to ignore contradictory evidence. Drug treatments for depression have these actions: SSRI, SNRI, DRI, NRI, alpha-2 antagonist, cortisol antagonist, CRH antagonist, blah, and blah. So the theory is that depression is caused by, and treated by, all these chemicals. Makes perfect sense.

So serotonergic drugs have some effect on hot flashes. But so do calcium channel blockers (15), CBT (16), and balancing your yin and your yang. And to be honest, I'm not even going to pretend that I know how black cohosh and soy work. So maybe my first thought was correct: hot flashes are treated by everything!

My long-winded point is this: When you have a condition, such as depression or hot flashes, that is highly subjective and has minimal to no reliable and objective identifiers, everything under the sun can be shown to have a positive effect. Everything, that is, except magnets (17).

Thursday, September 18, 2008

Psychotherapy Research Is Lame Too

WARNING: THE FOLLOWING POST CONTAINS A LOT OF NUMBERS. HEADACHE, NAUSEA, DIZZINESS, AND VOMITING MAY OCCUR.

In my previous post, I criticized the manner in which psychotherapy research is conducted. In this post, I discuss four (2 psychotherapy & 2 medication) studies. The emphasis will be on the populations used and how the external validity of the results (i.e., their generalizability) is greatly compromised. The actual results of these studies won't be discussed; however, a reference for each study is included. A specific type of psychotherapy known as behavioral activation therapy (BAT or BA) has this acute major depression treatment study (1) and this two-year relapse prevention study (2). Aripiprazole has this 26-week bipolar I maintenance study (3) and this 74-week extension study (4).

The first BA study initially included 388 subjects who completed a comprehensive intake assessment. Based on the exclusion criteria presented in this post (5), 250 (64%) subjects were eligible for randomization; however, 9 declined participation. In all, 241 (62%) were included in this study. The majority were excluded because of "subthreshold" or "low severity" depression. What we have left are 241 subjects with pure MDD with moderate to severe symptoms. Those subjects were randomized to one of four arms: cognitive therapy (45), behavioral activation (43), paroxetine (100), or placebo (53). At the end of 16 weeks, 172 (71%) subjects had completed the study. By treatment arm, 39 (86%) of the CT group completed the study, followed by 36 (83%) for BA, 56 (56%) for paroxetine, and 41 (77%) for placebo. One important caveat: the placebo arm was dissolved after week 8, which means that for the remaining 8 weeks there were only three active arms. If you subtract out the placebo arm, then only 188 (78%) patients with MDD received active treatment, and 131 of them completed the study (54% of all those randomized). Approximately 1 out of 2 subjects lasted 16 weeks.
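
Since that's a lot of fractions in a row, here's a quick Python sanity check of the percentages; a sketch using only the subject counts reported above:

```python
# Attrition in the acute BA study, using only the counts reported above.
def pct(part, whole):
    """Percentage of `whole` represented by `part`, to one decimal place."""
    return round(100 * part / whole, 1)

screened, randomized, completers = 388, 241, 172
placebo_n, placebo_completers = 53, 41

print(pct(randomized, screened))    # 62.1: the "62%" randomized
print(pct(completers, randomized))  # 71.4: the "71%" who completed

active = randomized - placebo_n                      # 188 on active treatment
active_completers = completers - placebo_completers  # 131 active completers
print(pct(active_completers, randomized))            # 54.4: "1 out of 2"
```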

In the first aripiprazole study, 633 recently manic subjects were recruited. After the exclusion criteria were applied, 567 (89%) made it into the initial open-label 6- to 18-week stabilization phase. The study does not state why the other 11% were excluded. It's also important to note that 333 of the 567 subjects came from an aripiprazole acute mania study (i.e., 58% of the subjects included were already shown to be responsive to aripiprazole). By the end of the stabilization phase, 361 (63%) subjects had discontinued (primarily for side effects, 22%), while 206 (36%) remained in the study. Of those 206 subjects, 161 participated in the 26-week maintenance phase. In simpler terms, only 28% of the original 567 advanced to the actual area of investigation. 83 subjects were randomized to placebo, and the other 78 were randomized to aripiprazole. At the end of 26 weeks, 28 subjects completed the placebo arm (34%), and 39 subjects completed the aripiprazole arm (50%). That's a total of 67 subjects, meaning 88% of the 567 subjects who entered the study dropped out. Only 41% of the subjects who advanced to the 26-week phase remained in the study.

In the second BA study, subjects who responded to acute treatment were eligible for this continuation study. Of the 172 subjects who finished the previous study, 106 (61%) were included. The subjects who originally received CT or BA did not receive continued treatment during the two-year follow-up period. The subjects who had received paroxetine were re-randomized to either continued medication or placebo. After the first year of follow-up, the paroxetine group was tapered off treatment and the placebo group was dropped from the study. At the end of year one, 55 (51%) subjects had either dropped out or relapsed (9 relapses occurred in each of the three treatment arms, 12 in placebo). At the end of year two, only 46 (43%) subjects had completed the study. Since the placebo arms were dropped at the halfway point of each study (thus limiting comparisons with active treatments), only 167 (69%) subjects actually received active treatment, and only 27% of those subjects completed the final study.
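
The same sanity check for the continuation study, again using only the counts reported above (the percentages in the text are truncated; these prints keep one decimal):

```python
# Attrition in the two-year BA continuation study, counts as reported above.
acute_completers = 172
entered = 106
year_two_completers = 46
active_treatment_total = 167  # received active treatment across both studies

print(round(100 * entered / acute_completers, 1))                    # 61.6
print(round(100 * year_two_completers / entered, 1))                 # 43.4
print(round(100 * year_two_completers / active_treatment_total, 1))  # 27.5
```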

In the second aripiprazole study, 66 of the 67 subjects who completed the 26-week study advanced to the 74-week maintenance phase. 27 subjects were from the original placebo arm, and 39 were from the original aripiprazole arm. By the end of the study, 22 (81%) of the placebo subjects had discontinued, while 32 (82%) of those treated with aripiprazole had dropped out. A grand total of 12 subjects out of the initial 567 completed the entire study. That's a paltry 2%. If you want to be more liberal and count only those subjects who entered the 26-week study (161), then a ginormous 7% completed the study. That's sad when you remember that 333 subjects had already been shown to respond to aripiprazole, and the remainder were stabilized on it.
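
Here's the whole aripiprazole funnel in one place; a sketch that just re-derives the percentages from the counts quoted above:

```python
# The full aripiprazole attrition cascade, using the counts reported above.
entered_stabilization = 567  # open-label stabilization phase
randomized_26wk = 161        # 83 placebo + 78 aripiprazole
completed_26wk = 67          # 28 placebo + 39 aripiprazole
completed_74wk = 12          # 5 placebo (27-22) + 7 aripiprazole (39-32)

milestones = [
    ("advanced to the 26-week phase", randomized_26wk),
    ("completed the 26-week phase", completed_26wk),
    ("completed the 74-week phase", completed_74wk),
]
for label, n in milestones:
    print(f"{label}: {n}/{entered_stabilization} = "
          f"{100 * n / entered_stabilization:.0f}%")  # 28%, 12%, 2%

print(f"completers vs. 26-week entrants: "
      f"{100 * completed_74wk / randomized_26wk:.0f}%")  # the "ginormous 7%"
```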

What I am trying to illustrate is how unimpressive these numbers actually are. Large percentages of subjects are lost even before the actual studies begin. Secondly, since these studied populations are not representative of actual clinical populations, the positive results in these studies are pretty meaningless. Prospective studies that follow subjects for one to two years are quite rare. However, when they are done, it's striking how few subjects actually complete them. So BA was shown to be comparable to paroxetine after 16 weeks; but since the placebo arm was dropped after week 8, we have no meaningful comparison group. Although aripiprazole was shown to be a maintenance treatment, the numbers were so small in the end that the results become moot. Sadly, as far as these studies go, this is as good as it gets.

Psychotherapy Research Is Lame: Part 1

Over at PsychCentral there is a post (1) titled "Cognitive Behavioral Therapy Best to Treat Childhood Trauma." It reports the findings of a recent meta-analysis, which states that "strong evidence showed that individual and group cognitive–behavioral therapy can decrease psychological harm among symptomatic children and adolescents exposed to trauma." Regarding the other therapies examined, "evidence was insufficient to determine the effectiveness of play therapy, art therapy, pharmacologic therapy, psychodynamic therapy, or psychological debriefing in reducing psychological harm." According to PsychCentral, this doesn't mean that "these other types of interventions are completely ineffective or don’t work… just that this particular scientific analysis...did not find any significant impact of them." Actually, this scientific analysis found that "evidence was insufficient to determine the effectiveness," which is not the same as not finding "any significant impact." That's what happens when one treatment is studied more often than others (2). CBT may be the best treatment, but when the other treatments aren't tested, there is no way to tell.

Here's the problem: psychotherapy research is pretty lame. Head-to-head comparisons of different psychotherapies are just as rare as head-to-head drug comparisons. Granted, drug companies put up millions of dollars to promote (and occasionally research) their own treatments, while we psychology types are lucky to be included in big NIMH studies. Yet when we do crank out those once-in-a-decade, large, randomized, double-blind, placebo-controlled studies, they aren't necessarily better than drug trials. I know that drug research is easy to criticize; there are big, evil, greedy, multinational pharmaceutical companies on which to blame anything and everything. We don't have evil psychotherapy companies at which to hurl blame. The closest thing we have to drug companies are the test publishers, such as Psycorp, PAR, and WPS. However, those companies can't hold a candle to the pharmaceutical industry. So who do we blame for the poor state of psychotherapy research? The drug companies. Who else?

If you're a frequent reader of CL Psych (3), then you're familiar with those purveyors of biased research, "Key Opinion Leaders" (KOLs). Many KOLs conduct clinical trials funded by drug companies. The cool part is, when a pharmaceutical company finances a drug trial, it's more likely to produce positive results than an independently financed drug trial (4). This creates the impression that the drug under investigation actually does something. Their bias screws with the science. So the question is, does the field of psychology have anything similar to those KOLs? The answer is yes. You see, psychologists have these things called "theoretical orientations," which dictate their allegiance to specific types of psychotherapy. That allegiance creates bias, and that bias screws with the science. Similar to drug-company-financed trials, a therapist's allegiance to a specific type of therapy predicts positive results better than the actual components that make up that therapy (5). Meta-analyses that investigated therapist allegiance have reported effect sizes as high as 0.65 (6).

Another problem area is placebo controls. A lot of buzz has been generated about antidepressants' apparent lack of superiority over placebo (7). What hasn't generated buzz is this: if therapy is just as effective as medication, then therapy also lacks superiority over placebo. However, there's more. In psychotherapy research, control groups come in many different shapes. For example, there's the dreaded "wait-list" control group (8). People assigned to this group have their symptoms checked periodically, while others get their weekly dose of CBT, IPT, BAT, or some other combination of letters. Is it an adequate placebo? Well, according to the Carlat Psychiatry Report (9a), "the wait list control is suboptimal, because unlike...pill placebo, wait list patients don’t actually believe that they are getting treatment." [my emphasis] This can lead to one of two scenarios: resentful demoralization or the John Henry effect. The former is when people's condition worsens because they know that others are getting a better treatment (thus making the active treatment look better). This can also happen in drug research; however, it usually doesn't occur until the person has deduced that they're on a placebo. The John Henry effect occurs when the control group tries to compete with the experimental group. This leads to an improvement in their condition (thus making the active treatment look less effective). Although research studies "have shown that simply being put on a wait list results in substantial improvement...," this isn't "as robust as pill placebo" (9b) for the reasons mentioned above.

The obvious solution to this problem is to create an intervention that gives the impression that the control group is receiving adequate treatment. That's how "treatment as usual" and "clinical management" came about. The problem with these control groups is that they are intentionally "de-powered." Therapists are instructed to be inert or to minimize certain nonspecific therapeutic ingredients, such as the therapeutic alliance (10), which is a difficult task. Another problem stems from the fact that when a therapist is providing CBT, he knows it. When a therapist is providing "clinical management," he knows it. This confound is called the experimenter bias effect. In drug research, when a pharmacotherapist knows that he is prescribing an active drug, the study is referred to as unblinded (or single-blinded). Such studies are routinely criticized (11). I've seen only one study (12) where the control therapists were taught a "new" therapy. This was intended to increase the likelihood that the therapists would believe they were providing an adequate treatment.

Another widely criticized component of drug trials, one that limits the external validity (i.e., generalizability) of the results, is called sample enrichment. This is when a clinical population that is likely to respond to treatment is selected to participate in a study. Typically, these enriched samples represent approximately 20% of the patients actually seen in clinical settings (13). For example, the quetiapine BOLDER studies used this population of bipolar patients: people who met criteria for BP I or II (I'm with you so far), no co-morbid Axis I disorders (you lost me...), current depressive episode lasting no longer than 12 months (I'm still not with you), no history of nonresponse to more than two antidepressants (where'd you go?), no co-morbid substance abuse (there goes 75% of the bipolar population), no medical illnesses (seriously, I can't see you), and no suicide risk (14). Do these people actually exist? Because I've never seen one.

The protocol for a recently published behavioral activation (BAT) study (15) had this MDD population: DSM-IV diagnosis of MDD with no history of BP, psychosis, organic brain disorder, or mental retardation (that's fair enough). However, participants were excluded if there was a risk of suicide; substance abuse; a co-morbid anxiety, eating, or pain disorder; certain personality disorders; a history of poor response to CBT or paroxetine; or an unstable medical condition. The rationale for an enriched sample is not necessarily nefarious; however, it severely limits the external validity of the treatment.

As you can see, psychotherapy research is lame. However, by its very nature, psychotherapy is very difficult to study. That's why there is a bias for "manualized" forms of therapy. Manuals provide a framework for therapists to follow and, in theory, minimize many confounding factors. Because of that bias, certain therapies like CBT are frequently researched, which creates the appearance of superiority over other therapies. Lastly, just because psychotherapy research is lame does not mean it's lame for the same reasons as drug research. I'd argue that drug research has little excuse to be as lame as it is. However, psychotherapy research should be conducted better than it currently is.

Sunday, September 14, 2008

Holy Schatz! Part 2

I have reproduced two of the slides that were presented by Schatzberg to better illustrate my point. At the bottom of this (top/right) slide, you'll see the following reference: "DeBattista et al., Biol Psychiatry, 60(12):1343-9, 2006," which is this study (1). Data from a published clinical trial, accompanied by the appropriate reference below.

In the second slide (bottom/right), at the bottom is this reference: "Schatzberg AF et al., J Affective Disorders, 107:S40-41, 2008." As I pointed out in my previous post, this reference is to an abstract that does not mention these data.

Now, I am not suggesting any wrongdoing; however, I do wonder: what was the purpose of listing this reference? Its relevance to these data appears to be nonexistent. This much I do know: when data are presented and accompanied by a reference, it is implied that the data come from that reference. In an earlier post (2), I critiqued an article wherein the authors made specific statements that were not supported by the references they cited. This is how misinformation is spread.

And speaking of misinformation: in my first post about Schatzberg's presentation (3, post has been corrected), I wrote that he did not indicate that study 06 was negative. He did show a slide indicating that the primary endpoint was not statistically significant (p=.144). What he primarily focused on was the secondary analysis of the data, which found that "there was a statistically significant correlation between plasma levels and clinical outcome achieved during treatment" (4). This is Corcept's and Schatzberg's attempt to turn a negative into a positive, which they have been doing for a couple of years now (5).

Saturday, September 13, 2008

Holy Schatz!

I'm back after attending the 3rd Annual Psychotic Disorders Conference (1). One of the speakers was Corcept co-founder (and shareholder) and president of the APA (the bad APA, not the good APA), Alan F. Schatzberg (pictured right). His topic? "The Latest Treatment Approaches for Managing Psychotic Depression." Most of Schatzberg's work regarding his company's drug, Corlux, also known by its many aliases (mifepristone, RU-486 (special ops name), the abortion pill), has been well documented here (CL Psych), as well as by others, which can be accessed through the above link.

Did he have any new or exciting data to present? No. It was like an Earth, Wind, and Fire concert: he was doing all his greatest hits. He began with the spectre of psychotic depression: it features neuropsychological deficits similar to those seen in schizophrenia (not quite true, but I'll humor him), it represents 15-18% of cases of major depression (that's in Europe, by the way), and of course, who can forget his number one hit, hypercortisolemia. Ah, the memories that one brings back. Anyway, after quickly blowing through various treatments (ECT, Symbyax, SSRIs) and a quick primer on the HPA axis, he moved on to mifepristone. First, he spoke about Corcept study 03, published in 2006. The lead author is DeBattista. You can read more about him and that study here (2). Did you read it? Good. So, did the Schatzmeister mention any of those criticisms? No. And we're moving on...

Here is where I became confused: when he talked about Corcept study 06-PMD. The study concluded in 2007. You can read about the results here (3). The main finding was this: "...study 06, the last of the three Phase 3 trials, in March 2007. These results indicated that this study did not achieve statistical significance with respect to the primary endpoint, 50% improvement in the Brief Psychiatric Rating Scale Positive Symptom Subscale, or BPRS PSS, at Day 7 and at Day 56." What Schatzy primarily focused on was Corcept's spin, which is found in the latter part of that release.

This is where it gets interesting. The PPT slides that Schatzberg showed while talking about this study had this reference at the bottom: "Schatzberg AF et al., J Affective Disorders, 107:S40-41, 2008." What would one assume when seeing that reference underneath the 06-PMD data? If you're an idiot like me, you'd assume that he was referring to a published study, or at least published data. He's not (4). It's an abstract, two paragraphs in length, that summarizes what he was going to talk about at a symposium; the content of which is identical to the lecture I saw (he's doing a greatest hits tour, folks!). These data weren't even referenced in the abstract. So why list a reference at all? Because that allowed him to present unpublished data from a negative trial as if they were published data and to give them a positive spin. That's a neat trick..., and now watch him do it while drinking a glass of water...

At the end of the lecture, he gave contact information for a person at Stanford, so that others could refer patients with PMD to a clinical trial of mifepristone at Stanford University, his place of employment. Because the trial is affiliated with Stanford, he is recused from working on it (read these links regarding his position and his other influences, 5, 6).

To summarize, Dr. Schatzberg gave a lecture wherein he presented unpublished data from a company in which he has a major stake as if they were published data. Then, he made a request for referrals to a clinical trial at his place of employment. Somehow, all this constitutes being recused. The best part is, I received 5.5 CME hours for attending this infomercial. And as usual, the representatives for Lilly, Teva, AstraZeneca, Abbott, Janssen, and BMS were all present, which means my precious ink pen and notepad collection quadrupled in size.

Saturday, September 6, 2008

The Worst Book I Have Ever Read. Ever!

What is the book in question? The Bible, of course. But in a close second is Understanding Depression by Donald F. Klein and Paul H. Wender (1).

Major Complaint: No references. Not a single one. I guess they wrote this book from an a priori position of authority; therefore, everything they say is gospel and does not need to be supported by facts and should not be questioned (that's why it's second only to the Bible).

Speaking from atop a mound, these two researchers start with this: "Depression may be a normal human emotion-a response to loss, disappointment, or failure. Some depressions, however, should more properly be put in the category of common biological diseases" (pg. 1). Sadness, not depression, is a normal human emotion. Depression is a specific mental illness. What's that? I will have to defer to the eminent psychopharmacologist Stephen M. Stahl (I wonder, do rheumatologists ever call themselves non-steroidal anti-inflammatoryists?): "Mental Illnesses are defined as mixtures of symptoms packaged into syndromes. These syndromes are consensus statements from committees writing the nosologies of psychiatric disorders for the DSM of the APA and the ICD. Thus, mental illnesses are not diseases" (pg. 178) (2). Consensus statements from committees? Ooh, I get nauseous from all that complex medical jargon. That's why I'm a psychologist, not smart enough to grasp this stuff.

In their manifesto, they list several reasons for writing this tome, the first being, "To explain what biological depression is and to clarify the difference between depression, a normal emotion, and biological depression, an illness" (pg. 1). I've shown in the above paragraph why that goal is futile. But there's more: "biological depression is common- in fact, depression and manic-depression are among the most common physical disorders seen in psychiatry" (pg. 2). What?! Physical disorders?! So that means depression is like lupus or cancer. Do they provide any evidence? Nope, remember, there are no references. Well, how about an a priori explanation? Since they use complex medical terminology like "heredity," "genes," and "chemistry," stuff that goes over my head, I'll use their words instead: "In sum, although we know comparatively little about the altered chemistry of individuals with depression, our knowledge is advancing rapidly" (pg. 96). Let me get my Merck Manual to translate that. In other words, there is no definitive proof of a chemical imbalance, but we're going to pretend there is anyway.

Should anyone be listening to me anyway? According to these authors, no. "It is essential that people who suspect they are suffering from depression know who is qualified to help. Not all physicians or mental health workers-such as psychologists, social workers, and psychiatric nurses-have had adequate training in the diagnosis and treatment of depression" (pg. 4). Then why do all those neurologists, PCPs, and even psychiatrists refer patients to little ol' me for differentials and med checks? These practitioners even require that I make recommendations for treatment and rehabilitation. They probably should stop doing that, since "Nonphysician therapists, such as psychologists, social workers, and pastoral counselors, are handicapped in treating depressive patients because of their lack of medical training" (pg. 171). How about this: "leukoencephalopathy." I think that's proof enough of my medical credentials.

"Psychologists are often still taught to use diagnostic techniques that are no longer considered useful by biological psychiatrists, and they were not trained to recognize biological factors in mood disorders and other psychiatric illnesses. pg172" That's because psychiatrists run blood and genetic tests that have the ability to diagnose the majority of the 400 diagnoses is the DSM-IV. Wait a minute, they don't do that? Then what are the tests that these biological psychiatrists use? According to the authors, "In the psychiatric part of the evaluation, the psychiatrist will inquire about definite signs and symptoms characteristic of depression and other psychiatric conditions. pg100" Well garsh, I wish I were a baby bumble bee...It's called a clinical interview folks! All health care professions learn how to conduct them.

I bought this book two years ago. I read one paragraph and then my third testicle descended (painful). These guys are d-bags. Many nonphysician therapists (a dumb term, since most "real doctors," including psychiatrists, don't do therapy, 3) receive training in biology, pharmacology, and other related disciplines (4). The reality is, biological psychiatry is a field in search of a science (5). This book is only meant to shore up the low self-esteem (that's right, a psychological explanation) of these guys by belittling the professions of others. Sadly, I believe these two guys truly believe what they're preaching. The Last Psychiatrist sums it up best: "The reason he believes it is his entire professional existence-- his whole identity-- is predicated on believing it. He's not a scientist, he's a priest." (6) Be proud of what you do.

Friday, September 5, 2008

All I Really Need to Know About Serotonin I Learned in Kindergarten


This article (A), which quotes the findings of this study (B), is another example of the misrepresentation of research. The actual study makes claims not supported by its findings and misrepresents the research cited within its text.

First, the article title at ScienceDaily (SD) is "PET Scans Help Identify Mechanism Underlying Seasonal Mood Changes." No. They should have used the actual study title, "Seasonal Variation in Human Brain Serotonin Transporter Binding." That's strange, I don't see the word "mood" anywhere. Hold on, let me get my glasses. Wait a minute, I don't wear glasses. I can see after all.

Since the rest of the SD article is just quotes lifted from the study, I'll focus on the study itself. "Indolamines (tryptophan, serotonin, melatonin, and related compounds) have transduced light signals and information on photoperiod into organisms and cells since early in evolution, and their role in signaling change of seasons is preserved in humans." The study cited for this statement (3) is about melatonin only, one indolamine, not indolamines in general. Secondly, tryptophan is the precursor amino acid that is converted into serotonin and melatonin (i.e., the indolamines are neurotransmitters synthesized from tryptophan, a standard amino acid).

"Serotonin is involved in the regulation of many physiologic and pathologic behaviors that vary with season in clinical and nonclinical populations.3-12" Maybe it's just me, but when I read this sentence, I assumed that the 9 studies referenced would support the role of serotonin "in the regulation of many physiologic and pathologic behaviors that vary with season." No. Studies 3-11 establish that seasonal mood changes occur in healthy people and in some clinical populations. Only study 12 has anything to do with the serotonin. The serotonin transporter (SERT) specifically. The words "mood" or "depression" are nowhere to be found in that article. The researchers should have said "there is substantial evidence indicating that moods vary by season in both healthy and clinical populations. The role of serotonin is currently unknown."

"Seasonal variations in peripheral serotonergic markers have been demonstrated in several studies." 3 studies are cited and the studies do support this statement. I don't know if 3 constitutes "several," but at least the above statement is accurate. See, that's what happens when you actually read the work you cite.

"...the seasonal variation in serotonin-related behaviors,3-12" Nope. They cite the same studies where only one is about SERT. I want evidence that seasonal mood variations are "serotonin-related behaviors."

"Previous investigations19-20 of regional serotonin transporter binding and season in humans have not led to a clear understanding of the relationship between these 2 measures." That's probably due to the fact that those two studies cited have nothing to with serotonin binding and the seasons. The first study (19) investigated the effects of MDMA and reduction of SERT. The word "season" doesn't appear in the article. The second study (20) is a review of MDD and AD imaging studies. Again, the word "season" is not in the article. Later, they cite two other studies (21, 22). The first study actually is about seasonal SERT changes while the second article focused on that topic secondarily. Maybe the researchers intended to cite references 21 and 22, instead of 19 and 20.

The researchers reach this conclusion about their study: "Serotonin transporter binding potential values vary throughout the year with the seasons." Yes, I'm with you so far. "Since higher serotonin transporter density is associated with lower synaptic serotonin levels, regulation of serotonin transporter density by season...has the potential to explain seasonal changes in normal and pathologic behaviors." Do you have any leftover "No's" from this post (C)? I suggest that you use them now.

Synaptic serotonin levels cannot be directly measured in vivo. So how are they measured? By the presence of serotonin's metabolite, 5-HIAA. Next fact: 95% of all serotonin is in the gut (D). So how is low serum 5-HIAA a measurement of brain serotonin? It's not. Make sense now? No? Good, let's move on.

"Higher regional serotonin transporter binding potential values in fall and winter may explain hyposerotonergic [related to low serotonin levels] symptoms, such as lack of energy, fatigue, overeating and increased duration of sleep during the dark season."Actually those behaviors are better explained by low cortisol (E). If cortisol is low, the liver cannot synthesize glucose, which leads to lack of energy, fatigue, and increased sleep. People eat more food (especially those high in carbs) in order to increase glucose, which will give them energy.

Still confused? Read this to learn more about serotonin and mood (F).

Wednesday, September 3, 2008

Placebos Are Inactive...Right?

Over at Clinical Psychology and Psychiatry, an article regarding the long-term effects of placebos is reviewed.

Here's a little more information about those little, inactive pills used in research (1). In a 2002 study, Leuchter and colleagues studied the EEGs (electroencephalograms) of depressed patients who received antidepressant therapy (either fluoxetine or venlafaxine) or placebo treatment. The researchers analyzed the data of responders: "Overall, 52% of the subjects (13 of 25) receiving antidepressant medication responded to treatment, and 38% of those receiving placebo (10 of 26) responded. Medication responders and placebo responders could not be distinguished on the basis of their initial or final level of depression."

Typical, right? That's because the researchers performed a placebo washout (common practice in drug clinical trials): "After enrollment, all subjects received single-blind, placebo lead-in treatment for 1 week; subjects who met response criteria (Hamilton depression scale score ≤10) after this week were removed from the study." Would research results be different if this practice were not followed? It hasn't been studied, as far as I know.

When the EEGs of the antidepressant-treated subjects were examined: "At week 2, the medication responder subjects showed a unique and significant decrease in prefrontal cordance that differentiated them from the three other groups." The placebo group "showed slight increases in prefrontal cordance...at week 4, this increase achieved significance in the placebo responders, who differed both from their group baseline and from the medication responders."

So antidepressants decreased frontal activity in depressed patients, while placebo increased frontal activity in depressed patients. Pretty amazing for a supposedly inactive substance. I wonder what was occurring in the brains of those who were washed out for responding to placebo during the first week. I guess we'll never know.