Wednesday, December 10, 2008

Me Too Drug Marketing

In the December 8 issue of the holy journal known as the Lancet (1), an article was published regarding the sassy cousin of ramelteon: tasimelteon. Ramelteon selectively binds to the melatonin receptors in the suprachiasmatic nuclei of the hypothalamus (MT1 & MT2, specifically). At present, the drug is FDA approved for the treatment of primary insomnia.

Enter tasimelteon, which is in phase III clinical trials and is being studied for circadian rhythm sleep disorders (jet-lag and shift-work being the most common). I don't know a lot about the pharmacological properties of tasimelteon, but as far as I can tell, it offers no advantage over ramelteon since they affect the same brain receptors and have a similar half-life (I believe ramelteon is more potent).

Just like all the SSRIs that followed Prozac, this drug appears to offer nothing in the way of improved effectiveness or safety over ramelteon. However, I am making the prediction that when the drug does hit the market, it will cost a pretty penny (2, 3).

Here is what the Medical Letter had to say about ramelteon when it was released, "Ramelteon (Rozerem), a melatonin receptor agonist, is not a controlled substance and apparently has no potential for abuse, but its hypnotic effect is not impressive. In clinical trials, it produced small, statistically significant improvements in sleep latency, but had little effect on sleep maintenance." You can substitute the name tasimelteon for ramelteon and the Medical Letter statement still holds true.

The study published in the Lancet was funded by Vanda Pharmaceuticals Inc., the company responsible for the development of tasimelteon. Sure, the article talks about safety and tolerability, how it beat placebo, and blah blah blah (placebo actually did pretty well too), but this article is a simple marketing ploy, nothing more. Think of it as a movie trailer, released months before the actual movie in order to generate buzz, because the actual product has no substance (e.g., Valkyrie). The cool part is, the articles that talk about this drug act as if ramelteon does not exist. Tasimelteon is portrayed as a breakthrough drug (4, 5), which it's not.

So, should this drug actually make it to market, it will be advertised for circadian rhythm sleep disorders. I might be so bold as to say it will be advertised as the first drug approved for these types of sleep disorders, even though there are already many effective ways to manage these sleep problems.

Wednesday, November 26, 2008

Gosh, I Didn't See That Elephant In The Room

In the December issue of Pharmacotherapy (1), a group of researchers pointed out the obvious, which was that "widely prescribed medications" are "most urgently in need of additional study to determine how effective and safe they are for their off-label uses." Naturally, 9 of those 14 drugs are "antidepressants and antipsychotics," which "have high levels of off-label use without good scientific backing."

Ranking number 1 in terms of off-label use is our good friend quetiapine (Seroquel), which is used to soothe all that ails you. To be honest, I thought olanzapine (Zyprexa) would have at least broken the top 5, but since olanzapine is about as healthy as being exposed to asbestos (2, 3), perhaps landing at unlucky number 13 is appropriate.

Explaining why quetiapine was number 1, one researcher said, "this drug led all others in its high rate of off-label uses with limited evidence (76 percent of all uses of the drug), it also had features that raised additional concerns, including its high cost at $207 per prescription, heavy marketing and the presence of a 'black-box' warning from the FDA." Personally, I don't think black-box warnings mean anything anymore. They're all the rage now. They're happenin'. A drug truly hasn't made it until it has its own black-box warning. It's the pharmacological equivalent of the iPhone.

In the highly ambiguous field of mental health, there are many fads (e.g., Premenstrual Dysphoric Disorder, brought to you by Prozac; Social Anxiety Disorder, brought to you by Paxil). So what's all the rage these days? Bipolar disorder of course, "the most common off-label use for six of the 14 drugs on the list was for bipolar disorder." The principal cause of these fads is another elephant in the room, "when the volume of off-label use of any drug reaches the magnitude that we're documenting, it suggests a role of the pharmaceutical industry in facilitating these types of uses." (Plot twist).

"Although companies are largely prohibited from marketing off-label uses to physicians and consumers, they make use of exceptions or may market drugs illegally...several recent lawsuits have identified systematic plans on the part of some companies to market their products for off-label uses." This has been well-documented at clinpsych (4).

There is, however, one elephant in the room that these researchers neglected to mention: namely, that much of the research for the uses of these drugs is abysmal (5, 6, 7, 8, 9, 10). The current system is broken. We are just circling the drain at this point. These people can call for all the new research they want, but when the only thing that is produced is garbage, then that's all you're going to get. Garbage in, garbage out.

Update (01/07/09): I like his article more than mine (Last Psychiatrist).

Friday, November 7, 2008

Can You Guess Which One Is The Dummy?

I have been very busy the last few weeks; however, I should have a new post on either Saturday or Sunday. In the meantime, I came across this picture of our new Veep and was struck by his resemblance to one of my favorite comedians. Can you tell which one is the real dummy?



Saturday, October 18, 2008

News Flash: This Just In: Breaking News...

Welcome to the 6 o'clock news, I'm your anchor, Woodrow Butdonthaveapaddle. This just in...

Jaak Panksepp, a researcher at Bowling Green State University in Ohio (1), says he has discovered that rats respond to tickling with actual laughter.

Upon hearing these findings, university officials took away the researcher's pot and told him to stop being such a crazy jagoff.

Thursday, October 16, 2008

It's Not Bipolar

Here is a story sure to delight all those who read it (1). It's about a teacher named Suzy Bass. The beginning of the article summarizes what you'll read about, "This popular teacher told students and friends she was going to die. What no one knew: She'd feigned chemo nausea, shaved her own head and was never actually sick at all." This is called Munchausen syndrome (also known as Factitious Disorder in the DSM-IV).

Here are the following diagnostic criteria:

A) The intentional production of physical or psychological signs or symptoms

B) The motivation for the behavior is to assume the sick role

C) External incentives for the behavior (e.g., economic gain) are absent

There are three subtypes: predominantly psychological signs and symptoms; predominantly physical signs and symptoms; and of course combined psychological and physical signs and symptoms.

The reason I chose to write this post is not to highlight a really cool disorder, but rather, to show how bad mental health treatment really is.

Ms. Bass' story begins when she "told her parents that she'd been having breathing problems and persistent colds. Then one day she broke the news: She'd been diagnosed with non-Hodgkins lymphoma, an often deadly form of blood cancer. 'I went with her to chemo on more than one occasion,' says her father, who recalls sitting in the waiting room and watching Bass sign in and walk back to the treatment area."

Eventually, she was exposed...again, "Staffers from a school in Dallas, Georgia--where Bass once taught--had contacted him (the principal of her most recent employer) to expose what they claimed was Bass's latest deception. An employee googled her former colleague to see what had become of her; she found the Knoxville News Sentinel article about the prom fund-raiser (which was in her honor). Bass, the callers warned Hutchinson, had pretended to be a cancer patient during her tenure at their school--and at yet another one in Alabama."

People with this disorder not only go to great lengths to produce signs and symptoms, but they also go to great lengths to conceal their deception, especially when exposed as frauds: "A week after getting exposed, Bass pulled down her Facebook account, changed her phone number and disappeared."

It is also common for them to move from one location to another, thus allowing them to continue with their charade: "She was at Paulding County for about a year and a half when the Basses got a call that their daughter had passed out at school. A few weeks later, Bass called with a worrisome update: A mammogram had detected a tumor. Soon after, she announced that it was stage II ductal carcinoma."

Her parents even commented that, "she looked sick and appeared to have radiation burns under her arms." Once she was exposed again, the cycle repeated itself, "This time she told her parents that enemies at Tanner High had tried to sabotage her career and that she indeed had breast cancer, it had just gone into remission. A little more than a year later, Bass left for Knoxville."

Bass also exhibited another core characteristic of Munchausen syndrome called pseudologia fantastica (another cool name). This is the fancy term for pathological lying: lying for no apparent or rational reason. "Bass acknowledges that there were other lies she'd told friends and colleagues. She once pretended she had a fiancé who died on 9/11, that she'd played basketball at Florida State University and that she'd starred in the North American tour of Mamma Mia!"

Here is the part where it gets disturbing: "Once she left Knoxville, Bass admitted herself into an Alabama psychiatric ward and she told doctors she no longer wanted to live. There, she was diagnosed with bipolar, anxiety and obsessive-compulsive disorders." I know what you're thinking: she didn't tell the doctors about faking cancer. Actually, she did: "currently Bass's counselors have not diagnosed her with Munchausen syndrome and say they are primarily focused on treating her bipolar disorder, but add that her diagnostic review is not yet complete." They know her history, yet they truly believe that she has three serious psychiatric disorders. And they are treating her even though the diagnostic review is not complete.

Here are the lengths this woman went to in order to fake cancer: "she'd shaved her head...she was telling people the end was near..." Moreover, "Bass had forged a doctor's name on a certificate of disability that she gave Paulding's associate superintendent" and "after spending hours researching cancer on the Internet, Bass learned to draw convincing-looking radiation dots on her neck with a permanent marker (doctors tattoo patients so they know where to line up the radiation machine every day). She would also roll up a bath towel, stretch it between her hands and rub it back and forth against her neck as fast as she could to give herself 'radiation burns.' She shaved her own head with a razor and made herself throw up from chemotherapy 'nausea' in school bathrooms. And all those times her father accompanied her to chemo treatments? After walking through the waiting room door, Bass would meet up with an actual cancer patient--a friend she met at church--and keep her company during her chemotherapy." Her doctors supposedly know all of this, yet they are concerned with treating her bipolar disorder.

Here is what her primary care provider said: "It is certainly possible that given her diagnosis of bipolar disorder, Suzy could have truly believed she had cancer, says Marvin Kalachman, a licensed physician assistant who has treated patients for more than 30 years. He prescribes and monitors Bass's medication under the supervision of a medical doctor." WTF!? Okay, I have to break that down part by part.

"given her diagnosis of bipolar disorder." He is assuming that the diagnosis is legit. Cancer can actually be tested for, bipolar disorder can't. Yet, he's certain about her diagnosis. Refer again to the diagnostic criteria for Factitious disorder, "The intentional production of physical or psychological signs or symptoms." You see, psychological signs and symptoms are faked too. Would it not cross you're mind to consider that she might be faking bipolar disorder. No where in this entire article is it mentioned that she experiences manic symptoms. The woman who wrote this article (someone who actually has cancer) said this, "Speaking with a thick Southern accent, she sounded calm and polite, even funny. I could see why so many people had adored her. When she told me about a recent session with her mental health counselor, she joked, 'They charge $90 for 20 minutes and I'm the crazy one?'" Clearly this chick is not depressed either.

"...says Marvin Kalachman, a licensed physician assistant who has treated patients for more than 30 years." Her primary is not even a doctor. Who cares if he's treated patients for 30 years. If they're not psychiatric patients, that means nothing.

"He prescribes and monitors Bass's medication under the supervision of a medical doctor." Okay, unless that medical doctor is a psychiatrist, these two have no business treating this woman. None. Nada. Zip.

However, not all hope is lost. There are some people who are actually trained in this stuff, "Marc Feldman, M.D., a world-renowned psychiatrist, has treated more than 100 women who have faked serious illness. Though he has never met Bass, he believes he has her diagnosis: Munchausen syndrome, a psychological disorder in which someone feigns or self-induces illness to get attention and sympathy." This is someone who should be treating Ms. Bass, not Tweedledee and Tweedledum.

At present, "Bass is currently unemployed, a medical recommendation. 'My counselors don't even want me saying Welcome to Wal-Mart. Here's your buggy,'  she says with a laugh. Bass hopes, though, that her determination will propel her through treatment to a more healthy, happy life. 'I'm working to get past the guilt I feel and move past the mistakes I've made. I'm sick and I'm working on it every day,' she says. 'And I can assure you of one thing. If I can at all control this, it will never happen again." Too late, it already has.

Wednesday, October 15, 2008

Smokers Hate Their Children

Over at my favorite hub for science news, good and bad, I came across this article (1) titled, "Parental Warning: Second-hand Smoke May Trigger Nicotine Dependence Symptoms In Kids." Say it ain't so. That means I'll have to return my tickets for the 76th annual Blow Smoke in a Baby's Face county fair. The article is in reference to this published study (2), which supposedly found that "increased exposure to second-hand smoke, both in cars and homes, was associated with an increased likelihood of children reporting nicotine dependence symptoms, even though these children had never smoked." So of course, results like these call for immediate action, such as empowering local governments to exercise control over the behavior of citizens: "these findings support the need for public health interventions that promote non-smoking in the presence of children, and uphold policies to restrict smoking in vehicles when children are present" [my emphasis].

So give it to me guys, how many of these poor 10 year olds have been gripped by the evil hands of nicotine dependence? "Our study found that 5 percent of children who had never smoked a cigarette, but who were exposed to secondhand smoke in cars or their homes, reported symptoms of nicotine dependence." That's it? A measly 5%? (The study actually states 4.6%). Please tell me that the measures of nicotine dependence are fairly rigorous and that these kids have at least something similar to subsyndromal nicotine dependence.

"Classroom administered self-report questionnaires" were completed by these obviously bright 10 year olds. What makes these kids bright? Well here are the 7 nicotine dependence questions that they were asked: (i) How often do you have cravings to smoke cigarettes?; (ii) how physically addicted to smoking cigarettes are you?; (iii) how mentally addicted to smoking cigarettes are you?; (iv) how often have you felt like you really need a cigarette?; (v) do you find it difficult not to smoke in places where it is not allowed?; (vi) when you see other kids your age smoking cigarettes, how easy is it for you not to smoke?; (vii) how true is this statement for you? “I sometimes have strong cravings for cigarettes where it feels like I am in the grip of a force that I cannot control.”

You show me a 10 year old kid who knows the physical and mental symptoms of nicotine dependence, and I'll show you a 45-year-old midget named Joey. Seriously, those were the questions. There was no mention of headache, tachycardia, sweating, insomnia, or mood changes. Additionally, these questions were not even validated on 10 year olds. They came from this study (3), which was validated on 14-17 year olds who were actual smokers.

Out of a total of 1,488 kids, only 69 (4.6%) endorsed "at least one symptom of nicotine dependence" [my emphasis]. That breaks down to this: 60% (41) endorsed 1 question, 21% (15) endorsed 2, 11% (8) endorsed 3, 4% (3) endorsed 4, and 2% (2) endorsed 6. Endorsing one symptom means nothing. That's why diagnostic criteria have multiple signs and symptoms. Last night I had trouble falling asleep, which is a symptom of depression, therefore I should seek help, right? It's absurd to make the kinds of extrapolations these people are making. When only 2 kids endorse 6 out of 7 questions, that's hardly an epidemic.
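For the curious, that breakdown is easy to check. Below is a minimal Python sketch; the counts are the ones reported above, and the recomputed percentages land within a rounding point of the quoted figures:

```python
# Tally of "nicotine dependence" symptom endorsements among the 69
# never-smoking kids flagged by the survey (counts as reported above).
endorsed = {1: 41, 2: 15, 3: 8, 4: 3, 6: 2}  # symptoms endorsed -> n kids

total_surveyed = 1488
symptomatic = sum(endorsed.values())  # 69

print(f"{symptomatic}/{total_surveyed} = {symptomatic / total_surveyed:.1%} "
      "endorsed at least one item")
for n_symptoms, n_kids in endorsed.items():
    print(f"{n_kids:>3} kids ({n_kids / symptomatic:.0%}) endorsed "
          f"{n_symptoms} item(s)")
```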

How about the fact that 95% of kids exposed to second-hand smoke (SHS) didn't endorse any symptoms of nicotine dependence? Too bad they didn't examine the prevalence of these supposed symptoms in children not exposed to SHS. That would create this thing called a "control group," which would allow people to run fancy statistical tests to determine whether the prevalence of these symptoms in SHS-exposed kids has any actual meaning.
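To make that concrete, here is a sketch (not the study's analysis; they never did this) of the comparison a control group would have allowed: a plain two-proportion z-test. The exposed figures (69 of 1,488) come from the study; the unexposed group below is entirely hypothetical.

```python
# Hypothetical exposed-vs-unexposed comparison via a pooled two-proportion
# z-test. Only the exposed numbers (69/1488) come from the actual study.
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided pooled z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# 69 "symptomatic" kids out of 1,488 exposed vs. a MADE-UP unexposed group.
z, p = two_proportion_z(69, 1488, 45, 1500)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

Without that second group, there is simply nothing to test against.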

In spite of those limitations, the experimenters said this, "exposure to second-hand smoke among non-smokers may cause symptoms that seem to reflect several nicotine withdrawal symptoms: depressed mood, trouble sleeping, irritability, anxiety, restlessness, trouble concentrating and increased appetite." 'Cough, bull sh*t, cough.'

I don't know if these people actually read the diagnostic criteria, but nicotine dependence is a syndrome characterized primarily by the development of both tolerance and withdrawal (not just cravings alone), neither of which is thoroughly addressed by those 7 questions.

In the discussion section, it is said "it is of course possible that participants misinterpreted the questions on nicotine dependence or that either social role modeling or expectations about what participants should experience (rather than what they actually experience) influenced reports of nicotine dependence symptoms. However, we did take susceptibility to initiating smoking and peer smoking into account in this analysis, which presumably took at least some of the effects of social role modeling and expectation into account." Too bad those data are not included in the published report. So presumably, it is of course possible that the experimenters misinterpreted the data (I can play with semantics too).

They even go on to admit that "there are no 'gold standard' measures of nicotine dependence symptoms in children. Although the items used in this study are psychometrically strong and show content as well as convergent construct validity (That's in adolescents by the way), it is possible that they do not measure nicotine dependence symptoms. Never-smokers could report symptoms they expect by simply smelling cigarette smoke or observing others smoking, rather than those they actually experience. Our measures of SHS exposure were not validated with biomarkers" [my emphasis]. Additionally, since these data are cross-sectional (not longitudinal), cause and effect cannot be determined either.

So what do we have here? A self-report measure of a complex physiological and psychological state that was administered to kids, data that were pooled together so as to appear significant, important data that were omitted from the actual article, and researchers who drew conclusions far beyond the scope of the actual results in an attempt to make their data appear meaningful. If this study were funded by Pfizer, and antidepressants were substituted for tobacco, and depression was substituted for dependence, then we would have something very similar to a typical pharmaceutical sponsored study.

So that begs the question: who funded this study anyway? These guys did (4), the Canadian Tobacco Control Research Initiative, whose goal is "to catalyze, coordinate and sustain research that has a direct impact on programs and policies aimed at reducing tobacco abuse and nicotine addiction" [my emphasis]. I think I smell a big, stinking pile of bias. Nowhere in that mission statement do I get the sense that these people are adherents of the scientific method. Science is about discovery, not enforcing an agenda. What if their research findings didn't support reducing tobacco use? Huh? I have a hard time believing that they would support any of the research promoted by these people (5). And seriously, could they have found an easier target to generate bad press about other than tobacco? Pedophiles maybe? Personally, I wish somebody would fund research on how to get these people (6) to shut up.

And just as an aside, Coke Zero (7) is Diet Coke (8) in a black can! Diet Coke has zero calories and zero carbs just like Coke Zero. All the ingredients are identical except for one, the artificial sweetener. Other than that, it's still Diet Coke! And to all you d-bags who claim that you can "taste the difference," you're not allowed to read this blog anymore. Seriously, get away from me, "Unwelcome touching! Unwelcome touching!" Go here instead (9).

Wednesday, October 8, 2008

Newsflash: Hot Flashes Are Treated by...Everything Too!

Welcome to the 6 o'clock news, I'm your anchor, Thor Buttocks. In a previous post (1), I neglected to mention Effexor's incest-surviving son Pristiq. I came across this study (2), which was published earlier this year. The results from this pharmaceutical industry sponsored trial are not surprising: "Desvenlafaxine is an effective nonhormonal treatment for vasomotor symptoms in postmenopausal women." Case closed...unless you actually read beyond the abstract.

This was a 52-week long study (a rarity in psychiatry), which on the surface, sounds as if it could have generated a lot of good data. 707 healthy, postmenopausal women experiencing an average of 10.9 hot flashes per day and 3.7 nighttime awakenings were randomized into one of five treatment arms: desvenlafaxine 50mg, 100mg, 150mg, 200mg, or placebo. The primary efficacy measures were completed at 4 weeks, at 12 weeks, and at...that's it. Out of 52 weeks, efficacy data were gathered for only the first 12. That's 40 weeks of data that are not reported. I wonder why? It certainly couldn't be that desvenlafaxine doesn't work that well. Surely not, no way, never.

First, let me say that these data can be trusted. Why did I say that? Because, according to this article, the "statistical analysis was carried out by the Biostatistics Section of Wyeth Research." Nothing like those good old in-house (i.e., paid employee) statisticians to handle test data (it's a good thing you can't lie with statistics, 3). The "100mg/d produced a significantly greater decrease from baseline in the average daily number of moderate-to-severe hot flushes compared with placebo at both weeks 4...and 12...the desvenlafaxine 150mg group differed significantly...at week 12....but not week 4...there was no significant difference from placebo for the desvenlafaxine 50 and 200mg doses at either time point." Additionally, there was a significant reduction in nighttime awakenings for the 100, 150, and 200mg doses of desvenlafaxine compared to placebo.

Don't let the short phrase "statistically significant difference" fool you, because the differences are rather clinically insignificant. The 100mg dose produced the largest decrease in the number of daily hot flashes, averaging -7.23. The placebo arm was -5.50. That's a paltry difference of -1.73. The difference for the number of nighttime awakenings is even smaller: -2.77 episodes for 100mg compared to -2.21 for placebo. That's a difference of -0.56, one half of one full awakening episode (anything is possible with statistics). And of course, the desvenlafaxine groups had significantly more discontinuations and treatment emergent adverse events than placebo, thus justifying the use of desvenlafaxine to treat these symptoms.
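If you want to see just how small those drug-placebo gaps are, here's a quick sketch using the mean changes reported in the trial:

```python
# Placebo-adjusted differences for the best-performing (100mg) arm,
# using the mean changes from baseline reported in the trial.
hot_flashes = {"desvenlafaxine 100mg": -7.23, "placebo": -5.50}
awakenings = {"desvenlafaxine 100mg": -2.77, "placebo": -2.21}

for label, arms in [("daily hot flashes", hot_flashes),
                    ("nightly awakenings", awakenings)]:
    diff = arms["desvenlafaxine 100mg"] - arms["placebo"]
    print(f"Drug-placebo difference in {label}: {diff:+.2f}")
```

That prints -1.73 and -0.56: fewer than two hot flashes and about half an awakening per day, which is the entire clinical payoff.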

What about those missing 40 weeks? How did placebo compare at the end of 52 weeks? I guess we will never know. Perhaps I am drawing a false conclusion here (admittedly, I'm biased), but if the results were significant beyond 12 weeks, I think the "scientists" would have reported those data. The remaining 40 weeks were ostensibly used to determine the "safety and tolerability" of desvenlafaxine (a noble goal that should have been accomplished in phase II trials).

A special thanks is also noted in this article: "The authors thank Drs. Kathleen Dorries and Mary Hanson for assistance in the writing and review of this manuscript." I couldn't find any information on Mary Hanson, but Kathleen Dorries works for the Advogent Group (4), whose business is to "create, deliver, and manage compliant communications and strategic solutions and services for the leading pharmaceutical, biotechnology and medical device companies." Apparently Kathleen Dorries has helped write another paper on desvenlafaxine as well (5).

In summation, not only did the sponsor handle the management of the data, the manuscript was at least partially ghostwritten by people whose purpose is to "promote" rather than report. This study should be filed under infomercial, not science.

Tuesday, September 30, 2008

Update: And Now, A Drug Recommendation From Our Sponsor

A few days ago in this post (1), I highlighted the favoritism for the new drug ziconotide (Prialt) in the 2007 Polyanalgesic Consensus Conference (2). I decided to do more research. In the 2003 Polyanalgesic Consensus Conference (3), ziconotide is mentioned 21 times compared to 84 times in the 2007 article. Ziconotide also received a special mention in the 2003 article (although not in the abstract or conclusion). This was before the drug was FDA approved, yet it received this endorsement,

"This drug is undergoing review by the U.S. FDA. If it becomes available in the United States and elsewhere, its position within the algorithm is likely to evolve as experience in the clinical setting accumulates. The panel noted that the extensive preclinical and clinical data obtained as part of a formal drug development program exceeds the data available for other drugs used for intrathecal infusion and will probably lead to placement of ziconotide on an upper line of the algorithm, unless accumulating experience suggests a narrow therapeutic index in practice."

"Extensive preclinical and clinical data." In this 24 page article, only three ziconotide articles are cited. One was conducted on rats. One is a case study. There is only one controlled study, which I will talk about later. Even though I have not read the two other studies, rats and a case study hardly constitute "extensive" data. There is one more reference, but it's useless as it's this "84. Unpublished data." There's no way I can double check that one. I guess I'll just take their word for it.

In the 2007 article, one rationale for ziconotide being "recommended as a Line 1 drug in this algorithm...comes from substantial data from preclinical and clinical studies." So the data have grown from "extensive" to "substantial." Or were they downgraded from "extensive" to "substantial"? I'm not quite sure which is better.

Clearly, the excitement for ziconotide was brewing before it was FDA approved. The question is, was all this enthusiasm based on the quality of the available research? I'm not so convinced. A search on clinicaltrials.gov (4) yields only three trials. I don't think that qualifies as either "extensive" or "substantial."

In December 2005, The Medical Letter (5) reviewed the current literature on ziconotide. Only one study reviewed by the Medical Letter is also reviewed in the 2007 Polyanalgesic Conference articles. All other studies in the 2007 article were published in 2006. One is a case study on a 13-year-old girl (6). Another is a lit-review (7). Yet another is a series of case reports (2 patients; 8). That's hardly gold standard quality research (but is it substantial?). Only three of the studies cited are randomized controlled studies (9, 10, 11). So how does a group of people not receiving money from Elan feel about ziconotide?

"Ziconotide (Prialt), a new intrathecal nonopioid analgesic, lowered pain scores in some patients when added to standard therapy for refractory severe chronic pain. Development of tolerance to the analgesic effects of ziconotide was not reported during clinical trials, but its long-term effectiveness and tolerability are unknown. Serious psychiatric and CNS adverse effects have occurred and may be slow to resolve after discontinuation of the drug." [my emphasis]

That's hardly a resounding endorsement. However, the Medical Letter issue appeared about one year before those three randomized trials were published. Is there any reason to doubt their findings? Two were funded by Elan. The third was funded by both Neurex (which was bought by Elan) and Medtronic. As I mentioned before, industry sponsored studies are more likely to generate positive results than independently funded studies (12). Remember that Elan provided "generous financial support" for the 2007 conference where ziconotide was anointed as a 1st-line agent. What about the 2003 conference? Who was the sponsor of that? Medtronic! The same Medtronic that co-sponsored this study (9), which has a very interesting history by the way.

The lead author is Peter Staats (don't forget that name). It was published in JAMA in 2004. However, the study was conducted between 1996 and 1998. For a reason I still can't figure out, the data went unpublished for over 5 years. Strangely, during that five year period, multiple articles on ziconotide were published, including this study (13) published in 2000. In this study by Vandana S. Mathur, two then-unpublished studies are reviewed. One was a study on "malignant" (i.e., cancer & AIDS) pain and the other on "nonmalignant" pain. Interestingly, the study published by Staats et al in 2004 (9) is on people with cancer and AIDS, and this study (Wallace et al, 11) is on patients with nonmalignant pain. In Mathur (2000), the malignant study has 112 patients as opposed to the 111 in the Staats et al study. The Wallace et al study lists 255 subjects while the Mathur study lists 256. The change in VASPI scores from baseline to the end of the initial titration period is the primary measure in each study. In the Staats study, the mean improvement in VASPI was 53.1% for ziconotide and 18.1% for placebo. In the Mathur study, the results are 53.1% for ziconotide and 18.1% for placebo. In the Wallace et al study, mean improvement for ziconotide was 31.2% and 6% for placebo. In the Mathur study, mean improvement for ziconotide was 31% and 6% for placebo. Isn't it interesting how similar these results are? It's as if these data were published more than once. As I learned last week, that's a big no no (14).
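To make the overlap easier to eyeball, here's the same comparison laid out side by side (all figures are the mean percent improvements in VASPI, ziconotide vs. placebo, quoted above):

```python
# Side-by-side comparison of the duplicated primary outcomes
# (mean % improvement in VASPI: ziconotide, placebo).
pairs = {
    "malignant pain": {"Staats et al. 2004": (53.1, 18.1),
                       "Mathur 2000": (53.1, 18.1)},
    "nonmalignant pain": {"Wallace et al. 2006": (31.2, 6.0),
                          "Mathur 2000": (31.0, 6.0)},
}

for population, reports in pairs.items():
    (a, b), (c, d) = reports.values()
    verdict = "identical" if (a, b) == (c, d) else "nearly identical"
    print(f"{population}: {a}%/{b}% vs. {c}%/{d}% -> {verdict}")
```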

As it turns out, these data were published more than once (15). Dr. Staats' reply is here (16). In his reply he stated, "I was involved from the outset of the ziconotide trial, including the design, collection, and analysis of data and the drafting of the initial manuscript." He continued, "The initial draft of our manuscript was sent to Neurex/Elan (the sponsors of our trial) by my coauthors and me around 1998, well before Dr Mathur, a former employee of Elan, published her review in June 2000." He added, "Her submission was sent without my permission or notification and without citing my coauthors or me."

As it turns out though, "Neurex Pharmaceuticals apparently condoned the publication of Mathur’s article since it provided access to the data file and signed off on the article by Mathur" (15). Moreover, "the article by Staats et al does include a citation to the article by Mathur." A citation? How could that be? Well according to Staats, "our in-house manuscript used a reference to an abstract by Brose et al, and in our successive revision process the reference was augmented by the citation of the article by Mathur. When this occurred I assumed that the Mathur reference had been added simply to update the Brose reference we continued to use, adding nothing new. My assumption was not correct; I should have read the manuscript." [my emphasis] In a previous post (17) I highlighted the importance of reading the studies you cite. If only this blog existed back then (sigh...).

Staats continued with "The system for preventing these infractions broke down...in the present case the previous publication was in a nonindexed medium and none of the authors knew that the review included data from our work." I'm not sure what a "nonindexed medium" is. Regardless, I'm not sure how valid an excuse that is, since Staats is one of the many co-authors of the 2003 and 2007 Polyanalgesic Conference articles. Shouldn't Dr. Staats have come across that study when he was supposedly reviewing the literature for ziconotide in preparation for the 2003 conference?

At the bottom of Staats' article, this is printed regarding the sponsor "The sponsor was responsible for the overall conduct of the study and the collection, analysis, and interpretation of the data obtained...the preparation and review of the manuscript were a joint effort among the authors, the sponsor, and a contract medical writer." I wonder how involved the "authors" actually were in writing and reviewing this article?

As it turns out, Mark Wallace was one of the many co-authors of Staats' study and the 2007 polyanalgesic article. He was also the lead author of the 2006 nonmalignant study. If only Wallace had been on the 2003 polyanalgesic committee, perhaps he would have seen that Mathur study before he re-published those data. (To date, I have not found any "duplicate publication" notices for this study, 11). The funny part is, Wallace cites two studies in which Mathur is an author, yet somehow her 2000 article with his data was not included.

So perhaps ziconotide is not the wonder drug that some have made it out to be. Also, according to The Medical Letter, a 30-day supply of ziconotide costs approximately $4200. If you don't have an intrathecal device installed, that procedure costs approximately $20,000. That's quite a price tag. Good thing all these decisions about ziconotide were made by unbiased people who check and re-check their work and base their decisions on hard science..., right?

Update: From Bad Science (12)

Saturday, September 27, 2008

And Now, A Drug Recommendation From Our Sponsor

Today I received my summer issue of The Pain Practitioner (1), and just like a box of Cracker Jack (2), there was a prize inside. Actually, there were two prizes (Oh goodie!). I received two CME booklets (Score!). At the bottom of each booklet was this sentence: "Supported by an unrestricted education grant from..."

For those of you who may not be well versed in pharmaceuticalese, when the phrase "unrestricted education grant" is translated into English, it means restricted (i.e., biased) education grant. Kind of the same way "bad" means "good" and "it's not you, it's me" means "it's you."

In the CME booklet that was "supported by an unrestricted education grant from Elan," two articles are included. The second article is titled "Interventional Modalities for the Treatment of Refractory Neuropathic Pain," by Lynn R. Webster (can you guess the drug company to which she is a consultant?). Anyway, this article is about implantable therapies for people with severe chronic back pain. In the last section of her article, she reviews the recommendations from the 2007 Polyanalgesic Consensus Conference (3).

The purpose of this conference was "to update previous recommendations and to form guidelines for the rational use of intrathecal opioid and nonopioid agents." These recommendations were made by an "expert" (I use that word loosely) panel of physicians and nonphysicians in the field of intrathecal therapies (i.e., spinal injection).

As a neuropsychologist, I often assess people's reasoning skills, specifically deductive reasoning. Here's your test. Don't worry, it's only 1 question in length. In the polyanalgesic article, the respective literatures of 20 different drugs were reviewed. In addition to opioid therapy, a new drug was christened as a first line monotherapy. What is the new drug that was recommended as a monotherapy?

Here are a couple of hints. First, this sentence appears in the acknowledgments section: "The authors would like to acknowledge Elan Pharmaceuticals for its most generous financial support of the consensus conference and 'hands off' approach to the final writing of this article."

Second hint: Elan's website lists four drugs marketed in the U.S. (4). It's one of those four drugs. Still not sure which drug to choose? I'll list the top 5 drugs (out of the 20 reviewed in the polyanalgesic article) by the number of mentions.

1. Morphine - mentioned over 160 times
2. Prialt (ziconotide) - 84 mentions
3. Clonidine - 58 mentions
4. Hydromorphone - 50 mentions
5. Adenosine - 50 mentions

If you guessed Prialt (ziconotide), then you're correct. Now, I'll admit that I have not read this article in its entirety (it's 29 pages), and I have not reviewed all the literature on the effectiveness of ziconotide; however, I'm not making any claims about its effectiveness or utility (that's outside the scope of my practice). It very well could be worthy of its first-line treatment status. However, a few curiosities continue to bug me.

Morphine has been around for a long time, so 160 mentions in the article makes sense. Its literature is quite extensive. Ziconotide was approved by the FDA in December of 2004, yet it received the second highest number of mentions at 84 (repetition is the key to memorization; repetition is the key to memorization; repetition is the key to memorization...). If you average the number of mentions for the remaining 15 drugs, the number is 19.7; ziconotide received more than four times that (yes, I actually counted all mentions for all 20 drugs).
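A quick back-of-the-envelope check on that ratio (the counts are my own tallies, as noted above):

```python
# Mention counts from the 2007 Polyanalgesic Consensus article
# (my own tallies; see the top-5 list above).
mentions = {"morphine": 160, "ziconotide": 84, "clonidine": 58,
            "hydromorphone": 50, "adenosine": 50}

avg_others = 19.7  # average mentions across the other 15 drugs reviewed
ratio = mentions["ziconotide"] / avg_others
print(f"Ziconotide was mentioned {ratio:.1f}x as often as the "
      "average remaining drug")
```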

Moreover, it is the only drug mentioned in the abstract, which states "Of note is that the panelists felt that ziconotide, based on new and relevant literature and experience, should be updated to a line one intrathecal drug." I see, so the decision had everything to do with the "relevant literature" and clinical "experience," and nothing to do with the "generous financial support" from Elan.

Here is where they use psychology on their readers: of the 20 drugs reviewed, ziconotide was the last drug reviewed, taking advantage of the recency effect. Also, its section is the longest of all the drugs (morphine was a close second). Did I mention that ziconotide is also the only drug that is mentioned in the conclusions section? I didn't? Well, it is. Maybe Elan was a little more hands on than the authors are willing to admit. In the CME article written by Dr. Webster, ziconotide is the only drug that receives its own section as well (remember, this is an "unrestricted" education grant).

At the end of the CME booklet there are 15 questions. The last 7 questions are derived from Dr. Webster's article. Can you guess which drug had its own questions?

Question 14 reads, "According to the expert panelist of the 2007 Polyanalgesic Consensus Conference, which of the following is a recommended first-line agent for pain?"
a. clonidine
b. bupivacaine
c. ziconotide (<- correct answer; like you didn't know that one)
d. baclofen

Question 15 reads, "According to this article, which of the following is true regarding ziconotide?"
a. It must be titrated slowly (<- correct answer)
b. It must be discontinued slowly so that the patient does not experience withdrawal syndrome.
c. The primary side effects are cardiovascular.
d. The primary side effects are respiratory.

For some strange reason, I'm not quite convinced by this "expert" panel's recommendations. I know that Elan supposedly had a "hands off" approach to the "final" writing of the article, but nothing is said about the conference itself. How "hands off" was Elan during the planning of that? The fact that their four-year-old drug made it to the top based on "expert" opinion is a hard pill to swallow (technically, it's not a pill, it's a liquid). I guess that means if Elan had not provided "generous financial support" for that conference, then the "expert" recommendations would have been the same. Right?

Wednesday, September 24, 2008

Proof of Evolution

This is a short documentary about the psychopharmacologist.

The psychiatrist (lat. shockus electricus) is an endangered species. The environmental mechanism of their demise is still unknown. While there have been some efforts to save this animal, their numbers continue to dwindle. However, a new species seems to have evolved from the psychiatrist. This new creature is called the psychopharmacologist (lat. prescribus pillus). While psychiatrists are scattered throughout the North American continent and still appear to be thriving in some parts of Europe, psychopharmacologists have developed large breeding populations around the coastal cities, as they seem to thrive in urban environments.

Seriously, psychopharmacologists are the only mainstream doctors (I'm lying, that's actually not true; see comments) whose title reflects how they treat (i.e., drugs) instead of what they treat (i.e., mental illness). An endocrinologist does not prescribe endocrines to patients. An immunologist is someone who studies the immune system. Psychopharmacology, on the other hand, is the study of drug-induced changes in mood, sensation, thinking, and behavior. That is quite different from psychiatry, which studies how to prevent and treat mental illnesses. At least the title "psychopharmacologist" tells us where their interests lie (it's in the drugs, not the patients).

News Flash: Hot Flashes Are Treated By...Everything!

...Well, I don't know that for certain, but that's what I thought when I read this headline: "Acupuncture Reduces Side Effects Of Breast Cancer Treatment As Much As Conventional Drug Therapy, Study Suggests" (1). According to this "first-of-its-kind study," acupuncture is as "effective and longer-lasting in managing the common debilitating side effects of hot flashes, night sweats, and excessive sweating (vasomotor symptoms) associated with breast cancer treatment." What is the conventional drug therapy to which acupuncture was compared? Effexor of course (that's his slave name; he prefers to be called venlafaxine).

Are there any other scientifically validated treatments for hot flashes and other vasomotor symptoms? Well, according to these people (2), "Soy seems to have modest benefit for hot flashes, but studies are not conclusive. Isoflavone preparations seem to be less effective than soy foods. Black cohosh may be effective for menopausal symptoms, especially hot flashes, but the lack of adequate long-term safety data (mainly on estrogenic stimulation of the breast or endometrium) precludes recommending long-term use. Single clinical trials have found that dong quai, evening primrose oil, a Chinese herb mixture, vitamin E, and acupuncture do not affect hot flashes; two trials have shown that red clover has no benefit for treating hot flashes." [my emphasis]. Did I read that correctly? Acupuncture did not affect hot flashes? That must be some sort of fluke. Except that these people (3) and these people (4) both found that acupuncture was no better than sham treatment.

Assuming that my thought process is linear: acupuncture is as effective as venlafaxine, yet acupuncture is no better than sham treatment. So that would mean..., venlafaxine is no better than sham either. So what does the evidence say? Well, venlafaxine has these three open-label trials (5, 6, 7), all of which were...drum roll please...positive! (I'm shocked).

Science Lesson 1: Open-label studies are pointless. 98% of all open-label trials are positive. Sounds pretty high, right? That's probably because I just made that statistic up. But, when you have no comparison group, and all parties involved know about the treatment, positive results are the rule, not the exception.

Anyway, I was able to find this one randomized controlled study (8), which lasted for a staggering 4 weeks and showed that venlafaxine was superior to placebo pill. Side effects of the venlafaxine treatment included "mouth dryness, decreased appetite, nausea, and constipation."

Science Lesson 2: When a drug has side effects, patients and doctors can accurately guess whether or not the patients were given placebo, thus breaking the blinding. Secondly, not all placebos are created equal. Pill placebo is less effective than capsule placebo, which is less effective than injection placebo. Also, if you put a sticker price on the placebo, the more expensive placebo outperforms the cheaper placebo (9). Lastly, another way to boost the placebo effect is to give a placebo that actually has side effects (10).

So quit jerking me around, does venlafaxine work or not? And while we're at it, do fluoxetine (11) and paroxetine (12) work too? Well, according to this meta-analysis (13) published in 2006, there are a total of 7 trials that compared either SSRIs or SNRIs to placebo. Only 3 out of those 7 trials were superior to placebo (43%). And those are only the published studies. Who knows what negative studies have not been published (14). Well, what about the science? There has to be some sort of biological explanation, right? According to the former psychiatrist and current psychopharmacologist Stephen Stahl, "It may be that actions on both the serotonergic and noradrenergic systems are required to improve these (vasomotor) symptoms" (pg. 626). There's just one problem with that theory. In the controlled trial with venlafaxine, the highest dose given was 150mg. At that dose, venlafaxine is barely an SNRI. And since the 150mg dose was no better than the 75mg dose, that does not lend much support to Dr. Stahl's theory.

Science Lesson 3: A theory is supposed to lead to a hypothesis, which leads to data, which leads to revising the theory. In pharmacology, however, you discover that a drug has an effect by accident; then, you assume that the mechanism of that drug is why there was an effect. Then the thought process seems to stop there, as these people tend to ignore contradictory evidence. Drug treatments for depression have these actions: SSRI, SNRI, DRI, NRI, Alpha2 antagonist, cortisol antagonist, CRH antagonist, blah, and blah. So the theory is that depression is caused by and treated by all these chemicals. Makes perfect sense.

So serotonergic drugs have some effect on hot flashes. But so do calcium channel blockers (15), CBT (16), and balancing your yin and your yang. And to be honest, I'm not even going to pretend that I know how black cohosh and soy work. So maybe my first thought was correct: hot flashes are treated by everything!

My long-winded point is this: When you have a condition, such as depression or hot flashes, that is highly subjective and has minimal to no reliable and objective identifiers, everything under the sun can be shown to have a positive effect. Everything, that is, except magnets (17).

Thursday, September 18, 2008

Psychotherapy Research Is Lame Too

WARNING: THE FOLLOWING POST CONTAINS A LOT OF NUMBERS. HEADACHE, NAUSEA, DIZZINESS, AND VOMITING MAY OCCUR.

In my previous post, I criticized the manner in which psychotherapy research is conducted. In this post, I discuss four (2 psychotherapy & 2 medication) studies. The emphasis will be on the populations used and how the external validity of the results (i.e., their generalizability) is greatly compromised. The actual results of these studies won't be discussed; however, a reference for each study is included. A specific type of psychotherapy known as behavioral activation therapy (BAT or BA) has this acute major depression treatment study (1), and this two-year relapse prevention study (2). Aripiprazole has this 26-week bipolar I maintenance study (3), and this 74-week extension study (4).

The first BA study initially included 388 subjects who completed a comprehensive intake assessment. Based on the exclusion criteria presented in this post (5), 250 (64%) subjects were eligible for randomization; however, 9 declined participation. In all, 241 (62%) were included in this study. The majority were excluded because of "subthreshold" or "low severity" depression. What we have left are 241 subjects with pure MDD with moderate to severe symptoms. Those subjects were randomized to one of four arms: cognitive therapy (45), behavioral activation (43), paroxetine (100), or placebo (53). At the end of 16 weeks, 172 (71%) subjects completed this study. By treatment arm, 39 (86%) of the CT group completed the study, followed by 36 (83%) for BA, 56 (56%) for paroxetine, and 41 (77%) for placebo. One important caveat: the placebo arm was dissolved after week 8. This means for the remaining 8 weeks, there were only three active arms. If you subtract out the placebo arm, then only 188 (78%) patients with MDD received active treatment, and 131 (54%) of them completed the study. Approximately 1 out of 2 subjects lasted 16 weeks.

In the first aripiprazole study, 633 recently manic subjects were recruited. After the exclusion criteria were enforced, 567 (89%) made it into the initial open-label 6-18 week stabilization phase. The study does not state why 11% were excluded. It's also important to note that 333 of the 567 subjects were from an aripiprazole acute mania study (i.e., 58% of the subjects included were already shown to be responsive to aripiprazole). At the end of the stabilization phase, 361 (63%) subjects had discontinued (primarily for side effects, 22%), while 206 (36%) remained in the study. Out of those 206 subjects, 161 participated in the 26-week maintenance phase. In simpler terms, only 28% of the original 567 advanced to the actual area of investigation. 83 subjects were randomized to placebo, and the other 78 subjects were randomized to aripiprazole. At the end of 26 weeks, 28 subjects completed the placebo arm (34%), and 39 subjects completed the aripiprazole arm (50%). That's a total of 67 subjects, meaning 88% of the 567 subjects who entered the stabilization phase dropped out of the study. Only 41% of the subjects who advanced to the 26-week phase remained in the study.

In the second BA study, those subjects who responded to acute treatment were eligible for this continuation study. 106 (61%) of the 172 subjects who finished the previous study were included. The subjects who originally received CT or BA did not receive continued treatment during the two-year follow-up period. The subjects who received paroxetine were re-randomized to either continued medication or switched to placebo. After the first year of follow-up, the paroxetine group was tapered off treatment and the placebo group was dropped from the study. At the end of year one, 55 (51%) subjects had either dropped out or relapsed (9 relapses occurred in each of the three treatment arms, 12 in placebo). At the end of year two, only 46 (43%) subjects completed the study. Since the placebo arms were dropped at the halfway point in each study (thus limiting comparisons with active treatments), only 167 (69%) subjects actually received active treatment, and only 27% of those subjects completed the final study.

In the second aripiprazole study, 66 of the 67 subjects who completed the 26-week study advanced to the 74-week maintenance phase. 27 subjects were from the original placebo arm and 39 subjects were from the original aripiprazole arm. At the end of the study, 22 (81%) of the placebo subjects discontinued while 32 (82%) of those treated with aripiprazole dropped out of the study. A grand total of 12 subjects out of the initial 567 completed the entire study. That's a paltry 2%. If you want to be more liberal and count only those subjects who initially entered the 26-week study (161), then a ginormous 7% of those subjects completed the study. That's sad when you remember that 333 subjects were already shown to have had a response to aripiprazole, and the remainder were stabilized on aripiprazole.
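Here is the whole aripiprazole attrition funnel in one place, using the counts summarized above. Note that the percentages below are computed against the 633 originally recruited, so they run slightly lower than the in-text figures, which use 567 as the denominator:

```python
# Aripiprazole attrition funnel across both published reports
# (subject counts as summarized in the two paragraphs above).
funnel = [
    ("recruited", 633),
    ("entered open-label stabilization", 567),
    ("completed stabilization", 206),
    ("entered 26-week maintenance", 161),
    ("completed 26-week maintenance", 67),
    ("entered 74-week extension", 66),
    ("completed 74-week extension", 12),
]

recruited = funnel[0][1]
for stage, n in funnel:
    print(f"{stage:<34} {n:>4} ({n / recruited:.0%} of recruited)")
```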

What I am trying to illustrate is how unimpressive these numbers actually are. Large percentages of subjects are lost even before the actual studies begin. Secondly, since these studied populations are not representative of actual clinical populations, the positive results in these studies are pretty meaningless. Prospective studies that follow subjects for one to two years are quite rare. However, when done, it's striking how very few subjects actually complete these studies. So BA was shown to be comparable to paroxetine after 16 weeks. Since the placebo arm was dropped after week 8, we have no meaningful comparison group. Although aripiprazole was shown to be a maintenance treatment, the numbers were so small in the end that the results become moot. Sadly, as far as these studies go, this is as good as it gets.

Psychotherapy Research Is Lame: Part 1

Over at PsychCentral there is a post (1) titled, "Cognitive Behavioral Therapy Best to Treat Childhood Trauma." Reported are the findings from a recent meta-analysis that states, "strong evidence showed that individual and group cognitive–behavioral therapy can decrease psychological harm among symptomatic children and adolescents exposed to trauma." Regarding the other therapies examined, "evidence was insufficient to determine the effectiveness of play therapy, art therapy, pharmacologic therapy, psychodynamic therapy, or psychological debriefing in reducing psychological harm." According to PsychCentral, this doesn't mean that "these other types of interventions are completely ineffective or don’t work… just that this particular scientific analysis...did not find any significant impact of them." Actually, this scientific analysis found that "evidence was insufficient to determine the effectiveness" as opposed to not finding "any significant impact." That's what happens when one treatment is studied more often than others (2). CBT may be the best treatment, but when other treatments aren't tested, there is no way to tell.

Here's the problem: psychotherapy research is pretty lame. Head-to-head comparisons of different psychotherapies are just as rare as head-to-head drug comparisons. Granted, drug companies put up millions of dollars to promote (and occasionally research) their own treatments, while we psychology types are lucky to be included in big NIMH studies. Yet, when we do crank out those once-in-a-decade, large, randomized, double-blind, placebo-controlled studies, they aren't necessarily better than drug trials. I know that drug research is easy to criticize; there are big, evil, greedy, multinational pharmaceutical companies on which to blame anything and everything. We don't have evil psychotherapy companies at which to hurl blame. The closest thing we have to drug companies are the test publishers such as Psycorp, PAR, and WPS. However, those companies can't hold a candle to the pharmaceutical industry. So who do we blame for the poor state of psychotherapy research? The drug companies. Who else?

If you're a frequent reader of CL Psych (3), then you're familiar with those purveyors of biased research, "Key Opinion Leaders" (KOLs). Many KOLs conduct clinical trials funded by drug companies. The cool part is, when a pharmaceutical company finances a drug trial, it's more likely to produce positive results than an independently financed drug trial (4). This creates the impression that the drug under investigation actually does something. Their bias screws with the science. So the question is, does the field of psychology have anything similar to those KOLs? The answer is yes. You see, psychologists have these things called "theoretical orientations," which dictate their allegiance to specific types of psychotherapy. That allegiance creates bias, and that bias screws with the science. Similar to drug company financed trials, a therapist's allegiance to a specific type of therapy is a better predictor of positive results than the actual components that make up that therapy (5). Meta-analyses that investigated therapist allegiance have reported effect sizes as high as 0.65 (6).
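For readers who don't speak effect size: a d of 0.65 is substantial. One way to translate it is the "common language" effect size, i.e., the probability that a randomly drawn case from the favored condition beats a randomly drawn case from the other. A minimal sketch, assuming normally distributed outcomes:

```python
# Convert Cohen's d to the "common language" effect size:
# P(random score from favored group > random score from other group).
# Uses Phi(d / sqrt(2)), which simplifies to 0.5 * (1 + erf(d / 2)).
from math import erf

def common_language_es(d: float) -> float:
    return 0.5 * (1 + erf(d / 2))

print(f"d = 0.65 -> P(superiority) = {common_language_es(0.65):.0%}")  # ~68%
```

In other words, an allegiance effect of that size means a randomly chosen outcome from the favored therapy beats one from the comparison roughly two times out of three.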

Another problem area is placebo controls. A lot of buzz has been generated about antidepressants' apparent lack of superiority over placebo (7). What hasn't generated buzz is this: if therapy is just as effective as medication, then therapy also lacks superiority over placebo. However, there's more. In psychotherapy research, control groups come in many different shapes. For example, there's the dreaded "wait-list" control group (8). People assigned to this group have their symptoms checked periodically, while others get their weekly dose of CBT, IPT, BAT, or some other combination of letters. Is it an adequate placebo? Well, according to the Carlat Psychiatry Report (9a), "the wait list control is suboptimal, because unlike...pill placebo, wait list patients don’t actually believe that they are getting treatment." [my emphasis] This can lead to one of two scenarios: reactive demoralization or the John Henry effect. The former occurs when people's condition worsens because they know that others are getting a better treatment (thus making the active treatment look better). This can also happen in drug research; however, it usually doesn't occur until a person has deduced that they're on placebo. The John Henry effect occurs when the control group tries to compete with the experimental group, leading to an improvement in their condition (thus making the active treatment look less effective). Although research studies "have shown that simply being put on a wait list results in substantial improvement...," this effect isn't "as robust as pill placebo" (9b) for the reasons mentioned above.

The obvious solution to this problem is to create an intervention that gives the impression that the control group is receiving adequate treatment. That's how "treatment as usual" and "clinical management" came about. The problem with these control groups is that they are intentionally "de-powered." Therapists are instructed either to remain inert or to minimize certain nonspecific therapeutic ingredients, such as the therapeutic alliance (10), which is a difficult task. Another problem stems from the fact that when a therapist is providing CBT, he knows it; when a therapist is providing "clinical management," he knows it. This confound is called the experimenter bias effect. In drug research, when a pharmacotherapist knows that he is prescribing an active drug, the study is referred to as unblinded (or single-blind), and such studies are routinely criticized (11). I've seen only one study (12) where the control therapists were taught a "new" therapy, which was intended to increase the likelihood that the therapists would believe they were providing an adequate treatment.

Another widely criticized component of drug trials, which limits the external validity (i.e., generalizability) of the results, is sample enrichment. This is when a clinical population that is likely to respond to treatment is selected to participate in a study. Typically, these enriched samples represent approximately 20% of the patients actually seen in clinical settings (13). For example, the quetiapine BOLDER studies used this population of bipolar patients: people who met criteria for BP I or II (I'm with you so far), no comorbid Axis I disorders (you lost me...), current depressive episode lasting no longer than 12 months (I'm still not with you), no history of nonresponse to more than two antidepressants (where'd you go?), no comorbid substance abuse (there goes 75% of the bipolar population), no medical illnesses (seriously, I can't see you), and no suicide risk (14). Do these people actually exist? Because I've never seen one.

The protocol for a recently published behavioral activation (BAT) study (15) had this MDD population: DSM-IV diagnosis of MDD with no history of BP, psychosis, organic brain disorder, or mental retardation (that's fair enough). However, participants were also excluded if there was a risk of suicide; substance abuse; a comorbid anxiety, eating, or pain disorder; certain personality disorders; a history of poor response to CBT or paroxetine; or an unstable medical condition. The rationale for an enriched sample is not necessarily nefarious; however, it severely limits the external validity of the results.
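To see how quickly stacked exclusion criteria whittle a sample down to that one-in-five figure, here is a back-of-the-envelope sketch in Python. The retention rates below are invented for illustration; they are not taken from the BOLDER or BAT protocols:

# Hypothetical fraction of patients still eligible after each
# exclusion criterion (illustrative guesses, not study data)
retention = {
    "no comorbid Axis I disorder": 0.60,
    "no substance abuse": 0.70,
    "episode shorter than 12 months": 0.85,
    "no significant medical illness": 0.80,
    "no suicide risk": 0.80,
}

eligible = 1.0
for criterion, rate in retention.items():
    eligible *= rate
    print(f"after '{criterion}': {eligible:.0%} of patients remain")

# The final line prints ~23% -- five fairly generous criteria already
# land near the ~20% figure cited above.

Even modest criteria multiply, which is why the patients in these trials look nothing like the ones in the waiting room.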

As you can see, psychotherapy research is lame. However, by its very nature, psychotherapy is very difficult to study. That's why there is a bias toward "manualized" forms of therapy: they provide a framework for therapists to follow and, in theory, minimize many confounding factors. Because of that bias, certain therapies like CBT are frequently researched, which creates the appearance of superiority over other therapies. Lastly, just because psychotherapy research is lame does not mean it's lame for the same reasons as drug research. I'd argue that drug research has little excuse to be as lame as it is. Psychotherapy research, however, should simply be conducted better than it currently is.

Sunday, September 14, 2008

Holy Schatz! Part 2

I have reproduced two of the slides that were presented by Schatzberg to better illustrate my point. At the bottom of the first (top/right) slide, you'll see the following reference: "DeBattista et al., Biol Psychiatry, 60(12):1343-9, 2006," which is this study (1). These are data from a published clinical trial, accompanied by the appropriate reference.

The second slide (bottom/right) carries this reference at the bottom: "Schatzberg AF et al., J Affective Disorders, 107:S40-41, 2008." As I pointed out in my previous post, this reference is to an abstract that does not mention these data.

Now, I am not suggesting any wrongdoing; however, I do wonder: what was the purpose of listing this reference? It appears to have no relevance to these data. This much I do know: when data are presented and accompanied by a reference, it is implied that the data come from that reference. In an earlier post (2), I critiqued an article wherein the authors made specific statements that were not supported by the references they cited. This is how misinformation is spread.

And speaking of misinformation: in my first post about Schatzberg's presentation (3; the post has been corrected), I wrote that he did not indicate that study 06 was negative. In fact, he did show a slide indicating that the primary endpoint was not statistically significant (p=.144). What he primarily focused on was the secondary analysis of the data, which found that "there was a statistically significant correlation between plasma levels and clinical outcome achieved during treatment" (4). This is Corcept's and Schatzberg's attempt to turn a negative into a positive, which they have been doing for a couple of years now (5).

Saturday, September 13, 2008

Holy Schatz!

I'm back after attending the 3rd Annual Psychotic Disorders Conference (1). One of the speakers was Corcept co-founder (and shareholder) and president of the APA (the bad APA, not the good APA), Alan F. Schatzberg (pictured right). His topic? "The Latest Treatment Approaches for Managing Psychotic Depression." Most of Schatzberg's work regarding his company's drug, Corlux (also known by its many aliases: mifepristone, RU-486 [its special-ops name], and the abortion pill), has been well documented at CL Psych, as well as by others, and can be accessed through the above link.

Did he have any new or exciting data to present? No. It was like an Earth, Wind, and Fire concert: he was doing all his greatest hits. He began with the specter of psychotic depression: it shows neuropsychological deficits similar to those seen in schizophrenia (not quite true, but I'll humor him), it represents 15-18% of cases of major depression (that's in Europe, by the way), and of course, who can forget his number one hit, hypercortisolemia. Ah, the memories that one brings back. Anyway, after quickly blowing through various treatments (ECT, Symbyax, SSRIs) and a quick primer on the HPA axis, he moved on to mifepristone. First, he spoke about the 2006 Corcept study 03. The lead author is DeBattista. You can read more about him and that study here (2). Did you read it? Good. So, did the Schatzmeister mention any of those criticisms? No. And we're moving on...

Here is where I became confused: when he talked about Corcept study 06-PMD. The study concluded in 2007, and you can read about the results here (3). The main finding was this: "...study 06, the last of the three Phase 3 trials, in March 2007. These results indicated that this study did not achieve statistical significance with respect to the primary endpoint, 50% improvement in the Brief Psychiatric Rating Scale Positive Symptom Subscale, or BPRS PSS, at Day 7 and at Day 56." What Schatzy primarily focused on was Corcept's spin, which is found in the latter part of that release.

This is where it gets interesting. The PPT slides that Schatzberg showed while talking about this study had this reference at the bottom: "Schatzberg AF et al., J Affective Disorders, 107:S40-41, 2008." What would one assume when seeing that reference underneath the 06-PMD data? If you're an idiot like me, you'd assume that he was referring to a published study, or at least published data. He's not (4). It's an abstract, two paragraphs in length, summarizing what he was going to talk about at a symposium; the content is identical to the lecture I saw (he's doing a greatest-hits tour, folks!). These data weren't even referenced in the abstract. So why list a reference at all? Because that allowed him to present unpublished data, from a negative trial, as if it were published data, and to give it a positive spin. That's a neat trick... and now watch him do it while drinking a glass of water...

At the end of the lecture, he gave contact information for a person at Stanford, so others could refer patients with PMD to a clinical trial of mifepristone at Stanford University, his place of employment. Because the trial is affiliated with Stanford, he is recused from working on it (read these links regarding his position and his other influences: 5, 6).

To summarize, Dr. Schatzberg gave a lecture wherein he presented unpublished data from a company in which he has a major stake, as if those data were published. Then, he made a request for referrals to a clinical trial at his place of employment. Somehow, all this constitutes being recused. The best part is, I received 5.5 CME hours for attending this infomercial. And as usual, the representatives from Lilly, Teva, AstraZeneca, Abbott, Janssen, and BMS were all present, which means my precious ink pen and notepad collection quadrupled in size.

Saturday, September 6, 2008

The Worst Book I Have Ever Read. Ever!

What is the book in question? The Bible, of course. But a close second is Understanding Depression by Donald F. Klein and Paul H. Wender (1).

Major Complaint: No references. Not a single one. I guess they wrote this book from an a priori position of authority; therefore, everything they say is gospel, does not need to be supported by facts, and should not be questioned (that's why it's second only to the Bible).

Speaking from atop a mound, these two researchers start with this: "Depression may be a normal human emotion - a response to loss, disappointment, or failure. Some depressions, however, should more properly be put in the category of common biological diseases... pg1" Sadness, not depression, is a normal human emotion. Depression is a specific mental illness. What's that? I will have to defer to the eminent psychopharmacologist Stephen M. Stahl (I wonder, do rheumatologists ever call themselves non-steroidal anti-inflammatoryists?): "Mental Illnesses are defined as mixtures of symptoms packaged into syndromes. These syndromes are consensus statements from committees writing the nosologies of psychiatric disorders for the DSM of the APA and the ICD. Thus, mental illnesses are not diseases. pg178" (2) Consensus statements from committees? Ooh, I get nauseous from all that complex medical jargon. That's why I'm a psychologist; I'm not smart enough to grasp this stuff.

In their manifesto, they list several reasons for writing this tome, the first being: "To explain what biological depression is and to clarify the difference between depression, a normal emotion, and biological depression, an illness. pg1" I've shown in the paragraph above why that goal is futile. But there's more: "biological depression is common - in fact, depression and manic-depression are among the most common physical disorders seen in psychiatry. pg2" What?! Physical disorders?! So that means depression is like lupus or cancer. Do they provide any evidence? Nope; remember, there are no references. Well, how about an a priori explanation? Since they use complex medical terminology like "heredity," "genes," and "chemistry," stuff that goes over my head, I'll use their words instead: "In sum, although we know comparatively little about the altered chemistry of individuals with depression, our knowledge is advancing rapidly. pg96" Let me get my Merck Manual to translate that. In other words, there is no definitive proof of a chemical imbalance, but we're going to pretend there is anyway.

Should anyone be listening to me anyway? According to these authors, no. "It is essential that people who suspect they are suffering from depression know who is qualified to help. Not all physicians or mental health workers - such as psychologists, social workers, and psychiatric nurses - have had adequate training in the diagnosis and treatment of depression. pg4" Then why do all those neurologists, PCPs, and even psychiatrists refer patients to little ol' me for differentials and med checks? These practitioners even require that I make recommendations for treatment and rehabilitation. They probably should stop doing that, since "Nonphysician therapists, such as psychologists, social workers, and pastoral counselors, are handicapped in treating depressive patients because of their lack of medical training... pg171" How about this: "leukoencephalopathy." I think that's proof enough of my medical credentials.

"Psychologists are often still taught to use diagnostic techniques that are no longer considered useful by biological psychiatrists, and they were not trained to recognize biological factors in mood disorders and other psychiatric illnesses. pg172" That's because psychiatrists run blood and genetic tests that have the ability to diagnose the majority of the 400 diagnoses is the DSM-IV. Wait a minute, they don't do that? Then what are the tests that these biological psychiatrists use? According to the authors, "In the psychiatric part of the evaluation, the psychiatrist will inquire about definite signs and symptoms characteristic of depression and other psychiatric conditions. pg100" Well garsh, I wish I were a baby bumble bee...It's called a clinical interview folks! All health care professions learn how to conduct them.

I bought this book two years ago. I read one paragraph and then my third testicle descended (painful). These guys are d-bags. Many nonphysician therapists (a dumb term, since most "real doctors," including psychiatrists, don't do therapy, 3) receive training in biology, pharmacology, and other related disciplines (4). The reality is, biological psychiatry is a field in search of a science (5). This book is only meant to shore up the low self-esteem (that's right, a psychological explanation) of these guys by belittling the professions of others. Sadly, I believe these two truly believe in what they're preaching. The Last Psychiatrist sums it up best: "The reason he believes it is his entire professional existence -- his whole identity -- is predicated on believing it. He's not a scientist, he's a priest. (6)" Be proud of what you do.

Friday, September 5, 2008

All I Really Need to Know About Serotonin I Learned in Kindergarten


This article (A), which quotes the findings of this study (B), is another example of the misrepresentation of research. The actual study makes claims not supported by its findings and misrepresents the research cited within its text.

First, the article title at ScienceDaily (SD) is "PET Scans Help Identify Mechanism Underlying Seasonal Mood Changes." No. They should have used the actual study title "Seasonal Variation in Human Brain Serotonin Transporter Binding." That's strange, I don't see the word "mood" anywhere. Hold on, let me get my glasses. Wait a minute, I don't wear glasses. I can see after all.

Since the rest of the SD article is just quotes lifted from the study, I'll focus on the study itself. "Indolamines (tryptophan, serotonin, melatonin, and related compounds) have transduced light signals and information on photoperiod into organisms and cells since early in evolution, and their role in signaling change of seasons is preserved in humans." The study cited for this statement (3) covers melatonin only, a single indolamine, not indolamines generally. Second, tryptophan is the precursor amino acid that is converted into serotonin and melatonin (i.e., indolamines are neurotransmitters synthesized from tryptophan, a standard amino acid).

"Serotonin is involved in the regulation of many physiologic and pathologic behaviors that vary with season in clinical and nonclinical populations.3-12" Maybe it's just me, but when I read this sentence, I assumed that the 9 studies referenced would support the role of serotonin "in the regulation of many physiologic and pathologic behaviors that vary with season." No. Studies 3-11 establish that seasonal mood changes occur in healthy people and in some clinical populations. Only study 12 has anything to do with the serotonin. The serotonin transporter (SERT) specifically. The words "mood" or "depression" are nowhere to be found in that article. The researchers should have said "there is substantial evidence indicating that moods vary by season in both healthy and clinical populations. The role of serotonin is currently unknown."

"Seasonal variations in peripheral serotonergic markers have been demonstrated in several studies." 3 studies are cited and the studies do support this statement. I don't know if 3 constitutes "several," but at least the above statement is accurate. See, that's what happens when you actually read the work you cite.

"...the seasonal variation in serotonin-related behaviors,3-12" Nope. They cite the same studies where only one is about SERT. I want evidence that seasonal mood variations are "serotonin-related behaviors."

"Previous investigations19-20 of regional serotonin transporter binding and season in humans have not led to a clear understanding of the relationship between these 2 measures." That's probably due to the fact that those two studies cited have nothing to with serotonin binding and the seasons. The first study (19) investigated the effects of MDMA and reduction of SERT. The word "season" doesn't appear in the article. The second study (20) is a review of MDD and AD imaging studies. Again, the word "season" is not in the article. Later, they cite two other studies (21, 22). The first study actually is about seasonal SERT changes while the second article focused on that topic secondarily. Maybe the researchers intended to cite references 21 and 22, instead of 19 and 20.

The researchers reach this conclusion about their study: "Serotonin transporter binding potential values vary throughout the year with the seasons." Yes, I'm with you so far. "Since higher serotonin transporter density is associated with lower synaptic serotonin levels, regulation of serotonin transporter density by season...has the potential to explain seasonal changes in normal and pathologic behaviors." Do you have any leftover "No's" from this post (C)? I suggest that you use them now.

Synaptic serotonin levels cannot be directly measured in vivo. So how are they estimated? By measuring the presence of serotonin's metabolite, 5-HIAA. Next fact: roughly 95% of the body's serotonin is in the gut (D). So how is low serum 5-HIAA a measurement of brain serotonin? It's not. Make sense now? No? Good, let's move on.

"Higher regional serotonin transporter binding potential values in fall and winter may explain hyposerotonergic [related to low serotonin levels] symptoms, such as lack of energy, fatigue, overeating and increased duration of sleep during the dark season."Actually those behaviors are better explained by low cortisol (E). If cortisol is low, the liver cannot synthesize glucose, which leads to lack of energy, fatigue, and increased sleep. People eat more food (especially those high in carbs) in order to increase glucose, which will give them energy.

Still confused? Read this to learn more about serotonin and mood (F).