Over at PsychCentral there is a post (1) titled, "Cognitive Behavioral Therapy Best to Treat Childhood Trauma." It reports the findings of a recent meta-analysis, which states that "strong evidence showed that individual and group cognitive–behavioral therapy can decrease psychological harm among symptomatic children and adolescents exposed to trauma." Regarding the other therapies examined, "evidence was insufficient to determine the effectiveness of play therapy, art therapy, pharmacologic therapy, psychodynamic therapy, or psychological debriefing in reducing psychological harm." According to PsychCentral, this doesn't mean that "these other types of interventions are completely ineffective or don’t work… just that this particular scientific analysis...did not find any significant impact of them." Actually, the analysis found that "evidence was insufficient to determine the effectiveness" of those treatments, which is not the same as failing to find "any significant impact." That's what happens when one treatment is studied more often than others (2). CBT may be the best treatment, but when the other treatments are barely tested, there is no way to tell.
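That asymmetry is easy to see with a toy calculation. Here's a minimal sketch, with entirely invented trial counts and sample sizes and a deliberately crude fixed-effect model, showing how the same underlying effect can look like "strong evidence" when twenty trials exist and "insufficient evidence" when only two do:

```python
import numpy as np
from scipy import stats

def fixed_effect_ci(d, n_per_arm, k):
    """95% CI for a standardized mean difference d, pooled over k
    identical two-arm trials (a deliberately crude fixed-effect model)."""
    var_single = 2 / n_per_arm + d**2 / (4 * n_per_arm)  # approx. variance of d
    se = np.sqrt(var_single / k)
    z = stats.norm.ppf(0.975)
    return d - z * se, d + z * se

# Same hypothetical true effect (d = 0.5), very different evidence bases.
for label, k in [("well-studied therapy, 20 trials", 20),
                 ("rarely studied therapy, 2 trials", 2)]:
    lo, hi = fixed_effect_ci(d=0.5, n_per_arm=15, k=k)
    print(f"{label}: 95% CI = {lo:.2f} to {hi:.2f}")
```

With these made-up numbers, the 20-trial interval excludes zero while the 2-trial interval does not. The pooled estimate is identical in both cases; only the width of the confidence interval, and therefore the verdict, changes.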
Here's the problem: psychotherapy research is pretty lame. Head-to-head comparisons of different psychotherapies are just as rare as head-to-head drug comparisons. Granted, drug companies put up millions of dollars to promote (and occasionally research) their own treatments, while we psychology types are lucky to be included in big NIMH studies. Yet, when we do crank out those once-in-a-decade, large, randomized, double-blind, placebo-controlled studies, they aren't necessarily better than drug trials. I know that drug research is easy to criticize; there are big, evil, greedy, multinational pharmaceutical companies on which to blame anything and everything. We don't have evil psychotherapy companies at which to hurl blame. The closest things we have to drug companies are the test publishers, such as Psycorp, PAR, and WPS. However, those companies can't hold a candle to the pharmaceutical industry. So who do we blame for the poor state of psychotherapy research? The drug companies. Who else?
If you're a frequent reader of CL Psych (3), then you're familiar with those purveyors of biased research, "Key Opinion Leaders" (KOLs). Many KOLs conduct clinical trials funded by drug companies. The cool part is, when a pharmaceutical company finances a drug trial, it's more likely to produce positive results than an independently financed drug trial (4). This creates the impression that the drug under investigation actually does something. Their bias screws with the science. So the question is, does the field of psychology have anything similar to those KOLs? The answer is yes. You see, psychologists have these things called "theoretical orientations," which dictate their allegiance to specific types of psychotherapy. That allegiance creates bias, and that bias screws with the science. Much like drug-company financing, a therapist's allegiance to a specific type of therapy predicts positive results better than the actual components that make up that therapy do (5). Meta-analyses that investigated therapist allegiance have reported effect sizes as high as 0.65 (6).
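To put that number in context: assuming those meta-analyses report a standardized mean difference (Cohen's d), the effect size is just the gap between two group means divided by their pooled standard deviation:

```latex
d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By Cohen's rough conventions (0.2 is small, 0.5 medium, 0.8 large), a d of 0.65 attributable to allegiance alone is a medium-to-large effect, roughly the same ballpark as the effects often claimed for the therapies themselves.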
Another problem area is placebo controls. A lot of buzz has been generated about antidepressants' apparent lack of superiority over placebo (7). What hasn't generated buzz is this: if therapy is only as effective as medication, then therapy also lacks superiority over placebo. However, there's more. In psychotherapy research, control groups come in many different shapes. For example, there's the dreaded "wait-list" control group (8). People assigned to this group have their symptoms checked periodically, while others get their weekly dose of CBT, IPT, BAT, or some other combination of letters. Is it an adequate placebo? Well, according to the Carlat Psychiatry Report (9a), "the wait list control is suboptimal, because unlike...pill placebo, wait list patients don’t actually believe that they are getting treatment." [my emphasis] This can lead to one of two scenarios: resentful demoralization or the John Henry effect. The former occurs when people's condition worsens because they know that others are getting a better treatment (thus making the active treatment look better). This can also happen in drug research; however, it usually doesn't occur until the person has deduced that they're on a placebo. The John Henry effect occurs when the control group tries to compete with the experimental group, leading to an improvement in their condition (thus making the active treatment look less effective). Although research studies "have shown that simply being put on a wait list results in substantial improvement...," this improvement isn't "as robust as pill placebo" (9b) for the reasons mentioned above.
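To make the stakes concrete, here is a minimal simulation sketch in which the same hypothetical therapy is compared once against a pill-placebo-like control and once against a wait list. The improvement means (1.0, 0.6, and 0.2 SD units) are invented for illustration, not taken from any trial:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # patients per arm (hypothetical)

# Hypothetical mean symptom improvement (in SD units) per condition.
active       = rng.normal(1.0, 1.0, n)  # active therapy
pill_placebo = rng.normal(0.6, 1.0, n)  # pill placebo: strong expectancy effects
wait_list    = rng.normal(0.2, 1.0, n)  # wait list: some improvement, no expectancy

def cohens_d(a, b):
    """Standardized mean difference with a pooled SD."""
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

print(f"active vs pill placebo: d = {cohens_d(active, pill_placebo):.2f}")
print(f"active vs wait list:    d = {cohens_d(active, wait_list):.2f}")
# The identical therapy looks roughly twice as impressive against the wait list.
```

Same therapy, same patients, two very different headline effect sizes. The choice of control condition is doing a lot of the work.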
The obvious solution to this problem is to create an intervention that gives the impression that the control group is receiving adequate treatment. That's how "treatment as usual" and "clinical management" came about. The problem with these control groups is that they are intentionally "de-powered." Therapists are instructed either to remain inert or to minimize certain nonspecific therapeutic ingredients, such as the therapeutic alliance (10), which is a difficult task. Another problem stems from the fact that when a therapist is providing CBT, he knows it. When a therapist is providing "clinical management," he knows it. This confound is called the experimenter bias effect. In drug research, when a pharmacotherapist knows that he is prescribing an active drug, the study is referred to as unblinded (or single-blind). Such studies are routinely criticized (11). I've seen only one study (12) in which the control therapists were taught a "new" therapy, which was intended to increase the likelihood that they would believe they were providing an adequate treatment.
Another widely criticized component of drug trials, one that limits the external validity (i.e., generalizability) of the results, is sample enrichment. This is when a clinical population that is likely to respond to treatment is selected to participate in a study. Typically, these enriched samples represent approximately 20% of the patients actually seen in clinical settings (13). For example, the quetiapine BOLDER studies used this population of bipolar patients: people who met criteria for BP I or II (I'm with you so far), no co-morbid Axis I disorders (you lost me...), current depressive episode can't have lasted longer than 12 months (I'm still not with you), no history of nonresponse to more than two antidepressants (where'd you go?), no co-morbid substance abuse (there goes 75% of the bipolar population), no medical illnesses (seriously, I can't see you), and no suicidal risk (14). Do these people actually exist? Because I've never seen one.
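To see how the criteria compound, here's a toy calculation with invented pass rates. None of these percentages come from the BOLDER papers; they're chosen only to show the multiplicative attrition:

```python
# Hypothetical, invented pass rates for each exclusion criterion --
# chosen only to show how quickly stacked criteria shrink a sample.
criteria = {
    "meets BP I/II criteria": 1.00,      # starting pool
    "no comorbid Axis I disorder": 0.55,
    "current episode < 12 months": 0.80,
    "<= 2 failed antidepressants": 0.80,
    "no substance abuse": 0.70,
    "no medical illness": 0.85,
    "no suicide risk": 0.90,
}

eligible = 1.0
for criterion, pass_rate in criteria.items():
    eligible *= pass_rate
    print(f"{criterion:<30} -> {eligible:.0%} of patients remain")
```

With these made-up rates the eligible pool ends up below one in five, in line with the ~20% figure cited above (though that agreement is by construction). The real point is that each individually reasonable-sounding criterion multiplies against all the others.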
The protocol for a recently published behavioral activation (BAT) study (15) had this MDD population: a DSM-IV diagnosis of MDD with no history of BP, psychosis, organic brain disorder, or mental retardation (fair enough). However, participants were excluded if there was a risk of suicide; substance abuse; a co-morbid anxiety, eating, or pain disorder; certain personality disorders; a history of poor response to CBT or paroxetine; or an unstable medical condition. The rationale for an enriched sample is not necessarily nefarious; however, it severely limits the external validity of the treatment findings.
As you can see, psychotherapy research is lame. However, by its very nature, psychotherapy is very difficult to study. That's why there is a bias toward "manualized" forms of therapy. These provide a framework for therapists to follow and, in theory, minimize many confounding factors. Because of that bias, certain therapies like CBT are frequently researched, which creates the appearance of superiority over other therapies. Lastly, just because psychotherapy research is lame does not mean it's lame for the same reasons as drug research. I'd argue that drug research has little excuse to be as lame as it is. However, psychotherapy research should be conducted better than it currently is.
Thursday, September 18, 2008
1 comment:
What are your views on head-to-head comparisons with both groups receiving treatment? I'm thinking specifically of comparing DBT with mentalization-based therapy in patients with borderline personality disorder.