Thursday, February 25, 2010

Is the Clinical Significance Criterion Significant?

The draft version of DSM-V: Revenge of the Fallen has been online for a few weeks (1) and much has already been written about it (1, 2, 3, 4). Much of the focus has been on what is "new" and what is "gone." One feature shared by the majority of DSM diagnoses, the "clinical significance" criterion, might be on its way out. Typically this criterion reads, "The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning." The general rule is that if the person does not satisfy this criterion, a diagnosis probably should not be made.

This criterion is unique to the DSM-IV and is not found in the earlier versions of the text. The stated reason for adding it to the DSM was to
"establish the threshold for the diagnosis of a disorder in those situations in which the symptomatic presentation by itself (particularly in its milder forms) is not inherently pathological and may be encountered in individuals for whom a diagnosis of mental disorder would be inappropriate."  
Since mental health diagnoses are made by subjective analysis (often referred to as clinical judgment), does the addition of this criterion aid in the diagnostic process?

Not according to Wakefield et al., who published an article in the January 2010 issue of the American Journal of Psychiatry (5). In the article, titled "Does the DSM-IV Clinical Significance Criterion for Major Depression Reduce False Positives? Evidence From the National Comorbidity Survey Replication," the authors argue that the criterion is ineffective because it is redundant.

Wakefield argues that "distress is common to both normal reactions (e.g., acute grief) and disordered conditions, 'since most of these symptoms are either intrinsically distressing or are almost invariably accompanied by distress about having the symptom.'"

In other words, it's highly unlikely that an individual will satisfy full diagnostic criteria for a disorder and not be distressed or impaired.

[Results table from Wakefield et al. (2010)]

In the results reproduced above, out of 2,071 respondents who reported episodes of sadness, 1,254 (60.5%) met diagnostic criteria for major depressive disorder (MDD). Of those who did not meet full criteria for MDD (n=817, or 39.5%), 93.5% still satisfied the "clinically significant distress or impairment" criterion. This suggests that the criterion is a poor discriminator of diagnostic status. This result agrees with other research (6).
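To make the arithmetic concrete, here's a quick sketch (Python, my own, not from the paper) that reproduces those proportions from the reported counts:

```python
# Counts as reported by Wakefield et al. (2010)
respondents = 2071                # reported an episode of sadness
met_mdd = 1254                    # met full diagnostic criteria for MDD
not_mdd = respondents - met_mdd   # 817

print(met_mdd / respondents)      # ~0.605, the 60.5% figure
print(not_mdd / respondents)      # ~0.395, the 39.5% figure

# Of the 817 who did NOT meet full MDD criteria, 93.5% still endorsed
# "clinically significant distress or impairment" -- roughly 764 people.
print(round(0.935 * not_mdd))     # ~764
```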

It's unlikely that any modification of this criterion, short of deletion, will resolve this issue of redundancy. If the definition is narrowed, there will be more false negatives; if the definition is broadened, there will be more false positives. Actually, the whole idea of false negatives/positives for already highly arbitrary (i.e., not valid) diagnoses is quite humorous, but I digress...


Wakefield, J., Schmitz, M., & Baer, J. (2010). Does the DSM-IV Clinical Significance Criterion for Major Depression Reduce False Positives? Evidence From the National Comorbidity Survey Replication. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2009.09040553

Women: Know Your Limits!

A Public Service Announcement.

Wednesday, February 24, 2010

A Tale of Two Studies: Voxel-Based Lesion-Symptom Mapping

Brain imaging has contributed greatly to our understanding of the functional neuroanatomy of the human brain. A lot of these contributions have been blogged about by my bestest buddy Neuroskeptic (why don't you return my phone calls anymore!?). One of the more popular methods used to capture brain function is functional magnetic resonance imaging (fMRI). However, the results of fMRI studies are correlational and do not demonstrate causation. There is another method, however, that "can identify regions, including white matter tracts, playing a causal role in a particular cognitive domain." This method is known as voxel-based lesion-symptom mapping (VLSM). A voxel is the three-dimensional analog of a pixel and represents a volume of about 1 cubic millimeter. This method produces pretty images such as the one below.
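To give a rough idea of how VLSM works (my own toy sketch, not the authors' pipeline): for every voxel, you split subjects into those with and without a lesion at that location, then compare their behavioral scores with a t-test, yielding a statistic map over the brain.

```python
import numpy as np
from scipy import stats

def vlsm_t_map(lesion_masks, scores, min_group=5):
    """Toy voxel-based lesion-symptom mapping.

    lesion_masks: (n_subjects, n_voxels) boolean array; True = voxel lesioned
    scores:       (n_subjects,) behavioral scores (e.g., a WAIS index)
    Returns (n_voxels,) t-statistics; NaN where a group is too small.
    """
    n_voxels = lesion_masks.shape[1]
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = scores[lesion_masks[:, v]]
        spared = scores[~lesion_masks[:, v]]
        if len(lesioned) < min_group or len(spared) < min_group:
            continue  # too few subjects lesioned (or spared) at this voxel
        # A strongly negative t means damage here predicts lower scores
        t_map[v] = stats.ttest_ind(lesioned, spared, equal_var=False).statistic
    return t_map

# Fake data: 241 subjects (as in the study), 1,000 voxels, random lesions
rng = np.random.default_rng(0)
masks = rng.random((241, 1000)) < 0.05
iq_scores = rng.normal(100, 15, size=241)
t_map = vlsm_t_map(masks, iq_scores)
```

A real analysis would also correct for the huge number of voxel-wise comparisons (e.g., with permutation thresholds); that correction is part of what keeps these maps from being noise.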
A team of researchers from various important-sounding universities published a study in this month's Proceedings of the National Academy of Sciences (PNAS; 1). In this issue of PNAS (pronounced penis) is an article titled "Distributed Neural System for General Intelligence Revealed by Lesion Mapping." The researchers created 3-D representations of the lesions of 241 subjects who had "single, focal, stable, chronic lesions of the brain." The subjects had also undergone neuropsychological testing, which included either the WAIS-R or the WAIS-III.

The researchers were trying to discover where in the brain general intelligence (often designated as "g") resides. Specifically,
"we address the question of whether g draws upon specific brain regions, as opposed to being correlated with global brain properties (such as total brain volume). Identifying such brain regions would help shed light on how g contributes to information processing and open the door to further exploration of its biological underpinnings, such as its emergence through evolution and development, and its alteration through psychiatric or neurological disease."
If "g" sounds like a highly abstract to concept to you, that's because it is. It's actually a really controversial concept within the field (2, 3). Below are the "g" loadings from this study.
The closer the color is to red, the more strongly that particular subtest loaded onto one of three g-related functions (i.e., verbal, spatial, working memory). The statistics of this study are admittedly over my head, since calculating g loadings requires factor analysis. Since "g" is an abstraction, no actual number is presented for "g"; only how well a specific test loads onto "g" is provided.
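For the curious, a "g loading" is roughly how strongly a subtest correlates with the common factor extracted from the battery's correlation matrix. Here is a bare-bones illustration using the first principal component as a stand-in for g (the correlations are invented; the authors used a proper factor model):

```python
import numpy as np

# Hypothetical correlation matrix for four subtests (numbers invented)
subtests = ["Vocabulary", "Similarities", "Block Design", "Digit Span"]
R = np.array([
    [1.00, 0.72, 0.45, 0.40],
    [0.72, 1.00, 0.48, 0.42],
    [0.45, 0.48, 1.00, 0.38],
    [0.40, 0.42, 0.38, 1.00],
])

# The first principal component captures the dominant shared variance ("g")
eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
g_vector = np.abs(eigvecs[:, -1])           # principal eigenvector
loadings = np.sqrt(eigvals[-1]) * g_vector  # rescale to correlation units

for name, loading in zip(subtests, loadings):
    print(f"{name:>12}: g loading = {loading:.2f}")
```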

What the researchers discovered should not be surprising to any bipedal mammal with working frontal lobes:

"One of the main findings that really struck us was that there was a distributed system here. Several brain regions, and the connections between them, were what was most important to general intelligence." (4)
More specifically,
"Statistically significant associations were found between g and damage to a remarkably circumscribed albeit distributed network in frontal and parietal cortex, critically including white matter association tracts and frontopolar cortex. We suggest that general intelligence draws on connections between regions that integrate verbal, visuospatial, working memory, and executive processes." (1)
"Statistically significant associations" is not same as "causal role." It's correlational. Still, nice sleight of hand.

What this group of geniuses is saying is that different brain functions are located in different parts of the brain, and when everything works in harmony, you have general intelligence.
"The researchers say the findings will open the door to further investigations about how the brain, intelligence, and environment all interact."
Open doors? That would mean that this research is original and groundbreaking. It's not. In fact, the March 2009 issue of Neuron (5) contained this study, "Lesion Mapping of Cognitive Abilities Linked to Intelligence." Here is the press release (6). In that study, there were 241 patients with "single, focal, stable, chronic lesions of the brain," who had their lesions mapped and were also administered either the WAIS-R or the WAIS-III. Also, the researchers are the same in both studies.

This study also found that performance on these (same) tests mapped primarily onto the frontal and parietal lobes.

The main difference is that the former study examined the anatomical location of intelligence in general, while the latter examined the anatomical location of general intelligence.

"So, what's the difference smart ass!?"

It depends on who you ask. Some say there is no difference, while others say there is a difference. At this point in the debate, however, we're engaging in mental masturbation (which is equally satisfying, plus people don't stare when you do it on the bus).

What I've been trying to figure out is whether this counts as a duplicate publication. Sure, it doesn't have the far-reaching consequences of these douche baggers (7, 8), and there is a slight theoretical difference, but the results are essentially identical. Curiously, the most recent study contains no citation to the other study. You'd think the researchers would want other people to read both of their studies.

Perhaps I'm reading too much into this. Or perhaps I just enjoy mental masturbation...


Gläscher J, Rudrauf D, Colom R, Paul LK, Tranel D, Damasio H, & Adolphs R (2010). Distributed neural system for general intelligence revealed by lesion mapping. Proceedings of the National Academy of Sciences of the United States of America. PMID: 20176936

Gläscher J, Tranel D, Paul LK, Rudrauf D, Rorden C, Hornaday A, Grabowski T, Damasio H, & Adolphs R (2009). Lesion mapping of cognitive abilities linked to intelligence. Neuron, 61 (5), 681-91. PMID: 19285465

Wednesday, February 3, 2010

Brodmann's Map 100 Years Later

Brodmann's map. Anyone who has taken a course in basic neuroanatomy has been exposed to his roadmap of the cerebral cortex.

In this month's Nature Reviews Neuroscience, Zilles and Amunts (1) dedicated an article to Korbinian Brodmann and his map, celebrating its 100th anniversary (Brodmann's original work was published in 1909).

First, a little background. Brodmann's original map contains 52 areas; however, areas 12-16 and 48-51 are found only in nonhuman primate brains, so only 43 areas are actually labeled in humans. How Brodmann constructed his "map" is quite complicated. He made numerous razor-thin horizontal slices of human brains. He then stained the cell bodies within those slices and assigned a number to an area if it was cytoarchitectonically distinct from its neighboring areas of the cortex.

Many others followed Brodmann's work with maps of their own. According to the article,
"During the next three decades, Otfried Foerster, Alfred Walter Campbell, Grafton Elliott Smith, Constantin Freiherr von Economo and Georg N. Koskinas argued for localizable anatomical and functional correlation and the segregation of cortical entities"
Many of those names may be new to you, which highlights just how influential Brodmann's work has been. The reason there are so many different "maps" is that brain mapping is not an exact science. Trying to differentiate the cortex based on brain architecture can produce profoundly different results, depending on the staining technique that is used and on the researcher's subjectivity.
"The Vogts used myelin-stained histological sections to study brain architecture (that is, myeloarchitecture). Their myeloarchitectonic map has many more areas (a total of 200) than that of Brodmann, because the Vogts further subdivided the Brodmann areas on the basis of the regionally more differentiated architecture of intracortical nerve fibres."
Below is a comparison of the various "maps" that have been produced since Brodmann's work in 1909.
Differences between all these brain maps are apparent. However, there is also considerable overlap, suggesting that there is some degree of observer independence, reproducibility, and objectivity to the process.

A little historical note for anyone who was forced to memorize all those Brodmann areas but was hampered by their apparent lack of logic (areas 1, 2, and 3 start in the mid-lateral cortex, while the remaining numbers are distributed in a quasi-random order): each area number was assigned based on the order in which Brodmann prepared the slides, hence the apparent randomness of the numbering.

In his time, testing whether each "area" correlated with a specific function was quite difficult. Over time, as other "maps" were published and his original was criticized for a lack of objectivity, his map fell out of fashion. That is, until the 1980s, when various brain imaging techniques were developed. Once it became possible to image a living human during the performance of a specific task, functional data could be associated with cytoarchitectural data. It was Brodmann's map that became a part of many of the first software packages and stereotaxic atlases for these machines.

Brodmann's work helped to revolutionize modern neuroscience. While many other maps have followed Brodmann's, and even though contemporary research has shown that "his map is incomplete or even wrong in some of the brain regions," many of the areas do correlate very well with various functional areas of the cortex, which is why his work still has relevance 100 years later.

This post was chosen as an Editor's Selection for ResearchBlogging.org

Zilles K, & Amunts K (2010). Centenary of Brodmann's map - conception and fate. Nature Reviews Neuroscience, 11 (2), 139-45. PMID: 20046193

Tuesday, February 2, 2010

Cognitive Impairment and Schizophrenia

Time to act like a big boy again...

When you hear the word "schizophrenia," what comes to mind? Frequently, people imagine someone who has auditory hallucinations (e.g., a voice keeping a running commentary on the person's behavior) or bizarre delusions, such as having one's thoughts broadcast to others.

When mental health professionals discuss the disorder, the most common phrases used are "positive symptoms" (e.g., hallucinations, delusions) and "negative symptoms" (e.g., flat affect, alogia). Current medical treatments almost exclusively focus on treating the positive symptoms. Increasingly, there is more discussion about medications treating the negative symptoms as well; however, most medications do a piss-poor job of this (1).

What is also "known" about this disorder, is that individuals who have it often have pervasive cognitive deficits as well. There are some who argue that it is the cognitive symptoms that are a main reason for disability and dysfunction (2).

In this month's American Journal of Psychiatry (3), a group of researchers reported on a 30-year longitudinal study of cognition in individuals who eventually went on to develop schizophrenia.

What they wanted to know is whether cognitive impairment is present from early childhood and remains stable throughout life (the developmental deficit hypothesis); whether future schizophrenia subjects lag behind healthy people in their cognitive development (the developmental lag hypothesis); or whether they decline in cognitive functioning just prior to illness onset or as a result of psychosis (the developmental deterioration hypothesis).
The authors of this study followed a cohort from birth to age 32. The children were initially assessed at age 2, with follow-up assessments occurring at ages 5, 7, 9, 11, and 13.

The children's cognitive abilities were assessed with the Wechsler Intelligence Scale for Children - Revised (WISC-R), which was originally published in 1974 (the WISC is currently in its 4th edition).

Scores are generated from this battery by taking the raw score and converting it to an age-matched scaled score (SS). In lay terms, an individual's performance is compared to that of other individuals in a similar age cohort. This way, you can tell how someone's performance compares to that of other people of the same age. The primary score generated by the WISC is a full-scale IQ. If you read my older post on IQ scores (4), you'll recall that IQ can be a meaningless number, as it obscures the variability in an individual's performance. To compensate for this, the researchers mainly focused on the composite scores of the WISC: verbal comprehension (information, vocabulary, and similarities; see subtest descriptions below), perceptual organization (block design, picture completion, and object assembly), and freedom from distractibility (arithmetic and digit symbol coding).
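Mechanically, the raw-to-scaled conversion is just a lookup against norms for the child's age band. A minimal sketch, with an invented norms table (real Wechsler norms are proprietary and far finer-grained):

```python
import bisect

# Invented norms for one subtest and one age band: each cutoff is the
# maximum raw score earning that scaled score (scaled scores run 1-19,
# with a mean of 10 and an SD of 3 in the normative sample).
RAW_CUTOFFS = [4, 7, 10, 13, 16, 19, 22, 25, 28, 31, 34, 37, 40, 43, 46, 49, 52, 55]
SCALED = list(range(1, 20))

def raw_to_scaled(raw_score: int) -> int:
    """Convert a raw score to an age-matched scaled score."""
    return SCALED[bisect.bisect_left(RAW_CUTOFFS, raw_score)]

print(raw_to_scaled(30))  # -> 10, i.e., dead average for this age band
```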

The researchers found support for both the developmental deficit and lag hypotheses but not the developmental deterioration hypothesis. This is consistent with other research, which suggests that cognitive deficits in people diagnosed with schizophrenia remain stable over time (5, 6, 7). As the authors described,
"For all eight cognitive tests, the linear slopes of the growth curves were positive and significant (all p values <0.001), indicating that on average, future case subjects, similar to healthy comparison subjects, showed developmental increases in their cognitive functions between ages 7 and 13 years."
For the developmental deficit hypothesis, the authors noted,
"future schizophrenic case subjects exhibited early and static cognitive deficits on the following four cognitive tests: information, similarities, vocabulary, and picture completion...future schizophrenia subjects had significantly lower [performance] values than healthy comparison subjects.
And for the developmental lag hypothesis,
"on three cognitive tests (block design, arithmetic, and digit symbol)...future schizophrenia case subjects had lower linear slope values than healthy comparison subjects, indicating that their growth on tests measure freedome from distractibility and visual-spatial problem solving skills was developmentally slower."

The researchers concluded,
"The neurodevelopmental model of schizophrenia posits the existence of deviations in cognitive development many years prior to the emergence of overt clinical symptoms of adult schizophrenia. Findings from this study add to what is known about the neurodevelopmental model in three ways. First, our findings point to both cognitive developmental deficits and cognitive developmental lags during childhood in individuals who will go on to develop schizophrenia as an adult. Second, different cognitive functions appear to follow different developmental courses from childhood to early adolescence. The developmental deficit model appears to apply to verbal and visual knowledge acquisition, reasoning, and conceptualization abilities. The developmental lag model appears to apply to freedom from distractibility and visual-spatial problem solving abilities. Third, these patterns of cognitive deviations from childhood to early adolescence in schizophrenia are not shared in recurrent depression."
By this point you may be asking yourself, "what the hell does all this psychobabble mean?"

In short, these results don't mean much for clinical practice. They reconfirm that future schizophrenia subjects have baseline cognitive deficits and that their neurodevelopment is slower than that of healthy people.

Here are the average IQs of the different groups pooled together:

The authors of this study made it sound as if all future schizophrenia subjects had cognitive deficits. They didn't. Future schizophrenia subjects had an average IQ score of 94, while healthy subjects had an average IQ of 101. Both of these scores fall in the average range (90-110). Seven points is not a big difference.

Above are two bell curves I constructed to illustrate my point. IQ is a normally distributed score. The purple curve represents the normal subjects (mean IQ 101) and the pink curve represents the future schizophrenia subjects (mean IQ 94).

A standard deviation (i.e., a measure of performance variability) for IQ scores is 15 points. A score of 85 (1 SD below the mean) is considered impaired. As you can see, there is considerable overlap between the schizophrenia bell curve and the normal-subject bell curve. Nearly two-thirds of the schizophrenia population will have an IQ in the normal range or better; however, the maximum IQ for most schizophrenia subjects will be capped (i.e., rarely above 115), although there are notable exceptions (e.g., John Nash).

Another important point to note is that while a difference in IQ of 7 points (94 versus 101) is statistically significant, it is not clinically significant. You need a difference of 1 to 1.5 standard deviations to achieve clinical significance. Based on the bell curve, only about 15% of future schizophrenia subjects will have an IQ that low.
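You can verify these figures directly from the normality assumption; the means and SD below come from the post, the code is mine:

```python
from scipy.stats import norm

scz_mean, healthy_mean, sd = 94, 101, 15

# Fraction of future schizophrenia subjects with an IQ of 90 or better
print(1 - norm.cdf(90, loc=scz_mean, scale=sd))  # ~0.61, "nearly two-thirds"

# Fraction falling 1.5 SD below the healthy mean (101 - 22.5 = 78.5)
cutoff = healthy_mean - 1.5 * sd
print(norm.cdf(cutoff, loc=scz_mean, scale=sd))  # ~0.15, the "only 15%"
```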

The second problem with this study is that the cognitive assessments were not neurodiagnostic.
"the model posits that there is insult to the brain acquired or inherited in early development" and therefore "the developmental deficit model for the etiology of schizophrenia is supported by our data.
What the data indicate is that some, but not all, future schizophrenia subjects had difficulty on some, but not all, of these tests (remember, performance was lower on average). One of the major criticisms of cognitive tests is that performance is influenced by factors outside of the individual. The only factor the WISC-R controls for is age. Other factors, such as quality of education, region of habitation, ethnicity, medications, and gender, are not controlled. To determine whether a problem is brain-based, one needs to control for those other variables, which is why neuropsychologists should use demographically corrected norms when possible.

Here's an example: Future schizophrenia subjects tend to be isolated, are viewed by others as weird, and are stigmatized by their peers. These factors can contribute to poor self-esteem, stereotype threat, poor school performance, and, most importantly, poor motivation to perform well. Of course a person with this social history will perform poorly on cognitive tests (people with recurrent depression also have lower IQ scores on average; see above).

Here's another problem: let's assume that differences in IQ are brain-based. The IQ test results do not pinpoint where the problem actually is. Below is a pyramid that illustrates which brain/neurocognitive functions need to be intact in order for the higher-order functions (e.g., IQ) to be accurately assessed.
For example, if a person performs poorly on the WISC-R Vocabulary subtest, you then want to know why that person performed poorly. He or she could have had a poor educational background, a memory retrieval problem, an expressive speech problem, or a hearing problem.

For anyone who has undergone neuropsychological testing, you'll recall that we administer a butt-load of tests (between 20 and 30), which takes between 3 and 6 hours to complete. We do this so we can accurately pinpoint why a person performed poorly and make useful recommendations. If the problem was educational, a tutor will help; if the problem was memory retrieval, cueing will be helpful; if the problem was hearing, a hearing aid will be helpful.

There are many other problems with this study. However, the take-home message is that future schizophrenia subjects, on average, perform poorly on some cognitive tests, but the performance difference is not huge. Why some perform poorly while others do not is still unknown. This study and its press release (8) do not help resolve this debate; they only muddy the waters.


Reichenberg, A., Caspi, A., Harrington, H., Houts, R., Keefe, R., Murray, R., Poulton, R., & Moffitt, T. (2010). Static and Dynamic Cognitive Deficits in Childhood Preceding Adult Schizophrenia: A 30-Year Study. American Journal of Psychiatry, 167 (2), 160-169. DOI: 10.1176/appi.ajp.2009.09040574

Monday, February 1, 2010

Cock Goes Here...


and here...

I expect to lose readers over this...