The field trials were finally released a couple of weeks ago, with methods and results spread across three articles in the American Journal of Psychiatry, the APA’s own journal. If ever there were a case for scientific documents to be made freely available, this would be it. Instead, they’re hidden behind a paywall, costing $35 to “rent” each article for 24 hours. It’s taken me a couple of weeks to get my head around them (luckily my university library has paid for unlimited access to the journal). For those who are interested but don’t want to spend the kids’ Christmas present money on journal articles, I’ve done my best to summarise the study – at least those parts that are relevant to autism.
The field trials were spread across 11 sites, all in North America. But only two – Baystate and Stanford – were involved in trialling autism diagnoses. Both centres were also assessing a number of other diagnoses, so autistic kids only made up a small proportion of those assessed. At Baystate, 23% of kids in their sample of 569 met DSM-IV criteria for an ASD (i.e., they already had a diagnosis of autistic disorder, Asperger syndrome, or PDD-NOS). At Stanford, the figure was 26% from a sample of 463.
The main aim of the field trials was to assess reliability – whether two different clinicians would give the same person the same diagnosis. This meant that, to be included in the final sample, each child had to be assessed twice and, as a result, only a small fraction of the children initially screened into the study were included in the final analysis. At Baystate, 146 of 569 kids made it all the way through. At Stanford the figure was 149 out of 463.
This is where things start to get complicated. In an effort to make sure they had enough kids for each of the diagnoses, the kids were assigned to various “strata” corresponding to the DSM-5 diagnoses under investigation. To be in the ASD stratum, a child had to have a DSM-IV diagnosis of ASD. However, some of the kids with ASD also met criteria for other strata and some were assigned to those strata rather than ASD. At Baystate, this happened to 20 of the 132 ASD kids. At Stanford, 21 of 119 ASD kids were reassigned. However, there is no indication of which strata they were assigned to.
[Table: first 6 columns taken from Paper I (Clarke et al.); columns 7 and 8 calculated by me; column 9 taken from Paper II (Regier et al.).]
Because of this biased sampling, the authors employed a complicated formula to estimate DSM-5 prevalence. This essential piece of information can be found in Footnote E of Table 1 of the second paper.
The ASD prevalence was effectively calculated as follows:
- The authors calculated the proportion of kids in each stratum who met DSM-5 criteria for ASD (kids who were diagnosed with ASD by one clinician but not the other were effectively treated as half an ASD kid).
- They then multiplied this by a weighting factor, which corresponds to the proportion of kids in the original sample who were assigned to that stratum.
- Having done this for all the strata, they added up all the values to get the prevalence.
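The three steps above can be sketched in a few lines of Python. To be clear, the numbers and stratum names below are made up for illustration – they are not the field-trial data – but the arithmetic follows the procedure described in Footnote E:

```python
# Sketch of the stratified prevalence estimate described above,
# using invented numbers (NOT the actual field-trial data).
# For each stratum: kids initially assigned to it ("screened"),
# kids in the final twice-assessed sample ("final"), kids meeting
# DSM-5 ASD criteria per both clinicians ("both"), and per only
# one clinician ("one").
strata = {
    "ASD":   {"screened": 112, "final": 30, "both": 24, "one": 4},
    "ADHD":  {"screened": 150, "final": 25, "both": 1,  "one": 2},
    "Other": {"screened": 307, "final": 40, "both": 0,  "one": 2},
}

total_screened = sum(s["screened"] for s in strata.values())

prevalence = 0.0
for s in strata.values():
    # A child diagnosed by only one of the two clinicians
    # counts as half an ASD kid.
    asd_in_stratum = s["both"] + 0.5 * s["one"]
    proportion = asd_in_stratum / s["final"]
    # Weight by the stratum's share of the originally screened sample.
    weight = s["screened"] / total_screened
    prevalence += proportion * weight

print(f"Estimated DSM-5 ASD prevalence: {prevalence:.1%}")
```

Note how the weight depends on the stratum a child was assigned to, not on the child's own diagnoses – which is exactly what produces the oddity discussed next.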
At Baystate, estimated prevalence was 24%, a slight increase from DSM-IV (23%). At Stanford, estimated prevalence went down fairly dramatically to 19%, compared with 26% under DSM-IV.
One of the slightly weird consequences of this approach is that different kids would have ended up contributing different amounts to the ASD prevalence depending on which stratum they’d been assigned to. For example, at Baystate, a child assigned to the non-suicidal self-injury stratum would have been “worth” double a child assigned to the bipolar disorder stratum. Having scratched my head about this for a while, I think it does make sense – but only if sampling really was deliberate. If it wasn’t, then there’s no reason to weight a child with ASD differently just because they weren’t assigned to the ASD stratum.
My suspicions here are raised by the non-suicidal self-injury stratum. Despite being one of the rarest target diagnoses, this was wildly under-sampled. This suggests that sampling was really a process of assessing as many kids as possible before time ran out. In that case, the formula could seriously distort the true DSM-5 prevalence.
A much simpler and more transparent way to compare DSM-IV and DSM-5 rates would have been to look only at the kids in the final sample and ask how many had an ASD diagnosis under each diagnostic scheme. The authors told me that, across both centres, 79 of the kids in the final group met DSM-IV criteria for ASD (note that only 64 of these were in the ASD stratum, which is why most reports have mistakenly said there were 64 ASD kids in total). However, the authors haven't responded to my question about numbers in DSM-5.
One thing they did tell me is that another paper is being prepared that looks in more detail at the autism results. In addition to the actual numbers of kids diagnosed under DSM-IV and DSM-5, we’re also currently missing information about the make-up of the autism groups at both centres. This is critical to know because, if anyone is going to miss out on ASD diagnosis, it’s likely to be the less clear-cut cases – those who'd meet criteria for Asperger’s or PDD-NOS under DSM-IV. Even without the concerns I've already mentioned, the numbers are pretty meaningless without that information.
The authors cheerfully conclude that children missing out on an ASD diagnosis will be better served by the new Social Communication Disorder (SCD) diagnosis:
"A careful review of data from both sites showed that the decrease at the Stanford site was offset by movement into a new DSM-5 diagnosis called social (or pragmatic) communication disorder (data not shown). Since autism spectrum disorder requires both deficits in social communication and fixated interests/repetitive movement, the more specific deficit assessments in DSM-5 should facilitate more focused treatments for those with social communication deficits only."
Clarke DE, Narrow WE, Regier DA, Kuramoto SJ, Kupfer DJ, Kuhl EA, Greiner L, & Kraemer HC (2012). DSM-5 Field Trials in the United States and Canada, Part I: Study Design, Sampling Strategy, Implementation, and Analytic Approaches. The American Journal of Psychiatry. PMID: 23111546
Regier DA, Narrow WE, Clarke DE, Kraemer HC, Kuramoto SJ, Kuhl EA, & Kupfer DJ (2012). DSM-5 Field Trials in the United States and Canada, Part II: Test-Retest Reliability of Selected Categorical Diagnoses. The American Journal of Psychiatry. PMID: 23111466
Update: I now have it on good authority (via Twitter) that Social Communication Disorder will be included in DSM-5. Implications still unclear.