
Wednesday, May 1, 2013

Come and find me at IMFAR 2013

IMFAR, the International Meeting for Autism Research, begins tomorrow in the beautiful city of San Sebastian in northern Spain. The program looks pretty good and I'm surprising myself at how excited I am!

As well as sampling the rioja and pinchos, I'm presenting two posters. Please do drop by and say hello if you're attending.


108.143: Oscillatory neural responses to speech and nonspeech sounds in a nonverbal child with autism
Thursday 12pm, Poster 143

On Thursday I'm presenting a case study conducted by Shu Yau as part of her PhD research. Shu couldn't come so I get to stand next to her poster and waffle instead.

The study involves G, an autistic girl who has never spoken. Shu used magnetoencephalography (MEG) to record the magnetic fields produced by G's brain while she listened to different sounds.

Interestingly, G showed a very weak response to speech sounds, but an extremely strong response to nonspeech sounds. This was unlike any of the other children we've tested - either typically developing children or verbal autistic children.

We're not sure what to make of the data so would love any feedback. Given G's profound language difficulties, it's tempting to assume that her differential brain response to the two sounds is related to the differences in "speechiness" of the sounds. But we don't know that for sure. It's also unclear at this stage how representative G is of other kids like her.

Nonetheless, we think these are interesting preliminary findings. At the very least they show that it's possible to conduct MEG studies with nonverbal kids, who are usually excluded from neuroimaging research.


144.148: Individual differences in homograph reading amongst Hebrew-speaking autistic children
Friday 3pm, Poster 148
On Friday I'm presenting a study of homograph reading in Hebrew-speaking kids with autism, conducted with colleagues Nufar Sukenik and Naama Friedmann from Tel Aviv University.

Homographs are written words that have multiple meanings and, sometimes, different pronunciations. For example, the word "tear" is pronounced differently depending on whether the context is crying or ripping.

One of the more consistent findings in autism research is that people with autism tend to be bad at working out which pronunciation of the homograph to use. This is seen as key evidence for the "weak central coherence" account, which attributes many features of autism to an inability to use contextual information.

The advantage of conducting the study in Hebrew is that there are many more suitable homographs in Hebrew than in English - mainly because written Hebrew doesn't really bother with vowels! This meant we could give the kids lots of homographs to read, so our data should be much more reliable than previous studies conducted in English.

Our main finding was that there were statistically reliable individual differences in homograph reading skills within our autism group. In other words, only a subgroup had difficulties. And importantly those differences couldn't be explained away in terms of boring things like older kids or kids with generally stronger reading skills being better at the task.

In fact, the best predictor of homograph reading was performance on a picture naming task. The numbers in the study are relatively small, so the results again need to be treated with caution - but they suggest (to me at least) an alternative view of homograph reading difficulties in autism. Perhaps the problem is less to do with comprehension of the sentence context and more to do with selecting the right word to say when speaking or reading aloud.


Conference updates:

SFARI (the Simons Foundation Autism Research Initiative) will be live-blogging the conference. You can also follow the #IMFAR2013 hashtag on Twitter.

Thursday, April 5, 2012

A teleconference



Tuesday was a day of meetings: meetings with students about potential research projects; faculty meetings with staff in other departments. Last on my itinerary for the day was a teleconference with colleagues in Queensland.

My previous meeting had run over, so I was late phoning in. A helpful lady patched me through and announced my arrival, and the chair introduced me to everyone present – most of them there in person, sitting around a Brisbane table. The conversation resumed where it had left off before my arrival and, consulting my agenda, I tried to figure out where they were up to and what on earth everyone was talking about.

Teleconferences, I've discovered, are far from being the most natural of social interactions. The most obvious problem is the lack of any visual cues. There's no subtle way of indicating that you have something to say, and no way of knowing if and when people are expecting you to chip in. Wait for a pause in the conversation and it’s guaranteed that somebody else will start speaking before you. Twenty minutes after I’d joined the meeting, I still hadn’t actually contributed anything other than "Hello".

One person in particular was proving difficult to make out. I turned up the volume on my phone. But then someone else, also on conference call, chimed in. She was so loud I literally dropped the handset.

So there I was, desperately trying to keep track of the conversation. Turning the volume on my phone up and down as different people spoke.

Things only got worse.

First, the audio feed from the meeting started misbehaving. It began by cutting out for a few seconds every now and then. But the silences became longer and more frequent. I’d miss out on four or five words in a row and, trying to guess what I’d missed, would lose track of the next sentence.

Then, as the meeting was drawing to a close, somebody began stacking teacups. At least that’s what it sounded like. The harsh chinking sound was making it even harder to focus on listening to the conversation.

By this stage, the teleconference had also over-run. I had some emails to respond to before I left for home, so I tried to make my apologies and leave. But now there were separate conversations going on in the room. After five minutes of failing to find a judicious moment to say goodbye, I quietly hung up.

Earlier in the day I'd been holding my own in discussions ranging from cross-frequency coupling of brain oscillations to the implementation of a new curriculum. But, for the duration of that teleconference, lacking the context to the conversation, hampered by the absence of visual cues, and struggling with the variable sound quality, I was, for want of a better word, nonverbal.



Friday, January 20, 2012

The Adventures of DataThief

DataThief by Jed Pascoe: Reproduced with the artist's permission


Having spent much of the past week struggling to make sense of my data, it’s good to come home, pour a glass of wine, put on some Sharon Jones, and, er… play with somebody else’s data!

Recently, I’ve discovered DataThief - an application that allows you to scan in a graph from a paper and extract the data points. Sometimes, this provides insights that really aren’t obvious from the original paper.

The other week, for example, I came across an intriguing neuroimaging study reported on the SFARI website. In the study, Judith Verhoeven and colleagues used diffusion tensor MRI to examine the superior longitudinal fasciculus, a bundle of nerve fibres that is assumed (although see this paper) to connect two brain regions involved in language production and comprehension - Broca’s area (left front-ish) with Wernicke’s area (left and back a bit).

Verhoeven et al. reported that integrity of the superior longitudinal fasciculus was compromised in kids with specific language impairment (SLI) – that is, kids who have language difficulties for no obvious reason. However, the same was not true of kids with autism, even though they had poorer language skills than those with SLI.

Taken at face value, this is a pretty major blow to the idea that autism and SLI have anything more than a superficial resemblance [pdf].

DataThief, however, suggests otherwise.

The figure below is a scatterplot with each coloured shape representing a single child. On the x-axis is performance on a language test. On the y-axis is fractional anisotropy (FA) – the imaging measure used to assess the integrity of the left superior longitudinal fasciculus.

Figure 3a from Verhoeven et al. (2011), showing integrity of the left superior longitudinal fasciculus plotted against the children's language scores (z-scores). Children with SLI are shown in red, autistic kids are the blue squares, and control children are the green and blue circles.


The purpose of the graph was to show the significant correlation between these two measures in the SLI group. But if we can read off the y-coordinates of each shape, we can show the distribution of fractional anisotropy scores for all three groups.

Cue DataThief.

It’s really just a case of clicking on three reference points for which you know the coordinates and then clicking on each of the data points in turn. Then you simply export the coordinates of the data points as a text file. The only thing I had to remember was to do the three groups separately so I knew which point belonged in which group.
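Under the hood, this is just a change of coordinates: with three reference points whose graph values you know, you can fit an affine transform from pixel positions to data values (which also takes care of pixel y-coordinates running downwards). DataThief does all of this for you, but here's a minimal Python sketch of the arithmetic, with invented pixel positions and axis values, just to show there's no magic involved:

```python
import numpy as np

# Three reference points: pixel positions (clicked on the scanned figure)
# and the data coordinates they correspond to (read off the axes).
# All numbers here are invented for illustration.
ref_pixels = np.array([[52.0, 410.0],    # origin of the axes
                       [52.0, 60.0],     # a point up the y-axis
                       [480.0, 410.0]])  # a point along the x-axis
ref_data = np.array([[-3.0, 0.30],
                     [-3.0, 0.55],
                     [2.0, 0.30]])

# Fit an affine transform: data = [px, py, 1] @ coeffs.
# Three non-collinear points pin down all six coefficients exactly.
X = np.hstack([ref_pixels, np.ones((3, 1))])
coeffs, *_ = np.linalg.lstsq(X, ref_data, rcond=None)

def pixel_to_data(px, py):
    """Convert a clicked pixel position into graph coordinates."""
    return np.array([px, py, 1.0]) @ coeffs

print(pixel_to_data(266.0, 235.0))  # a point near the middle of the plot
```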

Here’s the fractional anisotropy data replotted to show the distribution for each group. What we can now see is that there is a small subgroup of control kids who have really high FAs. There is also one autistic kid and one kid (arguably two) with SLI who have low FAs. Everyone else is pretty much in the middle.

Verhoeven et al.'s data replotted to show the distribution of fractional anisotropy for each group
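If you fancy producing this kind of replot yourself from DataThief's exported coordinates, a few lines of matplotlib will do it. A quick sketch, using invented FA values in place of the real exported data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented fractional anisotropy values standing in for the
# y-coordinates exported from DataThief, one list per group.
groups = {
    "SLI":     [0.38, 0.40, 0.41, 0.42, 0.43, 0.44, 0.45],
    "Autism":  [0.39, 0.42, 0.43, 0.44, 0.45, 0.46],
    "Control": [0.42, 0.44, 0.45, 0.46, 0.47, 0.52, 0.53],
}

rng = np.random.default_rng(1)
fig, ax = plt.subplots()
for i, (name, values) in enumerate(groups.items()):
    # A little horizontal jitter keeps overlapping points visible.
    x = i + rng.uniform(-0.08, 0.08, size=len(values))
    ax.scatter(x, values, alpha=0.7, label=name)

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(list(groups))
ax.set_ylabel("Fractional anisotropy (left SLF)")
plt.show()
```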


On average, kids with SLI have lower than ‘normal’ fractional anisotropy [1], but looking at the spread of scores, you’d be hard pressed to conclude that this was a characteristic of SLI. Likewise, the overlap between the distributions of the autism and SLI groups is almost complete. Hardly evidence for fundamentally different neural mechanisms in the two disorders. 

At the risk of sounding like a broken record, this once again highlights the importance of looking at individual variation within diagnostic groups such as autism and SLI, rather than (or as well as) looking at group averages.

But it also emphasizes a more general point (and this I have to stress is no criticism of the authors of this particular paper).

The data reported in a journal article are really just a snapshot of the actual data recorded, filtered through the authors’ preconceptions about what questions are interesting to ask and how to go about doing that. There’s an imperative to present the data in a neat, sanitized package, with all the rough edges and anomalies smoothed out; to tell a coherent story that will convince reviewers and editors that it’s worthy of publication in a reputable journal. Years of work and terabytes of data may be compressed into just two or three pages.

DataThief only takes us so far. It allows us to extract the information presented visually in the published article, but no further.

Most of the past week has been spent convincing myself that it doesn’t really matter how I analyse my data because the results come out the same regardless. This is reassuring for me, but it doesn’t mean that somebody else, looking at my data with fresh eyes and a different perspective, would not come to an entirely different set of conclusions.

In an ideal world, when a paper is published, researchers should also be able (and encouraged) to publish the data on which the paper is based, as well as the script showing exactly how those data were analysed.

There are, of course, many obstacles in the way and questions to be answered before this becomes standard practice. Who would host and maintain the data? Just how raw should the raw data be? What if the authors are writing multiple papers based on the same data set? Who gets credit for reanalyses of the data set? What happens if a reanalysis shows up an error in the original paper? If the research involves human participants, how do we reassure them that their anonymity will be maintained?

Undoubtedly, there are many more problems that I haven't thought of. But, as scientists, we need to work through these issues and find ways to set our data free.


Footnotes:

[1] The analyses involved an ANOVA with left and right hemisphere as a within-subjects factor. This showed a main effect of group, but no group by hemisphere interaction.
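For the curious, this kind of mixed ANOVA (one between-subjects factor, one within-subjects factor) is easy enough to run yourself on liberated data. A sketch using the pingouin Python library and, once again, invented numbers:

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

# Invented long-format data: one FA value per child per hemisphere.
df = pd.DataFrame({
    "id":         [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":      ["SLI"] * 6 + ["control"] * 6,
    "hemisphere": ["left", "right"] * 6,
    "fa":         [0.41, 0.43, 0.40, 0.42, 0.42, 0.44,
                   0.46, 0.47, 0.45, 0.46, 0.47, 0.48],
})

# Mixed ANOVA: group is between-subjects, hemisphere within-subjects.
aov = pg.mixed_anova(data=df, dv="fa", within="hemisphere",
                     subject="id", between="group")
print(aov)  # rows for the group main effect and the interaction
```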


Update:

Originally, I linked to the wrong SFARI article in the third paragraph. That's now fixed. The one I mistakenly linked to reports a conference presentation that does indicate atypical connectivity between language regions in the brains of nonverbal autistic kids (although not the same pathway as examined by Verhoeven et al.)


Reference:

Verhoeven, J., Rommel, N., Prodi, E., Leemans, A., Zink, I., Vandewalle, E., Noens, I., Wagemans, J., Steyaert, J., Boets, B., Van de Winckel, A., De Cock, P., Lagae, L., & Sunaert, S. (2011). Is There a Common Neuroanatomical Substrate of Language Deficit between Autism Spectrum Disorder and Specific Language Impairment? Cerebral Cortex. DOI: 10.1093/cercor/bhr292

Thursday, January 19, 2012

Take part in our research on language and auditory processing




We’re looking for kids with autism as well as typically developing kids to take part in our research.

The study is looking at how kids’ brains respond to different sounds, and how this relates to their language and communication skills.

We are using a technique known as magnetoencephalography or MEG for short. MEG works by measuring the tiny magnetic signals naturally emitted by neurons in the brain. It will tell us which parts of the kids’ brains are responding, how quickly, and how sensitive they are to subtle changes in the sounds they are hearing.

It involves absolutely no physical risks. Kids get to go in a “space rocket”, watch a movie of their choice - and get paid!

If you’d like your child to take part, please ring Shu Yau on (02) 9850 4314 or email shu.yau@mq.edu.au


Who can take part in the study?

We are currently recruiting children aged 5-12 years, who live in the Sydney area:
  • Children on the autism spectrum (i.e., children with a diagnosis of autism, autism spectrum disorder, Asperger syndrome, or PDD-NOS). Our only criterion is that kids have at least some spoken language and are able to complete the different tasks.
  • Typically developing children (i.e., non-autistic children with no language or communication difficulties or epilepsy). These children are very important because they provide an objective age-matched comparison. 

What would happen if my child took part in this research? 

Firstly, you and your child would be invited to the KIT-Macquarie Brain Research laboratory to meet the researchers and become familiar with the MEG lab. It is important for us to take time to get to know you and your child before we proceed with the study. We will help the children understand what will be expected of them if they decide to take part. For the younger ones, we also give out prizes and certificates to show that they are qualified MEG astronauts!

If you and your child are happy to proceed, we will start with a short hearing screening test, using headphones, to establish the softest sound your child can hear. Then we will proceed to the MEG, where they will lie on a bed and listen to sounds while watching a DVD of their choice. After the MEG recording, your child will complete some behavioural tasks to give us a record of his/her cognitive, social and communicative skills. These involve playing with toys (for the younger ones), storybooks and computer games.

For some children, a second visit may be scheduled, either to complete the MEG recording or, if the child prefers, to finish the behavioural tests on another day.

For parents, we would send you a brief questionnaire concerning your child’s social and communication skills, together with a freepost envelope so you could complete it and post it back to us in your own time.

Do we get paid for taking part? 

Yes. We pay $40 for the first MEG visit, and $20 for each subsequent visit to complete behavioural testing.

Where and when would the research take place? 

The study will take place at a time that suits you, and can be split into two or more sessions if needed. The MEG system is at the KIT-Macquarie Brain Research laboratory at 299 Lane Cove Road, close to Macquarie Park station.



Are there any risks involved in this research? 

There are absolutely no physical risks involved in the study. If your child became tired or anxious, testing would stop immediately. Unlike other brain imaging techniques, MEG is silent, doesn’t involve things being stuck to the child’s head (except for a swimming cap), and you will be able to stay with your child the entire time. The short hearing test is just a screening test, but we will alert you immediately if we suspect hearing loss/impairment in your child.

What happens to the information recorded? 

The information we record during this study will be treated in strictest confidence and we certainly wouldn’t pass on any information about your child to anyone outside the research project without your written permission. Your child’s scores on the various tests would be coded and stored on a computer with password protection. They would be given an ID number so that nobody outside the research project knows their real name.

How will I find out about the outcomes of the research? 

We will send you a summary of the research project and its outcomes. We will also send you a summary of your child’s scores on the different tests, which you may take to clinicians if you wish.

What happens if I change my mind? 

You are free to withdraw your child from the research study at any time. You don't have to give a reason and you'll still get paid.

Who is conducting the research? 

The study is being conducted by Shu Yau, as part of her PhD, supervised by Dr Jon Brock at the Macquarie Centre for Cognitive Science. It is part of a larger research program funded by the Australian Research Council and Macquarie University.

Would we be asked to take part in other studies? 

If you’d like to get involved in other research projects, we can send you information about future studies. But there is absolutely no obligation for you to take part in these.

I'm still interested. What do I do now? 

If, having got this far, you're still interested in your child taking part, please phone Shu Yau (PhD student) on (02) 9850 4314 or email shu.yau@mq.edu.au


The ethical aspects of this study have been approved by the Macquarie University Human Research Ethics Committee. If you have any complaints or reservations about any ethical aspect of your participation in this research, you may contact the Committee through the Director, Research Ethics (telephone (02) 9850 7854; email ethics@mq.edu.au). Any complaint you make will be treated in confidence and investigated, and you will be informed of the outcome.

Friday, August 19, 2011

The curious case of the reversed pronoun

“You made a circle”, exclaimed Ethan, looking up from his drawing.

“You did make a circle”, his mum acknowledged, ignoring the fact that, not for the first time, Ethan had reversed the pronoun, saying “you” when he should have said “I”.

Ethan was one of six children from Providence, Rhode Island taking part in a study of child language development. Every couple of weeks, a researcher from Brown University would visit him and his mum at home, recording and then transcribing their conversations in painstaking detail. The transcriptions would show that Ethan was a prolific reverser of pronouns, frequently saying “you” when he meant “I” and “your” instead of “my” or “mine”. This curious habit began as soon as pronouns entered his vocabulary, and he was still reversing pronouns when, just before his third birthday, the study came to an end.

Ethan’s language skills were otherwise exceptionally good. When assessed at 18 months, his scores put him in the top 1% for children his age. However, some years after the study finished, it transpired that Ethan had Asperger syndrome.

Pronoun reversal is common amongst children on the autism spectrum. Leo Kanner noted as much in the first systematic description of autism and, to this day, it is considered an important marker when conferring an autism diagnosis. But the underlying cause of this highly specific problem remains something of a mystery. Ethan’s diagnosis made sense of his pronoun reversal, but it didn’t exactly explain it.

While pronoun reversal is relatively common in autism, it certainly isn’t unique to the disorder. Deaf children in particular are prone to reversal, despite the fact that in many sign languages, pronouns simply involve pointing to the person in question. And while most typically developing children appear to have little difficulty with pronouns, there have also been several case reports of children who go through a prolonged phase of pronoun reversal.

By coincidence, Naima, one of the five other children in the Providence study, was one such child.

Aware of the serendipitous nature of their data, two of the researchers, Karen Evans and Katherine Demuth, returned to their transcriptions. Forensically re-examining the evidence, they tried to work out why the two children had encountered such difficulties with pronouns. The results of their enquiries provide some intriguing insights into the multiple challenges facing both typically and atypically developing linguists.


The pronoun problem

Personal pronouns represent an unusual problem for the young language learner. Most words they encounter will have a constant reference, at least within the context of the ongoing conversation. “Mummy” will refer to their own mother. “Dog” will refer to the animal sitting on the carpet right in front of them. But the meanings of “I” and “you” change, depending on who it is that is speaking. My “you” is your “me”.

In Naima’s case, it seems that she simply failed to grasp this concept, thinking that “you” was really just another name for herself. It wasn’t that she sometimes got it right and sometimes got it wrong. Between the ages of 19 and 28 months, virtually every time she used “you” or “your”, she was actually referring to herself, sometimes with amusing results:
Naima: "I think you peed in your diaper."
Mother: "Just now?"
Naima: "I think you did."
Then, all of a sudden, something clicked. In Naima’s final two sessions at 29 and 30 months, every single pronoun was used correctly. But why did she make this mistake in the first place? And what happened for the penny to drop?

Are you experienced?

Yuriko Oshima-Takane, a psychologist at McGill University in Montreal, has argued that children can only deduce the principles of pronoun use by listening in on other people’s conversations. Pronoun reversers, she suggests, are children who, for one reason or another, have missed out on this vital linguistic experience.

Naima appears to be a perfect illustration of this theory. She was an only child at the time of the study and spent most of her time alone with either her mother or her father. As a result, most of the speech she heard was directed at her. This in turn meant that almost every time she heard the word “you” it referred to her. It would be perfectly understandable if she thought of “you” as simply another name for herself.

Evans and Demuth note that the abrupt end of Naima’s pronoun reversal coincided with a family holiday. They speculate that the time spent with both mum and dad is what gave her the learning experience necessary to finally grasp the concept of “you”.

Oshima-Takane suggests a similar explanation for the high rates of pronoun reversal in deaf and autistic children. For deaf kids, having to rely on visual communication or poor quality auditory input makes it much more difficult to follow other people’s conversations. For autistic kids, the argument goes, the problem is more that they are uninterested in other people and so fail to pay attention to their conversations. Like Naima, both groups of children will only learn from speech that directly engages them and will mistakenly jump to the conclusion that “you” only ever refers to themselves.

So could this explain Ethan’s difficulties? Evans and Demuth suggest not, pointing out that, although he often used “you” to refer to himself, he used it appropriately on enough occasions to demonstrate that he’d grasped the concept.

The trail led elsewhere.

Say it again


Kanner’s explanation for pronoun reversal in autism came from another observation - that children with autism often repeat entire phrases verbatim, inappropriately and out of context. This so-called ‘echolalia’ would lead to reversals as the pronouns are repeated exactly as heard. The British child psychiatrist Michael Rutter gave the example of a hungry child requesting a biscuit by echoing the phrase “Do you want a biscuit?” The pronoun was reversed but the biscuit was obtained.

Consistent with this explanation, Evans and Demuth noted that Ethan was indeed most likely to reverse pronouns when imitating an utterance that somebody else had previously made. “Dad gave me that ring”, for example, was clearly a reversal but was almost certainly something his mum had said previously.

Case closed, one might think.

However, even using the most generous criteria, imitations accounted for less than half of Ethan’s recorded reversals. What’s more, in contrast to the child in Rutter’s example, he actually made relatively few reversals during requests. For example, when asking for his bottle, he said “I want bottle”, using “I” correctly (even though the sentence wasn’t fully formed).


An alternative perspective

Further analyses revealed two final clues. First, as well as using “you” to refer to himself, Ethan occasionally used “I” to refer to other people (something Naima very rarely did). Second, reversed pronouns were more likely to occur in sentences that contained multiple pronouns. For example, aged 22 months, Ethan was recorded saying “I got you out” when he should have said “You got me out”.

These observations suggest that his problem lay, not in understanding the principles of which pronoun to use, but in applying those principles during a conversation. His problems were pragmatic rather than conceptual. More precisely, Evans and Demuth propose that Ethan’s pronoun reversal reflected difficulty in referential perspective taking - in choosing the right word given who was being referred to at any given moment in the conversation.

This account of Ethan’s pronoun reversal fits nicely with research suggesting that autistic children have difficulty with other linguistic terms that depend on the speaker’s perspective (Bartolucci & Albers, 1974).

In an intriguing study published last year, Peter Hobson and colleagues at University College London (Hobson et al. 2010) found that children with autism were competent at using “here” and “there” to refer to locations near or far from themselves. However, the same children struggled to follow similar instructions given by two other people – a task that required them to consider the speaker’s perspective to work out which locations “here” and “there” referred to.

Wrapping up

Whether or not Evans and Demuth have solved the mystery of why these two particular children reversed their pronouns, their investigations demonstrate that, if you scratch beneath the surface, even a phenomenon as striking and specific as reversal of first- and second-person pronouns can have quite different underlying causes. In Naima’s case, it seems she misunderstood the meaning of “you”. In Ethan’s case, he appears to have grasped the concept but lacked the wherewithal to consistently choose the correct pronoun during a conversation.

Ethan’s case is particularly intriguing in the light of his Asperger syndrome diagnosis. However, it would be unwise to assume that he is representative of all individuals on the autism spectrum. His difficulties do not seem to be explicable in terms of either a lack of relevant linguistic experience or a tendency to echo phrases verbatim, but these may still be contributory factors, and could well explain pronoun reversal in other autistic individuals. Indeed, as noted earlier, Ethan’s error patterns are quite different to some other examples in the autism literature.

Perhaps the reason pronoun reversal is so common in autism is that there are several factors associated with autism that each contribute to the difficulties. Working out why a particular child reverses pronouns may require investigation on a case-by-case basis.


Sunday, August 14, 2011

On neural correlates and causation

The advent of neuroimaging techniques such as magnetic resonance imaging (MRI) has revolutionized autism research. We can now look into the brain and see the "neural correlates" of autism. But, as with any form of correlation, identifying a neural correlate doesn't necessarily mean that we have identified a neural cause.


A case in point. Earlier this week I stumbled across a press release doing the rounds of the internet, proclaiming that "Brain imaging research reveals why autistic individuals confuse pronouns". Pronouns are the words like "he", "she", "you" and "I" that can stand in for real names. Kids with autism often struggle with them (there goes another one). In particular, they'll say "you" to refer to themselves and "I" to refer to other people.

Various theories have been put forward over the years to try and explain pronoun "reversal". Leo Kanner thought it happened just because the autistic kids were echoing things other people had said. Bruno Bettelheim (he of 'refrigerator mother' fame) reckoned kids with autism didn't have a sense of self, and so "you" and "I" were indistinguishable to them. An intriguing theory, proposed more recently by Yuriko Oshima-Takane is that kids with autism don't learn how pronouns work because they don't attend to other people's conversations.

So what does brain imaging add to this debate?

The study

The study was conducted by Akiko Mizuno, a graduate student working with Marcel Just at Carnegie Mellon Uni. She tested a group of 15 high-functioning adults with autism on what is known in the trade as a first-order visual-perspective-taking task. On each trial, they saw a series of photographs in which a woman (called Sarah) first showed them a card with different pictures on each side and then asked "What can you see now?" or "What can I see now?" Participants had to press a button on the left or right to give the correct answer.



Interpreting Sarah's questions required the participants to comprehend the pronouns "you" and "I". The adults with autism were slower and less accurate at this task than non-autistic adults. They were also a little slower on control questions that didn't involve pronouns, such as "What can Sarah see?" and "Who can see the carrot?" but the group differences weren't quite as marked. This is crucial because it suggests that the adults with autism had specific problems with the pronoun condition.

These results in themselves are really interesting. They suggest that subtle difficulties with pronouns are apparent, even amongst high functioning adults with autism. It's not clear whether these individuals ever reversed pronouns themselves in their speech, and it's important to remember that the study looked at comprehension of pronouns rather than production. But it's nevertheless striking that there are group differences, even on such a simple task.

The focus, however, was on the brainy stuff.

While the participants were completing the task, their brains were being scanned using fMRI. The headline finding was that, in the autism group, there was reduced "connectivity" between two brain regions, the right anterior insula and the precuneus. Furthermore, within the autism group, there was a significant correlation between brain connectivity and reaction time. People who were slower had weaker connectivity.

Mizuno et al. imply that this is what ultimately causes pronoun reversal:
"The observed lower functional connectivity between those two neural nodes in the autism group, therefore, may result in disturbed perspective-taking processes in shifting a centre of reference between self and other."
Finer grained analyses showed that group differences in "connectivity" were observed only when Sarah asked "What can you see?" and not when she asked "What can I see?". The authors' explanation is as follows:
"These findings indicate that the critical disturbance... may be dysfunctional processing when recognizing the self as a referent of ‘you’, and shifting to map self onto the pronoun ‘I’."
In other words, when Sarah says "What can you see?", the participant has to translate that into "What can I see?" and this translation process is reliant on the "connectivity" between the precuneus and anterior insula [1].

Reasons to be cautious:

It's possible that Mizuno and colleagues are correct in their interpretation. In fact, I'd really like them to be right, because I've been waffling on about brain connectivity in autism for ages. But there are a number of reasons to query their conclusions.

1.  Everyone uses the term "functional connectivity" in the context of fMRI scans, but it's pretty misleading. fMRI measures brain activity indirectly via changes in blood oxygen levels. Here's an example of the time course of oxygen level changes for a control participant in Just et al.'s original fMRI "connectivity" in autism study.



The two brain regions in this figure are considered to be "functionally connected" because their activation goes up and down at roughly the same time. What isn't obvious from the figure (and is rarely acknowledged) is the fact that the changes are happening really slowly - roughly one cycle of activation and deactivation every trial.

If the two brain regions are 'talking' to each other in order to complete the pronoun task, they're doing it at a much faster rate than anything fMRI can hope to measure.
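To make that concrete: "functional connectivity" here is essentially just the correlation between two regions' activation time courses, sampled once every couple of seconds. A toy Python simulation (invented signals, not real data) shows both the calculation and how slow the signals are:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two slow BOLD-like time courses sampled every 2 s (one TR).
# fMRI is sluggish: roughly one cycle of activation and deactivation
# per 30-second trial, which is what the figure above shows.
t = np.arange(0, 300, 2.0)                  # 5 minutes of scanning
slow_wave = np.sin(2 * np.pi * t / 30.0)    # shared slow fluctuation
region_a = slow_wave + 0.5 * rng.standard_normal(t.size)
region_b = slow_wave + 0.5 * rng.standard_normal(t.size)

# "Functional connectivity" is the Pearson correlation between them.
fc = np.corrcoef(region_a, region_b)[0, 1]
print(f"functional connectivity r = {fc:.2f}")
```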

2.  Since that first paper, Just and colleagues (as well as several other research groups) have published a large number of studies demonstrating changes (usually reductions) in "functional connectivity" throughout the autistic brain. Their new study adds to this impressive body of evidence. But this in turn raises a second concern.

As mentioned before, Mizuno et al. looked at connectivity between two brain regions - the right anterior insula and the precuneus. Importantly, this was the only pair of regions they looked at [2]. Based on their previous findings, there's a fair chance that they could have chosen any number of brain regions and would have found "underconnectivity" between them too. They may be right that this is the only connection that relates to pronoun comprehension difficulties. But we don't know this.

3.  The claim is that differences in "connectivity" are responsible for difficulties in comprehending pronouns. But it could just as easily be the other way around. People with autism struggle to comprehend pronouns, so they have to work harder or (as the reaction time data suggests) for longer, so it's no surprise that their brain activity while they're doing the task is different.

Brains vs Minds

Neuroimaging studies have provided many important insights into the workings of autistic brains. But sometimes, it's easy to be seduced by the fancy gadgets, the pretty pictures, and the funny words and think that brain imaging is somehow more scientific than good old-fashioned cognitive psychology (as exemplified by Mizuno et al.'s reaction time data), or that it offers privileged insights into the autistic mind. 

Neural correlates are just that - correlations. All the usual caveats apply.


Notes:

[1].  Unfortunately, Mizuno et al. don't report whether the same effect is apparent in the reaction time data. If the people with autism had specific difficulty comprehending the word "you", they should be slower on this condition than control participants.

[2].  As far as I can tell, there is no direct evidence from previous studies that the precuneus and right anterior insula are involved in pronoun comprehension, so effectively Mizuno et al. are relying on a hunch. And in their own analyses, they show that, while the right anterior insula is one of 7 brain regions activated by the task, the precuneus isn't.


Reference:

Mizuno, A., Liu, Y., Williams, D. L., Keller, T. A., Minshew, N. J., & Just, M. A. (2011). The neural basis of deictic shifting in linguistic perspective-taking in high-functioning autism. Brain: A Journal of Neurology. PMID: 21733887


Wednesday, November 3, 2010

Pitch discrimination in autism - links to language delay?

Until recently, research on perception in autism has focused primarily on the visual modality. However, there is now a growing body of research on auditory processing. Of particular note are two recent studies, both published in the journal Neuropsychologia, which report enhanced auditory discrimination abilities in a subgroup of individuals on the autism spectrum.

The first of these studies was conducted by Catherine Jones and colleagues from the Institute of Education in London, who tested 72 adolescents with autism and a control group matched on age and IQ. Participants played a computer game in which they saw two dinosaurs, each of which produced a pure tone (beep) sound. They simply had to decide which dinosaur had made the higher sound. If they got two in a row correct, then the task got slightly more difficult (the two dinosaur sounds were made more similar in pitch). If they were incorrect then the task was made slightly easier. In this way, the researchers could work out each participant's threshold for detecting a pitch difference between two tones. Participants also completed similar tasks that involved discriminating between tones of different amplitude (loudness) and duration.
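The adaptive procedure described here is a classic "two-down, one-up" staircase, which homes in on the pitch difference a listener can detect about 71% of the time. For anyone unfamiliar with it, here's a rough Python simulation of the logic; the listener model and all the numbers are invented:

```python
import random

def run_staircase(true_threshold=5.0, n_trials=60, step=2.0):
    """Two-down, one-up staircase for a pitch-difference threshold (Hz).
    The simulated listener is more likely to respond correctly as the
    difference between the two tones grows relative to their threshold."""
    difference = 50.0        # start with an easy pitch difference
    correct_in_a_row = 0
    track = []

    for _ in range(n_trials):
        # Crude psychometric function: p(correct) rises from 0.5 towards 1.
        p_correct = 0.5 + 0.5 * min(difference / (2 * true_threshold), 1.0)
        track.append(difference)
        if random.random() < p_correct:
            correct_in_a_row += 1
            if correct_in_a_row == 2:        # two in a row: make it harder
                difference = max(difference - step, 0.5)
                correct_in_a_row = 0
        else:                                # any error: make it easier
            difference += step
            correct_in_a_row = 0

    # Estimate the threshold as the average difference late in the run.
    return sum(track[-20:]) / 20

print(f"estimated threshold: {run_staircase():.1f} Hz")
```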


Monday, September 13, 2010

Using eyetracking to investigate language comprehension in autism


In her classic book, Autism: Explaining the Enigma, Uta Frith coined the term 'weak central coherence' to describe the tendency of people with autism to focus on details at the expense of pulling together different sources of information and seeing the big picture. Frith described this as the "red thread" running through many of the symptoms of autism, including both the difficulties with social interaction and the strengths in attention to detail.

Frith argued that the ability to pull together different sources of information is particularly important for language comprehension and that weak central coherence could explain many of the comprehension difficulties faced by children with autism. Kanner had first noted such difficulties almost half a century earlier, in his original description of autism, observing that "stories are experienced in unintegrated portions". Subsequent studies have confirmed that children with autism often struggle on reading comprehension tests, even when they are able to sound out the words quite fluently. They also tend to perform poorly on tests that require them to 'read between the lines' to make inferences about events that are implied but are not explicitly stated.