Conversations really DO take two.

Aug 18 2010 Published by under Neuroscience, Uncategorized

You've all heard it takes two to tango. And it certainly takes two (or more) to argue. And now, apparently it really does take two to have a conversation.

Stephens et al. "Speaker–listener neural coupling underlies successful communication" PNAS, 2010.

We know that real verbal communication requires both a speaker and a listener (often they go back and forth, but not always). This involves both the production of speech, AND the perception and comprehension of what someone else is saying to you. The question is, HOW does that happen?

The scientists for this study decided to look at this by scanning two separate people with fMRI. The first poured out their life story while their MRI signal was recorded. The second then listened to a recording of the first person's life story while their MRI signal was recorded. And then the scientists compared the two.

fMRI, or functional Magnetic Resonance Imaging, is a technique that scans a person's brain and detects changes in the BOLD signal. BOLD stands for blood-oxygen-level dependent, and means that what fMRI is looking at is blood flow changes, which are overlaid on a high-resolution anatomical map of the brain. The idea is that when neural activity in one area goes UP, oxygen-rich blood will head to that area to supply hungry neurons. Some studies have shown that this does appear to work, though the whole mechanism isn't fully understood yet.

The scientists used the fMRI to show what parts of the speaker's and listener's brains were more active during speaking and listening.

What you can see in the paper's figures is the overlap between activation of a speaker and a listener. They had to shift the listener's signal slightly in time, as the listener takes some time to take in and process the speech. And what you can see is that the speaker and listener OVERLAP in a lot of areas associated with speech. It's not just in the lower areas of straight-up auditory processing, it's in higher areas as well! These include areas for speech processing and comprehension (including Broca's and Wernicke's areas), auditory processing, and speech PREPARATION. This suggests that comprehending speech actually has a good bit to do with the machinery for producing it, including preparation for your own speech.

What's really interesting about this is that they then played a recording, this time spoken in Russian, to a group of people who didn't speak Russian. This time, they got very little match between the speaker's activity and the listener's, and what match there was was just devoted to basic auditory processing, implying that you "tune out" what you can't understand. Literally.

Not only that, it appeared that the listeners actually had some pre-emptive brain activity associated with speech preparation, which the authors interpreted as the brain preparing for an onslaught of language to be decoded, though Sci isn't so sure about that one.

But what Sci liked best about this study is that they found a significant correlation between the degree of synched-up brain activity between speaker and listener, and the degree of comprehension the listener had! This suggests that synched up brain activity has something important to do with listening comprehension.

Now I bet some people are gonna look at the abstract of this paper and be all "TEMPORAL SYNCHRONY! WE ARE READING EACH OTHERS' MINDS!!!!" While that would be, well, really frightening and probably also the start of many wars, it is also not at all true. The scientists in this study had to SHIFT the listener's brain activation in time, because the synchrony is delayed, occurring up to 6 seconds after the listener has heard what the speaker is saying. So it's not instant, you are just processing the speech you are hearing in the SAME AREAS that the speaker is using to produce it, which is really very cool in and of itself. In a manner of speaking, you're "saying" what the person is saying to you as you process it.
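To make that time-shifting concrete, here's a minimal Python sketch of the lagged-correlation idea (an illustration with made-up data, NOT the paper's actual voxel-wise analysis): the "listener" signal is just a delayed, noisy copy of the "speaker" signal, and the correlation peaks at the true delay rather than at zero lag.

```python
# Hedged sketch of speaker-listener "coupling" as a lagged correlation.
# All data here are synthetic; the real study modeled real BOLD time series.
import numpy as np

def lagged_correlation(speaker, listener, lag):
    """Pearson correlation after shifting the listener back by `lag` samples."""
    if lag > 0:
        s, l = speaker[:-lag], listener[lag:]
    else:
        s, l = speaker, listener
    return np.corrcoef(s, l)[0, 1]

rng = np.random.default_rng(0)
speaker = rng.standard_normal(200)   # hypothetical speaker BOLD time series
delay = 3                            # listener lags by 3 samples (~6 s at a 2 s TR)
listener = np.roll(speaker, delay) + 0.3 * rng.standard_normal(200)

# Correlate at a range of lags; the peak recovers the built-in delay
corrs = {lag: lagged_correlation(speaker, listener, lag) for lag in range(0, 7)}
best_lag = max(corrs, key=corrs.get)
print(best_lag)  # recovers the built-in delay of 3
```

At zero lag the two signals barely correlate; only once you shift the listener's signal back in time does the "synchrony" appear, which is exactly why the authors had to apply a temporal shift before comparing brains.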

This has all sorts of cool implications. First off, it provides a new angle on how we communicate, how we process what people are telling us. But it could also provide new ideas for how to look at people with speech and communication problems. If we can tell what areas of the brain are not synching up for them, we may be able to get an idea as to WHY, though that's obviously far off in the future.

There are a couple of things that I think would really clinch this study, though. While they did do the study with a language the listener didn't understand, what about using the same language, but on a TOPIC the listener didn't understand? Like, say, having a particle physicist explain his research to a professional violinist, and vice versa. Do we have the same lack of synchrony when we don't understand the topic, rather than the words? Does this cause us to tune out, and drop out of synchrony?

Another thing it would be interesting to try would be speaking and listening on topics where people disagree. Do people who disagree on a topic (say, religion) really LISTEN to each other? Do they lack synchrony? And where does that happen?

And the next time you're listening to someone tell you their life story, don't just enjoy it, sit back and feel the synchrony!

Stephens GJ, Silbert LJ, & Hasson U (2010). Speaker-listener neural coupling underlies successful communication. Proceedings of the National Academy of Sciences of the United States of America, 107 (32), 14425-30 PMID: 20660768


  • Heather says:

    Thanks for summing up what would have been a difficult, dare I say, fairly inaccessible paper. It does sound fascinating.

    It would be an interesting thought experiment to have autistic listeners, where the comprehension can be sophisticated but incomplete, in particular if the communication output was emotional or ironic in nature, and see which areas are differentially *not* activated.

  • Jac says:

    Could the overlap show that the listener is partially listening and partially framing what they want to say?

  • DD says:

    So as I hear you (or read your words), I'm actually re-speaking them in my brain? I guess that makes sense. It also may answer why some people hear voices etc.

  • Jon Brock says:

    Nice summary, but not sure I share your excitement. Basically, the study shows that there is some overlap between brain regions involved in comprehension and production of speech, which we kind of knew anyway. It doesn't really address whether speech production mechanisms are actually involved in the comprehension process. Sophie Scott has a great review on this out recently in Nature Reviews Neuroscience, debunking many of the arguments relating to the 'motor theory' of speech perception. For example, there's plenty of evidence that speech production deficits (acquired or developmental) have little effect on comprehension.

    Interestingly, her view is that production mechanisms might play an important role in coordinating dialogue. This fits nicely with Garrod and Pickering's theory of dialogue, coming from a more linguistic perspective. However, this study seems to be looking exclusively at monologue. One person is talking and the other is just listening.

    That paper: http://www.nature.com/nrn/journal/v10/n4/abs/nrn2603.html

  • FiSH says:

    In terms of experimental design the paper didn't seem to be much about dialogue to me, just a person listening to a monologue, as Jon said. There were a number of suggestions above that seem like they might improve the study, but it seems to me that something far more interactive and reciprocal would be key. There may be quite different processes involved in reciprocity, and, thinking back, it is one of those pre-language skills that children learn that sets the stage for a variety of communication and cognitive processes, and I suspect it lies (at least partially) outside the main speech comprehension/production areas. And children learn it prior to learning language, or at least any discernible language.

  • Marian Thier says:

    I'm doing work on different listening habits and I wonder if people with different listening habits would be in synch with one another as much as people with the same habits.
