You've all heard it takes two to tango. And it certainly takes two (or more) to argue. And now, apparently it really does take two to have a conversation.
We know that real verbal communication requires both a speaker and a listener (often they go back and forth, but not always). This involves both the production of speech, AND the perception and comprehension of what someone else is saying to you. The question is, HOW does that happen?
The scientists in this study decided to look at this by scanning two separate people with fMRI. The first one poured out their life story while their fMRI signal was recorded. The second one then listened to a recording of the first person's life story while their fMRI signal was recorded. And then the scientists compared the two.
fMRI, or functional Magnetic Resonance Imaging, is a technique that scans a person's brain and detects changes in the BOLD signal. BOLD stands for blood-oxygen-level-dependent, which means that what fMRI is actually looking at is changes in blood flow, overlaid on a high-resolution anatomical map of the brain. The idea is that when neural activity in one area goes UP, oxygen-rich blood will head to that area to supply the hungry neurons. Studies have shown that this does appear to work, though the whole mechanism isn't fully understood yet.
The scientists used fMRI to show what parts of the speaker's and listener's brains were more active during speaking and listening.
What you can see here is the overlap between the activation of a speaker and a listener. They had to shift the listener's recording slightly in time, since the listener takes a moment to take in and process the speech. And what you can see is that the speaker and listener OVERLAP in a lot of areas associated with speech. It's not just in the lower areas of straight-up auditory processing, it's in higher areas as well! These include areas for speech processing and comprehension (including Broca's and Wernicke's Areas), auditory processing, and speech PREPARATION. This suggests that comprehending someone else's speech involves not just hearing it, but also the areas you use to prepare your own speech.
What's really interesting is that they then played a recording of a story told in Russian to a bunch of people who didn't speak Russian. This time, they got very little match between the speaker's activity and the listener's, and what overlap there was was devoted to basic auditory processing, implying that you "tune out" what you can't understand. Literally.
Not only that, the listeners actually showed some pre-emptive brain activity associated with speech preparation, which the authors interpreted as the brain preparing for an onslaught of language to be decoded, though Sci isn't so sure about that one.
But what Sci liked best about this study is that they found a significant correlation between the degree of synched-up brain activity between speaker and listener, and the degree of comprehension the listener had! This suggests that synched up brain activity has something important to do with listening comprehension.
Now I bet some people are gonna look at the abstract of this paper and be all "TEMPORAL SYNCHRONY! WE ARE READING EACH OTHER'S MINDS!!!!" While that would be, well, really frightening and probably also the start of many wars, it is also not at all true. The scientists in this study had to time-SHIFT the listener's brain activation, because the synchrony is slightly offset, occurring up to 6 seconds after the listener has heard what the speaker is saying. So it's not instant; you are just processing the speech you are hearing in the SAME AREAS that the speaker is using to produce it, which is really very cool in and of itself. In a manner of speaking, you're "saying" what the person is saying to you as you process it.
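To see what that time-shifting amounts to, here's a toy sketch in Python (invented numbers and a plain lagged Pearson correlation, NOT the authors' actual analysis pipeline): if the "listener" signal is a noisy, delayed copy of the "speaker" signal, the two correlate best when you slide one back by the right number of samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300            # time points (think of them as ~1 second apart)
lag_true = 4       # listener trails speaker by 4 samples in this toy

speaker = rng.standard_normal(n)
# Listener = delayed copy of the speaker's signal plus noise
listener = np.roll(speaker, lag_true) + 0.5 * rng.standard_normal(n)
listener[:lag_true] = rng.standard_normal(lag_true)  # nothing to track before speech arrives

def lagged_corr(a, b, lag):
    """Pearson correlation of a[t] against b[t + lag]."""
    if lag == 0:
        return np.corrcoef(a, b)[0, 1]
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

lags = range(0, 10)
corrs = [lagged_corr(speaker, listener, lag) for lag in lags]
best = max(lags, key=lambda lag: corrs[lag])
print(best)  # the correlation peaks at the built-in delay of 4 samples
```

At lag 0 the two signals barely correlate at all, which is the point: you only see the coupling once you account for the delay.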
This has all sorts of cool implications. First off, it provides a new angle on how we communicate and how we process what people are telling us. But it could also provide new ideas for how to look at people with speech and communication problems. If we can tell what areas of the brain are not synching up for them, we may be able to get an idea as to WHY, though that's obviously far off in the future.
There are a couple of things that I think would really clinch this study, though. They did run the study with a language the listener didn't understand, but what about using the same language, on a TOPIC the listener doesn't understand? Like, say, have a particle physicist explain their research to a professional violinist, and vice versa. Do we have the same lack of synchrony when we don't understand the topic, rather than the words? Do we tune out in the same way?
Another thing it would be interesting to try would be speaking and listening on topics where people disagree. Do people who disagree on a topic (say, religion) really LISTEN to each other? Do they lack synchrony? And where does that happen?
And the next time you're listening to someone tell you their life story, don't just enjoy it, sit back and feel the synchrony!
Stephens GJ, Silbert LJ, & Hasson U (2010). Speaker-listener neural coupling underlies successful communication. Proceedings of the National Academy of Sciences of the United States of America, 107 (32), 14425-30 PMID: 20660768