Information is coming at us faster than ever, but how much can our brains grasp?
Liina Pylkkänen, professor of linguistics and psychology at New York University, takes a closer look.
LIINA PYLKKÄNEN is a Professor of Linguistics and Psychology at New York University. She is the director of the NYU Neurolinguistics Laboratory and co-director of the Neuroscience of Language Laboratory. Prof. Pylkkänen received her Ph.D. in Linguistics from the Massachusetts Institute of Technology and conducted her postdoctoral work at New York University. She is one of the leading researchers in the use of magnetoencephalography (MEG) to study the brain mechanisms of language, with a focus on semantic cognition.
Split-Second Sentence Processing
In today’s digital world, our brains are bombarded with rapid visual notifications on our many devices. We feel that we can at least partially understand these messages at a single glance, even enough to make decisions about them, such as whether to keep or delete a message. This rapid comprehension is actually a puzzle, however, since most theories of language processing treat comprehension as a sequential process: we understand sentences word by word, building up to larger meanings.
We explored the neurobiology of this very fast at-a-glance processing to understand how the brain handles language when it’s freed from the seriality of speech. We flashed short sentences for just 300 milliseconds and recorded brain activity using magnetoencephalography, which captures neural dynamics with millisecond precision.
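To make the paradigm concrete, here is a minimal sketch of this kind of rapid at-a-glance presentation, written in Python with PsychoPy. The software choice, the example stimuli, and all of the timing apart from the 300-millisecond flash are assumptions for illustration, not details taken from the study.

```python
# A minimal sketch of a rapid "at-a-glance" sentence presentation.
# PsychoPy is an assumption -- the article does not name the software.
# The 300 ms flash duration comes from the text; everything else
# (stimuli, fixation and trial timing) is hypothetical.
from psychopy import visual, core

win = visual.Window(fullscr=False, color="grey", units="pix")

stimuli = [
    "nurses clean wounds",   # simple subject-verb-object sentence
    "wounds nurses clean",   # scrambled word list (hypothetical items)
]

fixation = visual.TextStim(win, text="+")
for sentence in stimuli:
    fixation.draw()
    win.flip()
    core.wait(0.5)           # fixation interval (assumed duration)

    stim = visual.TextStim(win, text=sentence)
    stim.draw()
    win.flip()               # whole sentence appears at once...
    core.wait(0.3)           # ...and stays up for just 300 ms

    win.flip()               # blank screen until the next trial
    core.wait(1.0)           # inter-trial interval (assumed)

win.close()
core.quit()
```

In the actual experiments, the MEG recording would be time-locked to the flip that draws the sentence, so neural responses can be measured from the moment the entire sentence reaches the eyes at once.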
Our findings were striking: around 125 milliseconds after the sentence appeared, the brain started to distinguish sentences from unstructured word lists. That is less time than it takes to utter a single syllable in speech. But what did the brain actually detect this quickly? Our results show that the brain seems to be detecting very basic sentence structure, like subject-verb-object order, but at this stage it does not notice small grammatical errors, or even whether the meaning of the sentence is totally nonsensical.
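The kind of analysis that can reveal such a latency is time-resolved decoding: train a classifier at every time point to tell sentence trials from word-list trials, and see when it first succeeds. Below is a minimal sketch assuming MNE-Python and scikit-learn; the file name, trigger codes, and epoch window are hypothetical, and this is a generic illustration of the technique rather than the lab's actual pipeline.

```python
# A minimal sketch of time-resolved MEG decoding: sentence vs. word list.
# Assumes MNE-Python and scikit-learn; file name and event codes are
# hypothetical placeholders.
import mne
import numpy as np
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

raw = mne.io.read_raw_fif("sample_meg_raw.fif", preload=True)  # hypothetical file
events = mne.find_events(raw)
event_id = {"sentence": 1, "word_list": 2}   # assumed trigger codes

# Epoch from 100 ms before to 600 ms after stimulus onset.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.6,
                    baseline=(None, 0), preload=True)

X = epochs.get_data()      # shape: (n_trials, n_sensors, n_timepoints)
y = epochs.events[:, 2]    # condition label for each trial

# Fit a separate classifier at every time point ("sliding" decoding).
clf = make_pipeline(StandardScaler(), LogisticRegression())
decoder = SlidingEstimator(clf, scoring="roc_auc")
scores = cross_val_multiscore(decoder, X, y, cv=5).mean(axis=0)

peak = epochs.times[np.argmax(scores)]
print(f"Peak decoding at {peak * 1000:.0f} ms, AUC = {scores.max():.2f}")
```

If basic sentence structure is registered around 125 milliseconds after onset, the decoding score should climb above chance at roughly that latency.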
This raises an intriguing possibility: perhaps language only seems like a serial system because the organ we primarily use to get it out of our brains, the mouth, can output just one sound at a time. Sign languages already suggest that this is the case: they have a richer array of articulators, the hands, the face, and the mouth, and indeed express meaning in a more parallel way.