On this Student Spotlight: Understanding language is key to being human…or a chatbot.
Zaid Zada, Ph.D. candidate at Princeton University, examines language and language models.
Zaid is a Ph.D. candidate at Princeton University studying how the brain processes language, how multiple brains synchronize to share information with each other, and what language models can teach us about the brain’s linguistic capabilities.
Brains and Machines Navigate a Common Language Space for Communication
As you’re listening to me speak, our brains are becoming more and more synchronized. This is the power of language. It allows me to transform the neural activity that represents my intended message into words to send to you. Then, when you hear the words, your brain recreates the message by mirroring the same neural activity as mine. We call this kind of synchrony “speaker–listener coupling”. In some sense, this is not very surprising because, to understand each other, we have to agree on what words mean in the context they’re used in. We experience this subjectively when we say, “we’re on the same wavelength”.
Our research delves deeper into this phenomenon. In our study, we collected a unique dataset of high-quality brain recordings of people engaged in natural conversations. Beyond replicating speaker–listener coupling, we found that a shared “neural code” for language underlies it: the way our brains represent words is similar across people. What’s more, we found that large language models exhibit a neural code for language similar to that of our brains.
Large language models are at the core of the latest AI technologies, such as ChatGPT. It’s become clear that they learn enough about language to converse meaningfully with us. Our study found that large language models encode linguistic information similarly to the human brain. This similarity allowed us to track how a thought in the speaker’s brain is encoded into words and transferred, word by word, to the listener’s brain. We found that linguistic content emerges in the speaker’s brain before a word is articulated, and the same linguistic content rapidly reemerges in the listener’s brain after the word is heard. This discovery highlights the shared neural code that we, and machines, use to communicate.
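The word-by-word tracking described above relies on an “encoding model”: a regression that predicts brain activity from a language model’s word embeddings, then tests the prediction on held-out words. A minimal sketch of that idea, using fully simulated data and plain ridge regression (real studies use electrode recordings and embeddings from a pretrained language model; every array and parameter below is a hypothetical stand-in):

```python
import numpy as np

# Hypothetical encoding-model sketch: predict neural activity from
# language-model word embeddings via ridge regression, then measure
# how well predictions generalize to held-out words.
rng = np.random.default_rng(0)

n_words, emb_dim, n_electrodes = 200, 50, 10

# Simulated word embeddings (stand-in for LLM contextual embeddings).
embeddings = rng.standard_normal((n_words, emb_dim))

# Simulated neural activity: a noisy linear readout of the embeddings,
# mimicking the assumption that brain responses share the model's code.
true_weights = rng.standard_normal((emb_dim, n_electrodes))
neural = embeddings @ true_weights + 0.5 * rng.standard_normal((n_words, n_electrodes))

# Split the word sequence into training and held-out test words.
train, test = slice(0, 150), slice(150, 200)

# Closed-form ridge regression: W = (X^T X + alpha*I)^-1 X^T Y
alpha = 1.0
X, Y = embeddings[train], neural[train]
W = np.linalg.solve(X.T @ X + alpha * np.eye(emb_dim), X.T @ Y)

# Correlate predicted and actual activity per electrode on held-out words.
pred = embeddings[test] @ W
corrs = [np.corrcoef(pred[:, e], neural[test][:, e])[0, 1]
         for e in range(n_electrodes)]
print(f"mean held-out correlation: {np.mean(corrs):.2f}")
```

In the actual study, a reliable correlation on held-out words is what licenses the claim that the model’s embedding space and the brain’s activity share a linguistic code; here the high correlation is guaranteed by construction of the simulated data.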
Read More:
[Neuron] – A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations
[The Conversation] – AIs encode language like brains do − opening a window on human conversations