Julia Strand, Carleton College – Presence and Timing of Speech

On Carleton College Week: You could hear this more clearly if you could see me talking.

Julia Strand, Assistant Professor of Psychology, explains how seeing a talking face gives us clues even if we can’t fully hear what is being said.

Julia Strand is an Assistant Professor of Psychology at Carleton College. She holds a B.A. in Psychology & English from Tufts University and an M.A. and Ph.D. from the Brain, Behavior, & Cognition program at Washington University in St. Louis, and she completed a postdoctoral fellowship in the Laboratory of Sensory Neuroscience and Neuroengineering in the Department of Biomedical Engineering at Washington University in St. Louis.

She teaches courses including Introduction to Psychology, the Psychology of Spoken Words, Sensation & Perception, and Perceptual & Cognitive Expertise. Her research focuses on how humans are able to turn sensory information about speech into meaningful representations. Topics of research include how cognitive abilities influence language perception, what traits of words promote easy recognition, how word recognition abilities change with age, and how visual information (seeing the talker) influences language processing.

Presence and Timing of Speech


Most people who have been to a noisy cocktail party know that it’s easier to understand what someone is saying when you can see their talking face. But what is it about seeing someone that makes it easier to hear them? It is clear that most people tend to unconsciously lipread while listening, meaning they’re getting information about the particular speech sounds the talker is producing. For instance, the words map and nap sound a lot alike, but the mouth movements used to make them look very different.

But it may also be that a talking face provides helpful information about the timing of speech – that is, a moving mouth may cue us about when to listen. In a recent study in our lab, we tested whether listeners get some benefit from visual cues about timing. We asked participants to listen to spoken sentences and words in background noise while they looked at a circle on a computer screen that modulated with the speech: the circle got larger and brighter as the speech got louder. We found that this visual stimulus didn’t help participants identify more words, but it did make the task much easier for them. That is, they were able to complete a distracting secondary task more quickly (basically, multitask better) when they could see the modulating circle than when they could not. They also self-reported that it was easier to understand the speech when they had the visual cues.

This suggests that a talking face may provide cues about when in time it is important to listen closely. These findings may have applications for helping people in noisy settings, and those with hearing impairments, understand speech more successfully. So the next time you’re listening to someone in a noisy setting, look closely at their mouth and appreciate how much it is telling you!
