Listening for the human voice: UTM professor delves into how people perceive sound

Christina Vanden Bosch der Nederlanden (photo by Nick Iwanyshyn)

Christina Vanden Bosch der Nederlanden remembers the moment she became interested in how humans perceive sound. 

She was playing the cello in Grade 5, and it was her section’s turn to perform a melody line. As she played her instrument, she was suddenly struck with a strong emotional response. 

“The string vibrated, which brought sound vibrations to my ear, and then I got chills. I wondered, how is this happening?” she remembers.   

When she described the sensation to her conductor, he was familiar with the experience and suggested literature she could read about how people react to sound.

That early experience sparked der Nederlanden’s lifelong curiosity about how people hear sound – which has led to her current research on why individuals focus on the human spoken voice over all other sounds in everyday situations. 

“A lot of my research has shown that even from four months of age, our attention is biased to pick up the human voice if there are different sounds playing. We are biased to listen to speech above all other sounds,” says der Nederlanden, an assistant professor in psychology at UTM who also heads the university’s Language, Attention, Music and Audition (LAMA) Lab.

Der Nederlanden explains that when somebody is listening to another person speaking, there might be many other competing sounds happening around them – such as a car going by, a coffee maker beeping, or a refrigerator humming. Despite these distracting sounds, that person is still able to pay attention to what’s most relevant: the person talking.  

For years, she has studied this phenomenon, known as attentional speech bias – and has recently been awarded two NSERC Discovery Grants to better understand why people’s attention is biased to pick up a human voice when many other sounds are happening at the same time.

As part of her NSERC-funded project, Predicting listeners' attentional bias toward the human voice: perceptual, neural, and semantic factors, der Nederlanden and her research team will investigate the many factors at play in attentional speech bias.

The team will look at how human development plays a role – including whether people are innately biased from an early age toward acoustic characteristics unique to the human voice.

The project will also measure participants’ brain activity to see how their brains track environmental sounds – like a dog barking or a train going by. 

The project is the latest in der Nederlanden’s research that looks at how humans perceive sound. As principal investigator at the LAMA Lab, she and her colleagues study what's relevant in our busy auditory worlds for communication.  

In August, the research team will begin studying whether babies know the difference between speech and song, and how the brain processes music and speech in early development – building on der Nederlanden’s previous research.

“We want to know, when in development do we know that speech and song are different and require different spheres of knowledge? When in development do we learn these things – and is it important for us to learn these things earlier in development so that we can be good communicators?” der Nederlanden says. 

She adds that she hopes her research might help in developing training techniques for individuals who struggle with language, such as children with dyslexia and autism. 

“I’d really like to get connected with some hospitals and local organizations in the area to start seeing how we can work with them, and ask how musical interventions, alongside traditional interventions, could be used to help kids who struggle to pay attention to what’s relevant for language and communication.”