“When it comes to recognizing speech, being in noise is like being old”

 

Kristin Van Engen – kvanengen@wustl.edu

Washington University in St. Louis

1 Brookings Drive

St. Louis, MO 63130

 

Avanti Dey

Washington University in St. Louis

1 Brookings Drive

St. Louis, MO 63130

 

Nichole Runge

Washington University in St. Louis

1 Brookings Drive

St. Louis, MO 63130

 

Mitchell Sommers

Washington University in St. Louis

1 Brookings Drive

St. Louis, MO 63130

 

Brent Spehar

Washington University in St. Louis

1 Brookings Drive

St. Louis, MO 63130

 

Jonathan E. Peelle

Washington University in St. Louis

1 Brookings Drive

St. Louis, MO 63130

 

Popular version of paper 4aSC12

Presented Thursday morning, May 10, 2018

175th ASA Meeting, Minneapolis, MN

 

How hard is it to recognize a spoken word?

Well, that depends. Are you old or young? How is your hearing? Are you at home or in a noisy restaurant? Is the word one that is used often, or one that is relatively uncommon? Does it sound similar to lots of other words in the language?

As people age, understanding speech becomes more challenging, especially in noisy situations like parties or restaurants. This is perhaps unsurprising, given the large proportion of older adults who have some degree of hearing loss. However, hearing measurements do not actually do a very good job of predicting the difficulty a person will have with speech recognition, and older adults tend to do worse than younger adults even when their hearing is good.

We also know that some words are more difficult to recognize than others. Words that are used rarely are more difficult than common words, and words that sound similar to many other words in the language are recognized less accurately than unique-sounding words. Relatively little is known, however, about how these kinds of challenges interact with background noise to affect the process of word recognition or how such effects might change across the lifespan.

In this study, we used eye tracking to investigate how noise and word frequency affect the process of understanding spoken words. Listeners were shown a computer screen displaying four images and heard the instruction “Click on the” followed by a target word (e.g., “Click on the dog.”). As the speech signal unfolded, the eye tracker recorded the moment-by-moment direction of each person’s gaze (60 times per second). Because listeners direct their gaze toward the visual information that matches incoming auditory information, this method allows us to observe the process of word recognition in real time.

Our results indicate that word recognition is slower in noise than in quiet, slower for low-frequency words than for high-frequency words, and slower for older adults than for younger adults. Interestingly, young adults were slowed down by noise more than older adults were. The main difference, however, was that young adults were considerably faster to recognize words in quiet conditions. That is, word recognition by older adults did not differ much between quiet and noisy conditions, but young listeners looked like older listeners when tasked with recognizing speech in noise.
