ASA Lay Language Papers
162nd Acoustical Society of America Meeting


Hearing between the lines: how people with hearing impairment can take detours to understand speech

Matthew B. Winn – mwinn83@gmail.com
Monita Chatterjee – mchatter@umd.edu
William J. Idsardi – idsardi@umd.edu
University of Maryland, College Park
College Park, MD 20742 

Popular version of paper 2aSC4
Presented Tuesday morning, November 1, 2011
162nd ASA Meeting, San Diego, Calif.

Getting a hearing aid or cochlear implant is not like picking up a new set of eyeglasses. Hearing devices don’t “fix” your hearing instantly; people with hearing loss need to re-learn what sounds are and how to interpret them. This can be a long and difficult process, despite the best efforts of clinicians and hearing aid engineers. Recent research at the University of Maryland suggests that people with hearing impairment don’t merely have less success listening to speech; they may adopt a whole different kind of listening strategy to hear, a hearing detour. These differences in listening strategy have implications for what we know about hearing loss and how to treat it effectively.

In two sets of experiments, listeners adapted quickly to challenging listening conditions that simulated hearing impairment, listening in a noisy room, or wearing a cochlear implant. The participants in these studies were able to change their listening strategies on the fly while identifying words. For example, people with normal hearing usually pay close attention to changes in the timing of speech. But they could also ignore those timing signals and instead listen for a change in voice pitch to glean the same information. In another task, participants listened for timing signals rather than paying attention to vowel articulation. When patients with cochlear implants were brought into the lab, their results confirmed what the simulations predicted. They heard little bits of information that usually go unnoticed and used them to compensate for impaired hearing in other areas. They didn’t need any training, or even any explicit instructions. It appears that people naturally seek out the best strategy to solve a listening task, no matter what kind of challenges they encounter.

People constantly encounter variety in the speech patterns of those around them. We adjust to different dialects, different ages, and even simple gender differences in speech timing, articulation, and voice quality. In one experiment, people with normal hearing and people with cochlear implants adjusted to listening to male and female voices. Since female talkers produce sounds at higher frequencies than male talkers, listeners compensate by adjusting what they consider to be high versus low: a sound counted as high-frequency coming from a male might count as low-frequency coming from a female. The crossover point between a high sound like ‘s’ and a lower sound like ‘sh’ is thus different for male and female talkers. The implanted patients were not expected to make this adjustment easily, since sound is distorted and unclear when delivered through their devices. Not only did they outperform expectations, they produced a result that surprised everyone involved. People with cochlear implants didn’t just use their ears to adjust between ‘s’ and ‘sh’; they used their eyes. Beyond the adjustment made after hearing a female voice, they made a further adjustment after seeing a female face, over and above the benefit of lip-reading. Where their ears fell short, their eyes offered extra support.
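
For readers who want a more concrete picture, here is a minimal sketch, in Python, of the kind of category-boundary shift described above. It is purely illustrative and not part of the study: the boundary frequencies, slope, and function name are invented for this example, and real perceptual boundaries depend on the listener and the speech materials.

    # Purely illustrative toy model (not from the study): the 's'/'sh' category
    # boundary sits at a higher frication frequency when the talker is female,
    # because listeners expect higher frequencies overall from female voices.
    # All numbers below are invented for illustration.
    import math

    def prob_hear_s(frication_hz, talker):
        """Probability of labeling a fricative as 's' rather than 'sh',
        modeled as a logistic function of its spectral peak frequency."""
        boundary_hz = 4200 if talker == "female" else 3600  # hypothetical boundaries
        slope = 1.0 / 400.0  # steepness of the category boundary, also hypothetical
        return 1.0 / (1.0 + math.exp(-slope * (frication_hz - boundary_hz)))

    # The same 3900 Hz sound is more likely to be heard as 's' from a male talker
    # than from a female talker, because the expected crossover point differs.
    for talker in ("male", "female"):
        print(talker, round(prob_hear_s(3900, talker), 2))

In this sketch, a single ambiguous sound falls on the ‘s’ side of the boundary for one talker and the ‘sh’ side for the other, which is the kind of adjustment the listeners in the experiment made automatically.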

Adjustment to talker-specific speech

Figure 1. Illustration of different contextual influences on the perception of speech sounds by listeners with normal hearing and with cochlear implants. Pictured are Matthew Winn and Ariane Rhone, who designed this experiment while carrying out their PhD work at the University of Maryland.

Living with hearing impairment doesn’t just mean taking wild guesses at what is heard – it means having to change what to listen for. It seems paradoxical, but the complexity of speech can turn out to be a great asset for those most in need. As one piece of the puzzle is lost, another can be gained, perhaps from an unlikely source.  There is much work to be done, with different kinds of speech and different kinds of listeners. Perhaps further training to tune into these subtleties can help even more. As technology progresses to better treat hearing impairment, it is hoped that this work can better inform the systems that extract and deliver speech information to those who need it most.
