ASA PRESSROOM

Acoustical Society of America
139th Meeting Lay Language Papers


Time and Timing: Age-related Differences in Auditory, Speech, Language, and Cognitive Processing

M. Kathleen Pichora-Fuller - fullerk@telus.net

University of British Columbia
Vancouver, British Columbia, Canada

Contact at meeting: Westin Peachtree Plaza, 404-659-1400

Popular version of paper 2aPP3
Wednesday Morning, May 31, 2000
139th ASA Meeting, Atlanta, GA

Older adults have difficulty understanding language spoken in noisy conditions or at fast rates, even though they may have little difficulty when a talker speaks slowly and the surrounding environment is relatively quiet. In everyday life, conversations with a family member in the living room may be easy, but conversations at a cocktail party may be virtually impossible. These difficulties begin in the fourth decade, and more and more adults are affected as age increases, with about half of 75-year-olds having a clinically significant hearing loss. However, the change in the ability to detect simple sounds that is measured on standard clinical tests cannot fully account for their difficulties understanding language spoken in noisy conditions. Even though some speech sounds may become inaudible, many of the sounds of speech remain well above the older listener's thresholds of detection. A common report of older listeners is that they can hear speech but it is unclear. We need to understand more about how the auditory system ages if we are to discover why speech is unclear even when it is audible. Solving this problem will assist diagnosis and rehabilitation, and it will be crucial to further advances in the design of hearing aids and other communication devices for older listeners. Differences in temporal processing, or how older listeners use time cues in speech, may provide new explanations. If aging auditory systems are slower or less able to precisely code the timing of sound cues, then speech might be perceived less clearly even when it is loud enough.

We know that the auditory system is capable of several different kinds of time coding. The main types of auditory temporal processing include synchrony coding (phase-locking, or neural firing timed to the cycles per second of the input sound), gap and duration coding (specialized neural responses to the onsets and offsets of sound energy), and coding of prosodic patterns (speech rate and syllabic rhythm patterns). There is mounting behavioral (psychoacoustic) and physiological evidence of age-related changes at each of these levels of auditory temporal processing, even when the listeners are considered to have clinically normal hearing in terms of their ability to detect simple tones presented to one ear at a time in quiet. Synchrony coding enables listeners to detect signals in noise better with two ears than with one ear if they are able to precisely code the timing of right- and left-ear inputs; however, older listeners are less able than younger listeners to use synchrony coding in this way. Many older listeners cannot hear a gap in a stretch of sound until the gap is much longer than it needs to be for a younger listener to hear it. Finally, rapid patterns are more challenging for older listeners than for younger listeners.
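To make the gap-detection idea concrete, here is a minimal sketch of the kind of stimulus such tasks use: a burst of noise, a silent gap, then another burst, with the gap duration varied to find the shortest gap a listener can hear. This is an illustration only; the sample rate, marker duration, and gap durations are assumptions, not the stimuli from the studies described in this paper.

```python
import numpy as np

def gap_stimulus(gap_ms, fs=44100, marker_ms=250, seed=0):
    """Build a noise burst - silent gap - noise burst waveform.

    gap_ms: duration of the silent interval between the two noise
    markers, in milliseconds. All parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    # Two identical noise "markers" flanking the silent gap
    marker = rng.uniform(-1.0, 1.0, int(fs * marker_ms / 1000))
    gap = np.zeros(int(fs * gap_ms / 1000))
    return np.concatenate([marker, gap, marker])

# A 2-ms gap (near young listeners' limits) vs. a clearly audible 20-ms gap
stim_2ms = gap_stimulus(2)
stim_20ms = gap_stimulus(20)
```

In an experiment, gap duration would be varied adaptively until the listener could just detect the interruption; the point of the sketch is simply that the only difference between "gap" and "no gap" stimuli is a few milliseconds of silence.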

Correspondingly, particular speech cues can be coded by at least three different types of auditory temporal processing. First, voice cues such as voice quality and pitch, arising from the quasi-periodic vibrations of the vocal folds, rely on synchrony coding; e.g., the differences between male and female voices. Second, some word-level contrasts rely on gap or duration coding; e.g., the difference between 'slit' and 'split', where the key feature of the 'p' sound is a short interruption of the airstream when the lips close. Third, syllabic rhythms and the ability to follow different rates of speech rely on coding of higher-level patterns.

Despite the apparent correspondences between these types of temporal processing for non-speech and speech materials, the correlations found between psychoacoustic and speech perception measures have not always been significant. A closer examination of the parallels at each level of temporal processing may illuminate how non-speech and speech measures might be related. Age-related difficulties in speech perception and language comprehension will be considered in terms of each of the three levels of temporal processing. Specifically, evidence of age-related differences in word recognition has been demonstrated in studies employing speeded speech, speech with reduced stop-consonant gaps, and jittered or desynchronized speech. While researchers have argued that perceptual and cognitive declines both affect aging listeners, experiments in which the difficulty of the listening situation is matched for young and old listeners suggest that the apparent age-related differences in cognitive performance on memory or comprehension tasks are secondary to perceptual differences. In fact, older listeners show greater benefit from the use of supportive context during word recognition, suggesting that cognitive adaptations may functionally enhance comprehension even when perception is poorer for older listeners than for younger listeners. Both time and timing seem to be affected, with aging auditory systems being slower and more asynchronous. Slowing and asynchrony seem to be characteristics shared by perceptual and cognitive systems in aging. It seems that auditory aging may exacerbate apparent cognitive decline, whereas expert deployment of cognitive resources may offset loss of perceptual abilities.
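To give a rough sense of what "jittered or desynchronized speech" means, the sketch below perturbs the timing of short frames of a waveform by small random amounts. This is a simplified stand-in for such manipulations, not the procedure used in the studies mentioned above; the frame length and jitter range are assumptions chosen for illustration.

```python
import numpy as np

def jitter(signal, fs=44100, frame_ms=10, max_shift_ms=1, seed=0):
    """Desynchronize a waveform by shifting each short frame in time.

    Each frame of the output is read from the input at a randomly
    offset position (up to max_shift_ms), disrupting fine timing
    while leaving the overall energy pattern largely intact.
    All parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    frame = int(fs * frame_ms / 1000)
    max_shift = int(fs * max_shift_ms / 1000)
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame, frame):
        shift = rng.integers(-max_shift, max_shift + 1)
        # Read this frame from a slightly offset position in the input
        src = np.clip(np.arange(start, start + frame) + shift,
                      0, len(signal) - 1)
        out[start:start + frame] = signal[src]
    return out
```

The intuition is that a listener whose auditory system codes timing imprecisely receives something like the jittered version of the signal, which is why such stimuli are used to simulate aspects of auditory aging in young listeners.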
