Dorea Ruggles – email@example.com
Barbara Shinn-Cunningham – firstname.lastname@example.org
44 Cummington Street
Boston, MA 02
Popular version of paper 1aPP7
Presented Monday morning, May 23, 2011
161st ASA Meeting, Seattle, Wash.
In everyday conversation, speech is at a comfortable listening level, well above the threshold of hearing. Many problems with communication come about not because people cannot hear the speech, but rather because they cannot hear it well enough: the speech is usually detected, but may not be clear enough to be understandable. However, conventional definitions of “normal” hearing depend solely on the audibility of quiet sounds, completely ignoring how well details of clearly audible (supra-threshold) sound are encoded.
A small but significant percentage of “normal” hearing listeners struggle with difficult tasks like listening to a friend in a loud, crowded room, where there are multiple audible sound sources. These individual differences among “normal” hearing people may stem from a number of different sources. One hypothesis is that the differences are cognitive: some people are better able to “think” about what they are listening to. Alternatively, the differences may stem from variability in the auditory system’s ability to properly represent all of the information contained in the sounds people hear. Our study examines this second idea: that differences in how well listeners can direct auditory attention to understand speech in complex settings come about from differences in how well the sensory system encodes supra-threshold sound.
Auditory attention refers to the ability of listeners to determine which sounds come from which sources in the environment and to attend to a desired source (suppressing other, competing sounds). Auditory attention requires high-level cognitive processes. It also requires that audible sounds be represented with enough detail to allow a listener to determine which sound is the important one and which are to be ignored. Early sensory encoding of sound takes place in a portion of the brain called the brainstem. The neural signals that represent sound navigate a network of structures in the brainstem before they arrive at the cortex, where higher-order processing takes place, including association of sound with meaning, memory, and abstract processing.
We find that listeners with clinically “normal” hearing exhibit large individual differences in performance on a demanding auditory attention task. We also demonstrate that performance on a selective auditory attention task correlates strongly with objective physiological measures of the brainstem’s ability to encode the timing and pitch information in sound.
These results illustrate how differences in the auditory periphery of “normal-hearing” listeners can explain performance differences in “central,” supra-threshold auditory tasks like those important for everyday communication. The tasks reported here may be clinically useful for diagnosing peripheral deficits in “normal-hearing” listeners and for teasing apart the relative contributions of peripheral and central processing to complex auditory tasks.