Xing Li – xingli@u.washington.edu
Kaibao Nie – niek@u.washington.edu
Nikita Imennov – imennov@u.washington.edu
Jong Ho Won – jhwon@u.washington.edu
Les Atlas – atlas@u.washington.edu
Jay Rubinstein – rubinj@u.washington.edu
University of Washington, Seattle, WA 98195
Popular version of paper: 5aPP15
Presented Friday morning, May 27th, 2011
161st ASA Meeting, Seattle, WA
Cochlear implants are among the world’s most successful medical devices. To date, they have restored partial hearing to nearly 190,000 profoundly deaf individuals worldwide. Unlike hearing aids, which simply amplify sound, cochlear implants bypass the non-functioning parts of the inner ear and restore speech and sound perception with patterns of small electrical pulses.
Modern cochlear implants allow patients to achieve remarkably high levels of speech understanding in quiet settings. However, implant users still have difficulty hearing music and understanding speech in background noise, such as in a café or on a busy street. Previous studies have shown that poor encoding of pitch information, the basis of melody, is partly responsible for this subpar performance. We have developed a new approach to converting ambient sound into electrical pulses that may improve pitch perception for cochlear implant patients. Our approach breaks the incoming sound down into its individual harmonic components and then delivers electrical pulse patterns synchronized to those components, restoring some of the timing cues that support normal pitch coding.
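For readers curious about the general signal flow, the sketch below illustrates the idea in Python. It is not our actual processing strategy: the fixed fundamental frequency, the filter bandwidth, the number of harmonics, and the one-pulse-per-cycle timing rule are all simplifying assumptions made only for illustration.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def harmonic_pulse_trains(x, fs, f0, n_harmonics=8, half_bw=50.0):
    # Decompose the sound into narrow bands around the harmonics of f0,
    # then place pulses at moments synchronized to each band's temporal
    # fine structure, scaled by that band's slowly varying envelope.
    trains = []
    for k in range(1, n_harmonics + 1):
        fc = k * f0                          # assumed (fixed) harmonic frequency
        lo = max(fc - half_bw, 1.0)
        hi = fc + half_bw
        if hi >= fs / 2:                     # skip harmonics above the Nyquist limit
            break
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)             # narrow band around the k-th harmonic
        analytic = hilbert(band)
        envelope = np.abs(analytic)          # loudness cue (slow envelope)
        phase = np.angle(analytic)           # timing cue (temporal fine structure)
        # One pulse per cycle, at the upward zero crossing of the phase,
        # i.e. at each peak of the harmonic's fine structure.
        peaks = np.where(np.diff(np.signbit(phase).astype(int)) == -1)[0]
        train = np.zeros_like(x, dtype=float)
        train[peaks] = envelope[peaks]
        trains.append(train)                 # one pulse train per harmonic band
    return trains

In a real device, each of these pulse trains would drive a different electrode along the cochlea; the key point of the illustration is that both the amplitude and the timing of the pulses follow the individual harmonics rather than a fixed clock.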
To gauge the potential of our approach, we conducted a test in which eight cochlear implant patients were asked to identify common musical instruments, which differ in timbre. With our approach, five of the eight subjects showed substantial improvement over their conventional clinical processor, and two of them also showed marked improvement in recognizing familiar melodies. This statistically significant improvement in cochlear implant performance suggests that pitch and timbre information can be delivered more effectively with our approach.
Better delivery of pitch information is beneficial not only for music perception but also for speakers of tonal languages, in which pitch patterns can change a speech sound’s meaning. For example, spoken in a monotone, Mandarin “ma” means “mom,” but pronounced with a falling-then-rising pitch pattern, somewhat like asking a question in English, its meaning changes to “horse.” Since cochlear implant patients in the United States are typically native English speakers, we did not run a tonal-language experiment with patients. Instead, we generated simulations of cochlear implant speech and presented them to five normal-hearing Mandarin speakers. In a test of Mandarin tone discrimination, all five subjects achieved nearly perfect scores with our approach, whereas they all had significant difficulty with the simulation of a typical conventional clinical approach. Furthermore, neural responses produced by a computational model of the auditory nerve also showed better representation of the tonal patterns. With more than a quarter of the world’s population speaking tonal languages, the improved pitch information provided by our new approach has the potential to make a significant impact.