State of the Art in the Perceptual Design
of Hearing Aids
Brent Edwards, brent@edwards.net
Sound ID
3430 West Bayshore Road
Palo Alto, CA 94101
Popular version of paper 2aPPa1
Presented Tuesday morning, June 4, 2002
143rd ASA Meeting, Pittsburgh, PA
Sixteen percent of the United States population is hearing impaired, a percentage that is expected to increase as the Baby Boomers enter the age range of increased hearing loss. This loss can have a significant effect on people's lives, beginning with difficulty understanding speech. You can compare how speech is heard by someone with normal hearing to how the same speech is heard by someone with a typical moderate hearing impairment.

Current hearing aid technology, based on models of the ear, processes speech by applying more amplification to the softer, harder-to-hear consonants than to the louder, more perceptible vowels, as demonstrated by this (750 KB) video clip. The blue signal on the right is a sentence before processing and the red signal on the left is the same sentence after processing.
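The level-dependent amplification described above can be sketched as a simple compressive gain rule: soft inputs receive full gain, and the gain shrinks as the input gets louder. This is a minimal illustration of the idea, not the processing of any actual product; the threshold, ratio, and gain values below are assumptions chosen for demonstration.

```python
def wdrc_gain_db(input_db, threshold_db=45.0, ratio=3.0, max_gain_db=30.0):
    """Level-dependent gain: soft sounds (e.g. consonants) receive more
    amplification than loud sounds (e.g. vowels).

    Below the compression threshold, the full gain is applied; above it,
    the output grows only 1/ratio dB per input dB, so the gain is
    reduced as the input level rises. All parameter values here are
    illustrative assumptions, not taken from any real device.
    """
    if input_db <= threshold_db:
        return max_gain_db
    return max_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)

# A soft consonant near 40 dB SPL receives the full 30 dB of gain,
# while a loud vowel near 75 dB SPL receives only 10 dB.
soft_gain = wdrc_gain_db(40.0)
loud_gain = wdrc_gain_db(75.0)
```

With these example settings, the soft consonant is amplified 20 dB more than the loud vowel, which is the behavior shown in the video clip: the quiet parts of the sentence are boosted far more than the loud parts.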
Hearing aids have gone through a major technological revolution over the past six years. The most significant development has been the introduction of digital signal processing (DSP) to these products, an innovation that had awaited breakthroughs in size and power consumption reduction for both DSP chips and analog-to-digital converters. Digital technology has significantly increased hearing aid functionality, much in the same way that digital technology in personal video recorders such as Tivo has advanced functionality beyond that of VCRs.
Many high-end hearing aids now offer a range of advanced features.
Sophisticated research into the physiology and psychology of hearing loss is under way that should lead to further improvements in hearing aid signal processing. Eric Young at Johns Hopkins University is investigating how the auditory nerve of an impaired auditory system encodes speech, work that may lead to new hearing aid designs. Brian Moore at Cambridge University has developed a clinical technique, similar to an audiogram, that determines how sensorineural damage is divided between the inner and outer hair cells of the cochlea; this may allow hearing aid processing to be customized to the wearer even more than it already is.
As these and other research projects move from the laboratory to the hearing aid manufacturer, hearing aids will continue to evolve into more sophisticated and feature-rich devices. Wireless connectivity between hearing aids, not yet available, would enable improved directional processing that increases the ability of the hearing impaired to understand speech in noisy situations. This is one of the most active areas of research among hearing aid manufacturers, because understanding speech in noise is one of the most difficult situations for the hearing impaired. The figure on the left shows the number of decibels by which speech must exceed the noise for subjects to understand half of the words. The increasing bars show that a person's ability to understand speech in noisy situations, such as a crowded restaurant, worsens as their hearing loss increases. Other potential benefits of binaural (two-eared) wireless hearing aids are improved understanding of speech in highly reverberant (echo-producing) rooms and an improved ability to localize sounds, so that hearing aid wearers can better tell what direction sounds are coming from.
In the special session titled "Hearing Aid Design: Psychophysics and Signal Processing" at the 143rd meeting of the Acoustical Society of America, several prominent researchers have been invited to speak on advanced topics that may affect future hearing aid technology.