Harry Levitt
Professor Emeritus
The City University of New York
Special Lay-Language Paper for
ASA's 75th Anniversary Meeting
May 2004
The primary function of a hearing aid is to amplify sound and that is exactly
what the earliest hearing aids were designed to do. There were, of course, imperfections
in that these early instruments introduced internally generated noise and non-linear
distortions. Nevertheless, they served their designed function relatively well.
As our understanding of hearing loss and related factors improved, the design
of hearing aids changed to include other features.
Vanity First
What are the factors that need to be taken into account in the design of a modern
hearing aid? Two factors dominate: the nature of the hearing loss and human vanity.
The latter factor has, for the most part, been the driving force in the evolution
of hearing aids and will be discussed first.
The earliest hearing aids were large, bulky instruments that were inconvenient
to use. The introduction of miniature electron tubes in the 1930s allowed for
the development of hearing aids that were small enough to be wearable. The use
of wearable hearing aids grew rapidly and, with the growing demand for these
'personal hearing instruments', increased attention was given to the problem
of prescribing and fitting hearing aids. Some progress was made along these
lines, but manufacturers soon discovered that vanity was the driving force in
the marketplace. So began a long-lasting trend to make hearing aids smaller
and smaller, thereby making them less visible and cosmetically more acceptable.
After the invention of the transistor, it was possible to make hearing aids small
enough to fit behind the ear (BTE hearing aids). Further advances in solid state
electronics provided the means for making hearing aids small enough to fit in
the ear (ITE hearing aids) and even instruments small enough to fit completely
in the ear canal (CIC hearing aids). These instruments are barely visible.
Hearing aids worn on the ear have important audiological advantages over body-worn
instruments in that they approximate the normal mode of receiving sound far more
closely. However, it is clear from the competition among hearing aid manufacturers
to make their ear-level instruments smaller and less visible that cosmetic rather
than audiological considerations have been the driving force in the trend towards
further miniaturization. Relatively large BTE hearing aids with superior signal
processing capabilities have consistently failed in the marketplace.
The miniaturization of hearing aids is approaching a limit: instruments that are
barely visible have already been developed, so there is little additional cosmetic
advantage to be gained short of a totally implantable hearing aid, which would be
completely invisible. There is an ongoing effort to develop implantable hearing
aids, and partially implantable instruments have been developed successfully, but
major bio-engineering problems need to be resolved if a totally implantable
hearing aid is ever to become a viable alternative to conventional ear-level
hearing aids. As a consequence, the major thrust in
hearing aid development today is changing from further miniaturization to developing
improved forms of signal processing. This change in emphasis comes at a fortuitous
time in that recent advances in digital technology provide the means for implementing
substantially more advanced forms of signal processing in modern hearing aids.
A New Beginning
Now that more advanced methods of signal processing are possible, what are the
options? One approach is to model the processing of signals in the impaired
auditory system and then, by reverse engineering, to process signals in the
hearing aid so as to compensate for the limitations introduced by the hearing
loss. For example, a normal ear acts as a compression amplifier, the outer hair
cells serving as a key link in the feedback loop. The outer hair cells are usually
damaged in a sensorineural hearing loss so that, for this type of loss, the
compressive function of the ear is reduced or eliminated. The large majority
of hearing aid users have a sensorineural hearing loss with reduced dynamic
range. As a consequence, amplification that is sufficient to make the weaker
sounds of speech comfortably loud will also make the stronger speech sounds
uncomfortably loud.
A common design philosophy is to provide compression amplification in the hearing
aid so as to compensate for the loss of compression in the impaired ear. The
problem, however, is that the compression characteristics of the ear are very
complex and the loss of compression varies substantially among individuals depending
on the nature and degree of the hearing loss. Factors to be considered in modeling
the compression characteristics of the impaired ear include the compression
ratio (the increase in input level, in dB, required to produce a 1 dB increase in output level),
the variation in compression ratio with frequency, the signal level at which
compression amplification begins (the compression threshold), and the time course
of compression (e.g., slow, rapid or instantaneous changes in gain with changes
in signal level).
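
To make these parameters concrete, the following is a minimal numerical sketch, in Python, of a static (steady-state) compression rule. The gain, compression threshold, and compression ratio are illustrative values, not a prescription for any particular fitting.

```python
# A minimal sketch of a static compression rule. The parameter values
# (gain, threshold, ratio) are illustrative, not a clinical prescription.

def output_level(input_db, gain_db=30.0, comp_threshold=50.0, comp_ratio=3.0):
    """Map an input level (dB SPL) to an output level (dB SPL).

    Below the compression threshold the amplifier is linear; above it,
    every comp_ratio dB of input increase yields only 1 dB of output
    increase.
    """
    if input_db <= comp_threshold:
        return input_db + gain_db                  # linear region
    excess_db = input_db - comp_threshold          # dB above threshold
    return comp_threshold + gain_db + excess_db / comp_ratio

for level in (40, 50, 60, 70, 80):
    print(level, "->", round(output_level(level), 1))
```

With a 3:1 ratio and a 50 dB threshold, the 30 dB input range from 50 to 80 dB is squeezed into just 10 dB at the output.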
Several different types of compression hearing aid have been developed in recent
years. The simplest of these provides compression only at high signal levels
so as not to overload the ear. In this form of compression amplification, known
as compression limiting, signals below the threshold of compression are amplified
without compression while signals above this threshold are compressed substantially.
In wide dynamic range (WDR) compression, the threshold of compression is low
and signals above this threshold are compressed, but only moderately. Multi-channel
compression is widely used in order to approximate the frequency-dependent compression
characteristics of the normal ear. The input signal is filtered into a set of
contiguous frequency bands, each band having a different set of compression
characteristics.
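
As a rough illustration of the multi-channel idea, the Python sketch below splits a signal into two bands and applies a different (instantaneous) compression ratio in each band before recombining. The crossover frequency and ratios are arbitrary choices for the example; a real instrument would use more bands and smoothed level estimates.

```python
# A simplified two-channel compression sketch: split the signal into low
# and high bands, compress each independently, then recombine. All
# parameter values here are illustrative.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 300 * t) + 0.01 * np.sin(2 * np.pi * 3000 * t)

def band_compress(band, ratio):
    """Instantaneous power-law compression of one band (no attack/release)."""
    env = np.abs(band) + 1e-12                   # crude instantaneous envelope
    ref = np.max(env)                            # compress relative to peak
    gain = (env / ref) ** (1.0 / ratio - 1.0)    # weaker samples get more gain
    return band * gain

low = sosfilt(butter(4, 1000, 'low', fs=fs, output='sos'), x)
high = sosfilt(butter(4, 1000, 'high', fs=fs, output='sos'), x)
y = band_compress(low, ratio=2.0) + band_compress(high, ratio=3.0)
```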
Compression amplification has been shown experimentally to be of benefit to
hearing aid users, but there is no consensus as to which form of compression
is most beneficial. Whereas single-channel compression is substantially better
than no compression, experimental evaluations of multi-channel compression systems
show only a modest improvement of two-channel compression over single-channel
compression and ambiguous results with respect to the use of many compression
channels.
Compression amplification generally works well in quiet, but there are problems
with compression amplification in noise. For example, a low level background
noise will be amplified with increased gain during a pause in speech in the
same way that a low level speech sound receives increased gain. One way of addressing
this problem is to fine-tune the time course of compression such that gain is
reduced very quickly at the start of a strong speech sound (the attack time)
but the gain continues unchanged for a short while after a reduction in speech
level (the release time). In addition, an appropriate choice of compression
threshold is needed such that low level background noise during a pause in the
speech signal is not amplified unduly.
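
The sketch below illustrates the attack/release idea: a one-pole envelope follower that rises quickly when the signal level increases and decays slowly when it falls, so that the compressor gain derived from this smoothed level does not pump up during brief pauses. The time constants shown are illustrative.

```python
# A sketch of attack/release level tracking for a compressor. A fast
# attack keeps loud onsets from overshooting; a slower release keeps the
# gain from rising during brief pauses and amplifying background noise.
# Time constants are illustrative, not prescriptive.

import numpy as np

def track_envelope(x, fs, attack_ms=5.0, release_ms=100.0):
    """One-pole envelope follower with separate attack/release constants."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for n, sample in enumerate(np.abs(np.asarray(x, dtype=float))):
        coeff = a_att if sample > level else a_rel   # rise fast, fall slowly
        level = coeff * level + (1.0 - coeff) * sample
        env[n] = level
    return env   # the compressor gain is then computed from this level
```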
Signal Processing for Noise Reduction
A second approach to the development of improved signal processing for hearing
aids is to enhance the quality of the speech signal prior to (or jointly with)
signal processing to compensate for the limitations of the impaired auditory
system. Hearing-aid users have great difficulty understanding speech in a noisy
and/or reverberant acoustic environment. The speech signal is robust because
it is highly redundant: if some speech cues are lost as a result of noise or
other distortions, it is still possible to understand what is said from the
remaining cues. However, if many speech cues are not available because of the
hearing loss, redundancy is reduced and the impoverished speech signal is no
longer robust. Under these conditions, the loss of additional cues due to noise
or other distortions will reduce speech intelligibility far more than would
be the case for a normally redundant speech signal. As a consequence, people
with hearing loss are particularly susceptible to the damaging effects of background
noise on speech intelligibility.
There have been numerous attempts at developing signal processing techniques
for improving the intelligibility of speech in noise. The early work in this
area focused on normal-hearing listeners, but has since expanded to include
listeners with hearing loss. The results obtained for the two groups are very
similar, but not identical. Subjects with hearing loss are more sensitive to
background noise and frequently prefer signal processing strategies that reduce
noise level even at the expense of a small reduction in speech intelligibility.
The classic problem is that of a single microphone picking up both speech and
noise. In multi-channel amplitude compression it is possible to attenuate frequency
bands in which the noise level is very high, thereby reducing the overall loudness
of the background noise. Hearing aid users typically prefer this lower noise
condition even if the processing is not perfect and some speech cues in less
noisy bands are also attenuated. In some cases, if a very intense noise is concentrated
in a narrow frequency region, there may be a small increase in intelligibility
due to a reduction in the spread of masking by the high level noise to neighboring
frequency regions where the speech level exceeds that of the noise.
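
A hypothetical numerical illustration of this band-attenuation strategy: each band is attenuated by the amount the estimated noise exceeds the speech-plus-noise level in that band, up to a cap. The band levels and the attenuation rule below are invented for the example.

```python
# A sketch of SNR-based band attenuation: bands in which the estimated
# noise dominates are turned down, up to a maximum attenuation. All
# numbers are invented for illustration.

import numpy as np

def band_gains_db(signal_db, noise_db, max_atten_db=15.0):
    """Return per-band gains (dB): 0 dB where speech dominates, negative
    where the noise estimate exceeds the band level."""
    snr_db = signal_db - noise_db
    atten = np.clip(-snr_db, 0.0, max_atten_db)   # attenuate poor-SNR bands
    return -atten

# Example: four bands, with an intense noise concentrated in band 2.
signal = np.array([60.0, 62.0, 55.0, 50.0])   # speech-plus-noise levels (dB)
noise = np.array([45.0, 46.0, 70.0, 44.0])    # noise-alone estimates (dB)
print(band_gains_db(signal, noise))           # only band 2 is pulled down
```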
An alternative approach that has yielded similar results is spectral
subtraction. The noise spectrum is estimated during pauses in the speech and
then subtracted from the speech-plus-noise spectrum when speech is present.
Although this technique is effective in reducing the background noise level,
speech intelligibility remains essentially unchanged, or is reduced to some extent
as a result of audible signal-processing distortions. Even with a small decrement
in intelligibility, listeners with hearing loss who are especially sensitive
to background noise have indicated a preference for the processed, slightly
distorted signals over the noisy unprocessed signals.
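
For readers curious about the mechanics, here is a bare-bones Python sketch of spectral subtraction using short-time Fourier analysis. It assumes the noise magnitude spectrum has already been estimated during a speech pause; the frame length and spectral floor are illustrative, and practical systems add considerable refinement to tame the "musical noise" artifacts of this basic scheme.

```python
# A bare-bones spectral subtraction sketch. noise_mag is the magnitude
# spectrum averaged over noise-only frames (estimated during a speech
# pause); the floor keeps the subtracted magnitudes from going negative.

import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, noise_mag, fs, floor=0.05):
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    cleaned = np.maximum(mag - noise_mag[:, None], floor * mag)
    _, y = istft(cleaned * np.exp(1j * phase), fs=fs, nperseg=512)
    return y   # reuses the noisy phase, as basic spectral subtraction does
```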
Although improved intelligibility in noise has proven to be elusive with a single
microphone input, substantial improvements have been obtained using two or more
microphones. Adaptive noise cancellation in which one microphone is placed close
to the noise source can yield improvements in speech-to-noise ratio of over
30 dB. Unfortunately, this arrangement is not practical for everyday hearing
aid use. If microphone placement is not a problem, a much more practical approach
is to simply place a microphone at the speech source. In this way, the speech
signal is picked up with negligible background noise or room reverberation and
there is no need for signal processing for noise reduction.
Convenient placement of a remote microphone is not usually possible in everyday
hearing-aid use. A more practical way of implementing multi-microphone noise
reduction with personal hearing aids is for the microphones to be worn by the
user. One approach is to mount a microphone on each ear and to use adaptive
noise cancellation to improve the speech-to-noise ratio. Although improvements
on the order of 6 to 10 dB are possible with this approach, there is the practical
problem of linking the two microphones so as to process the incoming signals
for noise reduction. A hard-wired link is inconvenient and a wireless link introduces
additional power supply and signal transmission problems. In addition, there
is the loss of directional cues provided by conventional binaural amplification.
Nevertheless, given the relatively large improvement in speech-to-noise ratio
that is possible and the concomitant large improvement in speech intelligibility,
this approach remains an option provided the practical problems noted above
can be resolved.
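
The core of such a two-microphone scheme is an adaptive filter. The sketch below uses the classic least-mean-squares (LMS) algorithm: the reference microphone, dominated by noise, drives a filter that predicts and then subtracts the noise reaching the primary microphone. The filter length and step size are illustrative and would need tuning in practice.

```python
# A sketch of two-microphone adaptive noise cancellation with LMS.
# primary: speech + noise; reference: mostly noise. Filter length and
# step size (mu) are illustrative choices.

import numpy as np

def lms_cancel(primary, reference, n_taps=64, mu=0.01):
    """Return the primary signal with the adaptively estimated noise removed."""
    w = np.zeros(n_taps)                      # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # recent reference samples
        noise_est = np.dot(w, x)              # predicted noise at primary mic
        e = primary[n] - noise_est            # error doubles as cleaned output
        w += 2 * mu * e * x                   # LMS weight update
        out[n] = e
    return out
```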
A directional microphone array at the input to a hearing aid can improve the
speech-to-noise ratio substantially, provided the speech and noise come from
different directions. The use of a directional input in hearing aids is not
new, but the combination of directionality and adaptive signal processing provides
powerful new ways of improving speech reception in a noisy environment. The
directionality of a microphone array, for example, can be controlled adaptively
so as to focus on the signal source when and if needed. A recent innovation
is a clip-on microphone array that can be connected to a conventional hearing
aid. It is about the size of a small pencil and provides substantially more
directionality than that possible with a single-microphone directional hearing
aid. This microphone array, however, is clearly visible and may be cosmetically
unacceptable to many hearing aid users. If, as a result of its superior performance
in a noisy environment, it meets with widespread acceptance then it will have
bucked the trend towards further miniaturization for cosmetic purposes.
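
A delay-and-sum beamformer is the simplest form of such an array. In the sketch below, the microphone signals are time-aligned for sound arriving from the look direction and then averaged, so that on-axis sound adds coherently while off-axis sound does not. The geometry and the integer-sample alignment are simplifications for illustration.

```python
# A sketch of delay-and-sum beamforming for a small endfire array.
# mic_positions_m gives each microphone's position along the look
# direction (larger = closer to the source); values are illustrative.

import numpy as np

def delay_and_sum(mic_signals, mic_positions_m, fs, c=343.0):
    """mic_signals: (n_mics, n_samples) array. Returns the beamformed signal."""
    pos = np.asarray(mic_positions_m, dtype=float)
    delays = (pos - pos.min()) / c            # align to the latest arrival
    shifts = np.round(delays * fs).astype(int)
    aligned = [np.concatenate([np.zeros(s), sig[:len(sig) - s]])
               for sig, s in zip(mic_signals, shifts)]
    return np.mean(aligned, axis=0)           # on-axis sound adds coherently
```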
Hearing Aids in a Wireless World
The telecoil is a useful alternative input to a hearing aid. It was developed
initially to pick up the magnetic field generated by a telephone handset, thereby
avoiding the noise and distortion introduced by the transduction from an electrical
signal in the telephone to an acoustic signal in the ear and the subsequent
transduction of the acoustic signal plus acoustic room noise to an electrical
signal by the hearing aid microphone. Because of these advantages, telecoils
are now widely used in hearing aids. The use of telecoils has also expanded
to include additional ways of linking hearing aids to other audio equipment.
A wire loop around a room can be used to establish a magnetic field that can
be picked up by the telecoil in a hearing aid. This method of signal transmission
is particularly useful for one-way transmissions in a room or other large acoustic
enclosure where acoustic signals are subject to significant amounts of reverberation
and background noise. In many museums, for example, magnetic loop systems are
used to provide pre-recorded commentaries on the exhibits without interference
from room noise or reverberation.
A small loop that fits around the neck can also be used to connect a hearing
aid to other audio equipment. Modulation of the current in the neck loop will
result in a modulated magnetic field that is picked up by the telecoil of the
hearing aid. The electrical output of any audio device can thus be connected
to a modulator that drives the neck loop so that the audio signal can be picked
up magnetically by the telecoil of the hearing aid without any acoustic noise
or distortions.
A remote microphone with a wireless link to the hearing aid and placed close
to the speech source is a very effective way of improving speech-to-noise ratio.
This technique is widely used in classrooms with deaf or hard-of-hearing students.
The teacher speaks directly into a microphone, the output of which is transmitted
by wireless means to the student's hearing aid. A magnetic induction loop (if
the classroom is wired for this purpose) or an FM radio link to a receiver-cum-hearing
aid worn by the student is typically used for this purpose. The signal received
by the student is free of classroom noise, reverberation, and other acoustic
distortions. Similar systems are also used in theatres, churches, lecture rooms
and other meeting places for the benefit of hard-of-hearing members of the audience.
In this case, an infrared carrier rather than FM radio is the preferred mode of signal
transmission.
Wireless technologies for digital signal transmission, recently introduced by
the computer and entertainment industries, provide novel and more efficient
ways of linking a hearing aid to other communication systems, as well as to
the internet, computers, and other digital devices. One possibility is that
of a special purpose cellular telephone that links directly to a hearing aid
and which also provides text messages and other visual information for people
with severe hearing losses. Another possibility is the wireless transmission
of control signals to check if a digital hearing aid is functioning properly
and, if necessary, to reprogram the instrument.
There is also a downside to the growing use of digital wireless technology.
A hearing aid in close proximity to a digital wireless device is subject to
electromagnetic (EM) interference. A cellular telephone, for example, transmits
a relatively powerful EM signal at a transmission frequency in the gigahertz
range. A short piece of wire only a few millimeters in length can serve effectively
as an antenna at these frequencies. Wiring or other conductive components in
a hearing aid are capable of picking up these very high frequency EM signals.
Although the carrier frequencies of the EM signals are extremely high, the digital
modulations of these signals are in the audio frequency range and are demodulated
by non-linearities in the hearing aid circuitry and then amplified. Since a
cellular telephone is normally held against the ear, the interference picked
up by a hearing aid mounted on that ear can be substantial.
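
This demodulation mechanism is easy to demonstrate in simulation. The sketch below pulses a tone on and off at 217 Hz, roughly the TDMA frame rate of a GSM phone, and passes it through a squaring nonlinearity followed by a low-pass filter; the carrier frequency is scaled far below real gigahertz frequencies purely so it can be represented at an audio sample rate.

```python
# A toy demonstration of how a nonlinearity demodulates a pulsed carrier.
# The 10 kHz "carrier" stands in for a gigahertz RF signal, which cannot
# be represented at an audio sample rate; the 217 Hz burst rate mirrors
# the TDMA frame rate of GSM phones.

import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
bursts = (np.sin(2 * np.pi * 217 * t) > 0).astype(float)   # on/off envelope
carrier = bursts * np.sin(2 * np.pi * 10000 * t)            # pulsed carrier
demodulated = carrier ** 2                                   # even-order nonlinearity
lp = butter(4, 2000, 'low', fs=fs, output='sos')
buzz = sosfilt(lp, demodulated)   # audible 217 Hz buzz and its harmonics
```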
The hearing aid and wireless telephone industries, at the prodding of the
FCC, are working on ways of reducing EM interference in hearing aids to an acceptable
level. A useful step in this direction is the development of a national standard
covering methods for measuring the strength of the EM signal generated by a
wireless telephone at the position of a hearing aid and the immunity of the
hearing aid to EM interference, together with a means for specifying acceptable
levels of interference.