Brian C.J. Moore - bcjm@cam.ac.uk
Department of Experimental Psychology
University of Cambridge, England
Neal Viemeister - nfv@umn.edu
Department of Psychology
University of Minnesota
William Yost - wyost@luc.edu
Parmly Hearing Institute
Loyola University Chicago
Hearing is one of the most important gateways to the mind, so it is not surprising
that the fascination with sound and hearing (audition) is almost as old as recorded
history. The study of hearing dates back at least to the time of Pythagoras
in ancient Greece; investigating an ancient single-string instrument known as
the monochord, he wrote of the relationship between the vibrating length of
the string and the pitch it produced. The study of hearing and the ear
has had a rich history ever since.
The pace of discovery accelerated over the past 100 years. Late in the 19th
century, scientific giants like Hermann von Helmholtz brought renewed attention
to the relationship between the physics of sound and auditory perception
(psychological acoustics or psychoacoustics) and the biological basis for auditory
perception (physiological acoustics). It was well known then that sound had
the physical properties of frequency, level, and time. While a change in frequency
is perceived as a change in pitch and a change in level as a change in loudness,
these perceptual attributes are related to the physical properties of sounds
in complicated ways. The psychoacousticians of the late 19th and early 20th
centuries worked out many of the psychophysical relationships among frequency,
level, loudness, and pitch. Several of these relationships found their way into
the products we use today, such as the loudness control on our radios or stereo
systems. This type of psychoacoustics research continues, and has a profound
influence on the sounds of consumer products ranging from cars to electric shavers.
Auditory scientists have also studied the ability of listeners to detect sounds,
both in the absence and presence of other sounds, and to discriminate among
different sounds. Under the leadership of Harvey Fletcher, scientists at Bell
Laboratories in the 1920s and 30s made many discoveries that not only resulted
in present-day telephone and telecommunication systems, but also provided
a wealth of knowledge about hearing and speech communication.
The measurement of listeners' ability to detect tonal sounds of different
frequencies (the audiogram), which began at Bell Labs, became the standard way
to measure hearing loss. Hearing aids may then be fitted so that sound is
amplified in the frequency regions where the patient has a loss.
In the 1950s and 60s scientists such as Ira Hirsh and Hallowell Davis at the
Central Institute for the Deaf (CID) in St. Louis provided many valuable measurements
of hearing, some of which led to tests that allow audiologists to determine
the possible biological site and/or etiology of a patient's hearing loss.
A new direction in psychological acoustics became important following World
War II, as sonar and radar were refined. Scientists such as David Green and
John Swets at the University of Michigan developed the Theory of Signal Detection
(TSD) in the late 1950s and early 60s to account for decisions one makes in
tasks such as detecting whether a sound has been presented. TSD is a powerful
theory accounting for many aspects of decision making; its applications include,
but extend far beyond, auditory science. Today TSD and its many extensions provide
useful means for analyzing the detection of all sorts of "signals,"
ranging from a neural impulse to x-ray pictures of tumors to warning signals
to decisions made by groups and individuals.
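To see the core of TSD in action, here is a minimal sketch, in Python, of its
standard equal-variance Gaussian model: the listener's sensitivity (d') and
response bias (the criterion c) are estimated from hit and false-alarm rates.
The 84% and 16% rates are hypothetical example data, not figures from the
original studies.

    from statistics import NormalDist

    def dprime_and_criterion(hit_rate: float, fa_rate: float):
        """Equal-variance Gaussian signal detection model.

        d' = z(H) - z(F) is the separation, in standard deviations, between
        the internal-evidence distributions for "noise alone" and "signal
        plus noise"; c = -(z(H) + z(F)) / 2 is the response criterion
        (0 = unbiased, positive = reluctant to say "yes").
        """
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

    # Hypothetical tone-detection data: 84% hits, 16% false alarms.
    d, c = dprime_and_criterion(0.84, 0.16)
    print(f"d' = {d:.2f}, criterion c = {c:.2f}")  # d' ~ 1.99, c ~ 0.00

Separating how detectable a signal is (d') from how willing the listener is to
respond "yes" (c) is precisely the distinction that made TSD so broadly applicable.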
A major focus of auditory research during the past century has been on the exquisite
frequency selectivity of the auditory system. This selectivity enables us to
"hear out" or separate the individual frequencies that make up complex
sounds such as speech and music. Without frequency selectivity, speech would
be little more than Morse code, music would be limited to drum beats, and we
would be unaware of many sound-producing objects in our environment. The story
of auditory frequency processing has unfolded over several centuries, but a
turning point occurred with the Nobel Prize-winning research (awarded in 1961)
of Georg von Békésy. He showed that the biomechanical properties
of the inner ear structures (the cochlea) cause them to vibrate in response
to sound in a particular manner that depends specifically upon the frequency
of the sound; each place within the cochlea is "tuned" to respond
to a limited range of frequencies. Special auditory sensory cells, called hair
cells, respond with signals that reflect the vibration at specific places in
the cochlea. The hair cell signals excite the nerve fibers in the auditory nerve
bundle, which in turn carry information to the brain. The result of the frequency-specific
pattern of cochlear vibration is that each auditory nerve fiber carries information
to the brain about a narrow region of frequency. Through the work of Békésy
and many other scientists at places like Harvard, MIT, the University of Wisconsin,
and Johns Hopkins, this story of biomechanical and neural processing of sound
and its frequency content became a well-accepted theory.
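The place-to-frequency relationship that emerged from this work is often
summarized by Greenwood's map of the human cochlea; the sketch below uses the
commonly cited parameter values (A = 165.4, a = 2.1, k = 0.88) purely as an
illustration of how each place along the cochlea corresponds to its own narrow
band of frequencies.

    def greenwood_frequency(x: float) -> float:
        """Greenwood's place-frequency map for the human cochlea.

        x is position along the basilar membrane as a proportion of its
        length, from the apex (x = 0) to the base (x = 1). Returns the
        characteristic frequency (Hz) to which that place is tuned.
        """
        A, a, k = 165.4, 2.1, 0.88  # commonly cited human values
        return A * (10 ** (a * x) - k)

    # Each place responds best to frequencies near these values:
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"position {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")
    # apex ~20 Hz ... base ~20,700 Hz, spanning the range of human hearing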
The theory of cochlear sound processing underwent radical developments starting
in the 1970s. One discovery is that cochlear vibration is highly "nonlinear":
a 10-fold increase in sound intensity, for example, produces an increase in
cochlear vibration that is much less than 10-fold (a numerical sketch of this
compression follows this paragraph). This is the result of a cochlear process
in which weak sounds are amplified and strong sounds are not; it enables us to
hear very weak sounds and may result from muscle-like movements of the hair
cells. The surprising discovery that hair cells can change their length in
response to stimulation has considerably altered our understanding of how the
inner ear processes sound. Another recent finding, by David Kemp of University College London, is
that the ear is not only a passive detector of sound but also an active producer
of it: When a sound is presented to the ear, a faint echo of that sound can
be recorded in the outer ear canal. These echoes or "otoacoustic emissions"
are generated in the cochlea and are probably related to hair cell movements.
Since otoacoustic emissions only occur when the hair cells are functioning normally,
and since normal hair cell function is required for normal hearing, measurement
of otoacoustic emissions is now used as a hearing test. Because the test is
quick and inexpensive, it is now often used to screen newborn infants for
hearing loss.
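To put rough numbers on the compressive nonlinearity described above, here is
a minimal sketch assuming a simple power-law compression; the exponent of 0.2
is a commonly quoted approximation for basilar-membrane responses at
moderate-to-high levels, used here purely for illustration.

    def compressive_response(intensity: float, exponent: float = 0.2) -> float:
        """Illustrative power-law compression: output = intensity ** exponent.

        The exponent (~0.2) is an assumed, commonly quoted value; the key
        point is only that it is well below 1, so output grows far more
        slowly than input.
        """
        return intensity ** exponent

    # A 10-fold step in sound intensity...
    for i in (1.0, 10.0):
        print(f"intensity {i:>4}: compressed response {compressive_response(i):.2f}")
    # ...raises the compressed response by only 10 ** 0.2 ~ 1.58-fold,
    # which is how the cochlea squeezes a vast range of sound intensities
    # into the limited operating range of the hair cells.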
Following the pioneering work of Lord Rayleigh in the early 1900s, many auditory
scientists have investigated how the auditory system uses the sound arriving
at the ears to "compute" (via neuronal circuits) the location of the
sound source. Recent research in sound localization has resulted in electronic
systems that deliver sounds over headphones such that the perceived sounds
appear to come from sources external to the listener, just as they would in
the real world. These virtual auditory reality systems are used to assist
pilots and, in the audio industry, to create listening experiences in the
living room or over headphones that approach being in a concert hall.
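To give a sense of the binaural cues involved, here is a minimal sketch of
Woodworth's classic spherical-head approximation for the interaural time
difference (ITD); the 8.75 cm head radius and the spherical-head idealization
are simplifying assumptions.

    import math

    def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                      speed_of_sound: float = 343.0) -> float:
        """Woodworth's spherical-head approximation of the ITD, in seconds,
        for a distant source at the given azimuth (0 deg = straight ahead,
        90 deg = directly to one side)."""
        theta = math.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:>2} deg -> ITD ~ {woodworth_itd(az) * 1e6:.0f} us")
    # 0 deg -> 0 us; 90 deg -> ~660 us: the tiny arrival-time differences
    # that neural circuits use to "compute" direction, and that virtual
    # auditory systems must reproduce to externalize sound over headphones.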
These are but a few of the many advances made over the past one hundred
years in understanding how we hear. The Acoustical Society of America has been
an international leader in stimulating and disseminating these advances during
the past 75 years. The next century promises to be one in which many more of
nature's hearing secrets are revealed.