4aSC2 – Effects of language and music experience on speech perception – T. Christina Zhao, Patricia K. Kuhl

Effects of language and music experience on speech perception

T. Christina Zhao — zhaotc@uw.edu
Patricia K. Kuhl — pkkuhl@uw.edu
Institute for Learning & Brain Sciences
University of Washington, BOX 357988
Seattle, WA, 98195

Popular version of paper 4aSC2, “Top-down linguistic categories dominate over bottom-up acoustics in lexical tone processing”
Presented Thursday morning, May 21st, 2015, 8:00 AM, Ballroom 2
169th ASA Meeting, Pittsburgh

Speech perception involves constant interplay between top-down and bottom-up processing. For example, to distinguish phonemes (e.g., ‘b’ from ‘p’), the listener must accurately process the acoustical information in the speech signal (i.e., the bottom-up strategy) and efficiently assign these sounds to a category (i.e., the top-down strategy). Listeners’ performance in speech perception tasks is influenced by their experience with either processing strategy. Here, we use lexical tone processing as a window to examine how extensive experience with both strategies influences speech perception.

Lexical tones are contrastive pitch contour patterns at the word level. That is, a small difference in the pitch contour can result in a different word meaning. Native speakers of a tonal language thus have extensive experience in using the top-down strategy to assign highly variable pitch contours to lexical tone categories. This top-down influence is reflected by reduced sensitivity to acoustic differences within a phonemic category compared to across categories (Halle, Chang, & Best, 2004). On the other hand, individuals with extensive music training early in life exhibit enhanced sensitivity to pitch differences not only in music but also in speech, reflecting stronger bottom-up influence. Such bottom-up influence is reflected by enhanced sensitivity in detecting differences between lexical tones when the listeners are non-tonal language speakers (Wong, Skoe, Russo, Dees, & Kraus, 2007).
How does extensive experience in both strategies influence lexical tone processing? To address this question, native Mandarin speakers with extensive music training (N=17) completed a music pitch discrimination task and a lexical tone discrimination task. We compared their performance with individuals with extensive experience in only one of the processing strategies (i.e. Mandarin nonmusicians (N=20) and English musicians (N=20), data from Zhao & Kuhl (2015)).

Despite their enhanced performance in the music pitch discrimination task, the Mandarin musicians’ performance in the lexical tone discrimination task was similar to that of the Mandarin nonmusicians, and different from the English musicians’ performance (Fig. 1, ‘Sensitivity across lexical tone continuum by group’). That is, they exhibited reduced sensitivity within phonemic categories (i.e., on either end of the line) compared to across categories (i.e., the middle of the line), and their overall performance was lower than the English musicians’. This result strongly suggests a dominant effect of top-down influence in processing lexical tones. Yet further analyses revealed that Mandarin musicians and Mandarin nonmusicians may still rely on different underlying mechanisms when performing the lexical tone discrimination task. In the Mandarin musicians, music pitch discrimination scores correlated with lexical tone discrimination scores, suggesting a contribution of the bottom-up strategy to their lexical tone discrimination performance (Fig. 2, ‘Music pitch and lexical tone discrimination’, purple). This relation is similar to the English musicians (Fig. 2, peach) but very different from the Mandarin nonmusicians (Fig. 2, yellow). Specifically, for Mandarin nonmusicians, music pitch discrimination scores did not correlate with lexical tone discrimination scores, suggesting independent processes.
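The group comparison above rests on correlating each listener’s music pitch discrimination score with their lexical tone discrimination score. A minimal sketch of that correlation analysis in Python, using invented scores rather than the study’s data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two score arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical discrimination scores (d'-like values), NOT the study's data.
music_pitch = [1.2, 2.0, 2.5, 3.1, 3.6]
lexical_tone = [0.9, 1.8, 2.2, 3.0, 3.3]
r = pearson_r(music_pitch, lexical_tone)
```

A strongly positive r, as in the two musician groups, is consistent with a shared bottom-up contribution; an r near zero, as in the Mandarin nonmusicians, suggests independent processes.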

Tags: linguistics, speech, music, speech perception

Halle, P. A., Chang, Y. C., & Best, C. T. (2004). Identification and discrimination of Mandarin Chinese tones by Mandarin Chinese vs. French listeners. Journal of Phonetics, 32(3), 395-421. doi: 10.1016/s0095-4470(03)00016-0
Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat. Neurosci., 10(4), 420-422. doi: 10.1038/nn1872
Zhao, T. C., & Kuhl, P. K. (2015). Effect of musical experience on learning lexical tone categories. The Journal of the Acoustical Society of America, 137(3), 1452-1463. doi: 10.1121/1.4913457

4aPP2 – Localizing Sound Sources when the Listener Moves: Vision Required – William A. Yost

Localizing Sound Sources when the Listener Moves: Vision Required
William A. Yost – william.yost@asu.edu, paper presenter
Xuan Zhong – xuan.zhong@asu.edu
Speech and Hearing Science
Arizona State University
P.O. Box 870102
Tempe, AZ 85287

Popular version of paper 4aPP2; related papers 1aPPa1, 1pPP7, 1pPP17, 3aPP4
Presented Monday morning, May 18, 2015
169th ASA Meeting, Pittsburgh

When an object (sound source) produces sound, that sound can be used to locate the spatial position of the sound source. Since sound has no physical attributes related to space and the auditory receptors do not respond according to where the sound comes from, the brain makes computations based on the sound’s interaction with the listener’s head. These computations provide information about sound source location. For instance, sound from a source opposite the right ear will reach that ear slightly before reaching the left ear since the source is closer to the right ear. This slight difference in arrival time produces an interaural (between the ears) time difference (ITD), which is computed in neural circuits in the auditory brainstem as one cue used for sound source localization (i.e., small ITDs indicate that the sound source is near the front and large ITDs that the sound source is off to one side).
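The ITD cue described above can be approximated from simple head geometry. A sketch using the classic Woodworth spherical-head formula, with an assumed head radius of about 8.75 cm and speed of sound of 343 m/s (illustrative values, not measurements from this study):

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a distant source,
    Woodworth spherical-head model: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows from 0 straight ahead to a maximum at the side (90 degrees).
itd_front = itd_woodworth(0)   # source straight ahead -> 0 s
itd_side = itd_woodworth(90)   # source opposite one ear -> roughly 0.66 ms
```

As the model shows, small ITDs correspond to sources near the front and large ITDs to sources off to one side, exactly the cue described above.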
We are investigating sound source localization when the listener and/or the source move. See Figure 1 for a picture of the laboratory, an echo-reduced room with 36 loudspeakers on a 5-foot radius sphere and a computer-controlled chair for rotating listeners while they listen to sounds presented from the loudspeakers.

Conditions in which sounds and listeners move present a challenge for the auditory system in processing auditory spatial cues for sound source localization. When either the listener or the source moves, the ITDs change. So when the listener moves, the ITD changes, signaling that the source moved even if it didn’t. In order to prevent this type of confusion about the location of sound sources, the brain needs another piece of information. We have shown that in addition to computing auditory spatial cues like the ITD, the brain also needs information about the location of the listener. Without both types of information, our experiments indicate that major errors occur in locating sound sources. When vision is used to provide information about the location of the listener, accurate sound source localization occurs.

Thus, sound source localization requires not only auditory spatial cues such as the ITD but also information, provided by systems like vision, indicating the listener’s spatial location. This has been an underappreciated aspect of sound source localization. Additional research will be needed to more fully understand how these two forms of essential information are combined and used to locate sound sources. Improving sound source localization accuracy when listeners and/or sources move has many practical applications, ranging from aiding people with hearing impairment to improving robots’ abilities to use sound to locate objects (e.g., a person in a fire). [This research was supported by an Air Force Office of Scientific Research (AFOSR) grant.]

Figure 1. The Spatial Hearing Laboratory at ASU with sound absorbing materials on all walls, ceiling, and floor; 36 loudspeakers on a 5-foot radius sphere; and a computer-controlled rotating chair.

Experimental demonstration of under ice acoustic communication

Winter in Harbin is quite cold, with average January temperatures of -20 °C to -30 °C. An under ice acoustic communication experiment was conducted in the Songhua River, Harbin, China in January 2015. The Songhua River is a river in Northeast China, and is the largest tributary of the Heilong River, flowing about 1,434 kilometers from the Changbai Mountains through Jilin and Heilongjiang provinces. In winter conditions, the Songhua River is covered with about 0.5 m of ice, which provides a natural environment for under ice acoustic experiments.

Figure 1. Songhua River in winter
A working environment of minus 20 to 30 degrees poses a great challenge for under ice experiments. One of our initial concerns was how to quickly build a temporary experimental base in cold conditions. The experimental base for the transmitter is located at a wharf that provides enough power and heat.
Figure 2. Temporary experimental base

Figure 2 shows the temporary experimental base for the receiver, which can easily be assembled by four people in roughly 5 minutes.

Figure 3. The inside of experimental base
Figure 3 shows the inside of the experimental base. Insulation blankets and plastic plates were placed on the ice to avoid prolonged contact for both the experimenters and the instruments, as most of the instruments won’t function at minus 20 degrees. Our second concern was to make sure that all of the instruments stayed at the right temperature while receiving signals. We found that burning briquettes for heating was a good solution, as this keeps the temperature inside the experimental base above zero degrees (see Figure 4).


Figure 5. Under ice channel based on real data
The under ice channel is quite stable. Figure 5 shows the measured under ice channel impulse response (CIR) based on real data. Figure 6 shows the CIR of the under ice channel at different depths; it can be seen that the channels closer to the ice are simpler.

Figure 6. Under ice channel at different depths
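A channel impulse response like the ones in Figures 5 and 6 is commonly estimated by cross-correlating the received signal with a known transmitted probe sequence. A minimal Python sketch on a synthetic two-path channel (the probe length, tap delays, and amplitudes are invented for illustration, not taken from the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
probe = rng.choice([-1.0, 1.0], size=511)  # pseudo-random probe sequence

# Synthetic two-path under-ice channel: a direct arrival plus a weaker
# reflection delayed by 40 samples.
h_true = np.zeros(100)
h_true[10] = 1.0
h_true[50] = 0.4
received = np.convolve(probe, h_true)

# Cross-correlate the received signal with the probe to estimate the CIR;
# the probe's sharp autocorrelation turns each path into a distinct peak.
corr = np.correlate(received, probe, mode="full")
cir = corr[len(probe) - 1:len(probe) - 1 + len(h_true)] / len(probe)

peaks = np.argsort(cir)[-2:]  # indices of the two strongest taps
```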
A series of under ice acoustic communication tests, including spread spectrum, OFDM, Pattern Time Delay Shift Coding (PDS), and CDMA, was conducted. All of the under ice acoustic communication tests achieved low bit error rate communication at a 1 km range with different receiver depths. Under ice CDMA multiuser acoustic communication showed that as many as 12 users can be supported simultaneously with as few as five receivers in under ice channels, using the time reversal mirror combined with differential correlation detectors.
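The spread spectrum tests mentioned above rest on a simple idea: each data bit is multiplied by a fast pseudo-noise (PN) chip sequence, and the receiver recovers the bit by correlating against the same sequence, which suppresses noise by the processing gain. A toy baseband sketch (the chip count, bits, and noise level are illustrative, not the experiment’s parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=63)  # PN chip sequence shared by tx and rx
bits = np.array([1, -1, 1, 1, -1])     # data bits as +/-1

# Transmitter: spread each bit across the 63 chips.
tx = np.concatenate([b * pn for b in bits])

# Channel: add noise; spreading gives ~10*log10(63) ~ 18 dB processing gain.
rx = tx + rng.normal(0, 1.0, size=tx.size)

# Receiver: correlate each 63-chip block against the PN sequence and
# take the sign of the result to recover the bit.
decoded = np.array([np.sign(rx[i * 63:(i + 1) * 63] @ pn)
                    for i in range(len(bits))])
```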

2aAB7 – Nocturnal peace at a Conservation Center for Species Survival?- Suzi Wiseman

The “Sounds of Silence” at a Wildlife Conservation Center

Suzi Wiseman – sw1210txstate@gmail.com
Texas State University-San Marcos
Environmental Geography
601 University Drive, San Marcos, Texas 78666
Preston S. Wilson – wilsonps@austin.utexas.edu
University of Texas at Austin
Mechanical Engineering Department
1 University Station C2200
Austin, TX 78712

Popular version of paper 2aAB7, “Nocturnal peace at a Conservation Center for Species Survival?”
Presented Tuesday morning, May 19, 2015, at 10:15 AM
169th ASA Meeting, Pittsburgh

The acoustic environment is essential to wildlife, providing vital information about prey and predators and the activities of other living creatures (biophonic information) (Wilson, 1984), about changing weather conditions and occasionally geophysical movement (geophonic), and about human activities (anthrophonic) (Krause 1987). Small sounds can be as critical as loud, depending on the species trying to listen. Some hear infrasonically (too low for humans, generally considered below 20 Hz), others ultrasonically (too high, above 20 kHz). Biophonic soundscapes frequently exhibit temporal and seasonal patterns, for example a dawn “chorus”, mating and nurturing calls, diurnal and crepuscular events.
Some people are attracted to large parks due in part to their “peace and quiet” (McKenna 2013). But even in a desert, a snake may be heard to slither or wind may sigh between rocks. Does silence in fact exist? Finding truly quiet places, whether in nature or the built environment, is increasingly difficult. Even in our anechoic chamber, which was purpose-built to be extremely quiet and is located in the heart of our now very crowded and busy urban campus, we became aware of infrasound that penetrated, possibly from nearby construction equipment or from heavy traffic that was not nearly as common when the chamber was first built more than 30 years ago. Is anywhere that contains life actually silent?


Figure 1: In the top window, the waveform in blue indicates the amplitude over time each time a pulse of sound was broadcast in the anechoic chamber, as shown in the spectrogram in the lower window, where frequency is shown over the same time span and color indicates the intensity of the sound (red being more intense than blue). Considerable very low frequency sound was evident; it can be seen between the pulses in the waveform (which should be silent) and throughout the bottom of the spectrogram. The blue dotted vertical lines show harmonics that were generated within the loudspeaker system. (Measurements shown in this study were made with a Roland R26 recorder and Earthworks M23 measurement microphones with frequency response 9 Hz to 23 kHz, +1/-3 dB.)
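A waveform-plus-spectrogram view like the one described in Figure 1 comes from a short-time Fourier transform. A minimal sketch using scipy on a synthetic pulse train standing in for the chamber recording (the tone frequency and pulse timing are invented):

```python
import numpy as np
from scipy import signal

fs = 8000                       # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Synthetic stimulus: 1 kHz pulses with silent gaps between them.
pulse = (np.sin(2 * np.pi * 1000 * t) * ((t % 0.5) < 0.1)).astype(float)

# Spectrogram: intensity per time/frequency bin, like the lower window
# of Figure 1 (rows are frequencies, columns are time frames).
f, times, Sxx = signal.spectrogram(pulse, fs=fs, nperseg=256)

peak_bin = f[Sxx.mean(axis=1).argmax()]  # dominant frequency overall
```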

As human populations increase, so do all forms of anthrophonic noise, often masking the sounds of nature. Does this noise cease at night, especially well away from major cities when humans are not close by? This study analyzed the soundscape continuously recorded beside the southern white rhinoceros (Ceratotherium simum simum) enclosure at Fossil Rim Wildlife Center, about 75 miles southwest of Dallas, Texas, for a week during Fall 2013, to determine the quietest period each night and the acoustic environment in which these periods tended to occur. Rhinos hear infrasound, so the soundscape was measured from 0.1 Hz to 22,050 Hz. Since frequencies below 9 Hz still need to be confirmed, however, these lowest frequencies were removed from this portion of the study.
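Removing the unconfirmed frequencies below 9 Hz from an analysis can be done with a high-pass filter. A sketch using a Butterworth design in scipy (the filter order and test signal are illustrative, not the study’s actual processing chain):

```python
import numpy as np
from scipy import signal

fs = 44100  # sample rate (Hz)

# 4th-order Butterworth high-pass with a 9 Hz cutoff.
sos = signal.butter(4, 9, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 2.0, 1 / fs)
# Test signal: a 2 Hz infrasonic drift plus a small 100 Hz tone.
x = np.sin(2 * np.pi * 2 * t) + 0.1 * np.sin(2 * np.pi * 100 * t)

# Zero-phase filtering: the 2 Hz component is removed, 100 Hz passes.
y = signal.sosfiltfilt(sos, x)
```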

Figure 2: Part of the white rhinoceros enclosure of Fossil Rim Wildlife Center, looking towards the tree line where the central recorder was placed

Figure 3 illustrates the rhythm of a day at Fossil Rim as shown by the sound level of a fairly typical 24 hours starting from midnight, apart from the evening storm. As often occurred, the quietest period was between midnight and the dawn chorus.

Figure 3: The sound pressure level throughout a relatively quiet day at the rhino enclosure. The loudest sounds were normally vehicles, machinery, equipment, aircraft, and crows. The 9pm weather front was a major contrast.
While there were times during the day when birds and insects were their most active and anthrophonic noise was not heard above them, it was discovered that all quiet periods contained anthrophonic noise, even at night. There was generally a low frequency, low amplitude hum – at times just steady and machine-like and not yet identified – and depending on wind direction, often short hums from traffic on a state highway over a mile away. Quiet periods ranged from a few minutes to almost an hour, usually eventually broken by anthrophonic sounds such as vehicles on a nearby county road, high aircraft, or dogs barking on neighboring ranches. However there was also a strong and informative biophonic presence – from insects to nocturnal birds and wildlife such as coyotes, to sounds made by the rhinos themselves and by other species at Fossil Rim. Geophonic intrusions were generally wind, thunder or rain, possibly hail.
The quietest quarter hour was about 4am on the Friday depicted in figure 3, but even then the absolute sound pressure level averaged 44.7 decibels, about the level of a quiet home or library. The wind was from the south southeast around 10 to 14 mph during this time. Audio clip 1 is the sound of this quiet period.
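Averaging sound pressure levels, as in the figure above, calls for energy averaging rather than a simple arithmetic mean, because decibels are logarithmic. A sketch of the equivalent continuous level (Leq) computation, using invented per-minute levels rather than the Fossil Rim measurements:

```python
import math

def leq(levels_db):
    """Energy-average a list of sound pressure levels (dB):
    Leq = 10 * log10(mean(10 ** (L / 10)))."""
    mean_power = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

# Illustrative minute-by-minute levels for a quiet period (dB SPL).
quiet_period = [44, 45, 43, 46, 44]
avg = leq(quiet_period)

# One loud event dominates an energy average:
with_truck = leq([44, 45, 43, 70, 44])  # jumps to roughly 63 dB, not ~49
```

This is why a single passing vehicle can raise the average level of an otherwise quiet period so sharply.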


Figure 4: The quietest quarter hour recorded at Fossil Rim appears between the vertical red selection lines, with an average absolute sound pressure level of 44.5 decibels. The fairly constant waveform shown in blue in the top graph and the low frequency noise at the bottom of the spectrogram seemed to comprise the machine-like hum, the distant traffic hum which varies over time, and insects. The blue flashes between 3 and 5 kHz were mainly bird calls.
By contrast, the loudest of the “quietest nightly periods” was less than six minutes long, around 5am on Wednesday 23rd October, as shown between the vertical red lines in figure 5. Despite being the quietest period that night, it averaged a sound pressure level of 55.5 decibels, which is roughly the equivalent of a spoken conversation.


Figure 5: The loudest “quietest period each night” reveals broadband machine noise (possibly road work equipment somewhere in the district?) which continued for some hours and appears as the blue flecks across all frequencies. The horizontal blue line at 16.5 kHz is characteristic of bats. All species identification is being left to biologists for confirmation. Audio clip 2 is this selection.

On either side of the “quiet” minutes were short bursts of low frequency but intense truck and/or other machine noise, indicated in red, some of which partially covered a clang when a rhino hit its fence with its horn, along with distant barks, howls, moos and other vocalizations. The noise may have masked the extremely low frequency hums and insects that had been apparent on other nights, or may have caused the insects to cease their activity. The strata below 2.5 kHz appear more ragged, indicating they were not being produced in as uniform a way as on quieter nights, and they are partially covered by the blue flecks of machine noise. However, the strata at 5.5, 8.5, 11 and especially at 16.5 kHz that appeared on other nights are still evident. They appear to be birds, insects and bats. Audio clip 3 contains the sounds that broke this quiet period.

At no point during the entire week was anything closely approaching “silence” apparent. Krause reports that healthy natural soundscapes comprise a myriad of biophony, and indeed the ecological health of a region can be measured by its diverse voices (Krause 1987). However if these voices are too frequently masked or deterred by anthrophonic noise, animals may be altered behaviorally and physiologically (Pater et al, 2009), as the World Health Organization reports to be the case with humans who are exposed to chronic noise (WHO 1999). Despite some level of anthrophonic noise at most times, Fossil Rim seems to provide a healthy acoustic baseline since so many endangered species proliferate there.
Understanding soundscapes and later investigating any acoustic parameters that may correlate with animals’ behavior and/or physiological responses may lead us to think anew about the environments in which we hold animals captive in conservation, agricultural and even domestic environments, and about wildlife in parts of the world that are being increasingly encroached upon by man.

tags: animals, conservation, soundscape, silence, environment

Krause, B. 1987. The niche hypothesis. Whole Earth Review. Wild Sanctuary.
———. 1987. Bio-acoustics: Habitat ambience & ecological balance. Whole Earth Review. Wild Sanctuary.
McKenna, M. F., et al. 2013. Patterns in bioacoustic activity observed in US National Parks. The Journal of the Acoustical Society of America 134(5):4175.
Pater, L. L., T. G. Grubb, and D. K. Delaney. 2009. Recommendations for improved assessment of noise impacts on wildlife. The Journal of Wildlife Management 73:788-795.
Wilson, E. O. 1984. Biophilia. Harvard University Press.
World Health Organization. 1999. Guidelines for community noise. WHO Expert Taskforce Meeting, London.


1pABa2 – Hearing sensitivity in the Greater Prairie Chicken – Edward J. Walsh, Cara E. Whalen, Larkin A. Powell, Mary Bomberger Brown, JoAnn McGee

Edward J. Walsh – Edward.Walsh@boystown.org
JoAnn McGee – JoAnn.McGee@boystown.org
Boys Town National Research Hospital
555 North 30th St.
Omaha, NE 68131

Cara E. Whalen – carawhalen@gmail.com
Larkin A. Powell – lpowell3@unl.edu
Mary Bomberger Brown – mbrown9@unl.edu
School of Natural Resources
University of Nebraska-Lincoln
Lincoln, NE 68583

Popular version of paper 1pABa2
Presented Monday afternoon, May 18, 2015
169th ASA Meeting, Pittsburgh

The Sand Hills ecoregion of central Nebraska is distinguished by rolling grass-stabilized sand dunes that rise up gently from the Ogallala aquifer. The aquifer itself is the source of widely scattered shallow lakes and marshes, some permanent and others that come and go with the seasons.
However, the sheer magnificence of this prairie isn’t its only distinguishing feature. Early on frigid, wind-swept, late-winter mornings, a low pitched hum, interrupted by the occasional dawn song of a Western Meadowlark (Sturnella neglecta) and other songbirds inhabiting the region, is virtually impossible to ignore.

The hum is the chorus of the Greater Prairie Chicken (Tympanuchus cupido pinnatus), the communal expression of the courtship song of lekking male birds performing an elaborate testosterone-driven, foot-pounding ballet that will decide which males are selected to pass genes to the next generation; the word “lek” is the name of the so-called “booming” or courtship grounds where the birds perform their wooing displays.
While the birds cackle, whine, and whoop to defend territories and attract mates, it is the loud “booming” call, an integral component of the courtship display that attracts the interest of the bioacoustician – and the female prairie chicken.

The “boom” is an utterance that is carried long distances over the rolling grasslands and wetlands by a narrow band of frequencies ranging from roughly 270 to 325 cycles per second (Whalen et al., 2014). It lasts about 1.9 seconds and is repeated frequently throughout the morning courtship ritual.
Usually, the display begins with a brief but energetic bout of foot stamping or dancing, which is followed by an audible tail flap that gives way to the “boom” itself.

For the more acoustically and technologically inclined, Figure 1 shows a graphic representation of the pressure wave of a “boom,” along with its spectrogram (a visual representation of how the frequency content of the call changes during the course of the bout) and graphs depicting precisely where in the spectral domain the bulk of the acoustic power is carried. The “boom” is clearly dominated by very low frequencies centered on approximately 300 Hz (cycles per second).
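The dominant frequency of a call like the “boom” can be estimated directly from its power spectrum. A sketch using a synthetic 300 Hz, 1.9-second tone standing in for a real recording (the noise level and sample rate are invented):

```python
import numpy as np

fs = 4000                          # sample rate (Hz)
t = np.arange(0, 1.9, 1 / fs)      # ~1.9 s, the boom's duration

# Synthetic "boom": narrowband energy near 300 Hz plus background noise.
rng = np.random.default_rng(2)
boom = np.sin(2 * np.pi * 300 * t) + 0.1 * rng.normal(size=t.size)

# Power spectrum; the strongest bin gives the dominant frequency.
spectrum = np.abs(np.fft.rfft(boom)) ** 2
freqs = np.fft.rfftfreq(boom.size, 1 / fs)
peak_hz = freqs[spectrum.argmax()]
```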

Vocalization is, of course, only one side of the communication equation. Knowing what these stunning birds can hear is on the other.
We are interested in what Greater Prairie Chickens can hear because wind energy developments are encroaching onto their habitat, a condition that makes us question whether noise generated by wind turbines might have the capacity to mask vocal output and complicate communication between “booming” males and attending females.
Step one in addressing this question is to determine what sounds the birds are capable of hearing – what their active auditory space looks like. The gold standard of hearing tests is behavioral in nature – you know, the ‘raise your hand or press this button if you can hear this sound’ kind of testing. However, this method isn’t very practical in a field setting; you can’t easily ask a Greater Prairie Chicken to raise its hand, or in this case its wing, when it hears the target sound.
To solve this problem, we turn to electrophysiology – to an evoked brain potential that is a measure of the electrical activity produced by the auditory parts of the inner ear and brain in response to sound. The specific test that we settled on is known as the ABR, the auditory brainstem response.
The ABR is a fairly remarkable response that captures much of the peripheral and central auditory pathway in action when short tone bursts are delivered to the animal. Within approximately 5 milliseconds following the presentation of a stimulus, the auditory periphery and brain produce a series of as many as five positive-going, highly reproducible electrical waves. These waves, or voltage peaks, more or less represent the sequential activation of primary auditory centers sweeping from the auditory nerve (the VIIIth cranial nerve), which transmits the responses of the sensory cells of the inner ear rostrally, through auditory brainstem centers toward the auditory cortex.
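Because an evoked response like the ABR is tiny compared with the background electrical activity, it is recovered by averaging many stimulus-locked epochs, which shrinks the noise by the square root of the number of epochs. A sketch with an invented response template and noise level (not real ABR data):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 20000                                 # sample rate (Hz)
t = np.arange(0, 0.005, 1 / fs)            # 5 ms post-stimulus window

# Invented "ABR" template: a small damped wave.
template = 0.5 * np.sin(2 * np.pi * 900 * t) * np.exp(-t / 0.002)

# Each epoch is the template buried in much larger background noise.
epochs = template + rng.normal(0, 2.0, size=(2000, t.size))

# Averaging across epochs shrinks the noise by ~1/sqrt(2000).
average = epochs.mean(axis=0)

# Similarity of the averaged trace to the underlying response.
recovered = float(np.corrcoef(average, template)[0, 1])
```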
Greater Prairie Chickens included in this study were captured using nets that were placed on leks in the early morning hours. Captured birds were transported to a storage building that had been reconfigured into a remote auditory physiology lab where ABRs were recorded from birds positioned in a homemade, sound attenuating space – an acoustic wedge-lined wooden box.

The waveform of the Greater Prairie Chicken ABR closely resembles ABRs recorded from other birds – three prominent positive-going electrical peaks, and two smaller amplitude waves that follow, are easily identified, especially at higher levels of stimulation. In Figure 2, ABR waveforms recorded from an individual bird in response to 2.8 kHz tone pips are shown in the left panel and the group averages of all birds studied under the same stimulus conditions are shown in the right panel; the similarity of response waveforms from bird to bird, as indicated in the nearly imperceptible standard errors (shown in gray), testifies to the stability and utility of the tool. As stimulus level is lowered, ABR peaks decrease in amplitude and occur at later time points following stimulus onset.
Since our goal was to determine if Greater Prairie Chickens are sensitive to sounds produced by wind turbines, we generated an audiogram based on level-dependent changes in ABRs representing responses to tone pips spanning much of the bird’s audiometric range (Figure 3). An audiogram is a curve representing the relationship between response threshold (i.e., the lowest stimulus level producing a clear response) and stimulus frequency; in this case, thresholds were averaged across all animals included in the investigation.

As shown in Figure 3, the region of greatest hearing sensitivity is in the 1 to 4 kHz range, and thresholds increase (sensitivity is lost) rapidly at higher stimulus frequencies and more gradually at lower frequencies. Others have shown that ABR threshold values are approximately 30 dB higher than thresholds determined behaviorally in the budgerigar (Melopsittacus undulatus) (Brittan-Powell et al., 2002). So, to answer the question posed in this investigation, ABR threshold values were adjusted to estimate behavioral thresholds, and the resulting sensitivity curve was compared with the acoustic output of a wind turbine farm studied by van den Berg in 2006. The finding is clear: wind turbine noise falls well within the audible space of Greater Prairie Chickens occupying booming grounds in the acoustic footprint of active wind turbines.
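The adjustment described above is simple arithmetic: estimated behavioral thresholds are the ABR thresholds minus roughly 30 dB, which can then be compared band by band with turbine noise levels. A sketch with invented threshold and noise values (the measured audiogram is in Figure 3; these numbers are illustrative only):

```python
# Hypothetical ABR thresholds by frequency (dB SPL) -- illustrative only,
# not the measured Greater Prairie Chicken values.
abr_thresholds = {500: 55, 1000: 40, 2000: 35, 4000: 45, 8000: 70}

ABR_OFFSET_DB = 30  # ABR thresholds run ~30 dB above behavioral (budgerigar)

behavioral_est = {f: thr - ABR_OFFSET_DB for f, thr in abr_thresholds.items()}

# Hypothetical wind turbine noise levels at the lek (dB SPL) by band.
turbine_noise = {500: 45, 1000: 35, 2000: 25, 4000: 10, 8000: 5}

# Bands where turbine noise exceeds the estimated hearing threshold,
# i.e., bands in which the birds would hear the turbines.
audible = [f for f in behavioral_est if turbine_noise[f] > behavioral_est[f]]
```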
While findings reported here indicate that Greater Prairie Chickens are sensitive to at least a portion of wind turbine acoustic output, the next question that we plan to address will be more difficult to answer: Does noise propagated from wind turbines interfere with vocal communication among Greater Prairie Chickens courting one another in the Nebraska Sand Hills? Efforts to answer that question are in the works.

Presentation #1pABa2, “Hearing sensitivity in the Greater Prairie Chicken,” by Edward J. Walsh, Cara Whalen, Larkin Powell, Mary B. Brown, and JoAnn McGee will take place on Monday, May 18, 2015, at 1:15 PM in the Rivers room at the Wyndham Grand Pittsburgh Downtown Hotel. The abstract can be found by searching for the presentation number here:

tags: chickens, mating, courtship, hearing, Nebraska, wind turbines

Brittan-Powell, E.F., Dooling, R.J. and Gleich, O. (2002). Auditory brainstem responses in adult budgerigars (Melopsittacus undulatus). J. Acoust. Soc. Am. 112:999-1008.
van den Berg, G.P. (2006). The sound of high winds. The effect of atmospheric stability on wind turbine sound and microphone noise. Dissertation, Groningen University, Groningen, The Netherlands.
Whalen, C., Brown, M.B., McGee, J., Powell, L.A., Smith, J.A. and Walsh, E.J. (2014). The acoustic characteristics of greater prairie-chicken vocalizations. J. Acoust. Soc. Am. 136:2073.

2pED – Sound education for the deaf and hard of hearing – Cameron Vongsawad, Mark Berardi, Kent Gee, Tracianne Neilsen, Jeannette Lawler

Sound education for the deaf and hard of hearing

Cameron Vongsawad – cvongsawad@byu.edu
Mark Berardi – markberardi12@gmail.com
Kent Gee – kentgee@physics.byu.edu
Tracianne Neilsen – tbn@byu.edu
Jeannette Lawler – jeannette_lawler@physics.byu.edu
Department of Physics & Astronomy
Brigham Young University
Provo, Utah 84602

Popular version of paper 2pED, “Development of an acoustics outreach program for the deaf.”
Presented Tuesday Afternoon, May 19, 2015, 1:45 pm, Commonwealth 2
169th ASA Meeting, Pittsburgh

The deaf and hard of hearing have less intuition about sound but are no strangers to the effects of pressure, vibrations, and other basic acoustical principles. Brigham Young University recently expanded its “Sounds to Astound” outreach program (sounds.byu.edu) and developed an acoustics demonstration program for visiting deaf students. The program was designed to help the students connect with a wide variety of acoustical principles through highly visual and kinesthetic demonstrations of sound, as well as by utilizing the students’ primary language, American Sign Language (ASL).

In science education, the “Hear and See” methodology (Beauchamp 2005) has been shown to be an effective teaching tool in assisting students to internalize new concepts. This sensory-focused approach can be applied to a deaf audience in a different way, the “See and Feel” method. In both, whenever possible students participate in demonstrations to experience the physical principle being taught.

In developing the “See and Feel” approach, a fundamental consideration was to select the principles of sound that were easily communicated using words that exist and are commonly used in ASL. For example, the word “pressure” is common, while the word “wave” is uncommon. Additionally, the sign for “wave” is closely associated with a water wave, which could lead to confusion about the nature of sound as a longitudinal wave. In the absence of an ASL sign for “resonance,” the nature of sound was taught by focusing on the signs for “vibration” and “pressure.” Additional vocabulary, i.e., mode, amplitude, node, antinode, and wave propagation, were presented using classifiers (non-lexical visualizations of gestures and hand shapes) and finger spelling the words. (Sheetz 2012)

Two bilingual teaching approaches were tried to make ASL the primary instruction language while also enabling communication among the demonstrators. In the first approach, the presenter used ASL and spoken English simultaneously. In the second approach, the presenter used only ASL and other interpreters provided the spoken English translation. The second approach proved to be more effective for both the audience and the presenters because it allowed the presenter to focus on describing the principles in the native framework of ASL, resulting in a better presentation flow for the deaf students.

In addition to the tabletop demonstrations (illustrated in the figures), the students were also able to feel sound in BYU’s reverberation chamber as a large subwoofer was driven at resonance frequencies of the room. The students were invited to walk around the room to find where the vibrations felt weakest; in doing so, they mapped the nodal lines of the wave patterns in the room. The students also enjoyed standing in the corners of the room, where the sound pressure is up to eight times as strong, and feeling the power of the sound vibrations.
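The corner effect follows from pressure doubling at each rigid boundary: near three intersecting walls the pressure doubles three times, giving 2³ = 8 times the pressure (about 18 dB). The resonance frequencies the subwoofer was tuned to can be estimated from the standard rigid-wall rectangular-room mode formula. The sketch below uses hypothetical room dimensions, since the actual dimensions of BYU’s chamber are not given here.

```python
import math

def room_mode_freq(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    """Resonance frequency (Hz) of mode (nx, ny, nz) in a rigid
    rectangular room of dimensions Lx x Ly x Lz (m), with sound
    speed c in air: f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    return (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# Hypothetical chamber dimensions in meters (not the actual BYU chamber).
Lx, Ly, Lz = 8.0, 6.0, 5.0

# Lowest axial mode along the 8 m dimension: one nodal plane mid-room.
f_100 = room_mode_freq(1, 0, 0, Lx, Ly, Lz)
print(round(f_100, 1))  # 343/(2*8) = 21.4 Hz

# Pressure gain in a corner: 2^3 = 8x pressure, i.e. 20*log10(8) ≈ 18 dB.
print(round(20 * math.log10(8), 1))  # 18.1
```

Walking along the 8 m wall at this mode, a student would feel the vibration vanish at the nodal plane halfway across the room, exactly the mapping exercise described above.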

The experience of sharing acoustics with the deaf and hard of hearing has been remarkable. We have learned several lessons about what does and does not work well with regard to ASL communication, visual instruction, and the accessibility of the demonstrations to all participants. Clear ASL communication is key to the success of the event. As described above, it is more effective if the main presenter communicates in ASL while someone else, who understands both ASL and physics, provides a verbal interpretation for non-ASL volunteers. A sufficient ratio of interpreters to participants gives each person in attendance an individual voice throughout the event. Another important consideration is that the ASL presenter must be visible to all students at all times; extra thought is required to illuminate the presenter when the demonstrations require low lighting for maximum visual effect.

Because most of the demonstrations traditionally rely on the perception of sound, care must be taken to provide visual instruction about the vibrations for hearing-impaired participants (Lang 1973, 1981). This required the presenters to think creatively about how to modify the demonstrations. Dividing students into smaller groups (3-4 students) allows each student to interact with the demonstrations more closely (Vongsawad 2014). This hands-on approach improves the students’ ability to “See and Feel” the principles of sound being illustrated and to benefit more fully from the event.

While a bit hesitant at first, by the end of the event the students were participating freely, asking questions, and excited about what they had learned. They left with a better understanding of the principles of acoustics and of how sound affects their lives. The primary benefit, however, was giving deaf children the opportunity to see that resources exist at universities for them to succeed in higher education.

We would like to acknowledge support for this work from a National Science Foundation Grant (IIS-1124548) and from the Sorensen Impact Foundation. The visiting students also took part in a research project to develop a technology referred to as “Signglasses” – head-mounted augmented reality displays that could be used to help deaf and hard of hearing students better participate in planetarium shows. We also appreciate the support from the Acoustical Society of America in the development of BYU’s student chapter outreach program, “Sounds to Astound.” This work could not have been completed without the help of the Jean Massieu School of the Deaf in Salt Lake City, Utah.

This video demonstrates the use of ASL as the primary means of communication for students. Communication in their native language improved understanding.


Figure 1: Vibrations on a string were made to appear “frozen” in time by matching the frequency of a strobe light to the frequency of oscillation, which enhanced the ability of students to analyze the wave properties visually.


Figure 2: The Rubens tube is another classic physics and acoustics demonstration, used to show resonance in a pipe. As with the vibrations on a string, a standing wave forms, but this time it is created by sound waves directly. A loudspeaker is attached to the end of a tube filled with propane; the escaping propane is lit, and the flame heights reveal the variations in pressure caused by the sound wave in the tube. Here students are able to visualize a variety of sound properties.
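The flame pattern along a Rubens tube traces the standing wave at a resonance of the enclosed gas column. A rough sketch of the resonance frequencies, modeling the tube as closed at both ends and assuming a nominal sound speed in propane of about 250 m/s (the tube length and sound speed here are illustrative assumptions, not values from the demonstration):

```python
def tube_resonances(L, c=250.0, n_max=5):
    """Resonance frequencies (Hz) of a gas column of length L (m) modeled
    as closed at both ends: f_n = n * c / (2 * L), n = 1, 2, 3, ...
    c is the sound speed in the gas (propane is slower than air)."""
    return [n * c / (2.0 * L) for n in range(1, n_max + 1)]

# Hypothetical 2 m tube: fundamental plus the next few harmonics.
print(tube_resonances(2.0))  # [62.5, 125.0, 187.5, 250.0, 312.5]
```

Driving the speaker at the n-th resonance produces n pressure antinodes, visible as n tall-flame regions along the tube, so students can literally count the mode number.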


Figure 3: Free spectrum analyzer and oscilloscope software was used to visualize the properties of sound broken down into its constituent parts. Students were encouraged to make sounds by clapping, snapping, or using a tuning fork or their voice, and were able to see that sounds made in different ways have different features. It was significant for the hearing-impaired students to see that the noises they made looked similar to everyone else’s.
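The spectrum-analyzer display is essentially a Fourier transform of the microphone signal: a tuning fork produces one sharp spectral line, while a clap spreads energy across many frequencies. A minimal NumPy sketch of that idea (the 440 Hz tone and the random-noise “clap” are illustrative stand-ins for real recordings):

```python
import numpy as np

fs = 8000                          # sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples

# A "tuning fork": a pure 440 Hz sine.  A "clap": broadband noise.
tone = np.sin(2 * np.pi * 440 * t)
clap = np.random.default_rng(0).standard_normal(t.size)

def peak_freq(signal, fs):
    """Frequency (Hz) of the strongest component in the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    return freqs[np.argmax(spectrum)]

print(peak_freq(tone, fs))  # 440.0 — a single sharp line, like the tuning fork
```

For the noise signal the peak location is essentially random, mirroring what the students saw: a clap fills the whole display rather than drawing one clean spike.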


Figure 4: A loudspeaker driven at a frequency of 40 Hz was used first to make a candle flame flicker and then to blow it out as the loudness was increased, demonstrating the power of sound traveling as a pressure wave in the air.


Figure 5: A surface vibration loudspeaker placed on a table was another effective way for the students to feel sound. Some students even placed the loudspeaker on their heads for a more personal experience with sound.


Figure 6: Pond foggers use high-frequency, high-amplitude sound to turn water into fog, a cold mist of tiny water droplets. This demonstration gave students the opportunity to see and feel how powerful sound vibrations can be. They could also put their fingers close to the fogger and feel the vibrations in the water.

Tags: education, deafness, language


Michael S. Beauchamp, “See me, hear me, touch me: Multisensory integration in lateral occipital-temporal cortex,” Current Opinion in Neurobiology 15, 145-153 (2005).

N. A. Scheetz, Deaf Education in the 21st Century: Topics and Trends (Pearson, Boston, 2012) pp. 152-62.

Cameron T. Vongsawad, Tracianne B. Neilsen, and Kent L. Gee, “Development of educational stations for Acoustical Society of America outreach,” Proc. Mtgs. Acoust. 20, 025003 (2014).

Harry G. Lang, “Teaching Physics to the Deaf,” Phys. Teach. 11, 527 (September 1973).

Harry G. Lang, “Acoustics for deaf physics students,” Phys. Teach. 19, 248 (April 1981).