2pED – Sound education for the deaf and hard of hearing

Cameron Vongsawad – cvongsawad@byu.edu
Mark Berardi – markberardi12@gmail.com
Kent Gee – kentgee@physics.byu.edu
Tracianne Neilsen – tbn@byu.edu
Jeannette Lawler – jeannette_lawler@physics.byu.edu
Department of Physics & Astronomy
Brigham Young University
Provo, Utah 84602

Popular version of paper 2pED, “Development of an acoustics outreach program for the deaf”
Presented Tuesday Afternoon, May 19, 2015, 1:45 pm, Commonwealth 2
169th ASA Meeting, Pittsburgh

The deaf and hard of hearing have less intuition for sound but are no strangers to the effects of pressure, vibrations, and other basic acoustical principles. Brigham Young University recently expanded its “Sounds to Astound” outreach program (sounds.byu.edu) and developed an acoustics demonstration program for visiting deaf students. The program was designed to help the students connect with a wide variety of acoustical principles through highly visual and kinesthetic demonstrations of sound, and by using the students’ primary language, American Sign Language (ASL).

In science education, the “Hear and See” methodology (Beauchamp 2005) has been shown to be an effective tool for helping students internalize new concepts. This sensory-focused approach can be adapted for a deaf audience as a “See and Feel” method. In both approaches, students participate in demonstrations whenever possible to experience the physical principle being taught.

In developing the “See and Feel” approach, a fundamental consideration was to select principles of sound that could be easily communicated using words that exist and are commonly used in ASL. For example, the word “pressure” is common, while the word “wave” is uncommon. Additionally, the sign for “wave” is closely associated with a water wave, which could lead to confusion about the nature of sound as a longitudinal wave. In the absence of an ASL sign for “resonance,” the nature of sound was taught by focusing on the signs for “vibration” and “pressure.” Additional vocabulary, such as mode, amplitude, node, antinode, and wave propagation, was presented using classifiers (non-lexical visualizations of gestures and hand shapes) and by finger spelling the words. (Scheetz 2012)

Two bilingual teaching approaches were tried to make ASL the primary instruction language while also enabling communication among the demonstrators. In the first approach, the presenter used ASL and spoken English simultaneously. In the second approach, the presenter used only ASL and other interpreters provided the spoken English translation. The second approach proved to be more effective for both the audience and the presenters because it allowed the presenter to focus on describing the principles in the native framework of ASL, resulting in a better presentation flow for the deaf students.

In addition to the tabletop demonstrations (illustrated in the figures), the students were able to feel sound in BYU’s reverberation chamber as a large subwoofer was operated at resonance frequencies of the room. The students were invited to walk around the room to find where the vibrations felt weakest; in doing so, they mapped the nodal lines of the wave patterns in the room. The participants also enjoyed standing in the corners of the room, where the sound pressure is eight times as strong, and feeling the power of the sound vibrations.
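For readers curious about the physics behind this demonstration, here is a brief sketch (our addition; the chamber’s actual dimensions are not given here). The resonance frequencies of a rectangular room with dimensions L_x, L_y, and L_z are

\[
f_{n_x n_y n_z} = \frac{c}{2}\sqrt{\left(\frac{n_x}{L_x}\right)^{2} + \left(\frac{n_y}{L_y}\right)^{2} + \left(\frac{n_z}{L_z}\right)^{2}},
\]

where c ≈ 343 m/s is the speed of sound in air and n_x, n_y, n_z are non-negative integers. The factor of eight in the corners follows from simple arithmetic: the pressure amplitude doubles at each rigid surface, and a corner is the meeting of three surfaces, so the amplitude there is 2 × 2 × 2 = 8 times that in the middle of the room.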

The experience of sharing acoustics with the deaf and hard of hearing has been remarkable. We have learned a few lessons about what does and does not work well with regard to ASL communication, visual instruction, and the accessibility of the demos to all participants. Clear ASL communication is key to the success of the event. As described above, it is more effective if the main presenter communicates in ASL and someone else, who understands both ASL and physics, provides a verbal interpretation for non-ASL volunteers. Having a sufficient ratio of interpreters to participants gives each person in attendance an individualized voice throughout the event. Another important consideration is that the ASL presenter needs to be visible to all students at all times. Extra thought is required to illuminate the presenter when the demonstrations require low lighting for maximum visual effect.

Because most of the demonstrations traditionally rely on the perception of sound, care must be taken to provide visual instruction about the vibrations for hearing-impaired participants. (Lang 1973, 1981) This required the presenters to think creatively about how to modify demos. Dividing students into smaller groups (3-4 students) allows each student to interact with the demonstrations more closely. (Vongsawad 2014) This hands-on approach improves the students’ ability to “See and Feel” the principles of sound being illustrated in the demonstrations and helps them benefit more fully from the event.

Though a bit hesitant at first, by the end of the event the students were participating freely, asking questions, and excited about what they had learned. They left with a better understanding of the principles of acoustics and how sound affects their lives. The primary benefit, however, was providing opportunities for deaf children to see that resources exist at universities for them to succeed in higher education.

Acknowledgments
We would like to acknowledge support for this work from a National Science Foundation grant (IIS-1124548) and from the Sorensen Impact Foundation. The visiting students also took part in a research project to develop a technology referred to as “Signglasses” – head-mounted displays that could be used to help deaf and hard of hearing students better participate in planetarium shows. We also appreciate the support from the Acoustical Society of America in the development of BYU’s student chapter outreach program, “Sounds to Astound.” This work could not have been completed without the help of the Jean Massieu School of the Deaf in Salt Lake City, Utah.

Figure 1: Vibrations on a string were made to appear “frozen” in time by matching the frequency of a strobe light to the frequency of oscillation, which enhanced the ability of students to analyze the wave properties visually.
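A brief aside on why the strobe trick works (our explanation, not part of the original caption): the strobe samples the string’s motion once per flash, so the string appears to move at the difference frequency

\[
f_{\text{apparent}} = \left| f_{\text{string}} - f_{\text{strobe}} \right|.
\]

When the strobe exactly matches the string’s frequency, the apparent motion is zero and the string looks frozen; a slight mismatch makes it appear to oscillate in slow motion.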



Figure 2: The Rubens tube is another classic physics and acoustics demonstration, showing resonance in a pipe. It is similar to the vibrations on a string, but this time the pattern is created by the sound wave directly. A loudspeaker is attached to one end of a tube filled with propane, and the escaping propane is lit on fire; the height of the flames traces the pressure variations of the sound wave inside the tube. Here students are able to visualize a variety of sound properties.
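As a rough sketch of the physics (our addition; the actual tube dimensions are not given here), treating the tube as closed at both ends, its resonance frequencies are

\[
f_n = \frac{n\,c_{\text{gas}}}{2L}, \qquad n = 1, 2, 3, \ldots
\]

where L is the tube length and c_gas is the speed of sound in the gas, roughly 260 m/s for propane at room temperature. For an illustrative 1.5 m tube, the fundamental would fall near 85 Hz; driving the loudspeaker at such a resonance produces the standing flame pattern.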



Figure 3: Free spectrum analyzer and oscilloscope software was used to visualize the properties of sound broken up into its constituent parts. Students were encouraged to make sounds by clapping, snapping, using a tuning fork, or using their voices, and were able to see that sounds made in different ways have different features. It was significant for the hearing-impaired students to see that the noises they made looked similar to everyone else’s.
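The specific software is not named above; as a minimal sketch of what any free spectrum analyzer does (our illustration, with invented signal parameters), the short Python example below synthesizes a tuning-fork-like pure tone and a clap-like noise burst and plots their spectra, showing why differently produced sounds look so different on screen.

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 44100                      # sample rate in Hz
    t = np.arange(0, 1.0, 1 / fs)   # one second of audio

    # A tuning fork is nearly a pure tone: a single 440 Hz sinusoid.
    fork = np.sin(2 * np.pi * 440 * t)

    # A clap is a short broadband burst: noise with a fast decay.
    clap = np.random.randn(t.size) * np.exp(-t / 0.05)

    for name, signal in [("tuning fork", fork), ("clap", clap)]:
        spectrum = np.abs(np.fft.rfft(signal))        # magnitude spectrum
        freqs = np.fft.rfftfreq(signal.size, 1 / fs)  # frequency axis
        plt.plot(freqs, spectrum / spectrum.max(), label=name)

    plt.xlim(0, 2000)
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Normalized magnitude")
    plt.legend()
    plt.show()

The tuning fork shows a single sharp peak at 440 Hz, while the clap spreads its energy across the whole frequency range.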



Figure 4: A loudspeaker driven at a frequency of 40 Hz was used to first make a candle flame flicker and then blow out as the loudness was increased to demonstrate the power of sound traveling as a pressure wave in the air.



Figure 5: A surface vibration loudspeaker placed on a table was another effective demonstration for the students to feel the sound. Some students placed the loudspeaker on their heads for an even more personal experience with sound.



Figure 6: Pond foggers use high-frequency, high-amplitude sound to turn water into fog, a cool mist of tiny water droplets. This demonstration gave students the opportunity to see and feel how powerful sound and vibrations can be. They could also put their fingers close to the fogger and feel the vibrations in the water.


This video demonstrates the use of ASL as the primary means of communication for students. Communication in their native language improved understanding.


References

Michael S. Beauchamp, “See me, hear me, touch me: Multisensory integration in lateral occipital-temporal cortex,” Current Opinion in Neurobiology 15, 145-153 (2005).

N. A. Scheetz, Deaf Education in the 21st Century: Topics and Trends (Pearson, Boston, 2012), pp. 152-162.

Cameron T. Vongsawad, Tracianne B. Neilsen, and Kent L. Gee, “Development of educational stations for Acoustical Society of America outreach,” Proc. Mtgs. Acoust. 20, 025003 (2014).

Harry G. Lang, “Teaching Physics to the Deaf,” Phys. Teach. 11, 527 (September 1973).

Harry G. Lang, “Acoustics for deaf physics students,” Phys. Teach. 19, 248 (April 1981).

2aSC – Speech: An eye and ear affair!

Pamela Trudeau-Fisette – ptrudeaufisette@gmail.com
Lucie Ménard – menard.lucie@uqam.ca
Université du Québec à Montréal
320 Ste-Catherine E.
Montréal, H3C 3P8

Popular version of poster session 2aSC, “Auditory feedback perturbation of vowel production: A comparative study of congenitally blind speakers and sighted speakers”
Presented Tuesday morning, May 19, 2015, Ballroom 2, 8:00 AM – 12:00 noon
169th ASA Meeting, Pittsburgh
———————————
When learning to speak, young infants and toddlers use auditory and visual cues to correctly associate speech movements with specific speech sounds. In doing so, typically developing children compare their own speech with that of their ambient language to build and improve the relationship between what they hear, see, and feel and how to produce it.

In many day-to-day situations, we exploit the multimodal nature of speech: in noisy environments, such as a cocktail party, we look at our interlocutor’s face and use lip reading to recover speech sounds. When speaking clearly, we open our mouths wider to make ourselves sound more intelligible. Sometimes, just seeing someone’s face is enough to communicate!

What happens in cases of congenital blindness? Although blind speakers learn to produce intelligible speech, they do not speak quite like sighted speakers do. Since they do not perceive others’ visual cues, blind speakers do not produce visible labial movements as much as their sighted peers do.

Production of the French vowel “ou” (similar to the vowel in “cool”) by a sighted adult speaker (left) and a congenitally blind adult speaker (right). The articulatory movements of the lips are clearly more pronounced for the sighted speaker.

Therefore, because one sensory input is lacking, blind speakers put more weight on what they hear (auditory feedback) than sighted speakers do. How does that affect the way blind individuals speak?

To answer this question, we conducted an experiment in which we asked congenitally blind adult speakers and sighted adult speakers to produce multiple repetitions of the French vowel “eu”. While they were producing the 130 utterances, we gradually altered their auditory feedback through headphones – without their knowledge – so that they were not hearing the exact sound they were producing. Consequently, they needed to modify the way they produced the vowel to compensate for the acoustic manipulation, so that they could hear the vowel they were asked to produce (and the one they thought they were saying all along!).
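The exact perturbation schedule is not described in this summary. As a minimal sketch of how such gradual alterations are typically programmed (the phase lengths and shift size below are invented for illustration), this Python snippet computes the feedback shift applied on each of the 130 trials:

    N_TRIALS = 130        # total vowel repetitions (from the text)
    BASELINE = 30         # trials with unaltered feedback (assumed)
    RAMP = 50             # trials over which the shift grows (assumed)
    MAX_SHIFT_HZ = 150.0  # maximum formant shift in Hz (assumed)

    def feedback_shift(trial):
        """Shift (in Hz) applied to the speaker's auditory feedback on a
        given trial: zero during baseline, growing linearly during the
        ramp, then held at its maximum for the remaining trials."""
        if trial < BASELINE:
            return 0.0
        progress = min((trial - BASELINE) / RAMP, 1.0)
        return MAX_SHIFT_HZ * progress

    shifts = [feedback_shift(i) for i in range(N_TRIALS)]
    print(shifts[0], shifts[55], shifts[-1])  # 0.0, 75.0, 150.0

Because the shift creeps in slowly, speakers rarely notice it consciously; they simply hear themselves drifting off target and adjust their articulation to pull the vowel back.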

What we were interested in is whether blind speakers and sighted speakers would react differently to this auditory manipulation. Because blind speakers cannot rely on visual feedback, we hypothesized that they would place more importance on their auditory feedback and, therefore, compensate to a greater extent for the acoustic manipulation.

To explore this matter, we observed the acoustic (produced sounds) and articulatory (lip and tongue movements) differences between the two groups at three distinct points in the experiment.

As predicted, congenitally blind speakers compensated for the altered auditory feedback to a greater extent than their sighted peers. More specifically, even though both speaker groups adapted their productions, the blind group compensated more than the control group did, as if they were integrating the auditory information more strongly. We also found that the two groups used different articulatory strategies to respond to the applied manipulation: blind participants relied more on their tongue (which is not visible when you speak) to compensate. This latter observation is not surprising, given that blind speakers do not use their lips (which are visible when you speak) as much as their sighted peers do.

Tags: speech, language, learning, vision, blindness