2aSC – Speech: An eye and ear affair! – Pamela Trudeau-Fisette, Lucie Ménard

Speech: An eye and ear affair!
Pamela Trudeau-Fisette – ptrudeaufisette@gmail.com
Lucie Ménard – menard.lucie@uqam.ca
Université du Québec à Montréal
320 Ste-Catherine E.
Montréal, H3C 3P8

Popular version of poster session 2aSC, “Auditory feedback perturbation of vowel production: A comparative study of congenitally blind speakers and sighted speakers”
Presented Tuesday morning, May 19, 2015, Ballroom 2, 8:00 AM – 12:00 noon
169th ASA Meeting, Pittsburgh
———————————
When learning to speak, infants and toddlers use auditory and visual cues to associate speech movements with specific speech sounds. In doing so, typically developing children compare their own productions with those of their ambient language, building and refining the relationship between what they hear, see, and feel and how to produce it.

In many day-to-day situations, we exploit the multimodal nature of speech: in noisy environments, such as a cocktail party, we look at our interlocutor’s face and use lip reading to recover speech sounds. When speaking clearly, we open our mouths wider to make ourselves more intelligible. Sometimes, just seeing someone’s face is enough to communicate!

What happens in cases of congenital blindness? Although blind speakers learn to produce intelligible speech, they do not speak quite like sighted speakers do. Because they cannot perceive others’ visual cues, blind speakers produce less pronounced visible lip movements than their sighted peers.

Production of the French vowel “ou” (similar to the vowel in “cool”) by a sighted adult speaker (left) and a congenitally blind adult speaker (right). The articulatory movements of the lips are clearly more pronounced for the sighted speaker.

Therefore, blind speakers rely more on what they hear (auditory feedback) than sighted speakers do, because one sensory input is lacking. How does that affect the way blind individuals speak?
To answer this question, we conducted an experiment in which we asked congenitally blind adult speakers and sighted adult speakers to produce multiple repetitions of the French vowel “eu”. While they produced these 130 utterances, we gradually altered their auditory feedback through headphones (without their knowing it) so that they were not hearing the exact sound they were saying. Consequently, they needed to modify the way they produced the vowel to compensate for the acoustic manipulation, so that they could hear the vowel they were asked to produce (and the one they thought they were saying all along!).
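The gradual nature of the manipulation is what keeps speakers unaware of it. As a rough illustration only, here is a minimal sketch of a per-trial perturbation schedule over 130 utterances; the phase lengths, ramp shape, and the idea of expressing the shift in cents are assumptions for illustration, not the study's actual parameters.

```python
import numpy as np

def perturbation_schedule(n_trials=130, n_baseline=20, n_ramp=50, max_shift_cents=150):
    """Per-trial shift applied to the speaker's auditory feedback (in cents):
    no shift during a baseline phase, a gradual ramp, then a hold at the
    maximum shift. All counts and the shift size here are illustrative."""
    baseline = np.zeros(n_baseline)
    ramp = np.linspace(0.0, max_shift_cents, n_ramp)
    hold = np.full(n_trials - n_baseline - n_ramp, float(max_shift_cents))
    return np.concatenate([baseline, ramp, hold])

shifts = perturbation_schedule()
print(len(shifts), shifts[0], shifts[-1])  # 130 trials, from no shift up to the full shift
```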
We were interested in whether blind and sighted speakers would react differently to this auditory manipulation. Because blind speakers cannot rely on visual feedback, we hypothesized that they would give more weight to their auditory feedback and therefore compensate for the acoustic manipulation to a greater extent.

To explore this, we examined the acoustic (produced sounds) and articulatory (lip and tongue movements) differences between the two groups at three distinct time points during the experiment.
As predicted, congenitally blind speakers compensated for the altered auditory feedback to a greater extent than their sighted peers. More specifically, even though both groups adapted their productions, the blind group compensated more than the control group did, as if they were weighting the auditory information more strongly. We also found that the two groups used different articulatory strategies to respond to the manipulation: blind participants relied more on their tongue (which is not visible during speech) to compensate. This is not surprising, given that blind speakers do not use their lips (which are visible during speech) as much as their sighted peers do.

Tags: speech, language, learning, vision, blindness

4aSCb8 – How do kids communicate in challenging conditions? – Valerie Hazan

Kids learn to speak fluently at a young age, and we expect young teenagers to communicate as effectively as adults. However, researchers are increasingly realizing that certain aspects of speech communication follow a slower developmental path. For example, as adults we are very skilled at adapting the way we speak to the needs of the communication. When we are speaking a predictable message in good listening conditions, we can relax our articulation and expend less effort. In poor listening conditions, or when transmitting new information, we increase the effort we make to enunciate speech clearly in order to be more easily understood.

In our project, we investigated whether 9 to 14 year olds (divided into three age bands) were able to make such skilled adaptations when speaking in challenging conditions. We recorded 96 pairs of friends of the same age and gender while they carried out a simple picture-based ‘spot the difference’ game (See Figure 1).
Hazan1_fig
Figure 1: one of the picture pairs in the DiapixUK ‘spot the difference’ task.

The two friends were seated in different rooms and spoke to each other via headphones; they had to find 12 differences between their two pictures without seeing each other or the other picture. In the ‘easy communication’ condition, both friends could hear each other normally, while in the ‘difficult communication’ condition, we made it difficult for one of the friends (‘Speaker B’) to hear the other by heavily distorting the speech of ‘Speaker A’ with a vocoder (See Figure 2 and sound demos 1 and 2). Both kids had received some training in understanding this type of distorted speech. We investigated what adaptations Speaker A, who was hearing normally, made in order to be understood by the friend with ‘impaired’ hearing, so that they could complete the task successfully.
Hazan2_fig
Figure 2: The recording set up for the ‘easy communication’ (NB) and ‘difficult communication’ (VOC) conditions.
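The heavy distortion in the ‘difficult communication’ condition came from a vocoder. The following is a minimal sketch of noise vocoding of this general kind, not the study's actual processing chain; the band count and frequency edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=5000.0):
    """Rough noise-vocoder sketch: split the speech into log-spaced frequency
    bands, take each band's amplitude envelope, and use it to modulate noise
    filtered into the same band. Band count and edges are illustrative;
    f_hi must stay below fs/2."""
    speech = np.asarray(speech, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    noise = np.random.randn(len(speech))
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))       # how loud this band is over time
        carrier = sosfiltfilt(sos, noise)      # noise restricted to the same band
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)
```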


Sound 1: Here, you will hear an excerpt from the diapix task between two 10 year olds in the ‘difficult communication’ condition, from the viewpoint of the talker who is hearing normally. Hear how she attempts to clarify her speech when her friend has difficulty understanding her.


Sound 2: Here, you will hear the same excerpt, but from the viewpoint of the talker hearing the heavily degraded (vocoded) speech. Even though you will find this speech very difficult to understand, even 10 year olds get better at perceiving it after a bit of training. However, they still have difficulty understanding what is being said, which forces their friend to make a greater effort to communicate.

We looked at the time it took to find the differences between the pictures as a measure of communication efficiency. We also carried out analyses of the acoustic aspects of the speech to see how these varied when communication was easy or difficult.
We found that when communication was easy, the child groups did not differ from adults in the average time it took to find a difference in the picture, showing that 9 to 14 year olds were communicating as efficiently as adults. When the speech of Speaker A was heavily distorted, all groups took longer to do the task, but only the 9-10 year old group took significantly longer than adults (See Figure 3). The additional problems experienced by younger kids are likely due both to Speaker B having greater difficulty understanding degraded speech and to Speaker A being less skilled at compensating for these difficulties. The results obtained for children aged 11 and older suggest that they were using good strategies to compensate for the difficulties imposed on the communication (See Figure 3).
Hazan3_fig
Figure 3: Average time taken to find one difference in the picture task. The four talker groups do not differ when communication is easy (blue bars); in the ‘difficult communication’ condition (green bars), the 9-10 year olds take significantly longer than the adults but the other child groups do not.

In terms of the acoustic characteristics of their speech, the 9 to 14 year olds differed in certain aspects from adults in the ‘easy communication’ condition. All child groups produced more distinct vowels and used a higher pitch than adults; kids younger than 11-12 also spoke more slowly and more loudly than adults. They hadn’t learnt to ‘reduce’ their speaking effort in the way that adults would do when communication was easy. When communication was made difficult, the 9 to 14 year olds were able to make adaptations to their speech for the benefit of their friend hearing the distorted speech, even though they themselves were having no hearing difficulties. For example, they spoke more slowly (See Figure 4) and more loudly. However, some of these adaptations differed from those produced by adults.
Hazan4_fig
Figure 4: Speaking rate changes with age and communication difficulty. 9-10 year olds spoke more slowly than adults in the ‘easy communication’ condition (blue bars). All speaker groups slowed down their speech as a strategy to help their friend understand them in the ‘difficult communication’ (vocoder) condition (green bars).

Overall, therefore, even in the second decade of life, there are changes taking place in the conversational speech produced by young people. Some of these changes are due to physiological reasons such as growth of the vocal apparatus, but increasing experience with speech communication and cognitive developments occurring in this period also play a part.

Younger kids may experience greater difficulty than adults when communicating in difficult conditions and even though they can make adaptations to their speech, they may not be as skilled at compensating for these difficulties. This has implications for communication within school environments, where noise is often an issue, and for communication with peers with hearing or language impairments.

 

Valerie Hazan – v.hazan@ucl.ac.uk
University College London (UCL)
Speech Hearing and Phonetic Sciences
Gower Street, London WC1E 6BT, UK

Michèle Pettinato – Michele.Pettinato@uantwerpen.be
University College London (UCL)
Speech Hearing and Phonetic Sciences
Gower Street, London WC1E 6BT, UK

Outi Tuomainen – o.tuomainen@ucl.ac.uk
University College London (UCL)
Speech Hearing and Phonetic Sciences
Gower Street, London WC1E 6BT, UK
Sonia Granlund – s.granlund@ucl.ac.uk
University College London (UCL)
Speech Hearing and Phonetic Sciences
Gower Street, London WC1E 6BT, UK
Popular version of paper 4aSCb8
Presented Thursday morning, October 30, 2014
168th ASA Meeting, Indianapolis

2aSC8 – Some people are eager to be heard: anticipatory posturing in speech production – Sam Tilsen

Consider a common scenario in a conversation: your friend is in the middle of asking you a question, and you already know the answer. To be polite, you wait to respond until your friend finishes the question. But what are you doing while you are waiting?

You might think that you are passively waiting for your turn to speak, but the results of this study suggest that you may be more impatient than you think. In analogous circumstances recreated experimentally, speakers move their vocal organs—i.e. their tongues, lips, and jaw—to positions that are appropriate for the sounds that they intend to produce in the near future. Instead of waiting passively for their turn to speak, they are actively preparing to respond.

To examine how speakers control their vocal organs prior to speaking, this study used real-time magnetic resonance imaging of the vocal tract. This recently developed technology takes a picture of the tissue along the midline of the vocal tract, much like an x-ray, and it does so about 200 times every second. This allows for measurement of rapid changes in the positions of the vocal organs before, during, and after people speak.

A video is available online (http://youtu.be/h2_NFsprEF0).

To understand how changes in the positions of vocal organs are related to different speech sounds, it is helpful to think of your mouth and throat as a single tube, with your lips at one end and the vocal folds at the other. When your vocal folds vibrate, they create sound waves that resonate in this tube. By using your lips and tongue to make closures or constrictions in the tube, you can change the frequencies of the resonating sound waves. You can also use an organ called the velum to control whether sound resonates in your nasal cavity. These relations between vocal tract postures and sounds provide a basis for extracting articulatory features from images of the vocal tract. For example, to make a “p” sound you close your lips, to make an “m” sound you close your lips and lower your velum, and to make a “t” sound you press the tip of the tongue against the roof of your mouth.
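As a rough illustration of the tube analogy, a uniform tube closed at one end (the vocal folds) and open at the other (the lips) resonates at odd quarter-wavelength frequencies. Here is a minimal sketch, assuming a speed of sound of about 350 m/s in warm vocal-tract air; the real vocal tract is not uniform, so this is only a first approximation.

```python
def tube_resonances(length_m, n=3, c=350.0):
    """First n resonances (Hz) of a uniform tube closed at one end and open
    at the other: F_k = (2k - 1) * c / (4 * L). With L = 0.175 m this gives
    roughly 500, 1500, 2500 Hz, the familiar formants of a neutral vowel."""
    return [(2 * k - 1) * c / (4.0 * length_m) for k in range(1, n + 1)]

print(tube_resonances(0.175))  # ~[500.0, 1500.0, 2500.0]
```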
Participants in this study produced simple syllables with a consonant and vowel (such as “pa” and “na”) in several different conditions. In one condition, speakers knew ahead of time what syllable to produce, so that they could prepare their vocal tract specifically for the response. In another condition, they produced the syllable immediately without any time for response-specific preparation. The experiment also manipulated whether speakers were free to position their vocal organs however they wanted before responding, or whether they were constrained by the requirement to produce the vowel “ee” before their response.
All of the participants in the study adopted a generic “speech-ready” posture prior to making a response, but only some of them adjusted this posture specifically for the upcoming response. This response-specific anticipation only occurred when speakers knew ahead of time exactly what response to produce. Some examples of anticipatory posturing are shown in the figures below.
fig_tilsen02

Figure 2. Examples of anticipatory postures for “p” and “t” sounds. The lips are closer together in anticipation of “p” and the tongue tip is raised in anticipation of “t”.
fig_tilsen03
Figure 3. Examples of anticipatory postures for “p” and “m” sounds. The velum is raised in anticipation of “p” and lowered in anticipation of “m”.
The surprising finding of this study was that only some speakers anticipatorily postured their vocal tracts in a response-specific way, and that speakers differed greatly in which vocal organs they used for this purpose. Furthermore, some of the anticipatory posturing that was observed facilitates production of an upcoming consonant, while other anticipatory posturing facilitates production of an upcoming vowel. The figure below summarizes these results.
fig_tilsen04
Figure 4. Summary of anticipatory posturing effects, after controlling for generic speech-ready postures.
Why do some people anticipate vocal responses while others do not? Unfortunately, we don’t know: the finding that different speakers use different vocal organs to anticipate different sounds in an upcoming utterance is challenging to explain with current models of speech production. Future research will need to investigate the mechanisms that give rise to anticipatory posturing and the sources of variation across speakers.

 

Sam Tilsen – tilsen@cornell.edu
Peter Doerschuk – pd83@cornell.edu
Wenming Luh – wl358@cornell.edu
Robin Karlin – rpk83@cornell.edu
Hao Yi – hy433@cornell.edu
Cornell University
Ithaca, NY 14850

Pascal Spincemaille – pas2018@med.cornell.edu
Bo Xu – box2001@med.cornell.edu
Yi Wang – yiwang@med.cornell.edu
Weill Medical College
New York, NY 10065

Popular version of paper 2aSC8
Presented Tuesday morning, October 28, 2014
168th ASA Meeting, Indianapolis

4pAAa1 – Auditory Illusions of Supernatural Spirits: Archaeological Evidence and Experimental Results – Steven J. Waller

Introduction: Auditory illusions
The ear can be tricked by ambiguous sounds, just as the eye can be fooled by optical illusions. Sound reflection, whisper galleries, reverberation, ricochets, and interference patterns were perceived in the past as eerie sounds attributed to invisible echo spirits, thunder gods, ghosts, and sound-absorbing bodies. These beliefs in the supernatural were recorded in ancient myths, and expressed in tangible archaeological evidence as canyon petroglyphs, cave paintings, and megalithic stone circles including Stonehenge. Controlled experiments demonstrate that certain ambiguous sounds cause blindfolded listeners to believe in the presence of phantom objects.

WallerFig1_HolyGhostScan

Figure 1. This prehistoric pictograph of a ghostly figure in Utah’s Horseshoe Canyon will answer you back.

 

1. Echoes = Answers from Echo Spirits (relevant to canyon petroglyphs)
Voices coming out of solid rock gave our ancestors the impression of echo spirits calling out from the rocks. Just as light reflection in a mirror gives an illusion of yourself duplicated as a virtual image, sound waves reflecting off a surface are mathematically identical to sound waves emanating from a virtual sound source behind a reflecting plane such as a large cliff face. This can result in an auditory illusion of somebody answering you from deep within the rock. It struck me that canyon petroglyphs might have been made in response to hearing echoes and believing that the echo spirits dwelt in rocky places. Ancient myths contain descriptions of echo spirits that match prehistoric petroglyphs, including witches that hide in sheep bellies and snakeskins. My acoustic measurements have shown that the artists chose to place their art precisely where they could hear the strongest echoes.
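For a sense of scale, the apparent “answer” from the virtual source arrives after the round-trip travel time of the sound. A minimal sketch, assuming a speed of sound of 343 m/s and an illustrative cliff distance:

```python
def echo_delay_s(distance_m, c=343.0):
    """Round-trip travel time of an echo from a reflecting cliff face
    `distance_m` away; the reply seems to come from a virtual source the
    same distance behind the rock."""
    return 2.0 * distance_m / c

print(round(echo_delay_s(50.0), 2), "seconds")  # about 0.29 s for a cliff 50 m away
```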
Listen to an echo at a rock art site in the Grand Canyon (click here).


Watch a video of an echoing rock art site in Utah

WallerFig2_DSCN0662_CedarSpringsTX2010

Figure 2. This figure on the Pecos River in Texas is painted in a shallow shelter with interesting acoustics.
2. Whisper Galleries = Disembodied Voices (relevant to parabolic shelters)
Just as light reflected in a concave mirror can focus to give a “real image” floating in front of the surface, a shallow rock shelter can focus sound waves like a parabolic dish. Sounds from unseen sources miles away can be focused to result in an auditory illusion of disembodied voices coming from thin air right next to you. Such rock shelters were often considered places of power, and were decorated with mysterious paintings. These shelters can also act like loud-speakers to broadcast sounds outward, such that listeners at great distances would wonder why they could not see who was making the sounds.

WallerFig3_HoofedCavePaintingsIndia

Figure 3. This stampede of hoofed animals is painted in a cave with thunderous reverberation in central India.

3. Reverberation = Thunder from Hoofed Animals (relevant to cave paintings)
Echoes of percussion noises can sound like hoof beats. Multiple echoes of a simple clap in a cavern blur together into thunderous reverberation, which mimics the sound of the thundering herds of stampeding hoofed animals painted in prehistoric caves. Ancient myths describe thunder as the hoof beats of supernatural gods. I realized that the reverberation in caves must have given the auditory illusion of thunder, and thus inspired cave paintings suggesting that the same mythical hoofed thunder gods who cause thunder in the sky also cause thunder in the underworld.
Listen to thunderous reverberation of a percussion sound in a prehistoric cave in France (click here).

 

 

4. Ricochets = “Boo-o-o!” (relevant to ghostly hauntings)
Can you hear the ricochet reminiscent of a ghostly “Boo” in this recording of a clap in a highly reverberant room?

WallerFig4_DSCN2779a_DNM_flute

 

Figure 4. A petroglyph of a flute player in an echoing location within Dinosaur National Monument.
5. Resonance = spritely music (relevant to cave and canyon paintings)
Listen to the difference between a flute being played in a non-echoing environment, then how haunting it sounds if played in a cave (click here);

it is as if spirit musicians are in accompaniment. (Thanks to Simon Wyatt for the flute music, to which half-way through I added cave acoustics via the magic of a convolution reverberation program.)
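That kind of convolution is easy to reproduce. A minimal sketch, with hypothetical file names, that convolves a dry recording with a measured cave impulse response using SciPy:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; any dry (non-echoing) recording and any measured
# cave impulse response will do, as long as the sample rates match.
fs, dry = wavfile.read("flute_dry.wav")
fs_ir, ir = wavfile.read("cave_impulse_response.wav")
assert fs == fs_ir, "resample one of the files so the rates match"

if dry.ndim > 1:                 # mix stereo to mono for simplicity
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

wet = fftconvolve(dry.astype(float), ir.astype(float))  # flute "played in the cave"
wet /= np.max(np.abs(wet)) + 1e-12                      # normalize to avoid clipping
wavfile.write("flute_in_cave.wav", fs, (wet * 32767).astype(np.int16))
```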

WallerFig5_rippletank12 nodes 3D w stonehenge perspective

Figure 5. An interference pattern from two sound sources such as bagpipes can cause the auditory illusion that the silent zones are acoustic shadows from a megalithic stone circle, and vice versa.
6. Interference Patterns = Acoustic Shadows of a Ring of Pillars (relevant to Stonehenge and Pipers’ Stones)
Mysterious silent zones in an empty field can give the impression of a ring of large phantom objects casting acoustic shadows. Two sound sources, such as bagpipes playing the same tone, can produce an interference pattern. Zones of silence radiating outward occur where the high pressure of sound waves from one source cancels out the low pressure of sound waves from the other source. Blindfolded participants hearing an interference pattern in controlled experiments attributed the dead zones to the presence of acoustic obstructions in an arrangement reminiscent of Stonehenge.
These experimental results demonstrate that regions of low sound intensity due to destructive interference of sound waves from musical instruments can be misperceived as an auditory illusion of acoustic shadows cast by a ring of large rocks:
WallerFig6_CFuller_InterferenceRocks

Figure 6. Drawing by participant C. Fuller after hearing interference pattern blindfolded, as envisioned from above (shown on left), and in perspective from ground level (shown on right).

I then visited the U.K. and measured the actual acoustic shadows radiating out from Stonehenge and other megalithic stone circles, and demonstrated that the pattern of alternating loud and quiet zones recreates a dual-source sound wave interference pattern. My theory that musical interference patterns served as blueprints for megalithic stone circles (many of which are named “Pipers’ Stones”) is supported by ancient legends that two magic pipers enticed maidens to dance in a circle until they all turned to stone.
Listen for yourself to the similarity between sound wave interference as I walk around two flutes in an empty field (click here), and acoustic shadows as I walk around a megalithic Pipers’ Stone circle (click here); both have similar modulations between loud and quiet. How would you have explained this if you couldn’t see what was “blocking” the sound?
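A minimal sketch of how two in-phase sources produce alternating loud and quiet zones along a walk around them; the source spacing, tone frequency, and walking radius are illustrative assumptions, not measurements from the field tests.

```python
import numpy as np

def interference_level_db(x, y, src1=(-1.0, 0.0), src2=(1.0, 0.0), f=440.0, c=343.0):
    """Relative sound level (dB) at (x, y) from two in-phase tone sources.
    Quiet zones appear where the difference in path length to the two sources
    is an odd number of half wavelengths. All parameters are illustrative."""
    k = 2.0 * np.pi * f / c
    r1 = np.hypot(x - src1[0], y - src1[1])
    r2 = np.hypot(x - src2[0], y - src2[1])
    pressure = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2
    return 20.0 * np.log10(np.abs(pressure) + 1e-12)

# Walk a 10 m circle around the pair of sources, like walking around two
# flutes in an empty field, and note the alternating loud and quiet zones.
theta = np.linspace(0.0, 2.0 * np.pi, 720)
levels = interference_level_db(10.0 * np.cos(theta), 10.0 * np.sin(theta))
print(round(levels.max() - levels.min(), 1), "dB swing between loud and quiet zones")
```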

 

Conclusions:
Complex behaviors of sound such as reflection and interference (which scientists today explain by sound wave theory and dismiss as acoustical artifacts) can give rise experimentally to psychoacoustic misperceptions in which unseen sonic phenomena are attributed to the invisible or supernatural. The significance of this research is that it can help explain the motivation for some of humankind's most mysterious behaviors and greatest artistic achievements.

There are several implications and applications of this work. It shows that acoustical phenomena were culturally significant to ancient peoples, which leads to the immediate conclusion that the natural soundscapes of archaeological sites should be preserved for further study and greater appreciation. It also demonstrates that, even today, sensory input can be used to manipulate perception and produce spooky illusions inconsistent with scientific reality, which could have practical applications for virtual reality and special effects in entertainment media.

A key point from this research is that objectivity is questionable, since a given set of data can be used to support multiple conclusions. For example, an echo can be taken as “proof” of either an echo spirit or sound wave reflection. Likewise, based only on their interpretation of sounds heard in an empty field, people can be made to believe there is a ring of huge rocks taller than themselves. The history of humanity is full of such misinterpretations, such as the visual illusion that the sun propels itself across the sky above a flat earth. Sound, being invisible and having complex properties, readily gives rise to auditory illusions of the supernatural. This raises a more general question: what other perceptual illusions are we currently living under because of phenomena we are misinterpreting?

 

 
See https://sites.google.com/site/rockartacoustics/ for further detail.

Auditory Illusions of Supernatural Spirits: Archaeological Evidence and Experimental Results

Steven J. Waller — wallersj@yahoo.com
Rock Art Acoustics
5415 Lake Murray Boulevard #8
La Mesa, CA 91942

Popular version of paper 4pAAa1
Presentation Thursday afternoon, October 30, 2014
Session: “Acoustic Trick-or-Treat: Eerie Noises, Spooky Speech, and Creative Masking”
168th Acoustical Society of America Meeting, Indianapolis, IN

4pAAa10 – Eerie voices: Odd combinations, extremes, and irregularities. – Brad Story

The human voice is a pattern of sound generated by both the mind and body, and it carries information about a speaker’s mental and physical state. Qualities such as gender, age, physique, dialect, health, and emotion are often embedded in the voice, and can produce sounds that are comforting and pleasant, intense and urgent, sad and happy, and so on. The human voice can also project a sense of eeriness when the sound contains qualities that are human-like but not typical of the speech heard on a daily basis. A person with an unusually large head and neck, for example, may produce highly intelligible speech, but it will be oddly dominated by low-frequency sounds that betray the atypical size of the talker. Excessively slow or fast speaking rates, strangely timed and irregular speech, as well as breathiness and tremor, may also contribute to eeriness if produced outside the boundaries of typical speech.

The sound pattern of the human voice is produced by the respiratory system, the larynx, and the vocal tract. The larynx, located at the bottom of the throat, is comprised of a left and right vocal fold (often referred to as vocal cords) and a surrounding framework of cartilage and muscle. During breathing the vocal folds are spread far apart to allow for an easy flow of air to and from the lungs. To generate sound they are brought together firmly, allowing air pressure to build up below them. This forces the vocal folds into vibration, creating the sound waves that are the “raw material” to be formed into speech by the vocal tract. The length and mass of the vocal folds largely determine the vocal pitch and vocal quality. Small, light vocal folds will generally produce a high-pitched sound, whereas a low pitch typically originates from large, heavy vocal folds.

The vocal tract is the airspace created by the throat and the mouth whose shape at any instant of time depends on the positions of the tongue, jaw, lips, velum, and larynx. During speech it is a continuously changing tube-like structure that “sculpts” the raw sound produced by the vocal folds into a stream of vowels and consonants. The size and shape of the vocal tract imposes another layer of information about the talker. A long throat and large mouth may transmit the impression of a large body while more subtle characteristics like the contour of the roof of the mouth may add characteristics that are unique to the talker.

For this study, speech was simulated with a mathematical representation of the vocal folds and vocal tract. Such simulations allow for modifications of size and shape of structures, as well as temporal aspects of speech. The goal was to simulate extremes in vocal tract length, unusual timing patterns of speech movements, and odd combinations of breathiness and tremor. The result can be both eerie and amusing because the sounds produced are almost human, but not quite.

Three examples are included to demonstrate these effects. The first is a set of seven simulations of the word “abracadabra” produced while gradually decreasing the vocal tract length from 22 cm to 6.6 cm, increasing the vocal pitch from very low to very high, and increasing the speaking rate from slow to fast. The longest and shortest vocal tracts are shown in Figure 1 and are both configured as “ah” vowels; for production of the entire word, the vocal tract shape continuously changes. The set of simulations can be heard in sound sample 1.
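To get a feel for how strongly tract length alone shifts the resonances, here is a minimal sketch that steps a uniform closed-open tube through seven lengths from 22 cm down to 6.6 cm; the geometric spacing and the simple tube formula are illustrative assumptions, far cruder than the simulation model used in the study.

```python
import numpy as np

# Seven vocal tract lengths stepped (geometrically, an assumption) from 22 cm
# down to 6.6 cm, with the first resonance of a uniform closed-open tube for
# each one. This is only a crude stand-in for the article's simulation model.
lengths_cm = np.geomspace(22.0, 6.6, 7)
c = 350.0                                   # speed of sound in warm air, m/s
first_resonance_hz = c / (4.0 * lengths_cm / 100.0)
for length, f1 in zip(lengths_cm, first_resonance_hz):
    print(f"{length:5.1f} cm tract -> first resonance near {f1:4.0f} Hz")
```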

Although it may be tempting to assume that the changes present in sound sample 1 are similar to simply increasing the playback speed of the audio, the changes are based on physiological scaling of the vocal tract and vocal folds, as well as an increase in the speaking rate. Sound sample 2 contains the same seven simulations except that the speaking rate is exactly the same in each case, eliminating the sense of increased playback speed.

The third example demonstrates the effects of modifying the timing of the vowels and consonants within the word “abracadabra” while simultaneously adding a shaky or tremor-like quality, and an increased amount of breathiness. A series of six simulations can be heard in sound sample 3; the first three versions of the word are based on the structure of an unusually large male talker, whereas the second three are representative of an adult female talker.

The simulation model used for these demonstrations has been developed for studying and understanding human speech production and speech development. Using the model to investigate extreme structures and unusual timing patterns is useful for better understanding the limits of human speech.

 

story

Figure 1 caption:
Unnaturally long and short tube-like representations of the human vocal tract. Each vocal tract is configured as an “ah” vowel (as in “hot”), but during speech the vocal tract continuously changes shape. Vocal tract lengths for typical adult male and adult female talkers are approximately 17.5 cm and 15 cm, respectively. Thus, the 22 cm tract would be representative of a person with an unusually large head and neck, whereas the 6.6 cm vocal tract is even shorter than that of a typical infant.

 

Brad Story – bstory@email.arizona.edu
Dept. of Speech, Language, and Hearing Sciences
University of Arizona
P.O. Box 210071
Tucson, AZ 85712

Popular version of paper 4pAAa10
Presented Thursday afternoon, October 30, 2014
168th ASA Meeting, Indianapolis

 

4pAAa13 – Impact of Room Acoustics on Emotional Response – Martin Lawless, Michelle C. Vigeant

Background

Music has the potential to evoke powerful emotions, both positive and negative. When listening to an enjoyable piece or song, an individual can experience intense, pleasurable “chills” that signify a surge of dopamine and activations in certain regions of the brain, such as the ventral striatum [1] (see Fig. 1). Conversely, regions of the brain associated with negative emotions, for instance the parahippocampal gyrus, can activate during the presentation of music without harmony or a distinct rhythmic pattern [2]. Prior research has shown that the nucleus accumbens (NAcc) in the ventral striatum specifically activates during reward processing [3], even if the stimulus does not present a tangible benefit, such as that from food, sex, or drugs [4-6].

Lawless_Figure1

Figure 1: A cross-section of the human brain detailing (left) the ventral striatum, which houses the nucleus accumbens (NAcc), and (right) the parahippocampal gyrus.

Even subtle changes in acoustic (sound) stimuli can affect experiences positively or negatively. In terms of concert hall design, the acoustical characteristics of a room, such as reverberance (the lingering of sound in the space), contribute significantly to an individual’s perception of music and in turn influence room acoustics preference [7, 8]. As with music, different regions of the brain should activate depending on how pleasing the stimulus is to the listener. For instance, a reverberant stimulus may evoke a positive emotional response in listeners who appreciate reverberant rooms (e.g., a concert hall), while negative emotional regions may be activated in those who prefer drier rooms (e.g., a conference room). Identifying which regions of the brain are activated by changes in reverberance will provide insight for future research into other acoustic attributes that contribute to preference, such as the sense of envelopment.

Methods

Stimuli

The acoustic stimuli presented to the participants ranged in perceived reverberance from anechoic to very reverberant conditions, e.g., a large cathedral. Example stimuli, similar to those used in the study, can be heard using the links below. As you listen to the excerpts, pay attention to how the characteristics of the sound change even though the classical piece remains the same.

Example Reverberant Stimuli:

Short                Medium                 Long

The set of stimuli with varying levels of reverberation was created by convolving an anechoic recording of a classical excerpt with a synthesized impulse response (IR) representing the IR of a concert hall. The synthesized IR was double-sloped (see Fig. 2a), such that the early part of the response was consistent between the different conditions but the late reverberation differed. As shown in Fig. 2b, the late parts of the IRs vary greatly, while the first 100 milliseconds overlap. The reverberation times (RT) of the stimuli varied from 0 to 5.33 seconds.
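A minimal sketch of one way to synthesize such a double-sloped impulse response from exponentially decaying noise; the knee time, decay values, and sampling rate below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def double_slope_ir(fs=44100, early_rt=1.0, late_rt=5.33, knee_s=0.1, dur_s=6.0):
    """Crude double-sloped impulse response: exponentially decaying noise whose
    decay rate changes at `knee_s`, so the early part can be held constant
    across conditions while the late reverberation time (`late_rt`) varies.
    All parameter values here are illustrative."""
    t = np.arange(int(fs * dur_s)) / fs
    decay = np.log(1000.0)                      # 60 dB of decay is a factor of 1000
    early = np.exp(-decay * t / early_rt)
    late = np.exp(-decay * knee_s / early_rt) * np.exp(-decay * (t - knee_s) / late_rt)
    envelope = np.where(t < knee_s, early, late)
    return envelope * np.random.randn(len(t))

ir = double_slope_ir()   # convolve an anechoic recording with this IR to add reverberance
```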

Lawless_Figure2


Figure 2: Impulse responses for the four synthesized conditions: (a) the full impulse response; (b) the same responses from 0 to 1 second, highlighting the overlapping early part of the IR.

Functional magnetic resonance imaging (fMRI) was used to locate the regions of the brain activated by the stimuli. To find these regions, the activations obtained with each musical stimulus are compared with those obtained with a control stimulus, which in this study was noise. Examples of control stimuli matched to the musical excerpts provided above can be heard using the links below. The noise stimuli were matched to have the same rhythm and frequency content as the musical stimulus for each reverberant condition.
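One common recipe for building such spectrally and rhythmically matched noise is to randomize the FFT phases of the original recording and then re-impose its amplitude envelope; whether the study used exactly this method is an assumption, so the sketch below is illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def matched_noise(signal):
    """Control noise with roughly the same long-term spectrum and rhythm as
    `signal`: randomize the FFT phases (preserving the magnitude spectrum),
    then re-impose the original broadband amplitude envelope. This is one
    common recipe, not necessarily the one used in the study."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.fft.rfft(signal)
    random_phase = np.exp(1j * np.random.uniform(0.0, 2.0 * np.pi, len(spectrum)))
    noise = np.fft.irfft(np.abs(spectrum) * random_phase, n=len(signal))
    noise /= np.max(np.abs(noise)) + 1e-12
    noise *= np.abs(hilbert(signal))        # restore the original rhythm
    return noise
```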

Example Noise Stimuli:

Short                Medium                 Long

Experimental Design

A total of 10 stimuli were used in the experiment: five acoustic stimuli and five corresponding noise stimuli, and each stimulus was presented eight times. Each stimulus presentation lasted for 16 seconds. After each presentation, the participant was given 10 seconds to rate the stimulus in terms of preference on a five-point scale, where -2 was equal to “Strongly Dislike,” 0 was “Neither Like Nor Dislike,” and +2 was “Strongly Like.”

Results

The following data represent the results of one participant averaged over the total number of repeated stimuli presentations. The average preference ratings for the five musical stimuli are shown in Fig. 3. While the majority of the ratings were not statistically different, the general trend is that the preference ratings were higher for the stimuli with the 1-2 second RTs and lowest for the excessively long RT of 5.33 seconds. These results are consistent with a pilot study that was run with seven subjects, and in particular, the stimulus with the 1.44 second RT was found to have the highest preference rating.

Lawless_Figure3

Figure 3: Average preference ratings for the five acoustic stimuli.

The fMRI results were found to be in agreement for the highest-rated stimulus, with an RT of 1.44 seconds. Brain activations were found in regions shown to be associated with positive emotions and reward processing: the right ventral striatum (p<0.001) (Fig. 4a) and the left and right amygdala (p<0.001) (Fig. 4b). No significant activations were found in regions associated with negative emotions for this stimulus, which supports the original hypothesis. In contrast, a preliminary analysis of a second participant’s results suggests that activations occurred in areas linked to negative emotions for the lowest-rated stimulus, the one with the longest reverberation time of 5.33 seconds.

Lawless_Figure4

Figure 4: Acoustic Stimulus > Noise Stimulus (p<0.001) for RT = 1.44 s showing activation in the (a) right ventral striatum, and (b) the left and right amygdala.

Conclusion

A first-level analysis of one participant exhibited promising results that support the hypothesis, which is that a stimulus with a high preference rating will lead to activation of regions of the brain associated with reward (in this case, the ventral striatum and the amygdala). Further study of additional participants will aid in the identification of the neural mechanism engaged in the emotional response to stimuli of varying reverberance.

References:

1. Blood, AJ and Zatorre, RJ. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. PNAS. 2001, Vol. 98, 20, pp. 11818-11823.

2. Blood, AJ, et al. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience. 1999, Vol. 2, 4, pp. 382-387.

3. Schott, BH, et al. Mesolimbic functional magnetic resonance imaging activations during reward anticipation correlate with reward-related ventral striatal dopamine release. Journal of Neuroscience. 2008, Vol. 28, 52, pp. 14311-14319.

4. Menon, V and Levitin, DJ. The rewards of music listening: Response and physiological connectivity of the mesolimbic system. NeuroImage. 2005, Vol. 28, pp. 175-184.

5. Salimpoor, VN., et al. Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience. 2011, Vol. 14, 2, pp. 257-U355.

6. Salimpoor, VN., et al. Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science. 2013, Vol. 340, pp. 216-219.

7. Beranek, L. Concert hall acoustics. J. Acoust. Soc. Am. 1992, Vol. 92, 1, pp. 1-39.

8. Schroeder, MR, Gottlob, D and Siebrasse, KF. Comparative study of European concert halls: correlation of subjective preference with geometric and acoustic parameters. J. Acoust. Soc. Am. 1974, Vol. 56, 4, pp. 1195-1201.

 

Impact of Room Acoustics on Emotional Response

Martin Lawless – msl224@psu.edu

Michelle C. Vigeant, Ph.D. – mcv3@psu.edu

Graduate Program in Acoustics

Pennsylvania State University

Popular version of paper 4pAAa13

Presented Thursday afternoon, October 30, 2014

168th ASA Meeting, Indianapolis