2pAB9 – Vocal behavior of Southeast Alaskan humpback whales: context matters

Michelle Fournet – michelle.fournet@gmail.com
Oregon State University
425 SE Bridgeway Ave
Corvallis, OR 97333

David K. Mellinger – david.k.mellinger@noaa.gov
Cooperative Institute for Marine Resources Studies, Oregon State University
NOAA Pacific Marine Environmental Laboratory
2030 SE Marine Science Dr.
Newport, OR 97365

Lay language paper 2pAB9
Presented Tuesday Afternoon, October 28th, 2014
168th ASA Meeting, Indianapolis

Humpback whales (Megaptera novaeangliae) were made famous by the discovery that male whales sing long, complex songs on the breeding grounds [1]. Humpbacks, however, also produce a wide range of sounds throughout their range—purrs, shrieks, whups, moans, and more—that have received considerably less attention [2-5]. Unlike song, which is produced exclusively by male whales and serves a presumptive breeding purpose, these ‘non-song vocalizations’ are produced by males, females, and juveniles alike [3, 4, 6-10], although the context in which these sounds are used remains largely unknown.

The ocean is getting louder. As shipping throughout the North Pacific, and across the world, continues to increase, humpback whales and many other acoustically oriented marine animals run the risk of being negatively impacted by an inundation of man-made (anthropogenic) noise. Noise from large shipping vessels in particular may acoustically mask humpback whale vocalizations, preventing animals from detecting one another (Figure 1).


Figure 1 – Humpback whales increasingly share the ocean with vessels ranging in size from cruise ships to zodiacs. All motorized vessels have the potential to introduce some noise into the marine environment. As acoustically oriented animals, humpback whales produce a wide range of vocalizations to communicate, though the function of these calls is not yet understood.

The ability to adapt to these changing ocean conditions may be critical for the success of the species and the ecosystems they inhabit. Recognizing adaptation in the face of a changing ocean is contingent on understanding vocal behavior now, in a relatively quiet ocean, and comparing it to future behavior. Understanding patterns of use and the role of non-song vocal behavior in humpback whale communication allows for a more comprehensive assessment of the potential risks of increasing anthropogenic noise.


Figure 2 – A recent study of Southeast Alaskan humpback whales found that whales produce at least 16 unique call types that fall into one of four vocal classes. Example spectrograms (visual representations of sound) of calls from each class are shown above: (L-R) Low-Frequency Harmonic, Pulsed, Noisy/Complex, Tonal.

Sound File 1 – An example of a Southeast Alaskan ‘whup’ call (file missing)
Sound File 2 – An example of a Southeast Alaskan ‘swop’ call

Sound File 3 – An example of a Southeast Alaskan ‘Ascending Shriek’ (file missing)
Sound File 4 – An example of a Southeast Alaskan ‘Feeding Call’

In Southeast Alaska humpback whales are known to produce at least sixteen unique vocalizations that fall into four vocal classes (Figure 2, Sounds 1-4). In this study we investigated whether call types from each vocal class were used equally, and what impact social interaction between animals may have on vocal behavior. We found that, unlike song, which is highly stereotyped and repeated throughout the breeding season, the use of non-song calls on foraging grounds is at least somewhat context driven and may be spatially mediated. For example, the use of pulsed (P) calls, including wops, swops, and horse calls, increased as whales clustered together on a foraging ground. Furthermore, as clustering increased the vocal behavior of the whales grew more diverse, indicating that as the opportunity for close-range interaction increased, the information conveyed with vocalizations grew more complex.

Not all calls were used equally; some calls, like the Southeast Alaskan “Growl” and “Whup” calls, dominated the soundscape, while other calls with structure more reminiscent of song were relatively uncommon. The whup and growl calls, which have been proposed as contact calls, made up more than half of the vocalizations detected throughout the study. While the discrete function of these and other non-song vocalizations is still unknown, this study indicates that non-song vocalizations serve a communicative function that may be social in nature. Work like this lays the foundation for investigation into discrete call function and vocal resilience, two topics which will play a key role in understanding the vocal behavior of humpback whales and how they respond to increasing anthropogenic noise in our world’s oceans (Figure 3).


Figure 3 – Humpback whales are considered a medium-sized baleen whale, weighing approximately 35 tons and reaching lengths of 30-50 feet.

  1. Payne, R.S. and S. McVay, Songs of humpback whales. Science, 1971. 173(3397): p. 585-597.
  2. Dunlop, R.A., D.H. Cato, and M.J. Noad, Non-song acoustic communication in migrating humpback whales (Megaptera novaeangliae). Marine Mammal Science, 2008. 24(3): p. 613-629.
  3. Stimpert, A.K., et al., Common humpback whale (Megaptera novaeangliae) sound types for passive acoustic monitoring. Journal of the Acoustical Society of America, 2011. 129(1): p. 476-82.
  4. Fournet, M., Vocal repertoire of Southeast Alaska humpback whales (Megaptera novaeangliae), in Marine Resource Management. 2014, Oregon State University.
  5. Rekdahl, M.L., et al., Temporal stability and change in the social call repertoire of migrating humpback whales. Journal of the Acoustical Society of America, 2013. 133(3): p. 1785-1795.
  6. Dunlop, R.A., et al., The social vocalization repertoire of east Australian migrating humpback whales (Megaptera novaeangliae). Journal of the Acoustical Society of America, 2007. 122(5): p. 2893-905.
  7. Silber, G.K., The relationship of social vocalizations to surface behavior and aggression in the Hawaiian humpback whale (Megaptera novaeangliae). Canadian Journal of Zoology, 1986. 64(10): p. 2075-2080.
  8. Cerchio, S. and M. Dahlheim, Variations in feeding vocalizations of humpback whales (Megaptera novaeangliae) from southeast Alaska. Bioacoustics, 2001. 11(4): p. 277-295.
  9. Stimpert, A.K., et al., ‘Megapclicks’: acoustic click trains and buzzes produced during night-time foraging of humpback whales (Megaptera novaeangliae). Biology letters, 2007. 3(5): p. 467-70.
  10. Zoidis, A.M., et al., Vocalizations produced by humpback whale (Megaptera novaeangliae) calves recorded in Hawaii. Journal of the Acoustical Society of America, 2008. 123(3): p. 1737-46.

4pAAa2 – Uncanny Acoustics: Phantom Instrument Guides at Ancient Chavín de Huántar, Peru

Miriam Kolar, Ph.D. – mkolar@amherst.edu
AC# 2255, PO Box 5000
Architectural Studies Program & Dept. of Music
Amherst College
Amherst, MA 01002

Popular version of paper 4pAAa2. Pututus, Resonance and Beats: Acoustic Wave Interference Effects at Ancient Chavín de Huántar, Perú
Presented Thursday afternoon, October 30, 2014
168th ASA Meeting, Indianapolis
See also: Archaeoacoustics: Re-Sounding Material Culture

Excavated from Pre-Inca archaeological sites high in the Peruvian Andes, giant conch shell horns known as “pututus” have been discovered far from the tropical sea floor these marine snails once inhabited.

Fig. 1a: Excavation of a Chavín pututu at Chavín de Huántar, 2001. Photo by John Rick.


Fig. 1 B-C: Chavín pututus: decorated 3,000-year-old conch shell horns from the Andes, on display at the Peruvian National Museum in Chavín de Huántar. Photos by José Luis Cruzado.

At the 3,000-year-old ceremonial center Chavín de Huántar, carvings on massive stone blocks depict humanoid figures holding and perhaps blowing into the weighty shells. A fragmented ceramic orb depicts groups of conches or pututus separated from spiny oysters by rectilinear divisions on its relief-modeled surface. Fossil sea snail shells are paved into the floor of the site’s Circular Plaza.

Fig. 2: Depictions of pututu players on facing stones in the Circular Plaza at Chavín. Photo by José Luis Cruzado & Miriam Kolar.

Pututus are the only known musical or sound-producing instruments from Chavín, whose monumental stone architecture was constructed and used over several centuries during the first millennium B.C.E.

Fig. 3 (VIDEO): Chavín’s monumental stone-and-earthen-mortar architecture towers above plazas and encloses kilometers of labyrinthine corridors, rooms, and canals. Video by José Luis Cruzado and Miriam Kolar, with soundtrack of a Chavín pututu performed by Tito La Rosa in the Museo Nacional Chavín.

How, by whom, and in what cultural contexts were these instruments played at ancient Chavín? What was their significance? How did they sound, and what sonic effects could have been produced between pututus and Chavín’s architecture or landform surroundings? Such questions haunt and intrigue archaeoacousticians, who apply the science of sound to material traces of the ancient past. Acoustic reconstructions of ancient buildings, instruments, and soundscapes can help us connect with our ancestors through experiential analogy. Computer music pioneer Dr. John Chowning and archaeologist Dr. John Rick founded the Chavín de Huántar Archaeological Acoustics Project (https://ccrma.stanford.edu/groups/chavin/) to discover more.

Material traces of past life––such as artifacts of ancient sound-producing instruments and architectural remains––provide data from which to reconstruct ancient sound. Nineteen use-worn Strombus galeatus pututus were unearthed at Chavín in 2001 by teams led by Stanford University’s Rick. Following initial sonic evaluation by Rick and acoustician David Lubman (ASA 2002), a comprehensive assessment of their acoustics and playability was made in 2008 by Dr. Perry Cook and researchers based at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA).

Fig. 4: Dr. Perry Cook performs acoustic measurements of the Chavín pututus. Photo by José Luis Cruzado.

Transforming an empty room at the Peruvian National Museum at Chavín into a musical acoustics lab, we established a sounding-tone range for these specific instruments from about 272 Hz to 340 Hz (frequencies corresponding to a few notes ascending from around Middle C on the piano), and charted their harmonic structure.
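The note comparison in parentheses can be checked with the standard equal-temperament conversion (A4 = 440 Hz). This is a generic formula, not part of the study’s analysis; a minimal sketch:

```python
import math

# Convert a frequency to the nearest equal-tempered piano note
# (A4 = 440 Hz reference; a generic conversion, not from the study).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz):
    # MIDI note number: 69 corresponds to A4 = 440 Hz; 12 steps per octave
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

# The measured pututu sounding-tone range
print(nearest_note(272.0))  # C#4 -- just above middle C (C4, ~261.6 Hz)
print(nearest_note(340.0))  # F4  -- a few notes higher
```

The endpoints land on C#4 and F4, consistent with “a few notes ascending from around Middle C.”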

Fig. 5 (VIDEO): Dr. Perry Cook conducting pututu measurements with Stanford CCRMA team. Video by José Luis Cruzado.

Back at CCRMA, Dr. Jonathan Abel led audio digital signal processing to map their strong directionality, and to track the progression of sound waves through their exponentially spiraling interiors. This data constitutes a digital archive of the shell instrument sonics, and drives computational acoustic models of these so-called Chavín pututus (ASA 2010; Flower World 2012; ICTM 2013).

Where does data meet practice? How could living musicians further inform our study? Cook’s expertise as a winds and shell player allowed him to evaluate the Chavín pututus’ playability with respect to a variety of other instruments, and to produce a range of articulations. Alongside the acoustic measurement sessions, Peruvian master musician Tito La Rosa offered a performative journey, a meditative ritual beginning and ending with the sound of human breath, the source of pututu sounding. This reverent approach took us away from our laboratory perspectives for a moment, and pushed us to consider not only the performative dynamics of voicing the pututus, but their potential for nuanced sonic expression.

Fig. 6 (VIDEO): Tito La Rosa performs one of the Chavín pututus in the Museo Nacional Chavín. Video by Cobi van Tonder.

When Cook and La Rosa played pututus together, we noted the strong acoustic “beats” that result when shell horns’ similar frequencies constructively and destructively interfere, producing an amplitude variation at a much lower frequency. Some nearby listeners described this as a “warbling” or “throbbing” of the tone, and said they thought that the performers were creating this effect through a performance technique (not so; it’s a well-known acoustic wave-interference phenomenon; see Hartmann 1998: 393-396).
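The warble rate equals the difference between the two tones’ frequencies. A minimal numerical sketch of the underlying sum-to-product identity, using illustrative frequencies from within the measured 272-340 Hz range (not an actual measured instrument pair):

```python
import math

# Two shell horns sounding at slightly different fundamentals (illustrative
# values, not measured data from a specific pair of pututus).
f1, f2 = 300.0, 304.0  # Hz

# The two tones combine into one tone at the average frequency whose
# amplitude swells and fades at the difference frequency -- the "beat".
beat_rate = abs(f1 - f2)    # perceived warble: 4 beats per second
carrier = (f1 + f2) / 2.0   # pitch heard: 302 Hz

# Numerical check of the identity:
#   cos(a) + cos(b) = 2 cos((a-b)/2) cos((a+b)/2)
for t in (0.0, 0.01, 0.13, 0.2501):
    direct = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
    factored = 2 * math.cos(math.pi * (f1 - f2) * t) * math.cos(2 * math.pi * carrier * t)
    assert abs(direct - factored) < 1e-9

print(beat_rate, carrier)  # 4.0 302.0
```

Because the listener hears only the slow amplitude envelope, the 4 Hz variation is perceived as throbbing of a single 302 Hz tone rather than as two distinct pitches.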

Fig. 7 (VIDEO): José Cruzado and Swiss trombonist Michael Flury demonstrate amplitude “beats” between replica pututus in Chavín’s Doble Ménsula Gallery. Video by Miriam Kolar.

If present-day listeners are unaware of an acoustics explanation for a sound effect, how might ancient listeners have understood and attributed such a sound? A pututu player would know that s/he was not articulating this warble, yet would be subject to its strong sensations. How would this visceral experience be interpreted? Might it be experienced as a phantom force?

The observed acoustic beating effect between pututus was so impressive that we sought to reproduce it during our on-site tests of architectural acoustics using replica shell horns. CCRMA Director Dr. Chris Chafe joined us, and he and Rick moved through Chavín’s labyrinthine corridors, blasting and droning pututus in different articulations to identify and excite acoustic resonances in the confined interior “galleries” of the site.


Fig. 8: CCRMA Director Chris Chafe and archaeologist John Rick play replica pututus to test the acoustics of Chavín’s interior galleries. Photos by José Luis Cruzado.

The short reverberation times of Chavín’s interior architecture allow the pututus to be performed as percussive instruments in the galleries (ASA 2008). However, the strong modal resonances of the narrow corridors, alcoves, and rooms also support sustained tonal production, in an acoustically fascinating way. Present-day pututu players have reported the experience of their instruments’ tones being “pulled into tune” with these architectural resonances. This eerie effect is both sonic and sensed, an acoustic experience that is not only heard, but felt through the body, an external force that seemingly influences the way the instrument is played.

Fig. 9 (AUDIO MISSING): Resonant compliance: Discussion of phantom tuning effect as Kolar and Cruzado perform synchronizing replica pututus in the Laberintos Gallery at Chavín. Audio by Miriam Kolar.

From an acoustical science perspective, what could be happening? As is well known from musical acoustics research (e.g., Fletcher and Rossing 1998), shell horns are blown-open lip-reed or lip-valve instruments, terminology that refers to the physical dynamics of their sounding. Mechanically speaking, the instrument player’s lips vibrate (or “buzz”) in collaborative resonance with the oscillations produced within the air column of the pututu’s interior, known in instrument lingo as its “bore”. Novice players may have great difficulty producing sound, or immediately generate a strong tone; there’s not one typical tendency, though producing higher, lower, or sustained tones requires greater control.

Experienced pututu players such as Cook and La Rosa can change their lip vibrations to increase the frequency––and therefore raise the perceived pitch––that the shell horn produces. To drop the pitch below the instrument’s natural sounding tone (the fundamental resonant frequency of its bore), the player can insert a hand in the lip opening, or “bell”, of the shell horn. Instrument players also modify intonation by altering the shape of their vocal tracts. This vocal tract modification is produced intuitively, by “feel”, and may involve several different parts of that complex sound-producing system.

Strong architectural acoustic resonance can “couple”, or join, with the air column in the instrument, which is itself coupled to that of the player’s vocal tract (with the player’s lips opening and closing in between). When the oscillatory frequencies of the player’s lips, those within the air column of his or her vocal tract, the pututu bore resonance, and the corridor resonance are synchronized, the effect can produce a strong sensation of immersion in the acoustic environment for the performer. The pututu is “tuned” to the architecture: both performer and shell horn are acoustically compliant with the architectural resonance.

When a second pututu player joins the first in the resonant architectural location, both players may share the experience of having their instrument tones guided into tune with the space, yet at the same time, sense the synchrony between their instruments. The closer together the shell openings, the more readily their frequencies will synchronize with each other. As Cook has observed, “if players are really close together, the wavefronts can actually get into the shells, and the lips of the players can phase lock.” (Interview between Kolar & Cook 2011: https://ccrma.stanford.edu/groups/chavin/interview_prc.html).

Fig. 10 (VIDEO): Kolar and Cruzado performing resonance-synchronizing replica pututus in the Laberintos Gallery at Chavín. Video by Miriam Kolar.

From the human interpretive perspective, what might pututu players in ancient Chavín have thought about these seemingly phantom instrument guides? A solo pututu performer who sensed the architectural and instrumental acoustic coupling might understand this effect to be externally driven, but how would s/he attribute the phenomenon? Would it be thought of as embodied by the instrument being played, or as an intervention of an otherworldly power, or an effect of some other aspect of the ceremonial context? Pairs or multiple performers experiencing the resonant pull might attribute the effect to the skill of a powerful lead player, with or without command of supernatural forces. Such interpretations are motivated by archaeological interpretations of Chavín as a cult center or religious site where social hierarchy was developing (Rick 2006).

However these eerie sonics might have been understood by people in ancient Chavín, from an acoustics perspective we can theorize and demonstrate complex yet elegant physical dynamics that are reported to produce strong experiential effects. Chavín’s phantom forces––however their causality might be interpreted––guide the sound of its instruments into resonant synchrony with each other and its architecture.

Chavín de Huántar Archaeological Acoustics Project: https://ccrma.stanford.edu/groups/chavin/

(ASA 2002): Rick, John W., and David Lubman. “Characteristics and Speculations on the Uses of Strombus Trumpets found at the Ancient Peruvian Center Chavín de Huántar”. (Abstract). In Journal of the Acoustical Society of America 112/5, 2366, 2002.

(ASA 2010): Cook, Perry R., Abel, Jonathan S., Kolar, Miriam A., Huang, Patty, Huopaniemi, Jyri, Rick, John W., Chafe, Chris, and Chowning, John M. “Acoustic Analysis of the Chavín Pututus (Strombus galeatus Marine Shell Trumpets)”. (Abstract). Journal of the Acoustical Society of America, Vol. 128, No. 2, 359, 2010.

(Flower World 2012): Kolar, Miriam A., with Rick, John W., Cook, Perry R., and Abel, Jonathan S. “Ancient Pututus Contextualized: Integrative Archaeoacoustics at Chavín de Huántar, Perú”. In Flower World – Music Archaeology of the Americas, Vol. 1. Eds. M. Stöckli and A. Both. Ekho VERLAG, Berlin, 2012.

(ICTM 2013): Kolar, Miriam A. “Acoustics, Architecture, and Instruments in Ancient Chavín de Huántar, Perú: An Integrative, Anthropological Approach to Archaeoacoustics and Music Archaeology”. In Music & Ritual: Bridging Material & Living Cultures. Ed. R. Jiménez Pasalodos. Publications of the ICTM Study Group on Music Archaeology, Vol. 1. Ekho VERLAG, Berlin, 2013.

(Hartmann 1998): Hartmann, William M. Signals, Sound, and Sensation. Springer-Verlag, New York, 1998.

(ASA 2008): Abel, Jonathan S., Rick, John W., Huang, Patty P., Kolar, Miriam A., Smith, Julius O. / Chowning, John. “On the Acoustics of the Underground Galleries of Ancient Chavín de Huántar, Peru”. (Abstract). Journal of the Acoustical Society of America, Vol. 123, No. 3, 605, 2008.

(Fletcher and Rossing 1998): Fletcher, Neville H., and Thomas D. Rossing. The Physics of Musical Instruments. Springer-Verlag, New York, 1998.

Kolar and Cook Interview 2011: https://ccrma.stanford.edu/groups/chavin/interview_prc.html

(Rick 2006): Rick, John W. “Chavín de Huántar: Evidence for an Evolved Shamanism”. In Mesas and Cosmologies in the Central Andes (Douglas Sharon, ed.), 101-112. San Diego Museum Papers 44, San Diego, 2006.

4pAAa13 – Impact of Room Acoustics on Emotional Response

Martin Lawless – msl224@psu.edu
Michele Vigeant, Ph.D. – mcv3@psu.edu

Graduate Program in Acoustics
Pennsylvania State University
Popular version of paper 4pAAa13
Presented Thursday afternoon, October 30, 2014
168th ASA Meeting, Indianapolis
See also: Sensitivity of the human auditory cortex and reward network to reverberant musical stimuli

Music has the potential to evoke powerful emotions, both positive and negative. When listening to an enjoyable piece or song, an individual can experience intense, pleasurable “chills” that signify a surge of dopamine and activations in certain regions of the brain, such as the ventral striatum [1] (see Fig. 1). Conversely, regions of the brain associated with negative emotions, for instance the parahippocampal gyrus, can activate during the presentation of music without harmony or a distinct rhythmic pattern [2]. Prior research has shown that the nucleus accumbens (NAcc) in the ventral striatum specifically activates during reward processing [3], even if the stimulus does not present a tangible benefit, such as that from food, sex, or drugs [4-6].

Figure 1: A cross-section of the human brain detailing (left) the ventral striatum, which houses the nucleus accumbens (NAcc), and (right) the parahippocampal gyrus.

Even subtle changes in acoustic (sound) stimuli can affect experiences positively or negatively. In terms of concert hall design, the acoustical characteristics of a room, such as reverberance (the lingering of sound in the space), contribute significantly to an individual’s perception of music and in turn influence room acoustics preference [7, 8]. As with music, different regions of the brain should activate depending on how pleasing the stimulus is to the listener. For instance, a reverberant stimulus may evoke a positive emotional response in listeners who appreciate reverberant rooms (e.g., a concert hall), while negative emotional regions may be activated in those who prefer drier rooms (e.g., a conference room). The identification of which regions of the brain are activated by changes in reverberance will provide insight for future research investigating other acoustic attributes that contribute to preference, such as the sense of envelopment.


The acoustic stimuli presented to the participants ranged in levels of perceived reverberance from anechoic to very reverberant conditions, e.g., a large cathedral. Example stimuli, which are similar to those used in the study, can be heard using the links below. As you listen to the excerpts, pay attention to how the characteristics of the sound change even though the classical piece remains the same.

Example Reverberant Stimuli:




The set of stimuli with varying levels of reverberation was created by convolving an anechoic recording of a classical excerpt with a synthesized impulse response (IR) representing the IR of a concert hall. The synthesized IR was double-sloped (see Fig. 2a) such that the early part of the response was consistent across conditions, but the late reverberation differed. As shown in Fig. 2b, the late parts of the IR vary greatly, while the first 100 milliseconds overlap. The reverberation times (RTs) of the stimuli varied from 0 to 5.33 seconds.
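The convolution step described above can be sketched in a few lines. The signals below are toy sample lists chosen for illustration, not the study’s actual recording or impulse response:

```python
# Minimal sketch of how a "dry" (anechoic) recording is given a room's
# reverberance: convolve it with the room impulse response (IR), so that
# every IR sample contributes a scaled, delayed echo of the input.

def convolve(dry, ir):
    """Direct-form discrete convolution."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

dry = [1.0, 0.5, 0.25]        # three samples of a toy "dry" excerpt
ir = [1.0, 0.6, 0.36, 0.2]    # early reflections, then a decaying tail

wet = convolve(dry, ir)       # the reverberant ("wet") version
print([round(v, 4) for v in wet])  # [1.0, 1.1, 0.91, 0.53, 0.19, 0.05]
```

Lengthening the decaying tail of the IR is what raises the reverberation time of the resulting stimulus, while leaving its early samples unchanged keeps the early response consistent across conditions.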

Figure 2: Impulse responses for the four synthesized conditions: (a) the total impulse response, (b) time scale from 0 to 1 second to highlight the early part of the IR.

Functional magnetic resonance imaging (fMRI) was used to locate the regions of the brain that were activated by the stimuli. In order to find these regions, the images obtained due to the musical stimuli are each compared with the activations resulting due to control stimuli, which for this study were noise stimuli. Examples of control stimuli that are matched to the musical ones provided earlier can be heard using the links below. The noise stimuli were matched to have the same rhythm and frequency content for each reverberant condition.

Example Noise Stimuli:




Experimental Design
A total of 10 stimuli were used in the experiment: five acoustic stimuli and five corresponding noise stimuli, and each stimulus was presented eight times. Each stimulus presentation lasted for 16 seconds. After each presentation, the participant was given 10 seconds to rate the stimulus in terms of preference on a five-point scale, where -2 was equal to “Strongly Dislike,” 0 was “Neither Like Nor Dislike,” and +2 was “Strongly Like.”

The following data represent the results of one participant averaged over the total number of repeated stimuli presentations. The average preference ratings for the five musical stimuli are shown in Fig. 3. While the majority of the ratings were not statistically different, the general trend is that the preference ratings were higher for the stimuli with the 1-2 second RTs and lowest for the excessively long RT of 5.33 seconds. These results are consistent with a pilot study that was run with seven subjects, and in particular, the stimulus with the 1.44 second RT was found to have the highest preference rating.

Figure 3: Average preference ratings for the five acoustic stimuli.

The fMRI results were found to be in agreement for the highest-rated stimulus with an RT of 1.44 seconds. Brain activations were found in regions shown to be associated with positive emotions and reward processing: the right ventral striatum (p<0.001) (Fig. 4a) and the left and right amygdala (p<0.001) (Fig. 4b). No significant activations were found in regions associated with negative emotions for this stimulus, which supports the original hypothesis. In contrast, a preliminary analysis of a second participant’s results possibly indicates that activations occurred in areas linked to negative emotions for the lowest-rated stimulus, the one with the longest reverberation time of 5.33 seconds.

Figure 4: Acoustic Stimulus > Noise Stimulus (p<0.001) for RT = 1.44 s showing activation in the (a) right ventral striatum, and (b) the left and right amygdala.

A first-level analysis of one participant exhibited promising results that support the hypothesis, which is that a stimulus with a high preference rating will lead to activation of regions of the brain associated with reward (in this case, the ventral striatum and the amygdala). Further study of additional participants will aid in the identification of the neural mechanism engaged in the emotional response to stimuli of varying reverberance.

1. Blood, AJ and Zatorre, RJ Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. PNAS. 2001, Vol. 98, 20, pp. 11818-11823.

2. Blood, AJ, et al. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience. 1999, Vol. 2, 4, pp. 382-387.

3. Schott, BH, et al. Mesolimbic functional magnetic resonance imaging activations during reward anticipation correlate with reward-related ventral striatal dopamine release. Journal of Neuroscience. 2008, Vol. 28, 52, pp. 14311-14319.

4. Menon, V and Levitin, DJ. The rewards of music listening: Response and physiological connectivity of the mesolimbic system. NeuroImage. 2005, Vol. 28, pp. 175-184.

5. Salimpoor, VN., et al. Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience. 2011, Vol. 14, 2, pp. 257-U355.

6. Salimpoor, VN., et al. Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science. 2013, Vol. 340, pp. 216-219.

7. Beranek, L. Concert hall acoustics. J. Acoust. Soc. Am. 1992, Vol. 92, 1, pp. 1-39.

8. Schroeder, MR, Gottlob, D and Siebrasse, KF. Comparative study of European concert halls: correlation of subjective preference with geometric and acoustic parameters. J. Acoust. Soc. Am. 1974, Vol. 56, 4, pp. 1195-1201.

4pAAa12 – Hearing voices in the high frequencies: What your cell phone isn’t telling you

Brian B. Monson – bmonson@research.bwh.harvard.edu

Department of Pediatric Newborn Medicine
Brigham and Women’s Hospital
Harvard Medical School
75 Francis St
Boston, MA 02115

Popular version of paper 4pAAa12: Are you hearing voices in the high frequencies of human speech and voice?
Presented Thursday afternoon, October 30, 2014
168th ASA Meeting, Indianapolis

Ever noticed how, or wondered why, people sound different on your cell phone than in person? You might already know that the reason is that a cell phone doesn’t transmit all of the sounds the human voice creates. Specifically, cell phones don’t transmit very low-frequency sounds (below about 300 Hz) or high-frequency sounds (above about 3,400 Hz). The voice can, and typically does, make sounds at very high frequencies in the “treble” audio range (from about 6,000 Hz up to 20,000 Hz) in the form of vocal overtones and noise from consonants. Your cell phone cuts all of this out, however, leaving it up to your brain to “fill in” if you need it.


Figure 1. A spectrogram showing acoustical energy up to 20,000 Hz (on a logarithmic axis) created by a male human voice. The current cell phone bandwidth (dotted line) only transmits sounds between about 300 and 3400 Hz. High-frequency energy (HFE) above 6000 Hz (solid line) has information potentially useful to the brain when perceiving singing and speech.

What are you missing out on? One way to answer this question is to have individuals listen to only the high frequencies and report what they hear. We can do this using conventional signal processing methods: cut out everything below 6,000 Hz, transmitting only the sounds above 6,000 Hz to the listener’s ear. When we do this, some listeners hear only chirps and whistles, but most normal-hearing listeners report hearing voices in the high frequencies. Strangely, some voices are very easy to hear out in the high frequencies, while others are quite difficult. The reason for this difference is not yet clear. You might experience this phenomenon if you listen to the following clips of high frequencies from several different voices. (You’ll need a good set of high-fidelity headphones or speakers to ensure you’re getting the high frequencies.)
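Removing everything below a cutoff is ordinary high-pass filtering. A toy first-order high-pass (far gentler than the steep filters used in listening studies, and purely illustrative) shows the principle: a tone inside the cell-phone band is suppressed while a tone above 6,000 Hz passes nearly unchanged.

```python
import math

def highpass(x, fs, fc):
    """First-order RC high-pass: attenuates components below cutoff fc.
    (A toy illustration; a listening study would use a much steeper filter.)"""
    rc = 1.0 / (2 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(alpha * (y[-1] + x[n] - x[n - 1]))
    return y

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

fs = 44100  # CD-quality rate captures energy up to 22,050 Hz
fc = 6000   # cutoff: keep only the "treble" band discussed above

# 100 ms each of a low tone (inside the phone band) and a high tone
t = [n / fs for n in range(fs // 10)]
low = [math.sin(2 * math.pi * 500 * ti) for ti in t]
high = [math.sin(2 * math.pi * 10000 * ti) for ti in t]

print(rms(highpass(low, fs, fc)) < 0.2 * rms(low))    # True: low tone suppressed
print(rms(highpass(high, fs, fc)) > 0.5 * rms(high))  # True: high tone retained
```

Whatever energy survives this kind of filtering is exactly the high-frequency residue listeners are asked to identify voices from.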


Until recently, these treble frequencies were only thought to affect some aspects of voice quality or timbre. If you try playing with the treble knob on your sound system you’ll probably notice the change in quality. We now know, however, that it’s more than just quality (see Monson et al., 2014). In fact, the high frequencies carry a surprising amount of information about a vocal sound. For example, could you tell the gender of the voices you heard in the examples? Could you tell whether they were talking or singing? Could you tell what they were saying or singing? (Hint: the words are lyrics to a familiar song.) Most of our listeners could accurately report all of these things, even when we added noise to the recordings.


Figure 2. A frequency spectrum (on a linear axis) showing the energy in the high frequencies combined with speech-shaped low-frequency noise.


What does this all mean? Cell phone and hearing aid technology is now attempting to include transmission of the high frequencies. It is tempting to speculate how inclusion of the high frequencies in cell phones, hearing aids, and even cochlear implants might benefit listeners. Lack of high-frequency information might be why we sometimes experience difficulty understanding someone on our phones, especially when sitting on a noisy bus or at a cocktail party. High frequencies might be of most benefit to children who tend to have better high-frequency hearing than adults. And what about quality? High frequencies certainly play a role in determining voice quality, which means vocalists and sound engineers might want to know the optimal amount of high-frequency energy for the right aesthetic. Some voices naturally produce higher amounts of high-frequency energy, and this might contribute to how well you like that voice. These possibilities give rise to many research questions we hope to pursue in our study of the high frequencies.



Monson, B. B., Hunter, E. J., Lotto, A. J., and Story, B. H. (2014). “The perceptual significance of high-frequency energy in the human voice,” Frontiers in Psychology, 5, 587, doi: 10.3389/fpsyg.2014.00587.

4pAAa1 – Auditory Illusions of Supernatural Spirits: Archaeological Evidence and Experimental Results

Steven J. Waller — wallersj@yahoo.com
Rock Art Acoustics
5415 Lake Murray Boulevard #8
La Mesa, CA 91942

Popular version of paper 4pAAa1
Presentation Thursday afternoon, October 30, 2014
Session: “Acoustic Trick-or-Treat: Eerie Noises, Spooky Speech, and Creative Masking”
168th Acoustical Society of America Meeting, Indianapolis, IN

Introduction: Auditory illusions
The ear can be tricked by ambiguous sounds, just as the eye can be fooled by optical illusions. Sound reflection, whisper galleries, reverberation, ricochets, and interference patterns were perceived in the past as eerie sounds attributed to invisible echo spirits, thunder gods, ghosts, and sound-absorbing bodies. These beliefs in the supernatural were recorded in ancient myths, and expressed in tangible archaeological evidence as canyon petroglyphs, cave paintings, and megalithic stone circles including Stonehenge. Controlled experiments demonstrate that certain ambiguous sounds cause blindfolded listeners to believe in the presence of phantom objects.
Figure 1. This prehistoric pictograph of a ghostly figure in Utah’s Horseshoe Canyon will answer you back.

1. Echoes = Answers from Echo Spirits (relevant to canyon petroglyphs)
Voices coming out of solid rock gave our ancestors the impression of echo spirits calling out from the rocks. Just as light reflection in a mirror gives an illusion of yourself duplicated as a virtual image, sound waves reflecting off a surface are mathematically identical to sound waves emanating from a virtual sound source behind a reflecting plane such as a large cliff face. This can result in an auditory illusion of somebody answering you from deep within the rock. It struck me that canyon petroglyphs might have been made in response to hearing echoes and believing that the echo spirits dwelt in rocky places. Ancient myths contain descriptions of echo spirits that match prehistoric petroglyphs, including witches that hide in sheep bellies and snakeskins. My acoustic measurements have shown that the artists chose to place their art precisely where they could hear the strongest echoes. Listen to an echo at a rock art site in the Grand Canyon (click here).

Watch a video of an echoing rock art site in Utah

Figure 2. This figure on the Pecos River in Texas is painted in a shallow shelter with interesting acoustics.

2. Whisper Galleries = Disembodied Voices (relevant to parabolic shelters)
Just as light reflected in a concave mirror can focus to give a “real image” floating in front of the surface, a shallow rock shelter can focus sound waves like a parabolic dish. Sounds from unseen sources miles away can be focused to result in an auditory illusion of disembodied voices coming from thin air right next to you. Such rock shelters were often considered places of power, and were decorated with mysterious paintings. These shelters can also act like loudspeakers to broadcast sounds outward, such that listeners at great distances would wonder why they could not see who was making the sounds.
Figure 3. This stampede of hoofed animals is painted in a cave with thunderous reverberation in central India.

3. Reverberation = Thunder from Hoofed Animals (relevant to cave paintings)
Echoes of percussion noises can sound like hoof beats. Multiple echoes of a simple clap in a cavern blur together into thunderous reverberation, which mimics the sound of the thundering herds of stampeding hoofed animals painted in prehistoric caves. Ancient myths describe thunder as the hoof beats of supernatural gods. I realized that the reverberation in caves must have given the auditory illusion of being thunder, and thus inspired the cave paintings depicting the idea that the same mythical hoofed thunder gods that cause thunder in the sky also cause thunder in the underworld.
Listen to thunderous reverberation of a percussion sound in a prehistoric cave in France (click here).

4. Ricochets = “Boo-o-o!” (relevant to ghostly hauntings)
Can you hear the ricochet reminiscent of a ghostly “Boo” in this recording of a clap in a highly reverberant room?

Figure 4. A petroglyph of a flute player in an echoing location within Dinosaur National Monument.

5. Resonance = spritely music (relevant to cave and canyon paintings)
Listen to the difference between a flute played in a non-echoing environment and how haunting it sounds when played in a cave:

It is as if spirit musicians are in accompaniment. (Thanks to Simon Wyatt for the flute music; halfway through, I added cave acoustics via the magic of a convolution reverberation program.)
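Convolution reverberation works by convolving the dry recording with an impulse response of a space. The snippet below is a toy version of that operation; the exponentially decaying noise burst is an invented stand-in for a real impulse response, which would normally be recorded or simulated in the cave itself.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(fs // 2) / fs
dry = np.sin(2 * np.pi * 440 * t)          # dry "flute" tone, half a second

# Toy cave impulse response: a direct arrival followed by a dense tail of
# exponentially decaying reflections (a real IR is measured in the space).
rng = np.random.default_rng(0)
ir = 0.05 * rng.standard_normal(fs) * np.exp(-3.0 * np.arange(fs) / fs)
ir[0] = 1.0                                # the direct sound

wet = fftconvolve(dry, ir)                 # the "played in a cave" version
wet /= np.max(np.abs(wet))                 # normalize to prevent clipping
```

The `wet` signal rings on after the dry tone ends, which is exactly the haunting tail heard in the cave recording.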
Figure 5. An interference pattern from two sound sources such as bagpipes can cause the auditory illusion that the silent zones are acoustic shadows from a megalithic stone circle, and vice versa.

6. Interference Patterns = Acoustic Shadows of a Ring of Pillars (relevant to Stonehenge and Pipers’ Stones)
Mysterious silent zones in an empty field can give the impression of a ring of large phantom objects casting acoustic shadows. Two sound sources, such as bagpipes playing the same tone, can produce an interference pattern. Zones of silence radiating outward occur where the high pressure of sound waves from one source cancels out the low pressure of sound waves from the other source. Blindfolded participants hearing an interference pattern in controlled experiments attributed the dead zones to the presence of acoustic obstructions in an arrangement reminiscent of Stonehenge.
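The geometry behind those dead zones is easy to reproduce numerically. The sketch below is a toy calculation, not the author's field measurements: the 3 m source spacing, the 220 Hz tone, and the 20 m walking radius are all invented for illustration. It sums two spherical waves and counts the quiet zones a listener circling the pair would pass through.

```python
import numpy as np

f, c = 220.0, 343.0                # tone (Hz) and speed of sound (m/s)
lam = c / f                        # wavelength, about 1.56 m
sources = np.array([[-1.5, 0.0], [1.5, 0.0]])   # two "pipers", 3 m apart

# The listener walks a circle of radius 20 m around the pair.
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
pts = 20.0 * np.column_stack([np.cos(theta), np.sin(theta)])

r1 = np.linalg.norm(pts - sources[0], axis=1)
r2 = np.linalg.norm(pts - sources[1], axis=1)
# Coherent sum of two spherical waves (amplitude falls off as 1/r).
p = np.exp(2j * np.pi * r1 / lam) / r1 + np.exp(2j * np.pi * r2 / lam) / r2
level = np.abs(p)

# Count the near-silent zones (local minima of loudness) around the walk.
nulls = np.sum((level < np.roll(level, 1)) & (level < np.roll(level, -1)))
# For this spacing and tone the walk crosses 8 quiet zones.
```

A blindfolded listener walking that circle would hear the tone fade in and out eight times, with nothing visible to explain it.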
These experimental results demonstrate that regions of low sound intensity due to destructive interference of sound waves from musical instruments can be misperceived as an auditory illusion of acoustic shadows cast by a ring of large rocks:
Figure 6. Drawing by participant C. Fuller after hearing interference pattern blindfolded, as envisioned from above (shown on left), and in perspective from ground level (shown on right).

I then visited the U.K. and made measurements of the actual acoustic shadows radiating out from Stonehenge and other megalithic stone circles, and demonstrated that the pattern of alternating loud and quiet zones recreates a dual-source sound wave interference pattern. My theory that musical interference patterns served as blueprints for megalithic stone circles, many of which are named “Pipers’ Stones,” is supported by ancient legends that two magic pipers enticed maidens to dance in a circle and they all turned to stone.
Listen for yourself to the similarity between sound wave interference as I walk around two flutes in an empty field (click here), and acoustic shadows as I walk around a megalithic Pipers’ Stone circle (click here); both have similar modulations between loud and quiet. How would you have explained this if you couldn’t see what was “blocking” the sound?

Complex behaviors of sound such as reflection and interference (which scientists today explain by sound wave theory and dismiss as acoustical artifacts) can experimentally give rise to psychoacoustic misperceptions in which such unseen sonic phenomena are attributed to the invisible or supernatural. The significance of this research is that it can help explain the motivation for some of mankind’s most mysterious behaviors and greatest artistic achievements.

There are several implications and applications of my research. It shows that acoustical phenomena were culturally significant to ancient peoples, leading to the immediate conclusion that the soundscapes of archaeological sites should be preserved in their natural state for further study and greater appreciation. It demonstrates that even today sensory input can be used to manipulate perception and produce spooky illusions inconsistent with scientific reality, which could have interesting practical applications for virtual reality and special effects in entertainment media.

A key point to learn from my research is that objectivity is questionable, since a given set of data can be used to support multiple conclusions. For example, an echo can be taken as “proof” of either an echo spirit or sound wave reflection. Likewise, based solely on their interpretation of sounds heard in an empty field, people can be made to believe there is a ring of huge rocks taller than themselves. The history of humanity is full of such misinterpretations, like the visual illusion that the sun propels itself across the sky above a flat earth. Sound, being invisible and complex in its behavior, readily gives rise to auditory illusions of the supernatural. This leads to a more general question: what other perceptual illusions are we currently living under, due to phenomena we are misinterpreting?

See https://sites.google.com/site/rockartacoustics/ for further detail.