2pABa9 – Energetically speaking, do all sounds that a dolphin makes cost the same?

Marla M. Holt – marla.holt@noaa.gov
Dawn P. Noren – dawn.noren@noaa.gov
Conservation Biology Division
NOAA NMFS Northwest Fisheries Science Center
2725 Montlake Blvd East
Seattle, WA 98112

Robin C. Dunkin – rdunkin@ucsc.edu
Terrie M. Williams – tmwillia@ucsc.edu
Department of Ecology and Evolutionary Biology
University of California, Santa Cruz
100 Shaffer Road
Santa Cruz, CA 95060

Popular version of paper 2pABa9, “The metabolic costs of producing clicks and social sounds differ in bottlenose dolphins (Tursiops truncatus).”
Presented Tuesday afternoon, November 3, 2015, 3:15, City Terrace room
170th ASA Meeting Jacksonville

Dolphins are known to be quite vocal, producing a variety of sounds described as whistles, squawks, barks, quacks, pops, buzzes and clicks. These sounds can be tonal (think whistle) or broadband (think buzz), short or long, loud or soft. Some sounds, such as whistles, are used in social contexts for communication. Other sounds, such as clicks and buzzes, are used for echolocation, a form of active biosonar that is important for hunting fish [1]. Regardless of the type of sound, dolphins generate it in an anatomically unique way compared to other mammals. Most mammals, including humans, make sound in the throat, or more technically, in the larynx. In contrast, dolphins make sound in the nasal cavity via two sets of structures called the "phonic lips" [2].

All sound production comes at an energetic cost to the signaler [3]. That is, when an animal produces sound, its metabolic rate increases a certain amount above the baseline, or resting, metabolic rate. Additionally, many vociferous animals, including dolphins and other marine mammals, modify their acoustic signals in noise: they call louder, longer or more often in an attempt to be heard above the background din. Ocean noise levels are rising, particularly in areas with heavy shipping traffic and other anthropogenic activities, and this motivated a series of recent studies to understand the metabolic costs of sound production and vocal modification in dolphins.

We recently measured the energetic cost of both social sound and click production in dolphins and determined whether these costs increased when the animals increased the loudness or other parameters of their sounds [4,5]. Two bottlenose dolphins were trained to rest and vocalize under a specialized dome that allowed us to measure their metabolic rates while they made different kinds of sounds and while they rested (Figure 1). The dolphins also wore an underwater microphone (a hydrophone embedded in a suction cup) on their foreheads to keep track of vocal performance during trials. The amount of metabolic energy that the dolphins used increased as the total acoustic energy of the vocal bout increased, regardless of the type of sound the dolphin made. The results clearly demonstrate that higher vocal effort results in higher energetic cost to the signaler.


Figure 1 – A dolphin participating in a trial to measure metabolic rates during sound production.  Trials were conducted in Dr. Terrie Williams’ Mammalian Physiology lab at the University of California Santa Cruz.  All procedures were approved by the UC Santa Cruz Institutional Animal Care and Use Committee and conducted under US National Marine Fisheries Service permit No.13602.

These recent results allow us to compare the metabolic costs of producing different sound types. However, the average total energy content of the sounds produced per trial differed depending on the dolphin subject and on whether the dolphins were producing social sounds or clicks. Since metabolic cost depends on vocal effort, comparisons of metabolic cost across sound types need to be made at equal sound energy.

The relationship between energetic cost and vocal effort for social sounds allowed us to predict metabolic costs of producing these sounds at the same sound energy as in click trials.  The results, shown in Figure 2, demonstrate that bottlenose dolphins produce clicks at a very small fraction of the metabolic cost of producing whistles of equal energy.  These findings are consistent with empirical observations demonstrating that considerably higher air pressure within the dolphin nasal passage is required to generate whistles compared to clicks [1].  This pressurized air is what powers sound production in dolphins and toothed whales [1] and mechanistically explains the observed difference in metabolic cost between the different sound types.
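
As a rough illustration of the matched-energy comparison described above, the sketch below (in Python, with invented placeholder numbers rather than the study's data) fits a simple linear relationship between metabolic cost and total acoustic energy for social-sound trials and then uses it to predict the cost of social sounds at the acoustic energy measured in click trials. The study's actual statistical model may differ.

```python
import numpy as np

# Hypothetical per-trial data (illustrative only; not the study's values):
# total acoustic energy of each social-sound trial and the measured
# metabolic cost above resting for that trial.
social_energy = np.array([0.5, 1.0, 2.0, 3.5, 5.0])   # arbitrary energy units
social_cost   = np.array([2.1, 3.0, 4.8, 7.2, 9.9])   # arbitrary cost units

# Fit a simple linear relationship: cost = slope * energy + intercept.
slope, intercept = np.polyfit(social_energy, social_cost, 1)

# Predict what social sounds would cost at the acoustic energy actually
# produced in the click trials, so the two costs are compared at equal energy.
click_trial_energy = 0.8                               # illustrative value
predicted_social_cost = slope * click_trial_energy + intercept
print("predicted social-sound cost at click-trial energy:", predicted_social_cost)
```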


Figure 2 – Metabolic costs of producing social sounds and clicks of equal energy content within a dolphin subject.

Differences in metabolic costs of whistling versus clicking have implications for understanding the biological consequences of behavioral responses to ocean noise.  Across different sound types, metabolic costs depend on vocal effort.  Yet, overall costs of producing clicks are substantially lower than costs of producing whistles.  The results reported in this paper demonstrate that the biological consequences of vocal responses to noise can be quite different depending on the behavioral context of the animals affected, as well as the extent of the response.

 

  1. Au, W. W. L., The Sonar of Dolphins. New York: Springer-Verlag, 1993.
  2. Cranford, T. W., et al., Observation and analysis of sonar signal generation in the bottlenose dolphin (Tursiops truncatus): evidence for two sonar sources. Journal of Experimental Marine Biology and Ecology, 2011. 407: p. 81-96.
  3. Ophir, A. G., Schrader, S. B. and Gillooly, J. F., Energetic cost of calling: general constraints and species-specific differences. Journal of Evolutionary Biology, 2010. 23: p. 1564-1569.
  4. Noren, D. P., Holt, M. M., Dunkin, R. C. and Williams, T. M. The metabolic cost of communicative sound production in bottlenose dolphins (Tursiops truncatus). Journal of Experimental Biology, 2013. 216: 1624-1629.
  5. Holt, M. M., Noren, D. P., Dunkin, R. C. and Williams, T. M. Vocal performance affects metabolic rate in dolphins: implication for animals communicating in noisy environments. Journal of Experimental Biology, 2015. 218: 1647-1654.

4aAB2 – Seemingly simple songs: Black-capped chickadee song revisited

Allison H. Hahn – ahhahn@ualberta.ca
Christopher B. Sturdy – csturdy@ualberta.ca

University of Alberta
Edmonton, AB, Canada

Popular version of 4aAB2 – Seemingly simple songs: Black-capped chickadee song revisited
Presented Thursday morning, November 5, 8:55 AM, City Terrace Room
170th ASA Meeting, Jacksonville, FL

Vocal communication is a mode of communication important to many animal species, including humans. Over the past 60 years, songbird vocal communication has been widely studied, largely because the invention of the sound spectrograph allowed researchers to visually represent vocalizations and make precise acoustic measurements. Black-capped chickadees (Poecile atricapillus; Figure 1) are one example of a songbird whose song has been well studied. Black-capped chickadees produce a short (less than 2 seconds), whistled fee-bee song. Compared to the songs produced by many songbird species, which often contain numerous note types without a fixed order, black-capped chickadee song is relatively simple, containing two notes produced in the same order during each song rendition. Although the songs appear to be acoustically simple, they contain a rich variety of information about the singer, including dominance rank, geographic location, and individual identity [1,2,3].

Interestingly, while songbird song has been widely examined, most of the focus (at least for North Temperate Zone species) has been on male-produced song, largely because it was thought that only males actually produced song. However, there is now mounting evidence that in many songbird species both males and females produce song [4,5]. In the study of black-capped chickadees, the focus has also been on male-produced song; recently, however, we reported that female black-capped chickadees also produce fee-bee song. One possible reason that female song has not been extensively reported is that male and female chickadees look identical to human observers, so singing females may be mistakenly identified as males. However, by identifying each bird's sex (via DNA analysis) and recording both males and females, our work [6] has shown that female black-capped chickadees do produce fee-bee song. Additionally, these songs are overall acoustically similar to male song (songs of both sexes contain two whistled notes; see Figure 2), making vocal discrimination by humans difficult.

Our next objective was to determine whether any acoustic features varied between male and female songs. Using bioacoustic techniques, we were able to demonstrate that there are acoustic differences between male and female song, with females producing songs that contain a greater frequency decrease in the first note compared to male songs (Figure 2). These results demonstrate that there are sufficient acoustic differences to allow birds to identify the sex of a singing individual even in the absence of visual cues. Because birds may live in densely wooded environments, in which visual, but not auditory, cues are often obscured, being able to identify the sex of a singer (and whether the singer is a potential mate or a territory rival) would be an important ability.
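
As a rough illustration of how such a frequency decrease could be quantified, the sketch below (Python) estimates the peak frequency near the start and end of the first note from a spectrogram. The file name "feebee.wav" and the assumed note duration are hypothetical, and the study's actual bioacoustic measurements may have been made differently.

```python
import numpy as np
import soundfile as sf
from scipy.signal import spectrogram

# "feebee.wav" is a hypothetical mono recording of one fee-bee song.
audio, fs = sf.read("feebee.wav")

# Assume (for illustration) the first ("fee") note occupies roughly 0.0-0.4 s.
note = audio[: int(0.4 * fs)]

f, t, Sxx = spectrogram(note, fs=fs, nperseg=1024, noverlap=768)

# The peak frequency in the first and last few frames approximates the
# start and end pitch of the note; their difference is the frequency decrease.
start_freq = f[np.argmax(Sxx[:, :3].mean(axis=1))]
end_freq = f[np.argmax(Sxx[:, -3:].mean(axis=1))]
print("frequency decrease (Hz):", start_freq - end_freq)
```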

Following our bioacoustic analysis, an important next step was to determine whether birds are able to distinguish between male and female songs. To examine this, we used a behavioral paradigm that is common in animal learning studies: operant conditioning. Using this task, we were able to demonstrate that birds can distinguish between male and female songs; however, the particular acoustic features birds use to discriminate between the sexes may depend on the sex of the bird that is listening to the song. Specifically, we found evidence that male subjects responded based on information in the song's first note, while female subjects responded based on information in the song's second note [7]. One possible reason for this difference is that in the wild, males need to respond quickly to a rival male intruding on their territory, while females may assess the entire song to gather as much information as possible about the singing individual (for example, information regarding a potential mate's quality). While the exact function of female song is unknown, our studies clearly indicate that female black-capped chickadees produce songs and that the birds themselves can perceive differences between male and female songs.

Figure 1. An image of a black-capped chickadee.

Figure 2. Spectrogram (x-axis: time; y-axis: frequency in kHz) of a male song (top) and a female song (bottom).

Sound file 1. An example of a male fee-bee song.

Sound file 2. An example of a female fee-bee song.

References

  1. Hoeschele, M., Moscicki, M.K., Otter, K.A., van Oort, H., Fort, K.T., Farrell, T.M., Lee, H., Robson, S.W.J., & Sturdy, C.B. (2010). Dominance signalled in an acoustic ornament. Animal Behaviour, 79, 657–664.
  2. Hahn, A.H., Guillette, L.M., Hoeschele, M., Mennill, D.J., Otter, K.A., Grava, T., Ratcliffe, L.M., & Sturdy, C.B. (2013). Dominance and geographic information contained within black-capped chickadee (Poecile atricapillus) song. Behaviour, 150, 1601-1622.
  3. Christie, P.J., Mennill, D.J., & Ratcliffe, L.M. (2004). Chickadee song structure is individually distinctive over long broadcast distances. Behaviour 141, 101–124.
  4. Langmore, N.E. (1998). Functions of duet and solo songs of female birds. Trends in Ecology and Evolution, 13, 136–140.
  5. Riebel, K. (2003). The “mute” sex revisited: vocal production and perception learning in female songbirds. Advances in the Study of Behavior, 33, 49–86.
  6. Hahn, A.H., Krysler, A., & Sturdy, C.B. (2013). Female song in black-capped chickadees (Poecile atricapillus): Acoustic song features that contain individual identity information and sex differences. Behavioural Processes, 98, 98-105.
  7. Hahn, A.H., Hoang, J., McMillan, N., Campbell, K., Congdon, J., & Sturdy, C.B. (2015). Biological salience influences performance and acoustic mechanisms for the discrimination of male and female songs. Animal Behaviour, 104, 213-228.

1pABb1 – Mice ultrasonic detection and localization in laboratory environment

Yegor Sinelnikov – yegor.sinelnikov@gmail.com
Alexander Sutin, Hady Salloum, Nikolay Sedunov, Alexander Sedunov
Stevens Institute of Technology
Hoboken, NJ 07030

Tom Zimmerman, Laurie Levine
DLAR Stony Brook University
Stony Brook, NY 11790

David Masters
Department of Homeland Security
Science and Technology Directorate
Washington, DC

Popular version of poster 1pABb1, “Mice ultrasonic detection and localization in laboratory environment”
Presented Tuesday afternoon, November 3, 2015, 3:30 PM, Grand Ballroom 3
170th ASA Meeting, Jacksonville

The house mouse, Mus musculus, has historically shared the human environment without much permission. It lives in our homes, enjoys our husbandry, and passes through walls and administrative borders unnoticed and unaware of our wary attention. Over thousands of years of coexistence, mice have excelled at a carrot-and-stick approach, and an ordinary wild mouse brings both danger and cure to humans today. The danger takes the form of rodent-borne diseases; among them, plague epidemics, well remembered from European medieval history, continue to pose a threat to human health. The cure takes the form of mice lending themselves as research subjects for new therapeutic agents, owing to their genomic similarity to humans, small size, and short life span. Moreover, physiological similarities in inner ear construction and auditory brain responses, together with an unexpected richness in vocal signaling, have generated tremendous interest in mouse bioacoustics and emotion perception.

The goal of this work is to start addressing possible threats reportedly carried by invasive species crossing US borders unnoticed in multiple cargo containers. This study focuses on demonstrating the feasibility of acoustic detection of potential rodent intrusions.

Animals communicate with smell, touch, movement, visual signaling and sound. Mice have become well versed in these sensory abilities to face the challenge of sharing habitat with humans. Mice gave up color vision, developed an exceptional stereoscopic sense of smell, and learned to be deceptively quiet in the human auditory range, discreetly shifting their social acoustic interaction to higher frequencies. They predominantly use ultrasonic frequencies, above the human hearing range, as part of their day-to-day, non-aggressive social interaction. Intricate ultrasonic mouse songs, composed of multiple syllable sounds that often form complex phrases separated by periods of silence, are well known to researchers.

In this study, mouse sounds were recorded in a laboratory environment at an animal facility at Stony Brook University Hospital. The mice were allowed to move freely, a major condition for their vocalization in the ultrasonic range; confined to cages, mice did not produce ultrasonic signals. Four different microphones with flat ultrasonic frequency response were positioned in various arrangements and at various distances from the subjects, ranging from a few centimeters to several meters. An exemplary setup is shown in Figure 1. Three microphones, sensitive in the frequency range between 20 kHz and 100 kHz, were connected through preamplifiers and digital converters to a computer equipped with dedicated sound recording software. A fourth, calibrated microphone was used to measure the absolute sound level produced by a mouse. The spectrograms were monitored by an operator in real time to detect the onset of mouse communication and to streamline subsequent data processing.


Figure 1. Experimental setup showing the three microphones (a) on a table with an unrestrained mouse (b), recording equipment (preamplifiers and digitizers) (c), and a computer (d).

Listen to a single motif of mice ultrasonic vocalization and observe mouse movement here:

This sound fragment was down-converted (slowed down) by a factor of fifteen to make it audible. In reality, mouse social songs are well above the human audible range and are very fast. Spectrograms of mouse vocalizations at distances of 1 m and 5 m are shown in Figure 2. Vocalizations were detectable at 5 m and retained a recognizable pattern. Farther distances were not tested due to the size of the room.
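
For readers curious how such a down-conversion can be done, here is a minimal sketch assuming a hypothetical ultrasonic recording file "mouse_usv.wav"; the authors' actual processing chain is not described here. Re-labeling the samples with a fifteen-times-lower sample rate shifts every frequency down by a factor of fifteen and stretches time by the same factor.

```python
import soundfile as sf

# "mouse_usv.wav" is a hypothetical ultrasonic recording (e.g., 250 kHz sampling).
audio, fs = sf.read("mouse_usv.wav")

# Writing the same samples with a 15x lower sample rate shifts a 60 kHz
# syllable down to 4 kHz and stretches it 15x in time, making it audible.
sf.write("mouse_usv_slow15.wav", audio, int(fs / 15))
```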

Real-time detection of mouse vocalizations required a fast, noise-insensitive, and automated algorithm, so an innovative approach was needed. Recognizing that no animal communication comes close to being a language, we were nevertheless prompted by the richness and diversity of mouse ultrasonic vocalizations to apply speech-processing measures for their real-time detection. A number of generic speech-processing measures, such as temporal signal-to-noise ratio, cepstral distance, and likelihood ratio, were tested for detecting mouse vocalization events in the presence of background noise. These measures were calculated from the acoustic recordings and compared with conventional techniques, such as bandpass filtering, spectral power, or continuous monitoring of signal frames for the presence of expected tones.
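
As an illustration of the frame-based idea behind one such measure, the sketch below computes a temporal signal-to-noise ratio per short frame against an estimated noise floor and flags frames exceeding a threshold as candidate vocalization events. The file name, frame length, and threshold are assumptions for illustration; the study's other measures (cepstral distance, likelihood ratio) and its actual parameters are not reproduced here.

```python
import numpy as np
import soundfile as sf

audio, fs = sf.read("mouse_usv.wav")          # hypothetical ultrasonic recording

frame_len = int(0.01 * fs)                    # 10 ms frames
n_frames = len(audio) // frame_len
frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)

# Per-frame energy and a noise-floor estimate taken from the quietest frames.
energy = (frames ** 2).mean(axis=1)
noise_floor = np.percentile(energy, 10)

# Temporal SNR per frame, in dB; frames well above the floor are
# flagged as candidate vocalization events.
snr_db = 10 * np.log10(energy / (noise_floor + 1e-12))
events = np.where(snr_db > 10.0)[0]           # illustrative 10 dB threshold
print("candidate event frames:", events[:20])
```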


Figure 2. Sonograms of short ultrasonic vocalization syllables produced by mice at 1 m (left) and 5 m (right) from the microphones. The color scale is in decibels.

Although speech-processing measures were invented to assess human speech intelligibility, we found them applicable to acoustic mouse detection within a few meters. Leaving aside the question of whether mouse vocalizations are intelligible, we concluded that the selected speech-processing measures detected mouse vocalization events better than other generic signal-processing techniques.

As a secondary goal of this study, once vocalizations were acoustically detected they were processed to determine the animal's location. This is of particular interest for border patrol applications, where both acoustic detection and spatial localization are critical and where mouse movement has behavioral specificity. To demonstrate the feasibility of localization, detected vocalization events from each microphone pair were processed to determine the time difference of arrival (TDOA). The analysis was limited to nearby locations by the relatively short cabling system. Because the animals were moving freely on the surface of a laboratory table, roughly coplanar with the microphones, the TDOA values were converted to animal locations using a simple triangulation scheme. The process is illustrated schematically in Figure 3 for two selected microphones. Note that despite the low signal-to-noise ratio at microphone 2, the vocalization events were successfully detected. The cross-correlograms, calculated in the spectral domain with an empirical normalization to suppress the effect of uncorrelated noise, yielded reliable TDOAs. A simple check that the TDOAs around each microphone triple sum to zero was used as a consistency control. The calculated TDOAs were converted into spatial locations, which were assessed for correctness and for experimental and computational uncertainty, and compared with available video recordings. Despite a relatively high level of man-made noise, the TDOA-derived locations agreed well with the video recordings. The localization uncertainty was estimated to be on the order of the mouse's size, roughly several wavelengths at 50 kHz. A larger number of microphones is expected to improve detectability and enable more precise three-dimensional localization.
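
A minimal sketch of the two steps described above, assuming illustrative microphone coordinates and TDOA values rather than the study's measurements: cross-correlating a detected event between a microphone pair yields a TDOA, and a least-squares fit over all pairs finds the point on the table plane whose pairwise path-length differences match those TDOAs (the intersection of the hyperbolas in Figure 3).

```python
import numpy as np
from scipy.signal import correlate
from scipy.optimize import least_squares

C = 343.0  # approximate speed of sound in air, m/s

def estimate_tdoa(x, y, fs):
    """Time difference of arrival of the same event recorded in signals x and y."""
    cc = correlate(x, y, mode="full")
    lag = np.argmax(np.abs(cc)) - (len(y) - 1)
    return lag / fs

def localize(mic_xy, tdoas, pairs):
    """Least-squares 2-D source position from pairwise TDOAs (hyperbola intersection)."""
    def residuals(p):
        d = np.linalg.norm(mic_xy - p, axis=1)          # distance to each microphone
        return [(d[i] - d[j]) / C - tdoas[k] for k, (i, j) in enumerate(pairs)]
    return least_squares(residuals, x0=mic_xy.mean(axis=0)).x

# Hypothetical example: three microphones on a table (coordinates in meters).
mic_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
pairs = [(0, 1), (0, 2), (1, 2)]
# In practice each TDOA would come from estimate_tdoa() applied to the event
# windows recorded by the two microphones; these values are illustrative.
tdoas = [0.0005, -0.0003, -0.0008]

# Consistency check: TDOA(0,1) + TDOA(1,2) should roughly equal TDOA(0,2).
print("zero-sum check:", tdoas[0] + tdoas[2] - tdoas[1])
print("estimated position:", localize(mic_xy, tdoas, pairs))
```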

Hence, mouse ultrasonic social sounds are detectable by applying speech-processing techniques, their TDOAs are identifiable by cross-correlation, and those TDOAs provide decent spatial localization of the animals, in agreement with video observations.


Figure 3. The localization process. First, the detected vocalization events from two microphones (left) are paired and their cross-correlogram is calculated (middle). The maxima, marked by asterisks, define a set of identified TDOAs. The process is repeated for every pair of microphones. Second, triangulation is performed (right). The colored hyperbolas illustrate the possible locations of the animal on a laboratory table based on the calculated TDOAs; the intersection of the hyperbolas gives the location of the animal. The numbered squares mark the locations of the microphones.

The recording system constructed here is particularly relevant to the detection of mice in containers at US ports of entry, where low frequency noise levels are high. This pilot study confirms the feasibility of using Stevens Institute's ultrasonic recording system for simultaneous detection of mouse vocalization and movement.

This work was funded by the U.S. Department of Homeland Security’s Science and Technology Directorate. The views and conclusions contained in this paper are those of the authors and should not necessarily be interpreted as representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.

1pAB6 – Long-lasting suppression of spontaneous firing in inferior colliculus neurons: implication to the residual inhibition of tinnitus

A.V. Galazyuk – agalaz@neomed.edu
Northeast Ohio Medical University

Popular version of poster 1pAB6
Presented Monday afternoon, November 2, 2015, 3:25 PM – 3:45 PM, City Terrace 9
170th ASA Meeting, Jacksonville

More than one hundred years ago, US clinician James Spalding first described an interesting phenomenon he observed in tinnitus patients suffering from perceived phantom ringing [1]. Many of his patients reported that a loud, long-lasting sound produced by a violin or piano made their tinnitus disappear for about a minute after the sound was presented. Nearly 70 years later, the first scientific study was conducted to investigate how this phenomenon, termed residual inhibition, is able to provide tinnitus relief [2]. Further research has been conducted to understand the basic properties of this “inhibition of ringing” and to identify which sounds are most effective at producing residual inhibition [3].

The research indicated that indeed, residual inhibition is an internal mechanism for temporary tinnitus suppression. However, at present, little is known about the neural mechanisms underlying residual inhibition. Increased knowledge about residual inhibition may not only shed light on the cause of tinnitus, but also may open an opportunity to develop an effective tinnitus treatment.

For the last four years we have studied a fascinating phenomenon of sound processing in neurons of the auditory system that may provide an explanation of what causes the residual inhibition in tinnitus patients. After presenting a sound to a normal hearing animal, we observed a phenomenon where firing activity of auditory neurons is suppressed [4, 5]. There are several striking similarities between this suppression in the normal auditory system and residual inhibition observed in tinnitus patients:

  1. Relatively loud sounds trigger both the neuronal firing suppression and residual inhibition.
  2. Both the suppression and residual inhibition last for the same amount of time after a sound, and increasing the duration of the sound makes both phenomena last longer.
  3. Simple tones produce more robust suppression and residual inhibition compared to complex sounds or noises.
  4. Multiple attempts to induce suppression or residual inhibition within a short timeframe make both much weaker.

These similarities make us believe that the normal sound-induced suppression of spontaneous firing is an underlying mechanism of residual inhibition.

The most unexpected outcome of our research is that residual inhibition, a phenomenon reported in tinnitus patients, appears to reflect a natural feature of sound processing, because suppression was observed both in normal hearing mice and in mice with tinnitus. If so, why is it that people with tinnitus experience residual inhibition whereas those without tinnitus do not?

It is well known that hyperactivity in auditory regions of the brain has been linked to tinnitus, meaning that in tinnitus, auditory neurons have elevated spontaneous firing rates [6]. The brain then interprets this hyperactivity as phantom sound. Therefore, suppression of this increased activity by a loud sound should lead to elimination or suppression of tinnitus. Normal hearing people also have this suppression occurring after loud sounds. However spontaneous firing of their auditory neurons remains low enough that they never perceive the phantom ringing that tinnitus sufferers do. Thus, even though there is suppression of neuronal firing by loud sounds in normal hearing people, it is not perceived.

Most importantly, our research has helped us identify a group of drugs that can alter this suppression response [5], as well as the spontaneous firing of the auditory neurons responsible for tinnitus. These drugs will be further investigated in our future research to develop effective tinnitus treatments.

This research was supported by research grant R01 DC011330 from the National Institute on Deafness and Other Communication Disorders of the U.S. Public Health Service.

[1] Spalding J.A. (1903). Tinnitus, with a plea for its more accurate musical notation. Archives of Otology, 32(4), 263-272.

[2] Feldmann H. (1971). Homolateral and contralateral masking of tinnitus by noise-bands and by pure tones. International Journal of Audiology, 10(3), 138-144.

[3] Roberts L.E. (2007). Residual inhibition. Progress in Brain Research, Tinnitus: Pathophysiology and Treatment, Elsevier, 166, 487-495.

[4] Voytenko SV, Galazyuk AV. (2010) Suppression of spontaneous firing in inferior colliculus neurons during sound processing. Neuroscience 165: 1490-1500.

[5] Voytenko SV, Galazyuk AV (2011) mGluRs modulate neuronal firing in the auditory midbrain. Neurosci Lett. 492: 145-149

[6] Eggermont JJ, Roberts LE. (2015) Tinnitus: animal models and findings in humans. Cell Tissue Res. 361: 311-336.

2aAB7 – Nocturnal peace at a Conservation Center for Species Survival?

Suzi Wiseman – sw1210txstate@gmail.com
Texas State University-San Marcos
Environmental Geography
601 University Drive, San Marcos, Texas 78666

Preston S. Wilson – wilsonps@austin.utexas.edu
University of Texas at Austin
Mechanical Engineering Department
1 University Station C2200
Austin, TX 78712

Popular version of paper 2aAB7, “Nocturnal peace at a Conservation Center for Species Survival?”
Presented Tuesday morning, May 19, 2015, at 10:15 AM
169th ASA Meeting, Pittsburgh

The acoustic environment is essential to wildlife, providing vital information about prey, predators and the activities of other living creatures (biophonic information) (Wilson, 1984), about changing weather conditions and occasionally geophysical movement (geophonic), and about human activities (anthrophonic) (Krause 1987). Small sounds can be as critical as loud ones, depending on the species trying to listen. Some species hear infrasonically (too low for humans, generally considered below 20 Hz), others ultrasonically (too high, above 20 kHz). Biophonic soundscapes frequently exhibit temporal and seasonal patterns, for example a dawn “chorus”, mating and nurturing calls, and diurnal and crepuscular events.

Some people are attracted to large parks due in part to their “peace and quiet” (McKenna 2013). But even in a desert, a snake may be heard to slither or wind may sigh between rocks. Does silence in fact exist? Finding truly quiet places, in nature or in the built environment, is increasingly difficult. Even in our anechoic chamber, which was purpose-built to be extremely quiet and is located in the heart of our now very crowded and busy urban campus, we became aware of infrasound that penetrated from outside, possibly from nearby construction equipment or from heavy traffic that was not nearly as common when the chamber was first built more than 30 years ago. Is anywhere that contains life actually silent?


Figure 1: In the top window, the waveform in blue indicates the amplitude over time each time a pulse of sound was broadcast in the anechoic chamber, as shown in the spectrogram in the lower window, where frequency is shown over the same time span and color indicates the intensity of the sound (red being more intense than blue). Considerable very low frequency sound is evident; it can be seen between the pulses in the waveform (which should be silent) and throughout the bottom of the spectrogram. The blue dotted vertical lines show harmonics that were generated within the loudspeaker system. (Measurements in this study were made with a Roland R26 recorder and Earthworks M23 measurement microphones, frequency response 9 Hz to 23 kHz, ±1/-3 dB.)

As human populations increase, so do all forms of anthrophonic noise, often masking the sounds of nature. Does this noise cease at night, especially well away from major cities and when humans are not close by? This study analyzed the soundscape continuously recorded beside the southern white rhinoceros (Ceratotherium simum simum) enclosure at Fossil Rim Wildlife Center, about 75 miles southwest of Dallas, Texas, for a week during Fall 2013, to determine the quietest period each night and the acoustic environment in which these periods tended to occur. Rhinos hear infrasound, so the soundscape was measured from 0.1 Hz to 22,050 Hz. However, since frequencies below 9 Hz still need to be confirmed, these lowest frequencies were removed from this portion of the study.
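
As a rough sketch of how a quietest nightly window can be found from a calibrated recording (the file name and calibration factor below are assumptions, and the study's exact averaging is not specified here), the code computes an equivalent sound pressure level for each 15-minute block and selects the minimum.

```python
import numpy as np
import soundfile as sf

P_REF = 20e-6          # reference pressure, Pa
CAL = 1.0              # hypothetical calibration factor: pascals per sample unit

audio, fs = sf.read("fossil_rim_night.wav")      # hypothetical calibrated recording
pressure = audio * CAL

win = int(15 * 60 * fs)                          # 15-minute windows
n = len(pressure) // win
blocks = pressure[: n * win].reshape(n, win)

# Equivalent sound pressure level (Leq) of each window, in dB re 20 µPa.
leq = 10 * np.log10((blocks ** 2).mean(axis=1) / P_REF ** 2)

quietest = np.argmin(leq)
print(f"quietest window starts at {quietest * 15} min, Leq = {leq[quietest]:.1f} dB")
```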


Figure 2: Part of the white rhinoceros enclosure of Fossil Rim Wildlife Center, looking towards the tree line where the central recorder was placed


Figure 3: The sound pressure level throughout a relatively quiet day at the rhino enclosure. The loudest sounds were normally vehicles, machinery, equipment, aircraft, and crows. The 9 PM weather front was a major contrast.

Figure 3 illustrates the rhythm of a day at Fossil Rim as shown by the sound level of a fairly typical 24 hours starting from midnight, apart from the evening storm. As often occurred, the quietest period was between midnight and the dawn chorus.

While there were times during the day when birds and insects were at their most active and anthrophonic noise could not be heard above them, it was discovered that all quiet periods contained anthrophonic noise, even at night. There was generally a low frequency, low amplitude hum, at times steady and machine-like and not yet identified, and, depending on wind direction, often short hums from traffic on a state highway over a mile away. Quiet periods ranged from a few minutes to almost an hour, usually eventually broken by anthrophonic sounds such as vehicles on a nearby county road, high aircraft, or dogs barking on neighboring ranches. However, there was also a strong and informative biophonic presence, from insects to nocturnal birds and wildlife such as coyotes, to sounds made by the rhinos themselves and by other species at Fossil Rim. Geophonic intrusions were generally wind, thunder or rain, and possibly hail.

The quietest quarter hour was around 4 AM on the Friday depicted in Figure 3, but even then the absolute sound pressure level averaged 44.7 decibels, about the level of a quiet home or library. The wind was from the south-southeast at around 10 to 14 mph during this time. Audio clip 1 is the sound of this quiet period.


Figure 4: The quietest quarter hour recorded at Fossil Rim appears between the vertical red selection lines, with an average absolute sound pressure level of 44.5 decibels. The fairly constant waveform shown in blue in the top graph and the low frequency noise at the bottom of the spectrogram seemed to comprise the machine-like hum, the distant traffic hum (which varies over time), and insects. The blue flashes between 3 and 5 kHz were mainly bird calls.

By contrast, the loudest of the “quietest nightly periods” was less than six minutes long, around 5 AM on Wednesday, October 23, as shown between the vertical red lines in Figure 5. Despite being the quietest period that night, it averaged a sound pressure level of 55.5 decibels, roughly the equivalent of a spoken conversation.


Figure 5: The loudest “quietest period each night” reveals broadband machine noise (possibly road work equipment somewhere in the district?) which continued for some hours and appears as the blue flecks across all frequencies. The horizontal blue line at 16.5 kHz is characteristic of bats. All species identification is being left to biologists for confirmation. Audio clip 2 is this selection.

On either side of the “quiet” minutes were short bursts of low frequency but intense truck and/or other machine noise, indicated in red, some of which partially covered a clang when a rhino hit its fence with its horn, as well as distant barks, howls, moos and other vocalizations. The noise may have masked the extremely low frequency hums and insects that had been apparent on other nights, or it may have caused the insects to cease their activity. The strata below 2.5 kHz appear more ragged, indicating they are not being produced as uniformly as on quieter nights, and they are partially covered by the blue flecks of machine noise. However, the strata at 5.5, 8.5, 11 and especially 16.5 kHz that appeared on other nights are still evident; they appear to be birds, insects and bats. Audio clip 3 contains the sounds that broke this quiet period.

At no point during the entire week was anything closely approaching “silence” apparent. Krause reports that healthy natural soundscapes comprise a myriad of biophony, and indeed the ecological health of a region can be measured by its diversity of voices (Krause 1987). However, if these voices are too frequently masked or deterred by anthrophonic noise, animals may be altered behaviorally and physiologically (Pater et al., 2009), as the World Health Organization reports to be the case for humans exposed to chronic noise (WHO 1999). Despite some level of anthrophonic noise at most times, Fossil Rim seems to provide a healthy acoustic baseline, since so many endangered species proliferate there.

Understanding soundscapes, and later investigating any acoustic parameters that may correlate with animals’ behavioral and/or physiological responses, may lead us to think anew about the conservation, agricultural and even domestic environments in which we hold animals captive, and about wildlife in parts of the world that are being increasingly encroached upon by humans.

tags: animals, conservation, soundscape, silence, environment

References:
Krause, B. 1987. The niche hypothesis. Whole Earth Review. Wild Sanctuary.
———. 1987. Bio-acoustics: Habitat ambience & ecological balance. Whole Earth Review. Wild Sanctuary.
McKenna, Megan F., et al. “Patterns in bioacoustic activity observed in US National Parks.” The Journal of the Acoustical Society of America 134.5 (2013): 4175-4175.
Pater, L. L., T. G. Grubb, and D. K. Delaney. 2009. Recommendations for improved assessment of noise impacts on wildlife. The Journal of Wildlife Management 73:788-795.
Wilson, E. O. 1984. Biophilia. Harvard University Press.
World Health Organization. “Guidelines for community noise”. WHO Expert Taskforce Meeting. London. 1999.

1pABa2 – Could wind turbine noise interfere with greater prairie chicken (Tympanuchus cupido pinnatus) courtship?

Edward J. Walsh – Edward.Walsh@boystown.org
JoAnn McGee – JoAnn.McGee@boystown.org
Boys Town National Research Hospital
555 North 30th St.
Omaha, NE 68131

Cara E. Whalen – carawhalen@gmail.com
Larkin A. Powell – lpowell3@unl.edu
Mary Bomberger Brown – mbrown9@unl.edu
School of Natural Resources
University of Nebraska-Lincoln
Lincoln, NE 68583

Popular version of paper 1pABa2, “Hearing sensitivity in the Greater Prairie Chicken”
Presented Monday afternoon, May 18, 2015
169th ASA Meeting, Pittsburgh

The Sand Hills ecoregion of central Nebraska is distinguished by rolling grass-stabilized sand dunes that rise up gently from the Ogallala aquifer. The aquifer itself is the source of widely scattered shallow lakes and marshes, some permanent and others that come and go with the seasons.

However, the sheer magnificence of this prairie isn’t its only distinguishing feature. Early on frigid, wind-swept, late-winter mornings, a low pitched hum, interrupted by the occasional dawn song of a Western Meadowlark (Sturnella neglecta) and other songbirds inhabiting the region, is virtually impossible to ignore.

Click here to listen to the hum

The hum is the chorus of the Greater Prairie Chicken (Tympanuchus cupido pinnatus), the communal expression of the courtship song of lekking male birds performing an elaborate testosterone-driven, foot-pounding ballet that will decide which males are selected to pass genes to the next generation; the word “lek” is the name of the so-called “booming” or courtship grounds where the birds perform their wooing displays.

While the birds cackle, whine, and whoop to defend territories and attract mates, it is the loud “booming” call, an integral component of the courtship display that attracts the interest of the bioacoustician – and the female prairie chicken.

The “boom” is an utterance that is carried long distances over the rolling grasslands and wetlands by a narrow band of frequencies ranging from roughly 270 to 325 cycles per second (Whalen et al., 2014). It lasts about 1.9 seconds and is repeated frequently throughout the morning courtship ritual.
Usually, the display begins with a brief but energetic bout of foot stamping or dancing, which is followed by an audible tail flap that gives way to the “boom” itself.

Watch the video clip below to observe the courtship display

For the more acoustically and technologically inclined, a graphic representation of the pressure wave of a “boom,” along with its spectrogram (a visual representation showing how the frequency content of the call changes during the course of the bout) and graphs depicting precisely where in the spectral domain the bulk of the acoustic power is carried is shown in Figure 1. The “boom” is clearly dominated by very low frequencies that are centered on approximately 300 Hz (cycles per second).
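
For the curious, here is a minimal sketch of how the dominant band of a recorded “boom” could be measured; the file name and analysis settings are assumptions, not the authors' actual processing. It estimates the power spectral density with Welch's method, finds the peak frequency, and computes the fraction of power in the reported 270 to 325 Hz band.

```python
import numpy as np
import soundfile as sf
from scipy.signal import welch

audio, fs = sf.read("boom.wav")                  # hypothetical recording of one boom

# Power spectral density of the boom via Welch's method.
f, psd = welch(audio, fs=fs, nperseg=4096)

peak_hz = f[np.argmax(psd)]
band = (f >= 270) & (f <= 325)                   # band reported for the boom
band_fraction = psd[band].sum() / psd.sum()

print(f"peak frequency: {peak_hz:.0f} Hz")
print(f"fraction of power in 270-325 Hz band: {band_fraction:.2f}")
```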

FIGURE 1 (file missing): Acoustic Characteristics of the “BOOM”

Vocalization is, of course, only one side of the communication equation. Knowing what these stunning birds can hear is on the other. We are interested in what Greater Prairie Chickens can hear because wind energy developments are encroaching onto their habitat, a condition that makes us question whether noise generated by wind turbines might have the capacity to mask vocal output and complicate communication between “booming” males and attending females.

Step number one in addressing this question is to determine what sounds the birds are capable of hearing – what their active auditory space looks like. The gold standard of hearing tests is behavioral in nature – you know, the ‘raise your hand or press this button if you can hear this sound’ kind of testing. However, this method isn’t very practical in a field setting; you can’t easily ask a Greater Prairie Chicken to raise its hand, or in this case its wing, when it hears the target sound.

To solve this problem, we turn to electrophysiology – to an evoked brain potential that is a measure of the electrical activity produced by the auditory parts of the inner ear and brain in response to sound. The specific test that we settled on is known as the ABR, the auditory brainstem response.

The ABR is a fairly remarkable response that captures much of the peripheral and central auditory pathway in action when short tone bursts are delivered to the animal. Within approximately 5 milliseconds following the presentation of a stimulus, the auditory periphery and brain produce a series of as many as five positive-going, highly reproducible electrical waves. These waves, or voltage peaks, more or less represent the sequential activation of primary auditory centers sweeping from the auditory nerve (the VIIIth cranial nerve), which transmits the responses of the sensory cells of the inner ear rostrally, through auditory brainstem centers toward the auditory cortex.

Greater Prairie Chickens included in this study were captured using nets that were placed on leks in the early morning hours. Captured birds were transported to a storage building that had been reconfigured into a remote auditory physiology lab where ABRs were recorded from birds positioned in a homemade, sound attenuating space – an acoustic wedge-lined wooden box.

FIGURE 2 (file missing): ABR Waveforms

The waveform of the Greater Prairie Chicken ABR closely resembles ABRs recorded from other birds – three prominent positive-going electrical peaks, and two smaller amplitude waves that follow, are easily identified, especially at higher levels of stimulation. In Figure 2, ABR waveforms recorded from an individual bird in response to 2.8 kHz tone pips are shown in the left panel and the group averages of all birds studied under the same stimulus conditions are shown in the right panel; the similarity of response waveforms from bird to bird, as indicated in the nearly imperceptible standard errors (shown in gray), testifies to the stability and utility of the tool. As stimulus level is lowered, ABR peaks decrease in amplitude and occur at later time points following stimulus onset.

Since our goal was to determine if Greater Prairie Chickens are sensitive to sounds produced by wind turbines, we generated an audiogram based on level-dependent changes in ABRs representing responses to tone pips spanning much of the bird’s audiometric range (Figure 3). An audiogram is a curve representing the relationship between response threshold (i.e., the lowest stimulus level producing a clear response) and stimulus frequency; in this case, thresholds were averaged across all animals included in the investigation.

FIGURE 3 (file missing): Audiogram and wind turbine noise

As shown in Figure 3, the region of greatest hearing sensitivity is in the 1 to 4 kHz range, and thresholds increase (sensitivity is lost) rapidly at higher stimulus frequencies and more gradually at lower frequencies. Others have shown that ABR threshold values are approximately 30 dB higher than thresholds determined behaviorally in the budgerigar (Melopsittacus undulatus) (Brittan-Powell et al., 2002). So, to answer the question posed in this investigation, ABR threshold values were adjusted to estimate behavioral thresholds, and the resulting sensitivity curve was compared with the acoustic output of a wind turbine farm studied by van den Berg in 2006. The finding is clear: wind turbine noise falls well within the audible space of Greater Prairie Chickens occupying booming grounds in the acoustic footprint of active wind turbines.
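
A minimal sketch of that comparison logic, using invented placeholder numbers rather than the measured thresholds or turbine spectrum: shift the ABR thresholds down by the roughly 30 dB ABR-to-behavioral offset and check, frequency by frequency, whether turbine noise levels exceed the estimated behavioral threshold.

```python
import numpy as np

# Illustrative placeholder values (not the measured data).
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
abr_threshold_db = np.array([70, 60, 45, 40, 50, 80])       # ABR thresholds, dB SPL
turbine_level_db = np.array([55, 50, 42, 35, 25, 15])       # turbine noise at the lek

# Estimate behavioral thresholds using the ~30 dB ABR-behavioral offset
# reported for budgerigars (Brittan-Powell et al., 2002).
behavioral_estimate_db = abr_threshold_db - 30

# Flag frequencies at which the turbine noise would exceed the estimated threshold.
audible = turbine_level_db > behavioral_estimate_db
for f, aud in zip(freqs_hz, audible):
    print(f"{f} Hz: turbine noise {'audible' if aud else 'below threshold'}")
```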

While findings reported here indicate that Greater Prairie Chickens are sensitive to at least a portion of wind turbine acoustic output, the next question that we plan to address will be more difficult to answer: Does noise propagated from wind turbines interfere with vocal communication among Greater Prairie Chickens courting one another in the Nebraska Sand Hills? Efforts to answer that question are in the works.

tags: chickens, mating, courtship, hearing, Nebraska, wind turbines

References
Brittan-Powell, E.F., Dooling, R.J. and Gleich, O. (2002). Auditory brainstem responses in adult budgerigars (Melopsittacus undulatus). J. Acoust. Soc. Am. 112:999-1008.
van den Berg, G.P. (2006). The sound of high winds. The effect of atmospheric stability on wind turbine sound and microphone noise. Dissertation, Groningen University, Groningen, The Netherlands.
Whalen, C., Brown, M.B., McGee, J., Powell, L.A., Smith, J.A. and Walsh, E.J. (2014). The acoustic characteristics of greater prairie-chicken vocalizations. J. Acoust. Soc. Am. 136:2073.