2aAB8 – How dolphins deal with background noise

Maria Zapetis – maria.zapetis@usm.edu
University of Southern Mississippi
118 College Drive
Hattiesburg, MS 39406

Jason Mulsow – jason.mulsow@nmmpfoundation.org
National Marine Mammal Foundation
2240 Shelter Island Drive, Suite 200
San Diego, CA 92106

Carolyn E. Schlundt – melka@peraton.com
Peraton Corporation
4045 Hancock Street, Suite 210
San Diego, CA 92110

James J. Finneran – james.finneran@navy.mil
US Navy Marine Mammal Program
Space and Naval Warfare Systems Center, Pacific
53560 Hull Street
San Diego, CA 92152

Heidi Lyn – hlyn@southalabama.edu
University of South Alabama
75 South University Boulevard
Mobile, AL 36688

Popular version of paper 2aAB8, “Bottlenose dolphin (Tursiops truncatus) vocal modifications in response to spectrally pink background noise”
Presented Tuesday morning, November 6, 2018, 11:00 AM in Shaughnessy (FE)
176th ASA Meeting, Victoria, British Columbia, Canada

You’re in the middle of a conversation when you walk out of a quiet building onto a crowded street. A loud plane flies directly overhead. You stop your car for a passing train. Chances are, you’ve experienced these kinds of anthropogenic (man-made) noises in your everyday life. What do you do? Most people raise their voice to be heard, an automatic reaction called the Lombard effect [1, 2]. Similarly, dolphins and other marine mammals experience anthropogenic noise from human activities such as boat traffic and construction, and raise their “voices” to communicate more effectively [3, 4]. Understanding the extent to which dolphins exhibit the Lombard effect and alter their vocalizations in the presence of man-made noise is important for predicting and mitigating the potential effects of noise on wild marine mammals.

In this study, bottlenose dolphins were trained to “whistle” upon hearing a computer-generated tone (Figure 1). After successfully detecting a tone, dolphins typically produced a “victory squeal” (a pulsed call associated with success [5]). During tone-detection trials, the dolphins’ whistles and victory squeals were recorded while one of three computer-generated noise conditions played in the background (Figure 2). The dolphins responded to every background noise condition with the Lombard effect: as the noise frequency content and level increased, the dolphins’ whistles got louder (increased amplitude; Figure 3). Other noise-induced vocal modifications were also observed, such as changes in the number of whistle harmonics, depending on the specifics of the noise condition. Because this was a controlled exposure study with trained dolphins, we were able not only to exclude extraneous variables but also to see how the dolphins responded to different levels of background noise. Control over the background noise allowed us to tease apart the effects of noise level and noise frequency content. The two properties appear to affect the parameters of dolphin signals differently, which may reflect an ability to discriminate within those properties independently.
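The loudness comparison behind Figure 3 comes down to measuring each call's level in decibels. As a rough illustration of that step (a toy Python sketch, not the study's actual analysis code), doubling a signal's amplitude raises its RMS level by about 6 dB:

```python
import numpy as np

def rms_db(x, ref=1.0):
    """RMS level of a waveform segment, in dB re `ref`."""
    x = np.asarray(x, dtype=float)
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) / ref)

# Toy "whistles": a 10 kHz tone, and the same tone at twice the amplitude
t = np.linspace(0.0, 0.1, 9600, endpoint=False)   # 0.1 s at 96 kHz
quiet = 0.1 * np.sin(2 * np.pi * 10_000.0 * t)
loud = 2.0 * quiet

increase = rms_db(loud) - rms_db(quiet)
print(round(increase, 1))   # doubling the amplitude raises the level by ~6.0 dB
```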


Figure 1. Hearing test procedure with US Navy Marine Mammal Program dolphins in San Diego Bay, CA. [A] Each trial begins with the trainer directing the dolphin to dive underwater and position herself on a “biteplate” in front of an underwater speaker. [B] Once on the biteplate, the dolphin waits for the hearing test tone to be presented. When the dolphin hears the tone, she whistles in response. The researcher lets the dolphin know that she is correct by playing a “reward buzzer” out of another underwater speaker. The dolphin will often respond to the reward buzzer with a victory squeal before [C] coming up for a fish reward. The dolphin’s vocalizations are recorded from the hydrophone (underwater microphone) in a green suction cup just behind the blowhole.

Figure 2. Spectrogram examples of the four conditions. The [W] whistles, [RT] reward buzzers, and [VS] victory squeals of the dolphin in Figure 1 are labeled.

Figure 3. Whistle Amplitude across four conditions. Compared to the control condition (San Diego Bay ambient noise), both dolphins produced louder whistles in every noise condition.

  1. Lombard, E. (1911). Le signe de l’élévation de la voix. Annales des Maladies de L’Oreille et du Larynx, 37, 101–119.
  2. Rabin, L. A., McCowan, B., Hooper, S. L., & Owings, D. H. (2003). Anthropogenic noise and its effect on animal communication: an interface between comparative psychology and conservation biology. The International Journal of Comparative Psychology, 16, 172–192.
  3. Buckstaff, K. C. (2004). Effects of watercraft noise on the acoustic behavior of bottlenose dolphins, Tursiops truncatus, in Sarasota Bay, Florida. Marine Mammal Science, 20(4), 709–725.
  4. Hildebrand, J. (2009). Anthropogenic and natural sources of ambient noise in the ocean. Marine Ecology Progress Series, 395, 5–20.
  5. Dibble, D. S., Van Alstyne, K. R., & Ridgway, S. (2016). Dolphins signal success by producing a victory squeal. International Journal of Comparative Psychology, 29.


4pNSa4 – Inciting our children to turn their music down: the “Age of Your Ear” concept

Jeremie Voix
Romain Dumoulin
École de technologie supérieure, Université du Québec, Montréal, Quebec, Canada

Popular version of paper 4pNSa4, “Inciting our children to turn their music down: the AYE concept.”
Presented Thursday afternoon, November 8, 2018, 1:45–2:00 PM in Salon C (VCC)
176th Meeting Acoustical Society of America and 2018 Acoustics Week in Canada (Canadian Acoustical Association) at the Victoria Conference Centre, Victoria, BC, Canada

Problem
According to the World Health Organization (WHO), more than 1.1 billion people are currently at risk of losing their hearing due to excessive exposure to noise. Of these, a significant proportion are children, youth, and young adults who expose themselves to excessive sound levels through various leisure activities (personal music players, concerts, movies at the theatre, dance clubs, etc.).

Existing solutions
To address this issue, many approaches have been developed, ranging from general awareness messages to volume limiters on personal music players. For instance, the recent “Make Listening Safe” initiative from the WHO [1] aims to gather all stakeholders, public health authorities, and manufacturers to define and develop a consolidated approach to limiting these non-occupational sound exposures, based on dosimetry. Indeed, significant efforts have gone into assessing, directly on a PMP (personal music player), the individual noise dose, i.e., the combination of sound pressure level and listening duration accumulated during music playback.
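For readers curious about the arithmetic, the dose idea can be sketched in a few lines. The criterion (85 dBA for 8 hours) and the 3 dB exchange rate below are common occupational choices used purely for illustration; the values under discussion in the listening-safety standards may differ:

```python
def allowed_hours(level_db, criterion_db=85.0, exchange_db=3.0, base_hours=8.0):
    """Permitted daily exposure time at a given level: every `exchange_db`
    increase above the criterion halves the allowed duration."""
    return base_hours * 2.0 ** ((criterion_db - level_db) / exchange_db)

def noise_dose(exposures):
    """Daily dose in %, summed over (level_dBA, hours) listening episodes."""
    return 100.0 * sum(hours / allowed_hours(level) for level, hours in exposures)

# One hour of music at 94 dBA already uses the whole daily allowance;
# a quiet 8-hour day at 75 dBA adds only a little on top.
dose = noise_dose([(94.0, 1.0), (75.0, 8.0)])
print(round(dose))   # → 110 (percent of the daily allowance)
```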

The need for a better way to reach users
While many technical issues are still actively discussed in the related standards, a major concern has arisen with regard to the message communicated to end users. End users need to be educated about the risk of noise-induced hearing loss (NIHL) and its irreversibility, but at the same time they also need to be made aware that NIHL is 100% preventable, provided safe listening practices are followed.

More importantly, end users have to be left with a meaningful noise dose measurement. In that regard, expressing equivalent sound pressure level in decibels (dB) or noise dose as a percentage (%) is of little value, given the complexity of the former and the abstraction of the latter. Communicating the dangers of music playback is also quite new for most hearing conservation specialists, and communicating with this particular group of young listeners only adds to the difficulty.

Our approach
In the quest for a meaningful message to pass to these young end users, this article introduces a new metric, the “Age of Your Ears” (AYE), which indicates the predicted extra aging caused by the excessive noise dose a user is exposed to. To make this prediction, a multi-regression statistical model was developed based on normative data found in the ISO 1999 standard [2]. In this way, an AYE value can be computed for each subject, using only age, sex, and sound exposure, to represent the possible acceleration of hearing aging caused by excessive music listening, as illustrated in Fig. 1.
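The idea behind the metric can be illustrated with a deliberately simplified model: find the age at which an unexposed ear would show the same total threshold shift. The quadratic aging curve and its coefficient below are toy values chosen for illustration only; they are not the ISO 1999 normative data or the authors' regression model:

```python
import math

def age_related_shift(age, k=0.007):
    """Toy median age-related hearing threshold shift (dB) at one frequency.
    Quadratic growth loosely mimics the shape of ISO 1999 curves; the
    coefficient k is purely illustrative, not a value from the standard."""
    return k * max(age - 18, 0) ** 2

def age_of_your_ears(age, nipts_db, k=0.007):
    """Age at which an unexposed ear would show the same total shift,
    given a noise-induced permanent threshold shift (NIPTS) in dB."""
    total = age_related_shift(age, k) + nipts_db
    return 18 + math.sqrt(total / k)

aye = age_of_your_ears(25, 5.0)   # a 25-year-old with 5 dB of noise-induced shift
extra_years = aye - 25            # predicted extra "ear aging", in years
print(round(aye), round(extra_years))
```

In this toy model even a modest 5 dB noise-induced shift makes a 25-year-old's ears "read" decades older, which is exactly the kind of concrete, personal message the AYE concept aims for.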


Fig. 1: While hearing will normally worsen because of the natural aging process (dotted black line), this aging can be dramatically accelerated by over-exposure to noise (solid color lines).

Conclusions
In a world where personal music players are ubiquitous and have put hearing at risk, it is interesting to see them as a potential tool, not only for addressing the issues they have created, but also for raising awareness of the dangers of noise-induced hearing loss at large.

The proposed AYE metric will first be implemented in a measurement manikin setup that is currently under development at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), housed at the Schulich School of Music at McGill University. This setup, further described in [3], is inspired by the “Jolene” manikin developed through the “Dangerous Decibels” program [4]. The resulting measurement kiosk will be complemented by a smartphone-based measurement app that will enable musicians to assess their entire noise exposure. It is hoped that the proposed AYE metric will be relevant and simple enough to have a beneficial impact on everyone’s safe hearing practices.

[1] WHO, “Make Listening Safe,” http://www.who.int/deafness/activities/mls/en/.

[2] ISO 1999:2013 – Acoustics – Estimation of noise-induced hearing loss, 2013.

[3] Jérémie Voix, Romain Dumoulin, Julia Levesque, and Guilhem Viallet, “Inciting our children to turn their music down: the AYE proposal and implementation,” in Proceedings of Meetings on Acoustics, Paper 3007868, Victoria, BC, Canada, 2018. Acoustical Society of America.

[4] Dangerous Decibels – JOLENE – http://dangerousdecibels.org/jolene


5aUW1 – Ship-of-opportunity noise inversions for geoacoustic profiles of a layered mud-sand seabed

Dag Tollefsen – dag.tollefsen@ffi.no
Norwegian Defence Research Establishment,
Horten, NO-3191, Norway

Stan E. Dosso – sdosso@uvic.ca
School of Earth and Ocean Sciences,
Victoria BC, V8W 3P6, Canada

David P. Knobles – dpknobles@kphysics.org
Knobles Scientific and Analysis, Austin, Texas 78755, USA

Popular version of paper 5aUW1
Presented Friday morning, November 9, 2018
176th ASA Meeting, Victoria, BC, Canada

We infer geoacoustic properties of seabed sediment layers via remote sensing using underwater sound from a container ship.  While several techniques are available for seabed characterization via acoustic remote sensing, this is a first demonstration using noise from a large ship-of-opportunity (i.e., a passing ship with no connection to an experiment).  A benefit of this passive acoustics approach is that such sound is readily available in the ocean.

Our data were collected as part of a large coordinated ocean acoustic experiment conducted at the New England Mud Patch in March 2017 [1].  To investigate properties of a muddy seabed, the experiment employed multiple techniques including direct measurements by geophysical coring [2] and remote sensing using sound from various controlled acoustic sources.  Our team deployed a 480-m-long linear array of hydrophones on the seabed to record noise from passing ships.

The array was located two nautical miles from a commercial shipping lane leading to the Port of New York and New Jersey.  An average of three ships passed the array daily.  We identified ship passages by “bathtub” interference patterns (Fig. 1) in the recordings.  The structure seen in Fig. 1 arises from interference between ship-to-hydrophone sound paths, some of which interact with the seabed; the pattern shifts with time as the ship passes the hydrophone.

Ship locations were obtained from Automatic Identification System data provided by the US Coast Guard Navigation Center.  This enabled us to run a numerical model of underwater sound propagation from source to receivers.  The final step is inverse modeling, where a large number of possible seabed models were sampled probabilistically, with each model used to predict the corresponding sound field that was matched with the recorded data.  We used Bayesian sampling and statistical inference methods based on [3].
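The probabilistic sampling step can be illustrated with a toy one-parameter version. The "propagation model" below is a made-up smooth function standing in for a real numerical code, and the Metropolis sampler is a minimal example of the accept/reject sampling idea (the actual study used trans-dimensional Bayesian methods [3]):

```python
import numpy as np

rng = np.random.default_rng(0)
ranges_km = np.array([1.0, 2.0, 3.0, 4.0])    # receiver ranges (toy values)

def forward(c):
    """Toy stand-in for a numerical propagation code: predicted received
    levels (dB) at each range as a smooth function of sediment speed c (m/s)."""
    return -20.0 * np.log10(ranges_km) - 0.01 * (c - 1500.0) * ranges_km

c_true = 1620.0                                # "unknown" sediment sound speed
data = forward(c_true) + rng.normal(0.0, 0.5, ranges_km.size)   # noisy "recording"

def log_likelihood(c, sigma=0.5):
    resid = data - forward(c)
    return -0.5 * float(np.sum((resid / sigma) ** 2))

# Metropolis sampling over a uniform prior of 1450-1800 m/s
samples, c = [], 1500.0
for _ in range(20000):
    prop = c + rng.normal(0.0, 10.0)           # random-walk proposal
    if 1450.0 <= prop <= 1800.0 and \
       np.log(rng.random()) < log_likelihood(prop) - log_likelihood(c):
        c = prop                               # accept the proposed model
    samples.append(c)

posterior = np.array(samples[5000:])           # discard burn-in
print(posterior.mean(), posterior.std())       # clusters near c_true
```

The histogram of `posterior` is the one-parameter analogue of the probability densities shown in Figure 2: models that predict sound fields matching the data are visited often, and the spread of the samples quantifies the uncertainty.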

Inferred geoacoustic profiles (Fig. 2) indicate fine-grained sediment (mud) in the upper seabed and coarse-grained (higher sound speed and density) sediment (sand) in the lower seabed.  Results are in overall good agreement with sediment core data.  This work establishes that noise from large commercial ships can contain usable information for seabed characterization.

[Work supported by the Office of Naval Research, Ocean Acoustics. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of ONR].


Figure 1. Spectrogram of noise measured on a hydrophone on the seabed (left) due to a passing container ship (right).


Figure 2. Inversion results in terms of probability densities for sediment layer interface depths (left), and geoacoustic properties of sound speed (middle), and density (right) as a function of depth into the seabed. Warm colors (e.g., red) indicate high probabilities.

[1] P.S. Wilson and D.P. Knobles, “Overview of the seabed characterization experiment 2017,” in Proc. 4th Underwater Acoustics Conference and Exhibition, 743–748, ISSN 2408-0195 (2017).

[2] J.D. Chaytor, M.S. Ballard, B. Buczkowski, J.A. Goff, K. Lee, and A. Reed, “Measurements of Geologic Characteristics, Geophysical Properties, and Geoacoustic Response of Sediments from the New England Mud Patch,” submitted to IEEE J. Ocean Eng. (2018).

[3] S.E. Dosso, J. Dettmer, G. Steininger, and C.W. Holland, “Efficient trans-dimensional Bayesian inversion for seabed geoacoustic profile estimation,” Inverse Problems, 30, 29pp (2014).

1a2b3c – How bowhead whales cope with changes in natural and anthropogenic ocean noise in the Arctic

Aaron M. Thode athode@ucsd.edu
Scripps Institution of Oceanography, UCSD, La Jolla, California 92093, USA

Susanna B. Blackwell, Katherine H. Kim, Alexander S. Conrad
Greeneridge Sciences, Inc., 90 Arnold Place, Suite D, Santa Barbara, California 93117, USA

Popular version of paper 1a2b3c
Presented Monday morning, 1pAO, Arctic Acoustical Oceanography II

We live in a world full of ever-changing noise of both natural and industrial origins.  Despite this constant interference, we’ve developed several strategies for communicating with each other.  If you’ve ever attended a busy party, you’ve probably found yourself shouting to be heard by a nearby companion, and maybe even had to repeat yourself a few times to be understood.


Figure 1: Spectrogram of whale calls; sound file attached 4x normal speed


Whales are even more dependent on sound for communicating with each other.  For example, each autumn bowhead whales, a species of baleen whale, migrate along the Alaskan North Slope from their Canadian summer feeding grounds towards the Bering Sea.  During this voyage, they make numerous low-frequency sounds (50-200 Hz) that are detectable on underwater microphones, or “hydrophones,” up to tens of kilometers away.  There are many mysteries about these calls, such as what type of information they convey, and why their frequency content seems to be shifting downward with time (Thode et al., 2017).  Nevertheless, scientists generally agree that bowheads use these sounds for communication.

Changing conditions in the Arctic have encouraged more human industrial activity in this formerly remote region.  For example, over the past decade multiple organizations have conducted seismic surveys throughout the Arctic Ocean to pinpoint oil-drilling locations or establish territorial claims. The impulsive “airgun” sounds generated by these activities could be detected over distances of more than 1000 km (Thode et al., 2010).


Figure 2: Spectrogram of seismic airgun signals along with bowhead whale calls; sound file attached 4x normal speed

Previous work by our team has found that bowhead whales double their calling rate whenever distant seismic signals are present (Blackwell et al., 2015).  But what consequences, if any, could this behavioral change have on the long-term health of the bowhead population?

To answer this question my colleagues at Greeneridge Sciences Inc. and I have studied how bowhead whales respond to natural changes in noise levels, which during the summer and fall are caused primarily by wind in the relatively ship-free waters of the North.  We found that, like humans at a party, whales can respond in two ways to rising noise levels.  They can increase their loudness, or “source level,” and/or they can increase the rate at which they produce calls.  Measuring this effect is challenging, because whenever background noise levels increase it becomes difficult to detect weaker calls, an effect called “masking”.  Because of masking, as noise levels rise one might measure a decrease in calling rate as well as an apparent increase in call source levels, even if a whale population didn’t actually change their calling behavior.

To solve this problem our team deployed multiple groups of hydrophones that allowed machine learning algorithms to localize the positions of over a million whale calls over the eight years of our study.  We then threw away over 90% of these positions, keeping only calls that were produced at close range to our sensors, and thus wouldn’t become masked by changes in noise levels.  Continuing the party analogy, we effectively only listened to people close to us, so we could still detect whispers along with shouts.

We found that whales tried to increase their source levels as noise levels increased, but when noise levels became high enough (75% of maximum noise levels encountered naturally) the whales didn’t call any louder, even as noise levels continued to rise (Figure 3).

Figure 3: Relationship between background noise level and whale calling level, for calls made within 3.5 km of a sensor.

Whales do, however, keep increasing their call rates with rising noise levels.  We found that a 26-dB (400-times) increase in noise levels caused calling rates to double, the same rate increase caused by seismic airguns (Figure 4).
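For reference, the conversion from 26 dB to "400 times" is a quick decibel calculation:

```python
# Decibels express power ratios: an increase of L dB multiplies
# the noise power by 10**(L / 10).
ratio = 10 ** (26 / 10)
print(round(ratio))   # 398, i.e. roughly a 400-fold increase in noise power
```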

Figure 4: Image of relationship between whale calling rate (over ten minutes) and background noise level.

This work has thus allowed us to place bowhead whale responses to human disturbance in a natural noise context, which eventually may assist us in evaluating the long-term impact of such activities on population growth.

References

Blackwell, S. B., Nations, C. S., McDonald, T. L., Thode, A. M., Mathias, D., Kim, K. H., Greene, C. R., Jr., and Macrander, A. M. (2015). “The effects of airgun sounds on bowhead whale calling rates: evidence for two behavioral thresholds,” PLoS One 10, e0125720.

Thode, A. M., Blackwell, S. B., Conrad, A. S., Kim, K. H., and Michael Macrander, A. (2017). “Decadal-scale frequency shift of migrating bowhead whale calls in the shallow Beaufort Sea,” The Journal of the Acoustical Society of America 142, 1482-1502.

Thode, A. M., Kim, K., Greene, C. R., and Roth, E. H. (2010). “Long range transmission loss of broadband seismic pulses in the Arctic under ice-free conditions,” J. Acoust. Soc. Am. 128, EL181-EL187.

5aUW7 – Using Noise to Probe the Seafloor

Tsuwei Tan – ttan1@nps.edu
Oleg A. Godin – oagodin@nps.edu
Physics Dept., Naval Postgraduate School
1 University Cir.
Monterey CA, 93943, USA

Popular version of paper 5aUW7
Presented Friday morning, November 9, 2018, 10:15-10:30 AM
176th ASA Meeting, Victoria, BC Canada

Introduction
Scientists have long used sound to probe the ocean and its bottom. Breaking waves, roaring earthquakes, speeding supertankers, snapping shrimp, and vocalizing whales make the ocean a very noisy place. Rather than “shouting” above this ambient noise with powerful dedicated sound sources, we are now learning how to measure ocean currents and seafloor properties using the noise itself. In this paper, we combine long recordings of ambient noise with a signal-processing technique called time warping to quantify seafloor properties. Time warping changes the signal’s time axis so we can extract individual modes, which carry information about the ocean’s properties.

Experiment & Data
Our data come from Michael Brown and colleagues [1], who recorded ambient noise in the Straits of Florida with several underwater microphones (hydrophones) continuously over six days (see Figure 1). We applied time warping to these data. By cross-correlating noise recordings made at points A and B several kilometers apart, one obtains a signal that approximates the signal received at A when a sound source is placed at B. With this approach, a hydrophone becomes a virtual sound source. The sound of the virtual source (the noise cross-correlation function) can be heard in Figure 2. There are two nearly symmetric peaks in the cross-correlation function shown in Figure 1 because A also serves as a virtual source of sound at B. Having two virtual sources allowed Oleg Godin and colleagues to measure current velocity in the Straits of Florida [2].
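The virtual-source trick can be demonstrated with synthetic data. In the sketch below, two "hydrophones" record the same random noise field, one with a hypothetical 0.25-second delay; cross-correlating the records recovers that travel time. (Real processing averages over days of recordings, but the principle is the same.)

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
fs = 1000                          # Hz, toy sample rate
delay = 0.25                       # s, hypothetical B-to-A travel time
n = 60 * fs                        # one "minute" of ambient noise

# A diffuse noise field recorded at B; hydrophone A hears the same field
# delayed and attenuated, plus independent local noise at each sensor.
field = rng.normal(size=n)
rec_b = field + 0.5 * rng.normal(size=n)
rec_a = 0.8 * np.roll(field, int(delay * fs)) + 0.5 * rng.normal(size=n)

# Cross-correlate the two records: a peak emerges at the inter-hydrophone
# travel time, as if B had been a controlled sound source.
xcorr = fftconvolve(rec_a, rec_b[::-1], mode="full")
lags = np.arange(-n + 1, n) / fs
peak_lag = lags[np.argmax(xcorr)]
print(peak_lag)                    # ≈ 0.25 s
```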

Figure 1. Illustration of the site of the experiment and the cross-correlation function of ambient noise received by hydrophones A and B in 100 m-deep water at a horizontal separation of about 5 km in the Straits of Florida.

Figure 2. Five-second audio of correlated ambient noise from Figure 1: At receiver A, a stronger impulsive sound starts at 3.25 sec, which is the time it takes underwater acoustic waves to travel from B to A. Listen here

Retrieving Environmental Information
Sound travels faster or slower underwater depending on how soft or hard the seafloor is. We employ time warping to analyze the signal produced by the virtual sound source. Time warping is akin to using a whimsical clock that makes the original signal run at a decreasing pace rather than steadily (Figure 3a to 3b). The changing pace is designed to split the complicated signal into simple, predictable components called normal modes (Figure 3c to 3d). Travel times from B to A of normal modes at different acoustic frequencies prove to be very sensitive to the sound speed and density in the ocean’s bottom layers. The depth dependence of these geo-acoustic parameters at the experimental site, as well as the precise distance from B to A, can be determined by trying various sets of parameters and finding the one that best fits the acoustic normal modes revealed by the ambient noise measurements. The method is illustrated in Figure 4. The sound of the virtual source (Figure 2), which emerges from ambient noise, reveals that the ocean bottom at the experimental site is an 11 m-thick layer of sand overlying a much thicker layer of limestone (Figure 5).
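The warping itself is a change of time axis. Below is a minimal sketch using the textbook waveguide warping function h(t) = sqrt(t² + t_r²); the synthetic "mode" and all parameter values are illustrative, not the experiment's data:

```python
import numpy as np

def warp(signal, fs, t_r):
    """Warping transform y_w(t) = sqrt(h'(t)) * y(h(t)), h(t) = sqrt(t^2 + t_r^2).
    Assumes `signal` is sampled at rate fs starting at the first arrival time
    t_r = range / water sound speed. In an ideal waveguide, each dispersive
    mode becomes a near-constant-frequency tone after warping."""
    n = len(signal)
    t = np.arange(n) / fs                    # warped-domain time axis
    h = np.sqrt(t ** 2 + t_r ** 2)           # original-time instants (>= t_r)
    t_orig = t_r + np.arange(n) / fs         # instants where `signal` is sampled
    jac = t / h                              # h'(t), the time-axis Jacobian
    return np.sqrt(jac) * np.interp(h, t_orig, signal)

# Demo: a synthetic "mode" whose phase disperses as sqrt(t^2 - t_r^2)
fs, t_r, f_m = 2000, 3.0, 40.0
t_orig = t_r + np.arange(2 * fs) / fs
mode = np.cos(2.0 * np.pi * f_m * np.sqrt(t_orig ** 2 - t_r ** 2))

warped = warp(mode, fs, t_r)
spec = np.abs(np.fft.rfft(warped))
f_peak = np.fft.rfftfreq(len(warped), 1.0 / fs)[np.argmax(spec)]
print(f_peak)   # the dispersive sweep collapses to a tone near f_m = 40 Hz
```

Once each mode sits at its own near-constant frequency, simple band-pass filtering isolates it, and its travel time can be read off for the inversion.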

Figure 3. Time warping process: Components of the virtual source signal from noise are separated in the spectrogram of the warped signal from (c) to (d).

Figure 4. Comparison of measured travel times of normal modes to the travel times theoretically predicted for various trial models of the ocean bottom and the geometry of the experiment. The measured and theoretically predicted travel times are shown by circles and lines, respectively. Individual normal modes are distinguished by color. By fixing the geo-acoustic parameters (sound speed and density), the precise range r between hydrophones A and B can be found by minimizing the difference between the measured and predicted travel times. The best fit is found at r = 4988 m. Watch here

Figure 5. Ocean bottom properties retrieved from ambient noise. Blue and red lines show sound speed in water and bottom, respectively, at different depths below the ocean surface. The ratios ρs and ρb of the bottom density to seawater density are also shown in two bottom layers.  

Conclusion
Ambient noise does not have to be an obstacle to acoustic remote sensing of the ocean.  We are learning how to use it to quantify ocean properties. In this research, we used ambient noise to probe the ocean bottom. Time warping was applied to ambient noise records to successfully measure sound speeds and densities at different depths below the seafloor in the Straits of Florida. Our passive acoustic approach is inexpensive, non-invasive, and environmentally friendly. We are currently working on applying the same approach to the extensive underwater ambient noise recordings obtained at several sites off New Jersey during the Shallow Water 2006 experiment.

References

[1] M. G. Brown, O. A. Godin, N. J. Williams, N. A. Zabotin, L. Zabotina, and G. J. Banker, “Acoustic Green’s function extraction from ambient noise in a coastal ocean environment,” Geophys. Res. Lett. 41, 5555–5562 (2014).

[2] O. A. Godin, M. Brown, N. A. Zabotin, L. Y. Zabotina, and N. J. Williams, “Passive acoustic measurement of flow velocity in the Straits of Florida,” Geoscience Lett. 1, 16 (2014).

1pEAa5 – A study on the optimal speaker position for improving sound quality of flat panel display

Sungtae Lee, owenlee@lgdisplay.com
Kwanho Park, khpark12@lgdisplay.com
Hyungwoo Park, pphw@ssu.ac.kr
Myungjin Bae, mjbae@ssu.ac.kr
37-8, LCD-ro 8beon-gil, Wollong-myeon Paju-si, Gyeonggi-do, Korea (the Republic of)

The “OLED Panel Speaker” was developed by attaching exciters to the back of OLED panels, which do not have backlights. By synchronizing the video and sound on the screen, the OLED Panel Speaker delivers clear voice and immersive sound. This technology, which can only be applied to OLED displays, has already been adopted by some TV makers and has received strong reviews and evaluations.

With the continuous development of the display industry and progress in IT technology, displays have gradually become more advanced. Through the evolution from CRT to LCD and OLED, TVs have come to offer much better picture quality, and this remarkable improvement has been met with positive market reactions. In the meantime, a relatively bulky speaker was hidden behind the panel to keep TVs thin. TV sound could not keep up with the progress of the picture quality until LG Display developed the Flat Panel Speaker, exploiting the merit of OLED panel thickness of less than 1 mm.

To realize the technology, we developed an exciter that simplifies the conventional speaker structure. Specially designed exciters are positioned at the back of the panel and invisibly vibrate the screen to create sound.

We developed and applied an enclosure structure in order to realize “stereo sound” on a single sheet of OLED panel, and confirmed its effectiveness through vibrational mode analysis.


Depending on the shape of the enclosure tape, standing waves create a peak and a dip at certain frequencies. By changing the enclosure shape so that the peak and dip frequencies shift to 1/3 λ, the peak is reduced by about 37%, from 8 dB to 5 dB.


When this technology is applied, the sound image moves to the center of the screen, maximizing the immersive experience and enabling realistic sound.
