3aAB2 – Assembling an acoustic catalogue for different dolphin species in the Colombian Pacific coast: an opportunity to parameterize whistles before rising noise pollution levels.

Daniel Noreña – d.norena@uniandes.edu.co
Kerri D. Seger
Susana Caballero

Laboratorio de Ecologia Molecular de Vertebrados Marinos
Universidad de los Andes
Bogotá, Colombia

Popular version of paper 3aAB2
Presented Wednesday morning, December 9 , 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Growing ship traffic worldwide has led to a relatively recent increase in underwater noise, raising concerns about effects on marine mammal communication. Many populations of several dolphin species inhabit the eastern Pacific Ocean, particularly along the Chocó coast of Colombia. Recent research has confirmed that anthropogenic noise pollution levels in this region are among the lowest of any studied area around the globe, giving scientists an opportunity to listen to and analyze a relatively undisturbed ocean soundscape.

Figure 1. Vessel traffic in the Americas (a) and in (b) Colombia in particular. Red indicates high traffic and blue areas have no traffic. Note the gap in traffic in the Colombian Pacific coast where the Gulf of Tribugá is located (inside black/red box) as compared to all other coastal regions.

Currently, the Colombian Pacific coast is slated for the construction of a port in the Gulf of Tribugá, pending permits. Previous port construction projects in other countries have shown that such a project will change the acoustic environment and could compromise marine fauna, including dolphin communication. This is the first study to document the whistle acoustic parameters of several dolphin species in the region before any disturbance. Opportunistic recordings were made at two locations along the coast: Coquí, Chocó, and, a few hundred kilometers north, Bahía Solano, Chocó.

Figure 2. (a) The Colombian Pacific coast and (b) whale-watching locations and ports of the Pacific coast of Colombia. Ports are red markers and whale-watching spots are blue markers.

Five different delphinid species were recorded: Common bottlenose dolphin (Tursiops truncatus), Pantropical spotted dolphin (Stenella attenuata), Spinner dolphin (Stenella longirostris), False killer whale (Pseudorca crassidens) and Short-beaked common dolphin (Delphinus delphis). Comparing these recordings to those made from dolphin populations in more disturbed areas around the globe showed that the repertoires of four of the five species were different. These differences could be because the Chocó dolphins represent populations that use whistles with more natural features, while the other, more disturbed, populations may have already changed their whistle features to avoid overlapping with boat traffic noise.

However, avoiding overlap with conspecifics or other species in the same habitat is natural, too. This is called the acoustic niche hypothesis (ANH). The ANH states that geographically sympatric species should occupy specific frequency bands to avoid overlapping with each other. A Linear Discriminant Analysis (LDA) was done to explore whether the five different species have already adjusted their whistle features to avoid overlapping with other species. Frequency band separation is not the only feature of whistles that dolphins could adjust, so the LDA used nine different whistle features to test whether the species separate naturally along any of them.
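As a rough illustration of how a discriminant analysis separates groups along combinations of features, here is a minimal two-class Fisher discriminant on simulated data. All feature values below are invented for illustration; the actual study used nine whistle features and five species, not the two-class, three-feature toy shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical whistle features (e.g., minimum frequency, maximum
# frequency, duration) for two simulated "species"; units are arbitrary.
species_a = rng.normal(loc=[8.0, 15.0, 0.5], scale=0.5, size=(50, 3))
species_b = rng.normal(loc=[9.5, 18.0, 0.8], scale=0.5, size=(50, 3))

def fisher_lda_direction(x0, x1):
    """Two-class Fisher LDA direction: w = Sw^-1 (mu1 - mu0)."""
    mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
    # Within-class scatter matrices, summed over the two groups.
    s0 = np.cov(x0, rowvar=False) * (len(x0) - 1)
    s1 = np.cov(x1, rowvar=False) * (len(x1) - 1)
    return np.linalg.solve(s0 + s1, mu1 - mu0)

w = fisher_lda_direction(species_a, species_b)

# Project each group onto the discriminant axis; well-separated groups
# occupy distinct regions along this axis, as in the study's LDA plot.
proj_a = species_a @ w
proj_b = species_b @ w
print(proj_a.mean() < proj_b.mean())  # True
```

In the real analysis the same idea extends to five species, producing the two-dimensional discriminant plot shown in the figure.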

Figure 3. LDA plot for nine whistle variables among the five species.

Tracking these whistle features in Chocó over time will help determine whether the differences between the Chocó dolphins and dolphins from more disturbed areas are a result of the natural acoustic niche hypothesis or of noise pollution avoidance. If constructed, the port could force species to adjust their whistle features as populations from noisier habitats already have. That could disrupt the acoustic niches that already exist, and some of their whistles may still be masked by boat noise. Such disturbances could increase their stress levels or could lead to area abandonment, which would cause economic and ecological disasters for a region that relies on artisanal fishing and ecotourism.

3pAO1 – Can We Map the Entire Global Ocean Seafloor by 2030?

Larry Mayer – larry@ccom.unh.edu
Center for Coastal and Ocean Mapping
University of New Hampshire
Durham, N.H. 03824

Popular version of paper 3pAO1
Presented Wednesday afternoon, December 09, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Today it is trivial, with a few clicks of a mouse, to enter an application like Google Earth and explore the complexity of a range of earth processes in extraordinary detail.  While this is true for the brown and green parts of the Earth, it is not the case for the three-quarters of the earth that is blue – for the light waves that are used to image the land cannot penetrate far into ocean waters.  Thus, while 100% of the land surface on the earth is mapped in remarkable detail, most of the ocean is unmapped and unexplored.  Knowing seabed depths (bathymetry) is of vital importance for safety of navigation, predicting storm surge and tsunami inundation, mapping deep-sea habitats and ecosystems, laying cables and pipelines, exploring for resources, understanding ocean currents and their impact on climate change, addressing national security issues, and exploring human history as preserved in shipwrecks.

Given the inability of light to penetrate the oceans, for thousands of years the only technique available to map the deep ocean was a hunk of lead at the end of a rope (the lead line).  Unlike light, sound travels long distances in seawater, and in the early 1900s the development of echo-sounders allowed for a much more rapid and accurate means of measuring ocean depths.  Initially echo-sounders used a single beam of sound that generated a broadly averaged measurement of depth, but in the late 1980s a new type of echo-sounder (the multibeam echo-sounder) was developed that simultaneously provided hundreds of high-resolution measurements over a wide swath, revolutionizing our ability to map the seafloor.   By 2018, however, only 9% of the deep ocean seafloor had been mapped with multibeam echo-sounders.
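The depth calculation behind any echo-sounder is simple: half the two-way travel time of the echo multiplied by the speed of sound in seawater. A minimal sketch, using a nominal sound speed (real surveys apply measured sound-speed profiles that vary with depth, temperature, and salinity):

```python
# Nominal speed of sound in seawater, in m/s; actual values vary
# roughly between 1450 and 1550 m/s depending on conditions.
SOUND_SPEED = 1500.0

def depth_from_echo(two_way_travel_time_s):
    """Depth = (sound speed * two-way travel time) / 2,
    since the pulse travels down to the seabed and back up."""
    return SOUND_SPEED * two_way_travel_time_s / 2.0

# An echo returning after 4 seconds implies a depth of 3000 m.
print(depth_from_echo(4.0))  # 3000.0
```

A multibeam system applies the same principle to hundreds of angled beams at once, turning each ping into a swath of depth measurements rather than a single point.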

Evolution of mapping systems from lead-line, to singlebeam sonar to multibeam sonar. Credit NOAA https://noaacoastsurvey.files.wordpress.com/2015/07/surveying.jpg

Best depiction of bathymetry offshore southern California from single beam echosounder data

Bathymetry of offshore southern California from multibeam echosounder.  Credit USGS.

Recognizing the poor state of knowledge of ocean depths and the critical role such knowledge plays in understanding and maintaining our planet, the Nippon Foundation challenged the mapping community to produce a complete map of the world ocean seafloor by 2030. The result, “The Nippon Foundation-GEBCO Seabed 2030 Project,” has already increased publicly available holdings of modern deep-sea mapping data from 9% to 19% by 2020.  Some of this initial increase came through the discovery of existing data; the challenge now is to complete new mapping, an effort estimated to require approximately 200 ship-years (at a cost of $3-5B) using current technologies. While this seems like a large amount to spend on mapping our planet, the reality is that we have spent much more than this mapping other bodies (e.g., Mars and the Moon) at much higher resolution. Why not our own planet?

Nippon Foundation – GEBCO Seabed 2030 Project

Meeting the challenge of completely mapping the global ocean will require innovative new technologies that can increase efficiency, cost-effectiveness, and capabilities.  Autonomous vessels are being developed that can deliver high-resolution mapping systems without the significant cost of crews, and wind-powered autonomous systems eliminate the cost of fuel as well.  Along with these new platform technologies, innovative new acoustic approaches capable of providing wider swaths and higher resolution are also being developed.  As these new technologies evolve, the aspirational goal of Seabed 2030 may very well become a reality.

22 meter (72 foot) uncrewed Saildrone Surveyor – soon to be launched to autonomously sail the globe collecting deep-sea bathymetric (and other) data.

2pSPc4 – Determining the Distance of Sperm Whales in the Northern Gulf of Mexico from an Underwater Acoustic Recording Device

Kendal Leftwich – kmleftwi@uno.edu
George Drouant – George.Drouant@oit.edu
Julia Robe – jerobe@uno.edu
Juliette Ioup – jioup@uno.edu

University of New Orleans
2000 Lakeshore Drive
New Orleans, LA 70148

Determining the Range to Marine Mammals in the Northern Gulf of Mexico via Bayesian Acoustic Signal Processing
Presented in session Acoustic Localization IV, Tuesday afternoon, December 8, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

The Littoral Acoustic Demonstration Center – Gulf Ecological Monitoring and Modeling (LADC-GEMM) collected underwater acoustic data in the Northern Gulf of Mexico (GoM) from 2002 through 2017.  Figure 1 shows the collection sites and the location of the BP oil spill of April 2010.  The data are collected by a hydrophone, an underwater microphone that records the acoustic signals, or sounds, of the region.

One of the goals of the research at the University of New Orleans (UNO) is to identify individual marine mammals by their acoustic signal.  Part of this identification includes being able to locate them.   In this paper we will briefly explain how we are attempting to locate sperm whales in the GoM.

First, we need to understand how the whale’s sounds travel through the water and what happens to them along the way.  Any sound traveling through a medium (air, water, or any other material) loses loudness.  For example, it is much easier to hear a person talking to you when you are in the same room; if they are talking through a wall, their voice level is reduced because the signal travels through a medium (the wall) that reduces its loudness.  Likewise, as a whale’s signal travels through the GoM to our hydrophones, its loudness is reduced.  How strongly the GoM attenuates the signal is determined by the water’s temperature, salinity, and pH, and by the depth of the recording device below the surface.  Using this information, we can determine how much the loudness of the whale’s signal decreases per kilometer traveled.  This can be seen in Figure 2.

We use the known loudness of the sound emitted by a sperm whale and the recorded loudness of the signal, along with the attenuating effect of the GoM, to determine how far the sperm whale is from our hydrophone.   Unfortunately, due to technical limitations of the equipment, we can only do this with a single hydrophone, so we cannot currently pinpoint the sperm whale’s exact position; we can only say that it lies at a certain distance around the hydrophone.  Figure 3 graphically shows the results of our calculations for two of the 276 sperm whale signals we used with our model to estimate how far the whale is from our hydrophone.
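The paper's Bayesian processing is not detailed here, but the core idea of ranging from loudness can be sketched with a textbook propagation model: received level = source level − transmission loss, where transmission loss combines spherical spreading with frequency-dependent absorption. Thorp's empirical absorption formula stands in below for the full temperature/salinity/pH dependence described above, and all the numbers are illustrative, not the study's data:

```python
import math

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical seawater absorption formula (dB/km), f in kHz."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def transmission_loss_db(r_m, f_khz):
    """Spherical spreading (20 log10 r) plus absorption over range r."""
    return 20 * math.log10(r_m) + thorp_absorption_db_per_km(f_khz) * r_m / 1000.0

def estimate_range(source_level_db, received_level_db, f_khz,
                   r_lo=1.0, r_hi=100_000.0):
    """Bisect for the range at which the transmission loss matches
    the observed drop from source level to received level."""
    target_tl = source_level_db - received_level_db
    for _ in range(100):
        mid = 0.5 * (r_lo + r_hi)
        if transmission_loss_db(mid, f_khz) < target_tl:
            r_lo = mid
        else:
            r_hi = mid
    return 0.5 * (r_lo + r_hi)

# Illustrative example: a click near 10 kHz, source level 230 dB,
# received at 160 dB, gives a range on the order of a few kilometers.
print(round(estimate_range(230.0, 160.0, 10.0)), "m")
```

Because a single hydrophone gives only this one distance, the whale's position is constrained to a circle of that radius around the sensor, as the section above explains.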

1aBAb – Shear wave elastography for skeletal muscle diagnostics

Timofey Krit – timofey@acs366.phys.msu.ru
Arina Ivanova – ivanova.ad16@physics.msu.ru
M.V. Lomonosov Moscow State University, Faculty of Physics
Vorobyovy Gory, 1/2, Moscow 119991, Russia

Yuly Kamalov – kamalov53@yandex.ru
Russian Scientific Center of Surgery named after academician B.V. Petrovsky
Abrikosovsky Lane, 2, Moscow 119991, Russia

Popular version of paper 1aBAb
Presented Monday morning, December 7, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Ultrasound is a widespread diagnostic method due to its ease of use and the relatively low cost of equipment. However, it is impossible to accurately determine the state of internal organs using ultrasound alone, because the method is based on detecting the boundaries between media with significantly different elastic properties. Ultrasound does, however, allow one to track the movement of the medium precisely. More than two decades ago, Academician of the Russian Academy of Sciences, Professor O.V. Rudenko, proposed a method in which a shear wave is excited at a certain depth. The shear wave velocity is defined primarily by the shear modulus of the medium, an elastic parameter that changes significantly when the functional state of the medium changes. The values of the shear modulus of different organs range from several pascals to several gigapascals (Fig. 1). Therefore, registering the propagation of a shear wave, in contrast to conventional ultrasound, allows one to determine the functional state of body organs and tissues.
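The quantitative link the method exploits is that, in a tissue of density ρ, a shear wave traveling at speed c implies a shear modulus μ = ρc². A minimal sketch with illustrative values (not measurements from this study):

```python
# Shear modulus from shear wave speed: mu = rho * c^2.
# Soft tissue density is close to that of water, ~1000 kg/m^3.
def shear_modulus_pa(density_kg_m3, shear_speed_m_s):
    """mu = rho * c^2, in pascals."""
    return density_kg_m3 * shear_speed_m_s ** 2

# Shear speeds of roughly 1-10 m/s in soft tissue correspond to
# shear moduli of roughly 1 kPa to 100 kPa.
print(shear_modulus_pa(1000.0, 2.0))  # 4000.0 Pa, i.e. 4 kPa
```

This is why measuring how fast the shear wave propagates (which ultrasound can track precisely) directly yields the elastic parameter that varies so widely between healthy and diseased tissue.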


Figure 1: The values of the shear modulus of different body organs

Skeletal muscle is a rather complex and unique tissue, and many widely used elasticity models are inapplicable to it. In this work, we used the above-mentioned method to measure the shear modulus of skeletal muscles, modified by focusing the ultrasonic beam into a shape called a “blade” (Fig. 2).

Figure 2: The ultrasonic beam focused in a shape of “blade”

The “blade” shape of the focused ultrasonic beam makes it possible to excite a shear wave along the ultrasonic probe only. The excited wave then propagates to the left and to the right from the axis of symmetry of the ultrasound probe (Fig. 2). In anisotropic media, including skeletal muscles, this approach increases the measurement accuracy. A “blade”-shaped focus is not currently used in clinical equipment. However, measurements of the shear wave velocity in skeletal muscles can still be carried out on existing equipment using the SWEI algorithm built into the ultrasound diagnostic device. For this purpose, the ultrasound probe is placed first along and then across the muscle fibers. Muscle fibers are visible in the standard B-mode, which allows measurements at the two specified probe positions.

We obtained experimental data from healthy volunteers whose biceps were loaded with barbell plates. For several loads, we measured the shear modulus along and across the muscle fibers of the biceps (Fig. 3).

Figure 3: The clinical trials in healthy volunteers

It turned out that with increasing load on the biceps, the shear modulus along the muscle fibers increases nonlinearly, while the shear modulus across the muscle fibers does not depend on the applied load. The algorithm has been clinically tested and can be used to determine the functional state of skeletal muscles. The proposed method can already be used today to assess how various physical activities and nutrition regimens affect skeletal muscles, as well as to locate shear modulus inhomogeneities that cause muscle malfunction.

1aSCb4 – Formant and voice quality changes as a function of age in women

Laura L. Koenig – koenig@haskins.yale.edu
Adelphi University
158 Cambridge Avenue
Garden City NY 11530

Susanne Fuchs – fuchs@leibniz-zas.de
Leibniz-Zentrum Allgemeine Sprachwissenschaft (ZAS)
Schützenstr. 18
10117 Berlin (Germany)

Annette Gerstenberg – gerstenberg@uni-potsdam.de
University of Potsdam, Department of Romance Studies
Am Neuen Palais 10
14467 Potsdam (Germany)

Moriah Rastegar – moriahrastegar@mail.adelphi.edu
Adelphi University
158 Cambridge Avenue
Garden City NY 11530

Popular version of the paper: 1aSCb4
Presented: December 7, 2020 at 10:15 AM – 11:00 AM EST

As we age, we change in many ways:  How we look, the way we dress, and how we speak.  Some of these changes are biological, and others are social.  All are potentially informative to those we interact with.


Caption: “Younger (left) and older (right). Image obtained under a publicly available Creative Commons license.  Aging manipulation courtesy of Jolanda Fuchs.”

******

The human voice is a rich source of information on speaker characteristics, and studies indicate that listeners are relatively accurate in judging the age of an unknown person they hear on the phone.  Vocal signals carry information on (a) the sizes of the mouth and throat cavities, which change as we produce different vowels and consonants; (b) the voice pitch, which reflects characteristics of the vocal-folds; and (c) the voice quality, which also reflects vocal-fold characteristics, but in complex and multidimensional ways.  One voice quality dimension is whether a person speaks with a breathier voice quality.  Past studies on the acoustic effects of vocal aging have concentrated on formants, which reflect upper-airway cavity sizes, and fundamental frequency, which corresponds to voice pitch.  Few studies have assessed voice quality.

Further, most past work investigated age by comparing people from different generations.  Cross-generational studies can be confounded by changes in human living conditions such as nutrition, employment settings, and exposure to risk factors.  To separate effects of aging from environmental factors, it is preferable to assess the same individuals at different time points.  Such work is rather rare given the demands of re-connecting with people over long periods of time.

Here, we take advantage of the French LangAge corpus (https://www.uni-potsdam.de/langage/).  Participants engaged in biographical interviews beginning in 2005, and were revisited in subsequent years.  Our analysis is based on four women recorded in 2005 and 2015.  We focus on women because biological aging may differ across the sexes. From all words, we selected two of the most frequent that were produced by each speaker at both time points and contained no voiceless sounds.

Numbers 049 and 016 identify the two speakers, f=female, and the following value (e.g. 72) is the age of the speaker.

049_f_72_LeGris.wav 016_f_71_chiens.wav
049_f_82_LeBaigneur.wav 016_f_81_chiens.wav

Our results show that all four speakers have lower cavity (formant) frequencies at older ages.  This may reflect lengthening of the upper airways; e.g., the larynx descends somewhat over time.  Voice quality also changed, with breathier vocal quality at younger ages than at older ages.  However, speakers differed considerably in the magnitude of these changes and in which measures demonstrated aging effects.
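The link between airway length and formant frequency can be sketched with the classic quarter-wavelength tube model of the vocal tract. This is an idealization (the study measured formants directly), and the lengths below are illustrative:

```python
# Quarter-wavelength resonator model: a uniform tube closed at the
# glottis and open at the lips resonates at F_n = (2n - 1) * c / (4L).
SPEED_OF_SOUND = 35000.0  # cm/s in warm, moist air (approximate)

def formant_hz(n, tract_length_cm):
    """n-th formant of an idealized uniform vocal tract of length L."""
    return (2 * n - 1) * SPEED_OF_SOUND / (4 * tract_length_cm)

# Lengthening the tract from 15 cm to 16 cm lowers the first formant:
print(formant_hz(1, 15.0))  # ~583 Hz
print(formant_hz(1, 16.0))  # ~547 Hz
```

So even a modest descent of the larynx, which lengthens the tube, shifts formants downward, consistent with the lower formant frequencies observed at older ages.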

In some cultures, a breathy vocal quality is a marker of gender. Lifestyle changes in later life could lead to a reduced need to demonstrate “female” qualities. In our dataset, the speaker with the largest changes in breathiness was widowed between recording times.  Along with physiological factors and social-communicative conditions, ongoing adaptation to gender roles as a person ages may also contribute to changes in voice quality.

5aPPb1 – The freedom to move around – Hearing aid research takes a big step towards real life

Stefan Klockgether – stefan.klockgether@sonova.com
Diego Ulloa Sanchez
Charlotte Vercammen
Peter Derleth
Sonova AG
Laubisrütistrasse 28
CH 8712 Stäfa
Switzerland

Popular version of paper 5aPPb1
Presented Friday morning, December 11, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

One major aspect of hearing aid development is audiological performance. This describes the hearing benefit a hearing-impaired person can gain from using a hearing aid.

Measuring audiological performance depends on the perception of individuals. To reduce the impact of individual subject behavior on measured results, the degrees of freedom during an audiological study are usually strongly limited, and important aspects of real-life perception are sacrificed in favor of experimental control.

In recent years, efforts have been made to substantiate the performance of hearing aids in real-life situations. It is important to understand listeners’ behavior in realistic acoustic environments, especially the potential differences between normal-hearing and hearing-impaired people.

The new “Real Life Lab” at Sonova brings the freedom to move around into controlled laboratory conditions. The lab provides a stage where persons can move around freely and interact with sound sources. The stage is surrounded by loudspeakers that present sound from all directions. Any motion by persons on the stage can be tracked in real time to retain experimental control. The motion data can be passively tracked or actively used to trigger audio and video reproduction.


Figure 1: The Sonova Real Life Lab with loudspeakers at the sides, below the floor and at the ceiling.

The lab is also used to investigate behavior in acoustic scenes. A pilot study was conducted to find differences between normal-hearing and hearing-impaired listeners. The subjects had to find different acoustic targets in a complex scene (Video 1). Their motion, as well as their performance, was tracked with motion capture (Video 2).

Video 1: The subject has to find different acoustic targets (a crying baby, a barking dog or a ringing telephone). The subject wears a hairband to track head position and orientation, a vest to track the torso and a controller to point to the found target.

Video 2: Motion capturing view of the task. The three tracked objects are the head position in pink, the torso in orange and the pointer in light blue with a pink beam indicating when and where a target has been found.  

Five normal hearing (NH, age ≈ 27), five subjects with hearing loss wearing hearing aids (HL, age ≈ 74) and five age-matched persons with age-appropriate hearing (AA, age ≈ 71) participated in the study.

Figure 2: Number of targets found in the allowed time, weighted amount of head movement, and accuracy.

The results show clear differences in performance as well as in search strategy. The young normal-hearing listeners were fast, accurate, and moved their heads a lot. The age-appropriate hearing group was slower, but just as accurate, and also moved their heads a lot. The hearing-impaired listeners were slower, less accurate, and moved their heads less. Hearing-impaired listeners seem to benefit less from the additional acoustic information provided by head movements and may therefore reduce those movements.