2aAO5 – Tracking natural hydrocarbon gas flow over the course of a year

Alexandra M Padilla – apadilla@ccom.unh.edu
Thomas C Weber – weber@ccom.unh.edu
University of New Hampshire
24 Colovos Road
Durham, NH, 03824

Frank Kinnaman – frank_kinnaman@ucsb.edu
David L Valentine – valentine@ucsb.edu
University of California – Santa Barbara
Webb Hall
Santa Barbara, CA, 93106

Popular version of paper 2aAO5
Presented Wednesday morning, June 9, 2021
180th ASA Meeting, Acoustics in Focus

Researchers have been studying the release of methane, a greenhouse gas, in the form of bubbles from different regions of the ocean’s seafloor for decades to understand its impact on global climate change and ocean acidification (Kessler, 2014). One region, the Coal Oil Point (COP) seep field, is a well-studied natural hydrocarbon (e.g., oil droplets and methane gas bubbles) seep site, known for its prolific hydrocarbon activity (Figure 1; Hornafius et al., 1999). Researchers who have studied the COP seep field have observed both spatial and temporal changes in gas flow in the area, which are thought to be linked to external processes such as tides (Boles et al., 2001) and offshore oil production from oil rigs within the seep field (Quigley et al., 1999).

Figure 1. Video of methane gas bubbles rising through the ocean’s water column within the COP seep field.

In recent years, an oil platform within the COP seep field, known as Platform Holly, has become inactive and been decommissioned, and anecdotal observations suggest a resurgence in natural hydrocarbon seepage activity in the vicinity of the platform. This led a group from UNH and UCSB to map the hydrocarbon activity in the COP seep field (Padilla et al., 2019), where we were able to identify a large patch of high seepage activity near Platform Holly (Figure 2). The shut-in at Platform Holly provided us with the opportunity to deploy a long-term acoustic monitoring system to study both the spatial and temporal changes in hydrocarbon gas flow in the region and to assess how they are affected by external processes.

Figure 2. a) Acoustic map of natural hydrocarbon activity within the COP seep field (Padilla et al., 2019). b) Zoomed in acoustic map near Platform Holly. c) Image of Platform Holly.

We mounted a split-beam echosounder on one of Platform Holly’s cross beams, at a depth of approximately 8 m below the sea surface. The echosounder was programmed to emit an acoustic signal every 10 seconds and has been collecting acoustic data since early September 2019, providing us with more than a year’s worth of acoustic data to process and analyze (Figure 3). The signal emitted by the echosounder interacts with scatterers in the water column, mostly methane gas bubbles in our case, and the echosounder measures the target strength of these scatterers. Target strength represents how strongly a scatterer reflects sound back toward the echosounder (for more information on acoustics and gas bubbles, see the article by Weber, 2016).

Figure 3. Acoustic observations of hydrocarbon activity (ranges between 10-140 m) west of Platform Holly as a function of range from the echosounder and time. Warm and cool colors represent high and low target strength, which correspond, roughly, to high and low seepage activity, respectively.
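Gas bubbles are such strong acoustic scatterers largely because they resonate. As a rough illustration of the underlying physics (not the authors’ processing chain), the classical Minnaert formula estimates the resonance frequency of a bubble from its radius and the ambient pressure:

```python
import math

def minnaert_frequency(radius_m, depth_m=0.0,
                       gamma=1.4, rho=1025.0, p_atm=101325.0):
    """Approximate resonance frequency (Hz) of a spherical gas bubble.

    Minnaert (1933): f0 = (1 / (2*pi*a)) * sqrt(3*gamma*P / rho),
    where P is the ambient (atmospheric + hydrostatic) pressure.
    Surface tension and thermal effects are neglected.
    """
    pressure = p_atm + rho * 9.81 * depth_m  # ambient pressure (Pa)
    return math.sqrt(3.0 * gamma * pressure / rho) / (2.0 * math.pi * radius_m)

# A bubble of 1 mm radius near the surface resonates at roughly 3 kHz.
f0 = minnaert_frequency(1e-3)
```

Near its resonance, a millimeter-scale bubble scatters sound far more strongly than a rigid object of the same size, which is why echosounders are so effective at detecting seeps.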

The acoustic measurements, shown in Figure 3, indicate that there are temporal changes in the location and target strength of the hydrocarbons in the region; however, they do not tell us how the gas flow of these hydrocarbons is changing with time. Exploiting the split-beam capability of the echosounder allows us to track the position of scatterers in the acoustic data, so we can identify and classify different hydrocarbon structure types (Figure 4) and use the appropriate mathematical equations to convert acoustic measurements into gas flow. This will allow us to track changes in the gas flow of hydrocarbons near Platform Holly and learn more about how gas flow is affected by external processes, like tides, storms, and earthquakes.

Figure 4. a) Acoustic observations of hydrocarbon activity. b) Acoustic classification map of different hydrocarbon structure types.
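The full acoustics-to-flow conversion is the subject of the presentation itself; as a simplified, hypothetical illustration of only the final arithmetic step, once a bubble size distribution and a bubble release rate have been estimated, the volume flux follows from the bubble volumes:

```python
import math

def volume_flux(radii_m, bubbles_per_second):
    """Simplified gas flux estimate: mean bubble volume times bubble rate.

    radii_m: estimated radii (m) of the observed bubbles
    bubbles_per_second: number of bubbles crossing the beam per second
    Returns the volume flux (m^3/s) at in-situ pressure.
    """
    mean_volume = sum((4.0 / 3.0) * math.pi * r**3 for r in radii_m) / len(radii_m)
    return mean_volume * bubbles_per_second

# Example: bubbles of 2 mm radius released at 50 bubbles per second
q = volume_flux([2e-3] * 10, 50.0)  # ~1.7e-6 m^3/s
```

In practice the radii and rates themselves must be inferred acoustically, which is where the structure-type classification above comes in.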

1pAB6 – Oscillatory whistles – the ups and downs of identifying species in passive acoustic recordings

Julie N. Oswald – jno@st-andrews.ac.uk
Sam F. Walmsley – sjfw@st-andrews.ac.uk
Scottish Oceans Institute
School of Biology
University of St Andrews, UK

Caroline Casey – cbcasey@ucsc.edu
Selene Fregosi – selene.fregosi@gmail.com
Brandon Southall – brandon.southall@sea-inc.net
SEA Inc.,
9099 Soquel Drive,
Aptos, CA 95003

Vincent M. Janik – vj@st-andrews.ac.uk
Scottish Oceans Institute
School of Biology
University of St Andrews, UK

Popular version of paper 1pAB6 Oscillatory whistles—The ups and downs of identifying species in passive acoustic recordings
Presented Tuesday afternoon, June 8, 2021
180th ASA Meeting, Acoustics in Focus

Many dolphin species communicate using whistles. Because whistles are produced so frequently and travel well under water, they are the focus of a wide range of passive acoustic studies. A challenge inherent to this type of work is that many acoustic recordings do not have associated visual observations and so species in the recordings must be identified based on the sounds that they make.

Acoustic species identification can be challenging for several reasons. First, the frequency contours of dolphin whistles are variable, and each species produces many different whistle types. Also, whistles often exhibit significant overlap in their characteristics between species. Traditionally, acoustic species classifiers use variables measured from all whistles, regardless of what type they are. An assumption of this approach is that there are underlying features in every whistle that provide information about species identity. In human terms, we can tell a human scream or grunt from those of a chimpanzee because they sound different. But is this the case for dolphin whistles? Can a dolphin tell whether a whistle it hears is produced by another species? If so, is species information carried in all whistles?

To investigate these questions, we analyzed whistles produced by short- and long-beaked common dolphins in the Southern California Bight. Our previous work has shown that the whistles of these closely related species overlap significantly in time and frequency characteristics measured from all whistles, so we hypothesized that species information may be carried in the shape of specific whistle contours rather than by general characteristics of all whistles. We used artificial neural networks to organize whistles into categories, or whistle types. Most of the resulting whistle types were produced by both species (we called these shared whistle types), but each species also had distinctive whistle types that only they produced (we called these species-specific whistle types). Almost half of the species-specific whistles produced by short-beaked common dolphins had oscillations in their contours, while oscillations were very rare for both long-beaked common dolphins and shared whistle types. This clear difference between species in the use of one specific whistle shape suggests that whistle type is important for species identification.

We further tested the role of species-specific whistle types in acoustic species identification by creating three different classifiers for the two species – one using all whistles, one using only whistles from shared whistle types, and one using only whistles from species-specific whistle types. The classifier that used whistles from species-specific whistle types performed significantly better than the other two, demonstrating that species-specific whistle types collectively carry more species information than other whistle types and that the assumption that all whistles carry species information is not correct.

The results of this study show that we should re-evaluate our approach to acoustic species identification. Instead of measuring variables from whistles regardless of type, we should focus on identifying species-specific whistle types and creating classifiers based on those whistles alone. This new focus on species-specific whistle types would pave the way for more accurate tools for identifying species in passive acoustic recordings.

2aMU5 – Evaluation of individual differences of vibration duration of tuning forks

Kyota Nomizu – k-nomizu@chiba-u.jp
Sho Otsuka – otsuka.s@chiba-u.jp
Seiji Nakagawa – s-nakagawa@chiba-u.jp
Chiba University
1-33 Yayoi-cho, Inage-ku
Chiba-shi, 263-8522, Japan

Popular version of paper 2aMU5
Presented Wednesday morning, June 9, 2021
180th ASA Meeting, Acoustics in Focus

A tuning fork is a metal device that emits a sound of a certain frequency when struck. Tuning forks are used for various purposes, such as music, medicine, and healing. In addition to the fundamental frequency component, a harmonic tone with a roughly six-times-higher frequency appears immediately after the fork is struck. First of all, the fundamental frequency must be accurate. Additionally, the fundamental tone needs to be sustained for a long time, while the harmonic tone should decay rapidly. However, only the fundamental frequency is tuned in the manufacturing process of tuning forks; the durations of the tones have not been evaluated. Moreover, most studies on tuning forks have addressed tone frequencies or mode analysis, and studies on vibration duration are very limited.

Figure 1: Tuning forks used in the experiment.
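The “roughly six times” ratio follows from beam theory: each tine behaves approximately as a cantilever, whose bending-mode frequencies scale as the squares of the roots of cos(βL)·cosh(βL) = −1. A quick check of that ratio, under the ideal-cantilever assumption:

```python
# Frequency ratio of a cantilever's second bending mode to its first.
# For an ideal cantilever, mode frequencies are proportional to beta_n**2,
# where the beta_n are roots of cos(b) * cosh(b) = -1.
BETA_1 = 1.87510  # first root
BETA_2 = 4.69409  # second root

# ~6.27: the first overtone sits about six times above the fundamental
ratio = (BETA_2 / BETA_1) ** 2
```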

In this study, we aimed to assess individual differences in the vibration duration of tuning forks and to clarify the factors that affect it. As a first step, we evaluated the effect of the holding force.

In the experiment, we struck four individual tuning forks of the same type, recorded their sound, and estimated the durations of their fundamental and harmonic tones. The measurements were repeated while varying the holding force.

Figure 2: Evaluation of the vibration duration.
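One common way to estimate such durations (a sketch of the general technique, not necessarily the authors’ exact method) is to fit an exponential decay to a tone’s amplitude envelope and report the time it takes to fall by a fixed number of decibels:

```python
import numpy as np

def decay_time(envelope, fs, drop_db=60.0):
    """Estimate how long a tone's envelope takes to fall by `drop_db` dB.

    Fits a line to the envelope in dB (exponential decay is linear in dB)
    and extrapolates to the requested drop.
    envelope: positive amplitude samples; fs: their sampling rate (Hz).
    """
    t = np.arange(len(envelope)) / fs
    level_db = 20.0 * np.log10(envelope)
    slope, intercept = np.polyfit(t, level_db, 1)  # slope in dB per second
    return -drop_db / slope  # seconds to decay by drop_db

# Synthetic check: an envelope decaying at 10 dB/s needs 6 s to drop 60 dB.
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
env = 10.0 ** (-10.0 * t / 20.0)  # -10 dB per second
dur = decay_time(env, fs)  # ~6 seconds
```

Applying the same fit separately to the band-filtered fundamental and harmonic components would give one duration per tone, per strike.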

As a result, significant individual differences in the durations of the fundamental and harmonic tones were observed. In particular, the tuning fork with the shortest length and smallest mass had the shortest fundamental tone. The durations of the fundamental and harmonic tones also varied depending on the holding force, and the best holding force for the two tones differed for each tuning fork.

These results suggest that even for the same type of tuning fork, small differences in shape and heterogeneity of the material may affect the vibration duration. They also suggest that for each tuning fork there is a desirable holding force that achieves both a long duration of the fundamental tone and rapid decay of the harmonic tone.

Figure 3: Duration of the fundamental tone at each holding force range.

In the future, building on these results on the holding force, a comprehensive study is needed on the effects of shape parameters and of environmental conditions such as temperature and humidity. Such results could contribute to improving the manufacturing process of tuning forks, which currently relies on the empirical knowledge of artisans.

1aSC2 – The McGurk Illusion

Kristin J. Van Engen – kvanengen@wustl.edu
Washington University in St. Louis
1 Brookings Dr.
Saint Louis, MO 63130

Popular version of paper 1aSC2 The McGurk illusion
Presented Tuesday morning, June 8, 2021
180th ASA Meeting, Acoustics in Focus

In 1976, Harry McGurk and John MacDonald published their now-famous article, “Hearing Lips and Seeing Voices.” The study was a remarkable demonstration of how what we see affects what we hear: when the audio for the syllable “ba” was presented to listeners with the video of a face saying “ga”, listeners consistently reported hearing “da”.

That original paper has been cited approximately 7500 times to date, and in the subsequent 45 years, the “McGurk effect” has been used in countless studies of audiovisual processing in humans. It is typically assumed that people who are more susceptible to the illusion are also better at integrating auditory and visual information. This assumption has led to the use of susceptibility to the McGurk illusion as a measure of an individual’s ability to process audiovisual speech.

However, when it comes to understanding real-world multisensory speech perception, there are several reasons to think that McGurk-style stimuli are poorly-suited to the task. Most problematic is the fact that McGurk stimuli rely on audiovisual incongruence that never occurs in real-life audiovisual speech perception. Furthermore, recent studies show that susceptibility to the effect does not actually correlate with performance on audiovisual speech perception tasks such as understanding sentences in noisy conditions. This presentation reviews these issues, arguing that, while the McGurk effect is a fascinating illusion, it is the wrong tool for understanding the combined use of auditory and visual information during speech perception.

3aSP1 – Using Physics to Solve the Cocktail Party Problem

Keith McElveen – keith.mcelveen@wavesciencescorp.com
Wave Sciences
151 King Street
Charleston, SC USA 29401

Popular version of paper ‘Robust speech separation in underdetermined conditions by estimating Green’s functions’
Presented Thursday morning, June 10th, 2021
180th ASA Meeting, Acoustics in Focus

Nearly seventy years ago, a hearing researcher named Colin Cherry wrote: “One of our most important faculties is our ability to listen to, and follow, one speaker in the presence of others. This is such a common experience that we may take it for granted; we may call it ‘the cocktail party problem.’ No machine has been constructed to do just this, to filter out one conversation from a number jumbled together.”

Despite many claims of success over the years, the Cocktail Party Problem has resisted solution. The present research investigates a new approach that blends tricks used by human hearing with the laws of physics. With this approach, it is possible to isolate a voice based on where it must have come from – somewhat like visualizing balls moving around a billiard table after being struck, except in reverse, and in 3D. This approach is shown to be highly effective in extremely challenging real-world conditions with as few as four microphones – the same number found in many smart speakers and pairs of hearing aids.

The first “trick” is something that hearing scientists call “glimpsing”. Humans subconsciously piece together audible “glimpses” of a desired voice as it momentarily rises above the level of competing sounds. After gathering enough glimpses, our brains “learn” how the desired voice moves through the room to our ears and use this knowledge to ignore the other sounds.

The second “trick” is based on how humans use sounds that arrive “late”, because they bounced off of one or more large surfaces along the way. Human hearing somehow combines these reflected “copies” of the talker’s voice with the direct version to help us hear more clearly.

The present research mimics human hearing by using glimpses to build a detailed physics model – called a Green’s Function – of how sound travels from the talker to each of several microphones. It then uses the Green’s Function to reject all sounds that arrived via different paths and to reassemble the direct and reflected copies into the desired speech. The accompanying sound file illustrates typical results this approach achieves.
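As a toy illustration of the underlying principle (not Wave Sciences’ actual algorithm): if you know the impulse response from a talker to a microphone – a direct arrival plus a delayed reflection – correlating the received signal with that response (a matched filter) stacks both copies coherently into one strong peak, while sound arriving along other paths does not line up:

```python
import numpy as np

# Toy impulse response: a direct arrival plus one wall reflection.
h = np.zeros(50)
h[0] = 1.0    # direct path
h[30] = 0.6   # reflected path, delayed and attenuated

# A short impulsive "speech" burst travels through the room.
source = np.zeros(20)
source[0] = 1.0
received = np.convolve(source, h)  # direct + reflected copies arrive

# Matched filter: correlate with the known impulse response.
# Both copies add coherently into a single large peak.
aligned = np.correlate(received, h, mode="full")
peak = aligned.max()  # 1.0**2 + 0.6**2 = 1.36, larger than either copy alone
```

A talker at a different position has a different impulse response, so their direct and reflected copies fail to align and their energy stays spread out – which is, loosely, how knowing the propagation physics lets the desired voice be pulled out of the mixture.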

Original Cocktail Party Sound File, Followed by Separated Nearest Talker, then Farthest

While prior approaches have struggled to equal human hearing in realistic cocktail party babble, even at close distances, the research results we are presenting imply that it is now possible not only to equal but to exceed human hearing and solve the Cocktail Party Problem, even with a small number of microphones in no particular arrangement.

The many implications of this research include improved conference call systems, hearing aids, automotive voice command systems, and other voice assistants – such as smart speakers. Our future research plans include further testing as well as devising intuitive user interfaces that can take full advantage of this capability.

No one knows exactly how human hearing solves the Cocktail Party Problem, but it would be very interesting indeed if it is found to use its own version of a Green’s Function.

1aABa1 – Ending the day with a song: patterns of calling behavior in a species of rockfish

Annebelle Kok – akok@ucsd.edu
Ella Kim – ebkim@ucsd.edu
Simone Baumann-Pickering – sbaumann@ucsd.edu
Scripps Institution of Oceanography – University of California San Diego
9500 Gilman Drive
La Jolla, CA 92093

Kelly Bishop – kellybishop@ucsb.edu
University of California Santa Barbara
Santa Barbara, CA 93106

Tetyana Margolina – tmargoli@nps.edu
John Joseph – jejoseph@nps.edu
Naval Postgraduate School
1 University Circle
Monterey, CA 93943

Lindsey Peavey Reeves – lindsey.peavey@noaa.gov
NOAA Office of National Marine Sanctuaries
1305 East-West Highway, 11th Floor
Silver Spring, MD 20910

Leila Hatch – leila.hatch@noaa.gov
NOAA Stellwagen Bank National Marine Sanctuary
175 Edward Foster Road
Scituate, MA 02474

Popular version of paper 1aABa1 Ending the day with a song: Patterns of calling behavior in a species of rockfish
Presented Tuesday morning, June 8, 2021
180th ASA Meeting, Acoustics in Focus

Fish can be seen as the ‘birds’ of the sea. Like birds, they sing during the mating season to attract potential partners and to repel rival singers. At the height of the mating season, fish singing can become so prominent that it is a dominant feature of the acoustic landscape, or soundscape, of the ocean. Even though this phenomenon is widespread among fish species, not much is known about fish calling behavior, a stark contrast to what we’ve learned about bird calling behavior. As part of SanctSound, a large collaboration of over 20 organizations investigating the soundscapes of US National Marine Sanctuaries, we have investigated the calling behavior of bocaccio (Sebastes paucispinis), a species of rockfish residing along the west coast of North America. Bocaccio produce helicopter-like drumming sounds that increase in amplitude.

We deployed acoustic recorders at five sites across the Channel Islands National Marine Sanctuary for about a year to record bocaccio, and used an automated detection algorithm to extract their calls from the data. Next, we investigated how their calling behavior varied with time of day, moon phase, and season. Bocaccio predominantly called at night, with peaks at sunset and sunrise. Shallow sites had a peak early in the night, while the peak at deeper sites occurred more towards the end of the night, suggesting that bocaccio might move up and down in the water column over the course of the night. Bocaccio avoided calling during the full moon, preferentially producing their calls when there was little lunar illumination. Nevertheless, bocaccio were never truly quiet: they called throughout the year, with peaks in winter and early spring.
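The diel (time-of-day) patterns described here come down to binning automated call detections by hour; a minimal sketch with hypothetical detection times (the function and example data are illustrative, not the study’s pipeline):

```python
from collections import Counter

def calls_per_hour(detection_hours):
    """Count call detections in each hour-of-day bin (0-23).

    detection_hours: hour of day (0-23) for each detected call,
    e.g. extracted from the timestamps of automated detections.
    Returns a 24-element list, index = hour of day.
    """
    counts = Counter(int(h) % 24 for h in detection_hours)
    return [counts.get(h, 0) for h in range(24)]

# Hypothetical detections clustered near sunset (~20:00) and sunrise (~6:00)
example = [20, 20, 21, 19, 5, 6, 6, 7, 13]
histogram = calls_per_hour(example)  # peaks at hours 6 and 20
```

Repeating the same binning per site, per lunar phase, or per month yields the nighttime, new-moon, and winter/spring peaks reported above.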

The southern population of bocaccio on the US west coast was considered overfished by commercial and recreational fisheries prior to 2017, and has been rebuilt to be a sustainably fished stock today. One of the keys to this sustainability is reproductive success: bocaccio are very long-lived fish that don’t reproduce until they are 4-7 years old, and they can live to be 50 years old. They are known to spawn in the Channel Islands National Marine Sanctuary region from October to July, peaking in January, and studying their calling patterns can help us ensure that we keep this population and its habitat viable well into the future. Characterizing their acoustic ecology can tell us more about where in the sanctuary they reside and spawn, and understanding their reproductive calling behavior can help tell us which time of the year they are most vulnerable to noise pollution. More importantly, these results give us more insight into the wondrous marine soundscape and let us imagine what life must be like for marine creatures that contribute to and rely on it.