1aMU2 – Measurements and Analysis of Acoustic Guitars During Various Stages of Their Construction – Mark Rau

Mark Rau – mrau@ccrma.stanford.edu
Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
660 Lomita Court
Stanford, California 94305, USA

Popular version of paper 1aMU2, “Measurements and Analysis of Acoustic Guitars During Various Stages of Their Construction”
Presented Tuesday morning, 9:50 – 10:05 am, June 8, 2021
180th ASA Meeting, Acoustics in Focus

Stringed instruments have an internal structure that determines how they vibrate and produce sound when driven by the strings. This structure is made up of multiple vibrational resonances and is referred to as the resonant structure. Many stringed instrument builders (luthiers) take acoustic measurements of instruments as they are being built to probe the resonant structure and make changes so that the instrument will sound as intended. However, the resonant structure evolves continuously throughout the construction process, so it is unclear at which stage the acoustic measurements should be made.

To address this, we measured the resonant structure of three guitars during their construction. Two guitars are of the Orchestra Model (OM) style and were made by the Santa Cruz Guitar Company. The third is a 000-28-style guitar built by the author. The guitars were measured at multiple stages of construction, including during the bracing of the top, construction of the box, sanding, application of polish, and once fully constructed. The stages of construction of the 000-28 are shown in Figure 2.

Figure 1: The three guitars in their completed state. The left and center guitars are the OMs and the right guitar is the 000-28.

Figure 2: Various stages of the 000-28 construction.

The resonant structure was measured by using a small hammer to impart a force to the instrument, and a laser Doppler vibrometer to measure the resulting vibrations. This provided the frequency and amplitude of each structural resonance as well as how long it would ring once struck.
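The core of this measurement can be sketched in a few lines of Python: divide the spectrum of the measured vibration by the spectrum of the hammer force to obtain the frequency response, whose peaks are the structural resonances. The sample rate, the hammer pulse, and the single 100 Hz resonance below are synthetic stand-ins, not data from the guitars.

```python
import numpy as np

fs = 48000                  # sample rate in Hz (an arbitrary choice)
n = fs                      # analyze one second of data
t = np.arange(n) / fs

# Synthetic stand-ins for the measured signals: a short, smooth hammer tap
# and the vibrometer output, modeled as a single decaying resonance at 100 Hz.
tap = np.hanning(96)[:48]                            # 1 ms force pulse
force = np.zeros(n)
force[:48] = tap
mode = np.exp(-8 * t) * np.sin(2 * np.pi * 100 * t)  # impulse response of one mode
velocity = np.convolve(mode, tap)[:n]                # what the vibrometer sees

# Frequency response = vibration spectrum / force spectrum.  Peaks give the
# frequency and amplitude of each resonance, and each peak's width encodes
# how long that resonance rings after being struck.
H = np.fft.rfft(velocity) / np.fft.rfft(force)
freqs = np.fft.rfftfreq(n, 1 / fs)
band = freqs < 1000                    # guitar body resonances sit low
peak_hz = freqs[band][np.argmax(np.abs(H[band]))]
print(peak_hz)
```

On real recordings, the hammer and vibrometer channels would replace the synthetic `force` and `velocity` signals, and averaging over several taps would reduce noise.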

Figure 3: Vibration measurement setup.

The lowest resonances are the most important because they fall near the fundamental frequencies of most notes on the guitar, so we tracked how the first three prominent resonances changed. Figure 4 shows the frequency response of the 000-28 with the box constructed and sanded (top right of Fig. 2) and with the guitar fully constructed (bottom right of Fig. 2). The lowest three prominent resonances are circled, and their structural mode shapes are shown for the guitar box.
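Picking out “the first three prominent resonances” from a measured response is itself a small signal-processing step. A possible sketch, using a synthetic response whose three resonance frequencies are invented for illustration:

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical magnitude response on a 1 Hz grid, built from three
# resonance peaks sitting on a small broadband floor.
freqs = np.arange(20, 1000)

def resonance(f, f0, width, amp):
    return amp / (1 + ((f - f0) / width) ** 2)   # Lorentzian-shaped peak

mag = (resonance(freqs, 100, 5, 1.0)
       + resonance(freqs, 200, 8, 0.8)
       + resonance(freqs, 250, 8, 0.6)
       + 0.02)

# Find peaks, keep the three most prominent, report them lowest-first.
peaks, props = find_peaks(mag, prominence=0.1)
top3 = np.argsort(props["prominences"])[::-1][:3]
lowest_three = sorted(int(f) for f in freqs[peaks[top3]])
print(lowest_three)
```

Tracking the resonances across construction stages then amounts to repeating this peak-picking on each stage's measured response.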

Figure 4: Frequency response of the 000-28 box (left) and completed guitar (right). The lowest three prominent resonances are highlighted.

We observed some general trends as the guitars evolved, such as the resonant frequencies and amplitudes decreasing as a guitar nears completion, particularly once the polish is applied. If one is trying to achieve a specific sonic quality from an instrument, we recommend taking measurements before the final sanding and adjusting the amount of sanding based on those measurements. Final alterations can then be made by carving the braces through the sound hole.

2aAO5 – Tracking natural hydrocarbon gas flow over the course of a year – Alexandra M Padilla

Alexandra M Padilla – apadilla@ccom.unh.edu
Thomas C Weber – weber@ccom.unh.edu
University of New Hampshire
24 Colovos Road
Durham, NH, 03824

Frank Kinnaman – frank_kinnaman@ucsb.edu
David L Valentine – valentine@ucsb.edu
University of California – Santa Barbara
Webb Hall
Santa Barbara, CA, 93106

Popular version of paper 2aAO5

Presented Wednesday morning, June 9, 2021

180th ASA Meeting, Acoustics in Focus

Researchers have been studying the release of methane, a greenhouse gas, in the form of bubbles from different regions of the ocean’s seafloor for decades to understand its impact on global climate change and ocean acidification (Kessler, 2014). One region, the Coal Oil Point (COP) seep field, is a well-studied natural hydrocarbon (e.g., oil droplets and methane gas bubbles) seep site, known for its prolific hydrocarbon activity (Figure 1; Hornafius et al., 1999). Researchers who have studied the COP seep field have observed both spatial and temporal changes in the gas flow in the area, which are thought to be linked to external processes such as tides (Boles et al., 2001) and offshore oil production from oil rigs within the seep field (Quigley et al., 1999).

In recent years, an oil platform within the COP seep field, known as Platform Holly, has become inactive and been decommissioned, and anecdotal observations suggest a resurgence in natural hydrocarbon seepage activity in the vicinity of the platform. This led our group from UNH and UCSB to map the hydrocarbon activity in the COP seep field (Padilla et al., 2019), where we identified a large patch of high seepage activity near Platform Holly (Figure 2). The shut-in at Platform Holly provided us with the opportunity to deploy a long-term acoustic monitoring system to study both the spatial and temporal changes in hydrocarbon gas flow in the region and to assess how it is affected by external processes.

We mounted a split-beam echosounder on one of Platform Holly’s cross beams at a depth of approximately 8 m below the sea surface. The echosounder was programmed to emit an acoustic signal every 10 seconds and has been collecting acoustic data since early September 2019, providing us with more than a year’s worth of data to process and analyze (Figure 3). The acoustic signal emitted by the echosounder interacts with scatterers in the water column, mostly methane gas bubbles in our case, and the system measures the target strength of these scatterers. The target strength represents how strongly a scatterer reflects sound back toward the echosounder (for more information on acoustics and gas bubbles, see the article by Weber, 2016).
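As a rough illustration of what a target-strength measurement involves, the received echo level is corrected for the source level and the two-way transmission loss (geometric spreading plus absorption). The sketch below uses the standard sonar equation with invented levels; it is not the calibration of the deployed echosounder.

```python
import math

def target_strength(echo_level_db, source_level_db, range_m, alpha_db_per_m=0.003):
    """Sonar-equation estimate of target strength, TS = EL - SL + 2*TL,
    with one-way transmission loss TL = 20*log10(r) + alpha*r
    (spherical spreading plus absorption).  All levels in dB."""
    tl = 20 * math.log10(range_m) + alpha_db_per_m * range_m
    return echo_level_db - source_level_db + 2 * tl

# Invented numbers only: a weak echo received from 100 m away.
ts = target_strength(echo_level_db=100, source_level_db=210, range_m=100)
print(round(ts, 1))
```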

The acoustic measurements, shown in Figure 3, indicate that there are temporal changes in the location and the target strength of the hydrocarbons in the region; however, they do not tell us how the gas flow of these hydrocarbons is changing with time. Exploiting the split-beam capability of the echosounder allowed us to track the position of scatterers in the acoustic data, so we can identify and classify different hydrocarbon structure types (Figure 4) and use the appropriate mathematical equations to convert acoustic measurements into gas flow. This will allow us to track changes in the gas flow of hydrocarbons near Platform Holly and learn more about how gas flow is affected by external processes, like tides, storms, and earthquakes.
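At its simplest, the final conversion is bookkeeping: once individual bubbles have been tracked and sized, sum the volume of the bubbles crossing the beam and divide by the observation time. The sketch below invents its radii and counts; the actual conversion in this work relies on acoustic models of bubble scattering.

```python
import math

def bubble_volume_m3(radius_m):
    """Volume of a spherical bubble of the given radius."""
    return (4.0 / 3.0) * math.pi * radius_m ** 3

def gas_flow_m3_per_s(radii_m, observation_s):
    """Total bubble volume observed crossing the beam per unit time."""
    return sum(bubble_volume_m3(a) for a in radii_m) / observation_s

# Invented example: one hundred 1 mm radius bubbles over a 10 s window.
flow = gas_flow_m3_per_s([0.001] * 100, observation_s=10.0)
print(f"{flow:.2e} m^3/s")
```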

Figure 1. Video of methane gas bubbles rising through the ocean’s water column within the COP seep field.


Figure 2. a) Acoustic map of natural hydrocarbon activity within the COP seep field (Padilla et al., 2019). b) Zoomed in acoustic map near Platform Holly. c) Image of Platform Holly.


Figure 3. Acoustic observations of hydrocarbon activity (ranges between 10-140 m) west of Platform Holly as a function of range from the echosounder and time. Warm and cool colors represent high and low target strength, which correspond, roughly, to high and low seepage activity, respectively.


Figure 4. a) Acoustic observations of hydrocarbon activity. b) Acoustic classification map of different hydrocarbon structure types.

1pAB6 – Oscillatory whistles – the ups and downs of identifying species in passive acoustic recordings – Julie N. Oswald

Julie N. Oswald – jno@st-andrews.ac.uk
Sam F. Walmsley – sjfw@st-andrews.ac.uk
Scottish Oceans Institute
School of Biology
University of St Andrews, UK

Caroline Casey – cbcasey@ucsc.edu
Selene Fregosi – selene.fregosi@gmail.com
Brandon Southall – brandon.southall@sea-inc.net
SEA Inc.,
9099 Soquel Drive,
Aptos, CA 95003

Vincent M. Janik – vj@st-andrews.ac.uk
Scottish Oceans Institute
School of Biology
University of St Andrews, UK

Popular version of paper 1pAB6 Oscillatory whistles—The ups and downs of identifying species in passive acoustic recordings
Presented Tuesday afternoon, June 8, 2021
180th ASA Meeting, Acoustics in Focus


Many dolphin species communicate using whistles. Because whistles are produced so frequently and travel well under water, they are the focus of a wide range of passive acoustic studies. A challenge inherent to this type of work is that many acoustic recordings do not have associated visual observations and so species in the recordings must be identified based on the sounds that they make.

Acoustic species identification can be challenging for several reasons. First, the frequency contours of dolphin whistles are variable, and each species produces many different whistle types. Also, whistles often exhibit significant overlap in their characteristics between species. Traditionally, acoustic species classifiers use variables measured from all whistles, regardless of what type they are. An assumption of this approach is that there are underlying features in every whistle that provide information about species identity. In human terms, we can tell a human scream or grunt from those of a chimpanzee because they sound different. But is this the case for dolphin whistles? Can a dolphin tell whether a whistle it hears is produced by another species? If so, is species information carried in all whistles?

To investigate these questions, we analyzed whistles produced by short- and long-beaked common dolphins in the Southern California Bight. Our previous work has shown that the whistles of these closely related species overlap significantly in time and frequency characteristics measured from all whistles, so we hypothesized that species information may be carried in the shape of specific whistle contours rather than by general characteristics of all whistles. We used artificial neural networks to organize whistles into categories, or whistle types. Most of the resulting whistle types were produced by both species (we called these shared whistle types), but each species also had distinctive whistle types that only they produced (we called these species-specific whistle types). Almost half of the species-specific whistles produced by short-beaked common dolphins had oscillations in their contours, while oscillations were very rare for both long-beaked common dolphins and shared whistle types. This clear difference between species in the use of one specific whistle shape suggests that whistle type is important for species identification.
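The categorization step can be illustrated with a much simpler stand-in for the neural networks used in the study: clustering frequency contours by shape. Everything below (the contour shapes, the sample counts, and the use of two-means clustering) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)   # 50 points along each whistle's frequency contour

# Two invented contour shapes (frequencies in kHz): rising sweeps and
# oscillatory contours, 30 noisy examples of each.
upsweeps = 8 + 6 * t + 0.2 * rng.standard_normal((30, 50))
oscillatory = 10 + 2 * np.sin(2 * np.pi * 4 * t) + 0.2 * rng.standard_normal((30, 50))
contours = np.vstack([upsweeps, oscillatory])

# Two-means clustering, initialized with one example of each shape.
centers = contours[[0, -1]].copy()
for _ in range(10):
    dists = ((contours[:, None, :] - centers[None]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centers = np.array([contours[labels == j].mean(axis=0) for j in range(2)])

# Each simulated shape should end up in its own category.
print(sorted({int(l) for l in labels[:30]}), sorted({int(l) for l in labels[30:]}))
```

With real whistles, the contours would first be extracted from spectrograms and time-aligned, which is where methods like the neural networks used in the study earn their keep.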

We further tested the role of species-specific whistle types in acoustic species identification by creating three different classifiers for the two species: one using all whistles, one using only whistles from shared whistle types, and one using only whistles from species-specific whistle types. The classifier that used whistles from species-specific whistle types performed significantly better than the other two, demonstrating that species-specific whistle types collectively carry more species information than other whistle types and that the assumption that all whistles carry species information is incorrect.

The results of this study show that we should re-evaluate our approach to acoustic species identification. Instead of measuring variables from whistles regardless of type, we should focus on identifying species-specific whistle types and creating classifiers based on those whistles alone. This new focus on species-specific whistle types would pave the way for more accurate tools for identifying species in passive acoustic recordings.

2aMU5 – Evaluation of individual differences of vibration duration of tuning forks – Kyota Nomizu

Kyota Nomizu – k-nomizu@chiba-u.jp
Sho Otsuka – otsuka.s@chiba-u.jp
Seiji Nakagawa – s-nakagawa@chiba-u.jp
Chiba University
1-33 Yayoi-cho, Inage-ku
Chiba-shi, 263-8522, Japan

Popular version of paper 2aMU5

Presented Wednesday morning, June 9, 2021

180th ASA Meeting, Acoustics in Focus

A tuning fork is a metal device that emits a sound of a certain frequency when struck. Tuning forks are used for various purposes, such as music, medicine, and healing. In addition to the fundamental frequency component, a harmonic tone with a roughly six-times-higher frequency appears immediately after the fork is struck. First of all, the fundamental frequency must be accurate. Additionally, the fundamental tone needs to be sustained for a long time, while the harmonic tone should decay rapidly. However, only the fundamental frequency is tuned during the manufacturing of tuning forks; the durations of the tones have not been evaluated. In addition, most studies on tuning forks have concerned the frequencies of the tones or mode analysis, and those on vibration duration are very limited.
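The roughly six-fold frequency ratio between the harmonic (“clang”) tone and the fundamental follows from modeling each prong as a cantilever beam, whose mode frequencies scale as the squares of the roots of cos(x)·cosh(x) = −1. This is standard beam theory rather than a result of the paper, and it can be checked numerically:

```python
import math

def bisect_root(f, a, b, tol=1e-12):
    """Bisection: find a root of f in [a, b], assuming a sign change."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Cantilever-beam frequency equation: cos(x) * cosh(x) = -1.
f = lambda x: math.cos(x) * math.cosh(x) + 1
x1 = bisect_root(f, 1.0, 3.0)    # first root, ~1.875
x2 = bisect_root(f, 4.0, 6.0)    # second root, ~4.694
print(round((x2 / x1) ** 2, 2))  # frequency ratio of harmonic to fundamental
```

This prints 6.27, consistent with the roughly six-times-higher harmonic mentioned above.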


Figure 1: Tuning forks used in the experiment.


In this study, we aimed to assess individual differences in the vibration duration of tuning forks and to clarify the factors that affect it. As a first step, we evaluated the effect of the holding force.

In the experiment, we struck four individual tuning forks of the same type, recorded their sound, and estimated the durations of their fundamental and harmonic tones. The measurements were repeated while varying the holding force.
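The duration estimate itself can be sketched as follows: band-pass filter the recording around each tone, take its envelope, and fit the decay of the log envelope, summarized here as the time to fall 60 dB. The recording below is synthetic, with decay rates chosen arbitrarily; recordings of actual forks would take its place.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 44100
t = np.arange(0, 3.0, 1 / fs)

# Synthetic "recording": a 440 Hz fundamental that decays slowly plus a
# harmonic (clang) tone near 6.25x that frequency which decays quickly.
rec = (np.exp(-1.0 * t) * np.sin(2 * np.pi * 440 * t)
       + 0.5 * np.exp(-20.0 * t) * np.sin(2 * np.pi * 2750 * t))

def decay_time_s(sig, fs, f0, bw=50.0):
    """Time for the tone's envelope to fall 60 dB, from a straight-line
    fit to the log envelope of the band around f0."""
    sos = butter(4, [f0 - bw, f0 + bw], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, sig)))
    n = len(env) // 2                       # fit the early part of the decay
    tt = np.arange(n) / fs
    slope, _ = np.polyfit(tt, 20 * np.log10(env[:n] + 1e-12), 1)
    return -60.0 / slope

# The fundamental should ring far longer than the harmonic.
print(decay_time_s(rec, fs, 440) > decay_time_s(rec, fs, 2750))
```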


Figure 2: Evaluation of the vibration duration.


As a result, significant individual differences in the durations of the fundamental and harmonic tones were observed. In particular, the tuning fork with the shortest length and smallest mass had the shortest-lasting fundamental tone. The durations of both tones also varied depending on the holding force, and the best holding force differed for each tuning fork.

These results suggest that, even for the same type of tuning fork, small differences in shape and heterogeneity of the material may affect the vibration duration. They also suggest that for each tuning fork there is a desirable holding force that can achieve both a long duration of the fundamental tone and rapid decay of the harmonic tone.


Figure 3: Duration of the fundamental tone at each holding force range.


In the future, building on these results on the holding force, we need to conduct a comprehensive study of the effects of shape parameters and environmental conditions such as temperature and humidity. Such results could provide a theoretical basis for improving the manufacturing process of tuning forks, which currently relies on the empirical knowledge of artisans.

1aSC2 – The McGurk Illusion – Kristin J. Van Engen

Kristin J. Van Engen – kvanengen@wustl.edu
Washington University in St. Louis
1 Brookings Dr.
Saint Louis, MO 63130

Popular version of paper 1aSC2 The McGurk illusion
Presented Tuesday morning, June 8, 2021
180th ASA Meeting, Acoustics in Focus

In 1976, Harry McGurk and John MacDonald published their now-famous article, “Hearing Lips and Seeing Voices.” The study was a remarkable demonstration of how what we see affects what we hear: when the audio for the syllable “ba” was presented to listeners with the video of a face saying “ga”, listeners consistently reported hearing “da”.

That original paper has been cited approximately 7500 times to date, and in the subsequent 45 years, the “McGurk effect” has been used in countless studies of audiovisual processing in humans. It is typically assumed that people who are more susceptible to the illusion are also better at integrating auditory and visual information. This assumption has led to the use of susceptibility to the McGurk illusion as a measure of an individual’s ability to process audiovisual speech.

However, when it comes to understanding real-world multisensory speech perception, there are several reasons to think that McGurk-style stimuli are poorly suited to the task. Most problematic is the fact that McGurk stimuli rely on audiovisual incongruence that never occurs in real-life audiovisual speech. Furthermore, recent studies show that susceptibility to the effect does not actually correlate with performance on audiovisual speech perception tasks such as understanding sentences in noisy conditions. This presentation reviews these issues, arguing that, while the McGurk effect is a fascinating illusion, it is the wrong tool for understanding the combined use of auditory and visual information during speech perception.


3aSP1 – Using Physics to Solve the Cocktail Party Problem – Keith McElveen

Keith McElveen – keith.mcelveen@wavesciencescorp.com
Wave Sciences
151 King Street
Charleston, SC USA 29401

Popular version of paper ‘Robust speech separation in underdetermined conditions by estimating Green’s functions’
Presented Thursday morning, June 10th, 2021
180th ASA Meeting, Acoustics in Focus

Nearly seventy years ago, a hearing researcher named Colin Cherry wrote: “One of our most important faculties is our ability to listen to, and follow, one speaker in the presence of others. This is such a common experience that we may take it for granted; we may call it ‘the cocktail party problem.’ No machine has been constructed to do just this, to filter out one conversation from a number jumbled together.”

Despite many claims of success over the years, the Cocktail Party Problem has resisted solution. The present research investigates a new approach that blends tricks used by human hearing with the laws of physics. With this approach, it is possible to isolate a voice based on where it must have come from, somewhat like visualizing balls moving around a billiard table after being struck, except in reverse and in 3D. The approach is shown to be highly effective in extremely challenging real-world conditions with as few as four microphones, the same number found in many smart speakers and pairs of hearing aids.

The first “trick” is something that hearing scientists call “glimpsing”. Humans subconsciously piece together audible “glimpses” of a desired voice as it momentarily rises above the level of competing sounds. After gathering enough glimpses, our brains “learn” how the desired voice moves through the room to our ears and use this knowledge to ignore the other sounds.

The second “trick” is based on how humans use sounds that arrive “late” because they bounced off one or more large surfaces along the way. Human hearing somehow combines these reflected “copies” of the talker’s voice with the direct version to help us hear more clearly.

The present research mimics human hearing by using glimpses to build a detailed physics model, called a Green’s function, of how sound travels from the talker to each of several microphones. It then uses this Green’s function to reject all sounds that arrived via different paths and to reassemble the direct and reflected copies into the desired speech. The accompanying sound file illustrates typical results this approach achieves.
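In heavily simplified form, the flavor of the method can be sketched as follows: during glimpses, estimate each frequency bin's steering vector (the relative Green's function from talker to microphones) from the principal eigenvector of the microphone covariance, then use it as a spatial filter on the mixture. Everything below (the toy two-path “room,” the invented delays, and the matched-filter beamformer) illustrates the general idea only and is not Wave Sciences' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, nfft, hop, nmics = 8000, 256, 128, 4

def stft(x):
    win = np.hanning(nfft)
    return np.array([np.fft.rfft(win * x[i:i + nfft])
                     for i in range(0, len(x) - nfft, hop)]).T   # (bins, frames)

def propagate(src, paths):
    """Toy room: each mic hears the source via a direct path plus one
    half-strength reflection, each a pure sample delay."""
    out = np.zeros((nmics, len(src)))
    for m, (d0, d1) in enumerate(paths):
        out[m, d0:] += src[:len(src) - d0]
        out[m, d1:] += 0.5 * src[:len(src) - d1]
    return out

n = 2 * fs
target = rng.standard_normal(n)          # noise-like stand-ins for two talkers
interf = rng.standard_normal(n)
paths_t = [(2, 30), (4, 33), (6, 36), (8, 39)]   # invented delays (samples)
paths_i = [(9, 40), (6, 37), (3, 34), (1, 31)]
tgt_mics = propagate(target, paths_t)
int_mics = propagate(interf, paths_i)

# "Glimpse": the first second, treated as target-only.  Per frequency bin,
# the principal eigenvector of its covariance estimates the target's
# steering vector (the relative Green's function to the four mics).
G = np.stack([stft(tgt_mics[m, :fs]) for m in range(nmics)])
T = np.stack([stft(tgt_mics[m]) for m in range(nmics)])
I = np.stack([stft(int_mics[m]) for m in range(nmics)])

p_t = p_i = 0.0
for k in range(G.shape[1]):
    cov = G[:, k] @ G[:, k].conj().T
    d = np.linalg.eigh(cov)[1][:, -1]               # steering vector estimate
    p_t += np.sum(np.abs(d.conj() @ T[:, k]) ** 2)  # target power after filter
    p_i += np.sum(np.abs(d.conj() @ I[:, k]) ** 2)  # interferer power after filter

print(p_t > 2 * p_i)   # the spatial filter favors the talker it was built for
```

Real systems replace this matched filter with more robust beamformers and must find the glimpses automatically, but the principle of estimating and then exploiting the acoustic paths is the same.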

McElveen_Before_Then_Near_Then_Far_Talkers.wav, Original Cocktail Party Sound File, Followed by Separated Nearest Talker, then Farthest

While prior approaches have struggled to equal human hearing in realistic cocktail-party babble, even at close distances, the results we are presenting imply that it is now possible not only to equal but to exceed human hearing and solve the Cocktail Party Problem, even with a small number of microphones in no particular arrangement.

The many implications of this research include improved conference call systems, hearing aids, automotive voice command systems, and other voice assistants – such as smart speakers. Our future research plans include further testing as well as devising intuitive user interfaces that can take full advantage of this capability.

No one knows exactly how human hearing solves the Cocktail Party Problem, but it would be very interesting indeed if it is found to use its own version of a Green’s Function.