2aMU5 – Evaluation of individual differences of vibration duration of tuning forks

Kyota Nomizu – k-nomizu@chiba-u.jp
Sho Otsuka – otsuka.s@chiba-u.jp
Seiji Nakagawa – s-nakagawa@chiba-u.jp
Chiba University
1-33 Yayoi-cho, Inage-ku
Chiba-shi, 263-8522, Japan

Popular version of paper 2aMU5
Presented Wednesday morning, June 9, 2021
180th ASA Meeting, Acoustics in Focus

A tuning fork is a metal device that emits a tone of a fixed frequency when struck. Tuning forks are used for various purposes, such as music, medicine, and healing. In addition to the fundamental frequency component, a harmonic tone with a frequency about six times higher appears immediately after the fork is struck. First of all, the fundamental frequency must be accurate. Additionally, the fundamental tone needs to be sustained for a long time, while the harmonic tone should decay rapidly. However, only the fundamental frequency is tuned during the manufacturing process; the durations of the tones have not been evaluated. In addition, most studies on tuning forks have concerned the frequencies of the tones or modal analysis, and studies of vibration duration are very limited.


Figure 1: Tuning forks used in the experiment.

In this study, we aimed to assess individual differences in the vibration duration of tuning forks and to clarify the factors that affect it. As a first step, we evaluated the effect of the holding force.

In the experiment, we struck four individual tuning forks of the same type, recorded their sound, and estimated the durations of their fundamental and harmonic tones. The measurements were repeated while varying the holding force.
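As an illustration of how such durations can be estimated (a minimal sketch of the general idea, not necessarily the authors' exact procedure), one can band-pass filter the recording around each tone, take its envelope, and measure how long the envelope stays within a chosen level of its peak. In this Python sketch, the 40 dB decay threshold and 50 Hz half-bandwidth are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_duration(x, fs, f0, bw=50.0, drop_db=40.0):
    """Time for the band-limited envelope around f0 to decay drop_db below its peak."""
    # Isolate the tone of interest with a band-pass filter
    sos = butter(4, [f0 - bw, f0 + bw], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))          # amplitude envelope
    env_db = 20 * np.log10(env / env.max() + 1e-12)     # level relative to peak
    peak = int(env.argmax())
    below = np.nonzero(env_db < -drop_db)[0]
    after = below[below > peak]
    # If the tone never decays that far within the recording, return its length
    return (after[0] - peak) / fs if after.size else len(x) / fs
```

Applied to a struck-fork recording, the same function measures the fundamental (at its nominal frequency) and the harmonic (at about six times that frequency), so the two durations can be compared directly.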

Figure 2: Evaluation of the vibration duration.

As a result, significant individual differences in the durations of the fundamental and harmonic tones were observed. In particular, the tuning fork with the shortest length and smallest mass had the shortest fundamental tone. The durations of the fundamental and harmonic tones also varied depending on the holding force, and the best holding force for the two tones differed for each tuning fork.

These results suggest that even for the same type of tuning fork, small differences in shape and heterogeneity of the material may affect the vibration duration. It is also suggested that there is a desirable holding force for each tuning fork that can achieve both a long duration of the fundamental tone and rapid decay of the harmonic tone.

Figure 3: Duration of the fundamental tone at each holding force range.

In the future, building on these results on the holding force, a comprehensive study is needed on the effects of shape parameters and of environmental conditions such as temperature and humidity. Such results could provide a theoretical basis for improving the manufacturing process of tuning forks, which currently relies on the empirical knowledge of artisans.

1aSC2 – The McGurk Illusion

Kristin J. Van Engen – kvanengen@wustl.edu
Washington University in St. Louis
1 Brookings Dr.
Saint Louis, MO 63130

Popular version of paper 1aSC2 The McGurk illusion
Presented Tuesday morning, June 8, 2021
180th ASA Meeting, Acoustics in Focus

In 1976, Harry McGurk and John MacDonald published their now-famous article, “Hearing Lips and Seeing Voices.” The study was a remarkable demonstration of how what we see affects what we hear: when the audio for the syllable “ba” was presented to listeners with the video of a face saying “ga”, listeners consistently reported hearing “da”.

That original paper has been cited approximately 7500 times to date, and in the subsequent 45 years, the “McGurk effect” has been used in countless studies of audiovisual processing in humans. It is typically assumed that people who are more susceptible to the illusion are also better at integrating auditory and visual information. This assumption has led to the use of susceptibility to the McGurk illusion as a measure of an individual’s ability to process audiovisual speech.

However, when it comes to understanding real-world multisensory speech perception, there are several reasons to think that McGurk-style stimuli are poorly-suited to the task. Most problematic is the fact that McGurk stimuli rely on audiovisual incongruence that never occurs in real-life audiovisual speech perception. Furthermore, recent studies show that susceptibility to the effect does not actually correlate with performance on audiovisual speech perception tasks such as understanding sentences in noisy conditions. This presentation reviews these issues, arguing that, while the McGurk effect is a fascinating illusion, it is the wrong tool for understanding the combined use of auditory and visual information during speech perception.

3aSP1 – Using Physics to Solve the Cocktail Party Problem

Keith McElveen – keith.mcelveen@wavesciencescorp.com
Wave Sciences
151 King Street
Charleston, SC USA 29401

Popular version of paper ‘Robust speech separation in underdetermined conditions by estimating Green’s functions’
Presented Thursday morning, June 10th, 2021
180th ASA Meeting, Acoustics in Focus

Nearly seventy years ago, a hearing researcher named Colin Cherry wrote: "One of our most important faculties is our ability to listen to, and follow, one speaker in the presence of others. This is such a common experience that we may take it for granted; we may call it 'the cocktail party problem.' No machine has been constructed to do just this, to filter out one conversation from a number jumbled together."

Despite many claims of success over the years, the Cocktail Party Problem has resisted solution. The present research investigates a new approach that blends tricks used by human hearing with the laws of physics. With this approach, it is possible to isolate a voice based on where it must have come from – somewhat like visualizing balls moving around a billiard table after being struck, except in reverse and in 3D. The approach proves highly effective in extremely challenging real-world conditions with as few as four microphones – the same number found in many smart speakers and pairs of hearing aids.

The first “trick” is something that hearing scientists call “glimpsing”. Humans subconsciously piece together audible “glimpses” of a desired voice as it momentarily rises above the level of competing sounds. After gathering enough glimpses, our brains “learn” how the desired voice moves through the room to our ears and use this knowledge to ignore the other sounds.

The second “trick” is based on how humans use sounds that arrive “late”, because they bounced off of one or more large surfaces along the way. Human hearing somehow combines these reflected “copies” of the talker’s voice with the direct version to help us hear more clearly.

The present research mimics human hearing by using glimpses to build a detailed physics model – called a Green’s Function – of how sound travels from the talker to each of several microphones. It then uses the Green’s Function to reject all sounds that arrived via different paths and to reassemble the direct and reflected copies into the desired speech. The accompanying sound file illustrates typical results this approach achieves.
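The core idea can be sketched in toy form. The Python snippet below (an illustration of the principle, not the authors' algorithm) works at a single frequency bin: frames where the target talker dominates ("glimpses") are used to estimate its transfer vector to the microphones via the principal eigenvector of their covariance, and a matched spatial filter built from that estimate then passes the target while attenuating a source whose sound arrives via a different path. The four-microphone array and the random transfer vectors are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4  # microphones, as in the paper's small-array scenario

# Unknown (to the algorithm) transfer vectors of the target and an interferer
d_target = rng.standard_normal(M) + 1j * rng.standard_normal(M)
d_interf = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# "Glimpses": 200 frames at one frequency bin where the target dominates
glimpses = np.outer(d_target, rng.standard_normal(200))

# Learn the target's transfer vector as the principal eigenvector
# of the glimpse covariance matrix
R = glimpses @ glimpses.conj().T / glimpses.shape[1]
_, vecs = np.linalg.eigh(R)
d_hat = vecs[:, -1]  # unit-norm estimate of d_target (up to scale/phase)

# Matched spatial filter built from the learned transfer vector
w = d_hat / (d_hat.conj() @ d_hat)
gain_target = abs(w.conj() @ d_target) / np.linalg.norm(d_target)
gain_interf = abs(w.conj() @ d_interf) / np.linalg.norm(d_interf)
```

The filter's gain toward the target is near one, while a source with a different propagation path is attenuated; a full system repeats this at every frequency, which is where the Green's-function model of direct and reflected paths comes in.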

Original Cocktail Party Sound File, Followed by Separated Nearest Talker, then Farthest

While prior approaches have struggled to equal human hearing in realistic cocktail-party babble, even at close distances, the research results we are presenting imply that it is now possible not only to equal but to exceed human hearing and solve the Cocktail Party Problem, even with a small number of microphones in no particular arrangement.

The many implications of this research include improved conference call systems, hearing aids, automotive voice command systems, and other voice assistants – such as smart speakers. Our future research plans include further testing as well as devising intuitive user interfaces that can take full advantage of this capability.

No one knows exactly how human hearing solves the Cocktail Party Problem, but it would be very interesting indeed if it is found to use its own version of a Green’s Function.

1aABa1 – Ending the day with a song: patterns of calling behavior in a species of rockfish

Annebelle Kok – akok@ucsd.edu
Ella Kim – ebkim@ucsd.edu
Simone Baumann-Pickering – sbaumann@ucsd.edu
Scripps Institution of Oceanography – University of California San Diego
9500 Gilman Drive
La Jolla, CA 92093

Kelly Bishop – kellybishop@ucsb.edu
University of California Santa Barbara
Santa Barbara, CA 93106

Tetyana Margolina – tmargoli@nps.edu
John Joseph – jejoseph@nps.edu
Naval Postgraduate School
1 University Circle
Monterey, CA 93943

Lindsey Peavey Reeves – lindsey.peavey@noaa.gov
NOAA Office of National Marine Sanctuaries
1305 East-West Highway, 11th Floor
Silver Spring, MD 20910

Leila Hatch – leila.hatch@noaa.gov
NOAA Stellwagen Bank National Marine Sanctuary
175 Edward Foster Road
Scituate, MA 02474

Popular version of paper 1aABa1 Ending the day with a song: Patterns of calling behavior in a species of rockfish
Presented Tuesday morning, June 8, 2021
180th ASA Meeting, Acoustics in Focus

Fish can be seen as ‘birds’ of the sea. Like birds, they sing during the mating season to attract potential partners and to repel rival singers. At the height of the mating season, fish singing can become so prominent that it is a dominant feature of the acoustic landscape, or soundscape, of the ocean. Even though this phenomenon is widespread in fish species, not much is known about fish calling behavior, a stark contrast to what we’ve learned about bird calling behavior. As part of SanctSound, a large collaboration of over 20 organizations investigating soundscapes of US National Marine Sanctuaries, we have investigated the calling behavior of bocaccio (Sebastes paucispinis), a species of rockfish residing along the west coast of North America. Bocaccio produce helicopter-like drumming sounds that increase in amplitude.

We deployed acoustic recorders at five sites across the Channel Islands National Marine Sanctuary for about a year to record bocaccio, and used an automated detection algorithm to extract their calls from the data. Next, we investigated how their calling behavior varied with time of day, moon phase and season. Bocaccio predominantly called at night, with peaks at sunset and sunrise. Shallow sites had a peak early in the night, while the peak at deeper sites was more towards the end of the night, suggesting that bocaccio might move up and down in the water column over the course of the night. Bocaccio avoided calling during full moon, preferentially producing their calls when there was little lunar illumination. Nevertheless, bocaccio were never truly quiet: they called throughout the year, with peaks in winter and early spring.
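For readers curious what an automated call detector can look like in its simplest form, here is a toy energy-based detector in Python (a stand-in illustration only; the study's actual algorithm is more sophisticated, and the frame length and threshold here are arbitrary choices):

```python
import numpy as np

def detect_calls(x, fs, frame=0.5, thresh_db=10.0):
    """Flag frames whose RMS level exceeds the recording's median by thresh_db dB."""
    n = int(frame * fs)                      # samples per analysis frame
    nframes = len(x) // n
    frames = x[:nframes * n].reshape(nframes, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    level_db = 20 * np.log10(rms + 1e-12)
    # A frame well above the typical background level is flagged as a call
    return level_db > np.median(level_db) + thresh_db
```

Run over a year of recordings, the timestamps of flagged frames are what make it possible to relate calling activity to time of day, moon phase, and season.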

The southern population of bocaccio on the US west coast was considered overfished by commercial and recreational fisheries prior to 2017, and has been rebuilt to be a sustainably fished stock today. One of the keys to this sustainability is reproductive success: bocaccio are very long-lived fish that don’t reproduce until they are 4-7 years old, and they can live to be 50 years old. They are known to spawn in the Channel Islands National Marine Sanctuary region from October to July, peaking in January, and studying their calling patterns can help us ensure that we keep this population and its habitat viable well into the future. Characterizing their acoustic ecology can tell us more about where in the sanctuary they reside and spawn, and understanding their reproductive calling behavior can help tell us which time of the year they are most vulnerable to noise pollution. More importantly, these results give us more insight into the wondrous marine soundscape and let us imagine what life must be like for marine creatures that contribute to and rely on it.

1aBAb2 – Transcranial Radiation of Guided Waves for Brain Ultrasound

Eetu Kohtanen – ekohtanen3@gatech.edu
Alper Erturk – alper.erturk@me.gatech.edu
Georgia Institute of Technology
771 Ferst Drive NW
Atlanta, GA 30332

Matteo Mazzotti – matteo.mazzotti@colorado.edu
Massimo Ruzzene – massimo.ruzzene@colorado.edu
University of Colorado Boulder
1111 Engineering Dr
Boulder, CO 80309

Popular version of paper ‘1aBAb2’
Presented Tuesday morning, June 8, 2021
180th ASA Meeting, Acoustics in Focus

Ultrasound imaging is a safe and familiar tool for producing medical images of soft tissues. Ultrasound can also be used therapeutically: by focusing a large amount of acoustic energy at one spot (“focused ultrasound”), it can ablate, or destroy, tumors.

The use of ultrasound in the imaging and treatment of soft tissues is well established, but ultrasound treatment for the brain poses important scientific challenges. Conventional medical ultrasound uses bulk acoustic waves that travel directly through the skull into the brain. While the center of the brain is relatively accessible in this way to treat disorders such as essential tremor, the need for transmitting waves to the brain periphery or the skull-brain interface efficiently (with reduced heating of the skull) motivates research on alternative methods.

The skull is an obstacle for bulk waves, but for guided waves it presents an opportunity. Unlike bulk waves, guided (Lamb) waves propagate along structures (such as the skull) rather than through them—as the name suggests, their direction of travel is guided by structural boundaries. If these guided waves are fast enough, they “leak” into the brain efficiently. However, there are challenges due to the complex skull geometry and bone porosity. Our research seeks a fundamental understanding of how guided waves in the skull radiate energy into the brain, to pave the way for making guided waves a viable medical ultrasound tool and expand the treatment envelope.
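The "fast enough" condition follows from a textbook Snell's-law (coincidence) argument: a guided mode with phase velocity c_p radiates into the adjacent fluid at an angle θ from the surface normal satisfying sin θ = c_fluid / c_p, so only modes faster than the fluid's sound speed (roughly 1480 m/s in water, close to that of soft tissue) can leak. A small Python sketch of this relation, with illustrative numbers not taken from the paper:

```python
import math

C_FLUID = 1480.0  # m/s, approximate sound speed in water / soft tissue

def radiation_angle_deg(c_phase):
    """Leaky-wave radiation angle from the surface normal: sin(theta) = c_fluid / c_phase.
    Modes slower than the fluid do not radiate, so None is returned."""
    s = C_FLUID / c_phase
    return math.degrees(math.asin(s)) if s <= 1.0 else None
```

For example, a hypothetical mode with a 2960 m/s phase velocity would radiate at 30 degrees from the normal, while a 1000 m/s mode would stay trapped in the bone. Because phase velocity varies with frequency (dispersion), the radiation angle varies with frequency too, which is what the experimental contours in the next section trace out.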

To study the radiation of guided waves from skull bone, experiments were conducted with submersed skull segments. A transducer emits pressure waves that strike the outer side of the bone, and a hydrophone measures the pressure field on the inner side. In the following animation, the dominant guided-wave radiation angle can be seen to be about 65 degrees. With further data processing, the experimental radiation angles (contours) are obtained as a function of frequency. Additionally, a numerical model that accounts for the separate bone layers and the fluid loading is constructed to predict the radiation angles of a set of different guided-wave types (solid branches). Each experimental contour is accompanied by a corresponding numerical prediction, validating the model.

Experimental pressure field on the inner side of the skull bone segment and the corresponding radiation angles

With these results, we have a better understanding of guided wave radiation from the skull bone. The authors hope that these fundamental findings will eventually lead to application of guided waves for focused ultrasound in the brain.