The Infrasonic Choir: Decoding Songs to Inform Decisions

Sarah McComas – sarah.mccomas@usace.army.mil

U.S. Army Engineer Research and Development Center, Vicksburg, MS, 39180, United States

Popular version of 1pPAb4 – The Infrasonic Choir: Decoding Songs to Inform Decisions
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3658000

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Figure 1. Infrasound is low-frequency sound, typically below the threshold of human hearing, that propagates over long distances (tens to thousands of kilometers). Image courtesy of author.

The world around us is continuously evolving through the actions of Mother Nature and man-made activities, changing how we interact with the environment. Many of these activities generate infrasound, which is sound below the frequency threshold of human hearing (Figure 1). These signals can travel long distances, tens to hundreds of kilometers depending on source strength, while retaining key information about what generated them. The multitude of signals can be thought of as an infrasonic choir with voices from a wide variety of sources: natural ones such as surf and volcanic activity, and man-made ones such as infrastructure or industrial activity. Listening to, and deciphering, this infrasonic choir allows us to better understand how the world around us is evolving.

The infrasonic choir is observed by placing groupings of specialized sensors, called arrays, around the environment we wish to understand. These sensors are microphones designed to capture very low-frequency sounds. An array’s geometry enables us to identify the direction from which a signal arrives, and using multiple arrays around a region allows the source location to be identified.
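For readers curious about the geometry, here is a minimal sketch, with made-up array positions and bearings rather than data from any real deployment, of how direction estimates from two arrays can be intersected to locate a source:

```python
import numpy as np

# Hypothetical illustration: each array reports a back-azimuth (direction
# of arrival); the source lies near the intersection of the bearing lines.

def intersect_bearings(p1, az1, p2, az2):
    """Intersect two bearing rays. p = (x, y) array position in km;
    az = azimuth in degrees, measured clockwise from north."""
    d1 = np.array([np.sin(np.radians(az1)), np.cos(np.radians(az1))])
    d2 = np.array([np.sin(np.radians(az2)), np.cos(np.radians(az2))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    t = np.linalg.solve(np.column_stack([d1, -d2]), np.subtract(p2, p1))
    return np.asarray(p1) + t[0] * d1

# Two arrays 40 km apart, each hearing the same source.
print(intersect_bearings((0, 0), 45.0, (40, 0), 315.0))  # -> [20. 20.]
```

In practice, each bearing comes from the tiny time delays of the signal across an array’s sensors, and uncertainty in the bearings turns the intersection point into a region of likely source locations.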

One useful application of decoding infrasonic songs is listening to infrastructure, such as a bridge. Bridges vibrate at frequencies related to the engineering characteristics of the structure, such as mass and stiffness. Because a bridge is surrounded by the fluid atmosphere, its vibrations create waves that can be measured with infrasound sensor arrays, much like the ripples generated when a rock is thrown into a pond. As a bridge’s overall health degrades, whether through time or other events, its engineering characteristics change, shifting its vibrational frequencies. Being able to identify a change from a healthy, “in-tune” structure to an “out-of-tune,” unhealthy structure without having to see or inspect the bridge would enable continuous monitoring of entire regional road networks. The ability to conduct this type of monitoring after a natural disaster, such as a hurricane or earthquake, would enable quick identification of damaged structures so that limited structural assessment resources can be prioritized.
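As a rough illustration of why damage changes a bridge’s “song”: in the simplest mass-spring model of a structure, the natural frequency is f = sqrt(k/m)/(2π), so a loss of stiffness k lowers the frequency. The numbers below are purely hypothetical, not measurements of any real bridge:

```python
import math

def natural_frequency(k, m):
    """Natural frequency in Hz of a mass-spring system:
    stiffness k in N/m, mass m in kg."""
    return math.sqrt(k / m) / (2.0 * math.pi)

m = 5.0e6                     # effective mass, kg (hypothetical)
k_healthy = 2.0e9             # effective stiffness, N/m (hypothetical)
k_damaged = 0.8 * k_healthy   # a 20% stiffness loss from damage

print(f"healthy: {natural_frequency(k_healthy, m):.2f} Hz")  # ~3.18 Hz
print(f"damaged: {natural_frequency(k_damaged, m):.2f} Hz")  # ~2.85 Hz
```

Frequencies of a few hertz like these sit squarely in the infrasonic band, which is what makes infrasound arrays a natural listening tool for large structures.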

Understanding how to decode the infrasonic choir within the symphony of the environment is the focus of ongoing research at the U.S. Army Engineer Research and Development Center. This effort focuses on moving monitoring into source-rich urban environments, designing lightweight, low-cost sensors and mobile arrays, and developing automated processing methods for analysis. When successful, continuous monitoring of this largely untapped source of information will provide a way to understand the environment and better inform decisions.

Permission to publish was granted by the Director, Geotechnical and Structures Laboratory, U.S. Army Engineer Research and Development Center.

High-resolution microvessel imaging using novel beamforming techniques and no microbubbles!

Michael Oelze – oelze@illinois.edu
X (Twitter): @Oelze_Url

University of Illinois at Urbana-Champaign
Urbana, IL 61801
United States

Zhengchang Kou
University of Illinois at Urbana-Champaign
Urbana, IL 61801
United States

Popular version of 1pBAb7 – Contrast-Free Microvessel Imaging Using Null Subtraction Imaging Combined with Harmonic Imaging
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3675358

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Bubbles? We don’t need no stinking bubbles!

In recent years, super-resolution techniques for imaging the microvasculature have been developed and demonstrated for applications such as functional ultrasound imaging in mice and assessing Alzheimer’s disease in pre-clinical models. While these novel super-resolution techniques have produced images with incredible detail and vessel contrast, one drawback of the approach is the need to inject contrast agents, which consist of small gas-filled microbubbles. This limits the clinical application and adoption of these techniques. Furthermore, constructing these images can take hours because it involves localizing and tracking individual microbubbles as they progress through the vasculature.

Figure 1. Video of a 3D rendering of a rat brain using traditional approaches at the fundamental frequency (top left) and at twice the fundamental frequency (bottom left), compared with our novel approach (NSI) at the fundamental frequency (top right) and at twice the fundamental frequency (bottom right). (Please note: this will be a playable video online.)

In our novel approach to microvessel imaging, we don’t need no stinking microbubbles! Instead, we use a novel nonlinear beamforming approach that allows fast reconstructions with much better spatial resolution. This lets us approach super-resolution without injecting microbubbles into the body. Along with the beamforming approach, we also use a pulse inversion scheme, where we transmit at one frequency and receive at twice the transmit frequency. This doubles the spatial resolution compared with receiving at the transmit frequency. However, a pulse inversion scheme can introduce unwanted clutter into the image. With our novel beamforming approach, that clutter is greatly reduced or eliminated.
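For the curious, here is a toy sketch of the pulse inversion idea itself, not the authors’ beamformer: an echo is modeled as a linear response plus a small second-harmonic term, and summing the echoes from a pulse and its polarity-inverted copy cancels the fundamental while keeping the harmonic at twice the transmit frequency.

```python
import numpy as np

fs = 50e6                        # sampling rate, Hz
t = np.arange(0, 2e-6, 1 / fs)   # 2 microseconds of signal
f0 = 3e6                         # transmit (fundamental) frequency, Hz

def echo(polarity):
    """Toy echo: linear term plus a weak even-order (harmonic) term."""
    linear = polarity * np.sin(2 * np.pi * f0 * t)
    harmonic = 0.1 * linear ** 2   # squaring produces energy at 2*f0
    return linear + harmonic

summed = echo(+1) + echo(-1)     # linear parts cancel; harmonic adds
spectrum = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"peak at {peak / 1e6:.1f} MHz")      # ~6 MHz = 2 * f0
```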

Figure 2. Single image frames comparing traditional power Doppler methods (left) with our novel approach (right).

We demonstrated our new technology in a rat brain (both in 2D and as a 3D rendering) and in a rabbit kidney, and compared our images with traditional beamforming approaches, all without contrast agents. The video shows a 3D rendering of the microvasculature of a rat brain, and the corresponding figure shows a single frame of that rendering. Our approach eliminates the clutter produced by the pulse inversion scheme, increases the contrast of microvessel images, reveals more vessels, and produces a spatial resolution finer than one fourth of a wavelength. The time to reconstruct the images was a fraction of that needed by current super-resolution techniques that rely on localizing and tracking microbubbles in the vasculature. Our novel approach could therefore provide a microbubble-free technology for producing high-resolution power Doppler images of the microvasculature, with potential clinical applications.

Moving Cargo, Keeping Whales: Investigating Solutions for Ship Noise Pollution

Vanessa ZoBell – vmzobell@ucsd.edu
Instagram: @vanessa__zobell

Scripps Institution of Oceanography, La Jolla, California, 92037, United States

John A. Hildebrand, Kaitlin E. Frasier
UCSD – Scripps Institution of Oceanography

Twitter & Instagram: @scripps_mbarc
Twitter & Instagram: @scripps_ocean

Popular version of 2pAB8 – Moving Cargo, Keeping Whales: Investigating Solutions for Ocean Noise Pollution
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3678721

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Figure 1. Image courtesy of ZoBell, Vanessa M., John A. Hildebrand, and Kaitlin E. Frasier. “Comparing pre-industrial and modern ocean noise levels in the Santa Barbara Channel.” Marine Pollution Bulletin 202 (2024): 116379.

Southern California waters are lit up with noise pollution (Figure 1). The Port of Los Angeles and the Port of Long Beach are the first- and second-busiest shipping ports in the western hemisphere, supporting transits by large container ships that radiate noise throughout the region. Underwater noise generated by these vessels dominates ocean soundscapes, negatively affecting marine organisms, such as mammals, fish, and invertebrates, that rely on sound for daily life functions. In this project, we modeled what the ocean would sound like without human activity and compared it with what it sounds like today. We found that in this region, which encompasses the Channel Islands National Marine Sanctuary and feeding grounds of the endangered northeastern Pacific blue whale, modern ocean noise levels were up to 15 dB higher than pre-industrial levels. This is like having a picnic in a meadow versus having a picnic on an airport tarmac.
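To put that 15 dB figure in linear terms: decibels are logarithmic, so the increase corresponds to roughly a 32-fold rise in acoustic power. The quick calculation below makes the conversion explicit.

```python
# Decibels to power ratio: ratio = 10 ** (dB / 10).
increase_db = 15.0
power_ratio = 10 ** (increase_db / 10)  # ~31.6
print(f"+{increase_db:.0f} dB is about {power_ratio:.0f}x the acoustic power")
```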

Reducing ship noise in critical habitats has become an international priority for protecting marine organisms. A variety of noise reduction techniques have been discussed, and some are already operational. Understanding the effectiveness of these techniques requires broad stakeholder engagement, robust funding, and advanced signal processing. We modeled a variety of noise reduction simulations and identified effective strategies for quieting whale habitats in the Santa Barbara Channel region. Simulating conservation scenarios allows more techniques to be explored without having to implement them, saving time, money, and resources in the pursuit of protecting the ocean.

A general method to obtain clearer images at a higher resolution than the theoretical limit

Jian-yu Lu – jian-yu.lu@ieee.org
X (Twitter): @Jianyu_lu
Instagram: @jianyu.lu01
Department of Bioengineering, College of Engineering, The University of Toledo, Toledo, Ohio, 43606, United States

Popular version of 1pBAb4 – Reconstruction methods for super-resolution imaging with PSF modulation
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3675355

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imaging is a fundamental tool for advancing science, engineering, and medicine, and is indispensable in our daily lives. Here are a few examples: acoustical and optical microscopes have helped advance biology. Ultrasound imaging, X-ray radiography, X-ray computerized tomography (X-ray CT), magnetic resonance imaging (MRI), gamma cameras, single-photon emission computerized tomography (SPECT), and positron emission tomography (PET) are routinely used for medical diagnosis. Electron and scanning tunneling microscopes have revealed structures at the nanometer or atomic scale, where one nanometer is one billionth of a meter. And photography, including the cameras in cell phones, is part of our everyday life.

Despite the importance of imaging, there is a fundamental limit on resolution in wave-based imaging systems, first recognized by Ernst Abbe in 1873 and known as the diffraction limit because it arises from the diffraction of waves. It affects acoustical, optical, and electromagnetic waves alike.

Recently (see Lu, IEEE TUFFC, January 2024), the researcher developed a general method to overcome this long-standing diffraction limit. The method is not only applicable to wave-based imaging systems such as ultrasound, optical, electromagnetic, radar, and sonar; it is in principle also applicable to other linear shift-invariant (LSI) imaging systems such as X-ray radiography, X-ray CT, MRI, gamma cameras, SPECT, and PET, since it increases image resolution by introducing high spatial frequencies through modulation of the point-spread function (PSF) of an LSI imaging system. The modulation can be induced remotely from outside the object to be imaged, or produced by small particles that are introduced into the object, or placed on its surface, and manipulated remotely. An LSI system can be understood through the example of a geometric-distortion-corrected optical camera: the photo of a person stays the same, or invariant, in size and shape if the person merely shifts position perpendicular to the camera’s optical axis within the field of view.
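As a minimal sketch of the LSI imaging model described above, using illustrative shapes and values rather than anything from the paper, the recorded image is the convolution of the object with the system’s point-spread function, and it is this PSF that the method modulates to admit higher spatial frequencies:

```python
import numpy as np

n = 256
x = np.arange(n)
obj = np.zeros(n)
obj[[100, 110]] = 1.0     # two point targets 10 samples apart

sigma = 8.0               # PSF width sets the conventional resolution limit
psf = np.exp(-0.5 * ((x - n / 2) / sigma) ** 2)
psf /= psf.sum()

# LSI model: image = object convolved with PSF. With this PSF width the
# two targets blur into a single lobe, i.e. they are unresolved.
image = np.convolve(obj, psf, mode="same")
print(image[100], image[105], image[110])  # near-flat across the pair

# Modulating the PSF with a known spatial pattern shifts high spatial
# frequencies of the object into the system's passband, which the
# reconstruction can then exploit to resolve the targets.
```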

Figure 1 below demonstrates the efficacy of the method using acoustical waves. The method was used to image a passive object (first row) through pulse-echo imaging and to image wave source distributions (second row) with a receiver. The best images obtainable under Abbe’s diffraction limit are in the second column, and the super-resolution images (better than the diffraction limit) obtained with the new method are in the last column. The super-resolution images had a resolution close to 1/3 of the wavelength used, obtained from a distance with an f-number (focal distance divided by the diameter of the transducer) close to 2.
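For context on those numbers, a common rule of thumb puts the diffraction-limited lateral resolution of a focused system near the wavelength multiplied by the f-number; the arithmetic below, an approximation rather than a figure from the paper, suggests roughly a six-fold improvement over the limit at an f-number of 2.

```python
wavelength = 1.0                            # work in units of wavelength
f_number = 2.0
diffraction_limit = wavelength * f_number   # ~2 wavelengths at f/2
reported = wavelength / 3                   # resolution reported above
print(f"improvement: {diffraction_limit / reported:.0f}x")  # ~6x
```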

Figure 1. Figure modified courtesy of IEEE (doi.org/10.1109/TUFFC.2023.3335883).

Because the method is based on the convolution theory of LSI systems, and many practical imaging systems are LSI, it opens an avenue for various new applications in science, engineering, and medicine. With a proper choice of modulator and imaging system, nanoscale imaging with resolution similar to that of a scanning electron microscope (SEM) is possible even with visible or infrared light.

Why is it easier to understand people we know?

Emma Holmes – emma.holmes@ucl.ac.uk
X (Twitter): @Emma_Holmes_90

University College London (UCL), Department of Speech Hearing and Phonetic Sciences, London, Greater London, WC1N 1PF, United Kingdom

Popular version of 4aPP4 – How does voice familiarity affect speech intelligibility?
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3674814

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

It’s much easier to understand what others are saying if you’re listening to a close friend or family member rather than a stranger. And if you practice listening to the voices of people you’ve never met before, you might become better at understanding them too.

Many people struggle to understand what others are saying in noisy restaurants or cafés. This can become much more challenging as people get older. It’s often one of the first changes that people notice in their hearing. Yet, research shows that these situations are much easier if people are listening to someone they know very well.

In our research, we ask people to visit the lab with a friend or partner. We record their voices while they read sentences aloud. We then invite the volunteers back for a listening test. During the test, they hear sentences and click words on a screen to show what they heard. This is made more difficult by playing a second sentence at the same time, which the volunteers are told to ignore. This is like having a conversation when there are other people talking around you. Our volunteers listen to many sentences over the course of the experiment. Sometimes, the sentence is one recorded from their friend or partner. Other times, it’s one recorded from someone they’ve never met. Our studies have shown that people are best at understanding the sentences spoken by their friend or partner.
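For the technically curious, here is a minimal sketch of how a two-talker trial like this can be assembled. Random noise stands in for the actual sentence recordings, and the function name and target-to-masker ratio are illustrative, not the authors’ stimulus code:

```python
import numpy as np

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so the target sits tmr_db above it, then mix."""
    rms = lambda sig: np.sqrt(np.mean(sig ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (tmr_db / 20))
    return target + gain * masker

fs = 16000
target = 0.1 * np.random.randn(fs)   # 1 s stand-in for the friend's sentence
masker = 0.1 * np.random.randn(fs)   # 1 s stand-in for the competing sentence
trial = mix_at_tmr(target, masker, tmr_db=0.0)   # equally loud talkers
```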

In one study, we manipulated the sentence recordings to change the sound of the voices. The voices still sounded natural, yet volunteers could no longer recognize them as their friend or partner. We found that participants were still better at understanding these sentences, even though they didn’t recognize the voice.

In other studies, we’ve investigated how people learn to become familiar with new voices. Each volunteer learns the names of three new people. They’ve never met these people, but we play them lots of recordings of their voices. This is like listening to a new podcast or radio show. We’ve found that volunteers become very good at understanding these new voices. In other words, we can train people to become familiar with new voices.

In new work that hasn’t yet been published, we found that voice familiarization training benefits both older and younger people. So, it may help older people who find it very difficult to listen in noisy places. Many environments contain background noise—from office parties to hospitals and train stations. Ultimately, we hope that we can familiarize people with voices they hear in their daily lives, to make it easier to listen in noisy places.

Busting the myth that new violins sound better after a period of “playing-in”

Andy Piacsek – andy.piacsek@cwu.edu

Central Washington University, Department of Physics, Ellensburg, WA, 98926, United States

Seth Lowery
Ph.D. candidate, University of Texas
Dept. of Mechanical Engineering
Austin, TX

Popular version of 4pMU3 – An experiment to measure changes in violin instrument response due to playing-in
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023547

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

How is a violin like a pair of hiking boots? Many violinists would respond, “They both improve with use.” Just as boots need to be “broken in” by being worn several times to make them more supple, many musicians believe that a new violin, cello, or guitar needs to be “played in” for a period of time, typically months, in order to fully develop its acoustic properties. There is even a commercial product, the Tone-Rite, that is marketed as a way to accelerate the playing-in process, with the claim of dramatically increasing “resonance, balance, and range,” and some builders of stringed instruments, known as luthiers, offer a service of pre-playing-in their instruments, using their own methods of mechanical stimulus, before selling them. But do we know whether violins actually improve with use?

We tested the hypothesis that putting vibrational energy into a violin will, over time, change how the violin body responds to the vibration of the strings, a property measured as the frequency response. We used three violins in our experiment: one was left alone, serving as a control, while the two test violins were “played” by applying mechanical vibrations directly to the bridge. One of the mechanical sources was the Tone-Rite; the other was a shaker driven with a signal created from a Vivaldi violin concerto, as shown in the video below. The total time of vibration exceeded 1600 hours, equivalent to roughly ten months of being played six hours per day.

Approximately once per week, we measured the frequency response of all three violins using two standard methods: bridge admittance, which characterizes the vibration of the violin body, and acoustic radiativity, which is based on the sound radiated by the violin. The measurement setup is illustrated in Figure 1.
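For readers who want the measurement in concrete terms, here is a minimal sketch of a frequency-response (transfer-function) estimate of the kind underlying a bridge admittance measurement. It uses synthetic signals in place of real force and velocity recordings, and is an illustration, not the authors’ analysis code:

```python
import numpy as np
from scipy.signal import csd, welch

fs = 8000
rng = np.random.default_rng(0)
force = rng.standard_normal(10 * fs)   # broadband excitation at the bridge

# Stand-in "violin body": a simple smoothing filter plus sensor noise.
velocity = np.convolve(force, np.hanning(64), mode="same")
velocity += 0.01 * rng.standard_normal(velocity.size)

# H1 estimate: cross-spectrum of input and output divided by the input
# auto-spectrum. Peaks in |H1| mark the body's resonances.
f, S_xy = csd(force, velocity, fs=fs, nperseg=1024)
_, S_xx = welch(force, fs=fs, nperseg=1024)
H1 = S_xy / S_xx
```

A week-to-week comparison like the one in Figure 2 then amounts to differencing such curves between the test and control violins.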

Figure 1: Measuring the frequency response of a violin in an anechoic chamber.

Having a control violin allowed us to account for factors not associated with playing-in, such as fluctuating environmental conditions or simple aging, that might affect the frequency response. If mechanical vibrations had the hypothesized effect of physically altering the violin body, such as creating microcracks in the wood, glue, or varnish, and if the result were an increase in “resonance, balance, and range”, then we would expect a noticeable and cumulative change in the frequency response of the test violins compared to the control violin.

We did not observe any changes in the frequency responses of the violins that correlate with the amount of vibration. In Figure 2a, we plot a normalized difference in the bridge admittance between the two test violins and the control violin; Figure 2b shows a similar plot for the acoustic radiativity.

In both plots, we see no evidence that the difference between the test violins and the control violin increases with more vibration; instead, we see random fluctuations that can be attributed to the slightly different experimental conditions of each measurement. This applies both to the Tone-Rite, which vibrates primarily at the 60 Hz frequency of the electric power it is plugged into, and to the shaker, which provided the same frequencies that a violinist practicing her instrument would create.

Our conclusion is that long-term vibrational stimulus of a violin, whether applied mechanically or through actual playing, does not produce a physical change in the violin body that could affect its tonal characteristics.