Listening for bubbles to make scuba diving safer

Joshua Currens – jcurrens@unc.edu

Department of Radiology; Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, United States

Popular version of 5aBAb8 – Towards real-time decompression sickness mitigation using wearable capacitive micromachined ultrasonic transducer arrays
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027683

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Scuba diving is a fun recreational activity but carries the risk of decompression sickness (DCS), commonly known as ‘the bends’. This condition occurs when divers ascend too quickly, causing gas that has accumulated in their bodies to expand rapidly into larger bubbles—similar to the fizz when a soda can is opened.

To prevent this, divers follow safety protocols that limit how fast they rise to the surface and require stops at predetermined depths so that bubbles in the body can dissipate. However, these are general guidelines that do not account for every person in every situation, which makes it difficult to prevent DCS reliably in all individuals without unnecessarily lengthening the ascent for many divers. Traditionally, these bubbles have only been detected with ultrasound after the diver has surfaced, so it is a challenge to predict DCS before it occurs (Figure 1b&c). Identifying these bubbles earlier could allow personalized underwater instructions for bringing divers back to the surface while minimizing the risk of DCS.

To address this challenge, our team is creating a wearable ultrasound device that divers can use underwater.

Ultrasound works by sending sound waves into the body and then receiving the echoes that bounce back. Bubbles reflect these sound waves strongly, making them visible as bright spots in ultrasound images (Figure 1d). Unlike traditional ultrasound systems, which are too large and not suited for underwater use, our device will be compact and efficient, designed specifically for real-time bubble monitoring while diving.
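To give a feel for what "strong reflectors" look like in data, here is a minimal, purely illustrative Python sketch (not the processing our device uses): it builds a synthetic ultrasound scan line in which a bubble shows up as an unusually strong echo and flags it with a simple threshold. The depths, noise level, and threshold are all made-up assumptions.

```python
import numpy as np

# Purely illustrative: a synthetic ultrasound scan line (echo strength vs. depth).
# Real bubble detection uses imaging hardware and far more sophisticated processing.
rng = np.random.default_rng(0)
depth_mm = np.linspace(0, 60, 600)            # imaging depth in millimeters (assumed)
tissue_echo = 0.1 * rng.standard_normal(600)  # weak background scattering from tissue

# Pretend a gas bubble sits near 35 mm: bubbles reflect sound strongly,
# so the bubble appears as a large echo (a bright spot in the image).
bubble_echo = 1.0 * np.exp(-((depth_mm - 35.0) ** 2) / 0.5)
scan_line = tissue_echo + bubble_echo

# Flag samples whose echo strength stands far above the tissue background.
threshold = 5 * np.std(tissue_echo)           # arbitrary threshold, for illustration only
bright_depths = depth_mm[np.abs(scan_line) > threshold]
print(f"Strong reflectors detected near depths (mm): {np.unique(bright_depths.round(1))}")
```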

Currently, our research involves testing this technology and optimizing imaging parameters in controlled environments like hyperbaric chambers. These are specialized rooms where underwater conditions can be replicated by increasing the pressure inside. We recently collected the first ultrasound scans of human divers during a hyperbaric chamber dive using a research ultrasound system, and we plan to repeat these scans next with our first prototype. With this data, we hope to find changes in the images that indicate where bubbles are forming. In the future, we plan to start testing our custom ultrasound tool on divers, a big step toward continuously monitoring divers underwater and, eventually, personalized DCS prevention.

Figure 1. (a) Scuba diver underwater. (b) Post-dive monitoring for bubbles using ultrasound. (c) Typical ultrasound system (developed using Biorender). (d) Bubbles detected in ultrasound images as bright spots in the heart. Images courtesy of JC, unless otherwise noted.

The science of baby speech sounds: men and women may experience them differently

M. Fernanda Alonso Arteche – maria.alonsoarteche@mail.mcgill.ca
Instagram: @laneurotransmisora

School of Communication Science and Disorders, McGill University, Center for Research on Brain, Language, and Music (CRBLM), Montreal, QC, H3A 0G4, Canada

Instagram: @babylabmcgill

Popular version of 2pSCa – Implicit and explicit responses to infant sounds: a cross-sectional study among parents and non-parents
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027179

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine hearing a baby coo and instantly feeling a surge of positivity. Surprisingly, how we react to the simple sounds of a baby speaking might depend on whether we are women or men, and whether we are parents. Our lab’s research delves into this phenomenon, revealing intriguing differences in how adults perceive baby vocalizations, with a particular focus on mothers, fathers, and non-parents.

Using a method that measures reaction time to sounds, we compared adults’ responses to vowel sounds produced by a baby and by an adult, as well as meows produced by a cat and by a kitten. We found that women, including mothers, tend to respond positively only to baby speech sounds. On the other hand, men, especially fathers, showed a more neutral reaction to all sounds. This suggests that the way we process human speech sounds, particularly those of infants, may vary significantly between genders. While previous studies report that both men and women generally show a positive response to baby faces, our findings indicate that their speech sounds might affect us differently.

Moreover, mothers rated babies and their sounds highly, expressing a strong liking for babies, their cuteness, and the cuteness of their sounds. Fathers, although less responsive in the reaction-time task, still reported a strong liking for babies, their cuteness, and the appeal of their sounds. This contrast between implicit (subconscious) reactions and explicit (conscious) opinions highlights an interesting complexity in parental instincts and perceptions. Implicit measures, such as those used in our study, tap into automatic and unconscious responses that individuals might not be fully aware of or may not express when asked directly. These methods offer a more direct window into underlying feelings that might be obscured by social expectations or personal biases.

This research builds on earlier studies conducted in our lab, where we found that infants prefer to listen to the vocalizations of other infants, a factor that might be important for their development. We wanted to see if adults, especially parents, show similar patterns because their reactions may also play a role in how they interact with and nurture children. Since adults are the primary caregivers, understanding these natural inclinations could be key to supporting children’s development more effectively.

The implications of this study are not just academic; they touch on everyday experiences of families and can influence how we think about communication within families. Understanding these differences is a step towards appreciating the diverse ways people connect with and respond to the youngest members of our society.

What makes drones sound annoying? The answer may lie in noise fluctuations

Ze Feng (Ted) Gan – tedgan@psu.edu

Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA, 16802, United States

Popular version of 2aNSa3 – Multirotor broadband noise modulation
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026987

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Picture yourself strolling through a quiet park. Suddenly, you are interrupted by the “buzz” of a multirotor drone. You ask yourself: why does this sound so annoying? Research demonstrates that a significant source of this annoyance is the time variation of broadband noise levels over a rotor revolution. These noise fluctuations have been found to be important for how we perceive sound. This research has found that these sound variations are significantly affected by the blade angle offsets (azimuthal phasing) between different rotors. This demonstrates the potential for synchronizing the rotors to reduce noise: a concept that has been studied extensively for tonal noise, but not broadband noise.

Sound consists of air pressure fluctuations. One major source of sound generated by rotors consists of the random air pressure fluctuations of turbulence, which encompass a wide range of frequencies. Accordingly, this sound is called broadband noise. A common example and model of broadband noise is white noise, shown in Figure 1, where the random nature characteristic of broadband noise is evident. Despite this randomness, we hear the noise of Figure 1 as having a nearly constant sound level.

Figure 1: White noise with a nearly constant sound level.

A better model of rotor noise is white noise with amplitude modulation (AM). Amplitude modulation is caused by the blades’ rotation: sound levels are louder when the blade moves towards the listener, and quieter when the blade moves away. This is called Doppler amplification, and it is analogous to the Doppler effect that shifts sound frequency when a sound source travels towards or away from you. AM white noise is shown in Figure 2: the sound is still random, but it has a sinusoidal “envelope” with a modulation frequency corresponding to the blade passage frequency. AM therefore causes time-varying sound levels, as shown in Figure 3. This time variation is characterized by the modulation depth: the peak-to-trough amplitude in decibels (dB). A greater modulation depth typically corresponds to the noise sounding more annoying.

Figure 2: White noise with amplitude modulation (AM).
Figure 3: Time-varying sound levels of AM white noise.
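For readers who want to see these ideas concretely, the short Python sketch below (illustrative only) generates white noise, imposes a sinusoidal AM envelope at an assumed blade passage frequency, and estimates the modulation depth as the peak-to-trough range of the short-time sound level in decibels. The sampling rate, frequencies, and window length are arbitrary choices for illustration, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)    # two seconds of signal

# White noise: random pressure fluctuations with a nearly constant level (Figure 1).
noise = rng.standard_normal(t.size)

# Amplitude modulation at an assumed blade passage frequency (Figure 2).
f_bpf = 90.0                     # blade passage frequency in Hz (illustrative)
mod_strength = 0.5               # envelope strength (illustrative)
am_noise = (1.0 + mod_strength * np.sin(2 * np.pi * f_bpf * t)) * noise

def short_time_level_db(x, fs, win_s=0.002):
    """Sound level (dB) in short windows, showing the time variation of Figure 3."""
    n = int(win_s * fs)
    frames = x[: x.size // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return 20 * np.log10(rms)

levels = short_time_level_db(am_noise, fs)
print(f"Modulation depth ~ {levels.max() - levels.min():.1f} dB (peak-to-trough)")
```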

Broadband noise modulation is known to be important for wind turbines, whose “swishing” is found to be annoying even at low sound levels. This contrasts with white noise, which is typically considered soothing when it has a constant sound level (i.e., no AM). This exemplifies the importance of considering the time variation of sound levels when capturing human perception of sound. More recently, the importance of broadband noise modulation has been demonstrated for helicopters: the characteristic chopping is what makes a helicopter sound realistic, even at low sound levels.

Researchers have not extensively studied broadband noise modulation for aircraft with many rotors. Computational studies in the literature predict that summing the broadband noise modulation of many rotors causes “destructive interference”, resulting in nearly no modulation. However, flight test measurements of a six-rotor drone showed that broadband noise modulation was significant. To investigate this discrepancy, changes in modulation depth were studied as the blade angle offset between rotors was varied. This offset is typically not considered in noise predictions and experiments. The results are shown in Figure 4: for each data point, the rotor rotation speeds are synchronized, but the constant blade angle offset between rotors is different. These results demonstrate the potential for synchronizing rotors to reduce broadband noise modulation. Such synchronization keeps the blade angle offset between rotors as constant as possible, and it has been studied extensively for controlling tones (sounds at a single frequency), but not broadband noise modulation.

Figure 4: Modulation depth as a function of blade angle offset between two synchronized rotors.

If the rotors are not synchronized, which is typically the case, the flight controller continuously varies the rotors’ rotation speeds to stabilize or maneuver the drone. This causes the blade angle offsets between rotors to vary with time, which in turn causes the summed noise to move between different points in Figure 4. Measurements showed that all rotor blade angle offsets are equally likely (i.e., azimuthal phasing follows a uniform probability distribution). Therefore, multirotor broadband noise modulation can be characterized and predicted by constructing a plot like Figure 4, adding together copies of the broadband noise modulation of a single rotor.
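A toy version of this idea (not the actual prediction method used in this work) is sketched below in Python: each rotor's broadband noise is modeled as a mean-square pressure that rises and falls with blade angle, two rotors are summed with a chosen blade angle offset, and the modulation depth of the total is computed as the offset is swept, in the spirit of Figure 4. The modulation strength, blade count, and offset values are illustrative assumptions.

```python
import numpy as np

def modulation_depth_db(offset_deg, n_blades=2, mod_strength=0.4):
    """Toy model: modulation depth (dB) of two rotors' summed broadband noise
    as a function of the blade angle offset between them (illustrative only)."""
    theta = np.linspace(0, 2 * np.pi, 1000)           # rotor azimuth over one revolution
    offset = np.deg2rad(offset_deg)
    # Each rotor's mean-square pressure rises and falls n_blades times per revolution.
    p2_rotor1 = 1.0 + mod_strength * np.cos(n_blades * theta)
    p2_rotor2 = 1.0 + mod_strength * np.cos(n_blades * (theta + offset))
    p2_total = p2_rotor1 + p2_rotor2                  # sum of the two rotors' noise
    level_db = 10 * np.log10(p2_total)
    return level_db.max() - level_db.min()            # peak-to-trough level variation

# Sweep the blade angle offset, as in Figure 4 (values are illustrative).
for offset in (0, 22.5, 45, 67.5, 90):
    print(f"offset = {offset:5.1f} deg -> modulation depth ~ {modulation_depth_db(offset):.2f} dB")
```

In this toy model an offset of 90 degrees makes the two envelopes cancel (nearly no modulation), while zero offset makes them add up, which is the kind of variation with blade angle offset that Figure 4 illustrates for the measured drone.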

Teaching about the Dangers of Loud Music with InteracSon’s Hearing Loss Simulation Platform

Jérémie Voix – Jeremie.Voix@etsmtl.ca

École de technologie supérieure, Université du Québec, Montréal, Québec, H3C 1K3, Canada

Rachel Bouserhal, Valentin Pintat & Alexis Pinsonnault-Skvarenina
École de technologie supérieure, Université du Québec

Popular version of 1pNSb12 – Immersive Auditory Awareness: A Smart Earphones Platform for Education on Noise-Induced Hearing Risks
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026825

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Ever thought about how your hearing might change in the future based on how much and how loudly you listen to music through earphones? And how would knowing this affect your music listening habits? We developed a tool called InteracSon, a digital earpiece you can wear to help you better understand the risks of losing your hearing from listening to loud music through earphones.

On this interactive platform, you first select your favourite song and play it through a pair of earphones at your preferred listening volume. After you tell InteracSon how much time you usually spend listening to music, it calculates the “Age of Your Ears”. This tells you how much your ears have aged due to your music listening habits. So even if you are, say, 25 years old, your ears might behave as if they were 45 because of all that loud music!
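InteracSon's exact “Age of Your Ears” algorithm is not described here, but the kind of exposure information it starts from can be illustrated with a standard noise dose calculation. The Python sketch below uses the common 85 dBA for 8 hours reference with a 3 dB exchange rate to estimate how much of a recommended daily exposure a listening habit consumes; the listening level and duration are hypothetical, and how such a dose is mapped to an “ear age” is left to the actual platform.

```python
# Illustrative only: a standard daily noise dose calculation (NIOSH-style,
# 85 dBA reference for 8 hours with a 3 dB exchange rate). This is NOT the
# actual "Age of Your Ears" algorithm used by InteracSon.

def daily_noise_dose(level_dba: float, hours: float) -> float:
    """Percent of the recommended daily exposure consumed by this listening habit."""
    reference_level = 85.0   # dBA considered acceptable for 8 hours per day
    reference_hours = 8.0
    exchange_rate = 3.0      # every +3 dB halves the allowed listening time
    allowed_hours = reference_hours / (2 ** ((level_dba - reference_level) / exchange_rate))
    return 100.0 * hours / allowed_hours

# Hypothetical listener: music at 94 dBA through earphones, 2 hours per day.
dose = daily_noise_dose(level_dba=94.0, hours=2.0)
print(f"Daily noise dose: {dose:.0f}% of the recommended limit")
```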

Picture of the “InteracSon” platform during calibration on an acoustic manikin. Photo by V. Pintat, ÉTS / CC BY

To really demonstrate what this means, InteracSon provides you with an immersive experience of what it’s like to have hearing loss. It has a mode where you can still hear what’s going on around you, but the sounds are filtered to reflect what your ears might be like with hearing loss. You can also hear what tinnitus, a ringing in the ears, sounds like, a common problem for people who listen to music too loudly. You can even listen to your favorite song again, but this time it is altered to simulate your predicted hearing loss.

With more than 60% of adolescents listening to their music at unsafe levels, and nearly 50% of them reporting hearing-related problems, InteracSon is a powerful tool to teach them about the adverse effects of noise exposure on hearing and to promote awareness about how to prevent hearing loss.

Babies lead the way – a discovery with infants brings new insights to vowel perception

Linda Polka – linda.polka@mcgill.ca

School of Communication Sciences & Disorders, McGill University SCSD, 2001 McGill College Avenue, Montreal, Quebec, H3A 1G1, Canada

Matthew Masapollo, PhD
Motor Neuroscience Laboratory
Department of Psychology
McGill University

Popular version of 2ASC7 – What babies bring to our understanding of vowel perception
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027029

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

From the early months of life, infants perceive and produce vowel sounds, which occupy a central role in speech communication across the lifespan. Infant research has typically focused on understanding how their vowel perception and production skills mature into an adult-like form. But infants, being genuine and notoriously unpredictable, often give us new insights that go beyond our study goals. In our lab, several findings initially discovered in infants are now directing novel research with adults. One such discovery is the focal vowel bias, a perceptual pattern we observed when we tested infants on their ability to discriminate two vowel sounds. For example, when we tested infants (~4-12 months) to see if they could discriminate two vowel sounds such as “eh” (as in bed) and “ae” (as in bad), infants showed very good performance in detecting the change from “eh” to “ae”, but very poor performance when the direction of change was reversed (detecting the change from “ae” to “eh”).

Initially, these unexpected directional differences were puzzling because the same pair of sounds was presented in both cases. However, we soon realized that we could predict this pattern by considering the degree of articulatory movement required to produce each sound. Articulatory movement describes how fast and how far we have to move our tongue, lips, or jaw to produce a speech sound. We noticed that infants find it easier to discriminate vowels when the vowel that involves the most articulatory movement is presented second rather than first. In essence, this pattern shows us that vowels produced with more extreme articulatory movements are also more perceptually salient. Our scientific name for this pattern, the focal vowel bias, is a shorthand way to describe the acoustic signatures of the vowels produced with larger articulatory movements.

These infant findings led us to explore the focal vowel bias in adults. We ran experiments using the “oo” vowels in English and French, which are slightly different sounds. Compared to English “oo”, French “oo” involves more articulatory movement due to enhanced lip rounding. Using these vowel sounds (produced by a bilingual speaker), we found that adults showed the same pattern we observed in infants. They discriminated a change from English “oo” to French “oo” more easily than the reverse direction, consistent with the focal vowel bias. Adults did this regardless of whether they spoke English or French, showing that the focal vowel bias is not related to language experience. We then ran many experiments using different versions of the French and English “oo” vowels, including natural and synthesized vowels, visual vowel signals (just a moving face with no sound), and animated dots and shapes that follow the lip movements of each vowel sound. We found that adults displayed the focal vowel bias for both visual and auditory vowel signals. Adults also showed the bias when tested with simple visual animations that retained the global shape, orientation, and dynamic movements of a mouth, even though subjects failed to perceive these animations as a mouth. No bias was found when movement and mouth orientation were disrupted (static images or animations rotated sideways). These findings show us that the focal vowel bias is related to how we process speech movements across different sensory modalities.

These adult findings highlight our exquisite sensitivity to articulatory movement and suggest that the information we attend to in speech is multimodal and closely tied to how speech is produced. We are now resuming our infant research with a new question: as young infants begin learning to produce speech, do their speech movements also critically contribute to this perceptual bias and help them form vowel categories? We are eager to see where the next round of infant research will take us.