What makes drones sound annoying? The answer may lie in noise fluctuations

Ze Feng (Ted) Gan – tedgan@psu.edu

Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA, 16802, United States

Popular version of 2aNSa3 – Multirotor broadband noise modulation
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026987

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Picture yourself strolling through a quiet park. Suddenly, you are interrupted by the “buzz” of a multirotor drone. You ask yourself: why does this sound so annoying? Research demonstrates that a significant source of the annoyance is the time variation of broadband noise levels over a rotor revolution. These noise fluctuations are known to be important for how we perceive sound. This research found that the fluctuations are significantly affected by the blade angle offsets (azimuthal phasing) between the different rotors, which demonstrates the potential for synchronizing the rotors to reduce noise: a concept that has been studied extensively for tonal noise, but not for broadband noise.

Sound consists of air pressure fluctuations. One major source of the sound generated by rotors is the random pressure fluctuation of turbulence, which spans a wide range of frequencies. Accordingly, this sound is called broadband noise. A common example and model of broadband noise is white noise, shown in Figure 1, where the random character of broadband noise is evident. Despite this randomness, we hear the noise in Figure 1 as having a nearly constant sound level.

Figure 1: White noise with a nearly constant sound level.

A better model of rotor noise is white noise with amplitude modulation (AM). Amplitude modulation is caused by the blades’ rotation: sound levels are louder when a blade moves towards the listener and quieter when it moves away. This is called Doppler amplification, and it is analogous to the Doppler effect that shifts sound frequency when a source travels towards or away from you. AM white noise is shown in Figure 2: the sound is still random, but it has a sinusoidal “envelope” with a modulation frequency corresponding to the blade passage frequency. AM causes time-varying sound levels, as shown in Figure 3. This time variation is characterized by the modulation depth, the peak-to-trough level variation in decibels (dB) labeled in Figure 3. A greater modulation depth typically makes the noise sound more annoying.

Figure 2: White noise with amplitude modulation (AM).
Figure 3: Time-varying sound levels of AM white noise.
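As a rough numerical illustration of these two ideas (not the analysis used in the paper), the short Python sketch below generates white noise, imposes a sinusoidal envelope at an assumed blade passage frequency, and estimates the modulation depth from the cycle-averaged sound level as a function of blade position. The sample rate, blade passage frequency, and modulation strength are arbitrary values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100                # sample rate in Hz (assumed)
dur = 5.0                 # duration in seconds
bpf = 90.0                # assumed blade passage frequency in Hz
m = 0.5                   # modulation strength (0 = no AM)

t = np.arange(int(fs * dur)) / fs
white = rng.standard_normal(t.size)              # white noise
envelope = 1.0 + m * np.sin(2 * np.pi * bpf * t)
am_noise = envelope * white                      # AM white noise

# Average the instantaneous power over many modulation cycles,
# binned by modulation phase, then express the result as a level in dB.
n_bins = 36
phase_bin = ((bpf * t) % 1.0 * n_bins).astype(int)
power = np.array([np.mean(am_noise[phase_bin == b] ** 2) for b in range(n_bins)])
level_db = 10 * np.log10(power / power.mean())

# Modulation depth = peak-to-trough variation of the level
print(f"Modulation depth ~ {level_db.max() - level_db.min():.1f} dB")
# For m = 0.5 the expected depth is 20*log10(1.5/0.5), about 9.5 dB
```

Setting the modulation strength to zero leaves only a small random residual in the estimated depth, matching the nearly constant-level white noise of Figure 1.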

Broadband noise modulation is known to be important for wind turbines, whose “swishing” is found to be annoying even at low sound levels. This contrasts with white noise, which is typically considered soothing when it has a constant sound level (i.e., no AM). This exemplifies the importance of considering the time variation of sound levels when capturing human perception of sound. More recently, the importance of broadband noise modulation has been demonstrated for helicopters: this “chopping” is part of what makes a helicopter sound realistic, even at low sound levels.

Researchers have not extensively studied broadband noise modulation for aircraft with many rotors. Computational studies in the literature predict that summing the broadband noise modulation of many rotors causes “destructive interference”, resulting in nearly no modulation. However, flight test measurements of a six-rotor drone showed that broadband noise modulation was significant. To investigate this discrepancy, changes in modulation depth were studied as the blade angle offset between rotors was varied; this offset is typically not considered in noise predictions and experiments. The results are shown in Figure 4. For each data point in Figure 4, the rotor rotation speeds are synchronized, but the constant blade angle offset between the rotors is different. The results demonstrate the potential for synchronizing rotors to reduce broadband noise modulation. Such synchronization holds the blade angle offset between rotors as constant as possible; it has been studied extensively for controlling tones (sounds at a single frequency), but not for broadband noise modulation.

Figure 4: Modulation depth as a function of blade angle offset between two synchronized rotors.

If the rotors are not synchronized, which is typically the case, the flight controller continuously varies the rotors’ rotation speeds to stabilize or maneuver the drone. This causes the blade angle offsets between rotors to vary with time, which in turn causes the summed noise to move between different points in Figure 4. Measurements showed that all blade angle offsets are equally likely (i.e., the azimuthal phasing follows a uniform probability distribution). Therefore, multirotor broadband noise modulation can be characterized and predicted by constructing a plot like Figure 4, adding together copies of the broadband noise modulation of a single rotor.
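A minimal sketch of that construction, under simplifying assumptions rather than the measured single-rotor data: each rotor’s broadband noise is represented by a sinusoidal power envelope, the envelopes from different rotors add incoherently (in power), and the blade angle offset shifts the phase of each contribution. Sweeping the offset for two synchronized rotors traces out a curve analogous to Figure 4; the blade count and modulation strength below are assumed values.

```python
import numpy as np

n_blades = 2        # blades per rotor (assumed)
m = 0.5             # single-rotor modulation strength (assumed)
theta = np.linspace(0.0, 2 * np.pi, 1000)     # blade position over one revolution

def modulation_depth_db(offsets_rad):
    """Peak-to-trough level (dB) of the summed broadband noise power envelope
    for rotors whose blade angles are offset by the given amounts."""
    total = np.zeros_like(theta)
    for off in offsets_rad:
        # Each rotor's power envelope repeats n_blades times per revolution;
        # broadband noise from different rotors adds incoherently (in power).
        total += 1.0 + m * np.cos(n_blades * (theta + off))
    return 10 * np.log10(total.max() / total.min())

# Two synchronized rotors: sweep the constant blade angle offset (cf. Figure 4)
for offset_deg in (0, 30, 60, 90):
    depth = modulation_depth_db([0.0, np.deg2rad(offset_deg)])
    print(f"offset = {offset_deg:3d} deg -> modulation depth ~ {depth:.1f} dB")
```

In this toy model, a 90° offset between two-bladed rotors puts the two envelopes in antiphase and the summed modulation nearly cancels, which is the “destructive interference” predicted by the computational studies mentioned above.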

Teaching about the Dangers of Loud Music with InteracSon’s Hearing Loss Simulation Platform

Jérémie Voix – Jeremie.Voix@etsmtl.ca

École de technologie supérieure, Université du Québec, Montréal, Québec, H3C 1K3, Canada

Rachel Bouserhal, Valentin Pintat & Alexis Pinsonnault-Skvarenina
École de technologie supérieure, Université du Québec

Popular version of 1pNSb12 – Immersive Auditory Awareness: A Smart Earphones Platform for Education on Noise-Induced Hearing Risks
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026825

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Ever thought about how your hearing might change in the future based on how much and how loudly you listen to music through earphones? And how would knowing this affect your listening habits? We developed a tool called InteracSon, a digital earpiece you can wear to help you better understand the risk of losing your hearing from listening to loud music through earphones.

In this interactive platform, you first select your favourite song and play it through a pair of earphones at your preferred listening volume. After you tell InteracSon how much time you usually spend listening to music, it calculates the “Age of Your Ears”: how much your ears have aged due to your music listening habits. So even if you’re, say, 25 years old, your ears might behave as if they’re 45 years old because of all that loud music!
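The paper does not describe how InteracSon turns listening level and duration into an “ear age”, so the sketch below only illustrates the underlying idea of accumulating sound exposure, using a generic daily noise dose calculation based on the widely used 85 dBA, 8-hour criterion with a 3 dB exchange rate. It is a hypothetical example, not the platform’s actual algorithm.

```python
def daily_noise_dose(level_dba: float, hours: float) -> float:
    """Percent of an allowable daily noise dose, using the common
    85 dBA / 8 h criterion with a 3 dB exchange rate (generic
    illustration, not InteracSon's actual calculation)."""
    allowed_hours = 8.0 / 2 ** ((level_dba - 85.0) / 3.0)
    return 100.0 * hours / allowed_hours

# Example: music at 94 dBA for 2 hours a day exceeds the daily limit
print(f"Dose: {daily_noise_dose(94.0, 2.0):.0f}% of the daily limit")
```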

Picture of the “InteracSon” platform during calibration on an acoustic manikin. Photo by V. Pintat, ÉTS / CC BY

To really demonstrate what this means, InteracSon provides you with an immersive experience of what it’s like to have hearing loss. It has a mode where you can still hear what’s going on around you, but the sounds are filtered to mimic how they would be heard with hearing loss. You can also hear what tinnitus, a ringing in the ears that is a common problem for people who listen to music too loudly, sounds like. You can even listen to your favourite song again, this time altered to simulate your predicted hearing loss.
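As a very crude illustration of this kind of simulation (not InteracSon’s actual signal processing), the sketch below attenuates the high frequencies of an audio file, roughly mimicking the high-frequency loss typical of noise-induced hearing damage, and mixes in a faint tone to stand in for tinnitus. The file name, cutoff frequency, and tone level are all assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

fs, audio = wavfile.read("favourite_song.wav")     # assumed mono 16-bit file
audio = audio.astype(np.float64) / 32768.0

# Crude "hearing loss": strongly attenuate content above ~2 kHz
sos = butter(4, 2000.0, btype="low", fs=fs, output="sos")
simulated = sosfiltfilt(sos, audio)

# Crude "tinnitus": add a faint continuous tone near 4 kHz
t = np.arange(audio.size) / fs
simulated += 0.01 * np.sin(2 * np.pi * 4000.0 * t)

wavfile.write("simulated_hearing_loss.wav", fs, (simulated * 32767).astype(np.int16))
```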

With more than 60% of adolescents listening to their music at unsafe levels, and nearly 50% of them reporting hearing-related problems, InteracSon is a powerful tool to teach them about the adverse effects of noise exposure on hearing and to promote awareness about how to prevent hearing loss.

Babies lead the way – a discovery with infants brings new insights to vowel perception

Linda Polka – linda.polka@mcgill.ca

School of Communication Sciences & Disorders, McGill University, 2001 McGill College Avenue, Montreal, Quebec, H3A 1G1, Canada

Matthew Masapollo, PhD
Motor Neuroscience Laboratory
Department of Psychology
McGill University

Popular version of 2ASC7 – What babies bring to our understanding of vowel perception
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027029

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

From the early months of life, infants perceive and produce vowel sounds, which occupy a central role in speech communication across the lifespan. Infant research has typically focused on understanding how their vowel perception and production skills mature into an adult-like form. But infants, being genuine and notoriously unpredictable, often give us new insights that go beyond our study goals. In our lab, several findings initially discovered in infants are now directing novel research with adults. One such discovery is the focal vowel bias, a perceptual pattern we observed when we tested infants on their ability to discriminate two vowel sounds. For example, when testing infants (roughly 4 to 12 months old) to see if they could discriminate two vowel sounds such as “eh” (as in bed) and “ae” (as in bad), infants showed very good performance in detecting the change from “eh” to “ae”, but very poor performance when the direction of change was reversed (detecting the change from “ae” to “eh”). Initially, these directional differences were puzzling because the same two sounds were involved; only the order of presentation differed. However, we soon realized that we could predict this pattern by considering the degree of articulatory movement required to produce each sound. Articulatory movement describes how fast and how far we have to move our tongue, lips, or jaw to produce a speech sound. We noticed that infants find it easier to discriminate vowels when the vowel that involves the most articulatory movement is presented second rather than first. In essence, this pattern shows us that vowels produced with more extreme articulatory movements are also more perceptually salient. Our scientific name for this pattern, the focal vowel bias, is a shorthand way to describe the acoustic signatures of vowels produced with larger articulatory movements.

These infant findings led us to explore the focal vowel bias in adults. We ran experiments using the “oo” vowels in English and French, which are slightly different sounds. Compared to English “oo”, French “oo” involves more articulatory movement due to enhanced lip rounding. Using these vowel sounds (produced by a bilingual speaker), we found that adults showed the same pattern we observed in infants. They discriminated a change from English “oo” to French “oo” more easily than the reverse direction, consistent with the focal vowel bias. Adults did this regardless of whether they spoke English or French, showing that the focal vowel bias is not related to language experience. We then ran many experiments using different versions of the French and English “oo” vowels, including natural and synthesized vowels, visual vowel signals (just a moving face with no sound), and animated dots and shapes that follow the lip movements of each vowel sound. We found that adults displayed the focal vowel bias for both visual and auditory vowel signals. Adults also showed the bias when tested with simple visual animations that retained the global shape, orientation, and dynamic movements of a mouth, even though subjects failed to perceive these animations as a mouth. No bias was found when movement and mouth orientation were disrupted (static images or animations rotated sideways). These findings show us that the focal vowel bias is related to how we process speech movements in different sensory modalities.

These adult findings highlight our exquisite sensitivity to articulatory movement and suggest that the information we attend to in speech is multimodal and closely tied to how speech is produced. We are now resuming our infant research, focused on a new question: as young infants begin learning to produce speech, do their own speech movements also contribute critically to this perceptual bias and help them form vowel categories? We are eager to see where the next round of infant research will take us.

Reducing Ship Noise Pollution with Structured Quarter-Wavelength Resonators

Mathis Vulliez – mathis.vulliez@usherbrooke.ca

Université de Sherbrooke, Département de génie mécanique, Sherbrooke, Québec, J1K 2R1, Canada

Marc-André Guy, Département de génie mécanique, Université de Sherbrooke
Kamal Kesour, Innovation Maritime, Rimouski, QC, Canada
Jean-Christophe G. Marquis, Innovation Maritime, Rimouski, QC, Canada
Giuseppe Catapane, University of Naples Federico II, Naples, Italy
Giuseppe Petrone, University of Naples Federico II, Naples, Italy
Olivier Robin, Département de génie mécanique, Université de Sherbrooke

Popular version of 1pEA6 – Use of metamaterials to reduce underwater noise generated by ship machinery
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026790

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

The underwater noise generated by maritime traffic is the most significant source of ocean noise pollution. This pollution threatens marine biodiversity, from large marine mammals to invertebrates. At low speeds, machinery dominates the underwater radiated noise from vessels. It also has a precise sound signature, since the machinery usually operates at a fixed rotation speed. Think of an idling vehicle: it produces a tonal acoustic excitation, with the sound energy concentrated at a few precise frequencies and their multiples. The engine rotates at a given speed, in revolutions per minute; dividing by 60 gives the rotation frequency, the number of oscillations per second. In addition to this rotation frequency, the firing order and the number of cylinders generate excitation at multiples of the rotation frequency. The problem is that the resulting frequencies are generally low and difficult to mitigate with classical soundproofing materials, which would require substantial thickness.
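As a worked example with assumed numbers (not measurements from a particular ship), the sketch below lists the excitation frequencies expected from an engine at a fixed speed: the rotation frequency is the RPM divided by 60, and the firing frequency and its multiples follow from the number of cylinders, here assuming a four-stroke engine in which each cylinder fires once every two revolutions.

```python
rpm = 1200                    # assumed engine speed, revolutions per minute
n_cylinders = 6               # assumed number of cylinders
four_stroke = True            # each cylinder fires every 2 revolutions if True

f_rot = rpm / 60.0                                  # rotation frequency, Hz
fires_per_rev = n_cylinders / (2.0 if four_stroke else 1.0)
f_fire = f_rot * fires_per_rev                      # firing frequency, Hz

harmonics = [round(k * f_fire, 1) for k in range(1, 5)]
print(f"Rotation frequency: {f_rot:.1f} Hz")
print(f"Firing frequency and multiples: {harmonics} Hz")
```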

This research project explores new solutions to mitigate underwater noise pollution using innovative noise control technologies. The solution investigated in this work is structured quarter-wavelength acoustic resonators. These resonators absorb sound at a resonance frequency and its odd harmonics, making them ideal for targeting precise frequencies and their multiples. However, the length of these resonators is dictated by the wavelength corresponding to the target frequency. Like the required thickness of classical materials, this wavelength becomes large at low frequencies: in air, for a frequency of 100 Hz and a speed of sound of 340 m/s, the wavelength is 3.4 m, since the wavelength is the ratio of the speed of sound to the frequency. The length of a quarter-wavelength resonator tuned to 100 Hz is thus 0.85 m.

Fig. 1. Comparison between classical and innovative soundproofing materials in terms of sound absorption, from Centre de recherche acoustique-signal-humain, Université de Sherbrooke.
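The quarter-wavelength relationship described above can be written out directly. The sketch below reproduces the 100 Hz example in air and lists the first few resonances (the fundamental and its odd multiples) of the resulting 0.85 m tube, using the speed of sound and target frequency quoted in the text.

```python
c = 340.0         # speed of sound in air, m/s (value used in the text)
f_target = 100.0  # target frequency, Hz

wavelength = c / f_target          # 3.4 m
length = wavelength / 4.0          # quarter-wavelength resonator: 0.85 m

# A quarter-wavelength resonator absorbs at its fundamental and odd harmonics
resonances = [round((2 * n - 1) * c / (4.0 * length), 1) for n in range(1, 4)]
print(f"Resonator length: {length:.2f} m")
print(f"Resonance frequencies: {resonances} Hz")   # [100.0, 300.0, 500.0]
```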

Therefore, a coiled quarter-wavelength resonator was considered to reduce bulkiness and facilitate installation. The geometry is inspired by the Archimedean spiral, a structure easily manufactured with today’s 3D printing technologies. Experimental laboratory tests were conducted to characterize the prototypes and determine their effectiveness in absorbing sound. We also created a numerical model that allows us to quickly answer optimization questions and study the efficiency of a hybrid solution: a rock wool panel with embedded coiled resonators. We aim to combine classical and innovative solutions to propose lightweight and compact treatments that efficiently reduce underwater noise pollution!

Fig. 2. Numerical model of coiled resonators embedded in rock wool, from Centre de recherche acoustique-signal-humain, Université de Sherbrooke.
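To see why coiling helps, here is a purely geometric sketch (not the actual prototype dimensions): a channel with the required 0.85 m acoustic length is wound into an Archimedean spiral, and the outer radius of the coil is computed, with the spacing between turns taken as an assumed value.

```python
import numpy as np

target_length = 0.85    # required quarter-wavelength channel length, m
spacing = 0.01          # assumed radial spacing between turns, m

b = spacing / (2 * np.pi)            # Archimedean spiral r = b * theta
theta = np.linspace(0.0, 30 * np.pi, 200000)
r = b * theta

# Arc length along the spiral, accumulated numerically
ds = np.sqrt(np.diff(r) ** 2 + (r[:-1] * np.diff(theta)) ** 2)
s = np.concatenate(([0.0], np.cumsum(ds)))

outer_radius = r[np.searchsorted(s, target_length)]
print(f"0.85 m channel coiled into a spiral of outer radius ~ {outer_radius*100:.1f} cm")
```

With the assumed 1 cm spacing between turns, the 0.85 m channel coils into a footprint roughly 10 cm across, illustrating how the resonator can be made far more compact than a straight tube.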

Popping Droplets for Drug Delivery

Aaqib Khan – aaqib.khan@iitgn.ac.in

Chemical Engineering Department, Indian Institute of Technology Gandhinagar, Gandhinagar, Gujarat, 382355, India

Sameer V. Dalvi – sameervd@iitgn.ac.in
Chemical Engineering Department, Indian Institute of Technology Gandhinagar, Gandhinagar, Gujarat, 382355, India

Popular version of 4pBAa3 – Ultrasound Responsive Multi-Layered Emulsions for Drug Delivery
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027523

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

What are popping droplets? Imagine you are making popcorn in a pot. Each little kernel contains a tiny bit of water. When you heat the kernels, the water inside them gets hot and turns into steam, making each kernel pop into popcorn. Similarly, think of each droplet as a popcorn kernel. The special liquid used to create popping droplets is called perfluoropentane (PFP), and it plays the role of the water inside the kernel. PFP boils at low temperatures and turns into a bubble, which makes it perfect for crafting these special droplets.

Vaporizable/Popping droplets hold great promise in the fields of both diagnosis and therapy. By using sound waves to vaporize PFP present in the droplets, medicine (drugs) can be delivered efficiently to specific areas in the body, such as tumors, while minimizing impacts on healthy tissues. This targeted approach has the potential to improve the safety and effectiveness of therapy, ultimately benefiting patients.

Figure 1. Vaporizable/popping droplets with perfluoropentane (PFP) in the core with successive layers of water and oil

What do we propose? Researchers have been exploring complex structures like double emulsions to load drugs onto droplets (just like filling a backpack with books), especially those that are water-soluble. Building on this, our study introduces multi-layered droplets featuring a vaporizable core (Fig. 1). This design enables the incorporation of both water-soluble and insoluble drugs into separate layers within the same droplet. To better visualize this, imagine a club sandwich with layers of bread stacked on top of each other, each layer containing a different filling. Alternatively, picture an onion with multiple stacked layers that can be peeled off one by one. Similarly, multi-layered droplets comprise stacked layers, each capable of holding various substances, such as drugs or therapeutic agents.

To explore the features of the multi-layered droplets further, we carried out two separate studies. First, we estimated the peak negative pressure of the sound wave at which the PFP in the droplets vaporizes. This is similar to how water boils at 100°C (212°F) under standard atmospheric pressure, but at low or negative pressure (like under a vacuum) it can boil at lower temperatures. Sound waves induce both positive and negative pressure changes. During instances of negative pressure, the pressure drops below atmospheric pressure, creating a vacuum-like effect. This decrease in pressure can trigger the vaporization of the PFP in the droplets at room temperature.

Second, we loaded curcumin, a water-insoluble anti-inflammatory drug, into the oil layer and estimated the amount of drug loading (just like counting the number of books in the backpack).

Figure 2. Relationship between Mean Grayscale (mean brightness) and soundwave pressure for droplet vaporization

Figure 2 depicts the relationship between the increase in mean grayscale (like the increase in brightness of a black-and-white picture) and the peak negative pressure of the sound wave. In our study, the peak negative pressure at which the PFP in the droplets vaporized was 6.7 MPa. Furthermore, the curcumin loading was estimated to be 0.87 ± 0.1 milligrams (mg), indicating a high drug loading capacity for the multi-layered droplets.
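The mean-grayscale analysis behind Figure 2 can be illustrated with a short sketch (hypothetical file names and a generic imaging library, not the authors’ actual processing pipeline): for each ultrasound frame recorded at a given peak negative pressure, the brightness is averaged over all pixels, and a sharp rise in that average marks the onset of droplet vaporization.

```python
import numpy as np
from PIL import Image

# Hypothetical ultrasound frames, one per peak negative pressure (MPa)
frames = {4.0: "frame_4MPa.png", 5.5: "frame_5p5MPa.png", 6.7: "frame_6p7MPa.png"}

for pnp, path in sorted(frames.items()):
    pixels = np.asarray(Image.open(path).convert("L"), dtype=float)  # grayscale
    print(f"PNP = {pnp:.1f} MPa -> mean grayscale = {pixels.mean():.1f}")
# A jump in mean grayscale indicates bubbles appearing, i.e. droplet vaporization.
```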

These studies are essential because they help us determine two critical things. The first one allows us to figure out the exact sound wave pressure needed to make the droplets pop. This is useful for the controlled release of drugs in targeted areas. The second study tells us how much drug these droplets can hold, which is helpful in designing drug delivery systems.

Together, these studies enhance our understanding of multi-layered droplets and pave the way for a new targeted therapy, where popping droplets serve as vehicles for delivering drugs or therapeutic agents to specific locations upon activation by sound waves.

Novel audio analysis helps identify multiple sounds in forensic gunshot recordings

Steven Beck – stevendbeck@alumni.rice.edu

Beck Audio Forensics, 7618 Rockpoint Dr, Austin, Texas, 78731, United States

Popular version of 2pEA8 – Dissecting Recorded Gunshot Sounds
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027107

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Forensic audio can provide important evidence, especially when the scene information is not captured on video.  Audio recordings of gunshots often supply additional information, or the only information, from a shooting incident.  Analysis of the recorded audio can help determine who fired first, how many guns were fired, and sometimes identify each gunshot in a sequence.  The muzzle blast, or “bang”, and the ballistic shockwave, or “crack” from a supersonic bullet are the most common sounds analyzed.  These very loud sounds often obscure other gunshot sounds that can provide important forensic evidence.  New recording technology, like floating point recorders and high sensitivity police body cameras, may capture multiple acoustic sound sources from a gunshot, depending on the shooting, propagation, and recording conditions.

In order to investigate the multiple sounds from a gunshot, a body camera and a Zoom F6 multi-channel floating point recorder were used to record gunshot sounds, with microphones placed at 90°, 135°, and 180° relative to the line of fire.  Revolvers, semiautomatic pistols, and long-barrel firearms (rifles and shotguns) were found to have different sequences of acoustic source events:

  • Revolvers produce a primer blast, chamber gas jet, muzzle blast, and mechanical sounds
  • Pistols produce a primer blast, muzzle blast, slide/bolt sound, and mechanical sounds
  • Semiautomatic rifles produce a primer blast, slide/bolt sound, muzzle blast, and mechanical sounds
  • Bolt-action rifles produce a primer blast, muzzle blast, and later mechanical sounds

Figure 1 shows an example of multiple acoustic sources in a gunshot.  The left-side plots show a muzzle blast followed by a mechanical sound.  Zooming in on the amplitude reveals a much quieter primer blast that occurs before the muzzle blast.  The bottom right plot is the same gunshot recorded on a police-style body camera.  The primer blast is very clear, but the other gunshot sounds are clipped.  Since the primer blast can only be recorded behind or to the side of a nearby shooter, its presence in a recording can help determine who fired first or help identify individual gunshots.

Figure 1 – A Primer Blast Prior to the Muzzle Blast Indicates the Presence of a Nearby Shooter
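As an illustration of how the timing of these separate sounds might be extracted from a recording (a generic sketch, not the author’s analysis method), the code below computes an amplitude envelope of a gunshot waveform and picks out distinct peaks; the spacing between peaks gives the delay between, for example, the primer blast and the muzzle blast. The file name and detection thresholds are assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks, hilbert

fs, audio = wavfile.read("gunshot.wav")            # assumed mono recording
audio = audio.astype(np.float64)
audio /= np.max(np.abs(audio))

# Amplitude envelope via the Hilbert transform, lightly smoothed
envelope = np.abs(hilbert(audio))
envelope = np.convolve(envelope, np.ones(32) / 32, mode="same")

# Distinct acoustic events: peaks above 1% of full scale, at least 1 ms apart
peaks, _ = find_peaks(envelope, height=0.01, distance=int(0.001 * fs))
times_ms = peaks / fs * 1e3
print("Event times (ms):", np.round(times_ms, 2))
print("Delays between events (ms):", np.round(np.diff(times_ms), 2))
```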

In addition to blast-related sounds, there are sounds related to ballistics.  Recording these sounds is very position dependent and requires the recording system to be close to the source.  These sounds include the ballistic shockwave and ballistic flow sound (recorded close to a passing bullet), the tumbling bullet, reverberation and reflections, and the ballistic impact.  Ballistic sounds can help identify a gunshot or a possible shooting location.