Noise Survey Highlights Need for New Direction at Canadian Airports #ASA186

Annoyance data gathered during pandemic reveals flaws in existing methods to assess and mitigate noise impacts.

Media Contact:
AIP Media
301-209-3090
media@aip.org

OTTAWA, Ontario, May 16, 2024 – The COVID-19 pandemic changed life in many ways, including stopping nearly all commercial flights. At the Toronto Pearson International Airport, airplane traffic dropped by 80% in the first few months of lockdown. For a nearby group of researchers, this presented a unique opportunity.

Low-flying aircraft can lead to noisy and unhealthy neighborhoods, and a pioneering survey can help track their impact around Canadian airports. Image Credit: Julia Jovanovic

Julia Jovanovic will present the results of a survey conducted on aircraft noise and annoyance during the pandemic era Thursday, May 16, at 11:10 a.m. EDT as part of a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association, running May 13-17 at the Shaw Centre located in downtown Ottawa, Ontario, Canada.

“For many years, researchers like me have looked to assess the impacts of aircraft noise on communities surrounding airports, particularly in terms of annoyance,” said Jovanovic. “The travel restrictions due to COVID and the resulting sustained reductions in noise gave us an unprecedented opportunity to test the correlation between noise and annoyance.”

In early 2020, the NVH-SQ Research Group out of the University of Windsor surveyed residents living around the airport to gauge how their annoyance levels changed with the reduction in noise. A follow-up survey in 2021 provided even more data, and according to Jovanovic, the results highlight flaws in the tools authorities use to assess and manage the impacts of aircraft noise on communities.

“The industry has, for too long, erroneously relied on noise complaints as a proxy measure for annoyance,” said Jovanovic. “These surveys show that complaints and annoyance are different phenomena, triggered by different mechanisms. Only annoyance has a proven correlation to overall noise levels.”

According to their data, while noise complaints dropped overall during the pandemic, many of the people sending those complaints continued to do so, and some areas even saw an increase in complaints. This demonstrates the need for collecting survey data on annoyance specifically, something Canadian authorities overseeing air transport have been reluctant to do.

“Even though the annoyance metric draws much criticism due to its subjective nature, it is still indicative of the overall effect of aircraft noise on individuals and the resulting possible long-term health impacts,” said Jovanovic. “These types of surveys are conducted in most developed nations on a regular basis. To the best of our knowledge, no similar efforts have been made at any other Canadian airport.”

Jovanovic and her colleagues hope these results will spur regulatory agencies to collect better data and use it to develop updated standards and guidelines, both to protect the public from aircraft noise and to protect the future of airport operations from continued residential encroachment.

“The survey should be repeated around all of our nation’s airports to get an accurate representation of the effects of aircraft noise on Canadian communities and update Transport Canada’s severely outdated guidelines for the management of aircraft noise,” said Jovanovic.

———————– MORE MEETING INFORMATION ———————–
Main Meeting Website: https://acousticalsociety.org/ottawa/
Technical Program: https://eppro02.ativ.me/src/EventPilot/php/express/web/planner.php?id=ASASPRING24

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the in-person meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

ABOUT THE CANADIAN ACOUSTICAL ASSOCIATION/ASSOCIATION CANADIENNE D’ACOUSTIQUE
The Canadian Acoustical Association (CAA):
  • fosters communication among people working in all areas of acoustics in Canada
  • promotes the growth and practical application of knowledge in acoustics
  • encourages education, research, protection of the environment, and employment in acoustics
  • is an umbrella organization through which general issues in education, employment and research can be addressed at a national and multidisciplinary level

The CAA is a member society of the International Institute of Noise Control Engineering (I-INCE) and the International Commission for Acoustics (ICA), and is an affiliate society of the International Institute of Acoustics and Vibration (IIAV). Visit https://caa-aca.ca/.

Taking Pictures of the Sound of a Rocket

Grant W. Hart – grant_hart@byu.edu
Brigham Young University
Provo, UT 84602
United States

Kent Gee (@KentLGee on X)
Eric Hintz
Giovanna Nuccitelli
Trevor Mahlmann (@TrevorMahlmann on X)

Popular version of 1pNSa8 – A photographic analysis of Mach wave radiation from a rocket plume
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3674104

The rumble of a large rocket launching is one of the loudest non-explosive sounds that mankind has ever made. Where does that sound come from?  Surprisingly, it doesn’t come from the rocket itself, or even the exhaust nozzle, but rather from the plume of exhaust that shoots out of the back. The plume is supersonic when it comes out of the rocket, and it emits sound as it slows down in the atmosphere.

This process was visualized in some recent pictures taken by Trevor Mahlmann of a Falcon 9 launch from Cape Canaveral. The launch was just after dawn, and Mahlmann took a series of striking pictures as the rocket passed in front of the Sun. Two of those pictures are shown below. If you look at the edge of the Sun in the later picture, you can see distortions caused by the intense sound waves coming from the rocket.

Recognizing the possibility of gaining more information from these pictures, researchers at Brigham Young University got permission from Mr. Mahlmann to analyze them further. The third picture below shows a portion of the difference between the first two pictures, with the colors modified to show the sound waves more clearly. The waves are clearly coming from a region far down the plume rather than from the rocket’s nozzle: the source was typically 10 to 25 rocket diameters down the plume.

The sound is also directional: it doesn’t go out evenly in all directions, but rather most strongly at about 20-30 degrees below the horizontal. Most rockets sound loudest to people watching the launch when the rocket is 20-30 degrees above the horizon. This is all consistent with models in which the sound is produced by the processes that slow the exhaust down from supersonic speeds. A good introduction to rocket noise can be found in a recent article in Physics Today.

The researchers first had to line up the images so that the sun was in the same place in each frame. They were then able to subtract the later image from the first one to get the difference and leave just the distortions caused by the waves in the second image.  To find the source of the waves, it was necessary to draw a line backward from the wave’s image and find where it met the rocket’s path across the Sun. Since it took time for the wave to get from the source to where it was observed, they had to find where the rocket was at the time the sound wave was given off. They did this by finding how far the sound had traveled and used the speed of sound to find the time it took to get there. With that information the researchers could find the position of the source and the direction of the wave.
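The back-propagation geometry described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code: the coordinates are assumed to be in metres in the image plane after scale calibration, the rocket's path is treated as a straight line, and 340 m/s is an assumed speed of sound.

```python
# Illustrative sketch (not the authors' code) of the analysis steps:
# difference two aligned frames, trace an observed wavefront backward
# along its direction, and intersect that ray with the rocket's
# straight-line path across the Sun to locate the acoustic source.
import numpy as np

SPEED_OF_SOUND = 340.0  # m/s, assumed near-ground value

def difference_image(frame1, frame2):
    """Subtract aligned frames so only the moving wave distortions remain."""
    return frame2.astype(float) - frame1.astype(float)

def source_position(wave_point, wave_direction, rocket_path):
    """Intersect the backward ray from an observed wavefront point with
    the rocket's (assumed straight) trajectory, given as
    (point_on_path, direction). The intersection is the apparent source."""
    p0 = np.asarray(wave_point, dtype=float)
    d0 = np.asarray(wave_direction, dtype=float)
    q0, d1 = (np.asarray(v, dtype=float) for v in rocket_path)
    # Solve p0 + t*d0 = q0 + s*d1 for (t, s) by least squares.
    A = np.column_stack([d0, -d1])
    (t, s), *_ = np.linalg.lstsq(A, q0 - p0, rcond=None)
    return p0 + t * d0

def emission_delay(source, wave_point):
    """Travel time from source to the observed wavefront, used to find
    where the rocket was when the wave was emitted."""
    dist = float(np.linalg.norm(np.asarray(wave_point) - np.asarray(source)))
    return dist / SPEED_OF_SOUND
```

With the source position and the travel time in hand, stepping the rocket back along its path by that time gives its location at the moment of emission, which is how the direction of each wave was recovered.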

Figure 1. A Falcon 9 rocket about to pass in front of the Sun. Image courtesy of Trevor Mahlmann. Used by permission. Higher resolution versions available from the photographer.

Figure 2. A Falcon 9 rocket passing in front of the Sun. Note the distortions of the edge of the Sun caused by the sound waves produced by the rocket. Image courtesy of Trevor Mahlmann. Used by permission. Higher resolution versions available from the photographer.

Figure 3. A portion of the difference between the two previous figures, showing the enhanced sound waves. The bottom of the rocket is at the top of the image. Image adapted from Hart et al.’s original paper.

What makes drones sound annoying? The answer may lie in noise fluctuations

Ze Feng (Ted) Gan – tedgan@psu.edu

Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA, 16802, United States

Popular version of 2aNSa3 – Multirotor broadband noise modulation
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3673871

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Picture yourself strolling through a quiet park. Suddenly, you are interrupted by the “buzz” of a multirotor drone. You ask yourself: why does this sound so annoying? Research shows that a significant source of the annoyance is the variation of broadband noise levels over a rotor revolution, since such fluctuations strongly influence how we perceive sound. This research has found that these variations are significantly affected by the blade angle offsets (azimuthal phasing) between different rotors, which demonstrates the potential of synchronizing the rotors to reduce noise: a concept that has been studied extensively for tonal noise, but not for broadband noise.

Sound consists of air pressure fluctuations. One major source of sound generated by rotors consists of the random air pressure fluctuations of turbulence, which encompass a wide range of frequencies. Accordingly, this sound is called broadband noise. A common example and model of broadband noise is white noise, shown in Figure 1, where the random nature characteristic of broadband noise is evident. Despite this randomness, we hear the noise of Figure 1 as having a nearly constant sound level.

Figure 1: White noise with a nearly constant sound level.

A better model of rotor noise is white noise with amplitude modulation (AM). Amplitude modulation is caused by the blades’ rotation: sound levels are louder when the blade moves towards the listener, and quieter when the blade moves away. This is called Doppler amplification, and is analogous to the Doppler effect that shifts sound frequency when a sound source travels towards or away from you. AM white noise is shown in Figure 2: the sound is still random, but has a sinusoidal “envelope” with a modulation frequency corresponding to the blade passage frequency. AM causes time-varying sound levels, as shown in Figure 3. This time variation is characterized by the modulation depth, the peak-to-trough amplitude in decibels (dB), as shown in Figure 3. A greater value for modulation depth typically corresponds to the noise sounding more annoying.

Figure 2: White noise with amplitude modulation (AM).
Figure 3: Time-varying sound levels of AM white noise.
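The AM white-noise model described above is easy to generate and measure numerically. The sketch below is illustrative only: the 90 Hz modulation frequency and 0.5 modulation index are assumed values, not measurements from the paper.

```python
# Minimal sketch of the AM white-noise model: white noise multiplied by
# a sinusoidal envelope at the blade passage frequency, plus a simple
# modulation-depth estimate from short-time levels. Parameter values
# (90 Hz, modulation index 0.5) are illustrative assumptions.
import numpy as np

def am_white_noise(duration=1.0, fs=44100, f_mod=90.0, mod_index=0.5, seed=0):
    """White noise with a sinusoidal amplitude envelope at f_mod (Hz)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * fs)) / fs
    envelope = 1.0 + mod_index * np.sin(2 * np.pi * f_mod * t)
    return t, envelope * rng.standard_normal(t.size)

def modulation_depth_db(signal, fs, f_mod):
    """Peak-to-trough difference (dB) of short-time RMS levels,
    computed over quarter-cycle windows of the modulation."""
    samples_per_cycle = int(fs / f_mod)
    win = max(1, samples_per_cycle // 4)
    frames = signal[: signal.size // win * win].reshape(-1, win)
    levels = 10 * np.log10(np.mean(frames**2, axis=1))
    return levels.max() - levels.min()
```

Listening to the generated signal reproduces the characteristic "buzz": the noise itself is random, but its level rises and falls once per blade passage.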

Broadband noise modulation is known to be important for wind turbines, whose “swishing” is found to be annoying even at low sound levels. This contrasts with white noise, which is typically considered soothing when its level is constant (i.e., no AM). This exemplifies how important the time variation of sound levels is for capturing human perception of sound. More recently, the importance of broadband noise modulation has been demonstrated for helicopters: this chopping noise is what makes a helicopter sound realistic, even at low sound levels.

Researchers have not extensively studied broadband noise modulation for aircraft with many rotors. Computational studies in the literature predict that summing the broadband noise modulation of many rotors causes “destructive interference”, resulting in nearly no modulation. However, flight test measurements of a six-rotor drone showed that broadband noise modulation was significant. To investigate this discrepancy, changes in modulation depth were studied as the blade angle offset between rotors was varied. This offset is typically not considered in noise predictions and experiments. The results are shown in Figure 4. For each data point in Figure 4, the rotor rotation speeds are synchronized, but the value for the constant blade angle offset between rotors is different. The results of Figure 4 demonstrate the potential for synchronizing rotors to reduce broadband noise modulation. This synchronization controls the blade angle offset between rotors to be as constant as possible, and has been extensively studied for controlling tones (sounds at a single frequency), but not broadband noise modulation.

Figure 4: Modulation depth as a function of blade angle offset between two synchronized rotors.

If the rotors are not synchronized, which is typically the case, the flight controller continuously varies the rotors’ rotation speeds to stabilize or maneuver the drone. This causes the blade angle offsets between rotors to vary with time, which in turn causes the summed noise to move between different points in Figure 4. Measurements showed that all rotor blade angle offsets are equally likely (i.e., azimuthal phasing follows a uniform probability distribution). Therefore, multirotor broadband noise modulation can be characterized and predicted by constructing a plot like Figure 4, adding copies of the broadband noise modulation of a single rotor.
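The construction behind a plot like Figure 4, summing copies of a single rotor's modulation at different blade angle offsets, can be illustrated with the sinusoidal power envelopes alone, no noise required. This is a toy calculation with an assumed modulation index, not the paper's prediction model.

```python
# Toy version of the Figure 4 construction: combine the power envelopes
# of two synchronized rotors at a given blade angle offset and compute
# the modulation depth of the sum. The modulation index m is assumed.
import numpy as np

def summed_modulation_depth(offset_rad, m=0.5, n=4096):
    """Peak-to-trough level difference (dB) of the summed power
    envelope of two rotors whose envelopes are offset in phase."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    power = ((1 + m * np.sin(theta)) ** 2
             + (1 + m * np.sin(theta + offset_rad)) ** 2)
    levels = 10.0 * np.log10(power)
    return levels.max() - levels.min()
```

In this toy model, in-phase rotors (zero offset) reinforce each other's modulation, while a half-cycle offset largely cancels it, mirroring the variation across the points of Figure 4.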

Teaching about the Dangers of Loud Music with InteracSon’s Hearing Loss Simulation Platform

Jérémie Voix – Jeremie.Voix@etsmtl.ca

École de technologie supérieure, Université du Québec, Montréal, Québec, H3C 1K3, Canada

Rachel Bouserhal, Valentin Pintat & Alexis Pinsonnault-Skvarenina
École de technologie supérieure, Université du Québec

Popular version of 1pNSb12 – Immersive Auditory Awareness: A Smart Earphones Platform for Education on Noise-Induced Hearing Risks
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=Session&project=ASASPRING24&id=3673898

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Ever thought about how your hearing might change in the future based on how much, and how loudly, you listen to music through earphones? And how would knowing this affect your listening habits? We developed a tool called InteracSon, a digital earpiece you can wear to help you better understand the risk of losing your hearing from listening to loud music through earphones.

In this interactive platform, you can first select your favourite song, and play it through a pair of earphones at your preferred listening volume. After providing InteracSon with the amount of time you usually spend listening to music, it calculates the “Age of Your Ears”. This tells you how much your ears have aged due to your music listening habits. So even if you’re, say, 25 years old, your ears might be like they’re 45 years old because of all that loud music!
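The paper does not give InteracSon's actual "Age of Your Ears" formula. One standard way to quantify this kind of exposure, shown here purely as an illustration, is the equal-energy principle used in occupational noise standards: convert a listening level and daily duration to an 8-hour equivalent level and compare it with the common 85 dB(A) reference.

```python
# Hedged illustration only: this is NOT InteracSon's algorithm, which is
# not described in the paper. It shows the equal-energy (3 dB exchange
# rate) calculation occupational standards use to normalize a daily
# noise exposure to an 8-hour working day.
import math

REFERENCE_LEVEL = 85.0  # dB(A), common occupational action level

def lex_8h(level_dba, hours_per_day):
    """8-hour equivalent continuous exposure level (LEX,8h)."""
    return level_dba + 10.0 * math.log10(hours_per_day / 8.0)

def exceeds_reference(level_dba, hours_per_day):
    """True if the daily listening habit exceeds the 85 dB(A) reference,
    the regime in which accelerated hearing damage is expected."""
    return lex_8h(level_dba, hours_per_day) > REFERENCE_LEVEL
```

For example, 94 dB(A) for 48 minutes a day works out to an 84 dB(A) 8-hour equivalent, just under the reference; longer or louder habits push the equivalent level, and with it the estimated wear on the ears, steadily higher.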

Picture of the “InteracSon” platform during calibration on an acoustic manikin. Photo by V. Pintat, ÉTS/ CC BY

To really demonstrate what this means, InteracSon provides you with an immersive experience of what it’s like to have hearing loss. It has a mode where you can still hear what’s going on around you, but it filters sounds based on what your ears might be like with hearing loss. You can also hear what tinnitus, a ringing in the ears, sounds like, which is a common problem for people who listen to music too loudly. You can even listen to your favorite song again, but this time it would be altered to simulate your predicted hearing loss.

With more than 60% of adolescents listening to their music at unsafe levels, and nearly 50% of them reporting hearing-related problems, InteracSon is a powerful tool to teach them about the adverse effects of noise exposure on hearing and to promote awareness about how to prevent hearing loss.

Tools for shaping the sound of the future city in virtual reality

Christian Dreier – cdr@akustik.rwth-aachen.de

Institute for Hearing Technology and Acoustics
RWTH Aachen University
Aachen, North Rhine-Westphalia 52064
Germany

– Christian Dreier (lead author, LinkedIn: Christian Dreier)
– Rouben Rehman
– Josep Llorca-Bofí (LinkedIn: Josep Llorca Bofí, X: @Josepllorcabofi, Instagram: @josep.llorca.bofi)
– Jonas Heck (LinkedIn: Jonas Heck)
– Michael Vorländer (LinkedIn: Michael Vorländer)

Popular version of 3aAAb9 – Perceptual study on combined real-time traffic sound auralization and visualization
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3671183

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

“One man’s noise is another man’s signal.” This famous quote by Edward Ng, from a 1990s New York Times article, sums up a major lesson from noise research. A rule of thumb in the field is that community response to noise, measured by “annoyance” ratings, is statistically explained only about one third by acoustic factors (like the well-known A-weighted sound pressure level, found on household devices as “dB(A)” information). Referring to Ng’s quote, another third is explained by non-acoustic, personal or social variables, while the remaining third cannot be explained by the current state of research.

Noise reduction in built urban environments is an important goal for urban planners, as noise is not only a cause of cardiovascular disease but also affects learning and work performance in schools and offices. To achieve this goal, a number of solutions are available, ranging from electrified public transport, speed limits, and traffic flow management to masking annoying noise with pleasant sound, for example from fountains.

In our research, we develop a tool for making the sound of virtual urban scenery audible and visible. In its visual appearance, the result is comparable to a computer game, with the difference that the acoustic simulation is physics-based, a technique called auralization. The research software “Virtual Acoustics” simulates the entire physical “history” of a sound wave to produce an audible scene: the sonic characteristics of traffic sound sources (cars, motorcycles, aircraft) are modeled, the sound wave’s interactions with different materials at building and ground surfaces are calculated, and human hearing is taken into account.
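The chain just described (source model, then propagation, then listener) can be illustrated with a deliberately simple toy: a point source, spherical-spreading attenuation, and travel-time delays to two ear positions. This is not the Virtual Acoustics API; every name and number below is an illustrative assumption.

```python
# Toy, self-contained sketch of an auralization chain (NOT the Virtual
# Acoustics software): a point source, 1/r spherical-spreading
# attenuation with travel-time delay, and a crude two-ear listener
# whose interaural delay comes from the two ear-to-source distances.
import numpy as np

FS = 44100   # sample rate, Hz
C = 343.0    # speed of sound, m/s (assumed)

def propagate(signal, distance):
    """Delay by travel time and attenuate by spherical spreading (1/r)."""
    delay = int(round(distance / C * FS))
    out = np.zeros(signal.size + delay)
    out[delay:] = signal / max(distance, 1.0)
    return out

def binaural(signal, source_xy, left_ear_xy, right_ear_xy):
    """Render one source for two ear positions; the slightly different
    delays are what give the listener a sense of direction."""
    dl = np.linalg.norm(np.subtract(source_xy, left_ear_xy))
    dr = np.linalg.norm(np.subtract(source_xy, right_ear_xy))
    left, right = propagate(signal, dl), propagate(signal, dr)
    n = max(left.size, right.size)
    return np.vstack([np.pad(left, (0, n - left.size)),
                      np.pad(right, (0, n - right.size))])
```

A real auralization engine adds what this toy omits: directive source models, surface reflections and absorption, and measured head-related transfer functions instead of bare delays.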

You may have noticed that a lightning strike sounds dull from far away and sharp up close. The same applies to aircraft sound. In a related study, we auralized the sound of an aircraft under different weather conditions. A 360° video compares how the same aircraft typically sounds during summer, autumn, and winter when the acoustic changes due to weather are taken into account (use headphones for the full experience!)

In another work, we prepared a freely available project template for using Virtual Acoustics. For it, we acoustically and graphically modeled the IHTApark, which is located next to the Institute for Hearing Technology and Acoustics (IHTA): https://www.openstreetmap.org/#map=18/50.78070/6.06680.

In our latest experiment, we focused on the perception of particularly annoying traffic sound events. We presented the traffic situations through virtual reality headsets and asked the participants to assess them. How (un)pleasant would the drone be for you during a walk in the IHTApark?

Soundscape to Improve the Experience of People with Dementia: Considering How They Process Sounds

Arezoo Talebzadeh – arezoo.talebzadeh@ugent.be
X (twitter): @arezoonia
Instagram: @arezoonia
Ghent University, Technology Campus, iGent, Technologiepark 126, Gent, Gent, 9052, Belgium

Dick Botteldooren and Paul Devos
Ghent University
Technology Campus, iGent, Technologiepark 126
Gent, Gent 9052
Belgium

Popular version of 2aNSb7 – Soundscape Augmentation for People with Dementia Requires Accounting for Disease-Induced Changes in Auditory Scene Analysis.
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3673959

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Sensory stimuli are significant in guiding us through space and making us aware of time, and sound plays an essential role in this awareness. A soundscape is an acoustic environment as perceived and experienced by a person. A well-designed soundscape can make an experience pleasant and improve mood; in contrast, an unfamiliar and chaotic soundscape can increase anxiety and stress. We aim to discuss different auditory symptoms of dementia and introduce ways to design an augmented soundscape that fosters individual auditory needs.

People with dementia suffer from a neurodegenerative disorder that leads to a progressive decline in cognitive health. Behavioural and psychological symptoms of dementia refer to a group of noncognitive symptoms and behaviours that are difficult to predict and manage. Reducing the occurrence of these symptoms is one of the main goals of dementia care, and environmental intervention is the best nonpharmacological treatment to improve the behaviour of people with dementia.

People with severe dementia usually live in nursing homes, long-term care facilities, or memory care units, where sensory input is unfamiliar. Strange sensory stimuli add to residents’ anxiety and distress, as care facilities are often not customized to individual needs. Studies show that incorporating pleasant sounds into the environment, known as an ‘augmented soundscape,’ positively impacts behaviour and reduces the psychological symptoms of dementia. Sound augmentation can also help a person navigate through space and identify the time of day. By implementing sound augmentation as part of the design, we can enhance mood, reduce apathy, lower anxiety and stress, and promote health. People with dementia experience changes in perception, including misperceptions, misidentifications, hallucinations, delusions, and time-shifting; sound augmentation can support a better understanding of the environment and help with daily navigation. In a previous study by the research team, implementing soundscapes in nursing homes and dementia care units showed promising results in reducing the psychological symptoms of dementia.

It’s crucial to recognize that dementia is not a singular entity but a complex spectrum of degenerative diseases. For example, environmental sound agnosia, the difficulty in understanding non-speech environmental sounds, is common in some people with frontotemporal dementia; for them, sound augmentation should focus on uncomplicated sounds. Amusia, another auditory symptom, leaves a person unable to recognize music, so playing music is not recommended for this group.

Each type of dementia presents with its unique set of symptoms, including a variety of auditory manifestations. These can range from auditory hallucinations and disorientation to heightened sound sensitivity, agnosia for environmental sounds, auditory agnosia, amusia, and musicophilia. Understanding these diverse syndromes of auditory perception is critical when designing a soundscape augmentation for individuals with dementia.