Tools for shaping the sound of the future city in virtual reality

Christian Dreier – cdr@akustik.rwth-aachen.de

Institute for Hearing Technology and Acoustics
RWTH Aachen University
Aachen, North Rhine-Westphalia 52064
Germany

– Christian Dreier (lead author, LinkedIn: Christian Dreier)
– Rouben Rehman
– Josep Llorca-Bofí (LinkedIn: Josep Llorca Bofí, X: @Josepllorcabofi, Instagram: @josep.llorca.bofi)
– Jonas Heck (LinkedIn: Jonas Heck)
– Michael Vorländer (LinkedIn: Michael Vorländer)

Popular version of 3aAAb9 – Perceptual study on combined real-time traffic sound auralization and visualization
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027232

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

“One man’s noise is another man’s signal.” This famous quote by Edward Ng from a 1990s New York Times article captures a major lesson of noise research. A rule of thumb in the field holds that community response to noise, when people are asked for “annoyance” ratings, is statistically explained only to about one third by acoustic factors (like the well-known A-weighted sound pressure level, found on household devices as the “dB(A)” figure). Referring to Ng’s quote, another third is explained by non-acoustic, personal, or social variables, whereas the last third cannot be explained according to the current state of research.
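
As a concrete aside, the dB(A) figure is an overall level computed after applying the standardized A-weighting curve, which de-emphasizes the low and very high frequencies our ears are less sensitive to. Here is a minimal sketch of that curve in Python, using the IEC 61672 formula (the printed values are approximate):

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f in Hz (IEC 61672)."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.00 dB offset normalizes the curve to 0 dB at 1 kHz.
    return 20.0 * np.log10(ra) + 2.00

# Low frequencies are strongly de-emphasized; 1 kHz is the reference:
print(a_weighting_db([100, 1000, 10000]))  # approx. [-19.1, 0.0, -2.5]
```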

Noise reduction in built urban environments is an important goal for urban planners, as noise is not only a cause of cardiovascular disease but also affects learning and work performance in schools and offices. To achieve this goal, a number of solutions are available, ranging from electrified public transport and speed limits to traffic flow management and the masking of annoying noise by pleasant sound, for example from fountains.

In our research, we develop a tool for making the sound of virtual urban scenery audible and visible. Visually, the result is comparable to a computer game; the difference is that the acoustic simulation is physics-based, a technique called auralization. The research software “Virtual Acoustics” simulates the entire physical “history” of a sound wave to produce an audible scene: the sonic characteristics of traffic sound sources (cars, motorcycles, aircraft) are modeled, the sound wave’s interactions with different materials at building and ground surfaces are calculated, and human hearing is taken into account.
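
To give a flavor of what simulating this “history” involves, the sketch below implements two of its simplest ingredients: time-of-flight delay with spherical spreading, plus one absorbing facade reflection. The function names and numbers are illustrative only and do not represent the Virtual Acoustics API:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def propagate(signal, fs, distance):
    """Free-field propagation: time-of-flight delay plus 1/r spreading loss."""
    delay = int(round(distance / SPEED_OF_SOUND * fs))
    out = np.zeros(len(signal) + delay)
    out[delay:] = signal / max(distance, 1.0)  # spherical spreading
    return out

def reflect(signal, absorption):
    """One surface bounce with a frequency-independent absorption coefficient."""
    return signal * np.sqrt(1.0 - absorption)

# One second of noise stands in for engine sound; the listener receives the
# direct path (50 m) plus a facade reflection over a longer path (65 m).
fs = 44100
engine = np.random.randn(fs)
direct = propagate(engine, fs, 50.0)
echo = reflect(propagate(engine, fs, 65.0), absorption=0.2)
n = max(len(direct), len(echo))
at_listener = np.pad(direct, (0, n - len(direct))) + np.pad(echo, (0, n - len(echo)))
```

A full auralization additionally applies frequency-dependent material and air absorption to every path and renders the result binaurally, so that human hearing is taken into account.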

You may have noticed that a lightning strike sounds dull from far away and bright from close by. The same applies to aircraft sound. In a corresponding study, we auralized the sound of an aircraft under different weather conditions. A 360° video compares how the same aircraft typically sounds during summer, autumn, and winter when the acoustical changes due to the weather conditions are taken into account (use headphones for the full experience!).
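
The physics behind the dull distant sound is air absorption: attenuation accumulates with distance and grows steeply with frequency, and the coefficients depend on temperature and humidity, which is exactly what changes with the seasons. The following crude sketch assumes a simple quadratic-in-frequency loss; real coefficients follow ISO 9613-1:

```python
import numpy as np

def air_absorb(signal, fs, distance_m, db_per_m_at_10khz=0.1):
    """Damp high frequencies with distance: loss in dB ~ f^2 * distance.
    The 0.1 dB/m figure at 10 kHz is only a ballpark value; real
    coefficients (ISO 9613-1) depend on temperature and humidity."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    loss_db = db_per_m_at_10khz * (freqs / 1e4) ** 2 * distance_m
    spectrum *= 10.0 ** (-loss_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# The same clap heard nearby (bright) and from 2 km away (dull):
fs = 16000
clap = np.random.randn(fs)
near, far = air_absorb(clap, fs, 100.0), air_absorb(clap, fs, 2000.0)
```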

In another work, we prepared a freely available project template for using Virtual Acoustics. To that end, we acoustically and graphically modeled the IHTApark, which is located next to the Institute for Hearing Technology and Acoustics (IHTA): https://www.openstreetmap.org/#map=18/50.78070/6.06680.

In our latest experiment, we focused on the perception of especially annoying traffic sound events. To this end, we presented the traffic situations through virtual reality headsets and asked the participants to assess them. How (un)pleasant would the drone be for you during a walk in the IHTApark?

Reducing the Sound Transmission Between Suites, One Conduit at a Time

Michael Kundakcioglu – mkundakcioglu@hgcengineering.com

HGC Engineering, 2000 Argentia Road, Plaza One, Suite 203, Mississauga, Ontario, L5N 1P7, Canada

Jessica Tinianov
Adam Doiron

Popular version of 1aAA9 – Sound flanking through common low-voltage electrical conduit in multi-family residential buildings
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026642

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Residents living in an apartment or condominium expect a certain amount of privacy, especially when it comes to noise intrusions from neighbours. In fact, Building Code requirements in most jurisdictions set minimum standards for the design of suite-demising architectural assemblies, to limit the amount of sound that can pass directly through (or in some cases, around) walls, floors, and ceilings. Despite this, noise sometimes finds a way to travel through the building in unexpected ways, bypassing these assemblies. One such “sneaky” path is through the electrical conduits – the tubes that carry electrical wires between suites.

These conduits can act like a highway for sound, especially if they’re not sealed properly at certain points, like where they connect to fire alarms. This can allow noise from one suite to easily travel to another, even if the walls themselves are properly designed to block sound. It’s a bit like running a string from one suite to another, tied to a foam cup at each end, like the makeshift telephones we used to make as children.

This isn’t just a minor annoyance; it can be a big problem. This conduit issue has been found in multiple buildings in recent years, and it can reduce the effectiveness of the walls that are meant to block sound – by quite a bit. In many cases, this simple construction flaw can cause the sound insulation between suites to fall below the Building Code requirements mentioned above, depending on the local rules.

The good news is that this can be prevented. Sealing the open holes at the ends of the conduits with simple flexible caulking on both sides of the tube greatly reduces the amount of noise travelling through them (see Figure 1 below). It’s a simple solution that can make a big difference in the level of noise intrusion between suites.

Figure 1: Unsealed Conduit Opening in Fire Alarm Junction Box (Left), and Conduit Opening after Applying Sealant (Right). Image Courtesy of HGC Engineering

Standard sound transmission testing (known as Apparent Sound Transmission Class, or ASTC, testing) has shown that sealing these conduits can reduce the sound travelling through them so much that the sound transmitted from suite to suite returns to the expected design values. In Figures 2 and 3 below, we plot the amount of sound transmitted between two adjacent suites as tested in four different real-world buildings with three different wall types separating the suites (double steel stud walls in Figure 2, and poured concrete walls in Figure 3). The dotted lines represent the amount of sound blocked by the wall when the conduit routed between the suites is left unsealed, while the solid lines represent the amount blocked once the conduit has been sealed with caulking.

Figure 2: Steel Stud Walls Transmission Loss Results, as Tested by HGC Engineering
Figure 3: Poured Concrete Walls Transmission Loss Results, as Tested by HGC Engineering


In the above tests, we see the ASTC rating increase by 5 to 10 points once the conduits are sealed, which is a significant and very noticeable difference. In conclusion, if you are a developer, builder, architect, or engineer, it might be worth looking into whether the conduits in the suites in your buildings are properly sealed. It’s a fix that can help everyone get back to enjoying their own space in peace.
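
For readers curious how such single-number ratings are derived: an STC or ASTC value is found by fitting a standardized reference contour (ASTM E413) to the transmission loss measured in sixteen one-third-octave bands from 125 Hz to 4 kHz. The Python sketch below illustrates the fitting rules; the function name is ours, not HGC Engineering’s software:

```python
# STC reference contour (ASTM E413): dB relative to the rating value,
# for the 16 one-third-octave bands from 125 Hz to 4 kHz.
STC_CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc_rating(tl_by_band):
    """Highest contour whose deficiencies (shortfalls of TL below the
    contour) total at most 32 dB, with no band more than 8 dB deficient."""
    assert len(tl_by_band) == 16
    for rating in range(120, 0, -1):  # scan down; first passing contour wins
        deficiency = [max(0, rating + c - tl)
                      for c, tl in zip(STC_CONTOUR, tl_by_band)]
        if sum(deficiency) <= 32 and max(deficiency) <= 8:
            return rating
    return 0
```

Sealing a conduit raises the measured transmission loss in the affected bands, which allows a higher contour to fit – hence the 5 to 10 point improvements above.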

Listen In: Infrasonic Whispers Reveal the Hidden Structure of Planetary Interiors and Atmospheres

Quentin Brissaud – quentin@norsar.no
X (twitter): @QuentinBrissaud

Research Scientist, NORSAR, Kjeller, 2007, Norway

Sven Peter Näsholm, University of Oslo and NORSAR
Marouchka Froment, NORSAR
Antoine Turquet, NORSAR
Tina Kaschwich, NORSAR

Popular version of 1pPAb3 – Exploring a planet with infrasound: challenges in probing the subsurface and the atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026837

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Low-frequency sound, called infrasound, can help us better understand our atmosphere and explore distant planetary atmospheres and interiors.

Low-frequency sound waves below 20 Hz, known as infrasound, are inaudible to the human ear. They can be generated by a variety of natural phenomena, including volcanoes, ocean waves, and earthquakes. These waves travel over large distances and can be recorded by instruments such as microbarometers, which are sensitive to small pressure variations. The data can give unique insight into the source of the infrasound and the properties of the media it traveled through, whether solid, oceanic, or atmospheric. In the future, infrasound data might be key to building more robust weather prediction models and understanding the evolution of our solar system.

Infrasound has been used on Earth to monitor stratospheric winds, to analyze the characteristics of man-made explosions, and even to detect earthquakes. But its potential extends beyond our home planet. Infrasound waves generated by meteor impacts on Mars have provided insight into the planet’s shallow seismic velocities, as well as near-surface winds and temperatures. On Venus, recent research suggests that balloons floating in its atmosphere and recording infrasound waves could be one of the few viable ways to detect “venusquakes” and explore the planet’s interior, since surface pressures and temperatures are too extreme for conventional instruments.

Sonification of sound generated by the Flores Sea earthquake as recorded by a balloon flying at 19 km altitude.
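
Sonifications like this are typically produced by compressing time: replaying a recording hundreds of times faster shifts infrasound up into the audible range. Below is a minimal sketch of the idea; the speed-up factor and file name are illustrative, and the exact processing behind this particular clip may differ:

```python
import numpy as np
from scipy.io import wavfile

def sonify(pressure, fs, speedup=600, path="sonified.wav"):
    """Write infrasound as audio by replaying it `speedup` times faster:
    a 0.1 Hz pressure oscillation then appears at an audible 60 Hz."""
    audio = pressure / np.max(np.abs(pressure))  # normalize to [-1, 1]
    wavfile.write(path, int(fs * speedup), audio.astype(np.float32))

# Synthetic stand-in: one hour of microbarometer data sampled at 20 Hz,
# containing a slow 0.1 Hz oscillation buried in noise.
fs = 20
t = np.arange(0, 3600, 1 / fs)
pressure = np.sin(2 * np.pi * 0.1 * t) + 0.1 * np.random.randn(t.size)
sonify(pressure, fs)
```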

Until recently, it has been challenging to map infrasound signals to the various planetary phenomena behind them, including ocean waves, atmospheric winds, and planetary interiors. However, our research team and collaborators have made significant strides in this field, developing tools to unlock the potential of infrasound-based planetary research. We retrieve the connections between source and media properties and sound signatures through three different techniques: (1) training neural networks to learn the complex relationships between observed waveforms and source and media characteristics, (2) performing large-scale numerical simulations of seismic and sound waves from earthquakes and explosions, and (3) incorporating knowledge about sources and seismic media from adjacent fields, such as geodynamics and atmospheric chemistry, to inform our modeling work. Our recent work highlights the potential of infrasound-based inversions to predict high-altitude winds from the sound of ocean waves with machine learning, to map an earthquake’s mechanism to its local sound signature, and to assess the detectability of venusquakes from high-altitude balloons.
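
As a schematic of technique (1), the sketch below trains a small neural network to regress a medium property from waveform features. Everything here is synthetic and illustrative; in practice the inputs could be, say, microbarom spectra and the targets stratospheric wind speeds:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# 500 synthetic "recordings" of 32 spectral amplitudes each, with a hidden
# linear relationship standing in for the real wave physics.
features = rng.normal(size=(500, 32))
wind_speed = features @ rng.normal(size=32) + rng.normal(scale=0.1, size=500)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(features[:400], wind_speed[:400])            # train on 400 examples
print("held-out R^2:", model.score(features[400:], wind_speed[400:]))
```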

To ensure the long-term success of infrasound research, dedicated Earth missions will be crucial to collect new data, support the development of efficient global modeling tools, and create rigorous inversion frameworks suited to various planetary environments. Nevertheless, infrasound research already shows that tuning into a planet’s whisper unlocks crucial insights into its state and evolution.

Soundscape to Improve the Experience of People with Dementia: Considering How They Process Sounds

Arezoo Talebzadeh – arezoo.talebzadeh@ugent.be
X (twitter): @arezoonia
Instagram: @arezoonia
Ghent University, Technology Campus, iGent, Technologiepark 126, Gent, Gent, 9052, Belgium

Dick Botteldooren and Paul Devos
Ghent University
Technology Campus, iGent, Technologiepark 126
Gent, Gent 9052
Belgium

Popular version of 2aNSb7 – Soundscape Augmentation for People with Dementia Requires Accounting for Disease-Induced Changes in Auditory Scene Analysis
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026999

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Sensory stimuli are significant in guiding us through space and making us aware of time. Sound plays an essential role in this awareness. A soundscape is an acoustic environment as perceived and experienced by a person. A well-designed soundscape can make the experience pleasant and improve mood; in contrast, an unfamiliar and chaotic soundscape can increase anxiety and stress. We aim to discuss different auditory symptoms of dementia and introduce ways to design an augmented soundscape that fosters individual auditory needs.

People with dementia suffer from a neurodegenerative disorder that leads to a progressive decline in cognitive health. Behavioural and psychological symptoms of dementia refer to a group of noncognitive symptoms and behaviours that are difficult to predict and control. Reducing the occurrence of these symptoms is one of the main goals of dementia care, and environmental intervention is the best nonpharmacological treatment to improve the behaviour of people with dementia.

People with severe dementia usually live in nursing homes, long-term care facilities, or memory care units, where the sensory environment is unfamiliar. Strange sensory stimuli add to residents’ anxiety and distress, as care facilities are often not customized to individual needs. Studies show that incorporating pleasant sounds into the environment, known as an ‘augmented soundscape,’ positively impacts behaviour and reduces the psychological symptoms of dementia. Sound augmentation can also help a person navigate through space and identify the time of day. By implementing sound augmentation as part of the design, we can enhance mood, reduce apathy, lower anxiety and stress, and promote health. People with dementia experience changes in perception, including misperceptions, misidentifications, hallucinations, delusions, and time-shifting; sound augmentation can support a better understanding of the environment and help with daily navigation. In a previous study by our research team, implementing soundscapes in nursing homes and dementia care units showed promising results in reducing the psychological symptoms of dementia.

It’s crucial to recognize that dementia is not a singular entity but a complex spectrum of degenerative diseases. For example, environmental sound agnosia – difficulty in understanding non-speech environmental sounds – is common in some people with frontotemporal dementia; for them, sound augmentation should focus on simple, easily recognizable sounds. Amusia, another auditory symptom, leaves a person unable to recognize music; thus, playing music is not recommended for this group.

Each type of dementia presents with its own set of symptoms, including a variety of auditory manifestations. These can range from auditory hallucinations and disorientation to heightened sound sensitivity, agnosia for environmental sounds, auditory agnosia, amusia, and musicophilia. Understanding these diverse symptoms of auditory perception is critical when designing soundscape augmentation for individuals with dementia.

Vowel Adjustments: The Key to High-Pitched Singing

May Pik Yu Chan – pikyu@sas.upenn.edu

University of Pennsylvania, 3401-C Walnut Street, Suite 300, C Wing, Philadelphia, PA, 19104, United States

Jianjing Kuang

Popular version of 4aMU6 – Ultrasound tongue imaging of vowel spaces across pitches in singing
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027410

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Singing isn’t just for the stage – everyone enjoys finding their voice in song, whether performing in an auditorium or merely humming in the shower. Singing well is more than just hitting the right notes; it’s also about using the voice effectively as an instrument. One technique that professional opera singers master is changing how they pronounce their vowels based on the pitch they are singing. But why do singers change their vowels? Is it only to sound more beautiful, or is it necessary for hitting the higher notes?

We explore this question by studying what non-professional singers do: if changing vowels is necessary to reach higher notes, then non-professional singers should do it too. The participants were asked to sing various English vowels across their pitch range, much like a vocal warm-up exercise. These vowels included [i] (like “beat”), [ɛ] (like “bet”), [æ] (like “bat”), [ɑ] (like “bot”), and [u] (like “boot”). Since vowels are made with different tongue gestures, we used ultrasound imaging to capture the participants’ tongue positions as they sang. This allowed us to see how the tongue moved across different pitches and vowels.
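
Tongue position also leaves an acoustic fingerprint: it shapes the vocal tract’s resonances, the formants, which can be roughly estimated from a recording by linear prediction. The sketch below is a common textbook recipe offered as an acoustic complement to our articulatory measurements, not the method of this study; it assumes the librosa library and an illustrative file name:

```python
import numpy as np
import librosa

def rough_formants(path, order=12, sr=16000):
    """Estimate the lowest vocal-tract resonances via LPC: formant
    frequencies correspond to the angles of the prediction-polynomial
    roots in the upper half plane."""
    y, sr = librosa.load(path, sr=sr)
    a = librosa.lpc(y, order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return freqs[:3]  # rough F1, F2, F3 in Hz

print(rough_formants("sung_vowel.wav"))  # low F1 with high F2 suggests [i]
```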

We found that participants who managed to sing more pitches did indeed adjust their tongue shapes when reaching high notes. Even when we isolated the participants who said they had never sung in a choir or a cappella group, the trend still stood: those who can sing higher pitches adjust their vowels at higher pitches. In contrast, participants who cannot sing a wide pitch range generally do not change their vowels based on pitch.

We then compared this to pilot data from an operatic soprano, who showed gradual adjustments in tongue positions across her whole pitch range, effectively neutralising the differences between vowels at her highest pitches. In other words, all the vowels at her highest pitches sounded very similar to each other.

Overall, these findings suggest that changing mouth shape and tongue position may be necessary when singing high pitches. The way singers modify their vowels could be an essential part of achieving a well-balanced, efficient voice, especially for hitting high notes. By better understanding how vowels and pitch interact, this research opens the door to further studies on how singers use their vocal instruments and what the keys to effective voice production are. Together, this research offers insights not only into our appreciation for the art of singing, but also into the complex mechanisms of human vocal production.


Video 1: Example of sung vowels at relatively lower pitches.
Video 2: Example of sung vowels at relatively higher pitches.

Bats could help the development of AI robots

Rolf Müller – rolf.mueller@vt.edu
X (twitter): @UBDVTLab
Instagram: @ubdvtcenter
Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia, 24061, United States

Popular version of 4aAB7 – Of bats and robots
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027373

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Given the ongoing revolution in AI, it may appear that all humanity can do now is wait for AI-powered robots to take over the world. However, while stringing together eloquently worded sentences is certainly impressive, AI is still far from dealing with many of the complexities of the real world. Besides serving the sinister goal of world domination, robots with the intelligence to accomplish demanding missions in complex environments could transform humanity’s ability to deal with key challenges to its survival, e.g., producing food and regrowable materials and maintaining healthy ecosystems.

To accomplish the goal of having a robot operate autonomously in complex real-world environments, a variety of methods have been developed – typically with mixed results at best. These methods usually rest on two related concepts: the creation of a geometric model of the environment, and the use of deterministic templates to identify objects. However, both approaches have proven limited in their applicability and reliability, as well as in their often prohibitively high computational cost.

Bats navigating dense vegetation – such as the rainforests of Southeast Asia, where our fieldwork is carried out – may provide a promising alternative to the current approaches: the animals sense their environments through a small number of brief echoes of ultrasonic pulses. The comparatively large wavelengths of these pulses (millimeters to centimeters), combined with the fact that the bats’ ears are not much larger than these wavelengths, condemn bat biosonar to poor angular resolution. This prevents the animals from resolving densely packed scatterers such as leaves in foliage. Hence, bats navigating under such conditions have to deal with echoes that can be classified as “clutter”, i.e., signals that consist of contributions from many unresolvable scatterers and must be treated as random for lack of better knowledge. The nature of these clutter echoes makes it unlikely that bats in complex environments rely heavily on three-dimensional models of their surroundings and deterministic templates.
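
To see why the resolution is poor, consider the diffraction limit for a receiving aperture of size \(D\) at wavelength \(\lambda\) (the call frequency and ear size below are illustrative round numbers, not measurements from our work):

\[
\theta \approx 1.22\,\frac{\lambda}{D},
\qquad
\lambda = \frac{c}{f} \approx \frac{343~\mathrm{m/s}}{40~\mathrm{kHz}} \approx 8.6~\mathrm{mm},
\]

so an ear aperture of about \(D \approx 1~\mathrm{cm}\) gives \(\theta \approx 1~\mathrm{rad}\), on the order of 60° – far too coarse to separate individual leaves in foliage.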

Bats must therefore have evolved sensing paradigms that ensure the clutter echoes contain the relevant sensory information and that this information can be extracted. Coupling between sensing and actuation could very well play a critical role here, which is why robotics might be of pivotal importance in replicating the bats’ skills in sensing and navigating their environments. Similarly, the deep-learning revolution could bring a previously unavailable ability to extract complex patterns from data to bear on the problem of extracting insight from clutter echoes. Taken together, insights from these approaches could lead to novel acoustics-based paradigms for obtaining relevant sensory information on complex environments in a direct and highly parsimonious manner. These approaches could then enable autonomous robots that learn to navigate new environments quickly and efficiently, transforming the use of autonomous systems in outdoor tasks.

Biomimetic robots designed to reproduce the (a) biosonar sensing and (b) flapping-flight capabilities of bats. Design renderings by Zhengsheng Lu (a) and Adam Carmody (b).

As a pilot demonstration of this approach, we present a twin pair of bioinspired robots, one mimicking the biosonar sensing abilities of bats and the other mimicking their flapping flight. The biosonar robot has been used successfully to identify locations and find passageways in complex natural environments; to accomplish this, the biomimetic sonar has been integrated with deep-learning analysis of clutter echoes. The flapping-flight line of biomimetic robots has just started to reproduce some of the many degrees of freedom in the wing kinematics of bats. Ultimately, the two robots are to be integrated into a single system to investigate the coupling of biosonar sensing and flight.