Northwestern University, Communication Sciences & Disorders, Evanston, IL, 60208, United States
Jeff Crukley – University of Toronto; McMaster University
Emily Lundberg – University of Colorado, Boulder
James M. Kates – University of Colorado, Boulder
Kathryn Arehart – University of Colorado, Boulder
Pamela Souza – Northwestern University
Popular version of 3aPP1 – Modeling the relationship between listener factors and signal modification: A pooled analysis spanning a decade
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027317
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Imagine yourself in a busy restaurant, trying to focus on a conversation. Often, even with hearing aids, the background noise can make it challenging to understand every word. Some listeners manage to follow the conversation rather easily, while others find it hard to keep up, even after their hearing aids have been adjusted.
Studies show that cognitive abilities (and not just how well we hear) can affect how well we understand speech in noisy places. Individuals with weaker cognitive abilities struggle more in these situations. Unfortunately, current clinical approaches to hearing aid treatment are not yet tailored to these individuals. The standard approach to setting up hearing aids is to make speech sounds louder, or more audible. However, hearing aid settings that make speech more audible or attempt to remove background noise can unintentionally modify other important cues, such as fluctuations in the intensity of the sound, that are necessary for understanding speech. Consequently, some listeners who depend on these cues may be at a disadvantage. Our investigations have focused on understanding why listeners with hearing aids experience these noisy environments differently and on developing an evidence-based method for adjusting hearing aids to each person’s individual abilities.
To address this, we pooled data from 73 individuals across four studies published by our group over the last decade. In these studies, listeners with hearing loss were asked to repeat sentences mixed with background chatter (like at a restaurant or a social gathering). The signals were processed through hearing aids adjusted in various ways, changing how they handle loudness and background noise. We measured how these adjustments to the noisy speech affected the listeners’ ability to understand the sentences. Each study also included a measure of signal fidelity, which captures how much the hearing aid processing and the background noise together alter the speech sounds heard by the listener.
Figure 1. Effect of individual cognitive abilities (working memory) on word recognition as signal fidelity changes.
Our findings reveal that listeners generally understand speech better when the background noise is less intrusive, and the hearing aids do not alter the speech cues too much. But there’s more to it: how well a person’s brain collects and manipulates speech information (their working memory), their age, and the severity of their hearing loss all play a role in how well they understand speech in noisy situations. Specifically, those with lower working memory tend to have more difficulty understanding speech when it is obscured by noise or altered by the hearing aid (Figure 1). So, improving the listening environment by reducing the background noise and/or choosing milder settings on the hearing aids could benefit these individuals.
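For readers who want to see what such a pooled analysis can look like in practice, below is a minimal sketch of a mixed-effects model relating word recognition to signal fidelity and listener factors. The data file, column names, and exact model form are illustrative assumptions, not the precise specification used in our studies.

```python
# Minimal sketch of a pooled mixed-effects analysis. The file and column names
# (listener, score, fidelity, working_memory, age, hearing_loss) are
# illustrative assumptions, not the exact variables used in the studies.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("pooled_studies.csv")   # hypothetical pooled data set

# Word recognition modeled from signal fidelity, working memory, and their
# interaction, plus age and hearing loss, with random intercepts per listener
# to account for repeated measures across conditions and studies.
model = smf.mixedlm(
    "score ~ fidelity * working_memory + age + hearing_loss",
    data,
    groups=data["listener"],
)
print(model.fit().summary())
```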
In summary, our study indicates that a tailored approach that considers each person’s cognitive abilities could lead to better communication, especially in noisier situations. Clinically, the measurement of signal fidelity may be a useful tool to help make these decisions. This could mean the difference between straining to hear and enjoying a good conversation over dinner with family.
American University, Department of Performing Arts, Washington, DC, 20016, United States
Braxton Boren, Department of Performing Arts, American University
X (twitter): @bbboren
Popular version of 2pAAa12 – Acoustics of two Hindu temples in southern India
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027050
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
What is the history behind the sonic experiences of millions of devotees of one of the oldest religions in the world?
Hindu temple worship dates back over 1,500 years; there are Vedic scriptures from the 5th century C.E. describing the rules for temple construction. Sound is a key component of Hindu worship, and consequently of its temples. Acoustically important aspects include the striking of bells and gongs, the blowing of conch shells, and the chanting of the Vedas. The bells, gongs, and conch shells each have specific fundamental frequencies and unique sonic characteristics, while the chanting is stylized to control phonetic characteristics such as pitch, duration, emphasis, and uniformity. This prominence of the frequency-domain soundscape makes Hindu worship acoustically distinctive. In this study, we analyzed the acoustic characteristics of two UNESCO World Heritage temples in Southern India.
Figure 1: Virupaksha temple, Pattadakal
The Virupaksha temple in Pattadakal, built around 745 C.E., is part of one of the largest and most ancient temple complexes in India.1 We performed a thorough analysis of the space, taking sine sweep measurements at 36 different source-receiver positions. The mid-frequency reverberation time (the time it takes for sound to decay by 60 dB) was found to be 2.1 s, and the clarity index for music, C80, was -0.9 dB. The clarity index is a metric that tells us how well complex passages of music can be heard and how balanced the space sounds. A reverberation time of 2.1 s provides reinforcement similar to that of a modern concert hall, and a C80 of -0.9 dB means the space is also well suited to complex music. The music performed there would have been a combination of vocal and instrumental South Indian music, with a melodic framework akin to the melodic modes of Western classical music, set to different time signatures and played at tempi ranging from very slow (40-50 beats per minute) to very fast (200+ beats per minute).
Figure 2: The sine sweep measurement process in progress at the Virupaksha temple, Pattadakal
The second site was the 15th-century Vijaya Vittala temple in Hampi, another major tourist attraction. Here the poet, composer, and father of South Indian classical music, Purandara Dasa, spent many years creating compositions in praise of the deity. He is said to have created thousands of compositions in many complex melodic modes.
Measurements at this site spanned 29 source-receiver positions; the mid-frequency reverberation time was 2.5 s and the clarity index for music, C80, was -1.7 dB. These values also fall in the ideal range for complex music to be heard clearly. Based on these findings, we conclude that the Vijaya Vittala temple provided optimal acoustical conditions for the performance and appreciation of Purandara Dasa’s compositions and of South Indian classical music more broadly.
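For readers curious how these two numbers are obtained, the sketch below shows a simplified, broadband calculation of reverberation time (via a T30 fit) and C80 from a measured room impulse response, which is what the sine sweeps yield after processing. The file name is a placeholder, and the full analysis in the study is done per frequency band rather than broadband.

```python
# Simplified, broadband estimate of reverberation time (T30 fit) and clarity
# index (C80) from a measured room impulse response. Assumes a mono WAV file
# whose first sample coincides with the arrival of the direct sound.
import numpy as np
from scipy.io import wavfile

fs, ir = wavfile.read("impulse_response.wav")  # placeholder file name
ir = ir.astype(float)

# Schroeder backward-integrated energy decay curve, normalized, in dB
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc.max())

# T30: fit the decay between -5 dB and -35 dB, then extrapolate to -60 dB
t = np.arange(len(ir)) / fs
fit_range = (edc_db <= -5) & (edc_db >= -35)
slope, _ = np.polyfit(t[fit_range], edc_db[fit_range], 1)  # dB per second
rt60 = -60.0 / slope

# C80: early (first 80 ms) to late energy ratio, in dB
n80 = int(0.080 * fs)
c80 = 10 * np.log10(np.sum(ir[:n80] ** 2) / np.sum(ir[n80:] ** 2))

print(f"Reverberation time ~ {rt60:.1f} s, C80 ~ {c80:.1f} dB")
```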
Other standard room acoustic metrics have also been calculated and analyzed from the temples’ sound decay curves. We will use these data to build wave-based computer simulations, to further analyze the resonant modes in the temples, and to study the sonic characteristics of the bells, gongs, and conch shells, in order to understand the relationship between the worship ceremony and the architecture of the temples. We also plan to auralize compositions of Purandara Dasa to recreate his experience in the Vijaya Vittala temple 500 years ago.
1 Alongside the ritualistic sounds discussed earlier, music performance holds a vital place in Hindu worship. The Virupaksha temple, in particular, has a rich history of fulfilling this role, as evidenced by inscriptions detailing grants given to temple musicians by the local queen.
Arup, Suite 900, Toronto, Ontario, M4W 3M5, Canada
Vincent Jurdic
Chris Pollock
Willem Boning
Popular version of 1aAA13 – The cost of transparency: balancing acoustic, financial and sustainability considerations for glazed office partitions
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026646
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
I’m an acoustician and here’s why we need to use less glass inside office buildings.
Glass partitions are good for natural light and visual connection, but how does glass perform when it comes to blocking sound? And what about the environmental cost? These questions came up when my team was addressing problems with acoustic privacy in a downtown Toronto office building. One of the issues was the glass partitions (sometimes called a “storefront” system) between private offices or meeting rooms and open office areas. Staff reported overhearing conversations outside these offices, an issue that ranges from merely distracting to undermining confidentiality for staff and clients.
Glass is ubiquitous in office buildings, inside and out. As a façade system, it has been a major part of the modern city since at least the 1950s. Inside offices, it often gives us a sense of connection and inclusivity. But as an acoustician, I know that glass partitions are not effective at blocking sound compared to traditional stud walls or masonry walls. How well or poorly they perform depends on the glazing design: how thick the glass is, whether it is laminated, whether it uses double panes and air gaps, and how the glass is sealed. When working on the speech privacy problems in the Toronto office, we measured the sound isolation of the glazed partitions by playing random noise very loudly in each office and measuring the sound level difference between that room and the area outside. Our measurements supported the experience of the office staff: conversations are not just audible but comprehensible on the other side of the glass. The seals around the sliding doors often had gaps, and sometimes there were joints without any seals at all, big enough to put your fingers through. Sound consists of tiny fluctuations in air pressure; even small gaps can be a problem.
Figure 1: Example of glass storefront with a sliding door and no seal (Arup image)
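The underlying measurement is conceptually simple: a band-by-band level difference between the source room and the receiving area. The sketch below shows only that arithmetic, with placeholder band levels; the standardized procedures used in practice add background-noise and reverberation corrections.

```python
# Minimal sketch of the level-difference measurement: loud random noise is
# played in the office, and the sound level is measured on both sides of the
# partition in one-third-octave bands. The band levels below are placeholders,
# not measured data from this project.
bands_hz = [250, 500, 1000, 2000, 4000]
level_office_db = [78.0, 80.0, 81.0, 79.0, 76.0]    # inside the office, with the noise source
level_outside_db = [58.0, 57.0, 55.0, 52.0, 48.0]   # in the open office area outside

for freq, l1, l2 in zip(bands_hz, level_office_db, level_outside_db):
    print(f"{freq:>5} Hz: level difference = {l1 - l2:.1f} dB")
```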
This acoustics problem led me to other questions about the cost of transparency in offices, especially the carbon cost. Glass is energy-intensive to produce. Per unit area, ¼” glass can require seven times the embodied carbon of one layer of 5/8” Type X gypsum. When Arup compared several glazed partition systems with roughly the same acoustic performance, we found that the glass itself was the greatest contributor to embodied carbon of any component (see Figure 2). Using these embodied carbon values, we estimated that the carbon cost of all the glazed partitions in this particular office was about 56,800 kgCO2eq, equivalent to driving one way from New York to Seattle 51 times in an average gasoline-powered car.
Figure 2: Embodied carbon for typical aluminum storefront with three glazing buildups with the same sound isolation rating (Arup research)
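The driving equivalence is straightforward arithmetic. The sketch below reconstructs it with assumed round numbers for the one-way driving distance and the emissions of an average gasoline car; these are illustrative values, not necessarily the exact figures behind the estimate above.

```python
# Rough reconstruction of the driving equivalence, using assumed round numbers
# (not necessarily the exact figures behind the estimate above).
embodied_carbon_kg = 56_800   # estimated embodied carbon of the glazed partitions, kg CO2eq
trip_miles = 2_850            # assumed one-way driving distance, New York to Seattle
car_kg_per_mile = 0.39        # assumed emissions of an average gasoline car, kg CO2eq per mile

trips = embodied_carbon_kg / (trip_miles * car_kg_per_mile)
print(f"Equivalent one-way trips: {trips:.0f}")   # comes out to about 51
```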
So how should these costs be balanced? First, acousticians should be involved early in space planning, where they can encourage architects to use less glazing while still achieving the desired design outcomes, including acceptable acoustic performance. Second, we could encourage designers to create a glass aesthetic that uses “less perfect” glass in some locations. Offices may not require the degree of transparency that has become the norm. Where visual privacy is important, glass made from recycled cullet could be specified, leaving the perfectly transparent glass manufactured from virgin silica sand for key locations where a strong visual connection matters. The right balance depends on the project, but asking questions about the multiple costs of transparency is a good place to start.
University of Texas at Austin, Applied Research Laboratories and Walker Department of Mechanical Engineering, Austin, Texas, 78766-9767, United States
Michael R. Haberman; Mark F. Hamilton (both at Applied Research Laboratories and Walker Department of Mechanical Engineering)
Popular version of 5pPA13 – Effects of increasing orbital number on the field transformation in focused vortex beams
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027778
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
When a chef tosses pizza dough, the spinning motion stretches the dough into a circular disk. The more rapidly the dough is spun, the wider the disk becomes.
Fig 1. Pizza dough gets stretched out into a circular disk when it is spun.
A similar phenomenon occurs when sound waves are subjected to spinning motion: the beam spreads out more rapidly with increased spinning. One can use the theory of diffraction—the study of how waves constructively and destructively interfere to form a field pattern that evolves with distance—to explain this unique sound field, known as a vortex beam.
In addition to exhibiting a helical field structure, vortex beams can be focused, the same way sunlight passing through a magnifying glass can be focused to a bright spot. When sound is simultaneously spun and focused, something unexpected happens. Rather than converging to a point, the combination of spinning and focusing can cause the sound field to create a region of zero acoustic pressure, analogous to a shadow in optics, between the source and focal point, the shape of which resembles a rugby ball.
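To make “spinning plus focusing” concrete: in a simple paraxial picture, the source launches a beam whose phase winds around the axis ℓ times (the spinning) on top of the usual quadratic focusing phase. The sketch below builds that source phase map for an assumed circular aperture; the frequency, medium, and geometry are illustrative choices, not the specific source model used in the study.

```python
# Source phase map for a focused vortex beam (paraxial sketch, assumed values):
# a helical term ell*phi (the spinning) plus a quadratic term -k*r^2/(2*F)
# (the focusing) across a circular aperture of radius a.
import numpy as np

c = 1500.0              # sound speed, m/s (assumed medium)
f = 1.0e6               # source frequency, Hz (assumed)
k = 2 * np.pi * f / c   # wavenumber
a = 0.02                # aperture radius, m (assumed)
F = 0.05                # focal distance, m (assumed)
ell = 3                 # orbital number: how strongly the beam is "spun"

x = np.linspace(-a, a, 401)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
phi = np.arctan2(Y, X)

phase = ell * phi - k * r**2 / (2 * F)               # spin + focus
source = np.where(r <= a, np.exp(1j * phase), 0.0)   # uniform amplitude inside the aperture
```

Propagating this aperture field numerically, for example with an angular-spectrum method, is the “theory of diffraction” calculation that produces focused vortex fields like the colored panels shown later in Fig. 4.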
While the theory of diffraction predicts this effect, it does not provide insight into what creates the shadow region when the acoustic field is simultaneously spun and focused. To understand why this happens, one can resort to a simpler concept that approximates sound as a collection of rays. This simpler description, known as ray theory, is based on the assumption that waves do not interfere with one another, and that the sound field can be described by straight arrows emerging from a source, just like sun rays emerging from behind a cloud. According to this description, the pressure is proportional to the number of rays present in a given region in space.
Analysis of the paths of individual sound rays permits one to unravel how the overall shape and intensity of the beam are affected by spinning and focusing. One key finding is the formation of an annular channel, resembling a tunnel, within the beam’s structure. This channel is created by a multitude of individual sound rays that are converging due to focusing but are skewed away from the beam axis due to spinning.
By studying this channel, one can calculate the amplitude of the sound field according to ray theory, offering perspectives that the theory of diffraction does not readily reveal. Specifically, the annular channels reveal that the sound field is greatest on the surface of a spheroid, coinciding with the feature shaped like a rugby ball predicted by the theory of diffraction.
In the figure below from the work of Gokani et al., the annular channels and spheroidal shadow zone predicted by ray theory are overlaid as white lines on the upper half of the field predicted by the theory of diffraction, represented by colors corresponding to intensity increasing from blue to red. The amount by which the sound is spun is characterized by ℓ, the orbital number, which increases from left to right in the figure.
Fig 4. Annular channels (thin white lines) and spheroidal shadow zones (thick white lines) overlaid on the diffraction pattern (colors). From Gokani et al., J. Acoust. Soc. Am. 155, 2707-2723 (2024).
As can be seen from Fig. 4, ray theory distills the intricate dynamics of sound that is spun and focused to a tractable geometry problem. Insights gained from this theory not only expand one’s fundamental knowledge of sound and waves but also have practical applications related to particle manipulation, biomedical ultrasonics, and acoustic communications.
Biologist, National Park Service, Natural Sounds and Night Skies Division
1201 Oakridge Drive Suite 100
Fort Collins, CO, 80524, United States
Popular version of 2aAB5 – From sounds to science on public lands: using emerging tools in terrestrial bioacoustics to understand national park soundscapes
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026931
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
In recent decades, audio recordings have helped scientists learn more about wildlife. Natural sounds help answer questions such as: which animals are present or absent from the environment? When do frogs and birds start calling in the spring? How are wildlife reacting to something humans are doing on a landscape?
As audio recordings have become less expensive and easier to collect, scientists can rapidly amass thousands of hours of data. To process this volume of data, instead of listening to it all ourselves, we create automated detectors that find animal sounds in the recordings. However, creating detectors for a diversity of species, habitats, and types of research is a daunting and time-consuming task.
Figure 1. Varied Thrush at Glacier Bay National Park and Preserve in 2015. Image courtesy of the National Park Service.
Several bird species vocalize at an acoustic monitoring station at Glacier Bay National Park and Preserve, including Pacific Wren, American Robin, and Varied Thrush. This example was recorded on June 13, 2017, at 3:22am local time. Audio recording courtesy of the National Park Service.
As more parks collect audio data to answer pressing research and management questions, building a unique automated detector for a single park project is no longer tenable. Instead, we are adopting emerging technology like BirdNET, a machine learning model trained on thousands of species worldwide (not just birds!). BirdNET provides us with more capacity. Instead of painstakingly building one detector for one project, BirdNET enables us to answer questions across multiple national parks.
But emerging technology poses more questions, too. How do we access these tools? What are the best practices for analyzing and interpreting outputs? How do we adapt new methods to answer many diverse park questions? We don’t all have the answers yet, but now we have code and workflows that help us process terabytes of audio, wrangle millions of rows of output, and produce plots to visualize and explore the data.
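As one small example of that workflow, the sketch below turns a (hypothetical) table of BirdNET detections into the kind of species-by-day summary behind Figures 2 and 3. The file name, column names, and confidence threshold are illustrative assumptions rather than our exact pipeline.

```python
# Sketch: turn a table of BirdNET detections into a species-by-day heat map.
# Assumes a CSV with columns "species", "timestamp", and "confidence"
# (illustrative names, not our exact output format).
import pandas as pd
import matplotlib.pyplot as plt

det = pd.read_csv("birdnet_detections.csv", parse_dates=["timestamp"])
det = det[det["confidence"] >= 0.5]      # assumed confidence threshold
det["date"] = det["timestamp"].dt.date

# Count detections per species per day
counts = det.groupby(["species", "date"]).size().unstack(fill_value=0)

fig, ax = plt.subplots(figsize=(10, 4))
im = ax.imshow(counts.values, aspect="auto", cmap="viridis")
ax.set_yticks(range(len(counts.index)))
ax.set_yticklabels(counts.index)
ax.set_xlabel("Day of monitoring")
fig.colorbar(im, ax=ax, label="Detections per day")
plt.tight_layout()
plt.show()
```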
We are learning even more by collaborating with other scientists and land managers. So far, we’re exploring avian soundscapes at Glacier Bay National Park and Preserve across a decade of monitoring – from when birds are most vocally active during the spring (Fig. 2) to when they are most active during the dawn chorus (Fig. 3). We are also learning more about wildlife in the Chihuahuan Desert, wood frogs in Alaska, and how birds respond to simulated beaver structures at Rocky Mountain National Park.
The information we provide and interpret from audio data helps parks understand more about wildlife and actions to protect park resources. Translating huge piles of raw audio data into research insights is still a challenging task, but emerging tools are making it easier.
Figure 2. Heat map of BirdNET detection volume for selected focal species at Glacier Bay National Park and Preserve. (a) Hermit Thrush, (b) Pacific-slope Flycatcher, (c) Pacific Wren, (d) Ruby-crowned Kinglet, (e) Townsend’s Warbler, and (f) Varied Thrush. Dates ranging in color from purple to yellow indicate increasing numbers of detections. Dates colored gray had zero detections. White areas show dates where no recordings were collected. Image courtesy of the National Park Service.
Figure 3. Heat map of Varied Thrush detections across date and time of day at Glacier Bay National Park and Preserve. Timesteps ranging in color from purple to yellow indicate increasing numbers of detections. Timesteps colored gray had zero detections. White areas show times when no recordings were collected. Audio recordings were scheduled based on sunrise times. Image courtesy of the National Park Service.
Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Woods Hole, MA, 02543, United States
Andi Petculescu
University of Louisiana
Department of Physics
Lafayette, Louisiana, USA
Popular version of 3aPAa6 – Calculating the Acoustic and Internal Gravity Wave Dispersion Relations in Venus’s Supercritical Lower Atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027303
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Venus is the second planet from the Sun and the closest to Earth in size and mass. Satellite images show large regions of tectonic deformation and volcanic material, indicating that the planet is seismically and volcanically active. Ideally, to study its subsurface and its seismic and volcanic activity, we would deploy seismometers on the surface to measure the ground motions following venusquakes or volcanic eruptions; this would allow us to understand the planet’s past and current geological processes and evolution. However, the extreme conditions at the surface of Venus prevent us from doing that. With temperatures exceeding 400°C (750°F) and pressures of more than 90 bars (90 times that at Earth’s surface), instruments don’t last long.
One alternative to overcome this challenge is to study Venus’s subsurface and seismic activity using balloon-borne acoustic sensors floating in the atmosphere to detect venusquakes from the air. But before doing that, we first need to assess its feasibility. This means we must better understand how seismic energy is transferred to acoustic energy in Venus’s atmosphere and how the acoustic waves propagate through it. In our research, we address the following questions: 1) How efficiently is seismic motion converted into atmospheric acoustic waves at Venus’s surface? 2) How do acoustic waves propagate in Venus’s atmosphere? 3) What is the frequency range of acoustic waves in Venus’s atmosphere?
Venus’s extreme pressure and temperature correspond to supercritical fluid conditions in the atmosphere’s lowest few kilometers. Supercritical fluids combine the properties of gases and liquids and exhibit nonintuitive behavior, such as high density and high compressibility. To describe the behavior of such fluids, we need an equation of state (EoS) that captures these phenomena. Different EoSs are appropriate for different fluid conditions, but only a limited selection adequately describes supercritical fluids. One of these equations is the Peng-Robinson (PR) EoS. Incorporating the PR EoS into the fluid dynamics equations allows us to study acoustic propagation in Venus’s atmosphere.
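To give a sense of what the PR EoS provides, the sketch below solves it for the density of pure carbon dioxide, the dominant constituent of Venus’s atmosphere, at approximate surface conditions. The critical constants are standard tabulated values; treating the atmosphere as pure CO2 and the particular choice of temperature and pressure are simplifying assumptions.

```python
# Peng-Robinson equation of state for pure CO2 at approximate Venus surface
# conditions (simplifying assumptions: pure CO2, T = 737 K, P = 9.2 MPa).
import numpy as np

R = 8.314462                               # gas constant, J/(mol K)
Tc, Pc, omega = 304.13, 7.3773e6, 0.225    # CO2 critical constants and acentric factor (approximate)
M = 0.04401                                # molar mass of CO2, kg/mol

T, P = 737.0, 9.2e6                        # assumed surface temperature (K) and pressure (Pa)

a = 0.45724 * R**2 * Tc**2 / Pc
b = 0.07780 * R * Tc / Pc
kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
alpha = (1 + kappa * (1 - np.sqrt(T / Tc)))**2

A = a * alpha * P / (R * T)**2
B = b * P / (R * T)

# Compressibility factor Z from the PR cubic; keep the largest real (gas-like) root
coeffs = [1.0, B - 1.0, A - 3 * B**2 - 2 * B, B**3 + B**2 - A * B]
Z = max(z.real for z in np.roots(coeffs) if abs(z.imag) < 1e-8)

rho = P * M / (Z * R * T)                  # density, kg/m^3
print(f"Z = {Z:.3f}, density ~ {rho:.0f} kg/m^3")
# With these inputs the density comes out to roughly 65 kg/m^3 -
# dozens of times denser than air at Earth's surface.
```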
Our results show that the energy transported across Venus’s surface from seismic sources is two orders of magnitude larger than on Earth, pointing to more efficient seismic-to-acoustic transmission. This is mainly due to Venus’s much denser atmosphere (~68 kg/m3, compared to ~1 kg/m3 on Earth). Using numerical simulations, we show that different seismic waves couple into Venus’s atmosphere at different spatial positions; floating balloons will therefore measure different seismic-to-acoustic signals depending on where they are. In addition, we show that Venus’s atmosphere supports lower acoustic frequencies than Earth’s. These results will be useful for 1) specifying the capabilities of the acoustic instruments carried on the balloons, and 2) interpreting future observations.
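As a rough, back-of-the-envelope check on that difference: for the same ground surface motion, the acoustic power radiated into an atmosphere scales with its specific acoustic impedance (density times sound speed). The comparison below uses approximate, assumed near-surface values and is not the full transmission calculation carried out in the study.

```python
# Back-of-the-envelope comparison: for the same ground surface motion, the
# acoustic power radiated into an atmosphere scales with its specific
# impedance rho*c. All values below are approximate, assumed figures.
rho_venus, c_venus = 68.0, 410.0   # kg/m^3, m/s (near Venus's surface)
rho_earth, c_earth = 1.2, 343.0    # kg/m^3, m/s (near Earth's surface)

ratio = (rho_venus * c_venus) / (rho_earth * c_earth)
print(f"Venus-to-Earth impedance ratio ~ {ratio:.0f}x")
# Roughly a factor of 70, broadly consistent with the large difference in
# transmitted energy described above.
```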