Rolf Müller – rolf.mueller@vt.edu
X (twitter): @UBDVTLab
Instagram: @ubdvtcenter
Department of Mechanical Engineering, Virginia Tech, Blacksburg, Virginia, 24061, United States
Popular version of 4aAB7 – Of bats and robots
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027373
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Given the ongoing revolution in AI, it may appear that all humanity can do now is wait for AI-powered robots to take over the world. However, while stringing together eloquently worded sentences is certainly impressive, AI is still far from dealing with many of the complexities of the real world. Besides serving the sinister goal of world domination, robots with the intelligence to accomplish demanding missions in complex environments could transform humanity’s ability to deal with key challenges to its survival, e.g., producing food and regrowable materials and maintaining healthy ecosystems.
To accomplish the goal of having a robot operate autonomously in complex real-world environments, a variety of methods have been developed – typically with mixed results at best. At the basis of these methods are usually two related concepts: the creation of a model of the geometry of an environment and the use of deterministic templates to identify objects. However, both approaches have proven limited in their applicability and reliability, and they often come at a prohibitively high computational cost.
Bats navigating dense vegetation – such as in the rainforests of Southeast Asia, where our fieldwork is being carried out – may provide a promising alternative to the current approaches: the animals sense their environments through a small number of brief echoes to ultrasonic pulses. The comparatively large wavelengths of these pulses (millimeters to centimeters), combined with the fact that the ears of the bats are not far above these wavelengths on the size scale, condemn bat biosonar to poor angular resolution. This prevents the animals from resolving densely packed scatterers such as the leaves in foliage. Hence, bats navigating under such conditions have to deal with inputs that can be classified as “clutter”, i.e., signals that consist of contributions from many unresolvable scatterers and must be treated as random due to lack of knowledge. The nature of clutter echoes makes it unlikely that bats dealing with complex environments rely heavily on three-dimensional models of their surroundings and deterministic templates.
Hence, bats must have evolved sensing paradigms that ensure the clutter echoes contain the relevant sensory information and that this information can be extracted. Coupling between sensing and actuation could very well play a critical role in this, so robotics might be of pivotal importance in replicating the skills of bats in sensing and navigating their environments. Similarly, the deep-learning revolution could bring a previously unavailable ability to extract complex patterns from data to bear on the problem of extracting insight from clutter echoes. Taken together, insights from these approaches could lead to novel acoustics-based paradigms for obtaining relevant sensory information on complex environments in a direct and highly parsimonious manner. These approaches could then enable autonomous robots that learn to navigate new environments quickly and efficiently, transforming the use of autonomous systems in outdoor tasks.
Biomimetic robots designed to reproduce the (a) biosonar sensing and (b) flapping-flight capabilities of bats. Design renderings by Zhengsheng Lu (a) and Adam Carmody (b).
As a pilot demonstration of this approach, we present a twin pair of bioinspired robots, one to mimic the biosonar sensing abilities of bats and the other to mimic the flapping flight of the animals. The biosonar robot has been used successfully to identify locations and find passageways in complex, natural environments. To accomplish this, the biomimetic sonar has been integrated with deep-learning analysis of clutter echoes. The flapping-flight line of biomimetic robots has just started to reproduce some of the many degrees of freedom in the wing kinematics of bats. Ultimately, the two robots are to be integrated into a single system to investigate the coupling of biosonar sensing and flight.
Northwestern University, Communication Sciences & Disorders, Evanston, IL, 60208, United States
Jeff Crukley – University of Toronto; McMaster University
Emily Lundberg – University of Colorado, Boulder
James M. Kates – University of Colorado, Boulder
Kathryn Arehart – University of Colorado, Boulder
Pamela Souza – Northwestern University
Popular version of 3aPP1 – Modeling the relationship between listener factors and signal modification: A pooled analysis spanning a decade
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027317
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Imagine yourself in a busy restaurant, trying to focus on a conversation. Often, even with hearing aids, the background noise can make it challenging to understand every word. While some listeners manage to follow the conversations rather easily, others find it hard to follow along, despite having their hearing aids adjusted.
Studies show that cognitive abilities (and not just how well we hear) can affect how well we understand speech in noisy places. Individuals with weaker cognitive abilities struggle more in these situations. Unfortunately, current clinical approaches to hearing aid treatment have not yet been tailored to these individuals. The standard approach to setting up hearing aids is to make speech sounds louder, or more audible. However, a downside is that hearing aid settings that make speech more audible or attempt to remove background noise can unintentionally modify other important cues, such as fluctuations in the intensity of the sound, that are necessary for understanding speech. Consequently, some listeners who depend on these cues may be at a disadvantage. Our investigations have focused on understanding why listeners with hearing aids experience these noisy environments differently and on developing an evidence-based method for adjusting hearing aids to each person’s individual abilities.
To address this, we pooled data from 73 individuals across four different published studies from our group over the last decade. In these studies, listeners with hearing loss were asked to repeat sentences that were mixed with background chatter (like at a restaurant or a social gathering). The signals were processed through hearing aids that were adjusted in various ways, changing how they handle loudness and background noise. We measured how these adjustments applied to the noisy speech affected the ability of the listeners to understand the sentences. Each of these studies also used a measurement to capture how the hearing aids and background noise together alter the speech sounds (signal fidelity) heard by the listener.
Figure 1. Effect of individual cognitive abilities (working memory) on word recognition as signal fidelity changes.
Our findings reveal that listeners generally understand speech better when the background noise is less intrusive, and the hearing aids do not alter the speech cues too much. But there’s more to it: how well a person’s brain collects and manipulates speech information (their working memory), their age, and the severity of their hearing loss all play a role in how well they understand speech in noisy situations. Specifically, those with lower working memory tend to have more difficulty understanding speech when it is obscured by noise or altered by the hearing aid (Figure 1). So, improving the listening environment by reducing the background noise and/or choosing milder settings on the hearing aids could benefit these individuals.
In summary, our study indicates that a tailored approach that considers each person’s cognitive abilities could lead to better communication, especially in noisier situations. Clinically, the measurement of signal fidelity may be a useful tool to help make these decisions. This could mean the difference between straining to hear and enjoying a good conversation over dinner with family.
Department of Performing Arts, American University, Washington, DC, 20016, United States
Braxton Boren, Department of Performing Arts, American University
X (twitter): @bbboren
Popular version of 2pAAa12 – Acoustics of two Hindu temples in southern India
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027050
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
What is the history behind the sonic experiences of millions of devotees of one of the oldest religions in the world?
Hindu temple worship dates back over 1,500 years; Vedic scriptures from the 5th century C.E. describe the rules for temple construction. Sound is a key component of Hindu worship and, consequently, of its temples. Acoustically important aspects include the striking of bells and gongs, the blowing of conch shells, and the chanting of the Vedas. The bells, gongs, and conch shells each have specific fundamental frequencies and unique sonic characteristics, while the chanting is specifically stylized to include phonetic characteristics such as pitch, duration, emphasis, and uniformity. This prominence of the frequency-domain soundscape makes Hindu worship unique. In this study, we analyzed the acoustic characteristics of two UNESCO heritage temples in Southern India.
Figure 1: Virupaksha temple, Pattadakal
The Virupaksha temple in Pattadakal, built around 745 C.E., is part of one of the largest and most ancient temple complexes in India.1 We performed a thorough analysis of the space, taking sine sweep measurements from 36 different source-receiver positions. The mid-frequency reverberation time (the time it takes for sound to decay by 60 dB) was found to be 2.1 s, and the clarity index for music, C80, was -0.9 dB. The clarity index tells us how balanced the space is and how well complex passages of music can be heard. A reverberation time of 2.1 s is similar to that of a modern concert hall, and a C80 of -0.9 dB means the space is also very good for complex music. In terms of the music performed, it would be a combination of vocal and instrumental South Indian music, with melodic frameworks akin to the melodic modes of Western classical music, set to different time signatures and played at tempi ranging from very slow (40-50 beats per minute) to very fast (200+ beats per minute).
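For readers curious how these two numbers come out of a sine sweep measurement: both are computed from the room’s impulse response. The sketch below uses a synthetic exponentially decaying impulse response in place of measured data (all parameter values here are illustrative, not the temple measurements) and shows the standard recipe: Schroeder backward integration for the decay curve, a line fit over part of the decay for the reverberation time, and an 80 ms early/late energy ratio for C80.

```python
import numpy as np

def schroeder_curve(ir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(energy / energy[0])

def rt60_from_ir(ir, fs):
    """Estimate the reverberation time by fitting the -5 to -25 dB span of the
    decay curve and extrapolating to 60 dB of decay (the T20 method)."""
    edc = schroeder_curve(ir)
    t = np.arange(len(ir)) / fs
    i5 = np.argmax(edc <= -5)    # first sample 5 dB down
    i25 = np.argmax(edc <= -25)  # first sample 25 dB down
    slope, _ = np.polyfit(t[i5:i25], edc[i5:i25], 1)  # dB per second
    return -60.0 / slope

def c80_from_ir(ir, fs):
    """Clarity index C80: energy in the first 80 ms vs. the rest, in dB."""
    n80 = int(0.080 * fs)
    early = np.sum(ir[:n80] ** 2)
    late = np.sum(ir[n80:] ** 2)
    return 10 * np.log10(early / late)

# Synthetic impulse response: noise with an envelope that decays 60 dB in 2.1 s
fs = 8000
t = np.arange(0, 3.0, 1 / fs)
rng = np.random.default_rng(0)
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / 2.1)

print(round(rt60_from_ir(ir, fs), 2))  # close to 2.1
print(round(c80_from_ir(ir, fs), 1))
```

In practice the impulse response is obtained by deconvolving the recorded sine sweep, and the metrics are evaluated per octave band rather than broadband as in this toy example.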
Figure 2: The sine sweep measurement process in progress at the Virupaksha temple, Pattadakal
The second site was the 15th-century Vijaya Vittala temple in Hampi, another major tourist attraction. Here the poet, composer, and father of South Indian classical music, Purandara Dasa, spent many years creating compositions in praise of the deity. He is known to have created thousands of compositions in many complex melodic modes.
Measurements at this site spanned 29 source-receiver positions with the mid-frequency reverberation time being 2.5s and the clarity index for music, C80 being -1.7dB. These values also fall in the ideal range for complex music to be interpreted clearly. Based on these findings, we conclude that the Vijaya Vittala temple provided the optimum acoustical conditions for the performance and appreciation of Purandara Dasa’s compositions and South Indian classical music more broadly.
Other standard room acoustic metrics have been calculated and analyzed from the temples’ sound decay curves. We will use these data to build wave-based computer simulations, further analyze the resonant modes of the temples, and study the sonic characteristics of the bells, gongs, and conch shells to understand the relationship between the worship ceremony and the architecture of the temples. We also plan to auralize compositions of Purandara Dasa to recreate his experience in the Vijaya Vittala temple 500 years ago.
1 Alongside the ritualistic sounds discussed earlier, music performance holds a vital place in Hindu worship. The Virupaksha temple, in particular, has a rich history of fulfilling this role, as evidenced by inscriptions detailing grants given to temple musicians by the local queen.
Arup, Suite 900, Toronto, Ontario, M4W 3M5, Canada
Vincent Jurdic
Chris Pollock
Willem Boning
Popular version of 1aAA13 – The cost of transparency: balancing acoustic, financial and sustainability considerations for glazed office partitions
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026646
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
I’m an acoustician and here’s why we need to use less glass inside office buildings.
Glass partitions are good for natural light and visual connection but how does glass perform when it comes to blocking sound? What about the environmental cost? These questions came up when my team was addressing problems with acoustic privacy in a downtown Toronto office building. One of the issues was the glass partitions (sometimes called a “storefront” system) between private offices or meeting rooms and open office areas. Staff reported overhearing conversations outside these offices, an issue that ranges from just being distracting to undermining confidentiality for staff and clients.
Glass is ubiquitous in office buildings, inside and out. As a façade system, it’s been a major part of the modern city since at least the 1950s. Inside offices, it often gives us a sense of connection and inclusivity. But as an acoustician, I know that glass partitions are not effective at blocking sound compared to traditional stud walls or masonry walls. How good or bad depends on the glazing design – how thick the glass is, lamination, double panes and air gaps, and how the glass is sealed. When working on fixing the speech privacy problems in the Toronto office, we measured the sound isolation of the glazed partitions by playing random noise very loudly in each office and measuring the sound level difference between that room and the area outside. Our measurements supported the experience of the office staff: conversations are not just audible but comprehensible on the other side of the glass. The seals around the sliding doors often had gaps and sometimes there were joints without any seals – big enough to put your fingers through. Sound is made by tiny fluctuations in air pressure; even small gaps can be a problem.
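The level-difference measurement described above comes down to simple arithmetic on the recorded signals: the average sound level in the noisy room minus the average level outside it. A minimal sketch, with synthetic noise standing in for real recordings and arbitrary units in place of calibrated pressures:

```python
import numpy as np

def leq_db(samples):
    """Average level in dB (relative to an arbitrary reference) from samples."""
    return 10 * np.log10(np.mean(np.asarray(samples, dtype=float) ** 2))

def level_difference(source_room, receiver_side):
    """Sound level difference between the noisy room and the area outside."""
    return leq_db(source_room) - leq_db(receiver_side)

rng = np.random.default_rng(1)
loud = rng.standard_normal(48000)          # random noise inside the office
quiet = 0.05 * rng.standard_normal(48000)  # attenuated noise measured outside
d = level_difference(loud, quiet)
print(round(d))  # about 26 dB for a 1/20 amplitude ratio
```

Real measurements of this kind are made per frequency band and corrected for background noise and room absorption, but the core quantity is this level difference.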
Figure 1: Example of glass storefront with a sliding door and no seal (Arup image)
This acoustics problem led me to other questions about the cost of transparency in offices, especially the carbon cost. Glass is energy-intensive to produce. Per unit area, ¼” glass can require seven times the embodied carbon of one layer of 5/8” type X gypsum. When Arup compared several glazed partition systems that all had about the same acoustic performance, we found the glass was the greatest contributor to carbon emissions compared to all the other components (see Figure 2). Using these embodied carbon values, we estimated that the carbon cost of all the glazed partitions in this particular office was about 56,800 kgCO2eq, equivalent to driving one-way from New York to Seattle 51 times in an average gasoline-powered car.
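The road-trip equivalence is back-of-envelope arithmetic. The sketch below uses assumed round figures (roughly 2,850 miles from New York to Seattle and about 400 g CO2e per mile for an average gasoline car); the article’s figure of 51 trips implies slightly different inputs, but the result lands in the same range.

```python
# Back-of-envelope check of the road-trip equivalence.
total_kg = 56_800        # estimated embodied carbon of the glazed partitions
miles_per_trip = 2_850   # assumed New York to Seattle driving distance
kg_per_mile = 0.400      # assumed emissions of an average gasoline car

trips = total_kg / (miles_per_trip * kg_per_mile)
print(round(trips))  # roughly 50 one-way trips
```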
Figure 2: Embodied carbon for typical aluminum storefront with three glazing buildups with the same sound isolation rating (Arup research)
So how should these costs be balanced? First, acousticians should be involved early on in space planning and can encourage architects to use less glazing to achieve the design outcomes, including acceptable acoustic performance. Second, we could encourage designers to create a glass aesthetic that uses “less perfect” glass in some locations. Offices may not require the degree of transparency that has become the norm. Where visual privacy is important, glass made from recycled cullet could be specified, leaving the perfectly transparent glass manufactured from virgin silica sand for key locations where a strong visual connection matters. The right balance depends on the project, but asking questions about the multiple costs of transparency is a good place to start.
University of Texas at Austin, Applied Research Laboratories and Walker Department of Mechanical Engineering, Austin, Texas, 78766-9767, United States
Michael R. Haberman; Mark F. Hamilton (both at Applied Research Laboratories and Walker Department of Mechanical Engineering)
Popular version of 5pPA13 – Effects of increasing orbital number on the field transformation in focused vortex beams
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027778
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
When a chef tosses pizza dough, the spinning motion stretches the dough into a circular disk. The more rapidly the dough is spun, the wider the disk becomes.
Fig 1. Pizza dough gets stretched out into a circular disk when it is spun.
A similar phenomenon occurs when sound waves are subjected to spinning motion: the beam spreads out more rapidly with increased spinning. One can use the theory of diffraction—the study of how waves constructively and destructively interfere to form a field pattern that evolves with distance—to explain this unique sound field, known as a vortex beam.
In addition to exhibiting a helical field structure, vortex beams can be focused, the same way sunlight passing through a magnifying glass can be focused to a bright spot. When sound is simultaneously spun and focused, something unexpected happens. Rather than converging to a point, the combination of spinning and focusing can cause the sound field to create a region of zero acoustic pressure, analogous to a shadow in optics, between the source and focal point, the shape of which resembles a rugby ball.
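A toy calculation hints at why spinning empties out the middle of the beam. If the source is modeled as a ring of point sources carrying the helical phase exp(iℓφ), then on the beam axis every path length is equal, so the contributions add with only their helical phases, and for any nonzero orbital number ℓ those phases cancel exactly. (This ring model is an illustration of the symmetry argument, not the full diffraction calculation of the paper.)

```python
import numpy as np

# Ring of N point sources carrying the helical phase exp(i*l*phi).
# On the beam axis all path lengths are equal, so the on-axis pressure is
# proportional to the plain sum of the phase factors around the ring.
N = 360
phi = 2 * np.pi * np.arange(N) / N
for l in (0, 1, 4, 8):
    p_axis = abs(np.sum(np.exp(1j * l * phi))) / N
    print(l, round(p_axis, 9))  # 1.0 for l = 0, essentially 0 otherwise
```

The phase factors are the N-th roots of unity raised to the power ℓ, and their sum vanishes whenever ℓ is not a multiple of N, which is why any amount of spin darkens the beam axis.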
While the theory of diffraction predicts this effect, it does not provide insight into what creates the shadow region when the acoustic field is simultaneously spun and focused. To understand why this happens, one can resort to a simpler concept that approximates sound as a collection of rays. This simpler description, known as ray theory, is based on the assumption that waves do not interfere with one another, and that the sound field can be described by straight arrows emerging from a source, just like sun rays emerging from behind a cloud. According to this description, the pressure is proportional to the number of rays present in a given region in space.
Analysis of the paths of individual sound rays permits one to unravel how the overall shape and intensity of the beam are affected by spinning and focusing. One key finding is the formation of an annular channel, resembling a tunnel, within the beam’s structure. This channel is created by a multitude of individual sound rays that are converging due to focusing but are skewed away from the beam axis due to spinning.
By studying this channel, one can calculate the amplitude of the sound field according to ray theory, offering perspectives that the theory of diffraction does not readily reveal. Specifically, the annular channels reveal that the sound field is greatest on the surface of a spheroid, coinciding with the feature shaped like a rugby ball predicted by the theory of diffraction.
In the figure below from the work of Gokani et al., the annular channels and spheroidal shadow zone predicted by ray theory are overlaid as white lines on the upper half of the field predicted by the theory of diffraction, represented by colors corresponding to intensity increasing from blue to red. The amount by which the sound is spun is characterized by ℓ, the orbital number, which increases from left to right in the figure.
Fig 4. Annular channels (thin white lines) and spheroidal shadow zones (thick white lines) overlaid on the diffraction pattern (colors). From Gokani et al., J. Acoust. Soc. Am. 155, 2707-2723 (2024).
As can be seen from Fig. 4, ray theory distills the intricate dynamics of sound that is spun and focused to a tractable geometry problem. Insights gained from this theory not only expand one’s fundamental knowledge of sound and waves but also have practical applications related to particle manipulation, biomedical ultrasonics, and acoustic communications.