Fifteen Years of Research on Active Noise Control Systems for Partially Open Windows

Delf Sachau – sachau@hsu-hh.de

Professur für Mechatronik, Helmut-Schmidt-Universität, Hamburg, Hamburg, 22043, Germany

Dr.-Ing. Tim Karl
Professur für Mechatronik
Helmut-Schmidt-Universität
Hamburg

Popular version of 3pAA10 – Fifteen Years of Research on Active Noise Control Systems for Partially Open Windows: A Summary of Key Findings
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me/appinfo.php?page=Session&project=ASAASJ25&id=3979397&server=eppro02.ativ.me

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Motivation
In many cities, people want to keep their windows open to allow fresh air into their homes. However, especially in busy urban areas, open windows also let in unwanted noise from traffic, trains, aircraft, and general city activity. Constant exposure to this noise is not just annoying, it can affect sleep, concentration, and even long-term health. To address this problem, researchers at the Helmut Schmidt University in Hamburg have spent the past fifteen years developing systems that can reduce noise coming through partially open windows while still allowing natural ventilation.

Passive Absorbers
The approach combines two methods: passive noise reduction and active noise control (ANC). Passive noise reduction involves using materials that naturally absorb or block sound, such as foam-like acoustic panels or special seals. These materials are very good at reducing high-frequency noise but are less effective for deeper, low-pitched sounds like engines or traffic rumble.

Active Noise Control
This is where active noise control comes in. ANC works in a way similar to noise-cancelling headphones. Small loudspeakers placed near the window play “anti-noise sound waves” that are shaped to cancel out incoming noise. When the incoming noise and the anti-noise meet, they interfere with each other and reduce the amount of sound that reaches inside the room. To make this happen, microphones are used to measure the sound, while computer algorithms constantly adjust the sound from the speakers to keep the cancellation effective.

Figure 1: Internoise 2020, J. Hanselka, D. Sachau, Converting an Active Noise Blocker for a Tilted Window from Feedforward Control into a Feedback System

Algorithm
The researchers also worked on improving the computer algorithms that run the ANC system. These algorithms need to react quickly to changing noise, remain stable, and avoid using too much power. To this end, different real-time controller platforms were evaluated, including DSP and FPGA technology.
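The adaptive filtering at the heart of such a controller can be illustrated with a short Python sketch of a single-channel FxLMS (filtered-x LMS) loop. This is a simplified toy model, not the researchers' implementation: the acoustic paths prim_path and sec_path, the filter length, and the step size mu are all assumed values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000
t = np.arange(4 * fs) / fs
# Reference signal: an 80 Hz tone (traffic-like rumble) plus a little broadband noise
x = np.sin(2 * np.pi * 80 * t) + 0.1 * rng.standard_normal(t.size)

# Assumed short FIR acoustic paths: primary (noise -> error mic) and
# secondary (anti-noise loudspeaker -> error mic)
prim_path = np.array([0.0, 0.5, 0.3, 0.1])
sec_path = np.array([0.0, 0.6, 0.2])
d = np.convolve(x, prim_path)[: t.size]   # disturbance at the error microphone

n_taps, mu = 16, 0.01
w = np.zeros(n_taps)                      # adaptive control filter weights
# "Filtered-x": reference filtered through the secondary-path model
# (here the true path is used; a real system would use an estimate of it)
xf = np.convolve(x, sec_path)[: t.size]
y = np.zeros(t.size)                      # loudspeaker (anti-noise) signal
e = np.zeros(t.size)                      # error-microphone signal
for n in range(n_taps, t.size):
    xb = x[n - n_taps + 1 : n + 1][::-1]
    y[n] = w @ xb
    # The error mic hears the disturbance plus anti-noise through the secondary path
    e[n] = d[n] + sec_path @ y[n - sec_path.size + 1 : n + 1][::-1]
    w -= mu * e[n] * xf[n - n_taps + 1 : n + 1][::-1]   # FxLMS weight update

before = np.mean(d[-fs:] ** 2)            # noise power without control (last second)
after = np.mean(e[-fs:] ** 2)             # residual noise power with control
print(f"noise reduction: {10 * np.log10(before / after):.1f} dB")
```

Running the loop, the residual error power drops well below the uncontrolled disturbance power, which is the same behavior a real-time DSP or FPGA controller must sustain continuously while the noise changes.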

Figure 2: ISMA 2014, D. Sachau, S. Jukkert, Real-time implementation of the frequency-domain FxLMS algorithm without block delay for adaptive noise blocker

Simulation
However, using ANC at an open window is much more complicated than inside headphones. The sound field near an open window is irregular and constantly changing because of airflow, reflections, and outdoor conditions. The research team therefore studied how sound moves through small openings of different shapes and sizes. One important discovery is that the depth of the opening relative to the wavelength of the sound plays an enormous role in how much noise gets through. This knowledge helps guide how the ANC system can be designed and placed.

Figure 3: Internoise 2020, M. Sandner, D. Sachau, Influence of parameters of small gaps regarding sound transmission and ANC-performance-a numerical simulation

Position Optimization
Another major research effort focused on the best positions for microphones and speakers. Their placement determines how well the noise can be cancelled. The researchers found that placing the speaker near the center of the opening often provides the most even noise reduction throughout the room. Meanwhile, microphone placement is very important for stability, because the microphone input is what guides the control system in real time.

Figure 4: DAGA 2025, T. Karl, D. Sachau, Numerical position optimization approach for sensor and actuator placement in an active noise cancelling system

Conclusion
Overall, the research shows that a combination of passive materials and active noise control is the best approach. Passive elements reduce parts of the noise that are hard to cancel electronically, while ANC handles the deep, low-frequency noise that humans find especially disturbing. Together, these methods make it possible to keep windows open for fresh air, without letting in the city.

Breaking the Skull Barrier: “Listening” to Ultrasound Therapy Inside the Brain

Pradosh Pritam Dash – ppdash@gatech.edu

Instagram: @pra.dosh.dash
George W. Woodruff School of Mechanical Engineering
Georgia Institute of Technology
Atlanta, GA, 30318
United States

Costas D. Arvanitis
Georgia Institute of Technology and Emory University

Popular version of 3pBAa7 – Breaking the Skull Barrier: Parametric Array Enable Non-Invasive Monitoring of Transcranial Focused Ultrasound
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=Session&project=ASAASJ25&id=3982986&nohistory&nohistory=true

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

The Challenge of Treating the Brain
Focused Ultrasound (FUS) is a revolutionary, incision-free technology that promises to treat brain disorders, such as tumors and Parkinson’s disease. It works by concentrating high-frequency sound waves to a precise point deep within the brain, much like a magnifying glass focuses sunlight. However, this promising therapy faces a major obstacle: the human skull. The skull is a thick, bony barrier that scrambles, reflects, and weakens these high-frequency waves. This makes it incredibly difficult for doctors to monitor the treatment in real-time and confirm that the energy is actually reaching the intended target. This uncertainty limits the safety and effectiveness of FUS brain therapies.

Figure 1: a – Conceptual illustration of the technique. A transmitter (bottom) sends high-frequency (1 MHz) therapeutic ultrasound waves through the skull. Where these waves interact at the focus, they generate a 50 kHz low-frequency “parametric array” signal that easily passes through the skull to a receiver (top). The HASPA framework uses this detected signal to map the therapy. b – The reconstructed (first-order) 1 MHz high-frequency and 100 kHz low-frequency parametric field using the HASPA framework, with 3, 6, and 9 dB contours.



An Acoustic “Trick” to Overcome the Barrier
Researchers at Georgia Tech and Emory University have developed a new computational framework called HASPA (Heterogeneous Angular Spectrum Parametric Array) that exploits a nonlinear acoustic “trick” known as the “parametric array effect.” When the two high-frequency ultrasound beams used for therapy (around 1 MHz) meet at the target inside the brain, they interact nonlinearly and mix. This interaction generates a brand-new sound wave at a much lower difference frequency (around 50-100 kHz).

Think of it this way: High-frequency sounds, like a faint whistle, are easily blocked by a thick wall (the skull). However, low-frequency sounds, like the thumping bass from a neighbor’s stereo, travel through walls easily. In this new approach, the therapeutic “whistles” create a localized “bass” beat exactly where the treatment is happening. This low-frequency signal acts as a messenger, traveling cleanly back out through the skull to be detected by external sensors.
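This difference-frequency mixing can be demonstrated numerically: passing two closely spaced tones through a weak quadratic nonlinearity (a crude stand-in for the tissue's nonlinear response, not the HASPA physics model) creates a new spectral component at exactly f2 - f1. All values below are illustrative assumptions.

```python
import numpy as np

fs = 20_000_000                       # 20 MHz sampling rate, ample for ~1 MHz tones
n = 4000
t = np.arange(n) / fs                 # 0.2 ms of signal
f1, f2 = 1_000_000, 1_050_000         # two therapy-band tones 50 kHz apart (illustrative)
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A weak quadratic term stands in for the medium's nonlinear response
p_nl = p + 0.05 * p ** 2

spec = np.abs(np.fft.rfft(p_nl)) / n  # amplitude spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)
k_diff = np.argmin(np.abs(freqs - (f2 - f1)))   # 50 kHz bin
k_f1 = np.argmin(np.abs(freqs - f1))            # 1 MHz bin

# The product term 2*sin(2*pi*f1*t)*sin(2*pi*f2*t) contains cos(2*pi*(f2 - f1)*t):
# a brand-new low-frequency component that was absent from the input
print(f"level at {freqs[k_diff] / 1e3:.0f} kHz: {spec[k_diff]:.3f}")
print(f"level at {freqs[k_f1] / 1e6:.1f} MHz: {spec[k_f1]:.3f}")
```

The spectrum shows energy at 50 kHz that neither input tone contained, which is the "bass beat" that escapes the skull in the analogy above.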

Decoding the Message: The HASPA Framework
The challenge is translating this low-frequency message back into a high-resolution picture of the high-frequency treatment zone inside the brain.

To achieve this, the team developed a novel computational framework called HASPA (Heterogeneous Angular Spectrum Parametric Array) and an associated inverse algorithm (iHASPA).

iHASPA analyzes the low-frequency signal measured outside the skull and mathematically reconstructs a map of the original therapy beams deep inside the brain. Crucially, the framework accounts for the complex ways sound travels through the specific properties of the patient’s skull and brain tissue, correcting for distortions.

Impact and Future
By leveraging this nonlinear acoustic effect, the HASPA framework allows us to “see” through the skull using sound. This new technique enables real-time, non-invasive monitoring of ultrasound beams inside the brain, paving the way for safer, more precise, and more effective focused ultrasound therapies for debilitating neurological disorders.

Hearing where it counts: Toward better directional hearing during earplug and earmuff use

Andrew Brown – andrewdb@uw.edu

University of Washington, Department of Speech and Hearing Sciences, Seattle, WA, 98105, United States

Additional authors: DJ Audet Jr, Aoi A. Hunsaker, Mallory Butler, Carol Sammeth, Alexandria Podolski, Theodore F. Argo, David A. Anderson, Nathaniel T. Greene

Popular version of 2pNSa4 – Two-dimensional sound localization during hearing protector use in a large sample of human listeners
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3982069

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

In noisy professions – from manufacturing to the military – hearing protection and perception are often at odds. The sense of hearing normally enables listeners to detect and locate sounds arriving from any direction – an especially valuable ability in settings with low visibility (darkness, fog, smoke), visual clutter, or in which important sound sources may be outside the field of vision altogether, whether off in the distance or “right behind you!” However, when noisy settings demand the use of hearing protectors (usually earplugs or earmuffs), the ability to determine sound direction is reduced. Hearing protectors lower the level of transmitted sound – their designed purpose – but they also change the quality of the transmitted sound, disrupting the subtle bits of acoustic information the brain relies on to determine sound direction. This means listeners may confuse forward and rearward sounds, or struggle to locate sounds overhead. The trade-off between protection and perception can contribute to disuse of hearing protectors in critical settings where situational awareness and personal safety may be acutely valued above long-term hearing health.

Methods to evaluate hearing protector impacts have varied widely across previous studies; hearing protectors come in many shapes and sizes, and directional hearing ability varies across people even before hearing protectors enter the picture. Here, in an effort to identify key factors that mediate hearing protector impacts, we measured directional hearing during hearing protector use in a large sample of listeners across two different sites (130 subjects enrolled study-wide). Listeners were asked to orient to sounds that varied in horizontal and vertical location while wearing a variety of commercially available hearing protector styles, with orientation accuracy measured using wireless sensors.

All hearing protectors reduced directional hearing ability, but variation across devices pointed to key variables that may impact performance – and may be captured using relatively simple acoustic measurements. This work is part of an effort to develop metrics beyond the industry-standard “Noise Reduction Rating” that consumers and hearing conservation professionals alike might use to select job-appropriate hearing protectors, and that hearing protection manufacturers might leverage to design and build better devices.

This work was funded by the US Department of Defense Joint Warfighter Medical Research Program.

Acoustic Suction Tweezers: A new compact acoustic gadget for small object manipulation

Shoya Yoneda – yoneda-shoya@ed.tmu.ac.jp

Department of Electrical Engineering and Computer Science
Tokyo Metropolitan University
Hino-shi, Tokyo, 191-0065
Japan

Kan Okubo – kanne@tmu.ac.jp

Popular version of 4pPA6 – Miniaturized Acoustic Suction Tweezers: Lift Control and Cap Design for Mobile Applications
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me/appinfo.php?page=IntHtml&project=ASAASJ25&id=3983403&server=eppro02.ativ.me

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Can you believe that ultrasound-induced forces can actually pull objects?
It may sound surprising, but this phenomenon is real. In this paper, we introduce the fascinating world of sound-based manipulation.

Our research group has long been developing acoustic tweezers capable of picking up tiny objects using ultrasonic forces. (See https://youtu.be/PoZsKjst82g)

In our latest work, we take this idea a step further. By using a remarkably simple structure and cleverly harnessing the lifting force generated by sound, we have created a new acoustic gadget: the acoustic suction tweezer.

Yes, acoustic suction tweezers, sometimes called an “acoustic pipette,” pull objects toward them using sound energy, with no vacuum effect involved.

Proposed Device: Acoustic Suction Tweezer

Video 1. Introduction of the Acoustic Suction Tweezer

Sound exerts a force on objects known as the acoustic radiation force, which typically pushes objects away. However, by placing a small aperture unit in front of the transducer, we can shape a unique sound field that transforms this force into attraction and lift, almost like a miniature vacuum cleaner made of sound. To harness this effect, we developed an acoustic focusing cap through extensive trial and error, testing various designs manufactured with a 3D printer to evaluate their performance.

Figure 1. Various acoustic focusing caps manufactured with a 3D printer

The figure below shows the simulated sound pressure levels. Relatively high-pressure regions are concentrated near the tip of the cap, which correlates with the generation of attractive acoustic radiation forces in this area.

Figure 2. An Example of Sound Pressure Levels Inside the Cap

How does it compare to other devices?
Our previously proposed acoustic tweezers require large transducer arrays and complex phase control (See https://www.eurekalert.org/news-releases/923462). In contrast, the acoustic suction tweezers overcome these limitations through careful design considerations. Remarkably, they lift objects even larger than the wavelength of sound, such as 15 mm polystyrene spheres.

Practicality
The Acoustic Suction Tweezer excels in practicality; it can be implemented quickly, at low cost, using just a 3D printer and a single ultrasonic transducer.

We confirmed that the device can handle lightweight industrial items such as coated wires and even delicate objects like feathers, materials that conventional vacuum tweezers struggle to grasp.

We expect this device to have strong potential for applications in diverse fields, including medicine, biochemistry, and engineering. We also hope that this system will inspire further innovation and the creation of many other useful acoustic-based tools.

Sound(e)scape: Can a Sonic Break Improve Cognitive Performance?

Alaa Algargoosh – algargoosh@vt.edu

Virginia Polytechnic Institute and State University (Virginia Tech), Perry St, Blacksburg, VA, 24061, United States

Megan Wysocki
Virginia Polytechnic Institute and State University (Virginia Tech)

Amneh Hamida
RWTH Aachen University

Popular version of 1pNSa4 – Cognitive Restoration in Virtual Interactions with Indoor Acoustic Environments
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3977035

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

People often associate restorative experiences with nature: the sound of birds, wind, or flowing water. But what if indoor spaces could offer their own kind of mental escape, not through what we see, but through how we interact with sound?

This idea began with a simple observation. When you walk into a space and notice how your footsteps and voice are reflected back to you, the echoes create a subtle sense of awe. According to Attention Restoration Theory, experiences that evoke fascination and effortless engagement can help replenish mental resources. We wanted to explore whether these moments of acoustic interaction between a person and a space could invite gentle attention and, in turn, support cognitive restoration. In Attention Restoration Theory, this is referred to as soft fascination, a type of stimulus that is engaging but not overwhelming.

Exploring Echoes as a Path to Mental Restoration:
During a live demonstration at the MIT Museum, we used auralization, a technology that allows you to hear your voice as if you were in a different place by applying that place’s sound signature, or impulse response. A volunteer hummed into the acoustic signature of Hagia Sophia. Later, the entire audience hummed together and reflected on their experiences. The conversation pointed to the potential of such acoustic interaction to support a meditative state by influencing the sense of space, time, and self.
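At its core, auralization is a convolution: a "dry" recording is convolved with the target space's impulse response. The Python sketch below illustrates the idea with a synthetic decaying-noise impulse response standing in for a measured one (the tone, sample rate, and reverberation time are all assumed values; a real auralization would use a hall's measured impulse response instead).

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000

# "Dry" source: a short synthetic hum standing in for an anechoic voice recording
t = np.arange(fs // 2) / fs                       # 0.5 s of signal
dry = np.sin(2 * np.pi * 220 * t) * np.exp(-4 * t)

# Synthetic impulse response: exponentially decaying noise as a crude stand-in
# for a measured room impulse response
rt60 = 1.2                                        # assumed reverberation time, seconds
ir = rng.standard_normal(int(rt60 * fs))
ir *= 10 ** (-3 * np.arange(ir.size) / ir.size)   # -60 dB of decay over rt60
ir /= np.abs(ir).max()

# Auralization step: convolve the dry signal with the space's impulse response
wet = np.convolve(dry, ir)
wet /= np.abs(wet).max()                          # normalize for playback

print(f"dry: {dry.size} samples, auralized: {wet.size} samples")
```

The output is longer than the input because the room's decay tail is appended to the sound, which is exactly the lingering reverberation a listener perceives.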

This inspired a controlled experiment to study the restorative potential of indoor acoustic environments. We asked people to experience different sound environments (Figure 1) and measured their cognitive activity before and after each interaction. Early results suggest that interactive acoustics may support attention restoration, depending on the acoustic characteristics, opening a new way of thinking about how sound affects us indoors.

Figure 1: Virtual interaction with an acoustic environment during the experiment, where a person hears their own voice transformed through the acoustic signature of another space.

Why does this matter?
We spend most of our time indoors, yet discussions of restorative environments often focus on natural settings. This is especially relevant for workplaces and schools, where mental fatigue is common. It may also hold meaningful promise for neurodivergent individuals, including those with ADHD, who often benefit from environments that support attention without overstimulating it.
We imagine applications in immersive restorative spaces where people can interact with sound to reset and return to their activities with greater clarity. We also envision subtle integration into transitional spaces such as staircases, corridors, and building entrances that provide gentle cognitive relief as people move throughout their day.

Sound(e)scape reframes acoustics not as background, but as a tool for well-being. By understanding how interactive sound shapes attention and cognition, we can design buildings that do not simply avoid harmful noise. They can actively help the mind take a restorative break.

Figure 2: Visualization of interacting with different acoustic environments. Left: A person vocalizing in an office environment (MIT Media Lab). Middle: “Hagia Sophia – Muhammad, Allah, Abu Bakr” by Rabe!, licensed under CC BY-SA 3.0 (https://commons.wikimedia.org/wiki/File:Hagia_Sophia_-_Muhammad,_Allah,_Abu_Bakr.jpg) Cropped and one person added by Alaa Algargoosh. Right: A person vocalizing in Boston Symphony Hall.

Sound recordings:
1. Vocalizing in an office environment (MIT Media Lab).
2. Virtual vocalization in Hagia Sophia.
3. Virtual vocalization in Boston Symphony Hall.
The virtual vocalizations were generated using impulse responses available in the ODEON software library.

Acoustics of Korean Traditional Architecture: A Case Study of Magoksa Temple

Sungjoon Kim – sungjoon.kim@kaist.ac.kr

Instagram: @jooon.kim
Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, N25, Yuseong-gu, Daejeon, 34141, South Korea

Popular version of 1aAA2 – Acoustics of Korean Traditional Architecture: A Case Study of Magoksa Temple
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3979424

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Western churches typically evoke the impression of long, reverberant echoes. This acoustic quality is largely influenced by their domed ceilings and stone construction, which amplify and sustain sound. A single note from an organ or choir can travel far and linger in the air, creating a bright and grand sound field.

In contrast, Asian temples often have acoustic characteristics that differ significantly from those of Western churches. In particular, traditional Korean temples have a soft and warm sound environment. Their structures are primarily composed of wood, soil, and paper, reflecting Korea’s architectural philosophy of harmony with nature and the surrounding landscape. Instead of a strong, ringing echo, the listener experiences a gentle and intimate atmosphere.

Our study explores the acoustic characteristics of Magoksa Temple in South Korea, a Buddhist temple complex whose main halls date back to the 17th century. We measured the reverberation and other acoustic properties of three main temple halls and analyzed how sound behaves in these wooden spaces. The goal of this study is to understand these unique sound behaviors and to consider how they can be recreated when digitally restoring historical sites in virtual reality content and other media.

Figure 1: Main worship hall (Daegwangbojeon) of Magoksa Temple and surrounding courtyard.

To carry out the measurements, we played test signals through a loudspeaker and recorded the responses using microphones, including a three-dimensional (3D) microphone array. These room impulse responses capture the “acoustic fingerprint” of each hall: how long sound lasts, which frequency bands are emphasized or reduced, and how sound energy arrives from different directions around a listener.

Figure 2: Acoustic measurement setup inside a temple hall with a loudspeaker and 3D microphone array.

We found that all three temple halls share two distinctive features:

  1. Strong low-frequency resonance – Deep sounds, such as drums or low chanting, tend to linger longer than higher-pitched sounds. One important reason is structural: the floors are hollow beneath the wooden planks, and this cavity reinforces low-frequency energy, similar to the body of a musical instrument.
  2. High-frequency absorption – Soft materials such as paper doors, soil walls, and exposed wood absorb much of the high-frequency content. This reduces sharp reflections and makes the space sound calm and close, rather than bright or very echoey like a stone cathedral.

Figure 3: Frequency responses of the three main halls at Magoksa Temple.

Using the 3D microphone array, we also examined spatial characteristics, such as which parts of the structure (floor, ceiling, or side walls) create the most prominent reflections, and how sound surrounds a seated listener. These results help us understand more deeply how traditional Korean temples use their wooden structures and natural materials to create such distinctive acoustics.

Understanding these sound patterns helps us preserve more than just the visual beauty of cultural heritage—it allows us to capture the aural identity of a place. By integrating these findings into digital reconstructions and virtual reality experiences, we can make presentations of traditional Korean architecture feel more realistic and immersive, allowing future generations not only to see history but also to hear it.