–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
In a noisy world, capturing clear audio from specific directions can be a game-changer. Imagine a system that can zero in on a target sound, even amid background noise. This is the goal of Target Directional Sound Extraction (TDSE), a process designed to isolate sounds from a particular direction, while filtering out unwanted noise.
Our team has developed an innovative TDSE system that combines Digital Signal Processing (DSP) and deep learning. Traditional sound extraction relies on signal processing alone, but it struggles when multiple sounds arrive from different directions or when only a few microphones are available. Deep learning can help, but it sometimes distorts the audio. By integrating DSP-based spatial filtering with a deep neural network (DNN), our system extracts clear target audio with minimal interference, even with a limited number of microphones.
The system relies on spatial filtering techniques like beamforming and blocking. Beamforming serves as a signal estimator, enhancing sounds from the target direction, while blocking acts as a noise estimator, suppressing sounds from the target direction and leaving other unwanted noises intact. Using a deep learning model, our system processes spatial features and sound embeddings (unique characteristics of the target sound), yielding clear, isolated audio. In our tests, this method improved sound quality by 3-9 dB and performed well with different microphone setups, even those not used during training.
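To make the beamforming-plus-blocking idea concrete, here is a minimal Python sketch of the kind of DSP front end described above: a delay-and-sum beamformer that phase-aligns the microphones toward the target direction, and a blocking branch that cancels the target to produce a noise reference. This is an illustration of the general technique, not the authors' actual implementation; the function names and array shapes are our own assumptions.

```python
import numpy as np

def steering_delays(mic_positions, angle_deg, c=343.0):
    """Arrival-time offsets (seconds) at each microphone for a plane wave
    from angle_deg; mic_positions is an (M, 2) array of coordinates in metres."""
    direction = np.array([np.cos(np.radians(angle_deg)),
                          np.sin(np.radians(angle_deg))])
    return mic_positions @ direction / c

def delay_and_sum(stft, delays, freqs):
    """Signal estimator: phase-align each channel toward the target direction
    and average. stft has shape (mics, freq_bins, frames)."""
    phase = np.exp(2j * np.pi * np.outer(delays, freqs))   # (mics, freq_bins)
    aligned = stft * phase[:, :, None]
    return aligned.mean(axis=0)                            # enhanced target estimate

def blocking(stft, delays, freqs):
    """Noise estimator: after alignment, subtracting adjacent channels cancels
    the target direction and leaves the interference."""
    phase = np.exp(2j * np.pi * np.outer(delays, freqs))
    aligned = stft * phase[:, :, None]
    return aligned[1:] - aligned[:-1]                      # (mics-1, freq_bins, frames)
```

In a system like the one described, these two outputs, together with an embedding of the target sound, would form the spatial features handed to the DNN, which then produces the final clean estimate.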
TDSE could transform various industries, from virtual meetings to entertainment, by enhancing audio clarity in real time. Our system’s design offers flexibility, making it adaptable for real-world applications where clear directional audio is crucial.
This approach is an exciting step toward more robust, adaptive audio processing systems, allowing users to capture target sounds even in challenging environments.
École de technologie supérieure, Université du Québec, Montréal, Québec, H3C 1K3, Canada
Rachel Bouserhal, Valentin Pintat & Alexis Pinsonnault-Skvarenina
Popular version of 1pNSb12 – Immersive Auditory Awareness: A Smart Earphones Platform for Education on Noise-Induced Hearing Risks
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026825
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Ever thought about how your hearing might change in the future based on how much, and how loudly, you listen to music through earphones? And how would knowing this affect your listening habits? We developed a tool called InteracSon, a digital earpiece you can wear that helps you better understand the risk of losing your hearing from listening to loud music through earphones.
In this interactive platform, you first select your favourite song and play it through a pair of earphones at your preferred listening volume. After you tell InteracSon how much time you usually spend listening to music, it calculates the “Age of Your Ears”: how much your ears have aged due to your music listening habits. So even if you’re, say, 25 years old, your ears might behave as if they’re 45 because of all that loud music!
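The exact “Age of Your Ears” formula is not spelled out here, but the first step, turning a listening level and a daily duration into an exposure dose, follows standard noise-dose arithmetic: every 3 dB above an 85 dBA, 8-hour allowance halves the permitted listening time. Below is a rough sketch of that standard calculation; the numbers, and any mapping from dose to an “ear age”, are illustrative and not InteracSon's actual algorithm.

```python
def daily_noise_dose(level_dba, hours_per_day,
                     criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Percent of the allowed daily noise dose (100% = 85 dBA for 8 hours,
    3 dB exchange rate, as in common occupational-noise guidelines)."""
    allowed_hours = criterion_hours * 2 ** ((criterion_db - level_dba) / exchange_db)
    return 100.0 * hours_per_day / allowed_hours

# Example: favourite song played at 95 dBA for 2 hours every day
print(f"Daily dose: {daily_noise_dose(95, 2):.0f}%")   # roughly 250% of the daily allowance
```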
Picture of the “InteracSon” platform during calibration on an acoustic manikin. Photo by V. Pintat, ÉTS / CC BY
To really demonstrate what this means, InteracSon provides an immersive experience of what it’s like to have hearing loss. It has a mode where you can still hear what’s going on around you, but the sounds are filtered to match what your ears might be like with hearing loss. You can also hear what tinnitus sounds like: a ringing in the ears that is a common problem for people who listen to music too loudly. You can even listen to your favourite song again, but this time altered to simulate your predicted hearing loss.
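One common way to give normal-hearing listeners a rough impression of such a loss (not necessarily the method used in InteracSon) is to filter the audio with frequency-dependent attenuation shaped like an audiogram. The threshold values in this sketch are made-up examples of a sloping high-frequency loss.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 44100
audiogram_hz = [250, 500, 1000, 2000, 4000, 8000]   # standard test frequencies
threshold_db = [5,   10,  15,   30,   50,   60]     # hypothetical hearing loss (dB)

# Build an FIR filter whose attenuation follows the simulated threshold shift
freqs = [0] + audiogram_hz + [fs / 2]
gains = [10 ** (-g / 20) for g in [threshold_db[0]] + threshold_db + [threshold_db[-1]]]
taps = firwin2(1025, freqs, gains, fs=fs)

def simulate_loss(audio):
    """Apply the attenuation pattern to a mono signal sampled at fs."""
    return lfilter(taps, [1.0], audio)
```

Pure attenuation is only a first approximation; a real hearing loss also degrades frequency resolution and changes loudness growth, which more complete simulators (and the tinnitus playback) try to capture.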
With more than 60% of adolescents listening to their music at unsafe levels, and nearly 50% of them reporting hearing-related problems, InteracSon is a powerful tool to teach them about the adverse effects of noise exposure on hearing and to promote awareness about how to prevent hearing loss.
Northwestern University, Communication Sciences & Disorders, Evanston, IL, 60208, United States
Jeff Crukley – University of Toronto; McMaster University
Emily Lundberg – University of Colorado, Boulder
James M. Kates – University of Colorado, Boulder
Kathryn Arehart – University of Colorado, Boulder
Pamela Souza – Northwestern University
Popular version of 3aPP1 – Modeling the relationship between listener factors and signal modification: A pooled analysis spanning a decade
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027317
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Imagine yourself in a busy restaurant, trying to focus on a conversation. Often, even with hearing aids, the background noise can make it challenging to understand every word. While some listeners manage to follow the conversations rather easily, others find it hard to follow along, despite having their hearing aids adjusted.
Studies show that cognitive abilities (and not just how well we hear) affect how well we understand speech in noisy places. Individuals with weaker cognitive abilities struggle more in these situations. Unfortunately, current clinical approaches to hearing aid treatment are not yet tailored to these individuals. The standard approach to setting up hearing aids is to make speech sounds louder, or more audible. However, there is a downside: hearing aid settings that make speech more audible or attempt to remove background noise can unintentionally modify other important cues, such as fluctuations in the intensity of the sound, that are necessary for understanding speech. Consequently, listeners who depend on these cues may be at a disadvantage. Our investigations have focused on understanding why listeners with hearing aids experience these noisy environments differently and on developing an evidence-based method for adjusting hearing aids to each person’s individual abilities.
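As a toy illustration of the “modified cues” point, the sketch below applies a very simplified fast-acting compressor, a common hearing aid feature, to a signal; comparing the input and output envelopes shows how the intensity fluctuations get flattened. This is a generic example, not the processing used in our studies.

```python
import numpy as np

def simple_compressor(x, fs, threshold_db=-30.0, ratio=3.0,
                      attack_ms=5.0, release_ms=50.0):
    """Wideband compressor sketch: gain tracks a smoothed level estimate
    and reduces everything above the threshold by the compression ratio."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000))
    level, out = 1e-6, np.zeros_like(x, dtype=float)
    for n, s in enumerate(x):
        inst = abs(s) + 1e-9
        a = a_att if inst > level else a_rel
        level = a * level + (1 - a) * inst                 # envelope follower
        over = max(0.0, 20 * np.log10(level) - threshold_db)
        out[n] = s * 10 ** (-over * (1 - 1 / ratio) / 20)  # apply gain reduction
    return out
```

It is exactly this kind of envelope alteration that the signal-fidelity measurement described below is designed to capture.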
To address this, we pooled data from 73 individuals across four studies published by our group over the last decade. In these studies, listeners with hearing loss were asked to repeat sentences mixed with background chatter (like at a restaurant or a social gathering). The signals were processed through hearing aids adjusted in various ways, changing how they handle loudness and background noise. We measured how these adjustments to the noisy speech affected the listeners’ ability to understand the sentences. Each study also used a measurement of signal fidelity, which captures how the hearing aids and background noise together alter the speech sounds heard by the listener.
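Conceptually, a pooled analysis like this can be framed as a mixed-effects regression: word recognition predicted by signal fidelity and listener factors, with repeated measures nested within each listener. The sketch below shows that framing in Python with statsmodels; the column names and file name are hypothetical, and this is not the authors' actual model specification or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pooled dataset: one row per listener per processing condition
df = pd.read_csv("pooled_results.csv")   # placeholder file name

model = smf.mixedlm(
    "percent_correct ~ signal_fidelity * working_memory + age + hearing_loss",
    data=df,
    groups=df["listener_id"],            # random intercept per listener
)
print(model.fit().summary())
```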
Figure 1. Effect of individual cognitive abilities (working memory) on word recognition as signal fidelity changes.
Our findings reveal that listeners generally understand speech better when the background noise is less intrusive, and the hearing aids do not alter the speech cues too much. But there’s more to it: how well a person’s brain collects and manipulates speech information (their working memory), their age, and the severity of their hearing loss all play a role in how well they understand speech in noisy situations. Specifically, those with lower working memory tend to have more difficulty understanding speech when it is obscured by noise or altered by the hearing aid (Figure 1). So, improving the listening environment by reducing the background noise and/or choosing milder settings on the hearing aids could benefit these individuals.
In summary, our study indicates that a tailored approach that considers each person’s cognitive abilities could lead to better communication, especially in noisier situations. Clinically, the measurement of signal fidelity may be a useful tool to help make these decisions. This could mean the difference between straining to hear and enjoying a good conversation over dinner with family.
Swedish National Road and Transport Research Institute (VTI), Linköping, SE-58195, Sweden
Popular version of 1pNSb9 – Acoustic labelling of tires, road vehicles and road pavements: A vision for substantially improved procedures
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022814
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Not many vehicle owners know that they can help reduce traffic noise by making an informed choice of tires, without sacrificing safety or economy. At least you can do so in Europe, where a regulation requires that tires be labelled with their noise level (among other properties). But the labelling procedure has substantial flaws, for which we propose remedies based on state-of-the-art and innovative methods.
This is where consumer labels come in. In most parts of the world, we have consumer labels that include noise levels on household appliances, lawn mowers, printers, and so on. But when it comes to vehicles, tires, and road pavements, a noise label on the product is rare. So far, it is mandatory only on tires sold in the European Union, and it took a lot of effort by noise researchers to get it accepted alongside the more “popular” labels for energy (rolling resistance) and wet grip (skid resistance). Figure 1 shows and explains the European label.
Figure 1: The present European tire label, which must be attached to all tires sold in the European Union, here supplemented by explanations.
Why so much focus on tires? Figure 2 illustrates how much of the noise energy from European cars comes from the tires compared with “propulsion noise”, i.e., noise from the engine, exhaust, transmission, and fans. For speeds above 50 km/h (31 mph), over 80% of the noise comes from the tires. For trucks and buses the picture is similar, although above 50 km/h the tire share may be 50-80%. For electric vehicles, of course, the tires almost entirely dominate as a noise source at all speeds. Thus, already now, and even more so in the future, consumer choices favouring quieter tires will have an impact on traffic noise exposure. To achieve this progress, tire labels that include noise are needed, and they must be fair and discriminate between the quiet and the noisy.
Figure 2: Distribution of tire/road vs propulsion noise. Calculated for typical traffic with 8 % heavy vehicles in Switzerland [Heutschi et al., 2018].
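The speed dependence in Figure 2 can be reproduced qualitatively with a simple two-source model in which rolling (tire/road) noise grows with the logarithm of speed while propulsion noise changes roughly linearly with speed, the functional form used in the European CNOSSOS-EU method. The coefficients in the sketch below are purely illustrative and are not the values behind Figure 2 (those come from Heutschi et al., 2018).

```python
import numpy as np

def tire_share(v_kmh, a_r=90.0, b_r=30.0, a_p=80.0, b_p=8.0, v_ref=70.0):
    """Fraction of the total A-weighted sound energy that comes from the tires.
    Rolling noise:    L_R = a_r + b_r * log10(v / v_ref)
    Propulsion noise: L_P = a_p + b_p * (v - v_ref) / v_ref
    (illustrative coefficients for a light vehicle)"""
    l_r = a_r + b_r * np.log10(v_kmh / v_ref)
    l_p = a_p + b_p * (v_kmh - v_ref) / v_ref
    e_r, e_p = 10 ** (l_r / 10), 10 ** (l_p / 10)
    return e_r / (e_r + e_p)

for v in (30, 50, 80, 120):
    print(f"{v:>3} km/h: {100 * tire_share(v):.0f}% of the noise energy from the tires")
```

With these example numbers the tire share passes 80% at around 40-50 km/h, consistent with the trend in Figure 2; the exact crossover depends on the vehicle and the pavement.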
The EU label is a good start, but there are problems. When we purchased tires and measured their noise (in A-weighted dB), we found almost no correlation between the noise labels and our measured levels. To identify the cause of the problem and suggest improvements, the European Road Administrations (CEDR) funded a project named STEER (Strengthening the Effect of quieter tyres on European Roads), supplemented by a supporting project from the Swedish Road Administration. STEER found two severe problems in the noise measuring procedure: (1) the test track pavement defined in an ISO standard varies considerably from test site to test site, and (2) in many cases only the noisiest tires were measured, and all other tires of the same type (“family”) were labelled with the same value although they could be up to 6 dB quieter. Such “families” may include over 100 different dimensions, as well as load and speed ratings. Consequently, the full potential of the labelling system is far from being used.
The author’s presentation at Acoustics 2023 will deal with the noise labelling problem and suggest in more detail how the measurement procedures may be made much more reproducible and representative. This includes using special reference tires to calibrate test track surfaces, producing such test surfaces by additive manufacturing (3D printing) from digitally described originals, and calculating noise levels by digital simulation, modelling, and AI. Most if not all of the noise measurements could then move indoors, to be conducted in laboratories with large steel drums (an existing facility is shown in Figure 3). In that case, a drum surface made by 3D printing is also needed.
Figure 3: Laboratory drum facility for measurement of both rolling resistance and noise emission of tires (both for cars and trucks). Note the microphones. The tire is loaded and rolled against one of the three surfaces on the drum. Photo from the Gdansk University of Technology, courtesy of Dr P Mioduszewski.
Brigham Young University, Provo, Utah, 84602, United States
Kent L. Gee, Mark K. Transtrum, Shane V. Lympany
Popular version of 4aCA5 – Big data to streamlined app: Nationwide traffic noise prediction
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018816
VROOM! Vehicles are loud, and we hear them all the time. But how loud is it near your home, or at the park across town? The National Transportation Noise Map can’t give you more than an average daily sound level, even though it’s probably a lot quieter at night and louder during rush hour. So, we created an app that can predict the noise where, when, and how you want. How loud is it by that interstate at 3 AM, or at 5 PM? Using physics-based modeling, we can predict that for you. Why does the noise sound lower in pitch near the freeway than near other roads? Probably because of all the large trucks. How does the noise on your street during the winter compare to that across town, or on the other side of the country? Our app can predict that for you in a snap.
This (aptly named) app is called VROOM, for the Vehicular Reduced-Order Observation-based Model. It was built using observed hourly traffic counts at stations across the country. It also uses information such as the average percentage of heavy trucks on freeways at night and the average number of delivery trucks on smaller roads on weekdays to predict sound characteristics across the nation. The app includes a user-friendly interface, and with only 700 MB of stored data it can predict traffic noise for roads throughout the country, including near where you live. You don’t need a supercomputer to get a good estimate. The app shows you the sound levels on an interactive map, so you can zoom in to see what the noise looks like downtown or near your home.
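VROOM's reduced-order model itself is not described in detail here, but the basic physics it can lean on is simple: for a steady stream of vehicles, the radiated sound energy is roughly proportional to the traffic flow, so a daily-average level can be shifted to any given hour by the flow ratio in decibels, with heavy vehicles weighted more strongly. Here is a minimal sketch with made-up numbers, not the VROOM model.

```python
import numpy as np

def hourly_levels(day_avg_db, hourly_counts, hourly_heavy_frac=None, heavy_penalty_db=10.0):
    """Shift a daily-average traffic noise level to each hour, assuming sound
    energy scales with vehicle flow and a heavy truck radiates roughly 10 dB
    more than a light vehicle (illustrative value)."""
    counts = np.asarray(hourly_counts, dtype=float)
    if hourly_heavy_frac is None:
        eff = counts
    else:
        hf = np.asarray(hourly_heavy_frac, dtype=float)
        eff = counts * ((1 - hf) + hf * 10 ** (heavy_penalty_db / 10))
    return day_avg_db + 10 * np.log10(eff / eff.mean())

# Made-up hourly counts for one road, midnight through 11 PM
counts = [40, 25, 20, 20, 60, 200, 600, 900, 800, 600, 550, 560,
          580, 570, 600, 700, 900, 950, 700, 450, 300, 200, 120, 70]
levels = hourly_levels(65.0, counts)                      # 65 dBA daily average (made up)
print(f"3 AM: {levels[3]:.1f} dBA   5 PM: {levels[17]:.1f} dBA")
```

A higher overnight share of heavy trucks also shifts the spectrum toward lower frequencies, which is the “lower pitch near the freeway” effect mentioned above.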