Beyond Necessity, Hearing Aids Bring Enjoyment Through Music #ASA184

Hearing aids aren’t particularly good at preserving the sound quality of music – but some manufacturers do better than others.

Media Contact:
Ashley Piccone
AIP Media

CHICAGO, May 8, 2023 – For decades, hearing aid design has focused on improving communication by separating speech from background noise. While the technology has made strides with speech, it is still subpar when it comes to music.

Over the years, hearing aids have improved in terms of speech. But they are still subpar when it comes to music. Credit: Emily Sandgren

In their talk, “Evaluating the efficacy of music programs in hearing aids,” Emily Sandgren and Joshua Alexander of Purdue University will describe experiments to determine the best hearing aids for listening to music. The presentation will take place Monday, May 8, at 11:45 a.m. Eastern U.S. in the Indiana/Iowa room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

“Americans listen to music for more than two hours a day on average, and music can be related to mental and emotional health. But research over the past two decades has shown that hearing aid users are dissatisfied with the sound quality of music when using their hearing aids,” said Sandgren. “People with hearing loss deserve both ease of communication and to maintain quality of life by enjoying sources of entertainment like music.”

In response to this problem, hearing aid manufacturers have designed music programs for their devices. To test and compare each of these programs, Sandgren and Alexander took over 200 recordings of music samples as processed by hearing aids from seven popular manufacturers.

They asked study participants to rate the sound quality of these recordings and found that music processed by the hearing aids was rated lower than the control stimuli. The researchers found bigger differences in music quality between hearing aid brands than between each brand's speech and music programs, with two manufacturers standing out from the rest.

The team is still trying to determine the causes behind these differences.

“One contributing factor is how hearing aids adapt to loud, sudden sounds,” said Sandgren. “When you’re listening to a conversation, if a door slams behind you, you don’t want that door slam to be amplified very much. But with music, there are loud sudden sounds that we do want to hear, like percussion instruments.”
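The talk does not describe the devices' actual algorithms, but the behavior Sandgren describes is classic dynamic range compression: a fast "attack" turns down sudden loud sounds, which is desirable for a door slam but flattens a drum hit. The sketch below is purely illustrative; the threshold, ratio, and time constants are made-up values, not any manufacturer's settings.

```python
import numpy as np

def compress(signal, fs, threshold=0.1, ratio=4.0,
             attack_ms=5.0, release_ms=50.0):
    """Toy dynamic range compressor: levels above `threshold` are
    reduced by `ratio`. The short attack time means a sudden loud
    sound (door slam, drum hit) is turned down within milliseconds."""
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        mag = abs(x)
        coef = attack if mag > env else release   # track the level envelope
        env = coef * env + (1.0 - coef) * mag
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out[i] = x * gain
    return out

fs = 16000
t = np.arange(fs) / fs
quiet = 0.05 * np.sin(2 * np.pi * 440 * t)   # soft, steady tone
quiet[8000:8100] += 0.9                      # sudden loud transient
processed = compress(quiet, fs)
# The soft tone passes through unchanged; the transient is attenuated.
```

This is exactly the trade-off in the quote: a setting that protects a listener from a door slam also turns down the percussion they wanted to hear.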

Distortion may be one of the biggest problems. Unlike speech, music often has intense low-frequency harmonics.

“Our analyses suggest that brands rated highest in music quality processed the intense ultralow frequency peaks with less distortion than those rated lowest in music quality,” said Alexander.

This work will improve future technology and help audiologists select the best current hearing aids for their patients.

Main meeting website:
Technical program:

In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at

ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at

ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal –
Instagram: @matthewneal32

Department of Otolaryngology and other Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at

Many of the struggles patients and audiologists experience during the hearing aid fitting process stem from a simple difficulty: it is very hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but devices often must be purchased before patients can try them in their everyday life. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the other places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual's needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After turning a new hearing aid feature on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings correct, hearing aid purchasers must also decide which ‘technology level’ they would like to purchase. Patients typically choose among three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per increase in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide if they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide if the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps audio in a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and they process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. The system is still under development and is planned for use in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
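The article doesn't publish the software's internals, but the signal path it describes can be sketched as a convolution: a dry (anechoic) source signal is filtered by the room's impulse response at each hearing aid microphone position, and the results drive the wired inputs. The impulse responses below are toy stand-ins, not the system's actual room models.

```python
import numpy as np

def simulate_scene_input(source, rirs):
    """Render what each hearing aid microphone would pick up in a
    simulated room: convolve the dry source with the room impulse
    response modeled at each microphone position. Each row of the
    result would be streamed down one wire to the device."""
    return np.stack([np.convolve(source, rir) for rir in rirs])

fs = 16000
rng = np.random.default_rng(0)
dry_speech = rng.standard_normal(fs)           # stand-in for an anechoic talker
# Toy impulse responses: direct sound plus one weak, delayed reflection.
rir_left = np.zeros(400);  rir_left[0] = 1.0;  rir_left[320] = 0.3
rir_right = np.zeros(400); rir_right[8] = 0.9; rir_right[330] = 0.3
mic_feeds = simulate_scene_input(dry_speech, [rir_left, rir_right])
# mic_feeds[0] / mic_feeds[1] would drive the left / right wired inputs.
```

Head rotation would amount to re-selecting or interpolating the impulse responses in real time, which is why low-latency updates matter for the demo.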

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.

5aPPb2 – Using a virtual restaurant to test hearing aid settings

Gregory M Ellis –
Pamela Souza –

Northwestern University
Frances Searle Building
2240 Campus Drive
Evanston, IL 60201

Popular version of paper 5aPPb2
Presented Friday morning, December 11th, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

True scientific discoveries require a series of tightly controlled experiments conducted in lab settings. These kinds of studies tell us how to implement and improve technologies we use every day—technologies like fingerprint scanners, face recognition, and voice recognition. One of the downsides of these tightly controlled environments, however, is that the real world is anything but tightly controlled. Dust may be on your fingerprint, the light may make it difficult for the face recognition software to work, or the background may be noisy, making your voice impossible to pick up. Can we account for these scenarios in the lab when we’re performing experiments? Can we bring the real world—or parts of it—into a lab setting?

In our line of research, we believe we can. While the technologies listed above are interesting in their own right, our research focuses on hearing aid processing. Our lab generally asks: what factors affect speech understanding for a person with a hearing aid, and to what extent? The project I’m presenting at this conference looks specifically at environmental and hearing aid processing factors. Environmental factors include the loudness of background noise and echoes. Processing factors involve the software within the hearing aid that attempts to reduce or eliminate background noise, and amplification strategies that make relatively quiet parts of speech louder so they’re easier to hear. We use computer simulations to examine how both kinds of factors affect a listener’s speech intelligibility.

The room simulation is first. We built a very simple virtual environment pictured below:

virtual restaurant

The virtual room used in our experiments. The red dot represents the listener. The green dot represents the speaker. The blue dots represent other people in the restaurant having their own conversations and making noise.

We can simulate the properties of the sounds in that room using a model that has been shown to be a good approximation of real recordings of sounds in rooms. After passing the speech from the speaker and all of the competing talkers through this room model, we obtain a realistic simulation of the sounds in the room.
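The paper doesn't specify which room model was used, but a common family of such models is the image-source method: each wall reflection is treated as sound arriving from a mirror-image copy of the talker, with a delay set by distance and a level set by spreading loss and wall absorption. The toy calculation below handles just the direct path and one reflection; all the geometry and absorption numbers are illustrative.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def direct_and_reflection(src, lst, wall_x, fs=16000, absorption=0.3):
    """Toy image-source calculation: the direct path plus one
    reflection off a wall at x = wall_x. Returns a sparse impulse
    response with one spike per path (delay from distance, level
    from 1/r spreading and wall absorption)."""
    src, lst = np.asarray(src, float), np.asarray(lst, float)
    image = src.copy()
    image[0] = 2 * wall_x - src[0]           # mirror the source in the wall
    ir = np.zeros(fs // 4)
    for pos, loss in ((src, 1.0), (image, 1.0 - absorption)):
        d = np.linalg.norm(pos - lst)
        n = int(round(d / SPEED_OF_SOUND * fs))
        ir[n] += loss / max(d, 1e-6)         # 1/r amplitude decay
    return ir

# Talker 2 m in front of the listener, wall 1 m behind the talker.
ir = direct_and_reflection(src=[1.0, 2.0], lst=[3.0, 2.0], wall_x=0.0)
# Convolving dry speech with `ir` adds the slight echo described above;
# a full room model sums many such reflections for every talker.
```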

If you’re wearing headphones while you read this article, you can listen to an example here:

A woman speaking the sentence “Ten pins were set in order.” You should be able to hear other people talking to your right, all of whom are quieter than the woman in front. All of the sound has a slight echo to it. Note that this will not work if you aren’t wearing headphones!

We then take this simulation and pass it through a hearing aid simulator. This imposes the processing you might expect in a widely-available hearing aid. Here’s an example of what that would sound like:

Same sentence as the restaurant simulation, but this is processed through a simulated hearing aid. You should notice a slightly different pitch to the sentence and the environment. This is because the simulated hearing loss is more extreme at higher pitches.

Based on the results of hundreds of sentences, we would have a better understanding of how the environmental factors and the hearing aid processing interact. We found that for listeners with hearing impairment, there is an interaction between noise level and processing strategy, though more data will need to be collected before we can draw any solid conclusions. While these results are a promising first step, there are many more factors to look at—different amounts of echo, different amounts of noise, different types of processing strategies… and none of these factors include anything about the person listening to the sentences either. Does age, attention span, or degree of hearing loss affect their ability to perform the task? Ongoing and future research will be able to answer these questions.

This work is important because it shows that we can account for some environmental factors in tightly-controlled research. The method works well and produces results that we would expect to see. If you want results from the lab to be relatable to the real world, try to bring the real world into the lab!

4pPPa6 – Benefits of a Smartphone as a Remote Microphone System

Dr. Linda Thibodeau,
Dr. Issa Panahi
The University of Texas at Dallas

Popular version of paper 4pPPa6
Presented Thursday afternoon, December 5, 2019
178th ASA Meeting, San Diego, CA

A common problem reported by persons with hearing loss is a reduced ability to hear speech in noisy environments. Despite sophisticated microphone and noise reduction technology in personal amplification devices to address this challenge, speech perception remains compromised by factors such as distance from the talker and reverberation. Remote microphone (RM) systems have been shown to reduce the challenges hearing aid users face with communicating in noisy environments. An RM worn by the speaker streams their voice wirelessly to the user’s hearing aids, which significantly improves the signal-to-noise ratio and makes it easier to hear and understand speech.
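The reason a remote microphone helps is simple geometry: speech level falls off roughly with distance from the talker, while room noise is about the same everywhere, so a microphone on the talker's lapel captures a far better signal-to-noise ratio (SNR) than a microphone at the listener's ear. The numbers below are illustrative, not measurements from this study.

```python
import numpy as np

def snr_db(speech_rms, noise_rms):
    """Signal-to-noise ratio in decibels."""
    return 20 * np.log10(speech_rms / noise_rms)

# Illustrative levels: speech RMS falls roughly as 1/distance in a
# free field, while diffuse room noise is about the same everywhere.
speech_rms_at_1m = 1.0
noise_rms = 0.5

# Listener across the table, 2 m from the talker:
snr_listener = snr_db(speech_rms_at_1m / 2.0, noise_rms)
# Remote microphone on the talker's lapel, about 0.15 m away:
snr_remote = snr_db(speech_rms_at_1m / 0.15, noise_rms)

improvement = snr_remote - snr_listener   # roughly 22 dB in this toy case
```

Streaming that near-field signal straight into the hearing aids delivers the improved SNR regardless of where the listener sits.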

Given that the additional cost of an RM may not be feasible for some individuals, the possible use of applications on a smartphone has been explored. In the past five years, it has become increasingly common for hearing aids to connect wirelessly to smartphones. In fact, one desirable feature of the connection to the Apple iPhone has been an application called ‘Live Listen’ (LL). This application allows the iPhone to be used as an RM with Made for iPhone hearing aids.

The Statistical Signal Processing Research Laboratory at The University of Texas at Dallas has developed an application for the iPhone that is also designed to be used as an RM. The application, called SHARP, has been tested with persons with normal and impaired hearing, and with several types of hearing aids, in the university’s Hearing Health Laboratory. A study was conducted to compare the benefit of LL and the SHARP application for participants with and without hearing loss on sentence recognition tasks in noise when listening through hearing aids connected to an iPhone. A summary of the testing protocol is shown in the following short video clip.

Both the LL feature and the SHARP app provided benefits in speech recognition in noise ranging from none to 30%, depending on the degree of hearing loss and type of aid. The results suggest that persons can improve speech recognition in noise, and perhaps increase overall quality of life, through the use of applications such as SHARP on a smartphone in conjunction with wirelessly connected hearing aids.

1pPP – Trends that are shaping the future of hearing aid technology

Brent Edwards –

Popular version of paper 1pPPa, “Trends that are shaping the future of hearing aid technology”
Presented Monday afternoon, May 7, 2018, 1:00PM, Nicollet D2 Room
175th ASA Meeting, Minneapolis

Hearing aid technology is changing faster than at any point in its history. A primary reason is its convergence with consumer electronics, which has accelerated the pace of innovation and changed its nature from incremental to disruptive.

Hearables and wearables are non-medical devices that use sensors to measure and inform the user about their biometric data in addition to providing other sensory information. Since hearing aids are worn every day and the ear is an ideal location for many of these sensors, hearing aids have the potential to become the ideal form factor for consumer wearables. Conversely, hearable devices that augment and enhance audio for normal-hearing consumers while also measuring their biometric data have the potential to become a new form of hearing aid for those with hearing loss, combining the medical functionality of hearing loss compensation with such consumer functionality as speech recognition with always-on access to Siri. The photo below shows one hearable on the market that allows the wearer to measure their hearing with a smartphone app and adjust audibility to personalise the sound for the individual’s hearing ability, a process that has similarities to the fitting of a traditional hearing aid by an audiologist.

Accelerating this convergence between medical and consumer hearing technologies is the recently passed congressional bill that mandates the creation of a new over-the-counter hearing aid that consumers can purchase in a store and fit themselves. E-health technologies already exist that allow a consumer to measure their own hearing loss and apply clinically validated prescriptions to their hearable devices. This technology development will explode once over-the-counter hearing aids are a reality.

Deep science is also impacting hearing aid innovation. The integration of cognitive function with hearing aid technology will continue to be one of the strongest trends in the field. Neural measures of the brain using EEG have the potential to be used to fit hearing devices and also to demonstrate hearing aid benefit by showing how wearing devices affects activity in the brain. Brain sensors have been shown to be able to determine which talker a person is listening to, a capability that could be included in future hearing aids to enhance the speech from the desired talker and suppress all other sounds. Finally, science continues to advance our understanding of how hearing aid technology can benefit cognitive function. These scientific and other medical developments, such as light-driven hearing aids, will advance hearing aid benefit through the more traditional medical channel, complementing the advances on the consumer side of the healthcare delivery spectrum.