5aPPb2 – Using a virtual restaurant to test hearing aid settings

Gregory M Ellis – gregory.ellis@northwestern.edu
Pamela Souza – p-souza@northwestern.edu

Northwestern University
Frances Searle Building
2240 Campus Drive
Evanston, IL 60201

Popular version of paper 5aPPb2
Presented Friday morning, December 11th, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

True scientific discoveries require a series of tightly controlled experiments conducted in lab settings. These kinds of studies tell us how to implement and improve technologies we use every day: fingerprint scanners, face recognition, and voice recognition. One downside of these tightly controlled environments, however, is that the real world is anything but tightly controlled. Dust may be on your fingerprint, poor lighting may defeat the face recognition software, or background noise may make your voice impossible to pick up. Can we account for these scenarios in the lab when we're performing experiments? Can we bring the real world, or parts of it, into a lab setting?

In our line of research, we believe we can. While the technologies listed above are interesting in their own right, our research focuses on hearing aid processing. Our lab generally asks: which factors affect speech understanding for a person wearing a hearing aid, and to what extent? The project I'm presenting at this conference looks specifically at environmental and hearing aid processing factors. Environmental factors include the loudness of background noise and the amount of echo in the room. Processing factors involve the software within the hearing aid: noise reduction algorithms that attempt to suppress background sounds, and amplification strategies that make relatively quiet parts of speech louder so they're easier to hear. We use computer simulations to manipulate both kinds of factors, and we measure their effects on the listener by testing how well speech can be understood under each combination.

The room simulation comes first. We built a very simple virtual environment, pictured below:

[Image: the virtual restaurant]

The virtual room used in our experiments. The red dot represents the listener. The green dot represents the speaker. The blue dots represent other people in the restaurant having their own conversations and making noise.

We can simulate the properties of the sounds in that room using a model that has been shown to approximate real recordings made in rooms. After passing the target speech and all of the competing talkers through this room model, we have a realistic simulation of what a listener would hear at that spot in the restaurant.
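For readers who want a concrete picture, here is a minimal sketch of this kind of room simulation using the open-source pyroomacoustics package, which implements the widely used image-source model. The room size, wall absorption, and talker positions are illustrative assumptions, not the exact values from our study.

```python
# A minimal room simulation sketch (illustrative parameters only).
import numpy as np
import pyroomacoustics as pra

fs = 16000                        # sample rate in Hz
target = np.random.randn(fs)      # stand-in for the target sentence
masker = np.random.randn(fs)      # stand-in for one competing talker

# A 6 m x 5 m x 3 m "restaurant" with moderately absorbent surfaces.
room = pra.ShoeBox([6.0, 5.0, 3.0], fs=fs,
                   materials=pra.Material(0.35), max_order=10)

room.add_source([2.0, 3.5, 1.5], signal=target)  # green dot: the talker
room.add_source([4.5, 1.0, 1.5], signal=masker)  # blue dot: a competing talker

# Red dot: the listener, modeled here as a single microphone.
room.add_microphone_array(pra.MicrophoneArray(np.c_[[2.0, 2.5, 1.5]], fs))

room.simulate()                       # convolves each source with its room impulse response
mixture = room.mic_array.signals[0]   # the reverberant speech-plus-noise mixture
```

In the actual experiment the output is rendered binaurally, so that over headphones each talker appears to come from a particular direction, as in the example below.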

If you’re wearing headphones while you read this article, you can listen to an example here:

A woman speaking the sentence “Ten pins were set in order.” You should be able to hear other people talking to your right, all of whom are quieter than the woman in front. All of the sound has a slight echo to it. Note that this will not work if you aren’t wearing headphones!

We then take this simulation and pass it through a hearing aid simulator, which imposes the processing you might expect in a widely available hearing aid. Here’s an example of what that would sound like:

Same sentence as the restaurant simulation, but this is processed through a simulated hearing aid. You should notice a slightly different pitch to the sentence and the environment. This is because the simulated hearing loss is more extreme at higher pitches.
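To give a rough sense of what a hearing aid simulator does, the sketch below applies multi-band amplification with simple compression, the core of most hearing aid amplification strategies: quiet sounds receive more gain than loud ones, and high frequencies receive more gain than low ones. The band edges, gains, knee point, and compression ratio here are illustrative assumptions, not the settings of our simulator.

```python
# Toy multi-band wide dynamic range compression (illustrative parameters only).
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
x = np.random.randn(fs)          # stand-in for the reverberant mixture

bands = [(125, 1000), (1000, 4000), (4000, 7900)]  # low / mid / high (Hz)
gains_db = [5.0, 15.0, 25.0]     # more gain at high frequencies, matching a
                                 # sloping high-frequency hearing loss
knee_db = -40.0                  # compression threshold (dB re full scale)
ratio = 2.0                      # 2:1 compression above the knee

out = np.zeros_like(x)
for (lo, hi), gain in zip(bands, gains_db):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, x)
    # Short-term level from a 10 ms moving-average envelope.
    env = np.sqrt(np.convolve(band ** 2, np.ones(160) / 160, mode="same"))
    level_db = 20 * np.log10(env + 1e-12)
    # Above the knee, output level grows at 1/ratio the rate of the input.
    comp_db = np.where(level_db > knee_db,
                       (knee_db - level_db) * (1 - 1 / ratio), 0.0)
    out += band * 10 ** ((gain + comp_db) / 20)
```

Real hearing aids use many more bands and carefully tuned attack and release times; the point here is only the shape of the computation.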

Based on results from hundreds of sentences, we can build a better understanding of how the environmental factors and the hearing aid processing interact. So far we have found that, for listeners with hearing impairment, noise level and processing strategy do interact, though more data must be collected before we can draw solid conclusions. While these results are a promising first step, there are many more factors to examine: different amounts of echo, different amounts of noise, different types of processing strategies. And none of these factors says anything about the person listening to the sentences. Does age, attention span, or degree of hearing loss affect their ability to perform the task? Ongoing and future research will answer these questions.

This work is important because it shows that we can account for some environmental factors in tightly controlled research. The method works well and produces the results we would expect. If you want results from the lab to translate to the real world, try bringing the real world into the lab!

4pPPa6 – Benefits of a Smartphone as a Remote Microphone System

Dr. Linda Thibodeau, thib@utdallas.edu
Dr. Issa Panahi
The University of Texas at Dallas

Popular version of paper 4pPPa6
Presented Thursday afternoon, December 5, 2019
178th ASA Meeting, San Diego, CA

A common problem reported by persons with hearing loss is a reduced ability to hear speech in noisy environments. Despite sophisticated microphone and noise reduction technology in personal amplification devices, speech perception remains compromised by factors such as distance from the talker and reverberation. Remote microphone (RM) systems have been shown to reduce the challenges hearing aid users face when communicating in noisy environments. An RM worn by the talker streams his or her voice wirelessly to the user’s hearing aids, which significantly improves the signal-to-noise ratio and makes speech easier to hear and understand.
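A back-of-the-envelope calculation (an illustration for this summary, not a result from the paper) shows why placing the microphone at the talker helps so much: in a free field, speech level drops about 6 dB for every doubling of distance, while the room noise at the listener’s ear stays roughly constant.

```python
# Hypothetical SNR comparison: hearing aid microphone vs. remote microphone.
import math

def speech_level_db(level_at_1m_db: float, distance_m: float) -> float:
    """Speech level at a microphone distance_m from the talker,
    assuming inverse-square (free-field) spreading."""
    return level_at_1m_db - 20 * math.log10(distance_m)

noise_db = 60.0        # assumed steady restaurant-like noise level
speech_at_1m = 65.0    # assumed conversational speech level at 1 m

for d in (4.0, 0.15):  # hearing aid mic at 4 m vs. RM ~15 cm from the mouth
    snr = speech_level_db(speech_at_1m, d) - noise_db
    print(f"distance {d:>4} m -> SNR {snr:+.1f} dB")
# distance  4.0 m -> SNR -7.0 dB
# distance 0.15 m -> SNR +21.5 dB
```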

Given that the additional cost of an RM may not be feasible for some individuals, the possible use of applications on a smartphone has been explored. In the past five years, it has become increasingly common for hearing aids to connect wirelessly to smartphones. In fact, one desirable feature of the connection to the Apple iPhone has been an application called ‘Live Listen’ (LL), which allows the iPhone to be used as an RM with Made for iPhone hearing aids.

The Statistical Signal Processing Research Laboratory at The University of Texas at Dallas has developed an iPhone application, called SHARP, that is also designed to be used as an RM. SHARP has been tested in the university’s Hearing Health Laboratory with persons with normal and impaired hearing and with several types of hearing aids. A study was conducted to compare the benefit of LL and SHARP for participants with and without hearing loss on sentence recognition tasks in noise, listening through hearing aids connected to an iPhone. The testing protocol is shown in the following short video clip.

Both the LL feature and the SHARP app provided benefits for speech recognition in noise, ranging from no benefit to 30%, depending on the degree of hearing loss and the type of hearing aid. The results suggest that persons can improve speech recognition in noise, and perhaps increase overall quality of life, through the use of applications such as SHARP on a smartphone in conjunction with wirelessly connected hearing aids.

1pPP – Trends that are shaping the future of hearing aid technology

Brent Edwards – Brent.Edwards@nal.gov.au

Popular version of paper 1pPPa, “Trends that are shaping the future of hearing aid technology”
Presented Monday afternoon, May 7, 2018, 1:00PM, Nicollet D2 Room
175th ASA Meeting, Minneapolis

Hearing aid technology is changing faster than at any time in its history. A primary reason is its convergence with consumer electronics, which has accelerated the pace of innovation and changed its nature from incremental to disruptive.

Hearables and wearables are non-medical devices that use sensors to measure and inform the user about their biometric data, in addition to providing other sensory information. Since hearing aids are worn every day and the ear is an ideal location for many of these sensors, hearing aids have the potential to become the ideal form factor for consumer wearables. Conversely, hearable devices that augment and enhance audio for normal-hearing consumers while also measuring their biometric data have the potential to become a new form of hearing aid, combining the medical functionality of hearing loss compensation with consumer functionality such as speech recognition and always-on access to Siri. The photo below shows one hearable on the market that allows the wearer to measure their hearing with a smartphone app and adjust audibility to personalise the sound for their individual hearing ability, a process that has similarities to the fitting of a traditional hearing aid by an audiologist.
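The self-fitting step can be surprisingly simple in principle. As a toy illustration (an assumption for this summary, not any product’s actual algorithm), the classic “half-gain rule” turns measured hearing thresholds into per-band gains:

```python
# Hypothetical audiogram: hearing thresholds in dB HL at standard frequencies.
thresholds_db_hl = {250: 20, 500: 25, 1000: 35, 2000: 50, 4000: 65}

# Half-gain rule: prescribe roughly half the hearing loss as gain in each band.
gains_db = {freq: hl / 2 for freq, hl in thresholds_db_hl.items()}
print(gains_db)  # {250: 10.0, 500: 12.5, 1000: 17.5, 2000: 25.0, 4000: 32.5}
```

Modern prescriptions such as NAL-NL2 are far more sophisticated, but the principle of measuring thresholds and deriving frequency-dependent gain is the same.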

[Photo: hearing aid technology]

Accelerating this convergence between medical and consumer hearing technologies is the recently passed congressional bill that mandates the creation of a new over-the-counter hearing aid category: devices that consumers can purchase in a store and fit themselves. E-health technologies already exist that allow consumers to measure their own hearing loss and apply clinically validated prescriptions to their hearable devices. This kind of technology will explode once over-the-counter hearing aids are a reality.

Deep science is also shaping hearing aid innovation. The integration of cognitive function with hearing aid technology will continue to be one of the strongest trends in the field. Neural measures of the brain using EEG have the potential to be used to fit hearing devices and to demonstrate hearing aid benefit by showing how wearing devices affects activity in the brain. Brain sensors have been shown to determine which talker a person is listening to, a capability that could be built into future hearing aids to enhance the speech from the desired talker and suppress all other sounds. Finally, science continues to advance our understanding of how hearing aid technology can benefit cognitive function. These scientific developments, along with medical developments such as light-driven hearing aids, will advance hearing aid benefit through the more traditional medical channel, complementing the advances on the consumer side of the healthcare delivery spectrum.
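For the curious, here is a minimal sketch (an illustration for this summary, not the method described in the talk) of the classic “stimulus reconstruction” approach to such attention decoding: a linear decoder, trained beforehand on EEG recorded while the listener attended known speech, reconstructs the attended speech envelope from the EEG, and that reconstruction is correlated with each talker’s envelope.

```python
import numpy as np

def decode_attention(eeg, decoder, env_a, env_b):
    """Guess which of two talkers the listener is attending to.

    eeg:      (n_samples, n_channels) EEG segment
    decoder:  (n_channels,) linear weights, trained in advance
    env_a/b:  (n_samples,) speech envelopes of the two talkers
    """
    recon = eeg @ decoder                  # reconstructed attended envelope
    r_a = np.corrcoef(recon, env_a)[0, 1]  # similarity to talker A
    r_b = np.corrcoef(recon, env_b)[0, 1]  # similarity to talker B
    return "A" if r_a > r_b else "B"

# Toy usage with random placeholders standing in for real recordings:
rng = np.random.default_rng(0)
eeg = rng.standard_normal((1600, 32))
decoder = rng.standard_normal(32)
env_a = rng.standard_normal(1600)
env_b = rng.standard_normal(1600)
print(decode_attention(eeg, decoder, env_a, env_b))
```

A hearing aid that made this decision in real time could steer its noise reduction toward the attended talker, as described above.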