A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is very hard to describe in words how something will sound, especially to someone who has never heard it before. Currently, audiologists use brochures and their own words to counsel patients during the hearing aid purchase process, but patients often must purchase devices before they can try them in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to hear what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the other places where they need the devices most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides while patients try out a hearing aid’s features. When a new feature is turned on, the patient hears the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings right, hearing aid purchasers must also decide which ‘technology level’ they would like to buy. Patients typically choose among three to four technology levels, ranging from basic to premium, with an added cost of around $1,000 per step up in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without ever hearing the difference. The VR hearing aid demonstration lets patients try out these different technology levels, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These are the same devices sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the positions of the hearing aid microphones as worn by the patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, which process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that listeners can rotate their heads, just as they would in the real world. The system is still being developed and is planned for use in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
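As a rough illustration of the underlying idea (a minimal sketch with hypothetical names and parameters, not the project's actual software), scene audio can be rendered to a hearing aid microphone by convolving a dry source signal with a room impulse response chosen for the listener's current head orientation:

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical sketch: render the signal that would reach one hearing aid
# microphone in a simulated room. 'impulse_responses' maps a head yaw angle
# (degrees) to a measured or modeled room impulse response at that microphone
# position; none of these names come from the actual system.

def render_mic_signal(dry_source, impulse_responses, head_yaw_deg):
    """Convolve the dry source with the impulse response closest to the
    listener's current head orientation."""
    yaws = np.array(sorted(impulse_responses.keys()))
    nearest = yaws[np.argmin(np.abs(yaws - head_yaw_deg))]
    return fftconvolve(dry_source, impulse_responses[nearest], mode="full")

# A real-time system would process audio block by block with low latency and
# crossfade between impulse responses as the listener's head rotates.
```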

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.

Improving pitch sensitivity by cochlear-implant users

John Middlebrooks – middlebj@hs.uci.edu

University of California, Irvine, Irvine, CA, 92697-5310, United States

Matthew Richardson and Harrison Lin
University of California, Irvine

Robert Carlyon
University of Cambridge

Popular version of 2aPP6 – Temporal pitch processing in an animal model of normal and electrical hearing
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018352

A cochlear implant can restore reasonable speech perception to a deaf individual. Sensitivity to the pitch of sounds, however, is typically negligible. Lack of pitch sensitivity deprives implant users of the appreciation of musical melodies, disrupts pitch cues that are important for picking out a voice amid competing sounds, and impairs understanding of lexical tones in tonal languages such as Mandarin or Vietnamese. Efforts to improve pitch perception by cochlear-implant users could benefit from studies in experimental animals, in which the investigator can control the history of deafness and electrical stimulation and can evaluate novel implanted devices. We are evaluating cats for studies of pitch perception in normal and electrical hearing.

We train normal-hearing cats to detect changes in the pitches of trains of sound pulses – this is “temporal pitch” sensitivity. The cat presses a pedal to start a pulse train at a particular base rate. After a random delay, the pulse rate is changed and the cat can release the pedal to receive a food reward. The range of temporal pitch sensitivity by cats corresponds well to that of humans, although the pitch range of cats is shifted somewhat higher in frequency in keeping with the cat’s higher frequency range of hearing.
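For illustration, the sketch below (with assumed parameters, not those used in the study) builds such a pulse train whose rate steps up partway through, the kind of change the cat is trained to report:

```python
import numpy as np

FS = 48000  # sample rate in Hz (assumed)

def pulse_train(rate_hz, duration_s, fs=FS):
    """Unit-amplitude clicks repeating at rate_hz for duration_s seconds."""
    x = np.zeros(int(fs * duration_s))
    x[::int(round(fs / rate_hz))] = 1.0
    return x

# Base rate for a while, then a step change in rate that the animal reports
# by releasing the pedal; 300 and 340 pulses/s are illustrative values only.
stimulus = np.concatenate([pulse_train(300, 1.5), pulse_train(340, 1.0)])
```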

We record small voltages from the scalps of sedated cats. The frequency-following response (FFR) consists of voltages originating in the brainstem that synchronize to the stimulus pulses. We can detect FFR signals across the range of pulse rates that is relevant for temporal pitch sensitivity. The acoustic change complex (ACC) is a voltage that arises from the auditory cortex in response to a change in an ongoing stimulus. We can record ACC signals in response to pitch changes across ranges similar to the sensitive ranges seen in the behavioral trials in normal-hearing cats.
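As a rough illustration of how synchronization to the pulses can be quantified (a generic sketch with assumed parameters, not the lab's analysis pipeline), one can measure the spectral component of the averaged scalp response at the stimulus pulse rate:

```python
import numpy as np

fs = 16000            # recording sample rate in Hz (assumed)
pulse_rate = 300.0    # stimulus pulse rate in Hz (illustrative)

def component_amplitude(avg_response, fs, target_hz):
    """Amplitude of the averaged response at the frequency nearest target_hz."""
    spectrum = np.abs(np.fft.rfft(avg_response)) / len(avg_response)
    freqs = np.fft.rfftfreq(len(avg_response), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

# Synthetic stand-in for an averaged scalp response: a small component at the
# pulse rate buried in noise, which the measure above should detect.
t = np.arange(int(fs * 0.5)) / fs
fake_response = 0.1 * np.sin(2 * np.pi * pulse_rate * t) + np.random.randn(t.size)
print(component_amplitude(fake_response, fs, pulse_rate))
```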

We have implanted cats with devices like the cochlear implants used by humans. Both the FFR and the ACC could be recorded in response to electrical stimulation from the implants.

The ACC could serve as a surrogate for behavioral training for conditions in which a cat’s learning might not keep up with changes in stimulation strategies, like when a cochlear implant is newly implanted or a novel stimulating pattern is tested.

We have found previously, in short-term experiments in anesthetized cats, that an electrode inserted into the auditory (hearing) nerve can selectively stimulate pathways that are specialized for the transmission of timing information, e.g., for pitch sensation. In ongoing experiments, we plan to place long-term indwelling electrodes in the auditory nerve. Pitch sensitivity with those electrodes will be evaluated with FFR and ACC recordings. If the performance of the auditory nerve electrodes in the animal model turns out as anticipated, such electrodes could offer improved pitch sensitivity to human cochlear implant users.

The ability to differentiate between talkers based on their voice cues changes with age

Yael Zaltz – yaelzalt@tauex.tau.ac.il

Department of Communication Disorders, Steyer School of Health Professions, Faculty of Medicine, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel

Popular version of 4aPP2 – The underlying mechanisms for voice discrimination across the life span
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018859

By using voice cues, a listener can keep track of a specific talker and tell that talker apart from other relevant and irrelevant talkers. Voice cues thus help listeners understand speech in everyday, noisy environments that include multiple talkers. The present study demonstrates that both young children and older adults are not as good at voice discrimination as young adults, and that they rely more on top-down, higher-order cognitive resources to perform it.

Four experiments were designed to assess voice discrimination based on two voice cues: the speaker’s fundamental frequency (the rate of vocal fold vibration, heard as voice pitch) and formant frequencies (the resonant frequencies of the vocal tract, which reflect vocal tract length). Two of the experiments assessed voice discrimination in quiet conditions, one assessed the effect of noise, and one assessed the effect of different testing methods. In all experiments, an adaptive procedure was used to assess voice discrimination. In addition, higher-order cognitive abilities such as non-verbal intelligence, attention, and processing speed were evaluated.

The results showed that the youngest children and the older adults displayed the poorest voice discrimination, with significant correlations between voice discrimination and top-down cognitive abilities: children and older adults with better attention skills and faster processing speed (Figure 1) achieved better voice discrimination. In addition, voice discrimination in children was shown to depend more on comprehensive acoustic and linguistic information than in young adults, and the children’s ability to form an acoustic template in memory to serve as a perceptual anchor for the task was less efficient. The outcomes provide important insight into the effect of age on basic auditory abilities and suggest that voice discrimination is less automatic for children and older adults, perhaps as a result of less mature or deteriorated peripheral (spectral and/or temporal) processing. These findings may partly explain the difficulties of children and older adults in understanding speech in multi-talker situations.
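This summary does not detail the adaptive procedure, so the sketch below shows a generic two-down/one-up staircase of the kind commonly used to estimate discrimination thresholds. The starting difference, step size, and stopping rule are illustrative assumptions, not the study's exact parameters.

```python
def staircase(present_trial, start_diff=20.0, step=2.0, n_reversals=8):
    """Estimate a discrimination threshold with a two-down/one-up rule.

    present_trial(diff) runs one trial at the given voice-cue difference
    (e.g., percent change in fundamental frequency) and returns True if
    the listener responded correctly.
    """
    diff, last_dir, reversals, streak = start_diff, 0, [], 0
    while len(reversals) < n_reversals:
        if present_trial(diff):
            streak += 1
            if streak == 2:                 # two correct in a row -> harder
                streak = 0
                if last_dir == +1:          # direction changed: a reversal
                    reversals.append(diff)
                last_dir = -1
                diff = max(diff - step, 0.1)
        else:                               # any error -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(diff)
            last_dir = +1
            diff += step
    return sum(reversals) / len(reversals)  # threshold ~ mean of reversals
```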

Figure 1: Individual voice discrimination results for (a) the children and (b) the older adults as a function of their scores in the Trail Making Test, which assesses attention skills and processing speed.

There is a different way to define the acoustic environment

Semiha Yilmazer – semiha@bilkent.edu.tr

Department of Interior Architecture and Environmental Design, Bilkent University, Ankara 06800, Turkey

Ela Fasllija, Enkela Alimadhi, Zekiye Şahin, Elif Mercan, Donya Dalirnaghadeh

Popular version of 5aPP9 – A Corpus-based Approach to Define Turkish Soundscape Attributes
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0019179

We hear sound wherever we are: on buses, in streets, in cafeterias, museums, universities, halls, churches, mosques, and so forth. How we describe sound environments (soundscapes) changes according to the different experiences we have throughout our lives. Based on this, we wonder how people describe sound environments and, thus, how they perceive them.

There are reasons to believe that the affective attributes people use to describe soundscapes differ in a Turkish context. Considering the historical and cultural differences between countries, we thought it would be important to assess the sound environment by asking individuals of different ages from all over Turkey. To do this, we used a corpus-driven approach (CDA) from cognitive linguistics, which allowed us to collect data from laypersons and identify how they characterize soundscapes through the adjectives they use.

The aim of this study is to discover linguistically and culturally appropriate Turkish equivalents of soundscape attributes. The study involved two phases. In the first phase, an online questionnaire was distributed to native Turkish speakers proficient in English, asking them for adjective descriptions of their auditory environment and for English-to-Turkish translations. This CDA phase yielded 79 adjectives.


Figure 1: Example public spaces (a library and a restaurant)

Examples: audio 1, audio 2

In the second phase, a semantic-scale questionnaire was used to evaluate recordings of different acoustic environments in public spaces. The set comprised seven distinct types of public space: cafes, restaurants, concert halls, masjids, libraries, study areas, and design studios. The recordings were collected at various times of day so that they captured different levels of crowdedness and other specific features. A total of 24 audio recordings were evaluated for validity, each listened to 10 times by different participants, yielding 240 randomly ordered evaluations in total; participants rated 79 adjectives per recording on a five-point Likert scale.


Figure 2: The research process and results

The results were analyzed using principal component analysis (PCA), which showed that there are two main components of soundscape attributes: Pleasantness and Eventfulness. The components were organized in a two-dimensional model in which each is associated with a main orthogonal axis, annoying-comfortable and uneventful-dynamic. This circular organization of soundscape attributes is supported by two additional axes, chaotic-calm and monotonous-enjoyable. It was also observed that, in the Turkish circumplex, the Pleasantness axis was formed by adjectives derived from verbs in a causative form, expressing the emotion the space causes the user to feel. Turkish also has a different lexical composition from many other languages: several suffixes are added to a root term to impose different meanings. For instance, the Turkish translation of tranquilizer is sakin-leş (reciprocal suffix) -tir (causative suffix) -ici (adjective suffix).
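For readers unfamiliar with PCA, the sketch below illustrates the general shape of such an analysis on randomly generated placeholder data (not the study's ratings): each recording is a row, each adjective a column, and the first two components summarize the main axes of variation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative placeholder data: 24 recordings rated on 79 adjectives
# (1-5 Likert scale); the real study used participants' actual ratings.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(24, 79)).astype(float)

pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)   # each recording's position on the two components
loadings = pca.components_            # each adjective's weight on the two components

# In the study, the adjectives loading on the first two components were
# interpreted as the Pleasantness and Eventfulness dimensions.
print(pca.explained_variance_ratio_)
```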

The study demonstrates how cultural differences affect sound perception and the role language plays in expressing it. The method extends beyond soundscape research and may benefit other translation projects. Further investigations could examine other cultures and undertake cross-cultural analyses.

What parts of the brain are stimulated by cochlear implants in children with one deprived ear?

Karen Gordon – karen.gordon@utoronto.ca

Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children, University of Toronto, Toronto, ON, M5G 1X8, Canada

Additional Authors – Anderson, C., Jiwani, S., Polonenko, M., Wong, D.D.E., Cushing, S.L., Papsin, B.C.

Additional Links
SickKids: https://lab.research.sickkids.ca/archies-cochlear-implant/
Hear Here Podcast: https://linktr.ee/hearherepodcast

Popular version of 3aPP5 – Non-auditory processing of cochlear implant stimulation after unilateral auditory deprivation in children
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018669

Decades of research have shown that hearing from only one ear in childhood should not be dismissed as a “minimal” hearing problem, as it can impair language, cognitive, and academic development. We have been exploring whether unilateral hearing affects the developing brain. A series of studies has been done in children who have one deaf ear and who hear from the other side through a normal or typically hearing ear, a hearing aid, or a cochlear implant. We record the electrical fields of brain activity from electrodes placed on the surface of the head (electroencephalography); we then calculate which parts of the brain are responding.

The findings show that auditory pathways from the hearing ear to the auditory cortices are strengthened in children with long-term unilateral hearing. In other words, the hearing brain has developed a preference for the hearing ear. As shown in Figure 1, responses from the better hearing ear also came from areas of the brain involved in attention and other sensory processing. This means that areas beyond the auditory parts of the brain are involved in hearing from the better ear.

Figure 1 legend: Cortical areas abnormally active from the experienced ear in children with long periods of unilateral cochlear implant use include left frontal cortex and precuneus. Adapted from Jiwani S, Papsin BC, Gordon KA. Early unilateral cochlear implantation promotes mature cortical asymmetries in adolescents who are deaf. Hum Brain Mapp. 2016 Jan;37(1):135-52. doi: 10.1002/hbm.23019. Epub 2015 Oct 12. PMID: 26456629; PMCID: PMC6867517.

We also asked whether there were brain changes from the ear deprived of sound. This question was addressed by measuring cortical responses in three cohorts of children with unilateral hearing who received a cochlear implant in their deaf ear: children with single-sided deafness, bilateral hearing aid users with asymmetric hearing loss, and unilateral cochlear implant users. Many of these children showed atypical responses from the cochlear implant, with unusually strong responses from the brain on the same side as the deaf, implanted ear. As shown in Figure 2, this unusual response was clearest in children who had not heard from that ear for several years (Figure 2A) and was already present during the first year of bilateral implant use (Figure 2B).

Figure 2 legend: Cortical responses evoked by the second cochlear implant (CI-2) in children receiving bilateral devices. A) Whereas expected contralateral lateralization of activity is evoked in children with short periods of unilateral deprivation/short delays to bilateral implantation, abnormal ipsilateral responses are found in children with long periods of unilateral deprivation despite several years of bilateral CI use. Adapted from Gordon KA, Wong DD, Papsin BC. Bilateral input protects the cortex from unilaterally-driven reorganization in children who are deaf. Brain. 2013 May;136(Pt 5):1609-25. doi: 10.1093/brain/awt052. Epub 2013 Apr 9. PMID: 23576127. B) Abnormal ipsilateral responses are also found throughout the first year of bilateral CI use in children with long periods of unilateral deprivation/long delays to bilateral CI. Adapted from Anderson CA, Cushing SL, Papsin BC, Gordon KA. Cortical imbalance following delayed restoration of bilateral hearing in deaf adolescents. Hum Brain Mapp. 2022 Aug 15;43(12):3662-3679. doi: 10.1002/hbm.25875. Epub 2022 Apr 15. PMID: 35429083; PMCID: PMC9294307.

New analyses have shown that this response from the CI in the longer-deprived ear includes areas of the brain involved in attention, language, and vision.

Results across these studies demonstrate brain changes that occur in children with unilateral hearing/deprivation. Some of these changes happen in the auditory system, but others involve other brain areas, suggesting that multiple parts of the brain are working when children listen with their cochlear implants.

Vocal Tract Size, Shape Dictate Speech Sounds

Main anatomical shape factors of the vocal tract. Credit: Antoine Serrurier

WASHINGTON, March 21, 2023 – Only humans have the ability to use speech. Remarkably, this communication is understandable across accent, social background, and anatomy despite a wide variety of ways to produce the necessary sounds. In JASA, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from…

From the Journal: The Journal of the Acoustical Society of America
Article: Morphological and acoustic modeling of the vocal tract
DOI: 10.1121/10.0017356