A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and other Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but patients often must purchase a device before they can try it in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software which allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the other places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides while patients try out features on a hearing aid. After a new hearing aid feature is turned on, the patient hears the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings right, hearing aid purchasers must also decide which ‘technology level’ they would like to purchase. Patients typically choose among three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per step up in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the hearing aid microphone positions while a patient wears the devices. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, which process and amplify the sound just as they would in that setting. All of the audio updates in real time so that a listener can rotate their head, just as they might in the real world. The system is currently under further development, and it is planned for use in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
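To make that signal path concrete, here is a minimal sketch in Python of the core rendering idea, under simplifying assumptions: each virtual sound source is convolved with an impulse response measured at the hearing aid microphone position for the listener’s current head orientation. All names and data structures are illustrative, not the project’s actual code.

import numpy as np

def render_hearing_aid_feed(sources, head_yaw_deg, ir_bank):
    """Mix all scene sources as 'heard' at one hearing-aid microphone.

    sources      -- list of (dry_mono_signal, source_id) tuples
    head_yaw_deg -- head rotation reported by the VR tracker
    ir_bank      -- ir_bank[source_id][yaw] = measured impulse response
    """
    yaw = int(round(head_yaw_deg)) % 360   # snap to nearest measured orientation
    mix = None
    for dry, src_id in sources:
        wet = np.convolve(dry, ir_bank[src_id][yaw])
        mix = wet if mix is None else mix[: len(wet)] + wet[: len(mix)]
    # This mixed signal is what the wires feed to the hearing aid in place
    # of its removed acoustic microphone.
    return mix

A real-time version would run this block by block, so the scene updates within milliseconds as the listener turns their head, as described above.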

Video 1: The VR software demonstrating the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid on a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, listening to the frontal talker becomes much easier and more comfortable, highlighting the benefits of this feature in a premium hearing aid.

Here we are…Hear our story! Brothertown Indian Heritage, through acoustic research and technology

Seth Wenger – seth.wenger@nyu.edu

Settler Scholar and Public Historian with Brothertown Indian Nation, Ridgewood, NY, 11385, United States

Jessica Ryan – Vice Chair of the Brothertown Tribal Council

Popular version of 3pAA6 – Case study of a Brothertown Indian Nation cultural heritage site–toward a framework for acoustics heritage research in simulation, analysis, and auralization
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018718

The Brothertown Indian Nation has a centuries-old heritage of group singing. Although this singing is an intangible heritage, these aural practices have left a tangible record through published music, as well as extensive personal correspondence and journal entries about the importance of singing in the political formation of the Tribe. One specific tangible artifact of Brothertown ancestral aural heritage, and the focus of the acoustic research in this case study, is a house built in the 18th century by Andrew Curricomp, a Tunxis Indian.

Figure 1: Images courtesy of authors

In step with the construction of the house at Tunxis Sepus, Brothertown political formation also solidified in the 18th century among members of seven parent Tribes: various Native communities of Southern New England, including Mohegan, Montauk, Narragansett, Niantic, Stonington (Pequot), Groton/Mashantucket (Pequot), and Farmington (Tunxis). Settler colonial pressure along the Northern Atlantic coast forced Brothertown Indian ancestors to leave various Indigenous towns and settlements and form a body politic named Brotherton (Eeyamquittoowauconnuck). Nearly a century later, after multiple forced relocations, the Tribe, including many of Andrew Curricomp’s grandchildren and great-grandchildren, was displaced again to the Midwest. Today, after nearly two more centuries, the Brothertown Indian Nation Community Center and museum are located in Fond du Lac, WI, just south of the Tribe’s original Midwestern settlement.

During contemporary trips back to visit parent tribes, members of the Brothertown Indian Nation have visited the Curricomp House at Tunxis Sepus.

Figure 2: Image courtesy of authors

However, by then it was known as the William Day Museum of Indian Artifacts. After the many relocations of Brothertown and their parent Tribes, the Curricomp house was purchased by a local landowner of European descent. The landowner’s groundskeeper, Bill Day, had a hobby of collecting lithic (stone) artifacts he would find while gardening around the property. The landowner decided that the Curricomp house would be a perfect home for his groundskeeper’s collection, as it was locally told that the house had belonged to the last living Indian in the town. He had the Curricomp House moved to his property and named it for his gardener: the William Day Museum of Indian Artifacts.

The myth of the vanishing Indian is a common trope in popular Western culture. This colonial, “last living Indian” history, which dominates the archive, includes no real information about what Native communities actually used the space for, or where the descendants of Tunxis now live. This acoustics case study intends for the living descendants of Tunxis Sepus to have sovereignty over the digital content created, as the house serves as a tangible cultural signifier of their intangible aural heritage.

Architectural acoustic heritage throughout Brothertown’s history of displacement is of value to the Tribe’s vibrant contemporary culture. Many of these tangible heritage sites have been made intangible to the Brothertown Community, as they are settler-owned, demolished, or geographically inaccessible to the Brothertown diaspora, requiring creative solutions to make this heritage available. Both in-situ and web-based immersive interfaces are being designed to interact with the acoustic properties of the Curricomp house.

Figure 3: Image courtesy of authors

These interfaces use various music and speech source media that feature Brothertown aural heritage. The acoustic simulations and auralizations created during this case study of the Curricomp House are tools: a means by which living descendants might hear one another in the difficult-to-access acoustic environments of their ancestors.
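For readers curious about the mechanics: auralization, in its simplest form, is a convolution of a ‘dry,’ echo-free recording with a room impulse response. The Python sketch below illustrates that step under that standard definition; the file names are hypothetical, and the project’s actual pipeline is not shown here.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical inputs: a mono, echo-free recording of singing and a
# simulated impulse response of the Curricomp house interior.
fs, dry = wavfile.read("brothertown_hymn_dry.wav")
_, rir = wavfile.read("curricomp_house_sim_rir.wav")

# Convolving the dry singing with the room's impulse response places the
# performance acoustically "inside" the simulated house.
auralized = fftconvolve(dry.astype(float), rir.astype(float))
auralized /= np.max(np.abs(auralized))   # normalize to avoid clipping
wavfile.write("hymn_in_curricomp_house.wav", fs,
              (auralized * 32767).astype(np.int16))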

Acoustics should be a bigger piece in the building decarbonization puzzle

Jonathan Broyles – j.broyles@psu.edu

The Pennsylvania State University, University Park, PA, 16801, United States

Popular version of 5aAA6 – Acoustic design trade-offs when reducing the carbon footprint of buildings
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0019111

The built environment is responsible for upwards of 40% of global carbon emissions, a fact that has sparked a change in how buildings are designed in an effort to mitigate the global climate crisis. Design goals that reduce the carbon footprint of a building directly affect acoustics, potentially causing unintended acoustical consequences. Yet building acoustics is often considered much later in the design of a building, if at all, resulting in missed opportunities to harmonize sustainable and acoustical design goals. Significant research is needed to further understand acoustic-decarbonization trade-offs, as preliminary results found that research at the intersection of building decarbonization and acoustic design lags well behind other building disciplines (see Figure 1). Despite the lack of published work, several studies suggest that holistic building design solutions are possible, including balancing the mass distribution of structures to achieve high sound insulation with less material, designing with natural materials that can reduce echoes, and selecting efficient mechanical systems that prevent unwanted noise.

Figure 1: Publication trends for five building disciplines and building decarbonization.

Building carbon emissions can be reduced by designing sustainable structural elements (including green roofs and mass timber structures) and by reducing the material consumption of high-carbon structural elements (such as concrete floors, as shown in Figure 2). Innovations in building and construction materials can further improve carbon savings by cutting emissions both during material manufacturing and during building operation. Such strategies include the use of natural materials (including straw bales and compressed earth blocks), concrete mixes with lower cement proportions, and material optimization. Carbon emissions during the service life of a building can also be reduced by selecting more efficient systems, such as multi-pane windows and smart mechanical systems. These solutions highlight the interdisciplinary nature of building design, as decisions in one discipline can directly influence acoustic performance.

Figure 2: Example of synergizing sustainable, acoustical, and structural design goals. Image courtesy of Broyles et al., 2023.

Many of the strategies that reduce carbon emissions while balancing acoustic design goals carry important trade-offs that should be further studied. Sustainable structures can have unfavorable sound insulation due to a lack of mass. Natural materials often deteriorate at a faster rate than conventional materials. And the upfront cost and maintenance of efficient systems can make these solutions unattractive to building owners. These trade-offs emphasize the importance of further research at the intersection of building decarbonization and acoustics, to better understand how to provide sustainable solutions that benefit the planet, building occupants, and building owners. Future decarbonization technologies will need to consider acoustic implications to prevent post-construction retrofits and other design modifications. As the building industry continues to pursue aggressive sustainability targets, a holistic approach to building design is needed to deliver a truly sustainable building.
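As one concrete illustration of the insulation-versus-mass trade-off, the short Python sketch below applies the textbook ‘mass law’ approximation for airborne sound transmission loss. It is a simplification for illustration, with made-up numbers, not the analysis from the study.

import math

def mass_law_tl(surface_mass_kg_m2, freq_hz):
    # Approximate airborne sound transmission loss (dB) of a single
    # homogeneous panel: TL = 20*log10(m*f) - 47.
    return 20 * math.log10(surface_mass_kg_m2 * freq_hz) - 47

full_slab = mass_law_tl(500, 500)   # hypothetical heavy concrete floor at 500 Hz
reduced   = mass_law_tl(250, 500)   # same floor with half the material (and carbon)
print(f"full slab: {full_slab:.1f} dB, reduced slab: {reduced:.1f} dB")
# full slab: 61.0 dB, reduced slab: 54.9 dB, roughly a 6 dB insulation
# penalty, which is why optimized floors must place mass where it matters.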

What is a webchuck?

Chris Chafe – cc@ccrma.stanford.edu

Stanford University
CCRMA / Music
Stanford, CA 94305
United States

Ge Wang
Stanford University

Michael Mulshine
Stanford University

Jack Atherton
Stanford University

Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058

Take all of computer music, advances in programming digital sound, and the web and its browsers, and create an enjoyable playground for sound exploration. That’s Webchuck. Webchuck is a new platform for real-time, web-based music synthesis. What would it chuck? Primarily, musical and artistic projects in the form of webapps featuring real-time sound generation. For example, The Metered Tide, in the video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.

Video 1: The Metered Tide

The backing track plays a sonification of a century’s worth of sea level data collected at the location while the musician records the live session. Webchuck has fulfilled a long-sought promise of accessible music making and simple experimentation.

Video 2: The Metered Tide with backing track
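The actual backing track is a Webchuck program written in Chuck; the short Python sketch below only illustrates the underlying sonification idea of mapping each sea-level reading to a pitch. The data values here are invented for illustration.

import numpy as np
from scipy.io import wavfile

FS = 44_100
# Invented stand-in for a century of annual mean sea-level readings (mm).
years = 100
sea_level_mm = np.linspace(0, 200, years) + np.random.randn(years) * 10

# Map the data range onto two octaves of pitch (MIDI notes 48-72, C3-C5).
lo, hi = sea_level_mm.min(), sea_level_mm.max()
midi = 48 + 24 * (sea_level_mm - lo) / (hi - lo)
freqs = 440.0 * 2 ** ((midi - 69) / 12)   # MIDI note number -> Hz

# Render each year as a quarter-second sine tone and write a WAV file.
t = np.linspace(0, 0.25, int(FS * 0.25), endpoint=False)
tones = np.concatenate([0.3 * np.sin(2 * np.pi * f * t) for f in freqs])
wavfile.write("sea_level_sonification.wav", FS, (tones * 32767).astype(np.int16))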

Example webapps from this new Webchuck critter are popping up rapidly, and a growing body of musicians and students enjoys how easily they can produce music on any system. New projects are fun to program and can be made to appear anywhere. Sharing work and adapting prior examples is a breeze. New webapps are created by programming in the Chuck musical programming language and can be extended with JavaScript for open-ended possibilities.

Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding is the most common way of using Webchuck: developing a program by hearing changes as they are made. Rapid prototyping in sound has been made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck’s ease of abstraction so that programmers can build up from low-level details to higher-level features.

Combining the expressive music programming power of Chuck with the ubiquity of web browsers is a game changer, as researchers have observed in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music, and then some.

Improving Child Development by Monitoring Noisy Daycares #ASA183

Noise levels can negatively impact children and staff but focusing on the sound environment can help.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

NASHVILLE, Tenn., Dec. 8, 2022 – During some of their most formative years, many children go to daycare centers outside their homes. While there, they require a supportive, healthy environment that includes meaningful speech and conversation. This hinges on the soundscape of the childcare center.

Understanding the soundscape in a daycare center can improve childhood development. Credit: George G. Meade Public Affairs Office

In his presentation at the 183rd Meeting of the Acoustical Society of America, Kenton Hummel of the University of Nebraska – Lincoln (UNL) will describe how soundscape research in daycares can improve child and provider outcomes and experiences. The presentation, “Applying unsupervised machine learning clustering techniques to early childcare soundscapes,” will take place on Dec. 8 at 11:25 a.m. Eastern U.S. in the Summit A room, as part of the meeting running Dec. 5-9 at the Grand Hyatt Nashville Hotel.

“Few studies have rigorously examined the indoor sound quality of childcare centers,” said Hummel. “The scarcity of research may deprive providers and engineers from providing the highest quality of care possible. This study aims to better understand the sound environment of childcare centers to pave the way toward better childcare.”

The goal of the research is to understand the relationship between noise and people. High noise levels and long periods of loud, fluctuating sound can negatively impact children and staff by increasing the effort it takes to communicate. In contrast, low background noise allows for meaningful speech, which is essential for language, brain, cognitive, and social/emotional development.

Hummel is a member of the UNL Soundscape Lab led by Dr. Erica Ryherd. Their team collaborated with experts in engineering, sensing, early childcare, and health to monitor three daycare centers for 48-hour periods. They also asked staff to evaluate the sound in their workplace. From there, they used machine learning to characterize the acoustic environment and determine what factors influence the child and provider experience.

“Recent work in offices, hospitals, and schools has utilized machine learning to understand their respective environments in a way that goes beyond typical acoustic analyses,” said Hummel. “This work utilizes similar machine learning techniques to build and expand on that work.”
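As a rough illustration of that general approach (not the team’s actual code), the Python sketch below builds a toy feature table for 10-minute intervals over a 48-hour period and groups the intervals with k-means clustering. The features, their values, and the cluster count are all assumptions for the sake of the example.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy feature table: one row per 10-minute interval over 48 hours (288 rows).
# Columns: average level (dBA), level fluctuation (dB), speech-band energy.
features = rng.normal(loc=[60.0, 8.0, 0.5], scale=[8.0, 3.0, 0.2], size=(288, 3))

X = StandardScaler().fit_transform(features)   # put features on a common scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Each cluster is then interpreted against schedules and staff surveys,
# e.g. a "quiet nap time" state versus a "loud free play" state.
for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} intervals")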

How Behind-the-Scenes Sound Mixing Makes Movie Magic #ASA183

Capturing consistent room tones and ambience enhances dialogue and draws the audience in.

NASHVILLE, Tenn., Dec. 7, 2022 – If you’ve ever watched a movie where the audio is out of sync, it quickly becomes obvious that smooth, consistent sound is critical for movie enjoyment, especially during dialogue. Even slight discrepancies in background noise can disrupt a moviegoer’s experience.

Jeffrey Reed demonstrates the behind-the-scenes audio engineering required to recreate the acoustics of a movie set. Credit: Jeffrey Reed

At the upcoming meeting of the Acoustical Society of America, Jeffrey Reed of Taproot Audio Design will demonstrate the behind-the-scenes audio engineering required to recreate the acoustics of movie sets and locations. During the session, “Modern movie sound: reality and simulated reality,” Reed will share short clips of film to compare the original recording to the studio mixed product. The presentation will take place on Dec. 7 at 2:00 p.m. Eastern U.S. in the Summit A room at the Grand Hyatt Nashville Hotel, as part of ASA’s 183rd meeting running Dec. 5-9.

“Nearly everything you hear in a film has been added later or enhanced for effect. Consistency in background noise has a major impact, especially on dialogue in a movie,” said Reed. “Sometimes every single line of dialogue in a scene can have a different noise profile – the sound in the background varies and makes the sound choppy and disjointed. It’s up to us to smooth that out.”

Modern movie sound mixing uses techniques like impulse responses to reproduce dialogue and other sounds. These methods are crucial to align what moviegoers see and hear and keep them engaged in the story.

An impulse response is a short recording that allows audio engineers to recreate the acoustics of a room. Engineers capture it by recording how a reference sound reverberates through the unique layout of a space. The impulse response is then applied to the dry studio audio to digitally recreate the sound of that space and make the resulting scene of the film as believable as possible.
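In signal processing terms, this technique is convolution reverb. The short Python sketch below illustrates the idea with hypothetical file names; it is not Reed’s actual toolchain.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical inputs: a clean studio re-recording of a line of dialogue
# and an impulse response captured on the original set or location.
fs, dry_line = wavfile.read("dialogue_studio_dry.wav")
_, room_ir = wavfile.read("set_impulse_response.wav")

# Convolution stamps the room's reverberation onto the dry line, so the
# re-recorded dialogue sounds like it was spoken in the original space.
wet = fftconvolve(dry_line.astype(float), room_ir.astype(float))
wet /= np.max(np.abs(wet))   # normalize to prevent clipping
wavfile.write("dialogue_in_room.wav", fs, (wet * 32767).astype(np.int16))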

“There are a lot of moving parts to a film mix, from dialog and effects to the ever-important musical score,” said Reed. “Each and every one is crucial to a film, and the joy of mixing is finding out what needs to be where at the right time. When it’s all said and done, though, dialog is king in a film mix and everything must carefully revolve around it.”

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASAFALL22&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.