Beyond Necessity, Hearing Aids Bring Enjoyment Through Music #ASA184

Hearing aids aren’t particularly good at preserving the sound quality of music – but some manufacturers do better than others.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 8, 2023 – For decades, hearing aid development has focused on improving communication by separating speech from background noise. The technology has made strides with speech, but it remains subpar when it comes to music.

Over the years, hearing aids have improved in terms of speech. But they are still subpar when it comes to music. Credit: Emily Sandgren

In their talk, “Evaluating the efficacy of music programs in hearing aids,” Emily Sandgren and Joshua Alexander of Purdue University will describe experiments to determine the best hearing aids for listening to music. The presentation will take place Monday, May 8, at 11:45 a.m. Eastern U.S. in the Indiana/Iowa room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

“Americans listen to music for more than two hours a day on average, and music can be related to mental and emotional health. But research over the past two decades has shown that hearing aid users are dissatisfied with the sound quality of music when using their hearing aids,” said Sandgren. “People with hearing loss deserve both ease of communication and to maintain quality of life by enjoying sources of entertainment like music.”

In response to this problem, hearing aid manufacturers have designed music programs for their devices. To test and compare these programs, Sandgren and Alexander made more than 200 recordings of music samples as processed by hearing aids from seven popular manufacturers.

They asked study participants to rate the sound quality of these recordings and found that the hearing-aid recordings were rated lower for music than the unprocessed control stimuli. The researchers found bigger differences in music quality between hearing aid brands than between each brand’s speech and music programs, with two manufacturers standing out from the rest.

The team is still trying to determine the causes behind these differences.

“One contributing factor is how hearing aids adapt to loud, sudden sounds,” said Sandgren. “When you’re listening to a conversation, if a door slams behind you, you don’t want that door slam to be amplified very much. But with music, there are loud sudden sounds that we do want to hear, like percussion instruments.”
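To make that trade-off concrete, here is a minimal sketch of a fast-acting compressor, the kind of automatic gain control described above. It is an illustration only, not any manufacturer’s algorithm, and the threshold, ratio, and time constants are assumed values.

```python
# A minimal sketch, not any manufacturer's algorithm: a fast-acting
# compressor clamps sudden peaks. Good for a door slam, bad for a drum hit.
import numpy as np

def compress(x, fs, threshold=0.2, ratio=4.0, attack_ms=2.0, release_ms=50.0):
    """Peak-envelope compressor with attack/release smoothing (illustrative values)."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))    # fast tracking while level rises
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))   # slow tracking while level falls
    env, out = 0.0, np.empty_like(x)
    for i, sample in enumerate(x):
        mag = abs(sample)
        coeff = att if mag > env else rel
        env = coeff * env + (1.0 - coeff) * mag       # smoothed level estimate
        if env > threshold:                           # above threshold, reduce gain
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out[i] = sample * gain
    return out

fs = 16000
t = np.arange(fs) / fs
sig = 0.1 * np.sin(2 * np.pi * 220 * t)    # steady, speech-level tone
sig[8000:8200] += 0.9                      # sudden transient: door slam or drum hit
print("peak before: %.2f  after: %.2f" % (sig.max(), compress(sig, fs).max()))
```

With a 2-millisecond attack, the compressor clamps the door slam almost instantly, but it would clamp a drum hit just as readily, which is why a music program might prefer slower, gentler settings.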

Distortion may be one of the biggest problems. Unlike speech, music often has intense low-frequency harmonics.

“Our analyses suggest that brands rated highest in music quality processed the intense ultralow frequency peaks with less distortion than those rated lowest in music quality,” said Alexander.
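That kind of distortion can be quantified. The sketch below, a hypothetical illustration rather than the researchers’ analysis pipeline, passes an intense 100 Hz tone through a soft-clipping nonlinearity and estimates total harmonic distortion (THD) from the spectrum.

```python
# A hypothetical illustration, not the authors' analysis pipeline: pass an
# intense 100 Hz tone through soft clipping and estimate its harmonic distortion.
import numpy as np

fs, f0 = 48000, 100                            # sample rate; intense bass fundamental
t = np.arange(fs) / fs                         # one second of signal
tone = 0.9 * np.sin(2 * np.pi * f0 * t)
clipped = np.tanh(3.0 * tone) / np.tanh(3.0)   # soft-clipping nonlinearity

def peak_near(x, freq, fs):
    """Spectral peak magnitude near a target frequency."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = round(freq * len(x) / fs)
    return spectrum[k - 2 : k + 3].max()

fund = peak_near(clipped, f0, fs)
harmonics = [peak_near(clipped, n * f0, fs) for n in range(2, 6)]
thd = np.sqrt(sum(h * h for h in harmonics)) / fund
print(f"THD of the clipped {f0} Hz tone: {100 * thd:.1f}%")
```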

This work will improve future technology and help audiologists select the best current hearing aids for their patients.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Here we are…Hear our story! Brothertown Indian Heritage, through acoustic research and technology

Seth Wenger – seth.wenger@nyu.edu

Settler Scholar and Public Historian with Brothertown Indian Nation, Ridgewood, NY, 11385, United States

Jessica Ryan – Vice Chair of the Brothertown Tribal Council

Popular version of 3pAA6 – Case study of a Brothertown Indian Nation cultural heritage site–toward a framework for acoustics heritage research in simulation, analysis, and auralization
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018718

The Brothertown Indian Nation has a centuries-old heritage of group singing. Although this singing is an intangible heritage, these aural practices have left a tangible record through published music, as well as extensive personal correspondence and journal entries about the importance of singing in the political formation of the Tribe. One specific tangible artifact of Brothertown ancestral aural heritage–and the focus of the acoustic research in this case study–is a house built in the 18th century by Andrew Curricomp, a Tunxis Indian.

Figure 1: Images courtesy of authors

In step with the construction of the house at Tunxis Sepus, Brothertown political formation also solidified in the 18th century among members of seven parent Tribes: various Native communities of Southern New England, including Mohegan, Montauk, Narragansett, Niantic, Stonington (Pequot), Groton/Mashantucket (Pequot), and Farmington (Tunxis). Settler colonial pressure along the Northern Atlantic coast forced Brothertown Indian ancestors to leave various Indigenous towns and settlements and form a body politic named Brotherton (Eeyamquittoowauconnuck). Nearly a century later, after multiple forced relocations, the Tribe–including many of Andrew Curricomp’s grandchildren and great-grandchildren–was displaced again to the Midwest. Today, after nearly two more centuries, the Brothertown Indian Nation Community Center and museum are located in Fond du Lac, WI, just south of the Tribe’s original Midwestern settlement.

During contemporary trips back to visit parent tribes, members of the Brothertown Indian Nation have visited the Curricomp House at Tunxis Sepus.

Figure 2: Image courtesy of authors

However, by then it was known as the William Day Museum of Indian Artifacts. After the many relocations of Brothertown and their parent Tribes, the Curricomp house was purchased by a local landowner of European descent. The landowner’s groundskeeper, Bill Day, had a hobby of collecting stone lithic artifacts he found while gardening around the property. Because it was locally told that the house had belonged to the last living Indian in the town, the landowner decided the Curricomp house would be a perfect home for his groundskeeper’s collection. He had the house moved to his property and named it for his gardener: the William Day Museum of Indian Artifacts.

The myth of the vanishing Indian is a commonly held trope in popular Western culture. This colonial, or “last living Indian,” history dominates the archive yet includes no real information about what Native communities actually used the space for, or where the descendants of Tunxis are now living. This acoustics case study intends for the living descendants of Tunxis Sepus to have sovereignty over the digital content created, as the house serves as a tangible cultural signifier of their intangible aural heritage.

Architectural acoustic heritage throughout Brothertown’s history of displacement is of value to the Tribe’s vibrant contemporary culture. Many of these tangible heritage sites have been made intangible to the Brothertown community, as they are settler-owned, demolished, or geographically inaccessible to the Brothertown diaspora–requiring creative solutions to make this heritage available. Both in-situ and web-based immersive interfaces are being designed to interact with the acoustic properties of the Curricomp house.

Figure 3: Image courtesy of authors

These interfaces use various music and speech source media that feature Brothertown aural heritage. The acoustic simulations and auralizations created during this case study of the Curricomp House are tools: a means by which living descendants might hear one another in the difficult-to-access acoustic environments of their ancestors.

What is a Webchuck?

Chris Chafe – cc@ccrma.stanford.edu

Stanford University
CCRMA / Music
Stanford, CA 94305
United States

Ge Wang
Stanford University

Michael Mulshine
Stanford University

Jack Atherton
Stanford University

Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058

Take all of computer music, advances in programming digital sound, the web, and web browsers, and create an enjoyable playground for sound exploration. That’s Webchuck. Webchuck is a new platform for real-time web-based music synthesis. What would it chuck? Primarily, musical and artistic projects in the form of webapps featuring real-time sound generation. For example, The Metered Tide, in the first video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.

Video 1: The Metered Tide

The backing track plays a sonification of a century’s worth of sea level data collected at the location while the musician records the live session. Webchuck has fulfilled a long-sought promise of accessible music making and simple experimentation.

Video 2: The Metered Tide with backing track
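The actual backing track is a Webchuck program written in the Chuck language. As a rough sketch of the underlying idea, the Python snippet below maps a made-up sea level series onto pitch and writes the result to a WAV file; the data values and the pitch mapping are assumptions for illustration.

```python
# A conceptual sketch only: the real Metered Tide backing track is a Webchuck
# program in the Chuck language. Here, a made-up sea level series is mapped
# to pitch and written to a WAV file.
import math, struct, wave

sea_level_mm = [20, 35, 55, 80, 110, 150, 180, 210]   # hypothetical yearly values
fs, note_dur = 44100, 0.4                             # sample rate, seconds per note

lo, hi = min(sea_level_mm), max(sea_level_mm)
samples = []
for level in sea_level_mm:
    # Map the data range onto a two-octave pitch range starting at 220 Hz
    freq = 220.0 * 2 ** (2 * (level - lo) / (hi - lo))
    samples += [0.3 * math.sin(2 * math.pi * freq * n / fs)
                for n in range(int(fs * note_dur))]

with wave.open("tide_sonification.wav", "w") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(fs)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```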

Example webapps from this new Webchuck critter are popping up rapidly, and a growing community of musicians and students enjoys how easily they can produce music on any system. New projects are fun to program and can be made to appear anywhere. Sharing work and adapting prior examples is a breeze. New webapps are created by programming in the Chuck musical programming language and can be extended with JavaScript for open-ended possibilities.

Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding is the most common way of using Webchuck: developing a program by hearing changes as they are made. Rapid prototyping in sound has been made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck’s ease of abstraction so that programmers can build up from low-level details to higher-level features.

Combining the expressive music programming power of Chuck with the ubiquity of web browsers has proven to be a game changer in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music, and then some.

Virtual Reality Musical Instruments for the 21st Century

Rob Hamilton – hamilr4@rpi.edu
Twitter: @robertkhamilton

Rensselaer Polytechnic Institute, 110 8th St, Troy, New York, 12180, United States

Popular version of 1aCA3 – Real-time musical performance across and within extended reality environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018060

Have you ever wanted to just wave your hands to make beautiful music? Sad that your epic air-guitar skills don’t translate into pop/rock superstardom? Given the speed and accessibility of modern computers, it may come as little surprise that artists and researchers have been looking to virtual and augmented reality to build the next generation of musical instruments. Borrowing heavily from video game design, a new generation of digital luthiers is already exploring techniques to bring the joys and wonders of live musical performance into the 21st century.

Image courtesy of Rob Hamilton.

One such instrument is ‘Coretet’: a virtual reality bowed string instrument that can be reshaped by the user into familiar forms such as a violin, viola, cello, or double bass. While wearing a virtual reality headset such as Meta’s Oculus Quest 2, performers bow and pluck the instrument in familiar ways, albeit without any physical interaction with strings or wood. Sound is generated in Coretet using a ‘physical model’, a computer simulation of a bowed or plucked string, driven by the motion of the performer’s hands and their VR game controllers. And borrowing from multiplayer online games, Coretet performers can join a shared network server and perform music together.
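The presentation does not specify which algorithm Coretet’s string model uses, but the Karplus-Strong algorithm is the classic example of a plucked-string physical model and gives a feel for the technique. The sketch below is illustrative only.

```python
# The Karplus-Strong plucked-string algorithm, a classic physical model.
# Illustrative only: the source does not say Coretet uses this particular model.
import numpy as np

def pluck(freq, dur=1.0, fs=44100, damping=0.996):
    n = int(fs / freq)                     # delay-line length determines pitch
    line = np.random.uniform(-1, 1, n)     # initial noise burst acts as the pluck
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = line[i % n]
        # Averaging adjacent delay-line samples low-passes the loop,
        # mimicking how a real string loses high-frequency energy
        line[i % n] = damping * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

note = pluck(220.0)   # one second of a decaying A3 string tone
```

A noise burst circulates through a short delay line whose length sets the pitch; the gentle low-pass filtering in the loop makes the tone decay like a real string.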

Our understanding of music, and of live performance on traditional physical instruments, is tightly coupled to time: when a finger plucks a string or a stick strikes a drum head, we expect a sound immediately, without any delay or latency. And while modern computers can stream large amounts of data at the speed of light – significantly faster than the speed of sound – bottlenecks in the CPUs or GPUs themselves, in the code designed to mimic our physical interactions with instruments, or in the network connections that link users and computers often introduce latency, making virtual performances feel sluggish or awkward.
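A back-of-envelope budget shows how quickly these delays add up; every number in the sketch below is an illustrative assumption, not a measurement of Coretet.

```python
# A back-of-envelope latency budget for a networked virtual instrument.
# Every number here is an illustrative assumption, not a measurement of Coretet.
buffer_size, sample_rate = 512, 48000        # one audio callback block
audio_ms = 1000 * buffer_size / sample_rate  # ~10.7 ms to fill the buffer
network_rtt_ms = 40                          # typical consumer-internet round trip
tracking_ms = 11                             # about one frame of 90 Hz VR tracking
total_ms = audio_ms + network_rtt_ms + tracking_ms
print(f"audio {audio_ms:.1f} + network {network_rtt_ms} + tracking {tracking_ms} "
      f"= {total_ms:.1f} ms")
# Performers tend to notice delays beyond roughly 20-30 ms, so each stage matters.
```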

This research focuses on some common causes for this kind of latency and looks at ways that musicians and instrument designers can work around or mitigate these latencies both technically and artistically.

Coretet overview video: Video courtesy of Rob Hamilton.

Lead Vocal Tracks in Popular Music Go Quiet

An analysis of top popular music from 1946 to 2020 shows a marked decrease in volume of the lead vocal track and differences across musical genres.

Estimated lead-to-accompaniment ratio (LAR) for songs in five genres, 1990-2020. Purple circles correspond to solo artists and green squares to bands. Credit: Kai Siedenburg

WASHINGTON, April 25, 2023 – A general rule of music production involves mixing the various tracks so the lead singer’s voice sits in the foreground. But it is unclear how such mixing – and the closely related question of lyric intelligibility – has changed over the years.

Scientists from the University of Oldenburg in Germany analyzed hundreds of popular song recordings from 1946 to 2020 to determine how the level of the lead vocal track has changed over time.
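The study’s central measure is the lead-to-accompaniment ratio (LAR). The paper describes its own estimation method; one plausible formulation, shown here purely for illustration, is the level difference in decibels between the lead vocal and the accompaniment.

```python
# One plausible way to express a lead-to-accompaniment ratio (LAR) in decibels.
# The paper's actual estimation method is described in the article itself.
import numpy as np

def lar_db(lead, accompaniment):
    """Level difference in dB between the lead vocal and accompaniment signals."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(lead) / rms(accompaniment))

# Example: a vocal at half the accompaniment's amplitude gives LAR of about -6 dB
rng = np.random.default_rng(0)
print(f"{lar_db(0.5 * rng.normal(size=44100), rng.normal(size=44100)):.1f} dB")
```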

From the Journal: JASA Express Letters
Article: Lead-vocal level in recordings of popular music 1946-2020
DOI: 10.1121/10.0017773

Can a Playlist be Your Therapist? Balancing Emotions Through Music #ASA183

Music app provides therapy by consoling, relaxing, uplifting users

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

NASHVILLE, Tenn., Dec. 5, 2022 – Music has the potential to change emotional states and can distract listeners from negative thoughts and pain. It has also been shown to help improve memory, performance, and mood.

The Emotion Equalization App surveys your mood and energy to create a corresponding therapeutic playlist. Credit: Man Hei Law

At the upcoming meeting of the Acoustical Society of America, Man Hei Law of Hong Kong University of Science and Technology will present an app that creates custom playlists to help listeners care for their emotions through music. The presentation, “Emotion equalization app: A first study and results,” will take place at the Grand Hyatt Nashville Hotel on Dec. 5 at 3:15 p.m. Eastern U.S. in the Rail Head room, as part of ASA’s 183rd meeting running Dec. 5-9.

“As humanity’s universal language, music can significantly impact a person’s physical and emotional state,” said Law. “For example, music can help people to manage pain. We developed this app as an accessible first aid strategy for balancing emotions.”

The app could be used by people who may not want to receive counseling or treatment because of feelings of shame, inadequacy, or distrust. By taking listeners on an emotional roller-coaster ride, the app aims to leave them in a more positive and focused state than where they began.

Users take three self-led questionnaires in the app to measure their emotional status and provide the information needed to create a playlist. Current and long-term emotional status are gauged with a pictorial assessment tool that identifies emotions in terms of energy level and mood. Energy level can be high, medium, or low, and mood can register as positive, neutral, or negative. A Patient Health Questionnaire and a General Anxiety Disorder screening are also used to establish personalized music therapy treatments.

By determining the emotional state of the user, the app creates a customized and specifically sequenced playlist of songs using one of three strategies: consoling, relaxing, or uplifting. Consoling music reflects the energy and mood of the user, while relaxing music provides a positive, low energy. Uplifting music is also positive but more high energy.
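As a concrete sketch of that logic, the snippet below maps an assessed mood and energy to one of the three strategies. The function and its decision rules are hypothetical illustrations, not the authors’ code.

```python
# A purely hypothetical sketch of the strategy choice; the function name and
# decision rules are illustrative assumptions, not the authors' code.
def choose_strategy(mood: str, energy: str) -> str:
    """Map an assessed (mood, energy) state to one of the app's three methods."""
    if mood == "negative":
        return "consoling"     # first reflect the listener's own energy and mood
    if energy == "high":
        return "relaxing"      # positive but calming, low-energy music
    return "uplifting"         # positive, high-energy music

print(choose_strategy("neutral", "low"))   # -> uplifting
```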

“In our experiments, we found that relaxing and uplifting methods can significantly move listeners from negative to more positive emotional states. In particular, when listeners are in a neutral mood, all three proposed methods can shift listeners’ emotions to be more positive,” said Law.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASAFALL22&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.