Enhancing Museum Experiences: The Impact of Sounds on Visitor Perception

Milena J. Bem – jonasm@rpi.edu

School of Architecture, Rensselaer Polytechnic Institute, Troy, New York, 12180, United States

Samuel R.V. Chabot – Rensselaer Polytechnic Institute
Jonas Braasch – Rensselaer Polytechnic Institute

Popular version of 4aAA8 – Effects of sounds on the visitors’ experience in museums
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023459

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Have you ever wondered how a museum’s subtle backdrop of sound affects your experience? Are you drawn to the tranquility of silence, the ambiance of exhibition-congruent sounds, or perhaps the hum of people chatting and footsteps echoing through the halls?

Museums increasingly realize that acoustics are crucial in shaping a visitor’s experience. Museum environments pose acoustic challenges, such as finding the right balance between speech intelligibility and privacy, particularly in open-plan exhibition halls and coupled rooms with large volumes and highly reflective surfaces.

Addressing the Challenge
Our proposal focuses on using sound masking systems to tackle these challenges. Sound masking is a proven and widely used technique in diverse settings, from offices to public spaces. Conventionally, it involves introducing low-level broadband noise to mask or diminish unwanted sounds, reducing distractions.

Context is Key
Recognizing the pivotal role of context in shaping human perception, we propose strategically integrating sounds as design elements to enhance visitor experiences. Specifically, we propose that sounds congruent with the museum environment mask unwanted noise more effectively than conventional masking sounds such as low-level broadband noise. This approach reduces background-noise distractions and enhances engagement with the artwork, creating a more immersive and comprehensive museum experience.

Evaluating the Effects: The Cognitive Immersive Room (CIR)
We assessed these effects using the Cognitive Immersive Room at Rensselaer Polytechnic Institute. This cutting-edge space features a 360° visual display and an eight-channel loudspeaker system for spatial audio rendering. We projected panoramic photographs and ambisonic audio recordings from 16 exhibitions across five museums: MASS MoCA, New York State Museum, Williams College Museum of Art, UAlbany Art Museum, and Hessel Museum of Art.
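For readers curious about the rendering step, the toy sketch below decodes a first-order ambisonic (horizontal B-format) recording to an equally spaced ring of eight loudspeakers. The encoding convention, decoder gains, and signal parameters are illustrative assumptions, not the Cognitive Immersive Room’s actual implementation.

```python
import numpy as np

def decode_foa_to_ring(w, x, y, n_speakers=8):
    """Basic sampling decoder: each loudspeaker feed is the omni signal
    plus the two velocity signals projected onto that speaker's direction."""
    angles = 2 * np.pi * np.arange(n_speakers) / n_speakers  # speaker azimuths
    feeds = [(w + x * np.cos(a) + y * np.sin(a)) / n_speakers for a in angles]
    return np.stack(feeds)

# Encode a 1 kHz tone arriving from 45 degrees, then decode it.
fs = 48000
t = np.arange(fs // 10) / fs                         # 100 ms of audio
src = np.sin(2 * np.pi * 1000 * t)
az = np.pi / 4                                       # source azimuth
w, x, y = src, src * np.cos(az), src * np.sin(az)    # simple planar encoding
feeds = decode_foa_to_ring(w, x, y)
loudest = int(np.argmax(np.sqrt((feeds ** 2).mean(axis=1))))
```

With this decoder, the loudspeaker closest to the source direction receives the strongest feed, which is what lets the panoramic image and the spatial audio stay aligned as a listener turns.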

The Study Setup
Each participant experienced four soundscape scenarios: the original soundscape recorded in each exhibition, the recorded soundscape combined with a conventional sound masker, the recorded soundscape combined with a congruent sound masker, and “silence,” which involved no recording, only the room’s ambient noise level of 41 dB. Figure 1 shows one of the displays used in the experiment; the sound stimuli presented with it are listed below.

Figure 1: Birds of New York exhibition – New York State Museum. The author took the photo with the permission of the museum’s Director of Exhibitions.

Scenario 1: originally recorded soundscape in situ.
Scenario 2: recorded soundscape combined with a conventional sound masker.
Scenario 3: the recorded soundscape combined with a congruent sound masker.

After each sound stimulus, participants responded to a questionnaire, administered on an iPad through a program developed for this research. After experiencing all four soundscapes, they answered a final question about their soundscape preference within the exhibition context. Figure 2 shows the experiment design.

Figure 2

Key Findings
The results showed a statistically significant preference for congruent sounds, which reduced distractions, enhanced focus, and fostered a more comprehensive and immersive experience. A majority of participants (58%) preferred the congruent sound scenario, followed by silence (20%), the original soundscape (14%), and the conventional masker (8%).

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and other Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but a device often must be purchased before patients can try it in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After a new hearing aid feature is turned on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings correct, hearing aid purchasers must also decide which ‘technology level’ they would like to purchase. Patients typically choose between three and four technology levels, ranging from basic to premium, with an added cost of around $1,000 per increase in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide if the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps audio in a complex, noisy restaurant to the hearing aid microphones while worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and they process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. The system is being further developed and is planned for implementation in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
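As an illustration of the audio routing described above, the sketch below convolves a dry source signal with one impulse response per hearing-aid microphone, block by block, the way a real-time audio callback would. The impulse responses, block size, and function name are hypothetical placeholders, not the project’s actual software.

```python
import numpy as np

def render_mic_feeds(dry, mic_irs, block=256):
    """Convolve a dry source with one impulse response per hearing-aid
    microphone, processed block-by-block with overlap-add so the output
    can be streamed to the wired hearing aids in real time."""
    ir_len = max(len(ir) for ir in mic_irs)
    out = np.zeros((len(mic_irs), len(dry) + ir_len - 1))
    for start in range(0, len(dry), block):
        chunk = dry[start:start + block]
        for m, ir in enumerate(mic_irs):
            seg = np.convolve(chunk, ir)          # this block's contribution
            out[m, start:start + len(seg)] += seg  # overlap-add into the output
    return out

# Example with placeholder data: 1 s of source audio and two hypothetical
# microphone impulse responses.
rng = np.random.default_rng(0)
dry = rng.standard_normal(48000)
irs = [rng.standard_normal(256), rng.standard_normal(256)]
feeds = render_mic_feeds(dry, irs)
```

Because convolution is linear, the block-by-block result is identical to convolving the whole signal at once; streaming it in small blocks is what keeps the latency low enough for head rotation to feel natural.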

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.

Improving pitch sensitivity by cochlear-implant users

John Middlebrooks – middlebj@hs.uci.edu

University of California, Irvine, Irvine, CA, 92697-5310, United States

Matthew Richardson and Harrison Lin
University of California, Irvine

Robert Carlyon
University of Cambridge

Popular version of 2aPP6 – Temporal pitch processing in an animal model of normal and electrical hearing
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018352

A cochlear implant can restore reasonable speech perception to a deaf individual. Sensitivity to the pitches of sounds, however, is typically negligible. Lack of pitch sensitivity deprives implant users of appreciation of musical melodies, disrupts pitch cues that are important for picking out a voice amid competing sounds, and impairs understanding of lexical tones in tonal languages (such as Mandarin or Vietnamese). Efforts to improve pitch perception by cochlear-implant users could benefit from studies in experimental animals, in which the investigator can control the history of deafness and electrical stimulation and can evaluate novel implanted devices. We are evaluating cats for studies of pitch perception in normal and electrical hearing.

We train normal-hearing cats to detect changes in the pitches of trains of sound pulses – this is “temporal pitch” sensitivity. The cat presses a pedal to start a pulse train at a particular base rate. After a random delay, the pulse rate is changed and the cat can release the pedal to receive a food reward. The range of temporal pitch sensitivity by cats corresponds well to that of humans, although the pitch range of cats is shifted somewhat higher in frequency in keeping with the cat’s higher frequency range of hearing.
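The stimulus for this task can be pictured with a short sketch: a click train whose rate steps up partway through, which is the change the cat must detect. The rates, duration, and sampling rate below are illustrative, not the exact values used with the cats.

```python
import numpy as np

def pulse_train_with_change(base_rate, target_rate, fs=48000, dur=3.0,
                            change_frac=0.5):
    """Unit-amplitude click train whose rate switches from base_rate to
    target_rate partway through the signal.  Click spacing is computed in
    whole samples so the timing stays exact."""
    n = int(dur * fs)
    change = int(n * change_frac)        # sample index of the rate change
    sig = np.zeros(n)
    i = 0
    while i < n:
        sig[i] = 1.0
        rate = base_rate if i < change else target_rate
        i += round(fs / rate)            # samples until the next click
    return sig

# Assumed example: 100 pulses/s stepping up to 200 pulses/s after 1.5 s.
train = pulse_train_with_change(100, 200)
```

In the behavioral trials, the delay before the rate change is randomized so the animal cannot simply time its pedal release.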

We record small voltages from the scalps of sedated cats. The frequency-following response (FFR) consists of voltages originating in the brainstem that synchronize to the stimulus pulses. We can detect FFR signals across the range of pulse rates that is relevant for temporal pitch sensitivity. The acoustic change complex (ACC) is a voltage that arises from the auditory cortex in response to a change in an ongoing stimulus. We can record ACC signals in response to pitch changes across ranges similar to the sensitive ranges seen in the behavioral trials in normal-hearing cats.
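A minimal sketch of how synchronization to the pulse rate might be checked in such recordings: average the trials (which suppresses activity not phase-locked to the stimulus) and compare spectral power at the stimulation rate with that of neighboring frequency bins. The simulated signal, noise level, and SNR criterion are assumptions for illustration, not the authors’ analysis pipeline.

```python
import numpy as np

def ffr_peak_snr(trials, fs, pulse_rate):
    """Average EEG trials and return the ratio of spectral power at the
    pulse rate to the mean power of flanking frequency bins."""
    avg = trials.mean(axis=0)                    # trial averaging
    spec = np.abs(np.fft.rfft(avg)) ** 2
    freqs = np.fft.rfftfreq(len(avg), 1 / fs)
    k = int(np.argmin(np.abs(freqs - pulse_rate)))        # bin at stimulus rate
    neighbors = np.r_[spec[k - 6:k - 1], spec[k + 2:k + 7]]  # flanking bins
    return spec[k] / neighbors.mean()

# Simulated example (hypothetical numbers): a weak 300 Hz following
# response buried in noise across 100 one-second trials.
rng = np.random.default_rng(1)
fs, n, rate = 2000, 2000, 300.0
t = np.arange(n) / fs
trials = 0.2 * np.sin(2 * np.pi * rate * t) + rng.standard_normal((100, n))
snr = ffr_peak_snr(trials, fs, rate)
```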

We have implanted cats with devices like cochlear implants used by humans. Both FFR and ACC could be recorded in response to electrical stimulation of the implants.

The ACC could serve as a surrogate for behavioral training for conditions in which a cat’s learning might not keep up with changes in stimulation strategies, like when a cochlear implant is newly implanted or a novel stimulating pattern is tested.

We have found previously, in short-term experiments in anesthetized cats, that an electrode inserted into the auditory (hearing) nerve can selectively stimulate pathways that are specialized for transmission of timing information, e.g., for pitch sensation. In ongoing experiments, we plan to place long-term indwelling electrodes in the auditory nerve and evaluate pitch sensitivity with those electrodes using FFR and ACC recording. If the auditory nerve electrodes perform as anticipated in the animal model, such electrodes could offer improved pitch sensitivity to human cochlear implant users.

The ability to differentiate between talkers based on their voice cues changes with age

Yael Zaltz – yaelzalt@tauex.tau.ac.il

Department of Communication Disorders, Steyer School of Health Professions, Faculty of Medicine, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel

Popular version of 4aPP2 – The underlying mechanisms for voice discrimination across the life span
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018859

By using voice cues, a listener can keep track of a specific talker and tell them apart from other relevant and irrelevant talkers. Voice cues thus help listeners understand speech in everyday, noisy environments with multiple talkers. The present study demonstrates that both young children and older adults are poorer at voice discrimination than young adults and rely more on top-down, higher-order cognitive resources for the task.

Four experiments were designed to assess voice discrimination based on two voice cues: the speaker’s fundamental frequency and formant frequencies (the resonant frequencies of the vocal tract, which reflect vocal tract length). Two experiments assessed voice discrimination in quiet conditions, one assessed the effect of noise, and one assessed the effect of different testing methods. In all experiments, an adaptive procedure was used to assess voice discrimination. In addition, higher-order cognitive abilities such as non-verbal intelligence, attention, and processing speed were evaluated.

The results showed that the youngest children and the older adults displayed the poorest voice discrimination, with significant correlations between voice discrimination and top-down cognitive abilities: children and older adults with better attention skills and faster processing speed achieved better voice discrimination (Figure 1). In addition, voice discrimination in children depended more on comprehensive acoustic and linguistic information than in young adults, and children were less efficient at forming an acoustic template in memory to use as a perceptual anchor for the task. The outcomes provide important insight into the effect of age on basic auditory abilities and suggest that voice discrimination is less automatic for children and older adults, perhaps as a result of less mature or deteriorated peripheral (spectral and/or temporal) processing. These findings may partly explain the difficulties of children and older adults in understanding speech in multi-talker situations.

Figure 1: Individual voice discrimination results for (a) the children and (b) the older adults as a function of their scores in the Trail Making Test, which assesses attention skills and processing speed.
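The adaptive procedure mentioned above can be illustrated with a generic two-down/one-up staircase, a common choice that converges near the 70.7%-correct point. The abstract does not specify the exact tracking rule, so the step sizes and the simulated listener below are assumptions for illustration.

```python
import random

def staircase(respond, start=12.0, step=2.0, floor=0.5, n_reversals=8):
    """Two-down/one-up adaptive track: the voice-cue difference shrinks
    after two consecutive correct responses and grows after each error.
    The threshold estimate is the mean difference at the reversals."""
    diff, streak, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if respond(diff):                  # correct response
            streak += 1
            if streak == 2:                # two in a row -> make it harder
                streak = 0
                if direction == +1:        # track was going up: a reversal
                    reversals.append(diff)
                direction = -1
                diff = max(floor, diff - step)
        else:                              # error -> make it easier
            streak = 0
            if direction == -1:            # track was going down: a reversal
                reversals.append(diff)
            direction = +1
            diff += step
    return sum(reversals) / len(reversals)

# Hypothetical listener: always detects differences above 4 (arbitrary
# units) and guesses at chance (50%) below that.
random.seed(0)
listener = lambda d: d > 4.0 or random.random() < 0.5
threshold = staircase(listener)
```

The appeal of such a procedure is that it concentrates trials near each listener’s own limit, which matters when comparing groups as different as young children, young adults, and older adults.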

A different way to define the acoustic environment

Semiha Yilmazer – semiha@bilkent.edu.tr

Department of Interior Architecture and Environmental Design, Bilkent University, Ankara 06800, Turkey

Ela Fasllija, Enkela Alimadhi, Zekiye Şahin, Elif Mercan, Donya Dalirnaghadeh

Popular version of 5aPP9 – A Corpus-based Approach to Define Turkish Soundscape Attributes
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0019179

We hear sound wherever we are: on buses, in streets, in cafeterias, museums, universities, halls, churches, mosques, and so forth. How we describe sound environments (soundscapes) changes according to the different experiences we have throughout our lives. Based on this, we wondered how people delineate sound environments and, thus, how they perceive them.

There are reasons to believe that soundscape affective attributes may be expressed differently in a Turkish context. Given the historical and cultural differences between countries, we thought it important to assess the sound environment by asking individuals of different ages from all over Turkey. To this end, we used a corpus-driven approach (CDA) from cognitive linguistics, which allowed us to collect data from laypersons and identify the adjectives they use to describe soundscapes.

This study aims to discover linguistically and culturally appropriate Turkish equivalents of soundscape attributes. The study involved two phases. In the first phase, an online questionnaire was distributed to native Turkish speakers proficient in English, seeking adjective descriptions of their auditory environment and English-to-Turkish translations. This CDA phase yielded 79 adjectives.

Figure 1: Example public spaces: a library and a restaurant

Examples: audio 1, audio 2

In the second phase, a semantic-scale questionnaire was used to evaluate recordings of different acoustic environments in public spaces. The set comprised seven distinct types of public spaces: cafes, restaurants, concert halls, masjids, libraries, study areas, and design studios. The recordings were collected at various times of day so that they also captured different levels of crowdedness and other specific features. A total of 24 audio recordings were evaluated, each listened to 10 times by different participants for validity. In total, 240 randomly ordered evaluations were collected, with participants rating each recording on 79 adjectives using a five-point Likert scale.

Figure 2: The research process and results

The results of the study were analyzed using a principal component analysis (PCA), which showed that there are two main components of soundscape attributes: Pleasantness and Eventfulness. The components were organized in a two-dimensional model, where each is associated with a main orthogonal axis, annoying-comfortable and dynamic-uneventful. This circular organization of soundscape attributes is supported by two additional axes, chaotic-calm and monotonous-enjoyable. It was also observed that in the Turkish circumplex, the Pleasantness axis was formed by adjectives derived from verbs in causative form, expressing the emotion the space causes the user to feel. Turkish has a different lexical composition from many other languages: several suffixes are added to the root term to impose different meanings. For instance, the translation of tranquilizer in Turkish is sakin-leş (reciprocal suffix) -tir (causative suffix) -ici (adjective suffix).
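The analysis step can be sketched with a generic PCA over a ratings matrix. The data below are random placeholders standing in for the 240 evaluations over 79 adjectives, with two planted underlying dimensions as stand-ins for Pleasantness and Eventfulness; none of it is the study’s actual data.

```python
import numpy as np

def pca_components(ratings, n_components=2):
    """PCA via SVD: center the (evaluations x adjectives) rating matrix,
    then return the leading component loadings and the fraction of total
    variance each component explains."""
    X = ratings - ratings.mean(axis=0)           # center each adjective
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(X) - 1)                  # variance per component
    explained = var / var.sum()
    return Vt[:n_components], explained[:n_components]

# Placeholder data: two latent dimensions mixed into 79 adjective ratings.
rng = np.random.default_rng(2)
latent = rng.standard_normal((240, 2))
mixing = rng.standard_normal((2, 79))
ratings = latent @ mixing + 0.1 * rng.standard_normal((240, 79))
loadings, explained = pca_components(ratings)
```

Adjectives with large loadings of opposite sign on the same component are what end up as the opposing poles of an axis such as annoying-comfortable.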

The study demonstrates how cultural differences impact sound perception and language’s role in expression. Its method extends beyond soundscape research and may benefit other translation projects. Further investigations could probe parallel cultures and undertake cross-cultural analyses.

What parts of the brain are stimulated by cochlear implants in children with one deprived ear?

Karen Gordon – karen.gordon@utoronto.ca

Archie’s Cochlear Implant Laboratory, The Hospital for Sick Children, University of Toronto, Toronto, ON, M5G 1X8, Canada

Additional Authors – Anderson, C., Jiwani, S., Polonenko, M., Wong, D.D.E., Cushing, S.L., Papsin, B.C.

Additional Links
SickKids: https://lab.research.sickkids.ca/archies-cochlear-implant/
Hear Here Podcast: https://linktr.ee/hearherepodcast

Popular version of 3aPP5 – Non-auditory processing of cochlear implant stimulation after unilateral auditory deprivation in children
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018669/

Decades of research have shown that hearing from only one ear in childhood should not be dismissed as a “minimal” hearing problem, as it can impair language, cognitive, and academic development. We have been exploring whether unilateral hearing affects the developing brain. A series of studies has been done in children who have one deaf ear and who hear from the other side through a typically hearing ear, a hearing aid, or a cochlear implant. We record electrical fields of brain activity from electrodes placed on the surface of the head (electroencephalography) and then calculate which parts of the brain are responding.

The findings show that auditory pathways from the hearing ear to the auditory cortices are strengthened in children with long-term unilateral hearing. In other words, the hearing brain has developed a preference for the hearing ear. As shown in Figure 1, responses from the better hearing ear also came from areas of the brain involved in attention and other sensory processing. This means that areas beyond the auditory parts of the brain are involved in hearing from the better ear.

Figure 1 legend: Cortical areas abnormally active from the experienced ear in children with long periods of unilateral cochlear implant use include left frontal cortex and precuneus. Adapted from Jiwani S, Papsin BC, Gordon KA. Early unilateral cochlear implantation promotes mature cortical asymmetries in adolescents who are deaf. Hum Brain Mapp. 2016 Jan;37(1):135-52. doi: 10.1002/hbm.23019. Epub 2015 Oct 12. PMID: 26456629; PMCID: PMC6867517.

We also asked whether there were brain changes from the ear deprived of sound. This question was addressed by measuring cortical responses in three cohorts of children with unilateral hearing who received a cochlear implant in their deaf ear: children with single-sided deafness, bilateral hearing aid users with asymmetric hearing loss, and unilateral cochlear implant users. Many of these children showed atypical responses from the cochlear implant, with unusually strong responses from the brain on the same side as the deaf, implanted ear. As shown in Figure 2, this unusual response was most clear in children who had not heard from that ear for several years (Figure 2A) and was already present during the first year of bilateral implant use (Figure 2B).

Figure 2 legend: Cortical responses evoked by the second cochlear implant (CI-2) in children receiving bilateral devices. A) Whereas expected contralateral lateralization of activity is evoked in children with short periods of unilateral deprivation/short delays to bilateral implantation, abnormal ipsilateral responses are found in children with long periods of unilateral deprivation despite several years of bilateral CI use. Adapted from: Gordon KA, Wong DD, Papsin BC. Bilateral input protects the cortex from unilaterally-driven reorganization in children who are deaf. Brain. 2013 May;136(Pt 5):1609-25. doi: 10.1093/brain/awt052. Epub 2013 Apr 9. PMID: 23576127. B) Abnormal ipsilateral responses are also found throughout the first year of bilateral CI use in children with long periods of unilateral deprivation/long delays to bilateral CI. Adapted from: Anderson CA, Cushing SL, Papsin BC, Gordon KA. Cortical imbalance following delayed restoration of bilateral hearing in deaf adolescents. Hum Brain Mapp. 2022 Aug 15;43(12):3662-3679. doi: 10.1002/hbm.25875. Epub 2022 Apr 15. PMID: 35429083; PMCID: PMC9294307.

New analyses have shown that this response from the CI in the longer-deaf ear includes areas of the brain involved in attention, language, and vision.

Results across these studies demonstrate brain changes that occur in children with unilateral hearing/deprivation. Some of these changes happen in the auditory system, but others involve additional brain areas, suggesting that multiple parts of the brain are at work when children listen with their cochlear implants.