Let’s go soundwalking!

David Woolworth – dwoolworth@rwaconsultants.net

Roland, Woolworth & Associates, Oxford, MS, 38655, United States

Bennett Brooks and Brigitte Schulte-Fortkamp

Popular version of 4pAAb1 – Introduction to Soundwalking – an important part of the soundscape method
Presented at the 185th ASA Meeting
Read the abstract at https://eppro01.ativ.me/web/index.php?page=IntHtml&project=ASAFALL23&id=3589830

Our acoustic environment is a critical part of our everyday experience; it is often processed unconsciously, together with all other stimuli, to form an impression of a place and time, but its impact is not always fully understood. Soundscape is a method of assessing the acoustic environment in which perception is prioritized. The soundscape method and the soundwalk tool integrate measurements of the human perception of sound with other observations that characterize the environment, such as the sound levels, the type of location, and the various sound sources. Combining these perceptual measurements with other observations helps us understand how the acoustic environment affects the people in it and can point to changes that could improve their quality of life.

The soundscape method suggests assessing all sounds that occur in an environment using collected data related to human perception, the physical acoustic setting, and context. Context includes visual cues and geographic, social, psychological, and cultural aspects, including one’s mental image or memory of a place. Soundscape goes beyond common studies of noise and sound levels and is a powerful tool for improving the quality of life of stakeholders in the acoustic environment; a standardized methodology has been developed that can be adapted to various applications, using sound as a resource. Soundwalks are an important part of the soundscape method and are a useful way to engage stakeholders, who participate by consciously observing and evaluating the soundscape.

Figure 1

A soundwalk is an element of the soundscape method that typically includes a walking tour of observation locations over a predetermined route to solicit perceptual feedback from the participants regarding the acoustic environment (see Figures 1 and 2). The participants typically include stakeholders or “local experts”: members of the community who experience the soundscape daily, users or patrons of a space, residents, business people, and local officials. Soundwalks can be performed in settings ranging from urban areas to wilderness, indoors and outdoors; the information collected has many applications, including ordinances and planning, preservation or improvement of the acoustic environment, and building public awareness and self-awareness of the acoustic environment.

Figure 2

The perceptual information collected during a soundwalk includes the sounds heard by the participants and often their scaled answers to directed questions; these data, along with objective sound level measurements and audio recordings, can be used to assess one or more acoustic spaces in support of the purpose of the soundwalk (see Figures 3 and 4). In some cases, the participants are interviewed to gain a deeper understanding of their responses, or the data are taken to a lab for further study.
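As a rough illustration of how such perceptual and objective data can be combined, the sketch below averages participants’ scaled answers at each observation stop alongside the measured sound level. It is a minimal, hypothetical example: the stop names, rating dimensions, and field names are illustrative, not part of any standardized soundwalk protocol.

```python
# Hypothetical sketch: aggregating soundwalk responses per observation stop.
# Each record holds one participant's scaled answers at one stop, plus the
# measured A-weighted equivalent sound level (LAeq) at that stop.
from statistics import mean

responses = [
    {"stop": "plaza",  "pleasantness": 4, "eventfulness": 3, "laeq_db": 62.1},
    {"stop": "plaza",  "pleasantness": 3, "eventfulness": 4, "laeq_db": 62.1},
    {"stop": "street", "pleasantness": 2, "eventfulness": 5, "laeq_db": 71.4},
    {"stop": "street", "pleasantness": 1, "eventfulness": 4, "laeq_db": 71.4},
]

for stop in sorted({r["stop"] for r in responses}):
    group = [r for r in responses if r["stop"] == stop]
    print(
        f"{stop}: mean pleasantness={mean(r['pleasantness'] for r in group):.1f}, "
        f"mean eventfulness={mean(r['eventfulness'] for r in group):.1f}, "
        f"LAeq={group[0]['laeq_db']} dB"
    )
```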

Figure 3

Within the standardized soundscape methods, the soundwalk and the post-processing of the collected information can be flexibly tailored to a particular acoustic space and purpose of investigation. This makes the soundwalk an adaptable and powerful tool for assessing an acoustic environment and improving the quality of life of those who live in or use that environment, based on their own perceptions and feedback.

Figure 4

Enhancing Museum Experiences: The Impact of Sounds on Visitor Perception

Milena J. Bem – jonasm@rpi.edu

School of Architecture, Rensselaer Polytechnic Institute, Troy, New York, 12180, United States

Samuel R.V. Chabot – Rensselaer Polytechnic Institute
Jonas Braasch – Rensselaer Polytechnic Institute

Popular version of 4aAA8 – Effects of sounds on the visitors’ experience in museums
Presented at the 185th ASA Meeting
Read the abstract at https://eppro01.ativ.me/web/index.php?page=Session&project=ASAFALL23&id=3581286

Have you ever wondered how a museum’s subtle backdrop of sound affects your experience? Are you drawn to the tranquility of silence, the ambiance of exhibition-congruent sounds, or perhaps the hum of people chatting and footsteps echoing through the halls?

Museums increasingly realize that acoustics are crucial in shaping a visitor’s experience. Museum environments pose acoustic challenges, such as finding the right balance between speech intelligibility and privacy, particularly in spaces with open-plan exhibition halls, coupled rooms, large room volumes, and highly reflective surfaces.

Addressing the Challenge
Our proposal focuses on using sound masking systems to tackle these challenges. Sound masking is a proven and widely used technique in diverse settings, from offices to public spaces. Conventionally, it involves introducing low-level broadband noise to mask or diminish unwanted sounds, reducing distractions.
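For concreteness, here is a minimal sketch of the conventional approach: white noise scaled to a fixed low level before being added to the playback chain. The sample rate and target level are arbitrary illustrations, not values from the study.

```python
# Minimal sketch of a conventional broadband masker: white noise scaled to a
# target RMS level relative to full scale. Parameters are illustrative only.
import numpy as np

fs = 48000           # sample rate, Hz
seconds = 5.0
target_dbfs = -40.0  # "low-level": well below the program material

noise = np.random.default_rng(0).standard_normal(int(fs * seconds))
rms = np.sqrt(np.mean(noise ** 2))
masker = noise * (10 ** (target_dbfs / 20) / rms)  # mix into playback at a just-audible level
```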

Context is Key
Recognizing the pivotal role of context in shaping human perception, strategically integrating sounds as design elements emerges as a powerful tool for enhancing visitor experiences. In line with this, we propose that sounds congruent with the museum environment can mask distractions more effectively than conventional masking sounds like low-level broadband noise. This approach reduces background noise distractions and enhances engagement with the artwork, creating a more immersive and comprehensive museum experience.

Evaluating the Effects: The Cognitive Immersive Room (CIR)
We assessed these effects using the Cognitive Immersive Room at Rensselaer Polytechnic Institute. This cutting-edge space features a 360° visual display and an eight-channel loudspeaker system for spatial audio rendering. We projected panoramic photographs and ambisonic audio recordings from 16 exhibitions across five relevant museums — MASS MoCA, New York State Museum, Williams College Museum of Art, UAlbany Art Museum, and Hessel Museum of Art.
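To give a rough idea of how an eight-loudspeaker ring can render spatial audio, the sketch below implements a generic first-order ambisonic “sampling” decoder. This is a textbook form, not the CIR’s actual renderer; real systems differ in channel ordering, normalization, and decoder design.

```python
# Illustrative first-order ambisonic decode to a horizontal ring of eight
# loudspeakers. Conventions (channel order, gains) are generic assumptions.
import numpy as np

speaker_az = np.deg2rad(np.arange(8) * 45.0)  # speakers every 45 degrees

def decode_foa(W, X, Y):
    """W, X, Y: horizontal first-order B-format signals (NumPy arrays).
    Returns an (8, n_samples) array of loudspeaker feeds."""
    return np.stack([W + X * np.cos(az) + Y * np.sin(az) for az in speaker_az])
```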

The Study Setup
Each participant experienced four soundscape scenarios: the original soundscape recorded in each exhibition, the recorded soundscape combined with a conventional sound masker, the recorded soundscape combined with a congruent sound masker, and “silence,” which involves no recording, only the ambient room noise of 41 dB. Figure 1 shows one of the displays used in the experiment; the sound stimuli presented with it are listed below.

Figure 1: Birds of New York exhibition – New York State Museum. The author took the photo with the permission of the museum’s Director of Exhibitions.

Scenario 1: the original soundscape recorded in situ.
Scenario 2: the recorded soundscape combined with a conventional sound masker.
Scenario 3: the recorded soundscape combined with a congruent sound masker.

After each sound stimulus, participants responded to a questionnaire, administered through a program developed for this research, by answering the questions on an iPad. After experiencing all four soundscapes, participants answered a final question about their soundscape preference within the exhibition context. Figure 2 shows the experiment design.

Figure 2

Key Findings
The results showed a statistically significant preference for congruent sounds, which reduced distractions, enhanced focus, and fostered more comprehensive and immersive experiences. A majority of participants (58%) preferred the congruent sound scenario, followed by silence (20%), the original soundscape (14%), and conventional maskers (8%).

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and other Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but a device often must be purchased before patients can try it in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After turning a new hearing aid feature on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings correct, hearing aid purchasers must also decide which ‘technology level’ they would like to buy. Patients are typically given a choice among three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per increase in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These hearing aids are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and they process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. Currently, the system is being further developed, and it is planned to be implemented in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
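The core signal path can be sketched as convolution: each hearing-aid microphone’s feed is the dry source signal convolved with an impulse response measured or simulated for that microphone position in the scene. This is a conceptual simplification, not the authors’ software; the function and variable names are hypothetical.

```python
# Conceptual sketch: render what each hearing-aid microphone would pick up
# in a simulated scene by convolving a dry source signal with the impulse
# response for that microphone position, one output channel per microphone.
import numpy as np
from scipy.signal import fftconvolve

def render_mic_feeds(dry_source, mic_impulse_responses):
    """dry_source: 1-D array; mic_impulse_responses: list of 1-D arrays.
    Returns one rendered channel per hearing-aid microphone."""
    return np.stack([fftconvolve(dry_source, ir) for ir in mic_impulse_responses])

# In a real-time system, the impulse responses would be re-selected (or
# interpolated) whenever the head tracker reports a new orientation, and the
# convolution would run in short blocks to keep latency low.
```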

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.

Improving pitch sensitivity for cochlear-implant users

John Middlebrooks – middlebj@hs.uci.edu

University of California, Irvine, Irvine, CA, 92697-5310, United States

Matthew Richardson and Harrison Lin
University of California, Irvine

Robert Carlyon
University of Cambridge

Popular version of 2aPP6 – Temporal pitch processing in an animal model of normal and electrical hearing
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018352

A cochlear implant can restore reasonable speech perception to a deaf individual. Sensitivity to the pitches of sounds, however, typically is negligible. Lack of pitch sensitivity deprives implant users of appreciation of musical melodies, disrupts pitch cues that are important for picking out a voice amid competing sounds, and impairs understanding of lexical tones in tonal languages (like Mandarin or Vietnamese, for example). Efforts to improve pitch perception by cochlear-implant users could benefit from studies in experimental animals, in which the investigator can control the history of deafness and electrical stimulation and can evaluate novel implanted devices. We are evaluating cats for studies of pitch perception in normal and electrical hearing.

We train normal-hearing cats to detect changes in the pitches of trains of sound pulses – this is “temporal pitch” sensitivity. The cat presses a pedal to start a pulse train at a particular base rate. After a random delay, the pulse rate changes, and the cat can release the pedal to receive a food reward. The range of temporal pitch sensitivity in cats corresponds well to that of humans, although it is shifted somewhat higher in frequency, in keeping with the cat’s higher frequency range of hearing.
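For a concrete picture of the stimulus, here is a minimal sketch of a click train that steps from a base pulse rate to a new rate after a random delay. The rates and durations are illustrative, not the parameters used in the study.

```python
# Illustrative temporal-pitch stimulus: a click train at a base pulse rate
# that steps to a new rate after a random delay.
import numpy as np

fs = 48000                          # sample rate, Hz
base_rate, new_rate = 200.0, 240.0  # pulses per second (example values)

def click_train(rate_hz, seconds):
    """Unit impulses at the given pulse rate."""
    out = np.zeros(int(fs * seconds))
    pulse_times = np.arange(0.0, seconds, 1.0 / rate_hz)  # in seconds
    out[(pulse_times * fs).astype(int)] = 1.0
    return out

delay = np.random.default_rng().uniform(1.0, 3.0)  # random delay at base rate
stimulus = np.concatenate([click_train(base_rate, delay),
                           click_train(new_rate, 1.0)])
```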

We record small voltages from the scalps of sedated cats. The frequency-following response (FFR) consists of voltages originating in the brainstem that synchronize to the stimulus pulses. We can detect FFR signals across the range of pulse rates that is relevant for temporal pitch sensitivity. The acoustic change complex (ACC) is a voltage that arises from the auditory cortex in response to a change in an ongoing stimulus. We can record ACC signals in response to pitch changes across ranges similar to the sensitive ranges seen in the behavioral trials in normal-hearing cats.

We have implanted cats with devices like cochlear implants used by humans. Both FFR and ACC could be recorded in response to electrical stimulation of the implants.

The ACC could serve as a surrogate for behavioral training for conditions in which a cat’s learning might not keep up with changes in stimulation strategies, like when a cochlear implant is newly implanted or a novel stimulating pattern is tested.

We have found previously, in short-term experiments in anesthetized cats, that an electrode inserted into the auditory (hearing) nerve can selectively stimulate pathways that are specialized for transmitting timing information, e.g., for pitch sensation. In ongoing experiments, we plan to place long-term indwelling electrodes in the auditory nerve. Pitch sensitivity with those electrodes will be evaluated with FFR and ACC recordings. If the performance of the auditory nerve electrodes in the animal model turns out as anticipated, such electrodes could offer improved pitch sensitivity to human cochlear implant users.

The ability to differentiate between talkers based on their voice cues changes with age

Yael Zaltz – yaelzalt@tauex.tau.ac.il

Department of Communication Disorders, Steyer School of Health Professions, Faculty of Medicine, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, 6997801, Israel

Popular version of 4aPP2 – The underlying mechanisms for voice discrimination across the life span
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018859

By using voice cues, a listener can keep track of a specific talker and tell that talker apart from other relevant and irrelevant talkers. Voice cues help listeners understand speech in everyday, noisy environments that include multiple talkers. The present study demonstrates that both young children and older adults are not as good at voice discrimination as young adults, and that they rely more on top-down, higher-order cognitive resources for the task.

Four experiments were designed to assess voice discrimination based on two voice cues: the speaker’s fundamental frequency and formant frequencies, the resonant frequencies of the vocal tract that reflect vocal tract length. Two of the experiments assessed voice discrimination in quiet conditions, one assessed the effect of noise on voice discrimination, and one assessed the effect of different testing methods. In all experiments, an adaptive procedure was used to assess voice discrimination (a sketch of one such procedure appears below). In addition, higher-order cognitive abilities such as non-verbal intelligence, attention, and processing speed were evaluated.

The results showed that the youngest children and the older adults displayed the poorest voice discrimination, with significant correlations between voice discrimination and top-down cognitive abilities; children and older adults with better attention skills and faster processing speed achieved better voice discrimination (Figure 1). In addition, voice discrimination in children was shown to depend more on comprehensive acoustic and linguistic information than in young adults, and children’s ability to form an acoustic template in memory to serve as a perceptual anchor for the task was less efficient. The outcomes provide important insight into the effect of age on basic auditory abilities and suggest that voice discrimination is less automatic for children and older adults, perhaps as a result of less mature or deteriorated peripheral (spectral and/or temporal) processing. These findings may partly explain the difficulties of children and older adults in understanding speech in multi-talker situations.
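The abstract does not name the specific adaptive rule, so the following is only a generic sketch of one widely used variant: a 2-down/1-up staircase that tracks the voice-cue difference (for example, percent change in fundamental frequency) across trials. All names, step sizes, and stopping criteria are illustrative.

```python
# Generic 2-down/1-up adaptive staircase: the tracked difference shrinks after
# two consecutive correct responses and grows after each incorrect response,
# converging near the listener's discrimination threshold.
def staircase(respond, start_diff=12.0, step=2.0, min_diff=0.5, n_reversals=8):
    """`respond(diff)` returns True when the listener correctly detects a
    voice-cue difference of size `diff`. Returns a threshold estimate."""
    diff, streak, direction, reversals = start_diff, 0, -1, []
    while len(reversals) < n_reversals:
        if respond(diff):                   # correct response
            streak += 1
            if streak == 2:                 # two in a row -> make it harder
                streak = 0
                if direction == +1:         # track reversed direction
                    reversals.append(diff)
                direction = -1
                diff = max(min_diff, diff - step)
        else:                               # incorrect -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(diff)
            direction = +1
            diff += step
    return sum(reversals) / len(reversals)  # mean of reversal points
```

As a sanity check, a deterministic listener that detects any difference above 3.0, `staircase(lambda d: d > 3.0)`, settles into reversals at 2 and 4 and returns a threshold estimate of 3.0.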

Figure 1: Individual voice discrimination results for (a) the children and (b) the older adults as a function of their scores in the Trail Making Test, which assesses attention skills and processing speed.

A different way to define the acoustic environment

Semiha Yilmazer – semiha@bilkent.edu.tr

Department of Interior Architecture and Environmental Design, Bilkent University, Ankara, 06800, Turkey

Ela Fasllija, Enkela Alimadhi, Zekiye Şahin, Elif Mercan, Donya Dalirnaghadeh

Popular version of 5aPP9 – A Corpus-based Approach to Define Turkish Soundscape Attributes
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0019179

We hear sound wherever we are: on buses, in streets, in cafeterias, museums, universities, halls, churches, mosques, and so forth. How we describe sound environments (soundscapes) changes according to the different experiences we have throughout our lives. Based on this, we wondered how people describe sound environments and, thus, how they perceive them.

There are reasons to believe that soundscape affective attributes may be named differently in a Turkish context. Considering the historical and cultural differences among countries, we thought it important to assess the sound environment by asking individuals of different ages all over Turkey. To this end, we used a corpus-driven approach (CDA) from cognitive linguistics. This allowed us to collect data from laypersons and effectively identify how they describe soundscapes through their adjective usage.

In this study, the aim was to discover linguistically and culturally appropriate Turkish equivalents of the soundscape attributes. The study involved two phases. In the first phase, an online questionnaire was distributed to native Turkish speakers proficient in English, seeking adjective descriptions of their auditory environment and English-to-Turkish translations. This CDA phase yielded 79 adjectives.


Figure 1: Example public spaces: a library and a restaurant

Examples: audio 1, audio 2

In the second phase, a semantic-scale questionnaire was used to evaluate recordings of different acoustic environments in public spaces. The set comprised seven distinct types of public spaces: cafes, restaurants, concert halls, masjids, libraries, study areas, and design studios. The recordings were collected at various times of the day so that they also captured different levels of crowdedness and other specific features. A total of 24 audio recordings were evaluated for validity; each was listened to 10 times by different participants. In total, 240 audio clips were assessed in random order, with participants rating each recording on the 79 adjectives using a five-point Likert scale.


Figure 2: The research process and results

The results of the study were analyzed using principal component analysis (PCA), which showed that there are two main components of soundscape attributes: Pleasantness and Eventfulness. The components were organized in a two-dimensional model in which each is associated with a main orthogonal axis, annoying-comfortable and dynamic-uneventful. This circular organization of soundscape attributes is supported by two additional axes, chaotic-calm and monotonous-enjoyable. It was also observed that in the Turkish circumplex, the Pleasantness axis was formed by adjectives derived from verbs in a causative form, expressing the emotion the space causes the user to feel. Turkish has a different lexical composition from many other languages: several suffixes are added to a root term to impose different meanings. For instance, the Turkish translation of “tranquilizing” is sakin-leş (reciprocal suffix) -tir (causative suffix) -ici (adjective suffix).
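A minimal sketch of this analysis step, assuming a ratings matrix with one row per evaluated clip and one column per adjective; the data below are random placeholders, not the study’s dataset.

```python
# PCA on a clips-by-adjectives ratings matrix: the two leading components
# correspond, in the study, to Pleasantness and Eventfulness.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(240, 79)).astype(float)  # 240 clips x 79 adjectives

pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)   # each clip's position on the two axes

loadings = pca.components_            # shape (2, 79): one loading pair per adjective
print(pca.explained_variance_ratio_)  # variance captured by each component
```

Plotting each adjective’s pair of loadings places it on the circumplex, with the poles of the two axes corresponding to the annoying-comfortable and dynamic-uneventful dimensions described above.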

The study demonstrates how cultural differences shape sound perception and the role language plays in expressing it. Its method extends beyond soundscape research and may benefit other translation projects. Further investigations could examine other cultures and undertake cross-cultural analyses.