Sound(e)scape: Can a Sonic Break Improve Cognitive Performance?

Alaa Algargoosh – algargoosh@vt.edu

Virginia Polytechnic Institute and State University (Virginia Tech), Perry St, Blacksburg, VA, 24061, United States

Megan Wysocki
Virginia Polytechnic Institute and State University (Virginia Tech)

Amneh Hamida
RWTH Aachen University

Popular version of 1pNSa4 – Cognitive Restoration in Virtual Interactions with Indoor Acoustic Environments
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3977035

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

People often associate restorative experiences with nature: the sound of birds, wind, or flowing water. But what if indoor spaces could offer their own kind of mental escape, not through what we see, but through how we interact with sound?

This idea began with a simple observation. When you walk into a space and notice how your footsteps and voice are reflected back to you, the echoes create a subtle sense of awe. According to Attention Restoration Theory, experiences that evoke fascination and effortless engagement can help replenish mental resources; the theory calls this soft fascination, a type of stimulus that is engaging but not overwhelming. We wanted to explore whether these moments of acoustic interaction between a person and a space could invite gentle attention and, in turn, support cognitive restoration.

Exploring Echoes as a Path to Mental Restoration:
During a live demonstration at the MIT Museum, we used auralization, a technology that lets you hear your voice as if you were in a different place by applying that place’s sound signature, or impulse response. A volunteer hummed into the acoustic signature of Hagia Sophia. Later, the entire audience hummed together and reflected on their experiences. The conversation pointed to the potential of such acoustic interaction to support a meditative state by shaping one’s sense of space, time, and self.
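At its core, auralization is a convolution: a “dry” (echo-free) recording is combined with the room’s measured impulse response to produce the “wet” sound of that room. The minimal offline sketch below illustrates only this idea; the toy click and synthetic impulse response are stand-ins, not the actual recordings or software used in the demonstration.

```python
import numpy as np

def auralize(dry, impulse_response):
    """Convolve a 'dry' (echo-free) recording with a room's impulse
    response so it sounds as if it were produced in that room."""
    wet = np.convolve(dry, impulse_response)  # length: len(dry) + len(ir) - 1
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet    # normalize to avoid clipping

# Toy stand-ins: a single click as the "voice", and exponentially
# decaying noise as the room's impulse response (real impulse
# responses are measured in the actual space).
fs = 16000
dry = np.zeros(fs // 10)
dry[0] = 1.0
t = np.arange(fs // 2) / fs
ir = np.exp(-3.0 * t) * np.random.default_rng(0).standard_normal(t.size)
wet = auralize(dry, ir)
```

Because convolution smears each sample of the dry sound across the impulse response’s decay, the click comes out carrying the simulated room’s reverberation.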

This inspired a controlled experiment to study the restorative potential of indoor acoustic environments. We asked people to experience different sound environments (Figure 1) and measured their cognitive activity before and after each interaction. Early results suggest that interactive acoustics may support attention restoration, depending on the acoustic characteristics, opening a new way of thinking about how sound affects us indoors.

Figure 1: Virtual interaction with an acoustic environment during the experiment, where a person hears their own voice transformed through the acoustic signature of another space.

Why does this matter?
We spend most of our time indoors, yet discussions of restorative environments often focus on natural settings. Bringing restoration indoors is especially relevant for workplaces and schools, where mental fatigue is common. It may also hold meaningful promise for neurodivergent individuals, including those with ADHD, who often benefit from environments that support attention without overstimulating it.
We imagine applications in immersive restorative spaces where people can interact with sound to reset and return to their activities with greater clarity. We also envision subtle integration into transitional spaces such as staircases, corridors, and building entrances that provide gentle cognitive relief as people move throughout their day.

Sound(e)scape reframes acoustics not as background, but as a tool for well-being. By understanding how interactive sound shapes attention and cognition, we can design buildings that do not simply avoid harmful noise. They can actively help the mind take a restorative break.

Figure 2: Visualization of interacting with different acoustic environments. Left: Max Addae vocalizing in an office environment (MIT Media Lab). Middle: “Hagia Sophia – Muhammad, Allah, Abu Bakr” by Rabe!, licensed under CC BY-SA 3.0 (https://commons.wikimedia.org/wiki/File:Hagia_Sophia_-_Muhammad,_Allah,_Abu_Bakr.jpg) Cropped and one person (Max Addae) added by Alaa Algargoosh. Right: Max Addae vocalizing in Boston Symphony Hall.

Sound recordings:
1. Vocalizing in an office environment (MIT Media Lab). (Voice: Max Addae)
2. Virtual vocalization in Hagia Sophia. (Voice: Max Addae)
3. Virtual vocalization in Boston Symphony Hall. (Voice: Max Addae)
The virtual vocalizations were generated using impulse responses available in the ODEON software library.

Does Virtual Reality Match Reality? Vocal Performance Across Environments

Pasquale Bottalico – pb81@illinois.edu

University of Illinois, Urbana-Champaign
Champaign, IL 61820
United States

Carly Wingfield², Charlie Nudelman¹, Joshua Glasner³, Yvonne Gonzales Redman¹,²

  1. Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign
  2. School of Music, University of Illinois Urbana-Champaign
  3. School of Graduate Studies, Delaware Valley University

Popular version of 2aAAa1 – Does Virtual Reality Match Reality? Vocal Performance Across Environments
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037496

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Singers often perform in very different spaces than where they practice—sometimes in small, dry rooms and later in large, echoey concert halls. Many singers have shared that this mismatch can affect how they sing. Some say they end up singing too loudly because they can’t hear themselves well, while others say they hold back because the room makes them sound louder than they are. Singers have to adapt their voices to unfamiliar concert halls, and often they have very little rehearsal time to adjust.

While research has shown that instrumentalists adjust their playing depending on the room they are in, there’s been less work looking specifically at singers. Past studies have found that different rooms can change how singers use their voices, including how their vibrato (the small, natural variation in pitch) changes depending on the room’s echo and clarity.

At the University of Illinois, our research team from the School of Music and the Department of Speech and Hearing Science is studying whether virtual reality (VR) can help singers train for different acoustic environments. The big question: can a virtual concert hall give singers the same experience as a real one?

To explore this, we created virtual versions of three real performance spaces on campus (Figure 1).

Figure 1. 360-degree images of the three performance spaces investigated.

Singers wore open-backed headphones and a VR headset while singing into a microphone in a sound booth. As they sang, their voices were processed in real time to sound as if they were in one of the real venues, and this audio was sent back to them through the headphones. In the video (Video 1), you can see a singer performing in the sound booth where the acoustic environments were recreated virtually. In the audio file (Audio 1), you can hear exactly what the singer heard: the real-time, acoustically processed sound being sent back to their ears through the open-backed headphones.

Video 1. Singer performing in the virtual environment.

 

Audio 1. Example of real-time auralized feedback.
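The real-time loop described above can be modeled as block-by-block convolution: each incoming buffer of the singer’s voice is convolved with the venue’s impulse response, and the reverberant tail that extends past the buffer is carried into later buffers (a technique known as overlap-add). The sketch below is a toy model of that idea, not the actual processing chain used in the study.

```python
import numpy as np

class BlockConvolver:
    """Toy model of a real-time auralization loop: audio arrives one
    block at a time, each block is convolved with the room's impulse
    response, and the reverb 'tail' carries over into later blocks."""
    def __init__(self, impulse_response, block_size):
        self.ir = np.asarray(impulse_response, dtype=float)
        self.block = block_size
        self.tail = np.zeros(len(self.ir) - 1)  # reverb left over from past blocks

    def process(self, x):
        wet = np.convolve(x, self.ir)            # this block plus its reverb tail
        out = wet[:self.block].copy()
        n = min(len(self.tail), self.block)
        out[:n] += self.tail[:n]                 # add tails of earlier blocks
        carry = np.zeros(len(self.ir) - 1)
        carry[:len(wet) - self.block] += wet[self.block:]
        if len(self.tail) > self.block:          # tail longer than one block
            carry[:len(self.tail) - self.block] += self.tail[self.block:]
        self.tail = carry
        return out

# Streaming block-by-block reproduces one big convolution exactly.
rng = np.random.default_rng(1)
voice = rng.standard_normal(512)                          # stand-in "voice"
ir = rng.standard_normal(64) * np.exp(-np.arange(64) / 16.0)  # stand-in room
conv = BlockConvolver(ir, block_size=128)
streamed = np.concatenate(
    [conv.process(voice[i:i + 128]) for i in range(0, 512, 128)]
)
```

In a real system the block size sets the latency: smaller blocks mean the singer hears the virtual room respond sooner, at the cost of more frequent processing.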

Ten trained singers performed in both the actual venues (Figure 2) and in virtual versions of those same spaces.

Figure 2. Singer performing in the real environment.

We then compared how they sang and how they felt during each performance. The results showed no significant differences in how the singers used their voices or how they perceived the experience between real and virtual environments.

This is an exciting finding because it suggests that virtual reality could become a valuable tool in voice training. If a singer can’t practice in a real concert hall, a VR simulation could help them get used to the sound and feel of the space ahead of time. This technology could give students greater access to performance preparation and allow voice teachers to guide students through the process in a more flexible and affordable way.

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and other Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but devices often must be purchased before patients can try them in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After turning a new hearing aid feature on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings correct, hearing aid purchasers must also decide which ‘technology level’ they would like to purchase. Patients are given a choice among three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per increase in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide if they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide if the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These hearing aids are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps audio in a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, which process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might in the real world. The system is currently being developed further, with plans to implement it in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
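Updating the audio as the listener turns their head is the core real-time requirement. One minimal strategy, shown purely for illustration, is to store impulse responses measured at several head orientations and select the one nearest the listener’s current yaw; all names and data below are invented for the sketch, and a production system would interpolate smoothly rather than switch abruptly.

```python
import numpy as np

def nearest_ir(measured_irs, yaw_degrees):
    """Pick the impulse response measured closest (in circular angle)
    to the listener's current head yaw."""
    angles = np.array(sorted(measured_irs))
    # Wrap-around angular distance, so 350 degrees is close to 0.
    diff = np.abs((angles - yaw_degrees + 180) % 360 - 180)
    return measured_irs[angles[np.argmin(diff)]]

# Placeholder impulse responses, keyed by the head yaw (degrees) at
# which each was measured in the simulated room.
irs = {0: "ir_front", 90: "ir_left", 180: "ir_back", 270: "ir_right"}
chosen = nearest_ir(irs, 350)  # head turned slightly right of front
```

The same lookup runs every time the headset reports a new orientation, so the rendered scene stays anchored to the room rather than to the listener’s head.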

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.