Does Virtual Reality Match Reality? Vocal Performance Across Environments

Pasquale Bottalico – pb81@illinois.edu

University of Illinois, Urbana-Champaign
Champaign, IL 61820
United States

Carly Wingfield², Charlie Nudelman¹, Joshua Glasner³, Yvonne Gonzales Redman¹,²

  1. Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign
  2. School of Music, University of Illinois Urbana-Champaign
  3. School of Graduate Studies, Delaware Valley University

Popular version of 2aAAa1 – Does Virtual Reality Match Reality? Vocal Performance Across Environments
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037496

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Singers often perform in spaces very different from the ones where they practice: small, dry rooms one day and large, echoey concert halls the next. Many singers have shared that this mismatch can affect how they sing. Some say they end up singing too loudly because they can’t hear themselves well, while others say they hold back because the room makes them sound louder than they are. Singers have to adapt their voices to unfamiliar concert halls, often with very little rehearsal time in which to adjust.

While research has shown that instrumentalists adjust their playing depending on the room they are in, there’s been less work looking specifically at singers. Past studies have found that different rooms can change how singers use their voices, including how their vibrato (the small, natural variation in pitch) changes depending on the room’s echo and clarity.

At the University of Illinois, our research team from the School of Music and the Department of Speech and Hearing Science is studying whether virtual reality (VR) can help singers train for different acoustic environments. The big question: can a virtual concert hall give singers the same experience as a real one?

To explore this, we created virtual versions of three real performance spaces on campus (Figure 1).

Figure 1. 360-degree images of the three performance spaces investigated.

Singers wore open-backed headphones and a VR headset while singing into a microphone in a sound booth. As they sang, their voices were processed in real time to sound as if they were in one of the real venues, and this processed audio was sent back to them through the headphones. In Video 1, you can see a singer performing in the sound booth where the acoustic environments were recreated virtually. In Audio 1, you can hear exactly what the singer heard: the real-time, acoustically processed sound returned to their ears through the open-backed headphones.

Video 1. Singer performing in the virtual environment.

 

Audio 1. Example of real-time auralized feedback.
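
For readers curious about what “processed in real time” involves, this kind of auralization is typically done by convolving the dry voice picked up at the microphone with an impulse response measured in the target hall, which stamps that hall’s reflections and reverberation onto the sound before it is played back. The study’s exact software is not described in this summary, so the Python sketch below is only an illustration of the core operation, run offline on recorded files; the file names are placeholders.

    # Minimal offline sketch of convolution-based auralization (not the study's software).
    # A "dry" voice recording is convolved with a room impulse response (RIR) so the
    # voice sounds as if it had been produced in that hall. File names are placeholders.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    fs_voice, voice = wavfile.read("dry_voice.wav")       # singer recorded in the sound booth
    fs_rir, rir = wavfile.read("concert_hall_rir.wav")    # impulse response measured in the hall
    assert fs_voice == fs_rir, "voice and RIR must share a sample rate"

    # Work in float and keep a single channel for simplicity
    voice = voice.astype(np.float64)
    rir = rir.astype(np.float64)
    if voice.ndim > 1:
        voice = voice[:, 0]
    if rir.ndim > 1:
        rir = rir[:, 0]

    # Convolution adds the hall's reflections and reverberation to the dry voice
    wet = fftconvolve(voice, rir)

    # Normalize to avoid clipping, then save the auralized result
    wet = wet / np.max(np.abs(wet))
    wavfile.write("voice_in_hall.wav", fs_voice, (wet * 32767).astype(np.int16))

The real system performs this step continuously and with very low delay, so the singer hears the “hall version” of their own voice essentially as they produce it.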

Ten trained singers performed in both the actual venues (Figure 2) and in virtual versions of those same spaces.

Figure 2. Singer performing in the real environment.

We then compared how they sang and how they felt during each performance. The results showed no significant differences between the real and virtual environments, either in how the singers used their voices or in how they perceived the experience.
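
As a rough illustration of what such a comparison can look like (this is not the study’s code or data), one common approach is a paired test: each singer contributes one value of a vocal measure, such as average sound level, in the real hall and one in its virtual counterpart, and the per-singer differences are tested against zero. The sketch below uses randomly generated placeholder numbers for ten singers.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Placeholder values only: one average sound level (dB) per singer in the real hall...
    real_hall = rng.normal(loc=75.0, scale=3.0, size=10)
    # ...and one per singer in the matching virtual hall, here differing only by small noise
    virtual_hall = real_hall + rng.normal(loc=0.0, scale=1.0, size=10)

    # Paired t-test: compares the per-singer differences against zero
    t_stat, p_value = stats.ttest_rel(real_hall, virtual_hall)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A large p-value is what "no significant difference" between environments looks like.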

This is an exciting finding because it suggests that virtual reality could become a valuable tool in voice training. If a singer can’t practice in a real concert hall, a VR simulation could help them get used to the sound and feel of the space ahead of time. This technology could give students greater access to performance preparation and allow voice teachers to guide students through the process in a more flexible and affordable way.

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is very hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but devices often must be purchased before patients can try them in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the other places where they need their devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After a new hearing aid feature is turned on, the patient hears the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” Beyond getting device settings right, hearing aid purchasers must also decide which ‘technology level’ they would like to buy. Patients typically choose among three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per step up in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These are the same devices sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the sound of a complex, noisy restaurant to the positions of the hearing aid microphones while a patient wears the devices. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, which process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. The system is currently being developed further, with plans to implement it in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
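
The paper does not spell out the rendering math, but the core requirement, updating what each hearing aid microphone “hears” as the listener turns their head, can be illustrated with a very simplified sketch. The Python code below is not the clinic system: it uses a basic spherical-head approximation and placeholder names to render a single talker to left and right microphone signals whose timing and level shift with head yaw.

    # Simplified sketch of head-tracked rendering (not the clinic system): a mono source
    # is mapped to left/right "hearing aid microphone" signals using a spherical-head
    # approximation of interaural time and level differences.
    import numpy as np

    FS = 48_000             # sample rate, Hz
    HEAD_RADIUS = 0.0875    # m, average head radius
    SPEED_OF_SOUND = 343.0  # m/s

    def render_binaural(mono, source_azimuth_deg, head_yaw_deg):
        """Return (left, right) signals for a source, given the listener's head yaw.

        Azimuths in degrees, 0 = straight ahead, positive = toward the listener's left.
        """
        # Source direction relative to where the head is currently pointing
        rel = np.deg2rad(source_azimuth_deg - head_yaw_deg)

        # Woodworth approximation of the interaural time difference (seconds)
        itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (rel + np.sin(rel))
        delay_samples = int(round(abs(itd) * FS))

        # Crude interaural level difference: attenuate the far ear by up to ~6 dB
        far_gain = 10 ** (-6.0 * abs(np.sin(rel)) / 20.0)

        delayed = np.concatenate([np.zeros(delay_samples), mono])
        padded = np.concatenate([mono, np.zeros(delay_samples)])

        if itd > 0:   # source toward the left ear: right ear is delayed and quieter
            left, right = padded, delayed * far_gain
        else:         # source toward the right ear (or straight ahead)
            left, right = delayed * far_gain, padded
        return left, right

    # Example: a talker at 30 degrees to the left, heard with the head facing forward
    # and then again after the listener turns 30 degrees toward the talker.
    t = np.arange(FS) / FS
    talker = 0.1 * np.sin(2 * np.pi * 440 * t)
    left_a, right_a = render_binaural(talker, source_azimuth_deg=30, head_yaw_deg=0)
    left_b, right_b = render_binaural(talker, source_azimuth_deg=30, head_yaw_deg=30)

In the actual demonstration, the simulated signals are also passed through the real hearing aid hardware, so any feature or technology-level change is heard through the same processing a purchased device would apply.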

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.