Tools for shaping the sound of the future city in virtual reality

Christian Dreier – cdr@akustik.rwth-aachen.de

Institute for Hearing Technology and Acoustics
RWTH Aachen University
Aachen, North Rhine-Westphalia 52064
Germany

– Christian Dreier (lead author, LinkedIn: Christian Dreier)
– Rouben Rehman
– Josep Llorca-Bofí (LinkedIn: Josep Llorca Bofí, X: @Josepllorcabofi, Instagram: @josep.llorca.bofi)
– Jonas Heck (LinkedIn: Jonas Heck)
– Michael Vorländer (LinkedIn: Michael Vorländer)

Popular version of 3aAAb9 – Perceptual study on combined real-time traffic sound auralization and visualization
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027232

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

“One man’s noise is another man’s signal.” This famous quote by Edward Ng from a 1990s New York Times article sums up a key lesson from noise research. As a rule of thumb, when communities are asked to rate how “annoying” a noise is, only about one third of their response can be statistically explained by acoustic factors (such as the well-known A-weighted sound pressure level, found on household appliances as the “dB(A)” figure). In line with Ng’s quote, another third is explained by non-acoustic, personal, or social variables, while the remaining third cannot be explained by the current state of research.
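
For readers wondering what the “dB(A)” figure on a household appliance actually expresses, here is a minimal sketch of the underlying definition: the sound pressure level compares the measured sound pressure to the threshold of hearing on a logarithmic scale,

    L_p = 20 \log_{10}\!\left(\frac{p_{\mathrm{rms}}}{p_0}\right)\ \mathrm{dB}, \qquad p_0 = 20\ \mu\mathrm{Pa},

and the A-weighting then adds a frequency-dependent correction that roughly mimics the sensitivity of human hearing, so low-frequency rumble contributes less to a dB(A) value than a mid-frequency tone of the same physical strength.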

Noise reduction in built urban environments is an important goal for urban planners, as noise is not only a cause of cardiovascular disease but also affects learning and work performance in schools and offices. To achieve this goal, a number of solutions are available, such as switching to electrified public transport, imposing speed limits, managing traffic flow, or masking annoying noise with more pleasant sounds, for example from fountains.

In our research, we develop a tool for making the sound of virtual urban scenery audible and visible. Visually, the result is comparable to a computer game, with the difference that the acoustic simulation is physics-based, a technique called auralization. The research software “Virtual Acoustics” simulates the entire physical “history” of a sound wave to produce an audible scene: the sonic characteristics of traffic sound sources (cars, motorcycles, aircraft) are modeled, the sound wave’s interactions with different materials at building and ground surfaces are calculated, and human hearing is taken into account.
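
As a rough, illustrative sketch of what “physics-based” means here (this is not the Virtual Acoustics code itself, whose interface is different), the following Python fragment chains the three stages for a single static source: a source signal, a propagation stage that applies the distance-dependent delay and spherical spreading loss, and a receiver stage that applies a left/right pair of head-related impulse responses. The signal and the tiny impulse responses are placeholders.

    import numpy as np

    fs = 44100                      # sampling rate in Hz
    c = 343.0                       # speed of sound in air, m/s

    # 1) Source model: one second of noise standing in for an engine signal
    source = np.random.randn(fs).astype(np.float32)

    # 2) Propagation: delay and 1/r amplitude decay for a source 50 m away
    distance = 50.0
    delay = int(round(distance / c * fs))
    propagated = np.zeros(len(source) + delay, dtype=np.float32)
    propagated[delay:] = source / distance       # spherical spreading (relative)

    # 3) Receiver model: toy left/right head-related impulse responses;
    #    a real auralization would use measured or simulated HRIR datasets
    hrir_left = np.array([0.0, 1.0, 0.3], dtype=np.float32)
    hrir_right = np.array([0.6, 0.2, 0.0], dtype=np.float32)
    binaural = np.stack([np.convolve(propagated, hrir_left),
                         np.convolve(propagated, hrir_right)], axis=1)
    # 'binaural' is a two-channel signal ready for headphone playback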

You might have noticed that a lightning strike sounds dull when it is far away and bright when it is close. The same applies to aircraft sound. In a related study, we auralized the sound of an aircraft under different weather conditions. A 360° video compares how the same aircraft typically sounds in summer, autumn, and winter when the acoustic changes caused by the weather are taken into account (use headphones for the full experience!).
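
The physical reason is atmospheric absorption: air damps high frequencies far more strongly than low ones, and the strength of this damping depends on temperature and humidity. In engineering terms, each frequency f is attenuated by a coefficient α (in dB per metre) that grows with frequency, so the extra attenuation accumulated over a propagation distance d is approximately

    A_{\mathrm{atm}}(f) \approx \alpha(f, T, h)\, d,

where T is the air temperature and h the humidity. Over several kilometres the high-frequency content of a thunderclap or an aircraft flyover is almost entirely absorbed, which is why distant sources sound dull.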

In another project, we prepared a freely available project template for Virtual Acoustics. For this, we acoustically and graphically modeled the IHTApark, which is located next to the Institute for Hearing Technology and Acoustics (IHTA): https://www.openstreetmap.org/#map=18/50.78070/6.06680.

In our latest experiment, we focused on the perception of particularly annoying traffic sound events. We presented the traffic situations through virtual reality headsets and asked participants to assess them. How (un)pleasant would the drone be for you during a walk in the IHTApark?

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and other Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but a device often must be purchased before patients can try it in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need the devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides while patients try out features on a hearing aid. After the audiologist turns a new hearing aid feature on, the patient hears the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings right, hearing aid purchasers must also decide which ‘technology level’ they would like to buy. Patients are typically offered three to four technology levels, ranging from basic to premium, with an added cost of around $1,000 per step up in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide if the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These hearing aids are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires are connected back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and they process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. The system is currently being developed further, and it is planned to be implemented in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
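
A heavily simplified sketch of the real-time idea (not the clinical software itself): the scene audio is processed in short blocks, and for each block the filter that maps the virtual scene to the hearing-aid microphone position is chosen according to the listener’s current head orientation before the block is written to the wired hearing-aid input. The filter set, the head-tracker call, and the output routine below are placeholders.

    import numpy as np

    fs = 48000
    block = 256                                   # samples per processing block

    # Placeholder scene audio and orientation-dependent filters (every 5 degrees);
    # a real system would use measured scene-to-microphone impulse responses
    scene = np.random.randn(fs * 5).astype(np.float32)
    filters = {az: 0.01 * np.random.randn(128).astype(np.float32)
               for az in range(0, 360, 5)}

    def current_head_azimuth():
        """Placeholder for the VR head tracker; returns azimuth in degrees."""
        return 0

    def write_to_hearing_aid(samples):
        """Placeholder for the sound-card output wired into the hearing aid."""
        pass

    tail = np.zeros(127, dtype=np.float32)        # convolution overlap between blocks
    for start in range(0, len(scene) - block, block):
        az = (5 * round(current_head_azimuth() / 5)) % 360   # snap to nearest filter
        filtered = np.convolve(scene[start:start + block], filters[az])
        filtered[:127] += tail                    # overlap-add with the previous block
        tail = filtered[block:]
        write_to_hearing_aid(filtered[:block])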

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.

Virtual Reality Musical Instruments for the 21st Century

Rob Hamilton – hamilr4@rpi.edu
Twitter: @robertkhamilton

Rensselaer Polytechnic Institute, 110 8th St, Troy, New York, 12180, United States

Popular version of 1aCA3 – Real-time musical performance across and within extended reality environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018060

Have you ever wanted to just wave your hands to make beautiful music? Sad that your epic air-guitar skills don’t translate into pop/rock superstardom? Given the speed and accessibility of modern computers, it may come as little surprise that artists and researchers have been looking to virtual and augmented reality to build the next generation of musical instruments. Borrowing heavily from video game design, a new generation of digital luthiers is already exploring new techniques to bring the joys and wonders of live musical performance into the 21st Century.

Image courtesy of Rob Hamilton.

One such instrument is ‘Coretet’: a virtual reality bowed string instrument that can be reshaped by the user into familiar forms such as a violin, viola, cello, or double bass. While wearing a virtual reality headset such as Meta’s Oculus Quest 2, performers bow and pluck the instrument in familiar ways, albeit without any physical interaction with strings or wood. Sound is generated in Coretet by a computer model of a bowed or plucked string, called a ‘physical model’, which is driven by the motion of the performer’s hands and VR game controllers. And, borrowing from multiplayer online games, Coretet performers can join a shared network server and perform music together.
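
Coretet’s own string model is more elaborate, but the basic idea of a plucked-string physical model can be shown with the classic Karplus-Strong algorithm (an illustrative sketch, not Coretet’s code): a short burst of noise is fed into a delay line whose length sets the pitch, and a gentle averaging filter in the feedback loop makes the tone decay and mellow the way a real string does.

    import numpy as np

    def karplus_strong(frequency, duration, fs=44100):
        """Classic Karplus-Strong plucked-string model (illustrative only)."""
        n = int(fs * duration)
        delay = int(round(fs / frequency))        # delay-line length sets the pitch
        buf = np.random.uniform(-1, 1, delay)     # the "pluck": a burst of noise
        out = np.empty(n)
        for i in range(n):
            out[i] = buf[i % delay]
            # feedback: average two neighbouring samples and damp slightly
            buf[i % delay] = 0.996 * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
        return out

    note = karplus_strong(220.0, 2.0)             # two seconds of a plucked A3 string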

Our understanding of music, and of live musical performance on traditional physical instruments, is tightly coupled to time: when a finger plucks a string or a stick strikes a drum head, a sound is generated immediately, without any perceptible delay or latency. And while modern computers can move large amounts of data at nearly the speed of light – significantly faster than the speed of sound – bottlenecks in the CPUs and GPUs themselves, in the code designed to mimic our physical interactions with instruments, or even in the network connections that link users and computers often introduce latency, making virtual performances feel sluggish or awkward.
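
A quick worked example shows how these delays add up. If the audio hardware processes blocks of N = 256 samples at a sampling rate of f_s = 48 kHz, one block alone already accounts for

    t_{\mathrm{block}} = \frac{N}{f_s} = \frac{256}{48000} \approx 5.3\ \mathrm{ms},

and a practical chain needs at least an input and an output block, plus whatever the physical model and the network connection add on top. Since musicians commonly report that delays beyond a few tens of milliseconds make ensemble playing feel disconnected, the latency budget is tight.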

This research focuses on some common causes for this kind of latency and looks at ways that musicians and instrument designers can work around or mitigate these latencies both technically and artistically.

Coretet overview video: Video courtesy of Rob Hamilton.

4pNS2 – Use of virtual reality in designing and developing sonic environment for dementia care facilities

Arezoo Talebzadeh – arezoo.talebzadeh@UGent.be
Ph.D. Student
Ghent University
Tech Lane Ghent Science Park, 126, B-9052 Gent, Belgium

Popular version of 4pNS2 – Use of virtual reality in designing and developing soundscape for dementia care facilities
Presented in the afternoon of May 26, 2022
182nd ASA Meeting in Denver, Colorado

Sound is essential in making people aware of their environment; it also helps them recognize the time of day. People with dementia have difficulty understanding and interpreting what their senses tell them. The sonic environment can help them navigate the space and keep track of time; it can also reduce their agitation and anxiety. Care facilities, nursing homes, and long-term care (LTC) facilities usually present an unfamiliar acoustic environment to anyone new to the place. A well-designed soundscape can enhance the feeling of safety, elevate the mood, and enrich the atmosphere. Designing a soundscape that fosters well-being for a person with dementia is challenging, as mental disorders change one’s perception of space. A soundscape is the sonic environment as perceived by a person in context.

This research aims to enhance the soundscape experience during the design and development of care facilities by using Virtual Reality and defining the context during the process.

Walking through the space while hearing the soundscape demonstrates how sound supports spatial orientation and the understanding of time. Specific rooms can have a unique sound dedicated to them to help residents find their way: a natural soundscape in the lounge, the sound of coffee brewing in the dining room during breakfast, or birdsong in residents’ rooms in the morning to elevate their mood and help them start their day.
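
One way to think about such a design is as a simple schedule that assigns a sound to each room and time of day. The sketch below is purely illustrative (the rooms, periods, and file names are made up), but it captures the kind of mapping a soundscape designer could walk through in Virtual Reality before installing anything.

    from datetime import datetime

    # Illustrative schedule: (room, period of day) -> sound file (names are made up)
    soundscape_schedule = {
        ("lounge", "day"): "gentle_nature_ambience.wav",
        ("dining_room", "morning"): "coffee_brewing.wav",
        ("resident_room", "morning"): "birdsong.wav",
        ("corridor", "evening"): "soft_piano.wav",
    }

    def period_of_day(now):
        """Map the clock time to a coarse period used by the schedule."""
        if 6 <= now.hour < 11:
            return "morning"
        if 11 <= now.hour < 18:
            return "day"
        return "evening"

    def sound_for(room, now):
        """Return the sound to play in a given room right now, if any."""
        return soundscape_schedule.get((room, period_of_day(now)))

    print(sound_for("dining_room", datetime(2022, 5, 26, 8, 0)))  # coffee_brewing.wav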

Sound, unlike a visual design, is not tangible; therefore, it is hard to examine and experience the design before implementation. Virtual Reality is a suitable tool for demonstrating the sound augmentation and its outcome. By walking through the space and listening to the augmented sonic environment, caregivers and family members can take part in the design process, as they are the people most familiar with the person with dementia and their interests. This method also helps in evaluating the soundscape. People with dementia perceive the world through different mental models; Virtual Reality can help designers accommodate these diverse mental models and empathize with people with dementia.

5pSP6 – Assessing the Accuracy of Head Related Transfer Functions in a Virtual Reality Environment

Joseph Esce – esce@hartford.edu
Eoin A King – eoking@hartford.edu
Acoustics Program and Lab
Department of Mechanical Engineering
University of Hartford
200 Bloomfield Avenue
West Hartford
CT 06119
U.S.A

Popular version of paper 5pSP6: “Assessing the Accuracy of Head Related Transfer Functions in a Virtual Reality Environment”, presented Friday afternoon, November 9, 2018, 2:30 – 2:45pm, RATTENBURY A/B, ASA 176th Meeting/2018 Acoustics Week in Canada, Victoria, Canada.

Introduction
While the visual graphics of Virtual Reality (VR) systems are very well developed, the manner in which acoustic environments and sounds are recreated in a VR system is not. Currently, the standard procedure for representing sound in a virtual environment is to use a generic head-related transfer function (HRTF): a user selects a generic HRTF from a library, based on limited personal information. It is essentially a ‘best-guess’ representation of an individual’s perception of a sound source. This limits the accuracy of the representation of the acoustic environment, as every person’s HRTF is unique to them.

What is a HRTF?
If you close your eyes and someone jangles keys behind your head, you can identify the general location of the keys just from the sound you hear. This works because your head, torso, and outer ears filter and delay the sound on its way to each eardrum, and your brain interprets these changes as direction. An HRTF is a mathematical function that captures these transformations, and it can be used to recreate the sound of those keys in a pair of headphones so that the recording appears to come from a particular direction. However, everyone has a differently shaped head and ears, so HRTFs are unique to each person. The objective of our work was to determine how the accuracy of sound localization in a VR world varies between users, and how we can improve it.
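
In signal-processing terms, ‘applying an HRTF’ means filtering a mono signal with a left-ear and a right-ear impulse response measured (or simulated) for the desired direction. The toy impulse responses in the sketch below simply delay and attenuate one ear relative to the other, which is enough to push the sound to one side over headphones; a real system would use a measured HRTF dataset.

    import numpy as np

    fs = 44100
    keys = 0.1 * np.random.randn(fs)                  # stand-in for a mono recording

    # Toy head-related impulse responses for a source to the listener's left:
    # the right ear receives the sound slightly later and quieter than the left
    itd_samples = 30                                  # ~0.7 ms interaural time difference
    hrir_left = np.zeros(64)
    hrir_left[0] = 1.0
    hrir_right = np.zeros(64)
    hrir_right[itd_samples] = 0.4

    binaural = np.stack([np.convolve(keys, hrir_left),
                         np.convolve(keys, hrir_right)], axis=1)
    # play 'binaural' over headphones (not loudspeakers) to hear the effect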

Test procedure
In our tests, volunteers entered a VR world, which was essentially an empty room, and an invisible sound source emitted short bursts of noise at various positions in the room. Volunteers were asked to point to the location of the sound source, and their responses were captured to the nearest millimeter using the VR system’s motion tracking. We tested three cases: 1) volunteers were not allowed to move their head to assist in localization, 2) slight head movements were allowed, and 3) volunteers could turn around freely and ‘search’ (with their ears) for the sound source. Head movement was monitored through the VR system’s tracking of the volunteer’s gaze, and if a volunteer moved when movement was not allowed, the sound source was switched off.
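
One common way to summarize pointing data like this is the angle between the direction the volunteer pointed and the true direction of the source (the study itself may report a different error measure). A small sketch of that calculation, with made-up coordinates standing in for the motion-tracking output:

    import numpy as np

    def angular_error_deg(pointed, true_source):
        """Angle in degrees between the pointing direction and the true source direction."""
        a = pointed / np.linalg.norm(pointed)
        b = true_source / np.linalg.norm(true_source)
        return float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))

    # Example with made-up coordinates (metres, relative to the listener's head)
    pointed_at = np.array([1.0, 0.1, 0.0])
    source_pos = np.array([0.9, 0.0, 0.3])
    print(f"error: {angular_error_deg(pointed_at, source_pos):.1f} degrees")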

Results
We observed that the accuracy with which volunteers localized the sound source varied significantly from person to person. Errors were large when volunteers’ head movements were restricted, but accuracy improved significantly when people were able to move around and listen for the sound source. This suggests that the initial impression of a sound’s location in a VR world is sharpened when the user can move their head to refine their search.

Future Work
We are currently analyzing our results in more detail to account for the different characteristics of each user (e.g., head size, ear size and shape). Further, we aim to develop the experimental methodology with machine learning algorithms that enable each user to create a pseudo-personalized HRTF, which would improve the immersive experience for all VR users.