Cost-Effective Virtual Reality for Smarter Architecture: Predicting How We Hear Spaces

Angela Guastamacchia – angela.guastamacchia@polito.it
Department of Energy, Politecnico di Torino
Torino, Torino 10129
Italy

Popular version of 3aAAb4 – Subjective and objective validation of a virtual reality system as a tool for studying speech intelligibility in architectural spaces
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAICA25&id=3869566

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

When we communicate, clear speech is crucial: it helps us exchange ideas, learn, and build human connections. But poor acoustic conditions in rooms like crowded restaurants, large lecture halls, or meeting spaces often make it difficult to understand speech clearly. Indoor architectural design strongly affects speech clarity, so studying how different spaces shape communication, especially for hearing-impaired listeners, is essential to fostering designs that support effective communication.

Virtual Reality (VR) might provide a practical and time-saving solution for this research, allowing us to reproduce various architectural environments and study how people perceive speech within them without needing access to the real spaces. Some laboratories have already built systems that reproduce room acoustics accurately for diverse research goals. However, these systems typically rely on complex and costly arrays of dozens of loudspeakers, making studies difficult to set up, expensive, and inaccessible to architectural designers who are not VR experts.

Thus, a question arises: can even a less complex VR system still replicate a realistic experience of listening to speech in an actual room?

At the Audio Space Lab of the Politecnico di Torino, we set up a simpler and more affordable VR system. This system combines a VR headset with a spherical array of 16 loudspeakers to create immersive and realistic audiovisual communication scenarios surrounding the listener in a 360° experience, using an audio technique called 3rd-Order Ambisonics. We then tested whether our VR setup could consistently replicate the experience of listening in a medium-sized, echoey lecture room.
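
For the technically curious, the number 16 is no coincidence: an Ambisonics reproduction of order N uses (N+1)² spherical-harmonic channels, so 3rd order requires exactly (3+1)² = 16. As a rough illustration only (not the lab's actual software), the following Python sketch encodes a mono sound into 16 ambisonic channels for a chosen direction; decoding those channels into feeds for the 16 loudspeakers is a separate step, omitted here.

import numpy as np
from scipy.special import sph_harm  # complex spherical harmonics

def real_sph_harm(m, l, azimuth, polar):
    """Real-valued spherical harmonic built from scipy's complex ones."""
    if m > 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(m, l, azimuth, polar).real
    if m < 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(-m, l, azimuth, polar).imag
    return sph_harm(0, l, azimuth, polar).real

def encode_ambisonics(signal, azimuth, elevation, order=3):
    """Encode a mono signal into (order + 1)**2 ambisonic channels (ACN order).
    Normalization conventions (SN3D vs. N3D) vary between tools; plain
    orthonormal harmonics are used here for simplicity."""
    polar = np.pi / 2 - elevation  # elevation above horizon -> polar angle
    gains = np.array([real_sph_harm(m, l, azimuth, polar)
                      for l in range(order + 1) for m in range(-l, l + 1)])
    return gains[:, None] * signal[None, :]

# A 1 kHz test tone arriving from 90 degrees to the listener's left:
fs = 48000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
channels = encode_ambisonics(tone, azimuth=np.pi / 2, elevation=0.0)
print(channels.shape)  # (16, 48000): one stream per spherical harmonic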

To test this, we compared the speech understanding of thirteen volunteers in the real lecture hall and in its virtual replica. During the tests, volunteers listened to single sentences and repeated what they understood across five different audiovisual scenes, varying the speech source location and the presence or absence of distracting noise. All scenarios included typical background noise, such as the hum of air conditioning, to closely mimic real-life conditions.

In Figure 1, you can see a volunteer in the real lecture room listening to sentences emitted by the loudspeaker positioned to their right, while a distracting noise is presented from the frontal loudspeaker. In Video 1, a volunteer performs the same speech test within the VR system, replicating the exact audiovisual scene shown in Figure 1. Figure 2 shows what the volunteer saw during the test.

Figure 1. Volunteer performing the speech comprehension test in the real lecture room.

Video 1. Volunteer performing the speech comprehension test in the virtual lecture room using the VR system.

Figure 2. Volunteers’ view during both real and virtual speech comprehension tests.

Our findings are promising: we found no significant differences in speech comprehension between the real and virtual settings across all tested scenes.
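
To illustrate how such a real-versus-virtual comparison can be checked statistically, the sketch below runs a paired, non-parametric test on word-recognition scores for thirteen listeners. The numbers are made up purely for demonstration; the study's actual data and statistical analysis may differ.

import numpy as np
from scipy.stats import wilcoxon

# Made-up word-recognition scores (percent correct) for thirteen listeners in
# one scene -- purely illustrative, NOT the study's data.
real_room    = np.array([82, 75, 90, 68, 77, 85, 80, 73, 88, 79, 84, 76, 81])
virtual_room = np.array([80, 77, 89, 70, 75, 86, 78, 74, 87, 80, 83, 75, 82])

# Paired, non-parametric test of whether the two conditions differ:
stat, p = wilcoxon(real_room, virtual_room)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
# A large p-value gives no evidence of a real-vs-virtual difference.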

Additionally, we asked the volunteers how closely their VR experience matched reality. On average, they rated it as “almost very consistent,” reinforcing that the VR system provided a believable acoustic experience.

These results are exciting because they suggest that even a less complex VR system can effectively predict real-life speech perception in ordinary environments. Our affordable and user-friendly VR system could thus become a powerful tool for architects, acousticians, and researchers, offering an accessible way to study speech comprehension in architectural spaces and pursue improved acoustic designs.

Tools for shaping the sound of the future city in virtual reality

Christian Dreier – cdr@akustik.rwth-aachen.de

Institute for Hearing Technology and Acoustics
RWTH Aachen University
Aachen, North Rhine-Westphalia 52064
Germany

– Christian Dreier (lead author, LinkedIn: Christian Dreier)
– Rouben Rehman
– Josep Llorca-Bofí (LinkedIn: Josep Llorca Bofí, X: @Josepllorcabofi, Instagram: @josep.llorca.bofi)
– Jonas Heck (LinkedIn: Jonas Heck)
– Michael Vorländer (LinkedIn: Michael Vorländer)

Popular version of 3aAAb9 – Perceptual study on combined real-time traffic sound auralization and visualization
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027232

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

“One man’s noise is another man’s signal.” This famous quote by Edward Ng, from a 1990s New York Times article, captures a key lesson from noise research. A rule of thumb in the field holds that when communities are asked to rate their “annoyance” with noise, only about one third of their response can be statistically explained by acoustic factors (such as the well-known A-weighted sound pressure level, found on household devices as “dB(A)”). Referring to Ng’s quote, another third is explained by non-acoustic, personal or social variables, while the remaining third cannot be explained by the current state of research.

Noise reduction in built urban environments is an important goal for urban planners, as noise not only causes cardiovascular disease but also affects learning and work performance in schools and offices. A number of solutions are available to achieve this goal, ranging from electrified public transport, speed limits, and traffic-flow management to masking annoying noise with pleasant sound, for example from fountains.

In our research, we develop a tool for making the sound of virtual urban scenery audible and visible. Visually, the result is comparable to a computer game; the difference is that the acoustic simulation is physics-based, a technique called auralization. The research software “Virtual Acoustics” simulates the entire physical “history” of a sound wave to produce an audible scene: the sonic characteristics of traffic sound sources (cars, motorcycles, aircraft) are modeled, the sound wave’s interactions with different materials at building and ground surfaces are calculated, and human hearing is taken into account.
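
To give a flavor of what simulating the physical “history” of a sound wave involves, here is a deliberately simplified Python sketch (not the actual Virtual Acoustics software) that applies three basic propagation effects to a dry source recording: travel delay, distance attenuation, and a crude stand-in for air absorption.

import numpy as np
from scipy.signal import lfilter

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def auralize_point_source(dry_signal, fs, distance_m):
    """Toy propagation chain: travel delay, spherical spreading, and a crude
    stand-in for frequency-dependent air absorption."""
    # 1) Delay: the wavefront needs distance / c seconds to reach the listener.
    delay = int(round(distance_m / SPEED_OF_SOUND * fs))
    delayed = np.concatenate([np.zeros(delay), dry_signal])
    # 2) Spherical spreading: level falls 6 dB per doubling of distance.
    attenuated = delayed / max(distance_m, 1.0)
    # 3) Air absorption: high frequencies fade faster on long paths; here a
    #    one-pole low-pass whose cutoff drops with distance (ad-hoc mapping).
    cutoff_hz = 8000.0 / (1.0 + distance_m / 50.0)
    a = np.exp(-2 * np.pi * cutoff_hz / fs)
    return lfilter([1 - a], [1, -a], attenuated)

# The same one-second source signal heard from 20 m and from 200 m:
fs = 48000
dry = np.random.randn(fs)  # stand-in for a recorded car or aircraft signal
near = auralize_point_source(dry, fs, 20.0)
far = auralize_point_source(dry, fs, 200.0)

A full auralization additionally models source directivity, reflections from buildings and ground, and binaural rendering for the listener's two ears; the sketch only hints at the chain of physical effects involved.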

You might have noticed that a lightning strike sounds dull when far away and bright when close by. The same applies to aircraft sound. In a corresponding study, we auralized the sound of an aircraft under different weather conditions. A 360° video compares how the same aircraft typically sounds during summer, autumn, and winter when the acoustic changes due to the weather conditions are taken into account (use headphones for the full experience!).
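
The physics behind this effect is atmospheric absorption: air damps high frequencies far more strongly than low ones, and the damping depends on temperature and humidity, which is why weather changes the sound. The back-of-the-envelope calculation below uses rough, order-of-magnitude absorption values for one set of weather conditions (assumed figures, not measured data) to show why a distant aircraft loses its bright high end.

# Rough atmospheric absorption (dB per km) at roughly 20 degrees C and 50 %
# humidity; order-of-magnitude values only -- exact figures vary with weather.
ABSORPTION_DB_PER_KM = {125: 0.5, 1000: 5.0, 4000: 30.0, 10000: 110.0}

for freq_hz, alpha in ABSORPTION_DB_PER_KM.items():
    for dist_m in (100, 1000, 3000):
        loss_db = alpha * dist_m / 1000.0  # absorption only, no 1/r spreading
        print(f"{freq_hz:>5} Hz over {dist_m:>4} m: {loss_db:6.1f} dB lost to air")

With these assumed values, over three kilometers the 10 kHz content loses more than 300 dB to absorption while 125 Hz loses less than 2 dB: the distant aircraft keeps its low rumble but loses its bright hiss.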

In another work, we prepared a freely available project template for Virtual Acoustics. For this, we acoustically and graphically modeled the IHTApark, which is located next to the Institute for Hearing Technology and Acoustics (IHTA): https://www.openstreetmap.org/#map=18/50.78070/6.06680.

In our latest experiment, we focused on the perception of especially annoying traffic sound events. We presented the traffic situations through virtual reality headsets and asked the participants to assess them. How (un)pleasant would the drone be for you during a walk in the IHTApark?