–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Museums are designed to dazzle the eyes but often fail the ears. Imagine standing in a stunning gallery with high ceilings and gleaming floors, only to struggle to hear the tour guide over the echoes. Later, you pause before a painting, hoping for quiet reflection, but you get distracted by nearby chatter. Our research shows how simple design choices, like swapping concrete floors for carpet or adding acoustic ceilings, can transform visitor experiences by improving the acoustic environment.
The Acoustic Challenge in Museums
Contemporary museums often embrace a “white box” aesthetic, where minimalist architecture puts art center stage. Usually, this approach relies on hard, highly reflective finishes like glass, concrete, and masonry, paired with high ceilings and open-plan layouts. While visually striking, these designs rarely account for their acoustic side effects, creating echo chambers that distract from the art they’re meant to highlight.
Testing “What if?” in Real Galleries
Figure 1. Room-impulse-response measurement in progress: a dodecahedral loudspeaker (left) emits test signals while a microphone records the gallery’s acoustic “fingerprint.” Photo: Aleksandr Tsurupa
To solve this, we visited museum galleries and recorded how sound traveled in each space, capturing an “acoustic fingerprint” known as the room impulse response. Using these recordings, we built virtual models to test how different materials (e.g., carpet vs. concrete) changed the sound in the space. We evaluated three levels of sound absorption (low, medium, and high) on the floor, ceiling, and walls. We then assessed how these choices affected key acoustic metrics, including how long sound lingers (reverberation time, or RT), how intelligible speech is (Speech Transmission Index, or STI), and how far away you can still understand a conversation clearly (distraction distance).
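To give a flavor of the mechanics, here is a minimal sketch of how RT can be read off a measured impulse response using Schroeder backward integration, a standard room-acoustics technique; the synthetic decay at the end merely stands in for a real gallery measurement.

```python
import numpy as np

def estimate_rt60(rir: np.ndarray, fs: int) -> float:
    """Estimate RT60 from an impulse response using the T20 method."""
    # Schroeder backward integration of the squared impulse response
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    decay_db = 10 * np.log10(energy / energy.max())

    # Find the -5 dB and -25 dB points of the decay curve (T20 range)
    i5 = np.argmax(decay_db <= -5)
    i25 = np.argmax(decay_db <= -25)

    # Fit a straight line to the decay between those points
    t = np.arange(i5, i25) / fs
    slope, _ = np.polyfit(t, decay_db[i5:i25], 1)

    # Extrapolate the fitted slope to a full 60 dB decay
    return -60.0 / slope

# Example with a synthetic exponential decay (RT60 of about 1.8 s)
fs = 48000
t = np.arange(0, 3, 1 / fs)
rir = np.random.randn(t.size) * np.exp(-3 * np.log(10) * t / 1.8)
print(f"Estimated RT60: {estimate_rt60(rir, fs):.2f} s")
```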
Key Findings
1. More Absorption Always Helps: Our first big finding is that adding more absorption always helps—no exceptions. Increasing from low→medium→high absorption consistently: cut reverberation in half or more, boosted speech clarity by 0.05–0.10 STI points, and made speech level drop faster with distance (good for privacy).
2. Placement Matters: Where you put that absorption makes a practical difference:
Floors yield the single biggest improvement: swapping concrete for carpet cuts reverberation by 1.8 seconds. However, floor treatment alone does not guarantee ideal results; supplemental ceiling or wall treatments may still be needed to hit target RT, clarity, and privacy levels.
Ceilings delivered the largest jumps in STI and clarity, along with the greatest overall increase in distraction distance and better sound attenuation. Moving from a fully reflective ceiling to wood, and then to microperforated ceiling panels, is therefore a compelling upgrade for intelligibility.
Walls emerged as the ultimate privacy tool. Only high-absorption plaster walls drove conversation levels at 4 m below 52 dB and created the steepest drop-off, perfect for whisper-quiet exhibits or multimedia spaces.
3. A Simple STI‐Prediction Shortcut: Measuring speech intelligibility typically requires specialized equipment and complex calculations. We distilled our data into a simple formula to predict STI using just a room’s volume and total absorption—no advanced math required (STI ranges from 0–1; closer to 1 = perfect intelligibility).
Figure 2. Predicted Speech Transmission Index (STI) across room volume and total absorption area. Warm colors indicate higher STI in smaller, highly absorptive spaces; cool colors indicate lower STI in large, reflective rooms. The overlaid equation estimates STI from volume, absorption, and reverberation time. Source: Authors
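The authors’ fitted equation appears in Figure 2. As a rough stand-in built only from textbook relations, the sketch below chains Sabine’s formula (RT from volume and total absorption) with the diffuse-field modulation transfer function (STI from RT), ignoring background noise and the octave-band weighting of the full STI standard; the room volume and absorption values are hypothetical.

```python
import numpy as np

def sabine_rt(volume_m3: float, absorption_m2: float) -> float:
    """Sabine reverberation time: RT = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_m2

def sti_from_rt(rt: float) -> float:
    """Simplified reverberation-only STI via the modulation transfer function."""
    # 14 standard modulation frequencies, 0.63-12.5 Hz (third-octave spacing)
    F = 0.63 * 2 ** (np.arange(14) / 3)
    m = 1.0 / np.sqrt(1.0 + (2 * np.pi * F * rt / 13.8) ** 2)
    snr = np.clip(10 * np.log10(m / (1 - m)), -15, 15)  # apparent SNR, dB
    return float(np.mean((snr + 15) / 30))              # transmission index

# Example: a 2000 m^3 gallery, before and after adding absorption
for A in (100, 400):  # total absorption in m^2 (hypothetical values)
    rt = sabine_rt(2000, A)
    print(f"A = {A:3d} m^2 -> RT = {rt:.2f} s, STI ~ {sti_from_rt(rt):.2f}")
```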
Hear the Difference: Auralizations from Williams College Museum
Below is one of the rooms used as a case study (Figure 3). Using auralizations (audio simulations that let you “hear” a space before it’s built), you can experience these changes yourself. Click each scenario below to hear the differences! (A short sketch after the scenario list shows how such auralizations are produced.)
Figure 3. Museum gallery (photo) and its calibrated 3D model. The highlighted gallery “W1” served as a case study for virtually swapping floor, wall, and ceiling finishes to predict acoustic outcomes. Source: Authors
Note: the weighted absorption coefficient (αw) varies from 0 to 1; the higher the value, the more sound is absorbed.
Floor:
Scenario 1: concrete floor (αw = 0.01)
Scenario 2: wooden floor (αw = 0.09)
Scenario 3: carpet floor (αw = 0.43)
Wall:
Scenario 4: masonry wall (αw = 0.01)
Scenario 5: 13 mm Gypsum/Plaster board on frame, 100 mm mineral wool behind (αw = 0.09)
Scenario 9: Perforated Acoustic Panels 20–25 mm thick with porous backing (αw = 0.81)
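For the curious, here is a minimal sketch of how such auralizations are produced: convolve a dry (anechoic) speech recording with the simulated impulse response of a scenario to obtain the corresponding “wet” audio. The file names are hypothetical placeholders, and both files are assumed to be mono 16-bit recordings at the same sample rate.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, dry = wavfile.read("dry_speech.wav")        # hypothetical anechoic recording
_, rir = wavfile.read("scenario3_carpet.wav")   # hypothetical simulated RIR

# Convolution stamps the room's acoustic fingerprint onto the dry speech
wet = fftconvolve(dry.astype(float), rir.astype(float))
wet = wet / np.max(np.abs(wet))                 # normalize to avoid clipping
wavfile.write("auralization_scenario3.wav", fs, (wet * 32767).astype(np.int16))
```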
The takeaway? Start with sound-absorbing floors to reduce echoes, add ceiling panels to sharpen speech, and use high-performance walls where privacy matters most. These steps do not require sacrificing aesthetics—materials like sleek microperforated wood or acoustic plaster blend seamlessly into designs. By considering acoustics early, designers can create museums that are as comfortable to hear as they are to see.
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Microscopic bubbles, when caused to vibrate by ultrasound waves, can be powerful enough to break through the body’s natural barriers and even to destroy tissue. Growth, resonance, and violent collapse of these microbubbles, called acoustic cavitation, is enabling new medical therapies such as drug delivery through the skin, opening of the blood-brain barrier, and destruction of tumors. However, the biomedical effects of cavitation are still challenging to understand and control. A special session at the 188th meeting of the Acoustical Society of America, titled “Double, Double, Toil and Trouble – Towards a Cavitation Dose,” is bringing together researchers working on methods to consistently and accurately measure these bubble effects.
For more than 30 years, scientists have measured bubble activity by listening with electronic sensors, called passive cavitation detection. The detected sounds can resemble sustained musical tones, from continuously vibrating bubbles, or applause-like noise, from groups of collapsing bubbles. However, results are challenging to compare between different measurement configurations and therapeutic applications. Researchers at the University of Cincinnati are proposing a method for reliably characterizing the activity of cavitating bubbles by quantifying their radiated sound.
A passive cavitation detector (left) listens for sound waves radiated by a collection of cavitating bubbles (blue dots) within a region of interest (blue rectangle).
The Cincinnati researchers are trying to improve measurements of bubble activity by precisely accounting for the spatial sensitivity patterns of passive cavitation detectors. The result is a measure of cavitation dose, equal to the total sound power radiated from bubbles per unit area or volume of the treated tissue. The hope is that this approach will enable better prediction and monitoring of medical therapies based on acoustic cavitation.
Figure 1: In an experiment simulating drug delivery through the skin (left), a treatment source projects an ultrasound beam onto animal skin. A passive cavitation detector (PCD) listens for sound radiated by bubbles at the skin surface, while the skin’s permeability is measured from its electrical resistance. Measured bubble activity is quantified using the sensitivity pattern of the PCD within the treated region (highlighted blue circle).
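To sketch how these pieces might fit together (with placeholder numbers, not the study’s calibrated values): read the subharmonic band power from the spectrum of a PCD recording, then divide by the detector’s sensitivity integrated over the treated region to get a dose in power per unit area.

```python
import numpy as np

def subharmonic_power(pcd: np.ndarray, fs: float, f0: float, bw: float = 20e3) -> float:
    """Power in a narrow band around the subharmonic f0/2 (sustained bubbles)."""
    spectrum = np.abs(np.fft.rfft(pcd)) ** 2 / pcd.size
    freqs = np.fft.rfftfreq(pcd.size, 1 / fs)
    band = np.abs(freqs - f0 / 2) < bw / 2
    return float(spectrum[band].sum())

def effective_area_mm2(beamwidth_mm: float = 2.0, extent_mm: float = 5.0) -> float:
    """Integral of a placeholder Gaussian PCD sensitivity over the treated region."""
    x = np.linspace(-extent_mm, extent_mm, 201)
    X, Y = np.meshgrid(x, x)
    sensitivity = np.exp(-(X**2 + Y**2) / (2 * beamwidth_mm**2))
    return float(sensitivity.sum() * (x[1] - x[0]) ** 2)

# Example with a synthetic recording: 1 MHz drive plus a 0.5 MHz subharmonic
fs, f0 = 20e6, 1e6
t = np.arange(0, 1e-3, 1 / fs)
pcd = np.sin(2*np.pi*f0*t) + 0.3*np.sin(2*np.pi*(f0/2)*t) + 0.05*np.random.randn(t.size)
dose = subharmonic_power(pcd, fs, f0) / effective_area_mm2()
print(f"Cavitation dose (arbitrary units per mm^2): {dose:.3e}")
```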
The researchers reported results from two experiments testing their methods for characterizing cavitation. In experiments testing ultrasound methods for drug delivery through the skin (Figure 1), they found that total power of subharmonic acoustic emissions (like musical tones indicating sustained vibrations of resonating bubbles) per unit skin surface area consistently increased when the skin became more permeable, quantifying the role of bubble activity in drug delivery. In a second experiment (Figure 2), the researchers quantified bubble activity during heating of animal liver tissue by ultrasound, simulating cancer therapies called thermal ablation. They found that increased bubble activity could indicate both faster tissue heating near the treatment source and reduced heating further from the source.
Figure 2: An ultrasound (US) array sonicates animal liver tissue with a high-intensity ultrasound beam, causing tissue heating (thermal ablation) as used for liver tumor treatments. Increased bubble activity was found to reduce the depth of treatment, while sometimes also increasing the area of ablated tissue near the tissue surface.
This approach to measuring bubble activity could help establish standard cavitation doses for many different ultrasound therapy methods. Quantitative measurements of bubble activity could help confirm treatment success, such as drug delivery through the skin, or guide thermal treatments by optimizing bubble activity to heat tumors more efficiently. Standard measures of cavitation dose should also help scientists more rapidly develop new medical therapies based on ultrasound-activated microbubbles.
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Have you ever wondered how we manage to understand someone in echoey and noisy spaces? For people using cochlear implants, understanding speech in these environments is especially difficult—and our research aims to explore why.
Figure 1. Spectrogram of reverberant speech before (top) and after (bottom) Cochlear Implant processing
When sound is produced in a room, it reflects off surfaces and lingers—creating reverberation. Reflections of both target speech and background noise make understanding speech even more difficult. However, for listeners with typical hearing, the brain quickly adapts to these reflections through short-term exposure, helping separate the speech signal from the room’s acoustic “fingerprint.” This process, known as adaptation, relies on specific sound features: the reverberation tail (the lingering energy after the speech stops), reduced modulation depth (how much the amplitude of the speech varies), and increased energy at low frequencies. Together, these cues create temporal and spectral patterns that the brain can group as separate from the speech itself.
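As a small illustration of the modulation-depth cue, the sketch below measures how deeply a signal’s envelope fluctuates and shows that convolving a dry signal with a reverberant tail reduces that depth; the synthetic “speech” and “room” are stand-ins for real recordings.

```python
import numpy as np
from scipy.signal import fftconvolve, hilbert

def modulation_index(x: np.ndarray) -> float:
    """Depth of envelope fluctuation: std of the envelope over its mean."""
    envelope = np.abs(hilbert(x))
    return float(envelope.std() / envelope.mean())

fs = 16000
t = np.arange(0, 2, 1 / fs)
# "Speech": a noise carrier with a 4 Hz envelope (a typical syllable rate)
dry = (1 + np.sin(2 * np.pi * 4 * t)) * np.random.randn(t.size)
# "Room": exponentially decaying noise tail, roughly a 1 s reverberation time
tail = np.random.randn(fs) * np.exp(-3 * np.log(10) * np.arange(fs) / fs)
wet = fftconvolve(dry, tail)[: dry.size]

print(f"dry: {modulation_index(dry):.2f}, reverberant: {modulation_index(wet):.2f}")
```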
While typical-hearing listeners adapt, many cochlear implant (CI) users report extreme difficulty understanding speech in everyday places like restaurants, where background noise and sound reflections are common. Although cochlear implants have been remarkably effective in restoring access to sound and speech for people with profound hearing loss, they still fall short in complex acoustic environments. This study explores the nature of distortions introduced by cochlear implants to key acoustic cues that listeners with typical hearing use to adapt to reverberant rooms.
The study examined how cochlear implant signal processing affects these cues by analysing room impulse response signals before and after simulated CI processing. Two key parameters were manipulated. The first, the input dynamic range (IDR), determines how much of the incoming sound is preserved before compression and affects how soft and loud sounds are balanced in the delivered electric signal. The second, the logarithmic growth function (LGF), controls how sharply the sound is compressed at higher levels. A lower LGF value results in more abrupt shifts in volume, which can distort fine details in the sound.
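To make the two parameters concrete, here is a hedged sketch using one common formulation of a cochlear implant amplitude mapping: the input is clipped to the IDR and then compressed by a logarithmic growth curve whose steepness is set by a parameter rho. The values below are placeholders, and rho stands in for the LGF setting rather than reproducing the study’s exact implementation.

```python
import numpy as np

def ci_amplitude_map(x_db, idr_db=40.0, max_db=65.0, rho=416.0):
    """Map an input level (dB SPL) to a normalized electric output in [0, 1]."""
    min_db = max_db - idr_db
    # Clip to the IDR: levels below the floor are discarded, above are saturated
    v = np.clip((x_db - min_db) / idr_db, 0.0, 1.0)
    # Logarithmic growth function: rho sets the curvature of the compression
    return np.log(1 + rho * v) / np.log(1 + rho)

# Example: a 1 dB input change near the bottom vs. the top of the IDR
for level in (30.0, 60.0):
    step = ci_amplitude_map(level + 1) - ci_amplitude_map(level)
    print(f"{level} dB SPL: output change for +1 dB input = {step:.4f}")
```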
The results show that cochlear implant processing significantly alters the acoustic cues that support adaptation. Specifically, it reduces the fidelity with which modulations are preserved, shortens the reverberation tail, and diminishes the low-frequency energy typically added by reflections. Overall, this degrades the speech clarity index of the sound, which can contribute to CI users’ difficulty communicating in reflective spaces.
Further, increasing the IDR extended the reverberation tail but also reduced the clarity index by increasing the relative contribution of reverberant energy to the total energy. Similarly, lowering the LGF value caused more abrupt energy changes in the reverberation tail, degrading modulation fidelity. Interestingly, it also led to a more gradual drop-off in low-frequency energy—highlighting a complex trade-off.
Together, these findings suggest that cochlear implant users may struggle in reverberant environments not only because of reflections but also because their devices alter or distort the acoustic regularities that enable room adaptation. Improving how cochlear implants encode these features could make speech more intelligible in real-world, echo-filled spaces.
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Exploring the Lives of the Ocean’s Deepest Divers
After the Deepwater Horizon oil spill, restoring marine mammal populations in the Gulf of Mexico became a priority. Protecting these animals starts with understanding how they use their habitat and where they go. Sperm whales and beaked whales are some of the ocean’s most extreme divers, spending much of their lives navigating the dark depths. They rely on bursts of sound called echolocation clicks to find their prey and navigate. These clicks act like acoustic fingerprints, helping us figure out where whales go and what environments they prefer.
To track their movements, we set up 18 underwater listening stations throughout the Gulf. These instruments recorded sounds continuously for three years. By analyzing this data, we discovered patterns in where the whales appeared and how those locations were linked to oceanographic features like currents and slopes.
Video: Deploying the instruments.
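As a hint of how years of continuous audio are screened for clicks, here is a hedged sketch of a simple energy detector: band-pass the recording around typical click frequencies, then flag brief transients that rise well above the background. The band, threshold, and sample rate are illustrative; the study’s actual detectors are more sophisticated.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def detect_clicks(audio: np.ndarray, fs: float, band=(20e3, 80e3), k=8.0):
    """Return sample indices of candidate echolocation clicks."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfilt(sos, audio)))
    # Threshold relative to the median envelope (robust to slow noise changes)
    above = env > k * np.median(env)
    # Keep only the onset sample of each threshold crossing
    return np.flatnonzero(above & ~np.roll(above, 1))

# Example with synthetic data (fs must exceed twice the band's upper edge)
fs = 192e3
recording = np.random.randn(int(fs))          # one second of background noise
recording[96000:96010] += 20.0                # a brief, loud "click"
print(detect_clicks(recording, fs)[:5])
```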
Where Whales Go
Different whale species tend to favor different parts of the deep Gulf. Goose-beaked whales often stay near deep eddies and steep slopes. Gervais’ beaked whales are more likely to follow surface and midwater eddies, while sperm whales mostly stick to areas where freshwater from rivers mixes with the open ocean. They tend to avoid the tropical Loop Current, a warm flow from the Caribbean into the Gulf, which seems to create conditions less favorable for these whales.
An example of how marine mammals use different parts of the Gulf of Mexico. The maps show ocean features at three depth ranges: surface (0-250 m), mid-depth (700-1250 m), and deep (1500-3000 m). Dolphins are shown in the surface plot, sperm whales in the mid-depth plot, and goose-beaked whales in the deep plot. Colors indicate water movement, with red showing strong currents and blue showing calmer areas. Circles mark recording stations, with bigger circles showing more animals detected.
Whales Shape Their Environment
Whales don’t just adapt to their surroundings; they also shape them. Their powerful clicks, produced by the millions, bounce off the seafloor and underwater features, making their presence a key part of the local acoustic environment. Where whales occur, the soundscape changes, influenced both by their vocalizations and by the prey that may be present; prey layers can affect how sound propagates through the water, adding complexity to the acoustic field. Mapping where whales are present therefore reveals potential biological hotspots and helps us understand how sound behaves in these deep-sea habitats under different conditions.
Why This Matters
This research is a collaboration between scientists from the United States and Mexico, supported by NOAA’s RESTORE Science Program, the Deepwater Horizon Restoration Open Ocean Marine Mammal Trustee Implementation Group, and the Office of Naval Research Task Force Ocean. These detailed maps of whale distribution are vital for identifying critical habitats and guiding conservation strategies. They help us understand how threats like oil spills, industrial activity, and environmental changes impact whale populations, allowing us to plan effective mitigation and restoration efforts to maintain healthy ecosystems.
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
When we communicate, clear speech is crucial—it helps us exchange ideas, learn, and build human connections. But often, poor acoustic conditions in rooms like crowded restaurants, large lecture halls, or meeting spaces can make it difficult to understand speech clearly. Indoor architectural design significantly impacts speech clarity, so studying how different spaces affect communication, especially when hearing-impaired people are involved, is essential for fostering optimal designs that facilitate effective communication.
Virtual Reality (VR) might provide a practical and time-saving solution for this research, allowing us to reproduce various architectural environments and study how people perceive speech within those spaces without needing access to the real environments. Some laboratories have already implemented systems to accurately reproduce acoustics targeting diverse research goals. However, these systems typically rely on complex and costly arrays of dozens of loudspeakers, making studies difficult to set up, expensive, and inaccessible for architectural designers who are not VR experts.
Thus, a question arises: can even a less complex VR system still replicate a realistic experience of listening to speech in an actual room?
At the Audio Space Lab of the Politecnico di Torino, we set up a simpler and more affordable VR system. This system combines a VR headset with a spherical array of 16 loudspeakers to create immersive and realistic audiovisual communication scenarios surrounding the listener in a 360° experience, using an audio technique called 3rd-Order Ambisonics. We then tested whether our VR setup could consistently replicate the experience of listening in a medium-sized, echoey lecture room.
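For the technically curious, the channel count is no accident: a 3rd-order Ambisonics sound field has (3+1)² = 16 components, matching the 16 loudspeakers. The sketch below encodes a mono source direction into those 16 channels using the common ACN/SN3D convention; it is a generic illustration, not the lab’s actual rendering chain, and the layout-specific decoding step to loudspeaker feeds is omitted.

```python
import numpy as np

def encode_hoa3(azimuth: float, elevation: float) -> np.ndarray:
    """16 encoding gains for a plane-wave source (ACN ordering, SN3D norm)."""
    sa, ca = np.sin(azimuth), np.cos(azimuth)
    se, ce = np.sin(elevation), np.cos(elevation)
    return np.array([
        1.0,                                            # W (order 0)
        sa * ce, se, ca * ce,                           # Y, Z, X (order 1)
        np.sqrt(3)/2 * np.sin(2*azimuth) * ce**2,       # order 2
        np.sqrt(3)/2 * sa * np.sin(2*elevation),
        (3*se**2 - 1) / 2,
        np.sqrt(3)/2 * ca * np.sin(2*elevation),
        np.sqrt(3)/2 * np.cos(2*azimuth) * ce**2,
        np.sqrt(5/8) * np.sin(3*azimuth) * ce**3,       # order 3
        np.sqrt(15)/2 * np.sin(2*azimuth) * se * ce**2,
        np.sqrt(3/8) * sa * ce * (5*se**2 - 1),
        se * (5*se**2 - 3) / 2,
        np.sqrt(3/8) * ca * ce * (5*se**2 - 1),
        np.sqrt(15)/2 * np.cos(2*azimuth) * se * ce**2,
        np.sqrt(5/8) * np.cos(3*azimuth) * ce**3,
    ])

# Example: encode a short mono buffer arriving from 45 degrees to the left
mono = np.random.randn(480)                    # placeholder speech samples
gains = encode_hoa3(np.deg2rad(45), 0.0)
hoa = gains[:, None] * mono[None, :]           # 16 x N ambisonic channels
print(hoa.shape)                               # (16, 480)
```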
To test this, we compared the speech understanding of thirteen volunteers in the real lecture hall and in its virtual replica. During the tests, volunteers listened to single sentences and repeated what they understood across five different audiovisual scenes, varying the speech source location and the presence or absence of distracting noise. All scenarios included typical background noise, such as the hum of air conditioning, to closely mimic real-life conditions.
In Figure 1, you can see a volunteer in the real lecture room listening to sentences emitted by the loudspeaker positioned to their right, while a distracting noise is presented from the frontal loudspeaker. In Video 1, a volunteer performs the same speech test within the VR system, replicating the exact audiovisual scene shown in Figure 1. Figure 2 shows what the volunteer saw during the test.
Figure 1. Volunteer performing the speech comprehension test in the real lecture room.
Video 1. Volunteer performing the speech comprehension test in the virtual lecture room using the VR system.
Figure 2. Volunteers’ view during both real and virtual speech comprehension tests.
Our findings are promising: we found no significant differences in speech comprehension between the real and virtual settings across all tested scenes.
Additionally, we asked the volunteers how closely their VR experience matched reality. On average, they rated it as “almost very consistent,” reinforcing that the VR system provided a believable acoustic experience.
These results are exciting because they suggest that real-life speech perception in ordinary environments can be effectively predicted even with a less complex VR system. Our affordable and user-friendly VR system could thus become a powerful tool for architects, acousticians, and researchers, offering an accessible way to study speech comprehension in architectural spaces and pursue improved acoustic designs.
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Rehearsal rooms for orchestras pose many acoustic design challenges. The most fundamental concern is safety: modern musical instruments are loud enough to create a significant risk of long-term hearing damage for the players and conductor. Loudness also takes a broader toll on musicians, who face constant exposure to loud sound and often feel they must always “hold back” rather than play their instruments normally.
Unless a rehearsal venue is similar in size to a performance venue, which increases cost and embodied materials, rooms are often either too loud to be a safe working environment for the orchestra or suffer from a lack of reverberation and richness, making it hard for musicians and conductor to work on the color, blend, and nuance of the music.
The use of electronic acoustic enhancement systems offers a way to break some of the fundamental “interlocks” between size and loudness of a rehearsal venue and resolve some of these challenges. Beyond just an artificial reverberation system, enhancement systems allow a “virtual acoustic environment” to be created – providing musicians with sound reflections that simulate the experience of playing in a larger room plus a richer – but quieter – room sound. This gives the musicians “breathing room” for their rehearsal.
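To make the idea concrete, here is a hedged sketch of the classic building block behind such systems: microphone signals pass through a reverberator and return to many loudspeakers, adding the late reflections the physical room lacks. The toy feedback delay network below is a generic textbook reverberator, not the proprietary AFC4 algorithm; delay lengths and the target reverberation time are illustrative.

```python
import numpy as np

def fdn_reverb(x, fs, rt60=2.5, delays_ms=(29.7, 37.1, 41.1, 43.7)):
    """Toy 4-channel feedback delay network returning a synthetic reverb tail."""
    d = [int(fs * ms / 1000) for ms in delays_ms]
    # Per-delay-line gain so each recirculating loop decays 60 dB in rt60 s
    g = [10 ** (-3 * di / (fs * rt60)) for di in d]
    H = np.array([[1, 1, 1, 1], [1, -1, 1, -1],
                  [1, 1, -1, -1], [1, -1, -1, 1]]) / 2.0  # unitary mixing matrix
    buffers = [np.zeros(di) for di in d]
    idx = [0, 0, 0, 0]
    y = np.zeros(len(x))
    for n in range(len(x)):
        outs = np.array([buffers[i][idx[i]] for i in range(4)])
        y[n] = outs.sum() / 4                    # wet (reverb-only) output
        fb = H @ outs                            # mix the delay-line outputs
        for i in range(4):
            buffers[i][idx[i]] = x[n] + g[i] * fb[i]
            idx[i] = (idx[i] + 1) % d[i]
    return y

# Example: feed one second of a "microphone" signal through the reverberator;
# a real system would distribute decorrelated mixes to many loudspeakers.
fs = 48000
wet = fdn_reverb(np.random.randn(fs), fs)
print(wet.shape)
```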
The recent Australian Chamber Orchestra auditorium at Walsh Bay Arts Precinct, Sydney is an excellent example of how this technology has allowed a safe and comfortable rehearsal environment for the orchestra in a smaller space, without sacrificing musical quality.
Located in a heritage-listed former industrial wharf complex in Sydney Harbour, the ACO’s 277-seat venue, The Neilson, is an “artist’s studio of sound” featuring views of the Sydney Harbour Bridge through its upper floor windows. The ACO plays across all major Australian cities in venues that seat up to 2500 people, so the ability to preview how a performance will sound in each touring venue is important, allowing the orchestra to adjust for how their performance will change in each room. The orchestra size for each tour varies from small chamber groups up to a full symphony orchestra with added wind and brass players. The Neilson must therefore provide a wide range of acoustic conditions at the touch of a button, all while managing musicians’ noise exposure.
Figure 1: View of The Neilson in flat floor mode with seats retracted. Source: Authors
The electro-acoustic enhancement system installed in The Neilson is a Yamaha AFC4 system consisting of 16 microphones, various DSP (Digital Signal Processing) modules, 79 amplifier channels, and 79 loudspeakers mounted within the walls and ceiling space, which allow the room’s apparent width and height, reverberation, and timbre to be varied, creating different virtual “venues” for the orchestra to rehearse and perform in.
To provide support to musicians and control loudness, the physical room’s surface finishes emphasize reflections from the side walls (lateral reflections) and de-emphasize sound reflections from above.
This allows the AFC4 system to “raise the roof” and create the impression of a much larger room without overwhelming the sound, “knitting together” the physical and electronic parts of the room sound.
The Neilson’s walls and ceiling include several sound-scattering finishes that blend and “soften” the sound, and the architecture itself was inspired by music.
The lower walls are textured with small indentations, encoding a quote by Beethoven written in Braille.
Figure 2: View of the “wavy wall” with “Braille” acoustic diffusion. Source: Authors
The glazed upper walls along the balcony level are “frozen music”, based on the chord progression of Bach’s Chaconne for solo violin, with each of the 16 window sections “spelling” a chord (the widths of the panes of glass are in proportion to the intervals of the notes in the chord).
Figure 3: Render of the “Chaconne window” glass diffuser. Source: TZG Architects
The ceiling “wells” and “fins” were set out in a sequence where the height of the wells in each portion of the ceiling was proportional to the intervals between notes in three famous musical motifs by Wagner (Tristan und Isolde), Shostakovich (String Quartet No.8) and Richard Strauss (Elektra).
The “virtual acoustics” provided in The Neilson make it more than just a beautiful space: it is one of the most flexible orchestra rehearsal rooms in the world, allowing the ACO to preview how they will adjust their performance for venues ten times larger than the “real” room, unlocking new performance options for audiences in the room, and reaching new streaming audiences online. It is a great example of how technology can deliver “more from less” through the sustainable re-use of an existing heritage building.