What do a glass bottle and an ancient Indian flute have in common? Explorations in acoustic color

Ananya Sen Gupta – ananya-sengupta@uiowa.edu
Department of Electrical and Computer Engineering
University of Iowa
Iowa City, IA 52242
United States

Trevor Smith – trevor-smith@uiowa.edu

Panchajanya Dey – panchajanyadey@gmail.com
@panchajanya_official

Popular version of 5aMU4 – Exploring the acoustic color signature patterns of Bansuri, the traditional Indian bamboo flute using principles of the Helmholtz generator and geometric signal processing techniques
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAICA25&id=3848014

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

The Bansuri, the ancient Indian bamboo flute

 


Bansuri, the ancient Indian bamboo flute, holds rich historical, cultural, and spiritual significance in South Asian musical heritage. It is mentioned in ancient Hindu texts dating back centuries, sometimes millennia, and is still played all over India today in classical, folk, film, and other musical genres. Made from a single bamboo reed, with seven finger holes (six of which are most commonly played) and one blow-hole, the bansuri carries the rich melody of wind whistling through tropical woods. In terms of musical acoustics, the bansuri works essentially as a composite Helmholtz resonator, also known as a wind throb, with a cavity that is cylindrical rather than spherical and partially open. The cavity opens through whichever finger holes are uncovered during playing, as well as through the open end of the shaft. Helmholtz resonance refers to the phenomenon of air resonating in a cavity, an effect named after the German physicist Hermann von Helmholtz. The bansuri’s sound is created when air entering through the blow-hole is briefly trapped inside the cavity of the bamboo shaft before leaving, primarily through the end of the shaft and the first open finger holes.
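To make the physics concrete, here is a minimal sketch (in Python) of the classic lumped-element Helmholtz formula, f = (c/2π)·√(A/(V·L_eff)), applied to a glass bottle like the one in our demonstrations. The bansuri’s composite, cylindrical cavity is more complicated, and every dimension below is a hypothetical illustration, not a measured value.

```python
import math

# Minimal sketch: lumped-element Helmholtz resonance estimate for a
# glass bottle (the bansuri is a more complex, composite case).
# All dimensions below are hypothetical illustration values.
c = 343.0            # speed of sound in air, m/s
r = 0.008            # neck radius, m
L = 0.05             # neck length, m
V = 7.5e-4           # cavity volume, m^3 (roughly a 750 ml bottle)

A = math.pi * r**2               # neck cross-sectional area
L_eff = L + 1.7 * r              # neck length plus end corrections
f = (c / (2 * math.pi)) * math.sqrt(A / (V * L_eff))
print(f"Estimated resonance: {f:.0f} Hz")
```

Blowing across a bottle of roughly this size typically produces a low hum in the range of 100–200 Hz, consistent with this estimate.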

The longer the effective air column, which depends on how many finger holes are closed, the lower the fundamental resonant frequency. However, the acoustical quality of the bansuri is determined not only by the fundamental (lowest) frequency but also by the relative dominance of the harmonics (higher octaves). The player can activate different octaves (a typical bansuri has a range of three octaves) by controlling the angle and “beam-width” of the blow, which significantly affects the dynamics of the air pressure, vorticity, and airflow. A direct blow into the blow-hole, for any finger-hole combination, activates the direct propagation mode, in which the lowest octave is dominant. To hit the higher octaves of the same note, the flautist must blow at an angle to activate other modes of sound propagation, which proceed through the air column as well as the wooden body of the bansuri.
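A complementary back-of-the-envelope model treats the bore as a pipe open at both ends, whose fundamental is f₁ = c/(2·L_eff) with harmonics at integer multiples. The sketch below, using made-up effective lengths, shows how lengthening the column (covering more finger holes) lowers the pitch:

```python
# Minimal sketch: fundamental of an open-open cylindrical air column,
# f1 = c / (2 * L_eff). Closing finger holes lengthens the effective
# column and lowers the pitch. Lengths here are hypothetical.
c = 343.0  # speed of sound in air, m/s
for L_eff in (0.30, 0.40, 0.50):          # effective column lengths, m
    f1 = c / (2 * L_eff)
    harmonics = [n * f1 for n in (1, 2, 3)]
    print(f"L_eff = {L_eff:.2f} m -> " + ", ".join(f"{f:.0f} Hz" for f in harmonics))
```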

The accompanying videos and images show a basic demonstration of the bansuri as a musical instrument by Panchajanya Dey, simple demonstrations of a glass bottle as a Helmholtz resonator, and an exposition of how acoustic color (shown in the figures) can be used to bridge artists across disciplines and create new forms of music.

Acoustic color is a popular data science tool that expresses the relative distribution of power across the frequency spectrum as a function of time. Visually, these are images whose colormap (red = high, blue = low) represents the relative power among the harmonics of the flute; a rising (or falling) curve within the acoustic color image indicates a rising (or falling) tone for that harmonic. For the bansuri, the harmonic structures appear as non-linear, braid-like curves within the acoustic color image. The higher harmonics, which may contain useful melodic information, are often embedded in background noise that sounds like hiss, likely produced by the mixing of airflow modes and irregular reed vibrations. However, some hiss is natural to the flute, and filtering it out makes the music lose its authenticity. In the talk, we presented computational techniques based on harmonic filtering to separate the modes of acoustic propagation and sound production in the bansuri, e.g., filtering out leakage due to mixing of modes. We also showed how the geometric aspects of the acoustic color features (e.g., harmonic signatures) may be exploited to create a fluid feature dictionary. The purpose of this dictionary is to store the harmonic signatures of different melodic movements without sacrificing the rigor of musical grammar or the authentic earthy sound of the bansuri (some of the hiss is natural and supposed to be there). This fluid feature repository may be harnessed with large language models (LLMs) or similar AI/ML architectures to enable machine interpretation of Indian classical music and to create collaborative infrastructure that lets artists from different musical traditions experiment with an authentic software testbed, among other exciting applications.
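As a minimal illustration of what an acoustic color image is, the sketch below computes a spectrogram of a synthetic flute-like tone (a slowly rising fundamental with harmonics and a little breath-like hiss). It shows only the underlying time-frequency idea, not the harmonic filtering or feature-dictionary methods from the talk.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Minimal sketch of an "acoustic color" image: power vs. time and
# frequency. A synthetic flute-like tone (fundamental plus harmonics
# and a little breath noise) stands in for a bansuri recording.
fs = 22050                              # sample rate, Hz
t = np.linspace(0, 2.0, int(2.0 * fs), endpoint=False)
f0 = 440.0 * 2 ** (t / 4)               # slowly rising fundamental
phase = 2 * np.pi * np.cumsum(f0) / fs
x = sum((0.5 ** k) * np.sin(k * phase) for k in (1, 2, 3, 4))
x += 0.02 * np.random.randn(x.size)     # breathy "hiss"

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=1024, noverlap=768)
plt.pcolormesh(tt, f, 10 * np.log10(Sxx + 1e-12), cmap="jet")  # red = high, blue = low
plt.ylim(0, 5000); plt.xlabel("Time (s)"); plt.ylabel("Frequency (Hz)")
plt.title("Acoustic color (spectrogram) of a synthetic flute tone")
plt.show()
```

The harmonics appear as parallel rising curves, and the hiss as a faint background wash, just as described above for the bansuri.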

Sound Highways of the Sea: Mapping Acoustic Corridors for Whales and Fish in Colombia’s Pacific

Maria Paula Rey Baquero – rey_m@javeriana.edu.co
Instagram: @mariapaulareyb
Pontificia Universidad Javeriana
Fundación Macuáticos Colombia
Bogotá
Colombia

Additional Authors:
Kerri D. Seger
Camilo Andrés Correa Ayram
Natalia Botero Acosta
Maria Angela Echeverry-Galvis

Project Ports, Humpbacks y Sound In Colombia – @physicolombia
Fundación Macuáticos Colombia – @macuaticos
Semillero Aquasistemas – @aquasistemaspuj

Popular version of 4aAB5 – Modeling for acoustical corridors in patchy reef habitats of the Gulf of Tribugá, Colombia
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAICA25&id=3864155

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Sound plays a fundamental role in marine ecosystems, functioning as an invisible network of “pathways” or corridors that connect habitat patches and enable critical behaviors like migration, communication, and reproduction. In Colombia’s northern Pacific, one of the world’s most biodiverse regions, the Gulf of Tribugá stands out for its pristine soundscape, dominated by the sounds of marine life. Designated a UNESCO Biosphere Reserve and a “Hope Spot” for conservation, this area serves as a vital nursery for humpback whales and supports local livelihoods through ecotourism and artisanal fishing. However, increasing human activities, including boat traffic, and climate change threaten these acoustic habitats, prompting research into how sound influences ecological connectivity—the lifeline for marine species’ movement and survival.

This study in Colombia’s Gulf of Tribugá mapped how ocean sounds connect marine life by integrating acoustic data with ecological modeling. Researchers analyzed how sound travels through the marine environment, finding that humpback whale songs (300 Hz) create natural acoustical corridors along coastal areas and rocky islands (‘riscales’). These pathways, though occasionally interrupted by depth variations, appear crucial for whale communication, navigation, and maintaining social connections during migration. In contrast, fish calls (100 Hz) showed no detectable sound corridors, suggesting fish may depend less on acoustic signals or use alternative navigation cues like wave noise when moving between habitats.
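To give a feel for the corridor idea, here is a heavily simplified sketch of the underlying calculation: estimate the received level of a call on a grid using geometric spreading and absorption, then flag cells where the call remains audible above ambient noise. The actual study used full acoustical-ecological modeling; every number below (source level, noise level, threshold) is an assumption for illustration.

```python
import numpy as np

# Minimal sketch of corridor mapping: received level on a grid around a
# singer from simple spherical spreading plus absorption, flagging cells
# where the call stays above ambient noise. All values are illustrative.
SL = 170.0        # source level of a humpback song unit, dB re 1 uPa @ 1 m (assumed)
NL = 100.0        # ambient noise level, dB (assumed)
DT = 10.0         # detection threshold above noise, dB (assumed)
alpha = 0.01      # seawater absorption, dB/km (low at 300 Hz)

x = np.linspace(-50e3, 50e3, 201)          # grid in meters
y = np.linspace(-50e3, 50e3, 201)
X, Y = np.meshgrid(x, y)
r = np.maximum(np.hypot(X, Y), 1.0)        # range from singer, m

TL = 20 * np.log10(r) + alpha * r / 1000.0 # spreading + absorption loss
RL = SL - TL
audible = RL > NL + DT                     # cells linked by this "corridor"
print(f"Audible area: {audible.mean() * 100:.1f}% of grid")
```

Raising the noise level NL in this toy model immediately shrinks the audible area, which is exactly the corridor degradation described below.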

Photographs of some of the recorded fish species. Source: Author

The research underscores that acoustical connectivity is species-specific. While humpback whales may depend on sound corridors and prioritize long-distance communication, fish may prioritize short-range communication or other environmental signals. Noise pollution, however, disrupts these systems universally: the bubbling and popping sounds created by spinning boat propellers, for instance, generate frequencies that can cover up whale songs and fish calls and degrade habitat quality, even if fish are less affected than whales over the same distances. Background noise shrinks and breaks up the underwater corridors that marine animals use to communicate and navigate, harming their underwater sound habitat.

Figure 1. Received sound levels from singing whales (a) without background noise (left column) and (b) with background noise (right column), at a grain size of 2 Φ. Sound intensities most likely to be heard by a humpback whale at 200 Hz are shown in green, less likely sounds in orange, and inaudible sounds in black. Source: Author

Noise pollution alters the behaviors and acoustic corridors that humpback whales rely on for communication and navigation in Colombia’s Pacific waters. Notably, the fish species studied showed no sound-dependent movement, suggesting their reliance on other cues. The study advocates for sound-inclusive conservation, proposing that acoustic data (more easily gathered today via satellites, field recordings, and public databases) should join traditional metrics like currents or temperature in marine management. Protecting acoustic corridors could become as vital as safeguarding breeding grounds, especially in biodiverse hubs like Tribugá.

This work marks a first step towards integrated acoustical-ecological models, offering tools to quantify noise impacts and design smarter protections. Future research could refine species-specific sound thresholds or expand to deeper oceanic areas. For now, the message is clear: preserving marine ecosystems requires listening, not just looking. Combining efforts to lessen human noise with mapped soundscapes that pinpoint critical corridors could help conserve marine species.

Locating the lives of blue whales with sound informs conservation

John Ryan – ryjo@mbari.org

Monterey Bay Aquarium Research Institute, Moss Landing, CA, 95039, United States

Popular version of 4aUW7 – Wind-driven movement ecology of blue whales detected by acoustic vector sensing
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me/appinfo.php?page=Session&project=ASAICA25&id=3866920&server=eppro01.ativ.me

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

A technology that captures multiple dimensions of underwater sound is revealing how blue whales live, thereby informing whale conservation.

The most massive animal ever to evolve on Earth, the blue whale, needs a lot of food. Finding that food in a vast foraging habitat is challenging, and these giants must travel far and wide in search of it. The searching that leads them to life-sustaining nutrition can also lead them to a life-ending collision with a massive fast-moving ship. To support the recovery of this endangered species, we must understand where and how the whales live, and how human activities intersect with whale lives.

Toward better understanding and protecting blue whales in the California Current ecosystem, an interdisciplinary team of scientists is applying a technology called an acoustic vector sensor. Sitting just above the seafloor, this technology receives the powerful sounds produced by blue whales and quantifies changes in both pressure and particle motion that are caused by the sound waves. The pressure signal reveals the type of sound produced. The particle motion signal points to where the sound originated, thereby providing spatial information on the whales.
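The direction-finding step can be illustrated in a few lines of code: averaging the product of the pressure channel with each particle-velocity channel gives the active intensity vector, whose angle points back toward the source. The synthetic call and bearing below are stand-ins, not the team’s actual processing.

```python
import numpy as np

# Minimal sketch of how a vector sensor points toward a sound source:
# average pressure times particle velocity (active intensity), then take
# the arctangent. Synthetic data stand in for a real blue whale call.
fs = 1000                      # sample rate, Hz
t = np.arange(0, 5, 1 / fs)
true_bearing = np.deg2rad(40)  # direction from sensor to whale (assumed)

p = np.sin(2 * np.pi * 43 * t)                 # pressure: tone near blue whale call band
vx = np.cos(true_bearing) * p + 0.1 * np.random.randn(t.size)
vy = np.sin(true_bearing) * p + 0.1 * np.random.randn(t.size)

Ix, Iy = np.mean(p * vx), np.mean(p * vy)      # time-averaged active intensity
print(f"Estimated bearing: {np.rad2deg(np.arctan2(Iy, Ix)):.1f} deg")
```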

A blue whale in the California Current ecosystem. Image Credit: Goldbogen Lab of Stanford University / Duke Marine Robotics and Remote Sensing Lab; NMFS Permit 16111.

For blue whales, it is all about the thrill of the krill. Krill are small-bodied crustaceans that can form massive swarms. Blue whales only eat krill, and they locate swarms to consume krill by the millions (would that be krillions?). Krill form dense swarms in association with cold plumes of water that result from a wind-driven circulation called upwelling. Sensors riding on the backs of blue whales reveal that the whales can track cold plumes precisely and persistently when they are foraging.

The close relationship between upwelling and blue whale movements motivates the hypothesis that the whales move farther offshore when upwelling habitat expands farther offshore, as occurs during years with stronger wind-driven upwelling. We tested this hypothesis by tracking upwelling conditions and blue whale locations over a three-year period. As upwelling doubled over the study period, the percentage of blue whale calls originating from offshore habitat also nearly doubled. A shift in habitat occupancy toward offshore waters, where the shipping lanes lie, also brings higher risk of fatal collisions with ships.

However, there is good news for blue whales and other whale species in this region. Reducing ship speeds can greatly reduce the risk of ship-whale collisions. An innovative partnership, Protecting Blue Whales and Blue Skies, has been fostering voluntary speed reductions for large vessels over the last decade. This program has expanded to cover a great stretch of the California coast, and the growing participation of shipping companies is a powerful and welcome contribution to whale conservation.

Designing Museum Spaces That Sound as Good as They Look

Milena Jonas Bem – jonasm@rpi.edu
School of Architecture, Rensselaer Polytechnic Institute
Greene Bldg, 110 8th St
Troy, NY 12180
United States

Popular version of 2pAAa7 – Acoustic Design in Contemporary Museums: Balancing Architectural Aesthetics and Auditory Experience
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAICA25&id=3869561

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Museums are designed to dazzle the eyes but often fail the ears. Imagine standing in a stunning gallery with high ceilings and gleaming floors, only to struggle to hear the tour guide over the echoes. Later, you pause before a painting, hoping for quiet reflection, but you get distracted by nearby chatter. Our research shows how simple design choices, like swapping concrete floors for carpet or adding acoustic ceilings, can transform visitor experiences by improving the acoustic environment.

The Acoustic Challenge in Museums
Contemporary museums often embrace a “white box” aesthetic, where minimalist architecture puts art center stage. Usually, this approach relies on hard, highly reflective finishes like glass, concrete, and masonry, paired with high ceilings and open‐plan layouts. While visually striking, these designs rarely account for their acoustic side effects, creating echo chambers that distract from the art they’re meant to highlight.

Testing “What if?” in Real Galleries

museum gallery

Figure 1. Room-impulse-response measurement in progress: a dodecahedral loudspeaker (left) emits test signals while a microphone records the gallery’s acoustic “fingerprint.” Photo: Aleksandr Tsurupa

To solve this, we visited museum rooms and recorded how sound traveled in each space, capturing an acoustic “fingerprint” known as the room impulse response. Using these recordings, we built virtual models to test how different materials (e.g., carpet vs. concrete) changed the sound in the space. We evaluated three levels of sound absorption (low, medium, and high) on the floor, ceiling, and walls. Then we evaluated how these choices affected key acoustic metrics, including how long sound lingers (reverberation time, or RT), how intelligible speech is (Speech Transmission Index, or STI), and how far away you can still understand a conversation clearly (distraction distance).
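For readers curious about the mechanics, reverberation time can be estimated from a room impulse response using Schroeder backward integration. The sketch below applies the standard T20 procedure to a synthetic exponential decay; it illustrates the principle rather than the full measurement chain used in the study.

```python
import numpy as np

# Minimal sketch: estimate RT60 from a room impulse response via
# Schroeder backward integration, extrapolating the T20 slope
# (-5 to -25 dB) to 60 dB of decay. A synthetic exponential decay
# stands in for a real gallery measurement.
fs = 48000
t = np.arange(0, 3, 1 / fs)
rt_true = 1.8                                   # seconds (illustrative)
rir = np.random.randn(t.size) * np.exp(-6.91 * t / rt_true)

edc = np.cumsum(rir[::-1] ** 2)[::-1]           # Schroeder energy decay curve
edc_db = 10 * np.log10(edc / edc[0])

i5 = np.argmax(edc_db <= -5)                    # first samples past -5 and -25 dB
i25 = np.argmax(edc_db <= -25)
slope = (edc_db[i25] - edc_db[i5]) / (t[i25] - t[i5])   # dB per second
print(f"Estimated RT60: {-60.0 / slope:.2f} s")
```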

Key Findings

1. More Absorption Always Helps: Our first big finding is that adding more absorption always helps—no exceptions. Increasing from low→medium→high absorption consistently: cut reverberation in half or more, boosted speech clarity by 0.05–0.10 STI points, and made speech level drop faster with distance (good for privacy).

2. Placement Matters: where you put that absorption makes a practical difference:

    • Floors yield the single biggest improvement: swapping concrete for carpet cuts reverberation by 1.8 seconds. However, floor treatment alone does not guarantee ideal results; supplemental ceiling or wall treatments may still be needed to hit target RT, clarity, and privacy levels.
    • Ceilings delivered the largest jumps in STI and clarity, showing the greatest overall increase in distraction distance and better sound attenuation. So, going from a fully reflective ceiling to wood and then microperforated ceiling panels is compelling for intelligibility.
    • Walls emerged as the ultimate privacy tool. Only high-absorption plaster walls drove conversation levels at 4 m below 52 dB and created the steepest drop-off, perfect for whisper-quiet exhibits or multimedia spaces.

3. A Simple STI‐Prediction Shortcut: Measuring speech intelligibility typically requires specialized equipment and complex calculations. We distilled our data into a simple formula to predict STI using just a room’s volume and total absorption—no advanced math required (STI ranges from 0–1; closer to 1 = perfect intelligibility).
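The exact fitted formula is overlaid in Figure 2 below. As a stand-in, the sketch here shows the classic Sabine relationship between volume, absorption, and reverberation time that any such shortcut builds on (shorter reverberation generally corresponds to higher STI); the gallery volume and absorption values are hypothetical.

```python
# Minimal sketch relating room volume and total absorption to
# reverberation, the ingredients of the STI shortcut. The authors'
# fitted STI formula is not reproduced here; this shows only the
# underlying Sabine relationship. Values are illustrative.
def sabine_rt60(volume_m3: float, absorption_m2: float) -> float:
    """Sabine reverberation time: RT60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_m2

for A in (50.0, 150.0, 400.0):         # total absorption area, m^2 (sabins)
    rt = sabine_rt60(1200.0, A)        # a 1200 m^3 gallery, hypothetical
    print(f"A = {A:5.0f} m^2 -> RT60 = {rt:.2f} s (shorter RT -> higher STI)")
```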

Figure 2. Predicted Speech Transmission Index (STI) across room volume and total absorption area. Warm colors indicate higher STI in smaller, highly absorptive spaces; cool colors indicate lower STI in large, reflective rooms. The overlaid equation estimates STI from volume, absorption, and reverberation time. Source: Authors

Hear the Difference: Auralizations from Williams College Museum
Below is one of the rooms that was used as a case study (Figure 3). Using auralizations (audio simulations that let you “hear” a space before it’s built), you can experience these changes yourself. Click each scenario below to hear the differences!

Figure 3. Museum gallery (photo) and its calibrated 3D model. The highlighted gallery “W1” served as a case study for virtually swapping floor, wall, and ceiling finishes to predict acoustic outcomes. Source: Authors

Note: Weighted absorption coefficient (αw): varies from 0 to 1, higher = more sound absorbed.

Wall:

Ceiling:

The takeaway?
Start with sound-absorbing floors to reduce echoes, add ceiling panels to sharpen speech, and use high-performance walls where privacy matters most. These steps do not require sacrificing aesthetics—materials like sleek microperforated wood or acoustic plaster blend seamlessly into designs. By considering acoustics early, designers can create museums that are as comfortable to hear as they are to see.

 

Measuring the sounds of microbubbles for ultrasound therapy

T. Douglas Mast – doug.mast@uc.edu

Instagram: @baglamist
University of Cincinnati, Cincinnati, OH, 45224, United States

Popular version of 2aBAa7 – Measure for measure: Diffraction correction for consistent quantification of bubble-related acoustic emissions
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me/appinfo.php?page=Session&project=ASAICA25&id=3868554

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Microscopic bubbles, when caused to vibrate by ultrasound waves, can be powerful enough to break through the body’s natural barriers and even to destroy tissue. The growth, resonance, and violent collapse of these microbubbles, called acoustic cavitation, are enabling new medical therapies such as drug delivery through the skin, opening of the blood-brain barrier, and destruction of tumors. However, the biomedical effects of cavitation are still challenging to understand and control. A special session at the 188th meeting of the Acoustical Society of America, titled “Double, Double, Toil and Trouble – Towards a Cavitation Dose,” is bringing together researchers working on methods to consistently and accurately measure these bubble effects.

For more than 30 years, scientists have measured bubble activity by listening with electronic sensors, a technique called passive cavitation detection. The detected sounds can resemble sustained musical tones, from continuously vibrating bubbles, or applause-like noise, from groups of collapsing bubbles. However, results are difficult to compare across different measurement configurations and therapeutic applications. Researchers at the University of Cincinnati are proposing a method for reliably characterizing the activity of cavitating bubbles by quantifying their radiated sound.

A passive cavitation detector (left) listens for sound waves radiated by a collection of cavitating bubbles (blue dots) within a region of interest (blue rectangle).

The Cincinnati researchers are trying to improve measurements of bubble activity by precisely accounting for the spatial sensitivity patterns of passive cavitation detectors. The result is a measure of cavitation dose, equal to the total sound power radiated from bubbles per unit area or volume of the treated tissue. The hope is that this approach will enable better prediction and monitoring of medical therapies based on acoustic cavitation.
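One ingredient of such a dose can be sketched in a few lines: isolate the subharmonic band (half the drive frequency) in a PCD recording’s spectrum and normalize its power by the treated area. The code below does this on synthetic data; the diffraction correction for the detector’s spatial sensitivity, the heart of the proposed method, is not reproduced here.

```python
import numpy as np
from scipy import signal

# Minimal sketch of one ingredient of a cavitation dose: subharmonic
# emission power (at half the drive frequency) extracted from a passive
# cavitation detector (PCD) recording and normalized by treated area.
# Synthetic data; all parameter values are illustrative assumptions.
fs = 50e6                 # PCD sample rate, Hz
f_drive = 2e6             # therapeutic ultrasound frequency, Hz
t = np.arange(0, 1e-3, 1 / fs)
x = (np.sin(2 * np.pi * f_drive * t)               # drive leakage
     + 0.2 * np.sin(2 * np.pi * f_drive / 2 * t)   # subharmonic from sustained cavitation
     + 0.05 * np.random.randn(t.size))             # broadband noise from collapses

f, psd = signal.welch(x, fs=fs, nperseg=4096)
band = (f > 0.45 * f_drive) & (f < 0.55 * f_drive)  # window around f_drive / 2
sub_power = np.trapz(psd[band], f[band])
area_cm2 = 0.8                                      # treated skin area, cm^2 (assumed)
print(f"Subharmonic power per unit area: {sub_power / area_cm2:.3e} (arb./cm^2)")
```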

Figure 1: In an experiment simulating drug delivery through the skin (left), a treatment source projects an ultrasound beam onto animal skin. A passive cavitation detector (PCD) listens for sound radiated by bubbles at the skin surface, while the skin’s permeability is measured from its electrical resistance. Measured bubble activity is quantified using the sensitivity pattern of the PCD within the treated region (highlighted blue circle).

The researchers reported results from two experiments testing their methods for characterizing cavitation. In experiments testing ultrasound methods for drug delivery through the skin (Figure 1), they found that total power of subharmonic acoustic emissions (like musical tones indicating sustained vibrations of resonating bubbles) per unit skin surface area consistently increased when the skin became more permeable, quantifying the role of bubble activity in drug delivery. In a second experiment (Figure 2), the researchers quantified bubble activity during heating of animal liver tissue by ultrasound, simulating cancer therapies called thermal ablation. They found that increased bubble activity could indicate both faster tissue heating near the treatment source and reduced heating further from the source.

Figure 2: An ultrasound (US) array sonicates animal liver tissue with a high-intensity ultrasound beam, causing tissue heating (thermal ablation) as used for liver tumor treatments. Increased bubble activity was found to reduce the depth of treatment, while sometimes also increasing the area of ablated tissue near the tissue surface.

This approach to measuring bubble activity could help establish standard cavitation doses for many different ultrasound therapy methods. Quantitative measurements of bubble activity could help confirm treatment success, such as drug delivery through the skin, or guide thermal treatments by optimizing bubble activity to heat tumors more efficiently. Standard measures of cavitation dose should also help scientists more rapidly develop new medical therapies based on ultrasound-activated microbubbles.

Why do Cochlear Implant Users Struggle to Understand Speech in Echoey Spaces?

Prajna BK – prajnab2@illinois.edu

University of Illinois Urbana-Champaign, Speech and Hearing Science, Champaign, IL, 61820, United States

Justin Aronoff

Popular version of 2pSPb4 – Impact of Cochlear Implant Processing on Acoustic Cues Critical for Room Adaptation
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAICA25&id=3867053

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Have you ever wondered how we manage to understand someone in echoey and noisy spaces? For people using cochlear implants, understanding speech in these environments is especially difficult—and our research aims to explore why.

Figure 1. Spectrogram of reverberant speech before (top) and after (bottom) Cochlear Implant processing

 

When sound is produced in a room, it reflects off surfaces and lingers—creating reverberation. Reflections of both target speech and background noise make understanding speech even more difficult. However, for listeners with typical hearing, the brain quickly adapts to these reflections through short-term exposure, helping separate the speech signal from the room’s acoustic “fingerprint.” This process, known as adaptation, relies on specific sound features: the reverberation tail (the lingering energy after the speech stops), reduced modulation depth (how much the amplitude of the speech varies), and increased energy at low frequencies. Together, these cues create temporal and spectral patterns that the brain can group as separate from the speech itself.
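To see how reverberation erodes one of these cues, the sketch below amplitude-modulates noise at a speech-like 4 Hz rate, convolves it with a synthetic reverberation tail, and measures how the modulation depth shrinks. All values are illustrative.

```python
import numpy as np

# Minimal sketch of one adaptation cue: modulation depth, and how
# reverberation reduces it. Noise amplitude-modulated at a speech-like
# 4 Hz rate is convolved with a synthetic exponential reverberation tail.
fs = 16000
t = np.arange(0, 2, 1 / fs)
dry = (1 + 0.9 * np.sin(2 * np.pi * 4 * t)) * np.random.randn(t.size)

rt60 = 1.2                                       # illustrative room, seconds
n = np.arange(int(rt60 * fs))
tail = np.random.randn(n.size) * np.exp(-6.91 * n / (rt60 * fs))
wet = np.convolve(dry, tail)[: t.size]

def mod_depth(x):
    # smoothed magnitude envelope (25 ms window), then Michelson contrast
    e = np.convolve(np.abs(x), np.ones(400) / 400, mode="valid")
    return (e.max() - e.min()) / (e.max() + e.min())

steady = wet[tail.size:]                         # skip the reverberant build-up
print(f"Modulation depth dry: {mod_depth(dry):.2f}, reverberant: {mod_depth(steady):.2f}")
```

The reverberant tail fills in the quiet gaps between modulation peaks, which is precisely the reduced modulation depth described above.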

While typical-hearing listeners adapt, many cochlear implant (CI) users report extreme difficulty understanding speech in everyday places like restaurants, where background noise and sound reflections are common. Although cochlear implants have been remarkably effective in restoring access to sound and speech for people with profound hearing loss, they still fall short in complex acoustic environments. This study explores the nature of distortions introduced by cochlear implants to key acoustic cues that listeners with typical hearing use to adapt to reverberant rooms.

The study examined how cochlear implant signal processing affects these cues by analyzing room impulse response signals before and after simulated CI processing. Two key parameters were manipulated. The first, the input dynamic range (IDR), determines how much of the incoming sound is preserved before compression and affects how soft and loud sounds are balanced in the delivered electric signal. The second, the logarithmic growth function (LGF), controls how sharply the sound is compressed at higher levels. A lower LGF results in more abrupt shifts in volume, which can distort fine details in the sound.
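As a rough illustration of LGF-style amplitude mapping (the functional form and parameter values here are generic assumptions, not any specific device’s settings), the sketch below log-compresses input magnitudes that fall within the IDR:

```python
import numpy as np

# Minimal sketch of a logarithmic-growth-function (LGF) style map:
# input magnitudes within the input dynamic range (IDR) are compressed
# logarithmically into the electric output range. The parameter rho sets
# how sharply the map bends; values here are illustrative assumptions.
def lgf(x, idr_db=40.0, rho=400.0):
    """Map input magnitude (0..1 of full scale) to 0..1 electric output."""
    floor = 10 ** (-idr_db / 20)                   # bottom of the IDR
    xn = np.clip((x - floor) / (1 - floor), 0, 1)  # normalize into the IDR
    return np.log1p(rho * xn) / np.log1p(rho)

x = np.linspace(0, 1, 6)
for rho in (50.0, 400.0):
    print(f"rho = {rho:3.0f}:", np.round(lgf(x, rho=rho), 2))
```

Widening the IDR lets more of the quiet reverberation tail through, while changing rho alters how faithfully amplitude modulations survive, the trade-offs examined in the results below.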

The results show that cochlear implant processing significantly alters the acoustic cues that support adaptation. Specifically, it reduces the fidelity with which modulations are preserved, shortens the reverberation tail, and diminishes the low-frequency energy typically added by reflections. Overall, this degrades the speech clarity index of the sound, which can contribute to CI users’ difficulty communicating in reflective spaces.

Further, increasing the IDR extended the reverberation tail but also reduced the clarity index by increasing the relative contribution of reverberant energy to the total energy. Similarly, lowering the LGF factor caused more abrupt energy changes in the reverberation tail, degrading modulation fidelity. Interestingly, it also led to a more gradual drop-off in low-frequency energy—highlighting a complex trade-off.

Together, these findings suggest that cochlear implant users may struggle in reverberant environments not only because of reflections but also because their devices alter or distort the acoustic regularities that enable room adaptation. Improving how cochlear implants encode these features could make speech more intelligible in real-world, echo-filled spaces.