Locating the lives of blue whales with sound informs conservation

John Ryan – ryjo@mbari.org

Monterey Bay Aquarium Research Institute, Moss Landing, CA, 95039, United States

Popular version of 4aUW7 – Wind-driven movement ecology of blue whales detected by acoustic vector sensing
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me/appinfo.php?page=Session&project=ASAICA25&id=3866920&server=eppro01.ativ.me

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

A technology that captures multiple dimensions of underwater sound is revealing how blue whales live, thereby informing whale conservation.

The most massive animal ever to evolve on Earth, the blue whale, needs a lot of food. Finding that food in a vast foraging habitat is challenging, and these giants must travel far and wide in search of it. The searching that leads them to life-sustaining nutrition can also lead them to a life-ending collision with a massive fast-moving ship. To support the recovery of this endangered species, we must understand where and how the whales live, and how human activities intersect with whale lives.

Toward better understanding and protecting blue whales in the California Current ecosystem, an interdisciplinary team of scientists is applying a technology called an acoustic vector sensor. Sitting just above the seafloor, this technology receives the powerful sounds produced by blue whales and quantifies changes in both pressure and particle motion that are caused by the sound waves. The pressure signal reveals the type of sound produced. The particle motion signal points to where the sound originated, thereby providing spatial information on the whales.

A blue whale in the California Current ecosystem. Image Credit: Goldbogen Lab of Stanford University / Duke Marine Robotics and Remote Sensing Lab; NMFS Permit 16111.
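
To illustrate the idea behind the particle motion measurement described above (a minimal sketch, not the team’s actual processing), the time-averaged product of pressure with each horizontal particle-velocity component approximates the active acoustic intensity, which points along the direction the sound travels; reversing that vector gives a bearing back toward the calling whale. The sample rate, call frequency, and coordinate convention below are illustrative assumptions.

```python
import numpy as np

def bearing_from_vector_sensor(p, vx, vy):
    """Estimate the horizontal azimuth of a sound source from co-located
    pressure (p) and particle-velocity (vx = east, vy = north) time series.
    <p*vx> and <p*vy> approximate the horizontal active-intensity components,
    which point in the direction of propagation (away from the source)."""
    ix = np.mean(p * vx)  # eastward intensity component
    iy = np.mean(p * vy)  # northward intensity component
    # The source lies opposite to the propagation direction.
    return np.degrees(np.arctan2(-ix, -iy)) % 360.0  # degrees clockwise from north

# Synthetic check: a 16 Hz tone (blue-whale-call band) arriving from 40 degrees true.
fs, theta = 1000.0, np.radians(40.0)
t = np.arange(0.0, 2.0, 1.0 / fs)
p = np.cos(2 * np.pi * 16.0 * t)
vx, vy = -np.sin(theta) * p, -np.cos(theta) * p  # plane wave propagating away from the source
print(bearing_from_vector_sensor(p, vx, vy))     # ~40.0
```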

For blue whales, it is all about the thrill of the krill. Krill are small-bodied crustaceans that can form massive swarms. Blue whales only eat krill, and they locate swarms to consume krill by the millions (would that be krillions?). Krill form dense swarms in association with cold plumes of water that result from a wind-driven circulation called upwelling. Sensors riding on the backs of blue whales reveal that the whales can track cold plumes precisely and persistently when they are foraging.

The close relationship between upwelling and blue whale movements motivates the hypothesis that the whales move farther offshore when upwelling habitat expands farther offshore, as occurs during years with stronger wind-driven upwelling. We tested this hypothesis by tracking upwelling conditions and blue whale locations over a three-year period. As upwelling doubled over the study period, the percentage of blue whale calls originating from offshore habitat also nearly doubled. This offshore shift in habitat occupancy, into waters crossed by shipping lanes, brings a higher risk of fatal collisions with ships.

However, there is good news for blue whales and other whale species in this region. Reducing ship speeds can greatly reduce the risk of ship-whale collisions. An innovative partnership, Protecting Blue Whales and Blue Skies, has been fostering voluntary speed reductions for large vessels over the last decade. This program has expanded to cover a great stretch of the California coast, and the growing participation of shipping companies is a powerful and welcome contribution to whale conservation.

Designing “Virtual acoustics” rehearsal rooms for orchestra safety

Cameron Hough – chough@marshallday.com

Marshall Day Acoustics, Melbourne, VIC, 3066, Australia

Nick Boulter, Arup
Simon Tait, AmberTech

Popular version of 5aAA3 – The use of electrocoustic enhancement systems in the design of orchestral rehearsal rooms
Presented at the 188th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAICA25&id=3864386

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Rehearsal rooms for orchestras pose many acoustic design challenges. The most fundamental concern is safety: modern musical instruments are loud enough to create a significant risk of long-term hearing damage to the players and conductor. Loudness also takes a toll in other ways, as constant exposure to loud sound is fatiguing and musicians feel they must always “hold back” rather than play their instruments normally.

Unless the rehearsal venue is similar in size to a performance venue, which increases cost and embodied materials, rehearsal rooms are often either too loud to be a safe working environment for the orchestra, or they lack the reverberation and richness that let musicians and conductor work on the color, blend and nuance of the music.

Electronic acoustic enhancement systems offer a way to break some of the fundamental “interlocks” between the size and loudness of a rehearsal venue and resolve some of these challenges. More than just artificial reverberation, enhancement systems allow a “virtual acoustic environment” to be created, providing musicians with sound reflections that simulate the experience of playing in a larger room and a richer, but quieter, room sound. This gives the musicians “breathing room” for their rehearsal.

The recent Australian Chamber Orchestra auditorium at Walsh Bay Arts Precinct, Sydney is an excellent example of how this technology has allowed a safe and comfortable rehearsal environment for the orchestra in a smaller space, without sacrificing musical quality.

Located in a heritage-listed former industrial wharf complex in Sydney Harbour, the ACO’s 277-seat venue, The Neilson, is an “artist’s studio of sound” which features views of the Sydney Harbour Bridge through its upper floor windows. The ACO plays across all major Australian cities in venues that seat up to 2500 people, so the ability to preview how a performance will sound in each touring venue is important, allowing the orchestra to adjust to how their performance will change in each room. The orchestra size for each tour varies from small chamber groups up to a full symphony orchestra with added wind and brass players. The Neilson must therefore provide a wide range of acoustic conditions at the touch of a button, all while managing musicians’ noise exposure.

Figure 1: View of The Neilson in flat floor mode with seats retracted. Source: Authors

The electro-acoustic enhancement system installed in The Neilson is a Yamaha AFC4 system consisting of 16 microphones, various DSP (Digital Signal Processing) modules, 79 amplifier channels and 79 loudspeakers mounted within the walls and ceiling space. Together these allow the room’s apparent width and height, reverberation and timbre to be varied, creating different virtual “venues” for the orchestra to rehearse and perform in.

To provide support to musicians and control loudness, the physical room’s surface finishes emphasize reflections from the side walls (lateral reflections) and de-emphasize sound reflections from above.

This allows the AFC4 system to “raise the roof” and create the impression of a much larger room without overwhelming the sound, “knitting together” the physical and electronic parts of the room sound.

The Neilson’s walls and ceiling include several sound-scattering finishes that blend and “soften” the sound; the architecture itself was inspired by music.

The lower walls are textured with small indentations, encoding a quote by Beethoven written in Braille.

Figure 2: View of the “wavy wall” with “Braille” acoustic diffusion. Source: Authors

The glazed upper walls along the balcony level are “frozen music”, based on the chord progression of Bach’s Chaconne for solo violin, with each of the 16 window sections “spelling” a chord (the widths of the panes of glass are in proportion to the intervals of the notes in the chord).

Figure 3: Render of the “Chaconne window” glass diffuser. Source: TZG Architects
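
To make that mapping concrete (the actual chords of the window are not given here, so the chord voicing below is purely hypothetical), pane widths proportional to the intervals of a chord could be computed like this:

```python
# Hypothetical illustration of "pane widths in proportion to the intervals of the
# notes in the chord"; the chord shown is NOT the actual Chaconne progression.
def pane_widths(midi_notes, window_width_mm=1200):
    intervals = [hi - lo for lo, hi in zip(midi_notes, midi_notes[1:])]  # semitones
    return [window_width_mm * i / sum(intervals) for i in intervals]

d_minor_voicing = [50, 57, 62, 65]     # D3, A3, D4, F4 (intervals of 7, 5, 3 semitones)
print(pane_widths(d_minor_voicing))    # [560.0, 400.0, 240.0] mm
```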

The ceiling “wells” and “fins” were set out in a sequence where the height of the wells in each portion of the ceiling was proportional to the intervals between notes in three famous musical motifs by Wagner (Tristan und Isolde), Shostakovich (String Quartet No.8) and Richard Strauss (Elektra).

The “virtual acoustics” provided in The Neilson make it more than just a beautiful space: it is one of the most flexible orchestral rehearsal rooms in the world, allowing the ACO to preview how they will adjust their performance for venues ten times larger than the “real” room, to unlock new performance options for audiences in the room, and to reach new streaming audiences online. It is also a great example of how technology can deliver “more from less” through the sustainable re-use of an existing heritage building.

The underwater sound of an earthquake at the Main Endeavour Hydrothermal Vent Field

Brendan Smith – brendan.smith@dal.ca
Twitter: @bsmithacoustics
Instagram: @brendanthehuman
Dalhousie University, Department of Oceanography, Halifax, Nova Scotia, B3H 4R2, Canada

Additional author:
Dr. David Barclay

Popular version of 1aAO4 – Passive acoustic monitoring of a major seismic event at the Main Endeavour Hydrothermal Vent Field
Presented at the 187th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0034918

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


The Main Endeavour Hydrothermal Vent Field (MEF) is located on the Juan de Fuca Ridge in the Northeast Pacific Ocean. This ridge is a seafloor spreading center, where tectonic plates pull apart and new oceanic crust is formed as magma upwells from beneath the earth’s surface. This movement of the earth’s crust causes cracks to form, allowing seawater to penetrate downwards towards the magma below, where it circulates and eventually resurfaces into the ocean at temperatures over 300 degrees Celsius. Uniquely adapted organisms thrive at these sites, surviving from energy provided not by the sun, but by the heat and chemical composition of the vent fluid.

Figure 1: Black-smoker hydrothermal vent chimney at the Main Endeavour Hydrothermal Vent Field (Image courtesy of Ocean Networks Canada)

Long-term measurements of hydrothermal vent activity are of scientific interest. However, the high temperatures and caustic chemical characteristics of the vent fluid make it challenging to place probes directly in the flow. For this reason, passive acoustics (listening) can be a useful tool for hydrothermal vent monitoring, because hydrophones (underwater microphones) can be located a safe distance from the vent fluid. Ocean Networks Canada has had a hydrophone recording continuously at MEF for over five years, and for the past year a 4-element hydrophone array has been recording at this location.

The motion of the tectonic plates in these regions causes a lot of seismic activity, such as earthquakes. On March 6, 2024, a large ~4.1 magnitude earthquake was recorded at MEF, and earthquake rates were the highest observed since 2005. This earthquake was recorded on the hydrophone array and can be seen in the spectrogram in Figure 2.

Figure 2: Spectrogram of ~4.1 magnitude earthquake at MEF
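
For readers curious how a spectrogram like Figure 2 is produced, a minimal sketch using standard tools is shown below; the file name, sample rate, and FFT settings are illustrative placeholders, not those used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# Hypothetical example: 'mef_hydrophone.npy' stands in for one channel of the
# MEF hydrophone array; sample rate and FFT settings are assumed values.
fs = 64_000                                   # samples per second (assumed)
x = np.load("mef_hydrophone.npy")             # hydrophone pressure time series

f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=4096, noverlap=2048)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-20), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Hydrophone spectrogram (cf. Figure 2)")
plt.colorbar(label="Power spectral density (dB)")
plt.show()
```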

Figure 3 shows differences in the soundscape at Endeavour before, during, and after the earthquake. The changes persist for more than one week following the event. The duration and higher-frequency components of these changes suggest sources other than seismicity.

Figure 3: Acoustic spectra before, during, and after the earthquake at MEF

The hydrophone array also provides us with the opportunity to gain further insights. For example, surface wind/wave-generated noise is a predominant source of ambient sound in the ocean, and the coherence, or spatial relationship between multiple hydrophone elements in the presence of this sound source, is well known. We can compare the measured coherence with the expected (modeled) coherence to explore any deviations, which could be attributed to hydrothermal vent activity. In Figure 4 we see differences between the measurements and model below 1 kHz (outlined by black boxes), suggesting the influence of hydrothermal vent sounds on the local soundscape.

Figure 4: Measured and modeled acoustic vertical coherence at MEF
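
As a sketch of the comparison described above (not the authors’ code), the measured coherence between two vertically separated hydrophones can be estimated with Welch averaging and set against a model prediction; the simple isotropic-noise model and the array spacing used here are placeholder assumptions standing in for the full wind/wave-noise coherence model.

```python
import numpy as np
from scipy import signal

def measured_vs_modeled_coherence(x1, x2, fs, spacing_m, c=1480.0):
    """Magnitude-squared coherence between two vertically separated hydrophones,
    alongside a placeholder isotropic-noise prediction, sinc(k*d) squared.
    The real comparison would use a surface wind/wave-noise coherence model."""
    f, coh = signal.coherence(x1, x2, fs=fs, nperseg=8192)
    k = 2.0 * np.pi * f / c                            # acoustic wavenumber
    modeled = np.sinc(k * spacing_m / np.pi) ** 2      # np.sinc(x) = sin(pi*x)/(pi*x)
    return f, coh, modeled

# With real array data, deviations of `coh` from `modeled` below 1 kHz would be
# the kind of signature attributed to vent-generated sound in Figure 4.
```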

In conclusion, passive acoustic monitoring can be used to monitor changes in hydrothermal vent fields in response to seismic activity. This earthquake provided a test case to prepare for a more major seismic event, which is expected to occur at Endeavour in the coming years. Passive acoustic monitoring will be an important tool to document vent field activity during this future event.

What could happen to Earth if we blew up an incoming asteroid?

Brin Bailey – brittanybailey@ucsb.edu

University of California, Santa Barbara, Physics Department, Santa Barbara, CA, 93106, United States

Popular version of 4aPA12 – Acoustic ground effects simulations from asteroid disruption via the ‘Pulverize It’ method
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027433

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Let’s imagine a hypothetical scenario: a new asteroid has just been discovered, on a path straight towards Earth, threatening to hit us in just a few days. What can we do about it?

A new study funded by NASA is trying to answer that question. Pulverize It, or PI for short, is a proposed method for planetary defense–the effort of monitoring and protecting Earth from incoming asteroids. In essence, PI’s plan of attack is to penetrate an incoming asteroid with high-speed, bullet-like projectiles, which would split the asteroid into many smaller fragments (pieces) (Figure 1). PI’s key difference from other planetary defense methods is its versatility. It is designed to work for a wide variety of scenarios, meaning that PI could be used whether an asteroid impact is one year away or one week away (depending on the asteroid’s size and speed).

Figure 1. PI works by penetrating an asteroid with a high-speed, high-density projectile, which rapidly converts a portion of the asteroid’s kinetic energy into heat and shock waves within the rocky material. The heat energy of the impact locally vaporizes and ionizes material near the impact site(s), and the subsequent shock waves damage and fracture the asteroid material as they move and pass (refract) through it.

How is this possible, and how could the asteroid fragments affect us here on Earth? Rather than using momentum transfer–like in methods such as asteroid deflection, as demonstrated by NASA’s recent Double Asteroid Redirection Test (DART) mission–PI utilizes energy transfer to mitigate a threat by disassembling (or breaking apart) an asteroid.

If the asteroid is blown apart while far away from Earth (generally, at least several months before impact), these fragments would miss the planet entirely. This is PI’s preferred mode of operation, as it is always more favorable to keep the action away from us when possible. In a scenario where we have little warning time (a “terminal” scenario), the small asteroid fragments may enter Earth’s atmosphere–but this is part of the plan (Figure 2).

Figure 2. In a short-warning scenario where the asteroid is intercepted and broken up close to Earth (“terminal” scenario), the fragment cloud enters Earth’s atmosphere. Each fragment will burst at high altitude, dispersing the energy of the original asteroid into optical and acoustical ground effects. As the fragments in the cloud spread out, they will enter the atmosphere at different times and in different places, creating spatially and temporally de-correlated shock waves. The spread of the fragment cloud depends on a variety of factors, mainly intercept time (the amount of time between asteroid breakup and ground impact) and fragment disruption velocity (the speed and direction at which fragments move away from the fragment cloud’s center of mass).

Earth’s atmosphere acts as a bulletproof vest, shielding us from harmful ultraviolet radiation, typical space debris, and, in this case, asteroid fragments. As these small rocky pieces enter the atmosphere at very high speeds, air molecules exert large amounts of pressure on them. This puts stress on the rock and causes it to break up. As the fragment’s altitude decreases, the atmosphere’s density increases. This adds heat and increases pressure until the fragment can’t remain intact anymore, causing the fragment to detonate, or “burst.”
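
As a rough, textbook-style illustration of why a small fragment bursts high up (this toy model is not the study’s entry simulation), one can estimate the altitude at which the ram pressure of the oncoming air, roughly the air density times the speed squared, first exceeds the strength of the rock; the strength value below is an assumed, order-of-magnitude figure for stony material.

```python
import numpy as np

def breakup_altitude_km(speed_mps, strength_pa, rho0=1.225, scale_height_km=8.0):
    """Toy estimate: altitude where ram pressure rho(h) * v^2 equals the material
    strength, using an exponential atmosphere rho(h) = rho0 * exp(-h / H).
    Illustrative only; real entry modeling is far more detailed."""
    return scale_height_km * np.log(rho0 * speed_mps**2 / strength_pa)

# A stony fragment entering at 20 km/s with an assumed effective strength of ~1 MPa
# would begin breaking apart at roughly 50 km altitude under this toy model.
print(breakup_altitude_km(20_000.0, 1e6))   # ~49.6
```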

When taken together, these bursts can be thought of as a cosmic fireworks show. As each fragment travels through the atmosphere and bursts, it produces a small amount of light (like a shooting star) and pressure (as a shock wave, like a sonic boom). The collection of these optical and acoustical effects, referred to as “ground effects,” works to disperse the energy of the original asteroid over a wide area and over time. In reasonable mitigation scenarios that are appropriate for the incoming asteroid (for example, based on asteroid size or by breaking the asteroid into a very large number of very small pieces), these ground effects result in little to no damage.

In this study, we investigate the acoustical ground effects that PI may produce when blowing apart an incoming asteroid in a “terminal” scenario with little warning. As each fragment enters Earth’s atmosphere and bursts, the pressure released creates a shock wave, carrying energy and creating an audible “boom” for each fragment (a sonic boom). Using custom codes, we simulate the acoustical ground effects for a variety of scenarios that are designed to keep the total pressure output below 3 kPa–the pressure at which residential windows may begin to break–in order to minimize potential damage (Figure 3).

Figure 3. Simulation of the acoustical ground effects from a 50 m diameter asteroid which is broken into 1000 fragments one day before impact. The asteroid is modeled as a spherical rocky body (average density of 2.6 g/cm3) traveling through space at 20 km/s and entering Earth’s atmosphere at an angle of 45°. The fragments move away from each other at an average speed of 1 m/s. The sonic “booms” produced by the fragment bursts are simulated here based upon the arrival of each shock wave at an observer on the ground (indicated by the green dot in the left plot). Note that both plots take into account the constructive interference between shock waves. Left: real-time pressure. Right: maximum pressure, where each pixel displays the highest pressure it has experienced. The dark orange lines, which display higher pressure values, signify areas where two shock waves have overlapped.

Figure 4. Simulation of the acoustical ground effects from an unfragmented (as in, not broken up) 50 m diameter asteroid. The asteroid is modeled as a spherical rocky body (average density of 2.6 g/cm3) traveling through space at 20 km/s and entering Earth’s atmosphere at an angle of 45°. Upon entering and descending through Earth’s atmosphere, the asteroid undergoes a great amount of pressure from air molecules, eventually causing the asteroid to airburst. This burst releases a large amount of pressure, creating a powerful shock wave. Left: real-time pressure. Right: maximum pressure, where each pixel displays the highest pressure it has experienced.

Our simulations show that the ground effects from an asteroid blown apart by PI are vastly less damaging than if the asteroid hit Earth intact. For example, consider a 50-meter-diameter asteroid broken into 1000 fragments only one day before Earth impact (Figure 3) versus the same asteroid left intact (Figure 4). In the mitigated scenario, we estimate that the observation area (±150 km from the fragment cloud’s center) would experience an average pressure of ~0.4 kPa and a maximum pressure of ~2 kPa (Figure 3). In the unfragmented case, we estimate an average pressure of ~3 kPa and a maximum pressure of ~20 kPa (Figure 4). The asteroid mitigated by PI keeps all areas below the 3 kPa damage threshold, while the maximum pressure in the unmitigated case is almost seven times higher than the threshold.
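
For a sense of scale, here is a back-of-envelope estimate of the kinetic energy being dispersed in these scenarios, using only the parameters stated in the figure captions and the standard TNT energy equivalent:

```python
import math

# Parameters from Figures 3 and 4: 50 m diameter, density 2.6 g/cm^3, 20 km/s.
diameter_m, density_kgm3, speed_mps = 50.0, 2600.0, 20_000.0
n_fragments = 1000

mass_kg = density_kgm3 * (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
energy_j = 0.5 * mass_kg * speed_mps**2
megatons = energy_j / 4.184e15              # 1 megaton TNT = 4.184e15 J

print(f"Total kinetic energy: ~{megatons:.0f} Mt TNT")                       # ~8 Mt
print(f"Per fragment:         ~{1000 * megatons / n_fragments:.0f} kt TNT")  # ~8 kt
```

Splitting one release of several megatons into roughly a thousand kiloton-scale bursts, separated in time and space, is what keeps each individual shock wave modest.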

The key is that the shock waves from the many fragments are “de-correlated” at any given observer, and hence vastly less threatening. Our findings suggest that PI is an effective approach for planetary defense that can be used in both short-warning (“terminal” scenarios) and extended warning scenarios, to result in little to no ground damage.

While we would rather not use this terminal defense mode–as it is preferable to intercept asteroids far ahead of time–PI’s short-warning mode could be used to mitigate threats that we fail to see coming. We envision that asteroid impact events similar to the Chelyabinsk airburst in 2013 (~20 m diameter) or the Tunguska airburst in 1908 (~40-50 m diameter) could be effectively mitigated by PI with remarkably short intercepts and relatively little intercept mass.

Website and additional resources
Please see our website for further information regarding the PI project, including papers, visuals, and simulations. For our full suite of ground effects simulations, please check our YouTube channel.

Funding
Funding for this program comes from NASA NIAC Phase I grant 80NSSC22K0764, NASA NIAC Phase II grant 80NSSC23K0966, NASA California Space Grant NNX10AT93H, and the Emmett and Gladys W. fund. We gratefully acknowledge support from the NASA Ames High End Computing Capability (HECC) and Lawrence Livermore National Laboratory (LLNL) for the use of their ALE3D simulation tools used for modeling the hypervelocity penetrator impacts, as well as funding from NVIDIA through an Academic Hardware Grant for a high-end GPU to speed up ground effect simulations.

Reducing Ship Noise Pollution with Structured Quarter-Wavelength Resonators

Mathis Vulliez – mathis.vulliez@usherbrooke.ca

Université de Sherbrooke, Département de génie mécanique, Sherbrooke, Québec, J1K 2R1, Canada

Marc-André Guy, Département de génie mécanique, Université de Sherbrooke
Kamal Kesour, Innovation Maritime, Rimouski, QC, Canada
Jean-Christophe G. Marquis, Innovation Maritime, Rimouski, QC, Canada
Giuseppe Catapane, University of Naples Federico II, Naples, Italy
Giuseppe Petrone, University of Naples Federico II, Naples, Italy
Olivier Robin, Département de génie mécanique, Université de Sherbrooke

Popular version of 1pEA6 – Use of metamaterials to reduce underwater noise generated by ship machinery
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026790

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

The underwater noise generated by maritime traffic is the most significant source of ocean noise pollution. This pollution threatens marine biodiversity, from large marine mammals to invertebrates. At low speeds, machinery dominates the underwater radiated noise from vessels. It also has a precise sound signature, since it usually operates at a fixed rotation speed. Think of an idling vehicle: it produces a tonal acoustic excitation, with the sound energy concentrated at a few precise frequencies and their multiples. The engine rotates at a given speed, in revolutions per minute; dividing by 60 gives the rotation frequency, the number of oscillations per second. In addition to this rotation frequency, the firing order and the number of cylinders generate excitation at multiples of the rotation frequency. The problem is that these frequencies are generally low and difficult to mitigate with classical soundproofing materials, which require substantial thickness.
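
As a small worked example (the engine parameters here are illustrative, not taken from a specific vessel), for a four-stroke engine each cylinder fires once every two revolutions, so the dominant firing frequency is the rotation frequency multiplied by half the number of cylinders, with energy also appearing at its multiples:

```python
def machinery_tones(rpm, n_cylinders, n_harmonics=4, four_stroke=True):
    """Rotation frequency and firing-frequency harmonics of a reciprocating engine.
    Illustrative sketch; a real vessel's signature depends on the whole driveline."""
    f_rot = rpm / 60.0                                        # shaft rotation frequency, Hz
    firings_per_rev = n_cylinders / (2.0 if four_stroke else 1.0)
    f_fire = f_rot * firings_per_rev                          # fundamental firing frequency, Hz
    return f_rot, [k * f_fire for k in range(1, n_harmonics + 1)]

# Hypothetical medium-speed ship engine: 720 rpm, 8 cylinders, four-stroke.
print(machinery_tones(720, 8))   # (12.0, [48.0, 96.0, 144.0, 192.0]) -> tones well below 200 Hz
```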

This research project delves into new solutions to mitigate underwater noise pollution using innovative noise control technologies. The solution investigated in this work is structured quarter-wavelength acoustic resonators. These resonators absorb sound at a resonant frequency and its odd harmonics, making them ideal for targeting precise frequencies and their multiples. However, the length of these resonators is dictated by the wavelength corresponding to the target frequency. Like the required thickness of classical materials, this wavelength is large at low frequencies: in air, for a frequency of 100 Hz and a speed of sound of 340 m/s, the wavelength is 3.4 m, since the wavelength is the ratio of the speed of sound to the frequency. The length of a quarter-wavelength resonator tuned to 100 Hz is thus 0.85 m.

Fig.1. Comparison between classical and innovative soundproofing material on sound absorption, from Centre de recherche acoustique-signal-humain, Université de Sherbrooke.
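
The numbers quoted above follow from two simple relations, sketched below: the wavelength is the speed of sound divided by the frequency, a quarter-wavelength resonator tuned to a frequency f has length c / (4f), and it also resonates at the odd multiples of f.

```python
def quarter_wave_resonator(f_target_hz, c_mps=340.0, n_modes=3):
    """Length of a quarter-wavelength resonator tuned to f_target_hz, plus the
    frequencies of its first few resonances (fundamental and odd harmonics)."""
    length_m = c_mps / (4.0 * f_target_hz)                    # L = c / (4 f)
    resonances_hz = [(2 * k - 1) * f_target_hz for k in range(1, n_modes + 1)]
    return length_m, resonances_hz

# Reproduces the example above: a 100 Hz resonator in air is 0.85 m long
# and also absorbs near 300 Hz and 500 Hz.
print(quarter_wave_resonator(100.0))   # (0.85, [100.0, 300.0, 500.0])
```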

Therefore, a coiled quarter-wavelength resonator was considered to reduce bulkiness and facilitate installation. The geometry is inspired by the Archimedes spiral, a structure easily manufactured using today’s 3D printing technologies. Experimental laboratory tests were conducted to characterize the prototypes and determine their effectiveness in absorbing sound. We also created a numerical model that allows us to quickly answer optimization questions and study the efficiency of a hybrid solution: a rock wool panel with embedded coiled resonators. We aim to combine classical and innovative approaches to propose lightweight and compact solutions that efficiently reduce underwater noise pollution!

Fig.2. Numerical model of coiled resonators embedded in rockwool, from Centre de recherche acoustique-signal-humain, Université de Sherbrooke.
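
To give a rough idea of the space saving offered by coiling (the dimensions below are illustrative, not those of the actual prototypes), a short numerical sketch sums small segments of an Archimedes spiral to find how much channel length fits within a given footprint:

```python
import numpy as np

def spiral_channel_length(outer_radius_m, pitch_m, n_points=20_000):
    """Approximate arc length of an Archimedes spiral r = (pitch / (2*pi)) * theta,
    wound out to outer_radius_m, by summing many small straight segments."""
    b = pitch_m / (2.0 * np.pi)               # radial growth per radian
    theta = np.linspace(0.0, outer_radius_m / b, n_points)
    x, y = b * theta * np.cos(theta), b * theta * np.sin(theta)
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))

# Illustrative: the 0.85 m channel of a 100 Hz quarter-wave resonator, coiled with
# a 3 cm pitch, fits within roughly a 9 cm radius footprint.
print(spiral_channel_length(0.09, 0.03))   # ~0.86 m of channel within a 9 cm radius
```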