Unlocking the Secrets of Ocean Dynamics: Insights from ALMA

Florent Le Courtois – florent.lecourtois@gmail.com

DGA Tn, Toulon, Var, 83000, France

Samuel Pinson, École Navale, Rue du Poulmic, 29160 Lanvéoc, France
Victor Quilfen, Shom, 13 Rue de Châtellier, 29200 Brest, France
Gaultier Real, CMRE, Viale S. Bartolomeo, 400, 19126 La Spezia, Italy
Dominique Fattaccioli, DGA Tn, Avenue de la Tour Royale, 83000 Toulon, France

Popular version of 4aUW7 – The Acoustic Laboratory for Marine Applications (ALMA) applied to fluctuating environment analysis
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027503

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Ocean dynamics occur across a wide range of spatial and temporal scales. They cause the displacement and mixing of water bodies of different temperatures. These fluctuations strongly affect acoustic propagation, because sound speed in seawater depends mainly on temperature. Monitoring underwater acoustic propagation and its fluctuations remains a scientific challenge, especially at mid-frequencies (typically on the order of 1 to 10 kHz). Dedicated measurement campaigns have to be conducted to better understand the fluctuations and their impact on acoustic propagation, and thus to develop appropriate localization processing.
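To make that temperature dependence concrete, a standard empirical formula such as Medwin's equation shows how sound speed varies with temperature, salinity, and depth. Here is a minimal Python sketch (the input values are illustrative, not ALMA data):

```python
def sound_speed_medwin(T, S, z):
    """Approximate sound speed in seawater (m/s) from Medwin's (1975)
    empirical equation: T in deg C, S in parts per thousand, z in m."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# A 1 deg C cooling at 50 m depth in Mediterranean-like water shifts the
# sound speed by roughly 3 m/s, enough to bend acoustic paths noticeably.
print(sound_speed_medwin(14.0, 38.0, 50.0) - sound_speed_medwin(13.0, 38.0, 50.0))
```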

The Acoustic Laboratory for Marine Applications (ALMA) was established by the French MOD procurement agency (DGA) in 2014 to conduct research for passive and active sonar, in support of future sonar array design and processing. Since its inception, ALMA has undergone remarkable transformations, evolving from a modest array of hydrophones to a sophisticated system equipped with 192 hydrophones and advanced technology. With each upgrade, ALMA’s capabilities have expanded, allowing us to delve deeper into the secrets of the sea.

Figure 1. Evolution of the ALMA array configuration, from 2014 to 2020. Real and Fattaccioli, 2018

A bulletin of sea temperature to understand acoustic propagation
The 2016 campaign took place November 7-17 off the western coast of Corsica in the Mediterranean Sea, at the location marked by the blue dot in Fig. 2 (around 42.4° N, 9.5° E). We analyzed signals from a controlled acoustic source together with temperature records, corresponding to approximately 14 hours of data.

Figure 2. Map of surface temperature during the campaign. Heavy rains in the previous days caused a vortex north of Corsica. Pinson et al., 2022

We computed a map of the sea temperature during the campaign, similar to a weather bulletin for the sea. Heavy rains in the previous days caused a general cooling over the area, and a vortex appeared in the Ligurian Sea between Italy and the north of Corsica. The cold water then traveled southward along the western coast of Corsica to reach the measurement area; this cooling also appeared on our thermometers. The main objective was to understand how the echo pattern changed in relation to the temperature change. Echoes characterize the acoustic paths: we are mainly interested in the amplitude, travel time, and angle of arrival of each echo, which together describe the acoustic path between the source and the ALMA array.

All echoes extracted by processing the ALMA data are plotted as dots in 3D, as a function of time during the campaign, angle of arrival, and time of flight, with the loudness of each echo indicated by the color scale. For readability, the 3D image is sliced in Fig. 3 a), b) and c). The direction of the last reflection is estimated in Fig. 3 a): positive angles come from surface reflections, while negative angles come from seabed reflections. The gradual cooling of the water caused a slowly increasing time of flight between the source and the array, visible in Fig. 3 b). A surprising result was a group of “spooky” arrivals, which appeared briefly during the campaign at angles close to 0°, between 3 and 12 AM, in Fig. 3 b) and c).

Figure 3. Evolution of the acoustic paths during the campaign. Each path is a dot defined by its time of flight and angle of arrival over the period of the campaign. Pinson et al., 2022
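For readers who work with similar detection lists, here is a minimal sketch of how such an echo cloud can be displayed; the arrays below are random placeholders, not ALMA data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: one row per detected echo.
rng = np.random.default_rng(0)
t_campaign = rng.uniform(0.0, 14.0, 500)   # hours since campaign start
angle = rng.uniform(-20.0, 20.0, 500)      # angle of arrival (degrees)
tof = 1.2 + 0.002 * t_campaign + rng.normal(0.0, 0.005, 500)  # time of flight (s)
level = rng.uniform(-30.0, 0.0, 500)       # echo loudness (dB, arbitrary ref.)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(t_campaign, angle, tof, c=level, cmap="viridis", s=5)
ax.set_xlabel("campaign time (h)")
ax.set_ylabel("angle of arrival (deg)")
ax.set_zlabel("time of flight (s)")
fig.colorbar(sc, label="echo level (dB)")
plt.show()
```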

The acoustic paths were then computed using the sea temperature bulletin. A more focused map of the depth of separation between cold and warm waters, also called the mixed layer depth (MLD), is plotted in Fig. 4. We noticed that when the mixed layer depth sinks below the depth of the source, the cooling causes acoustic paths to be trapped in the lower part of the water column by the bathymetry. This explains the appearance of the spooky echoes. Trapped paths are plotted in blue, while regular paths are plotted in black in Fig. 5.

Figure 4. Evolution of the depth of separation between cold and warm water (the mixed layer depth) during the campaign. Pinson et al., 2022

Figure 5. Example of acoustic paths in the area: black lines indicate regular propagation of the sound; blue lines indicate the trapped paths of the spooky echoes. Pinson et al., 2022
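The trapping mechanism can be sketched with a toy ray tracer: Snell's law bends rays toward the sound speed minimum, so once the cold mixed layer deepens past the source, shallow-angle rays launched near that minimum never reach the surface. A simplified Python sketch, where the profile and all numbers are made up for illustration (not taken from the campaign):

```python
import numpy as np

def trace_ray(c_of_z, z0, theta0_deg, r_max=10_000.0, dr=5.0, z_bottom=500.0):
    """March a 2D acoustic ray through a depth-dependent sound speed
    profile c(z) using Snell's law, with mirror reflections at the sea
    surface and the seabed. Depth z is in meters, positive downward;
    theta0_deg is the launch angle from horizontal, positive downward."""
    a = np.cos(np.radians(theta0_deg)) / c_of_z(z0)  # Snell invariant
    sign = 1.0 if theta0_deg >= 0 else -1.0          # vertical direction
    z, path = z0, [(0.0, z0)]
    for r in np.arange(dr, r_max + dr, dr):
        ac = a * c_of_z(z)
        if ac >= 1.0:              # refractive turning point: ray bends back
            sign, ac = -sign, 1.0 - 1e-12
        z += sign * dr * np.sqrt(1.0 - ac**2) / ac   # dz = dr * tan(theta)
        if z < 0.0:                # surface reflection
            z, sign = -z, 1.0
        elif z > z_bottom:         # seabed reflection
            z, sign = 2.0 * z_bottom - z, -1.0
        path.append((r, z))
    return np.array(path)

# Toy profile: surface cooling has pushed the sound speed minimum to 100 m.
# A shallow ray launched near that depth stays confined to the lower water
# column, like the "spooky" arrivals.
c_profile = lambda z: 1505.0 + 0.03 * abs(z - 100.0)
ray = trace_ray(c_profile, z0=80.0, theta0_deg=3.0)
print(ray[:, 1].min(), ray[:, 1].max())   # the ray never nears the surface
```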

Overview
The ALMA system and its associated tools have made it possible to illustrate practical ocean acoustics phenomena. ALMA has been deployed during 5 campaigns, representing 50 days at sea, mostly in the western Mediterranean Sea but also in the Atlantic, to tackle other complex physical problems.

Tools for shaping the sound of the future city in virtual reality

Christian Dreier – cdr@akustik.rwth-aachen.de

Institute for Hearing Technology and Acoustics
RWTH Aachen University
Aachen, North Rhine-Westphalia 52064
Germany

– Christian Dreier (lead author, LinkedIn: Christian Dreier)
– Rouben Rehman
– Josep Llorca-Bofí (LinkedIn: Josep Llorca Bofí, X: @Josepllorcabofi, Instagram: @josep.llorca.bofi)
– Jonas Heck (LinkedIn: Jonas Heck)
– Michael Vorländer (LinkedIn: Michael Vorländer)

Popular version of 3aAAb9 – Perceptual study on combined real-time traffic sound auralization and visualization
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027232

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

“One man’s noise is another man’s signal.” This famous quote by Edward Ng, from a 1990s New York Times article, captures a major lesson of noise research. A rule of thumb in the field is that when communities are asked to rate their “annoyance”, only about one third of the response can be statistically explained by acoustic factors (like the well-known A-weighted sound pressure level, found on household devices as “dB(A)”). Referring to Ng’s quote, another third is explained by non-acoustic, personal, or social variables, while the remaining third cannot be explained by the current state of research.
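For readers curious about the “dB(A)” figure itself, the A-weighting curve is defined by a standard formula (IEC 61672) that de-emphasizes low frequencies, roughly mimicking human hearing. A short illustrative sketch:

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672.
    Close to 0 dB at 1 kHz and strongly negative at low frequencies."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00

for freq in (63, 125, 1000, 4000):   # low rumble is penalized heavily
    print(f"{freq:>5} Hz: {a_weighting_db(freq):+6.1f} dB")
```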

Noise reduction in built urban environments is an important goal for urban planners, as noise is not only a cause of cardiovascular disease but also affects learning and work performance in schools and offices. A number of solutions are available to achieve this goal, ranging from electrified public transport, speed limits, and traffic flow management to masking annoying noise with pleasant sound, for example from fountains.

In our research, we develop a tool for making the sound of virtual urban scenery audible and visible. Visually, the result is comparable to a computer game, with the difference that the acoustic simulation is physics-based, a technique called auralization. The research software “Virtual Acoustics” simulates the entire physical “history” of a sound wave to produce an audible scene: the sonic characteristics of traffic sound sources (cars, motorcycles, aircraft) are modeled, the sound wave’s interactions with the different materials of building and ground surfaces are calculated, and human hearing is taken into account.
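At its core, the audible result of an auralization comes from convolving a “dry” (anechoic) source recording with a simulated binaural impulse response of the scene. A minimal sketch of that final step, where the file names are placeholders (Virtual Acoustics itself also handles moving sources, real-time filter updates, and much more):

```python
import numpy as np
from scipy.signal import fftconvolve
import soundfile as sf

# Placeholder inputs: an anechoic engine recording and a simulated
# two-channel (left/right ear) impulse response from the propagation model.
dry, fs = sf.read("engine_dry.wav")              # mono source signal
brir, fs_ir = sf.read("street_corner_brir.wav")  # shape (n_taps, 2)
assert fs == fs_ir, "sample rates must match"

# Convolving with each ear's impulse response places the source in the
# simulated scene; normalize afterwards to avoid clipping.
binaural = np.stack(
    [fftconvolve(dry, brir[:, 0]), fftconvolve(dry, brir[:, 1])], axis=1
)
binaural /= np.max(np.abs(binaural))
sf.write("engine_auralized.wav", binaural, fs)
```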

You may have noticed that a lightning strike sounds dull when far away and bright when close by. The same applies to aircraft sound. In one of our studies, we auralized the sound of an aircraft under different weather conditions. A 360° video compares how the same aircraft typically sounds during summer, autumn, and winter when the acoustic changes due to the weather conditions are taken into account (use headphones for the full experience!).

In another work, we prepared a freely available project template for using Virtual Acoustics. For this, we acoustically and graphically modeled the IHTApark, which is located next to the Institute for Hearing Technology and Acoustics (IHTA): https://www.openstreetmap.org/#map=18/50.78070/6.06680.

In our latest experiment, we focused on the perception of especially annoying traffic sound events. We presented the traffic situations using virtual reality headsets and asked the participants to assess them. How (un)pleasant would the drone be for you during a walk in the IHTApark?

Listen In: Infrasonic Whispers Reveal the Hidden Structure of Planetary Interiors and Atmospheres

Quentin Brissaud – quentin@norsar.no
X (twitter): @QuentinBrissaud

Research Scientist, NORSAR, Kjeller, 2007, Norway

Sven Peter Näsholm, University of Oslo and NORSAR
Marouchka Froment, NORSAR
Antoine Turquet, NORSAR
Tina Kaschwich, NORSAR

Popular version of 1pPAb3 – Exploring a planet with infrasound: challenges in probing the subsurface and the atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026837

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Low-frequency sound, called infrasound, can help us better understand our atmosphere and explore distant planetary atmospheres and interiors.

Low-frequency sound waves below 20 Hz, known as infrasound, are inaudible to the human ear. They can be generated by a variety of natural phenomena, including volcanoes, ocean waves, and earthquakes. These waves travel over large distances and can be recorded by instruments such as microbarometers, which are sensitive to small pressure variations. The data can give unique insight into the source of the infrasound and the properties of the media it traveled through, whether solid, oceanic, or atmospheric. In the future, infrasound data might be key to building more robust weather prediction models and understanding the evolution of our solar system.

Infrasound has been used on Earth to monitor stratospheric winds, to analyze the characteristics of man-made explosions, and even to detect earthquakes. But its potential extends beyond our home planet. Infrasound waves generated by meteor impacts on Mars have provided insight into the planet’s shallow seismic velocities, as well as near-surface winds and temperatures. On Venus, recent research suggests that balloons floating in its atmosphere and recording infrasound waves could be one of the few viable ways to detect “venusquakes” and explore the planet’s interior, since surface pressures and temperatures are too extreme for conventional instruments.

Sonification of sound generated by the Flores Sea earthquake as recorded by a balloon flying at 19 km altitude.

Until recently, it has been challenging to map infrasound signals to the various planetary phenomena that generate them, including ocean waves, atmospheric winds, and planetary interiors. However, our research team and collaborators have made significant strides in this field, developing tools to unlock the potential of infrasound-based planetary research. We retrieve the connections between source and media properties and sound signatures through three different techniques: (1) training neural networks to learn the complex relationships between observed waveforms and source and media characteristics, (2) performing large-scale numerical simulations of seismic and sound waves from earthquakes and explosions, and (3) incorporating knowledge about sources and seismic media from adjacent fields, such as geodynamics and atmospheric chemistry, to inform our modeling work. Our recent work highlights the potential of infrasound-based inversions to predict high-altitude winds from the sound of ocean waves with machine learning, to map an earthquake’s mechanism to its local sound signature, and to assess the detectability of venusquakes from high-altitude balloons.
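As a flavor of technique (1), here is a minimal sketch of a learned inversion: a small neural network trained to regress a wind parameter from infrasound spectral features. The data below is synthetic and purely illustrative; the real studies use physics-based simulations and observations:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: each row is an infrasound spectrum (e.g.
# microbarom band levels); the target is a high-altitude wind speed (m/s).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 40))            # 40 spectral features per sample
w = rng.normal(size=40)                    # hidden "physics" the net must learn
y = X @ w + 0.1 * rng.normal(size=2000)    # wind speed + measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```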

To ensure the long-term success of infrasound research, dedicated Earth missions will be crucial to collect new data, support the development of efficient global modeling tools, and create rigorous inversion frameworks suited to various planetary environments. Nevertheless, infrasound research already shows that tuning into a planet’s whisper unlocks crucial insights into its state and evolution.

Data sonification & case study presenting astronomical events to the visually impaired via sound

Kim-Marie Jones – kim.jones@arup.com

Arup, L5 Barrack Place 151 Clarence Street, Sydney, NSW, 2000, Australia

Additional authors: Mitchell Allen (Arup), Kashlin McCutcheon

Popular version of 3aSP4 – Development of a Data Sonification Toolkit and Case Study Sonifying Astrophysical Phenomena for Visually Impaired Individuals
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023301

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Have you ever listened to stars appearing in the night sky?

Image courtesy of NASA & ESA; CC BY 4.0

Data is typically presented in a visual manner. Sonification is the use of non-speech audio to convey information.

Acousticians at Arup had the exciting opportunity to collaborate with astrophysicist Chris Harrison to produce data sonifications of astronomical events for visually impaired individuals. The sonifications were presented at the 2019 British Science Festival (at a show entitled A Dark Tour of The Universe).

There are many sonification tools available online. However, many of these tools require in-depth knowledge of computer programming or audio software.

The researchers aimed to develop a sonification toolkit which would allow engineers working at Arup to produce accurate representations of complex datasets in Arup’s spatial audio lab (called the SoundLab), without needing to have an in-depth knowledge of computer programming or audio software.

Using sonifications to analyse data has some benefits over data visualisation. For example:

  • Humans are capable of processing and interpreting many different sounds simultaneously in the background while carrying out a task (for example, a pilot can focus on flying and interpret important alarms in the background, without having to turn their attention away to look at a screen or gauge),
  • The human auditory system is incredibly powerful and flexible and is capable of effortlessly performing extremely complex pattern recognition (for example, the health and emotional state of a speaker, as well as the meaning of a sentence, can be determined from just a few spoken words) [source],
  • and of course, sonification also allows visually impaired individuals the opportunity to understand and interpret data.

The researchers scaled down and mapped each stream of astronomical data to a parameter of sound, and they successfully used their toolkit to create accurate sonifications of astronomical events for the show at the British Science Festival. The sonifications were vetted by visually impaired astronomer Nicolas Bonne to confirm that they faithfully represented the data.
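To illustrate the parameter-mapping idea (this is not the toolkit's actual code), here is a minimal Python sketch that maps a data series to the pitch of successive tones; the mapping ranges and data are arbitrary assumptions. Other streams could be layered the same way, for example mapping brightness to loudness:

```python
import numpy as np
import soundfile as sf

def sonify(values, tone_dur=0.25, fs=44100, f_lo=220.0, f_hi=880.0):
    """Parameter mapping: each data value becomes a short sine tone,
    with higher values mapped to higher pitch (log-spaced)."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    v = (v - v.min()) / (span if span else 1.0)    # normalize to 0..1
    freqs = f_lo * (f_hi / f_lo) ** v              # log-spaced pitch mapping
    t = np.arange(int(tone_dur * fs)) / fs
    fade = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)  # click-free edges
    return np.concatenate([np.sin(2 * np.pi * f * t) * fade for f in freqs])

# Example: sonify a made-up series of stellar brightnesses.
audio = sonify([1.2, 3.4, 2.2, 5.0, 4.1, 0.7])
sf.write("sonification.wav", 0.5 * audio, 44100)
```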

Information on A Dark Tour of the Universe is available at the European Southern Observatory website, as are links to the sonifications. Make sure you listen to stars appearing in the night sky and galaxies merging! Table 1 gives specific examples of parameter mapping for these two sonifications. The concept of parameter mapping is further illustrated in Figure 1.

Table 1
Figure 1: image courtesy of NASA’s Space Physics Data Facility

Fire Hydrant Hydrophones Find Water Leaks #ASA184

Locating leaks in water distribution networks is made easier with hydrant-mounted hydrophones and advanced algorithms.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 11, 2023 – Access to clean drinking water is essential for healthy communities, but delivering that water is growing increasingly difficult for many utilities. Corroding pipes and land shifts in aging water distribution networks can create frequent leaks, wasting water before it ever gets to the tap. Utilities in the U.S. lose about 6 billion gallons of water a day — enough to fill 9,000 swimming pools — due to leaks, in addition to wasted energy and resources spent in collecting and treating that water.

An overview of the methodology used for leak identification: collecting acoustic data, extracting relevant features, and employing advanced machine learning and probabilistic models for leak detection and localization. Credit: Pranav Agrawal

Pranav Agrawal and Sriram Narasimhan from the University of California, Los Angeles will discuss an innovative acoustic solution to identify and track leaks in water distribution networks in their talk, “Maximum likelihood estimation for leak localization in water distribution networks using in-pipe acoustic sensing.” The presentation will take place Thursday, May 11, at 12:25 p.m. Eastern U.S. in the Purdue/Wisconsin room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

Detecting a leak in a single straight pipe is not a challenge, but large urban networks can be a grid of hundreds or thousands of pipes, and precisely locating a leak is no easy task. Acoustic monitoring is the go-to solution, as the sounds from leaks are unique and travel far in water, but even this method struggles in complex pipe networks.

“Localization of the leak is complex as it involves factors like hydrophone density, the frequency bandwidth of the leak sound, and material properties of the pipe,” said Agrawal. “It is impractical to have highly dense sensing that can localize leaks at any location in the network.”

To tackle the problem, the researchers developed algorithms that operate on acoustic signals collected via hydrophones mounted on the most accessible parts of the pipe network: fire hydrants.

“We have developed algorithms which operate on acoustic data collected from state-of-the-art monitoring devices mounted on fire hydrants and ‘listen’ to the sound produced by leaks inside the water column,” said Agrawal. “This device is now commercially available through Digital Water Solutions and has been deployed in various locations in Canada and the U.S., including in ongoing demonstration trials at the Naval Facilities Engineering and Expeditionary Warfare Center, Ventura County in California.”

Attaching their sensors to fire hydrants means the team can avoid costly excavation and reposition the devices as needed. Combined with novel probabilistic and machine-learning techniques to analyze the signals and pinpoint leaks, this technology could support water conservation efforts, especially in the western U.S., where they are direly needed.
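The presenters' maximum likelihood method is not spelled out in this release, but the core idea of acoustic leak localization can be sketched as follows: estimate the time difference of arrival (TDOA) of the leak sound between two hydrant hydrophones from their cross-correlation, then score candidate leak positions by how well they predict that TDOA. All names, speeds, and geometry below are illustrative assumptions:

```python
import numpy as np

def measured_tdoa(sig_a, sig_b, fs):
    """Delay of sig_a relative to sig_b (seconds), estimated from the
    peak of the cross-correlation of the two hydrophone signals."""
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    return (np.argmax(xcorr) - (len(sig_b) - 1)) / fs

def localize_leak(tdoa_obs, pos_a, pos_b, c_pipe=1200.0, sigma=1e-3):
    """Grid-search maximum likelihood along a single pipe run: pick the
    candidate position whose predicted TDOA (arrival at A minus arrival
    at B) best matches the observation, under Gaussian timing noise."""
    candidates = np.linspace(min(pos_a, pos_b), max(pos_a, pos_b), 2000)
    tdoa_pred = (np.abs(candidates - pos_a) - np.abs(candidates - pos_b)) / c_pipe
    log_like = -0.5 * ((tdoa_obs - tdoa_pred) / sigma) ** 2
    return candidates[np.argmax(log_like)]

# Example: hydrants at 0 m and 400 m on a main where leak noise travels
# ~1200 m/s; a leak 130 m from hydrant A gives t_A - t_B = (130-270)/1200.
# (In practice tdoa_obs would come from measured_tdoa on the recordings.)
print(localize_leak((130.0 - 270.0) / 1200.0, pos_a=0.0, pos_b=400.0))
```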

Reference in this release to the U.S. Navy Facilities Engineering and Expeditionary Warfare Center (NAVFAC-EXWC) does not imply endorsement of an individual contractor or solution by NAVFAC-EXWC, the U.S. Navy, or the U.S. Department of Defense.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but devices often must be purchased before patients can try them in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR “test drive” of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After turning a new hearing aid feature on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings correct, hearing aid purchasers must also decide which “technology level” they would like to buy. Patients are given a choice of three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per increase in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These are the same devices sold in audiology clinics, but their microphones have been removed and replaced with wires for inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and they process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. The system is currently being developed further, and it is planned to be implemented in audiology clinics as an advanced hearing aid fitting and patient counseling tool.

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.
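The real-time behavior described above can be sketched as a block-based render loop: for each short block of source audio, the scene simulation supplies the current impulse response to a hearing aid microphone (which changes as the listener's head rotates), and overlap-add convolution produces the signal sent down the wires. This is a simplified single-source, single-channel sketch under those assumptions, not the actual research software:

```python
import numpy as np

BLOCK = 256   # samples per audio block (about 5 ms at 48 kHz)

def render_stream(source_blocks, get_current_ir):
    """Overlap-add block convolution with a time-varying impulse
    response. get_current_ir() is a hypothetical stand-in for the
    scene simulation; it must always return IRs of the same length
    (real systems also crossfade between filter updates)."""
    tail = None
    for block in source_blocks:
        ir = get_current_ir()             # updates as the head rotates
        wet = np.convolve(block, ir)      # length BLOCK + len(ir) - 1
        if tail is not None:
            wet[: len(tail)] += tail      # ring-out from earlier blocks
        yield wet[:BLOCK]                 # this block goes to the wires now
        tail = wet[BLOCK:]                # carry the remainder forward

# Toy usage: white noise rendered through a decaying 4096-tap "room".
rng = np.random.default_rng(0)
blocks = (rng.normal(size=BLOCK) for _ in range(100))
ir = np.exp(-np.arange(4096) / 800.0) * rng.normal(size=4096)
out = np.concatenate(list(render_stream(blocks, lambda: ir)))
```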