4aPA – Using Sound Waves to Quantify Erupted Volumes and Directionality of Volcanic Explosions

Alexandra Iezzi – amiezzi@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

David Fee – dfee1@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

Popular version of paper 4aPA
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

Volcanic eruptions can produce serious hazards, including ash plumes, lava flows, pyroclastic flows, and lahars. Volcanic phenomena, especially explosions, produce a substantial amount of sound, particularly in the infrasound band (<20 Hz, below the range of human hearing), which can be detected at both local and global distances using dedicated infrasound sensors. Recent research has focused on inverting infrasound data collected within a few kilometers of an explosion, which can provide robust estimates of the mass and volume of erupted material in near real time. While local geophysical monitoring of volcanoes typically relies on seismometers, seismic data alone sometimes cannot tell whether a signal originates only from the subsurface or has become subaerial (i.e., erupting). Combining volcano infrasound recordings with seismic monitoring helps clarify whether material is actually coming out of the volcano and therefore poses a potential threat to society.

This presentation summarizes results from many recent studies on acoustic source inversions for volcanoes, including a recent study by Iezzi et al. (in review) at Yasur volcano, Vanuatu. Yasur is easily accessible and produces explosions every 1 to 4 minutes, making it an excellent place to study volcanic explosion mechanisms (Video 1).

Video 1 – Video of a typical explosion at Yasur volcano, Vanuatu.

Most volcano infrasound inversion studies assume that sound radiates equally in all directions. However, the potential for acoustic directionality of the volcano infrasound source mechanism is not well understood, because infrasound sensors are usually deployed only on Earth's surface. In our study, we placed an infrasound sensor on a tethered balloon that was walked around the volcano to measure the acoustic wavefield above Earth's surface and investigate possible acoustic directionality (Figure 1).

Figure 1 [file missing] – Image showing the aerostat on the ground prior to launch (left) and when tethered near the crater rim of Yasur (right).

Volcanoes typically have high topographic relief that can significantly distort the waveforms we record, even at distances of only a few kilometers. We can account for this effect by modeling the acoustic propagation over the topography (Video 2).

Video 2 – Video showing the pressure field that results from inputting a simple compressional source at the volcanic vent and propagating the wavefield over a model of topography. The red denotes positive pressure (compression) and blue denotes negative pressure (rarefaction). We note that all complexity past the first red band is due to topography.
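The simulation shown in Video 2 can be sketched in a few dozen lines of code: a finite-difference solution of the 2-D wave equation, with a pressure pulse injected at a "vent" and the terrain handled crudely by forcing the pressure to zero inside the ground. Everything below (grid, sound speed, idealized cone-shaped topography, source wavelet) is an illustrative assumption, not the actual Yasur model, which uses the real topography of the volcano and proper boundary conditions on the terrain surface.

    import numpy as np

    # Minimal 2-D finite-difference acoustic simulation over an idealized terrain.
    nx, nz = 300, 200          # grid points in x (horizontal) and z (vertical)
    dx = 10.0                  # grid spacing [m]
    c = 340.0                  # sound speed [m/s]
    dt = 0.5 * dx / c          # time step satisfying the CFL stability condition

    # Synthetic cone-like "volcano": ground height (in grid points) at each x.
    x = np.arange(nx) * dx
    height_m = 50.0 + 600.0 * np.exp(-((x - 1500.0) / 400.0) ** 2)
    ground = (height_m / dx).astype(int)

    p_old = np.zeros((nx, nz))
    p = np.zeros((nx, nz))
    src_ix = nx // 2
    src_iz = ground[src_ix] + 2    # source just above the "vent"
    f0 = 0.5                       # source centre frequency [Hz]

    for it in range(600):
        lap = np.zeros_like(p)
        lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:]
                           + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1]) / dx**2
        p_new = 2.0 * p - p_old + (c * dt) ** 2 * lap

        # Ricker wavelet injected at the vent as a stand-in for the explosion source.
        arg = (np.pi * f0 * (it * dt - 1.5 / f0)) ** 2
        p_new[src_ix, src_iz] += (1.0 - 2.0 * arg) * np.exp(-arg)

        # Crude topography: zero the pressure inside the ground at every step.
        for ix in range(nx):
            p_new[ix, :ground[ix]] = 0.0

        p_old, p = p, p_new

    print("peak |pressure| near the edge of the grid:", np.abs(p[-10, :]).max())

All of the wiggles that appear behind the first wavefront in such a simulation are created by the interaction of the wave with the terrain, which is exactly the effect we must account for before interpreting the recorded waveforms.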

Once the effects of topography are constrained, we can assume that, very close to the source, all remaining complexity in the infrasound data comes from the acoustic source itself. This allows us to solve for the volume flow rate of erupting material (potentially in real time). In addition, we can examine directionality for all explosions, which may cause volcanic ejecta to be launched more often, and farther, in one direction than in others. Such directed ejecta pose a great hazard to tourists and locals near the volcano, a hazard that may be mitigated by studying the acoustic source from a safe distance using infrasound.
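The core of the inversion can be illustrated with the simplest possible source model, an acoustic monopole, for which the recorded pressure is proportional to the time derivative of the volume flow rate out of the vent; the flow rate and the cumulative erupted volume then follow by integrating the pressure record. The sketch below uses a synthetic waveform and made-up numbers; the actual inversions replace the bare monopole with numerically modeled propagation over topography (as above) and allow for directional sources.

    import numpy as np

    fs = 100.0        # sample rate [Hz]
    r = 400.0         # assumed vent-to-sensor distance [m]
    rho0 = 1.2        # air density [kg/m^3]

    t = np.arange(0.0, 20.0, 1.0 / fs)
    # Synthetic pressure trace [Pa] standing in for a recorded explosion waveform.
    p = 50.0 * np.exp(-((t - 5.0) / 0.5) ** 2) * np.sin(2.0 * np.pi * (t - 5.0))

    # For a compact monopole, p(r,t) = rho0/(4*pi*r) * dq/dt at the retarded time,
    # where q = dV/dt is the volume flow rate, so inversion is a time integration.
    q = (4.0 * np.pi * r / rho0) * np.cumsum(p) / fs   # volume flow rate [m^3/s]
    V = np.cumsum(q) / fs                              # cumulative erupted volume [m^3]

    print(f"peak volume flow rate: {q.max():.0f} m^3/s")
    print(f"total erupted volume:  {V[-1]:.0f} m^3")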

4APP28 – Listening to music with bionic ears: Identification of musical instruments and genres by cochlear implant listeners

Ying Hsiao – ying_y_hsiao@rush.edu
Chad Walker
Megan Hebb
Kelly Brown
Jasper Oh
Stanley Sheft
Valeriy Shafiro – Valeriy_Shafiro@rush.edu
Department of Communication Disorders and Sciences
Rush University
600 S Paulina St
Chicago, IL 60612, USA

Kara Vasil
Aaron Moberly
Department of Otolaryngology – Head & Neck Surgery
Ohio State University Wexner Medical Center
410 W 10th Ave
Columbus, OH 43210, USA

Popular version of paper 4APP28
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

For many people, music is an integral part of everyday life. We hear it everywhere: cars, offices, hallways, elevators, restaurants, and, of course, concert halls and people's homes. It can often make our day more pleasant and enjoyable, but its ubiquity also makes it easy to take for granted. But imagine if the music you heard around you sounded garbled and distorted. What if you could no longer tell apart the different instruments being played, rhythms were no longer clear, and much of it sounded out of tune? This unfortunate experience is common for people with hearing loss who hear through cochlear implants, or CIs, the prosthetic devices that convert the sounds around a person into electrical signals delivered directly to the auditory nerve, bypassing the natural sensory organ of hearing – the inner ear. Although CIs have been highly effective in improving speech perception for people with severe to profound hearing loss, music perception has remained difficult and frustrating for people with CIs.

Audio 1.mp4, “Music processed with the cochlear implant simulator, AngelSim by Emily Shannon Fu Foundation”

Audio 2.mp4, “Original version [“Take Five” by Francesco Muliedda is licensed under CC BY-NC-SA]”

To find out how well CI listeners identify musical instruments and music genres, we used a version of a previously developed test, the Appreciation of Music in Cochlear Implantees (AMICI). Unlike tests that probe music perception in CI listeners with simply structured stimuli in order to pinpoint specific perceptual challenges, AMICI takes a more synthetic approach and uses real-world musical pieces, which are acoustically more complex. Our findings confirmed that CI listeners indeed have considerable deficits in music perception. Participants with CIs correctly identified musical instruments only 69% of the time and musical genres 56% of the time, whereas their age-matched normal-hearing peers identified instruments and genres with 99% and 96% accuracy, respectively. The easiest instrument for CI listeners was the drums, correctly identified 98% of the time. In contrast, the most difficult instrument was the flute, with only 18% identification accuracy; it was most often (77% of the time) confused with string instruments. Among the genres, classical music was the easiest to identify, at 83% correct, while Latin and rock/pop music were the most difficult (41% correct). Remarkably, CI listeners' ability to identify musical instruments and genres correlated with their ability to identify common environmental sounds (such as a dog barking or a car horn) and spoken sentences in noise. These results provide a foundation for future work focused on rehabilitating music perception in CI listeners, so that music may once again sound pleasing and enjoyable to them, with possible additional benefits for speech and environmental sound perception.

1aPAb1 – On the origin of thunder: reconstruction of lightning flashes, statistical analysis and modeling

Arthur Lacroix – arthur.lacroix@dalembert.upmc.fr
Thomas Farges –thomas.farges@cea.fr
CEA, DAM, DIF, Arpajon, France

Régis Marchiano – regis.marchiano@sorbonne-universite.fr
François Coulouvrat – francois.coulouvrat@sorbonne-universite.fr
Institut Jean Le Rond d’Alembert, Sorbonne Université & CNRS, Paris, France

Popular version of paper 1aPAb1
Presented Monday morning, November 5, 2018
176th ASA Meeting, Vancouver, Canada

Thunder is the sound produced by lightning, a frequent natural phenomenon occurring on average about 25 times per second somewhere on Earth. The Ancients associated thunder with the voice of deities, though Greek scientists such as Aristotle already invoked natural causes. Modern science established the link between lightning and thunder. Although the sound is audible, thunder also contains an infrasonic frequency component, inaudible to humans, whose origin remains controversial.

As part of the European project HyMeX on the hydrological cycle of the Mediterranean region, thunder was recorded continuously by an array of four microphones during two months in 2012 in Southern France, in the frequency range of 0.5 to 180 Hz, covering both infrasound and audible sound. In particular, 27 lightning flashes were studied in detail. By measuring the time delays between the different parts of the signals at the different microphones, the direction from which thunder comes is determined. By dating the lightning ground impact, and therefore the emission time, the detailed position of each noise source within the lightning flash can then be reconstructed. This "acoustical lightning photography" process was validated by comparison with a high-frequency direct electromagnetic reconstruction based on an array of 12 antennas from New Mexico Tech, installed for the first time in Europe.

By examining the altitude of the acoustic sources as a function of time, it is possible to distinguish, within the acoustical signal, the part that originates from the lightning channel connecting the cloud to the ground from the part taking place within the cloud. In some cases, it is even possible to separate several cloud-to-ground branches. Thunder infrasound comes unambiguously and mainly from return strokes linking cloud to ground. Our observations contradict one of the theories proposed for the emission of infrasound by thunder, which links it to the release of electrostatic pressure in the cloud. Instead, they agree with the theory explaining thunder as the result of the sudden, intense compression and heating of air, typically to 20,000 to 30,000 K, within the lightning stroke.

The second main result of our observations is the strong dependence of the characteristics of thunder on the distance between the lightning and the observer. Although this is a common experience, it had not been clearly demonstrated in the past. To consolidate our data, a theoretical model of thunder has been developed. A tortuous shape for the lightning strike between cloud and ground is randomly generated. Each individual part of this strike is modeled as a giant spark by solving the coupled equations of hydrodynamics and plasma physics. Summing all contributions transforms the lightning stroke into a source of noise, which is then propagated down to a virtual listener. This simulated thunder is analyzed and compared to the recordings. Many of our observations are qualitatively recovered by the model. In the future, this model, combined with present and new thunder recordings, could potentially be used as a lightning thermometer, directly recording the large, sudden, and otherwise inaccessible temperature rise within the lightning channel.
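The direction-finding step can be sketched as a plane-wave fit to the delays measured between microphone pairs (in practice, the delays are obtained by cross-correlating the recorded signals). The array geometry and numbers below are invented for illustration and are not the HyMeX configuration.

    import numpy as np

    c = 343.0    # speed of sound [m/s]

    # Assumed planar array geometry (x, y) in metres, not the actual HyMeX layout.
    mics = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])

    def delays_to_direction(taus, mics, c):
        """Least-squares plane-wave fit: find the horizontal slowness vector s such
        that tau_i ~ (r_i - r_0) . s; the wave travels along s, so the source lies
        in the opposite direction (the back-azimuth)."""
        baselines = mics[1:] - mics[0]
        s, *_ = np.linalg.lstsq(baselines, np.asarray(taus), rcond=None)
        back_azimuth = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0  # clockwise from north
        return back_azimuth, 1.0 / np.linalg.norm(s)   # [deg], apparent speed [m/s]

    # Example: thunder arriving from the north-east (back-azimuth 45 degrees),
    # i.e. the wavefront sweeps across the array toward the south-west.
    u = np.array([np.sin(np.radians(45.0)), np.cos(np.radians(45.0))])  # toward source
    taus = (mics[1:] - mics[0]) @ (-u / c)                              # ideal delays [s]
    back_az, v_app = delays_to_direction(taus, mics, c)
    print(f"back-azimuth {back_az:.1f} deg, apparent speed {v_app:.0f} m/s")

Combining the arrival direction of each burst with the emission time fixed by the dated ground strike converts that burst into a point in space, which is how the "acoustical lightning photography" images are assembled.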

3aPA8 – High Altitude Venus Operational Concept (HAVOC)

Adam Trahan – ajt6261@louisiana.edu
Andi Petculescu – andi@louisiana.edu

University of Louisiana at Lafayette
Physics Department
240 Hebrard Blvd., Broussard Hall
Lafayette, LA 70503-2067

Popular version of paper 3aPA8
Presented Wednesday morning, May 9, 2018
175th ASA Meeting, Minneapolis, MN

Artist’s rendition of the envisioned HAVOC mission. (Credit: NASA Systems Analysis and Concepts Directorate, sacd.larc.nasa.gov/smab/havoc)

The motivation for this research stems from NASA’s proposed High Altitude Venus Operational Concept (HAVOC), which, if successful, would lead to a possible month-long human presence above the cloud layer of Venus.

The atmosphere of Venus is composed primarily of carbon dioxide, with small amounts of nitrogen and other trace gases at parts-per-million levels. With surface temperatures about 2.5 times Earth's and pressures roughly 100 times higher, the Venusian surface is quite a hostile environment. Higher in the atmosphere, however, the environment becomes relatively benign, with temperatures and pressures similar to those at Earth's surface. In the 40-70 km altitude region, condensational sulfuric acid clouds prevail, which contribute to the so-called "runaway greenhouse" effect.

The main condensable species on Venus is a binary mixture of sulfuric acid dissolved in water. Aqueous sulfuric acid droplets exist only in a thin region of Venus' atmosphere, namely 40-70 km above the surface. Above and below this main cloud layer, nothing more than a light haze can persist in liquid form, because the droplets evaporate. Inside the cloud layer there are three further sublayers: the upper cloud layer is produced using energy from the Sun, while the lower and middle cloud layers are produced via condensation. The goal of this research is to determine how the lower and middle condensational cloud layers affect the propagation of sound waves traveling through the atmosphere.

Most waves need a medium in order to travel; the exception is electromagnetic waves (light), which can cross the vacuum of space. Sound waves, however, require a fluid (gas or liquid) to support them. The presence of tiny suspended particles affects the propagation of acoustic waves through energy-loss processes, and these effects have been well studied in Earth's atmosphere. Using theoretical and numerical techniques, we can predict how much an acoustic wave would be weakened (attenuated) for every kilometer traveled in Venus' clouds.
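As a back-of-the-envelope illustration of what an attenuation coefficient means (the coefficients below are placeholders, not values computed in this study), the fraction of a wave's pressure amplitude that survives a given path length follows directly from the coefficient expressed in decibels per kilometer.

    def amplitude_fraction(distance_km, alpha_db_per_km):
        """Fraction of the initial pressure amplitude remaining after distance_km
        in a medium with attenuation coefficient alpha_db_per_km."""
        loss_db = alpha_db_per_km * distance_km
        return 10.0 ** (-loss_db / 20.0)

    # Purely illustrative coefficients (dB/km) at three frequencies, NOT study results.
    for freq_hz, alpha in [(1.0, 0.1), (100.0, 2.0), (1000.0, 20.0)]:
        frac = amplitude_fraction(10.0, alpha)
        print(f"{freq_hz:7.1f} Hz: {frac:.3g} of the initial amplitude left after 10 km")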

Figure 2. The frequency dependence of the wave attenuation coefficient. The attenuation is stronger at high frequencies, with a large transition region between 1 and 100 Hz.

Figure 2 shows how the attenuation parameter changes with frequency. At higher frequencies (greater than 100 Hz), the attenuation is larger than at lower frequencies, due primarily to the motion of the liquid cloud droplets as they react to the passing acoustic wave. In the lower frequency region, the attenuation is lower and is due primarily to evaporation and condensation processes, which require energy from the acoustic wave.

For the present study, the cloud environment was treated as a perfect (ideal) gas, which assumes the gas molecules behave like billiard balls, simply bouncing off one another; this assumption is valid for low-frequency sound waves. To complete the model, real-gas effects are added to obtain the background attenuation of the surrounding atmosphere. This will enable us to predict the net losses an acoustic wave is likely to experience at the projected HAVOC altitudes.

The results of this study could prove valuable for guiding the development of acoustic sensors designed to investigate atmospheric properties on Venus.

This research was sponsored by a grant from the Louisiana Space Consortium (LaSPACE).

1pPA – Assessment of Learning Algorithms to Model Perception of Sound

Menachem Rafaelof
National Institute of Aerospace (NIA)

Andrew Schroeder
NASA Langley Research Center (NIFS intern, summer 2017)

175th Meeting of the Acoustical Society of America
Minneapolis, Minnesota
7-11 May 2018
Session 1pPA, Novel Methods in Computational Acoustics II

Sound and its Perception
Sound waves are, at their most basic, fluctuations of air pressure at points in space. While this simple physical description captures what sound is, its perception is much more complicated, involving both physiological and psychological processes.

The physiological processes involve a number of functions during the transmission of sound through the outer, middle, and inner ear, before it is transduced into neural signals. Examples include amplification due to resonance within the outer ear, and substantial attenuation at low frequencies and separation of frequency components within the inner ear. Central processing of sound is based on neural impulses (counts of electrical signals) transferred to the auditory center of the brain, a transformation that occurs at several levels. A major component in this processing is the auditory cortex, where sound is consciously perceived as being, for example, loud, soft, pleasing, or annoying.

Motivation
An effort is currently underway to develop and deploy "air taxis", vehicles for on-demand passenger transport. A major concern with these plans is the operation of air vehicles close to the public and the potential negative impact of their noise. This concern motivates the development of an approach for predicting human perception of sound. Such a capability would enable designers to compare different vehicle configurations and their sounds, and to address the design factors that matter most to noise perception.

Approach
Supervised learning algorithms are a class of machine learning algorithms capable of learning from examples: during the learning stage, samples of input data and matching responses are used to construct a predictive model. This work compared the performance of four supervised learning algorithms (Linear Regression (LR), Support Vector Machines (SVM), Decision Trees (DTs), and Random Forests (RFs)) at predicting human annoyance from sounds. Construction of the predictive models involved three stages: 1) the training sounds are analyzed in terms of loudness (N), roughness (R), sharpness (S), tone prominence ratio (PR), and fluctuation strength (FS); these parameters quantify various subjective attributes of sound and serve as predictors within the model. 2) Each training sound is presented to a group of test subjects, and their annoyance response (Y in Figure 1) to each sound is gathered. 3) A predictive model (H-hat) is constructed using a machine learning algorithm and is used to predict the annoyance of new sample sounds (Y-hat).

Figure 1: Construction of a model (H-hat) to predict the annoyance of sound. Path a: training sounds are presented to subjects and their annoyance rating (Y) is gathered. Subject rating of training samples and matching predictors are used to construct the model, H-hat. Path b: annoyance of a new sound is estimated using H-hat.
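Stage 3 can be sketched with an off-the-shelf Random Forest regressor. The arrays below are random placeholders standing in for the five psychoacoustic predictors and the subjects' mean annoyance ratings; this is not the study's code or data, only an illustration of how such a model is trained and scored.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    X = rng.random((103, 5))               # columns stand in for [N, R, S, PR, FS]
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 103)   # synthetic annoyance

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    y_hat = cross_val_predict(model, X, y, cv=5)    # predictions on held-out folds

    print(f"MAE: {mean_absolute_error(y, y_hat):.3f}")

    # Fraction of sounds predicted within a given error tolerance, as in Figure 2.
    tolerance = 0.2
    within = np.mean(np.abs(y_hat - y) <= tolerance)
    print(f"{within:.0%} of sounds predicted within +/-{tolerance} annoyance units")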

Findings
In this work the performance of four models, or learning algorithms, was examined. Construction of these models relied on the annoyance responses of 38 subjects to 103 sounds from 10 different sound sources, grouped into four categories: road vehicles, unmanned aerial vehicles for package delivery, distributed electric propulsion aircraft, and a simulated quadcopter. Comparison of the algorithms in terms of prediction accuracy (see Figure 2), model interpretability, versatility, and computation time points to Random Forests as the best algorithm for the task. These results are encouraging given the precision achieved with a low-dimensional model (only five predictors) and the variety of sounds used.

Future Work
• Account for variance in human response data and establish a target error tolerance.
• Explore the use of one or two additional predictors (i.e., impulsiveness and audibility).
• Develop an inexpensive, standard process to gather human response data.
• Collect additional human response data.
• Establish an annoyance scale for air taxi vehicles.

Figure 2: Prediction accuracy for the algorithms examined. Accuracy is expressed as the fraction of points predicted within a given error tolerance (absolute deviation, in terms of Mean Absolute Error (MAE)) versus that tolerance. For each algorithm, the Area Over the Curve (AOC) represents the total MAE.

3pPA5 – Hearing Aids that Listen to Your Phone

Jonathon Miegel – jmiegel@swin.edu.au
Philip Branch – pbranch@swin.edu.au
Swinburne University of Technology
John St
Hawthorn, VIC 3122, AU

Peter Blamey – peter.blamey@blameysaunders.com.au
Blamey Saunders hears
364 Albert Street
East Melbourne, VIC 3002, AU

Popular version of paper 3pPA5
Presented Wednesday afternoon, May 09, 2018
175th ASA Meeting, Minneapolis

Hearing loss affects 10% of the global population to some degree, but only 20% of sufferers receive treatment [1,2]. Hearing aids are the most common treatment for hearing loss, and longer battery life and improved ease of use have been identified as the most desirable advances for improving their acceptance [3,4,5]. Our research addresses both of these issues.

Modern hearing aids have shrunk dramatically over the years. This is a positive development, since a small hearing aid is less conspicuous and more comfortable than older devices. However, smaller size has brought new problems. Controls are now much harder to place on the device itself, and the smaller controls have become increasingly difficult to use, especially for people with low dexterity. Small switches and add-on accessories have been the main ways to interact with hearing aids, with increasing adoption of Bluetooth Low Energy (BLE) for connections to smart phones.

The use of BLE and other radio-frequency technologies requires additional hardware inside the hearing aid, which increases both its price and its power consumption. Our work addresses this problem by using high-frequency sound waves and ultrasound to communicate between a smart phone and a hearing aid (Figure 1). Because the technique uses hardware already present on the hearing aid, it can be implemented on both old and new hearing aids without any additional hardware cost.

Figure 1 –  An illustration of acoustic wireless communication between a smart phone and a hearing aid.

Our work investigated the performance of multiple communication techniques operating at frequencies within the inaudible range of 16 to 24 kHz. To reduce power consumption, the highly efficient audio processing capabilities of the hearing aid were used alongside simple manipulations of the audio signal. These simple manipulations modulate the amplitude and frequency of the sound waves to transmit binary data.
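As a rough sketch of the frequency-manipulation idea, the snippet below encodes bits as short tone bursts at two frequencies in the high-frequency audio band and decodes them by comparing spectral energy (simple binary frequency-shift keying). The tone frequencies, bit duration, and decoding scheme are illustrative assumptions, not the parameters of our system, which also uses amplitude manipulations.

    import numpy as np

    fs = 48000.0                  # audio sample rate [Hz]
    f0, f1 = 18000.0, 20000.0     # assumed tone frequencies for bits 0 and 1
    bit_dur = 0.01                # seconds per bit

    def fsk_encode(bits):
        """Binary FSK: each bit becomes a short sine burst at f0 or f1."""
        t = np.arange(int(bit_dur * fs)) / fs
        return np.concatenate([np.sin(2.0 * np.pi * (f1 if b else f0) * t) for b in bits])

    def fsk_decode(signal):
        """Decode by comparing spectral magnitude near f0 vs f1 in each bit frame."""
        n = int(bit_dur * fs)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        bits = []
        for i in range(0, len(signal) - n + 1, n):
            spec = np.abs(np.fft.rfft(signal[i:i + n]))
            e0 = spec[np.argmin(np.abs(freqs - f0))]
            e1 = spec[np.argmin(np.abs(freqs - f1))]
            bits.append(int(e1 > e0))
        return bits

    message = [1, 0, 1, 1, 0, 0, 1, 0]     # 8 of the 48 command bits
    assert fsk_decode(fsk_encode(message)) == message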

We were able to transmit 48 bits of data over a range of 3 metres while consuming less power than BLE. While 48 bits of data is relatively small compared to data sent via radio frequency transmissions, it represents multiple commands for the remote operation of two hearing aids. These commands can be used to adjust the volume as well as change program settings for different listening scenarios.

There are benefits to using sound waves as a communication channel for other body-worn devices besides hearing aids. The limited transmission range of high-frequency audio provides security through proximity, since a potential attacker must be within close range and line of sight to mount an attack. The ubiquity of audio hardware in personal electronic devices also makes acoustic communication a potential universal medium across platforms.

As the hardware on both the transmitting and receiving sides of the acoustic channel continues to develop for the core purpose of each device, acoustic wireless communication will keep improving as an option for controlling hearing aids and other body-worn devices.

References
[1] N. Oishi and J. Schacht, "Emerging treatments for noise-induced hearing loss," Expert Opin. Emerg. Dr. 16, 235-245 (2011).

[2] D. Hartley, E. Rochtchina, P. Newall, M. Golding, and P. Mitchell, "Use of hearing aids and assistive listening devices in an older Australian population," Journal of the American Academy of Audiology 21, 642-653 (2010).

[3] S. Kochkin, "MarkeTrak VIII: Consumer satisfaction with hearing aids is slowly increasing," The Hearing Journal 63, 19-20 (2010).

[4] S. Kochkin, "MarkeTrak VIII: Mini-BTEs tap new market, users more satisfied," The Hearing Journal 64, 17-18 (2011).

[5] S. Kochkin, "MarkeTrak VIII: The key influencing factors in hearing aid purchase intent," Hearing Review 19, 12-25 (2012).