1aPAb1 – On the origin of thunder: reconstruction of lightning flashes, statistical analysis and modeling

Arthur Lacroix – arthur.lacroix@dalembert.upmc.fr
Thomas Farges – thomas.farges@cea.fr
CEA, DAM, DIF, Arpajon, France

Régis Marchiano – regis.marchiano@sorbonne-universite.fr
François Coulouvrat – francois.coulouvrat@sorbonne-universite.fr
Institut Jean Le Rond d’Alembert, Sorbonne Université & CNRS, Paris, France

Popular version of paper 1aPAb1
Presented Monday morning, November 5, 2018
176th ASA Meeting, Vancouver, Canada

Thunder is the sound produced by lightning, a frequent natural phenomenon that occurs, on average, about 25 times per second somewhere on Earth. The Ancients associated thunder with the voice of deities, although ancient Greek scholars such as Aristotle already proposed natural causes. Modern science established the link between lightning and thunder. Although thunder is audible, it also contains an infrasonic frequency component, inaudible to humans, whose origin remains controversial.

As part of the European project HyMeX on the hydrological cycle of the Mediterranean region, thunder was recorded continuously by an array of four microphones for two months in 2012 in Southern France, over the frequency range 0.5 to 180 Hz, covering both infrasound and audible sound. In particular, 27 lightning flashes were studied in detail. By measuring the time delays between the different parts of the signals at the different microphones, the direction from which thunder arrives can be determined. By dating the lightning's ground impact, and therefore the emission time, the position of each noise source within the lightning flash can be reconstructed in detail. This "acoustical lightning photography" process was validated by comparison with a direct high-frequency electromagnetic reconstruction based on an array of 12 antennas from New Mexico Tech, installed for the first time in Europe. By examining the altitude of the acoustic sources as a function of time, it is possible to distinguish, within the acoustic signal, the part originating from the lightning channel connecting the cloud to the ground from the part taking place within the cloud. In some cases, it is even possible to separate several cloud-to-ground branches. Thunder infrasound unambiguously comes mainly from the return strokes linking cloud to ground.
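The direction-finding step can be illustrated with a short sketch: cross-correlating the signals recorded at two microphones gives the time delay, and the delay, the microphone spacing and the sound speed give the arrival angle. This is a simplified two-microphone illustration (the study used a four-microphone array); the sampling rate, spacing and test signal below are invented for the example.

```python
import numpy as np

def delay_between(sig_a, sig_b, fs):
    """Estimate the time delay (s) of sig_b relative to sig_a
    from the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

def arrival_angle(delay, mic_spacing, c=343.0):
    """Angle of incidence (degrees) of a plane wave crossing two
    microphones separated by mic_spacing metres."""
    s = np.clip(c * delay / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic test: the same pulse arrives 5 samples later at mic B
fs = 1000.0
pulse = np.exp(-0.5 * ((np.arange(200) - 100) / 5.0) ** 2)
mic_a = pulse
mic_b = np.roll(pulse, 5)              # delayed copy (5 ms)
tau = delay_between(mic_a, mic_b, fs)
print(tau)                             # 0.005 s
print(arrival_angle(tau, mic_spacing=50.0))
```

With four microphones, three independent delays fix the full arrival direction; combined with the emission time, ray tracing back along that direction locates the source.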
Our observations contradict one of the theories proposed for the emission of infrasound by thunder, which attributes it to the release of electrostatic pressure in the cloud. They agree, instead, with the theory explaining thunder as the result of the sudden, intense compression and heating of air – typically to 20,000 to 30,000 K – within the lightning stroke. The second main result of our observations is the strong dependence of the characteristics of thunder on the distance between the lightning and the observer. Although a matter of common experience, this dependence had not been clearly demonstrated before.

To consolidate our data, a theoretical model of thunder has been developed. A tortuous shape for the lightning strike between cloud and ground is randomly generated. Each individual part of this strike is modeled as a giant spark by solving the complex equations of hydrodynamics and plasma physics. By summing all contributions, the lightning stroke is transformed into a noise source, which is then propagated down to a virtual listener. This simulated thunder is analyzed and compared to the recordings. Many of our observations are qualitatively recovered by the model. In the future, this model, combined with present and new thunder recordings, could potentially be used as a lightning thermometer, to directly record the large, sudden and otherwise inaccessible temperature rise within the lightning channel.
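A minimal sketch of the first modeling step described above: randomly generating a tortuous cloud-to-ground channel and computing the acoustic travel time from each segment to a listener, which is what spreads the sound of a near-instantaneous stroke into several seconds of thunder. The walk parameters (segment count, step size, cloud height) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tortuous_channel(n_segments=200, cloud_height=5000.0,
                     step_sigma=60.0, seed=1):
    """Generate a randomly tortuous cloud-to-ground channel as a
    biased random walk descending from cloud_height to the ground."""
    rng = np.random.default_rng(seed)
    dz = cloud_height / n_segments
    pts = [np.array([0.0, 0.0, cloud_height])]
    for _ in range(n_segments):
        dx, dy = rng.normal(0.0, step_sigma, size=2)
        pts.append(pts[-1] + np.array([dx, dy, -dz]))
    return np.array(pts)

def travel_times(channel, listener, c=343.0):
    """Acoustic travel time from each channel segment to a listener,
    treating every segment midpoint as an independent noise source."""
    mids = 0.5 * (channel[:-1] + channel[1:])
    return np.linalg.norm(mids - listener, axis=1) / c

ch = tortuous_channel()
t = travel_times(ch, listener=np.array([2000.0, 0.0, 0.0]))
print(t.min(), t.max())   # arrival times spread over several seconds
```

Summing the individual spark waveforms at their respective arrival times, as the full model does, turns this geometry into a synthetic thunder signal.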


3aPA8 – High Altitude Venus Operational Concept (HAVOC)

Adam Trahan – ajt6261@louisiana.edu
Andi Petculescu – andi@louisiana.edu

University of Louisiana at Lafayette
Physics Department
240 Hebrard Blvd., Broussard Hall
Lafayette, LA 70503-2067

Popular version of paper 3aPA8
Presented Wednesday morning, May 9, 2018
175th ASA Meeting, Minneapolis, MN


Artist’s rendition of the envisioned HAVOC mission. (Credit: NASA Systems Analysis and Concepts Directorate, sacd.larc.nasa.gov/smab/havoc)

The motivation for this research stems from NASA’s proposed High Altitude Venus Operational Concept (HAVOC), which, if successful, would lead to a possible month-long human presence above the cloud layer of Venus.

The atmosphere of Venus is composed primarily of carbon dioxide, with small amounts of nitrogen and other trace molecules at parts-per-million levels. With surface temperatures about 2.5 times higher than Earth's and pressures roughly 100 times greater, the Venusian surface is quite a hostile environment. Higher in the atmosphere, however, the environment becomes relatively benign, with temperatures and pressures similar to those at Earth's surface. In the 40-70 km region, condensational sulfuric acid clouds prevail, contributing to the so-called "runaway greenhouse" effect.

The main condensable species on Venus is a binary mixture of sulfuric acid dissolved in water. The existence of aqueous sulfuric acid droplets is restricted to a thin region of Venus' atmosphere, namely 40-70 km above the surface. Above and below this main cloud layer, evaporation prevents anything more than a light haze from existing in liquid form. The cloud layer itself comprises three sublayers: the upper cloud layer is produced using energy from the sun, while the lower and middle cloud layers are produced by condensation. The goal of this research is to determine how these lower and middle condensational cloud layers affect the propagation of sound waves as they travel through the atmosphere.

For most waves to travel, a medium must be present; electromagnetic waves (light) are the exception, able to travel through the vacuum of space. Sound waves, however, require a fluid (gas or liquid) to support them. The presence of tiny particles affects the propagation of acoustic waves through energy-loss processes; these effects have been well studied in Earth's atmosphere. Using theoretical and numerical techniques, we are able to predict how much an acoustic wave would be weakened (attenuated) for every kilometer traveled through Venus' clouds.


Figure 2. The frequency dependence of the wave attenuation coefficient. The attenuation is stronger at high frequencies, with a large transition region between 1 and 100 Hz.

Figure 2 shows how the attenuation parameter changes with frequency. At higher frequencies (greater than 100 Hz), the attenuation is larger than at lower frequencies, due primarily to the motion of the liquid cloud droplets as they react to the passing acoustic wave. In the lower frequency region, the attenuation is lower and is due primarily to evaporation and condensation processes, which require energy from the acoustic wave.
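The frequency dependence described above can be imitated with a toy model: a classical absorption term growing as the square of frequency, plus a single relaxation term standing in for the droplet-motion and evaporation/condensation losses. All coefficients below are arbitrary placeholders chosen only to reproduce the qualitative shape of Figure 2, not the actual Venus calculation.

```python
import numpy as np

def toy_attenuation(f, a_cl=1e-9, a_rel=1e-3, tau=0.016):
    """Toy attenuation coefficient (Np/m): a classical term growing as
    f^2 plus one relaxation term that saturates above f ~ 1/(2*pi*tau)."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    return a_cl * np.asarray(f, dtype=float) ** 2 \
        + a_rel * (w * tau) ** 2 / (1 + (w * tau) ** 2)

f = np.logspace(-1, 3, 200)      # 0.1 Hz to 1 kHz
alpha = toy_attenuation(f)
# alpha rises steeply through the 1-100 Hz transition region,
# then keeps growing at high frequency via the classical term
```

With tau = 0.016 s the relaxation "knee" sits near 10 Hz, placing the transition region between roughly 1 and 100 Hz, as in Figure 2.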

For the present study, the cloud environment was treated as a perfect (ideal) gas, which assumes the gas molecules behave like billiard balls, simply bouncing off one another. This assumption is valid for low-frequency sound waves. To complete the model, real-gas effects are added to obtain the background attenuation in the surrounding atmosphere. This will enable us to predict the net losses an acoustic wave is likely to experience at the projected HAVOC altitudes.

The results of this study could prove valuable for guiding the development of acoustic sensors designed to investigate atmospheric properties on Venus.

This research was sponsored by a grant from the Louisiana Space Consortium (LaSPACE).

1pPA – Assessment of Learning Algorithms to Model Perception of Sound

Menachem Rafaelof
National Institute of Aerospace (NIA)

Andrew Schroeder
NASA Langley Research Center (NIFS intern, summer 2017)

175th Meeting of the
Acoustical Society of America
Minneapolis Minnesota
7-11 May 2018
1pPA, Novel Methods in Computational Acoustics II

Sound and its Perception
Sound waves are basically fluctuations of air pressure at points in space. While this simple physical description captures what sound is, its perception is much more complicated, involving physiological and psychological processes.

Physiological processes involve a number of functions during the transmission of sound through the outer, middle and inner ear, before transduction into neural signals. Examples include amplification due to resonance within the outer ear, and substantial attenuation at low frequencies and separation of frequency components within the inner ear. Central processing of sound is based on neural impulses (counts of electrical signals) transferred to the auditory center of the brain. This transformation occurs at different levels in the brain. A major component in this processing is the auditory cortex, where sound is consciously perceived as being, for example, loud, soft, pleasing, or annoying.

Motivation
Currently an effort is underway to develop and put to use “air taxis”, vehicles for on-demand passenger transport. A major concern with these plans is operation of air vehicles close to the public and the potential negative impact of their noise. This concern motivates the need for the development of an approach to predict human perception of sound. Such capability will enable the designers to compare different vehicle configurations and their sounds, and address design factors that are important to noise perception.

Approach
Supervised learning algorithms are a class of machine learning algorithms capable of learning from examples. During the learning stage, samples of input data and matching responses are used to construct a predictive model. This work compared the performance of four supervised learning algorithms (Linear Regression (LR), Support Vector Machines (SVM), Decision Trees (DTs) and Random Forests (RFs)) in predicting human annoyance from sounds. Construction of the predictive models included three stages: 1) Sample sounds for training are analyzed in terms of loudness (N), roughness (R), sharpness (S), tone prominence ratio (PR) and fluctuation strength (FS); these parameters quantify various subjective attributes of sound and serve as predictors within the model. 2) Each training sound is presented to a group of test subjects, and their annoyance response (Y in Figure 1) to each sound is gathered. 3) A predictive model (H-hat) is constructed using a machine learning algorithm and is used to predict the annoyance of new sample sounds (Y-hat).
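The comparison machinery can be sketched with scikit-learn, using synthetic stand-in data: five random predictors play the roles of N, R, S, PR and FS, and a made-up annoyance rating replaces the subject responses. Only the procedure mirrors the paper; none of the data or scores do.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 "sounds", five psychoacoustic predictors,
# and a rating loosely driven by the first and third predictors
X = rng.uniform(0.0, 1.0, size=(200, 5))
y = 3.0 * X[:, 0] + X[:, 2] + 0.2 * rng.normal(size=200)

models = {
    "LR": LinearRegression(),
    "SVM": SVR(),
    "DT": DecisionTreeRegressor(max_depth=5),
    "RF": RandomForestRegressor(n_estimators=50, random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated mean absolute error for each algorithm
    score = cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {-score:.3f}")
```

With real subject ratings in place of `y`, the same loop reproduces the accuracy comparison; interpretability, versatility and computation time are judged separately.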

Figure 1: Construction of a model (H-hat) to predict the annoyance of sound. Path a: training sounds are presented to subjects and their annoyance rating (Y) is gathered. Subject rating of training samples and matching predictors are used to construct the model, H-hat. Path b: annoyance of a new sound is estimated using H-hat.

Findings
In this work the performance of four models, or learning algorithms, was examined. Construction of these models relied on the annoyance responses of 38 subjects to 103 sounds from 10 different sound sources, grouped in four categories: road vehicles, unmanned aerial vehicles for package delivery, distributed electric propulsion aircraft and a simulated quadcopter. Comparison of these algorithms in terms of prediction accuracy (see Figure 2), model interpretability, versatility and computation time points to Random Forests as the best algorithm for the task. These results are encouraging considering the precision demonstrated using a low-dimensional model (only five predictors) and the variety of sounds used.

Future Work
• Account for variance in human response data and establish a target error tolerance.
• Explore the use of one or two additional predictors (e.g., impulsiveness and audibility)
• Develop an inexpensive, standard, process to gather human response data
• Collect additional human response data
• Establish an annoyance scale for air taxi vehicles

Figure 2: Prediction accuracy for the algorithms examined. Accuracy here is expressed as the fraction of points predicted within error tolerance (in terms of Mean Absolute Error (MAE)) vs. error tolerance or absolute deviation. For each case, Area Over the Curve (AOC) represents the total MAE.
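The accuracy measure in Figure 2 can be computed directly: for each error tolerance, count the fraction of predictions whose absolute error falls within it; the area over that curve recovers the total mean absolute error. The sample ratings and predictions below are invented for illustration.

```python
import numpy as np

def accuracy_curve(y_true, y_pred, tolerances):
    """Fraction of predictions whose absolute error is within each
    tolerance (the curve plotted in Figure 2)."""
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.array([(err <= t).mean() for t in tolerances])

# Invented subject ratings (y_true) and model predictions (y_pred)
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.8, 3.4, 4.0])
tol = np.linspace(0.0, 1.0, 101)
frac = accuracy_curve(y_true, y_pred, tol)

# Area over the curve (left Riemann sum) approximates the MAE
aoc = np.sum((1.0 - frac)[:-1] * np.diff(tol))
print(aoc)   # close to the mean absolute error of 0.175
```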

3pPA5 – Hearing Aids that Listen to Your Phone

Jonathon Miegel – jmiegel@swin.edu.au
Philip Branch – pbranch@swin.edu.au
Swinburne University of Technology
John St
Hawthorn, VIC 3122, AU

Peter Blamey – peter.blamey@blameysaunders.com.au
Blamey Saunders hears
364 Albert Street
East Melbourne, VIC 3002, AU

Popular version of paper 3pPA5
Presented Wednesday afternoon, May 09, 2018
175th ASA Meeting, Minneapolis

Hearing loss affects 10% of the global population to some degree, but only 20% of sufferers receive treatment1,2. Hearing aids are the most common treatment for hearing loss, with longer battery life and improved ease of use identified as the most desirable advances for improving acceptance3,4,5. Our research addresses both these issues.

Modern hearing aids have shrunk dramatically in size over the years. This is a positive development, since a small hearing aid is less apparent and more comfortable than older devices. However, the smaller size has brought new problems. Controls are now much harder to place on the device itself, and smaller controls have become increasingly difficult to use, especially for those with low dexterity. Small switches and additional accessories have been the main ways to interact with hearing aids, with increasing adoption of Bluetooth Low Energy (BLE) for connections with smart phones.

The use of BLE and other radio frequency technologies requires additional hardware within the hearing aid, which increases both its price and power consumption. Our work addresses this problem by using high frequency sound waves and ultrasound to communicate between a smart phone and hearing aid (Figure 1). Using hardware already present on the hearing aid allows our technology to be implemented on both old and new hearing aids without any additional hardware costs.


Figure 1 –  An illustration of acoustic wireless communication between a smart phone and a hearing aid.

Our work investigated the performance of multiple communication techniques operating at frequencies within the inaudible range of 16 to 24 kHz. To reduce power consumption, the highly efficient audio processing capabilities of the hearing aid were used alongside simple manipulations of the audio signal. These simple manipulations modulate the amplitude and frequency of the sound waves to transmit binary data.
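One simple modulation of the kind described can be sketched as binary frequency-shift keying in the near-ultrasonic band, demodulated by comparing tone energies in each symbol. The parameters (48 kHz sampling, 18 and 20 kHz tones, 10 ms symbols) are illustrative assumptions, not the actual system's values.

```python
import numpy as np

FS = 48000               # assumed sample rate (Hz)
F0, F1 = 18000, 20000    # inaudible tones for bits 0 and 1
SYMBOL = 480             # samples per bit (10 ms)

def modulate(bits):
    """Binary FSK: one inaudible tone burst per bit."""
    t = np.arange(SYMBOL) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def demodulate(signal):
    """Recover bits by comparing tone energy at F0 vs F1 per symbol."""
    t = np.arange(SYMBOL) / FS
    ref0 = np.exp(-2j * np.pi * F0 * t)
    ref1 = np.exp(-2j * np.pi * F1 * t)
    bits = []
    for i in range(0, len(signal), SYMBOL):
        chunk = signal[i:i + SYMBOL]
        bits.append(1 if abs(chunk @ ref1) > abs(chunk @ ref0) else 0)
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
audio = modulate(msg)
noisy = audio + 0.3 * np.random.default_rng(0).normal(size=audio.size)
print(demodulate(noisy) == msg)   # True
```

At 10 ms per bit, a 48-bit command of the kind used in this work would take about half a second to transmit, which suits occasional control messages rather than streaming.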

We were able to transmit 48 bits of data over a range of 3 metres while consuming less power than BLE. While 48 bits of data is relatively small compared to data sent via radio frequency transmissions, it represents multiple commands for the remote operation of two hearing aids. These commands can be used to adjust the volume as well as change program settings for different listening scenarios.

There are benefits to using sound waves as a communication channel for body-worn devices other than hearing aids. The limited transmission range of high-frequency audio provides security through proximity, as any potential attacker must be within close range and line of sight to conduct an attack. The prevalence of audio hardware in personal electronic devices also makes sound a potential universal communication medium across varying platforms.

As hardware on both the transmitting and receiving sides of the acoustic channel continues to develop for the core purpose of each technology, acoustic wireless communication will continue to improve as an option for controlling hearing aid technology and other body worn devices.

References
1 N. Oishi and J. Schacht, “Emerging treatments for noise-induced hearing loss,” Expert Opin. Emerg. Dr. 16, 235-245 (2011).

2 D. Hartley, E. Rochtchina, P. Newall, M. Golding, and P. Mitchell, “Use of hearing aids and assistive listening devices in an older Australian population,” Journal of the American Academy of Audiology 21, 642-653 (2010).

3 S. Kochkin, “MarkeTrak VIII: Consumer satisfaction with hearing aids is slowly increasing,” The Hearing Journal 63, 19-20 (2010).

4 S. Kochkin, “MarkeTrak VIII Mini-BTEs tap new market, users more satisfied,” The Hearing Journal 64, 17-18 (2011).

5 S. Kochkin, “MarkeTrak VIII: The key influencing factors in hearing aid purchase intent,” Hearing Review 19, 12-25 (2012).

2aPA6 – An acoustic approach to assess natural gas quality in real time

Andi Petculescu – andi@louisiana.edu
University of Louisiana at Lafayette
Lafayette, Louisiana, US

Popular version of paper 2aPA6 “An acoustic approach to assess natural gas quality in real time.”
Presented Tuesday morning, December 5, 2017, 11:00-11:20 AM, Balcony L
174th ASA in New Orleans

Infrared laser spectroscopy offers amazing measurement resolution for gas sensing applications, ranging between 1 part per million (ppm) down to a few parts per billion (ppb).

There are applications, however, that require sensor hardware able to operate in harsh conditions without periodic maintenance or recalibration. Examples include monitoring natural gas composition in transport pipes, explosive gas accumulation in grain silos, and ethylene concentration in greenhouse environments. A robust alternative is gas-coupled acoustic sensing. Such gas sensors operate on the principle that sound waves are intimately coupled to the gas under study; hence any perturbation of the gas will affect i) how fast the waves travel and ii) how much energy they lose during propagation.

The former effect is represented by the so-called speed of sound, the typical “workhorse” of acoustic sensing. The speed of sound of a gas mixture changes with composition because it depends on two gas parameters besides temperature. The first is the mass of the molecules forming the mixture; the second is the heat capacity, which describes the ability of the gas to follow, via the amount of heat exchanged, the temperature oscillations accompanying a sound wave. All commercial gas-coupled sonic gas monitors rely solely on the dependence of sound speed on molecular mass. This traditional approach, however, can only sense relative changes in the speed of sound, and hence in mean molecular mass; it cannot perform a truly quantitative analysis. Heat capacity, on the other hand, is the thermodynamic “footprint” of the energy exchanged during molecular collisions. It therefore opens up the possibility of quantitative gas sensing. Furthermore, the attenuation coefficient, which describes how fast energy is lost from the coherent (“acoustic”) motion to the incoherent (random) motion of the gas molecules, has largely been ignored.
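The dependence of sound speed on molecular mass and heat capacity can be made concrete for an ideal-gas mixture: c = sqrt(γRT/M), with mole-fraction-weighted molar mass M and heat capacity Cp, and γ = Cp/(Cp − R). The tabulated Cp values below are approximate room-temperature figures used only for illustration.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Approximate molar masses (kg/mol) and isobaric heat capacities
# (J/(mol K), near 300 K) for a few species of interest
GASES = {"CH4": (16.04e-3, 35.7),
         "N2":  (28.01e-3, 29.1),
         "CO2": (44.01e-3, 37.1)}

def sound_speed(fractions, T=300.0):
    """Low-frequency (equilibrium) sound speed of an ideal-gas mixture:
    c = sqrt(gamma * R * T / M), with mole-fraction-weighted M and Cp."""
    M = sum(x * GASES[g][0] for g, x in fractions.items())
    Cp = sum(x * GASES[g][1] for g, x in fractions.items())
    gamma = Cp / (Cp - R)
    return np.sqrt(gamma * R * T / M)

print(sound_speed({"CH4": 1.0}))             # ~450 m/s for pure methane
print(sound_speed({"CH4": 0.9, "N2": 0.1}))  # heavier mixture is slower
```

This low-frequency limit is exactly the quantity traditional sonic monitors track; the frequency-dependent heat capacity that enables quantitative sensing enters through the relaxation behavior discussed next.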
We have shown that measurements of sound speed and attenuation at only two acoustic frequencies can be used to infer the intermolecular energy transfer rates, depending on the species present in the gas. The foundation of our model is summarized in the pyramid of Figure 1. One can either predict the sound speed and attenuation if the composition is known (bottom-to-top arrow) or perform quantitative analysis or sensing based on measured sound speed and attenuation (top-to-bottom arrow).


Figure 1. The prediction/sensing pyramid of molecular acoustics. Direct problem: prediction of sound wave propagation (speed and attenuation). Inverse problem: quantifying a gas mixture from measured sound speed and attenuation.

We are developing physics-based algorithms that not only quantify a gas mixture but also help identify contaminant species in a base gas. With the right optimization, the algorithms can be used in real time to measure the composition of piped natural gas as well as its degree of contamination by CO2, N2, O2 and other species. It is these features that have sparked the interest of the gas flow-metering industry. Figure 2 shows model predictions and experimental data for the attenuation coefficient for mixtures of nitrogen in methane (Fig. 2a) and ethylene in nitrogen (Fig. 2b).


Figure 2. The normalized (dimensionless) attenuation coefficient in mixtures of N2 in CH4 (a) and C2H4 in N2 (b). Solid lines–theory; symbols–measurements

The sensing algorithm, which we named “Quantitative Acoustic Relaxational Spectroscopy” (QARS), is based on a purely geometric interpretation of the frequency-dependent heat capacity of the mixture of polyatomic molecules. This characteristic makes it highly amenable to implementation as a robust real-time sensing/monitoring technique. The results of the algorithm are shown in Figure 3 for a nitrogen-methane mixture. The example shows how the normalized attenuation curve arising from intermolecular exchanges is reconstructed (or synthesized) from data at just two frequencies. The prediction of the first-principles model (dashed line) shows two relaxation times: a main one of approximately 50 µs (corresponding to 20,000 Hz) and a secondary one around 1 ms (corresponding to 1,000 Hz). Probing the gas at only two frequencies captures the main relaxation process, around 20,000 Hz, from which the composition of the mixture can be inferred with relatively high accuracy.
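The two-frequency idea can be illustrated for a single relaxation process (the actual model handles multiple coupled relaxations): given the attenuation per wavelength at two frequencies, the peak height and relaxation frequency follow in closed form. The functional form and numbers below are a textbook single-relaxation sketch, not the QARS algorithm itself.

```python
import numpy as np

def relaxation_alpha(f, A, fr):
    """Attenuation per wavelength of a single relaxation process,
    peaking at value A when f equals the relaxation frequency fr."""
    return 2 * A * f * fr / (fr**2 + f**2)

def invert_two_freq(f1, a1, f2, a2):
    """Recover (A, fr) of a single relaxation from attenuation
    measurements at just two frequencies (closed-form inversion)."""
    r = a1 / a2
    fr2 = f1 * f2 * (f2 - r * f1) / (r * f2 - f1)
    fr = np.sqrt(fr2)
    A = a1 * (fr**2 + f1**2) / (2 * f1 * fr)
    return A, fr

# Forward-generate two "measurements", then invert them
A_true, fr_true = 2.5e-4, 20000.0
f1, f2 = 5000.0, 40000.0
a1 = relaxation_alpha(f1, A_true, fr_true)
a2 = relaxation_alpha(f2, A_true, fr_true)
print(invert_two_freq(f1, a1, f2, a2))   # recovers (A_true, fr_true)
```

Once the relaxation frequency and strength are known, the full attenuation curve can be reconstructed at every frequency, which is the "synthesis" shown in Figure 3.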

Figure 3. The normalized (dimensionless) attenuation as a function of frequency. Dashed line–theoretical prediction; solid line–reconstructed curve.

5aPA3 – Elastic Properties of a Self-Healing Thermal Plastic

Kenneth A. Pestka II – pestkaka@longwood.edu
Jacob W. Hull – jacob.hull@live.longwood.edu
Jonathan D. Buckley – jonathan.buckley@live.longwood.edu

Department of Chemistry and Physics
Longwood University
Farmville, Virginia, 23909, USA

Stephen J. Kalista Jr. – kaliss@rpi.edu
Department of Biomedical Engineering,
Rensselaer Polytechnic Institute
Troy, New York, 12180, USA

Popular version of paper 5aPA3
Presented Friday morning, May 11, 2018
175th ASA Meeting, Minneapolis, MN

In our lab at Longwood University, we have recently used resonant acoustic and ultrasonic spectroscopy to improve our understanding of a self-healing thermoplastic ionomer composed of poly(ethylene-co-methacrylic acid) (EMAA-0.6Na), both before and after damage [1]. Resonant Ultrasound Spectroscopy (RUS) is a powerful technique ideally suited to characterizing and determining the elastic properties of novel materials, especially those available only in small sample sizes or with exotic attributes, and EMAA-0.6Na is one of the more exotic materials [1,2]. EMAA-0.6Na is a thermoplastic capable of autonomously self-healing after energetic impact, and even after penetration by a bullet [3].

Material samples, including those composed of EMAA-0.6Na, exhibit normal modes of vibration and resonant frequencies that are governed by their sample geometry, mass and elastic properties, as illustrated in Fig. 1. The standard RUS approach uses an initial set of approximate elastic constants as input parameters in a computer program to calculate a set of theoretical resonant frequencies. The resulting theoretically calculated resonant frequencies are then iteratively adjusted and compared to the experimentally measured resonant frequencies in order to determine the actual elastic properties of a material.
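The iterative adjustment described above can be caricatured with a one-parameter toy model: assume each resonant frequency scales as a mode-shape factor times sqrt(c_eff/ρ), then iterate a least-squares update on the effective elastic constant until modeled and measured frequencies agree. Real RUS inverts for a full elastic tensor with a proper forward solver; the density and mode factors below are invented.

```python
import numpy as np

RHO = 950.0   # assumed sample density, kg/m^3

def model_freqs(c_eff, mode_factors):
    """Toy forward model: each resonant frequency scales as a
    geometry-dependent mode factor times sqrt(c_eff / density)."""
    return np.asarray(mode_factors) * np.sqrt(c_eff / RHO)

def fit_elastic_constant(measured, mode_factors, c0=1e8, n_iter=20):
    """Gauss-Newton iteration: adjust the effective elastic constant
    until modeled frequencies match the measured ones (RUS inverse step)."""
    g = np.asarray(mode_factors)
    s = np.sqrt(c0 / RHO)               # current sound-speed estimate
    for _ in range(n_iter):
        resid = g * s - measured
        s -= (g @ resid) / (g @ g)      # least-squares update on s
    return RHO * s**2

# Synthetic test: generate "measured" frequencies from a known constant
g = np.array([2.1, 3.4, 4.0, 5.2, 6.6, 7.1])
measured = model_freqs(3.0e8, g)
print(fit_elastic_constant(measured, g))   # recovers ~3.0e8
```

In the genuine procedure, the mode factors come from solving the elastic vibration problem for the actual sample geometry, and several independent elastic constants are adjusted simultaneously.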

Figure 1. 3D-model of a self-healing EMAA-0.6Na sample illustrating the first six vibrational modes.

However, EMAA-0.6Na is a relatively soft material, leading to sample resonances that are often difficult to isolate and identify. A partial spectrum from an EMAA-0.6Na sample is shown in Fig. 2. In order to extract individual resonant frequencies, a multiple peak-fitting algorithm was used, as shown in Fig. 2(b).
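The peak-extraction step can be sketched with SciPy: fit a sum of Lorentzians to a noisy spectrum and read off the resonance centers. The synthetic spectrum below only mimics the two resonances near 8.7 and 9.8 kHz discussed with Figure 2; the amplitudes, widths and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(f, a1, c1, w1, a2, c2, w2):
    """Sum of two Lorentzian peaks (amplitude a, center c, half-width w)."""
    return (a1 * w1**2 / ((f - c1)**2 + w1**2)
            + a2 * w2**2 / ((f - c2)**2 + w2**2))

# Synthetic spectrum with overlapping resonances near 8.7 and 9.8 kHz
f = np.linspace(8000.0, 10500.0, 500)
clean = two_lorentzians(f, 1.0, 8700.0, 120.0, 0.7, 9800.0, 90.0)
noisy = clean + 0.02 * np.random.default_rng(1).normal(size=f.size)

p0 = [1.0, 8600.0, 100.0, 1.0, 9700.0, 100.0]  # rough initial guesses
popt, _ = curve_fit(two_lorentzians, f, noisy, p0=p0)
print(popt[1], popt[4])   # fitted peak centers
```

Tracking the fitted centers over time, as in Figs. 2(c) and 2(d), is what reveals the slow elastic evolution of the undamaged material.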


Figure 2. Undamaged sample behavior: time dependence of the partial resonant spectrum of an approximately 7✕7.5✕1.4 mm³ EMAA sample over 48 hours (a). Lorentzian multi-peak fit to the signal used to extract individual resonances (b). Time evolution of the resonant frequencies at approximately 8.7 kHz (c) and 9.8 kHz (d) for the undamaged EMAA sample, adapted from [1].

Interestingly, the resonant frequencies of undamaged EMAA-0.6Na samples changed over time, as shown in Fig. 2(c) and 2(d), but the observed rate of elastic evolution was quite gradual. However, once the samples were damaged, in this case by a 3 mm pinch punch hammered directly into approximately 1 mm thick samples, dramatic changes occurred in the resonant spectrum, as shown in Fig. 3. Using this approach, we were able to determine the approximate healing timescale of several EMAA-0.6Na samples after damage.


Figure 3. Partial time-dependent spectrum of an approximately 7✕7.5✕1.4 mm³ EMAA sample before damage (a) and after damage (b). The Lorentzian multi-peak fits are shown just after damage (c) and over an hour after damage (d), adapted from [1].

Building on this approach, we have been able to identify a sufficient number of resonant frequencies of undamaged EMAA-0.6Na samples to determine the complete set of elastic constants. In addition, it should be possible to assess the evolution of the EMAA-0.6Na elastic constants for both undamaged and damaged samples, with the ultimate goal of quantifying the material parameters and environmental conditions that most significantly affect the elastic and self-healing behavior of this unusual material.

[1] K. A. Pestka II, J. D. Buckley, S. J. Kalista Jr., and N. R. Bowers, “Elastic evolution of a self-healing ionomer observed via acoustic and ultrasonic resonant spectroscopy,” Sci. Rep. 7, 14417 (2017). doi:10.1038/s41598-017-14321-z

[2] A. Migliori and J. D. Maynard, “Implementation of a modern resonant ultrasound spectroscopy system for the measurement of the elastic moduli of small solid specimens,” Rev. Sci. Instrum. 76, 121301 (2005).

[3] S. J. Kalista and T. C. Ward, “Self-Healing of Poly(ethylene-co-methacrylic acid) Copolymers Following Ballistic Puncture,” Proceedings of the First International Conference on Self Healing Materials, Noordwijk aan Zee, The Netherlands: Springer (2007).