4bPA2 – Perception of sonic booms from supersonic aircraft of different sizes

Alexandra Loubeau – a.loubeau@nasa.gov
Structural Acoustics Branch
NASA Langley Research Center
MS 463
Hampton, VA 23681
USA

Popular version of paper 4bPA2, “Evaluation of the effect of aircraft size on indoor annoyance caused by sonic booms and rattle noise”
Presented Thursday afternoon, May 10, 2018, 2:00-2:20 PM, Greenway J
175th Meeting of the ASA, Minneapolis, MN, USA

Continuing interest in flying faster than the speed of sound has led researchers to develop new tools and technologies for future generations of supersonic aircraft.  One important breakthrough for these designs is that the sonic boom noise will be significantly reduced compared to that of previous planes, such as the Concorde.  Currently, U.S. and international regulations prohibit civil supersonic flight over land because of people’s annoyance at the impulsive sound of sonic booms.  In order for regulators to consider lifting the ban and introducing a new rule for supersonic flight, surveys of the public’s reactions to the new sonic boom noise are required. For community overflight studies, a quiet sonic boom demonstration research aircraft will be built. A NASA design for such an aircraft is shown in Fig. 1.

(Loubeau_QueSST.jpg)

Figure 1. Artist rendering of a NASA design for a low-boom demonstrator aircraft, exhibiting a characteristic slender body and carefully shaped swept wings.

To keep costs down, this demonstration plane will be small and only include space for one pilot, with no passengers.  The smaller size and weight of the plane are expected to result in a sonic boom that will be slightly different from that of a full-size plane.  The most noticeable difference is that the demonstration plane’s boom will be shorter, which corresponds to less low-frequency energy.

A previous study assessed people’s reactions, in the laboratory, to simulated sonic booms from small and full-size planes.  No significant differences in annoyance were found for the booms from different size airplanes.  However, these booms were presented without including the secondary rattle sounds that would be expected in a house under the supersonic flight path.

The goal of the current study is to extend this assessment to include indoor window rattle sounds that are predicted to occur when a supersonic aircraft flies over a house.  Shown in Fig. 2, the NASA Langley indoor sonic boom simulator that was used for this test reproduces realistic sonic booms at the outside of a small structure, built to model a corner room of a house.  The sonic booms transmit to the inside of the room that is furnished to resemble a living room, which helps the subjects imagine that they are at home.  Window rattle sounds are played back through a small speaker below the window inside the room.  Thirty-two volunteers from the community rated the sonic booms on a scale ranging from “Not at all annoying” to “Extremely annoying”.  The ratings for 270 sonic boom and rattle combinations were averaged for each boom to obtain an estimate of the general public’s reactions to the sounds.

(Loubeau_IER.jpg)

Figure 2. Inside of NASA Langley’s indoor sonic boom simulator.
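As a sketch of how such ratings reduce to per-stimulus estimates, the toy example below averages invented ratings across subjects for each boom-and-rattle combination. The numeric 0–10 scale and all data values here are assumptions for illustration; the actual test used a labeled scale from “Not at all annoying” to “Extremely annoying”.

```python
from statistics import mean
from collections import defaultdict

# (subject_id, stimulus_id, rating) tuples -- invented example data,
# standing in for the 32 subjects x 270 boom/rattle combinations
ratings = [
    (1, "boom_A+rattle", 6.0),
    (2, "boom_A+rattle", 7.0),
    (1, "boom_B+rattle", 3.0),
    (2, "boom_B+rattle", 4.0),
]

# Group ratings by stimulus, then average across subjects
by_stimulus = defaultdict(list)
for subject, stimulus, rating in ratings:
    by_stimulus[stimulus].append(rating)

mean_annoyance = {s: mean(r) for s, r in by_stimulus.items()}
print(mean_annoyance)  # {'boom_A+rattle': 6.5, 'boom_B+rattle': 3.5}
```

The per-stimulus means are then compared across aircraft sizes at matched loudness levels.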

The analysis shows that aircraft size is still not significant when realistic window rattles are included in the simulated indoor sound field.  Hence a boom from a demonstration plane is predicted to result in approximately the same level of annoyance as a full-size plane’s boom, as long as they are of the same loudness level.  This further confirms the viability of plans to use the demonstrator for community studies.  While this analysis is promising, additional calculations would be needed to confirm the conclusions for a variety of house types.

5aPA – A Robust Smartphone Based Multi-Channel Dynamic-Range Audio Compression for Hearing Aids

Yiya Hao – yxh133130@utdallas.edu
Ziyan Zou – ziyan.zou@utdallas.edu
Dr. Issa M S Panahi – imp015000@utdallas.edu

Statistical Signal Processing Laboratory (SSPRL)
The University of Texas at Dallas
800 W Campbell Road, Richardson, TX 75080, USA

Popular Version of Paper 5aPA, “A Robust Smartphone Based Multi-Channel Dynamic-Range Audio Compression for Hearing Aids”
Presented Friday morning, May 11, 2018, 10:15 – 10:30 AM, GREENWAY J
175th ASA Meeting, Minneapolis

Records from the National Institute on Deafness and Other Communication Disorders (NIDCD) indicate that nearly 15% of adults (37 million) aged 18 and over in the United States report some kind of hearing loss. Worldwide, 360 million people suffer from hearing loss.

Hearing impairment degrades perception of speech and audio signals because of elevated, frequency-dependent audible threshold levels. Hearing aid devices (HADs) apply prescription gains and dynamic-range compression to improve users’ audibility without raising the sound loudness to uncomfortable levels. Multi-channel dynamic-range compression enhances the quality and intelligibility of the audio output by applying different compression parameters, such as compression ratio (CR), attack time (AT), and release time (RT), to each frequency band.
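To make those parameters concrete, here is a minimal single-channel sketch in Python: a static gain curve set by a threshold and compression ratio, plus one-pole attack/release smoothing. The threshold, ratio, and time constants are illustrative values, not the paper’s; a nine-channel design would run one such compressor per frequency band, each with its own CR, AT, and RT.

```python
import math

def compressor_gain_db(level_db, threshold_db=-30.0, ratio=3.0):
    """Static compression curve: above threshold, output level grows
    1/ratio dB per input dB; below threshold, gain is 0 dB."""
    if level_db <= threshold_db:
        return 0.0
    compressed = threshold_db + (level_db - threshold_db) / ratio
    return compressed - level_db  # negative value = attenuation in dB

def smooth_gain(target_gains_db, fs=16000, attack_ms=5.0, release_ms=50.0):
    """One-pole smoothing of the gain: fast attack when attenuation
    increases, slow release when it relaxes back toward 0 dB."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    out, g = [], 0.0
    for target in target_gains_db:
        a = a_att if target < g else a_rel
        g = a * g + (1.0 - a) * target
        out.append(g)
    return out

# A -10 dB input is 20 dB above threshold; with ratio 3 it keeps only
# 20/3 dB of that overshoot, so the gain is about -13.33 dB
print(compressor_gain_db(-10.0))
```

The same two-stage structure (static curve, then time smoothing) underlies most dynamic-range compressors.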

Increasing the number of compression channels can yield a more comfortable audio output when appropriate parameters are defined for each channel. However, more channels increase the computational complexity of the multi-channel compression algorithm, limiting its application on some HADs. In this paper, we propose a nine-channel dynamic-range compression (DRC) with an optimized structure capable of running in real time on smartphones and other portable digital platforms, and we present test results showing the performance of the proposed method. Fig. 1 shows the block diagram of the proposed method, and Fig. 2 shows the block diagram of the compressor.

Fig.1. Block Diagram of 9-Channel Dynamic-Range Audio Compression

Fig.2. Block Diagram of Compressor

Several experiments were performed: processing-time measurements of the real-time implementation of the proposed method on an Android smartphone, plus objective and subjective evaluations. A commercial audio compressor and limiter provided by Hotto Engineering [1], running on a laptop, was used for comparison. The proposed method ran on a Google Pixel smartphone with operating system 6.0.1; the sampling rate was set to 16 kHz and the frame size to 10 ms.
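Those settings imply a hard real-time budget: at 16 kHz with 10 ms frames, each frame holds 160 samples, and all per-frame processing must finish in under 10 ms on average. A small, hypothetical Python sketch of that check (the per-frame routine here is a placeholder, not the actual DRC):

```python
import time

fs_hz = 16_000            # sampling rate used in the experiments
frame_ms = 10.0           # frame size used in the experiments
frame_len = int(fs_hz * frame_ms / 1000)  # 160 samples per frame

def runs_in_real_time(process_frame, n_frames=100):
    """Time a per-frame routine; real-time operation requires the
    average per-frame cost to stay below the frame duration."""
    frame = [0.0] * frame_len
    t0 = time.perf_counter()
    for _ in range(n_frames):
        process_frame(frame)
    avg_ms = (time.perf_counter() - t0) / n_frames * 1000.0
    return avg_ms < frame_ms

# Placeholder "processing": simple per-sample scaling
print(frame_len, runs_in_real_time(lambda f: [x * 0.5 for x in f]))
```

The reported round-trip latency adds the audio input/output path on top of this algorithmic budget.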

Sentences from the Hearing in Noise Test (HINT) database, at a 16 kHz sampling rate, were used as test material. The first experiment measured processing time on the smartphone; two quantities were measured, round-trip latency and algorithm processing time. A Larsen test was used to measure the round-trip latency [2]; the test setup and the average processing-time results are shown in Fig. 3. Perceptual evaluation of speech quality (PESQ) [3] and short-time objective intelligibility (STOI) [4] were used to assess the objective quality and intelligibility of the proposed nine-channel DRC.

The results can be found in Fig. 4. Subjective tests, including a mean opinion score (MOS) test [5] and a word recognition (WR) test, were also conducted; Fig. 5 shows the results. Based on these results, the proposed nine-channel DRC runs efficiently on the smartphone and provides good quality and intelligibility.

Fig.3. Processing Time Measurements and Results

Fig.4. Objective evaluation results of speech quality and intelligibility.

Fig.5. Subjective evaluation results of speech quality and intelligibility.

Based on the results, the proposed nine-channel dynamic-range audio compression can run on smartphones while providing good quality and intelligibility. All of its parameters can be preset from an individual’s audiogram. With the proposed compression, multi-channel DRC is no longer limited to costly dedicated hardware such as hearing aids or laptops. The proposed method also provides a portable audio framework that is not limited to the current version of the DRC but can be extended or upgraded for further research.

Please refer to our lab website http://www.utdallas.edu/ssprl/hearing-aid-project/ for video demos; the sample audio files are attached below.

Audio files:

Unprocessed_MaleSpeech.wav

Unprocessed_FemaleSpeech.wav

Unprocessed_Song.wav

Processed_MaleSpeech.wav

Processed_FemaleSpeech.wav

Processed_Song.wav

Key References:

  • Hotto Engineering, 2018. [Online]. Available: http://www.hotto.de/
  • Android audio latency measurements, 2018. [Online]. Available: https://source.android.com/devices/audio/latency_measurements
  • Rix, A. W., Beerends, J. G., Hollier, M. P., Hekstra, A. P., “Perceptual evaluation of speech quality (PESQ) – a new method for speech quality assessment of telephone networks and codecs,” IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP), 2, pp. 749-752, May 2001.
  • Taal, C. H., Hendriks, R. C., Heusdens, R., Jensen, J., “An algorithm for intelligibility prediction of time-frequency weighted noisy speech,” IEEE Trans. Audio, Speech, Lang. Process., 19(7), pp. 2125-2136, 2011.
  • Streijl, R. C., Winkler, S., Hands, D. S., “Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives,” Multimedia Systems, 22(2), pp. 213-227, 2016.

*This work was supported by the National Institute of the Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health (NIH) under the grant number 5R01DC015430-02. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors are with the Statistical Signal Processing Research Laboratory (SSPRL), Department of Electrical and Computer Engineering, The University of Texas at Dallas.

3aPA7 – Moving and sorting living cells with sound and light

Gabriel Dumy – gabriel.dumy@espci.fr
Mauricio Hoyos – mauricio.hoyos@espci.fr
Jean-Luc Aider – jean-luc.aider@espci.fr
ESPCI Paris – PMMH Lab
10 rue Vauquelin
Paris, 75005, FRANCE

Popular version of paper 3aPA7, “Investigation on a novel photoacoustofluidic effect”

Presented Wednesday morning, December 6, 2017, 11:00-11:15 AM, Balcony L

174th ASA Meeting, New Orleans

Amongst the various ways of manipulating suspensions, acoustic levitation is one of the most practical, yet it is little known to the public. Allowing contactless concentration of microscopic bodies (from particles to living cells) in fluids (whether air, water, blood…), this technique requires only a small amount of power and material. It is thus smaller and less power-consuming than other technologies using magnetic or electric fields, for instance, and does not require any preliminary tagging.

Acoustic levitation occurs when using standing ultrasonic waves trapped between two reflecting walls. If the ultrasonic wavelength λac is matched to the distance between the two walls (the distance has to be an integer number of half wavelengths), then the acoustic pressure field forces the particles or cells to move toward the region where the acoustic pressure is minimal (this region is called a pressure node) [1]. Once the particles or cells have reached the pressure node, they can be kept in so-called “acoustic levitation” as long as needed. They are literally trapped in an “acoustic tweezer”. Using this method, it is easy to force cells or particles to create large clusters or aggregates that can be kept in acoustic levitation as long as the ultrasonic field is on.
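The matching condition can be made concrete with a short Python sketch: the cavity height must equal an integer number of half wavelengths, which fixes the resonance frequencies and the heights of the pressure nodes. The cavity height and sound speed below are illustrative values, not those of the actual device.

```python
def resonance_frequencies(gap_m, c=1480.0, n_max=3):
    """Standing-wave resonances of a cavity of height gap_m:
    gap = n * (c / f) / 2  =>  f_n = n * c / (2 * gap)."""
    return [n * c / (2.0 * gap_m) for n in range(1, n_max + 1)]

def pressure_node_heights(gap_m, n):
    """For the n-th resonance between rigid walls, pressure nodes
    (where particles collect) sit at odd quarter-wavelength heights."""
    lam = 2.0 * gap_m / n
    return [(2 * k + 1) * lam / 4.0 for k in range(n)]

# Illustrative 370 um water layer (sound speed ~1480 m/s): the first
# resonance is ~2 MHz, with a single pressure node at mid-height (~185 um)
print(resonance_frequencies(370e-6, n_max=1))
print(pressure_node_heights(370e-6, 1))
```

For n = 1 the single node sits exactly at mid-height, which is where the levitated aggregate forms.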

What happens if we illuminate the aforementioned aggregates of fluorescent particles or cells with a strong monochromatic (single-color) light wave? If this wave is absorbed by the levitating objects, then the previously very stable aggregate explodes.

We can observe that the particles are ejected at great speed from the periphery of the illuminated aggregate. But they are still kept in acoustic levitation, which is not affected by the introduction of light.

We determined that the key parameter is the absorption of light by the levitating objects because the explosions happened even with non-fluorescent particles. Moreover, this phenomenon exhibits a strong coupling between light and sound, as it needs the two sources of energy to be present at the same time to occur. If the particles are not in acoustic levitation, on the bottom of the cavity or floating in the suspending medium, even a very strong light does not move them. Without the adequate illumination, we only observe a classical acoustic aggregation process.

Using this light absorption property together with acoustic levitation opens the way to more complex and challenging experiments, like advanced manipulation of micro-objects in acoustic levitation or fast and highly selective sorting of mixed suspensions, since we can discriminate particles not only by their mechanical properties but also by their optical ones.

We did preliminary experiments with living cells. We observed that human red blood cells (RBCs), which strongly absorb blue light, could be easily manipulated by both sound and light. We were able to break up RBC aggregates very quickly. This new effect coupling acoustics and light suggests entirely new perspectives for living-cell manipulation and sorting, like cell washing (removing unwanted cells from the target cells).  Indeed, most living cells absorb light at different wavelengths and can already be manipulated using acoustic fields. This discovery should allow very selective manipulation and/or sorting of living cells in a very simple and easy way, using a low-cost setup.

Figure 1. Illustration of the acoustic manipulation of suspensions. A suspension is first focused under the influence of the vertical acoustic pressure field, shown in red (a and b). Once in the pressure node, the suspension is radially aggregated (c) by secondary acoustic forces [2]. In (d), when we illuminate the stable aggregate with an adequate wavelength, it explodes laterally.

Figure 2. (videos missing): Explosion (red_explosion) of a previously formed aggregate of 1.6 µm red-fluorescent polystyrene beads by a green light. Explosion (green_explosion) of an aggregate of 1.7 µm green-fluorescent polystyrene beads by a blue light.

Figure 3 (videos missing): Illustration of the separation potential of the phenomenon. We take an aggregate (a) that is a mix of two kinds of polystyrene particles with the same diameter, one absorbing blue light and fluorescing green (b), the other absorbing green light and fluorescing red (c), which we cannot separate by acoustics alone. We expose this aggregate to blue light for 10 seconds. The bottom row shows the effect of this light: we effectively separated the blue-absorbing particles (e) from the green-absorbing ones (f).

Movie missing – describes the observation from the top of the regular acoustic aggregation process of a suspension of 1.6µm polystyrene beads.

[1] K. Yosioka and Y. Kawasima, “Acoustic radiation pressure on a compressible sphere,” Acustica, vol. 5, pp. 167–173, 1955.

[2] G. Whitworth, M. A. Grundy, and W. T. Coakley, “Transport and harvesting of suspended particles using modulated ultrasound,” Ultrasonics, vol. 29, pp. 439–444, 1991.

3aPA3 – Standing Surface Acoustic Wave Enabled Acoustofluidics for Bioparticle Manipulation

Xiaoyun Ding- Xiaoyun.Ding@Colorado.edu
Department of Mechanical engineering
University of Colorado at Boulder
Boulder, CO 80309

Popular version of paper 3aPA3, “Standing Surface Acoustic Wave Enabled Acoustofluidics for Bioparticle Manipulation”
Presented Wednesday, December 06, 2017, 9:30-10:00 AM, Balcony L
174th ASA meeting, New Orleans

Techniques that can noninvasively and dexterously manipulate cells and other bioparticles (such as organisms, DNAs, proteins, and viruses) in a compact system are invaluable for many applications in life sciences and medicine. Historically, optical tweezers have been the primary tool used in the scientific community for bioparticle manipulation. Despite their remarkable capability and success, optical tweezers have notable limitations, such as complex and bulky instrumentation, high equipment costs, and low throughput. To overcome the limitations of optical tweezers and other particle manipulation methods, we have developed a series of acoustic-based, on-chip devices (Figure to the left) called acoustic tweezers that can manipulate cells and other bioparticles using sound waves in a microfluidic channel. Cell viability and proliferation assays were also conducted to confirm the non-invasiveness of our technique. The simple structure/setup of these acoustic tweezers can be integrated with a small radio-frequency power supply and basic electronics to function as a fully integrated, portable, and inexpensive cell-manipulation system. Along with my colleagues, I have demonstrated that our acoustic tweezers can achieve the following functions: 1) single cell/organism manipulation [1]; 2) high-efficiency cell separation [2]; and 3) multichannel cell sorting [3].

Acoustic tweezers based single cell/organism manipulation
The acoustic tweezers I developed were the first acoustic manipulation method able to trap and dexterously manipulate single microparticles, cells, and entire organisms (i.e., Caenorhabditis elegans) along a programmed route in two dimensions within a microfluidic chip [1]. We demonstrated that the acoustic tweezers can move a 10-µm single polystyrene bead to write the word “PNAS” and a bovine red blood cell to trace the letters “PSU” (Figure to the right). It was also the first technology capable of touchless trapping and manipulation of Caenorhabditis elegans, a one-millimeter-long roundworm that is one of the most important model systems for studying diseases and development in humans. To the best of our knowledge, this was the first demonstration of non-invasive, non-contact manipulation of C. elegans, a function that is challenging for optical tweezers.

Acoustic tweezers based high-efficiency cell separation
Simple and high-efficiency cell separation techniques are fundamentally important in biological and chemical analyses such as cancer cell detection, drug screening, and tissue engineering. In particular, the ability to separate cancer cells (such as leukaemia cells) from human blood can be invaluable for cancer biology, diagnostics, and therapeutics. We have developed a standing surface acoustic wave based cell separation technique that can achieve high-efficiency (>95%) separation of human leukemia cells (HL-60) from human blood cells, and high-efficiency separation of breast cancer cells from human blood, based on their size difference (Figure to the right). This method is simple and versatile, capable of separating virtually all kinds of cells (regardless of charge/polarization or optical properties) with high separation efficiency and low power consumption.
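Why does size separate the cells? In the standard textbook picture, the primary acoustic radiation force on a small sphere scales with its volume (radius cubed), while the opposing Stokes drag scales only linearly with radius, so the acoustic migration speed scales with radius squared. The Python sketch below illustrates that scaling; the lumped force scale and viscosity are illustrative assumptions, not values from the paper.

```python
import math

def migration_speed(radius_m, force_scale=1.0, viscosity=1e-3):
    """Acoustophoretic migration speed: radiation force ~ R^3 balanced
    against Stokes drag F = 6*pi*eta*R*v, so v ~ R^2. force_scale lumps
    the acoustic field and contrast-factor terms (illustrative only)."""
    force = force_scale * radius_m ** 3                 # force ~ volume
    return force / (6 * math.pi * viscosity * radius_m)  # v = F / (6*pi*eta*R)

# A cell with twice the radius migrates four times faster, so larger
# cells reach the pressure node first and exit through a different outlet
ratio = migration_speed(15e-6) / migration_speed(7.5e-6)
print(round(ratio, 6))  # 4.0
```

This quadratic sensitivity to radius is what lets a modest size difference translate into a clean spatial separation downstream.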

Acoustic tweezers based multichannel cell sorting
Cell sorting is essential for many fundamental cell studies, cancer research, clinical medicine, and transplantation immunology. I developed an acoustic-based method that can precisely sort cells into five separate outlets (Figure to the right), rendering it particularly desirable for multi-type cell sorting [3]. Our device requires small sample volumes (~100 μl), making it an ideal tool for research labs and point-of-care diagnostics. Furthermore, it can be conveniently integrated with a small power supply, a fluorescent detection module, and a high-speed electrical feedback module to function as a fully integrated, portable, inexpensive, multi-color, miniature fluorescence-activated cell sorting (μFACS) system.

 

References:

  1. Xiaoyun Ding, et al., “On-Chip Manipulation of Single Microparticles, Cells, and Organisms Using Surface Acoustic Waves,” Proceedings of the National Academy of Sciences (PNAS), 2012, 109, 11105-11109.
  2. Xiaoyun Ding, et al., “Cell separation using tilted-angle standing surface acoustic waves,” Proceedings of the National Academy of Sciences (PNAS), 2014, 111, 12992-12997.
  3. Xiaoyun Ding, et al., “Standing surface acoustic wave (SSAW) based multichannel cell sorting,” Lab on a Chip, 2012, 12, 4228-4231. (COVER ARTICLE)
  4. Xiaoyun Ding, et al., Lab on a Chip, 2012, 12, 2491-2497. (COVER ARTICLE)

2aPA8 – Taming Tornadoes: Controlled Trapping and Rotation with Acoustic Vortices

Asier Marzo – amarzo@hotmail.com
Mihai Caleap
Bruce Drinkwater

Bristol University
Senate House, Tyndall Ave,
Bristol, United Kingdom

Popular version of paper 2aPA8, “Taming tornadoes: Controlling orbits inside acoustic vortex traps”
Presented Tuesday morning, May 24, 2016, 11:05 AM, Salon H
171st ASA Meeting, Salt Lake City

Tractor beams are mysterious beams that can attract objects towards the source of the emission (Figure 1). These beams have captured the attention of both scientists and sci-fi fans. For instance, the tractor beam is an iconic device in Star Wars and Star Trek, where big spaceships use it to trap and capture smaller objects.

Figure-01

Figure 1. A sonic tractor beam working in air.

In the scientific community, tractor beams have been studied theoretically for decades, and in 2014 a tractor beam made with light was realized [1]. It used the energy of photons bouncing off a microsphere to keep it trapped laterally, while heating the back of the sphere with different light patterns to pull it towards the laser source. The sphere, 50 micrometres in diameter, was made of glass and coated with gold.

A tractor beam made with light can only manipulate very small particles made of specific materials. A tractor beam that uses mechanical waves (i.e., sound or ultrasound) would enable the trapping of a much wider range of particle sizes and allow almost any combination of particle and host fluid materials, for example drug delivery agents within the human body.

Recently, it has been proven experimentally that a vortex beam can act as a tractor beam both in air [2] and in water [3]. A vortex beam (such as a first-order Bessel beam) is analogous to a tornado of sound that is hollow in the middle and spirals about a central axis; the particles get trapped in the calm eye of the tornado (Figure 2).

Figure-02

Figure 2. Intensity iso-surface of an acoustic vortex. 54 ultrasonic speakers emitting at 40 kHz, arranged in a hemisphere (see [2] for full details), create an acoustic vortex that traps the particle in the middle.
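One standard way to make such a vortex with a speaker array is to delay each element so that its phase equals the topological charge times its azimuthal angle around the axis; the wavefront then spirals, and the pressure cancels on the axis to form the calm “eye”. The sketch below is a simplified illustration for a flat ring of speakers (the actual array is hemispherical, and focusing terms are omitted).

```python
import math

def vortex_phases(n_speakers=54, charge=1):
    """Phase delay for each transducer on a ring to emit an acoustic
    vortex: phase = charge * azimuthal angle (mod 2*pi). For charge 1
    the phase winds once around the ring, creating the hollow core."""
    return [(charge * 2 * math.pi * i / n_speakers) % (2 * math.pi)
            for i in range(n_speakers)]

phases = vortex_phases()
# The element halfway around the ring is driven half a cycle (pi) late
print(len(phases), round(phases[27], 4))
```

Reversing the sign of the charge reverses the handedness of the vortex, which is the switch exploited later in the paper.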

The problem is that only very small particles are stably trapped inside the vortex. As the particles get bigger, they start to spin and orbit until they are ejected (Figure 3). As in a tornado, only the small particles remain within the vortex, whereas the larger ones get ejected.

Figure-03

Figure 3. Particle behaviour depending on its size: a small particle is trapped (a), a medium particle orbits (b), and a big particle gets ejected (c).

Here we show that, contrary to a tornado, we can change the direction of an acoustic vortex thousands of times per second. In our paper, we prove that by rapidly switching the direction of the acoustic vortex it is possible to produce stable trapping of particles of various sizes. Furthermore, by adjusting the proportion of time that each vortex direction is emitted, the spinning speed of the particle can be controlled (Figure 4).

Figure-04

Figure 4. Taming the vortex: a) the vortex always rotates in the same direction and this rotation is transferred to the particle; b) the vortex switches direction and thus the angular momentum is completely or partially cancelled, providing rotational control.
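The duty-cycle control described above can be sketched in one line: when the handedness switches much faster than the particle can respond, the particle feels the time-averaged torque, so the net spin scales with the imbalance between the time spent in each direction. This is an illustrative model of the idea, not the paper’s quantitative result.

```python
def net_spin_fraction(t_ccw_ms, t_cw_ms):
    """Rapidly alternating vortex handedness: net spin (as a fraction
    of the one-direction rate) follows the duty-cycle imbalance."""
    total = t_ccw_ms + t_cw_ms
    return (t_ccw_ms - t_cw_ms) / total

print(net_spin_fraction(0.5, 0.5))    # 0.0 -> stable trap, no net rotation
print(net_spin_fraction(0.75, 0.25))  # 0.5 -> controlled spin at half rate
```

Equal dwell times cancel the angular momentum entirely, which is what stabilizes the larger particles.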

The ability to levitate particles such as liquids, crystals, or even living cells inside acoustic vortices, and to rotate them in a controlled way, enables new possibilities and processes for a variety of disciplines.

References

  1. Shvedov, V., Davoyan, A. R., Hnatovsky, C., Engheta, N., & Krolikowski, W. (2014). A long-range polarization-controlled optical tractor beam. Nature Photonics, 8(11), 846-850.
  2. Marzo, A., Seah, S. A., Drinkwater, B. W., Sahoo, D. R., Long, B., & Subramanian, S. (2015). Holographic acoustic elements for manipulation of levitated objects. Nature communications, 6.
  3. Baresch, D., Thomas, J. L., & Marchiano, R. (2016). Observation of a single-beam gradient force acoustical trap for elastic particles: acoustical tweezers. Physical Review Letters, 116(2), 024301.

4aPA4 – Acoustic multi-pole source inversions of volcano infrasound

Keehoon Kim – kkim32@alaska.edu
University of Alaska Fairbanks
Wilson Infrasound Observatory, Alaska Volcano Observatory, Geophysical Institute
903 Koyukuk Drive, Fairbanks, Alaska 99775

David Fee – dfee1@alaska.edu
University of Alaska Fairbanks
Wilson Infrasound Observatory, Alaska Volcano Observatory, Geophysical Institute
903 Koyukuk Drive, Fairbanks, Alaska 99775

Akihiko Yokoo – yokoo@aso.vgs.kyoto-u.ac.jp
Kyoto University
Institute for Geothermal Sciences
Kumamoto, Japan

Jonathan M. Lees – jonathan.lees@unc.edu
University of North Carolina Chapel Hill
Department of Geological Sciences
104 South Road, Chapel Hill, North Carolina 27599

Mario Ruiz – mruiz@igepn.edu.ec
Escuela Politecnica Nacional
Instituto Geofisico
Quito, Ecuador

Popular version of paper 4aPA4, “Acoustic multipole source inversions of volcano infrasound”
Presented Thursday morning, May 21, 2015, at 9:30 AM in room Kings 1
169th ASA Meeting, Pittsburgh

Volcano infrasound
Volcanoes are outstanding natural sources of infrasound (low-frequency acoustic waves below 20 Hz). In the last few decades, local infrasound networks have become an essential part of geophysical monitoring systems for volcanic activity. Unlike seismic networks, which are dedicated to monitoring subsurface activity (e.g., magma or fluid transport), infrasound monitoring facilitates detecting and characterizing eruption activity at the earth’s surface. Figure 1a shows Sakurajima Volcano in southern Japan and an infrasound network deployed in July 2013. Figure 1b is an image of a typical explosive eruption during the field experiment, which produces loud infrasound.

(Kim1) - Sakurajima Volcano

Figure 1. a) A satellite image of Sakurajima Volcano, adapted from Kim and Lees (2014). Five stand-alone infrasound sensors were deployed around Showa Crater in July 2013, indicated by inverted triangles. b) An image of a typical explosive eruption observed during the field campaign.

Source of volcano infrasound
One of the major sources of volcano infrasound is a volume change in the atmosphere. Mass discharge from volcanic eruptions displaces the atmosphere near and around the vent, and this displacement propagates into the atmosphere as acoustic waves. Infrasound signals can, therefore, represent a time history of the atmospheric volume change during eruptions. Volume flux inferred from infrasound data can be further converted into mass eruption rate using the density of the erupting mixture. Mass eruption rate is a critical parameter for forecasting ash-cloud dispersal during eruptions and consequently important for aviation safety. One of the problems associated with volume flux estimation is that observed infrasound signals can be affected by propagation path effects between the source and receivers. Hence, these path effects must be appropriately accounted for and removed from the signals in order to obtain accurate source parameters.

Infrasound propagation modeling
(Kim2) - vent of Sakurajima Volcano

Figure 2. a) Sound pressure level in dB relative to the peak pressure at the source position. b) Variation of infrasound waveforms across the network caused by propagation path effects.

Figure 2 shows the results of numerical modeling of sound propagation from the vent of Sakurajima Volcano. The sound propagation is simulated by solving the acoustic wave equation with a finite-difference time-domain method that takes volcanic topography into account. The synthetic wavefield is excited by a Gaussian-like source time function (with 1 Hz corner frequency) inserted at the center of Showa Crater (Figure 2a). A homogeneous atmosphere is assumed, since atmospheric heterogeneity should have limited influence at this local range (< 7 km). The numerical modeling demonstrates that both the amplitude and the waveform of infrasound are significantly affected by the local topography. In Figure 2a, the sound pressure level (SPL) relative to the source amplitude is calculated at each computational grid node on the ground surface. The SPL map indicates an asymmetric radiation pattern of acoustic energy: propagation paths to the northwest of Showa Crater are obstructed by the summit of the volcano (Minamidake), and as a result acoustic shadow zones are created northwest of the summit. The infrasound waveform also shows significant variation across the network. In Figure 2b, synthetic infrasound signals computed at the station positions (ARI – SVO) show bipolar pulses followed by oscillations in pressure, while the pressure time history at the source location exhibits only a positive unipolar pulse. This result indicates that oscillatory infrasound waveforms can be produced not only by source effects but also by propagation path effects. Hence, this waveform distortion must be considered in source parameter inversion.
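The finite-difference time-domain idea can be illustrated with a heavily reduced, one-dimensional Python sketch: pressure on a grid is stepped forward in time with a second-order update, and a Gaussian-like source time function is injected at one point. The grid spacing, sound speed, and source parameters here are illustrative; the actual modeling is 3-D and includes topography.

```python
import math

# Minimal 1-D FDTD solution of the acoustic wave equation p_tt = c^2 * p_xx
c = 340.0            # sound speed, m/s (illustrative)
dx = 10.0            # grid spacing, m
dt = 0.5 * dx / c    # time step satisfying the CFL stability condition
n = 400              # number of grid points

p_prev = [0.0] * n
p = [0.0] * n
src = n // 2         # source at the grid center ("crater" position)

for step in range(300):
    t = step * dt
    p_next = [0.0] * n
    for i in range(1, n - 1):
        # Second-order centered update: discrete Laplacian in space
        lap = p[i - 1] - 2 * p[i] + p[i + 1]
        p_next[i] = 2 * p[i] - p_prev[i] + (c * dt / dx) ** 2 * lap
    # Gaussian-like source time function injected at the source node
    p_next[src] += math.exp(-((t - 0.5) / 0.1) ** 2)
    p_prev, p = p, p_next

# In this homogeneous 1-D medium the pulse radiates symmetrically;
# topography in the real 3-D model breaks this symmetry (shadow zones)
print(max(abs(v) for v in p) > 0.0)  # True
```

Replacing the homogeneous medium with a topography-conforming 3-D grid is what produces the asymmetric radiation pattern and shadow zones seen in Figure 2a.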

Volume flux estimates
Because the wavelengths of volcano infrasound are usually longer than the dimension of the source region, the acoustic sources are typically treated as a monopole, a point source approximation of volume expansion or contraction. Infrasound data then represent the convolution of the volume flux history at the source with the response of the propagation medium, called the Green’s function. The volume flux history can be obtained by deconvolving the Green’s function from the data. The Green’s function can be obtained in two different ways: 3-D numerical modeling considering local topography (Case 1), or the analytic solution in a half-space neglecting volcanic topography (Case 2). The resulting volume flux histories for a selected infrasound event are compared in Figure 3. Case 1 results in a gradually decreasing volume flux curve, but Case 2 shows pronounced oscillation in volume flux. In Case 2, propagation path effects are not appropriately removed from the data, leading to misinterpretation of the source effect.
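One common way to perform such a deconvolution is spectral division stabilized with a water level, so that frequencies where the Green’s function is weak do not blow up the estimate. The Python sketch below demonstrates the idea on a synthetic, circular-convolution example with a made-up Green’s function (not the actual 3-D numerical one), using a plain DFT to stay self-contained.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def deconvolve(data, green, water=1e-3):
    """Recover the source (volume-flux) history by spectral division
    D(w)/G(w); the water level keeps the division stable where G is small."""
    D, G = dft(data), dft(green)
    floor = water * max(abs(g) for g in G)
    S = [d * g.conjugate() / max(abs(g) ** 2, floor ** 2) for d, g in zip(D, G)]
    return idft(S)

# Synthetic check: convolve a simple source pulse with a made-up Green's
# function, then deconvolve it back (circular convolution for simplicity)
n = 32
source = [0.0] * n; source[2], source[3] = 1.0, 0.5
green = [0.0] * n;  green[0], green[1], green[2] = 1.0, -0.6, 0.2
data = idft([d * g for d, g in zip(dft(source), dft(green))])
recovered = deconvolve(data, green)
print(abs(recovered[2] - 1.0) < 1e-6)  # True: the pulse is recovered
```

Using the wrong Green’s function in this division is exactly the Case 2 failure: the path effects left in the data masquerade as oscillations of the source.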

Summary
A proper Green’s function is critical for accurate estimation of the volume flux history. We obtained a reasonable volume flux history using the 3-D numerical Green’s function. In this study, only a simple source model (monopole) was considered for volcanic explosions. A more general representation can be obtained by a multipole expansion of acoustic sources. In our 169th ASA Meeting presentation, we will further discuss the source complexity of volcano infrasound, which requires the higher-order terms of the multipole series.

(Kim3)

Figure 3. Volume flux history inferred from infrasound data. In Case 1, the Green’s function is computed by 3-D numerical modeling considering volcanic topography. In Case 2, the analytic solution of the wave equation in a half-space is used, neglecting the topography.

References

Kim, K. and J. M. Lees (2014). Local Volcano Infrasound and Source Localization Investigated by 3D Simulation. Seismological Research Letters, 85, 1177-1186