4aPA – Using Sound Waves to Quantify Erupted Volumes and Directionality of Volcanic Explosions

Alexandra Iezzi – amiezzi@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

David Fee – dfee1@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

Popular version of paper 4aPA
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

Volcanic eruptions can produce serious hazards, including ash plumes, lava flows, pyroclastic flows, and lahars. Volcanic phenomena, especially explosions, produce a substantial amount of sound, particularly in the infrasound band (<20 Hz, below human hearing), that can be detected at both local and global distances using dedicated infrasound sensors. Recent research has focused on inverting infrasound data collected within a few kilometers of an explosion, which can provide robust estimates of the mass and volume of erupted material in near real time. While the backbone of local geophysical monitoring of volcanoes typically relies on seismometers, it can sometimes be difficult to determine whether a signal originates from the subsurface only or has become subaerial (i.e., erupting). Volcano infrasound recordings can be combined with seismic monitoring to help illuminate whether material is actually coming out of the volcano and therefore poses a potential threat to society.

This presentation aims to summarize results from many recent studies on acoustic source inversions for volcanoes, including a recent study by Iezzi et al. (in review) at Yasur volcano, Vanuatu. Yasur is easily accessible and has explosions every 1 to 4 minutes, making it a great place to study volcanic explosion mechanisms (Video 1).

Video 1 – Video of a typical explosion at Yasur volcano, Vanuatu.

Most volcano infrasound inversion studies assume that sound radiates equally in all directions. However, the potential for acoustic directionality from the volcano infrasound source mechanism is not well understood due to infrasound sensors usually being deployed only on Earth’s surface. In our study, we placed an infrasound sensor on a tethered balloon that was walked around the volcano to measure the acoustic wavefield above Earth’s surface and investigate possible acoustic directionality (Figure 1).

Figure 1 [file missing] – Image showing the aerostat on the ground prior to launch (left) and when tethered near the crater rim of Yasur (right).

Volcanoes typically have high topographic relief that can significantly distort the waveforms we record, even at distances of only a few kilometers. We can account for this effect by modeling the acoustic propagation over the topography (Video 2).

Video 2 – Video showing the pressure field that results from inputting a simple compressional source at the volcanic vent and propagating the wavefield over a model of topography. The red denotes positive pressure (compression) and blue denotes negative pressure (rarefaction). We note that all complexity past the first red band is due to topography.
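The kind of propagation modeling shown in Video 2 can be sketched with a minimal two-dimensional finite-difference time-domain (FDTD) simulation. This is an illustrative toy, not the study's code: the grid, the cone-shaped topography, and the source pulse are all assumed values.

```python
import numpy as np

# Toy 2-D FDTD simulation of sound propagating over idealized topography.
nx, nz = 200, 100          # grid cells (x = range, z = height)
dx = 10.0                  # cell size, m
c, rho = 340.0, 1.2        # sound speed (m/s) and air density (kg/m^3)
dt = 0.5 * dx / (c * np.sqrt(2.0))   # CFL-stable time step

# Cone-shaped "volcano" rising from the left edge of the domain.
x = np.arange(nx) * dx
terrain = np.maximum(0.0, 400.0 - 0.6 * x)        # terrain height, m
solid = np.zeros((nx, nz), dtype=bool)
for i in range(nx):
    solid[i, : int(terrain[i] / dx)] = True       # cells inside the ground

p = np.zeros((nx, nz))           # pressure (the red/blue field in Video 2)
vx = np.zeros((nx + 1, nz))      # horizontal particle velocity (cell faces)
vz = np.zeros((nx, nz + 1))      # vertical particle velocity (cell faces)

vent_i, vent_j = 2, int(terrain[2] / dx) + 2      # a couple of cells above the vent

for n in range(320):
    # Velocity update from the pressure gradient.
    vx[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    vz[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    # Rigid topography: zero the flow on every face of a ground cell.
    vx[:-1, :][solid] = 0.0
    vx[1:, :][solid] = 0.0
    vz[:, :-1][solid] = 0.0
    vz[:, 1:][solid] = 0.0
    # Pressure update from the velocity divergence.
    p -= rho * c**2 * dt / dx * (
        vx[1:, :] - vx[:-1, :] + vz[:, 1:] - vz[:, :-1]
    )
    # Simple compressional (Gaussian-pulse) source at the vent.
    p[vent_i, vent_j] += np.exp(-(((n * dt) - 0.3) / 0.08) ** 2)
```

The domain edges here are rigid, so late-time reflections are artifacts (real simulations use absorbing boundaries); the complexity that develops behind the first wavefront is the topographic scattering the caption describes.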

Once the effects of topography are constrained, we can assume that when we are very close to the source, all other complexity in the infrasound data is due to the acoustic source. This allows us to solve for the volume flow rate (potentially in real time). In addition, we can examine directionality for every explosion; a directional source may launch volcanic ejecta more often, and farther, in one direction than in others. This poses a great hazard to tourists and locals near the volcano, one that may be mitigated by studying the acoustic source from a safe distance using infrasound.
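As a rough sketch of the idea (not the study's actual inversion, which accounts for topography and possible directionality), one can treat the explosion as a compact acoustic monopole, for which the recorded pressure is proportional to the second time derivative of the erupted volume; integrating the inferred volume acceleration once gives the volume flow rate. The numbers and the geometric factor below are illustrative.

```python
import numpy as np

RHO_AIR = 1.2  # ambient air density, kg/m^3 (assumed)

def volume_flow_rate(pressure, dt, r, rho=RHO_AIR):
    """Invert an infrasound pressure trace p(t) recorded at range r (m) for
    the source volume flow rate, assuming a compact acoustic monopole:
        p(r, t) = rho / (4 * pi * r) * d2V/dt2(t - r/c).
    Solving for the volume acceleration and integrating once gives dV/dt."""
    v_accel = 4.0 * np.pi * r / rho * np.asarray(pressure)  # d2V/dt2, m^3/s^2
    return np.cumsum(v_accel) * dt                          # dV/dt, m^3/s

# Synthetic example: a 50 Pa Gaussian infrasound pulse recorded 400 m away.
dt = 0.01                                    # sample interval, s
t = np.arange(0.0, 6.0, dt)
p = 50.0 * np.exp(-((t - 3.0) / 0.3) ** 2)   # synthetic pressure trace, Pa
q = volume_flow_rate(p, dt, r=400.0)         # volume flow rate history, m^3/s
```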

4APP28 – Listening to music with bionic ears: Identification of musical instruments and genres by cochlear implant listeners

Ying Hsiao – ying_y_hsiao@rush.edu
Chad Walker
Megan Hebb
Kelly Brown
Jasper Oh
Stanley Sheft
Valeriy Shafiro – Valeriy_Shafiro@rush.edu
Department of Communication Disorders and Sciences
Rush University
600 S Paulina St
Chicago, IL 60612, USA

Kara Vasil
Aaron Moberly
Department of Otolaryngology – Head & Neck Surgery
Ohio State University Wexner Medical Center
410 W 10th Ave
Columbus, OH 43210, USA

Popular version of paper 4APP28
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

For many people, music is an integral part of everyday life. We hear it everywhere: cars, offices, hallways, elevators, restaurants, and, of course, concert halls and people's homes. It can often make our day more pleasant and enjoyable, but its ubiquity also makes it easy to take it for granted. But imagine if the music you heard around you sounded garbled and distorted. What if you could no longer tell apart different instruments that were being played, rhythms were no longer clear, and much of it sounded out of tune? This unfortunate experience is common for people with hearing loss who hear through cochlear implants, or CIs, the prosthetic devices that convert sounds around a person to electrical signals that are then delivered directly to the auditory nerve, bypassing the natural sensory organ of hearing – the inner ear. Although CIs have been highly effective in improving speech perception for people with severe to profound hearing loss, music perception has remained difficult and frustrating for people with CIs.

Audio 1.mp4, “Music processed with the cochlear implant simulator, AngelSim by Emily Shannon Fu Foundation”

Audio 2.mp4, “Original version [“Take Five” by Francesco Muliedda is licensed under CC BY-NC-SA]”

To find out how well CI listeners identify musical instruments and music genres, we used a version of a previously developed test, the Appreciation of Music in Cochlear Implantees (AMICI). Unlike other tests that examine music perception in CI listeners using simple-structured musical stimuli to pinpoint specific perceptual challenges, AMICI takes a more synthetic approach and uses real-world musical pieces, which are acoustically more complex. Our findings confirmed that CI listeners indeed have considerable deficits in music perception. Participants with CIs correctly identified musical instruments only 69% of the time and musical genres 56% of the time, whereas their age-matched normal-hearing peers identified instruments and genres 99% and 96% correctly, respectively. The easiest instrument for CI listeners was the drums, correctly identified 98% of the time. In contrast, the most difficult was the flute, with only 18% identification accuracy; the flute was confused with string instruments 77% of the time. Among the genres, classical music was the easiest to identify, reaching 83% correct, while Latin and rock/pop music were the most difficult (41% correct). Remarkably, CI listeners' ability to identify musical instruments and genres correlated with their ability to identify common environmental sounds (such as a dog barking or a car horn) and spoken sentences in noise. These results provide a foundation for future work on rehabilitation of music perception for CI listeners, so that music may sound pleasing and enjoyable to them once again, with possible additional benefits for speech and environmental sound perception.

1aSP1 – From Paper Cranes to New Tech Gains: Frequency Tuning through Origami Folding

Kazuko Fuchi – kfuchi1@udayton.edu
University of Dayton Research Institute
300 College Park, Dayton, OH 45469

Andrew Gillman – andrew.gillman.1.ctr@us.af.mil
Alexander Pankonien – alexander.pankonien.1@us.af.mil
Philip Buskohl – philip.buskohl.1@us.af.mil
Air Force Research Laboratory
Wright-Patterson Air Force Base, OH 45433

Deanna Sessions – deanna.sessions@psu.edu
Gregory Huff – ghuff@psu.edu
Department of Electrical Engineering and Computer Science
Penn State University
207 Electrical Engineering West, University Park, PA 16802

Popular version of lecture: 1aSP1 Topology optimization of origami-inspired reconfigurable frequency selective surfaces
Presented Monday morning, 9:00 AM – 11:15 AM, May 13, 2019
177th ASA Meeting, Louisville, Kentucky

The use of mathematics and computer algorithms by origami artists has led to a renaissance of the art of origami in recent decades. Combining scientific tools with their imagination and artistic skills, these artists discover intricate origami designs that inspire expansive possibilities of the art form.

The intrigue of realizing incredibly complex creatures and exquisite patterns from a piece of paper has captured the attention of the scientific and engineering communities. Our research team and others in the engineering community wanted to make use of the language of origami, which gives us a natural way to navigate complex geometric transformations through 2D (flat), 3D (folded) and 4D (folding motion) spaces. This beautiful language has enabled numerous innovative technologies including foldable and deployable satellites, self-folding medical devices and shape-changing robots.

Origami, as it turns out, is also useful for controlling how sound and radio waves travel. An electromagnetic device called an origami frequency selective surface for radio waves can be created by laser-scoring a plastic sheet, folding it into a repeating pattern called a periodic tessellation, and printing electrically conductive copper decorations aligned with the pattern on the sheet (Figure 1). We have shown that this origami-folded device can be used as a filter to block unwanted signals at a specific operating frequency. We can fold and unfold the device to tune the operating frequency, or we can design a device that can be folded, unfolded, bent and twisted into a complex surface shape without changing the operating frequency, all depending on the design of the folding and printing patterns. These findings encourage more research into origami-based designs to accomplish demanding goals for radar, communication and sensor technologies.
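A toy model hints at why folding tunes the filter: each copper element resonates when its effective length is about half a wavelength, and folding changes that effective length. The half-wave relation is standard antenna physics, but the element lengths below are made up for illustration and are not taken from the fabricated prototype.

```python
C = 3.0e8  # speed of light in vacuum, m/s

def dipole_resonance_hz(effective_length_m):
    """Toy model: a dipole-like copper element resonates when its effective
    length is about half a wavelength, so f = c / (2 * L_eff)."""
    return C / (2.0 * effective_length_m)

# Folding the sheet foreshortens each element's effective (projected) length,
# shifting the frequency the surface blocks. Lengths here are hypothetical.
f_flat = dipole_resonance_hz(0.015)    # 15 mm element when flat
f_folded = dipole_resonance_hz(0.012)  # foreshortened to 12 mm when folded
```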

Figure 1: Fabricated prototype of origami folded frequency selective surface made of a folded plastic sheet and copper prints, ready to be tested in an anechoic chamber – a room padded with radio-wave-absorbing foam pyramids.

Origami can be used to choreograph complex geometric rearrangements of the active components. In the case of our frequency selective surface, the folded plastic sheet acts as the medium that hosts the electrically active copper prints. As the sheet is folded, the copper prints fold and move relative to each other in a controlled manner. We used our theoretical knowledge along with insight gained from computer simulations to understand how the rearrangements impact the physics of the device’s working mechanism and to decide what designs to fabricate and test in the real world. In this, we attempt to imitate the origami artist’s magical creation of awe-inspiring art in the engineering domain.

1aSAb4 – Seismic isolation in Advanced Virgo gravitational wave detector

Valerio Boschi – valerio.boschi@ego-gw.it
European Gravitational Observatory
Istituto Nazionale di Fisica Nucleare
Sezione di Pisa
Largo B. Pontecorvo, 3
56127 Pisa, Italy

Popular version of paper 1aSAb4
Presented Monday morning, May 13th, 2019
177th ASA Meeting, Louisville, KY

Imagine dropping a glass of water into the ocean. The global level of all the seas on Earth will rise by an extremely small amount. A rough estimate gives an amazingly tiny displacement: about 10^-18 m! This length is equivalent to the sensitivity of current gravitational wave (GW) detectors.
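The back-of-the-envelope estimate can be checked directly (using illustrative round numbers, not precise oceanographic values):

```python
# Spreading one glass of water over the surface of all the oceans.
glass_volume = 2.5e-4   # a 250 ml glass of water, in m^3
ocean_area = 3.6e14     # approximate surface area of Earth's oceans, in m^2
sea_level_rise = glass_volume / ocean_area   # ~7e-19 m: of order 10^-18 m
```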

GWs are ripples in space-time, produced by the collapse of extremely dense astrophysical objects such as black holes or neutron stars. These signals induce tiny variations in length in matter (less than 10^-18 m at 100 Hz) that can be detected only by the world's most precise rulers: interferometers.

Second-generation gravitational wave interferometers, such as the Advanced Virgo experiment in Cascina, Italy (shown in Figure 1) and the two US-based Advanced LIGO detectors, have been collecting GW signals since 2015, opening the doors of so-called multi-messenger astronomy.

Figure 1 – Aerial view of Advanced Virgo (EGO/Virgo collaboration)

In order to reach the required level of sensitivity, many disturbances need to be strongly reduced in current interferometers. Seismic noise, if not attenuated, would be the main limitation of current detectors. In fact, even in the absence of local or remote earthquakes, the ground moves by millimeters in the frequency region between 0.3 and 0.4 Hz. This motion, called microseism, is caused by the continuous excitation of the Earth's crust by sea waves.

In this conference contribution we will present an overview of the seismic isolation systems used in the Advanced Virgo GW interferometer. We will concentrate on the so-called super-attenuator, the seismic isolator used for all of the detector's main optical components, shown in Figure 2. This complex mechanical device provides more than 12 orders of magnitude of attenuation above a few Hz. We will also describe its high-performance digital control system and the control algorithms implemented with it. Thanks to the performance and reliability of this system, the current duty cycle of Advanced Virgo is almost 90%.
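The "12 orders of magnitude" figure can be made plausible with a textbook scaling argument: the super-attenuator behaves like a chain of pendulums, and well above its resonance each pendulum stage attenuates ground motion by roughly a factor of (f0/f)^2. The resonance frequency and stage count below are illustrative assumptions, not the actual Advanced Virgo parameters.

```python
def chain_attenuation(f_hz, f0_hz=0.5, n_stages=6):
    """Rough transmissibility of a chain of n pendulum stages, each with
    resonance f0: well above resonance each stage rolls off as (f0/f)^2,
    so the whole chain gives roughly (f0/f)^(2n)."""
    return (f0_hz / f_hz) ** (2 * n_stages)

# With f0 = 0.5 Hz and 6 stages, ground motion at 5 Hz is suppressed by
# about 12 orders of magnitude, consistent with the figure quoted above.
attenuation_at_5hz = chain_attenuation(5.0)
```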


Figure 2 – Inside view of a super-attenuator

1pNS2 – Soundscape, traffic safety, and requirements for public health

Brigitte Schulte-Fortkamp – b.schulte-fortkamp@tu-berlin.de

Technical University Berlin
Psychoacoustics and Noise Effects
Einsteinufer 25
10587 Berlin -Germany

Popular version of paper 1pNS2
Monday, May 13, 2019
177th ASA Meeting in Louisville, KY

When you think about your safety and health with regard to road traffic, you may not immediately think about avoidable noise pollution. But the World Health Organization (WHO) published a new noise guideline for the European Region in October 2018. Its focus is on health effects caused by noise from different sources, with transportation noise (road traffic, railway, and aircraft noise) playing the major role.

The use of environmentally friendly electric vehicles can certainly decrease road traffic noise pollution, contributing to public health. But for safety reasons, which are of course also a public health issue, there is also policy action regulating the use of alert signals. Worldwide, there is debate about whether such signals could be counterproductive to a harmonious and healthy soundscape or could even support one.

(Regulation (EU) No 540/2014 of the European Parliament 2018, U.S. National Highway Traffic Safety Administration 2018,  Japan Guidelines on Electric vehicle warning sounds 2010)

Soundscape is the new way to understand people's reactions to the sounds of the world. Soundscape is a construct of human perception that must be understood as a relationship between human beings, acoustic environments, and society. Our focus in this field is on co-creation in acoustics, architecture, medicine, and urban planning, combined with analysis, advice, and feedback from the 'users' of an acoustic environment as its primary 'experts', to find creative and responsive solutions that protect living areas and enhance the quality of life.

The Soundscape concept is introduced as a scope to rethink the evaluation of noise pollution. The challenge is to account for the perceptual dimension and to consider the limits of acoustic measurements.

Figure 1 – The recent international standard ISO 12913 (Parts 1-3), Acoustics – Soundscape

Figure 2 – Definition of Soundscape
– acoustic environment as perceived or experienced and/or understood by people, in context.

Soundscape as defined in 2014 by the International Organization for Standardization (ISO)

Figure 3 – Elements in the perceptual construct of soundscape

Context
The context includes the interrelationships between person, activity, and place, in space and time. The context may influence the soundscape through (1) the auditory sensation, (2) the interpretation of the auditory sensation, and (3) the responses to the acoustic environment.

The contribution of Soundscape research to public health means focusing on perception as a key issue. The Soundscape approach suggests exploring noise in its complexity and its ambivalence. Soundscape studies investigate and find increasingly better ways to measure and hone the acoustic environment.

Figure 4 – Soundscape studies

Figure 5 – Soundscape model including quality of life and health

At the same time, the new technology of electric vehicles prompts policy action, with regulations called for on safety grounds. These regulations and needs have to be considered with respect to the public health recommendations on exposure to environmental noise and to soundscapes.

There have to be solutions that follow the need outlined in the WHO guidelines to “provide robust public health advice underpinned by evidence, which is essential to drive policy action that will protect communities from the adverse effects of noise”.

The process of tuning urban areas with respect to people's expertise and quality of life is related to the strategy of co-creation, which provides the theoretical frame for solutions such as changing an area. In other words, approaching the field of traffic safety and public health in this holistic manner is generally needed.

To establish the Soundscape concept and the Soundscape approach, local actors and stakeholders in communities need to be advised to use the available resources with respect to future generations and to socio-cultural, aesthetic, and economic effects as well. As widely discussed in earlier publications, a platform is needed for stakeholders to co-create and reach common decisions. Moreover, the current approach to the standardization of Soundscapes has been a big step toward enhancing people's quality of life.

REFERENCES
WHO Environmental Noise Guidelines for the European Region (2018)

  1. Kang, J., Schulte-Fortkamp, B. (Eds.), Soundscape and the Built Environment, CRC Press, Taylor & Francis Group, Boca Raton (2016).
  2. Schulte-Fortkamp, B., "Soundscape – a matter of human resources," Proc. INTER-NOISE 2013, Innsbruck, Austria (2013).
  3. Schulte-Fortkamp, B., Kang, J. (Eds.), Special Issue on Soundscape, J. Acoust. Soc. Am. (2012).
  4. Kang, J., Aletta, F., Gjestland, T.T., Brown, L.A., Botteldooren, D., Schulte-Fortkamp, B., Lercher, P., van Kamp, I., Genuit, K., Fiebig, A., Bento Coelho, J.L., Maffei, L., Lavia, L., "Ten questions on the soundscapes of the built environment," Building and Environment, Vol. 108 (1), 284-294 (2016).
  5. Schafer, R.M., The Soundscape: Our Sonic Environment and the Tuning of the World, Destiny Books, Rochester, Vermont (1977).
  6. Hollstein, B., "Qualitative approaches to social reality: the search for meaning," in: Scott, J., Carrington, P.J. (Eds.), Sage Handbook of Social Network Analysis, Sage, London/New Delhi (2012).
  7. Hiramatsu, K., "Soundscape: The concept and its significance in acoustics," Proc. ICA, Kyoto (2004).
  8. Fiebig, A., Schulte-Fortkamp, B., Genuit, K., "New options for the determination of environmental noise quality," Proc. INTER-NOISE 2006, Honolulu, HI, 4-6 December 2006.
  9. Lercher, P., Schulte-Fortkamp, B., "Soundscape and community noise annoyance in the context of environmental impact assessments," Proc. INTER-NOISE 2003, 2815-2824 (2003).
  10. Schulte-Fortkamp, B., Dubois, D. (Eds.), Acta Acustica united with Acustica, Special Issue: Recent Advances in Soundscape Research, Vol. 92 (6) (2006).
  11. Regulation (EU) No 540/2014 of the European Parliament and of the Council of 16 April 2014 on the sound level of motor vehicles and of replacement silencing systems, and amending Directive 2007/46/EC and repealing Directive 70/157/EEC (OJ L 158, 27.5.2014).
  12. Regulation No 138 of the Economic Commission for Europe of the United Nations (UNECE) — Uniform provisions concerning the approval of Quiet Road Transport Vehicles with regard to their reduced audibility [2017/71] (OJ L 9, 13.1.2017).

2aAB1 – Most animals hear acoustic flow instead of pressure; we should too

Ronald N. Miles – miles@binghamton.edu

Department of Mechanical Engineering
Binghamton University
State University of New York
Binghamton, NY 13902 USA

Popular version of paper 2aAB1
Presented Tuesday morning May 14, 2019.  8:35-8:55 am
177th ASA Meeting, Louisville, KY

The sound we hear consists of tiny, rapid changes in the pressure of air as it fluctuates about the steady atmospheric pressure.  Our ears detect these minute pressure fluctuations because they produce time-varying forces on our eardrums.  Many animals hear sound using pressure-sensitive eardrums such as ours.  However, most animals that hear sound (including countless insects) don’t have eardrums at all. Instead, they listen by detecting the tiny motion of air molecules as they flow back and forth when sound propagates.

The motion of air molecules in a sound wave is illustrated in the video below. The moving dots in this video depict the motion of gas molecules due to the back-and-forth motion of the piston shown at the left. The sound wave is a propagating fluctuation in the density (and pressure) of the molecules. Note that the wave propagates to the right while each molecule (such as the larger moving dot in the center of the image) simply moves back and forth. Small animals sense this back-and-forth motion by sensing the deflection of thin hairs that are driven by viscous forces in the fluctuating acoustic medium.

It is likely that the early inventors of acoustic sensors fashioned microphones to operate based on sensing pressure because they knew that is how humans hear sound.  As a result, all microphones have possessed a thin pressure-sensing diaphragm (or ribbon) that functions much like our eardrums.  The fact that most animals don’t hear this way suggests that there may be significant benefits to considering alternate designs.  In this study, we explore technologies for achieving precise detection of sound using a mechanical structure that is driven by viscous forces associated with the fluctuating velocity of the medium.  In one example, we have shown this to result in a directional microphone with flat frequency response from 1 Hz to 50 kHz (Zhou, Jian, and Ronald N. Miles. “Sensing fluctuating airflow with spider silk.” Proceedings of the National Academy of Sciences 114.46 (2017): 12120-12125.).

Nature shows that there are many ways to fashion a thin, lightweight structure that can respond to minute changes in airflow as occur in a sound field.   A first step in designing an acoustic flow sensor is to understand the effects of the viscosity of the air on such a structure as air flows in a sound field; viscosity is known to be essential in the acoustic flow-sensing ears of small animals.  Our mathematical model predicts that the sound-induced motion of a very thin beam can be dominated by viscous forces when its width becomes on the order of five microns.  Such a structure can be readily made using modern microfabrication methods.
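A rough scaling argument (not the authors' full mathematical model) suggests why a beam a few microns wide is viscosity-dominated: the viscous (Stokes) boundary layer in oscillating air is far thicker than the beam itself, so the beam sits entirely inside the viscously driven flow. The frequency and beam width below are illustrative.

```python
import math

NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s

def stokes_layer_thickness_m(f_hz, nu=NU_AIR):
    """Thickness of the viscous (Stokes) boundary layer in air oscillating
    at frequency f_hz: delta = sqrt(2 * nu / omega)."""
    return math.sqrt(2.0 * nu / (2.0 * math.pi * f_hz))

beam_width = 5e-6                          # ~5 micron beam, as in the model
delta = stokes_layer_thickness_m(1000.0)   # layer thickness at 1 kHz, m
# The viscous layer is more than ten times wider than the beam, so viscous
# forces can dominate the beam's sound-induced motion.
```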

In order to create a microphone, once an extremely thin and compliant structure is designed that can respond to acoustic flow-induced viscous forces, one must develop a means of converting its motion into an electronic signal.  We have described one method of accomplishing this using capacitive transduction (Miles, Ronald N. “A Compliant Capacitive Sensor for Acoustics: Avoiding Electrostatic Forces at High Bias Voltages.” IEEE Sensors Journal 18.14 (2018): 5691-5698).

Acknowledgement:  This research is supported by a grant from the NIH National Institute on Deafness and Other Communication Disorders (1R01DC017720-01).