1pPA1 – Ammonia chemistry: Sounds better with ultrasound

Dr. Prince Nana AMANIAMPONG, prince.nana.amaniampong@univ-poitiers.fr
CNRS Chargé de Recherche (CRCN)
Bâtiment B1, Rue Marcel Doré, TSA41105
86073 – Poitiers Cedex 9 (France)

Popular version of 1pPA1 – Ammonia chemistry: Sounds better with ultrasound
Presented Monday morning, May 23, 2022
182nd ASA Meeting

Hydrazine (N2H4) is a chemical of utmost importance in the chemical industry. The global hydrazine market was valued at USD 510.95 million in 2020 and is projected to reach USD 806.09 million by 2030, driven largely by society's growing demand for polymer foams and agrochemicals. Hydrazine is also used as a rocket propellant in space vehicles and as an oxygen scavenger to reduce the concentration of dissolved oxygen in industrial water systems. The direct production of hydrazine from ammonia (NH3) is economically and environmentally highly attractive, but it remains a very difficult task. One of the reasons stems from the high bond dissociation energy of the N-H bond in NH3 (435 kJ/mol), which demands harsh temperature and pressure conditions that are not compatible with the stability of hydrazine. Indeed, the decomposition of hydrazine is thermodynamically more favorable than the conversion of ammonia to hydrazine, making the accumulation of hydrazine scientifically challenging.

In this work, we show that cavitation bubbles created by high-frequency ultrasonic irradiation of aqueous NH3 act as micro-reactors that activate and convert NH3 to amino species, without the assistance of any catalyst, yielding hydrazine at the bubble-liquid interface (Figure 1). The compartmentalization of the in-situ-produced hydrazine in the bulk solution, which is maintained close to 30 °C, advantageously prevents its thermal degradation, a recurrent problem of previous technologies.


Figure 1. Cavitation bubbles act as micro-reactors to activate ammonia towards hydrazine formation.

With this technology, a maximum hydrazine production rate of 0.17 mmol L-1 h-1 was achieved in a 7 wt. % ammonia solution (Figure 2). This work opens up new avenues toward the production of hydrazine for industrial and commercial applications using high-frequency ultrasound activation technologies.


Figure 2. Effect of NH3 concentration on the formation of hydrazine (525 kHz, 0.17 W/mL, 30 °C)

This work was recently published in Angewandte Chemie International Edition (Anaelle Humblot et al., 60, 48, 25230-25234; doi.org/10.1002/anie.202109516) and was highlighted as the front cover image of the issue.

Filtering Microplastics Trash from Water with Acoustic Waves


Prototype speaker system efficiently separates out microplastics from polluted water

Media Contact:
Larry Frum
AIP Media
301-209-3090
media@aip.org

SEATTLE, November 29, 2021 — Microplastics are released into the environment by cosmetics, clothing, and industrial processes or from larger plastic products as they break down naturally.

The pollutants eventually find their way into rivers and oceans, posing problems for marine life. Filtering and removing the small particles from water is a difficult task, but acoustic waves may provide a solution.

Dhany Arifianto, of the Institut Teknologi Sepuluh Nopember in Surabaya, Indonesia, will discuss a filtration prototype in his presentation, “Using bulk acoustic waves for filtering microplastic on polluted water,” on Monday, Nov. 29 at 6:10 p.m. Eastern U.S. at the Hyatt Regency Seattle. The presentation is part of the 181st Meeting of the Acoustical Society of America, taking place Nov. 29 to Dec. 3.

Arifianto and his team used two speakers to create acoustic waves in a tube of inflowing water. The acoustic force produced by the waves pushes the microplastic particles together: as the tube splits into three channels, the particles are pressed toward the center channel while the cleaned water flows out through the two outer channels.
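The separation mechanism the team describes is a form of acoustophoresis: a standing wave exerts a radiation force that pushes suspended particles toward pressure nodes or antinodes, depending on their density and compressibility relative to the water. As a minimal sketch (not the study's actual design parameters), the standard Yosioka-Kawasima expression for the force on a small sphere can be written as follows; all material values below are assumed for illustration.

```python
import math

def contrast_factor(rho_p, rho_f, c_p, c_f):
    """Acoustic contrast factor for a small compressible sphere in an
    inviscid fluid; positive values mean the particle is pushed toward
    pressure nodes of the standing wave."""
    beta_p = 1.0 / (rho_p * c_p**2)   # particle compressibility
    beta_f = 1.0 / (rho_f * c_f**2)   # fluid compressibility
    return (5*rho_p - 2*rho_f) / (2*rho_p + rho_f) - beta_p / beta_f

def radiation_force(a, f, E_ac, x, rho_p, rho_f, c_p, c_f):
    """Primary acoustic radiation force on a sphere of radius a at
    position x in a 1-D standing wave of frequency f and acoustic
    energy density E_ac."""
    k = 2 * math.pi * f / c_f         # wavenumber in the fluid
    phi = contrast_factor(rho_p, rho_f, c_p, c_f)
    return (4*math.pi/3) * a**3 * k * E_ac * phi * math.sin(2*k*x)

# Illustrative (assumed) values: a 50-micron polyethylene particle
# (rho ~ 950 kg/m3, c ~ 2400 m/s) in water, 40 kHz standing wave.
F = radiation_force(a=50e-6, f=40e3, E_ac=10.0, x=1e-3,
                    rho_p=950.0, rho_f=1000.0, c_p=2400.0, c_f=1480.0)
print(f"radiation force ~ {F:.2e} N")
```

Because the force scales with particle volume and with the acoustic energy density, efficiency depends on frequency, drive power, and fluid properties, consistent with the factors the team reports.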

The prototype device cleaned 150 liters per hour of polluted water and was tested with three different microplastics. Each plastic was filtered with a different efficiency, but all were above 56% efficient in pure water and 58% efficient in seawater. Acoustic frequency, speaker-to-pipe distance, and density of the water all affected the amount of force generated and therefore the efficiency.

The acoustic waves may impact marine life if the wave frequency is in the audible range. The group is currently studying this potential issue.

“We believe further development is necessary to improve the cleaning rate, the efficiency, and particularly the safety of marine life,” said Arifianto.

———————– MORE MEETING INFORMATION ———————–
USEFUL LINKS
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eventpilotadmin.com/web/planner.php?id=ASASPRING22
Press Room: https://acoustics.org/world-wide-press-room/

WORLDWIDE PRESS ROOM
In the coming weeks, ASA’s Worldwide Press Room will be updated with additional tips on dozens of newsworthy stories and with lay language papers, which are 300 to 500 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio and video. You can visit the site during the meeting at https://acoustics.org/world-wide-press-room/.

PRESS REGISTRATION
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact AIP Media Services at media@aip.org. For urgent requests, staff at media@aip.org can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

3pPA4 – Military personnel may be exposed to high-level infrasound during training

Alessio Medda, PhD – Alessio.Medda@gtri.gatech.edu
Robert Funk, PhD – Rob.Funk@gtri.gatech.edu
Krish Ahuja, PhD – Krish.Ahuja@gtri.gatech.edu
Aerospace, Transportation & Advanced Systems Laboratory
Georgia Tech Research Institute
Georgia Institute of Technology
260 14th Street NW
Atlanta, GA 30332

Walter Carr, PhD – walter.s.carr.civ@mail.mil
Bradley Garfield – bradley.a.garfield.ctr@mail.mil
Walter Reed Army Institute of Research (WRAIR)
503 Robert Grant Avenue
Silver Spring, MD 20910

Popular version of 3pPA4 – Infrasound Signature Measurements for U.S. Army Infantry Weapons During Training
Presented Wednesday morning, December 1, 2021
181st ASA Meeting, Seattle, WA

Infrasound is defined as acoustic oscillation at frequencies below the typical lower threshold of human hearing, about 20 Hz. Although infrasound is generally considered too low in frequency for humans to hear, it has been shown that sound can be heard down to about 1 Hz. In this low-frequency range, single frequencies are not perceived as pure tones but are experienced as shocks or pressure waves, through the harmonics generated by distortion in the middle and inner ear. Moreover, infrasound exposure can also affect the human body: when sound of sufficient intensity is absorbed, it stimulates biological tissue and produces effects similar to whole-body vibration.

United States military personnel are exposed to blast overpressure from a variety of sources during training and military operations. While it is known that repeated exposure to high-level blast overpressure may result in concussion-like symptoms, the effect of repeated exposure to low-level blast overpressure is not yet well understood. Exposure to low-level blast rarely produces a concussion, but anecdotal evidence from soldiers indicates that it can still produce transient neurological effects. During interviews, military personnel described the effect of firing portable antitank weapons as like "getting punched in your whole body." In addition, military personnel involved in breaching operations often use the term "breacher's brain" for symptoms that include headache, fatigue, dizziness, and memory issues.

Impulsive acoustic sources, such as the pressure waves generated by explosions, artillery fire, and rocket launches, are typically characterized by broadband acoustic energy with frequency components well into the infrasound range. In this study, we explore how routine infantry training can result in repeated high-level infrasound exposures by analyzing acoustic recordings and highlighting the presence of infrasound.

We present results as time-frequency plots generated with the Synchrosqueezed Wavelet Transform, a wavelet-based technique proposed by Daubechies et al. in 2011 that represents a signal at different scales and sharpens the resulting time-frequency picture. In Figure 1 we show examples of high-energy infrasound for three weapons commonly used during infantry training in the US military. Figure 1(A) shows the time-frequency plot of a grenade explosion, Figure 1(B) shows the plot obtained from recordings of machine gun fire, and Figure 1(C) shows the plot obtained from a recording of a rocket launched from a shoulder-held weapon.
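As a hedged illustration of the idea behind such plots, the sketch below computes an ordinary Morlet-wavelet scalogram, which is the first stage of synchrosqueezing, without the phase-based energy-reassignment step that gives the full transform its sharpness (libraries such as ssqueezepy implement the complete method). The synthetic signal, a decaying 5 Hz tone, loosely mimics a low-frequency impulsive event; all parameters are illustrative.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet.
    Row i of the result is the response at analysis frequency
    freqs[i]; columns are time samples. Since the Morlet wavelet
    satisfies psi(-k) = conj(psi(k)), convolution with psi equals
    correlation with its conjugate, as the CWT requires."""
    n = len(x)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)        # samples per scale unit
        m = min(int(10 * scale), n - 1) | 1      # odd kernel length, capped
        k = (np.arange(m) - m // 2) / scale
        wavelet = np.exp(1j * w0 * k - k**2 / 2)
        wavelet /= np.sqrt(scale)                # energy normalization
        out[i] = np.convolve(x, wavelet, mode="same")
    return out

fs = 200.0
t = np.arange(0, 4, 1/fs)                        # 800 samples
sig = np.exp(-t) * np.sin(2*np.pi*5*t)           # decaying 5 Hz tone
freqs = np.linspace(1, 20, 40)
power = np.abs(morlet_cwt(sig, fs, freqs))**2
peak = freqs[power[:, 100].argmax()]             # dominant freq at t = 0.5 s
print(f"dominant frequency ~ {peak:.1f} Hz")
```

Plotting `power` with time on one axis and `freqs` on the other yields a time-frequency map of the kind shown in Figure 1, with high-energy regions appearing as bright ridges.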

Results indicate that high infrasound levels are present during military training events involving impulsive noise. Also, service members who routinely take part in these training exercises have reported concussion-like symptoms associated with training exposures.

Through this research, we have an opportunity to establish the nature of the potential threat from infrasound in training environments as a preparation for future studies aimed at developing dose-response relationships between neurophysiological outcomes and environmental measurements.

Figure 1. Time-frequency spectra for recordings of (A) a grenade blast, (B) machine gun fire, and (C) a rocket launched from a shoulder-held weapon. Regions of high energy appear hot (red), while low-energy regions appear cool (blue).


3aPA8 – A Midsummer Flights' Dream: Detecting Earthquakes from Solar Balloons


Leo Martire (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA) – leo.martire@jpl.nasa.gov
Siddharth Krishnamoorthy (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)
Attila Komjathy (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)
Daniel Bowman (Sandia National Laboratories, Albuquerque, NM)
Michael T. Pauken (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)
Jamey Jacob (Oklahoma State University, Stillwater, OK)
Brian Elbing (Oklahoma State University, Stillwater, OK)
Emalee Hough (Oklahoma State University, Stillwater, OK)
Zach Yap (Oklahoma State University, Stillwater, OK)
Molly Lammes (Oklahoma State University, Stillwater, OK)
Hannah Linzy (Oklahoma State University, Stillwater, OK)
Zachary Morrison (Oklahoma State University, Stillwater, OK)
Taylor Swaim (Oklahoma State University, Stillwater, OK)
Alexis Vance (Oklahoma State University, Stillwater, OK)
Payton Miles Simmons (Oklahoma State University, Stillwater, OK)
James A. Cutts (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)

NASA Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive
Pasadena, CA 91109

Popular version of paper 3aPA8 – A Midsummer Flights' Dream: Balloon-borne infrasound-based aerial seismology
Presented Wednesday morning, December 01, 2021
181st ASA Meeting, Acoustics in Focus

Earthquakes cause the Earth’s surface to act as a giant speaker producing extremely low frequency sound in the atmosphere, called infrasound, similar to how striking a drum produces audible sound. Because sound attenuation is weak at these low frequencies, infrasound propagates very efficiently in the Earth’s atmosphere, and can be recorded at distances up to hundreds of kilometers.

As a result, pressure sensors carried by high-altitude balloons can record the direct infrasound induced by earthquakes. Our balloons carry two pressure sensors to help detect and characterize the so-called seismic infrasound. The study of infrasound is a viable proxy for measuring the motion of the ground: indeed, computer simulations and previous balloon experiments have shown that the infrasound signal retains information about the earthquake that generated it.
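As a rough order-of-magnitude estimate (not the study's inversion method), directly above the epicenter the shaking ground behaves approximately like a piston, so the radiated airwave pressure is about p = rho * c * v, the air's acoustic impedance times the ground velocity. The ground-velocity value below is assumed for illustration.

```python
# Piston approximation for the epicentral airwave: p = rho * c * v.
rho_air = 1.2      # air density near the surface, kg/m^3
c_air = 340.0      # speed of sound in air, m/s
v_ground = 1e-3    # peak ground velocity, m/s (moderate local shaking)

p = rho_air * c_air * v_ground
print(f"epicentral airwave pressure ~ {p:.3f} Pa")
```

Even a millimeter-per-second ground motion thus yields a fraction of a pascal at the source, a signal well within reach of sensitive balloon-borne pressure sensors.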

Drone footage of a solar-heated balloon carrying two infrasound sensors over Oklahoma, just after take-off. Notice how the lower instrument is being reeled down to increase sensor separation.

The interior of Venus, Earth’s sister planet, remains a mystery as of today. Unlike Mars, the surface of which has been explored by numerous landers and rovers, the surface of Venus is particularly inhospitable: atmospheric pressure is 92 times that on Earth, and the temperature can exceed 475 degrees Celsius. This makes direct ground motion measurements particularly challenging. However, balloons flying in the Venusian cloud layer would encounter much more temperate conditions (~0 degree Celsius and Earth’s sea level atmospheric pressure), and could therefore survive long enough to make significant records of venusquake-induced infrasound.

On July 22, 2019, Brissaud et al. conducted the first-ever experiment to detect the infrasonic signature of an earthquake (magnitude 4.2, in California) from a high-altitude balloon. During the summer of 2021, NASA's Jet Propulsion Laboratory (JPL), Oklahoma State University (OSU), and Sandia National Laboratories (SNL) collaborated to increase the number of detections by launching infrasound sensors over the seismically active plains of Oklahoma. The team used an innovative solar hot-air balloon design to reduce the cost and complexity of traditional helium balloons.

Launching an infrasound solar-heated balloon from Oklahoma State University’s Unmanned Aircraft Flight Station (Glencoe, OK)

Over the course of 68 days, 39 balloons were launched in the hope of capturing the seismo-acoustic signals of some of the 743 Oklahoma earthquakes. Covering an average distance of 325 km per day and floating at an average altitude of 20 km above sea level, the balloons passed close to 126 weak earthquakes, with a maximum magnitude of 2.8. We are now analyzing this large dataset, which is potentially filled with infrasound signatures of earthquakes, thunderstorms, and several human-caused sources such as chemical explosions and wind farms.

This flight campaign allowed the team to optimize the design of balloon instrumentation for the detection of geophysical events on Earth, and hopefully on Venus in the future.

© 2021. All rights reserved. A portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.

4aPA – Using Sound Waves to Quantify Erupted Volumes and Directionality of Volcanic Explosions

Alexandra Iezzi – amiezzi@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

David Fee – dfee1@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

Popular version of paper 4aPA
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

Volcanic eruptions can produce serious hazards, including ash plumes, lava flows, pyroclastic flows, and lahars. Volcanic phenomena, especially explosions, produce a substantial amount of sound, particularly in the infrasound band (<20 Hz, below human hearing) that can be detected at both local and global distances using dedicated infrasound sensors. Recent research has focused on inverting infrasound data collected within a few kilometers of an explosion, which can provide robust estimates of the mass and volume of erupted material in near real time. While the backbone of local geophysical monitoring of volcanoes typically relies on seismometers, it can sometimes be difficult to determine whether a signal originates from the subsurface only or has become subaerial (i.e. erupting). Volcano infrasound recordings can be combined with seismic monitoring to help illuminate whether or not material is actually coming out of the volcano, therefore posing a potential threat to society.

This presentation summarizes results from many recent studies on acoustic source inversions for volcanoes, including a recent study by Iezzi et al. (in review) at Yasur volcano, Vanuatu. Yasur is easily accessible and produces explosions every 1 to 4 minutes, making it a great place to study volcanic explosion mechanisms (Video 1).

Video 1 – Video of a typical explosion at Yasur volcano, Vanuatu.

Most volcano infrasound inversion studies assume that sound radiates equally in all directions. However, the potential for acoustic directionality from the volcano infrasound source mechanism is not well understood due to infrasound sensors usually being deployed only on Earth’s surface. In our study, we placed an infrasound sensor on a tethered balloon that was walked around the volcano to measure the acoustic wavefield above Earth’s surface and investigate possible acoustic directionality (Figure 1).

Figure 1 [file missing] – Image showing the aerostat on the ground prior to launch (left) and when tethered near the crater rim of Yasur (right).

Volcanos typically have high topographic relief that can significantly distort the waveform we record, even at distances of only a few kilometers. We can account for this effect by modeling the acoustic propagation over the topography (Video 2).

Video 2 – Video showing the pressure field that results from inputting a simple compressional source at the volcanic vent and propagating the wavefield over a model of topography. The red denotes positive pressure (compression) and blue denotes negative pressure (rarefaction). We note that all complexity past the first red band is due to topography.

Once the effects of topography are constrained, we can assume that when we are very close to the source, all other complexity in the infrasound data is due to the acoustic source. This allows us to solve for the volume flow rate (potentially in real time). In addition, we can examine directionality for all explosions, which may lead to volcanic ejecta being launched more often and farther in one direction than in others. This poses a great hazard to tourists and locals near the volcano and may be mitigated by studying the acoustic source from a safe distance using infrasound.
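A hedged sketch of the inversion idea described above: if the explosion is modeled as a simple (monopole) acoustic source, the recorded pressure is proportional to the time derivative of the volume flow rate, so integrating the pressure once recovers the flow rate and integrating again gives the erupted volume. The numbers below are synthetic, chosen only to show that the inversion recovers a known input; this is not the authors' full method, which also accounts for topography.

```python
import numpy as np

def erupted_volume(p, fs, r, rho=1.2):
    """Invert a monopole source model: for p(r,t) = rho/(4*pi*r) * dQ/dt,
    the volume flow rate Q is the time integral of pressure scaled by
    4*pi*r/rho, and the erupted volume is the integral of Q."""
    dt = 1.0 / fs
    q = np.cumsum(p) * dt * 4 * np.pi * r / rho   # volume flow rate, m^3/s
    return q, np.cumsum(q) * dt                    # cumulative volume, m^3

# Synthetic test: forward-model the pressure from a known Gaussian
# flow-rate pulse, then invert it back (all values illustrative).
fs, r = 100.0, 1000.0                              # sample rate, sensor range
t = np.arange(0, 10, 1/fs)
q_true = 1e4 * np.exp(-((t - 2.0) / 0.5)**2)       # flow-rate pulse, m^3/s
p = 1.2 / (4*np.pi*r) * np.gradient(q_true, 1/fs)  # modeled pressure, Pa
q_est, vol = erupted_volume(p, fs, r)
print(f"recovered erupted volume ~ {vol[-1]:.0f} m^3")
```

The recovered total should match the analytic value of the input pulse (about 8,860 cubic meters), illustrating how near-source pressure records can yield erupted-volume estimates in near real time.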

4APP28 – Listening to music with bionic ears: Identification of musical instruments and genres by cochlear implant listeners

Ying Hsiao – ying_y_hsiao@rush.edu
Chad Walker
Megan Hebb
Kelly Brown
Jasper Oh
Stanley Sheft
Valeriy Shafiro – Valeriy_Shafiro@rush.edu
Department of Communication Disorders and Sciences
Rush University
600 S Paulina St
Chicago, IL 60612, USA

Kara Vasil
Aaron Moberly
Department of Otolaryngology – Head & Neck Surgery
Ohio State University Wexner Medical Center
410 W 10th Ave
Columbus, OH 43210, USA

Popular version of paper 4APP28
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

For many people, music is an integral part of everyday life. We hear it everywhere: cars, offices, hallways, elevators, restaurants, and, of course, concert halls and peoples’ homes. It can often make our day more pleasant and enjoyable, but its ubiquity also makes it easy to take it for granted. But imagine if the music you heard around you sounded garbled and distorted. What if you could no longer tell apart different instruments that were being played, rhythms were no longer clear, and much of it sounded out of tune? This unfortunate experience is common for people with hearing loss who hear through cochlear implants, or CIs, the prosthetic devices that convert sounds around a person to electrical signals that are then delivered directly to the auditory nerve, bypassing the natural sensory organ of hearing – the inner ear. Although CIs have been highly effective in improving speech perception for people with severe to profound hearing loss, music perception has remained difficult and frustrating for people with CIs.

Audio 1.mp4, “Music processed with the cochlear implant simulator, AngelSim by Emily Shannon Fu Foundation”

Audio 2.mp4, “Original version [“Take Five” by Francesco Muliedda is licensed under CC BY-NC-SA]”
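CI simulations of this kind are typically built with a channel vocoder: the signal is split into a handful of frequency bands, only the slow amplitude envelope of each band is kept (mimicking the implant's electrode channels), and each envelope modulates band-limited noise. The sketch below is a generic noise vocoder in that spirit, not the AngelSim algorithm; the channel count, band edges, and smoothing window are illustrative.

```python
import numpy as np

def noise_vocoder(x, fs, n_channels=8, lo=100.0, hi=8000.0):
    """Crude CI simulation: split x into log-spaced bands, keep each
    band's slow amplitude envelope, and use it to modulate band-limited
    noise. Fine spectral detail (pitch, timbre) is discarded, which is
    why music sounds degraded through an implant."""
    n = len(x)
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(n, 1/fs)
    X = np.fft.rfft(x)
    N = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    win = max(1, int(0.01 * fs))                   # 10 ms envelope smoother
    out = np.zeros(n)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        mask = (freqs >= f1) & (freqs < f2)
        band = np.fft.irfft(X * mask, n)           # band-pass the signal
        env = np.convolve(np.abs(band), np.ones(win)/win, mode="same")
        carrier = np.fft.irfft(N * mask, n)        # band-limited noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
t = np.arange(0, 0.5, 1/fs)
tone = np.sin(2*np.pi*440*t)      # a pure "musical" tone (concert A)
voc = noise_vocoder(tone, fs)     # what a CI listener might hear
```

With only eight envelope channels, the 440 Hz tone's pitch is largely replaced by noisy band energy, which is the kind of degradation the audio examples above demonstrate.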

To find out how well CI listeners identify musical instruments and music genres, we used a version of a previously developed test, the Appreciation of Music in Cochlear Implantees (AMICI). Unlike tests that examine music perception in CI listeners with simple-structured musical stimuli to pinpoint specific perceptual challenges, AMICI takes a more synthetic approach and uses real-world musical pieces, which are acoustically more complex. Our findings confirmed that CI listeners indeed have considerable deficits in music perception. Participants with CIs correctly identified musical instruments only 69% of the time and musical genres 56% of the time, whereas their age-matched normal-hearing peers identified instruments and genres with 99% and 96% accuracy, respectively. The easiest instrument for CI listeners was the drums, correctly identified 98% of the time. In contrast, the most difficult was the flute, with only 18% identification accuracy; it was confused with string instruments 77% of the time. Among the genres, classical music was the easiest to identify, at 83% correct, while Latin and rock/pop music were the most difficult (41% correct). Remarkably, CI listeners' abilities to identify musical instruments and genres correlated with their ability to identify common environmental sounds (such as a dog barking or a car horn) and spoken sentences in noise. These results provide a foundation for future work on music-perception rehabilitation for CI listeners, so that music may sound pleasing and enjoyable to them once again, with possible additional benefits for speech and environmental sound perception.