3pPA4 – Military personnel may be exposed to high-level infrasound during training

Alessio Medda, PhD – Alessio.Medda@gtri.gatech.edu
Robert Funk, PhD – Rob.Funk@gtri.gatech.edu
Krish Ahuja, PhD – Krish.Ahuja@gtri.gatech.edu
Aerospace, Transportation & Advanced Systems Laboratory
Georgia Tech Research Institute
Georgia Institute of Technology
260 14th Street NW
Atlanta, GA 30332

Walter Carr, PhD – walter.s.carr.civ@mail.mil
Bradley Garfield – bradley.a.garfield.ctr@mail.mil
Walter Reed Army Institute of Research (WRAIR)
503 Robert Grant Avenue
Silver Spring, MD 20910

Popular version of 3pPA4 – Infrasound Signature Measurements for U.S. Army Infantry Weapons During Training
Presented Wednesday morning, December 1, 2021
181st ASA Meeting, Seattle, WA

Infrasound is defined as acoustic oscillation at frequencies below the typical lower threshold of human hearing, about 20 Hz. Although infrasound is generally considered too low in frequency to hear, studies have shown that it can be perceived down to about 1 Hz. In this low-frequency range, single frequencies are not perceived as pure tones; instead, they are experienced as shocks or pressure waves through the harmonics generated by distortion in the middle and inner ear. Moreover, infrasound exposure can also affect the human body: when sound of sufficient intensity is absorbed, it stimulates biological tissue and produces effects similar to whole-body vibration.

United States military personnel are exposed to blast overpressure from a variety of sources during training and military operations. While repeated exposure to high-level blast overpressure is known to produce concussion-like symptoms, the effects of repeated exposure to low-level blast overpressure are not yet well understood. Exposure to low-level blast rarely produces a concussion, but anecdotal evidence from soldiers indicates that it can still produce transient neurological effects. During interviews, military personnel described firing portable antitank weapons as feeling like “getting punched in your whole body.” In addition, military personnel involved in breaching operations often use the term “breacher’s brain” for symptoms that include headache, fatigue, dizziness, and memory problems.
Impulsive acoustic sources, such as the pressure waves generated by explosions, artillery fire, and rocket launches, are typically characterized by broadband acoustic energy with frequency components extending well into the infrasound range. In this study, we explore how routine infantry training can result in repeated high-level infrasound exposure by analyzing acoustic recordings and highlighting the presence of infrasound.

We present results in the form of time-frequency plots generated with a technique based on wavelets, a mathematical approach that represents a signal at different scales and extracts distinctive features at each scale. The technique, called the synchrosqueezed wavelet transform, was proposed by Daubechies et al. in 2011. Figure 1 shows examples of high-energy infrasound for three weapons commonly used during infantry training in the U.S. military: Figure 1(A) shows the time-frequency plot of a grenade explosion, Figure 1(B) the plot obtained from recordings of machine gun fire, and Figure 1(C) the plot obtained from a recording of a rocket launched from a shoulder-fired weapon.
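
For readers who want to experiment with this kind of analysis, the sketch below computes a synchrosqueezed wavelet transform of a synthetic impulsive signal in Python, assuming the third-party ssqueezepy package; the surrogate signal, sample rate, and plotting choices are illustrative placeholders, not the weapon recordings analyzed in this study.

```python
# Minimal sketch: synchrosqueezed wavelet transform of a synthetic impulsive
# signal. Assumes the third-party "ssqueezepy" package (pip install ssqueezepy).
# The surrogate signal and sample rate are placeholders, not the weapon
# recordings analyzed in the study.
import numpy as np
import matplotlib.pyplot as plt
from ssqueezepy import ssq_cwt

fs = 2000.0                                  # sample rate in Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Surrogate "blast": broadband noise with an exponentially decaying envelope.
x = np.exp(-40 * t) * rng.standard_normal(t.size)

# Tx holds the synchrosqueezed coefficients (one row per frequency bin).
Tx, *_ = ssq_cwt(x, fs=fs)

plt.imshow(np.abs(Tx), aspect="auto", cmap="jet")
plt.xlabel("Time sample")
plt.ylabel("Frequency bin")
plt.title("Synchrosqueezed wavelet transform (sketch)")
plt.show()
```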

Results indicate that high infrasound levels are present during military training events involving impulsive noise. In addition, service members who routinely take part in these training exercises have reported concussion-like symptoms associated with training exposures.

Through this research, we have an opportunity to establish the nature of the potential threat from infrasound in training environments, in preparation for future studies aimed at developing dose-response relationships between neurophysiological outcomes and environmental measurements.

Figure 1. Time-frequency spectra for recordings of (A) a grenade blast, (B) machine gun fire, and (C) a rocket launched from a shoulder-fired weapon. Regions of high energy appear hot (red), while low-energy regions appear cool (blue).


1pABb6 – Eavesdropping on a bald eagle breeding pair

JoAnn McGee – mcgeej@umn.edu
VA Loma Linda Healthcare System, Loma Linda, CA 92357
Center for Applied and Translational Sensory Science,
University of Minnesota,
Minneapolis, MN 55455

Peggy B. Nelson – nelso477@umn.edu
Department of Speech-Language-Hearing Sciences and the Center for Applied and Translational Sensory Science,
University of Minnesota,
Minneapolis, MN 55455

Julia B. Ponder – ponde003@umn.edu
The Raptor Center,
College of Veterinary Medicine, University of Minnesota, St. Paul, MN 55108

Christopher Feist – feist020@umn.edu
Christopher Milliren – milli079@umn.edu
St. Anthony Falls Laboratory,
University of Minnesota,
Minneapolis, MN 55414

Edward J. Walsh – ewalsh@umn.edu
VA Loma Linda Healthcare System,
Loma Linda, CA 92357
Center for Applied and Translational Sensory Science,
University of Minnesota,
Minneapolis, MN 55455

Popular version of 1pABb6 – A study of the vocal behavior of adult bald eagles during breeding and chick-rearing
Presented at the 181st ASA Meeting in Seattle, Washington

One of the many challenges associated with efforts to characterize the acoustic properties of free-ranging bald eagle (Haliaeetus leucocephalus) vocalizations in a behavioral context is the relative inaccessibility of individual, interacting signalers. Here, we take advantage of the opportunity to eavesdrop on vocal exchanges between a breeding pair inhabiting a nest furnished with a webcam and microphone located in Decorah, Iowa and managed by the Raptor Resource Project (www.raptorresource.org).

In a previous study of captive bald eagles at the University of Minnesota Raptor Center, five call categories were identified: so-called grunts, screams, squeals, chirps, and cackles. The primary goal of this study was to extend the investigation into the field and begin to characterize and compare the acoustic properties of calls produced in captivity and in the wild.

Predictably, many acoustic features are shared between calls produced in captivity and in the wild. However, preliminary findings suggest that at least a subset of calls exchanged by breeding pairs may take on a hybrid character, exhibiting blended variations of the chirps, squeals, and screams characterized previously in captive birds. Calls analyzed here were taken from a variety of settings, including mating, exchanges associated with feeding at the nest, vocal reactions to intruders near the nest, and short-distance call exchanges that appear to function as hailing signals.

The raw material used to relate the behavior of the interacting pair to their vocal exchanges can be appreciated in the following audiovisual recordings.

VIDEO 1
In this video, the female of the pair, an eagle known affectionately as Mom, is not so patiently awaiting the arrival of her partner, known by the less endearing name DM2. As DM2 arrives at the nest with a meal, Mom produces a call that sounds a lot like that of a seagull, with the characteristics of a lower-frequency version of the scream observed in captive eagles.

VIDEO 2
Here, Mom appears to be calling out to DM2 for a break from nesting. DM2 arrived shortly after the footage shown here, and Mom took off for higher ground. The call appears to be a commonly produced, seemingly multipurpose utterance closely resembling a spectrally complex version of a call observed in captive eagles known as the chirp.

VIDEO 3
In this sequence, Mom appears to summon DM2 in response to an apparent intruder, possibly another bald eagle, in the airspace surrounding their nest. Again, a complex variation of the chirp observed in captive eagles appears to serve as a territorial marker.

The take-home message of the preliminary findings reported here is that the acoustic structure of at least a subset of calls produced by free-ranging bald eagles appears to be more nuanced and complex than that of their captive counterparts. Elements typically representative of three primary call types in captive birds, namely chirps, screams, and squeals, intermix in calls produced by free-ranging eagles, creating a vocal repertoire with subtle but potentially meaningful structural variation. If the differences reported here remain stable across a larger sample, these findings will serve to underline the relative importance of our work in the field.

3aPA8 – A Midsummer Flights’ Dream: Detecting Earthquakes from Solar Balloons

Leo Martire (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA) – leo.martire@jpl.nasa.gov
Siddharth Krishnamoorthy (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)
Attila Komjathy (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)
Daniel Bowman (Sandia National Laboratories, Albuquerque, NM)
Michael T. Pauken (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)
Jamey Jacob (Oklahoma State University, Stillwater, OK)
Brian Elbing (Oklahoma State University, Stillwater, OK)
Emalee Hough (Oklahoma State University, Stillwater, OK)
Zach Yap (Oklahoma State University, Stillwater, OK)
Molly Lammes (Oklahoma State University, Stillwater, OK)
Hannah Linzy (Oklahoma State University, Stillwater, OK)
Zachary Morrison (Oklahoma State University, Stillwater, OK)
Taylor Swaim (Oklahoma State University, Stillwater, OK)
Alexis Vance (Oklahoma State University, Stillwater, OK)
Payton Miles Simmons (Oklahoma State University, Stillwater, OK)
James A. Cutts (NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA)

NASA Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive
Pasadena, CA 91109

Popular version of paper 3aPA8 – A Midsummer Flights’ Dream: Balloon-borne infrasound-based aerial seismology
Presented Wednesday morning, December 01, 2021
181st ASA Meeting, Acoustics in Focus

Earthquakes cause the Earth’s surface to act like a giant speaker, producing extremely low-frequency sound in the atmosphere, called infrasound, much as striking a drum produces audible sound. Because sound attenuation is weak at these low frequencies, infrasound propagates very efficiently in the Earth’s atmosphere and can be recorded at distances of up to hundreds of kilometers.

As a result, pressure sensors carried by high-altitude balloons can record the direct infrasound induced by earthquakes. Our balloons carry two pressure sensors to help detect and characterize this so-called seismic infrasound. Seismic infrasound is a viable proxy for measuring the motion of the ground: computer simulations and previous balloon experiments have shown that the infrasound signal retains information about the earthquake that generated it.

Drone footage of a solar-heated balloon carrying two infrasound sensors over Oklahoma, just after take-off. Notice how the lower instrument is being reeled down to increase sensor separation.

The interior of Venus, Earth’s sister planet, remains a mystery to this day. Unlike Mars, whose surface has been explored by numerous landers and rovers, Venus has a particularly inhospitable surface: atmospheric pressure is 92 times that on Earth, and the temperature can exceed 475 degrees Celsius. This makes direct ground motion measurements particularly challenging. However, balloons flying in the Venusian cloud layer would encounter much more temperate conditions (roughly 0 degrees Celsius and Earth’s sea-level atmospheric pressure) and could therefore survive long enough to make significant recordings of venusquake-induced infrasound.

On July 22, 2019, Brissaud et al. conducted the first-ever experiment detecting the infrasonic signature of a magnitude 4.2 earthquake in California from a high-altitude balloon. During the summer of 2021, NASA’s Jet Propulsion Laboratory (JPL), Oklahoma State University (OSU), and Sandia National Laboratories (SNL) collaborated to increase the number of detections by launching infrasound sensors over the seismically active plains of Oklahoma. The team used an innovative solar hot air balloon design to reduce the cost and complexity of traditional helium balloons.

Launching an infrasound solar-heated balloon from Oklahoma State University’s Unmanned Aircraft Flight Station (Glencoe, OK)

Over the course of 68 days, 39 balloons were launched in the hope of capturing the seismo-acoustic signals of some of the 743 Oklahoma earthquakes. Covering an average distance of 325 km per day and floating at an average altitude of 20 km above sea level, the balloons passed close to 126 weak earthquakes, with a maximum magnitude of 2.8. We are now analyzing this large dataset, which is potentially filled with infrasound signatures of earthquakes, thunderstorms, and several human-caused sources such as chemical explosions and wind farms.

This flight campaign allowed the team to optimize the design of balloon instrumentation for the detection of geophysical events on Earth, and hopefully on Venus in the future.

© 2021. All rights reserved. A portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). SNL is managed and operated by NTESS under DOE NNSA contract DE-NA0003525.

1pPPb – Young adults with ADHD display altered neural responses to attended sounds compared with neurotypical counterparts

Jasmine Kwasa – jkwasa@andrew.cmu.edu
Laura María Torres – lmtorres@bu.edu
Abby Noyce – anoyce@andrew.cmu.edu
Barbara Shinn-Cunningham – bgsc@andrew.cmu.edu

Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA 15213

Popular version of paper 1pPPb – Top-down attention modulates neural responses in neurotypical, but not ADHD, young adults
To be presented Monday afternoon, November 29, 2021
181st ASA Meeting

Competing sounds, like a teacher’s voice against the sudden trill of a cell phone, pose a challenge to our attention. Listening in such environments depends upon a push-and-pull between our goals (wanting to listen to the teacher) and involuntary distraction by salient, unexpected sounds (the phone notification). The outcome of this attentional contest depends on the strength of an individual’s “top-down” control of attention relative to their susceptibility to “bottom-up” attention capture.

We wanted to understand the range of this ability in the general population, from neurotypical functioning to neurodivergence. We reasoned that people with Attention Deficit Hyperactivity Disorder (ADHD) would perform worse when undertaking a challenging task that required strong top-down control (mental flexibility) and show altered neural signatures of this ability.

We created an auditory paradigm that stressed top-down control of attention. Forty-five young adult volunteers with normal hearing listened to multiple concurrent streams of spoken syllables that came from the left, center, and right (listen to an example trial below) while we recorded electroencephalography (EEG). We tested both the ability to sustain attentional focus on a single “target” stream (always heard from the center, depicted in black in Figure 1) and the ability to monitor the target but flexibly switch attention to an unpredictable “interrupter” stream from another direction if and when it appeared (depicted in red in Figure 1).

You can hear an example trial here:

A visual depiction of this clip is seen below:
[Figure 1: visual depiction of the example trial, showing the target stream (black) and the interrupter stream (red)]

We included key conditions in which the stimuli were identical between trials, but the attentional focus differed, allowing us to isolate effects of attention. The EEG recording allowed us to capture neural responses, called event-related potential (ERP) components, whose amplitudes reflect the strength of top-down relative to bottom-up attention.
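
As a rough illustration of how an ERP is obtained from continuous EEG, the sketch below epochs a single channel around event onsets, averages across trials, and compares peak amplitudes between two attention conditions. The synthetic data, channel, and time windows are placeholders for illustration only, not our recordings or analysis pipeline.

```python
# Sketch: computing an event-related potential (ERP) by epoching and averaging
# EEG around stimulus onsets, then comparing peak amplitudes between two
# attention conditions. All data here are synthetic placeholders; this is not
# the study's analysis pipeline.
import numpy as np

fs = 250                                   # EEG sampling rate in Hz (illustrative)
n_samples = fs * 600                       # ten minutes of one synthetic channel
rng = np.random.default_rng(1)
eeg = rng.standard_normal(n_samples)       # stand-in for a single EEG channel
onsets_attend = rng.integers(fs, n_samples - fs, size=100)   # event samples
onsets_ignore = rng.integers(fs, n_samples - fs, size=100)

def erp(signal, onsets, pre=0.2, post=0.8):
    """Baseline-correct and average epochs from -pre to +post seconds."""
    pre_s, post_s = int(pre * fs), int(post * fs)
    epochs = np.stack([signal[o - pre_s:o + post_s] for o in onsets])
    epochs -= epochs[:, :pre_s].mean(axis=1, keepdims=True)   # baseline correction
    return epochs.mean(axis=0)

erp_attend = erp(eeg, onsets_attend)
erp_ignore = erp(eeg, onsets_ignore)

# A simple "attentional modulation" measure: difference in peak ERP amplitude.
print("Peak amplitude (attend):", np.abs(erp_attend).max())
print("Peak amplitude (ignore):", np.abs(erp_ignore).max())
print("Modulation:", np.abs(erp_attend).max() - np.abs(erp_ignore).max())
```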

We found that while volunteers’ performance spanned a large range, from near-perfect to near-chance levels of attentive listening, ADHD status did not determine who was among the best or worst. In fact, there were no significant differences between ADHD (N=25) and neurotypical (N=20) volunteers in reporting the order of the syllables. However, ADHD subjects exhibited weaker attentional modulation (less flexibility) of ERP component amplitudes than neurotypical listeners did.

Importantly, neural response modulation significantly correlated with behavioral performance, implying that the best performers are those whose brain responses are under stronger top-down control.

Together, these results demonstrate that in the general population of both neurotypical and neurodivergent people, there is indeed a spectrum of top-down control in the face of salient interruptions, regardless of ADHD status. However, young adults with ADHD might achieve attentive listening via different mechanisms in need of further investigation.

1aBAb12 – Novel use of a lung ultrasound sensor for monitoring lung conditions

Tanya Khokhlova – tdk7@uw.edu
Adam Maxwell – amax38@uw.edu
Gilles Thomas – gthom@uw.edu
Jeff Thiel – jt43@uw.edu
Alex Peek – apeek@uw.edu
Bryan Cunitz – bwc@uw.edu
Michael Bailey – mbailey@uw.edu
Kyle Steinbock – kyles96@uw.edu
Layla Anderson – anderla@uw.edu
Ross Kessler – kesslerr@uw.edu
Adeyinka Adedipe – adeyinka@uw.edu
University of Washington
Seattle, WA, 98195

Popular version of paper 1aBAb12 – Novel use of a lung ultrasound sensor for detection of lung interstitial syndrome

Presented Monday morning, November 29, 2021

181st ASA Meeting

The need to continuously evaluate the amount of fluid in the lungs arises in patients suffering from a number of conditions, including viral pneumonia (such as COVID-19) and heart failure, as well as in patients on dialysis. Chest X-ray and CT are typically used for this purpose, but they cannot be performed continuously because of the radiation dose, and they have logistical limitations in some cases, for example when transporting unstable patients or patients with COVID-19, due to the risk of contagion. Lung ultrasound (LUS) is non-ionizing and safe, and it has recently emerged as a useful triage and monitoring tool for quantifying lung water. Because the lung is air-filled, it strongly reflects ultrasound, so LUS exams evaluate image artifacts rather than true images of the lung. The artifacts termed A-lines are periodic bright horizontal lines parallel to the lung surface; they represent multiple reflections of the ultrasound pulse from the lung surface and indicate a normal aeration pattern. The artifacts termed B-lines are comet-like bright vertical regions that originate at the lung surface and extend downward. The number and distribution of B-lines are known to correlate with the presence of fluid in the lung and with the severity of the condition. However, visualizing and quantifying B-lines requires training and is machine and operator dependent, whereas in select clinical scenarios, such as COVID-19 infection, continuous, automated, hands-free monitoring of lung function is preferred.

In this study, we aimed to identify features of the detected ultrasound signals that are associated with B-lines and to develop a miniature, wearable, non-imaging lung ultrasound sensor (LUSS). Individual adhesive LUSS elements could be attached to patients in specific anatomic locations, similar to EKG leads, and the ultrasound signals would be collected and processed with automated algorithms continuously or on demand. First, we used an open-platform ultrasound imaging system to perform standard 10-zone LUS in ten patients with confirmed pulmonary edema and in five healthy volunteers. The ultrasound signal data corresponding to each image were collected for subsequent off-line Doppler, decorrelation, and spectral analyses. The metrics we found to be associated with B-line thickness and number were the peak Doppler power at the pleural line and the ultrasound signal amplitude at large depths.
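
To make the idea of a signal-level B-line metric concrete, here is a toy sketch that computes the envelope of a single received ultrasound line and measures its mean amplitude beyond a chosen depth, a simplified stand-in for the “amplitude at large depth” feature mentioned above; the synthetic signal, depth cutoff, and normalization are invented for illustration and are not the algorithm used in this work.

```python
# Toy sketch: one simplified B-line indicator from a single ultrasound receive
# line -- the mean envelope amplitude beyond a chosen depth. The synthetic
# signal, depth cutoff, and normalization are illustrative placeholders, not
# the processing used in this study.
import numpy as np
from scipy.signal import hilbert

fs = 20e6                      # RF sampling rate in Hz (illustrative)
c = 1540.0                     # assumed speed of sound in tissue, m/s
n = 4096
rng = np.random.default_rng(2)
rf_line = rng.standard_normal(n) * np.exp(-np.arange(n) / 800.0)   # fake RF data

depth = np.arange(n) * c / (2 * fs)      # sample index -> depth in meters
envelope = np.abs(hilbert(rf_line))      # amplitude envelope of the RF line

deep = depth > 0.05                      # look beyond ~5 cm (placeholder cutoff)
deep_amplitude = envelope[deep].mean() / envelope.max()

# Intuition: a wet lung (B-lines) keeps reverberating energy at depth, while a
# normally aerated lung (A-lines) does not, so larger values suggest more fluid.
print(f"Normalized deep-signal amplitude: {deep_amplitude:.3f}")
```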

Left: examples of lung ultrasound images containing A-lines and B-lines and the corresponding signals detected by the ultrasound imaging probe. Right: conceptual diagram of the use of LUSS for monitoring of lung condition and a prototype LUSS element. Adhesive LUSS elements are applied in 10 anatomic locations and automated signal processing software displays lung fluid score for each element on a 4-point scale: none (green), mild (yellow), moderate (orange) or severe (red).

Next, we built miniature LUSS elements powered by a custom-built multiplexed transmit-receive circuit and tested them in a benchtop lung model, a polyurethane sponge containing variable volumes of water, side by side with the LUS imaging probe previously used in patients. Wetting the sponge produced B-lines on the ultrasound images, and the associated ultrasound signals were similar to those measured by the LUSS elements. We hope to proceed with testing LUSS in human patients in the near future. This work was supported by NIH R01EB023910.

1aBAb9 – Extracting Human Skull Properties by Using Ultrasound and Artificial Intelligence

Churan He (1) – churanh2@illinois.edu
Yun Jing (2) – jing.yun@psu.edu
Aiguo Han (1) – han51@illinois.edu

1. Department of Electrical and Computer Engineering
The University of Illinois at Urbana-Champaign
306 North Wright Street
Urbana, IL 61801

2. Graduate Program in Acoustics
Pennsylvania State University
201 Applied Science Building
University Park, PA 16802

Popular version of paper 1aBAb9 – Human skull profile and speed of sound estimation using pulse-echo ultrasound signals with deep learning

Presented Monday morning, November 29, 2021

181st Meeting of the Acoustical Society of America in Seattle, Washington.

Ultrasound is a tremendously valuable tool for medical imaging and therapy of the human body. When it comes to applications in the brain, however, the presence of the skull poses severe challenges to both imaging and therapy. The adult human skull induces significant distortions (called phase aberrations) in the acoustic waves. These aberrations result in blurred brain images that are extremely challenging to interpret. The skull also distorts and shifts the acoustic focus, posing challenges for brain therapy (such as treating essential tremor and brain tumors) with high-intensity focused ultrasound.

Prior research has shown that phase aberrations can be most accurately corrected if the skull profile (i.e., thickness distribution) and speed of sound are known a priori. Various methods have been proposed to estimate the skull profile and speed of sound. The gold-standard method used in treatment planning derives the skull properties from computed-tomography (CT) images of the skull. The CT-based method, however, entails ionizing radiation, potentially causing harm to the patients.

We propose an ultrasound-based method to extract the skull properties. This method is safer because ultrasound does not involve ionizing radiation. We developed an artificial intelligence (AI) algorithm, specifically a deep learning algorithm, that predicts skull thickness and sound speed from ultrasound echo signals reflected by the skull.

We tested the feasibility of our method through a simulation study (Figure 1). We performed acoustic simulations using realistic skull models built from CT scans of five ex vivo human skulls (see animation). The simulations generated a large number (7,891) of ultrasound signals from skull segments for which the thickness and sound speed were known. We used 80% of the data to train our AI algorithm and 20% for testing. We developed and tested two versions of the algorithm: one took the original echo signal as input, and the other used a transformed signal (its Fourier transform, which displays the signal’s frequency spectrum).
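
To illustrate the general idea of the second approach, the sketch below converts simulated echo signals to their Fourier magnitude spectra and trains a small fully connected network to regress thickness and sound speed on an 80/20 split. The synthetic data, network size, and training settings are illustrative assumptions only, not the architecture or dataset used in the study.

```python
# Sketch: regressing skull thickness and sound speed from the Fourier magnitude
# of pulse-echo signals with a small neural network (PyTorch). The synthetic
# data, network size, and training settings are illustrative assumptions, not
# the architecture or dataset used in this study.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(3)
n_signals, n_samples = 1000, 512                 # placeholder dataset size
echoes = rng.standard_normal((n_signals, n_samples)).astype(np.float32)
thickness = rng.uniform(3.0, 10.0, n_signals).astype(np.float32)    # mm
speed = rng.uniform(2000.0, 2800.0, n_signals).astype(np.float32)   # m/s

# Transformed input: magnitude of the one-sided Fourier spectrum of each echo.
spectra = np.abs(np.fft.rfft(echoes, axis=1)).astype(np.float32)
X = torch.from_numpy(spectra)
y = torch.from_numpy(np.stack([thickness, speed], axis=1))

# 80/20 split for training and testing, as in the study.
n_train = int(0.8 * n_signals)
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]

model = nn.Sequential(
    nn.Linear(X.shape[1], 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 2),                            # outputs: [thickness, sound speed]
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                            # mean absolute error

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    mae = (model(X_test) - y_test).abs().mean(dim=0).tolist()
print(f"Test MAE -- thickness: {mae[0]:.2f} mm, sound speed: {mae[1]:.1f} m/s")
```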

Both versions of our AI algorithm achieved accurate results, with the version using transformed signals proving more accurate. Using the original signal as input, we obtained a mean absolute error of 0.3 mm for skull thickness prediction and 31 m/s for sound speed prediction. When transformed signals were used, the error in thickness prediction was reduced to 0.2 mm (3% of the average skull thickness of 6.3 mm), and the error in sound speed prediction was reduced to 25 m/s (1% of the average sound speed of 2340 m/s). In the case of transformed signals, the correlation between predicted values and the ground truth was 0.98 for thickness and 0.81 for speed of sound (Figure 2), where a correlation of 1 represents perfect agreement.

Collectively, our preliminary results demonstrate that the developed AI algorithm can accurately estimate skull thickness and speed of sound, providing a potentially powerful tool to correct skull phase aberration for transcranial ultrasound brain imaging and therapy.

[Animation: 3-dimensional density map of one of the skulls used in the study]

Figure 1. Schematic diagram of the simulation study


Figure 2. a) Scatter plot of extracted speed of sound versus ground truth; b) scatter plot of extracted thickness versus ground truth.