University of Minnesota 75 East River Parkway Minneapolis, MN 55455 United States
Christopher Feist, Christopher Milliren, Lori Arent, Julia B. Ponder, Peggy Nelson, Edward J. Walsh
Popular version of 3aABb4 – Behavioral responses of bald eagles (Haliaeetus leucocephalus) to acoustic stimuli in a laboratory setting
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018607
The ultimate goal of this project is to protect eagles by discouraging these charismatic birds from entering the airspace of wind energy facilities. The specific question under consideration is whether an acoustic cue, a sound, can be used for that purpose, to steer eagles out of harm’s way. Our goal in this study was to take the next step along our research path and determine whether the behaviors of bald eagles are affected by different sound stimuli in a controlled laboratory environment.
Perhaps as expected, behavioral responses varied significantly. Some birds explored their immediate airspace avidly, while others exhibited a more restrained set of behavioral responses to sound stimulation.
To get a feeling for the task, consider the reaction of this eagle to a sound stimulus in a quiet laboratory setting.
To collect these data, a bird was placed in a sound-damped room and the experiment was conducted from a control center just outside the exposure space. Birds were videotaped as sounds were delivered to one of two speakers, and a group of unbiased judges was asked (1) to determine, based on its behavior, whether the bird responded to the sound, (2) to qualitatively assess the strength of the response, and (3) to identify the behaviors associated with the response. Twelve sounds were tested, and judges were instructed to observe the eagle during a specified time window without knowing which sound, if any, had been played. Spectrograms of the sounds tested are shown in the figure.
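To make the scoring concrete, here is a minimal sketch of how blinded judge ratings like these might be tallied per stimulus. The stimulus names, trial counts, and strength scale below are invented for illustration, not the study's actual data or scoring rubric:

```python
# Hypothetical tally of blinded judge ratings: for each stimulus,
# count how many judged trials drew a response and average the
# judges' strength scores (0 = none, 1 = weak, 2 = strong).
# All names and numbers here are illustrative placeholders.
from collections import defaultdict

ratings = [  # (stimulus, responded, strength)
    ("adult eagle call", True, 2),
    ("adult eagle call", True, 1),
    ("mobbing crows", True, 2),
    ("broadband tone", False, 0),
    ("broadband tone", True, 1),
]

counts = defaultdict(lambda: {"trials": 0, "responses": 0, "strength": 0})
for stimulus, responded, strength in ratings:
    c = counts[stimulus]
    c["trials"] += 1
    c["responses"] += int(responded)
    c["strength"] += strength

for stimulus, c in counts.items():
    rate = c["responses"] / c["trials"]
    mean_strength = c["strength"] / c["trials"]
    print(f"{stimulus}: response rate {rate:.2f}, mean strength {mean_strength:.2f}")
```

A per-stimulus response rate and mean strength computed this way would support the kind of comparison described below, where some stimuli elicited both more responses and stronger ones.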
By far the most common response was an attempt to localize the sound source by turning the head toward a speaker, although birds also frequently tilted their heads in response to stimuli. Females were slightly more responsive to sound stimuli than males, and, not surprisingly, stimuli that elicited a higher number of responses also elicited stronger or more evident responses. Complex and natural sounds, for example sounds produced by eagles and eaglets and the calls of mobbing crows, elicited more and stronger responses than man-made stimuli. Generally, bald eagles were fairly accurate in locating the direction from which a sound originated, and, as before, females performed better than males.
The results from this study provide a critical step in the effort to protect eagles as we move away from fossil fuels and rely more on wind power. We come away with a better understanding of the types of sound signals that elicit more and stronger responses in bald eagles, and with confidence that we will be able to objectively assess behavioral responses in more natural settings. We now know what these magnificent birds can hear, and we know that certain sound stimuli are more effective than others in evoking behavioral responses. This takes us one step closer to our ultimate goal: to save bald eagles from undesirable outcomes and to give wind facility developers the tools needed to manage their facilities in an even more eco-friendly manner.
NASA Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109, United States
Daniel C. Bowman2, Emalee Hough3, Zach Yap3, John D. Wilding4, Jamey Jacob3, Brian Elbing3, Léo Martire1, Attila Komjathy1, Michael T. Pauken1, James A. Cutts1, Jennifer M. Jackson4, Raphaël F. Garcia5, and David Mimoun5
1. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA
2. Sandia National Laboratories, Albuquerque, New Mexico, USA
3. Oklahoma State University, Stillwater, OK, USA
4. Seismological Laboratory, California Institute of Technology, Pasadena, CA, USA
5. Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO), Toulouse, France
Popular version of 4aPAa1 – Development of Balloon-Based Seismology for Venus through Earth-Analog Experiments and Simulations
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018837
Venus has often been described as a “hellscape,” and deservedly so – the surface of Venus simultaneously scorches and crushes spacecraft that land on it, with temperatures exceeding 460 degrees Celsius (~860 °F) and atmospheric pressures exceeding 90 atmospheres. While conditions on the surface of Venus are extreme, the temperature and pressure drop dramatically with altitude. At about 50-60 km above the surface, the temperature (-10 to 70 °C) and pressure (~0.2-1 atmosphere) resemble those on Earth. At this altitude, the challenge of surviving clouds of sulfuric acid is more manageable than that of surviving the simultaneous squeeze and scorch at the surface. This is evidenced by the fact that the two Vega balloons floated in the atmosphere of Venus by the Soviet Union in 1985 transmitted data for approximately 48 hours (and presumably survived for much longer), compared to 2 hours and 7 minutes, the longest any spacecraft landed on the surface has survived.

A new generation of Venus balloons is now being designed that can last over 100 days and change altitude to navigate different layers of Venus’ atmosphere. Our research focuses on developing technology to detect signatures of volcanic eruptions and “venusquakes” from balloons in the Venus clouds. Doing so allows us to quantify the level of ongoing activity on Venus and associate this activity with maps of the surface, which in turn allows us to study the planet’s interior from high above it. Conducting this experiment from a balloon floating at an altitude of 50-60 km provides a significantly extended observation period, surpassing the lifespan of any spacecraft that could be landed on the surface with current technology.
We propose to utilize low-frequency sound waves known as infrasound to detect and characterize venusquakes and volcanic activity. These waves are generated by coupling between the ground and the atmosphere of the planet – when the ground moves, it acts like a drum, producing weak infrasound waves in the atmosphere that can then be detected by pressure sensors deployed from balloons, as shown in Figure 1. On Venus, the conversion from ground motion to infrasound is up to 60 times more efficient than on Earth.
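A back-of-the-envelope way to see where a factor of that order comes from (this is a simplification, not the authors' calculation): the pressure a large vibrating surface radiates into a gas scales with the gas's specific acoustic impedance, p ≈ ρc·v, so the Venus/Earth ratio of near-surface atmospheric impedances approximates the coupling gain. The density and sound-speed values below are assumed round numbers:

```python
# Rough impedance comparison (assumed textbook-style values, not
# mission data): pressure radiated by ground motion scales with the
# atmosphere's specific acoustic impedance rho * c.
rho_earth, c_earth = 1.2, 343.0     # kg/m^3, m/s (sea level, ~20 C)
rho_venus, c_venus = 65.0, 410.0    # kg/m^3, m/s (near Venus surface, assumed)

z_earth = rho_earth * c_earth       # ~412 Pa*s/m
z_venus = rho_venus * c_venus       # ~26,650 Pa*s/m
gain = z_venus / z_earth
print(f"Venus/Earth impedance ratio: ~{gain:.0f}x")
```

With these assumed values the ratio comes out in the mid-60s, the same order as the "up to 60 times" figure quoted above.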
Figure 1: Infrasound is generated when the atmosphere reverberates in response to the motion of the ground and can be detected on balloons. Infrasound can travel directly from the site of the event to the balloon (epicentral) or be generated by seismic waves as they pass underneath the balloon and travel vertically upward (surface wave infrasound).
We are developing this technique by first demonstrating that earthquakes and volcanic eruptions on Earth can be detected by instruments suspended from balloons. These data also allow us to validate our simulation tools and estimate what such signals may look like on Venus. In flight experiments over the last few years, not only several earthquakes of varying magnitudes and volcanic eruptions but also other Venus-relevant phenomena, such as lightning and mountain waves, have been detected from balloons, as shown in Figure 2.
Figure 2: Venus-relevant events on Earth detected on high-altitude balloons using infrasound. Pressure waves from the originating event travel to the balloon and are recorded by barometers suspended from the balloon.
In the next phase of the project, we will generate a catalog of analogous signals on Venus and develop signal identification tools that can autonomously identify signals of interest on a Venus flight.
Copyright 2023, all rights reserved. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
NUWC Division Newport, NAVSEA, Newport, RI, 02841, United States
Dr. Lauren A. Freeman, Dr. Daniel Duane, Dr. Ian Rooney from NUWC Division Newport and
Dr. Simon E. Freeman from ARPA-E
Popular version of 1aAB1 – Passive Acoustic Monitoring of Biological Soundscapes in a Changing Climate
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018023
Climate change is impacting our oceans and marine ecosystems across the globe. Passive acoustic monitoring of marine ecosystems has been shown to provide a window into the heartbeat of an ecosystem, its relative health, and even information such as how many whales or fish are present in a given day or month. By studying marine soundscapes, we collate all of the ambient noise at an underwater location and attribute parts of the soundscape to wind and waves, to boats, and to different types of biology. Long-term biological soundscape studies allow us to track changes in ecosystems with a single, small instrument called a hydrophone.

I’ve been studying coral reef soundscapes for nearly a decade now, and am starting to have time series long enough to begin to see how climate change affects soundscapes. Some of the most immediate and pronounced impacts of climate change on shallow ocean soundscapes are evident in varying levels of ambient biological sound. We found a ubiquitous trend at research sites in both the tropical Pacific (Hawaii) and the sub-tropical Atlantic (Bermuda): warmer water tends to be associated with higher ambient noise levels. Different frequency bands provide information about different ecological processes (such as fish calls, invertebrate activity, and algal photosynthesis). The response of each of these processes to temperature changes is not uniform; however, each type of ambient noise increases in warmer water.

At some point, ocean warming and acidification will fundamentally change the ecological structure of a shallow water environment. This would also be reflected in a fundamentally different soundscape, as described by peak frequencies and sound intensity.
While I have not monitored an ecosystem through a phase shift at a single site, I have documented that healthy coral reefs with high levels of parrotfish and reef fish have fundamentally different soundscapes, as reflected in their acoustic signatures in different frequency bands, than coral reefs that are degraded and overgrown with fleshy macroalgae. This suggests that long-term soundscape monitoring could also track these ecological phase shifts under climate stress and other impacts to marine ecosystems, such as overfishing.
A healthy coral reef research site in Hawaii with vibrant corals, many reef fish, and copious nooks and crannies for marine invertebrates to make their homes.
Soundscape segmented into three frequency bands capturing fish vocalizations (blue), parrotfish scrapes (red), and invertebrate clicks along with algal photosynthesis bubbles (yellow). All features show an increase in ambient noise level (PSD, y-axis) with increasing ocean temperature at each site studied in Hawaii.
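The band-splitting step behind figures like this can be sketched in miniature. The snippet below uses synthetic data, an assumed sample rate, and assumed band edges (none of these come from the study); it shows the basic operation of summing spectral power over frequency-bin ranges, which underlies band-level soundscape metrics:

```python
# Illustrative sketch (pure Python, synthetic data, not the study's
# pipeline): split a hydrophone trace into frequency bands by summing
# DFT power over bin ranges.
import cmath
import math

fs = 2000                      # sample rate in Hz (assumed)
n = 1000
# Synthetic "soundscape": a 100 Hz fish-like tone plus a weaker
# 600 Hz click-like tone.
x = [math.sin(2 * math.pi * 100 * t / fs)
     + 0.3 * math.sin(2 * math.pi * 600 * t / fs) for t in range(n)]

def band_power(signal, f_lo, f_hi):
    """Sum |DFT|^2 over bins whose frequency lies in [f_lo, f_hi)."""
    total = 0.0
    for k in range(len(signal) // 2):
        freq = k * fs / len(signal)
        if f_lo <= freq < f_hi:
            coeff = sum(s * cmath.exp(-2j * math.pi * k * i / len(signal))
                        for i, s in enumerate(signal))
            total += abs(coeff) ** 2
    return total

low = band_power(x, 50, 300)    # "fish call" band (assumed edges)
high = band_power(x, 500, 800)  # "invertebrate click" band (assumed edges)
print(f"low band power {low:.0f}, high band power {high:.0f}")
```

In practice a long-term analysis would use an efficient FFT-based PSD estimate rather than this naive DFT, but the band-summing logic is the same.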
While there is great interest in studying the structure of Venus because it is believed to be similar to Earth’s, there are no direct seismic measurements on Venus. This is because the Venus surface temperature is too high for electronics, but conditions are milder in the middle of the Venus atmosphere. This has motivated interest in studying seismic activity using low-frequency sound measurements from high-altitude balloons. Recently, this method was demonstrated on Earth, with weak earthquakes detected from balloons flying at twice the altitude of commercial airplanes. Video 1 shows a balloon launch for these test flights. Due to the denser atmosphere on Venus, the coupling between a venusquake and the resulting sound waves should be much greater, making the sound louder on Venus. However, the higher-density atmosphere, combined with vertical changes in wind speed, is also likely to increase the amount of wind noise on these sensors. Thus, development of new technology to reduce wind noise on high-altitude balloons is needed.
Video 1. Video of a balloon launch during the summer of 2021. Video courtesy of Jamey Jacob.
Several different designs were proposed and ground-tested to identify potential materials for compact windscreens. The testing included a long-term deployment outdoors so that the sensors would be exposed to a wide range of wind speeds and conditions. Separately, the sensors were exposed to controlled low-frequency sounds to test whether the windscreens were also reducing the loudness of the signals of interest. All of the designs showed significant reduction in wind noise with minimal reduction of the controlled sounds, but one design in particular outperformed the others. This design uses a canvas fabric on the outside of a box, as shown in Figure 1, combined with a dense foam material on the inside.
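A test like this is typically scored by comparing levels with and without the windscreen in decibels. The sketch below shows that standard comparison with invented RMS pressure values (the actual test levels are not reported here): a good windscreen cuts wind noise by many dB while barely attenuating the controlled tone.

```python
# Sketch of windscreen scoring (numbers are invented, not from the
# tests): dB reduction of wind-noise RMS vs. dB attenuation of a
# controlled low-frequency signal.
import math

def db_reduction(rms_without, rms_with):
    """Level drop in dB when the windscreen is added."""
    return 20 * math.log10(rms_without / rms_with)

wind_db = db_reduction(rms_without=0.80, rms_with=0.10)    # Pa, illustrative
signal_db = db_reduction(rms_without=0.50, rms_with=0.45)  # Pa, illustrative
print(f"wind noise cut by {wind_db:.1f} dB, signal reduced only {signal_db:.1f} dB")
```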
Figure 1. Picture of the balloon carrying the low-frequency sound sensors. This flight compared an early windscreen design with a sensor that had no windscreen. Image courtesy of Brian Elbing.
The next step is to fly this windscreen on a high-altitude balloon, especially on windier days and with a long flight line, to increase the amount of wind that the sensors experience. The winds at the float altitude of these balloons will change direction in May and then rapidly strengthen, making this the target window for testing the new design.
co-chair, Interagency Working Group on Ocean Sound and Marine Life (IWG-OSML) Washington, DC 20001 United States
Thomas C. Weber – member, IWG-OSML, Washington, DC
Heather Spence – co-chair, IWG-OSML, Washington, DC
Grace C. Smarsh – Executive Secretary, IWG-OSML, Washington, DC
Popular version of 1aAB9 – Ocean Acoustics and the UN Decade of Ocean Science for Sustainable Development
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018031
The Acoustic Environment is, collectively, the combination of all sounds within a given area modified by interactions with the environment. This definition includes both the sounds of nature and human use and is used by the US National Park Service as a basis for characterizing, managing, and preserving sound as one of the natural resources within the park system. Thinking in terms of a theatre, the Acoustic Environment is where scenes emerge from the interaction of individual actors (or sources) with all other aspects of the stage (the environment). The audience (or receiver) derives information from a continuous series of actions and interactions that combine to tell a story.

In developing the Ocean Decade Research Programme on the Maritime Acoustic Environment (OD-MAE https://tinyurl.com/463uwjk5) we applied the theatre analogy to underwater environments, where acoustic scenes result from the dynamic combination of physical, biological, and chemical processes in the ocean that define the field of oceanography. In the science of Ocean Acoustics, these highly intertwined relationships are reflected in the information available to us through sound and can be used as a means to both differentiate among various ocean regions and tell us something – stories – about processes occurring within the oceans. The use of sound for understanding the natural environment is particularly effective in the oceans because underwater sound travels very efficiently over large distances, allowing us to probe the vast expanses of the globe. As an example of this, the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) is capable of monitoring nearly the entire volume of the world’s oceans for underwater nuclear explosions with only eleven underwater acoustic listening stations.
In the context of the UN Decade of Ocean Science for Sustainable Development (oceandecade.org), the OD-MAE program seeks to raise awareness about and support research related to the information available through sound that reflects the regional ocean environment and its state. For example, the noisiest places in the ocean have been found to be in Alaskan and Antarctic fjords, where sound energy levels created by the release of trapped air by melting ice exceed those of many other sources, including weather and shipping. Sound energy increases with melt rate as more bubbles are released, providing information about the amount of fresh water being added into the oceans along with other climate indicators.
Representative glacial environment. Image credit: National Park Service
Ambient Sound recorded near Hubbard and Turner Glaciers near Yakutat, AK. Credit: Matthew Zeh, Belmont University and Preston Wilson, Univ. of Texas at Austin
Similarly, in warmer climates, the acoustic environment of coral reefs can provide scientists an indication of a reef system’s health. Healthy reef systems support much more life and as a result more sound is produced by the resident marine life. This is evident when contrasting the sounds recorded at a healthy reef system to those recorded at a location that experienced bleaching owing to increased water temperature and climate change.
Representative healthy and degraded reef systems. Image credits NOAA
Sound of representative healthy reef system. Credit: Steve Simpson, University of Bristol, UK
Sound of representative degraded reef system. Credit: Steve Simpson, University of Bristol, UK
As a research program, the OD-MAE seeks to quantify information about the acoustic environment such that we can assess the current state and health of the oceans, from shallow tropical reefs to the very deepest depths of the ocean. Telling the stories of the ocean by listening to it will help provide knowledge and tools for sustainably managing development and even restoring maritime environments.
Pettit, E. C., Lee, K. M., Brann, J. P., Nystuen, J. A., Wilson, P. S., and O’Neel, S. (2015). Unusually loud ambient noise in tidewater glacier fjords: A signal of ice melt. Geophys. Res. Lett., 42, 2309–2316. doi: 10.1002/2014GL062950.
https://artsandculture.google.com/story/can-we-use-sound-to-restore-coral-reefs/RgUBYCe8v8Ol0Q [last visited 5.3.2023]
Williams, B. R., McAfee, D., and Connell, S. D. (2021). Repairing recruitment processes with sound technology to accelerate habitat restoration. Ecological Applications, 31(6), e02386. doi: 10.1002/eap.2386.
Matthew Neal – firstname.lastname@example.org Instagram: @matthewneal32
Department of Otolaryngology and Communicative Disorders University of Louisville Louisville, Kentucky 40208 United States
Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736
Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel patients during the hearing aid purchase process, and a device often must be purchased before patients can try it in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need the devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.
This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After a new hearing aid feature is turned on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings right, hearing aid purchasers must also decide which ‘technology level’ they would like to buy. Patients are typically offered three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per step up in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.
A patient using the demo first puts on a custom pair of wired hearing aids. These hearing aids are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and they process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might in the real world. The system is currently being developed further, with plans to implement it in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
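The core operation behind mapping a scene's audio to each device microphone is convolution of each dry source with an impulse response from the source position to that microphone. The sketch below is a minimal illustration of that idea with toy numbers, not the project's actual rendering code, which would run in real time and track head rotation:

```python
# Minimal auralization sketch (illustrative only): render a dry
# "talker" signal at each hearing-aid microphone by convolving it
# with a per-microphone impulse response.
def convolve(signal, ir):
    """Direct-form FIR convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# Toy data: a short source signal and impulse responses for the left
# and right device microphones (values are invented).
talker = [1.0, 0.5, -0.25]
ir_left = [0.9, 0.2]    # direct path plus one early reflection
ir_right = [0.6, 0.3]

left_feed = convolve(talker, ir_left)    # audio fed to the left device
right_feed = convolve(talker, ir_right)  # audio fed to the right device
print(left_feed, right_feed)
```

A real-time system would use fast block-based convolution and swap impulse responses as the listener's head turns, but the per-microphone signal path is the same in principle.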
Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.