Noise reduction for low frequency sound measurements from balloons on Venus

Taylor Swaim – tswaim@okstate.edu

Oklahoma State University
Stillwater, Oklahoma 74078
United States

Kate Spillman
Emalee Hough
Zach Yap
Jamey D. Jacob
Brian R. Elbing (twitter: @ElbingProf)

Popular version of 2pCA6 – Infrasound noise mitigation on high altitude balloons
Presented at the 184th ASA Meeting
Read the article in Proceedings of Meetings on Acoustics

While there is great interest in studying the structure of Venus, which is believed to be similar to Earth's, there are no direct seismic measurements on Venus. The surface is too hot for electronics, but conditions are milder in the middle of the Venus atmosphere. This has motivated interest in studying seismic activity using low-frequency sound measurements from high-altitude balloons. Recently, this method was demonstrated on Earth, with weak earthquakes detected from balloons flying at twice the altitude of commercial airplanes. Video 1 shows a balloon launch for these test flights. Because the atmosphere on Venus is denser, the coupling between a Venus-quake and the sound waves it generates should be much stronger, making the sound louder than it would be on Earth. However, the denser atmosphere combined with vertical changes in wind speed is also likely to increase the amount of wind noise on these sensors. Thus, a new technology is needed to reduce wind noise on a high-altitude balloon.

Video 1. Video of a balloon launch during the summer of 2021. Video courtesy of Jamey Jacob.

Several different designs were proposed and ground tested to identify potential materials for compact windscreens. The testing included a long-term outdoor deployment so that the sensors would be exposed to a wide range of wind speeds and conditions. Separately, the sensors were exposed to controlled low-frequency sounds to test whether the windscreens were also reducing the loudness of the signals of interest. All of the designs showed a significant reduction in wind noise with minimal reduction of the controlled sounds, but one design in particular outperformed the others. This design combines a canvas fabric on the outside of a box, as shown in Figure 1, with a dense foam material on the inside.

Figure 1. Picture of the balloon carrying the low-frequency sound sensors. This flight compared an early windscreen design against a sensor with no windscreen. Image courtesy of Brian Elbing.

The next step is to fly this windscreen on a high-altitude balloon, ideally on windier days and with a long flight line to increase the amount of wind the sensors experience. The wind at the float altitude of these balloons changes direction in May and then rapidly strengthens, so this will be the target window for testing the new design.

Diving into the Deep End: Exploring an Extraterrestrial Ocean

Grant Eastland – grant.c.eastland.civ@us.navy.mil

Naval Undersea Warfare Center Division, Keyport, Test and Evaluation Department, Keyport, Washington, 98345, United States

Popular version of 4aPAa12 – Considerations of undersea exploration of an extraterrestrial ocean
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018848

As we venture out beyond our home planet to explore our neighbors in the solar system, we have encountered the most extreme environments we could have imagined, presenting some of the greatest engineering challenges. Probes and landers have measured temperatures, atmospheres, and surfaces that would be deadly for human explorers. However, no extraterrestrial ocean has been studied beyond remote observation, even though oceans are the least explored portions of our own planet. Remarkably, fly-by planetary probes have found evidence of possible oceans on two of Jupiter’s moons, Europa and Ganymede, and of a potential ocean, as well as lakes and rivers, on Titan, a moon of Saturn. Europa could have a saltwater ocean between 60 and 90 miles deep, covered by up to 15 miles of ice. For comparison, the deepest point in Earth’s ocean is only about 7.5 miles, roughly a tenth of that depth. The extreme pressures at such depths would be difficult to withstand with current technology, and acoustic propagation could behave differently as well. At those pressures, water might not freeze until the temperature drops to about 8°F (~260 K), allowing liquid water at temperatures not seen in our oceans. The effect shows up in the speed of sound, shown in Figure 1, which was estimated with a creative, numerically simulated modelling scheme. The method combined Earth data with predictive speculation and physical intuition.

Figure 1. Estimated speed of sound in Europa’s deep ocean beneath a 30 km ice sheet. The water may stay liquid down to about 260 K (8°F), kept warm by a currently unknown mechanism, probably related to Jupiter’s gravitational pull.
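To give a concrete flavor of this kind of extrapolation, here is a minimal Python sketch, assuming an Earth-derived empirical sound speed formula (Medwin’s) pushed far outside its validated range, together with an invented temperature and salinity profile beneath the ice shell. Every input value is a guess for illustration only, not a result from the presentation.

```python
import numpy as np

# Medwin's simplified Earth-ocean formula for sound speed (m/s):
#   c = 1449.2 + 4.6*T - 0.055*T^2 + 0.00029*T^3
#       + (1.34 - 0.010*T)*(S - 35) + 0.016*z
# T: temperature (deg C), S: salinity (ppt), z: depth (m).
# NOTE: this formula is only validated for Earth-like conditions
# (shallow depths, T >= 0 C); applying it ~100 km down on Europa
# is pure "predictive speculation" in the spirit of the talk.
def sound_speed(T_c, S_ppt, z_m):
    return (1449.2 + 4.6 * T_c - 0.055 * T_c**2 + 0.00029 * T_c**3
            + (1.34 - 0.010 * T_c) * (S_ppt - 35.0) + 0.016 * z_m)

# Hypothetical Europa profile: ocean from the ice base (30 km) down to
# ~130 km, temperature rising from -13 C (~260 K) at the ice to 0 C at
# the seafloor, salinity guessed at 35 ppt -- all assumed values.
depth = np.linspace(30e3, 130e3, 50)                 # depth below surface (m)
temperature = np.interp(depth, [30e3, 130e3], [-13.0, 0.0])
salinity = np.full_like(depth, 35.0)

c = sound_speed(temperature, salinity, depth)
for z, ci in zip(depth[::10], c[::10]):
    print(f"depth {z/1000:6.1f} km -> sound speed ~ {ci:7.1f} m/s")
```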

On Titan, a moon of Saturn, there are lakes and rivers of hydrocarbons such as methane and ethane. For these compounds to be liquid, the temperature has to be about -297°F. We know how sound interacts with methane on Earth, where it is a gas, but we would have to bring it to cryogenic temperatures to study its acoustics as a liquid. We would also have to build systems that can swim around at such temperatures to explore what lies underneath. In liquid-water oceans, like some of the extraterrestrial oceans predicted to exist, conditions may still be amenable to life. But discovering that life will require autonomous systems that make measurements and gather information, letting humans see through the eyes of our technology. The drive to explore extreme ocean environments could provide evidence of life beyond Earth, since where there is water, life is possible.

A moth’s ear inspires directional passive acoustic structures

Lara Díaz-García – lara.diaz-garcia@strath.ac.uk
Twitter: @laradigar23
Instagram: @laradigar

Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, Lanarkshire, G1 1RD, United Kingdom

Popular version of 2aSA1 – Directional passive acoustic structures inspired by the ear of Achroia grisella, presented at the 183rd ASA Meeting.

Read the article in Proceedings of Meetings on Acoustics

When most people think of microphones, they picture the ones singers use or the kind found in a karaoke machine, but they might not realize that much smaller microphones are all around us. A current smartphone contains three or four tiny microphones, so miniaturization is an ongoing goal in microphone development. These microphones are strategically placed to achieve directionality. Directionality means the microphone detects and transmits the desired sound signal while discarding noise coming from directions other than the speaker’s. This functionality is also desirable for hearing-implant users. Ideally, you want to be able to tell what direction a sound is coming from, as people with unimpaired hearing do.

But combining small size with directionality presents problems. People with unimpaired hearing can tell where sound is coming from by comparing the input received by each ear, conveniently sitting on opposite sides of the head and therefore receiving sounds at slightly different times and with different intensities. The brain does the math and computes what direction the sound must be coming from. The problem is that, to use this trick, you need two microphones separated far enough that the differences in arrival time and intensity are not negligible, and that goes against microphone miniaturization. What to do if you want a small but directional microphone, then?
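To see why spacing matters, the short Python sketch below estimates the arrival-time difference between two microphones for a sound arriving at an angle. The spacings and angle are made-up values for illustration; the point is how quickly the time difference shrinks as the microphones move closer together.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def arrival_time_difference(spacing_m, angle_deg):
    """Time-of-arrival difference between two microphones separated by
    spacing_m, for a plane wave arriving angle_deg away from broadside."""
    return spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# Head-sized spacing (~17 cm) versus a phone-sized spacing (~5 mm),
# for a sound arriving 30 degrees off to one side (assumed values).
for spacing in (0.17, 0.005):
    dt = arrival_time_difference(spacing, 30.0)
    print(f"spacing {spacing*100:5.1f} cm -> time difference {dt*1e6:7.1f} microseconds")
```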

When looking for inspiration for novel solutions, scientists often look to nature, where evolution favors energy efficiency and simple designs. Insects are one example of animals that face the challenge of directional hearing at small scales. The researchers chose to look at the lesser wax moth (Fig. 1), whose directional hearing was first observed in the 1980s. The males produce a mating call that the females can track even when one of their ears is pierced. This implies that, instead of using both ears as humans do, these moths achieve directional hearing with just one ear.

Figure 1. Lesser wax moth specimen with scale bar. Image courtesy of Birgit E. Rhode (CC BY 4.0).

The working hypothesis is that the directionality comes from the asymmetrical shape and characteristics of the moth’s ear itself. To test this hypothesis, the researchers designed a model that resembles the moth’s ear and checked how it behaved when exposed to sound. The model consists of a thin elliptical membrane with two halves of different thicknesses. To build it, they used a readily available commercial 3D printer, which allows the design to be customized and samples to be fabricated in just a few hours. The samples were then placed on a turning surface, and the membrane’s response to sound coming from different directions was investigated (Fig. 2). The membrane was found to move more when sound arrives from one particular direction than from any other (Fig. 3), meaning the structure is passively directional. This means it could inspire a single small directional microphone in the future.

Figure 2. Laboratory setup to turn the sample (in orange, center of the picture) and expose it to sound from the speaker (left of the picture). Researcher’s own picture.
Figure 3. Sounds coming from the 0° direction elicit stronger movement in the membrane than sounds from other directions. Image adapted from Lara Díaz-García’s original paper.
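As a rough illustration of how such a directional response can be quantified (the analysis in the paper itself may differ), the Python sketch below takes hypothetical membrane-motion amplitudes measured at several incidence angles and reports the preferred direction and a front-to-back ratio. The numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical membrane-motion amplitudes (arbitrary units) measured while
# rotating the sample in 45-degree steps. Made-up numbers, not data from
# the paper.
angles = np.arange(0, 360, 45)
amplitude = np.array([1.00, 0.70, 0.45, 0.35, 0.30, 0.36, 0.48, 0.72])

best = angles[np.argmax(amplitude)]          # direction of strongest response
opposite = (best + 180) % 360                # direction facing away from it
front_to_back = amplitude[angles == best][0] / amplitude[angles == opposite][0]

print(f"strongest response at {best} degrees")
print(f"front-to-back amplitude ratio: {front_to_back:.1f}")
```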

1pEA7 – Oscillations of drag-reducing air pocket under fast boat hull

Konstantin Matveev – matveev@wsu.edu

Washington State University
Pullman, WA 99164

Popular version of 1pEA7 – Acoustic oscillations of drag-reducing air pocket under fast boat with stepped bottom
Presented Monday afternoon, May 23, 2022
182nd ASA Meeting
Click here to read the abstract

A fast boat usually consumes a lot of fuel to overcome water drag. Part of this resistance comes from water friction, which scales with the hull’s wetted area. By injecting air into a special recess in the hull bottom and maintaining a thin but large-area air pocket, total boat drag can be decreased by up to 30%.

Boat with bottom air cavity.

However, generating and keeping the bottom air pocket in waves is rather tricky: periodic wave pressure may excite an acoustic resonance in the compliant air cavity, resulting in large oscillations of the air cavity accompanied by a significant loss of air to the surrounding water flow. The deterioration of the air pocket will drastically increase the resistance of the hull, and the boat may be unable to reach speeds high enough to operate in a planing regime.

Side view of air-cavity hull in waves.

Bottom view of air-cavity hull in waves, showing increased air leakage.

A simplified oscillator model, similar to a mass on a spring, is employed in this study to describe and simulate oscillations of the air cavity under the boat hull. The main inertia in this process is the so-called added water mass, the mass of an effective volume of water beneath the air pocket, while the spring action comes from the compressibility of air inside the bottom recess.

Oscillator model for air cavity under hull in waves.

An air-cavity boat accelerating through waves may hit the resonance condition, when the frequency of encounter with waves coincides with the natural oscillation frequency of the air pocket under the hull. Simulations using the developed model have demonstrated that the acoustic oscillations may grow in magnitude and disintegrate the air cavity. However, if the boat accelerates quickly and the time spent near the resonance state is short, the oscillations do not have enough time to amplify, and the boat can successfully reach a high speed and glide on the water surface. Alternatively, increasing the damping, for example with baffles, morphing surfaces, or even sound from underwater loudspeakers, can suppress the oscillation growth as well. The presented model can help boat designers develop higher-performance boats.
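The behavior described above can be reproduced with a generic driven mass-spring-damper. The Python sketch below is a minimal stand-in, not the author’s actual model: the added mass, air-spring stiffness, damping, and wave forcing are all made-up values, and it simply shows how sweeping through resonance quickly keeps the peak cavity oscillation small compared with a slow sweep.

```python
import numpy as np

def peak_cavity_response(sweep_time, m=2000.0, k=5.0e5, c=800.0, F0=2.0e4,
                         f_start=0.5, f_end=5.0, dt=1e-3):
    """Driven mass-spring-damper as a stand-in for the air-cavity oscillator.
    The encounter frequency ramps linearly from f_start to f_end (Hz) over
    sweep_time seconds, passing through the natural frequency on the way.
    All parameter values are illustrative guesses, not real boat data."""
    n = int(sweep_time / dt)
    t = np.arange(n) * dt
    freq = f_start + (f_end - f_start) * t / sweep_time   # instantaneous Hz
    phase = 2 * np.pi * np.cumsum(freq) * dt              # integrated phase
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(n):
        a = (F0 * np.cos(phase[i]) - c * v - k * x) / m   # Newton's 2nd law
        v += a * dt                                       # semi-implicit Euler
        x += v * dt
        peak = max(peak, abs(x))
    return peak

f_natural = np.sqrt(5.0e5 / 2000.0) / (2 * np.pi)          # ~2.5 Hz
print(f"natural frequency ~ {f_natural:.1f} Hz")
for sweep in (5.0, 60.0):                                  # fast vs slow acceleration
    print(f"sweep through resonance in {sweep:4.0f} s -> "
          f"peak cavity displacement ~ {peak_cavity_response(sweep):.3f} (arb. units)")
```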

Adding Sound to Electric Vehicles Improves Pedestrian Safety

Study participants noticed cars before the minimum safe detection distance in most cases

Media Contact:
Larry Frum
AIP Media
301-209-3090
media@aip.org

SEATTLE, November 30, 2021 — While they decrease sound pollution, electric vehicles are so quiet that they can create a safety concern, particularly for the visually impaired. To address this, many governments have mandated that artificial sounds be added to electric vehicles.

In the United States, regulations require vehicle sounds to be detectable at certain distances for various vehicle speeds, with faster speeds requiring larger detection distances. Michael Roan of Penn State University, Luke Neurauter of the Virginia Tech Transportation Institute, and their team tested how well people detect electric vehicle sounds relative to these requirements.

Roan will discuss their methods and results in the talk, “Electric Vehicle Additive Sounds: Detection results from an outdoor test for sixteen participants,” on Tuesday, Nov. 30 at 1:25 p.m. Eastern U.S. at the Hyatt Regency Seattle. The presentation is part of the 181st Meeting of the Acoustical Society of America, taking place Nov. 29-Dec. 3.

Participants in the study were seated adjacent to a lane of the Virginia Tech Transportation Institute’s Smart Road facility and pressed a button upon hearing an approaching electric vehicle. This allowed the researchers to measure the probability of detection versus distance from the vehicle, a new criterion for evaluating safety compared to the mean detection distance.
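This criterion can be illustrated with a small hedged Python sketch (the data are hypothetical, not the study’s measurements): each trial records the vehicle’s distance at the moment of the button press, and the fraction of trials detected at or beyond a given distance gives a probability-versus-distance curve, which conveys more than a single mean detection distance.

```python
import numpy as np

# Hypothetical trials: distance (m) from the vehicle when the participant
# pressed the button, or None if the vehicle was never detected.
# These numbers are invented for illustration, not the study's data.
detections = [32, 18, 25, None, 40, 12, 22, None, 28, 35,
              15, 20, None, 30, 26, 19, 38, 24, None, 29]

# Probability that the vehicle was detected before closing within a
# given distance, compared against a single mean detection distance.
for threshold_m in (40, 30, 20, 10, 5):
    p = np.mean([d is not None and d >= threshold_m for d in detections])
    print(f"detected at {threshold_m:2d} m or farther in {p*100:3.0f}% of trials")

detected = [d for d in detections if d is not None]
print(f"mean detection distance (detected trials only): {np.mean(detected):.1f} m")
```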

“All of the cases had mean detection ranges that exceeded the National Highway Traffic Safety Administration minimum detection distances. However, there were cases where probability of detection, even at close ranges, never reached 100%,” said Roan. “While the additive sounds greatly improve detection distances over the no sound condition, there are cases where pedestrians still missed detections.”

Even after adding sound, electric vehicles are typically quieter than standard internal combustion engine vehicles. In urban environments, they would create less sound pollution.

Roan said further studies need to be done to investigate detection when all vehicles at an intersection are electric. Additive sounds could create a complex interference pattern that may result in some loud locations and other locations with very little sound.

———————– MORE MEETING INFORMATION ———————–
USEFUL LINKS
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eventpilotadmin.com/web/planner.php?id=ASAFALL21
Press Room: https://acoustics.org/world-wide-press-room/

WORLDWIDE PRESS ROOM
In the coming weeks, ASA’s Worldwide Press Room will be updated with additional tips on dozens of newsworthy stories and with lay language papers, which are 300 to 500 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio and video. You can visit the site during the meeting at https://acoustics.org/world-wide-press-room/.

PRESS REGISTRATION
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact AIP Media Services at media@aip.org. For urgent requests, staff at media@aip.org can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

3aEA7 – Interactive Systems for Immersive Spaces

Samuel Chabot – chabos2@rpi.edu
Jonathan Mathews – mathej4@rpi.edu
Jonas Braasch – braasj@rpi.edu
Rensselaer Polytechnic Institute
110 8th St
Troy, NY, 12180

Popular version of 3aEA7 – Multi-user interactive systems for immersive virtual environments
Presented Wednesday morning, December 01, 2021
181st ASA Meeting
Click here to read the abstract

In the past few years, immersive spaces have become increasingly popular. These spaces, most often used as exhibits and galleries, incorporate large displays that completely envelop groups of people, speaker arrays, and even reactive elements that respond to the actions of the visitors within. One of the primary challenges in creating productive applications for these environments is the integration of intuitive interaction frameworks. For users to take full advantage of these spaces, whether for productivity, education, or entertainment, the interfaces used to interact with data should be easy to understand and provide predictable feedback.

In the Collaborative Research-Augmented Immersive Virtual Environment, or CRAIVE-Lab, at Rensselaer Polytechnic Institute, we have integrated a variety of technologies to foster natural interaction with the space. First, we developed a dynamic display environment for our immersive screen, written in JavaScript, that makes it easy to create display modules for everything from images to remote desktops. Second, we have incorporated spatial information into these display objects, so that audiovisual content presented on the screen generates spatialized audio over our 128-channel speaker array at the corresponding location. Finally, we have installed a multi-sensor platform that integrates a top-down camera array and a 16-channel spherical microphone to provide continuous tracking of multiple users, voice activity detection associated with each user, and isolated audio.
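As a toy illustration of how an on-screen position can be tied to spatial audio (a simplified sketch, not the lab’s actual JavaScript display framework or 128-channel rendering pipeline), the Python snippet below pans a mono source between the two nearest loudspeakers of an assumed ring based on the display object’s azimuth. The speaker layout, the screen-to-azimuth mapping, and the panning law are all assumptions.

```python
import numpy as np

N_SPEAKERS = 128                                   # assumed ring of loudspeakers
SPACING_DEG = 360.0 / N_SPEAKERS

def pan_gains(source_azimuth_deg):
    """Constant-power panning between the two loudspeakers that straddle
    the source azimuth; returns an array of per-speaker gains."""
    az = source_azimuth_deg % 360.0
    left = int(az // SPACING_DEG) % N_SPEAKERS     # speaker just below azimuth
    right = (left + 1) % N_SPEAKERS                # next speaker around the ring
    frac = (az - left * SPACING_DEG) / SPACING_DEG # position between the pair
    gains = np.zeros(N_SPEAKERS)
    gains[left] = np.cos(frac * np.pi / 2)         # constant-power crossfade
    gains[right] = np.sin(frac * np.pi / 2)
    return gains

# A display object 40% of the way around a wrap-around screen maps to an
# azimuth of 0.4 * 360 = 144 degrees (assumed mapping).
g = pan_gains(0.4 * 360.0)
active = np.nonzero(g)[0]
print("active speakers:", active, "gains:", np.round(g[active], 3))
```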

By combining these technologies, we can create a user experience within the room that encourages dynamic interaction with data. For example, delivering a presentation, a process that typically involves several file transfers and a lackluster visual experience, can now be done in this space with minimal setup, from the presenter’s own device, and with spatial audio when needed.

Lights and speakers are controlled through a unified control system. Feedback from the sensor system allows display elements to be positioned relative to the user. Identified users can take ownership of specific elements on the display and interact with the system concurrently, which makes group interactions and shared presentations far less cumbersome than typical methods. The elements that make up the CRAIVE-Lab are not particularly novel, as far as contemporary immersive rooms are concerned. However, they intertwine into a network that provides functionality for the occupants far greater than the sum of its parts.