Towards studying Venus seismicity, subsurface, and atmosphere using atmospheric acoustics

Gil Averbuch – gil.averbuch@whoi.edu

Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Woods Hole, MA, 02543, United States

Andi Petculescu
Department of Physics, University of Louisiana at Lafayette, Lafayette, Louisiana, USA

Popular version of 3aPAa6 – Calculating the Acoustics Internal Gravity Wave Dispersion Relations in Venus’s Supercritical Lower Atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027303

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Venus surface. Image from NASA (https://science.nasa.gov/gallery/venus/)

Venus is the second planet from the Sun and the closest to Earth in size and mass. Satellite images show large regions of tectonic deformation and volcanic material, indicating that these regions are seismically and volcanically active. Ideally, to study the subsurface and the seismic and volcanic activity, we would deploy seismometers on the surface to measure the ground motions following venusquakes or volcanic eruptions; this would allow us to understand the planet’s past and present geological processes and its evolution. However, the extreme conditions at the surface of Venus prevent us from doing that. With temperatures exceeding 400°C (about 750°F) and pressures of more than 90 bars (roughly 90 times the surface pressure on Earth), instruments don’t last long.

One alternative to overcome this challenge is to study Venus’s subsurface and seismic activity using balloon-based acoustic sensors floating in the atmosphere to detect venusquakes from the air. But before doing that, we first need to assess the feasibility of this approach. This means we must better understand how seismic energy is transferred to acoustic energy in Venus’s atmosphere and how the acoustic waves propagate through it. In our research, we address three questions: 1) How efficiently is seismic motion converted into atmospheric acoustic waves across Venus’s surface? 2) How do acoustic waves propagate in Venus’s atmosphere? 3) What is the frequency range of acoustic waves in Venus’s atmosphere?

Venus’s extreme pressure and temperature correspond to supercritical fluid conditions in the atmosphere’s lowest few kilometers. Supercritical fluids combine the properties of gases and liquids and exhibit nonintuitive behavior, such as high density and high compressibility. Therefore, to describe the behavior of such fluids, we need an equation of state (EoS) that captures these phenomena. Different EoSs are appropriate for different fluid conditions, but only a limited selection adequately describes supercritical fluids. One of these equations is the Peng-Robinson (PR) EoS. Incorporating the PR EoS into the fluid dynamics equations allows us to study acoustic propagation in Venus’s atmosphere.
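
For reference, the PR EoS is shown below in its standard textbook form for a pure fluid (this is the general equation, not an expression taken from the paper); T_c, P_c, and ω are the fluid’s critical temperature, critical pressure, and acentric factor, and V_m is the molar volume.

```latex
% Peng-Robinson equation of state (standard form, for context):
\begin{align}
  P &= \frac{RT}{V_m - b} \;-\; \frac{a\,\alpha(T)}{V_m^2 + 2bV_m - b^2},\\
  a &= 0.45724\,\frac{R^2 T_c^2}{P_c}, \qquad
  b  = 0.07780\,\frac{R T_c}{P_c},\\
  \alpha(T) &= \left[1 + \kappa\left(1 - \sqrt{T/T_c}\right)\right]^2, \qquad
  \kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^2 .
\end{align}
```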

Our results show that the energy transported across Venus’s surface from seismic sources is two orders of magnitude larger than on Earth, indicating more efficient seismic-to-acoustic transmission. This is mainly due to Venus’s denser atmosphere (~68 kg/m³) compared to Earth’s (~1 kg/m³). Using numerical simulations, we show that different seismic waves couple into Venus’s atmosphere at different spatial positions. Therefore, floating balloons at different positions will record different seismic-to-acoustic signals. In addition, we show that Venus’s atmosphere supports lower acoustic frequencies than Earth’s. These results will be useful both for specifying the capabilities of the acoustic instruments flown on the balloons and for interpreting future observations.
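
A back-of-the-envelope estimate (not the simulations used in the study) conveys where this difference comes from: the normal-incidence plane-wave energy transmission coefficient between crustal rock and each atmosphere, T = 4·Z1·Z2/(Z1 + Z2)², scales with the atmosphere’s acoustic impedance. The crustal properties and sound speeds below are illustrative assumptions.

```python
# Minimal sketch: plane-wave energy transmission at a rock-atmosphere interface,
# T = 4*Z1*Z2 / (Z1 + Z2)**2, with illustrative values (not from the paper).

def transmission(rho1, c1, rho2, c2):
    """Normal-incidence energy transmission coefficient between two media."""
    z1, z2 = rho1 * c1, rho2 * c2
    return 4 * z1 * z2 / (z1 + z2) ** 2

rock = (2700.0, 5000.0)   # assumed crustal density (kg/m^3) and P-wave speed (m/s)
earth_air = (1.2, 340.0)  # near-surface air on Earth
venus_air = (68.0, 410.0) # approximate near-surface values on Venus (assumed sound speed)

t_earth = transmission(*rock, *earth_air)
t_venus = transmission(*rock, *venus_air)
print(f"Venus/Earth transmission ratio: {t_venus / t_earth:.0f}x")
# ~68x with these assumed values, i.e. the same order as the reported difference
```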

The Infrasonic Choir: Decoding Songs to Inform Decisions

Sarah McComas – sarah.mccomas@usace.army.mil

U.S. Army Engineer Research and Development Center, Vicksburg, MS, 39180, United States

Popular version of 1pPAb4 – The Infrasonic Choir: Decoding Songs to Inform Decisions
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026838

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Figure 1. Infrasound is low-frequency sound, typically below the threshold of human hearing, that can propagate over long distances (tens to thousands of kilometers). Image courtesy of author.

The world around us is continuously evolving due to the actions of Mother Nature and man-made activities, impacting how we interact with the environment. Many of these activities generate infrasound, which is sound below the frequency threshold of human hearing (Figure 1). These signals can travel long distances, tens to hundreds of kilometers depending on source strength, while maintaining key information about what generated them. This multitude of signals can be thought of as an infrasonic choir, with voices from a wide variety of sources: natural ones such as surf and volcanic activity, and man-made ones such as infrastructure or industrial activity. Listening to, and deciphering, the infrasonic choir around us allows us to better understand how the world is evolving.

The infrasonic choir is observed by placing groupings of specialized sensors, called arrays, around the environment we wish to understand. These sensors are microphones designed to capture very low frequency sounds. An array’s geometry enables us to identify the direction from which a signal arrives, and using multiple arrays around a region allows the source location to be identified.
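
As an illustration of this cross-bearing idea (a simplified sketch, not the processing chain used at ERDC), two arrays that each report an arrival direction can locate a source by intersecting their bearing lines.

```python
import numpy as np

# Minimal sketch: locate an infrasound source from the arrival directions
# (back-azimuths) measured at two arrays, by intersecting the two bearing
# lines in a local flat-Earth (x, y) frame. Positions are in km.

def intersect_bearings(p1, az1_deg, p2, az2_deg):
    """Intersect two rays defined by array positions and back-azimuths
    (degrees clockwise from north). Returns the estimated source position."""
    d1 = np.array([np.sin(np.radians(az1_deg)), np.cos(np.radians(az1_deg))])
    d2 = np.array([np.sin(np.radians(az2_deg)), np.cos(np.radians(az2_deg))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

source = intersect_bearings(p1=(0.0, 0.0), az1_deg=45.0,
                            p2=(10.0, 0.0), az2_deg=315.0)
print(source)  # ~[5, 5] km: both arrays point toward the same spot
```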

One useful application of decoding infrasonic songs is listening to infrastructure, such as a bridge. Bridges vibrate at frequencies related to the engineering characteristics of the structure, such as mass and stiffness. Because bridges are surrounded by the fluid atmosphere, their vibrations create waves that can be measured with infrasound sensor arrays; one can visualize this as the ripples generated after a rock is thrown into a pond. As a bridge’s overall health degrades, whether through time or other events, its engineering characteristics change, shifting its vibrational frequencies. Being able to identify the change from a healthy, “in-tune” structure to an unhealthy, “out-of-tune” one without having to see or inspect the bridge would enable continuous monitoring of entire regional road networks. The ability to conduct this type of monitoring after a natural disaster, such as a hurricane or earthquake, would enable quick identification of damaged structures so that limited structural assessment resources can be prioritized.
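
The link between structural health and “pitch” can be illustrated with the simplest possible model, a single-degree-of-freedom oscillator; this is only a sketch, and the mass and stiffness values below are purely illustrative, not measurements of any real bridge.

```python
import math

# Minimal sketch: a bridge idealized as a single mass on a spring,
# f = sqrt(k/m) / (2*pi). A loss of stiffness k (e.g. damage) lowers the
# vibrational frequency that the infrasound arrays hear.

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

mass = 5.0e6                 # assumed effective modal mass, kg
k_healthy = 5.0e8            # assumed effective stiffness, N/m
k_damaged = 0.8 * k_healthy  # 20% stiffness loss

print(natural_frequency_hz(k_healthy, mass))  # ~1.6 Hz, "in tune"
print(natural_frequency_hz(k_damaged, mass))  # ~1.4 Hz, shifted "out of tune"
```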

Understanding how to decode the infrasonic choir within the symphony of the environment is the focus of ongoing research at the U.S. Army Engineer Research and Development Center. The effort centers on moving monitoring into source-rich urban environments, designing lightweight, low-cost sensors and mobile arrays, and developing automated processing methods for analysis. When successful, continuous monitoring of this largely untapped source of information will provide a way to understand the environment and better inform decisions.

Permission to publish was granted by the Director, Geotechnical and Structures Laboratory, U.S. Army Engineer Research and Development Center.

A general method to obtain clearer images at a higher resolution than the theoretical limit

Jian-yu Lu – jian-yu.lu@ieee.org
X (Twitter): @Jianyu_lu
Instagram: @jianyu.lu01
Department of Bioengineering, College of Engineering, The University of Toledo, Toledo, Ohio, 43606, United States

Popular version of 1pBAb4 – Reconstruction methods for super-resolution imaging with PSF modulation
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026777

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imaging is a fundamental tool for advancing science, engineering, and medicine, and it is indispensable in our daily lives. Here are a few examples: acoustical and optical microscopes have helped advance biology. Ultrasound imaging, X-ray radiography, X-ray computerized tomography (X-ray CT), magnetic resonance imaging (MRI), gamma cameras, single-photon emission computerized tomography (SPECT), and positron emission tomography (PET) are routinely used for medical diagnoses. Electron and scanning tunneling microscopes have revealed structures at the nanometer or atomic scale, where one nanometer is one billionth of a meter. And photography, including the cameras in cell phones, is part of our everyday life.

Despite the importance of imaging, Ernst Abbe recognized in 1873 that wave diffraction imposes a fundamental limit, known as the diffraction limit, on the resolution of wave-based imaging systems. This limit affects acoustical, optical, electromagnetic, and other waves alike.

Recently (see Lu, IEEE TUFFC, January 2024), the researcher developed a general method to overcome this long-standing diffraction limit. The method is not only applicable to wave-based imaging systems such as ultrasound, optical, electromagnetic, radar, and sonar; it is in principle also applicable to other linear shift-invariant (LSI) imaging systems such as X-ray radiography, X-ray CT, MRI, gamma cameras, SPECT, and PET, since it increases image resolution by introducing high spatial frequencies through modulation of the point-spread function (PSF) of an LSI imaging system. The modulation can be induced remotely from outside the object being imaged, or by small particles introduced into, or onto the surface of, the object and manipulated remotely. An LSI system can be understood by analogy with a distortion-corrected optical camera: the photo of a person keeps the same size and shape (it is invariant) if the person merely shifts position in a direction perpendicular to the camera’s optical axis within the camera’s field of view.
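
The LSI picture underlying the method can be written as a convolution of the true object with the system’s PSF. The sketch below illustrates only this forward model with an assumed Gaussian PSF, showing how diffraction-limited blur merges nearby targets; it is not the reconstruction method itself.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch of an LSI imaging model: the recorded image is the true
# object convolved with the point-spread function (PSF). Shifting the object
# only shifts the image; the blur itself does not change.

obj = np.zeros(256)
obj[[100, 108]] = 1.0                 # two point targets closer than the PSF width

x = np.arange(-32, 33)
psf = np.exp(-0.5 * (x / 6.0) ** 2)   # assumed Gaussian PSF (diffraction-limited blur)
psf /= psf.sum()

image = fftconvolve(obj, psf, mode="same")  # what the imaging system records
print(image.argmax())  # ~104: the two targets appear as one blurred peak between them
```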

Figure 1 below demonstrates the efficacy of the method using acoustic waves. The method was used to image a passive object (first row) through pulse-echo imaging and to image wave-source distributions (second row) with a receiver. The best images obtainable under Abbe’s diffraction limit are in the second column, and the super-resolution images (better than the diffraction limit) obtained with the new method are in the last column. The super-resolution images achieved a resolution close to 1/3 of the wavelength used, from a distance giving an f-number (focal distance divided by the transducer diameter) close to 2.

Figure 1. Figure modified courtesy of IEEE (doi.org/10.1109/TUFFC.2023.3335883).

Because the method is based on the convolution model of an LSI system, and many practical imaging systems are LSI, it opens an avenue for various new applications in science, engineering, and medicine. With a proper choice of modulator and imaging system, nanoscale imaging with resolution similar to that of a scanning electron microscope (SEM) is possible even with visible or infrared light.

These Sounds Are Out of This World! #ASA184

Software program predicts environmental noise and modulates voices to simulate sound on other planets.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 11, 2023 – You may know how other planets look, like the rust orange, dusty surface of Mars or the vibrant teal of Uranus. But what do those planets sound like?

This illustration depicts Mars helicopter Ingenuity during a test flight on Mars. Ingenuity was taken to the red planet strapped to the belly of the Perseverance rover (seen in the background). Credit: NASA/JPL-Caltech

Timothy G. Leighton from the University of Southampton in the U.K. designed a software program that produces extraterrestrial environmental sounds and predicts how human voices might change in distant worlds. He will demonstrate his work at the upcoming 184th Meeting of the Acoustical Society of America, running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel. His presentation will take place Thursday, May 11, at 12:00 p.m. Eastern U.S. in the Chicago room.

The presentation is part of a special session that brings together the acoustics and planetary science communities. Acoustical studies became essential during the Huygens lander’s descent into Titan’s atmosphere in 2005 and in the more recent Mars InSight and Mars 2020 missions. These successful missions carried customized active and passive acoustic sensors operating over a wide spectrum, from very low frequencies (infrasound, below the human hearing threshold) to ultrasound (above human hearing).

“For decades, we have sent cameras to other planets in our solar system and learned a great deal from them. However, we never really heard what another planet sounded like until the very recent Mars Perseverance mission,” said Leighton.

Scientists can harness sound on other worlds to learn about properties that might otherwise require a lot of expensive equipment, like the chemical composition of rocks, how atmospheric temperature changes, or the roughness of the ground.

Extraterrestrial sounds could also be used in the search for life. At first glance, Jupiter’s moon Europa may seem a hostile environment, but below its shell of ice lies a potentially life-sustaining ocean.

“The idea of sending a probe on a seven-year trip through space, then drilling or melting to the seabed, poses mind-boggling challenges in terms of finance and technology. The ocean on Europa is 100 times deeper than Earth’s Arctic Ocean, and the ice cap is roughly 1,000 times thicker,” said Leighton. “However, instead of sending a physical probe, we could let sound waves travel to the seabed and back and do our exploring for us.”

Planets’ unique atmospheres impact sound speed and absorption. For example, the thin, carbon dioxide-rich Martian atmosphere absorbs more sound than Earth’s, so distant noises appear fainter. Anticipating how sound travels is important for designing and calibrating equipment like microphones and speakers.
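
For a rough sense of how an atmosphere sets the speed of sound, the textbook ideal-gas relation with approximate near-surface values is shown below. This is only an illustration, not the model behind Leighton’s software, which also accounts for absorption and other effects.

```python
import math

# Minimal sketch: ideal-gas sound speed c = sqrt(gamma * R * T / M) with
# rough near-surface values for Earth and Mars.

R = 8.314  # universal gas constant, J/(mol*K)

def sound_speed(gamma, molar_mass_kg, temperature_k):
    return math.sqrt(gamma * R * temperature_k / molar_mass_kg)

print(sound_speed(1.40, 0.029, 288))  # Earth air, ~340 m/s
print(sound_speed(1.30, 0.044, 210))  # Martian CO2, ~230 m/s: slower, and the thin
                                      # CO2 atmosphere also absorbs more sound
```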

Hearing the sound from other planets is beneficial not just for scientific purposes, but also for entertainment. Science-fiction films contain vivid imagery to mimic the look of other worlds but often lack the immersive quality of how those worlds would sound.

Leighton’s software will showcase predictions of the sounds of other worlds at planetariums and museums. In the case of Mars, it will include actual sounds thanks to the U.S./European Perseverance team and China’s Zhurong mission.

The special session, chaired by Leighton and Andi Petculescu, is the third forum on acoustics in planetary science organized at a meeting of the Acoustical Society of America.

“The success of the first two ASA special sessions on this subject has led to quite a few collaborations between the two communities, a trend that we hope will carry on,” said Petculescu.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Fighting Racial Bias in Next-Gen Breast Cancer Screening #ASA184

A new virtual framework has enabled investigations into the effectiveness of optoacoustic tomography for cancer screening in darker-skinned individuals.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 9, 2023 – Breast cancer is one of the most common and deadly types of cancer, and the best outcomes stem from early detection. But some screening techniques may be less effective for people with darker skin.

Seonyeong Park of the University of Illinois Urbana-Champaign will discuss experiments to measure this bias in her talk, “Virtual imaging trials to investigate impact of skin color on three-dimensional optoacoustic tomography of the breast.” The presentation will take place Tuesday, May 9, at 6:15 p.m. Eastern U.S. in room Chicago F/G, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

Virtual imaging trials to investigate the impact of skin color on 3D optoacoustic tomography of the breast: (a) An example schematic of a 3D OAT scan (left) and a clinical image (right) scanned by TomoWave Laboratories, Inc. (Houston) using LOUISA-3D at MD Anderson Cancer Center; (b) VITs of OAT using numerical breast phantoms (top); (c) 3D visualization of region-of-interest difference images between the reconstructed images with and without a lesion under different skin tones, obtained via VITs of OAT. Credit: Seonyeong Park

Current standard screening for breast cancer is done with X-ray mammography, which can be uncomfortable and is less effective on dense breast tissue. An alternative, optoacoustic tomography (OAT), uses laser light to induce sound vibrations in breast tissue. The vibrations can be measured and analyzed to spot tumors. This method is safe and effective and does not require compression during imaging.

The technology that underlies OAT imaging is not new; it has been used in pulse oximetry for decades. Concerns about its interaction with darker skin have existed for almost as long.

“In 1990, a study found that pulse oximetry was about 2.5 times less accurate in patients with dark skin,” said Park. “Recently, an article suggested that unreliable measurements from pulse oximeters may have contributed to increased mortality rates in Black patients during the COVID-19 pandemic.”

With OAT emerging as an effective breast cancer screening method, Park and her team, led by professors Mark Anastasio at UIUC and Umberto Villa at the University of Texas at Austin, in collaboration with professor Alexander Oraevsky of TomoWave Laboratories, Inc. in Houston, wanted to determine whether this same bias was present. Rather than navigate the cost and ethics issues surrounding human test subjects, the team simulated a range of skin colors and tumor locations.

“By using an ensemble of realistic numerical breast phantoms, i.e., digital breasts, the evaluation can be conducted rapidly and cost-effectively,” said Park.

The results confirmed that tumors could be harder to locate in individuals with darker skin depending on the design of the OAT imager and the location of the tumor. Fortunately, a virtual framework developed by Park allows for more comprehensive investigations and can serve as a tool for evaluating and optimizing new OAT imaging systems in their early stages of development.

“To improve detectability in dark skin, the laser power to acoustic noise ratio should be increased,” said Park. “It is recommended that skin color-dependent detectability should be evaluated when designing new OAT breast imagers. Our team is actively conducting in-depth investigations utilizing our virtual framework to propose effective strategies for designing imaging systems that can help mitigate racial bias in OAT breast imaging.”

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Can we detect volcanic eruptions and venusquakes from a balloon floating high above Venus?

Siddharth Krishnamoorthy – siddharth.krishnamoorthy@jpl.nasa.gov

NASA Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109, United States

Daniel C. Bowman2, Emalee Hough3, Zach Yap3, John D. Wilding4, Jamey Jacob3, Brian Elbing3, Léo Martire1, Attila Komjathy1, Michael T. Pauken1, James A. Cutts1, Jennifer M. Jackson4, Raphaël F. Garcia5, and David Mimoun5

1. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA
2. Sandia National Laboratories, Albuquerque, New Mexico, USA
3. Oklahoma State University, Stillwater, OK, USA
4. Seismological Laboratory, California Institute of Technology, Pasadena, CA, USA
5. Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO), Toulouse, France

Popular version of 4aPAa1 – Development of Balloon-Based Seismology for Venus through Earth-Analog Experiments and Simulations
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018837

Venus has often been described as a “hellscape,” and deservedly so: the surface of Venus simultaneously scorches and crushes spacecraft that land on it, with temperatures exceeding 460 degrees Celsius (~860°F) and atmospheric pressure exceeding 90 atmospheres. While conditions at the surface are extreme, temperature and pressure drop dramatically with altitude. At about 50-60 km above the surface, the temperature (roughly -10 to 70°C) and pressure (~0.2-1 atmosphere) resemble those on Earth. At this altitude, the challenge of surviving clouds of sulfuric acid is more manageable than that of surviving the simultaneous squeeze and scorch at the surface. This is evidenced by the two VeGa balloons floated in the atmosphere of Venus by the Soviet Union in 1985, which transmitted data for approximately 48 hours (and presumably survived much longer), compared with 2 hours and 7 minutes, the longest any spacecraft landed on the surface has survived. A new generation of Venus balloons is now being designed that can last over 100 days and change altitude to navigate different layers of Venus’s atmosphere.

Our research focuses on developing technology to detect signatures of volcanic eruptions and “venusquakes” from balloons in the Venus clouds. Doing so allows us to quantify the level of ongoing activity on Venus and associate that activity with maps of the surface, which in turn lets us study the planet’s interior from high above it. Conducting this experiment from a balloon floating 50-60 km above the surface provides a significantly longer observation period than the lifespan of any spacecraft that could be landed on the surface with current technology.

We propose to use low-frequency sound waves known as infrasound to detect and characterize venusquakes and volcanic activity. These waves are generated by coupling between the ground and the atmosphere of the planet: when the ground moves, it acts like a drum, producing weak infrasound waves in the atmosphere that can then be detected by pressure sensors deployed from balloons, as shown in Figure 1. On Venus, the conversion from ground motion to infrasound is up to 60 times more efficient than on Earth.

Figure 1: Infrasound is generated when the atmosphere reverberates in response to the motion of the ground and can be detected on balloons. Infrasound can travel directly from the site of the event to the balloon (epicentral) or be generated by seismic waves as they pass underneath the balloon and travel vertically upward (surface wave infrasound).
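
As a rough illustration of why the two paths in Figure 1 can be told apart, consider the arrival times at a balloon 60 km up and 300 km from the quake, using assumed constant propagation speeds (these numbers are illustrative, not values from the study).

```python
import math

# Minimal sketch: arrival-time difference between "epicentral" infrasound
# (quake -> air -> balloon) and "surface wave" infrasound (seismic wave runs
# under the balloon, then sound climbs up), with assumed constant speeds.

SOUND = 0.3    # km/s, rough effective sound speed in the atmosphere (assumed)
SEISMIC = 3.5  # km/s, rough surface-wave speed in the crust (assumed)

range_km, altitude_km = 300.0, 60.0

t_epicentral = math.hypot(range_km, altitude_km) / SOUND  # direct slant air path
t_surface = range_km / SEISMIC + altitude_km / SOUND      # ground first, then up

print(t_epicentral, t_surface)  # ~1020 s vs ~286 s: with these assumptions the
                                # surface-wave infrasound arrives well before the
                                # epicentral arrival, helping separate the signals
```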

We are developing this technique by first demonstrating that earthquakes and volcanic eruptions on Earth can be detected by instruments suspended from balloons. These data also allow us to validate our simulation tools and estimate what such signals may look like on Venus. In flight experiments over the last few years, not only several earthquakes of varying magnitudes and volcanic eruptions, but also other Venus-relevant phenomena such as lightning and mountain waves, have been detected from balloons, as shown in Figure 2.

Figure 2: Venus-relevant events on Earth detected on high-altitude balloons using infrasound. Pressure waves from the originating event travel to the balloon and are recorded by barometers suspended from the balloon.

In the next phase of the project, we will generate a catalog of analogous signals on Venus and develop signal identification tools that can autonomously identify signals of interest on a Venus flight.

Copyright 2023, all rights reserved. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).