Spider Silk Sound System #ASA186

Spiderweb silk moves at the velocity of particles in a sound field for highly sensitive, long-distance sound detection.

Media Contact:
AIP Media

OTTAWA, Ontario, May 16, 2024 – The best microphone in the world might have an unexpected source: spider silk. Spiders weave webs to trap their insect snacks, but the sticky strands also help spiders hear. Unlike human eardrums and conventional microphones that detect sound pressure waves, spider silk responds to changes in the velocities of air particles as they are thrust about by a sound field. This sound velocity detection method remains largely underexplored compared to pressure sensing, but it holds great potential for high-sensitivity, long-distance sound detection.

Researchers from Binghamton University investigated how spiders listen to their environments through webs. They found the webs match the acoustic particle velocity for a wide range of sound frequencies. Ronald Miles will present their work Thursday, May 16, at 10:00 a.m. EDT as part of a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association, running May 13-17 at the Shaw Centre located in downtown Ottawa, Ontario, Canada.


Larinioides sclopetarius, commonly known as bridge spiders, helped researchers from Binghamton University investigate how spiders listen to their environments through their webs, inspiring future designs for microphones that respond to sound-driven airflow. Image credit: Junpeng Lai

“Most insects that can hear sound use fine hairs or their antennae, which don’t respond to sound pressure,” said Miles, a professor of mechanical engineering. “Instead, these thin structures respond to the motion of the air in a sound field. I wondered how to make an engineered device that would also be able to respond to sound-driven airflow. We tried various man-made fibers that were very thin, but they were also very fragile and difficult to work with. Then, Dr. Jian Zhou was walking in our campus nature preserve and saw a spiderweb blowing in the breeze. He thought spider silk might be a great thing to try.”

Before building such a device, the team had to prove spiderwebs truly respond to sound-driven airflow. To test this hypothesis, they simply opened their lab windows to observe the Larinioides sclopetarius, or bridge spiders, that call the windowsills home. They played sound ranging from 1 Hz to 50 kHz for the spiders and measured the spider silk’s motion with a laser vibrometer. They found the sound-induced velocity of the silk was the same as that of the particles in the surrounding air, confirming the mechanism these spiders use to detect their prey.

“Because spider silk is, of course, created by spiders, it isn’t practical to incorporate it into the billions of microphones that are made each year,” said Miles. “It does, however, teach us a lot about what mechanical properties are desirable in a microphone and may inspire entirely new designs.”

Main Meeting Website: https://acousticalsociety.org/ottawa/
Technical Program: https://eppro02.ativ.me/src/EventPilot/php/express/web/planner.php?id=ASASPRING24

In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the in-person meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.


The Canadian Acoustical Association (CAA) is a professional, interdisciplinary organization that:
  • fosters communication among people working in all areas of acoustics in Canada
  • promotes the growth and practical application of knowledge in acoustics
  • encourages education, research, protection of the environment, and employment in acoustics
  • is an umbrella organization through which general issues in education, employment and research can be addressed at a national and multidisciplinary level

The CAA is a member society of the International Institute of Noise Control Engineering (I-INCE) and the International Commission for Acoustics (ICA), and is an affiliate society of the International Institute of Acoustics and Vibration (IIAV). Visit https://caa-aca.ca/.

What could happen to Earth if we blew up an incoming asteroid?

Brin Bailey – brittanybailey@ucsb.edu

University of California, Santa Barbara, Physics Department, Santa Barbara, CA, 93106, United States

Popular version of 4aPA12 – Acoustic ground effects simulations from asteroid disruption via the ‘Pulverize It’ method
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=Session&project=ASASPRING24&id=3657765

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Let’s imagine a hypothetical scenario: a new asteroid has just been discovered, on a path straight towards Earth, threatening to hit us in just a few days. What can we do about it?

A new study funded by NASA is trying to answer that question. Pulverize It, or PI for short, is a proposed method for planetary defense–the effort of monitoring and protecting Earth from incoming asteroids. In essence, PI’s plan of attack is to penetrate an incoming asteroid with high-speed, bullet-like projectiles, which would split the asteroid into many smaller fragments (Figure 1). PI’s key difference from other planetary defense methods is its versatility: it is designed to work for a wide variety of scenarios, meaning that PI could be used whether an asteroid impact is one year away or one week away (depending on the asteroid’s size and speed).


Figure 1. PI works by penetrating an asteroid with a high-speed, high-density projectile, which rapidly converts a portion of the asteroid’s kinetic energy into heat and shock waves within the rocky material. The heat energy of the impact locally vaporizes and ionizes material near the impact site(s), and the subsequent shock waves damage and fracture the asteroid material as they propagate through it.

How is this possible, and how could the asteroid fragments affect us here on Earth? Rather than using momentum transfer–like in methods such as asteroid deflection, as demonstrated by NASA’s recent Double Asteroid Redirection Test (DART) mission–PI utilizes energy transfer to mitigate a threat by disassembling (or breaking apart) an asteroid.

If the asteroid is blown apart while far away from Earth (generally, at least several months before impact), these fragments would miss the planet entirely. This is PI’s preferred mode of operation, as it is always more favorable to keep the action away from us when possible. In a scenario where we have little warning time (a “terminal” scenario), the small asteroid fragments may enter Earth’s atmosphere–but this is part of the plan (Figure 2).


Figure 2. In a short-warning scenario where the asteroid is intercepted and broken up close to Earth (“terminal” scenario), the fragment cloud enters Earth’s atmosphere. Each fragment will burst at high altitude, dispersing the energy of the original asteroid into optical and acoustical ground effects. As the fragments in the cloud spread out, they will enter the atmosphere at different times and in different places, creating spatially and temporally de-correlated shock waves. The spread of the fragment cloud depends on a variety of factors, mainly intercept time (the amount of time between asteroid breakup and ground impact) and fragment disruption velocity (the speed and direction at which fragments move away from the fragment cloud’s center of mass).

Earth’s atmosphere acts as a bulletproof vest, shielding us from harmful ultraviolet radiation, typical space debris, and, in this case, asteroid fragments. As these small rocky pieces enter the atmosphere at very high speeds, air molecules exert large amounts of pressure on them. This puts stress on the rock and causes it to break up. As the fragment’s altitude decreases, the atmosphere’s density increases. This adds heat and increases pressure until the fragment can’t remain intact anymore, causing the fragment to detonate, or “burst.”
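The break-up altitude can be estimated with a standard “strength versus ram pressure” argument: a fragment bursts roughly where the aerodynamic pressure ρ(h)·v² first exceeds the rock’s effective strength. Here is a minimal sketch of that idea, assuming an isothermal exponential atmosphere and hypothetical fragment numbers (the study itself uses far more detailed entry models):

```python
import math

RHO0 = 1.225      # kg/m^3, sea-level air density
H_SCALE = 8000.0  # m, atmospheric scale height (isothermal approximation)

def air_density(h):
    """Exponential model of atmospheric density (kg/m^3) at altitude h (m)."""
    return RHO0 * math.exp(-h / H_SCALE)

def ram_pressure(h, v):
    """Aerodynamic (ram) pressure, Pa, on a body moving at speed v (m/s)."""
    return air_density(h) * v ** 2

def burst_altitude(v, strength_pa):
    """Altitude (m) where ram pressure first exceeds the rock's strength.

    Solves rho0 * exp(-h/H) * v^2 = strength for h; assumes constant speed.
    """
    return H_SCALE * math.log(RHO0 * v ** 2 / strength_pa)

# Hypothetical stony fragment: 20 km/s entry, ~5 MPa effective strength.
h = burst_altitude(20e3, 5e6)
print(f"burst near {h / 1000:.0f} km altitude")
```

With these illustrative numbers the burst lands in the tens of kilometers of altitude, consistent with the high-altitude bursts described above.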

When taken together, these bursts can be thought of as a cosmic fireworks show. As each fragment travels through the atmosphere and bursts, it produces a small amount of light (like a shooting star) and pressure (as a shock wave, like a sonic boom). The collection of these optical and acoustical effects, referred to as “ground effects,” works to disperse the energy of the original asteroid over a wide area and over time. In reasonable mitigation scenarios that are appropriate for the incoming asteroid (for example, based on asteroid size or by breaking the asteroid into a very large number of very small pieces), these ground effects result in little to no damage.

In this study, we investigate the acoustical ground effects that PI may produce when blowing apart an incoming asteroid in a “terminal” scenario with little warning. As each fragment enters Earth’s atmosphere and bursts, the pressure released creates a shock wave, carrying energy and creating an audible “boom” for each fragment (a sonic boom). Using custom codes, we simulate the acoustical ground effects for a variety of scenarios that are designed to keep the total pressure output below 3 kPa–the pressure at which residential windows may begin to break–in order to minimize potential damage (Figure 3).

Figure 3. Simulation of the acoustical ground effects from a 50 m diameter asteroid which is broken into 1000 fragments one day before impact. The asteroid is modeled as a spherical rocky body (average density of 2.6 g/cm3) traveling through space at 20 km/s and entering Earth’s atmosphere at an angle of 45°. The fragments move away from each other at an average speed of 1 m/s. The sonic “booms” produced by the fragment bursts are simulated here based upon the arrival of each shock wave at an observer on the ground (indicated by the green dot in the left plot). Note that both plots take into account the constructive interference between shock waves. Left: real-time pressure. Right: maximum pressure, where each pixel displays the highest pressure it has experienced. The dark orange lines, which display higher pressure values, signify areas where two shock waves have overlapped.

Figure 4. Simulation of the acoustical ground effects from an unfragmented (as in, not broken up) 50 m diameter asteroid. The asteroid is modeled as a spherical rocky body (average density of 2.6 g/cm3) traveling through space at 20 km/s and entering Earth’s atmosphere at an angle of 45°. Upon entering and descending through Earth’s atmosphere, the asteroid undergoes a great amount of pressure from air molecules, eventually causing the asteroid to airburst. This burst releases a large amount of pressure, creating a powerful shock wave. Left: real-time pressure. Right: maximum pressure, where each pixel displays the highest pressure it has experienced.

Our simulations show that the ground effects from an asteroid blown apart by PI are vastly less damaging than those from an intact impact. For example, a 50-meter-diameter asteroid broken into 1000 fragments only one day before Earth impact produces far weaker ground effects than the same asteroid left intact (Figure 3 versus Figure 4). In the mitigated scenario, we estimate that the observation area (±150 km from the fragment cloud’s center) would experience an average pressure of ~0.4 kPa and a maximum pressure of ~2 kPa (Figure 3). In the unfragmented case, we estimate an average pressure of ~3 kPa and a maximum pressure of ~20 kPa (Figure 4). The asteroid mitigated by PI keeps all areas below the 3 kPa damage threshold, while the maximum pressure in the unmitigated case is almost seven times higher than the threshold.
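The roughly tenfold drop in peak pressure is what simple blast scaling would lead one to expect: if peak overpressure scales with the cube root of yield, then splitting the energy among N fragments whose shocks arrive de-correlated reduces the peak felt on the ground by about N^(1/3). A toy sketch of that scaling argument (all constants are hypothetical illustration values, and this is not the authors’ simulation code):

```python
# Toy comparison of peak ground overpressure: one intact airburst versus the
# same energy split into N fragments whose shock waves arrive de-correlated.
# Uses simple cube-root yield scaling, delta_p ~ k * W**(1/3) / R, where
# k, the yield, and the range are hypothetical illustration values.

def peak_overpressure(yield_kt, range_km, k=300.0):
    """Very rough peak overpressure (kPa) via cube-root yield scaling."""
    return k * yield_kt ** (1 / 3) / range_km

W_TOTAL = 10_000.0   # kt TNT-equivalent, hypothetical 50 m asteroid
N_FRAGMENTS = 1000
R = 30.0             # km, slant range from burst to observer

single = peak_overpressure(W_TOTAL, R)
# Each fragment carries W/N of the energy; if arrivals are de-correlated,
# the observer feels one fragment's shock at a time, not their sum.
fragment = peak_overpressure(W_TOTAL / N_FRAGMENTS, R)

print(f"intact:   {single:6.1f} kPa")
print(f"fragment: {fragment:6.1f} kPa  (~{single / fragment:.0f}x weaker)")
```

For 1000 fragments, N^(1/3) = 10, matching in spirit the ~20 kPa versus ~2 kPa maxima reported by the simulations.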

The key is that the shock waves from the many fragments are “de-correlated” at any given observer, and hence vastly less threatening. Our findings suggest that PI is an effective approach for planetary defense that can be used in both short-warning (“terminal”) and extended-warning scenarios, resulting in little to no ground damage.

While we would rather not use this terminal defense mode–as it is preferable to intercept asteroids far ahead of time–PI’s short-warning mode could be used to mitigate threats that we fail to see coming. We envision that asteroid impact events similar to the Chelyabinsk airburst in 2013 (~20 m diameter) or the Tunguska airburst in 1908 (~40-50 m diameter) could be effectively mitigated by PI with remarkably short intercept times and relatively little intercept mass.

Website and additional resources
Please see our website for further information regarding the PI project, including papers, visuals, and simulations. For our full suite of ground effects simulations, please check our YouTube channel.

Funding for this program comes from NASA NIAC Phase I grant 80NSSC22K0764, NASA NIAC Phase II grant 80NSSC23K0966, NASA California Space Grant NNX10AT93H, and the Emmett and Gladys W. fund. We gratefully acknowledge support from the NASA Ames High End Computing Capability (HECC) and Lawrence Livermore National Laboratory (LLNL) for the use of their ALE3D simulation tools used for modeling the hypervelocity penetrator impacts, as well as funding from NVIDIA for an Academic Hardware Grant for a high-end GPU to speed up ground effect simulations.

Listen In: Infrasonic Whispers Reveal the Hidden Structure of Planetary Interiors and Atmospheres

Quentin Brissaud – quentin@norsar.no
X (twitter): @QuentinBrissaud

Research Scientist, NORSAR, Kjeller, N/A, 2007, Norway

Sven Peter Näsholm, University of Oslo and NORSAR
Marouchka Froment, NORSAR
Antoine Turquet, NORSAR
Tina Kaschwich, NORSAR

Popular version of 1pPAb3 – Exploring a planet with infrasound: challenges in probing the subsurface and the atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3657997

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Low-frequency sound, called infrasound, can help us better understand our atmosphere and explore distant planetary atmospheres and interiors.

Low-frequency sound waves below 20 Hz, known as infrasound, are inaudible to the human ear. They can be generated by a variety of natural phenomena, including volcanoes, ocean waves, and earthquakes. These waves travel over large distances and can be recorded by instruments such as microbarometers, which are sensitive to small pressure variations. This data can give unique insight into the source of the infrasound and the properties of the media it traveled through, whether solid, oceanic, or atmospheric. In the future, infrasound data might be key to building more robust weather prediction models and understanding the evolution of our solar system.

Infrasound has been used on Earth to monitor stratospheric winds, to analyze the characteristics of man-made explosions, and even to detect earthquakes. But its potential extends beyond our home planet. Infrasound waves generated by meteor impacts on Mars have provided insight into the planet’s shallow seismic velocities, as well as near-surface winds and temperatures. On Venus, recent research suggests that balloons floating in its atmosphere and recording infrasound waves could be one of the few viable ways to detect “venusquakes” and explore the planet’s interior, since surface pressures and temperatures are too extreme for conventional instruments.

Sonification of sound generated by the Flores Sea earthquake as recorded by a balloon flying at 19 km altitude.

Until recently, it has been challenging to map infrasound signals to various planetary phenomena, including ocean waves, atmospheric winds, and planetary interiors. However, our research team and collaborators have made significant strides in this field, developing tools to unlock the potential of infrasound-based planetary research. We retrieve the connections between source and media properties and sound signatures through three different techniques: (1) training neural networks to learn the complex relationships between observed waveforms and source and media characteristics, (2) performing large-scale numerical simulations of seismic and sound waves from earthquakes and explosions, and (3) incorporating knowledge about sources and seismic media from adjacent fields such as geodynamics and atmospheric chemistry to inform our modeling work. Our recent work highlights the potential of infrasound-based inversions to predict high-altitude winds from the sound of ocean waves with machine learning, to map an earthquake’s mechanism to its local sound signature, and to assess the detectability of venusquakes from high-altitude balloons.

To ensure the long-term success of infrasound research, dedicated Earth missions will be crucial to collect new data, support the development of efficient global modeling tools, and create rigorous inversion frameworks suited to various planetary environments. Nevertheless, infrasound research shows that tuning into a planet’s whisper unlocks crucial insights into its state and evolution.

Using rays to describe spinning sound

Chirag Gokani – chiragokani@gmail.com

University of Texas at Austin, Applied Research Laboratories and Walker Department of Mechanical Engineering, Austin, Texas, 78766-9767, United States

Michael R. Haberman; Mark F. Hamilton (both at Applied Research Laboratories and Walker Department of Mechanical Engineering)

Popular version of 5pPA13 – Effects of increasing orbital number on the field transformation in focused vortex beams
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3657527

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

When a chef tosses pizza dough, the spinning motion stretches the dough into a circular disk. The more rapidly the dough is spun, the wider the disk becomes.

Fig 1. Pizza dough gets stretched out into a circular disk when it is spun.


A similar phenomenon occurs when sound waves are subjected to spinning motion: the beam spreads out more rapidly with increased spinning. One can use the theory of diffraction—the study of how waves constructively and destructively interfere to form a field pattern that evolves with distance—to explain this unique sound field, known as a vortex beam.

Fig 2. The wavefronts of vortex beams are helical in shape, like the threads on a screw. Adapted from Jiang et al., Phys. Rev. Lett. 117, 034301 (2016).


In addition to exhibiting a helical field structure, vortex beams can be focused, the same way sunlight passing through a magnifying glass can be focused to a bright spot. When sound is simultaneously spun and focused, something unexpected happens. Rather than converging to a point, the combination of spinning and focusing can cause the sound field to create a region of zero acoustic pressure, analogous to a shadow in optics, between the source and focal point, the shape of which resembles a rugby ball.

While the theory of diffraction predicts this effect, it does not provide insight into what creates the shadow region when the acoustic field is simultaneously spun and focused. To understand why this happens, one can resort to a simpler concept that approximates sound as a collection of rays. This simpler description, known as ray theory, is based on the assumption that waves do not interfere with one another, and that the sound field can be described by straight arrows emerging from a source, just like sun rays emerging from behind a cloud. According to this description, the pressure is proportional to the number of rays present in a given region in space.

Fig 3. Rays in a vortex beam. Adapted from Richard et al., New J. Phys. 22, 063021 (2020).


Analysis of the paths of individual sound rays permits one to unravel how the overall shape and intensity of the beam are affected by spinning and focusing. One key finding is the formation of an annular channel, resembling a tunnel, within the beam’s structure. This channel is created by a multitude of individual sound rays that are converging due to focusing but are skewed away from the beam axis due to spinning.

By studying this channel, one can calculate the amplitude of the sound field according to ray theory, offering perspectives that the theory of diffraction does not readily reveal. Specifically, the annular channels reveal that the sound field is greatest on the surface of a spheroid, coinciding with the feature shaped like a rugby ball predicted by the theory of diffraction.

In the figure below from the work of Gokani et al., the annular channels and spheroidal shadow zone predicted by ray theory are overlaid as white lines on the upper half of the field predicted by the theory of diffraction, represented by colors corresponding to intensity increasing from blue to red. The amount by which the sound is spun is characterized by ℓ, the orbital number, which increases from left to right in the figure.

Fig 4. Annular channels (thin white lines) and spheroidal shadow zones (thick white lines) overlaid on the diffraction pattern (colors). From Gokani et al., J. Acoust. Soc. Am. 155, 2707-2723 (2024).


As can be seen from Fig. 4, ray theory distills the intricate dynamics of sound that is spun and focused to a tractable geometry problem. Insights gained from this theory not only expand one’s fundamental knowledge of sound and waves but also have practical applications related to particle manipulation, biomedical ultrasonics, and acoustic communications.

Towards studying Venus seismicity, subsurface, and atmosphere using atmospheric acoustics

Gil Averbuch – gil.averbuch@whoi.edu

Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Woods Hole, MA, 02543, United States

Andi Petculescu
University of Louisiana
Department of Physics
Lafayette, Louisiana, USA

Popular version of 3aPAa6 – Calculating the Acoustics Internal Gravity Wave Dispersion Relations in Venus’s Supercritical Lower Atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=Session&project=ASASPRING24&id=3657512

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Venus surface. Image from NASA (https://science.nasa.gov/gallery/venus/)

Venus is the second planet from the sun and the closest in size and mass to Earth. Satellite images show large regions of tectonic deformation and volcanic material, indicating that the planet is seismically and volcanically active. Ideally, to study its subsurface and its seismic and volcanic activity, we would deploy seismometers on the surface to measure the ground motions following venusquakes or volcanic eruptions; this would allow us to understand the planet’s past and current geological processes and evolution. However, the extreme conditions at the surface of Venus prevent us from doing that. With temperatures exceeding 400°C (752°F) and pressures of more than 90 bars (90 times higher than on Earth), instruments don’t last long.

One alternative to overcome this challenge is to study Venus’s subsurface and seismic activity using balloon-borne acoustic sensors floating in the atmosphere to detect venusquakes from the air. But before doing that, we first need to assess its feasibility. This means we must better understand how seismic energy is transferred to acoustic energy in Venus’s atmosphere and how the acoustic waves propagate through it. In our research, we address the following questions: 1) How efficiently does seismic motion convert to atmospheric acoustic waves across Venus’s surface? 2) How do acoustic waves propagate in Venus’s atmosphere? 3) What is the frequency range of acoustic waves in Venus’s atmosphere?

Venus’s extreme pressure and temperature correspond to supercritical fluid conditions in the atmosphere’s lowest few kilometers. Supercritical fluids combine the properties of gases and liquids and exhibit nonintuitive behavior, such as high density and high compressibility. Therefore, to describe the behavior of such fluids, we need an equation of state (EoS) that captures these phenomena. Different EoSs are appropriate for different fluid conditions, but only a limited selection adequately describes supercritical fluids. One of these equations is the Peng-Robinson (PR) EoS. Incorporating the PR EoS into the fluid dynamics equations allows us to study acoustic propagation in Venus’s atmosphere.
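To make this concrete, here is a minimal sketch (ours, not the study’s code) that solves the PR EoS for the density of pure CO2 at approximate Venus-surface conditions (~737 K, ~9.2 MPa). Real Venus air is roughly 96.5% CO2 with some N2, so this pure-CO2 estimate is only illustrative:

```python
# Peng-Robinson equation of state for pure CO2, solved for molar volume
# by bisection at fixed temperature and pressure.
R = 8.314462  # J/(mol K), universal gas constant

# CO2 critical constants and acentric factor
TC, PC, OMEGA = 304.13, 7.3773e6, 0.2239
M_CO2 = 0.04401  # kg/mol, molar mass of CO2

A_C = 0.45724 * R**2 * TC**2 / PC
B = 0.07780 * R * TC / PC
KAPPA = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2

def pr_pressure(T, v):
    """PR pressure (Pa) at temperature T (K) and molar volume v (m^3/mol)."""
    alpha = (1 + KAPPA * (1 - (T / TC) ** 0.5)) ** 2
    return R * T / (v - B) - A_C * alpha / (v * v + 2 * B * v - B * B)

def molar_volume(T, p, lo=3.0e-5, hi=1.0, iters=200):
    """Find v with pr_pressure(T, v) == p; the isotherm is monotone here."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pr_pressure(T, mid) > p:
            lo = mid  # pressure too high: volume must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

v = molar_volume(737.0, 9.2e6)
print(f"CO2 density at Venus surface: {M_CO2 / v:.1f} kg/m^3")
```

The result comes out in the mid-60s of kg/m³, dozens of times denser than Earth’s surface air, which is the regime where an accurate EoS matters for the acoustics.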

Our results show that the energy transported across Venus’s surface from seismic sources is two orders of magnitude larger than on Earth, pointing to better seismic-to-acoustic transmission. This is mainly due to Venus’s denser atmosphere (~68 kg/m3 versus ~1 kg/m3 on Earth). Using numerical simulations, we show that different seismic waves couple to Venus’s atmosphere at different spatial positions; floating balloons will therefore measure different seismic-to-acoustic signals depending on where they are. In addition, we show that Venus’s atmosphere supports lower acoustic frequencies than Earth’s. These findings will be useful in 1) preparing the capabilities of the acoustic instruments used on the balloons, and 2) interpreting future observations.

The Infrasonic Choir: Decoding Songs to Inform Decisions

Sarah McComas – sarah.mccomas@usace.army.mil

U.S. Army Engineer Research and Development Center, Vicksburg, MS, 39180, United States

Popular version of 1pPAb4 – The Infrasonic Choir: Decoding Songs to Inform Decisions
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3658000

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Figure 1. Infrasound is low-frequency, sub-audible sound that propagates over long distances (tens to thousands of kilometers) and typically falls below the threshold of human hearing. Image courtesy of author.

The world around us is continuously evolving due to the actions of Mother Nature and man-made activities, impacting how we interact with the environment. Many of these activities generate infrasound, sound below the frequency threshold of human hearing (Figure 1). These signals can travel long distances, tens to hundreds of kilometers depending on source strength, while retaining key information about what generated them. The multitude of signals can be thought of as an infrasonic choir with voices from a wide variety of sources, including natural ones such as surf and volcanic activity and man-made ones such as infrastructure or industrial activities. Listening to, and deciphering, this infrasonic choir allows us to better understand how the world around us is evolving.

The infrasonic choir is observed by placing groupings of specialized sensors, called arrays, around the environment we wish to understand. These sensors are microphones designed to capture very low-frequency sounds. An array’s geometry enables us to identify the direction from which a signal arrives, and using multiple arrays around a region allows the source location to be identified.
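The direction-finding step can be sketched with a toy plane-wave model: each sensor in the array records the same signal with a small time offset, and the pattern of offsets picks out the back-azimuth. Here is a minimal Python illustration with made-up sensor positions (operational infrasound processing uses correlation-based array methods, not this brute-force grid search):

```python
import math

# Toy plane-wave back-azimuth estimate from arrival-time differences across
# a small infrasound array. Sensor coordinates and the source azimuth are
# hypothetical illustration values.
C = 340.0  # m/s, effective sound speed near the ground

# Sensor positions (x east, y north), meters, relative to the array center
SENSORS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (-100.0, -100.0)]

def arrival_times(az_deg, sensors, c=C):
    """Relative arrival times for a plane wave coming FROM azimuth az_deg."""
    az = math.radians(az_deg)
    ux, uy = math.sin(az), math.cos(az)  # unit vector toward the source
    # A sensor displaced toward the source is hit earlier (negative delay).
    return [-(x * ux + y * uy) / c for x, y in sensors]

def estimate_backazimuth(times, sensors):
    """Grid search: the azimuth whose predicted delays best match the data."""
    best_az, best_err = 0.0, float("inf")
    for az10 in range(0, 3600):  # 0.1 degree steps
        az = az10 / 10.0
        pred = arrival_times(az, sensors)
        # Compare delay *patterns*, removing each set's mean offset
        tm, pm = sum(times) / len(times), sum(pred) / len(pred)
        err = sum(((t - tm) - (p - pm)) ** 2 for t, p in zip(times, pred))
        if err < best_err:
            best_az, best_err = az, err
    return best_az

obs = arrival_times(63.0, SENSORS)  # synthesize data from a 63 degree source
print(f"estimated back-azimuth: {estimate_backazimuth(obs, SENSORS):.1f} deg")
```

With at least three non-collinear sensors the delay pattern determines the arrival direction uniquely, which is why arrays, rather than single sensors, are deployed.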

One useful application of decoding infrasonic songs is listening to infrastructure, such as a bridge. Bridges vibrate at frequencies related to the engineering characteristics of the structure, such as mass and stiffness. Because bridges are surrounded by the fluid atmosphere, their vibrations create waves that can be measured with infrasound sensor arrays. One can visualize this as the ripples that spread after a rock is thrown into a pond. As a bridge’s overall health degrades, whether through time or other events, its engineering characteristics change, causing a change in its vibrational frequencies. Being able to identify a change from a healthy, “in-tune” structure to an unhealthy, “out-of-tune” structure without having to see or inspect the bridge would enable continuous monitoring of entire regional road networks. The ability to conduct this type of monitoring after a natural disaster, such as a hurricane or earthquake, would enable quick identification of damaged structures and prioritization of limited structural assessment resources.
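The “in-tune versus out-of-tune” check can be sketched as tracking the dominant spectral peak of the recorded signal over time. A toy Python illustration with synthetic data (the mode frequencies and the 5% shift threshold are hypothetical illustration values, not from the research):

```python
import cmath, math

# Toy "bridge health" check: estimate the dominant vibration frequency in a
# pressure record via a discrete Fourier transform, then flag a downward
# shift relative to a healthy baseline.

FS = 25.0  # Hz, sampling rate
N = 500    # samples (a 20 s record); DFT bin spacing is FS/N = 0.05 Hz

def record(f0):
    """Synthesize a noiseless pressure trace dominated by frequency f0 (Hz)."""
    return [math.sin(2 * math.pi * f0 * n / FS) for n in range(N)]

def dominant_frequency(x):
    """Frequency (Hz) of the largest DFT bin below the Nyquist frequency."""
    best_k, best_mag = 0, 0.0
    for k in range(1, N // 2):
        coeff = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * FS / N

healthy = dominant_frequency(record(2.40))  # baseline mode, Hz
current = dominant_frequency(record(2.25))  # later measurement
if current < 0.95 * healthy:
    print(f"flag: dominant mode dropped {healthy:.2f} -> {current:.2f} Hz")
```

A real monitoring system would use windowed FFTs and contend with wind noise and interfering sources, but the core idea, watching a structure’s spectral signature drift, is the same.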

Learning to decode the infrasonic choir within the symphony of the environment is the focus of ongoing research at the U.S. Army Engineer Research and Development Center. This effort focuses on moving monitoring into source-rich urban environments, designing lightweight, low-cost sensors and mobile arrays, and developing automated processing methods for analysis. When successful, continuous monitoring of this largely untapped source of information will provide a method for understanding the environment to better inform decisions.

Permission to publish was granted by the Director, Geotechnical and Structures Laboratory, U.S. Army Engineer Research and Development Center.