Can aliens found in museums teach us about learning sound categories?

Christopher Heffner –

Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, 14214, United States

Popular version of 4aSCb6 – Age and category structure in phonetic category learning
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine being a native English speaker learning to speak French for the first time. You’ll have to do a lot of learning, including learning new ways to fit words together to form sentences and a new set of words. Beyond that, though, you must also learn to tell apart sounds that you’re not used to. Even the French word for “sound”, son, differs from the word for “bucket”, seau, in a way that English speakers don’t usually pay attention to. How do you manage to learn to tell these sounds apart when you’re listening to others? You need to group those sounds into categories. In this study, museum and library visitors interacting with aliens in a simple game helped us understand which categories people might find harder to learn. The visitors were of many different ages, which allowed us to see how this might change as we get older.

One thing that might help is arriving with the knowledge that certain types of categories are impossible. If you’re in a new city trying to choose a restaurant, it can be really daunting if you decide to investigate every single restaurant in the city. The decision becomes less overwhelming if you narrow yourself to a specific cuisine or neighborhood. Similarly, if you’re learning a new language, it might be very difficult if you entertain every possible category, but limiting yourself to certain options might help. My previous research (Heffner et al., 2019) indicated that learners might start the language learning process with biases against complicated categories, like ones that you need the word “or” to describe. I can describe a day as uncomfortable in its temperature if it is too hot or too cold. We compared these complicated categories to simple ones and saw that the complicated ones were hard to learn.
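To make the distinction concrete, here is a toy sketch of the two kinds of categories. The acoustic dimension (vowel duration) and the threshold values are purely illustrative, not the study’s actual stimuli: a simple category splits the dimension at one boundary, while a complicated category needs the word “or” to describe.

```python
def simple_category(vowel_duration_ms):
    """Simple: sounds longer than one boundary belong to the category."""
    return vowel_duration_ms > 150

def complicated_category(vowel_duration_ms):
    """Complicated: very short OR very long sounds belong to the category."""
    return vowel_duration_ms < 100 or vowel_duration_ms > 200

sounds = [80, 120, 160, 240]
print([simple_category(s) for s in sounds])       # [False, False, True, True]
print([complicated_category(s) for s in sounds])  # [True, False, False, True]
```

The simple rule keeps all category members on one side of a boundary; the disjunctive rule scatters them across two regions, which is the structure learners appear to be biased against.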

In this study, I examined this sort of bias across many different ages. Brains change as we grow into adulthood and continue to change as we grow older. I was curious whether the bias we have against those certain complicated categories would shift with age, too. To study this, I enlisted visitors to a variety of community sites, by way of partnerships with, among others, the Buffalo Museum of Science, the Rochester Museum and Science Center, and the West Seneca Public Library, all located in Western New York. My lab brought portable equipment to those sites and recruited visitors. The visitors were able to learn about acoustics, a branch of science they had probably not heard much about before; the community spaces got a cool, interactive activity for their guests; and we as the scientists got access to a broader population than we could get sitting inside the university.

Figure 1. The three aliens that my participants got to know over the course of the experiment. Each alien made a different combination of sounds, or no sounds at all.

We told the visitors that they were park rangers in Neptune’s first national park. They had to learn which aliens in the park made which sounds. The visitors didn’t know that the sounds they were hearing were taken from German. Over the course of the experiment, they learned to group sounds together according to categories that we constructed from the German speech sounds. What we found is that learning of simple and complicated categories differed across ages. Nobody liked the complicated categories: everyone, no matter their age, found them difficult to learn. However, responses to the simple categories differed a lot depending on age. Kids found them very difficult, too, but learning got easier for the teens. Learning peaked in young adulthood, then declined somewhat in older age. This suggests that the brain systems that help us learn simple categories might change over time, while everyone seems to share the bias against the complicated categories.


Figure 2. A graph, created by me, showing how accurate people were at matching the sounds they heard with aliens. There are three pairs of bars, and within each pair, the red bars (on the right) show the accuracy for the simple categories, while the blue bars (on the left) show the accuracy for the complicated categories. The left two bars show participants aged 7-17, the middle two bars show participants aged 18-39, and the right two show participants aged 40 and up. Note that the simple categories are easier than the complicated ones for participants aged 18 and up, while for those younger than 18, there is no difference between the categories.

What could happen to Earth if we blew up an incoming asteroid?

Brin Bailey –

University of California, Santa Barbara, Physics Department, Santa Barbara, CA, 93106, United States

Popular version of 4aPA12 – Acoustic ground effects simulations from asteroid disruption via the ‘Pulverize It’ method
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Let’s imagine a hypothetical scenario: a new asteroid has just been discovered, on a path straight towards Earth, threatening to hit us in just a few days. What can we do about it?

A new study funded by NASA is trying to answer that question. Pulverize It, or PI for short, is a proposed method for planetary defense–the effort of monitoring and protecting Earth from incoming asteroids. In essence, PI’s plan of attack is to penetrate an incoming asteroid with high-speed, bullet-like projectiles, which would split the asteroid into many smaller fragments (pieces) (Figure 1). PI’s key difference from other planetary defense methods is its versatility. It is designed to work for a wide variety of scenarios, meaning that PI could be used whether an asteroid impact is one year away or one week away (depending on the asteroid’s size and speed).


Figure 1. PI works by penetrating an asteroid with a high-speed, high-density projectile, which rapidly converts a portion of the asteroid’s kinetic energy into heat and shock waves within the rocky material. The heat energy of the impact locally vaporizes and ionizes material near the impact site(s), and the subsequent shock waves damage and fracture the asteroid material as they move and pass (refract) through it.

How is this possible, and how could the asteroid fragments affect us here on Earth? Rather than using momentum transfer–like in methods such as asteroid deflection, as demonstrated by NASA’s recent Double Asteroid Redirection Test (DART) mission–PI utilizes energy transfer to mitigate a threat by disassembling (or breaking apart) an asteroid.

If the asteroid is blown apart while far away from Earth (generally, at least several months before impact), these fragments would miss the planet entirely. This is PI’s preferred mode of operation, as it is always more favorable to keep the action away from us when possible. In a scenario where we have little warning time (a “terminal” scenario), the small asteroid fragments may enter Earth’s atmosphere–but this is part of the plan (Figure 2).


Figure 2. In a short-warning scenario where the asteroid is intercepted and broken up close to Earth (“terminal” scenario), the fragment cloud enters Earth’s atmosphere. Each fragment will burst at high altitude, dispersing the energy of the original asteroid into optical and acoustical ground effects. As the fragments in the cloud spread out, they will enter the atmosphere at different times and in different places, creating spatially and temporally de-correlated shock waves. The spread of the fragment cloud depends on a variety of factors, mainly intercept time (the amount of time between asteroid breakup and ground impact) and fragment disruption velocity (the speed and direction at which fragments move away from the fragment cloud’s center of mass).

Earth’s atmosphere acts as a bulletproof vest, shielding us from harmful ultraviolet radiation, typical space debris, and, in this case, asteroid fragments. As these small rocky pieces enter the atmosphere at very high speeds, air molecules exert large amounts of pressure on them. This puts stress on the rock and causes it to break up. As the fragment’s altitude decreases, the atmosphere’s density increases. This adds heat and increases pressure until the fragment can’t remain intact anymore, causing the fragment to detonate, or “burst.”
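The breakup condition described above can be sketched as a back-of-the-envelope calculation: a fragment fails roughly when the ram pressure of the oncoming air exceeds the rock’s material strength. The exponential atmosphere model and the 1 MPa strength value below are common illustrative assumptions, not the study’s simulation code.

```python
import math

def air_density(h_m, rho0=1.225, scale_height=8500.0):
    """Simple exponential model of atmospheric density (kg/m^3)."""
    return rho0 * math.exp(-h_m / scale_height)

def ram_pressure(h_m, v_ms):
    """Ram pressure on the fragment's leading face, q = rho * v^2 (Pa)."""
    return air_density(h_m) * v_ms**2

v = 20_000.0    # entry speed, m/s (as in the simulations described below)
strength = 1e6  # assumed strength of weak stony material, Pa
for h_km in (80, 60, 40, 30):
    q = ram_pressure(h_km * 1000, v)
    print(f"{h_km} km altitude: q = {q:.2e} Pa, breakup: {q > strength}")
```

Even with these rough numbers, the ram pressure crosses the assumed strength tens of kilometers up, which is why small fragments burst at high altitude rather than reaching the ground.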

When taken together, these bursts can be thought of as a cosmic fireworks show. As each fragment travels through the atmosphere and bursts, it produces a small amount of light (like a shooting star) and pressure (as a shock wave, like a sonic boom). The collection of these optical and acoustical effects, referred to as “ground effects,” work to disperse the energy of the original asteroid over a wide area and over time. In reasonable mitigation scenarios that are appropriate for the incoming asteroid (for example, based on asteroid size or by breaking the asteroid into a very large number of very small pieces), these ground effects result in little to no damage.

In this study, we investigate the acoustical ground effects that PI may produce when blowing apart an incoming asteroid in a “terminal” scenario with little warning. As each fragment enters Earth’s atmosphere and bursts, the pressure released creates a shock wave, carrying energy and creating an audible “boom” for each fragment (a sonic boom). Using custom codes, we simulate the acoustical ground effects for a variety of scenarios that are designed to keep the total pressure output below 3 kPa–the pressure at which residential windows may begin to break–in order to minimize potential damage (Figure 3).

Figure 3. Simulation of the acoustical ground effects from a 50 m diameter asteroid which is broken into 1000 fragments one day before impact. The asteroid is modeled as a spherical rocky body (average density of 2.6 g/cm3) traveling through space at 20 km/s and entering Earth’s atmosphere at an angle of 45°. The fragments move away from each other at an average speed of 1 m/s. The sonic “booms” produced by the fragment bursts are simulated here based upon the arrival of each shock wave at an observer on the ground (indicated by the green dot in the left plot). Note that both plots take into account the constructive interference between shock waves. Left: real-time pressure. Right: maximum pressure, where each pixel displays the highest pressure it has experienced. The dark orange lines, which display higher pressure values, signify areas where two shock waves have overlapped.

Figure 4. Simulation of the acoustical ground effects from an unfragmented (as in, not broken up) 50 m diameter asteroid. The asteroid is modeled as a spherical rocky body (average density of 2.6 g/cm3) traveling through space at 20 km/s and entering Earth’s atmosphere at an angle of 45°. Upon entering and descending through Earth’s atmosphere, the asteroid undergoes a great amount of pressure from air molecules, eventually causing the asteroid to airburst. This burst releases a large amount of pressure, creating a powerful shock wave. Left: real-time pressure. Right: maximum pressure, where each pixel displays the highest pressure it has experienced.

Our simulations show that the ground effects from an asteroid blown apart by PI are vastly less damaging than if the asteroid hit Earth intact. For example, we find that a 50-meter-diameter asteroid that is broken into 1000 fragments only one day before Earth impact is vastly less damaging than if it were left intact (Figure 3 versus Figure 4). In the mitigated scenario, we estimate that the observation area (±150 km from the fragment cloud’s center) would experience an average pressure of ~0.4 kPa and a maximum pressure of ~2 kPa (Figure 3). In the unfragmented asteroid case (as in, not broken up), we estimate an average pressure of ~3 kPa and a maximum pressure of ~20 kPa (Figure 4). The asteroid mitigated by PI keeps all areas below the 3 kPa damage threshold, while the maximum pressure in the unmitigated case is almost seven times higher than the threshold.
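The headline comparison comes down to simple arithmetic on the approximate values quoted above:

```python
# Approximate pressures from the simulations described in the text (kPa).
window_damage_threshold_kpa = 3.0

mitigated_max_kpa = 2.0      # PI-fragmented asteroid (Figure 3)
unmitigated_max_kpa = 20.0   # intact asteroid airburst (Figure 4)

# The mitigated case stays below the window-damage threshold everywhere:
print(mitigated_max_kpa <= window_damage_threshold_kpa)  # True

# The intact airburst exceeds the threshold almost sevenfold:
print(round(unmitigated_max_kpa / window_damage_threshold_kpa, 1))  # 6.7
```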

The key is that the shock waves from the many fragments are “de-correlated” at any given observer, and hence vastly less threatening. Our findings suggest that PI is an effective approach for planetary defense that can be used in both short-warning (“terminal”) and extended-warning scenarios, resulting in little to no ground damage.

While we would rather not use this terminal defense mode–as it is preferable to intercept asteroids far ahead of time–PI’s short-warning mode could be used to mitigate threats that we fail to see coming. We envision that asteroid impact events similar to the Chelyabinsk airburst in 2013 (~20 m diameter) or the Tunguska airburst in 1908 (~40-50 m diameter) could be effectively mitigated by PI with remarkably short intercepts and relatively little intercept mass.

Website and additional resources
Please see our website for further information regarding the PI project, including papers, visuals, and simulations. For our full suite of ground effects simulations, please check our YouTube channel.

Funding for this program comes from NASA NIAC Phase I grant 80NSSC22K0764, NASA NIAC Phase II grant 80NSSC23K0966, NASA California Space Grant NNX10AT93H, and the Emmett and Gladys W. fund. We gratefully acknowledge support from the NASA Ames High End Computing Capability (HECC) and Lawrence Livermore National Laboratory (LLNL) for the use of their ALE3D simulation tools used for modeling the hypervelocity penetrator impacts, as well as funding from NVIDIA for an Academic Hardware Grant for a high-end GPU to speed up ground effect simulations.

Listening for bubbles to make scuba diving safer

Joshua Currens –

Department of Radiology; Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, United States

Popular version of 5aBAb8 – Towards real-time decompression sickness mitigation using wearable capacitive micromachined ultrasonic transducer arrays
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Scuba diving is a fun recreational activity but carries the risk of decompression sickness (DCS), commonly known as ‘the bends’. This condition occurs when divers ascend too quickly, causing gas that has accumulated in their bodies to expand rapidly into larger bubbles—similar to the fizz when a soda can is opened.

To prevent this, divers will follow specific safety protocols that limit how fast they rise to the surface and stop at predetermined depths to allow bubbles in their body to dissipate. However, these are general guidelines that do not account for every person in every situation. This limitation can make it harder to prevent DCS effectively in all individuals without unnecessarily lengthening the time to ascend for a large portion of divers. Traditionally, these bubbles have only been detected with ultrasound technology after the diver has surfaced, so it is a challenge to predict DCS before it occurs (Figure 1b&c). Early identification of these bubbles could allow for the development of personalized underwater instructions to bring divers back to the surface and minimize the risk of DCS.

To address this challenge, our team is creating a wearable ultrasound device that divers can use underwater.

Ultrasound works by sending sound waves into the body and then receiving the echoes that bounce back. Bubbles reflect these sound waves strongly, making them visible in ultrasound images (Figure 1d). Unlike traditional ultrasound systems that are too large and not suited for underwater use, our innovative device will be compact and efficient, designed specifically for real-time bubble monitoring while diving.
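The pulse-echo principle above can be sketched in a few lines, assuming the conventional average sound speed in soft tissue (about 1540 m/s). The depth of an echo source is half the round-trip distance, since the sound travels out and back.

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, the standard ultrasound assumption

def echo_depth_mm(round_trip_time_s):
    """Depth of a reflector given the round-trip echo time."""
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2 * 1000

# An echo arriving 65 microseconds after the pulse was sent
# comes from a reflector (e.g. a bubble) about 50 mm deep:
print(f"{echo_depth_mm(65e-6):.0f} mm")
```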

Currently, our research involves testing this technology and optimizing imaging parameters in controlled environments like hyperbaric chambers. These are specialized rooms where underwater conditions can be replicated by increasing the inside pressure. We recently collected the first ultrasound scans of human divers during a hyperbaric chamber dive with a research ultrasound system, and next we plan to use it with our first prototype. With this data, we hope to find changes in the images that indicate where bubbles are forming. In the future, we plan to start testing our custom ultrasound tool on divers, which will be a big step towards continuously monitoring divers underwater, and eventually personalized DCS prevention.

Figure 1. (a) Scuba diver underwater. (b) Post-dive monitoring for bubbles using ultrasound. (c) Typical ultrasound system (developed using Biorender). (d) Bubbles detected in ultrasound images as bright spots in the heart. Images courtesy of JC, unless otherwise noted.

Unlocking the Secrets of Ocean Dynamics: Insights from ALMA

Florent Le Courtois –

DGA Tn, Toulon, Var, 83000, France

Samuel Pinson, École Navale, Rue du Poulmic, 29160 Lanvéoc, France
Victor Quilfen, Shom, 13 Rue de Châtellier, 29200 Brest, France
Gaultier Real, CMRE, Viale S. Bartolomeo, 400, 19126 La Spezia, Italy
Dominique Fattaccioli, DGA Tn, Avenue de la Tour Royale, 83000 Toulon, France

Popular version of 4aUW7 – The Acoustic Laboratory for Marine Applications (ALMA) applied to fluctuating environment analysis
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Ocean dynamics happen at various spatial and temporal scales. They cause the displacement and the mixing of water bodies of different temperatures. Acoustic propagation is strongly impacted by these fluctuations, as sound speed depends mainly on the underwater temperature. Monitoring underwater acoustic propagation and its fluctuations remains a scientific challenge, especially at mid-frequency (typically on the order of 1 to 10 kHz). Dedicated measurement campaigns have to be conducted to improve understanding of the fluctuations and their impacts on acoustic propagation, and thus to develop appropriate localization processing.
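One way to see why temperature dominates is Medwin’s simplified formula for sound speed in seawater. The salinity and depth values in the sketch below are chosen only for illustration (roughly Mediterranean conditions), not taken from the campaign data.

```python
def sound_speed(T, S=38.0, z=100.0):
    """Medwin's (1975) simplified sound speed formula (m/s).

    T: temperature in deg C, S: salinity in ppt, z: depth in m.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# A one-degree cooling near 15 deg C shifts the sound speed by roughly 3 m/s,
# enough to measurably change travel times over kilometer-scale paths:
print(round(sound_speed(15.0) - sound_speed(14.0), 2))
```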

The Acoustic Laboratory for Marine Applications (ALMA) was proposed in 2014 by the French MOD Procurement Agency (DGA) to conduct research for passive and active sonar, in support of future sonar array design and processing. Since its inception, ALMA has undergone remarkable transformations, evolving from a modest array of hydrophones to a sophisticated system equipped with 192 hydrophones and advanced technology. With each upgrade, ALMA’s capabilities have expanded, allowing us to delve deeper into the secrets of the sea.


Figure 1. Evolution of the ALMA array configuration, from 2014 to 2020. Real and Fattaccioli, 2018

Bulletin of sea temperature to understand the acoustic propagation
The 2016 campaign took place November 7-17 off the western coast of Corsica in the Mediterranean Sea, at the location marked by the blue dot in Fig. 2 (around 42.4 °N and 9.5 °E). We analyzed signals from a controlled acoustic source and temperature recordings, corresponding to approximately 14 hours of data.

Figure 2. Map of surface temperature during the campaign. Heavy rains of the previous days caused a vortex in the north of Corsica. Pinson et al., 2022

The map of sea temperature during the campaign was computed; it is similar to a weather bulletin for the sea. Heavy rains in the previous days caused widespread cooling over the area. A vortex appeared in the Ligurian Sea between Italy and the north of Corsica. The cold waters then traveled southward along Corsica’s western coast to reach the measurement area. This cooling was also recorded by the thermometers. The main objective was to understand the changes in the echo pattern in relation to the temperature change. Echoes can characterize the acoustic paths: we are mainly interested in the amplitude, the travel time, and the angle of arrival of echoes to describe the acoustic path between the source and the ALMA array.

All echoes extracted by processing ALMA data are plotted as dots in 3D, as a function of the time during the campaign, the angle of arrival, and the time of flight. The loudness of each echo is indicated by the color scale. The 3D image is sliced in Fig. 3 a), b) and c) for better readability. The directions of the last reflection are estimated in Fig. 3 a): positive angles come from the surface reflection, while negative angles come from seabed reflection. The global cooling of the waters caused a slowly increasing fluctuation of the time of flight between the source and the array in Fig. 3 b). A surprising result was a group of spooky arrivals, which appeared briefly during the campaign at an angle close to 0° between 3 and 12 AM in Fig. 3 b) and c).


Figure 3. Evolution of the acoustic paths during the campaign. Each path is a dot defined by its time of flight and angle of arrival during the period of the campaign. Pinson et al., 2022

The acoustic paths were computed using the bulletin of sea temperature. A more focused map of the depth of separation between cold and warm waters, also called the mixed layer depth (MLD), is plotted in Fig. 4. We noticed that, when the mixed layer depth is below the depth of the source, the cooling causes acoustic paths to be trapped by bathymetry in the lower part of the water column. This explains the appearance of the spooky echoes. Trapped paths are plotted in blue, while regular paths are plotted in black in Fig. 5.

Figure 4. Evolution of the depth of separation between cold and warm water during the campaign. Pinson et al., 2022

Figure 5. Example of acoustic paths in the area: black lines indicate regular propagation of the sound; blue lines indicate the trapped paths of the spooky echoes. Pinson et al., 2022

The ALMA system and the associated tools made it possible to illustrate practical ocean acoustics phenomena. ALMA has been deployed during 5 campaigns, representing 50 days at sea, mostly in the Western Mediterranean Sea, but also in the Atlantic to tackle other complex physical problems.

Taking Pictures of the Sound of a Rocket

Grant W. Hart –
Brigham Young University
Provo, UT 84602
United States

Kent Gee (@KentLGee on X)
Eric Hintz
Giovanna Nuccitelli
Trevor Mahlmann (@TrevorMahlmann on X)

Popular version of 1pNSa8 – A photographic analysis of Mach wave radiation from a rocket plume
Presented at the 186th ASA Meeting
Read the abstract at

The rumble of a large rocket launching is one of the loudest non-explosive sounds that mankind has ever made. Where does that sound come from?  Surprisingly, it doesn’t come from the rocket itself, or even the exhaust nozzle, but rather from the plume of exhaust that shoots out of the back. The plume is supersonic when it comes out of the rocket, and it emits sound as it slows down in the atmosphere.

This process was visualized in some recent pictures taken by Trevor Mahlmann of a Falcon 9 launch from Cape Canaveral.  The launch was just after dawn, and Mahlmann took a series of striking pictures as the rocket passed in front of the sun. Two of those pictures are shown below. If you look at the edge of the sun in the later picture you can see distortions caused by the intense sound waves coming from the rocket.

Recognizing the possibility of gaining more information from these pictures, researchers at Brigham Young University got permission from Mr. Mahlmann to further analyze them. The third picture below shows a portion of the difference between the first two pictures. The colors have been modified to show the sound waves more clearly. The waves are clearly coming from a region far down the plume of the rocket, rather than from the nozzle of the rocket. The source was typically about 10-25 times the diameter of the rocket down the plume.

The sound is also directional – it doesn’t go out evenly in all directions, but rather goes out most strongly at about 20-30 degrees below the horizontal. Most rockets sound loudest to people watching the launch when the rocket is 20-30 degrees above the horizon. This is all consistent with models of the sound being produced by the processes that slow the exhaust down from supersonic speeds. A good introduction to rocket noise is found in a recent article in Physics Today.

The researchers first had to line up the images so that the sun was in the same place in each frame. They were then able to subtract the later image from the first one to get the difference and leave just the distortions caused by the waves in the second image.  To find the source of the waves, it was necessary to draw a line backward from the wave’s image and find where it met the rocket’s path across the Sun. Since it took time for the wave to get from the source to where it was observed, they had to find where the rocket was at the time the sound wave was given off. They did this by finding how far the sound had traveled and used the speed of sound to find the time it took to get there. With that information the researchers could find the position of the source and the direction of the wave.
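The time-of-flight step in this procedure can be sketched as follows. The positions and near-ground sound speed below are illustrative assumptions, not values from the paper: given where a wave is observed and where its backward-traced ray meets the rocket’s path, the travel time tells you when the rocket was at that point.

```python
SPEED_OF_SOUND = 340.0  # m/s, assumed near-ground value

def emission_time(wave_pos, source_pos, observation_time):
    """Time the wave left the source: observation time minus travel time.

    Positions are (x, y) pairs in meters; times in seconds.
    """
    dx = wave_pos[0] - source_pos[0]
    dy = wave_pos[1] - source_pos[1]
    travel_time = (dx**2 + dy**2) ** 0.5 / SPEED_OF_SOUND
    return observation_time - travel_time

# A wave seen 1700 m from its source at t = 10 s was emitted 5 s earlier,
# when the rocket was at the traced-back point on its path:
print(emission_time((1700.0, 0.0), (0.0, 0.0), 10.0))  # 5.0
```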

Falcon 9 rocket

Figure 1. A Falcon 9 rocket about to pass in front of the Sun. Image courtesy of Trevor Mahlmann. Used by permission. Higher resolution versions available from the photographer.


Falcon 9 rocket

Figure 2. A Falcon 9 rocket passing in front of the Sun. Note the distortions of the edge of the Sun caused by the sound waves produced by the rocket. Image courtesy of Trevor Mahlmann. Used by permission. Higher resolution versions available from the photographer.



Figure 3. A portion of the difference between the two previous figures, showing the enhanced sound waves. The bottom of the rocket is at the top of the image. Image adapted from Hart et al.’s original paper.

The science of baby speech sounds: men and women may experience them differently

M. Fernanda Alonso Arteche –
Instagram: @laneurotransmisora

School of Communication Science and Disorders, McGill University, Center for Research on Brain, Language, and Music (CRBLM), Montreal, QC, H3A 0G4, Canada

Instagram: @babylabmcgill

Popular version of 2pSCa – Implicit and explicit responses to infant sounds: a cross-sectional study among parents and non-parents
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine hearing a baby coo and instantly feeling a surge of positivity. Surprisingly, how we react to the simple sounds of a baby speaking might depend on whether we are women or men, and whether we are parents. Our lab’s research delves into this phenomenon, revealing intriguing differences in how adults perceive baby vocalizations, with a particular focus on mothers, fathers, and non-parents.

Using a method that measures reaction time to sounds, we compared adults’ responses to vowel sounds produced by a baby and by an adult, as well as meows produced by a cat and by a kitten. We found that women, including mothers, tend to respond positively only to baby speech sounds. On the other hand, men, especially fathers, showed a more neutral reaction to all sounds. This suggests that the way we process human speech sounds, particularly those of infants, may vary significantly between genders. While previous studies report that both men and women generally show a positive response to baby faces, our findings indicate that their speech sounds might affect us differently.

Moreover, mothers rated babies and their sounds highly, expressing a strong liking for babies, their cuteness, and the cuteness of their sounds. Fathers, although less responsive in the reaction task, still gave high ratings for their liking of babies, babies’ cuteness, and the appeal of their sounds. This contrast between implicit (subconscious) reactions and explicit (conscious) opinions highlights an interesting complexity in parental instincts and perceptions. Implicit measures, such as those used in our study, tap into automatic and unconscious responses that individuals might not be fully aware of or may not express when asked directly. These methods offer a more direct window into the underlying feelings that might be obscured by social expectations or personal biases.

This research builds on earlier studies conducted in our lab, where we found that infants prefer to listen to the vocalizations of other infants, a factor that might be important for their development. We wanted to see if adults, especially parents, show similar patterns because their reactions may also play a role in how they interact with and nurture children. Since adults are the primary caregivers, understanding these natural inclinations could be key to supporting children’s development more effectively.

The implications of this study are not just academic; they touch on everyday experiences of families and can influence how we think about communication within families. Understanding these differences is a step towards appreciating the diverse ways people connect with and respond to the youngest members of our society.