Acoustic Suction Tweezers: A new compact acoustic gadget for small object manipulation

Shoya Yoneda – yoneda-shoya@ed.tmu.ac.jp

Department of Electrical Engineering and Computer Science
Tokyo Metropolitan University
Hino-shi, Tokyo, 191-0065
Japan

Kan Okubo – kanne@tmu.ac.jp

Popular version of 4pPA6 – Miniaturized Acoustic Suction Tweezers: Lift Control and Cap Design for Mobile Applications
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me/appinfo.php?page=IntHtml&project=ASAASJ25&id=3983403&server=eppro02.ativ.me

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Can you believe that ultrasound-induced forces can actually pull objects?
It may sound surprising, but the phenomenon is real. In this paper we introduce you to the fascinating world of sound-based manipulation.

Our research group has long been developing acoustic tweezers capable of picking up tiny objects using ultrasonic forces. (See https://youtu.be/PoZsKjst82g)

In our latest work, we take this idea a step further. By using a remarkably simple structure and cleverly harnessing the lifting force generated by sound, we have created a new acoustic gadget: the acoustic suction tweezer.

Yes: acoustic suction tweezers, sometimes called an “acoustic pipette,” pull objects toward them using sound energy, with no vacuum effect involved.

Proposed Device: The Acoustic Suction Tweezer

Video 1. Introduction of the Acoustic Suction Tweezer

Sound exerts a force on objects known as the acoustic radiation force, which typically pushes objects away. However, by placing a small aperture unit in front of the transducer, we can shape a unique sound field that transforms this force into attraction and lift, almost like a miniature vacuum cleaner made of sound. To harness this effect, we developed an acoustic focusing cap through extensive trial and error, testing various designs manufactured with a 3D printer to evaluate their performance.
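The idea that radiation force can pull as well as push has a textbook analogue: Gor'kov's theory for a small sphere in a standing wave predicts a force that points toward pressure nodes, i.e., attraction rather than repulsion. The sketch below implements that classical standing-wave case only; it is not the authors' near-field cap model, and all numbers (drive pressure, particle size) are illustrative.

```python
import numpy as np

def gorkov_standing_wave_force(z, p0, freq, rho_p, c_p,
                               rho0=1.2, c0=343.0, radius=1e-3):
    """Axial acoustic radiation force (N) on a small sphere at position z
    in a 1D standing wave p(z, t) = p0*cos(k z)*cos(w t) in air.
    Positive values point toward +z (Gor'kov potential result)."""
    k = 2 * np.pi * freq / c0
    f1 = 1.0 - (rho0 * c0**2) / (rho_p * c_p**2)      # monopole coefficient
    f2 = 2.0 * (rho_p - rho0) / (2.0 * rho_p + rho0)  # dipole coefficient
    phi = f1 / 3.0 + f2 / 2.0                         # acoustic contrast factor
    E_ac = p0**2 / (4.0 * rho0 * c0**2)               # mean acoustic energy density
    return 4.0 * np.pi * phi * k * radius**3 * E_ac * np.sin(2.0 * k * z)

# A dense particle (polystyrene-like, phi > 0) is pushed toward the nearest
# pressure node rather than simply blown downstream.
wavelength = 343.0 / 40e3          # 40 kHz ultrasound in air
node = wavelength / 4              # first pressure node of cos(k z)
F_before = gorkov_standing_wave_force(0.8 * node, 1e3, 40e3, 1050.0, 2350.0)
F_after = gorkov_standing_wave_force(1.2 * node, 1e3, 40e3, 1050.0, 2350.0)
# F_before > 0 and F_after < 0: from both sides, the force points toward the node
```

The sign flip on either side of the node is the "sound can attract" effect in its simplest setting; the focusing cap engineers a far more elaborate near-field to get net lift.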

Figure 1. Acoustic focusing caps of various designs

The figure below shows the simulated sound pressure levels. Relatively high-pressure regions are concentrated near the tip of the cap, which correlates with the generation of attractive acoustic radiation forces in this area.

Figure 2. An Example of Sound Pressure Levels Inside the Cap

How does it compare to other devices?
Our previously proposed acoustic tweezers require large transducer arrays and complex phase control (See https://www.eurekalert.org/news-releases/923462). In contrast, the acoustic suction tweezers overcome these limitations through careful design considerations. Remarkably, they lift objects even larger than the wavelength of sound, such as 15 mm polystyrene spheres.

Practicality
The Acoustic Suction Tweezer excels in practicality; it can be implemented quickly, at low cost, using just a 3D printer and a single ultrasonic transducer.

We confirmed that the device can handle lightweight industrial items such as coated wires and even delicate objects like feathers, materials that conventional vacuum tweezers struggle to grasp.

We expect this device to have strong potential for applications in diverse fields, including medicine, biochemistry, and engineering. We also hope that this system will inspire further innovation and the creation of many other useful acoustic-based tools.

Sound(e)scape: Can a Sonic Break Improve Cognitive Performance?

Alaa Algargoosh – algargoosh@vt.edu

Virginia Polytechnic Institute and State University (Virginia Tech), Perry St, Blacksburg, VA, 24061, United States

Megan Wysocki
Virginia Polytechnic Institute and State University (Virginia Tech)

Amneh Hamida
RWTH Aachen University

Popular version of 1pNSa4 – Cognitive Restoration in Virtual Interactions with Indoor Acoustic Environments
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3977035

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

People often associate restorative experiences with nature: the sound of birds, wind, or flowing water. But what if indoor spaces could offer their own kind of mental escape, not through what we see, but through how we interact with sound?

This idea began with a simple observation. When you walk into a space and notice how your footsteps and voice are reflected back to you, the echoes create a subtle sense of awe. According to Attention Restoration Theory, experiences that evoke fascination and effortless engagement can help replenish mental resources. We wanted to explore whether these moments of acoustic interaction between a person and a space could invite gentle attention and, in turn, support cognitive restoration. In Attention Restoration Theory, this is referred to as soft fascination, a type of stimulus that is engaging but not overwhelming.

Exploring Echoes as a Path to Mental Restoration:
During a live demonstration at the MIT Museum, we used auralization, a technology that lets you hear your voice as if you were in a different place by using that place’s sound signature, or impulse response. A volunteer hummed into the acoustic signature of Hagia Sophia. Later, the entire audience hummed together and reflected on their experiences. The conversation pointed to the potential of such acoustic interaction to support a meditative state by affecting the sense of space, time, and self.
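Under the hood, auralization is a convolution: the dry (anechoic) recording is filtered with the room's measured impulse response. A minimal sketch, using a synthetic decaying-noise impulse response rather than a real Hagia Sophia measurement:

```python
import numpy as np

def auralize(dry, impulse_response):
    """Convolve a dry (anechoic) recording with a room impulse response
    to hear the voice as if it were spoken in that room."""
    wet = np.convolve(dry, impulse_response)
    return wet / np.max(np.abs(wet))   # normalize to avoid clipping

# Toy example: a 0.5 s hummed tone and a synthetic 2 s reverberant IR
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
hum = np.sin(2 * np.pi * 220 * t)

rng = np.random.default_rng(0)
t_ir = np.arange(int(2.0 * fs)) / fs
ir = rng.standard_normal(t_ir.size) * np.exp(-3.0 * t_ir)  # decaying tail
ir[0] = 1.0                                                # direct sound

wet = auralize(hum, ir)
# The output is longer than the input by the length of the reverberant tail
```

Real auralization systems convolve with measured impulse responses (such as those in the ODEON library mentioned below) and often work per frequency band, but the core operation is exactly this.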

This inspired a controlled experiment to study the restorative potential of indoor acoustic environments. We asked people to experience different sound environments (Figure 1) and measured their cognitive activity before and after each interaction. Early results suggest that interactive acoustics may support attention restoration depending on the acoustic characteristics, opening a new way of thinking about how sound affects us indoors.

Figure 1: Virtual interaction with an acoustic environment during the experiment, where a person hears their own voice transformed through the acoustic signature of another space.

Why does this matter?
We spend most of our time indoors, yet discussions of restorative environments often focus on natural settings. Bringing restoration indoors is especially relevant for workplaces and schools, where mental fatigue is common. It may also hold meaningful promise for neurodivergent individuals, including those with ADHD, who often benefit from environments that support attention without overstimulating it.
We imagine applications in immersive restorative spaces where people can interact with sound to reset and return to their activities with greater clarity. We also envision subtle integration into transitional spaces such as staircases, corridors, and building entrances that provide gentle cognitive relief as people move throughout their day.

Sound(e)scape reframes acoustics not as background, but as a tool for well-being. By understanding how interactive sound shapes attention and cognition, we can design buildings that do not simply avoid harmful noise. They can actively help the mind take a restorative break.

Figure 2: Visualization of interacting with different acoustic environments. Left: Max Addae vocalizing in an office environment (MIT Media Lab). Middle: “Hagia Sophia – Muhammad, Allah, Abu Bakr” by Rabe!, licensed under CC BY-SA 3.0 (https://commons.wikimedia.org/wiki/File:Hagia_Sophia_-_Muhammad,_Allah,_Abu_Bakr.jpg) Cropped and one person (Max Addae) added by Alaa Algargoosh. Right: Max Addae vocalizing in Boston Symphony Hall.

Sound recordings:
1. Vocalizing in an office environment (MIT Media Lab). (Voice: Max Addae)
2. Virtual vocalization in Hagia Sophia. (Voice: Max Addae)
3. Virtual vocalization in Boston Symphony Hall. (Voice: Max Addae)
The virtual vocalizations were generated using the impulse responses available in the ODEON software library.

Acoustics of Korean Traditional Architecture: A Case Study of Magoksa Temple

Sungjoon Kim – sungjoon.kim@kaist.ac.kr

Instagram: @jooon.kim
Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, N25, Yuseong-gu, Daejeon, 34141, South Korea

Popular version of 1aAA2 – Acoustics of Korean Traditional Architecture: A Case Study of Magoksa Temple
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3979424

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Western churches typically evoke the impression of long, reverberant echoes. This acoustic quality is largely influenced by their domed ceilings and stone construction, which amplify and sustain sound. A single note from an organ or choir can travel far and linger in the air, creating a bright and grand sound field.

In contrast, Asian temples often have acoustic characteristics that differ significantly from those of Western churches. In particular, traditional Korean temples have a soft and warm sound environment. Their structures are primarily composed of wood, soil, and paper, reflecting Korea’s architectural philosophy of harmony with nature and the surrounding landscape. Instead of a strong, ringing echo, the listener experiences a gentle and intimate atmosphere.

Our study explores the acoustic characteristics of Magoksa Temple in South Korea, a Buddhist temple complex whose main halls date back to the 17th century. We measured the reverberation and other acoustic properties of three main temple halls and analyzed how sound behaves in these wooden spaces. The goal of this study is to understand these unique sound behaviors and to consider how they can be recreated when digitally restoring historical sites in virtual reality content and other media.

Figure 1: Main worship hall (Daegwangbojeon) of Magoksa Temple and surrounding courtyard.

To carry out the measurements, we played test signals through a loudspeaker and recorded the responses using microphones, including a three-dimensional (3D) microphone array. These room impulse responses capture the “acoustic fingerprint” of each hall: how long sound lasts, which frequency bands are emphasized or reduced, and how sound energy arrives from different directions around a listener.
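One standard quantity extracted from such impulse responses is the reverberation time: how long sound takes to decay by 60 dB. A common estimator is Schroeder backward integration; the sketch below applies it to a synthetic impulse response with a known decay, not to the Magoksa measurements themselves.

```python
import numpy as np

def rt60_from_ir(ir, fs, db_lo=-25.0, db_hi=-5.0):
    """Estimate reverberation time (s) from a room impulse response via
    Schroeder backward integration, fitting the -5..-25 dB decay range
    and extrapolating to 60 dB (a T20-style estimate)."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]        # Schroeder decay curve
    decay_db = 10 * np.log10(energy / energy[0])
    mask = (decay_db <= db_hi) & (decay_db >= db_lo)
    t = np.arange(ir.size) / fs
    slope, _ = np.polyfit(t[mask], decay_db[mask], 1)  # dB per second
    return -60.0 / slope

# Synthetic IR built to have a ~1.5 s reverberation time
fs = 8000
t = np.arange(int(3 * fs)) / fs
rng = np.random.default_rng(1)
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / 1.5)  # -60 dB at 1.5 s
rt = rt60_from_ir(ir, fs)
```

In practice this is done per octave band, which is how frequency-dependent behavior like the temple halls' long low-frequency decay shows up.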

Figure 2: Acoustic measurement setup inside a temple hall with a loudspeaker and 3D microphone array.

We found that all three temple halls share two distinctive features:

  1. Strong low-frequency resonance – Deep sounds, such as drums or low chanting, tend to linger longer than higher-pitched sounds. One important reason is structural: the floors are hollow beneath the wooden planks, and this cavity reinforces low-frequency energy, similar to the body of a musical instrument.
  2. High-frequency absorption – Soft materials such as paper doors, soil walls, and exposed wood absorb much of the high-frequency content. This reduces sharp reflections and makes the space sound calm and close, rather than bright or very echoey like a stone cathedral.
Figure 3: Frequency responses of the three main halls at Magoksa Temple.

Using the 3D microphone array, we also examined spatial characteristics, such as which parts of the structure (floor, ceiling, or side walls) create the most prominent reflections, and how sound surrounds a seated listener. These results help us understand more deeply how traditional Korean temples use their wooden structures and natural materials to create such distinctive acoustics.

Understanding these sound patterns helps us preserve more than just the visual beauty of cultural heritage—it allows us to capture the aural identity of a place. By integrating these findings into digital reconstructions and virtual reality experiences, we can make presentations of traditional Korean architecture feel more realistic and immersive, allowing future generations not only to see history but also to hear it.

Listen to the Voices of Plants: Evaluating Leaf Water Content from the Acoustic Response of a Leaf

Sakura Niki – s21a4113hj@s.chibakoudai.jp

Chiba Institute of Technology, Narashino, Chiba, 275-0016, Japan

Popular version of 1pEA11 – Investigation of the relationship between a circular diaphragm model and measured leaf natural frequency to evaluate leaf water content.
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3983223

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Have you ever wanted to listen to the voices of plants when they need water? If you use our method, you can.

We focused on changes in the acoustic frequency characteristics of the leaf after we stopped watering. Currently, we are developing a method to evaluate leaf water content through its acoustic response for plant-human communication.

Figure 1. Proposed method for evaluating leaf water content through its acoustic response

In this study, we confirmed that the leaf natural frequency showed complex behavior as the leaf lost water content. Despite this complexity, we demonstrated that the frequency change can be estimated using an equation based on circular diaphragm theory.

Our research steps were conducted in the following order: I. Measurement of leaf natural frequency, II. Estimation of leaf natural frequency, and III. Comparison of measured and estimated values.

First, in “I. Measurement of leaf natural frequency,” we obtained the acoustic frequency characteristics of the leaf under non-irrigation conditions by vibrating the leaf using a bone-conduction transducer. The results showed that the natural frequency changed in a non-monotonic, complex way over time as leaf water content decreased. By simultaneously measuring the leaf’s Young’s modulus and thickness as physical parameters, we confirmed that these complex changes in natural frequency were caused by independent changes in those two parameters.

Next, in “II. Estimation of leaf natural frequency,” we derived an estimation equation by applying a first-order approximation to the circular diaphragm theory to clarify the leaf vibration behavior under non-irrigation conditions. The estimated values were calculated by substituting the measured physical parameters into the estimation equation.

Figure 2. Estimation equation to estimate leaf natural frequency
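The authors' exact first-order equation is shown in Figure 2 and not reproduced here. To give the flavor of circular diaphragm theory, the classical fundamental frequency of a clamped circular plate ties frequency to the same physical parameters (Young's modulus, thickness, density); the leaf-like numbers below are hypothetical.

```python
import math

def clamped_plate_f0(radius, thickness, young_modulus, density, poisson=0.3):
    """Fundamental natural frequency (Hz) of a thin clamped circular plate:
    f = (lambda^2 / 2*pi*a^2) * sqrt(D / (rho*h)), with bending stiffness
    D = E*h^3 / (12*(1 - nu^2)) and lambda^2 ~ 10.22 for the first mode.
    A classical diaphragm-theory result, used here only as an illustration
    of how frequency depends on stiffness and thickness."""
    D = young_modulus * thickness**3 / (12 * (1 - poisson**2))
    lam2 = 10.22
    return lam2 / (2 * math.pi * radius**2) * math.sqrt(D / (density * thickness))

# Hypothetical leaf-like values: 3 cm radius, 0.3 mm thickness,
# E = 10 MPa, density 800 kg/m^3
f0 = clamped_plate_f0(0.03, 3e-4, 10e6, 800)
# In this model f scales linearly with thickness, so independent drifts in
# E and h as the leaf dries can produce non-monotonic frequency changes.
```

This is why measuring Young's modulus and thickness alongside the frequency, as the study does, is enough to predict the observed frequency behavior.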

Finally, in “III. Comparison of measured and estimated values,” we compared the natural frequency measured in step I with that estimated in step II using correlation coefficients. The estimated values correlated highly with the measured values (correlation coefficients of 0.66–0.83). We concluded that the estimation equation based on circular diaphragm theory can be applied to leaf vibration.

Figure 3. Comparison of measured and estimated leaf natural frequency changes after watering was stopped

Through this study, we investigated the relationship between the leaf vibration characteristics and water content, and we clarified this relationship as a preliminary step. Based on these findings, we aim to establish a quantitative measurement method for evaluating leaf water content using its acoustic response.

Once this proposed method is established, we will be able to hear the voices of leaves when they are thirsty.

How the season affects if you hear a sonic boom during rocket ascent

Mark Anderson – anderson.mark.az@gmail.com

X: @AerospaceMark
Brigham Young University, Provo, UT, 84602, United States

Additional Authors
Kent L. Gee
X: @KentLGee
Brigham Young University

Lucas K. Hall
California State University Bakersfield

Institutional Social Media
Brigham Young University
X: @BYU
Instagram: @brighamyounguniversity

Department of Physics and Astronomy
X: @BYU_PhysAstro

Popular version of 2aNSb9 – Modeling seasonal variation in rocket ascent sonic booms
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=IntHtml&project=ASAASJ25&id=3989257

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Residents of Southern California have reported hearing sudden, explosion-like sounds that startle people and rattle buildings. It turns out these sounds are actually sonic booms from rockets launched 70 or more miles away. And whether you’ll hear one is often determined by the season.

These sonic booms are produced during the rocket’s ascent toward orbit (see Fig. 1). As the vehicle pitches over during flight, it produces a sonic boom that can reach the ground below. This has happened with every orbital rocket, from the Saturn V and Space Shuttle all the way to the modern Falcon 9. The difference is that while these sonic booms were historically audible only over the ocean, rockets today are launched closer to the coast, making the booms audible on land.

Figure 1: A SpaceX Falcon 9 rocket launches to orbit. Photo credit: SpaceX. CC BY-NC 2.0 (https://www.flickr.com/photos/spacex/51027443336/). Annotation by the authors.

As part of a project funded by Vandenberg Space Force Base, researchers at Brigham Young University and California State University Bakersfield have teamed up to measure these rocket ascent sonic booms. Almost immediately, a question arose: why is it that we can measure sonic booms on land for a few months, followed by a period of almost nothing, even if the rocket’s trajectory stays the same? To answer this question, we used NASA’s state-of-the-art sonic boom modeling software, PCBoom. After verifying that we could reproduce our measured results using day-of weather data inputs, we simulated a commonly flown coastal trajectory using five and a half years’ worth of weather balloon data. This trajectory is among the closest currently-flown trajectories to the coast.

The results came back clearly. The seasonal weather causes predictable patterns in where sonic booms are most likely to be heard on land. More specifically, it typically comes down to which direction the upper-level winds (above 10 miles) are blowing, either from the east (summer) or the west (spring/fall). Because these winds change rather predictably throughout the year, we conclude that, for this launch trajectory, sonic booms on land are most likely to be heard in the spring and fall, with somewhat fewer in the winter and very few in the summer. To visualize these trends, Fig. 2 shows representative examples of where the sonic boom will land for each of the four seasons.
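Predicting an actual footprint requires ray tracing through measured atmospheric profiles, which is what PCBoom does. As a back-of-envelope illustration of the wind effect alone, one can sum the horizontal drift a descending boom accumulates in each atmospheric layer; the profiles below are hypothetical, and the model ignores refraction entirely.

```python
def boom_drift(wind_profile, c0=310.0):
    """Toy estimate of the horizontal drift (km) of a downward-traveling
    sonic-boom ray due to horizontal wind. wind_profile is a list of
    (layer_thickness_km, wind_speed_m_s) pairs from top to bottom;
    positive wind blows toward the east (a westerly wind). This is a
    hedged sketch, not PCBoom: refraction and ray geometry are ignored."""
    drift_km = 0.0
    for thickness_km, u in wind_profile:
        dt = thickness_km * 1000.0 / c0   # time to cross the layer vertically
        drift_km += u * dt / 1000.0       # wind carries the ray sideways
    return drift_km

# Hypothetical three-layer profiles (thickness km, wind m/s):
summer = [(30, -40.0), (10, -10.0), (16, 5.0)]   # easterlies aloft
spring = [(30, +50.0), (10, +20.0), (16, 5.0)]   # westerlies aloft

# Easterly upper-level winds (summer) drag the footprint westward, offshore;
# westerlies (spring/fall) drag it eastward, toward land.
```

Even this crude sum shows why the upper-level wind direction, which reverses seasonally, dominates whether the boom footprint lands on the coast or out at sea.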

Figure 2: Representative sonic boom footprints from each of the four seasons, generated using PCBoom with day-of weather inputs. Actual footprints for a given day are subject to daily weather differences and thus will not exactly match these plots.

With this new understanding of how the seasonal weather affects the sonic boom footprint, we will continue to work with Vandenberg Space Force Base on further rocket ascent sonic boom research. We hope that one day this research will contribute to a world where future rockets can launch regularly while minimizing disruptions to communities and environments.

The sounds of the water music of Vanuatu

Randy Hurd – randyhurd@weber.edu

Weber State University, Department of Mechanical Engineering, Ogden, UT, 84408, United States

Additional author: John Allen

Popular version of 5aMU3 – Acoustics of the Vanuatu Water Music
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3981726

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Women in the island nation of Vanuatu create music in a unique way. Standing waist deep in a pool, they strike the water with their hands, creating a rich variety of tones (see Figure 1). While the acoustics of inanimate objects entering water (such as spheres and raindrops) have long been understood, the mechanisms governing human hand strikes have received less attention. For this study, we replicate and simplify these musical techniques in a controlled laboratory environment to analyze the physical properties—the hydrodynamics and the resulting acoustic profile—of the sounds produced.

Figure 1: Women from the Leweton Cultural Group in the Banks Islands of Vanuatu dance together while interacting with the water surface to create music. (Image courtesy of The Secrets of Vanuatu Water Music. Directed by Marc Hoeferlin, ARTE France and ZED, 2015)

To isolate and measure these effects, we recreated the water-slapping motions in a transparent water tank. We used a high-speed camera to capture the subsurface cavity formation in detail (see Figure 2), and recorded the sounds with both an in-air microphone and an underwater hydrophone.

Figure 2: A series of high-speed image sequences portray simplifications of four different techniques used by the women of Vanuatu to create music. a) A flat-handed slap produces a wide and shallow entrained air cavity. b) A cup-handed slap produces a slightly deeper cavity. c) A plunge with a deep hand produces a deep cavity that collapses in the final image. d) A horizontal plowing motion entrains air behind the hand (50 ms between images).

The key finding of this work is the establishment of a direct link between the physical motion of the hand, the shape and size of the air cavity created, and the acoustic characteristics of the sound produced. We find that the way the hand interacts with the water creates different subsurface cavities and controls the volume and tone of the sound produced. Even hand shape upon impact is shown to affect the resulting tone. In essence, the research demonstrates that the tone and duration of the sound are primarily controlled by the size and shape of the entrained air cavity. The larger the cavity, the deeper and longer the resulting sound.
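The cavity-size-to-pitch trend has a classical model: the Minnaert resonance of a spherical air bubble in water, whose frequency is inversely proportional to the cavity radius. Hand-slap cavities are far from spherical, so this sketch is only a guide to the trend; the radii below are illustrative.

```python
import math

def minnaert_frequency(radius_m, p0=101325.0, gamma=1.4, rho=998.0):
    """Minnaert resonance frequency (Hz) of a spherical air cavity of
    radius radius_m in water: f = (1 / 2*pi*R) * sqrt(3*gamma*p0 / rho).
    Classic model for why larger entrained cavities ring at lower pitch;
    real hand-slap cavities are non-spherical, so treat as a trend only."""
    return (1.0 / (2.0 * math.pi * radius_m)) * math.sqrt(3.0 * gamma * p0 / rho)

small = minnaert_frequency(0.005)   # ~5 mm radius cavity
large = minnaert_frequency(0.02)    # ~2 cm radius cavity
# The larger cavity resonates at a markedly lower frequency (deeper tone)
```

The inverse relationship between cavity size and pitch is exactly the lever the musicians pull: bigger cavities from plunging strikes give deep booms, small shallow cavities from flat slaps give higher notes.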

The women of Vanuatu are incredibly sophisticated in their approach to creating music. They manipulate the sound spectrum without needing different instruments, simply by varying parameters like hand pose, curvature, and depth of penetration. This is a powerful demonstration of how multiphase flow, water entry, and acoustics can combine to produce an enriching and aesthetically complex experience.