To Bop or To Sway? The Music Will Tell You

Specific musical features have the power to make people bounce or sway. #ASA_ASJ2025 #ASA189

HONOLULU, Dec. 2, 2025 — Some music is for grooving: It evokes spontaneous dancing, like head bopping, jumping, or arm swinging. Other music is for swaying, or for crying, or for slow dancing. Music makes people move, but whether musicians intentionally induce specific movements with their compositions, such as vertical bouncing or horizontal swaying, and which musical features contribute to these distinctions, is less clear.

Shimpei Ikegami, an associate professor at Showa Women’s University, sought to understand how musicians express intended bodily movement directions using specific acoustic features.

“It’s almost magical how something we hear with our ears can influence our entire body. In Japan, we even have terms to describe distinct rhythmic feelings to music,” Ikegami said.

Ikegami will present his musical results Tuesday, Dec. 2, at 2:55 p.m. HST as part of the Sixth Joint Meeting of the Acoustical Society of America and Acoustical Society of Japan, running Dec. 1-5 in Honolulu, Hawaii.

The music samples being composed. Credit: Shimpei Ikegami


Four professional pop musicians composed short musical excerpts intended to elicit either “tate-nori” (vertical, up-and-down movement), “yoko-nori” (horizontal, side-to-side movement), or neither movement type.

Ikegami quantified the acoustic characteristics of the excerpts, measuring features such as loudness, beat clarity, rhythm complexity, and timbre. By comparing the prominence of features across intended-movement conditions, he found that vertical “bop” music was characterized by a clearer beat and percussive sounds, fueling listeners with the rush of high-energy workout songs. In contrast, horizontal “sway” excerpts were smoother and included fewer percussive sounds, creating a mellow and atmospheric musical impression. In a listener-rating experiment, participants heard each excerpt and rated the extent to which it made them feel like moving vertically and horizontally. Ikegami found that the listeners’ directional dancing inclinations matched the musicians’ intended expressions.

Ikegami’s findings suggest that the way musicians express certain qualities of danceability is specific and quantifiable. He aims to further explore commonalities and differences between musical profiles that induce vertical versus horizontal bodily movement.

“In the immediate future, I am investigating the psychological impressions — how the music is perceived by listeners. I am also deeply interested in cultural differences in these phenomena,” said Ikegami. “I believe that advancing my understanding of how music influences our body movements could be beneficial in fields such as health care, rehabilitation, and education.”

Contact:
AIP Media
+1 301-209-3090
media@aip.org

——————— MORE MEETING INFORMATION ———————

Main Meeting Website: https://acousticalsociety.org/honolulu-2025/
Technical Program: https://eppro02.ativ.me/web/planner.php?id=ASAASJ25

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting and/or press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

ABOUT THE ACOUSTICAL SOCIETY OF JAPAN
ASJ publishes a monthly journal in Japanese, the Journal of the Acoustical Society of Japan, as well as a bimonthly journal in English, Acoustical Science and Technology, which is available online at no cost at https://www.jstage.jst.go.jp/browse/ast. These journals include technical papers and review papers. Special issues are occasionally organized and published. The Society also publishes textbooks and reference books to promote acoustics across various topics. See https://acoustics.jp/en/.

The sounds of the water music of Vanuatu

Randy Hurd – randyhurd@weber.edu

Weber State University, Department of Mechanical Engineering, Ogden, UT, 84408, United States

Additional author: John Allen

Popular version of 5aMU3 – Acoustics of the Vanuatu Water Music
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3981726

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Women in the island nation of Vanuatu create music in a unique way. Standing waist deep in a pool, they strike the water with their hands, creating a rich variety of tones (see Figure 1). While the acoustics of inanimate objects entering water (such as spheres and raindrops) have long been understood, the mechanisms governing human hand strikes have received less attention. For this study, we replicate and simplify these musical techniques in a controlled laboratory environment to analyze the physical properties—the hydrodynamics and the resulting acoustic profile—of the sounds produced.

Figure 1: Women from the Leweton Cultural Group in the Banks Islands of Vanuatu dance together while interacting with the water surface to create music. (Image courtesy of The Secrets of Vanuatu Water Music. Directed by Marc Hoeferlin, ARTE France and ZED, 2015)

To isolate and measure these effects, we recreated the water-slapping motions in a transparent water tank. We used a high-speed camera to capture the subsurface cavity formation in detail (see Figure 2), and recorded the sounds with both an in-air microphone and an underwater hydrophone.

Figure 2: A series of high-speed image sequences portray simplifications of four different techniques used by the women of Vanuatu to create music. a) A flat-handed slap produces a wide and shallow entrained air cavity. b) A cup-handed slap produces a slightly deeper cavity. c) A plunge with a deep hand produces a deep cavity that collapses in the final image. d) A horizontal plowing motion entrains air behind the hand (50 ms between images).

The key finding of this work is the establishment of a direct link between the physical motion of the hand, the shape and size of the air cavity created, and the acoustic characteristics of the sound produced. We find that the way the hand interacts with the water creates different subsurface cavities and controls the volume and tone of the sound produced. Even the hand shape upon impact is shown to affect the resulting tone. In essence, the research demonstrates that the tone and duration of the sound are primarily controlled by the size and shape of the entrained air cavity: the larger the cavity, the deeper and longer the resulting sound.
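The cavity-size-to-pitch trend echoes a classic textbook result, the Minnaert resonance of an air bubble in water. As a rough illustration only (the Minnaert model assumes a spherical bubble, which real hand-entrained cavities are not, and it is not taken from this paper), the relationship can be sketched as:

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, pressure_pa=101325.0, rho=998.0):
    """Resonant frequency (Hz) of a spherical air cavity in water (Minnaert, 1933).
    Hand-entrained cavities are not spherical, so this is only a rough guide."""
    return (1.0 / (2.0 * math.pi * radius_m)) * math.sqrt(3.0 * gamma * pressure_pa / rho)

# A larger cavity rings at a lower pitch, matching the observation above.
print(minnaert_frequency(0.01))  # ~1 cm cavity -> roughly 330 Hz
print(minnaert_frequency(0.03))  # ~3 cm cavity -> roughly 110 Hz
```

The same inverse scaling of pitch with cavity size is what makes the flat-hand, cupped-hand, and plunging strikes in Figure 2 sound so different.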

The women of Vanuatu are incredibly sophisticated in their approach to creating music. They manipulate the sound spectrum without needing different instruments, simply by varying parameters like hand pose, curvature, and depth of penetration. This is a powerful demonstration of how multiphase flow, water entry and acoustics can produce an enriching and aesthetically complex experience.

How do humans whistle?

Prashanth Tamilselvam – ptamilselvam@hawk.illinoistech.edu
Bluesky: @prashanth-t.bsky.social
Instagram: @prashanth_tamilselvam
Illinois Institute of Technology, Chicago, Illinois, 60616, United States

Francisco Ruiz
ruiz@illinoistech.edu
Illinois Institute of Technology,
Chicago, Illinois,60616
United States

Popular version of 4pMU15 – Experiments on the flow acoustics of Human whistling
Presented at the 189th ASA Meeting
Read the abstract at https://eppro02.ativ.me//web/index.php?page=Session&project=ASAASJ25&id=3976527

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

When was the last time you tried to whistle and wondered how we make music with our mouths? For many, whistling feels effortless: purse your lips, blow, and a clear tone appears. Yet nearly half of us find it surprisingly difficult and never manage to produce more than a faint breath. Our research explores the physics behind this familiar but deceptively complex activity.

When you whistle, the tongue rises against the roof of the mouth, leaving a small gap. The lips form a second constriction, and the space between acts as a resonant chamber, much like the tube of a flute. Pitch is controlled by moving the tongue to change the space between it and the palate. But geometry alone is not enough: we have found that only a specific combination of airflow and lip shape creates a ‘sweet spot’ leading to a stable tone. Maybe this is why so many people struggle with it.

Figure 1

In our experiments, involving orifices shaped like the hole of a donut to represent the lips, we found periodic vortices coming out (fig 1). These vortices are released at a frequency that is exactly the pitch we hear, showing that whistling is not simply blowing air but a precise coupling between the flow and the sound (fig 2a). The shape of the lips has a significant influence on the sound. Too narrow or too wide an opening suppresses the sound, and the front-to-back contour of the lips must encourage clean airflow separation (see how the non-toroidal lip geometry in fig 2b manages to whistle only within a small range of air velocity). This subtle control of lip geometry is essential for sustaining a clear, steady whistle.
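The pitch-setting role of vortex shedding can be illustrated with the generic bluff-body relation f = St·U/d, where St is the Strouhal number. The numbers below are purely illustrative, not measured whistle values:

```python
def shedding_frequency(strouhal, flow_speed_mps, opening_diameter_m):
    """Vortex-shedding frequency f = St * U / d.
    The Strouhal number depends on geometry; 0.2 is a common illustrative
    value for bluff-body shedding, not a value measured for whistling."""
    return strouhal * flow_speed_mps / opening_diameter_m

# Faster airflow or a smaller opening raises the shedding frequency -- and the pitch.
print(shedding_frequency(0.2, 30.0, 0.006))  # ~1000 Hz for a ~6 mm opening
```

This is why both the airflow velocity and the lip opening described above matter: the shed vortices set the pitch, and the lip geometry decides whether they form cleanly at all.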

Figure 2
The sound does not simply travel outward into the air. It also travels back into the mouth, where it interacts with the air coming from the lungs. This inward-traveling sound creates a feedback loop that amplifies the oscillations of the flow (fig 2c). The shear layer produced at the back of the mouth has a strong influence on how the airflow interacts with the lips. Subtle changes in this upstream shear layer either support or disrupt the formation of the vortices, and hence the sound.

Difficult? It clearly is for many of us, but did you know that walruses also whistle? And they shape their lips exactly the way humans do it. We hope that understanding how humans (and walruses) whistle will help those of us who struggle with it. Meanwhile, our research is already guiding the development of a new, super-compact wind instrument that can be played without the use of hands. We call it the Flutino.

Whistling may feel ordinary, but its physics is anything but simple.

What do a glass bottle and an ancient Indian flute have in common? Explorations in acoustic color

Ananya Sen Gupta – ananya-sengupta@uiowa.edu
Department of Electrical and Computer Engineering
University of Iowa
Iowa City, IA 52242
United States

Trevor Smith – trevor-smith@uiowa.edu

Panchajanya Dey – panchajanyadey@gmail.com
@panchajanya_official

Popular version of 5aMU4 – Exploring the acoustic color signature patterns of Bansuri, the traditional Indian bamboo flute using principles of the Helmholtz generator and geometric signal processing techniques
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0038290

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

The Bansuri, the ancient Indian bamboo flute

 

More media files can be accessed here

Bansuri, the ancient Indian bamboo flute, is of rich historical, cultural, and spiritual significance to South Asian musical heritage. It has been mentioned in ancient Hindu texts dating back centuries, sometimes millennia, and is still played all over India today in classical, folk, film, and other musical genres. Made from a single bamboo reed, with seven finger holes (six of which are usually played) and one blow-hole, the Bansuri carries the rich melody of wind whistling through the tropical woods. In terms of musical acoustics, the Bansuri essentially works as a composite Helmholtz resonator, also known as a wind throb, with a cylindrical rather than spherical and partially open cavity. The cavity openings are the finger holes that are open during playing, as well as the open end of the shaft. Helmholtz resonance refers to the phenomenon of air resonance in a cavity, an effect named after the German physicist Hermann von Helmholtz. The bansuri sound is created when the air going in through the blow-hole is trapped inside the cavity of the bamboo shaft before it leaves, primarily through the end of the bamboo shaft as well as the first open finger holes.

The longer the effective air column, which depends on how many finger holes are closed, the lower the fundamental resonant frequency. However, the acoustical quality of the bansuri is determined not only by the fundamental (lowest) frequency but also by the relative dominance of the harmonics (higher octaves). The different octaves (a typical bansuri has a range of three octaves) can be activated by the player by controlling the angle and “beam-width” of the blow, which significantly impacts the dynamics of the air pressure, vorticity, and airflow. A direct blow into the blow-hole for any finger-hole combination activates the direct propagation mode, where the lowest octave is dominant. To hit the higher octaves of the same note, the flautist has to blow at an angle to activate the other modes of sound propagation, which proceed through the air column as well as the wooden body of the bansuri.
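The length-to-pitch trend can be sketched with the simplest open-pipe approximation, f = c/2L. This is a deliberate simplification: as described above, the real bansuri behaves as a composite resonator, and end and hole corrections are ignored here.

```python
def open_pipe_fundamental(effective_length_m, speed_of_sound=343.0):
    """Fundamental of an idealized open-ended pipe: f = c / (2L).
    Closing finger holes lengthens the effective air column and lowers the pitch."""
    return speed_of_sound / (2.0 * effective_length_m)

# All holes closed (longer column) vs. several holes open (shorter column):
print(open_pipe_fundamental(0.60))  # ~286 Hz
print(open_pipe_fundamental(0.35))  # ~490 Hz
```

Even this crude model reproduces the basic behavior: uncovering finger holes shortens the vibrating air column and raises the note.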

The accompanying videos and images show a basic demonstration of the bansuri as a musical instrument by Panchajanya Dey, simple demonstrations of a glass bottle as a Helmholtz resonator, and exposition of how the acoustic color (shown in the figures) can be used to bridge interdisciplinary artists to create new forms of music.

Acoustic color is a popular data science tool that expresses the relative distribution of power across the frequency spectrum as a function of time. Visually, these are images with a colormap (red = high, blue = low) representing the relative power between the harmonics of the flute, and a rising (or falling) curve within the acoustic color image indicates a rising (or falling) tone for a harmonic. For the bansuri, the harmonic structures exist as non-linear braid-like curves within the acoustic color image. The higher harmonics, which may contain useful melodic information, are often embedded against background noise that sounds like hiss, likely from mixing of airflow modes and irregular reed vibrations. However, some hiss is natural to the flute, and filtering it out makes the music lose its authenticity. In the talk, we presented computational techniques based on harmonic filtering to separate the modes of acoustic propagation and sound production in the Bansuri, e.g., filtering out leakage due to mixing of modes. We also showed how the geometric aspects of the acoustic color features (e.g., harmonic signatures) may be exploited to create a fluid feature dictionary. The purpose of this dictionary is to store the harmonic signatures of different melodic movements without sacrificing the rigor of musical grammar or the authentic earthy sound of the bansuri (some of the hiss is natural and supposed to be there). This fluid feature repository may be harnessed with large language models (LLMs) or similar AI/ML architectures to enable machine interpretation of Indian classical music, to create collaborative infrastructure that lets artists from different musical traditions experiment with an authentic software testbed, and to support other exciting applications.
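As described above, an acoustic color image is essentially a short-time power spectrum rendered as an image. A minimal, generic sketch using NumPy (this is a plain spectrogram, not the authors' harmonic-filtering pipeline):

```python
import numpy as np

def acoustic_color(signal, sample_rate, frame_len=1024, hop=256):
    """Short-time power spectrum: rows = frequency bins, columns = time frames.
    Rendering this array with a red-to-blue colormap gives an 'acoustic color'
    image in which steady harmonics appear as horizontal bands."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i*hop : i*hop + frame_len] * window
                       for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=1)
    power_db = 20.0 * np.log10(np.abs(spectrum).T + 1e-12)
    freqs = np.fft.rfftfreq(frame_len, d=1.0/sample_rate)
    return freqs, power_db

# A 440 Hz tone with a weak 880 Hz harmonic shows two horizontal bands.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2*np.pi*440*t) + 0.3*np.sin(2*np.pi*880*t)
freqs, img = acoustic_color(tone, sr)
print(img.shape)  # (frequency bins, time frames)
```

For a real bansuri recording, the braid-like curves mentioned above would appear as the harmonic bands bend with the melody.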

Explaining the tone of two legendary jazz guitarists

Chirag Gokani – chiragokani@utexas.edu
Instagram: @chiragokani
Applied Research Laboratories and Walker Department of Mechanical Engineering
Austin, Texas 78766-9767

Preston S. Wilson (also at Applied Research Laboratories and Walker Department of Mechanical Engineering)

Popular version of 2aMU6 – Timbral effects of the right-hand techniques of jazz guitarists Wes Montgomery and Joe Pass
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037556

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Wes Montgomery and Joe Pass are two of the most influential guitarists of the 20th century. Acclaimed music educator and producer Rick Beato says,

Wes influenced all my favorite guitarists, from Joe Pass, to George Benson, to Pat Martino, to Pat Metheny, to John Scofield. He influenced Jimi Hendrix, he influenced Joe Satriani, Eric Johnson. Virtually every guitarist I can think of that I respect, Wes is a major, if not the biggest, influence of.

Beato similarly praises Joe Pass for his 1973 album Virtuoso, calling it the “album that changed my life”:

If there’s one record that I ever suggest to people that want to get into jazz guitar, it’s this record, Joe Pass, Virtuoso.

Part of what made Wes Montgomery and Joe Pass so great was their iconic guitar tone. Montgomery played with his thumb, and his tone was focused and warm. See, for example, “Cariba” from Full House (1962). Meanwhile, Pass played both fingerstyle and with a pick, and his tone was smooth and rich. His fingerstyle playing can be heard on “Just Friends” from I Remember Charlie Parker (1979), and his pick playing can be heard on “Dreamer (Vivo Sonhando)” from Ella Abraca Jobim (1981).

Wes Montgomery (left, Tom Marcello, CC BY-SA 2.0) and Joe Pass (right, Chuck Stewart, Public domain via Wikimedia Commons)

To better understand the tone of Montgomery and Pass, we modeled the thumb, fingers, and pick as they interact with a guitar string.

Our model for how the thumb, fingers, and pick excite a guitar string. The string’s deformation is exaggerated for the purpose of illustration.

One factor in the model is the location at which the string is excited. Montgomery played closer to the bridge of the guitar, while Pass played closer to the neck. Another important factor is the amount that the thumb, fingers, and pick slip off the string. Montgomery’s thumb delivered a “pluck” and slipped less than Pass’s pick, which delivered more of a “strike” to the string.

Simulations of the model suggest that Montgomery and Pass balanced these two factors with the choice of thumb, fingers, and pick. The focused nature of Montgomery’s tone is due to his thumb, while the warmth of his tone arises from playing closer to the bridge and predominantly plucking the string. Meanwhile, the richness of Pass’s tone is due to his pick, while its smooth quality is due to playing closer to the neck and predominantly striking the string. Pass’s fingerstyle playing falls in between the thumb and pick techniques.
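The pluck-position effect in models of this kind follows the textbook ideal-string result: the n-th harmonic of a string plucked at fraction p of its length scales as sin(nπp)/n². The sketch below illustrates only that position effect; the slip ("pluck" versus "strike") behavior in the authors' model is not captured here.

```python
import math

def harmonic_amplitudes(pluck_fraction, n_harmonics=8):
    """Relative harmonic amplitudes of an ideal string plucked at a point.
    Textbook result: mode n scales as |sin(n*pi*p)| / n^2, where p is the
    excitation position as a fraction of string length. Modes with a node
    at the excitation point are suppressed entirely."""
    return [abs(math.sin(n * math.pi * pluck_fraction)) / n**2
            for n in range(1, n_harmonics + 1)]

near_bridge = harmonic_amplitudes(0.10)  # excitation close to the bridge
near_neck = harmonic_amplitudes(0.40)    # excitation close to the neck
# Exciting the string at its midpoint (p = 0.5) silences all even harmonics.
```

Moving the excitation point thus reshapes the harmonic balance of the note, which is one of the two factors the model above uses to distinguish the players' tones.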

Guitarists wishing to play in the style of Montgomery and Pass can adjust their technique to match the parameters of our model. Conversely, the parameters of our model can be adjusted to emulate the tone of other notable guitarists.

Notable jazz and fusion guitarists grouped by technique. The parameters of our model can be adjusted to describe these guitarists.

Our model could also be used to synthesize realistic digital guitar voices that are more sensitive to the player’s touch.

To demonstrate the effects of the right-hand technique on the tone, we offer an arrangement of the jazz standard “Stella by Starlight” for solo guitar. The thumb is used at the beginning of the arrangement, with occasional contributions from the fingers. The fingers are used exclusively from 0:50-1:10, after which the pick is used to conclude the arrangement. Knowledge of the physics underlying these techniques helps us better appreciate both the subtlety of guitar performance and the contributions of Montgomery and Pass to music.

What’s the Best Way to Pitch Shift and Time Stretch a Mashup?

Anh Dung Dinh – addinh@connect.ust.hk
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Hong Kong SAR

Xinyang WU – xwuch@connect.ust.hk
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Hong Kong SAR

Andrew Brian Horner – horner@cse.ust.hk
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Hong Kong SAR

Popular version of 1pMU – Ideal tempo and pitch for two-source mashup
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037389

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Corey Blaz/Shutterstock.com

If you are a music enthusiast, chances are you have encountered mashups, a form of music remix that combines multiple tracks, on the Internet. DJs assemble playlists of popular songs with smooth transitions to spice up a radio station or club, and online artists layer tracks on top of each other to create a fresh take on existing songs.

To make a mashup that’s harmonically organized and pleasing, you need to consider the musical features of the original songs, including tempo – the speed at which the songs are played, and key – which musical notes are used. For example, let us combine the vocals and instrumental of these two songs:

“Twinkle Twinkle Little Star” melody rendered with vocal samples

“Vivacity” Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/

There are different ways the songs could be modified to fit each other and combined. Some examples are shown here:

Our study aims to figure out which of the above examples, among others, would be rated by listeners as the best fit. We conducted a series of surveys to evaluate the preferences of over 70 listeners when presented with mashups of varying features. Our results are depicted in Figures 1 and 2, which show that most listeners preferred mashups with an average tempo and the original vocal pitch. More in-depth results are explored in our conference presentation and paper.
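The preferred recipe, average tempo with the vocal's original key, amounts to simple stretch and shift ratios. A hedged sketch of the arithmetic (function and parameter names are illustrative, not from the paper):

```python
def mashup_targets(vocal_bpm, inst_bpm, vocal_key_offset_semitones):
    """Time-stretch ratios and a pitch-shift ratio for aligning two tracks
    at the average tempo while keeping the vocal's original key -- the
    combination listeners in the study preferred on average.
    (Generic DSP arithmetic; names here are illustrative.)"""
    target_bpm = (vocal_bpm + inst_bpm) / 2.0
    vocal_stretch = target_bpm / vocal_bpm   # playback-speed ratio for vocals
    inst_stretch = target_bpm / inst_bpm     # playback-speed ratio for backing
    # Shift the instrumental into the vocal's key, not the other way round:
    # each semitone corresponds to a frequency ratio of 2**(1/12).
    inst_shift_ratio = 2.0 ** (vocal_key_offset_semitones / 12.0)
    return vocal_stretch, inst_stretch, inst_shift_ratio

# Vocals at 90 BPM, instrumental at 110 BPM, keys two semitones apart:
v, i, r = mashup_targets(90.0, 110.0, 2)
print(v, i, r)  # 100/90 and 100/110 stretches; ~1.122 shift (2 semitones up)
```

In practice, a phase vocoder or similar algorithm would apply these ratios so that tempo and pitch can be changed independently.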

Figure 1: Average score of listener preference for different tempo variants in vocal-swap mashups. A higher score indicates more participants selected that option as the “most preferred” version of the mashup combining two songs. Overall, the majority of listeners liked the mashups at the average tempo of the two original tracks.


Figure 2: Average score of listener preference for different key variants, plotted as a function of the key difference between the two base songs. In most cases, the vocals’ original key is the most preferred version for the mashups.


We hope our results provide helpful insights for mashup artists looking to enhance their compositions, as well as for automatic mashup-creation algorithms to improve their output.