–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Women in the island nation of Vanuatu create music in a distinctive way. Standing waist-deep in a pool, they strike the water with their hands, producing a remarkable variety of tones (see Figure 1). While the acoustics of inanimate objects entering water (such as spheres and raindrops) have long been understood, the mechanisms governing human hand strikes have received less attention. In this study, we replicate and simplify these musical techniques in a controlled laboratory environment to analyze the hydrodynamics and the resulting acoustic profile of the sounds produced.
Figure 1: Women from the Leweton Cultural Group in the Banks Islands of Vanuatu dance together while interacting with the water surface to create music. (Image courtesy of The Secrets of Vanuatu Water Music. Directed by Marc Hoeferlin, ARTE France and ZED, 2015)
To isolate and measure these effects, we recreated the water-slapping motions in a transparent water tank. We used a high-speed camera to capture the subsurface cavity formation in detail (see Figure 2), and recorded the sounds with both an in-air microphone and an underwater hydrophone.
Figure 2: A series of high-speed image sequences portray simplifications of four different techniques used by the women of Vanuatu to create music. a) A flat-handed slap produces a wide and shallow entrained air cavity. b) A cup-handed slap produces a slightly deeper cavity. c) A plunge with a deep hand produces a deep cavity that collapses in the final image. d) A horizontal plowing motion entrains air behind the hand (50 ms between images).
The key finding of this work is the establishment of a direct link between the physical motion of the hand, the shape and size of the air cavity created, and the acoustic characteristics of the sound produced. We find that the way the hand interacts with the water creates different subsurface cavities, which in turn control the volume and tone of the sound produced. Even the hand's shape upon impact affects the resulting tone. In essence, the research demonstrates that the tone and duration of the sound are primarily controlled by the size and shape of the entrained air cavity: the larger the cavity, the deeper and longer the resulting sound.
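The "larger cavity, deeper sound" trend is consistent with the classical textbook result for the natural frequency of an air bubble in water (the Minnaert resonance), in which frequency scales inversely with bubble radius. The sketch below illustrates that scaling; it is not the authors' model, and the radii chosen are purely illustrative.

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=998.0):
    """Natural frequency (Hz) of a spherical air bubble of radius R in water:
    f0 = (1 / (2*pi*R)) * sqrt(3*gamma*p0 / rho), with gamma the ratio of
    specific heats of air, p0 the ambient pressure, and rho the water density."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# Larger cavities ring at lower frequencies, consistent with the
# "larger cavity, deeper sound" observation.
for r_cm in (0.5, 2.0, 5.0):
    print(f"R = {r_cm} cm -> f0 ~ {minnaert_frequency(r_cm / 100):.0f} Hz")
```

A half-centimeter bubble rings near the middle of the audible range, while a five-centimeter cavity sounds roughly an order of magnitude lower.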
The women of Vanuatu are incredibly sophisticated in their approach to creating music. They manipulate the sound spectrum without needing different instruments, simply by varying parameters like hand pose, curvature, and depth of penetration. This is a powerful demonstration of how multiphase flow, water entry, and acoustics can combine to produce an enriching and aesthetically complex experience.
Prashanth Tamilselvam – ptamilselvam@hawk.illinoistech.edu
Bluesky: @prashanth-t.bsky.social
Instagram: @prashanth_tamilselvam
Illinois Institute of Technology, Chicago, Illinois, 60616, United States
Francisco Ruiz – ruiz@illinoistech.edu
Illinois Institute of Technology, Chicago, Illinois, 60616, United States
When was the last time you tried to whistle and wondered how we make music with our mouths? For many, whistling feels effortless: purse your lips, blow, and a clear tone appears. Yet nearly half of us find it surprisingly difficult and never manage to produce more than a faint breath. Our research explores the physics behind this familiar but surprisingly complex activity.
When you whistle, the tongue rises against the roof of the mouth, leaving a small gap. The lips form a second constriction, and the space between acts as a resonant chamber, much like the tube of a flute. Pitch is controlled by moving the tongue to change the space between it and the palate. But geometry alone is not enough: we have found that only a specific combination of airflow and lip shape creates a ‘sweet spot’ leading to a stable tone. Maybe this is why so many people struggle with it.
Figure 1
In our experiments, which used orifices shaped like the hole of a donut to represent the lips, we observed periodic vortices being shed from the opening (Figure 1). These vortices are released at a frequency that exactly matches the pitch we hear, showing that whistling is not simply blowing air but a precise coupling between the flow and the sound (Figure 2a). The shape of the lips has a significant influence on the sound. Too narrow or too wide an opening suppresses the sound, and the front-to-back contour of the lips must encourage clean airflow separation (note how the non-toroidal lip geometry in Figure 2b whistles only within a small range of air velocity). This subtle control of lip geometry is essential for sustaining a clear, steady whistle.
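The link between flow and pitch can be made concrete with a standard rule of thumb for periodic vortex shedding, f = St * U / D, where St is a Strouhal number. The sketch below uses a generic St of about 0.2 and illustrative values for jet speed and lip opening; none of these numbers come from the study itself.

```python
def shedding_frequency(flow_speed_mps, opening_diameter_m, strouhal=0.2):
    """Estimate vortex-shedding frequency from f = St * U / D.
    St ~ 0.2 is a generic bluff-body value, used here only for illustration."""
    return strouhal * flow_speed_mps / opening_diameter_m

# Illustrative numbers: a ~10 m/s jet through a ~2 mm lip opening
# lands in the kilohertz range typical of human whistling.
f = shedding_frequency(10.0, 0.002)
print(f"estimated pitch ~ {f:.0f} Hz")
```

Speeding up the jet or narrowing the opening raises the estimated pitch, matching the everyday experience that blowing harder through tighter lips produces a higher whistle.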
Figure 2
The sound does not simply travel outward into the air. It also travels back into the mouth, where it interacts with the air coming from the lungs. This inward-traveling sound creates a feedback loop that amplifies the oscillations of the flow (Figure 2c). The shear layer produced at the back of the mouth has a strong influence on how the airflow interacts with the lips. Subtle changes in this upstream shear layer either support or disrupt the formation of the vortices, and hence the sound.

Difficult? It clearly is for many of us, but did you know that walruses also whistle? And they shape their lips exactly the way humans do.

We hope that understanding how humans (and walruses) whistle will help those of us who struggle with it. Meanwhile, our research is already guiding the development of a new, super-compact wind instrument that can be played without the use of hands. We call it the Flutino.

Whistling may feel ordinary, but its physics is anything but simple.
Chirag Gokani – chiragokani@utexas.edu
Instagram: @chiragokani
Applied Research Laboratories and Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, Texas 78766-9767, United States
Preston S. Wilson (also at Applied Research Laboratories and Walker Department of Mechanical Engineering)
Popular version of 2aMU6 – Timbral effects of the right-hand techniques of jazz guitarists Wes Montgomery and Joe Pass Presented at the 188th ASA Meeting Read the abstract at https://doi.org/10.1121/10.0037556
Wes Montgomery and Joe Pass are two of the most influential guitarists of the 20th century. Acclaimed music educator and producer Rick Beato says,
Wes influenced all my favorite guitarists, from Joe Pass, to George Benson, to Pat Martino, to Pat Metheny, to John Scofield. He influenced Jimi Hendrix, he influenced Joe Satriani, Eric Johnson. Virtually every guitarist I can think of that I respect, Wes is a major, if not the biggest, influence of.
Beato similarly praises Joe Pass for his 1973 album Virtuoso, calling it the “album that changed my life”:
If there’s one record that I ever suggest to people that want to get into jazz guitar, it’s this record, Joe Pass, Virtuoso.
Part of what made Wes Montgomery and Joe Pass so great was their iconic guitar tone. Montgomery played with his thumb, and his tone was focused and warm. See, for example, “Cariba” from Full House (1962). Meanwhile, Pass played both fingerstyle and with a pick, and his tone was smooth and rich. His fingerstyle playing can be heard on “Just Friends” from I Remember Charlie Parker (1979), and his pick playing can be heard on “Dreamer (Vivo Sonhando)” from Ella Abraca Jobim (1981).
Wes Montgomery (left, Tom Marcello, CC BY-SA 2.0) and Joe Pass (right, Chuck Stewart, Public domain via Wikimedia Commons)
To better understand the tone of Montgomery and Pass, we modeled the thumb, fingers, and pick as they interact with a guitar string.
Our model for how the thumb, fingers, and pick excite a guitar string. The string’s deformation is exaggerated for the purpose of illustration.
One factor in the model is the location at which the string is excited. Montgomery played closer to the bridge of the guitar, while Pass played closer to the neck. Another important factor is the amount that the thumb, fingers, and pick slip off the string. Montgomery’s thumb delivered a “pluck” and slipped less than Pass’s pick, which delivered more of a “strike” to the string.
Simulations of the model suggest that Montgomery and Pass balanced these two factors with the choice of thumb, fingers, and pick. The focused nature of Montgomery’s tone is due to his thumb, while the warmth of his tone arises from playing closer to the bridge and predominantly plucking the string. Meanwhile, the richness of Pass’s tone is due to his pick, while its smooth quality is due to playing closer to the neck and predominantly striking the string. Pass’s fingerstyle playing falls in between the thumb and pick techniques.
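The effect of excitation location can be illustrated with the textbook result for an ideal plucked string: plucking at a fraction beta of the string's length gives harmonic amplitudes proportional to sin(n*pi*beta)/n^2. This sketch covers only that one factor (it ignores the slip behavior central to the authors' model) and the beta values are illustrative, not measured from either guitarist.

```python
import math

def pluck_spectrum(beta, n_harmonics=8):
    """Relative harmonic amplitudes of an ideal string plucked at a
    fraction `beta` of its length: |a_n| proportional to sin(n*pi*beta)/n**2."""
    return [abs(math.sin(n * math.pi * beta)) / n**2
            for n in range(1, n_harmonics + 1)]

near_bridge = pluck_spectrum(0.1)  # excitation close to the bridge
near_neck = pluck_spectrum(0.4)    # excitation close to the neck

# Normalize to the fundamental to compare relative brightness.
rel_bridge = [a / near_bridge[0] for a in near_bridge]
rel_neck = [a / near_neck[0] for a in near_neck]
print("near bridge:", [f"{a:.2f}" for a in rel_bridge])
print("near neck:  ", [f"{a:.2f}" for a in rel_neck])
```

Relative to the fundamental, the bridge-side pluck carries stronger upper harmonics, the ideal-string analogue of the brighter, more focused tone associated with playing nearer the bridge.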
Guitarists wishing to play in the style of Montgomery and Pass can adjust their technique to match the parameters of our model. Conversely, the parameters of our model can be adjusted to emulate the tone of other notable guitarists.
Notable jazz and fusion guitarists grouped by technique. The parameters of our model can be adjusted to describe these guitarists.
Our model could also be used to synthesize realistic digital guitar voices that are more sensitive to the player’s touch.
To demonstrate the effects of the right-hand technique on the tone, we offer an arrangement of the jazz standard “Stella by Starlight” for solo guitar. The thumb is used at the beginning of the arrangement, with occasional contributions from the fingers. The fingers are used exclusively from 0:50 to 1:10, after which the pick is used to conclude the arrangement. Knowledge of the physics underlying these techniques helps us better appreciate both the subtlety of guitar performance and the contributions of Montgomery and Pass to music.
Andrew Brian Horner horner@cse.ust.hk
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR
Popular version of 1aMU2 – The emotional characteristics of the violin with different pitches, dynamics, and vibrato levels
Presented at the 187th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0034939
Music has a unique way of moving us emotionally, but have you ever wondered how individual sounds shape these feelings?
In our study, we looked at how different features of violin notes, such as pitch (how high or low the note sounds), dynamics (how loudly it is played), and vibrato (the periodic wavering of the note), combine to create emotional responses. While previous research often focuses on each feature in isolation, we explored how they interact, revealing how the violin’s sounds evoke specific emotions.
To conduct this study, we used single-note recordings from the violin at different pitches, two levels of dynamics (loud and soft), and two vibrato settings (no vibrato and high vibrato). We invited participants to listen to these sounds and rate their emotional responses using a scale of emotional positivity (valence) and intensity (arousal). Participants also selected which emotions they felt from a list of 16 emotions, such as joyful, nervous, relaxed, or agitated.
Audio 1. The experiment used a violin single-note sample (middle C pitch + loud dynamics + no vibrato).
Audio 2. The experiment used a violin single-note sample (middle C pitch + soft dynamics + no vibrato).
Audio 3. The experiment used a violin single-note sample (middle C pitch + loud dynamics + high vibrato).
Audio 4. The experiment used a violin single-note sample (middle C pitch + soft dynamics + high vibrato).
Our findings reveal that each element plays a unique role in shaping emotions. As shown in Figure 1, higher pitches and strong vibrato generally raised emotional intensity, creating feelings of excitement or tension. Lower pitches were more likely to evoke sadness or calmness, while loud dynamics made emotions feel more intense. Surprisingly, sounds without vibrato were linked to calmer emotions, while vibrato added energy and excitement, especially for emotions like anger or fear. Figure 2 illustrates how strong vibrato enhances emotions like anger and sadness, while the absence of vibrato correlates with calmer feelings.
Figure 1. Average valence-arousal ratings for different levels of pitch, dynamics, and vibrato. Higher pitches and strong vibrato increase arousal, while soft dynamics and no vibrato are linked to higher valence, highlighting pitch as the most influential factor.
Figure 2. Average ratings for the 16 emotions across levels of pitch, dynamics, and vibrato. Strong vibrato enhances angry and sad emotions, while no vibrato supports calm emotions; higher pitches increase arousal for angry emotions, and brighter tones evoke calm and happy emotions.
Our research provides insights for musicians, composers, and even music therapists, helping them understand how to use the violin’s features to evoke specific emotions. With this knowledge, violinists can fine-tune their performance to match the emotional impact they aim to create, and composers can carefully select sounds that resonate with listeners’ emotional expectations.
The University of British Columbia, Department of Linguistics, Vancouver, British Columbia, V6T 1Z4, Canada
Additional authors:
Sydney Norris, Sabrina Luk, Marcell Maitinsky, Md Jahurul Islam, and Bryan Gick
Popular version of 3pPP6 – The Role of Genre Association in Sung Dialect Categorization
Presented at the 187th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0035323
Have you ever listened to a song and later been surprised to hear the artist speak with a different accent than the one you heard in the song? Take country singer Keith Urban’s song “What About Me” for instance; when listening, you might assume that he has a Southern American (US) English accent. However, in his interviews, he speaks with an Australian English accent. So why did you think he sounded Southern?
Research suggests that specific accents or dialects are associated with musical genres [2], that singers adjust their accents based on genre [4], and that foreign accents are more difficult to recognize in songs than in speech [5]. However, when listeners perceive an accent in a song, it is unclear which type of information they rely on: the acoustic speech information or information about the musical genre. Our previous research investigated this question for Country and Reggae music and found that genre recognition may play a larger role in dialect perception than the actual sound of the voice [9].
Our current study explores American Blues and Folk music, genres that allow for easier separation of vocals from instrumentals, with more refined stimuli manipulation. Blues is strongly associated with African American English [3], while Folk can be associated with a variety of (British, American, etc.) dialects [1]. Participants listened to manipulated clips of sung and “spoken” lines taken from songs in both genres, which were transcribed for participants (see Figure 1). AI applications were used to remove instrumentals for both sung and spoken clips, while “spoken” clips also underwent rhythm and pitch normalization so that they sounded like spoken rather than sung speech. After hearing each sung or spoken line, participants were asked to identify the dialect they heard from six options [7, 8] (see Figure 2).
Figure 1: Participant view of a transcript from a Folk song clip.
Figure 2: Participant view of six dialect options after hearing a clip.
Participants were much more confident and accurate in categorizing accents for clips in the Sung condition, regardless of genre. The proportion of uncertainty (“Not Sure” responses) in the Spoken condition was consistent across genres (see “D” in Figure 3), suggesting that participants were more certain of dialect when musical cues were present. Dialect categories followed genre expectations, as can be seen from the increase in identifying African American English for Blues in the Sung condition (see “A”). Removing uncertainty by adding genre cues did not increase the likelihood of “Irish English” or “British English” being chosen for Blues, though it did for Folk (see “B” and “C” in Figure 3), in line with genre-based expectations.
Figure 3: Participant dialect responses.
These findings enhance our understanding of the relationship between musical genre and accent. Referring again to the example of Keith Urban, the singer’s stylistic accent change may not be the only culprit for our interpretation of a Southern drawl. Rather, we may have assumed we were listening to a musician with a Southern American English accent when we heard the first banjo-like twang or tuned into iHeartCountry Radio. When we listen to a song and perceive a singer’s accent, we are not only listening to the sounds of their speech, but are also shaping our perception from our expectations of dialect based on the musical genre.
References:
Carrigan, J., Henry L. (2004). Lornell, Kip, The NPR Curious Listener’s Guide to American Folk Music. Library Journal, 129(19), 63.
De Timmerman, Romeo, et al. (2024). The globalization of local indexicalities through music: African‐American English and the blues. Journal of Sociolinguistics, 28(1), 3–25. https://doi.org/10.1111/josl.12616.
Gibson, A. M. (2019). Sociophonetics of popular music: insights from corpus analysis and speech perception experiments [Doctoral dissertation, University of Canterbury]. http://dx.doi.org/10.26021/4007.
Mageau, M., Mekik, C., Sokalski, A., & Toivonen, I. (2019). Detecting foreign accents in song. Phonetica, 76(6), 429–447. https://doi.org/10.1159/000500187.
RStudio. (2020). RStudio: Integrated Development for R. RStudio, PBC, Boston, MA. http://www.rstudio.com/.
Stoet, G. (2010). PsyToolkit – A software package for programming psychological experiments using Linux. Behavior Research Methods, 42(4), 1096-1104.
Stoet, G. (2017). PsyToolkit: A novel web-based method for running online questionnaires and reaction-time experiments. Teaching of Psychology, 44(1), 24-31.
Walter, M., Bengtson, G., Maitinsky, M., Islam, M. J., & Gick, B. (2023). Dialect perception in song versus speech. The Journal of the Acoustical Society of America, 154(4_supplement), A161. https://doi.org/10.1121/10.0023131.