What do a glass bottle and an ancient Indian flute have in common? Explorations in acoustic color

Ananya Sen Gupta – ananya-sengupta@uiowa.edu
Department of Electrical and Computer Engineering
University of Iowa
Iowa City, IA 52242
United States

Trevor Smith – trevor-smith@uiowa.edu

Panchajanya Dey – panchajanyadey@gmail.com
@panchajanya_official

Popular version of 5aMU4 – Exploring the acoustic color signature patterns of Bansuri, the traditional Indian bamboo flute using principles of the Helmholtz generator and geometric signal processing techniques
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0038290

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

The Bansuri, the ancient Indian bamboo flute

 

More media files can be accessed here.

Bansuri, the ancient Indian bamboo flute, is of rich historical, cultural, and spiritual significance to South Asian musical heritage. It is mentioned in ancient Hindu texts dating back centuries, sometimes millennia, and is still played all over India today in classical, folk, film, and other musical genres. Made from a single bamboo reed, with seven finger holes (of which six are played most of the time) and one blow-hole, the Bansuri carries the rich melody of wind whistling through the tropical woods. In terms of musical acoustics, the Bansuri essentially works as a composite Helmholtz resonator, also known as a wind throb, with a cylindrical, partially open cavity rather than a spherical one. The cavity opens through whichever finger holes are uncovered during playing, as well as through the open end of the shaft. Helmholtz resonance refers to the phenomenon of air resonating in a cavity, an effect named after the German physicist Hermann von Helmholtz. The bansuri's sound is created when air entering through the blow-hole is trapped inside the cavity of the bamboo shaft before it leaves, primarily through the open end of the shaft as well as the first open finger holes.
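To make the glass-bottle connection concrete, the textbook resonance frequency of a Helmholtz resonator depends on the speed of sound, the area and effective length of the neck, and the volume of the cavity. The short Python sketch below evaluates that standard formula for illustrative bottle-like dimensions (the numbers are assumptions for a roughly 750 mL bottle, not measurements of any particular bottle or flute).

```python
# Textbook Helmholtz resonance frequency: f = (c / 2*pi) * sqrt(A / (V * L)).
# All default values are illustrative assumptions for a roughly 750 mL bottle.
import math

def helmholtz_frequency(c=343.0, neck_area=2.0e-4, neck_length=0.05, volume=7.5e-4):
    """Resonance frequency in Hz.

    c           -- speed of sound in air (m/s)
    neck_area   -- cross-sectional area of the neck opening (m^2)
    neck_length -- effective length of the neck (m)
    volume      -- volume of the enclosed air cavity (m^3)
    """
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (volume * neck_length))

print(f"Approximate bottle resonance: {helmholtz_frequency():.0f} Hz")
```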

The longer the effective air column, which depends on how many finger holes are closed, the lower the fundamental resonant frequency. However, the acoustical quality of the bansuri is determined not only by the fundamental (lowest) frequency but also by the relative dominance of the harmonics (higher octaves). The different octaves (a typical bansuri has a range of three octaves) can be activated by the player by controlling the angle and “beam-width” of the blow, which significantly affects the air pressure, vorticity, and airflow dynamics. A direct blow into the blow-hole, for any finger-hole combination, activates the direct propagation mode, in which the lowest octave is dominant. To hit the higher octaves of the same note, the flautist has to blow at an angle to activate the other modes of sound propagation, which proceed through the air column as well as through the body of the bansuri.
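As a rough back-of-the-envelope illustration of the length effect, treating the bansuri as an open cylindrical pipe of effective length L gives a fundamental near c/(2L), with harmonics at integer multiples. The sketch below evaluates this simplification for a few illustrative lengths; it ignores end corrections, finger-hole geometry, and the blowing dynamics described above.

```python
# Back-of-the-envelope fundamental of an open cylindrical pipe: f1 = c / (2 * L).
# The effective lengths below are illustrative; this ignores end corrections,
# finger-hole geometry, and blowing dynamics.
C_AIR = 343.0  # speed of sound in air, m/s

for length_m in (0.30, 0.45, 0.60):  # a shorter effective length means more open holes
    f1 = C_AIR / (2.0 * length_m)
    harmonics = [round(n * f1, 1) for n in (1, 2, 3)]
    print(f"L = {length_m:.2f} m -> fundamental {f1:6.1f} Hz, first harmonics {harmonics} Hz")
```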

The accompanying videos and images show a basic demonstration of the bansuri as a musical instrument by Panchajanya Dey, simple demonstrations of a glass bottle as a Helmholtz resonator, and an exposition of how acoustic color (shown in the figures) can be used to bring together artists across disciplines to create new forms of music.

Acoustic color is a popular data-science tool that expresses the relative distribution of power across the frequency spectrum as a function of time. Visually, these are images whose colormap (red = high, blue = low) represents the relative power among the harmonics of the flute; a rising (or falling) curve within the acoustic color image indicates a rising (or falling) tone for a harmonic. For the bansuri, the harmonic structures appear as non-linear, braid-like curves within the acoustic color image. The higher harmonics, which may contain useful melodic information, are often embedded in background noise that sounds like hiss, likely produced by mixing of airflow modes and irregular reed vibrations. However, some hiss is natural to the flute, and filtering it out makes the music lose its authenticity. In the talk, we presented computational techniques based on harmonic filtering to separate the modes of acoustic propagation and sound production in the Bansuri, e.g., filtering out leakage due to mixing of modes. We also showed how the geometric aspects of the acoustic color features (e.g., harmonic signatures) may be exploited to create a fluid feature dictionary. The purpose of this dictionary is to store the harmonic signatures of different melodic movements without sacrificing the rigor of musical grammar or the authentic earthy sound of the bansuri (some of the hiss is natural and supposed to be there). This fluid feature repository may be harnessed with large language models (LLMs) or similar AI/ML architectures to enable machine interpretation of Indian classical music and to create collaborative infrastructure that lets artists from different musical traditions experiment with an authentic software testbed, among other exciting applications.
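For readers who would like to see an acoustic color image of their own recordings, the quantity described above is essentially a spectrogram: power as a function of frequency and time, displayed with a colormap. The sketch below uses standard open-source Python tools; the file name and analysis parameters are placeholders, and this is not the exact processing pipeline presented in the talk.

```python
# Minimal acoustic-color (spectrogram) sketch using SciPy and Matplotlib.
# "bansuri.wav" is a placeholder file name; window and overlap values are
# illustrative choices, not the parameters used in the study.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("bansuri.wav")
audio = np.asarray(audio, dtype=float)
if audio.ndim > 1:                         # mix stereo down to mono
    audio = audio.mean(axis=1)

f, t, Sxx = spectrogram(audio, fs=fs, nperseg=4096, noverlap=3072)
power_db = 10.0 * np.log10(Sxx + 1e-12)    # relative power in decibels

plt.pcolormesh(t, f, power_db, shading="gouraud", cmap="jet")  # red = high, blue = low
plt.ylim(0, 8000)                          # most flute energy sits below a few kHz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.colorbar(label="Relative power (dB)")
plt.title("Acoustic color of a bansuri recording")
plt.show()
```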

Explaining the tone of two legendary jazz guitarists

Chirag Gokani – chiragokani@utexas.edu
Instagram: @chiragokani
Applied Research Laboratories and Walker Department of Mechanical Engineering
The University of Texas at Austin
Austin, Texas 78766-9767

Preston S. Wilson (also at Applied Research Laboratories and Walker Department of Mechanical Engineering)

Popular version of 2aMU6 – Timbral effects of the right-hand techniques of jazz guitarists Wes Montgomery and Joe Pass
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037556

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Wes Montgomery and Joe Pass are two of the most influential guitarists of the 20th century. Acclaimed music educator and producer Rick Beato says,

Wes influenced all my favorite guitarists, from Joe Pass, to George Benson, to Pat Martino, to Pat Metheny, to John Scofield. He influenced Jimi Hendrix, he influenced Joe Satriani, Eric Johnson. Virtually every guitarist I can think of that I respect, Wes is a major, if not the biggest, influence of.

Beato similarly praises Joe Pass for his 1973 album Virtuoso, calling it the “album that changed my life”:

If there’s one record that I ever suggest to people that want to get into jazz guitar, it’s this record, Joe Pass, Virtuoso.

Part of what made Wes Montgomery and Joe Pass so great was their iconic guitar tone. Montgomery played with his thumb, and his tone was focused and warm. See, for example, “Cariba” from Full House (1962). Meanwhile, Pass played both fingerstyle and with a pick, and his tone was smooth and rich. His fingerstyle playing can be heard on “Just Friends” from I Remember Charlie Parker (1979), and his pick playing can be heard on “Dreamer (Vivo Sonhando)” from Ella Abraça Jobim (1981).

Wes Montgomery (left, Tom Marcello, CC BY-SA 2.0) and Joe Pass (right, Chuck Stewart, Public domain via Wikimedia Commons)

To better understand the tone of Montgomery and Pass, we modeled the thumb, fingers, and pick as they interact with a guitar string.

Our model for how the thumb, fingers, and pick excite a guitar string. The string’s deformation is exaggerated for the purpose of illustration.

One factor in the model is the location at which the string is excited. Montgomery played closer to the bridge of the guitar, while Pass played closer to the neck. Another important factor is how much the thumb, fingers, or pick slips off the string. Montgomery’s thumb delivered a “pluck” and slipped less than Pass’s pick, which delivered more of a “strike” to the string.

Simulations of the model suggest that Montgomery and Pass balanced these two factors with the choice of thumb, fingers, and pick. The focused nature of Montgomery’s tone is due to his thumb, while the warmth of his tone arises from playing closer to the bridge and predominantly plucking the string. Meanwhile, the richness of Pass’s tone is due to his pick, while its smooth quality is due to playing closer to the neck and predominantly striking the string. Pass’s fingerstyle playing falls in between the thumb and pick techniques.
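A useful point of reference for these results is the textbook model of an ideal string: a “pluck” (an initial displacement released at a point) excites the nth harmonic in proportion to sin(nπp)/n², while a “strike” (an initial velocity imparted at a point) excites it in proportion to sin(nπp)/n, where p is the excitation position as a fraction of the string length. The sketch below evaluates these standard expressions for illustrative positions; it is a simplified reference model, not the full thumb, finger, and pick model from our study, which also accounts for slip.

```python
# Harmonic amplitudes of an ideal string excited at fractional position p.
# Textbook results: a pluck (initial displacement) gives amplitudes
# proportional to sin(n*pi*p)/n**2; a strike (initial velocity) gives
# amplitudes proportional to sin(n*pi*p)/n. This is a simplified reference
# model, not the full thumb/finger/pick model from the study.
import numpy as np

def harmonic_amplitudes(p, n_harmonics=10, excitation="pluck"):
    n = np.arange(1, n_harmonics + 1)
    if excitation == "pluck":
        amps = np.abs(np.sin(n * np.pi * p)) / n**2
    elif excitation == "strike":
        amps = np.abs(np.sin(n * np.pi * p)) / n
    else:
        raise ValueError("excitation must be 'pluck' or 'strike'")
    return amps / amps.max()               # normalize to the strongest harmonic

# Illustrative excitation points (fractions of string length, chosen arbitrarily).
bridge_pluck = harmonic_amplitudes(0.15, excitation="pluck")   # thumb-like pluck near the bridge
neck_strike = harmonic_amplitudes(0.35, excitation="strike")   # pick-like strike near the neck

for n, (a, b) in enumerate(zip(bridge_pluck, neck_strike), start=1):
    print(f"harmonic {n:2d}:  pluck near bridge {a:4.2f}   strike near neck {b:4.2f}")
```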

Guitarists wishing to play in the style of Montgomery and Pass can adjust their technique to match the parameters of our model. Conversely, the parameters of our model can be adjusted to emulate the tone of other notable guitarists.

Notable jazz and fusion guitarists grouped by technique. The parameters of our model can be adjusted to describe these guitarists.

Our model could also be used to synthesize realistic digital guitar voices that are more sensitive to the player’s touch.

To demonstrate the effects of right-hand technique on tone, we offer an arrangement of the jazz standard “Stella by Starlight” for solo guitar. The thumb is used at the beginning of the arrangement, with occasional contributions from the fingers. The fingers are used exclusively from 0:50 to 1:10, after which the pick is used to conclude the arrangement. Knowledge of the physics underlying these techniques helps us better appreciate both the subtlety of guitar performance and the contributions of Montgomery and Pass to music.

What’s the Best Way to Pitch Shift and Time Stretch a Mashup?

Anh Dung Dinh – addinh@connect.ust.hk
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Hong Kong SAR

Xinyang WU – xwuch@connect.ust.hk
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Hong Kong SAR

Andrew Brian Horner – horner@cse.ust.hk
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Hong Kong SAR

Popular version of 1pMU – Ideal tempo and pitch for two-source mashup
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037389

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Corey Blaz/Shutterstock.com

If you are a music enthusiast, chances are you have encountered mashups, a form of music remix that combines multiple tracks, on the Internet. DJs assemble playlists of popular songs with smooth transitions to spice up a radio station or club, and online artists layer tracks on top of each other to create a fresh take on existing songs.

To make a mashup that’s harmonically organized and pleasing, you need to consider the musical features of the original songs, including tempo – the speed at which the songs are played, and key – which musical notes are used. For example, let us combine the vocals and instrumental of these two songs:

“Twinkle Twinkle Little Star” melody rendered with vocal samples

“Vivacity” Kevin MacLeod (incompetech.com). Licensed under Creative Commons: By Attribution 4.0 License http://creativecommons.org/licenses/by/4.0/

There are different ways the songs could be modified to fit each other and combined. Some examples are shown here:

Our study aims to determine which of the above examples, among others, listeners would rate as the best fit. We conducted a series of surveys to evaluate the preferences of over 70 listeners presented with mashups of varying features. Our results, depicted in Figures 1 and 2, show that most listeners preferred mashups at the average tempo of the two tracks and at the vocals' original pitch. More in-depth results are explored in our conference presentation and paper.
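For readers who want to experiment with these manipulations themselves, the sketch below shows the general technique of time-stretching two tracks to a common tempo and pitch-shifting one toward the other's key, using the open-source librosa library. The file names and the assumed two-semitone key difference are placeholders, and this is not the processing pipeline used in the study.

```python
# A sketch of tempo matching and key matching for a two-track mashup, using
# the open-source librosa library. File names and the assumed two-semitone
# key difference are placeholders; this is not the study's pipeline.
import librosa
import soundfile as sf

vocals, sr = librosa.load("vocals.wav", sr=None)
backing, _ = librosa.load("instrumental.wav", sr=sr)

# Estimate each track's tempo and stretch both toward the average tempo.
tempo_v, _ = librosa.beat.beat_track(y=vocals, sr=sr)
tempo_b, _ = librosa.beat.beat_track(y=backing, sr=sr)
target = (float(tempo_v) + float(tempo_b)) / 2.0
vocals = librosa.effects.time_stretch(vocals, rate=target / float(tempo_v))
backing = librosa.effects.time_stretch(backing, rate=target / float(tempo_b))

# Shift the instrumental toward the vocals' key (an assumed +2 semitones),
# keeping the vocals at their original pitch.
backing = librosa.effects.pitch_shift(backing, sr=sr, n_steps=2)

# Layer the two tracks and write out the mashup.
length = min(len(vocals), len(backing))
mix = 0.5 * vocals[:length] + 0.5 * backing[:length]
sf.write("mashup.wav", mix, sr)
```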

Figure 1: Average listener preference score for different tempo variants in vocal-swap mashups. A higher score indicates that more participants selected that option as the “most preferred” version of the mashup combining two songs. Overall, the majority of listeners preferred the mashups at the average tempo of the two original tracks.


Figure 2: Average listener preference score for different key variants, plotted as a function of the key difference between the two base songs. In most cases, the vocals’ original key is the most preferred version of the mashup.


We hope these results provide useful insights for mashup artists looking to refine their compositions, as well as for automatic mashup-creation algorithms to improve their output.

Does Virtual Reality Match Reality? Vocal Performance Across Environments

Pasquale Bottalico – pb81@illinois.edu

University of Illinois, Urbana-Champaign
Champaign, IL 61820
United States

Carly Wingfield2, Charlie Nudelman1, Joshua Glasner3, Yvonne Gonzales Redman1,2

  1. Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign
  2. School of Music, University of Illinois Urbana-Champaign
  3. School of Graduate Studies, Delaware Valley University

Popular version of 2aAAa1 – Does Virtual Reality Match Reality? Vocal Performance Across Environments
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037496

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Singers often perform in very different spaces than where they practice—sometimes in small, dry rooms and later in large, echoey concert halls. Many singers have shared that this mismatch can affect how they sing. Some say they end up singing too loudly because they can’t hear themselves well, while others say they hold back because the room makes them sound louder than they are. Singers have to adapt their voices to unfamiliar concert halls, and often they have very little rehearsal time to adjust.

While research has shown that instrumentalists adjust their playing depending on the room they are in, there’s been less work looking specifically at singers. Past studies have found that different rooms can change how singers use their voices, including how their vibrato (the small, natural variation in pitch) changes depending on the room’s echo and clarity.

At the University of Illinois, our research team from the School of Music and the Department of Speech and Hearing Science is studying whether virtual reality (VR) can help singers train for different acoustic environments. The big question: can a virtual concert hall give singers the same experience as a real one?

To explore this, we created virtual versions of three real performance spaces on campus (Figure 1).

Figure 1. 360-degree images of the three performance spaces investigated.

Singers wore open-backed headphones and a VR headset while singing into a microphone in a sound booth. As they sang, their voices were processed in real time to sound as if they were in one of the real venues, and this audio was sent back to them through the headphones. In the video (Video 1), you can see a singer performing in the sound booth where the acoustic environments were recreated virtually. In the audio file (Audio 1), you can hear exactly what the singer heard: the real-time, acoustically processed sound being sent back to their ears through the open-backed headphones.

Video 1. Singer performing in the virtual environment.

 

Audio 1. Example of real-time auralized feedback.
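The kind of auralized feedback heard in Audio 1 can be approximated offline by convolving a dry (sound-booth) recording with a measured impulse response of the target hall. The sketch below illustrates that general convolution technique with standard Python tools; the file names and the dry/wet mix are assumptions, and the system used in the study ran in real time rather than offline.

```python
# Offline approximation of the auralized feedback: convolve a dry voice
# recording with a measured room impulse response (RIR). File names and
# the 50/50 dry/wet mix are assumptions; the study's system ran in real time.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, fs = sf.read("dry_voice.wav")
rir, fs_rir = sf.read("concert_hall_rir.wav")
assert fs == fs_rir, "voice and impulse response must share a sample rate"

if dry.ndim > 1:                        # mix to mono for simplicity
    dry = dry.mean(axis=1)
if rir.ndim > 1:
    rir = rir.mean(axis=1)

wet = fftconvolve(dry, rir)             # apply the hall's acoustics
wet /= np.max(np.abs(wet)) + 1e-12      # normalize to avoid clipping

# Blend in some of the dry signal, roughly mimicking the direct sound a
# singer hears through open-backed headphones.
out = np.zeros_like(wet)
out[: len(dry)] += 0.5 * dry
out += 0.5 * wet
sf.write("auralized_voice.wav", out, fs)
```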

Ten trained singers performed in both the actual venues (Figure 2) and in virtual versions of those same spaces.

Figure 2. Singer performing in the real environment.

We then compared how they sang and how they felt during each performance. The results showed no significant differences in how the singers used their voices or how they perceived the experience between real and virtual environments.

This is an exciting finding because it suggests that virtual reality could become a valuable tool in voice training. If a singer can’t practice in a real concert hall, a VR simulation could help them get used to the sound and feel of the space ahead of time. This technology could give students greater access to performance preparation and allow voice teachers to guide students through the process in a more flexible and affordable way.

Do Pipe Organs Create an Auto-Tune Effect? #ASA187


Pipe organs create sympathetic resonance in concert halls and church sanctuaries

Media Contact:
AIP Media
301-209-3090
media@aip.org

MELVILLE, N.Y., Nov. 20, 2024 – The pipe organ, with its strong timber base and towering metal pipes, stands as a bastion in concert halls and church sanctuaries. Even when not in use, the pipe organ affects the acoustical environment around it.

Researcher Ashley Snow from the University of Washington sought to understand what effects the world’s largest class of musical instrument has on the acoustics of the concert halls that house it.


Ashley Snow studied the resonant effects of the D-K Organ on concert hall acoustics at Coe College in Cedar Rapids, Iowa. Credit: Ashley Snow

“The question is how much the pipe organ contributes to an acoustic environment—and the bigger question is, what portion of music is the acoustic environment, and vice versa?” Snow said.

Snow will present data on the sympathetic resonance of pipe organs and its effect on concert hall acoustics on Wednesday, Nov. 20, at 11:00 a.m. ET as part of the virtual 187th Meeting of the Acoustical Society of America, running Nov. 18-22, 2024.

Snow hypothesized that the pipe organ creates an auto-tune effect since its pipes sympathetically resonate to the same frequencies they are tuned to. This effect may enhance the overall musical sound of ensembles that play in concert halls with organs.

A sine sweep—a resonance test in which a sine-wave signal of steadily rising frequency is used to excite a system—was played through loudspeakers facing the organ pipes, and the response was measured with a microphone at different positions. Additional data were gathered by placing microphones inside and around the organ pipes during a musical performance and a church service.
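As a generic illustration of the sweep idea, the sketch below generates a logarithmic sine sweep and estimates a rough frequency response by dividing the spectrum of the recorded response by the spectrum of the sweep; peaks in that response point to frequencies at which the room, or the organ pipes, resonate. The sample rate, sweep range, and stand-in "recorded" signal are placeholders, and this is not the measurement chain used in the study.

```python
# Generic sine-sweep sketch: generate a logarithmic sweep, record the room's
# response (the recording step is faked here so the script runs end to end),
# then estimate a frequency response by spectral division. Illustrative only.
import numpy as np
from scipy.signal import chirp

fs = 48000                        # sample rate (Hz)
duration = 10.0                   # sweep length (s)
t = np.arange(int(fs * duration)) / fs
sweep = chirp(t, f0=20.0, f1=20000.0, t1=duration, method="logarithmic")

recorded = sweep.copy()           # stand-in for the microphone recording

n = len(sweep)
H = np.fft.rfft(recorded, n) / (np.fft.rfft(sweep, n) + 1e-12)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
response_db = 20.0 * np.log10(np.abs(H) + 1e-12)

# Report the strongest points of the estimated response as candidate resonances.
for i in sorted(np.argsort(response_db)[-5:]):
    print(f"{freqs[i]:8.1f} Hz  {response_db[i]:6.1f} dB")
```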

“I was way up in the ranks dangling a probe microphone into the pipes, trying my hardest not to make a sound or fall,” Snow said.

Snow verified experimentally that sympathetic resonance does occur in the organ pipes during musical performances, speech, and other sounds whose frequencies align with musical notes, and that the overall amplitude increases when the signal matches the resonance of one or more pipes.

Investigation into the significance of these effects on the overall quality of musical performance to listeners in the audience is still ongoing. Snow hopes to expand this research by comparing room acoustics between rooms with and without the presence of an organ, along with categorizing and mathematically modeling the tuning system of various world instruments. “What about the sympathy of a marimba, cymbal, or piano strings? Or the mode-locking of horns in a band? Would it sound the same if these things were separated from each other? For better or for worse? I want people to think about that.”

———————– MORE MEETING INFORMATION ———————–
​Main Meeting Website: https://acousticalsociety.org/asa-virtual-fall-2024/
Technical Program: https://eppro01.ativ.me/src/EventPilot/php/express/web/planner.php?id=ASAFALL24

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the virtual meeting and/or press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

How Pitch, Dynamics, and Vibrato Shape Emotions in Violin Music

Wenyi Song – wsongak@cse.ust.hk
Twitter: @sherrys72539831

Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR

Anh Dung Dinh
addinh@connect.ust.hk

Andrew Brian Horner
horner@cse.ust.hk
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR

Popular version of 1aMU2 – The emotional characteristics of the violin with different pitches, dynamics, and vibrato levels
Presented at the 187th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0034939

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Music has a unique way of moving us emotionally, but have you ever wondered how individual sounds shape these feelings?

In our study, we looked at how different features of violin notes—like pitch (how high or low the note is), dynamics (how loud it is), and vibrato (the periodic wavering of the note)—combine to create emotional responses. While previous research often focuses on each feature in isolation, we explored how they interact, revealing how the violin’s sounds evoke specific emotions.

To conduct this study, we used single-note recordings from the violin at different pitches, two levels of dynamics (loud and soft), and two vibrato settings (no vibrato and high vibrato). We invited participants to listen to these sounds and rate their emotional responses using a scale of emotional positivity (valence) and intensity (arousal). Participants also selected which emotions they felt from a list of 16 emotions, such as joyful, nervous, relaxed, or agitated.
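To make the vibrato manipulation concrete, the sketch below synthesizes a simple sustained tone with and without a sinusoidal wavering of its pitch. It is only a toy illustration of what “no vibrato” versus “high vibrato” means acoustically; the modulation rate and depth are assumed values, and the actual stimuli were recorded violin samples.

```python
# Toy synthesis of a sustained tone with and without vibrato. The rate
# (6 Hz) and depth (about +/- 30 cents) are assumed values, not the
# parameters of the recorded violin samples used in the study.
import numpy as np
import soundfile as sf

fs = 44100
t = np.arange(int(fs * 2.0)) / fs       # two seconds
f0 = 261.63                             # middle C (Hz)

def tone(vibrato_rate=0.0, vibrato_cents=0.0):
    depth = f0 * (2.0 ** (vibrato_cents / 1200.0) - 1.0)
    inst_freq = f0 + depth * np.sin(2.0 * np.pi * vibrato_rate * t)
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
    return 0.5 * np.sin(phase)

sf.write("no_vibrato.wav", tone(), fs)
sf.write("high_vibrato.wav", tone(vibrato_rate=6.0, vibrato_cents=30.0), fs)
```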

Audio 1. The experiment used a violin single-note sample (middle C pitch + loud dynamics + no vibrato).

Audio 2. The experiment used a violin single-note sample (middle C pitch + soft dynamics + no vibrato).

Audio 3. The experiment used a violin single-note sample (middle C pitch + loud dynamics + high vibrato).

Audio 4. The experiment used a violin single-note sample (middle C pitch + soft dynamics + high vibrato).

Our findings reveal that each element plays a unique role in shaping emotions. As shown in Figure 1, higher pitches and strong vibrato generally raised emotional intensity, creating feelings of excitement or tension. Lower pitches were more likely to evoke sadness or calmness, while loud dynamics made emotions feel more intense. Surprisingly, sounds without vibrato were linked to calmer emotions, while vibrato added energy and excitement, especially for emotions like anger or fear. Figure 2 illustrates how strong vibrato enhances emotions like anger and sadness, while the absence of vibrato correlates with calmer feelings.

Figure 1. Average valence-arousal ratings for different levels of pitch, dynamics, and vibrato. Higher pitches and strong vibrato increase arousal, while soft dynamics and no vibrato are linked to higher valence, highlighting pitch as the most influential factor.

 

Figure 2. Average ratings on 16 emotions for different levels of pitch, dynamics, and vibrato. Strong vibrato enhances angry and sad emotions, while no vibrato supports calm emotions; higher pitches increase arousal for angry emotions, and brighter tones evoke calm and happy emotions.

Our research provides insights for musicians, composers, and even music therapists, helping them understand how to use the violin’s features to evoke specific emotions. With this knowledge, violinists can fine-tune their performance to match the emotional impact they aim to create, and composers can carefully select sounds that resonate with listeners’ emotional expectations.


Read more:
The emotional characteristics of the violin with different pitches, dynamics, and vibrato levels