Sound That Gets Under Your Skin (Literally): Testing Bone Conduction Headphones

Kiersten Reeser – kreeser@ara.com

Applied Research Associates, Inc., 7921 Shaffer Pkwy, Littleton, Colorado, 80127, United States

Twitter: @ARA_News_Events
Instagram: @appliedresearchassociates

Additional authors:
Alexandria Podolski
William Gray
Andrew Brown
Theodore Argo

Popular version of 1pEA3 – Investigating Commercially Available Force Sensors for Bone Conduction Hearing Device Evaluation
Presented at the 187th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=IntHtml&project=ASAFALL24&id=3771572

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Bone conduction (BC) headphones produce sound without covering the outer ears, offering an appealing alternative to conventional headphones. While BC technologies have long been used for diagnosing and treating hearing loss, consumer BC devices have become increasingly popular with a variety of claimed benefits, from safety to sound quality. However, objectively measuring BC signals – to guide improvement of device design, for example – presents several unique challenges, beginning with measurement of the BC signal itself.

Airborne audio signals, like those generated by conventional headphones, are measured using microphones; BC signals, by contrast, are generated by vibrating transducers pressed against the head. These vibrations are affected by where the BC headphones sit on the head, how tightly they are pressed against it, and other factors, including each listener's individual anatomy.

BC devices have historically been evaluated using an artificial mastoid (Figure 1 – left), a specialized (and expensive) measurement tool that was designed to simulate key properties of the tissue behind the ear, capturing the output of selected clinical BC devices under carefully controlled measurement conditions. While the artificial mastoid’s design allows for high-precision measurements, it does not account for the variety of shapes and sizes of consumer BC devices. Stakeholders ranging from manufacturers to researchers need a method to measure the effective outputs of consumer BC devices as worn by actual listeners.

Figure 1. The B&K Artificial Mastoid (left) is the standard solution for measuring BC device output. There is a need for a sensor that can be placed between the BC device and the human head (right) for real-life measurements of the device's output.

 

Our team, made up of collaborators at Applied Research Associates, Inc. (ARA) and the University of Washington, is working to develop a system that can be used across a wide variety of unique anatomy, BC devices, and sensor placement locations (Figure 1 – right). The goal is to use thin/flexible sensors placed directly under BC devices during use to accurately and repeatably measure the coupling of the BC device with the head (static force) and the audio-frequency vibrations produced by the device (dynamic force).
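As a rough illustration of how a single sensor recording might be split into these two components, the sketch below low-pass filters a simulated force signal to estimate the static coupling force and treats the remainder as the audio-frequency dynamic component. The signal, cutoff frequency, and values are purely illustrative assumptions, not the team's actual processing pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Hypothetical example: split a force-sensor recording into static (coupling)
# and dynamic (audio-frequency vibration) components. All values illustrative.
fs = 48_000                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # one second of "data"

# Simulated sensor output: ~2 N static coupling force plus a 500 Hz vibration.
force = 2.0 + 0.05 * np.sin(2 * np.pi * 500 * t)

# A low-pass filter (20 Hz cutoff) isolates the slowly varying static force.
sos = butter(4, 20, btype="low", fs=fs, output="sos")
static_force = sosfiltfilt(sos, force)

# The residual carries the audio-frequency (dynamic) content.
dynamic_force = force - static_force

print(f"Estimated static force: {static_force.mean():.2f} N")
print(f"Dynamic RMS: {np.sqrt(np.mean(dynamic_force**2)) * 1000:.1f} mN")
```

Whether a given sensor can actually resolve both components depends on its design and readout circuitry, which is what the preliminary testing described next set out to evaluate.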

Three low-cost force sensors, shown in Figure 2, have been identified; each uses a different underlying sensing technology with the potential to meet the requirements for characterizing BC device output. The sensors have undergone preliminary testing, which revealed that all three can produce static force measurements. However, the detectable frequencies and signal quality of the dynamic force measurements varied with the sensing design and circuitry of each sensor. The design of the Ohmite force-sensing resistor (Figure 2 – left) limited the quality of the measured signal. The SingleTact force-sensing capacitor (Figure 2 – middle) was incapable of collecting dynamic measurements for audio signals. The Honeywell FSA (Figure 2 – right) was limited by its circuitry and could only partially detect the desired frequency ranges.

Figure 2. Three force sensors were evaluated: the Ohmite force-sensing resistor (left), the SingleTact force-sensing capacitor (middle), and the Honeywell FSA (right).

 

Further testing and development are necessary to identify whether dynamic force measurements can be improved by utilizing different hardware for data collection or implementing different data analysis techniques. Parallel efforts are focused on streamlining the interface between the BC device and the sensors to improve listener comfort.

How Pitch, Dynamics, and Vibrato Shape Emotions in Violin Music

Wenyi Song – wsongak@cse.ust.hk
Twitter: @sherrys72539831

Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR

Anh Dung DINH
addinh@connect.ust.hk

Andrew Brian Horner
horner@cse.ust.hk
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR

Popular version of 1aMU2 – The emotional characteristics of the violin with different pitches, dynamics, and vibrato levels
Presented at the 187th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=IntHtml&project=ASAFALL24&id=3767557

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Music has a unique way of moving us emotionally, but have you ever wondered how individual sounds shape these feelings?

In our study, we looked at how different features of violin notes – like pitch (how high or low a note is), dynamics (how loud it is), and vibrato (the periodic wavering of the note's pitch) – combine to create emotional responses. While previous research often focuses on each feature in isolation, we explored how they interact, revealing how the violin's sounds evoke specific emotions.

To conduct this study, we used single-note recordings of the violin at different pitches, two levels of dynamics (loud and soft), and two vibrato settings (no vibrato and high vibrato). We invited participants to listen to these sounds and rate their emotional responses on scales of emotional positivity (valence) and intensity (arousal). Participants also selected which emotions they felt from a list of 16 emotions, such as joyful, nervous, relaxed, or agitated.
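For readers curious how such ratings can be summarized, the sketch below averages valence and arousal for each combination of pitch, dynamics, and vibrato, which is essentially the kind of summary reported in Figures 1 and 2 further down. The column names and data are made up for illustration and are not our actual analysis code.

```python
import pandas as pd

# Hypothetical ratings table: one row per participant response.
ratings = pd.DataFrame({
    "pitch":    ["C4", "C4", "C5", "C5"],
    "dynamics": ["loud", "soft", "loud", "soft"],
    "vibrato":  ["none", "none", "high", "high"],
    "valence":  [0.2, 0.5, -0.1, 0.3],   # emotional positivity
    "arousal":  [0.6, 0.2, 0.8, 0.4],    # emotional intensity
})

# Average valence and arousal for each pitch/dynamics/vibrato combination.
summary = (
    ratings
    .groupby(["pitch", "dynamics", "vibrato"], as_index=False)[["valence", "arousal"]]
    .mean()
)
print(summary)
```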

Audio 1. The experiment used a violin single-note sample (middle C pitch + loud dynamics + no vibrato).

Audio 2. The experiment used a violin single-note sample (middle C pitch + soft dynamics + no vibrato).

Audio 3. The experiment used a violin single-note sample (middle C pitch + loud dynamics + high vibrato).

Audio 4. The experiment used a violin single-note sample (middle C pitch + soft dynamics + high vibrato).

Our findings reveal that each element plays a unique role in shaping emotions. As shown in Figure 1, higher pitches and strong vibrato generally raised emotional intensity, creating feelings of excitement or tension. Lower pitches were more likely to evoke sadness or calmness, while loud dynamics made emotions feel more intense. Surprisingly, sounds without vibrato were linked to calmer emotions, while vibrato added energy and excitement, especially for emotions like anger or fear. Figure 2 illustrates how strong vibrato enhances emotions like anger and sadness, while the absence of vibrato correlates with calmer feelings.

Figure 1. Average valence-arousal ratings for different levels of pitch, dynamics, and vibrato. Higher pitches and strong vibrato increase arousal, while soft dynamics and no vibrato are linked to higher valence, with pitch emerging as the most influential factor.

 

Figure 2. Average ratings on the 16 emotion categories for different levels of pitch, dynamics, and vibrato. Strong vibrato enhances angry and sad emotions, while no vibrato supports calm emotions; higher pitches increase arousal for angry emotions, and brighter tones evoke calm and happy emotions.

Our research provides insights for musicians, composers, and even music therapists, helping them understand how to use the violin’s features to evoke specific emotions. With this knowledge, violinists can fine-tune their performance to match the emotional impact they aim to create, and composers can carefully select sounds that resonate with listeners’ emotional expectations.

Understanding rapid fluid flow from the passage of a sound wave

James Friend – jfriend@ucsd.edu

Medically Advanced Devices Laboratory, Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, 92093, United States

Popular version of 1pPA6 – Acoustic Streaming
Presented at the 187th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAFALL24&id=3770639

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Acoustic streaming is the flow of fluid driven by the interaction of sound waves with a fluid. Traditionally, this effect was viewed as slow and steady, but recent research shows it can cause fluids to flow rapidly and usefully. To understand how this mechanism works, the researchers devised an entirely new approach to the problem, separating the acoustics from the fluid flow in space and time and providing a closed-form solution for the first time. This phenomenon has applications in areas like medical diagnostics, biosensing, and microfluidics, where precise fluid manipulation is needed, and the analysis techniques may be useful in fields ranging from particle physics to geoengineering.

The Silent Service

Hongmin Park – hongmini0202@snu.ac.kr

Seoul National University, Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea

Woojae Seong
Professor, Seoul National University
http://uwal.snu.ac.kr

Popular version of 2aEA9 – A study of the application of global optimization for the arrangement of absorbing materials in multi-layered absorptive fluid silencer
Presented at the 187th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAFALL24&id=3771466

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Underwater radiated noise (URN) refers to the sound emitted into the water by objects such as ships and submarines. It is generated by various sources, including a vessel's machinery, propellers, and movement through the water. For naval vessels, URN is critically important because it directly impacts survivability: the noise can be detected underwater, compromising a vessel's ability to remain undetected. Various studies have therefore been conducted on reducing URN so that submarines can maintain stealth and silence.

This study focuses on the 'absorptive fluid silencer,' a device installed in piping to reduce noise from a vessel's complex machinery systems. An absorptive fluid silencer works much like a car muffler, reducing noise with sound-absorbing materials placed inside.

We measured how well the silencer reduced noise by comparing sound levels at its inlet and outlet. Polyurethane, a porous elastic material, was used as the internal sound-absorbing material, and five types of absorbing material suitable for actual manufacturing were selected. By applying a global optimization method to the arrangement of these materials, we designed a high-performance fluid silencer.

The graph above shows a partial analysis result: composite absorbing materials provide superior sound absorption performance compared to a single absorbing material.
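To give a sense of how such a global search over absorbing-material arrangements might be set up, here is a minimal sketch using SciPy's differential evolution optimizer. The objective function is a stand-in, not the transmission-loss model used in the study, and the study does not state which global optimizer was applied; the layer count and material indices are likewise illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

N_LAYERS = 4        # number of absorbing layers in the silencer (illustrative)
N_MATERIALS = 5     # five candidate absorbing materials, indexed 0..4

def transmission_loss(arrangement):
    """Placeholder for the real acoustic model: returns a fake transmission
    loss (dB) for a given sequence of material indices."""
    arrangement = np.asarray(arrangement)
    # Made-up score that rewards mixing different materials.
    return 10.0 * len(set(arrangement.tolist())) + arrangement.sum()

def objective(x):
    # Round the continuous search variables to integer material indices,
    # then minimize the negative transmission loss (i.e., maximize TL).
    materials = np.clip(np.round(x), 0, N_MATERIALS - 1).astype(int)
    return -transmission_loss(materials)

bounds = [(0, N_MATERIALS - 1)] * N_LAYERS
result = differential_evolution(objective, bounds, seed=0, maxiter=200)
best = np.clip(np.round(result.x), 0, N_MATERIALS - 1).astype(int)
print("Best layer arrangement (material indices):", best)
print("Estimated transmission loss:", -result.fun, "dB")
```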

Listen to the Music: We Rely on Musical Genre to Determine Singers’ Accents

Maddy Walter – maddyw37@student.ubc.ca

The University of British Columbia, Department of Linguistics, Vancouver, British Columbia, V6T 1Z4, Canada

Additional authors:
Sydney Norris, Sabrina Luk, Marcell Maitinsky, Md Jahurul Islam, and Bryan Gick

Popular version of 3pPP6 – The Role of Genre Association in Sung Dialect Categorization
Presented at the 187th ASA Meeting
Read the abstract at https://eppro01.ativ.me//web/index.php?page=Session&project=ASAFALL24&id=3771321

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Have you ever listened to a song and later been surprised to hear the artist speak with a different accent than the one you heard in the song? Take country singer Keith Urban’s song “What About Me” for instance; when listening, you might assume that he has a Southern American (US) English accent. However, in his interviews, he speaks with an Australian English accent. So why did you think he sounded Southern?

Research suggests that specific accents or dialects are associated with musical genres [2], that singers adjust their accents based on genre [4], and that foreign accents are more difficult to recognize in song than in speech [5]. However, when listeners perceive an accent in a song, it is unclear which type of information they rely on: the acoustic speech information or information about the musical genre. Our previous research investigated this question for Country and Reggae music and found that genre recognition may play a larger role in dialect perception than the actual sound of the voice [9].

Our current study explores American Blues and Folk music, genres that allow for easier separation of vocals from instrumentals, with more refined stimulus manipulation. Blues is strongly associated with African American English [3], while Folk can be associated with a variety of dialects (British, American, etc.) [1]. Participants listened to manipulated clips of sung and "spoken" lines taken from songs in both genres, which were transcribed for participants (see Figure 1). AI tools were used to remove the instrumentals from both sung and spoken clips, and the "spoken" clips also underwent rhythm and pitch normalization so that they sounded spoken rather than sung. After hearing each sung or spoken line, participants were asked to identify the dialect they heard from six options [7, 8] (see Figure 2).

Figure 1: Participant view of a transcript from a Folk song clip.
Figure 2: Participant view of six dialect options after hearing a clip.

Participants were much more confident and accurate in categorizing accents for clips in the Sung condition, regardless of genre. The proportion of uncertainty (“Not Sure” responses) in the Spoken condition was consistent across genres (see “D” in Figure 3), suggesting that participants were more certain of dialect when musical cues were present. Dialect categories followed genre expectations, as can be seen from the increase in identifying African American English for Blues in the Sung condition (see “A”). Removing uncertainty by adding genre cues did not increase the likelihood of “Irish English” or “British English” being chosen for Blues, though it did for Folk (see “B” and “C” in Figure 3), in line with genre-based expectations.

Figure 3: Participant dialect responses.
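For readers curious how responses like those in Figure 3 can be tallied, the authors' reference list suggests the analysis was carried out in R [6]; the sketch below is a rough Python equivalent with made-up data and column names, computing the proportion of each dialect response within every genre-by-condition cell.

```python
import pandas as pd

# Hypothetical responses: one row per trial.
responses = pd.DataFrame({
    "genre":     ["Blues", "Blues", "Folk", "Folk", "Blues", "Folk"],
    "condition": ["Sung", "Spoken", "Sung", "Spoken", "Sung", "Sung"],
    "dialect":   ["African American English", "Not Sure", "British English",
                  "Not Sure", "African American English", "Irish English"],
})

# Proportion of each dialect response within every genre/condition cell.
proportions = (
    responses
    .groupby(["genre", "condition"])["dialect"]
    .value_counts(normalize=True)
    .rename("proportion")
    .reset_index()
)
print(proportions)
```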

These findings enhance our understanding of the relationship between musical genre and accent. Referring again to the example of Keith Urban, the singer's stylistic accent change may not be the only culprit for our interpretation of a Southern drawl. Rather, we may have assumed we were listening to a musician with a Southern American English accent when we heard the first banjo-like twang or tuned into iHeartCountry Radio. When we listen to a song and perceive a singer's accent, we are not only listening to the sounds of their speech, but also shaping our perception from our expectations of dialect based on the musical genre.

References:

  1. Carrigan, J. Henry L. (2004). Lornell, Kip, The NPR Curious Listener's Guide to American Folk Music. Library Journal (1976), 129(19), 63.
  2. Coupland, N. (2011). Voice, place and genre in popular song performance. Journal of Sociolinguistics, 15(5), 573–602. https://doi.org/10.1111/j.1467-9841.2011.00514.x.
  3. De Timmerman, Romeo, et al. (2024). The globalization of local indexicalities through music: African‐American English and the blues. Journal of Sociolinguistics, 28(1), 3–25. https://doi.org/10.1111/josl.12616.
  4. Gibson, A. M. (2019). Sociophonetics of popular music: insights from corpus analysis and speech perception experiments [Doctoral dissertation, University of Canterbury]. http://dx.doi.org/10.26021/4007.
  5. Mageau, M., Mekik, C., Sokalski, A., & Toivonen, I. (2019). Detecting foreign accents in song. Phonetica, 76(6), 429–447. https://doi.org/10.1159/000500187.
  6. RStudio. (2020). RStudio: Integrated Development for R. RStudio, PBC, Boston, MA. http://www.rstudio.com/.
  7. Stoet, G. (2010). PsyToolkit – A software package for programming psychological experiments using Linux. Behavior Research Methods, 42(4), 1096-1104.
  8. Stoet, G. (2017). PsyToolkit: A novel web-based method for running online questionnaires and reaction-time experiments. Teaching of Psychology, 44(1), 24-31.
  9. Walter, M., Bengtson, G., Maitinsky, M., Islam, M. J., & Gick, B. (2023). Dialect perception in song versus speech. The Journal of the Acoustical Society of America, 154(4_supplement), A161. https://doi.org/10.1121/10.0023131.