A Twangy Timbre Cuts Through the Noise

Amid loud noise, a brassy, bright voice can help speakers be understood.

A study by Tsai et al. showed that twangy female voices are most easily understood amid airplane and train noise. Credit: AIP


WASHINGTON, July 29, 2025 — Twangy voices are a hallmark of country music and many regional accents. However, this speech type, often described as “brassy” and “bright,” can also be used to get a message across in a noisy environment.

In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from Indiana University found that it was easier to understand twangy female voices compared to neutral voices when…

From: JASA Express Letters
Article: How vocal timbre impacts word identification and listening effort in traffic-shaped noises
DOI: 10.1121/10.0037043

Would a Musical Triangle of Any Other Shape Sound as Sweet?


Surprising evidence of resonance in the open-ended musical triangle may extend to circles and squares, too.


For the triangle, researchers captured proof that resonance occurs even with the notched, open corner, and it may occur in other instrument shapes as well. Credit: Risako Tanigawa

WASHINGTON, May 6, 2025 – The triangle is a small instrument made of a metal rod bent into a triangle shape that is open at one corner. While small, its sound is distinct, with multiple overtones and nonharmonic resonance. But what causes the surprisingly powerful sound?

“The triangle instrument produces enchanting and beautiful tones, raising deep and profound questions about the connection between music and physics,” author Risako Tanigawa said. “Optical sound measurement has…

From: JASA Express Letters
Article: How the musical triangle’s shape influences its sound
DOI: 10.1121/10.0034851

Listening for Multiple Mental Health Disorders


Automated analysis of voice can reliably diagnose co-occurring depressive and anxiety disorders in one minute.


Acoustic and phonemic features extracted from recordings, combined with machine learning techniques, can distinguish subjects with and without comorbid AD/MDD. Credit: Hannah Daniel/AIP

WASHINGTON, Feb. 4, 2025 – It’s no secret that there is a mental health crisis in the United States. As of 2021, 8.3% of adults had major depressive disorder (MDD) and 19.1% had anxiety disorders (AD), and the COVID-19 pandemic worsened these numbers. Despite the high prevalence of AD/MDD, diagnosis and treatment rates remain low – 36.9% for AD and 61.0% for MDD – due to a variety of social, perceptual, and structural barriers. Automated screening tools can help.

In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers developed machine learning tools that screen for comorbid AD/MDD using acoustic voice signals extracted from…

From: JASA Express Letters
Article: Automated acoustic voice screening techniques for comorbid depression and anxiety disorders
DOI: 10.1121/10.0034851

June 2024 JASA Express Letters Cover

The June JASA Express Letters cover features a photo inspired by “Who is singing? Voice recognition from spoken versus sung speech,” by Angela Cooper, Matthew Eitel, Natalie Fecher, Elizabeth Johnson, and Laura K. Cirelli. The article shows that listeners can recognize a person’s singing voice even if they have only heard that person speak, and vice versa. The research highlights our flexible ability to identify individuals by voice.

This month’s issue also included two Editor’s Picks:

Browse the rest of the issue at https://pubs.aip.org/asa/jel/issue/4/6.


New Across Acoustics Episode: Why don’t speech recognition systems understand African American English?

Most people have encountered speech recognition software in their day-to-day lives, whether through personal digital assistants, auto transcription, or other such modern marvels. As the technology advances, though, it still fails to understand speakers of African American English (AAE). In this episode, we talk to Michelle Cohn (Google Research and University of California Davis) and Zion Mengesha (Google Research and Stanford University) about their research into why these problems with speech recognition software seem to persist and what can be done to make sure more voices are understood by the technology.