Listening for Multiple Mental Health Disorders

Automated analysis of voice can reliably screen for co-occurring depressive and anxiety disorders in one minute.

Acoustic and phonemic features extracted from recordings, combined with machine learning techniques, can distinguish subjects with and without comorbid AD/MDD. Credit: Hannah Daniel/AIP

WASHINGTON, Feb. 4, 2025 – It’s no secret that there is a mental health crisis in the United States. As of 2021, 8.3% of adults had major depressive disorder (MDD) and 19.1% had anxiety disorders (AD), and the COVID-19 pandemic exacerbated these statistics. Despite the high prevalence of AD/MDD, diagnosis and treatment rates remain low – 36.9% for AD and 61.0% for MDD – due to a variety of social, perceptual, and structural barriers. Automated screening tools can help.

In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers developed machine learning tools that screen for comorbid AD/MDD using acoustic voice signals extracted from…

From: JASA Express Letters
Article: Automated acoustic voice screening techniques for comorbid depression and anxiety disorders
DOI: 10.1121/10.0034851
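
To make the idea concrete, here is a minimal sketch of the general approach: summarize each short voice recording with a handful of acoustic features and fit an off-the-shelf classifier to separate subjects with and without comorbid AD/MDD. The feature set (MFCC and pitch statistics), the random-forest classifier, and the file names and labels are assumptions for illustration only, not the features or model reported in the article.

```python
# Illustrative sketch only: acoustic summary features + a generic classifier.
# Feature choices, classifier, and file names are assumptions, not the
# pipeline described in the JASA Express Letters article.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def acoustic_features(path, sr=16000):
    """Return a fixed-length acoustic summary for one recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral envelope
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)  # frame-wise pitch
    f0 = f0[~np.isnan(f0)]                                    # keep voiced frames only
    pitch_stats = [f0.mean(), f0.std()] if f0.size else [0.0, 0.0]
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), pitch_stats])

if __name__ == "__main__":
    # Placeholder file names and labels (1 = comorbid AD/MDD, 0 = neither).
    paths = ["speaker_01.wav", "speaker_02.wav", "speaker_03.wav", "speaker_04.wav"]
    labels = np.array([1, 0, 1, 0])
    X = np.vstack([acoustic_features(p) for p in paths])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    print(clf.predict(X))  # sanity check on the training data itself
```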

Ouch! Commonalities Found in Pain Vocalizations and Interjections Across Cultures

Study investigates vocalizations and interjections for pain, joy, and disgust across 131 languages.

Vowel density maps reveal that distinct vowel spaces for vocalizations of pain, disgust, and joy remain consistent across languages. Credit: Ponsonnet et al.

WASHINGTON, Nov. 12, 2024 – There are an estimated 7,000 languages spoken worldwide, each offering unique ways to express human emotion. But do certain emotions show regularities in their vocal expression across languages?

In JASA, published on behalf of the Acoustical Society of America by AIP Publishing, an interdisciplinary team of linguists and bioacousticians led by Maïa Ponsonnet, Katarzyna Pisanski, and Christophe Coupé explored this by…

From: JASA
Article: Vowel signatures in emotional interjections and nonlinguistic vocalizations expressing pain, disgust and joy across languages
DOI: 10.1121/10.0032454

D. Keith Wilson Selected as Next Acoustics Today Editor

Melville, May 31, 2024 – The Acoustical Society of America (ASA) is pleased to announce that D. Keith Wilson will be stepping into the role of Editor of Acoustics Today (AT), the science and technology magazine of the ASA, starting in 2025. For the past ten years, Arthur N. Popper has held the position.

This role will not be Dr. Wilson’s first leadership position within ASA Publications: he was the Editor of JASA Express Letters from 2005 to 2009 and the chairperson of the Committee on Publication Policy from 2011 to 2018. Currently, he acts as an Associate Editor for The Journal of the Acoustical Society of America (JASA) and JASA Express Letters.

In addition to his involvement with ASA Publications, Dr. Wilson has been an active member of the Society and was elected a Fellow in 2003. He has been involved with both the Physical Acoustics and Noise Technical Committees and helped create the Computational Acoustics Technical Committee.

The ASA welcomes Dr. Wilson into this position and looks forward to the skill and insight he will bring to the magazine.


About Acoustics Today: Each issue of Acoustics Today is sent to ASA members in print form and is also freely available online at acousticstoday.org. The primary purpose of Acoustics Today is to provide timely scholarly articles, short essays highlighting important ASA programs, and other material to ASA members that is interesting, understandable, and relevant, regardless of a member’s background.

Machine Listening: Making Speech Recognition Systems More Inclusive

Study explores how African American English speakers adapt their speech to be understood by voice technology.

African American English speakers adjust rate and pitch based on audience. Credit: Michelle Cohn, Zion Mengesha, Michal Lahav, and Courtney Heldreth

WASHINGTON, April 30, 2024 – Interactions with voice technology, such as Amazon’s Alexa, Apple’s Siri, and Google Assistant, can make life easier by increasing efficiency and productivity. However, errors in generating and understanding speech during interactions are common. When using these devices, speakers often style-shift their speech from their normal patterns into a louder and…

From: JASA Express Letters
Article: African American English speakers’ pitch variation and rate adjustments for imagined technological and human addressees
DOI: 10.1121/10.0025484

Hard-of-Hearing Music Fans Prefer a Different Sound

Modern music can be inaccessible to those with hearing loss; sound mixing tweaks could make a difference.

Listeners with hearing loss can struggle to make out vocals and certain frequencies in modern music. Credit: Aravindan Joseph Benjamin

WASHINGTON, August 22, 2023 – Millions of people around the world experience some form of hearing loss, resulting in negative impacts on their health and quality of life. Treatments exist in the form of hearing aids and cochlear implants, but these assistive devices cannot replace the full functionality of human hearing and remain inaccessible for most people. Auditory experiences, such as speech and music…

From: The Journal of the Acoustical Society of America
Article: Exploring level- and spectrum-based music mixing transforms for hearing-impaired listeners
DOI: 10.1121/10.0020269

Lead Vocal Tracks in Popular Music Go Quiet

An analysis of top popular music from 1946 to 2020 shows a marked decrease in volume of the lead vocal track and differences across musical genres.

Estimated lead-to-accompaniment ratio (LAR) for songs in five genres from 1990 to 2020. Purple circles correspond to solo artists and green squares to bands. Credit: Kai Siedenburg

WASHINGTON, April 25, 2023 – A general rule of music production involves mixing the various recorded tracks so the lead singer’s voice is in the foreground. But it is unclear how such track mixing – and the closely related lyric intelligibility – has changed over the years.

Scientists from the University of Oldenburg in Germany carried out an analysis of hundreds of popular song recordings from 1946 to 2020 to determine…

From: JASA Express Letters
Article: Lead-vocal level in recordings of popular music 1946-2020
DOI: 10.1121/10.0017773
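
The lead-to-accompaniment ratio (LAR) in the figure above compares the level of the lead vocal with the level of the remaining accompaniment, expressed in decibels. As a rough illustration, and assuming already-separated vocal and accompaniment signals, a simple RMS-based version can be computed as in the sketch below; treating RMS level as the level estimate is an assumption of this example, not necessarily how levels were estimated from commercial mixes in the study.

```python
# Illustrative only: LAR in dB from two already-separated signals.
# Plain RMS levels are an assumption for this example.
import numpy as np

def rms_db(x, eps=1e-12):
    """Root-mean-square level of a signal in dB (relative to full scale)."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + eps)

def lead_to_accompaniment_ratio(lead, accompaniment):
    """Positive LAR means the lead vocal is louder than the accompaniment."""
    return rms_db(lead) - rms_db(accompaniment)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocal = 0.3 * rng.standard_normal(44100)    # stand-in for a one-second vocal stem
    backing = 0.2 * rng.standard_normal(44100)  # stand-in for the accompaniment
    print(f"LAR ≈ {lead_to_accompaniment_ratio(vocal, backing):.1f} dB")
```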