Listening for Multiple Mental Health Disorders
Automated analysis of voice recordings can reliably screen for co-occurring depressive and anxiety disorders in one minute.
Acoustic and phonemic features extracted from recordings, combined with applied machine learning techniques, can distinguish subjects with and without comorbid AD/MDD. Credit: Hannah Daniel/AIP

WASHINGTON, Feb. 4, 2025 – It’s no secret that there is a mental health crisis in the United States. As of 2021, 8.3% of adults had major depressive disorder (MDD) and 19.1% had anxiety disorders (AD), and the COVID-19 pandemic exacerbated these numbers. Despite the high prevalence of AD/MDD, diagnosis and treatment rates remain low – 36.9% for AD and 61.0% for MDD – due to a variety of social, perceptual, and structural barriers. Automated screening tools can help.

In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers developed machine learning tools that screen for comorbid AD/MDD using acoustic voice signals extracted from…

From: JASA Express Letters
Article: Automated acoustic voice screening techniques for comorbid depression and anxiety disorders
DOI: 10.1121/10.0034851
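The letter details the authors’ full feature set and models; as a rough, hypothetical illustration of the general recipe (not the paper’s pipeline), the sketch below summarizes a recording with a few standard acoustic features via librosa and fits a scikit-learn classifier. The file names, labels, and feature choices are all placeholder assumptions.

```python
# Hypothetical sketch, NOT the authors' pipeline: summarize each voice
# recording with simple acoustic features and fit a binary classifier
# (1 = comorbid AD/MDD, 0 = control). Requires librosa and scikit-learn.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def acoustic_features(wav_path, sr=16000):
    """Mean/std of 13 MFCCs plus pitch (f0) statistics for one recording."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                    # drop unvoiced frames
    if f0.size == 0:
        f0 = np.zeros(1)                      # guard for fully unvoiced audio
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [f0.mean(), f0.std()]])

# Placeholder file names and labels -- substitute a real labeled corpus.
paths = ["subject01.wav", "subject02.wav", "subject03.wav", "subject04.wav"]
labels = np.array([1, 0, 1, 0])

X = np.vstack([acoustic_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(acoustic_features("new_subject.wav").reshape(1, -1)))
```

In practice a screening tool like this would be validated against clinical interviews on a much larger corpus; the point of the sketch is only the shape of the workflow, from signal to features to classifier.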

June 2024 JASA Express Letters Cover

The June JASA Express Letters cover features a photo inspired by “Who is singing? Voice recognition from spoken versus sung speech,” by Angela Cooper, Matthew Eitel, Natalie Fecher, Elizabeth Johnson, and Laura K. Cirelli. The article shows that listeners can recognize a person’s singing voice even if they have only previously heard that person speak, and vice versa. The research highlights our flexible ability to identify individuals from their voices.

This month’s issue also included two Editor’s Picks.

Browse the rest of the issue at https://pubs.aip.org/asa/jel/issue/4/6.
New Across Acoustics Episode: Why don’t speech recognition systems understand African American English?

Most people have encountered speech recognition software in their day-to-day lives, whether through personal digital assistants, auto transcription, or other such modern marvels. As the technology advances, though, it still fails to understand speakers of African American English (AAE). In this episode, we talk to Michelle Cohn (Google Research and University of California Davis) and Zion Mengesha (Google Research and Stanford University) about their research into why these problems with speech recognition software seem to persist and what can be done to make sure more voices are understood by the technology.
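One common way to make a dialect gap like this concrete (a generic evaluation sketch, not a method taken from the episode) is to compare a recognizer’s word error rate across speaker groups. In the sketch below, jiwer is a real Python library for computing WER; the transcripts and group labels are invented toy data.

```python
# Generic evaluation sketch (not from the episode): compare an ASR system's
# word error rate (WER) across speaker groups. The transcripts are toy data;
# "MAE" here stands for Mainstream American English.
from jiwer import wer

samples = [
    # (speaker group, human reference transcript, ASR hypothesis)
    ("AAE", "she been working there for years", "she bean work in there for years"),
    ("MAE", "she has been working there for years", "she has been working there for years"),
]

by_group = {}
for group, ref, hyp in samples:
    by_group.setdefault(group, []).append(wer(ref, hyp))

for group, scores in sorted(by_group.items()):
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")
```

A systematically higher mean WER for one group is exactly the kind of disparity the episode’s guests study.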

Celebrating Pride Month with Across Acoustics: Speech Research and Gender-Diverse Speakers

Happy Pride Month, everyone! This is a time to celebrate and uplift the voices of the LGBTQ+ community, and what better way to do so than by diving into some fascinating research that explores the intersections of speech, perception, and gender diversity? In the Across Acoustics episode, “Speech Research Methods and Gender-Diverse Speakers,” Brandon Merritt discusses the article, “Auditory Free Classification of Gender-Diverse Speakers,” published in the Journal of the Acoustical Society of America (JASA) with co-authors Tessa Bent, Rowan Kilgore, and Cameron Eads. Their research sheds light on how listeners perceive and classify the gender of speakers, moving beyond the traditional binary notions of gender.
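For readers curious what an auditory free classification analysis can look like in practice, here is a hedged sketch of one common approach (not necessarily the authors’ exact method): listeners each sort speakers into groups however they like, the sorts are pooled into a speaker-by-speaker co-occurrence matrix, and that matrix is clustered to reveal perceptual structure. The speaker labels and sorting data below are toy assumptions.

```python
# Hedged sketch of a common free-classification analysis (not necessarily the
# authors' exact method): pool listeners' speaker sorts into a co-occurrence
# matrix, convert to distances, and cluster. All data here are toy values.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

speakers = ["s1", "s2", "s3", "s4"]
# Each listener freely sorts speakers (indices) into however many groups.
sorts = [
    [[0, 1], [2, 3]],
    [[0], [1, 2, 3]],
    [[0, 1], [2], [3]],
]

n = len(speakers)
co = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i in group:
            for j in group:
                co[i, j] += 1
co /= len(sorts)              # proportion of listeners grouping each pair together

dist = 1.0 - co               # rarely co-grouped speakers are perceptually "far"
iu = np.triu_indices(n, k=1)
Z = linkage(dist[iu], method="average")   # condensed distances -> dendrogram
print(dict(zip(speakers, fcluster(Z, t=2, criterion="maxclust"))))
```

The appeal of the free-classification design is that the clusters emerge from listeners’ own groupings rather than from categories imposed by the experimenter, which is what makes it well suited to studying gender-diverse voices.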

Understanding how we perceive gender in speech has profound implications for communication and inclusivity. By exploring the acoustic and perceptual characteristics that influence gender attribution, Merritt’s research helps to create a more nuanced understanding of gender diversity. This is particularly important for supporting the representation and recognition of transgender, non-binary, and gender-nonconforming individuals in both academic research and everyday interactions.

Brandon Merritt’s contributions to the field of speech and gender perception extend beyond this podcast episode. Here are a couple more publications that you should check out:

“Speech Beyond the Binary: Some Acoustic-Phonetic and Auditory-Perceptual Characteristics of Non-Binary Speakers” (JASA Express Letters, March 2023): This paper explores the acoustic and perceptual features of non-binary speakers, providing insights into how non-binary identities are expressed and perceived through speech.

“Revisiting the Acoustics of Speaker Gender Perception: A Gender Expansive Perspective” (JASA, January 2022): This work revisits traditional models of gender perception in speech, incorporating a broader range of gender identities and offering a more inclusive perspective.

As we celebrate Pride Month, it’s crucial to recognize and support research that honors and explores the diversity of human experience. Brandon Merritt’s work exemplifies this commitment by pushing the boundaries of how we understand and categorize gender through speech. So, take a moment to listen to the podcast, read Merritt’s publications, and reflect on the importance of inclusivity in research and beyond.

Happy Pride Month, and here’s to celebrating the vibrant diversity that makes our world a richer, more understanding place!

May 2024 JASA Express Letters Cover

The May JASA Express Letters cover features a portion of Figure 4 from “Predicting underwater acoustic transmission loss in the SOFAR channel from ray trajectories via deep learning,” by Haitao Wang, Shiwei Peng, Qunyi He, and Xiangyang Zeng. The image shows acoustic transmission loss maps. The article presents a deep learning-based method for predicting underwater acoustic transmission loss, addressing persistent challenges with such predictions in the SOFAR channel.
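For a sense of what a learned transmission-loss predictor might look like, here is a minimal sketch (the architecture, input encoding, and tensor shapes are illustrative assumptions, not the paper’s model): a small convolutional network regressing a transmission loss map from a gridded ray-trajectory input.

```python
# Minimal sketch; architecture, input encoding, and shapes are assumptions
# for illustration, not the paper's model. A small CNN maps a rasterized ray
# trajectory field to a transmission loss (TL) map on the same grid.
import torch
import torch.nn as nn

class TLNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # predicted TL (dB) per cell
        )

    def forward(self, x):
        return self.net(x)

model = TLNet()
rays = torch.rand(8, 1, 64, 64)     # toy batch of gridded ray-trajectory inputs
target = torch.rand(8, 1, 64, 64)   # matching toy TL maps
loss = nn.functional.mse_loss(model(rays), target)
loss.backward()                     # gradients for one illustrative training step
print(loss.item())
```

The attraction of such a surrogate model is speed: once trained on ray-traced examples, it can produce a full-field loss estimate in a single forward pass rather than rerunning the acoustic propagation computation.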

This month’s issue also had a couple of Editor’s Picks.

Browse the rest of the issue at https://pubs.aip.org/asa/jel/issue/4/5.