A Twangy Timbre Cuts Through the Noise

Amid loud noise, a brassy, bright voice can help speakers be understood.

A study by Tsai et al. showed that twangy, female voices are best understood amongst plane and train sounds. Credit: AIP


WASHINGTON, July 29, 2025 — Twangy voices are a hallmark of country music and many regional accents. However, this speech type, often described as “brassy” and “bright,” can also be used to get a message across in a noisy environment.

In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from Indiana University found that twangy female voices were easier to understand than neutral voices when heard amid plane and train noise.

From: JASA Express Letters
Article: How vocal timbre impacts word identification and listening effort in traffic-shaped noises
DOI: 10.1121/10.0037043

Introducing Project ELLA: Enhancing Early Language and Literacy

Jennell Vick – jvick@chsc.org
Twitter: @DrJVick

Cleveland Hearing and Speech Center
6001 Euclid Avenue Suite 100
Cleveland, OH, 44103
United States

Popular version of 2aSC4 – From intention to understanding and back again: How a simple message of ‘Catch and Pass’ can build language in children
Presented at the 187th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0035171

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Project ELLA (Early Language and Literacy for All) is an exciting new program designed to boost early language and literacy skills in young children. The program uses a simple yet powerful message, “Catch and Pass,” to teach parents, grandparents, daycare teachers and other caregivers the importance of having back-and-forth conversations with children from birth. These interactions help build and strengthen the brain’s language pathways, setting the foundation for lifelong learning.

Developed by the Cleveland Hearing & Speech Center, Project ELLA focuses on helping children in the greater Cleveland area, especially those in under-resourced communities. Community health workers visit neighborhoods to build trust with residents, raise awareness about the importance of responsive interactions for language development, and help empower families to put their children on track for later literacy (see Video 1). They also identify children who may need more help through speech and language screenings. For children identified as needing more help, Project ELLA offers free speech-language therapy and support for caregivers at Cleveland Hearing & Speech Center.

The success of the project is measured by tracking the number of children and families served, the progress of children in therapy, the knowledge and skills of caregivers and teachers, and the partnerships established in the community (See Fig. 1). Project ELLA is a groundbreaking model that has the potential to transform language and literacy development in Cleveland and beyond.

Early Language and Literacy for All

The science of baby speech sounds: men and women may experience them differently

M. Fernanda Alonso Arteche – maria.alonsoarteche@mail.mcgill.ca
Instagram: @laneurotransmisora

School of Communication Science and Disorders, McGill University, Center for Research on Brain, Language, and Music (CRBLM), Montreal, QC, H3A 0G4, Canada

Instagram: @babylabmcgill

Popular version of 2pSCa – Implicit and explicit responses to infant sounds: a cross-sectional study among parents and non-parents
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027179

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine hearing a baby coo and instantly feeling a surge of positivity. Surprisingly, how we react to the simple sounds of a baby speaking might depend on whether we are women or men, and whether we are parents. Our lab’s research delves into this phenomenon, revealing intriguing differences in how adults perceive baby vocalizations, with a particular focus on mothers, fathers, and non-parents.

Using a method that measures reaction time to sounds, we compared adults’ responses to vowel sounds produced by a baby and by an adult, as well as meows produced by a cat and by a kitten. We found that women, including mothers, tend to respond positively only to baby speech sounds. On the other hand, men, especially fathers, showed a more neutral reaction to all sounds. This suggests that the way we process human speech sounds, particularly those of infants, may vary significantly between genders. While previous studies report that both men and women generally show a positive response to baby faces, our findings indicate that their speech sounds might affect us differently.
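
The comparison described above can be sketched in code. The group names, trial data, and the `mean_rt` helper below are invented for illustration and are not the study's materials; the idea is simply that a faster response to baby vowels than to adult vowels is read as a positive implicit bias.

```python
# Hypothetical sketch: comparing mean reaction times (ms) across listener
# groups and sound types, in the spirit of the implicit task described above.
# All numbers and names are illustrative, not the study's data.

from statistics import mean

# (group, sound_type) -> list of reaction times in milliseconds
trials = {
    ("mothers", "baby_vowel"):  [480, 455, 470],
    ("mothers", "adult_vowel"): [530, 540, 525],
    ("fathers", "baby_vowel"):  [520, 515, 525],
    ("fathers", "adult_vowel"): [525, 530, 520],
}

def mean_rt(group, sound_type):
    """Average reaction time for one group/sound combination."""
    return mean(trials[(group, sound_type)])

# A faster response to baby vowels than to adult vowels is read as a
# positive implicit bias toward infant speech sounds.
for group in ("mothers", "fathers"):
    bias = mean_rt(group, "adult_vowel") - mean_rt(group, "baby_vowel")
    print(f"{group}: implicit bias = {bias:.1f} ms")
```

With these made-up numbers, the "mothers" group shows a large positive bias while the "fathers" group shows almost none, mirroring the pattern the study reports.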

Moreover, mothers rated babies and their sounds highly, expressing a strong liking for babies, their cuteness, and the cuteness of their sounds. Fathers, although less responsive in the reaction task, still gave high ratings for their liking of babies, babies' cuteness, and the appeal of their sounds. This contrast between implicit (subconscious) reactions and explicit (conscious) opinions highlights an interesting complexity in parental instincts and perceptions. Implicit measures, such as those used in our study, tap into automatic, unconscious responses that individuals might not be fully aware of or may not express when asked directly. These methods offer a more direct window into underlying feelings that might otherwise be obscured by social expectations or personal biases.

This research builds on earlier studies conducted in our lab, where we found that infants prefer to listen to the vocalizations of other infants, a factor that might be important for their development. We wanted to see if adults, especially parents, show similar patterns because their reactions may also play a role in how they interact with and nurture children. Since adults are the primary caregivers, understanding these natural inclinations could be key to supporting children’s development more effectively.

The implications of this study are not just academic; they touch on everyday experiences of families and can influence how we think about communication within families. Understanding these differences is a step towards appreciating the diverse ways people connect with and respond to the youngest members of our society.

Why is it easier to understand people we know?

Emma Holmes – emma.holmes@ucl.ac.uk
X (Twitter): @Emma_Holmes_90

University College London (UCL), Department of Speech, Hearing and Phonetic Sciences, London, Greater London, WC1N 1PF, United Kingdom

Popular version of 4aPP4 – How does voice familiarity affect speech intelligibility?
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027437

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

It’s much easier to understand what others are saying if you’re listening to a close friend or family member, compared to a stranger. If you practice listening to the voices of people you’ve never met before, you might also become better at understanding them.

Many people struggle to understand what others are saying in noisy restaurants or cafés. This can become much more challenging as people get older. It’s often one of the first changes that people notice in their hearing. Yet, research shows that these situations are much easier if people are listening to someone they know very well.

In our research, we ask people to visit the lab with a friend or partner. We record their voices while they read sentences aloud. We then invite the volunteers back for a listening test. During the test, they hear sentences and click words on a screen to show what they heard. This is made more difficult by playing a second sentence at the same time, which the volunteers are told to ignore. This is like having a conversation when there are other people talking around you. Our volunteers listen to many sentences over the course of the experiment. Sometimes, the sentence is one recorded from their friend or partner. Other times, it’s one recorded from someone they’ve never met. Our studies have shown that people are best at understanding the sentences spoken by their friend or partner.
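
The scoring step of a listening test like this can be sketched as follows. The words, conditions, and the `score_trial` helper are illustrative assumptions, not the lab's actual code; the point is that each condition's intelligibility is just the proportion of words clicked correctly.

```python
# Illustrative sketch (not the lab's actual scoring code): computing the
# proportion of words identified correctly for familiar vs. unfamiliar talkers.

def score_trial(presented, clicked):
    """Fraction of presented words the listener clicked correctly, by position."""
    correct = sum(p == c for p, c in zip(presented, clicked))
    return correct / len(presented)

# Each trial: (condition, words presented, words the listener clicked)
trials = [
    ("friend",   ["ball", "red", "two"],  ["ball", "red", "two"]),
    ("friend",   ["cat", "blue", "nine"], ["cat", "blue", "five"]),
    ("stranger", ["dog", "green", "six"], ["dog", "white", "one"]),
]

# Average the per-trial scores within each condition.
by_condition = {}
for condition, presented, clicked in trials:
    by_condition.setdefault(condition, []).append(score_trial(presented, clicked))

for condition, scores in by_condition.items():
    print(condition, sum(scores) / len(scores))
```

In a real experiment there would be many trials per condition; with enough of them, a higher average for the "friend" condition is the familiar-voice benefit described above.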

In one study, we manipulated the sentence recordings, to change the sound of the voices. The voices still sounded natural. Yet, volunteers could no longer recognize them as their friend or partner. We found that participants were still better at understanding the sentences, even though they didn’t recognize the voice.

In other studies, we’ve investigated how people learn to become familiar with new voices. Each volunteer learns the names of three new people. They’ve never met these people, but we play them many recordings of their voices. This is like listening to a new podcast or radio show. We’ve found that volunteers become very good at understanding these new voices. In other words, we can train people to become familiar with new voices.

In new work that hasn’t yet been published, we found that voice familiarization training benefits both older and younger people. So, it may help older people who find it very difficult to listen in noisy places. Many environments contain background noise—from office parties to hospitals and train stations. Ultimately, we hope that we can familiarize people with voices they hear in their daily lives, to make it easier to listen in noisy places.

1aPP – The Role of Talker/Vowel Change in Consonant Recognition with Hearing Loss

Ali Abavisani – aliabavi@illinois.edu
Jont B. Allen – jontalle@illinois.edu
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
405 N Mathews Ave
Urbana, IL, 61801

Popular version of paper 1aPP
Presented Monday, May 13, 2019
177th ASA Meeting, Louisville, KY

Hearing loss can have a serious impact on the social lives of those who experience it. The effect becomes more complicated in environments such as restaurants, where the background noise is itself similar to speech. Although hearing aids of various designs aim to address these issues, users complain about hearing aid performance in exactly the social situations where aids are most needed. Part of the problem is that hearing aids do not use speech as part of the design and fitting process. If speech sounds from real-life conditions were incorporated into the fitting of hearing aids, it might be possible to address many of the shortcomings that irritate users.

There have been many studies on the features that are important for identifying speech sounds such as isolated consonant + vowel (CV) phones (i.e., meaningless speech sounds). Most of these studies ran experiments on normal-hearing listeners to identify how different speech features affect correct recognition. It turned out that manipulating speech sounds, such as replacing a vowel or amplifying or attenuating certain parts of the sound in the time-frequency domain, leads normal-hearing listeners to identify different speech sounds. One goal of the current study is to investigate whether listeners with hearing loss respond similarly to such manipulations.

We designed a speech-based test that audiologists may use to determine which speech phones are problematic for each individual with hearing loss. The design includes a perceptual measure that corresponds to speech understanding in speech-like background noise: the noise level at which an average normal-hearing listener can recognize the speech sound with at least 90% accuracy. The test sounds combine 14 consonants {p, t, k, f, s, S, b, d, g, v, z, Z, m, n} with four vowels {A, ae, I, E} to cover a range of features present in speech. All test sounds were pre-evaluated to ensure they are recognizable by normal-hearing listeners under the noise conditions of the experiments. Two sets of sounds, T1 and T2, with the same consonant-vowel combinations but different talkers, were presented to listeners at their most comfortable level of hearing (independent of their specific hearing loss). The two sets had distinct perceptual measures. When two sounds with similar perceptual measures, sharing the same consonant but differing in vowel, are presented to a listener with hearing loss, the responses show how that listener's particular hearing function causes errors in understanding the sound, and why it leads to recognition of a specific sound other than the one presented. Presenting sounds from both sets also lets us compare how the perceptual measure, which is based on normal-hearing listeners, relates to performance in listeners with hearing loss. When a listener's recognition score increases as a result of a change in the presented speech sound, it indicates how the hearing aid fitting process should proceed for that particular pairing of listener and speech sound.
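
The perceptual measure can be illustrated with a small sketch. The psychometric data points and the `snr_at_accuracy` helper below are invented for illustration; the idea is simply to read off the noise level at which normal-hearing recognition crosses 90%.

```python
# A minimal sketch of the perceptual measure described above: the speech-to-noise
# ratio (SNR) at which normal-hearing listeners recognize a token with 90%
# accuracy. The psychometric data below are made up for illustration.

def snr_at_accuracy(points, target=0.90):
    """Linearly interpolate the SNR where recognition accuracy crosses `target`.

    `points` is a list of (snr_db, accuracy) pairs sorted by ascending SNR.
    """
    for (snr1, acc1), (snr2, acc2) in zip(points, points[1:]):
        if acc1 <= target <= acc2:
            frac = (target - acc1) / (acc2 - acc1)
            return snr1 + frac * (snr2 - snr1)
    raise ValueError("target accuracy not bracketed by the data")

# Recognition accuracy for one CV token at several speech-to-noise ratios.
psychometric = [(-12, 0.35), (-6, 0.70), (0, 0.92), (6, 0.99)]
print(snr_at_accuracy(psychometric))  # SNR (dB) where accuracy reaches 90%
```

A token with a lower SNR at 90% accuracy is robust in noise; comparing this value across tokens is what lets two sounds be matched or contrasted on the perceptual measure.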

While the study shows that improvement or degradation of speech sounds is listener dependent, on average 85% of sounds improved when we replaced a CV with the same CV from a talker with a better perceptual measure. Additionally, among CVs with similar perceptual measures, 28% improved when the vowel was replaced with {A}, 28% when replaced with {E}, 25% when replaced with {ae}, and 19% when replaced with {I}.

The confusion pattern in each case provides insight into how these changes affect phone recognition in each ear. We propose prescribing hearing aid amplification tailored to individual ears, based on the confusion pattern, the response to a change in perceptual measure, and the response to a change in vowel.
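
The confusion-pattern idea can be sketched as follows. The responses and the tallying code are hypothetical, not the study's data; the point is that tallying (presented, heard) pairs exposes which consonants an ear systematically mishears.

```python
# Hypothetical sketch of a confusion pattern: tallying which consonant a
# listener reports for each presented consonant, then ranking the most
# frequent confusions. Responses here are invented for illustration.

from collections import Counter

# (presented, heard) pairs from one ear's test session
responses = [
    ("p", "p"), ("p", "t"), ("p", "t"),
    ("f", "f"), ("f", "s"),
    ("b", "b"), ("b", "b"),
]

# Count only the errors; correct responses are not confusions.
confusions = Counter(
    (presented, heard) for presented, heard in responses if presented != heard
)

# The dominant confusions suggest where amplification should be adjusted.
for (presented, heard), count in confusions.most_common():
    print(f"{presented} heard as {heard}: {count} times")
```

In this toy session, /p/ misheard as /t/ dominates, so a fitting informed by the confusion pattern would target the time-frequency regions that distinguish those consonants.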

These tests are directed at fine-tuning hearing aid insertion gain, with the ultimate goal of improving speech perception and precisely identifying when, and for which consonants, an ear with hearing loss needs treatment to enhance speech recognition.