Why Australian Aboriginal languages have small vowel systems

Andrew Butcher – endymensch@gmail.com

Flinders University, GPO Box 2100, Adelaide, SA, 5001, Australia

Popular version of 1pSC6 – On the Small Flat Vowel Systems of Australian Languages
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022855

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Australia originally had 250-350 Aboriginal languages. Today, about 20 of these survive, and none has more than 5,000 speakers. Most of the original languages shared very similar sound systems: about half of them had just three vowels, another 10% or so had four, and a further 25% or so had a five-vowel system. Only 16% of the world’s languages have a vowel inventory of four or fewer (the average is six; some Germanic languages, such as Danish, have 20 or so).

This paper asks why many Australian languages have so few vowels. Our research shows that the vowels of Aboriginal languages are much more ‘squashed down’ in the acoustic space than those of European languages (Fig 1), indicating that the tongue does not come as close to the roof of the mouth as in European languages. The two ‘closest’ vowels are [e] (a sound with the tongue at the front of the mouth, between ‘pit’ and ‘pet’) and [o] (at the back of the mouth with rounded lips, between ‘put’ and ‘pot’). The ‘open’ (low-tongue) vowel is best transcribed [ɐ], a sound between ‘pat’ and ‘putt’, but with a less open jaw. Four- and five-vowel systems squeeze the extra vowels in between these, adding [ɛ] (between ‘pet’ and ‘pat’) and [ɔ] (more or less exactly as in ‘pot’), with little or no expansion of the acoustic space. Thus, the majority of Australian languages lack any true close (high-tongue) vowels (as in ‘peat’ and ‘pool’).
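
For readers curious how a vowel token becomes a point in this acoustic space, here is a minimal sketch of how the first two resonances (formants F1 and F2) of a recorded vowel might be estimated with the praat-parselmouth library. The file name, measurement point, and analysis settings are illustrative assumptions, not the materials or pipeline used in this study.

```python
# Sketch: estimate F1/F2 for a single vowel token and place it in the
# acoustic vowel space. Assumes the praat-parselmouth package is installed
# and that "vowel.wav" is a hypothetical recording of an isolated vowel.
import parselmouth

snd = parselmouth.Sound("vowel.wav")            # hypothetical file
formants = snd.to_formant_burg(max_number_of_formants=5,
                               maximum_formant=5500.0)

t_mid = snd.duration / 2                        # measure at the vowel midpoint
f1 = formants.get_value_at_time(1, t_mid)       # first formant (Hz): correlates with vowel height
f2 = formants.get_value_at_time(2, t_mid)       # second formant (Hz): correlates with frontness

print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
# A 'squashed' vowel space means that F1 for a language's closest and most
# open vowels spans a narrower range than in, say, English or Danish.
```
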
So why do Australian languages have a ‘flattened’ vowel space? The answer may lie in the ears of the speakers rather than in their mouths. Aboriginal Australians have by far the highest prevalence of chronic middle ear infection in the world. Our research with Aboriginal groups of diverse age, language and geographical location shows 30-60% of speakers have a hearing impairment in one or both ears (Fig 2). Nearly all Aboriginal language groups have developed an alternate sign language to complement the spoken one. Our previous analysis has shown that the sound systems of Australian languages resemble those of individual hearing-impaired children in several important ways, leading us to hypothesise that the consonant systems and the word structure of these languages have been influenced by the effects of chronic middle ear infection over generations.

A reduction in the vowel space is another of these resemblances. Middle ear infection affects the low-frequency end of the hearing range (under 500 Hz), reducing the prominence of the distinctive lower resonances of close vowels, such as those in ‘peat’ and ‘pool’ (Fig 3). It is possible that, over generations, speakers have raised the frequencies of these resonances to make them more audible, thereby constricting the acoustic space the languages use. If so, we may ask whether, on purely acoustic grounds, communicating in an Aboriginal language in the classroom – using a sound system optimally attuned to the typical hearing profile of the speech community – might offer improved educational outcomes for Indigenous children in the early years.
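
As a rough illustration of the acoustic argument, here is a minimal sketch that passes a crude two-sinusoid stand-in for a close vowel through a simple high-pass filter imitating attenuation below 500 Hz. The signal, filter type, and cutoff are simplifying assumptions for illustration only, not a model of real conductive hearing loss.

```python
# Sketch: show how attenuating frequencies below ~500 Hz weakens the first
# resonance (F1) of a close vowel. Two sinusoids stand in for a vowel and a
# high-pass filter stands in for the hearing loss; both are crude assumptions.
import numpy as np
from scipy import signal

sr = 16000
t = np.arange(0, 0.5, 1 / sr)
f1, f2 = 300.0, 2300.0                           # rough formant values for [i] ('peat')
vowel = np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# 4th-order Butterworth high-pass at 500 Hz.
sos = signal.butter(4, 500.0, btype="highpass", fs=sr, output="sos")
heard = signal.sosfilt(sos, vowel)

def band_rms(x, lo, hi):
    """RMS of the spectrum within a frequency band."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sqrt(np.mean(np.abs(spec[mask]) ** 2))

# The F1 region is strongly attenuated; the F2 region is barely touched,
# so the cue that distinguishes close vowels is the one that suffers most.
print("F1 region (200-400 Hz): ", band_rms(vowel, 200, 400), "->", band_rms(heard, 200, 400))
print("F2 region (2200-2400 Hz):", band_rms(vowel, 2200, 2400), "->", band_rms(heard, 2200, 2400))
```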

Documenting the sounds of southwest Congo: the case of North Boma

Lorenzo Maselli – lorenzo.maselli@ugent.be

Instagram: @mundenji

FWO, UGent, UMons, BantUGent, Ghent, Oost-Vlaanderen, 9000, Belgium

Popular version of 1aSC2 – Retroflex nasals in the Mai-Ndombe (DRC): the case of nasals in North Boma B82
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022724

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

“All language sounds are equal but some language sounds are more equal than others” – or, at least, that is the case in academia. While French i’s and English t’s are constantly re-dotted and re-crossed, the vast majority of the world’s linguistic communities remain undocumented, with their unique sound heritage gradually fading into silence. The preservation of humankind’s linguistic diversity relies solely on detailed documentation and description.

Over the past few years, a team of linguists from Ghent, Mons, and Kinshasa have dedicated their efforts to recording the phonetic and phonological oddities of southwest Congo’s Bantu varieties. Among these, North Boma (Figure 1) stands out for its display of rare sounds known as “retroflexes”. Reports of these sounds are particularly rare in central Africa, which partly reflects a more general under-documentation of the area’s sound inventories. Through extensive fieldwork in the North Boma area, meticulous data analysis, and advanced statistical processing, these researchers have produced the first comprehensive account of North Boma’s retroflexes. As it turns out, North Boma retroflexes are exclusively nasal, a typologically striking circumstance. Their work, presented in Sydney this year, not only enriches our understanding of these unique consonants but also points to potential historical explanations for their prevalence in the region.

Figure 1 – the North Boma area

The study highlights the remarkable salience of North Boma’s retroflexes, characterised by distinct acoustic features that sometimes align with and sometimes deviate from those reported in the existing literature. This is clearly shown in Figure 2, where the North Boma nasal space is plotted using a technique known as “Multiple Factor Analysis” (MFA), which allows for the study of small corpora organised into clear variable groups. As can be seen, the behaviour of the retroflexes differs greatly from that of the other nasals of North Boma. This uniqueness also suggests that their presence in the area may stem from interactions with long-lost hunter-gatherer forest languages, providing invaluable insights into the region’s history.

Figure 2 – MFA results show that retroflex and non-retroflex nasals behave very differently in North Boma
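
For readers unfamiliar with the technique behind Figure 2, here is a minimal sketch of the core idea of Multiple Factor Analysis: each group of variables is standardised and down-weighted by its own leading singular value before a global PCA, so that no single group dominates the solution. The variable groups and values below are hypothetical, and this is an approximation of MFA, not the authors’ actual analysis code.

```python
# Sketch of the core idea behind Multiple Factor Analysis (MFA): standardise
# each variable group, divide it by its own first singular value so the groups
# are balanced, then run a global PCA. Hypothetical data for illustration only.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def mfa_like(df: pd.DataFrame, groups: dict, n_components: int = 2):
    weighted_blocks = []
    for cols in groups.values():
        block = df[cols].to_numpy(dtype=float)
        block = (block - block.mean(axis=0)) / block.std(axis=0)   # standardise within group
        first_sv = np.linalg.svd(block, compute_uv=False)[0]       # group's leading singular value
        weighted_blocks.append(block / first_sv)                   # balance the groups
    global_table = np.hstack(weighted_blocks)
    return PCA(n_components=n_components).fit_transform(global_table)

# Hypothetical acoustic measurements for four nasal tokens:
df = pd.DataFrame({
    "duration_ms": [85, 92, 110, 120],
    "closure_ms":  [60, 65, 80, 88],
    "f2_onset_hz": [1600, 1650, 1900, 1950],
    "f3_onset_hz": [2500, 2550, 2100, 2050],
})
groups = {"temporal": ["duration_ms", "closure_ms"],
          "spectral": ["f2_onset_hz", "f3_onset_hz"]}
scores = mfa_like(df, groups)    # 2-D coordinates for plotting a 'nasal space'
print(scores)
```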

Extraordinary sound patterns are waiting to be discovered in the least documented language communities of the world. North Boma is just one compelling example among many. As we head towards an unprecedented language-loss crisis, the imperative for detailed phonetic documentation becomes increasingly evident.

Achieving Linguistic Justice for African American English #ASA184

African American English varies systematically and is internally consistent; a proper understanding of this variation prevents the misdiagnosis of speech and language disorder.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 10, 2023 – African American English (AAE) is a variety of English spoken primarily, though not exclusively, by Black Americans of historical African descent. Because AAE varies from white American English (WAE) in a systematic way, it is possible that speech and hearing specialists unfamiliar with the language variety could misidentify differences in speech production as speech disorder. Professional understanding of the difference between typical variation and errors in the language system is the first step for accurately identifying disorder and establishing linguistic justice for AAE speakers.

(left) A 5-year-old AAE-speaking girl’s production of “elephant.” When the /t/ in the final /nt/ cluster is produced, the AAE speaker produces less aspiration noise, and the /t/ is shorter than in the WAE production. The duration of the word is 740 milliseconds (0.74 seconds). (right) A 5-year-old WAE-speaking girl’s production of “elephant.” When the /t/ in /nt/ is produced, the WAE speaker produces a lot of aspiration noise, and the /t/ is longer than in the AAE production. The duration of the entire word is 973 milliseconds (0.97 seconds). Both girls produce intelligible versions of the word “elephant.”

In her presentation, “Kids talk too: Linguistic justice and child African American English,” Yolanda Holt of East Carolina University will describe aspects of the systematic variation between AAE and WAE speech production in children. The talk will take place Wednesday, May 10, at 10:50 a.m. Eastern U.S. in the Los Angeles/Miami/Scottsdale room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

Common characteristics of AAE speech include variation at all linguistic levels, from sound production at the word level to the choice of commentary in professional interpersonal interactions. Frequent features of AAE are final consonant reduction or deletion and final consonant cluster reduction. Holt provided the following example to illustrate word-level and interpersonal-level linguistic variation.

“In the professional setting, if one AAE-speaking professional woman wanted to compliment the attire of the other, the exchange might sound something like this: [Speaker 1] ‘I see you rockin’ the tone on tone.’ [Speaker 2] ‘Frien’, I’m jus’ tryin’ to be like you wit’ the fully executive flex.’”

This example, in addition to using common aspects of AAE word shape, shows how the choice to use AAE in a professional setting is a way for the two women to share a message beyond the words.

“This exchange illustrates a complex and nuanced cultural understanding between the two speakers. In a few words, they communicate professional respect and a subtle appreciation for the intricate balance that African American women navigate in bringing their whole selves to the corporate setting,” said Holt.

Holt and her team examined final consonant cluster reduction (e.g., expressing “shift” as “shif’”) in 4- and 5-year-old children. Using instrumental acoustic phonetic analysis, they discovered that the variation in final consonant production in AAE is likely not a wholesale elimination of word endings but rather a difference in certain aspects of articulation.
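
As an illustration of the kind of instrumental measurement involved, here is a minimal sketch that, given hand-annotated time points for a word-final /t/, computes the word’s duration and the duration and noise energy of the aspiration interval. The file name and time points are hypothetical placeholders, not data from the study.

```python
# Sketch: measure the duration and aspiration-noise energy of a word-final /t/
# from annotated boundaries. "elephant.wav" and the times are hypothetical
# placeholders (a mono recording is assumed), not data from the study.
import numpy as np
import soundfile as sf

y, sr = sf.read("elephant.wav")               # hypothetical mono recording
word_start, word_end = 0.10, 0.84             # annotated word boundaries (s)
t_release, t_offset = 0.78, 0.84              # annotated /t/ release and end of aspiration (s)

word_duration_ms = (word_end - word_start) * 1000
aspiration = y[int(t_release * sr):int(t_offset * sr)]
aspiration_ms = (t_offset - t_release) * 1000
aspiration_rms = np.sqrt(np.mean(aspiration ** 2))   # noise energy of the burst/aspiration

print(f"word duration: {word_duration_ms:.0f} ms")
print(f"/t/ aspiration: {aspiration_ms:.0f} ms, RMS = {aspiration_rms:.4f}")
# A shorter, weaker aspiration interval is still a produced /t/ variant,
# not an absent consonant.
```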

“This is an important finding because it could be assumed that if a child does not fully articulate the final sound, they are not aware of its existence,” said Holt. “By illustrating that the AAE-speaking child produces a variation of the final sound, not a wholesale removal, we help to eliminate the mistaken idea that AAE speakers don’t know the ending sounds exist.”

Holt believes the fields of speech and language science, education, and computer science should expect and accept such variation in human communication. Linguistic justice occurs when we accept variation in human language without penalizing the user or defining their speech as “wrong.”

“Language is alive. It grows and changes over each generation,” said Holt. “Accepting the speech and language used by each generation and each group of speakers is an acceptance of the individual, their life, and their experience. Acceptance, not tolerance, is the next step in the march towards linguistic justice. For that to occur, we must learn from our speakers and educate our professionals that different can be typical. It is not always disordered.”

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Hey Siri, Can You Hear Me? #ASA184

Experiments show how speech and comprehension change when people communicate with artificial intelligence.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 9, 2023 – Millions of people now regularly communicate with AI-based devices, such as smartphones, speakers, and cars. Studying these interactions can improve AI’s ability to understand human speech and determine how talking with technology impacts language.

In their talk, “Clear speech in the new digital era: Speaking and listening clearly to voice-AI systems,” Georgia Zellou and Michelle Cohn of the University of California, Davis will describe experiments to investigate how speech and comprehension change when humans communicate with AI. The presentation will take place Tuesday, May 9, at 12:40 p.m. Eastern U.S. in the Los Angeles/Miami/Scottsdale room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

Humans change their voice when communicating with AI. Credit: Michelle Cohn

In their first line of questioning, Zellou and Cohn examined how people adjust their voice when communicating with an AI system compared to talking with another human. They found the participants produced louder and slower speech with less pitch variation when they spoke to voice-AI (e.g., Siri, Alexa), even across identical interactions.
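
Here is a minimal sketch of the generic acoustic measures behind descriptions like “louder, slower, with less pitch variation”: RMS loudness, speaking rate, and F0 variability computed with librosa. The file name and word count are placeholders, and these measures are illustrative stand-ins rather than the authors’ exact methodology.

```python
# Sketch: generic measures of loudness, speaking rate, and pitch variation.
# "utterance.wav" and the word count are hypothetical placeholders.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)     # hypothetical recording

# Loudness: mean root-mean-square energy, expressed in dB relative to full scale.
rms = librosa.feature.rms(y=y)[0]
loudness_db = 20 * np.log10(np.mean(rms) + 1e-10)

# Speaking rate: words per second, given a known transcript length.
n_words = 12                                       # hypothetical transcript length
rate_wps = n_words / librosa.get_duration(y=y, sr=sr)

# Pitch variation: standard deviation of F0 over voiced frames.
f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
f0_sd = np.nanstd(f0[voiced_flag])

print(f"loudness ~ {loudness_db:.1f} dBFS, rate ~ {rate_wps:.2f} words/s, F0 SD ~ {f0_sd:.1f} Hz")
```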

On the listening side, the researchers showed that how humanlike a device sounds affects how well listeners understand it. If listeners think the voice they hear is a device, they understand it less accurately; if it sounds more humanlike, their comprehension improves. Clear speech, in the style of a newscaster, was better understood overall, even when it was machine-generated.

“We do see some differences in patterns across human- and machine-directed speech: People are louder and slower when talking to technology. These adjustments are similar to the changes speakers make when talking in background noise, such as in a crowded restaurant,” said Zellou. “People also have expectations that the systems will misunderstand them and that they won’t be able to understand the output.”

Clarifying what makes a speaker intelligible will be useful for voice technology. For example, these results suggest that text-to-speech voices should adopt a “clear” style in noisy conditions.

Looking forward, the team aims to apply these studies to people from different age groups and social and language backgrounds. They also want to investigate how people learn language from devices and how linguistic behavior adapts as technology changes.

“There are so many open questions,” said Cohn. “For example, could voice-AI be a source of language change among some speakers? As technology advances, such as with large language models like ChatGPT, the boundary between human and machine is changing – how will our language change with it?”

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Vocal Tract Size, Shape Dictate Speech Sounds

Main anatomical shape factors of the vocal tract. Credit: Antoine Serrurier

WASHINGTON, March 21, 2023 – Only humans have the ability to use speech. Remarkably, this communication is understandable across accent, social background, and anatomy despite a wide variety of ways to produce the necessary sounds. In JASA, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from…

From the Journal: The Journal of the Acoustical Society of America
Article: Morphological and acoustic modeling of the vocal tract
DOI: 10.1121/10.0017356

The Impact of Formal Musical Training on Speech Comprehension in Heavily Distracting Environments

Alexandra Bruder – alexandra.l.bruder@vanderbilt.edu

Vanderbilt University Medical Center, Department of Anesthesiology, 1211 21st Avenue South, Medical Arts Building, Suite 422, Nashville, TN, 37212, United States

Joseph Schlesinger – joseph.j.schlesinger@vumc.org
Twitter: @DrJazz615

Vanderbilt University Medical Center
Nashville, TN 37205
United States

Clayton D Rothwell – crothwell@infoscitex.com
Infoscitex Corporation, a DCS Company
Dayton, OH, 45431
United States

Popular version of 1pMU4 – The Impact of Formal Musical Training on Speech Intelligibility Performance – Implications for Music Pedagogy in High-Consequence Industries, presented at the 183rd ASA Meeting.

Imagine being a waiter… everyone in the restaurant is speaking, music is playing, and co-workers are trying to get your attention, causing you to miss the customer’s order. Communication is necessary but can be hindered by distractions in many environments, especially high-risk ones such as aviation, nuclear power, and healthcare, where miscommunication is a frequent contributing factor to accidents and loss of life. Does formal music training help performance in domains where multitasking is unavoidable and timely, accurate responses are essential?

We used an audio-visual task to test whether formal music training is useful in multitasking environments. Twenty-five students from Vanderbilt University participated in the study and were separated into groups based on their level of formal music training: none, 1-3 years, 3-5 years, and 5+ years. Participants were given three tasks to attend to: a speech comprehension task (modeling distracted communication), a complex visual distraction task (modeling a clinical patient monitor), and an easy visual distraction task (modeling an alarm monitoring task). These tasks were completed in the presence of a combination of alarms and/or background noise, with and without background music.

Image courtesy of Bruder et al., original paper (Psychology of Music).

Our analysis focused on the audio comprehension task and showed that the group with the most formal music training did not change its response rate when background music was added, while all the other groups did. In other words, with enough music training, background music no longer influences whether participants respond. Additionally, how often participants responded to the audio task depended on their degree of formal music training: participants with no formal music training had the highest response rate, followed by the 1-3-year group, then the 3-5-year group, with the 5+ year group responding least often. However, all groups were similar in overall accuracy, and accuracy decreased for every group when background music was playing. Given the similar accuracy across groups but the less frequent responding with more formal music training, it appears that formal music training helps participants hold back when they do not know the answer.

Image courtesy of Bruder et al. original paper (Psychology of Music).
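
As an illustration of how such a pattern might be examined statistically, here is a minimal sketch that fits a two-way ANOVA for a training-group by background-music interaction on response rate using statsmodels. The tiny data set is entirely hypothetical, and this is not necessarily the model used in the study.

```python
# Sketch: test a music-training-group x background-music interaction on
# response rate with a two-way ANOVA. The data frame is hypothetical
# illustration only, not the study's data or its statistical analysis.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "group":         ["none", "none", "1-3y", "1-3y", "3-5y", "3-5y", "5+y", "5+y"] * 2,
    "music":         ["off"] * 8 + ["on"] * 8,
    "response_rate": [0.92, 0.90, 0.85, 0.87, 0.80, 0.78, 0.70, 0.71,
                      0.95, 0.93, 0.88, 0.90, 0.83, 0.82, 0.70, 0.72],
})

model = ols("response_rate ~ C(group) * C(music)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects of group and music, plus their interaction
```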

Why does this matter? There are many situations in which responding and getting something wrong can be more detrimental than not responding at all, especially under time pressure, when mistakes are costly to correct. Although accuracy was similar across all groups, the groups with some formal music training seemed to respond with overconfidence: they responded often but did not know enough to increase their accuracy, a potentially dangerous combination. This contrasts with the 5+ year group, whose response rate was unaffected by background music and who used their trained ears to better judge how much they had understood, making them less eager to respond to a difficult task under distraction. It turns out that those middle school band lessons paid off after all – that is, if you work in a distracting, multitasking environment.