Moved around? You might process words differently than homebodies!

Marie Bissell – marie.bissell@uta.edu

University of Texas at Arlington, 701 S Nedderman Dr, Arlington, TX, 76019, United States

Abby Walker
Cynthia Clopper

Popular version of 3pSC2 – Effects of dialect familiarity and dialect exposure on cross-dialect lexical processing
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037947

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Some of us grow up hearing mostly one dialect, while others of us have substantial exposure to multiple dialects, maybe because we’re in a pretty bidialectal community, or because we’ve moved between dialect regions. In our work, we’re investigating whether these differences in exposure to pronunciation variation impact how people recognize words.

Word recognition is a bit like a race in your head: there are lots of potential contenders, and your brain’s job is to sift through them really quickly. One thing that makes it easier to recognize a word is if it’s been recently activated: so if you hear “bed” then see <BED>, you’ll be really quick to recognize the written word, compared to if you had just heard a completely unrelated word, like “hat.” One thing that makes it harder to recognize a word is if you’ve just heard a competitor (a word that is pretty similar and therefore confusable with the target word), in this case, hearing something like “bad” before <BED>. We think activating these competitors makes recognition harder because when you hear the word “bad,” you suppress or inhibit competitor words like “bed.”

Figure 1: Map of the USA showing three major dialect regions: Northern, Midland, and Southern. Image courtesy of Cynthia Clopper; boundaries are based on Labov, Ash & Boberg (2006).

Okay, so how does exposure to variability impact all this? In our experiments, participants heard words from different dialects and then matched them to written words. What we’ve been finding across a few studies with American English listeners is that people who have lived in multiple dialect regions (specifically moving between those highlighted in Figure 1) get less of a boost for matching words (“bed” > BED), and more robustly, less of a cost for competitor words (“bad” > BED). Why would this be the case? We think that if you’ve been exposed to lots of variation in pronunciation, you need to be more flexible as a listener: being too certain about what you heard (“oh, that’s definitely ‘bed’, not ‘bad’”) could make it difficult to recover when you’re wrong, and if there are lots of dialects around, there’s more room for you to be wrong! Importantly, we don’t see one style of listening as better or worse than another; rather, it looks like how we process words adapts to the particular challenges of the speech communities we grow up in!

Does Virtual Reality Match Reality? Vocal Performance Across Environments

Pasquale Bottalico – pb81@illinois.edu

University of Illinois, Urbana-Champaign
Champaign, IL 61820
United States

Carly Wingfield², Charlie Nudelman¹, Joshua Glasner³, Yvonne Gonzales Redman¹,²

  1. Department of Speech and Hearing Science, University of Illinois, Urbana-Champaign
  2. School of Music, University of Illinois Urbana-Champaign
  3. School of Graduate Studies, Delaware Valley University

Popular version of 2aAAa1 – Does Virtual Reality Match Reality? Vocal Performance Across Environments
Presented at the 188th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0037496

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Singers often perform in very different spaces than where they practice—sometimes in small, dry rooms and later in large, echoey concert halls. Many singers have shared that this mismatch can affect how they sing. Some say they end up singing too loudly because they can’t hear themselves well, while others say they hold back because the room makes them sound louder than they are. Singers have to adapt their voices to unfamiliar concert halls, and often they have very little rehearsal time to adjust.

While research has shown that instrumentalists adjust their playing depending on the room they are in, there’s been less work looking specifically at singers. Past studies have found that different rooms can change how singers use their voices, including how their vibrato (the small, natural variation in pitch) changes depending on the room’s echo and clarity.

At the University of Illinois, our research team from the School of Music and the Department of Speech and Hearing Science is studying whether virtual reality (VR) can help singers train for different acoustic environments. The big question: can a virtual concert hall give singers the same experience as a real one?

To explore this, we created virtual versions of three real performance spaces on campus (Figure 1).

Figure 1. 360-degree images of the three performance spaces investigated.

Singers wore open-backed headphones and a VR headset while singing into a microphone in a sound booth. As they sang, their voices were processed in real time to sound as if they were in one of the real venues, and this audio was sent back to them through the headphones. In the video (Video 1), you can see a singer performing in the sound booth where the acoustic environments were recreated virtually. In the audio clip (Audio 1), you can hear exactly what the singer heard: the real-time, acoustically processed sound being sent back to their ears through the open-backed headphones.

Video 1. Singer performing in the virtual environment.

 

Audio 1. Example of real-time auralized feedback.

Ten trained singers performed in both the actual venues (Figure 2) and in virtual versions of those same spaces.

Figure 2. Singer performing in the real environment.

We then compared how they sang and how they felt during each performance. The results showed no significant differences in how the singers used their voices or how they perceived the experience between real and virtual environments.

This is an exciting finding because it suggests that virtual reality could become a valuable tool in voice training. If a singer can’t practice in a real concert hall, a VR simulation could help them get used to the sound and feel of the space ahead of time. This technology could give students greater access to performance preparation and allow voice teachers to guide students through the process in a more flexible and affordable way.

Introducing Project ELLA: Enhancing Early Language and Literacy

Jennell Vick – jvick@chsc.org
Twitter: @DrJVick

Cleveland Hearing and Speech Center
6001 Euclid Avenue Suite 100
Cleveland, OH, 44103
United States

Popular version of 2aSC4 – From intention to understanding and back again: How a simple message of ‘Catch and Pass’ can build language in children
Presented at the 187th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0035171

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


Project ELLA (Early Language and Literacy for All) is an exciting new program designed to boost early language and literacy skills in young children. The program uses a simple yet powerful message, “Catch and Pass,” to teach parents, grandparents, daycare teachers and other caregivers the importance of having back-and-forth conversations with children from birth. These interactions help build and strengthen the brain’s language pathways, setting the foundation for lifelong learning.

Developed by the Cleveland Hearing & Speech Center, Project ELLA focuses on helping children in the greater Cleveland area, especially those in under-resourced communities. Community health workers visit neighborhoods to build trust with neighbors, raise awareness about the importance of responsive interactions for language development, and help empower families to put their children on track for later literacy (see Video 1). They also identify children who may need more help through speech and language screenings. For children identified as needing more help, Project ELLA offers free speech-language therapy and support for caregivers at Cleveland Hearing & Speech Center.

The success of the project is measured by tracking the number of children and families served, the progress of children in therapy, the knowledge and skills of caregivers and teachers, and the partnerships established in the community (See Fig. 1). Project ELLA is a groundbreaking model that has the potential to transform language and literacy development in Cleveland and beyond.

Figure 1. Early Language and Literacy for All.

To Sound like a Hockey Player, Speak like a Canadian #ASA186

American athletes tend to signal their identity as hockey players through Canadian English-like accents.

Media Contact:
AIP Media
301-209-3090
media@aip.org

OTTAWA, Ontario, May 16, 2024 – As a hockey player, Andrew Bray was familiar with the slang thrown around the “barn” (hockey arena). As a linguist, he wanted to understand how sport-specific jargon evolved and permeated across teams, regions, and countries. In pursuit of the sociolinguistic “biscuit” (puck), he faced an unexpected question.

“It was while conducting this initial study that I was asked a question that has since shaped the direction of my subsequent research,” said Bray. “‘Are you trying to figure out why the Americans sound like fake Canadians?’”  

Canadian English dialects are stereotypically represented by the vowel pronunciation, or articulation, in words like “out” and “about,” borrowed British terms like “zed,” and the affinity for the tag question “eh?” Bray, from the University of Rochester, will present an investigation into American hockey players’ use of Canadian English accents Thursday, May 16, at 8:25 a.m. EDT as part of a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association, running May 13-17 at the Shaw Centre located in downtown Ottawa, Ontario, Canada.

Andrew Bray, former UGA Ice Dawg, will present an investigation into American hockey players’ use of Canadian English accents at the 186th meeting of the Acoustical Society of America. Here the University of Georgia takes on the University of Florida in the 2016 Savannah Tire Hockey Classic. Image credit: University of Georgia Ice Dawgs

Studying how hockey players talk required listening to them talk about hockey. To analyze unique vowel articulation and the vast collection of sport-specific slang terminology that players incorporated into their speech, Bray visited different professional teams to interview their American-born players.

“In these interviews, I would ask players to discuss their career trajectories, including when and why they began playing hockey, the teams that they played for throughout their childhood, why they decided to pursue collegiate or major junior hockey, and their current lives as professionals,” said Bray. “The interview sought to get players talking about hockey for as long as possible.”

Bray found that American athletes borrow features of the Canadian English accents, especially for hockey-specific terms and jargon, but do not follow the underlying rules behind the pronunciation, which could explain why the accent might sound “fake” to a Canadian.

“It is important to note that American hockey players are not trying to shift their speech to sound more Canadian,” said Bray. “Rather, they are trying to sound more like a hockey player.”

Players from Canada and northern American states with similar accents have historically dominated the sport. Adopting features of this dialect is a way hockey players can outwardly portray their identity through speech, called a linguistic persona. Many factors influence this persona, like age, gender expression, social category, and, as Bray demonstrated, the sport a person plays.

Going forward, Bray plans to combine his recent work with his original quest to investigate if Canadian English pronunciation and the hockey linguistic persona are introduced to American players through the sport’s signature slang.

———————– MORE MEETING INFORMATION ———————–
Main Meeting Website: https://acousticalsociety.org/ottawa/
Technical Program: https://eppro02.ativ.me/src/EventPilot/php/express/web/planner.php?id=ASASPRING24

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the in-person meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

ABOUT THE CANADIAN ACOUSTICAL ASSOCIATION/ASSOCIATION CANADIENNE D’ACOUSTIQUE
The Canadian Acoustical Association (CAA):

  • fosters communication among people working in all areas of acoustics in Canada
  • promotes the growth and practical application of knowledge in acoustics
  • encourages education, research, protection of the environment, and employment in acoustics
  • is an umbrella organization through which general issues in education, employment and research can be addressed at a national and multidisciplinary level

The CAA is a member society of the International Institute of Noise Control Engineering (I-INCE) and the International Commission for Acoustics (ICA), and is an affiliate society of the International Institute of Acoustics and Vibration (IIAV). Visit https://caa-aca.ca/.

Can aliens found in museums teach us about learning sound categories?

Christopher Heffner – ccheffne@buffalo.edu

Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, 14214, United States

Popular version of 4aSCb6 – Age and category structure in phonetic category learning
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027460

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine being a native English speaker learning to speak French for the first time. You’ll have to do a lot of learning, including learning new ways to fit words together to form sentences and a new set of words. Beyond that, though, you must also learn to tell apart sounds that you’re not used to. Even the French word for “sound”, son, is different from the word for “bucket”, seau, in a way that English speakers don’t usually pay attention to. How do you manage to learn to tell these sounds apart when you’re listening to others? You need to group those sounds into categories. In this study, museum and library visitors interacting with aliens in a simple game helped us understand which categories people might find harder to learn. The visitors were of many different ages, which allowed us to see how this might change as we get older.

One thing that might help is arriving with the knowledge that certain types of categories are impossible. If you’re in a new city trying to choose a restaurant, it can be really daunting to investigate every single restaurant in the city. The decision becomes less overwhelming if you narrow yourself to a specific cuisine or neighborhood. Similarly, if you’re learning a new language, it might be very difficult to entertain every possible category, but limiting yourself to certain options might help. My previous research (Heffner et al., 2019) indicated that learners might start the language learning process with biases against complicated categories, like ones that you need the word “or” to describe: I can describe a day as uncomfortable in its temperature if it is too hot or too cold. We compared these complicated categories to simple ones and saw that the complicated ones were harder to learn.

In this study, I examined this sort of bias across a wide range of ages. Brains change as we grow into adulthood and continue to change as we grow older. I was curious whether the bias we have against those certain complicated categories would shift with age, too. To study this, I enlisted visitors at a variety of community sites, by way of partnerships with, among others, the Buffalo Museum of Science, the Rochester Museum and Science Center, and the West Seneca Public Library, all located in Western New York. My lab brought portable equipment to those sites and recruited visitors. The visitors were able to learn about acoustics, a branch of science they had probably not heard much about before; the community spaces got a cool, interactive activity for their guests; and we as the scientists got access to a broader population than we could reach sitting inside the university.

Figure 1. The three aliens that my participants got to know over the course of the experiment. Each alien made a different combination of sounds, or no sounds at all.

We told the visitors that they were park rangers in Neptune’s first national park. They had to learn which aliens in the park made which sounds. The visitors didn’t know that the sounds they were hearing were taken from German. Over the course of the experiment, they learned to group sounds together according to categories that we made up from the German speech sounds. What we found is that learning of simple and complicated categories was different across ages. Nobody liked the complicated categories. Everyone, no matter their age, found them difficult to learn. However, the responses to the simple categories differed a lot depending on age. Kids found them very difficult, too, but learning got easier for the teens. Learning peaked in young adulthood, then was a bit harder for older adults. This suggests that the brain systems that help us learn simple categories might change over time, while everyone seems to have the bias against the complicated categories.

 

Figure 2. A graph, created by me, showing how accurate people were at matching the sounds they heard with aliens. There are three pairs of bars, and within each pair, the red bars (on the right) show the accuracy for the simple categories, while the blue bars (on the left) show the accuracy for the complicated categories. The left two bars show participants aged 7-17, the middle two bars show participants aged 18-39, and the right two show participants aged 40 and up. Note that the simple categories are easier than the complicated ones for participants above 18, while for those younger than 18, there is no difference between the categories.

The science of baby speech sounds: men and women may experience them differently

M. Fernanda Alonso Arteche – maria.alonsoarteche@mail.mcgill.ca
Instagram: @laneurotransmisora

School of Communication Science and Disorders, McGill University, Center for Research on Brain, Language, and Music (CRBLM), Montreal, QC, H3A 0G4, Canada

Instagram: @babylabmcgill

Popular version of 2pSCa – Implicit and explicit responses to infant sounds: a cross-sectional study among parents and non-parents
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027179

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine hearing a baby coo and instantly feeling a surge of positivity. Surprisingly, how we react to the simple sounds of a baby speaking might depend on whether we are women or men, and whether we are parents. Our lab’s research delves into this phenomenon, revealing intriguing differences in how adults perceive baby vocalizations, with a particular focus on mothers, fathers, and non-parents.

Using a method that measures reaction time to sounds, we compared adults’ responses to vowel sounds produced by a baby and by an adult, as well as meows produced by a cat and by a kitten. We found that women, including mothers, tend to respond positively only to baby speech sounds. On the other hand, men, especially fathers, showed a more neutral reaction to all sounds. This suggests that the way we process human speech sounds, particularly those of infants, may vary significantly between genders. While previous studies report that both men and women generally show a positive response to baby faces, our findings indicate that their speech sounds might affect us differently.

Moreover, mothers rated babies and their sounds highly, expressing a strong liking for babies, their cuteness, and the cuteness of their sounds. Fathers, although less responsive in the reaction task, still gave high ratings for their liking of babies, babies’ cuteness, and the appeal of their sounds. This contrast between implicit (subconscious) reactions and explicit (conscious) opinions highlights an interesting complexity in parental instincts and perceptions. Implicit measures, such as those used in our study, tap into automatic and unconscious responses that individuals might not be fully aware of or might not express when asked directly. These methods offer a more direct window into underlying feelings that might be obscured by social expectations or personal biases.

This research builds on earlier studies conducted in our lab, where we found that infants prefer to listen to the vocalizations of other infants, a factor that might be important for their development. We wanted to see if adults, especially parents, show similar patterns because their reactions may also play a role in how they interact with and nurture children. Since adults are the primary caregivers, understanding these natural inclinations could be key to supporting children’s development more effectively.

The implications of this study are not just academic; they touch on everyday experiences of families and can influence how we think about communication within families. Understanding these differences is a step towards appreciating the diverse ways people connect with and respond to the youngest members of our society.