Can aliens found in museums teach us about learning sound categories?

Christopher Heffner – ccheffne@buffalo.edu

Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, 14214, United States

Popular version of 4aSCb6 – Age and category structure in phonetic category learning
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027460

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine being a native English speaker learning to speak French for the first time. You’ll have a lot to learn, including new ways to fit words together to form sentences and a whole new vocabulary. Beyond that, though, you must also learn to tell apart sounds you’re not used to. Even the French word for “sound”, son, differs from the word for “bucket”, seau, in a way that English speakers don’t usually pay attention to. How do you manage to learn to tell these sounds apart when you’re listening to others? You need to group those sounds into categories. In this study, museum and library visitors interacting with aliens in a simple game helped us understand which categories people might find harder to learn. The visitors spanned many different ages, which allowed us to see how category learning might change as we get older.

One thing that might help is coming equipped with the knowledge that certain types of categories are impossible. If you’re in a new city trying to choose a restaurant, it can be really daunting to investigate every single restaurant in town. The decision becomes less overwhelming if you narrow your search to a specific cuisine or neighborhood. Similarly, if you’re learning a new language, entertaining every possible category might be very difficult, but limiting yourself to certain options might help. My previous research (Heffner et al., 2019) indicated that learners might start the language learning process with biases against complicated categories, such as ones you need the word “or” to describe: I can describe a day as uncomfortable in its temperature if it is too hot or too cold. We compared these complicated categories to simple ones and saw that the complicated ones were hard to learn.

In this study, I examined this sort of bias across a wide range of ages. Brains change as we grow into adulthood and continue to change as we grow older. I was curious whether the bias against those complicated categories would shift with age, too. To study this, I enlisted visitors at a variety of community sites through partnerships with, among others, the Buffalo Museum of Science, the Rochester Museum and Science Center, and the West Seneca Public Library, all located in Western New York. My lab brought portable equipment to those sites and recruited visitors. The visitors got to learn about acoustics, a branch of science they had probably not heard much about before; the community spaces got a cool, interactive activity for their guests; and we as scientists got access to a broader population than we could reach sitting inside the university.

Figure 1. The three aliens that my participants got to know over the course of the experiment. Each alien made a different combination of sounds, or no sounds at all.

We told the visitors that they were park rangers in Neptune’s first national park who had to learn which aliens in the park made which sounds. The visitors didn’t know that the sounds they were hearing were taken from German. Over the course of the experiment, they learned to group the sounds according to categories that we imposed on the German speech sounds. We found that learning of simple and complicated categories differed across ages. Nobody liked the complicated categories: everyone, no matter their age, found them difficult to learn. Responses to the simple categories, however, differed a lot depending on age. Kids found them very difficult, too, but learning got easier for teens, peaked in young adulthood, and then became a bit harder again in older age. This suggests that the brain systems that help us learn simple categories might change over time, while the bias against complicated categories seems to be shared by everyone.

 

Figure 2. A graph, created by me, showing how accurate people were at matching the sounds they heard with aliens. There are three pairs of bars; within each pair, the red bars (on the right) show the accuracy for the simple categories, while the blue bars (on the left) show the accuracy for the complicated categories. The left two bars show participants aged 7-17, the middle two bars show participants aged 18-39, and the right two show participants aged 40 and up. Note that the simple categories are easier than the complicated ones for participants aged 18 and up, while for those younger than 18, there is no difference between the categories.

2aSC – Speech: An eye and ear affair!

Pamela Trudeau-Fisette – ptrudeaufisette@gmail.com
Lucie Ménard – menard.lucie@uqam.ca
Université du Québec à Montréal
320 Ste-Catherine E.
Montréal, H3C 3P8

Popular version of poster session 2aSC, “Auditory feedback perturbation of vowel production: A comparative study of congenitally blind speakers and sighted speakers”
Presented Tuesday morning, May 19, 2015, Ballroom 2, 8:00 AM – 12:00 noon
169th ASA Meeting, Pittsburgh
———————————
When learning to speak, young infants and toddlers use auditory and visual cues to correctly associate speech movements with specific speech sounds. In doing so, typically developing children compare their own speech with the speech of their ambient language to build and refine the relationship between what they hear, see, and feel and how to produce it.

In many day-to-day situations, we exploit the multimodal nature of speech: in noisy environments, such as a cocktail party, we look at our interlocutor’s face and use lip reading to recover speech sounds. When speaking clearly, we open our mouths wider to make ourselves more intelligible. Sometimes, just seeing someone’s face is enough to communicate!

What happens in cases of congenital blindness? Although blind speakers learn to produce intelligible speech, they do not speak quite like sighted speakers do. Since they cannot perceive others’ visual cues, blind speakers do not produce visible labial movements as much as their sighted peers do.

Production of the French vowel “ou” (similar to the vowel in “cool”) by a sighted adult speaker (on the left) and a congenitally blind adult speaker (on the right). The articulatory movements of the lips are clearly more pronounced for the sighted speaker.

Blind speakers therefore put more weight on what they hear (auditory feedback) than sighted speakers do, because one sensory input is lacking. How does that affect the way blind individuals speak?

To answer this question, we conducted an experiment in which we asked congenitally blind adult speakers and sighted adult speakers to produce multiple repetitions of the French vowel “eu”. While they produced the 130 utterances, we gradually altered their auditory feedback through headphones – without their knowledge – so that they were not hearing the exact sound they were saying. Consequently, they needed to modify the way they produced the vowel to compensate for the acoustic manipulation, so that they could hear the vowel they were asked to produce (and the one they thought they were saying all along!).

We were interested in whether blind speakers and sighted speakers would react differently to this auditory manipulation. Because blind speakers cannot rely on visual feedback, we hypothesized that they would give more weight to their auditory feedback and, therefore, compensate to a greater extent for the acoustic manipulation.

To explore this question, we measured the acoustic (produced sounds) and articulatory (lip and tongue movements) differences between the two groups at three distinct time points during the experiment.

As predicted, congenitally blind speakers compensated for the altered auditory feedback to a greater extent than their sighted peers. More specifically, even though both groups adapted their productions, the blind group compensated more than the control group did, as if they were integrating the auditory information more strongly. We also found that the two groups used different articulatory strategies to respond to the manipulation: blind participants relied more on their tongue (which is not visible when you speak) to compensate. This observation is not surprising given that blind speakers do not use their lips (which are visible when you speak) as much as their sighted peers do.

Tags: speech, language, learning, vision, blindness