To Sound like a Hockey Player, Speak like a Canadian #ASA186

American athletes tend to signal their identity as hockey players through Canadian English-like accents.

Media Contact:
AIP Media

OTTAWA, Ontario, May 16, 2024 – As a hockey player, Andrew Bray was familiar with the slang thrown around the “barn” (hockey arena). As a linguist, he wanted to understand how sport-specific jargon evolved and permeated across teams, regions, and countries. In pursuit of the sociolinguistic “biscuit” (puck), he faced an unexpected question.

“It was while conducting this initial study that I was asked a question that has since shaped the direction of my subsequent research,” said Bray. “‘Are you trying to figure out why the Americans sound like fake Canadians?’”  

Canadian English dialects are stereotypically represented by the vowel pronunciation, or articulation, in words like “out” and “about,” borrowed British terms like “zed,” and the affinity for the tag question “eh?” Bray, from the University of Rochester, will present an investigation into American hockey players’ use of Canadian English accents Thursday, May 16, at 8:25 a.m. EDT as part of a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association, running May 13-17 at the Shaw Centre located in downtown Ottawa, Ontario, Canada.


Andrew Bray, former UGA Ice Dawg, will present an investigation into American hockey players’ use of Canadian English accents at the 186th meeting of the Acoustical Society of America. Here the University of Georgia takes on the University of Florida in the 2016 Savannah Tire Hockey Classic. Image credit: University of Georgia Ice Dawgs

Studying how hockey players talk required listening to them talk about hockey. To analyze unique vowel articulation and the vast collection of sport-specific slang terminology that players incorporated into their speech, Bray visited different professional teams to interview their American-born players.

“In these interviews, I would ask players to discuss their career trajectories, including when and why they began playing hockey, the teams that they played for throughout their childhood, why they decided to pursue collegiate or major junior hockey, and their current lives as professionals,” said Bray. “The interview sought to get players talking about hockey for as long as possible.”

Bray found that American athletes borrow features of Canadian English accents, especially for hockey-specific terms and jargon, but do not follow the underlying pronunciation rules, which could explain why the accent might sound “fake” to a Canadian.

“It is important to note that American hockey players are not trying to shift their speech to sound more Canadian,” said Bray. “Rather, they are trying to sound more like a hockey player.”

Players from Canada and northern U.S. states with similar accents have historically dominated the sport. Adopting features of this dialect is a way hockey players can outwardly signal their identity through speech, called a linguistic persona. Many factors influence this persona, like age, gender expression, social category, and, as Bray demonstrated, a sport.

Going forward, Bray plans to combine his recent work with his original quest to investigate if Canadian English pronunciation and the hockey linguistic persona are introduced to American players through the sport’s signature slang.

Main Meeting Website:
Technical Program:

In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at

ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at

ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the in-person meeting or virtual press conferences, contact AIP Media Services. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

The Acoustical Society of America is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See


The Canadian Acoustical Association (CAA):
  • fosters communication among people working in all areas of acoustics in Canada
  • promotes the growth and practical application of knowledge in acoustics
  • encourages education, research, protection of the environment, and employment in acoustics
  • is an umbrella organization through which general issues in education, employment and research can be addressed at a national and multidisciplinary level

The CAA is a member society of the International Institute of Noise Control Engineering (I-INCE) and the International Commission for Acoustics (ICA), and is an affiliate society of the International Institute of Acoustics and Vibration (IIAV). Visit

Can aliens found in museums teach us about learning sound categories?

Christopher Heffner –

Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, 14214, United States

Popular version of 4aSCb6 – Age and category structure in phonetic category learning
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine being a native English speaker learning to speak French for the first time. You’ll have to do a lot of learning, including learning new ways to fit words together to form sentences and a new set of words. Beyond that, though, you must also learn to tell apart sounds that you’re not used to. Even the French word for “sound”, son, differs from the word for “bucket”, seau, in a way that English speakers don’t usually pay attention to. How do you manage to learn to tell these sounds apart when you’re listening to others? You need to group those sounds into categories. In this study, museum and library visitors interacting with aliens in a simple game helped us understand which categories people might find harder to learn. The visitors were of many different ages, which allowed us to see how this might change as we get older.

One thing that might help would be if you come with knowledge that certain types of categories are impossible. If you’re in a new city trying to choose a restaurant, it can be really daunting if you decide to investigate every single restaurant in the city. The decision becomes less overwhelming if you narrow yourself to a specific cuisine or neighborhood. Similarly, if you’re learning a new language, it might be very difficult if you entertain every possible category, but limiting yourself to certain options might help. My previous research (Heffner et al., 2019) indicated that learners might start the language learning process with biases against complicated categories, like ones that you need the word “or” to describe. I can describe a day as uncomfortable in its temperature if it is too hot or too cold. We compared these complicated categories to simple ones and saw that the complicated ones were hard to learn.
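The difference between a simple category and a complicated, disjunctive one can be sketched in code. This toy example uses the temperature analogy from above; the thresholds and function names are illustrative only, not drawn from the study:

```python
# Toy illustration of simple vs. disjunctive ("or") categories.
# A simple category covers one contiguous region of a dimension;
# a disjunctive category joins two disconnected regions with "or".

def is_hot(temp_c: float) -> bool:
    # Simple category: a single threshold defines membership.
    return temp_c > 30

def is_uncomfortable(temp_c: float) -> bool:
    # Disjunctive category: membership needs an "or" --
    # too cold OR too hot, two disconnected regions.
    return temp_c < 10 or temp_c > 30

print(is_hot(35))            # True
print(is_uncomfortable(5))   # True: too cold
print(is_uncomfortable(20))  # False: comfortable middle range
```

Learning `is_uncomfortable` from examples alone is harder than learning `is_hot`, because the learner must entertain a rule with two separate boundaries, which is the kind of category the study found everyone struggled with.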

In this study, I studied this sort of bias across lots of different ages. Brains change as we grow into adulthood and continue to change as we grow older. I was curious whether the bias we have against those certain complicated categories would shift with age, too. To study this, I enlisted visitors to a variety of community sites, by way of partnerships with, among others, the Buffalo Museum of Science, the Rochester Museum and Science Center, and the West Seneca Public Library, all located in Western New York. My lab brought portable equipment to those sites and recruited visitors. The visitors were able to learn about acoustics, a branch of science they had probably not heard much about before; the community spaces got a cool, interactive activity for their guests; and we as the scientists got access to a broader population than we could get sitting inside the university.

Figure 1. The three aliens that my participants got to know over the course of the experiment. Each alien made a different combination of sounds, or no sounds at all.

We told the visitors that they were park rangers in Neptune’s first national park. They had to learn which aliens in the park made which sounds. The visitors didn’t know that the sounds they were hearing were taken from German. Over the course of the experiment, they learned to group sounds together according to categories that we made up in the German speech sounds. What we found is that learning of simple and complicated categories was different across ages. Nobody liked the complicated categories. Everyone, no matter their age, found them difficult to learn. However, the responses to the simple categories differed a lot depending on the age. Kids found them very difficult, too, but learning got easier for the teens. Learning peaked in young adulthood, then was a bit harder for those in older age. This suggests that the brain systems that help us learn simple categories might change over time, while everyone seems to have the bias against the complicated categories.


Figure 2. A graph, created by me, showing how accurate people were at matching the sounds they heard with aliens. There are three pairs of bars, and within each pair, the red bars (on the right) show the accuracy for the simple categories, while the blue bars (on the left) show the accuracy for the complicated categories. The left two bars show participants aged 7-17, the middle two bars show participants aged 18-39, and the right two show participants aged 40 and up. Note that the simple categories are easier than the complicated ones for participants above 18, while for those younger than 18, there is no difference between the categories.

The science of baby speech sounds: men and women may experience them differently

M. Fernanda Alonso Arteche –
Instagram: @laneurotransmisora

School of Communication Science and Disorders, McGill University, Center for Research on Brain, Language, and Music (CRBLM), Montreal, QC, H3A 0G4, Canada

Instagram: @babylabmcgill

Popular version of 2pSCa – Implicit and explicit responses to infant sounds: a cross-sectional study among parents and non-parents
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imagine hearing a baby coo and instantly feeling a surge of positivity. Surprisingly, how we react to the simple sounds of a baby speaking might depend on whether we are women or men, and whether we are parents. Our lab’s research delves into this phenomenon, revealing intriguing differences in how adults perceive baby vocalizations, with a particular focus on mothers, fathers, and non-parents.

Using a method that measures reaction time to sounds, we compared adults’ responses to vowel sounds produced by a baby and by an adult, as well as meows produced by a cat and by a kitten. We found that women, including mothers, tend to respond positively only to baby speech sounds. On the other hand, men, especially fathers, showed a more neutral reaction to all sounds. This suggests that the way we process human speech sounds, particularly those of infants, may vary significantly between genders. While previous studies report that both men and women generally show a positive response to baby faces, our findings indicate that their speech sounds might affect us differently.

Moreover, mothers rated babies and their sounds highly, expressing a strong liking for babies, their cuteness, and the cuteness of their sounds. Fathers, although less responsive in the reaction task, still gave high ratings for their liking of babies, the babies’ cuteness, and the appeal of their sounds. This contrast between implicit (subconscious) reactions and explicit (conscious) opinions highlights an interesting complexity in parental instincts and perceptions. Implicit measures, such as those used in our study, tap into automatic and unconscious responses that individuals might not be fully aware of or may not express when asked directly. These methods offer a more direct window into underlying feelings that might be obscured by social expectations or personal biases.

This research builds on earlier studies conducted in our lab, where we found that infants prefer to listen to the vocalizations of other infants, a factor that might be important for their development. We wanted to see if adults, especially parents, show similar patterns because their reactions may also play a role in how they interact with and nurture children. Since adults are the primary caregivers, understanding these natural inclinations could be key to supporting children’s development more effectively.

The implications of this study are not just academic; they touch on everyday experiences of families and can influence how we think about communication within families. Understanding these differences is a step towards appreciating the diverse ways people connect with and respond to the youngest members of our society.

Babies lead the way – a discovery with infants brings new insights to vowel perception

Linda Polka –

School of Communication Sciences & Disorders, McGill University SCSD, 2001 McGill College Avenue, Montreal, Quebec, H3A 1G1, Canada

Matthew Masapollo, PhD
Motor Neuroscience Laboratory
Department of Psychology
McGill University

Popular version of 2aSC7 – What babies bring to our understanding of vowel perception
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

From the early months of life, infants perceive and produce vowel sounds, which occupy a central role in speech communication across the lifespan. Infant research has typically focused on understanding how their vowel perception and production skills mature into an adult-like form. But infants, being genuine and notoriously unpredictable, often give us new insights that go beyond our study goals. In our lab, several findings initially discovered in infants are now directing novel research with adults. One such discovery is the focal vowel bias, a perceptual pattern we observed when we tested infants on their ability to discriminate two vowel sounds. For example, when testing infants (~4-12 months) to see if they could discriminate two vowel sounds such as “eh” (as in bed) and “ae” (as in bad), infants showed very good performance in detecting the change from “eh” to “ae”, but very poor performance when the direction of change was reversed (detecting the change from “ae” to “eh”). Initially, these directional differences were puzzling because the two sounds in each pair were identical; only the order of presentation differed. However, we soon realized that we could predict this pattern by considering the degree of articulatory movement required to produce each sound. Articulatory movement describes how fast and how far we have to move our tongue, lips, or jaw to produce a speech sound. We noticed that infants find it easier to discriminate vowels when the vowel that involves the most articulatory movement is presented second rather than first. In essence, this pattern shows us that vowels produced with more extreme articulatory movements are also more perceptually salient. Our scientific name for this pattern, the focal vowel bias, is a shorthand way to describe the acoustic signatures of vowels produced with larger articulatory movements.

These infant findings led us to explore the focal vowel bias in adults. We ran experiments using the “oo” vowels in English and French, which are slightly different sounds. Compared to English “oo”, French “oo” involves more articulatory movement due to enhanced lip rounding. Using these vowel sounds (produced by a bilingual speaker), we found that adults showed the pattern we observed in infants. They discriminated a change from English “oo” to French “oo” more easily than the reverse direction, consistent with the focal vowel bias. Adults did this regardless of whether they spoke English or French, showing that the focal vowel bias is not related to language experience. We then ran many experiments using different versions of the French and English “oo” vowels, including natural and synthesized vowels, visual vowel signals (just a moving face with no sound), and animated dots and shapes that follow the lip movements of each vowel sound. We found that adults displayed the focal vowel bias for both visual and auditory vowel signals. Adults also showed the bias when tested with simple visual animations that retained the global shape, orientation, and dynamic movements of a mouth, even though subjects failed to perceive these animations as a mouth. No bias was found when movement and mouth orientation were disrupted (static images or animations rotated sideways). These findings show us that the focal vowel bias is related to how we process speech movements across different sensory modalities.

These adult findings highlight our exquisite sensitivity to articulatory movement and suggest that the information we attend to in speech is multimodal and closely tied to how speech is produced. We now resume our infant research focused on a new question – as young infants begin learning to produce speech, do their speech movements also critically contribute to this perceptual bias and help them form vowel categories? We are eager to see where the next round of infant research will take us.

Vowel Adjustments: The Key to High-Pitched Singing

May Pik Yu Chan –

University of Pennsylvania, 3401-C Walnut Street, Suite 300, C Wing, Philadelphia, PA, 19104, United States

Jianjing Kuang

Popular version of 4aMU6 – Ultrasound tongue imaging of vowel spaces across pitches in singing
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Singing isn’t just for the stage – everyone enjoys finding their voices in songs, regardless of whether they are performing in an auditorium or merely humming in the shower. Singing well is more than just hitting the right notes; it’s also about using your voice effectively as an instrument. One technique that professional opera singers master is changing how they pronounce their vowels based on the pitch they are singing. But why do singers change their vowels? Is it only to sound more beautiful, or is it necessary to hit these higher notes?

We explore this question by studying what non-professional singers do – if changing vowels is necessary to reach higher notes, then non-professional singers should do it too. The participants were asked to sing various English vowels across their pitch range, much like a vocal warm-up exercise. These vowels included [i] (like “beat”), [ɛ] (like “bet”), [æ] (like “bat”), [ɑ] (like “bot”), and [u] (like “boot”). Since vowels are made with different tongue gestures, we used ultrasound imaging to capture images of the participants’ tongue positions as they sang. This allowed us to see how the tongue moved across different pitches and vowels.

We found that participants who managed to sing more pitches did indeed adjust their tongue shapes when reaching high notes. Even when we isolated the participants who said they had never sung in a choir or a cappella group, the trend still held: those able to sing at higher pitches adjusted their vowels at those pitches. In contrast, participants who could not sing a wide pitch range generally did not change their vowels based on pitch.

We then compared this to pilot data from an operatic soprano, who showed gradual adjustments in tongue positions across her whole pitch range, effectively neutralising the differences between vowels at her highest pitches. In other words, all the vowels at her highest pitches sounded very similar to each other.

Overall, these findings suggest that changing our mouth shape and tongue position may be necessary when singing high pitches. The way singers modify their vowels could be an essential part of achieving a well-balanced, efficient voice, especially for hitting high notes. By better understanding how vowels and pitch interact, this research opens the door to further studies on how singers use their vocal instruments and what the keys to effective voice production are. Together, this research offers insights not only into our appreciation for the art of singing, but also into the complex mechanisms of human vocal production.


Video 1: Example of sung vowels at relatively lower pitches.
Video 2: Example of sung vowels at relatively higher pitches.

Why is it easier to understand people we know?

Emma Holmes –
X (Twitter): @Emma_Holmes_90

University College London (UCL), Department of Speech Hearing and Phonetic Sciences, London, Greater London, WC1N 1PF, United Kingdom

Popular version of 4aPP4 – How does voice familiarity affect speech intelligibility?
Presented at the 186th ASA Meeting
Read the abstract at

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

It’s much easier to understand what others are saying if you’re listening to a close friend or family member, compared to a stranger. If you practice listening to the voices of people you’ve never met before, you might become better at understanding them too.

Many people struggle to understand what others are saying in noisy restaurants or cafés. This can become much more challenging as people get older. It’s often one of the first changes that people notice in their hearing. Yet, research shows that these situations are much easier if people are listening to someone they know very well.

In our research, we ask people to visit the lab with a friend or partner. We record their voices while they read sentences aloud. We then invite the volunteers back for a listening test. During the test, they hear sentences and click words on a screen to show what they heard. This is made more difficult by playing a second sentence at the same time, which the volunteers are told to ignore. This is like having a conversation when there are other people talking around you. Our volunteers listen to many sentences over the course of the experiment. Sometimes, the sentence is one recorded from their friend or partner. Other times, it’s one recorded from someone they’ve never met. Our studies have shown that people are best at understanding the sentences spoken by their friend or partner.

In one study, we manipulated the sentence recordings to change the sound of the voices. The voices still sounded natural, yet volunteers could no longer recognize them as their friend or partner. We found that participants were still better at understanding these sentences, even though they didn’t recognize the voice.

In other studies, we’ve investigated how people learn to become familiar with new voices. Each volunteer learns the names of three new people. They’ve never met these people, but we play them lots of recordings of their voices. This is like when you listen to a new podcast or radio show. We’ve found that listeners become very good at understanding these new voices. In other words, we can train people to become familiar with new voices.

In new work that hasn’t yet been published, we found that voice familiarization training benefits both older and younger people. So, it may help older people who find it very difficult to listen in noisy places. Many environments contain background noise—from office parties to hospitals and train stations. Ultimately, we hope that we can familiarize people with voices they hear in their daily lives, to make it easier to listen in noisy places.