It’s Not What You Said, It’s How You Said It: How Prosody Affects Reaction Time
Aleah D. Combs – firstname.lastname@example.org
Emma-Kate Calvert – email@example.com
Dr. Kevin B. McGowan – firstname.lastname@example.org
University of Kentucky
120 Patterson Drive
Lexington, KY 40506
Popular version of paper 4pSC10
Presented Thursday, May 16, 2019
177th ASA Meeting, Louisville, KY
In order to speak to someone, a great many things must occur in a very small amount of time in your brain.
In no particular order: your articulators (your mouth, tongue, lips, and anything else that changes position when you speak) must be prepared to move quickly and in a coordinated fashion to make a set of required sounds. You must listen to the background noise so you can produce speech at an appropriate volume. You must be prepared with the sounds that both parties have agreed mean whatever concepts you’re trying to convey, in an order that makes sense to both parties. You must also prepare for a number of plausible responses, decide what to do if the other person (from here on out, your interlocutor) does not hear or understand your utterance, and plan what you say based on what you think your interlocutor already knows. Finally, you must decide how to present the information: how do you want your interlocutor to feel? Do you want to claim authority on the subject, or express uncertainty?
None of this is a surprise to anyone with receptive language skills advanced enough to comprehend the above paragraph. Nonetheless, it provides useful context for explaining one of the key fields of inquiry in psycholinguistics: process ordering. That is, what order do all of these processes happen in? Do they overlap? Do they interact? These questions are often explored in reaction time studies, in which a participant is presented with a set of stimuli and asked to react to them, usually by pressing a button or clicking a mouse.
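The scoring logic of a reaction time trial is simple: subtract when the stimulus occurred from when the participant responded. Here is a minimal, hypothetical sketch of that bookkeeping in Python; the names (`Trial`, `reaction_time`) and the timestamps are illustrative, not taken from the study itself.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    condition: str        # tone of voice, e.g. "angry", "happy", "neutral"
    stimulus_time: float  # when the spoken command occurred, in ms
    response_time: float  # when the button was pressed, in ms

def reaction_time(trial: Trial) -> float:
    """Reaction time = response time minus stimulus time."""
    return trial.response_time - trial.stimulus_time

# A single made-up trial: the command played at 1000 ms,
# and the participant clicked at 1925 ms.
trial = Trial("neutral", stimulus_time=1000.0, response_time=1925.0)
print(reaction_time(trial))  # prints 925.0 (ms)
```

In a real experiment, presentation software records these timestamps automatically for every trial, and the per-condition averages come from many such subtractions.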
In this study, we were interested in the interaction between imperative commands and the tone of voice they were presented in. Specifically, we were interested in whether changing the tone of voice of a command changed the reaction time to that command in a significant way.
Our setup was as follows:
These were the buttons that our participants could choose from.
There were 12 different types of stimuli. Each animal (bird, dog, goat, fish) had angry, happy, and neutral versions of the command “press the [animal] button”. These are some of the stimuli for the word goat.
Angry (characterized by lower overall pitch and rate of speech, hyperarticulation):
Happy (characterized by a raise in pitch variation and rate of speech):
Neutral (the control or baseline for pitch variation, rate of speech, and overall pitch):
These sound files were produced by a trained actor, who simulated the emotions using his training.
38 participants later, we had our answer.
Mean reaction time ([response time] − [stimulus time]) by condition:
- Angry: 925.3958 ms
- Happy: 902.5510 ms
- Neutral: 876.3297 ms
Holding neutral as the control, our angry commands produced significantly slower reaction times (p = 0.0251). This is consistent with the model of Sumner, Kim, King, and McGowan (2014), in which social and semantic information are processed interactively and simultaneously.
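The averaging step behind those condition means can be sketched in a few lines of Python. The per-trial numbers below are made up for illustration (the study's raw data are not reproduced here); only the overall pattern — angry slowest, neutral fastest — mirrors the reported means. A real analysis would also run a statistical test (e.g. a mixed-effects regression) rather than just comparing averages.

```python
from statistics import mean

# Hypothetical reaction times in ms, grouped by tone of voice.
rts = {
    "angry":   [930.0, 920.0, 926.0],
    "happy":   [905.0, 900.0, 903.0],
    "neutral": [880.0, 874.0, 875.0],
}

# Average the reaction times within each condition.
for condition, times in rts.items():
    print(condition, round(mean(times), 1), "ms")
```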