ASA Lay Language Papers
161st Acoustical Society of America Meeting

Your Words are Music to My Ears:  How a Musician’s Brain Represents Speech

McNeel Gordon Jantzen
Department of Psychology
Western Washington University
Bellingham, WA 98225, USA.

Popular version of paper 5aSC18
Presented Friday morning, May 27, 2011
161st Meeting of the Acoustical Society of America, Seattle, WA.

Understanding speech is a complex process requiring distributed neural systems that contribute across multiple stages of analysis. At the acoustic level of speech perception, auditory features such as the amplitude and pitch of sounds are processed. Encoding of acoustic information occurs earlier than the encoding of other features, such as how the sounds were produced (the phonetics) and how different speech sounds are organized and combined (the phonology). For the majority of people, language processing, including acoustic processing of speech sounds, is localized predominantly in the left hemisphere of the brain.  In contrast, processing of musical sounds is strongly lateralized to the right hemisphere.  Interestingly, musicians, who are adept at the production and perception of music, are also more sensitive to key acoustic features of speech such as onset timing and frequency, suggesting that musical training may enhance the processing of acoustic information in speech sounds.  A possible mechanism for this is that musicians are able to engage right hemisphere music processing networks for the traditionally left hemisphere task of speech perception.
In our study we sought to provide neural evidence that musicians process speech and music in a similar way.  We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged in the processing of speech sounds.  In contrast, we predicted that in non-musicians the processing of speech sounds would be localized to left hemisphere language areas.  We created simple speech sounds that could be distinguished by their acoustic properties alone.  The speech sounds were presented simultaneously to different ears, and subjects attended either to one of the speech sounds or to the sound in one ear.

Musicians and non-musicians performed similarly on the tasks.  However, the neural activity seen in the EEG results told a different story.  The brain's early response to the acoustic features was both faster and greater for musicians compared to non-musicians, suggesting that musical training enhances sensitivity to acoustic features.  Simply put, the musicians were picking up on acoustic details that the non-musicians were not.  In addition, musicians showed greater acoustic-related processing in the right hemisphere compared to non-musicians, who showed the traditional left hemisphere dominance.  Musicians were not just engaging language areas when processing the speech sounds, but areas associated with music as well.  This important finding supports our original hypothesis that musicians represent acoustic information related to speech more bilaterally, whereas this information is represented mainly in the left hemisphere in non-musicians.  The location of activation in the right hemisphere is consistent with a musical equivalent of the sound-based representation of speech commonly attributed to the analogous location in the left hemisphere.
In the left hemisphere, this sound-based processing draws on the mental lexicon; by analogy, our results suggest that the right hemisphere houses a "musical" lexicon that musicians employ when processing acoustic information in speech.

One of the most exciting practical implications of this research is the use of musical training to facilitate the neural reorganization of language in adults after stroke or other mild traumatic brain injuries.  Some clinicians have already begun to use singing as a way for people with Broca's aphasia to communicate more fluidly.  Another exciting and important implication of our work is that musical training has been found to enhance our ability to perceive emotive information in spoken language.  This may play a key role in helping people with autism communicate and understand emotion.  From a broader perspective, our work contributes to the growing view that language processing in the brain is not isolated from other neural processes; it's all about the connections... neural connections, that is.