4aPPa24 – Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task

Takahiro Tamesue – tamesue@yamaguchi-u.ac.jp
Yamaguchi University
1677-1 Yoshida, Yamaguchi
Yamaguchi Prefecture 753-8511
Japan

Popular version of poster 4aPPa24, “Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task”
Presented Thursday morning, December 1, 2016
172nd ASA Meeting, Honolulu

Open offices that make effective use of limited space and encourage dialogue, interaction, and collaboration among employees are becoming increasingly common. However, productive work-related conversation might actually decrease the performance of other employees within earshot. When people carry out intellectual activities involving memory or arithmetic tasks, noise commonly increases the psychological impression of “annoyance” and leads to a decline in performance, and this effect is more apparent for meaningful noise, such as conversation, than for random, meaningless noise. In this study, we investigated the impact of meaningless and meaningful noise on selective attention and cognitive performance in volunteers, as well as the degree of subjective annoyance those noises caused, through physiological and psychological experiments.

The experiments were based on the so-called “odd-ball” paradigm, a test used to examine selective attention and information-processing ability. In the odd-ball paradigm, subjects detect and count rare target events embedded in a series of repetitive events, so completing the task requires sustained attention to the stimuli. In the auditory odd-ball task, subjects counted how many times the infrequent target sound occurred under meaningless or meaningful noise over a 10-minute period. The infrequent sound, appearing 20% of the time, was a 2 kHz tone burst; the frequent sound was a 1 kHz tone burst. In a visual odd-ball test, subjects watched pictures flashing on a PC monitor while meaningless or meaningful sounds were played to both ears through headphones. The infrequent image was a 10 x 10 centimeter red square; the frequent image was a green square. At the end of each trial, the subjects also rated their level of annoyance at each sound on a seven-point scale.
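
To make the auditory odd-ball paradigm concrete, here is a minimal Python sketch that generates a trial sequence in which a 2 kHz target tone burst occurs on roughly 20% of trials among 1 kHz standards, which is what subjects were asked to count. The trial count, burst duration, and ramp are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def oddball_sequence(n_trials=300, target_prob=0.2, seed=0):
    """Odd-ball trial sequence: 1 = infrequent 2 kHz target, 0 = frequent 1 kHz standard."""
    rng = np.random.default_rng(seed)
    return (rng.random(n_trials) < target_prob).astype(int)

def tone_burst(freq_hz, dur_s=0.05, fs=44100):
    """A simple tone burst with a 5 ms raised-cosine ramp to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * fs)
    env = np.ones_like(tone)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return tone * env

trials = oddball_sequence()
stimuli = [tone_burst(2000 if is_target else 1000) for is_target in trials]
print(f"targets to be counted: {trials.sum()} of {len(trials)} trials")
```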

During the experiments, the subjects’ brain waves were measured through electrodes placed on their scalps. In particular, we looked at “event-related potentials,” very small voltages generated in brain structures in response to specific events or stimuli, which appear as characteristic deflections in the electroencephalogram. Figure 1 shows example waveforms of event-related potentials, after appropriate averaging, under no external noise. The N100 component peaks negatively about 100 milliseconds after the stimulus, and the P300 component peaks positively around 300 milliseconds after the stimulus; both are related to selective attention and working memory. Figures 2 and 3 show the event-related potentials for the infrequent sound under meaningless and meaningful noise. The N100 and P300 components are smaller in amplitude and longer in latency under meaningful noise than under meaningless noise.
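
The “appropriate averaging” mentioned above is, in essence, time-locked epoch averaging: EEG segments following each stimulus are extracted and averaged so that activity unrelated to the stimulus tends to cancel, leaving the event-related potential. Below is a minimal sketch of that idea, assuming a single continuous EEG channel and known stimulus-onset samples; the window lengths and preprocessing are illustrative, not those of the study.

```python
import numpy as np

def average_erp(eeg, onsets, fs, pre_s=0.1, post_s=0.5):
    """Average stimulus-locked EEG epochs.

    eeg    : 1-D array, one EEG channel in microvolts
    onsets : sample indices of stimulus onsets
    fs     : sampling rate in Hz
    Returns (times_ms, erp), baseline-corrected to the pre-stimulus interval.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > len(eeg):
            continue                      # skip epochs that run off the recording
        epoch = eeg[onset - pre:onset + post].astype(float)
        epoch -= epoch[:pre].mean()       # baseline-correct using the pre-stimulus interval
        epochs.append(epoch)
    erp = np.mean(epochs, axis=0)         # averaging suppresses activity not time-locked to the stimulus
    times_ms = (np.arange(-pre, post) / fs) * 1000.0
    return times_ms, erp

# The N100 is the negative deflection near 100 ms, the P300 the positive one near 300 ms.
```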

Figure 1. Averaged waveforms of event-related potentials evoked by the infrequent sound under no external noise.
Figure 2. Averaged waveforms of event-related potentials evoked by the infrequent sound under meaningless noise.
Figure 3. Averaged waveforms of auditory evoked event-related potentials under meaningful noise.

We employed a statistical method called principal component analysis to identify the latent components of the event-related potentials. Four principal components were extracted; their loadings are shown in Figure 4. Because the component scores under meaningful noise were smaller than under the other noise conditions, meaningful noise appears to reduce these components of the event-related potentials. Thus, selective attention to cognitive tasks was influenced by the degree of meaningfulness of the noise.
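
As a rough illustration of how principal component analysis can pull latent components out of a set of ERP waveforms, the sketch below applies PCA to a matrix with one averaged waveform per row. The placeholder data, matrix orientation, and preprocessing are assumptions for illustration only; the study itself extracted four components.

```python
import numpy as np
from sklearn.decomposition import PCA

# erps: one averaged ERP waveform per subject/condition, shape (n_waveforms, n_timepoints)
rng = np.random.default_rng(0)
erps = rng.normal(size=(24, 300))          # placeholder data; real input would be measured ERPs

pca = PCA(n_components=4)                  # four latent components, as in the study
scores = pca.fit_transform(erps)           # component scores, compared across noise conditions
loadings = pca.components_                 # time courses of the latent components (cf. Figure 4)
print(pca.explained_variance_ratio_)
```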

Figure 4. Loadings of the principal component analysis.
Figure 5. Subjective experience of annoyance (auditory odd-ball paradigms).

Figure 5 shows the annoyance ratings from the auditory odd-ball paradigms. The subjective experience of annoyance increased with the meaningfulness of the noise. Overall, whether the noise was meaningless or meaningful had a strong influence not only on selective attention to auditory stimuli during cognitive tasks, but also on the subjective experience of annoyance.

This means that when designing sound environments in spaces used for cognitive tasks, such as workplaces or schools, it is appropriate to consider not only the sound level but also the meaningfulness of the noise that is likely to be present. Surrounding conversations often disturb the work conducted in open offices, and because it is difficult to soundproof an open office, a way to mask meaningful speech with some other sound would be of great benefit in achieving a comfortable sound environment.

1aPP44 – What’s That Noise? The Effect of Hearing Loss and Tinnitus on Soldiers Using Military Headsets

Candice Manning, AuD, PhD – Candice.Manning@va.gov
Timothy Mermagen, BS – timothy.j.mermagen.civ@mail.mil
Angelique Scharine, PhD – angelique.s.scharine.civ@mail.mil

Human and Intelligent Agent Integration Branch (HIAI)
Human Research and Engineering Directorate
U.S. Army Research Laboratory
Building 520
Aberdeen Proving Ground, MD

Lay language paper 1aPP44, “Speech recognition performance of listeners with normal hearing, sensorineural hearing loss, and sensorineural hearing loss and bothersome tinnitus when using air and bone conduction communication headsets”
Presented Monday Morning, May 23, 2016, 8:00 – 12:00, Salon E/F
171st ASA Meeting, Salt Lake City

Military personnel are at high risk for noise-induced hearing loss due to the unprecedented proportion of blast-related acoustic trauma experienced during deployment, along with other high-level impulsive and continuous noise (e.g., transportation vehicles, weaponry). In fact, noise-induced hearing loss is the primary injury of United States Soldiers returning from Afghanistan and Iraq. Ear injuries, including tympanic membrane perforation, hearing loss, and tinnitus, greatly affect a Soldier’s hearing acuity and, as a result, reduce situational awareness and readiness. Hearing protection devices are available to military personnel; however, many troops forgo their use, believing that protection may reduce their awareness and responsiveness during combat.

Noise-induced hearing loss is strongly associated with tinnitus, the experience of perceiving sound that is not produced by a source outside the body. Chronic tinnitus causes functional impairment that may lead a sufferer to seek help from an audiologist or other healthcare professional. Intervention and management are the only options for individuals with chronic tinnitus, as there is no cure for the condition. Tinnitus affects nearly every aspect of an individual’s life, including sleep, daily tasks, relaxation, and conversation, to name only a few. In 2011, a United States Government Accountability Office report on noise indicated that tinnitus was the most prevalent service-connected disability. The combination of noise-induced hearing loss and the perception of tinnitus could greatly impact a Soldier’s ability to rapidly and accurately process speech information in high-stress situations.

The prevalence of hearing loss and tinnitus within the military population suggests that Soldiers’ use of hearing protection is extremely important. Integrating hearing protection into reliable communication devices will increase the probability of use among Soldiers. Military communication devices using air or bone conduction provide clear two-way audio communications through a headset and a microphone.

Air-conduction headsets offer passive hearing protection from high ambient noise, and talk-through microphones allow the user to engage in face-to-face conversation and hear ambient environmental sounds, preserving situation awareness. Bone-conduction headsets transmit sound through the bones of the skull and therefore present auditory information differently than air-conduction devices (see Figure 1). Because headsets with bone-conduction transducers do not cover the ears, they allow the user to hear the surrounding environment while retaining the option to communicate over a radio network. Worn with or without hearing protection, bone-conduction devices are inconspicuous and fit easily under the helmet. Bone-conduction communication devices have been used in the past; however, even as newer devices have been designed, they have not been widely adopted for military applications.


Figure 1. Air- and bone-conduction headsets used in the study: a) Invisio X5 dual in-ear headset and X50 control unit; b) Aftershockz Sports 2 headset.

Since many military personnel operate in high-noise environments with some degree of noise-induced hearing damage and/or tinnitus, it is important to understand how speech recognition performance might be altered as a function of headset use. This question matters because two auditory pathways, air conduction and bone conduction, contribute to hearing. Comparing air- and bone-conduction devices across different hearing populations will help describe the effects not only of hearing loss, an extremely common disability within the military population, but also of tinnitus on situational awareness. Additionally, if there are differences between the two types of headsets, this information will help guide future communication-device selection for each population: normal hearing (NH), sensorineural hearing loss (SNHL), and SNHL with tinnitus.

Findings from the speech-understanding-in-noise literature indicate that communication devices have a negative effect on speech intelligibility within the military population when noise is present. However, it is uncertain how hearing loss and/or tinnitus affects speech intelligibility and situational awareness in high-level noise environments. This study measured recognition of words presented over air-conduction (AC) and bone-conduction (BC) headsets in three groups of listeners: normal hearing, sensorineural hearing loss, and sensorineural hearing loss with bothersome tinnitus. Three speech-to-noise ratios (SNR = 0, -6, and -12 dB) were created by embedding the speech items in pink noise. Overall, performance was marginally, but significantly, better for the Aftershockz bone-conduction headset (Figure 2). As would be expected, performance increased as the speech-to-noise ratio increased (Figure 3).
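
For readers unfamiliar with how such stimuli are built, the sketch below shows one common way to embed a speech item in pink noise at a target speech-to-noise ratio. The pink-noise generator and the RMS-based scaling are generic illustrations and are assumptions about the procedure, not a description of the study's exact stimulus preparation.

```python
import numpy as np

def pink_noise(n, seed=0):
    """Approximate pink (1/f) noise by spectrally shaping white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                              # avoid division by zero at DC
    spectrum /= np.sqrt(f)                   # 1/f power spectrum -> 1/sqrt(f) amplitude
    noise = np.fft.irfft(spectrum, n)
    return noise / np.std(noise)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the speech-to-noise ratio equals snr_db."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / noise_rms)

fs = 16000
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # stand-in for a recorded word
for snr in (0, -6, -12):                                 # the three SNRs used in the study
    mixed = mix_at_snr(speech, pink_noise(len(speech), seed=1), snr)
```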


Figure 2. Mean rationalized arcsine units measured for each of the TCAPS under test.


Figure 3. Mean rationalized arcsine units measured as a function of speech to noise ratio.

One of the most fascinating things about the data is that although the effect of hearing profile was statistically significant, it was not practically meaningful: the means for the normal hearing, hearing loss, and tinnitus groups were 65, 61, and 63 rationalized arcsine units, respectively (Figure 4). Nor did hearing profile interact with any of the other variables under test. One might conclude from the data that if the listener can control the presentation level, the speech-to-noise ratio has about the same effect regardless of hearing loss. There was no difference in performance with the TCAPS (Tactical Communication and Protective Systems) attributable to hearing profile; however, the Aftershockz headset provided better speech intelligibility for all listeners.


Figure 4. Mean rationalized arcsine units observed as a function of the hearing profile of the listener.
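
Scores in Figures 2 through 4 are reported in rationalized arcsine units (RAU), a transform that makes percent-correct scores more comparable across the range of the scale. As background, here is a minimal sketch of the commonly cited Studebaker (1985) rationalized arcsine transform; the exact scoring details of this study are not described above, so treat the word counts in the example as hypothetical.

```python
import math

def rationalized_arcsine_units(correct, total):
    """Studebaker's (1985) rationalized arcsine transform of a proportion-correct score."""
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))  # arcsine transform, in radians
    return (146.0 / math.pi) * theta - 23.0                        # linear rescaling to RAU

# Example: 30 of 50 words correct (60%) maps to roughly 59 RAU,
# close to the percentage score, as intended near mid-scale.
print(round(rationalized_arcsine_units(30, 50), 1))
```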

4aPP2 – Localizing Sound Sources when the Listener Moves: Vision Required

William A. Yost – william.yost@asu.edu, paper presenter
Xuan Zhong – xuan.zhong@asu.edu
Speech and Hearing Science
Arizona State University
P.O. Box 870102
Tempe, AZ 85287

Popular version of paper 4aPP2; related papers 1aPPa1, 1pPP7, 1pPP17, 3aPP4
Presented Monday morning, May 18, 2015
169th ASA Meeting, Pittsburgh

When an object (sound source) produces sound, that sound can be used to locate the spatial position of the sound source. Since sound has no physical attributes related to space and the auditory receptors do not respond according to where the sound comes from, the brain makes computations based on the sound’s interaction with the listener’s head. These computations provide information about sound source location. For instance, sound from a source opposite the right ear will reach that ear slightly before reaching the left ear since the source is closer to the right ear. This slight difference in arrival time produces an interaural (between the ears) time difference (ITD), which is computed in neural circuits in the auditory brainstem as one cue used for sound source localization (i.e., small ITDs indicate that the sound source is near the front and large ITDs that the sound source is off to one side).
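
As a back-of-the-envelope illustration of the ITD cue described above, a standard spherical-head approximation (Woodworth's model, which is not part of this study) can be used to estimate the arrival-time difference for a source at a given azimuth. The head radius and speed of sound below are nominal textbook values.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth spherical-head approximation of the interaural time difference."""
    theta = math.radians(azimuth_deg)     # 0 deg = straight ahead, 90 deg = opposite one ear
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:6.0f} microseconds")
# 0 deg gives 0 microseconds (source straight ahead); 90 deg gives roughly 650-700 microseconds,
# near the maximum ITD for a human-sized head.
```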

We are investigating sound source localization when the listener and/or the source move. Figure 1 shows the laboratory: an echo-reduced room with 36 loudspeakers mounted on a 5-foot-radius sphere and a computer-controlled chair that rotates listeners while sounds are presented from the loudspeakers. Conditions in which sounds and listeners move present a challenge for the auditory system in processing spatial cues for sound source localization. When either the listener or the source moves, the ITDs change; so when the listener moves, the changing ITD signals that the source moved even if it didn’t. To prevent this kind of confusion about the location of sound sources, the brain needs another piece of information. We have shown that, in addition to computing auditory spatial cues like the ITD, the brain also needs information about the location of the listener. Our experiments indicate that without both types of information, major errors occur in locating sound sources; when vision provides information about the listener’s location, sound source localization is accurate. Thus, sound source localization requires not only auditory spatial cues such as the ITD, but also information, provided by systems like vision, about the listener’s spatial location. This has been an underappreciated aspect of sound source localization. Additional research will be needed to more fully understand how these two forms of essential information are combined and used to locate sound sources. Improving sound source localization accuracy when listeners and/or sources move has many practical applications, ranging from aiding people with hearing impairment to improving robots’ abilities to use sound to locate objects (e.g., a person in a fire). [The research was supported by an Air Force Office of Scientific Research (AFOSR) grant.]


Figure 1. The Spatial Hearing Laboratory at ASU, with sound-absorbing materials on all walls, ceiling, and floor; 36 loudspeakers on a 5-foot-radius sphere; and a computer-controlled rotating chair.

2aPP6 – Emergence of Spoken Language in Deaf Children Receiving a Cochlear Implant

Ann E. Geers
Popular version of paper 2aPP6, “Language emergence in early-implanted children”
Presented at the 169th Meeting of the Acoustical Society of America
May 2015

Before the advent of cochlear implants (CIs), children who were born profoundly deaf acquired spoken language and literacy skills with great difficulty and only over many years of intensive education. Even with the most powerful hearing aids and early intervention, children learned spoken language at about half the normal rate and fell further behind in language and reading with increasing age. At that time, many deaf children learned to communicate through sign language, though more than 90% of them had parents with normal hearing who did not know how to sign when their deaf child was born.

Following FDA approval in the 1990s, many deaf children began receiving a CI (in one ear) at some point after their second birthday. Dramatic improvements were seen compared to hearing aid users in the ability to hear and produce clear speech, understand spoken language, and acquire literacy skills. However, many children with CIs still did not reach levels within the range of their age mates with normal hearing in these areas. Over the next two decades, with universal newborn hearing screening mandatory in most states, implantation occurred at younger ages (typically 12-18 months) and CI technology offered improved access to speech, especially soft sounds. As performance continued to improve for children receiving one CI, receiving a second CI to optimize hearing in both ears came to be considered.

This study followed 60 children who were implanted between 12 and 38 months of age, assessing them at ages 3, 4, and 10 years. All of them were in preschool programs focused on developing spoken language skills and had no disabilities other than hearing impairment. By age 10, 95% of them were enrolled in regular education settings with hearing age mates.

Three groups, roughly equal in size, were identified from standardized language tests administered at 4 and 10 years of age. 1) Normal Language Emergence – these children exhibited spoken language skills within the normal range by age 4 and continued along this normal course into their elementary school years. They developed above-average reading comprehension. 2) Late Language Emergence – these children were language-delayed in preschool, but caught up by the time they were 10. They developed average reading comprehension for their age. 3) Persistent Language Delay – these children were also language-delayed in preschool, but they did not catch up with hearing age-mates by age 10. They were below-average readers.

Achieving age-appropriate language and reading skills by mid-elementary grades is a remarkable accomplishment for children with profound hearing loss, and the fact that two-thirds of the sample reached or exceeded this level attests to the efficacy of early cochlear implantation. Children with normal language emergence were most likely to have received a CI very young, between 12 and 18 months of age. However, age at first CI did not differentiate children with late language emergence from those with persistent delay, nor did these groups differ in nonverbal intelligence, mother’s education, bilateral implantation, age at first intervention, or age enrolled in regular education classrooms. As a result, predicting during preschool whether or not a child will catch up with hearing children in the same grade is difficult. We looked for factors distinguishing language-delayed preschoolers who would reach age-appropriate language levels by mid-elementary grades from those who would remain delayed. Early prediction is important for intensifying and individualizing early intervention for children at risk for long-term delay.

Results from a battery of tests and questionnaires revealed a constellation of factors distinguishing children with persistent from those with resolving language delay. Most of these factors were associated with the quality of the audio input provided by the device. For example, odds were 3-4 times greater that children who caught up used more recent CI technology than those who remained delayed. Children who caught up in language had a particular advantage in their ability to detect and understand speech presented at soft levels. This is understandable, because incidental or casual language acquisition depends on the ability to overhear soft speech in addition to speech at normal-conversation levels. In addition, a smaller repertoire of speech sounds, lower vocabulary and poorer grammar skills were evident in the conversational language of persistently delayed children as early as 3 years of age with smaller language gains between 3 and 4 years, foreshadowing slower long-term speech and language development. A somewhat surprising finding was that a much larger percentage (47%) of persistently delayed children had left-ear CIs as compared with those who caught up (14%).

These results have important implications for surgeons, speech-language pathologists, educators, and audiologists serving young children with cochlear implants. For the surgeon, right-ear placement of the first CI should be preferred over the left unless cochlear anatomy precludes placement in the right ear. This, along with implantation by 18 months, may help maximize the chances of age-appropriate spoken language development. For the speech-language pathologist, the extent of immature speech production and language use during the preschool years may foreshadow later language difficulties. For the audiologist, encouraging upgraded speech-processor technology and working to ensure the audibility of soft speech when programming the device may positively influence future language development. For the educator, recognition of risk factors for persistent language delay may signal the need for more intensive language intervention. Addressing these issues should increase the likelihood that children with CIs will exhibit spoken communication and academic skills in line with expectations for their grade placement.

Cardiovascular Effects of Noise on Man

Wolfgang Babisch – wolfgang.babisch@t-online.de
Himbeersteig 37
14129 Berlin, Germany

Presented Tuesday afternoon, May 19, 2015
169th ASA Meeting, Pittsburgh

Sound penetrates our lives everywhere. It is an essential component of our social life: we need it for communication, orientation, and as a warning signal. The auditory system continuously analyzes acoustic information, including unwanted and disturbing sound, which is filtered and interpreted by different cortical (conscious perception and processing) and sub-cortical (non-conscious perception and processing) brain structures. The terms “sound” and “noise” are often used synonymously, but sound becomes noise when it causes adverse health effects such as annoyance, sleep disturbance, cognitive impairment, and mental or physiological disorders, including hearing loss and cardiovascular disorders. The evidence is increasing that ambient noise levels below hearing-damaging intensities are associated with the occurrence of metabolic disorders (type 2 diabetes), high blood pressure (hypertension), coronary heart disease (including myocardial infarction), and stroke. Environmental noise from transportation sources, including road, rail, and air traffic, is increasingly recognized as a significant public health issue.

Systematic research on the non-auditory physiological effects of noise has been carried out for a long time, starting in the post-war period of the last century. The reasoning that long-term exposure to environmental noise causes cardiovascular health effects is based on the following experimental and empirical findings:

  • Short-term laboratory studies in humans have shown that exposure to noise affects the autonomic nervous system and the endocrine system. Heart rate, blood pressure, cardiac output, blood flow in peripheral blood vessels, and stress hormones (including epinephrine, norepinephrine, and cortisol) are affected. At moderate environmental noise levels, such acute reactions are found particularly when the noise interferes with activities of the individual (e.g., concentration, communication, relaxation).
  • Noise-induced instantaneous autonomic responses occur not only in waking hours but also in sleeping subjects, even when they report not being disturbed by the noise.
  • The responses do not adapt on a long-term basis: subjects who have lived for several years in a noisy environment still respond to acute noise stimuli.
  • The long-term effects of chronic noise exposure have been studied in animals at high noise levels, showing manifest vascular changes (thickening of vascular walls) and alterations in the heart muscle (increases in connective tissue) that indicate accelerated aging of the heart and a higher risk of cardiovascular mortality.
  • Long-term effects of chronic noise exposure in humans have been studied in workers exposed to high noise levels in the occupational environment, showing higher rates of hypertension and ischemic heart disease in exposed subjects compared with less-exposed subjects.

These findings make it plausible to deduce that similar long-term effects of chronic noise exposure may also occur at comparably moderate or low environmental noise levels. It is important to note that non-auditory noise effects do not follow the toxicological principle of dosage. This means that it is not simply the accumulated total sound energy that causes the adverse effects. Instead, the individual situation and the disturbed activity need to be taken into account (time-activity patterns). It may very well be that an average sound pressure level of 85 decibels (dB) at work causes less of an effect than 65 dB at home when carrying out mental tasks or relaxing after a stressful day, or than 50 dB when asleep. This makes a substantial difference compared to many other environmental exposures, such as air pollution, where the accumulated dose is the hazardous factor (“dealing with decibels is not like summing up micrograms as we do for chemical exposures”).
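
The parenthetical remark about decibels and micrograms can be made concrete: because the decibel scale is logarithmic, sound levels have to be combined on an energy basis rather than added arithmetically. The small sketch below is a generic acoustics calculation offered for illustration, not part of the study.

```python
import math

def combine_levels(levels_db):
    """Energetically combine sound pressure levels: convert dB to relative energy, sum, convert back."""
    total_energy = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_energy)

# Two equal 65 dB sources together give about 68 dB, not 130 dB,
print(round(combine_levels([65, 65]), 1))
# and a 50 dB source adds almost nothing to an 85 dB one.
print(round(combine_levels([85, 50]), 2))
```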

The general stress theory provides the rationale and biological model for the non-auditory physiological effects of noise on man. According to the general stress concept, repeated temporal changes in biological responses disturb the biorhythm and cause permanent dysregulation, resulting in physiological and metabolic imbalance and disturbed homeostasis of the organism, and leading to chronic diseases in the long run. In principle, a variety of body functions may be affected, including, for example, the cardiovascular, gastrointestinal, and immune systems. Noise research has focused on cardiovascular health outcomes because cardiovascular diseases have a high prevalence in the general population. Noise-induced cardiovascular effects may therefore be relevant for public health and provide a strong argument for noise abatement policies within the global context of adverse health effects due to community noise, including annoyance and sleep disturbance.

Figure 1 shows a simplified reaction scheme used in epidemiological noise research. It condenses the cause-effect chain: sound > disturbance > stress response > (biological) risk factors > disease. Noise affects the organism either directly, through interactions of the acoustic nerve with other regions of the central nervous system, or indirectly, through the emotional and cognitive perception of sound. The objective noise exposure (sound level) and the subjective noise exposure (annoyance) may both be predictors in the relationship between noise and health endpoints. The direct, non-conscious pathway may be predominant in sleeping subjects.

The body of epidemiological studies on the association between transportation noise (mainly road traffic and aircraft noise) and cardiovascular diseases (hypertension, coronary heart disease, stroke) has grown considerably in recent years. Most of the studies suggest a continuous increase in risk with increasing noise level. Exposure modifiers such as many years of residence and the location of rooms (facing the street) have been associated with a stronger risk, supporting a causal interpretation of the findings. The question is no longer whether environmental noise causes cardiovascular disorders; the question is rather to what extent (the slope of the exposure-response curve) and from which threshold onward (the empirical onset, or reference level, of the exposure-response curve). Different noise sources differ in their characteristics with respect to the maximum noise level, the time course including the number of events, the rise time of a single event, the frequency spectrum, the tonality, and the informational content. In principle, different exposure-response curves must therefore be considered for different noise sources. This applies not only to noise annoyance, where aircraft noise is found to be more annoying than road traffic noise and railway noise at the same average noise level, but may, in principle, also be true for the physiological effects of noise.

So-called meta-analyses have been carried out that pool the results of relevant studies on the same associations to derive common exposure-response relationships that can be used for quantitative risk assessment. Figure 2 shows pooled exposure-response relationships for the associations between road traffic noise and hypertension (24 studies, weighted pooled reference level 50 dB), road traffic noise and coronary heart disease (14 studies, weighted pooled reference level 52 dB), aircraft noise and hypertension (5 studies, weighted pooled reference level 49 dB), and aircraft noise and coronary heart disease (3 studies, weighted pooled reference level 48 dB). Different noise indicators were converted to the 24-hour day (+0 dB), evening (+5 dB), night (+10 dB) weighted annual A-weighted equivalent continuous sound pressure level Lden, which is commonly used for noise mapping in Europe and elsewhere and refers to the most exposed façade of the building. The curves suggest increases in risk (hypertension, coronary heart disease) of between 5 and 10 percent per 10 dB increase of the noise indicator Lden, starting at noise levels around 50 dB. This corresponds to night noise levels (Lnight) roughly 10 dB lower, i.e., around 40 dB. According to the graphs, subjects who live in areas where the ambient average noise level Lden exceeds 65 dB run an approximately 15-25 percent higher risk of cardiovascular disease than subjects who live in comparably quiet areas. With respect to high blood pressure, the risk tends to be larger for aircraft noise than for road traffic noise, which may have to do with the fact that people do not have access to a quiet side of the dwelling when the noise comes from above. However, the number of aircraft noise studies is much smaller than the number of road traffic noise studies, and more research is needed in this field. Nevertheless, the available data provide information for taking action.
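
The Lden indicator mentioned above combines the day, evening, and night periods with the +0, +5, and +10 dB weightings on an energy basis. The sketch below shows this standard calculation, assuming the usual 12/4/8-hour split of the 24-hour day; the exact period definitions can differ between countries, and the input levels in the example are hypothetical.

```python
import math

def lden(l_day, l_evening, l_night,
         hours_day=12, hours_evening=4, hours_night=8):
    """Day-evening-night level: energy-average the three periods with +0/+5/+10 dB penalties."""
    weighted = (hours_day * 10 ** (l_day / 10)
                + hours_evening * 10 ** ((l_evening + 5) / 10)
                + hours_night * 10 ** ((l_night + 10) / 10))
    return 10 * math.log10(weighted / 24)

# Example: 60 dB by day, 55 dB in the evening, and 50 dB at night give an Lden of about 60 dB.
print(round(lden(60, 55, 50), 1))
```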

The decision about critical noise levels and “accepted” public health risks within a social and economic context is not a scientific one but a political one. Expert groups had concluded that average A-weighted road traffic noise levels at the façades of houses exceeding 65 dB during the daytime and 55 dB during the night were to be considered detrimental to health. Newer studies that were able to assess noise levels in more detail at the lower end of the exposure range (e.g., including secondary roads) tended to find lower thresholds for the onset of the increase in risk than earlier studies, in which noise data were not available area-wide (e.g., only for the primary road network). Based on current knowledge of the cardiovascular health effects of environmental noise, it seems justified to refine the recommendations towards lower critical noise levels, particularly with respect to exposure during the night. Sleep is an important modulator of cardiovascular function, and some studies showed stronger associations of cardiovascular outcomes with exposure during the night than with exposure during the day. Noise-disturbed sleep must therefore be considered a particular potential pathway for the development of cardiovascular disorders.

The WHO (World Health Organization) Regional Office for Europe is currently developing a new set of guidelines (“WHO Environmental Noise Guidelines for the European Region”) to provide suitable scientific evidence and recommendations for policy makers of the Member States in the European Region. The activity can be viewed as an initiative to update the WHO Community Noise Guidelines from 1999 where cardiovascular effects of environmental noise were not explicitly considered in the recommendations. This may change in the new version of the document.


Figure 1. Noise reaction model according to Babisch (2014) [Babisch, W. (2014). Updated exposure-response relationship between road traffic noise and coronary heart diseases: A meta-analysis. Noise Health 16 (68): 1-9.]

Figure 2. Exposure-response relationships of the associations between transportation noise and cardiovascular health outcomes. Data taken from:

  • Babisch, W. and I. van Kamp (2009). Exposure-response relationship of the association between aircraft noise and the risk of hypertension. Noise Health 11 (44): 149-156.
  • van Kempen, E. and W. Babisch (2012). The quantitative relationship between road traffic noise and hypertension: a meta-analysis. Journal of Hypertension 30(6): 1075-1086.
  • Babisch, W. (2014). Updated exposure-response relationship between road traffic noise and coronary heart diseases: A meta-analysis. Noise Health 16 (68): 1-9.
  • Vienneau, D., C. Schindler, et al. (2015). The relationship between transportation noise exposure and ischemic heart disease: A meta-analysis. Environmental Research 138: 372-380.

Note: Study-specific reference values were pooled after conversion to Lden using the derived meta-analysis weights of each study (according to Vienneau et al. (2015)).

Abbreviations: Road = road traffic noise, Air = aircraft noise, Hyp = hypertension, CHD = coronary heart disease

4aSCb16 – How your genes may help you learn another language

Han-Gyol Yi – gyol@utexas.edu
W. Todd Maddox – maddox@psy.utexas.edu
The University of Texas at Austin
2504A Whitis Ave. (A1100)
Austin, TX 78712

Valerie S. Knopik – valerie_knopik@brown.edu
Rhode Island Hospital
593 Eddy Street
Providence, RI 02903

John E. McGeary – john_mcgeary@brown.edu
Providence Veterans Affairs Medical Center
830 Chalkstone Avenue
Providence, RI 02908

Bharath Chandrasekaran – bchandra@utexas.edu
The University of Texas at Austin
2504A Whitis Ave. (A1100)
Austin, TX 78712

Popular version of paper 4aSCb16
Presented Thursday morning, October 30, 2014
168th ASA Meeting, Indianapolis

For many decades, speech scientists have marveled at the complexity of speech sounds. In English, a relatively simple task of distinguishing “bat” from “pat” can involve as many as 16 different sound cues. Also, English vowels are pronounced so differently across speakers that one person’s “Dan” can sound like another’s “done”. Despite all this, most adult native English speakers are able to understand English speech sounds rapidly, effortlessly, and accurately. In contrast, learning a new language is not an easy task, partly because the characteristics of foreign speech sounds are unfamiliar to us. For instance, Mandarin Chinese is a tonal language, which means that the pitch pattern used to produce each syllable can change the meaning of the word. Therefore, the word “ma” can mean “mother”, “hemp”, “horse”, or “to scold,” depending on whether the word was produced with a flat, rising, dipping, or a falling pitch pattern. It is no surprise that many native English speakers struggle in learning Mandarin Chinese. At the same time, some seem to master these new speech sounds with relative ease. With our research, we seek to discover the neural and genetic bases of this individual variability in language learning success. In this paper, we focus on genes that affect activity in two distinct neural regions: the prefrontal cortex and the striatum.

Recent advances in speech science research strongly suggest that for adults, learning speech sounds for the first time is a cognitively challenging task. What this means is that every time you hear a new speech sound, a region of your brain called the prefrontal cortex – the part of the cerebral cortex that sits right under your forehead – must do extra work to extract relevant sound patterns and parse them according to learned rules. Such activity in the prefrontal cortex is driven by dopamine, which is one of the many chemicals that the cells in your brain use to communicate with each other. In general, higher dopamine activity in the prefrontal cortex means better performance in complex and difficult tasks.

Interestingly, there is a well-studied gene called COMT that affects the dopamine activity level in the prefrontal cortex. Everybody has a COMT gene, although with different subtypes. Individuals with a subtype of the COMT gene that promotes dopamine activity perform hard tasks better than do those with other subtypes. In our study, we found that the native English speakers with the dopamine-promoting subtype of the COMT gene (40 out of 169 participants) learned Mandarin Chinese speech sounds better than those with different subtypes. This means that, by assessing your COMT gene profile, you might be able to predict how well you will learn a new language.

However, this is only half the story. While new learners may initially use their prefrontal cortex to discern foreign speech sound contrasts, expert learners are less likely to do so. As with any other skill, speech perception becomes more rapid, effortless, and accurate with practice. At this stage, your brain can bypass all that burdensome cognitive reasoning in the prefrontal cortex. Instead, it can use the striatum – a deep structure within the brain – to directly decode the speech sounds. We find that the striatum is more active for expert learners of new speech sounds. Furthermore, individuals with a subtype of a gene called FOXP2 that promotes flexibility of the striatum to new experiences (31 out of 204 participants) were found to learn Mandarin Chinese speech sounds better than those with other subtypes.

Our research suggests that learning speech sounds in a foreign language involves multiple neural regions, and that genetic variations which affect the activity within those regions lead to better or worse learning. In other words, your genetic framework may be contributing to how well you learn to understand a new language. What we do not know at this point is how these variables interact with other sources of variability, such as prior experience. Previous studies have shown that extensive musical training, for example, can enhance learning speech sounds of a foreign language. We are a long way from cracking the code of how the brain, a highly complex organ, functions. We hope that a neurocognitive genetic approach may help bridge the gap between biology and language.