Effects of noise for workers in the transportation industry

Marion Burgess m.burgess@adfa.edu.au
Brett Molesworth b.molesworth@unsw.edu.au

University of New South Wales, Australia

Popular version of paper
Presented June 28, 2017, in session 4aNSa, Measuring, Modeling, and Managing Transportation Noise I. 8:00 AM – 12:20 PM
173rd ASA Meeting, Boston

There are well-established limits for workplace noise based on the risk of hearing damage. For example, an 8-hour noise exposure level is limited to 85 decibels (when sound is this loud, you need to shout to talk to someone near you). There are also guidelines for acceptable noise levels in workplaces that aim to ensure the noise will not be intrusive or affect the ability of the worker to do the tasks. For example, a design level for a general office may be 40 to 45 decibels (dBA), while for a ticket sales area it may be 45 to 50 dBA. In this range, noise should not have an adverse effect on your ability to complete a task.
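To make the arithmetic behind such limits concrete, here is a minimal Python sketch of the standard exposure-time calculation. It assumes the common 3-decibel exchange rate (some jurisdictions use 5 dB instead); the function name and defaults are ours, for illustration only.

```python
def allowed_exposure_hours(level_dba, criterion_dba=85.0, exchange_rate_db=3.0):
    """Allowed daily exposure under an 85 dBA / 8-hour criterion.

    Assumes the common 3 dB exchange rate (each 3 dB above the
    criterion halves the allowed time); some jurisdictions use 5 dB.
    """
    return 8.0 / (2.0 ** ((level_dba - criterion_dba) / exchange_rate_db))

for level in (85, 88, 94, 100):
    print(f"{level} dBA -> {allowed_exposure_hours(level):.2f} hours allowed")
```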

However, there are many work environments, particularly in the transportation industry, in which the noise levels are above 50 dBA but the employees are required to perform tasks demanding a high level of concentration and attention. For pilots and for bus, truck, and train drivers, the noise levels in their work areas can at times range from 65 to more than 75 dBA.

These workers all need to make safety-critical decisions and operate technical equipment in the presence of continuous noise generated by their vehicle’s engine. Transport check-in staff need to communicate with and process passengers in noisy check-in halls, where there is both vehicle and equipment noise and the “babble” of the people around them.

In this paper, we discuss findings from a number of studies investigating the effect of constant noise at 65 dBA on various cognitive and memory skills. Two noise sources were used: one, a wideband noise like the constant mechanical noise from an engine; the other, a babble noise of multiple people’s incomprehensible speech. Language background is another factor that can increase cognitive load for workers communicating in a language that is not their native one.

The cognitive tasks tested working memory with an alphabet span test and recognition memory with a cued recall task. The signal-to-noise ratios used were 0, -5, and -10 dB. Wideband noise was found to have a greater effect on working memory and recognition memory than babble noise.
Those who were not native English speakers were also more affected by the wideband noise than the babble noise. The subjective assessment, in which the subjects were asked their opinion of the effect of the noise and its annoyance, was also greater for wideband noise.
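For readers curious how such negative signal-to-noise ratios are produced in practice, here is a small Python sketch of one standard approach: scaling the noise so the mix hits a target SNR. This is a generic illustration, not the actual stimulus-preparation code from these studies.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then mix.

    A negative SNR, as in these experiments (0, -5, -10 dB), means the
    noise carries more power than the speech.
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_noise_power = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_noise_power / p_noise)

# Usage sketch: mixed = mix_at_snr(speech_samples, wideband_noise, snr_db=-10)
```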

These findings reinforce the limitations of basing acceptability on a simple overall dBA value alone. The reduction in performance demonstrates the importance of reducing the noise levels within transportation workplaces.

3pAB1 – A Welcoming Whinny

David G. Browning decibeldb@aol.com
Peter D. Herstein – netsailor.ph@cox.net
BROWNING BIOTECH
139 Old North Road
Kingston, RI 02881
Popular Version of paper 3pAB1
Presented Tuesday afternoon, June 27, 2017
173rd ASA Meeting, Boston

Are you greeted with a welcoming whinny when you enter the barn? When doing research on horse whinnies (as part of the Equine Vocalization Project), we realized we were hearing more whinnies when horses were inside the barn than out. This led us to investigate further, and we came to realize it was vocalization adaptation. Horses have remarkable eyesight, with almost a 360° field of view, which they primarily rely on to observe and communicate when out in the open. In a barn, confined to a stall, their line of sight is often blocked. Quite remarkably, they learn to compensate by recognizing the sounds that are of interest — like that of the feed-cart or even their owner’s footsteps — which they often salute with a whinny.

We were curious as to how universally vocalization adaptation occurs in the animal world, and in searching the literature we found numerous interesting examples. Asian Wild Dogs (Dholes), for example, hunt prey in packs, usually out in the open where they can visually keep track of the prey and their pack mates. When they encounter sight-limiting vegetation, however, they use a short, flat whistle they have developed to keep track of each other without interfering with their listening for the prey.

Jungles, which present further examples, are uniquely challenging to animals for three reasons: visibility is limited, moving is difficult, and a vocalization has to be heard over many others’ sounds. African rhinos out on the plain can make do with a simple bellow, as it is easy to trot over and check them out. In contrast, a Sumatran rhino, always in the jungle, has a complex vocalization. Often compared to that of a whale, the vocalization is complex in order to be heard among the competing calls while providing enough information to entice another to slog over and check it out (or not).

The military term “situational awareness” describes something equally crucial to animals, and this work provides some examples of their acoustic compensations when visibility is limited.

2pSAa – Three-in-one Sound Effects: A redirecting antenna, a beam splitter, and a sonar

Andrii Bozhko – AndriiBozhko@my.unt.edu
Arkadii Krokhin – Arkadii.Krokhin@unt.edu
Department of Physics
University of North Texas
1155 Union Circle #311427
Denton, TX 76201, USA

José Sánchez-Dehesa – jsdehesa@upvnet.upv.es
Francisco Cervera – fcervera@upvnet.upv.es
Wave Phenomena Group
Universitat Politècnica de València
Camino de Vera s/n
Valencia, ES-46022, Spain

Popular version of paper 2pSAa, “Redirection and splitting of sound waves by a periodic chain of thin perforated cylindrical shells.”
Presented Monday afternoon, June 26, 2017, 2:20, Room 201
173rd ASA Meeting, Boston

Any sound, whether the warble of an exotic bird or the noise of clunky machinery, is perceived by scientists as a complex mixture of many primitive sound waves — the so-called pure tones, which are simply vibrations at certain distinct frequencies. So, is it possible, we wondered, to break such an acoustic compound down into its constituents and separate one of those pure tones from the rest?

It can be achieved with standard signal-processing techniques; however, a simple mechanical solution also exists in the form of a passive system, that is to say, one that doesn’t have to be turned on to operate.

Here we demonstrate such a system: A linear, periodic arrangement of metallic perforated cylindrical shells in air (see Fig. 1), which serves as a redirecting antenna and a splitter for sound within an audible range.

Figure 1 – A periodic array of perforated cylindrical shells mounted outside the Department of Electronic Engineering, Polytechnic University of Valencia. Credit: Sánchez-Dehesa

Each shell in the chain (see Fig. 2) is a weak scatterer, meaning a sound wave passes through it virtually undistorted, and strong redirection of an incoming signal can occur only if the chain is sufficiently long. When the number of shells in the chain is large enough, e.g., several dozen, each shell participates in a collective oscillatory motion, with each one transferring its vibration to its neighbor via the environment. Such a self-consistent wave is referred to as an eigenmode of our system, and it is best thought of as collective oscillations of air localized in the vicinity of the shells’ surfaces.

Figure 2 – A close-up of an aluminum perforated cylindrical shell. Credit: Sánchez-Dehesa

Now, there are two important concepts regarding wave motion that deserve careful clarification. When describing an acoustic wave, we can look at how the regions of maximum (or minimum) pressure move through the medium (air, in this case), and combine the pace and direction of their motion into a single characteristic called the phase velocity of the wave.

Another important property of the wave is its group velocity, which indicates how fast and in which direction the actual sound propagates. In many cases, the phase velocity and the group velocity of the wave have the same direction (the case of normal dispersion), but it is also not uncommon for the group velocity of a wave to be opposite to the phase velocity (the case of anomalous dispersion).
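The distinction can be made concrete numerically. The toy Python sketch below uses two made-up dispersion relations (not the real eigenmode curves of the shell chain, which come from the full scattering calculation) to show that on a normal branch the phase and group velocities point the same way, while on an anomalous branch they do not.

```python
import numpy as np

# Two toy dispersion relations omega(k); illustrative shapes only, not
# the real eigenmode curves of the shell chain.
k = np.linspace(0.1, 10.0, 500)           # wavenumber, rad/m
branches = {
    "normal":    340.0 * k,               # omega rises with k
    "anomalous": 4000.0 - 200.0 * k,      # omega falls with k
}

for name, omega in branches.items():
    v_phase = omega / k                   # phase velocity, omega / k
    v_group = np.gradient(omega, k)       # group velocity, d(omega)/dk
    aligned = np.all(np.sign(v_phase) == np.sign(v_group))
    print(f"{name}: phase and group velocities aligned? {aligned}")
```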

The idea of exploiting the fundamental eigenmodes of our system with either normal or anomalous dispersion is what enables the chain of perforated shells to redirect and focus sound. Namely, an acoustic signal that impinges on the chain can trigger the collective vibration of the shells – the eigenmode – and, thus, launch a wave running along the chain.

Of course, most of the sound would pass through the chain, but nevertheless the amount of energy that is redirected along the chain in the form of an eigenmode is quite noticeable. The eigenmode excitation only occurs if the phase velocity of the eigenmode matches that of the incoming signal, and for a specific incident angle, the matching condition supports several frequencies within the audible range.

What is crucial here is that the dispersion of the chain’s eigenmodes at these frequencies alternates between normal and anomalous, which means that by varying only the frequency of the incident acoustic wave (with everything else remaining unchanged), one can effectively switch the direction of the eigenmode’s propagation along the chain.
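The matching condition itself is a one-liner: the eigenmode is excited when its wavenumber along the chain equals the projection of the incident wavevector onto the chain axis, k = (2πf/c)·sin θ. A minimal Python sketch, using the frequencies and the 10° incidence from the animations below:

```python
import numpy as np

C_AIR = 343.0  # speed of sound in air, m/s

def matching_wavenumber(freq_hz, theta_deg):
    """Projection of the incident wavevector onto the chain axis.

    An eigenmode is excited when its own wavenumber at this frequency
    equals this value (the phase-matching condition).
    """
    omega = 2.0 * np.pi * freq_hz
    return (omega / C_AIR) * np.sin(np.radians(theta_deg))

# The two frequencies from the animations, both at 10 degrees incidence:
for f in (2625.0, 3715.0):
    print(f"{f:.0f} Hz -> k_parallel = {matching_wavenumber(f, 10.0):.2f} rad/m")
```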

Animation 1 – An acoustic wave of frequency 2625 Hz is incident on the chain of perforated shells at an angle of 10°. The excited eigenmode, having anomalous dispersion, propagates down the chain. Credit: Bozhko

Animation 2 – Same as in animation 1, but the frequency is 3715 Hz, with the excited eigenmode having normal dispersion now. The redirected sound then propagates upwards along the chain. Credit: Bozhko

Animations 1 and 2 illustrate such intriguing behavior of the chain of perforated shells. In one case, the eigenmode that is excited has normal dispersion and carries energy upwards along the chain. In the other case, the dispersion is anomalous and the eigenmode travels downwards. The 10° incidence angle of the sound in both cases is the same, but the frequencies are different.

One possible application of such a redirecting antenna would be an acoustic beam splitter. Indeed, if an incoming signal has a wide spectrum of frequencies, then two pure tones with frequencies depending on the parameters of the chain and the angle of incidence can be extracted and redirected along the chain.

Due to the different dispersion behavior of the eigenmodes corresponding to these two tones, the eigenmodes propagate in opposite directions. Thus, splitting the two pure tones becomes possible with a chain of perforated shells. Since the frequencies of the eigenmodes change smoothly with the incidence angle, this angle can also be recovered, meaning the chain may serve as a passive acoustic detector that determines the direction to the source of an incoming signal.
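Inverting that same matching condition is what makes the chain a direction finder. Here is a hypothetical sketch under the same assumptions as above: observing which eigenmode is excited, and at what frequency, yields the arrival angle.

```python
import numpy as np

C_AIR = 343.0  # speed of sound in air, m/s

def incidence_angle_deg(freq_hz, k_mode):
    """Recover the arrival direction from an observed eigenmode.

    Inverts the matching condition k_mode = (omega / c) * sin(theta),
    where k_mode is the eigenmode wavenumber (rad/m) excited at freq_hz.
    """
    omega = 2.0 * np.pi * freq_hz
    return np.degrees(np.arcsin(k_mode * C_AIR / omega))

print(incidence_angle_deg(2625.0, 8.35))  # ~10 degrees, as in Animation 1
```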

2aAAc3 – Vocal Effort, Load and Fatigue in Virtual Acoustics

Pasquale Bottalico, PhD. – pb@msu.edu
Lady Catherine Cantor Cutiva, PhD. – cantorcu@msu.edu
Eric J. Hunter, PhD. – ejhunter@msu.edu

Voice Biomechanics and Acoustics Laboratory
Department of Communicative Sciences and Disorders
College of Communication Arts & Sciences
Michigan State University
1026 Red Cedar Road
East Lansing, MI 48824

Popular version of paper 2aAAc3 Presented Monday morning, June 26, 2017
Acoustics ’17 Boston, 173rd Meeting of the Acoustical Society of America and the 8th Forum Acusticum

Mobile technologies are changing the lives of millions of people around the world. According to the World Health Organization (2014), around 90% of the population worldwide could benefit from the opportunities mobile technologies represent, and at relatively low cost. Moreover, research on the use of mobile technology for health has increased substantially over the last decade.

One of the most common health applications of mobile technology is self-monitoring. Wearable devices for tracking movement in our daily lives are becoming popular. If such technology works for monitoring our physical activity, could similar technology be used to monitor how we use our voices in daily life? This is particularly important considering that several voice disorders are related to how and where we use our voices.

As a first step to answering this question, this study investigated how people talk in a variety of situations that simulate common vocal communication environments. Specifically, the study was designed to better understand how self-reported vocal fatigue is related to objective voice parameters like voice intensity, pitch, and their fluctuation, as well as the duration of the vocal load. This information would allow us to identify trends between the self-perception of vocal fatigue and objective parameters that may quantify it. To this end, we invited 39 college students (18 males and 21 females) to read a text under different “virtual-simulated” acoustic conditions. These conditions comprised 3 reverberation times, 2 noise conditions, and 3 auditory feedback levels, for a total of 18 tasks per subject, presented in a random order. For each condition, the subjects answered questions addressing their perception of vocal fatigue on a visual analogue scale (Figure 1).
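For the methodologically curious, the condition grid and per-subject randomization are easy to reproduce. Here is a Python sketch; the factor labels are placeholders of ours, since the paper reports only the factor counts.

```python
import itertools
import random

# 3 reverberation times x 2 noise conditions x 3 feedback levels = 18 tasks.
# The labels below are placeholders, not the paper's actual level values.
reverb_times = ["RT_short", "RT_medium", "RT_long"]
noise_conditions = ["noise_off", "noise_on"]
feedback_levels = ["FB_low", "FB_normal", "FB_high"]

conditions = list(itertools.product(reverb_times, noise_conditions, feedback_levels))
assert len(conditions) == 18

def task_order(subject_id):
    """Return the 18 conditions in a reproducible per-subject random order."""
    rng = random.Random(subject_id)
    order = conditions[:]
    rng.shuffle(order)
    return order

print(task_order(subject_id=1)[:3])  # first three tasks for subject 1
```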

Figure 1. Visual analogue scales used to determine self-reported vocal fatigue. Credit: Bottalico

The experiment was conducted in a quiet, sound-isolated booth. We recorded speech samples using an omnidirectional microphone placed at a fixed distance of 30 centimeters from the subject’s mouth. The 18 virtual-simulated acoustic conditions were presented to the participants through headphones, which carried a real-time mix of the participant’s voice with the simulated environment (noise and/or reverberation). Figure 2 presents the measurement setup.

Figure 2. Schematic experimental setup. Credit: Bottalico

To get a better understanding of the environments, we spliced together segments from the recordings of one subject. An example of the recorded speech material and the feedback that the participants received through the headphones is presented in Figure 3 (and in the attached audio clip).

Figure 3. Example of the recording. Credit: Bottalico

Using these recordings, we investigated how participants’ reports of vocal fatigue related to (1) gender, (2) ΔSPL mean (the variation in intensity from the typical voice intensity of each subject), (3) fo (fundamental frequency, or pitch), (4) ΔSPL standard deviation (the modulation of the intensity), (5) fo standard deviation (the modulation of the intonation), and (6) the duration of the vocal load (represented by the order of administration of the tasks, which was randomized per subject).
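To illustrate the style of analysis (not the authors’ actual model), here is a Python sketch that regresses a fatigue score on six standardized predictors and ranks them by the size of their coefficients, one rough way to gauge relative importance. The data are random stand-ins.

```python
import numpy as np

# Random stand-in data: 39 subjects x 18 tasks, six predictors. The real
# analysis in the paper may use a different model and importance metric.
rng = np.random.default_rng(0)
n = 39 * 18
X = rng.normal(size=(n, 6))
true_weights = np.array([0.1, 0.2, 0.1, 0.5, 0.15, 0.6])  # made up
fatigue = X @ true_weights + rng.normal(size=n)

# Standardize everything, then fit ordinary least squares; the absolute
# standardized coefficients give a rough relative-importance ranking.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (fatigue - fatigue.mean()) / fatigue.std()
beta, *_ = np.linalg.lstsq(np.c_[np.ones(n), Xz], yz, rcond=None)

names = ["gender", "dSPL mean", "fo", "dSPL sd", "fo sd", "task order"]
for name, b in sorted(zip(names, beta[1:]), key=lambda t: -abs(t[1])):
    print(f"{name:10s} |beta| = {abs(b):.2f}")
```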

As we show in Figure 4, the duration of speaking (the vocal load) and the modulation of the speech intensity are the most important factors in explaining vocal fatigue.

Figure 4. Relative importance of the 6 predictors in explaining self-reported vocal fatigue. Credit: Bottalico

While the results show that participants’ perception of vocal fatigue increases as the duration of the vocal load, the pitch, and the modulation of the intonation increase, of particular interest is the association between vocal fatigue and voice intensity and its modulation. Specifically, there seems to be a sweet spot, or comfort range, of intensity modulation (around 8 dB) that allows a lower level of vocal fatigue. What this means for vocalists is that in continuous speech, vocal fatigue may be decreased by adding longer pauses during the speech and by avoiding excessive increases in voice intensity. Our hypothesis is that this comfort range represents the right amount of modulation to allow the vocal folds to rest while avoiding an excessive increase in voice intensity.

The complexity of a participant’s perceived vocal fatigue as related to intensity (ΔSPL) and the modulation of the intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, is shown in Video 1 and Video 2 for males and females, respectively. The videos illustrate the average values of pitch and modulation of intonation (120 Hz and 20 Hz for males; 186 Hz and 32 Hz for females).

Video 1 – Self-reported vocal fatigue as a function of the intensity (ΔSPL) and the modulation of the intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, for males, assuming an average pitch (120 Hz) and modulation of intonation (20 Hz)

Video 2 – Self-reported vocal fatigue as a function of the intensity (ΔSPL) and the modulation of the intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, for females, assuming an average pitch (186 Hz) and modulation of intonation (32 Hz)

If mobile technology is going to be used by people to monitor their daily voice use in different environments, the results of this study provide valuable information for its design. A low-cost mobile system with easy-to-understand output is possible.

References
1. World Health Organization. (2014). mHealth: New horizons for health through mobile technologies: Second global survey on eHealth. Geneva: WHO.

2. Bort-Roig, J., Gilson, N. D., Puig-Ribera, A., Contreras, R. S., & Trost, S. G. (2014). Measuring and influencing physical activity with smartphone technology: a systematic review. Sports Medicine, 44(5), 671-686.

Acknowledgements
Research was in part supported by the NIDCD of the NIH under Award Number R01DC012315. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

2aBAa7 – Stimulating the brain with ultrasound: treatment planning

Joseph Blackmore – joseph.blackmore@wadham.ox.ac.uk
Robin Cleveland – robin.cleveland@eng.ox.ac.uk
Institute of Biomedical Engineering, University of Oxford, Roosevelt Drive, Oxford, OX3 7DQ, United Kingdom

Michele Veldsman – michele.veldsman@ndcn.ox.ac.uk
Christopher Butler – chris.butler@ndcn.ox.ac.uk
Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, United Kingdom

Popular version of paper 2aBAa7
Presented Monday morning, June 26, 2017
173rd ASA Meeting, Boston

Many disorders of the brain, such as OCD and essential tremor, can be treated by stimulating or disrupting specific locations in the brain. This can be done by placing an electrode directly at the site needing disruption with a procedure known as deep brain stimulation, but it is an invasive procedure that involves drilling a hole in the skull and inserting a wire through the brain tissue.

Non-invasive alternatives do exist in which electrodes or magnets are placed on the scalp, avoiding the need for surgery. However, these methods can only be used to treat brain regions quite close to the skull and have limited spatial specificity.

Recently, low-intensity focused ultrasound has also been shown to stimulate localized regions of the brain, creating, for example, the sensation of seeing stars (known as phosphenes) when targeted at a region of the brain associated with vision [1]. However, steering and focusing an ultrasound beam to the correct location within the brain remains a challenge due to the presence of the skull.

Skull bone, with its varying thickness, curvature, and structure, strongly distorts and attenuates ultrasound waves and can shift the focal point away from the intended target. Consequently, in current human trials, as many as 50 percent of ultrasound stimulation attempts did not elicit a response [1,2].

One solution for more robust focusing is to use ultrasound transducers with hundreds or even thousands of elements, each of which is individually tuned to account for variations in skull properties so that all waves focus on the intended target location within the brain. However, this equipment is complex and expensive, which, at this early stage of research into ultrasound-induced neuromodulation, has limited progress.

Here, we performed a numerical study to assess whether single-element transducers — which are relatively inexpensive — could be used in combination with numerical modelling to achieve sufficient targeting in the brain. This would provide a solution that can be used as a research tool to further understand the mechanisms behind ultrasound-induced neuromodulation.

Figure 1 – Propagation of sound waves from the brain target out to a spherical receiver outside the skull. The received signals are then optimized to determine the best position for an ultrasound source to deliver sound back through the skull. The locations for different optimization methods are depicted by the colored dots.

The method works by importing a three-dimensional CT image into a computer and placing a virtual acoustic source at the desired target location. A super-computer then calculates how the sound travels from the target, through brain tissue and the skull bone, onto a sphere outside the head, as depicted in Figure 1.

From the predicted signals, it is possible to determine the best position for an ultrasound source that can send sound back through the skull to the target location. We employed different strategies for choosing the source location (the dots in Figure 1), and for the optimal strategy we predict that a single-element transducer can localize sound to a region about 36 millimeters long and 4 millimeters in diameter, at depths up to 45 millimeters into brain tissue, as depicted in Figure 2.
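The underlying “simulate outward, then send back” logic is the acoustic time-reversal principle. The Python sketch below shows it in a toy homogeneous medium: it deliberately omits the skull (which is precisely the hard part the full super-computer simulation handles) and uses a made-up arc of receiver points.

```python
import numpy as np

C = 1540.0  # m/s, soft-tissue sound speed; skull effects deliberately ignored

def time_reversal_delays(target, points, c=C):
    """Firing delays that refocus sound onto `target`.

    Time reversal: points that would hear a virtual source at the target
    last must fire first when sending sound back.
    """
    tof = np.linalg.norm(points - target, axis=1) / c   # time of flight
    return tof.max() - tof

# Toy geometry: 64 candidate positions on an arc 90 mm from the head centre
angles = np.linspace(-0.5, 0.5, 64)                     # radians
arc = 0.09 * np.stack([np.sin(angles),
                       np.zeros_like(angles),
                       np.cos(angles)], axis=1)
target = np.array([0.0, 0.0, 0.035])                    # 35 mm deep on axis

delays = time_reversal_delays(target, arc)
print(f"delay spread across the arc: {np.ptp(delays) * 1e6:.2f} microseconds")
```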


Figure 2 – Focusing the sound waves to a region deep within the brain from a curved single-element transducer. The red cross indicates the intended target. The blue contours represent the acoustic intensity relative to the intensity at the target: -3 dB corresponds to 50% of the intensity at the target, -6 dB to 25%, and -12 dB to about 6%.
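For reference, converting those contour labels from decibels to intensity fractions is a single formula, 10^(dB/10); a two-line Python check:

```python
def db_to_fraction(db):
    """Relative level in dB -> fraction of the target intensity."""
    return 10.0 ** (db / 10.0)

for db in (-3, -6, -12):
    print(f"{db:>4} dB -> {db_to_fraction(db):.1%} of the target intensity")
```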

[1] Lee, Wonhye, et al. “Transcranial focused ultrasound stimulation of human primary visual cortex.” Scientific Reports 6 (2016).
[2] Lee, Wonhye, et al. “Image-guided transcranial focused ultrasound stimulates human primary somatosensory cortex.” Scientific Reports 5 (2015): 8743.