How Canaries Listen to Their Song – Adam R. Fishbein

How Canaries Listen to Their Song

Adam R. Fishbein – afishbei@terpmail.umd.edu

Shelby L. Lawson

Gregory F. Ball

Robert J. Dooling

University of Maryland

4123 Biology-Psychology Building

College Park, MD 20742

 

Popular version of paper 3pAB5

Presented Tuesday afternoon, June 27, 2017

173rd ASA Meeting, Boston

 

The melodic, rolling songs of canaries have entertained humans for centuries. But for canaries, these songs play an important role in courtship. The song, produced exclusively by males, can last for minutes and consists of various syllables repeated in flexibly sequenced phrases.

 

Earlier behavioral observations have shown that females are especially attracted to so-called “sexy” syllables or “sexy” phrases. These are characterized by a fast tempo, a wide bandwidth (meaning that they extend from low to high pitch), and a two-note structure. Researchers have argued that females have evolved to prefer these syllables because they are difficult to produce and thus provide an honest signal of the male’s quality [1][2]. That is, sexy syllables indicate a strong, healthy male with good genes.

 

 

Figure 1 Recording of canary song (left) and spectrogram of “sexy” phrase (right). The two red lines indicate the two notes of the sexy syllable. (Credit: Fishbein)

 

We explored how canaries in a non-breeding state (i.e. short days) listen to their song by testing their auditory perception using the equivalent of a human hearing test. Since the birds can’t tell us “yes” or “no” when asked if two sounds are different, we train them to listen to a repeating sound and peck a key when the sound changes. If they respond correctly, this tells us they can hear the difference between the sounds and they are then rewarded with brief access to food.

 

Figure 2 Canary in testing chamber. (Credit: Fishbein)

Some of the questions we posed are: Do sexy phrases sound different to canaries than other phrases? Do they listen more to the fine details of every syllable or to the overall flow of the song? Are females more sensitive to “sexy” qualities than males? Do other birds hear canary song differently than canaries?

In one experiment, we asked canaries to distinguish between eight different song phrases: four “sexy” ones and four “non-sexy” ones. We analyzed the birds’ responses and created a “perceptual map” that visually represents how distinct the phrases sound to the canaries.

Our results show that canaries perceive a bird’s sexy phrases as more similar to one another than to his other phrases, confirming that these sexy syllable vocalizations are particularly salient to canaries.

 

Figure 3 “Perceptual map” for canaries. Circles indicate phrases taken from recordings of bird A. Diamonds indicate phrases taken from recordings of bird B. Blue labels are non-sexy phrases and red ones are sexy. Axis labels indicate the acoustic features that each dimension correlates with. (Credit: Fishbein)
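The summary does not name the analysis behind the perceptual map, but maps like this are commonly built with multidimensional scaling (MDS), which places stimuli so that distances between points mirror perceptual dissimilarity. A minimal classical-MDS sketch in NumPy, run on a hypothetical four-phrase dissimilarity matrix (invented numbers, not the study’s data):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed an n x n dissimilarity
    matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the top-k components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Hypothetical dissimilarities among 4 phrases (first two "sexy",
# last two "non-sexy"); sexy phrases are more similar to each other.
D = np.array([[0.0, 0.2, 1.0, 0.9],
              [0.2, 0.0, 0.9, 1.0],
              [1.0, 0.9, 0.0, 0.5],
              [0.9, 1.0, 0.5, 0.0]])
coords = classical_mds(D)   # 4 points in a 2-D "perceptual map"
print(coords.shape)         # (4, 2)
```

In such a map, the two “sexy” phrases land close together, forming their own cluster, just as in Figure 3.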

Other experiments in this study provided further evidence that sexy song syllables sound distinctive to canaries. Canaries could hear synthesized reversals of sexy syllables, but performed better at detecting reversals of non-sexy ones. They were also better at hearing increases in the tempo of sexy syllables than decreases. These results suggest that canaries may be attuned to perceiving the fast tempo and coordinated notes of the sexy syllables. Importantly, these findings held for both female and male canaries, perhaps because male canaries need to assess competitors and maintain their own song, just as females need to find the highest quality mate.

Canaries are not exceptional in being able to hear the fine details of their song. Other species tested with these song manipulations are similarly sensitive to small temporal differences between notes in sexy syllables.

Taken together, these results suggest that canaries listen for chunk-by-chunk, phrase-by-phrase changes in their song, keying in to details about sexiness when those particular syllables occur. In the future, it will be interesting to compare these perceptual results from canaries in a non-breeding state to canaries that are on long days, with elevated hormone levels, preparing to breed.

In a way, canaries seem to listen to song like we listen to an orchestral symphony, hearing the melody and rhythm of the whole piece, integrating the contributions of each instrument, and not zooming in on the performance of a single instrument except during an especially impressive solo.

 

References

  1. Vallet, E., Kreutzer, M., 1995. “Female canaries are sexually responsive to special song phrases.” Animal Behaviour. 49, 1603-1610.
  2. Suthers, R., Vallet, E., Kreutzer, M., 2012. “Bilateral coordination and the motor basis of female preference for sexual signals in canary song.” Journal of Experimental Biology. 215, 2950-2959.

 

 

 

My personal head related transfer function – Sebastián Fingerhuth 

My personal head related transfer function

High quality individualized computer models

 

Sebastián Fingerhuth   sebastian.fingerhuth@pucv.cl

Danny Angles                danny.angles.a@mail.pucv.cl

Juan Barraza                 juan.barraza.b@mail.pucv.cl

School of Electrical Engineering

Pontificia Universidad Católica de Valparaíso

Av. Brasil 2147 – Valparaíso – Chile.

 

Our ability to precisely locate where a sound comes from rests on many factors, notably that we have two ears and that our brains can use the geometry of our head and ears to work out the direction a sound came from. Hearing with two ears is called binaural hearing. It has been a matter of study and research for many years and has many technological applications.
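As a concrete example of a binaural cue, the interaural time difference (ITD) can be estimated from head geometry alone using the classic Woodworth spherical-head formula. This sketch uses a textbook head radius, not a value measured in this study:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) for a far-field source,
    Woodworth spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))

# A source 90 degrees to one side gives the maximum delay between
# the ears, roughly two thirds of a millisecond.
print(itd_woodworth(90) * 1000, "ms")
```

Because real heads differ in radius and shape, the true delays (and the spectral cues added by the pinna) deviate from this idealized sphere, which is exactly why individualized models matter.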

Notable applications include 3D sound reproduction and recording systems, architectural acoustic design, sound design, individualized fitting of hearing aids, and teleconferencing systems.

Hardware devices called dummy heads allow us to study this topic [3][4]. Available for purchase and at some research institutions, they consist of a life-sized artificial head with two ears that have microphones in the ear canals. With a dummy head, it is possible to record sound exactly as it would arrive at our own ears. This has led to some of the amazing 3D recordings available for computer games, videos, and music production. Dummy heads are also used intensively for research purposes.

There is an ITU (International Telecommunication Union) standard for dummy heads, but it is hardly sufficient to fully model individual heads, since it represents only average head dimensions. My individual head is different from a dummy head, and these differences in size and geometry affect how accurate the results of auditory measurements can be.

The innovation of our research is a methodology for constructing individual 3D computer (CAD) models of heads and ears, including the outer part of the ear, called the pinna. These three-dimensional models can be used to compute individual acoustic parameters directly on a computer or to build an individual dummy head.

 

Video: 3D CAD model (Credit: Fingerhuth/Angles/Barraza)

 

The methodology to obtain the 3D CAD models has two steps: i) the model of the head and ii) the 3D replicas of the ears.

Model of the head

Photogrammetry is a method for obtaining 3D information from a set of pictures taken of the object of interest, such as a head. For a high-resolution, high-precision model of the head, we need many pictures, from different angles and positions, to cover the head fully. We used the processing software 3DF Zephyr, but other options exist. The result is a 3D CAD model that also includes the color and texture of the object, although only the geometry is of interest for acoustics.

Figure 1. Upper images: Control points marked on the person’s face before the photo session. Lower images: 3D CAD model (with and without texture), including control distances measured with the software. (Credit: Fingerhuth/Angles/Barraza)

Ear Replicas

The form and geometry of the pinna make it almost impossible to get an accurate 3D CAD model from photogrammetry alone. Therefore, we used a molding process [5][6]. First an alginate negative is created, and then a plaster positive. Most of the plaster pinnae are cut open to expose the concavities (Figure 2).

Figure 2. Plaster replicas of the pinna. The original, from a dummy head, is on the left. (Credit: Fingerhuth/Angles/Barraza)

 

3D Scanner

The plaster models of the pinna can be converted into a 3D CAD model using photogrammetry again or by means of a 3D scanner. We used the latter; the result is shown in Figure 3. This method combines a laser and imaging on a small rotating platform.

Figure 3. Scanning process of the pinna. In this case, a standard pinna from a commercial dummy head. (Credit: Fingerhuth/Angles/ Barraza)

 

Results

We tested and compared the results of each of these processes (e.g., see the control points and distances on the participant and on the CAD model in Figure 1). To check how robust the methods are, we also performed additional quality tests: we used more or fewer pictures in the photogrammetry software, repeated photo sessions for the same participant on different days, etc.

These comparisons showed a mean error lower than 1.5%. Finally, the partial results, one CAD model of the head and two CAD pinnae, were joined to form a single 3D CAD model (Figure 4) that will be used to compute the acoustic cues for that specific person (known as the head-related transfer function, HRTF).

We will next assess the quality of our results through 3D audio localization listening tests with the same participant. This will tell us how good this hybrid process is for obtaining individualized dummy heads.

Figure 4. Result from the hybrid model obtaining process. (Credit: Fingerhuth/Angles/ Barraza)

 

Bibliography

[1] D. Batteau, “The Role of the Pinna in Human Localization,” Proceedings of the Royal Society B: Biological Sciences, vol. 168, pp. 158-180, 1967.
[2] J. W. S. Rayleigh, The Theory of Sound, London: Macmillan, 1877.
[3] B. Gardner and K. Martin, “HRTF Measurements of a KEMAR Dummy-Head Microphone,” MIT Media Lab, Cambridge, Massachusetts, 1994.
[4] F. Wightman and D. Kistler, “Headphone simulation of free-field listening. I: Stimulus synthesis,” The Journal of the Acoustical Society of America, vol. 85, no. 2, pp. 858-867, 1989.
[5] J. L. Bravo, “Construcción de Modelo de Oreja Artificial de Silicona y Medición de Características Acústicas” [Construction of a silicone artificial ear model and measurement of its acoustic characteristics], Valparaíso, 2015.
[6] R. Codoceo, “Construcción de Modelos 3D de Oreja y Cabeza Individualizada para Medición Acústica” [Construction of individualized 3D ear and head models for acoustic measurement], Valparaíso, 2016.

 

 

 

 

 

S&N-S Light: the system that makes the noise light – Sonja Di Blasio

S&N-S Light: the system that makes the noise light

Sonja Di Blasio– sonja.diblasio@polito.it
Giuseppina Emma Puglisi– giuseppina.puglisi@polito.it
Giuseppe Vannelli – giuseppe.vannelli@polito.it
Louena Shtrepi– louena.shtrepi@polito.it
Marco Carlo Masoero – marco.masoero@polito.it
Arianna Astolfi– arianna.astolfi@polito.it

Dipartimento di Energia
Politecnico di Torino
Corso Duca degli Abruzzi, 24
10129 Torino

Simone Corbellini – simone.corbellini@polito.it
Dipartimento di Elettronica e Telecomunicazioni
Politecnico di Torino
Corso Duca degli Abruzzi, 24
10129 Torino

In collaboration with:
Giulia Calosso – calossogiulia@gmail.com
Alessia Griginis – griginis@onleco.com
ONLECO S.r.L
Via Antonio Pigafetta, 3
10129 Torino
Stefano Cerruti – stefano.c@bottegastudio.it
BSA – Bottega Studio Architetti
Via degli Stampatori, 4
10128 Torino

Black logo of S&N-S Light. Credit: ONLECO S.r.l.

 

Recently, the tendency in many fields related to environmental quality, such as thermal and visual comfort, has been to customise comfort to users’ needs. Public spaces are planned with tailored comfort zones in which occupants can set their own comfort level with passive or active systems. In this context, the reduction of noise from human (anthropic) sources is a priority.

In densely occupied spaces, such as classrooms, workplaces, restaurants and outdoor spaces, the noise due to other people chatting has a detrimental effect upon performance, health and environmental quality. In these spaces, the way to reduce noise is usually based on the acoustic refurbishment of the rooms, in terms of sound absorption and sound insulation [1].

Strategies that actively involve users in achieving good acoustic quality have not yet been widely developed. Since high noise levels due to people chatting have been identified as the main source of acoustic pollution in these spaces [2,3,4], focusing on occupant behaviour can be an effective way to achieve acoustic comfort through an active role of the users.

We developed the Speech & Noise-Stop Light, S&N-S Light, a patented smart sound level meter with a warning light that is triggered when anthropic noise exceeds a predetermined level difference, encouraging personal voice control through visual feedback. The light, which turns green, yellow or red, is driven by an adaptive algorithm that filters out accidental noise.

The main aims of S&N-S Light are to increase social awareness of the impact of noise on health and comfort, and to encourage personal voice control, reducing anthropic noise levels and achieving acoustic comfort through active social behaviour.

The prototype is a transparent panel illuminated by a through-light colour beam (see Figure 1). It is mainly used to control chatting noise in classrooms, and is therefore applied as an educational tool.

 

Figure 1 – Example of application of S&N-S Light in a classroom of the primary school “Roberto d’Azeglio” in Torino during a measurement campaign. Credit: Sonja/POLITO

The innovation is in the adaptive algorithm, which makes S&N-S Light different from competitors (see Figure 2). The light activation is based on a time history, thus allowing S&N-S Light to automatically adapt to the changes in the noise conditions.

In this way, it accounts for the fact that people can be annoyed even at low noise levels, for example when the noise increases compared to a previous condition, especially during cognitive tasks. Moreover, S&N-S Light can filter out noise due to accidental events, such as a teacher’s shout or a sneeze.

Figure 2 – Innovative elements that characterize S&N-S Light.* Credit: Sonja/POLITO and Creative Commons licence (https://creativecommons.org/licenses/by/3.0/us/)
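The patented algorithm itself is not described in detail in this summary, so the following Python sketch only illustrates the two ideas above: a baseline that adapts to the recent noise history, and a hold time that filters out short accidental spikes. All class names and thresholds here are invented for illustration:

```python
from collections import deque

class WarningLight:
    """Illustrative adaptive traffic-light logic (not the patented
    algorithm). The colour depends on how far the current level exceeds
    a baseline estimated from recent history; short spikes are filtered
    by requiring the exceedance to persist for `hold` readings."""

    def __init__(self, history=60, hold=3):
        self.levels = deque(maxlen=history)   # recent dBA readings
        self.exceed = 0                       # consecutive exceedances
        self.hold = hold

    def update(self, level_dba):
        baseline = (sum(self.levels) / len(self.levels)
                    if self.levels else level_dba)
        self.levels.append(level_dba)
        diff = level_dba - baseline           # exceedance over recent history
        self.exceed = self.exceed + 1 if diff > 6 else 0
        if self.exceed >= self.hold:          # sustained noise, not a sneeze
            return "red"
        return "yellow" if diff > 3 else "green"

light = WarningLight()
for reading in [55, 56, 55, 80, 55, 56]:      # one accidental spike
    colour = light.update(reading)
print(colour)                                  # a lone spike never turns it red
```

Because the baseline tracks the recent history, the same absolute level can be green in a lively classroom and yellow in a quiet one, matching the observation that annoyance depends on the change relative to the previous condition.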

Six prototypes have been produced so far and successfully applied in primary and secondary school classrooms. The architecture is shown in Figure 3. A class-2, low-cost sound level meter records noise levels at a fixed time interval, and an electronic card processes the signals and activates the warning light which lights up the transparent panel. A Wi-Fi module has been added to send data to a cloud server platform and on a customised mobile App in real time. The future aim is to design a new device to extend application of S&N-S Light to open-plan offices and restaurants.

Figure 3 – Architecture of the S&N-S Light prototype and screenshot of the mobile App. Credit: Sonja/Simone/POLITO

We carried out several measurement campaigns in classrooms of different type of schools. Results highlighted a statistically significant decrease in noise levels, as shown in Figure 4 and Table I, especially in the first week in which S&N-S Light is switched on.

 

Figure 4 – Example of the distribution of L90 occurrences in a primary school classroom with S&N-S Light switched off and switched on. The most frequent value decreased by about 8 dB. Credit: Sonja/POLITO

 

Table I – L90 reduction with S&N-S Light switched on in four primary school classes. The reliability of the improvements was assessed with the Mann-Whitney U test, a non-parametric statistical test used to determine whether the distributions of two groups differ.

The experiments in the classrooms demonstrated a decrease in noise level with S&N-S Light switched on (weeks 1 and 2) compared to switched off (week 0). Furthermore, the decrease was larger in week 1 than in week 2. We are currently organizing a further measurement campaign in a primary school to investigate whether weekly training for the children could reduce the difference between week 1 and week 2.
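The Mann-Whitney U test mentioned in Table I is straightforward to apply to two weeks of L90 readings. This self-contained sketch uses the standard normal approximation for the p-value and invented dBA values (with roughly the 8 dB shift reported above), not the campaign’s actual data:

```python
import math
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U for 'x tends to be larger than y', with a
    normal-approximation one-sided p-value (no tie correction)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2      # U statistic for x
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, 0.5 * math.erfc(z / math.sqrt(2))   # one-sided p-value

# Invented per-lesson L90 values (dBA): light off vs. light on.
l90_off = [63.1, 61.8, 64.0, 62.5, 63.7, 60.9, 62.2, 63.4]
l90_on = [55.2, 54.8, 56.1, 53.9, 55.7, 54.3, 56.4, 55.0]
u, p = mann_whitney_u(l90_off, l90_on)
print(f"U = {u:.0f}, one-sided p = {p:.1g}")  # small p: reduction is reliable
```

The test compares ranks rather than raw values, which is why it suits noise-level distributions that need not be normally distributed.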

 

[1] Kristiansen J., Lund S.P., Persson R., Challi R., Lindskov J. M., Nielsen P.M., Larsen P.K., Toftum J., The effects of acoustical refurbishment of classrooms on teachers’ perceived noise exposure and noise-related health symptoms, International Archives of Occupational and Environment Heath, 89 (2016), pp. 341-350.

 

[2] Astolfi A., Pellerey F., Subjective and objective assessment of acoustical and overall environmental quality in secondary school classrooms, Journal of the Acoustical Society of America, 123(1) (2008), pp. 163-173.

 

[3] Dockrell J. E., Shield B., Children’s perceptions of their acoustical environment at school and at home, Journal of the Acoustical Society of America, 115 (6) (2004), pp. 2964-2973.

 

[4] Ottoz E., Rizzi L., Nastasi F., Recreational noise in Turin and Milan: impact and costs of movida for disturbed residents, In: Proceedings of the 22nd International Congress on Sound and Vibration, (2008), pp. 1-8.

 

* Terms of Use: These icons are licensed under a Creative Commons Attribution 3.0 United States licence (CC BY 3.0 US, https://creativecommons.org/licenses/by/3.0/us/). They are attributed to Nurfakeh Fuji Amaludin, Pascual Bilotta, Magicon and Becris, and the original versions can be found at https://thenounproject.com/

 

 

 

What Can We Learn from Breaking Wave Noise? – Grant B. Deane

What Can We Learn from Breaking Wave Noise?

Grant B. Deane – gdeane@ucsd.edu
M. Dale Stokes – dstokes@ucsd.edu
Scripps Institution of Oceanography, UCSD,
La Jolla, CA 92093-0206

David M. Farmer – farmer.david@gmail.com
School of Earth and Ocean Sciences,
Victoria BC, V8P 5C2, Canada

Eric D’Asaro – dasaro@apl.washington.edu
Zhongxiang Zhao – zzhao@apl.washington.edu
Applied Physics Laboratory, University of Washington,
Seattle, WA 98105

Popular version of paper 2pAO10
Presented Monday afternoon, June 26, 2017
173rd ASA Meeting, Boston

Waves breaking on the ocean, often called “whitecaps,” limit the growth of ocean waves, transfer momentum between the atmosphere and ocean, generate marine aerosols, increase ocean albedo and enhance the air-sea transport of greenhouse gasses. Despite their importance for understanding weather and climate, they remain poorly understood.

The reason for this is clear: breaking waves are the product of storms at sea, they generate intense turbulence, and they can destroy the sensitive instruments we might use to measure them. This makes them tricky to study in their natural ocean environment and has encouraged the development of various remote sensing techniques using aircraft and satellites. While we have learned much about breaking waves from above, we still need to understand what is happening in the turbulent core. Here we probe the whitecaps’ inner structure from beneath, using the natural sound they create.

The video shows a breaking wave seen from above and below during a storm off Point Conception, California in 2000. Credit: Deane

The mass of bubbles that gives the whitecap its bright appearance comes from the air entrained as the wave breaks. The breaking process generates intense turbulence that fragments the trapped air cavity into a mass of small bubbles. These bubbles create underwater noise. The sounds of crashing surf, the tinkling fountain and the babbling brook are all made by bubbles, which emit a musical pulse of sound when they are first formed.

Each pulse of sound has its own tone that is determined by the size of the bubble making it. So, wave noise intensity and frequency contains information about the numbers and sizes of bubbles entrained by a wave. By measuring the sound safely beneath the fury of the ocean surface, we can learn what is going on within its turbulent interior.
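The tone-size relationship is the well-known Minnaert resonance, in which frequency is inversely proportional to bubble radius. A quick sketch with standard values for air bubbles in water near the surface (illustrative numbers, not measurements from this paper):

```python
import math

def minnaert_frequency(radius_m, p0=101325.0, rho=1000.0, gamma=1.4):
    """Minnaert resonance frequency (Hz) of an air bubble in water:
    f0 = (1 / (2*pi*a)) * sqrt(3*gamma*p0 / rho), where a is the radius,
    p0 the ambient pressure, rho the water density, gamma the ratio of
    specific heats of air."""
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius_m)

# A bubble of 1 mm radius rings near 3.3 kHz; halving the radius
# doubles the frequency, so small bubbles make the high "tinkling" sounds.
print(minnaert_frequency(1e-3), "Hz")
```

Inverting this relationship is what lets a hydrophone spectrum be read as a census of bubble sizes.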

Wave noise has been used over the years to learn many interesting things about breaking waves, including their intensity, how frequently they break and their movement across the sea surface. Wave noise has been used to probe the properties of recently formed bubbles left after a wave breaks and even to infer wind speed, which is closely related to the overall intensity of noise in the ocean.

We have been using wave noise to probe fluid turbulence in whitecaps. Our interest in whitecap turbulence is motivated by its relationship to bubble entrainment and breakup. Fluctuating pressure within the breaking wave driven by fluid turbulence can rupture bubbles by distorting them from their spherical form into irregular shapes.

Small bubbles are stabilized against rupture by surface tension, but large bubbles get ripped apart. These two forces are balanced at a spatial scale, the Hinze scale, which is related to the intensity of the turbulence. The Hinze scale plays a key role in setting the bubble size distribution in breaking waves. An important question is how does the Hinze scale, and therefore the bubble size distribution, change as the wind grows from a gentle breeze to a tropical cyclone?

We might reasonably expect the turbulence to increase with increasing wind speed. If this were true, the bubble distribution created by wave breaking would lead to smaller bubbles at higher wind speed. Surprisingly, this turns out not to be the case. Our experiments on breaking waves in a laboratory show that turbulence intensity in breaking waves, measured by both bubble sizes and a quite different method, reaches a maximum value, relatively independent of the size of the wave.

This leads us to suspect that the Hinze scale, and therefore the bubble size distribution, should be the same for a wide range of wind speeds. We call this phenomenon “turbulence saturation,” and it has important implications for transport processes linking the ocean and atmosphere. But, does this result translate from the laboratory to the open ocean?

Field measurements support this hypothesis. Wave noise was measured along 7 transects across 3 different tropical cyclones. Figure 1 shows measurements of wave noise as a function of frequency for wind speeds (colored lines) varying from 15 to 40 meters per second. Notice that all spectra change slope between 2000-4000 Hertz, annotated by the vertical grey box. The frequency of this break point is thus nearly independent of wind speed.

Since we expect this frequency to be related to the Hinze scale, these data suggest that the Hinze scale, and therefore the bubble size distribution, is the same across the entire range of wind speeds. We support this conclusion with a model of sound generation by bubbles (yellow/black lines). The model predicts a peak near the Hinze frequency. Sound generation at lower frequencies is due to other physics and is not modeled here. Changing the turbulence dissipation rate by a factor of 5 moves the location of the peak by about a factor of 3, suggesting that if the turbulence intensity did change, we would see evidence of it in the wave noise.
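As a back-of-envelope check, combining a standard Hinze-scale estimate, a_H ~ C (σ/ρ)^(3/5) ε^(-2/5), with the Minnaert resonance converts a turbulence dissipation rate into the frequency where the spectral slope should break. The prefactor and the dissipation rates below are representative values from the turbulence literature, not measurements from this study:

```python
import math

def hinze_radius(eps, sigma=0.072, rho=1000.0, c=0.36):
    """Hinze scale (m): bubbles larger than this are fragmented by
    turbulence with dissipation rate eps (W/kg). sigma is the surface
    tension of water; the prefactor c is a representative value."""
    return c * (sigma / rho) ** 0.6 * eps ** -0.4

def minnaert_frequency(radius_m, p0=101325.0, rho=1000.0, gamma=1.4):
    """Resonance frequency (Hz) of an air bubble of radius a in water."""
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius_m)

# For saturated dissipation rates of order 1 W/kg, the Hinze radius is
# about a millimeter and the break frequency lands in the few-kHz range,
# consistent with the observed 2000-4000 Hz break in the spectra.
for eps in (0.5, 1.0, 2.0):
    a = hinze_radius(eps)
    print(f"eps={eps} W/kg: a_H ~ {a * 1e3:.2f} mm, "
          f"f ~ {minnaert_frequency(a):.0f} Hz")
```

The weak ε^(-2/5) dependence also shows why the break frequency is such a sensitive fingerprint: only a genuine change in turbulence intensity, not measurement noise, would move it appreciably.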

This combination of laboratory and field measurements with theory provides evidence of “scale invariance” of turbulence within breaking waves in the open ocean up to 40 meters per second wind speeds, supporting the turbulence saturation hypothesis and demonstrating the unique contributions that ambient sound measurements can make under severe conditions.

[Work supported by ONR, Ocean Acoustics Division and NSF. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Office of Naval Research].

 


Figure 1. Measurements of wave noise for wind speeds ranging from 15 to 40 meters per second. The black curves show model calculations of the wave noise under conditions of changing turbulence.

 

Designing Tunable Acoustic Metamaterials Using 3-D Computer Graphics – Mark J. Cops

Designing Tunable Acoustic Metamaterials Using 3-D Computer Graphics

Mark J. Cops – mcops@bu.edu
J. Gregory McDaniel – jgm@bu.edu
Boston University
110 Cummington Mall
Boston, MA 02215

Elizabeth A. Magliula – Elizabeth.magliula@navy.mil
Naval Undersea Warfare Center
1176 Howell Street, Building 1302
Newport, RI 02841

Popular version of paper 4aSAb12
Presented Wednesday morning, June 28, 2017
173rd ASA Meeting, Boston

In this work, software originally designed for display rendering, artistic graphics, animation, and video game creation is used to create new materials with tunable properties. This work has produced digital designs of materials that are essential to reducing sound and vibration.

Metamaterials are specially engineered materials which use a combination of structure and host materials to enable a wide range of material properties not ordinarily found in nature. Metallic foams are one such subset of metamaterials, which provide advantages for structural applications due to their high strength-to-weight ratio. Metallic foams can be manufactured through a variety of processes, such as casting or sintering, and can either be closed cell or open cell (Figure 1).


Figure 1. An open cell aluminum foam manufactured by ERG Aerospace Corp.

The ability to tune metallic foam properties for various noise and vibration mitigation applications is a valuable tool for industrial designers and engineers. The combination of 3-D computer graphics and finite-element software can be used to rapidly design, investigate, and classify material properties. OpenGL is a graphics programming interface used widely in computer graphics. With OpenGL, the programmer can create complex cellular structures by controlling a 3-D array of voxels, using signed distance functions to specify locations of solid material or void space. One remarkable thing about OpenGL is its inherent simplicity and its ability to create any surface that can be described mathematically. Two such materials, created with this approach, are shown in Figure 2.


Figure 2. (a) an Aluminum tetrahedron lattice with triangular struts. (b) A copper minimal surface geometry structure.
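As an illustration of the signed-distance idea (a sketch, not the authors’ actual OpenGL code), a minimal surface like the one in Figure 2b can be approximated by the gyroid implicit function; thresholding it on a voxel grid yields the solid/void array, from which the relative density follows directly:

```python
import numpy as np

def gyroid_shell(n=64, thickness=0.3, periods=2):
    """Voxelize a gyroid shell: solid where |g(x, y, z)| < thickness,
    with g = sin x cos y + sin y cos z + sin z cos x, the implicit
    approximation of a triply periodic minimal surface."""
    t = np.linspace(0, 2 * np.pi * periods, n)
    x, y, z = np.meshgrid(t, t, t, indexing="ij")
    g = np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)
    return np.abs(g) < thickness           # boolean solid/void voxel array

voxels = gyroid_shell()
relative_density = voxels.mean()           # solid fraction of the unit cell
print(f"relative density ~ {relative_density:.2f}")
```

Sweeping `thickness` while keeping the pore spacing fixed reproduces exactly the design knob described in the next paragraph: thicker walls give a denser foam.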

The relative density of these two foams was varied by keeping the pore spacing (the distance between void openings in the surface) constant and increasing the thickness of the material. To determine effective material properties, the designed foam structures were analyzed with the finite-element software Abaqus. Six different strain loading scenarios, representing tensile and shear loading in all orientations, were imposed on the structure, as shown in Figure 3.


Figure 3. Strain loading scenarios used in determining effective material properties.

We then determined numerically the effective static material properties, such as Young’s modulus and Poisson ratio. Figure 4 shows relative Young’s modulus and Poisson ratio values versus relative density for the foam in Figure 2b. Each blue point is one foam that was digitally designed and analyzed with the approach described. A striking trend emerges: properties are a quadratic function of relative density.


Figure 4. Material property curves for the foam in Figure 2b.
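The quadratic trend echoes the classic Gibson-Ashby scaling for open-cell solids, E/E_s ≈ C (ρ/ρ_s)^2. Given (relative density, relative modulus) pairs from runs like these, the exponent can be checked with a log-log fit; the data points below are invented for illustration, not taken from Figure 4:

```python
import numpy as np

# Hypothetical (relative density, relative Young's modulus) pairs,
# as might come from the finite-element runs; not the paper's data.
rho = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
E = np.array([0.0024, 0.0105, 0.0220, 0.0410, 0.0890])

# Fit E = C * rho^p on log-log axes: log E = p * log rho + log C.
p, logC = np.polyfit(np.log(rho), np.log(E), 1)
print(f"fitted exponent ~ {p:.2f}")   # near 2: quadratic in relative density
```

A designer can invert the fitted curve to read off the relative density needed for a target stiffness, which is precisely how the curves in Figure 4 are meant to be used.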

The useful feature of curves such as those in Figure 4, and others generated by this approach, is that designers can visualize the design space and the range of available material properties, and then select a relative density that meets their design criteria. Such foams can then be fabricated and implemented to serve a wide range of structural applications.

Effects of noise for workers in the transportation industry – Marion Burgess

Effects of noise for workers in the transportation industry

Marion Burgess m.burgess@adfa.edu.au
Brett Molesworth b.molesworth@unsw.edu.au

University of New South Wales, Australia

Popular version of paper
Presented June 28, 2017, in session 4aNSa, Measuring, Modeling, and Managing Transportation Noise I. 8:00 AM – 12:20 PM

173rd ASA Meeting, Boston

There are well-established limits for workplace noise based on the risk of hearing damage. For example, the 8-hour noise exposure level is limited to 85 decibels (when sound is this loud, you need to shout to talk to someone near you). There are also guidelines for acceptable noise levels in workplaces, which aim to ensure the noise will not be intrusive or affect the worker’s ability to do their tasks. For example, a design level for a general office may be 40 to 45 decibels (dBA), and for a ticket sales area, 45 to 50 dBA. In this range, noise should not have an adverse effect on the ability to complete a task.
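The 85-decibel limit is an 8-hour equivalent level: under the common 3-dB exchange rate, every 3-dB increase halves the allowable daily exposure. A small sketch of that rule (the exchange rate varies by jurisdiction; 3 dB is assumed here):

```python
def allowed_hours(level_db, limit_db=85.0, exchange_db=3.0):
    """Permissible daily exposure time (hours) under an equal-energy
    exchange rate: halves for every `exchange_db` above the 8-hour limit."""
    return 8.0 / 2 ** ((level_db - limit_db) / exchange_db)

# 85 dBA allows a full 8-hour shift; 94 dBA allows only one hour.
for level in (85, 88, 94, 100):
    print(f"{level} dBA -> {allowed_hours(level):.2f} h")
```

The steep halving is why even modest reductions in workplace noise levels buy large margins of safety.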

However, there are many work environments, particularly in the transportation industry, in which noise levels are above 50 dBA but employees must perform tasks that require a high level of concentration and attention. For pilots and for bus, truck and train drivers, noise levels in the area where they work can at times range from 65 to more than 75 dBA.

These workers all need to make safety-critical decisions and operate technical equipment in the presence of continuous noise generated from their vehicle’s engine. Transport check-in staff need to communicate and process passengers in noisy check-in halls where there is both vehicle and equipment noise as well as the noise from personnel around, such as “babble.”

In this paper, we discuss findings from a number of studies investigating the effect of constant noise at 65 dBA on various cognitive and memory skills. Two noise sources were used: one a wideband noise, like constant mechanical noise from an engine, and the other a babble noise of multiple persons’ incomprehensible speech. Language background is another factor that can increase cognitive load for workers communicating in a language that is not their native one.

The cognitive tasks tested working memory with an alphabet span test and recognition memory with a cued recall task. Signal-to-noise ratios of 0, -5 and -10 dB were used. Wideband noise was found to have a greater effect on working memory and recognition memory than babble noise. Non-native English speakers were also more affected by the wideband noise than by the babble noise. The subjective assessment, when subjects were asked their opinion of the effect of the noise and its annoyance, was also greater for wideband noise.

These findings reinforce the limitations of basing acceptability on a simple overall dBA value alone. The reduction in performance demonstrates the importance of reducing the noise levels within transportation workplaces.