3pAB1 – A Welcoming Whinny

David G. Browning decibeldb@aol.com
Peter D. Herstein – netsailor.ph@cox.net
BROWNING BIOTECH
139 Old North Road
Kingston, RI 02881
Popular Version of paper 3pAB1
Presented Tuesday afternoon, June 27, 2017
173rd ASA Meeting, Boston

Are you greeted with a welcoming whinny when you enter the barn? While doing research on horse whinnies (as part of the Equine Vocalization Project), we realized we were hearing more whinnies when horses were inside the barn than out. This led us to investigate further, and we came to realize it was vocalization adaptation. Horses have remarkable eyesight, with almost a 360° field of view, which they primarily rely on to observe and communicate when out in the open. In a barn, confined to a stall, their line of sight is often blocked. Quite remarkably, they learn to compensate by recognizing the sounds that are of interest — like that of the feed cart or even their owner’s footsteps — which they often salute with a whinny.

We were curious how widespread vocalization adaptation is in the animal world, and in searching the literature we found numerous interesting examples. Asian wild dogs (dholes), for example, hunt prey in packs, usually out in the open where they can visually keep track of the prey and their pack mates. When they encounter sight-limiting vegetation, however, they use a short, flat whistle they have developed to keep track of each other without interfering with their listening for the prey.

Jungles present further examples; they are uniquely challenging to animals for three reasons: visibility is limited, moving is difficult, and a vocalization has to be heard despite many other animals’ sounds. African rhinos out on the plain can make do with a simple bellow, as it is easy for another rhino to trot over and check them out. In contrast, the Sumatran rhino, always in the jungle, has a complex vocalization. Often compared to that of a whale, the vocalization is complex in order to be heard among the competing calls while providing enough information to entice another to slog over and check it out (or not).

The military uses the term “situational awareness” for something that is just as crucial to animals, and this work provides some examples of the acoustic compensations animals make when visibility is limited for some reason.

2pSAa – Three-in-one Sound Effects: A redirecting antenna, beam splitter and a sonar

Andrii Bozhko – AndriiBozhko@my.unt.edu
Arkadii Krokhin – Arkadii.Krokhin@unt.edu
Department of Physics
University of North Texas
1155 Union Circle #311427
Denton, TX 76201, USA

José Sánchez-Dehesa – jsdehesa@upvnet.upv.es
Francisco Cervera – fcervera@upvnet.upv.es
Wave Phenomena Group
Universitat Politècnica de València
Camino de Vera s/n
Valencia, ES-46022, Spain

Popular version of paper 2pSAa, “Redirection and splitting of sound waves by a periodic chain of thin perforated cylindrical shell.”
Presented Monday afternoon, June 26, 2017, 2:20, Room 201
173rd ASA Meeting, Boston

Any sound, whether the warble of an exotic bird or the noise of clunky machinery, is perceived by scientists as a complex mixture of many primitive sound waves — the so-called pure tones, which are simply vibrations at certain distinct frequencies. So, is it possible, we wondered, to break down such an acoustic compound into its constituents and separate one of those pure tones from the rest?

This can be achieved with any number of signal-processing techniques; however, a simple mechanical solution also exists in the form of a passive system — that is to say, one that does not have to be turned on to operate.

Here we demonstrate such a system: a linear, periodic arrangement of perforated metallic cylindrical shells in air (see Fig. 1), which serves as a redirecting antenna and a splitter for sound within the audible range.

Figure 1 – A periodic array of perforated cylindrical shells mounted outside the Department of Electronic Engineering, Polytechnic University of Valencia. Credit: Sánchez-Dehesa

Each shell in the chain (see Fig. 2) is a weak scatterer, meaning a sound wave passes through it virtually undistorted, and strong redirection of an incoming signal can occur only if the chain is sufficiently long. When the number of shells in the chain is large enough, e.g. several dozen, the shells participate in a collective oscillatory motion, each one transferring its vibration to its neighbor via the surrounding air. Such a self-consistent wave is referred to as an eigenmode of our system, and it is best thought of as collective oscillations of air localized in the vicinity of the shells’ surfaces.

Figure 2 – A close-up of an aluminum perforated cylindrical shell. Credit: Sánchez-Dehesa

Now, there are two important concepts regarding wave motion that deserve careful clarification. When describing an acoustic wave, we can look at how the regions of maximum (or minimum) pressure move through the medium (air, in this case); the pace and direction of their motion combine into a single characteristic called the phase velocity of the wave.

Another important property of the wave is its group velocity, which indicates how fast and in which direction the actual sound propagates. In many cases, the phase velocity and the group velocity of the wave have the same direction (the case of normal dispersion), but it is also not uncommon for the group velocity of a wave to be opposite to the phase velocity (the case of anomalous dispersion).
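
For a pure tone of angular frequency ω and wavenumber k, the two velocities can be written compactly (a standard textbook relation rather than anything specific to our system):

v_phase = ω / k        (how fast, and in which direction, the pressure crests travel)
v_group = dω / dk      (how fast, and in which direction, the sound energy travels — the slope of the dispersion relation ω(k))

Whenever the dispersion relation ω(k) has a negative slope, the group velocity points opposite to the phase velocity, which is exactly the anomalous dispersion described above.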

The idea of exploiting the fundamental eigenmodes of our system with either normal or anomalous dispersion is what enables the chain of perforated shells to redirect and focus sound. Namely, an acoustic signal that impinges on the chain can trigger the collective vibration of the shells – the eigenmode – and, thus, launch a wave running along the chain.

Of course, most of the sound would pass through the chain, but nevertheless the amount of energy that is redirected along the chain in the form of an eigenmode is quite noticeable. The eigenmode excitation only occurs if the phase velocity of the eigenmode matches that of the incoming signal, and for a specific incident angle, the matching condition supports several frequencies within the audible range.
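
Stated a little more formally, the excitation is a phase-matching condition familiar from periodic structures; written in a generic form (with our own angle convention, not necessarily the exact notation of the paper), it reads:

(ω / c) · sin θ + 2πn / a = q(ω),      n = 0, ±1, ±2, …

where ω is the frequency of the incoming wave, c the speed of sound in air, θ the incidence angle measured from the normal to the chain, a the spacing between shells, q(ω) the wavenumber of the eigenmode running along the chain, and n an integer reflecting the chain’s periodicity. Because q(ω) is a fixed property of the chain, choosing the incidence angle θ singles out only those discrete frequencies at which the two sides coincide, which is why just a few pure tones in the audible range are redirected.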

What is crucial here is that the dispersion of the chain’s eigenmodes at those frequencies alternates between normal and anomalous, which means that by varying only the frequency of the incident acoustic wave (with everything else remaining unchanged) one can switch the direction of the eigenmode’s propagation along the chain.

Animation 1 – An acoustic wave of frequency 2625 Hz is incident on the chain of perforated shells at an angle of 10°. The excited eigenmode, having anomalous dispersion, propagates down the chain. Credit: Bozhko

Animation 2 – Same as in animation 1, but the frequency is 3715 Hz, with the excited eigenmode having normal dispersion now. The redirected sound then propagates upwards along the chain. Credit: Bozhko

Animations 1 and 2 illustrate such intriguing behavior of the chain of perforated shells. In one case, the eigenmode that is excited has normal dispersion and carries energy upwards along the chain. In the other case, the dispersion is anomalous and the eigenmode travels downwards. The 10° incidence angle of the sound in both cases is the same, but the frequencies are different.

One possible application of such a redirecting antenna would be an acoustic beam splitter. Indeed, if an incoming signal has a wide spectrum of frequencies, then two pure tones with frequencies depending on the parameters of the chain and the angle of incidence can be extracted and redirected along the chain.

Due to the different dispersion behavior of the eigenmodes corresponding to these two tones, the eigenmodes propagate in opposite directions. Thus, splitting of two pure tones becomes possible with a chain of perforated shells. Since the frequencies of the excited eigenmodes change smoothly with the incidence angle, that angle can be recovered from them. This means the chain may also serve as a passive acoustic detector that determines the direction to the source of an incoming signal.

2aAAc3 – Vocal Effort, Load and Fatigue in Virtual Acoustics

Pasquale Bottalico, PhD. – pb@msu.edu
Lady Catherine Cantor Cutiva, PhD. – cantorcu@msu.edu
Eric J. Hunter, PhD. – ejhunter@msu.edu

Voice Biomechanics and Acoustics Laboratory
Department of Communicative Sciences and Disorders
College of Communication Arts & Sciences
Michigan State University
1026 Red Cedar Road
East Lansing, MI 48824

Popular version of paper 2aAAc3, presented Monday morning, June 26, 2017
Acoustics ’17 Boston, 173rd Meeting of the Acoustical Society of America and the 8th Forum Acusticum

Mobile technologies are changing the lives of millions of people around the world. According to the World Health Organization (2014), around 90% of the population worldwide could benefit from the opportunities mobile technologies represent, and at relatively low cost. Moreover, research on the use of mobile technology for health has grown substantially over the last decade.

One of the most common health applications of mobile technologies is self-monitoring. Wearable devices for tracking movement in our daily lives are becoming popular. If such technology works for monitoring physical activity, could similar technology be used to monitor how we use our voices in daily life? This is particularly important considering that several voice disorders are related to how and where we use our voices.

As a first step toward answering this question, this study investigated how people talk in a variety of situations simulating common vocal communication environments. Specifically, the study was designed to better understand how self-reported vocal fatigue is related to objective voice parameters like voice intensity, pitch, and their fluctuation, as well as the duration of the vocal load. This information allows us to identify trends between the self-perception of vocal fatigue and objective parameters that may quantify it. To this end, we invited 39 college students (18 males and 21 females) to read a text under different “virtual-simulated” acoustic conditions. These conditions comprised 3 reverberation times, 2 noise conditions, and 3 auditory feedback levels, for a total of 18 tasks per subject presented in a random order. For each condition, the subjects answered questions addressing their perception of vocal fatigue on a visual analogue scale (Figure 1).
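
As an illustration of this factorial design, the sketch below builds the 3 × 2 × 3 = 18 conditions and shuffles them into a per-subject order; the condition labels are placeholders of our own, not the settings actually used in the experiment.

# Minimal sketch of the 3 x 2 x 3 factorial design (18 tasks per subject).
# Labels are illustrative placeholders, not the levels used in the study.
import itertools
import random

reverberation = ["RT_short", "RT_medium", "RT_long"]   # 3 reverberation times
noise = ["quiet", "noisy"]                             # 2 noise conditions
feedback = ["FB_low", "FB_medium", "FB_high"]          # 3 auditory feedback levels

conditions = list(itertools.product(reverberation, noise, feedback))
assert len(conditions) == 18

def task_order(subject_seed):
    """Return the 18 conditions in a randomized order for one subject."""
    rng = random.Random(subject_seed)
    order = conditions[:]
    rng.shuffle(order)
    return order

print(task_order(subject_seed=1))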

Figure 1. Visual analogue scales used to determine self-reported vocal fatigue. Credit: Bottalico

The experiment was conducted in a quiet, sound-isolated booth. We recorded speech samples using an omnidirectional microphone placed at a fixed distance of 30 centimeters from the subject’s mouth. The 18 virtual-simulated acoustic conditions were presented to the participants through headphones carrying a real-time mix of the participants’ voice with the simulated environment (noise and/or reverberation). Figure 2 presents the measurement setup.

Figure 2. Schematic experimental setup. Credit: Bottalico

To get a better understanding of the environments, we spliced together segments from the recordings of one subject. This example of the recorded speech material and the feedback that the participants received through the headphones is presented in Figure 3 (and in the attached audio clip).

Figure 3. Example of the recording. Credit: Bottalico

Using these recordings, we investigated how participants’ reports of vocal fatigue related to (1) gender, (2) ΔSPL mean (the variation in intensity from the typical voice intensity of each subject), (3) fo (fundamental frequency, or pitch), (4) ΔSPL standard deviation (the modulation of the intensity), (5) fo standard deviation (the modulation of the intonation) and (6) the duration of the vocal load (represented by the order of administration of the tasks, which was randomized per subject).
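
The paper reports the relative importance of these six predictors; as a rough illustration of how such a relation could be fitted (our own assumption, not the authors’ actual analysis), one might regress the fatigue ratings on the predictors with a per-subject random effect:

# Illustrative only: one plausible way to relate fatigue ratings to the six predictors.
# The column names, the synthetic data, and the choice of a linear mixed model are our
# assumptions, not a description of the statistical analysis used in the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subjects, n_tasks = 39, 18
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_tasks),
    "gender": np.repeat(rng.choice(["M", "F"], n_subjects), n_tasks),
    "dSPL_mean": rng.normal(0, 3, n_subjects * n_tasks),
    "f0_mean": rng.normal(150, 30, n_subjects * n_tasks),
    "dSPL_sd": rng.normal(8, 2, n_subjects * n_tasks),
    "f0_sd": rng.normal(25, 8, n_subjects * n_tasks),
    "task_order": np.tile(np.arange(1, n_tasks + 1), n_subjects),
    "fatigue": rng.uniform(0, 10, n_subjects * n_tasks),  # placeholder ratings
})

model = smf.mixedlm(
    "fatigue ~ gender + dSPL_mean + f0_mean + dSPL_sd + f0_sd + task_order",
    data=data,
    groups=data["subject"],   # random intercept for each participant
)
print(model.fit().summary())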

As we show in Figure 4, the duration of speaking (vocal load) and the modulation of the speech intensity are the most important factors in explaining vocal fatigue.

Figure 4. Relative importance of the six predictors in explaining self-reported vocal fatigue.

While the results show that participants’ perception of vocal fatigue increases as the duration of the vocal load, the pitch, and the modulation of the intonation increase, of particular interest is the association between vocal fatigue and voice intensity and its modulation. Specifically, there seems to be a sweet spot, or comfort range, of intensity modulation (around 8 dB) that allows a lower level of vocal fatigue. What this means for vocalists is that in continuous speech, vocal fatigue may be decreased by adding longer pauses and by avoiding excessive increases in voice intensity. Our hypothesis is that this comfort range represents the right amount of modulation to allow the vocal folds to rest while avoiding an excessive increase in voice intensity.

The complexity of a participant’s perceived vocal fatigue related to intensity (ΔSPL) and the modulation of the intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, is shown in Video 1 and Video 2 for males and females, respectively. The videos assume the average values of pitch and modulation of intonation (120 Hz and 20 Hz for males; 186 Hz and 32 Hz for females).

Video 1. Self-reported vocal fatigue as a function of the intensity (ΔSPL) and the modulation of the intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, for males, assuming an average pitch (120 Hz) and modulation of intonation (20 Hz).

Video 2. Self-reported vocal fatigue as a function of the intensity (ΔSPL) and the modulation of the intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, for females, assuming an average pitch (186 Hz) and modulation of intonation (32 Hz).

If mobile technology is going to be used by people to monitor their daily voice use in different environments, the results of this study provide valuable information for its design. A low-cost mobile system with output that is easy to understand is possible.

References
1. World Health Organization. (2014). mHealth: New horizons for health through mobile technologies: second global survey on eHealth. 2011. WHO, Geneva.

2. Bort-Roig, J., Gilson, N. D., Puig-Ribera, A., Contreras, R. S., & Trost, S. G. (2014). Measuring and influencing physical activity with smartphone technology: a systematic review. Sports Medicine, 44(5), 671-686.

Acknowledgements
Research was in part supported by the NIDCD of the NIH under Award Number R01DC012315. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

2aBAa7 – Stimulating the brain with ultrasound: treatment planning

Joseph Blackmore – joseph.blackmore@wadham.ox.ac.uk
Robin Cleveland – robin.cleveland@eng.ox.ac.uk
Institute of Biomedical Engineering, University of Oxford, Roosevelt Drive, Oxford, OX3 7DQ, United Kingdom

Michele Veldsman – michele.veldsman@ndcn.ox.ac.uk
Christopher Butler – chris.butler@ndcn.ox.ac.uk
Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, United Kingdom

Popular version of paper 2aBAa7
Presented Monday morning, June 26, 2017
173rd ASA Meeting, Boston

Many disorders of the brain, such as OCD and essential tremor, can be treated by stimulating or disrupting specific locations in the brain. This can be done by placing an electrode directly at the site needing disruption, a procedure known as deep brain stimulation, but it is invasive: it involves drilling a hole in the skull and inserting a wire through the brain tissue.

Non-invasive alternatives do exist in which electrodes or magnets are placed on the scalp, avoiding the need for surgery. However, these methods can only be used to treat brain regions quite close to the skull and have limited spatial specificity.

Recently, low-intensity focused ultrasound has also been shown to stimulate localized regions of the brain, creating, for example, the sensation of seeing stars in your eyes (known as phosphenes) [1], when targeted to a region of the brain associated with vision. However, steering and focusing an ultrasound beam to the correct location within the brain remains a challenge due to the presence of the skull.

Skull bone, with its varying thickness, curvature, and structure, strongly distorts and attenuates ultrasound waves and can shift the focal point away from the intended target. Consequently, in current human trials, as many as 50 percent of ultrasound stimulation attempts did not elicit a response [1,2].

One route to more robust focusing is to use ultrasound transducers with hundreds, or even thousands, of elements, each of which is individually tuned to account for variations in skull properties so that all waves focus on the intended target location within the brain. However, this equipment is very complex and expensive, which, at this early stage of research into ultrasound-induced neuromodulation, has limited progress.

Here, we performed a numerical study to assess whether single-element transducers — which are relatively inexpensive — could be used in combination with numerical modelling to achieve sufficient targeting in the brain. This would provide a solution that can be used as a research tool to further understand the mechanisms behind ultrasound-induced neuromodulation.

Figure 1 – Propagation of sound waves from the brain target out to a spherical receiver outside the skull. The received signals are then optimized to determine the best position for an ultrasound source to deliver sound back through the skull. The locations for different optimization methods are depicted by the colored dots.

The method works by importing a three-dimensional CT image into a computer and placing a virtual acoustic source at the desired target location. A supercomputer then calculates how the sound travels from the target, through brain tissue and the skull bone, onto a sphere outside the head, as depicted in Figure 1.

From the predicted signals, it is possible to determine the best position for an ultrasound source that can send sound back through the skull to the target location. We employed different strategies for choosing the source location (the dots in Figure 1), and for the optimal strategy we predict that a single-element transducer can localize sound to a region about 36 millimeters long and 4 millimeters in diameter, at depths of up to 45 millimeters into brain tissue, as depicted in Figure 2.
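
A toy example of the kind of selection step involved is sketched below: by acoustic reciprocity, a position on the sphere where the wave radiated from the target arrives strongly is a good candidate for sending energy back to the target. The data here are synthetic and the "pick the strongest arrival" rule is only one plausible strategy, not the optimization actually used in the paper.

# Toy illustration of one plausible source-selection strategy (not the paper's method):
# given pressures recorded on a sphere of candidate positions after a pulse is emitted
# from the virtual source at the target, pick the candidate with the strongest arrival.
import numpy as np

rng = np.random.default_rng(0)
n_candidates = 500

# Synthetic stand-ins for the simulated data: candidate positions on a unit sphere and
# the complex pressure each one records from the virtual source at the target.
positions = rng.normal(size=(n_candidates, 3))
positions /= np.linalg.norm(positions, axis=1, keepdims=True)
pressures = rng.normal(size=n_candidates) + 1j * rng.normal(size=n_candidates)

def best_source_position(positions, pressures):
    """Return the candidate position where the received amplitude is largest."""
    return positions[np.argmax(np.abs(pressures))]

print(best_source_position(positions, pressures))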


Figure 2 – Focusing the sound waves to a region deep within the brain from a curved single-element transducer. The red cross indicates the intended target. The blue contours represent the acoustic intensity relative to the intensity at the target: -3 dB corresponds to 50% of the intensity at the target, -6 dB to 25%, and -12 dB to about 6%.
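
For readers unfamiliar with decibels, the conversion behind these contour labels is the standard one:

I / I_target = 10^(L/10),  so L = -3 dB gives about 50%, L = -6 dB about 25%, and L = -12 dB about 6% of the intensity at the target.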

[1] Lee, Wonhye, et al. “Transcranial focused ultrasound stimulation of human primary visual cortex.” Scientific Reports 6 (2016).
[2] Lee, Wonhye, et al. “Image-guided transcranial focused ultrasound stimulates human primary somatosensory cortex.” Scientific Reports 5 (2015): 8743.

1aAAc3 – Can a Classroom “Sound Good” to Enhance Speech Intelligibility for Children?

Puglisi Giuseppina Emma – giuseppina.puglisi@polito.it
Bolognesi Filippo – filippo.bolognesi@studenti.polito.it
Shtrepi Louena – louena.shtrepi@polito.it
Astolfi Arianna – arianna.astolfi@polito.it
Dipartimento di Energia
Politecnico di Torino
Corso Duca degli Abruzzi, 24
10129 Torino (Italy)

Warzybok Anna – a.warzybok@uni-oldenburg.de
Kollmeier Birger – birger.kollmeier@uni-oldenburg.de
Medizinische Physik and Cluster of Excellence Hearing4All
Carl von Ossietzky Universität Oldenburg
D-26111 Oldenburg (Germany)

Popular version of paper 1aAAc3
Presented Sunday morning, June 25, 2017

The architectural design of classrooms should account for the requirements of the activities taking place there. As an example, the ergonomics of tables and chairs should fit pupils’ age and change with school grade. Shading components should be easily integrated with windows so that excessive light doesn’t interfere with visual tasks. Together with these well-known aspects, a classroom should also “sound” appropriate, since the teacher-to-student communication process is at the base of learning. But what does this mean?

First, we must pay attention to the school grade under investigation. Kindergartens and primary schools aim at direct teacher-to-student contact, so the environment should passively support speech. Conversely, university classrooms are designed to host hundreds of students, actively supporting speech through amplification systems. Second, the classroom acoustics need to be focused on the enhancement of speech intelligibility. Practical design must therefore be oriented toward reducing the reverberation time (i.e. reducing the number of sound reflections in the space) and the noise levels, since these factors have been shown to negatively affect teachers’ vocal effort and students’ speech intelligibility.

Acoustical interventions typically happen after a school building is completed, whereas it would be far better to integrate them from the beginning of a project. Regardless of when they are taken into consideration, they generally rely on absorptive surfaces positioned on the lateral walls or ceiling.

Absorbent panels are made of porous materials, such as natural fibers, glass wool, or rock wool, that absorb incident sound energy. A portion of the captured energy is transformed into heat, so the part of the energy reflected back into the space as sound is strongly reduced (Figure 1). However, recent studies and updated standards have investigated whether acoustic treatments should include both absorbent and diffusive surfaces, to account for teaching and learning needs at the same time, since an excessive reduction of reflections does not support speech and has been shown to require a higher vocal effort from teachers.

Figure 1 – Scheme of the absorptive (top) and diffusive (bottom) properties of surfaces, with the respective polar diagrams that represent the spatial response of the different surfaces. In the top case (absorption), the incident energy (Ei) is absorbed by the surface and the reflected energy (Er) is strongly reduced. In the bottom case (diffusion), Ei is partially absorbed by the surface and Er is reflected in the space in a non-specular way. Note that these graphs are adapted from D’Antonio and Cox (reference: Acoustic absorbers and diffusers theory, design and application. Spon Press, New York, 2004).
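
In the notation of Figure 1, the fraction of incident energy a surface removes from the room is usually summarized by the absorption coefficient, a standard quantity in room acoustics:

α = 1 − E_r / E_i

where E_i is the incident sound energy and E_r the energy reflected back into the room; α ranges from 0 (fully reflective) to 1 (fully absorptive).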

Therefore, we found that optimal classroom acoustic design should be based on a balance of absorption and diffusion, which can be obtained by strategically placing such surfaces in the environment. Diffusive surfaces, in fact, are able to redirect the sound energy into the environment in a non-specular way, so that acoustic defects like strong echoes can be avoided and early reflections can be preserved to improve speech intelligibility, especially in the rear of a room.

The few available studies on this approach refer to simulated classroom acoustics, so our work contributes new data based on measured, realistic conditions. We looked at an existing unfurnished classroom in an Italian primary school with a long reverberation time (around 3 seconds). We used software for the acoustical simulation of enclosed spaces to simulate the untreated room and obtain the so-called “calibrated model,” which reproduces the acoustic parameters measured in the field.

Then, based on this calibrated model, in which the acoustic properties of the existing surfaces fit the real ones, we simulated several solutions for the acoustic treatment. This included adjusting the absorption and scattering coefficients of the surfaces to characterize different configurations of absorbent and diffusive panels. All of the combinations were designed to reach the optimal reverberation time for teaching activities and to increase the Speech Transmission Index (STI) and Definition (D50) parameters, which are intelligibility indexes that quantify how well an environment supports speech comprehension.
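
Definition, for example, has a compact standard formula: it is the fraction of the arriving sound energy that reaches the listener within the first 50 milliseconds (the direct sound and early reflections, which reinforce speech), computed from the squared room impulse response p(t) between talker and listener:

D50 = ( ∫ from 0 to 50 ms of p²(t) dt ) / ( ∫ from 0 to ∞ of p²(t) dt )

Values closer to 100% mean that most of the energy arrives early and supports intelligibility; the STI is a more elaborate index based on how well the slow modulations of a speech-like signal are preserved on their way from talker to listener.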


Figure 2 – Schematic representation of the investigated classroom. On the left is the actual condition of the room, with a vaulted ceiling and untreated walls; on the right is the optimized acoustic condition, with absorptive (red) and diffusive (blue) panels positioned on the walls or on the ceiling (as baffles) based on the literature and experimental studies. The typical position of the teacher’s desk (red dot) in the classroom is also indicated.

Figure 2 illustrates the actual and simulated classrooms where absorptive (red) and diffusive (blue) surfaces were placed. The optimized configuration (see Figure 3 for an overview of the acoustic parameters) was selected as the one with the highest STI and D50 in the rear area, and consisted of absorbent panels and baffles on the lateral walls and ceiling and diffusive surfaces on the bottom of the front wall.

Figure 3 – Summary of the acoustical parameters of the investigated classroom in its actual condition. Values in italics are the outcomes of the in-field measurements, whereas the others were obtained from simulations; values in bold comply with the reference standard. If an average was performed over frequency, it is indicated as a subscript. The scheme on the right shows the mutual positions of the talker-teacher (red dot) and the farthest receiver-student (green dot), between which the distance-dependent parameters Definition (D50, %) and Speech Transmission Index (STI, -) were calculated. The reverberation time (T30, s) was measured at several positions around the room.

We evaluated the effectiveness of the acoustic treatment as the enhancement of speech intelligibility using the Binaural Speech Intelligibility Model. Its outcomes are given as speech reception thresholds (SRTs) corresponding to a fixed level of speech intelligibility, set here to 80% to account for the listening demands of learning. Across the tested positions, which covered several talker-to-listener distances and noise-source positions (Figure 4), model predictions indicated an average improvement in SRTs of up to 6.8 dB after the acoustic intervention, an improvement that can be “heard” experimentally.

Here you can hear a sentence in the presence of energetic masking noise, that is, noise without informational content but with a spectral distribution that replicates that of speech.

Here you will hear the same sentence and noise under optimized room acoustics.

Figure 4 – Scheme of the tested talker-to-listener mutual positions for the evaluation of speech intelligibility under different acoustic conditions (i.e. classroom without acoustic treatment and with optimized acoustics). The red dots represent the talker-teacher position; the green dots represent the listener-student positions; the yellow dots represent the noise positions that were separately used to evaluate speech intelligibility in each listener position.

To summarize, we demonstrated an easy-to-use and effective design methodology for architects and engineers, applied to a case study representative of typical Italian primary school classrooms, to optimize the acoustics of a learning environment. It is of great importance to make a classroom sound good, since we cannot switch off our ears. Hearing well in classrooms is essential to establishing the basis of learning and of social relationships between people.

 

How Important are Fish Sounds for Feeding, Contests, and Reproduction?

Amorim MCP – amorim@ispa.pt
Mare – Marine and Environmental Sciences Centre
ISPA-Instituto Universitário
Lisbon, Portugal

Many animals are vocal and produce communication sounds during social behaviour. These acoustic signals can help an animal gain access to limited resources like food or mates, or win a fight. Although vocal communication is widespread among fishes, our knowledge of how fish sounds influence reproduction and survival lags considerably behind that for terrestrial animals. In this work, we studied how fish acoustic signals may confer an advantage in gaining access to food and mates, and how they can influence reproductive outcomes.

Triglid fish feed in groups and we found that those that grunt or growl while approaching food are more likely to feed than silent conspecifics. Because fish emit sounds during aggressive exhibitions just before grasping food, our results suggest that uttering sounds may help to deter other fish from gaining access to disputed food items.

Figure 1. A grey gurnard (on the right) emits a grunt during a frontal display and gains access to food. Credit: Amorim/MCP

Lusitanian toadfish males nest under rocks and crevices in shallow water and emit an advertising call that sounds like a boat whistle to attract females. Receptive females lay their eggs in the nest and leave the male to provide parental care. We found that vocal activity of male toadfish advertises their body condition. Males that call more often and for longer bouts are more likely to entice females into their nest to spawn, thereby enjoying a higher reproductive success.

Figure 2. In the Lusitanian toadfish, (a) maximum calling rate is related to reproductive success, measured as the number of eggs obtained (b). Credit: Amorim/MCP

The codfish family contains many vocal species. During the spawning season, male haddock produce short series of slowly repeated knocks that become longer and faster as courtship proceeds. The fastest sounds are heard as a continuous humming. An increase in knock production rate culminates in a mating embrace, which then results in simultaneous egg and sperm release. This suggests that male haddock sounds serve to bring male and female fish together in the same part of the ocean and to synchronise their reproductive behaviour, thereby maximizing external fertilization.

Figure 3. Sounds made by male haddock during a mating embrace help to synchronize spawning and maximize fertilization. Credit: Hawkins/AD

This set of studies highlights the importance of fish sounds for key fitness-related traits, such as competitive feeding, the choice of mates, and the synchronization of reproductive activities. In the face of the global change scenarios predicted for Earth’s marine ecosystems, there is an urgent need to better understand the importance of acoustic communication for fish survival and fitness.

 

Anthropogenic noise, for example, is increasingly changing the natural acoustic environment that has shaped fish acoustic signals, and there is still very little knowledge of its impact on fishes.