1aAAc3 – Can a Classroom “Sound Good” to Enhance Speech Intelligibility for Children?

Puglisi Giuseppina Emma – giuseppina.puglisi@polito.it
Bolognesi Filippo – filippo.bolognesi@studenti.polito.it
Shtrepi Louena – louena.shtrepi@polito.it
Astolfi Arianna – arianna.astolfi@polito.it
Dipartimento di Energia
Politecnico di Torino
Corso Duca degli Abruzzi, 24
10129 Torino (Italy)

Warzybok Anna – a.warzybok@uni-oldenburg.de
Kollmeier Birger – birger.kollmeier@uni-oldenburg.de
Medizinische Physik and Cluster of Excellence Hearing4All
Carl von Ossietzky Universität Oldenburg
D-26111 Oldenburg (Germany)

Popular version of paper 1aAAc3
Presented Sunday morning, June 25, 2017

The architectural design of classrooms should account for the requirements of the activities that take place there. As an example, the ergonomics of tables and chairs should fit pupils' age and change with school grade. Shading components should be easily integrated with windows so that excessive light does not interfere with visual tasks. Together with these well-known aspects, a classroom should also "sound" appropriate, since teacher-to-student communication is at the basis of learning. But what does this mean?

First, we must pay attention to the school grade under investigation. Kindergartens and primary schools rely on direct teacher-to-student contact, so the environment should passively support speech. Conversely, university classrooms are designed to host hundreds of students and actively support speech through amplification systems. Second, classroom acoustics need to focus on the enhancement of speech intelligibility: practical design must aim to reduce the reverberation time (i.e., the persistence of sound reflections in the space) and the noise levels, since both factors have been shown to negatively affect teachers' vocal effort and students' speech intelligibility.
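
While the design work described below relies on room-acoustic simulation software, the first-order physics of this reduction is captured by Sabine's classic formula, which ties the reverberation time T60 to the room volume V and the total equivalent absorption area A as T60 = 0.161·V/A. The sketch below (plain Python; the room dimensions and absorption coefficients are illustrative assumptions, not data from our study) shows how adding absorbent panels shortens the reverberation time:

```python
def sabine_rt(volume_m3, surfaces):
    """Sabine reverberation time T60 = 0.161 * V / A (seconds),
    where A is the equivalent absorption area in m^2.

    surfaces: iterable of (area_m2, absorption_coefficient) pairs.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Illustrative classroom, 7 x 9 x 3.5 m (assumed values):
volume = 7.0 * 9.0 * 3.5                 # ~220 m^3
bare = [(240.0, 0.05)]                   # ~240 m^2 of hard, reflective surfaces
treated = [(190.0, 0.05), (50.0, 0.90)]  # add 50 m^2 of absorbent panels

print(f"bare room: {sabine_rt(volume, bare):.1f} s")     # ~3.0 s
print(f"treated:   {sabine_rt(volume, treated):.1f} s")  # ~0.7 s
```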

Acoustical interventions typically happen after a school building is completed, whereas it would be far more effective to integrate them from the beginning of a project. Regardless of when they are taken into consideration, the intervention generally consists of absorptive surfaces positioned on the lateral walls or the ceiling.

Absorbent panels are made of porous materials, such as natural fibers or glass and rock wool, that capture incident sound energy. A portion of the captured energy is transformed into heat, so the energy reflected back into the space as sound is strongly reduced (Figure 1). However, recent studies and standards updates have investigated whether acoustic treatments should include both absorbent and diffusive surfaces, to account for teaching and learning needs at the same time, since an excessive reduction of reflections does not support speech and has been shown to demand higher vocal effort from teachers.

Figure 1 – Scheme of the absorptive (top) and diffusive (bottom) properties of surfaces, with the respective polar diagrams that represent the spatial response of each surface. In the top case (absorption), the incident energy (Ei) is absorbed by the surface and the reflected energy (Er) is strongly reduced. In the bottom case (diffusion), Ei is partially absorbed by the surface and Er is reflected into the space in a non-specular way. Graphs adapted from Cox and D'Antonio (Acoustic Absorbers and Diffusers: Theory, Design and Application. Spon Press, New York, 2004).

Therefore, we found that optimal classroom acoustic design should be based on a balance of absorption and diffusion, achieved by strategically placing both kinds of surfaces in the environment. Diffusive surfaces, in fact, redirect sound energy into the room in a non-specular way, so that acoustic defects like strong echoes are avoided and early reflections are preserved to improve speech intelligibility, especially in the rear of a room.

The few available studies on this approach refer to simulated classroom acoustics, so our work contributes new data based on measured, realistic conditions. We looked at an existing unfurnished classroom in an Italian primary school with a long reverberation time (around 3 seconds). Using software for the acoustical simulation of enclosed spaces, we simulated the untreated room and obtained a so-called "calibrated model," which reproduces the acoustic parameters measured in the field.

Then, based on this calibrated model, in which the acoustic properties of the existing surfaces fit the real ones, we simulated several solutions for the acoustic treatment. This included adjusting the absorption and scattering coefficients of the surfaces to characterize different configurations of absorbent and diffusive panels. All of the combinations were designed to reach the optimal reverberation time for teaching activities, and to increase the Speech Transmission Index (STI) and Definition (D50), two intelligibility indexes that quantify how well an environment supports speech comprehension.
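
To make one of these metrics concrete: D50 is simply the fraction of the impulse-response energy that arrives within 50 ms of the direct sound, the portion that reinforces rather than blurs speech. A minimal sketch of its computation (Python; it assumes you already have a measured or simulated room impulse response, and note that STI is a more elaborate metric built on modulation transfer functions, which we do not reproduce here):

```python
import numpy as np

def definition_d50(impulse_response, fs):
    """Early-to-total energy ratio of a room impulse response.

    Energy arriving within 50 ms of the direct sound is 'useful' for
    speech; values above roughly 0.5 indicate good intelligibility.
    """
    h = np.asarray(impulse_response, dtype=float)
    onset = int(np.argmax(np.abs(h)))   # direct-sound arrival
    cut = onset + int(0.050 * fs)       # 50 ms after the direct sound
    energy = h ** 2
    return energy[onset:cut].sum() / energy[onset:].sum()
```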

Figure 2 – Schematic representation of the investigated classroom. On the left, the actual condition of the room, with vaulted ceiling and untreated walls; on the right, the optimized acoustic condition, with absorptive (red) and diffusive (blue) panels positioned on the walls or on the ceiling (as baffles) based on literature and experimental studies. The typical position of the teacher's desk (red dot) is also indicated.

Figure 2 illustrates the actual and simulated classrooms where absorptive (red) and diffusive (blue) surfaces were placed. The optimized configuration (see Figure 3 for an overview of the acoustic parameters) was selected as the one with the highest STI and D50 in the rear area; it consisted of absorbent panels and baffles on the lateral walls and ceiling, and diffusive surfaces on the bottom of the front wall.

Figure 3 – Summary of the acoustical parameters of the investigated classroom in its actual condition. Values in italics are outcomes of the in-field measurements, whereas the others were obtained from simulations; values in bold comply with the reference standard. Where an average over frequency was performed, it is indicated as a subscript. The scheme on the right shows the mutual position of the talker-teacher (red dot) and the farthest receiver-student (green dot), for which the distance-dependent parameters Definition (D50, %) and Speech Transmission Index (STI, -) were calculated. The reverberation time (T30, s) was measured at several positions around the room.

We evaluated the effectiveness of the acoustic treatment as the enhancement of speech intelligibility, using the Binaural Speech Intelligibility Model. Its outcomes are speech reception thresholds (SRTs), the signal-to-noise ratios needed to reach a fixed level of speech intelligibility, set here to 80% to reflect the listening demands of learning. Across the tested positions, which covered several talker-to-listener distances and noise-source positions (Figure 4), model predictions indicated an average improvement in SRTs of up to 6.8 dB after the acoustic intervention, an improvement that can be "heard" experimentally.
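
We do not reproduce the model itself here, but the meaning of an SRT is easy to illustrate: given intelligibility scores predicted at several signal-to-noise ratios, the SRT is the SNR at which the psychometric function crosses the chosen criterion, 80% in our case, so a lower SRT means the room supports speech better. A sketch with invented example numbers (not our measured data):

```python
import numpy as np

def srt_from_psychometric(snrs_db, scores, criterion=0.80):
    """SNR (dB) at which intelligibility crosses the criterion,
    found by linear interpolation between the given points."""
    snrs = np.asarray(snrs_db, dtype=float)
    s = np.asarray(scores, dtype=float)
    order = np.argsort(s)            # np.interp needs increasing x values
    return float(np.interp(criterion, s[order], snrs[order]))

# Treating the room shifts the whole curve toward lower SNRs,
# so the same 80% intelligibility is reached in worse noise:
untreated = srt_from_psychometric([-9, -6, -3, 0, 3],
                                  [0.15, 0.35, 0.60, 0.82, 0.95])
treated = srt_from_psychometric([-9, -6, -3, 0, 3],
                                [0.45, 0.70, 0.88, 0.97, 0.99])
print(f"SRT improvement: {untreated - treated:.1f} dB")
```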

Here you can hear a sentence in the presence of energetic masking noise, i.e., noise without informational content but with a spectral distribution that replicates that of speech.

Here you will hear the same sentence and noise under optimized room acoustics.

Figure 4 – Scheme of the tested talker-to-listener mutual positions for the evaluation of speech intelligibility under different acoustic conditions (i.e. classroom without acoustic treatment and with optimized acoustics). The red dots represent the talker-teacher position; the green dots represent the listener-student positions; the yellow dots represent the noise positions that were separately used to evaluate speech intelligibility in each listener position.

To summarize, we demonstrated an easy-to-use and effective design methodology for architects and engineers, applied to a case study representative of typical Italian primary school classrooms, to optimize acoustics for learning. It is of great importance to make a classroom sound good, since we cannot switch off our ears. Hearing well in classrooms is essential to establishing the basis of learning and of social relationships between people.


How Important are Fish Sounds for Feeding, Contests, and Reproduction?

Amorim MCP – amorim@ispa.pt
Mare – Marine and Environmental Sciences Centre
ISPA-Instituto Universitário
Lisbon, Portugal

Many animals are vocal and produce communication sounds during social behaviour. These acoustic signals can help gain access to limited resources like food or mates, or to win a fight. Despite vocal communication being widespread among fishes, our knowledge of what influence fish sounds have on reproduction and survival lags considerably behind that for terrestrial animals. In this work, we studied how fish acoustic signals may confer an advantage in gaining access to food and mates, and how they can influence reproduction outcomes.

Triglid fish (gurnards) feed in groups, and we found that those that grunt or growl while approaching food are more likely to feed than silent conspecifics. Because fish emit sounds during aggressive displays just before grasping food, our results suggest that uttering sounds may help deter other fish from gaining access to disputed food items.

Figure 1. A grey gurnard (on the right) emits a grunt during a frontal display and gains access to food. Credit: Amorim/MCP

Lusitanian toadfish males nest under rocks and crevices in shallow water and emit an advertisement call that sounds like a boat whistle to attract females. Receptive females lay their eggs in the nest and leave the male to provide parental care. We found that the vocal activity of male toadfish advertises their body condition: males that call more often and in longer bouts are more likely to entice females into their nest to spawn, and thereby enjoy higher reproductive success.

Figure 2. In the Lusitanian toadfish (a), maximum calling rate is related to reproductive success, measured as the number of eggs obtained (b). Credit: Amorim/MCP

The codfish family contains many vocal species. During the spawning season, male haddock produce short series of slowly repeated knocks that become longer and faster as courtship proceeds; the fastest sounds are heard as a continuous humming. The increase in knock rate culminates in a mating embrace that results in simultaneous egg and sperm release. This suggests that male haddock sounds serve to bring male and female fish together in the same part of the ocean and to synchronise their reproductive behaviour, thereby maximizing external fertilization.

Figure 3. Sounds made by male haddock during a mating embrace help to synchronize spawning and maximize fertilization. Credit: Hawkins/AD

This set of studies highlights the importance of fish sounds for key fitness-related traits, such as competitive feeding, the choice of mates, and the synchronization of reproductive activities. In the face of the global change scenarios predicted for Earth's marine ecosystems, there is an urgent need to better understand the importance of acoustic communication for fish survival and fitness.

Anthropogenic noise, for example, is increasingly changing the natural acoustic environment that has shaped fish acoustic signals, and there is still very little knowledge of its impact on fishes.

2pAOb – Methane in the ocean: observing gas bubbles from afar

Tom Weber – tom.weber@unh.edu
University of New Hampshire
24 Colovos Road
Durham, NH 03824

Popular version of paper 2pAOb
Presented Tuesday Afternoon, November 29, 2016
172nd ASA Meeting, Honolulu

The more we look, the more we find bubbles of methane, a greenhouse gas, leaking from the ocean floor (e.g., [1]). Some of the methane in these gas bubbles may travel to the ocean surface where it enters the atmosphere, and some is consumed by microbes, generating biomass and the greenhouse gas carbon dioxide in the process [2]. Given the vast quantities of methane thought to be contained beneath the ocean seabed [3], understanding how much methane goes where is an important component of understanding climate change and the global carbon cycle.

Fortunately, gas bubbles are really easy to observe acoustically. The gas inside a bubble acts like a very soft spring compared to the nearly incompressible ocean water surrounding it; if we compress this spring with an acoustic wave, the water surrounding the bubble moves with it as an entrained mass. This simple mass-spring system is not conceptually different from the suspension system (the spring) on your car (the mass): driving over a washboard dirt road at the wrong speed (using the right acoustic frequency) can elicit a very uncomfortable (or loud) response. We try to avoid these conditions in our vehicles, but exploiting the acoustic resonance of a gas bubble helps us detect centimeter-sized (or smaller) bubbles even when they are kilometers away (Fig. 1).
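
That resonance is quantified by Minnaert's classic formula, which links a bubble's radius and the local pressure to the frequency at which it "rings." A minimal sketch (Python; the heat-capacity ratio for methane and the seawater density are textbook values, and surface tension and damping are neglected):

```python
import math

def minnaert_frequency(radius_m, depth_m, gamma=1.31, rho=1025.0):
    """Resonance frequency (Hz) of a gas bubble in water (Minnaert, 1933).

    gamma: heat-capacity ratio of the gas (~1.31 for methane)
    rho:   seawater density, kg/m^3
    """
    pressure = 101_325.0 + rho * 9.81 * depth_m   # hydrostatic pressure, Pa
    return math.sqrt(3.0 * gamma * pressure / rho) / (2.0 * math.pi * radius_m)

# A 1 mm-radius methane bubble rings near 3 kHz at the surface,
# and an order of magnitude higher at 1000 m depth:
print(f"{minnaert_frequency(1e-3, 0.0):.0f} Hz")     # ~3100 Hz
print(f"{minnaert_frequency(1e-3, 1000.0):.0f} Hz")  # ~31000 Hz
```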

Figure 1. Top row: observations of methane gas bubbles exiting the ocean floor (picture credit: NOAA OER). The red circle shows methane hydrate (methane ice). Bottom row: acoustic observations of methane gas bubbles rising through the water column.

Methane bubbles rising from the ocean floor undergo a complicated evolution. Gas is transferred both into and out of a bubble as it rises, so the gas composition of a bubble near the sea surface can look very different than it did at its ocean-floor origin, and coatings on the bubble wall can change both the speed at which the bubble rises and the rate at which gas enters or exits it. Understanding the various ways in which methane bubbles contribute to the global carbon cycle requires understanding these details of a bubble's lifetime in the ocean. We can use acoustic remote-sensing techniques, combined with our understanding of the acoustic response of resonant bubbles, to help answer the question of where the methane gas goes. In doing so, we map the locations of methane gas bubble sources on the seafloor (Fig. 2), measure how high into the water column gas bubbles rise, and use calibrated acoustic measurements to help constrain models of how bubbles change during their ascent.
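
A deliberately oversimplified, single-gas sketch gives a feel for what such models look like (Python; the transfer velocity, rise speed, solubility, and temperature below are rough assumed values, and real models must also track gas influx, hydrate skins, and changing rise speeds):

```python
import math

R_GAS  = 8.314      # gas constant, J/(mol K)
T_W    = 277.0      # water temperature, K (assumed, ~4 C)
RHO    = 1025.0     # seawater density, kg/m^3
K_L    = 1e-4       # gas transfer velocity, m/s (assumed, clean bubble)
H_CP   = 1.4e-5     # Henry solubility of methane, mol/(m^3 Pa)
W_RISE = 0.20       # bubble rise speed, m/s (assumed constant)

def rise_and_dissolve(radius0_m, depth0_m, dt=1.0):
    """Forward-Euler sketch: one methane bubble rising and dissolving.

    Returns (final_depth_m, moles_remaining); final depth of 0 means
    the bubble survived to the sea surface.
    """
    z = depth0_m
    p = 101_325.0 + RHO * 9.81 * z
    n = p * (4.0 / 3.0) * math.pi * radius0_m**3 / (R_GAS * T_W)
    while z > 0.0 and n > 0.0:
        a = (3.0 * n * R_GAS * T_W / (4.0 * math.pi * p)) ** (1.0 / 3.0)
        # Outflux = area * transfer velocity * concentration gradient;
        # ambient dissolved methane is taken as zero, so gradient = H_CP * p.
        n -= 4.0 * math.pi * a**2 * K_L * H_CP * p * dt
        z -= W_RISE * dt
        p = 101_325.0 + RHO * 9.81 * max(z, 0.0)
    return max(z, 0.0), max(n, 0.0)

# Does a 5 mm-radius bubble released at 100 m depth reach the surface?
print(rise_and_dissolve(0.005, 100.0))
```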

Figure 2. A map of acoustically detected methane gas bubble seeps (blue dots) in the northern Gulf of Mexico in water depths of approximately 1000-2000 m. Oil pipelines on the seabed are shown as yellow lines.

Not surprisingly, working on these questions generates new questions to answer, including how the acoustic response of large, wobbly bubbles (Fig. 3) differs from that of small, spherical ones, and what impact methane hydrate (methane-ice) coatings have on both the fate of the bubbles and their acoustic response. Given how much of the ocean remains unexplored, we expect to be learning about methane gas seeps and their role in our climate for a long time to come.

Figure 3. Images of large, wobbly bubbles that are approximately 1 cm in size. Bubbles like these are being investigated to help understand how their acoustic response differs from that of an ideal, spherical bubble. Picture credit: Alex Padilla.

[1] Skarke, A., Ruppel, C., Kodis, M., Brothers, D., & Lobecker, E. (2014). Widespread methane leakage from the sea floor on the northern US Atlantic margin. Nature Geoscience, 7(9), 657-661.

[2] Kessler, J. (2014). Seafloor methane: Atlantic bubble bath. Nature Geoscience, 7(9), 625-626.

[3] Ruppel, C. D. (2011). Methane hydrates and contemporary climate change. Nature Education Knowledge, 3(10), 29.

3aMU8 – Comparing the Chinese erhu and the European violin using high-speed camera measurements

Florian Pfeifle – Florian.Pfeifle@uni-hamburg.de

Institute of Systematic Musicology
University of Hamburg
Neue Rabenstrasse 13
22765 Hamburg, Germany
Popular version of paper 3aMU8, “Organologic and acoustic similarities of the European violin and the Chinese erhu”
Presented Wednesday morning, November 30, 2016
172nd ASA Meeting, Honolulu

0. Overview and introduction
Have you ever wondered what a violin solo piece like Paganini’s La Campanella would sound like if played on a Chinese erhu, or how an erhu solo performance of Horse Racing, a Mongolian folk song, would sound on a modern violin?

Our work investigates the acoustic similarities and differences of these two instruments, using high-speed camera measurements and piezoelectric pickups to record and quantify the motion and vibrational response of each part of the instruments individually.
The research question is: where do the acoustic differences between the two instruments begin, and which underlying physical mechanisms are responsible?

1. The instruments
The Chinese erhu is the most popular instrument in the group of bowed string instruments known in China as huqin. It plays a central role in various kinds of classical music as well as in regional folk music styles. Figure 1 shows a handcrafted master luthier erhu. In orchestral and ensemble music its role is comparable to that of the European violin, as it often carries the lead voice.

A handcrafted master luthier erhu. This instrument is used in all of our measurements.

Figure 1. A handcrafted master luthier erhu. This instrument is used in all of our measurements.

In contrast to the violin, the erhu is played in an upright position, resting on the left thigh of the musician. It has two strings, compared to the violin's four. The bow is placed between the two strings instead of bowing from the top, as European bowed instruments are usually played. In addition to this difference in bowing technique, the left hand does not stop the strings against a neck but presses the firmly taut strings, changing their freely vibrating length. A similarity between the two instruments is the use of a horsehair bow to excite the strings. An instrument similar to the erhu is documented from the 11th century onwards; the violin, from the 15th century. The earlier historical development is still not fully known, but there is some consensus among researchers that bowed lutes originated in central Asia, presumably somewhere along the Silk Road. Early pictorial sources point to a place of origin in the region known as Transoxiana, spanning parts of modern Uzbekistan and Turkmenistan.

Comparing instruments from different cultural spheres and backgrounds is a many-faceted problem, as historical, cultural, structural and musical factors all play an important role in the aesthetic perception of an instrument. Measuring and comparing acoustical features of instruments can objectify this endeavour, at least to a certain degree. The method applied in this paper therefore aims at finding and comparing differences and similarities on an acoustical level, using different data acquisition methods. The measurement setup is depicted in Figure 2.

Measurement setup for both instrument measurements.

Figure 2. Measurement setup for both instrument measurements.

The vibration of the strings is recorded using a high-speed camera that can capture the deflection of bowed strings at a very high frame rate. An exemplary video of such a measurement is shown in Video 1.

Video 1.  A high-speed recording of a bowed violin string.

The recorded motion of a string can then be tracked with sub-pixel accuracy using tracking software that traces the trajectory of a defined point on the string. The motion of the bridge is measured by attaching to it a miniature piezoelectric transducer, which converts microscopic motions into measurable electric signals. We record the radiated instrument sound using a standard measurement microphone positioned one meter from the instrument's main radiating part. This setup yields three different types of data: the bowed string alone, without the influence of the instrument body; the motion of the bridge and the string; and a recording of the radiated instrument sound under normal playing conditions.
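
For readers curious about the tracking step, one standard recipe (an illustration, not necessarily the exact algorithm of our tracking software) is demeaned cross-correlation of each video line with a template of the marked point, refined to sub-pixel accuracy by fitting a parabola through the correlation peak:

```python
import numpy as np

def track_subpixel(frames, template, row):
    """Track a marked point on the string along one image row.

    frames:   (n_frames, height, width) grayscale video array
    template: 1-D intensity profile of the point to follow
    Returns sub-pixel horizontal positions, one per frame.
    """
    positions = []
    for frame in frames:
        line = frame[row].astype(float)
        corr = np.correlate(line - line.mean(),
                            template - template.mean(), mode="valid")
        k = int(np.argmax(corr))
        if 0 < k < corr.size - 1:
            y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
            denom = y0 - 2.0 * y1 + y2
            if denom != 0.0:
                # Parabola through the peak and its neighbours gives
                # the sub-pixel offset of the true maximum.
                k += 0.5 * (y0 - y2) / denom
        positions.append(k + template.size / 2.0)
    return np.array(positions)
```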

Returning to the initial question, we can now analyze and compare each measurement individually. Even more exciting, we can combine measurements of the string deflection of one instrument with the response of the other instrument's body. In this way we can approximate how much influence the body has on the sound colour of the instrument, and whether it is possible to make an erhu performance sound like a violin performance, or vice versa. The following sound files convey an idea of this methodology by combining the string motion of part of a Mongolian folk song played on an erhu with the body of a European violin. Sound example 1 is a microphone recording of the erhu piece, and sound example 2 is the same recording using only the string measurement combined with a European violin body. To hear the difference clearly, headphones or reasonably good loudspeakers are recommended.

Audio File 1. A section of an erhu solo piece recorded with a microphone.

Audio File 2. A section of the same erhu piece combining the erhu string measurement with a violin body.
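
One simple way to realize such a string-body combination, sketched below with hypothetical file names standing in for the actual measurements, is to convolve the string-only signal with an impulse response of the other instrument's body (Python; mono WAV files at a shared sample rate are assumed):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names standing in for the actual measurements:
fs, string_motion = wavfile.read("erhu_string_tracked.wav")
fs_ir, body_ir = wavfile.read("violin_body_impulse_response.wav")
assert fs == fs_ir, "both signals must share one sample rate"

# Filtering the string signal through the violin body's impulse
# response approximates playing the erhu string on a violin corpus.
hybrid = fftconvolve(string_motion.astype(float), body_ir.astype(float))
hybrid /= np.max(np.abs(hybrid))          # normalize to avoid clipping

wavfile.write("erhu_string_on_violin_body.wav", fs,
              (hybrid * 32767).astype(np.int16))
```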

2. Discussion
The results clearly show that the violin body has a noticeable influence on the timbre, or tone quality, of the piece when compared to the microphone recording of the erhu. Even so, due to the specific tonal quality of the piece itself, it does not sound like a composition from a European tradition. This means that stylistic and expressive idiosyncrasies are easily recognizable and influence the perceived aesthetic of an instrument. The proposed technique could be used to extend the comparison to other instruments, such as plucked lutes like the guitar and pi'pa, or the mandolin and ruanxian.

4aPPa24 – Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task

Takahiro Tamesue – tamesue@yamaguchi-u.ac.jp
Yamaguchi University
1677-1 Yoshida, Yamaguchi
Yamaguchi Prefecture 753-8511
Japan

Popular version of poster 4aPPa24, “Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task”
Presented Thursday morning, December 1, 2016
172nd ASA Meeting, Honolulu

Open offices that make effective use of limited space and encourage dialogue, interaction, and collaboration among employees are becoming increasingly common. However, productive work-related conversation might actually decrease the performance of other employees within earshot, more so than random, meaningless noise. When carrying out intellectual activities involving memory or arithmetic tasks, it is a common experience for noise to cause an increased psychological impression of "annoyance," leading to a decline in performance, and this is more apparent for meaningful noise, such as conversation, than for meaningless noise. In this study, we investigated the impact of meaningless and meaningful noise on volunteers' selective attention and cognitive performance, as well as the degree of subjective annoyance those noises caused, through physiological and psychological experiments.

The experiments were based on the so-called "odd-ball" paradigm, a test used to examine selective attention and information-processing ability. In the odd-ball paradigm, subjects detect and count rare target events embedded in a series of repetitive events, which requires keeping attention fixed on the stimuli. In the auditory trials, subjects counted how many times the infrequent target sound occurred under meaningless or meaningful noise over a 10-minute period. The infrequent sound, appearing 20% of the time, was a 2 kHz tone burst; the frequent sound was a 1 kHz tone burst. In a visual odd-ball test, subjects observed pictures flashing on a PC monitor while meaningless or meaningful sounds were played to both ears through headphones. The infrequent image was a 10 x 10 centimeter red square; the frequent one was a green square. At the end of each trial, the subjects also rated their level of annoyance with each sound on a seven-point scale.
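
To make the auditory design concrete, the following sketch generates such a stimulus sequence (Python; the burst duration and the one-second spacing between stimuli are illustrative assumptions, while the 20% target rate and the tone frequencies are as in the study):

```python
import numpy as np

FS = 44_100  # audio sample rate, Hz

def tone_burst(freq_hz, dur_s=0.05, fs=FS):
    """Short tone burst with a Hann envelope to avoid onset clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2.0 * np.pi * freq_hz * t) * np.hanning(t.size)

rng = np.random.default_rng(seed=1)
n_stimuli = 300
is_target = rng.random(n_stimuli) < 0.20   # 20% rare 2 kHz targets
gap = FS                                   # assumed 1 s onset-to-onset spacing

sequence = np.zeros(n_stimuli * gap)
for i, rare in enumerate(is_target):
    burst = tone_burst(2000.0 if rare else 1000.0)
    start = i * gap
    sequence[start : start + burst.size] += burst

print(f"targets for the subject to count: {int(is_target.sum())}")
```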

During the experiments, the subjects' brain waves were measured through electrodes placed on their scalps. In particular, we looked at so-called event-related potentials: very small voltages generated in brain structures in response to specific events or stimuli, visible in the electroencephalographic waveforms. Example waveforms of event-related potentials under no external noise, after appropriate averaging, are shown in Figure 1. The N100 component peaks negatively about 100 milliseconds after the stimulus, and the P300 component peaks positively around 300 milliseconds after the stimulus; both are related to selective attention and working memory. Figures 2 and 3 show the event-related potentials for the infrequent sound under meaningless and meaningful noise, respectively. The N100 and P300 components are smaller in amplitude and longer in latency under meaningful noise than under meaningless noise.
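
The "appropriate averaging" works because the ongoing EEG background is essentially random with respect to the stimulus, while the evoked response is time-locked to it; averaging many stimulus-aligned epochs cancels the former and leaves the latter. A minimal single-channel sketch (Python; the epoch window is a typical assumed choice):

```python
import numpy as np

def erp_average(eeg, event_samples, fs, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to stimulus onsets (one channel).

    eeg:           1-D voltage trace
    event_samples: sample indices of stimulus onsets
    Each epoch is baseline-corrected with its 100 ms pre-stimulus mean.
    """
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = []
    for s in event_samples:
        if s + i0 >= 0 and s + i1 <= eeg.size:
            epoch = eeg[s + i0 : s + i1].astype(float)
            epoch -= epoch[: -i0].mean()   # subtract pre-stimulus baseline
            epochs.append(epoch)
    return np.mean(epochs, axis=0)         # noise cancels, the ERP remains
```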

Figure 1. Averaged waveforms of evoked event-related potentials for the infrequent sound under no external noise.
Figure 2. Averaged waveforms of evoked event-related potentials for the infrequent sound under meaningless noise.
Figure 3. Averaged waveforms of auditory evoked event-related potentials under meaningful noise.

We employed a statistical method called principal component analysis to identify the latent components; four principal components were extracted, as shown in Figure 4. Because the component scores under meaningful noise were smaller than under the other noise conditions, we conclude that meaningful noise reduces these components of the event-related potentials. Thus, selective attention to cognitive tasks was influenced by the degree of meaningfulness of the noise.
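
In essence, PCA treats each averaged ERP waveform as a point in a high-dimensional space (one dimension per time sample) and finds the few orthogonal waveforms that explain most of the variance across conditions. A minimal sketch of the decomposition (Python; the four-component choice mirrors our analysis, while the data-matrix layout is an assumption):

```python
import numpy as np

def pca_components(erps, n_components=4):
    """PCA of an ERP matrix (rows: trials/conditions, cols: time samples).

    Returns the component waveforms (loadings) and the score of each
    row on each component, via singular value decomposition.
    """
    X = erps - erps.mean(axis=0)          # center each time sample
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[:n_components]          # latent waveforms over time
    scores = X @ loadings.T               # expression strength per row
    return loadings, scores
```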

Figure 4. Loadings of the principal component analysis.
Figure 5. Subjective experience of annoyance (auditory odd-ball paradigms).

Figure 5 shows the results for annoyance in the auditory odd-ball paradigms. The subjective experience of annoyance increased with the meaningfulness of the noise. Overall, whether noise is meaningless or meaningful strongly influenced not only selective attention to auditory stimuli during cognitive tasks, but also the subjective experience of annoyance.

This means that when designing sound environments for spaces used for cognitive tasks, such as workplaces or schools, it is appropriate to consider not only the sound level but also the meaningfulness of the noise likely to be present. Surrounding conversations often disturb the business operations conducted in open offices. Because it is difficult to soundproof an open office, a way to mask meaningful speech with some other sound would be of great benefit for achieving a comfortable sound environment.