3aAA1 – Are restaurants and bars in New York City too loud?

Gregory Scott –  greg@soundprint.co
SoundPrint
P.O. Box 74
New York, NY 10150

Popular version of paper 3aAA1, “Analyses of crowd-sourced sound levels, logged from more than 2250 restaurants and bars in New York City”
Presented Wednesday, December 06, 2017, 7:50-8:10 AM, Studio 9
174th ASA meeting, New Orleans

For several decades there has been a significant need to better educate the public about noise pollution, and over the past few years an increasing number of media articles have claimed that eating and drinking venues are getting too loud. This loudness problem is likely due, in part, to background music or to architectural design that enhances rather than abates interior sound. Such design elements include open kitchens and stripped-down, hard surfaces, along with fewer tablecloths, carpets and panels to absorb sound.

Loud environments make it harder for people to connect in conversation: noise is the second most common complaint among diners in Zagat’s Annual Survey, and in New York City it is the top complaint, with 72% of surveyed diners actively avoiding eateries that are too loud [1-2]. Loud noise also puts hearing health at risk; it is the most common modifiable environmental cause of hearing loss, which affects 24% of adults [3]. The Centers for Disease Control and Prevention recommends avoiding prolonged exposure to loud environments to prevent noise-induced hearing loss [4-5].

This is the first exploratory study to capture, on a large scale and on a continuing basis, the average sound levels of restaurants and bars. The free iOS SoundPrint app was used to measure and submit the sound levels of New York City venues to its publicly accessible database. More than 1,800 Manhattan restaurants and bars were measured at least three times during prime-time days and hours (Wednesday through Saturday evenings, roughly 7:00-9:00 PM) from July 2015 to June 2017. The measured sound levels are organized into four categories (Quiet, Moderate, Loud and Very Loud) based on two dimensions: whether they are conducive to conversation and whether they are safe for hearing health. More discussion of how these categories were selected and defined is provided in the full paper.
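The four-way bucketing described above can be sketched in a few lines of code. The threshold values below are placeholder assumptions chosen for illustration; the actual category boundaries are defined in the full paper.

```python
def categorize(dba, quiet_max=70, moderate_max=75, loud_max=80):
    """Map an average sound level in dBA to one of the four categories.

    The cutoffs are illustrative placeholders, not the paper's own
    boundaries.
    """
    if dba <= quiet_max:
        return "Quiet"
    if dba <= moderate_max:
        return "Moderate"
    if dba <= loud_max:
        return "Loud"
    return "Very Loud"

print(categorize(78))  # the average Manhattan restaurant level
print(categorize(81))  # the average Manhattan bar level
```

With these assumed cutoffs, the average restaurant (78 dBA) lands in "Loud" and the average bar (81 dBA) in "Very Loud", matching the qualitative conclusions reported below.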

The results in Table 2 and Table 3 show that the average sound level for Manhattan restaurants is 78 dBA, making restaurants, on average, too loud for conversation. For bars, the average is 81 dBA, making them not only too loud for conversation but also potentially unsafe for hearing health. About 71% of restaurants and 90% of bars measured exhibit sound levels that are not conducive to conversation, and approximately 31% of restaurants and 60% of bars have measured levels that are potentially dangerous to hearing health. These numbers reach as high as 70% in specific Manhattan neighborhoods such as Flatiron, Gramercy, the East Village and the Lower East Side, where average measured sound levels run as high as 82 dBA. Segmentation by neighborhood tells another story: moving from uptown to downtown Manhattan, average sound levels in restaurants and bars tend to increase, so you are more likely to find a quieter venue on the Upper West Side or Upper East Side than in the Village (see Table 2 and Table 3).
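A detail worth noting about those averages: because decibels are logarithmic, sound levels cannot simply be averaged arithmetically; the standard approach is an energy average. A minimal sketch (the three measurement values are hypothetical):

```python
import math

def average_spl(levels_db):
    """Energy-average a list of sound pressure levels in dB.

    A plain arithmetic mean would understate the contribution of the
    loudest measurements; instead convert each level to linear energy,
    average, and convert back to decibels.
    """
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Three hypothetical prime-time measurements of one venue:
print(round(average_spl([76.0, 79.0, 82.0]), 1))  # -> 79.7, not 79.0
```

The energy average (79.7 dB) sits above the arithmetic mean (79.0 dB) because the loudest visit dominates the total acoustic energy.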

In addition, segmenting restaurants by cuisine shows a varying range of average sound levels: Indian, Chinese and Japanese restaurants are among the relatively quieter ones compared to Mexican, Latin, American, Spanish and Mediterranean restaurants. SoundPrint data collectors observed that the quieter restaurants tend to have less background music and more sound-absorbing features, and that their patrons tend not to raise their voices as much as patrons at the other cuisines (see Table 4).

In sum, the data suggest that the growing number of media articles calling restaurant sound levels too loud are correct. In New York City, a majority of the surveyed restaurants and bars have average sound levels that make it difficult for patrons to converse without raising their voices, and a high number approach levels known to be dangerous to hearing health. A person randomly walking into a restaurant or bar in New York City during prime days and hours is more likely than not to encounter an auditory environment that is "too loud."

  1. Zagat Survey. The State of American Dining in 2016.
  2. James M. New York City restaurant survey pet peeves and dining stats; 2013.
  3. Carroll YI, Eichwald J, Scinicariello F, et al. Vital Signs: Noise-Induced Hearing Loss Among Adults — United States 2011–2012. MMWR Morb Mortal Wkly Rep. ePub: 7 February 2017.
  4. Carroll YI, Eichwald J, Scinicariello F, et al. Vital Signs: Noise-Induced Hearing Loss Among Adults — United States 2011–2012. MMWR Morb Mortal Wkly Rep. ePub: 7 February 2017.
  5. Themann CL, Suter AH, Stephenson MR. National research agenda for the prevention of occupational hearing loss—part 1. Semin Hear 2013;34:145–207. CrossRef

1pAAa6 – Soundscape of washroom equipment

Lucky Tsaih,
Yosua W. Tedja,
An-Chi Tsai, Julie Chen
Department of Architecture, National Taiwan University of Science and Technology,
Taipei, Taiwan.

Popular version of paper 1pAAa6, "Soundscape of washroom equipment and its application"
Presented Sunday, June 25, 2017
173rd Meeting of the Acoustical Society of America and the 8th Forum Acusticum

There is at least one toilet in your apartment, sometimes two in a house, or even three in a midrise building. There are lots of toilets in schools. Wow! Toilets are everywhere! How loud is a toilet flush?

Audio 1. Credit: Tedja

It is about 92 decibels. Since human hearing is less sensitive in the lower frequency regions, we hear it as only about 85 decibels, which is as loud as a truck driving by in front of you. Since most people want to sleep, work and study in a quiet space, a flushing toilet can disturb our sleep or break our concentration.
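The gap between the 92-decibel physical level and the roughly 85 decibels we hear comes from A-weighting, which discounts the low frequencies our ears are less sensitive to. Here is a sketch of how an A-weighted level is computed from octave-band levels. The flush spectrum below is invented purely for illustration (loudest in the low bands, as flush noise tends to be), though the weighting corrections themselves are the standard values.

```python
import math

# Standard A-weighting corrections (dB) at octave-band centre frequencies
# (per IEC 61672). Low frequencies are attenuated heavily, mirroring the
# reduced sensitivity of human hearing there.
A_WEIGHT = {63: -26.2, 125: -16.1, 250: -8.6, 500: -3.2,
            1000: 0.0, 2000: 1.2, 4000: 1.0, 8000: -1.1}

def overall_level(band_levels, weighting=None):
    """Sum octave-band levels (dB) into one overall level."""
    total = 0.0
    for freq, level in band_levels.items():
        if weighting:
            level += weighting[freq]
        total += 10 ** (level / 10)
    return 10 * math.log10(total)

# Hypothetical flush spectrum, dominated by the low bands (illustrative):
flush = {63: 88, 125: 86, 250: 82, 500: 80,
         1000: 78, 2000: 75, 4000: 72, 8000: 68}
print(round(overall_level(flush), 1))            # unweighted level, dB
print(round(overall_level(flush, A_WEIGHT), 1))  # A-weighted level, dBA
```

For this made-up spectrum the unweighted level comes out near 91.5 dB while the A-weighted level drops to about 83 dBA, the same kind of gap described in the text.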


Figure 1. Toilet sound and quiet space. Credit: Tsaih

So how good are your washroom wall, door and window at reducing toilet flush sound while you are sleeping, working or studying? In most cases a typical single layer of gypsum board wall is used, and it doesn't reduce much of the low-frequency sound, as Figure 2 shows.


Figure 2. Toilet sound and sound reduction of a typical GWB wall. Credit: Tsaih

So, during work, study or sleep, you will probably still hear the "hmmmmmmm" sound. The simulated sound below assumes the washroom has only walls, with no windows or doors.

This research shows how loud washroom equipment can be and what kind of noise control an architect should consider when designing washrooms next to spaces like bedrooms and classrooms. We measured and analyzed the sound pressure levels of washroom equipment. We also analyzed the sound transmission class and frequency spectrum of some typical washroom partitions to see whether they could reduce washroom equipment sound sufficiently.
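The core of that partition analysis can be sketched as a band-by-band subtraction of the wall's transmission loss from the source spectrum. All numbers below are hypothetical, chosen only to show the typical pattern of a single gypsum-board wall: weak insulation at low frequencies, rising toward the highs.

```python
# Hypothetical octave-band transmission loss (dB) of a single-layer
# gypsum board wall -- illustrative values only.
wall_tl = {125: 15, 250: 25, 500: 33, 1000: 40, 2000: 45, 4000: 48}

# Hypothetical flush-noise spectrum on the washroom side (dB):
source = {125: 86, 250: 82, 500: 80, 1000: 78, 2000: 75, 4000: 72}

# Level transmitted into the adjacent room, band by band (ignoring room
# absorption and flanking paths for simplicity):
received = {f: source[f] - wall_tl[f] for f in source}
for f, lvl in received.items():
    print(f"{f:>5} Hz: {lvl} dB")
```

With these assumed values, 71 dB leaks through at 125 Hz against only 24 dB at 4000 Hz, which is exactly why the low-frequency "hmmmmmmm" survives while the hiss does not.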

 

Audio 2. Credit: Tedja

Figure 3. Toilet sound in study room and bedroom. Credit: Tsai

In short, a wall that blocks toilet flush sound is necessary in our homes, classrooms and offices.
Figure 4. Learning and sleeping with toilet flush sound. Credit: Tedja and Chen

1aAAa2 – Can humans use echolocation to hear the difference between different kinds of walls?

David Pelegrin Garcia – david.pelegringarcia@kuleuven.be
KU Leuven, Dept. Electrical Engineering
Kasteelpark Arenberg 10 – box 2446
3001 Leuven, Belgium

Monika Rychtarikova – monika.rychtarikova@kuleuven.be
KU Leuven, Faculty of Architecture
Hoogstraat 51
9000 Gent, Belgium

Lukaš Zelem – lukas.zelem@stuba.sk
Vojtech Chmelík – vojtech.chmelik@stuba.sk
STU Bratislava, Dept. Civil Engineering
Radlinského 11
811 07 Bratislava, Slovakia

Leopold Kritly – leopold.kritly@gmail.com
Christ Glorieux – christ.glorieux@kuleuven.be
KU Leuven, Dept. Physics and Astronomy
Celestijnenlaan 200d – box 2416
3001 Leuven, Belgium

Popular version of 1aAAa2 Auditory recognition of surface texture with various scattering coefficients
Presented Sunday morning, June 25, 2017, 173rd ASA Meeting, Boston

When we switch on the light in a room, we see objects. As a matter of fact, we see the reflection of light from these objects, revealing their shape and color. This all seems to happen instantaneously since, due to the enormously high speed of light, the time light needs to travel from the light source to the object and then to our eye is extremely short. But how is it with sound? Can we "hear objects," or, more correctly, sound reflections from objects? In other words, can we echolocate?

We know that sound propagates much more slowly than light. Therefore, if we stand far enough from a large obstacle and clap our hands, shortly after the initial clapping sound we hear a clear reflection from the object, an echo (Figure 1). But is it possible to detect an object if we stand close to it? Can the shape or surface texture of an object be recognized from the "color" of the sound? And how does it work?
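How long after the clap the echo arrives follows directly from the speed of sound; a minimal sketch:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def echo_delay_ms(distance_m, c=SPEED_OF_SOUND):
    """Delay between a clap and its echo from a wall `distance_m` away.

    The sound travels to the wall and back, so the path length is twice
    the distance.
    """
    return 2 * distance_m / c * 1000

print(round(echo_delay_ms(16), 1))   # a distant wall: ~93 ms
print(round(echo_delay_ms(1.5), 1))  # close to the wall: ~8.7 ms
```

At 16 meters the reflection arrives some 93 ms after the click, long enough to be heard as a separate echo; at 1.5 meters it arrives within about 9 ms and fuses with the direct sound, changing its "color" instead.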


Figure 1. Sound arriving at the ears after emitting a ‘tongue click’ in the presence of an obstacle. Credit: Pelegrin-Garcia/KU Leuven

It is widely known that bats, dolphins and other animals use echolocation to orient themselves in their environment and to detect obstacles, prey, relatives or antagonists. It is less known that, with some practice, most people are also able to echolocate. In fact, echolocation makes a great difference in the lives of blind people who use it daily [1, 2], commonly referred to as "echolocators."

While echolocation is mainly used as an effective means of orientation and mobility, additional information can be extracted from listening to reflected sound. For example, features of an object's texture, size and shape can be deduced, and a meaning can be assigned to what is heard, such as a tree, a car or a fence. Furthermore, echolocators form a "map" of their surroundings by inferring where objects stand in relation to their body and how different objects relate to each other.

In our research, we focus on some of the most elementary auditory tasks that are required during echolocation: When is a sound reflection audible? Can people differentiate among sound reflections returned by objects with different shapes, fabric or surface textures?

In previous work [3] we showed that, by producing click sounds with their tongue, most sighted people without prior echolocation experience were able to detect reflections from large walls at distances as far as 16 meters under ideal conditions: an open field with no obstacles other than the reflecting wall and no background noise. Blind echolocators in a similar study [4], however, could detect reflections from much smaller objects at nearby distances below 2 meters.

In the present study, we investigated whether sighted people who had no experience with echolocation could distinguish between walls with different surface textures by just listening to a click reflected by the wall.

To answer this question, we performed listening tests with 16 sighted participants. We played back a pair of clicks, each with an added reflection: the first click from one kind of wall and the second from another. Participants reported whether or not they heard a difference between the two clicks. This was repeated at distances of 1.5 meters and 10 meters for all possible pairs of simulated walls with various geometries (Figure 2 shows some of these walls and the echoes they produced).


Figure 2. Sample of the wall geometries that we tested (from left to right, top row: staircase, parabolic (cave-like) wall, sinusoid wall and periodic squared wall; bottom row: narrow wall with an aperture, broad wall, narrow wall, convex circular wall), with the echoes they produced at distances of 1.5 and 10 m. Credit: Pelegrin-Garcia/KU Leuven

We found that most participants could distinguish the parabolic wall and the staircase from the rest of the walls at a distance of 10 meters. The parabolic (cave-like) wall returned much stronger reflections than all other walls due to acoustic focusing: sound emitted in different directions was reflected back by the wall to the point of emission. The staircase, on the other hand, returned a reflection with a "chirp" sound. This kind of sound was also the focus of a study at the Kukulcan temple in Mexico [5].

The results of our work support the hypothesis of a recent investigation [6] that suggests that prehistoric societies could have used echolocation to select the placement of rock art in particular caves that returned clearly distinct echoes at long distances.

 

[1] World Access for the Blind, "Our Vision is Sound", https://waftb.org. Retrieved 9th June 2017.

[2] Thaler, L. (2013). Echolocation may have real-life advantages for blind people: An analysis of survey data. Frontiers in Physiology, 4(98). http://doi.org/10.3389/fphys.2013.00098

[3] Pelegrín-García, D., Rychtáriková, M., & Glorieux, C. (2017). Single simulated reflection audibility thresholds for oral sounds in untrained sighted people. Acta Acustica United with Acustica, 103, 492–505. http://doi.org/10.3813/AAA.919078

[4] Rice, C. E., Feinstein, S. H., & Schusterman, R. J. (1965). Echo-Detection Ability of the Blind: Size and Distance Factors. Journal of Experimental Psychology, 70(3), 246–255.

[5] Trivedi, B. P. (2002). Was Maya Pyramid Designed to Chirp Like a Bird? National Geographic News (http://news.nationalgeographic.com/news/2002/12/1206_021206_TVMayanTemple.html). Retrieved 10th June 2017

[6] Mattioli, T., Farina, A., Armelloni, E., Hameau, P., & Díaz-Andreu, M. (2017). Echoing landscapes: Echolocation and the placement of rock art in the Central Mediterranean. Journal of Archaeological Science, 83, 12–25. http://doi.org/10.1016/j.jas.2017.04.008

2pAAb1 – The acoustics of rooms for music rehearsal and performance: The Norwegian approach

Jon G. Olsen – jon.olsen@musikk.no
Council for Music Organizations in Norway, Oslo, Norway

Jens Holger Rindel – jens.holger.rindel@multiconsult.no
Multiconsult, Oslo, Norway

Popular version of 2pAAb1, “The acoustics of rooms for music rehearsal and performance – the Norwegian approach”
Presented Monday afternoon, June 26, 2017, 1:20 pm.
173rd ASA Meeting / 8th Forum Acusticum, Boston, USA

Each week, local music groups in Norway use more than 10,000 rooms for rehearsals and concerts. Over 500,000 people sing, play or go to concerts every week. In Europe, over 40 million choir singers spend at least one evening in a rehearsal room. Professional musicians and singers use rehearsal rooms many hours a day. Most of the local concerts take place in rooms that are not designed for concert events, but are in schools, community centers, youth clubs and other rooms and spaces more or less suitable for playing music.

The size of the rooms varies from under 100 to over 10,000 cubic meters. The users cover a broad variety of music ensembles, mostly wind bands, choirs and other amateur ensembles. Since 2009, the Norwegian Council for Music Organizations (Norsk musikkråd) has completed more than 600 acoustical measurement reports on rooms used for rehearsals and concerts. All the reports are available online via a Google Map of Norway (http://database.musikklokaler.no/map).

The results are depressing: 85% of the rooms are not suited to the type of music for which they are used. The wrong kind of acoustics can force an ensemble into a poor balance between the instruments, making musical interaction much more difficult and reducing the possibility of developing a good sound, both for each musician and for the orchestra or choir as a whole.

Unsuitable acoustics reduce the musical quality of the group and give the conductor less scope to work on and develop that quality. They also reduce the joy of playing or singing in a local music group. As the famous conductor Mariss Jansons has said, "A good hall for the orchestra is as important as a good instrument is for a soloist."

Different types of music need different types of rooms and different acoustical conditions. We can divide the music genres into three main groups:

  • Acoustically soft music, such as singing and playing instruments that are relatively quiet, such as string instruments, guitars etc. and smaller woodwind ensembles.
  • Acoustically loud music, such as playing brass instruments and percussion instruments, brass bands, concert bands, symphony orchestras and opera singing.
  • Amplified music, such as pop/rock bands, amplified jazz groups etc.

The Norwegian Standardization Organization established a working group with participants from the Council for Music Organizations in Norway, the music industry, municipalities, acoustic consultants, the Union of Norwegian Musicians and others. Together, this group has developed the national standard NS 8178:2014, "Acoustic criteria for rooms and spaces for music rehearsal and performance."

The Norwegian standardization group has divided rooms into five categories and provided specific requirements for each:

  • Individual practice room (1-2 musicians practicing)
  • Small ensemble room (3-6 musicians, teaching rooms)
  • Medium size ensemble room (up to 20 musicians/singers)
  • Large ensemble room (for choir, school band, concert band, symphonic band with brass/percussion, acoustic big band)
  • Performance rooms, subdivided into four types of rooms
    • Amplified music club scenes (small jazz, pop, singer/songwriter)
    • Amplified music concert rooms (pop/rock/jazz/blues)
    • Acoustic loud music (concert band, symphony orchestra, brass band, big band)
    • Acoustic quiet music (vocal group, string orchestra, folk music group, chamber music)

VOLUME – the most important criterion
Too small a volume turns out to be the main problem for many ensembles. A survey of Norwegian choir rehearsal rooms shows that 54% of the rooms are excessively small (less than half the size they should have been), 22% are too small and only 24% have more or less enough volume.

Figure 1: Survey of the Norwegian singers' organization, spring 2016: rehearsal room size.

For wind bands, we see more or less the same situation: the rooms are generally too small. In music schools, many studios are also too small. The result is that the music is far too loud, and it is very difficult to work on sound quality and dynamic expression.

ROOM GEOMETRY – criterion number 2
This criterion poses fewer problems, apart from the fact that the room height is often too low, particularly in rehearsal rooms but also in a number of concert rooms. A low ceiling is bad for the sound quality of the instruments and makes it difficult for musicians to hear each other.

REVERBERATION TIME – criterion number 3
There are often problems with the reverberation time, which differ for each of the three types of music. For acoustic soft music, the reverberation time should be relatively long in order to support the music, but it is very often too short in rehearsal and concert rooms. For acoustic loud music, the reverberation time should be moderate to keep the music from becoming too loud, but it is often too long, or sometimes too short.

For amplified music, the reverberation time should be short, and this is quite often the case. However, it is especially important to have sufficiently short reverberation time in the bass (the low frequencies); otherwise the music makes an unpleasant booming sound.
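Reverberation time is commonly estimated with Sabine's formula, RT60 = 0.161·V/A, where V is the room volume and A the total absorption area. A sketch with invented room numbers shows how added absorption shortens the reverberation:

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time (seconds) with Sabine's formula.

    RT60 = 0.161 * V / A, where A is the total absorption area in m^2:
    the sum of each surface area times its absorption coefficient.
    """
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical 500 m^3 rehearsal room with hard finishes
# (each entry is (area in m^2, absorption coefficient); values invented):
hard = [(200, 0.02), (150, 0.05), (100, 0.10)]
print(round(sabine_rt60(500, hard), 2))      # a very "live" room

# Same room after adding 60 m^2 of absorptive panels (alpha = 0.8):
treated = hard + [(60, 0.8)]
print(round(sabine_rt60(500, treated), 2))   # much drier
```

For this made-up room the treatment cuts the reverberation time from about 3.7 seconds to about 1.2 seconds, the kind of adjustment needed to match a room to its intended music type.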

The Norwegian standard provides a basis for better design of new music rooms. The systematic collection of acoustic reports of music rooms gives important background for recommendations on how to build or refurbish rooms for music in schools and cultural buildings.


Picture 1: Brass band rehearsal at Toneheim college, Norway. Credit: Trond Eklund Johansen, Hedmark/Oppland Music Council

2aAAc3 – Vocal Effort, Load and Fatigue in Virtual Acoustics

Pasquale Bottalico, PhD. – pb@msu.edu
Lady Catherine Cantor Cutiva, PhD. – cantorcu@msu.edu
Eric J. Hunter, PhD. – ejhunter@msu.edu

Voice Biomechanics and Acoustics Laboratory
Department of Communicative Sciences and Disorders
College of Communication Arts & Sciences
Michigan State University
1026 Red Cedar Road
East Lansing, MI 48824

Popular version of paper 2aAAc3
Presented Monday morning, June 26, 2017
Acoustics ’17 Boston, 173rd Meeting of the Acoustical Society of America and the 8th Forum Acusticum

Mobile technologies are changing the lives of millions of people around the world. According to the World Health Organization (2014), around 90% of the population worldwide could benefit from the opportunities mobile technologies represent, at relatively low cost. Moreover, research on the use of mobile technology for health has grown substantially over the last decade.

One of the most common health applications of mobile technology is self-monitoring. Wearable devices that track movement in our daily lives are becoming popular. If such technology works for monitoring physical activity, could similar technology monitor how we use our voices in daily life? This is particularly important considering that several voice disorders are related to how and where we use our voices.

As a first step to answering this question, this study investigated how people talk in a variety of situations that simulate common vocal communication environments. Specifically, the study was designed to better understand how self-reported vocal fatigue relates to objective voice parameters like voice intensity, pitch, and their fluctuation, as well as the duration of the vocal load. This information allows us to identify trends between the self-perception of vocal fatigue and objective parameters that may quantify it. To this end, we invited 39 college students (18 males and 21 females) to read a text under different virtual-simulated acoustic conditions. These conditions comprised 3 reverberation times, 2 noise conditions, and 3 auditory feedback levels, for a total of 18 tasks per subject, presented in random order. For each condition, the subjects answered questions about their perception of vocal fatigue on a visual analogue scale (Figure 1).
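The 18 conditions follow from a full factorial crossing of the three factors. A sketch of how such a randomized task order might be generated (the condition labels are illustrative, not the ones used in the study):

```python
import itertools
import random

# The three factors of the experiment: reverberation times, noise
# conditions, and auditory feedback levels (labels are illustrative).
reverb = ["low RT", "medium RT", "high RT"]
noise = ["quiet", "noise"]
feedback = ["reduced", "normal", "amplified"]

# Full factorial design: 3 x 2 x 3 = 18 tasks per subject...
conditions = list(itertools.product(reverb, noise, feedback))
assert len(conditions) == 18

# ...presented in a different, reproducible random order per subject:
def task_order(subject_seed):
    order = conditions[:]
    random.Random(subject_seed).shuffle(order)
    return order

print(task_order(1)[0])  # the first task for subject 1
```

Seeding the shuffle per subject keeps each participant's order random but reproducible, so the task order itself can later serve as the vocal-load predictor.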

Figure 1. Visual analogue scales used to determine self-report of vocal fatigue Credit: Bottalico

The experiment was conducted in a quiet, sound-isolated booth. We recorded speech samples using an omnidirectional microphone placed at a fixed distance of 30 centimeters from the subject's mouth. The 18 virtual-simulated acoustic conditions were presented to the participants through headphones, which carried a real-time mix of the participant's voice with the simulated environment (noise and/or reverberation). Figure 2 presents the measurement setup.

Figure 2. Schematic experimental setup. Credit: Bottalico

To give a better sense of the environments, we spliced together segments from the recordings of one subject. This example of the recorded speech material and the feedback the participants received through the headphones is presented in Figure 3 (and in the attached audio clip).

Figure 3. Example of the recording. Credit: Bottalico

Using these recordings, we investigated how participants’ report of vocal fatigue related with (1) gender, (2) ΔSPL mean (the variation in intensity from the typical voice intensity of each subject), (3) fo (fundamental frequency or pitch), (4) ΔSPL standard deviation (the modulation of the intensity), (5) fo standard deviation (the modulation of the intonation) and (6) the duration of the vocal load (represented by the order of administration of the tasks, which was randomized per subject).

As we show in Figure 4, the duration of speaking (vocal load) and the modulation of the speech intensity are the most important factors in the explanation of the vocal fatigue.

Figure 4. Relative importance of the 6 predictors in explaining the self-reported vocal fatigue.

The results show that participants' perception of vocal fatigue increases as the duration of the vocal load, the pitch and the modulation of the intonation increase. Of particular interest is the association between vocal fatigue and voice intensity and its modulation. Specifically, there seems to be a sweet spot, or comfort range, of intensity modulation (around 8 dB) that allows a lower level of vocal fatigue. What this means for vocalists is that in continuous speech, vocal fatigue may be decreased by adding longer pauses and by avoiding excessive increases of voice intensity. Our hypothesis is that this comfort range represents the right amount of modulation to give the vocal folds rest while avoiding an excessive increase in voice intensity.

The complex relationship between perceived vocal fatigue, intensity (ΔSPL) and the modulation of intensity (SPL standard deviation) over the task order, which represents the duration of the vocal load, is shown in Video 1 and Video 2 for males and females, respectively. The videos assume the average values of pitch and modulation of intonation (120 Hz and 20 Hz for males; 186 Hz and 32 Hz for females).

Video 1. Self-reported vocal fatigue as a function of intensity (ΔSPL) and intensity modulation (SPL standard deviation) over the task order, which represents the duration of the vocal load, for males, assuming an average pitch (120 Hz) and modulation of intonation (20 Hz).

Video 2. Self-reported vocal fatigue as a function of intensity (ΔSPL) and intensity modulation (SPL standard deviation) over the task order, which represents the duration of the vocal load, for females, assuming an average pitch (186 Hz) and modulation of intonation (32 Hz).

If mobile technology is going to be used by people to monitor their daily voice use in different environments, the results of this study provide valuable information for its design. A low-cost mobile system with easy-to-understand output is possible.

References
1. World Health Organization. (2014). mHealth: New horizons for health through mobile technologies: second global survey on eHealth. 2011. WHO, Geneva.

2. Bort-Roig, J., Gilson, N. D., Puig-Ribera, A., Contreras, R. S., & Trost, S. G. (2014). Measuring and influencing physical activity with smartphone technology: a systematic review. Sports Medicine, 44(5), 671-686.

Acknowledgements
Research was in part supported by the NIDCD of the NIH under Award Number R01DC012315. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

1aAAc3 – Can a Classroom “Sound Good” to Enhance Speech Intelligibility for Children?

Puglisi Giuseppina Emma – giuseppina.puglisi@polito.it
Bolognesi Filippo – filippo.bolognesi@studenti.polito.it
Shtrepi Louena – louena.shtrepi@polito.it
Astolfi Arianna – arianna.astolfi@polito.it
Dipartimento di Energia
Politecnico di Torino
Corso Duca degli Abruzzi, 24
10129 Torino (Italy)

Warzybok Anna – a.warzybok@uni-oldenburg.de
Kollmeier Birger – birger.kollmeier@uni-oldenburg.de
Medizinische Physik and Cluster of Excellence Hearing4All
Carl von Ossietzky Universität Oldenburg
D-26111 Oldenburg (Germany)

Popular version of paper 1aAAc3
Presented Sunday morning, June 25, 2017

The architectural design of classrooms should account for the activities that take place in them. For example, the ergonomics of tables and chairs should fit pupils' age and change with school grade, and shading components should be easily integrated with windows so that excessive light doesn't interfere with visual tasks. Along with these well-known aspects, a classroom should also "sound" appropriate, since teacher-to-student communication is at the base of learning. But what does this mean?

First, we must pay attention to the school grade under investigation. Kindergartens and primary schools rely on direct teacher-to-student contact, so the environment should passively support speech. Conversely, university classrooms are designed to host hundreds of students and support speech actively through amplification systems. Second, classroom acoustics need to focus on enhancing speech intelligibility, so practical design must aim to reduce the reverberation time (i.e., reducing the number of sound reflections in the space) and the noise levels, since these factors have been shown to negatively affect teachers' vocal effort and students' speech intelligibility.

Acoustical interventions typically happen after a school building is completed, whereas it would be better to integrate them from the beginning of a project. Regardless of when it is carried out, the treatment generally consists of absorptive surfaces positioned on the lateral walls or ceiling.

Absorbent panels are made of porous materials, such as natural fibers or glass or rock wool, that absorb incident sound energy. A portion of the captured energy is transformed into heat, so the energy reflected back into the space as sound is strongly reduced (Figure 1). However, recent studies and standards updates have investigated whether acoustic treatments should include both absorbent and diffusive surfaces, serving teaching and learning at the same time, since an excessive reduction of reflections does not support speech and has been shown to demand greater vocal effort from teachers.

Figure 1 – Scheme of the absorptive (top) and diffusive (bottom) properties of surfaces, with the respective polar diagrams that represent the spatial response of the different surfaces. In the top case (absorption), the incident energy (Ei) is absorbed by the surface and the reflected energy (Er) is strongly reduced. In the bottom case (diffusion), Ei is partially absorbed by the surface and Er is reflected in the space in a non-specular way. Note that these graphs are adapted from D’Antonio and Cox (reference: Acoustic absorbers and diffusers theory, design and application. Spon Press, New York, 2004).

Therefore, we found that optimal classroom acoustic design should be based on a balance of absorption and diffusion, which can be obtained with strategically placed surfaces. Diffusive surfaces, in fact, redirect sound energy into the environment in a non-specular way, so that acoustic defects like strong echoes can be avoided while early reflections are preserved to improve speech intelligibility, especially at the rear of a room.

The few available studies on this approach rely on simulated classroom acoustics, so our work contributes new data based on measured, realistic conditions. We looked at an existing unfurnished classroom in an Italian primary school with a long reverberation time (around 3 seconds). Using software for the acoustical simulation of enclosed spaces, we modeled the untreated room and obtained a so-called "calibrated model" that reproduces the acoustic parameters measured in the field.

Then, based on this calibrated model, in which the acoustic properties of the existing surfaces match the real ones, we simulated several solutions for the acoustic treatment. This included adjusting the absorption and scattering coefficients of the surfaces to characterize different configurations of absorbent and diffusive panels. All of the combinations were designed to reach the optimal reverberation time for teaching activities and to increase the Speech Transmission Index (STI) and Definition (D50), intelligibility parameters that quantify how well an environment supports speech comprehension.
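Definition (D50) can be computed directly from a room impulse response as the fraction of sound energy arriving within the first 50 ms, the energy that reinforces rather than blurs speech. A toy sketch of the calculation (not the simulation software used in this study; the impulse response and sample rate are invented for illustration):

```python
def definition_d50(impulse_response, sample_rate):
    """Compute the Definition index D50 from a room impulse response.

    D50 is the ratio of the energy arriving within the first 50 ms
    (direct sound plus early reflections) to the total energy.
    """
    cutoff = int(0.050 * sample_rate)
    energy = [h * h for h in impulse_response]
    return sum(energy[:cutoff]) / sum(energy)

# Toy impulse response: a direct sound plus one late reflection at 100 ms.
sr = 1000                 # samples per second (illustrative)
ir = [0.0] * 200
ir[0] = 1.0               # direct sound
ir[100] = 0.5             # late reflection, outside the 50 ms window
print(round(definition_d50(ir, sr), 2))  # -> 0.8
```

In this toy case the late reflection carries a quarter of the direct sound's energy, so D50 is 0.8; an acoustic treatment that absorbs late reflections while preserving early ones pushes D50, and with it intelligibility, upward.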


Figure 2 – Schematic representation of the investigated classroom. On the left, the actual condition of the room, with vaulted ceiling and untreated walls; on the right, the optimized acoustic condition, with absorptive (red) and diffusive (blue) panels positioned on the walls or on the ceiling (as baffles) based on literature and experimental studies. The typical position of the teacher’s desk (red dot) in the classroom is also indicated.

Figure 2 illustrates the actual classroom and the simulated one in which absorptive (red) and diffusive (blue) surfaces were placed. The optimized configuration (see Figure 3 for an overview of the acoustic parameters) was selected as the one with the highest STI and D50 in the rear area; it consisted of absorbent panels and baffles on the lateral walls and on the ceiling, with diffusive surfaces on the lower part of the front wall.

Figure 3 – Summary of the acoustical parameters of the investigated classroom in its actual (untreated) condition. Values in italics are the outcomes of the in-field measurements, whereas the others are obtained from simulations; values in bold comply with the reference standard. Where an average over frequency was performed, it is indicated as a subscript. The scheme on the right shows the relative positions of the talker-teacher (red dot) and the farthest receiver-student (green dot), between which the distance-dependent parameters Definition (D50, %) and Speech Transmission Index (STI, -) were calculated. The reverberation time (T30, s) was measured at several positions around the room.

We evaluated the effectiveness of the acoustic treatment as the enhancement of speech intelligibility predicted by the Binaural Speech Intelligibility Model. Its outcomes are speech reception thresholds (SRTs), the signal-to-noise ratios needed to reach a fixed level of speech intelligibility, here set to 80% to reflect the demands of a learning-related listening task. Across the tested positions, which covered several talker-to-listener distances and noise-source positions (Figure 4), model predictions indicated an average improvement in SRTs of up to 6.8 dB after the acoustic intervention, an improvement that can be “heard” experimentally.
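To put the 6.8 dB figure in perspective, decibels are a logarithmic measure, so the improvement can be converted to a plain power ratio:

```python
# A 6.8 dB improvement in the speech reception threshold means the same
# 80% intelligibility is reached even when the noise carries roughly
# 4.8 times more power than before the treatment.
improvement_db = 6.8
power_ratio = 10 ** (improvement_db / 10)
print(round(power_ratio, 1))  # ~4.8
```

In other words, after the treatment the classroom tolerates almost five times as much noise energy before speech understanding degrades to the same level.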

Here you can hear a sentence in the presence of energetic masking noise, that is, noise with no informational content but with a spectral distribution that replicates that of speech.

Here you will hear the same sentence and noise under optimized room acoustics.

Figure 4 – Scheme of the talker-to-listener positions tested in the evaluation of speech intelligibility under the two acoustic conditions (i.e., the classroom without acoustic treatment and with optimized acoustics). The red dots represent the talker-teacher positions; the green dots represent the listener-student positions; the yellow dots represent the noise positions, used one at a time to evaluate speech intelligibility at each listener position.

To summarize, we demonstrated an easy-to-use and effective design methodology that architects and engineers can apply to optimize classroom acoustics for learning, using a case study representative of typical Italian primary-school classrooms. Making a classroom sound good matters greatly, since we cannot switch off our ears: hearing well in the classroom is essential to learning and to the social relationships that form there.