1pSC2 – Deciding to go (or not to go) to the party may depend as much on your memory as on your hearing

Kathy Pichora-Fuller – k.pichora.fuller@utoronto.ca
Department of Psychology, University of Toronto,
3359 Mississauga Road,
Mississauga, Ontario, CANADA L5L 1C6

Sherri Smith – Sherri.Smith@va.gov
Audiologic Rehabilitation Laboratory, Veterans Affairs Medical Center,
Mountain Home, Tennessee, UNITED STATES 37684

Popular version of paper 1pSC2, “Effects of age, hearing loss and linguistic complexity on listening effort as measured by working memory span”
Presented Monday afternoon, May 18, 2015 (Session: Listening Effort II)
169th ASA Meeting, Pittsburgh

Understanding conversation in noisy everyday situations can be a challenge for listeners, especially individuals who are older and/or hard-of-hearing. Listening in some everyday situations (e.g., at dinner parties) can be so challenging that people might even decide that they would rather stay home than go out. Eventually, avoiding these situations can damage relationships with family and friends and reduce enjoyment of and participation in activities. What are the reasons for these difficulties, and why are some people affected more than others?

How easy or challenging it is to listen may vary from person to person because some people have better hearing abilities and/or cognitive abilities compared to other people. The hearing abilities of some people may be affected by the degree or type of their hearing loss. The cognitive abilities of some people, for example how well they can attend to and remember what they have heard, can also affect how easy it is for them to follow conversation in challenging listening situations. In addition to hearing abilities, cognitive abilities seem to be particularly relevant because in many everyday listening situations people need to listen to more than one person talking at the same time and/or they may need to listen while doing something else such as driving a car or crossing a busy street. The auditory demands that a listener faces in a situation increase as background noise becomes louder or as more interfering sounds combine with each other. The cognitive demands in a situation increase when listeners need to keep track of more people talking or to divide their attention as they try to do more tasks at the same time. Both auditory and cognitive demands could result in the situation becoming very challenging and these demands may even totally overload a listener.

One way to measure information overload is to see how much a person remembers after they have completed a set of tasks. For several decades, cognitive psychologists have been interested in ‘working memory’, or a person’s limited capacity to process information while doing tasks and to remember information after the tasks have been completed. Like a bank account, the more cognitive capacity is spent on processing information while doing tasks, the less cognitive capacity will remain available for remembering and using the information later. Importantly, some people have bigger working memories than other people and people who have a bigger working memory are usually better at understanding written and spoken language. Indeed, many researchers have measured working memory span for reading (i.e., a task involving the processing and recall of visual information) to minimize ‘contamination’ from the effects of hearing loss that might be a problem if they measured working memory span for listening. However, variations in difficulty due to hearing loss may be critically important in assessing how the demands of listening affect different individuals when they are trying to understand speech in noise. Some researchers have studied the effects of the acoustical properties of speech and interfering noises on listening, but less is known about how variations in the type of language materials (words, sentences, stories) might alter listening demands for people who have hearing loss. Therefore, to learn more about why some people cope better when listening to conversation in noise, we need to discover how both their auditory and their cognitive abilities come into play during everyday listening for a range of spoken materials.

We predicted that speech understanding would be more highly associated with working memory span for listening than with working memory span for reading, especially when more realistic language materials are used to measure speech understanding. To test these predictions, we conducted listening and reading tests of working memory and we also measured memory abilities using five other measures (three auditory memory tests and two visual memory tests). Speech understanding was measured with six tests (two tests with words, one in quiet and one in noise; three tests with sentences, one in quiet and two in noise; one test with stories in quiet). The tests of speech understanding using words and sentences were selected from typical clinical tests and involved simple immediate repetition of the words or sentences that were heard. The test using stories has been used in laboratory research and involved comprehension questions after the end of the story. Three groups with 24 people in each group were tested: one group of younger adults (mean age = 23.5 years) with normal hearing and two groups of older adults with hearing loss (one group with mean age = 66.3 years and the other with mean age = 74.3 years).

There was a wide range in performance on the listening test of working memory, but performance on the reading test of working memory was more limited and poorer. Overall, there was a significant correlation between the results on the reading and listening working memory measures. However, when correlations were conducted for each of the three groups separately, the correlation reached significance only for the oldest listeners with hearing loss; this group had lower mean scores on both tests. Surprisingly, for all three groups, there were no significant correlations among the working memory and speech understanding measures. To further investigate this surprising result, a factor analysis was conducted. The results of the factor analysis suggest that there was one factor including age, hearing test results and performance on speech understanding measures when the speech-understanding task was simply to repeat words or sentences – these seem to reflect auditory abilities. In addition, separate factors were found for performance on the speech understanding measures involving the comprehension of discourse or the use of semantic context in sentences – these seem to reflect linguistic abilities. Importantly, the majority of the memory measures were distinct from both kinds of speech understanding measures, and also from a more basic and less cognitively demanding memory measure involving only the repetition of sets of numbers. Taken together, these findings suggest that working memory measures reflect differences between people in cognitive abilities that are distinct from those tapped by the sorts of simple measures of hearing and speech understanding that have been used in the clinic. By testing working memory, especially listening working memory, clinicians could gain useful information, above and beyond current clinical tests, about why some people cope better than others in everyday challenging listening situations.

tags: age, hearing, memory, linguistics, speech

4pAB3 – Can a spider “sing”? If so, who might be listening?

Alexander L. Sweger – swegeral@mail.uc.edu
George W. Uetz – uetzgw@ucmail.uc.edu
University of Cincinnati
Department of Biological Sciences
2600 Clifton Ave, Cincinnati OH 45221

Popular version of paper 4pAB3, “The potential for acoustic communication in the ‘purring’ wolf spider”
Presented Thursday afternoon, May 21, 2015, 2:40 PM, Rivers room
169th ASA Meeting, Pittsburgh

While we are familiar with a wide variety of animals that use sound to communicate (birds, frogs, crickets, etc.), there are thousands of animal species that use vibration as their primary means of communication. Since sound and vibration are physically very similar, the two are inextricably connected, but biologically they are still somewhat separate modes of communication. Within the field of bioacoustics, we are beginning to fully realize how prevalent vibration is as a mode of animal communication, and how interconnected vibration and sound are for many species.

Wolf spiders are one group that heavily utilizes vibration as a means of communication, and they have very sensitive structures for “listening” to vibrations. However, despite the numerous vibratory signals involved in spider communication, spiders are not known for creating audible sounds. While many species that use vibration will simultaneously use airborne sound, spiders do not possess structures for hearing sound, and it is generally assumed that they do not use acoustic communication in conjunction with vibration.

The “purring” wolf spider (Gladicosa gulosa) may be a unique exception to this assumption. Males create vibrations when they communicate with potential mates in a manner very similar to other wolf spider species, but unlike other wolf spider species, they also create airborne sounds during this communication. Both the vibrations and the sounds produced by this species are of higher amplitude than those of other wolf spider species, both larger and smaller, meaning this phenomenon is independent of body size. While other acoustically communicating species like crickets and katydids have evolved dedicated structures for producing sound, these spiders vibrate structures in their environment (dead leaves) to create sound. Since we know spiders do not possess typical “ears” for hearing these sounds, we are interested in finding out whether females or other males are able to use these sounds in communication. If they do, then this species could be used as an unusual model for the evolution of acoustic communication.


Figure 1: An image of a male “purring” wolf spider, Gladicosa gulosa, and the spectrogram of his accompanied vibration. Listen to a recording of the vibration here,

and the accompanying sound here.

Our work has shown that the leaves themselves are vital to the use of acoustic communication in this species. Males can only produce the sounds when they are on a surface that vibrates (like a leaf) and females will only respond to the sounds when they are on a similar surface. When we remove the vibration and only provide the acoustic signal, females still show a significant response and males do not, suggesting that the sounds produced by males may play a part in communicating specifically with females.

So, the next question is: how are females responding to the airborne sound without ears? Despite the relatively low volume of the sounds produced, they can still create a vibration in a very thin surface like a leaf. This creates a complex method of communication: a male makes a vibration in a leaf that creates a sound, which then travels to another leaf and creates a new vibration, which a female can then hear. While relatively “primitive” compared to the highly-evolved acoustic communication in birds, frogs, insects, and other species, this unique usage of the environment may create opportunities for studying the evolution of sound as a mode of animal communication.

Monitoring deep ocean temperatures using low-frequency ambient noise

Katherine Woolfe, Karim G. Sabra
School of Mechanical Engineering, Georgia Institute of Technology
Atlanta, GA 30332-0405

In order to precisely quantify the ocean’s heat capacity and influence on climate change, it is important to accurately monitor ocean temperature variations, especially in the deep ocean (i.e., at depths of ~1000 m), which cannot be easily surveyed by satellite measurements. To date, deep ocean temperatures are most commonly measured using autonomous sensing floats (e.g., Argo floats). However, this approach is limited because, due to costs and logistics, the existing global network of floats cannot sample the entire ocean at the lower depths. On the other hand, acoustic thermometry (using the travel time of underwater sound to infer the temperature of the water the sound travels through) has already been demonstrated as one of the most precise methods for measuring ocean temperature and heat capacity over large distances (Munk et al., 1995; Dushaw et al., 2009; The ATOC Consortium, 1998). However, current implementations of acoustic thermometry require the use of active, man-made sound sources. Aside from the logistical issues of deploying such sources, there is also the ongoing issue of negative effects on marine animals such as whales.

An emerging alternative to measurements with active acoustic sources is the use of ambient noise correlation processing, which uses the background noise in an environment to extract useful information about that environment. For instance, ambient noise correlation processing has successfully been used to monitor seismically-active earth systems such as fault zones (Brenguier et al., 2008) and volcanic areas (Brenguier et al., 2014). In the context of ocean acoustics (Roux et al., 2004; Godin et al., 2010; Fried et al., 2013), previous studies have demonstrated that the noise correlation method requires excessively long averaging times to reliably extract most of the acoustic travel-paths that were used by previous active acoustic thermometry studies (Munk et al., 1995). Consequently, since this averaging time is typically too long compared to the timescale of ocean fluctuations (i.e., tides, surface waves, etc.), this would prevent the application of passive acoustic thermometry using most of these travel paths (Roux et al., 2004; Godin et al., 2010; Fried et al., 2013). However, for deep ocean propagation, there is an unusually stable acoustic travel path, where sound propagates nearly horizontally along the Sound Fixing and Ranging (SOFAR) channel. The SOFAR channel is centered on the minimum value of the sound speed over the ocean depth (located at ~1000 m depth near the equator) and thus acts as a natural pathway for sound to travel very large distances with little attenuation (Ewing and Worzel, 1948).
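
To make the idea concrete, the sketch below simulates, in Python, how averaging many cross-correlations of two noisy sensor recordings makes a coherent arrival emerge at the inter-sensor travel time. This is a toy illustration only: the white “source” noise, the single propagation path, and all parameter values are assumptions chosen for the demo, not the study’s actual processing chain.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 100                     # sampling rate, Hz (assumed; the study band is ~1-40 Hz)
c = 1480.0                   # nominal SOFAR-channel sound speed, m/s (assumed)
L = 130e3                    # separation between the two arrays, m (from the text)
d = int(round(L / c * fs))   # inter-array travel time in samples (~88 s)

rng = np.random.default_rng(0)
seg = 300 * fs               # correlate 5-minute segments, then average
acc = np.zeros(2 * seg - 1)

for _ in range(100):         # averaging is what builds up the coherent arrival
    src = rng.standard_normal(seg + d)                 # distant "ice" noise
    north = src[d:]                                    # reaches the north triad first,
    south = src[:seg] + 3 * rng.standard_normal(seg)   # then the south triad, buried
                                                       # in uncorrelated local noise
    acc += correlate(south, north, mode="full", method="fft")

lags = correlation_lags(seg, seg, mode="full") / fs
t_hat = lags[np.argmax(np.abs(acc))]
print(f"coherent arrival at lag {t_hat:.2f} s (expected {d / fs:.2f} s)")
```

The key point is that the uncorrelated local noise averages toward zero while the common travel-path contribution adds constructively, which is why long averaging times are needed before the arrival stands out.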

In this research, we have demonstrated the feasibility of a passive acoustic thermometry method for use in the deep ocean, using only recordings of low-frequency (f ~ 10 Hz) ambient noise propagating along the SOFAR channel. This study used continuous recordings of ocean noise from two existing hydroacoustic stations of the International Monitoring System, operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, located respectively next to Ascension and Wake Islands (see Fig. 1(a)). Each hydroacoustic station is composed of two triangular-shaped horizontal hydrophone arrays (Fig. 1(b)), separated by L ~ 130 km, which are referred to hereafter as the north and south triads. The sides of each triad are ~2 km long and the three hydrophones are located within the SOFAR channel at a depth of ~1000 m. From year to year, the coherent acoustic waves that propagate between hydrophone pairs along the SOFAR channel build up from distant noise sources whose propagation paths intersect the hydrophone pairs. In the low-frequency band used here (1-40 Hz), with most of the energy of the arrivals centered around 10 Hz, these arrivals are known to originate mainly from ice-breaking noise in the polar regions (Chapp et al., 2005; Matsumoto et al., 2014; Gavrilov and Li, 2009; Prior et al., 2011). The angular beams shown in Fig. 1(a) illustrate a simple estimate of the geographical area from which ice-generated ambient noise is likely to emanate for each site (Woolfe et al., 2015).


FIG. 1. (a) Locations of the two hydroacoustic stations (red dots) near Ascension and Wake Islands. (b) Zoomed-in schematic of the hydrophone array configurations for the Ascension and Wake Island sites. Each hydroacoustic station consists of a northern and southern triangle array of three hydrophones (or triad), with each triangle side having a length ~ 2 km. The distance L between triad centers is equal to 126 km and 132 km for the Ascension Island and Wake Island hydroacoustic stations, respectively.

Acoustic thermometry estimates ocean temperature fluctuations averaged over the entire acoustic travel path (in this case, the entire depth and length of the SOFAR channel between the north and south hydrophone triads) by leveraging the nearly linear dependence of the sound speed in water on temperature (Munk et al., 1995). Here the SOFAR channel extends approximately from 390 m to 1350 m deep at the Ascension Island site and from 460 m to 1600 m deep at the Wake Island site, as determined from the local sound speed profiles and the center frequency (~10 Hz) of the SOFAR arrivals. Passive acoustic thermometry is used to monitor the small variations in the travel time of the SOFAR arrivals over several years (8 years at Ascension Island and 5 years at Wake Island). To do so, coherent arrivals are extracted by averaging cross-correlations of ambient noise recordings over 1 week at the Wake and Ascension Island sites. The small fluctuations in acoustic travel time are then converted to deep ocean temperature fluctuations by leveraging the linear relationship between change in sound speed and change in temperature in the water (Woolfe et al., 2015).

These calculated temperature fluctuations are shown in Fig. 2 and are consistent with Argo float measurements. At the Wake Island site, where data were measured over only 5 years, the Argo and thermometry data are found to be 54% correlated. Both data sets indicate a very small upward (i.e., warming) trend: the Argo data show a trend of 0.003 °C/year ± 0.001 °C/year (95% confidence interval), and the thermometry data show a trend of 0.007 °C/year ± 0.002 °C/year (95% confidence interval) (Fig. 2(a)). On the other hand, for the Ascension Island site, the SOFAR channel temperature variations measured over the longer duration of eight years from passive thermometry and from Argo data are found to be significantly correlated, with a 0.8 correlation coefficient. Furthermore, Fig. 2(b) indicates a warming of the SOFAR channel in the Ascension area, as inferred from the similar upward trends of both the passive thermometry (0.013 °C/year ± 0.001 °C/year, 95% confidence interval) and Argo (0.013 °C/year ± 0.004 °C/year, 95% confidence interval) temperature variation estimates. Hence, our approach provides a simple and totally passive means of measuring deep ocean temperature variations, which could ultimately significantly improve our understanding of the role of the oceans in climate change.
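
The travel-time-to-temperature conversion itself is a short calculation. The sketch below works through it under textbook assumptions (a path-averaged sound speed of ~1480 m/s and a sensitivity of roughly 4 m/s per °C); the study’s actual path-specific values will differ.

```python
c0 = 1480.0     # path-averaged sound speed, m/s (assumed)
dc_dT = 4.0     # sound-speed change per degree C, m/s per C (textbook order of magnitude)
L = 130e3       # acoustic path length between the triads, m (from the text)

def delta_T(delta_t):
    """Temperature change inferred from a change in travel time.

    Since t = L / c, a small change obeys dt = -(L / c0**2) * dc,
    so dc = -(c0**2 / L) * dt and dT = dc / (dc/dT).
    A decrease in travel time means faster sound, i.e., warmer water.
    """
    dc = -(c0 ** 2 / L) * delta_t
    return dc / dc_dT

# Example: an arrival 10 ms earlier than the long-term mean implies ~0.04 C of warming.
print(f"dT = {delta_T(-10e-3):+.3f} C")
```

This sensitivity, a few hundredths of a degree per ten milliseconds over a ~130 km path, is why tiny, well-averaged travel-time shifts can track the very small warming trends shown in Fig. 2.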


FIG. 2. (a) Comparison of the deep ocean temperature variations at the Wake Island site estimated from passive thermometry (blue line) with Argo float measurements (grey dots), along with corresponding error bars (Woolfe et al., 2015). (b) Same as (a), but for the Ascension Island site. Each ΔT data series is normalized so that a linear fit on the data would have a y-intercept at zero.

REFERENCES:
The ATOC Consortium (1998). “Ocean Climate Change: Comparison of Acoustic Tomography, Satellite Altimetry, and Modeling,” Science, 281, 1327-1332.
Brenguier, F., Campillo, M., Takeda, T., Aoki, Y., Shapiro, N.M., Briand, X., Emoto, K., and Miyake, H. (2014). “Mapping Pressurized Volcanic Fluids from Induced Crustal Seismic Velocity Drops,” Science, 345, 80-82.
Brenguier, F., Campillo, M., Hadziioannou, C., Shapiro, N.M., Nadeau, R.M., and Larose, E. (2008). “Postseismic Relaxation Along the San Andreas Fault at Parkfield from Continuous Seismological Observations,” Science, 321, 1478-1481.
Chapp, E., Bohnenstiehl, D., and Tolstoy, M. (2005). “Sound-channel observations of ice-generated tremor in the Indian Ocean,” Geochem. Geophys. Geosyst., 6, Q06003.
Dushaw, B., Worcester, P., Munk, W., Spindel, R., Mercer, J., Howe, B., Metzger, K., Birdsall, T., Andrew, R., Dzieciuch, M., Cornuelle, B., and Menemenlis, D. (2009). “A decade of acoustic thermometry in the North Pacific Ocean,” J. Geophys. Res., 114, C07021.
Ewing, M., and Worzel, J.L. (1948). “Long-Range Sound Transmission,” GSA Memoirs, 27, 1-32.
Fried, S., Walker, S.C., Hodgkiss, W.S., and Kuperman, W.A. (2013). “Measuring the effect of ambient noise directionality and split-beam processing on the convergence of the cross-correlation function,” J. Acoust. Soc. Am., 134, 1824-1832.
Gavrilov, A., and Li, B. (2009). “Correlation between ocean noise and changes in the environmental conditions in Antarctica,” Proceedings of the 3rd International Conference and Exhibition on Underwater Acoustic Measurements: Technologies and Results, Nafplion, Greece, 1199.
Godin, O., Zabotin, N., and Goncharov, V. (2010). “Ocean tomography with acoustic daylight,” Geophys. Res. Lett., 37, L13605.
Matsumoto, H., Bohnenstiehl, D., Tournadre, J., Dziak, R., Haxel, J., Lau, T.K., Fowler, M., and Salo, S. (2014). “Antarctic icebergs: A significant natural ocean sound source in the Southern Hemisphere,” Geochem. Geophys. Geosyst., 15, 3448-3458.
Munk, W., Worcester, P., and Wunsch, C. (1995). Ocean Acoustic Tomography, Cambridge University Press, Cambridge, 1-28, 197-202.
Prior, M., Brown, D., and Haralabus, G. (2011). “Data features from long-term monitoring of ocean noise,” Proceedings of the 4th International Conference and Exhibition on Underwater Acoustic Measurements, Kos, Greece, paper L.26.1.
Roux, P., Kuperman, W., and the NPAL Group (2004). “Extracting coherent wave fronts from acoustic ambient noise in the ocean,” J. Acoust. Soc. Am., 116, 1995-2003.
Woolfe, K.F., Lani, S., Sabra, K.G., and Kuperman, W.A. (2015). “Monitoring deep ocean temperatures using acoustic ambient noise,” Geophys. Res. Lett., doi:10.1002/2015GL063438.

5aMU3 – The Origins of Building Acoustics for Theatre and Music Performances

John Mourjopoulos – mourjop@upatras.gr
University of Patras
Audio & Acoustic Technology Group,
Electrical and Computer Engineering Dept.,
26500 Patras, Greece

Historical perspective
The ancient open amphitheatres and the roofed odeia of the Greek-Roman era present the earliest testament to public buildings designed for the effective communication of theatrical and music performances to large audiences, often up to 15,000 spectators [1-4]. Although mostly located around the Mediterranean, such antique theatres were built in every major city of the ancient world in Europe, the Middle East, North Africa and beyond. Nearly 1000 such buildings have been identified, their evolution possibly starting in Minoan and archaic times, around the 12th century BC. However, the familiar amphitheatric form appears during the age that saw the flourishing of philosophy, mathematics and geometry, after the 6th century BC. These theatres were the birthplace of the classic ancient tragedy and comedy plays, fostering theatrical and music activities for at least 700 years, until their demise during the early Christian era. After a gap of 1000 years, public theatres, opera houses and concert halls, often modelled on these antique buildings, re-emerged in Europe during the Renaissance.

In antiquity, open theatres were mainly used for staging drama performances, so their acoustics were tuned for speech intelligibility, allowing very large audiences to hear the actors and the singing chorus clearly. During this era, smaller roofed versions of these theatres, the “odeia” (plural of “odeon”), were also constructed [4, 5], often in close vicinity to open theatres (Figure 1). The odeia had different acoustic qualities, with strong reverberation, and thus were not appropriate for speech and theatrical performances; instead they were good for performing music, functioning somewhat like modern-day concert halls.

Figure 1: Representation of the buildings around the ancient Athens Acropolis during the Roman era. Besides the ancient open amphitheatre of Dionysus, the roofed odeion of Pericles is shown, along with the later-period odeion of Herodes (adapted from www.ancientathens3d.com [6]).

Open amphitheatre acoustics for theatrical plays
The open antique theatre signifies the initial meeting point between architecture, acoustics and the theatrical act. This simple structure consists of the large truncated-cone-shaped stepped audience area (the amphitheatrical “koilon” in Greek or “cavea” in Latin), the flat stage area for the chorus (the “orchestra”) and the stage building (the “skene”) with the raised stage (“proskenion”) for the actors (Figure 2).

Figure 2: Structure of the Hellenistic-period open theatre.

The acoustic quality of these ancient theatres amazes visitors and experts alike. Recently, the widespread use of acoustic simulation software and of sophisticated computer models has allowed a better understanding of the unique open-amphitheatre acoustics, even for theatres known solely from archaeological records [1,3,7,9,11]. Modern portable equipment has allowed state-of-the-art measurements to be carried out in some well-preserved ancient theatres [8,10,13]. A frequently studied test case is the classical/Hellenistic theatre of Epidaurus in southern Greece, which is famous for its near-perfect speech intelligibility [12,13]. Recent measurements with an audience present (Figure 3) confirm that intelligibility is retained despite the increased sound absorption of the audience [13].

Figure 3: Acoustic measurements at the Epidaurus theatre during a recent drama performance (from Psarras et al. [13]).

It is now clear that the “good acoustics” of these amphitheatres, and especially of Epidaurus, are due to a number of factors: sufficient amplification of stage sound, uniform spatial acoustic coverage, low reverberation and enhancement of voice timbre, all contributing to perfect intelligibility even at seats 60 meters away, provided that environmental noise is low. These acoustically important functions are largely a result of the unique amphitheatrical shape: for any sound produced on the stage or in the orchestra, the geometric shape and hard materials of the theatre’s surfaces generate sufficient reflected and scattered sound energy, which arrives first from the stage building (when this exists), then from the orchestra floor and finally from the surfaces at the top and back of the seat rows adjacent to each listener position, and which is uniformly spread over the audience area [11,13] (see Figure 4 and Figure 5).

Figure 4: Acoustic wave propagation 2D model for the Epidaurus theatre. The blue curves show the direct and reflected waves at successive time instances indicated by the red dotted lines. Along with the forward propagating wavefronts, backscattered and reflected waves from the seating rows are produced (from Lokki et al. [11]).

This reflected sound energy reinforces the sound produced on the stage, and its main bulk arrives at the listener’s ears very shortly, typically within 40 milliseconds, after the direct signal (see Figure 5). Within such short intervals, as far as the listener’s brain is concerned, this reflected sound also appears to come from the direction of the source on the stage, due to a well-known perceptual property of human hearing, often referred to as the precedence (or Haas) effect [11,13].

Figure 5: Acoustic response measurement for the Epidaurus theatre, assuming that the source emits a short pulse and the microphone is at a seat 15 meters away. Given that today the stage building does not exist, the first reflection arrives very shortly afterwards from the orchestra floor. Seven successive and periodic reflections can be seen from the top and the risers of adjacent seat rows. Their energy is reduced within approx. 40 milliseconds after the arrival of the direct sound (from Vassilantonopoulos et al. [12]).
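
A simple image-source calculation shows why such early reflections reinforce rather than disturb speech. The sketch below is a flat-ground simplification with assumed source and listener heights, chosen only to echo the 15 m listening position of Figure 5; the values are hypothetical, not surveyed Epidaurus geometry.

```python
import math

C_AIR = 343.0          # speed of sound in air, m/s
HAAS_WINDOW = 0.040    # ~40 ms integration window of the precedence effect

def floor_reflection_delay(distance, h_source, h_listener):
    """Delay of the orchestra-floor reflection relative to the direct sound."""
    direct = math.hypot(distance, h_listener - h_source)
    # Mirror the source below the floor to get the reflected path length.
    reflected = math.hypot(distance, h_listener + h_source)
    return (reflected - direct) / C_AIR

# Assumed geometry: actor's mouth 1.5 m above the orchestra, listener 15 m
# away and 2 m up the koilon.
delay = floor_reflection_delay(distance=15.0, h_source=1.5, h_listener=2.0)
print(f"floor reflection arrives {delay * 1e3:.1f} ms after the direct sound")
print("fused with the direct sound by the precedence effect:", delay < HAAS_WINDOW)
```

For this geometry the reflection lags the direct sound by only about a millisecond, far inside the 40 ms window, so the ear fuses it with the direct sound and perceives a single, louder source on the stage.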

The dimensions of the seating width and riser height, as well as the koilon slope, can ensure minimal sound occlusion by lower tiers and audience members and result in the fine tuning of in-phase combinations of the strong direct and reflected sounds [9,11]. As a result, frequencies useful for speech communication are amplified, adding a characteristic coloration to the voice sound and further assisting clear speech perception [11]. These specific amphitheatre design details have been found to affect the qualitative and quantitative aspects of amphitheatre acoustics, and in this respect each ancient theatre has a unique acoustic character. Given that the amphitheatric seating concept evolved from earlier archaic rectangular or trapezoidal seating arrangements with inferior acoustics (see Figure 6), this evolution hints at possible conscious acoustic design principles employed by the ancient architects. During the Roman period, the stage building grew in size and the orchestra was truncated, showing adaptation to artistic, political and social trends, with acoustic properties correlated to the intended new uses favouring more the visual performance elements [4,15]. Unfortunately, only a few fragments of these ancient acoustic design principles have survived, and only via the writings of the Roman architect Marcus Vitruvius Pollio (70-15 BC) [14].

Figure 6: Evolution of the shape of open theatres. Roman-period theatres had a semi-circular orchestra and a taller and more elaborate stage building. The red lines indicate the koilon / orchestra design principle as described by the ancient architect Vitruvius.

The acoustics of odeia for music performances
Although the form of ancient odeia broadly followed the amphitheatric seating and stage/orchestra design, these buildings were covered by roofs usually made of timber. This covered amphitheatric form was also initially adopted by early Renaissance theatres, nearly 1000 years after the demise of the antique odeia [16] (Figure 7).

Figure 7: Different shapes of roofed odeia of antiquity and the Renaissance period (representations from www.ancientathens3d.com [6]).

Supporting a large roof structure without any inner pillars over the wide diameter dictated by the amphitheatric shape presents a structural engineering feat even today, and it is no wonder that no odeia roofs are preserved. Without their roofs, these odeia appear today to be similar to the open amphitheatres. However, computer simulations indicate that in their original roofed form, unlike the open theatres, they had strong acoustic reverberation, and their acoustics helped the loudness and timbre of musical instruments at the expense of speech intelligibility, so these spaces were not appropriate for, and were not used for, theatrical plays [4,5]. For the case of the Herodes odeion in Athens (Figure 8), computer simulations show that the semi-roofed version had up to 25% worse speech intelligibility compared to the current open state, but its strong acoustic reverberation, similar to that of a modern concert hall of comparable inner volume (10,000 m³), made it suitable as a music performance space [5].

Figure 8: The Herodes odeion in its current state and computer models of the current open and the antique semi-roofed versions (from Vassilantonopoulos et al. [5]). Very recent archaeological evidence indicates that the roof fully covered the building.
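
For a rough feel of the concert-hall comparison, the classic Sabine formula T60 = 0.161·V/A links reverberation time to room volume V and total absorption A. The sketch below applies it to the 10,000 m³ volume quoted above with purely assumed absorption figures, not data from the simulations in [5].

```python
def rt60_sabine(volume_m3, absorption_m2):
    """Sabine reverberation time: T60 = 0.161 * V / A (A in m^2 'sabins')."""
    return 0.161 * volume_m3 / absorption_m2

V = 10_000.0                 # inner volume quoted in the text, m^3
# Assume ~5000 m^2 of interior surface with an average absorption
# coefficient of 0.15 (stone, timber roof, audience): both are guesses.
A = 5000.0 * 0.15
print(f"T60 ~ {rt60_sabine(V, A):.1f} s")   # ~2.1 s, in the range of concert halls
```

Even with these crude assumptions the estimate lands near the roughly two-second reverberation typical of modern concert halls, which is consistent with the roofed odeion suiting music better than speech.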

Thousands of years ago, these antique theatres established principles of acoustic functionality that still prevail today for the proper presentation of theatre and music performances to public audiences, and thus they signal the origins of the art and science of building acoustics.

A virtual acoustic tour of simulated amphitheatres and odeia is available at:
http://www.ancientacoustics2011.upatras.gr/Files/ANC_THE_FLASH/index.html
Please use headphones for a more realistic 3D sound effect.

References
[1] F. Canac, “L’acoustique des théâtres antiques”, published by CNRS, Paris, (1967).
[2] R. Shankland, “Acoustics of Greek theatres”, Physics Today, (1973).
[3] K. Chourmouziadou, J. Kang, “Acoustic evolution of ancient Greek and Roman theatres”, Applied Acoustics vol.69 (2008).
[4] G. C. Izenour, “Roofed Theaters of Classical Antiquity”, Yale University Press, New Haven, Connecticut, (1992).
[5] S. Vassilantonopoulos, J. Mourjopoulos, “The Acoustics of Roofed Ancient Odea”, Acta Acustica united with Acustica, vol.95, (2009).
[6] D. Tsalkanis, www.ancientathens3d.com, (accessed April 2015).
[7] S. L. Vassilantonopoulos, J. N. Mourjopoulos, “A study of ancient Greek and Roman theater acoustics”, Acta Acustica united with Acustica 89 (2002).
[8] A.C. Gade, C. Lynge, M. Lisa, J.H.Rindel, “Matching simulations with measured acoustic data from Roman theatres using the ODEON programme”, Proceedings of Forum Acusticum 2005, (2005).
[9] N. F. Declercq, C. S. A. Dekeyser, “Acoustic diffraction effects at the Hellenistic amphitheatre of Epidaurus: Seat rows responsible for the marvellous acoustics”, J. Acoust. Soc. Am. 121 (2007).
[10] A. Farnetani, N. Prodi, R. Pompoli, “On the acoustics of ancient Greek and Roman theatres”, J. Acoust. Soc. Am. 124 (2008).
[11] T. Lokki, A. Southern, S. Siltanen, L. Savioja, “Studies of Epidaurus with a hybrid room acoustics modelling method”, Acta Acustica united with Acustica, vol.99, 2013.
[12] S. Vassilantonopoulos, T. Zakynthinos, P. Hatziantoniou, N.-A. Tatlas, D. Skarlatos, J. Mourjopoulos, “Measurement and analysis of acoustics of Epidaurus theatre” (in Greek), Hellenic Institute of Acoustics Conference, (2004).
[13] S. Psarras, P. Hatziantoniou, M. Kountouras, N-A. Tatlas, J. Mourjopoulos, D. Skarlatos, “Measurement and Analysis of the Epidaurus Ancient Theatre Acoustics”, Acta Acustica united with Acustica, vol.99, (2013).
[14] Vitruvius, “The ten books on architecture” (translated by Morgan MH), London / Cambridge, MA: Harvard University Press, (1914).
[15] B. Beckers, N. Borgia, “The acoustic model of the Greek theatre”, Protection of Historical Buildings, PROHITECH 09, (2009).
[16] M. Barron, “Auditorium acoustics and architectural design”, London: E & FN Spon, (1993).

2pNSb – A smartphone noise meter app in every pocket?

Chucri A. Kardous – ckardous@cdc.gov
Peter B. Shaw – pbs3@cdc.gov
National Institute for Occupational Safety and Health
Centers for Disease Control and Prevention
1090 Tusculum Avenue
Cincinnati, Ohio 45226

Popular version of paper 2pNSb, “Use of smartphone sound measurement apps for occupational noise assessments”
Presented Tuesday May 19, 2015, 3:55 PM, Ballroom 1
169th ASA Meeting, Pittsburgh, PA
See also: Evaluation of smartphone sound measurement applications

Our world is getting louder. Excessive noise is a public health problem and can cause a range of health issues; noise exposure can induce hearing impairment, cardiovascular disease, hypertension, sleep disturbance, and a host of other psychological and social behavior problems. The World Health Organization (WHO) estimates that there are 360 million people with disabling hearing loss. Occupational hearing loss is the most common work-related illness in the United States; the National Institute for Occupational Safety and Health (NIOSH) estimates that approximately 22 million U.S. workers are exposed to hazardous noise.

Smartphone users are expected to hit the 2 billion mark in 2015. The ubiquity of smartphones and the sophistication of current sound measurement applications (apps) present a great opportunity to revolutionize the way we look at noise and its effects on our hearing and overall health. Through the use of crowdsourcing techniques, people around the world may be able to collect and share noise exposure data using their smartphones. Scientists and public health professionals could rely on such shared data to promote better hearing health and prevention efforts. In addition, the ability to acquire and display real-time noise exposure data raises people’s awareness about their work (and off-work) environment and allows them to make informed decisions about hazards to their hearing and overall well-being. For instance, the European Environment Agency (EEA) developed the NoiseWatch app, which allows citizens around the world to make noise measurements, whether at work or during their leisure activities, and upload the data to a database in real time, using the smartphone’s GPS capabilities to construct a map of the noisiest places and sources in their environment.

However, not all smartphone sound measurement apps are equal. Some are basic and not very accurate, while others are much more sophisticated. NIOSH researchers conducted a study of 192 smartphone sound measurement apps to examine their accuracy and functionality. We conducted the study in our acoustics laboratory and compared the results to a professional sound level meter. Only 10 apps met our selection criteria, and of those only 4 met our accuracy requirement of being within ±2 decibels (dB) of a type 1 professional sound level meter. Apps developed for the iOS platform were more advanced, in functionality and performance, than Android apps. You can read more about our original study on our NIOSH Science Blog at: http://blogs.cdc.gov/niosh-science-blog/2014/04/09/sound-apps/ or download our JASA paper at: http://scitation.aip.org/content/asa/journal/jasa/135/4/10.1121/1.4865269.

Figure 1. Testing the SoundMeter app on the iPhone 5 and iPhone 4S against a ½” Larson-Davis 2559 random incidence reference microphone
Today, we will present our additional efforts to examine the accuracy of smartphone sound measurement apps using external microphones that can be calibrated. Several external microphones are available, mostly for the consumer market, and although they vary greatly in price, they all possess similar acoustical specifications and performed similarly in our laboratory tests. Preliminary results showed even greater agreement with professional sound measurement instruments (±1 dB) over our testing range.

Figure 2. Calibrating the SPLnFFT app with MicW i436 external microphone using the Larson-Davis CAL250 acoustic calibrator (114 dB SPL @ 250Hz)
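
The calibration shown in Figure 2 boils down to deriving one offset that maps the phone’s digital signal level to absolute sound pressure level. The sketch below illustrates the principle with hypothetical code (it is not taken from any of the apps tested): record the 114 dB SPL calibrator tone, compute the offset, then apply it when computing the equivalent continuous level (Leq) of a measurement. Frequency weighting (e.g., A-weighting) is omitted for brevity.

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x, dtype=np.float64)))

def calibration_offset(cal_samples, cal_level_db=114.0):
    """Offset mapping digital RMS level to absolute SPL, from a calibrator tone."""
    return cal_level_db - 20 * np.log10(rms(cal_samples))

def leq(samples, offset_db):
    """Equivalent continuous sound level of a recording, in dB SPL (unweighted)."""
    return 20 * np.log10(rms(samples)) + offset_db

# Demo with synthetic signals: a 250 Hz calibrator tone at an arbitrary digital
# level, then a measurement tone 20 dB lower, which should read 114 - 20 = 94 dB.
fs = 48_000
t = np.arange(fs) / fs
cal = 0.05 * np.sin(2 * np.pi * 250 * t)
offset = calibration_offset(cal)
measured = 0.005 * np.sin(2 * np.pi * 1000 * t)
print(f"Leq = {leq(measured, offset):.1f} dB SPL")   # ~94.0 dB
```

One offset per microphone is the point of using calibratable external microphones: the phone’s built-in microphone response varies from unit to unit, whereas a calibrated external microphone pins the digital-to-SPL mapping down.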

Figure 3. Laboratory testing of 4 iOS devices using MicW i436 and comparing the measurements to a Larson-Davis type 831 sound level meter (pink noise at 75 dBA)

We will also discuss our plans to develop and distribute a free NIOSH Sound Level Meter app in an effort to facilitate future occupational research and build a job noise exposure database.

Challenges remain with using smartphones to collect and document noise exposure data. Some of the main issues encountered in recent studies relate to privacy and collection of personal data, sustained motivation to participate in such studies, bad or corrupted data, and mechanisms for storing and accessing such data.

2aPP6 – Emergence of Spoken Language in Deaf Children Receiving a Cochlear Implant

Ann E. Geers
Popular version of paper 2aPP6, “Language emergence in early-implanted children”
Presented at the 169th Meeting of the Acoustical Society of America
May 2015

Before the advent of cochlear implants (CIs), children who were born profoundly deaf acquired spoken language and literacy skills with great difficulty and over many years of intensive education. Even with the most powerful hearing aids and early intervention, children learned spoken language at about half the normal rate and fell further behind in language and reading with increasing age. At that time, many deaf children learned to communicate through sign language, though more than 90% of them had parents with normal hearing who did not know how to sign when their deaf child was born.

Following FDA approval in the 1990s, many deaf children began receiving a CI (in one ear) at some point after their second birthday. Dramatic improvements were seen, compared to hearing aid users, in the ability to hear and produce clear speech, understand spoken language and acquire literacy skills. However, many children with CIs still did not reach levels within the range of their age mates with normal hearing in these areas. Over the next two decades, with universal newborn hearing screening mandatory in most states, implantation occurred at younger ages (typically 12-18 months) and CI technology offered improved access to speech, especially soft sounds. As implant performance continued to improve for children receiving one CI, receiving a second CI to optimize hearing in both ears was considered.

This study followed 60 children implanted between 12 and 38 months of age when they were 3, 4 and 10 years old. All of them were in preschool programs focused on developing spoken language skills and had no disabilities other than hearing impairment. By age 10, 95% of them were enrolled in regular education settings with hearing age mates.

Three groups, roughly equal in size, were identified from standardized language tests administered at 4 and 10 years of age. 1) Normal Language Emergence – these children exhibited spoken language skills within the normal range by age 4 and continued along this normal course into their elementary school years. They developed above-average reading comprehension. 2) Late Language Emergence – these children were language-delayed in preschool, but caught up by the time they were 10. They developed average reading comprehension for their age. 3) Persistent Language Delay – these children were also language-delayed in preschool, but they did not catch up with hearing age-mates by age 10. They were below-average readers.

Achieving age-appropriate language and reading skills by mid-elementary grades is a remarkable accomplishment for children with profound hearing loss and the fact that two-thirds of the sample reached or exceeded this level attests to the efficacy of early cochlear implantation. In fact, children with normal language emergence were most likely to have received a CI very young – between 12 and 18 months of age. However, age at first CI did not differentiate children with late language emergence from those with persistent delay. In fact, these groups did not differ in nonverbal intelligence, mother’s education, bilateral implantation, age at first intervention or age enrolled in regular education classrooms. As a result, predicting during preschool whether or not a child will catch up with hearing children in the same grade is difficult. We looked for factors distinguishing language-delayed preschoolers who would reach age-appropriate language levels by mid-elementary grades from those who would remain delayed. Early prediction is important for intensifying and individualizing early intervention for children at risk for long-term delay.

Results from a battery of tests and questionnaires revealed a constellation of factors distinguishing children with persistent language delay from those with resolving delay. Most of these factors were associated with the quality of the audio input provided by the device. For example, the odds were 3-4 times greater that children who caught up used more recent CI technology than those who remained delayed. Children who caught up in language had a particular advantage in their ability to detect and understand speech presented at soft levels. This is understandable, because incidental or casual language acquisition depends on the ability to overhear soft speech in addition to speech at normal conversational levels. In addition, a smaller repertoire of speech sounds, lower vocabulary and poorer grammar skills were evident in the conversational language of persistently delayed children as early as 3 years of age, with smaller language gains between 3 and 4 years, foreshadowing slower long-term speech and language development. A somewhat surprising finding was that a much larger percentage (47%) of persistently delayed children had left-ear CIs as compared with those who caught up (14%).

These results have important implications for surgeons, speech-language pathologists, educators and audiologists serving young children with cochlear implants. For the surgeon, right-ear placement of the first CI should be preferred over the left unless cochlear anatomy precludes placement at the right ear. This, along with implantation by 18 months, may help to maximize chances of age-appropriate spoken language development. For the speech-language pathologist, the extent of immature speech production and language use during preschool years may foreshadow later language difficulties. For the audiologist, encouraging upgraded speech processor technology and working to ensure the audibility of soft speech when programming the device may positively influence future language development. For the educator, recognition of risk factors for persistent language delay may signal increased intensity of language intervention. Addressing these issues should increase the likelihood that children with CIs will exhibit spoken communication and academic skills in line with expectations for their grade placement.