“Sternal vibrations reflect hemodynamic changes during immersion: underwater ballistocardiography”
Andrew Wiens– Andrew.firstname.lastname@example.org
Omar T. Inan
Georgia Institute of Technology
Electrical and Computer Engineering
Popular version of poster 3aBA12 “Sternal vibrations reflect hemodynamic changes during immersion: underwater ballistocardiography.”
Presented Wednesday, May 19, 2015, 11:30 am, Kings 2
169th ASA Meeting, Pittsburgh
In 2014, one out of every four internet users in the United States wore a wearable device such as a smart watch or fitness monitor. As more people incorporate wearable devices into their daily lives, better techniques are needed to enable reliable, accurate health measurements.
Currently, wearable devices can make simple measurements of various metrics such as heart rate, general activity level, and sleep cycles. Heart rate is usually measured from small changes in the intensity of the light reflected from light-emitting diodes, or LEDs, that are placed on the surface of the skin. In medical parlance, this technique is known as photoplethysmography. Activity level and sleep cycles, on the other hand, are usually measured from relatively large motions of the human body using small sensors called accelerometers.
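At its core, the LED-based heart-rate measurement described above amounts to finding the pulse rate of a periodic intensity signal. As a minimal sketch (using a synthetic pulse train, not real photoplethysmography data, and a simple threshold-crossing detector rather than any particular device's algorithm), the beat interval can be estimated like this:

```python
import numpy as np

def estimate_heart_rate(signal, fs):
    """Estimate heart rate (beats per minute) from a PPG-like signal
    by timing the upward threshold crossings of the detrended waveform."""
    x = signal - np.mean(signal)               # remove the baseline (DC) level
    above = x > 0.5 * np.max(x)                # samples above half the peak height
    crossings = np.flatnonzero(above[1:] & ~above[:-1])  # start of each pulse
    if len(crossings) < 2:
        return 0.0
    beat_period = np.mean(np.diff(crossings)) / fs       # seconds per beat
    return 60.0 / beat_period

# synthetic 72-beats-per-minute pulse train sampled at 100 Hz
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = np.maximum(0.0, np.sin(2 * np.pi * (72 / 60) * t)) ** 4  # narrow pulses
print(round(estimate_heart_rate(ppg, fs)))  # ≈ 72
```

Real devices must additionally contend with motion artifacts and varying light levels, which this sketch ignores.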
Recently, researchers have improved a technique called ballistocardiography, or BCG, that uses one or more mechanical sensors, such as an accelerometer worn on the body, to measure very small vibrations originating from the beating heart. Using this technique, changes in the heart’s time intervals and the volume of pumped blood, or cardiac output, have been measured. These are capabilities that other types of noninvasive wearable sensors currently cannot provide from a single point on the body, such as the wrist or chest wall. This method could become crucial for blood pressure measurement via pulse-transit time, a promising noninvasive, cuffless method that measures blood pressure using the time interval from when blood is ejected from the heart to when it arrives at the end of a main artery.
The goal of the preliminary study reported here was to demonstrate similar measurements recorded during immersion in an aquatic environment. Three volunteers wore a waterproof accelerometer on the chest while immersed in water up to the neck. An example of these vibrations recorded at rest appears in Figure 1. The subjects performed a physiologic exercise called a Valsalva maneuver to temporarily modulate the cardiovascular system. Two water temperatures and three body postures were also tested to discover differences in signal morphology that could arise under different conditions.
Figure 1. The underwater BCG recorded at rest.
Measurements of the vibrations that occurred during single heart beats appear in Figure 2. Investigation of the recorded signals shows that the amplitude of the signal increased during immersion compared to standing in air. In addition, the median frequency of the vibrations also decreased substantially.
Figure 2. Single heart beats of the underwater BCG from three subjects in three different environments and body postures.
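The two signal changes reported above, larger amplitude and lower median frequency during immersion, can be illustrated with a short sketch. The signals below are synthetic sinusoids standing in for the real accelerometer recordings; the median frequency is the frequency below which half of the signal's spectral power lies:

```python
import numpy as np

def median_frequency(signal, fs):
    """Frequency (Hz) below which half of the signal's spectral power lies."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    cumulative = np.cumsum(spectrum)
    return freqs[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

fs = 250  # Hz, a plausible accelerometer sampling rate (assumed)
t = np.arange(0, 4, 1 / fs)
in_air = np.sin(2 * np.pi * 20 * t)           # higher-frequency vibration
immersed = 2.0 * np.sin(2 * np.pi * 10 * t)   # larger amplitude, lower frequency
print(median_frequency(in_air, fs), median_frequency(immersed, fs))  # 20.0 10.0
```

For a real BCG recording the spectrum is broadband rather than a single tone, but the same cumulative-power definition of median frequency applies.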
One remaining question is, why did these changes occur? It is known that a significant volume of blood shifts toward the thorax, or chest, during immersion, leading to changes in the mechanical loading of the heart. It is possible that this phenomenon wholly or partially explains the changes in the vibrations observed during immersion. Finally, how can we make accurate physiologic measurements from the underwater wearable BCG? These are open questions, and further investigation is needed.
COULD WIND TURBINE NOISE INTERFERE WITH GREATER PRAIRIE CHICKEN (Tympanuchus cupido pinnatus) COURTSHIP?
Edward J. Walsh – Edward.Walsh@boystown.org
JoAnn McGee – JoAnn.McGee@boystown.org
Boys Town National Research Hospital
555 North 30th St.
Omaha, NE 68131
Cara E. Whalen – email@example.com
Larkin A. Powell – firstname.lastname@example.org
Mary Bomberger Brown – email@example.com
School of Natural Resources
University of Nebraska-Lincoln
Lincoln, NE 68583
Popular version of paper 1pABa2
Presented Monday afternoon, May 18, 2015
169th ASA Meeting, Pittsburgh
The Sand Hills ecoregion of central Nebraska is distinguished by rolling grass-stabilized sand dunes that rise up gently from the Ogallala aquifer. The aquifer itself is the source of widely scattered shallow lakes and marshes, some permanent and others that come and go with the seasons.
However, the sheer magnificence of this prairie isn’t its only distinguishing feature. Early on frigid, wind-swept, late-winter mornings, a low-pitched hum, interrupted by the occasional dawn song of a Western Meadowlark (Sturnella neglecta) and other songbirds inhabiting the region, is virtually impossible to ignore.
CLICK HERE TO LISTEN TO THE HUM
The hum is the chorus of the Greater Prairie Chicken (Tympanuchus cupido pinnatus), the communal expression of the courtship song of lekking male birds performing an elaborate testosterone-driven, foot-pounding ballet that will decide which males are selected to pass genes to the next generation; the word “lek” is the name of the so-called “booming” or courtship grounds where the birds perform their wooing displays.
While the birds cackle, whine, and whoop to defend territories and attract mates, it is the loud “booming” call, an integral component of the courtship display, that attracts the interest of the bioacoustician – and the female prairie chicken.
The “boom” is an utterance that is carried long distances over the rolling grasslands and wetlands by a narrow band of frequencies ranging from roughly 270 to 325 cycles per second (Whalen et al., 2014). It lasts about 1.9 seconds and is repeated frequently throughout the morning courtship ritual.
Usually, the display begins with a brief but energetic bout of foot stamping or dancing, which is followed by an audible tail flap that gives way to the “boom” itself.
CLICK HERE TO OBSERVE A VIDEO CLIP OF THE COURTSHIP DISPLAY
For the more acoustically and technologically inclined, a graphic representation of the pressure wave of a “boom,” along with its spectrogram (a visual representation showing how the frequency content of the call changes during the course of the bout) and graphs depicting precisely where in the spectral domain the bulk of the acoustic power is carried is shown in Figure 1. The “boom” is clearly dominated by very low frequencies that are centered on approximately 300 Hz (cycles per second).
FIGURE 1: ACOUSTIC CHARACTERISTICS OF THE “BOOM”
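A spectrogram like the one in Figure 1 is built by taking Fourier transforms of short, windowed frames of the recording. The sketch below applies this to a synthetic 300 Hz tone with the boom's roughly 1.9-second duration; the sample rate and frame length are illustrative assumptions, not details of the actual analysis:

```python
import numpy as np

fs = 8000                       # assumed sample rate, Hz
t = np.arange(0, 1.9, 1 / fs)   # a "boom" lasts about 1.9 seconds
boom = np.sin(2 * np.pi * 300 * t)  # stand-in for the ~300 Hz call

# simple short-time Fourier transform: 1024-sample Hann-windowed frames
frame = 1024
n_frames = len(boom) // frame
frames = boom[: n_frames * frame].reshape(n_frames, frame)
spectra = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)) ** 2
freqs = np.fft.rfftfreq(frame, d=1 / fs)

# average the frames and find the strongest frequency bin
dominant = freqs[np.argmax(spectra.mean(axis=0))]
print(dominant)  # dominant bin near 300 Hz
```

With overlapping frames and a real recording, plotting `spectra` against time and `freqs` gives exactly the kind of time-frequency picture shown in the figure.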
Vocalization is, of course, only one side of the communication equation; knowing what these stunning birds can hear is the other.
We are interested in what Greater Prairie Chickens can hear because wind energy developments are encroaching onto their habitat, a condition that makes us question whether noise generated by wind turbines might have the capacity to mask vocal output and complicate communication between “booming” males and attending females.
Step number one in addressing this question is to determine what sounds the birds are capable of hearing – what their active auditory space looks like. The gold standard of hearing tests is behavioral in nature – you know, the ‘raise your hand or press this button if you can hear this sound’ kind of testing. However, this method isn’t very practical in a field setting; you can’t easily ask a Greater Prairie Chicken to raise its hand, or in this case its wing, when it hears the target sound.
To solve this problem, we turn to electrophysiology – to an evoked brain potential that is a measure of the electrical activity produced by the auditory parts of the inner ear and brain in response to sound. The specific test that we settled on is known as the ABR, the auditory brainstem response.
The ABR is a fairly remarkable response that captures much of the peripheral and central auditory pathway in action when short tone bursts are delivered to the animal. Within approximately 5 milliseconds following the presentation of a stimulus, the auditory periphery and brain produce a series of as many as five positive-going, highly reproducible electrical waves. These waves, or voltage peaks, more or less represent the sequential activation of primary auditory centers sweeping from the auditory nerve (the VIIIth cranial nerve), which transmits the responses of the sensory cells of the inner ear rostrally, through auditory brainstem centers toward the auditory cortex.
Greater Prairie Chickens included in this study were captured using nets that were placed on leks in the early morning hours. Captured birds were transported to a storage building that had been reconfigured into a remote auditory physiology lab where ABRs were recorded from birds positioned in a homemade, sound attenuating space – an acoustic wedge-lined wooden box.
FIGURE 2: ABR WAVEFORMS
The waveform of the Greater Prairie Chicken ABR closely resembles ABRs recorded from other birds – three prominent positive-going electrical peaks, and two smaller amplitude waves that follow, are easily identified, especially at higher levels of stimulation. In Figure 2, ABR waveforms recorded from an individual bird in response to 2.8 kHz tone pips are shown in the left panel and the group averages of all birds studied under the same stimulus conditions are shown in the right panel; the similarity of response waveforms from bird to bird, as indicated in the nearly imperceptible standard errors (shown in gray), testifies to the stability and utility of the tool. As stimulus level is lowered, ABR peaks decrease in amplitude and occur at later time points following stimulus onset.
Since our goal was to determine if Greater Prairie Chickens are sensitive to sounds produced by wind turbines, we generated an audiogram based on level-dependent changes in ABRs representing responses to tone pips spanning much of the bird’s audiometric range (Figure 3). An audiogram is a curve representing the relationship between response threshold (i.e., the lowest stimulus level producing a clear response) and stimulus frequency; in this case, thresholds were averaged across all animals included in the investigation.
FIGURE 3: AUDIOGRAM AND WIND TURBINE NOISE
As shown in Figure 3, the region of greatest hearing sensitivity is in the 1 to 4 kHz range, and thresholds increase (sensitivity is lost) rapidly at higher stimulus frequencies and more gradually at lower frequencies. Others have shown that ABR threshold values are approximately 30 dB higher than thresholds determined behaviorally in the budgerigar (Melopsittacus undulatus) (Brittan-Powell et al., 2002). So, to answer the question posed in this investigation, ABR threshold values were adjusted to estimate behavioral thresholds, and the resulting sensitivity curve was compared with the acoustic output of a wind turbine farm studied by van den Berg in 2006. The finding is clear: wind turbine noise falls well within the audible space of Greater Prairie Chickens occupying booming grounds in the acoustic footprint of active wind turbines.
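The threshold adjustment described above – shifting each ABR threshold down by the roughly 30 dB ABR-versus-behavioral offset reported for the budgerigar – can be sketched as follows. Every number in this snippet is a hypothetical placeholder, not the measured prairie chicken thresholds or turbine noise levels:

```python
# Hypothetical ABR thresholds (dB SPL) at several stimulus frequencies (kHz).
# These illustrative values are NOT the measured Greater Prairie Chicken data.
abr_thresholds = {0.5: 75, 1.0: 55, 2.0: 50, 4.0: 55, 8.0: 85}

ABR_OFFSET_DB = 30  # approximate ABR-vs-behavioral offset (Brittan-Powell et al., 2002)

# Estimated behavioral audiogram: shift every ABR threshold down by the offset
behavioral = {f: spl - ABR_OFFSET_DB for f, spl in abr_thresholds.items()}

# Turbine noise at a frequency is audible (and potentially masking) where it
# exceeds the estimated behavioral threshold. Levels below are made up.
turbine_noise_db = {0.5: 55, 1.0: 45, 2.0: 35, 4.0: 20, 8.0: 10}
audible = [f for f in behavioral if turbine_noise_db[f] > behavioral[f]]
print(audible)  # low-frequency bands where illustrative turbine noise is audible
```

The real comparison in Figure 3 is the same idea carried out with the measured audiogram and van den Berg's turbine spectra.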
While findings reported here indicate that Greater Prairie Chickens are sensitive to at least a portion of wind turbine acoustic output, the next question that we plan to address will be more difficult to answer: Does noise propagated from wind turbines interfere with vocal communication among Greater Prairie Chickens courting one another in the Nebraska Sand Hills? Efforts to answer that question are in the works.
Presentation #1pABa2 “Hearing sensitivity in the Greater Prairie Chicken” by Edward J. Walsh, Cara Whalen, Larkin Powell, Mary B. Brown, and JoAnn McGee will take place on Monday, May 18, 2015, at 1:15 PM in the Rivers room at the Wyndham Grand Pittsburgh Downtown Hotel. The abstract can be found by searching for the presentation number here:
Brittan-Powell, E.F., Dooling, R.J. and Gleich, O. (2002). Auditory brainstem responses in adult budgerigars (Melopsittacus undulatus). J. Acoust. Soc. Am. 112:999-1008.
van den Berg, G.P. (2006). The sound of high winds. The effect of atmospheric stability on wind turbine sound and microphone noise. Dissertation, Groningen University, Groningen, The Netherlands.
Whalen, C., Brown, M.B., McGee, J., Powell, L.A., Smith, J.A. and Walsh, E.J. (2014). The acoustic characteristics of greater prairie-chicken vocalizations. J. Acoust. Soc. Am. 136:2073.
Cameron Vongsawad – firstname.lastname@example.org
Mark Berardi – email@example.com
Kent Gee – firstname.lastname@example.org
Tracianne Neilsen – email@example.com
Jeannette Lawler – firstname.lastname@example.org
Department of Physics & Astronomy
Brigham Young University
Provo, Utah 84602
Popular version of paper 2pED, “Development of an acoustics outreach program for the deaf.”
Presented Tuesday Afternoon, May 19, 2015, 1:45 pm, Commonwealth 2
169th ASA Meeting, Pittsburgh
The deaf and hard of hearing have less intuition about sound but are no strangers to the effects of pressure, vibrations, and other basic acoustical principles. Brigham Young University recently expanded its “Sounds to Astound” outreach program (sounds.byu.edu) and developed an acoustics demonstration program for visiting deaf students. The program was designed to help the students connect with a wide variety of acoustical principles through highly visual and kinesthetic demonstrations of sound, as well as by utilizing the students’ primary language, American Sign Language (ASL).
In science education, the “Hear and See” methodology (Beauchamp 2005) has been shown to be an effective teaching tool in assisting students to internalize new concepts. This sensory-focused approach can be applied to a deaf audience in a different way, the “See and Feel” method. In both, whenever possible students participate in demonstrations to experience the physical principle being taught.
In developing the “See and Feel” approach, a fundamental consideration was to select the principles of sound that were easily communicated using words that exist and are commonly used in ASL. For example, the word “pressure” is common, while the word “wave” is uncommon. Additionally, the sign for “wave” is closely associated with a water wave, which could lead to confusion about the nature of sound as a longitudinal wave. In the absence of an ASL sign for “resonance,” the nature of sound was taught by focusing on the signs for “vibration” and “pressure.” Additional vocabulary, such as mode, amplitude, node, antinode, and wave propagation, was presented using classifiers (non-lexical visualizations of gestures and hand shapes) and by fingerspelling the words (Scheetz 2012).
Two bilingual teaching approaches were tried to make ASL the primary instruction language while also enabling communication among the demonstrators. In the first approach, the presenter used ASL and spoken English simultaneously. In the second approach, the presenter used only ASL and other interpreters provided the spoken English translation. The second approach proved to be more effective for both the audience and the presenters because it allowed the presenter to focus on describing the principles in the native framework of ASL, resulting in a better presentation flow for the deaf students.
In addition to the tabletop demonstrations (illustrated in the figures), the students were also able to feel sound in BYU’s reverberation chamber as a large subwoofer was operated at resonance frequencies of the room. The students were invited to walk around the room to find where the vibrations felt weakest; in doing so, they mapped the nodal lines of the wave patterns in the room. The participants also enjoyed standing in the corners of the room, where the sound pressure is eight times as strong, and feeling the power of sound vibrations.
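The resonance frequencies the subwoofer was tuned to can be estimated with the standard formula for a rigid-walled rectangular room, f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²); the eightfold corner effect arises because pressure doubles at each of the three wall pairs meeting there (2³ = 8). The dimensions below are assumed for illustration and are not those of the BYU chamber:

```python
import itertools
import math

C = 343.0  # speed of sound in air, m/s

def room_modes(lx, ly, lz, max_order=2):
    """Resonance frequencies (Hz) of a rigid-walled rectangular room,
    indexed by mode numbers (nx, ny, nz)."""
    modes = {}
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # (0, 0, 0) is not a mode
        f = (C / 2) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes[(nx, ny, nz)] = round(f, 1)
    return modes

# assumed chamber dimensions in meters (NOT the actual BYU room)
modes = room_modes(5.0, 6.0, 7.0)
lowest = min(modes.values())
print(lowest)  # lowest axial mode along the 7 m dimension: 343 / (2 * 7) = 24.5 Hz
```

Driving the subwoofer at any of these frequencies sets up a standing-wave pattern whose nodal lines are exactly what the students mapped by walking the room.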
The experience of sharing acoustics with the deaf and hard of hearing has been remarkable. We have learned a few lessons about what does and doesn’t work well with regards to the ASL communication, visual instruction, and accessibility of the demos to all participants. Clear ASL communication is key to the success of the event. As described above, it is more effective if the main presenter communicates with ASL and someone else, who understands ASL and physics, provides a verbal interpretation for non-ASL volunteers. Having a fair ratio of interpreters to participants gives individualized voices for each person in attendance throughout the event. Another important consideration is that the ASL presenter needs to be visible to all students at all times. Extra thought is required to illuminate the presenter when the demonstrations require low lighting for maximum visual effect.
Because most of the demonstrations traditionally rely on the perception of sound, care must be taken to provide visual instruction about the vibrations for hearing-impaired participants (Lang 1973, 1981). This required the presenters to think creatively about how to modify demos. Dividing students into smaller groups (3-4 students) allows each student to interact with the demonstrations more closely (Vongsawad 2014). This hands-on approach improves the students’ ability to “See and Feel” the principles of sound illustrated in the demonstrations and helps them benefit more fully from the event.
While a bit hesitant at first, by the end of the event, students were participating more freely, asking questions and excited about what they had learned. They left with a better understanding of principles of acoustics and how sound affects their lives. The primary benefit, however, was providing opportunities for deaf children to see that resources exist at universities for them to succeed in higher education.
We would like to acknowledge support for this work from a National Science Foundation Grant (IIS-1124548) and from the Sorensen Impact Foundation. The visiting students also took part in a research project to develop a technology referred to as “Signglasses” – head-mounted artificial reality displays that could be used to help deaf and hard of hearing students better participate in planetarium shows. We also appreciate the support from the Acoustical Society of America in the development of BYU’s student chapter outreach program, “Sounds to Astound.” This work could not have been completed without the help of the Jean Massieu School of the Deaf in Salt Lake City, Utah.
This video demonstrates the use of ASL as the primary means of communication for students. Communication in their native language improved understanding.
Figure 1: Vibrations on a string were made to appear “frozen” in time by matching the frequency of a strobe light to the frequency of oscillation, which enhanced the ability of students to analyze the wave properties visually.
Figure 2: The Rubens tube is another classic physics and acoustics demonstration, used here to show resonance in a pipe. It is similar to the vibrations on a string, but this time the resonance is driven directly by sound waves. A speaker is attached to the end of a tube filled with propane; the escaping propane is lit, and the flame heights reveal the variations in pressure caused by the sound wave inside the tube. Here students are able to visualize a variety of sound properties.
Figure 3: Free spectrum analyzer and oscilloscope software was used to visualize the properties of sound decomposed into its component frequencies. Students were encouraged to make sounds by clapping, snapping, using a tuning fork or their voice, and were able to see that sounds made in different ways have different features. It was significant for the hearing-impaired students to see that the noises they made looked similar to everyone else’s.
Figure 4: A loudspeaker driven at a frequency of 40 Hz was used to first make a candle flame flicker and then blow out as the loudness was increased to demonstrate the power of sound traveling as a pressure wave in the air.
Figure 5: A surface vibration loudspeaker placed on a table was another effective demonstration, allowing the students to feel the sound through the surface. Some students placed the loudspeaker on their heads for an even more personal experience with sound.
Figure 6: Pond foggers use high-frequency, high-amplitude sound to turn water into fog, a cold mist of tiny water droplets. This demonstration gave students the opportunity to see and feel how powerful sound vibrations can be. They could also put their fingers close to the fogger and feel the vibrations in the water.
Tags: education, deafness, language
Michael S. Beauchamp, “See me, hear me, touch me: Multisensory integration in lateral occipital-temporal cortex,” Cognitive Neuroscience: Current Opinion in Neurobiology 15, 145-153 (2005).
N. A. Scheetz, Deaf Education in the 21st Century: Topics and Trends (Pearson, Boston, 2012) pp. 152-62.
Cameron T. Vongsawad, Tracianne B. Neilsen, and Kent L. Gee, “Development of educational stations for Acoustical Society of America outreach,” Proc. Mtgs. Acoust. 20, 025003 (2014).
Harry G. Lang, “Teaching Physics to the Deaf,” Phys. Teach. 11, 527 (September 1973).
Harry G. Lang, “Acoustics for deaf physics students,” Phys. Teach. 11, 248 (April 1981).
Hollow vs. Foam-filled racket: Feel-good vibrations
Kritika Vayur – email@example.com
Dr. Daniel A. Russell – firstname.lastname@example.org
Pennsylvania State University
201 Applied Science Building
State College, PA, 16802
Popular version of paper 3aSA11, “Vibrational analysis of hollow and foam-filled graphite tennis rackets”
Presented Wednesday morning, May 20, 2015, 11:15 AM in room Kings 3
169th ASA Meeting, Pittsburgh
Tennis Rackets and Injuries
The typical modern tennis racket has a light-weight, hollow graphite frame with a large head. Though these rackets are easier to swing, there seems to be an increase in the number of players experiencing injuries commonly known as “tennis elbow”. Recently, even notable professional players such as Rafael Nadal, Victoria Azarenka, and Novak Djokovic have withdrawn from tournaments because of wrist, elbow or shoulder injuries.
A recent racket design with a solid foam-filled graphite frame claims to reduce the risk of injury. Previous testing has suggested that these foam-filled rackets are less stiff and damp vibrations more than hollow rackets, thus reducing the shock delivered to the arm of the player and the associated risk of injury [1]. Figure 1 shows cross-sections of the handles of hollow and foam-filled versions of the same model racket.
The preliminary study reported in this paper was an attempt to identify the vibrational characteristics that might explain why foam-filled rackets improve feel and reduce risk of injury.
Figure 1: Cross-section of the handle of a foam-filled racket (left) and a hollow racket (right).
The first vibrational characteristic we set out to identify was the damping associated with the first few bending and torsional vibrations of the racket frame. A higher damping rate means the unwanted vibration dies away faster and results in a less painful vibration delivered to the hand, wrist, and arm. Previous research on handheld sports equipment (baseball and softball bats and field hockey sticks) has demonstrated that bats and sticks with higher damping feel better and minimize painful sting [2,3,4].
We measured the damping rates of 20 different tennis rackets by suspending each racket from the handle with rubber bands, striking the racket frame in the head region, and measuring the resulting vibration at the handle with an accelerometer. Damping rates were obtained from the frequency response of the racket using a frequency analyzer. We note that suspending the racket from rubber bands is a free boundary condition, but other research has shown that this free boundary condition more closely reproduces the vibrational behavior of a hand-held racket than does a clamped-handle condition [5,6].
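One common way to extract a damping rate from a measured frequency response is the half-power (-3 dB) bandwidth method. The paper does not state the exact procedure used, so the sketch below is a generic illustration on a synthetic single-mode resonance:

```python
import numpy as np

def damping_ratio(freqs, magnitude):
    """Estimate the damping ratio of a single resonance peak from the
    half-power (-3 dB) bandwidth: zeta ~ (f2 - f1) / (2 * fn)."""
    peak = np.argmax(magnitude)
    half_power = magnitude[peak] / np.sqrt(2)   # -3 dB below the peak
    band = freqs[magnitude >= half_power]       # frequencies above half power
    return (band[-1] - band[0]) / (2 * freqs[peak])

# synthetic frequency response of a single mode at 180 Hz with 2% damping
fn, zeta = 180.0, 0.02
f = np.linspace(100, 260, 4001)
r = f / fn
H = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

print(round(damping_ratio(f, H), 3))  # recovers approximately 0.02
```

The 180 Hz mode frequency and 2% damping here are arbitrary example values; a real racket measurement would show several such peaks, each analyzed the same way.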
Measured damping rates for the first bending mode, shown in Fig. 2, indicate no difference between the damping and decay rates for hollow and foam-filled graphite rackets. Similar results were obtained for other bending and torsional modes. This result suggests that the benefit of or preference for foam-filled rackets is not due to a higher damping that could cause unwanted vibrations to decay more quickly.
Figure 2: Damping rates of the first bending mode for 20 rackets, hollow (open circles) and foam-filled (solid squares). A higher damping rate means the vibration will have a lower amplitude and will decay more quickly.
Vibrational Mode Shapes and Frequencies
Experimental modal analysis is a common method to determine how the racket vibrates with various mode shapes at its resonance frequencies [7]. In this experiment, two rackets were tested, a hollow and a foam-filled racket of the same make and model. Both rackets were freely suspended by rubber bands, as shown in Fig. 3. An accelerometer, fixed at one location, measured the vibrational response to a force hammer impact at each of approximately 180 locations around the frame and strings of the racket. The resulting frequency response functions for each impact location were post-processed with modal analysis software to extract vibrational mode shapes and resonance frequencies. An example of the vibrational mode shapes for a hollow graphite tennis racket may be found on Dr. Russell’s website.
Figure 3: Modal analysis set up for a freely suspended racket.
Figure 4 compares the first and third bending modes and the first torsional mode for a hollow and a foam-filled racket; the only difference between the two rackets is that one is hollow and the other is foam-filled. In the figure, the pink and green regions represent motion in opposite directions, and the white regions indicate locations, called nodes, where no vibration occurs. The sweet spot of a tennis racket is often identified as the center of the nodal line of the first bending mode shape in the head region [8]. An impact from an incoming ball at this location results in zero vibration at the handle, and therefore a better “feel” for the player. The data in Fig. 4 show that there are very few differences between the mode shapes of the hollow and foam-filled rackets. The frequencies at which the mode shapes occur are slightly higher for the foam-filled rackets than for the hollow rackets, but the differences in shape are negligible.
Figure 4: Contour maps representing the out-of-plane vibration amplitude for the first bending (left), first torsional (middle), and third bending (right) modes for a hollow (top) and a foam-filled racket (bottom) of the same make and model.
This preliminary study shows that damping rates for this particular design of foam-filled rackets are not higher than those of hollow rackets. The modal analysis gives a closer, yet non-conclusive, look at the intrinsic properties of the hollow and foam-filled rackets. Any benefit of this racket design may instead be related to the impact shock delivered to the player’s hand, but additional testing is needed to examine this conjecture.
Tags: tennis, vibrations, graphite, design
[1] Ferrara, L., & Cohen, A. (2013). A mechanical study on tennis racquets to investigate design factors that contribute to reduced stress and improved vibrational dampening. Procedia Engineering, 60, 397-402.
[2] Russell, D.A. (2012). Vibration damping mechanisms for the reduction of sting in baseball bats. In 164th Meeting of the Acoustical Society of America, Kansas City, MO, Oct 22-26. Journal of the Acoustical Society of America, 132(3) Pt. 2, 1893.
[3] Russell, D.A. (2012). Flexural vibration and the perception of sting in hand-held sports implements. In Proceedings of InterNoise 2012, August 19-22, New York City, NY.
[4] Russell, D.A. (2006). Bending modes, damping, and the sensation of sting in baseball bats. In Proceedings of the 6th IOMAC Conference, 1, 11-16.
[5] Banwell, G.H., Roberts, J.R., & Halkon, B.J. (2014). Understanding the dynamic behavior of a tennis racket under play conditions. Experimental Mechanics, 54, 527-537.
[6] Kotze, J., Mitchell, S.R., & Rothberg, S.J. (2000). The role of the racket in high-speed tennis serves. Sports Engineering, 3, 67-84.
[7] Schwarz, B.J., & Richardson, M.H. (1999). Experimental modal analysis. CSI Reliability Week, 35(1), 1-12.
[8] Cross, R. (2004). Center of percussion of hand-held implements. American Journal of Physics, 72, 622-630.
Soundscapes and human restoration in green urban areas
Irene van Kamp, (email@example.com)
Elise van Kempen,
National Institute for Public Health and the Environment
PO Box 1, Postvak 10
3720 BA BILTHOVEN
Popular version of paper in session 2aNSa, “Soundscapes and human restoration in green urban areas”
Presented Tuesday morning, May 19, 2015, 9:35 AM, Commonwealth 1
169th ASA Meeting, Pittsburgh
Worldwide there is a revival of interest in the positive effect of landscapes, green and blue space, and open countryside on human well-being, quality of life, and health, especially for urban dwellers. However, most studies do not account for the influence, whether negative or positive, of the acoustic environment in these spaces. One of the few studies in the field, by Kang and Zhang (2010), identified relaxation, communication, dynamics and spatiality as the key factors in the evaluation of urban soundscapes. Remarkably, they found that the general public and urban designers value public space very differently: the designers had a much stronger preference for natural sounds and green spaces than the lay observers. Do we as professionals tend to exaggerate the value of green, and what characteristics of urban green space are key to health, wellbeing and restoration? And what role do the acoustic quality and the accompanying social quality play in this?
In his famous studies on livable streets, Donald Appleyard concluded that in streets with heavy traffic, the number of contacts with friends and acquaintances, and the amount of social interaction in general, was much lower. People in busy streets also tended to describe their environment as much smaller than their counterparts in quiet streets did. In other words, the acoustic quality affects not only our wellbeing and behavior but also our sense of territory, social cohesion and social interactions. And this concerns all of us: citing Appleyard, “nearly everyone in the world lives in a street”.
There is evidence that green or natural areas (wilderness, or urban environments with natural elements) as well as areas with high sound quality can intrinsically provide restoration to people who spend time there. Merely knowing that such quiet, green places are available also seems to buffer the relationship between stress and health (Van Kamp, Klaeboe, Brown, and Lercher, 2015; in Jian Kang and Brigitte Schulte-Fortkamp (Eds.), in press).
Recently, a European study was performed into the health effects of access to and use of green areas in four European cities of varying size in Spain, the UK, the Netherlands, and Lithuania.
At the four study centers, people were selected from neighborhoods with varying levels of socioeconomic status and of green and blue space. By means of a structured interview, information was gathered about the availability, use and importance of green space in the immediate environment, as well as the sound quality of favorite green areas used for physical activity, social encounters and relaxation. Data are also available about perceived mental and physical health and medication use. This allowed the association between indicators of green space, restoration and health to be analyzed while accounting for perceived soundscapes in more detail. In general, four mechanisms are assumed to lead from green and tranquil space to health: via physical activity, via social interactions, via relaxation, and finally via reduced levels of traffic-related air and noise pollution. This paper explores the role of sound in the process that leads from access to and use of green space to restoration and health; so far this aspect has been understudied. There is some indication that certain areas contribute to restoration more than others. Most studies address the restorative effects of natural recreational areas outside the urban environment. The question is whether natural areas within, and in the vicinity of, urban areas contribute to psycho-physiological and mental restoration after stress as well. Does restoration require the absence of urban noise?
Example of an acoustic environment – a New York City Park – with potential restorative outcomes (Photo: A.L. Brown)