2pABa1 – Snap chat: listening in on the peculiar acoustic patterns of snapping shrimp, the noisiest animals on the reef

Ashlee Lillis – ashlee@whoi.edu
T. Aran Mooney – amooney@whoi.edu

Marine Research Facility
Woods Hole Oceanographic Institution
266 Woods Hole Road
Woods Hole, MA 02543

Popular version of paper 2pABa1
Presented Tuesday afternoon, November 29, 2016
172nd ASA Meeting, Honolulu

Characteristic soundscape recorded on a coral reef in St. John, US Virgin Islands. The conspicuous crackle is produced by many tiny snapping shrimp.

Put your head underwater in almost any tropical or sub-tropical coastal area and you will hear a continuous, static-like noise filling the water. The source of this ubiquitous sizzling sound, found in shallow-water marine environments around the world, was long considered a mystery of the sea. It wasn't until World War II investigations of this troublesome underwater sound that hidden colonies of a type of small shrimp were discovered to be the cause of the pervasive crackling (Johnson et al., 1947).

Individual snapping shrimp (Figure 1), sometimes referred to as pistol shrimp, measure only a few centimeters long, yet they produce one of the loudest sounds in nature using a specialized snapping claw. The high-intensity sound is actually the result of a bubble popping when the claw closes at incredibly high speed, creating not only the characteristic "snap" but also a flash of light and an extremely high temperature, all in a fraction of a millisecond (Versluis et al., 2000). Because these shrimp form large, dense aggregations, living unseen within reefs and rocky habitats, their combined snaps create the consistent crackling sound familiar to mariners. Shrimp use snapping for defense and territorial interactions, but our recent studies suggest it likely serves other, still-unknown functions as well.


Figure 1. Images of Alpheus heterochaelis, the species of snapping shrimp we are using to test hypotheses in the lab. This is the dominant species of snapping shrimp found coastally in the Southeast United States, but there are hundreds of different species worldwide, easily identified by their relatively large snapping claw.

Since snapping shrimp produce the dominant sound in many marine regions, changes in their activity or population substantially alter ambient sound levels at a given location or time. This means that the behavior of snapping shrimp exerts an outsized influence on the sensory environment of a variety of marine animals, and it has implications for human uses of underwater sound (e.g., harbor defense, submarine detection). Despite this fundamental contribution to the acoustic environment of temperate and coral reefs, relatively little is known about the patterns of snapping shrimp sound production, or about the behaviors and environmental influences underlying them. So, essentially, we ask: what is all the snapping about?

Figure 2. An underwater acoustic recorder deployed in a coral reef setting. Recorders can be left to record sound samples at scheduled times (e.g., every 10 minutes) so that we can examine long-term temporal trends in snapping shrimp acoustic activity on the reef.

Recent advances in underwater recording technology, along with growing interest in passive acoustic monitoring, have aided our efforts to sample marine soundscapes more thoroughly (Figure 2), and we are discovering complex dynamics in snapping shrimp sound production. We collected long-term underwater recordings in several Caribbean coral reef systems and analyzed the snap rates. Our soundscape data show that snap rates generally exhibit daily rhythms (Figure 3), but that these rhythms can vary over short spatial scales (e.g., opposite patterns between nearby reefs) and shift substantially over time (e.g., daytime versus nighttime snapping during different seasons). These acoustic patterns relate to environmental variables such as temperature, light, and dissolved oxygen, as well as to the behavior of the shrimp themselves.
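For readers curious how snap rates can be pulled from long recordings, here is a minimal Python sketch of threshold-based snap counting; the band edges, threshold factor, and hold-off time are illustrative assumptions, not the settings used in our study.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def snaps_per_minute(path, band=(2000.0, 20000.0), k=6.0, holdoff_s=0.01):
    """Count broadband transients (candidate snaps) in a mono WAV recording."""
    fs, x = wavfile.read(path)   # assumes mono; the band above needs fs > 40 kHz
    x = x.astype(float)
    # Band-pass around the broadband snap energy to suppress low-frequency noise.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(sosfilt(sos, x))
    # Flag samples well above the median amplitude as candidate snap samples.
    above = np.flatnonzero(env > k * np.median(env))
    if above.size == 0:
        return 0.0
    # Keep only the first sample of each burst so each snap is counted once.
    onsets = above[np.insert(np.diff(above) > holdoff_s * fs, 0, True)]
    return len(onsets) / (len(x) / fs / 60.0)
```

Applying a counter like this to sound samples scheduled over days or weeks yields time series like those in Figure 3.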

Figure 3. Time-series of snap rates detected on two nearby USVI coral reefs for a week-long recording period. Snapping shrimp were previously thought to consistently snap more during the night, but we found in this study location that shrimp were more active during the day, with strong dawn and dusk peaks at one of the sites. This pattern conflicts with what little is known about snapping behaviors and is motivating further studies of why they snap.

The relationships between environment, behaviors, and sound production by snapping shrimp are really only beginning to be explored. By listening in on coral reefs, our work is uncovering intriguing patterns that suggest a far more complex picture of the role of snapping shrimp in these ecosystems, as well as the role of snapping for the shrimp themselves. Learning more about the diverse habits and lifestyles of snapping shrimp species is critical to better predicting and understanding variation in this dominant sound source, and has far-reaching implications for marine ecosystems and human applications of underwater sound.

References

Johnson, M. W., Everest, F. A., and Young, R. W. (1947). "The role of snapping shrimp (Crangon and Synalpheus) in the production of underwater noise in the sea," Biol. Bull. 93, 122–138.

Versluis, M., Schmitz, B., von der Heydt, A., and Lohse, D. (2000). “How snapping shrimp snap: through cavitating bubbles,” Science, 289, 2114–2117. doi:10.1126/science.289.5487.2114

1aSA – On a Fire Extinguisher with Sound-wind for the Beginning Stage of Fire

Myung-Jin Bae, mjbae@ssu.ac.kr
Myung-Sook Kim, kimm@ssu.ac.kr
Soongil University, 369 Sangdo-ro, Dongjak-gu, 06978 Seoul Korea

Popular version of 1aSA “On a fire extinguisher using sound winds”
Presented 10:30 AM – 12:00 PM, November 28, 2016
172nd ASA Meeting, Honolulu, U.S.A.

There are a variety of fire extinguishers on the market with differing extinguishing methods, including powder-dispersers, fluid-dispersers, gas-dispersers, and water-dispersers. There has been little advancement in fire extinguisher technology in the past 50 years, yet issues may arise when using any of these types of extinguishers during an emergency that hinder their smooth operation. For example, powder, fluid, or gas can solidify and become stuck inside the container, or batteries can discharge due to neglected maintenance. This leaves a need for a new kind of fire extinguisher that will operate reliably at the beginning stage of a fire without risk of failure. The answer may be the sound fire extinguisher.

The sound fire extinguisher has been in development since DARPA, the Defense Advanced Research Projects Agency of the United States, publicized the results of its project in 2012, showing that a fire can be put out by surrounding it with two large loudspeakers. The speakers were enormous at the time because they needed to create enough sound power to extinguish the fire. As a follow-up, in 2015 American graduate students introduced a portable sound extinguisher and demonstrated it in a video posted on YouTube. But it still required heavy equipment, weighing 9 kilograms, was relatively weak in power, and had long cables. In August of 2015, we, the Sori Sound Engineering Research Institute (SSERI), introduced an improved device: a sound extinguisher using a sound lens in a speaker to focus the sound, making it roughly 10 times more powerful than the device presented in the YouTube video.

Our device still exhibited problems, such as its weight of over 2.5 kilograms and the need to hold it close to the flame. Here we introduce a further improved sound extinguisher that increases the efficiency of the device by utilizing sound-wind. As illustrated in Figures 1 and 2 below, sound fire extinguishers do not use any water or chemical fluids as conventional extinguishers do; they emit only sound. When the sound extinguisher produces low-frequency sound at 100 Hz, its vibration energy reaches the flame, scatters the flame's membrane, blocks the influx of oxygen, and subdues the flame.

The first version of the extinguisher, in which a sound lens in a speaker produced roughly 10 times more power through focusing, introduced by the SSERI research team, is shown in Figure 1. It was relatively light, weighing only 2.5 kilograms, one third the weight of previous devices, and thus could be carried in one hand without any connecting cables. It was also small, measuring 40 centimeters (a little more than a foot) in length. With an easy on-off switch, it is simple to operate at a distance of 1 to 2 meters (about 1 to 2 yards) from the flame, and it can be used continuously for one hour when fully charged.

The further improved version of the sound fire extinguisher is shown in Figure 2. The most important improvement in our new fire extinguisher is the utilization of wind. Just as we blow out candles with air from our mouths, fire can be put out by wind if its speed exceeds 5 meters per second when it reaches the flame. To acquire the power and speed required to put out a fire, we developed a way to increase the speed of wind using low-powered speakers: a method of magnifying the power of the sound-wind.

Figure 1. The first sound fire extinguisher by SSERI: the mop type.

Figure 2. The improved extinguisher by SSERI: the portable type.

Wind by itself generates white noise, but we overlaid particular sound frequencies on the wind. When the wind is driven at a certain sound frequency, namely its resonance frequency, its amplitude is magnified, creating a larger sound-wind. Figure 3 below illustrates the mechanism of a fire extinguisher with a sound-wind amplifier. A speaker produces low-frequency sound (100 Hz and below) to create the sound-wind, which is then resonated using the horn effect to magnify it to roughly 15 times the power. The magnified sound-wind reaches the flame and instantly puts out the fire.
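To express the quoted amplification in decibel terms, a power ratio converts as 10·log10(ratio); a quick sketch (the 15x figure comes from the text above, the formula is standard acoustics):

```python
import math

power_ratio = 15.0                       # horn-effect amplification quoted above
gain_db = 10 * math.log10(power_ratio)   # power ratio expressed in decibels
print(f"gain = {gain_db:.1f} dB")        # about 11.8 dB
```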

In summary, with these improvements, the sound-wind extinguisher is best suited for the beginning stage of a fire. It can be used at home, at work, and aboard aircraft, vessels, and cars. In the future, we will continue our efforts to further improve the sound-wind fire extinguisher so that it can become available for popular use.

Figure 3. The mechanism of a sound-wind fire extinguisher.

References
[1] DARPA Demonstration, https://www.youtube.com/watch?v=DanOeC2EpeA
[2] American graduate students (George Mason Univ.), https://www.youtube.com/watch?v=uPVQMZ4ikvM
[3] Park, S.Y., Yeo, K.S., Bae, M.J. “On a Detection of Optimal Frequency for Candle Fire-extinguishing,” ASK, Proceedings of 2015 Fall Conference of ASK, Vol. 34, No. 2(s), pp. 32, No. 13, Nov. 2015.
[4] Ahn, I.S., Park, H.W., Bae, S.G., Bae, M.J., "A Study on a sound fire extinguisher using special sound lens," Journal of the Acoustical Society of America, Vol. 139, No. 4, pp. 2077, April 2016.

*Video file attached: sound-wind extinguisher V2

2pSC – How do narration experts provide expressive storytelling in Japanese fairy tales?

Takashi Saito – saito@sc.shonan-it.ac.jp
Shonan Institute of Technology
1-1-25 Tsujido-Nishikaigan,
Fujisawa, Kanagawa, JAPAN

Popular version of paper 2pSC, “Prosodic analysis of storytelling speech in Japanese fairy tale”
Presented Tuesday afternoon, November 29, 2016
172nd ASA Meeting, Honolulu

Recent advances in speech synthesis technologies have brought us relatively high-quality synthetic speech, as smartphones today often demonstrate with spoken message output. The acoustic sound quality, in particular, sometimes comes remarkably close to that of human voices. Prosodic aspects, or the patterns of rhythm and intonation, however, still have large room for improvement. Speech messages generated by speech synthesis systems sound somewhat awkward and monotonous overall; in other words, they lack the expressiveness of human speech. One reason for this is that most systems use a one-sentence synthesis scheme in which each sentence in the message is generated independently and the results are simply concatenated to construct the message. This lack of expressiveness might hinder widening the range of applications for speech synthesis. Storytelling is a typical application where speech synthesis would need a control mechanism beyond the single sentence to provide really vivid and expressive narration. This work investigates the actual storytelling strategies of human narration experts for the purpose of ultimately reflecting them in the expressiveness of speech synthesis.

A popular Japanese fairy tale titled "The Inch-High Samurai," in its English translation, was the storytelling material in this study. It is a short story taking about six minutes to tell verbally. The story consists of four elements typically found in simple fairy tales: introduction, build-up, climax, and ending. These common features make the story well suited for observing prosodic changes across the story's flow. The story was told by six narration experts (four female and two male narrators) and was recorded. First, we were interested in what they were thinking while telling the story, so we interviewed them about their actual reading strategies after the recording. We found they usually did not adopt fixed reading techniques for each sentence, but tried to enter the world of the story and form a clear image of the characters appearing in it, as an actor would. They also reported paying attention to the following aspects of the scenes associated with the story elements: In the introduction, featuring the birth of the little samurai character, they started to speak slowly and gently in an effort to capture the hearts of listeners. In the story's climax, depicting the extermination of the devil character, they tried to express a tense feeling through a quick rhythm and tempo. Finally, in the ending, they gradually changed their reading styles to signal to the audience that the happy ending was coming soon.

For all six speakers, a baseline speech segmentation into words and accentual phrases was conducted semi-automatically. We then used a multi-layered prosodic tagging method, performed manually, to annotate various changes of "story states" relevant to impersonation, emotional involvement, and scene flow control. Figure 1 shows an example of the labeled speech data; Wavesurfer [1] served as our speech visualization and labeling tool. The example utterance contains part of the storyteller's speech (the phrase "oniwa bikkuridesu," meaning "the devil was surprised," and the devil's part, "ta ta tasukekuree," meaning "please help me!"), shown in the top label pane for characters (chrlab). The second label pane (evelab) shows event labels such as scene changes and emotional involvement (desire, joy, fear, etc.). In this example, a "fear" event is attached to the devil's utterance. The dynamic pitch movement can be observed in the pitch contour pane at the bottom of the figure.

Figure 1. An example of labeled storytelling speech data.

How are the scene-change and emotional-involvement events provided by human narrators manifested in the speech data? Prosodic parameters of speed (speech rate in mora/sec), pitch (Hz), power (dB), and preceding pause length (seconds) were investigated for all the breath groups in the speech data. A breath group is a speech segment uttered consecutively without pausing. Figures 2, 3, and 4 show these parameters at a scene-change event (Figure 2), a desire event (Figure 3), and a fear event (Figure 4). The axis on the left of each figure shows the ratio of the parameter to its average value. As seen in the figures, each event has its own distinct tendency in the prosodic parameters, which seems to be fairly common to all speakers. For instance, the scene-change event and the desire event differ in the amount of preceding pause and in the degree of the contributions from the other three parameters. The fear event shows a quite different tendency from the other events, but it is common to all speakers, though the degree of parameter movement differs between speakers. Figure 5 shows how the narrators express character differences with three of these parameters when impersonating the story's characters. In short, speed and pitch are changed dynamically for impersonation, and this is a common tendency of all speakers.
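To illustrate the normalization plotted in Figures 2 through 5, here is a minimal Python sketch that expresses each breath group's parameters as ratios to their averages; the data structure and field names are illustrative assumptions, not the tooling used in the study.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

PARAMS = ("speed", "pitch", "power", "pause")

@dataclass
class BreathGroup:
    speed: float                 # speech rate, mora/sec
    pitch: float                 # mean F0, Hz
    power: float                 # mean level, dB
    pause: float                 # preceding pause length, sec
    event: Optional[str] = None  # e.g. "scene_change", "desire", "fear"

def ratios_to_average(groups):
    """Express each parameter as a ratio to its average over all breath groups."""
    avg = {p: mean(getattr(g, p) for g in groups) for p in PARAMS}
    return [
        {**{p: getattr(g, p) / avg[p] for p in PARAMS}, "event": g.event}
        for g in groups
    ]
```

A value above 1.0 then means, for example, faster-than-average speech or a longer-than-average preceding pause at a tagged event.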

Based on the findings obtained from these human narrations, we are designing a framework for mapping story events, through scene changes and emotional involvement, to prosodic parameters. At the same time, it is necessary to build additional databases to validate and reinforce the story-event description and the mapping framework.

Figures 2–5. Prosodic parameter ratios at scene-change, desire, and fear events, and for character impersonation.

[1] Wavesurfer: http://www.speech.kth.se/wavesurfer/

2aNS – How virtual reality technologies can enable better soundscape design

W.M. To – wmto@ipm.edu.mo
Macao Polytechnic Institute, Macao SAR, China.
A. Chung – ac@smartcitymakter.com
Smart City Maker, Denmark.
B. Schulte-Fortkamp – b.schulte-fortkamp@tu-berlin.de
Technische Universität Berlin, Berlin, Germany.

Popular version of paper 2aNS, “How virtual reality technologies can enable better soundscape design”
Presented Tuesday morning, November 29, 2016
172nd ASA Meeting, Honolulu

Quality of life, including good sound quality, has been sought by community members as part of the smart city initiative. While many governments have paid special attention to waste management and to air and water pollution, attention to the acoustic environment in cities has been directed mainly toward the control of noise, in particular transportation noise. Governments that care about tranquility in cities rely primarily on setting so-called acceptable noise levels, that is, prescribed quantities for compliance and improvement [1]. Sound quality is most often ignored. Recently, the International Organization for Standardization (ISO) released a standard on soundscape [2]. However, sound quality is a subjective matter and depends heavily on human perception in different contexts [3]. For example, China's public parks are well known to be rather noisy in the morning due to the activities of boisterous amateur musicians and dancers, many of them retirees and housewives, or "Da Ma" [4]. These activities would cause numerous complaints if they happened in other parts of the world, but in China they are part of everyday life.

According to the ISO soundscape guideline, people can use soundwalks, questionnaire surveys, and even lab tests to determine sound quality during a soundscape design process [3]. With the advance of virtual reality technologies, we believe that current technology enables us to create an application that immerses designers and community stakeholders so that they can perceive and compare changes in sound quality and provide feedback on different soundscape designs. An app has been developed specifically for this purpose. Figure 1 shows a simulated environment in which a student or visitor arrives at the school's campus, walks through the lawn, passes a multifunctional court, and reaches an open area with table tennis tables. She or he can experience different ambient sounds and can click an object to increase or decrease the volume of the sound from that object. After hearing sounds at different locations from different sources, the person can evaluate the level of acoustic comfort at each location and express their feelings toward the overall soundscape. She or he can rate the sonic environment by its perceived loudness and its pleasantness on 5-point scales from 1 = 'heard nothing/not at all pleasant' to 5 = 'very loud/very pleasant'. In addition, she or he can describe the acoustic environment and soundscape in free words, given the multi-dimensional nature of the sonic environment.
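A minimal sketch of how such per-location responses might be recorded; the 5-point anchors follow the text, while the structure and validation are illustrative assumptions rather than the app's actual code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoundscapeRating:
    location: str                 # e.g. "lawn" or "multifunctional court"
    loudness: int                 # 1 = heard nothing ... 5 = very loud
    pleasantness: int             # 1 = not at all pleasant ... 5 = very pleasant
    free_words: List[str] = field(default_factory=list)  # open-ended descriptors

    def __post_init__(self):
        # Both judgments use the 5-point scale described above.
        for value in (self.loudness, self.pleasantness):
            if not 1 <= value <= 5:
                raise ValueError("ratings must be on the 5-point scale (1-5)")

# e.g., SoundscapeRating("table tennis area", loudness=4, pleasantness=2,
#                        free_words=["clattering", "lively"])
```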


Figure 1. A simulated soundwalk in a school campus.

  1. To, W. M., Mak, C. M., and Chung, W. L. Are the noise levels acceptable in a built environment like Hong Kong? Noise and Health, 2015, 17(79): 429-439.
  2. ISO. ISO 12913-1:2014 Acoustics – Soundscape – Part 1: Definition and Conceptual Framework, Geneva: International Organization for Standardization, 2014.
  3. Kang, J. and Schulte-Fortkamp, B. (Eds.). Soundscape and the Built Environment, CRC Press, 2016.
  4. Buckley, C. and Wu, A. In China, the 'Noisiest Park in the World' Tries to Tone Down Rowdy Retirees, NYTimes.com, http://www.nytimes.com/2016/07/04/world/asia/china-chengdu-park-noise.html, 2016.


4aEA1 – Aero-Acoustic Noise and Control Lab

Aero-Acoustic Noise and Control Lab – Seoryong Park – tjfyd11@snu.ac.kr

School of Mechanical and Aerospace Eng., Seoul National University
301-1214, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742, Republic of Korea

Popular version of paper 4aEA1, “Integrated simulation model for prediction of acoustic environment of launch vehicle”
Presented Thursday morning, December 1, 2016
172nd ASA Meeting, Honolulu

Literally speaking, a "sound" is a pressure fluctuation of the air. This means, for example, that when we hear a bus passing, our ears are sensing the pressure fluctuation the bus created. In daily life the pressure fluctuations above common noise levels are rarely significant, but in special cases they are. Movies commonly feature windows breaking because someone screams loudly or at a high pitch. This is usually exaggerated, but not beyond what is physically possible.

The pressure fluctuations caused by sound can create engineering problems for loud structures such as rockets: louder sounds correspond to larger pressure fluctuations, which can cause more damage. Rocket launches are particularly loud, and the resulting pressure changes in the air act on the surface of the launch vehicle as a force, as shown in Figure 1.

Figure 1. The Magnitude of Acoustic Loads on the Launch Vehicle

Figure 2. Acoustic loads generated during a lift-off.
As the vehicle is launched (Figure 2), the sound reaches levels over 180 dB, which corresponds to a pressure change of about 20,000 pascals. This is about 20% of atmospheric pressure, which is considered very large. Because of this pressure change during launch, communication equipment and antenna panels inside the fairing, the protective cone covering the satellite, can incur damage or malfunction. In the engineering field, the load created by launch noise is called the acoustic load, and many studies related to acoustic loads are in progress.
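As a quick check of those figures, sound pressure level converts to pressure using the standard 20-micropascal reference for air; a minimal sketch:

```python
spl_db = 180.0                    # sound level quoted for lift-off
p_ref = 20e-6                     # standard reference pressure in air, Pa
p = p_ref * 10 ** (spl_db / 20)   # p = p_ref * 10^(SPL / 20)
print(p)                          # 20000.0 Pa, roughly 20% of 101325 Pa
```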

Studies on the relationship between a launch vehicle and its acoustic load are categorized, by rocket engineers, under "prediction and control." Prediction is divided into two aspects: internal acoustic load and external acoustic load. Internal acoustic load refers to the sound delivered from outside to inside the vehicle, while external acoustic load is the noise coming directly from the jet exhaust. There are two ways to predict the external acoustic load: an empirical method and a numerical method. The empirical method was developed by NASA in 1972 and uses information collected from various studies. The numerical method employs mathematical formulas for noise generation and propagation, solved by computer modeling. As computers become more powerful, this method continues to gain favor. However, because numerical methods require so much calculation time, they often require dedicated computing centers. Our team instead focused on the more efficient and faster empirical method.

Figure 3. External acoustic load prediction result (spectrum).

Figure 3 shows the results of our calculations, depicting the expected sound spectrum. Our approach also accounts for various physical phenomena involved during a lift-off, such as sound reflection, diffraction, and impingement, which affect the results of the original empirical method.

Meanwhile, our team used a statistical energy analysis method to predict the internal acoustic load caused by the predicted external acoustic load. This method is often used to predict internal noise environments, both for launch vehicles and for aircraft and automobiles. Our research team used a program called VA One SEA to predict these noise effects, as shown in Figure 4.

Figure 4. Modeling of the Payloads and Forcing of the External Acoustic Loads

After predicting the internal acoustic load, we studied how to reduce it through internal noise control. A common approach is attaching noise-reducing material to the structure. However, the extra weight of the noise-reducing material can decrease performance. To overcome this side effect, we are also conducting a study of active noise control. Active noise control reduces noise by generating antiphase waves that cancel the sound. Figure 5 shows the experimental results of applying SISO (single-input, single-output) noise control; the reduction in noise is significant, especially at low frequencies.
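To illustrate the antiphase idea, here is a minimal single-channel sketch using a textbook LMS adaptive filter; it shows the general principle under simplifying assumptions (for instance, no secondary-path model) and is not the authors' algorithm.

```python
import numpy as np

def lms_anc(reference, disturbance, n_taps=64, mu=0.01):
    """Adapt a filter so its output cancels the disturbance at the error mic."""
    w = np.zeros(n_taps)                        # adaptive filter weights
    buf = np.zeros(n_taps)                      # most recent reference samples
    error = np.zeros(len(reference))
    for n in range(len(reference)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        anti_noise = w @ buf                    # generated antiphase signal
        error[n] = disturbance[n] - anti_noise  # residual at the error microphone
        w += 2 * mu * error[n] * buf            # LMS weight update
    return error

# For a 100 Hz tone the residual shrinks as the filter converges:
fs = 2000
t = np.arange(4 * fs) / fs
tone = np.sin(2 * np.pi * 100.0 * t)
residual = lms_anc(tone, tone)
print(abs(residual[:200]).max(), abs(residual[-200:]).max())
```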

Figure 5. Experimental Results of SISO Active Noise Control

Our research team applied these acoustic load prediction and control methods to the Korean launch vehicle KSR-III. Through this application, we developed an improved empirical prediction method that is more accurate than previous ones, and we demonstrated the usefulness of noise control by establishing the best-performing algorithm for our experimental facility and the active noise control region.