2pABa1 – Snap chat: listening in on the peculiar acoustic patterns of snapping shrimp, the noisiest animals on the reef

Ashlee Lillis – ashlee@whoi.edu
T. Aran Mooney – amooney@whoi.edu

Marine Research Facility
Woods Hole Oceanographic Institution
266 Woods Hole Road
Woods Hole, MA 02543

Popular version of paper 2pABa1
Presented Tuesday afternoon, November 29, 2016
172nd ASA Meeting, Honolulu

Characteristic soundscape recorded on a coral reef in St. John, US Virgin Islands. The conspicuous crackle is produced by many tiny snapping shrimp.

Put your head underwater in almost any tropical or sub-tropical coastal area and you will hear a continuous, static-like noise filling the water. The source of this ubiquitous sizzling sound, found in shallow-water marine environments around the world, was long considered a mystery of the sea. It wasn't until World War II, when this underwater sound was investigated because it interfered with naval operations, that hidden colonies of a type of small shrimp were discovered to be the cause of the pervasive crackling (Johnson et al., 1947).

Individual snapping shrimp (Figure 1), sometimes referred to as pistol shrimp, measure only a few centimeters long, yet they produce one of the loudest of all sounds in nature using a specialized snapping claw. The high-intensity sound is actually the result of a bubble collapsing when the claw closes at incredibly high speed, creating not only the characteristic "snap" sound but also a flash of light and an extremely high temperature, all in a fraction of a millisecond (Versluis et al., 2000). Because these shrimp form large, dense aggregations, living unseen within reefs and rocky habitats, their combined snaps create the consistent crackling sound familiar to mariners. Shrimp use snapping for defense and territorial interactions, but our recent studies suggest it likely serves other, still unknown functions.


Figure 1. Images of the snapping shrimp species Alpheus heterochaelis, which we use to test hypotheses in the lab. This is the dominant species of snapping shrimp found along the coast of the southeastern United States, but there are hundreds of species worldwide, easily identified by their relatively large snapping claws.

Since snapping shrimp produce the dominant sound in many marine regions, changes in their activity or population substantially alter ambient sound levels at a given location and time. This means that the behavior of snapping shrimp exerts an outsized influence on the sensory environment of a variety of marine animals, and it has implications for human uses of underwater sound (e.g., harbor defense, submarine detection). Despite this fundamental contribution to the acoustic environment of temperate and coral reefs, relatively little is known about snapping shrimp sound patterns or the behaviors and environmental influences underlying them. So, essentially, we ask: what is all the snapping about?

Figure 2. Photo of an underwater acoustic recorder deployed in a coral reef setting. Recorders can be left to record sound samples at scheduled times (e.g., every 10 minutes) so that we can examine long-term temporal trends in snapping shrimp acoustic activity on the reef.

Recent advances in underwater recording technology, along with growing interest in passive acoustic monitoring, have helped us sample marine soundscapes more thoroughly (Figure 2), and we are discovering complex dynamics in snapping shrimp sound production. We collected long-term underwater recordings in several Caribbean coral reef systems and analyzed the snap rates. Our soundscape data show that snap rates generally exhibit daily rhythms (Figure 3), but that these rhythms can vary over short spatial scales (e.g., opposite patterns between nearby reefs) and shift substantially over time (e.g., daytime versus nighttime snapping in different seasons). These acoustic patterns relate to environmental variables such as temperature, light, and dissolved oxygen, as well as to the behaviors of the shrimp themselves.

Figure 3. Time series of snap rates detected on two nearby USVI coral reefs over a week-long recording period. Snapping shrimp were previously thought to consistently snap more at night, but at this study location we found the shrimp more active during the day, with strong dawn and dusk peaks at one of the sites. This pattern conflicts with what little is known about snapping behavior and is motivating further studies of why the shrimp snap.
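
For readers curious how snap rates are extracted from raw recordings, here is a minimal sketch in Python of one simple approach: snaps are sharp, broadband transients, so band-pass filtering followed by peak detection on the signal envelope yields a count per unit time. The file name, band edges, and detection threshold are illustrative assumptions, not our actual analysis pipeline.

# Minimal sketch of snap counting, assuming a recording with a sample
# rate above 40 kHz; threshold factor and band edges are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, find_peaks

rate, audio = wavfile.read("reef_recording.wav")      # hypothetical file
audio = audio.astype(float)
audio /= np.max(np.abs(audio))

# Snaps are broadband but carry much of their energy above ~2 kHz, so a
# band-pass filter suppresses low-frequency ambient noise first.
sos = butter(4, [2000, 20000], btype="bandpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, audio)

# Count sharp transients: envelope peaks well above the background level,
# separated by at least 5 ms so a single snap is not counted twice.
envelope = np.abs(filtered)
threshold = 10 * np.median(envelope)                  # assumed factor
peaks, _ = find_peaks(envelope, height=threshold, distance=int(0.005 * rate))

print(f"Estimated snap rate: {len(peaks) / (len(audio) / rate):.1f} snaps/s")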

The relationships between environment, behavior, and sound production by snapping shrimp are really only beginning to be explored. By listening in on coral reefs, we are uncovering intriguing patterns that suggest a far more complex picture of the role of snapping shrimp in these ecosystems, as well as of the role of snapping for the shrimp themselves. Learning more about the diverse habits and lifestyles of snapping shrimp species is critical to better predicting and understanding variation in this dominant sound source, and has far-reaching implications for marine ecosystems and human applications of underwater sound.

References

Johnson, M. W., Everest, F. A., and Young, R. W. (1947). "The role of snapping shrimp (Crangon and Synalpheus) in the production of underwater noise in the sea," Biol. Bull. 93, 122–138.

Versluis, M., Schmitz, B., von der Heydt, A., and Lohse, D. (2000). “How snapping shrimp snap: through cavitating bubbles,” Science, 289, 2114–2117. doi:10.1126/science.289.5487.2114

2aNS – How virtual reality technologies can enable better soundscape design

W.M. To – wmto@ipm.edu.mo
Macao Polytechnic Institute, Macao SAR, China.
A. Chung – ac@smartcitymaker.com
Smart City Maker, Denmark.
B. Schulte-Fortkamp – b.schulte-fortkamp@tu-berlin.de
Technische Universität Berlin, Berlin, Germany.

Popular version of paper 2aNS, “How virtual reality technologies can enable better soundscape design”
Presented Tuesday morning, November 29, 2016
172nd ASA Meeting, Honolulu

Good sound quality is part of the quality of life that community members have sought as part of smart city initiatives. While many governments have paid special attention to waste management and to air and water pollution, the acoustic environment in cities has been addressed mainly through the control of noise, in particular transportation noise. Governments that care about tranquility in cities rely primarily on setting so-called acceptable noise levels, i.e., prescribed quantities for compliance and improvement [1]. Sound quality is most often ignored. Recently, the International Organization for Standardization (ISO) released a standard on soundscape [2]. However, sound quality is a subjective matter and depends heavily on the perception of humans in different contexts [3]. For example, China's public parks are well known to be rather noisy in the morning due to the activities of boisterous amateur musicians and dancers, many of them retirees and housewives, or "Da Ma" [4]. These activities would cause numerous complaints if they happened in other parts of the world, but in China they are part of everyday life.

According to the ISO soundscape guideline, people can use soundwalks, questionnaire surveys, and even laboratory tests to determine sound quality during a soundscape design process [3]. With the advance of virtual reality technologies, we believe current technology enables us to create an application that immerses designers and stakeholders in the community, letting them perceive and compare changes in sound quality and provide feedback on different soundscape designs. We have developed an app specifically for this purpose. Figure 1 shows a simulated environment in which a student or visitor arrives at the school's campus, walks across the lawn, passes a multifunctional court, and reaches an open area with table-tennis tables. She or he can experience different ambient sounds and can click an object to increase or decrease the volume of the sound from that object. After hearing sounds at different locations from different sources, the person can evaluate the level of acoustic comfort at each location and express their feelings toward the overall soundscape. She or he can rate the sonic environment by its perceived loudness and its pleasantness, using a 5-point scale from 1 = "heard nothing/not at all pleasant" to 5 = "very loud/very pleasant". In addition, she or he can describe the acoustic environment and soundscape in freely chosen words, given the multidimensional nature of the sonic environment.


Figure 1. A simulated soundwalk on a school campus.
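
The paper does not describe the app's internals, but the sketch below illustrates in Python the kind of bookkeeping such an app needs: each clickable object is a sound source with an adjustable gain, and each listener leaves loudness and pleasantness ratings, plus free words, per location. All names and structures here are illustrative assumptions, not the actual app's code.

# Minimal sketch of per-source volume control and listener ratings.
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    name: str
    gain: float = 1.0          # linear gain; user clicks adjust this

    def adjust(self, step_db: float) -> None:
        """Raise or lower the source volume by step_db decibels."""
        self.gain *= 10 ** (step_db / 20)

@dataclass
class LocationRating:
    location: str
    loudness: int              # 1 = heard nothing ... 5 = very loud
    pleasantness: int          # 1 = not at all pleasant ... 5 = very pleasant
    free_words: list = field(default_factory=list)

# Example: a visitor turns down the multifunctional court and rates the lawn.
court = SoundSource("multifunctional court")
court.adjust(-6.0)             # e.g., two clicks of -3 dB each
rating = LocationRating("lawn", loudness=2, pleasantness=4,
                        free_words=["calm", "birdsong"])
print(court.gain, rating)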

  1. To, W. M., Mak, C. M., and Chung, W. L. Are the noise levels acceptable in a built environment like Hong Kong? Noise and Health, 2015, 17(79): 429-439.
  2. ISO. ISO 12913-1:2014 Acoustics – Soundscape – Part 1: Definition and Conceptual Framework, Geneva: International Organization for Standardization, 2014.
  3. Kang, J. and Schulte-Fortkamp, B. (Eds.). Soundscape and the Built Environment, CRC Press, 2016.
  4. Buckley, C. and Wu, A. In China, the 'Noisiest Park in the World' Tries to Tone Down Rowdy Retirees, NYTimes.com, http://www.nytimes.com/2016/07/04/world/asia/china-chengdu-park-noise.html, 2016.


1aNS5 – Noise, vibration, and harshness (NVH) of smartphones

Inman Jang – kgpbjim@yonsei.ac.kr
Tae-Young Park – pty0948@yonsei.ac.kr
Won-Suk Ohm – ohm@yonsei.ac.kr
Yonsei University
50, Yonsei-ro, Seodaemun-gu
Seoul 03722
Korea

Heungkil Park – heungkil.park@samsung.com
Samsung Electro Mechanics Co., Ltd.
150, Maeyeong-ro, Yeongtong-gu
Suwon-si, Gyeonggi-do 16674
Korea

Popular version of paper 1aNS5, “Controlling smartphone vibration and noise”
Presented Monday morning, November 28, 2016
172nd ASA Meeting, Honolulu

Noise, vibration, and harshness, also known as NVH, refers to the comprehensive engineering of a device's noise and vibration through the stages of production, transmission, and human perception. NVH is a primary concern in the car and home appliance industries because many consumers take the quality of noise into account when making buying decisions. For example, a car that sounds too quiet (unsafe) or too loud (uncomfortable) is a definite turnoff. That said, a smartphone may strike you as an acoustically innocuous device (unless you are no fan of Metallica ringtones), for which the application of NVH seems unwarranted. After all, who would expect the roar of a Harley from a smartphone? But think again. Albeit small in amplitude (less than 30 dB), smartphones emit an audible buzz that, because of its close proximity to the ear, can degrade call quality and cause annoyance.


Figure 1: Smartphone noise caused by MLCCs

The major culprit behind smartphone noise is the collective vibration of tiny electronic components known as multi-layered ceramic capacitors (MLCCs). An MLCC is basically a capacitor made of piezoelectric ceramics, which expand and contract when voltage is applied (hence piezoelectric). A typical smartphone has a few hundred MLCCs soldered to the circuit board inside. The nearly simultaneous pulsations of these MLCCs are transmitted to and amplified by the circuit board, whose vibration eventually produces the distinct buzzing noise shown in Fig. 1. (Imagine a couple hundred rambunctious little kids jumping up and down on a floor almost in unison!) The problem has been exacerbated by the recent trend in which the name of the game is "the slimmer the better": because a slimmer circuit board flexes more easily, it transmits and radiates more vibration and noise.
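
A back-of-the-envelope calculation shows why near-simultaneous pulsation matters: N in-phase sources add in amplitude, a 20·log10(N) dB gain over a single source, whereas random-phase sources add only in power, a 10·log10(N) dB gain. A small numeric sketch (the MLCC count is an assumed, illustrative figure):

# Illustrative numbers only: why near-simultaneous MLCC pulsations matter.
# N in-phase sources add in amplitude (20*log10(N) dB over one source);
# random-phase sources add in power (10*log10(N) dB).
import numpy as np

n_mlcc = 300                                  # assumed MLCC count
coherent_gain_db = 20 * np.log10(n_mlcc)      # ~49.5 dB over a single MLCC
incoherent_gain_db = 10 * np.log10(n_mlcc)    # ~24.8 dB if phases were random

print(f"In phase:     +{coherent_gain_db:.1f} dB relative to one MLCC")
print(f"Random phase: +{incoherent_gain_db:.1f} dB relative to one MLCC")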

Recently, Yonsei University and Samsung Electro-Mechanics in South Korea joined forces to address this problem. Their comprehensive NVH regime includes the visualization of smartphone noise and vibration (transmission), the identification and replacement of the most problematic MLCCs (production), and the evaluation of the harshness of the smartphone noise (human perception). To visualize smartphone noise, a technique known as nearfield acoustic holography is used to produce a sound map, as shown in Fig. 2, in which the spatial distribution of sound pressure, acoustic intensity, or surface velocity is overlaid on a snapshot of the smartphone. Such sound maps help smartphone designers form a detailed mental picture of what is going on acoustically and identify the groups of MLCCs most responsible for the vibration of the circuit board. Engineers can then take corrective action by replacing the (cheap) problematic MLCCs with (expensive) low-vibration ones. Lastly, the outcome of the noise/vibration engineering is measured not only in physical attributes such as sound pressure level, but also in psychological correlates such as loudness and overall psychoacoustic annoyance. This three-pronged strategy (addressing production, transmission, and human perception) has proven highly effective, and Samsung Electro-Mechanics currently offers the NVH service to a number of major smartphone vendors around the world.


Figure 2: Sound map of a smartphone surface
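
As a rough illustration of what goes into such a sound map, the sketch below converts RMS pressures measured over a grid of points into sound pressure levels in decibels (re 20 µPa, the standard reference). The grid values are synthetic placeholders; a real nearfield acoustic holography workflow additionally back-propagates the measured field to the device surface.

# Minimal sketch of turning microphone-array measurements into a sound map:
# convert RMS pressure at each grid point to dB SPL (re 20 micropascals).
# The array data here are synthetic placeholders.
import numpy as np

P_REF = 20e-6                                   # reference pressure, Pa

# Hypothetical 8x8 grid of RMS pressures measured just above the phone.
rng = np.random.default_rng(0)
p_rms = rng.uniform(5e-4, 5e-3, size=(8, 8))    # placeholder values, Pa

spl_map = 20 * np.log10(p_rms / P_REF)          # dB SPL at each grid point
print(f"Peak level: {spl_map.max():.1f} dB SPL "
      f"at cell {np.unravel_index(spl_map.argmax(), spl_map.shape)}")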


1aSC31 – Shape changing artificial ear inspired by bats enriches speech signals

Anupam K. Gupta1,2, Jin-Ping Han2, Philip Caspers1, Xiaodong Cui2, Rolf Müller1

  1. Dept. of Mechanical Engineering, Virginia Tech, Blacksburg, VA, USA
  2. IBM T. J. Watson Research Center, Yorktown, NY, USA

Contact: Jin-Ping Han – hanjp@us.ibm.com

Popular version of paper 1aSC31, “Horseshoe bat inspired reception dynamics embed dynamic features into speech signals.”
Presented Monday morning, November 28, 2016
172nd ASA Meeting, Honolulu

Have you ever had difficulty understanding what someone was saying to you while walking down a busy big city street, or in a crowded restaurant? Even if that person was right next to you? Words can become difficult to make out when they get jumbled with the ambient noise – cars honking, other voices – making it hard for our ears to pick up what we want to hear. But this is not so for bats. Their ears can move and change shape to precisely pick out specific sounds in their environment.

This biosonar capability inspired our research on artificial ears and on improving the accuracy of automatic speech recognition (ASR) systems and speaker localization. We asked: could we enrich a speech signal with direction-dependent, dynamic features by using bat-inspired reception dynamics?

Horseshoe bats, for example, which are found throughout Africa, Europe, and Asia and are so named for the shape of their noses, can change the shape of their outer ears to help extract additional information about the environment from incoming ultrasonic echoes. Their sophisticated biosonar systems emit ultrasonic pulses and listen to the echoes reflecting back from surrounding objects while changing their ear shape (something other mammals cannot do). This allows the bats to learn about the environment, helping them navigate and hunt in their home of dense forests.

While probing the environment, horseshoe bats change their ear shape to modulate the incoming echoes, increasing the information content embedded in them. We believe this shape change is one of the reasons bats' sonar exhibits such high performance compared with technical sonar systems of similar size.

To test this, we first built a robotic bat head that mimics the ear shape changes we observed in horseshoe bats.


Figure 1: Horseshoe-bat-inspired robotic setup used to record speech signals

We then recorded speech signals to explore whether bat-inspired shape change could embed direction-dependent dynamic features into them. The potential applications range from improving hearing-aid accuracy to helping a machine hear, and learn from, sounds more accurately in real-world environments.

We compiled a digital dataset of 11 US English speakers from open-source speech collections provided by Carnegie Mellon University. The utterances were shifted into the ultrasonic domain so that the robot could play them back while the biomimetic bat head actively moved its ears and microphones at the base of the ears recorded the result. The recorded signals were then shifted back down to the speech band to recover the (now modulated) original signal.
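
The paper does not spell out how this shifting was implemented; one standard approach is single-sideband modulation built on the analytic signal (Hilbert transform). The sketch below assumes that method, with an illustrative carrier frequency and sample rate.

# One standard way to shift speech up to ultrasound and back down:
# single-sideband modulation using the analytic signal. The paper does
# not specify its method; carrier and sample rate here are assumptions.
import numpy as np
from scipy.signal import hilbert

FS = 192_000          # sample rate high enough for ultrasound (assumed)
F_SHIFT = 40_000      # carrier shift into the horseshoe-bat band (assumed)

def shift_up(speech):
    """Shift a real speech signal up by F_SHIFT Hz (upper sideband)."""
    t = np.arange(len(speech)) / FS
    analytic = hilbert(speech)                       # complex analytic signal
    return np.real(analytic * np.exp(2j * np.pi * F_SHIFT * t))

def shift_down(received):
    """Shift the ear-microphone signal back down into the speech band."""
    t = np.arange(len(received)) / FS
    analytic = hilbert(received)
    return np.real(analytic * np.exp(-2j * np.pi * F_SHIFT * t))

# Round trip on a toy 1 kHz tone: up to 41 kHz, back to 1 kHz.
tone = np.sin(2 * np.pi * 1000 * np.arange(FS) / FS)
recovered = shift_down(shift_up(tone))
print("Max round-trip error:", np.max(np.abs(recovered - tone)))  # small, edge effects aside
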
This pilot study, performed at IBM Research in collaboration with Virginia Tech, showed that the ear-shape change was, in fact, able to significantly modulate the signal, and concluded that these changes, as in horseshoe bats, embed dynamic patterns into speech signals.

The dynamically enriched data improved speech recognition accuracy. Compared with a traditional system for hearing and recognizing speech in noisy environments, adding structural movement to a complex outer shape surrounding a microphone, mimicking an ear, significantly improved the system's performance and its access to directional information. In the future, this might improve the performance of devices operating in difficult hearing scenarios, such as on a busy street in a metropolitan center.


Figure 2: Example of a speech signal recorded without and with the dynamic ear. Top row: speech signal without the dynamic ear; bottom row: speech signal with the dynamic ear.

5aEA2 – What Does Your Signature Sound Like?

Daichi Asakura – asakura@pa.info.mie-u.ac.jp
Mie University
Tsu, Mie, Japan

Popular version of poster 5aEA2, "Writer recognition with a sound in hand-writing"
172nd ASA Meeting, Honolulu

We can notice a car approaching by the noise it makes on the road, and we can recognize a person by the sound of their footsteps. Many studies have analyzed and recognized such noises. In the computer security field, studies have even been proposed to estimate what is being typed from the sound of keystrokes [1] and to extract RSA keys from the noises made by a PC [2].

Of course, there is a relationship between a noise and its cause, so the noise carries information about its source. The sound of a person writing, or "handwriting sound," is one of the noises in our everyday environment. A previous study addressed the recognition of handwritten numeric characters from the resulting sound, finding an average recognition rate of 88.4% [3]. Building on this, we explore the possibility of recognizing and identifying a writer from the sound of their handwriting. If accurate identification is possible, it could become a method of signature verification that never requires looking at the signature itself.

We conducted recognition experiments using the handwriting sounds of nine participants. We asked them to write the same text, names in Kanji (Chinese characters), under several different conditions, such as writing slowly or writing on a different day. Figure 1 shows an example spectrogram of a handwriting sound we analyzed. The horizontal axis represents time and the vertical axis frequency; colors represent the magnitude, or intensity, of the frequencies, with red indicating high intensity and blue low.
Figure 1. Spectrogram of a handwriting sound.
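
A spectrogram like Figure 1 can be computed with a short-time Fourier transform; the sketch below shows a minimal version in Python (the file name and STFT settings are illustrative assumptions).

# Minimal sketch of producing a spectrogram of a handwriting sound.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("handwriting.wav")        # hypothetical file
f, t, Sxx = spectrogram(audio.astype(float), fs=rate, nperseg=1024)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))     # intensity in dB
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()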

The spectrogram shows features corresponding to the number of strokes in the Kanji. We used a recognition system based on a hidden Markov model (HMM), a technique typically used for speech recognition, which represents the transitions of spectral patterns as they evolve in time. The results showed an average identification rate of 66.3%, indicating that writer identification is possible in this manner. However, the identification rate decreased under certain conditions, especially slow writing.
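
The paper does not name its toolchain, but the sketch below shows how such a system could be assembled with off-the-shelf libraries: one Gaussian HMM per writer trained on spectral (MFCC) features, with an unknown recording attributed to the writer whose model scores it highest. File names, feature settings, and model sizes are illustrative assumptions.

# Minimal sketch of writer identification with one HMM per writer,
# using hmmlearn and librosa as illustrative stand-ins.
import numpy as np
import librosa
from hmmlearn import hmm

def features(wav_path):
    """MFCC frames (time x coefficients) from a handwriting-sound recording."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

# Train one HMM per writer on that writer's training recordings.
writers = {"writer01": ["w01_take1.wav", "w01_take2.wav"],
           "writer02": ["w02_take1.wav", "w02_take2.wav"]}   # hypothetical
models = {}
for name, paths in writers.items():
    frames = [features(p) for p in paths]
    X = np.vstack(frames)
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
    model.fit(X, lengths=[len(f) for f in frames])
    models[name] = model

# Identify: the writer whose model gives the test sound the best likelihood.
test = features("unknown_signature.wav")
best = max(models, key=lambda name: models[name].score(test))
print("Predicted writer:", best)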

To improve performance, we need to increase the number of handwriting samples and to include a wider variety of written texts and participants. We also intend to include the writing of English characters and numbers. We expect that deep learning, which is attracting increasing attention around the world, will also help us achieve higher recognition rates in future experiments.


  1. Zhuang, L., Zhou, F., and Tygar, J. D., Keyboard Acoustic Emanations Revisited, ACM Transactions on Information and System Security, 2009, vol. 13, no. 1, article 3, pp. 1-26.
  2. Genkin, D., Shamir, A., and Tromer, E., RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis, Proceedings of CRYPTO 2014, 2014, pp.444-461.
  3. Kitano, S., Nishino, T. and Naruse, H., Handwritten digit recognition from writing sound using HMM, 2013, Technical Report of the Institute of Electronics, Information and Communication Engineers, vol.113, no.346, pp.121-125.