Indris’ melodies are individually distinctive and genetically driven
Marco Gamba – email@example.com
Cristina Giacoma – firstname.lastname@example.org
University of Torino
Department of Life Sciences and Systems Biology
Via Accademia Albertina 13
10123 Torino, Italy
Popular version of paper 2aABa3 “Melody in my head, melody in my genes? Acoustic similarity, individuality and genetic relatedness in the indris of Eastern Madagascar”
Presented Tuesday morning, November 29, 2016
172nd ASA Meeting, Honolulu
Melody in my head, melody in my genes? Acoustic similarity, individuality and genetic relatedness in the indris of Eastern Madagascar
Human hearing is exceptionally good at identifying the voices of friends and relatives. The potential for this identification lies in the acoustic structure of our speech, which conveys not only verbal information (the meaning of our words) but also non-verbal cues (such as the sex and identity of the speaker).
In animal communication, recognizing a member of the same species can also be important. Birds and mammals may adjust signals that function in neighbor recognition, and discriminating between a known neighbor and a stranger can elicit strikingly different responses in terms of territorial defense.
Indris (Indri indri) are the only lemurs that produce group songs and among the few primate species that communicate using articulated singing displays. The most distinctive portions of the indris’ song are called descending phrases, consisting of between two and five units or notes. We recorded 21 groups of indris in the Eastern rainforests of Madagascar from 2005 to 2015. In each recording, we identified individuals using natural markings. We noticed that group encounters were rare, and hypothesized that song might play a role in providing members of the same species with information about the sex and identity of an individual singer and the emitting group.
We found we could effectively discriminate between the descending phrases of individual indris, showing that these phrases have the potential to advertise the sex and identity of the singer. This strengthened the hypothesis that song may play a role in processes like kinship and mate recognition. Finding that there was a degree of group specificity in the song also supports the idea that neighbor-stranger recognition is important in the indris and that the song may function to announce territorial occupation and spacing.
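A simplified numeric sketch of this kind of individual discrimination (not the discriminant function analysis used in the study; the individuals, features, and data below are all simulated for illustration) shows how individually distinctive phrases permit classification well above chance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 5 "individuals", each with 20 phrases described by 3 acoustic
# features (e.g., duration, start and end frequency), clustered per individual.
n_ind, n_phrases, n_feat = 5, 20, 3
centers = rng.normal(0, 3, (n_ind, n_feat))
X = np.repeat(centers, n_phrases, axis=0) + rng.normal(0, 1, (n_ind * n_phrases, n_feat))
y = np.repeat(np.arange(n_ind), n_phrases)

# Leave-one-out nearest-centroid classification: assign each phrase to the
# individual whose mean feature vector (computed without that phrase) is closest.
correct = 0
for i in range(len(y)):
    cents = np.array([X[(y == k) & (np.arange(len(y)) != i)].mean(axis=0)
                      for k in range(n_ind)])
    correct += np.argmin(np.linalg.norm(cents - X[i], axis=1)) == y[i]
acc = correct / len(y)
print(f"correct classification: {acc:.0%} (chance = {1/n_ind:.0%})")
```

A real analysis would use measured acoustic features and a proper discriminant model, but the leave-one-out logic for testing individual distinctiveness is the same.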
Traditionally, primate songs are considered an example of a genetically determined display. Thus the next step in our research was to examine whether the structure of the phrases could be related to the genetic relatedness of the indris. We found a significant correlation between the genetic relatedness of the studied individuals and the acoustic similarity of their song phrases, suggesting that genetic relatedness may play a role in determining song similarity.
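Correlating a genetic distance matrix with an acoustic distance matrix is commonly done with a Mantel-style permutation test. The sketch below is a hypothetical illustration on simulated data, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel(d1, d2, n_perm=999, rng=rng):
    """Permutation test for correlation between two distance matrices."""
    iu = np.triu_indices_from(d1, k=1)      # use upper-triangle entries only
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)              # shuffle rows/cols of one matrix
        r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# toy data: 10 individuals whose acoustic distances loosely track genetic ones
g = rng.random((10, 2))
gen = np.linalg.norm(g[:, None] - g[None, :], axis=-1)          # "genetic" distances
aco = gen + 0.1 * rng.random((10, 10)); aco = (aco + aco.T) / 2  # noisy "acoustic" copy
np.fill_diagonal(aco, 0)
r, p = mantel(gen, aco)
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```

Permuting individuals (rows and columns together) preserves each matrix's internal structure while breaking the pairing, which is what makes the p-value meaningful for distance data.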
For the first time, we found evidence that the similarity of a primate vocal display varies within a population in a way that is strongly associated with kinship. When examining differences between the sexes, we found that male offspring sang phrases that were more similar to their fathers', while daughters did not show similarity with either parent.
The potential for kin detection may play a vital role in determining relationships within a population, regulating dispersal, and avoiding inbreeding. Singing displays may advertise kinship and thereby signal against potential matings, information that females, and to a lesser degree males, can use when forming a new group. Unfortunately, we still do not know whether indris can perceptually decode this information or how they use it in their everyday lives. But work like this sets the basis for understanding primates' mating and social systems and lays the foundation for better conservation methods.
Belin, P. Voice processing in human and non-human primates. Philosophical Transactions of the Royal Society B: Biological Sciences, 2006. 361: p. 2091-2107.
Randall, J. A. Discrimination of foot drumming signatures by kangaroo rats, Dipodomys spectabilis. Animal Behaviour, 1994. 47: p. 45-54.
Gamba, M., Torti, V., Estienne, V., Randrianarison, R. M., Valente, D., Rovara, P., Giacoma, C. The Indris Have Got Rhythm! Timing and Pitch Variation of a Primate Song Examined between Sexes and Age Classes. Frontiers in Neuroscience, 2016. 10: p. 249.
Torti, V., Gamba, M., Rabemananjara, Z. H., Giacoma, C. The songs of the indris (Mammalia: Primates: Indridae): contextual variation in the long-distance calls of a lemur. Italian Journal of Zoology, 2013. 80, 4.
Barelli, C., Mundry, R., Heistermann, M., Hammerschmidt, K. Cues to androgen and quality in male gibbon songs. PLoS ONE, 2013. 8: e82748.
Figure 1. A female indri with offspring in the Maromizaha Forest, Madagascar. Maromizaha is a New Protected Area located in the Region Alaotra-Mangoro, east of Madagascar. It is managed by GERP (Primate Studies and Research Group). At least 13 species of lemurs have been observed in the area.
Figure 2. Spectrograms of an indri song showing a typical sequence of different units. In the enlarged area, the pitch contour in red shows a typical “descending phrase” of 4 units. The indris also emit phrases of 2, 3 and more rarely 5 or 6 units.
Figure 3. A 3D plot of the dimensions (DF1, DF2, DF3) generated from a discriminant model that successfully assigned descending phrases of four units (DP4) to the emitter. Colours denote individuals. The descending phrases of two (DP2) and three units (DP3) also showed correct classification rates significantly above chance.
How virtual reality technologies can enable better soundscape design.
W.M. To – email@example.com
Macao Polytechnic Institute, Macao SAR, China.
A. Chung – firstname.lastname@example.org
Smart City Maker, Denmark.
B. Schulte-Fortkamp – email@example.com
Technische Universität Berlin, Berlin, Germany.
Popular version of paper 2aNS, “How virtual reality technologies can enable better soundscape design”
Presented Tuesday morning, November 29, 2016
172nd ASA Meeting, Honolulu
Quality of life, including good sound quality, has been sought by community members as part of the smart city initiative. While many governments have paid special attention to waste management and air and water pollution, attention to the acoustic environment in cities has been directed mainly toward the control of noise, in particular transportation noise. Governments that care about tranquility in cities rely primarily on setting so-called acceptable noise levels, i.e., limit values for compliance and improvement. Sound quality is most often ignored. Recently, the International Organization for Standardization (ISO) released a standard on soundscape. However, sound quality is a subjective matter and depends heavily on human perception in different contexts. For example, China's public parks are well known to be rather noisy in the morning due to the activities of boisterous amateur musicians and dancers, many of them retirees and housewives, known as "Da Ma". These activities would cause numerous complaints if they happened in other parts of the world, but in China they are part of everyday life.
According to the ISO soundscape guideline, people can use soundwalks, questionnaire surveys, and even lab tests to determine sound quality during a soundscape design process. With the advance of virtual reality technologies, we believe that current technology enables us to create an application that immerses designers and stakeholders in the community, letting them perceive and compare changes in sound quality and provide feedback on different soundscape designs. An app has been developed specifically for this purpose. Figure 1 shows a simulated environment in which a student or visitor arrives at the school's campus, walks through the lawn, passes a multifunctional court, and gets to an open area with table tennis tables. She or he can experience different ambient sounds and can click an object to increase or decrease the volume of sound from that object. After hearing sounds at different locations from different sources, the person can evaluate the level of acoustic comfort at each location and express their feelings toward the overall soundscape. She or he can rate the sonic environment on its degree of perceived loudness and its level of pleasantness using a 5-point scale from 1 = 'heard nothing/not at all pleasant' to 5 = 'very loud/pleasant'. In addition, she or he can describe the acoustic environment and soundscape in free words, given the multi-dimensional nature of the sonic environment.
Figure 1. A simulated soundwalk in a school campus.
To, W. M., Mak, C. M., and Chung, W. L. Are the noise levels acceptable in a built environment like Hong Kong? Noise and Health, 2015. 17(79): p. 429-439.
ISO. ISO 12913-1:2014 Acoustics – Soundscape – Part 1: Definition and Conceptual Framework, Geneva: International Organization for Standardization, 2014.
Kang, J. and Schulte-Fortkamp, B. (Eds.). Soundscape and the Built Environment, CRC Press, 2016.
Marine Research Facility
Woods Hole Oceanographic Institution
266 Woods Hole Road
Woods Hole, MA 02543
Popular version of paper 2pABa1
Presented Tuesday afternoon, November 29, 2016
172nd ASA Meeting, Honolulu
Characteristic soundscape recorded on a coral reef in St. John, US Virgin Islands. The conspicuous crackle is produced by many tiny snapping shrimp.
Put your head underwater in almost any tropical or sub-tropical coastal area and you will hear a continuous, static-like noise filling the water. The source of this ubiquitous sizzling sound, found in shallow-water marine environments around the world, was long considered a mystery of the sea. It wasn't until WWII-era investigations of this troublesome underwater sound that hidden colonies of a type of small shrimp were discovered to be the cause of the pervasive crackling (Johnson et al., 1947).
Individual snapping shrimp (Figure 1), sometimes referred to as pistol shrimp, measure smaller than a few centimeters, but produce one of the loudest of all sounds in nature using a specialized snapping claw. The high intensity sound is actually the result of a bubble popping when the claw is closed at incredibly high speed, creating not only the characteristic “snap” sound but also a flash of light and extremely high temperature, all in a fraction of a millisecond (Versluis et al., 2000). Because these shrimp form large, dense aggregations, living unseen within reefs and rocky habitats, the combination of individual snaps creates the consistent crackling sound familiar to mariners. Snapping is used by shrimp for defense and territorial interactions, but likely serves other unknown functions based on our recent studies.
[Insert Figure 1. Images of the species of snapping shrimp, Alpheus heterochaelis, we are using to test hypotheses in the lab. This is the dominant species of snapping shrimp found coastally in the Southeast United States, but there are hundreds of different species worldwide, easily identified by their relatively large snapping claw.]
Since snapping shrimp produce the dominant sound in many marine regions, changes in their activity or population substantially alters ambient sound levels at a given location or time. This means that the behavior of snapping shrimp exerts an outsized influence on the sensory environment for a variety of marine animals, and has implications for the use of underwater sound by humans (e.g., harbor defense, submarine detection). Despite this fundamental contribution to the acoustic environment of temperate and coral reefs, relatively little is known about snapping shrimp sound patterns, and the underlying behaviors or environmental influences. So essentially, we ask the question: what is all the snapping about?
[Insert Figure 2. Photo showing an underwater acoustic recorder deployed in a coral reef setting. Recorders can be left to record sound samples at scheduled times (e.g. every 10 minutes) so that we can examine the long-term temporal trends in snapping shrimp acoustic activity on the reef.]
Recent advances in underwater recording technology and interest in passive acoustic monitoring have aided our efforts to sample marine soundscapes more thoroughly (Figure 2), and we are discovering complex dynamics in snapping shrimp sound production. We collected long-term underwater recordings in several Caribbean coral reef systems and analyzed the snapping shrimp snap rates. Our soundscape data show that snap rates generally exhibit daily rhythms (Figure 3), but that these rhythms can vary over short spatial scales (e.g., opposite patterns between nearby reefs) and shift substantially over time (e.g., daytime versus nighttime snapping during different seasons). These acoustic patterns relate to environmental variables such as temperature, light, and dissolved oxygen, as well as individual shrimp behaviors themselves.
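A minimal sketch of how snap rates can be extracted from a recording, assuming snaps are simply the loud impulsive transients in the signal (the audio below is synthetic, and real detectors are more elaborate than this threshold rule):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 48_000                              # sample rate (Hz)
n = 2 * fs                               # two seconds of synthetic "reef" audio

# background noise plus 10 brief, loud transients standing in for snaps
audio = 0.01 * rng.standard_normal(n)
snap_starts = (fs * (0.05 + 0.19 * np.arange(10))).astype(int)
for s in snap_starts:
    audio[s:s + 48] += 0.5 * np.hanning(48)   # ~1 ms impulsive burst

# detect snaps: threshold crossings, merged when closer than 5 ms
thresh = 10 * np.std(audio)
hits = np.flatnonzero(np.abs(audio) > thresh)
n_snaps = 1 + np.count_nonzero(np.diff(hits) > 0.005 * fs) if hits.size else 0
print(f"detected {n_snaps} snaps -> {n_snaps / 2:.1f} snaps/second")
```

Counting detections in scheduled recording windows over days or months is what produces time-series like the ones in Figure 3.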
[Insert Figure 3. Time-series of snap rates detected on two nearby USVI coral reefs for a week-long recording period. Snapping shrimp were previously thought to consistently snap more during the night, but we found in this study location that shrimp were more active during the day, with strong dawn and dusk peaks at one of the sites. This pattern conflicts with what little is known about snapping behaviors and is motivating further studies of why they snap.]
The relationships between environment, behaviors, and sound production by snapping shrimp are really only beginning to be explored. By listening in on coral reefs, our work is uncovering intriguing patterns that suggest a far more complex picture of the role of snapping shrimp in these ecosystems, as well as the role of snapping for the shrimp themselves. Learning more about the diverse habits and lifestyles of snapping shrimp species is critical to better predicting and understanding variation in this dominant sound source, and has far-reaching implications for marine ecosystems and human applications of underwater sound.
Johnson, M. W., Everest, F. A., and Young, R. W. (1947). "The role of snapping shrimp (Crangon and Synalpheus) in the production of underwater noise in the sea," Biol. Bull. 93, 122-138.
Versluis, M., Schmitz, B., von der Heydt, A., and Lohse, D. (2000). “How snapping shrimp snap: through cavitating bubbles,” Science, 289, 2114–2117. doi:10.1126/science.289.5487.2114
Comparing the Chinese erhu and the European violin using high-speed camera measurements
Florian Pfeifle – Florian.Pfeifle@uni-hamburg.de
Institute of Systematic Musicology
University of Hamburg
Neue Rabenstrasse 13
22765 Hamburg, Germany
Popular version of paper 3aMU8, “Organologic and acoustic similarities of the European violin and the Chinese erhu”
Presented Wednesday morning, November 30, 2016
172nd ASA Meeting, Honolulu
0. Overview and introduction
Have you ever wondered what a violin solo piece like Paganini’s La Campanella
would sound like if played on a Chinese erhu, or how an erhu solo performance
of Horse Racing, a Mongolian folk song, would sound on a modern violin?
Our work is concerned with the research of acoustic similarities and differences of these two instruments using high-speed camera measurements and piezoelectric pickups to record and quantify the motion and vibrational response of each instrument part individually.
The research question here is: where do acoustic differences between the two instruments arise, and what are the underlying physical mechanisms responsible?
1. The instruments
The Chinese erhu is the most popular instrument in the bowed string instrument group known in China as huqin. It plays a central role in various kinds of classical music as well as in regional folk music styles. Figure 1 shows a handcrafted master luthier erhu. In orchestral and ensemble music its role is comparable to that of the European violin, as it often serves as the lead voice.
Figure 1. A handcrafted master luthier erhu. This instrument is used in all of our measurements.
In contrast to the violin, the erhu is played in an upright position, resting on the left thigh of the musician. It has two strings, compared with the violin's four. The bow is placed between the two strings instead of bowing from the top, as European bowed instruments are usually played. In addition to the difference in bowing technique, the left hand does not stop the strings against a neck but presses the firmly taut strings, thereby changing their freely vibrating length. A similarity between the two instruments is the use of a horsehair bow to excite the strings. An instrument similar to the erhu is documented from the 11th century onwards; the violin, from the 15th century. The historical development before that time is still not fully known, but there is some consensus among researchers that bowed lutes originated in central Asia, presumably somewhere along the Silk Road. Early pictorial sources point to a place of origin in the region known as Transoxiana, which spanned an area across modern Uzbekistan and Turkmenistan.
Comparing instruments from different cultural spheres and backgrounds is a many-faceted problem, as historical, cultural, structural, and musical factors all play an important role in the aesthetic perception of an instrument. Measuring and comparing acoustic features of instruments can help objectify this endeavour, at least to a certain degree. Therefore, the method applied in this paper aims at finding and comparing differences and similarities on an acoustic level, using different data acquisition methods. The measurement setup is depicted in Figure 2.
Figure 2. Measurement setup for both instrument measurements.
The vibration of the strings is recorded using a high-speed camera that can capture the deflection of bowed strings at a very high frame rate. An exemplary video of such a measurement is shown in Video 1.
Video 1. A high-speed recording of a bowed violin string.
The recorded motion of a string can then be tracked with sub-pixel accuracy using tracking software that traces the trajectory of a defined point on the string. The motion of the bridge is measured by attaching to it a miniature piezoelectric transducer, which converts microscopic motions into measurable electrical signals. We record the radiated instrument sound using a standard measurement microphone positioned one meter from the instrument's main radiating part. This setup yields three different types of data: the motion of the bowed string alone, without the influence of the instrument's body; the combined motion of the bridge and the string; and a recording of the radiated instrument sound under normal playing conditions.
Returning to the initial question, we can now analyze and compare each measurement individually. Even more exciting, we can combine measurements of the string deflection of one instrument with the response of the other instrument's body. In this way we can approximate how much influence the body has on the sound colour of the instrument, and whether it is possible to make an erhu performance sound like a violin performance, or vice versa. The following sound files convey an idea of this methodology by combining the string motion of part of a Mongolian folk song played on an erhu with the body of a European violin. Sound example 1 is a microphone recording of the erhu piece and sound example 2 is the same recording using only the string measurement combined with a European violin body. To experience the difference clearly, headphones or reasonably good loudspeakers are recommended.
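Conceptually, this recombination treats the instrument as a source-filter system: the measured string motion (the source) is convolved with the body's vibrational response (the filter). The sketch below illustrates the idea with a synthetic string signal and an invented three-mode body impulse response; the real measurements are far richer than this:

```python
import numpy as np

fs = 44_100
t = np.arange(fs) / fs

# stand-in "string" signal: a sawtooth-like bowed-string waveform at 440 Hz,
# built from its first 10 harmonics
string = sum(np.sin(2 * np.pi * 440 * k * t) / k for k in range(1, 11))

# stand-in "body" impulse response: a few decaying resonances
# (frequencies and decay rates here are invented for illustration)
modes = [(280, 60), (460, 40), (980, 25)]       # (frequency Hz, decay rate 1/s)
ir_t = np.arange(int(0.1 * fs)) / fs
body_ir = sum(np.exp(-d * ir_t) * np.sin(2 * np.pi * f * ir_t) for f, d in modes)

# the radiated sound is (approximately) the string signal filtered by the body
radiated = np.convolve(string, body_ir)[: string.size]
print(radiated.shape)
```

Swapping in a different body impulse response, as we do with the violin body and the erhu string signal, changes the timbre without changing the played notes.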
Audio File 1. A section of an erhu solo piece recorded with a microphone.
Audio File 2. A section of the same erhu piece combining the erhu string measurement with a violin body.
The results clearly show that the violin body has a noticeable influence on the timbre, or quality, of the piece when compared to the microphone recording of the erhu. Even so, due to the specific tonal quality of the piece itself, it does not sound like a composition from a European tradition. This means that stylistic and expressive idiosyncrasies are easily recognizable and influence the perceived aesthetic of an instrument. The proposed technique could be extended to compare other instruments, such as plucked lutes like the guitar and pi'pa, or the mandolin and ruanxian.
Popular version of paper 1aSA, "On a fire extinguisher using sound winds"
Presented 10:30 AM – 12:00 PM., November 28, 2016.
172nd ASA Meeting, Honolulu, U.S.A.
There is a variety of fire extinguishers available on the market, with differing extinguishing methods including powder, fluid, gas, and water dispersers. There has been little advancement in fire extinguisher technology in the past 50 years. Yet issues may arise when using any of these types during an emergency that hinder their smooth operation. For example, powder, fluid, or gas can solidify and become stuck inside the container, or batteries can discharge due to neglected maintenance. This leaves a need for a new kind of fire extinguisher that will operate reliably in the early stage of a fire without risk of failure. The answer may be the sound fire extinguisher.
The sound fire extinguisher has been in development since DARPA, the Defense Advanced Research Projects Agency of the United States, publicized the results of its project in 2012, showing that a fire can be put out by surrounding it with two large loudspeakers. The speakers were enormous at the time because they needed to create enough sound power to extinguish the fire. As a follow-up, in 2015 American graduate students introduced a portable sound extinguisher and demonstrated it in a video posted on YouTube. But it still required heavy equipment, weighing 9 kilograms, was relatively weak in power, and had long cables. In August 2015, we, the Sori Sound Engineering Research Institute (SSERI), introduced an improved device, a sound extinguisher using a sound lens in a speaker to focus the sound, making it roughly 10 times more powerful than the device presented in the YouTube video.
Our device still exhibited problems, such as its weight of over 2.5 kilograms and the need to operate it close to the flame. Here we introduce a further improved sound extinguisher that increases the efficiency of the device by utilizing the sound-wind. As illustrated in Figures 1 and 2 below, sound fire extinguishers do not use water or chemical fluids as conventional extinguishers do; they emit only sound. When the sound extinguisher produces low-frequency sound at 100 Hz, its vibration energy reaches the flame, scatters its membrane, blocks the influx of oxygen, and subdues the flame.
The first version of the extinguisher introduced by the SSERI research team, in which a sound lens in a speaker produced roughly 10 times more power through focusing, is shown in Figure 1. It was relatively light, weighing only 2.5 kilograms, one third the weight of previous devices, and could thus be carried in one hand without any connecting cables. It was also small, measuring 40 centimeters (a little more than a foot) in length. With an easy on-off switch, it is simple to operate at a distance of 1 to 2 meters from the flame. It can be used continuously for one hour when fully charged.
The further improved version of the sound fire extinguisher is shown in Figure 2. The most important improvement in our new fire extinguisher is the utilization of wind. Just as we blow out candles with air from our mouths, a fire can be put out by wind if its speed is over 5 meters per second when it reaches the flame. In order to acquire the power and speed required to put out a fire, we developed a way to increase the speed of the wind using low-powered speakers: a method of magnifying the power of the sound-wind.
Figure 1. The first sound fire extinguisher by SSERI: the mop type.
Figure 2. The improved extinguisher by SSERI: the portable type
Wind generally creates white noise, but we imposed particular sound frequencies on the wind. When the wind is driven at a certain frequency, namely its resonance frequency, its amplitude is magnified, creating a stronger sound-wind. Figure 3 below illustrates the mechanism of a fire extinguisher with a sound-wind amplifier. A speaker produces low-frequency sound (100 Hz and below) and creates a sound-wind; the horn effect resonates and magnifies it to roughly 15 times more power; and the magnified sound-wind reaches the flame and instantly puts out the fire.
In summary, with these improvements, the sound-wind extinguisher is best suited to the early stage of a fire. It can be used at home, at work, and on board aircraft, vessels, and cars. In the future, we will continue our efforts to improve the sound-wind fire extinguisher so that it can become available for popular use.
Figure 3: The mechanism of a sound-wind fire extinguisher
DARPA demonstration, https://www.youtube.com/watch?v=DanOeC2EpeA
American graduate students (George Mason Univ.), https://www.youtube.com/watch?v=uPVQMZ4ikvM
Park, S. Y., Yeo, K. S., Bae, M. J. "On a Detection of Optimal Frequency for Candle Fire-extinguishing," Proceedings of the 2015 Fall Conference of the Acoustical Society of Korea, Vol. 34, No. 2(s), p. 32, Nov. 2015.
Ik-Soo Ahn, Hyung-Woo Park, Seong-Geon Bae, Myung-Jin Bae, "A Study on a sound fire extinguisher using special sound lens," Journal of the Acoustical Society of America, Vol. 139, No. 4, p. 2077, April 2016.
Methane in the ocean: observing gas bubbles from afar
Tom Weber – firstname.lastname@example.org
University of New Hampshire
24 Colovos Road
Durham, NH 03824
Popular version of paper 2pAOb
Presented Tuesday Afternoon, November 29, 2016
172nd ASA Meeting, Honolulu
The more we look, the more we find bubbles of methane, a greenhouse gas, leaking from the ocean floor (e.g., Skarke et al., 2014). Some of the methane in these gas bubbles may travel to the ocean surface, where it enters the atmosphere, and some is consumed by microbes, generating biomass and the greenhouse gas carbon dioxide in the process. Given the vast quantities of methane thought to be contained beneath the seabed (Ruppel, 2011), understanding how much methane goes where is an important component of understanding climate change and the global carbon cycle.
Fortunately, gas bubbles are really easy to observe acoustically. The gas inside the bubble acts like a very soft spring compared to the nearly incompressible ocean water surrounding it. If we compress this spring with an acoustic wave, the water surrounding the bubble moves with it as an entrained mass. This simple mass-spring system isn't conceptually different from the suspension system (the spring) on your car (the mass): driving over a washboard dirt road at the wrong speed (using the right acoustic frequency) can elicit a very uncomfortable (or loud) response. We try to avoid these conditions in our vehicles, but exploiting the acoustic resonance of a gas bubble helps us detect centimeter-sized (or smaller) bubbles when they are kilometers away (Fig. 1).
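This mass-spring picture has a classic closed form, the Minnaert resonance frequency, f0 = (1/2πa)·sqrt(3γP0/ρ) for a bubble of radius a at static pressure P0. A small sketch (idealized, neglecting surface tension and damping) shows why centimeter-sized bubbles ring at a few hundred hertz:

```python
import math

def minnaert_frequency(radius_m, depth_m=0.0,
                       gamma=1.4, rho=1025.0, p_atm=101_325.0, g=9.81):
    """Resonance frequency (Hz) of a spherical gas bubble in seawater
    (classic Minnaert formula; surface tension and damping neglected)."""
    p0 = p_atm + rho * g * depth_m          # static pressure at depth
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius_m)

# a bubble of 1 cm radius near the surface resonates at a few hundred hertz,
# comfortably within the band of ship-mounted echosounders
f = minnaert_frequency(0.01)
print(f"{f:.0f} Hz")
```

Because resonance scatters sound so strongly, an echosounder operating near this frequency receives a far larger echo from the bubble than its geometric size alone would suggest.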
Methane bubbles rising from the ocean floor undergo a complicated evolution as they rise through the water column: gas is transferred both into and out of the surrounding bubble causing the gas composition of a bubble near the sea surface to look very different than at its ocean floor origin, and coatings on the bubble wall can change both the speed at which the bubble rises as well as the rate at which gas enters or exits the bubble. Understanding the various ways in which methane bubbles contribute to the global carbon cycle requires understanding these complicated details of a methane bubble’s lifetime in the ocean. We can use acoustic remote sensing techniques, combined with our understanding of the acoustic response of resonant bubbles, to help answer the question of where the methane gas goes. In doing so we map the locations of methane gas bubble sources on the seafloor (Fig. 2), measure how high up into the water column we observe gas bubbles rising, and use calibrated acoustic measurements to help constrain models of how bubbles change during their ascent through the water column.
Not surprisingly, working on answering these questions generates new questions to answer, including how the acoustic response of large, wobbly bubbles (Fig. 3) differs from that of small, spherical ones, and what the impact of methane hydrate (methane-ice) coatings is on both the fate of the bubbles and their acoustic response. Given how much of the ocean remains unexplored, we expect to be learning about methane gas seeps and their role in our climate for a long time to come.
Skarke, A., Ruppel, C., Kodis, M., Brothers, D., and Lobecker, E. (2014). "Widespread methane leakage from the sea floor on the northern US Atlantic margin," Nature Geoscience, 7(9), 657-661.
Ruppel, C. D. (2011). "Methane hydrates and contemporary climate change," Nature Education Knowledge, 3(10), 29.
Figure 1. Top row: observations of methane gas bubbles exiting the ocean floor (picture credit: NOAA OER). The red circle shows methane hydrate (methane ice). Bottom row: acoustic observations of methane gas bubbles rising through the water column.
Figure 2. A map of acoustically detected methane gas bubble seeps (blue dots) in the northern Gulf of Mexico in water depths of approximately 1000-2000 m. Oil pipelines on the seabed are shown as yellow lines.
Figure 3. Images of large, wobbly bubbles that are approximately 1 cm in size. These types of bubbles are being investigated to help understand how their acoustic response differs from that of an ideal, spherical bubble. Picture credit: Alex Padilla.