2pAB16 – Biomimetic sonar and the problem of finding objects in foliage

Joseph Sutlive – josephs7@vt.edu
Rolf Müller – rolf.mueller@vt.edu
Virginia Tech
ICTAS II, 1075 Life Science Cir (Mail Code 0917)
Blacksburg, VA 24061-1016 USA

Popular version of paper 2pAB16
Presented Tuesday afternoon, May 14, 2019
177th ASA Meeting, Louisville, KY

The ability of sonar to find targets of interest is often hampered by clutter in the environment. For example, naval sonars have difficulty finding mines that are partially or fully buried among other, distracting (clutter) targets. Such situations pose target identification challenges that are much harder than target detection and resolution problems. New ideas for approaching these problems could come from the many bat species that navigate and hunt in dense vegetation and must therefore be able to identify targets of interest within clutter. Evolutionary adaptation of the bat biosonar system is likely to have resulted in the “discovery” of features that support distinguishing clutter echoes from echoes of interest.

There are two main types of sonar: active sonar, in which the system listens for echoes triggered by its own pulses, and passive sonar, in which the system remains silent and listens to sounds from its environment. The best-established example of target identification in clutter comes from certain groups of bats that use the Doppler shifts caused by the wingbeats of flying insect prey to pick the prey out of foliage. Other bat species have been shown to use a passive approach based on distinctive prey-generated sounds. We have designed a sonar head that mimics the biosonar of the horseshoe bat, an active-sonar user and one of the bats that exploit wingbeat Doppler shifts for target identification.
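
For a sense of scale, an echo reflected from a wing surface moving toward the sonar at speed v is shifted upward in frequency by the usual two-way Doppler relation. The numbers below are purely illustrative (a wing-surface speed of about 1 m/s and an 80 kHz call are assumed; they are not values from this study):

\[ \Delta f = \frac{2v}{c}\,f_0 \approx \frac{2 \times 1\ \mathrm{m/s}}{343\ \mathrm{m/s}} \times 80\ \mathrm{kHz} \approx 0.5\ \mathrm{kHz}. \]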

Figure: Biomimetic sonar, a sonar head that mimics the biosonar of the horseshoe bat.

The sonar scanned a variety of targets hidden in artificial foliage, and the data were analyzed afterwards. Initial analysis has shown that the sonar can be used to discriminate between different objects in foliage. An additional target discrimination task was also used: first gathering echo data from an object without clutter, then trying to find that object within clutter. Initial analysis indicates that the sonar head could be used for this paradigm as well, although the results appeared to depend strongly on the direction of the target. Further investigation will refine the models explored here to better understand how an object can be extracted from a noisy, cluttered environment.
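
As a rough illustration of the "record a clean template, then look for it in clutter" idea described above, the sketch below matched-filters a cluttered recording against a clutter-free echo template. This is a generic signal-processing sketch, not the authors' actual processing; the chirp, sampling rate, and noise level are all invented:

import numpy as np

fs = 400_000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 0.002, 1 / fs)   # 2 ms of signal

# Clutter-free "template" echo of the target: a short chirp stand-in.
template = np.sin(2 * np.pi * (60_000 + 10e6 * t) * t) * np.hanning(t.size)

# Cluttered recording: the same echo buried in foliage-like noise at a delay.
rng = np.random.default_rng(0)
recording = 0.5 * rng.standard_normal(4 * t.size)
delay = 300                       # samples
recording[delay:delay + t.size] += template

# Matched filtering: cross-correlate the recording with the template and
# take the lag of the correlation peak as the estimated echo arrival time.
corr = np.correlate(recording, template, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))
print(f"estimated echo delay: {est_delay} samples ({est_delay / fs * 1e3:.2f} ms)")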

5aAB4 – How far away can an insect as tiny as a female mosquito hear the males in a mating swarm?

Lionel Feugère1,2 – l.feugere@gre.ac.uk
Gabriella Gibson2 – g.gibson@gre.ac.uk
Olivier Roux1 – olivier.roux@ird.fr
1MIVEGEC, IRD, CNRS,
Université de Montpellier,
Montpellier, France.
2Natural Resources Institute,
University of Greenwich,
Chatham, Kent ME4 4TB, UK.

Popular version of paper 5aAB4
Presented Friday morning during the session “Understanding Animal Song” (8:00 AM – 9:30 AM), May 17, 2019
177th ASA Meeting, Louisville, KY

Why do mosquitoes make that annoying sound just when we are ready for a peaceful sleep? Why do they risk their lives by ‘singing’ so loudly? Scientists recently discovered that mosquito wing-flapping creates tones that are very important for mating; the flight tones help both males and females locate a mate in the dark.

Mosquitoes hear with their two hairy antennae, which vibrate when stimulated by the sound wave created by another mosquito flying nearby. An extremely sensitive hearing organ at the base of each antenna transforms the vibrations into an electrical signal to the brain, similar to how a joystick responds to our hand movements. Mosquitoes have the most sensitive hearing of all insects; however, this hearing mechanism works well only at short distances. Consequently, scientists have assumed that mosquitoes use sound only for very short-range communication.

Theoretically, however, a mosquito can hear a sound at any distance, provided it is loud enough. In practice, a single mosquito will struggle to hear another mosquito more than a few centimeters away because the flight tone is not loud enough. In the field, however, mosquitoes are exposed to much louder flight tones. For example, males of the malaria mosquito, Anopheles coluzzii, can gather by the thousands in station-keeping flight (‘mating swarms’) for at least 20 minutes at dusk, waiting for females to arrive. We wondered whether a female mosquito could hear the sound of a male swarm from far away if the swarm is large enough.
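
Under the simplest free-field assumptions (spherical spreading from a compact source, ignoring reflections, air absorption, and any synchronization of wingbeats), the sound pressure from a single mosquito falls off as the inverse of distance, while the sound from N mosquitoes with uncorrelated wing phases adds in intensity rather than in pressure:

\[ p_1(r) \propto \frac{1}{r}, \qquad I_N \propto N \;\;\Rightarrow\;\; p_N(r) \propto \frac{\sqrt{N}}{r}, \]

so the distance at which a swarm first becomes audible should grow roughly as the square root of the number of males in it.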

To investigate this hypothesis, we started a laboratory population of An. gambiae from field-caught mosquitoes and observed their behaviour under field-like environmental conditions in a sound-proof room. Phase 1: we reproduced the visual cues and dusk lighting conditions that trigger swarming behaviour, released males in groups of tens to hundreds, and recorded their flight sounds (listen to SOUND 1 below).

Phase 2: we released one female at a time and played back the recordings of different sizes of male swarms over a range of distances to determine how far away a female can detect males. If a female hears a flying male or males, she alters her own flight tone to let the male(s) know she is there.

Our results show that a female cannot hear a small swarm until she comes within tens of centimeters of it. For larger, louder swarms, however, females consistently responded to male flight tones. The larger the number of males in the swarm, the further away the females responded: females detected a swarm of ~1,500 males at a distance of ~0.75 m, and a swarm of ~6,000 males at a distance of ~1.5 m.
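
Those two data points are consistent with the square-root scaling sketched above (a rough check, assuming simple free-field spreading):

\[ \frac{r_{6000}}{r_{1500}} \approx \sqrt{\frac{6000}{1500}} = 2, \qquad \text{and indeed} \qquad \frac{1.5\ \mathrm{m}}{0.75\ \mathrm{m}} = 2. \]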

2aAB1 – Most animals hear acoustic flow instead of pressure; we should too

N. Miles – miles@binghamton.edu

Department of Mechanical Engineering
Binghamton University
State University of New York
Binghamton, NY 13902 USA

Popular version of paper 2aAB1
Presented Tuesday morning, May 14, 2019, 8:35-8:55 am
177th ASA Meeting, Louisville, KY

The sound we hear consists of tiny, rapid changes in the pressure of air as it fluctuates about the steady atmospheric pressure.  Our ears detect these minute pressure fluctuations because they produce time-varying forces on our eardrums.  Many animals hear sound using pressure-sensitive eardrums such as ours.  However, most animals that hear sound (including countless insects) don’t have eardrums at all. Instead, they listen by detecting the tiny motion of air molecules as they flow back and forth when sound propagates.

The motion of air molecules in a sound wave is illustrated in the video below.  The moving dots in this video depict the motion of gas molecules due to the back-and-forth motion of a piston shown at the left.  The sound wave is a propagating fluctuation in the density (and pressure) of the molecules.  Note that the wave propagates to the right while each molecule (such as the larger moving dot in the center of the image) merely moves back and forth.  Small animals sense this back-and-forth motion by sensing the deflection of thin hairs that are driven by viscous forces in the fluctuating acoustic medium.
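
To get a feel for how small this back-and-forth motion is: for a plane travelling wave, the particle velocity is simply the pressure divided by the characteristic impedance of air, and the displacement follows by dividing by the angular frequency. The numbers below are a standard textbook estimate, not values from this study:

\[ u = \frac{p}{\rho_0 c} \approx \frac{1\ \mathrm{Pa}}{415\ \mathrm{Pa\,s/m}} \approx 2.4\ \mathrm{mm/s}, \qquad \xi = \frac{u}{2\pi f} \approx 0.4\ \mu\mathrm{m}\ \text{at } 1\ \mathrm{kHz}, \]

where 1 Pa corresponds to a fairly loud sound of 94 dB SPL.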

It is likely that the early inventors of acoustic sensors fashioned microphones to sense pressure because they knew that is how humans hear sound.  As a result, microphones have long relied on a thin pressure-sensing diaphragm (or ribbon) that functions much like our eardrums.  The fact that most animals don’t hear this way suggests that there may be significant benefits to considering alternative designs.  In this study, we explore technologies for achieving precise detection of sound using a mechanical structure that is driven by viscous forces associated with the fluctuating velocity of the medium.  In one example, we have shown that this approach yields a directional microphone with a flat frequency response from 1 Hz to 50 kHz (Zhou, Jian, and Ronald N. Miles. “Sensing fluctuating airflow with spider silk.” Proceedings of the National Academy of Sciences 114.46 (2017): 12120-12125).

Nature shows that there are many ways to fashion a thin, lightweight structure that responds to the minute changes in airflow that occur in a sound field.   A first step in designing an acoustic flow sensor is to understand the effect of the air’s viscosity on such a structure as it moves in a sound field; viscosity is known to be essential in the acoustic flow-sensing ears of small animals.  Our mathematical model predicts that the sound-induced motion of a very thin beam can be dominated by viscous forces when its width is on the order of five microns.  Such a structure can readily be made using modern microfabrication methods.
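
One way to see why a width of a few microns is the relevant scale (a standard textbook estimate, not the authors’ model): in air, the thickness of the oscillating viscous boundary layer, the so-called viscous penetration depth, is

\[ \delta_\nu = \sqrt{\frac{2\mu}{\rho_0\,\omega}} \approx \sqrt{\frac{2 \times 1.8\times10^{-5}\ \mathrm{Pa\,s}}{1.2\ \mathrm{kg/m^3} \times 2\pi \times 1000\ \mathrm{Hz}}} \approx 70\ \mu\mathrm{m} \quad \text{at } 1\ \mathrm{kHz}, \]

so a beam only a few microns wide sits deep inside this layer and tends to be carried along by the viscous flow rather than cutting through it.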

To create a microphone from such an extremely thin, compliant structure that responds to flow-induced viscous forces, one must also develop a means of converting its motion into an electronic signal.  We have described one method of accomplishing this using capacitive transduction (Miles, Ronald N. “A Compliant Capacitive Sensor for Acoustics: Avoiding Electrostatic Forces at High Bias Voltages.” IEEE Sensors Journal 18.14 (2018): 5691-5698).

Acknowledgement:  This research is supported by a grant from the NIH National Institute on Deafness and Other Communication Disorders (1R01DC017720-01).

2aAB8 – How dolphins deal with background noise

Maria Zapetis – maria.zapetis@usm.edu
University of Southern Mississippi
118 College Drive
Hattiesburg, MS 39406

Jason Mulsow – jason.mulsow@nmmpfoundation.org
National Marine Mammal Foundation
2240 Shelter Island Drive, Suite 200
San Diego, CA 92106

Carolyn E. Schlundt – melka@peraton.com
Peraton Corporation
4045 Hancock Street, Suite 210
San Diego, CA 92110

James J. Finneran – james.finneran@navy.mil
US Navy Marine Mammal Program
Space and Naval Warfare Systems Center, Pacific
53560 Hull Street
San Diego, CA 92152

Heidi Lyn – hlyn@southalabama.edu
University of South Alabama
75 South University Boulevard
Mobile, AL 36688

Popular version of paper 2aAB8, “Bottlenose dolphin (Tursiops truncatus) vocal modifications in response to spectrally pink background noise”
Presented Tuesday morning, November 6, 2018, 11:00 AM in Shaughnessy (FE)
176th ASA Meeting, Victoria, British Columbia, Canada

You’re in the middle of a conversation when you walk out of a quiet building onto a crowded street. A loud plane flies directly overhead. You stop your car for a passing train. Chances are, you’ve experienced these kinds of anthropogenic (man-made) noises in your everyday life. What do you do? Most people raise their voice to be heard, an automatic reaction called the Lombard effect [1, 2]. Similarly, dolphins and other marine mammals exposed to anthropogenic noise from activities such as boat traffic and construction raise their “voices” to communicate better [3, 4]. Understanding the extent to which dolphins exhibit the Lombard effect and alter their vocalizations in the presence of man-made noise is important for predicting and mitigating the potential effects of noise on wild marine mammals.

In this study, bottlenose dolphins were trained to “whistle” upon hearing a computer-generated tone (Figure 1). After successfully detecting a tone, the dolphins typically produced a “victory squeal” (a pulsed call associated with success [5]). During tone-detection trials, the dolphins’ whistles and victory squeals were recorded while one of three computer-generated noise conditions played in the background (Figure 2). The dolphins responded to every background noise condition with the Lombard effect: as the noise frequency content and level increased, the dolphins’ whistles got louder (increased in amplitude) (Figure 3). Other noise-induced vocal modifications, such as changes in the number of whistle harmonics, were also observed, depending on the specifics of the noise condition. Because this was a controlled exposure study with trained dolphins, we were able not only to exclude extraneous variables but also to see how the dolphins responded to different levels of background noise. Control over the background noise allowed us to tease apart the effects of noise level and noise frequency content. The two properties appear to affect the parameters of dolphin signals differently, which may reflect an ability to perceive them independently.


Figure 1. Hearing test procedure with US Navy Marine Mammal Program dolphins in San Diego Bay, CA. [A] Each trial begins with the trainer directing the dolphin to dive underwater and position herself on a “biteplate” in front of an underwater speaker. [B] Once on the biteplate, the dolphin waits for the hearing test tone to be presented. When the dolphin hears the tone, she whistles in response. The researcher lets the dolphin know that she is correct by playing a “reward buzzer” out of another underwater speaker. The dolphin will often respond to the reward buzzer with a victory squeal before [C] coming up for a fish reward. The dolphin’s vocalizations are recorded from the hydrophone (underwater microphone) in a green suction cup just behind the blowhole.

Figure 2. Spectrogram examples of the four conditions. The [W] whistles, [RT] reward buzzers, and [VS] victory squeals of the dolphin in Figure 1 are labeled.

Figure 3. Whistle Amplitude across four conditions. Compared to the control condition (San Diego Bay ambient noise), both dolphins produced louder whistles in every noise condition.

  1. Lombard, E. (1911). Le signe de l’élévation de la voix. Annales des Maladies de L’Oreille et du Larynx, 37, 101–119.
  2. Rabin, L. A., McCowan, B., Hooper, S. L., & Owings, D. H. (2003). Anthropogenic noise and its effect on animal communication: an interface between comparative psychology and conservation biology. The International Journal of Comparative Psychology, 16, 172-192.
  3. Buckstaff, K. C. (2004). Effects of watercraft noise on the acoustic behavior of bottlenose dolphins, Tursiops Truncatus, in Sarasota Bay, Florida. Marine Mammal Science, 20(4), 709–725.
  4. Hildebrand, J. (2009). Anthropogenic and natural sources of ambient noise in the ocean. Marine Ecology Progress Series, 395, 5–20.
  5. Dibble, D. S., Van Alstyne, K. R., & Ridgway, S. (2016). Dolphins signal success by producing a victory squeal. International Journal of Comparative Psychology, 29.


1a2b3c – How bowhead whales cope with changes in natural and anthropogenic ocean noise in the Arctic

Aaron M. Thode athode@ucsd.edu
Scripps Institution of Oceanography, UCSD, La Jolla, California 92093, USA

Susanna B. Blackwell, Katherine H. Kim, Alexander S. Conrad
Greeneridge Sciences, Inc., 90 Arnold Place, Suite D, Santa Barbara, California 93117, USA

Popular version of paper 1a2b3c
Presented Monday afternoon, session 1pAO, Arctic Acoustical Oceanography II

We live in a world full of ever-changing noise of both natural and industrial origin.  Despite this constant interference, we’ve developed several strategies for communicating with each other.  If you’ve ever attended a busy party, you’ve probably found yourself shouting to be heard by a nearby companion, and maybe even had to repeat yourself a few times to be understood.


Figure 1: Spectrogram of bowhead whale calls; the attached sound file is played at 4x normal speed.


Whales are even more dependent on sound for communicating with each other.  For example, each autumn bowhead whales, a species of baleen whale, migrate along the Alaskan North Slope from their Canadian summer feeding grounds towards the Bering Sea.  During this voyage they make numerous low-frequency sounds (50-200 Hz) that are detectable on underwater microphones, or “hydrophones,” up to tens of kilometers away.  There are many mysteries about these calls, such as what information they convey and why their frequency content seems to be shifting downward over time (Thode et al., 2017).  Nevertheless, scientists generally agree that bowheads use these sounds for communication.

Changing conditions in the Arctic have encouraged more human industrial activity in this formerly remote region.  For example, over the past decade multiple organizations have conducted seismic surveys throughout the Arctic Ocean to pinpoint oil-drilling locations or establish territorial claims.  The impulsive “airgun” sounds generated by these surveys can be detected at distances of more than 1,000 km (Thode et al., 2010).


Figure 2: Spectrogram of seismic airgun signals along with bowhead whale calls; the attached sound file is played at 4x normal speed.

Previous work by our team has found that bowhead whales double their calling rate whenever distant seismic signals are present (Blackwell et al., 2015).  But what consequences, if any, could this behavioral change have on the long-term health of the bowhead population?

To answer this question, my colleagues at Greeneridge Sciences Inc. and I have studied how bowhead whales respond to natural changes in noise levels, which during the summer and fall are caused primarily by wind in the relatively ship-free waters of the North.  We found that, like humans at a party, whales can respond in two ways to rising noise levels: they can increase their loudness, or “source level,” and/or they can increase the rate at which they produce calls.  Measuring this effect is challenging, because whenever background noise levels increase it becomes difficult to detect weaker calls, an effect called “masking.”  Because of masking, as noise levels rise one might measure a decrease in calling rate as well as an apparent increase in call source levels, even if the whales didn’t actually change their calling behavior.
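
A small, made-up simulation illustrates the bias described above: if weak calls simply drop below the detection threshold as noise rises, the calls that are still detected look louder on average and become fewer, even though the whales changed nothing. All numbers here are invented for illustration and are not from the study:

import numpy as np

rng = np.random.default_rng(1)

# Invented population of call source levels (dB), identical in every noise case.
source_levels = rng.normal(loc=155, scale=8, size=100_000)

def detected_stats(noise_db, detection_margin_db=30):
    """Count and mean source level of calls above a noise-dependent threshold."""
    threshold = noise_db + detection_margin_db
    detected = source_levels[source_levels > threshold]
    return detected.size, detected.mean()

for noise in (100, 110, 120, 130):
    n, mean_db = detected_stats(noise)
    print(f"noise {noise} dB: {n:6d} calls detected, apparent mean level {mean_db:.1f} dB")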

To solve this problem, our team deployed multiple groups of hydrophones, which allowed machine learning algorithms to localize the positions of over a million whale calls over the eight years of our study.  We then discarded over 90% of these positions, keeping only calls produced at close range to our sensors, which therefore would not become masked by changes in noise levels.  Continuing the party analogy, we effectively listened only to people close to us, so we could still detect whispers along with shouts.

We found that whales increased their source levels as noise levels increased, but once noise levels became high enough (75% of the maximum noise levels encountered naturally) the whales didn’t call any louder, even as noise levels continued to rise (Figure 3).

Figure 3: Relationship between background noise level and whale calling level, for calls made within 3.5 km of a sensor.

Whales do, however, keep increasing their call rates as noise levels rise.  We found that a 26-dB (400-fold) increase in noise levels caused calling rates to double, the same increase in calling rate caused by seismic airguns (Figure 4).
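
The “400-fold” figure is just the decibel-to-power conversion:

\[ 10^{26/10} \approx 398 \approx 400. \]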

Figure 4: Relationship between whale calling rate (over ten minutes) and background noise level.

This work has thus allowed us to place bowhead whale responses to human disturbance in a natural noise context, which eventually may assist us in evaluating the long-term impact of such activities on population growth.

References

Blackwell, S. B., Nations, C. S., McDonald, T. L., Thode, A. M., Mathias, D., Kim, K. H., Greene, C. R., Jr., and Macrander, A. M. (2015). “The effects of airgun sounds on bowhead whale calling rates: Evidence for two behavioral thresholds,” PLoS One 10, e0125720.

Thode, A. M., Blackwell, S. B., Conrad, A. S., Kim, K. H., and Macrander, A. M. (2017). “Decadal-scale frequency shift of migrating bowhead whale calls in the shallow Beaufort Sea,” J. Acoust. Soc. Am. 142, 1482-1502.

Thode, A. M., Kim, K., Greene, C. R., and Roth, E. H. (2010). “Long range transmission loss of broadband seismic pulses in the Arctic under ice-free conditions,” J. Acoust. Soc. Am. 128, EL181-EL187.

1pAB4 – Combining underwater photography and passive acoustics to monitor fish

Camille Pagniello – cpagniel@ucsd.edu
Gerald D’Spain – gdspain@ucsd.edu
Jules Jaffe – jjaffe@ucsd.edu
Ed Parnell – eparnell@ucsd.edu

Scripps Institution of Oceanography, University of California San Diego
La Jolla, CA 92093-0205, USA

Jack Butler – Jack.Butler@myfwc.com
2796 Overseas Hwy, Suite 119
Marathon, FL 33050

Ana Širović – asirovic@tamug.edu
Texas A&M University Galveston
P.O. Box 1675
Galveston, TX 77550

Popular version of paper 1pAB4 “Searching for the FishOASIS: Using passive acoustics and optical imaging to identify a chorusing species of fish”
Presented Monday afternoon, November 5, 2018
176th ASA Meeting, Victoria, Canada

Although over 120 marine protected areas (MPAs) have been established along the coast of southern California, it has been difficult to quantify their effectiveness by monitoring the presence of target animals. Traditional monitoring methods, such as diver surveys, allow species to be identified, but they are laborious and expensive and rely heavily on good weather and a talented pool of scientific divers. Additionally, the divers’ presence is known to alter animal presence and behavior. As an alternative to aid, and perhaps in the long run replace, the divers, we explored the use of long-term, continuous, passive acoustic recorders to listen to the animals’ vocalizations.

Many marine animals produce sound. In shallow coastal waters, fish are often a dominant contributor. Aristotle was the first to note the “voice” of fish, yet only sporadic reports on fish sounds appeared over the following millennia. Many of the more than 30,000 species of fish alive today are believed to produce sound; however, the acoustic behavior has been characterized for fewer than 5% of these biologically and commercially important animals.

Toward the goal of both listening to the fish and identifying which species are vocalizing, we developed the Fish Optical and Acoustic Sensor Identification System (FishOASIS) (Figure 1). This portable, low-cost instrument couples a multi-element passive acoustic array with multiple cameras, allowing us to determine which fish are making which sounds for a variety of species. In addition to detecting sporadic events such as fish spawning aggregations, the instrument also provides the ability to track individual fish within aggregations.
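
The basic trick that lets a multi-element array point toward a sound source is the time difference of arrival between hydrophones. The toy sketch below estimates that delay by cross-correlation for two synthetic channels; it is a generic illustration, not FishOASIS code, and the sampling rate, spacing, and pulse shape are all invented:

import numpy as np

fs = 48_000                               # sampling rate, Hz (illustrative)
c = 1500.0                                # nominal sound speed in seawater, m/s
d = 1.0                                   # hydrophone spacing, m (invented)

# Synthetic low-frequency fish pulse, arriving 14 samples (~0.29 ms) later on
# hydrophone B than on hydrophone A.
t = np.arange(0, 0.01, 1 / fs)
pulse = np.sin(2 * np.pi * 300 * t) * np.exp(-t / 0.002)
true_delay_samples = 14
chan_a = np.concatenate([pulse, np.zeros(200)])
chan_b = np.roll(chan_a, true_delay_samples)

# Cross-correlate and convert the peak lag to a delay and a bearing estimate.
corr = np.correlate(chan_b, chan_a, mode="full")
lag = int(np.argmax(corr)) - (chan_a.size - 1)
tau = lag / fs
bearing = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
print(f"estimated delay {tau * 1e3:.2f} ms, bearing ~{bearing:.0f} degrees off broadside")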


Figure 1. A diver deploying FishOASIS in the kelp forest off La Jolla, CA.

Choruses (i.e., the simultaneous vocalizations of many animals) are often associated with fish spawning aggregations, and in our work FishOASIS successfully recorded a low-frequency fish chorus in the kelp forest off La Jolla, CA (Figure 2).

Figure 2. Long-term spectral average (LTSA) of low-frequency fish chorus of unknown species on June 8, 2017 at 17:30:00. Color represents spectrum level, with red indicating highest pressure level.

The chorus starts half an hour before sunset and lasts about 3-4 hours almost every day from May to September. While individuals within the aggregation are dispersed over a large area (approx. 0.07 km²), the chorus’s spatial extent is fairly constant over time. Species that could be producing this chorus include kelp bass (Paralabrax clathratus) and halfmoon (Medialuna californiensis) (Figure 3).

Figure 3. A halfmoon (Medialuna californiensis) in the kelp forest off La Jolla, CA.

FishOASIS has also been used to identify the sounds of barred sand bass (Paralabrax nebulifer), a popular species among recreational fishermen in the Southern California Bight (Figure 4).

Figure 4. Barred sand bass (Paralabrax nebulifer) call.

This study demonstrates that combining multiple cameras with a multi-element passive acoustic array is a cost-effective method for monitoring the activity, diversity, and biomass of sound-producing fish. The approach is minimally invasive and offers greater spatial and temporal coverage at significantly lower cost than traditional methods. As such, FishOASIS is a promising tool for collecting the information required to use passive acoustics to monitor MPAs.