5aUW7 – Using Noise to Probe the Seafloor

Tsuwei Tan – ttan1@nps.edu
Oleg A. Godin – oagodin@nps.edu
Physics Dept., Naval Postgraduate School
1 University Cir.
Monterey, CA 93943, USA

Popular version of paper 5aUW7
Presented Friday morning, November 9, 2018, 10:15-10:30 AM
176th ASA Meeting, Victoria, BC Canada

Introduction
Scientists have long used sound to probe the ocean and its bottom. Breaking waves, roaring earthquakes, speeding supertankers, snapping shrimp, and vocalizing whales make the ocean a very noisy place. Rather than “shouting” above this ambient noise with powerful dedicated sound sources, we are now learning how to measure ocean currents and seafloor properties using the noise itself. In this paper, we combine long recordings of ambient noise with a signal processing technique called time warping to quantify seafloor properties. Time warping changes the signal rate so that we can extract individual modes, which carry information about the ocean’s properties.

Experiment & Data
Our data come from Michael Brown and colleagues [1], who recorded ambient noise in the Straits of Florida with several underwater microphones (hydrophones) continuously over six days (see Figure 1). We applied time warping to these data. By cross-correlating noise recordings made at points A and B several kilometers apart, one obtains a signal that approximates the signal received at A when a sound source is placed at B. With this approach, a hydrophone becomes a virtual sound source. The sound of the virtual source (the noise cross-correlation function) can be played in Figure 2. There are two nearly symmetric peaks in the cross-correlation function shown in Figure 1 because A also serves as a virtual source of sound at B. Having two virtual sources allowed Oleg Godin and colleagues to measure current velocity in the Straits of Florida [2].
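
As a rough illustration of the idea, the sketch below (Python; not the authors’ actual processing, and the segment length is an arbitrary assumption) averages cross-correlations of many short noise segments so that the travel-time peaks of the virtual source emerge from the random noise.

```python
# Minimal sketch of ambient-noise cross-correlation (illustrative only).
import numpy as np
from scipy.signal import fftconvolve

def noise_cross_correlation(noise_a, noise_b, fs, segment_s=60.0):
    """Average cross-correlations of short, synchronized noise segments.

    noise_a, noise_b : noise records at hydrophones A and B (1-D arrays)
    fs               : sampling rate in Hz
    segment_s        : segment length in seconds (assumed value)
    """
    n = int(segment_s * fs)
    n_seg = min(len(noise_a), len(noise_b)) // n
    acc = np.zeros(2 * n - 1)
    for k in range(n_seg):
        a = noise_a[k * n:(k + 1) * n]
        b = noise_b[k * n:(k + 1) * n]
        # cross-correlate the two segments; peaks at positive and negative
        # lags correspond to sound traveling from B to A and from A to B
        acc += fftconvolve(a, b[::-1], mode="full")
    lags = np.arange(-(n - 1), n) / fs   # lag time in seconds
    return lags, acc / n_seg
```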

Figure 1. Illustration of the experimental site and the cross-correlation function of ambient noise received by hydrophones A and B in 100 m-deep water at a horizontal separation of about 5 km in the Straits of Florida.

Figure 2. Five-second audio of correlated ambient noise from Figure 1: at receiver A, a stronger impulsive sound starts at 3.25 s, which is the time it takes underwater acoustic waves to travel from B to A. Listen here

Retrieving Environmental Information
Sound travels faster or slower underwater depending on how soft or hard the seafloor is. We employ time warping to analyze the signal produced by the virtual sound source. Time warping is akin to using a whimsical clock that makes the original signal run at a decreasing pace rather than steadily (Figure 3a to 3b). The changing pace is designed to split the complicated signal into simple, predictable components called normal modes (Figure 3c to 3d). Travel times of normal modes from B to A at different acoustic frequencies prove to be very sensitive to the sound speed and density in the ocean’s bottom layers. The depth dependence of these geo-acoustic parameters at the experimental site, as well as the precise distance from B to A, can be determined by trying various sets of the parameters and finding the one that best fits the acoustic normal modes revealed by the ambient noise measurements. The method is illustrated in Figure 4. The sound of the virtual source (Figure 2), which emerges from ambient noise, reveals that the ocean bottom at the experimental site is an 11 m-thick layer of sand overlying a much thicker layer of limestone (Figure 5).
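
The warping step can be pictured with a few lines of code. The sketch below (Python) resamples the virtual-source signal onto a stretched time axis of the form h(t) = sqrt(t² + t0²), a common choice for shallow-water waveguides; the exact warping function and parameters used in our analysis may differ, and t0 = r/c is only an approximate guess here.

```python
# Minimal sketch of time warping (illustrative; not the exact processing used).
import numpy as np

def warp_signal(s, fs, t0):
    """Resample s(t) onto the warped time axis h(t) = sqrt(t^2 + t0^2).

    s  : virtual-source signal with t = 0 at the (virtual) emission time
    fs : sampling rate in Hz
    t0 : approximate direct travel time r/c in seconds (assumed known)
    """
    t = np.arange(len(s)) / fs
    h = np.sqrt(t**2 + t0**2)              # warped time axis (the "whimsical clock")
    hprime = t / np.sqrt(t**2 + t0**2)     # rate of the warped clock
    # sample the signal at the warped times and rescale its amplitude
    warped = np.sqrt(hprime) * np.interp(h, t, s)
    return warped
```

After warping, each normal mode appears as a nearly constant-frequency line in the spectrogram of the warped signal (Figure 3d), where it can be separated by simple filtering.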

Figure 3. Time warping process: components of the virtual source signal from noise are separated in the spectrogram of the warped signal from (c) to (d).

Figure 4. Comparison of measured travel times of normal modes to the travel times theoretically predicted for various trial models of the ocean bottom and the geometry of the experiment. The measured and theoretically predicted travel times are shown by circles and lines, respectively. Individual normal modes are distinguished by color. By fixing the geo-acoustic parameters (sound speed and density), the precise range r between hydrophones A and B can be found by minimizing the difference between the measured and predicted travel times. The best fit is found at r = 4988 m. Watch here
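
Conceptually, the fitting step in Figure 4 is a simple search: for each trial bottom model and range, compare predicted and measured modal travel times and keep the combination with the smallest misfit. The toy sketch below (Python) shows the range search only; the array names and grid are illustrative assumptions.

```python
# Minimal sketch of the travel-time fit over range (illustrative only).
import numpy as np

def best_range(measured_times, group_speeds, r_grid):
    """measured_times[m, k] : measured travel time of mode m at frequency k (s)
    group_speeds[m, k]   : modal group speeds predicted for a trial bottom model (m/s)
    r_grid               : candidate hydrophone separations in meters
    """
    misfits = []
    for r in r_grid:
        predicted = r / group_speeds     # travel time = range / group speed
        misfits.append(np.nanmean((measured_times - predicted) ** 2))
    return r_grid[int(np.argmin(misfits))]

# e.g. best_range(times, speeds, np.arange(4900.0, 5100.0, 1.0)) would return
# a value near 4988 m for the best-fitting bottom model in Figure 4.
```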

Figure 5. Ocean bottom properties retrieved from ambient noise. Blue and red lines show sound speed in water and bottom, respectively, at different depths below the ocean surface. The ratios ρs and ρb of bottom density to seawater density are also shown for the two bottom layers.

Conclusion
Ambient noise does not have to be an obstacle to acoustic remote sensing of the ocean. We are learning how to use it to quantify ocean properties. In this research, we used ambient noise to probe the ocean bottom. Time warping was applied to ambient noise records to successfully measure sound speeds and densities at different depths below the seafloor in the Straits of Florida. Our passive acoustic approach is inexpensive, non-invasive, and environmentally friendly. We are currently working on applying the same approach to the extensive underwater ambient noise recordings obtained at several sites off New Jersey during the Shallow Water 2006 experiment.

References

[1] M. G. Brown, O. A. Godin, N. J. Williams, N. A. Zabotin, L. Zabotina, and G. J. Banker, “Acoustic Green’s function extraction from ambient noise in a coastal ocean environment,” Geophys. Res. Lett. 41, 5555–5562 (2014).

[2] O. A. Godin, M. Brown, N. A. Zabotin, L. Y. Zabotina, and N. J. Williams, “Passive acoustic measurement of flow velocity in the Straits of Florida,” Geoscience Lett. 1, 16 (2014).

1pEAa5 – A study on the optimal speaker position for improving sound quality of flat panel display

Sungtae Lee, owenlee@lgdisplay.com
Kwanho Park, khpark12@lgdisplay.com
Hyungwoo Park, pphw@ssu.ac.kr
Myungjin Bae, mjbae@ssu.ac.kr
37-8, LCD-ro 8beon-gil, Wollong-myeon, Paju-si, Gyeonggi-do, Republic of Korea

The “OLED Panel Speaker” was developed by attaching exciters to the back of OLED panels, which do not need backlights. By synchronizing the video and sound on the screen, the OLED Panel Speaker delivers clear voices and immersive sound. This technology, which can only be applied to OLED, has already been adopted by some TV makers and is receiving excellent reviews and evaluations.

With the continuous development of the display industry and progress in IT technology, displays have steadily advanced. Through the evolution of display technology from CRT to LCD and OLED, TVs have come to offer much better picture quality, and this remarkable improvement has drawn positive market reactions. In the meantime, the relatively bulky speaker was hidden behind the panel to keep TVs thin, so TV sound could not keep up with the progress in picture quality until LG Display developed the Flat Panel Speaker, taking advantage of the OLED panel's thickness of less than 1 mm.

To realize this technology, we developed an exciter that simplifies the conventional speaker structure. Specially designed exciters are positioned on the back of the panel and invisibly vibrate the screen to create sound.

We developed and applied an enclosure structure to realize “stereo sound” on a single sheet of OLED panel and confirmed positive results through vibrational mode analysis.


Depending on the shape of the enclosure tape, standing waves create a peak and a dip at certain frequencies. By changing the enclosure shape to shift the peak and dip frequencies to 1/3 λ, the peak is reduced by about 37%, from 8 dB to 5 dB.
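
As a back-of-the-envelope illustration (the cavity dimension below is an assumed value, not a product specification), the standing-wave frequencies scale with the enclosure dimensions, and the quoted improvement is simply the relative reduction of the peak level:

```python
# Rough illustration of a standing-wave frequency and the peak reduction.
c = 343.0                       # speed of sound in air, m/s
L = 0.20                        # assumed cavity dimension along the standing wave, m
fundamental = c / (2 * L)       # first standing-wave frequency, ~860 Hz
reduction = (8.0 - 5.0) / 8.0   # an 8 dB peak lowered to 5 dB -> 37.5 %
print(f"fundamental ≈ {fundamental:.0f} Hz, peak reduced by {reduction:.0%}")
```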


When this technology is applied, the sound image moves to the center of the screen, maximizing the immersive experience and enabling realistic sound.


2pPP7 – Your ears never sleep: auditory processing of nonwords during sleep in children

Adrienne Roman – adrienne.s.roman@vumc.org
Carlos Benitez – carlos.r.benitez@vanderbilt.edu
Alexandra Key – sasha.key@vanderbilt.edu
Anne Marie Tharpe – anne.m.tharpe@vumc.org

The brain needs a variety of stimulation from the environment to develop and grow. The ability of the brain to change as a result of sensory input and experiences is often referred to as experience-dependent plasticity. When children are young, their brains are more susceptible to experience-dependent plasticity (e.g., Kral, 2013), so the quantity and quality of input are important. Because our ears are always “on”, our auditory system receives a lot of input to process, especially while we are awake. But what can we “hear” when we are asleep? And does what we hear while we are asleep help our brains develop?

Although there has been research in infants and adults examining the extent to which our brains process sounds during sleep, very little research has focused on young children, a group that sleeps for a significant portion of the day (Paruthi et al., 2016). We decided to start our investigation by trying to answer the question: do children process and retain information heard during sleep? To investigate this question, we used electroencephalography (EEG) to measure the electrical activity of children’s brains in response to different sounds – sounds they heard when asleep and sounds they heard when awake.

First, during the child’s regular naptime, each child was hooked up to a portable EEG. Using EEG, a technician could tell us when the child went to sleep. Once asleep, we played the child three made-up words over and over in random order for ten minutes. Then, we let the child continue to sleep until he or she woke up.

When the children awoke from their naps, we took them to our EEG lab for event-related potential (ERP) testing. ERPs are segments of ongoing EEG recordings, appearing as waveforms, that reflect the brain’s response to events or stimulation (such as a sound being played).

The children wore “hats” consisting of 128 spongy electrodes while listening to the same three made-up words heard during the nap, mixed in with new made-up words that the children had never heard before. We then analyzed the ERPs to determine if the children’s brains responded differently to the words played during sleep than to the new words the children had not heard before. We were looking for ‘memory traces’ in the EEG that would indicate that the children ‘remembered’ the words heard while sleeping.
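
In essence, an ERP is computed by cutting the continuous EEG into short epochs time-locked to each word onset and averaging them, so that brain activity unrelated to the stimulus averages out. The sketch below (Python) illustrates the idea; it is a simplified outline, not our actual analysis pipeline, and the epoch window is an assumed choice.

```python
# Minimal sketch of ERP averaging (illustrative only).
import numpy as np

def average_erp(eeg, onsets, fs, pre_s=0.1, post_s=0.8):
    """eeg    : continuous EEG, array of shape (n_channels, n_samples)
    onsets : sample indices of stimulus (nonword) onsets
    fs     : sampling rate in Hz
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for onset in onsets:
        if onset < pre or onset + post > eeg.shape[1]:
            continue                       # skip epochs that run off the record
        epoch = eeg[:, onset - pre:onset + post]
        # baseline-correct each channel using the pre-stimulus interval
        epochs.append(epoch - epoch[:, :pre].mean(axis=1, keepdims=True))
    return np.mean(epochs, axis=0)         # ERP waveform per channel
```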

We found that children’s brains were able to differentiate the nonsensical words “heard” during the nap from the brand-new words played during the ERP testing. This means that the brain did not just filter the incoming information but also retained it long enough to recognize it after the children woke up. This is the first step in understanding the impact of a child’s auditory environment during sleep on the brain.

Kral, A. (2013). Auditory critical periods: a review from system’s perspective. Neuroscience, 247, 117-133.

Paruthi, S., Brooks, L. J., D’Ambrosio, C., Hall, W. A., Kotagal, S., Lloyd, R. M., Malow, B. A., Maski, K., Nichols, C., Quan, S. F., Rosen, C. L., Troester, M. M., & Wise, M. S. (2016). Recommended amount of sleep for pediatric populations: a consensus statement of the American Academy of Sleep Medicine. Journal of Clinical Sleep Medicine, 12(6), 785.

 

1pAB4 – Combining underwater photography and passive acoustics to monitor fish

Camille Pagniello – cpagniel@ucsd.edu
Gerald D’Spain – gdspain@ucsd.edu
Jules Jaffe – jjaffe@ucsd.edu
Ed Parnell – eparnell@ucsd.edu

Scripps Institution of Oceanography, University of California San Diego
La Jolla, CA 92093-0205, USA

Jack Butler – Jack.Butler@myfwc.com
2796 Overseas Hwy, Suite 119
Marathon, FL 33050

Ana Širović – asirovic@tamug.edu
Texas A&M University Galveston
P.O. Box 1675
Galveston, TX 77550

Popular version of paper 1pAB4 “Searching for the FishOASIS: Using passive acoustics and optical imaging to identify a chorusing species of fish”
Presented Monday afternoon, November 5, 2018
176th ASA Meeting, Victoria, Canada

Although over 120 marine protected areas (MPAs) have been established along the coast of southern California, it has been difficult to quantify their effectiveness via the presence of target animals. Traditional monitoring methods, such as diver surveys, allow species to be identified, but they are laborious and expensive and rely heavily on good weather and a talented pool of scientific divers. Additionally, a diver’s presence is known to alter animal presence and behavior. As an alternative to aid and perhaps, in the long run, replace divers, we explored the use of long-term, continuous, passive acoustic recorders to listen to the animals’ vocalizations.

Many marine animals produce sound. In shallow coastal waters, fish are often a dominant contributor. Aristotle was the first to note the “voice” of fish, yet only sporadic reports on fish sounds appeared over the next few millennia. Many of the over 30,000 species of fish that exist today are believed to produce sound; however, acoustic behavior has been characterized in fewer than 5% of these biologically and commercially important animals.

Towards the goal of both listening to the fish and identifying which species are vocalizing, we developed the Fish Optical and Acoustic Sensor Identification System (FishOASIS) (Figure 1). This portable, low-cost instrument couples a multi-element passive acoustic array with multiple cameras, allowing us to determine which fish are making which sound for a variety of species. In addition to detecting sporadic events such as fish spawning aggregations, this instrument also provides the ability to track individual fish within aggregations.


Figure 1. A diver deploying FishOASIS in the kelp forest off La Jolla, CA.

Choruses (i.e., the simultaneous vocalization of animals) are often associated with fish spawning aggregations and, in our work, FishOASIS was successful in recording a low-frequency fish chorus in the kelp forest off La Jolla, CA (Figure 2).

Figure 2. Long-term spectral average (LTSA) of low-frequency fish chorus of unknown species on June 8, 2017 at 17:30:00. Color represents spectrum level, with red indicating highest pressure level.
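
An LTSA like the one in Figure 2 is built by averaging many consecutive power spectra so that hours of recording can be viewed in a single image. The sketch below (Python) shows the basic computation; the block length and FFT settings are assumed values, not the settings used for Figure 2.

```python
# Minimal sketch of a long-term spectral average (illustrative only).
import numpy as np
from scipy.signal import welch

def ltsa(recording, fs, block_s=30.0, nperseg=2048):
    """recording : 1-D hydrophone time series
    fs        : sampling rate in Hz
    block_s   : seconds of data averaged into each LTSA time bin
    """
    n = int(block_s * fs)
    freqs, columns = None, []
    for start in range(0, len(recording) - n + 1, n):
        freqs, pxx = welch(recording[start:start + n], fs=fs, nperseg=nperseg)
        columns.append(10 * np.log10(pxx))   # spectrum level in dB
    return freqs, np.array(columns).T        # frequency-by-time image
```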

The chorus starts half an hour before sunset and lasts about 3-4 hours almost every day from May to September. While individuals within the aggregation are dispersed over a large area (approx. 0.07 km²), the chorus’ spatial extent is fairly fixed over time. Species that could be producing this chorus include kelp bass (Paralabrax clathratus) and halfmoons (Medialuna californiensis) (Figure 3).

Figure 3. A halfmoon (Medialuna californiensis) in the kelp forest off La Jolla, CA.

FishOASIS has also been used to identify the sounds of barred sand bass (Paralabrax nebulifer), a popular species among recreational fishermen in the Southern California Bight (Figure 4).

Figure 4. Barred sand bass (Paralabrax nebulifer) call.

This work demonstrates that combining multiple cameras with multi-element passive acoustic arrays is a cost-effective method for monitoring sound-producing fish activity, diversity, and biomass. The approach is minimally invasive and offers greater spatial and temporal coverage at significantly lower cost than traditional methods. As such, FishOASIS is a promising tool for collecting the information required to implement passive acoustic monitoring of MPAs.

2pAB1 – The Acoustic World of Bat Biosonar

Rolf Mueller – rolf.mueller@vt.edu

Virginia Tech
1075 Life Science Cir
Blacksburg, VA 24061

Popular version of paper 2pAB1
Presented Tuesday afternoon, November 6, 2018
176th ASA Meeting, Victoria, BC, Canada

Ultrasound plays a pivotal role in the life of bats, since the animals rely on echoes triggered by their ultrasonic biosonar pulses as their primary source of information on their environments.

However, air is far from an ideal medium for sound propagation, since it subjects the waves to severe absorption that dissipates sound energy into heat. Because absorption gets much worse with increasing frequency, the ultrasonic frequencies of bats are particularly affected, and the operating range of bat biosonar is just a few meters for typical sensing tasks.

Absorption limits the highest ultrasonic frequencies at which bats can operate. This has consequences for the animals’ ability to concentrate the acoustic energy they emit or receive in narrow beams. Forming a narrow beam requires a sonar emitter/receiver that is much larger than the wavelength. Being small mammals, bats have not been able to evolve ears that are much larger (i.e., 2 or 3 orders of magnitude) than the ultrasonic wavelengths of their biosonar systems and hence have fairly wide beams (e.g., 60 degrees or wider).
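
A rule-of-thumb diffraction estimate makes this concrete (the numbers below are assumed, order-of-magnitude values, not measurements of any particular species): the beamwidth scales roughly as the wavelength divided by the aperture size, so an ear or noseleaf about the size of the wavelength yields a beam tens of degrees wide.

```python
# Back-of-the-envelope beamwidth estimate (assumed, illustrative values).
c = 343.0                                     # speed of sound in air, m/s
frequency = 60e3                              # a typical bat biosonar frequency, Hz
aperture = 0.006                              # ~6 mm ear/noseleaf aperture, m (assumed)
wavelength = c / frequency                    # ~5.7 mm
beamwidth_deg = 60.0 * wavelength / aperture  # rough -3 dB beamwidth of a circular aperture
print(f"wavelength ≈ {wavelength*1e3:.1f} mm, beamwidth ≈ {beamwidth_deg:.0f} degrees")
```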


Figure 1. Ultrasonic pulses followed by their echo trains, created in a forest by a robot that mimics the biosonar system of horseshoe bats.

For bat species that navigate and hunt in dense vegetation, a broad sonar beam means that the animals receive a lot of “clutter” echoes from the surrounding vegetation. These clutter echoes are likely to drown out informative echoes related to the presence of prey or passageways.


Figure 2. Biomimetic robot mimicking the biosonar system of horseshoe bats.

Given these basic acoustical conditions, it appears that bat biosonar should be a complete disaster, but in reality the opposite is the case. Bats are the second most species-rich group of mammals (after rodents) and have successfully conquered a diverse set of habitats and food sources based on a combination of active biosonar and flapping flight. Hence, a narrow focus on standard sonar parameters like beamwidth, signal-to-noise ratio, and resolution may not be the right way to understand the biosonar skills of bats. To remedy this situation, we have created a robot that mimics the biosonar system of horseshoe bats. The robot is currently being used to collect large numbers of echoes from natural environments to create a database for identifying non-standard informative echo features using machine learning methods.