1pUW4 – Videos of ultrasonic wave propagation through transparent acrylic objects in water for introductory physics courses produced using refracto-vibrometry

Matthew Mehrkens – mmehrken@gustavus.edu
Benjamin Rorem – brorem@gustavus.edu
Thomas Huber – huber@gustavus.edu
Gustavus Adolphus College
Department of Physics
800 West College Avenue
Saint Peter, MN 56082

Popular version of paper 1pUW4, “Videos of ultrasonic wave propagation through transparent acrylic objects in water for introductory physics courses produced using refracto-vibrometry”
Presented Monday afternoon, May 7, 2018, 2:30pm – 2:45pm, Greenway B
175th ASA Meeting, Minneapolis

In most introductory physics courses, there are units on sound waves and optics. These may include readings, computer simulations, and lab experiments where properties such as reflection and refraction of light are studied. Similarly, students may study how an object, such as an airplane, traveling faster than the speed of sound can produce a Mach cone. Equations, such as Snell’s Law of Refraction or the Mach angle equation are derived or presented that allow students to perform calculations. However, there is an important piece that is missing for some students – they are not able to actually see the sound or light waves traveling.

The goal of this project was to produce videos of ultrasonic wave propagation through a transparent acrylic sample that could be incorporated into introductory high-school and college physics courses. Students can observe and quantitatively study wave phenomena such as reflection, refraction and Mach cone formation. By using rulers, protractors, and simple equations, students can use these videos to determine the velocity of sound in water and acrylic.

Video that demonstrates ultrasonic waves propagating in acrylic samples measured using refracto-vibrometry.

To produce these videos, an optical technique called refracto-vibrometry was used. As shown in Figure 1, the laser from a scanning laser Doppler vibrometer was directed through a water-filled tank at a retroreflective surface.


Figure 1: (a) front view, and (b) top view. The pulse from an ultrasound transducer passes through water and is incident on a transparent rectangular target. To measure propagating wave fronts using refracto-vibrometry, the laser from the vibrometer traveled through the water and was reflected off a retroreflector.

 

The vibrometer detected the density changes as the ultrasound wave pulse passed through the laser beam. This process of measuring the ultrasound arrival time was performed thousands of times when the laser was directed at a large collection of scan points. These data sets were used to create videos of the propagating ultrasound.

In one measurement, a transparent rectangular acrylic block, tilted at an angle, was placed in the water tank. Figure 2 is a single frame from a video showing the traveling ultrasonic waves emitted from a transducer and reflected/refracted by the block. By using the video, along with a ruler and protractor, students can determine the speed of sound in the water and acrylic block.

Video showing ultrasonic waves traveling through water as they are reflected and refracted by a transparent acrylic block.

Figure 2: Ultrasonic wave pulses (cyan and red colored bands) as they travel from water into the acrylic block (the region outlined in magenta). The paths of the wave maxima are shown by the green and blue dots.
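For readers who want to check the measurement numerically, the Snell's-law step can be sketched in a few lines of Python. The angles and the nominal water sound speed below are illustrative placeholders, not values taken from the video:

```python
import math

def speed_in_block(theta_i_deg, theta_r_deg, c_water=1482.0):
    """Estimate the speed of sound in the acrylic block from Snell's law,
    sin(theta_i)/c_water = sin(theta_r)/c_block, with both angles measured
    from the normal to the block face (c_water is a nominal value in m/s)."""
    return c_water * math.sin(math.radians(theta_r_deg)) / math.sin(math.radians(theta_i_deg))

# Illustrative angles, as a student might read them off a video frame
# with a protractor (not the actual values from this experiment):
c_block = speed_in_block(theta_i_deg=30.0, theta_r_deg=60.0)
```

Because sound travels faster in acrylic than in water, the refracted angle is larger than the incident angle, so the ratio comes out greater than one.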

In a similar measurement, a transparent acrylic cylinder was suspended in the water tank by fine monofilament string.  As an ultrasonic pulse traveled in the cylinder, it created a small bulge in the surface. Because this bulge in the acrylic cylinder traveled faster than the speed of sound in water, it produced a Mach cone that can be seen in the video and in Figure 3.  Students can determine the speed of sound in the cylinder by measuring the angle of this cone.

Figure 3: Mach cone produced by ultrasonic waves traveling faster in acrylic cylinder than in water.

Video showing formation of a Mach cone resulting from ultrasonic waves traveling faster through an acrylic cylinder than in water.
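The cone-angle measurement can likewise be turned into a one-line calculation. The numbers are again placeholders; students would substitute the half-angle they measure from the video with a protractor:

```python
import math

def speed_from_mach_angle(half_angle_deg, c_water=1482.0):
    """Speed of the disturbance traveling along the cylinder, from the
    Mach cone half-angle: sin(half_angle) = c_water / c_source.
    c_water is a nominal sound speed in m/s."""
    return c_water / math.sin(math.radians(half_angle_deg))
```

A narrower cone means a faster disturbance; a half-angle of 90 degrees would mean the disturbance travels exactly at the speed of sound in water and no cone forms.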

By interacting with these videos, students should be able to gain a better understanding of wave behavior. The videos are available for download from http://physics.gustavus.edu/~huber/acoustics

This material is based upon work supported by the National Science Foundation under Grant Numbers 1300591 and 1635456. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

3aUWa6 – Inversion of geo-acoustic parameters from sound attenuation measurements in the presence of swim bladder bearing fish

Orest Diachok – orest.diachok@jhuapl.edu
Johns Hopkins University Applied Physics Laboratory
11100 Johns Hopkins Rd.
Laurel MD 20723

Altan Turgut – turgut@wave.nrl.navy.mil
Naval Research Laboratory
4555 Overlook Ave. SW
Washington DC 20375

Popular version of paper 3aUWa6 “Inversion of geo-acoustic parameters from transmission loss measurements in the presence of swim bladder bearing fish in the Santa Barbara Channel”
Presented Wednesday morning, December 6, 2017, 9:15-10:00 AM, Salon E
174th ASA Meeting, New Orleans

The intensity of sound propagating from a source in the ocean diminishes with range due to geometrical spreading, chemical absorption, and reflection losses from the bottom and surface. Measurements of sound intensity vs. range and depth in the water column may be used to infer the sound speed, density, and attenuation coefficient (geo-alpha) of bottom sediments. Numerous inversion algorithms have been developed to search through physically viable permutations of these parameters and identify the values that provide the best fit to measurements. This approach yields valid results in regions where the concentration of swim bladder bearing fish is negligible.
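As a rough sketch of the correction step described above, one can subtract the predictable losses from a measured transmission loss to isolate the "excess" attenuation attributable to the bottom and, where present, fish. This toy version assumes simple spherical spreading and a nominal chemical-absorption coefficient; real analyses use frequency-dependent absorption models:

```python
import math

def excess_attenuation_db(measured_tl_db, range_m, alpha_chem_db_per_km=0.06):
    """Remove geometrical spreading (spherical, 20*log10(r)) and chemical
    absorption from a measured transmission loss. alpha_chem_db_per_km is an
    assumed illustrative value; in reality it varies strongly with frequency."""
    spreading = 20.0 * math.log10(range_m)          # dB, spherical spreading
    absorption = alpha_chem_db_per_km * range_m / 1000.0   # dB, chemical
    return measured_tl_db - spreading - absorption
```

Whatever loss remains after this correction is what the inversion must explain with geo-alpha and, in fish-rich regions, bio-alpha.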

In regions where there are large numbers of swim bladder bearing fish, the effect of attenuation due to fish (bio-alpha) needs to be considered to permit unbiased estimates of geo-acoustic parameters (Diachok and Wales, 2005; Diachok and Wadsworth, 2014).

Swim bladder bearing fish resonate at frequencies controlled by the dimensions of their swim bladders. Adult 16 cm long sardines resonate at 1.1 kHz at 12 m depth. Juvenile sardines, being smaller, resonate at higher frequencies. If the number of fish is sufficiently large, sound will be highly attenuated at the resonance frequencies of their swim bladders.
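The scaling of swim-bladder resonance with bladder size and depth can be illustrated with the classic Minnaert free-bubble approximation. This is only a cartoon of the real physics (actual bladders are prolate and damped by surrounding tissue), and the effective radius below is an assumed illustrative value, but with a radius of a few millimeters at 12 m depth it lands in the right neighborhood of 1 kHz:

```python
import math

def bladder_resonance_hz(radius_m, depth_m, rho=1025.0, gamma=1.4,
                         p_atm=101325.0, g=9.81):
    """Minnaert resonance of a free spherical gas bubble of the given
    effective radius at the given depth: f = sqrt(3*gamma*P/rho)/(2*pi*r).
    Only a rough stand-in for a real swim bladder."""
    p = p_atm + rho * g * depth_m          # hydrostatic pressure at depth
    return math.sqrt(3.0 * gamma * p / rho) / (2.0 * math.pi * radius_m)
```

The formula captures the two trends in the text: smaller (juvenile) bladders resonate at higher frequencies, and the resonance of a given fish rises as it swims deeper.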

To demonstrate the competing effects of bio and geo-alpha on sound attenuation we conducted an interdisciplinary experiment in the Santa Barbara Channel during a month when the concentration of sardines was known to be relatively high. This experiment included an acoustic source, S, which permitted measurements at frequencies between 0.3 and 5 kHz and an array of 16 hydrophones, H, which was deployed 3.7 km from the source, as illustrated in Figure 1. Sound propagating from S to H was attenuated by sediments at the bottom of the ocean (yellow) and a layer of fish at about 12 m depth (blue). To validate inferred geo-acoustic values from the sound intensity vs. depth data, we sampled the bottom with cores and measured sound speed and geo-alpha vs. depth with a near-bottom towed chirp sonar (Turgut et al., 2002). To validate inferred bio-acoustic values, Carla Scalabrin of Ifremer, France measured fish layer depths with an echo sounder, and Paul Smith of the Southwest Fisheries Science Center conducted trawls, which provided length distributions of dominant species. The latter permitted calculation of swim bladder dimensions and resonance frequencies.

Figure 1. Experimental geometry: source, S deployed 9 m below the surface between a float and an anchor, and a vertical array of hydrophones, H, deployed 3.7 km from source.

Figure 2 provides two-hour averaged measurements of excess attenuation coefficients (corrected for geometrical spreading and chemical absorption) vs. frequency and depth at night, when these species are generally dispersed (far apart from each other) near the surface. The absorption bands centered at 1.1, 2.2 and 3.5 kHz corresponded to 16 cm sardines, 10 cm anchovies, and juvenile sardines or anchovies at 12 m respectively. During daytime, sardines generally form schools at greater depths, where they resonate at “bubble cloud” frequencies, which are lower than the resonance frequencies of individuals.


Figure 2. Concurrent echo sounder measurements of energy reflected from fish vs. depth (left), and excess attenuation vs. frequency and depth at night (right).

The method of concurrent inversion (Diachok and Wales, 2005) was applied to measurements of sound intensity vs. depth to estimate values of bio-and geo-acoustic parameters. The geo-acoustic search space consisted of the sound speed at the top of the sediments, the gradient in sound speed and geo-alpha. The biological search space consisted of the depth and thickness of the fish layer and bio-alpha within the layer. Figure 3 shows the results of the search for the values of geo-alpha that resulted in the best fit between calculations and measurements, 0.1 dB/m at 1.1 kHz and 0.5 dB/m at 1.9 kHz. Also shown are results of chirp sonar estimates of geo-alpha at 3.2 kHz and quadratic fit to the data.

Figure 3. Attenuation coefficient in sediments derived from concurrent inversion of bio and geo parameters, geo only, chirp sonar, and quadratic fit to data.

If we had assumed that bio-alpha was zero, then the inverted value of geo-alpha would have been 1.2 dB/m at 1.1 kHz, which is about ten times greater than the properly derived estimate, and 0.9 dB/m at 1.9 kHz.

These measurements were made at a biological hot spot, which was identified through an echo sounder survey. None of the previously reported experiments, which were designed to permit inversion of geo-acoustic parameters from sound propagation measurements, included echo sounder measurements of fish depth or trawls. Consequently, some of these measurements may have been conducted at sites where the concentration of swim bladder bearing fish may have been significant, and inverted values of geo-acoustic parameters may have been biased by neglect of bio-alpha.

Acknowledgement: This research was supported by the Office of Naval Research Ocean Acoustics Program.

References

Diachok, O. and S. Wales (2005), “Concurrent inversion of bio and geo-acoustic parameters from transmission loss measurements in the Yellow Sea”, J. Acoust. Soc. Am., 117, 1965-1976.

Diachok, O. and G. Wadsworth (2014), “Concurrent inversion of bio and geo-acoustic parameters from broadband transmission loss measurements in the Santa Barbara Channel”, J. Acoust. Soc. Am., 135, 2175.

Turgut, A., M. McCord, J. Newcomb and R. Fisher (2002) “Chirp sonar sediment characterization at the northern Gulf of Mexico Littoral Acoustic Demonstration Center experimental site”, Proceedings, Oce

3pIDa1 – Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles

Lora J. Van Uffelen, Ph.D – loravu@uri.edu
University of Rhode Island
Department of Ocean Engineering &
Graduate School of Oceanography
45 Upper College Rd
Kingston, RI 02881

Popular version of paper 3pIDa1, “Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles”
Presented Wednesday, December 06, 2017, 1:05-1:25 PM, Salon E
174th ASA meeting, New Orleans

What do you think of when you think of a drone?  A quadcopter that your neighbor flies too close to your yard?  A weaponized military system?  A selfie drone?  The word drone typically refers to an unmanned aerial vehicle (UAV), but it is now also used to refer to an unmanned underwater vehicle (UUV).  Aerial drones are typically outfitted with cameras, but cameras are not always the best way to “see” underwater.  Hydronephones are underwater vehicles, or underwater drones, equipped with hydrophones (underwater microphones), which receive and record sound underwater.  Sound is one of the best tools for sensing or “seeing” the underwater environment.

Sound travels 4-5 times faster in the ocean than it does in air. The speed of sound depends on ocean temperature, salinity, and pressure. Sound can also travel far – hundreds of miles under the right conditions! – which makes sound an excellent tool for things like underwater communication, navigation, and even measuring oceanographic properties like temperature and currents.
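The dependence of sound speed on temperature, salinity, and depth can be illustrated with Medwin's simplified empirical formula, one of several fits in common use (valid roughly for 0–35 °C, 0–45 psu, and depths up to about 1000 m):

```python
def sound_speed_seawater(T, S, z):
    """Medwin's (1975) simplified formula for sound speed in seawater (m/s).
    T: temperature in deg C, S: salinity in psu, z: depth in m."""
    return (1449.2 + 4.6*T - 0.055*T**2 + 0.00029*T**3
            + (1.34 - 0.010*T)*(S - 35.0) + 0.016*z)
```

At 10 °C and typical open-ocean salinity this gives roughly 1490 m/s, which is indeed about 4.3 times the ~343 m/s speed of sound in air.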

Here, the term hydronephone is used specifically to refer to an ocean glider, a subclass of UUV, used as an acoustic receiver [Figure 1].  Gliders are autonomous underwater vehicles (AUVs) because they do not require constant piloting.  A pilot can only communicate with a glider when it is at the sea surface; while it is underwater it travels autonomously.  Gliders do not have propellers; instead, they move through the water by controlling their buoyancy and using hydrofoil wings to “glide.”  Key advantages of these vehicles are that they are relatively quiet, their low power consumption allows long deployments, they can operate in harsh environments, and they are much more cost-effective than traditional ship-based observational methods.


Figure 1: Seaglider hydronephones (SG196 and SG198) on the deck of the USCGC Healy prior to deployment in the Arctic Ocean north of Alaska in August 2016.

Two hydronephones were deployed August-September of 2016 and 2017 in the Arctic Ocean.  They recorded sound signals at ranges up to 480 kilometers (about 300 miles) from six underwater acoustic sources that were placed in the Arctic Ocean north of Alaska as part of a large-scale ocean acoustics experiment funded by the Office of Naval Research [Figure 2].  This acoustic system was designed to study how sound travels in the Arctic Ocean, where temperatures and ice conditions are changing.  The hydronephones were a mobile addition to this stationary system, allowing for measurements at many different locations.

Figure 2: Map of Seaglider SG196 and SG198 tracks in the Arctic Ocean in August/September of 2016 and 2017. Locations of stationary sound sources are shown as yellow pins.

One of the challenges of using gliders is figuring out exactly where they are when they are underwater.  When the gliders are at the surface, they can get their position in latitude and longitude using Global Positioning System (GPS) satellites, in a similar way to how a handheld GPS or a cellphone gets position.  Gliders only have access to GPS when they come to the ocean surface because GPS signals are electromagnetic waves, which do not travel far underwater.  The gliders only come to the surface a few times a day and can travel several miles between surfacings, so a different method is needed to determine where they are while they are deep underwater. In the Arctic experiment, the hydronephones’ recordings of the acoustic transmissions from the six sources could be used to position the vehicles underwater using sound, in a way that is analogous to how GPS uses electromagnetic signals for positioning.
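The acoustic-positioning idea can be sketched as a small least-squares problem, directly analogous to GPS trilateration. Everything here (the source layout, the Gauss-Newton solver, the assumption that travel times have already been converted to ranges using the sound speed) is an illustrative toy, not the positioning method used in the experiment:

```python
import numpy as np

def locate(sources, ranges, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton least-squares position fix (2-D, in km) from ranges to
    known source positions. Ranges are assumed to come from acoustic travel
    times multiplied by the sound speed."""
    x = np.array(guess, float)
    S = np.asarray(sources, float)
    r = np.asarray(ranges, float)
    for _ in range(iters):
        d = np.linalg.norm(S - x, axis=1)      # predicted ranges from x
        J = (x - S) / d[:, None]               # d(range)/d(position)
        dx, *_ = np.linalg.lstsq(J, r - d, rcond=None)   # linearized step
        x = x + dx
    return x

# Three hypothetical sources and noise-free ranges to a true position:
sources = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true = np.array([10.0, 20.0])
ranges = [np.linalg.norm(true - np.array(s)) for s in sources]
fix = locate(sources, ranges, guess=(50.0, 50.0))
```

With noise-free ranges and a well-spread source geometry the iteration converges to the true position; real positioning must also handle clock offsets and travel-time errors.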

Improvements in underwater positioning will make hydronephones an even more valuable tool for ocean acoustics and oceanography.  As vehicle and battery technology improves and as data storage continues to become smaller and cheaper, hydronephones will also be able to record for longer periods of time allowing more extensive exploration of the underwater world.

Acknowledgments:  Many investigators contributed to this experiment including Sarah Webster, Craig Lee, and Jason Gobat from the University of Washington, Peter Worcester and Matthew Dzieciuch from Scripps Institution of Oceanography, and Lee Freitag from the Woods Hole Oceanographic Institution. This project was funded by the Office of Naval Research.

3pAB4 – Automatic classification of fish sounds for environmental purposes

Marielle MALFANTE – marielle.malfante@gipsa-lab.grenoble-inp.fr
Jérôme MARS – jerome.mars@gipsa-lab.grenoble-inp.fr
Mauro DALLA MURA – mauro.dalla-mura@gipsa-lab.grenoble-inp.fr
Cédric GERVAISE – cedric.gervaise@gipsa-lab.grenoble-inp.fr

GIPSA-Lab
Université Grenoble Alpes (UGA)
11 rue des Mathématiques
38402 Saint Martin d’Hères (GRENOBLE)
FRANCE

Popular version of paper 3pAB4 “Automatic fish sounds classification”
Presented Wednesday afternoon, May 25, 2016, 2:15 in Salon I
171st ASA Meeting, Salt Lake City

In the current context of global warming and environmental concern, we need tools to evaluate and monitor the evolution of our environment. The evolution of animal populations is of special concern, both to detect changes of behaviour under environmental stress and to preserve biodiversity. Monitoring animal populations, however, can be a complex and costly task. Experts can either (1) monitor animal populations directly in the field, or (2) use sensors to gather data in the field (audio or video recordings, trackers, etc.) and then process those data to retrieve knowledge about the animal population. In both cases the issue is the same: experts are needed and can only process a limited quantity of data.

An alternative idea is to keep using the field sensors but to build software tools that process the data automatically, thereby making it possible to monitor animal populations over larger geographic areas and for longer time periods.

The work we present is about automatically monitoring fish populations using audio recordings. Sound propagates well underwater: by recording sounds under the sea, we can gather a wealth of information about the environment and the animal species it shelters. Here is an example of such a recording:

Legend: Raw recording of fish sounds, August 2014, Corsica, France.

Regarding fish populations, we distinguish four types of sounds that we call (1) Impulsions, (2) Roars, (3) Drums and (4) Quacks. We can hear them in the previous recording, but here are some extracts with isolated examples:

Legend: Filtered recording of fish sounds to hear Roar between 5s and 13s and Drums between 22s to 29s and 42s to 49s.

Legend: Filtered recording of fish sounds to hear Quacks and Impulsions. Both sounds are quite short (<0.5s) and are heard all along the recording.

However, making a computer automatically classify a fish sound into one of those four groups is a very complex task. A task that is simple or intuitive for humans is often extremely complex for a computer, and vice versa. This is because humans and computers process information in different ways. For instance, a computer is very successful at solving complex calculations and at performing repetitive tasks, but it is very difficult to make a computer recognize a car in a picture. Humans, however, tend to struggle with complex calculations but can very easily recognise objects in images. How do you explain to a computer that ‘this is a car’? It has four wheels. But then, how do you know this is a wheel? Well, it has a circular shape. Oh, so this ball is a wheel, isn’t it?

This easy task for a human is very complex for a machine. Scientists found a solution to make computers understand what we call ‘high-level concepts’ (recognising objects in pictures, understanding speech, etc.): they designed machine learning algorithms. The idea is to give a computer many examples of each concept we want to teach it. For instance, to make a computer recognise a car in a picture, we feed it with many pictures of cars so that it can learn what a car is, and with many pictures without cars so that it can learn what a car is not. Many companies such as Facebook, Google, and Apple use these algorithms for face recognition, speech understanding, individualised advertisement, etc. It works very well.

In our work, we use the same techniques to teach a computer to recognise and automatically classify fish sounds. Once those sounds have been classified, we can study their evolution and see if fish populations behave differently from place to place, or if their behaviour evolves with time. It is also possible to study their density and see if their numbers vary through time.

This work is of particular interest since, to our knowledge, we present the first tool to automatically classify fish sounds. One of the main challenges is to make a sound understandable by a computer, that is, to find and extract relevant information from the acoustic signal. By doing that, it becomes easier for the computer to understand similarities and differences between signals and, at the end of the day, to predict to which group a sound belongs.
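A minimal sketch of this pipeline, using synthetic tones in place of real fish recordings and a nearest-centroid rule in place of the richer classifiers used in practice (all names, frequencies, and the 8 kHz rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2048) / 8000.0            # 2048 samples at an assumed 8 kHz rate

def features(signal):
    """Tiny illustrative feature vector: log energy in four coarse frequency
    bands of the power spectrum. Real systems use far richer descriptors."""
    spec = np.abs(np.fft.rfft(signal))**2
    return np.log([band.sum() + 1e-12 for band in np.array_split(spec, 4)])

def make(freq):
    """Synthetic stand-in for a recorded call: a noisy tone at `freq` Hz."""
    return np.sin(2*np.pi*freq*t) + 0.1*rng.standard_normal(t.size)

# Labelled training examples: low-pitched "Drums" and higher-pitched "Quacks"
train = {"Drum": [features(make(300)) for _ in range(10)],
         "Quack": [features(make(2600)) for _ in range(10)]}
centroids = {k: np.mean(v, axis=0) for k, v in train.items()}

def classify(signal):
    """Nearest-centroid rule: pick the class whose average feature vector
    is closest to the new sound's features."""
    f = features(signal)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))
```

The two steps mirror the text: first extract relevant information from the acoustic signal (the feature vector), then let the machine learn, from labelled examples, which feature patterns belong to which call type.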

Legend: How to build an automatic fish sounds classifier? Illustration.

3aUW8 – A view askew: Bottlenose dolphins improve echolocation precision by aiming their sonar beam to graze the target

Laura N. Kloepper– lkloepper@saintmarys.edu
Saint Mary’s College
Notre Dame, IN 46556

Yang Liu–yang.liu@umassd.edu
John R. Buck– jbuck@umassd.edu
University of Massachusetts Dartmouth
285 Old Westport Road
Dartmouth, MA 02747

Paul E. Nachtigall–nachtiga@hawaii.edu
University of Hawaii at Manoa
PO Box 1346
Kaneohe, HI 96744

Popular version of paper 3aUW8, “Bottlenose dolphins direct sonar clicks off-axis of targets to maximize Fisher Information about target bearing”
Presented Wednesday morning, November 4, 2015, 10:25 AM in River Terrace 2
170th ASA Meeting, Jacksonville

Bottlenose dolphins are incredible echolocators. Using just sound, they can detect a ping-pong ball sized object from 100 m away, and discriminate between objects differing in thickness by less than 1 mm. Based on what we know about man-made sonar, however, the dolphins’ sonar abilities are an enigma–simply put, they shouldn’t be as good at echolocation as they actually are.

Typical manmade sonar devices achieve high levels of performance by using very narrow sonar beams. Creating narrow beams requires large and costly equipment. In contrast to these manmade sonars, bottlenose dolphins achieve the same levels of performance with a sonar beam that is many times wider–but how? Understanding their “sonar secret” can help lead to more sophisticated synthetic sonar devices.

Bottlenose dolphins’ echolocation signals contain a wide range of frequencies.  The higher frequencies propagate away from the dolphin in a narrower beam than the low frequencies do. This means the emitted sonar beam of the dolphin is frequency-dependent.  Objects directly in front of the animal echo back all of the frequencies.   However, as we move out of the direct line in front of the animal, there is less and less high frequency, and when the target is way off to the side, only the lower frequencies reach the target to bounce back.   As shown below in Fig. 1, an object 30 degrees off the sonar beam axis has lost most of the frequencies.


Figure 1. Beam pattern and normalized amplitude as a function of signal frequency and bearing angle. At 0 degrees, or on-axis, the beam contains an equal representation across all frequencies. As the bearing angle deviates from 0, however, the higher frequency components fall off rapidly.

Consider an analogy to light shining through a prism.  White light entering the prism contains every frequency, but the light leaving the prism at different angles contains different colors.  If we moved a mirror to different angles along the light beam, it would change the color reflected as it moved through different regions of the transmitted beam.  If we were very good, we could locate the mirror precisely in angle based on the color reflected.  If the color changes more rapidly with angle in one region of the beam, we would be most sensitive to small changes in position at that angle, since small changes in position would create large changes in color.  In mathematical terms, this region of maximum change would have the largest gradient of frequency content with respect to angle.  The dolphin sonar appears to be exploiting a similar principle, only the different colors are different frequencies or pitch in the sound.

Prior studies on bottlenose dolphins assumed the animal pointed its beam directly at the target, but this assumption resulted in the conclusion that the animals shouldn’t be as “good” at echolocation as they actually are. What if, instead, they use a different strategy? We hypothesized that the dolphin might be aiming its sonar so that the main axis of the beam passes next to the target, which places the region of maximum gradient on the target. Our model predicts that placing the region of the beam most sensitive to change on the target will give the dolphin the greatest precision in locating the object.

To test our hypothesis, we trained a bottlenose dolphin to detect the presence or absence of an aluminum cylinder while we recorded the echolocation signals with a 16-element hydrophone array (Fig.2).


Figure 2: Experimental setup. The dolphin detected the presence or absence of cylinders at different distances while we recorded sonar beam aim with a hydrophone array.

We then measured where the dolphin directed its sonar beam in relation to the target and found the dolphin pointed its sonar beam 7.05 ± 2.88 degrees (n=1930) away from the target (Fig.3).


Figure 3: Optimality in directing beam away from axis. The numbers on the emitted beam represent the attenuation in decibels relative to the sound emitted from the dolphin. The high frequency beam (red) is narrower than the blue and attenuates at angle more rapidly. The dolphin directs its sonar beam 7 degrees away from the target.

To then determine if certain regions of the sonar beam provide more theoretical “information” to the dolphin, which would improve its echolocation, we applied information theory to the dolphin sonar beam. Using the weighted frequencies present in the signal, we calculated the Fisher Information for the emitted beam of a bottlenose dolphin. From our calculations we determined 95% of the maximum Fisher Information to be between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees (Fig. 4).


Figure 4: The calculated Fisher Information as a function of bearing angle. The peak of the information is between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees.
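A toy version of this calculation, using an assumed Gaussian beam whose width narrows with frequency (the actual analysis used the measured dolphin beam pattern, so the band, widths, and noise level below are all illustrative assumptions), reproduces the qualitative result that the information peak sits off-axis:

```python
import numpy as np

# Toy frequency-dependent beam: each frequency has a Gaussian beam whose
# angular width narrows as 1/f (higher frequencies -> narrower beams).
freqs = np.linspace(30e3, 120e3, 10)        # Hz, assumed echolocation band
widths_deg = 25.0 * (freqs[0] / freqs)      # assumed width scaling

def beam(theta_deg):
    """Amplitude of each frequency component at off-axis angle theta."""
    return np.exp(-(theta_deg / widths_deg)**2)

def fisher_info(theta_deg, sigma=0.01):
    """Fisher information about bearing for Gaussian-noise observations of
    the per-frequency amplitudes: sum of squared angle-derivatives over the
    noise power. Largest where the spectrum changes fastest with angle."""
    h = 1e-4
    dbd = (beam(theta_deg + h) - beam(theta_deg - h)) / (2.0 * h)
    return np.sum(dbd**2) / sigma**2

angles = np.linspace(0.0, 20.0, 401)
fi = np.array([fisher_info(a) for a in angles])
best = angles[np.argmax(fi)]                # most informative aim-off angle
```

On the beam axis every component's derivative is zero, so the Fisher information vanishes there; it peaks a few degrees off-axis, just as the dolphin's measured aiming strategy suggests.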

The result? The dolphin is using a strategy that is mathematically optimal! By directing its sonar beam slightly askew of the target (such as a fish), the target is placed in the highest frequency gradient of the beam, allowing the dolphin to locate the target more precisely.

Monitoring deep ocean temperatures using low-frequency ambient noise

Katherine Woolfe, Karim G. Sabra
School of Mechanical Engineering, Georgia Institute of Technology
Atlanta, GA 30332-0405

In order to precisely quantify the ocean’s heat capacity and influence on climate change, it is important to accurately monitor ocean temperature variations, especially in the deep ocean (i.e. at depths ~1000m) which cannot be easily surveyed by satellite measurements. To date, deep ocean temperatures are most commonly measured using autonomous sensing floats (e.g. Argo floats). However, this approach is limited because, due to costs and logistics, the existing global network of floats cannot sample the entire ocean at the lower depths. On the other hand, acoustic thermometry (using the travel time of underwater sound to infer the temperature of the water the sound travels through) has already been demonstrated as one of the most precise methods for measuring ocean temperature and heat capacity over large distances (Munk et al., 1995; Dushaw et al., 2009; The ATOC Consortium, 1998). However, current implementations of acoustic thermometry require the use of active, man-made sound sources. Aside from the logistical issues of deploying such sources, there is also the ongoing issue of negative effects on marine animals such as whales.

An emerging alternative to measurements with active acoustic sources is the use of ambient noise correlation processing, which uses the background noise in an environment to extract useful information about that environment. For instance, ambient noise correlation processing has successfully been used to monitor seismically-active earth systems such as fault zones (Brenguier et al., 2008) and volcanic areas (Brenguier et al., 2014). In the context of ocean acoustics (Roux et al., 2004; Godin et al., 2010; Fried et al., 2013), previous studies have demonstrated that the noise correlation method requires excessively long averaging times to reliably extract most of the acoustic travel-paths that were used by previous active acoustic thermometry studies (Munk et al., 1995). Consequently, since this averaging time is typically too long compared to the timescale of ocean fluctuations (i.e., tides, surface waves, etc.), this would prevent the application of passive acoustic thermometry using most of these travel paths (Roux et al., 2004; Godin et al., 2010; Fried et al., 2013). However, for deep ocean propagation, there is an unusually stable acoustic travel path, where sound propagates nearly horizontally along the Sound Fixing and Ranging (SOFAR) channel. The SOFAR channel is centered on the minimum value of the sound speed over the ocean depth (located at ~1000 m depth near the equator) and thus acts as a natural pathway for sound to travel very large distances with little attenuation (Ewing and Worzel, 1948).

In this research, we have demonstrated the feasibility of a passive acoustic thermometry method for use in the deep ocean, using only recordings of low-frequency (f~10 Hz) ambient noise propagating along the SOFAR channel. This study used continuous recordings of ocean noise from two existing hydroacoustic stations of the International Monitoring System, operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, located respectively next to Ascension and Wake Islands (see Fig. 1(a)). Each hydroacoustic station is composed of two triangular-shaped horizontal hydrophone arrays (Fig. 1(b)), separated by L~130 km, which are referred to hereafter as the north and south triads. The sides of each triad are ~2 km long and the three hydrophones are located within the SOFAR channel at depth ~1000 m. From year to year, the acoustic waves that propagate between hydrophone pairs along the SOFAR channel build up from distant noise sources whose paths intersect the hydrophone pairs. In the low-frequency band used here (1-40 Hz), with most of the energy of the arrivals centered around 10 Hz, these arrivals are known to mainly originate from ice-breaking noise in the Polar regions (Chapp et al., 2005; Matsumoto et al., 2014; Gavrilov and Li, 2009; Prior et al., 2011). The angular beams shown in Fig. 1a illustrate a simple estimate of the geographical area from which ice-generated ambient noise is likely to emanate for each site (Woolfe et al., 2015).


FIG. 1. (a) Locations of the two hydroacoustic stations (red dots) near Ascension and Wake Islands. (b) Zoomed-in schematic of the hydrophone array configurations for the Ascension and Wake Island sites. Each hydroacoustic station consists of a northern and southern triangle array of three hydrophones (or triad), with each triangle side having a length ~ 2 km. The distance L between triad centers is equal to 126 km and 132 km for the Ascension Island and Wake Island hydroacoustic stations, respectively.
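The heart of the noise-correlation method can be sketched with synthetic data: a common noise field reaches one sensor first and the other after the inter-sensor travel time, and the peak of the averaged cross-correlation recovers that travel time without any active source. The sampling rate, delay, and noise levels below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                 # Hz, assumed sampling rate
n = 4000                   # 40 s of data
delay = 250                # true inter-sensor travel time: 250 samples = 2.5 s

# A common noise field reaches sensor A first and sensor B `delay` samples
# later; each record also contains local, incoherent noise.
common = rng.standard_normal(n + delay)
a = common[delay:] + 0.5 * rng.standard_normal(n)
b = common[:n] + 0.5 * rng.standard_normal(n)

# The peak lag of the cross-correlation recovers the travel time between
# the sensors -- no active source required.
xc = np.correlate(b, a, mode="full")
lag = int(np.argmax(xc)) - (n - 1)
travel_time = lag / fs     # should recover ~2.5 s
```

In the real measurement the averaging runs over a week of recordings rather than a single record, and the coherent arrival builds up only from noise sources whose propagation paths line up with the hydrophone pair.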

Acoustic thermometry estimates ocean temperature fluctuations averaged over the entire acoustic travel path (in this case, the entire depth and length of the SOFAR channel between the north and south hydrophone triads) by leveraging the nearly linear dependence of sound speed in water on temperature (Munk et al., 1995). Here the SOFAR channel extends approximately from 390 m to 1350 m depth at the Ascension Island site and from 460 m to 1600 m depth at the Wake Island site, as determined from the local sound speed profiles and the center frequency (~10 Hz) of the SOFAR arrivals. We use passive acoustic thermometry to monitor the small variations in the travel time of the SOFAR arrivals over several years (8 years at Ascension Island and 5 years at Wake Island). To do so, coherent arrivals are extracted by averaging cross-correlations of ambient noise recordings over one week at the Wake and Ascension Island sites. The small fluctuations in acoustic travel time are then converted to deep-ocean temperature fluctuations using the linear relationship between changes in sound speed and changes in water temperature (Woolfe et al., 2015).

These calculated temperature fluctuations are shown in Fig. 2 and are consistent with Argo float measurements. At the Wake Island site, where data were collected over only 5 years, the Argo and thermometry data are 54% correlated. Both data sets indicate a very small upward (i.e., warming) trend: 0.003 ± 0.001 °C/year for the Argo data and 0.007 ± 0.002 °C/year for the thermometry data (95% confidence intervals; Fig. 2(a)). For the Ascension Island site, on the other hand, the SOFAR channel temperature variations measured over the longer duration of eight years by passive thermometry and by Argo floats are significantly correlated, with a correlation coefficient of 0.8. Furthermore, Fig. 2(b) indicates a warming of the SOFAR channel in the Ascension area, as inferred from the similar upward trends of the passive thermometry (0.013 ± 0.001 °C/year, 95% confidence interval) and Argo (0.013 ± 0.004 °C/year, 95% confidence interval) temperature estimates. Hence, our approach provides a simple and fully passive means of measuring deep-ocean temperature variations, which could ultimately improve our understanding of the role of the oceans in climate change.
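To give a feel for the numbers involved, the travel-time-to-temperature conversion can be sketched in a few lines of code. This is an illustration only, not the study's actual processing: the nominal sound speed and the temperature sensitivity of sound speed below are assumed round values, and only the 126 km triad separation comes from Fig. 1.

```python
# Illustrative sketch of converting a small change in acoustic travel time
# into a path-averaged temperature change. All constants are assumptions
# for illustration, except the 126 km Ascension triad separation (Fig. 1).

C = 1480.0    # nominal SOFAR-channel sound speed (m/s), assumed
DC_DT = 4.0   # sensitivity of sound speed to temperature (m/s per deg C), assumed
L = 126e3     # distance between triad centers at Ascension Island (m)

def travel_time(path_m, speed_ms):
    """Nominal acoustic travel time along the path, t = L / c."""
    return path_m / speed_ms

def delta_T_from_delta_t(dt_s, path_m=L, c=C, dcdt=DC_DT):
    """Convert a small travel-time change dt_s (seconds) into a
    path-averaged temperature change (deg C).

    Since t = L/c, a small change obeys dt/t = -dc/c = -(dcdt/c) * dT,
    so dT = -(c/dcdt) * (dt/t): arrivals come EARLIER when the water warms.
    """
    t0 = travel_time(path_m, c)
    return -(c / dcdt) * (dt_s / t0)

# Example: an arrival 10 ms earlier over the ~85 s propagation time
# corresponds to a warming of a few hundredths of a degree.
dT = delta_T_from_delta_t(-0.010)
```

With these assumed constants, a 10 ms advance in arrival time maps to roughly 0.04 °C of warming, which illustrates why the method can resolve the hundredths-of-a-degree trends quoted above.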


FIG. 2. (a) Comparison of the deep ocean temperature variations at the Wake Island site estimated from passive thermometry (blue line) with Argo float measurements (grey dots), along with corresponding error bars (Woolfe et al., 2015). (b) Same as (a), but for the Ascension Island site. Each ΔT data series is normalized so that a linear fit to the data has a y-intercept of zero.
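The warming trends quoted above are the slopes of linear fits to the ΔT time series. A minimal sketch with synthetic data (the series below is randomly generated for illustration, not the measured data) shows how such a trend, and the zero-intercept normalization used in Fig. 2, can be computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly temperature-anomaly series (deg C) over 8 years with a
# small warming trend; the trend value matches the Ascension estimate, but
# the series itself is random illustrative data.
years = np.arange(0.0, 8.0, 7 / 365.25)          # one sample per week
true_trend = 0.013                               # deg C per year
dT = true_trend * years + 0.01 * rng.standard_normal(years.size)

# Least-squares linear fit: the slope is the warming trend in deg C/year.
slope, intercept = np.polyfit(years, dT, 1)

# Normalize the series so the fitted line passes through zero at t = 0,
# as done for each ΔT data series in Fig. 2.
dT_normalized = dT - intercept
```

In practice the study also attaches 95% confidence intervals to these slopes; here the fit alone is shown for simplicity.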

REFERENCES:
The ATOC Consortium (1998). “Ocean Climate Change: Comparison of Acoustic Tomography, Satellite Altimetry, and Modeling”, Science, 281, 1327-1332.
Brenguier, F., Campillo, M., Takeda, T., Aoki, Y., Shapiro, N.M., Briand, X., Emoto, K., and Miyake, H. (2014). “Mapping Pressurized Volcanic Fluids from Induced Crustal Seismic Velocity Drops”, Science, 345, 80-82.
Brenguier, F., Campillo, M., Hadziioannou, C., Shapiro, N.M., Nadeau, R.M., and Larose, E. (2008). “Postseismic Relaxation Along the San Andreas Fault at Parkfield from Continuous Seismological Observations”, Science, 321, 1478-1481.
Chapp, E., Bohnenstiehl, D., and Tolstoy, M. (2005). “Sound-channel observations of ice-generated tremor in the Indian Ocean”, Geochem. Geophys. Geosyst., 6, Q06003.
Dushaw, D., Worcester, P., Munk, W., Spindel, R., Mercer, J., Howe, B., Metzger, K., Birdsall, T., Andrew, R., Dzieciuch, M., Cornuelle, B., and Menemenlis, D. (2009). “A decade of acoustic thermometry in the North Pacific Ocean”, J. Geophys. Res., 114, C07021.
Ewing, M., and Worzel, J.L. (1948). “Long-Range Sound Transmission”, GSA Memoirs, 27, 1-32.
Fried, S., Walker, S.C., Hodgkiss, W.S., and Kuperman, W.A. (2013). “Measuring the effect of ambient noise directionality and split-beam processing on the convergence of the cross-correlation function”, J. Acoust. Soc. Am., 134, 1824-1832.
Gavrilov, A., and Li, B. (2009). “Correlation between ocean noise and changes in the environmental conditions in Antarctica”, Proceedings of the 3rd International Conference and Exhibition on Underwater Acoustic Measurements: Technologies and Results, Nafplion, Greece, 1199.
Godin, O., Zabotin, N., and Goncharov, V. (2010). “Ocean tomography with acoustic daylight”, Geophys. Res. Lett., 37, L13605.
Matsumoto, H., Bohnenstiehl, D., Tournadre, J., Dziak, R., Haxel, J., Lau, T.K., Fowler, M., and Salo, S. (2014). “Antarctic icebergs: A significant natural ocean sound source in the Southern Hemisphere”, Geochem. Geophys. Geosyst., 15, 3448-3458.
Munk, W., Worcester, P., and Wunsch, C. (1995). Ocean Acoustic Tomography, Cambridge University Press, Cambridge, 1-28, 197-202.
Prior, M., Brown, D., and Haralabus, G. (2011). “Data features from long-term monitoring of ocean noise”, Proceedings of the 4th International Conference and Exhibition on Underwater Acoustic Measurements, p. L.26.1, Kos, Greece.
Roux, P., Kuperman, W., and the NPAL Group (2004). “Extracting coherent wave fronts from acoustic ambient noise in the ocean”, J. Acoust. Soc. Am., 116, 1995-2003.
Woolfe, K.F., Lani, S., Sabra, K.G., and Kuperman, W.S. (2015). “Monitoring deep ocean temperatures using acoustic ambient noise”, Geophys. Res. Lett., DOI: 10.1002/2015GL063438.