3pIDa1 – Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles

Lora J. Van Uffelen, Ph.D – loravu@uri.edu
University of Rhode Island
Department of Ocean Engineering &
Graduate School of Oceanography
45 Upper College Rd
Kingston, RI 02881

Popular version of paper 3pIDa1, “Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles”
Presented Wednesday, December 06, 2017, 1:05-1:25 PM, Salon E
174th ASA meeting, New Orleans

What do you think of when you think of a drone?  A quadcopter that your neighbor flies too close to your yard?  A weaponized military system?  A selfie drone?  The word drone typically refers to an unmanned aerial vehicle (UAV), but it is now also used to refer to an unmanned underwater vehicle (UUV).  Aerial drones are typically outfitted with cameras, but cameras are not always the best way to “see” underwater.  Hydronephones are underwater vehicles, or underwater drones, equipped with hydrophones, or underwater microphones, which receive and record sound underwater.  Sound is one of the best tools for sensing, or “seeing,” the underwater environment.

Sound travels 4-5 times faster in the ocean than it does in air. The speed of sound depends on ocean temperature, salinity, and pressure. Sound can also travel far – hundreds of miles under the right conditions! – which makes sound an excellent tool for things like underwater communication, navigation, and even measuring oceanographic properties like temperature and currents.
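As a rough illustration of these dependencies, Medwin's simplified formula gives the speed of sound in seawater from temperature, salinity, and depth (a textbook approximation, not the exact equation used in any particular study):

```python
# Medwin's simplified formula for the speed of sound in seawater,
# a textbook approximation valid for typical ocean conditions.
def sound_speed(T, S, z):
    """T: temperature (deg C), S: salinity (ppt), z: depth (m) -> m/s."""
    return (1449.2 + 4.6*T - 0.055*T**2 + 0.00029*T**3
            + (1.34 - 0.010*T)*(S - 35) + 0.016*z)

# Warm surface water vs. cold deep water (sound in air travels ~343 m/s,
# so both of these are indeed 4-5 times faster):
print(sound_speed(20.0, 35.0, 0))     # ~1522 m/s
print(sound_speed(4.0, 35.0, 1000))   # ~1483 m/s
```

Note how warming, saltier water, and greater depth each push the speed up, which is exactly why travel times carry information about ocean temperature.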

Here, the term hydronephone is used specifically to refer to an ocean glider, a subclass of UUV, used as an acoustic receiver [Figure 1].  Gliders are autonomous underwater vehicles (AUVs) because they do not require constant piloting.  A pilot can only communicate with a glider when it is at the sea surface; while it is underwater it travels autonomously.  Gliders do not have propellers; instead, they move by controlling their buoyancy and using hydrofoil wings to “glide” through the water. Key advantages of these vehicles are that they are relatively quiet, their low power consumption allows deployments of long duration, they can operate in harsh environments, and they are much more cost-effective than traditional ship-based observational methods.


Figure 1: Seaglider hydronephones (SG196 and SG198) on the deck of the USCGC Healy prior to deployment in the Arctic Ocean north of Alaska in August 2016.

Two hydronephones were deployed in August-September of 2016 and 2017 in the Arctic Ocean.  They recorded sound signals at ranges up to 480 kilometers (about 300 miles) from six underwater acoustic sources that were placed in the Arctic Ocean north of Alaska as part of a large-scale ocean acoustics experiment funded by the Office of Naval Research [Figure 2].  This acoustic system was designed to study how sound travels in the Arctic Ocean, where temperatures and ice conditions are changing.  The hydronephones were a mobile addition to this stationary system, allowing for measurements at many different locations.

Figure 2: Map of Seaglider SG196 and SG198 tracks in the Arctic Ocean in August/September of 2016 and 2017. Locations of stationary sound sources are shown as yellow pins.

One of the challenges of using gliders is figuring out exactly where they are when they are underwater.  When gliders are at the surface, they can get their position in latitude and longitude using Global Positioning System (GPS) satellites, similar to the way a handheld GPS or a cellphone gets its position.  Gliders only have access to GPS at the ocean surface because GPS signals are electromagnetic waves, which do not travel far underwater.  The gliders only come to the surface a few times a day and can travel several miles between surfacings, so a different method is needed to determine where they are while they are deep underwater. In the Arctic experiment, the hydronephones' recordings of the acoustic transmissions from the six sources could be used to position them underwater using sound, in a way that is analogous to the way GPS uses electromagnetic signals for positioning.
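This acoustic positioning can be sketched as a small least-squares problem: given the ranges implied by travel times from sources at known locations (here a hypothetical 2-D layout, with the sound-speed conversion already folded into the ranges), solve for the receiver position. A minimal sketch:

```python
import numpy as np

# Hypothetical 2-D layout: three sources at known positions (km) and the
# ranges implied by measured travel times (range = sound speed x time).
sources = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0]])
true_pos = np.array([12.0, 25.0])                 # used only to generate data
ranges = np.linalg.norm(sources - true_pos, axis=1)

# Gauss-Newton iteration: repeatedly correct a position guess so the
# predicted ranges match the measured ones in the least-squares sense.
est = np.array([20.0, 20.0])
for _ in range(20):
    diffs = est - sources
    dists = np.linalg.norm(diffs, axis=1)
    jacobian = diffs / dists[:, None]             # d(range)/d(position)
    update = np.linalg.lstsq(jacobian, dists - ranges, rcond=None)[0]
    est = est - update

print(np.round(est, 3))  # recovers the true position (12, 25)
```

With real data the ranges are noisy and the sound speed varies along each path, but the principle is the same one GPS uses with electromagnetic signals.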

Improvements in underwater positioning will make hydronephones an even more valuable tool for ocean acoustics and oceanography.  As vehicle and battery technology improves and as data storage continues to become smaller and cheaper, hydronephones will also be able to record for longer periods of time allowing more extensive exploration of the underwater world.

Acknowledgments:  Many investigators contributed to this experiment including Sarah Webster, Craig Lee, and Jason Gobat from the University of Washington, Peter Worcester and Matthew Dzieciuch from Scripps Institution of Oceanography, and Lee Freitag from the Woods Hole Oceanographic Institution. This project was funded by the Office of Naval Research.

3pAB4 – Automatic classification of fish sounds for environmental purposes

Marielle MALFANTE – marielle.malfante@gipsa-lab.grenoble-inp.fr
Jérôme MARS – jerome.mars@gipsa-lab.grenoble-inp.fr
Mauro DALLA MURA – mauro.dalla-mura@gipsa-lab.grenoble-inp.fr
Cédric GERVAISE – cedric.gervaise@gipsa-lab.grenoble-inp.fr

GIPSA-Lab
Université Grenoble Alpes (UGA)
11 rue des Mathématiques
38402 Saint Martin d’Hères (GRENOBLE)
FRANCE

Popular version of paper 3pAB4 “Automatic fish sounds classification”
Presented Wednesday afternoon, May 25, 2016, 2:15 in Salon I
171st ASA Meeting, Salt Lake City

In the current context of global warming and environmental concern, we need tools to evaluate and monitor the evolution of our environment. The evolution of animal populations is of special concern, both to detect changes of behaviour under environmental stress and to preserve biodiversity. Monitoring animal populations, however, can be a complex and costly task. Experts can either (1) monitor animal populations directly in the field, or (2) use sensors to gather data in the field (audio or video recordings, trackers, etc.) and then process those data to retrieve knowledge about the animal population. In both cases the issue is the same: experts are needed, and they can only process a limited quantity of data.

An alternative is to keep using the field sensors but to build software tools that automatically process the data, thereby allowing animal populations to be monitored over larger geographic areas and for longer time periods.

The work we present is about automatically monitoring fish populations using audio recordings. Sound propagates much better underwater than light does: by recording sounds under the sea, we can gather a wealth of information about the environment and the animal species it shelters. Here is an example of such a recording:

Legend: Raw recording of fish sounds, August 2014, Corsica, France.

Regarding fish populations, we distinguish four types of sounds that we call (1) Impulsions, (2) Roars, (3) Drums and (4) Quacks. We can hear them in the previous recording, but here are some extracts with isolated examples:

Legend: Filtered recording of fish sounds to hear Roar between 5s and 13s and Drums between 22s to 29s and 42s to 49s.

Legend: Filtered recording of fish sounds to hear Quacks and Impulsions. Both sounds are quite short (<0.5s) and are heard all along the recording.

However, making a computer automatically classify a fish sound into one of those four groups is a very complex task. A simple or intuitive task for humans is often extremely complex for a computer, and vice versa. This is because humans and computers process information in different ways. For instance, a computer is very successful at solving complex calculations and at performing repetitive tasks, but it is very difficult to make a computer recognize a car in a picture. Humans, however, tend to struggle with complex calculations but can very easily recognize objects in images. How do you explain to a computer that ‘this is a car’? It has four wheels. But then, how does it know what a wheel is? Well, it has a circular shape. Oh, so this ball is a wheel, isn’t it?

This easy task for a human is very complex for a machine. Scientists found a solution to make a computer understand what we call ‘high-level concepts’ (recognising objects in pictures, understanding speech, etc.): they designed a family of algorithms called machine learning. The idea is to give a computer many examples of each concept we want to teach it. For instance, to make a computer recognise a car in a picture, we feed it many pictures of cars so that it can learn what a car is, and many pictures without cars so that it can learn what a car is not. Many companies such as Facebook, Google, or Apple use those algorithms for face recognition, speech understanding, individualised advertisement, etc. It works very well.
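The learn-from-examples idea can be sketched with one of the simplest possible classifiers, a nearest-centroid rule on made-up two-number ‘features’ (the labels and feature values below are purely illustrative):

```python
import numpy as np

# "Training" examples: made-up 2-number feature vectors with labels.
train_X = np.array([[0.9, 0.1], [1.1, 0.2], [0.2, 1.0], [0.1, 0.8]])
train_y = ["drum", "drum", "quack", "quack"]

# Learning = summarizing each class by the average of its examples.
centroids = {label: train_X[[y == label for y in train_y]].mean(axis=0)
             for label in set(train_y)}

def classify(x):
    # Predict the class whose summary (centroid) is closest to x.
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(classify(np.array([1.0, 0.0])))  # -> drum
print(classify(np.array([0.0, 0.9])))  # -> quack
```

Real systems use many more examples, richer features, and more powerful models, but the workflow is the same: learn from labeled examples, then predict on new data.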

In our work, we use the same techniques to teach a computer to recognize and automatically classify fish sounds. Once those sounds have been classified, we can study their evolution and see whether fish populations behave differently from place to place, or whether their behaviour evolves with time. It is also possible to study their density and see whether their numbers vary through time.

This work is of particular interest since, to our knowledge, it presents the first tool to automatically classify fish sounds. One of the main challenges is to make a sound understandable by a computer, that is, to find and extract relevant information from the acoustic signal. With a good representation, it becomes easier for the computer to capture similarities and differences between signals and, at the end of the day, to predict which group a sound belongs to.
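One common kind of ‘relevant information’ is spectral: for example, the spectral centroid summarizes whether a sound's energy sits low or high in frequency. A minimal sketch with synthetic tones standing in for fish sounds (the sample rate and frequencies are made up):

```python
import numpy as np

fs = 4000  # sample rate (Hz), illustrative
t = np.arange(0, 0.5, 1/fs)

def spectral_centroid(signal):
    # "Center of mass" of the magnitude spectrum, in Hz.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1/fs)
    return (freqs * spectrum).sum() / spectrum.sum()

roar_like = np.sin(2*np.pi*80*t)    # low-frequency tone, like a roar
quack_like = np.sin(2*np.pi*600*t)  # higher-frequency tone, like a quack

print(spectral_centroid(roar_like))   # ~80 Hz
print(spectral_centroid(quack_like))  # ~600 Hz
```

A handful of such numbers per recording turns a raw waveform into a feature vector a classifier can compare across signals.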

Legend: How to build an automatic fish sounds classifier? Illustration.

3aUW8 – A view askew: Bottlenose dolphins improve echolocation precision by aiming their sonar beam to graze the target

Laura N. Kloepper– lkloepper@saintmarys.edu
Saint Mary’s College
Notre Dame, IN 46556

Yang Liu–yang.liu@umassd.edu
John R. Buck– jbuck@umassd.edu
University of Massachusetts Dartmouth
285 Old Westport Road
Dartmouth, MA 02747

Paul E. Nachtigall–nachtiga@hawaii.edu
University of Hawaii at Manoa
PO Box 1346
Kaneohe, HI 96744

Popular version of paper 3aUW8, “Bottlenose dolphins direct sonar clicks off-axis of targets to maximize Fisher Information about target bearing”
Presented Wednesday morning, November 4, 2015, 10:25 AM in River Terrace 2
170th ASA Meeting, Jacksonville

Bottlenose dolphins are incredible echolocators. Using just sound, they can detect a ping-pong ball sized object from 100 m away, and discriminate between objects differing in thickness by less than 1 mm. Based on what we know about man-made sonar, however, the dolphins’ sonar abilities are an enigma–simply put, they shouldn’t be as good at echolocation as they actually are.

Typical manmade sonar devices achieve high levels of performance by using very narrow sonar beams. Creating narrow beams requires large and costly equipment. In contrast to these manmade sonars, bottlenose dolphins achieve the same levels of performance with a sonar beam that is many times wider–but how? Understanding their “sonar secret” can help lead to more sophisticated synthetic sonar devices.

Bottlenose dolphins’ echolocation signals contain a wide range of frequencies.  The higher frequencies propagate away from the dolphin in a narrower beam than the low frequencies do. This means the emitted sonar beam of the dolphin is frequency-dependent.  Objects directly in front of the animal echo back all of the frequencies.   However, as we move out of the direct line in front of the animal, there is less and less high frequency, and when the target is way off to the side, only the lower frequencies reach the target to bounce back.   As shown below in Fig. 1, an object 30 degrees off the sonar beam axis has lost most of the frequencies.


Figure 1. Beam pattern and normalized amplitude as a function of signal frequency and bearing angle. At 0 degrees, or on-axis, the beam contains an equal representation across all frequencies. As the bearing angle deviates from 0, however, the higher frequency components fall off rapidly.

Consider an analogy to light shining through a prism.  White light entering the prism contains every frequency, but the light leaving the prism at different angles contains different colors.  If we moved a mirror to different angles along the light beam, it would change the color reflected as it moved through different regions of the transmitted beam.  If we were very good, we could locate the mirror precisely in angle based on the color reflected.  If the color changes more rapidly with angle in one region of the beam, we would be most sensitive to small changes in position at that angle, since small changes in position would create large changes in color.  In mathematical terms, this region of maximum change would have the largest gradient of frequency content with respect to angle.  The dolphin sonar appears to be exploiting a similar principle, only the different colors are different frequencies or pitch in the sound.
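The frequency-dependent beam can be caricatured with Gaussian beams whose width shrinks with frequency (illustrative numbers only, not the dolphin's measured beam pattern); at 30 degrees off-axis, the high-frequency components are almost entirely gone:

```python
import numpy as np

# Gaussian-beam caricature: beam width shrinks in proportion to 1/frequency.
def beam_amplitude(freq_khz, angle_deg):
    width_deg = 40.0 * 30.0 / freq_khz   # invented reference width
    return np.exp(-0.5 * (angle_deg / width_deg) ** 2)

# Relative amplitude 30 degrees off-axis at three frequencies:
for f in (30, 60, 120):
    print(f, "kHz:", round(float(beam_amplitude(f, 30.0)), 3))
# The 30 kHz component largely survives while 120 kHz is almost
# entirely gone -- the acoustic analogue of the prism.
```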

Prior studies on bottlenose dolphins assumed the animal pointed its beam directly at the target, but this assumption led to the conclusion that the animals shouldn’t be as “good” at echolocation as they actually are. What if, instead, they use a different strategy? We hypothesized that the dolphin might be aiming its sonar so that the main axis of the beam passes next to the target, which places the region of maximum gradient on the target. Our model predicts that placing the region of the beam most sensitive to change on the target gives the dolphin the greatest precision in locating the object.

To test our hypothesis, we trained a bottlenose dolphin to detect the presence or absence of an aluminum cylinder while we recorded the echolocation signals with a 16-element hydrophone array (Fig.2).


Figure 2: Experimental setup. The dolphin detected the presence or absence of cylinders at different distances while we recorded sonar beam aim with a hydrophone array.

We then measured where the dolphin directed its sonar beam in relation to the target and found the dolphin pointed its sonar beam 7.05 ± 2.88 degrees (n=1930) away from the target (Fig.3).


Figure 3: Optimality in directing beam away from axis. The numbers on the emitted beam represent the attenuation in decibels relative to the sound emitted from the dolphin. The high frequency beam (red) is narrower than the blue and attenuates at angle more rapidly. The dolphin directs its sonar beam 7 degrees away from the target.

To then determine if certain regions of the sonar beam provide more theoretical “information” to the dolphin, which would improve its echolocation, we applied information theory to the dolphin sonar beam. Using the weighted frequencies present in the signal, we calculated the Fisher Information for the emitted beam of a bottlenose dolphin. From our calculations we determined 95% of the maximum Fisher Information to be between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees (Fig. 4).
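The logic of that calculation can be sketched with a toy model: treat each frequency's beam as a Gaussian of frequency-dependent width, take the information about bearing at each angle to be proportional to the squared slope of amplitude versus angle, and sum over frequencies. The peak then falls on the beam's flank rather than at its center (the beamwidths below are invented, so the peak angle here is only qualitative):

```python
import numpy as np

angles = np.linspace(0.0, 30.0, 601)      # bearing angle off-axis (degrees)
freqs = np.linspace(30.0, 120.0, 10)      # invented frequency band (kHz)
widths = 40.0 * 30.0 / freqs              # Gaussian beam width ~ 1/frequency

# Information about bearing at each angle: squared slope of amplitude
# vs. angle, summed over the frequencies in the click.
fisher = np.zeros_like(angles)
for w in widths:
    amplitude = np.exp(-0.5 * (angles / w) ** 2)
    fisher += np.gradient(amplitude, angles) ** 2

best_angle = angles[np.argmax(fisher)]
print(round(float(best_angle), 1))  # several degrees off-axis, never 0
```

On the beam axis the amplitude curve is flat, so a small change in bearing changes almost nothing; on the flank, the same small change produces a large change in the received spectrum.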


Figure 4: The calculated Fisher Information as a function of bearing angle. The peak of the information is between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees.

The result? The dolphin is using a strategy that is mathematically optimal! By directing its sonar beam slightly askew of the target (such as a fish), the target is placed in the highest frequency gradient of the beam, allowing the dolphin to locate the target more precisely.

Monitoring deep ocean temperatures using low-frequency ambient noise

Katherine Woolfe, Karim G. Sabra
School of Mechanical Engineering, Georgia Institute of Technology
Atlanta, GA 30332-0405

In order to precisely quantify the ocean’s heat capacity and influence on climate change, it is important to accurately monitor ocean temperature variations, especially in the deep ocean (i.e. at depths ~1000m) which cannot be easily surveyed by satellite measurements. To date, deep ocean temperatures are most commonly measured using autonomous sensing floats (e.g. Argo floats). However, this approach is limited because, due to costs and logistics, the existing global network of floats cannot sample the entire ocean at the lower depths. On the other hand, acoustic thermometry (using the travel time of underwater sound to infer the temperature of the water the sound travels through) has already been demonstrated as one of the most precise methods for measuring ocean temperature and heat capacity over large distances (Munk et al., 1995; Dushaw et al., 2009; The ATOC Consortium, 1998). However, current implementations of acoustic thermometry require the use of active, man-made sound sources. Aside from the logistical issues of deploying such sources, there is also the ongoing issue of negative effects on marine animals such as whales.

An emerging alternative to measurements with active acoustic sources is the use of ambient noise correlation processing, which uses the background noise in an environment to extract useful information about that environment. For instance, ambient noise correlation processing has successfully been used to monitor seismically-active earth systems such as fault zones (Brenguier et al., 2008) and volcanic areas (Brenguier et al., 2014). In the context of ocean acoustics (Roux et al., 2004; Godin et al., 2010; Fried et al., 2013), previous studies have demonstrated that the noise correlation method requires excessively long averaging times to reliably extract most of the acoustic travel-paths that were used by previous active acoustic thermometry studies (Munk et al., 1995). Consequently, since this averaging time is typically too long compared to the timescale of ocean fluctuations (i.e., tides, surface waves, etc.), this would prevent the application of passive acoustic thermometry using most of these travel paths (Roux et al., 2004; Godin et al., 2010; Fried et al., 2013). However, for deep ocean propagation, there is an unusually stable acoustic travel path, where sound propagates nearly horizontally along the Sound Fixing and Ranging (SOFAR) channel. The SOFAR channel is centered on the minimum value of the sound speed over the ocean depth (located at ~1000 m depth near the equator) and thus acts as a natural pathway for sound to travel very large distances with little attenuation (Ewing and Worzel, 1948).
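The core of the noise-correlation method can be sketched in a few lines: two receivers record the same random noise, offset by the propagation delay between them, and cross-correlating the recordings recovers that delay without any active source:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                       # samples per second (illustrative)
delay = 37                      # true inter-receiver travel time, in samples
noise = rng.standard_normal(4000)

rec_a = noise[:-delay]          # one receiver's recording
rec_b = noise[delay:]           # the other's: same noise, offset by the delay

# Cross-correlate the two recordings and find the lag of the peak.
corr = np.correlate(rec_a, rec_b, mode="full")
lag = np.argmax(corr) - (len(rec_b) - 1)
print(lag, lag / fs)  # recovered offset: 37 samples, i.e. 0.037 s
```

In the ocean the "common noise" arrives from many distant sources at once, which is why long averaging times are needed before a stable travel time emerges from the correlation.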

In this research, we have demonstrated the feasibility of a passive acoustic thermometry method for use in the deep ocean, using only recordings of low-frequency (f~10 Hz) ambient noise propagating along the SOFAR channel.  This study used continuous recordings of ocean noise from two existing hydroacoustic stations of the International Monitoring System, operated by the Comprehensive Nuclear-Test-Ban Treaty Organization, located respectively next to Ascension and Wake Islands (see Fig. 1(a)). Each hydroacoustic station is composed of two triangular-shaped horizontal hydrophone arrays (Fig. 1(b)), separated by L~130 km, which are referred to hereafter as the north and south triads. The sides of each triad are ~2 km long and the three hydrophones are located within the SOFAR channel at depth ~1000 m. From year to year, the acoustic arrivals that propagate between hydrophone pairs along the SOFAR channel build up from distant noise sources whose propagation paths intersect the hydrophone pairs. In the low-frequency band used here (1-40 Hz) -with most of the energy of the arrivals being centered around 10 Hz- these arrivals are known to mainly originate from ice-breaking noise in the Polar regions (Chapp et al., 2005; Matsumoto et al., 2014; Gavrilov and Li, 2009; Prior et al., 2011). The angular beams shown in Fig. 1a illustrate a simple estimate of the geographical area from which ice-generated ambient noise is likely to emanate for each site (Woolfe et al., 2015).


FIG. 1. (a) Locations of the two hydroacoustic stations (red dots) near Ascension and Wake Islands. (b) Zoomed-in schematic of the hydrophone array configurations for the Ascension and Wake Island sites. Each hydroacoustic station consists of a northern and southern triangle array of three hydrophones (or triad), with each triangle side having a length ~ 2 km. The distance L between triad centers is equal to 126 km and 132 km for the Ascension Island and Wake Island hydroacoustic stations, respectively.

Acoustic thermometry estimates ocean temperature fluctuations averaged over the entire acoustic travel path (in this case, the entire depth and length of the SOFAR channel between the north and south hydrophone triads) by leveraging the nearly linear dependence between sound speed in water and temperature (Munk et al., 1995). Here the SOFAR channel extends approximately from 390 m to 1350 m deep at the Ascension Island site and 460 m to 1600 m deep at the Wake Island site, as determined from the local sound speed profiles and the center frequency (~10 Hz) of the SOFAR arrivals. We used passive acoustic thermometry to monitor the small variations in the travel time of the SOFAR arrivals over several years (8 years at Ascension Island, and 5 years at Wake Island). To do so, coherent arrivals were extracted by averaging cross-correlations of ambient noise recordings over 1 week at the Wake and Ascension Island sites. The small fluctuations in acoustic travel time were converted to deep ocean temperature fluctuations by leveraging the linear relationship between change in sound speed and change in temperature in the water (Woolfe et al., 2015). These calculated temperature fluctuations are shown in Fig. 2, and are consistent with Argo float measurements. At the Wake Island site, where data were measured over only 5 years, the Argo and thermometry data are found to be 54% correlated. Both records indicate a very small upward (i.e., warming) trend: the Argo data show a trend of 0.003 °C/year ± 0.001 °C/year (95% confidence interval), and the thermometry data show a trend of 0.007 °C/year ± 0.002 °C/year (95% confidence interval) (Fig. 2(a)). For the Ascension site, the SOFAR channel temperature variations measured over a longer duration of eight years from passive thermometry and Argo data are found to be significantly correlated, with a 0.8 correlation coefficient. Furthermore, Fig. 2(b) indicates a warming of the SOFAR channel in the Ascension area, as inferred from the similar upward trend of both passive thermometry (0.013 °C/year ± 0.001 °C/year, 95% confidence interval) and Argo (0.013 °C/year ± 0.004 °C/year, 95% confidence interval) temperature variation estimates. Hence, our approach provides a simple and entirely passive means of measuring deep ocean temperature variations, which could ultimately improve our understanding of the role of the oceans in climate change.
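The conversion from a travel-time shift to a temperature change can be sketched with round illustrative numbers (the path length, nominal sound speed, and a sensitivity of about 4 m/s per °C are assumptions for this sketch, not the study's calibration):

```python
# All numbers are illustrative round values, not the study's calibration.
L_path = 130e3       # distance between triads (m)
c = 1480.0           # nominal SOFAR-channel sound speed (m/s)
dc_dT = 4.0          # assumed sound-speed sensitivity (m/s per deg C)

t_nominal = L_path / c           # nominal travel time, ~88 s
dt = -0.010                      # arrivals come 10 ms early: the water sped up
dc = -c * dt / t_nominal         # from  dt/t = -dc/c  (small perturbations)
dT = dc / dc_dT                  # inferred average warming along the path
print(round(t_nominal, 1), "s;", round(dT, 4), "deg C")
```

A shift of only 10 ms over an 88 s travel time thus corresponds to a few hundredths of a degree, which is why the method can resolve such small path-averaged trends.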


FIG. 2. (a) Comparison of the deep ocean temperature variations at the Wake Island site estimated from passive thermometry (blue line) with Argo float measurements (grey dots), along with corresponding error bars (Woolfe et al., 2015). (b) Same as (a), but for the Ascension Island site. Each ΔT data series is normalized so that a linear fit on the data would have a y-intercept at zero.

REFERENCES:
The ATOC Consortium, (1998). “Ocean Climate Change: Comparison of Acoustic Tomography, Satellite Altimetry, and Modeling”, Science. 281, 1327-1332.
Brenguier, F., Campillo, M., Takeda, T., Aoki, Y., Shapiro, N.M., Briand, X., Emoto, K., and Miyake, H. (2014). “Mapping Pressurized Volcanic Fluids from Induced Crustal Seismic Velocity Drops”, Science. 345, 80-82.
Brenguier, F., Campillo, M., Hadziioannou, C., Shapiro, N.M., Nadeau, R.M., and Larose, E. (2008). “Postseismic Relaxation Along the San Andreas Fault at Parkfield from Continuous Seismological Observations.” Science. 321, 1478-1481.
Chapp, E., Bohnenstiehl, D., and Tolstoy, M. (2005). “Sound-channel observations of ice-generated tremor in the Indian Ocean”, Geochem. Geophys. Geosyst., 6, Q06003.
Dushaw, B., Worcester, P., Munk, W., Spindel, R., Mercer, J., Howe, B., Metzger, K., Birdsall, T., Andrew, R., Dzieciuch, M., Cornuelle, B., Menemenlis, D., (2009). “A decade of acoustic thermometry in the North Pacific Ocean”, J. Geophys. Res., 114, C07021.
Ewing, M., and Worzel, J.L., (1948). “Long-Range Sound Transmission”, GSA Memoirs. 27, 1-32.
Fried, S., Walker, S.C. , Hodgkiss, W.S. , and Kuperman, W.A. (2013). “Measuring the effect of ambient noise directionality and split-beam processing on the convergence of the cross-correlation function”, J. Acoust. Soc. Am., 134, 1824-1832.
Gavrilov, A., and Li, B. (2009). “Correlation between ocean noise and changes in the environmental conditions in Antarctica” Proceedings of the 3rd International Conference and Exhibition on Underwater Acoustic Measurements: Technologies and Results. Napflion, Greece, 1199.
Godin, O., Zabotin, N., and Goncharov, V. (2010). “Ocean tomography with acoustic daylight,” Geophys. Res. Lett. 37, L13605.
Matsumoto, H., Bohnenstiehl, D., Tournadre, J., Dziak, R., Haxel, J., Lau, T.K., Fowler, M., and Salo, S. (2014). “Antarctic icebergs: A significant natural ocean sound source in the Southern Hemisphere”, Geochem. Geophys. Geosyst., 15, 3448-3458.
Munk, W., Worcester, P., and Wunsch, C., (1995) .Ocean Acoustic Tomography, Cambridge University Press, Cambridge, 1-28, 197-202.
Prior, M., Brown, D., and Haralabus, G., (2011), “Data features from long-term monitoring of ocean noise”, paper presented at Proceedings of the 4th International Conference and Exhibition on Underwater Acoustic Measurements, p. L.26.1, Kos, Greece.
Roux, P., Kuperman, W., and the NPAL Group, (2004). “Extracting coherent wave fronts from acoustic ambient noise in the ocean,” J. Acoust. Soc. Am, 116, 1995-2003.
Woolfe, K.F., Lani, S., Sabra, K.G., and Kuperman, W.A. (2015). “Monitoring deep ocean temperatures using acoustic ambient noise”, Geophys. Res. Lett., DOI: 10.1002/2015GL063438.

3aPA8 – Using arrays of air-filled resonators to reduce underwater man-made noise

Kevin M. Lee – klee@arlut.utexas.edu
Andrew R. McNeese – mcneese@arlut.utexas.edu
Applied Research Laboratories
The University of Texas at Austin

Preston S. Wilson – wilsonps@austin.utexas.edu
Mechanical Engineering Department and Applied Research Laboratories
The University of Texas at Austin

Mark S. Wochner – mark@adbmtech.com
AdBm Technologies

Popular version of paper 3aPA8
Presented Wednesday Morning, October 29, 2014
168th Meeting of the Acoustical Society of America, Indianapolis, Indiana
See also: Using arrays of air-filled resonators to attenuate low frequency underwater sound in POMA

Many marine and aquatic human activities generate underwater noise and can have potentially adverse effects on the underwater acoustical environment. For instance, loud sounds can affect the migratory or other behavioral patterns of marine mammals [1] and fish [2]. Additionally, if the noise is loud enough, it could potentially have physically damaging effects on these animals as well.

Examples of human activities that can generate such noise are offshore wind farm installation and operation; bridge and dock construction near rivers, lakes, or ports; offshore seismic surveying for oil and gas exploration, as well as oil and gas production; and noise in busy commercial shipping lanes near environmentally sensitive areas. All of these activities can generate noise over a broad range of frequencies, but the loudest components are typically at low frequencies, between 10 Hz and about 1000 Hz, and these frequencies overlap with the hearing ranges of many aquatic life forms. We seek to reduce the level of sound radiated by these noise sources to minimize their impact on the underwater environment where needed.

A traditional noise control approach is to place some type of barrier around the noise source. To be effective at low frequencies, the barrier would have to be significantly larger than the noise source itself and more dense than the water, making it impractical in most cases. In underwater noise abatement, curtains of small freely rising bubbles are often used in an attempt to reduce the noise; however, these bubbles are often ineffective at the low frequencies at which the loudest components of the noise occur. We developed a new type of underwater air-filled acoustic resonator that is very effective at attenuating underwater noise at low frequencies. The resonators consist of underwater inverted air-filled cavities with combinations of rigid and elastic wall members. They are intended to be fastened to a framework to form a stationary array surrounding an underwater noise source, such as the ones previously mentioned, or to protect a receiving area from outside noise.

The key idea behind our approach is that our air-filled resonator in water behaves like a mass on a spring, and hence it vibrates in response to an excitation. A good example of this occurring in the real world is when you blow over the top of an empty bottle and it makes a tone. The specific tone it makes is related to three things: the volume of the bottle, the length of its neck, and the size of the opening. In this case, a passing acoustic wave excites the resonator into a volumetric oscillation. The air inside the resonator acts as a spring and the water the air displaces when it is resonating acts as a mass. Like a mass on a spring, a resonator in water has a resonance frequency of oscillation, which is inversely proportional to its size and proportional to its depth in the water. At its resonance frequency, energy is removed from the passing sound wave and converted into heat through compression of the air inside the resonator, causing attenuation of the acoustic wave. A portion of the acoustic energy incident upon an array of resonators is also reflected back toward the sound source, which reduces the level of the acoustic wave that continues past the resonator array. The resonators are designed to reduce noise at a predetermined range of frequencies that is coincident with the loudest noise generated by any specific noise source.
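For a spherical air bubble, this mass-on-a-spring picture gives the classic Minnaert resonance frequency. The resonators described here are open-ended and not simple spherical bubbles, so this is only a scaling illustration, but it shows the trends described above: frequency falls with size and rises with depth (through the hydrostatic pressure):

```python
import math

# Classic Minnaert resonance of a spherical air bubble in water -- not the
# exact geometry of the resonators described here, but the same physics:
# bigger cavity -> lower frequency; deeper (higher pressure) -> higher.
def minnaert_frequency(radius_m, depth_m):
    gamma = 1.4                        # ratio of specific heats for air
    rho = 1000.0                       # water density (kg/m^3)
    P0 = 101325.0                      # atmospheric pressure (Pa)
    P = P0 + rho * 9.81 * depth_m      # hydrostatic pressure at depth
    return math.sqrt(3 * gamma * P / rho) / (2 * math.pi * radius_m)

# An ~8 cm-wide (4 cm radius) spherical cavity at 10 m depth resonates
# at roughly 115 Hz, i.e. right in the targeted low-frequency band.
print(round(minnaert_frequency(0.04, 10.0)))
```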


Underwater photograph of a panel array of air-filled resonators attached to a framework. The individual resonators are about 8 cm across, 15 cm tall, and open on the bottom. The entire framework is about 250 cm wide and about 800 cm tall.

We investigated the acoustic properties of the resonators in a set of laboratory and field experiments. Lab measurements were made to determine the properties of individual resonators, such as their resonance frequencies and their effectiveness in damping out sound. These lab measurements were used to iterate the design of the resonators so they would have optimal acoustic performance at the desired noise frequencies. Initially, we targeted a resonance frequency of 100 Hz—the loudest components of the noise from activities like marine pile driving for offshore wind farm construction are between 100 Hz and 300 Hz. We then constructed a large number of resonators so we could make arrays like the panel shown in the photograph. Three or four such panels could be used to surround a noise source like an offshore wind turbine foundation or to protect an ecologically sensitive area.

The noise reduction efficacy of various resonator arrays was tested in a number of locations, including a large water tank at the University of Texas at Austin and an open-water test facility, also operated by the University of Texas, in Lake Travis, a freshwater lake near Austin, TX. Results from the Lake Travis tests are shown in the graph of sound reduction versus frequency. We used two types of resonator–fully enclosed ones called encapsulated bubbles, and open-ended ones like those shown in the photograph. The number, or total volume, of resonators used in the array was also varied. Here, we express the resonator air volume as a percentage of the total volume of the array framework. Notice that our percentages are very small, so we don’t need much air. For a fixed volume percentage, the open-ended resonators provide up to 20 dB more noise reduction than the fully encapsulated resonators. Note that a noise reduction of 10 dB means the noise pressure amplitude was reduced by a factor of about three, and a 30 dB reduction is equivalent to the noise being quieted by a factor of about 32. Because of the improved noise reduction performance of the open-ended resonators, we are currently testing this type of resonator at offshore wind farm installations in the North Sea, where government regulations require some type of noise abatement to protect the underwater acoustic environment.
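The decibel-to-factor conversions quoted above follow from the definition of the decibel for pressure amplitude:

```python
import math

# A reduction of N dB corresponds to a pressure-amplitude
# factor of 10**(N/20).
def amplitude_factor(db):
    return 10 ** (db / 20)

print(round(amplitude_factor(10), 2))  # ~3.16: "a factor of about three"
print(round(amplitude_factor(20), 2))  # 10.0
print(round(amplitude_factor(30), 2))  # ~31.62: "a factor of about 32"
```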


Sound level reduction results from an open water experiment in a fresh water lake.

Various types of air-filled resonators were tested, including fully encapsulated resonators and open-ended resonators like the ones shown in the photograph. Because a much smaller total air volume (expressed as a percentage here) is needed, the open-ended resonators are much more efficient at reducing underwater noise.

References:

[1] W. John Richardson, Charles R. Greene, Jr., Charles I. Malme, and Denis H. Thomson, Marine Mammals and Noise (Academic Press, San Diego, 1998).

[2] Arthur Popper and Anthony Hawkins (eds.), The Effects of Noise on Aquatic Life, Advances in Experimental Medicine and Biology, vol. 730, (Springer, 2012).