2aNSa2 – The effect of transportation noise on sleep and health

Jo M. Solet Joanne_Solet@HMS.Harvard.edu
Harvard Medical School and Cambridge Health Alliance
Assistant Professor of Medicine
15 Berkeley St, Cambridge, MA 02138
617 461 9006

Bonnie Schnitta bonnie@soundsense.com
SoundSense, LLC
Founder and CEO
PO Box 1360, 39 Industrial Road, Wainscott, NY 11975
631 537 4220

Popular version of paper 2aNSa2
Presented Tuesday morning, December 3, 2019
178th ASA Meeting, San Diego, CA

As transportation noise continues to rise, social justice concerns are being raised over its impacts on sleep, health, safety, and well-being.

The Federal Government, through the Federal Aviation Administration (FAA), is solely responsible for managing the National Airspace System, including flight paths and altitudes. The development of satellite-based GPS "area navigation," or RNAV, introduced as a replacement for ground-based radar tracking, has allowed flights at lower altitudes and at closer time intervals. It has also led to a consolidation of formerly more dispersed flight paths, producing a "super-highway" of flights over defined areas. The resulting noise levels impact concentration, communication, and learning during the day and disrupt sleep at night.

Efforts to track and to force dispersal of these consolidated flight paths are underway. However, the government-mandated statistics made available to the public, including day-night average sound levels, fail to illuminate peak exposure levels and their timing. Additionally, statistics are reported in A-weighted metrics only, which deemphasize low-frequency sound components.

Some airports offer a "noise complaint hotline." At Logan Airport in Boston, this hotline is not staffed by a live person at night. Complainants may receive a letter several weeks after their call registering the receipt and content of the complaint. However, gauging noise impacts by the timing and/or number of complaints has serious flaws. Among these, sleep scientists are aware that subjects aroused from sleep by noise do not have their full memory systems up and running. By morning, residents may be aware of having slept poorly, but unable to report what aroused them or how often. The documented effects of inadequate sleep include increased likelihood of crashes, industrial accidents, falls, inflammation, pain, weight gain, diabetes, and heart disease. Sleep disruption by noise is not simply "annoyance."

Breakthrough research from Harvard Medical School sleep scientists Jo M. Solet, Orfeu M. Buxton, and colleagues quantified arousals from sleep by administering a series of noise-source recordings at rising decibel levels to subjects in the sleep lab. This work demonstrated individual differences among sleepers, as well as enhanced protection from arousal by noise in the deepest stages of sleep. Deep sleep is known to decrease dramatically with age, and ours is an aging population.

There is now also preliminary evidence, through the work of physician Carter Sigmon and acoustical engineering leader Bonnie Schnitta, suggesting that certain diagnoses, for example PTSD, low thyroid function, and atrial fibrillation, carry extra vulnerability to noise exposure.

Acoustics experts, sleep scientists, and public health advocates are working to inform policy change to protect residents. This year two bills have been filed to require a National Academy of Medicine consensus report: HR 976, the Air Traffic Noise and Pollution Expert Consensus Act, by Congressman Stephen Lynch, and S. 2506, a bill to require a study on the health impacts of air traffic noise and pollution, by Senator Elizabeth Warren, both of Massachusetts.

See: https://www.congress.gov/bill/116th-congress/house-bill/976/all-info?r=27&s=1


2aSP2 – Self-Driving Cars: Have You Considered Using Sound?

Keegan Yi Hang Sim – yhsim@connect.ust.hk
Yijia Chen
Yuxuan Wan
Kevin Chau

Department of Electronic and Computer Engineering
Hong Kong University of Science and Technology
Clear Water Bay
Hong Kong

Popular version of paper 2aSP2 Advanced automobile crash detection by acoustic methods
Presented Tuesday morning, December 03, 2019
178th Meeting, San Diego, CA
Read the article in Proceedings of Meetings on Acoustics

Introduction
Self-driving cars are currently a major interest for engineers around the globe. They incorporate more advanced versions of the steering and acceleration control found in many of today's cars. Cameras, radars, and lidars (light detection and ranging) are frequently used to detect obstacles and automatically brake to avoid collisions. Airbags, first patented in the early 1950s, soften the impact during an actual collision.

Vision Zero, an ongoing multinational effort, aims to eventually eliminate all car crashes, and self-driving autonomous vehicles are likely to play a key role in achieving this. However, current technology is unlikely to be enough, as it works poorly in low-light conditions. We believe that sound, which carries unique information of its own, is also important: it can be used in all scenarios, and in some of them it likely performs much better.

Sound travels through a car's body up to seventeen times faster than its roughly one-third of a kilometer per second in air, which allows much faster detection than waiting for acceleration data, and sound sensing is clearly unaffected by light, air quality, and other such factors. Previous research has used sound to detect collisions and sirens, but by the time a collision occurs, it is far too late. Instead, we want to identify sounds that frequently occur before car crashes, such as tire skidding, honking, and sometimes screaming, and to figure out the direction they are coming from. Thus, we have designed a method to predict a car crash by detecting and isolating the sound of tire skidding that might signal a possible crash.

Algorithm
The algorithm utilizes the discrete wavelet transform (DWT), which decomposes a sound wave into high- and low-frequency components localized in time. This can be done repeatedly, yielding a series of components of various frequencies. Using wavelets is significantly faster and gives a much more accurate and precise representation of the transient events associated with car crashes than elementary techniques such as the Fourier transform, which decomposes a sound into its steady oscillation components. Previous methods of detecting car crashes examined the highest-frequency components, but tire skidding contains only lower-frequency components, whereas a collision contains almost all frequencies.
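
For readers curious how such a decomposition looks in practice, here is a minimal sketch using the open-source PyWavelets package. The file name, the choice of a Daubechies-4 wavelet, and the five-level depth are our own illustrative assumptions, not necessarily the authors' configuration.

```python
# Minimal DWT band-energy sketch (illustrative, not the authors' exact setup).
import numpy as np
import pywt
import soundfile as sf

audio, fs = sf.read("crash_recording.wav")  # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)              # mix down to mono

# Five-level decomposition: coeffs[0] is the coarsest (lowest-frequency)
# approximation; coeffs[1:] are detail bands from low to high frequency.
coeffs = pywt.wavedec(audio, "db4", level=5)

# Energy per band: tire skidding should concentrate energy in the lower
# bands, while a collision spreads energy across nearly all of them.
energies = [np.sum(c ** 2) for c in coeffs]
total = sum(energies)
for i, e in enumerate(energies):
    name = "approximation" if i == 0 else f"detail level {len(coeffs) - i}"
    print(f"{name}: {100 * e / total:.1f}% of signal energy")
```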

In the original audio of a car crash, one can hear three distinct sections: honking, tire skidding, and the collision.

The top diagram shows the audio displayed as a waveform, plotted against time. The bottom shows a spectrogram of the audio, with frequency on the vertical axis, time on the horizontal axis, and the brightness of the color representing the magnitude of a particular frequency component. The spectrogram was created using a variation of the Fourier transform. One can observe the differences in appearance between honking, tire skidding, and the collision, which suggests that mathematical methods should be able to detect and isolate these sounds. We can also see that the collision occupies all frequencies, while tire skidding occupies lower frequencies with two distinct sharp bands at around 2000 Hz.
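
A waveform-plus-spectrogram view like the one described can be reproduced with a short-time Fourier transform, one such "variation of the Fourier transform." Below is a minimal sketch using SciPy and Matplotlib; the file name and window parameters are illustrative assumptions.

```python
# Minimal waveform + spectrogram sketch (illustrative parameters).
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

audio, fs = sf.read("crash_recording.wav")  # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)

# Short-time Fourier analysis: 1024-sample windows with 50% overlap.
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(np.arange(len(audio)) / fs, audio)       # waveform vs. time
ax1.set_ylabel("Amplitude")
ax2.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))  # magnitude in dB
ax2.set_ylabel("Frequency (Hz)")
ax2.set_xlabel("Time (s)")
plt.show()
```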

OutCollision.wav, the isolated audio containing just the car crash.

Using our algorithm, we were able to create audio files that isolate the honking, the tire skidding, and the collision. They may not sound like normal honking, tire skidding, or collisions, which is a byproduct of our algorithm, but this does not affect a computer's ability to detect the various events.

Conclusion
The algorithm performs well at detecting honking and tire skidding, and it is fast enough to run in real time, before acceleration information can even be processed, which makes it well suited to raising an alert of a possible crash and to activating the hazard lights and seatbelt pre-tensioners. The use of sound in cars is a big step forward for the prevention of car crashes, as well as for improving autonomous and driverless vehicles and achieving Vision Zero, by providing a car with more timely and valuable information about its surroundings.

3pAO7 – The use of passive acoustics to follow killer whale behavior and to understand how they perceive their environment in the context of interactions with fishing activities

Gaëtan Richard – gaetan.richard@ensta-bretagne.fr
Flore Samaran – flore.samaran@ensta-bretagne.fr
ENSTA Bretagne, Lab-STICC UMR 6285
2 rue François Verny
29806 Brest Cedex 9, France

Julien Bonnel –  jbonnel@whoi.edu
Woods Hole Oceanographic Institution
266 Woods Hole Rd
Woods Hole, MA 02543-1050, USA

Christophe Guinet – christophe.guinet@cebc-cnrs.fr
Centre d’Études Biologiques de Chizé, UMR 7372 – CNRS & Université de La Rochelle,
79360 Villiers-en-Bois, France

Popular version of paper 3pAO7
Presented Wednesday afternoon, December 4, 2019
178th ASA Meeting, San Diego, CA

Toothed whales feeding on fish caught on longlines is a growing issue worldwide. This issue, named depredation, has serious socio-economic impacts and raises conservation questions. Costs for fishermen include damage to the fishing gear and increased fishing effort to complete quotas. For marine predators, depredation increases the risk of mortality (lethal retaliation from fishers or bycatch on the gear) and changes behavior, trading natural foraging for an easy human-related food source. Most studies assessing depredation by odontocetes (toothed whales) on longline fisheries have relied primarily on surface observations performed from the fishing vessels during the hauling phase (i.e., when gear is retrieved on board). The way odontocetes interact with longlines underwater thus remains poorly known. In particular, depredation by odontocetes on demersal longlines (i.e., lines that are set on the seafloor) has always been assumed to occur only during hauling phases, when the fish are pulled up from the bottom to the predators at the surface.

Figure 1

In our study, we focused on depredation by killer whales on a demersal longline fishery around the Crozet Archipelago (Southern Ocean, Figure 1). Here, we aimed to understand how, when, and where interactions really occur. Recent studies revealed that killer whales can dive up to 1000 m, suggesting that they may actually depredate longlines set on the seafloor (recall that the traditional hypothesis was that depredation occurs only during hauling, i.e., close to the sea surface when the lines are brought back to the ship). In order to observe what cannot be seen, we used hydrophones fixed on the fishing gear to record the sounds of killer whales (Figure 2). This species is known to produce vocalizations to communicate, but also echolocation clicks used as a sonar to estimate the direction and range of an object or prey (Figure 3). Together, communication and echolocation sounds can be used as clues to both the presence and the behavior of these toothed whales. Additionally, as killer whales also sense their environment by listening to ambient sounds, we recorded the sounds produced by the fishing vessels, in order to better understand how these predators detect and localize fishing activities.

Figure 2. Scheme of fishing phases (setting, soaking and hauling) with the hydrophone deployed on a longline.

Figure 3. Spectrogram of killer whale sounds recorded around a fishing gear. This figure is a visual representation of the intensities (color palette) of the frequencies of sounds as they vary with time. On the recording we observe both calls (communication sounds) and echolocation clicks, which can be heard as 'buzzes' when the emission rate is too fast to distinguish individual clicks. Click image to listen.

Our main result is that killer whales were present and probably looking for food (producing echolocation clicks) around the longline equipped with the hydrophone while the boat was not hauling or was too far away to be interacting with the whales. This observation strongly suggests that depredation occurs on soaking longlines, which contradicts the historical hypothesis that depredation occurs only during the hauling phases, when the behavior is most easily observed from the fishing vessels. This new result raises the question of how killer whales know where to find the longlines in the ocean's immensity. Notably, we also observed that the fishing vessels produced different sounds when setting longlines than when hauling them (Figure 4). We therefore hypothesize that killer whales are able to recognize and localize the vessels' activity from their noise, allowing the whales to find the longlines.

Figure 4. Spectrograms of a fishing vessel setting a longline (left panel) and maneuvering during hauling (right panel). On the first spectrogram, we observe a difference in sound intensity between the setting (until 38 s) and the post-setting period, while the vessel was still moving forward (after 38 s). On the second spectrogram we recorded a vessel going backwards while hauling the longline; such maneuvers characterize the activity and increase the range at which killer whales can detect the fishing vessel.
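
For illustration, a simple energy-based detector can flag candidate echolocation clicks in a hydrophone recording. The sketch below is our own minimal example, not the analysis pipeline used in the study; the band edges, window length, and threshold are illustrative assumptions, and the band-pass range assumes a sampling rate above roughly 90 kHz.

```python
# Minimal energy-based click-detection sketch (not the study's pipeline).
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, fs = sf.read("hydrophone_recording.wav")  # hypothetical mono file

# Band-pass around typical killer whale click energy (roughly 20-45 kHz);
# valid only if the recording was sampled above ~90 kHz.
sos = butter(4, [20_000, 45_000], btype="bandpass", fs=fs, output="sos")
filtered = sosfilt(sos, audio)

# Short-window RMS envelope: clicks appear as brief spikes well above the
# slowly varying background noise.
win = int(0.001 * fs)  # 1 ms windows
n = len(filtered) // win
rms = np.sqrt(np.mean(filtered[: n * win].reshape(n, win) ** 2, axis=1))
threshold = 5 * np.median(rms)  # crude adaptive threshold
click_times = np.flatnonzero(rms > threshold) * win / fs
print(f"Detected {len(click_times)} candidate click windows")
```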

2pAB – Sound production of the critically endangered totoaba: applying underwater sound detection to fish species conservation

Goldie Phillips – gphillips@sci-brid.com
Sci-Brid International Consulting, LLC
16192 Coastal Hwy
Lewes, DE 19958

Gerald D’Spain – gdspain@ucsd.edu
Catalina López-Sagástegui – catalina@ucr.edu
Octavio Aburto-Oropeza – maburto@ucsd.edu
Dennis Rimington – drimington@ucsd.edu
Dave Price – dvprice@ucsd.edu
Scripps Institution of Oceanography,
University of California, San Diego
9500 Gilman Drive
San Diego, CA 92093

Miguel Angel Cisneros-Mata – macisne@yahoo.com
Daniel Guevara – danyguevara47@hotmail.com
Instituto Nacional de Pesca y Acuacultura (INAPESCA) Mexico
Del Carmen, Coyoacán
04100 Mexico City, CDMX, Mexico

Popular version of paper 2pAB
Presented Tuesday afternoon, December 3rd, 2019
178th ASA Meeting, San Diego, CA

The totoaba (Figure 1), the largest fish of the croaker family, faces a severe illegal fishing threat due largely to the high value of its swim bladder (or buche; Figure 2) in Asian markets. While several conservation measures have been implemented in the Gulf of California (GoC) to protect this endemic species, the totoaba’s current population status remains unknown. Passive acoustic monitoring (PAM) – the use of underwater microphones (called hydrophones) to detect, monitor, and localize sounds produced by soniferous species – offers a powerful means of addressing this problem.

Croaker fishes are well known for their ability to produce sound. Their characteristic “croaking” sound is produced by the vibration of their swim bladder membrane caused by the rapid contraction and expansion of nearby sonic muscles. As sound propagates very efficiently underwater, croaks and other sounds produced by species like the totoaba can be readily detected and recorded by specialized PAM systems.

However, as little is known about the characteristics of totoaba sounds, it is necessary to first gain an understanding of the acoustic behavior of this species before PAM can be applied to the GoC totoaba population. Here we present the first step in a multinational effort to implement such a system.

Figure 1. Totoaba housed at CREMES

Figure 2. Totoaba swim bladder.

We conducted a passive acoustic experiment at the aquaculture center, El Centro Reproductor de Especies Marinas (CREMES), located in Kino Bay, Mexico, between April 29 and May 4, 2019. We collected video and acoustic recordings from totoaba of multiple age classes, both in isolation and within group settings. These recordings were collectively used to characterize the sounds of the totoaba.

We found that, in addition to croaks (Video 1), captive totoaba produce four other call types, ranging from short-duration (<0.03 s), low-frequency (<1 kHz) narrowband pulses, classified here as "knocks" (Video 2), to longer-duration, broadband clicks with significant energy above 10 kHz. There is also an indication that one of the remaining call types may function as an alarm or distress call. Furthermore, call rates and the dominant call type were found to depend on age.

Video 1. Visual representation (spectrogram) of a croak produced by totoaba at CREMES. Time (in minutes and seconds) is shown on the x-axis with frequency (in kHz) displayed on the y-axis. Sounds with the greatest amplitude are indicated by warmer colors.

Video 2. Visual representation (spectrogram) of a series of “knocks” produced by totoaba at CREMES.

As PAM systems typically produce large amounts of data, making manual detection by a human analyst extremely time-consuming, we also used several of the totoaba call types to develop and evaluate multiple automated pre-processing/detector algorithms for a future PAM system in the GoC. Collectively, these results are intended to form the basis of a totoaba population assessment spanning multiple spatial and temporal scales.
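
As one concrete example of automated detection of this kind, a template-correlation (matched-filter style) approach can flag candidate "knocks" in a long recording. The sketch below is a minimal illustration under our own assumptions (file names, threshold); it is not the authors' actual pre-processing/detector algorithms.

```python
# Minimal template-correlation detection sketch (illustrative only).
import numpy as np
import soundfile as sf

template, fs_t = sf.read("knock_template.wav")  # hypothetical knock example
recording, fs_r = sf.read("pam_recording.wav")  # hypothetical PAM recording
assert fs_t == fs_r, "template and recording must share one sample rate"

# Cross-correlate the template against the recording; peaks mark segments
# whose waveform shape resembles the template.
corr = np.abs(np.correlate(recording, template, mode="valid"))

# Crude adaptive threshold; a real detector would merge neighboring hits
# that belong to one event and validate them against labeled data.
threshold = 5 * np.median(corr)
hit_times = np.flatnonzero(corr > threshold) / fs_r
print(f"{len(hit_times)} candidate knock detections")
```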

4aAB4 – A Machine Learning Model of the Global Ambient Sound Level

Shane V. Lympany – shane.lympany@blueridgeresearch.com
Michael M. James – michael.james@blueridgeresearch.com
Alexandria R. Salton
Matthew F. Calton
Blue Ridge Research and Consulting, LLC
29 N Market St, Suite 700
Asheville, NC 28801

Kent L. Gee
Mark K. Transtrum
Katrina Pedersen
Department of Physics and Astronomy
Brigham Young University
Provo, Utah 84602

Popular version of paper 4aAB4
Presented Thursday morning, December 5, 2019
178th ASA Meeting, San Diego, CA

Work funded by an Army SBIR

Traffic on a busy road, birds chirping, rushing water—these are some of the many sounds that make up the ambient soundscape, or acoustic environment, that surrounds us. The ambient soundscape is produced by anthropogenic (man-made) and natural sources, and, in turn, the ambient sound level affects the behavior and well-being of humans and animals. It is therefore important to understand how the ambient sound level varies in space. To address this question, we developed a machine learning model to predict the ambient sound level at every point on Earth's land surface, and we used the model to estimate the global impact of anthropogenic noise.

First, we trained a machine learning model to identify the relationships between more than 1.5 million hours of ambient sound level measurements and 37 environmental variables, such as population density, land cover, and climate. The model predicts the median sound level in A-weighted decibels (dBA). (A-weighting adjusts the sound level based on how the human ear perceives loudness.)
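
To make the modeling step concrete, the sketch below shows a supervised regression of median sound level on environmental features. The paper summary does not name the specific learning algorithm, so a random forest is used here as a stand-in, and the data arrays are synthetic placeholders.

```python
# Minimal sketch of the training step (stand-in model, synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sites = 1000
X = rng.random((n_sites, 37))                      # 37 environmental variables
y = 30 + 40 * X[:, 0] + rng.normal(0, 3, n_sites)  # median level in dBA (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out sites: {model.score(X_test, y_test):.2f}")

# To map a "natural" soundscape, one would re-run model.predict() on the
# same grid with human-influence variables (e.g., population density) zeroed.
```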

We applied the machine learning model to predict the median daytime ambient sound level at every point on Earth’s land surface (Figure 1). The loudest sound levels occur in highly populated areas, and the quietest sound levels occur in dry biomes with few humans or animals.

Figure 1. Median daytime ambient sound level produced by anthropogenic and natural sources.

Next, we estimated the natural sound level (Figure 2) by applying the machine learning model to environmental variables that we modified to remove the influence of humans. The natural sound level is loudest in areas with significant biodiversity, such as rainforests.
Figure 2. Median daytime ambient sound level produced by natural sources only.

The difference between the overall and natural sound levels (Figure 3) is the amount that anthropogenic noise increases the existing ambient sound level above the natural level. Approximately 5.5 billion people and 28 million square kilometers—an area the size of Russia and Canada combined—are affected by anthropogenic noise that increases the ambient sound level by 3 dBA or more. A 3-dBA increase means that anthropogenic noise is about as loud as the natural sound level. Furthermore, approximately 2.2 billion people and 6.1 million square kilometers—an area the size of the Amazon Rainforest—are affected by anthropogenic noise that increases the ambient sound level by 10 dBA or more. A 10-dBA increase means that anthropogenic noise roughly doubles the perceived loudness of the ambient sound level compared to the natural level.
Figure 3. Difference between the overall and natural ambient sound levels.
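
The 3-dBA figure follows from energy-based decibel addition: two equal, incoherent sound levels combine to about 3 dB above either one, and a 10-dB increase corresponds to roughly a doubling of perceived loudness. A minimal sketch of that arithmetic, with an illustrative natural level:

```python
# Decibel addition on an energy basis (illustrative values).
import math

def combine_levels(l1_db, l2_db):
    """Sum two incoherent sound levels (in dB) on an energy basis."""
    return 10 * math.log10(10 ** (l1_db / 10) + 10 ** (l2_db / 10))

natural = 40.0                           # dBA, illustrative value only
print(combine_levels(natural, natural))  # ~43.0: equal levels add ~3 dB
print(10 * math.log10(2))                # ~3.01 dB for a doubling of energy
```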

In this research, we produced the first-ever global maps of the overall and natural ambient sound levels, and we showed that anthropogenic noise impacts billions of people and vast land areas worldwide. Furthermore, our method for modifying environmental variables is a powerful tool that enables us to predict the effects of future scenarios, such as population growth, urbanization, deforestation, and climate change, on the ambient sound level.