Turning Up Ocean Temperature & Volume – Underwater Soundscapes in a Changing Climate

Freeman Lauren – lauren.a.freeman3.civ@us.navy.mil

Instagram: @laur.freeman

NUWC Division Newport, NAVSEA, Newport, RI, 02841, United States

Dr. Lauren A. Freeman, Dr. Daniel Duane, Dr. Ian Rooney from NUWC Division Newport and
Dr. Simon E. Freeman from ARPA-E

Popular version of 1aAB1 – Passive Acoustic Monitoring of Biological Soundscapes in a Changing Climate
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018023

Climate change is impacting our oceans and marine ecosystems across the globe. Passive acoustic monitoring of marine ecosystems has been shown to provide a window into the heartbeat of an ecosystem: its relative health, and even information such as how many whales or fish are present in a given day or month. By studying marine soundscapes, we collate all of the ambient noise at an underwater location and attribute parts of the soundscape to wind and waves, to boats, and to different types of biology. Long-term biological soundscape studies allow us to track changes in ecosystems with a single, small instrument called a hydrophone.

I’ve been studying coral reef soundscapes for nearly a decade, and my time series are now long enough to begin to show how climate change affects soundscapes. Some of the most immediate and pronounced impacts of climate change on shallow ocean soundscapes appear as varying levels of ambient biological sound. We found a consistent trend at research sites in both the tropical Pacific (Hawaii) and sub-tropical Atlantic (Bermuda): warmer water tends to be associated with higher ambient noise levels. Different frequency bands provide information about different ecological processes (such as fish calls, invertebrate activity, and algal photosynthesis). The response of each of these processes to temperature changes is not uniform, but each type of ambient noise increases in warmer water. At some point, ocean warming and acidification will fundamentally change the ecological structure of a shallow water environment. This would also be reflected in a fundamentally different soundscape, as described by peak frequencies and sound intensity.
While I have not monitored the phase shift of an ecosystem at a single site, I have documented that healthy coral reefs with abundant parrotfish and reef fish have fundamentally different soundscapes, as reflected in their acoustic signatures at different frequency bands, than coral reefs that are degraded and overgrown with fleshy macroalgae. This suggests that long-term soundscape monitoring could also track these ecological phase shifts under climate stress and other impacts to marine ecosystems such as overfishing.

A healthy coral reef research site in Hawaii with vibrant corals, many reef fish, and copious nooks and crannies for marine invertebrates to make their homes.
Soundscape segmented into three frequency bands capturing fish vocalizations (blue), parrotfish scrapes (red), and invertebrate clicks along with algal photosynthesis bubbles (yellow). All features show an increase in ambient noise level (PSD, y-axis) with increasing ocean temperature at each site studied in Hawaii.
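The band-segmentation approach described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual pipeline: the band edges are invented placeholders, and a synthetic noise signal stands in for real hydrophone audio.

```python
import numpy as np
from scipy.signal import welch

def band_levels(samples, fs, bands):
    """Mean power spectral density level (dB) in each named frequency band."""
    freqs, psd = welch(samples, fs=fs, nperseg=4096)
    levels = {}
    for name, (f_lo, f_hi) in bands.items():
        in_band = (freqs >= f_lo) & (freqs < f_hi)
        levels[name] = 10 * np.log10(psd[in_band].mean())
    return levels

# Illustrative band edges only -- not the study's actual cutoffs.
bands = {"fish_calls": (100, 1_000),
         "parrotfish_scrapes": (1_000, 3_000),
         "invertebrates_and_algae": (3_000, 20_000)}

fs = 48_000                                   # hydrophone sample rate, Hz
rng = np.random.default_rng(0)
synthetic = rng.standard_normal(fs * 10)      # stand-in for 10 s of audio
print(band_levels(synthetic, fs, bands))
```

Tracking per-band levels like these over months, alongside water temperature, is what reveals the warm-water increase described in the text.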

Room Design Considerations for Optimal Podcasting

Madeline Didier – mdidier@jaffeholden.com

Jaffe Holden, 114-A Washington Street, Norwalk, CT, 06854, United States

Twitter: @JaffeHolden
Instagram: @jaffeholden

Popular version of 1aAA2 – Podcast recording room design considerations and best practices, presented at the 183rd ASA Meeting.

Podcast popularity has been on the rise, with over two million active podcasts as of 2021. Listeners have countless options when choosing a podcast, and unacceptable audio quality will cause them to quickly move on to another one. Poor acoustics in the space where a podcast was recorded are noticeable even to an untrained ear, and listeners may hear differences in room acoustics without ever seeing the space. Podcasters use a variety of setups to record episodes, ranging from closets to professional recording spaces. One trend is recording spaces that feel comfortable and look aesthetically pleasing, more like living rooms than radio stations.

Figure 1: Podcast studio with a living room aesthetic. Image courtesy of The Qube.

A high-quality podcast recording is one that does not capture sounds other than the podcaster’s voice. Unwanted sounds include noise from mechanical systems, vocal reflections, or ambient noise such as exterior traffic or people in a neighboring room. Listen to the examples below.

More ideal recording conditions:
Media courtesy of Home Cooking Podcast, Episode: Kohlrabi – Turnip for What

Less ideal recording conditions:
Media courtesy of The Birding Life Podcast, Episode 15: Roberts Bird Guide Second Edition

The first example is a higher quality recording where the voices can be clearly heard. In the second example, the podcast guest is not recording in an acoustically suitable room. The voice reflects off the wall surfaces and detracts from the overall quality and listener experience.

Every room design project comes with its own challenges and considerations related to budget, adjacent spaces, and expected quality. Each room may have different design needs, but best practice recommendations for designing a podcasting room remain the same.

Background noise: Mechanical noise should be controlled so that you cannot hear HVAC systems in a recording. Computers and audio interfaces should ideally be located remotely so that noises, such as computer fans, are not picked up on the recording.
Room shape: Square room proportions should be avoided as this can cause room modes, or buildup of sound energy in spots of the room, creating an uneven acoustic environment.
Room finishes: Carpet is ideal for flooring, and an acoustically absorptive material should be attached to the wall(s) in the same plane as the podcaster’s voice. Wall materials should be 1-2” thick. Ceiling materials should be acoustically absorptive, and window glass should be angled upward to reduce resonance within the room.
Sound isolation: Strategies for improving sound separation may include sound rated doors or standard doors with full perimeter gaskets, sound isolation ceilings, and full height wall constructions with insulation and multiple layers of gypsum wallboard.
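The room-shape guideline can be made concrete. In a rectangular room of dimensions Lx × Ly × Lz, the mode frequencies follow f = (c/2)·√((nx/Lx)² + (ny/Ly)² + (nz/Lz)²); when two dimensions are equal, pairs of modes coincide and the energy buildup at those frequencies is reinforced. A small sketch (the room dimensions are hypothetical examples, not from the paper):

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def room_modes(lx, ly, lz, max_order=2):
    """Rectangular-room mode frequencies in Hz, lowest first."""
    modes = []
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial all-zero combination
        f = (SPEED_OF_SOUND / 2.0) * math.sqrt(
            (nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append(f)
    return sorted(modes)

# A square 4 m x 4 m floor plan: the two lowest axial modes coincide
# at ~42.9 Hz, reinforcing the buildup at that frequency.
print([round(f, 1) for f in room_modes(4.0, 4.0, 2.7)[:4]])
```

A rectangular plan with unequal dimensions spreads these frequencies apart, which is why square proportions are avoided.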

In the example below, the podcast studio (circled) is strategically located at the back of a dedicated corridor for radio and podcasting. It is physically isolated from the main corridor, creating more acoustical separation. Absorptive ceiling tile (not shown) and 2” thick wall panels help limit vocal reflections, and background noise is controlled.

Figure 2: Podcast recording room within a radio and podcasting suite. Image courtesy of BWBR and RAMSA.

While the challenges for any podcast room may differ, the acoustical goals remain the same. With thoughtful consideration of background noise, room shape, finishes, and sound isolation, any room can support high-quality podcast recording.

Connecting industry to a more diverse student population

Felicia Doggett – f.doggett@metro-acoustics.com

Instagram: @metropolitan_acoustics

Metropolitan Acoustics, 1628 JFK Blvd., Suite 1902, Philadelphia, PA, 19103, United States

Popular version of 4pED4 – Internships in the acoustical disciplines: How can we attract a more diverse student population?, presented at the 183rd ASA Meeting.

Metropolitan Acoustics has employed 26 interns over a 27-year period. Of those 26, 6 students pursued careers in acoustics; of those 6, only one was both a woman and a minority, and she was a foreign-born student who came to the United States for school. Not one woman or minority from the United States who interned with us since 1995 entered the acoustics field after graduation. This is a very telling microcosm of the Acoustical Society of America as a whole.

Within the acoustics fields, we need to ask ourselves how we are connecting to underrepresented student groups. The engineering disciplines are not very diverse, and the few women and members of minority groups who enter the field often leave for a variety of reasons, most of which lead back to a lack of inclusion. It doesn’t have to be a mountain – a molehill is enough to send someone off the track of a sustained and productive career in science and engineering.

At Metropolitan Acoustics, a large majority of our interns have been 6-month co-ops rather than 3-month summer interns (23 to 3). For the most part, the students were fairly productive, and we found that interest, enthusiasm, engagement, and work ethic are all factors in their success. Six of the 26 went into careers in acoustics, and one of them works for us currently. The gender and racial breakdown is as follows:

  • Gender diversity: 20 male, 6 female
  • Racial diversity: 20 Caucasian, 6 minority; of the 6 minorities, 4 male and 2 female
  • Out of the 6 interns that went into careers in acoustics, 5 are Caucasian males and 1 is a minority female who is not native to the US

As an organization, what are we doing to attract a more diverse pipeline of candidates to the acoustics fields? A perhaps bigger question is how we plan to keep them in the field, which is all about inclusiveness. Dedicated student portals on organizational websites, populated with videos, student awards, lists of schools with acoustics programs, and other items, are a start. This information can be shared with underrepresented student organizations – the National Society of Black Engineers, the Society of Women Engineers, the Society of Hispanic Professional Engineers, the Society of STEM Women of Color, and the American Indian Science and Engineering Society, among others – in the hope that it may light a spark in some students to enter the field.

Presence of a drone and estimating its range simply from the drone audio emissions

Kaliappan Gopalan – kgopala@pnw.edu

Purdue University Northwest, Hammond, IN, 46323, United States

Brett Y. Smolenski, North Point Defense, Rome, NY, USA
Darren Haddad, Information Exploitation Branch, Air Force Research Laboratory, Rome, NY, USA

Popular version of 1ASP8 – Detection and Classification of Drones using Fourier-Bessel Series Representation of Acoustic Emissions, presented at the 183rd ASA Meeting.

With the proliferation of drones – from medical supply and hobbyist to surveillance, fire detection and illegal drug delivery, to name a few – of various sizes and capabilities flying day or night, it is imperative to detect their presence and estimate their range for security, safety and privacy reasons.

Our paper describes a technique for detecting the presence of a drone, as opposed to environmental noise such as from birds and moving vehicles, simply from the audio emissions of its motors, propellers and mechanical vibrations. By applying a feature extraction technique that separates a drone’s distinct audio spectrum from that of atmospheric noise, and employing machine learning algorithms, we were able to identify drones from three different classes flying outdoors with the correct class in over 78% of cases. Additionally, we estimated the range of a drone from the observation point to within ±50 cm in over 85% of cases.

We evaluated unique features characterizing each type of drone using a mathematical technique known as the Fourier-Bessel series expansion. Using these features, which differentiate not only the drone class but also the drone range, we trained a deep learning network with ground truth values of the drone type, or of its range as a discrete variable at intervals of 50 cm. When the trained network was tested with new, unused features, we obtained the correct type of drone – with a nonzero range – and a range class within ±50 cm of the actual range.
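The core of a Fourier-Bessel expansion can be sketched as follows. This is a generic zeroth-order expansion, not the authors' full feature pipeline: the signal is treated as a function of normalized time t on [0, 1] and projected onto the basis functions J0(λm·t), where λm are the positive roots of the Bessel function J0.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j0, j1, jn_zeros

def fourier_bessel_coeffs(x, num_coeffs):
    """Zeroth-order Fourier-Bessel series coefficients of a 1-D signal."""
    t = np.linspace(0.0, 1.0, len(x))
    lam = jn_zeros(0, num_coeffs)           # positive roots of J0
    coeffs = np.empty(num_coeffs)
    for m in range(num_coeffs):
        # weight t comes from the Bessel-function orthogonality relation
        integrand = x * j0(lam[m] * t) * t
        coeffs[m] = (2.0 / j1(lam[m]) ** 2) * trapezoid(integrand, t)
    return coeffs

# Sanity check: a signal that *is* the third basis function should give
# a coefficient vector concentrated at index 2.
t = np.linspace(0.0, 1.0, 4000)
lam = jn_zeros(0, 5)
x = j0(lam[2] * t)
print(np.round(fourier_bessel_coeffs(x, 5), 3))
```

Because the basis functions are orthogonal with weight t, each coefficient isolates one component of the spectrum; vectors of such coefficients are the kind of feature a classifier can be trained on.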

Figure 1: Any point along the main diagonal indicates a correct range class, that is, within ±50 cm of the actual range, while off-diagonal points correspond to classification errors.

To identify more than three types of drones, we tested seven different types, namely the DJI S1000, DJI M600, Phantom 4 Pro, Phantom 4 QP with a quieter set of propellers, Mavic Pro Platinum, Mavic 2 Pro, and Mavic Pro, all tethered in an anechoic chamber in an Air Force laboratory and controlled by an operator to go through a series of propeller maneuvers (idle, left roll, right roll, pitch forward, pitch backward, left yaw, right yaw, half throttle, and full throttle) to fully capture the array of sounds the craft emit. Our trained deep learning network correctly identified the drone type in 84% of our test cases. Figure 1 shows the results of range classification for each outdoor drone flying at line-of-sight ranges from 0 (no drone) to 935 m.

A moth’s ear inspires directional passive acoustic structures

Lara Díaz-García – lara.diaz-garcia@strath.ac.uk
Twitter: @laradigar23
Instagram: @laradigar

Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, Lanarkshire, G1 1RD, United Kingdom

Popular version of 2aSA1 – Directional passive acoustic structures inspired by the ear of Achroia grisella, presented at the 183rd ASA Meeting.

Read the article in Proceedings of Meetings on Acoustics

When most people think of microphones, they think of the ones singers use or that you would find in a karaoke machine, but much smaller microphones are all around us: a current smartphone contains three or four of them. Miniaturization is therefore a constant goal in microphone development. These microphones are strategically placed to achieve directionality, meaning the microphone detects and transmits the desired sound signal while discarding noise coming from directions other than the speaker’s. This functionality is also desirable for hearing implant users: ideally, you want to be able to tell what direction a sound is coming from, as people with unimpaired hearing do.

But combining small size with directionality presents problems. People with unimpaired hearing can tell where sound is coming from by comparing the input received by each of our ears, conveniently sitting on opposite sides of our heads and therefore receiving sounds at slightly different times and with different intensities. The brain can do the math and compute what direction the sound must be coming from. The problem is that, to use this trick, you need two microphones separated far enough that the differences in arrival time and intensity are not negligible, and that goes against microphone miniaturization. What to do, then, if you want a small but directional microphone?
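The two-ear trick above can be written down directly: a plane wave from angle θ reaches two microphones spaced d apart with a time difference Δt = d·sin(θ)/c, so θ = arcsin(c·Δt/d). A minimal Python sketch, with made-up spacing, sample rate, and test signal, and cross-correlation used to estimate the delay:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def direction_from_delay(left, right, fs, mic_spacing):
    """Estimate the arrival angle (degrees) of a source from two channels.

    Finds the inter-channel delay at the peak of the cross-correlation,
    then inverts delta_t = mic_spacing * sin(theta) / SPEED_OF_SOUND.
    """
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    delay = -lag / fs          # seconds by which the right channel lags
    s = np.clip(SPEED_OF_SOUND * delay / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Hypothetical setup: source at 30 degrees, mics 20 cm apart, 48 kHz.
fs, spacing, angle = 48_000, 0.20, 30.0
shift = round(spacing * np.sin(np.radians(angle)) / SPEED_OF_SOUND * fs)
rng = np.random.default_rng(1)
sig = rng.standard_normal(fs // 10)             # 0.1 s of broadband noise
left = np.concatenate([sig, np.zeros(shift)])
right = np.concatenate([np.zeros(shift), sig])  # right channel lags
print(direction_from_delay(left, right, fs, spacing))  # close to 30.0
```

Shrink the spacing and the delay shrinks toward a single sample period, which is exactly why tiny devices struggle with this approach and why a single passively directional structure is attractive.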

When looking for inspiration for novel solutions, scientists often look to nature, where evolution favors energy efficiency and simple designs. Insects are one example: they face the challenge of directional hearing at small scales. The researchers chose to look at the lesser wax moth (fig 1), which was observed to have directional hearing in the 1980s. Males produce a mating call that females can track even when one of their ears is pierced. This implies that, instead of comparing both ears as humans do, these moths achieve directional hearing with just one ear.

Lesser wax moth specimen with scale bar. Image courtesy of Birgit E. Rhode (CC BY 4.0).

The working hypothesis is that directionality is achieved by the asymmetrical shape and material properties of the moth’s ear itself. To test this, the researchers designed a model that resembles the moth’s ear and checked how it behaved when exposed to sound. The model is a thin elliptical membrane with two halves of different thicknesses. They fabricated it with a readily available commercial 3D printer, which allows customized designs and sample fabrication in just a few hours. The samples were then placed on a turning surface, and the membrane’s response to sound arriving from different directions was measured (fig 2). The membrane was found to move more when sound comes from one particular direction than from any other (fig 3), meaning the structure is passively directional. It could therefore inspire a single small directional microphone in the future.

Laboratory setup to turn the sample (in orange, center of the picture) and expose it to sound from the speaker (left of the picture). Researcher’s own picture.
Image adapted from Lara Díaz-García’s original paper. Sounds coming from the 0° direction elicit a stronger movement in the membrane than sounds from other directions.