Popular version of 1aSC2 – Retroflex nasals in the Mai-Ndombe (DRC): the case of nasals in North Boma B82
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022724
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
“All language sounds are equal but some language sounds are more equal than others” – or, at least, that is the case in academia. While French i’s and English t’s are constantly re-dotted and re-crossed, the vast majority of the world’s linguistic communities remain undocumented, with their unique sound heritage gradually fading into silence. The preservation of humankind’s linguistic diversity relies solely on detailed documentation and description.
Over the past few years, a team of linguists from Ghent, Mons, and Kinshasa has dedicated its efforts to recording the phonetic and phonological oddities of southwest Congo’s Bantu varieties. Among these, North Boma (Figure 1) stands out for its display of rare sounds known as “retroflexes”. These sounds are particularly rare in central Africa, which mirrors a more general state of under-documentation of the area’s sound inventories. Through extensive fieldwork in the North Boma area, meticulous data analysis, and advanced statistical processing, these researchers have unveiled the first comprehensive account of North Boma’s retroflexes. As it turns out, North Boma retroflexes are exclusively nasal, a typologically striking circumstance. Their work, presented in Sydney this year, not only enriches our understanding of these unique consonants but also unveils potential historical implications behind their prevalence in the region.
Figure 1 – the North Boma area
The study highlights the remarkable salience of North Boma’s retroflexes, characterised by distinct acoustic features that sometimes align with, and sometimes deviate from, those reported in the existing literature. This is clearly shown in Figure 2, where the North Boma nasal space is plotted using a technique known as “Multiple Factor Analysis” (MFA), which is well suited to small corpora organised into clear variable groups. As can be seen, the behaviour of the retroflexes differs greatly from that of the other nasals of North Boma. This uniqueness also suggests that their presence in the area may stem from contact with long-lost hunter-gatherer forest languages, providing invaluable insights into the region’s history.
Figure 2 – MFA results show that retroflex and non-retroflex nasals behave very differently in North Boma
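For readers curious how Multiple Factor Analysis works under the hood: each variable group is standardised and down-weighted by its first singular value, so that no single group dominates, and a global PCA is then run on the concatenated table. A minimal numpy sketch (the data and group structure below are invented for illustration, not the North Boma corpus):

```python
import numpy as np

def mfa_scores(groups, n_components=2):
    """Minimal Multiple Factor Analysis: standardise each variable group,
    down-weight it by its first singular value so no group dominates,
    then run a global PCA on the concatenated table."""
    weighted = []
    for X in groups:
        Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardise columns
        s1 = np.linalg.svd(Z, compute_uv=False)[0]  # group's first singular value
        weighted.append(Z / s1)
    G = np.hstack(weighted)                          # balanced global table
    U, S, _ = np.linalg.svd(G, full_matrices=False)  # global PCA (G is centred)
    return U[:, :n_components] * S[:n_components]    # factor scores per token

# toy data: 30 tokens described by two variable groups
# (e.g. spectral measurements vs. durational measurements)
rng = np.random.default_rng(0)
scores = mfa_scores([rng.normal(size=(30, 4)), rng.normal(size=(30, 3))])
```

Each token then gets a position in the factor space, which is what plots like Figure 2 display.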
Extraordinary sound patterns are waiting to be discovered in the least documented language communities of the world. North Boma serves as just one compelling example among many. As we head towards an unprecedented language-loss crisis, the imperative for detailed phonetic documentation becomes increasingly evident.
NUWC Division Newport, NAVSEA, Newport, RI, 02841, United States
Dr. Lauren A. Freeman, Dr. Daniel Duane, Dr. Ian Rooney from NUWC Division Newport and
Dr. Simon E. Freeman from ARPA-E
Popular version of 1aAB1 – Passive Acoustic Monitoring of Biological Soundscapes in a Changing Climate
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018023
Climate change is impacting our oceans and marine ecosystems across the globe. Passive acoustic monitoring of marine ecosystems has been shown to provide a window into the heartbeat of an ecosystem: its relative health, and even information such as how many whales or fish are present in a given day or month. By studying marine soundscapes, we collate all of the ambient noise at an underwater location and attribute parts of the soundscape to wind and waves, to boats, and to different types of biology. Long-term biological soundscape studies allow us to track changes in ecosystems with a single, small instrument called a hydrophone.
I’ve been studying coral reef soundscapes for nearly a decade now, and am starting to have time series long enough to begin to see how climate change affects soundscapes. Some of the most immediate and pronounced impacts of climate change on shallow ocean soundscapes are evident in varying levels of ambient biological sound. We found a ubiquitous trend at research sites in both the tropical Pacific (Hawaii) and sub-tropical Atlantic (Bermuda): warmer water tends to be associated with higher ambient noise levels. Different frequency bands provide information about different ecological processes (such as fish calls, invertebrate activity, and algal photosynthesis). The response of each of these processes to temperature changes is not uniform, but each type of ambient noise increases in warmer water. At some point, ocean warming and acidification will fundamentally change the ecological structure of a shallow-water environment. This would also be reflected in a fundamentally different soundscape, as described by peak frequencies and sound intensity.
While I have not monitored the phase shift of an ecosystem at a single site, I have documented and shown that healthy coral reefs with high levels of parrotfish and reef fish have fundamentally different soundscapes, as reflected in their acoustic signature at different frequency bands, than coral reefs that are degraded and overgrown with fleshy macroalgae. This suggests that long term soundscape monitoring could also track these ecological phase shifts under climate stress and other impacts to marine ecosystems such as overfishing.
A healthy coral reef research site in Hawaii with vibrant corals, many reef fish, and copious nooks and crannies for marine invertebrates to make their homes.
Soundscape segmented into three frequency bands capturing fish vocalizations (blue), parrotfish scrapes (red), and invertebrate clicks along with algal photosynthesis bubbles (yellow). All features show an increase in ambient noise level (PSD, y-axis) with increasing ocean temperature at each site studied in Hawaii.
Jaffe Holden, 114-A Washington Street, Norwalk, CT, 06854, United States
Twitter: @JaffeHolden
Instagram: @jaffeholden
Popular version of 1aAA2-Podcast recording room design considerations and best practices, presented at the 183rd ASA Meeting.
Podcast popularity has been on the rise, with over two million active podcasts as of 2021. There are countless options when choosing a podcast to listen to, and unacceptable audio quality will cause a listener to quickly move on to another option. Poor acoustics in the space where a podcast was recorded are noticeable even by an untrained ear, and listeners may hear differences in room acoustics without even seeing a space. Podcasters use a variety of setups to record episodes, ranging from closets to professional recording spaces. One trend is recording spaces that feel comfortable and look aesthetically pleasing, more like living rooms rather than radio stations.
Figure 1: Podcast studio with a living room aesthetic. Image courtesy of The Qube.
A high-quality podcast recording is one that does not capture sounds other than the podcaster’s voice. Unwanted sounds include noise from mechanical systems, vocal reflections, or ambient noise such as exterior traffic or people in a neighboring room. Listen to the examples below.
More ideal recording conditions: Media courtesy of Home Cooking Podcast, Episode: Kohlrabi – Turnip for What
Less ideal recording conditions: Media courtesy of The Birding Life Podcast, Episode 15: Roberts Bird Guide Second Edition
The first example is a higher quality recording where the voices can be clearly heard. In the second example, the podcast guest is not recording in an acoustically suitable room. The voice reflects off the wall surfaces and detracts from the overall quality and listener experience.
Every room design project comes with its own challenges and considerations related to budget, adjacent spaces, and expected quality. Each room may have different design needs, but best practice recommendations for designing a podcasting room remain the same.
Background noise: Mechanical noise should be controlled so that HVAC systems cannot be heard in a recording. Computers and audio interfaces should ideally be located remotely so that noises, such as computer fans, are not picked up on the recording.
Room shape: Square room proportions should be avoided, as these can cause room modes, or buildups of sound energy at spots in the room, creating an uneven acoustic environment.
Room finishes: Carpet is ideal for flooring, and an acoustically absorptive material should be attached to the wall(s) in the same plane as the podcaster’s voice. Wall materials should be 1-2” thick. Ceiling materials should be acoustically absorptive, and window glass should be angled upward to reduce resonance within the room.
Sound isolation: Strategies for improving sound separation may include sound rated doors or standard doors with full perimeter gaskets, sound isolation ceilings, and full height wall constructions with insulation and multiple layers of gypsum wallboard.
In the example below, the podcast studio (circled) is strategically located at the back of a dedicated corridor for radio and podcasting. It is physically isolated from the main corridor, creating more acoustical separation. Absorptive ceiling tile (not shown) and 2” thick wall panels help limit vocal reflections, and background noise is controlled.
Figure 2: Podcast recording room within a radio and podcasting suite. Image courtesy of BWBR and RAMSA.
While the challenges for any podcast room may differ, the acoustical goals remain the same. With thoughtful consideration of background noise, room shape, finishes, and sound isolation, any room can support high-quality podcast recording.
Metropolitan Acoustics, 1628 JFK Blvd., Suite 1902, Philadelphia, PA, 19103, United States
Popular version of 4pED4-Internships in the acoustical disciplines: How can we attract a more diverse student population?, presented at the 183rd ASA Meeting.
Metropolitan Acoustics has employed 26 interns over a 27-year period. Of those 26, 6 students pursued careers in the acoustics fields; of those 6, only one was both a woman and a minority, and she was a foreign-born student who came to the United States for school. Not one woman or minority from the United States who interned with us from 1995 onward entered the acoustics fields after graduation. This is a very telling microcosm of the Acoustical Society of America as a whole.
Within the acoustics fields, we need to ask ourselves how we are connecting to underrepresented student groups. The engineering disciplines are not very diverse, and the few women and minorities who enter the field often leave for a variety of reasons, most of which lead back to a lack of inclusion. It doesn’t have to be a mountain; even a molehill can send someone off the track of a sustained and productive career in the science and engineering fields.
At Metropolitan Acoustics, the large majority of our interns have been 6-month co-op students rather than 3-month summer interns (23 to 3). For the most part, the students were fairly productive, and we found that interest, enthusiasm, engagement, and work ethic are all factors in their success. Six of the 26 went into careers in acoustics, and one of them works for us currently. The gender and racial breakdown is as follows:
Gender diversity: 20 male, 6 female
Racial diversity: 20 Caucasian, 6 minority; of the 6 minorities, 4 male and 2 female
Out of the 6 interns that went into careers in acoustics, 5 are Caucasian males and 1 is a minority female who is not native to the US
As an organization, what are we doing to attract a more diverse pipeline of candidates to the acoustics fields? And perhaps the bigger question is how we plan to keep them in the field, which is all about inclusiveness. Dedicated student portals on organizational websites, populated with videos, student awards, lists of schools with acoustics programs, and other items, are a start. This information can be shared with underrepresented student organizations such as the National Society of Black Engineers, the Society of Women Engineers, the Society of Hispanic Professional Engineers, the Society of STEM Women of Color, and the American Indian Science and Engineering Society, among others, in the hope that it may light a spark in some students to enter the field.
Purdue University Northwest, Hammond, IN, 46323, United States
Brett Y. Smolenski, North Point Defense, Rome, NY, USA
Darren Haddad, Information Exploitation Branch, Air Force Research Laboratory, Rome, NY, USA
Popular version of 1ASP8-Detection and Classification of Drones using Fourier-Bessel Series Representation of Acoustic Emissions, presented at the 183rd ASA Meeting.
With the proliferation of drones of various sizes and capabilities flying day or night – from medical supply and hobbyist use to surveillance, fire detection, and illegal drug delivery, to name a few – it is imperative to detect their presence and estimate their range for security, safety, and privacy reasons.
Our paper describes a technique for detecting the presence of a drone, as distinct from environmental noise such as birds and moving vehicles, purely from the drone’s audio emissions: the sound of its motors, propellers, and mechanical vibrations. By applying a feature extraction technique that separates a drone’s distinct audio spectrum from that of atmospheric noise, and employing machine learning algorithms, we identified drones from three different classes flying outdoors with the correct class in over 78% of cases. Additionally, we estimated the range of a drone from the observation point to within ±50 cm in over 85% of cases.
We extracted unique features characterizing each type of drone using a mathematical technique known as the Fourier-Bessel series expansion. Using these features, which differentiate not only the drone class but also the drone range, we trained a deep learning network on ground-truth values of drone type, or of range as a discrete variable at 50 cm intervals. When the trained network was tested with new, unseen features, it returned the correct type of drone present, at a nonzero range, and a range class within ±50 cm of the actual range.
Figure 1 – Any point along the main diagonal indicates a correct range class, that is, within ±50 cm of the actual range, while off-diagonal values correspond to classification errors.
For identifying more than three types of drones, we tested seven different models – DJI S1000, DJI M600, Phantom 4 Pro, Phantom 4 QP with a quieter set of propellers, Mavic Pro Platinum, Mavic 2 Pro, and Mavic Pro – all tethered in an anechoic chamber in an Air Force laboratory and controlled by an operator to go through a series of propeller maneuvers (idle, left roll, right roll, pitch forward, pitch backward, left yaw, right yaw, half throttle, and full throttle) to fully capture the array of sounds these craft emit. Our trained deep learning network correctly identified the drone type in 84% of our test cases. Figure 1 shows the results of range classification for each outdoor drone flying at line-of-sight ranges from 0 (no drone) to 935 m.