I know what you did last winter: Bowhead whale unusual winter presence in the Beaufort Sea

Nikoletta Diogou – niki.diogou@gmail.com

Twitter: @NikiDiogou
Instagram: @existentialnyquist

University of Victoria
Victoria, BC V5T 4H3
Canada

Additional authors: William Halliday, Stan E. Dosso, Xavier Mouy, Andrea Niemi, Stephen Insley

Popular version of 1aAB8 – I know what you did last winter: Bowhead whale anomalous winter acoustic occurrence patterns in the Beaufort Sea, 2018-2020
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018030

The Arctic is warming at an alarming pace due to climate change. As waters warm and sea ice shrinks, Arctic ecosystems are responding with adaptations that we have only recently started to observe and strive to understand. Here we present the first evidence of bowhead whales, baleen whales endemic to the Arctic, breaking their annual migration and being detected year-round at their summer grounds.

Whales, positioned at the top of the food web, serve as excellent bio-indicators of environmental change and the health of marine ecosystems. There are more than 16,000 bowhead whales in the Bering-Chukchi-Beaufort (BCB) population in the Western Arctic. The BCB bowheads spend their winters in the ice-free Bering Sea and typically begin a journey of over 6,000 km early each spring to summer feeding grounds in the Beaufort Sea, returning to the Bering Sea in early fall when ice forms on the Beaufort Sea (Figure 1). But how stable is this journey in our changing climate?

Figure 1. Map showing migration route of BCB bowhead whales and the wider study area.

The Amundsen Gulf (Figure 1), in the Canadian Arctic Archipelago of the Beaufort Sea, is an important summer-feeding area for the BCB whales. However, winter inaccessibility and harsh conditions year-round make long-term observation of marine wildlife here challenging. Passive acoustic monitoring has proven particularly useful for monitoring vocal marine animals such as whales in remote areas, and offers a remarkable opportunity to explore where and when whales are present in the cold darkness of Arctic waters. Figure 2 shows examples of two types of bowhead whale vocalizations (songs and moans) together with other biological and environmental sounds recorded in the Amundsen Gulf.

Figure 2. Examples of spectrograms recorded in the Amundsen Gulf of bowhead whale songs on the left, and bowhead whale moans on the right. Spectrograms are visual representations of sound, indicating the pitch (frequency) and loudness of sounds as a function of time. Spectrograms on the left include bearded seal calls (trills) interfering with the bowhead songs. Spectrograms on the right include other ambient sounds (ice noise) that interfere with the bowhead moans. Image adapted from authors’ original paper.

Examples of characteristic calls of bowhead whales recorded during 2018-2019 in the southern Amundsen Gulf.
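
For readers curious how images like those in Figure 2 are made: a spectrogram can be computed from a raw recording in a few lines of Python. The sketch below uses scipy and matplotlib; the filename is a hypothetical stand-in for a real hydrophone clip.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile

# Hypothetical filename; any mono WAV recording works.
fs, x = wavfile.read("hydrophone_clip.wav")

# Long windows give the fine frequency resolution needed for
# low-frequency bowhead calls (mostly below ~1 kHz).
f, t, Sxx = signal.spectrogram(x, fs, nperseg=4096, noverlap=3072)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))  # power in dB
plt.ylim(0, 1000)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```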

In September of 2018 and 2019, we deployed underwater acoustic recorders at five sites in the southern Amundsen Gulf and recorded ocean sound for two years to detect bowhead whale calls and quantify the whales' seasonal and geographic distribution. In particular, we looked for any disruptions to their typical migration patterns. And sure enough, there it was.

A combination of automated and manual analysis of the acoustic recordings revealed that bowhead whales were present at all sites, as shown for three sites (CB50, CB300, and PP) in Figure 3. Bowhead calls dominated the acoustic data from early spring to early fall, during their summer migration, confirming the importance of the area as a core foraging site for this whale population. But surprisingly, the analysis uncovered a fascinating anomaly in bowhead whale behavior: bowhead calls were detected at each site through the winter of 2018-2019, representing the first clear evidence of bowhead whales overwintering at their summer foraging grounds (Figure 3). This is a significant departure from their usual migratory pattern. However, analysis of the 2019-2020 recordings did not indicate whales overwintering that year. Hence, it is not yet clear whether the overwintering was a one-time event or the start of a more stable shift in bowhead whale ecology due to climate change. The variability in bowhead acoustic presence between the two years may be partly explained by differences in sea ice coverage and prey density (zooplankton), as summarized in Figure 4.

Figure 3. Number of days with acoustic detections per month for bowhead whales for sites CB50 (blue), CB300 (green), and PP (red) in 2018-2019. The yellow shaded areas represent time periods at each station when the ice concentration was below 20% (“ice-free”), grey areas when ice concentration was 20%-70% (“shoulder season”), and white areas when ice concentration was greater than 70%. Image adapted from authors’ original paper.
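
The monthly counts plotted in Figure 3 boil down to a simple tally: how many calendar days in each month contained at least one detected call. A minimal sketch of that bookkeeping, assuming a hypothetical table of time-stamped detections (file and column names are illustrative, not the study's actual data format):

```python
import pandas as pd

# Hypothetical input: one row per detected bowhead call at one site.
df = pd.read_csv("detections_CB50.csv", parse_dates=["timestamp"])

# A "detection day" is any calendar day with at least one call.
days = df["timestamp"].dt.normalize().drop_duplicates()

# Tally detection days in each month, as plotted in Figure 3.
days_per_month = days.groupby(days.dt.to_period("M")).size()
print(days_per_month)
```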

Figure 4. Graphical summary of the objectives and major results of the study.

The findings of this study have important implications for understanding how climate change is affecting the Arctic ecosystem, and they highlight the need for continued monitoring of Arctic wildlife. Passive acoustic monitoring can provide data on how whale ecology is responding to a changing environment, which can be used to inform conservation efforts to better protect Arctic ecosystems and their inhabitants.

The ability to differentiate between talkers based on their voice cues changes with age

Yael Zaltz – yaelzalt@tauex.tau.ac.il

Department of Communication Disorders, Steyer School of Health Professions, Faculty of Medicine, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel

Popular version of 4aPP2 – The underlying mechanisms for voice discrimination across the life span
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018859

By using voice cues, a listener can keep track of a specific talker and tell that talker apart from other relevant and irrelevant talkers. Voice cues help listeners understand speech in everyday, noisy environments that include multiple talkers. The present study demonstrates that young children and older adults are not as good at voice discrimination as young adults, and that both groups rely more on top-down, higher-order cognitive resources for voice discrimination.

Four experiments were designed to assess voice discrimination based on two voice cues: the speaker's fundamental frequency (the pitch of the voice) and formant frequencies (the resonant frequencies of the vocal tract, which reflect vocal tract length). Two of the experiments assessed voice discrimination in quiet conditions, one assessed the effect of noise, and one assessed the effect of different testing methods. In all experiments, an adaptive procedure was used to assess voice discrimination. In addition, higher-order cognitive abilities such as non-verbal intelligence, attention, and processing speed were evaluated.

The results showed that the youngest children and the older adults displayed the poorest voice discrimination, with significant correlations between voice discrimination and top-down cognitive abilities: children and older adults with better attention skills and faster processing speed (Figure 1) achieved better voice discrimination. In addition, voice discrimination for the children depended more on comprehensive acoustic and linguistic information than it did for young adults, and the children's ability to form an acoustic template in memory to serve as a perceptual anchor for the task was less efficient. The outcomes provide important insight into the effect of age on basic auditory abilities and suggest that voice discrimination is less automatic for children and older adults, perhaps as a result of less mature or deteriorated peripheral (spectral and/or temporal) processing. These findings may partly explain the difficulties of children and older adults in understanding speech in multi-talker situations.

Figure 1: Individual voice discrimination results for (a) the children and (b) the older adults as a function of their scores on the Trail Making Test, which assesses attention skills and processing speed.
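
The "adaptive procedure" mentioned above is a staircase that makes the voice difference smaller after correct responses and larger after errors, homing in on each listener's threshold. The study's exact rules are not given in the abstract, so the sketch below is a generic two-down, one-up staircase (which converges near 70.7% correct) driven by a toy simulated listener; all parameters are illustrative assumptions.

```python
import random

# Toy listener: more likely to answer correctly when the F0 difference
# between the two "voices" is large relative to its true threshold.
def listener_correct(delta_semitones, threshold=1.0):
    p = 0.5 + 0.5 * min(delta_semitones / (2 * threshold), 1.0)
    return random.random() < p

delta, step = 8.0, 1.0          # starting difference and step (semitones)
direction, consecutive = -1, 0  # we begin by making the task harder
reversals = []

while len(reversals) < 8:
    if listener_correct(delta):
        consecutive += 1
        if consecutive == 2:        # two correct in a row -> harder
            consecutive = 0
            if direction == +1:     # the track just changed direction
                reversals.append(delta)
            direction = -1
            delta = max(delta - step, 0.1)
    else:                           # one error -> easier
        consecutive = 0
        if direction == -1:
            reversals.append(delta)
        direction = +1
        delta += step

# Threshold estimate: average of the last few reversal points.
print("Estimated threshold:", sum(reversals[-4:]) / 4, "semitones")
```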

There is a way to differently define the acoustic environment

Semiha Yilmazer – semiha@bilkent.edu.tr

Department of Interior Architecture and Environmental Design, Bilkent University, Ankara 06800, Turkey

Ela Fasllija, Enkela Alimadhi, Zekiye Şahin, Elif Mercan, Donya Dalirnaghadeh

Popular version of 5aPP9 – A Corpus-based Approach to Define Turkish Soundscape Attributes
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0019179

We hear sound wherever we are: on buses, in streets, in cafeterias, museums, universities, halls, churches, mosques, and so forth. How we describe sound environments (soundscapes) changes according to the different experiences we have throughout our lives. Based on this, we wondered how people delineate sound environments and, thus, how they perceive them.

There are reasons to believe that the affective attributes of soundscapes may be expressed differently in a Turkish context. Considering the historical and cultural differences between countries, we thought it important to assess the sound environment by asking individuals of different ages all over Turkey. To this end, we used a corpus-driven approach (CDA) from cognitive linguistics, which allowed us to collect data from laypersons and identify how they describe soundscapes through their use of adjectives.

The aim of this study is to discover linguistically and culturally appropriate equivalents of soundscape attributes in Turkish. The study involved two phases. In the first phase, an online questionnaire was distributed to native Turkish speakers proficient in English, seeking adjective descriptions of their auditory environment and English-to-Turkish translations. This CDA phase yielded 79 adjectives.


Figure 1. Example public spaces: a library and a restaurant.

Examples: audio 1, audio 2

In the second phase, a semantic-scale questionnaire was used to evaluate recordings of acoustic environments in public spaces. The set comprised seven distinct types of public space: cafes, restaurants, concert halls, masjids, libraries, study areas, and design studios. The recordings were collected at various times of day to capture different levels of crowdedness and other distinctive features. A total of 24 audio recordings were evaluated, each listened to by 10 different participants; in total, 240 randomly assigned evaluations were collected, with participants rating each recording on all 79 adjectives using a five-point Likert scale.


Figure 2. The research process and results.

The results of the study were analyzed using principal component analysis (PCA), which showed that there are two main components of soundscape attributes: Pleasantness and Eventfulness. The components were organized in a two-dimensional model, where each is associated with a main orthogonal axis, such as annoying-comfortable and dynamic-uneventful. This circular organization of soundscape attributes is supported by two additional axes, namely chaotic-calm and monotonous-enjoyable. We also observed that in the Turkish circumplex, the Pleasantness axis was formed by adjectives derived from verbs in a causative form, expressing the emotion the space causes the user to feel. Turkish has a different lexical composition from many other languages: several suffixes are added to a root term to impose different meanings. For instance, the translation of tranquilizer in Turkish is sakin-leş (reciprocal suffix) -tir (causative suffix) -ici (adjective suffix).
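
For readers who want the mechanics: the analysis amounts to running PCA on a matrix of evaluations by adjectives and reading the loadings of the first two components. A minimal sketch with scikit-learn, using random numbers in place of the study's actual ratings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in data: 240 evaluations x 79 adjectives, each a 1-5 Likert rating.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(240, 79)).astype(float)

# Standardize each adjective, then extract the first two components.
z = StandardScaler().fit_transform(ratings)
pca = PCA(n_components=2).fit(z)

print("Variance explained:", pca.explained_variance_ratio_)
# pca.components_ holds the adjective loadings; in the study, these two
# axes correspond to Pleasantness and Eventfulness in the circumplex.
```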

The study demonstrates how cultural differences shape sound perception and the role language plays in expressing it. The method extends beyond soundscape research and may benefit other translation projects. Further investigations could probe parallel cultures and undertake cross-cultural analyses.

What is a webchuck?

Chris Chafe – cc@ccrma.stanford.edu

Stanford University
CCRMA / Music
Stanford, CA 94305
United States

Ge Wang
Stanford University

Michael Mulshine
Stanford University

Jack Atherton
Stanford University

Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058

Take all of computer music, advances in programming digital sound, the web, and web browsers, and create an enjoyable playground for sound exploration. That's Webchuck. Webchuck is a new platform for real-time web-based music synthesis. What would it chuck? Primarily, musical and artistic projects in the form of webapps featuring real-time sound generation. For example, The Metered Tide, shown in the first video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.

Video 1: The Metered Tide

The backing track plays a sonification of a century’s worth of sea level data collected at the location while the musician records the live session. Webchuck has fulfilled a long-sought promise for accessible music making and simplicity of experimentation.

Video 2: The Metered Tide with backing track
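
The Metered Tide's backing track was built in Webchuck, but the underlying sonification idea, mapping each sea-level measurement to a pitch, can be sketched in a few lines of Python. Everything below (the fake data, the pitch range, the note length) is an illustrative assumption, not the piece's actual code:

```python
import numpy as np
from scipy.io import wavfile

# Stand-in data: the real piece sonifies a century of tide-gauge
# measurements from San Francisco Bay; here we fake a slow random drift.
rng = np.random.default_rng(1)
sea_level = np.cumsum(rng.normal(size=1200)) * 0.01   # "meters"

fs, note_dur = 44100, 0.1   # one short tone per measurement

# Map each measurement to a pitch: higher water -> higher frequency,
# spanning three octaves up from 220 Hz (an arbitrary choice).
lo, hi = sea_level.min(), sea_level.max()
freqs = 220.0 * 2.0 ** (3.0 * (sea_level - lo) / (hi - lo))

t = np.linspace(0, note_dur, int(fs * note_dur), endpoint=False)
audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
wavfile.write("sonification.wav", fs, (0.5 * audio * 32767).astype(np.int16))
```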

Example webapps from this new Webchuck critter are popping up rapidly, and a growing body of musicians and students enjoys how easily they can produce music on any system. New projects are fun to program and can be made to appear anywhere. Sharing work and adapting prior examples is a breeze. New webapps are created by programming in the Chuck music programming language and can be extended with JavaScript for open-ended possibilities.

Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding is the most common way of using Webchuck, developing a program by hearing changes as they are made. Rapid prototyping in sound has been made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck's ease of abstraction so that programmers can build up from low-level details to higher-level features.

Combining the expressive music programming power of Chuck with the ubiquity of web browsers is a game changer that researchers have observed in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music and then some.

Virtual Reality Musical Instruments for the 21st Century

Rob Hamilton – hamilr4@rpi.edu
Twitter: @robertkhamilton

Rensselaer Polytechnic Institute, 110 8th St, Troy, New York, 12180, United States

Popular version of 1aCA3 – Real-time musical performance across and within extended reality environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018060

Have you ever wanted to just wave your hands to make beautiful music? Sad that your epic air-guitar skills don't translate into pop/rock superstardom? Given the speed and accessibility of modern computers, it may come as little surprise that artists and researchers have been looking to virtual and augmented reality to build the next generation of musical instruments. Borrowing heavily from video game design, a new generation of digital luthiers is already exploring new techniques to bring the joys and wonders of live musical performance into the 21st century.

Image courtesy of Rob Hamilton.

One such instrument is 'Coretet': a virtual reality bowed string instrument that can be reshaped by the user into familiar forms such as a violin, viola, cello, or double bass. While wearing a virtual reality headset such as Meta's Oculus Quest 2, performers bow and pluck the instrument in familiar ways, albeit without any physical interaction with strings or wood. Sound is generated in Coretet using a computer model of a bowed or plucked string, called a 'physical model,' driven by the motion of the performer's hands and their VR game controllers. And, borrowing from multiplayer online games, Coretet performers can join a shared network server and perform music together.
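
Coretet's own synthesis code is not shown here, but the classic textbook example of a plucked-string physical model is the Karplus-Strong algorithm: a burst of noise circulating through a delay line with a gentle low-pass filter. A minimal sketch of that basic idea (Coretet's actual model is more sophisticated):

```python
import numpy as np
from scipy.io import wavfile

fs, freq, dur = 44100, 220.0, 2.0
N = int(fs / freq)                    # delay-line length sets the pitch
buf = np.random.uniform(-1, 1, N)     # the "pluck": a burst of noise
out = np.empty(int(fs * dur))

for i in range(len(out)):
    out[i] = buf[i % N]
    # Averaging adjacent samples acts as a lossy low-pass filter,
    # so the tone decays and mellows like a real plucked string.
    buf[i % N] = 0.996 * 0.5 * (buf[i % N] + buf[(i + 1) % N])

wavfile.write("pluck.wav", fs, (0.8 * out * 32767).astype(np.int16))
```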

Our understanding of music, and of live musical performance on traditional physical instruments, is tightly coupled to time: specifically, the understanding that when a finger plucks a string, or a stick strikes a drum head, a sound will be generated immediately, without any delay or latency. And while modern computers can stream large amounts of data at nearly the speed of light (significantly faster than the speed of sound), bottlenecks in the CPUs or GPUs themselves, in the code designed to mimic our physical interactions with instruments, or in the network connections that link users and computers often introduce latency, making virtual performances feel sluggish or awkward.
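
A quick back-of-envelope calculation shows how fast these latencies add up. The numbers below are illustrative, not measurements from Coretet:

```python
# Back-of-envelope audio latency: why buffering alone can make a
# virtual instrument feel sluggish. All numbers are examples.
sample_rate = 48_000   # samples per second
buffer_size = 1024     # samples processed per audio callback

buffer_latency_ms = 1000 * buffer_size / sample_rate
print(f"One buffer: {buffer_latency_ms:.1f} ms")      # ~21.3 ms

# Input and output each buffer once; add a 20 ms network round trip
# and the total dwarfs the ~10 ms delay performers start to notice.
total_ms = 2 * buffer_latency_ms + 20
print(f"Estimated total: {total_ms:.1f} ms")          # ~62.7 ms
```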

This research focuses on some common causes for this kind of latency and looks at ways that musicians and instrument designers can work around or mitigate these latencies both technically and artistically.

Coretet overview video: Video courtesy of Rob Hamilton.

Diving into the Deep End: Exploring an Extraterrestrial Ocean

Grant Eastland – grant.c.eastland.civ@us.navy.mil

Naval Undersea Warfare Center Division, Keyport, Test and Evaluation Department, Keyport, Washington, 98345, United States

Popular version of 4aPAa12 – Considerations of undersea exploration of an extraterrestrial ocean
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018848

As we venture beyond our home planet to explore our neighbors in the solar system, we have encountered the most extreme environments we could have imagined, presenting some of the greatest engineering challenges. Probes and landers have measured dangerous temperatures, atmospheres, and surfaces that would be deadly for human explorers. However, no extraterrestrial ocean environment has been studied beyond remote observation, even though oceans are among the most unexplored portions of our own planet. Remarkably, flyby planetary probes have found evidence of possible oceans on two of Jupiter's moons, Europa and Ganymede, and of a potential ocean, as well as lakes and rivers, on Titan, a moon of Saturn. Jupiter's moon Europa could have a saltwater ocean between 60 and 90 miles deep, covered by up to 15 miles of ice. For comparison, the deepest point in Earth's ocean is about 7.5 miles, roughly 8 to 12 times shallower. The extreme pressures at such depths would be difficult to withstand with current technology, and acoustic propagation could behave differently as well. At those pressures, water may remain liquid down to about 8°F (~260 K), temperatures not seen in our oceans. The effects would appear in the speed of sound, which is shown in Figure 1 through a creative and imaginative modelling scheme that was numerically simulated. The methods mixed Earth data with predictive speculation and physical intuition.

Figure 1. Imaginative scientific freedom: determining the speed of sound in the deep ocean on Europa beneath a 30 km ice sheet. The water stays liquid down to potentially 260 K (8°F), heated by a currently unknown mechanism, probably related to Jupiter's gravitational pull.
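
To see why Europa forces this kind of imaginative modelling, consider how sound speed is computed in Earth's oceans: from empirical fits such as Mackenzie's (1981) formula, which are only valid over Earth-like temperatures, salinities, and depths. The sketch below shows the formula and why Europa lies far outside it; the Europa numbers in the comments are speculative, not the study's results.

```python
# Mackenzie (1981) empirical formula for sound speed in seawater,
# valid for T = 2-30 degC, S = 25-40 ppt, and depths to ~8000 m on Earth.
def mackenzie_c(T, S, D):
    """T in degC, S in ppt (salinity), D in meters; returns m/s."""
    return (1448.96 + 4.591*T - 5.304e-2*T**2 + 2.374e-4*T**3
            + 1.340*(S - 35) + 1.630e-2*D + 1.675e-7*D**2
            - 1.025e-2*T*(S - 35) - 7.139e-13*T*D**3)

# Earth's deepest trenches (~11 km) already sit at the edge of the fit:
for D in (0, 4000, 8000):
    print(f"{D:>5} m: {mackenzie_c(2.0, 35.0, D):.1f} m/s")

# Europa's ocean may be ~100-150 km deep at ~260 K (about -13 degC),
# far outside this formula's range; that gap is why the study resorts
# to a speculative model blending Earth data with physical intuition.
```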

On Titan, a moon of Saturn, there are lakes and rivers of hydrocarbons such as methane and ethane. For these compounds to be liquid, the temperature must be about -297°F. We know how sound interacts with methane on Earth, because it is a gas under our conditions, but we would have to bring it to cryogenic temperatures to study its acoustics as a liquid. We would also have to build systems that can swim around at such temperatures to explore what lies underneath. At liquid-water temperatures, like those of some of the extraterrestrial oceans predicted to exist, conditions may still be amenable to life. But discovering that life will require independent systems making measurements and gathering information, letting humans see through the eyes of our technology. The drive to explore extreme ocean environments could provide evidence of life beyond Earth, since where there is water, life is possible.