1aSC3 – Acoustic changes of speech during the later years of life

Benjamin Tucker – benjamin.tucker@ualberta.ca
Stephanie Hedges – shedges@ualberta.ca
Department of Linguistics
University of Alberta
Edmonton, Alberta T6G 2E7
Canada

Mark Berardi – mberardi@msu.edu
Eric Hunter – ejhunter@msu.edu
Department of Communicative Sciences and Disorders
Michigan State University
East Lansing, Michigan 48824

Popular version of paper 1aSC3
Presented Monday morning 11:15 AM – 12:00 PM, December 7, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Research into the perception and production of the human voice has shown that the human voice changes with age (e.g., Harnsberger et al., 2008). Most of the previous studies have investigated speech changes over time using groups of people of different ages, while a few studies have tracked how an individual speaker’s voice changes over time. The present study investigates three male speakers and how their voices change over the last 30 to 50 years of their lives.

We used publicly available archives of speeches given to large audiences on a semi-regular basis (generally with a couple of years between each instance). The speeches span the last 30-50 years of each speaker’s life, meaning that we have samples ranging from the speakers’ late 40s to early 90s. We extracted 5-minute samples (recordings and transcripts) from each speech. We then used the Penn forced-alignment system, which finds and marks the boundaries of individual speech sounds, to identify word and sound boundaries. Acoustic characteristics were then extracted from the speech signal with a custom script for the Praat software package.
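To give a concrete sense of this extraction step, the sketch below uses Parselmouth, a Python interface to Praat, to pull fundamental frequency, formant values and durations at vowel midpoints. The authors used their own custom Praat script; the file name, vowel boundaries and default analysis settings below are illustrative assumptions only.

import parselmouth
from parselmouth.praat import call

# A minimal sketch (not the authors' script): acoustic measures at vowel midpoints.
snd = parselmouth.Sound("speech_sample.wav")    # one 5-minute excerpt (hypothetical file)
pitch = snd.to_pitch()                          # fundamental frequency track
formant = snd.to_formant_burg()                 # formant tracks (F1, F2, ...)

# Suppose the forced aligner produced (vowel label, start, end) tuples, in seconds.
vowels = [("AA", 1.23, 1.31), ("IY", 2.05, 2.12)]   # hypothetical boundaries

for label, start, end in vowels:
    mid = 0.5 * (start + end)
    f0 = call(pitch, "Get value at time", mid, "Hertz", "Linear")
    f1 = call(formant, "Get value at time", 1, mid, "Hertz", "Linear")
    f2 = call(formant, "Get value at time", 2, mid, "Hertz", "Linear")
    print(label, round(end - start, 3), f0, f1, f2)  # duration, pitch, vowel formants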

In the present analysis, we investigate changes in the vowel space (the acoustic range of vowels a speaker has produced), fundamental frequency (what a listener hears as pitch), the duration of words and sounds (segments), and speech rate. We model the acoustic characteristics of our speakers using Generalized Additive Models (Hastie & Tibshirani, 1990), which allow for an investigation of non-linear changes over time.
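For readers who want to see what such a model looks like in code, here is a minimal sketch using the pygam package in Python; the data points are placeholders, and the authors’ actual software, predictors and smooth terms may differ.

import numpy as np
from pygam import LinearGAM, s

# Placeholder data: fundamental frequency (Hz) at a handful of ages for one speaker.
age = np.array([48, 55, 62, 70, 78, 85, 91], dtype=float).reshape(-1, 1)
f0 = np.array([118, 112, 106, 101, 104, 109, 115], dtype=float)

gam = LinearGAM(s(0)).fit(age, f0)                  # one smooth (non-linear) term over age
age_grid = np.linspace(48, 91, 100).reshape(-1, 1)
trajectory = gam.predict(age_grid)                  # the fitted lifespan trajectory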

The results are discussed in terms of vocal changes over the lifespan in the speakers’ later years. Figure 1 illustrates the change in one speaker’s vowel space as he ages. We find that for this speaker the vowel space shifts to lower frequencies with age.

Figure 1 – An animation of Speaker 1’s vowel space and how it changes over a period of 50 years. Each colored circle represents a different decade.

We also find a similar effect for fundamental frequency across all three speakers (Figure 2): the average fundamental frequency of their voices gets lower as they age and then begins to rise after the age of 70. The same pattern holds for word and segment duration: on average, as our three speakers age their speech (at least when giving public speeches) gets faster and then slows down after around the age of 70.

Figure 2: Average fundamental frequency of our speakers’ speech as they age.

Figure 3: Average speech rate in syllables per second of our speakers’ speech as they age.

While on average our three speakers show a change in the progression of their speech at the age of 70, each speaker has their own unique speech trajectory. From a physiological standpoint, our data suggest that with age come not only laryngeal changes (changes to the voice) but also a decrease in respiratory health – especially expiratory volume – as has been reflected in previous studies.

2pUWb2 – Study of low frequency flight recorder detection

I Yun Su – r07525010@ntu.edu.tw
Wen-Yang Liu – r06525035@ntu.edu.tw
Chi-Fang Chen – chifang@ntu.edu.tw
Engineering Science and Ocean Engineering,
National Taiwan University,
No. 1 Roosevelt Road Sec.#4
Taipei City, Taiwan

Li-Chang Chuang – eric@ttsb.gov.tw
Kai-Hong Fang – khfang@ttsb.gov.tw
Taiwan Transportation Safety Board
11th Floor, 200, Section 3,
Beixin Road, Xindian District,
New Taipei City, Taiwan

Popular version of paper 2pUWb2
Presented Tuesday afternoon, December 8, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

A flight recorder is installed in every aircraft to record the flight status. When an aviation accident occurs, this recorder can help clarify the cause of the incident. Furthermore, if the plane crashes into the ocean, the underwater locator beacon (ULB) inside the flight recorder is triggered and emits a sound that the rescue team can use to locate it.

In 2009, there was a serious accident involving Air France Flight 447. In its final report, the French Civil Aviation Safety Investigation Authority recommended that ULBs have an extended transmission time of up to 90 days and an increased transmission range. In Taiwan, every aircraft already carries a flight recorder with a 37.5 kHz ULB in the tail section, and the Taiwan Transportation Safety Board is now considering adding an 8.8 kHz ULB in the belly of the aircraft (Picture 1).


Picture 1: The positions of the 37.5 kHz and 8.8 kHz ULB on the plane.

The main purpose of this study is to understand the performance of the newly purchased 8.8 kHz ULB, the DUKANE SEACOM DK180. First, we simulated both ULBs to compare their detection ranges (DR); according to the beacon specifications, the source level (SL) of both is 160 dB re 1 μPa.

To simulate the DR, the transmission loss (TL), which is affected by many different environmental parameters, must be determined first. This study uses a Taiwan environmental database and Gaussian beam propagation modeling to calculate the TL. Once the TL is acquired, the noise level (NL), which also affects the DR, has to be determined; in general, the lower the frequency, the longer the DR. The DR can be determined from the passive sonar equation, from which the figure of merit (FOM) is derived as FOM = SL – NL – DT, where DT is the detection threshold and the FOM is the maximum TL at which the beacon can still be detected. The DR is the range at which the TL curve crosses the FOM. In this study, the DT is set to zero. At Point A, the NL at 8.8 kHz is 78 dB re 1 μPa and at 37.5 kHz is 65 dB re 1 μPa, so the FOM at 8.8 kHz is 82 dB re 1 μPa and at 37.5 kHz is 95 dB re 1 μPa. Even with the smaller FOM, the DR of the 8.8 kHz ULB at Point A is about twice that of the 37.5 kHz ULB, because the TL at the lower frequency is much smaller (Picture 2).

Picture 2: Detection ranges of the 8.8 kHz ULB and the 37.5 kHz ULB at Point A.
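A much simplified sketch of the detection-range logic is shown below: spherical spreading plus Thorp’s empirical absorption formula stands in for the full Gaussian beam TL model used in the study, and the source and noise levels are the values quoted for Point A. The exact ranges it prints are therefore only indicative.

import numpy as np

def thorp_absorption_db_per_km(f_khz):
    # Thorp's empirical seawater absorption formula (dB/km, frequency in kHz).
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def detection_range_m(sl_db, nl_db, f_khz, dt_db=0.0):
    fom = sl_db - nl_db - dt_db                     # figure of merit = maximum tolerable TL
    r = np.arange(1.0, 50_000.0, 1.0)               # candidate ranges in metres
    tl = 20 * np.log10(r) + thorp_absorption_db_per_km(f_khz) * r / 1000.0
    return r[np.argmax(tl > fom)]                   # first range where TL exceeds the FOM

print(detection_range_m(160, 78, 8.8))    # 8.8 kHz ULB at Point A (longer range)
print(detection_range_m(160, 65, 37.5))   # 37.5 kHz ULB at Point A (shorter range)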

We also conducted a field experiment offshore of Miaoli, Taiwan. The results likewise show that the new 8.8 kHz ULB has a smaller TL and a longer DR. In summary, with an additional 8.8 kHz ULB, a more precise prediction of the beacon location could be obtained.

2pBAc – Targeting Sound with Ultrasound in the Brain

Scott Schoen Jr – scottschoenjr@gatech.edu
Costas Arvanitis – costas.arvanitis@gatech.edu

Georgia Tech
901 Atlantic Dr
Atlanta, GA 30318

Popular version of 2pBAc – Spatial Characterization of High Intensity Focused Ultrasound Fields in the Brain
Presented Tuesday afternoon, December 8, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

The pitch of a sound and the size of its source are intrinsically connected. This is why, for instance, low-register instruments (such as a tuba or double bass) are large, while higher pitched ones may be very small (like a piccolo or triangle). Sound travels in waves, and the product of the length of the wave (wavelength) and its pitch (frequency) is a constant (namely, the speed of sound).

Consequently, the wavelength of sounds we can hear may be between about 15 m and 2 cm. But just as there are wavelengths of light we cannot see (such as ultraviolet and X-rays), there exists sound with much smaller wavelengths. Ultrasound, so called since its frequency is above our hearing range, is able to travel through human tissue and enables noninvasive imaging with millimeter resolution.
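To make the numbers concrete, the wavelength is simply the speed of sound divided by the frequency; the sound speeds below (343 m/s in air, about 1540 m/s in soft tissue) are standard reference values rather than figures from the paper.

def wavelength_m(speed_m_per_s, frequency_hz):
    # Wavelength = speed of sound / frequency.
    return speed_m_per_s / frequency_hz

print(wavelength_m(343, 20))          # ~17 m: lowest audible pitch in air
print(wavelength_m(343, 20_000))      # ~1.7 cm: highest audible pitch in air
print(wavelength_m(1540, 1_000_000))  # ~1.5 mm: 1 MHz ultrasound in soft tissue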

Since sound is pressure, it also carries energy. And, much like sunlight through a magnifying glass, sound energy may be focused to a small area to cause heating. This technique has allowed noninvasive and minimally invasive therapy, where focused ultrasound (FUS) creates small regions of high heat or forces to burn or manipulate the tissue. This is especially important for brain diseases, where surgery is particularly challenging.

Fig. 1 – Human cells are sensitive to sound frequencies from about 20 Hz to 20 kHz (left). However, focusing sound to a small area requires small wavelengths—and thus much higher frequencies (right). Not to Scale

Interestingly, it turns out that at very high pressures, so-called nonlinear acoustic effects become important, and the sound begins to interact with itself. One consequence is that if the FUS contains two very high frequencies, say 995 kHz and 1005 kHz, the focal spot will be just a few millimeters across, on the order of 1 mm. However, the high-pressure interaction will also generate energy at 1005 kHz – 995 kHz = 10 kHz, within the audible and tactile range.
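As a toy illustration of this mixing effect, the snippet below passes two closely spaced high-frequency tones through a weak quadratic nonlinearity and finds the resulting 10 kHz difference component. It is just the arithmetic of quadratic mixing, not a model of ultrasound propagation in tissue; the sampling rate and amplitudes are arbitrary choices.

import numpy as np

fs = 20_000_000                       # 20 MHz sampling rate (arbitrary for the demo)
t = np.arange(0, 0.005, 1 / fs)       # 5 ms of signal
p = np.sin(2 * np.pi * 995_000 * t) + np.sin(2 * np.pi * 1_005_000 * t)
p_nl = p + 0.1 * p ** 2               # weak quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(p_nl))
freqs = np.fft.rfftfreq(len(p_nl), 1 / fs)
band = (freqs > 1_000) & (freqs < 100_000)          # search well below the pump tones
print(freqs[band][np.argmax(spectrum[band])])       # ~10,000 Hz difference frequency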

This work describes our use of simulations and experiments to understand how this low frequency energy might be realized for FUS through the skull. Understanding the strength and distribution of low frequency energy generated with high frequency FUS may open a new range of therapeutic and diagnostic capabilities in one of the most complex and medically imperative organs: the brain.

4aMUa3 – Musical Notes translate to Emotions? A neuro-acoustical endeavor with Indian Classical music

Shankha Sanyal
Samir Karmakar
Dipak Ghosh
Jadavpur University
Kolkata: 700032, INDIA

Archi Banerjee
Rekhi Centre of Excellence for the Science of Happiness
IIT Kharagpur, 721301, INDIA

Popular version of paper 4aMUa3 Emotions from musical notes? A psycho-acoustic exploration with Indian classical music
Presented Thursday morning, December 10, 2020
179th ASA Meeting, Acoustics Virtually Everywhere
Read the article in Proceedings of Meetings on Acoustics

The Indian classical music (ICM) system is based on a set of 12 notes, each having a definite frequency. The main feature of this music form is the ‘Raga’: each Raga is a unique, definite combination of these notes, though all 12 need not be present in every Raga; some have only 5 notes and are usually called pentatonic Ragas or scales. Each Raga evokes not a single emotion (rasa) but a superposition of different emotional states such as joy, sadness, anger, disgust and fear. Changing even a single note of a Raga clip turns it into another Raga, and the associated emotional states change along with it. In this work, for the first time, we study how listeners’ emotion perception changes when a single note of a pentatonic Raga is altered, and when a particular note is replaced by its flat/sharp counterpart. Robust nonlinear signal processing methods have been used to quantify the acoustic signal as well as the brain arousal response for the two pairs of Ragas taken for our study.

The two pairs of Ragas chosen for our study:

Raga Durga- sa re ma pa dha sa 

Raga Gunkali– sa RE ma pa DHA sa

The notes ‘re’ and ‘dha’ of Durga are changed to their flat/sharp counterparts (shown in capitals above), which changes Raga Durga into Raga Gunkali.

Raga Durga- sa re ma pa dha sa

Raga Bhupali-  sa re ga pa dha sa

The note ‘ma’ in Raga Durga, when changed to ‘ga’, makes it Raga Bhupali.
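These relationships can be written down compactly; in the sketch below, the semitone offsets and the equal-tempered tuning from an arbitrary tonic of 240 Hz are illustrative simplifications (Indian classical intonation is not strictly equal-tempered, and the tonic is not fixed).

# Scale degrees of the three Ragas, in semitones above the tonic 'sa'.
# Capitalised RE/DHA denote the flat (komal) counterparts of re/dha.
ragas = {
    "Durga":   {"sa": 0, "re": 2, "ma": 5, "pa": 7, "dha": 9},
    "Gunkali": {"sa": 0, "RE": 1, "ma": 5, "pa": 7, "DHA": 8},
    "Bhupali": {"sa": 0, "re": 2, "ga": 4, "pa": 7, "dha": 9},
}

tonic_hz = 240.0   # arbitrary illustrative tonic
for name, notes in ragas.items():
    freqs = {note: round(tonic_hz * 2 ** (semitone / 12), 1) for note, semitone in notes.items()}
    print(name, freqs)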

Human Response Analysis-
A human response study was conducted with 50 subjects, who were given an emotion chart of the four basic emotions and asked to mark the clips with their perceived emotional arousal.

The radar plots for the human response analysis:


(Fig. 1 a-b) Pair 1


(Fig. 1 c-d) Pair 2

It is seen that the change of a single note manifests in a complete change in emotional appraisal at the perceptual level of the listeners. In the next section, the EEG responses of 10 participants who listened to these Raga clips are studied using nonlinear multifractal tools. Multifractality is an indirect measure of the inherent signal complexity present in the highly non-stationary EEG fluctuations.
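For readers curious what a multifractal measure involves, the sketch below outlines the standard multifractal detrended fluctuation analysis (MFDFA) procedure on a generic signal; the authors’ exact implementation, scales, q values and preprocessing may differ.

import numpy as np

def mfdfa(signal, scales, qs, order=1):
    # Multifractal detrended fluctuation analysis: the spread of the generalized
    # Hurst exponents h(q) across q is one common index of multifractality.
    profile = np.cumsum(signal - np.mean(signal))          # integrated, mean-removed signal
    F = np.zeros((len(qs), len(scales)))
    for si, s in enumerate(scales):
        n_seg = len(profile) // s
        rms = np.empty(n_seg)
        t = np.arange(s)
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local detrending
            rms[v] = np.sqrt(np.mean((seg - trend) ** 2))
        for qi, q in enumerate(qs):
            if q == 0:
                F[qi, si] = np.exp(0.5 * np.mean(np.log(rms ** 2)))
            else:
                F[qi, si] = np.mean(rms ** q) ** (1.0 / q)
    # h(q) is the slope of log F_q(s) versus log s.
    return np.array([np.polyfit(np.log(scales), np.log(F[qi]), 1)[0] for qi in range(len(qs))])

# Illustrative use on synthetic noise standing in for one EEG channel:
rng = np.random.default_rng(0)
hq = mfdfa(rng.standard_normal(10_000),
           scales=np.array([16, 32, 64, 128, 256]),
           qs=np.array([-5, -3, -1, 0, 1, 3, 5]))
print(hq)   # white noise stays near 0.5 for all q; real EEG shows a wider spread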

The following figures give the averaged multifractality for the frontal and temporal lobes in the alpha and theta EEG frequency ranges for the two pairs of Raga clips. P1–P5 represent the different phrases (note sequences) in which the main changes between the two Ragas occur.

(Fig. 2 a-b) Pair 1

(Fig. 2 c-d) Pair 2

For the first pair, alpha and theta power decreases considerably in the frontal lobe, while in the temporal lobes phrase-specific arousal is seen. For the second pair, the arousal is very much specific to the phrases. This can be attributed to the human response data, which showed that the emotional arousal for the second pair was not strongly opposed; rather, a mixed response was obtained. For the first time in the context of Indian classical music, we report a scientific analysis of how the acoustic, perceptual and neural features change when the emotional appraisal changes due to the change of a single note in a particular Raga.

4pAO1 – Oceanic Quieting During a Global Pandemic

John P. Ryan – ryjo@mbari.org
Monterey Bay Aquarium Research Institute
7700 Sandholdt Road
Moss Landing, CA 95039

John E. Joseph – jejoseph@nps.edu
Tetyana Margolina – tmargoli@nps.edu
Department of Oceanography
Naval Postgraduate School
Monterey, CA 93943

Leila T. Hatch – leila.hatch@noaa.gov
Stellwagen Bank National Marine Sanctuary, NOS-NOAA
175 Edward Foster Road
Scituate, MA 02066

Andrew DeVogelaere – andrew.devogelaere@noaa.gov
Monterey Bay National Marine Sanctuary, NOS-NOAA
99 Pacific Street, Bldg. 455A
Monterey, CA  93940

Lindsey E. Peavey Reeves – lindsey.peavey@noaa.gov
NOAA Office of National Marine Sanctuaries
National Marine Sanctuary Foundation
Silver Spring, MD 20910
and
Channel Islands National Marine Sanctuary
University of California, Santa Barbara
Santa Barbara, CA  93106

Brandon L. Southall – brandon.southall@sea-inc.net
Southall Environmental Associates, Inc.
9099 Soquel Drive, Suite 8
Aptos, CA 95003

Simone Baumann-Pickering – sbaumann@ucsd.edu
Scripps Institution of Oceanography, UC San Diego
Ritter Hall 200F
La Jolla, CA 92093

Alison K. Stimpert – astimpert@mlml.calstate.edu
Moss Landing Marine Laboratories
Moss Landing, CA, 95039

Popular version of paper 4pAO1
Presented Thursday afternoon, December 10, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Imagine speaking with only your voice – no technology – and being heard by someone over a hundred kilometers away.  Because sound travels much more powerfully in water than it does in air, great whales can communicate over such vast distances in the ocean.

Whales and other oceanic animals produce and perceive sound for essential life activities – communicating, finding food, navigating, reproducing, and surviving.  This means that we can learn a lot about their underwater lives by recording and analyzing the sounds they produce and hear.  It also means that the noise we introduce into the ocean can cause harm.  Protecting oceanic species and their habitats requires an understanding of the detrimental impacts of our noise and strategies to mitigate these impacts.

There are many sources of anthropogenic noise in the ocean, but the most pervasive and persistent source is that of vessels, notably large commercial ships engaged in global trade.  This worldwide bustling is among the many human activities influenced by the COVID-19 pandemic.  Using sound recordings from the deep sea and information about vessel traffic, we examined oceanic quieting caused by reduced shipping traffic within Monterey Bay National Marine Sanctuary (Figure 1) during this ongoing pandemic.


Figure 1.  Study context.  Shaded regions represent Monterey Bay National Marine Sanctuary.  The black circle shows the location of a deep-sea (890 m) observatory connected to shore by a cable, through which we recorded sound.  Red and blue lines define nearby shipping lanes.

Our first question was whether the quieting we measured during 2020 could be explained by reduced traffic of large vessels.  We quantified vessel traffic using two independent data sources: (1) economic data representing vessel activity across all California ports, and (2) location data sent from vessels to shore continuously as they transit between ports.  Both of these data sources yielded the same answer: quieting within the sanctuary during January–June 2020 was caused by reduced shipping traffic.  Further, a rebound in noise levels during July 2020 was associated with an increase in vessel traffic.

Our second question was how much quieter 2020 was compared to previous years.  Using the previous two years as a baseline, we found that 2020 was quieter than both previous years during the months of February through June.  Low-frequency noise levels during June 2020, the quietest month having the least shipping activity, were reduced by nearly half compared to June of the previous two years.  For baleen whales that use low-frequency sound to communicate, potential consequences of this quieting include less time exposed to noise-induced interference and stress, and greater distance over which communication can occur.
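Because noise levels are measured in decibels, ‘nearly half’ refers to acoustic energy: a drop of roughly 3 dB corresponds to about a factor-of-two reduction in sound intensity, as the small conversion below illustrates (the 3 dB figure is a generic example, not a value reported in the paper).

def intensity_ratio(delta_db):
    # Convert a change in decibels to a ratio of sound intensity.
    return 10 ** (delta_db / 10)

print(intensity_ratio(-3))   # ~0.50: a 3 dB drop is roughly half the intensity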

The effects of this pandemic on oceanic noise will differ from place to place, depending on proximity to hubs of maritime activity, the nature of noise produced by each activity, and the degree and timing of pandemic influence.  These changes are being examined across U.S. National Marine Sanctuaries and all around the world.  The COVID-19 pandemic resulted in an unexpected global experiment in oceanic noise, one that could reveal better ways to care for ocean health and its powerful support of humanity.
