4aMUa3 – Do musical notes translate to emotions? A neuro-acoustical endeavor with Indian classical music

Shankha Sanyal
Samir Karmakar
Dipak Ghosh
Jadavpur University
Kolkata: 700032, INDIA

Archi Banerjee
Rekhi Centre of Excellence for the Science of Happiness
IIT Kharagpur, 721301, INDIA

Popular version of paper 4aMUa3: Emotions from musical notes? A psycho-acoustic exploration with Indian classical music
Presented Thursday morning, December 10, 2020
179th ASA Meeting, Acoustics Virtually Everywhere
Read the article in Proceedings of Meetings on Acoustics

The Indian classical music (ICM) system is built on 12 notes, each with a definite frequency. Its main feature is the existence of ‘Ragas’, which are unique, definite combinations of these 12 notes. Not every Raga uses all 12 notes; some use only 5 and are usually called ‘pentatonic Ragas’ or scales. Each Raga evokes not a single emotion (rasa) but a superposition of different emotional states such as joy, sadness, anger, disgust, and fear. Changing even a single note of a Raga clip turns it into another Raga, and the associated emotional states change along with it. In this work, for the first time, we study how listeners’ emotion perception changes when a single note in a pentatonic Raga is altered, and when a particular note is replaced by its flat or sharp counterpart. Robust nonlinear signal-processing methods have been used to quantify both the acoustic signal and the brain arousal response for the two pairs of Ragas taken for our study.

The two pairs of ragas chosen for our study:

Raga Durga- sa re ma pa dha sa 

Raga Gunkali– sa RE ma pa DHA sa

The notes ‘re’ and ‘dha’ of Durga are changed to their respective sharp/flat counterparts, which changes Raga Durga into Raga Gunkali.

Raga Durga- sa re ma pa dha sa

Raga Bhupali-  sa re ga pa dha sa

The note ‘ma’ in Raga Durga, when changed to ‘ga’, makes the Raga Bhupali.

Human Response Analysis-
A human response study was conducted with 50 subjects, who were given an emotion chart of the 4 basic emotions and asked to mark the clips with their perceived emotional arousal.

The radar plots for the human response analysis:


(Fig. 1 a-b) Pair 1


(Fig. 1 c-d) Pair 2

It is seen that the change of a single note manifests as a complete change in emotional appraisal at the perceptual level of the listeners. In the next section, the EEG response of 10 participants who listened to these raga clips is studied using nonlinear multifractal tools. Multifractality is an indirect measure of the inherent signal complexity present in the highly non-stationary EEG fluctuations.
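The multifractal analysis quantifies this complexity. The authors' exact pipeline is not reproduced here, but the core of multifractal detrended fluctuation analysis (MFDFA), the standard technique behind such measures, can be sketched in a few lines; the function name and the scale and q choices below are illustrative, not taken from the study:

```python
import numpy as np

def mfdfa_width(signal, scales, q_values, order=1):
    """Minimal MFDFA sketch: returns h(q) for each q and the
    multifractal width h(q_min) - h(q_max)."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())              # integrated profile
    log_F = np.empty((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        var = np.empty(n_seg)
        for v in range(n_seg):                     # detrend each segment
            coef = np.polyfit(t, segs[v], order)
            var[v] = np.mean((segs[v] - np.polyval(coef, t)) ** 2)
        for i, q in enumerate(q_values):
            if q == 0:                             # q -> 0 limit
                log_F[i, j] = 0.5 * np.mean(np.log(var))
            else:                                  # log F_q(s)
                log_F[i, j] = np.log(np.mean(var ** (q / 2.0))) / q
    # h(q) is the slope of log F_q(s) against log s
    log_s = np.log(scales)
    h_q = np.array([np.polyfit(log_s, log_F[i], 1)[0]
                    for i in range(len(q_values))])
    return h_q, h_q[0] - h_q[-1]

# White noise is monofractal: h(2) should come out near 0.5 and the
# spread of h(q) across q should stay small.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
h, width = mfdfa_width(noise, scales=[16, 32, 64, 128, 256],
                       q_values=[-3, -1, 1, 2, 3])
```

A broad h(q) spread (large width) signals a multifractal, highly complex signal; EEG typically shows a wider spectrum than simple noise, and changes in that width are what the arousal plots track.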

The following figures give the averaged multifractality for the frontal and temporal lobes in the alpha and theta EEG frequency ranges for the two pairs of raga clips. P1…P5 represent the different phrases (note sequences) in which the main changes between the two ragas were made.

(Fig. 2 a-b) Pair 1

(Fig. 2 c-d) Pair 2

For the first pair, alpha and theta power decrease considerably in the frontal lobe, while in the temporal lobes, phrase-specific arousal is seen. For the second pair, the arousal is very much phrase-specific. This can be attributed to the fact that, in the human response data, the emotional arousal for the second pair is not strongly opposed between the two ragas; a mixed response is obtained. For the first time, a scientific analysis of how acoustic, perceptual and neural features change when the emotional appraisal changes due to the change of a single frequency in a particular Raga is reported in the context of Indian classical music.

4pAO1 – Oceanic Quieting During a Global Pandemic

John P. Ryan – ryjo@mbari.org
Monterey Bay Aquarium Research Institute
7700 Sandholdt Road
Moss Landing, CA 95039

John E. Joseph – jejoseph@nps.edu
Tetyana Margolina – tmargoli@nps.edu
Department of Oceanography
Naval Postgraduate School
Monterey, CA 93943

Leila T. Hatch – leila.hatch@noaa.gov
Stellwagen Bank National Marine Sanctuary, NOS-NOAA
175 Edward Foster Road
Scituate, MA 02066

Andrew DeVogelaere – andrew.devogelaere@noaa.gov
Monterey Bay National Marine Sanctuary, NOS-NOAA
99 Pacific Street, Bldg. 455A
Monterey, CA  93940

Lindsey E. Peavey Reeves – lindsey.peavey@noaa.gov
NOAA Office of National Marine Sanctuaries
National Marine Sanctuary Foundation
Silver Spring, MD 20910
and
Channel Islands National Marine Sanctuary
University of California, Santa Barbara
Santa Barbara, CA  93106

Brandon L. Southall – brandon.southall@sea-inc.net
Southall Environmental Associates, Inc.
9099 Soquel Drive, Suite 8
Aptos, CA 95003

Simone Baumann-Pickering – sbaumann@ucsd.edu
Scripps Institution of Oceanography, UC San Diego
Ritter Hall 200F
La Jolla, CA 92093

Alison K. Stimpert – astimpert@mlml.calstate.edu
Moss Landing Marine Laboratories
Moss Landing, CA, 95039

Popular version of paper 4pAO1
Presented Thursday afternoon, December 10, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Imagine speaking with only your voice – no technology – and being heard by someone over a hundred kilometers away.  Because sound travels much more powerfully in water than it does in air, great whales can communicate over such vast distances in the ocean.

Whales and other oceanic animals produce and perceive sound for essential life activities – communicating, finding food, navigating, reproducing, and surviving.  This means that we can learn a lot about their underwater lives by recording and analyzing the sounds they produce and hear.  It also means that the noise we introduce into the ocean can cause harm.  Protecting oceanic species and their habitats requires an understanding of the detrimental impacts of our noise and strategies to mitigate these impacts.

There are many sources of anthropogenic noise in the ocean, but the most pervasive and persistent source is that of vessels, notably large commercial ships engaged in global trade.  This worldwide bustling is among the many human activities influenced by the COVID-19 pandemic.  Using sound recordings from the deep sea and information about vessel traffic, we examined oceanic quieting caused by reduced shipping traffic within Monterey Bay National Marine Sanctuary (Figure 1) during this ongoing pandemic.

Oceanic Quieting

Figure 1.  Study context.  Shaded regions represent Monterey Bay National Marine Sanctuary.  The black circle shows the location of a deep-sea (890 m) observatory connected to shore by a cable, through which we recorded sound.  Red and blue lines define nearby shipping lanes.

Our first question was whether the quieting we measured during 2020 could be explained by reduced traffic of large vessels.  We quantified vessel traffic using two independent data sources: (1) economic data representing vessel activity across all California ports, and (2) location data sent from vessels to shore continuously as they transit between ports.  Both of these data sources yielded the same answer: quieting within the sanctuary during January–June 2020 was caused by reduced shipping traffic.  Further, a rebound in noise levels during July 2020 was associated with an increase in vessel traffic.

Our second question was how much quieter 2020 was compared to previous years.  Using the previous two years as a baseline, we found that 2020 was quieter than both previous years during the months of February through June.  Low-frequency noise levels during June 2020, the quietest month having the least shipping activity, were reduced by nearly half compared to June of the previous two years.  For baleen whales that use low-frequency sound to communicate, potential consequences of this quieting include less time exposed to noise-induced interference and stress, and greater distance over which communication can occur.
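The “nearly half” figure follows from the logarithmic decibel scale: a drop of about 3 dB corresponds to a halving of acoustic power. A minimal check of that arithmetic (the 3 dB value is our reading of “nearly half”, not a number from the paper):

```python
# Decibels are logarithmic: a level change of delta_db corresponds to a
# power ratio of 10**(delta_db / 10).
def power_ratio(delta_db):
    return 10 ** (delta_db / 10.0)

print(power_ratio(-3))   # ~0.50: a 3 dB drop is about half the power
```

The same relation works in reverse: measuring how much quieter a month was in dB immediately gives the fraction of acoustic power removed.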

The effects of this pandemic on oceanic noise will differ from place to place, depending on proximity to hubs of maritime activity, the nature of noise produced by each activity, and the degree and timing of pandemic influence.  These changes are being examined across U.S. National Marine Sanctuaries and all around the world.  The COVID-19 pandemic resulted in an unexpected global experiment in oceanic noise, one that could reveal better ways to care for ocean health and its powerful support of humanity.

Study overview

1aBAd1 – Early Detection of Arterial Disease using Medical Ultrasound

Tuhin Roy – troy@ncsu.edu

Murthy Guddati – mnguddat@ncsu.edu
NC State University – Civil Engineering, Raleigh, NC 27695, USA

Matthew W. Urban – Urban.Matthew@mayo.edu

James Greenleaf – jfg@mayo.edu
Mayo Clinic – Department of Radiology, Rochester, MN 55905, USA

Popular version of paper 1aBAd1:  Guided wave inversion for arterial stiffness
Presented Monday morning, December 7, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Cardiovascular disease is a leading cause of death in the United States and worldwide. Atherosclerosis, or the stiffening of arteries, contributes to damage of downstream organs such as the brain and heart. If early atherosclerosis can be identified, it may be treated. Our research is motivated by developing a diagnostic tool for early detection of atherosclerosis using one of the cheapest and safest modalities, medical ultrasound, which can be used widely across the world.

This work targets estimating the stiffness and other mechanical properties of the carotid artery, a well-known indicator of cardiovascular disease. To accomplish this aim, we use a technique called shear wave elastography, in which the wave propagation characteristics measured in the arterial wall are used to estimate the stiffness of the artery. Specifically, we use acoustic radiation force, generated by focused ultrasound waves from an ultrasound probe, to tap on the wall of the artery. This tap creates waves that travel within the artery wall, which are measured with the same ultrasound probe. In this work, we present algorithms that convert the wave motion measured with ultrasound to values of arterial stiffness.
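The link between wave speed and stiffness can be illustrated with the simplest elastography relation, μ = ρc², which holds for bulk shear waves in soft tissue. The guided-wave inversion presented in the paper is considerably more sophisticated (waves in a thin arterial wall are dispersive), so this is only the underlying principle, with illustrative values:

```python
# Bulk shear-wave elastography rule: shear modulus mu = rho * c^2.
# This is a simplification; guided waves in arterial walls require a
# full inversion, as in the paper.
def shear_modulus_kpa(wave_speed_m_s, density_kg_m3=1000.0):
    """Shear modulus in kPa from shear-wave speed in m/s."""
    return density_kg_m3 * wave_speed_m_s ** 2 / 1000.0

print(shear_modulus_kpa(3.0))  # 9.0 kPa for a 3 m/s shear wave
```

Stiffer tissue carries shear waves faster, so tracking the tap-induced wave speed along the wall directly probes arterial stiffness.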

1aAAa2 – Flooring Impact Sound – A Potential Path to Quieter Hospitals

Mike Raley – mike.raley@ecoreintl.com
Ecore International
715 Fountain Avenue
Lancaster, PA 17601

Popular version of paper 1aAAa2
Presented Monday morning, December 7, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Hospitals are noisy places. The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys patients’ perception of their hospital care. Consistently, the quietness of the hospital is one of the lowest scores in the survey. If you have ever spent time in a hospital, that is likely no surprise.

What might be surprising is that a recent study by Bliefnick et al. showed that the acoustic metrics we typically use to evaluate noise in hospitals are not well-correlated with HCAHPS scores. Interestingly, they found that peak occurrence rates, how often a loud sound was above a certain threshold, were well-correlated with HCAHPS scores. In another recent study, Park et al. found that footsteps were a top five contributor to perceived loudness peaks, noise events that are significantly louder than the sound level before and after the event. Along with anecdotal evidence from healthcare designers, these two studies indicate that footsteps could contribute to a patient’s perception of quietness, and reducing noise from footsteps could improve the patient experience.

Test standard ASTM E3133 measures floor impact sound radiation in the space where the impact occurs. This differs from the common impact insulation class (IIC) standard (ASTM E492) that measures impact sound in the room below where the impacts occur.

(Figure 1: 1aAAa2_Fig1_ImpactFoot.jpg)

Using ASTM E3133 we can compare floor impact sound levels for flooring common to hospitals, such as vinyl composition tile (VCT) and standard sheet vinyl, as well as specialty acoustical flooring like sheet vinyl fusion bonded to a rubber backing (Vinyl Rx).

(Figure 2: 1aAAa2_Fig2_FlooringComparison)

Figure 2 shows that Vinyl Rx can significantly reduce floor impact radiated sound, with a 13 dB reduction in the overall sound level compared to VCT (roughly a 60% reduction in perceived loudness). This significant reduction in impact sound levels is an exciting indicator that specialty acoustical flooring has the potential to reduce predicted loudness peaks and improve the patient experience.
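The ~60% figure follows from a common psychoacoustic rule of thumb: perceived loudness (in sones) roughly doubles for every 10 dB increase. A quick check of that arithmetic (the rule of thumb, not the study’s measurement method):

```python
# Rule of thumb: perceived loudness roughly doubles per 10 dB, so a
# reduction of delta_db scales loudness by 2**(-delta_db / 10).
def loudness_reduction(delta_db):
    """Fraction of perceived loudness removed by a delta_db reduction."""
    return 1 - 2 ** (-delta_db / 10.0)

print(round(loudness_reduction(13) * 100))  # ~59% quieter, i.e. roughly 60%
```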

Unfortunately, there are some issues with the ASTM test method that limit its usefulness. In the course of testing to ASTM E3133, we uncovered substantial variation in the sound levels measured using two standard tapping machines from different manufacturers. The variation in tapping machines is evident even on a loud floor like concrete (see Figure 3).

(Figure 3: 1aAAa2_Fig3_BareConc)

The standard has provisions to account for the self-noise of the tapping machine, but those provisions do not correct the discrepancy between the two machines. Further investigation has shown that different flooring actually changes the self-noise of the tapping machine, so it cannot be easily accounted for.

While it may be possible to modify tapping machines to address the variation in self-noise, the most likely solution to the problem is a different impact source. Impact sources like golf balls, cue balls, and ball bearings can create consistent impacts without the self-noise issues of standard tapping machines. These objects are also readily available and easily transportable, so they lend themselves well to field measurements.

5aNSa4 – Preserving workers’ hearing health by improving earplug efficiency

Work carried out by researchers from ÉTS and the IRSST

Bastien Poissenot-Arrigoni – bastien.poissenot.1@ens.etsmtl.ca
Olivier Doutres –  olivier.doutres@etsmtl.ca
École de Technologie Supérieure
1100 Rue Notre-Dame Ouest,
Montréal, QC H3C 1K3

Franck Sgard – franck.sgard@irsst.qc.ca
Chun Hong Law – chunhonglaw@hotmail.com
Institut de recherche en santé et sécurité du travail (IRSST)
505 Boulevard de Maisonneuve O.,
Montréal, QC H3A 3C2

Popular version of paper 5aNSa4 (Earcanal anthropometry analysis for the design of realistic artificial ears)
Presented Friday morning, December 11, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Noise exposure accounts for 22% of worldwide work-related health problems. Excessive noise not only causes hearing loss and tinnitus, but also increases the risk of cardiovascular diseases. To provide protection, workers normally wear earplugs. However, commonly available earplugs are often uncomfortable, since they don’t fit everyone’s ears equally well.

How could we improve the comfort and effectiveness of these earplugs? What aspects of the ear canal must be taken into account? To answer these questions, researchers from the École de technologie supérieure (ÉTS University) and the Institut de recherche en santé et sécurité du travail (IRSST) analyzed the varying structure of ear canals to find a correlation between their shapes and the effectiveness of three commonly-used models of earplugs.

Each one is unique
Just like fingerprints, ear canals are unique. So, to find the best compromise between comfort and efficiency, you need to understand the relationship between the shapes of ear canals and of earplugs.

Earplugs must not only fit properly inside the ear canal, but must also exert pressure against the walls of the canal in order to make a tight seal. However, if the plugs put too much pressure on the ear canal walls, they will cause the wearer pain.

The methodology
To study these aspects, 3D models of volunteer workers’ ear canals were created. These people wore three different types of earplugs.  To obtain the geometry of their ear canals, a moulding material was injected to create canal moulds. These moulds were then scanned by measurement software to establish the geometric characteristics of the ear canal, such as the width at various locations and the overall length.
(Fig. 1: F1_Earcanal_Modelisation.jpg)
(Fig. 2: F2_Earplug_Attenuation_Measurement.jpg)
The noise attenuation of the three models of earplugs was then measured for each volunteer. Two miniature microphones were installed in and around the plugs to measure the noise outside and inside the earplug. A statistical analysis, along with algorithms based on artificial intelligence, helped categorize the morphology of ear canals as a function of the degree of noise mitigation of each earplug.
(Fig. 3: F3_Ear_Anatomy.jpg)
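The attenuation obtained from the two microphones amounts to a level difference between the outside and inside signals. A minimal sketch of that computation (the function and the signals here are illustrative, not the study’s actual processing chain):

```python
import numpy as np

def noise_reduction_db(p_outside, p_inside):
    """Level difference, in dB, between the microphone outside the
    earplug and the one inside it (higher = better attenuation)."""
    rms_out = np.sqrt(np.mean(np.square(p_outside)))
    rms_in = np.sqrt(np.mean(np.square(p_inside)))
    return 20.0 * np.log10(rms_out / rms_in)

# Example: a tone attenuated to 1/10 of its amplitude -> 20 dB reduction
t = np.linspace(0, 1, 8000)
outside = np.sin(2 * np.pi * 500 * t)
inside = 0.1 * outside
print(round(noise_reduction_db(outside, inside)))  # 20
```

In practice this is evaluated per frequency band, since earplugs attenuate high frequencies much more than low ones.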
Concrete applications
The results of the study show that the area of the ear canal called the “first bend” is closely linked to noise attenuation by earplugs. Groups of similar structures created using artificial intelligence will allow researchers to develop a multitude of tools for manufacturers, who will then be able to produce a range of more comfortable ear plugs. This will allow prevention professionals to suggest models suited to each worker’s ear canals.

3pBAb1 – Sonobiopsy uses ultrasound to diagnose brain cancer

Christopher Pacia – cpacia@wustl.edu
Lifei Zhu
Jinyun Yuan
Yimei Yue
Hong Chen – hongchen@wustl.edu

Washington University in St. Louis
4511 Forest Park Ave
St. Louis, MO 63108

Popular version of paper 3pBAb1
Presented Wednesday afternoon, December 9, 2020
179th ASA Meeting, Acoustics Virtually Everywhere

Brain cancer diagnosis starts with magnetic resonance imaging, or MRI, which allows clinicians to locate a tumor in the patient’s brain. However, MRI only provides anatomic information about the brain tumor. To understand the tumor type and to make a decision about future treatment, a neurosurgeon performs a tissue biopsy, drilling a small hole in the skull and carefully extracting a tumor sample with a long hollow needle. Liquid biopsy uses a blood sample to obtain information similar to a brain biopsy, without the need for surgery.

Unlike other cancers, whose small biomarkers, such as DNA, can be found circulating in a patient’s blood, brain cancers are separated from the rest of the body by the blood-brain barrier that does not allow tumor DNA to seep into the blood circulation. Two technologies are combined to briefly open the barrier: focused ultrasound and microbubbles. Focused ultrasound uses low-frequency ultrasonic energy to target tumors deep in the brain. Microbubbles are tiny gas bubbles commonly used in ultrasound imaging. When microbubbles are injected into a blood vessel, they travel along the blood flow to all parts of the patient’s body, including the brain. Once at the brain tumor, focused ultrasound causes the bubbles to expand and contract against the blood vessels in the brain, disrupting the blood-brain barrier and opening a door for the tumor DNA to be released into the blood circulation.

Video demonstrating the sonobiopsy technique to diagnose brain cancer.

The research presented here demonstrates the success of sonobiopsy in increasing the levels of brain tumor biomarkers in the blood for the diagnosis of the most common and deadly brain tumor, glioblastoma, across different biomarker types and animal models. Sonobiopsy was optimized by increasing the amount of ultrasonic energy and the number of microbubbles injected to improve the number of biomarkers released in a mouse model. The utility of sonobiopsy was extended to different sized tumors and may be more effective for larger tumors, as demonstrated in a rat model. The potential for clinical translation was demonstrated by enhancing the release of brain-specific biomarkers in a pig model, whose skull thickness is similar to a human’s.

Sonobiopsy may be integrated into future clinical practice as a complement to MRI and tissue biopsies, as an approach to noninvasively acquire molecular information about the tumor. Its potential impact extends beyond brain tumors to other brain diseases. More studies are needed to better understand and optimize the technology before it can prove its practical value in humans, but this presentation is a step toward the future of brain cancer diagnosis.