7 December 2020 – IYS International Student Competition – EXTENSION

Student Competitions Updates

Update IYS – 7 December 2020 – The IYS Student Competition was first launched in late 2019, with a submission deadline at the end of 2020. The enthusiasm and efforts of the competition organiser Sergio Luzzi and his team in Italy have been outstanding.

In view of the many challenges during the latter part of 2020, it has been agreed to extend the deadline from the end of December 2020 to the end of April 2021. This allows for an extra school term in both the northern and southern hemispheres.

Updated information on the competition is available at https://sound2020.org/society/student-competition/. Rules for participation and submission details are given in IYS-2020_Competition-Regulations.

Primary school students are asked to produce drawings inspired by the motto of IYS 2020, “Importance of Sound for Society and the World”, and, possibly, by the melody and refrain of the song “The Sound of The World”. High school students are asked to write a stanza of four verses (lines) in their mother tongue and/or in English, inspired by the melody and refrain of the song “The Sound of The World” as well as by the motto of IYS 2020, “Importance of Sound for Society and the World”.

Competitions organized by La Semaine du Son

In conjunction with the International Year of Sound, the partner organisation La Semaine du Son has launched a competition for tertiary students, “2068, MAKE PLACE FOR SOUND!”.

The aim of this competition is to encourage students specializing in space design to collaborate with students specializing in sound, developing ideas on the sound design of our living spaces, beyond the question of noise control, in order to imagine the soundscapes of tomorrow’s public places. The competition is open to students of architecture, urban planning, landscape, design, art, engineering, audio engineering, music, etc.



Another competition for students by La Semaine du Son is on the theme “WHEN SOUND CREATES IMAGE”.

Students are provided with a soundtrack composed by a well-respected film score composer, and the challenge is to produce a video to complement the music.

Deadlines for both of these competitions are currently the end of December.

AiF Press Releases – 180th Meeting of Acoustical Society of America

Acoustics in Focus: Virtual Press Releases

180th Meeting of Acoustical Society of America

Tuesday, June 8, 2021, at 11:30 a.m. Eastern Time (US and Canada)

Wednesday, June 9, 2021, at 11:30 a.m. Eastern Time (US and Canada)

Thursday, June 10, 2021, at 10:00 a.m. Eastern Time (US and Canada)



2pPP8 – Teenagers with ADHD may perceive loud sounds in a different way Alexandra Moore

Teenagers with ADHD may perceive loud sounds in a different way

Alexandra Moore – Alexandra.Moore@nemours.org
Shelby Sydenstricker – Shelby.Sydenstricker@nemours.org
Kyoko Nagao – Kyoko.Nagao@nemours.org
Nemours Biomedical Research
1600 Rockland Road
Wilmington, DE 19803


Popular version of paper ‘2pPP8’

Presented Wednesday afternoon, June 9th, 2021

180th ASA Meeting, Acoustics in Focus


Hyper-sensitivity and hypo-sensitivity (increased and decreased reactions to sound) are common among patients with ADHD but have not been well studied. Complicating matters, no physiological measure for assessing auditory sensitivity has yet been established.

In this study, we explored how adolescents perceive loud sounds using one physiological measure (gauging middle-ear muscle responses) and two psychological measures (self-reported uncomfortably loud levels and psychological profile scores based on a common-sensations questionnaire). We also examined whether the relationship between physiological and psychological responses to loud sounds differs between adolescents with and without ADHD.

Thirty-nine participants aged 13 to 19 were divided into two groups: 19 participants with a current ADHD diagnosis (ADHD group) and 20 participants without ADHD (control group).

We evaluated the participants’ physiological response to loud sounds in the middle ear, known as the acoustic reflex. Acoustic reflex testing is a non-invasive means of detecting the middle-ear muscle contraction in response to tone or noise stimuli presented to the ear. To evaluate psychological response, we measured loudness discomfort levels, asking participants to report when a sound (tone or noise stimulus) was uncomfortably loud. To further assess psychological response, we used the Adolescent/Adult Sensory Profile questionnaire, in which all participants were asked how they respond to common sensations. Low registration and sensation sensitivity scores from the Sensory Profile were used as measures of hypo- and hyper-sensitivity (or under- and over-responsiveness), based on a previous adult study (Bijlenga, D., et al. 2017. Eur Psychiatry, 43, 51-57).

Preliminary results in the ADHD group showed a weak relationship between physiological (acoustic reflex) measures and sensory sensitivity scores (hyper-sensitivity), as well as a relationship between loudness discomfort levels and low registration scores (hypo-sensitivity). The control group did not show any relationships between the physiological measures and psychological measures we used in this study. We also found that older participants (16-19 years old) tended to be less sensitive to loud sounds than younger participants (13-15 years old). This insensitivity to loud sounds may be attributed to prolonged headphone use for schoolwork and recreational use (e.g., watching TV, listening to music, or playing video games).
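The relationships described above are, at heart, correlations between paired measurements for each participant. As a generic illustration only (the helper function and the numbers below are hypothetical and do not reflect the study’s actual data or statistical methods), a correlation between a physiological score and a questionnaire score can be computed like this:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists
    of paired measurements (e.g. a physiological score vs. a
    questionnaire score per participant). Returns a value in [-1, 1]."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical paired scores for five participants (illustration only):
reflex_thresholds = [85.0, 90.0, 95.0, 100.0, 105.0]  # dB
sensitivity_scores = [42.0, 40.0, 37.0, 33.0, 30.0]

r = pearson_r(reflex_thresholds, sensitivity_scores)
# r near -1 would indicate higher thresholds pair with lower scores
```

A value of r near +1 or -1 indicates a strong relationship; values near 0 indicate little or none, as in the control group here.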

Our results seem to suggest that some adolescents with ADHD perceive sound loudness differently from their peers without ADHD. Even within the ADHD group, their responses to loud sounds could be completely opposite from one another. Further research is needed to deepen our understanding of the relationship between physiological and psychological measures of sound sensitivity in patients with ADHD. We hope to continue to examine sound sensitivity in patients with ADHD by examining the effect of ADHD medications and of age on sound sensitivity. [Work supported by the ACCEL grant (NIH U54GM104941), the State of Delaware, and the Nemours Foundation].

The research team at Nemours Children’s Health System


2aMU1 – Supercomputer simulation reveals how the reed vibrations are controlled in single-reed instruments – Tsukasa Yoshinaga

Supercomputer simulation reveals how the reed vibrations are controlled in single-reed instruments

Tsukasa Yoshinaga – yoshinaga@me.tut.ac.jp
Hiroshi Yokoyama – h-yokoyama@me.tut.ac.jp
Akiyoshi Iida – iida@me.tut.ac.jp
Toyohashi University of Technology
1-1 Hibarigaoka, Tempaku, Toyohashi 441-8580 Japan

Tetsuro Shoji – tetsuro.shoji@music.yamaha.com
Akira Miki – akira.miki@music.yamaha.com
Yamaha Corporation
10-1 Nakazawacho, Nakaku, Hamamatsu 430-8650 Japan

Popular version of paper 2aMU1

Presented 9:35-9:50 a.m., June 9, 2021

180th ASA Meeting, Acoustics in Focus

Single-reed instruments, like the clarinet, produce sound through reed vibrations induced by airflow and pressure in the player’s mouth. The reed vibration is also affected by sound propagation in the instrument, which is how the player changes musical tones by controlling the tone holes. To analyze a single-reed instrument, it is therefore important to consider the interactions among reed vibration, sound propagation, and airflow in the instrument. In particular, the airflow passing through the gap between the reed tip and the mouthpiece becomes turbulent, which has made it difficult to investigate the details of these interactions in single-reed instruments.

In this study, we conducted a numerical simulation of sound generation in a single-reed instrument called the Saxonett, which has a clarinet mouthpiece and a recorder-like straight resonator. In the simulation, airflow and sound generation were predicted by solving the compressible Navier-Stokes equations, while the reed vibration was predicted with a one-dimensional beam equation. To accurately predict the turbulent flow in the mouthpiece, the computational grid cells needed to be smaller than the turbulent vortices in the airflow (approximately 160 million grid points were used). At the same time, the simulation had to run longer than a typical flow simulation because the frequency of the musical tone was relatively low (approximately 150 Hz). A supercomputer was therefore needed to simulate the turbulent flow and the sound generation associated with the reed vibration.
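A common textbook form of such a one-dimensional beam model for reed dynamics is the Euler-Bernoulli equation; the sketch below assumes a uniform cross-section and omits damping, and is not necessarily the exact formulation used in this work:

```latex
\rho A \,\frac{\partial^2 y}{\partial t^2}
  + E I \,\frac{\partial^4 y}{\partial x^4}
  = F(x, t)
```

Here \(y(x,t)\) is the reed displacement, \(\rho\) the reed density, \(A\) the cross-sectional area, \(E\) Young’s modulus, \(I\) the second moment of area, and \(F(x,t)\) the forcing (aerodynamic pressure load and lip force), which couples the beam model to the flow solution.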

By setting a mouth-like pressure chamber around the mouthpiece in the simulation and injecting airflow, the reed started vibrating and sound was produced from the instrument. Moreover, the amplitudes of the reed oscillation, as well as of the generated sound, changed when a lip force was applied to the reed. By controlling the lip force, a stable reed vibration was achieved. The resulting reed waveform and the sound radiated from the instrument agreed well with experimental measurements.

With this simulation technology, we can observe the details of the airflow and the acoustic characteristics inside the instrument while it is being played. By applying the simulation to various instrument designs, we can clarify how the sound produced differs in each model and contribute to improving both the sound quality and the playing feel.


Numerical simulation of the single-reed instrument. Blue to red color shows the pressure amplitude whereas the rainbow color vectors indicate the flow velocity.


1aSPa5 – Saving Lives During Disasters by Using Drones – Macarena Varela

Saving Lives During Disasters by Using Drones

Macarena Varela – macarena.varela@fkie.fraunhofer.de
Wulf-Dieter Wirth – wulf-dieter.wirth@fkie.fraunhofer.de
Fraunhofer FKIE/ Department of Sensor Data and Information Fusion (SDF)
Fraunhoferstr. 20
53343 Wachtberg, Germany

Popular version of paper ‘1aSPa5’

Presented Tuesday morning, 9:30-11:15 a.m., June 8, 2021

180th ASA Meeting, Acoustics in Focus


During disasters, such as earthquakes or shipwrecks, every minute counts to find survivors.

Unmanned Aerial Vehicles (UAVs), also called drones, can reach and cover inaccessible and larger areas better than rescuers on the ground or other types of vehicles, such as Unmanned Ground Vehicles. Nowadays, UAVs can be equipped with state-of-the-art technology to provide quick situational awareness and support rescue teams in locating victims during disasters.


[Video: Field experiment using the MEMS system mounted on the drone to hear impulsive sounds produced by a potential victim.mp4]

Survivors typically plead for help by producing impulsive sounds, such as screams. Therefore, an accurate acoustic system mounted on a drone is currently being developed at Fraunhofer FKIE, focused on localizing those potential victims.

The system filters environmental and UAV noise in order to obtain positive detections of human screams or other impulsive sounds. It uses a particular type of microphone array, called a “Crow’s Nest Array” (CNA), combined with advanced signal processing techniques (beamforming) to provide accurate locations of the specific sounds produced by missing people (see Figure 1). The spatial distribution and number of microphones in an array have a crucial influence on the accuracy of the estimated location, so it is important to select them properly.

Figure 1: Conceptual diagram to localize victims
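The beamforming idea can be illustrated with a minimal delay-and-sum sketch for a simple linear array; this is an assumption-laden illustration, not the FKIE system’s CNA geometry or processing chain, and the sample rate, speed of sound, and array layout below are chosen only for the example:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, approximate value in air
FS = 16000               # sample rate in Hz (assumed for this sketch)

def delay_and_sum(signals, mic_positions, angle_deg):
    """Delay-and-sum beamforming for a linear array along the x-axis.

    A far-field source at angle_deg (0 = broadside) reaches the
    microphone at position x with an extra delay of x*sin(angle)/c.
    Advancing each channel by that delay re-aligns the wavefront, so
    summing the channels reinforces sound arriving from that direction.
    """
    angle = math.radians(angle_deg)
    out_len = len(signals[0])
    output = [0.0] * out_len
    for sig, x in zip(signals, mic_positions):
        delay = int(round(x * math.sin(angle) / SPEED_OF_SOUND * FS))
        for n in range(out_len):
            m = n + delay          # undo this channel's arrival delay
            if 0 <= m < len(sig):
                output[n] += sig[m]
    return [v / len(signals) for v in output]

def steered_power(signals, mic_positions, angle_deg):
    """Mean power of the beamformed output steered to one angle."""
    y = delay_and_sum(signals, mic_positions, angle_deg)
    return sum(v * v for v in y) / len(y)

def estimate_bearing(signals, mic_positions, candidate_angles):
    """Scan candidate angles; the steering direction with the highest
    output power is the estimated bearing of an impulsive sound."""
    return max(candidate_angles,
               key=lambda a: steered_power(signals, mic_positions, a))
```

Scanning a grid of candidate angles and picking the loudest steering direction is the simplest way to turn a microphone array into a direction finder; practical systems refine this with better arrays, calibration, and noise filtering.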



The number, weight, and size of the system components are minimized so that the system can be mounted on a drone. With this in mind, the microphone array is composed of a large number of tiny digital Micro-Electro-Mechanical-Systems (MEMS) microphones to find the locations of victims. In addition, one supplementary condenser microphone covering a larger frequency spectrum is used to provide a more precise signal for detection and classification purposes.


Figure 2: Acoustic system mounted on a drone


Various experiments, including open-field experiments, have been conducted successfully, demonstrating the good performance of the ongoing project.