2pPP8 – Teenagers with ADHD may perceive loud sounds in a different way

Alexandra Moore – Alexandra.Moore@nemours.org
Shelby Sydenstricker – Shelby.Sydenstricker@nemours.org
Kyoko Nagao – Kyoko.Nagao@nemours.org
Nemours Biomedical Research
1600 Rockland Road
Wilmington, DE 19803

Popular version of paper ‘2pPP8’
Presented Wednesday afternoon, June 9th, 2021
180th ASA Meeting, Acoustics in Focus

Hyper-sensitivity and hypo-sensitivity (increased and decreased reactions to sound) are common among patients with ADHD but have not been well studied. Complicating matters, no physiological measure for assessing auditory sensitivity has yet been established.

In this study, we explored how adolescents perceive loud sounds using one physiological measure (gauging middle-ear muscle responses) and two psychological measures (self-reported uncomfortable loudness levels and sensory profile scores based on a questionnaire about common sensations). We also examined whether the relationship between physiological and psychological responses to loud sounds differs between adolescents with and without ADHD.

Thirty-nine participants aged 13 to 19 were divided into two groups: 19 participants with a current ADHD diagnosis (ADHD group) and 20 participants without ADHD (control group).

We evaluated the participants’ physiological response to loud sounds in the middle ear, known as the acoustic reflex. Acoustic reflex testing is a non-invasive means of detecting the middle-ear muscle contraction in response to tone or noise stimuli presented to the ear. To evaluate psychological response, we measured loudness discomfort levels, asking participants to report when a sound (a tone or noise stimulus) was uncomfortably loud. To further assess psychological response, we used the Adolescent/Adult Sensory Profile questionnaire, which asks participants how they respond to common sensations. Low registration and sensory sensitivity scores from the Sensory Profile were used as measures of hypo- and hyper-sensitivity (or under- and over-responsiveness), based on a previous adult study (Bijlenga, D., et al. 2017. Eur Psychiatry, 43, 51-57).

Preliminary results in the ADHD group showed a weak relationship between the physiological (acoustic reflex) measures and sensory sensitivity scores (hyper-sensitivity), as well as a relationship between loudness discomfort levels and low registration scores (hypo-sensitivity). The control group did not show any relationship between the physiological and psychological measures used in this study. We also found that older participants (16-19 years old) tended to be less sensitive to loud sounds than younger participants (13-15 years old). This reduced sensitivity to loud sounds may be attributable to prolonged headphone use for schoolwork and recreation (e.g., watching TV, listening to music, or playing video games).

Our results seem to suggest that some adolescents with ADHD perceive loudness differently from their peers without ADHD. Even within the ADHD group, individual responses to loud sounds could be completely opposite from one another. Further research is needed to deepen our understanding of the relationship between physiological and psychological measures of sound sensitivity in patients with ADHD. We hope to continue examining sound sensitivity in patients with ADHD, including the effects of ADHD medications and of age. [Work supported by the ACCEL grant (NIH U54GM104941), the State of Delaware, and the Nemours Foundation.]

The research team at Nemours Children’s Health System

2aMU1 – Supercomputer simulation reveals how the reed vibrations are controlled in single-reed instruments

Tsukasa Yoshinaga – yoshinaga@me.tut.ac.jp
Hiroshi Yokoyama – h-yokoyama@me.tut.ac.jp
Akiyoshi Iida – iida@me.tut.ac.jp
Toyohashi University of Technology
1-1 Hibarigaoka, Tempaku, Toyohashi 441-8580 Japan

Tetsuro Shoji – tetsuro.shoji@music.yamaha.com
Akira Miki – akira.miki@music.yamaha.com
Yamaha Corporation
10-1 Nakazawacho, Nakaku, Hamamatsu 430-8650 Japan

Popular version of paper 2aMU1 Numerical investigation of effects of lip stiffness on reed oscillation in a single-reed instrument
Presented 9:35–9:50 am, Wednesday, June 9, 2021
180th ASA Meeting, Acoustics in Focus

Single-reed instruments, like the clarinet, produce sound through reed vibrations induced by the airflow and pressure in the player’s mouth. The reed vibration is also affected by the sound propagation in the instrument, which is how the player changes musical tones by opening and closing the tone holes. To analyze a single-reed instrument, it is therefore important to consider the interactions among the reed vibration, the sound propagation, and the airflow in the instrument. In particular, the airflow passing through the gap between the reed tip and the mouthpiece becomes turbulent, which has made it difficult to investigate the details of these interactions.

In this study, we conducted a numerical simulation of sound generation in a single-reed instrument called the Saxonett, which has a clarinet mouthpiece and a recorder-like straight resonator. In the simulation, airflow and sound generation were predicted by solving the compressible Navier-Stokes equations, while the reed vibration was predicted by solving a one-dimensional beam equation. To accurately predict the turbulent flow in the mouthpiece, the computational grid cells needed to be smaller than the turbulent vortices in the airflow (approximately 160 million grid points were used). At the same time, the simulation had to cover a longer time span than usual flow simulations because the frequency of the musical tone was relatively low (approximately 150 Hz). A supercomputer was therefore needed to simulate the turbulent flow and the sound generation associated with the reed vibration.
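To give a feel for the beam model mentioned above, here is a minimal sketch, not the authors’ actual simulation, that estimates the first natural frequency of a reed treated as a clamped-free Euler-Bernoulli beam. All material and geometric values below are illustrative assumptions, not the paper’s parameters.

```python
import math

# Hypothetical reed parameters for illustration only -- the paper's actual
# reed geometry and material values are not given here.
E   = 10e9    # Young's modulus of cane, Pa (assumed)
rho = 500.0   # density of cane, kg/m^3 (assumed)
L   = 0.015   # freely vibrating reed length, m (assumed)
b   = 0.013   # reed width, m (assumed)
h   = 0.001   # reed thickness, m (assumed uniform; real reeds are tapered)

A = b * h                 # cross-sectional area
I = b * h**3 / 12.0       # second moment of area

def cantilever_frequency(lam):
    """Natural frequency (Hz) of a clamped-free Euler-Bernoulli beam,
    where lam is the dimensionless eigenvalue beta_n * L."""
    return (lam**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))

f1 = cantilever_frequency(1.8751)   # first mode: beta_1 * L = 1.8751
print(f"First reed resonance: {f1:.0f} Hz")
```

With these made-up values the first mode lands in the low-kilohertz range, which is the right ballpark for a clarinet-style reed; the actual simulation in the paper couples the full beam equation to the turbulent airflow rather than treating the reed in isolation.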

By setting up a mouth-like pressure chamber around the mouthpiece in the simulation and introducing airflow, the reed started vibrating and the instrument produced sound. Moreover, the amplitudes of both the reed oscillation and the generated sound changed when lip force was applied to the reed. By controlling the lip force, a stable reed vibration was achieved. The resulting reed waveform and the sound radiated from the instrument agreed well with experimental measurements.

With this simulation technology, we can observe the details of the airflow and acoustic characteristics inside the instrument while it is being played. By applying the simulation to various instrument designs, we can clarify how each model produces sound differently, and thereby contribute to improving both the sound quality and the playing feel.

Numerical simulation of the single-reed instrument. The blue-to-red color shows the pressure amplitude, while the rainbow-colored vectors indicate the flow velocity.

 

1aSPa5 – Saving Lives During Disasters by Using Drones

Macarena Varela – macarena.varela@fkie.fraunhofer.de
Wulf-Dieter Wirth – wulf-dieter.wirth@fkie.fraunhofer.de
Fraunhofer FKIE/ Department of Sensor Data and Information Fusion (SDF)
Fraunhoferstr. 20
53343 Wachtberg, Germany

Popular version of ‘1aSPa5 Bearing estimation of screams using a volumetric microphone array mounted on a UAV’
Presented Tuesday morning 9:30 AM – 11:15 AM, June 8, 2021
180th ASA Meeting, Acoustics in Focus

During disasters, such as earthquakes or shipwrecks, every minute counts to find survivors.

Unmanned Aerial Vehicles (UAVs), also called drones, can reach and cover inaccessible and large areas better than rescuers on the ground or other types of vehicles, such as Unmanned Ground Vehicles. UAVs can now be equipped with state-of-the-art technology to provide quick situational awareness and support rescue teams in locating victims during disasters.

[Video: Field experiment using the MEMS system mounted on the drone to hear impulsive sounds produced by a potential victim.mp4]

Survivors typically plead for help by producing impulsive sounds, such as screams. Fraunhofer FKIE is therefore developing an accurate acoustic system, mounted on a drone, focused on localizing these potential victims.

The system filters out environmental and UAV noise in order to reliably detect human screams and other impulsive sounds. It uses a particular type of microphone array, called a “Crow’s Nest Array” (CNA), combined with advanced signal processing techniques (beamforming) to provide accurate locations of the specific sounds produced by missing people (see Figure 1). The spatial distribution and number of microphones in the array have a crucial influence on the accuracy of the estimated location, so it is important to select them properly.
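As an illustration of the beamforming idea, not the FKIE implementation, the sketch below estimates the bearing of a simulated sound with a simple delay-and-sum beamformer. The array geometry, sample rate, and 900 Hz test tone are all invented stand-ins for the real CNA and scream signals.

```python
import numpy as np

# Minimal 2-D far-field delay-and-sum beamforming sketch.
fs = 16000                      # sample rate, Hz (assumed)
c = 343.0                       # speed of sound, m/s
mics = np.array([[0.00, 0.0],   # mic positions, m (assumed linear array)
                 [0.05, 0.0],
                 [0.10, 0.0],
                 [0.15, 0.0]])

true_angle = np.deg2rad(60)     # direction of the simulated sound
t = np.arange(0, 0.05, 1 / fs)
s = np.sin(2 * np.pi * 900 * t)  # 900 Hz tone as a stand-in for a scream

# Far-field inter-mic delays: tau_m = (mic position . unit vector) / c
u = np.array([np.cos(true_angle), np.sin(true_angle)])
delays = mics @ u / c
x = np.array([np.interp(t - d, t, s, left=0, right=0) for d in delays])
x += 0.05 * np.random.default_rng(0).standard_normal(x.shape)  # sensor noise

# Steer over candidate bearings; the steering that re-aligns the channels
# coherently maximizes the summed output power.
angles = np.deg2rad(np.arange(0, 181))
powers = []
for a in angles:
    ua = np.array([np.cos(a), np.sin(a)])
    taus = mics @ ua / c
    aligned = [np.interp(t + tau, t, xm, left=0, right=0)
               for tau, xm in zip(taus, x)]
    powers.append(np.sum(np.mean(aligned, axis=0) ** 2))

estimate = np.rad2deg(angles[int(np.argmax(powers))])
print(f"Estimated bearing: {estimate:.0f} degrees")
```

A real system like the one described here must additionally detect the scream against broadband rotor noise and work with a three-dimensional array, but the bearing-estimation principle is the same: the steering direction that best re-aligns the microphone signals reveals where the sound came from.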

Figure 1: Conceptual diagram to localize victims

The system components are minimized in number, weight, and size so that they can be mounted on a drone. With this in mind, the microphone array is composed of a large number of tiny digital Micro-Electro-Mechanical-Systems (MEMS) microphones to localize the victims. In addition, one supplementary condenser microphone covering a larger frequency spectrum will be used to provide a more precise signal for detection and classification purposes.

Figure 2: Acoustic system mounted on a drone

Different experiments, including open-field experiments, have been conducted successfully, demonstrating the good performance of the ongoing project.

3aNS4 – Protecting Sleep from Noise in the Built Environment

Jo M. Solet – Joanne_Solet@HMS.Harvard.edu
Harvard Medical School, Division of Sleep Medicine
Boston, MA    United States

Popular version of paper 3aNS4 Protecting sleep from noise in the built environment
Presented Thursday, June 10, 2021
180th ASA Meeting, Acoustics in Focus

Recognition of the need to protect patrons from hearing damage caused by high sound levels in stadiums and concert halls is growing. In parallel, attention must be drawn to the health and safety impacts of lower-level sound exposures, which contribute to residents’ sleep loss in the built environment.

Those living in aging or poorly built, multiple-occupancy buildings are likely to have substantial exposure to site-exterior noise intrusions, as well as to noise produced within their own building envelopes. Sleep-disruptive noise is very common in crowded, under-resourced neighborhoods; along with limited access to fresh food, poor air quality, and inadequate access to healthcare, disrupted sleep contributes to known health disparities. Older individuals are especially vulnerable: as we age, the portion of the night spent in the deepest sleep, the stage most protected from disruption by noise, continues to decrease. Unfortunately, noise complaints are too often dismissed as mere “annoyance” without recognition of their potential health impacts.

Many localities have ordinances that define day and night sound level maximums, as measured at property lines; these typically apply to noise nuisance produced on one property and experienced on another, excluding noise produced inside a building and experienced between units. In Cambridge, MA, noise intrusion enforcement is complaint-driven only. For local government to address the problem, those who are disturbed by noise emanating from an abutting property must first be aware of their rights, then file a complaint and submit evidence and/or attend a public hearing. This requires sophisticated self-advocacy, as well as time free from other responsibilities. Those carrying multiple jobs, doing shift work, or having concerns about language skills or residency status may not act on their rights even when they are aware of them.

It is well known that anticipating needed noise protections before construction is much easier and more cost-effective than retrofitting. Planning and design review for public housing, for example, should include attention to acoustics. Special care must be taken to consider “site exterior noise” such as auto traffic, commuter rail, overhead air flights, air-handling equipment and heat pumps, even local sirens and trash pick-up. Noise generated from “with-in the building envelope” including by elevators, plumbing, footfalls and other resident activities must also be considered in planning design configurations, and in selecting construction materials and finishes.

Insufficient sleep is known to have multiple negative health impacts, including upon cardiovascular health and diabetes risk, as well as impaired antibody production. Supporting the immune system through sufficient sleep has become especially critical during the Covid-19 crisis, both for directly fighting infection and for supporting adequate vaccine response.

By protecting sleep from disruption by noise, acoustics professionals have an important role to play in supporting public health. To address health disparities and other inequities in our society, we must come together, join forces and contribute to problem-solving beyond academic boundaries. I encourage my colleagues to step up and use science to inform policy. As part of the Division of Sleep Medicine at Harvard Medical School, I welcome your partnership and expertise.

Consequences of inadequate sleep

2aSC1 – Testing Invisible Participants: Conducting Behavioural Science Online During the Pandemic

Prof Jennifer Rodd
Department of Experimental Psychology, University College London
j.rodd@ucl.ac.uk
@jennirodd

Popular version of paper 2aSC1 Collecting experimental data online: How to maintain data quality when you can’t see your participants
Presented at the 180th ASA meeting

In early 2020 many researchers across the world had to close up their labs and head home to help prevent further spread of coronavirus.

If this pandemic had arrived a few years earlier, these restrictions on testing human volunteers in person would have resulted in a near-complete shutdown of behavioural research. Fortunately, the last 10 years have seen rapid advances in the software needed to conduct behavioural research online (e.g., Gorilla, jsPsych), and researchers now have access to well-regulated pools of paid participants (e.g., Prolific). This allowed the many researchers who had already switched to online data collection to continue collecting data throughout the pandemic. In addition, many lab-based researchers, who may have been sceptical about online data collection, made the switch to online experiments over the last year. Jo Evershed (Founder and CEO of Gorilla Experiment Builder) reports that the number of participants who completed a task online using Gorilla nearly tripled between the first quarter of 2020 and the same period in 2021.

But this rapid shift to online research is not without problems. Many researchers have well-founded concerns about the lack of experimental control that arises when we cannot directly observe our participants.

Based on 8 years of running behavioural research online, I encourage researchers to embrace online research, but argue that we must carefully adapt our research protocols to maintain high data quality.

I present a general framework for conducting online research. It requires researchers to explicitly specify how moving data collection online might negatively impact their data and undermine their theoretical conclusions:

  • Where are participants doing the experiment? Somewhere noisy or distracting? Will this make data noisy or introduce systematic bias?

  • What equipment are participants using? Slow internet connection? Small screen? Headphones or speakers? How might this impact results?

  • Are participants who they say they are? Why might they lie about their age or language background? Does this matter?

  • Can participants cheat on your task? By writing things down as they go, or looking up information on the internet?

I encourage researchers to take a ‘worst case’ approach and assume that some of the data they collect will inevitably be of poor quality. The onus is on us to build in experiment-specific safeguards that ensure poor-quality data can be reliably identified and excluded from our analyses. Sometimes this can be achieved by pre-specifying performance criteria on existing tasks, but it often involves creating new tasks that provide critical information about our participants and their behaviour. These additional steps must be taken prior to data collection, and can be time-consuming, but they are vital to maintaining the credibility of data obtained using online methods.
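To make the idea of pre-specified exclusion criteria concrete, here is a minimal sketch of such safeguards applied in code. The field names, thresholds, and headphone-check flag are hypothetical, invented for illustration; real criteria must be tailored to the specific task and fixed before data collection begins.

```python
# Hypothetical pre-registered exclusion criteria for an online study.
MIN_ACCURACY = 0.80        # near-chance performers excluded (assumed threshold)
MIN_MEDIAN_RT_MS = 300     # implausibly fast responding suggests inattention
MAX_MEDIAN_RT_MS = 5000    # very slow responding suggests distraction

def keep_participant(p):
    """Return True if a participant's data passes all quality checks."""
    rts = sorted(p["rts_ms"])
    median_rt = rts[len(rts) // 2]
    return (p["accuracy"] >= MIN_ACCURACY
            and p["passed_headphone_check"]   # e.g. a dedicated screening task
            and MIN_MEDIAN_RT_MS <= median_rt <= MAX_MEDIAN_RT_MS)

participants = [
    {"accuracy": 0.92, "rts_ms": [450, 520, 610], "passed_headphone_check": True},
    {"accuracy": 0.55, "rts_ms": [180, 200, 210], "passed_headphone_check": True},
    {"accuracy": 0.88, "rts_ms": [480, 500, 700], "passed_headphone_check": False},
]
kept = [p for p in participants if keep_participant(p)]
print(f"Kept {len(kept)} of {len(participants)} participants")
```

The key point is that every rule here is written down before any data arrive, so exclusions are mechanical rather than discretionary, which is what keeps the analysis credible.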

1aMU2 – Measurements and Analysis of Acoustic Guitars During Various Stages of Their Construction

Mark Rau – mrau@ccrma.stanford.edu
Center for Computer Research in Music and Acoustics (CCRMA), Stanford University
660 Lomita Court
Stanford, California 94305, USA

Popular version of paper ‘1aMU2’ Measurements and Analysis of Acoustic Guitars During Various Stages of Their Construction
Presented Tuesday morning 9:50 – 10:05am, June 08, 2021
180th ASA Meeting, Acoustics in Focus

Stringed instruments have an internal structure which determines how they vibrate and produce sound when driven by the strings. This internal structure is made up of multiple vibrational resonances and is referred to as the resonant structure. Many stringed instrument builders (luthiers) will take acoustic measurements of instruments as they are being built to probe the resonant structure and make changes so that the instrument will sound as intended. However, the resonant structure of the instrument continuously evolves throughout the construction process, so it is unclear at which stage the acoustic measurements should be made.

To address this, we measured the resonant structure of three guitars during their construction. Two guitars are of the Orchestra Model (OM) style and were made by the Santa Cruz Guitar Company. The third is a 000-28-style guitar built by the author. The guitars were measured at multiple stages of construction, including during the bracing of the top, construction of the box, sanding, application of polish, and once fully constructed. The stages of construction of the 000-28 are shown in Figure 2.

Figure 1: The three guitars in their completed state. The left and center guitars are the OMs and the right guitar is the 000-28.

Figure 2: Various stages of the 000-28 construction.

The resonant structure was measured by using a small hammer to impart a force to the instrument, and a laser Doppler vibrometer to measure the resulting vibrations. This provided the frequency and amplitude of each structural resonance as well as how long it would ring once struck.

Figure 3: Vibration measurement setup.
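As a toy illustration of what such a tap test yields, not the authors’ analysis code, the sketch below recovers a mode’s frequency and decay time from a simulated, exponentially decaying tap response. The 100 Hz mode and 0.25 s decay constant are invented for the example, not values measured from these guitars.

```python
import numpy as np

# Simulate a single structural mode ringing after a hammer tap.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
f0, tau = 100.0, 0.25                 # assumed mode: 100 Hz, 0.25 s decay
x = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

# Mode frequency: peak of the magnitude spectrum.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
f_est = freqs[np.argmax(spec)]

# Decay time: fit a line to the log of the amplitude envelope, here
# approximated by the peak absolute value in each 10 ms window.
win = fs // 100
peaks = np.max(np.abs(x[: (len(x) // win) * win]).reshape(-1, win), axis=1)
t_win = (np.arange(len(peaks)) + 0.5) * win / fs
slope = np.polyfit(t_win, np.log(peaks), 1)[0]
tau_est = -1.0 / slope

print(f"Mode at {f_est:.1f} Hz, decay time constant {tau_est * 1000:.0f} ms")
```

A real tap response contains many overlapping modes, so in practice each resonance would be isolated (for example by band-pass filtering around its spectral peak) before fitting its decay, but the frequency-amplitude-decay description per mode is the same.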

The lowest resonances are the most important, because they fall near the fundamental frequencies of most notes on the guitar, so we tracked how the first three prominent resonances changed. Figure 4 shows the frequency response of the 000-28 with the box constructed and sanded (top right of Fig. 2) and the guitar fully constructed (bottom right of Fig. 2). The lowest three prominent resonances are circled and their structural mode shapes are shown for the guitar box.


Figure 4: Frequency response of the 000-28 box (left) and completed guitar (right). The lowest three prominent resonances are highlighted.

We observed some general trends as the guitars evolved, such as the resonant frequencies and amplitudes decreasing as a guitar nears completion, particularly once the polish is applied. If one is trying to achieve a specific sonic quality from an instrument, we recommend taking measurements before the final sanding and adjusting the amount of sanding based on these observations. Final alterations can be made by carving the braces through the sound hole.