3aPPb1 – Spectral Processing Deficits Appear to Underlie Developmental Language Disorders

Susan Nittrouer, snittrouer@phhp.ufl.edu
Joanna H. Lowenstein
Kayla Tellez
Priscilla O’Hara
Donal G. Sinex

Popular version of 3aPPb1 – Spectral processing deficits appear to underlie developmental language disorders
Presented Wednesday morning, May 25, 2022
182nd ASA Meeting

The Problem
Sophisticated oral and written language skills are essential to academic and occupational success in our modern, technically based society. Unfortunately, as many as twenty percent of children encounter difficulties learning language, a condition termed Developmental Language Disorder (DLD). This work was undertaken to try to uncover the root of these problems.

Brief Background
For 50 years, scientists have hypothesized that auditory problems are at the root of the challenges encountered by children with DLD. The idea is that children with DLD simply cannot recognize the acoustic structure in speech signals that underlies linguistic forms. Work in this area, however, has been fraught with controversy, and at present, no agreed-upon explanation exists.

What we did
We believe that children with DLD likely have problems processing the acoustic speech signal. In our work we changed three components of our approach from earlier work.

  1. Auditory problems are likely worst at young ages, when they disrupt language learning at its initial stages. The auditory problems may eventually resolve, but children may be left with language deficits. We looked across ages 7-10 years for evidence of auditory problems that might be more severe in younger than in older children.
  2. The critical auditory problems may involve spectral (frequency) structure, rather than the temporal structure commonly manipulated in earlier work. The spectral structure of speech signals is most responsible for defining linguistic forms. We tested children on their ability to detect both temporal and spectral structure.
  3. Word-internal elements, known as phonological units (or simply phonemes), may be disproportionately affected by auditory problems, rather than vocabulary or syntactic structures. We tested all three kinds of skills: vocabulary, syntax, and phonology, with a focus on phonological skills. We expected to find the strongest effects of auditory problems on those phonological skills.
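This summary does not say exactly which stimuli were used, but in the laboratory, spectral sensitivity is often probed with spectrally rippled noise (a sinusoidal envelope imposed along the log-frequency axis) and temporal sensitivity with amplitude-modulated noise (a sinusoidal envelope imposed on the waveform over time). The sketch below is purely illustrative; every parameter value is invented and none of it is the study's actual method.

```python
# Hypothetical stimuli of the two kinds described above; the study's actual
# stimuli and parameters are not specified in this summary.
import numpy as np

fs = 44100                         # sample rate (Hz)
t = np.arange(int(fs * 0.5)) / fs  # 500 ms stimulus

def spectral_ripple(ripples_per_octave=1.0, depth_db=20.0, n_tones=200):
    """Tone complex whose spectral envelope ripples along log-frequency."""
    rng = np.random.default_rng(0)
    freqs = np.geomspace(200.0, 8000.0, n_tones)   # log-spaced components
    octaves = np.log2(freqs / freqs[0])
    amp_db = (depth_db / 2) * np.sin(2 * np.pi * ripples_per_octave * octaves)
    amps = 10 ** (amp_db / 20)                     # dB -> linear amplitude
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    x = sum(a * np.sin(2 * np.pi * f * t + p)
            for a, f, p in zip(amps, freqs, phases))
    return x / np.max(np.abs(x))

def amplitude_modulated_noise(mod_rate_hz=8.0, depth=1.0):
    """Noise whose overall level fluctuates sinusoidally over time."""
    rng = np.random.default_rng(1)
    envelope = 1 + depth * np.sin(2 * np.pi * mod_rate_hz * t)
    x = rng.standard_normal(t.size) * envelope
    return x / np.max(np.abs(x))
```

In a typical detection task, the ripple depth or modulation depth is reduced until the listener can no longer distinguish the modulated stimulus from an unmodulated one.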

What we found

  1. Younger children with DLD showed more severe auditory problems than older children with DLD.
  2. Problems detecting spectral structure were more severe for children with DLD and persisted longer across age than problems detecting temporal structure.
  3. Problems with spectral structure were most strongly related to children’s awareness of phonological units, rather than lexical or syntactic units.


Significance
These findings should serve to refocus research efforts on different kinds of acoustic structure than those examined previously, as well as on specific language deficits. DLD puts children at serious risk for problems in school that can masquerade as other disorders, such as attention deficit or reading problems. Underlying conditions – including premature birth and frequent ear infections in infancy – can cause the kinds of auditory problems identified in the work reported here, and unfortunately, children living in poverty face healthcare inequities that put them at risk for those medical problems. This work is one more step in efforts to achieve equity in educational outcomes.

Filtering Unwanted Sounds from Baby Monitors


Ideal baby monitors alert parents to infant cries, not irrelevant background noise

Media Contact:
Larry Frum
AIP Media
301-209-3090
media@aip.org

SEATTLE, December 2, 2021 – New parents often keep a constant ear on their children, listening for any signs of distress as their baby sleeps. Baby monitors make that possible, but they can also inundate parents with annoying background audio.

In his presentation, “Open-Source Baby Monitor,” TJ Flynn, of Johns Hopkins University, will discuss his team’s effort to develop and test a smart baby monitor. The talk, on Dec. 2 at 1:25 p.m. Eastern U.S. in Room Columbia C, is part of the 181st Meeting of the Acoustical Society of America, taking place Nov. 29 to Dec. 3 at the Hyatt Regency Seattle.

Flynn, Shane Lani, and their team aim to create an ideal baby monitor that alerts parents when their baby needs attention but does not transmit or amplify sound from other sources. The project uses open source audio processing hardware, originally intended for hearing aids, to filter out unwanted noises. These extra sounds might lead parents to turn down their baby monitor volume and potentially miss infant cries.

“Three of the study authors, including myself, are parents to new babies,” said Lani, a researcher from Johns Hopkins University. “While not directly applicable to every home, my house is situated next to a large state road and in the flight path for landing planes depending on the wind conditions. Due to these factors, loud motorcycles tearing down the highway and low flying planes have historically been a big culprit in setting off the monitor.”

The researchers found baby cries have a fundamental frequency in the range of 400 to 600 hertz, with plenty of harmonics that extend up to 10 kilohertz. They plan to keep the whole frequency range in mind as they explore signal processing options.
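The release does not detail the team's actual signal chain, but the band-limiting idea can be sketched with a standard bandpass filter spanning the reported cry band. Everything below (filter type, order, sample rate, test signals) is an illustrative assumption, not the project's implementation.

```python
# Illustrative band-limiting: keep the ~400 Hz - 10 kHz cry band and attenuate
# energy outside it. Not the actual open-source monitor's processing chain.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 32000  # assumed sample rate (Hz); Nyquist must exceed the 10 kHz band edge

# 4th-order Butterworth bandpass over the reported cry band
sos = butter(4, [400.0, 10000.0], btype="bandpass", fs=fs, output="sos")

t = np.arange(fs) / fs                   # one second of audio
cry_like = np.sin(2 * np.pi * 500 * t)   # tone inside the cry band
rumble = np.sin(2 * np.pi * 60 * t)      # low-frequency traffic-like noise

filtered = sosfilt(sos, cry_like + rumble)
# The in-band tone passes; the 60 Hz rumble is strongly attenuated.
```

A real monitor would presumably pair such filtering with a cry detector, so that only alarm-worthy sounds are transmitted at all.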

Their device is of comparable size to commercial baby monitors, and they are currently testing its performance.

———————– MORE MEETING INFORMATION ———————–
USEFUL LINKS
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eventpilotadmin.com/web/planner.php?id=ASAFALL21
Press Room: https://acoustics.org/world-wide-press-room/

WORLDWIDE PRESS ROOM
In the coming weeks, ASA’s Worldwide Press Room will be updated with additional tips on dozens of newsworthy stories and with lay language papers, which are 300 to 500 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio and video. You can visit the site during the meeting at https://acoustics.org/world-wide-press-room/.

PRESS REGISTRATION
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact AIP Media Services at media@aip.org. For urgent requests, staff at media@aip.org can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

During COVID-19 Lockdown, Emotional Wellbeing Declined for Adults with Vision, Hearing Loss


Older individuals with sensory impairment suffered from lack of social interactions during pandemic

Media Contact:
Larry Frum
AIP Media
301-209-3090
media@aip.org

SEATTLE, Dec. 1, 2021 – During the first year of the COVID-19 pandemic, many assisted living and senior center facilities were forced to close their doors to outside visitors to limit potential exposure to the virus. While this step helped keep older residents physically healthy, the isolation created mental and emotional difficulties for those with sensory impairment.

Peggy Nelson, of the University of Minnesota, will outline the impacts in her presentation, “COVID-19 effects on social isolation for older persons with sensory loss,” at the 181st Meeting of the Acoustical Society of America, which will be held from Nov. 29 to Dec. 3. The session will take place on Dec. 1 at 6:05 p.m. Eastern U.S. in the Quinault Room of the Hyatt Regency Seattle.

Nelson and her team surveyed groups of older adults from the Twin Cities community with vision loss, hearing loss, and without either condition. They asked the participants about their worries, wellbeing, and social isolation at 6 week intervals from April 2020 to July 2021. The period corresponded to strict lockdowns in Minnesota, with some restrictions easing towards the end of the study.

All three groups of adults scored lower on a patient health questionnaire after the pandemic began. Additionally, people with vision and hearing loss faced unique problems.

“People with low vision were really hit hard,” said Nelson. “Their whole mobility systems are built around public transportation and being around other people.”

Masks made conversations especially difficult for adults with hearing loss, leading them to prefer virtual options for health care visits, among other scenarios. However, the overall quieter environment during stay-at-home orders may have compensated for some of the negative effects.

While Nelson said the changes brought by the pandemic often led to a loss of independence for impaired adults, some solutions may be within reach.

“We’ll hopefully find a new hybrid world,” she said. “People with low vision can be close to other people as needed, and people with hearing loss can have remote access to clear communication when masks would prevent that.”


Echolocation Builds Prediction Models of Prey Movement


Bats use echoes of own vocalizations to anticipate location, trajectory of prey

Media Contact:
Larry Frum
AIP Media
301-209-3090
media@aip.org

SEATTLE, November 30, 2021 — Bats are not only using their acoustic abilities to find a meal; they are also using them to predict where their prey will be, increasing their chances of a successful hunt.

During the 181st Meeting of the Acoustical Society of America, which will be held Nov. 29 to Dec. 3, Angeles Salles, from Johns Hopkins University, will discuss how bats rely on acoustic information from the echoes of their own vocalizations to hunt airborne insects. The session, “Bats use predictive strategies to track moving auditory objects,” will take place Tuesday, Nov. 30, at 1:50 p.m. Eastern U.S.

In contrast to predators that rely primarily on vision, bats build a representation of their environment from discrete echo snapshots. They produce echolocation sounds by contracting the larynx or clicking their tongues, then analyze the returning echoes. This acoustic information supports navigation and foraging, often in total darkness.

Echo snapshots provide intermittent sensory information about an insect’s trajectory, which bats use to build prediction models of prey location. This process enables them to track and intercept their prey.

“We think this is an innate capability, much as humans can predict where a ball will land when it is tossed at them,” said Salles. “Once a bat has located a target, it uses the acoustic information to calculate the speed of the prey and anticipate where it will be next.”
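The strategy Salles describes, estimating speed from successive snapshots and extrapolating forward, can be caricatured as constant-velocity prediction. The function below is a toy illustration, not the authors’ model; all numbers are invented.

```python
# Toy constant-velocity predictor: given two timed echo "snapshots" of prey
# position, extrapolate where the prey will be at a future time.
def predict_position(p_prev, p_curr, t_prev, t_curr, t_future):
    """Linearly extrapolate a 2-D position (meters) from two timed snapshots."""
    dt = t_curr - t_prev
    vx = (p_curr[0] - p_prev[0]) / dt   # estimated velocity components
    vy = (p_curr[1] - p_prev[1]) / dt
    lead = t_future - t_curr
    return (p_curr[0] + vx * lead, p_curr[1] + vy * lead)

# Snapshots 100 ms apart: an insect moved from (0.0, 0.0) to (0.2, 0.1) m,
# so the constant-velocity guess for t = 0.2 s is (0.4, 0.2).
predicted = predict_position((0.0, 0.0), (0.2, 0.1), 0.0, 0.1, 0.2)
```

Erratic prey motion violates the constant-velocity assumption, which is exactly why prediction errors accumulate for evasive targets.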

The calls produced by the bats are usually ultrasonic, so humans often cannot hear them. Echolocating bats integrate these acoustic snapshots over time, with larger prey producing stronger echoes, to predict prey movement under uncertain conditions.

“Prey with erratic flight maneuvers and clutter in the environment do lead to an accumulation of errors in their prediction,” said Salles. “If the target does not appear where the bat expects it to, it will start searching again.”

By amalgamating representations of prey echoes, bats can determine prey distance, size, shape, and density, as well as identify what they are tracking. Studies have shown bats learn to steer away from prey they deem unappetizing.


1pPPb – Young adults with ADHD display altered neural responses to attended sounds compared with neurotypical counterparts

Jasmine Kwasa – jkwasa@andrew.cmu.edu
Laura María Torres – lmtorres@bu.edu
Abby Noyce – anoyce@andrew.cmu.edu
Barbara Shinn-Cunningham – bgsc@andrew.cmu.edu

Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA 15217

Popular version of paper ‘1pPPb – Top-down attention modulates neural responses in neurotypical, but not ADHD, young adults’
To be presented Monday afternoon, November 29, 2021
181st ASA Meeting

Competing sounds, like a teacher’s voice against the sudden trill of a cell phone, pose a challenge to our attention. Listening in such environments depends upon a push-and-pull between our goals (wanting to listen to the teacher) and involuntary distraction by salient, unexpected sounds (the phone notification). The outcome of this attentional contest depends on the strength of an individual’s “top-down” control of attention relative to their susceptibility to “bottom-up” attention capture.

We wanted to understand the range of this ability in the general population, from neurotypical functioning to neurodivergence. We reasoned that people with Attention Deficit Hyperactivity Disorder (ADHD) would perform worse when undertaking a challenging task that required strong top-down control (mental flexibility) and show altered neural signatures of this ability.

We created an auditory paradigm that stressed top-down control of attention. Forty-five young adult volunteers with normal hearing listened to multiple concurrent streams of spoken syllables that came from the left, center, and right (listen to an example trial below) while we recorded electroencephalography (EEG). We tested both the ability to sustain attentional focus on a single “target” stream (always heard from the center, depicted in black in Figure 1) and the ability to monitor the target but flexibly switch attention to an unpredictable “interrupter” stream from another direction if and when it appeared (depicted in red in Figure 1).

You can hear an example trial here:

A visual depiction of this clip is seen below:

We included key conditions in which the stimuli were identical between trials, but the attentional focus differed, allowing us to isolate effects of attention. The EEG recording allowed us to capture neural responses, called event-related potential (ERP) components, whose amplitudes reflect the strength of top-down relative to bottom-up attention.
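An ERP is obtained by averaging many EEG epochs time-locked to the same event: random background activity averages toward zero while the stimulus-evoked response survives. The sketch below uses simulated data (all numbers invented), not the study’s recordings.

```python
# Toy ERP extraction: average many noisy, time-locked epochs so a small
# evoked component emerges from much larger background EEG. Simulated data.
import numpy as np

fs = 250                       # EEG sample rate (Hz)
t = np.arange(0, 0.6, 1 / fs)  # 600 ms epoch after sound onset
rng = np.random.default_rng(42)

# Simulated evoked component: negative deflection peaking ~100 ms post-onset
evoked = -2.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))  # microvolts

# 500 single trials = evoked component + far larger background activity
trials = evoked + rng.normal(0.0, 10.0, size=(500, t.size))

erp = trials.mean(axis=0)          # averaging reveals the buried component
peak_latency = t[np.argmin(erp)]   # should fall near 0.1 s
```

Comparing the amplitudes of such components across attention conditions, with physically identical stimuli, is what makes it possible to isolate top-down attentional modulation.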

We found that while volunteers performed within a large range, from near-perfect to near-chance levels of attentive listening, ADHD status did not determine who was among the best or worst. In fact, there were no significant differences between ADHD (N=25) and neurotypical (N=20) volunteers in reporting the order of the syllables. However, ADHD subjects exhibited weaker attentional modulation (less flexibility) of ERP component amplitudes than did neurotypical listeners.

Importantly, neural response modulation significantly correlated with behavioral performance, implying that the best performers are those whose brain responses are under stronger top-down control.

Together, these results demonstrate that in the general population of both neurotypical and neurodivergent people, there is indeed a spectrum of top-down control in the face of salient interruptions, regardless of ADHD status. However, young adults with ADHD might achieve attentive listening via different mechanisms in need of further investigation.

2pPP8 – Teenagers with ADHD may perceive loud sounds in a different way

Alexandra Moore – Alexandra.Moore@nemours.org
Shelby Sydenstricker – Shelby.Sydenstricker@nemours.org
Kyoko Nagao – Kyoko.Nagao@nemours.org
Nemours Biomedical Research
1600 Rockland Road
Wilmington, DE 19803

Popular version of paper ‘2pPP8’
Presented Wednesday afternoon, June 9th, 2021
180th ASA Meeting, Acoustics in Focus

Hyper-sensitivity and hypo-sensitivity (increased and decreased reaction to sounds) are common among patients with ADHD, but have not been well-studied. Complicating this circumstance, no physiological measure for assessing auditory sensitivity has yet been established.

In this study, we explored how adolescents perceive loud sounds using one physiological measure (gauging middle-ear muscle responses) and two psychological measures (self-reported levels at which sounds become uncomfortably loud, and profile scores from a common-sensations questionnaire). We also examined whether the relationship between physiological and psychological responses to loud sounds differs between adolescents with and without ADHD.

Thirty-nine participants aged 13 to 19 were divided into two groups: 19 participants with a current ADHD diagnosis (ADHD group) and 20 participants without ADHD (control group).

We evaluated the participants’ physiological response to loud sounds in the middle ear, known as the acoustic reflex. Acoustic reflex testing is a non-invasive means of detecting middle-ear muscle contraction in response to tones or noise stimuli presented to the ear. To evaluate psychological response, we measured loudness discomfort levels, asking participants to report when a sound (tone or noise stimulus) was uncomfortably loud. To further assess psychological response, we used the Adolescent/Adult Sensory Profile questionnaire, which asks participants how they respond to common sensations. Low registration and sensory sensitivity scores from the Sensory Profile were used as measures of hypo- and hyper-sensitivity (under- and over-responsiveness), following a previous adult study (Bijlenga, D., et al. 2017. Eur Psychiatry, 43, 51-57).

Preliminary results in the ADHD group showed a weak relationship between physiological (acoustic reflex) measures and sensory sensitivity scores (hyper-sensitivity), as well as a relationship between loudness discomfort levels and low registration scores (hypo-sensitivity). The control group did not show any relationships between the physiological measures and psychological measures we used in this study. We also found that older participants (16-19 years old) tended to be less sensitive to loud sounds than younger participants (13-15 years old). This insensitivity to loud sounds may be attributed to prolonged headphone use for schoolwork and recreational use (e.g., watching TV, listening to music, or playing video games).

Our results seem to suggest that some adolescents with ADHD perceive sound loudness differently from their peers without ADHD. Even within the ADHD group, their responses to loud sounds could be completely opposite from one another. Further research is needed to deepen our understanding of the relationship between physiological and psychological measures of sound sensitivity in patients with ADHD. We hope to continue to examine sound sensitivity in patients with ADHD by examining the effect of ADHD medications and of age on sound sensitivity. [Work supported by the ACCEL grant (NIH U54GM104941), the State of Delaware, and the Nemours Foundation].

The research team at Nemours Children’s Health System