2pPP7 – Your ears never sleep: auditory processing of nonwords during sleep in children

Adrienne Roman – adrienne.s.roman@vumc.org
Carlos Benitez – carlos.r.benitez@vanderbilt.edu
Alexandra Key – sasha.key@vanderbilt.edu
Anne Marie Tharpe – anne.m.tharpe@vumc.org

The brain needs a variety of stimulation from the environment to develop and grow. The ability of the brain to change as a result of sensory input and experiences is often referred to as experience-dependent plasticity. When children are young, their brains are more susceptible to experience-dependent plasticity (e.g., Kral, 2013), so the quantity and quality of input are important. Because our ears are always “on,” our auditory system receives a great deal of input to process, especially while we are awake. But what can we “hear” when we are asleep? And does what we hear while we are asleep help our brains develop?

Although there has been research in infants and adults examining the extent to which our brains process sounds during sleep, very little research has focused on young children, a group that sleeps for a significant portion of the day (Paruthi et al., 2016). We decided to start our investigation by trying to answer a basic question: Do children process and retain information heard during sleep? To investigate this, we used electroencephalography (EEG) to measure the electrical activity of children’s brains in response to different sounds – sounds they heard when asleep and sounds they heard when awake.

First, during the child’s regular naptime, each child was hooked up to a portable EEG. Using EEG, a technician could tell us when the child went to sleep. Once asleep, we played the child three made-up words over and over in random order for ten minutes. Then, we let the child continue to sleep until he or she woke up.

When the children awoke from their naps, we took them to our EEG lab for event-related potential (ERP) testing. ERPs are segments of ongoing EEG recordings, appearing as waveforms, that reflect the brain’s response to events or stimulation (such as a sound being played).

The children wore “hats” consisting of 128 spongy electrodes while listening to the same three made-up words heard during the nap, mixed in with new made-up words that the children had never heard before. We then analyzed the ERPs to determine whether the children’s brains responded differently to the words played during sleep than to the new words. We were looking for ‘memory traces’ in the EEG that would indicate that the children ‘remembered’ the words heard while sleeping.
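For readers curious what “analyzing the ERPs” involves: the brain’s response to a single word is buried in ongoing EEG, so short segments (epochs) time-locked to each word onset are averaged together until the stimulus-related waveform emerges. The sketch below illustrates only that averaging step; the sampling rate, epoch window, and variable names are assumptions for illustration, not our actual analysis pipeline.

```python
import numpy as np

# Illustrative values, not the study's actual parameters.
fs = 250                     # EEG sampling rate in Hz (assumed)
pre, post = 0.1, 0.8         # epoch window: 100 ms before to 800 ms after word onset

def erp(eeg, onsets_sec):
    """Average EEG epochs time-locked to stimulus onsets (single channel)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets_sec:
        i = int(t * fs)
        seg = eeg[i - n_pre:i + n_post]
        seg = seg - seg[:n_pre].mean()   # baseline-correct using the pre-stimulus interval
        epochs.append(seg)
    return np.mean(epochs, axis=0)       # averaging suppresses activity unrelated to the stimulus

# Hypothetical usage: compare responses to "nap" words vs. brand-new words.
# erp_familiar = erp(eeg_channel, onsets_familiar)
# erp_novel    = erp(eeg_channel, onsets_novel)
```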

We found that children’s brains were able to differentiate the nonsensical words “heard” during the nap from the brand-new words played during the ERP testing. This means that the brain did not just filter the incoming information, but also retained it long enough to recognize it after the children woke up. This is the first step in understanding the impact of a child’s auditory environment during sleep on the brain.

Kral, A. (2013). Auditory critical periods: a review from system’s perspective. Neuroscience, 247, 117-133.

Paruthi, S., Brooks, L. J., D’Ambrosio, C., Hall, W. A., Kotagal, S., Lloyd, R. M., Malow, B. A., Maski, K., Nichols, C., Quan, S. F., Rosen, C. L., Troester, M. M., & Wise, M. S. (2016). Recommended amount of sleep for pediatric populations: a consensus statement of the American Academy of Sleep Medicine. Journal of Clinical Sleep Medicine, 12(6), 785.


1pAB4 – Combining underwater photography and passive acoustics to monitor fish

Camille Pagniello – cpagniel@ucsd.edu
Gerald D’Spain – gdspain@ucsd.edu
Jules Jaffe – jjaffe@ucsd.edu
Ed Parnell – eparnell@ucsd.edu

Scripps Institution of Oceanography, University of California San Diego
La Jolla, CA 92093-0205, USA

Jack Butler – Jack.Butler@myfwc.com
2796 Overseas Hwy, Suite 119
Marathon, FL 33050

Ana Širović – asirovic@tamug.edu
Texas A&M University Galveston
P.O. Box 1675
Galveston, TX 77550

Popular version of paper 1pAB4 “Searching for the FishOASIS: Using passive acoustics and optical imaging to identify a chorusing species of fish”
Presented Monday afternoon, November 5, 2018
176th ASA Meeting, Victoria, Canada

Although over 120 marine protected areas (MPAs) have been established along the coast of southern California, it has been difficult to quantify their effectiveness by monitoring the presence of target animals. Traditional monitoring methods, such as diver surveys, allow species to be identified, but they are laborious and expensive, depend heavily on good weather, and require a skilled pool of scientific divers. Additionally, a diver’s presence is known to alter animal presence and behavior. As an alternative that could aid and perhaps, in the long run, replace diver surveys, we explored the use of long-term, continuous, passive acoustic recorders to listen to the animals’ vocalizations.

Many marine animals produce sound. In shallow coastal waters, fish are often a dominant contributor. Aristotle was the first to note the “voice” of fish, yet only sporadic reports on fish sounds appeared over the next few millennia. Many of the over 30,000 fish species that exist today are believed to produce sound; however, the acoustic behavior of less than 5% of these biologically and commercially important animals has been characterized.

Towards the goal of both listening to fish and identifying which species are vocalizing, we developed the Fish Optical and Acoustic Sensor Identification System (FishOASIS) (Figure 1). This portable, low-cost instrument couples a multi-element passive acoustic array with multiple cameras, allowing us to determine which fish are making which sounds for a variety of species. In addition to detecting sporadic events such as fish spawning aggregations, this instrument also provides the ability to track individual fish within aggregations.


Figure 1. A diver deploying FishOASIS in the kelp forest off La Jolla, CA.

Choruses (i.e., the simultaneous vocalization of animals) are often associated with fish spawning aggregations and, in our work, FishOASIS was successful in recording a low-frequency fish chorus in the kelp forest off La Jolla, CA (Figure 2).

Figure 2. Long-term spectral average (LTSA) of low-frequency fish chorus of unknown species on June 8, 2017 at 17:30:00. Color represents spectrum level, with red indicating highest pressure level.
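Readers curious how an LTSA like the one in Figure 2 is produced can think of it as a spectrogram whose columns have been averaged over long blocks of time, so that hours of recording fit in a single image. The sketch below is a minimal illustration of that idea; the file name, block length, and FFT settings are assumptions, not the actual FishOASIS processing chain.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical mono recording from a passive acoustic recorder.
fs, audio = wavfile.read("fishoasis_recording.wav")

# Ordinary spectrogram: frequency bins x short time slices.
f, t, sxx = spectrogram(audio.astype(float), fs=fs, nperseg=4096, noverlap=0)

# Long-term spectral average: average the power over, e.g., 5-second blocks
# so hours of data collapse into a manageable number of columns.
block = int(5.0 / (t[1] - t[0]))
n_blocks = sxx.shape[1] // block
ltsa = sxx[:, :n_blocks * block].reshape(len(f), n_blocks, block).mean(axis=2)
ltsa_db = 10 * np.log10(ltsa + 1e-12)   # convert to decibels for display
```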

The chorus starts half an hour before sunset and lasts about 3-4 hours almost every day from May to September. While individuals within the aggregation are dispersed over a large area (approximately 0.07 km²), the chorus’s spatial extent is fairly fixed over time. Species that could be producing this chorus include kelp bass (Paralabrax clathratus) and halfmoon (Medialuna californiensis) (Figure 3).

Figure 3. A halfmoon (Medialuna californiensis) in the kelp forest off La Jolla, CA.

FishOASIS has also been used to identify the sounds of barred sand bass (Paralabrax nebulifer), a popular species among recreational fishermen in the Southern California Bight (Figure 4).

Figure 4. Barred sand bass (Paralabrax nebulifer) call.

This work demonstrates that combining multiple cameras with multi-element passive acoustic arrays is a cost-effective method for monitoring the activity, diversity, and biomass of sound-producing fishes. The approach is minimally invasive and offers greater spatial and temporal coverage at significantly lower cost than traditional methods. As such, FishOASIS is a promising tool for collecting the information required to use passive acoustics to monitor MPAs.

2pAB1 – The Acoustic World of Bat Biosonar

Rolf Mueller – rolf.mueller@vt.edu

Virginia Tech
1075 Life Science Cir
Blacksburg, VA 24061

Popular version of paper 2pAB1
Presented Tuesday afternoon, November 6, 2018
176th ASA Meeting, Victoria, BC, Canada

Ultrasound plays a pivotal role in the life of bats, since the animals rely on echoes triggered by their ultrasonic biosonar pulses as their primary source of information on their environments.

However, air is far from an ideal medium for sound propagation because it subjects the waves to severe absorption, which dissipates sound energy into heat. Because absorption gets much worse with increasing frequency, the ultrasonic frequencies used by bats are particularly affected, and the operating range of bat biosonar is just a few meters for typical sensing tasks.

Absorption limits the highest ultrasonic frequencies at which bats can operate. This has consequences for the animals’ ability to concentrate the acoustic energy they emit or receive into narrow beams. Forming a narrow beam requires a sonar emitter/receiver that is much larger than the wavelength. Being small mammals, bats have not been able to evolve ears that are much larger (i.e., two or three orders of magnitude) than the ultrasonic wavelengths of their biosonar systems and hence have fairly wide beams (e.g., 60 degrees or wider).
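To put rough numbers on this, the diffraction-limited beamwidth of a circular aperture scales with the ratio of wavelength to aperture size. The sketch below is purely illustrative: the 40 kHz frequency and roughly 1 cm ear aperture are assumed values rather than measurements of any particular species, and the textbook formula it uses is only approximate when the aperture is close to a wavelength across.

```python
import numpy as np

c = 343.0   # speed of sound in air, m/s
f = 40e3    # a representative bat biosonar frequency, Hz (assumed)
d = 0.01    # effective ear aperture, m (assumed, ~1 cm)

wavelength = c / f   # ~8.6 mm, comparable to the ear size itself

# Half-power beamwidth of a uniformly driven circular aperture (~1.02 * lambda / d radians).
# The formula assumes d >> wavelength; here d is only about one wavelength,
# so the real beam is at least this broad.
beamwidth_deg = np.degrees(1.02 * wavelength / d)
print(f"wavelength = {wavelength * 1000:.1f} mm, approx. beamwidth = {beamwidth_deg:.0f} degrees")
```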


Figure 1. Ultrasonic pulses followed by their echo trains, created by a robot that mimics the biosonar system of horseshoe bats, in a forest.

For bat species that navigate and hunt in dense vegetation, a broad sonar beam means that the animals receive a lot of “clutter” echoes from the surrounding vegetation. These clutter echoes are likely to drown out informative echoes related to important targets such as prey or passageways.


Figure 2. Biomimetic robot mimicking the biosonar system of horseshoe bats.

Given these basic acoustical conditions, it would seem that bat biosonar should be a complete disaster, but in reality the opposite is the case. Bats are the second most species-rich group of mammals (after rodents) and have successfully conquered a diverse set of habitats and food sources based on a combination of active biosonar and flapping flight. Hence, a narrow focus on standard sonar parameters such as beamwidth, signal-to-noise ratio, and resolution may not be the right way to understand the biosonar skills of bats. To remedy this situation, we have created a robot that mimics the biosonar system of horseshoe bats. The robot is currently being used to collect large numbers of echoes from natural environments to create a data set from which non-standard informative echo features can be identified using machine-learning methods.

4aMU1 – Are phantom partials produced in piano strings?

Thomas Moore – tmoore@rollins.edu
Lauren Neldner – lneldner@rollins.edu
Eric Rokni – erokni@rollins.edu

Department of Physics
Rollins College
1000 Holt Ave – 2743
Winter Park, FL 32789

Popular version of paper 4aMU1, “Are phantom partials produced in piano strings?”
Presented Thursday morning, November 8, 2018, 8:55-9:10 AM, Crystal Ballroom (FE)
176th ASA Meeting, Victoria, BC

The unique sound of the piano, or of any stringed musical instrument, begins with the vibrating string. The string vibrations produce many different musical pitches simultaneously, most of which are harmonics of the note being played. The final sound depends both on the relative power in each of the harmonics in the string and on how efficiently these sounds are transferred to the air. This type of arrangement, where there is a source of the sound (the strings) and a mechanism to transmit the sound to the air (the wooden parts of a piano), is often referred to as a source-filter system. The vibrations from the string are said to be filtered through the bridge and soundboard because these wooden components do not transmit every pitch equally efficiently. The wood can change the balance of the sound created by the string, but it cannot add new sounds.

The work reported in this presentation shows that this idea of how the piano works is flawed. Experiments have shown that the wooden parts of the piano can produce sounds that are not created in the string. That is, the wood can be a source of sound as well as the string; it is not always simply a filter. The sound originating in the wood occurs at frequencies that are sums and differences of the frequencies found in the vibrations of the string, but these components are created in the wood, not the string.
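As a toy illustration of what “sums and differences” means here (and not a model of how the wood actually generates them), the sketch below passes two pure tones through a weak quadratic nonlinearity and shows that new spectral components appear at the sum and difference frequencies. The tone frequencies and the strength of the nonlinearity are arbitrary assumptions.

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)

f1, f2 = 440.0, 660.0                    # two partials of a hypothetical string
linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = linear + 0.05 * linear**2    # weak quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[spectrum > 0.01 * spectrum.max()]
print(np.round(peaks))   # includes 220 Hz (f2 - f1) and 1100 Hz (f1 + f2): "phantom" components
```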

These anomalous components in the sound from a piano, commonly referred to as phantom partials, were first reported in 1944,1 and work over the following 70 years resulted in the conclusion that they originate in the stretching of the string as it vibrates.2,3 Therefore, the source of all of the sound from a piano is still considered to be the string. This idea has been incorporated into the most complex computer models of the piano, which may eventually be used to study the effects of changing the piano design without having to build a new piano to determine if the change is desirable.

The commonly accepted idea that phantom partials can originate in the string is not wrong: some of the phantom power is indeed created by the string motion. However, the work reported in this presentation shows that only a small part of the power in the phantom partials comes from the string. Much more of it is created in the wood. This has implications for those trying to build computer models of the piano, as well as for those trying to understand the difference between a good piano and a truly great one.

Before this new information can be included in the latest computer models, the process that creates phantom partials in the wood must be understood. The next step is to develop a theory that can describe the process, and test the theory against further experiments. But the idea that the piano is merely a source-filter system will have to be abandoned if we are to understand this wonderful and ubiquitous musical instrument.

1)  A. F. Knoblaugh, “The clang tone of the pianoforte,” J. Acoust. Soc. Am. 16, 102 (1944).

2)  H. A. Conklin, “Generation of partials due to nonlinear mixing in a stringed instrument,” J. Acoust. Soc. Am. 105, 536-545 (1999).

3)  N. Etchenique, S. R. Collin, and T. R. Moore, “Coupling of transverse and longitudinal waves in piano strings,” J. Acoust. Soc. Am. 137, 1766-1771 (2015).

3pID2 – Yanny or Laurel? Acoustic and non-acoustic cues that influence speech perception

Brian B. Monson, monson@illinois.edu

Speech and Hearing Science
University of Illinois at Urbana-Champaign
901 S Sixth St
Champaign, IL 61820
USA

Popular version of paper 3pID2, “Yanny or Laurel? Acoustic and non-acoustic cues that influence speech perception”
Presented Wednesday afternoon, November 7, 1:25-1:45pm, Crystal Ballroom FE
176th ASA Meeting, Victoria, Canada

“What do you hear?” This question, which divided the masses earlier this year, highlights the complex nature of speech perception and, more generally, of each individual’s perception of the world. From the yanny v. laurel phenomenon, it should be clear that what we perceive depends not only upon the physics of the world around us, but also upon our individual anatomy and individual life experiences. For speech, this means our perception can be influenced greatly by individual differences in auditory anatomy, physiology, and function, but also by factors that may at first seem unrelated to speech.

In our research, we are learning that one’s ability (or inability) to hear at extended high frequencies can have substantial influence over one’s performance in common speech perception tasks.  These findings are striking because it has long been presumed that extended high-frequency hearing is not terribly useful for speech perception.

Extended high-frequency hearing is defined as the ability to hear at frequencies beyond 8,000 Hz.  These are the highest audible frequencies for humans, are not typically assessed during standard hearing exams, and are believed to be of little consequence when it comes to speech.  Notably, sensitivity to these frequencies is the first thing to go in most forms of hearing loss, and age-related extended high-frequency hearing loss begins early in life for nearly everyone.  (This is why the infamous “mosquito tone” ringtones are audible to most teenagers but inaudible to most adults.)

Previous research from our lab and others has revealed that a surprising amount of speech information resides in the highest audible frequency range for humans, including information about the location of a speech source, the consonants and vowels being spoken, and the sex of the talker. Most recently, we ran two experiments assessing what happens when we simulate extended high-frequency hearing loss. We found that one’s ability to detect the head orientation of a talker is diminished without extended high frequencies. Why might that be important? Knowing a talker’s head orientation (i.e., “Is this person facing me or facing away from me?”) helps to answer the question of whether a spoken message is intended for you or someone else. Relatedly, and most surprisingly, we found that restricting access to the extended high frequencies diminishes one’s ability to overcome the “cocktail party” problem. That is, extended high-frequency hearing improves one’s ability to “tune in” to a specific talker of interest when many interfering talkers are talking simultaneously, as when attending a cocktail party or other noisy gathering. Do you seem to have a harder time understanding speech at a cocktail party than you used to? Are you middle-aged? It may be that the typical age-related hearing loss at extended high frequencies is contributing to this problem. Our hope is that assessment of hearing at extended high frequencies will become routine in audiological exams. This would allow us to determine the severity of extended high-frequency hearing loss in the population and whether some techniques (e.g., hearing aids) could be used to address it.
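A simple way to simulate extended high-frequency hearing loss in such experiments is to low-pass filter the stimuli near 8 kHz before presenting them. The sketch below shows that general idea; the file name, filter order, and cutoff are illustrative assumptions rather than the exact processing used in our studies.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

# Hypothetical wideband speech recording (assumed 16-bit mono, fs well above 16 kHz).
fs, speech = wavfile.read("speech_fullband.wav")

# Remove energy above ~8 kHz to mimic a listener without
# extended high-frequency (EHF) hearing.
sos = butter(8, 8000, btype="lowpass", fs=fs, output="sos")
speech_no_ehf = sosfiltfilt(sos, speech.astype(float))

# Amplitude scaling is kept simple for this sketch.
wavfile.write("speech_no_ehf.wav", fs, speech_no_ehf.astype(np.int16))
```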


Figure 1. Spectrographic representation of the phrase “Oh, say, can you see by the dawn’s early light.” While the majority of energy in speech lies below about 6,000 Hz (dotted line), extended high-frequency (EHF) energy beyond 8,000 Hz is audible and assists with speech detection and comprehension.

2pAB8 – Blind as a bat? Evidence suggests bats use vision to supplement echolocation in presence of ambient light

Kathryn A. McGowan – kmcgowan01@saintmarys.edu
Saint Mary’s College
Le Mans Hall, 149
Notre Dame, IN 46556

Presented Tuesday afternoon, November 6, 2018
176th ASA Meeting, Victoria, British Columbia

Bats use echolocation, or biological sonar, to form an auditory picture of their environment when foraging and avoiding obstacles in flight (1). To echolocate, bats emit a loud, high-pitched sound using their mouth or nose. The sound bounces off an object and returns to the bat as an echo, providing information about the object’s characteristics and location. While echolocation allows for the detection and discrimination of targets, the high-frequency sounds that bats emit when echolocating provide only a limited range of information (2). Despite being known for flying at night, some bats spend only part of their time flying in complete darkness, suggesting that they may also rely on vision to supplement their echolocation in environments with more light (2, 3). Previous studies have demonstrated that vision in bats influences flight behavior, which suggests bats may combine vision and echolocation to sense their environment (2). It is, therefore, accepted that bats are not blind, as the common phrase suggests, but little is known about how vision influences the way bats use echolocation.

Figure 1. Swarm of Brazilian free-tailed bats flying during daylight hours after emergence. Photo Credit – Dr. Laura Kloepper, 2018

The Brazilian free-tailed bat migrates annually from Mexico to form large maternity colonies in caves in the southwestern United States (2). These bats hunt insects in flight, emerging from the cave in groups of thousands for nightly foraging. The bats return to the cave in the early hours of the morning, requiring them to navigate back to their complex cave environment across a vast, open landscape. This reentry occurs during periods of complete darkness as well as in early morning hours when ambient light is present, so the bats have the option of using both echolocation and visual cues to navigate during hours of daylight. Our research addresses how bats change their echolocation calls when moving from an open environment to the more complex cave-edge environment, and how the presence of daylight may influence their reliance on echolocation when accomplishing this feat.


Figure 2. Spectrogram image of a sequence of bat echolocation calls recorded at the cave environment.

Compared to the calls used over the open landscape, bats at the cave edge used more complex calls that gathered more precise information about that environment. During hours of daylight, however, these calls collected less precise information than during hours of darkness. Because less information was gathered acoustically during daylight hours, it is likely that bats obtain information from visual cues once daybreak occurs. This supplementing of echolocation with vision indicates that, despite what the phrase says, bats are not blind.

Video 1. Bats emerging for foraging during early dusk.

  1. Moss, C. F., & Surlykke, A. 2010. Probing the natural scene by echolocation in bats. Frontiers in Behavioral Neuroscience 4: 33.
  2. Mistry, S. 1990. Characteristics of the visually guided escape response of the Mexican free-tailed bat, Tadarida brasiliensis. Animal Behaviour 39: 314-320.
  3. Davis, W.H., Barbour, R.W. 1965. The use of vision in flight by the bat Myotis sodalis. The American Midland Naturalist 74: 497–499.