1pUW4 – Videos of ultrasonic wave propagation through transparent acrylic objects in water for introductory physics courses produced using refracto-vibrometry

Matthew Mehrkens – mmehrken@gustavus.edu
Benjamin Rorem – brorem@gustavus.edu
Thomas Huber – huber@gustavus.edu
Gustavus Adolphus College
Department of Physics
800 West College Avenue
Saint Peter, MN 56082

Popular version of paper 1pUW4, “Videos of ultrasonic wave propagation through transparent acrylic objects in water for introductory physics courses produced using refracto-vibrometry”
Presented Monday afternoon, May 7, 2018, 2:30pm – 2:45pm, Greenway B
175th ASA Meeting, Minneapolis

In most introductory physics courses, there are units on sound waves and optics. These may include readings, computer simulations, and lab experiments where properties such as reflection and refraction of light are studied. Similarly, students may study how an object, such as an airplane, traveling faster than the speed of sound can produce a Mach cone. Equations such as Snell’s law of refraction or the Mach angle equation are derived or presented so that students can perform calculations. However, an important piece is missing for some students – they are not able to actually see the sound or light waves traveling.

The goal of this project was to produce videos of ultrasonic wave propagation through a transparent acrylic sample that could be incorporated into introductory high-school and college physics courses. Students can observe and quantitatively study wave phenomena such as reflection, refraction and Mach cone formation. By using rulers, protractors, and simple equations, students can use these videos to determine the velocity of sound in water and acrylic.

Video that demonstrates ultrasonic waves propagating in acrylic samples measured using refracto-vibrometry.

To produce these videos, an optical technique called refracto-vibrometry was used. As shown in Figure 1, the laser from a scanning laser Doppler vibrometer was directed through a water-filled tank at a retroreflective surface.


Figure 1: (a) front view, and (b) top view. The pulse from an ultrasound transducer passes through water and is incident on a transparent rectangular target. To measure propagating wave fronts using refracto-vibrometry, the laser from the vibrometer traveled through the water and was reflected off a retroreflector.

The vibrometer detected the density changes as the ultrasound wave pulse passed through the laser beam. This process of measuring the ultrasound arrival time was performed thousands of times when the laser was directed at a large collection of scan points. These data sets were used to create videos of the propagating ultrasound.
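To give a feel for how the thousands of scan-point recordings become a video, here is a minimal sketch. It is not the authors' processing software; it simply assumes that each point on a grid of scan points has a recorded vibrometer time series, and that each video frame shows all of those signals at a single instant.

```python
import numpy as np

# Hypothetical refracto-vibrometry data: one vibrometer time series per scan point.
# shape: (n_rows, n_cols, n_samples) -- a grid of scan points, each with a signal.
n_rows, n_cols, n_samples = 100, 120, 500
scan_data = np.random.randn(n_rows, n_cols, n_samples)  # placeholder for measured data

# Each video frame is simply the vibrometer signal at every scan point
# at one instant in time, so frame k is the slice scan_data[:, :, k].
frames = [scan_data[:, :, k] for k in range(n_samples)]

# Example: the amplitude map for the 250th time sample.
frame_250 = frames[250]
print(frame_250.shape)  # (100, 120)
```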

In one measurement, a transparent rectangular acrylic block, tilted at an angle, was placed in the water tank. Figure 2 is a single frame from a video showing the traveling ultrasonic waves emitted from a transducer and reflected/refracted by the block. By using the video, along with a ruler and protractor, students can determine the speed of sound in the water and acrylic block.
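For example, a student who measures the angle of incidence in the water and the angle of refraction in the acrylic with a protractor can apply Snell's law for sound, sin θ₁ / v₁ = sin θ₂ / v₂, to estimate the speed of sound in the block. The numbers in this sketch are illustrative placeholders, not measurements taken from the video.

```python
import math

# Illustrative values only -- students would read the angles from the video.
v_water = 1480.0        # speed of sound in water (m/s), a typical textbook value
theta_incident = 30.0   # angle of incidence in water (degrees), example value
theta_refracted = 50.0  # angle of refraction in acrylic (degrees), example value

# Snell's law for sound: sin(theta1) / v1 = sin(theta2) / v2
v_acrylic = v_water * math.sin(math.radians(theta_refracted)) / math.sin(math.radians(theta_incident))
print(f"Estimated speed of sound in acrylic: {v_acrylic:.0f} m/s")
```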

Video showing ultrasonic waves traveling through water as they are reflected and refracted by a transparent acrylic block.

Figure 2: Ultrasonic wave pulses (cyan and red colored bands) as they travel from water into the acrylic block (the region outlined in magenta). The paths of the maximum positions of the waves are shown by the green and blue dots.

In a similar measurement, a transparent acrylic cylinder was suspended in the water tank by fine monofilament string.  As an ultrasonic pulse traveled in the cylinder, it created a small bulge in the surface. Because this bulge in the acrylic cylinder traveled faster than the speed of sound in water, it produced a Mach cone that can be seen in the video and in Figure 3.  Students can determine the speed of sound in the cylinder by measuring the angle of this cone.
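The Mach-angle relation, sin θ = v_water / v_cylinder, lets students convert the measured cone half-angle into the wave speed in the cylinder. The values in the sketch below are illustrative placeholders, not measurements from Figure 3.

```python
import math

# Illustrative values only -- students would measure the cone angle with a protractor.
v_water = 1480.0        # speed of sound in water (m/s)
mach_half_angle = 33.0  # measured half-angle of the Mach cone (degrees), example value

# sin(Mach angle) = v_water / v_cylinder, so:
v_cylinder = v_water / math.sin(math.radians(mach_half_angle))
print(f"Estimated wave speed in the acrylic cylinder: {v_cylinder:.0f} m/s")
```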

Figure 3: Mach cone produced by ultrasonic waves traveling faster in acrylic cylinder than in water.

Video showing formation of a Mach cone resulting from ultrasonic waves traveling faster through an acrylic cylinder than in water.

By interacting with these videos, students should be able to gain a better understanding of wave behavior. The videos are available for download from http://physics.gustavus.edu/~huber/acoustics

This material is based upon work supported by the National Science Foundation under Grant Numbers 1300591 and 1635456. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

4aPP7 – The Loudness of an Auditory Scene

William A. Yost – william.yost@asu.edu
Michael Torben Pastore – m.torben.pastore@gmail.com

Speech and Hearing Science
Arizona State University
PO Box 870102
Tempe AZ, 85287-0102

Popular version of paper 4aPP7
Presented Thursday morning, May 10, 2018
175th ASA Meeting, Minneapolis, MN

This paper is part of a special session honoring Dr. Neil Viemeister, University of Minnesota, for his brilliant career. One of the topics Dr. Viemeister studies is loudness perception. Our presentation deals with the perceived loudness of an auditory scene when several people talk at about the same time. In the real world, the sounds of all the talkers are combined into one complex sound before they reach a listener’s ears. The auditory brain sorts this single complex sound into acoustic “images”, where each image represents the sound of one of the talkers. In our research, we try to understand how many such images can be “pulled out” of an auditory scene so that they are perceived as separate, identifiable talkers.

In one type of simple experiment, listeners are asked to determine how many talkers must be added before they notice that the number of talkers has increased. When we increase the number of talkers, the additional talkers make the overall sound louder, and this change in loudness can be used as a cue to help listeners discriminate which sound has more talkers. If we make the overall loudness of, say, a four-talker scene and a six-talker scene the same, the loudness of the individual talkers in the six-talker scene will be less than the loudness of the individual talkers in the four-talker scene.

If listeners can focus on the individual talkers in the two scenes, they might be able to use the change in loudness of individual talkers as a cue for discrimination. If listeners cannot focus on individual talkers in a scene, then the two scenes may not be discriminable and they are likely to be judged as equally loud. We have found that listeners can make loudness judgments of the individual talkers for scenes of two or three talkers, but not more. This indicates that the loudness of a complex sound may depend on how well the individual components of the sound are perceived and, if so, that only two or three such components (images, talkers) can be processed by the auditory brain at a given time.

Trying to listen to one or more people when many people are talking at the same time is difficult, especially for people who are hard of hearing. If the normal auditory system can only process a few sound sources presented at the same time, this reduces the complexity of devices (e.g., hearing aids) that might be designed to help people with hearing impairment process sounds in complex acoustic environments. In auditory virtual reality (AVR) scenarios, there is a computational cost associated with processing each sound source. If an AVR system only has to process a few sound sources to mimic normal hearing, it would be a lot less expensive than if the system had to process many sound sources. (Supported by grants from the National Institutes of Health (NIDCD) and Oculus VR, LLC.)

2aEA3 – Insect Ears Inspire Miniature Microphones

James Windmill – james.windmill@strath.ac.uk
University of Strathclyde
204 George Street
Glasgow, G1 1XW
United Kingdom

Popular version of paper 2aEA3
Presented Tuesday morning, May 8, 2018
175th ASA Meeting, Minneapolis, MN

Miniature microphones are a technology that everyone uses every day without thinking about it. They are used in smartphones, laptops, tablets, and more recently in smart home equipment. However, working with sound technology always means there are issues, like how to deal with background noise. Engineers have always looked for ways to make technology better, and in miniature microphones one of the paths for improvement has been to look at how insects hear. If you want to design a really small microphone, then why not look at how the ear of a really small animal works?

In the 1990s, researchers discovered that a small fly (Ormia ochracea) has a very directional ear. That is, it can tell the direction a sound is coming from with far higher accuracy than predicted. Since that discovery, many engineers have attempted to make microphones copying the mechanism in the Ormia ear. Much of the effort has been spent trying to get around the problem that the Ormia is only interested in hearing one specific frequency. Humans want microphones that cover all the frequencies we can hear. Why bother copying this insect ear? If you could make a tiny directional microphone, then a lot of background noise drops away simply because the microphone points towards the person speaking.

At Strathclyde we have developed a variety of microphones based on the Ormia ear mechanism. The main push in this work has been to try and get more sensitive microphones working across more frequencies. To do this we have put four microphones into one Ormia type design, as in Figure 1. So instead of a single frequency, the microphone works as a miniature directional microphone across four main frequencies [1].


Figure 1. Four frequency Ormia inspired miniature microphone.

Work on the Ormia system at Strathclyde encouraged us to think about other things that insect ears do, and about their structure, to see if there are other advantages to be found. This work has taken two main themes. Firstly, many hearing systems in nature are not just simple mechanical systems; they are active sensors. That is, they change how they function depending on what sound they’re listening to. For a quiet sound they increase the amplification of the signal in the ear, and for a loud sound they turn it down. Some ears also change their frequency response, altering the frequencies they are tuned to. Strathclyde researchers have taken these ideas and produced miniature microphone systems that can do the same thing [2]. Why do this, when you can just do it in signal processing? By making the microphone “smart”, you can free up processor power to do other things, or reduce the delay between a sound arriving and the electronic signal being used.
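To illustrate what an “active” sensor does, here is a toy feedback loop in software: it measures the level of the incoming signal and turns the amplification up for quiet sounds and down for loud ones. This is only a conceptual sketch of the idea; the Strathclyde microphones implement this kind of behaviour in the sensor itself [2], and all parameter values below are arbitrary illustration choices.

```python
import numpy as np

def adaptive_gain(signal, target_rms=0.1, block_size=256, max_gain=20.0):
    """Toy automatic-gain loop: amplify quiet blocks, attenuate loud ones.

    This mimics, in software, the level-dependent amplification an active
    (insect-inspired) sensor could apply before the signal reaches a processor.
    """
    out = np.empty_like(signal)
    for start in range(0, len(signal), block_size):
        block = signal[start:start + block_size]
        rms = np.sqrt(np.mean(block ** 2)) + 1e-12   # avoid division by zero
        gain = min(target_rms / rms, max_gain)       # more gain for quieter blocks
        out[start:start + block.size] = gain * block
    return out

# Example: a quiet tone gets boosted toward the target level.
t = np.linspace(0, 1, 8000, endpoint=False)
quiet_tone = 0.01 * np.sin(2 * np.pi * 440 * t)
boosted = adaptive_gain(quiet_tone)
print(np.sqrt(np.mean(boosted ** 2)))  # close to 0.1
```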

Figure 2. Graphs showing the results of a miniature microphone actively changing its frequency (A) and gain response (B).

Secondly, we thought about how you make miniature microphones. The ones we use in phones, computers, and other devices today are made using computer-chip technology, so they are very flat and made of very hard silicon. Insect ears are made of a relatively soft material and come in a huge variety of three-dimensional shapes. The obvious thing, it seemed to us, was to try making insect-inspired microphones using 3D printing techniques. This is very early work, and it’s not easy to do. But we have had some success making microphone sensors using 3D printers [3]. Figure 3 shows an acoustic sensor that was inspired by how the locust hears sound.

Figure 3. 3D printed acoustic sensor inspired by the ear of a locust.

There is still a lot of work to do, both on developing these techniques and technologies, and on working out how best to use them in everyday technologies like the smartphone. Then again, a huge number of different insects have ears, each working in slightly different ways to hear different things for different reasons, so there are a lot of ears out there we can take inspiration from.

[1] Bauer R et al. (2017), Housing influence on multi-band directional MEMS microphones inspired by Ormia ochracea, IEEE Sensors Journal, 17: 5529-5536.
http://dx.doi.org/10.1109/JSEN.2017.2729619

[2] Guerreiro J et al. (2017), Simple Ears Inspire Frequency Agility in an Engineered Acoustic Sensor System, IEEE Sensors Journal, 17: 7298-7305.
http://dx.doi.org/10.1109/JSEN.2017.2699697

[3] Domingo-Roca R et al. (2018), Bio-inspired 3D-printed piezoelectric device for acoustic frequency selection, Sensors & Actuators: A. Physical, 271: 1-8.
https://doi.org/10.1016/j.sna.2017.12.056

4aSC12 – When it comes to recognizing speech, being in noise is like being old

Kristin Van Engen – kvanengen@wustl.edu
Avanti Dey
Nichole Runge
Mitchell Sommers
Brent Spehar
Jonathan E. Peelle

Washington University in St. Louis
1 Brookings Drive
St. Louis, MO 63130

Popular version of paper 4aSC12
Presented Thursday morning, May 10, 2018
175th ASA Meeting, Minneapolis, MN

How hard is it to recognize a spoken word?

Well, that depends. Are you old or young? How is your hearing? Are you at home or in a noisy restaurant? Is the word one that is used often, or one that is relatively uncommon? Does it sound similar to lots of other words in the language?

As people age, understanding speech becomes more challenging, especially in noisy situations like parties or restaurants. This is perhaps unsurprising, given the large proportion of older adults who have some degree of hearing loss. However, hearing measurements do not actually do a very good job of predicting the difficulty a person will have with speech recognition, and older adults tend to do worse than younger adults even when their hearing is good.

We also know that some words are more difficult to recognize than others. Words that are used rarely are more difficult than common words, and words that sound similar to many other words in the language are recognized less accurately than unique-sounding words. Relatively little is known, however, about how these kinds of challenges interact with background noise to affect the process of word recognition or how such effects might change across the lifespan.

In this study, we used eye tracking to investigate how noise and word frequency affect the process of understanding spoken words. Listeners were shown a computer screen displaying four images, and listened to the instruction “Click on the” followed by a target word (e.g., “Click on the dog.”). As the speech signal unfolds, the eye tracker records the moment-by-moment direction of the person’s gaze (60 times per second). Since listeners direct their gaze toward the visual information that matches incoming auditory information, this allows us to observe the process of word recognition in real time.
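A common way to turn such gaze samples into a word-recognition time course is to compute, for each moment after word onset, the proportion of trials on which the listener is fixating the target picture. The sketch below is only an illustration of that kind of analysis, not the authors' code; the array shapes and variable names are assumptions.

```python
import numpy as np

# Hypothetical gaze data: for each trial and each 1/60-s sample after word onset,
# 1 if the listener is fixating the target picture, 0 otherwise.
n_trials, n_samples = 40, 120          # 40 trials, 2 s at 60 Hz (assumed sizes)
looks_to_target = np.random.randint(0, 2, size=(n_trials, n_samples))  # placeholder

# Proportion of trials fixating the target at each time point.
fixation_curve = looks_to_target.mean(axis=0)

# A simple summary of recognition speed: when the curve first exceeds 50%.
above = np.where(fixation_curve > 0.5)[0]
if above.size:
    print(f"Curve first exceeds 50% at {above[0] / 60:.2f} s after word onset")
```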

Our results indicate that word recognition is slower in noise than in quiet, slower for low-frequency words than high-frequency words, and slower for older adults than younger adults. Interestingly, young adults were more slowed down by noise than older adults. The main difference, however, was that young adults were considerably faster to recognize words in quiet conditions. That is, word recognition by older adults didn’t differ much from quiet to noisy conditions, but young listeners looked like older listeners when tasked with listening to speech in noise.

1aPP7 – Say what? Brief periods of hearing loss in childhood can have consequences later in life

Kelsey L Anbuhl – kla@nyu.edu
Daniel J Tollin – Daniel.tollin@ucdenver.edu

Department of Physiology & Biophysics
University of Colorado School of Medicine
RC1-N, 12800 E 19th Avenue
Aurora, CO 80045

Popular version of paper 1aPP7
Presented Monday morning, May 7, 2018
175th ASA Meeting, Minneapolis, MN

The sense of hearing enables us to effortlessly and precisely pinpoint the sounds around us. Even in total darkness, listeners with normal, healthy hearing can distinguish sound sources only inches apart. This remarkable ability depends on the coordinated use of sounds at the two ears, known as binaural hearing. Binaural hearing helps us to discern and learn speech sounds as infants, to listen to the teacher’s voice rather than the chatter of nearby students as children, and to navigate and communicate in a noisy world as adults.

For individuals with hearing loss, these tasks are notoriously more challenging, and often remain so even after treatment with hearing aids or other assistive devices. Classrooms, restaurants, and parties represent troublesome settings where listening is effortful and interrupted. Perplexingly, some individuals who appear to have normal hearing (as assessed with an audiogram, a common test of hearing) experience similar difficulties, as if the two ears are not working together. Such binaural hearing difficulties can lead to a diagnosis of Central Auditory Processing Disorder (CAPD). CAPD is defined by auditory deficits that are not explained by typical hearing loss (as would be seen on an audiogram), and indicates dysfunction in the auditory brain. Prevalence of CAPD has been estimated at 5-20% in adults and ~5-7% in children.

Interestingly, CAPD is especially prevalent in children that have experienced frequent ear infections during the first few years of life. Ear infections can lead to a temporary conductive hearing loss from the buildup of fluid in the middle ear (called otitis media with effusion) which prevents sound from reaching the inner ear normally. For children who experience repeated ear infections, the developing auditory brain might receive unbalanced input from the two ears for weeks or months at a time. While infections generally dissipate later in childhood and the audiograms of both ears return to normal, early disruptions in auditory input could have lasting consequences for the binaural centers of the brain.

We hypothesized that persistent conductive hearing loss (such as that caused by ear infections) disrupts the fine-tuning of binaural hearing in the developing auditory system. Using an animal model (the guinea pig), we found that chronic conductive hearing loss during development (induced by an earplug) caused the brain to generate an altered representation of auditory space. When the hearing loss was reversed by simply removing the earplug, the brain misinterpreted the normal sounds arriving at the two ears and the animals consequently pinpointed sounds less precisely; in fact, animals were ~threefold worse at a simple sound location discrimination task than animals that had not worn earplugs, as if the sense of auditory space had been blurred.  These results provide a model for CAPD; a child with CAPD may struggle to understand a teacher because a less precise (“blurry”) representation of sound location in the brain makes it difficult to disentangle the teacher’s voice from competing sounds (Figure 1). Overall, the results suggest that experiencing even temporary hearing loss during early development can alter the normal maturation of the auditory brain.  These findings underscore the importance of early detection and treatment of hearing loss.

Figure 1: A typical classroom is an acoustically complex environment that can be difficult for a child with CAPD. Children with normal binaural hearing (blue shading) can separate the teacher’s voice from background noise sources, but those with impaired binaural hearing (red shading) may have a much harder time accomplishing this task.

2pAB2 – Sound of wood-boring larvae and its automated detection

Alexander Sutin – asutin@stevens.edu
Alexander Yakubovskiy – ayakubov@stevens.edu
Hady Salloum – hsalloum@stevens.edu
Timothy Flynn – tflynn2@stevens.edu
Nikolay Sedunov – nsednov@stevens.edu
Stevens Institute of Technology, Hoboken, NJ 07030

Hannah Nadel – Hannah.Nadel@aphis.usda.gov
USDA APHIS PPQ S&T, 1398 West Truck Road, Buzzards Bay, MA 02542

Sindhu Krishnankutty – Sindhu.Krishnankutty@aphis.usda.gov
Department of Biology, Xavier University, Cincinnati, OH 45207

Popular version of paper 2pAB2 , “Sound of wood-boring larvae and its automated detection”
Presented Tuesday, May 8, 2018, 1:40-2:00 PM, LAKESHORE B,
175th ASA Meeting, Minneapolis.


Figure 1. Tree bolt with wood-boring beetle larva and attached sensors

The difficulty agricultural inspectors face in detecting potentially dangerous plant pests at ports of entry, and the increasing invasion of U.S. agriculture and forestry by exotic pests in recent years, are serious problems, even though the Federal Government has robustly reinforced the country’s borders and ports of entry. It is estimated that exotic invasive species now cost the American economy over $138 billion per year. Customs and Border Protection (CBP) facilitates the processing of roughly $2 trillion in legitimate trade, imports, and exports yearly while enforcing U.S. trade laws that protect the economy, health, and safety of people worldwide. Currently, CBP agriculture specialists inspect for pests using mostly manual techniques that are time-consuming and potentially not 100% effective, because resources allow only about 2% of cargo to be examined. Wood-boring pests are especially time-consuming to detect, as they burrow and feed inside wood and often leave few visual cues to their presence.

Stevens Institute of Technology has been investigating engineering solutions to augment the current wood inspection process at ports of entry, in an effort to minimize the time spent per inspection and maximize the detection rate of infestations in wood packaging and wood products. One of our systems is based on the detection of vibrational pulses made by wood-boring larvae during feeding; results of the initial research in this direction are presented in [1]. A major problem in the automated detection of wood-boring larvae is distinguishing insect-induced vibrational pulses from background noise. To develop an acoustic-signature detection algorithm, numerous acoustic signals made by the larvae of Anoplophora glabripennis (Asian longhorned beetle, ALB) and Agrilus planipennis (emerald ash borer, EAB) were collected in a quarantine facility at the USDA-APHIS PPQ Otis laboratory. We also recorded and analyzed typical background noise pulses, namely speech, knocking, and tapping made by humans, and sounds of electronic equipment. Examples of time traces and spectrograms of the recorded signals are shown in Figures 2 and 3.

Figure 2. Recorded vibrational pulses from various sources.

Figure 3. Spectrograms of insect sounds and human speech.

In the conducted analyses, we considered the features of both larval sound pulses and typical noise pulses. The features we extracted and evaluated were based on estimates of the duration, spectrum, and spectrogram of the signal, and of the spectrum and spectrogram of the signal envelope (obtained via the Hilbert transform). Some noise pulses (knocking and tapping) are longer than the larval bite sounds, while others (electronic beeps) are similar in duration. Speech includes fragments (vowels) that are much longer than larval bite sounds, but also very short fragments within the vowels (high-pitched harmonics). The spectral content of some non-insect sounds differs from that of larval feeding sounds. The envelope spectra, therefore, appear to be informative features.
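As an illustration of the envelope feature, the sketch below computes a pulse’s envelope with the Hilbert transform and then takes the spectrum of that envelope, one of the feature types described above. It assumes a recorded pulse is already available as a NumPy array; it is not the project’s actual feature-extraction code.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(pulse, fs):
    """Return (frequencies, magnitude spectrum) of a pulse's envelope.

    The envelope is the magnitude of the analytic signal obtained with the
    Hilbert transform, as described in the text.
    """
    envelope = np.abs(hilbert(pulse))
    envelope = envelope - envelope.mean()          # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return freqs, spectrum

# Example with a synthetic "bite-like" pulse: a 4 kHz tone burst with a 20-ms Hanning envelope.
fs = 48_000
t = np.arange(0, 0.02, 1.0 / fs)
pulse = np.sin(2 * np.pi * 4000 * t) * np.hanning(t.size)
freqs, spec = envelope_spectrum(pulse, fs)
print(freqs[np.argmax(spec)])  # dominant modulation frequency of the envelope
```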

Analysis of the recorded vibrations allowed extraction of signal features that could ultimately be used for larval classification. These features include the main frequency of the generated pulses, their duration, and the main frequency of the pulse envelope (modulation frequency). In the conducted tests, these features showed a clear separation of ALB and EAB acoustic signatures. For example, the main frequency of the ALB sound was in the range of 3.8-4.8 kHz, while for EAB it was between 1.2 and 1.8 kHz. A preliminary algorithm for automated insect-signal detection was developed. The algorithm automatically detects pulses with parameters typical of larva-induced sounds and rejects non-insect sound pulses that belong to the ambient noise. A detection is declared when the number of detected pulses within a set time (1 minute) exceeds a defined threshold. In the tests, this algorithm detected larvae in all samples without false alarms.
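A minimal version of such a detector could work as follows, assuming an earlier stage has already produced a list of pulses with their main frequency and duration: keep only pulses whose features fall in the insect range, count them over a one-minute window, and declare a detection when the count exceeds a threshold. The frequency range follows the ALB figure quoted above; the other numbers are placeholders, and this is a sketch rather than the project’s actual algorithm.

```python
def detect_larvae(pulses, window_s=60.0, min_pulses=10,
                  freq_range=(3800.0, 4800.0), max_duration_s=0.01):
    """Toy detector: count pulses whose features look like ALB bites.

    `pulses` is a list of (time_s, main_freq_hz, duration_s) tuples produced by
    an earlier pulse-detection stage. The frequency range follows the 3.8-4.8 kHz
    ALB value quoted in the text; the count threshold and duration limit are
    placeholder values.
    """
    insect_like = [p for p in pulses
                   if freq_range[0] <= p[1] <= freq_range[1] and p[2] <= max_duration_s]
    # Slide a one-minute window over the insect-like pulse times.
    times = sorted(p[0] for p in insect_like)
    for i, t0 in enumerate(times):
        count = sum(1 for t in times[i:] if t - t0 <= window_s)
        if count >= min_pulses:
            return True   # enough insect-like pulses within one minute
    return False

# Example: 12 pulses at 4.2 kHz within a few seconds would trigger a detection.
example = [(t * 0.5, 4200.0, 0.004) for t in range(12)]
print(detect_larvae(example))  # True
```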

We are close to the finalization of a prototype for wood-boring-insect detection in wooden pallets. This prototype includes the following features:

  1. Sensitive sensors that are practically unaffected by external sounds. Each sensor contains an accelerometer and a microphone; the microphone is used to estimate ambient noise and to eliminate strong ambient-noise signals that could penetrate the vibrational channel.
  2. An insect-sound emitter that simulates real insect sounds for testing and calibrating the detection system.
  3. Detection of insect-produced vibrations, based on the principles presented above.

Acknowledgement
This project was funded under contract with the U.S. Department of Homeland Security’s Science and Technology Directorate (S&T). The opinions contained herein are those of the contractors and do not necessarily reflect those of DHS S&T.

References
[1] Sutin, A.,  T. Flynn, H. Salloum, N. Sedunov, Y. Sinelnikov, and H. Hull-Sanders. 2017. Vibro-acoustic methods of insect detection in agricultural shipments and wood packing materials. In: Proceedings of Technologies for Homeland Security (HST), IEEE International Symposium, 2017, Boston, USA, pp. 1 – 6.
