1pNS1 – Acoustic vehicle alerts and sleep disruption: A content analysis of online discussion forums

Jeanine Botta – jeanine.botta@downstate.edu
The Right to Quiet Society for Soundscape Awareness and Protection
720 East 31st Street
Apartment 7G
Brooklyn, NY 11210

Popular version of paper 1pNS1
Presented Monday afternoon, May 13, 2019
177th ASA Meeting, Louisville, KY

With the quieter engines that have evolved over decades in electric, hybrid, and many internal combustion vehicles, authoritative voices tell us that the problem of noisy cars has been solved. Moving vehicles may be quieter now, but the same cannot be said for cars that are parked.

Most vehicles manufactured for the North American market feature an audible signal that assures owners that they have locked their cars. Roughly half of cars sold in the U.S. and Canada use an electronic tone, while the rest use a horn sound. Most cars offer the option to use a visual signal.

Owners can lock a car and arm its security system without any audible signal, but this feature is rarely discussed when buying a car. In some cars, remote start, stages of battery charging, and stages of filling a tire with air are announced with a horn signal. Some brands have also incorporated horn sounds to signal that the engine is still running after the driver has left the car.

Many people are unaware of acoustic vehicle signals, or of the fact that some horn sounds are emitted by parked cars. Awareness may come only after buying a car, or when walking in front of a car as it is being locked. When one’s home faces a parking lot or street parking, these signals can disrupt sleep, and the resulting annoyance can make it harder to return to sleep.

This study explored common experiences of sleep disruption caused by remote horn signaling, using content analysis of complaints posted in general discussion forums and in forums created by car owners who want to disable the sound. The study draws on a database compiled as part of a noise activism endeavor called the Silence the Horns project.

In complaint forum posts, remote horn signals are described as a source of sleep disruption, reduced quality of life, annoyance, and emotional and physical stress responses. Other concerns include hearing health, safety, and legal considerations. Posts can provide useful information, even as some authors introduce a degree of antagonism. [1]

Car owner forum posts are practical. Authors may not mention a reason for wanting to eliminate the horn sound, but when they do, concerns include general consideration for neighbors, specific concerns about sleep disruption of a family member or neighbor, and embarrassment about creating a sound that brings attention. Authors are helpful, often post in other auto topic forums, and set a friendly tone. [2]

Online forum data will continue to be added to the database throughout 2019. The hope is that auto industry stakeholders will come to consider sleep disruption an important unintended consequence of horn-based lock confirmation when designing future car models.

If passed, proposed Senate bill S.543, the Protecting Americans from the Risks of Keyless Ignition Technology Act, would protect consumers from the risk of carbon monoxide poisoning related to cars inadvertently left with the engine running. [3, 4] Stakeholders should be aware that consumers are eliminating horn sounds that automakers implemented to discourage leaving a car with the engine running.

References

[1] ParaPundit forum, Locking A Car With A Short Horn Blast Is Rude and Obnoxious
http://www.parapundit.com/archives/002265.html

[2] GM-VOLT: Chevy Volt Electric Car Site, Suppress option for Charging Honk
https://gm-volt.com/forum/showthread.php?37986-Suppress-option-for-Charging-Honk

[3] Blumenthal announces legislation to protect against CO and rollaway risk raised by keyless cars
http://www.norwalkplus.com/nwk/information/nwsnwk/publish/News_1/Blumenthal-announces-legislation-to-protect-against-CO-and-rollaway-risk-raised-by-keyless-cars_np_25124.shtml

[4] S.543 – PARK IT Act
https://www.congress.gov/bill/116th-congress/senate-bill/543

1pBA11 – An ultrasound surface wave elastography technique for noninvasive measurement of scar tissue

Boran Zhou – zhou.boran@mayo.edu
Xiaoming Zhang – zhang.xiaoming@mayo.edu
Department of Radiology, Mayo Clinic,
Rochester, MN 55905

Saranya P. Wyles – Wyles.Saranya@mayo.edu
Alexander Meves – Meves.Alexander@mayo.edu
Department of Dermatology, Mayo Clinic,
Rochester, MN 55905

Steven Moran – Moran.Steven@mayo.edu
Department of Plastic Surgery, Mayo Clinic,
Rochester, MN 55905

Popular version of paper 1pBA11
Presented Monday afternoon, May 13, 2019
177th ASA Meeting, Louisville, KY

Hypertrophic scars and keloids are characterized by excessive fibrosis and can be functionally problematic. Hypertrophic scarring in particular produces wide, raised scars that remain within the original borders of injury and have a rapid growth phase. We have developed an ultrasound elastography technique to assess skin elasticity in patients with scleroderma (1). Currently, however, no clinical technique is available to noninvasively quantify scar tissue and track its progression and response to therapy. There is a need for quantitative scar measurement modalities to effectively evaluate and monitor treatments.

We aimed to assess the role of ultrasound surface wave elastography (USWE) in accurately evaluating scar metrics. Three patients were enrolled in this research based on their clinical diagnoses. Patients with a scar on the forearm or upper arm were tested in a sitting position, with the arm placed horizontally on a pillow in a relaxed state. The indenter of a handheld shaker was placed on the tissue at control and scar sites, and a 0.1-s harmonic vibration was generated by the indenter on the tissue (2). The vibration was generated at three frequencies: 100, 150, and 200 Hz. An ultrasound probe with a central frequency of 6.4 MHz was positioned about 5 mm away from the indenter to detect the surface wave motion of the tissue (3).

The wave motions at 8 selected locations on the tissue surface were measured noninvasively using our ultrasound-based method (Fig. 1a) (4). The phase change of the harmonic wave with propagation distance along the tissue surface was used to measure the surface wave speed.
The measurement of wave speed can be improved by using multiple phase-change measurements over distance (5): the phase change is regressed against distance by fitting a linear relationship (Fig. 1b). Using the tissue motion at the first location as a reference, the phase delays of the tissue motions at the remaining locations, relative to the first location, were used to measure the surface wave speed (6).
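
To make the phase-gradient idea concrete, here is a minimal sketch of how a surface wave speed can be estimated by regressing phase against distance. The 0.5-mm spacing, the assumed 2 m/s wave speed, and the simulated phase values are illustrative assumptions, not the study’s data:

import numpy as np

f = 100.0                       # excitation frequency (Hz)
c_true = 2.0                    # assumed wave speed (m/s), for simulation only
x = np.arange(8) * 0.5e-3       # 8 surface locations (m), hypothetical 0.5-mm spacing

rng = np.random.default_rng(0)
# Simulated phase delay (rad) relative to the first location, plus noise
phase = -2 * np.pi * f * x / c_true + rng.normal(0, 0.02, x.size)

slope, _ = np.polyfit(x, phase, 1)   # least-squares fit of phase vs. distance
c_est = 2 * np.pi * f / abs(slope)   # wave speed from the phase gradient
print(f"estimated wave speed: {c_est:.2f} m/s")   # ~2.00 m/s

Averaging the fit over several measurement locations, as in Fig. 1b, makes the speed estimate far less sensitive to noise at any single point.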

Wave speeds at the control and scar sites of the three patients at 100, 150, and 200 Hz, before and after treatment, are compared in Figure 2. The p values for t-tests of the before/after differences were less than 0.05 for the scar sites at all three frequencies. A higher wave speed indicates stiffer tissue, so these results suggest that the scar became softer after treatment. USWE provides an objective assessment of the scar and its response to treatment.

Figure 1. (a) Representative B-mode image of skin, (b) Blue circles represent the selected dots for wave speed measurement.

Figure 2. Comparison of wave speeds at 3 frequencies between forearm control and scar sites.

References

1. Zhang X, Zhou B, Kalra S, Bartholmai B, Greenleaf J, Osborn T. An Ultrasound Surface Wave Technique for Assessing Skin and Lung Diseases. Ultrasound in Medicine & Biology. 2018;44(2):321-31.
2. Zhang X, Osborn T, Zhou B, et al. Lung ultrasound surface wave elastography: a pilot clinical study. IEEE transactions on ultrasonics, ferroelectrics, and frequency control. 2017;64(9):1298-304.
3. Clay R, Bartholmai BJ, Zhou B, et al. Assessment of Interstitial Lung Disease Using Lung Ultrasound Surface Wave Elastography: A Novel Technique With Clinicoradiologic Correlates. J Thorac Imaging. 2018.
4. Zhang X, Zhou B, Miranda AF, Trost LW. A Novel Noninvasive Ultrasound Vibro-elastography Technique for Assessing Patients With Erectile Dysfunction and Peyronie Disease. Urology. 2018;116:99-105.
5. Zhou B, Zhang X. The effect of pleural fluid layers on lung surface wave speed measurement: Experimental and numerical studies on a sponge lung phantom. Journal of the Mechanical Behavior of Biomedical Materials. 2019;89:13-8.
6. Zhang X, Zhou B, Osborn T, Bartholmai B, Kalra S. Lung ultrasound surface wave elastography for assessing interstitial lung disease. IEEE Transactions on Biomedical Engineering. 2018.

5aAB4 – How far away can an insect as tiny as a female mosquito hear the males in a mating swarm?

Lionel Feugère1,2 – l.feugere@gre.ac.uk
Gabriella Gibson2 – g.gibson@gre.ac.uk
Olivier Roux1 – olivier.roux@ird.fr
1MIVEGEC, IRD, CNRS,
Université de Montpellier,
Montpellier, France.
2Natural Resources Institute,
University of Greenwich,
Chatham, Kent ME4 4TB, UK.

Popular version of paper 5aAB4
Presented Friday morning during the session “Understanding Animal Song” (8:00 AM – 9:30 AM), May 17, 2019
177th ASA Meeting, Louisville, KY

Why do mosquitoes make that annoying sound just when we are ready for a peaceful sleep? Why do they risk their lives by ‘singing’ so loudly? Scientists recently discovered that mosquito wing-flapping creates tones that are very important for mating; the flight tones help both males and females locate a mate in the dark.

Mosquitoes hear with their two hairy antennae, which vibrate when stimulated by the sound wave created by another mosquito flying nearby. An extremely sensitive hearing organ at the base of the antennae transforms the vibrations into an electrical signal to the brain, similar to how a joystick responds to our hand movements. Mosquitoes have the most sensitive hearing of all insects; however, this hearing mechanism is effective only at short distances. Consequently, scientists have assumed that mosquitoes use sound only for very short-range communication.

Theoretically, however, a mosquito can hear a sound at any distance, provided it is loud enough. In practice, a single mosquito will struggle to hear another mosquito more than a few centimeters away because the flight tone is not loud enough. However, in the field mosquitoes are exposed to much louder flight tones. For example, males of the malaria mosquito, Anopheles coluzzii, can gather by the thousands in station-keeping flight (‘mating swarms’) for at least 20 minutes at dusk, waiting for females to arrive. We wondered if a female mosquito could hear the sound of a male swarm from far away if the swarm is large enough.

To investigate this hypothesis, we started a laboratory population of An. gambiae from field-caught mosquitoes and observed their behaviour under field-like environmental conditions in a sound-proof room. Phase 1: we reproduced the visual cues and dusk lighting conditions that trigger swarming behaviour, released males in groups of tens to hundreds, and recorded their flight sounds (listen to SOUND 1 below).

Phase 2: we released one female at a time and played back the recordings of different-sized male swarms over a range of distances to determine how far away a female can detect males. If a female hears a flying male or males, she alters her own flight tone to let the male(s) know she is there.

Our results show that a female cannot hear a small swarm until she comes within tens of centimeters of it. However, for larger, louder swarms, females consistently responded to male flight tones, and the larger the number of males in the swarm, the further away the females responded: females detected a swarm of ~1,500 males at a distance of ~0.75 m, and a swarm of ~6,000 males at a distance of ~1.5 m.
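
This doubling of detection distance for a quadrupling of swarm size is what simple acoustics would predict. As a back-of-the-envelope check (our idealization, not an analysis from the paper), suppose the $N$ males sum incoherently and their sound spreads spherically:

\[
L(r) \;\approx\; L_1 + 10\log_{10} N \;-\; 20\log_{10}\!\left(\frac{r}{r_0}\right),
\]

where $L_1$ is the level of a single male at a reference distance $r_0$. Setting $L(r)$ equal to a fixed hearing threshold gives a detection distance proportional to $\sqrt{N}$, so a fourfold larger swarm ($6000/1500 = 4$) should be heard from $\sqrt{4} = 2$ times as far, consistent with the observed 0.75 m versus 1.5 m.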

2aSPa8 and 4aSP6 – Safe and Sound – Using acoustics to improve the safety of autonomous vehicles

Eoin A King – eoking@hartford.edu
Akin Tatoglu
Digno Iglesias
Anthony Matriss
Ethan Wagner

Department of Mechanical Engineering
University of Hartford
200 Bloomfield Avenue
West Hartford, CT 06117

Popular version of papers 2aSPa8 and 4aSP6
Presented Tuesday and Thursday morning, May 14 & 16, 2019
177th ASA Meeting, Louisville, KY

Introduction
In cities across the world every day, people use and process acoustic alerts to interact safely in and amongst traffic: drivers listen for emergency sirens from police cars or fire engines, or for the sounding of a horn warning of an impending collision, while pedestrians listen for cars when crossing a road. A city is full of sounds with meaning, and these sounds make it a safer place.

Future cities will see the large-scale deployment of (semi-)autonomous vehicles (AVs). AV technology is quickly becoming a reality; however, the manner in which AVs and other vehicles will coexist and communicate with one another is still unclear, especially during the prolonged period in which mixed vehicle types will share the road. In particular, the way AVs can use acoustic cues to supplement their decision-making is an area that needs development.

The research presented here aims to identify the meaning behind specific safety-related sounds in a city. We are developing methodologies to recognize and locate acoustic alerts in cities and to use this information to inform the decision-making of all road users, with particular emphasis on AVs. Initially, we aim to define a new set of audio-visual detection and localization tools to identify the location of a rapidly approaching emergency vehicle. In short, we are trying to develop the ‘ears’ to complement the ‘eyes’ already present on autonomous vehicles.

Test Set-Up
For our initial tests we developed a low-cost array consisting of two linear arrays of four MEMS microphones. The array was used in conjunction with a mobile robot equipped with visual sensors, as shown in Picture 1. The array acquired acoustic signals that were analyzed to (i) identify the presence of an emergency siren and then (ii) determine the location of the sound source (which was occasionally behind an obstacle). Our initial tests were conducted in the controlled setting of an anechoic chamber.

Picture 1: Test Robot with Acoustic Array

Step 1: Using convolutional neural networks for the detection of an emergency siren
Using advanced machine learning techniques, it has become possible to ‘teach’ a machine (or a vehicle) to recognize certain sounds. We used a deep Convolutional Neural Network (CNN) and trained it to recognize emergency sirens in real time, achieving 99.5% accuracy on test audio signals.
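
For readers curious what such a classifier looks like, here is a minimal sketch of a siren/no-siren CNN in PyTorch. It is not the authors’ network: the input size (log-mel spectrograms of shape 1 × 64 × 128), the layer widths, and the two-class output are illustrative assumptions.

import torch
import torch.nn as nn

class SirenCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two small conv blocks, each halving the spectrogram resolution
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 32, 2)  # 2 classes: siren / no siren

    def forward(self, x):
        x = self.features(x)                  # (B, 32, 16, 32) after two poolings
        return self.classifier(x.flatten(1))  # logits over the two classes

# One forward pass on a dummy batch of four spectrograms:
logits = SirenCNN()(torch.randn(4, 1, 64, 128))
print(logits.shape)  # torch.Size([4, 2])

In a deployed system, a network like this would run on short, overlapping windows of the microphone stream so that a siren is flagged within a fraction of a second.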

Step 2: Identifying the location of the source of the emergency siren
Once an emergency sound has been detected, it must be rapidly localized. This is a complex task in a city environment, due to moving sources, reflections from buildings, other noise sources, etc. However, by combining acoustic results with information acquired from the visual sensors already present on an autonomous vehicle, it will be possible to identify the location of a sound source. In our research, we modified an existing direction-of-arrival algorithm to report a number of sound source directions, arising from multiple reflections in the environment (i.e. every reflection is recognized as an individual source). These results can be combined with the 3D map of the area acquired from the robot’s visual sensors. A reverse ray tracing approach can then be used to triangulate the likely position of the source.
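
To give a flavor of how direction-of-arrival estimation works, here is a minimal sketch using GCC-PHAT between a single pair of microphones, a standard building block (the authors’ modified multi-source algorithm is more involved and is not reproduced here). The 5-cm mic spacing, 48-kHz sample rate, and synthetic test signal are assumptions for illustration:

import numpy as np

def gcc_phat_angle(sig, ref, fs=48_000, d=0.05, c=343.0):
    # Estimate the arrival angle (degrees from broadside) of a sound from
    # the time delay between two microphones spaced d meters apart.
    n = sig.size + ref.size
    X = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n)    # PHAT-weighted correlation
    max_lag = int(fs * d / c)                        # physically possible delays
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    tau = (np.argmax(np.abs(cc)) - max_lag) / fs     # time delay (s)
    return np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))

# Synthetic check: white noise arriving 2 samples later at the second mic
rng = np.random.default_rng(0)
ref = rng.normal(size=1024)
sig = np.roll(ref, 2)
print(f"estimated angle: {gcc_phat_angle(sig, ref):.1f} deg")  # ~16.6 deg

With multiple mic pairs, as in the arrays above, each reporting one or more candidate angles, the direct and reflected paths can then be traced back through the visually acquired 3D map to triangulate the source.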

Picture 2: Example test results. Note in this test our array indicates a source at approximately 30° and another at approximately −60°.

Picture 3: Ray Trace Method. Note, by tracing the path of the estimated angles, both reflected and direct, the approximate source location can be triangulated.

Video explaining theory.

4pSC15 – Reading aloud in a clear speaking style may interfere with sentence recognition memory

Sandie Keerstock – keerstock@utexas.edu
Rajka Smiljanic – rajka@austin.utexas.edu
Department of Linguistics, The University of Texas at Austin
305 E 23rd Street, B5100, Austin, TX 78712

Popular version of paper 4pSC15
Presented Thursday afternoon, May 16, 2019
177th ASA Meeting, Louisville, KY

Can you improve your memory by speaking clearly? If, for example, you are rehearsing for a presentation, what speaking style will better enhance your memory of the material: reading aloud in a clear speaking style, or reciting the words casually, as if speaking with a friend?

When conversing with a non-native listener or someone with a hearing problem, talkers spontaneously switch to clear speech: they slow down, speak louder, use a wider pitch range, and hyper-articulate their words. Compared to more casual speech, clear speech enhances a listener’s ability to understand speech in a noisy environment. Listeners also better recognize previously heard sentences and recall what was said if the information was spoken clearly.

Figure 1. Illustration of the procedure of the recognition memory task.

In this study, we set out to examine whether talkers, too, have better memory for what they said if they pronounced it clearly. In the training phase of the experiment, 60 native and 30 non-native English speakers were instructed to read aloud and memorize 60 sentences containing high-frequency words, such as “The hot sun warmed the ground,” presented one by one on a screen. Each screen directed the subject’s speaking style, alternating between “clear” and “casual” every ten slides. During the test phase, participants were asked to identify as “old” or “new” 120 sentences written on the screen one at a time: 60 they had read aloud in either style and 60 they had not.

Figure 2. Average of d’ (discrimination sensitivity index) for native (n=60) and non-native English speakers (n=30) for sentences produced in clear (light blue) and casual (dark blue) speaking styles. Higher d’ scores denote enhanced accuracy during the recognition memory task. Error bars represent standard error.
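
For readers unfamiliar with d′, it is computed from the hit rate (old sentences correctly called “old”) and the false-alarm rate (new sentences incorrectly called “old”). A minimal sketch with made-up counts, not the study’s data:

from scipy.stats import norm

hits, misses = 45, 15               # "old" sentences judged old / new
false_alarms, correct_rej = 12, 48  # "new" sentences judged old / new

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)  # z(hits) - z(false alarms)
print(f"d' = {d_prime:.2f}")  # 1.52: higher = better old/new discrimination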

Unexpectedly, both native and non-native talkers in this experiment showed enhanced recognition memory for sentences they had read aloud in a casual style. Unlike in perception, where hearing clearly spoken sentences improved listeners’ memory, the findings of the present study indicate a memory cost when talkers themselves produced clear sentences. This asymmetry between the effects of production and perception on memory may nonetheless stem from the same underlying mechanism, the Effortfulness Hypothesis (McCoy et al. 2005). In perception, more cognitive resources are consumed in processing harder-to-understand casual speech, leaving fewer resources available for storing information in memory. Conversely, cognitive resources may be more depleted during the production of hyper-articulated clear sentences, leading to poorer memory encoding. This study suggests that the benefit of clear speech for retaining spoken information in long-term memory may be limited to listeners, not talkers.

4aSP4 – Streaming Video through Biological Tissues using Ultrasonic Communication

Gizem Tabak – tabak2@illinois.edu
Michael Oelze – oelze@illinois.edu
Andrew Singer – acsinger@illinois.edu
University of Illinois at Urbana-Champaign
306 N Wright St
Urbana, IL 61801

Popular version of paper 4aSP4
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

Researchers at the University of Illinois at Urbana-Champaign have developed a fast, wireless communication alternative that also has biomedical implications. Instead of using radio frequency (RF) to transmit signals, the team is using ultrasonic waves to send signals at high enough data rates to transmit video through animal or human tissue.

The team of electrical and computer engineering professors Andrew Singer and Michael Oelze and graduate researcher Gizem Tabak has achieved a transmission rate of 4 megabits per second through animal tissue with 2-mm transmitting devices. This rate is high enough to send high-definition video (3 Mbps) and is 15 times faster than RF waves can currently deliver.

Figure 1 – Experimental setup for streaming at 4Mbps through 2” beef liver

The team is using this approach to communicate with implanted medical devices, like those used to scan tissue in a patient’s gastrointestinal (GI) tract.

Currently, one of two methods is used to image the GI tract. The first is video endoscopy, which involves inserting a long probe with a camera and light down the throat to take real-time video and send it to an attached computer. This method is highly invasive and cannot reach the midsection of the GI tract.

The second method involves a patient swallowing a pill that contains a mini camera that can take images throughout the tract. After a day or so, the pill is retrieved, and the physician can extract the images. This method, however, is entirely offline, meaning there is no real-time interaction with the camera inside the patient.

A third option uses the camera pill approach but sends the images through RF waves, which are absorbed by the surrounding tissue. Due to safety regulations governing electromagnetic radiation, the transmitted signal power is limited, resulting in data rates of only 267 kilobits per second.
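
The rate gap quoted above is easy to verify. A quick back-of-the-envelope check in Python, using only the figures given in this article:

# Figures quoted in the text; the comparison itself is ours
ultrasound_bps = 4_000_000   # achieved through animal tissue
hd_video_bps   = 3_000_000   # typical HD video stream
rf_capsule_bps = 267_000     # RF capsule rate under power limits

print(ultrasound_bps >= hd_video_bps)               # True: supports HD video
print(f"{ultrasound_bps / rf_capsule_bps:.0f}x")    # ~15x the RF rate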

The Illinois team is proposing to use ultrasound, a method that has already proven safe for medical imaging, as a communication method. Having achieved data rates of 4 Mbps with this system through animal tissue, the team is translating the approach to operate in real-time for use in the human body.

Pairing this communication technology with the camera-pill approach would let the device not only send real-time video but also be remotely controlled: for example, it might travel to specific areas and rotate to arbitrary orientations. It may even be possible to take tissue samples for biopsy, essentially replacing endoscopic procedures or surgeries with such mini remote-controlled robotic devices.