2pBA2 – Double, Double, Toil and Trouble: Nitric Oxide or Xenon Bubble – Christy K. Holland

Double, Double, Toil and Trouble: Nitric Oxide or Xenon Bubble

Christy K. Holland – Christy.Holland@uc.edu
Department of Internal Medicine, Division of Cardiovascular Health and Disease and
Department of Biomedical Engineering
University of Cincinnati
Cardiovascular Center 3935
231 Albert Sabin Way
Cincinnati, Ohio  45267-0586
https://www.med.uc.edu/ultrasound
office:  +1 513 558 5675

Himanshu Shekhar – h.shekhar.uc@gmail.com
Department of Electrical Engineering
AB 6/327A
Indian Institute of Technology (IIT) Gandhinagar
Palaj 382355, Gujarat, India

Maxime Lafond – lafondme@ucmail.uc.edu
Department of Internal Medicine, Division of Cardiovascular Health and Disease and
Department of Biomedical Engineering
University of Cincinnati
Cardiovascular Center 3933
231 Albert Sabin Way
Cincinnati, Ohio  45267-0586

Popular version of paper 2pBA2
Presented Tuesday afternoon at 1:20 pm, May 14, 2019
177th ASA Meeting, Louisville, KY

Designer bubbles loaded with special gases are under development at the University of Cincinnati Image-guided Ultrasound Therapeutics Laboratories to treat heart disease and stroke. Xenon is a rare, pricey, heavy noble gas and a potent protector of a brain deprived of oxygen. Nitric oxide is a toxic gas that paradoxically plays an important role in the body, triggering the dilation of blood vessels, regulating the release and binding of oxygen in red blood cells, and even killing virus-infected cells and bacteria.

Microbubbles loaded with xenon or nitric oxide, stabilized against dissolution with a fatty coating, can be exposed to ultrasound for site-specific release of these beneficial gases, as shown in the video (Supplementary Video 1). The microbubbles were stable against dissolution for 30 minutes, which is longer than the circulation time before removal from the body. Curiously, the co-encapsulation of either of these bioactive gases with a heavier perfluorocarbon gas increased the stability of the microbubbles. Bioactive gas-loaded microbubbles act as a highlighting agent on a standard diagnostic ultrasound image (Supplementary Video 2). Triggered release was demonstrated with pulsed ultrasound already in use clinically. The total dose of xenon or nitric oxide was measured after release from the microbubbles. These results constitute the first step toward the development of ultrasound-triggered release of therapeutic gases to help rescue brain tissue during stroke.

Supplementary Video 1: High-speed video of a gas-loaded microbubble exposed to a single Doppler ultrasound pulse. Note the reduction in size during exposure to ultrasound, demonstrating acoustically driven diffusion of gas out of the microbubble.

Supplementary Video 2: Ultrasound image of a rat heart filled with nitric oxide-loaded microbubbles. The chamber of the heart appears bright because of the presence of the microbubbles.

2pAB16 – Biomimetic sonar and the problem of finding objects in foliage – Joseph Sutlive

Biomimetic sonar and the problem of finding objects in foliage

Joseph Sutlive – josephs7@vt.edu
Rolf Müller – rolf.mueller@vt.edu
Virginia Tech
ICTAS II, 1075 Life Science Cir (Mail Code 0917)
Blacksburg, VA 24061-1016 USA

Popular version of paper 2pAB16
Presented Tuesday afternoon, May 14, 2019
177th ASA Meeting, Louisville, KY

The ability of sonars to find targets of interest is often hampered by a cluttered environment. For example, naval sonars have difficulty finding mines that are partially or fully buried among other distracting (clutter) targets. Such situations pose target identification challenges that are much harder than target detection and resolution problems. New ideas for approaching such problems could come from the many bat species that navigate and hunt in dense vegetation and thus must be able to identify targets of interest within clutter. Evolutionary adaptation of the bat biosonar system is likely to have resulted in the “discovery” of features that support making distinctions between clutter and echoes of interest.

There are two main types of sonar: active sonar, in which echoes are triggered by the sonar’s own pulses, and passive sonar, in which the system remains silent and listens to its environment to gain a better understanding of it. The best-established case of identification in clutter is given by certain groups of bats that use the Doppler shifts caused by the wingbeats of flying insect prey to identify the prey in foliage. Other bat species have been shown to use a passive sonar approach based on distinctive prey-generated acoustic signals. We have designed a sonar that mimics the biosonar of the horseshoe bat, a species that uses active sonar and is one of the bats that use Doppler shifts as an identification mechanism.
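To give a sense of the size of this Doppler cue, here is a back-of-the-envelope sketch (an illustration under assumed numbers, not a measurement from this work) of the echo frequency shift produced by a wing surface moving toward the sonar, using the standard two-way Doppler approximation in Python:

def echo_doppler_shift(f_emit_hz, reflector_speed_ms, c=343.0):
    """Two-way Doppler shift (Hz) for a reflector moving toward the sound source."""
    # Valid when the reflector speed is much smaller than the speed of sound c.
    return 2.0 * f_emit_hz * reflector_speed_ms / c

# Example: an ~80 kHz call (typical of horseshoe bats) echoing off a wing surface
# moving toward the bat at an assumed 1 m/s.
print(round(echo_doppler_shift(80e3, 1.0)))  # about 466 Hz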

A sonar that mimics the biosonar of the horseshoe bat.

The sonar scanned a variety of targets hidden in artificial foliage, and the data were analyzed afterwards. Initial analysis has shown that the sonar can be used to discriminate between different objects in foliage. Additional target discrimination tasks were also used: gathering echo data from an object without clutter, then trying to find that object within clutter. Initial analysis indicates that this sonar head could be used for this paradigm, although the results appeared to depend strongly on the direction of the target. Further investigation will refine the models explored here to better understand how we can extract an object from a noisy, cluttered environment.

4pSC10 – It’s Not What You Said, It’s How You Said It: How Prosody Affects Reaction Time – Aleah D. Combs

It’s Not What You Said, It’s How You Said It: How Prosody Affects Reaction Time

Aleah D. Combs – aleah.d.combs@uky.edu
Emma-Kate Calvert – ekca225@g.uky.edu
Dr. Kevin B. McGowan – kbmgowan@uky.edu
University of Kentucky
120 Patterson Drive
Lexington, KY 40506

Popular version of paper 4pSC10
Presented Thursday, May 16, 2019
177th ASA Meeting, Louisville, KY

In order to speak to someone, a great many things must occur in a very small amount of time in your brain.
In no particular order, your articulators (that’s your mouth, tongue, lips, and anything else that changes position when you speak) must be prepared to move quickly and in a coordinated fashion to make a set of required sounds. You will have to listen to the background noise in order to produce speech at an appropriate volume. You must be prepared with the sounds that both parties have agreed mean whatever concepts you’re trying to convey, in an order that makes sense to both parties. You must also prepare for a number of plausible responses, decide what to do if the other person (from here on out, your interlocutor) does not hear or understand your utterance, and plan what you have to say based on the things you think your interlocutor knows. You must decide how to present the information—how do you want your interlocutor to feel? Do you want to claim authority on the subject, or express uncertainty?
None of this is a surprise to anyone with receptive language skills advanced enough to comprehend the above paragraph. Nonetheless, it provides a solid context for explaining one of the key fields of inquiry in psycholinguistics: process ordering. That is, what order do all of these processes happen in? Do they overlap? Do they interact? These questions are often explored in reaction time studies, wherein a participant is presented with a set of stimuli and asked to react to them, usually by pressing a button or clicking a mouse.
In this study, we were interested in the interaction between imperative commands and the tone of voice they were presented in. Specifically, we were interested in whether changing the tone of voice of a command changed the reaction time to that command in a significant way.
Our setup was as follows:
These were the buttons that our participants could choose from.

There were 12 different stimulus types. Each animal (bird, dog, goat, fish) had angry, happy, and neutral versions of the command “press the [animal] button”. These are some of the stimuli for the word goat.

Angry (characterized by lower overall pitch and rate of speech, hyperarticulation):

Happy (characterized by a raise in pitch variation and rate of speech):

Neutral (the control or baseline for pitch variation, rate of speech, and overall pitch):

These sound files were produced by a trained actor, who simulated the emotions using his training.
38 participants later, we had our answer.

Mean reaction times ([response time] – [stimulus time]) by condition:
angry:   925.3958 ms
happy:   902.5510 ms
neutral: 876.3297 ms
Holding neutral as the control, our angry commands produced a statistically significantly slower reaction time (p = 0.0251). This is consistent with the model of Sumner, Kim, King, and McGowan (2014), in which social and semantic information is processed interactively and simultaneously.
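For readers curious how such a comparison is typically carried out, the sketch below runs a paired t-test on per-participant mean reaction times in Python. The numbers it generates are made-up placeholders chosen only to resemble the reported means; this is not the authors' analysis code or data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 38

# Hypothetical per-participant mean reaction times (ms) for two conditions.
neutral = rng.normal(loc=876, scale=60, size=n_participants)
angry = neutral + rng.normal(loc=49, scale=55, size=n_participants)

# Paired t-test: each participant contributes one value per condition.
t_stat, p_value = stats.ttest_rel(angry, neutral)
print(f"mean neutral: {neutral.mean():.1f} ms, mean angry: {angry.mean():.1f} ms")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")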
###

1pBA4 – Dedicated signal processing for lung ultrasound imaging: Can we see what we hear? – Libertario Demi

Libertario Demi – libertario.demi@unitn.it

Department of Information Engineering and Computer Science
University of Trento, Italy

 

Popular version of paper 1pBA4

Presented Monday afternoon, May 13, 2019

177th ASA Meeting, Louisville, KY

Lung diseases have a large impact worldwide. Chronic obstructive pulmonary disease (COPD) and lower respiratory infections are, respectively, the third and fourth leading causes of death in the world and are responsible for six million deaths per year [1]. Pneumonia, an inflammatory condition of the lung, is the leading cause of death in children under five years of age and is responsible for approximately 1 million deaths per year. The economic burden is also significant: considering COPD alone, in the United States of America the sum of indirect and direct healthcare costs is estimated to be on the order of 50 billion dollars [2].

Cost-effective and widely available solutions for the diagnosis and monitoring of lung diseases would be of tremendous help, and this is exactly the role that could be played by ultrasound (US) technologies.

Compared to the current standard, i.e., X-ray-based imaging technologies such as CT scans, US technology is safe, transportable, and cost-effective. Firstly, being an ionizing-radiation-free modality, US is a diagnostic option especially relevant to children, pregnant women, and patients subjected to repeated investigations. Secondly, US devices are easily transported to the patient’s site, including remote and rural areas and developing countries. Thirdly, devices and examinations are significantly cheaper compared to CT or MRI, making US technology accessible to a much broader range of facilities and thus reaching more patients.

However, this large potential is underused today. The examination of the lung is in fact performed with US equipment conceptually unsuited to this task. Standard US scanners and probes have been designed to visualize body parts (the heart, the liver, the mother’s womb, the abdomen) for which the speed of sound can be assumed to be constant. This is clearly not the case for the lung, due to the presence of air. As a consequence, it is impossible to correctly visualize the anatomy of the lung beyond its surface and, in most conditions, the only usable products of standard US equipment are images that display “signs”.

These signs are called imaging artifacts, i.e., objects that are present in the image but not physically present in the lung (see the examples in the figure). For most of these artifacts we still do not know exactly why they appear in the images. They carry diagnostic information and are currently used in the clinic, but they can obviously only support qualitative and subjective analysis.

Example of standard ultrasound images with different artifacts: A-line artifacts (left) are generally associated with a healthy lung, while B-lines (right) correlate with different pathological conditions of the lung. The arrows at the top indicate the location of the lung surface in the image, visualized as a bright horizontal line. Beyond this depth, the ability of these images to provide an anatomical description of the lung is lost.

Moreover, their appearance in the image depends largely on the user and on the equipment. Clearly, there is much more that we can do. Can we correctly visualize (see) what we receive (hear) from the lung after insonification? Can we re-conceive US technology in order to adapt it to the specific properties of the lung?

Can we develop an ultrasound-based method that can support the clinician, in real time, in the diagnosis of the many different pathologies affecting the lung? In this talk, to address these questions, recently developed imaging modalities and signal-processing techniques dedicated to the analysis of the lung’s response to ultrasound will be introduced and discussed. In particular, in-vitro and clinical data will be presented showing how the study of ultrasound spectral features [3] could lead to a quantitative ultrasound method dedicated to the lung.
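As a rough illustration of what “spectral features” can mean in this context, the sketch below computes the peak frequency and -6 dB bandwidth of a single radio-frequency (RF) ultrasound line in Python. The synthetic echo, sampling rate, and choice of features are assumptions made for illustration; they are not the specific method of reference [3].

import numpy as np

def spectral_features(rf_line, fs):
    """Return peak frequency (Hz) and -6 dB bandwidth (Hz) of one RF line."""
    spectrum = np.abs(np.fft.rfft(rf_line * np.hanning(rf_line.size)))
    freqs = np.fft.rfftfreq(rf_line.size, d=1.0 / fs)
    peak_idx = np.argmax(spectrum)
    half_amplitude = spectrum[peak_idx] / 2.0        # -6 dB relative to the peak
    above = freqs[spectrum >= half_amplitude]
    return freqs[peak_idx], above.max() - above.min()

# Example with a synthetic 3 MHz echo sampled at 40 MHz.
fs = 40e6
t = np.arange(0, 4e-6, 1.0 / fs)
rf = np.sin(2 * np.pi * 3e6 * t) * np.exp(-((t - 2e-6) ** 2) / (0.5e-6) ** 2)
peak_f, bw = spectral_features(rf, fs)
print(f"peak frequency: {peak_f/1e6:.2f} MHz, -6 dB bandwidth: {bw/1e6:.2f} MHz")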

 

[1] Global Health Estimates 2016: Deaths by Cause, Age, Sex, by Country and by Region, 2000-2016. Geneva: World Health Organization; 2018.

[2] A. J. Guarascio et al., “The clinical and economic burden of chronic obstructive pulmonary disease in the USA,” Clinicoecon Outcomes Res, 2013.

[3] L. Demi et al., “Determination of a potential quantitative measure of the state of the lung using lung ultrasound spectroscopy,” Scientific Reports, 2017.

2aSPa8 and 4aSP6 – Safe and Sound – Using acoustics to improve the safety of autonomous vehicles – Eoin A King

Safe and Sound – Using acoustics to improve the safety of autonomous vehicles

Eoin A King – eoking@hartford.edu
Akin Tatoglu
Digno Iglesias
Anthony Matriss
Ethan Wagner

Department of Mechanical Engineering
University of Hartford
200 Bloomfield Avenue
West Hartford, CT 06117

Popular version of papers 2aSPa8 and 4aSP6
Presented Tuesday and Thursday morning, May 14 & 16, 2019
177th ASA Meeting, Louisville, KY

Introduction

In cities across the world every day, people use and process acoustic alerts to interact safely in and amongst traffic: drivers listen for emergency sirens from police cars or fire engines, or for the sounding of a horn warning of an impending collision, while pedestrians listen for cars when crossing a road. A city is full of sounds with meaning, and these sounds make the city a safer place.

Future cities will see the large-scale deployment of (semi-)autonomous vehicles (AVs). AV technology is quickly becoming a reality; however, the manner in which AVs and other vehicles will coexist and communicate with one another is still unclear, especially during the prolonged period in which mixed vehicle types will share the road. In particular, the manner in which AVs can use acoustic cues to supplement their decision-making is an area that needs development.

The research presented here aims to identify the meaning behind specific safety-related sounds in a city. We are developing methodologies to recognize and locate acoustic alerts in cities and to use this information to inform the decision-making of all road users, with particular emphasis on autonomous vehicles. Initially, we aim to define a new set of audio-visual detection and localization tools to identify the location of a rapidly approaching emergency vehicle. In short, we are trying to develop the ‘ears’ to complement the ‘eyes’ already present on autonomous vehicles.

Test Set-Up
For our initial tests, we developed a low-cost array consisting of two linear arrays of 4 MEMS microphones. The array was used in conjunction with a mobile robot equipped with visual sensors, as shown in Fig. 1. Our array acquired acoustic signals that were analyzed to i) identify the presence of an emergency siren and then ii) determine the location of the sound source (which was occasionally behind an obstacle). Initially, our tests were conducted in the controlled setting of an anechoic chamber.

Picture 1: Test Robot with Acoustic Array

Step 1: Using convolutional neural networks for the detection of an emergency siren

Using advanced machine learning techniques, it has become possible to ‘teach’ a machine (or a vehicle) to recognize certain sounds. We used a deep convolutional neural network (CNN) and trained it to recognize emergency sirens in real time, achieving 99.5% accuracy on test audio signals.
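As an illustration of the general approach (not the authors’ network, whose architecture and training data are not described here), the sketch below defines a small convolutional classifier in Python/PyTorch that could label spectrogram patches as ‘siren’ or ‘no siren’; the input size and layer sizes are assumptions.

import torch
import torch.nn as nn

class SirenCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 64 mel bands x 64 time frames per input patch.
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # two classes: siren / no siren

    def forward(self, x):                 # x: (batch, 1, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SirenCNN()
dummy_batch = torch.randn(8, 1, 64, 64)   # stand-in for log-mel spectrogram patches
logits = model(dummy_batch)
print(logits.shape)                       # torch.Size([8, 2])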

Step 2: Identifying the location of the source of the emergency siren

Once an emergency sound has been detected, it must be rapidly localized. This is a complex task in a city environment, due to moving sources, reflections from buildings, other noise sources, etc. However, by combining acoustic results with information acquired from the visual sensors already present on an autonomous vehicle, it will be possible to identify the location of a sound source. In our research, we modified an existing direction-of-arrival algorithm to report a number of sound source directions, arising from multiple reflections in the environment (i.e. every reflection is recognized as an individual source). These results can be combined with the 3D map of the area acquired from the robot’s visual sensors. A reverse ray tracing approach can then be used to triangulate the likely position of the source.
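The sketch below illustrates the basic principle behind direction-of-arrival estimation with a single microphone pair: a time difference of arrival, found by cross-correlation, is converted into an angle. It is a simplified free-field illustration in Python, not the modified algorithm used in this work, and the signals, microphone spacing, and sampling rate are assumed values.

import numpy as np

def doa_from_pair(sig_left, sig_right, fs, mic_spacing, c=343.0):
    """Estimate the direction of arrival (degrees from broadside) for one mic pair."""
    # Cross-correlate to find how many samples the right channel lags the left one.
    corr = np.correlate(sig_right, sig_left, mode="full")
    lag = np.argmax(corr) - (len(sig_left) - 1)       # positive: right channel delayed
    tdoa = lag / fs                                   # time difference of arrival (s)
    # Clamp to the physically possible range before taking the arcsine.
    sin_theta = np.clip(c * tdoa / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))           # positive angles toward the left mic

# Example: a 1 kHz tone arriving from 30 degrees, mics 10 cm apart, fs = 48 kHz.
fs, spacing = 48000, 0.10
delay = spacing * np.sin(np.radians(30)) / 343.0      # the right mic hears the tone later
t = np.arange(0, 0.05, 1.0 / fs)
left = np.sin(2 * np.pi * 1000 * t)
right = np.sin(2 * np.pi * 1000 * (t - delay))
print(f"estimated DOA: {doa_from_pair(left, right, fs, spacing):.1f} degrees")  # ~30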

Picture 2: Example test results. Note in this test our array indicates a source at approximately 30° and another at approximately -60°.

Picture 3: Ray Trace Method. Note that by tracing the paths of the estimated angles, both reflected and direct, the approximate source location can be triangulated.

Video explaining theory.

5aAB4 – How far away can an insect as tiny as a female mosquito hear the males in a mating swarm? – Lionel Feugère

How far away can an insect as tiny as a female mosquito hear the males in a mating swarm?

Lionel Feugère1,2 – l.feugere@gre.ac.uk
Gabriella Gibson2 – g.gibson@gre.ac.uk
Olivier Roux1 – olivier.roux@ird.fr
1MIVEGEC, IRD, CNRS,
Université de Montpellier,
Montpellier, France.
2Natural Resources Institute,
University of Greenwich,
Chatham, Kent ME4 4TB, UK.

Popular version of paper 5aAB4
Presented Friday morning during the session “Understanding Animal Song” (8:00 AM – 9:30 AM), May 17, 2019

177th ASA Meeting, Louisville, KY

Why do mosquitoes make that annoying sound just when we are ready for a peaceful sleep? Why do they risk their lives by ‘singing’ so loudly? Scientists recently discovered that mosquito wing-flapping creates tones that are very important for mating; the flight tones help both males and females locate a mate in the dark.

Mosquitoes hear with their two hairy antennae, which vibrate when stimulated by the sound wave created by another mosquito flying nearby. Their extremely sensitive hearing organ at the base of the antennae transforms the vibrations into an electrical signal to the brain, similar to how a joystick responds to our hand movements. Mosquitoes have the most sensitive hearing of all insects; however, this hearing mechanism is optimal only at short distances. Consequently, scientists have assumed that mosquitoes use sound only for very short-range communication.

Theoretically, however, a mosquito can hear a sound at any distance, provided it is loud enough. In practice, a single mosquito will struggle to hear another mosquito more than a few centimeters away because the flight tone is not loud enough. However, in the field mosquitoes are exposed to much louder flight tones. For example, males of the malaria mosquito, Anopheles coluzzii, can gather by the thousands in station-keeping flight (‘mating swarms’) for at least 20 minutes at dusk, waiting for females to arrive. We wondered if a female mosquito could hear the sound of a male swarm from far away if the swarm is large enough.

To investigate this hypothesis, we started a laboratory population of An. gambiae from field-caught mosquitoes and observed their behaviour under field-like environmental conditions in a sound-proof room. Phase 1: we reproduced the visual cues and dusk lighting conditions that trigger swarming behaviour, released males in groups of tens to hundreds, and recorded their flight sounds (listen to SOUND 1).



Phase 2: we released one female at a time and played back the recordings of male swarms of different sizes over a range of distances to determine how far away a female can detect males. If a female hears a flying male or males, she alters her own flight tone to let the male(s) know she is there.

Our results show that a female cannot hear a small swarm until she comes within tens of centimeters of the swarm. However, for larger, louder swarms, females consistently responded to male flight tones. The larger the number of males in the swarm, the farther away the females responded; females detected a swarm of ~1,500 males at a distance of ~0.75 m, and they detected a swarm of ~6,000 males at a distance of ~1.5 m.
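A rough back-of-the-envelope check (ours, not the authors’ analysis) shows why these numbers hang together: if each male’s flight tone adds incoherently and the sound spreads spherically in a free field, the detection distance should grow roughly as the square root of the number of males, so four times as many males buys about twice the listening range.

import math

# Assumptions: free-field spherical spreading (level falls 6 dB per doubling of
# distance) and incoherent summation of N similar flight tones (+10*log10(N) dB).
# Under these assumptions, an equal received level implies distance ~ sqrt(N).
def detection_distance(n_males, n_ref, d_ref):
    """Distance giving the same received level as n_ref males heard at d_ref."""
    return d_ref * math.sqrt(n_males / n_ref)

# Anchoring to the reported ~0.75 m detection distance for a ~1,500-male swarm:
print(f"{detection_distance(6000, 1500, 0.75):.2f} m")  # ~1.50 m, close to the observed ~1.5 m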