2pBA2 – Double, Double, Toil and Trouble: Nitric Oxide or Xenon Bubble – Christy K. Holland

Christy K. Holland – Christy.Holland@uc.edu
Department of Internal Medicine, Division of Cardiovascular Health and Disease and
Department of Biomedical Engineering
University of Cincinnati
Cardiovascular Center 3935
231 Albert Sabin Way
Cincinnati, Ohio  45267-0586
office:  +1 513 558 5675

Himanshu Shekhar – h.shekhar.uc@gmail.com
Department of Electrical Engineering
AB 6/327A
Indian Institute of Technology (IIT) Gandhinagar
Palaj 382355, Gujarat, India

Maxime Lafond – lafondme@ucmail.uc.edu
Department of Internal Medicine, Division of Cardiovascular Health and Disease and
Department of Biomedical Engineering
University of Cincinnati
Cardiovascular Center 3933
231 Albert Sabin Way
Cincinnati, Ohio  45267-0586

Popular version of paper 2pBA2
Presented Tuesday afternoon at 1:20 pm, May 14, 2019
177th ASA Meeting, Louisville, KY

Designer bubbles loaded with special gases are under development at the University of Cincinnati Image-guided Ultrasound Therapeutics Laboratories to treat heart disease and stroke. Xenon is a rare, pricey, heavy, noble gas, and a potent protector of a brain deprived of oxygen. Nitric oxide is a toxic gas that paradoxically plays an important role in the body, triggering the dilation of blood vessels, regulating the release and binding of oxygen in red blood cells, and even killing virus-infected cells and bacteria.

Microbubbles loaded with xenon or nitric oxide and stabilized against dissolution with a fatty coating can be exposed to ultrasound for site-specific release of these beneficial gases, as shown in the video (Supplementary Video 1). The microbubbles were stable against dissolution for 30 minutes, which is longer than the circulation time before removal from the body. Curiously, co-encapsulating either of these bioactive gases with a heavier perfluorocarbon gas increased the stability of the microbubbles. Bioactive gas-loaded microbubbles act as a highlighting agent on a standard diagnostic ultrasound image (Supplementary Video 2). Triggered release was demonstrated with pulsed ultrasound already in use clinically. The total dose of xenon or nitric oxide was measured after release from the microbubbles. These results constitute the first step toward the development of ultrasound-triggered release of therapeutic gases to help rescue brain tissue during stroke.

Supplementary Video 1: High-speed video of a gas-loaded microbubble exposed to a single Doppler ultrasound pulse. Note the reduction in size during exposure to ultrasound, demonstrating acoustically driven diffusion of gas out of the microbubble.

Supplementary Video 2: Ultrasound image of a rat heart filled with nitric oxide-loaded microbubbles. The chamber of the heart appears bright because of the presence of the microbubbles.

1pMU4: Reproducing tonguing strategies in single-reed woodwinds using an artificial blowing machine – Montserrat Pàmies-Vilà

“Investigating clarinet articulation with an artificial blowing and tonguing machine”

Montserrat Pàmies-Vilà – pamies-vila@mdw.ac.at
Alex Hofmann – hofmann-alex@mdw.ac.at
Vasileios Chatziioannou – chatziioannou@mdw.ac.at
University of Music and Performing Arts Vienna
Anton-von-Webern-Platz 1
1030 Vienna, Austria

Popular version of paper 1pMU4: Reproducing tonguing strategies in single-reed woodwinds using an artificial blowing machine
Presented Monday morning, May 13, 2019
177th ASA Meeting, Louisville, KY

Clarinet and saxophone players create sounds by blowing into the instrument through a mouthpiece with an attached reed, and they control the sound production by adjusting the air pressure in their mouth and the force the lips apply to the reed. The player’s tongue serves to achieve different articulation styles, for example legato (or slurred), portato and staccato. The tongue touches the reed to stop its vibration and regulates the separation between notes: in legato the notes are played without separation, in portato the tongue briefly touches the reed, and in staccato there is a longer silence between notes. A group of 11 clarinet players from the University of Music and Performing Arts Vienna (Vienna, Austria) performed these tonguing techniques on a sensor-equipped clarinet. Figure 1 shows an example of the recorded signals. The analysis revealed that the portato technique is performed similarly across players, whereas staccato requires coordination of tonguing and blowing and is more player-dependent.

Figure 1: Articulation techniques in the clarinet, played by a professional player. Blowing pressure (blue), mouthpiece sound pressure (green) and reed displacement (orange) in legato, portato and staccato articulation. Bottom right: pressure sensors placed on the clarinet mouthpiece and strain gauge on a reed.

The aim of the current study is to mimic these tonguing techniques with an artificial setup in which the vibration of the reed and the motion of the tongue can be observed. The setup consists of a transparent box (an artificial mouth) that allows tracking of the reed motion, the position of the lip and the artificial tongue. This artificial blowing-and-tonguing machine is shown in Figure 2. The built-in tonguing system is driven by a shaker to ensure repeatability, and it enters the artificial mouth through a circular joint that allows several tongue movements to be tested. The parameters obtained from the measurements with players were used to set the air pressure in the artificial mouth and the behavior of the tonguing system.

Figure 2: The clarinet mouthpiece is placed through an airtight hole into a Plexiglas box. This blowing machine allows monitoring the air pressure in the box, the artificial lip and the motion of the artificial tongue, while recording the mouth and mouthpiece pressure and the reed displacement.

The signals recorded with the artificial setup were compared to the measurements obtained with clarinet players. We provide some sound examples comparing one player (first) with the blowing machine (second). A statistical analysis showed that the machine is capable of reproducing the portato articulation, achieving similar attack and release transients (the sound profile at the beginning and at the end of every note). However, in staccato articulation the blowing machine produces release transients that are too fast.

Comparison between a real player and the blowing machine.
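For readers curious how an attack transient can be quantified, here is a minimal sketch, not the authors' actual analysis pipeline: it measures the 10%–90% rise time of the amplitude envelope of a synthetic note. The sampling rate, smoothing window and synthetic signal are illustrative assumptions.

```python
import numpy as np

def rise_time(signal, fs, lo=0.1, hi=0.9):
    """10%-90% rise time (s) of the amplitude envelope, via a moving-RMS envelope."""
    win = int(0.005 * fs)  # 5 ms smoothing window (assumed)
    env = np.sqrt(np.convolve(signal ** 2, np.ones(win) / win, mode="same"))
    peak = env.max()
    i_lo = np.argmax(env >= lo * peak)  # first sample crossing 10% of peak
    i_hi = np.argmax(env >= hi * peak)  # first sample crossing 90% of peak
    return (i_hi - i_lo) / fs

# Synthetic note: 440 Hz tone with an exponential attack (time constant 10 ms)
fs = 44100
t = np.arange(0, 0.5, 1 / fs)
note = (1 - np.exp(-t / 0.010)) * np.sin(2 * np.pi * 440 * t)
print(f"attack rise time = {rise_time(note, fs) * 1000:.0f} ms")
```

The same measure applied to the machine's and the players' recordings would make "too fast release transients" a number rather than a judgment.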

This artificial blowing-and-tonguing setup makes it possible to record the essential physical variables involved in sound production and helps improve our understanding of the processes taking place inside the clarinetist’s mouth during playing.

2pAB16 – Biomimetic sonar and the problem of finding objects in foliage – Joseph Sutlive

Biomimetic sonar and the problem of finding objects in foliage

Joseph Sutlive – josephs7@vt.edu
Rolf Müller – rolf.mueller@vt.edu
Virginia Tech
ICTAS II, 1075 Life Science Cir (Mail Code 0917)
Blacksburg, VA 24061-1016 USA

Popular version of paper 2pAB16
Presented Monday afternoon, May 14, 2019
177th ASA Meeting, Louisville, KY

The ability of sonar to find targets of interest is often hampered by a cluttered environment. For example, naval sonars have difficulty finding mines partially or fully buried among other distracting (clutter) targets. Such situations pose target identification challenges that are much harder than target detection and resolution problems. New ideas for approaching such problems could come from the many bat species that navigate and hunt in dense vegetation and thus must be able to identify targets of interest within clutter. Evolutionary adaptation of the bat biosonar system is likely to have resulted in the “discovery” of features that support distinguishing echoes of interest from clutter.

There are two main types of sonar: active sonar, in which echoes are triggered by the sonar’s own pulses, and passive sonar, in which the system remains silent and listens to its environment to gain a better understanding of it. The best-established case is given by certain groups of bats that use the Doppler shifts caused by the wingbeats of flying insect prey to identify the prey in foliage. Other bat species have been shown to use a passive sonar approach based on unique prey-generated acoustic signals. We have designed a sonar that mimics the biosonar of the horseshoe bat, which uses active sonar and is one of the bats that exploit Doppler shifts as an identification mechanism.

A sonar that mimics the biosonar of the horseshoe bat.
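As a rough illustration of why wingbeat Doppler shifts are detectable at all, the standard two-way Doppler formula for an echo from a moving reflector can be sketched as below. The 80 kHz call frequency and 0.5 m/s wing velocity are illustrative assumptions, not values from the paper.

```python
# Two-way (echo) Doppler shift for a reflector moving at velocity v:
# for v << c, f_echo = f0 * (1 + 2*v/c), so the shift is df = 2*v*f0/c.
C_AIR = 343.0  # speed of sound in air, m/s

def echo_doppler_shift(f0_hz, v_ms, c=C_AIR):
    """Approximate Doppler shift (Hz) of an echo from a target moving at v m/s."""
    return 2.0 * v_ms * f0_hz / c

# Illustrative values: an 80 kHz call and a wing surface moving at 0.5 m/s
shift = echo_doppler_shift(80_000, 0.5)
print(f"{shift:.0f} Hz")  # a few hundred Hz
```

A shift of this size rides on top of the echo from the stationary foliage, which is what makes a fluttering target stand out from clutter.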

The sonar scanned a variety of targets hidden in artificial foliage, and the data were analyzed afterwards. Initial analysis has shown that the sonar can be used to discriminate between different objects in foliage. Additional target discrimination tasks were also used: gathering echo data from an object without clutter, then trying to find that object within clutter. Initial analysis indicates that this sonar head could be used for this paradigm, though the results seemed strongly dependent on the direction of the target. Further investigation will refine the models explored here to better understand how an object can be extracted from a noisy, cluttered environment.

1pBA4 – Dedicated signal processing for lung ultrasound imaging: Can we see what we hear? – Libertario Demi

Libertario Demi – libertario.demi@unitn.it

Department of Information Engineering and Computer Science
University of Trento, Italy


Popular version of paper 1pBA4

Presented Monday morning, May 13, 2019

177th ASA Meeting, Louisville, KY

Lung diseases have a large impact worldwide. Chronic Obstructive Pulmonary Disease (COPD) and lower respiratory infections are respectively the third and fourth leading causes of death in the world, responsible for six million deaths per year [1]. Pneumonia, an inflammatory condition of the lung, is the leading cause of death in children under five years of age, responsible for approximately 1 million deaths per year. The economic burden is also significant: considering COPD alone, in the United States of America the sum of indirect and direct healthcare costs is estimated to be on the order of 50 billion dollars [2].

Cost effective and largely available solutions for the diagnosis and monitoring of lung diseases would be of tremendous help, and this is exactly the role that could be played by ultrasound (US) technologies.

Compared to the current standard, i.e., X-ray based imaging technologies such as the CT scan, US technology is safe, transportable and cost-effective. Firstly, being free of ionizing radiation, US is a diagnostic option especially relevant to children, pregnant women and patients subjected to repeated investigations. Secondly, US devices are easily transported to the patient’s site, including remote and rural areas and developing countries. Thirdly, devices and examinations are significantly cheaper compared to CT or MRI, making US technology accessible to a much broader range of facilities and thus reaching more patients.

However, this large potential is underused today. The examination of the lung is in fact performed with US equipment conceptually unsuited to the task. Standard US scanners and probes have been designed to visualize body parts (the heart, liver, mother’s womb, the abdomen) for which the speed of sound can be assumed to be constant. This is clearly not the case for the lung, due to the presence of air. As a consequence, it is impossible to correctly visualize the anatomy of the lung beyond its surface and, in most conditions, the only usable products of standard US equipment are images that display “signs”.

These signs are called imaging artifacts, i.e., objects that are present in the image but not physically present in the lung (see the example in the Figures). These artifacts, for most of which we still do not know exactly why they appear in the images, carry diagnostic information and are currently used in the clinic, but they can obviously only support qualitative and subjective analysis.

Example of standard ultrasound images with different artifacts: A-line artifacts (left) are generally associated with a healthy lung, while B-lines (right) correlate with different pathological conditions of the lung. The arrows on top indicate the location of the lung surface in the image, visualized as a bright horizontal line. Beyond this depth, the capability of these images to provide an anatomical description of the lung is lost.

Moreover, the appearance of these artifacts in the image depends largely on the user and on the equipment. Clearly, there is much more that we can do. Can we correctly (see) visualize what we (hear) receive from the lung after insonification? Can we re-conceive US technology to adapt it to the specific properties of the lung?

Can we develop an ultrasound-based method that can support the clinician, in real time, in the diagnosis of the many different pathologies affecting the lung? In this talk, in trying to answer these questions, recently developed imaging modalities and signal processing techniques dedicated to analyzing the lung’s response to ultrasound will be introduced and discussed. In particular, in-vitro and clinical data will be presented showing how the study of ultrasound spectral features [3] could lead to a quantitative ultrasound method dedicated to the lung.
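As an illustration of the kind of spectral analysis referred to above, the following sketch extracts the power-spectrum peak of a simulated radiofrequency (RF) echo. The simulated signal and all parameters are assumptions for demonstration only, not the method of [3].

```python
import numpy as np

fs = 50e6  # 50 MHz sampling of raw ultrasound RF data (assumed)
t = np.arange(0, 20e-6, 1 / fs)

# Simulated RF echo: a Gaussian-windowed burst whose centre frequency
# stands in for a characteristic spectral feature of the lung response.
f_c = 3e6  # 3 MHz centre frequency (illustrative)
rf = np.exp(-((t - 10e-6) ** 2) / (2 * (2e-6) ** 2)) * np.sin(2 * np.pi * f_c * t)

# Spectral feature: frequency of the power-spectrum peak
spectrum = np.abs(np.fft.rfft(rf)) ** 2
freqs = np.fft.rfftfreq(len(rf), 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(f"spectral peak at {peak_hz / 1e6:.2f} MHz")
```

Turning such features into numbers, rather than reading artifact patterns by eye, is what a quantitative lung ultrasound method would require.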


[1] Global Health Estimates 2016: Deaths by Cause, Age, Sex, by Country and by Region, 2000-2016. Geneva, World Health Organization; 2018.

[2] The clinical and economic burden of chronic obstructive pulmonary disease in the USA. A.J. Guarascio et al. Clinicoecon Outcomes Res, 2013.

[3] Determination of a potential quantitative measure of the state of the lung using lung ultrasound spectroscopy. L. Demi et al. Scientific Reports, 2017.




2aSPa8 and 4aSP6 – Safe and Sound – Using acoustics to improve the safety of autonomous vehicles – Eoin A King

Safe and Sound – Using acoustics to improve the safety of autonomous vehicles

Eoin A King – eoking@hartford.edu
Akin Tatoglu
Digno Iglesias
Anthony Matriss
Ethan Wagner

Department of Mechanical Engineering
University of Hartford
200 Bloomfield Avenue
West Hartford, CT 06117

Popular version of papers 2aSPa8 and 4aSP6
Presented Tuesday and Thursday morning, May 14 & 16, 2019
177th ASA Meeting, Louisville, KY


In cities across the world every day, people use and process acoustic alerts to interact safely in and amongst traffic: drivers listen for emergency sirens from police cars or fire engines, or the sounding of a horn warning of an impending collision, while pedestrians listen for cars when crossing a road. A city is full of sounds with meaning, and these sounds make the city a safer place.

Future cities will see the large-scale deployment of (semi-)autonomous vehicles (AVs). AV technology is quickly becoming a reality; however, the manner in which AVs and other vehicles will coexist and communicate with one another is still unclear, especially during the prolonged period in which mixed vehicles will share the road. In particular, the manner in which autonomous vehicles can use acoustic cues to supplement their decision-making process is an area that needs development.

The research presented here aims to identify the meaning behind specific sounds in a city related to safety. We are developing methodologies to recognize and locate acoustic alerts in cities and use this information to inform the decision-making process of all road users, with particular emphasis on autonomous vehicles. Initially, we aim to define a new set of audio-visual detection and localization tools to identify the location of a rapidly approaching emergency vehicle. In short, we are trying to develop the ‘ears’ to complement the ‘eyes’ already present on autonomous vehicles.

Test Set-Up
For our initial tests we developed a low-cost array consisting of two linear arrays of four MEMS microphones. The array was used in conjunction with a mobile robot equipped with visual sensors, as shown in Picture 1. The array acquired acoustic signals that were analyzed to i) identify the presence of an emergency siren, and then ii) determine the location of the sound source (which was occasionally behind an obstacle). Initially, our tests were conducted in the controlled setting of an anechoic chamber.

Picture 1: Test Robot with Acoustic Array

Step 1: Using convolutional neural networks for the detection of an emergency siren

Using advanced machine learning techniques, it has become possible to ‘teach’ a machine (or a vehicle) to recognize certain sounds. We used a deep layer Convolutional Neural Network (CNN) and trained it to recognize emergency sirens in real time, with 99.5% accuracy in test audio signals.
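The trained CNN itself is beyond the scope of this summary, but the detection step can be illustrated with a deliberately simplified stand-in classifier: sirens are strongly tonal, so a spectral-flatness threshold can separate a synthetic siren sweep from broadband noise. The feature, threshold and synthetic signals below are illustrative assumptions, far cruder than the authors' CNN.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000

def spectral_flatness(x):
    """Flatness of the power spectrum: near 1 for noise, near 0 for tonal sounds."""
    p = np.abs(np.fft.rfft(x)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(p))) / np.mean(p)

def looks_like_siren(x, threshold=0.1):
    """Crude stand-in for the CNN: sirens are tonal, traffic noise is broadband."""
    return spectral_flatness(x) < threshold

t = np.arange(0, 1.0, 1 / fs)
# Synthetic "wail" siren: a tone sweeping 600 Hz -> 1200 Hz and back
f_inst = 900 + 300 * np.sin(2 * np.pi * 0.5 * t)
siren = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
noise = rng.standard_normal(len(t))  # stand-in for broadband traffic noise

print(looks_like_siren(siren), looks_like_siren(noise))  # True False
```

A CNN replaces this hand-picked feature with features learned from labelled audio, which is what makes real-time recognition in uncontrolled street noise feasible.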

Step 2: Identifying the location of the source of the emergency siren

Once an emergency sound has been detected, it must be rapidly localized. This is a complex task in a city environment, due to moving sources, reflections from buildings, other noise sources, etc. However, by combining acoustic results with information acquired from the visual sensors already present on an autonomous vehicle, it is possible to identify the location of a sound source. In our research, we modified an existing direction-of-arrival algorithm to report a number of sound source directions arising from multiple reflections in the environment (i.e., every reflection is recognized as an individual source). These results can be combined with the 3D map of the area acquired from the robot’s visual sensors. A reverse ray-tracing approach can then be used to triangulate the likely position of the source.

Picture 2: Example test results. Note in this test our array indicates a source at approximately 30° and another at approximately −60°.

Picture 3: Ray Trace Method. Note, by tracing the path of the estimated angles, both reflected and direct, the approximate source location can be triangulated.
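The core idea behind reporting a direction per arrival can be sketched for a single microphone pair: the cross-correlation peak gives the time difference of arrival, which maps to an angle. The microphone spacing, sampling rate and simulated source below are assumptions for illustration, not the authors' modified algorithm.

```python
import numpy as np

C = 343.0   # speed of sound, m/s
FS = 48000  # sample rate, Hz
D = 0.1     # microphone spacing, m (assumed)

def doa_from_pair(x_left, x_right, fs=FS, d=D, c=C):
    """Estimate arrival angle (degrees) from the cross-correlation peak of a
    microphone pair; 0 deg = broadside, positive = towards the right mic."""
    corr = np.correlate(x_left, x_right, mode="full")
    lag = np.argmax(corr) - (len(x_right) - 1)  # samples; positive => right mic leads
    tdoa = lag / fs
    sin_theta = np.clip(c * tdoa / d, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Simulate a source at 30 degrees: the right microphone hears the signal first
rng = np.random.default_rng(1)
sig = rng.standard_normal(4800)
true_lag = int(round(D * np.sin(np.radians(30)) / C * FS))  # delay in samples
x_right = sig
x_left = np.roll(sig, true_lag)  # left mic receives a delayed copy
print(round(doa_from_pair(x_left, x_right), 1))
```

With two such linear arrays, each strong correlation peak yields one reported direction, and the reverse ray tracing described above then combines those directions with the 3D map to triangulate the source.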

Video explaining theory.

5aAB4 – How far away can an insect as tiny as a female mosquito hear the males in a mating swarm? – Lionel Feugère

Lionel Feugère1,2 – l.feugere@gre.ac.uk
Gabriella Gibson2 – g.gibson@gre.ac.uk
Olivier Roux1 – olivier.roux@ird.fr
1Université de Montpellier,
Montpellier, France.
2Natural Resources Institute,
University of Greenwich,
Chatham, Kent ME4 4TB, UK.

Popular version of paper 5aAB4
Presented Friday morning during the session “Understanding Animal Song” (8:00 AM – 9:30 AM), May 17, 2019
177th ASA Meeting, Louisville, KY

Why do mosquitoes make that annoying sound just when we are ready for a peaceful sleep? Why do they risk their lives by ‘singing’ so loudly? Scientists recently discovered that mosquito wing-flapping creates tones that are very important for mating; the flight tones help both males and females locate a mate in the dark.

Mosquitoes hear with their two hairy antennae, which vibrate when stimulated by the sound wave created by another mosquito flying nearby. An extremely sensitive hearing organ at the base of the antennae transforms the vibrations into an electrical signal to the brain, similar to how a joystick responds to our hand movements. Mosquitoes have the most sensitive hearing of all insects; however, this hearing mechanism is optimal only at short distances. Consequently, scientists have assumed that mosquitoes use sound only for very short-range communication.

Theoretically, however, a mosquito can hear a sound at any distance, provided it is loud enough. In practice, a single mosquito will struggle to hear another mosquito more than a few centimeters away because the flight tone is not loud enough. However, in the field mosquitoes are exposed to much louder flight tones. For example, males of the malaria mosquito, Anopheles coluzzii, can gather by the thousands in station-keeping flight (‘mating swarms’) for at least 20 minutes at dusk, waiting for females to arrive. We wondered if a female mosquito could hear the sound of a male swarm from far away if the swarm is large enough.

To investigate this hypothesis, we started a laboratory population of An. gambiae from field-caught mosquitoes and observed their behaviour under field-like environmental conditions in a sound-proof room. Phase 1: we reproduced the visual cues and dusk lighting conditions that trigger swarming behaviour, released males in groups of tens to hundreds, and recorded their flight sounds (listen to SOUND 1).

Phase 2: we released one female at a time and played back the recordings of male swarms of different sizes over a range of distances to determine how far away a female can detect males. If a female hears a flying male or males, she alters her own flight tone to let the male(s) know she is there.

Our results show that a female cannot hear a small swarm until she comes within tens of centimeters of the swarm. However, for larger, louder swarms, females consistently responded to male flight tones. The larger the number of males in the swarm, the further away the females responded: females detected a swarm of ~1,500 males at a distance of ~0.75 m, and a swarm of ~6,000 males at a distance of ~1.5 m.
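These distances are consistent with a simple back-of-envelope model (our simplification, not the authors' analysis): if N males radiate sound incoherently, the received intensity scales as N/r², so a fixed hearing threshold implies that detection distance grows as the square root of N.

```python
import math

# Incoherent summation: N sources add power proportionally to N, while sound
# pressure falls as 1/r, so a fixed hearing threshold gives r ~ sqrt(N).
def detection_distance(n_males, ref_n, ref_dist_m):
    """Detection distance predicted from a reference swarm via sqrt-N scaling."""
    return ref_dist_m * math.sqrt(n_males / ref_n)

# Calibrate on the ~1,500-male swarm heard at ~0.75 m, predict the larger swarm
pred = detection_distance(6000, ref_n=1500, ref_dist_m=0.75)
print(f"{pred:.2f} m")  # -> 1.50 m, matching the observed ~1.5 m
```

Quadrupling the swarm from 1,500 to 6,000 males doubles the predicted detection distance, exactly the ratio observed in the experiments.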