1pMU4: Reproducing tonguing strategies in single-reed woodwinds using an artificial blowing machine

Montserrat Pàmies-Vilà – pamies-vila@mdw.ac.at
Alex Hofmann – hofmann-alex@mdw.ac.at
Vasileios Chatziioannou – chatziioannou@mdw.ac.at
University of Music and Performing Arts Vienna
Anton-von-Webern-Platz 1
1030 Vienna, Austria

Popular version of paper 1pMU4: Reproducing tonguing strategies in single-reed woodwinds using an artificial blowing machine
Presented Monday morning, May 13, 2019
177th ASA Meeting, Louisville, KY

Clarinet and saxophone players create sound by blowing into the instrument through a mouthpiece with an attached reed, and they control the sound production by adjusting the air pressure in their mouth and the force their lips apply to the reed. The player's tongue is used to achieve different articulation styles, for example legato (or slurred), portato and staccato. The tongue touches the reed to stop its vibration and regulates the separation between notes: in legato the notes are played without separation, in portato the tongue only briefly touches the reed, and in staccato there is a longer silence between notes. A group of 11 clarinet players from the University of Music and Performing Arts Vienna (Vienna, Austria) performed these tonguing techniques on a clarinet equipped with sensors. Figure 1 shows an example of the recorded signals. The analysis revealed that the portato technique is performed similarly across players, whereas staccato requires coordinating tonguing and blowing and is more player-dependent.

Figure 1: Articulation techniques in the clarinet, played by a professional player. Blowing pressure (blue), mouthpiece sound pressure (green) and reed displacement (orange) in legato, portato and staccato articulation. Bottom right: pressure sensors placed on the clarinet mouthpiece and strain gauge on a reed.

The aim of the current study is to mimic these tonguing techniques using an artificial setup in which the vibration of the reed and the motion of the tongue can be observed. The setup consists of a transparent box (an artificial mouth) that allows the reed motion, the position of the lip and the artificial tongue to be tracked. This artificial blowing-and-tonguing machine is shown in Figure 2. The built-in tonguing system is driven by a shaker to ensure repeatability, and it enters the artificial mouth through a circular joint that allows several tongue movements to be tested. The parameters obtained from the measurements with players are used to set the air pressure in the artificial mouth and the behavior of the tonguing system.

Figure 2: The clarinet mouthpiece is placed through an airtight hole into a Plexiglas box. This blowing machine allows monitoring the air pressure in the box, the artificial lip and the motion of the artificial tongue, while recording the mouth and mouthpiece pressure and the reed displacement.

The signals recorded with the artificial setup were compared to the measurements obtained with clarinet players. We provide some sound examples comparing one player (first) with the blowing machine (second). A statistical analysis showed that the machine is capable of reproducing the portato articulation, achieving similar attack and release transients (the sound profile at the beginning and at the end of every note). In staccato articulation, however, the blowing machine produces release transients that are too fast.

Comparison between a real player and the blowing machine.
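To make the transient comparison concrete, here is a minimal Python sketch of how attack and release transient durations could be measured from a recorded mouthpiece pressure signal. The 10%-90% envelope thresholds and the 5 ms smoothing window are assumptions for illustration, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code): estimating attack and release
# transient durations from a recorded mouthpiece pressure signal.
import numpy as np

def transient_durations(pressure, fs):
    """Return (attack, release) durations in seconds for one note."""
    envelope = np.abs(pressure)
    # Smooth with a 5 ms moving average to suppress the oscillation itself.
    win = max(1, int(0.005 * fs))
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    peak = envelope.max()
    above10 = np.where(envelope > 0.1 * peak)[0]
    above90 = np.where(envelope > 0.9 * peak)[0]
    attack = (above90[0] - above10[0]) / fs     # 10% -> 90% rise time
    release = (above10[-1] - above90[-1]) / fs  # 90% -> 10% decay time
    return attack, release

# Example with a synthetic 150 Hz note sampled at 50 kHz:
fs = 50_000
t = np.arange(int(0.5 * fs)) / fs
note = (np.sin(2 * np.pi * 150 * t)
        * np.minimum(1, t / 0.02)            # 20 ms linear attack
        * np.minimum(1, (0.5 - t) / 0.05))   # 50 ms linear release
print(transient_durations(note, fs))
```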

This artificial blowing-and-tonguing setup makes it possible to record the essential physical variables involved in sound production and contributes to a better understanding of the processes taking place inside the clarinetist's mouth during playing.

2pBA2 – Double, Double, Toil and Trouble: Nitric Oxide or Xenon Bubble

Christy K. Holland – Christy.Holland@uc.edu
Department of Internal Medicine, Division of Cardiovascular Health and Disease and
Department of Biomedical Engineering
University of Cincinnati
Cardiovascular Center 3935
231 Albert Sabin Way
Cincinnati, Ohio  45267-0586
https://www.med.uc.edu/ultrasound
office:  +1 513 558 5675

Himanshu Shekhar – h.shekhar.uc@gmail.com
Department of Electrical Engineering
AB 6/327A
Indian Institute of Technology (IIT) Gandhinagar
Palaj 382355, Gujarat, India

Maxime Lafond – lafondme@ucmail.uc.edu
Department of Internal Medicine, Division of Cardiovascular Health and Disease and
Department of Biomedical Engineering
University of Cincinnati
Cardiovascular Center 3933
231 Albert Sabin Way
Cincinnati, Ohio  45267-0586

Popular version of paper 2pBA2
Presented Tuesday afternoon at 1:20 pm, May 14, 2019
177th ASA Meeting, Louisville, KY

Designer bubbles loaded with special gases are under development at the University of Cincinnati Image-guided Ultrasound Therapeutics Laboratories to treat heart disease and stroke. Xenon is a rare, pricey, heavy, noble gas, and a potent protector of a brain deprived of oxygen. Nitric oxide is a toxic gas that paradoxically plays an important role in the body, triggering the dilation of blood vessels, regulating the release and binding of oxygen in red blood cells, and even killing virus-infected cells and bacteria.

Microbubbles loaded with xenon or nitric oxide, stabilized against dissolution with a fatty coating, can be exposed to ultrasound for site-specific release of these beneficial gases, as shown in the video (Supplementary Video 1). The microbubbles were stable against dissolution for 30 minutes, which is longer than the circulation time before removal from the body. Curiously, co-encapsulating either of these bioactive gases with a heavier perfluorocarbon gas increased the stability of the microbubbles. Bioactive gas-loaded microbubbles act as a highlighting agent on a standard diagnostic ultrasound image (Supplementary Video 2). Triggered release was demonstrated with pulsed ultrasound already in clinical use. The total dose of xenon or nitric oxide was measured after release from the microbubbles. These results constitute the first step toward the development of ultrasound-triggered release of therapeutic gases to help rescue brain tissue during stroke.

Supplementary Video 1: High-speed video of a gas-loaded microbubble exposed to a single Doppler ultrasound pulse. Note the reduction in size over the course of the ultrasound exposure, demonstrating acoustically driven diffusion of gas out of the microbubble.
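As a rough illustration of how such shrinkage could be quantified, the sketch below estimates an equivalent bubble radius from a grayscale video frame by simple threshold segmentation. The dark-bubble assumption, the threshold and the pixel scale are hypothetical; the actual video processing used in the study may differ.

```python
# Hypothetical sketch: quantifying bubble shrinkage from high-speed video.
import numpy as np

def bubble_radius(frame, pixel_um, threshold=0.5):
    """Estimate an equivalent bubble radius (um) from one grayscale frame.

    The bubble is assumed to appear dark on a bright background; pixels
    below `threshold` (relative to the frame maximum) are counted as
    bubble area, and the radius of a circle of equal area is returned.
    """
    mask = frame < threshold * frame.max()
    area_um2 = mask.sum() * pixel_um ** 2
    return np.sqrt(area_um2 / np.pi)

# Radius-versus-time curve over a pulse, given frames of shape (n, h, w):
# radii = [bubble_radius(f, pixel_um=0.4) for f in frames]
```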

Supplementary Video 2: Ultrasound image of a rat heart filled with nitric oxide-loaded microbubbles. The chamber of the heart appears bright because of the presence of the microbubbles.

4aAB1 – The best available science? Are NOAA Fisheries marine mammal exposure noise guidelines up to date?

Michael Stocker – mstocker@OCR.org
Ocean Conservation Research
P.O. Box 559
Lagunitas, California 94938

Popular version of paper 4aAB1
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY
Click here to read the abstract
Click here to read the proceedings paper

Abstract
NOAA Fisheries employs a set of in-water noise exposure guidelines that establish regulatory thresholds for ocean actions that impact marine mammals. These are based on two impact criteria: Level A, a physiological impact, and Level B, a behavioral impact or disruption. Since the introduction of these exposure definitions, much more work has been published on the behavioral impacts of various noise exposures and on other variables such as frequency, sound quality, and multiple sound-source exposures. But these variables have not yet been incorporated into the NOAA Fisheries exposure guidelines.

Determining regulatory thresholds
Under the Marine Mammal Protection Act (MMPA), sound exposures are categorized in two levels, "Level A" and "Level B." A "Level A Take" is defined by the National Marine Fisheries Service (NMFS) as a "do not exceed" threshold below which physical injury would not occur. For whales, dolphins, and porpoises this was set at 180 dB (re: 1 μPa).

A "Level B Take" is defined as "any act that disturbs or is likely to disturb a marine mammal or marine mammal stock in the wild by causing disruption of natural behavioral patterns, including, but not limited to, migration, surfacing, nursing, breeding, feeding, or sheltering, to a point where such behavioral patterns are abandoned or significantly altered." But defining what constitutes "disruption" is fraught with threshold vagaries, given that behavior is always contextual and the weight of the "biological significance" of the disruption hinges on a human value scale. How biologically significant is it when bowhead whales change their vocalization rates in response to barely audible airgun exposure, well below the Level B threshold? How biologically significant is it when loud Acoustic Harassment Devices (intentionally above Level A), meant to scare sea lions away from fish farms, actually attract them by letting them know that "dinner" is available?

Regulatory Metrics
Regulations work best when they are unambiguous. Regulators are not fond of nuance; dichotomous Yes/No, Go/No-Go decisions are their stock-in-trade. It was for this reason that, until just recently, the marine mammal exposure guidelines were really simple:

Noise exposure above 180 dB = Level A exposure.
Noise exposure above 160 dB = Level B exposure (for impulsive sounds).
Noise exposure above 120 dB = Level B exposure (for continuous sounds).
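Expressed as code, the original decision logic really was this simple. A minimal sketch of the pre-update thresholds listed above:

```python
# Minimal sketch of the old dichotomous NOAA Fisheries decision logic.
def exposure_category(level_db, impulsive):
    """Classify a received sound level (dB re 1 uPa) under the old criteria."""
    if level_db > 180:
        return "Level A"
    if impulsive and level_db > 160:
        return "Level B (impulsive)"
    if not impulsive and level_db > 120:
        return "Level B (continuous)"
    return "below threshold"

print(exposure_category(165, impulsive=True))   # -> Level B (impulsive)
print(exposure_category(125, impulsive=False))  # -> Level B (continuous)
```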

But it was clear that these original regulatory thresholds were actually too simple. When dolphins were seen riding the bow waves of seismic survey vessels, frolicking in a Level A noise field, it was apparent that the regulatory thresholds did not reflect common field conditions. This was recently addressed in guidelines that more accurately reflect the noise exposure criteria relative to the hearing ranges of the various marine mammal species, from large "low-frequency" baleen whales to small "high-frequency" dolphins and porpoises. While this new standard more accurately reflects the frequency-defined hearing ranges of the exposed animals, it does not adequately address the complexity of the noise exposures in terms of sound qualities, nor the complexity of the sound environments in which the exposures would typically occur.

Actual sound exposures
Increasingly complex signals are being used in the sea for underwater communication and equipment control. These communication signals can sound rough or "screechy" and can be more disturbing, and more damaging, than the simple signals used for auditory testing.

Additionally, when sounds are presented in a typical Environmental Impact Statement, they are presented as single sources of sound. And while there is some consideration of accumulated noise impacts, the accumulation period "resets" after 24 hours, so the metric only reflects accumulated noise exposure and does not address the impacts of a habitat completely transformed by continuous or ongoing noise. Given that typical seismic airgun surveys run around the clock for weeks to months at a time and have an acoustical reach of hundreds to thousands of kilometers, the activity is likely to have a much greater behavioral impact than is reflected in accumulating, and then dumping, a noise exposure index every 24 hours.
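The accumulate-and-reset behavior criticized here can be sketched in a few lines. Event levels and timing below are invented for illustration; cumulative sound exposure levels (SELs) are summed on an intensity basis, SEL_cum = 10 log10(Σ 10^(SEL_i/10)).

```python
# Sketch of a 24-hour accumulate-and-reset noise exposure index.
import numpy as np

def cumulative_sel(event_times_h, event_sels_db, reset_h=24.0):
    """Cumulative SEL per window; the index is dumped every `reset_h` hours."""
    windows = {}
    for t, sel in zip(event_times_h, event_sels_db):
        windows.setdefault(int(t // reset_h), []).append(sel)
    return {w: 10 * np.log10(np.sum(10 ** (np.array(s) / 10.0)))
            for w, s in sorted(windows.items())}

# Airgun shots every 10 s for 3 days accumulate within each day, then
# reset: identical daily exposure, with no memory of the days before.
times = np.arange(0, 72, 10 / 3600)  # event times in hours
print(cumulative_sel(times, np.full(times.size, 140.0)))
```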

Furthermore, operations such as seismic surveys or underwater extraction typically use many different but simultaneous sound sources. Seismic surveys may include seafloor profiling with multi-beam or side-scan sonars. Underwater extraction industries, such as seafloor processing for oil and gas extraction or seafloor mining, will necessarily have multiple sound sources: noisy equipment, along with acoustical communications for status monitoring and acoustical remote control of the equipment. These concurrently operating complements of equipment can create a very complex soundscape. And even if the specific pieces of equipment do not in and of themselves exceed regulatory thresholds, they may nonetheless create acoustically hostile soundscapes likely to have behavioral and metabolic impacts on marine animals. So far there are no qualitative metrics for compromised soundscapes, but modeling for concurrent sound exposures is possible, and in this context many concurrent sounds would constitute "continuous sound," thereby qualifying the soundscape as a whole under the Level B continuous-sound criterion of 120 dB.
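Such modeling can start from the standard rule for combining concurrent levels on an intensity basis. In the hypothetical example below, four sources that each sit below the 120 dB continuous-sound criterion combine to exceed it:

```python
# Combining concurrent received levels (dB) on an intensity basis.
import numpy as np

def combined_level(levels_db):
    """Total level of simultaneous sources: 10*log10(sum 10^(L_i/10))."""
    return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10.0)))

# Hypothetical receiver levels for a profiler, thrusters, acoustic comms
# and a remote-control pinger, each individually below 120 dB:
print(combined_level([118, 117, 115, 114]))  # -> ~122 dB, above 120 dB
```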

This is particularly the case for a proposed set of seismic surveys in the Mid-Atlantic, wherein three separate geophysical surveys would occur simultaneously in close proximity. "Incidental Harassment Authorizations" have been released by NOAA Fisheries for these surveys without taking the concurrent noise exposures into account.

Additionally, while sound sources in the near field may be considered "impulsive sounds" and thus regulated under the Level B criteria for impulsive sounds, louder sounds with a long reach should, due to reverberation, be considered "continuous sound sources" and thus regulated under the Level B continuous-sound criterion of 120 dB.

Recommendations:
1. The NOAA sound exposure metric should be updated to reflect sound quality (accommodating signal characteristics) as well as amplitude.
2. "Soundscapes" need qualitative and quantitative definitions, which should then be incorporated into the regulatory framework.
3. Exposure metrics need to accommodate concurrent sound-source exposures.
4. The threshold for what constitutes "continuous sound" needs to be more clearly defined, particularly for loud sound sources in the far field subject to reverberation and "multi-path" echoes.

4aBAa7 – Unprecedented high spatial resolution was achieved in ultrasound imaging by breaking the fundamental limitation set by the operating ultrasound wavelength

Kang Kim – kangkim@upmc.edu
Qiyang Chen – qic41@pitt.edu
Jaesok Yu – jaesok.yu@ece.gatech.edu
Roderick J Tan – tanrj@upmc.edu
University of Pittsburgh
3550 Terrace St, room 623, Pittsburgh, PA 15261

Popular version of paper 2aBA8; 4aBAa7
Presented Tuesday & Thursday morning, May 14 & 16, 2019
177th ASA Meeting, Louisville, KY

Ultrasound imaging is one of the most favored imaging modalities in the clinic because of its real-time display, safety, noninvasiveness, portability and affordability. One major disadvantage of ultrasound imaging is its limited spatial resolution, which is fundamentally governed by the wavelength of the operating ultrasound. We developed a new super-resolution imaging algorithm that can achieve spatial resolution beyond this limitation, known as the acoustic diffraction limit.

The concept of super resolution, which bypasses the physical limit on the maximum resolution of traditional optical imaging, was originally introduced in the microscopy imaging community and later developed into the ground-breaking technology of nano-dimension microscopy, for which the Nobel Prize in Chemistry was awarded in 2014. In brief, microscopy super-resolution imaging is based on the randomly repeated blinking of fluorophores in response to the light source of the microscope. In recent years, the concept has been translated into the ultrasound imaging community. The random blinking required for achieving super resolution with ultrasound is provided by microbubbles flowing in blood vessels, which oscillate randomly in response to the ultrasound pressure from the imaging transducer. The maximum spatial resolution of super-resolution microscopy is in the range of tens of nanometers (10⁻⁹ m), allowing the pathways of individual molecules inside living cells to be visualized, while ultrasound super-resolution imaging can achieve a spatial resolution in the range of tens of micrometers (10⁻⁶ m) when using a typical clinical ultrasound imaging transducer with a center frequency of a few MHz. However, because ultrasound can image up to several centimeters deep, ultrasound super-resolution imaging is practically very useful for imaging human subjects, revealing greater detail of the microvasculature, which is of critical importance for many diseases.

Figure 1

Traditional contrast-enhanced ultrasound (CEU) imaging using microbubbles provides superior contrast of the vasculature, effectively suppressing the surrounding tissue signals, but the spatial resolution remains bound by the acoustic diffraction limit. In recent years, several approaches have employed the super-resolution concept to overcome this limitation with CEU; however, they require a long scan time, which hinders the technology from becoming widespread. The major contribution from my laboratory is to drastically shorten the scan time of super-resolution imaging using a deconvolution algorithm for microbubble center localization, as well as to compensate for artifacts due to physiological motion using block-matching-based motion correction and spatio-temporal-interframe-correlation-based data re-alignment, so that the technology can be used in vivo for diverse applications. In brief, a novel approach combining ultrafast ultrasound imaging, rigid motion compensation, tissue signal suppression and deconvolution-based deblurring has been developed to achieve both high spatial and high temporal resolution.

Video 1
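As a highly simplified illustration of one ingredient of such a pipeline, the sketch below locates microbubble centers at sub-pixel precision in a single contrast frame. Matched filtering with an assumed Gaussian point-spread function plus centroid refinement stands in for the full deconvolution algorithm described above; the PSF width and detection threshold are assumptions.

```python
# Simplified microbubble center localization in one contrast frame.
import numpy as np
from scipy import ndimage

def localize_bubbles(frame, psf_sigma=2.0, threshold=0.5, win=3):
    """Return (row, col) sub-pixel bubble-center estimates."""
    # Matched filter: correlate the frame with the assumed Gaussian PSF.
    score = ndimage.gaussian_filter(frame.astype(float), psf_sigma)
    # Candidate centers: local maxima above a fraction of the global peak.
    maxima = (score == ndimage.maximum_filter(score, size=2 * win + 1))
    rows, cols = np.where(maxima & (score > threshold * score.max()))
    centers = []
    for r, c in zip(rows, cols):
        r0, c0 = max(r - win, 0), max(c - win, 0)
        patch = score[r0:r + win + 1, c0:c + win + 1]
        dr, dc = ndimage.center_of_mass(patch)  # intensity-weighted centroid
        centers.append((r0 + dr, c0 + dc))
    return centers

# Accumulating such centers over thousands of frames builds the
# super-resolved vascular map.
```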

The developed technology was applied to imaging microvasculature changes, a critical feature of disease development and progression. The vasa vasorum, a network of small blood vessels that supplies the walls of large blood vessels and often multiplies and infiltrates atherosclerotic plaque, was identified in a rabbit model.

Figure 2

Microvascular rarefaction is a key signature of acute kidney injury, which often progresses into chronic kidney disease and eventual kidney failure. Microvessels in a mouse acute kidney injury model were successfully identified and quantitatively analyzed.

Figure 3

1aPP – The Role of Talker/Vowel Change in Consonant Recognition with Hearing Loss

Ali Abavisani – aliabavi@illinois.edu
Jont B. Allen – jontalle@illinois.edu
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
405 N Mathews Ave
Urbana, IL, 61801

Popular version of paper 1aPP
Presented Monday, May 13, 2019
177th ASA Meeting, Louisville, KY

Hearing loss can have a serious impact on the social life of individuals experiencing it. The effect becomes more complicated in environments such as restaurants, where the background noise is similar to speech. Although hearing aids of various designs intend to address these issues, users complain about hearing-aid performance in the social situations where they are most needed. Part of this problem stems from the nature of hearing aids, which do not use speech as part of the design and fitting process. If we could incorporate speech sounds in real-life conditions into the fitting process of hearing aids, it may be possible to address most of the shortcomings that irritate users.

There have been many studies on the features that are important in the identification of speech sounds such as isolated consonant-vowel (CV) phones (i.e., meaningless speech sounds). Most of these studies ran experiments on normal-hearing listeners to identify the effects of different speech features on correct recognition. It turned out that manipulating speech sounds, such as replacing a vowel or amplifying/attenuating certain parts of the sound in the time-frequency domain, leads normal-hearing listeners to identify new speech sounds. One goal of the current study is to investigate whether listeners with hearing loss respond similarly to such manipulations.

We designed a speech-based test that may be used by audiologists to determine susceptible speech phones for each individual with hearing loss. The design includes a perceptual measure that corresponds to speech understanding in speech-like background noise: the noise level at which the speech sound is recognized by an average normal-hearing listener with at least 90% accuracy. The speech sounds in the test combine 14 consonants {p, t, k, f, s, S, b, d, g, v, z, Z, m, n} and four vowels {A, ae, I, E}, to cover different features present in speech. All test sounds were pre-evaluated to make sure they are recognizable by normal-hearing listeners under the noise conditions of the experiments. Two sets of sounds, named T1 and T2, with the same consonant-vowel combinations but different talkers, were presented to the listeners at their most comfortable level of hearing (independent of their specific hearing loss). The two speech sets had distinct perceptual measures. When two sounds with a similar perceptual measure, the same consonant but a different vowel are presented to a listener with hearing loss, their response can show us how their particular hearing function may cause errors in understanding this particular speech sound, and why it led to recognition of a specific sound instead of the presented speech. Presenting sounds from the two sets also provides the means to compare the role of the perceptual measure (which is based on normal-hearing listeners) for listeners with hearing loss. When the recognition score for a particular listener increases as the result of a change in the presented speech sounds, it indicates how the hearing-aid fitting process should proceed for that particular (listener, speech sound) pair.
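The perceptual measure itself can be sketched as a simple interpolation of a measured psychometric function: the signal-to-noise ratio at which an average normal-hearing listener reaches 90% correct recognition. The example scores below are invented for illustration.

```python
# Sketch: SNR at 90% correct recognition from a psychometric function.
import numpy as np

def snr_90(snrs_db, scores):
    """Interpolate the SNR at which the recognition score crosses 0.90.

    Assumes scores rise monotonically with SNR, as psychometric
    functions for speech-in-noise typically do.
    """
    snrs_db, scores = np.asarray(snrs_db), np.asarray(scores)
    order = np.argsort(snrs_db)
    return np.interp(0.90, scores[order], snrs_db[order])

# Recognition scores for one CV token at five noise levels:
print(snr_90([-12, -6, 0, 6, 12], [0.35, 0.60, 0.84, 0.95, 0.99]))
```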

While the study shows that improvement or degradation of the speech sounds is listener-dependent, on average 85% of sounds improved when we replaced a CV with the same CV having a better perceptual measure. Additionally, using CVs with a similar perceptual measure, on average 28% of CVs improved when the vowel was replaced with {A}, 28% with {E}, 25% with {ae}, and 19% with {I}.

The confusion pattern in each case provides insight into how these changes affect phone recognition in each ear. We propose prescribing hearing-aid amplification tailored to the individual ear, based on the confusion pattern, the response to a change in perceptual measure, and the response to a change in vowel.

These tests are directed at the fine-tuning of hearing-aid insertion gain, with the ultimate goal of improving speech perception, and at precisely identifying when, and for which consonants, an ear with hearing loss needs treatment to enhance speech recognition.

2pAB16 – Biomimetic sonar and the problem of finding objects in foliage

Joseph Sutlive – josephs7@vt.edu
Rolf Müller – rolf.mueller@vt.edu
Virginia Tech
ICTAS II, 1075 Life Science Cir (Mail Code 0917)
Blacksburg, VA 24061-1016 USA

Popular version of paper 2pAB16
Presented Tuesday afternoon, May 14, 2019
177th ASA Meeting, Louisville, KY

The ability of sonars to find targets-of-interest is often hampered by a cluttered environment. For example, naval sonars encounter difficulties finding mines partially or fully buried among other distracting (clutter) targets. Such situations pose target identification challenges that are much harder than target detection and resolution problems. Possible new ideas for approaching such problems could come from the many bat species which navigate and hunt in dense vegetation and thus must be able to identify targets-of-interest within clutter. Evolutionary adaptation of the bat biosonar system is likely to have resulted in the “discovery” of features that support making distinctions between clutter and echoes of interest.

There are two main types of sonar: active sonar, in which echoes are triggered by the sonar's own pulses, and passive sonar, in which the system remains silent and listens to its environment to gain a better understanding of it. The best-established active-sonar example is given by certain groups of bats that use the Doppler shifts caused by the wingbeats of flying insect prey to identify the prey in foliage. Other bat species have been shown to use a passive sonar approach based on distinctive prey-generated acoustic signals. We have designed a sonar that mimics the biosonar of the horseshoe bat, which uses active sonar and is one of the bats that exploit Doppler shifts as an identification mechanism.
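As a toy illustration of the acoustic effect these bats exploit (not the authors' processing), the sketch below frequency-modulates a constant-frequency echo with a sinusoidal wingbeat and shows the resulting Doppler sidebands spaced at the wingbeat rate around the carrier. All signal parameters are assumptions.

```python
# Toy model: wingbeat Doppler sidebands on a constant-frequency echo.
import numpy as np

fs = 250_000                    # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of echo
f_carrier = 80_000              # CF echo frequency (Hz)
f_wing = 50                     # wingbeat rate (Hz)
dev = 400                       # peak Doppler deviation (Hz)

# Instantaneous frequency f_carrier + dev*cos(2*pi*f_wing*t):
echo = np.sin(2 * np.pi * f_carrier * t
              + (dev / f_wing) * np.sin(2 * np.pi * f_wing * t))

spectrum = np.abs(np.fft.rfft(echo * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > f_carrier - 300) & (freqs < f_carrier + 300)
peaks = freqs[band][spectrum[band] > 0.1 * spectrum.max()]
# Spectral lines at f_carrier +/- k*50 Hz (each smeared over a few
# bins by the window) reveal the fluttering target:
print(peaks)
```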

Biomimetic sonar: a sonar head that mimics the biosonar of the horseshoe bat.

The sonar scanned a variety of targets hidden in artificial foliage, and the data were analyzed afterwards. Initial analysis has shown that the sonar can be used to discriminate between different objects in foliage. Additional target-discrimination tasks were also used: gathering initial echo data for an object without clutter, then trying to find that object within clutter. Initial analysis indicates that this sonar head could be used for this paradigm as well, though the results seemed highly dependent on the direction of the target. Further investigation will refine the models explored here to better understand how we can extract an object from a noisy, cluttered environment.