1aAB11 – A New Dimension to Bat Biosonar

Rolf Müller – rolf.mueller@vt.edu
Anupam K. Gupta – anupamkg@vt.edu
Yanqing Fu – fyq@vt.edu
Uzair Gillani – uzair@vt.edu
Hongxiao Zhu – hongxiao@vt.edu

Virginia Tech
1075 Life Science Circle
Blacksburg, VA 24061

Popular version of paper 1aAB11
Presented Monday morning, October 27, 2014
168th ASA Meeting, Indianapolis

Sonar is a sensing modality found in engineering as well as in nature. Man-made sonar systems can be found in places that include the bows of nuclear submarines and the bumpers of passenger cars. Likewise, natural sonar systems can be found in toothed whales that can weigh over 50 tons as well as in tiny bats that weigh just a few grams. What all these systems have in common is that they emit ultrasonic waves and listen to the returning echoes for clues about what is going on in their environments.

Beyond these basic commonalities, man-made and biological sonar systems differ radically in their approach to emitting and receiving the ultrasonic waves. Human sonar engineers tend to favor large numbers of simple elements distributed over a wide area. For example, sonar engineers fit hundreds of emitting and receiving elements into the bow of a nuclear submarine and even automotive engineers often arrange a handful of elements along the bumper of a car. As small flying mammals, bats did not have the option of distributing a large number of sonar elements over wide areas. Instead, they were forced to take a radically different approach. This biological approach has led to sonar systems that are based on a small number of highly complex emitting and receiving elements. At the same time, they have achieved levels of performance that remain unmatched by their man-made peers.

Bat biosonar has only one emitting element: in some bat species this is the mouth, and in other, nasally emitting species, it is the nose. In all bat species, the echoes are received through two receiving elements, i.e., the two ears. But where is the complexity that allows these three elements to vastly outperform naval sonars with hundreds of emitting and receiving elements?

Over the past few years, research on two groups (families) of bats with particularly sophisticated sonar systems has yielded clues to the existence of a new functional dimension in bat biosonar that could be a key factor behind the remaining performance gap between engineered sonar and biosonar. Horseshoe bats (Rhinolophidae) and Old World leaf-nosed bats (Hipposideridae) emit their biosonar pulses nasally and have elaborate baffle shapes (so-called “noseleaves”) that surround the nostrils and can be seen to act as miniature megaphones.
Figure 1. Noseleaves (“miniature megaphones”) and outer ears of Old World leaf-nosed bats.

Close-up studies of live bats have shown that the noseleaves and the outer ears of these species are both highly dynamic structures. The noseleaves of these bats, for example, not only have much greater geometric complexity than man-made megaphones but, most intriguingly, their walls are dynamic: each time the bat emits an ultrasonic wave packet through its nostrils, it can set the walls of its noseleaf in motion. Hence, the outgoing ultrasonic wave interacts with a changing surface geometry. On the reception side, certain horseshoe bats have been shown to change the shape of their outer ears within one tenth of a second. This is about three times as fast as the proverbial blink of an eye. As with the noseleaf, these changes in shape can take place as the bat receives the ultrasonic echoes.

Figure 2 (video). Motions of the outer ear in an Old World leaf-nosed bat (landmarks added for tracking purposes).

While it is still not certain whether these dynamic features in the sonar system of bats have a function and help the animals improve their sensory abilities, a growing body of evidence suggests that these fast changes are more than just an oddity. The shape changes in the noseleaves and outer ears are the result of a highly specialized muscular machinery that is unlikely to have evolved without a significant functional advantage acting as a driving force. The resulting changes in shape are big enough to have an impact on the interaction between surface geometry and the passing ultrasonic waves, and such acoustic effects have indeed been demonstrated using numerical as well as experimental methods. Finally, dynamic effects are widespread among bats with sophisticated sonar systems and are even found in unrelated species that most likely acquired them in response to parallel evolutionary pressures.

2aID11 – A transducer not to be ignored: The siren

J. D. Maynard
Department of Physics
The Pennsylvania State University
University Park, PA 16802

Popular version of paper 2aID11
Presented Tuesday morning, October 28, 2014
168th ASA Meeting, Indianapolis

The siren is a source of sound (or sound transducer) that captures our attention because we know it may emanate from a police vehicle, fire engine, tornado warning tower or other danger warning system. However, there is another reason to heed the siren: it can be a “death ray”! Most of us know of the death ray from science fiction stories describing a device that can annihilate whole armies silently from a distance. Around 1950 there were newspaper stories which heralded the advent of an actual death ray, with headlines and text such as: “‘Death Ray’ May Be Red [Soviet] Weapon. In the great super arms duel between east and west, has Russia successfully added the “death ray” to its growing arsenal?” (Franklin Johnson, OP, Washington, February 17, 1953) and “US sound ray kills mice in minute. The United States Army has announced the development of a supersonic death ray that kills mice in one minute. In spite of precautions the ray has inflicted burns, dizzyness and loss of balance on laboratory workers.” (American journal, New York, 1947). It may be assumed, and in some cases known, that the death ray referred to in these articles was a high intensity siren which was “silent” because it operated at a frequency above the threshold of human hearing (humans cannot hear frequencies above about 20,000 cycles per second). It was “high intensity” because it operated at an acoustic power 10,000 times greater than that of a sound at the threshold of pain, the level at which the sensation of “loudness” gives way to pain; at the much louder level, pain becomes death, at least for mice.
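To put that power ratio in decibel terms (an illustrative back-of-the-envelope calculation; the 120 dB figure is a commonly cited round value for the threshold of pain, not a number from the paper): a 10,000-fold increase in acoustic power adds

$$10\log_{10}\!\left(10^{4}\right) = 40\ \mathrm{dB},$$

so such a siren would operate somewhere near $120 + 40 = 160$ dB.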

A likely cause for the news articles was research with a siren undertaken by acousticians C. H. Allen and Isadore Rudnick, working under H. K. Schilling, Director of the Pennsylvania State College Acoustics Laboratory, in 1946. Anyone who knew Izzy Rudnick would hypothesize that his response to the news articles would have been “Rumors of my death ray have been greatly exaggerated”. Indeed, a mouse had to be within about four inches (about 10 centimeters) of the siren in order to be killed, and its death was attributed to a rise in body temperature caused by absorption of the sound. In the same manner, the siren was used to heat a cup of coffee, ignite a ball of cotton and pop popcorn. The figure below shows the “trumpet horn” shaped opening of a siren, above which a glass tube is suspended; the lower part of the glass tube contains some popcorn kernels, and the upper part shows some popcorn popping upward.

At close range, a high intensity siren could cause human inner ear problems and deafness, and could even set your hair on fire, but it could never be a real death ray. For the most part, the siren has received serious study by acousticians so as to make it a more efficient and longer-range danger warning device.

Figure (image not available). A high intensity acoustic siren being used to pop popcorn.

3aPA8 – Using arrays of air-filled resonators to reduce underwater man-made noise

Kevin M. Lee – klee@arlut.utexas.edu
Andrew R. McNeese – mcneese@arlut.utexas.edu
Applied Research Laboratories
The University of Texas at Austin

Preston S. Wilson – wilsonps@austin.utexas.edu
Mechanical Engineering Department and Applied Research Laboratories
The University of Texas at Austin

Mark S. Wochner – mark@adbmtech.com
AdBm Technologies

Popular version of paper 3aPA8
Presented Wednesday Morning, October 29, 2014
168th Meeting of the Acoustical Society of America, Indianapolis, Indiana
See also: Using arrays of air-filled resonators to attenuate low frequency underwater sound in POMA

Many marine and aquatic human activities generate underwater noise that can have adverse effects on the underwater acoustic environment. For instance, loud sounds can affect the migratory or other behavioral patterns of marine mammals [1] and fish [2]. Additionally, if the noise is loud enough, it can cause physical damage to these animals as well.

Examples of human activities that can generate such noise include offshore wind farm installation and operation; bridge and dock construction near rivers, lakes, or ports; offshore seismic surveying for oil and gas exploration, as well as oil and gas production; and shipping in busy commercial lanes near environmentally sensitive areas, among others. All of these activities can generate noise over a broad range of frequencies, but the loudest components are typically at low frequencies, between 10 Hz and about 1000 Hz, and these frequencies overlap with the hearing ranges of many aquatic life forms. We seek to reduce the level of sound radiated by these noise sources to minimize their impact on the underwater environment where needed.

A traditional noise control approach is to place some type of barrier around the noise source. To be effective at low frequencies, the barrier would have to be significantly larger than the noise source itself and denser than the surrounding water, making it impractical in most cases. In underwater noise abatement, curtains of small, freely rising bubbles are often used in an attempt to reduce the noise; however, these bubbles are largely ineffective at the low frequencies where the loudest components of the noise occur. We developed a new type of underwater air-filled acoustic resonator that is very effective at attenuating underwater noise at low frequencies. The resonators consist of underwater inverted air-filled cavities with combinations of rigid and elastic wall members. They are intended to be fastened to a framework to form a stationary array surrounding an underwater noise source, such as the ones previously mentioned, or to protect a receiving area from outside noise.

The key idea behind our approach is that an air-filled resonator in water behaves like a mass on a spring, and hence it vibrates in response to an excitation. A familiar example is blowing over the top of an empty bottle to make a tone; the specific tone depends on three things: the volume of the bottle, the length of its neck, and the size of the opening. In our case, a passing acoustic wave excites the resonator into a volumetric oscillation. The air inside the resonator acts as a spring, and the water displaced by the pulsating air acts as a mass. Like a mass on a spring, a resonator in water has a resonance frequency of oscillation, which decreases as the resonator gets larger and increases with its depth in the water. At its resonance frequency, energy is removed from the passing sound wave and converted into heat through compression of the air inside the resonator, causing attenuation of the acoustic wave. A portion of the acoustic energy incident upon an array of resonators is also reflected back toward the sound source, which reduces the level of the acoustic wave that continues past the resonator array. The resonators are designed to reduce noise at a predetermined range of frequencies coincident with the loudest noise generated by a specific noise source.
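The size and depth trends can be illustrated with the classic Minnaert formula for an idealized spherical air bubble in water. The actual resonators described here are open-bottomed cavities with elastic walls, so their behavior deviates from this formula; the Python sketch below, with assumed, illustrative parameter values, only shows the scaling.

```python
import math

def minnaert_frequency(radius_m, depth_m, gamma=1.4, rho=1000.0,
                       p_atm=101325.0, g=9.81):
    """Resonance frequency (Hz) of an idealized spherical air bubble in
    water (Minnaert formula). The "spring" is the compressibility of the
    air; the "mass" is the water moving with the bubble wall."""
    p = p_atm + rho * g * depth_m  # ambient (hydrostatic) pressure at depth
    return math.sqrt(3.0 * gamma * p / rho) / (2.0 * math.pi * radius_m)

# Larger bubble -> lower frequency; deeper bubble -> higher frequency,
# because the higher ambient pressure stiffens the air "spring".
print(minnaert_frequency(0.033, 0.0))   # ~99 Hz near the surface
print(minnaert_frequency(0.033, 10.0))  # ~140 Hz at 10 m depth
```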

air-filled resonators

Underwater photograph of a panel array of air-filled resonators attached to a framework. The individual resonators are about 8 cm across, 15 cm tall, and open on the bottom. The entire framework is about 250 cm wide and about 800 cm tall.

We investigated the acoustic properties of the resonators in a set of laboratory and field experiments. Lab measurements were made to determine the properties of individual resonators, such as their resonance frequencies and their effectiveness in damping out sound. These lab measurements were used to iterate the design of the resonators so they would have optimal acoustic performance at the desired noise frequencies. Initially, we targeted a resonance frequency of 100 Hz—the loudest components of the noise from activities like marine pile driving for offshore wind farm construction are between 100 Hz and 300 Hz. We then constructed a large number of resonators so we could make arrays like the panel shown in the photograph. Three or four such panels could be used to surround a noise source like an offshore wind turbine foundation or to protect an ecologically sensitive area.

The noise reduction efficacy of various resonator arrays was tested in a number of locations, including a large water tank at the University of Texas at Austin and an open water test facility, also operated by the University of Texas, in Lake Travis, a fresh water lake near Austin, TX. Results from the Lake Travis tests are shown in the graph of sound reduction versus frequency. We used two types of resonators: fully enclosed ones, called encapsulated bubbles, and open-ended ones (like the ones shown in the photograph). The number, and hence total volume, of resonators used in the array was also varied. Here, we express the resonator air volume as a percentage of the total volume of the array framework. Note that these percentages are very small, so not much air is needed. For a fixed volume percentage, the open-ended resonators provide up to 20 dB more noise reduction than the fully encapsulated resonators. A noise reduction of 10 dB means the sound pressure was reduced by a factor of about three, and a 30 dB reduction is equivalent to the noise being quieted by a factor of about 32. Because of the improved noise reduction performance of the open-ended resonators, we are currently testing this type of resonator at offshore wind farm installations in the North Sea, where government regulations require some type of noise abatement to protect the underwater acoustic environment.
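For readers who want the conversion behind those factors: the numbers quoted above follow the sound pressure (amplitude) convention, in which every 20 dB corresponds to a factor of ten in pressure. A minimal sketch:

```python
def pressure_reduction_factor(delta_db):
    """Factor by which sound pressure amplitude drops for a given level
    reduction in dB (amplitude convention: level = 20*log10(p/p_ref))."""
    return 10.0 ** (delta_db / 20.0)

print(pressure_reduction_factor(10))  # ~3.2
print(pressure_reduction_factor(30))  # ~31.6
```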


Sound level reduction results from an open water experiment in a fresh water lake.

Various types of air-filled resonators were tested, including fully encapsulated resonators and open-ended resonators like the ones shown in the photograph. Because a much smaller total volume of air (expressed as a percentage here) is needed to achieve the same noise reduction, the open-ended resonators are much more efficient at reducing underwater noise.

References:

[1] W. John Richardson, Charles R. Greene, Jr., Charles I. Malme, and Denis H. Thomson, Marine Mammals and Noise (Academic Press, San Diego, 1998).

[2] Arthur Popper and Anthony Hawkins (eds.), The Effects of Noise on Aquatic Life, Advances in Experimental Medicine and Biology, vol. 730, (Springer, 2012).

4pBA1 – Ultrasound Helps Detect Cancer Biomarkers

Tatiana Khokhlova – tdk7@uw.edu
George Schade – schade@uw.edu
Yak-Nam Wang – ynwang@uw.edu
Joo Ha Hwang – jooha@uw.edu
University of Washington
1013 NE 40th St
Seattle, WA 98105

John Chevillet – jchevill@systemsbiology.org
Institute for Systems Biology
401 Terry Ave N
Seattle, WA 98109

Maria Giraldez – mgiralde@med.umich.edu
Muneesh Tewari – mtewari@med.umich.edu
University of Michigan
109 Zina Pitcher Place 4029
Ann Arbor, MI 48109

Popular version of paper 4pBA1 High intensity focused ultrasound-induced bubbles stimulate the release of nucleic acid cancer biomarkers
Presented Thursday afternoon, October 30, 2014, at 1:30 pm
168th ASA Meeting, Indianapolis

The clinical evaluation of solid tumors typically includes needle biopsies, which can provide diagnostic (benign vs. cancerous) and molecular information (targetable mutations, drug resistance, etc.). This procedure has several diagnostic limitations, most notably the potential to miss mutations located only millimeters away from the sampled tissue. In response to these limitations, the concept of “liquid biopsy” has emerged in recent years: the detection of nucleic acid cancer biomarkers, such as tumor-derived microRNAs (miRNAs) and circulating tumor DNA (ctDNA), in a blood sample. These biomarkers have shown high diagnostic value and could guide the selection of appropriate targeted therapies. However, because these molecules are released from the tumor at low rates, their abundance in the circulation is often too low to be detectable even with the most sensitive techniques.


Figure caption: Experimental setup and the basic concept of “ultrasound-aided liquid biopsy”. Pulsed high intensity focused ultrasound (HIFU) waves create, grow and collapse bubbles in tissue, which leads to puncturing of cell membranes and capillary walls. Cancer-derived microRNAs are thus released from the cells into the circulation and can be detected in a blood sample.

How can we make tumor cells release these biomarkers into the blood? The most straightforward way would be to puncture the cell membrane so that its contents are released. One technology that allows for just that is high intensity focused ultrasound (HIFU). HIFU uses powerful, controlled ultrasound waves that are focused inside the human body to ablate the targeted tissue at the focus without affecting the surrounding organs. Alternatively, if HIFU waves are sent in short, infrequent but powerful bursts, they cause mechanical disruption of tissue at the focus without any thermal effects. The disruption is achieved by small gas bubbles in the tissue that appear, grow and collapse in response to the ultrasound wave, a phenomenon known as cavitation. Depending on the pulsing protocol employed, the outcome can range from small holes in cell membranes and capillaries to complete liquefaction of a small region of tumor. Using this technology, we seek to release biomarkers from tumor cells into the circulation in an effort to detect them with a blood test, avoiding the need for a biopsy.
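The likelihood of cavitation is commonly gauged with the mechanical index (MI): the peak rarefactional pressure divided by the square root of the ultrasound frequency. The pulse parameters below are purely illustrative and not taken from the paper; this Python sketch only shows how the metric behaves.

```python
import math

def mechanical_index(p_neg_mpa, f_mhz):
    """Mechanical index: peak rarefactional (negative) pressure in MPa
    divided by the square root of the center frequency in MHz. Higher
    MI means cavitation is more likely; diagnostic imaging is limited
    to MI <= 1.9, while pulsed HIFU regimes go far above that."""
    return p_neg_mpa / math.sqrt(f_mhz)

# Illustrative values only (not from the paper):
print(mechanical_index(0.5, 5.0))   # ~0.22 -> gentle diagnostic imaging
print(mechanical_index(10.0, 1.0))  # 10.0  -> strongly cavitating HIFU pulse
```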

To test this approach, we applied pulsed HIFU exposures to prostate cancer tumors implanted under the skin of laboratory rats, as illustrated in the image above. For image guidance and targeting we used conventional ultrasound imaging. Blood samples were collected immediately before and at periodic intervals after HIFU treatment and were tested for the presence of microRNAs associated with rat prostate cancer. The levels of these miRNAs were elevated up to 12-fold within minutes after the ultrasound procedure and then declined over the course of several hours. The effects on tissue were evaluated in the resected tumors, and we found only micron-sized areas of hemorrhage scattered through otherwise intact tissue, suggesting damage to small capillaries. These data provided proof of principle for the approach, which we termed “ultrasound-aided liquid biopsy”. We are now working to identify other classes of clinically valuable biomarkers, most notably tumor-derived DNA, whose release could be amplified using this methodology.

4aSCb8 – How do kids communicate in challenging conditions?

Valerie Hazan – v.hazan@ucl.ac.uk
Michèle Pettinato – Michele.Pettinato@uantwerpen.be
Outi Tuomainen – o.tuomainen@ucl.ac.uk
Sonia Granlund – s.granlund@ucl.ac.uk

University College London (UCL)
Speech Hearing and Phonetic Sciences
Gower Street, London WC1E 6BT, UK

Popular version of 4aSCb8 – Acoustic-phonetic characteristics of older children’s spontaneous speech in interactions in conversational and clear speaking styles
Presented Thursday morning, October 30, 2014
168th ASA Meeting, Indianapolis

Kids learn to speak fluently at a young age, and we expect young teenagers to communicate as effectively as adults. However, researchers are increasingly realizing that certain aspects of speech communication have a slower developmental path. For example, as adults, we are very skilled at adapting the way we speak to the needs of the communicative situation. When we are speaking a predictable message in good listening conditions, we do not need to enunciate clearly and can get away with less speaking effort. In poor listening conditions, or when transmitting new information, we increase the effort we make to enunciate speech clearly in order to be more easily understood.

In our project, we investigated whether 9 to 14 year olds (divided into three age bands) were able to make such skilled adaptations when speaking in challenging conditions. We recorded 96 pairs of friends of the same age and gender while they carried out a simple picture-based ‘spot the difference’ game (See Figure 1).
Figure 1: one of the picture pairs in the DiapixUK ‘spot the difference’ task.

The two friends were seated in different rooms and spoke to each other via headphones; they had to try to find 12 differences between their two pictures without seeing each other or the other picture. In the ‘easy communication’ condition, both friends could hear each other normally, while in the ‘difficult communication’ condition, we made it difficult for one of the friends (‘Speaker B’) to hear the other by heavily distorting the speech of ‘Speaker A’ using a vocoder (see Figure 2 and sound demos 1 and 2). Both kids had received some training in understanding this type of distorted speech. We investigated what adaptations Speaker A, who was hearing normally, made to their speech in order to be understood by their friend with ‘impaired’ hearing, so that the pair could complete the task successfully.
Figure 2: The recording set up for the ‘easy communication’ (NB) and ‘difficult communication’ (VOC) conditions.

Sound 1: Here, you will hear an excerpt from the diapix task between two 10-year-olds in the ‘difficult communication’ condition, from the viewpoint of the talker hearing normally. Hear how she attempts to clarify her speech when her friend has difficulty understanding her.

Sound 2: Here, you will hear the same excerpt, but from the viewpoint of the talker hearing the heavily degraded (vocoded) speech. Even though you will find this speech very difficult to understand, even 10-year-olds get better at perceiving it after a bit of training. However, they still have difficulty understanding what is being said, which forces their friend to make a greater effort to communicate.
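For readers curious what ‘vocoded’ means here: a noise-excited channel vocoder splits speech into a few frequency bands, keeps only each band’s slowly varying loudness envelope, and uses those envelopes to modulate noise, removing the fine detail that makes speech easy to understand. The study does not specify its vocoder settings, so the band count and cutoffs in this Python sketch are assumptions chosen only to illustrate the technique.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=4, lo=100.0, hi=5000.0):
    """Noise-excited channel vocoder: replace the fine structure of
    speech with noise, keeping only per-band amplitude envelopes.
    Fewer bands give more degraded (but still learnable) speech."""
    edges = np.geomspace(lo, hi, n_bands + 1)                   # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(speech))
    env_lp = butter(4, 30.0, btype="low", fs=fs, output="sos")  # envelope smoother
    out = np.zeros(len(speech))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        env = sosfiltfilt(env_lp, np.abs(sosfiltfilt(band, speech)))  # band envelope
        out += env * sosfiltfilt(band, noise)                   # modulate noise band
    return out / (np.max(np.abs(out)) + 1e-12)                  # normalize to avoid clipping
```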

We looked at the time it took to find the differences between the pictures as a measure of communication efficiency. We also carried out analyses of the acoustic aspects of the speech to see how these varied when communication was easy or difficult.

We found that when communication was easy, the child groups did not differ from adults in the average time it took to find a difference in the picture, showing that 9 to 14 year olds were communicating as efficiently as adults. When the speech of Speaker A was heavily distorted, all groups took longer to do the task, but only the 9-10 year old group took significantly longer than adults (see Figure 3). The additional problems experienced by younger kids are likely due both to greater difficulty for Speaker B in understanding degraded speech and to Speaker A being less skilled at compensating for these difficulties. The results obtained for children aged 11 and older suggest that they were using good strategies to compensate for the difficulties imposed on the communication.
Figure 3: Average time taken to find one difference in the picture task. The four talker groups do not differ when communication is easy (blue bars); in the ‘difficult communication’ condition (green bars), the 9-10 year olds take significantly longer than the adults but the other child groups do not.

In terms of the acoustic characteristics of their speech, the 9 to 14 year olds differed in certain respects from adults in the ‘easy communication’ condition. All child groups produced more distinct vowels and used a higher pitch than adults; kids younger than 11-12 also spoke more slowly and more loudly than adults. They had not yet learnt to ‘reduce’ their speaking effort in the way adults do when communication is easy. When communication was made difficult, the 9 to 14 year olds were able to adapt their speech for the benefit of the friend hearing the distorted speech, even though they themselves were having no hearing difficulties. For example, they spoke more slowly (see Figure 4) and more loudly. However, some of these adaptations differed from those produced by adults.
Figure 4: Speaking rate changes with age and communication difficulty. 9-10 year olds spoke more slowly than adults in the ‘easy communication’ condition (blue bars). All speaker groups slowed down their speech as a strategy to help their friend understand them in the ‘difficult communication’ (vocoder) condition (green bars).

Overall, therefore, even in the second decade of life, there are changes taking place in the conversational speech produced by young people. Some of these changes are due to physiological reasons such as growth of the vocal apparatus, but increasing experience with speech communication and cognitive developments occurring in this period also play a part.

Younger kids may experience greater difficulty than adults when communicating in difficult conditions and even though they can make adaptations to their speech, they may not be as skilled at compensating for these difficulties. This has implications for communication within school environments, where noise is often an issue, and for communication with peers with hearing or language impairments.