Making Sense of Visualized Acoustic Information

Brett Bissinger – beb194@psu.edu
J. Daniel Park
Applied Research Laboratory Penn State University
P.O. Box 30, State College, PA

Daniel A. Cook
Georgia Tech Research Institute
Smyrna, GA

Alan J. Hunter
University of Bath
United Kingdom

Popular version of paper 3aSP1, “Signal processing trade-offs for the quality assessment of acoustic color signatures”

Presented Wednesday Morning, December 6, 2017, 9:00-9:20 AM, Salon D

174th ASA Meeting, New Orleans

We make sense of the world by seeing, hearing, smelling, touching, and tasting. Of these different modes of sensing, the majority of the information people consume is in the form of visual representations such as photos. Cameras take photos with light, but underwater, because sound waves propagate more efficiently than electromagnetic waves, we use acoustic transducers, or underwater microphones, to sense the fluctuations of acoustic pressure and record them as raw data. This data is processed into various forms, including sonar imagery such as Figure 1.

Imagery obtained from sonar data is used in many applications for understanding the underwater environment, including fishery monitoring, navigation, and tracking. Generating acoustic imagery creates a geometric representation of the information, allowing us to easily understand the content of the sonar data, just as it is easy for us to recognize shapes in photos and identify objects. Images with better quality, or higher resolution, typically provide more information, and it is often the goal of sonar systems to generate high-resolution images by increasing the size of the sensor, just as larger camera lenses allow us to take better photos [1].

However, this analogy only works when the wavelength of the sound waves is small compared to the size of an object, just as the wavelength of light is very short compared to most objects we see. When the wavelengths used in sonar are comparable to or longer than the size of underwater objects, the data can still be processed into imagery, but the resulting images are not easy to understand: the geometric cues that we expect to see are no longer there. These geometric features are important not only for human consumption, but also for many signal and image processing algorithms that are designed to work with geometric features in images. Therefore, different ways of processing the raw acoustic data in search of simple features may allow better understanding, and better quality assessment, of the information contained in the data. One such approach is called acoustic color.

Acoustic color, as shown in Figure 2, is a representation that characterizes how the object responds differently as the direction of incoming sound changes [2]. Instead of describing geometric features such as shape, it describes the spectral features, which are magnitudes and time delays of combinations of sound waves with different frequencies [3], [4]. The characteristics of this feature change with the direction of observation and can provide information that is not easily recognizable in sonar images. An analogy would be to think of striking a drum or a bell and trying to guess its shape by the sound it makes. Even with very similar exterior shapes, the sounds they generate are associated with different magnitudes and time delays, making them easily distinguishable.
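
To make this concrete, here is a minimal sketch of how an acoustic color map could be assembled from raw echo data: take the spectrum of the echo recorded at each look angle and stack the magnitudes into a frequency-versus-angle image. The function name, array layout, and the bare FFT pipeline are illustrative assumptions, not the authors' actual processing chain.

```python
import numpy as np

def acoustic_color(echoes, fs):
    """Assemble a frequency-vs-aspect-angle magnitude map from raw echoes.

    echoes : 2-D array of shape (n_angles, n_samples), one echo per look angle
    fs     : sampling rate in Hz
    Returns (freqs, color_db), where color_db[i, k] is the echo magnitude
    in dB at aspect angle i and frequency freqs[k].
    """
    spectra = np.fft.rfft(echoes, axis=1)              # spectrum of each echo
    color_db = 20 * np.log10(np.abs(spectra) + 1e-12)  # magnitude in dB
    freqs = np.fft.rfftfreq(echoes.shape[1], d=1.0 / fs)
    return freqs, color_db

# Example: 360 look angles, 1 ms of data per echo, sampled at 100 kHz
freqs, color = acoustic_color(np.random.randn(360, 100), fs=100e3)
```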

Acoustic color is one of many candidate representations we are exploring to better understand information contained in acoustic data. Various physics/model-based signal processing methods with different perspectives, or models, are being developed and compared to determine which methods best show different mechanisms of acoustic phenomenology. This process can potentially help us find other sound-generating mechanisms we are not yet familiar with.

Figure 1 A sonar image of an object on the sea floor that shows a rectangular shape with clear edges and highlights.  Source: ARL/PSU

 

Figure 2 Acoustic color (left, 2a) and wavenumber spectrum (right, 2b) of a cylinder. They contain the same information, but the wavenumber spectrum may be more amenable to further signal processing and quality assessment.  Data source: Applied Physics Laboratory, University of Washington, PONDEX 09/10


References:

[1] Callow, Hayden J. “Signal processing for synthetic aperture sonar image enhancement.” (2003).

[2] Kennedy, J. L., et al. “A rail system for circular synthetic aperture sonar imaging and acoustic target strength measurements: Design/operation/preliminary results.” Review of Scientific Instruments 85.1 (2014): 014901.

[3] Williams, Kevin L., et al. “Acoustic scattering from a solid aluminum cylinder in contact with a sand sediment: Measurements, modeling, and interpretation.” The Journal of the Acoustical Society of America 127.6 (2010): 3356-3371.

[4] Morse, Scot F., and Philip L. Marston. “Backscattering of transients by tilted truncated cylindrical shells: Time-frequency identification of ray contributions from measurements.” The Journal of the Acoustical Society of America 111.3 (2002): 1289-1294.

2aPA6 – An acoustic approach to assess natural gas quality in real time – Andi Petculescu

An acoustic approach to assess natural gas quality in real time

Andi Petculescu – andi@louisiana.edu
University of Louisiana at Lafayette
Lafayette, Louisiana, US

Popular version of paper 2aPA6 “An acoustic approach to assess natural gas quality in real time.”

Presented Tuesday morning, December 5, 2017, 11:00-11:20 AM, Balcony L

174th ASA in New Orleans

 

Infrared laser spectroscopy offers amazing measurement resolution for gas sensing applications, ranging from 1 part per million (ppm) down to a few parts per billion (ppb).

 

There are applications, however, that require sensor hardware able to operate in harsh conditions without periodic maintenance or recalibration. Examples are monitoring of natural gas composition in transport pipes, of explosive gas accumulation in grain silos, and of ethylene concentration in greenhouse environments. A robust alternative is gas-coupled acoustic sensing. Such gas sensors operate on the principle that sound waves are intimately coupled to the gas under study; hence any perturbation of the gas will affect i) how fast the waves travel and ii) how much energy they lose during propagation.

The former effect is represented by the so-called speed of sound, the typical "workhorse" of acoustic sensing. The sound speed of a gas mixture changes with composition because it depends on two gas parameters besides temperature. The first is the mass of the molecules forming the gas mixture; the second is the heat capacity, which describes the ability of the gas to follow, via the amount of heat exchanged, the temperature oscillations accompanying the sound wave. All commercial gas-coupled sonic gas monitors rely solely on the dependence of sound speed on molecular mass. This traditional approach, however, can only sense relative changes in the speed of sound, and hence in mean molecular mass; it cannot perform a truly quantitative analysis. Heat capacity, on the other hand, is the thermodynamic "footprint" of the amount of energy exchanged during molecular collisions, and it opens up the possibility of quantitative gas sensing.

Furthermore, the attenuation coefficient, which describes how fast energy is lost from the coherent ("acoustic") motion to the incoherent (random) motion of the gas molecules, has largely been ignored. We have shown that measurements of sound speed and attenuation at only two acoustic frequencies can be used to infer the intermolecular energy transfer rates, which depend on the species present in the gas. The foundation of our model is summarized in the pyramid of Figure 1: one can either predict the sound speed and attenuation if the composition is known (bottom-to-top arrow) or perform quantitative analysis or sensing from measured sound speed and attenuation (top-to-bottom arrow).
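
As a concrete illustration of the sound speed dependence described above, the sketch below evaluates the standard ideal-gas relation c = sqrt(γRT/M) for a mixture, with mole-fraction-weighted molar mass and heat capacity. This is textbook physics offered for orientation, not the authors' sensing algorithm; the heat capacity values in the example are room-temperature approximations.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def mixture_sound_speed(x, M, cp, T=293.15):
    """Ideal-gas sound speed of a mixture: c = sqrt(gamma * R * T / M_mix).

    x  : mole fractions of the components (should sum to 1)
    M  : molar masses, kg/mol
    cp : molar heat capacities at constant pressure, J/(mol K)
    """
    x, M, cp = map(np.asarray, (x, M, cp))
    M_mix = np.sum(x * M)    # mean molar mass
    cp_mix = np.sum(x * cp)  # mixture heat capacity
    cv_mix = cp_mix - R      # ideal gas: Cp - Cv = R
    gamma = cp_mix / cv_mix
    return np.sqrt(gamma * R * T / M_mix)

# Example: 90% methane / 10% nitrogen at 20 C
# (CH4: M = 16.04 g/mol, Cp ~ 35.7 J/(mol K); N2: M = 28.01 g/mol, Cp ~ 29.1 J/(mol K))
c = mixture_sound_speed([0.9, 0.1], [0.01604, 0.02801], [35.7, 29.1])
print(f"{c:.0f} m/s")  # ~430 m/s; adding N2 lowers the pure-methane value
```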

We are developing physics-based algorithms that not only quantify a gas mixture but also help identify contaminant species in a base gas. With the right optimization, the algorithms can be used in real time to measure the composition of piped natural gas as well as its degree of contamination by CO2, N2, O2 and other species. It is these features that have sparked the interest of the gas flow-metering industry. Figure 2 shows model predictions and experimental data for the attenuation coefficient for mixtures of nitrogen in methane (Fig. 2a) and ethylene in nitrogen (Fig. 2b).

The sensing algorithm, which we named "Quantitative Acoustic Relaxational Spectroscopy" (QARS), is based on a purely geometric interpretation of the frequency-dependent heat capacity of the mixture of polyatomic molecules. This makes it highly amenable to implementation as a robust real-time sensing and monitoring technique. The results of the algorithm are shown in Figure 3 for a nitrogen-methane mixture. The example shows how the normalized attenuation curve arising from intermolecular exchanges is reconstructed (or synthesized) from data at just two frequencies. The prediction of the first-principles model (dashed line) shows two relaxation times: the main one of approximately 50 µs (= 1/(20,000 Hz)) and a secondary one around 1 ms (= 1/(1,000 Hz)). Probing the gas with only two frequencies yields the main relaxation process, around 20,000 Hz, from which the composition of the mixture can be inferred with relatively high accuracy.
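
QARS itself rests on the geometric interpretation of the frequency-dependent heat capacity, which is beyond a short sketch. As a simplified stand-in that captures the spirit of the two-frequency reconstruction, the snippet below assumes a single relaxation process with the textbook normalized attenuation curve 2·α_max·(f/f_r)/(1 + (f/f_r)²) and solves for its strength and relaxation frequency from measurements at just two frequencies. All function names and numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

def single_relaxation(f, alpha_max, f_r):
    """Normalized attenuation per wavelength of a single relaxation process;
    it peaks at alpha_max when f equals the relaxation frequency f_r."""
    r = f / f_r
    return 2.0 * alpha_max * r / (1.0 + r ** 2)

def reconstruct_from_two_points(f1, a1, f2, a2, guess=(0.1, 1e4)):
    """Recover (alpha_max, f_r) from attenuation measured at two frequencies."""
    def residuals(p):
        alpha_max, f_r = p
        return [single_relaxation(f1, alpha_max, f_r) - a1,
                single_relaxation(f2, alpha_max, f_r) - a2]
    return fsolve(residuals, guess)

# Synthetic check: a process with alpha_max = 0.2 peaking at 20 kHz is
# recovered from "measurements" at 5 kHz and 40 kHz.
a1 = single_relaxation(5e3, 0.2, 2e4)
a2 = single_relaxation(40e3, 0.2, 2e4)
print(reconstruct_from_two_points(5e3, a1, 40e3, a2))  # ~[0.2, 20000]
```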

Figure 1. The prediction/sensing pyramid of molecular acoustics. Direct problem: prediction of sound wave propagation (speed and attenuation). Inverse problem: quantifying a gas mixture from measured sound speed and attenuation.


Figure 2. The normalized (dimensionless) attenuation coefficient in mixtures of N2 in CH4 (a) and C2H4 in N2 (b). Solid lines–theory; symbols–measurements.

Figure 3. The normalized (dimensionless) attenuation as a function of frequency. Dashed line–theoretical prediction; solid line–reconstructed curve.

 

 

3aUWa6 – Inversion of geo-acoustic parameters from sound attenuation measurements in the presence of swim bladder bearing fish – Orest Diachok

Inversion of geo-acoustic parameters from sound attenuation measurements in the presence of swim bladder bearing fish

Orest Diachok – orest.diachok@jhuapl.edu
Johns Hopkins University Applied Physics Laboratory
11100 Johns Hopkins Rd.
Laurel MD 20723

Altan Turgut – turgut@wave.nrl.navy.mil
Naval Research Laboratory
4555 Overlook Ave. SW
Washington DC 20375

Popular version of paper 3aUWa6 “Inversion of geo-acoustic parameters from transmission loss measurements in the presence of swim bladder bearing fish in the Santa Barbara Channel”

Presented Wednesday morning, December 6, 2017, 9:15-10:00 AM, Salon E

174th ASA Meeting, New Orleans

The intensity of sound propagating from a source in the ocean diminishes with range due to geometrical spreading, chemical absorption, and reflection losses from the bottom and surface. Measurements of sound intensity vs. range and depth in the water column may be used to infer the speed of sound, density, and attenuation coefficient (geo-alpha) of bottom sediments. Numerous inversion algorithms have been developed to search through physically viable permutations of these parameters and identify the values that provide the best fit to measurements. This approach yields valid results in regions where the concentration of swim bladder bearing fish is negligible.
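
For orientation, the simplest textbook correction for range is spherical spreading plus absorption; the sketch below evaluates only that elementary formula and is not the inversion machinery described above. The absorption value in the example is an assumption.

```python
import math

def transmission_loss(range_m, alpha_db_per_km):
    """One-way transmission loss in dB: spherical spreading plus absorption.
    TL = 20*log10(r) + alpha * r/1000, with r in meters and alpha in dB/km."""
    return 20.0 * math.log10(range_m) + alpha_db_per_km * range_m / 1000.0

# Example: 3.7 km range (the source-array separation used in this experiment)
# with an assumed absorption of 0.2 dB/km
print(f"{transmission_loss(3700.0, 0.2):.1f} dB")  # ~72.1 dB
```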

In regions where there are large numbers of swim bladder bearing fish, the effect of attenuation due to fish (bio-alpha) needs to be considered to permit unbiased estimates of geo-acoustic parameters (Diachok and Wales, 2005; Diachok and Wadsworth, 2014).

Swim bladder bearing fish resonate at frequencies controlled by the dimensions of their swim bladders. Adult 16 cm long sardines resonate at 1.1 kHz at 12 m depth. Juvenile sardines, being smaller, resonate at higher frequencies. If the number of fish is sufficiently large, sound will be highly attenuated at the resonance frequencies of their swim bladders.
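
A first-order way to see why bladder size sets the resonance frequency is the classic Minnaert formula for a gas bubble in water. Real swim bladders are constrained and damped by fish tissue, so this is an order-of-magnitude sketch rather than the authors' model; the ~4.4 mm equivalent radius in the example is our illustrative assumption.

```python
import numpy as np

def minnaert_frequency(radius, depth, gamma=1.4, rho=1025.0,
                       p_atm=101325.0, g=9.81):
    """Resonance frequency (Hz) of a gas bubble of given radius (m) at depth (m),
    treating the swim bladder as a free spherical bubble:
    f = sqrt(3 * gamma * P / rho) / (2 * pi * radius)."""
    p = p_atm + rho * g * depth  # ambient pressure at depth
    return np.sqrt(3.0 * gamma * p / rho) / (2.0 * np.pi * radius)

# A ~4.4 mm equivalent-radius bladder at 12 m depth resonates near 1.1 kHz,
# consistent with the adult sardine figure quoted above.
print(f"{minnaert_frequency(0.0044, 12.0):.0f} Hz")
```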

To demonstrate the competing effects of bio- and geo-alpha on sound attenuation, we conducted an interdisciplinary experiment in the Santa Barbara Channel during a month when the concentration of sardines was known to be relatively high. This experiment included an acoustic source, S, which permitted measurements at frequencies between 0.3 and 5 kHz, and an array of 16 hydrophones, H, deployed 3.7 km from the source, as illustrated in Figure 1. Sound propagating from S to H was attenuated by sediments at the bottom of the ocean (yellow) and by a layer of fish at about 12 m depth (blue). To validate geo-acoustic values inferred from the sound intensity vs. depth data, we sampled the bottom with cores and measured sound speed and geo-alpha vs. depth with a near-bottom towed chirp sonar (Turgut et al., 2002). To validate inferred bio-acoustic values, Carla Scalabrin of Ifremer, France, measured fish layer depths with an echo sounder, and Paul Smith of the Southwest Fisheries Science Center conducted trawls, which provided length distributions of the dominant species. The latter permitted calculation of swim bladder dimensions and resonance frequencies.

Figure 2 shows two-hour averaged measurements of excess attenuation coefficients (corrected for geometrical spreading and chemical absorption) vs. frequency and depth at night, when these species are generally dispersed (far apart from each other) near the surface. The absorption bands centered at 1.1, 2.2, and 3.5 kHz at 12 m depth corresponded to 16 cm sardines, 10 cm anchovies, and juvenile sardines or anchovies, respectively. During daytime, sardines generally form schools at greater depths, where they resonate at "bubble cloud" frequencies, which are lower than the resonance frequencies of individuals.

The method of concurrent inversion (Diachok and Wales, 2005) was applied to measurements of sound intensity vs. depth to estimate values of bio- and geo-acoustic parameters. The geo-acoustic search space consisted of the sound speed at the top of the sediments, the gradient in sound speed, and geo-alpha. The biological search space consisted of the depth and thickness of the fish layer and bio-alpha within the layer. Figure 3 shows the values of geo-alpha that resulted in the best fit between calculations and measurements: 0.1 dB/m at 1.1 kHz and 0.5 dB/m at 1.9 kHz. Also shown are chirp sonar estimates of geo-alpha at 3.2 kHz and a quadratic fit to the data.
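
In outline, such a search can be pictured as a brute-force sweep over the joint parameter space: for each candidate (geo, bio) pair, run a propagation model and keep the pair whose predicted intensity-vs-depth curve best matches the measurements. The sketch below is only that outline; `model` stands in for a real propagation code, and all names are hypothetical.

```python
import numpy as np
from itertools import product

def concurrent_inversion(measured, model, geo_grid, bio_grid):
    """Exhaustive search over candidate geo- and bio-acoustic parameters.

    measured : observed sound intensity vs. depth (1-D array)
    model    : callable (geo, bio) -> predicted intensity vs. depth
    Returns the (geo, bio) pair with the smallest least-squares misfit.
    """
    best, best_err = None, np.inf
    for geo, bio in product(geo_grid, bio_grid):
        err = float(np.sum((model(geo, bio) - measured) ** 2))
        if err < best_err:
            best, best_err = (geo, bio), err
    return best
```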

If we had assumed that bio-alpha was zero, the inverted value of geo-alpha would have been 1.2 dB/m at 1.1 kHz, about ten times greater than the properly derived estimate, and 0.9 dB/m at 1.9 kHz.

These measurements were made at a biological hot spot, which was identified through an echo sounder survey. None of the previously reported experiments, which were designed to permit inversion of geo-acoustic parameters from sound propagation measurements, included echo sounder measurements of fish depth or trawls. Consequently, some of these measurements may have been conducted at sites where the concentration of swim bladder bearing fish may have been significant, and inverted values of geo-acoustic parameters may have been biased by neglect of bio-alpha.

 

Figure 1. Experimental geometry: source, S deployed 9 m below the surface between a float and an anchor, and a vertical array of hydrophones, H, deployed 3.7 km from source.

Figure 2. Concurrent echo sounder measurements of energy reflected from fish vs. depth (left), and excess attenuation vs. frequency and depth at night (right).

Figure 3. Attenuation coefficient in sediments derived from concurrent inversion of bio and geo parameters, geo only, chirp sonar, and quadratic fit to data.

 

Acknowledgement: This research was supported by the Office of Naval Research Ocean Acoustics Program.

References

Diachok, O. and S. Wales (2005), “Concurrent inversion of bio and geo-acoustic parameters from transmission loss measurements in the Yellow Sea”, J. Acoust. Soc. Am., 117, 1965-1976.

Diachok, O. and G. Wadsworth (2014), “Concurrent inversion of bio and geo-acoustic parameters from broadband transmission loss measurements in the Santa Barbara Channel”, J. Acoust. Soc. Am., 135, 2175.

Turgut, A., M. McCord, J. Newcomb and R. Fisher (2002), "Chirp sonar sediment characterization at the northern Gulf of Mexico Littoral Acoustic Demonstration Center experimental site", Proceedings, Oceans 2002.

3aPA7 – Moving and sorting living cells with sound and light – Gabriel Dumy

Moving and sorting living cells with sound and light

Gabriel Dumy – gabriel.dumy@espci.fr
Mauricio Hoyos – mauricio.hoyos@espci.fr
Jean-Luc Aider – jean-luc.aider@espci.fr
ESPCI Paris – PMMH Lab
10 rue Vauquelin
Paris, 75005, FRANCE


Popular version of paper 3aPA7, “Investigation on a novel photoacoustofluidic effect”

Presented Wednesday morning, December 6, 2017, 11:00-11:15 AM, Balcony L

174th ASA Meeting, New Orleans

 

Among the various ways of manipulating suspensions, acoustic levitation is one of the most practical, yet it is little known to the public. It allows contactless concentration of microscopic bodies (from particles to living cells) in fluids (air, water, blood...), and it requires only a small amount of power and material. It is thus smaller and less power-consuming than technologies using, for instance, magnetic or electric fields, and it does not require any preliminary tagging.

Acoustic levitation occurs when standing ultrasonic waves are trapped between two reflecting walls. If the ultrasonic wavelength is matched to the distance between the two walls (the distance must be a whole number of half wavelengths), then the acoustic pressure field forces the particles or cells to move toward the region where the acoustic pressure is minimal (this region is called a pressure node) [1]. Once the particles or cells have reached the pressure node, they can be kept in so-called "acoustic levitation" as long as needed; they are literally trapped in an "acoustic tweezer". Using this method, it is easy to force cells or particles to form large clusters or aggregates that can be kept in acoustic levitation as long as the ultrasonic field is on.
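
For a feel for the numbers, the half-wavelength condition fixes the cavity's resonance frequencies: gap = n·λ/2 implies f_n = n·c/(2·gap). The sketch below assumes a water-filled cavity; the 0.5 mm gap is an illustrative value, not the authors' setup.

```python
def resonance_frequencies(gap, c=1480.0, modes=(1, 2, 3)):
    """Frequencies (Hz) at which a fluid layer of thickness `gap` (m) supports
    a standing wave: gap = n * lambda / 2  =>  f_n = n * c / (2 * gap).
    c defaults to the speed of sound in water, ~1480 m/s."""
    return [n * c / (2.0 * gap) for n in modes]

# A 0.5 mm water-filled cavity traps particles at ~1.48 MHz (n = 1)
print(resonance_frequencies(0.5e-3))
```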

What happens if we illuminate such an aggregate of fluorescent particles or cells with a strong monochromatic (single-color) light? If this light is absorbed by the levitating objects, the previously very stable aggregate explodes.

We can observe that the particles are now ejected from the illuminated aggregate at great speed from its periphery. But they are still kept in acoustic levitation, which is not affected by the introduction of light.

We determined that the key parameter is the absorption of light by the levitating objects because the explosions happened even with non-fluorescent particles. Moreover, this phenomenon exhibits a strong coupling between light and sound, as it needs the two sources of energy to be present at the same time to occur. If the particles are not in acoustic levitation, on the bottom of the cavity or floating in the suspending medium, even a very strong light does not move them. Without the adequate illumination, we only observe a classical acoustic aggregation process.

Using this light absorption property together with acoustic levitation opens the way to more complex and challenging experiments, like advanced manipulation of micro-objects in acoustic levitation or fast and highly selective sorting of mixed suspensions, since we can discriminate particles not only by their mechanical properties but also by their optical ones.

We performed preliminary experiments with living cells. We observed that human red blood cells (RBCs), which strongly absorb blue light, could be easily manipulated by both sound and light, and we were able to break up RBC aggregates very quickly. This new effect coupling acoustics and light suggests entirely new perspectives for living cell manipulation and sorting, such as cell washing (removing unwanted cells from a target cell sample). Indeed, most living cells absorb light at particular wavelengths and can already be manipulated using acoustic fields. This discovery should allow very selective manipulation and/or sorting of living cells in a simple way, using a low-cost setup.

Figure 1. (Figure_ARF_with_light.tif) Illustration of the acoustic manipulation of suspensions. A suspension is first focused under the influence of the vertical acoustic pressure field, shown in red (a and b). Once in the pressure node, the suspension is radially aggregated (c) by secondary acoustic forces [2]. In (d), when the stable aggregate is illuminated at a suitable wavelength, it explodes laterally.

Figure 2. (Videos red_explosion.avi and green_explosion.avi, side by side) Explosion of a previously formed aggregate of 1.6 µm red-fluorescent polystyrene beads by green light (left), and explosion of an aggregate of 1.7 µm green-fluorescent polystyrene beads by blue light (right).

Figure 3. (Video Montage_separation_contour) Illustration of the separation potential of the phenomenon. We take an aggregate (a) that is a mix of two kinds of polystyrene particles of the same diameter, one absorbing blue light and fluorescing green (b), the other absorbing green light and fluorescing red (c); these cannot be separated by acoustics alone. We expose the aggregate to blue light for 10 seconds. The bottom row shows the effect of this light: the blue-absorbing particles (e) are effectively separated from the green-absorbing ones (f).

Video: (aggregation_movie) Top view of the regular acoustic aggregation process of a suspension of 1.6 µm polystyrene beads.

 

References:

[1] K. Yosioka and Y. Kawasima, "Acoustic radiation pressure on a compressible sphere," Acustica, vol. 5, pp. 167–173, 1955.

[2] G. Whitworth, M. A. Grundy, and W. T. Coakley, “Transport and harvesting of suspended particles using modulated ultrasound,” Ultrasonics, vol. 29, pp. 439–444, 1991.

1pEAa5 – A study on a friendly automobile klaxon production with rhythm – SangHwi Jee

A study on a friendly automobile klaxon production with rhythm

SangHwi Jee – slayernights@ssu.ac.kr
Myungsook Kim
Myungjin Bae
Sori Sound Engineering Lab
Soongsil University
369 Sangdo-Ro, Dongjak-Gu, Seoul, Seoul Capital Area 156-743
Republic of Korea

Popular version of paper 1pEAa5, “A study on a friendly automobile klaxon production with rhythm”

Presented Monday, December 4, 2017, 2:00-2:15 PM, Balcony N

174th ASA meeting, New Orleans

 

Cars are part of our everyday lives, and as a result traffic noise is always present when we are driving, riding as a passenger, or walking down the street. Among traffic noises, blaring car horns are among the most stressful and unpleasant. Impulse noises, like those from honking horns, can lead to emotional dysregulation or emotional hyperactivity in a driver and potentially cause an accident. While impulse sounds may be dangerous, avoiding horn use altogether can be just as deadly, since horn sounds warn pedestrians and other drivers of a potential accident. Although the horn is an important means of informing pedestrians and other drivers of an imminent danger, car horn sounds and their impact on the listener have until now received little study.

Generally, the Klaxon electromechanical car horn has a simple mechanical structure with excellent durability and ease of use. However, once installed, its tone and sound pressure level cannot be changed or redesigned. In this study, therefore, the Klaxon's power-supply pulse width was set to five values (0.01 s, 0.02 s, 0.03 s, 0.06 s, 0.13 s), and its sound level was set to five values (80 dB, 85 dB, 90 dB, 100 dB, 110 dB). The experimental results show that the Klaxon reaches its maximum sound pressure (pmax = 110 dB) at a time tmax after it is switched on.

Equation 1: Ps (dB) = 110 dB − [10 log(ton / (ton + toff)) + 20 log(ton / tmax)]
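
For reference, here is Equation 1 transcribed directly into code, with the signs exactly as printed above; we assume ton and toff are the on and off durations within the pulse cycle and tmax is the time at which the horn reaches maximum pressure, as described above. The function name is ours.

```python
import math

def perceived_level_db(t_on, t_off, t_max):
    """Equation 1, as printed: Ps (dB) = 110 dB
    - [10*log10(t_on / (t_on + t_off)) + 20*log10(t_on / t_max)]."""
    duty_term = 10.0 * math.log10(t_on / (t_on + t_off))
    rise_term = 20.0 * math.log10(t_on / t_max)
    return 110.0 - (duty_term + rise_term)
```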

 

Using designs based on Equation 1, preferences were evaluated for five types of 5-second Klaxon sounds, created by appropriately adjusting the Klaxon's operating time and downtime. We performed 100 Mean Opinion Score (MOS) evaluations of the five Klaxon designs, three times each; to establish a reference, participants first listened three times to the existing 1-second Klaxon sound. The evaluation items were MOS ratings for risk perception, loudness, unpleasantness, and stress. The preference results show that, when heard continuously for 5 seconds, a horn sound with rhythm was preferred over the conventional horn sound. When rhythm was added to the Klaxon sound, the average perceived horn level decreased by 20 dB. The human ear perceives a rhythmic sound as easily as a steady tone, and listeners found that adding rhythm to a sound made it more pleasant than the corresponding steady sound.