Making Sense of Visualized Acoustic Information

Brett Bissinger – beb194@psu.edu
J. Daniel Park
Applied Research Laboratory Penn State University
P.O. Box 30, State College, PA

Daniel A. Cook
Georgia Tech Research Institute
Smyrna, GA

Alan J. Hunter
University of Bath
United Kingdom

Popular version of paper 3aSP1, “Signal processing trade-offs for the quality assessment of acoustic color signatures”

Presented Wednesday Morning, December 6, 2017, 9:00-9:20 AM, Salon D

174th ASA Meeting, New Orleans

We make sense of the world by seeing, hearing, smelling, touching, and tasting. Of these modes of sensing, the majority of the information people consume is in the form of visual representations such as photos. Cameras take photos with light, but underwater, sound waves propagate far more efficiently than electromagnetic waves, so we use acoustic transducers, or underwater microphones, to sense fluctuations in acoustic pressure and record them as raw data. These data are then processed into various forms, including sonar imagery such as Figure 1.

Imagery obtained from sonar data is used in many applications for understanding the underwater environment, including fishery monitoring, navigation, and tracking. Generating acoustic imagery creates a geometric representation of information, allowing us to easily understand the content of the sonar data, just as it is easy for us to recognize different shapes in photos and identify objects. Images with better quality, or higher resolution, typically provide more information, and it is often the goal of sonar systems to generate high-resolution images by increasing the size of the sensor, just as larger camera lenses allow us to take better photos [1].

However, this analogy only works when the wavelength of the sound is small compared to the size of the object, just as the wavelength of light is very short compared to most objects we see. When the wavelengths used in sonar are comparable to or longer than the size of underwater objects, but the data are still processed to generate imagery, the resulting images are not easy to understand: the geometric cues that we expect to see are no longer there. These geometric features are important not only for human interpretation, but also for many signal- and image-processing algorithms that are designed to work with geometric features in images. Therefore, different ways of processing the raw acoustic data, in search of other simple, interpretable features, may allow better understanding and quality assessment of the information contained in the data. One such approach is called acoustic color.

Acoustic color, as shown in Figure 2, is a representation that characterizes how an object responds differently as the direction of the incoming sound changes [2]. Instead of describing geometric features such as shape, it describes spectral features, which are the magnitudes and time delays of combinations of sound waves at different frequencies [3], [4]. These characteristics change with the direction of observation and can provide information that is not easily recognizable in sonar images. An analogy would be to strike a drum or a bell and try to guess its shape from the sound it makes. Even with very similar exterior shapes, the sounds they generate have different magnitudes and time delays, making them easily distinguishable.
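To make the idea concrete, the construction of an acoustic color map can be sketched in a few lines: record an echo at each aspect angle, take the magnitude spectrum of each echo, and stack the spectra into a frequency-versus-angle map. The sketch below is a minimal illustration, not the authors' processing chain; the echoes are synthetic, and every parameter (sampling rate, resonance frequencies, angle spacing) is invented for the example.

```python
import numpy as np

fs = 100_000                      # sampling rate in Hz (assumed)
n_samples = 1024                  # samples per recorded echo (assumed)
angles = np.arange(0, 180, 2)     # aspect angles in degrees (assumed grid)

rng = np.random.default_rng(0)

def simulated_echo(angle_deg):
    """Toy stand-in for a recorded echo: two resonances whose
    strengths vary with the direction of the incoming sound."""
    t = np.arange(n_samples) / fs
    a = np.deg2rad(angle_deg)
    sig = (np.abs(np.cos(a)) * np.sin(2 * np.pi * 12_000 * t) +
           np.abs(np.sin(a)) * np.sin(2 * np.pi * 25_000 * t))
    # Window the echo and add a little measurement noise
    return sig * np.hanning(n_samples) + 0.05 * rng.standard_normal(n_samples)

# Acoustic color: magnitude spectrum of each echo, stacked by angle
# (rows = aspect angle, columns = frequency bin)
echoes = np.stack([simulated_echo(a) for a in angles])
color_map = np.abs(np.fft.rfft(echoes, axis=1))
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

print(color_map.shape)   # one row per aspect angle, one column per frequency
```

In this toy model, the map shows one resonance fading and the other strengthening as the aspect angle sweeps from 0 to 180 degrees, which is exactly the kind of direction-dependent spectral structure an acoustic color signature captures.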

Acoustic color is one of many candidate representations we are exploring to better understand the information contained in acoustic data. Various physics- and model-based signal processing methods, each with a different perspective, are being developed and compared to determine which best reveal different mechanisms of acoustic phenomenology. This process may also help us find sound-generating mechanisms we are not yet familiar with.

Figure 1. A sonar image of an object on the sea floor, showing a rectangular shape with clear edges and highlights. Source: ARL/PSU
Figure 2. Acoustic color (left, 2a) and wavenumber spectrum (right, 2b) of a cylinder. They contain the same information, but the wavenumber spectrum may be more amenable to further signal processing and quality assessment. Data source: Applied Physics Laboratory, University of Washington, PONDEX 09/10
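One common way to relate the two representations, sketched below under assumed values, is to map each (frequency, aspect angle) cell of an acoustic color map onto the wavenumber plane using the acoustic wavenumber k = 2πf/c. The sound speed and the frequency/angle grids here are illustrative choices, not values from the measurements shown in Figure 2.

```python
import numpy as np

c = 1500.0                                  # nominal sound speed in seawater, m/s (assumed)
freqs = np.linspace(5e3, 40e3, 64)          # frequency axis of an acoustic color map (assumed)
angles = np.deg2rad(np.arange(0, 180, 2))   # aspect-angle axis in radians (assumed)

# Acoustic wavenumber for each frequency: k = 2*pi*f / c
k = 2 * np.pi * freqs / c

# Map each (angle, frequency) cell to Cartesian wavenumber components;
# re-gridding the acoustic color values onto (kx, ky) yields a
# wavenumber-plane view of the same information.
kx = np.outer(np.cos(angles), k)   # shape: (n_angles, n_freqs)
ky = np.outer(np.sin(angles), k)

print(kx.shape)
```

Because the mapping is just a change of coordinates, no information is gained or lost; the data are only laid out on a grid where spatial-frequency structure is easier to manipulate.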


References:

[1] Callow, Hayden J. “Signal processing for synthetic aperture sonar image enhancement.” (2003).

[2] Kennedy, J. L., et al. “A rail system for circular synthetic aperture sonar imaging and acoustic target strength measurements: Design/operation/preliminary results.” Review of Scientific Instruments 85.1 (2014): 014901.

[3] Williams, Kevin L., et al. “Acoustic scattering from a solid aluminum cylinder in contact with a sand sediment: Measurements, modeling, and interpretation.” The Journal of the Acoustical Society of America 127.6 (2010): 3356-3371.

[4] Morse, Scot F., and Philip L. Marston. “Backscattering of transients by tilted truncated cylindrical shells: Time-frequency identification of ray contributions from measurements.” The Journal of the Acoustical Society of America 111.3 (2002): 1289-1294.
