J. Daniel Park (ARL/PSU)
Daniel A. Cook (GTRI)

Lay-language paper for abstract 3aUW8 “Representation trade-offs for the quality assessment of acoustic color signatures”
presented at the 175th Meeting of the Acoustical Society of America in Minneapolis.

We use sound to learn about the underwater environment because sound waves travel much farther in water than light waves do. Much as you might use a flashlight to find your lost car keys in the woods, pulses of sound are used to ‘light up’ the sea floor. When carefully organized, the echoes returned from the surroundings can be displayed as a sonar image such as the one in Figure 1.

Figure 1. A sonar image is generated from a collection of sound recordings by carefully organizing them into a spatial representation. In it we can see various features of the sea floor, and even the shadows cast by sea-floor textures and objects, much as we would with a flashlight.
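
For readers curious about that ‘careful organizing’ step, here is a minimal sketch, in Python, of one simple way to turn a set of recorded echoes into a spatial image by delay-and-sum backprojection. The function and variable names, the flat geometry, and the assumed sound speed are illustrative only; real sonar image formation is considerably more involved.

    import numpy as np

    def delay_and_sum_image(pings, ping_positions, fs, x_grid, y_grid, c=1500.0):
        # Backproject echoes onto a grid of sea-floor points (delay-and-sum).
        # pings: array (n_pings, n_samples); ping_positions: (x, y) of each recording.
        image = np.zeros((len(y_grid), len(x_grid)))
        t = np.arange(pings.shape[1]) / fs               # time axis of each recording
        for ping, (px, py) in zip(pings, ping_positions):
            for iy, y in enumerate(y_grid):
                for ix, x in enumerate(x_grid):
                    r = np.hypot(x - px, y - py)         # sensor-to-pixel distance
                    delay = 2.0 * r / c                  # two-way travel time
                    image[iy, ix] += np.interp(delay, t, ping)  # echo value at that delay
        return np.abs(image)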

Images are easy for us to understand, but not all of the useful information embedded in the sound recordings is represented well by images. For example, a plastic trash bin and a metal one may have the same cylindrical shape, but the sounds they make when you knock on them are easy to tell apart. This idea leads to a different way of organizing a sound recording, and the resulting representation, shown in Figure 2, is called acoustic color. It shows how different frequencies emerge and fade as you ‘knock’ on the object with sound from different directions.

Figure 2. Acoustic color of a solid aluminum cylinder, showing the strength of each frequency component as the object is viewed from different angles. Source: University of Washington, PONDEX 09/10.
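
As a rough illustration of how a display like Figure 2 can be produced, the sketch below (in Python, assuming one recorded echo per viewing angle) computes a magnitude spectrum for each echo and stacks them into a frequency-versus-angle map. The plain FFT with no windowing or calibration is an assumption for clarity, not the processing used for Figure 2.

    import numpy as np

    def acoustic_color(echoes, fs):
        # echoes: array (n_angles, n_samples), one echo per viewing angle.
        spectra = np.fft.rfft(echoes, axis=1)                 # spectrum of each echo
        freqs = np.fft.rfftfreq(echoes.shape[1], d=1.0 / fs)  # frequency axis in Hz
        level_db = 20.0 * np.log10(np.abs(spectra) + 1e-12)   # strength in decibels
        # level_db can be displayed with angle on one axis and frequency on the other.
        return freqs, level_db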

This representation has the potential to be useful for distinguishing objects that have similar shapes in visual imagery but noticeably different acoustic spectral responses. However, it is not easy to extract information that helps discriminate between objects from a display like Figure 2. One reason is that the object signatures are weak and dispersed, which makes them difficult to organize mentally and draw conclusions from. We therefore want to explore other ways of organizing the acoustic data so that it becomes intuitive to ‘see’ what has yet to be uncovered about the environment. Certain animals, such as dolphins and bats, are able to exploit complicated acoustic echoes to hunt prey and understand their surroundings.

One representation under consideration is time-varying acoustic color, shown in Video 1. It lets us observe how the acoustic color evolves over time, at the cost of some ability to distinguish frequencies precisely. This helps one understand how different spectral signatures appear, change, and eventually fade out. This short-timescale evolution is important information that is not easily extracted from the conventional acoustic color representation.

<Video 1 missing>
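
A time-varying acoustic color display can be approximated with a short-time Fourier transform, which trades frequency precision for the ability to see when each component appears and fades. The sketch below uses SciPy’s spectrogram with an arbitrary example window length; the actual processing behind Video 1 may differ.

    import numpy as np
    from scipy.signal import spectrogram

    def time_varying_acoustic_color(echo, fs, window_samples=256):
        # echo: one recorded time series. A longer window sharpens frequency
        # resolution but blurs time, and vice versa (the trade-off noted above).
        freqs, times, power = spectrogram(echo, fs=fs,
                                          nperseg=window_samples,
                                          noverlap=window_samples // 2)
        return times, freqs, 10.0 * np.log10(power + 1e-12)  # power in decibels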

Another representation under consideration is symbolic time series analysis. By representing a short segment of the raw time series with a symbol, and assigning the same symbol to segments that are similar, the time series is transformed into a sequence of symbols, as illustrated in Figure 3. This lets us apply tools developed for sequence analysis, such as those used in DNA sequencing, to compare and recognize patterns in the sequence. It may prove to be an effective way to extract underlying patterns from acoustic data that are not as easily accessible through more common ways of visualizing the data.

Figure 3. Each time series is transformed into a sequence of symbols, which can then be analyzed further to characterize temporal patterns. Tools developed for other applications, such as DNA sequencing, can extract information that is not as easily accessible from more common ways of visualizing the data.
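
To make the symbolization step concrete, here is a minimal sketch loosely in the spirit of SAX (Symbolic Aggregate approXimation): the series is normalized, averaged over short segments, and each average is mapped to a letter. The segment length, alphabet size, and breakpoints are illustrative assumptions, not the method used to produce Figure 3.

    import numpy as np
    from scipy.stats import norm

    def symbolize(x, segment_len=16, alphabet="abcd"):
        # Normalize, average each segment, then map the averages to letters.
        x = (np.asarray(x, float) - np.mean(x)) / (np.std(x) + 1e-12)
        n_seg = len(x) // segment_len
        means = x[:n_seg * segment_len].reshape(n_seg, segment_len).mean(axis=1)
        # Breakpoints that split a standard normal into equal-probability bins.
        breakpoints = norm.ppf(np.linspace(0.0, 1.0, len(alphabet) + 1)[1:-1])
        bins = np.digitize(means, breakpoints)                # bin index per segment
        return "".join(alphabet[b] for b in bins)

Once two recordings are reduced to symbol strings, off-the-shelf sequence comparison tools (for example, edit distance or alignment methods of the kind used on DNA sequences) can be applied to measure how similar their temporal patterns are.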
