5aMU1 – The inner ear as a musical instrument

Brian Connolly – bconnolly1987@gmail.com
Music Department
Logic House
South Campus
Maynooth University
Co. Kildare
Ireland

Popular version of paper 5aMU1, “The inner ear as a musical instrument”
Presented Friday morning, November 6, 2015, 8:30 AM, Grand Ballroom 2
170th ASA Meeting, Jacksonville
See also: The inner ear as a musical instrument – POMA

(please use headphones for listening to all audio samples)

Did you know that your ears can sing? You may be surprised to hear that they can, in fact, make particularly good performers, and recent research in psychoacoustics, loosely defined as the study of the perception of sound, has revealed the true potential of the ears within musical creativity.

Figure 1: The Ear


A good performer can carry out the required tasks reliably and without errors. In many respects the ear is exactly that: its responses to certain sounds are so straightforward that its behaviour can be predicted and therefore easily controlled. Within the listening system, the inner ear can behave as a highly effective instrument that creates its own sounds, and many experimental musicians have been using this ability to turn listeners' ears into participating performers in the realization of their music.

One of the most exciting avenues of musical creativity is the psychoacoustic phenomenon known as otoacoustic emissions. These are tones created within the inner ear when it is exposed to certain sounds. One example of these emissions is 'difference tones.' When two clear frequencies enter the ear at, say, 1,000 Hz and 1,200 Hz, the listener will hear these two tones, as expected, but the inner ear will also create its own third frequency at 200 Hz, because this is the mathematical difference between the two original tones. The ear literally sends a 200 Hz tone back out through the ear canal, and this sound can be detected by an in-ear microphone; doctors carrying out hearing tests on babies use this process as an integral part of their examinations. This means that composers can create certain tones within their work and predict that the listeners' ears will add an extra dimension to the music upon hearing it. Within certain loudness and frequency ranges, listeners will even be able to feel their ears buzzing in response to these stimulus tones! This makes for an exciting new layer to contemporary music making and listening.
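
For readers who like to tinker, here is a minimal sketch (in Python with NumPy; the filename, level, and duration are illustrative choices, not those of the actual samples below) of how such a two-tone stimulus can be synthesized. Note that no 200 Hz component is written into the file; a healthy inner ear generates it.

    import wave
    import numpy as np

    RATE = 44100                 # samples per second
    DUR = 5.0                    # seconds
    t = np.arange(int(RATE * DUR)) / RATE

    # two primary tones; the cochlea itself adds a distortion product near
    # 1200 - 1000 = 200 Hz that is NOT present in this file
    signal = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
    pcm = (0.9 * signal * 32767).astype(np.int16)

    with wave.open("1000and1200.wav", "wb") as f:
        f.setnchannels(1)        # mono: both tones reach each ear together
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(RATE)
        f.writeframes(pcm.tobytes())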

First, listen to this tone. It is very close to the sound your ear will sing back during the second example.

Insert – 200.mp3

Here is the second sample, containing just two tones at 1,000 Hz and 1,200 Hz. See if you can also hear the very low, buzzing difference tone, which is not being sent into your ear: it is being created in your ear and sent back out towards your headphones!

Insert – 1000and1200.mp3

If you could hear the 200 Hz difference tone in the previous example, have a listen to this much more complex demonstration, which will make your ears sing a well-known melody. Try not to listen to the louder impulsive sounds, and see if you can hear your ears humming along, performing the tune of Twinkle, Twinkle, Little Star at a much lower volume! (A sketch of how such a demonstration can be constructed follows the sample below.)

(NB: The difference tones will start after about 4 seconds of impulses)

Insert – Twinkle.mp3
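
Here is a hedged sketch (in Python with NumPy; the carrier frequency, note values, note length, and filename are illustrative guesses, not the parameters of the actual demo) of how a difference-tone melody like this can be built: each note is a pair of high tones whose frequency difference equals the pitch the listener's ear is meant to supply.

    import wave
    import numpy as np

    RATE = 44100
    CARRIER = 3000.0                      # assumed carrier; the demo's may differ
    # difference frequencies for the opening of "Twinkle, Twinkle" [Hz]
    melody = [262, 262, 392, 392, 440, 440, 392]

    frames = []
    for f_diff in melody:
        t = np.arange(int(RATE * 0.4)) / RATE
        # two audible high tones; the ear adds the f_diff "melody" tone itself
        note = 0.5 * np.sin(2 * np.pi * CARRIER * t) \
             + 0.5 * np.sin(2 * np.pi * (CARRIER + f_diff) * t)
        frames.append(note)
    signal = np.concatenate(frames)
    pcm = (0.9 * signal / np.max(np.abs(signal)) * 32767).astype(np.int16)

    with wave.open("twinkle_difference_tones.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(pcm.tobytes())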

Auditory beating is another phenomenon that has caught the interest of many contemporary composers. In the example below you will hear a 400 Hz tone in your left ear and a 405 Hz tone in your right ear.

First, play the sample below with the headphones in just one ear at a time, not both together. Listening this way, you will hear two clear, steady tones.

Insert – 400and405beating.mp3

Now see what happens when you place both earphones in simultaneously. You will be unable to hear the two tones separately. Instead, you will hear a fused tone that beats five times per second. Each of your ears sends electrical signals telling the brain what frequency it is responding to, but these two frequencies are too close together, so a perceptual confusion occurs: a single combined frequency is perceived, beating at a rate equal to the mathematical difference between the two tones.
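
For comparison with the difference-tone sketch above, here is a minimal stereo version (Python with NumPy; filename, level, and duration are illustrative): 400 Hz goes only to the left channel and 405 Hz only to the right, so any 5 Hz beating you hear with both earphones in is constructed by your auditory system, not present in either channel.

    import wave
    import numpy as np

    RATE = 44100
    DUR = 8.0
    t = np.arange(int(RATE * DUR)) / RATE

    left = 0.8 * np.sin(2 * np.pi * 400 * t)    # left ear only
    right = 0.8 * np.sin(2 * np.pi * 405 * t)   # right ear only
    stereo = np.empty(2 * t.size)
    stereo[0::2], stereo[1::2] = left, right    # interleave [L, R] frames
    pcm = (stereo * 32767).astype(np.int16)

    with wave.open("400and405beating.wav", "wb") as f:
        f.setnchannels(2)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(pcm.tobytes())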

Auditory beating becomes particularly interesting in music written for surround-sound environments, where the listener's proximity to the various speakers plays a key role. Simply turning one's head in these scenarios can entirely change the colour of the sound, as different layers of beating alter its overall timbre.

So how can all of this be meaningful to composers and listeners alike? The examples shown here are intended as basic proofs of concept more than anything else. In the much more complex world of music composition, the scope for employing such material is seemingly endless. Considering the ear as a musical instrument gives the listener the opportunity to engage with sound and music in a more intimate way than ever before.

Brian Connolly’s compositions which explore such concepts in greater detail can be found at www.soundcloud.com/brianconnolly-1

4aEA10 – Preliminary evaluation of the sound absorption coefficient of a thin coconut coir fiber panel for automotive applications

Key F. Lima – keyflima@gmail.com
Pontifical Catholic University of Paraná
Curitiba, Paraná, Brazil

Popular version of paper 4aEA10, “Preliminary evaluation of the sound absorption coefficient of a thin coconut fiber panel for automotive applications”
Presented Thursday morning, November 5, 2015, 11:15 AM, Orlando Room
170th ASA Meeting, Jacksonville, FL

Absorbent materials are fibrous or porous and must be good acoustic dissipaters. As sound propagates through an absorbent medium, multiple reflections and the friction of the air within it convert sound energy into thermal energy. Surface treatments with absorbent materials are widely used to reduce reverberation in enclosed spaces or to increase the sound transmission loss of acoustic panels. In addition, these materials can be applied in acoustic filters to increase their efficiency. Sound absorption depends on the excitation frequency and is more effective at high frequencies. Natural fibers such as coconut coir have great potential for use as sound absorbent materials. Because these fibers are agricultural waste, panels manufactured from them are natural products and therefore an economical and interesting option. This work compares the sound absorption coefficient of a thin coconut coir fiber panel with that of a composite panel made of fiberglass, expanded polyurethane foam, non-woven fabric, and polyester fabric, which is used in the automotive industry as a roof trim. The sound absorption coefficient was evaluated with the impedance tube technique.

In 1980, Chung and Blaser evaluated the normal-incidence sound absorption coefficient through the transfer function method. The standards ASTM E1050 and ISO 10534-2 are based on Chung and Blaser's method (Figure 1). In summary, this method uses an impedance tube with a sound source placed at one end and, at the other, the absorbent material backed by a rigid wall. The stationary sound wave pattern is decomposed into forward- and backward-traveling components by measuring the sound pressure simultaneously at two spaced locations in the tube's sidewall, where two microphones are located (Figure 1).


Figure 1. Impedance Tube

The wave decomposition allows the determination of the complex reflection coefficient R(f), from which the complex acoustic impedance and the normal-incidence sound absorption coefficient (α) of an absorbent material can be determined. Both R(f) and α are calculated from the transfer function H_{12} between the two microphones through

$$R(f) = \frac{H_{12} - e^{-i k_0 s}}{e^{i k_0 s} - H_{12}}\, e^{2 i k_0 x_1},$$

where s is the distance between the microphones, x_1 is the distance between the microphone farthest from the sample and the sample, i is the imaginary unit, and k_0 is the wave number of the air. Once R(f) is known, the coefficient α is easily obtained from

$$\alpha = 1 - \left| R(f) \right|^2.$$
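
A minimal sketch (in Python with NumPy; the function name and geometry are hypothetical, and the plain FFT ratio used to estimate H12 is a simplification, since practical implementations average cross-spectra over many noise records) of how these two formulas turn the two microphone signals into an absorption curve:

    import numpy as np

    def absorption_coefficient(p1, p2, rate, s, x1, c=343.0):
        """p1: signal at the microphone farthest from the sample; p2: nearest.
        s: microphone spacing [m]; x1: far-microphone-to-sample distance [m]."""
        freqs = np.fft.rfftfreq(len(p1), d=1.0 / rate)
        P1, P2 = np.fft.rfft(p1), np.fft.rfft(p2)
        H12 = P2 / (P1 + 1e-20)            # transfer function between the mics
        k0 = 2 * np.pi * freqs / c         # wave number of the air
        R = (H12 - np.exp(-1j * k0 * s)) / (np.exp(1j * k0 * s) - H12) \
            * np.exp(2j * k0 * x1)         # complex reflection coefficient
        alpha = 1.0 - np.abs(R) ** 2       # normal-incidence absorption
        return freqs, alpha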

In this work, eight samples of coconut coir fiber and eight samples of the composite panel used in the automotive industry were tested (Figures 2 and 3). The material properties are shown in Table 1.

Figure 2. Coconut coir fiber samples


Figure 3. Composite panel structure.

Table 1. Material Properties.
Coconut Fiber

Sample   diameter [mm]   thickness [mm]   mass [g]   density [kg/m³]
1        28.25           5.17             0.67       649.5
2        28.20           5.04             0.62       618.8
3        28.20           4.93             0.60       612.6
4        28.35           5.09             0.69       674.7
5        100.43          4.98             8.89       708.0
6        100.43          4.84             9.73       797.7
7        100.73          5.34             9.64       712.1
8        100.45          4.79             9.13       755.2

Composite Panel

Sample   diameter [mm]   thickness [mm]   mass [g]   density [kg/m³]
1        28.05           5.78             0.41       360.6
2        28.08           5.66             0.42       376.6
3        28.15           5.59             0.42       379.6
4        28.23           5.54             0.44       398.8
5        99.55           5.86             5.40       371.9
6        99.55           6.20             5.54       360.9
7        99.68           6.06             5.57       370.4
8        99.55           5.99             5.62       378.9

A random noise signal with a frequency band between 200 Hz and 5,000 Hz was used to evaluate α. Figure 4 shows the mean normal-incidence absorption coefficient obtained from the measurements.


Figure 4. Comparison of the normal-incidence absorption coefficient (α)

The results show that the composite panel has a higher sound absorption coefficient than the coconut fiber panel. To improve the acoustical efficiency of the coconut fiber panel, a filling material with the same effect as the polyurethane foam in the composite panel needs to be added.

REFERENCES
Chung, J. Y. and Blaser, D. A. (1980). "Transfer function method of measuring in-duct acoustic properties – I: Theory," J. Acoust. Soc. Am. 68, 907-913.

Chung, J. Y. and Blaser, D. A. (1980). "Transfer function method of measuring in-duct acoustic properties – II: Experiment," J. Acoust. Soc. Am. 68, 913-921.

ASTM E1050:2012. "Standard test method for impedance and absorption of acoustical materials using a tube, two microphones and a digital frequency analysis system," American Society for Testing and Materials, Philadelphia, PA, 2012.

ISO 10534-2:1998. "Determination of sound absorption coefficient and impedance in impedance tubes – Part 2: Transfer-function method," International Organization for Standardization, Geneva, 1998.

4pEA4 – “See” subsurface soils using surface waves

Zhiqu Lu – zhiqulu@olemiss.edu
National Center for Physical Acoustics, The University of Mississippi,
1 Chucky Mullins,
University, MS, 38677

Lay language paper for 4pEA4
Presented Thursday afternoon, November 5, 2015
170th ASA Meeting, Jacksonville

Within a few meters beneath the earth's surface, three distinctive soil layers form: a top dry and hard layer, a middle moist and soft region, and a deeper zone where the mechanical strength of the soil increases with depth. Information about these subsurface soils is needed for agricultural, environmental, civil engineering, and military applications. A seismic surface wave method has recently been developed to obtain such information non-invasively (Lu, 2014; Lu, 2015). The method, known as multichannel analysis of surface waves (MASW) (Park et al., 1999; Xia et al., 1999), consists of three essential parts: surface wave generation and collection (Figure 1), spectrum analysis, and an inversion process. Implementing the technique requires sophisticated sensor technology, wave propagation modeling, and inversion algorithms.


Figure 1. The experimental setup for the MASW method
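
To give a flavor of the spectrum-analysis step, here is a minimal sketch (in Python with NumPy; the function name and inputs are hypothetical) of the phase-shift transform commonly used in MASW (after Park et al., 1999): for each trial phase velocity, the propagation phase is undone at every receiver offset and the records are stacked, so the stack peaks where the trial velocity matches the Rayleigh wave's true phase velocity at that frequency.

    import numpy as np

    def dispersion_image(traces, offsets, rate, velocities):
        """traces: (n_receivers, n_samples) geophone records;
        offsets: receiver distances from the source [m]."""
        freqs = np.fft.rfftfreq(traces.shape[1], d=1.0 / rate)
        spectra = np.fft.rfft(traces, axis=1)
        spectra /= np.abs(spectra) + 1e-12          # keep phase, drop amplitude
        image = np.zeros((len(velocities), len(freqs)))
        for iv, v in enumerate(velocities):
            # undo the propagation phase omega*x/v at each offset, then stack
            shifts = np.exp(1j * 2 * np.pi * np.outer(offsets, freqs) / v)
            image[iv] = np.abs((spectra * shifts).sum(axis=0))
        return freqs, image

Picking the ridge of this image as a function of frequency yields the dispersion curve that the inversion step then converts into a shear wave velocity profile.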

The technique makes use of a characteristic of one type of surface wave, the so-called Rayleigh wave, which travels along the earth's surface within a depth of one and a half wavelengths. The short-wavelength components of the surface waves therefore carry information about shallow soil, whereas the longer-wavelength components provide the properties of deeper soil (Figure 2).


Figure 2. Rayleigh wave propagation

The outcome of the MASW method is a vertical soil profile, i.e., the acoustic shear (S) wave velocity as a function of depth (Figure 3).


Figure 3. A typical soil profile

By repeating the MASW measurements either spatially or temporally, one can measure and "see" the spatial and temporal variations of subsurface soils. Figure 4 shows a typical vertical cross-section image in which the intensity represents the shear wave velocity. From this image, the three layers mentioned above can be identified.


Figure 4. A typical example of soil vertical cross-section image


Figure 5. A vertical cross-section image showing the presence of a fragipan layer

Figure 5 displays another two-dimensional image, in which a middle high-velocity zone (red area) appears. This high-velocity zone is a geological anomaly known as a fragipan, a naturally occurring dense and hard soil layer (Lu et al., 2014). Detecting fragipans is important in agricultural land management.

The MASW method can also be applied to monitor the influence of weather on soil properties (Lu, 2014). Figure 6 shows the temporal variations of the underground soil, the result of a long-term survey conducted in 2012. By drawing a vertical line and moving it from left to right along the time-index axis, one can follow the evolution of the soil profile under weather effects. In particular, high-velocity zones appeared in the summer of 2012, reflecting very dry soil conditions.


Figure 6. The temporal variations of the soil profile due to weather effects

Lu, Z., 2014. Feasibility of using a seismic surface wave method to study seasonal and weather effects on shallow surface soils. Journal of Environmental & Engineering Geophysics, DOI: 10.2113/JEEG19.2.71, Vol. 19, 71–85.

Lu, Z. 2015. Self-adaptive method for high frequency multi-channel analysis of surface wave method, Journal of Applied Geophysics, Vol. 121, 128-139. http://dx.doi.org/10.1016/j.jappgeo.2015.08.003

Lu, Z., Wilson, G.V., Hickey, C.J., 2014. Imaging a soil fragipan using a high-frequency MASW method. In Proceedings of the Symposium on the Application of Geophysics to Engineering and Environmental Problems (SAGEEP 2014), Boston, MA., Mar. 16-20.

Park, C.B., Miller, R.D., Xia, J., 1999. Multichannel analysis of surface waves. Geophysics, Vol. 64, 800-808.

Xia, J., Miller, R.D., Park, C.B., 1999. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves. Geophysics, Vol. 64, 691-700.

3aSA7 – Characterizing defects with nonlinear acoustics

Pierre-Yves Le Bas – pylb@lanl.gov (1), Brian E. Anderson (1,2), Marcel Remillieux (1), Lukasz Pieczonka (3), TJ Ulrich (1)

(1) Geophysics Group EES-17, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
(2) Department of Physics and Astronomy, Brigham Young University, N377 Eyring Science Center, Provo, UT 84601, USA
(3) AGH University of Science and Technology, Krakow, Poland

Popular version of paper 3aSA7, “Elasticity Nonlinear Diagnostic method for crack detection and depth estimation”
Presented Wednesday morning, November 4, 2015, 10:20 AM, Daytona room
170th ASA Meeting, Jacksonville

One common problem in industry is detecting and characterizing defects, especially at an early stage. Small cracks are difficult to detect with current techniques, so it is customary to replace parts after an estimated lifetime instead of keeping them in service until they are actually approaching failure. Detecting early-stage damage before it becomes structurally dangerous is a challenging problem of great economic importance. This is where nonlinear acoustics can help, because it is extremely sensitive to tiny cracks and thus to early damage. The principle of nonlinear acoustics is easily understood if you consider a bell. If the bell is intact, it will ring with an agreeable tone determined by its geometry. If the bell is cracked, one will hear a dissonant sound, which is due to nonlinear phenomena. Thus, if an object is struck, it is possible to determine, by listening to the tone(s) produced, whether or not it is damaged. Here the same principle is used in a more quantitative way and, usually, at ultrasonic frequencies. Ideally, one would also like to know where the damage is and how it is oriented. A crack growing through an object could be the most important kind to detect, since it could lead to the object splitting in half, but in other circumstances chipping might matter more, so knowing the orientation of a crack is critical in the health assessment of a part.

To localize and characterize a defect, time reversal is a useful technique. Time reversal can be used to focus vibration in a chosen direction: a sample can be made to vibrate perpendicular to its surface or parallel to it, motions referred to as out-of-plane and in-plane, respectively. The movie below shows how time reversal is used to focus energy: a source broadcasts a wave from the back of a plate, and signals are recorded at the edges by other transducers. The signals from this initial phase are then flipped in time and broadcast from all the edge receivers. Time reversal then dictates that these waves focus at the initial source location.

 

Video 1: time reversal focusing (video file missing)
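
A toy numerical illustration (Python with NumPy; the impulse responses are random stand-ins, not measured data) of why the flipped-in-time rebroadcast refocuses: each edge channel contributes the autocorrelation of its own impulse response at the source, and autocorrelations all peak at the same instant, so the channels add up coherently only at the focal time.

    import numpy as np

    rng = np.random.default_rng(0)
    h = rng.standard_normal((8, 512))   # stand-ins for source-to-edge impulse responses
    # rebroadcasting the time-reversed record sends h[i](-t) back through h[i](t);
    # at the source each channel contributes its autocorrelation
    focus = sum(np.convolve(h[i], h[i, ::-1]) for i in range(8))
    print(np.argmax(np.abs(focus)))     # -> 511: all channels peak together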

Time reversal can also do more than the simple example in the video. Making use of the reciprocity principle, i.e., that a signal traveling from A to B is identical to the same signal traveling from B to A, the source at the back of the plate can be replaced by a receiver, and the initial broadcast can be done from the side. This means time reversal can focus energy anywhere a signal can be recorded; with a laser as the receiver, that means anywhere on the surface of an object.

In addition, the dominant vibration direction of the focus, e.g., in-plane or out-of-plane, can be selected by recording specific directions of motion in the initial signals. If, during the first step of the time reversal process, the receiver records in-plane vibration, the focus will be primarily in the in-plane direction; similarly, if the receiver records out-of-plane vibration, the focus will be essentially out-of-plane. This is important because the nonlinear response of a crack depends on the orientation of the vibration exciting it. To fully characterize a sample in terms of crack presence and orientation, time reversal is used to focus energy at defined locations, and at each point the nonlinear response is quantified. This can be done for any orientation of the focused wave; to cover all possibilities, three scans are usually done in three orthogonal directions.

Figure 2 shows three scans, in the x, y, and z directions, of the same sample: a glass plate glued onto an aluminum plate. The sample has two defects, a delamination due to a lack of glue between the two plates (in the (x,y) plane) at the top of the scan area, and a crack perpendicular to the surface in the glass plate (in the (x,z) plane) in the middle of the scan area.


Figure 2. Nonlinear component of the time reversal focus at each point of a scan grid with wave focused in the x, y and z direction (from left to right)

As can be seen in these scans, the delamination in the (x,y) plane is visible only when the wave is focused in the z direction, while the crack in the (x,z) plane is visible only in the y scan. This means that a crack responds most nonlinearly when excited in the direction perpendicular to its main orientation. So by scanning with three different orientations of the focused vibration, one should be able to reconstruct the orientation of a crack.

Another feature of the time reversal focus is that its spatial extent is about one wavelength of the focused wave, which means the higher the frequency, the smaller the spot size, i.e., the area of focused energy. One might then think that a higher frequency always gives better resolution and is therefore always best. However, the extent of the focus is also the depth that this technique can probe, so a lower frequency means a deeper investigation and thus a more complete characterization of the sample. There is therefore a tradeoff between depth of investigation and resolution. By doing several scans at different frequencies, one can extract additional information about a crack. For example, Figure 3 shows two scans of a metallic sample that differ only in the frequency of the focused wave.
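
As a rough rule of thumb (assuming a roughly constant wave speed c in the part, an assumption not stated in the paper), both the focal spot size and the probing depth scale with the wavelength

$$\lambda = \frac{c}{f},$$

so halving the frequency from 200 kHz to 100 kHz roughly doubles both, consistent with the 5 mm and 10 mm investigation depths mentioned below.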


Figure 3. From left to right: nonlinear component of the time reversal focus at each point of a scan grid at 200 kHz and at 100 kHz, and a photograph of the sample from its side.

At 200 kHz it looks as though there is only a thin crack, while at 100 kHz the crack extends farther toward the bottom of the scan, more than doubling in extent, so there is more at play than a resolution difference. At 200 kHz the depth of investigation is about 5 mm; at 100 kHz it is about 10 mm. Looking at the side of the sample in the right panel of Figure 3, the crack is perpendicular to the surface for about 6 mm and then dips severely. At 200 kHz the scan is only sensitive to the part perpendicular to the surface, while at 100 kHz the scan also picks up the dipping part. Thus, performing several scans at different frequencies can reveal the depth profile of a crack.

In conclusion, using time reversal to focus energy in several directions and at different frequencies, and studying the nonlinear component of this focus, can yield a characterization of a crack, including its orientation and depth profile, something currently available only through techniques such as X-ray CT, which are not as easily deployable as ultrasonic ones.

1pAB6 – Long-lasting suppression of spontaneous firing in inferior colliculus neurons: implication to the residual inhibition of tinnitus

A.V. Galazyuk – agalaz@neomed.edu
Northeast Ohio Medical University

Popular version of poster 1pAB6
Presented Monday morning, November 2, 2015, 3:25 PM – 3:45 PM, City Terrace 9
170th ASA Meeting, Jacksonville

More than one hundred years ago, the US clinician James Spalding first described an interesting phenomenon he observed in tinnitus patients suffering from perceived phantom ringing [1]. Many of his patients reported that a loud, long-lasting sound produced by a violin or piano made their tinnitus disappear for about a minute after the sound was presented. Nearly 70 years later, the first scientific study was conducted to investigate how this phenomenon, termed residual inhibition, provides tinnitus relief [2]. Further research has explored the basic properties of this "inhibition of ringing" and which sounds are most effective at producing it [3].

The research indicated that indeed, residual inhibition is an internal mechanism for temporary tinnitus suppression. However, at present, little is known about the neural mechanisms underlying residual inhibition. Increased knowledge about residual inhibition may not only shed light on the cause of tinnitus, but also may open an opportunity to develop an effective tinnitus treatment.

For the last four years we have studied a fascinating phenomenon of sound processing in neurons of the auditory system that may explain what causes residual inhibition in tinnitus patients. After presenting a sound to a normal-hearing animal, we observed that the firing activity of auditory neurons is suppressed [4, 5]. There are several striking similarities between this suppression in the normal auditory system and the residual inhibition observed in tinnitus patients:

  1. Relatively loud sounds trigger both the neuronal firing suppression and residual inhibition.
  2. Both the suppression and residual inhibition last for the same amount of time after a sound, and increasing the duration of the sound makes both phenomena last longer.
  3. Simple tones produce more robust suppression and residual inhibition compared to complex sounds or noises.
  4. Multiple attempts to induce suppression or residual inhibition within a short timeframe make both much weaker.

These similarities make us believe that the normal sound-induced suppression of spontaneous firing is an underlying mechanism of residual inhibition.

The most unexpected outcome of our research is that residual inhibition, a phenomenon studied in tinnitus patients, appears to rest on a natural feature of sound processing, because the suppression was observed both in normal-hearing mice and in mice with tinnitus. If so, why do people with tinnitus experience residual inhibition whereas those without tinnitus do not?

It is well known that hyperactivity in auditory regions of the brain is linked to tinnitus: in tinnitus, auditory neurons have elevated spontaneous firing rates [6]. The brain interprets this hyperactivity as phantom sound. Therefore, suppression of this increased activity by a loud sound should eliminate or reduce the tinnitus. Normal-hearing people also experience this suppression after loud sounds. However, the spontaneous firing of their auditory neurons is already low enough that they never perceive the phantom ringing that tinnitus sufferers do, so although the suppression occurs, it is not perceived.

Most importantly, our research has helped us identify a group of drugs that can alter this suppression response [5], as well as the spontaneous firing of the auditory neurons responsible for tinnitus. These drugs will be further investigated in our future research to develop effective tinnitus treatments.

This research was supported by the research grant RO1 DC011330 from the National Institute on Deafness and Other Communication Disorders of the U.S. Public Health Service.

[1] Spalding J.A. (1903). Tinnitus, with a plea for its more accurate musical notation. Archives of Otology, 32(4), 263-272.

[2] Feldmann H. (1971). Homolateral and contralateral masking of tinnitus by noise-bands and by pure tones. International Journal of Audiology, 10(3), 138-144.

[3] Roberts L.E. (2007). Residual inhibition. Progress in Brain Research, Tinnitus: Pathophysiology and Treatment, Elsevier, 166, 487-495.

[4] Voytenko S.V., Galazyuk A.V. (2010). Suppression of spontaneous firing in inferior colliculus neurons during sound processing. Neuroscience, 165, 1490-1500.

[5] Voytenko S.V., Galazyuk A.V. (2011). mGluRs modulate neuronal firing in the auditory midbrain. Neuroscience Letters, 492, 145-149.

[6] Eggermont J.J., Roberts L.E. (2015). Tinnitus: animal models and findings in humans. Cell and Tissue Research, 361, 311-336.