5aMU1 – The inner ear as a musical instrument – Brian Connolly

The inner ear as a musical instrument

 

Brian Connolly – bconnolly1987@gmail.com
Music Department
Logic House
South Campus
Maynooth University
Co. Kildare
Ireland

 

Popular version of paper 5aMU1, “The inner ear as a musical instrument”
Presented Friday morning, November 6, 2015, 8:30 AM, Grand Ballroom 2
170th ASA meeting Jacksonville

 

 

(please use headphones for listening to all audio samples)

 

Did you know that your ears can sing? You may be surprised to hear that they, in fact, have the capacity to be particularly good performers, and recent psychoacoustics research has revealed the true potential of the ears within musical creativity. ‘Psychoacoustics’ is loosely defined as the study of the perception of sound.

 

Figure 1: The Ear

ear

 

 

A good performer can carry out required tasks reliably and without errors. In many respects, the ear’s very straightforward responses to certain sounds make it a very reliable performer: its behaviour can be predicted, and so it is easily controlled. Within the listening system, the inner ear can behave as a highly effective instrument that creates its own sounds, and many experimental musicians have been using these sounds to turn listeners’ ears into participating performers in the realization of their music.

One of the most exciting avenues of musical creativity is the psychoacoustic phenomenon known as otoacoustic emissions. These are tones created within the inner ear when it is exposed to certain sounds. One example of these emissions is the ‘difference tone.’ When two clear frequencies enter the ear at, say, 1,000 Hz and 1,200 Hz, the listener will hear these two tones, as expected, but the inner ear will also create its own third frequency at 200 Hz, the mathematical difference between the two original tones. The ear literally sends a 200 Hz tone back out through the ear, and this sound can be detected by an in-ear microphone; doctors use this process as an integral part of hearing tests on babies. This means that composers can create certain tones within their work and predict that the listeners’ ears will add their own extra dimension to the music upon hearing it. Within certain loudness and frequency ranges, listeners will also be able to feel their ears buzzing in response to these stimulus tones! This makes for a very exciting new layer in contemporary music making and listening.
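For readers who want to experiment, below is a minimal Python sketch (not part of the original demonstrations) that synthesizes a two-tone stimulus like the one described above; the sample rate, duration, levels, and file name are illustrative assumptions.

import numpy as np
from scipy.io import wavfile

fs = 44100                       # sample rate, Hz
t = np.arange(0, 5.0, 1.0 / fs)  # 5 seconds of samples

# Two pure tones at 1,000 Hz and 1,200 Hz; within suitable loudness ranges,
# the listener's inner ear may add its own ~200 Hz difference tone.
stimulus = 0.4 * np.sin(2 * np.pi * 1000 * t) + 0.4 * np.sin(2 * np.pi * 1200 * t)

wavfile.write("difference_tone_stimulus.wav", fs, stimulus.astype(np.float32))

Played over headphones at a moderate level, this reproduces the kind of stimulus used in the second audio sample below.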

First listen to this tone. This is very close to the sound your ear will sing back during the second example.

Insert – 200.mp3

Here is the second sample containing just two tones at 1,000 Hz and 1,200 Hz. See if you can also hear the very low, buzzing difference tone, which is not being sent into your ear; it is being created in your ear and sent back out towards your headphones!

Insert – 1000and1200.mp3

If you could hear the 200 Hz difference tone in the previous example, have a listen to this much more complex demonstration, which will make your ears sing a well-known melody. Try not to focus on the louder impulsive sounds and see if you can hear your ears humming along to the tune of Twinkle, Twinkle, Little Star at a much lower volume!

(NB: The difference tones will start after about 4 seconds of impulses)

Insert – Twinkle.mp3

Auditory beating is another phenomenon which has caught the interest of many contemporary composers. In the example below you will hear 400 Hz in your left ear and 405 Hz in your right ear.

First play the sample below with just one earphone in at a time, not both together. You will hear two clear, steady tones when you listen to them separately.

Insert – 400and405beating.mp3

Now see what happens when you place both earphones in simultaneously. You will be unable to hear the two tones separately. Instead, you will hear a fused tone which beats five times per second. This is because each of your ears sends electrical signals to the brain telling it what frequency it is responding to, but these two frequencies are too close together, so a perceptual confusion occurs: a single combined tone is perceived, beating at a rate equal to the mathematical difference between the two frequencies.
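As a companion to the sample above, here is a brief Python sketch (again, not the original audio file) that generates the dichotic stimulus described: 400 Hz in the left channel and 405 Hz in the right; all parameter values are illustrative.

import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(0, 10.0, 1.0 / fs)

left = 0.4 * np.sin(2 * np.pi * 400 * t)   # left ear: 400 Hz
right = 0.4 * np.sin(2 * np.pi * 405 * t)  # right ear: 405 Hz

# Stack into a stereo signal; over headphones the fused percept beats about
# five times per second, the difference between the two frequencies.
stereo = np.stack([left, right], axis=1).astype(np.float32)
wavfile.write("beating_400_405.wav", fs, stereo)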

Auditory beating becomes particularly interesting in pieces of music written for surround-sound environments, where the listener’s proximity to the various speakers plays a key role. Simply turning one’s head in these scenarios can often entirely change the colour of the sound, as different layers of beating alter the overall timbre.

So how can all of this be meaningful to composers and listeners alike? The examples shown here are intentionally basic and serve as proofs of concept more than anything else. In the much more complex world of music composition, the scope for employing such material is seemingly endless. Considering the ear as a musical instrument gives the listener the opportunity to engage with sound and music in a more intimate way than ever before.

Brian Connolly’s compositions which explore such concepts in greater detail can be found at www.soundcloud.com/brianconnolly-1

2aSCb3 – How would you sketch a sound with your hands? – Hugo Scurto, Guillaume Lemaitre, Jules Françoise, Patrick Susini, Frédéric Bevilacqua

“How would you sketch a sound with your hands?”

Hugo Scurto – Hugo.Scurto@ircam.fr
Guillaume Lemaitre – Guillaume.Lemaitre@ircam.fr
Jules Françoise – Jules.Francoise@ircam.fr
Patrick Susini – Patrick.Susini@ircam.fr
Frédéric Bevilacqua – Frederic.Bevilacqua@ircam.fr
Ircam
1 place Igor Stravinsky
75004 Paris, France

 

Popular version of paper 2aSCb3, “Combining gestures and vocalizations to imitate sounds”
Presented Tuesday morning, November 3, 2015, 10:30 AM in Grand Ballroom 8
170th ASA Meeting, Jacksonville

 

 

Scurto fig 1

Figure 1. A person hears the sound of a door squeaking and imitates it with vocalizations and gestures. Can the other person understand what he means?

Have you ever listened to an old Car Talk show? Here is what it sounded like on NPR back in 2010:

“So, when you start it up, what kind of noises does it make?
– It just rattles around for about a minute. […]
– Just like budublu-budublu-budublu?
– Yeah! It’s definitely bouncing off something, and then it stops”

 

As the example illustrates, it is often very complicated to describe a sound with words. But it is really easy to imitate it with our built-in sound-making system: the voice! In fact, we have observed in earlier work that this is exactly what people do: when we ask a person to communicate a sound to another person, she will very quickly try to recreate the noise with her voice – and also use a lot of gestures.

And this works! Communicating sounds with voice and gesture is much more effective than describing them with words and sentences. Imitations of sounds are fun, expressive, spontaneous, widespread in human communication, and very effective. These non-linguistic vocal utterances have been little studied, but nevertheless have the potential to provide researchers with new insights into several important questions in domains such as articulatory phonetics and auditory cognition.

The study we are presenting at this ASA meeting is part of a larger European project on how people imitate sounds with voice and gestures: SkAT-VG (“Sketching Audio Technologies with Voice and Gestures”, http://www.skatvg.eu). How do people produce vocal imitations (phonetics)? What are imitations made of (acoustics and gesture analysis)? How do other people interpret them (psychology)? The ultimate goal is to create “sketching” tools for sound designers (the people who create the sounds of everyday products). If you are an architect and want to sketch a house, you can simply draw it on a sketchpad. But what do you do if you are a sound designer and want to rapidly sketch the sound of a new motorbike? Well, all that is available today are cumbersome pieces of software. Instead, the SkAT-VG project aims to offer sound designers new tools that are as intuitive as a sketchpad: they will simply use their voice and gestures to control complex sound design tools. To support this, the SkAT-VG project also conducts research in machine learning and sound synthesis, and studies how sound designers work.

Here at the ASA meeting, we are presenting one part of this research, in which we asked the question: “What do people use gestures for when they imitate a sound?” In fact, people use a lot of gestures, but we do not know what information these gestures convey: Are they redundant with the voice? Do they convey specific pieces of information that the voice cannot represent?

We first collected a huge database of vocal and gestural imitations: we asked 50 participants to come to our lab and make vocal and gestural imitations for several hours. We recorded their voice, filmed them with a high-speed camera, and used a depth camera and accelerometers to measure their gestures. This resulted in a database of about 8,000 imitations! This database is an unprecedented amount of material that now allows us to study in detail how people combine voice and gestures to imitate sounds.

 

We first analyzed the database qualitatively, by watching and annotating the videos. From this analysis, several hypotheses about the combination of gestures and vocalizations were drawn. Then, to test these hypotheses, we asked 20 participants to imitate 25 specially synthesized sounds with their voice and gestures.

The results showed a quantitative advantage of voice over gesture for communicating rhythmic information. The voice can accurately reproduce faster tempos than gestures can, and it is more precise than gesture when reproducing complex rhythmic patterns. We also found that people often use gestures in a metaphorical way, whereas the voice reproduces acoustic features of the sound. For instance, people shake their hands very rapidly whenever a sound is stable and noisy. This type of gesture does not really follow a feature of the sound: it simply means that the sound is noisy.

Overall, our study reveals the metaphorical function of gestures during sound imitation. Rather than following an acoustic characteristic, gestures expressively emphasize the vocalization and signal the most salient features. These results will inform the specifications of the SkAT-VG tools and make the tools more intuitive.

 

 

3aUW8 – A view askew: Bottlenose dolphins improve echolocation precision by aiming their sonar beam to graze the target – Laura N. Kloepper

A view askew: Bottlenose dolphins improve echolocation precision by aiming their sonar beam to graze the target

Laura N. Kloepper– lkloepper@saintmarys.edu
Saint Mary’s College
Notre Dame, IN 46556

 

Yang Liu–yang.liu@umassd.edu
John R. Buck– jbuck@umassd.edu
University of Massachusetts Dartmouth
285 Old Westport Road
Dartmouth, MA 02747

 

Paul E. Nachtigall–nachtiga@hawaii.edu
University of Hawaii at Manoa
PO Box 1346
Kaneohe, HI 96744

 

Popular version of paper 3aUW8, “Bottlenose dolphins direct sonar clicks off-axis of targets to maximize Fisher Information about target bearing”

Presented Wednesday morning, November 4, 2015, 10:25 AM in River Terrace 2

170th ASA Meeting, Jacksonville

 

Bottlenose dolphins are incredible echolocators. Using just sound, they can detect a ping-pong-ball-sized object from 100 m away and discriminate between objects differing in thickness by less than 1 mm. Based on what we know about man-made sonar, however, the dolphins’ sonar abilities are an enigma: simply put, they shouldn’t be as good at echolocation as they actually are.

Typical man-made sonar devices achieve high levels of performance by using very narrow sonar beams. Creating narrow beams requires large and costly equipment. In contrast to these man-made sonars, bottlenose dolphins achieve the same levels of performance with a sonar beam that is many times wider. But how? Understanding their “sonar secret” can help lead to more sophisticated synthetic sonar devices.

Bottlenose dolphins’ echolocation signals contain a wide range of frequencies. The higher frequencies propagate away from the dolphin in a narrower beam than the low frequencies do, so the emitted sonar beam of the dolphin is frequency-dependent. Objects directly in front of the animal echo back all of the frequencies. However, as we move away from the direct line in front of the animal, there is less and less high-frequency energy, and when the target is far off to the side, only the lower frequencies reach the target to bounce back. As shown below in Fig. 1, the sound reaching an object 30 degrees off the sonar beam axis has lost most of its higher frequencies.

 

Kloepper-fig1

 

Figure 1. Beam pattern and normalized amplitude as a function of signal frequency and bearing angle. At 0 degrees, or on-axis, the beam contains an equal representation across all frequencies. As the bearing angle deviates from 0, however, the higher frequency components fall off rapidly.

Consider an analogy to light shining through a prism. White light entering the prism contains every frequency, but the light leaving the prism at different angles contains different colors. If we moved a mirror to different angles along the light beam, it would change the color reflected as it moved through different regions of the transmitted beam. If we were very good, we could locate the mirror precisely in angle based on the color reflected. If the color changes more rapidly with angle in one region of the beam, we would be most sensitive to small changes in position at that angle, since small changes in position would create large changes in color. In mathematical terms, this region of maximum change would have the largest gradient of frequency content with respect to angle. The dolphin sonar appears to be exploiting a similar principle, except that the different colors are different frequencies, or pitches, in the sound.
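To make the idea concrete, here is an illustrative Python sketch. It is not the model used in the paper: a baffled circular piston is simply a standard textbook stand-in for a frequency-dependent beam, and the aperture radius is an assumed value chosen only to show that higher frequencies form narrower beams.

import numpy as np
from scipy.special import j1

c = 1500.0   # approximate speed of sound in seawater, m/s
a = 0.05     # assumed radiating aperture radius, m
angles = np.radians(np.linspace(0.01, 60, 2000))

def piston_beam(freq_hz, theta):
    # Normalized far-field amplitude of a baffled circular piston.
    x = 2 * np.pi * freq_hz / c * a * np.sin(theta)
    return np.abs(2 * j1(x) / x)

for f in (40e3, 80e3, 120e3):
    b = piston_beam(f, angles)
    # angle at which the beam first falls to half its on-axis amplitude
    half = np.degrees(angles[np.argmax(b < 0.5)])
    print(f"{f/1e3:.0f} kHz beam drops to half amplitude near {half:.1f} degrees")

Running this shows the half-amplitude angle shrinking as frequency rises, which is the frequency-dependent beam that the prism analogy describes.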

Prior studies on bottlenose dolphins assumed the animal pointed its beam directly at the target, but this assumption led to the conclusion that the animals shouldn’t be as “good” at echolocation as they actually are. What if, instead, they use a different strategy? We hypothesized that the dolphin might aim its sonar so that the main axis of the beam passes next to the target, which places the region of maximum gradient on the target. Our model predicts that placing the region of the beam most sensitive to change on the target will give the dolphin the greatest precision in locating the object.

To test our hypothesis, we trained a bottlenose dolphin to detect the presence or absence of an aluminum cylinder while we recorded the echolocation signals with a 16-element hydrophone array (Fig.2).

Laura Dolphin Graphics

 

Figure 2: Experimental setup. The dolphin detected the presence or absence of cylinders at different distances while we recorded sonar beam aim with a hydrophone array.

We then measured where the dolphin directed its sonar beam in relation to the target and found the dolphin pointed its sonar beam 7.05 ± 2.88 degrees (n=1930) away from the target (Fig.3).

 

Kloepper-Fig_3

 

Figure 3: Optimality in directing the beam away from the axis. The numbers on the emitted beam represent the attenuation in decibels relative to the sound emitted from the dolphin. The high-frequency beam (red) is narrower than the low-frequency beam (blue) and attenuates more rapidly with angle. The dolphin directs its sonar beam 7 degrees away from the target.

To determine whether certain regions of the sonar beam provide more theoretical “information” to the dolphin, which would improve its echolocation, we applied information theory to the dolphin’s sonar beam. Using the weighted frequencies present in the signal, we calculated the Fisher Information for the emitted beam of a bottlenose dolphin. From our calculations, we determined that 95% of the maximum Fisher Information lies between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees (Fig. 4).
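The following Python sketch illustrates the flavor of such a calculation; it is not the authors’ code. It treats the received level at each frequency as a noisy measurement of a beam-pattern value and sums the squared angular derivatives across frequencies, using the same assumed piston-beam stand-in, an assumed click band, flat frequency weighting, and an arbitrary noise level.

import numpy as np
from scipy.special import j1

c, a, sigma = 1500.0, 0.05, 0.1           # assumed aperture and noise level
angles = np.radians(np.linspace(0.1, 30, 600))
freqs = np.linspace(30e3, 130e3, 50)      # assumed click frequency band
weights = np.ones_like(freqs)             # flat weighting, for illustration only

def piston_beam(freq_hz, theta):
    x = 2 * np.pi * freq_hz / c * a * np.sin(theta)
    return 2 * j1(x) / x

fisher = np.zeros_like(angles)
for f, w in zip(freqs, weights):
    db_dtheta = np.gradient(piston_beam(f, angles), angles)
    fisher += w * db_dtheta**2 / sigma**2   # Fisher Information for Gaussian noise

best = np.degrees(angles[np.argmax(fisher)])
print(f"In this toy model, Fisher Information peaks near {best:.1f} degrees off-axis")

The toy model peaks off-axis for the same qualitative reason the paper describes: the steepest frequency-dependent amplitude changes occur away from the beam center.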

 

 

Kloepper-Fig_4

Figure 4: The calculated Fisher Information as a function of bearing angle. Ninety-five percent of the maximum Fisher Information falls between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees.

The result? The dolphin is using a strategy that is mathematically optimal! By directing its sonar beam slightly askew of the target (such as a fish), the dolphin places the target in the region of the beam with the highest frequency gradient, allowing it to locate the target more precisely.

4pEA4 – “See” subsurface soils using surface waves – Zhiqu Lu

“See” subsurface soils using surface waves

Zhiqu Lu — zhiqulu@olemiss.edu

National Center for Physical Acoustics, The University of Mississippi,

1 Chucky Mullins,

University, MS, 38677

 

Lay language paper 4pEA4

Presented Thursday afternoon, November 5, 2015

170th ASA Meeting, Jacksonville

 

Within a few meters beneath the earth’s surface, three distinctive soil layers are formed: a top dry and hard layer, a middle moist and soft region, and a deeper zone where the mechanical strength of the soil increases with depth. Information about these subsurface soils is needed for agricultural, environmental, civil engineering, and military applications. A seismic surface wave method has recently been developed to obtain such information non-invasively (Lu, 2014; Lu, 2015). The method, known as multichannel analysis of surface waves (MASW) (Park, et al., 1999; Xia, et al., 1999), consists of three essential parts: surface wave generation and collection (Figure 1), spectrum analysis, and an inversion process. Implementing the technique involves sophisticated sensor technology, wave propagation modeling, and inversion algorithms.

Lu1

“Figure 1. The experimental setup for the MASW method”
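As a rough illustration of the spectrum-analysis step mentioned above, the Python sketch below follows the spirit of the phase-shift approach of Park et al. (1999): for each frequency and trial phase velocity, the phase-aligned receiver spectra are summed, and the sum is largest near the true phase velocity. The function signature, variable names, and the assumption of a simple shot gather are illustrative, not taken from the paper.

import numpy as np

def dispersion_image(traces, offsets, fs, freqs, velocities):
    # traces: (n_receivers, n_samples) shot gather; offsets: receiver offsets in meters
    n_samples = traces.shape[1]
    spectra = np.fft.rfft(traces, axis=1)
    fft_freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    image = np.zeros((len(freqs), len(velocities)))
    for i, f in enumerate(freqs):
        u = spectra[:, np.argmin(np.abs(fft_freqs - f))]
        u = u / (np.abs(u) + 1e-12)          # keep phase information only
        for j, v in enumerate(velocities):
            # a trial velocity v matching the true phase velocity aligns the
            # phases across offsets, maximizing the summed amplitude
            image[i, j] = np.abs(np.sum(u * np.exp(1j * 2 * np.pi * f * offsets / v)))
    return image

The resulting frequency-velocity image is then picked for the dispersion curve that feeds the inversion step.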

The technique makes use of a characteristic of one type of surface wave, the so-called Rayleigh wave, which travels along the earth’s surface and penetrates to a depth of about one and a half wavelengths. Therefore, the short-wavelength components of the surface waves contain information about shallow soil, whereas the longer-wavelength components provide the properties of deeper soil (Figure 2).

Lu2

“Figure 2. Rayleigh wave propagation”
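Since a Rayleigh-wave component senses soil down to about one and a half wavelengths, as described above, a back-of-the-envelope Python conversion from frequency and phase velocity to sampling depth looks like this; the numbers in the example calls are illustrative only.

def sampling_depth_m(freq_hz, phase_velocity_m_s, depth_factor=1.5):
    # wavelength = velocity / frequency; depth_factor reflects the 1.5-wavelength rule of thumb
    return depth_factor * phase_velocity_m_s / freq_hz

print(sampling_depth_m(50.0, 200.0))    # low frequency: senses roughly 6 m deep
print(sampling_depth_m(200.0, 150.0))   # high frequency: stays within about 1 m of the surface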

The outcome of the MASW method is a vertical soil profile, i.e., the acoustic shear (S) wave velocity as a function of depth (Figure 3).

Lu3

“Figure 3. A typical soil profile”

By repeating the MASW measurements either spatially or temporally, one can measure and “see” the spatial and temporal variations of the subsurface soils. Figure 4 shows a typical vertical cross-section image in which the intensity of the image represents the value of the shear wave velocity. From this image, the three different layers mentioned above can be identified.

 

Lu4

“Figure 4. A typical example of a soil vertical cross-section image”

 

Lu5

Figure 5 displays another two-dimensional image, in which a middle high-velocity zone (red area) appears. This high-velocity zone represents a geological anomaly known as a fragipan, a naturally occurring dense and hard soil layer (Lu, et al., 2014). The detection of fragipans is important in agricultural land management.

“Figure 5. A vertical cross-section image showing the presence of a fragipan layer”

The MASW method can also be applied to monitor the influence of weather on soil properties (Lu, 2014). Figure 6 shows the temporal variations of the subsurface soil. This is the result of a long-term survey conducted in 2012. By drawing a vertical line and moving it from left to right, i.e., along the time index axis, the evolution of the soil profile due to weather effects can be followed. In particular, high-velocity zones occurred in the summer of 2012, reflecting very dry soil conditions.

Lu6

“Figure 6. The temporal variations of the soil profile due to weather effects”

 

 

Lu, Z., 2014. Feasibility of using a seismic surface wave method to study seasonal and weather effects on shallow surface soils. Journal of Environmental & Engineering Geophysics, Vol. 19, 71–85. DOI: 10.2113/JEEG19.2.71

Lu, Z., 2015. Self-adaptive method for high frequency multi-channel analysis of surface wave method. Journal of Applied Geophysics, Vol. 121, 128–139. http://dx.doi.org/10.1016/j.jappgeo.2015.08.003

Lu, Z., Wilson, G.V., Hickey, C.J., 2014. Imaging a soil fragipan using a high-frequency MASW method. In Proceedings of the Symposium on the Application of Geophysics to Engineering and Environmental Problems (SAGEEP 2014), Boston, MA., Mar. 16-20.

Park, C.B., Miller, R.D., Xia, J., 1999. Multichannel analysis of surface waves. Geophysics, Vol. 64, 800-808.

Xia, J., Miller, R.D., Park, C.B., 1999. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves. Geophysics, Vol. 64, 691-700.

 

4aEA2 – How soon can you use your new concrete driveway? – Jinying Zhu

How soon can you use your new concrete driveway?

Jinying Zhu: jyzhu@unl.edu

 

Department of Civil Engineering

University of Nebraska-Lincoln

1110 S 67th St., Omaha, NE 68182, USA

 

Popular version of paper 4aEA2, “Monitoring hardening of concrete using ultrasonic guided waves” Presented Thursday morning, Nov. 5, 2015, 8:50 AM, ORLANDO room,
170th ASA Meeting, Jacksonville, FL

 

Concrete is the most commonly used construction material in the world. The performance of concrete structures is largely determined by the properties of fresh concrete at early ages. Concrete gains strength through a chemical reaction between water and cement (hydration), which gradually changes a fluid fresh concrete mix into a rigid, hard solid. This process is called setting and hardening. It is important to measure the setting time: if setting occurs too early, you may not have enough time to mix and place the concrete, while setting that occurs too late will delay strength gain. The setting and hardening process is affected by many parameters, including the water-to-cement ratio, temperature, and chemical admixtures. The standard method to test setting time is to measure the penetration resistance of fresh concrete samples in the laboratory, which may not represent the real conditions in the field.

Ultrasonic waves have been proposed for monitoring the setting and hardening process of concrete by measuring changes in wave velocity. As concrete hardens, its stiffness increases, and the ultrasonic velocity increases with it. The authors found there is a clear relationship between the shear wave velocity and the traditional penetration resistance. However, most ultrasonic tests measure a small volume of a concrete sample in the laboratory and are not suitable for field application. In this paper, the authors propose an ultrasonic guided wave test method. Steel reinforcements (rebars) are used in most concrete structures. When ultrasonic guided waves propagate within a rebar, they leak energy into the surrounding concrete, and the energy leakage rate is proportional to the stiffness of the concrete. Ultrasonic waves can be introduced into a rebar from one end, and the echo signal is received at the same end using the same ultrasonic sensor. This test method has a simple setup and is able to monitor the concrete hardening process continuously.

Figure 2 shows guided wave echo signals measured on a 19 mm diameter rebar embedded in concrete. It is clear that the signal amplitude decreases with the age of the concrete (2~6 hours). The attenuation can be plotted against age for different cement/concrete mixes. Figure 3 shows the attenuation curves for three cement paste mixes. It is known that a cement mix with a larger water-to-cement ratio (w/c) will gain strength more slowly, which agrees with the ultrasonic guided wave test, where the w/c = 0.5 mix has a lower attenuation rate. When there is a void around the rebar, the energy leakage is less than in the case without a void, which is also confirmed by the test results in Figure 3.
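As a hedged sketch of how echo amplitudes could be turned into an attenuation rate, the Python snippet below assumes a single end-reflected echo travelling twice the embedded rebar length; the amplitudes, reference level, and length are made-up example numbers, not data from the paper.

import numpy as np

def attenuation_db_per_m(echo_amp, reference_amp, embedded_length_m):
    # Two-way guided-wave attenuation expressed in dB per meter of rebar.
    loss_db = 20 * np.log10(reference_amp / echo_amp)
    return loss_db / (2 * embedded_length_m)

ages_h = [2, 3, 4, 5, 6]
amps = [0.92, 0.71, 0.52, 0.38, 0.30]   # echo amplitude falls as the concrete stiffens
for age, amp in zip(ages_h, amps):
    print(f"{age} h: {attenuation_db_per_m(amp, 1.0, 0.5):.1f} dB/m")

Plotting such attenuation values against age gives curves of the kind shown in Figure 3.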

Summary: This study presents experimental results using ultrasonic guided waves to monitor the concrete setting and hardening process. It shows that the guided wave leakage attenuation is proportional to the stiffness change of fresh concrete. Therefore, the leakage rate can be used to monitor concrete strength gain at early ages. This study may have broader applications in other disciplines for measuring the mechanical properties of materials using guided waves.

Zhu1

Figure 1. Principle of ultrasonic guided wave test.

zhu2

Figure 2. Ultrasonic echo signals measured in an embedded rebar for concrete age of 2~6 hours.

Zhu3

Figure 3. Guided wave attenuation rate in a rebar embedded in different cement pastes.

 

2pAAa4 – Does it sound better behind Miles Davis’ back? – What would it sound like face-to-face? Rushing through a holographic sound image of the trumpet. – Franz Zotter, Matthias Frank

Does it sound better behind Miles Davis’ back? – What would it sound like face-to-face? Rushing through a holographic sound image of the trumpet

 

Franz Zotter – zotter@iem.at

Matthias Frank – frank@iem.at

University of Music and Performing Arts Graz

Institute of Electronic Music and Acoustics (IEM)

Inffeldgasse 10/3, 8010 Graz, Austria

 

Popular version of paper 2pAAa4, “Challenges of musical instrument reproduction including directivity”

Presented Tuesday afternoon, November 3, 2015, 2:25 PM, Grand Ballroom 3

170th ASA Meeting, Jacksonville

 

In many of his concerts, Miles Davis used to play his trumpet facing away from the audience. Would it have made a difference had he faced the audience?

 

Unplugged acoustic instruments can have a tremendously different timbre depending on their orientation. Musicians experience such effects while playing their instruments in different environments. Those lacking such experience can only learn about the so-called directivity of musical instruments from publications showing diagrams of measured timbral changes. Comprehensive publications from the nineteen-sixties deliver remarkably detailed descriptions. And yet, it requires training to imagine what the timbral changes sound like just by looking at these diagrams.

 

In the new millennium, researchers built surrounding spheres of microphones that allow them to record a holographic sound image of any musical instrument (Figure 1). This was done to get a more natural representation of instruments in virtual acoustic environments for games or computer-aided acoustic design. Alternatively, the holographic sound image can be played back in real environments using a compact spherical loudspeaker array (Figure 2).

 

Such a recording makes it possible, for instance, to convey a tangible experience of how strongly the timbre and loudness of a trumpet change with orientation. Audio example 1 is an excerpt from a corresponding holographic sound image made using 64 surrounding microphones. With each repetition of the excerpt, the recording position gradually moves from behind the instrumentalist to a face-to-face orientation.

 

While the recordings above were made excluding the acoustical influence of the room, this new kind of holographic sound imagery is a key technology for reproducing a fully convincing experience of a musical instrument within any room it is played in.

microphone_sphere_trumpet

Figure 1:

A surrounding sphere of 64 microphones was built at IEM (Fabian Hohl, 2009) to record holographic sound images of musical instruments. The photo (Fabian Hohl, 2009) shows Silvio Rether playing the trumpet.


Figure 2:

The icosahedron housing 20 loudspeakers (a compact spherical loudspeaker array) was built in 2006 at IEM. It is a device to play back holographic sound images of musical instruments. Currently, it is used as a new tool in computer music to project sound into rooms by utilizing wall reflections from different directions.

The photo (Franz Zotter, 2010) shows the icosahedral loudspeaker during concert rehearsals.

AudioExample:

In the example, one can clearly hear the orientation-related timbral changes of the trumpet. The short excerpt is played in 7 repetitions, each recorded at a different position, moving from behind the trumpet player to the front. The piece “Gaelforce” by Peter Graham is performed by Silvio Rether, and the recording was made by Fabian Hohl at IEM using the sphere shown in Figure 1.