4pMU5 – Evolution of the piano

Nicholas Giordano – nig003@auburn.edu
Auburn University
Auburn, AL

Popular version of paper 4pMU5 – Evolution of the piano
Presented Thursday afternoon, November 5, 2:25 PM, Grand Ballroom 2
170th ASA Meeting, Jacksonville, FL

Introduction 
The piano was invented 300 years ago by Bartolomeo Cristofori, whose “day job” was caring for the instruments owned by the famous Medici family in Florence, Italy. Many of those instruments were harpsichords, and the first pianos were very similar to harpsichords, with one crucial difference. In a harpsichord the strings are set into motion by plucking (as in a guitar), and the loudness of a plucked tone is independent of how forcefully a key is pressed. In a piano the strings are struck by a hammer, and Cristofori invented a clever mechanism (called the piano “action”) through which the speed of the hammer, and hence the volume of a tone, is controlled by the force with which the key is pressed. In this way a piano player can vary the loudness of notes individually, something that was not possible with the harpsichord or organ, the dominant keyboard instruments of the day. This gave the piano new expressive capabilities, which were soon exploited by composers such as Mozart and Beethoven.

Figure 1 shows one of the three surviving Cristofori pianos. It is made almost entirely of wood (except for the strings) and has a range of 4 octaves – 49 notes. It has 98 strings (two for each note), each held at a tension of about 60 newtons (around 13 lbs), and is light enough that two adults can easily lift it. A typical modern piano is shown in Figure 2. It has a range of 7-1/3 octaves – 88 notes – and more than 200 strings (most notes have three strings), each held at a tension of around 600 newtons. This instrument weighs almost 600 lbs.

Figure 1. Piano built by Bartolomeo Cristofori in 1722. This piano is in the Museo Nazionale degli Strumenti Musicali in Rome. Image from Wikimedia Commons (wikimedia.org/wikipedia/commons/3/32/Piano_forte_Cristofori_1722.JPG). The other surviving Cristofori pianos are in the Metropolitan Museum of Art in New York City and the Musikinstrumenten-Museum in Leipzig.

Figure 2. A typical modern piano. This is a Steinway model M that belongs to the author. Photo by Lizz Giordano.

My conference paper considers how the piano in Figure 1 evolved into the instrument in Figure 2. As described in the paper, this evolution was driven by a combination of factors: the capabilities and limitations of the human auditory system, the demands of composers from Mozart to Beethoven to Rachmaninoff, and developments in technology such as the availability of the high-strength steel wire now used for the strings.

How many notes?
The modern piano has nearly twice as many notes as Cristofori’s pianos. These notes were added gradually over time. Most of the keyboard music of J. S. Bach can be played on the 49 notes of the first pianos, but composers soon wanted more. By Mozart’s time in the late 1700s, most pianos had 61 notes (a five-octave range). They expanded to 73 notes (six octaves) for Beethoven in the early 1800s, and eventually to the 88 notes we have today by about 1860. The frequencies covered by these notes extend from about 27 Hz to just over 4000 Hz. The human ear is sensitive to a much wider range, so one might ask why we don’t have even more notes. The answer seems to lie in how we hear tones with frequencies well outside the piano range. Tones with frequencies below the piano range are heard by most people as clicks [1], and such tones would not be useful for most kinds of music. Tones with frequencies much above the high end of the piano range pose a different problem. In much music, two or more tones are played simultaneously to produce chords and similar combinations. It turns out that our auditory system cannot perceive such “chordal” relationships for tones much above the piano range [1]. Hence, these tones cannot be used by a composer to form the chords and other note combinations that are an essential part of Western music. The range of notes found in a piano is thus set by the human auditory system – which is why the number of notes has not increased beyond the limits reached about 150 years ago.
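
For readers who want the numbers: the frequency of each key on a modern 88-note keyboard follows from the standard equal-temperament formula. The minimal Python sketch below assumes the usual conventions (A4 tuned to 440 Hz, keys numbered 1–88 from the bottom) and reproduces the range quoted above.

```python
# Equal-temperament frequency of piano key n (n = 1..88, A4 = key 49 = 440 Hz).
def key_frequency(n: int, a4: float = 440.0) -> float:
    return a4 * 2 ** ((n - 49) / 12)

print(f"lowest note  (A0): {key_frequency(1):6.1f} Hz")   # ~27.5 Hz
print(f"highest note (C8): {key_frequency(88):6.1f} Hz")  # ~4186.0 Hz
```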

Improving the strings
The strings in Cristofori’s pianos were thin (less than 1 mm in diameter) and made of brass or iron. They were held at tensions of about 60 N, probably a bit more than half their breaking tension, providing a margin of safety. Increasing the tension allows the string to be hit harder with the hammer, producing a louder sound. Hence, as the piano came to be used more and more as a solo instrument and as concert halls grew in size, piano makers needed stronger strings. These improved strings were generally made of iron with controlled amounts of impurities such as carbon. String tensions in piano design thus increased by about a factor of 10 from the earliest pianos to around 1860, by which time steel piano wire was available. Steel wire continues to be used in modern pianos, but its strength is not much greater than that of the wire available in 1860, so this aspect of piano design has not changed substantially since then.
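
The connection between tension and pitch is the textbook relation for an ideal vibrating string, where $L$ is the speaking length, $T$ the tension, and $\mu$ the mass per unit length; this is standard string physics rather than anything specific to the paper:

$$ f = \frac{1}{2L}\sqrt{\frac{T}{\mu}} $$

At a fixed pitch and length, a tenfold increase in tension therefore implies a tenfold increase in the string’s mass per unit length – modern strings are not only under far greater tension but also correspondingly heavier.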

Making a stronger case
The increased number of strings in a modern piano, combined with the greater string tension, results in a much larger total force – by about a factor of 20 – on the case of a modern instrument as compared to the Cristofori piano. The case of an early piano was made of wood, but the limits of a wooden case were reached by the early 1800s in the pianos that Beethoven encountered. To cope with this problem, piano makers added metal rods, and later plates, to strengthen the case, leading to what is now called a “full metal plate.” The plate is composed of iron (steel is not required, since iron under compression is quite strong and stable) and is visible in Figure 2 as the gold-colored plate that extends from the front to the back of the instrument. Some piano makers objected to adding metal to the piano, arguing that it would give the tone a “metallic” sound. They were evidently able to overlook the fact that the strings were already metal. Interestingly, the full metal plate was the first important contribution to piano design by an American, as it was introduced in the mid-1820s by Alpheus Babcock.
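
The factor of 20 follows directly from the numbers quoted earlier. A back-of-the-envelope check in Python is below; the modern string count of 230 is an assumption (the text says only “more than 200”), which is why the ratio comes out slightly above 20.

```python
# Approximate total string load on the case, early vs. modern piano.
early_load  = 98 * 60    # 98 strings at ~60 N each  -> ~5.9 kN
modern_load = 230 * 600  # ~230 strings at ~600 N    -> ~138 kN (~14 tonnes-force)

print(f"early:  {early_load / 1000:5.1f} kN")
print(f"modern: {modern_load / 1000:5.1f} kN")
print(f"ratio:  {modern_load / early_load:.0f}x")
```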

Making a piano hammer
As string tension increased, it was also necessary to redesign the piano hammer. In most early pianos the hammer was quite light (about 1 g or less), with a layer of leather glued over a wooden core. As string tension grew, a more durable covering was needed, and leather was replaced by felt in the mid-1800s. This change was made possible by improvements in the technology of making felt with a high and reproducible density. The mass of the hammer also increased; in a modern piano the hammers for the bass (lowest) notes are more than 10 times more massive than in Cristofori’s instruments.

How has the sound changed?
We have described how the strings, case, hammers, and range of the piano have changed considerably since Cristofori invented the instrument, and there have been many other changes as well. It is thus not surprising that the sounds produced by an early piano can be distinguished from those of a modern piano. However, the tones of these instruments are remarkably similar – even a casual listener will recognize both as coming from a “piano.” While there are many ways to judge and describe a piano tone, the properties of the hammers are, in the opinion of the author (an amateur pianist), most responsible for the differences between the tones of early and modern pianos. The collision between hammer and string has a profound effect on the tone, and the difference in hammer covering (leather versus felt) makes the tone of an early piano more “percussive” and “pluck-like” than that of a modern piano. This difference can be heard in the sound examples that accompany this article.

The future of the piano
While the piano is now 300 years old, its evolution from Cristofori’s first instruments to the modern piano was complete by the mid-1800s. Why has the piano remained unchanged for the past 150 years? We have seen that much of its evolution was driven by improvements in technology, such as the availability of the steel wire now used for the strings. Modern steel wire is not much different from what was available more than a century ago, but other string materials now exist. For example, wires made of carbon fibers can be stronger than steel and would seem to have advantages as piano strings [2], but this possibility has not (yet) been explored in more than a theoretical way. Indeed, the great success of the piano has made piano makers, players, and listeners resistant to major changes. While new technologies and designs will probably be incorporated into the pianos of the future, it seems likely that the instrument will always sound much like the one we have today.

The evolution of the piano is described in more detail in an article by the author that will appear in Acoustics Today later this year. Much longer and more in-depth versions of this story can be found in Refs. 3 and 4.

[1] C. J. Plack, A. J. Oxenham, R. R. Fay, and A. N. Popper (2005). Pitch: Neural Coding and Perception (Springer), Chapter 2.

[2] N. Giordano (2011). “Evolution of music wire and its impact on the development of the piano,” Proceedings of Meetings on Acoustics 12, 035002.

[3] E. M. Good (2002). Giraffes, Black Dragons, and Other Pianos, 2nd edition (Stanford University Press).

[4] N. J. Giordano (2010). Physics of the Piano (Oxford University Press).

Sound examples
Both audio examples are the beginning of the first movement of Mozart’s piano sonata in C major, K. 545. The first is played on a piano that is a copy of an instrument like the ones Mozart played; the second is played on a modern piano.

(1) Early piano. Played by Malcolm Bilson on a copy of a c. 1790 piano made by Paul McNulty (CD: Hungaroton Classic, Wolfgang Amadeus Mozart Sonatas Vol. III, Malcolm Bilson, fortepiano, HCD31013-14).

 

(2) Modern piano. Played by Daniel Barenboim on a modern (Steinway) piano (CD: EMI Classics, Mozart, The Piano Sonatas, Catalog #67294).

4aAA5 – Conversion of an acoustically dead opera hall into a live one

Wolfgang Ahnert1, Tobias Behrens1 (info@ada-amc.eu) and Radu Pana2 (pana.radu@gmail.com)

1 ADA Acoustics & Media Consultants GmbH, Arkonastr. 45-49, D-13189 Berlin / Germany
2 University of Architecture and Urbanism “Ion Mincu”, Str. Academiei 18-20, RO-010014 Bucuresti / Romania

Popular version of paper 4aAA5, “The National Opera in Bucharest – Update of the room-acoustical properties”
Presented Thursday morning, November 5, 2015, 10:35 AM, Grand Ballroom 3
170th ASA Meeting, Jacksonville

The preferred acoustics of opera halls have changed dramatically over the last 100 years. Until the end of the 19th century, mostly horseshoe-shaped halls were built, with highly absorbing wall and even floor areas. Likewise, the boxes commonly used had fully absorbing claddings. In this way the reverberation in these venues was kept low, and the halls were perceived as acoustically dry, e.g. the opera hall in Milan. A hundred years later, the trend is toward livelier opera halls with higher reverberation, now preferred for music reproduction, e.g. the Semper Opera in Dresden.

This desire for greater acoustic liveliness led to renovation work at the Opera House in Bucharest in 2013-2014. The Opera House was built in 1952-1953 for around 2200 spectators in the so-called “socialist realism” style. This type of architecture was popular at the time, when communism was new to Romania, and the building therefore has a neoclassical design. Inside, the hall looked like a theatre of the late 19th century. Conditions in the orchestra pit were poor as well: the musicians could hardly hear one another. Construction work was therefore undertaken to improve the room-acoustical properties for musicians and audience.


Fig. 1: Opera hall after reconstruction

The acoustic task was to enhance the room-acoustic properties significantly by replacing absorptive surfaces (carpet, fabric wall linings, etc.) with reflective materials. The main absorptive elements were:

  1. Carpet on all floor areas, upholstered back- and undersides of chairs
  2. Textile wall linings at walls/ceilings in boxes, upholstered hand rails
  3. Textile wall linings at balustrades, upholstered hand rails in the galleries

All the absorbing wall and ceiling surfaces were replaced with reflective wood panels, the carpet was removed, and a parquet floor was installed. As a result, the sound no longer fades away as it would in an open-air theatre, and the hall now conveys a sense of spaciousness.
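
To first order, the effect of removing absorption is captured by Sabine’s reverberation formula, RT60 = 0.161·V/A: less total absorption A means a longer reverberation time. The Python sketch below uses entirely illustrative numbers (the hall volume and absorption areas are assumptions, not measurements from Bucharest) just to show the direction and size of the effect.

```python
# Sabine's formula: RT60 = 0.161 * V / A
# (V = room volume in m^3, A = equivalent absorption area in m^2).
def rt60(volume_m3: float, absorption_m2: float) -> float:
    return 0.161 * volume_m3 / absorption_m2

V = 10_000.0              # assumed hall volume, illustration only
before = rt60(V, 1450.0)  # heavily absorptive finishes (carpet, fabric)
after  = rt60(V, 1200.0)  # absorptive finishes replaced by wood and parquet

print(f"RT60 before: {before:.2f} s, after: {after:.2f} s, "
      f"gain: {after - before:.2f} s")  # ~1.11 s -> ~1.34 s, +0.23 s
```

With these assumed numbers the formula yields a prolongation of roughly 0.2 s, consistent in magnitude with the 0.2-0.3 s reported below.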

The primary and secondary structures of the orchestra pit were changed as well, in order to improve mutual hearing within the pit and between stage and pit. The pit had the following acoustically disadvantageous properties:

  • Insufficient ratio between open and covered area (depth of opening 3.5 m, depth of cover 4.7 m)
  • The height within the pit in the covered area was very small.
  • The covered area of the pit was heavily overdamped by excessive absorption.


Fig. 2: New orchestra pit (section)

The following changes have been applied:

  • The ratio between open and covered area was improved by shifting the front edge of the stage floor toward the back: the depth of the opening is now 5.1 m, the depth of the cover only 3.1 m.
  • The height within the pit in the covered area is increased by lowering the new movable podium.
  • The walls and soffit in the pit are now generally reflective; broadband absorbers can be placed variably at the back wall of the pit.

After extensive on-site measurements and simulations, the reverberation time was lengthened by 0.2-0.3 s, reaching current values of about 1.3-1.4 s.

Together with the geometric changes to the pit, the acoustic properties of the hall are now very satisfactory for musicians, singers, and the audience.

Besides the reverberation time, other room-acoustical measures, such as clarity (C80), support, and strength, have also improved significantly.

3aUW8 – A view askew: Bottlenose dolphins improve echolocation precision by aiming their sonar beam to graze the target

Laura N. Kloepper– lkloepper@saintmarys.edu
Saint Mary’s College
Notre Dame, IN 46556

Yang Liu–yang.liu@umassd.edu
John R. Buck– jbuck@umassd.edu
University of Massachusetts Dartmouth
285 Old Westport Road
Dartmouth, MA 02747

Paul E. Nachtigall–nachtiga@hawaii.edu
University of Hawaii at Manoa
PO Box 1346
Kaneohe, HI 96744

Popular version of paper 3aUW8, “Bottlenose dolphins direct sonar clicks off-axis of targets to maximize Fisher Information about target bearing”
Presented Wednesday morning, November 4, 2015, 10:25 AM in River Terrace 2
170th ASA Meeting, Jacksonville

Bottlenose dolphins are incredible echolocators. Using just sound, they can detect a ping-pong-ball-sized object from 100 m away and discriminate between objects differing in thickness by less than 1 mm. Based on what we know about man-made sonar, however, the dolphins’ sonar abilities are an enigma – simply put, they shouldn’t be as good at echolocation as they actually are.

Typical man-made sonar devices achieve high levels of performance by using very narrow sonar beams. Creating narrow beams requires large and costly equipment. In contrast, bottlenose dolphins achieve the same levels of performance with a sonar beam that is many times wider – but how? Understanding their “sonar secret” can help lead to more sophisticated synthetic sonar devices.

Bottlenose dolphins’ echolocation signals contain a wide range of frequencies. The higher frequencies propagate away from the dolphin in a narrower beam than the lower frequencies do, so the emitted sonar beam is frequency-dependent. Objects directly in front of the animal echo back all of the frequencies. As we move away from the direct line in front of the animal, however, there is less and less high-frequency content, and when the target is far off to the side, only the lower frequencies reach it to bounce back. As shown below in Figure 1, an object 30 degrees off the sonar beam axis has lost most of the frequencies.


Figure 1. Beam pattern and normalized amplitude as a function of signal frequency and bearing angle. At 0 degrees, or on-axis, the beam contains an equal representation across all frequencies. As the bearing angle deviates from 0, however, the higher frequency components fall off rapidly.

Consider an analogy to light shining through a prism.  White light entering the prism contains every frequency, but the light leaving the prism at different angles contains different colors.  If we moved a mirror to different angles along the light beam, it would change the color reflected as it moved through different regions of the transmitted beam.  If we were very good, we could locate the mirror precisely in angle based on the color reflected.  If the color changes more rapidly with angle in one region of the beam, we would be most sensitive to small changes in position at that angle, since small changes in position would create large changes in color.  In mathematical terms, this region of maximum change would have the largest gradient of frequency content with respect to angle.  The dolphin sonar appears to be exploiting a similar principle, only the different colors are different frequencies or pitch in the sound.
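
One way to make this analogy concrete is a toy model in which each frequency propagates in a Gaussian beam whose width shrinks inversely with frequency. This is a deliberately simplified stand-in for the measured beam pattern of Figure 1; the 30-degree half-width at 40 kHz below is an assumed value, not dolphin data.

```python
import numpy as np

# Toy Gaussian-beam model: beam half-width shrinks as 1/frequency, so
# high-frequency content falls off faster with bearing angle.
def beam_gain(freq_hz, theta_deg, ref_freq=40e3, ref_halfwidth_deg=30.0):
    halfwidth = ref_halfwidth_deg * ref_freq / freq_hz  # beamwidth ~ 1/f
    return np.exp(-np.log(2) * (theta_deg / halfwidth) ** 2)

for f in (40e3, 80e3, 120e3):  # representative click frequencies
    print(f"{f / 1e3:5.0f} kHz: relative level 30 deg off-axis = "
          f"{beam_gain(f, 30.0):.3f}")
```

Running this prints relative levels of about 0.50, 0.06, and 0.002: an echo from 30 degrees off-axis keeps the low frequencies but has lost nearly all of the high ones, just as in Figure 1.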

Prior studies on bottlenose dolphins assumed that the animal points its beam directly at the target, but this assumption led to the conclusion that the animals shouldn’t be as “good” at echolocation as they actually are. What if, instead, they use a different strategy? We hypothesized that the dolphin might aim its sonar so that the main axis of the beam passes next to the target, placing the region of maximum gradient on the target. Our model predicts that placing the region of the beam most sensitive to change on the target gives the dolphin the greatest precision in locating the object.

To test our hypothesis, we trained a bottlenose dolphin to detect the presence or absence of an aluminum cylinder while we recorded the echolocation signals with a 16-element hydrophone array (Fig.2).


Figure 2: Experimental setup. The dolphin detected the presence or absence of cylinders at different distances while we recorded sonar beam aim with a hydrophone array.

We then measured where the dolphin directed its sonar beam in relation to the target and found the dolphin pointed its sonar beam 7.05 ± 2.88 degrees (n=1930) away from the target (Fig.3).


Figure 3: Optimality in directing the beam away from the axis. The numbers on the emitted beam represent the attenuation in decibels relative to the sound emitted by the dolphin. The high-frequency beam (red) is narrower than the low-frequency beam (blue) and attenuates more rapidly with angle. The dolphin directs its sonar beam 7 degrees away from the target.

To determine whether certain regions of the sonar beam provide more theoretical “information” to the dolphin, which would improve its echolocation, we applied information theory to the dolphin sonar beam. Using the weighted frequencies present in the signal, we calculated the Fisher Information for the emitted beam of a bottlenose dolphin. From our calculations we determined that 95% of the maximum Fisher Information lies between 6.0 and 8.5 degrees off center, with a peak at 7.2 degrees (Fig. 4).


Figure 4: The calculated Fisher Information as a function of bearing angle. 95% of the maximum Fisher Information lies between 6.0 and 8.5 degrees off center, with the peak at 7.2 degrees.
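
For context on this kind of calculation: for a known signal observed in additive white Gaussian noise, the Fisher information about bearing is proportional to the squared sensitivity of the received signal to bearing, summed over frequency. The sketch below applies that idea to the toy Gaussian beam model from the earlier example; all beam parameters are assumptions, so the resulting peak angle is illustrative and should not be read as reproducing the paper’s 7.2-degree result.

```python
import numpy as np

# Toy Fisher information about bearing: FI(theta) ~ sum over frequency of
# (d gain(f, theta) / d theta)^2, for a signal in white Gaussian noise.
def beam_gain(freq_hz, theta_deg, ref_freq=40e3, ref_halfwidth_deg=30.0):
    halfwidth = ref_halfwidth_deg * ref_freq / freq_hz
    return np.exp(-np.log(2) * (theta_deg / halfwidth) ** 2)

freqs = np.linspace(30e3, 130e3, 11)   # frequency bins in the click (assumed)
thetas = np.linspace(0.0, 40.0, 401)   # bearing angles in degrees
gains = np.array([[beam_gain(f, t) for t in thetas] for f in freqs])

sensitivity = np.gradient(gains, thetas[1] - thetas[0], axis=1)  # d gain/d theta
fisher = (sensitivity ** 2).sum(axis=0)                          # sum over bins

# On-axis the derivative vanishes, so the information peaks off-axis,
# qualitatively matching the dolphin's roughly 7-degree strategy.
print(f"toy Fisher information peaks {thetas[fisher.argmax()]:.1f} deg off-axis")
```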

The result? The dolphin is using a strategy that is mathematically optimal! By directing its sonar beam slightly askew of the target (such as a fish), it places the target in the region of highest frequency gradient of the beam, allowing it to locate the target more precisely.

2pABa9 – Energetically speaking, do all sounds that a dolphin makes cost the same?

Marla M. Holt – marla.holt@noaa.gov
Dawn P. Noren – dawn.noren@noaa.gov
Conservation Biology Division
NOAA NMFS Northwest Fisheries Science Center
2725 Montlake Blvd East
Seattle WA, 98112

Robin C. Dunkin – rdunkin@ucsc.edu
Terrie M. Williams – tmwillia@ucsc.edu
Department of Ecology and Evolutionary Biology
University of California, Santa Cruz
100 Shaffer Road
Santa Cruz, CA 95060

Popular version of paper 2pABa9, “The metabolic costs of producing clicks and social sounds differ in bottlenose dolphins (Tursiops truncatus).”
Presented Tuesday afternoon, November 3, 2015, 3:15, City Terrace room
170th ASA Meeting Jacksonville

Dolphins are known to be quite vocal, producing a variety of sounds described as whistles, squawks, barks, quacks, pops, buzzes, and clicks. These sounds can be tonal (think whistle) or broadband (think buzz), short or long, loud or soft. Some sounds, such as whistles, are used in social contexts for communication. Others, such as clicks and buzzes, are used for echolocation, a form of active biosonar that is important for hunting fish [1]. Regardless of the type of sound a dolphin makes in its diverse vocal repertoire, the sounds are generated in an anatomically unique way compared with other mammals. Most mammals, including humans, make sound in their throats or, technically, in the larynx. In contrast, dolphins make sound in their nasal cavity via two sets of structures called the “phonic lips” [2].

All sound production comes at an energetic cost to the signaler [3]. That is, when an animal produces sound, its metabolic rate increases a certain amount above the baseline or resting (metabolic) rate. Additionally, many vociferous animals, including dolphins and other marine mammals, modify their acoustic signals in noise: they call louder, longer, or more often in an attempt to be heard above the background din. Ocean noise levels are rising, particularly in some areas, from shipping traffic and other anthropogenic activities, and this motivated a series of recent studies to understand the metabolic costs of sound production and vocal modification in dolphins.

We recently measured the energetic cost of producing both social sounds and clicks in dolphins, and determined whether these costs increased when the animals increased the loudness or other parameters of their sounds [4,5]. Two bottlenose dolphins were trained to rest and vocalize under a specialized dome that allowed us to measure their metabolic rates while they made different kinds of sounds and while they rested (Figure 1). The dolphins also wore an underwater microphone (a hydrophone embedded in a suction cup) on their foreheads to track vocal performance during trials. The amount of metabolic energy that the dolphins used increased as the total acoustic energy of the vocal bout increased, regardless of the type of sound produced. The results clearly demonstrate that higher vocal effort comes at a higher energetic cost to the signaler.


Figure 1 – A dolphin participating in a trial to measure metabolic rates during sound production.  Trials were conducted in Dr. Terrie Williams’ Mammalian Physiology lab at the University of California Santa Cruz.  All procedures were approved by the UC Santa Cruz Institutional Animal Care and Use Committee and conducted under US National Marine Fisheries Service permit No.13602.

These recent results allow us to compare the metabolic costs of producing different sound types. However, the average total energy of the sounds produced per trial differed depending on the dolphin subject and on whether the dolphins were producing social sounds or clicks. Since metabolic cost depends on vocal effort, comparisons across sound types must be made for equal-energy sound production.

The relationship between energetic cost and vocal effort for social sounds allowed us to predict the metabolic cost of producing these sounds at the same sound energy as in the click trials. The results, shown in Figure 2, demonstrate that bottlenose dolphins produce clicks at a very small fraction of the metabolic cost of producing whistles of equal energy. These findings are consistent with empirical observations that considerably higher air pressure is required within the dolphin nasal passage to generate whistles than clicks [1]. This pressurized air powers sound production in dolphins and toothed whales [1] and mechanistically explains the observed difference in metabolic cost between the sound types.


Figure 2 – Metabolic costs of producing social sounds and clicks of equal energy content within a dolphin subject.

Differences in the metabolic costs of whistling versus clicking have implications for understanding the biological consequences of behavioral responses to ocean noise. Across sound types, metabolic costs depend on vocal effort. Yet the overall cost of producing clicks is substantially lower than that of producing whistles. The results reported in this paper demonstrate that the biological consequences of vocal responses to noise can be quite different depending on the behavioral context of the animals affected, as well as the extent of the response.

 

  1. Au, W. W. L., The Sonar of Dolphins. New York: Springer-Verlag, 1993.
  2. Cranford, T. W., et al., Observation and analysis of sonar signal generation in the bottlenose dolphin (Tursiops truncatus): evidence for two sonar sources. Journal of Experimental Marine Biology and Ecology, 2011. 407: p. 81-96.
  3. Ophir, A. G., Schrader, S. B. and Gillooly, J. F., Energetic cost of calling: general constraints and species-specific differences. Journal of Evolutionary Biology, 2010. 23: p. 1564-1569.
  4. Noren, D. P., Holt, M. M., Dunkin, R. C. and Williams, T. M. The metabolic cost of communicative sound production in bottlenose dolphins (Tursiops truncatus). Journal of Experimental Biology, 2013. 216: 1624-1629.
  5. Holt, M. M., Noren, D. P., Dunkin, R. C. and Williams, T. M. Vocal performance affects metabolic rate in dolphins: implication for animals communicating in noisy environments. Journal of Experimental Biology, 2015. 218: 1647-1654.

2aEAa5 – Miniature Directional Sound Sensor Inspired by Fly’s Ears

Daniel Wilmott – dwilmott@nps.edu
Fabio Alves – fdalves@nps.edu
Gamani Karunasiri – karunasiri@nps.edu

Department of Physics
Naval Postgraduate School
Monterey, CA 93943

Popular version of paper 2aEAa5
Presented Tuesday morning, November 3, 2015
170th ASA Meeting, Jacksonville

Humans and animals that possess a relatively large separation between their ears, compared to the wavelength of sound, use the delay in sound arrival between the ears to sense its direction with relatively good accuracy. This approach is less effective when the separation between the ears is small, as in insects. However, the parasitic fly Ormia ochracea is particularly adept at finding crickets by listening to their chirps, even though the separation of its ears is much smaller than the wavelengths generated by the chirps. Females of this species seek out chirping crickets (see Fig. 1) to lay their eggs on, and do so with an accuracy of a few degrees. The two eardrums of the fly are separated by a mere 1.5 millimeters (mm), yet it homes in on cricket chirps whose wavelength is some 50 times longer, for which the arrival-time difference between the ears is only a few millionths of a second. It is interesting to note that Zuk and coworkers found that “between the late 1990s and 2003, in just 20 or so cricket generations, Kauai’s cricket population had evolved into an almost entirely silent one” to avoid detection by the flies. Studies of the fly’s hearing organ by Miles and coworkers in the mid-1990s found that the fly’s ears work differently from those of larger species: the two eardrums are mechanically coupled at the middle and tuned to the cricket chirps, giving the fly its remarkable ability to locate the crickets.
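
To put the numbers in perspective, the maximum interaural time difference is simply the ear separation divided by the speed of sound. A quick check in Python (assuming c = 343 m/s in air) confirms the “few millionths of a second” figure.

```python
# Maximum arrival-time difference across the fly's eardrums
# (sound arriving along the line joining the two ears).
ear_separation_m = 1.5e-3  # 1.5 mm
speed_of_sound = 343.0     # m/s in air at ~20 C

dt = ear_separation_m / speed_of_sound
print(f"max time difference: {dt * 1e6:.1f} microseconds")  # ~4.4 us
```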


Figure 1. Ormia ochracea uses direction-finding ears to locate crickets.

In this paper, we present a miniature directional sensor designed on the basis of the fly’s ears. It consists of two wings connected in the middle by a bridge and was fabricated using micro-electro-mechanical-systems (MEMS) technology, as shown in Fig. 2. The sensor is made of the same material used in making microchips (silicon), with the two wings measuring 1 mm x 1 mm each and a thickness of less than half the width of a human hair (25 micrometers). The sensor is tuned to a narrow frequency range, which depends on the size of the bridge that connects the two wings. The vibration amplitudes of the sensor wings under sound excitation (less than one millionth of a meter) were probed electronically using highly sensitive comb-finger capacitors (similar to the tuning capacitors employed in older radios) attached to the edges of the wings. The response of the sensor was found to be highly directional (see Fig. 3) and matches the expected behavior well.


Figure 2. Designed (left) and fabricated (right) directional sound sensor showing the comb finger capacitors for electronically measuring nanometer scale vibrations generated by incident sound.  The size of the entire sensor is less than that of a pea.


Figure 3. Measured directional response of the sensor tuned to 1.67 kHz for a set of sound pressures down to 33 dB.

The sensor was able to detect sound levels close to that of a quiet whisper (30 decibels, dB), a thousand times smaller in intensity than the sound level of a typical conversation (60 dB). The sensor has many potential civilian and military applications involving the localization of sound sources, including explosions and gunshots.
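
The “thousand times” follows directly from the decibel definition for sound intensity; as a quick check:

$$ \frac{I_{60\,\mathrm{dB}}}{I_{30\,\mathrm{dB}}} = 10^{(60-30)/10} = 10^{3} = 1000 $$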

2pAAa4 – Does it sound better behind Miles Davis’ back? What would it sound like face-to-face? Rushing through a holographic sound image of the trumpet

Franz Zotter – zotter@iem.at
Matthias Frank – frank@iem.at

University of Music and Performing Arts Graz
Institute of Electronic Music and Acoustics (IEM)
Inffeldgasse 10/3, 8010 Graz, Austria

Popular version of paper 2pAAa4, “Challenges of musical instrument reproduction including directivity”
Presented Tuesday afternoon, November 3, 2015, 2:25 PM, Grand Ballroom 3
170th ASA Meeting, Jacksonville

In many of his concerts, Miles Davis used to play his trumpet facing away from the audience. Would it have made a difference had he faced the audience?

Unplugged acoustic instruments can have a tremendously different timbre in different orientations. Musicians experience such effects while playing their instruments in different environments. Those lacking such experience can only learn about the so-called directivity of musical instruments from publications showing diagrams of measured timbral changes. Comprehensive publications from the 1960s deliver remarkably detailed descriptions. And yet, it requires training to imagine what the timbral changes sound like just by looking at these diagrams.


Figure 1: A surrounding sphere of 64 microphones was built at IEM (Fabian Hohl, 2009) to record holographic sound images of musical instruments. The photo (Fabian Hohl, 2009) shows Silvio Rether playing the trumpet.

In the new millennium, researchers built surrounding spheres of microphones that make it possible to record a holographic sound image of any musical instrument (Figure 1). This was done to obtain a more natural representation of instruments in virtual acoustic environments for games or computer-aided acoustic design. Alternatively, the holographic sound image can be played back in real environments using a compact spherical loudspeaker array (Figure 2).


Figure 2: The photo (Franz Zotter, 2010) shows the icosahedral loudspeaker during concert rehearsals.

Such a recording can, for instance, convey a tangible experience of how strongly the timbre and loudness of a trumpet change with orientation. Audio example 1 is an excerpt from a corresponding holographic sound image recorded with 64 surrounding microphones. With each repetition of the excerpt, the recording position gradually moves from behind the instrumentalist to a face-to-face orientation.

While the recordings above were made excluding the acoustical influences of the room, this new kind of holographic sound imagery is a key technology for reproducing a fully convincing experience of a musical instrument within an arbitrary room in which it is played.

The icosahedron housing 20 loudspeakers (a compact spherical loudspeaker array) was built in 2006 at IEM. It is a device for playing back holographic sound images of musical instruments. Currently, it is used as a new tool in computer music to project sound into rooms utilizing wall reflections from different directions.

Audio Example:

 

In the example, one can clearly hear the orientation-related timbral changes of the trumpet. The short excerpt is played in 7 repetitions, each recorded at a different position, moving from behind the trumpet player to the front. The piece “Gaelforce” by Peter Graham is performed by Silvio Rether; the recording was made by Fabian Hohl at IEM using the sphere shown in Figure 1.