Gerald Bennett - gbennett@cerfnet.com
Swiss Center for Computer Music
Florhofgasse 6, 8001 Zurich, Switzerland
Popular version of paper 2pMU7
Presented Tuesday Afternoon, March 16, 1999
ASA/EAA/DAGA '99 Meeting, Berlin, Germany
It may seem as though the relation of acoustics to composition is tenuous, at best, for weren't all the masterpieces of classical music written without much explicit knowledge of acoustics? True enough; nevertheless, I will argue here that the acoustical knowledge of each age provided the framework within which composers imagined their music and that this relationship is of special relevance to electroacoustic music, where composers actually compose the sound itself.
Acoustics begins with Pythagoras in the sixth century BC, and his discovery that the phenomenon of consonance is related to simple whole-number ratios is one of the great abstractions of all time; it still colors the way we think about sound. The Pythagoreans understood the idea of pitch, but they had no conception of the scale. It was only in the fourth century BC that another Greek, Aristoxenos, conceived of sound as a continuum along which pitches were specific points, thus paving the way for the scale, one of the most fundamental concepts with which composers deal.
Greek scales were based on the fourth, not the octave, and it was not until the 14th century that music theory noted the "identity" of two notes an octave apart. This "identity" is, of course, an acoustical phenomenon. Once again, acoustics is at the root of an abstraction - the octave scale - without which our Western music cannot be imagined.
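To make the arithmetic concrete, here is a minimal sketch in Python of the whole-number ratios behind the consonances and of the octave "identity". The reference pitch is an arbitrary choice for the example; nothing here comes from the historical sources.

```python
# Hypothetical illustration: the simple whole-number ratios Pythagoras
# associated with the consonances, applied to a reference frequency.
# Doubling a frequency yields the note heard as "the same" an octave
# higher -- the acoustical "identity" described above.

from fractions import Fraction

f0 = 220.0  # reference pitch in Hz (A3); an arbitrary choice

consonances = {
    "octave": Fraction(2, 1),
    "fifth":  Fraction(3, 2),
    "fourth": Fraction(4, 3),
}

for name, ratio in consonances.items():
    print(f"{name:>6}: {ratio} -> {f0 * float(ratio):7.2f} Hz")

# Octave identity: 220 Hz, 440 Hz and 880 Hz are all heard as "A".
for octave in range(4):
    print(f"A{3 + octave}: {f0 * 2 ** octave:7.1f} Hz")
```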
Many other concepts in music theory have an acoustical basis. Three have been particularly important for composition. The first is Jean-Philippe Rameau's derivation in 1722 of the major triad by Pythagorean string division and his positing the same fundamental for each of the three inversions of that triad, an idea which is obvious to every first-semester harmony student but was very new at the time. The other two concepts were formulated by Hermann Helmholtz in 1863 in the most important book about musical acoustics ever published, Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik. The first of these is his proof of Ohm's law of acoustics, namely that the ear separates complex tones into series of simple vibrations (that is, that it performs Fourier analysis); the second is his analysis of timbre as a function of the frequency and amplitude of partial tones.
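Helmholtz's claim is easy to verify with modern tools. The following sketch, using invented partial frequencies and amplitudes, builds a complex tone from simple vibrations and then recovers them by Fourier analysis, the decomposition that Ohm's law attributes to the ear:

```python
# A minimal sketch (assumed parameters): construct a complex tone from
# three partials, then recover their frequencies and amplitudes with a
# Fourier transform -- the analysis Helmholtz argued the ear performs.

import numpy as np

sr = 8000                       # sample rate in Hz
t = np.arange(sr) / sr          # one second of time
partials = [(200, 1.0), (400, 0.5), (600, 0.25)]  # (freq Hz, amplitude)

tone = sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)

spectrum = np.abs(np.fft.rfft(tone)) / (len(tone) / 2)  # normalize to amplitude
freqs = np.fft.rfftfreq(len(tone), 1 / sr)

for f, a in partials:
    k = np.argmin(np.abs(freqs - f))   # bin nearest the partial
    print(f"{f} Hz: put in {a:.2f}, analysis finds {spectrum[k]:.2f}")
```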
Rameau's book changed the way musicians thought about harmonies and their relationships within a generation or two. Helmholtz's discoveries had to wait until composers began using electroacoustic means to "compose" the sound itself in the early 20th century to have much influence on composition. (The Hammond organ [1933] is perhaps the best-known electronic instrument of the first half of the century. Its control of timbre stems directly from Helmholtz's discoveries.) When Herbert Eimert founded the Cologne Electronic Music Studio in 1951, one of his principal ideas was to apply the same compositional rigor used for pitch and rhythm to the dimension of timbre by controlling the frequency and amplitude of partials. This idea would have been unthinkable without Helmholtz's work. When Max V. Mathews "invented" computer music at Bell Laboratories, beginning in the late 1950's, it was again Helmholtz's insight into the nature of timbre that determined the structure of the synthesis programs Mathews wrote and hence the framework within which composers thought and formulated their ideas.
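The Helmholtzian framework of those early instruments and programs amounts to additive synthesis with a fixed spectrum. Here is a minimal sketch; the odd-harmonic recipe is an invented, loosely clarinet-like spectrum, not taken from any historical studio practice:

```python
# A minimal additive-synthesis sketch in the Helmholtzian spirit: a timbre
# specified entirely by fixed frequencies and amplitudes of partials.
# All parameter values are illustrative assumptions.

import numpy as np

def additive_tone(freqs_amps, dur=1.0, sr=44100):
    """Sum sine-wave partials with constant amplitudes."""
    t = np.arange(int(dur * sr)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in freqs_amps)

# An invented spectrum emphasizing odd harmonics of 220 Hz.
spectrum = [(220 * n, 1.0 / n) for n in (1, 3, 5, 7, 9)]
tone = additive_tone(spectrum)
```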
In the early 1960's, musical acoustics books usually described instrumental timbre using recipes derived from very precise steady-state measurements. When composers used these recipes to synthesize sounds, the results barely resembled the original instruments. The French composer and physicist Jean-Claude Risset used computer techniques to take another look at instrumental timbre, and in 1969 he and Mathews published their important paper "Analysis of musical instrument tones" in Physics Today (22[2], pp. 32-40). Risset and Mathews analyzed trumpet tones and found that the amplitudes of the partials changed continuously, in an ordered way, over time. In place of the older static, two-dimensional Helmholtzian model of timbre (frequency and amplitude), they proposed a dynamic, three-dimensional model (frequency, amplitude and time). The discovery of the temporal evolution of timbre has changed the way musicians think about sound more profoundly than any other acoustical discovery in this century.
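In code, Risset's dynamic model simply means giving each partial its own amplitude envelope. The envelope shapes below are invented for illustration and are not Risset's measured trumpet data:

```python
# A sketch of the dynamic, three-dimensional view of timbre: each partial
# gets its own amplitude envelope, so the spectrum evolves in time.
# Envelope shapes here are assumptions, chosen only so that higher
# partials enter later and die away sooner, as Risset observed in
# trumpet tones.

import numpy as np

sr = 44100
dur = 1.0
t = np.arange(int(dur * sr)) / sr

def envelope(attack, decay):
    """Linear rise over `attack` seconds, then exponential fall."""
    env = np.minimum(t / attack, 1.0)
    env = env * np.exp(-(t - attack).clip(0) / decay)
    return env

tone = np.zeros_like(t)
for n in range(1, 7):                  # six harmonics of 330 Hz
    env = envelope(attack=0.02 * n, decay=0.4 / n)
    tone += (1.0 / n) * env * np.sin(2 * np.pi * 330 * n * t)
```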
At the same time as Mathews was inventing the technology of computer music, and in the same department at Bell Laboratories, Manfred Schroeder was inventing the technology of digital reverberation. Schroeder developed the algorithms that are at the heart of virtually all hardware and software digital reverberators. Digital reverberators allow composers of electroacoustic music (and recording engineers) to create imaginary spaces for their music. Schroeder's work, and that of the acousticians who followed him, has freed music from its dependence on the physical surroundings of its performance and has deeply influenced contemporary composers' view of acoustical space.
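Schroeder's classic topology, parallel feedback comb filters followed by allpass filters in series, is simple enough to sketch. The delay lengths and gains below are typical textbook values, not Schroeder's own:

```python
# A minimal sketch of the Schroeder reverberator topology. Feeding an
# impulse through it yields the impulse response of an imaginary room.

import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Allpass filter: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        x_d = x[n - delay] if n >= delay else 0.0
        y_d = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + x_d + g * y_d
    return y

def schroeder_reverb(x):
    # Mutually prime comb delays smear the echoes into a dense tail.
    combs = [(1557, 0.84), (1617, 0.83), (1491, 0.82), (1422, 0.81)]
    wet = sum(comb(x, d, g) for d, g in combs) / len(combs)
    for d, g in [(225, 0.7), (556, 0.7)]:   # allpasses thicken the echo density
        wet = allpass(wet, d, g)
    return wet

impulse = np.zeros(44100)
impulse[0] = 1.0
tail = schroeder_reverb(impulse)   # the imaginary room's response
```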
Two other areas of acoustical research have been of special importance for digital sound synthesis: acoustical modeling and physical modeling. Acoustical modeling is based on detailed studies of complex acoustical phenomena, for example the singing voice; synthesis techniques are then developed to model the acoustical result of the natural sound-production process. Physical modeling, on the other hand, is based on studies of the actual modes of sound production of specific physical objects such as plates, strings or air columns; here synthesis techniques model the sound-production process itself, a procedure which until recently was prohibitively costly in computation. Both types of modeling retain important perceptual aspects of the original, even when the composer chooses to create sounds which seem far removed from it.
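As an illustration of the physical-modeling idea, here is the well-known Karplus-Strong plucked-string algorithm, a particularly simple string model chosen only because it fits in a few lines (the IRCAM examples below use far richer models):

```python
# A sketch of physical modeling: simulate the vibrating string itself
# rather than its spectrum. A delay line the length of one period stands
# in for the string; averaging adjacent samples in the feedback loop
# models frequency-dependent loss, so high partials decay fastest.

import numpy as np

def pluck(freq, dur=1.0, sr=44100):
    period = int(sr / freq)                # round trip along the "string"
    rng = np.random.default_rng(0)
    line = rng.uniform(-1, 1, period)      # the pluck: a burst of noise
    out = np.empty(int(dur * sr))
    for n in range(len(out)):
        out[n] = line[n % period]
        line[n % period] = 0.5 * (line[n % period] + line[(n + 1) % period])
    return out

string = pluck(220.0)   # a decaying plucked tone near 220 Hz
```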
Four short sound examples may serve as illustrations. The examples are in RealAudio format and can be downloaded and played on the reader's computer. Example 1 (692 kBytes), an excerpt from Jean-Claude Risset's Computer Suite from Little Boy (1968), is a never-ending downward glissando made by controlling very precisely the temporal evolution of the partials of the sound. Example 2 (726 kBytes), taken from the piece Northern Migrations by the American composer Shawn Decker, illustrates the illusion of movement of sound in space. Example 3 (907 kBytes), from Rainstick (1993) by the author, illustrates acoustical modeling of the singing voice; at the climax of the example one voice "explodes", an idea derived from acoustical knowledge about the structure of the singing voice. Example 4 (174 kBytes) illustrates physical modeling: the three sounds model a large metal plate, a strummed string instrument and a bass clarinet playing multiphonic sounds. The synthesis was realized at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM).
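For readers curious how a "never-ending" glissando like Example 1 can be constructed, here is a sketch of the underlying Shepard-Risset technique. All parameters are illustrative assumptions, and the code makes no attempt to reproduce Risset's actual sound:

```python
# Several components an octave apart slide downward together. As each one
# fades out at the bottom of a bell-shaped loudness window, another fades
# in at the top, so the descent never ends.

import numpy as np

sr = 22050
dur = 10.0                       # seconds per full cycle of the illusion
t = np.arange(int(dur * sr)) / sr

n_comp = 6                       # components spanning n_comp octaves
f_low, span = 55.0, float(n_comp)
out = np.zeros(len(t))

for k in range(n_comp):
    # Position in log-frequency drifts downward and wraps around the stack.
    pos = (k - t / dur * span) % span
    freq = f_low * 2.0 ** pos
    # Raised-cosine window: amplitude is zero exactly where the wrap occurs,
    # so the frequency jump is inaudible.
    amp = 0.5 * (1 - np.cos(2 * np.pi * pos / span))
    phase = 2 * np.pi * np.cumsum(freq) / sr   # integrate the gliding frequency
    out += amp * np.sin(phase)
```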
In summary, musical acoustics has always provided the composer with the framework within which he or she can think and dream hitherto unheard-of music. But some of the most important developments in musical acoustics have come from musicians (Risset's investigations of timbre, Johan Sundberg's studies of the singing voice) or from musically oriented research environments (Schroeder's research on artificial reverberation at Bell Laboratories). We composers depend on musical acoustics to think our ideas, but our ideas often lead musical acoustics into areas it would not otherwise have explored.