ASA Lay Language Papers
162nd Acoustical Society of America Meeting


Music from "an evil subterranean beast"

Chris Chafe – cc@ccrma.stanford.edu
Center for Computer Research in Music and Acoustics
Stanford University
Stanford, CA 94305

Popular version of paper 3aMU7
Presented Wednesday Morning, November 2, 2011
162nd ASA Meeting, San Diego, Calif.

Computers have been used to synthesize musical tones ever since Max Mathews' pioneering work at Bell Labs in 1957. A computer program can create music with recognizable qualities such as melody, rhythm, and instrument type, often by way of "rendering" a composition. The same program code, if made to run in real time with playable controls, turns the computer into the musician's instrument. MIDI keyboards are a good example of how live synthesis has become a common "axe" in the arsenal of musical tools.
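
To make "rendering" concrete, here is a minimal sketch in Python (not Mathews' code; the notes, envelope and file name are arbitrary choices for the example). Every sample of a short melody is computed ahead of time and written to a sound file. Run with playable controls in place of the fixed note list, the same synthesis loop becomes a live instrument.

    import math, wave, struct

    RATE = 44100                          # samples per second

    def render_tone(freq, seconds):
        """Compute every sample of a decaying sine tone."""
        n = int(RATE * seconds)
        return [math.sin(2 * math.pi * freq * i / RATE) * (1 - i / n)
                for i in range(n)]

    # "Render" a tiny three-note melody, sample by sample.
    samples = []
    for freq in (262.0, 330.0, 392.0):    # C4, E4, G4
        samples += render_tone(freq, 0.5)

    # Write the result to a 16-bit mono sound file.
    with wave.open("melody.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(32767 * s)) for s in samples))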

The mathematical algorithms used to simulate more typical acoustical instruments derive from Mathews' earliest work. If a trumpet sound is desired, the basic time and frequency structure of the target tone must be understood and replicated. (This is as opposed to "sampling," in which a recording of a tone is simply played back.) Different synthesis methods are employed to create this replica. One that excels at rendering "expressive nuance" is physical modeling, which has a good analogy in the world of computer graphics. Say that a cartoon figure is to be led across a desert, limping along in search of water and becoming more and more desperate. The modeler and animator would first create a figure (in software) with the appropriate shapes, joints and textures.

Next they would add "physics" to the detailed geometry of the movable parts: points in a mathematical model which represent articulations that can be moved. Then the model is "performed" by scripting the complex of motions that propel the figure (just as our ensemble of leg muscles must be "choreographed" precisely to walk well). To get that desperate feeling, it's a matter of sequencing the motions so that the difficulties of dehydration and desperation are communicated on top of ordinary walking motion across the scene.

In music, communicating feeling is what instrumentalists ultimately train for. A passage can be played in ways which evoke emotions across a spectrum of feelings. Doing the same thing in computer code is much more difficult: establishing the nuances of timing, phrasing and subtle quality changes in a program which generates sound from scratch. That's where physical modeling has an appeal as a synthesis approach. Back to our figure in the desert: say that she/he/it is emitting vocal sounds. As things become more desperate, the sounds should become slower, hoarser, plagued by dust. A physical model of the vocal tract can create vocal sounds which do this, provided the physics underlying the model has controls for sluggishness, stiffness, friction and so on. The work reported here creates expressive "character" which can be used in live performance.
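
The flavor of such physical controls can be sketched with the simplest model available: a single mass on a spring with friction, computed one audio sample at a time. This is only an illustration, not the vocal-tract model discussed above, and the parameter values are invented for the example. Raising the "stiffness" raises the pitch; raising the "friction" makes the tone die away more sluggishly.

    RATE = 44100

    def struck_mass(stiffness, friction, seconds=1.0):
        """One mass on a spring: its position is the output waveform.
        stiffness sets the pitch, friction sets how fast it decays."""
        pos, vel = 1.0, 0.0              # "strike" the mass
        dt = 1.0 / RATE
        out = []
        for _ in range(int(RATE * seconds)):
            acc = -stiffness * pos - friction * vel
            vel += acc * dt              # integrate velocity, then position
            pos += vel * dt              # (semi-implicit Euler, stable here)
            out.append(pos)
        return out

    bright = struck_mass(stiffness=4.0e6, friction=8.0)    # clear, ringing
    weary  = struck_mass(stiffness=3.0e6, friction=200.0)  # dull, quickly damped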

Two recent compositions have capitalized on the great emotional range of a custom-designed music synthesis algorithm. "Animal" plays a part in Tomato Quintet (a music installation for display in museums) and in Phasor (a solo contrabass piece with computer accompaniment). In both cases the Animal algorithm is played "live" in real time with gestural controls, but the controls come from two very different origins. Tomato Quintet "plays" the Animal with updates from CO2 sensors that are "sniffing" vats of ripening tomatoes over the course of weeks. The contrabassist playing Phasor manipulates Animal with signals from a K-Bow (a sensor bow with built-in accelerometers and strain gauges). Animal's sound world is uniquely its own. Like an animated character, instructions that move or change its state elicit characteristic and sometimes quirky behavior. There is an identifiable sense of personality, with moods or temperaments which can be exploited in the music. "The computerized sounds were spacey and sometimes menacing, sounding at times like Chafe was trying to tame an evil subterranean beast. ... Attached to his bow was a small motion-detecting computer, something like a powerful Wii controller. The sounds he created were modified, manipulated and echoed back by his laptop, creating a swirl of sounds in the speakers set up in all corners of the room in surround sound." (Hao Ying, Global Times, 2011)

The more common physical models of wind, brass, and bowed-string instruments have related constructions, all patched together from simulations of basic acoustical parts such as resonators and exciters.
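
A classic instance of this patching is the Karplus-Strong plucked string, sketched below: a burst of noise acts as the exciter, and a recirculating delay line with a gentle smoothing filter acts as the resonator. (This stands in for the general exciter-plus-resonator recipe; it is not Animal's construction, which is described next.)

    from collections import deque
    import random

    RATE = 44100

    def plucked_string(freq, seconds=1.0):
        """Karplus-Strong: a noise-burst exciter feeds a delay-line
        resonator whose length sets the pitch."""
        period = int(RATE / freq)
        line = deque(random.uniform(-1, 1) for _ in range(period))
        out = []
        for _ in range(int(RATE * seconds)):
            first = line.popleft()
            # Smooth and slightly attenuate each pass around the loop:
            # the "losses" that make a real string's tone decay.
            line.append(0.996 * 0.5 * (first + line[0]))
            out.append(first)
        return out

    voice = plucked_string(440.0)        # an A at 440 Hz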

Animal can be categorized as a "meta-physical" model, or modeling abstraction, because it goes beyond acoustical simulation. Abstraction opens the door to the inclusion of mathematical "parts" from domains other than instrument acoustics, and model components can be combined in physically impossible ways. In Animal's case, the logistic map has been borrowed from population biology and serves as the exciter. It is "welded" in software to two lengths of tube which simulate slide whistles.
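
Animal's actual code isn't reproduced here, but the named ingredients can be sketched. In the illustration below, the logistic map x <- r * x * (1 - x) supplies the excitation, and two recirculating delay lines stand in for the tubes; the coupling gains, tube lengths, and feedback scheme are invented for the example.

    RATE = 44100

    def animal_sketch(r=3.7, tube1=0.005, tube2=0.0085, seconds=2.0):
        """Illustrative only: a logistic-map exciter "welded" to two
        delay-line tubes. r sweeps the map from orderly to chaotic;
        tube lengths are in seconds and set the two resonant pitches."""
        d1 = [0.0] * int(RATE * tube1)   # first "slide whistle"
        d2 = [0.0] * int(RATE * tube2)   # second "slide whistle"
        i1 = i2 = 0
        x = 0.5                          # logistic-map state, stays in (0, 1)
        out = []
        for _ in range(int(RATE * seconds)):
            x = r * x * (1.0 - x)        # the exciter, from population biology
            drive = 2.0 * x - 1.0        # center around zero for audio
            # Feed the exciter into both tubes; each tube also feeds back
            # on itself, like a recirculating air column.
            y1 = d1[i1]
            d1[i1] = 0.3 * drive + 0.7 * y1
            i1 = (i1 + 1) % len(d1)
            y2 = d2[i2]
            d2[i2] = 0.3 * drive + 0.7 * y2
            i2 = (i2 + 1) % len(d2)
            out.append(0.5 * (y1 + y2))
        return out

    samples = animal_sketch()

In performance, parameters like r and the tube lengths are the kind of controls that the K-Bow's sensor signals, or the tomato vats' CO2 readings, would be mapped onto.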