Brian Connolly – bconnolly1987@gmail.com Music Department Logic House South Campus Maynooth University Co. Kildare Ireland
Popular version of paper 5aMU1, “The inner ear as a musical instrument”, presented Friday morning, November 6, 2015, 8:30 AM, Grand Ballroom 2, 170th ASA Meeting, Jacksonville. See also: The inner ear as a musical instrument – POMA
(please use headphones for listening to all audio samples)
Did you know that your ears can sing? You may be surprised to hear that they can, in fact, make particularly good performers, and recent psychoacoustics research has revealed the ears’ true potential in musical creativity. ‘Psychoacoustics’ is loosely defined as the study of the perception of sound.
Figure 1: The Ear
A good performer can carry out the required tasks reliably and without errors. In many respects the ear is just such a performer: its responses to certain sounds are so straightforward that its behaviour can be predicted and therefore easily controlled. Within the listening system, the inner ear can behave as a highly effective instrument that creates its own sounds, and many experimental musicians have been using this to turn listeners’ ears into participating performers in the realization of their music.
One of the most exciting avenues of musical creativity is the psychoacoustic phenomenon known as otoacoustic emissions. These are tones created within the inner ear when it is exposed to certain sounds. One example of these emissions is the ‘difference tone.’ When two clear frequencies enter the ear at, say, 1,000 Hz and 1,200 Hz, the listener hears these two tones as expected, but the inner ear also creates its own third frequency at 200 Hz, because this is the mathematical difference between the two original tones. The ear literally sends a 200 Hz tone back out through the ear, and this sound can be detected by an in-ear microphone; doctors carrying out hearing tests on babies use this as an integral part of their examinations. This means that composers can place certain tones in their work and predict that the listeners’ ears will add an extra dimension to the music upon hearing it. Within certain loudness and frequency ranges, listeners will even be able to feel their ears buzzing in response to these stimulus tones! This makes for an exciting new layer in contemporary music making and listening.
First listen to this tone. This is very close to the sound your ear will sing back during the second example.
Insert – 200.mp3
Here is the second sample, containing just two tones at 1,000 Hz and 1,200 Hz. See if you can also hear the very low, buzzing difference tone, which is not being sent into your ear: it is being created in your ear and sent back out towards your headphones!
Insert – 1000and1200.mp3
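For readers who would like to experiment, here is a minimal sketch of how a two-tone stimulus of this kind could be synthesized. The 1,000 Hz and 1,200 Hz frequencies come from the example above; the sample rate, duration, amplitude and file name are assumed values for illustration only, not the author’s own materials.

```python
# Minimal sketch (assumed parameters, not the author's code): synthesize the
# two-tone stimulus from the difference-tone demonstration. The 200 Hz
# difference tone is NOT in this file; it is generated inside the listener's
# cochlea as a distortion-product otoacoustic emission.
import numpy as np
from scipy.io import wavfile

fs = 44100                    # sample rate in Hz (assumed)
dur = 5.0                     # duration in seconds (assumed)
t = np.arange(int(fs * dur)) / fs

f1, f2 = 1000.0, 1200.0       # stimulus frequencies from the text
stimulus = 0.3 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

wavfile.write("1000and1200.wav", fs, (stimulus * 32767).astype(np.int16))
```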
If you could hear the 200 Hz difference tone in the previous example, have a listen to this much more complex demonstration, which will make your ears sing a well-known melody. Try not to focus on the louder impulsive sounds, and see if you can hear your ears humming along, performing the tune of Twinkle, Twinkle, Little Star at a much lower volume!
(NB: The difference tones will start after about 4 seconds of impulses)
Insert – Twinkle.mp3
Auditory beating is another phenomenon which has caught the interest of many contemporary composers. In the example below you will hear a 400 Hz tone in your left ear and a 405 Hz tone in your right ear.
First play the sample below, placing the headphones into your ears just one at a time, not together. You will hear two clear tones when you listen to the channels separately.
Insert – 400and405beating.mp3
Now see what happens when you place them into your ears simultaneously. You will be unable to hear these as two separate tones. Instead, you will hear a fused tone which beats five times per second. This is because each of your ears sends electrical signals to the brain telling it what frequency it is responding to, but these two frequencies are too close together, so a perceptual confusion occurs: a single combined frequency is perceived, beating at a rate equal to the mathematical difference between the two tones.
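As a companion to the sample above, here is a minimal sketch that generates the stereo stimulus with 400 Hz in the left channel and 405 Hz in the right; sample rate, duration and amplitude are again assumed. The beat rate follows directly from the 5 Hz difference between the tones.

```python
# Minimal sketch (assumed parameters): stereo stimulus with 400 Hz in the left
# channel and 405 Hz in the right, as in the beating demonstration above.
import numpy as np
from scipy.io import wavfile

fs, dur = 44100, 5.0
t = np.arange(int(fs * dur)) / fs

left = 0.3 * np.sin(2 * np.pi * 400.0 * t)    # left-ear tone
right = 0.3 * np.sin(2 * np.pi * 405.0 * t)   # right-ear tone
stereo = np.stack([left, right], axis=1)      # shape (samples, 2)

# Heard together, the tones fuse and beat at |405 - 400| = 5 Hz.
wavfile.write("400and405beating.wav", fs, (stereo * 32767).astype(np.int16))
```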
Auditory beating becomes particularly interesting in pieces written for surround-sound environments, where the listener’s proximity to the various speakers plays a key role. Simply turning one’s head in these scenarios can entirely change the colour of the sound, as different layers of beating alter the overall timbre.
So how can all of this be meaningful to composers and listeners alike? The examples shown here are intended to be basic and to serve as proofs of concept more than anything else. In the much more complex world of music composition, the scope for employing such material is seemingly endless. Considering the ear as a musical instrument gives the listener the opportunity to engage with sound and music in a more intimate way than ever before.
John Mourjopoulos – mourjop@upatras.gr University of Patras Audio & Acoustic Technology Group, Electrical and Computer Engineering Dept., 26500 Patras, Greece
Historical perspective
The ancient open amphitheatres and the roofed odeia of the Greco-Roman era are the earliest testament to public buildings designed for the effective communication of theatrical and music performances to large audiences, often of up to 15,000 spectators [1-4]. Although mostly located around the Mediterranean, such antique theatres were built in every major city of the ancient world in Europe, the Middle East, North Africa and beyond. Nearly 1,000 such buildings have been identified; their evolution possibly started in Minoan and archaic times, around the 12th century BC. However, the familiar amphitheatric form appears during the age that saw the flourishing of philosophy, mathematics and geometry, after the 6th century BC. These theatres were the birthplace of the classic ancient tragedy and comedy plays, fostering theatrical and music activities for at least 700 years, until their demise during the early Christian era. After a gap of 1,000 years, public theatres, opera houses and concert halls, often modelled on these antique buildings, re-emerged in Europe during the Renaissance.
In antiquity, open theatres were mainly used for staging drama, so their acoustics were tuned for speech intelligibility, allowing very large audiences to hear the actors and the singing chorus clearly. During this era, smaller roofed versions of these theatres, the “odeia” (plural of “odeon”), were also constructed [4, 5], often in close vicinity to open theatres (Figure 1). The odeia had different acoustic qualities, with strong reverberation, and thus were not appropriate for speech and theatrical performances; instead they were good for performing music, functioning somewhat like modern-day concert halls.
Figure 1: Representation of buildings around the ancient Athens Acropolis during the Roman era. Besides the ancient open amphitheatre of Dionysus, the roofed odeion of Pericles is shown, along with the later-period odeion of Herodes (adapted from www.ancientathens3d.com [6]).
Open amphitheatre acoustics for theatrical plays
The open antique theatre signifies the initial meeting point between architecture, acoustics and the theatrical act. This simple structure consists of the large, truncated-cone-shaped stepped audience area (the amphitheatrical “koilon” in Greek, or “cavea” in Latin), the flat stage area for the chorus (the “orchestra”) and the stage building (the “skene”) with the raised stage (“proskenion”) for the actors (Figure 2).
Figure 2: Structure of the Hellenistic-period open theatre.
The acoustic quality of these ancient theatres amazes visitors and experts alike. Recently, the widespread use of acoustic simulation software and sophisticated computer models has allowed a better understanding of the unique open-amphitheatre acoustics, even for theatres known solely from archaeological records [1,3,7,9,11]. Modern portable equipment has allowed state-of-the-art measurements to be carried out in some well-preserved ancient theatres [8,10,13]. The classical/Hellenistic theatre of Epidaurus in southern Greece, famous for its near-perfect speech intelligibility, is often studied as a test case [12,13]. Recent measurements with an audience present (Figure 3) confirm that intelligibility is retained despite the increased sound absorption of the audience [13].
Figure 3: Acoustic measurements at the Epidaurus theatre during a recent drama performance (from Psarras et al. [13]).
It is now clear that the “good acoustics” of these amphitheatres, and especially of Epidaurus, are due to a number of factors: sufficient amplification of stage sound, uniform spatial acoustic coverage, low reverberation and enhancement of voice timbre, all contributing to perfect intelligibility even at seats 60 meters away, provided that environmental noise is low. These acoustically important functions are largely a result of the unique amphitheatrical shape: for any sound produced on the stage or in the orchestra, the geometric shape and hard materials of the theatre’s surfaces generate sufficient reflected and scattered sound energy, which arrives first from the stage building (when this exists), then from the orchestra floor and finally from the tops and backs of the seat rows adjacent to each listener position, and which is spread uniformly over the audience area [11,13] (see Figures 4 and 5).
Figure 4: Acoustic wave propagation 2D model for the Epidaurus theatre. The blue curves show the direct and reflected waves at successive time instances indicated by the red dotted lines. Along with the forward propagating wavefronts, backscattered and reflected waves from the seating rows are produced (from Lokki et al. [11]).
This reflected sound energy reinforces the sound produced on the stage, and its main bulk arrives at the listener’s ears very shortly, typically within 40 milliseconds, after the direct signal (see Figure 5). Within such short intervals, as far as the listener’s brain is concerned, this sound also appears to come from the direction of the source on the stage, due to a well-known perceptual property of human hearing often referred to as the “precedence” or Haas effect [11,13].
Figure 5: Acoustic response measurement for the Epidaurus theatre, assuming that the source emits a short pulse and the microphone is at a seat 15 meters away. Since the stage building no longer exists, the first reflection arrives very shortly after the direct sound, from the orchestra floor. Seven successive, periodic reflections can be seen from the tops and the risers of adjacent seat rows. Their energy dies away within approximately 40 milliseconds of the arrival of the direct sound (from Vassilantonopoulos et al. [12]).
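As a rough illustration of why these reflections fuse with the direct sound, the sketch below compares the path lengths of the direct sound and a single floor reflection for a listener 15 meters from the source, using the image-source construction. The source and listener heights are assumed values for illustration, not measurements from Epidaurus.

```python
# Rough sketch (assumed, simplified 2D geometry): compare the arrival time of
# the direct sound with a reflection off the orchestra floor for a listener
# 15 m from the source, and check that it falls inside the ~40 ms window in
# which the ear fuses reflections with the direct sound (precedence effect).
import math

c = 343.0                 # speed of sound, m/s (approximate)
src_h, lst_h = 1.5, 1.2   # source and listener heights above the floor (assumed)
dist = 15.0               # horizontal source-listener distance from the text

direct = math.hypot(dist, src_h - lst_h)
# Floor reflection: mirror the source below the floor (image-source method).
reflected = math.hypot(dist, src_h + lst_h)

delay_ms = (reflected - direct) / c * 1000.0
print(f"direct {direct:.2f} m, reflected {reflected:.2f} m, "
      f"extra delay {delay_ms:.1f} ms (well within ~40 ms)")
```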
The dimensions of the seating width and riser height, as well as the koilon slope, ensure minimal sound occlusion by the lower tiers and the audience, and result in fine tuning of the in-phase combinations of the strong direct and reflected sounds [9,11]. As a result, frequencies useful for speech communication are amplified, adding a characteristic coloration to the voice and further assisting clear speech perception [11]. These specific design details have been found to affect the qualitative and quantitative aspects of amphitheatre acoustics, and in this respect each ancient theatre has a unique acoustic character. Given that the amphitheatric seating concept evolved from earlier archaic rectangular or trapezoidal seating arrangements with inferior acoustics (see Figure 6), this evolution hints at conscious acoustic design principles possibly employed by the ancient architects. During the Roman period, the stage building grew in size and the orchestra was truncated, showing adaptation to artistic, political and social trends, with acoustic properties matched to the intended new uses, which favoured the visual elements of performance more [4,15]. Unfortunately, only a few fragments of such ancient acoustic design principles have survived, and only via the writings of the Roman architect Marcus Vitruvius Pollio (70-15 BC) [14].
Figure 6: Evolution of the shape of open theatres. Roman-period theatres had a semi-circular orchestra and a taller, more elaborate stage building. The red lines indicate the koilon/orchestra design principle as described by the ancient architect Vitruvius.
The acoustics of odeia for music performances
Although the form of the ancient odeia broadly followed the amphitheatric seating and stage/orchestra design, they were covered by roofs, usually made from timber. This covered amphitheatric form was also initially adopted by the early Renaissance theatres, nearly 1,000 years after the demise of the antique odeia [16] (Figure 7).
Figure 7: Different shapes of roofed odeia of antiquity and the Renaissance period (representations from www.ancientathens3d.com [6]).
Supporting a large roof structure without any inner pillars, over the wide diameter dictated by the amphitheatric shape, remains a structural engineering feat even today, and it is no wonder that the odeia roofs have not been preserved. Without their roofs, these odeia now appear similar to the open amphitheatres. However, computer simulations indicate that in their original state, unlike the open theatres, they had strong acoustic reverberation: their acoustics helped the loudness and timbre of musical instruments at the expense of speech intelligibility, so these spaces were not appropriate, and were not used, for theatrical plays [4,5]. For the Herodes odeion in Athens (Figure 8), computer simulations show that the semi-roofed version had up to 25% worse speech intelligibility than the current open state, but its strong reverberation, similar to that of a modern concert hall of comparable inner volume (10,000 m3), made it suitable as a music performance space [5].
Figure 8: The Herodes odeion in its current state, and computer models of the current open state and of its antique semi-roofed version (from Vassilantonopoulos et al. [5]). Very recent archaeological evidence indicates that the roof fully covered the building.
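To give a feel for the numbers, the sketch below applies Sabine’s classic reverberation formula to the 10,000 m3 volume quoted above. The total absorption values are purely illustrative assumptions, not results from the cited simulations.

```python
# Illustrative sketch only (absorption values assumed, not from the study):
# Sabine's formula relates reverberation time to room volume and total
# absorption, which is how a roofed odeion of ~10,000 m^3 can reverberate
# like a concert hall, while an open theatre, with the sky acting as a
# perfect "absorber", barely reverberates at all.
def rt60_sabine(volume_m3: float, absorption_m2: float) -> float:
    """Sabine reverberation time in seconds: RT60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_m2

volume = 10_000.0                            # inner volume from the text, m^3
for absorption in (800.0, 1200.0, 1600.0):   # assumed total absorption, metric sabins
    print(f"A = {absorption:6.0f} m^2  ->  RT60 = {rt60_sabine(volume, absorption):.1f} s")
```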
Thousands of years ago, these antique theatres established principles of acoustic functionality that still prevail today for the proper presentation of theatre and music performances to public audiences, and they thus mark the origins of the art and science of building acoustics.
References
[1] F. Canac, “L’acoustique des théâtres antiques”, CNRS, Paris (1967).
[2] R. Shankland, “Acoustics of Greek theatres”, Physics Today (1973).
[3] K. Chourmouziadou, J. Kang, “Acoustic evolution of ancient Greek and Roman theatres”, Applied Acoustics, vol. 69 (2008).
[4] G. C. Izenour, “Roofed Theaters of Classical Antiquity”, Yale University Press, New Haven, Connecticut (1992).
[5] S. Vassilantonopoulos, J. Mourjopoulos, “The Acoustics of Roofed Ancient Odea”, Acta Acustica united with Acustica, vol. 95 (2009).
[6] D. Tsalkanis, www.ancientathens3d.com (accessed April 2015).
[7] S. L. Vassilantonopoulos, J. N. Mourjopoulos, “A study of ancient Greek and Roman theater acoustics”, Acta Acustica united with Acustica, vol. 89 (2002).
[8] A. C. Gade, C. Lynge, M. Lisa, J. H. Rindel, “Matching simulations with measured acoustic data from Roman theatres using the ODEON programme”, Proceedings of Forum Acusticum 2005 (2005).
[9] N. F. Declercq, C. S. Dekeyser, “Acoustic diffraction effects at the Hellenistic amphitheatre of Epidaurus: Seat rows responsible for the marvellous acoustics”, J. Acoust. Soc. Am., vol. 121 (2007).
[10] A. Farnetani, N. Prodi, R. Pompoli, “On the acoustics of ancient Greek and Roman theatres”, J. Acoust. Soc. Am., vol. 124 (2008).
[11] T. Lokki, A. Southern, S. Siltanen, L. Savioja, “Studies of Epidaurus with a hybrid room acoustics modelling method”, Acta Acustica united with Acustica, vol. 99 (2013).
[12] S. Vassilantonopoulos, T. Zakynthinos, P. Hatziantoniou, N.-A. Tatlas, D. Skarlatos, J. Mourjopoulos, “Measurement and analysis of acoustics of Epidaurus theatre” (in Greek), Hellenic Institute of Acoustics Conference (2004).
[13] S. Psarras, P. Hatziantoniou, M. Kountouras, N.-A. Tatlas, J. Mourjopoulos, D. Skarlatos, “Measurement and Analysis of the Epidaurus Ancient Theatre Acoustics”, Acta Acustica united with Acustica, vol. 99 (2013).
[14] Vitruvius, “The ten books on architecture” (translated by M. H. Morgan), Harvard University Press, London/Cambridge, MA (1914).
[15] B. Beckers, N. Borgia, “The acoustic model of the Greek theatre”, Protection of Historical Buildings, Prohitech09 (2009).
[16] M. Barron, “Auditorium acoustics and architectural design”, E & FN Spon, London (1993).
Popular version of poster 5aMU1, presented Friday morning, May 22, 2015, 8:35 AM – 8:55 AM, Kings 4, 169th ASA Meeting, Pittsburgh
In this paper, the relationship between musical instruments and the rooms they are performed in was investigated. A musical instrument is typically characterized as a system consisting of a tone generator combined with a resonator. A saxophone, for example, has a reed as its tone generator and a conical resonator whose effective length can be changed with keys to produce different musical notes. Often neglected is the fact that for all wind instruments a second resonator is coupled to the tone generator: the vocal cavity. We use our vocal cavity every day when we speak, forming characteristic formants, local enhancements in the frequency spectrum that shape vowels. This is achieved by varying the diameter of the vocal tract at specific positions along its axis. In contrast to the resonator of a wind instrument, the vocal tract is fixed in length by the distance between the vocal cords and the lips. Consequently, the vocal tract cannot be used to change the fundamental frequency over a larger melodic range; for our voice, the change in fundamental frequency is controlled via the tension of the vocal cords.

The instrument’s resonator, however, is not an adequate device for controlling the timbre (harmonic spectrum) of an instrument, because it can only be varied in length but not in width. Therefore, the player’s adjustment of the vocal tract is necessary to control the timbre of the instrument. Some instruments possess additional mechanisms to control timbre, e.g., the embouchure, which controls the tone generator directly using the lip muscles; for others, like the recorder, timbre can only be shaped through changes in the wind supply provided by the lungs and through changes of the vocal tract.

The role of the vocal tract has not been addressed systematically in the literature and in learning guides, for two obvious reasons. Firstly, there is no known systematic approach for quantifying the internal body movements that shape the vocal tract; each performer has to find the best vocal tract configurations intuitively. For the resonator system, in contrast, the changes are described by the musical notes, and where multiple ways exist to produce the same note, additional signs show how to finger it (e.g., by specifying a key combination). Secondly, in Western classical music culture, vocal tract adjustments predominantly have a corrective function, balancing out the harmonic spectrum to make the instrument sound as even as possible across its register.
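The connection between vocal tract shape and formants can be illustrated with the textbook idealization of the tract as a uniform tube closed at the glottis and open at the lips. The sketch below uses an assumed 17 cm tract length and is only a ballpark illustration, not part of the study.

```python
# Simple sketch (idealized): resonances of a uniform tube closed at one end
# (the glottis) and open at the other (the lips). A real vocal tract is not
# uniform; varying its cross-section along its length is exactly what moves
# these resonances around to form different vowels. The uniform tube gives
# the familiar "neutral vowel" ballpark near 500/1500/2500 Hz.
c = 343.0      # speed of sound, m/s (approximate)
L = 0.17       # vocal tract length, m (typical adult value, assumed)

resonances = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
print([round(f) for f in resonances])   # -> roughly [504, 1513, 2522] Hz
```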
PVC-Didgeridoo adapter for soprano saxophone
In non-Western cultures, the role of the oral cavity can be much more important in conveying musical meaning. The didgeridoo, for example, has a fixed resonator with no keyholes, and consequently it can only produce a single-pitched drone. The musical parameter space is then defined by modulating the overtone spectrum above that tone by changing the vocal tract dimensions and by creating vocal sounds on top of the buzzing of the lips on the didgeridoo’s edge. Mouthpieces of Western brass instruments have a cup behind the rim with a very narrow opening to the resonator, the throat. The didgeridoo has no cup; the rim is simply the edge of the resonator, finished with a ring of beeswax. While the narrow throat of a Western mouthpiece mutes additional sounds produced with the voice, didgeridoos are very open from end to end and carry the voice much better.
The room a musical instrument is performed in acts as a third resonator, which also affects the timbre of the instrument. In our case, the room was simulated using a computer model with early reflections and late reverberation.
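As an illustration of this kind of room model, the sketch below builds a toy impulse response from a few discrete early reflections plus an exponentially decaying noise tail, and convolves it with a dry signal. All delays, gains and decay constants are assumed for illustration and are not the parameters used in the study.

```python
# Toy sketch (assumed parameters, not the study's room model): a synthetic room
# impulse response made of a few discrete early reflections plus an
# exponentially decaying noise tail for the late reverberation.
import numpy as np

fs = 44100
rng = np.random.default_rng(0)

ir = np.zeros(fs)                                    # 1-second impulse response
ir[0] = 1.0                                          # direct sound
for delay_ms, gain in [(12, 0.6), (21, 0.45), (33, 0.35), (47, 0.25)]:
    ir[int(delay_ms * fs / 1000)] += gain            # early reflections (assumed)

t = np.arange(fs) / fs
tail = rng.standard_normal(fs) * np.exp(-t / 0.35)   # ~0.35 s decay (assumed)
start = int(0.05 * fs)                               # late tail begins after 50 ms
ir[start:] += 0.2 * tail[start:]

def add_room(dry: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) instrument signal with the toy room response."""
    return np.convolve(dry, ir)
```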
Tone generators for the soprano saxophone, from left to right: Chinese bawu, soprano saxophone, bassoon reed, cornetto.
In general, it is difficult to assess the effects of a mouthpiece and a resonator individually, because both vary across instruments: the trumpet, for example, has a narrow cylindrical bore with a brass mouthpiece, while the saxophone has a wide conical bore with a reed-based mouthpiece. To mitigate this, several tone generators were adapted for a soprano saxophone, including a brass mouthpiece from a cornetto, a bassoon mouthpiece and a didgeridoo adapter made from a 140 cm folded PVC pipe that can also be attached to the saxophone. It turns out that exchanging the tone generator changes the timbre of the saxophone significantly. The cornetto mouthpiece gives the instrument a much mellower tone. Like the baroque cornetto, the instrument then sounds better in a bright room with lots of high frequencies, while the saxophone is at home in a 19th-century concert hall with a steeper roll-off at high frequencies.
Miriam Kolar, Ph.D. – mkolar@amherst.edu AC# 2255, PO Box 5000 Architectural Studies Program & Dept. of Music Amherst College Amherst, MA 01002
Popular version of paper 4pAAa2, “Pututus, Resonance and Beats: Acoustic Wave Interference Effects at Ancient Chavín de Huántar, Perú”, presented Thursday afternoon, October 30, 2014, 168th ASA Meeting, Indianapolis. See also: Archaeoacoustics: Re-Sounding Material Culture
Excavated from Pre-Inca archaeological sites high in the Peruvian Andes, giant conch shell horns known as “pututus” have been discovered far from the tropical sea floor these marine snails once inhabited.
Fig. 1a: Excavation of a Chavín pututu at Chavín de Huántar, 2001. Photo by John Rick.
Fig. 1 B-C: Chavín pututus: decorated 3,000-year-old conch shell horns from the Andes, on display at the Peruvian National Museum in Chavín de Huántar. Photos by José Luis Cruzado.
At the 3,000-year-old ceremonial center Chavín de Huántar, carvings on massive stone blocks depict humanoid figures holding and perhaps blowing into the weighty shells. A fragmented ceramic orb depicts groups of conches or pututus separated from spiny oysters by rectilinear divisions on its relief-modeled surface. Fossil sea snail shells are paved into the floor of the site’s Circular Plaza.
Fig. 2: Depictions of pututus players on facing stones in the Circular Plaza at Chavín. Photo by José Luis Cruzado & Miriam Kolar.
Pututus are the only known musical or sound-producing instruments from Chavín, whose monumental stone architecture was constructed and used over several centuries during the first millennium B.C.E.
Fig. 3 (VIDEO): Chavín’s monumental stone-and-earthen-mortar architecture towers above plazas and encloses kilometers of labyrinthine corridors, rooms, and canals. Video by José Luis Cruzado and Miriam Kolar, with soundtrack of a Chavín pututu performed by Tito La Rosa in the Museo Nacional Chavín.
How, by whom, and in what cultural contexts were these instruments played at ancient Chavín? What was their significance? How did they sound, and what sonic effects could have been produced between pututus and Chavín’s architecture or landform surroundings? Such questions haunt and intrigue archaeoacousticians, who apply the science of sound to material traces of the ancient past. Acoustic reconstructions of ancient buildings, instruments, and soundscapes can help us connect with our ancestors through experiential analogy. Computer music pioneer Dr. John Chowning and archaeologist Dr. John Rick founded the Chavín de Huántar Archaeological Acoustics Project (https://ccrma.stanford.edu/groups/chavin/) to discover more.
Material traces of past life––such as artifacts of ancient sound-producing instruments and architectural remains––provide data from which to reconstruct ancient sound. Nineteen use-worn Strombus galeatus pututus were unearthed at Chavín in 2001 by Stanford University’s Rick and teams. Following initial sonic evaluation by Rick and acoustician David Lubman (ASA 2002), a comprehensive assessment of their acoustics and playability was made in 2008 by Dr. Perry Cook and researchers based at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA).
Fig. 4: Dr. Perry Cook performs acoustic measurements of the Chavín pututus. Photo by José Luis Cruzado.
Transforming an empty room at the Peruvian National Museum at Chavín into a musical acoustics lab, we established a sounding-tone range for these specific instruments from about 272 Hz to 340 Hz (frequencies corresponding to a few notes ascending from around Middle C on the piano), and charted their harmonic structure.
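As a quick cross-check of that description, the sketch below maps the measured 272-340 Hz range to the nearest equal-tempered note names, assuming the modern A4 = 440 Hz reference (which ancient Chavín, of course, did not use).

```python
# Quick check (standard 12-tone equal temperament, A4 = 440 Hz assumed):
# where the measured 272-340 Hz pututu sounding range falls relative to
# Middle C (C4, about 261.6 Hz).
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz: float) -> str:
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return f"{NAMES[midi % 12]}{midi // 12 - 1}"

for f in (272.0, 340.0):
    print(f, "Hz ->", nearest_note(f))
# 272 Hz is closest to C#4 and 340 Hz to F4: a few notes above Middle C.
```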
Fig. 5 (VIDEO): Dr. Perry Cook conducting pututu measurements with Stanford CCRMA team. Video by José Luis Cruzado.
Back at CCRMA, Dr. Jonathan Abel led audio digital signal processing to map their strong directionality, and to track the progression of sound waves through their exponentially spiraling interiors. This data constitutes a digital archive of the shell instrument sonics, and drives computational acoustic models of these so-called Chavín pututus (ASA 2010; Flower World 2012; ICTM 2013).
Where does data meet practice? How could living musicians further inform our study? Cook’s expertise as a winds and shells player allowed him to evaluate the Chavín pututus’ playability with respect to a variety of other instruments and to produce a range of articulations. Alongside the acoustic measurement sessions, Peruvian master musician Tito La Rosa offered a performative journey, a meditative ritual beginning and ending with the sound of human breath, the source of pututu sounding. This reverent approach took us away from our laboratory perspectives for a moment and pushed us to consider not only the performative dynamics of voicing the pututus, but also their potential for nuanced sonic expression.
Fig. 6 (VIDEO): Tito La Rosa performs one of the Chavín pututus in the Museo Nacional Chavín. Video by Cobi van Tonder.
When Cook and La Rosa played pututus together, we noted the strong acoustic “beats” that result when shell horns’ similar frequencies constructively and destructively interfere, producing an amplitude variation at a much lower frequency. Some nearby listeners described this as a “warbling” or “throbbing” of the tone, and said they thought that the performers were creating this effect through a performance technique (not so; it’s a well-known acoustic wave-interference phenomenon; see Hartmann 1998: 393-396).
Fig. 7 (VIDEO): José Cruzado and Swiss trombonist Michael Flury demonstrate amplitude “beats” between replica pututus in Chavín’s Doble Ménsula Galley. Video by Miriam Kolar.
If present-day listeners are unaware of an acoustics explanation for a sound effect, how might ancient listeners have understood and attributed such a sound? A pututu player would know that s/he was not articulating this warble, yet would be subject to its strong sensations. How would this visceral experience be interpreted? Might it be experienced as a phantom force?
The observed acoustic beating effect between pututus was so impressive that we sought to reproduce it during our on-site tests of architectural acoustics using replica shell horns. CCRMA Director Dr. Chris Chafe joined us, and he and Rick moved through Chavín’s labyrinthine corridors, blasting and droning pututus in different articulations to identify and excite acoustic resonances in the confined interior “galleries” of the site.
Fig. 8: CCRMA Director Chris Chafe and archaeologist John Rick play replica pututus to test the acoustics of Chavín’s interior galleries. Photos by José Luis Cruzado.
The short reverberation times of Chavín’s interior architecture allow the pututus to be performed as percussive instruments in the galleries (ASA 2008). However, the strong modal resonances of the narrow corridors, alcoves, and rooms also support sustained tonal production, in an acoustically fascinating way. Present-day pututu players have reported the experience of their instruments’ tones being “pulled into tune” with these architectural resonances. This eerie effect is both sonic and sensed, an acoustic experience that is not only heard, but felt through the body, an external force that seemingly influences the way the instrument is played.
Fig. 9 (AUDIO MISSING): Resonant compliance: Discussion of phantom tuning effect as Kolar and Cruzado perform synchronizing replica pututus in the Laberintos Gallery at Chavín. Audio by Miriam Kolar.
From an acoustical science perspective, what could be happening? As is well known from musical acoustics research (e.g., Fletcher and Rossing 1998), shell horns are blown-open lip-reed or lip-valve instruments, terminology that refers to the physical dynamics of their sounding. Mechanically speaking, the instrument player’s lips vibrate (or “buzz”) in collaborative resonance with the oscillations produced within the air column of the pututu’s interior, known in instrument lingo as its “bore”. Novice players may have great difficulty producing sound, or immediately generate a strong tone; there’s not one typical tendency, though producing higher, lower, or sustained tones requires greater control.
Experienced pututu players such as Cook and La Rosa can change their lip vibrations to increase the frequency––and therefore raise the perceived pitch––that the shell horn produces. To drop the pitch below the instrument’s natural sounding tone (the fundamental resonant frequency of its bore), the player can insert a hand in the lip opening, or “bell”, of the shell horn. Instrument players also modify intonation by altering the shape of their vocal tracts. This vocal tract modification is produced intuitively, by “feel”, and may involve several different parts of that complex sound-producing system.
A strong architectural acoustic resonance can “couple”, or join, with the air column inside the instrument, which is in turn coupled to that of the player’s vocal tract (with the player’s lips opening and closing in between). When the oscillatory frequencies of the player’s lips, of the air column in his or her vocal tract, of the pututu bore resonance and of the corridor resonance are synchronized, the effect can produce a strong sensation of immersion in the acoustic environment for the performer. The pututu is “tuned” to the architecture: both performer and shell horn are acoustically compliant with the architectural resonance.
When a second pututu player joins the first in the resonant architectural location, both players may share the experience of having their instrument tones guided into tune with the space, yet at the same time, sense the synchrony between their instruments. The closer together the shell openings, the more readily their frequencies will synchronize with each other. As Cook has observed, “if players are really close together, the wavefronts can actually get into the shells, and the lips of the players can phase lock.” (Interview between Kolar & Cook 2011: https://ccrma.stanford.edu/groups/chavin/interview_prc.html).
Fig. 10 (VIDEO): Kolar and Cruzado performing resonance-synchronizing replica pututus in the Laberintos Gallery at Chavín. Video by Miriam Kolar.
From the human interpretive perspective, what might pututu players in ancient Chavín have thought about these seemingly phantom instrument guides? A solo pututu performer who sensed the architectural and instrumental acoustic coupling might understand this effect to be externally driven, but how would s/he attribute the phenomenon? Would it be thought of as embodied by the instrument being played, or as an intervention of an otherworldly power, or an effect of some other aspect of the ceremonial context? Pairs or multiple performers experiencing the resonant pull might attribute the effect to the skill of a powerful lead player, with or without command of supernatural forces. Such interpretations are motivated by archaeological interpretations of Chavín as a cult center or religious site where social hierarchy was developing (Rick 2006).
However these eerie sonics might have been understood by people in ancient Chavín, from an acoustics perspective we can theorize and demonstrate complex yet elegant physical dynamics that are reported to produce strong experiential effects. Chavín’s phantom forces––however their causality might be interpreted––guide the sound of its instruments into resonant synchrony with each other and its architecture.
(ASA 2002): Rick, John W., and David Lubman. “Characteristics and Speculations on the Uses of Strombus Trumpets found at the Ancient Peruvian Center Chavín de Huántar”. (Abstract). In Journal of the Acoustical Society of America 112/5, 2366, 2002.
(ASA 2010): Cook, Perry R., Abel, Jonathan S., Kolar, Miriam A., Huang, Patty, Huopaniemi, Jyri, Rick, John W., Chafe, Chris, and Chowning, John M. “Acoustic Analysis of the Chavín Pututus (Strombus galeatus Marine Shell Trumpets)”. (Abstract). Journal of the Acoustical Society of America, Vol. 128, No. 2, 359, 2010.
(Flower World 2012): Kolar, Miriam A., with Rick, John W., Cook, Perry R., and Abel, Jonathan S. “Ancient Pututus Contextualized: Integrative Archaeoacoustics at Chavín de Huántar, Perú”. In Flower World – Music Archaeology of the Americas, Vol. 1. Eds. M. Stöckli and A. Both. Ekho VERLAG, Berlin, 2012.
(ICTM 2013): Kolar, Miriam A. “Acoustics, Architecture, and Instruments in Ancient Chavín de Huántar, Perú: An Integrative, Anthropological Approach to Archaeoacoustics and Music Archaeology”. In Music & Ritual: Bridging Material & Living Cultures. Ed. R. Jiménez Pasalodos. Publications of the ICTM Study Group on Music Archaeology, Vol. 1. Ekho VERLAG, Berlin, 2013.
(Hartmann 1998): Hartmann, William M. Signals, Sound, and Sensation. Springer-Verlag, New York, 1998.
(ASA 2008): Abel, Jonathan S., Rick, John W., Huang, Patty P., Kolar, Miriam A., Smith, Julius O., and Chowning, John. “On the Acoustics of the Underground Galleries of Ancient Chavín de Huántar, Peru”. (Abstract). Journal of the Acoustical Society of America, Vol. 123, No. 3, 605, 2008.
(Fletcher and Rossing 1998): Fletcher, Neville H., and Thomas D. Rossing. The Physics of Musical Instruments. Springer-Verlag, New York, 1998.
(Rick 2006): Rick, John W. “Chavín de Huántar: Evidence for an Evolved Shamanism”. In Mesas and Cosmologies in the Central Andes (Douglas Sharon, ed.), 101-112. San Diego Museum Papers 44, San Diego, 2006.