ASA PRESSROOM

154th ASA Meeting, New Orleans, LA



Making Spatial Music Accessible: Investigating elementary spatial movements

Georgios Marentakis, Nils Peters, and Stephen McAdams
CIRMMT, Schulich School of Music, McGill University
555 Sherbrooke Street West, Montreal, QC, H3A 1E3, Canada

Popular version of paper 4aMU2
Presented on the morning of 30 November 2007
154th ASA Meeting, New Orleans, LA

In our research, we seek to understand how auditory virtual environments can be integrated into the music production process. Auditory virtual environments enhance sound reproduction by conveying the direction and distance of sounds and by placing them in an acoustic context, i.e., giving the impression of a room. We aim to define a sense of uniformity that will enable a consistent perception of spatial music for listeners in concert halls. Such uniformity is by no means a given. It is well known that most techniques for simulating space in audition are optimized for a listener at the centre of a circular or spherical loudspeaker array (the so-called sweet spot), with reference to a dry, reflection-free environment. In practice, however, there is substantial variation in concert hall acoustics and in the spatial distribution of the audience, as well as practical limitations on loudspeaker placement.

How can we then make sure that audience members throughout a concert hall will form a consistent impression of the spatial dimension of musical events? Furthermore, is it possible to speak of uniformity when a musical piece is performed in a variety of concert halls? Spatial events cannot be perceived consistently in an absolute manner, because listeners occupy different locations in the hall. Even with a perfect spatial audio system, an event that appears directly in front of a listener seated in the middle of the hall will be perceived as originating from the front-right direction by a listener at the left end of the hall. What could be perceived consistently, however, are discrete or continuous changes in the spatial dimensions and movements of the sounds, as well as the spatial interrelations among the musical elements of the spatial scene.
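As a simple illustration of why absolute directions cannot be consistent across seats, the following sketch (not part of the study; the seat coordinates and source position are invented for illustration) computes the azimuth at which the same virtual source is heard from two different seats, assuming both listeners face the front of the hall.

```python
import math

def azimuth_deg(listener_xy, source_xy):
    """Azimuth of a source relative to a listener facing the +y direction
    (toward the front of the hall); 0 deg = straight ahead, positive = right."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    return math.degrees(math.atan2(dx, dy))

# Hypothetical hall coordinates in metres: x = left-right, y = toward the stage.
source = (0.0, 10.0)          # virtual source straight ahead of the hall centre
centre_seat = (0.0, 0.0)      # listener in the middle of the hall
left_seat = (-8.0, 0.0)       # listener at the left end of the same row

print(azimuth_deg(centre_seat, source))  # 0.0   -> heard straight ahead
print(azimuth_deg(left_seat, source))    # ~38.7 -> heard from the front right
```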

We seek to provide insight into the aforementioned questions by way of an evaluation design inspired by psychoacoustic research. The evaluation of auditory virtual environments is a challenging task that has to take into account quantitative measures, such as the accuracy of sound localization and distance perception, but also qualitative measures, such as sound quality and the sense of envelopment of the listener. In addition, it has to consider context-specific details such as the physical environment for which the system is intended.

Measuring Minimum Audible Angles in a large space

In our initial study, we focus on the perception of sound displacement and movement and examine how it varies between measurements in a studio setup and in a large space similar to a concert hall. The experiments draw on psychoacoustic studies of the Minimum Audible Angle (MAA). The MAA is the minimum angular displacement that a human listener can perceive with a probability of 75%. It depends on the sound used (in particular its spectral content), the direction from which the sound is emitted, the plane in which the displacement takes place (horizontal, vertical or diagonal) and, to a smaller extent, the duration of the sound. Typical MAA values are about 1 degree for a sound directly in front of a person, 1.6 degrees for a sound at 60 degrees and 3 degrees for a sound at the side of the listener (estimates obtained with broadband noise stimuli and real sound sources, after Saberi et al., 1991). It should be noted that there is substantial variation in estimates of these quantities, depending on the experimental methodology, as well as substantial variation across people (especially for sounds at the side, where MAAs can be orders of magnitude higher), to the point that generalizations are not easy to make.
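To make the 75% definition concrete, here is a small sketch (not the procedure used in the study; the simulated listener and the tested displacements are invented for illustration) that estimates an MAA by finding the angular displacement at which a simulated two-alternative forced-choice task reaches 75% correct responses.

```python
import random

def simulated_trial(angle_deg, true_maa=2.0):
    """Hypothetical listener: the probability of a correct left/right judgement
    rises from chance (0.5) toward 1.0 and passes 0.75 exactly at the true MAA."""
    p_correct = 0.5 + 0.5 * (1.0 - 0.5 ** (angle_deg / true_maa))
    return random.random() < p_correct

def percent_correct(angle_deg, n_trials=500):
    return sum(simulated_trial(angle_deg) for _ in range(n_trials)) / n_trials

# Method of constant stimuli: test a fixed set of displacements and
# interpolate the angle at which performance crosses the 75% threshold.
angles = [0.5, 1.0, 2.0, 4.0, 8.0]
scores = [percent_correct(a) for a in angles]
for a_lo, a_hi, s_lo, s_hi in zip(angles, angles[1:], scores, scores[1:]):
    if s_lo < 0.75 <= s_hi:
        maa = a_lo + (0.75 - s_lo) / (s_hi - s_lo) * (a_hi - a_lo)
        print(f"estimated MAA ~ {maa:.2f} degrees")
        break
```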

Measuring MAAs for auditory virtual environments is important because it provides a method of evaluating their localization fidelity in an unbiased way. In contrast to studies where people are asked to report the absolute location of sounds, MAA estimation follows the two-alternative forced-choice method, which is known to exhibit little if any bias. Apart from providing a new way to evaluate the localization fidelity of spatial audio systems, our research can provide data that composers can use to make sure their spatial manipulations are perceivable. Such data could perhaps be integrated into compositional software to validate the perceptual feasibility of the composer's intentions. In this way, a scale of movements could be formed and used in a manner similar to the pitch scale of musical instruments. However, this cannot be achieved without examining how these quantities vary as a function of the listener's position in the concert hall and of the hall acoustics. Therefore, in our initial study we estimated MAAs for two setups: one in a studio with a single listener in the sweet spot, and one in a medium-sized concert space with nine people seated at selected positions in the hall. For the studio tests, thresholds were measured for three directions of incidence (0, 60 and 90 degrees), with sounds starting their displacement on a loudspeaker, midway between two loudspeakers, or one third of the way between two loudspeakers, for an auditory virtual environment using four or eight loudspeakers. For the concert hall, we estimated these quantities for sounds coming from 0, 45 and 90 degrees relative to a person sitting in the middle of the hall; in this case, sounds were emitted halfway between the loudspeakers, using the same system with 8 or 16 loudspeakers. Although a large number of algorithms exist for providing 3D audio impressions, in this first study we used a well-known algorithm that is derived from conventional stereo panning and has been extended by Ville Pulkki to loudspeaker arrays of arbitrary shape and number. In addition, all of our estimations were made for sounds in the horizontal plane defined by the ears and nose of a person, displaced along a circle of constant radius.
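The description of the panning algorithm matches Pulkki's Vector Base Amplitude Panning (VBAP). The sketch below shows our reading of its core gain computation for a pair of loudspeakers in the horizontal plane, not the exact implementation used in the experiments; the loudspeaker and source angles are chosen only as an example.

```python
import numpy as np

def vbap_pair_gains(source_az_deg, spk_az_deg):
    """Pairwise amplitude-panning gains (2-D VBAP) for a source direction
    lying between two loudspeakers, all angles in the horizontal plane."""
    def unit(az_deg):
        az = np.radians(az_deg)
        return np.array([np.cos(az), np.sin(az)])

    p = unit(source_az_deg)                              # source direction vector
    L = np.column_stack([unit(a) for a in spk_az_deg])   # loudspeaker base matrix
    g = np.linalg.solve(L, p)                            # solve L @ g = p for the gains
    return g / np.linalg.norm(g)                         # constant-power normalization

# Example: loudspeakers at 0 and 45 degrees, source panned to 15 degrees.
print(vbap_pair_gains(15.0, [0.0, 45.0]))
```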

Our results show that increasing the number of loudspeakers improved localization performance. With four loudspeakers, results were disappointing even in the studio setup. With eight loudspeakers, MAAs for sounds originating midway between loudspeakers were 2, 3.4 and 9.4 degrees for directions of incidence of 0, 60 and 90 degrees, respectively. Although these values are comparable to those reported above for real sources at frontal incidence, they deteriorate more than expected for sounds emitted from the sides of listeners. In addition, we found that these values deteriorate significantly for sounds originating from locations on or close to a loudspeaker. In the concert hall, for frontal incidence, the best performance was found for listeners aligned with the source; performance deteriorated as the angle between the listener's seat and the sound increased, as would be expected. Localization with the 8-loudspeaker system was in general less accurate than in the studio, implying that the room had a significant effect. Overall, performance improved when 16 loudspeakers were used and became comparable to the studio for frontal incidence. Performance for sounds at oblique incidence was, however, significantly degraded, and the effect of the room became more pronounced.

It therefore appears feasible to create a uniform experience of sound displacement both in the studio and in the concert hall; in the latter case, however, a specialized measurement procedure is necessary to compensate for the effect of the room and for the variability in listener positioning. Composers and practitioners should be particularly careful with sounds at the sides of listeners, where the expectations of algorithm designers are not met, especially under room conditions. We are currently extending our research to accommodate sound movement and a variety of spatialization algorithms.

