1 ADA Acoustics & Media Consultants GmbH, Arkonastr. 45-49, D-13189 Berlin / Germany
2 University of Architecture and Urbanism “Ion Mincu”, Str. Academiei 18-20, RO-010014 Bucuresti / Romania
Popular version of paper 4aAA5, “The National Opera in Bucharest – Update of the room-acoustical properties” Presented Thursday morning, November 5, 2015, 10:35 AM, Grand Ballroom 3
170th ASA Meeting, Jacksonville
The acoustics of opera halls have changed dramatically over the last 100 years. Until the end of the 19th century, mostly horseshoe-shaped halls were built, with highly sound-absorbing wall and even floor surfaces. Likewise, the frequently used boxes had fully absorbing claddings. As a result, the reverberation in these venues was low and the halls were perceived as acoustically dry, e.g. the opera hall in Milan. 100 years later, the trend is toward livelier opera halls with higher reverberation, now preferred for music reproduction, e.g. the Semper Opera in Dresden.
This desire to enhance the acoustic liveliness of the Opera House in Bucharest led to renovation work in 2013-2014. The Opera House was built in 1952-1953 for around 2200 spectators and followed the so-called style of “socialist realism”. This type of architecture was popular at the time, when communism was new to Romania, and the building therefore has a neoclassical design. Inside, the hall looked like a theatre of the late 19th century. The conditions in the orchestra pit were poor as well, as far as the musicians’ mutual hearing is concerned. Construction work therefore took place in order to improve the room-acoustical properties for musicians and audience.
Fig. 1: Opera hall after reconstruction
The acoustic task was to enhance the room-acoustical properties significantly by substituting reflective materials for absorptive surfaces (carpet, fabric wall linings, etc.):
Carpet on all floor areas, upholstered back- and undersides of chairs
Textile wall linings at walls/ceilings in boxes, upholstered hand rails
Textile wall linings at balustrades, upholstered hand rails in the galleries
All the absorbing wall and ceiling parts were replaced with reflective wood panels, the carpet was removed, and a parquet floor was installed. As a result, the sound no longer fades away as in an open-air theatre; instead, spaciousness can now be perceived.
The primary and secondary structures of the orchestra pit were changed as well in order to improve mutual hearing in the pit and between stage and pit. The orchestra pit had the following acoustically disadvantageous properties:
Insufficient ratio between open and covered area (depth of opening 3.5 m, depth of cover 4.7 m)
The height within the pit in the covered area was very small.
The space in the covered area of the pit was heavily overdamped by an excess of absorptive material.
Fig. 2: new orchestra pit, section
The following changes have been applied:
The ratio between open and covered area has been improved by shifting the front edge of the stage floor backwards: the depth of the opening is now 5.1 m, the depth of the cover only 3.1 m.
The height within the pit in the covered area is increased by lowering the new movable podium.
The walls and soffit in the pit are now generally reflective; broadband absorbers can be placed variably at the back wall of the pit.
After an elaborate investigation by on-site measurements and simulation, a prolongation of the reverberation time by 0.2-0.3 s was achieved, to current values of about 1.3 to 1.4 s.
Together with the alterations of the pit geometry, the acoustic properties of the hall are now very satisfactory for musicians, singers and the audience.
Besides the reverberation time, other room-acoustical measures such as C80, Support and Strength have also been improved significantly.
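The direction of this change can be roughed out with Sabine’s reverberation formula, RT60 = 0.161·V/A. The hall volume and surface data below are illustrative assumptions for a hall of roughly this size, not the measured values from the renovation:

```python
# Sabine's reverberation formula: RT60 = 0.161 * V / A, where V is the room
# volume (m^3) and A the total absorption area (m^2 of "open window").
def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Illustrative (assumed) data, not taken from the paper
volume = 10_000.0  # m^3
before = [(900.0, 0.60),   # carpeted floor
          (1500.0, 0.30),  # fabric wall linings
          (2000.0, 0.50)]  # audience and upholstered seats
after = [(900.0, 0.07),    # parquet floor
         (1500.0, 0.10),   # reflective wood panels
         (2000.0, 0.50)]   # audience (unchanged)

print(f"before renovation: {rt60_sabine(volume, before):.2f} s")
print(f"after renovation:  {rt60_sabine(volume, after):.2f} s")
```

With these assumed numbers the reverberation time lengthens from roughly 0.8 s to roughly 1.3 s, i.e., the same direction of change the renovation achieved; the real hall’s figures come from measurement, not from this sketch.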
National Center for Physical Acoustics, The University of Mississippi,
1 Chucky Mullins,
University, MS, 38677
Lay language paper 4pEA4
Presented Thursday afternoon, November 5, 2015
170th ASA Meeting, Jacksonville
Within a few meters beneath the earth’s surface, three distinct soil layers typically form: a top layer that is dry and hard, a middle region that is moist and soft, and a deeper zone where the mechanical strength of the soil increases with depth. Information about this subsurface soil is required for agricultural, environmental, civil engineering, and military applications. A seismic surface-wave method has recently been developed to obtain such information non-invasively (Lu, 2014; Lu, 2015). The method, known as multichannel analysis of surface waves (MASW) (Park, et al., 1999; Xia, et al., 1999), consists of three essential parts: surface-wave generation and collection (Figure 1), spectrum analysis, and an inversion process. The implementation of the technique employs sophisticated sensor technology, wave-propagation modeling, and inversion algorithms.
Figure 1. The experimental setup for the MASW method
The technique makes use of a characteristic of one type of surface wave, the so-called Rayleigh wave, which travels along the earth’s surface with motion confined to a depth of about one and a half wavelengths. Therefore, the short-wavelength components of the surface waves carry information about shallow soil, whereas the longer-wavelength components provide the properties of deeper soil (Figure 2).
Figure 2. Rayleigh wave propagation
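The wavelength-to-depth relationship described above can be expressed in a few lines. A sketch, assuming a Rayleigh-wave phase velocity of 150 m/s, a plausible value for soft near-surface soil that is not taken from the paper:

```python
def sensing_depth_m(phase_velocity_mps, frequency_hz, depth_factor=1.5):
    """Approximate depth sampled by one Rayleigh-wave component: the motion
    is confined to about one and a half wavelengths below the surface."""
    wavelength = phase_velocity_mps / frequency_hz
    return depth_factor * wavelength

# Lower frequencies (longer wavelengths) probe deeper soil
for f_hz in (10.0, 30.0, 100.0):
    print(f"{f_hz:5.0f} Hz -> {sensing_depth_m(150.0, f_hz):5.2f} m")
```

This is why, in the inversion step of MASW, the measured variation of phase velocity with frequency can be converted into a shear-wave velocity profile with depth.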
The outcome of the MASW method is a vertical soil profile, i.e., the acoustic shear (S) wave velocity as a function of depth (Figure 3).
Figure 3. A typical soil profile
By repeating the MASW measurements either spatially or temporally, one can measure and “see” the spatial and temporal variations of the subsurface soil. Figure 4 shows a typical vertical cross-section image in which the intensity of the image represents the value of the shear-wave velocity. From this image, the three layers mentioned above can be identified.
Figure 4. A typical example of a soil vertical cross-section image
Figure 5 displays another two-dimensional image in which a middle high-velocity zone (red area) appears. This high-velocity zone represents a geological anomaly known as a fragipan, a naturally occurring dense and hard soil layer (Lu, et al., 2014). The detection of fragipans is important in agricultural land management.
Figure 5. A vertical cross-section image showing the presence of a fragipan layer
The MASW method can also be applied to monitor weather influences on soil properties (Lu, 2014). Figure 6 shows the temporal variations of the underground soil, the result of a long-term survey conducted in 2012. By drawing a vertical line and moving it from left to right, i.e., along the time-index axis, the evolution of the soil profile due to weather effects can be followed. In particular, high-velocity zones occurred in the summer of 2012, reflecting very dry soil conditions.
Figure 6. The temporal variations of the soil profile due to weather effects
Lu, Z., 2014. Feasibility of using a seismic surface wave method to study seasonal and weather effects on shallow surface soils. Journal of Environmental & Engineering Geophysics, DOI: 10.2113/JEEG19.2.71, Vol.19, 71–85.
Lu, Z., Wilson, G.V., Hickey, C.J., 2014. Imaging a soil fragipan using a high-frequency MASW method. In Proceedings of the Symposium on the Application of Geophysics to Engineering and Environmental Problems (SAGEEP 2014), Boston, MA., Mar. 16-20.
Park, C.B., Miller, R.D., Xia, J., 1999. Multichannel analysis of surface waves. Geophysics, Vol. 64, 800-808.
Xia, J., Miller, R.D., Park, C.B., 1999. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves. Geophysics, Vol. 64, 691-700.
Popular version of paper 4aEA10, “Preliminary evaluation of the sound absorption coefficient of a thin coconut fiber panel for automotive applications”
Presented Thursday morning, November 5, 2015, 11:15 AM, Orlando Room
170th ASA Meeting, Jacksonville, FL
Absorbent materials are fibrous or porous and must be good acoustic dissipaters. Sound propagating in an absorbent medium undergoes multiple reflections, and friction with the air inside the material converts sound energy into thermal energy. Acoustic surface treatments with absorbent materials are widely used to reduce reverberation in enclosed spaces or to increase the sound transmission loss of acoustic panels. In addition, these materials can be applied in acoustic filters to increase their efficiency. Sound absorption depends on the excitation frequency and is more effective at high frequencies. Natural fibers such as coconut coir have great potential for use as sound-absorbing material. Since natural fibers are agricultural waste, a panel manufactured from them is a natural product and therefore an economical and interesting option. This work compares the sound absorption coefficient of a thin coconut fiber panel with that of a composite panel made of fiberglass, expanded polyurethane foam, non-woven fabric, and polyester fabric, which is used in the automotive industry as roof trim. The sound absorption coefficient was evaluated with the impedance tube technique.
In 1980, Chung and Blaser evaluated the normal-incidence sound absorption coefficient using the transfer-function method. The standards ASTM E1050 and ISO 10534-2 are based on Chung and Blaser’s method (Figure 1). In summary, the method uses an impedance tube with a sound source at one end and, at the other end, the absorbent material backed by a rigid wall. The stationary sound-wave pattern is decomposed into forward- and backward-traveling components by measuring the sound pressure simultaneously at two spaced locations in the tube’s sidewall, where two microphones are mounted (Figure 1).
Figure 1. Impedance Tube.
The wave decomposition allows the determination of the complex reflection coefficient R(f), from which the complex acoustic impedance and the normal-incidence sound absorption coefficient (α) of an absorbent material can be obtained. Both R(f) and α follow from the transfer function H12 between the two microphones:
R(f) = [(H12 − e^(−i·k0·s)) / (e^(i·k0·s) − H12)] · e^(2·i·k0·x1)
where s is the distance between the microphones, x1 is the distance between the farthest microphone and the sample, i is the imaginary unit and k0 is the wave number of the air. Once R(f) is known, the coefficient α follows from:
α = 1 − |R(f)|²
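The two-microphone relations of the transfer-function method (ASTM E1050 / ISO 10534-2) can be sketched as follows; the geometry values used in the sanity check are arbitrary examples, not the ones from this experiment:

```python
import numpy as np

def absorption_coefficient(H12, f_hz, s, x1, c=343.0):
    """Normal-incidence absorption via the two-microphone transfer-function
    method (ASTM E1050 / ISO 10534-2).
    H12 : complex transfer function between the two microphones
    f_hz: frequency in Hz
    s   : microphone spacing in m
    x1  : distance from the farther microphone to the sample in m
    c   : speed of sound in air, m/s"""
    k0 = 2.0 * np.pi * f_hz / c        # wave number of the air
    H_i = np.exp(-1j * k0 * s)         # transfer function of the incident wave
    H_r = np.exp(1j * k0 * s)          # transfer function of the reflected wave
    R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k0 * x1)  # reflection coefficient
    return 1.0 - np.abs(R) ** 2        # alpha = 1 - |R|^2

# Sanity check with a perfectly rigid termination (R = 1, so alpha = 0):
f_hz, s, x1 = 1000.0, 0.05, 0.12
k0 = 2.0 * np.pi * f_hz / 343.0
H12_rigid = np.cos(k0 * (x1 - s)) / np.cos(k0 * x1)  # standing-wave pressure ratio
print(absorption_coefficient(H12_rigid, f_hz, s, x1))  # ~0
```

A fully absorbing termination (R = 0) gives H12 = e^(−i·k0·s) and therefore α = 1, the other limiting case of the formula.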
In this work, eight samples of the coconut fiber panel and eight samples of the composite panel of fiberglass, expanded polyurethane foam, non-woven fabric, and polyester fabric used in the automotive industry were tested (Figures 2 and 3). The material properties are shown in Table 1.
Figure 2. Samples.
Figure 3. Composite panel structure.
Table 1. Material Properties.
A random noise signal with a frequency band between 200 Hz and 5000 Hz was used to evaluate α. Figure 4 shows the mean normal-incidence absorption coefficient obtained from the measurements.
Figure 4. Comparison of the normal-incidence absorption coefficient (α)
The results show that the composite panel has a better sound absorption coefficient than the coconut fiber panel. To improve the acoustical efficiency of the coconut fiber panel, a filling material with the same effect as the polyurethane foam in the composite panel needs to be added.
Chung, J. Y. and Blaser, D. A. (1980). “Transfer function method of measuring in-duct acoustic properties – I: Theory,” J. Acoust. Soc. Am. 68, 907-913.
Chung, J. Y. and Blaser, D. A. (1980). “Transfer function method of measuring in-duct acoustic properties – II: Experiment,” J. Acoust. Soc. Am. 68, 913-921.
ASTM E1050:2012. “Standard test method for impedance and absorption of acoustical materials using a tube, two microphones and a digital frequency analysis system,” American Society for Testing and Materials, Philadelphia, PA, 2012.
ISO 10534-2:1998. “Determination of sound absorption coefficient and impedance in impedance tubes – Part 2: Transfer-function method”, International Organization for Standardization, Geneva, 1998.
Pierre-Yves Le Bas, email@example.com, Brian E. Anderson1,2, Marcel Remillieux1, Lukasz Pieczonka3, TJ Ulrich1
1Geophysics group EES-17, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
2Department of Physics and Astronomy, Brigham Young University, N377 Eyring Science Center, Provo, UT 84601, USA
3AGH University of Science and Technology, Krakow, Poland
Popular version of paper 3aSA7, “Elasticity Nonlinear Diagnostic method for crack detection and depth estimation”
Presented Wednesday morning, November 4, 2015, 10:20 AM, Daytona room
170th ASA Meeting, Jacksonville
One common problem in industry is detecting and characterizing defects, especially at an early stage. Small cracks are difficult to detect with current techniques and, as a result, it is customary to replace parts after an estimated lifetime instead of keeping them in service until they are actually approaching failure. Detecting early-stage damage before it becomes structurally dangerous is a challenging problem of great economic importance. This is where nonlinear acoustics can help, since it is extremely sensitive to tiny cracks and thus to early damage. The principle of nonlinear acoustics is easily understood if you consider a bell. If the bell is intact, it will ring with an agreeable tone determined by its geometry. If the bell is cracked, one will hear a dissonant sound, which is due to nonlinear phenomena. Thus, if an object is struck, it is possible to determine, by listening to the tone(s) produced, whether or not it is damaged. Here the same principle is used, but in a more quantitative way and usually at ultrasonic frequencies. Ideally, one would also like to know where the damage is and how it is oriented. A crack growing through an object could be more important to detect, as it could lead to the object splitting in half, but in other circumstances chipping might matter more, so knowing the orientation of a crack is critical in the health assessment of a part.
Time reversal is a useful technique for localizing and characterizing a defect. It can be used to focus vibration with a chosen direction of motion, i.e., a sample can be made to vibrate perpendicular to the surface of the object or parallel to it, which are referred to as out-of-plane and in-plane motion, respectively. The movie below shows how time reversal is used to focus energy: a source broadcasts a wave from the back of a plate, and signals are recorded at the edges by other transducers. The signals from this initial phase are then flipped in time and broadcast from all the edge receivers. Time reversal then dictates that these waves focus at the initial source location.
Time reversal can also do more than the simple example in the video. Making use of the reciprocity principle, i.e., that a signal traveling from A to B is identical to the same signal traveling from B to A, the source at the back of the plate can be replaced by a receiver and the initial broadcast can be done from the side. This means TR can focus energy anywhere a signal can be recorded, and with a laser as the receiver, that means anywhere on the surface of an object.
In addition, the dominant vibration direction of the focus, e.g., in-plane or out-of-plane, can be selected by recording specific directions of motion of the initial signals. If, during the first step of the time reversal process, the receiver is set to record in-plane vibration, the focus will be primarily in the in-plane direction; similarly, if the receiver records out-of-plane vibration in the first step, the focus will be essentially in the out-of-plane direction. This is important because the nonlinear response of a crack depends on the orientation of the vibration that excites it. To fully characterize a sample in terms of crack presence and orientation, TR is used to focus energy at defined locations, and at each point the nonlinear response is quantified. This can be done for any orientation of the focused wave; to cover all possibilities, three scans are usually done in three orthogonal directions.
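The record-flip-rebroadcast procedure described above can be sketched in one dimension. The wave speed, transducer positions and pulse shape below are illustrative assumptions, and the medium is idealized as lossless with propagation reduced to pure travel-time delays, not a full wave simulation:

```python
import numpy as np

# Minimal 1-D sketch of time-reversal focusing (illustrative values only)
c, fs, n = 1000.0, 100_000.0, 2048     # wave speed (m/s), sample rate (Hz), samples
t = np.arange(n) / fs
pulse = np.exp(-((t - 0.002) / 1e-4) ** 2)   # short pulse emitted by the source

source_pos = 0.0                        # where the original broadcast happens (m)
receiver_pos = [-0.40, 0.30, 0.60]      # transducers on both sides of the source (m)

def delayed(sig, seconds):
    """Shift a signal later in time by a whole number of samples."""
    k = int(round(seconds * fs))
    return np.concatenate([np.zeros(k), sig])[: len(sig)]

# Step 1: each transducer records the pulse after its travel time
records = [delayed(pulse, abs(p - source_pos) / c) for p in receiver_pos]

# Step 2: flip each record in time and rebroadcast; at an observation point x
# the contributions line up, and add coherently, only at the source position
def refocused_peak(x):
    total = np.zeros(n)
    for p, rec in zip(receiver_pos, records):
        total += delayed(rec[::-1], abs(p - x) / c)
    return total.max()

print(refocused_peak(0.0))    # all three arrivals in phase: strong focus
print(refocused_peak(0.15))   # away from the source the arrivals misalign
```

In a real plate the waves spread in two or three dimensions and scatter from the boundaries, which makes the contrast between the focus and the surrounding field far stronger than in this toy delay model.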
Figure 2 shows three scans, with the focused wave polarized in the x, y and z directions, of the same sample: a glass plate glued onto an aluminum plate. The sample has two defects: a delamination due to a lack of glue between the two plates (in the (x,y) plane) at the top of the scan area, and a crack perpendicular to the surface in the glass plate (in the (x,z) plane) in the middle of the scan area.
Figure 2. Nonlinear component of the time-reversal focus at each point of a scan grid, with the wave focused in the x, y and z directions (from left to right)
As can be seen in those scans, the delamination in the (x,y) plane is visible only when the wave is focused in the z direction, while the crack in the (x,z) plane is visible only in the y scan. This means that cracks have a strong nonlinear behavior when excited in a direction perpendicular to their main orientation. So, by scanning with three different orientations of the focused vibration, one should be able to reconstruct the orientation of a crack.
Another feature of the time-reversal focus is that its spatial extent is about one wavelength of the focused wave, which means the higher the frequency, the smaller the spot size, i.e., the area of focused energy. One might then think that the higher the frequency, the better the resolution, and thus that higher frequency is always best. However, the extent of the focus also sets the depth this technique can probe, so a lower frequency means a deeper investigation and thus a more complete characterization of the sample. There is therefore a tradeoff between depth of investigation and resolution. By doing several scans at different frequencies, however, one can extract additional information about a crack. For example, Figure 3 shows two scans of a metallic sample that differ only in the frequency of the focused wave.
Figure 3. From left to right: nonlinear component of the time-reversal focus at each point of a scan grid at 200 kHz and 100 kHz, and a photograph of the sample from the side.
At 200 kHz it looks like there is only a thin crack, while at 100 kHz the extent of the crack grows toward the bottom of the scan and more than doubles, so there is more at play than a resolution issue. At 200 kHz the depth of investigation is about 5 mm; at 100 kHz it is about 10 mm. Looking at the side of the sample in the right panel of Figure 3, the crack is seen to run perpendicular to the surface for about 6 mm and then dip sharply. At 200 kHz the scan is only sensitive to the part perpendicular to the surface, while at 100 kHz the scan also shows the dipping part. Doing several scans at different frequencies can thus give information on the depth profile of the crack.
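The quoted depths are consistent with the focal extent being about one wavelength of the focused wave. A minimal sketch, assuming a wave speed of about 1000 m/s, a value implied by the 5 mm and 10 mm figures above rather than stated in the paper:

```python
def focal_extent_m(frequency_hz, wave_speed_mps=1000.0):
    """Focal spot size and probing depth both scale with the wavelength of
    the focused wave; the wave speed here is an assumed value."""
    return wave_speed_mps / frequency_hz  # one wavelength, in meters

# Higher frequency: finer resolution but shallower investigation
for f_hz in (200_000.0, 100_000.0):
    print(f"{f_hz / 1000:.0f} kHz -> {focal_extent_m(f_hz) * 1000.0:.1f} mm")
```

With these assumptions the extent comes out to 5 mm at 200 kHz and 10 mm at 100 kHz, matching the depths of investigation quoted above.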
In conclusion, using time reversal to focus energy in several directions and at different frequencies, and studying the nonlinear component of this focus, can lead to a characterization of a crack, including its orientation and depth profile, something that is currently only available using techniques such as X-ray CT that are not as easily deployable as ultrasonic ones.
Brian Connolly – firstname.lastname@example.org Music Department
Popular version of paper 5aMU1, “The inner ear as a musical instrument”
Presented Friday morning, November 6, 2015, 8:30 AM, Grand Ballroom 2
170th ASA meeting Jacksonville
(please use headphones for listening to all audio samples)
Did you know that your ears can sing? You may be surprised to hear that they in fact have the capacity to make particularly good performers, and recent psychoacoustic research has revealed the true potential of the ears in musical creativity. ‘Psychoacoustics’ is loosely defined as the study of the perception of sound.
Figure 1: The Ear
A good performer can carry out required tasks reliably and without errors. In many respects, the very straightforward nature of the ear’s responses to certain sounds makes it a very reliable performer: its behaviour can be predicted, and so it is easily controlled. Within the listening system, the inner ear can behave as a highly effective instrument that creates its own sounds, and many experimental musicians have been using it to turn listeners’ ears into participating performers in the realization of their music.
One of the most exciting avenues of musical creativity is the psychoacoustic phenomenon known as otoacoustic emissions. These are tones created within the inner ear when it is exposed to certain sounds. One example of these emissions is ‘difference tones.’ When two clear tones enter the ear at, say, 1,000 Hz and 1,200 Hz, the listener will hear these two tones, as expected, but the inner ear will also create its own third frequency at 200 Hz, because this is the mathematical difference between the two original tones. The ear literally sends a 200 Hz tone back out through the ear canal, and this sound can be detected by an in-ear microphone, a process that doctors use as an integral part of hearing tests for babies. This means that composers can place certain tones in their work and predict that the listeners’ ears will add this extra dimension to the music upon hearing it. Within certain loudness and frequency ranges, listeners will even be able to feel their ears buzzing in response to the stimulus tones! This makes for a very exciting new layer of contemporary music making and listening.
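A two-tone stimulus like the one described above can be synthesized in a few lines. This is a sketch for illustration; the file name and amplitudes are arbitrary choices, and the 200 Hz difference tone is deliberately absent from the file, since it is generated inside the listener’s ear:

```python
import numpy as np
import wave

# Two-tone stimulus: 1,000 Hz plus 1,200 Hz. The 200 Hz difference tone is
# NOT in this file; it is created inside the listener's inner ear.
fs, dur = 44_100, 3.0
t = np.arange(int(fs * dur)) / fs
stimulus = (0.5 * np.sin(2 * np.pi * 1000.0 * t)
            + 0.5 * np.sin(2 * np.pi * 1200.0 * t))

samples = np.int16(np.clip(stimulus, -1.0, 1.0) * 32767)
with wave.open("difference_tone_stimulus.wav", "wb") as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(fs)
    wav.writeframes(samples.tobytes())
```

A spectrum of the file shows energy only at 1,000 Hz and 1,200 Hz; any 200 Hz component you hear is your ear’s own contribution.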
First listen to this tone. This is very close to the sound your ear will sing back during the second example.
Insert – 200.mp3
Here is the second sample containing just two tones at 1,000Hz and 1,200Hz. See if you can also hear the very low and buzzing difference tone which is not being sent into your ear, it is being created in your ear and sent back out towards your headphones!
Insert – 1000and1200.mp3
If you could hear the 200 Hz difference tone in the previous example, have a listen to this much more complex demonstration, which will make your ears sing a well-known melody. Try not to listen to the louder impulsive sounds, and see if you can hear your ears humming along to the tune of Twinkle, Twinkle, Little Star at a much lower volume!
(NB: The difference tones will start after about 4 seconds of impulses)
Insert – Twinkle.mp3
Auditory beating is another phenomenon that has caught the interest of many contemporary composers. In the example below you will hear 400 Hz in your left ear and 405 Hz in your right ear.
First, play the sample below with the headphones in just one ear at a time, not both together. You will hear two clear tones when you listen to them separately.
Insert – 400and405beating.mp3
Now see what happens when you place both headphones in your ears simultaneously. You will be unable to hear the two tones separately. Instead, you will hear a fused tone that beats five times per second. This is because each of your ears sends electrical signals to the brain telling it what frequency it is responding to, but these two frequencies are too close together, so a perceptual confusion occurs: a combined frequency is perceived, beating at a rate equal to the mathematical difference between the two tones.
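The dichotic stimulus above can be generated as a stereo file with each tone strictly confined to one channel. A minimal sketch; the file name, duration and amplitude are arbitrary choices:

```python
import numpy as np
import wave

# Stereo stimulus for the beating demo: 400 Hz in the left channel only and
# 405 Hz in the right channel only. The 5 Hz beat is in neither channel; it
# arises perceptually when the two ears are stimulated at once.
fs, dur = 44_100, 5.0
t = np.arange(int(fs * dur)) / fs
left = 0.5 * np.sin(2 * np.pi * 400.0 * t)
right = 0.5 * np.sin(2 * np.pi * 405.0 * t)

frames = np.empty(2 * len(t))
frames[0::2], frames[1::2] = left, right      # interleave L/R samples
samples = np.int16(frames * 32767)
with wave.open("binaural_beats.wav", "wb") as wav:
    wav.setnchannels(2)        # stereo
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(fs)
    wav.writeframes(samples.tobytes())
```

Mixing the two channels electrically, by contrast, produces a physical 5 Hz amplitude beat; the interest of the headphone version is precisely that the beating is constructed by the listener.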
Auditory beating becomes particularly interesting in pieces written for surround-sound environments, where the listener’s proximity to the various speakers plays a key role. Simply turning one’s head in these scenarios can often entirely change the colour of the sound, as different layers of beating alter its overall timbre.
So how can all of this be meaningful to composers and listeners alike? The examples shown here are intended to be basic, serving as proofs of concept more than anything else. In the much more complex world of music composition, the scope for employing such material is seemingly endless. Considering the ear as a musical instrument gives the listener the opportunity to engage with sound and music in a more intimate way than ever before.