3aSA11 – Hollow vs. Foam-filled racket: Feel-good vibrations – Kritika Vayur, Dr. Daniel A. Russell

Hollow vs. Foam-filled racket: Feel-good vibrations

Kritika Vayur – kuv126@psu.edu
Dr. Daniel A. Russell – dar119@psu.edu

Pennsylvania State University
201 Applied Science Building
State College, PA, 16802

Popular version of paper 3aSA11, “Vibrational analysis of hollow and foam-filled graphite tennis rackets”
Presented Wednesday morning, May 20, 2015, 11:15 AM in room Kings 3
169th ASA Meeting, Pittsburgh

Tennis Rackets and Injuries
The typical modern tennis racket has a lightweight, hollow graphite frame with a large head. Though these rackets are easier to swing, there seems to be an increase in the number of players experiencing the injury commonly known as “tennis elbow”. Recently, even notable professional players such as Rafael Nadal, Victoria Azarenka, and Novak Djokovic have withdrawn from tournaments because of wrist, elbow, or shoulder injuries.
A recent solid foam-filled graphite racket design claims to reduce the risk of injury. Previous testing has suggested that these foam-filled rackets are less stiff and damp vibrations more than hollow rackets, thus reducing the shock delivered to the arm of the player and the associated risk of injury [1]. Figure 1 shows cross-sections of the handles of hollow and foam-filled versions of the same model racket.
The preliminary study reported in this paper was an attempt to identify the vibrational characteristics that might explain why foam-filled rackets improve feel and reduce risk of injury.
Figure 1: Cross-section of the handle of a foam-filled racket (left) and a hollow racket (right).
Damping Rates

The first vibrational characteristic we set out to identify was the damping associated with the first few bending and torsional vibrations of the racket frame. A higher damping rate means that the unwanted vibration dies away faster, resulting in a less painful vibration delivered to the hand, wrist, and arm. Previous research on handheld sports equipment (baseball and softball bats and field hockey sticks) has demonstrated that bats and sticks with higher damping feel better and minimize painful sting [2,3,4].

We measured the damping rates of 20 different tennis rackets by suspending each racket from the handle with rubber bands, striking the racket frame in the head region, and measuring the resulting vibration at the handle with an accelerometer. Damping rates were obtained from the frequency response of the racket using a frequency analyzer. We note that suspending the racket from rubber bands is a free boundary condition, but other research has shown that this free boundary condition reproduces the vibrational behavior of a hand-held racket more closely than a clamped-handle condition does [5,6].
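For readers curious how a damping rate can be read off a measured frequency response, the sketch below estimates it from the half-power (-3 dB) bandwidth of a resonance peak. It is a minimal illustration on a synthetic single-mode response; the 170 Hz modal frequency and the damping ratio are placeholder values, not the rackets' measured data.

```python
import numpy as np

# Synthetic single-mode frequency response (a stand-in for a measured FRF).
f = np.linspace(50, 400, 20000)                 # frequency axis, Hz
f0, zeta = 170.0, 0.01                          # assumed modal frequency and damping ratio
H = 1.0 / (f0**2 - f**2 + 2j * zeta * f0 * f)   # single-degree-of-freedom response shape

mag = np.abs(H)
peak = np.argmax(mag)
half_power = mag[peak] / np.sqrt(2.0)           # -3 dB level relative to the peak

# Half-power bandwidth around the resonance peak.
above = np.where(mag >= half_power)[0]
bandwidth = f[above[-1]] - f[above[0]]          # Hz

zeta_est = bandwidth / (2.0 * f[peak])          # estimated damping ratio
decay_rate = 2.0 * np.pi * f[peak] * zeta_est   # exponential decay rate, 1/s
print(f"f0 = {f[peak]:.1f} Hz, zeta = {zeta_est:.4f}, decay rate = {decay_rate:.1f} 1/s")
```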

Measured damping rates for the first bending mode, shown in Fig. 2, indicate no difference between the damping and decay rates for hollow and foam-filled graphite rackets. Similar results were obtained for other bending and torsional modes. This result suggests that the benefit of or preference for foam-filled rackets is not due to a higher damping that could cause unwanted vibrations to decay more quickly.

Figure 2: Damping rates of the first bending mode for 20 rackets, hollow (open circles) and foam-filled (solid squares). A higher damping rate means the vibration will have a lower amplitude and will decay more quickly.

Vibrational Mode Shapes and Frequencies
Experimental modal analysis is a common method of determining how a racket vibrates, in terms of its mode shapes and resonance frequencies [7]. In this experiment, two rackets were tested: a hollow and a foam-filled racket of the same make and model. Both rackets were freely suspended by rubber bands, as shown in Fig. 3. An accelerometer, fixed at one location, measured the vibrational response to a force hammer impact at each of approximately 180 locations around the frame and strings of the racket. The resulting frequency response functions for each impact location were post-processed with modal analysis software to extract vibrational mode shapes and resonance frequencies. An example of the vibrational mode shapes for a hollow graphite tennis racket may be found on Dr. Russell’s website.
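As a rough illustration of how a frequency response function is formed from a hammer impact and an accelerometer signal, here is a minimal sketch using the common H1 estimator (cross-spectrum of force and response divided by the auto-spectrum of the force). The signal arrays and sampling rate are placeholders, and the mode-shape extraction performed by the commercial modal analysis software is not reproduced.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 25600                        # assumed sampling rate, Hz
# Placeholder signals standing in for one measured impact: hammer force and acceleration.
force = np.random.randn(4096)
accel = np.random.randn(4096)

# H1 estimator: FRF = cross-spectrum(force, acceleration) / auto-spectrum(force).
freqs, S_fa = csd(force, accel, fs=fs, nperseg=1024)
_, S_ff = welch(force, fs=fs, nperseg=1024)
frf = S_fa / S_ff

# Repeating this for the ~180 impact points yields the set of FRFs that the modal
# analysis software curve-fits to obtain mode shapes and resonance frequencies.
```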

Figure 3: Modal analysis setup for a freely suspended racket.

Figure 4 compares the first and third bending modes and the first torsional mode for a hollow and a foam-filled racket; the only difference between the two rackets is that one is hollow and the other is foam-filled. In the figure, the pink and green regions represent motion in opposite directions, and the white areas indicate regions, called nodes, where no vibration occurs. The sweet spot of a tennis racket is often identified as the center of the nodal line of the first bending mode shape in the head region [8]. An impact from an incoming ball at this location results in zero vibration at the handle, and therefore a better “feel” for the player. The data in Fig. 4 show very few differences between the mode shapes of the hollow and foam-filled rackets. The frequencies at which the mode shapes occur are slightly higher for the foam-filled racket than for the hollow racket, but the differences in shape between the two types are negligible.

Figure 4: Contour maps representing the out-of-plane vibration amplitude for the first bending (left), first torsional (middle), and third bending (right) modes for a hollow (top) and a foam-filled racket (bottom) of the same make and model.

Conclusions

This preliminary study shows that damping rates for this particular foam-filled racket design are not higher than those of hollow rackets. The modal analysis gives a closer, yet inconclusive, look at the intrinsic vibrational properties of the hollow and foam-filled rackets. The benefit of this racket design is perhaps related to differences in the impact shock delivered to the player's arm, but additional testing is needed to investigate this conjecture.

Tags: tennis, vibrations, graphite, design
Bibliography
[1] Ferrara, L., & Cohen, A. (2013). A mechanical study on tennis racquets to investigate design factors that contribute to reduced stress and improved vibrational dampening. Procedia Engineering, 60, 397-402.
[2] Russell, D.A. (2012). Vibration damping mechanisms for the reduction of sting in baseball bats. In 164th meeting of the Acoustical Society of America, Kansas City, MO, Oct 22-26. Journal of the Acoustical Society of America, 132(3) Pt.2, 1893.
[3] Russell, D.A. (2012). Flexural vibration and the perception of sting in hand-held sports implements. In Proceedings of InterNoise 2012, August 19-22, New York City, NY.
[4] Russell, D.A. (2006). Bending modes, damping, and the sensation of sting in baseball bats. In Proceedings 6th IOMAC Conference, 1, 11-16.
[5] Banwell, G.H., Roberts, J.R., & Halkon, B.J. (2014). Understanding the dynamic behavior of a tennis racket under play conditions. Experimental Mechanics, 54, 527-537.
[6] Kotze, J., Mitchell, S.R., & Rothberg, S.J. (2000). The role of the racket in high-speed tennis serves. Sports Engineering, 3, 67-84.
[7] Schwarz, B.J., & Richardson, M.H. (1999). Experimental modal analysis. CSI Reliability Week, 35(1), 1-12.
[8] Cross, R. (2004). Center of percussion of hand-held implements. American Journal of Physics, 72, 622-630.

3aSPb5 – Improving Headphone Spatialization: Fixing a problem you’ve learned to accept

Improving Headphone Spatialization: Fixing a problem you’ve learned to accept

Muhammad Haris Usmani – usmani@cmu.edu
Ramón Cepeda Jr. – rcepeda@andrew.cmu.edu
Thomas M. Sullivan – tms@ece.cmu.edu
Bhiksha Raj – bhiksha@cs.cmu.edu
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213

Popular version of paper 3aSPb5, “Improving headphone spatialization for stereo music”
Presented Wednesday morning, May 20, 2015, 10:15 AM, Brigade room
169th ASA Meeting, Pittsburgh

The days of grabbing a drink, brushing dust from your favorite record, and playing it in the listening room of the house are long gone. Today, with the portability technology has enabled, almost everybody listens to music on headphones. However, most commercially produced stereo music is mixed and mastered for playback on loudspeakers, and this presents a problem for the growing number of headphone listeners. When a legacy stereo mix is played on headphones, all the instruments and voices in the piece are placed between the listener’s ears, inside the head. This is not only unnatural and fatiguing for the listener, but also detrimental to the original placement of the instruments in the musical piece. It disturbs the spatialization of the music and makes the sound image appear as three isolated lobes inside the listener’s head [1], as shown in Figure 1.

[Figure 1]

Hard-panned instruments separate into the left and right lobes, while instruments placed at center stage are heard in the center of the head. However, because hearing is a dynamic process that adapts and settles with the perceived sound, we have learned to accept that headphones sound this way [2].
In order to improve the spatialization of headphones, the listener’s ears must be deceived into thinking that they are listening to the music inside of a listening room. When playing music in a room, the sound travels through the air, reverberates inside the room, and interacts with the listener’s head and torso before reaching the ears [3]. These interactions add the necessary psychoacoustic cues for perception of an externalized stereo soundstage presented in front of the listener. If this listening room is a typical music studio, the soundstage perceived is close to what the artist intended. Our work tries to place the headphone listener into the sound engineer’s seat inside a music studio to improve the spatialization of music. For the sake of compatibility across different headphones, we try to make minimal changes to the mastering equalization curve of the music.
Since there is a compromise between sound quality and the spatialization that can be presented, we developed three different systems that present different levels of such compromise. We label these as Type-I, Type-II, and Type-0. Type-I focuses on improving spatialization but at the cost of losing some sound quality, Type-II improves spatialization while taking into account that the sound quality is not degraded too much, and Type-0 focuses on refining conventional listening by making the sound image more homogeneous. Since the sound quality is key in music, we will skip over Type-I and focus on the other two systems.
Type-II consists of a head-related transfer function (HRTF) model [4], room reverberation (synthesized reverb [5]), and a spectral correction block. HRTFs embody all the complex spatialization cues that arise from the relative positions of the listener and the source [6]. In our case, a general HRTF model is used, configured to place the listener at the “sweet spot” in the studio (right and left speakers placed at an angle of 30° from the listener’s head). The spectral correction attempts to keep the original mastering equalization curve as intact as possible.
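A very rough sketch of the core of such a system is shown below: each stereo channel is convolved with head-related impulse responses for virtual loudspeakers at ±30°, and the contributions are summed at each ear. The HRIR file names are hypothetical placeholders, the HRIRs are assumed to be equal-length arrays, and the reverberation and spectral-correction stages described above are omitted; this is a sketch of the general idea, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical HRIR files for virtual loudspeakers at +/-30 degrees (placeholders,
# assumed to all have the same length).
h_ll = np.load("hrir_left_spk_left_ear.npy")    # left speaker  -> left ear
h_lr = np.load("hrir_left_spk_right_ear.npy")   # left speaker  -> right ear
h_rl = np.load("hrir_right_spk_left_ear.npy")   # right speaker -> left ear
h_rr = np.load("hrir_right_spk_right_ear.npy")  # right speaker -> right ear

def render_virtual_speakers(x_left, x_right):
    """Place the two stereo channels on virtual speakers in front of the listener."""
    ear_left = fftconvolve(x_left, h_ll) + fftconvolve(x_right, h_rl)
    ear_right = fftconvolve(x_left, h_lr) + fftconvolve(x_right, h_rr)
    return ear_left, ear_right
```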
Type-0 is made up of a side-content crossfeed block and a spectral correction block. Some headphone amps allow crossfeed between the left and right channels to model the fact that, when listening to music through loudspeakers, each ear hears the music from both speakers, with a delay attached to the sound originating from the speaker that is farther away. A shortcoming of conventional crossfeed is that the delay we can apply is limited (to avoid comb filtering) [7]. Side-content crossfeed resolves this by only crossfeeding content that is unique to each channel, allowing us to use larger delays. In this system, the side content is extracted using a stereo-to-3 upmixer, implemented as a novel extension of Nikunen et al.’s upmixer [8].
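The idea behind side-content crossfeed can be sketched in a few lines: separate the content shared by both channels from the content unique to each, then attenuate, delay, and feed only the unique (side) content to the opposite ear. The simple mid/side split below stands in for the tensor-factorization upmixer used in the paper, and the delay and gain values are illustrative assumptions.

```python
import numpy as np

def side_content_crossfeed(left, right, fs, delay_ms=1.0, gain=0.3):
    """Crossfeed only the content unique to each channel (simplified sketch)."""
    mid = 0.5 * (left + right)            # content shared by both channels
    side_l = left - mid                   # content unique to the left channel
    side_r = right - mid                  # content unique to the right channel

    d = int(round(delay_ms * 1e-3 * fs))  # crossfeed delay in samples
    delayed_l = np.concatenate([np.zeros(d), side_l])[:len(left)]
    delayed_r = np.concatenate([np.zeros(d), side_r])[:len(right)]

    out_l = left + gain * delayed_r       # right-only content leaks into the left ear
    out_r = right + gain * delayed_l      # left-only content leaks into the right ear
    return out_l, out_r
```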
These systems were put to the test in a subjective evaluation with 28 participants, all between 18 and 29 years of age. The participants were introduced to the metrics being measured at the beginning of the evaluation. Since the first part of the evaluation included specific spatial metrics that are difficult for untrained listeners to grasp, we used a collection of descriptions, diagrams, and music excerpts representing each metric to provide in-evaluation training for the listeners. The results of the first part of the evaluation suggest that this method worked well.
We were able to conclude from the results that Type-II externalized the sounds while performing at a level comparable to the original source on the other metrics, and that Type-0 improved sound quality and comfort by compromising stereo width compared with the original source, which is what we expected. There was also strong content dependence in the results, suggesting that a different spatialization setting should be used for music that has been produced differently. Overall, two of the three proposed systems in this work were preferred as much as or more than the legacy stereo mix.

Tags: music, acoustics, design, technology

References

[1] G-Sonique, “Monitor MSX5 – Headphone monitoring system,” G-Sonique, 2011. [Online]. Available: http://www.g-sonique.com/msx5headphonemonitoring.html.
[2] S. Mushendwa, “Enhancing Headphone Music Sound Quality,” Aalborg University – Institute of Media Technology and Engineering Science, 2009.
[3] Y. G. Kim et al., “An Integrated Approach of 3D Sound Rendering,” PCM 2010, Springer-Verlag Berlin Heidelberg, vol. II, pp. 682–693, 2010.
[4] D. Rocchesso, “3D with Headphones,” in DAFX: Digital Audio Effects, Chichester, John Wiley & Sons, 2002, pp. 154-157.
[5] P. E. Roos, “Samplicity’s Bricasti M7 Impulse Response Library v1.1,” Samplicity, [Online]. Available: http://www.samplicity.com/bricasti-m7-impulse-responses/.
[6] R. O. Duda, “3-D Audio for HCI,” Department of Electrical Engineering, San Jose State University, 2000. [Online]. Available: http://interface.cipic.ucdavis.edu/sound/tutorial/. [Accessed 15 4 2015].
[7] J. Meier, “A DIY Headphone Amplifier With Natural Crossfeed,” 2000. [Online]. Available: http://headwize.com/?page_id=654.
[8] J. Nikunen, T. Virtanen and M. Vilermo, “Multichannel Audio Upmixing by Time-Frequency Filtering Using Non-Negative Tensor Factorization,” Journal of the AES, vol. 60, no. 10, pp. 794-806, October 2012.

5aMU1 – Understanding timbral effects of multi-resonator/generator systems of wind instruments in the context of western and non-western music – Jonas Braasch

Popular version of poster 5aMU1
Presented Friday morning, May 22, 2015, 8:35 AM – 8:55 AM, Kings 4
169th ASA Meeting, Pittsburgh

In this paper the relationship between musical instruments and the rooms they are performed in was investigated. A musical instrument is typically characterized as a system consisting of a tone generator combined with a resonator. A saxophone, for example, has a reed as the tone generator and a conically shaped resonator whose effective length can be changed with keys to produce different musical notes. Often neglected is the fact that for all wind instruments a second resonator is coupled to the tone generator: the vocal cavity. We use our vocal cavity every day when we speak to form characteristic formants, local enhancements in frequency that shape vowels. This is achieved by varying the diameter of the vocal tract at specific positions along its axis. In contrast to the resonator of a wind instrument, the vocal tract is fixed in length by the distance between the vocal cords and the lips. Consequently, the vocal tract cannot be used to change the fundamental frequency over a large melodic range; for our voice, the change in frequency is controlled via the tension of the vocal cords.

The instrument's resonator, however, is not an adequate device for controlling the timbre (harmonic spectrum) of an instrument, because it can only be varied in length, not in width. Therefore, the player's adjustment of the vocal tract is necessary to control the timbre of the instrument. Some instruments possess additional mechanisms to control timbre, e.g., the embouchure, which controls the tone generator directly using the lip muscles; others, like the recorder, rely on changes in the wind supply provided by the lungs and on changes of the vocal tract. The role of the vocal tract has not been addressed systematically in the literature and in learning guides, for two obvious reasons. Firstly, there is no known systematic approach to quantifying the internal body movements that shape the vocal tract; each performer has to find the best vocal tract configurations intuitively. For the resonator system, by contrast, the changes are described by the musical notes, and in cases where multiple ways exist to produce the same note, additional signs indicate how to finger it (e.g., by specifying a key combination). Secondly, in Western classical music culture the vocal tract adjustments have a predominantly corrective function: to balance out the harmonic spectrum and make the instrument sound as even as possible across the register.


PVC-Didgeridoo adapter for soprano saxophone

In non-Western cultures, the role of the oral cavity can be much more important in conveying musical meaning. The didgeridoo, for example, has a fixed resonator with no tone holes, and consequently it can only produce a single-pitched drone. The musical parameter space is then defined by modulating the overtone spectrum above this tone by changing the vocal tract dimensions and by creating vocal sounds on top of the buzzing lips at the didgeridoo's edge. Mouthpieces of Western brass instruments have a cup behind the rim with a very narrow opening to the resonator, the throat. The didgeridoo does not have a cup; the rim is simply the edge of the resonator with a ring of beeswax. While the narrow throat of a Western mouthpiece mutes additional sounds produced with the voice, didgeridoos are very open from end to end and carry the voice much better.

The room a musical instrument is performed in acts as a third resonator, which also affects the timbre of the instrument. In our case, the room was simulated using a computer model with early reflections and late reverberation.
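A minimal sketch of this kind of room model, under simple assumptions, is shown below: a few discrete early reflections followed by an exponentially decaying noise tail for the late reverberation, applied to the instrument signal by convolution. The delays, gains, and decay time are illustrative values, not those used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rir = np.zeros(int(1.0 * fs))                  # 1-second synthetic room impulse response
rir[0] = 1.0                                   # direct sound

# A few discrete early reflections: (delay in ms, gain), illustrative values.
for delay_ms, gain in [(11, 0.5), (17, 0.4), (23, 0.35), (31, 0.3)]:
    rir[int(delay_ms * 1e-3 * fs)] += gain

# Late reverberation: exponentially decaying noise tail (assumed T60 of 0.8 s).
t = np.arange(len(rir)) / fs
t60 = 0.8
tail = np.random.randn(len(rir)) * np.exp(-6.91 * t / t60)   # -60 dB at t = T60
rir[int(0.04 * fs):] += 0.2 * tail[int(0.04 * fs):]          # tail starts after 40 ms

dry = np.random.randn(fs)                      # stand-in for a dry instrument recording
wet = fftconvolve(dry, rir)[:len(dry)]         # the instrument as heard in the simulated room
```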

Tone generators for soprano saxophone from left to right: Chinese Bawu, soprano saxophone, Bassoon reed, cornetto.

In general, it is difficult to assess the effect of a mouthpiece and a resonator individually, because both vary across instruments. The trumpet, for example, has a narrow cylindrical bore with a brass mouthpiece, while the saxophone has a wide conical bore with a reed-based mouthpiece. To mitigate this effect, several tone generators were adapted for a soprano saxophone, including a brass mouthpiece from a cornetto, a bassoon mouthpiece, and a didgeridoo adapter made from a 140 cm folded PVC pipe that can be attached to the saxophone as well. It turns out that exchanging tone generators changes the timbre of the saxophone significantly. The cornetto mouthpiece gives the instrument a much mellower tone. Similar to the baroque cornetto, the instrument then sounds better in a bright room with a lot of high-frequency energy, while the saxophone is at home in a 19th-century concert hall with a steeper roll-off at high frequencies.

3aSA7 – Characterizing defects with nonlinear acoustics – Pierre-Yves Le Bas,  Brian E. Anderson, Marcel Remillieux, Lukasz Pieczonka, TJ Ulrich

Characterizing defects with nonlinear acoustics

 

Pierre-Yves Le Bas – pylb@lanl.gov (1), Brian E. Anderson (1,2), Marcel Remillieux (1), Lukasz Pieczonka (3), TJ Ulrich (1)

(1) Geophysics Group EES-17, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

(2) Department of Physics and Astronomy, Brigham Young University, N377 Eyring Science Center, Provo, UT 84601, USA

(3) AGH University of Science and Technology, Krakow, Poland

 

Popular version of paper 3aSA7, “Elasticity Nonlinear Diagnostic method for crack detection and depth estimation”
Presented Wednesday morning, November 4, 2015, 10:20 AM, Daytona room
170th ASA Meeting, Jacksonville

 

One common problem in industry is detecting and characterizing defects, especially at an early stage. Small cracks are difficult to detect with current techniques and, as a result, it is customary to replace parts after an estimated lifetime instead of keeping them in service until they are actually approaching failure. Being able to detect early-stage damage before it becomes structurally dangerous is a challenging problem of great economic importance. This is where nonlinear acoustics can help. Nonlinear acoustics is extremely sensitive to tiny cracks and thus to early damage. The principle of nonlinear acoustics is easily understood if you consider a bell. If the bell is intact, it will ring with an agreeable tone determined by the geometry of the bell. If the bell is cracked, one will hear a dissonant sound, which is due to nonlinear phenomena. Thus, if an object is struck, it is possible to determine, by listening to the tone(s) produced, whether or not it is damaged. Here the same principle is used, but in a more quantitative way and usually at ultrasonic frequencies. Ideally, one would also like to know where the damage is and how it is oriented. A crack growing through an object could be more important to detect, since it could lead to the object splitting in half, while in other circumstances chipping might be more important, so knowing the orientation of a crack is critical in the health assessment of a part.

To localize and characterize a defect, time reversal is a useful technique. Time reversal can be used to focus vibration at a chosen location with a known direction of motion, i.e., a sample can be made to vibrate perpendicular to the surface of the object or parallel to it, motions referred to as out-of-plane and in-plane, respectively. The movie below shows how time reversal is used to focus energy: a source broadcasts a wave from the back of a plate and signals are recorded on the edges using other transducers. The signals from this initial phase are then flipped in time and broadcast from all the edge receivers. Time reversal then dictates that these waves focus at the initial source location.

[Movie 1]

Time reversal can also do more than the simple example shown in the movie. Making use of the reciprocity principle, i.e., that a signal traveling from A to B is identical to the same signal traveling from B to A, the source in the back of the plate can be replaced by a receiver and the initial broadcast can be done from the side. This means TR can focus energy anywhere a signal can be recorded; with a laser as the receiver, that means anywhere on the surface of an object.
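The two-step procedure can be illustrated numerically. In the toy sketch below, propagation from the focal point to each edge receiver is modeled as a pure delay; the recorded signals are flipped in time and sent back through the same (reciprocal) paths, and their contributions line up and add constructively at the original source location. The pulse shape, sampling rate, and travel times are arbitrary assumptions, not values from the experiment.

```python
import numpy as np

fs = 1_000_000                                  # 1 MHz sampling rate (illustrative)
t = np.arange(2048) / fs
# Short ultrasonic burst emitted at the focal point.
pulse = np.sin(2 * np.pi * 100e3 * t) * np.exp(-((t - 2e-4) / 3e-5) ** 2)

delays = [120, 233, 347, 408]                   # assumed travel times to receivers, in samples

def propagate(sig, d):
    """Idealized propagation: a pure delay of d samples."""
    return np.concatenate([np.zeros(d), sig])[:len(sig)]

# Step 1: record what each edge receiver picks up, then flip it in time.
recorded = [propagate(pulse, d) for d in delays]
reversed_sigs = [r[::-1] for r in recorded]

# Step 2: rebroadcast the reversed signals; by reciprocity each travels the same path back.
refocused = sum(propagate(r, d) for r, d in zip(reversed_sigs, delays))

# The contributions arrive in phase and add up at the original source location,
# producing a strong peak: the time-reversed focus.
print("peak of refocused signal:", np.max(np.abs(refocused)))
```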

In addition, the dominant vibration direction of the focus, e.g., in-plane or out-of-plane, can be specified by recording specific directions of motion of the initial signals. If, during the first step of the time reversal process, the receiver is set to record in-plane vibration, the focus will be primarily in that in-plane direction; similarly, if the receiver records out-of-plane vibration in the first step, the focus will be essentially in the out-of-plane direction. This is important because the nonlinear response of a crack depends on the orientation of the vibration that excites it. To fully characterize a sample in terms of crack presence and orientation, TR is used to focus energy at defined locations, and at each point the nonlinear response is quantified. This can be done for any orientation of the focused wave; to cover all possibilities, three scans are usually done in three orthogonal directions.
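How the nonlinear response at each focal point is quantified is not spelled out in this summary. One common proxy, sketched below, is the energy near the harmonics of the excitation frequency relative to the energy near the fundamental in the focal-point signal; treat this as an illustrative assumption rather than the authors' exact metric.

```python
import numpy as np

def harmonic_ratio(signal, fs, f0, n_harmonics=3, half_bw=2000.0):
    """Energy near harmonics of f0 divided by energy near f0 (a simple nonlinearity proxy)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    def band_energy(fc):
        band = (freqs > fc - half_bw) & (freqs < fc + half_bw)
        return spectrum[band].sum()

    fundamental = band_energy(f0)
    harmonics = sum(band_energy(k * f0) for k in range(2, 2 + n_harmonics))
    return harmonics / fundamental

# Hypothetical usage for a focal-point signal sampled at 10 MHz with a 200 kHz excitation:
# nonlinearity_map[i, j] = harmonic_ratio(focus_signal, fs=10e6, f0=200e3)
```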

Figure 2 shows three scans, in the x, y and z directions, of the same sample, which consists of a glass plate glued onto an aluminum plate. The sample has two defects: a delamination due to a lack of glue between the two plates (in the (x,y) plane) at the top of the scan area, and a crack perpendicular to the surface in the glass plate (in the (x,z) plane) in the middle of the scan area.


Figure 2. Nonlinear component of the time reversal focus at each point of a scan grid, with the wave focused in the x, y and z directions (from left to right).

As can be seen in these scans, the delamination in the (x,y) plane is visible only when the wave is focused in the z direction, while the crack in the (x,z) plane is visible only in the y scan. This means that cracks have a strong nonlinear behavior when excited in a direction perpendicular to their main orientation. So, by scanning with three different orientations of the focused vibration, one should be able to reconstruct the orientation of a crack.

Another feature of the time reversal focus is that its spatial extent is about one wavelength of the focused wave, which means the higher the frequency, the smaller the spot size, i.e., the area of the focused energy. One might then think that higher frequency always gives better resolution and is therefore always best. However, the extent of the focus is also the depth that this technique can probe, so a lower frequency means a deeper investigation and thus a more complete characterization of the sample. There is therefore a tradeoff between depth of investigation and resolution. By doing several scans at different frequencies, however, one can extract additional information about a crack. For example, Figure 3 shows two scans of a metallic sample that differ only in the frequency of the focused wave.

 


Figure 3. From left to right: nonlinear component of the time reversal focus at each point of a scan grid at 200 kHz and at 100 kHz, and a photograph of the sample from its side.

 

At 200 kHz, it looks like there is only a thin crack, while at 100 kHz the extent of the crack toward the bottom of the scan more than doubles, so there is more going on than a simple resolution difference. At 200 kHz the depth of investigation is about 5 mm; at 100 kHz it is about 10 mm. Looking at the side of the sample in the right panel of Figure 3, the crack is seen to be perpendicular to the surface for about 6 mm and then to dip sharply. At 200 kHz, the scan is only sensitive to the part perpendicular to the surface, while at 100 kHz the scan also shows the dipping part. So performing several scans at different frequencies can provide information on the depth profile of the crack.
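To make these numbers concrete: the spot size and the probing depth both scale roughly with the wavelength, λ = c/f. The tiny calculation below assumes an effective wave speed of about 1000 m/s, an illustrative value chosen only so that the two scan frequencies reproduce the 5 mm and 10 mm depths quoted above; the actual speed depends on the material and the wave type.

```python
# Wavelength sets both the spot size and the depth of investigation (rough scaling).
c = 1000.0                      # assumed effective wave speed, m/s (illustrative only)
for f in (200e3, 100e3):        # the two scan frequencies used for Figure 3
    wavelength_mm = c / f * 1e3
    print(f"{f / 1e3:.0f} kHz -> wavelength of about {wavelength_mm:.0f} mm")
# 200 kHz gives about 5 mm and 100 kHz about 10 mm, matching the quoted depths.
```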

In conclusion, using time reversal to focus energy in several directions and at different frequencies, and studying the nonlinear component of this focus, can lead to a characterization of a crack, including its orientation and depth profile, information that is currently only available from techniques such as X-ray CT, which are not as easily deployable as ultrasonic ones.

 

4aEA10 – Preliminary evaluation of the sound absorption coefficient of a thin coconut coir fiber panel for automotive applications. – Key F. Lima

Preliminary evaluation of the sound absorption coefficient of a thin coconut coir fiber panel for automotive applications.

Key F. Lima – keyflima@gmail.com

Pontifical Catholic University of Paraná

Curitiba, Paraná, Brazil

 

Popular version of paper 4aEA10, “Preliminary evaluation of the sound absorption coefficient of a thin coconut fiber panel for automotive applications”

Presented Thursday morning, November 5, 2015, 11:15 AM, Orlando Room

170th ASA Meeting, Jacksonville, Fl

 

Absorbent materials are fibrous or porous and must be good acoustic dissipators. As sound propagates through the absorbent medium, multiple reflections and friction of the air within it convert sound energy into thermal energy. Acoustic surface treatments with absorbent materials are widely used to reduce reverberation in enclosed spaces or to increase the sound transmission loss of acoustic panels. In addition, these materials can be applied in acoustic filters to increase their efficiency. Sound absorption depends on the excitation frequency and is more effective at high frequencies. Natural fibers such as coconut coir have great potential for use as sound absorbent materials. Because coconut coir is an agricultural by-product, a panel made from this fiber is a natural, and therefore economical and interesting, option. This work compares the sound absorption coefficient of a thin coconut coir fiber panel with that of a composite panel made of fiberglass, expanded polyurethane foam, non-woven fabric, and woven polyester, which is used in the automotive industry as roof trim. The sound absorption coefficient was evaluated with the impedance tube technique.

 

In 1980, Chung and Blaser evaluated the normal-incidence sound absorption coefficient through the transfer function method. The standards ASTM E1050 and ISO 10534-2 are based on Chung and Blaser's method (Figure 1). In summary, this method uses an impedance tube with a sound source placed at one end and, at the other end, the absorbent material backed by a rigid wall. The decomposition of the stationary sound wave pattern into forward- and backward-traveling components is achieved by measuring the sound pressure simultaneously at two spaced locations in the tube's sidewall, where two microphones are located (Figure 1).


Figure 1. Impedance Tube.

The wave decomposition allows the determination of the complex reflection coefficient R(f), from which the complex acoustic impedance and the normal-incidence sound absorption coefficient (α) of an absorbent material can be determined. The reflection coefficient R(f) is calculated from the transfer function H12 between the two microphones through:

 

R(f) = \frac{H_{12} - e^{-i k_0 s}}{e^{i k_0 s} - H_{12}} \, e^{2 i k_0 x_1},                                                                       (1)

 

where s is the distance between the microphones, x1 is the distance between the farthest microphone and the sample, i is the imaginary unit, and k0 is the wave number in air.

If R(f) is known, the coefficient α is easily obtained from the expression:

 

\alpha = 1 - |R(f)|^{2}.                                                                                               (2)
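For readers who want to apply Equations (1) and (2) to their own measurements, here is a direct transcription into code; the transfer function array, frequency axis, and geometry values in the usage comment are placeholders.

```python
import numpy as np

def absorption_coefficient(H12, freqs, s, x1, c=343.0):
    """Normal-incidence absorption coefficient from the two-microphone
    transfer function (Equations 1 and 2)."""
    k0 = 2.0 * np.pi * freqs / c                    # wave number in air
    R = (H12 - np.exp(-1j * k0 * s)) / (np.exp(1j * k0 * s) - H12) * np.exp(2j * k0 * x1)
    return 1.0 - np.abs(R) ** 2

# Hypothetical usage, e.g. 29 mm microphone spacing and 35 mm from the far microphone
# to the sample:
# alpha = absorption_coefficient(H12_measured, freqs, s=0.029, x1=0.035)
```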

 

In this work, eight samples of the coconut coir fiber panel and eight samples of the composite panel (fiberglass, expanded polyurethane foam, non-woven fabric, and woven polyester) used in the automotive industry were tested (Figures 2 and 3). The material properties are shown in Table 1.


Figure 2. Samples.


Figure 3. Composite panel structure.

Table 1. Material properties.

Coconut coir fiber panel:

Sample | Diameter [mm] | Thickness [mm] | Mass [g] | Density [kg/m³]
1 | 28,25 | 5,17 | 0,67 | 649,5
2 | 28,20 | 5,04 | 0,62 | 618,8
3 | 28,20 | 4,93 | 0,60 | 612,6
4 | 28,35 | 5,09 | 0,69 | 674,7
5 | 100,43 | 4,98 | 8,89 | 708,0
6 | 100,43 | 4,84 | 9,73 | 797,7
7 | 100,73 | 5,34 | 9,64 | 712,1
8 | 100,45 | 4,79 | 9,13 | 755,2

Composite panel:

Sample | Diameter [mm] | Thickness [mm] | Mass [g] | Density [kg/m³]
1 | 28,05 | 5,78 | 0,41 | 360,6
2 | 28,08 | 5,66 | 0,42 | 376,6
3 | 28,15 | 5,59 | 0,42 | 379,6
4 | 28,23 | 5,54 | 0,44 | 398,8
5 | 99,55 | 5,86 | 5,40 | 371,9
6 | 99,55 | 6,20 | 5,54 | 360,9
7 | 99,68 | 6,06 | 5,57 | 370,4
8 | 99,55 | 5,99 | 5,62 | 378,9

A random noise signal with a frequency band between 200 Hz and 5000 Hz was used to evaluate α. Figure 4 shows the mean normal-incidence absorption coefficient obtained from the measurements.


Figure 4. Comparison of the normal-incidence absorption coefficient (α).

 

The results show that the composite panel has a higher sound absorption coefficient than the coconut coir fiber panel. To improve the acoustic efficiency of the coconut coir fiber panel, a filling material with the same effect as the polyurethane foam in the composite panel would need to be added.

REFERENCES

 

Chung, J. Y. and Blaser, D. A. (1980). “Transfer function method of measuring in-duct acoustic properties – I: Theory,” J. Acoust. Soc. Am. 68, 907-913.

 

Chung, J. Y. and Blaser, D. A. (1980). “Transfer function method of measuring in-duct acoustic properties – II: Experiment,” J. Acoust. Soc. Am. 68, 913-921.

 

ASTM E1050:2012. “Standard test method for impedance and absorption of acoustical materials using a tube, two microphones and a digital frequency analysis system,” American Society for Testing and Materials, Philadelphia, PA, 2012.

 

ISO 10534-2:1998. “Determination of sound absorption coefficient and impedance in impedance tubes – Part 2: Transfer-function method”, International Organization for Standardization, Geneva, 1998.