4aAB2 – “Seemingly simple songs: Black-capped chickadee song revisited” –  Allison H. Hahn – Christopher B. Sturdy

Seemingly simple songs: Black-capped chickadee song revisited

Allison H. Hahn – ahhahn@ualberta.ca
Christopher B. Sturdy – csturdy@ualberta.ca


University of Alberta

Edmonton, AB, Canada


Popular version of paper 4aAB2, “Seemingly simple songs: Black-capped chickadee song revisited”
Presented Thursday morning, November 5, 8:55 AM, City Terrace Room
170th ASA Meeting, Jacksonville, FL


Vocal communication is important to many animal species, including humans. Over the past 60 years, songbird vocal communication has been widely studied, largely because the invention of the sound spectrograph allowed researchers to visually represent vocalizations and make precise acoustic measurements. Black-capped chickadees (Poecile atricapillus; Figure 1) are one example of a songbird whose song has been well studied. Black-capped chickadees produce a short (less than 2 seconds), whistled fee-bee song. Compared to the songs of many songbird species, which often contain numerous note types without a fixed order, black-capped chickadee song is relatively simple, containing two notes produced in the same order in each rendition. Although the songs appear acoustically simple, they carry a rich variety of information about the singer, including dominance rank, geographic location, and individual identity [1,2,3].

Interestingly, while songbird song has been widely examined, most of the focus (at least for North Temperate Zone species) has been on male-produced song, largely because it was thought that only males sang. More recently, however, mounting evidence shows that in many songbird species both males and females produce song [4,5]. In the study of black-capped chickadees, the focus has likewise been on male song. Recently, though, we reported that female black-capped chickadees also produce fee-bee song. One possible reason that female song has not been extensively reported is that male and female chickadees look identical to human observers, so singing females may be mistaken for males. By identifying each bird’s sex via DNA analysis and recording both males and females, our work [6] has shown that female black-capped chickadees do produce fee-bee song. These songs are, overall, acoustically similar to male song (songs of both sexes contain two whistled notes; see Figure 2), making discrimination by ear difficult for humans.

Our next objective was to determine whether any acoustic features differ between male and female songs. Using bioacoustic techniques, we demonstrated that there are such differences: females produce songs with a greater frequency decrease in the first note than males do (Figure 2). These results show that there are sufficient acoustic differences for birds to identify the sex of a singing individual even in the absence of visual cues. Because chickadees may live in densely wooded environments, in which visual, but not auditory, cues are often obscured, being able to identify a singer’s sex (and whether the singer is a potential mate or a territory rival) would be an important ability.
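As a rough illustrative sketch (not the authors’ actual analysis pipeline), the frequency decrease across a whistled note can be estimated by comparing the dominant frequency near the note’s onset and offset. The synthetic gliding whistle below stands in for a real recording, and the 50 ms analysis window is an assumed value.

```python
# Illustrative sketch: estimate the frequency decrease of a whistled note by
# comparing the dominant FFT frequency at the start and end of the note.
import numpy as np

def peak_frequency(samples, rate):
    """Return the frequency (Hz) of the largest FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

def frequency_decrease(note, rate, window_s=0.05):
    """Dominant frequency at note onset minus that at note offset."""
    n = int(window_s * rate)
    return peak_frequency(note[:n], rate) - peak_frequency(note[-n:], rate)

rate = 44100
t = np.linspace(0.0, 0.5, int(0.5 * rate), endpoint=False)
# A synthetic whistle gliding linearly from 4000 Hz down to 3600 Hz.
note = np.sin(2 * np.pi * (4000.0 * t - 0.5 * 800.0 * t ** 2))
print(frequency_decrease(note, rate))
```

The printed value approximates the 400 Hz glide (it lands a few tens of hertz lower because each window averages over part of the glide).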

Following our bioacoustic analysis, an important next step was to determine whether birds themselves can distinguish between male and female songs. To examine this, we used a behavioral paradigm common in animal learning studies: operant conditioning. With this task, we demonstrated that birds can distinguish between male and female songs; however, the acoustic features birds use to discriminate between the sexes may depend on the sex of the listening bird. Specifically, we found evidence that male subjects responded based on information in the song’s first note, while female subjects responded based on information in the song’s second note [7]. One possible reason for this difference is that in the wild, males need to respond quickly to a rival male intruding on their territory, while females may assess the entire song to gather as much information as possible about the singing individual (for example, about a potential mate’s quality). While the exact function of female song is unknown, our studies clearly indicate that female black-capped chickadees produce songs and that the birds themselves can perceive differences between male and female songs.




Figure 1. An image of a black-capped chickadee.


Figure 2. Spectrogram (x-axis: time; y-axis: frequency in kHz) of a male song (top) and a female song (bottom).

Sound file 1. An example of a male fee-bee song.

Sound file 2. An example of a female fee-bee song.



  1. Hoeschele, M., Moscicki, M.K., Otter, K.A., van Oort, H., Fort, K.T., Farrell, T.M., Lee, H., Robson, S.W.J., & Sturdy, C.B. (2010). Dominance signalled in an acoustic ornament. Animal Behaviour, 79, 657–664.
  2. Hahn, A.H., Guillette, L.M., Hoeschele, M., Mennill, D.J., Otter, K.A., Grava, T., Ratcliffe, L.M., & Sturdy, C.B. (2013). Dominance and geographic information contained within black-capped chickadee (Poecile atricapillus) song. Behaviour, 150, 1601–1622.
  3. Christie, P.J., Mennill, D.J., & Ratcliffe, L.M. (2004). Chickadee song structure is individually distinctive over long broadcast distances. Behaviour, 141, 101–124.
  4. Langmore, N.E. (1998). Functions of duet and solo songs of female birds. Trends in Ecology and Evolution, 13, 136–140.
  5. Riebel, K. (2003). The “mute” sex revisited: vocal production and perception learning in female songbirds. Advances in the Study of Behavior, 33, 49–86.
  6. Hahn, A.H., Krysler, A., & Sturdy, C.B. (2013). Female song in black-capped chickadees (Poecile atricapillus): Acoustic song features that contain individual identity information and sex differences. Behavioural Processes, 98, 98–105.
  7. Hahn, A.H., Hoang, J., McMillan, N., Campbell, K., Congdon, J., & Sturdy, C.B. (2015). Biological salience influences performance and acoustic mechanisms for the discrimination of male and female songs. Animal Behaviour, 104, 213–228.

3pBA5 – Using Acoustic Levitation to Understand, Diagnose, and Treat Cancer and Other Diseases – Brian D. Patchett

Using Acoustic Levitation to Understand, Diagnose, and Treat Cancer and Other Diseases


Brian D. Patchett – brian.d.patchett@gmail.com

Natalie C. Sullivan – nhillsullivan@gmail.com

Timothy E. Doyle – Timothy.Doyle@uvu.edu

Department of Physics
Utah Valley University
800 West University Parkway, MS 179
Orem, Utah 84058


Popular version of paper 3pBA5, “Acoustic Levitation Device for Probing Biological Cells With High-Frequency Ultrasound”

Presented Wednesday afternoon, November 4, 2015

170th ASA Meeting, Jacksonville


Imagine a new medical advancement that would allow scientists to measure the physical characteristics of diseased cells involved in cancer, Alzheimer’s, and autoimmune diseases. Through the use of high-frequency ultrasonic waves, such an advancement would allow scientists to test the normal healthy range of virtually any cell type for density and stiffness, providing new capabilities for analyzing healthy cell development as well as insight into the changes that occur as diseases develop and the cells’ characteristics begin to change.


Prior methods of probing cells with ultrasound have relied upon growing the cells on the bottom of a Petri dish, which not only distorts the cells’ shape and structure but also interferes with the ultrasonic signals. A new method was therefore needed to probe the cells without disturbing their natural form, and to “clean up” the signals received by the ultrasound device. Research presented at the 2015 ASA meeting in Jacksonville, Florida shows that acoustic levitation is effective in providing the ideal conditions for probing the cells.


Acoustic levitation is a phenomenon whereby pressure differences of stationary sound waves can be used to suspend small objects in gases or fluids such as air or water. We are currently exploring a new frontier in acoustic levitation of cellular structures in a fluid medium by perfecting a method by which we can manipulate the shape and frequency of sound waves inside of special containers. By manipulating these sound waves in just the right fashion it is possible to isolate a layer of cells in a fluid such as water, which can then be probed with an ultrasound device. The cells are then in a more natural form and environment, and the interference from the floor of the Petri dish is no longer a hindrance.
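A back-of-the-envelope sketch (not the authors’ device design) illustrates why the wave frequency sets the geometry: in a resonant chamber, the pressure nodes of a standing wave, where small particles collect, are spaced half a wavelength apart. The frequency and sound speed below are illustrative assumptions.

```python
# Pressure nodes of a standing wave sit half a wavelength apart; this sketch
# computes that trapping-plane spacing for an assumed drive frequency.
SPEED_OF_SOUND_WATER = 1480.0  # m/s, approximate at room temperature

def node_spacing_mm(frequency_hz, speed_m_s=SPEED_OF_SOUND_WATER):
    """Spacing between adjacent pressure nodes: half a wavelength, in mm."""
    wavelength_m = speed_m_s / frequency_hz
    return 1000.0 * wavelength_m / 2.0

# At an assumed 1 MHz drive in water, trapping planes are under a millimeter apart.
print(round(node_spacing_mm(1.0e6), 2))  # → 0.74
```

Sub-millimeter node spacing is what makes it plausible to isolate a single layer of cell-sized objects in the fluid.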


This method has proven effective in the laboratory with neutrally buoyant beads that are roughly the same size and shape as human blood cells, and a study is currently underway to test its effectiveness with biological samples. If effective, it will give researchers new experimental methods for studying cellular processes, leading to a better understanding of how certain diseases develop in the human body.

2pSCb11 – Effect of Menstrual Cycle Hormone Variations on Dichotic Listening Results – Richard Morris

Effect of Menstrual Cycle Hormone Variations on Dichotic Listening Results


Richard Morris – Richard.morris@cci.fsu.edu

Alissa Smith


Florida State University

Tallahassee, Florida


Popular version of poster presentation 2pSCb11, “Effect of menstrual phase on dichotic listening”

Presented Tuesday afternoon, November 3, 2015, 3:30 PM, Grand Ballroom 8


How speech is processed by the brain has long been of interest to researchers and clinicians. One method to evaluate how the two sides of the brain work when hearing speech is called a dichotic listening task. In a dichotic listening task two words are presented simultaneously to a participant’s left and right ears via headphones. One word is presented to the left ear and a different one to the right ear. These words are spoken at the same pitch and loudness levels. The listener then indicates what word was heard. If the listener regularly reports hearing the words presented to one ear, then there is an ear advantage. Since most language processing occurs in the left hemisphere of the brain, most listeners attend more closely to the right ear. The regular selection of the word presented to the right ear is termed a right ear advantage (REA).
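An ear advantage like the REA is often summarized numerically. As a hedged illustration (the counts below are made up, and this is one common convention rather than the method of the study described here), a laterality index can be computed from correct right-ear and left-ear reports:

```python
# Illustrative laterality index: (R - L) / (R + L) over correct reports.
# Positive values indicate a right ear advantage (REA), negative a left one.
def laterality_index(right_correct, left_correct):
    """(R - L) / (R + L); +1 = fully right-ear, -1 = fully left-ear."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no scorable trials")
    return (right_correct - left_correct) / total

print(laterality_index(70, 50))  # positive, i.e. a right ear advantage
```

A listener who correctly reports 70 right-ear and 50 left-ear words scores about +0.17, a modest REA.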

Previous researchers reported different responses from males and females to dichotic presentation of words. Those investigators found that males more consistently heard the word presented to the right ear and demonstrated a stronger REA. The female listeners in those studies exhibited more variability as to the ear of the word that was heard. Further research seemed to indicate that women exhibit different lateralization of speech processing at different phases of their menstrual cycle. In addition, data from recent studies indicate that the degree to which women can focus on the input to one ear or the other varies with their menstrual cycle.

However, the previous studies used small numbers of participants. The purpose of the present study was to complete a dichotic listening study with a larger sample of female participants. In addition, the previous studies focused on women who did not take oral contraceptives, as women taking them were assumed to have smaller shifts in the lateralization of speech processing. Although this hypothesis is reasonable, it needed to be tested. For this study, it was hypothesized, based on the previous research reports, that the women would exhibit a greater REA during the days that they menstruate than during other days of their menstrual cycle. It was further hypothesized that women taking oral contraceptives would exhibit smaller fluctuations in the lateralization of their speech processing.

Participants in the study were 64 females, 19–25 years of age. Among them, 41 were taking oral contraceptives (OC) and 23 were not. The participants listened to the sound files during nine sessions that occurred once per week. All of the women were in good general health and had no speech, language, or hearing deficits.

The dichotic listening task was run using the Alvin software package for speech perception research. The sound file consisted of consonant-vowel syllables formed from the six plosive consonants /b/, /d/, /g/, /p/, /t/, and /k/ paired with the vowel “ah”. The listeners heard the syllables over stereo headphones, and each listener set the loudness of the syllables to a comfortable level.

At the beginning of each listening session, each participant wrote the date of the start of her most recent menstrual period on a participant sheet identified by her participant number. She then heard the recorded syllables and indicated the consonant heard by striking that key on the computer keyboard. Each listening session consisted of three presentations of the syllables, each with a different randomization. In the first presentation, the stimuli were presented in a non-forced condition, in which the listener indicated the plosive that she heard most clearly. After the first presentation, the experimental files were presented in forced-left and forced-right conditions, in which the participant was directed to focus on the signal in the left or the right ear. The sequence of left-ear and right-ear focus was counterbalanced over the sessions.
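The study used the Alvin software package; purely as a hypothetical illustration of the session structure described above (six plosive-plus-“ah” syllables, a different syllable to each ear, three differently randomized presentations), the stimulus lists could be built like this:

```python
# Hypothetical sketch of building dichotic stimulus lists (not Alvin's code).
# A dichotic pair presents a different CV syllable to each ear.
import itertools
import random

PLOSIVES = ["b", "d", "g", "p", "t", "k"]
SYLLABLES = [c + "ah" for c in PLOSIVES]

# All ordered (left ear, right ear) pairs of distinct syllables: 6 * 5 = 30.
DICHOTIC_PAIRS = list(itertools.permutations(SYLLABLES, 2))

def session_presentations(seed=0):
    """Three presentations, each a different randomization of the pairs."""
    rng = random.Random(seed)
    presentations = []
    for _ in range(3):
        order = DICHOTIC_PAIRS[:]
        rng.shuffle(order)
        presentations.append(order)
    return presentations

runs = session_presentations()
print(len(DICHOTIC_PAIRS), [len(r) for r in runs])  # → 30 [30, 30, 30]
```

Seeding the generator makes each session's randomization reproducible, which is useful when sessions must be compared across nine weeks.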

The statistical analyses of the listeners’ responses revealed that no significant differences occurred between the women using oral contraceptives and those who did not. In addition, correlations between the day of the women’s menstrual cycle and their responses were consistently low. However, some patterns did emerge for the women’s responses across the experimental sessions as opposed to the days of their menstrual cycle. The participants in both groups exhibited a higher REA and lower percentage of errors for the final sessions in comparison to earlier sessions.

The results from the current subjects differ from those previously reported. Possibly the larger sample size, the additional month of data collection, or the data recording method affected the results. The larger sample size may have better represented how most women respond to dichotic listening tasks. The additional month of data collection may have allowed the women to learn how to respond to the task and then respond more consistently; a short data collection period may confuse learning to respond to a novel task with a hormonally dependent response. Finally, previous studies had the experimenter record the subjects’ responses, which may have biased the data collection. Further studies with large samples and multiple months of data collection are needed to determine any effects of sex and oral contraceptive use on REA.

2aAA9 – Quietly Staying Fit in the Multifamily Building  –  Paulette Nehemias Harvey


Quietly Staying Fit in the Multifamily Building


Paulette Nehemias Harvey – pendeavors@gmail.com
Kody Snow – ksnow@phoenixnv.com
Scott Harvey – sharvey@phoenixnv.com


Phoenix Noise & Vibration
5216 Chairmans Court, Suite 107
Frederick, Maryland 21703


Popular version of paper 2aAA9, “Challenges facing fitness center designers in multifamily buildings”
Presented Tuesday morning, November 3, 2015, 11:00 AM, Grand Ballroom 3
Session 2aAA, Acoustics of Multifamily Dwellings
170th ASA Meeting, Jacksonville


[Figure: treadmill]


Transit-centered living relies on amenities close to home, mixing multifamily residential units with professional, retail, and commercial space on the same site. Residents use nearby trains to get to work and back, but rely on the immediate neighborhood, even the building lobby, for errands and everyday needs. Transit-centered living is appealing because it eliminates sitting in traffic, seems good for the environment, and adds a sense of security, aerobic health, and time-saving convenience. Include an on-site fitness center and residents don’t even have to put on street clothes to get to their gym!

Developers know that a state-of-the-art fitness center is icing on the multifamily-residence cake when it comes to attracting buyers. Gone is the little interior room with a couple of treadmills and a stationary bike. Today’s designs include panoramic views and enough ellipticals, free weights, weight and strength machines, and shower rooms to fill 2,500–4,000 square feet, not to mention large classes with high-energy music and an enthusiastic leader with a microphone. The increased focus on maintaining aerobic health, strength, and mobility is fantastic, but the noise and vibration it generates? Not so great. Sometimes cooperative scheduling keeps the peace, but residents often want access to their fitness center at all hours, so wise project leaders involve a noise control engineer early in the design process to develop a fitness center next to which everyone will want to live.

Remember the string and two empty cans? Stretch the string taut and conversation travels the length of the string, but pinch the string and the system fails. Because noise travels both through structures and through the air, the noise and vibration expert’s design goal is to interrupt that transmission. Airborne noise can be controlled with a combination of layered gypsum board, fiberglass batt insulation, concrete, and resilient framing members that absorb sound rather than transmit it through a wall or floor/ceiling system. Controlling structure-borne noise and vibration can involve much thicker rubber mats, isolated concrete slabs, and a design that incorporates the structural engineer’s input on stiffening the base building structure. And it is not simply noise that the design must restrict, but silent, annoying vibrations as well.
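For airborne sound, a rough rule of thumb (the simplified “mass law,” an idealization and no substitute for an engineer’s analysis of a real assembly) says transmission loss grows by about 6 dB each time a single partition’s surface mass or the frequency doubles. The surface-mass values below are illustrative assumptions, not a specific product spec.

```python
# Simplified normal-incidence "mass law" for a single solid partition:
# TL ≈ 20·log10(surface mass [kg/m²] × frequency [Hz]) − 47 dB.
import math

def mass_law_tl_db(surface_mass_kg_m2, frequency_hz):
    """Approximate transmission loss of a single panel, in dB."""
    return 20.0 * math.log10(surface_mass_kg_m2 * frequency_hz) - 47.0

# Doubling the surface mass buys roughly 6 dB:
single = mass_law_tl_db(10.0, 500.0)   # e.g. one illustrative panel layer
double = mass_law_tl_db(20.0, 500.0)   # the same layer doubled up
print(round(double - single, 1))  # → 6.0
```

This is why layered gypsum board and concrete appear in the airborne recipe: mass, not magic, does much of the work, while the resilient and isolated elements address the structure-borne paths the mass law says nothing about.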


[Figure: kettlebell exercise]

Reducing the floor-shaking impact of dropped barbells is the opposite of hearing a pin drop. Heavy steel plates are loaded on a barbell, lifted 6–8 feet off the ground, and then dropped. Repeatedly. Nobody wants to live under that, so designers think location, location, location. But big windows are pointless in the basement, so something else has to go under the fitness center: garage space, storage units, or mechanical rooms won’t mind the calisthenics above them. And sometimes the overall building structure, whether a high-rise with underground parking, a Texas wrap building (a U-shaped building with an elevated parking garage on the interior), or a podium-style building, can offer an ideal location for this healthy necessity.

It’s no acoustical trade secret that the best place to control noise is at the source, so consider what makes the noise. Manufacturers have met the demand for replacing the old standard graduated steel barbell plates with rubber- or urethane-coated steel weights. These weights make much less noise when they strike each other, but they are still capable of generating excessive structure-borne noise levels. This is a good example of controlling both the airborne (plates clanking together) and structure-borne (barbells impacting the floor) transmission paths. Speakers, sound systems, and the wall/floor/ceiling assemblies can work together to offer clarity and quality to listeners while limiting what the neighbors hear, but it takes expertise and attention.

Disregarding the recommendations of noise and vibration professionals can result in an annoying, on-site gym that brings stressful tension and ongoing conflict, nothing that promotes healthy well-being.

Foresight in design and attention to acoustical specs on building materials, under the direction of a noise and vibration engineer, assures a fitness center that is a pleasant, effective space for fitness and social opportunities, an asset to the transit centered neighborhood. Do everyone a favor and pay attention to good design and product specification early on; that’s sound advice.


4aEA10 – Preliminary evaluation of the sound absorption coefficient of a thin coconut coir fiber panel for automotive applications. – Key F. Lima


Preliminary evaluation of the sound absorption coefficient of a thin coconut coir fiber panel for automotive applications.

Key F. Lima – keyflima@gmail.com

Pontifical Catholic University of Paraná

Curitiba, Paraná, Brazil


Popular version of paper 4aEA10, “Preliminary evaluation of the sound absorption coefficient of a thin coconut fiber panel for automotive applications”

Presented Thursday morning, November 5, 2015, 11:15 AM, Orlando Room

170th ASA Meeting, Jacksonville, FL


Absorbent materials are fibrous or porous and must be good acoustic dissipaters. As sound propagates through an absorbent medium, multiple reflections and the friction of the air within it convert sound energy into thermal energy. Surface treatments with absorbent materials are widely used to reduce reverberation in enclosed spaces or to increase the sound transmission loss of acoustic panels. In addition, these materials can be applied in acoustic filters to increase their efficiency. Sound absorption depends on the excitation frequency and is more effective at high frequencies. Natural fibers such as coconut coir have great potential as sound absorbent materials. Because the fibers are an agricultural waste product, a panel manufactured from them is a natural, and therefore economical and interesting, option. This work compares the sound absorption coefficient of a thin coconut fiber panel with that of a composite panel made of fiberglass, expanded polyurethane foam, non-woven fabric, and polyester fabric, which is used in the automotive industry as a roof trim. The sound absorption coefficient was evaluated with the impedance tube technique.


In 1980, Chung and Blaser evaluated the normal-incidence sound absorption coefficient through the transfer function method. The standards ASTM E1050 and ISO 10534-2 are based on Chung and Blaser’s method (Figure 1). In summary, the method uses an impedance tube with a sound source at one end and, at the other, the absorbent material backed by a rigid wall. The stationary sound wave pattern is decomposed into forward- and backward-traveling components by measuring the sound pressure simultaneously at two spaced locations in the tube’s sidewall, where two microphones are mounted (Figure 1).


Figure 1. Impedance Tube.

The wave decomposition allows the determination of the complex reflection coefficient R(f), from which the complex acoustic impedance and the normal-incidence sound absorption coefficient (a) of an absorbent material can be determined. The coefficients R(f) and a are calculated from the transfer function H12 between the two microphones through:


R(f) = [H12 − e^(−i k0 s)] / [e^(i k0 s) − H12] · e^(2 i k0 x1),                (1)


where s is the distance between the microphones, x1 is the distance between the microphone farther from the sample and the sample, i is the imaginary unit, and k0 is the wave number of the air.

If R(f) is known, the coefficient a is easily obtained from the expression:


a = 1 − |R(f)|².                                                                (2)
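As a sketch of how equations (1) and (2) translate into code, the snippet below recovers R(f) and a from a synthetic two-microphone measurement. The tube geometry and frequency are hypothetical values chosen for the example, not the dimensions of the authors’ apparatus.

```python
# Two-microphone transfer function method, eqs. (1) and (2).
import numpy as np

def reflection_coefficient(h12, f, s, x1, c=343.0):
    """Complex R(f) from the measured transfer function H12, eq. (1)."""
    k0 = 2.0 * np.pi * f / c            # wave number of the air
    h_i = np.exp(-1j * k0 * s)          # incident-wave transfer function
    h_r = np.exp(+1j * k0 * s)          # reflected-wave transfer function
    return (h12 - h_i) / (h_r - h12) * np.exp(2j * k0 * x1)

def absorption_coefficient(r):
    """Normal-incidence absorption coefficient, eq. (2): a = 1 - |R|^2."""
    return 1.0 - np.abs(r) ** 2

# Round-trip check with a synthetic measurement: build H12 from a known R
# using the standing-wave field p(x) = exp(i*k0*x) + R*exp(-i*k0*x), where x
# is the distance from the sample, then recover R.
f, s, x1, c = 1000.0, 0.05, 0.10, 343.0
k0 = 2.0 * np.pi * f / c
r_true = 0.6 * np.exp(1j * 0.3)
p = lambda x: np.exp(1j * k0 * x) + r_true * np.exp(-1j * k0 * x)
h12 = p(x1 - s) / p(x1)                 # nearer mic over farther mic
r = reflection_coefficient(h12, f, s, x1, c)
print(np.allclose(r, r_true), round(absorption_coefficient(r), 2))  # → True 0.64
```

With |R| = 0.6, equation (2) gives a = 1 − 0.36 = 0.64, which the round trip reproduces.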


In this work, eight samples of the coconut fiber panel and eight samples of the composite panel (fiberglass, expanded polyurethane foam, non-woven fabric, and polyester fabric) used in the automotive industry were tested (Figures 2 and 3). The material properties are shown in Table 1.


Figure 2. Samples.


Figure 3. Composite panel structure.

Table 1. Material Properties.

(Only the “sample diameter” heading survived conversion; each row lists the sample number, the diameter in mm, and three further measured values. Decimal commas as in the original.)

Coconut fiber panel
1   28,25   5,17  0,67  649,5
2   28,20   5,04  0,62  618,8
3   28,20   4,93  0,60  612,6
4   28,35   5,09  0,69  674,7
5   100,43  4,98  8,89  708,0
6   100,43  4,84  9,73  797,7
7   100,73  5,34  9,64  712,1
8   100,45  4,79  9,13  755,2

Composite panel
1   28,05   5,78  0,41  360,6
2   28,08   5,66  0,42  376,6
3   28,15   5,59  0,42  379,6
4   28,23   5,54  0,44  398,8
5   99,55   5,86  5,40  371,9
6   99,55   6,20  5,54  360,9
7   99,68   6,06  5,57  370,4
8   99,55   5,99  5,62  378,9




A random noise signal with a frequency band between 200 Hz and 5000 Hz was used to evaluate a. Figure 4 shows the mean normal-incidence absorption coefficient obtained from the measurements.


Figure 4. Comparison of the normal-incidence absorption coefficient (a).


The results show that the composite panel has a higher sound absorption coefficient than the coconut fiber panel. To improve the coconut fiber panel’s acoustic efficiency, a filling material with the same effect as the composite panel’s polyurethane foam would need to be added.



Chung, J. Y. and Blaser, D. A. (1980). “Transfer function method of measuring in-duct acoustic properties – I. Theory,” J. Acoust. Soc. Am. 68, 907–913.

Chung, J. Y. and Blaser, D. A. (1980). “Transfer function method of measuring in-duct acoustic properties – II. Experiment,” J. Acoust. Soc. Am. 68, 913–921.

ASTM E1050-12. “Standard test method for impedance and absorption of acoustical materials using a tube, two microphones and a digital frequency analysis system,” American Society for Testing and Materials, Philadelphia, PA, 2012.

ISO 10534-2:1998. “Determination of sound absorption coefficient and impedance in impedance tubes – Part 2: Transfer-function method,” International Organization for Standardization, Geneva, 1998.