2aABa3 – Indris’ melodies are individually distinctive and genetically driven – Marco Gamba

Indris’ melodies are individually distinctive and genetically driven

Marco Gamba – marco.gamba@unito.it
Cristina Giacoma – cristina.giacoma@unito.it

University of Torino
Department of Life Sciences and Systems Biology
Via Accademia Albertina 13
10123 Torino, Italy

Popular version of paper 2aABa3 “Melody in my head, melody in my genes? Acoustic similarity, individuality and genetic relatedness in the indris of Eastern Madagascar”
Presented Tuesday morning, November 29, 2016
172nd ASA Meeting, Honolulu


Melody in my head, melody in my genes?
Acoustic similarity, individuality and genetic relatedness in the indris of Eastern Madagascar


Humans are exceptionally good at identifying the voices of friends and relatives [1]. The potential for this identification lies in the acoustic structure of our words, which conveys not only verbal information (the meaning of the words) but also non-verbal cues (such as the sex and identity of the speaker).

In animal communication, recognizing a member of the same species can also be important. Birds and mammals may adjust signals that function in neighbor recognition, and discriminating between a known neighbor and a stranger can result in strikingly different responses in terms of territorial defense [2].

Indris (Indri indri) are the only lemurs that produce group songs and among the few primate species that communicate using articulated singing displays. The most distinctive portions of the indris’ song are called descending phrases, consisting of between two and five units or notes. We recorded 21 groups of indris in the Eastern rainforests of Madagascar from 2005 to 2015. In each recording, we identified individuals using natural markings. We noticed that group encounters were rare, and hypothesized that song might play a role in providing members of the same species with information about the sex and identity of an individual singer and the emitting group.




We found we could effectively discriminate between the descending phrases of individual indris, showing that these phrases have the potential to advertise sex and individual identity. This strengthened the hypothesis that song may play a role in processes like kinship and mate recognition. Finding that there was also a degree of group specificity in the song supports the idea that neighbor-stranger recognition matters to the indris, and that the song may function in announcing territorial occupation and spacing.
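The discrimination itself rests on a statistical classifier over acoustic features of the phrases. As a rough illustration of the idea (not the authors’ actual features or pipeline), the sketch below trains a cross-validated linear discriminant classifier on invented per-phrase features for five hypothetical individuals; accuracy well above chance is what “individually distinctive” means operationally:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical acoustic features (e.g., unit durations and pitches)
# for descending phrases from 5 individuals, 30 phrases each.
n_indris, n_phrases, n_features = 5, 30, 6
centers = rng.normal(0, 2, size=(n_indris, n_features))   # each individual's "voice"
X = np.vstack([c + rng.normal(0, 1, size=(n_phrases, n_features)) for c in centers])
y = np.repeat(np.arange(n_indris), n_phrases)

# Cross-validated classification: accuracy well above chance (0.2 here)
# indicates individually distinctive phrases.
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1 / n_indris:.2f})")
```

With well-separated synthetic “voices” the accuracy is high; on real recordings the interesting question is how far above chance it remains.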




Traditionally, primate songs are considered an example of a genetically determined display. Thus, the next step in our research was to examine whether the structure of the phrases could be related to the genetic relatedness of the indris. We found a significant correlation between the genetic relatedness of the studied individuals and the acoustic similarity of their song phrases, suggesting that genetic relatedness plays a role in determining song similarity.
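Relating a matrix of pairwise genetic relatedness to a matrix of pairwise acoustic similarity is typically done with a Mantel-style permutation test, since the cells of such matrices are not independent observations. A minimal sketch on toy matrices (illustrative only; not the authors’ data or software):

```python
import numpy as np

def mantel(a, b, n_perm=999, rng=np.random.default_rng(1)):
    """Permutation (Mantel) test correlating two square similarity matrices:
    correlate upper-triangle entries, then re-correlate under random
    relabelings of individuals to get a permutation p-value."""
    n = a.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r = np.corrcoef(a[np.ix_(p, p)][iu], b[iu])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy matrices for 8 animals: acoustic similarity = relatedness + noise.
rng = np.random.default_rng(2)
n = 8
relatedness = rng.uniform(0, 0.5, (n, n))
relatedness = (relatedness + relatedness.T) / 2
noise = rng.normal(0, 0.05, (n, n))
acoustic = relatedness + (noise + noise.T) / 2
np.fill_diagonal(relatedness, 1)
np.fill_diagonal(acoustic, 1)

r, p = mantel(relatedness, acoustic)
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```

A significant positive r, as in the toy example, is the pattern the study reports for real relatedness and phrase similarity.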

For the first time, we found evidence that the similarity of a primate vocal display varies within a population in a way that is strongly associated with kinship. When examining differences between the sexes, we found that male offspring produced phrases more similar to their fathers’, while daughters did not show similarity to either parent.




The potential for kin detection may play a vital role in determining relationships within a population, regulating dispersal, and avoiding inbreeding. Singing displays may advertise kinship, warning against mating between relatives; females, and to a lesser degree males, could use this information when forming a new group. Unfortunately, we still do not know whether indris can perceptually decode this information or how they use it in their everyday life. But work like this sets the basis for understanding primates’ mating and social systems and lays the foundation for better conservation methods.


  1. Belin, P. Voice processing in human and non-human primates. Philosophical Transactions of the Royal Society B: Biological Sciences, 2006. 361: p. 2091-2107.
  2. Randall, J. A. Discrimination of foot drumming signatures by kangaroo rats, Dipodomys spectabilis. Animal Behaviour, 1994. 47: p. 45-54.
  3. Gamba, M., Torti, V., Estienne, V., Randrianarison, R. M., Valente, D., Rovara, P., Giacoma, C. The Indris Have Got Rhythm! Timing and Pitch Variation of a Primate Song Examined between Sexes and Age Classes. Frontiers in Neuroscience, 2016. 10: p. 249.
  4. Torti, V., Gamba, M., Rabemananjara, Z. H., Giacoma, C. The songs of the indris (Mammalia: Primates: Indridae): contextual variation in the long-distance calls of a lemur. Italian Journal of Zoology, 2013. 80, 4.
  5. Barelli, C., Mundry, R., Heistermann, M., Hammerschmidt, K. Cues to androgen and quality in male gibbon songs. PLoS ONE, 2013. 8: e82748.


Figure legends.


Figure 1. A female indri with offspring in the Maromizaha Forest, Madagascar. Maromizaha is a New Protected Area located in the Region Alaotra-Mangoro, east of Madagascar. It is managed by GERP (Primate Studies and Research Group). At least 13 species of lemurs have been observed in the area.

Figure 2. Spectrograms of an indri song showing a typical sequence of different units. In the enlarged area, the pitch contour in red shows a typical “descending phrase” of 4 units. The indris also emit phrases of 2, 3 and more rarely 5 or 6 units.

Figure 3. A 3d-plot of the dimensions (DF1, DF2, DF3) generated from a Discriminant model that successfully assigned descending phrases of four units (DP4) to the emitter. Colours denote individuals. The descending phrases of two (DP2) and three units (DP3) also showed a percentage of correct classification rate significantly above chance.



1aPP44 – What’s That Noise?  The Effect of Hearing Loss and Tinnitus on Soldiers Using Military Headsets – Candice Manning, AuD, PhD

What’s That Noise?  The Effect of Hearing Loss and Tinnitus on Soldiers Using Military Headsets

Candice Manning, AuD, PhD – Candice.Manning@va.gov

Timothy Mermagen, BS – timothy.j.mermagen.civ@mail.mil

Angelique Scharine, PhD – angelique.s.scharine.civ@mail.mil

Human and Intelligent Agent Integration Branch (HIAI)
Human Research and Engineering Directorate
U.S. Army Research Laboratory
Building 520
Aberdeen Proving Ground, MD

Lay language paper 1aPP44, “Speech recognition performance of listeners with normal hearing, sensorineural hearing loss, and sensorineural hearing loss and bothersome tinnitus when using air and bone conduction communication headsets”

Presented Monday Morning, May 23, 2016, 8:00 – 12:00, Salon E/F

171st ASA Meeting, Salt Lake City

Military personnel are at high risk for noise-induced hearing loss due to the unprecedented proportion of blast-related acoustic trauma experienced during deployment from high-level impulsive and continuous noise (i.e., transportation vehicles, weaponry, blast exposure).  In fact, noise-induced hearing loss is the primary injury of United States Soldiers returning from Afghanistan and Iraq.  Ear injuries, including tympanic membrane perforation, hearing loss, and tinnitus, greatly affect a Soldier’s hearing acuity and, as a result, reduce situational awareness and readiness.  Hearing protection devices are available to military personnel; however, many troops forgo their use, believing it may decrease situational awareness during combat.

Noise-induced hearing loss is highly associated with tinnitus, the experience of perceiving sound that is not produced by a source outside the body.  Chronic tinnitus causes functional impairment that may lead a sufferer to seek help from an audiologist or other healthcare professional.  Because there is no cure for this condition, intervention and management are the only options for those suffering from chronic tinnitus.  Tinnitus affects every aspect of an individual’s life, including sleep, daily tasks, relaxation, and conversation, to name only a few.  In 2011, the United States Government Accountability Office report on noise indicated that tinnitus was the most prevalent service-connected disability.  The combination of noise-induced hearing loss and the perception of tinnitus could greatly impact a Soldier’s ability to rapidly and accurately process speech information under high-stress situations.

The prevalence of hearing loss and tinnitus within the military population suggests that Soldier use of hearing protection is extremely important. The addition of hearing protection into reliable communication devices will increase the probability of use among Soldiers.  Military communication devices using air and bone-conduction provide clear two-way audio communications through a headset and a microphone.

Air conduction headsets offer passive hearing protection from high ambient noise, and talk-through microphones allow the user to engage in face-to-face conversation and hear ambient environmental sounds, preserving situation awareness.  Bone-conduction technology presents auditory information through the bones of the skull rather than through the air-conduction pathway (see Figure 1).  Because headsets with bone-conduction transducers do not cover the ears, they let the user hear the surrounding environment while retaining the option to communicate over a radio network.  Worn with or without hearing protection, bone conduction devices are inconspicuous and fit easily under the helmet.  Bone conduction communication devices have been used in the past; however, newer designs have not been widely adopted for military applications.








Figure 1. Air and Bone conduction headsets used during study: a) Invisio X5 dual in-ear headset and X50 control unit and b) Aftershockz Sports 2 headset.


Since many military personnel operate in high noise environments and with some degree of noise induced hearing damage and/or tinnitus, it is important to understand how speech recognition performance might be altered as a function of headset use.  This is an important subject to evaluate as there are two auditory pathways (i.e., air-conduction pathway and bone-conduction pathway) that are responsible for hearing perception.  Comparing the differences between the air and bone-conduction devices on different hearing populations will help to describe the overall effects of not only hearing loss, an extremely common disability within the military population, but the effect of tinnitus on situational awareness as well.  Additionally, if there are differences between the two types of headsets, this information will help to guide future communication device selection for each type of population (NH vs. SNHL vs. SNHL/Tinnitus).

Based on findings from the speech-understanding-in-noise literature, communication devices do have a negative effect on speech intelligibility within the military population when noise is present.  However, it is uncertain how hearing loss and/or tinnitus affect speech intelligibility and situational awareness in high-level noise environments.  This study measured speech recognition of words presented over air-conduction (AC) and bone-conduction (BC) headsets in three groups of listeners: normal hearing, sensorineural hearing loss, and sensorineural hearing loss with bothersome tinnitus.  Three speech-to-noise ratios (SNR = 0, −6, −12 dB) were created by embedding the speech items in pink noise.  Overall, performance was marginally, but significantly, better for the Aftershockz bone conduction headset (Figure 2).  As would be expected, performance increased as the speech-to-noise ratio increased (Figure 3).
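Embedding speech in pink noise at a fixed speech-to-noise ratio is straightforward to sketch. The snippet below is illustrative only: a sine tone stands in for a recorded word, the pink-noise generator is a simple spectral-shaping approximation, and none of the study’s actual stimuli or levels are reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_noise(n, rng):
    """Approximate pink (1/f) noise by shaping white noise in the frequency domain."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                       # avoid division by zero at DC
    pink = np.fft.irfft(spec / np.sqrt(f), n)
    return pink / np.std(pink)

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Stand-in 'speech': a 1-second, 16-kHz tone.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)

for snr in (0, -6, -12):
    mixed = mix_at_snr(speech, pink_noise(fs, rng), snr)
    # Verify the achieved SNR against the target.
    achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((mixed - speech) ** 2))
    print(f"target {snr:+d} dB -> achieved {achieved:+.1f} dB")
```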

One of the most fascinating things about the data is that although the effect of hearing profile was statistically significant, it was not practically so: the means for the Normal Hearing, Hearing Loss, and Tinnitus groups were 65, 61, and 63, respectively (Figure 4).  Nor was there any interaction with any of the other variables under test.  One might conclude from the data that if the listener can control the presentation level, the speech-to-noise ratio has about the same effect regardless of hearing loss.  There was no difference in performance between the TCAPS due to one’s hearing profile; however, the Aftershockz headset provided better speech intelligibility for all listeners.
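The scores in Figures 2–4 are reported in rationalized arcsine units (RAU). Studebaker’s rationalized arcsine transform, in its commonly stated form, maps percent-correct scores onto a roughly interval scale so that scores near 0% or 100% are not compressed relative to mid-range scores; the word counts below are invented for illustration:

```python
import numpy as np

def rau(correct, total):
    """Rationalized arcsine transform (Studebaker, 1985), as commonly stated:
    an arcsine-based stabilizing transform rescaled so mid-range scores
    land near familiar percent-like values."""
    theta = (np.arcsin(np.sqrt(correct / (total + 1)))
             + np.arcsin(np.sqrt((correct + 1) / (total + 1))))
    return 146.46 * theta / np.pi - 23

# Example: 25, 35, and 45 words correct out of 50.
for x in (25, 35, 45):
    print(f"{x}/50 correct -> {rau(x, 50):.1f} RAU")
```

A 50% score maps to roughly 50 RAU, while the usable scale runs from about −23 to +123, which is why group means like 61 to 65 RAU can be compared on equal footing.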


Figure 2.  Mean rationalized arcsine units measured for each of the TCAPS under test.


Figure 3. Mean rationalized arcsine units measured as a function of speech to noise ratio.



Figure 4.  Mean rationalized arcsine units observed as a function of the hearing profile of the listener.

3aBA1 – Ultrasound-Mediated Drug Targeting to Tumors: Revision of Paradigms Through Intravital Imaging – Natalya Rapoport

Ultrasound-Mediated Drug Targeting to Tumors: Revision of Paradigms Through Intravital Imaging


Natalya Rapoport – natasha.rapoport@utah.edu

Department of Bioengineering
University of Utah
36 S. Wasatch Dr., Room 3100
Salt Lake City, Utah 84112

Popular version of paper 3aBA1, “Ultrasound-mediated drug targeting to tumors: Revision of paradigms through intravital imaging”

Presented Wednesday morning, May 25, 2016, 8:15 AM in Salon H

171st ASA Meeting, Salt Lake City


More than a century ago, Nobel Prize laureate Paul Ehrlich formulated the idea of a “magic bullet” – a drug that hits its target while bypassing healthy tissues. No field of medicine could benefit more from a “magic bullet” than cancer chemotherapy, which is complicated by severe side effects. For decades, the prospect of developing “magic bullets” remained elusive. During the last decade, progress in nanomedicine has enabled tumor-targeted delivery of anticancer drugs via their encapsulation in tiny carriers called nanoparticles. Nanoparticle tumor targeting exploits the “Achilles’ heel” of cancerous tumors – their poorly organized and leaky microvasculature. Due to their size, nanoparticles cannot penetrate the tight vasculature of healthy tissue; in contrast, they do penetrate the leaky tumor microvasculature, providing for localized accumulation in tumor tissue.  After the drug-loaded nanoparticles accumulate in the tumor, the drug should be released from its carrier to allow penetration into its site of action (usually the cell cytoplasm or nucleus). A local release of an encapsulated drug may be triggered by tumor-directed ultrasound; ultrasound has additional benefits, enhancing nanoparticle penetration through blood vessel walls (extravasation) as well as drug uptake (internalization) by tumor cells.

For decades, ultrasound was used only as an imaging modality; the development of microbubbles as ultrasound contrast agents in the early 2000s revolutionized imaging. Recently, microbubbles have attracted attention as drug carriers and enhancers of drug and gene delivery. Microbubbles would be ideal carriers for the ultrasound-mediated delivery of anticancer drugs; unfortunately, their micron-scale size does not allow effective extravasation from the tumor microvasculature into tumor tissue. In Dr. Rapoport’s lab, this problem was solved by developing nanoscale microbubble precursors, namely drug-loaded nanodroplets that convert into microbubbles under the action of ultrasound [1-6]. Nanodroplets comprise a liquid core formed by a perfluorocarbon compound and a two-layered, drug-containing polymeric shell (Figure 1. Schematic representation of a drug-loaded nanodroplet). An aqueous dispersion of nanodroplets is called a nanoemulsion.

A suggested mechanism of therapeutic action of drug-loaded perfluorocarbon nanoemulsions is discussed below [3, 5, 6]. The nanoscale size of the droplets (ca. 250 nm) provides for their extravasation into tumor tissue while bypassing normal tissues, which is the basis of tumor targeting. Upon nanodroplet accumulation in the tumor, tumor-directed ultrasound triggers nanodroplet conversion into microbubbles, which in turn triggers release of the nanodroplet-encapsulated drug.  This is because in the process of the droplet-to-bubble conversion, particle volume increases about a hundred-fold, with a related decrease in shell thickness. Microbubbles oscillate in the ultrasound field, “ripping” the drug off the thin microbubble shell (Figure 2. Schematic representation of the mechanism of drug release from perfluorocarbon nanodroplets triggered by ultrasound-induced droplet-to-bubble conversion; PFC – perfluorocarbon). In addition, oscillating microbubbles enhance internalization of the released drug by tumor cells.
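The geometry behind the thinning shell can be checked on the back of an envelope: a hundred-fold volume increase implies roughly a 4.6-fold radius increase, and if the shell material volume is conserved, the shell thins by the same factor by which the surface area grows. A small calculation under those idealized assumptions (ideal sphere, thin shell):

```python
# Back-of-envelope check of the droplet-to-bubble conversion described above,
# assuming an ideal sphere and a thin shell of fixed material volume.
r_droplet = 125e-9            # 250 nm droplet -> radius 125 nm
volume_gain = 100             # ~100-fold volume increase on vaporization

r_bubble = r_droplet * volume_gain ** (1 / 3)      # radius scales as V^(1/3)
area_gain = (r_bubble / r_droplet) ** 2            # surface area scales as r^2

# Thin-shell approximation: shell material volume ~ 4*pi*r^2*h is conserved,
# so shell thickness h drops by the same factor the surface area grows.
thickness_drop = area_gain

print(f"radius grows ~{r_bubble / r_droplet:.1f}x (to {r_bubble * 1e9:.0f} nm)")
print(f"shell thins  ~{thickness_drop:.0f}-fold")
```

A shell more than twenty times thinner is much easier for the oscillating bubble to shed, which is the intuition behind the release mechanism.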

This tumor treatment modality has been tested in mice bearing breast, ovarian, or pancreatic tumors and has proved very effective. Dramatic tumor regression, and sometimes complete resolution, was observed when optimal nanodroplet composition and ultrasound parameters were applied.

(Figure 3. A – Photographs of a mouse bearing a subcutaneously grown breast cancer tumor xenograft treated by four systemic injections of the nanodroplet-encapsulated anticancer drug paclitaxel (PTX) at a dose of 40 mg/kg as PTX. B – Photographs of a mouse bearing two ovarian carcinoma tumors (a) – immediately before and (b) – three weeks after the end of treatment; mouse was treated by four systemic injections of the nanodroplet-encapsulated PTX at a dose of 20 mg/kg as PTX; only the right tumor was sonicated. C – Photographs (a, c) and fluorescence images (b, d) of a mouse bearing fluorescent pancreatic tumor taken before (a, b) and three weeks after the one-time treatment with PTX-loaded nanodroplets at a dose of 40 mg/kg as PTX (c,d). The tumor was completely resolved and never recurred) [3, 4, 6].

In the current presentation, the proposed mechanism of therapeutic action of drug-loaded, ultrasound-activated perfluorocarbon nanoemulsions was tested using intravital laser fluorescence microscopy, performed in collaboration with Dr. Brian O’Neill (then with Houston Methodist Research Institute, Houston, Texas) [2]. Fluorescently labeled nanocarrier particles (or a fluorescently labeled drug) were systemically injected through the tail vein into anesthetized live mice bearing subcutaneously grown pancreatic tumors. Nanocarrier and drug arrival and extravasation in the region of interest (i.e., normal or tumor tissue) were quantitatively monitored. Drug nanocarriers in the following size hierarchy were tested: individual polymeric molecules; tiny micelles formed by self-assembly of these molecules; and nanodroplets formed from micelles. The results confirmed the mechanism discussed above.

  • As expected, dramatic differences in the extravasation rates of nanoparticles were observed.
  • The extravasation of individual polymer molecules was extremely fast even in normal (thigh muscle) tissue; in contrast, the extravasation of nanodroplets into the normal tissue was very slow (Figure 4. A – Bright-field image of the adipose and thigh muscle tissue. B, C – extravasation of individual molecules (B – 0 min; C – 10 min after injection); the vasculature lost fluorescence while tissue fluorescence increased. D, E – extravasation of nanodroplets; blood vessel fluorescence was retained for an hour of observation (D – 30 min; E – 60 min after injection)).
  • Nanodroplet extravasation into the tumor tissue was substantially faster than that into the normal tissue thus providing for effective nanodroplet tumor targeting.
  • Tumor-directed ultrasound significantly enhanced extravasation and tumor accumulation of both micelles and nanodroplets (Figure 5. Effect of ultrasound on extravasation: fluorescence of the blood vessels dropped while that of the tumor tissue increased after ultrasound). Note also the very irregular tumor microvasculature, as compared with that of the normal tissue shown in Figure 4.
  • The ultrasound effect on nanodroplets was 3-fold stronger than that on micelles, making nanodroplets better drug carriers for ultrasound-mediated drug delivery.
  • On the negative side, some premature drug release into the circulation, preceding tumor accumulation, was observed. This suggests directions for further improvement of nanoemulsion formulations.

2aBAa7 – Ultrasonic “Soft Touch” for Breast Cancer Diagnosis – Mahdi Bayat

Ultrasonic “Soft Touch” for Breast Cancer Diagnosis


Mahdi Bayat – bayat.mahdi@mayo.edu

Alireza Nabavizadeh – nabavizadehrafsanjani.alireza@mayo.edu

Viksit Kumar – kumar.viksit@mayo.edu

Adriana Gregory – gregory.adriana@mayo.edu

Azra Alizad – alizad.azra@mayo.edu

Mostafa Fatemi – fatemi.mostafa@mayo.edu


Mayo Clinic College of Medicine
200 First St SW
Rochester, MN 55905


Michael Insana- mfi@illinois.edu

University of Illinois at Urbana-Champaign
Department of Bioengineering
1270 DCL, MC-278
1304 Springfield Avenue
Urbana, IL 61801


Popular version of paper 2aBAa7, “Differentiation of breast lesions based on viscoelasticity response at sub-Hertz frequencies”

Presented Tuesday Morning, May 24, 2016, 9:30 AM, Snowbird/Brighton room

171st ASA Meeting, Salt Lake City



Breast cancer remains the leading cause of death among American women under the age of 60. Although modern imaging technologies, such as enhanced mammography (tomosynthesis), MRI, and ultrasound, can visualize a suspicious mass in the breast, it often remains unclear whether the detected mass is cancerous or non-cancerous until a biopsy is performed.

Despite high sensitivity for detecting lesions, no imaging modality alone has yet been able to determine the type of all abnormalities with high confidence. For this reason, most patients with suspicious masses, even those with a very small likelihood of cancer, opt to undergo a costly and painful biopsy.

It has long been believed that cancerous tumors grow in the form of stiff masses that, if superficial enough, can be identified by palpation. The feeling of hardness under palpation is directly related to the tissue’s tendency to deform upon compression.  Elastography, which has emerged as a branch of ultrasound imaging, aims at capturing tissue stiffness by relating the amount of tissue deformation under compression to its stiffness. While this technique has shown promising results in identifying some types of breast lesions, the diversity of breast cancer types leaves doubt as to whether stiffness alone is the best discriminator for diagnostic purposes.

Studies have shown that tissues subjected to a sudden external force do not deform instantly; rather, they deform gradually over a period of time. The tissue deformation rate reveals another important aspect of its mechanical behavior, known as viscoelasticity. This is the material feature that, for example, makes a piece of memory foam feel different from a block of rubber to the touch. The same feature can be used to explore the mechanical properties of different types of tissue. Studies have shown that the biological pathways leading to different breast masses are quite different: in benign lesions, an increase in a protein-based component can raise viscosity, producing a slower deformation rate than normal tissue, while the opposite trend occurs in malignant tumors.
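A standard toy model of this behavior is the Kelvin–Voigt element (a spring and dashpot in parallel), in which stiffness sets the final deformation under a step force and viscosity sets how fast that deformation is approached. This is only an illustration of the concept, with arbitrary parameter values, not the model used in the study:

```python
import numpy as np

def creep(t, force=1.0, stiffness=2.0, viscosity=1.0):
    """Kelvin-Voigt creep under a step force: x(t) = (F/k) * (1 - exp(-t/tau)),
    with time constant tau = viscosity / stiffness."""
    tau = viscosity / stiffness
    return (force / stiffness) * (1 - np.exp(-t / tau))

t = np.linspace(0, 5, 501)
slow = creep(t, viscosity=4.0)   # more viscous tissue: slower deformation
fast = creep(t, viscosity=0.5)   # less viscous tissue: faster deformation

# Both curves reach the same final deformation F/k = 0.5, at different rates,
# which is why the *rate* carries information stiffness alone does not.
print(f"at t=1: fast={fast[100]:.3f}, slow={slow[100]:.3f}")
```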

In this study, we report on using an ultrasound technique that enables capturing the deformation rate in breast tissue. We studied 43 breast masses in 42 patients and observed that a factor based on the deformation rate was significantly different in benign and malignant lesions (Fig. 1).

The results of this study promise a new imaging biomarker for diagnosis of breast masses. If this technique proves highly accurate in a large pool of patients, it can be integrated into breast examination procedures to improve the accuracy of diagnosis, reduce unnecessary biopsies, and help detect cancerous tumors early on.



Figure 1 – Distribution of relative deformation rates for malignant and benign breast lesions. Significantly different relative deformation rates can be observed in the two groups, allowing differentiation of such lesions.


2aSC7 – Effects of aging on speech breathing – Simone Graetzer, PhD, Eric J. Hunter, PhD


Simone Graetzer, PhD. – sgraetz@msu.edu

Eric J. Hunter, PhD. – ejhunter@msu.edu


Voice Biomechanics and Acoustics Laboratory
Department of Communicative Sciences and Disorders
College of Communication Arts & Sciences
Michigan State University
1026 Red Cedar Road
East Lansing, MI 48824


Popular version of paper 2aSC7, entitled: “A longitudinal study of the effects of aging on speech breathing: Evidence of decreased expiratory volume in speech recordings”

Presented Tuesday morning, May 24, 2016, 8:00 – 11:30 AM, Salon F

171st ASA Meeting, Salt Lake City



Older adults are the fastest-growing segment of the population. Some voice, speech, and breathing disorders occur more frequently as individuals age. For example, lung capacity diminishes in older age due to loss of lung elasticity, which places an upper limit on utterance duration. Further, decreased lung and diaphragm elasticity and muscle strength can occur, and the rib cage can stiffen, leading to reductions in lung pressure and in the volume of air that can be expelled by the lungs (‘expiratory volume’). In the laryngeal system, tissues can break down and cartilages can harden, causing more voice breaks, increased hoarseness or harshness, reduced loudness, and pitch changes.

Our study attempted to identify the normal speech and respiratory changes that accompany aging in healthy individuals. Specifically, we examined how long individuals could speak on a single breath, using a series of speeches from six individuals (three females and three males) recorded over spans of 18 to 49 years. All speakers had been previously recorded in similar environments giving long monologue speeches. All but one speaker gave their addresses at a podium using a microphone, and most recordings were longer than 30 minutes. The speakers’ ages ranged from 43 years (average 51 at the first recording) to 98 years (average 84 at the last). Samples of five minutes in length were extracted from each recording. Subsequently, for each subject, three raters identified the durations of exhalations during speech in these samples.
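The raters’ segmentation task can be approximated algorithmically: frame the signal, threshold frame energy to find speech, and merge speech runs separated by short pauses into breath groups. A simplified sketch on synthetic audio; the function name, thresholds, and the noise-burst “speech” are all invented for illustration (the study used human raters):

```python
import numpy as np

rng = np.random.default_rng(0)

def breath_group_durations(signal, fs, frame_ms=25, threshold=0.01, min_pause_s=0.3):
    """Mark frames as 'speech' when their RMS energy exceeds a threshold,
    then merge speech runs separated by pauses shorter than min_pause_s
    into breath groups and return the group durations in seconds."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    rms = np.sqrt(np.mean(signal[:n * frame].reshape(n, frame) ** 2, axis=1))
    active = rms > threshold

    durations, start, last_active = [], None, None
    for i, a in enumerate(active):
        t = i * frame / fs
        if a:
            if start is None:
                start = t
            last_active = t + frame / fs
        elif start is not None and t - last_active >= min_pause_s:
            durations.append(last_active - start)   # pause long enough: close group
            start = None
    if start is not None:
        durations.append(last_active - start)
    return durations

# Synthetic 'speech': two 2-second bursts separated by a 1-second inhalation pause.
fs = 16000
burst = 0.3 * rng.standard_normal(2 * fs)
pause = np.zeros(fs)
signal = np.concatenate([burst, pause, burst])

print([round(d, 2) for d in breath_group_durations(signal, fs)])
```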

Two figures illustrate how the breath groups changed with age for one of the women (Figure 1) and one of the men (Figure 2). We found a change in speech breathing that might be caused by a less flexible rib cage and the loss of vital capacity and expiratory volume. In males especially, it may also have been caused by poor closure of the vocal folds, resulting in more air leakage during speech. Specifically, we found a decreased breath group duration for all male subjects after 70 years, with durations averaging between 1 and 3.5 seconds; importantly, the point of change appeared to occur between 60 and 65 years of age. For females, this change occurred later, between 60 and 70 years, with durations averaging between 1.5 and 3.5 seconds.



Figure 1 For one of the women talkers, the speech breath groups were measured and plotted to correspond with age. The length of the speech breath groups begins to decrease at about 68 years of age.




Figure 2 For one of the men talkers, the speech breath groups were measured and plotted to correspond with age. The length of the speech breath groups begins to decrease at about 66 years of age.



The study results indicate decreases in speech breath group duration for most individuals as their age increased (especially from 65 years onwards), consistent with the age-related decline in expiratory volume reported in other studies. Typically, the speech breath group duration of the six subjects decreased from ages 65 to 70 years onwards. There was some variation between individuals in the point at which the durations started to decrease. The decreases indicate that, as they aged, speakers could not sustain the same number of words in a breath group and needed to inhale more frequently while speaking.

Future studies involving more participants may further our understanding of normal age-related changes vs. pathology, but such a corpus of recordings must first be constrained on the basis of communicative intent, venues, knowledge of vocal coaching, and related information.



Hunter, E. J., Tanner, K., & Smith, M. E. (2011), Gender differences affecting vocal health of women in vocally demanding careers. Logopedics Phoniatrics Vocology, 36(3), 128-136.


Janssens, J.P. , Pache, J.C. and Nicod, L.P. (1999), Physiological changes in respiratory function associated with ageing. European Respiratory Journal, 13, 197–205.


We acknowledge the efforts of Amy Kemp, Lauren Glowski, Rebecca Wallington, Allison Woodberg, Andrew Lee, Saisha Johnson, and Carly Miller. Research was in part supported by the National Institute On Deafness And Other Communication Disorders of the National Institutes of Health under Award Number R01DC012315. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.