1aNS4 – Musical mind control: Human speech takes on characteristics of background music – Ryan Podlubny

Musical mind control: Human speech takes on characteristics of background music

Ryan Podlubny – ryan.podlubny@pg.canterbury.ac.nz

Department of Linguistics, University of Canterbury
20 Kirkwood Avenue, Upper Riccarton
Christchurch, NZ, 8041

Popular version of paper 1aNS4, “Musical mind control: Acoustic convergence to background music in speech production.”
Presented Monday morning, November 28, 2016
172nd ASA Meeting, Honolulu

 

People often adjust their speech to resemble that of their conversation partners – a phenomenon known as speech convergence. Broadly defined, convergence describes automatic synchronization to some external source, much like running to the beat of music playing at the gym without intentionally choosing to do so. Across a variety of studies, a general trend has emerged in which people automatically synchronize to various aspects of their environment1,2,3. With specific regard to language use, convergence effects have been observed in many linguistic domains, such as sentence formation4, word formation5, and vowel production6 (where differences in vowel production are well associated with perceived accentedness7,8). This prevalence raises many interesting questions about the extent to which speakers converge. This research uses a speech-in-noise paradigm to explore whether speakers also converge to non-linguistic signals in the environment. Specifically, will a speaker's rhythm, pitch, or intensity (which is closely related to loudness) be influenced by fluctuations in background music, such that the speech echoes specific characteristics of that music? For example, if the tempo of background music slows down, will those listening unconsciously decrease their speech rate?

In this experiment participants read passages aloud while hearing music through headphones. The background music was composed by the experimenter to be relatively stable in pitch, tempo/rhythm, and intensity, so that only one of these dimensions could be manipulated and tested at a time within each test condition. Manipulations were imposed gradually and consistently toward a target, as shown in Figure 1, and then returned in the same gradual fashion to the level at which they started. Between all manipulated sessions, participants heard music with no experimental changes. (Examples of what participants heard are available as Sound-files 1 and 2.)

podlubny_fig1

Fig. 1
Using software designed for digital signal processing (analyzing and altering sound), manipulations were applied in a linear fashion (in a straight line) toward a target – this can be seen above as the blue line, which first rises and then falls. NOTE: After manipulations reach their target (the target is seen above as a dashed, vertical red line), the degree of manipulation would then return to the level at which it started in a similar linear fashion. Graphic captured while using Praat 9 to increase and then decrease the perceived loudness of the background music.     
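For readers curious how such a contour is constructed, the rise-then-fall shape in Figure 1 can be sketched in a few lines of code. This is a simplified illustration, not the study's actual Praat procedure, and all numbers below are made up.

```python
# Illustrative sketch of the Figure 1 manipulation: a gain contour that
# rises linearly from a starting level to a target, then returns.
# All values below are invented for illustration; the study's actual
# manipulations were applied to audio in Praat.

def linear_ramp(start, target, n_up, n_down):
    """Piecewise-linear contour: start -> target -> back to start."""
    up = [start + (target - start) * i / n_up for i in range(n_up)]
    down = [target + (start - target) * i / n_down for i in range(n_down + 1)]
    return up + down

# Example: an intensity scaling factor rising from 1.0 to 1.5 and back.
contour = linear_ramp(1.0, 1.5, n_up=4, n_down=4)
# Applying the contour to audio would mean multiplying each short
# frame of samples by the matching gain value.
```

Applied frame by frame, a contour like this produces exactly the gradual rise and symmetric return described above.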

 

Data from 15 native speakers of New Zealand English were analyzed using statistical tests that allow effects to vary somewhat for each participant. We observed significant convergence in both the pitch and intensity conditions; analysis of the tempo condition has not yet been conducted. Interestingly, these effects appear to differ systematically based on a person's previous musical training. While non-musicians demonstrate the predicted effect and follow the manipulations, musicians appear to invert the effect, reliably altering aspects of their pitch and intensity in the opposite direction of the manipulation (see Figure 2). Sociolinguistic research indicates that under certain conditions speakers will emphasize characteristics of their speech to distinguish themselves socially from conversation partners or groups, as opposed to converging with them6. It seems plausible, then, that given a relatively heightened ability to recognize low-level variations in sound, musicians may on some cognitive level be more aware of the variation in their sound environment and, as a result, resist the more typical effect. However, more work is required to better understand this phenomenon.

podlubny_fig2

Fig. 2
The above plots measure pitch on the y-axis (up and down on the left edge) and indicate the portions of background music that have been manipulated on the x-axis (across the bottom). The blue lines show that speakers generally lower their pitch as an un-manipulated condition progresses. However, the red lines show that when global pitch is lowered during a test condition, such lowering is relatively more dramatic for non-musicians (left plot), while the effect is reversed for those with musical training (right plot). NOTE: A follow-up model further accounts for the relatedness of pitch and intensity and shows much the same effect.

This work indicates that speakers are influenced in production not only by human speech partners, but also, to some degree, by noise within the immediate speech environment. This suggests that environmental noise may constantly be influencing certain aspects of our speech production in very specific and predictable ways. Human listeners are rather talented at recognizing subtle cues in speech10, especially compared to computers and algorithms, which cannot yet match this ability. Some language scientists argue that these changes in speech occur to make understanding easier for those listening11. That is why work like this is likely to resonate in both academia and the private sector: a better understanding of how speech changes in different environments contributes to the development of more effective aids for the hearing impaired, as well as improvements to many devices used in global communications.

 

Sound-file 1.
An example of what participants heard as a control condition (no experimental manipulation) in between test-conditions.

 

Sound-file 2.
An example of what participants heard as a test condition (pitch manipulation, which drops 200 cents, or one whole step).
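As an aside, the relationship between cents and frequency is simple to state in code. The sketch below is illustrative only; the 440 Hz reference note is an arbitrary choice, not a value from the experiment.

```python
# A cent is 1/1200 of an octave, so a shift of c cents multiplies
# a frequency by 2 ** (c / 1200).  The 440 Hz example below is an
# arbitrary reference note, not a value from the study.

def shift_cents(freq_hz, cents):
    return freq_hz * 2 ** (cents / 1200)

# A 200-cent drop is one whole step: A4 (440 Hz) falls to G4 (~392 Hz).
print(round(shift_cents(440.0, -200), 1))  # 392.0
```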

 

 

References

1. Hill, A. R., Adams, J. M., Parker, B. E., & Rochester, D. F. (1988). Short-term entrainment of ventilation to the walking cycle in humans. Journal of Applied Physiology, 65(2), 570-578.
2. Will, U., & Berg, E. (2007). Brain wave synchronization and entrainment to periodic acoustic stimuli. Neuroscience Letters, 424(1), 55-60.
3. McClintock, M. K. (1971). Menstrual synchrony and suppression. Nature, 229, 244-245.
4. Branigan, H. P., Pickering, M. J., McLean, J. F., & Cleland, A. A. (2007). Syntactic alignment and participant role in dialogue. Cognition, 104(2), 163-197.
5. Beckner, C., Rácz, P., Hay, J., Brandstetter, J., & Bartneck, C. (2015). Participants conform to humans but not to humanoid robots in an English past tense formation task. Journal of Language and Social Psychology, 0261927X15584682. Retrieved from http://jls.sagepub.com.ezproxy.canterbury.ac.nz/content/early/2015/05/06/0261927X15584682.
6. Babel, M. (2012). Evidence for phonetic and social selectivity in spontaneous phonetic imitation. Journal of Phonetics, 40(1), 177-189.
7. Major, R. C. (1987). English voiceless stop production by speakers of Brazilian Portuguese. Journal of Phonetics, 15, 197-202.
8. Rekart, D. M. (1985). Evaluation of foreign accent using synthetic speech. Ph.D. dissertation, Louisiana State University.
9. Boersma, P., & Weenink, D. (2014). Praat: Doing phonetics by computer (Version 5.4.04) [Computer program]. Retrieved from www.praat.org.
10. Hay, J., Podlubny, R., Drager, K., & McAuliffe, M. (under review). Car-talk: Location-specific speech production and perception.
11. Lane, H., & Tranel, B. (1971). The Lombard sign and the role of hearing in speech. Journal of Speech, Language, and Hearing Research, 14(4), 677-709.

 

4pMU4 – How Well Can a Human Mimic the Sound of a Trumpet? -Ingo R. Titze


How Well Can a Human Mimic the Sound of a Trumpet?

Ingo R. Titze – ingo.titze@utah.edu

University of Utah
201 Presidents Cir
Salt Lake City, UT

Popular version of paper 4pMU4 “How well can a human mimic the sound of a trumpet?”

Presented Thursday May 26, 2:00 pm, Solitude room

171st ASA Meeting Salt Lake City

 

Man-made musical instruments are sometimes designed or played to mimic the human voice, and likewise vocalists try to mimic the sounds of man-made instruments.  If flutes and strings accompany a singer, a “brassy” voice is likely to produce mismatches in timbre (tone color or sound quality).  Likewise, a “fluty” voice may not be ideal for a brass accompaniment.  Thus, singers are looking for ways to color their voice with variable timbre.

Acoustically, brass instruments are close cousins of the human voice.  It was discovered prehistorically that sending sound over long distances (to locate, be located, or warn of danger) is made easier when a vibrating sound source is connected to a horn.  It is not known which came first – blowing hollow animal horns or sea shells with pursed and vibrating lips, or cupping the hands to extend the airway for vocalization. In both cases, however, airflow-induced vibration of soft tissue (vocal folds or lips) is enhanced by a tube that resonates the frequencies and radiates them (sends them out) to the listener.

Around 1840, theatrical singing by males went through a revolution.  Men wanted to portray more masculinity and raw emotion with vocal timbre.  “Do di petto”, Italian for “C in chest voice”, was introduced by operatic tenor Gilbert Duprez in 1837 and soon became a phenomenon.  A heroic voice in opera took on more of a brass-like quality than a flute-like quality.  Similarly, in the early to mid-twentieth century (1920-1950), female singers were driven by the desire to sing with a richer timbre, one that matched brass and percussion instruments rather than strings or flutes.  Ethel Merman became an icon in this revolution.  This led to the theatre belt sound produced by females today, which has much in common with a trumpet sound.

Titze_Fig1_Merman

Fig.1.  Mouth opening to head-size ratio for Ethel Merman and corresponding frequency spectrum for the sound “aw” with a fundamental frequency fo (pitch) at 547 Hz and a second harmonic frequency 2 fo at 1094 Hz.

 

The length of an uncoiled trumpet horn is about 2 meters (including the full length of the valves), whereas the length of a human airway above the glottis (the space between the vocal cords) is only about 17 cm (Fig. 2). The vibrating lips and the vibrating vocal cords can produce similar pitch ranges, but the resonators have vastly different natural frequencies due to the more than 10:1 ratio in airway length.  So, we ask, how can the voice produce a brass-like timbre in a “call” or “belt”?
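To see the scale of that resonance gap, a rough estimate treats each airway as a tube closed at one end, whose first resonance is f1 = c/(4L). This is a textbook idealization (real trumpets and vocal tracts are considerably more complicated), so the numbers below are back-of-the-envelope only.

```python
# Back-of-the-envelope resonance comparison.  Both airways are
# idealized as tubes closed at one end, whose first resonance is
# f1 = c / (4 * L).  Real instruments are more complicated; these
# numbers only show the scale of the mismatch.

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature

def first_resonance(length_m):
    return SPEED_OF_SOUND / (4.0 * length_m)

vocal_tract = first_resonance(0.17)  # about 504 Hz
trumpet_tube = first_resonance(2.0)  # about 43 Hz
```

With a more than 10:1 length ratio, the natural frequencies differ by the same factor, which is why the two instruments need structural tricks (mouthpiece and bell, discussed next) to produce similar timbres.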

One structural similarity between the human instrument and the brass instrument is the shape of the airway directly above the glottis, a short and narrow tube formed by the epiglottis.  It corresponds to the mouthpiece of brass instruments.  This mouthpiece plays a major role in shaping the sound quality.  A second structural similarity is created when a singer uses a wide mouth opening, simulating the bell of the trumpet.  With these two structural similarities, the spectrum of tones produced by the two instruments can be quite similar, despite the huge difference in the overall length of the instrument.

 

Titze_Fig2_airway_trumpet

Fig 2.  Human airway and trumpet (not drawn to scale).

 

Acoustically, the call or belt-like quality is achieved by strengthening the second harmonic frequency 2fo in relation to the fundamental frequency fo.  In the human instrument, this can be done by choosing a bright vowel like /æ/ that puts an airway resonance near the second harmonic.  The fundamental frequency will then have significantly less energy than the second harmonic.

Why does that resonance adjustment produce a brass-like timbre?  To understand this, we first recognize that, in brass-instrument playing, the tones produced by the lips are entrained (synchronized) to the resonance frequencies of the tube.  Thus, the tones heard from the trumpet are the resonance tones. These resonance tones form a harmonic series, but the fundamental tone in this series is missing.  It is known as the pedal tone.  Thus, by design, the trumpet has a strong second harmonic frequency with a missing fundamental frequency.

Perceptually, an imaginary fundamental frequency may be produced by our auditory system when a series of higher harmonics (equally spaced overtones) is heard.  Thus, the fundamental (pedal tone) may be perceptually present to some degree, but the highly dominant second harmonic determines the note that is played.
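The "missing fundamental" effect can be illustrated numerically: for equally spaced harmonics, the implied fundamental is their common spacing, i.e. the greatest common divisor of the harmonic frequencies. The 110 Hz example below is an arbitrary illustration, not a measurement from the paper.

```python
from functools import reduce
from math import gcd

# For equally spaced harmonics, the implied (possibly missing)
# fundamental is their common spacing: the greatest common divisor
# of the integer harmonic frequencies.  The 110 Hz tone is arbitrary.

def implied_fundamental(harmonics_hz):
    """Fundamental implied by a set of integer harmonic frequencies."""
    return reduce(gcd, harmonics_hz)

# Harmonics 2f, 3f, 4f of a 110 Hz tone, with f itself absent:
print(implied_fundamental([220, 330, 440]))  # 110
```

The ear performs something analogous: given the 220, 330, and 440 Hz partials, it infers a 110 Hz pitch even though no energy is present there.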

In belting and loud calling, the fundamental is not eliminated, but suppressed relative to the second harmonic.  The timbre of belt is related to the timbre of a trumpet due to this lack of energy in the fundamental frequency.  There is a limit, however, in how high the pitch can be raised with this timbre.  As pitch goes up, the first resonance of the airway has to be raised higher and higher to maintain the strong second harmonic.  This requires ever more mouth opening, literally creating a trumpet bell (Fig. 3).

Titze_Fig3_Menzel

Fig 3. Mouth opening to head-size ratio for Idina Menzel and corresponding frequency spectrum for a belt sound with a fundamental frequency (pitch) at 545 Hz.

 

Note the strong second harmonic frequency 2fo in the spectrum of frequencies produced by Idina Menzel, a current musical theatre singer.

One final comment about the perceived pitch of a belt sound is in order.  Pitch perception is not only related to the fundamental frequency, but the entire spectrum of frequencies.  The strong second harmonic influences pitch perception. The belt timbre on a D5 (587 Hz) results in a higher pitch perception for most people than a classical soprano sound on the same note. This adds to the excitement of the sound.

 

 

2aMU4 – Yelling vs. Screaming in Operatic and Rock Singing – Lisa Popeil


Yelling vs. Screaming in Operatic and Rock Singing

    Lisa Popeil – lisa@popeil.com

    Voiceworks®
    14431 Ventura Blvd #200
    Sherman Oaks, CA 91423

     

    Popular version of paper 2aMU4

    Presented Tuesday morning, May 24, 2016

     

    There are a number of ways the human vocal folds can vibrate, each creating a unique sound used in singing.  The two most common vibrational patterns of the vocal folds are commonly called “chest voice” and “head voice”, with chest voice sounding like speaking or yelling, and head voice sounding more flute-like, or like screaming on high pitches.  In the operatic singing tradition, men sing primarily in chest voice while women sing primarily in head voice.  However, in rock singing, men often emit high screams using their head voice, while female rock singers use almost exclusively their chest voice for high notes.

    Vocal fold vibrational pattern differences are only a part of the story though, since the shaping of the throat, mouth and nose (the vocal tract) play a large part in the perception of the final sound.  That means that head voice can be made to “sound” like chest voice on high screams using vocal tract shaping and only the most experienced listener can determine if the vocal register used was chest or head voice.

    Using spectrographic analysis, differences and similarities between operatic and rock singers can be seen.  One similarity between the two is the heightened output of a resonance commonly called “ring”.  This resonance, when amplified by vocal tract shaping, creates a piercing sound that’s perceived by the listener as extremely loud. The amplified ring harmonics can be seen in the 3,000 Hz band in both the male opera sample and in rock singing samples:

     

     

  • MALE OPERA – HIGH B (B4…494 Hz)       CHEST VOICE
    Popeil1

    Figure 1 

     

  • MALE ROCK – HIGH E (E5…659 Hz)       CHEST VOICE
    Popeil 2

       Figure 2                                                                 

     

  • MALE ROCK – HIGH G (G5…784 Hz)    HEAD VOICE
    Popeil 3

    Figure 3

     

     

     

    Though each of these three male singers exhibits a unique frequency signature, whether singing in chest or head voice, each is using the amplified ring strategy in the 3,000 Hz range to amplify his sound and create excitement.

     

2aMU5 – Do people find vocal fry in popular music expressive? – Mackenzie Parrott

Do people find vocal fry in popular music expressive?

 

Mackenzie Parrott – mackenzie.lanae@gmail.com

 

John Nix – john.nix@utsa.edu

 

Popular version of paper 2aMU5, “Listener Ratings of Singer Expressivity in Musical Performance.”

 

Presented Tuesday, May 24, 2016, 10:20-10:35 am, Salon B/C, ASA meeting, Salt Lake City

Vocal fry is the lowest register of the human voice.  Its distinct sound is characterized by a low rumble interspersed with uneven popping and crackling.  The use of fry as a vocal mannerism is becoming increasingly common in American speech, fueling discussion about the implications of its use and how listeners perceive the speaker [1].  Previous studies have suggested that listeners find vocal fry to be generally unpleasant in women’s speech, but associate it with positive characteristics in men’s speech [2].

As it has become more prevalent, fry has perhaps not surprisingly found its place in many commercial song styles as well.  Many singers are implementing fry as a stylistic device at the onset or offset of a sung tone.  This can be found very readily in popular musical styles, presumably to impact and amplify the emotion that the performer is attempting to convey.

Researchers at the University of Texas at San Antonio conducted a survey to analyze whether listeners’ ratings of a singer’s expressivity in musical samples in two contemporary commercial styles (pop and country) were affected by the presence of vocal fry, and to see if there was a difference in listener ratings according to the singer’s gender.  A male and a female singer recorded musical samples for the study in a noise reduction booth.  As can be seen in the table below, the singers were asked to sing most of the musical selections twice, once using vocal fry at phrase onsets, and once without fry, while maintaining the same vocal quality, tempo, dynamics, and stylization.  Some samples were presented more than one time in the survey portion of the study to test listener reliability.

 

Song                        | Singer Gender | Vocal Mode
(Hit Me) Baby One More Time | Female        | Fry Only
If I Die Young              | Female        | With and Without Fry
National Anthem             | Female        | With and Without Fry
Thinking Out Loud           | Male          | Without Fry Only
Amarillo By Morning         | Male          | With and Without Fry
National Anthem             | Male          | With and Without Fry

 

Across all listener ratings of all the songs, the recordings that included vocal fry were rated as only slightly more expressive than the recordings with no vocal fry.  Comparing the use of fry between the male and female singer revealed some differences between the genders.  The listeners rated the samples in which the female singer used vocal fry higher (i.e., more expressive) than those without fry, which was surprising given the negative association with women using vocal fry in speech.  Conversely, the listeners rated the male samples without fry as more expressive than those with fry.  Part of this preference pattern may also reflect the singers themselves: the male singer was much more experienced with pop styles than the female singer, who is primarily classically trained.  The overall expressivity ratings for the male singer were higher than those of the female singer by a statistically significant margin.

There were also listener rating trends across the differing age groups of participants.  Younger listeners widened the preference gaps: between the female singer's performances with fry versus without, and between the male singer's performances without fry versus with.  Presumably they are more attuned to the stylistic norms of current pop singers, though this could also imply a gender bias in younger listeners.  The older listener groups rated the mean expressivity of the performers lower than the younger listener groups did.  Since most of the songs sampled are fairly recent in production, this may indicate a generational trend in preference: perhaps listeners rate the style of vocal production most similar to what they listened to during their young adult years as the most expressive style of singing.  These findings have raised many questions for further studies of vocal fry in pop and country music.

 

 

  1. Anderson, R.C., Klofstad, C.A., Mayew, W.J., & Venkatachalam, M. “Vocal Fry May Undermine the Success of Young Women in the Labor Market.” PLoS ONE, 2014. 9(5): e97506. doi:10.1371/journal.pone.0097506.

 

  2. Yuasa, I. P. “Creaky Voice: A New Feminine Voice Quality for Young Urban-Oriented Upwardly Mobile American Women.” American Speech, 2010. 85(3): 315-337.

 

 

5aMU1 – Understanding timbral effects of multi-resonator/generator systems of wind instruments in the context of western and non-western music – Jonas Braasch


Popular version of poster 5aMU1
Presented Friday morning, May 22, 2015, 8:35 AM – 8:55 AM, Kings 4
169th ASA Meeting, Pittsburgh

In this paper the relationship between musical instruments and the rooms they are performed in was investigated. A musical instrument is typically characterized as a system consisting of a tone generator combined with a resonator. A saxophone, for example, has a reed as a tone generator and a conically shaped resonator whose effective length can be changed with keys to produce different musical notes. Often neglected is the fact that for all wind instruments a second resonator is coupled to the tone generator – the vocal cavity. We use our vocal cavity every day when we speak to form characteristic formants, local enhancements in the frequency spectrum that shape vowels. This is achieved by varying the diameter of the vocal tract at specific positions along its axis. In contrast to the resonator of a wind instrument, the vocal tract is fixed in length by the distance between the vocal cords and the lips. Consequently, the vocal tract cannot be used to change the fundamental frequency over a larger melodic range; for our voice, the change in frequency is controlled via the tension of the vocal cords. The instrument's resonator, however, is not an adequate device to control the timbre (harmonic spectrum) of an instrument, because it can only be varied in length, not in width. Therefore, the player's adjustment of the vocal tract is necessary to control the timbre of the instrument. While some instruments possess additional mechanisms to control timbre, e.g., the embouchure, which controls the tone generator directly using the lip muscles, others, like the recorder, rely on changes in the wind supply provided by the lungs and on changes of the vocal tract. The role of the vocal tract has not been addressed systematically in the literature and in learning guides, for two obvious reasons. Firstly, there is no known systematic approach for quantifying the internal body movements that shape the vocal tract: each performer has to figure out the best vocal tract configurations in an intuitive manner. For the resonator system, by contrast, the changes are described through the musical notes, and where multiple ways exist to produce the same note, additional signs show how to finger it (e.g., by providing a specific key combination). Secondly, in Western classical music culture, vocal tract adjustments predominantly have a corrective function: they balance out the harmonic spectrum to make the instrument sound as even as possible across the register.

Braasch2

PVC-Didgeridoo adapter for soprano saxophone

In non-Western cultures, the role of the oral cavity can be much more important in conveying musical meaning. The didgeridoo, for example, has a fixed resonator with no finger holes, and consequently it can only produce a single-pitched drone. The musical parameter space is then defined by modulating the overtone spectrum above that tone, by changing the vocal tract dimensions and creating vocal sounds on top of the buzzing lips at the didgeridoo's edge. Mouthpieces of Western brass instruments have a cup behind the rim with a very narrow opening to the resonator, the throat. The didgeridoo does not have a cup; its rim is the edge of the resonator itself, finished with a ring of beeswax. While the narrow throat of a Western mouthpiece mutes additional sounds produced with the voice, didgeridoos are very open from end to end and carry the voice much better.

The room a musical instrument is performed in acts as a third resonator, which also affects the timbre of the instrument. In our case, the room was simulated using a computer model with early reflections and late reverberation.

Braasch 1
Tone generators for soprano saxophone from left to right: Chinese Bawu, soprano saxophone, Bassoon reed, cornetto.

In general, it is difficult to assess the effect of a mouthpiece and resonator individually, because both vary across instruments. The trumpet, for example, has a narrow cylindrical bore with a brass mouthpiece; the saxophone has a wide conical bore with a reed-based mouthpiece. To mitigate this, several tone generators were adapted for a soprano saxophone, including a brass mouthpiece from a cornetto, a bassoon reed, and a didgeridoo adapter made from a 140 cm folded PVC pipe that can likewise be attached to the saxophone. It turns out that exchanging tone generators changes the timbre of the saxophone significantly. The cornetto mouthpiece gives the instrument a much mellower tone. Like the baroque cornetto, the instrument then sounds better in a bright room with lots of high frequencies, while the saxophone is at home in a 19th-century concert hall with a steeper roll-off at high frequencies.