2pSC – How do narration experts provide expressive storytelling in Japanese fairy tales?

Takashi Saito – saito@sc.shonan-it.ac.jp
Shonan Institute of Technology
1-1-25 Tsujido-Nishikaigan,
Fujisawa, Kanagawa, JAPAN

Popular version of paper 2pSC, “Prosodic analysis of storytelling speech in Japanese fairy tale”
Presented Tuesday afternoon, November 29, 2016
172nd ASA Meeting, Honolulu

Recent advances in speech synthesis technology have brought us relatively high-quality synthetic speech; smartphones, for example, now routinely read messages aloud. The acoustic quality in particular can come close to that of human voices. The prosodic aspects, that is, the patterns of rhythm and intonation, still leave much room for improvement, however: messages generated by speech synthesis systems sound somewhat awkward and monotonous. In other words, they lack the expressiveness of human speech. One reason is that most systems use a one-sentence synthesis scheme, in which each sentence of a message is generated independently and the sentences are simply concatenated. This lack of expressiveness may hinder a wider range of applications for speech synthesis. Storytelling is a typical application where speech synthesis needs a control mechanism that goes beyond the single sentence to deliver truly vivid and expressive narration. This work investigates the actual storytelling strategies of human narration experts, with the ultimate aim of reflecting them in the expressiveness of synthetic speech.

A popular Japanese fairy tale, "The Inch-High Samurai," in its English translation, served as the storytelling material in this study. It is a short story, taking about six minutes to tell aloud, and it consists of the four elements typically found in simple fairy tales: introduction, build-up, climax, and ending. These features make the story well suited for observing prosodic changes across the story's flow. Six narration experts (four female, two male) told the story, and their narrations were recorded. Because we were interested in what the narrators were thinking while telling the story, we first interviewed them about their actual reading strategies after the recording. We found that they usually did not apply fixed reading techniques to each sentence; rather, like actors, they tried to enter the world of the story and form a clear image of the characters appearing in it. They also reported paying attention to the following aspects of the scenes associated with the story elements. In the introduction, featuring the birth of the little samurai character, they began speaking slowly and gently in an effort to grasp the listeners' hearts. In the climax, depicting the extermination of the devil character, they tried to express a tense feeling through a quick rhythm and tempo. Finally, in the ending, they gradually changed their reading style to let the audience understand that the happy ending was coming soon.

For all six speakers, a baseline speech segmentation into words and accentual phrases was carried out semi-automatically. We then applied a multi-layered prosodic tagging method, performed manually, to annotate the various changes of "story state" relevant to impersonation, emotional involvement, and scene-flow control. Figure 1 shows an example of the labeled speech data; the Wavesurfer software [1] served as our speech visualization and labeling tool. The example contains part of the storyteller's speech (the phrase "oniwa bikkuridesu," meaning "the devil was surprised") and the devil's part ("ta ta tasukekuree," meaning "please help me!"); the speakers are shown in the top label pane for characters (chrlab). The second label pane (evelab) shows event labels such as scene changes and emotional involvement (desire, joy, fear, etc.). In this example, a "fear" event is attached to the devil's utterance. The dynamic pitch movement can be observed in the pitch contour pane at the bottom of the figure.

[Figure 1: Example of segmented and labeled storytelling speech, with character (chrlab) and event (evelab) label panes and a pitch contour]
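
Wavesurfer stores annotations like those in Figure 1 as plain-text label tiers. As a rough illustration of how such tiers can be processed, here is a minimal Python sketch that reads two Wavesurfer-style label files and matches each event to the character speaking at that time; the file names, and the assumption of one "start end label" line per segment, are ours.

```python
# Minimal sketch: read two Wavesurfer-style label tiers (characters, events)
# and report which event overlaps which character's speech.
# Assumes plain-text .lab files with one "start end label" line per segment,
# times in seconds; the file names below are hypothetical.

def read_tier(path):
    """Return a list of (start, end, label) tuples from a label file."""
    segments = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split(maxsplit=2)
            if len(parts) == 3:
                start, end, label = parts
                segments.append((float(start), float(end), label.strip()))
    return segments

characters = read_tier("chrlab.lab")  # who is speaking (narrator, devil, ...)
events = read_tier("evelab.lab")      # scene changes, emotions (fear, joy, ...)

# Attach each event to every character segment it overlaps in time.
for ev_start, ev_end, ev_label in events:
    for ch_start, ch_end, ch_label in characters:
        if ev_start < ch_end and ev_end > ch_start:  # intervals overlap
            print(f"{ev_label!r} event during {ch_label!r} part "
                  f"({ev_start:.2f}-{ev_end:.2f} s)")
```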

How are the scene-change and emotional-involvement events provided by human narrators manifested in the speech data? Four prosodic parameters were investigated for every breath group in the data: speed (speaking rate in mora/sec), pitch (Hz), power (dB), and preceding pause length (sec). A breath group is a speech segment uttered consecutively, without pausing. Figures 2, 3, and 4 show these parameters at a scene-change event (Figure 2), a desire event (Figure 3), and a fear event (Figure 4); the vertical axis gives the ratio of each parameter to its average value. As the figures show, each event has its own distinct tendency in the prosodic parameters, and that tendency appears fairly common to all speakers. For instance, the scene-change and desire events differ in the length of the preceding pause and in how much the other three parameters contribute. The fear event shows a tendency quite different from the other events, yet it too is shared by all speakers, although the size of the parameter movements varies between them. Figure 5 shows how readers express character differences with three of these parameters when impersonating the story's characters: in short, speed and pitch are changed dynamically for impersonation, and this is common to all speakers.
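
To make the normalization concrete, the sketch below computes, for each breath group, the ratio of every prosodic parameter to its average over the speaker's breath groups, as plotted in Figures 2-4. The numeric values are invented for illustration only.

```python
# Per-breath-group normalization: express each prosodic parameter as a ratio
# to the speaker's average, so 1.0 means "at this narrator's norm".
# The sample measurements below are made up for illustration.

from statistics import mean

# One dict per breath group: speed (mora/sec), pitch (Hz), power (dB),
# preceding pause (sec), plus any event label attached to the group.
breath_groups = [
    {"speed": 7.8, "pitch": 210.0, "power": 62.0, "pause": 0.45, "event": None},
    {"speed": 9.6, "pitch": 285.0, "power": 66.0, "pause": 0.15, "event": "fear"},
    {"speed": 6.9, "pitch": 195.0, "power": 60.0, "pause": 1.20, "event": "scene-change"},
]

PARAMS = ("speed", "pitch", "power", "pause")
averages = {p: mean(bg[p] for bg in breath_groups) for p in PARAMS}

for bg in breath_groups:
    ratios = {p: round(bg[p] / averages[p], 2) for p in PARAMS}
    print(bg["event"], ratios)
```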

Based on the findings from these human narrations, we are designing a framework that maps story events, via scene changes and emotional involvement, onto prosodic parameters. At the same time, additional databases need to be built to validate and reinforce the story-event description and the mapping framework.
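
As a purely hypothetical sketch of what such a mapping could look like, the snippet below scales a synthesizer's baseline prosody by per-event multipliers; the values are placeholders, not measurements from this study.

```python
# Hypothetical event-to-prosody mapping: each story event scales the
# synthesizer's baseline (speed, pitch, power, preceding pause).
# All multiplier values are invented placeholders.

EVENT_PROSODY = {
    # event:        (speed, pitch, power, pause) multipliers
    "scene-change": (0.95, 1.00, 1.00, 2.50),  # long pause, slight slowdown
    "desire":       (1.05, 1.10, 1.05, 0.80),
    "fear":         (1.30, 1.35, 1.10, 0.50),  # quick, high, little pause
}

def apply_event(baseline, event):
    """Scale a (speed, pitch, power, pause) baseline by an event's multipliers."""
    factors = EVENT_PROSODY.get(event, (1.0, 1.0, 1.0, 1.0))
    return tuple(b * f for b, f in zip(baseline, factors))

baseline = (7.5, 200.0, 62.0, 0.5)  # mora/sec, Hz, dB, sec
print(apply_event(baseline, "fear"))
```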

[Figures 2-4: Prosodic parameter ratios at the scene-change, desire, and fear events]
[Figure 5: Prosodic parameter changes used for character impersonation]

[1] Wavesurfer: http://www.speech.kth.se/wavesurfer/

5aSC43 – Appropriateness of acoustic characteristics on perception of disaster warnings

Naomi Ogasawara – naomi-o@mail.gpwu.ac.jp
Kenta Ofuji – o-fu@u-aizu.ac.jp
Akari Harada

Popular version of paper, 5aSC43, “Appropriateness of acoustic characteristics on perception of disaster warnings.”
Presented Friday morning, December 2, 2016
172nd ASA Meeting, Honolulu

As you might know, Japan is often hit by natural disasters such as typhoons, earthquakes, flooding, landslides, and volcanic eruptions. According to the Japan Institute of Country-ology and Engineering [1], 20.5% of all magnitude-6 and greater earthquakes in the world occurred in Japan, and 0.3% of deaths caused by natural disasters worldwide were in Japan. These numbers are strikingly high given that Japan occupies only 0.28% of the world's land mass.

Municipalities in Japan issue evacuation calls to local residents through community wireless systems or home receivers when a disaster is approaching; however, many cases have been reported in which people did not evacuate even after hearing the warnings [2]. This is because people tend to disbelieve and disregard warnings, owing to normalcy bias [3]. Facing this reality, we need to find ways to make evacuation calls more effective and trustworthy. This study focused on how the acoustic characteristics of a warning call (voice gender, pitch, and speaking rate) influence listeners' perception of the call, and it aims to offer suggestions for better communication.

Three short warnings were created:

  1. Kyoo wa ame ga furimasu. Kasa wo motte dekakete kudasai. ‘It’s going to rain today. Please take an umbrella with you.’
  2. Ookina tsunami ga kimasu. Tadachini hinan shitekudasai. 'A big tsunami is coming. Please evacuate immediately.'
  3. Gakekuzure no kiken ga arimasu. Tadachini hinan shitekudasai. ‘There is a risk of landslide. Please evacuate immediately.’

A female and a male native speaker of Japanese, both with relatively clear voices and good articulation, read the warnings aloud at a normal speed (see Table 1 for acoustic measurements of the utterances), and their utterances were recorded in a sound-attenuated booth with a high-quality microphone and recording device. Each female and male utterance was then modified with the acoustic analysis software Praat [4] to create stimuli with 20% higher or lower pitch and a 20% faster or slower speech rate. In total 54 tokens were created (3 warnings x 2 genders x 3 pitch levels x 3 speech rates); of the warning-1 tokens, only four were used in the perception experiment, and only as practice stimuli.
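
Manipulations of this kind can be reproduced, for instance, with the parselmouth Python interface to Praat. The sketch below raises pitch by 20% and speeds the utterance up by 20% via Praat's Manipulation object; it is only an illustration of the procedure described above (the study used Praat itself), and the file names are hypothetical.

```python
# Sketch: create a +20% pitch, +20% speech-rate version of a recorded warning
# using parselmouth (a Python interface to Praat). File names are hypothetical.

import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("warning2_female.wav")
manipulation = call(snd, "To Manipulation", 0.01, 75, 600)  # time step, pitch floor/ceiling (Hz)

# Raise pitch by 20%: multiply every pitch-tier point by 1.2.
pitch_tier = call(manipulation, "Extract pitch tier")
call(pitch_tier, "Multiply frequencies", snd.xmin, snd.xmax, 1.2)
call([pitch_tier, manipulation], "Replace pitch tier")

# Speak 20% faster: a uniform duration factor of 1/1.2.
duration_tier = call(manipulation, "Extract duration tier")
call(duration_tier, "Add point", snd.xmin, 1 / 1.2)
call([duration_tier, manipulation], "Replace duration tier")

resynthesis = call(manipulation, "Get resynthesis (overlap-add)")
resynthesis.save("warning2_female_p+20_r+20.wav", "WAV")
```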

Table 1: Acoustic Data of Normal Tokens

Thirty-four university students listened to each stimulus through two loudspeakers placed in the right and left front corners of a classroom (930 cm x 1,500 cm). Another group of 42 students and 11 members of the public listened to the same stimuli through a single loudspeaker at the front of a lab (510 cm x 750 cm). All participants rated each token on a 1-to-5 scale (1: lowest, 5: highest) for Intelligibility, Reliability, and Urgency.

Figure 1 summarizes the evaluation responses (n=87) in a bar chart, with average scores calculated from the 1-5 ratings for each combination of acoustic conditions. Taking Intelligibility as an example, the average score was highest when the calls were spoken in a female voice at normal speed and normal pitch. Similar results were found for Reliability. On the other hand, respondents felt a higher degree of Urgency with both faster speed and higher pitch.

Figure 1: Evaluation responses (bar graph, in percent) and average scores (data labels and line graph, on a 1-5 scale)

The data were then analyzed with an analysis of variance (ANOVA; Table 2), and Figure 2 illustrates the same results as stacked bar charts. For all three measures, Intelligibility, Reliability, and Urgency, the main effect of speaking speed was the most dominant; in particular, speed alone accounted for up to 43% of the variance in Urgency.

Table 2: ANOVA results

Figure 2: Decomposed variances in stacked bar charts based on the ANOVA results

Finally, to find out how much influence speed has on Urgency, we calculated the expected average evaluation scores at each speed level, holding the voice at female and the pitch at normal (Figure 3). Indeed, setting the speed to fast raises perceived Urgency to the highest level, even at some expense of Intelligibility and Reliability. Based on these results, we argue that speech rate may effectively be varied depending on the purpose of an evacuation call: whether it prioritizes Urgency, or Intelligibility and Reliability.

Figure 3: Expected average evaluation scores on a 1-5 scale, for a female voice at normal pitch
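
For completeness, here is a hedged sketch of this kind of analysis in Python with pandas and statsmodels: a factorial ANOVA of the ratings followed by expected-score predictions like those in Figure 3. The file ratings.csv and its column names are our assumptions, and the exact model (e.g., whether interaction terms are included) may differ from the one behind Table 2.

```python
# Sketch: ANOVA of urgency ratings on gender, pitch, and speed, then expected
# scores for a female voice at normal pitch across the three speed levels.
# "ratings.csv" and its columns (gender, pitch, rate, urgency) are assumptions.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("ratings.csv")  # one row per participant x stimulus

model = smf.ols("urgency ~ C(gender) + C(pitch) + C(rate)", data=df).fit()
print(anova_lm(model, typ=2))    # sums of squares show each factor's share

# Expected average scores (cf. Figure 3): female voice, normal pitch,
# at slow / normal / fast speed.
grid = pd.DataFrame({"gender": ["female"] * 3,
                     "pitch":  ["normal"] * 3,
                     "rate":   ["slow", "normal", "fast"]})
print(model.predict(grid))
```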

References

  1. Japan Institute of Country-ology and Engineering (2015). Kokudo wo shiru [To know the national land]. Retrieved from: http://www.jice.or.jp/knowledge/japan/commentary09.
  2. Nakamura, Isao (2008). Dai 6 sho Hinan to joho, dai 3 setsu Hinan to jyuumin no shinri [Chapter 6 Evacuation and Information, Section 3 Evacuation and Residents' Mind]. In H. Yoshii & A. Tanaka (Eds.), Saigai kiki kanriron nyuumon [Introduction to Disaster Management Theory] (pp. 170-176). Tokyo: Kobundo.
  3. Drabek, Thomas E. (1986). Human System Responses to Disaster: An Inventory of Sociological Findings. NY: Springer-Verlag New York Inc.
  4. Boersma, Paul & Weenink, David (2013). Praat: doing phonetics by computer [Computer program]. Retrieved from: http://www.fon.hum.uva.nl/praat/.

Tags:
-Emergency warnings/response
-Natural disasters
-Broadcasting
-Speech rate
-Pitch