5pAOb1 – Acoustic mapping of ocean currents using moving vehicles

Chen-Fen Huang – chenfen@ntu.edu.tw
KuangYu Chen – seven5172002@gmail.com
IO.NTU – Acoustic Oceanography Lab

Sheng-Wei Huang – swhuang1983@ntu.edu.tw
JenHwa Guo – jguo@ntu.edu.tw
ESOE.NTU – Underwater Vehicles Lab
Taipei, 10617, Taiwan, R.O.C.

Popular version of paper 5pAOb1, “Acoustic mapping of ocean currents using moving vehicles”
Presented Friday afternoon, November 9, 2018, 1:00 PM – 1:20 PM, Balcony L
176th ASA Meeting, Victoria, BC Canada

With the increased availability of highly maneuverable unmanned vehicles, abundant ocean environmental data can now be collected. Among the various ways of measuring ocean temperature and currents, ocean acoustic tomography (OAT) is probably the most efficient method for obtaining a comprehensive view of these properties in the ocean interior.

OAT uses differential travel times (DTTs) to estimate currents. Imagine two transceivers separated by a distance R in a moving medium with a sound speed of c. Sound transmitted from the upstream transceiver travels faster than sound transmitted from the downstream transceiver. By measuring the sound traveling in both directions, we obtain the DTT, and from the DTT we can determine the path-averaged current between the transceivers.
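
To make that relationship concrete, here is a minimal numerical sketch in Python (our illustration, not the authors' processing code) using the standard reciprocal-transmission relations from [1]; the separation, sound speed, and current values are invented for the example.

    # Illustrative sketch: path-averaged current from a differential travel time (DTT).
    # Straight-line reciprocal transmissions are assumed; all numbers are made up.
    c = 1500.0          # nominal sound speed (m/s)
    R = 1000.0          # transceiver separation (m)
    u_true = 0.5        # true path-averaged current along the path (m/s)

    t_with = R / (c + u_true)       # travel time with the current
    t_against = R / (c - u_true)    # travel time against the current
    dtt = t_against - t_with        # differential travel time

    u_est = c**2 * dtt / (2.0 * R)  # first-order inversion: u ~ c^2 * DTT / (2R)
    print(f"DTT = {dtt * 1e6:.0f} microseconds, estimated current = {u_est:.3f} m/s")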

What happens if the vehicles carrying the transceivers are moving?  First, the DTTs are affected: their magnitude is reduced by the average speed of the vehicles [1].  Second, the acoustic signals are Doppler-distorted by the relative motion between the vehicles.

To determine the Doppler shift, we correlated replicas of the transmitted signal, each resampled with a different hypothetical Doppler shift, with the received signals.  The hypothetical Doppler shift yielding the maximum correlation was then used to compensate the acoustic measurements and to determine the acoustic arrival patterns.
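
Below is a self-contained sketch of that replica-correlation idea (our own toy example; the waveform, sampling rate, and Doppler grid are assumptions, not the experiment's parameters): a transmitted chirp is resampled by a grid of hypothetical Doppler factors and correlated with the received signal, and the factor giving the largest correlation peak is taken as the Doppler estimate.

    # Illustrative sketch: Doppler estimation by replica correlation.
    import numpy as np
    from scipy.signal import chirp, correlate

    fs = 48_000.0
    t = np.arange(0, 0.5, 1 / fs)
    tx = chirp(t, f0=8_000, f1=12_000, t1=t[-1])              # transmitted replica

    true_factor = 1.0005                                       # simulated Doppler compression
    rx = np.interp(t * true_factor, t, tx, left=0, right=0)   # toy "received" signal

    best = None
    for factor in np.linspace(0.999, 1.001, 41):               # hypothetical Doppler factors
        replica = np.interp(t * factor, t, tx, left=0, right=0)
        peak = np.max(np.abs(correlate(rx, replica, mode="full")))
        if best is None or peak > best[1]:
            best = (factor, peak)

    print(f"Estimated Doppler factor: {best[0]:.5f} (true value {true_factor})")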

The Doppler shift measures the relative speed between two vehicles; however, relative speed isn’t sufficient to determine the ocean current speed – absolute speed (projected onto the path connecting the two vehicles) is required.  If only one of the vehicles is moving, then the Doppler shift indicates the projected speed of the moving vehicle.  If both of the vehicles are moving, we determine their average speeds by measuring the ground speed of at least one of the mobile vehicles.

We determined the DTTs using a correlation-based method.  The time series of acoustic arrivals received at each pair of transceivers (the reciprocal arrival patterns) are correlated to obtain the cross-correlation function (CCF).  We selected the lag time corresponding to the maximum peak of the CCF as an average estimate of the DTT.
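
As a sketch of this step (again our own code, not the authors'), the DTT can be read off as the lag of the CCF maximum; arrival_ab and arrival_ba below are hypothetical arrays holding the two Doppler-compensated reciprocal arrival patterns sampled at rate fs.

    # Illustrative sketch: DTT from the peak of the cross-correlation function (CCF).
    import numpy as np
    from scipy.signal import correlate, correlation_lags

    def estimate_dtt(arrival_ab, arrival_ba, fs):
        ccf = correlate(arrival_ab, arrival_ba, mode="full")
        lags = correlation_lags(len(arrival_ab), len(arrival_ba), mode="full")
        return lags[np.argmax(np.abs(ccf))] / fs   # lag (seconds) at the CCF maximum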

We conducted a moving-vehicle experiment using two moving vehicles (an AUV and a ship) and one moored station (a buoy) in WangHiXiang Bay near Keelung City, Taiwan.  The AUV sailed near the shore while the ship surveyed in a counterclockwise direction along a square trajectory.  We installed tomographic transceivers on the moving vehicles and the moored station, and a Doppler velocity log (DVL) on the ship provided validation of our current estimates.  Taken together, the moving vehicles and the moored station form a triangular configuration that can be used to map the ocean currents.

We used the distributed sensing method [2] to obtain the current field.  The estimated current velocities near the ship are consistent with the point measurements from the DVL.  We then reconstructed the current distribution in the bay using the acoustic data (the path-averaged currents) collected over the preceding 20 minutes, which revealed a small-scale eddy.
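
For readers curious how a handful of path-averaged measurements can yield a map, the sketch below shows a generic regularized least-squares inversion on a toy grid; it is not the distributed-sensing method of [2], and the path geometry and data values are invented.

    # Toy inversion: path-averaged currents -> cell-averaged currents on a coarse grid.
    import numpy as np

    # G[i, j] = fraction of acoustic path i that lies in grid cell j (invented geometry)
    G = np.array([[0.50, 0.50, 0.00, 0.00],
                  [0.00, 0.50, 0.50, 0.00],
                  [0.25, 0.25, 0.25, 0.25]])
    d = np.array([0.10, 0.05, 0.08])        # measured path-averaged currents (m/s)

    mu = 1e-2                                # Tikhonov regularization weight
    # Solve min ||G m - d||^2 + mu ||m||^2 for the cell-averaged currents m
    m = np.linalg.solve(G.T @ G + mu * np.eye(G.shape[1]), G.T @ d)
    print(np.round(m, 3))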


Figure 1. Illustration of the acoustic mapping of ocean currents. Estimation of the current velocities near the ship for a) eastward direction and b) northward direction. The red circle and line indicate the DVL measurement while the black color indicates the DTT estimate. c) Spatial distribution of the estimated current field (yellow arrows) using the acoustic transmission paths indicated by the white lines.

[1] W. Munk, P. F. Worcester, and C. Wunsch, Ocean Acoustic Tomography, Cambridge University Press, 1995.

[2] C.-F. Huang, T. C. Yang, J.-Y. Liu, and J. Schindall, “Acoustic mapping of ocean currents using networked distributed sensors,” J. Acoust. Soc. Am., vol. 134, pp. 2090–2105, 2013.

4aAB8- Boat and noise effects on the behavior of killer whales revealed by suction cup tags

Marla Holt – marla.holt@noaa.gov, NOAA NMFS Northwest Fisheries Science Center
Brad Hanson – brad.hanson@noaa.gov, NOAA NMFS Northwest Fisheries Science Center
Candice Emmons – candice.emmons@noaa.gov, NOAA NMFS Northwest Fisheries Science Center
Jennifer Tennessen – jennifer.tennessen@noaa.gov, Lynker Technologies
Deborah Giles – dagiles7@gmail.com, University of Washington Friday Harbor Labs
Jeffery Hogan – jeff@killerwhaletales.org, Cascadia Research Collective

Popular version of paper 4aAB8, “Effects of vessels and noise on the subsurface behavior of endangered killer whales (Orcinus orca)”
Presented Thursday morning, November 8, 2018, 10:00-10:15 AM, Shaughnessy (FE)
176th ASA Meeting, Victoria, BC

Southern Resident killer whales are unique and iconic to the Pacific Northwest. They are also among the most endangered marine mammals in the world.

Researchers have identified three main threats to the recovery of Southern Residents: 1.) availability of prey, 2.) vessel noise and traffic, and 3.) chemical pollutants.  Unlike transient killer whales that prey on marine mammals such as seals, Southern Residents prey on fish.

Like all killer whales, Southern Residents use echolocation, a process of producing short sound pulses that bounce off objects to detect and identify things in the water, including their preferred prey, Chinook salmon.  But vessel traffic can disrupt the whales’ behavior and radiated noise from vessels can mask echolocation signals the whales use for hunting. This crowded and noisy environment can make it more difficult for hungry whales to find and catch their prey.

For the past several years, we have been working to better understand the effects that vessel traffic and underwater noise are having on individual whales. We have done this by placing suction-cup tags on several members of the Southern Resident population.  These digital acoustic recording tags, or DTAGs, contain two underwater microphones to record sound, along with pressure, accelerometer, and magnetometer sensors that allow us to re-create whale movement [1], much like an activity tracker in your watch or smartphone.
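
To give a flavor of how movement is re-created from tag sensors (this is a textbook-style sketch, not the DTAG processing toolkit of [1]; the axis convention of x forward, y left, z up is our assumption), pitch and roll can be estimated from a low-pass-filtered accelerometer sample taken while the animal is gliding, so that the sensor mostly measures gravity.

    # Minimal sketch: pitch and roll of the tag from one smoothed accelerometer sample.
    import numpy as np

    def pitch_and_roll(acc):
        """acc: 3-axis accelerometer vector (x forward, y left, z up); returns radians."""
        ax, ay, az = acc / np.linalg.norm(acc)
        pitch = np.arcsin(np.clip(ax, -1.0, 1.0))   # nose-up is positive in this convention
        roll = np.arctan2(ay, az)                    # rotation about the longitudinal axis
        return pitch, roll

    print(pitch_and_roll(np.array([0.17, 0.0, 0.98])))  # slight nose-up, level roll

Heading would come from the magnetometer, and finer-scale movements such as rolls and jerks show up in the rapidly varying part of the accelerometer signal.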

When a whale was tagged, we collected GPS data on all nearby vessels and observed the whale's feeding behavior. When possible, we also collected any scraps left behind after a feeding event to better understand how the whales make their catch and what they are eating.

Over the past four years, we have deployed 28 DTAGs, yielding a rich set of acoustic and movement data.  So far, we have found:

  • Unsurprisingly, the DTAGs measured higher noise levels when there were more vessels around and when the vessels were moving faster [2].
  • The frequencies of sounds emitted by vessels overlapped with the echolocation frequencies that the whales use to hunt fish.
  • Additionally, we could distinguish different foraging activities in the acoustic and movement records, including the echolocation signals the whales used to search for and pursue fish, the fast rolls and jerks of fish chases, and the crunching sounds of eating after a fish kill.

Figure 1. A Southern Resident killer whale with a suction-cup attached DTAG.  Photo taken by Candice Emmons under NOAA NMFS issued Research Permit No. 781-1824.

These results allowed us to identify different phases of foraging and determine how vessels and/or noise affect the whales’ behavior and their ability to catch fish. This work, along with a comparative investigation involving DTAG data from Northern Resident killer whales, a fish-eating population that is growing steadily, is improving our understanding of these killer whale populations.

This improved understanding is informing killer whale conservation and management measures, including assessing the effectiveness of vessel regulations for killer whales in the U.S. [3].  Additionally, the Pacific Whale Watch Association’s updated guidelines include a slow zone around killer whales because recent research showed that speed is the biggest factor in how much noise reaches the whales. Science is informing real change for the benefit of the whales.

Figure 2 (video). An animation of the track of a tagged whale and all of the boats around it during an entire tag deployment.  The whale track is shown by the thicker pale yellow line, and each vessel track is connected by a thin line. Animation prepared by Damon Holzer of NOAA Northwest Fisheries Science Center.

[1] M. P. Johnson and P. L. Tyack, “A digital acoustic recording tag for measuring the response of wild marine mammals to sound,” IEEE Journal of Ocean Engineering, vol 28, pp. 3-12, 2003.

[2] M. M. Holt, M. B. Hanson, D. A. Giles, C. K. Emmons, and J. T. Hogan, “Noise levels received by endangered killer whales Orcinus orca before and after implementation of vessel regulations,” Endangered Species Research, vol. 34, pp. 15-26, 2017.

[3] G. A. Ferrara, T. M. Mongillo, and L. M. Barre, “Reducing disturbance from vessels to Southern Resident killer whales: Assessing the effectiveness of the 2011 federal regulations in advancing recovery goals,” NOAA Technical Memorandum NMFS-OPR-58, 76 pp., 2017.

1pSCb15 – Why your boot might not sound like my boot: Gender, ethnicity, and back-vowel fronting in Mississippi

Wendy Herd – wherd@english.msstate.edu
Joy Cariño – smc790@msstate.edu
Meredith Hilliard – mah838@msstate.edu
Emily Coggins – egc102@msstate.edu
Jessica Sherman – jls1790@msstate.edu

Linguistics Research Laboratory
Mississippi State University
Mississippi State, MS 39762

Popular version of paper 1pSCb15, “The role of gender, ethnicity, and rurality in Mississippi back-vowel fronting.”
Presented Monday afternoon, November 5, 2018, Upper Pavilion, 176th ASA Meeting, Victoria, Canada

We often notice differences in pronunciation between our own speech and that of other speakers. We even use differences, like the Southern pronunciation of hi or the Northeastern absence of ‘r’ in park, to guess where a given speaker is from. A speaker’s pronunciation also includes cues that tell us which social groups that speaker identifies with. For example, the way you pronounce words might give listeners information about where you are from, whether you identify with a specific cultural group, whether you identify as a man, a woman, or a non-binary gender, as well as other information.

Back-vowel fronting is a particular type of pronunciation change that affects American English vowels like the /u/ in boot and the /o/ in boat. While these two vowel sounds are canonically produced with the tongue raised in the back of the mouth, speakers from across the United States sometimes produce these vowels with the tongue closer to the front of the mouth, nearing the position of the tongue in words like beat. We can measure this difference in tongue position by analyzing F1 and F2, the first two formants, which carry the frequency information that allows us to differentiate vowel sounds. As seen in Figure 1, F1 and F2 (i.e., the dark horizontal bars in the bottom portion of the images) are very close together when the [u] of boot is pronounced in the back of the mouth, while F1 and F2 are far apart when the [u] of boot is pronounced in the front of the mouth. These differences in pronunciation can also be heard in the sound files corresponding to each image.
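
For readers who want to try this on their own recordings, here is a minimal sketch of one common way to estimate F1 and F2 using linear predictive coding (LPC); it is not the analysis script used in this study, and the file name and parameter choices are placeholders.

    # Illustrative sketch: rough F1/F2 estimation from a short, steady vowel segment.
    import librosa
    import numpy as np

    y, sr = librosa.load("boot_vowel.wav", sr=16000)        # hypothetical vowel clip
    y = y * np.hamming(len(y))                               # window the segment
    a = librosa.lpc(y, order=int(2 + sr / 1000))             # LPC polynomial coefficients

    roots = [r for r in np.roots(a) if np.imag(r) > 0]       # one root per conjugate pair
    freqs = sorted(np.angle(roots) * sr / (2 * np.pi))       # pole angles -> frequencies (Hz)
    freqs = [f for f in freqs if f > 90]                      # drop spurious near-DC poles
    print(f"F1 ~ {freqs[0]:.0f} Hz, F2 ~ {freqs[1]:.0f} Hz")

In a fronted boot, F2 rises toward the high F2 of beat, so the gap between F1 and F2 widens, which is exactly the pattern visible in the spectrograms of Figure 1.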

Audio files (corresponding to Figure 1a-d): boot pronounced by a Black male speaker, a Black female speaker, a White male speaker, and a White female speaker.

Figure 1. Waveform (top) and spectrogram (bottom) of (a-b) boot pronounced with a back vowel by a Black male speaker and by a Black female speaker, and of (c-d) boot pronounced with a fronted vowel by a White male speaker and by a White female speaker.

Other studies have found fronting in words like boot and boat in almost every regional dialect across the United States; however, back-vowel fronting is still primarily associated with the speech of young women, and the research in this area still tends to be limited to the speech of White speakers [1, 2]. The few studies that focused on Black speakers have reported mixed results, either that Black speakers do not front back vowels [3] or that Black speakers do front back vowels but exhibit less extreme fronting than White speakers [4]. Note that in the case of the former study, only male speakers from North Carolina were recorded, and in the case of the latter, both male and female speakers were recorded, but they were all from Memphis, an urban area.

Our study is different in that it includes recordings of both men and women and both Black and White speakers and in that it focuses on a specific geographic region, thus minimizing variation due to regional differences that might be confounded with variation due to gender and/or ethnicity. We recorded the speech of 73 volunteers from Mississippi, making sure to recruit similar numbers of volunteers from different regions and/or cities in the state. The study included 19 Black female speakers, 15 Black male speakers, 20 White female speakers, and 19 White male speakers, all of whom fell within the age range of 18 – 22. This allowed us to directly compare the speech of women and men as well as Black and White speakers within Mississippi.

As can be seen in Figure 2, we found that speakers who identified as White were much more likely to front their back vowels in boot and boat than speakers who self-identified as Black. However, we did not find any gender differences. With the exception of one speaker, women who identified as Black were just as resistant to back-vowel fronting as men. Likewise, men who identified as White were just as likely to front their back vowels as women.


Figure 2. Scatterplots of vowels produced in boot (yellow), boat (blue), beat (red), book (green), and bought (purple) by Black male speakers (top-left), Black female speakers (top-right), White male speakers (bottom-left), and White female speakers (bottom-right). Each point represents a different speaker. The words beat, book, and bought as well as the labels “high,” “low,” “front,” and “back” were included to illustrate the most front/back and high/low points in the mouth.

Why do Black speakers and White speakers pronounce the vowels in boot and boat differently? Speakers tend to pronounce vowels, like other speech sounds, the way others in their social group pronounce them. As such, a fronted /u/ or /o/ can serve as a cue telling listeners that the speaker identifies with other speakers who also front those vowels (in this case, White speakers), and vice versa. Note that while back-vowel fronting might be associated with a more feminine identity in other regional dialects, that may not be the case in Mississippi, because we found no gender differences. Finally, to learn more about how we use back-vowel fronting to align ourselves with social groups, we need to look at how listeners from different groups perceive fronted back vowels, as well as at the degree of back-vowel fronting that occurs in spontaneous speech. What do you think? Do you front your back vowels? Can you hear the difference in the recordings above?

 

  1. Fridland, V. 2001. The social dimension of the Southern Vowel Shift: Gender, age and class. Journal of Sociolinguistics, 5(2), 233-253.
  2. Clopper, C., Pisoni, D., & de Jong, K. 2005. Acoustic characteristics of the vowel systems of six regional varieties of American English. Journal of the Acoustical Society of America, 118(3), 1661-1676.
  3. Holt, Y. 2018. Mechanisms of vowel variation in African American English. Journal of Speech, Language, and Hearing Research, 61, 197-209.
  4. Fridland, V. & Bartlett, K. 2006. The social and linguistic conditioning of back vowel fronting across ethnic groups in Memphis, Tennessee. English Language and Linguistics, 10(1), 1-22.

5pAB3 – Seasonal patterns in marine mammal vocalizations in the western Canadian Arctic

William D. Halliday
Stephen J. Insley
Xavier Mouy

Presented at the 176th ASA Meeting

Climate change is causing rapid changes to the Arctic marine environment through a combination of sea ice loss and increased human activity. It is imperative that we monitor marine species to determine how they are reacting to these changes, but to do this, we must monitor these species over long periods of time and must have a baseline for comparison. In this presentation, I examine underwater acoustic data that our team collected at two sites in the western Canadian Arctic (Sachs Harbour and Ulukhaktok) and use these data to assess when four species of marine mammals in the region (beluga and bowhead whales, bearded and ringed seals) were vocalizing. For the whales, the timing of vocalization serves as an estimate of migration timing; for the seals, vocalization timing is more representative of the timing of the mating season.

Our data show that both whale species migrated into the region in April, and that beluga whales migrated out of the region in the early autumn, whereas bowhead whales migrated out in the late autumn. Both whale species were recorded later into the year at Ulukhaktok than at Sachs Harbour. Patterns in seal vocalizations were quite different between the two sites. Bearded seals vocalized constantly during the winter and spring at Sachs Harbour, calling 24 hours a day between April and June, whereas at Ulukhaktok their vocalizations began around the same time but were much more sporadic and appeared to taper off before the mating season began in the spring. Ringed seals were generally quiet at Sachs Harbour, whereas their vocalizations were abundant throughout the winter at Ulukhaktok.

These data serve as a baseline record for all four species and will allow for useful comparisons as we continue to monitor both sites into the future as the climate continues to change. They will also allow us to examine the influence of human-induced stressors, such as increased underwater noise, on these animals. We will also expand our monitoring network throughout the region in order to more fully understand these species.


5aSP2 – Two-dimensional high-resolution acoustic localization of distributed coherent sources for structural health monitoring

Tyler J. Flynn (t.jayflynn@gmail.com),
David R. Dowling (drd@umich.edu)

University of Michigan
Mechanical Engineering Dept.
Ann Arbor, MI 48109

Popular version of paper 5aSP2 “Two-dimensional high-resolution acoustic localization of distributed coherent sources for structural health monitoring”
Presented Friday morning, 9 November 2018 9:15-9:30am Rattenbury A/B
176th ASA Meeting Victoria, BC

When in use, many structures – like driveshafts, windmill blades, ship hulls, etc. – tend to vibrate, casting pressure waves (aka sound) into the surrounding environment. When worn or damaged, these systems may vibrate differently, resulting in measurable changes to the broadcast sound. This presents an opportunity for the enterprising acoustician: could you monitor systems, and even locate structural defects, at a distance by exploiting acoustic changes? Such a technique would surely be useful for structures that are difficult to reach or that sit in challenging environments, like ships in the ocean – though these benefits would come at the cost of the added complexity of measuring sound precisely. This work shows that yes, it is possible to localize defects using only acoustic measurements, and the technique is validated with two proof-of-concept experiments.

In cases where damage affects how a structure vibrates locally (e.g., near the defect), localizing the damage reduces to finding out where the source of the sound is changing. The most common method for figuring out where sound is coming from is known as beamforming. Put simply, beamforming involves listening for sounds at different points in space (using multiple microphones, known as an array) and then looking for relative time delays between microphones to back out the direction(s) of the incident sound. This presents two distinct challenges for locating defects. 1) The acoustic changes caused by a defect are quite small compared to all the sound being generated, so they can easily get ‘washed out’. This can be addressed by using previously recorded measurements of the undamaged structure and then subtracting these recordings in a special way, such that the differences between the damaged and undamaged structures are what get localized. Even then, more advanced high-resolution beamforming techniques are needed to precisely pinpoint the changes. That leads to the second challenge. 2) Sound emitted from vibrating structures is typically coherent (meaning that sounds coming from different directions are strongly related), and this coherence causes problems for high-resolution beamforming. However, a trick can be used wherein the full array of microphones is divided into smaller subarrays that are then averaged in a special way to sidestep the coherence problem.
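
Here is a brief, generic sketch of that subarray-averaging trick, often called spatial smoothing, paired with a minimum-variance (Capon) beamformer; it is our own one-dimensional illustration rather than the 8x8 two-dimensional processing used in the experiments, and the variable names are assumptions.

    # Illustrative sketch: spatial smoothing so a high-resolution beamformer can
    # handle coherent sources.
    import numpy as np

    def spatially_smoothed_csm(snapshots, subarray_len):
        """snapshots: (num_mics, num_snapshots) complex pressures at one frequency.
        Returns the cross-spectral matrix averaged over all overlapping subarrays."""
        num_mics, num_snaps = snapshots.shape
        num_sub = num_mics - subarray_len + 1
        csm = np.zeros((subarray_len, subarray_len), dtype=complex)
        for start in range(num_sub):
            sub = snapshots[start:start + subarray_len, :]
            csm += sub @ sub.conj().T / num_snaps
        return csm / num_sub

    def mvdr_power(csm, steering):
        """Minimum-variance (Capon) output power for one steering vector."""
        w = np.linalg.solve(csm + 1e-6 * np.eye(len(csm)), steering)
        return 1.0 / np.real(steering.conj() @ w)

In practice, a steering vector is formed for each candidate source location and scanned against the smoothed cross-spectral matrix to build the high-resolution map.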

acoustic localization

Figure 1: Experimental setups. The square microphone array sitting above a single speaker source (top left). The microphone array sitting above the clamped aluminum plate that is vibrated from below (right). A close-up of the square microphone array (bottom left).

Two validation experiments were conducted. In the first, an 8×8 array of 64 microphones was used to record 5 kHz pulses from small loudspeakers at various locations on the floor (Figure 1). With three speaker sources in an arbitrary configuration, a recording was made. The volume of one source was then reduced by 20% and another measurement was made. Using the described method (with the 8×8 array subdivided and averaged over twenty-five 4×4 subarrays), the 20% change was precisely located, in close agreement with computer simulations of the experiment (Figure 2). To test actual damage, in the second experiment a 3.8-cm cut was added to a 30-cm-square aluminum plate. The plate, vibrated from below to induce sound, was recorded from above with and without the cut. Once again using the method described here, the change, i.e., the cut, was successfully located (Figure 3) – a promising result for practical applications of the technique.

Figure 2: Results of the first experiment. The top row of images uses the proposed technique, while the bottom uses a conventional technique. A ‘subtraction’ between the two very similar acoustic measurements (far, center left) allows for precise localization of the 20% change (center right) and great agreement with simulated results (far right).

Figure 3: Results of the second experiment. The two left images show vibrational measurement of the plate (vibrated around 830 Hz) with and without the added cut, showing that the cut noticeably affects the vibration. The right image shows high-resolution acoustic localization of the cut using the described technique (at 3600 Hz).

2aSC11 – Adult imitating child speech: A case study using 3D ultrasound

Colette Feehan – cmfeehan@iu.edu
Steven M. Lulich – slulich@iu.edu
Indiana University

Popular version of paper 2aSC11
Presented Tuesday morning, November 6, 2018
176th ASA meeting, Victoria

Many people do not realize that a lot of the “child” voices they hear in animated TV shows and movies are actually produced by adults [1]. The field of animation has a long tradition of using adults to voice child characters, as in Peter Pan (1953), The Jetsons (1962-63), Rugrats (1991-2004), The Wild Thornberrys (1998-2004), and The Boondocks (2005-2014), to name just a few [1]. Reasons for using adults include the facts that children are hard to direct, that they legally cannot work long hours, and that their voices change as they grow up [1]. If real children had been used in a series like The Simpsons (1989-), the show might be on Bart number seven by now, whereas with the talented Nancy Cartwright, Bart has maintained the same vocal spunk of his 1980s self [8].

Voice actors are an interesting population for linguistic study because they are essentially professional folk linguists [9]: without formal training in linguistics, they skillfully and reliably perform complex linguistic tasks. Previous studies [10-17] of voice actors investigated how changes in pitch, movement of the vocal tract, and voice quality (e.g., how breathy or scratchy a voice sounds) affect the way listeners and viewers understand and interpret an animated character. The current investigation uses 3D ultrasound data from an amateur voice actor to address the question: What do adult voice actors do with their vocal tracts in order to sound like a child?

Ultrasound works by emitting high-frequency sound and measuring the time it takes for the sound to echo back. For this study, an ultrasound probe (like what you use to see a baby) was placed under the participant’s chin and held in place using a customized helmet. The sound waves travel through the tissues of the face and tongue—a fairly dense medium—and when the waves come into contact with the air along the surface of the tongue—a much lower density medium—they echo back.

These echoes are represented in ultrasound images as a bright line (see Figure 1).
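
As a rough illustration of the echo-timing principle (the numbers below are our own back-of-the-envelope values, not measurements from this study), the depth of the tongue surface follows directly from the round-trip time of the pulse.

    # Depth of the reflecting surface from the round-trip echo time: d = c * t / 2.
    c_tissue = 1540.0            # typical speed of sound in soft tissue (m/s)
    t_echo = 78e-6               # hypothetical round-trip time of 78 microseconds
    depth = c_tissue * t_echo / 2
    print(f"Tongue surface is roughly {depth * 100:.1f} cm above the probe")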

Multiple images can be analyzed and assembled into 3D representations of the tongue surface (see Figure 2).

This study identified three strategies for imitating a child’s voice. First, the actor raised the hyoid bone (a tiny bone in your neck), which is visible as an acoustic “shadow” circled in Figure 3.

This gesture effectively shortens the vocal tract, helping the actor to sound like a smaller person. Second, the actor pushed tongue movements forward in the mouth (visible in Figure 4).

This gesture shortens the front part of the vocal tract, which also helps the actor to sound like a smaller person. Third, the actor produced a prominent groove down the middle of the tongue (visible in Figure 2), effectively narrowing the vocal tract. These three strategies together help voice actors sound like people with smaller vocal tracts, which is very effective when voicing an animated child character!
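
To see why all three strategies make the actor sound smaller, a textbook approximation helps (this is a standard uniform-tube calculation, not a measurement from this study): the resonances of a tube closed at the glottis and open at the lips scale inversely with its length, so shortening the vocal tract pushes every resonance upward.

    # Resonances of a uniform tube closed at one end: F_n = (2n - 1) * c / (4 * L).
    c = 35000.0                          # speed of sound in warm, moist air (cm/s)
    for L in (17.5, 14.0):               # adult-like vs. shortened vocal tract length (cm)
        formants = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
        print(f"{L} cm tract:", [f"{f:.0f} Hz" for f in formants])

Going from a 17.5-cm to a 14-cm tract raises the first three resonances from roughly 500, 1500, and 2500 Hz to about 625, 1875, and 3125 Hz, which is the “smaller person” quality the actor is aiming for.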

 

References

  1. Holliday, C. “Emotion Capture: Vocal Performances by Children in the Computer-Animated Film”. Alphaville: Journal of Film and Screen Media 3 (Summer 2012). Web. ISSN: 2009-4078.
  2. Disney, W. (Producer) Geronimi, C., Jackson, W., Luske, H. (Directors). (1953). Peter Pan [Motion Picture]. Burbank, CA: Walt Disney Productions.
  3. Hanna, W., & Barbera, J. (1962). The Jetsons. [Television Series] Los Angles, CA: Hanna Barbera Productions.
  4. Klasky, A., Csupo, G., Coffey, V., Germain, P., Harrington, M. (Executive Producers) (1991). Rugrats [Television Series]. Hollywood, CA: Klasky/Csupo, Inc.
  5. Klasky, A., & Csupo, G. (Executive Producers). (1998). The Wild Thornberrys [Television Series]. Hollywood, CA: Klasky/Csupo, Inc.
  6. McGruder, A., Hudlin, R., Barnes, R., Cowan, B., Jones, C. (Executive Producers). (2005). The Boondocks [Television Series] Culver City, CA: Adelaide Productions Television.
  7. Brooks, J., & Groening, M. (Executive Producers). (1989). The Simpsons [Television Series]. Los Angeles, CA: Gracie Films.
  8. Cartwright, N. (2001) My Life as a 10-Year-Old Boy. New York: Hyperion Books.
  9. Preston, D. R. (1993). Folk dialectology. American dialect research, 333-378.
  10. Starr, R. L. (2015). Sweet voice: The role of voice quality in a Japanese feminine style. Language in Society, 44(01), 1-34.
  11. Teshigawara, M. (2003). Voices in Japanese animation: a phonetic study of vocal stereotypes of heroes and villains in Japanese culture. Dissertation.
  12. Teshigawara, M. (2004). Vocally expressed emotions and stereotypes in Japanese animation: Voice qualities of the bad guys compared to those of the good guys. Journal of the Phonetic Society of Japan, 8(1), 60-76.
  13. Teshigawara, M., & Murano, E. Z. (2004). Articulatory correlates of voice qualities of good guys and bad guys in Japanese anime: An MRI study. In Proceedings of INTERSPEECH (pp. 1249-1252).
  14. Teshigawara, M., Amir, N., Amir, O., Wlosko, E., & Avivi, M. (2007). Effects of random splicing on listeners’ perceptions. In Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS).
  15. Teshigawara, M. (2009). Vocal expressions of emotions and personalities in Japanese anime. In Izdebski, K. (Ed.), Emotions of the Human Voice, Vol. III: Culture and Perception. San Diego: Plural Publishing, 275-287.
  16. Teshigawara, K. (2011). Voice-based person perception: two dimensions and their phonetic properties. ICPhS XVII, 1974-1977.
  17. Uchida, T. (2007). Effects of F0 range and contours in speech upon the image of speakers’ personality. Proc. 19th ICA, Madrid. http://www.seaacustica.es/WEB_ICA_07/fchrs/papers/cas-03-024.pdf