1pEAa2 – Rotor noise control using 3D-printed porous materials

Chaoyang Jiang, Yendrew Yauwenas, Jeoffrey Fischer, Danielle Moreau and Con Doolan – c.doolan@unsw.edu.au
School of Mechanical and Manufacturing Engineering
University of New South Wales
Sydney, NSW, Australia, 2052

Popular version of paper 1pEAa2, “Rotor noise control using 3D-printed porous materials”
Presented Monday, December 04, 2017, 1:15-1:30 PM, Balcony N
174th ASA meeting, New Orleans

You may not realise it, but you are surrounded by rotor blades.  Fans in your computer, the air-conditioning system above your head, the wind turbine creating your renewable energy and the jet engines powering you to your next holiday or business meeting are some examples of technology where rotor blades are essential.  Unfortunately, rotor blades create noise and with so many of them, controlling rotor noise is necessary to improve the liveability and health of our communities.

Perhaps the most challenging type of rotor noise to control is turbulent trailing edge noise.  Trailing edge noise is created when turbulence in the air surrounding the rotor blade passes over the blade's trailing edge.  This noise is produced over a wide range of frequencies (it is broadband in nature) because it is the acoustic signature of turbulence, which is a random mixture of swirling eddies of varying size.

Because this noise is driven by turbulence and its interaction with the rotor blade, it is difficult to predict and very challenging to control.  Adding porous material to a rotor blade has been shown to provide some noise relief; however, the amount of noise control is usually small and sometimes more noise is created by the porous material itself.  The problem to solve is to work out how to fabricate a quiet rotor blade with optimised and integrated porosity.  This is a significant departure from current methods, which normally apply standard porous materials late in the design or manufacturing process.

We use 3D printing technology to overcome this problem.  3D printing (also known as additive manufacturing) allows complex designs to be realised quickly through carefully controlled deposition of material (polymer, metal or ceramic).  We have used 3D printing to explore how porosity in polymers can be optimised with subsurface cavities to provide maximum sound absorption over a wide range of frequencies.  Then, we 3D print these porous designs directly into the rotor blade of a fan and test their acoustic performance in a special facility at UNSW Sydney.

Figure 1(a) shows 3D printed rotor blades under test at UNSW Sydney, with a picture of the 3D printed blade tip, with porous trailing edge, shown in figure 1(b).  The figure shows a three-bladed fan with a microphone array in the background.  The microphone array allows very accurate noise measurements from the rotor blades.  When we compare solid and 3D printed porous blades, significant noise reduction is achieved, as shown in figure 2.  Over 10 dB of noise control can be achieved, which is much higher than other control methods.  Audio files (see below) allow you to hear the difference between regular solid blades and the 3D printed porous blades.
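For readers curious how such a comparison is made, the sketch below (our own illustration, not the analysis code used in the experiment) shows one common way to compare two microphone recordings: estimate the power spectral density of each signal and take the difference in decibels.  The file names and settings are hypothetical.

```python
# A minimal sketch of comparing blade noise spectra; the WAV file names and
# FFT settings are hypothetical, not taken from the experiment.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

fs_solid, solid = wavfile.read("solid_blades_900rpm.wav")      # hypothetical file
fs_porous, porous = wavfile.read("porous_blades_900rpm.wav")   # hypothetical file
if solid.ndim > 1:
    solid = solid[:, 0]      # keep one channel if the recording is stereo
if porous.ndim > 1:
    porous = porous[:, 0]

# Welch's method gives a smooth power spectral density estimate of each signal.
f, psd_solid = welch(solid.astype(float), fs=fs_solid, nperseg=4096)
_, psd_porous = welch(porous.astype(float), fs=fs_porous, nperseg=4096)

# Noise reduction in decibels at each frequency; positive values mean the
# porous blades are quieter than the solid ones at that frequency.
reduction_db = 10.0 * np.log10(psd_solid / psd_porous)
print(f"Peak reduction: {reduction_db.max():.1f} dB "
      f"at {f[reduction_db.argmax()]:.0f} Hz")
```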

3D printing has shown that it is possible to produce much quieter rotor blades than was previously possible.  Our next step is to further optimise the porosity designs to achieve maximum noise reduction.  We are also investigating the impact of these designs on aerodynamic performance to ensure excessive drag is not produced.  Further, we need to explore metallic 3D printing systems to make more durable rotor blades suitable for extreme environments, such as gas turbines.

Figure 1.  3D printed rotor blades under test at UNSW Sydney.  (a) Test rig with microphone array; (b) illustration of rotor blade with integrated porosity.

Figure 2.  Comparison of noise spectra from solid and porous rotor blades at 900 RPM and blade pitch angle of 5 degrees.

Audio 1: Solid rotor blades spinning at 900 RPM

Audio 2: 3D printed porous rotor blades spinning at 900 RPM

3aPPb7 – Influence of Age and Instrumental-Musical Training on Pitch Memory in Children and Adults

Aurora J. Weaver – ajw0055@auburn.edu
Molly Murdock- mem0092@auburn.edu
Auburn University
1199 Haley Center
Auburn, AL 36849

Jeffrey J. DiGiovanni – digiovan@ohio.edu
Ohio University
W151a Grover Center
Athens, Ohio

Dennis T. Ries – Dennis.Ries@ucdenver.edu
University of Colorado Anschutz Medical Campus
Building 500, Mailstop F546
13001 East 17th Place, Room E4326C
Aurora, CO 80045

Popular version of paper 3aPPb7
Presented Wednesday morning, December 6, 2017
174th ASA Meeting, New Orleans

Infants are inherently sensitive to the relational properties of music (e.g., musical intervals, melody).1 Knowledge of complex structural properties of music (e.g., key, scale), however, is learned to varying degrees through early school age.1-3 Acquisition of some features does not require specialized instruction, but extensive musical training further enhances the ability to learn musical structures.4 Related to this project, formal musical instruction is linked to improvement in listening tasks (other than music) that stress attention in adult participants.5,6,7

Musical training influences sound processing in the brain through learning-based processes while also enhancing lower-level acoustic processing within the brainstem.8 Behavioral and physiological evidence suggest there is a critical period for pitch processing refinement within these systems between the ages of 7 and 11 years.9-13 The purpose of this project was to determine the contributions of musical training and age to refinement of pitch processing beyond this critical period.

Individuals with extensive and active instrumental musical training were matched in age with individuals with limited instrumental musical training. This comparison served as a baseline to evaluate the extent of presumed physiologic changes within the brain/brainstem relative to the amount and duration of musical training.14,15 We hypothesized that the processing mechanisms of active musicians become increasingly more efficient over time, due to training, so that this group can devote more mental resources to retaining sound information during pitch perception tasks of varying difficulty. Sixty-six participants, in three age groups (i.e., 10-12 year olds, 13-15 year olds, and adults), completed two experiments.

The first experiment included a measure of non-verbal auditory working memory (pitch pattern span [PPS]).16 The second experiment used a pitch matching task, which closely modeled the procedure implemented by Ross and colleagues.17-19 Figure 1 displays the individual PPS scores for each instrumental training group as a function of age in years.

Figure 1. Individual PPS scores (y-axis) for each instrumental training group as a function of age in years (x- axis). The participant scores in the active group are represented by filled in circles, and the participants with limited instrumental training are open circles.

The second experimental task, a pitch matching production task, eliminated the typical need to understand musical terminology (e.g. naming musical notes). This method provided a direct comparison of musicians and non-musicians, when they could only rely on their listening skills to remember a target, and to match the pitch to an ongoing tonal sequence.17-19 We wanted to evaluate pitch matching accuracy (via constant error) and consistency (via standard deviation) in individuals with limited and active instrumental musical training. Figure 2 illustrates the timing pattern and describes the task procedure. Each participant completed thirty pitch matches.

Figure 2. Schematic representation of the timing pattern of the pure tones, showing the target and examples of the first three comparison tones that might have followed. Once the pitch target had been presented, an adjustable dial appeared on a touch screen and the first comparison stimulus was presented 0.2 seconds later. Note that the frequency of the first comparison tone was placed randomly 4-6 semitones above or below the target tone (not represented in this figure). The frequencies of subsequent tones were controlled by the participant through movement of the onscreen dial. Presentation of comparison tones continued, at the same time interval, until the participant had adjusted the pitch of the ongoing comparison tones to match the pitch target using the GUI dial.
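As a concrete illustration of the comparison-tone logic in the caption above, here is a short sketch (a hypothetical reconstruction of ours; the actual stimulus software, tone durations, and dial interface are not described here). The key relation is that a pitch n semitones away from a reference frequency f is f · 2^(n/12).

```python
# A hypothetical sketch of the comparison-tone sequence; the real task used a
# touch-screen dial and a human participant, not the fake dial steps below.
import random

def semitone_offset_to_hz(f_ref, n):
    """Frequency n semitones (half steps) away from a reference f_ref."""
    return f_ref * 2 ** (n / 12)

target_hz = 440.0  # hypothetical target pitch (A4)

# First comparison tone: randomly 4-6 semitones above or below the target.
offset = random.choice([-1, 1]) * random.uniform(4, 6)

# Subsequent tones follow the participant's dial; here a stand-in "dial"
# simply steps the offset toward the target until the match is close.
while abs(offset) > 0.1:
    step = min(0.5, abs(offset))          # stand-in for one dial movement
    offset -= step if offset > 0 else -step

print(f"Final match: {semitone_offset_to_hz(target_hz, offset):.1f} Hz")
```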

Figure 3 depicts the distribution of responses across age groups and instrumental training groups (see figure legend). Statistical analyses (i.e., MANOVA and linear regression) revealed that duration of instrumental musical training and age each uniquely contribute to enhanced memory for pitch, indicated by greater PPS scores and smaller standard deviations of the pitch matches. Unexpectedly, given that the task procedure makes participants equally likely to match a pitch above or below the target, the youngest children (ages 10-12) produced significantly sharper pitch matches (i.e., positive constant error) than the older participants (ages 13 and older; see Figure 3, dashed lines). That is, across music groups, the youngest participants on average tended to match pitches sharper than the presented target.
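The two memory measures reported here can be expressed in a few lines. The sketch below (our own illustration; the example numbers are invented, not study data) computes the signed deviation of each match in half steps: the mean of those deviations is the constant error (accuracy; positive = sharp) and their standard deviation is the consistency measure.

```python
# Illustrative computation of constant error and consistency; the target and
# matched frequencies below are invented, not data from the study.
import numpy as np

def deviation_half_steps(matched_hz, target_hz):
    """Signed deviation of a match from its target in half steps (semitones).
    Positive values are sharp (above the target); negative values are flat."""
    return 12 * np.log2(matched_hz / target_hz)

targets = np.array([262.0, 330.0, 440.0])   # hypothetical target pitches (Hz)
matches = np.array([266.0, 328.0, 452.0])   # hypothetical participant matches

dev = deviation_half_steps(matches, targets)
print(f"Constant error (accuracy): {dev.mean():+.2f} HS")
print(f"Standard deviation (consistency): {dev.std(ddof=1):.2f} HS")
```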

Figure 3. Displays the proportion of response matches produced as a function of the deviation in half-steps (smallest musical distance between notes, e.g., progressively going up the white and black keys on a piano) across age groups in rows (ages 10-12 years, top; ages 13-15 years, middle; ages 18-35 years, bottom) and instrumental training groups by column (Limited, left; Active, right). The dashed line depicts the overall accuracy (i.e., constant error) across pitch matches produced by each participant subgroup.

Matching individuals in age groups, with and without active musical training, allowed us to compare the unique contributions of age and duration of musical training to pitch memory. Consistent with our hypothesis, individuals with active and longer durations of musical training produced greater PPS scores and less degraded pitch matching performance (i.e., smaller standard deviations across pitch matches) than age-matched groups. Most individuals can distinguish pitch changes of a half step, although they may have considerable difficulty establishing a reliable relationship between a frequency and its note value.20,21,23,24 There are individuals, however, with absolute pitch, who have the capacity to name a musical note without the use of a reference tone.24 While no participant in either music group (Active or Limited) reported absolute pitch, two participants in the active music group made all thirty pitch matches within one semitone, that is, within one half step (HS) of the target. This may indicate that these two listeners were using memory of categorical notes to facilitate pitch matches (e.g., memory of the note A4 could help when matching a target pitch close to 440 Hz). Consistent with previous applications of this method,17,18,19 the pitch matching production task identified participants who possess similar categorical memory for tonal pitch even when musical notes and terminology were removed from the production method.

References

  1. Schellenberg, E. G., & Trehub, S. E. (1996). Natural musical intervals: Evidence from infant listeners. Psychological Science, 7(5), 272-277.
  2. Fujioka, T., Ross, B., Kakigi, R., Pantev, C., & Trainor, L. (2006). One year of musical training affects development of auditory cortical evoked fields in young children. Brain, 129(10), 2593-2608.
  3. Trehub, S. E., Bull, D., & Thorpe, L. A. (1984). Infants’ perception of melodies: The role of melodic contour. Child Development, 55(3), 821-830. doi:10.1111/1467-8624.ep12424362
  4. Morrongiello, B. A., & Roes, C. L. (1990). Developmental changes in children’s perception of musical sequences: Effects of musical training. Developmental Psychology, 26(5), 814-820.
  5. Strait, D., Kraus, N., Parbery-Clark, A., & Ashley, R. (2010). Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance. Hearing Research, 261, 22-29.
  6. Williamson, V. J., Baddeley, A. D., & Hitch, G. J. (2010). Musicians’ and nonmusicians’ memory for verbal and musical sequences: Comparing phonological similarity and pitch proximity. Memory and Cognition, 38(2), 163-175. doi: 10.3758/MC.38.2.163.
  7. Schön, D., Magne, C., & Besson, M. (2004). The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology, 41, 341-349.
  8. Kraus, N., Skoe, E., Parbery-Clark, A., & Ashley, R. (2009). Experience-induced malleability in neural encoding of pitch, timbre, and timing. Annals of the New York Academy of Sciences, 1169, 543-557.
  9. Banai, K., Sabin, A.T., Wright, B.A. (2011). Separable developmental trajectories for the abilities to detect auditory amplitude and frequency modulation. Hearing Research, 280, 219-227.
  10. Dawes, P., & Bishop, D. V. (2008). Maturation of visual and auditory temporal processing in school-aged children. Journal of Speech, Language, and Hearing Research, 51, 1002-1015.
  11. Moore, D., Cowan, J., Riley, A., Edmondson-Jones, A., & Ferguson, M. (2011). Development of auditory processing in 6- to 11-yr-old children. Ear and Hearing, 32, 269-285.
  12. Morrongiello, B. A., & Roes, C. L. (1990). Developmental changes in children’s perception of musical sequences: Effects of musical training. Developmental Psychology, 26, 814-820.
  13. Sutcliffe, P., & Bishop, D. (2005). Psychophysical design influences frequency discrimination performance in young children. Journal of Experimental Child Psychology, 91, 249-270.
  14. Habib, M., & Besson, M. (2009). What do musical training and musical experience teach us about brain plasticity? Music Perception, 26, 279-285.
  15. Zatorre, R. J. (2003). Music and the brain. Annals of the New York Academy of Sciences, 999, 4-14.
  16. Weaver, A. J., DiGiovanni, J. J., & Ries, D. T. (2015). The influence of musical training and maturation on pitch perception and memory. Poster presented at AAS, Scottsdale, AZ.
  17. Ross, D. A., & Marks, L. E. (2009). Absolute pitch in children prior to the beginning of musical training. Annals of the New York Academy of Sciences, 1169, 199-204. doi:10.1111/j.1749-6632.2009.04847.x
  18. Ross, D. A., Olson, I. R., & Gore, J. (2003). Absolute pitch does not depend on early musical training. Annals of the New York Academy of Sciences, 999(1), 522-526.
  19. Ross, D. A., Olson, I. R., Marks, L., & Gore, J. (2004). A nonmusical paradigm for identifying absolute pitch possessors. Journal of the Acoustical Society of America, 116, 1793-1799.
  20. Levitin, D. (2006). This is your brain on music: The science of human obsession. New York, NY: Dutton.
  21. Moore, B. C. J. (2003). An introduction to the psychology of hearing. London, UK: Academic Press.
  22. Hyde, K. L., Peretz, I., & Zatorre, R. J. (2008). Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia, 46, 632-639.
  23. McDermott, J. H., & Oxenham, A. J. (2008). Music perception, pitch, and the auditory system. Current Opinion in Neurobiology, 18(4), 452-463. http://doi.org/10.1016/j.conb.2008.09.005
  24. Dooley, K., & Deutsch, D. (2010). Absolute pitch correlates with high performance on musical dictation. Journal of the Acoustical Society of America, 128(2), 890-893. doi:10.1121/1.3458848

3aAB8 – Sea turtles are silent… until there is something important to communicate: first sound recording of a sea turtle

Amaury Cordero-Tapia  – acordero@cibnor.mx
Eduardo Romero-Vivas – evivas@cibnor.mx
CIBNOR
Mar Bermejo 195
Playa Palo de Santa Rita Sur 23090
La Paz, BCS, Mexico

Popular version of paper 3aAB8, “Opportunistic underwater recording of what might be a distress call of Chelonya mydas agassizii”
Presented Wednesday morning, December 6, 2017, 10:15-10:30 AM, Salon F/G/H
174th ASA Meeting, New Orleans, Louisiana
Click here to read the abstract.

Sea turtles are considered “the least vocal of all living reptiles” (DOSIT), since their vocalization has been documented only during nesting (Cook & Forrest, 2005). Although they are distributed worldwide in the oceans, there seem to be no recordings of the sounds they produce, perhaps until now.

In Baja California Sur, Mexico, there is a conservation program run by government authorities, industry, and non-governmental agencies focused on vulnerable, threatened, and endangered marine species. In zones with a high density of sea turtles, special nets, which allow the turtles to surface to breathe, are deployed monthly for monitoring purposes. The nets are checked by divers every 2 hours during the 24 hours of the census.

During one of these checks, a female green turtle (Chelonia mydas agassizii) was video recorded using an action camera. Subsequent analysis of the underwater recording showed a clear pattern of pulsed sound when the diver was in close proximity to the turtle. The signal falls within the reported hearing range for this species (Ketten & Bartol, 2005; Romero-Vivas & Cordero-Tapia, 2008), and given the circumstances, we think it might be a distress call. More recordings will confirm whether that is the case, although this first recording gives an initial hint of what to look for. Maybe sea turtles are not that silent; there was just no need to break the silence.
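For readers who would like to inspect a recording of their own, a spectrogram is the usual first look at a pulsed signal like this: regularly repeated pulses appear as evenly spaced vertical bands. The sketch below is a generic example, not the analysis pipeline used here, and the file name is hypothetical.

```python
# A generic sketch for inspecting a pulsed underwater sound; the file name
# and spectrogram settings are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("turtle_recording.wav")   # hypothetical file
if audio.ndim > 1:
    audio = audio[:, 0]                            # keep one channel if stereo

f, t, Sxx = spectrogram(audio.astype(float), fs=fs, nperseg=1024, noverlap=512)

# Pulsed sound shows up as regularly spaced vertical bands in time.
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```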

Figure 1. Green turtle in the special net & sound recording

References

Dosits.org. (2017). DOSITS: How do sea turtles hear? [online] Available at: http://dosits.org/animals/sound-reception/how-do-sea-turtles-hear/ [Accessed 16 Nov 2017].

Cook, S. L., and T. G. Forrest. 2005, Sounds produced by nesting Leatherback sea turtles (Dermochelys coriacea). Herpetological Review 36:387–390.

Ketten, D.R. and Bartol, S.M. 2005, Functional Measures of Sea Turtle Hearing. Woods Hole Oceanographic Institution: ONR Award No: N00014-02-1-0510.

Romero-Vivas, E. and Cordero-Tapia, A. 2008, Behavioral acoustic response of two endangered sea turtle species: Chelonia mydas agassizii (tortuga prieta) and Lepidochelys olivacea (tortuga golfina). XV Mexican International Congress on Acoustics, Taxco, 380-385.


3pIDa1 – Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles

Lora J. Van Uffelen, Ph.D – loravu@uri.edu
University of Rhode Island
Department of Ocean Engineering &
Graduate School of Oceanography
45 Upper College Rd
Kingston, RI 02881

Popular version of paper 3pIDa1, “Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles”
Presented Wednesday, December 06, 2017, 1:05-1:25 PM, Salon E
174th ASA meeting, New Orleans

What do you think of when you think of a drone?  A quadcopter that your neighbor flies too close to your yard?  A weaponized military system?  A selfie drone?  The word drone typically refers to an unmanned aerial vehicle (UAV), but it is now also used to refer to an unmanned underwater vehicle (UUV).  Aerial drones are typically outfitted with cameras, but cameras are not always the best way to “see” underwater.  Hydronephones are underwater vehicles, or underwater drones, equipped with hydrophones (underwater microphones), which receive and record sound underwater.  Sound is one of the best tools for sensing or “seeing” the underwater environment.

Sound travels 4-5 times faster in the ocean than it does in air. The speed of sound depends on ocean temperature, salinity, and pressure. Sound can also travel far – hundreds of miles under the right conditions! – which makes sound an excellent tool for things like underwater communication, navigation, and even measuring oceanographic properties like temperature and currents.
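To make that dependence concrete, the sketch below uses Medwin's (1975) simplified empirical formula for the speed of sound in seawater, one of several standard fits (not a formula from this paper).

```python
# Medwin's (1975) simplified empirical formula for sound speed in seawater;
# a standard approximation, not taken from this paper.
def sound_speed(T, S, z):
    """Sound speed in m/s.
    T: temperature (deg C), S: salinity (ppt), z: depth (m).
    Valid roughly for 0-35 deg C, 0-45 ppt, 0-1000 m."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35) + 0.016 * z)

# Both warm surface water and cold Arctic water carry sound 4-5 times
# faster than the roughly 343 m/s speed of sound in air.
print(sound_speed(T=20, S=35, z=10))    # about 1522 m/s
print(sound_speed(T=0, S=32, z=100))    # about 1447 m/s
```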

Here, the term hydronephone is used specifically to refer to an ocean glider, a subclass of UUV, used as an acoustic receiver [Figure 1].  Gliders are autonomous underwater vehicles (AUVs) because they do not require constant piloting.  A pilot can only communicate with a glider when it is at the sea surface; while it is underwater it travels autonomously.  Gliders do not have propellers; instead, they move through the water by controlling their buoyancy and using hydrofoil wings to “glide”. Key advantages of these vehicles are that they are relatively quiet, they have low power consumption so they can be deployed for long durations, they can operate in harsh environments, and they are much more cost-effective than traditional ship-based observational methods.

Figure 1: Seaglider hydronephones (SG196 and SG198) on the deck of the USCGC Healy prior to deployment in the Arctic Ocean north of Alaska in August 2016.

Two hydronephones were deployed August-September of 2016 and 2017 in the Arctic Ocean.  They recorded sound signals at ranges up to 480 kilometers (about 300 miles) from six underwater acoustic sources that were placed in the Arctic Ocean north of Alaska as part of a large-scale ocean acoustics experiment funded by the Office of Naval Research [Figure 2].  This acoustic system was designed to learn how sound travels in the Arctic Ocean, where temperatures and ice conditions are changing.  The hydronephones were a mobile addition to this stationary system, allowing for measurements at many different locations.

Figure 2: Map of Seaglider SG196 and SG198 tracks in the Arctic Ocean in August/September of 2016 and 2017. Locations of stationary sound sources are shown as yellow pins.

One of the challenges of using gliders is figuring out exactly where they are when they are underwater.  When the gliders are at the surface, they can get their position in latitude and longitude using Global Positioning System (GPS) satellites, in a similar way to how a handheld GPS or a cellphone gets position.  Gliders only have access to GPS when they come to the ocean surface because the GPS signals are electromagnetic waves, which do not travel far underwater.  The gliders only come to the surface a few times a day and can travel several miles between surfacings, so a different method is needed to determine where they are while they are deep underwater.  For the case of the Arctic experiment, the recordings of the acoustic transmissions from the six sources on the hydronephones could be used to position them underwater using sound, in a way that is analogous to the way GPS uses electromagnetic signals for positioning.
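A minimal sketch of that idea follows: if the source positions and the one-way travel times of their signals are known, a least-squares fit recovers the receiver position, much as GPS does with electromagnetic signals.  The coordinates, sound speed, and travel times below are invented for illustration, not values from the experiment.

```python
# Hypothetical acoustic trilateration; source positions, travel times, and
# sound speed are invented for illustration, not experiment values.
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1440.0    # m/s, an assumed average for cold Arctic water

# Known positions of moored sound sources (x, y in metres) -- hypothetical.
sources = np.array([[0.0, 0.0], [50e3, 0.0], [0.0, 50e3],
                    [50e3, 50e3], [25e3, -30e3], [-30e3, 25e3]])
travel_times = np.array([24.3, 31.0, 28.9, 35.6, 33.1, 30.2])  # s, hypothetical

def residuals(pos):
    """Predicted minus measured travel time from each source to position pos."""
    predicted = np.linalg.norm(sources - pos, axis=1) / SOUND_SPEED
    return predicted - travel_times

# Solve for the position that best explains all six arrival times.
fit = least_squares(residuals, x0=np.array([10e3, 10e3]))
print(f"Estimated receiver position (m): {fit.x}")
```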

Improvements in underwater positioning will make hydronephones an even more valuable tool for ocean acoustics and oceanography.  As vehicle and battery technology improves and as data storage continues to become smaller and cheaper, hydronephones will also be able to record for longer periods of time allowing more extensive exploration of the underwater world.

Acknowledgments:  Many investigators contributed to this experiment including Sarah Webster, Craig Lee, and Jason Gobat from the University of Washington, Peter Worcester and Matthew Dzieciuch from Scripps Institution of Oceanography, and Lee Freitag from the Woods Hole Oceanographic Institution. This project was funded by the Office of Naval Research.

4aAB4 – Analysis of bats’ gaze and flight control based on the estimation of their echolocated points with time-domain acoustic simulation

Taito Banda – dmq1001@mail4.doshisha.ac.jp
Miwa Sumiya – miwa1804@gmail.com
Yuya Yamamoto – dmq1050@mail4.doshisha.ac.jp
Yasufumi Yamada – yasufumi.yamada@gmail.com
Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, Kyoto, Japan

Yoshiki Nagatani – nagatani@ultrasonics.jp
Department of Electronics, Kobe City College of Technology, Kobe, Japan.

Hiroshi Araki – Araki.Hiroshi@ak.MitsubishiElectric.co.jp
Advanced Technology R&D Center, Mitsubishi Electric Corporation, Amagasaki, Japan

Kohta I. Kobayasi – kkobayas@mail.doshisha.ac.jp
Shizuko Hiryu – shiryu@mail.doshisha.ac.jp
Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, Kyoto, Japan

Popular version of paper 4aAB4 “Analysis of bats’ gaze and flight control based on the estimation of their echolocated points with time-domain acoustic simulation.”
Presented Friday morning, December 7, 2017, 8:45-9:00 AM, Salon F/G/H
174th ASA Meeting, New Orleans

Bats broadcast ultrasound and listen to the returning echoes to gather information about their surroundings, a process called echolocation. By analyzing these echoes, for example their arrival times, bats can determine the position, shape, and texture of objects [1-3]. In contrast to humans, who rely primarily on vision, bats use sound to sense the world. How does the world perceived through sound differ from the one we see? Because the two senses are so different, it is hard for us to imagine how bats see the world.

To address this question, we simulated the echoes arriving at the bats during obstacle-avoiding flight based on the behavioral data so that we could investigate how the surrounding objects were described acoustically.

First, we arranged a 24-microphone array and two high-speed cameras in an experimental flight chamber (Figure 1) [4]. The timing, positions, and directions of the emitted ultrasound, as well as the flight paths, were measured. A small telemetry microphone was attached to the back of the bat so that the intensity of the emitted ultrasound could be recorded accurately [5]. The bat was made to follow an S-shaped flight path to avoid obstacle acrylic boards.

Based on these behavioral data, we simulated the propagation of the sounds emitted at each measured bat position, with the measured strength and direction, and obtained the echoes reaching the left and right ears from the obstacles. Using the interaural time difference of the echoes, we could acoustically identify the echolocated points in space for all emissions (square plots in Figure 2). We also investigated how the spatial and temporal distribution of the echolocated points changed as the bats became familiar with the space (top and bottom panels of Figure 2).
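As a simplified picture of how echo timing locates a point (our own illustration, not the time-domain simulation used in the study): the round-trip delay of an echo gives the range of the reflector, and the interaural time difference gives its bearing. The ear spacing, sound speed, and example timings below are assumed values.

```python
# Simplified echo geometry; ear spacing and example timings are assumed
# values, not parameters from the study's simulation.
import numpy as np

SOUND_SPEED = 343.0   # m/s in air
EAR_SPACING = 0.014   # m, an assumed distance between the bat's ears

def echolocated_point(t_left, t_right):
    """Range (m) and bearing (deg) of a reflector from echo arrival times (s).
    Positive bearing means the reflector is to the bat's right."""
    t_mean = 0.5 * (t_left + t_right)
    rng = SOUND_SPEED * t_mean / 2            # round trip -> one-way range
    itd = t_left - t_right                    # positive: right ear hears first
    # Far-field approximation: itd = EAR_SPACING * sin(bearing) / SOUND_SPEED
    bearing = np.arcsin(np.clip(SOUND_SPEED * itd / EAR_SPACING, -1.0, 1.0))
    return rng, np.degrees(bearing)

# An echo arriving ~10 ms after emission, reaching the right ear 20 us earlier:
rng, bearing = echolocated_point(t_left=0.01002, t_right=0.01000)
print(f"range ~{rng:.2f} m, bearing ~{bearing:.1f} deg")  # ~1.72 m, ~29 deg right
```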

Using this acoustic simulation, we analyzed changes in the echolocated points, which correspond to the parts of the objects the bats intended to gaze at. Comparing flights before and after habituation to the same obstacle layout, we found differences in how widely the echolocated points spread across the objects. After repeated flights through the same layout, false detections of objects were reduced and the bats' echolocated fields became narrower.

It is natural for animals to direct their attention toward objects appropriately and to adapt flight and sensing control cooperatively as they become familiar with a space. These findings suggest that our approach, acoustic simulation based on behavioral experiments, is an effective way to visualize how groups of objects are acoustically structured and represented in space for bats echolocating during flight. We believe it might offer a clue to the question: “What is it like to see as a bat?”

Figure 1. Diagram of the bat flight experiment. Blue and red circles indicate microphones on the wall and on the acrylic boards, respectively. Two high-speed video cameras are mounted at two corners of the room. Three acrylic boards are arranged to make the bats follow an S-shaped flight path around the obstacles.

Figure 2. Comparison of echolocated points before and after habituation to the space. The measured positions where the bat emitted sound are shown as circles, while the calculated echolocated points are shown as squares. The color variation from blue to red corresponds to the temporal sequence of the flight. The sizes of the circles and squares correspond to the strength of the emissions and of their echoes from the obstacles at the bat, respectively.

References:
[1] Griffin D. R., Listening in the Dark, Yale University Press, New Haven, CT, 1958

[2] Simmons J.A., Echolocation in bats: signal processing of echoes for target range, Science, vol. 171, pp.925-928., 1971

[3] Kick S. A., Target-detection by the echolocating bat, Eptesicus fuscus, J Comp Physiol A, vol. 145, pp.431-435, 1982

[4] Matsuta N, Hiryu S, Fujioka E, Yamada Y, Riquimaroux H, Watanabe Y., Adaptive beam-width control of echolocation sounds by CF-FM bats, Rhinolophus ferrumequinum nippon, during prey-capture flight, J Exp Biol., vol. 206, pp.1210-1218, 2013

[5] Hiryu S, Shiori Y, Hosokawa T, Riquimaroux H, Watanabe Y., On-board telemetry of emitted sounds from free-flying bats: compensation for velocity and distance stabilizes echo frequency and amplitude, J Comp Physiol A., vol. 194, pp.841-851, 2008