1pSP10 – Design of an Unmanned Aerial Vehicle Based on Acoustic Navigation Algorithm

Yunmeng Gong1 – (476793382@qq.com)
Huping Xu1 – (hupingxu@126.com)
Yu Hen Hu2 – (yuhen.hu@wisc.edu)
1 School of Logistics Engineering
Wuhan University of Technology
Wuhan, Hubei, China 430063
2Department of Electrical and Computer Engineering
University of Wisconsin – Madison
Madison, WI 53706 USA

Popular version of paper 1pSP10, "Design of an Unmanned Aerial Vehicle Based on Acoustic Navigation Algorithm"
Presented on Monday afternoon, December 4, 2017, 4:05-4:20 PM, Salon D
174th ASA Meeting, New Orleans

Acoustic UAV guidance is an enabling technology for future urban UAV transportation systems. When large numbers of commercial UAVs are tasked to deliver goods and services in a metropolitan area, they will need to be guided to travel in an orderly manner along aerial corridors above streets. They will need to land at and take off from designated parking structures and obey “traffic signals” to mitigate potential collisions.

A UAV acoustic guidance system consists of a group of ground stations distributed over the operating region. When a UAV enters the region, its flight path will be under the guidance of a regional air-traffic controller system. The UAV and the controller will communicate over a radio channel using Wi-Fi or 5G cellular Internet-of-Things protocols. The UAV's position will be estimated from the direction-of-arrival (DoA) angles of narrow-band acoustic signals.


Figure 1. UAV acoustic guidance system: (a) passive-mode acoustic guidance system; (b) active-mode acoustic guidance system

As shown in Figure 1, acoustic UAV guidance can operate in a passive self-guidance mode as well as an active guidance mode. In the passive self-guidance mode, beacons with known 3D positions will emit known, distinct narrow-band (harmonic) signals. A UAV will passively receive these acoustic signals using an on-board microphone phased array and will use the sampled signals to estimate the DoA of each beacon's harmonic signal. If the UAV is provided with the beacon stations' 3D coordinates, it will be able to determine its own location and heading to complement the estimates obtained from GPS or inertial guidance systems. The advantage of the passive guidance system is that multiple UAVs can use the same group of beacon stations to estimate their own positions. The technical challenge is that each UAV must carry a bulky acoustic phased array, and the received acoustic signal will suffer from strong noise interference due to the engine, propeller/rotor, and wind.
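To give a concrete sense of how a beacon's DoA can be estimated, the sketch below compares the phase of one beacon tone at two microphones of a pair and converts the phase difference to an arrival angle. This is only an illustration under simple far-field, free-field assumptions; the function name and parameters are ours, and it is not the authors' on-board algorithm, which combines many array elements and tracks several beacon frequencies.

```python
import numpy as np

def narrowband_doa(x1, x2, fs, f_beacon, mic_spacing, c=343.0):
    """Estimate the arrival angle (radians from broadside) of a narrow-band
    beacon tone from two microphone signals.

    x1, x2      : sampled signals from the two microphones (same length)
    fs          : sampling rate in Hz
    f_beacon    : known beacon frequency in Hz
    mic_spacing : distance between the two microphones in metres
    c           : speed of sound in air (m/s)
    """
    n = len(x1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - f_beacon))   # FFT bin nearest the beacon tone

    X1 = np.fft.rfft(x1 * np.hanning(n))
    X2 = np.fft.rfft(x2 * np.hanning(n))

    # Phase difference of the beacon tone between the two microphones
    dphi = np.angle(X2[bin_idx] * np.conj(X1[bin_idx]))

    # Convert phase difference to arrival angle: sin(theta) = c*dphi / (2*pi*f*d)
    sin_theta = np.clip(c * dphi / (2.0 * np.pi * f_beacon * mic_spacing), -1.0, 1.0)
    return np.arcsin(sin_theta)
```

In practice the microphone spacing must be less than half a wavelength at the beacon frequency to avoid ambiguous angles, which is one reason the array size and configuration questions discussed below matter.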

Conversely, in the active guidance mode, the UAV will actively emit an omni-directional, narrow-band acoustic signal at a harmonic frequency designated by the local air-traffic controller. Each beacon station will use its local microphone phased array to estimate the DoA of the UAV's acoustic signal. The UAV's location, speed, and heading will then be estimated by the local air-traffic controller and transmitted back to the UAV. The advantage of the active guidance mode is that the UAV carries a lighter payload, consisting of an amplified speaker and related circuitry. The disadvantage of this approach is that each UAV within the region needs to generate a harmonic signal with a distinct center frequency. As the number of UAVs within the region increases, the available acoustic frequencies may become insufficient.

In this paper, we investigate key issues relating to the design and implementation of a passive-mode acoustic guidance system. We ask fundamental questions such as: What is the effective range of acoustic guidance? What size and configuration should the on-board phased array have? What formulation of a direction-of-arrival estimation algorithm is efficient enough to be implemented on the computer on board a UAV?

We conducted ground experiments to measure sound attenuation as a function of distance and harmonic frequency. The results are shown in Figure 2 below.

Figure 2. Sound attenuation in air as a function of distance for different harmonic frequencies
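The trend in Figure 2 can be compared against a textbook free-field model: the level drops by 6 dB per doubling of distance (spherical spreading) plus a frequency-dependent atmospheric absorption term that grows with range. The sketch below implements only that simple model; the function name and the absorption coefficients in the example are illustrative placeholders, not our measured attenuation data.

```python
import numpy as np

def received_level_db(source_level_db, distance_m, absorption_db_per_m):
    """Very simple free-field propagation model: spherical spreading plus a
    frequency-dependent atmospheric absorption coefficient.

    source_level_db     : source level referenced to 1 m (dB SPL)
    distance_m          : source-receiver distance in metres
    absorption_db_per_m : atmospheric absorption for the tone of interest (dB/m)
    """
    spreading_loss = 20.0 * np.log10(distance_m)     # -6 dB per doubling of distance
    absorption_loss = absorption_db_per_m * distance_m
    return source_level_db - spreading_loss - absorption_loss

# Example: compare two hypothetical beacon tones over 10-500 m
# (the absorption values below are placeholders, not measured data)
for r in (10, 50, 100, 250, 500):
    low = received_level_db(100.0, r, absorption_db_per_m=0.005)
    high = received_level_db(100.0, r, absorption_db_per_m=0.05)
    print(f"{r:4d} m: low tone {low:5.1f} dB, high tone {high:5.1f} dB")
```

Under such a model, higher-frequency tones lose their advantage quickly with distance, which is why the choice of beacon frequency interacts with the intended guidance range.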

Using a commercial UAV (a DJI Phantom model), we conducted experiments to study the frequency spectrum of its sound in different motion states, in order to identify beacon frequencies that suffer the least interference from motor and rotor noise. An example of the acoustic spectrum during take-off is shown in Figure 3 below.

Figure 3. UAV acoustic noise during take-off

We also developed a simplified direction-of-arrival estimation algorithm that achieves encouraging accuracy when implemented on an STM32F407 microcontroller, which can easily be installed on a UAV.

1pEAa2 – Rotor noise control using 3D-printed porous materials

Chaoyang Jiang, Yendrew Yauwenas, Jeoffrey Fischer, Danielle Moreau and Con Doolan – c.doolan@unsw.edu.au
School of Mechanical and Manufacturing Engineering
University of New South Wales
Sydney, NSW, Australia, 2052

Popular version of paper 1pEAa2, “Rotor noise control using 3D-printed porous materials”
Presented Monday, December 04, 2017, 1:15-1:30 PM, Balcony N
174th ASA meeting, New Orleans

You may not realise it, but you are surrounded by rotor blades.  Fans in your computer, the air-conditioning system above your head, the wind turbine creating your renewable energy and the jet engines powering you to your next holiday or business meeting are some examples of technology where rotor blades are essential.  Unfortunately, rotor blades create noise and with so many of them, controlling rotor noise is necessary to improve the liveability and health of our communities.

Perhaps the most challenging type of rotor noise to control is turbulent trailing edge noise.  Trailing edge noise is created when turbulence in the air flowing over the rotor blade passes the blade's trailing edge.  This noise is produced over a wide range of frequencies (it is broadband in nature) because it is the acoustic signature of turbulence, which is a random mixture of swirling eddies of varying size.

Because this noise is driven by turbulence and its interaction with the rotor blade, it is difficult to predict and very challenging to control.  Adding porous material to a rotor blade has been shown to provide some noise relief; however, the amount of noise reduction is usually small and sometimes more noise is created by the porous material itself.  The problem to solve is to work out how to fabricate a quiet rotor blade with optimised and integrated porosity.  This is a significant departure from current methods, which normally apply standard porous materials late in the design or manufacturing process.

We use 3D printing technology to overcome this problem.  3D printing (also known as additive manufacturing) allows complex designs to be realised quickly through carefully controlled deposition of material (polymer, metal or ceramic).  We have used 3D printing to explore how porosity in polymers can be optimised with subsurface cavities to provide maximum sound absorption over a wide range of frequencies.  Then, we 3D print these porous designs directly into the rotor blade of a fan and test their acoustic performance in a special facility at UNSW Sydney.

Figure 1(a) shows 3D-printed rotor blades under test at UNSW Sydney, with a picture of the 3D-printed blade tip, with its porous trailing edge, shown in Figure 1(b).  A three-bladed fan is shown, with a microphone array in the background.  The microphone array allows very accurate noise measurements from the rotor blades.  When we compare solid and 3D-printed porous blades, significant noise reduction is achieved, as shown in Figure 2.  Over 10 dB of noise reduction can be achieved, which is much higher than with other control methods.  Audio files (see below) allow you to hear the difference between regular solid blades and the 3D-printed porous blades.

3D printing has shown that it is possible to produce much quieter rotor blades than we have been able to previously.  Our next step is to further optimise the porosity designs to achieve maximum noise reduction.  We are also investigating the impact of these designs on aerodynamic performance to ensure excessive drag is not produced.  Further, exploring the use of metallic 3D printing systems is required to make more durable rotor blades suitable for extreme environments, such as gas turbine blades.


Figure 1.  3D-printed rotor blades under test at UNSW Sydney.  (a) Test rig with microphone array; (b) illustration of rotor blade with integrated porosity.


Figure 2.  Comparison of noise spectra from solid and porous rotor blades at 900 RPM and blade pitch angle of 5 degrees.

Audio 1: Solid rotor blades spinning at 900 RPM

Audio 2: 3D printed porous rotor blades spinning at 900 RPM

3aPPb7 – Influence of Age and Instrumental-Musical Training on Pitch Memory in Children and Adults

Aurora J. Weaver – ajw0055@auburn.edu
Molly Murdock- mem0092@auburn.edu
Auburn University
1199 Haley Center
Auburn, AL 36849

Jeffrey J. DiGiovanni – digiovan@ohio.edu
Ohio University
W151a Grover Center
Athens, Ohio

Dennis T. Ries – Dennis.Ries@ucdenver.edu
University of Colorado Anschutz Medical Campus
Building 500, Mailstop F546
13001 East 17th Place, Room E4326C
Aurora, CO 80045

Popular version of paper 3aPPb7
Presented Wednesday morning, December 6, 2017
174th ASA Meeting, New Orleans

Infants are inherently sensitive to the relational properties of music (e.g., musical intervals, melody).1 Knowledge of complex structural properties of music (e.g., key, scale), however, is learned to varying degrees through early school age.1-3 Acquisition of some features does not require specialized instruction, but extensive musical training further enhances the ability to learn musical structures.4 Related to this project, formal musical instruction is linked to improvement in listening tasks (other than music) that stress attention in adult participants.5,6,7

Musical training influences sound processing in the brain through learning-based processes while also enhancing lower-level acoustic processing within the brainstem.8 Behavioral and physiological evidence suggests there is a critical period for pitch-processing refinement within these systems between the ages of 7 and 11 years.9-13 The purpose of this project was to determine the contributions of musical training and age to the refinement of pitch processing beyond this critical period.

Individuals with extensive and active instrumental musical training were matched in age with individuals with limited instrumental musical training. This comparison served as a baseline to evaluate the extent of presumed physiologic changes within the brain/brainstem relative to the amount and duration of musical training.14,15 We hypothesized that the processing mechanisms of active musicians become increasingly efficient over time, due to training; this group can therefore focus more mental resources on the retention of sound information during pitch perception tasks of varying difficulty. Sixty-six participants in three age groups (i.e., 10-12-year-olds, 13-15-year-olds, and adults) completed two experiments.

The first experiment included a measure of non-verbal auditory working memory (pitch pattern span [PPS]).16 The second experiment used a pitch matching task, which closely modeled the procedure implemented by Ross and colleagues.17-19 Figure 1 displays the individual PPS scores for each instrumental training group as a function of age in years.


Figure 1. Individual PPS scores (y-axis) for each instrumental training group as a function of age in years (x-axis). Scores for participants in the active group are represented by filled circles, and scores for participants with limited instrumental training by open circles.

The second experimental task, a pitch matching production task, eliminated the typical need to understand musical terminology (e.g., naming musical notes). This method provided a direct comparison of musicians and non-musicians when they could rely only on their listening skills to remember a target and to match its pitch within an ongoing tonal sequence.17-19 We wanted to evaluate pitch matching accuracy (via constant error) and consistency (via standard deviation) in individuals with limited and active instrumental musical training. Figure 2 illustrates the timing pattern and describes the task procedure. Each participant completed thirty pitch matches.

Figure 2. Schematic representation of the timing pattern of the pure tones, showing the target and examples of the first three comparison tones that might have followed. Once the pitch target had been presented, an adjustable dial appeared on a touch screen and the first comparison stimulus was presented 0.2 seconds later. Note that the frequency of the first comparison tone was placed randomly 4-6 semitones above or below the target tone (not represented in this figure). The values of subsequent tones were controlled by the participant through movement of the onscreen dial. Presentation of comparison tones continued, at the same time interval, until the participant had adjusted the pitch of the ongoing comparison tones, using the GUI dial, to match the pitch target.

Figure 3 depicts the distribution of responses across age groups and instrumental training groups (see figure legend). Statistical analyses (i.e., MANOVA and linear regression) revealed that duration of instrumental musical training and age each uniquely contribute to enhanced memory for pitch, indicated by greater PPS scores and smaller standard deviations of the pitch matches. Unexpectedly, given that the task procedure makes participants equally likely to match a pitch above or below the target, the youngest children (ages 10-12) demonstrated significantly sharper pitch matches (i.e., positive constant error) than the older participants (13 and older; see Figure 3, dashed lines). That is, across music groups, the youngest participants on average tended to produce pitch matches that were sharper than the presented target pitch.

Figure 3. Proportion of response matches produced as a function of the deviation in half steps (the smallest musical distance between notes, e.g., progressively going up the white and black keys on a piano) across age groups in rows (ages 10-12 years, top; ages 13-15 years, middle; ages 18-35 years, bottom) and instrumental training groups in columns (Limited, left; Active, right). The dashed line depicts the overall accuracy (i.e., constant error) of the pitch matches produced by each participant subgroup.
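For readers who want the arithmetic behind these measures: a match of f_match Hz to a target of f_target Hz deviates by 12·log2(f_match/f_target) half steps; the mean of a participant's deviations is the constant error (positive values mean sharp matches) and their standard deviation is the consistency measure. The sketch below illustrates this with made-up numbers; it is not the study's analysis code, and the function name and values are ours.

```python
import numpy as np

def pitch_match_stats(matched_hz, target_hz):
    """Summarise a participant's pitch matches against a target tone.

    Deviations are expressed in half steps (semitones):
        deviation = 12 * log2(matched / target)
    The mean deviation is the constant error (positive = sharp on average);
    the standard deviation of the deviations measures consistency.
    """
    deviations = 12.0 * np.log2(np.asarray(matched_hz, dtype=float) / target_hz)
    return deviations.mean(), deviations.std(ddof=1)

# Hypothetical example: 5 of a participant's 30 matches to a 440 Hz target
ce, sd = pitch_match_stats([452.0, 444.0, 461.0, 438.0, 449.0], 440.0)
print(f"constant error = {ce:+.2f} half steps, SD = {sd:.2f} half steps")
```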

Matching individuals in age groups, with and without active musical training, allowed comparison of the unique contributions of age and duration of musical training to pitch memory. Consistent with our hypothesis, individuals with active and longer durations of musical training produced greater PPS scores, and their pitch matching performance was less degraded (i.e., they produced smaller standard deviations across pitch matches) than that of age-matched groups. Most individuals can distinguish pitch changes of a half step, although they may have considerable difficulty establishing a reliable relationship between a frequency and its note value.20,21,23,24 There are individuals, however, with absolute pitch, who have the capacity to name a musical note without the use of a reference tone.24 While no participant in either music group (Active or Limited) reported absolute pitch, two participants in the active music group produced all thirty pitch matches within 1 semitone, that is, within one half step (HS) of the target. This may indicate that these two listeners were using memory of categorical notes to facilitate pitch matches (e.g., memory of the note A4 could help when matching a target pitch close to 440 Hz in the task). Consistent with previous applications of this method,17,18,19 the pitch matching production task did identify participants who possess similar categorical memory for tonal pitch even when musical notes and terminology were removed from the production method.

References

  1. Schellenberg, E. G., & Trehub, S. E. (1996). Natural musical intervals: Evidence from infant listeners. Psychological Science, 7(5), 272-277.
  2. Fujioka, T., Ross, B., Kakigi, R., Pantev, C., & Trainor, L. (2006). One year of musical training affects development of auditory cortical evoked fields in young children. Brain, 129(10), 2593-2608.
  3. Trehub, S. E., Bull, D., & Thorpe, L. A. (1984). Infants’ perception of melodies: The role of melodic contour. Child Development, 55(3), 821-830. doi:10.1111/1467-8624.ep12424362
  4. Morrongiello, B. A., & Roes, C. L. (1990). Developmental changes in children’s perception of musical sequences: Effects of musical training. Developmental Psychology, 26(5), 814-820.
  5. Strait, D., Kraus, N., Parbery-Clark, A., & Ashley, R. (2010). Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance. Hearing Research, 261, 22-29.
  6. Williamson, V. J., Baddeley, A. D., & Hitch, G. J. (2010). Musicians’ and nonmusicians’ memory for verbal and musical sequences: Comparing phonological similarity and pitch proximity. Memory and Cognition, 38(2), 163-175. doi: 10.3758/MC.38.2.163.
  7. Schön, D., Magne, C., & Besson, M. (2004). The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology, 41, 341-349.
  8. Kraus, N., Skoe, E., Parbery-Clark, A., & Ashley, R. (2009). Experience-induced malleability in neural encoding of pitch, timbre, and timing. Annals of the New York Academy of Sciences, 1169, 543-557.
  9. Banai, K., Sabin, A.T., Wright, B.A. (2011). Separable developmental trajectories for the abilities to detect auditory amplitude and frequency modulation. Hearing Research, 280, 219-227.
  10. Dawes, P., & Bishop, D. V. (2008). Maturation of visual and auditory temporal processing in school-aged children. Journal of Speech, Language, and Hearing Research, 51, 1002-1015.
  11. Moore, D., Cowan, J., Riley, A., Edmondson-Jones, A., & Ferguson, M. (2011). Development of auditory processing in 6- to 11-yr-old children. Ear and Hearing, 32, 269-285.
  12. Morrongiello, B. A., & Roes, C. L. (1990). Developmental changes in children’s perception of musical sequences: Effects of musical training. Developmental Psychology, 26, 814-820.
  13. Sutcliffe, P., & Bishop, D. (2005). Psychophysical design influences frequency discrimination performance in young children. Journal of Experimental Child Psychology, 91, 249-270.
  14. Habib, M., & Besson, M. (2009). What do musical training and musical experience teach us about brain plasticity? Music Perception, 26, 279-285.
  15. Zatorre, R. J. (2003). Music and the brain. Annals of the New York Academy of Sciences, 999, 4-14.
  16. Weaver, A. J., DiGiovanni, J. J., & Ries, D. T. (2015). The influence of musical training and maturation on pitch perception and memory. Poster presented at the AAS meeting, Scottsdale, AZ.
  17. Ross, D. A., & Marks, L. E. (2009). Absolute pitch in children prior to the beginning of musical training. Annals of the New York Academy of Sciences, 1169, 199-204. doi:10.1111/j.1749-6632.2009.04847.x
  18. Ross, D. A., Olson, I. R., & Gore, J. (2003). Absolute pitch does not depend on early musical training. Annals of the New York Academy of Sciences, 999(1), 522-526.
  19. Ross, D. A., Olson, I. R., Marks, L., & Gore, J. (2004). A nonmusical paradigm for identifying absolute pitch possessors. Journal of the Acoustical Society of America, 116, 1793-1799.
  20. Levitin, D. (2006). This is your brain on music: The science of human obsession. New York, NY: Dutton.
  21. Moore, B. C. J. (2003). An introduction to the psychology of hearing. London, UK: Academic Press.
  22. Hyde, K. L., Peretz, I., & Zatorre, R. J. (2008). Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia, 46, 632-639.
  23. McDermott, J. H., & Oxenham, A. J. (2008). Music perception, pitch, and the auditory system. Current Opinion in Neurobiology, 18(4), 452-463. http://doi.org/10.1016/j.conb.2008.09.005
  24. Dooley, K., & Deutsch, D. (2010). Absolute pitch correlates with high performance on musical dictation. Journal of the Acoustical Society of America, 128(2), 890-893. doi:10.1121/1.3458848

3aAB8 – Sea turtles are silent… until there is something important to communicate: first sound recording of a sea turtle

Amaury Cordero-Tapia  – acordero@cibnor.mx
Eduardo Romero-Vivas – evivas@cibnor.mx
CIBNOR
Mar Bermejo 195
Playa Palo de Santa Rita Sur 23090
La Paz, BCS, Mexico

Popular version of paper 3aAB8, “Opportunistic underwater recording of what might be a distress call of Chelonya mydas agassizii”
Presented Wednesday morning, December 6, 2017, 10:15-10:30 AM, Salon F/G/H
174th ASA Meeting, New Orleans, Louisiana

Sea turtles are considered “the least vocal of all living reptiles” (DOSITS), since their vocalizations have been documented only during nesting (Cook & Forrest, 2005). Although they are distributed worldwide throughout the oceans, there seem to be no recordings of sounds produced by them, perhaps until now.

In Baja California Sur, Mexico, there is a conservation program, run by government authorities, industry, and non-governmental organizations, focused on vulnerable, threatened, and endangered marine species. In zones with a high density of sea turtles, special nets, which allow the turtles to surface to breathe, are deployed monthly for monitoring purposes. The nets are checked by divers every 2 hours during the 24 hours of the census.

During one of these checks, a female green turtle (Chelonia mydas agassizii) was video recorded using an action camera. Subsequent analysis of the underwater recording showed a clear pattern of pulsed sound when the diver was in close proximity to the turtle. The signal falls within the reported hearing range for this species (Ketten & Bartol, 2005; Romero-Vivas & Cordero-Tapia, 2008), and given the circumstances we think it might be a distress call. More recordings are needed to confirm whether this is the case, although this first recording gives an initial hint of what to listen for. Maybe sea turtles are not that silent; perhaps there was just no need to break the silence.

Figure 1. Green turtle in the special net, and the sound recording

Dosits.org. (2017). DOSITS: How do sea turtles hear?. [online] Available at: http://dosits.org/animals/sound-reception/how-do-sea-turtles-hear/ [Accessed 16 Nov 2017].

Cook, S. L., and T. G. Forrest. 2005, Sounds produced by nesting Leatherback sea turtles (Dermochelys coriacea). Herpetological Review 36:387–390.

Ketten, D.R. and Bartol, S.M. 2005, Functional Measures of Sea Turtle Hearing. Woods Hole Oceanographic Institution: ONR Award No: N00014-02-1-0510.

Romero-Vivas, E. and Cordero-Tapia, A. 2008, Behavioral acoustic response of two endangered sea turtle species: Chelonia mydas agassizii (tortuga prieta) and Lepidochelys olivacea (tortuga golfina). XV Mexican International Congress on Acoustics, Taxco, 380-385.

3pIDa1 – Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles

Lora J. Van Uffelen, Ph.D – loravu@uri.edu
University of Rhode Island
Department of Ocean Engineering &
Graduate School of Oceanography
45 Upper College Rd
Kingston, RI 02881

Popular version of paper 3pIDa1, “Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles”
Presented Wednesday, December 06, 2017, 1:05-1:25 PM, Salon E
174th ASA meeting, New Orleans

What do you think of when you think of a drone?  A quadcopter that your neighbor flies too close to your yard?  A weaponized military system?  A selfie drone?  The word drone typically refers to an unmanned aerial vehicle (UAV), but it is now also used to refer to an unmanned underwater vehicle (UUV).  Aerial drones are typically outfitted with cameras, but cameras are not always the best way to “see” underwater.  Hydronephones are underwater vehicles, or underwater drones, equipped with hydrophones (underwater microphones), which receive and record sound underwater.  Sound is one of the best tools for sensing or “seeing” the underwater environment.

Sound travels 4-5 times faster in the ocean than it does in air. The speed of sound depends on ocean temperature, salinity, and pressure. Sound can also travel far – hundreds of miles under the right conditions! – which makes sound an excellent tool for things like underwater communication, navigation, and even measuring oceanographic properties like temperature and currents.
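To make that dependence concrete, here is a small sketch using one widely quoted empirical approximation (Medwin's formula, with depth standing in for pressure). It is a rough illustration for intuition, not the sound-speed model used in this experiment, and the example values are ours.

```python
def sound_speed_medwin(temp_c, salinity_ppt, depth_m):
    """Approximate speed of sound in seawater (m/s) using Medwin's empirical
    formula; valid for roughly 0-35 degC, 0-45 ppt, and 0-1000 m depth."""
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# Example: warm surface water vs. near-freezing Arctic water at 10 m depth
print(sound_speed_medwin(20.0, 35.0, 10.0))  # about 1522 m/s
print(sound_speed_medwin(0.0, 32.0, 10.0))   # about 1445 m/s
```

Because temperature, salinity, and pressure vary with depth and location, the sound speed varies too, which is what bends and channels sound paths over long distances in the ocean.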

Here, the term hydronephone is used specifically to refer to an ocean glider, a subclass of UUV, used as an acoustic receiver [Figure 1].  Gliders are autonomous underwater vehicles (AUVs) because they do not require constant piloting.  A pilot can only communicate with a glider when it is at the sea surface; while it is underwater it travels autonomously.  Gliders do not have propellers; instead, they move by controlling their buoyancy and using hydrofoil wings to “glide” through the water.  Key advantages of these vehicles are that they are relatively quiet, they have low power consumption so they can be deployed for long periods, they can operate in harsh environments, and they are much more cost-effective than traditional ship-based observational methods.


Figure 1: Seaglider hydronephones (SG196 and SG198) on the deck of the USCGC Healy prior to deployment in the Arctic Ocean north of Alaska in August 2016.

Two hydronephones were deployed in August-September of 2016 and 2017 in the Arctic Ocean.  They recorded signals, at ranges of up to 480 kilometers (about 300 miles), from six underwater acoustic sources placed in the Arctic Ocean north of Alaska as part of a large-scale ocean acoustics experiment funded by the Office of Naval Research [Figure 2].  This acoustic system was designed to study how sound travels in the Arctic Ocean, where temperatures and ice conditions are changing.  The hydronephones were a mobile addition to this stationary system, allowing measurements at many different locations.

Figure 2: Map of Seaglider SG196 and SG198 tracks in the Arctic Ocean in August/September of 2016 and 2017. Locations of stationary sound sources are shown as yellow pins.

One of the challenges of using gliders is figuring out exactly where they are when they are underwater.  When the gliders are at the surface, they can get their position in latitude and longitude using Global Positioning System (GPS) satellites, in a similar way to how a handheld GPS unit or a cellphone gets its position.  Gliders only have access to GPS when they come to the ocean surface, because GPS signals are electromagnetic waves, which do not travel far underwater.  The gliders only come to the surface a few times a day and can travel several miles between surfacings, so a different method is needed to determine where they are while they are deep underwater.  In the Arctic experiment, the hydronephones' recordings of the acoustic transmissions from the six sources could be used to position them underwater, using sound in a way that is analogous to the way GPS uses electromagnetic signals for positioning.
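The underwater positioning idea can be illustrated with a toy range-based fix: if the travel time from each source is converted to a range using the sound speed, the receiver position that best agrees with all of the ranges can be found by least squares. The sketch below (a made-up 2-D source layout, with clocks assumed synchronized and travel times already converted to ranges) shows only the principle; it is not the experiment's actual processing, and the function and values are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_receiver(source_xy, ranges_m):
    """Estimate a receiver's horizontal position from ranges to sources with
    known positions, by nonlinear least squares (2-D for simplicity).

    source_xy : (N, 2) array of source positions in metres
    ranges_m  : N measured source-receiver ranges (travel time times sound speed)
    """
    source_xy = np.asarray(source_xy, dtype=float)
    ranges_m = np.asarray(ranges_m, dtype=float)

    def residuals(p):
        # Difference between predicted and measured range to each source
        return np.linalg.norm(source_xy - p, axis=1) - ranges_m

    x0 = source_xy.mean(axis=0)          # start from the centroid of the sources
    return least_squares(residuals, x0).x

# Hypothetical example: three sources and slightly noisy ranges
sources = [(0.0, 0.0), (100e3, 0.0), (0.0, 100e3)]
true_pos = np.array([40e3, 70e3])
ranges = [np.linalg.norm(np.array(s) - true_pos) + np.random.normal(0, 50) for s in sources]
print(locate_receiver(sources, ranges))  # should be close to (40000, 70000)
```

A real acoustic positioning solution also has to account for the glider's unknown clock drift, the depth of the receiver, and the fact that sound paths in the ocean are refracted rather than straight, but the least-squares idea is the same.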

Improvements in underwater positioning will make hydronephones an even more valuable tool for ocean acoustics and oceanography.  As vehicle and battery technology improves, and as data storage continues to become smaller and cheaper, hydronephones will be able to record for longer periods of time, allowing more extensive exploration of the underwater world.

Acknowledgments:  Many investigators contributed to this experiment including Sarah Webster, Craig Lee, and Jason Gobat from the University of Washington, Peter Worcester and Matthew Dzieciuch from Scripps Institution of Oceanography, and Lee Freitag from the Woods Hole Oceanographic Institution. This project was funded by the Office of Naval Research.