2pBA3 – Semi-Automated Smart Detection of Prostate Cancer using Machine Learning and a Novel Near-Microscopic Imaging Platform

Daniel Rohrbach – drohrbach@RiversideResearch.org, Jonathan Mamou and Ernest Feleppa
Lizzi Center for Biomedical Engineering, Riverside Research
New York, NY, USA, 10038

Brian Wodlinger and Jerrold Wen
Exact Imaging, Markham
Ontario, Canada, L3R 2N2

Popular version of paper 2pBA3, “Quantitative-ultrasound-based prostate-cancer imaging by means of a novel micro-ultrasound scanner”
Presented Tuesday, December 05, 2017, 1:45-2:00 PM, Balcony M
174th ASA meeting, New Orleans

Prostate cancer is the second-leading cause of cancer-related death among men in the U.S., with approximately 1 in 7 men being diagnosed with prostate cancer during their lifetime [i].  Detection and diagnosis of this significant disease present a major clinical challenge because the current standard-of-care imaging method, conventional transrectal ultrasound, cannot reliably distinguish cancerous from non-cancerous prostate tissue.  Therefore, prostate biopsies for definitively diagnosing cancer are currently delivered in a systematic but “blind” pattern.  Other imaging methods, such as MRI, have been investigated for guiding biopsies, but MRI involves complicated procedures, is costly, is poorly tolerated by most patients, and demonstrates significant variability among clinical sites and practitioners.  Our study investigated sophisticated tissue-typing algorithms for possible use in a novel, fine-resolution ultrasound instrument called the ExactVu™ micro-ultrasound instrument by Exact Imaging, Markham, Ontario.  The ExactVu recently has been approved for commercial sale in North America and Europe.  The term micro-ultrasound refers to the near-microscopic resolution of the device.  This new, fine-resolution instrument allows clinicians to visualize previously unseen features of the prostate in real time and enables them to differentiate suspicious regions of the prostate so that they can “target” biopsies to those suspicious regions.  To enable more-objective interpretation of tissue features made visible by the ExactVu, a cancer-risk-identification protocol – called PRI-MUS™ (prostate risk identification using micro-ultrasound) [ii] – has been developed and validated to distinguish benign prostate tissue from tissue that has a high probability of being cancerous, based on its appearance in a micro-ultrasound image.

The paper, “High-frequency quantitative ultrasound for prostate-cancer imaging using a novel micro-ultrasound scanner,” which is being presented at the 174th meeting of the Acoustical Society of America, shows promising results from a collaborative research project undertaken by Riverside Research, a leading biomedical research institution in New York, NY, and Exact Imaging.  The paper describes an approach that successfully applies a combination of (1) sophisticated ultrasound signal-processing methods known as quantitative ultrasound and (2) machine learning and artificial intelligence to the analysis of fine-resolution data acquired with the novel micro-ultrasound imaging platform, in order to automate detection of cancerous tissue in the prostate.  Results of the study were very encouraging and showed a promising ability of the methods to distinguish cancerous from non-cancerous prostate tissue.
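
To give a flavor of how a quantitative-ultrasound-plus-machine-learning pipeline can be assembled, here is a minimal sketch. The spectral features, analysis band, and support-vector-machine classifier are illustrative assumptions for this sketch only, not the algorithm implemented for the ExactVu.

```python
# Illustrative sketch of a quantitative-ultrasound (QUS) + machine-learning pipeline:
# spectral-fit features from radio-frequency (RF) ultrasound echoes feed a classifier.
# Feature definitions, band limits, and classifier are assumptions, not the ExactVu algorithm.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 80e6            # assumed RF sampling rate (Hz)
BAND = (10e6, 40e6)  # assumed analysis band for a high-frequency probe

def spectral_features(rf_segment):
    """Slope, intercept, and midband value of a line fit to the power spectrum (dB vs MHz)."""
    spectrum = np.abs(np.fft.rfft(rf_segment * np.hanning(len(rf_segment)))) ** 2
    freqs = np.fft.rfftfreq(len(rf_segment), d=1.0 / FS)
    band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    power_db = 10.0 * np.log10(spectrum[band] + 1e-12)
    slope, intercept = np.polyfit(freqs[band] / 1e6, power_db, 1)
    midband = slope * (np.mean(BAND) / 1e6) + intercept
    return np.array([slope, intercept, midband])

# X: one feature vector per biopsy-correlated region of interest; y: 1 = cancerous.
rng = np.random.default_rng(0)
segments = rng.standard_normal((200, 1024))              # placeholder RF segments
X = np.vstack([spectral_features(s) for s in segments])
y = rng.integers(0, 2, size=200)                          # placeholder histology labels

model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X, y)
risk = model.predict_proba(X[:1])[0, 1]   # estimated probability that a region is cancerous
```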

A database of 12,000 fine-resolution micro-ultrasound images and correlated biopsy histology has been developed.  The new algorithm for automated detection continues to evolve and is applied to this growing data set.

Future clinical application of the algorithms implemented in the ExactVu would involve scanning a patient with indications of prostate cancer (e.g., as a result of a transrectal palpation or a high level of prostate-specific antigen in the blood) to identify regions of the gland that are sufficiently suspicious for cancer to warrant a biopsy.  As the scan proceeds, the algorithm continuously analyzes the ultrasound signals and automatically indicates to the examining urologist any regions that have a significant risk of being cancerous.  The urologist evaluates the indicated region and makes a clinical judgement on whether the region in fact warrants a biopsy.

The results of this study show an encouraging ability of ultrasound-signal processing and the machine-learning algorithm together with the novel micro-ultrasound instrumentation to depict regions of the prostate that are cancerous with high reliability.  The study demonstrates a promising potential of the algorithms and micro-ultrasound to improve targeting of biopsies, to increase cancer-detection rates, to avoid unnecessary biopsies and associated risks, to support focal therapy more effectively, and consequently to achieve better patient outcomes.

[i] American Cancer Society Cancer Statistics Center: https://cancerstatisticscenter.cancer.org/

[ii] Ghai S, et al: Assessing Cancer Risk on Novel 29 MHz Micro-Ultrasound Images of the Prostate: Creation of the Micro-Ultrasound Protocol for Prostate Risk Identification. J. Urol. 2016; 196: 562–569.

1pAO9 – The Acoustic Properties of Crude Oil

Scott Loranger – sloranger@ccom.unh.edu
Department of Earth Science
University of New Hampshire
Durham, NH, United States

Christopher Bassett – chris.bassett@noaa.gov
Alaska Fisheries Science Center
National Marine Fisheries Service
Seattle, WA, United States

Justin P. Cole – jpq68@wildcats.unh.edu
Department of Chemistry
Colorado State University
Fort Collins, CO, United States

Thomas C. Weber – Weber@ccom.unh.edu
Department of Mechanical Engineering
University of New Hampshire
Durham, NH, United States

Popular version of paper 1pAO9, “The Acoustic Properties of Three Crude Oils at Oceanographically Relevant Temperatures and Pressures”
Presented Monday afternoon, December 04, 2017, 3:35-3:50 PM, Balcony M
174th ASA Meeting, New Orleans, LA

The difficulty of detecting and quantifying oil in the ocean has limited our understanding of the fate and environmental impact of oil from natural seeps and man-made spills. Oil on the surface can be detected by satellite (Figure 1) and studied with optical instrumentation; however, as researchers look deeper to study oil as it rises through the water column, the rapid attenuation of light in the ocean limits the usefulness of these systems. Active sonar – where an acoustic transmitter generates a pulse of sound and a receiver listens for the sound reflected from an object – takes advantage of the low attenuation of sound in the ocean to detect objects farther away than optical instruments can. However, oil is difficult to detect acoustically because oil and seawater have similar physical properties. The amount of sound reflected from an object depends on the object’s size, shape, and a physical property called the acoustic impedance – the product of the density and sound speed of the material. When an object has an acoustic impedance similar to that of the surrounding medium, the object reflects relatively little sound. The acoustic impedances of oil (which vary by type of oil) and seawater are often very similar; in fact, under certain conditions oil droplets can be acoustically invisible. To study oil acoustically, we need to better understand the physical properties that affect its acoustic impedance.
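
To make the impedance argument concrete, here is a back-of-the-envelope sketch. The oil density and sound speed are rough assumed values, not measurements from this study; the point is that the normal-incidence reflection coefficient between seawater and oil is small because the two impedances nearly match.

```python
# Back-of-the-envelope illustration: the plane-wave reflection coefficient between seawater
# and a crude oil is small because their acoustic impedances are similar.  The oil density
# and sound speed below are rough assumed values, not measurements from this study.
def impedance(density_kg_m3, sound_speed_m_s):
    """Characteristic acoustic impedance Z = rho * c."""
    return density_kg_m3 * sound_speed_m_s

z_seawater = impedance(1025.0, 1500.0)   # typical seawater values
z_oil      = impedance(870.0, 1400.0)    # assumed values for a medium crude oil

# Pressure reflection coefficient for normal incidence on a flat interface.
reflection = (z_oil - z_seawater) / (z_oil + z_seawater)
print(f"Seawater impedance: {z_seawater:.3e}, oil impedance: {z_oil:.3e}")
print(f"Reflection coefficient: {reflection:+.3f}  (|R| << 1: little sound is reflected)")
```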

Most measurements of the density and sound speed of oil come from oil-exploration research, which focuses on studying oil under reservoir conditions – the high temperatures and pressures associated with oil deep underground.  As oil cools to oceanographically relevant temperatures, it can transition from a liquid to a waxy semisolid. This transition may cause significant changes in the acoustic properties of oil that would not be predicted by measurements made at reservoir conditions. To inform models of acoustic scattering from oil and produce quantitatively meaningful measurements, it is necessary to have well-understood properties at relevant temperatures and pressures. Density and sound speed can be measured directly, while the shape of an oil droplet can be predicted from the density and viscosity. Density and viscosity determine how quickly a droplet rises and how the drag force of the surrounding water modifies its shape. Droplets can range from spheres to more pancake-like shapes, like those produced by pushing down on an inflated balloon.

To better understand these important properties, we obtained samples of three different crude oils. Each sample was sent for “fingerprinting” to identify differences in the molecular composition of the oils; “fingerprinting” is a technique used by oil-exploration scientists and spill responders to distinguish crude oils. Measurements of sound speed, density, and viscosity were made from -10°C (14°F) to 30°C (86°F). A sound-speed chamber was specifically designed to measure sound speed over the same temperature range but with the added effect of pressure (0 to 2500 psi – equivalent to approximately 1700 m depth, deeper than the Deepwater Horizon well).

A light, a medium, and a heavy crude oil were tested. These classes are typically defined by American Petroleum Institute (API) gravity, a common descriptor of oils that measures the density of an oil relative to water. The properties of the medium and heavy crude oils are shown in Figure 2 below. The sound-speed curves differ in both amplitude and shape, while the viscosity curves differ only in amplitude, suggesting that the changes in the shape of the sound-speed curve may not be related to viscosity. Measurements of the heavy oil are currently limited to temperatures above 5°C because below that temperature it becomes very difficult to transmit sound through the oil. Part of this ongoing research is to develop new techniques to measure sound speed, and to use these techniques to extend our measurements of heavy oils to the cold temperatures found in Arctic regions, where oil can be trapped in ice. By better understanding these physical properties of oil, the methods and models used to detect and quantify oil in the marine environment can be improved.
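
For reference, API gravity is computed from an oil’s specific gravity (its density relative to water at 60°F). The quick sketch below uses the commonly quoted industry cutoffs for light, medium, and heavy oils, not thresholds defined in this paper, and the example specific gravities are invented.

```python
# API gravity from specific gravity (oil density / water density at 60 degrees F).
# Higher API gravity means lighter oil; the light/medium/heavy cutoffs below are commonly
# quoted industry conventions, not values defined in this study.
def api_gravity(specific_gravity):
    return 141.5 / specific_gravity - 131.5

def classify(api):
    if api > 31.1:
        return "light"
    elif api > 22.3:
        return "medium"
    return "heavy"

for sg in (0.83, 0.89, 0.95):   # assumed example specific gravities
    api = api_gravity(sg)
    print(f"SG {sg:.2f} -> API {api:.1f} ({classify(api)})")
```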

Figure 1: Satellite image of surface oil slicks from natural seeps.


Figure 2: Experimental measurements of the physical properties of a medium and a heavy crude oil.

4pSC11 – The role of familiarity in audiovisual speech perception

Chao-Yang Lee – leec1@ohio.edu
Margaret Harrison – mh806711@ohio.edu
Ohio University
Grover Center W225
Athens, OH 45701

Seth Wiener – sethw1@cmu.edu
Carnegie Mellon University
160 Baker Hall, 5000 Forbes Avenue
Pittsburgh, PA 15213

Popular version of paper 4pSC11, “The role of familiarity in audiovisual speech perception”
Presented Thursday afternoon, December 7, 2017, 1:00-4:00 PM, Studios Foyer
174th ASA Meeting, New Orleans

When we listen to someone talk, we hear not only the content of the spoken message, but also the speaker’s voice carrying the message. Although understanding content does not require identifying a specific speaker’s voice, familiarity with a speaker has been shown to facilitate speech perception (Nygaard & Pisoni, 1998) and spoken word recognition (Lee & Zhang, 2017).

Because we often communicate with a visible speaker, what we hear is also affected by what we see. This is famously demonstrated by the McGurk effect (McGurk & MacDonald, 1976). For example, an auditory “ba” paired with a visual “ga” usually elicits a perceived “da” that is not present in the auditory or the visual input.

Since familiarity with a speaker’s voice affects auditory perception, does familiarity with a speaker’s face similarly affect audiovisual perception? Walker, Bruce, and O’Malley (1995) found that familiarity with a speaker reduced the occurrence of the McGurk effect. This finding supports the “unity” assumption of intersensory integration (Welch & Warren, 1980), but challenges the proposal that processing facial speech is independent of processing facial identity (Bruce & Young, 1986; Green, Kuhl, Meltzoff, & Stevens, 1991).

In this study, we explored audiovisual speech perception by investigating how familiarity with a speaker affects the perception of English fricatives “s” and “sh”. These two sounds are useful because they contrast visibly in lip rounding. In particular, the lips are usually protruded for “sh” but not “s”, meaning listeners can potentially identify the contrast based on visual information.

Listeners were asked to watch/listen to stimuli that were audio-only, visual-only, audiovisual-congruent, or audiovisual-incongruent (e.g., audio “save” paired with visual “shave”). The listeners’ task was to identify whether the first sound of the stimuli was “s” or “sh”. We tested two groups of native English listeners – one familiar with the speaker who produced the stimuli and one unfamiliar with the speaker.

The results showed that listeners familiar with the speaker identified the fricatives faster in all conditions (Figure 1) and more accurately in the visual-only condition (Figure 2). That is, listeners familiar with the speaker were more efficient in identifying the fricatives overall, and were more accurate when visual input was the only source of information.

We also examined whether visual familiarity affects the occurrence of the McGurk effect. Listeners were asked to identify syllable-initial stops (“b”, “d”, “g”) from stimuli that were audiovisual-congruent or incongruent (e.g., audio “ba” paired with visual “ga”). A blended (McGurk) response was indicated by a “da” response to an auditory “ba” paired with a visual “ga”.

Contrary to the “s”-“sh” findings reported earlier, the results from our identification task showed no difference between the familiar and unfamiliar listeners in the proportion of McGurk responses. This finding did not replicate Walker, Bruce, and O’Malley (1995).

In sum, familiarity with a speaker facilitated the speed of identifying fricatives from audiovisual stimuli. Familiarity also improved the accuracy of fricative identification when visual input was the only source of information. Although we did not find an effect of familiarity on the McGurk responses, our findings from the fricative task suggest that processing audiovisual speech is affected by speaker identity.

Figure 1 – Reaction time of fricative identification from stimuli that were audio-only, visual-only, audiovisual-congruent, or audiovisual-incongruent. Error bars indicate 95% confidence intervals.

Figure 2 – Accuracy of fricative identification (d′) from stimuli that were audio-only, visual-only, audiovisual-congruent, or audiovisual-incongruent (e.g., audio “save” paired with visual “shave”). Error bars indicate 95% confidence intervals.
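
Figure 2 reports accuracy as d′ (d-prime), a sensitivity measure from signal detection theory. For readers unfamiliar with it, here is a minimal sketch of how d′ is commonly computed from hits and false alarms; the trial counts and the response mapping are invented for illustration and are not the study’s data.

```python
# d-prime (sensitivity) from hit and false-alarm rates, as commonly computed in signal
# detection theory.  The trial counts below are invented for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from 0 and 1 so the z-transform stays finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# One possible mapping: "s" trials answered "s" are hits; "sh" trials answered "s" are
# false alarms.
print(round(d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42), 2))
```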

Figure 3 – Proportion of McGurk responses (“da” response to audio “ba” paired with visual “ga”).

Video 1 – Example of an audiovisual-incongruent stimulus (audio “save” paired with visual “shave”).

Video 2 – Example of an audiovisual-incongruent stimulus (audio “ba” paired with visual “ga”).

References:

Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305-327.

Green, K. P., Kuhl, P. K., Meltzoff, A. N., & Stevens, E. B. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics, 50, 524-536.

Lee, C.-Y., & Zhang, Y. (in press). Processing lexical and speaker information in repetition and semantic/associative priming. Journal of Psycholinguistic Research.

McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 26, 746-748.

Nygaard, L. C., & Pisoni, D. B. (1998). Talker-specific learning in speech perception. Perception & Psychophysics, 60, 355-376.

Walker, S., Bruce, V., & O’Malley, C. (1995). Facial identity and facial speech processing: Familiar faces and voices in the McGurk effect. Perception & Psychophysics, 57, 1124-1133.

Welch, R. B., & Warren, D. H. (1980). Immediate perceptual response to intersensory discrepancy. Psychological Bulletin, 88, 638-667.

4aPP4 – Do a listener’s body rotation and sound duration improve sound localization accuracy?

Akio Honda – honda@yamanash-eiwa.ac.jp
Yamanashi-Eiwa College
888 Yokone-machi, Kofu
Yamanashi, Japan 400-8555

Popular version of paper 4aPP4, “Effects of listener’s whole-body rotation and sound duration on sound localization accuracy”
Presented Thursday morning, December 7, 2017, 9:45-10:00 AM, Studio 4
174th ASA Meeting, New Orleans

Sound localization is an important ability that helps make daily life safe and rich. When we try to localize a sound, our head and body movements are known to facilitate localization because they create dynamic changes in the acoustic information arriving at each ear [1–4]. However, earlier reports have described that sound localization accuracy deteriorates during head rotation [5–7]. Moreover, the facilitative effects of a listener’s movement differ depending on the features of the sound [3–4]. The interaction between a listener’s movement and sound features therefore remains unclear. For this study, we used a digitally controlled spinning chair to assess the effects of a listener’s whole-body rotation and of sound duration on horizontal sound localization accuracy.

In this experiment, the listeners were 12 adults with normal hearing. The stimuli were 1/3-octave band noise bursts (center frequency = 1 kHz, SPL = 65 dB) of 50, 200, and 1000 ms duration. Each stimulus was presented from a loudspeaker in a circular array (1.2 m radius) with a loudspeaker separation of 2.5 deg (25 loudspeakers in total). Listeners could not see the loudspeakers because an acoustically transparent curtain was placed between the listener and the loudspeaker array, and the space inside the curtain was kept brighter than the space outside. We assigned numbers to azimuth angles at 1.25-deg intervals: number 0 was 31.25 deg to the left, number 25 was directly in front of the listener, and number 50 was 31.25 deg to the right. These numbers were displayed on the curtain to facilitate responses. Listeners, seated on the spinning chair at the center of the circle, were asked to report the number corresponding to the position of the presented stimulus (see Fig. 1).

In the chair-still condition, listeners faced straight ahead (0 deg) and the stimulus was presented from one loudspeaker of the circular array. In the chair-rotation condition, listeners started facing 15 deg to the left or right. The chair then rotated 30 deg clockwise or counterclockwise, respectively, and the stimulus was presented from one of the loudspeakers at the moment the listener faced straight ahead (0 deg) during the rotation.

We analyzed the angular errors in the horizontal plane, calculated as the difference between the perceptually localized position and the physical target position. Figure 2 depicts the mean horizontal sound localization performance.
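
To make the mapping concrete, here is a minimal sketch of how a response number can be converted to an azimuth and compared with the target position. The response and target values are invented for illustration, and the study’s actual error metric may differ (e.g., signed rather than absolute error).

```python
# Convert the listener's response number (0-50, 1.25-degree steps, 25 = straight ahead)
# into an azimuth and compute the angular error against the target loudspeaker.
# The response/target values below are invented for illustration.
def number_to_azimuth_deg(n):
    """0 -> -31.25 deg (left), 25 -> 0 deg (front), 50 -> +31.25 deg (right)."""
    return (n - 25) * 1.25

responses = [23, 27, 25, 30]          # hypothetical reported numbers
targets_deg = [-2.5, 5.0, 0.0, 2.5]   # hypothetical loudspeaker azimuths

errors = [abs(number_to_azimuth_deg(r) - t) for r, t in zip(responses, targets_deg)]
mean_angular_error = sum(errors) / len(errors)
print(f"Mean absolute angular error: {mean_angular_error:.2f} deg")
```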

Our results demonstrated better sound localization accuracy in the chair-rotation condition than in the chair-still condition. Moreover, a significant effect of sound duration was observed: accuracy for the 200-ms stimuli appeared to be the worst among the durations used. However, the interaction between condition and sound duration was not significant.

These findings suggest that sound localization performance can improve when listeners obtain dynamic auditory information from their own movement.  Furthermore, the duration of the target sound was not crucial for localization accuracy. Of course, other explanations are possible. For instance, listeners might localize short sounds (less than 50 ms) relatively well, whereas an intermediate duration such as 200 ms might not provide dynamic information effective enough to facilitate localization. Irrespective of the interpretation, our results provide valuable suggestions for future studies of the interaction between a listener’s movement and sound duration.


Fig. 1 Outline of the loudspeaker array system.

Fig. 2 Angular errors in the horizontal plane.

References:

[1] H. Wallach, “On sound localization,” J. Acoust. Soc. Am., 10, 270–274 (1939).
[2] A. Honda, H. Shibata, S. Hidaka, J. Gyoba, Y. Iwaya, and Y. Suzuki, “Effects of head movement and proprioceptive feedback in training of sound localization,” i-Perception, 4, 253–264 (2013).
[3] Y. Iwaya, Y. Suzuki, and D. Kimura, “Effects of head movement on front-back error in sound localization,” Acoust. Sci. Technol., 24, 322–324 (2003).
[4] S. Perrett and W. Noble, “The contribution of head motion cues to localization of low-pass noise,” Percept. Psychophys., 59, 1018–1026 (1997).
[5] J. Cooper, S. Carlile, and D. Alais, “Distortions of auditory space during rapid head turns,” Exp. Brain Res., 191, 209–219 (2008).
[6] J. Leung, D. Alais, and S. Carlile, “Compression of auditory space during rapid head turns,” Proc. Natl. Acad. Sci. U.S.A., 105, 6492–6497 (2008).
[7] A. Honda, K. Ohba, Y. Iwaya, and Y. Suzuki, “Detection of sound image movement during horizontal head rotation,” i-Perception, 7, 2041669516669614 (2016).

1pSP10 – Design of an Unmanned Aerial Vehicle Based on Acoustic Navigation Algorithm

Yunmeng Gong1 – (476793382@qq.com)
Huping Xu1 – (hupingxu@126.com)
Yu Hen Hu2 – (yuhen.hu@wisc.edu)
1 School of Logistic Engineering
Wuhan University of Technology
Wuhan, Hubei, China 430063
2 Department of Electrical and Computer Engineering
University of Wisconsin – Madison
Madison, WI 53706 USA

Popular version of paper 1pSP10, “Design of an Unmanned Aerial Vehicle Based on Acoustic Navigation Algorithm”
Presented on Monday afternoon, December 4, 2017, 4:05-4:20 PM, Salon D
174th ASA Meeting, New Orleans

Acoustic UAV guidance is an enabling technology for future urban UAV transportation systems. When large numbers of commercial UAVs are tasked to deliver goods and services in a metropolitan area, they need to be guided to travel orderly along aerial corridors above streets. They will need to land on or take off from designated parking structures and obey “traffic signals” to mitigate potential collisions.

A UAV acoustic guidance system consists of a group of ground stations distributed over the operating region. When a UAV enters the region, its flight path comes under the guidance of a regional air-traffic-controller system. The UAV and the controller communicate over a radio channel using Wi-Fi or 5G cellular Internet-of-Things protocols. The UAV’s position is estimated from the direction-of-arrival (DoA) angles of narrow-band acoustic signals.


Figure 1. UAV acoustic guidance system: (a) passive-mode acoustic guidance; (b) active-mode acoustic guidance.

As shown in Figure 1, acoustic UAV guidance can operate in a passive self-guidance mode or an active guidance mode. In the passive self-guidance mode, beacons with known 3D positions emit known, distinct narrow-band (harmonic) signals. A UAV passively receives these signals using an on-board microphone phased array and uses the sampled signals to estimate the DoA of each beacon’s harmonic signal. If the UAV is provided with the beacon stations’ 3D coordinates, it can determine its own location and heading to complement estimates from GPS or inertial guidance systems. The advantage of the passive guidance mode is that multiple UAVs can use the same group of beacon stations to estimate their own positions. The technical challenge is that each UAV must carry a bulky acoustic phased array, and the received acoustic signal suffers from strong noise interference from the engine, propellers/rotors, and wind.
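
To illustrate the position-fixing step, here is a simplified two-dimensional sketch of how a location could be recovered from DoA bearings to two beacons at known coordinates. The real system works in three dimensions and with more beacons; the geometry and values below are invented for illustration.

```python
# Simplified 2-D illustration of position fixing from direction-of-arrival (DoA) bearings
# to two beacons at known coordinates.  The real system operates in 3-D; beacon positions
# and bearings below are invented for illustration.
import numpy as np

def fix_position(beacon_a, bearing_a, beacon_b, bearing_b):
    """Intersect two bearing lines.  Bearings are azimuths (radians, from the +x axis)
    measured at the receiver toward each beacon."""
    # Receiver position p satisfies beacon = p + r * [cos(bearing), sin(bearing)] for some
    # range r, giving four linear equations in (px, py, r_a, r_b).
    A = np.array([
        [1.0, 0.0, np.cos(bearing_a), 0.0],
        [0.0, 1.0, np.sin(bearing_a), 0.0],
        [1.0, 0.0, 0.0, np.cos(bearing_b)],
        [0.0, 1.0, 0.0, np.sin(bearing_b)],
    ])
    b = np.array([*beacon_a, *beacon_b])
    px, py, _, _ = np.linalg.solve(A, b)
    return px, py

# Check: a UAV at (20, 30) recovers its position from the bearings it would measure.
beacon_a, beacon_b = (0.0, 0.0), (100.0, 0.0)
true_pos = np.array([20.0, 30.0])
bearing_a = np.arctan2(beacon_a[1] - true_pos[1], beacon_a[0] - true_pos[0])
bearing_b = np.arctan2(beacon_b[1] - true_pos[1], beacon_b[0] - true_pos[0])
print(fix_position(beacon_a, bearing_a, beacon_b, bearing_b))  # ~ (20.0, 30.0)
```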

Conversely, in the active guidance mode, the UAV actively emits an omnidirectional narrow-band acoustic signal at a harmonic frequency designated by the local air-traffic controller. Each beacon station uses its local microphone phased array to estimate the DoA of the UAV’s signal. The UAV’s location, speed, and heading are then estimated by the local air-traffic controller and transmitted to the UAV. The advantage of the active guidance mode is that the UAV carries a lighter payload, consisting only of an amplified speaker and related circuitry. The disadvantage is that each UAV within the region must generate a harmonic signal with a distinct center frequency; as the number of UAVs in the region increases, the available acoustic frequencies may become insufficient.

In this paper, we investigate key issues in the design and implementation of a passive-mode acoustic guidance system. We ask fundamental questions such as: What is the effective range of acoustic guidance? What size and configuration should the on-board phased array have? How can the direction-of-arrival estimation algorithm be formulated efficiently enough to run on the computers on board a UAV?

We conducted on-the-ground experiments and measured sound attenuation as a function of distance and harmonic frequency. The results are shown in Figure 2 below.

Figure 2. Sound attenuation in air as a function of distance for different harmonic frequencies
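
For context, the trend in Figure 2 can be compared against a simple model that combines spherical-spreading loss with frequency-dependent atmospheric absorption. The absorption coefficients in this sketch are rough order-of-magnitude assumptions, not the values measured in the experiment.

```python
# Rough model of sound attenuation with distance: spherical-spreading loss plus
# frequency-dependent atmospheric absorption.  The absorption coefficients below are
# order-of-magnitude assumptions, not the measured values in Figure 2.
import math

ABSORPTION_DB_PER_M = {1_000: 0.005, 4_000: 0.03, 10_000: 0.1}  # assumed alpha(f)

def attenuation_db(distance_m, freq_hz, ref_distance_m=1.0):
    spreading = 20.0 * math.log10(distance_m / ref_distance_m)   # 1/r pressure decay
    absorption = ABSORPTION_DB_PER_M[freq_hz] * (distance_m - ref_distance_m)
    return spreading + absorption

for f in (1_000, 4_000, 10_000):
    losses = [attenuation_db(r, f) for r in (10, 50, 100)]
    print(f, [f"{loss:.1f} dB" for loss in losses])
```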

Using a commercial UAV (a DJI Phantom model), we conducted experiments to study the frequency spectrum of the sound produced in different motion states, in order to identify beacon frequencies that are least affected by engine sound and noise. An example acoustic spectrum during take-off is shown in Figure 3 below.

Figure 3. UAV acoustic noise during take-off

We also developed a simplified direction-of-arrival estimation algorithm that achieves encouraging accuracy when implemented on an STM32F407 microcontroller, which can easily be installed on a UAV.
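
As an illustration of the kind of lightweight processing involved, the direction of arrival of a narrow-band beacon tone can be estimated from the phase difference between a pair of microphones. This is one common approach, sketched with assumed frequencies, spacing, and sampling rate; it is not necessarily the simplified algorithm described above.

```python
# Minimal two-microphone, narrow-band DoA sketch: the phase difference of the beacon tone
# between two microphones a known distance apart gives the arrival angle.  Frequencies,
# spacing, and sampling rate are illustrative assumptions.
import numpy as np

C = 343.0           # speed of sound (m/s)
FS = 48_000         # sampling rate (Hz)
F_BEACON = 2_000.0  # assumed beacon tone frequency (Hz)
D = 0.05            # mic spacing (m); less than half a wavelength to avoid ambiguity

def doa_from_pair(x_near, x_far):
    """Arrival angle (radians from broadside) of a narrow-band tone from two microphones;
    x_far is the microphone farther from the source."""
    freqs = np.fft.rfftfreq(len(x_near), d=1.0 / FS)
    k = np.argmin(np.abs(freqs - F_BEACON))                       # FFT bin of the beacon tone
    cross_phase = np.angle(np.fft.rfft(x_far)[k] / np.fft.rfft(x_near)[k])
    # A plane wave delayed by d*sin(theta)/c at the far mic gives a cross-phase of
    # -2*pi*f*d*sin(theta)/c, so invert that relation.
    return np.arcsin(np.clip(-cross_phase * C / (2 * np.pi * F_BEACON * D), -1.0, 1.0))

# Quick self-test: synthesize a tone arriving from 30 degrees.
theta_true = np.deg2rad(30.0)
t = np.arange(4800) / FS
delay = D * np.sin(theta_true) / C
x1 = np.sin(2 * np.pi * F_BEACON * t)            # nearer microphone
x2 = np.sin(2 * np.pi * F_BEACON * (t - delay))  # farther microphone lags by `delay`
print(np.rad2deg(doa_from_pair(x1, x2)))         # ~30.0
```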

1pEAa2 – Rotor noise control using 3D-printed porous materials

Chaoyang Jiang, Yendrew Yauwenas, Jeoffrey Fischer, Danielle Moreau and Con Doolan – c.doolan@unsw.edu.au
School of Mechanical and Manufacturing Engineering
University of New South Wales
Sydney, NSW, Australia, 2052

Popular version of paper 1pEAa2, “Rotor noise control using 3D-printed porous materials”
Presented Monday, December 04, 2017, 1:15-1:30 PM, Balcony N
174th ASA meeting, New Orleans

You may not realise it, but you are surrounded by rotor blades.  Fans in your computer, the air-conditioning system above your head, the wind turbine creating your renewable energy and the jet engines powering you to your next holiday or business meeting are some examples of technology where rotor blades are essential.  Unfortunately, rotor blades create noise and with so many of them, controlling rotor noise is necessary to improve the liveability and health of our communities.

Perhaps the most challenging type of rotor noise to control is turbulent trailing edge noise.  Trailing edge noise is created when turbulence in the air surrounding the rotor blade passes the blade’s trailing edge. This noise is produced over a wide range of frequencies (it is broadband in nature) because it is the acoustic signature of turbulence, which is a random mixture of swirling eddies of varying size.

Because this noise is driven by turbulence and its interaction with the rotor blade, it is difficult to predict and very challenging to control.  Adding porous material to a rotor blade has been shown to provide some noise relief; however, the amount of noise reduction is usually small, and sometimes more noise is created by the porous material itself.  The problem to solve is how to fabricate a quiet rotor blade with optimised, integrated porosity.  This is a significant departure from current methods, which normally apply standard porous materials late in the design or manufacturing process.

We use 3D printing technology to overcome this problem.  3D printing (also known as additive manufacturing) allows complex designs to be realised quickly through carefully controlled deposition of material (polymer, metal or ceramic).  We have used 3D printing to explore how porosity in polymers can be optimised with subsurface cavities to provide maximum sound absorption over a wide range of frequencies.  Then, we 3D print these porous designs directly into the rotor blade of a fan and test their acoustic performance in a special facility at UNSW Sydney.
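
One classical way to reason about a small surface pore backed by a subsurface cavity is as a Helmholtz resonator, which absorbs most strongly near its resonance frequency. The sketch below uses invented dimensions and a textbook resonance formula purely for illustration; it is not the design model used in this work.

```python
# Rough illustration: a small opening backed by a subsurface cavity behaves roughly like a
# Helmholtz resonator, absorbing most strongly near f = (c / 2*pi) * sqrt(A / (V * L_eff)).
# Dimensions are invented for illustration; this is not the authors' design model.
import math

C = 343.0  # speed of sound in air (m/s)

def helmholtz_resonance_hz(neck_radius_m, neck_length_m, cavity_volume_m3):
    area = math.pi * neck_radius_m ** 2
    # End correction: the oscillating air plug is effectively slightly longer than the neck.
    effective_length = neck_length_m + 1.7 * neck_radius_m
    return (C / (2.0 * math.pi)) * math.sqrt(area / (cavity_volume_m3 * effective_length))

# Example: a 0.5 mm radius pore, 1 mm deep, backed by a 3 mm cube cavity.
print(f"{helmholtz_resonance_hz(0.5e-3, 1.0e-3, (3e-3) ** 3):.0f} Hz")
```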

Figure 1(a) shows 3D-printed rotor blades under test at UNSW Sydney; a picture of the 3D-printed blade tip, with its porous trailing edge, is shown in Figure 1(b).  A three-bladed fan is shown with a microphone array in the background.  The microphone array allows very accurate noise measurements of the rotor blades.  When we compare solid and 3D-printed porous blades, significant noise reduction is achieved, as shown in Figure 2.  Over 10 dB of noise reduction can be achieved, which is much greater than with other control methods.  Audio files (see below) allow you to hear the difference between regular solid blades and the 3D-printed porous blades.
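
The comparison behind Figure 2 can be illustrated with a short sketch: estimate the power spectral density of each recording with Welch’s method and express the porous-blade result as a decibel reduction relative to the solid blades. The signals here are synthetic placeholders, not the measured data or the actual calibrated measurement chain.

```python
# Illustrative sketch of the comparison behind Figure 2: estimate each recording's power
# spectral density with Welch's method and express the porous-blade result as a dB
# reduction relative to the solid blades.  Signals here are synthetic placeholders.
import numpy as np
from scipy.signal import welch

FS = 48_000
rng = np.random.default_rng(0)
solid = rng.standard_normal(FS * 5)          # placeholder for the solid-blade recording
porous = 0.3 * rng.standard_normal(FS * 5)   # placeholder for the porous-blade recording

freqs, psd_solid = welch(solid, fs=FS, nperseg=4096)
_, psd_porous = welch(porous, fs=FS, nperseg=4096)

reduction_db = 10.0 * np.log10(psd_solid / psd_porous)
band = (freqs > 500) & (freqs < 10_000)      # broadband region of interest
print(f"Mean reduction, 0.5-10 kHz: {reduction_db[band].mean():.1f} dB")
```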

3D printing has shown that it is possible to produce much quieter rotor blades than we have previously been able to make.  Our next step is to further optimise the porosity designs to achieve maximum noise reduction.  We are also investigating the impact of these designs on aerodynamic performance to ensure excessive drag is not produced.  Further work will explore metallic 3D-printing systems to make more durable rotor blades suitable for extreme environments, such as gas turbines.


Figure 1.  3D-printed rotor blades under test at UNSW Sydney.  (a) Test rig with microphone array; (b) illustration of rotor blade with integrated porosity.


Figure 2.  Comparison of noise spectra from solid and porous rotor blades at 900 RPM and blade pitch angle of 5 degrees.

Audio 1: Solid rotor blades spinning at 900 RPM

Audio 2: 3D printed porous rotor blades spinning at 900 RPM