2aEA6 – A MEMS condenser microphone based acoustic receiver for totally implantable cochlear implants

Lukas Prochazka1 – Lukas.Prochazka@usz.ch
Flurin Pfiffner1 – Flurin.Pfiffner@usz.ch
Ivo Dobrev1 – Ivo.Dobrev@usz.ch
Jae Hoon Sim1 – JaeHoon.Sim@usz.ch
Christof Röösli1 – Christof.Roeoesli@usz.ch
Alex Huber1 – Alex.Huber@usz.ch
Francesca Harris2 – fharris@cochlear.com
Joris Walraevens2 – JWalraevens@cochlear.com
Jeremie Guignard3 – jguignard@cochlear.com

  1. Department of Otorhinolaryngology, Head and Neck Surgery
    University of Zurich
    Frauenklinikstrasse 24
    Zürich, 8091, SWITZERLAND
  2. Cochlear Technology Centre
    Schalienhoevedreef 20 I
    Mechelen, 2800, BELGIUM
  3. Cochlear AG
    Peter Merian-Weg 4
    Basel, 4052, SWITZERLAND

Popular version of paper 2aEA6, “A MEMS condenser microphone based acoustic receiver for totally implantable cochlear implants”
Presented Tuesday morning, May 8, 2018, 11:00-11:20 AM, Greenway D
175th ASA Meeting, Minneapolis

In the totally implantable cochlear implant (TICI) system, the external parts of currently available cochlear implants (CIs) are integrated into the implant and hence become invisible and well protected. Recipients of such a system would benefit significantly from 24/7 hearing and the overall improved quality of life that comes with an invisible hearing aid (easier sports, better sleep comfort, reduced social stigma, etc.). No TICI system is commercially available to date, mainly because of the technical difficulty of making an implantable microphone (IM).

In contrast to an external microphone, an implantable one needs sophisticated packaging to meet stringent requirements for long-term biocompatibility, safety and reliability. In addition, high sensing performance, low power consumption and a simple surgical approach have to be considered during the design phase.

The goal of the present project is to develop and validate an IM for a TICI system.


Figure 1. Schematic drawing of the present concept of an IM for a TICI system. The illustration shows the main parts of the intracochlear acoustic receiver (ICAR) and their anatomical locations. The sound receptor (SR) with up to 4 sound-receiving protective diaphragms, the enclosure of the MEMS condenser microphone (CMIC) and the system for static pressure equalization (SPEQ) form a biocompatible Ti packaging structure which hermetically seals the MEMS CMIC against body tissue. The SPEQ system is a passive adaptive volume which compensates for ambient static pressure variations and thus provides stable sensing performance.

Our approach for an IM is a device that measures the pressure fluctuations in the cochlea (inner ear) induced by the outer and middle ear chain, a so-called intracochlear acoustic receiver (ICAR, Fig. 1). An ICAR benefits from the amplification and directionality cues of the ear anatomy while minimizing interference from body noises. The ICAR could potentially be integrated into the existing CI electrode array, and hence such a TICI may be implanted with a surgical procedure similar to that of a conventional CI.

The design concept for the ICAR is based on a commercially available MEMS condenser microphone (MEMS CMIC) of the kind used in telecommunication devices. The MEMS CMIC of the ICAR is fully packaged in a biocompatible enclosure made of titanium (Ti) that still enables sensing of the pressure fluctuations in the cochlea. The sensing capability of the MEMS CMIC is maintained by sealing its pressure sensing port with thin protective Ti diaphragms (PDs). Sound-induced vibrations of the PDs cause pressure fluctuations within the gas-filled volume formed by the PDs and the sensing element of the MEMS CMIC. Since the size of the MEMS CMIC enclosure prevents its insertion into the cochlea, only the thin sensor head carrying the PDs, called the sound receptor (SR), is inserted into the cochlear duct. The enclosure remains in the middle ear cavity adjacent to the entrance of the cochlea (Fig. 1).


Figure 2. The first prototype (PT I) of the proposed design concept of the ICAR (a). PT I uses a commercially available MEMS CMIC in its original packaging (c, top enclosure removed). An acrylic adapter interconnects the pressure port of the MEMS CMIC and the SR (fused silica capillary tube). The PD, a 1 micron thick polyimide diaphragm supported by a thin-wall cylindrical structure made out of single crystal silicon, seals the front end of the SR tube (b).

The development process of the ICAR started with a simplified version of the proposed concept. The first prototype (PT I) is not implantable and does not meet the sensing performance targeted for the final ICAR (Fig. 2). It was mainly designed to validate lumped element modelling of the sensor concept and to measure and quantify intracochlear sound pressure (ICSP) in human and sheep temporal bones, providing crucial information towards an ICAR for a TICI system [1, 2]. The data from ICSP measurements were in good agreement with results in the literature [3].
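To give a flavor of the lumped element modelling mentioned above, the sketch below treats the sensing path as a simple series acoustic circuit: the protective diaphragm contributes an acoustic mass, compliance and damping, and the trapped gas volume a further compliance. The circuit topology and every parameter value are illustrative assumptions for this article, not design data from the project.

```python
import numpy as np

# Minimal lumped-element sketch of a diaphragm-sealed microphone port.
# The protective diaphragm (PD) is modeled as an acoustic mass M_a plus
# compliance C_d in series with a damping resistance R_a; the gas volume
# between the PD and the MEMS sensing element acts as a compliance C_v.
# Every number is a hypothetical placeholder, not an ICAR design value.

rho = 1.2        # density of the trapped gas [kg/m^3]
c   = 343.0      # speed of sound in the trapped gas [m/s]
V   = 1e-10      # trapped gas volume [m^3] (0.1 mm^3, assumed)
A   = 1e-7       # diaphragm area [m^2] (0.1 mm^2, assumed)
m_d = 1e-9       # diaphragm mass [kg] (assumed)
k_d = 10.0       # diaphragm stiffness [N/m] (assumed)
R_a = 1.5e9      # acoustic damping [kg/(m^4 s)] (assumed)

M_a = m_d / A**2            # acoustic mass [kg/m^4]
C_d = A**2 / k_d            # diaphragm acoustic compliance [m^3/Pa]
C_v = V / (rho * c**2)      # gas-volume compliance [m^3/Pa]

f = np.logspace(2, 5, 500)  # 100 Hz .. 100 kHz
w = 2 * np.pi * f

# Pressure divider: fraction of the outside pressure fluctuation that
# appears across the gas volume, i.e., at the MEMS sensing element.
Z_total = R_a + 1j * w * M_a + 1 / (1j * w * C_d) + 1 / (1j * w * C_v)
H = (1 / (1j * w * C_v)) / Z_total

i1k = np.argmin(np.abs(f - 1e3))
print(f"|H| at 1 kHz: {np.abs(H[i1k]):.3f}")
```

Fitting the predicted response of such a model to measurements is one way a prototype like PT I can validate the overall sensing concept before committing to an implantable design.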


Figure 3. Prototype II (PT II) combines the SR from PT I and a custom-made Ti enclosure for the MEMS CMIC with optimum form factor for surgical insertion (b). The flexible interface between microphone and the amplifier unit simplifies surgical insertion and sensor fixation (a). A flexible printed circuit board (FCB) enables packaging of the MEMS CMIC and the corresponding ASIC unit in an enclosure with optimum form factor. In addition, it simplifies electrical interfacing due to an integrated FCB cable (c).

As the next step, the second ICAR prototype (PT II) was designed and built such that surgical insertion into the cochlea was possible during acute large animal experiments. In PT II, a custom-made Ti enclosure for the MEMS CMIC was combined with the SR of PT I (Fig. 3). A flexible interface between the microphone and the external amplifier unit allows surgeons to insert and fix the sensor without using complex assisting tools (e.g. micro-manipulator). The acute large animal experiments revealed that the presented ICAR concept is a suitable receiver technology for TICI systems.


Figure 4. CAD model of prototype III (PT III) of the ICAR combining the MEMS CMIC enclosure from PT II and a Ti SR with four 1 micron thick Ti diaphragms. The SR structure and the enclosure are laser welded together. The multi-diaphragm SR design is required to meet the targeted sensing performance (sensitivity, bandwidth). The micro-channel within the SR pneumatically interconnects the PDs and the MEMS CMIC.

Currently, a fully biocompatible ICAR (PT III, Fig. 4) is under development. PT III, which is planned to be used for chronic large animal tests, is expected to fulfill all requirements for application of the ICAR in a TICI system, including high performance, low power consumption and good system integration. The key feature of PT III is the Ti SR with four PDs instead of the one used in PT I and PT II. It is fabricated from thin Ti sheets which are structured by photo etching and hermetically joined by diffusion bonding. The 1 micron thick PDs are deposited onto the bare SR structure by DC magnetron sputtering on top of a low-temperature decomposable polymer material (Fig. 5).

Figure 5. Tip region of the Ti SR of PT III after DC magnetron sputtering of a 1 micron thick Ti layer on both sides of the SR (picture from the first diaphragm fabrication trial on the multi-diaphragm SR structure design).

Acknowledgements
This work was supported by the Baugarten Stiftung Zürich, Switzerland, by the Cochlear Technology Centre Belgium and by Cochlear AG, European Headquarters, Switzerland.

[1] F. Pfiffner, et al., "A MEMS Condenser Microphone-Based Intracochlear Acoustic Receiver," IEEE Transactions on Biomedical Engineering, vol. 64, pp. 2431-2438, 2016.

[2] D. Péus, et al., "Sheep as a large animal ear model: Middle-ear ossicular velocities and intracochlear sound pressure," Hearing Research, vol. 351, pp. 88-97, 2017.

[3] H. H. Nakajima, et al., "Differential intracochlear sound pressure measurements in normal human temporal bones," Journal of the Association for Research in Otolaryngology, vol. 10, no. 1, pp. 23-36, 2009.

2aPA6 – An acoustic approach to assess natural gas quality in real time

Andi Petculescu – andi@louisiana.edu
University of Louisiana at Lafayette
Lafayette, Louisiana, US

Popular version of paper 2aPA6 “An acoustic approach to assess natural gas quality in real time.”
Presented Tuesday morning, December 5, 2017, 11:00-11:20 AM, Balcony L
174th ASA in New Orleans

Infrared laser spectroscopy offers amazing measurement resolution for gas sensing applications, ranging from 1 part per million (ppm) down to a few parts per billion (ppb).

There are applications, however, that require sensor hardware able to operate in harsh conditions, without the need for periodic maintenance or recalibration. Examples are monitoring of natural gas composition in transport pipes, explosive gas accumulation in grain silos, and ethylene concentration in greenhouse environments. A robust alternative is gas-coupled acoustic sensing. Such gas sensors operate on the principle that sound waves are intimately coupled to the gas under study; hence, any perturbation of the gas will affect i) how fast the waves travel and ii) how much energy they lose during propagation.

The former effect is captured by the so-called speed of sound, the typical "workhorse" of acoustic sensing. The sound speed of a gas mixture changes with composition because it depends on two gas parameters besides temperature. The first is the mass of the molecules forming the mixture; the second is the heat capacity, which describes the ability of the gas to follow, via the amount of heat exchanged, the temperature oscillations accompanying the sound wave. All commercial gas-coupled sonic gas monitors rely solely on the dependence of sound speed on molecular mass. This traditional approach, however, can only sense relative changes in the speed of sound, and hence in mean molecular mass; it cannot perform a truly quantitative analysis. Heat capacity, on the other hand, is the thermodynamic "footprint" of the amount of energy exchanged during molecular collisions, and it opens up the possibility of quantitative gas sensing. Furthermore, the attenuation coefficient, which describes how fast energy is lost from the coherent ("acoustic") motion to the incoherent (random) motion of the gas molecules, has largely been ignored.

We have shown that measurements of sound speed and attenuation at only two acoustic frequencies can be used to infer the intermolecular energy transfer rates, which depend on the species present in the gas. The foundation of our model is summarized in the pyramid of Figure 1. One can either predict the sound speed and attenuation if the composition is known (bottom-to-top arrow) or perform quantitative analysis or sensing based on measured sound speed and attenuation (top-to-bottom arrow).
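As a concrete (and deliberately simplified) example of the first effect, the ideal-gas relation c = sqrt(γRT/M) shows how both molecular mass M and heat capacity (through γ = Cp/Cv) set the sound speed; the sketch below evaluates it for methane diluted with nitrogen. The property values are textbook approximations, not data from our experiments.

```python
import numpy as np

# Illustrative sketch (not the authors' algorithm): the ideal-gas sound
# speed c = sqrt(gamma * R * T / M) depends on molecular mass M and the
# heat-capacity ratio gamma = Cp / Cv, so it shifts with composition.
R = 8.314  # universal gas constant [J/(mol K)]

# Approximate pure-gas properties near room temperature.
M_CH4, Cp_CH4 = 16.04e-3, 35.7   # methane: kg/mol, J/(mol K)
M_N2,  Cp_N2  = 28.01e-3, 29.1   # nitrogen: kg/mol, J/(mol K)

def sound_speed(x_n2, T=293.15):
    """Sound speed of a CH4/N2 mixture with N2 mole fraction x_n2."""
    M  = x_n2 * M_N2  + (1 - x_n2) * M_CH4   # mole-fraction-weighted mass
    Cp = x_n2 * Cp_N2 + (1 - x_n2) * Cp_CH4  # mixture heat capacity
    gamma = Cp / (Cp - R)                    # Cv = Cp - R for ideal gas
    return np.sqrt(gamma * R * T / M)

for x in (0.0, 0.05, 0.10):
    print(f"{x:4.0%} N2 in CH4 -> c = {sound_speed(x):6.1f} m/s")
```

Even a few percent of nitrogen shifts the mixture's sound speed by several meters per second, which is why sound speed alone is a useful, but only relative, composition indicator.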


Figure 1. The prediction/sensing pyramid of molecular acoustics. Direct problem: prediction of sound wave propagation (speed and attenuation). Inverse problem: quantifying a gas mixture from measured sound speed and attenuation.

We are developing physics-based algorithms that not only quantify a gas mixture but also help identify contaminant species in a base gas. With the right optimization, the algorithms can be used in real time to measure the composition of piped natural gas as well as its degree of contamination by CO2, N2, O2 and other species. It is these features that have sparked the interest of the gas flow-metering industry. Figure 2 shows model predictions and experimental data for the attenuation coefficient for mixtures of nitrogen in methane (Fig. 2a) and ethylene in nitrogen (Fig. 2b).


Figure 2. The normalized (dimensionless) attenuation coefficient in mixtures of N2 in CH4 (a) and C2H4 in N2 (b). Solid lines–theory; symbols–measurements

The sensing algorithm, which we named "Quantitative Acoustic Relaxational Spectroscopy" (QARS), is based on a purely geometric interpretation of the frequency-dependent heat capacity of the mixture of polyatomic molecules. This makes it highly amenable to implementation as a robust real-time sensing/monitoring technique. The results of the algorithm are shown in Figure 3 for a nitrogen-methane mixture. The example shows how the normalized attenuation curve arising from intermolecular exchanges is reconstructed (or synthesized) from data at just two frequencies. The prediction of the first-principles model (dashed line) shows two relaxation times: a main one of approximately 50 μs (i.e., 1/20,000 Hz) and a secondary one around 1 ms (1/1,000 Hz). Probing the gas with only two frequencies recovers the main relaxation process, around 20,000 Hz, from which the composition of the mixture can be inferred with relatively high accuracy.
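For readers who want to see the shape of such a curve, the sketch below evaluates a generic single-relaxation model, in which the attenuation per wavelength peaks at the relaxation frequency. It illustrates the underlying physics only; it is not the QARS algorithm itself, and the relaxation strengths are invented.

```python
import numpy as np

# Generic single-relaxation model (a sketch, not the authors' QARS code):
# the attenuation per wavelength of one relaxation process peaks where
# the probe frequency f equals the relaxation frequency f_r (following
# the text's convention, tau = 1/f_r).
def alpha_lambda(f, f_r, peak=1.0):
    """Normalized relaxational attenuation per wavelength."""
    r = f / f_r
    return peak * 2 * r / (1 + r**2)

f = np.logspace(2, 6, 400)          # probe frequencies, 100 Hz .. 1 MHz

# Two hypothetical relaxation processes, roughly matching the time
# scales quoted in the text (~50 us main, ~1 ms secondary).
curve = alpha_lambda(f, f_r=20_000, peak=1.0) + \
        alpha_lambda(f, f_r=1_000,  peak=0.1)

# Sampling the curve at just two frequencies, as in the paper's approach,
# is enough to pin down the dominant relaxation process.
for probe in (5_000.0, 50_000.0):
    idx = np.argmin(np.abs(f - probe))
    print(f"alpha_lambda at {probe/1e3:5.1f} kHz = {curve[idx]:.3f}")
```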

Figure 3. The normalized (dimensionless) attenuation as a function of frequency. Dashed line–theoretical prediction; solid line–reconstructed curve.

5aPA3 – Elastic Properties of a Self-Healing Thermal Plastic

Kenneth A. Pestka II – pestkaka@longwood.edu
Jacob W. Hull – jacob.hull@live.longwood.edu
Jonathan D. Buckley – jonathan.buckley@live.longwood.edu

Department of Chemistry and Physics
Longwood University
Farmville, Virginia, 23909, USA

Stephen J. Kalista Jr. – kaliss@rpi.edu
Department of Biomedical Engineering,
Rensselaer Polytechnic Institute
Troy, New York, 12180, USA

Popular version of paper 5aPA3
Presented Friday morning, May 11, 2018
175th ASA Meeting, Minneapolis, MN

In our lab at Longwood University we have recently used Resonant Acoustic and Ultrasonic Spectroscopy to improve our understanding of a self-healing thermal plastic ionomer composed of poly(ethylene-co-methacrylic acid) (EMAA-0.6Na), both before and after damage [1]. Resonant Ultrasound Spectroscopy (RUS) is a powerful technique ideally suited to characterizing and determining the elastic properties of novel materials, especially those that are only accessible in small sample sizes or that have exotic attributes, and EMAA-0.6Na is among the more exotic of these materials [1,2]. EMAA-0.6Na is a thermal plastic material that is capable of autonomously self-healing after energetic impact and even after penetration by a bullet [3].

Material samples, including those composed of EMAA-0.6Na, exhibit normal modes of vibration with resonant frequencies that are governed by the sample geometry, mass and elastic properties, as illustrated in Fig. 1. The standard RUS approach uses an initial set of approximate elastic constants as input parameters in a computer program that calculates a set of theoretical resonant frequencies. The input elastic constants are then iteratively adjusted until the calculated frequencies match the experimentally measured ones, thereby determining the actual elastic properties of the material.
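The following toy sketch shows the structure of that iterative loop. A real RUS code computes resonant frequencies by solving the full three-dimensional elastic eigenproblem; here a made-up forward model stands in for it so the fitting step can be shown end to end. All constants and mode factors are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy illustration of the RUS inversion loop (not the actual forward
# model, which requires solving the full 3D elastic eigenproblem).
# A made-up forward model maps two elastic constants to a handful of
# resonant frequencies through fixed, invented mode-shape factors.

rho = 950.0                       # density [kg/m^3], hypothetical
mode_factors = np.array([         # per-mode geometry factors (invented)
    [0.8, 0.2], [0.5, 0.5], [0.3, 0.7], [0.9, 0.1], [0.4, 0.6]])

def forward_model(c):
    """Map elastic constants c = (c11, c44) [Pa] to frequencies [Hz]."""
    wave_speeds_sq = mode_factors @ (c / rho)   # blend of c/rho terms
    return 10 * np.sqrt(wave_speeds_sq)         # arbitrary kHz-scale

# Synthetic "measured" spectrum from known constants, plus noise.
c_true = np.array([4.0e8, 1.0e8])
f_measured = forward_model(c_true) * (1 + 1e-3 * np.random.randn(5))

# Iteratively adjust the elastic constants until computed frequencies
# match the measured ones -- the essence of the RUS approach.
fit = least_squares(lambda c: forward_model(c) - f_measured,
                    x0=np.array([3.0e8, 2.0e8]), bounds=(1e6, 1e10))
print("recovered elastic constants [Pa]:", fit.x)
```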

Figure 1. 3D-model of a self-healing EMAA-0.6Na sample illustrating the first six vibrational modes.

However, EMAA-0.6Na is a relatively soft material, leading to sample resonances that are often difficult to isolate and identify. A partial spectrum from an EMAA-0.6Na sample is shown in Fig. 2. In order to extract individual resonant frequencies, a multiple peak-fitting algorithm was used, as shown in Fig. 2(b).
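A minimal version of such a peak-fitting step is sketched below: a sum of Lorentzians is fit to a (here synthetic) spectrum with scipy, and the fitted center frequencies are the resonances. The peak positions, widths, and noise level are invented; they merely echo the ~8.7 and ~9.8 kHz resonances discussed below.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the multi-peak extraction step: fit a sum of Lorentzians to
# a measured spectrum to pull out individual resonant frequencies.
# The synthetic "data" below stand in for a real RUS spectrum.

def lorentzian(f, f0, gamma, amp):
    return amp * gamma**2 / ((f - f0)**2 + gamma**2)

def two_peaks(f, f1, g1, a1, f2, g2, a2):
    return lorentzian(f, f1, g1, a1) + lorentzian(f, f2, g2, a2)

f = np.linspace(8000, 10500, 600)                 # frequency axis [Hz]
data = (two_peaks(f, 8700, 40, 1.0, 9800, 60, 0.7)
        + 0.02 * np.random.randn(f.size))         # synthetic spectrum

# Initial guesses near the visible peaks; curve_fit refines them.
p0 = [8650, 50, 0.8, 9850, 50, 0.5]
popt, _ = curve_fit(two_peaks, f, data, p0=p0)
print(f"fitted resonances: {popt[0]:.1f} Hz and {popt[3]:.1f} Hz")
```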


Figure 2. Undamaged sample behavior: Time dependence of the partial resonant spectrum of an approximately 7✕7.5✕1.4 mm³ EMAA sample over 48 hours (a). Lorentzian multi-peak fit to the signal used to extract individual resonances (b). Time evolution of the resonant frequencies at approximately 8.7 kHz (c) and 9.8 kHz (d) for the undamaged EMAA sample, adapted from [1].

Interestingly, the resonant frequencies of undamaged EMAA-0.6Na samples changed over time, as shown in Fig. 2(c) and 2(d), but the observed rate of elastic evolution was quite gradual. However, once the samples were damaged, in this case by a 3 mm pinch punch hammered directly into approximately 1 mm thick samples, dramatic changes occurred in the resonant spectrum, as shown in Fig. 3. Using this approach we were able to determine the approximate healing timescale of several EMAA-0.6Na samples after exposure to damage.


Figure 3. Partial time-dependent spectrum of an approximately 7✕7.5✕1.4 mm³ EMAA sample before damage (a) and after damage (b). The Lorentzian multi-peak fits are shown just after damage (c) and over an hour after damage (d), adapted from [1].

Building on this approach, we have been able to identify a sufficient number of resonant frequencies of undamaged EMAA-0.6Na samples to determine the complete set of elastic constants for the material. In addition, it should be possible to assess the evolution of the EMAA-0.6Na elastic constants for both undamaged and damaged samples, with the ultimate goal of quantifying the material parameters and environmental conditions that most significantly affect the elastic and self-healing behavior of this unusual material.

[1] Pestka II, K. A., Buckley, J. D., Kalista Jr., S. J., Bowers, N. R., Elastic evolution of a self-healing ionomer observed via acoustic and ultrasonic resonant spectroscopy, Sci. Rep., vol. 7, Article number: 14417 (2017). doi:10.1038/s41598-017-14321-z

[2] Migliori, A. and Maynard, J. D. Implementation of a modern resonant ultrasound spectroscopy system for the measurement of the elastic moduli of small solid specimens.  Rev. Sci. Instrum. 76, 121301 (2005).

[3] S. J. Kalista, T. C. Ward, Self-Healing of Poly(ethylene-co-methacrylic acid) Copolymers Following Ballistic Puncture, Proceedings of the First International Conference on Self Healing Materials, Noordwijk aan Zee, The Netherlands: Springer (2007).

5aSC – This is se{w,r}ious: using acoustics, phonetic transcription, and naïve judgments to better understand how children learn (or fail to learn) the /r/ sound

Mara Logerquist1
Alisha Martell1
Hyuna Mia Kim2
Benjamin Munson1 (contact author, munso005@umn.edu, +1 612 619 7724)
Jan Edwards2,3,4

1Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, 2Department of Communication Sciences and Disorders, University of Wisconsin, Madison, 3Department of Hearing and Speech Sciences, University of Maryland, College Park,
4Language Science Center, University of Maryland, College Park

Lay-language version of paper Growth in the Accuracy of Preschool Children’s /r/ Production: Evidence from a Longitudinal Study, poster 5aSC,
presented in the session Speech Production, Friday, May 11, 8 am – 12 pm.

Few would dispute that language acquisition is a fascinating and remarkable feat.  Children progress from their first coos and cries to saying full sentences in a matter of just a few years.  Given all that is involved in spoken language, it seems almost unreal that children could accomplish this Herculean task in such a short time.  Even the seemingly simple task of learning to pronounce sounds is, on closer examination, rather tough.  Children have to listen to the adults around them to figure out what they should sound like.  Then, they have to approximate the adult productions that they hear with the very different vocal instrument that they have: children’s vocal tracts are about half the size of an adult’s.  Not surprisingly, specific difficulty in learning speech sounds is one of the most common communication disorders.

The English /r/ sound is a particularly interesting topic in speech sound acquisition.  It is one of the last sounds to be acquired.  For many children with developmental speech disorders, /r/ errors (which usually sound like the /w/ sound) are very persistent, even when other speech errors have been successfully corrected.  Perhaps because it is so common, /r/ errors are very socially salient.  We can easily find examples of portrayals of children’s speech in TV shows that have /r/ errors, such as Ming-Ming the duck’s catch phrase “This is serious” (with a production of /r/ that sounds like a /w/) on the show Wonder Pets (https://www.youtube.com/watch?v=bjmYee2ZfSk).

The sound /r/ has a distinctive acoustic signature, as illustrated by productions of the words rock and walk (which rhyme in the speech of many people in Minnesota).  These illustrations are spectrograms, which are a type of acoustic record of a sound.  Spectrograms allow us to measure fine-grained detail in speech.  The red dots on these spectrograms are estimates of which of the many frequencies present in the speech signal are the loudest.  In the production of rock, the third-lowest peak frequency (which we call the third formant [F3]) is low (about 1500 Hz).  In the production of walk, it is much higher (about 2500 Hz).
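To make those "estimates of the loudest frequencies" concrete, here is one standard way such formant tracks are computed, via linear predictive coding (LPC). This is a generic textbook method with a hypothetical file name, not the specific analysis pipeline used in our study.

```python
import numpy as np
import librosa

# Generic LPC-based formant estimation (textbook method, shown for
# illustration). Formants appear as complex-root pairs of the LPC
# polynomial fitted to a short frame of speech.

def estimate_formants(wav_path, order=12):
    y, sr = librosa.load(wav_path, sr=10000)        # downsample for LPC
    n = int(0.030 * sr)                             # 30 ms frame
    frame = y[:n] * np.hamming(n)
    a = librosa.lpc(frame, order=order)             # LPC coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]               # one root per pair
    freqs = np.angle(roots) * sr / (2 * np.pi)      # rad/sample -> Hz
    bws = -(sr / np.pi) * np.log(np.abs(roots))     # bandwidths [Hz]
    formants = sorted(f for f, b in zip(freqs, bws) if f > 90 and b < 400)
    return formants[:3]                             # F1, F2, F3 estimates

# Hypothetical usage: an F3 near 1500 Hz suggests /r/; near 2500 Hz, /w/.
# print(estimate_formants("rock.wav"))
```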

The last two authors of this study, along with a third collaborator, Dr. Mary E. Beckman, recently finished a longitudinal study of relationships among speech perception, speech production, and word learning in children.  As part of this study, we collected numerous productions of late-acquired sounds in word-initial position (like the /r/ sound in rocking).  The ultimate goal of that study is to understand how speech production and perception early in life set the stage for vocabulary growth throughout the preschool years, and how vocabulary growth helps children refine their knowledge of speech sounds.  The study collected a treasure trove of data that we can use to analyze other secondary questions.  Our ASA poster does just that.  In it, we ask whether we can identify predictors of which children with inaccurate /r/ productions at our second time point (TP2, when children were between 39 and 52 months old, following a first time point at 28 to 39 months old) improved their /r/ production by our third time point (TP3, when the children were between 52 and 64 months old), and which did not.  Our candidate measures were taken from a battery of standardized and non-standardized tests of speech perception, vocabulary knowledge, and nonlinguistic cognitive skills.

Our first stab at answering this question involved looking at phonetic transcriptions of children’s productions of /r/ and /w/.  We picked /w/ as a comparison sound because most of children’s /r/ errors sound like /w/.  Phonetic transcription was completed by trained phonetic transcribers using a very strict protocol.  We calculated the accuracy of /r/ and /w/ at both TP2 and TP3.  As the plot below (in which each dot represents a single child’s performance) shows, children’s performance generally improved: most of the children are above the y=x line.

We examined predictors of how much growth in accuracy occurred from TP2 to TP3 for the subset of children whose accuracy of /r/ was below 50% at TP2.  Surprisingly, the results did not help us understand why some children improved more than others.  In general, we found that the children who had the most improvement were those with low speech perception and vocabulary scores at TP2.  A naïve interpretation might be that low vocabulary is associated with positive speech acquisition—an unlikely story!  Closer inspection showed that this relationship was because the children who had the lowest accuracy scores at TP2 (that is, the children with the most room to grow) were those who had the lowest vocabulary and speech perception scores.

We then went back to our data and asked whether we could get a finer-grained measure than the one we get from phonetic transcriptions.  We know from our own previous work, and from the work of others (especially Tara McAllister and colleagues, who have worked on /r/ extensively), that speech sounds are acquired gradually.  A child learning /s/ (the “s” sound) over the course of development gradually learns how to produce /s/ differently from other similar sounds (like the “th” sound and the “sh” sound).  McAllister and colleagues showed this to be the case with /r/, too.  Measures like phonetic transcription don’t do a very good job of capturing this gradual acquisition, since a transcription says only that a sound is correct or incorrect.  It doesn’t track degrees of accuracy or inaccuracy.

To examine gradual learning of /r/, we first tried to look at acoustic measures, like the F3 measures that are useful in characterizing adults’ /r/ productions.  A quick look at two spectrograms of children’s productions reveals how hard this endeavor actually is.  Both of these are the first 175 ms of two kids’ productions of the word rocking.  Both of them were transcribed as /w/.  In both of them, the F3 is hard to find.  It’s not nearly as clear as it is in the adults’ productions shown above.  In both of these cases, the algorithm to track formant frequencies gives values that are rather suspicious.  In short, we would need to carefully code these by hand to get anything approximating an accurate F3, and some tokens wouldn’t be amenable to any kind of acoustic analysis.  Given our large number of productions in this study (nearly 3,000!), this would take many hundreds of hours of work.

Subject 679L

Subject 671L

To remedy this, we decided to abandon acoustics.  Instead, we presented brief clips of children’s speech (the first 175 ms of words starting with the /r/ and /w/ sounds) to naïve listeners, where “naïve” means “without any specialized training in speech-language pathology, phonetics, or acoustics.”  We asked them to rate the children’s productions on a continuous scale, by clicking on an arrow like this:

[Rating scale: the “r” sound /r/ at one end, the “w” sound /w/ at the other]

When we examine listener ratings, we find quite a bit of variation across kids in how their sounds are rated.  Consider the ratings for the productions above.  Each one of the “r” symbols represents a rating by an individual.  The higher the rating, the more /w/-like it was judged to sound.

679L

671L

Listen to these sounds yourself, and ask where you would click on the line.  Do your judgments match those of our listeners?

We find that the production by 679L (which was rated by 125 listeners) is perceived as much more /w/-like than the production by 671L (which was rated by 20 listeners).  How do these data help us understand growth in /r/?  In our ongoing analyses, we are examining growth by using pooled listener ratings instead of phonetic transcription.  Our hope is that these finer-grained measures will help us better understand individual differences in how children learn the /r/ sound.

1pAB4 – Size Matters To Engineers, But Not To Bats

Rolf Müller – rolf.mueller@vt.edu
Bryan D. Todd

Popular version of paper 1pAB4, “Beamwidth in bat biosonar and man-made sonar”
Presented Monday, May 7, 2018, 1:30-3:50 PM, LAKESHORE B,
175th ASA Meeting, Minneapolis.

Bats and Navy engineers both use sonar systems. But do they worry about the same design features?

To find out, we have done an exhaustive review of both kinds of sonar systems, poring over the spec sheets of about two dozen engineered sonars for a variety of applications and using computer models to predict 151 functional characteristics of bat biosonar systems spanning eight different biological families. Crunching the numbers revealed profound differences between the way engineers approach sonar and the way bats do.

The most important finding from this analysis is related to a parameter called beamwidth. Beamwidth is a measure of the angle over which the emitted sonic power or receiver sensitivity is distributed. A small beamwidth implies a focused emission, where the sound energy is – ideally – concentrated with laser-like precision. But the ability to generate such a narrow beam is limited by the sonar system’s size: the larger the emitter is relative to the wavelength it uses, the finer the beam it can produce. Reviewing the design of man-made sonars indicates that beamwidth has clearly been the holy grail of sonar engineering — and in fact, the beamwidth of these systems hews closely to their theoretical minima.
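A back-of-the-envelope version of this size limit uses the textbook diffraction formula for a circular aperture, whose half-power beamwidth is roughly 2·arcsin(0.51·λ/D). The sketch below compares a bat-sized aperture with a much larger engineered one; the specific apertures are illustrative, not values from our data set.

```python
import numpy as np

# Diffraction-limited half-power (-3 dB) beamwidth of a circular
# aperture of diameter D at wavelength lam: ~ 2*arcsin(0.51*lam/D).
# Apertures below are illustrative, not values from the paper.

def beamwidth_deg(freq_hz, diameter_m, c=343.0):
    lam = c / freq_hz                       # wavelength in air [m]
    s = min(1.0, 0.51 * lam / diameter_m)   # clamp for tiny apertures
    return np.degrees(2 * np.arcsin(s))

# Bat-scale aperture: ~1 cm ear at 40 kHz (wavelength ~8.6 mm in air).
print(f"bat-scale (1 cm):  {beamwidth_deg(40e3, 0.01):6.1f} deg")

# Engineered array: aperture more than 100 wavelengths across.
print(f"engineered (1 m):  {beamwidth_deg(40e3, 1.0):6.1f} deg")
```

The bat-scale aperture yields a beam tens of degrees wide, while the large aperture narrows it to a fraction of a degree, which is exactly the size-versus-beamwidth trade-off described above.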


Some of the random emission baffles made from crumpled aluminum foil that served as a reference for the scatter seen in the bat beam width data.

But when it comes to beamwidth, tiny bats are at a significant disadvantage: even the largest bat ears are barely ten times the size of the animals’ ultrasonic wavelengths, while engineered systems can exceed their wavelengths by factors of 100 or 1,000. Remarkably, our analysis showed that bats seem to disregard beamwidth entirely. In our data set, the bats’ beamwidths scattered widely towards larger values; the scatter was even larger than that for random cone shapes we created from crumpled aluminum foil. Clearly, the bats’ sonar systems are not optimized for beamwidth. But we know that they are incredibly capable when it comes to navigating complex environments — which raises the question: what criteria are influencing their design?

We don’t know yet. But the bats’ superior performance demonstrates every night that giant sonar arrays with narrow beamwidths aren’t the only path to success, and certainly not the most efficient one: smaller, leaner solutions exist. And those solutions will be necessary for compact modern systems like autonomous underwater or aerial vehicles. To make sonar-based autonomy in natural environments a reality, engineers should let go of their fixation on size and look to the bats.

4bPA2 – Perception of sonic booms from supersonic aircraft of different sizes

Alexandra Loubeau – a.loubeau@nasa.gov
Structural Acoustics Branch
NASA Langley Research Center
MS 463
Hampton, VA 23681
USA

Popular version of paper 4bPA2, “Evaluation of the effect of aircraft size on indoor annoyance caused by sonic booms and rattle noise”
Presented Thursday afternoon, May 10, 2018, 2:00-2:20 PM, Greenway J
175th Meeting of the ASA, Minneapolis, MN, USA

Continuing interest in flying faster than the speed of sound has led researchers to develop new tools and technologies for future generations of supersonic aircraft.  One important breakthrough of these designs is that the sonic boom noise will be significantly reduced compared to that of previous planes, such as the Concorde.  Currently, U.S. and international regulations prohibit civil supersonic flight over land because of people’s annoyance at the impulsive sound of sonic booms.  For regulators to consider lifting the ban and introducing a new rule for supersonic flight, surveys of the public’s reactions to the new sonic boom noise are required.  For community overflight studies, a quiet sonic boom demonstration research aircraft will be built.  A NASA design for such an aircraft is shown in Fig. 1.


Figure 1. Artist rendering of a NASA design for a low-boom demonstrator aircraft, exhibiting a characteristic slender body and carefully shaped swept wings.

To keep costs down, this demonstration plane will be small and only include space for one pilot, with no passengers.  The smaller size and weight of the plane are expected to result in a sonic boom that will be slightly different from that of a full-size plane.  The most noticeable difference is that the demonstration plane’s boom will be shorter, which corresponds to less low-frequency energy.

A previous study assessed people’s reactions, in the laboratory, to simulated sonic booms from small and full-size planes.  No significant differences in annoyance were found for the booms from different size airplanes.  However, these booms were presented without including the secondary rattle sounds that would be expected in a house under the supersonic flight path.

The goal of the current study is to extend this assessment to include indoor window rattle sounds that are predicted to occur when a supersonic aircraft flies over a house.  Shown in Fig. 2, the NASA Langley indoor sonic boom simulator that was used for this test reproduces realistic sonic booms at the outside of a small structure, built to model a corner room of a house.  The sonic booms transmit to the inside of the room that is furnished to resemble a living room, which helps the subjects imagine that they are at home.  Window rattle sounds are played back through a small speaker below the window inside the room.  Thirty-two volunteers from the community rated the sonic booms on a scale ranging from “Not at all annoying” to “Extremely annoying”.  The ratings for 270 sonic boom and rattle combinations were averaged for each boom to obtain an estimate of the general public’s reactions to the sounds.


Figure 2. Inside of NASA Langley’s indoor sonic boom simulator.

The analysis shows that aircraft size is still not significant when realistic window rattles are included in the simulated indoor sound field.  Hence a boom from a demonstration plane is predicted to result in approximately the same level of annoyance as a full-size plane’s boom, as long as they are of the same loudness level.  This further confirms the viability of plans to use the demonstrator for community studies.  While this analysis is promising, additional calculations would be needed to confirm the conclusions for a variety of house types.