3aAA7 – Fast and perceptually convincing simulation of room acoustics: Shoebox rooms with bells and whistles

Oliver Buttler – oliver.buttler@uni-oldenburg.de
Torben Wendt – torben.wendt@uni-oldenburg.de
Steven van de Par – steven.van.de.par@uni-oldenburg.de
Stephan D. Ewert – stephan.ewert@uni-oldenburg.de

Medical Physics and Acoustics and Cluster of Excellence Hearing4all,
University of Oldenburg
Carl-von-Ossietzky-Straße 9-11
26129 Oldenburg, GERMANY

Popular version of paper 3aAA7, “Perceptually plausible room acoustics simulation including diffuse reflections”
Presented Wednesday morning, May 9, 2018, 10:50-11:05 AM, Location: NICOLLET C
175th ASA Meeting, Minneapolis

Today’s audio technology allows us to create virtual environments in which the listener feels immersed in the scene. This technology is used in entertainment and computer games, but also in research, for example to investigate how a hearing-aid algorithm performs or how humans behave in complex, realistic situations. To create such immersive virtual environments, convincing computer sound is just as important as convincing computer graphics. We can easily experience the richness of the acoustic world when we close our eyes: sound reaches us omnidirectionally, so that we can perceive a source from different directions or even around a corner, and we may even be able to tell from the acoustics alone whether we are in a concert hall or a bathroom.

To create immersive and convincing acoustics in virtual-reality applications, computationally efficient methods are required. While the development towards today’s astonishing real-time computer graphics over the last decades was strongly driven by the first-person computer game genre, comparable techniques in computer sound have until recently received much less attention. One reason might be that the physics of sound propagation and acoustics is at least as complicated as that of light propagation and illumination, and computing power was so far mainly spent on computer graphics. Moreover, from early on, computer graphics focused on creating visually convincing results rather than on physics-based simulation, which allowed for tremendous simplifications of the computations. Methods for simulating acoustics, in contrast, often aimed at physical accuracy, to predict how a planned concert hall or classroom might sound. These methods disregarded perceptual limitations of our hearing system that might allow for significant simplifications of the acoustic simulations.

Our perceptually plausible room acoustics simulator RAZR [www.razrengine.com, 1] achieves a computationally efficient acoustics simulation by drastic simplifications with respect to physical accuracy, while still accomplishing a perceptually convincing result. To do so, RAZR approximates the geometry of a real room by a simple, empty shoebox-shaped room and calculates the first sound reflections from the walls as if they were mirrors creating image sources for a sound source in the room [2]. Later reflections, which we perceive as reverb, are treated in an even more simplified way: only the temporal decay of sound energy and the binaural distribution at our two ears are considered, using a so-called feedback delay network [FDN, 3].
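To make the image-source step concrete, here is a minimal Python sketch in the spirit of Allen and Berkley's method [2]: each of the six walls of the shoebox mirrors the source, and the distance from each mirror image to the listener yields the delay and amplitude of one early reflection. This is a first-order illustration only, not RAZR's implementation; the room size, positions, and wall reflection coefficient are invented example values.

import numpy as np

C = 343.0  # speed of sound in m/s

def first_order_images(src, room):
    """Six first-order image-source positions for a shoebox room with
    one corner at the origin and the opposite corner at `room`."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = np.array(src, dtype=float)
            img[axis] = 2.0 * wall - img[axis]  # mirror across the wall
            images.append(img)
    return images

def early_reflections(src, rcv, room, beta=0.8):
    """Delays (s) and amplitudes of the direct sound and the six
    first-order wall reflections, using a frequency-independent wall
    reflection coefficient `beta` and 1/r distance attenuation."""
    paths = [np.array(src, dtype=float)] + first_order_images(src, room)
    taps = []
    for order, pos in enumerate(paths):
        r = np.linalg.norm(pos - np.asarray(rcv, dtype=float))
        gain = (beta if order > 0 else 1.0) / max(r, 1e-6)
        taps.append((r / C, gain))
    return taps

# Example: a 6 m x 4 m x 3 m shoebox room.
for delay, gain in early_reflections(src=(2, 1, 1.5), rcv=(4, 3, 1.5), room=(6, 4, 3)):
    print(f"delay {delay * 1000:6.2f} ms, relative amplitude {gain:.3f}")

Higher-order reflections follow by mirroring the images again; RAZR hands that growing tail over to the feedback delay network instead.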

Although we demonstrated that good perceptual agreement with real non-shoebox rooms is indeed achieved [1], the empty-shoebox simplification might be too inaccurate for rooms that strongly diverge from this assumption, e.g., a staircase or a room with many interior objects. Here, multiple reflections and scattering occur, which we simulate in a perceptually convincing manner by temporally smearing the simulated reflections. A single parameter was introduced to quantify the deviation from an empty shoebox room and thus the amount of temporal smearing. We demonstrate that a perceptually convincing room acoustics simulation can be obtained for sounds like music and impulses similar to a hand clap. Given its tremendous simplifications, we believe that RAZR is optimally suited for real-time acoustics simulation even on mobile devices, where virtual sounds could be embedded in augmented-reality applications.
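This popular summary does not spell out the smearing operation itself, so the following Python toy only illustrates the idea rather than RAZR's actual algorithm: each ideal, discrete reflection is spread into a short decaying noise burst whose duration grows with the assumed percentage deviation from an empty shoebox. All constants are invented for illustration.

import numpy as np

def smear_reflections(ir, deviation_pct, fs=48000, max_smear_ms=10.0):
    """Spread each impulse in the early impulse response `ir` over a
    noise burst whose length scales with `deviation_pct`
    (0 = empty shoebox, response returned untouched)."""
    n = int(fs * max_smear_ms * 1e-3 * deviation_pct / 100.0)
    if n < 2:
        return ir.copy()
    rng = np.random.default_rng(0)
    # Decaying noise kernel, normalized to unit energy so the overall
    # energy decay of the room response is preserved.
    kernel = rng.standard_normal(n) * np.exp(-3.0 * np.arange(n) / n)
    kernel /= np.sqrt(np.sum(kernel**2))
    return np.convolve(ir, kernel)

# Two ideal reflections at 5 ms and 12 ms, smeared for a room that
# deviates 20% from the empty-shoebox assumption.
fs = 48000
ir = np.zeros(fs // 50)
ir[int(0.005 * fs)] = 1.0
ir[int(0.012 * fs)] = 0.6
smeared = smear_reflections(ir, deviation_pct=20.0, fs=fs)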

Figure 1. Examples of the simplification of different real room geometries to shoeboxes in RAZR. The red boxes indicate the shoebox approximation. The green box in panel c) indicates a second, coupled volume attached to the lower main volume. While the rooms in panels a) and b) might be well approximated by the empty shoebox, the rooms in panels c) and d) show more severe deviations, which are accounted for by a single parameter estimating the deviation from the shoebox in percent and by applying the corresponding temporal smearing to the reflections.

 
Figure 2. Perceptually rated differences between real room recordings (A: large aula, C: corridor, S: seminar room) and simulated rooms with a hand-clap-like sound source (pulse). Different perceptual attributes are shown in the panels. The error bars indicate inter-subject standard deviations. Depending on the attribute, the ordinate scales range from “less pronounced” to “more pronounced” or use semantically fitting descriptors. The different symbols show the amount of deviation from the empty-shoebox assumption as a percentage. With a deviation of 20%, the critical attributes in the lower panel are rated near zero and thus show good correspondence with the real room. The remaining overall difference is mainly caused by differences in tone color, which can easily be addressed.

 
Figure 3. The virtual audio-visual environment lab at the University of Oldenburg features 86 loudspeakers and 8 subwoofers arranged in a full spherical setup to render 3-dimensional simulated sound fields. The foam wedges at the walls create an anechoic environment, so that the sound created by the loudspeakers is not affected by unwanted sound reflections at the walls.

Sound 1. Simulation of the large aula without the assumption of interior objects and multiple sound reflections on those objects. Although the sound is not distorted, an unnatural and crackling sound impression is obvious at the beginning.

Sound 2. Simulation of the large aula with the assumption of 20% of the empty space filled with objects. The sound is more natural and the crackling impression at the beginning is gone.

[1] T. Wendt, S. van de Par, and S. D. Ewert, “A computationally-efficient and perceptually-plausible algorithm for binaural room impulse response simulation,” Journal of the Audio Engineering Society, 62(11):748–766, 2014.

[2] J. B. Allen and D. A. Berkley, “Image method for efficiently simulating small-room acoustics,” The Journal of the Acoustical Society of America, 65(4):943–950, 1979.

[3] J.-M. Jot and A. Chaigne, “Digital delay networks for designing artificial reverberators,” in 90th Audio Engineering Society Convention, 1991.

1aAO5 – Underwater sound from recreational swimmers, divers, surfers, and kayakers


Christine Erbe – Curtin University, c.erbe@curtin.edu.au
Miles Parsons – Curtin University and Australian Institute of Marine Science, m.parsons@aims.gov.au
Alec Duncan – Curtin University, A.J.Duncan@curtin.edu.au
Klaus Lucke – Curtin University and JASCO Applied Sciences, Klaus.lucke@jasco.com
Alexander Gavrilov – Curtin University, A.Gavrilov@curtin.edu.au
Kim Allen – THHINK Autonomous Systems, kim.allen@thhink.com

Centre for Marine Science & Technology, Curtin University, Bentley, 6102 Western Australia, AUSTRALIA

Popular version of paper 1aAO5
Presented Monday morning, May 7, 2018, 11:10-11:25 a.m., GREENWAY A
175th ASA Meeting, Minneapolis, MN

Underwater sound contains a lot of information about the source that produces it. Ships, for example, have a characteristic underwater sound signature, from which the type of vessel, its speed, and its route can easily be determined. In some cases, individual vessels can be identified by their sound; information about the type of propulsion, operational mode, and load can be deduced; and maintenance issues (e.g., relating to the propeller) can be picked out. Similarly, just by listening, we can study marine life from whales to fishes and shrimp; we can track their movements, monitor their behavior, and, in the case of some dolphin species, even say which family and individuals are present. Sound is an important commodity for marine life: marine mammals as well as fishes communicate, sense their environment, navigate, and forage, all mediated by sound.

Video 1: Underwater video and sound recording of different water sports activities.

Given the important role sound plays in the life functions of marine fauna, the potential interference by man-made noise has received growing interest. Noise may disrupt animal behavior, affect hearing abilities, mask communication, cause stress, and in extreme cases cause physical and physiological damage that can ultimately be fatal. The research and management focus has, quite sensibly, been on the strongest sources, such as geophysical surveys or coastal and marine construction. Non-motorised activities are expected to be quieter and have hardly been studied.

Within the framework of an underwater acoustic project, we had the opportunity to record ourselves and friends performing a number of recreational water sports activities in a quiet Olympic pool, with all surrounding machinery (including cleaning pumps) switched off [1,2]. Specifically, different people were filmed and acoustically recorded while swimming breaststroke, backstroke, freestyle, and butterfly; snorkeling with and without fins; paddling a surfboard with alternating single or double arms; scuba diving; kayaking; and jumping into the pool. Sound pressure and water particle velocity were measured.

Activities that occurred at the surface, involved repeatedly piercing it, and hence created bubble clouds were the strongest sound generators. Received levels were 110-131 dB re 1 µPa (10-16,000 Hz) for all of the activities at the closest point of approach (1 m). These levels were lower than those found in environmental noise regulations, but were clearly above ambient noise levels recorded off beaches and hence predicted to be audible to marine fauna over tens to hundreds of meters.
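For readers curious how a broadband figure such as “110-131 dB re 1 µPa (10-16,000 Hz)” comes out of a recording, the usual recipe looks roughly like the Python sketch below: band-limit the calibrated hydrophone pressure signal, take the RMS, and express it relative to the 1 µPa underwater reference. The synthetic tone is only a sanity check; real data would come from a calibrated recording.

import numpy as np
from scipy.signal import butter, sosfilt

def band_spl_db_re_1upa(pressure_pa, fs, band=(10.0, 16000.0)):
    """RMS sound pressure level in dB re 1 µPa within `band` (Hz), for
    a pressure time series calibrated in pascals."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    p_band = sosfilt(sos, pressure_pa)
    p_rms = np.sqrt(np.mean(p_band**2))
    return 20.0 * np.log10(p_rms / 1e-6)  # reference: 1 µPa = 1e-6 Pa

# Sanity check with a synthetic 1 Pa-RMS tone (should give ~120 dB):
fs = 44100
t = np.arange(fs) / fs
tone = np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)
print(f"{band_spl_db_re_1upa(tone, fs):.1f} dB re 1 µPa")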

The characterization and quantification of underwater sound from recreational water sports has applicability well beyond environmental management. For example, just by listening to the recordings, it is easy to identify which of the volunteers was in the pool and which activity (including which style of swimming, with or without fins, with single versus double arms, etc.) was performed. The better (i.e., faster and smoother) swimmers were the quieter swimmers. Underwater sound might be a useful tool for assessing professional or competitive swimmer performance, and could be used for security monitoring of pools.

[1] C. Erbe, M. Parsons, A. J. Duncan, K. Lucke, A. Gavrilov and K. Allen, “Underwater particle motion (acceleration, velocity and displacement) from recreational swimmers, divers, surfers and kayakers,” Acoustics Australia 45, 293-299 (2017). doi: 10.1007/s40857-017-0107-6

[2] C. Erbe, M. Parsons, A. J. Duncan and K. Allen, “Underwater acoustic signatures of recreational swimmers, divers, surfers and kayakers,” Acoustics Australia 44 (2), 333-341 (2016). doi: 10.1007/s40857-016-0062-7

2pSA – Seismic-infrasound-acoustic-meteorological sensors to dynamically monitor the natural frequencies of concrete dams

Henry Diaz-Alvarez – henry.diaz-alvarez@usace.army.mil
Luis De Jesus-Diaz – Luis.A.DeJesus-Diaz@erdc.dren.mil
Vincent P. Chiarito – Vincent.P.Chiarito@usace.army.mil
Chris P. Simpson – Christopher.P.Simpson@usace.army.mil
Mihan H. McKenna – Mihan.H.McKenna@usace.army.mil

U.S. Army Engineer Research and Development Center
Geotechnical and Structures Laboratory
3909 Halls Ferry Road,
BLDG 5014
Vicksburg, MS 39180

Popular version of 2pSA, “Seismic-Infrasound-Acoustic-Meteorological Sensors to Dynamically Monitor the Natural Frequencies of Concrete Dams”
Presented Tuesday afternoon, May 8, 2018, 1:00-3:45 PM
175th ASA Meeting, Minneapolis

The U.S. Army Engineer Research and Development Center (ERDC) is leading research using seismic-infrasound-acoustic-meteorological (SIAM) arrays to determine structural characteristics of critical infrastructure. The fundamental vibrational modes of motion for large structures, such as dams, are usually in the sub-audible, infrasound frequency range. Infrasound is low-frequency, sub-audible sound, traditionally defined as between 0.1 and 20 Hz, below the range of human hearing of 20 Hz to 20,000 Hz [1]. To validate the concept and its potential use for monitoring flood-control structures, a structural evaluation was conducted at the Portugues Dam in Ponce, Puerto Rico.

The dam’s dynamic properties were studied prior to the deployment of SIAM arrays using detailed finite element models (FEM) assembled in COMSOL Multiphysics software [2]. Natural frequencies of 4.8 Hz and 6.7 Hz were determined for the two lowest vibration modes, shown in Figure 1 [3].


Figure 1. Modal analysis of the Portugues Dam using COMSOL Multiphysics software: vibration mode 1 (a) and vibration mode 2 (b).

To validate the results from the FEM dynamic analysis, Performance Based Testing (PBT) was conducted at the dam. The PBT consisted of measuring the input and output response at the crest to ambient excitation, using an array of accelerometers along each monolith.

Power Spectral Density (PSD) analysis of the accelerometer data was used to confirm the natural resonance frequencies of the dam (Figure 2), and also to develop an estimate of the response shapes associated with the fundamental vibration modes from the FEM (Figure 1).
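As a rough illustration of this analysis step, the Python sketch below estimates a PSD with Welch's method and picks resonance peaks out of a synthetic “ambient” accelerometer record with modes planted at 4.8 and 6.7 Hz. The sample rate, record length, and amplitudes are assumed values, not the dam data.

import numpy as np
from scipy.signal import welch, find_peaks

fs = 200.0                      # sample rate in Hz (assumed)
t = np.arange(int(600 * fs)) / fs
rng = np.random.default_rng(1)
# Synthetic ambient record: two lightly excited modes buried in noise.
accel = (0.5 * np.sin(2 * np.pi * 4.8 * t)
         + 0.3 * np.sin(2 * np.pi * 6.7 * t)
         + rng.standard_normal(t.size))

freq, psd = welch(accel, fs=fs, nperseg=4096)
peaks, _ = find_peaks(psd, height=10 * np.median(psd))
print("Resonance candidates (Hz):", freq[peaks])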


Figure 2. Power Spectral Density (PSD) analysis from accelerometer gauges during ambient excitation of the dam.

Instrumentation for a SIAM array consists of five IML infrasound sensors, each with four porous-hose wind filters (Figure 3), three audible microphones, a 1 Hz triaxial seismometer, and two RefTek 130s digitizers. To triangulate the specific source location of the infrasound, at least three SIAM arrays are required during field data collection. Typically, one array in the deployment also includes a bi-level meteorological station.
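The need for three arrays can be pictured with a toy triangulation in Python: each array reports a back-azimuth toward the source, and the source location follows as the least-squares intersection of the bearing lines. The array coordinates and bearings below are invented for illustration, not the Portugues Dam geometry.

import numpy as np

def triangulate(positions, bearings_deg):
    """Least-squares intersection of bearing lines. `positions` holds
    (x east, y north) array locations; `bearings_deg` holds compass
    bearings (0 deg = north) from each array toward the source."""
    A, b = [], []
    for (x0, y0), brg in zip(positions, bearings_deg):
        theta = np.radians(brg)
        dx, dy = np.sin(theta), np.cos(theta)  # bearing unit vector
        # Points (x, y) on the bearing line satisfy dy*x - dx*y = dy*x0 - dx*y0.
        A.append([dy, -dx])
        b.append(dy * x0 - dx * y0)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

arrays = np.array([[0.0, 0.46], [0.0, -0.2], [6.0, 0.0]])  # km, invented
bearings = [180.0, 0.0, 270.0]  # each pointing back toward the origin
print("Estimated source location (km):", triangulate(arrays, bearings))

With only two arrays the bearing lines still cross, but there is no redundancy to average out bearing errors; a third array makes the least-squares fix robust.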

Figure 3. Example of one SIAM array used during test in the Cerrillo area.

A total of three SIAM arrays were used to monitor the dam, at distances of 0.46 km upstream (CPBBR), 0.2 km downstream (Gazebo), and 6.0 km away (Cerrillo), as shown in Figure 4.

Figure 4. Illustration of the SIAM array locations during the data collection.

An example time series from a single infrasound sensor at the downstream array, with the ambient excitation highlighted, is shown in Figure 5. The PSD analysis of the ambient excitation in Figure 6 shows correlated energy at 4.3 Hz and 6.0 Hz, which aligns with the vibration modes measured on the structure with accelerometers. The FEM results from COMSOL Multiphysics thus agree with the infrasound field data and were used to validate the SIAM array measurements.

 
Figure 5. Raw data from a single infrasound sensor located at the downstream array


Figure 6. PSD analysis from the infrasound sensors located at the downstream array during ambient excitation.

Performing an infrasound survey of Portugues Dam provided an opportunity to validate whether infrasound can be used to remotely determine the fundamental vibration frequencies of large structures. Infrasound waves are capable of propagating over significant standoff distances from the source structure. Potential benefits of infrasound monitoring include determining a structure’s health without a physical inspection, and passively monitoring several structures of interest with relatively few SIAM arrays.

[1] P. Campus, D. R. Christie, “Worldwide observations of infrasonic waves” in Infrasound Monitoring for Atmospheric Studies, edited by A. Le Pichon, E. Blanc, A. Hauchecorne (Springer, Dordrecht, 2010), pp. 185–234.
[2] COMSOL Multiphysics® v. 5.2, COMSOL AB, Stockholm, Sweden, www.comsol.com.
[3] H. Diaz-Alvarez, V. P. Chiarito, S. McComas, and M. H. McKenna, “Infrasound assessment of the roller-compacted concrete dam: Case study of the Portugues Dam in Ponce, PR,” COMSOL Conference 2015, Newton, MA, 2015.

2aAAa – The face of the facility: acoustic design of lobbies

Brandon Cudequest – bcudequest@thresholdacoustics.com
Anthony Hoover, FASA – thoover@mchinc.com

McKay Conant Hoover, Inc.
5655 Lindero Canyon Road, Suite 325
Westlake Village, CA 91362

Popular version of paper 2aAAa
Presented Tuesday morning, May 08, 2018
175th ASA Meeting in Minneapolis, MN

Lobbies are a facility’s initial destination, the point of departure, the information center, and a security checkpoint.

It can be difficult to impose blanket acoustical criteria, because lobbies take an infinite number of forms and must respond to building occupancy, fire safety, pedestrian flow, cultural tastes, and architectural aesthetics.

The requirements can fluctuate throughout design, and the acoustics need to keep pace. The following considerations can inform the acoustical design:

  • The primary functions include services for information, ticketing, and security, which necessitate speech intelligibility. This in turn suggests concentrating sound-absorptive treatments near ticketing and information booths where speech intelligibility is important, as well as providing shelter for task-oriented areas.
  • Great lobbies serve secondary functions, such as providing daylighting through large areas of glass, but glass is sound-reflective.
  • Some lobbies are very grand and can be several stories in height. This offers opportunities for ad hoc performances, especially for choral groups and chamber music. Sound-scattering treatments instead of sound-absorptive treatments help to provide a sense of “spaciousness.”
  • Noise, speech, or music generated in the lobby should not transmit to other spaces. Appropriate sound isolation can be achieved through careful placement of doors, vestibules, hallways, and partitions.
  • HVAC systems should be relatively quiet.

Figure 1 shows the grand lobby in a prominent performing arts center, with carefully-designed acoustical features:

  • The ceiling and corridors are highly diffusive, scattering sound and softening individual sound reflections.
  • Balconies are large protruding elements that scatter sound.
  • Areas under the balconies and stairs offer noise shielding for patrons to comfortably purchase beverages and tickets, check coats, and engage with ushers.

Figure 1. An acoustically successful lobby

This lobby successfully hosts pre-function events such as small choral groups on the main stairwell and bustling cocktail hours, as well as many post-function events such as autographs, fundraising, and cabaret.

Figure 2 shows a five-story lobby in a large courthouse facility. The lobby is the main entrance for the facility, and connects the courthouse wing to the administrative wing via a series of stacked bridges.

Figure 2. Architectural rendering of the courthouse atrium

The design called for walls of glass and brick, with a hard floor and a wood ceiling. The resultant reverberation (the time for sounds to decay to inaudibility) would be comparable to that of a cavernous cathedral and would impede clear communication.

Reverberation was reduced by providing ½” separations between the individual wood planks of the ceiling, which allow sound to pass through and be absorbed by insulation above. The reverberation is still very long (about 2.5 seconds), but people can now communicate easily within about 6 feet of each other. Beyond that distance, speech is garbled, which effectively promotes privacy from most other occupants.
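A back-of-the-envelope Sabine calculation shows why the slotted ceiling matters: letting sound pass between the planks into insulation raises the ceiling's absorption coefficient, which raises the total absorption A and lowers the reverberation time RT60 = 0.161 V / A. Every volume, area, and absorption coefficient in the Python sketch below is an assumed illustration value, not taken from the actual courthouse, though the results land in the same ballpark as the text.

def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time; `surfaces` is a list of
    (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

volume = 12000.0                  # m^3, assumed five-story atrium
walls_floor = [(3800.0, 0.04),    # glass and brick: hard, reflective
               (700.0, 0.02)]     # hard floor
solid = walls_floor + [(700.0, 0.12)]    # tight wood-plank ceiling
slotted = walls_floor + [(700.0, 0.85)]  # 1/2" gaps over insulation

print(f"solid ceiling:   RT60 ~ {sabine_rt60(volume, solid):.1f} s")
print(f"slotted ceiling: RT60 ~ {sabine_rt60(volume, slotted):.1f} s")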

The buildup of sound from various occupant activities would be too noisy for the security guards, so security personnel were relocated under the lowest bridge, shielding them from the general lobby noise and reverberation.

A successful design is sensitive to the goals of the facility, manages reverberation, collects activities into shielded areas, and prevents distracting noise from transmitting to noise-sensitive spaces. Careful balancing of surface shaping, finish treatments, and sound isolation can deliver a great lobby.

2pAA4 – Acoustical balance between the stage and the pit in the Teatro Colón of Buenos Aires

Gustavo Basso – gusjbasso@gmail.com

IPEAL – FBA – Universidad Nacional de La Plata
Calle 5 Nº 84
La Plata, 1900, ARGENTINA.

Popular version of 2pAA4, “Acoustical balance between the stage and the pit in the Teatro Colón of Buenos Aires”
Presented Tuesday, May 08, 2018, 2:05pm – 2:25 PM, Nicollet C
175th ASA Meeting, Minneapolis

Unlike an auditorium for symphonic music, in which the orchestra and the audience occupy the same architectural space, an opera theatre has three coupled spaces with different acoustic functions: the stage tower, the area for the audience, and the orchestra pit. Given the importance of the voice in the genre, the sound balance B between the singers on the stage and the orchestra in the pit is considered one of the key factors determining the acoustical quality of an opera performance [1]. In an opera theatre, the singers are at a disadvantage compared with the orchestra, both in number and in sound power, and the balance B should be maintained within a range of -2 to +4 dB [2].
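In measurement terms, B is simply a level difference: excite the room with a calibrated source on the stage and another in the pit, record both at the same seat, and subtract the levels in decibels. The Python sketch below illustrates this with synthetic signals; the helper names and numbers are invented, not the measurement procedure used at the theatre.

import numpy as np

def level_db(x):
    """RMS level in dB (arbitrary reference; it cancels in the difference)."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

def balance_db(stage_at_seat, pit_at_seat):
    """Balance B: level of the stage source minus level of the pit
    source, both recorded at the same seat."""
    return level_db(stage_at_seat) - level_db(pit_at_seat)

rng = np.random.default_rng(0)
stage_at_seat = 1.3 * rng.standard_normal(48000)  # stronger stage path
pit_at_seat = rng.standard_normal(48000)
print(f"B = {balance_db(stage_at_seat, pit_at_seat):+.1f} dB")  # about +2.3 dB

In practice B is also evaluated per frequency band, which is what reveals the singer's-formant advantage discussed below.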

In the case of the Teatro Colón of Buenos Aires, well known for its outstanding acoustical quality [3], achieving the proper balance is not easy given its large size: a huge main volume of 20,000 m3 coupled to a stage tower of 30,000 m3.

Two different situations have been identified: the main floor and the upper levels. On the main floor, the measurements show that the balance is appropriate, with values of B between 0.7 and 4.7 dB. Analysis in a digital model reveals that these values result from many broadband reflections of the singers’ voices off the surfaces surrounding the stalls (Fig. 1), in conjunction with the masking of the sound coming from the pit.


Figure 1. Some of the lateral reflections of the singers’ voices toward the same seat on the main floor, arriving unobstructed from the wall/ceiling dihedral angles of three balcony levels. They help the voice to be heard and thus raise the balance in the stalls.

As important as the values of B is the spectral distribution of the balance. A singer trained in the western operatic tradition produces a great deal of energy in a range of frequencies centred around 2500-3000 Hz. In this region, called the “singer’s formant”, the voice can reach intensities well above those of the orchestra [4]. In the Teatro Colón, the stage/pit balance reaches its maximum values in the region of the singer’s formant, helping the voices to be heard clearly (Fig. 2).

Figure 2. Spectral characteristics of the balance on the main floor, in which the frequencies corresponding to the singer’s formant are reinforced and favored by the room.

As could be expected, the stage set-up can reduce the balance values on the main floor, mainly if the singer is placed well inside the stage tower.

At the upper levels, where the instruments of the orchestra in the pit can be seen, the balance loses part of the spectral advantage it has in the stalls. Nevertheless, this is compensated by the emergence of a powerful reflection of the singers’ voices off the stage floor, a reflection that is almost non-existent on the main floor (Fig. 3). This, plus early reflections coming from the ceiling, keeps the balance within appropriate values at the upper levels.


Figure 3. Early reflections at the upper level (paraiso) from a directional source on the stage. The strong reflections can be seen on both the stage floor and the ceiling.

 

Video 1. Acoustical measurements of the Teatro Colón (IADAE, 2010)

The results of this work help us understand some of the acoustic characteristics of the Teatro Colón. They also enable opera set-ups to be designed with acoustics in mind, keeping the singer/orchestra balance high, one of the key factors in judging the quality of a lyrical performance.

 

REFERENCES
[1] N. Prodi, R. Pompoli, F. Martellotta, S. Sato. “Acoustics of Italian Historical Opera Houses”, J. Acoust. Soc. Am. 138 (2), 769-781, 2015.
[2] J. Meyer. Acoustics and the Performance of Music, Springer, New York, 2009.
[3] T. Hidaka, L. Beranek. “Objective and subjective evaluations of twenty-three opera houses in Europe, Japan, and the Americas”, J. Acoust. Soc. Am. 107 (1), 368-383, 2000.
[4] J. Sundberg, The Science of the Singing Voice, Northern Illinois University Press, Illinois, USA, 1987.

1aPP2 – Restoring stereo hearing to people with one deaf ear

Joshua Bernstein – joshua.g.bernstein.civ@mail.mil

Kenneth Jensen – kjensen@hjf.org

Walter Reed National Military Medical Center
4954 N. Palmer Rd.
Bethesda, MD 20889

Jack Noble – jack.noble@vanderbilt.edu
Vanderbilt University
2301 Vanderbilt Pl.
Nashville, TN 37235

Olga Stakhovskaya – ostakhov@umd.edu
Matthew Goupell – goupell@umd.edu
University of Maryland – College Park
7251 Preinkert Drive
College Park, MD 20742

Popular version of 1aPP2, “Measuring spectral asymmetry for cochlear-implant listeners with single-sided deafness”
Presented Monday morning, May 7, 2018
175th ASA Meeting, Minneapolis, MN

Having two ears provides tremendous benefits in our busy world: helping people to communicate in noisy environments, to tell where sounds are coming from, and to feel a general sense of three-dimensionality. People who go deaf in one ear (single-sided deafness) are therefore at a considerable disadvantage compared to people with access to sound in both ears.

Recently, cochlear implants have been explored as a way to restore some hearing to the deaf ear for people with single-sided deafness. A cochlear implant bypasses the normal inner-ear function, relaying sound information directly to the auditory nerve and brain via small electrical bursts. While traditionally prescribed to people with two deaf ears, recent studies show that cochlear implants can restore some aspects of spatial hearing to people with single-sided deafness [1, 2].

The benefits that a cochlear implant provides to a person with single-sided deafness might not be as large as they could be, because the device was never designed for this population. We know that for a given sound frequency, the cochlear implant stimulates the incorrect place in the cochlea (the snail-shaped hearing organ in the inner ear). Figure 1A shows the snail-shaped cochlea straightened into a line. A normal-hearing ear processes the full frequency range (20-20,000 Hz) from one end of the cochlea to the other. Cochlear implants, however, deliver frequencies to the wrong cochlear locations, because the device cannot be placed to allow access to the full range of frequency-specific nerve cells. This frequency mismatch could make it difficult for people with single-sided deafness to combine the sounds across the two ears, causing them to “hear double” instead of hearing somewhat, although imperfectly, in stereo.
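The mismatch can be made concrete with Greenwood's frequency-position function for the human cochlea, f = A(10^(ax) - k), with A = 165.4, a = 2.1, and k = 0.88 when x is expressed as the fractional distance from the apex. In the Python sketch below, the insertion depth is an assumed illustration value, not a figure from this study.

def greenwood_hz(x):
    """Characteristic frequency at relative place x (0 = apex, 1 = base),
    using standard human constants."""
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

cochlea_mm = 35.0    # approximate human cochlear duct length
insertion_mm = 24.0  # assumed electrode insertion depth from the base
deepest_x = (cochlea_mm - insertion_mm) / cochlea_mm  # fraction from apex

print(f"Deepest electrode sits near {greenwood_hz(deepest_x):.0f} Hz tissue,")
print("while a standard frequency map may assign it input down to a few hundred Hz.")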

Figure 1.  A schematic of an unrolled cochlea showing how frequency mismatch arises because the cochlear implant electrode array (blue) cannot be inserted all the way to the end of the cochlea. (A) Programming a cochlear implant in a standard way leads to a frequency mismatch between the cochlear-implant (green) and normal-hearing ears (red). (B) Adjusting the cochlear implant frequency allocation could reduce or eliminate this mismatch. 

Legend: Sound examples, best experienced over headphones.

Simulation of hearing with a mismatched cochlear implant.

Simulation of hearing with a frequency-matched cochlear implant

Our research aims to reprogram cochlear implants to frequency-align the two ears for people with single-sided deafness (Figure 1B) by measuring where in the cochlea the individual electrical contacts (electrodes) are stimulating. We compared three methods: computed-tomography (CT) scans (like an x-ray) to visualize electrode locations within the cochlea; having the listener compare the relative pitches of sounds presented to the two ears; and having the listener judge small (~1 ms) differences in the arrival time of sounds at the two ears [3]. The timing judgments, the only one of the measurements that required listeners to use their two ears together, gave estimates of electrode location similar to the CT scans. In contrast, the pitch measurements gave different estimates, suggesting that the brain rewired itself to accommodate pitch differences but did not rewire itself for spatial hearing. Device programming based on either the timing or the CT measurements shows the most promise to improve the ability to use the ears in concert with one another. Our next step will be to make these programming changes and see whether they improve stereo hearing.

[The views expressed in this abstract are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government.]

[1] Bernstein, J., Schuchman, G., and Rivera, A, “Head shadow and binaural squelch for unilaterally deaf cochlear implantees,” Otology and Neurotology, vol. 38, pp. e195-e202, 2017.

[2] Vermeire, K., and Van de Heyning, P. “Binaural hearing after cochlear implantation in subjects with unilateral sensorineural deafness and tinnitus,” Audiology and Neurootology, vol. 14, pp. 163–171, 2009.

[3] Bernstein, J., Stakhovskaya, O., Schuchman, G., Jensen, K., and Goupell, M., “Interaural time-difference discrimination as a measure of place of stimulation for cochlear-implant users with single-sided deafness,” Trends in Hearing, vol. 19, p. 2331216515617143, 2018.