2pSA – Seismic-infrasound-acoustic-meteorological sensors to dynamically monitor the natural frequencies of concrete dams

Henry Diaz – Alvarez – henry.diaz-alvarez@usace.army.mil
Luis De Jesus-Diaz – Luis.A.DeJesus-Diaz@erdc.dren.mil
Vincent P. Chiarito – Vincent.P.Chiarito@usace.army.mil
Chris P. Simpson – Christopher.P.Simpson@usace.army.mil
Mihan H. McKenna – Mihan.H.McKenna@usace.army.mil

U.S. Army Engineer Research and Development Center
Geotechnical and Structures Laboratory
3909 Halls Ferry Road,
BLDG 5014, Vicksburg, MS 39180

Popular version of 2pSA, “Seismic-Infrasound-Acoustic-Meteorological Sensors to Dynamically Monitor the Natural Frequencies of Concrete Dams”
Presented Tuesday afternoon, May 8, 2018, 1:00-3:45 PM
175th ASA Meeting, Minneapolis
Click here to read the abstract

The U.S. Army Engineer Research and Development Center (ERDC) is leading research using seismic-infrasound-acoustic-meteorological (SIAM) arrays to determine structural characteristics of critical infrastructure. Fundamental vibrational modes of motion for large structures, such as dams, are usually in the sub-audible, infrasound frequency range. Infrasound is low-frequency, sub-audible sound, traditionally defined as 0.1 to 20 Hz, below the range of human hearing (20 Hz to 20,000 Hz) [1]. To validate the concept and its potential use for monitoring flood control structures, a structural evaluation was conducted at the Portugues Dam in Ponce, Puerto Rico.

The dam’s dynamic properties were studied prior to the deployment of SIAM arrays using detailed finite element models (FEM) assembled in COMSOL Multiphysics software [2]. Natural frequencies of 4.8 Hz and 6.7 Hz were determined for the two lowest modes of vibration, shown in Figure 1 [3].


Figure 1. Modal analysis of the Portugues Dam using COMSOL Multiphysics software. Vibration mode 1 (a) and vibration mode 2 (b).

To validate the results from the FEM dynamic analysis, Performance Based Testing (PBT) was conducted at the dam. The PBT consisted of measuring the crest’s input and output response to ambient excitation using an array of accelerometers along each monolith.

Power Spectral Density (PSD) analysis of the accelerometer data was used to confirm the natural resonance frequencies of the dam (Figure 2), and also to develop an estimate of the response shape associated with the fundamental modes of vibration predicted by the FEM (Figure 1).


Figure 2. Power Spectral Density (PSD) analysis from accelerometer gauges due to ambient excitation of the dam.
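
As a rough illustration of how resonance peaks can be pulled out of accelerometer records, the sketch below estimates a PSD with Welch’s method and reports the strongest peaks. The sampling rate, record length, and synthetic signal are assumptions for illustration only, not data from the Portugues Dam test.

import numpy as np
from scipy import signal

# Synthetic stand-in for an accelerometer record (assumed 100 Hz sampling),
# with energy near 4.3 and 6.0 Hz buried in ambient noise.
fs = 100.0
t = np.arange(0, 600, 1 / fs)                     # 10-minute record
rng = np.random.default_rng(0)
record = (np.sin(2 * np.pi * 4.3 * t)
          + 0.7 * np.sin(2 * np.pi * 6.0 * t)
          + 2.0 * rng.standard_normal(t.size))

# Welch PSD estimate; long segments give roughly 0.05 Hz resolution,
# enough to separate closely spaced structural modes.
f, psd = signal.welch(record, fs=fs, nperseg=2048)

# Restrict to the infrasound band and report the most prominent peaks.
band = (f > 0.5) & (f < 20.0)
peaks, _ = signal.find_peaks(psd[band], prominence=0.1 * psd[band].max())
print("Candidate natural frequencies (Hz):", np.round(f[band][peaks], 2))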

Instrumentation for a SIAM array consists of five IML infrasound sensors, each with four porous-hose wind filters (Figure 3), three audible microphones, a 1 Hz triaxial seismometer, and two RefTek 130s digitizers. To triangulate the specific source location of the infrasound, at least three SIAM arrays are required during the field data collection. Typically, one array in the deployment also includes a bi-level meteorological station.

Figure 3. Example of one SIAM array used during testing in the Cerrillo area.
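
The requirement for at least three arrays comes from triangulation: each array supplies a back-azimuth toward the source, and the bearings are intersected to estimate its location. Below is a minimal least-squares bearing-intersection sketch; the array coordinates and azimuths are invented for illustration and are not the CPBBR/Gazebo/Cerrillo geometry.

import numpy as np

def triangulate(positions, azimuths_deg):
    """Least-squares intersection of bearing lines from several arrays.

    positions    : (N, 2) east/north array locations, km
    azimuths_deg : back-azimuths toward the source, clockwise from north
    """
    az = np.radians(azimuths_deg)
    u = np.column_stack([np.sin(az), np.cos(az)])   # unit bearing vectors
    n = np.column_stack([-u[:, 1], u[:, 0]])        # normals to the bearings
    # Each bearing constrains the source x through n_i . (x - p_i) = 0.
    b = np.sum(n * np.asarray(positions), axis=1)
    x, *_ = np.linalg.lstsq(n, b, rcond=None)
    return x

# Invented array positions (km) and back-azimuths (deg) for illustration.
arrays = [(0.0, 0.0), (0.5, -0.3), (5.5, 2.0)]
bearings = [45.0, 21.0, 257.5]
print("Estimated source location (east, north, km):", triangulate(arrays, bearings))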

A total of three SIAM arrays were used to monitor the dam at distances of 0.46 km Upstream (CPBBR), 0.2 km Downstream (Gazebo), and 6.0 km (Cerrillo) from the dam as shown in Figure 4.

Figure 4. Illustration of the SIAM array locations during the data collection.

An example time series from a single infrasound sensor at the downstream array, with ambient excitation highlighted, is shown in Figure 5. The PSD analysis for ambient excitation in Figure 6 shows correlated energy at 4.3 Hz and 6.0 Hz, which aligns with the vibration modes measured on the structure with accelerometers. Results from the FEM in COMSOL Multiphysics agree with the infrasound field data and were used to validate the SIAM array measurements.

 
Figure 5. Raw data from a single infrasound sensor located at the downstream array


Figure 6. PSD analysis from infrasound sensors located at the downstream array, ambient excitation.

Performing an infrasound survey of Portugues Dam provided an opportunity to validate whether infrasound can be used to remotely determine the fundamental frequencies of vibration of large structures. Infrasound waves are capable of propagating to significant standoff distances from the source structure. Potential benefits of infrasound monitoring include assessing a structure’s health without a physical inspection, as well as passively monitoring several structures of interest with relatively few SIAM arrays.

[1] P. Campus, D. R. Christie, “Worldwide observations of infrasonic waves” in Infrasound Monitoring for Atmospheric Studies, edited by A. Le Pichon, E. Blanc, A. Hauchecorne (Springer, Dordrecht, 2010), pp. 185–234.
[2] COMSOL Multiphysics® v. 5.2. www.comsol.com. COMSOL AB, Stockholm, Sweden
[3] H. Diaz-Alvarez, V. P. Chiarito, S. McComas, and M. H. McKenna, “Infrasound Assessment of the Roller Compacted Concrete Dam: Case Study of the Portugues Dam in Ponce, PR,” COMSOL Conference 2015, Newton, MA, 2015.

2aAAa – The face of the facility: acoustic design of lobbies

Brandon Cudequest -bcudequest@thresholdacoustics.com
Anthony Hoover, FASA – thoover@mchinc.com

McKay Conant Hoover, Inc.
5655 Lindero Canyon Road, Suite 325
Westlake Village, CA 91362

Popular version of paper 2aAAa
Presented Tuesday morning, May 08, 2018
175th ASA Meeting in Minneapolis, MN

Lobbies are a facility’s initial destination, the point of departure, the information center, and a security checkpoint.

It can be difficult to impose blanket acoustical criteria because lobbies take countless forms and must respond to building occupancy, fire safety, and pedestrian flow, as well as cultural tastes and architectural aesthetics.

The requirements can shift fluidly throughout design, and the acoustics need to keep pace. The following considerations can inform the acoustical design:

  • The primary functions include services for information, ticketing, and security, which necessitate speech intelligibility. This in turn suggests concentrating sound-absorptive treatments near ticketing and information booths where speech intelligibility is important, as well as providing shelter for task-oriented areas.
  • Great lobbies serve secondary functions, such as providing daylighting through large areas of glass, but glass is sound-reflective.
  • Some lobbies are very grand and can be several stories in height. This offers opportunities for ad hoc performances, especially for choral groups and chamber music. Sound-scattering treatments instead of sound-absorptive treatments help to provide a sense of “spaciousness.”
  • Noise, speech, or music generated in the lobby should not transmit to other spaces. Appropriate sound isolation can be achieved through careful placement of doors, vestibules, hallways, and partitions.
  • HVAC systems should be relatively quiet.

Figure 1 shows the grand lobby in a prominent performing arts center, with carefully-designed acoustical features:

  • The ceiling and corridors are highly diffusive, scattering sound and softening individual sound reflections.
  • Balconies are large protruding elements that scatter sound.
  • Areas under the balconies and stairs offer noise shielding for patrons to comfortably purchase beverages and tickets, check coats, and engage with ushers.

Figure 1. An acoustically successful lobby

This lobby successfully hosts pre-function events such as small choral groups on the main stairwell, bustling cocktail hours, and many post-function events such as autographs, fund raising, and cabaret.

Figure 2 shows a five-story lobby in a large courthouse facility. The lobby is the main entrance for the facility, and connects the courthouse wing to the administrative wing via a series of stacked bridges.

Figure 2. Architectural rendering of the courthouse atrium

The design called for walls of glass and brick, with a hard floor and a wood ceiling. The resultant reverberation (the time for sounds to decay to inaudibility) would be comparable to that of a cavernous cathedral and would impede clear communication.

Reverberation was reduced by providing ½” separations between individual wood planks at the ceiling, which allows sound to be absorbed by insulation above. The reverberation is still very long (about 2.5 seconds), but now people can communicate easily within about 6 feet of each other. Beyond that distance, speech is garbled, which effectively promotes privacy from most other occupants.
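
The effect of exposing absorptive insulation through the plank gaps can be estimated with the classic Sabine relation, RT60 ≈ 0.161 V / A, where V is the room volume in cubic meters and A is the total absorption in metric sabins. The sketch below uses purely illustrative numbers, not the courthouse’s actual volume or finish schedule.

# Sabine estimate of reverberation time: RT60 = 0.161 * V / A (SI units).
# All areas, volumes, and absorption coefficients here are assumed values.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

volume = 10_000.0  # hypothetical multi-story lobby volume, m^3

solid_ceiling = [(2_500.0, 0.05),    # glass and brick walls
                 (1_000.0, 0.02),    # hard floor
                 (1_000.0, 0.10)]    # solid wood ceiling
slotted_ceiling = [(2_500.0, 0.05),
                   (1_000.0, 0.02),
                   (1_000.0, 0.50)]  # gaps expose insulation above the planks

print(f"RT60, solid ceiling:   {rt60_sabine(volume, solid_ceiling):.1f} s")
print(f"RT60, slotted ceiling: {rt60_sabine(volume, slotted_ceiling):.1f} s")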

The buildup of sound from various occupant activities would be too noisy for the security guards, so security personnel were relocated under the lowest bridge, shielding them from the general lobby noise and reverberation.

A successful design is sensitive to the goals of the facility, manages reverberation, collects activities into shielded areas, and prevents distracting noise from transmitting to noise-sensitive spaces. Careful balancing of surface shaping, finish treatments, and sound isolation can deliver a great lobby.

2pAA4 – Acoustical balance between the stage and the pit in the Teatro Colón of Buenos Aires

Gustavo Basso – gusjbasso@gmail.com

IPEAL – FBA – Universidad Nacional de La Plata
Calle 5 Nº 84
La Plata, 1900, ARGENTINA.

Popular version of 2pAA4, “Acoustical balance between the stage and the pit in the Teatro Colón of Buenos Aires”
Presented Tuesday, May 08, 2018, 2:05pm – 2:25 PM, Nicollet C
175th ASA Meeting, Minneapolis
Click here to read the abstract

Unlike an auditorium for symphonic music, in which the orchestra and the audience occupy the same architectural space, an opera theatre has three coupled spaces with different acoustic functions: the stage tower, the area for the audience, and the orchestra pit. Given the importance of the voice in the genre, the sound balance B between the singers on the stage and the orchestra in the pit is considered one of the key factors that determine the acoustical quality of an opera performance [1]. In an opera theatre, the singers are at a disadvantage compared with the orchestra, both in number and in sound power, and the balance B should be maintained within a range of -2 to +4 dB [2].
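
As a rough sketch of how such a balance can be quantified, the snippet below computes a level difference in decibels between a calibrated “stage” recording and a “pit” recording at the same seat. The signals and calibration are assumptions for illustration; this is not the measurement procedure used in the study.

import numpy as np

def level_db(pressure_pa):
    """Sound pressure level (dB re 20 µPa) of a calibrated pressure signal."""
    p_ref = 20e-6
    return 20.0 * np.log10(np.sqrt(np.mean(pressure_pa ** 2)) / p_ref)

def stage_pit_balance(stage_signal, pit_signal):
    """Balance B = L_stage - L_pit, in dB, at a given seat."""
    return level_db(stage_signal) - level_db(pit_signal)

# Hypothetical calibrated recordings (Pa) made separately at the same seat.
rng = np.random.default_rng(1)
stage = 0.08 * rng.standard_normal(48_000)   # singer on stage
pit = 0.05 * rng.standard_normal(48_000)     # orchestra in the pit

print(f"Balance B = {stage_pit_balance(stage, pit):+.1f} dB "
      f"(recommended range roughly -2 to +4 dB)")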

In the case of the Teatro Colón of Buenos Aires, well known for its outstanding acoustical quality [3], achieving the proper balance is not easy given its large size: a huge main volume of 20,000 m3 coupled to a stage tower of 30,000 m3.

Two different situations have been identified: the main floor and the upper levels. On the main floor, the measurements show that the balance is appropriate, with values of B between 0.7 and 4.7 dB. Analysis in a digital model reveals that these values result from many broadband reflections of the singers’ voices off the surfaces surrounding the stalls (Fig. 1), in conjunction with the masking of the sound coming from the pit.


Figure 1. Some of the lateral reflections of the singers’ voices towards the same seat on the main floor, arriving unobstructed from the wall/ceiling dihedral angles of three balcony levels. They help the voice to be heard and thus raise the balance in the stalls.

As important as the values of B is the spectral distribution of the balance. A singer trained in the Western operatic tradition produces a great deal of energy in a range of frequencies centred around 2500-3000 Hz. In this region, called the “singer’s formant”, the voice can reach intensities well above those of the orchestra [4]. In the Teatro Colón, the stage/pit balance reaches its maximum values in the region of the singer’s formant, helping the voices to be heard clearly (Fig. 2).

Figure 2. Spectral characteristics of the Balance on the main floor, in which the frequencies corresponding to the singer’s formant are reinforced and favored by the room.
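
To make the idea of a spectral balance concrete, the same calculation can be repeated in frequency bands, for instance in a band around the singer’s formant (2500-3000 Hz). The sketch below does this with a simple bandpass filter, again on assumed signals rather than Teatro Colón measurements.

import numpy as np
from scipy import signal

def band_level_db(x, fs, f_lo, f_hi):
    """Level (dB, arbitrary reference) of x restricted to a frequency band."""
    sos = signal.butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    xb = signal.sosfiltfilt(sos, x)
    return 10.0 * np.log10(np.mean(xb ** 2))

fs = 48_000
rng = np.random.default_rng(2)
stage = rng.standard_normal(5 * fs)   # assumed stage (voice) recording
pit = rng.standard_normal(5 * fs)     # assumed pit (orchestra) recording

# Broadband balance vs. balance restricted to the singer's-formant band.
broadband = band_level_db(stage, fs, 100, 8_000) - band_level_db(pit, fs, 100, 8_000)
formant = band_level_db(stage, fs, 2_500, 3_000) - band_level_db(pit, fs, 2_500, 3_000)
print(f"Broadband balance:        {broadband:+.1f} dB")
print(f"Singer's-formant balance: {formant:+.1f} dB")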

As could be expected, the stage set-up can reduce the balance values on the main floor, mainly if the singer is placed well inside the stage tower.

At the upper levels, where the instruments of the orchestra in the pit can be seen, the balance loses part of the spectral advantage it has in the stalls; nevertheless, this is compensated by the emergence of a powerful reflection of the singers’ voices off the stage floor, a reflection that is almost non-existent on the main floor (Fig. 3). This, plus the appearance of early reflections coming from the ceiling, keeps the balance within appropriate values at the upper levels.


Figure 3. Early reflections at the upper level (paraiso) from a directional source on the stage. The strong reflections can be seen on both the stage floor and the ceiling.

 

Video 1. Acoustical measurements of the Teatro Colón (IADAE, 2010)

The results of this work help explain some of the acoustic characteristics of the Teatro Colón. They also allow us to design opera stage set-ups based on acoustic considerations, in order to keep the singer/orchestra balance at favorable values, one of the key factors when judging the quality of a lyrical performance.

 

REFERENCES
[1] N. Prodi, R. Pompoli, F. Martellotta, S. Sato. “Acoustics of Italian Historical Opera Houses”, J. Acoust. Soc. Am. 138 (2), 769-781, 2015.
[2] J. Meyer. Acoustics and the Performance of Music, Springer, New York, 2009.
[3] T. Hidaka, L. Beranek. “Objective and subjective evaluations of twenty-three opera houses in Europe, Japan, and the Americas”, J. Acoust. Soc. Am. 107 (1), 368-383, 2000.
[4] J. Sundberg, The Science of the Singing Voice, Northern Illinois University Press, Illinois, USA, 1987.

1aPP2 – Restoring stereo hearing to people with one deaf ear

Joshua Bernstein – joshua.g.bernstein.civ@mail.mil

Kenneth Jensen – kjensen@hjf.org

Walter Reed National Military Medical Center
4954 N. Palmer Rd.
Bethesda, MD 20889

Jack Noble – jack.noble@vanderbilt.edu
Vanderbilt University
2301 Vanderbilt Pl.
Nashville, TN 37235

Olga Stakhovskaya – ostakhov@umd.edu
Matthew Goupell – goupell@umd.edu
University of Maryland – College Park
7251 Preinkert Drive
College Park, MD 20742

Popular version of 1aPP2, “Measuring spectral asymmetry for cochlear-implant listeners with single-sided deafness”
Presented Monday morning, May 7, 2018
175th ASA Meeting, Minneapolis, MN

Having two ears provides tremendous benefits in our busy world: helping people to communicate in noisy environments, to tell where sounds are coming from, and to feel a general sense of three-dimensionality. People who go deaf in one ear (single-sided deafness) are therefore at a considerable disadvantage compared to people with access to sound in both ears.

Recently, cochlear implants have been explored as a way to restore some hearing to the deaf ear for people with single-sided deafness. A cochlear implant bypasses the normal inner-ear function, relaying sound information directly to the auditory nerve and brain via small electrical bursts.  While traditionally prescribed to people with two deaf ears, recent studies show that cochlear implants can restore some aspects of spatial hearing to people with single-sided deafness [1, 2].

The benefits that a cochlear implant provides to a person with single-sided deafness might not be as large as they could be, because the device was never designed for this population. We know that for a given sound frequency, the cochlear implant stimulates the incorrect place in the cochlea (the snail-shaped hearing organ in the inner ear). Figure 1A shows the snail-shaped cochlea straightened into a line. A normal-hearing ear processes the full frequency range (20-20,000 Hz) from one end of the cochlea to the other. However, cochlear implants deliver frequencies to the wrong cochlear locations because the device cannot be placed to allow access to the full range of frequency-specific nerve cells. This frequency mismatch could make it difficult for people with single-sided deafness to combine the sounds across the two ears, causing them to “hear double” instead of hearing somewhat, although imperfectly, in stereo.

Figure 1.  A schematic of an unrolled cochlea showing how frequency mismatch arises because the cochlear implant electrode array (blue) cannot be inserted all the way to the end of the cochlea. (A) Programming a cochlear implant in a standard way leads to a frequency mismatch between the cochlear-implant (green) and normal-hearing ears (red). (B) Adjusting the cochlear implant frequency allocation could reduce or eliminate this mismatch. 

Legend: Sound examples, best experienced over headphones.

Simulation of hearing with a mismatched cochlear implant.

Simulation of hearing with a frequency-matched cochlear implant
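
The size of such a frequency-to-place mismatch can be estimated with Greenwood’s frequency-place function, a standard approximation for the human cochlea, F(x) = 165.4 (10^(2.1x) − 0.88) Hz, where x is the fractional distance from the apex. In the sketch below, the cochlear length and electrode insertion depth are assumed values for illustration, not measurements from our participants.

import numpy as np

def greenwood_hz(x_from_apex):
    """Greenwood frequency-place map for the human cochlea.

    x_from_apex: fractional distance from the apex (0 = apex, 1 = base).
    Returns the characteristic frequency (Hz) of that cochlear place.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10.0 ** (a * np.asarray(x_from_apex, dtype=float)) - k)

cochlea_mm = 35.0      # approximate human cochlear duct length (assumed)
insertion_mm = 24.0    # assumed electrode insertion depth from the base

# The deepest (most apical) electrode contact sits this far from the apex:
x_apical = (cochlea_mm - insertion_mm) / cochlea_mm

print(f"Characteristic frequency at the most apical contact: "
      f"{greenwood_hz(x_apical):.0f} Hz")
# A standard clinical map might assign frequencies down to a few hundred Hz
# to that same contact, producing the mismatch sketched in Figure 1A.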

Our research aims to reprogram cochlear implants to frequency-align the two ears for people with single-sided deafness (Figure 1B) by measuring where in the cochlea the individual electrical contacts (electrodes) are stimulating. We compared three methods: computed-tomography (CT) scans (like an x-ray) to visualize electrode locations within the cochlea; having the listener compare the relative pitches of sounds presented to the two ears; and having the listener judge small (~1 ms) differences in the arrival time of sounds at the two ears [3]. The timing judgments – the only one of the measurements that required listeners to use their two ears together – gave estimates of electrode location similar to the CT scans. In contrast, the pitch measurements gave different estimates, suggesting that the brain rewired itself to accommodate pitch differences but did not rewire itself for spatial hearing. Device programming based on either the timing or CT measurements shows the most promise for improving the ability to use the ears in concert with one another. Our next step will be to make these programming changes and see whether they improve stereo hearing.

[The views expressed in this abstract are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government.]

[1] Bernstein, J., Schuchman, G., and Rivera, A, “Head shadow and binaural squelch for unilaterally deaf cochlear implantees,” Otology and Neurotology, vol. 38, pp. e195-e202, 2017.

[2] Vermeire, K., and Van de Heyning, P. “Binaural hearing after cochlear implantation in subjects with unilateral sensorineural deafness and tinnitus,” Audiology and Neurootology, vol. 14, pp. 163–171, 2009.

[3] Bernstein, J., Stakhovskaya, O., Schuchman, G., Jensen, K., and Goupell, M, “Interaural time-difference discrimination as a measure of place of stimulation for cochlear-implant users with single-sided deafness,” Trends in Hearing, Vol. 19, p. 2331216515617143, 2018.

3aUWa6 – Inversion of geo-acoustic parameters from sound attenuation measurements in the presence of swim bladder bearing fish

Orest Diachok – orest.diachok@jhuapl.edu
Johns Hopkins University Applied Physics Laboratory
11100 Johns Hopkins Rd.
Laurel MD 20723

Altan Turgut – turgut@wave.nrl.navy.mil
Naval Research Laboratory
4555 Overlook Ave. SW
Washington DC 20375

Popular version of paper 3aUWa6 “Inversion of geo-acoustic parameters from transmission loss measurements in the presence of swim bladder bearing fish in the Santa Barbara Channel”
Presented Wednesday morning, December 6, 2017, 9:15-10:00 AM, Salon E
174th ASA Meeting, New Orleans

The intensity of sound propagating from a source in the ocean diminishes with range due to geometrical spreading, chemical absorption, and reflection losses from the bottom and surface. Measurements of sound intensity vs. range and depth in the water column may be used to infer the speed of sound, density, and attenuation coefficient (geo-alpha) of bottom sediments. Numerous inversion algorithms have been developed to search through physically viable permutations of these parameters and identify the values that provide the best fit to the measurements. This approach yields valid results in regions where the concentration of swim bladder bearing fish is negligible.
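
At their core, these inversion algorithms are searches over a parameter space for the combination that minimizes the mismatch between modeled and measured fields. The sketch below shows the skeleton of such a grid search; predict_tl_db is only a toy placeholder standing in for a real propagation model (normal modes, parabolic equation, etc.), and the parameter ranges are arbitrary.

import itertools
import numpy as np

def predict_tl_db(sound_speed, geo_alpha, ranges_km):
    """Toy placeholder for a real propagation model.

    It exists only so the search loop below has something to call:
    spherical spreading plus a range-proportional attenuation term and a
    crude speed-dependent reflection-loss offset."""
    spreading = 20.0 * np.log10(ranges_km * 1000.0)
    bottom = geo_alpha * ranges_km + 0.05 * (sound_speed - 1500.0)
    return spreading + bottom

def invert(measured_db, ranges_km):
    """Exhaustive grid search for the best-fitting bottom parameters."""
    speeds = np.linspace(1550.0, 1750.0, 41)   # sediment sound speed, m/s
    alphas = np.linspace(0.0, 1.0, 101)        # attenuation proxy (geo-alpha)
    best = (np.inf, None, None)
    for c, a in itertools.product(speeds, alphas):
        misfit = np.sum((measured_db - predict_tl_db(c, a, ranges_km)) ** 2)
        if misfit < best[0]:
            best = (misfit, c, a)
    return best

ranges = np.array([1.0, 2.0, 3.7])             # hypothetical receiver ranges, km
measured = predict_tl_db(1650.0, 0.3, ranges)  # synthetic "measurements"
misfit, c_hat, a_hat = invert(measured, ranges)
print(f"Best fit: sound speed = {c_hat:.0f} m/s, geo-alpha proxy = {a_hat:.2f}")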

In regions where there are large numbers of swim bladder bearing fish, the effect of attenuation due to fish (bio-alpha) needs to be considered to permit unbiased estimates of geo-acoustic parameters (Diachok and Wales, 2005; Diachok and Wadsworth, 2014).

Swim bladder bearing fish resonate at frequencies controlled by the dimensions of their swim bladders. Adult 16 cm long sardines resonate at 1.1 kHz at 12 m depth. Juvenile sardines, being smaller, resonate at higher frequencies. If the number of fish is sufficiently large, sound will be highly attenuated at the resonance frequencies of their swim bladders.
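
A first-order way to see why a 16 cm sardine at 12 m depth resonates near 1.1 kHz is the Minnaert formula for a resonating gas bubble, which is often used as a rough model of a swim bladder. The equivalent spherical bladder radius used below is an assumed value chosen for illustration.

import numpy as np

def minnaert_resonance_hz(radius_m, depth_m,
                          rho=1025.0,        # seawater density, kg/m^3
                          gamma=1.4,         # ratio of specific heats for air
                          p_atm=101_325.0,   # atmospheric pressure, Pa
                          g=9.81):
    """Resonance frequency of a gas bubble -- a rough swim-bladder model."""
    p0 = p_atm + rho * g * depth_m           # ambient pressure at fish depth
    return np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * radius_m)

# Assumed equivalent spherical bladder radius for an adult sardine: ~4.4 mm.
print(f"Adult sardine at 12 m: {minnaert_resonance_hz(0.0044, 12.0):.0f} Hz")
# A smaller (juvenile) bladder resonates higher, cf. the 2.2-3.5 kHz bands.
print(f"Juvenile at 12 m:      {minnaert_resonance_hz(0.0020, 12.0):.0f} Hz")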

To demonstrate the competing effects of bio and geo-alpha on sound attenuation we conducted an interdisciplinary experiment in the Santa Barbara Channel during a month when the concentration of sardines was known to be relatively high. This experiment included an acoustic source, S, which permitted measurements at frequencies between 0.3 and 5 kHz and an array of 16 hydrophones, H, which was deployed 3.7 km from the source, as illustrated in Figure 1. Sound propagating from S to H was attenuated by sediments at the bottom of the ocean (yellow) and a layer of fish at about 12 m depth (blue). To validate inferred geo-acoustic values from the sound intensity vs. depth data, we sampled the bottom with cores and measured sound speed and geo-alpha vs. depth with a near-bottom towed chirp sonar (Turgut et al., 2002). To validate inferred bio-acoustic values, Carla Scalabrin of Ifremer, France measured fish layer depths with an echo sounder, and Paul Smith of the Southwest Fisheries Science Center conducted trawls, which provided length distributions of dominant species. The latter permitted calculation of swim bladder dimensions and resonance frequencies.

Figure 1. Experimental geometry: source, S deployed 9 m below the surface between a float and an anchor, and a vertical array of hydrophones, H, deployed 3.7 km from source.

Figure 2 provides two-hour averaged measurements of excess attenuation coefficients (corrected for geometrical spreading and chemical absorption) vs. frequency and depth at night, when these species are generally dispersed (far apart from each other) near the surface. The absorption bands centered at 1.1, 2.2 and 3.5 kHz corresponded to 16 cm sardines, 10 cm anchovies, and juvenile sardines or anchovies at 12 m depth, respectively. During daytime, sardines generally form schools at greater depths, where they resonate at “bubble cloud” frequencies, which are lower than the resonance frequencies of individuals.


Figure 2. Concurrent echo sounder measurements of energy reflected from fish vs. depth (left), and excess attenuation vs. frequency and depth at night (right).

The method of concurrent inversion (Diachok and Wales, 2005) was applied to measurements of sound intensity vs. depth to estimate values of bio- and geo-acoustic parameters. The geo-acoustic search space consisted of the sound speed at the top of the sediments, the gradient in sound speed, and geo-alpha. The biological search space consisted of the depth and thickness of the fish layer and bio-alpha within the layer. Figure 3 shows the results of the search for the values of geo-alpha that provided the best fit between calculations and measurements: 0.1 dB/m at 1.1 kHz and 0.5 dB/m at 1.9 kHz. Also shown are chirp sonar estimates of geo-alpha at 3.2 kHz and a quadratic fit to the data.

Figure 3. Attenuation coefficient in sediments derived from concurrent inversion of bio and geo parameters, geo only, chirp sonar, and quadratic fit to data.

If we had assumed that bio-alpha was zero, then the inverted value of geo-alpha would have been 0.12 dB/m at 1.1 kHz, which is about ten times greater than the properly derived estimate, and 0.9 dB/m at 1.9 kHz.

These measurements were made at a biological hot spot, which was identified through an echo sounder survey. None of the previously reported experiments, which were designed to permit inversion of geo-acoustic parameters from sound propagation measurements, included echo sounder measurements of fish depth or trawls. Consequently, some of these measurements may have been conducted at sites where the concentration of swim bladder bearing fish may have been significant, and inverted values of geo-acoustic parameters may have been biased by neglect of bio-alpha.

Acknowledgement: This research was supported by the Office of Naval Research Ocean Acoustics Program.

References

Diachok, O. and S. Wales (2005), “Concurrent inversion of bio and geo-acoustic parameters from transmission loss measurements in the Yellow Sea”, J. Acoust. Soc. Am., 117, 1965-1976.

Diachok, O. and G. Wadsworth (2014), “Concurrent inversion of bio and geo-acoustic parameters from broadband transmission loss measurements in the Santa Barbara Channel”, J. Acoust. Soc. Am., 135, 2175.

Turgut, A., M. McCord, J. Newcomb and R. Fisher (2002) “Chirp sonar sediment characterization at the northern Gulf of Mexico Littoral Acoustic Demonstration Center experimental site”, Proceedings, Oce