4aPA4 – Acoustic multi-pole source inversions of volcano infrasound

Keehoon Kim – kkim32@alaska.edu
University of Alaska Fairbanks
Wilson Infrasound Observatory, Alaska Volcano Observatory, Geophysical Institute
903 Koyukuk Drive, Fairbanks, Alaska 99775

David Fee – dfee1@alaska.edu
University of Alaska Fairbanks
Wilson Infrasound Observatory, Alaska Volcano Observatory, Geophysical Institute
903 Koyukuk Drive, Fairbanks, Alaska 99775

Akihiko Yokoo – yokoo@aso.vgs.kyoto-u.ac.jp
Kyoto University
Institute for Geothermal Sciences
Kumamoto, Japan

Jonathan M. Lees – jonathan.lees@unc.edu
University of North Carolina Chapel Hill
Department of Geological Sciences
104 South Road, Chapel Hill, North Carolina 27599

Mario Ruiz – mruiz@igepn.edu.ec
Escuela Politecnica Nacional
Instituto Geofisico
Quito, Ecuador

Popular version of paper 4aPA4, “Acoustic multipole source inversions of volcano infrasound”
Presented Thursday morning, May 21, 2015, at 9:30 AM in room Kings 1
169th ASA Meeting, Pittsburgh

Volcano infrasound
Volcanoes are outstanding natural sources of infrasound (low-frequency acoustic waves below 20 Hz). In the last few decades, local infrasound networks have become an essential part of geophysical monitoring systems for volcanic activity. Unlike seismic networks, which are dedicated to monitoring subsurface activity (e.g., magma or fluid transport), infrasound monitoring facilitates detecting and characterizing eruptive activity at the earth's surface. Figure 1a shows Sakurajima Volcano in southern Japan and an infrasound network deployed there in July 2013. Figure 1b shows a typical explosive eruption observed during the field experiment; such eruptions produce loud infrasound.

Figure 1. a) A satellite image of Sakurajima Volcano, adapted from Kim and Lees (2014). Five stand-alone infrasound sensors, indicated by inverted triangles, were deployed around Showa Crater in July 2013. b) An image of a typical explosive eruption observed during the field campaign.

Source of volcano infrasound
One of the major sources of volcano infrasound is a volume change in the atmosphere. Mass discharge from volcanic eruptions displaces the atmosphere near and around the vent, and this displacement propagates into the atmosphere as acoustic waves. Infrasound signals can therefore represent a time history of the atmospheric volume change during eruptions. The volume flux inferred from infrasound data can be further converted into a mass eruption rate using the density of the erupting mixture. Mass eruption rate is a critical parameter for forecasting ash-cloud dispersal during eruptions and is consequently important for aviation safety. One of the problems associated with volume flux estimation is that observed infrasound signals can be affected by propagation path effects between the source and the receivers. These path effects must therefore be appropriately accounted for and removed from the signals in order to obtain accurate source parameters.

Infrasound propagation modeling
Figure 2. a) Sound pressure level in dB relative to the peak pressure at the source position. b) Variation of infrasound waveforms across the network caused by propagation path effects.

Figure 2 shows the results of numerical modeling of sound propagation from the vent of Sakurajima Volcano. The sound propagation is simulated by solving the acoustic wave equation with a finite-difference time-domain (FDTD) method that takes the volcanic topography into account. The synthetic wavefield is excited by a Gaussian-like source time function (with a 1 Hz corner frequency) inserted at the center of Showa Crater (Figure 2a). A homogeneous atmosphere is assumed, since atmospheric heterogeneity should have limited influence at this local range (< 7 km). The numerical modeling demonstrates that both the amplitude and the waveform of infrasound are significantly affected by the local topography. In Figure 2a, the sound pressure level (SPL) relative to the source amplitude is calculated at each computational grid node on the ground surface. The SPL map indicates an asymmetric radiation pattern of acoustic energy. Propagation paths to the northwest of Showa Crater are obstructed by the summit of the volcano (Minamidake), and as a result acoustic shadow zones are created northwest of the summit. The infrasound waveforms also show significant variation across the network. In Figure 2b, synthetic infrasound signals computed at the station positions (ARI – SVO) show bipolar pulses followed by oscillations in pressure, while the pressure time history at the source location exhibits only a positive unipolar pulse. This result indicates that oscillatory infrasound waveforms can be produced not only by source effects but also by propagation path effects. Hence, this waveform distortion must be accounted for in source parameter inversion.
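
To make the method concrete, the lines below sketch a drastically simplified 2-D version of such an FDTD simulation in Python/NumPy: a pressure grid is updated with a leapfrog scheme and driven by a Gaussian-like source time function. The grid size, spacing, flat homogeneous domain, and wrap-around boundaries are illustrative assumptions only; the actual study used a 3-D grid that includes Sakurajima's topography and absorbing boundaries.

import numpy as np

c = 340.0                       # speed of sound in air (m/s)
dx = 10.0                       # grid spacing (m)
dt = dx / (c * np.sqrt(2.0))    # time step at the 2-D CFL stability limit
nx, nz, nt = 400, 200, 600      # grid size and number of time steps

f0 = 1.0                                                    # ~1 Hz corner frequency
t = np.arange(nt) * dt
src = np.exp(-((t - 4.0 / f0) ** 2) * (np.pi * f0) ** 2)    # Gaussian-like source time function

p_old = np.zeros((nz, nx))      # pressure at time step n-1
p = np.zeros((nz, nx))          # pressure at time step n
sz, sx = 5, nx // 2             # source node near the ground, centre of the grid

for n in range(nt):
    # five-point Laplacian; np.roll gives periodic edges in this sketch,
    # whereas the real model uses absorbing boundaries and topography
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx ** 2
    p_new = 2.0 * p - p_old + (c * dt) ** 2 * lap   # leapfrog update of the wave equation
    p_new[sz, sx] += src[n]                         # inject the source time function
    p_old, p = p, p_new

# Sampling p at receiver grid nodes through time would give synthetic
# "infrasound waveforms" analogous to Figure 2b.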

Volume flux estimates
Because the wavelengths of volcano infrasound are usually longer than the dimensions of the source region, the acoustic source is typically treated as a monopole, a point-source approximation of volume expansion or contraction. The infrasound data then represent the convolution of the volume flux history at the source with the response of the propagation medium, called the Green's function. The volume flux history can be obtained by deconvolving the Green's function from the data. The Green's functions can be obtained in two different ways: 3-D numerical modeling considering the local topography (Case 1), or the analytic solution in a half-space neglecting the volcanic topography (Case 2). The resulting volume flux histories for a selected infrasound event are compared in Figure 3. Case 1 yields a gradually decreasing volume flux curve, but Case 2 shows pronounced oscillation in the volume flux. In Case 2, propagation path effects are not appropriately removed from the data, leading to misinterpretation of the source effect.
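
As an illustration of the deconvolution step, the sketch below performs a water-level (regularized) spectral division, one standard way to stabilize deconvolution, and then scales the recovered volume flux by an assumed mixture density to form a mass eruption rate. The function, the placeholder waveforms, and the density value are illustrative assumptions, not the study's actual code or data.

import numpy as np

def deconvolve_water_level(data, green, water_level=0.01):
    """Recover a source time history s from data = green (*) s by water-level deconvolution."""
    n = len(data)
    D = np.fft.rfft(data, n)
    G = np.fft.rfft(green, n)
    # floor small spectral amplitudes of the Green's function to avoid blow-up
    floor = water_level * np.abs(G).max()
    G_reg = np.maximum(np.abs(G), floor)
    S = D * np.conj(G) / G_reg ** 2
    return np.fft.irfft(S, n)

# Illustrative usage with made-up placeholder waveforms (not real data):
dt = 0.01                                                          # sample interval (s)
rng = np.random.default_rng(1)
green = np.exp(-np.arange(0, 4, dt)) * rng.standard_normal(400)    # stand-in Green's function
flux_true = np.exp(-((np.arange(400) * dt - 1.0) ** 2) / 0.1)      # stand-in volume flux pulse
data = np.convolve(green, flux_true)[:400]                         # synthetic "recorded" infrasound
flux_est = deconvolve_water_level(data, green)

rho = 3.0                      # assumed erupting-mixture density (kg/m^3), illustrative only
mass_rate = rho * flux_est     # volume flux converted to a mass eruption rate (kg/s)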

Summary
A proper Green's function is critical for accurate estimation of the volume flux history. We obtained a reasonable volume flux history using the 3-D numerical Green's function. In this study, only a simple source model (a monopole) was considered for volcanic explosions. A more general representation can be obtained by a multipole expansion of acoustic sources. In our presentation at the 169th ASA Meeting, we will further discuss the source complexity of volcano infrasound, which requires the higher-order terms of the multipole series.

Figure 3. Volume flux history inferred from infrasound data. In Case 1, the Green's function is computed by 3-D numerical modeling considering volcanic topography. In Case 2, the analytic solution of the wave equation in a half-space is used, neglecting the topography.

References

Kim, K. and J. M. Lees (2014). Local Volcano Infrasound and Source Localization Investigated by 3D Simulation. Seismological Research Letters, 85, 1177-1186

4aSC2 – Effects of language and music experience on speech perception

T. Christina Zhao — zhaotc@uw.edu
Patricia K. Kuhl — pkkuhl@uw.edu
Institute for Learning & Brain Sciences
University of Washington, BOX 357988
Seattle, WA, 98195

Popular version of paper 4aSC2, “Top-down linguistic categories dominate over bottom-up acoustics in lexical tone processing”
Presented Thursday morning, May 21st, 2015, 8:00 AM, Ballroom 2
169th ASA Meeting, Pittsburgh

Speech perception involves constant interplay between top-down and bottom-up processing. For example, to distinguish phonemes (e.g., 'b' from 'p'), the listener must accurately process the acoustic information in the speech signal (the bottom-up strategy) and efficiently assign these sounds to a category (the top-down strategy). Listeners' performance in speech perception tasks is influenced by their experience with either processing strategy. Here, we use lexical tone processing as a window to examine how extensive experience with both strategies influences speech perception.

Lexical tones are contrastive pitch contour patterns at the word level. That is, a small difference in pitch contour can result in a different word meaning. Native speakers of a tonal language thus have extensive experience in using the top-down strategy to assign highly variable pitch contours to lexical tone categories. This top-down influence is reflected in reduced sensitivity to acoustic differences within a phonemic category compared to across categories (Halle, Chang, & Best, 2004). On the other hand, individuals with extensive music training early in life exhibit enhanced sensitivity to pitch differences not only in music but also in speech, reflecting stronger bottom-up influence. Such bottom-up influence is reflected in the enhanced sensitivity of non-tonal-language speakers in detecting differences between lexical tones (Wong, Skoe, Russo, Dees, & Kraus, 2007).
How does extensive experience with both strategies influence lexical tone processing? To address this question, native Mandarin speakers with extensive music training (N=17) completed a music pitch discrimination task and a lexical tone discrimination task. We compared their performance with that of individuals with extensive experience in only one of the processing strategies (i.e., Mandarin nonmusicians (N=20) and English musicians (N=20); data from Zhao & Kuhl (2015)).

Despite the Mandarin musicians' enhanced performance in the music pitch discrimination task, their performance in the lexical tone discrimination task was similar to that of the Mandarin nonmusicians and different from that of the English musicians (Fig. 1, 'Sensitivity across lexical tone continuum by group').
Figure 1. Sensitivity across the lexical tone continuum by group.
That is, they exhibited reduced sensitivities within phonemic categories (i.e., on either end of the line) compared to across categories (i.e., the middle of the line), and their overall performance was lower than that of the English musicians. This result strongly suggests a dominant effect of top-down influence in processing lexical tones. Yet further analyses revealed that Mandarin musicians and Mandarin nonmusicians may still rely on different underlying mechanisms when performing the lexical tone discrimination task. In the Mandarin musicians, music pitch discrimination scores correlated with lexical tone discrimination scores, suggesting a contribution of the bottom-up strategy to their lexical tone discrimination performance (Fig. 2, 'Music pitch and lexical tone discrimination', purple). This relation is similar to that of the English musicians (Fig. 2, peach) but very different from that of the Mandarin nonmusicians (Fig. 2, yellow). Specifically, for Mandarin nonmusicians, music pitch discrimination scores did not correlate with lexical tone discrimination scores, suggesting independent processes.

Figure 2. Music pitch and lexical tone discrimination.

Halle, P. A., Chang, Y. C., & Best, C. T. (2004). Identification and discrimination of Mandarin Chinese tones by Mandarin Chinese vs. French listeners. Journal of Phonetics, 32(3), 395-421. doi: 10.1016/s0095-4470(03)00016-0
Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat. Neurosci., 10(4), 420-422. doi: 10.1038/nn1872
Zhao, T. C., & Kuhl, P. K. (2015). Effect of musical experience on learning lexical tone categories. The Journal of the Acoustical Society of America, 137(3), 1452-1463. doi: 10.1121/1.4913457

4aPP2 – Localizing Sound Sources when the Listener Moves: Vision Required

William A. Yost – william.yost@asu.edu, paper presenter
Xuan Zhong – xuan.zhong@asu.edu
Speech and Hearing Science
Arizona State University
P.O. Box 870102
Tempe, AZ 85287

Popular version of paper 4aPP2, related papers 1aPPa1, 1pPP7, 1pPP17, 3aPP4,
Presented Monday morning, May 18, 2015
169th ASA Meeting, Pittsburgh

When an object (sound source) produces sound, that sound can be used to locate the spatial position of the sound source. Since sound has no physical attributes related to space and the auditory receptors do not respond according to where the sound comes from, the brain makes computations based on the sound’s interaction with the listener’s head. These computations provide information about sound source location. For instance, sound from a source opposite the right ear will reach that ear slightly before reaching the left ear since the source is closer to the right ear. This slight difference in arrival time produces an interaural (between the ears) time difference (ITD), which is computed in neural circuits in the auditory brainstem as one cue used for sound source localization (i.e., small ITDs indicate that the sound source is near the front and large ITDs that the sound source is off to one side).
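
As a concrete illustration of how the ITD cue grows as a source moves off to one side, the sketch below evaluates the classic Woodworth spherical-head approximation, ITD ≈ (r/c)(θ + sin θ). The head radius and sound speed are nominal textbook values and are not parameters from this study.

import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (s) for a distant source, spherical-head model.
    Valid for azimuths from 0 deg (straight ahead) to 90 deg (opposite one ear)."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

for az in (0, 15, 30, 60, 90):
    # small ITDs near the front, large ITDs (roughly 650 microseconds) off to one side
    print(f"azimuth {az:2d} deg -> ITD about {itd_woodworth(az) * 1e6:4.0f} microseconds")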

We are investigating sound source localization when the listener and/or the source move. Figure 1 shows the laboratory, an echo-reduced room with 36 loudspeakers on a 5-foot-radius sphere and a computer-controlled chair for rotating listeners while they listen to sounds presented from the loudspeakers. Conditions in which sounds and listeners move present a challenge for the auditory system in processing auditory spatial cues for sound source localization. When either the listener or the source moves, the ITDs change. So when the listener moves, the ITD changes, signaling that the source moved even if it did not. In order to prevent this type of confusion about the location of sound sources, the brain needs another piece of information. We have shown that in addition to computing auditory spatial cues like the ITD, the brain also needs information about the location of the listener. Our experiments indicate that without both types of information, major errors occur in locating sound sources. When vision is used to provide information about the location of the listener, accurate sound source localization occurs. Thus, sound source localization requires not only auditory spatial cues such as the ITD but also information, provided by systems like vision, about the listener's spatial location. This has been an underappreciated aspect of sound source localization. Additional research will be needed to more fully understand how these two forms of essential information are combined and used to locate sound sources. Improving sound source localization accuracy when listeners and/or sources move has many practical applications, ranging from aiding people with hearing impairment to improving robots' abilities to use sound to locate objects (e.g., a person in a fire). [The research was supported by an Air Force Office of Scientific Research (AFOSR) grant.]


Figure 1. The Spatial Hearing Laboratory at ASU, with sound-absorbing materials on all walls, ceiling, and floor; 36 loudspeakers on a 5-foot-radius sphere; and a computer-controlled rotating chair.

Experimental demonstration of under ice acoustic communication

Winter in Harbin is quite cold, with average temperatures of -20 °C to -30 °C in January. An under-ice acoustic communication experiment was conducted in the Songhua River, Harbin, China, in January 2015. The Songhua River is a river in Northeast China and the largest tributary of the Heilong River, flowing about 1,434 kilometers from the Changbai Mountains through Jilin and Heilongjiang provinces. In winter, the Songhua River is covered with about 0.5 m of ice, which provides a natural environment for under-ice acoustic experiments.

Figure 1. Songhua River in winter
A working environment of -20 to -30 °C poses a great challenge for under-ice experiments. One of our initial concerns was how to quickly build a temporary experimental base in such cold conditions. The experimental base for the transmitter was located at a wharf that provided sufficient power and heat.
Figure 2. Temporary experimental base

Figure 2 shows the temporary experimental base for the receiver, which can easily be assembled by four people in roughly 5 minutes.

Figure 3. The inside of the experimental base
Figure 3 shows the inside of the experimental base. Insulation blankets and plastic plates were placed on the ice to avoid prolonged contact for both the experimenters and the instruments, as most of the instruments will not function at -20 °C. Our second concern was making sure that all of the instruments stayed at a suitable temperature while receiving signals; we found that burning briquettes for heating was a good solution, as it can keep the temperature inside the experimental base above 0 °C (see Figure 4).

Figure 4. Briquettes burned for heating inside the experimental base

Figure 5. Under-ice channel based on real data
The under-ice channel is quite stable. Figure 5 shows the under-ice channel measured from real data. Figure 6 shows the channel impulse response (CIR) of the under-ice channel at different depths; it can be seen that the channels closer to the ice are simpler.

Figure 6. Under-ice channel at different depths
A series of underwater acoustic communication tests, including spread spectrum, OFDM, pattern time delay shift coding (PDS), and CDMA, was carried out. All of the under-ice acoustic communication tests achieved low-bit-error-rate communication at a range of 1 km with different receiver depths. Under-ice CDMA multiuser acoustic communication showed that as many as 12 users can be supported simultaneously with as few as five receivers in under-ice channels, using a time reversal mirror combined with differential correlation detectors.
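
The time-reversal-mirror idea can be illustrated with a few lines of signal processing: each receiver filters its reception with a time-reversed copy of the channel impulse response it measured, and the filtered outputs are summed so the multipath arrivals re-focus into a compact pulse. The sketch below uses made-up sparse channels and a toy waveform; it is not the experiment's actual processing chain, which additionally used differential correlation detectors for the CDMA demodulation.

import numpy as np

rng = np.random.default_rng(0)
fs = 8000                                           # sample rate (Hz), illustrative
symbol = rng.choice([-1.0, 1.0], 200).repeat(8)     # toy BPSK-like transmit waveform

# Hypothetical sparse multipath impulse responses for five receivers at different depths
cirs = []
for _ in range(5):
    h = np.zeros(60)
    h[0] = 1.0                                              # direct arrival
    h[rng.integers(5, 60, 3)] = rng.uniform(0.2, 0.6, 3)    # a few weaker multipath arrivals
    cirs.append(h)

# Each receiver's reception = transmit waveform convolved with its channel, plus noise
received = [np.convolve(symbol, h) + 0.05 * rng.standard_normal(len(symbol) + len(h) - 1)
            for h in cirs]

# Passive time reversal: filter each reception with its time-reversed CIR, then sum.
# The multipath energy from all receivers adds coherently and re-focuses, after which
# a single detector can demodulate the focused signal.
focused = sum(np.convolve(r, h[::-1]) for r, h in zip(received, cirs))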

2aAB7 – Nocturnal peace at a Conservation Center for Species Survival?

Suzi Wiseman – sw1210txstate@gmail.com
Texas State University-San Marcos
Environmental Geography
601 University Drive, San Marcos, Texas 78666
Preston S. Wilson – wilsonps@austin.utexas.edu
University of Texas at Austin
Mechanical Engineering Department
1 University Station C2200
Austin, TX 78712

Popular version of paper 2aAB7, “Nocturnal peace at a Conservation Center for Species Survival?”
Presented Tuesday morning, May 19, 2015, at 10:15 AM
169th ASA Meeting, Pittsburgh

The acoustic environment is essential to wildlife, providing vital information about prey and predators and the activities of other living creatures (biophonic information) (Wilson, 1984), about changing weather conditions and occasionally geophysical movement (geophonic), and about human activities (anthrophonic) (Krause, 1987). Small sounds can be as critical as loud ones, depending on the species that is listening. Some species hear infrasonically (too low for humans, generally considered below 20 Hz), others ultrasonically (too high, above 20 kHz). Biophonic soundscapes frequently exhibit temporal and seasonal patterns, for example a dawn "chorus", mating and nurturing calls, and diurnal and crepuscular events.

Some people are attracted to large parks due in part to their "peace and quiet" (McKenna et al., 2013). But even in a desert, a snake may be heard to slither or wind may sigh between rocks. Does silence in fact exist? Finding truly quiet places, in nature or in the built environment, is increasingly difficult. Even in our anechoic chamber, which was purpose-built to be extremely quiet and is located in the heart of our now very crowded and busy urban campus, we became aware of infrasound that penetrated, possibly from nearby construction equipment or from heavy traffic that was not nearly as common when the chamber was first built more than 30 years ago. Is anywhere that contains life actually silent?


Figure 1: In the top window, the waveform in blue indicates the amplitude over time on each occasion that a pulse of sound was broadcast in the anechoic chamber, as shown in the spectrogram in the lower window, where frequency is shown over the same time span and the color indicates the intensity of the sound (red being more intense than blue). Considerable very low frequency sound is evident and can be seen between the pulses in the waveform (which should be silent) and throughout the bottom of the spectrogram. The blue dotted vertical lines show harmonics that were generated within the loudspeaker system. (Measurements shown in this study were made with a Roland R26 recorder and Earthworks M23 measurement microphones with a frequency response of 9 Hz to 23 kHz, ±1/-3 dB.)

As human populations increase, so do all forms of anthrophonic noise, often masking the sounds of nature. Does this noise cease at night, especially well away from major cities and when humans are not close by? This study analyzed the soundscape recorded continuously beside the southern white rhinoceros (Ceratotherium simum simum) enclosure at Fossil Rim Wildlife Center, about 75 miles southwest of Dallas, Texas, for a week during fall 2013, to determine the quietest period each night and the acoustic environment in which these periods tended to occur. Rhinos hear infrasound, so the soundscape was measured from 0.1 Hz to 22,050 Hz. Since frequencies below 9 Hz still need to be confirmed, however, these lowest frequencies were removed from this portion of the study.


Figure 2: Part of the white rhinoceros enclosure of Fossil Rim Wildlife Center, looking towards the tree line where the central recorder was placed


Figure 3: The sound pressure level throughout a relatively quiet day at the rhino enclosure. The loudest sounds were normally vehicles, machinery, equipment, aircraft, and crows. The 9pm weather front was a major contrast.

Figure 3 illustrates the rhythm of a day at Fossil Rim, showing the sound level of a fairly typical 24 hours starting from midnight, apart from the evening storm. As often occurred, the quietest period was between midnight and the dawn chorus.

While there were times during the day when birds and insects were at their most active and anthrophonic noise could not be heard above them, it was discovered that all quiet periods contained anthrophonic noise, even at night. There was generally a low-frequency, low-amplitude hum, at times just steady and machine-like and not yet identified, and, depending on wind direction, often short hums from traffic on a state highway over a mile away. Quiet periods ranged from a few minutes to almost an hour, usually eventually broken by anthrophonic sounds such as vehicles on a nearby county road, high aircraft, or dogs barking on neighboring ranches. However, there was also a strong and informative biophonic presence, from insects to nocturnal birds and wildlife such as coyotes, to sounds made by the rhinos themselves and by other species at Fossil Rim. Geophonic intrusions were generally wind, thunder or rain, possibly hail.

The quietest quarter hour was around 4 am on the Friday depicted in Figure 3, but even then the absolute sound pressure level averaged 44.7 decibels, about the level of a quiet home or library. The wind was from the south-southeast at around 10 to 14 mph during this time. Audio clip 1 is the sound of this quiet period.


Figure 4: The quietest quarter hour recorded at Fossil Rim appears between the vertical red selection lines, with an average absolute sound pressure level of 44.5 decibels. The fairly constant waveform shown in blue in the top graph and the low-frequency noise at the bottom of the spectrogram seemed to comprise the machine-like hum, the distant traffic hum, which varies over time, and insects. The blue flashes between 3 and 5 kHz were mainly bird calls.
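
For readers curious how average levels such as the 44.5 and 44.7 decibel figures above are obtained, the sketch below computes the equivalent continuous sound pressure level (Leq) from calibrated pressure samples. The pressure values here are synthetic placeholders, not the Fossil Rim recordings.

import numpy as np

def leq_db(pressure_pa, p_ref=20e-6):
    """Equivalent continuous sound pressure level in dB re 20 micropascals."""
    return 10.0 * np.log10(np.mean(pressure_pa ** 2) / p_ref ** 2)

# Illustration: a steady signal with ~3.4 mPa RMS pressure corresponds to ~44.7 dB
p_rms = 20e-6 * 10 ** (44.7 / 20)     # back out the RMS pressure for a 44.7 dB level
print(leq_db(np.full(48000, p_rms)))  # -> approximately 44.7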

By contrast, the loudest of the "quietest nightly periods" was less than six minutes long, around 5 am on Wednesday, 23 October, as shown between the vertical red lines in Figure 5. Despite being the quietest period that night, it averaged a sound pressure level of 55.5 decibels, roughly the equivalent of a spoken conversation.


Figure 5: The loudest “quietest period each night” reveals broadband machine noise (possibly road work equipment somewhere in the district?) which continued for some hours and appears as the blue flecks across all frequencies. The horizontal blue line at 16.5 kHz is characteristic of bats. All species identification is being left to biologists for confirmation. Audio clip 2 is this selection.

On either side of the "quiet" minutes were short bursts of low-frequency but intense truck and/or other machine noise, indicated in red, some of which partially covered a clang when a rhino hit its fence with its horn, as well as distant barks, howls, moos and other vocalizations. The noise may have masked the extremely low-frequency hums and insects that had been apparent on other nights, or may have caused the insects to cease their activity. The strata below 2.5 kHz appear more ragged, indicating they were not being produced in as uniform a way as on quieter nights, and they are partially covered by the blue flecks of machine noise. However, the strata at 5.5, 8.5, 11 and especially 16.5 kHz that appeared on other nights are still evident. They appear to be birds, insects and bats. Audio clip 3 contains the sounds that broke this quiet period.

At no point during the entire week was anything closely approaching "silence" apparent. Krause reports that healthy natural soundscapes comprise a myriad of biophony, and indeed the ecological health of a region can be measured by its diverse voices (Krause, 1987). However, if these voices are too frequently masked or deterred by anthrophonic noise, animals may be altered behaviorally and physiologically (Pater et al., 2009), as the World Health Organization reports to be the case with humans who are exposed to chronic noise (WHO, 1999). Despite some level of anthrophonic noise at most times, Fossil Rim seems to provide a healthy acoustic baseline, since so many endangered species proliferate there.

Understanding soundscapes, and later investigating any acoustic parameters that may correlate with animals' behavioral and/or physiological responses, may lead us to think anew about the environments in which we hold animals captive in conservation, agricultural and even domestic settings, and about wildlife in parts of the world that are increasingly encroached upon by humans.

tags: animals, conservation, soundscape, silence, environment

References:
Krause, B. 1987. The niche hypothesis. Whole Earth Review. Wild Sanctuary.
———. 1987. Bio-acoustics: Habitat ambience & ecological balance. Whole Earth Review. Wild Sanctuary.
McKenna, M. F., et al. 2013. Patterns in bioacoustic activity observed in US National Parks. The Journal of the Acoustical Society of America 134:4175.
Pater, L. L., T. G. Grubb, and D. K. Delaney. 2009. Recommendations for improved assessment of noise impacts on wildlife. The Journal of Wildlife Management 73:788-795.
Wilson, E. O. 1984. Biophilia. Harvard University Press.
World Health Organization. “Guidelines for community noise”. WHO Expert Taskforce Meeting. London. 1999.

3aBA12 – Sternal vibrations reflect hemodynamic changes during immersion: underwater ballistocardiography

Andrew Wiens – andrew.wiens@gatech.edu
Andrew Carek
Omar T. Inan
Georgia Institute of Technology
Electrical and Computer Engineering

Popular version of poster 3aBA12 “Sternal vibrations reflect hemodynamic changes during immersion: underwater ballistocardiography.”
Presented Wednesday morning, May 20, 2015, 11:30 AM, Kings 2
169th ASA Meeting, Pittsburgh

In 2014, one out of every four internet users in the United States wore a wearable device such as a smart watch or fitness monitor. As more people incorporate wearable devices into their daily lives, better techniques are needed to enable real, accurate health measurements.

Currently, wearable devices can make simple measurements of various metrics such as heart rate, general activity level, and sleep cycles. Heart rate is usually measured from small changes in the intensity of the light reflected from light-emitting diodes, or LEDs, that are placed on the surface of the skin. In medical parlance, this technique is known as photoplethysmography. Activity level and sleep cycles, on the other hand, are usually measured from relatively large motions of the human body using small sensors called accelerometers.

Recently, researchers have improved a technique called ballistocardiography, or BCG, that uses one or more mechanical sensors, such as an accelerometer worn on the body, to measure very small vibrations originating from the beating heart. Using this technique, changes in the heart’s time intervals and the volume of pumped blood, or cardiac output, have been measured. These are capabilities that other types of noninvasive wearable sensors currently cannot provide from a single point on the body, such as the wrist or chest wall. This method could become crucial for blood pressure measurement via pulse-transit time, a promising noninvasive, cuffless method that measures blood pressure using the time interval from when blood is ejected from the heart to when it arrives at the end of a main artery.


Figure 1. The underwater BCG recorded at rest.

The goal of the preliminary study reported here was to demonstrate similar measurements recorded during immersion in an aquatic environment. Three volunteers wore a waterproof accelerometer on the chest while immersed in water up to the neck. An example of these vibrations recorded at rest appears in Figure 1. The subjects performed the Valsalva maneuver, a breathing maneuver that temporarily modulates the cardiovascular system. Two water temperatures and three body postures were also tested to discover differences in signal morphology that could arise under different conditions.

Measurements of the vibrations that occurred during single heart beats appear in Figure 2. Investigation of the recorded signals shows that the amplitude of the signal increased during immersion compared to standing in air. In addition, the median frequency of the vibrations also decreased substantially.


Figure 2. Single heart beats of the underwater BCG from three subjects in three different environments and body postures.
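
The two changes reported above, a larger amplitude and a lower median frequency during immersion, can be quantified with standard signal-processing steps. The sketch below shows one plausible way to do so (peak-to-peak amplitude plus the median frequency of a Welch power spectrum); the synthetic trace, sample rate, and feature definitions are illustrative assumptions rather than the study's exact analysis.

import numpy as np
from scipy.signal import welch

def median_frequency(x, fs):
    """Frequency below which half of the signal's spectral power lies."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    cumulative = np.cumsum(pxx)
    return f[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

fs = 250.0                                # accelerometer sample rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
bcg = 0.01 * np.sin(2 * np.pi * 8 * t)    # stand-in for a band-limited sternal vibration trace

print("peak-to-peak amplitude:", bcg.max() - bcg.min())
print("median frequency (Hz):", median_frequency(bcg, fs))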

One remaining question is, why did these changes occur? It is known that a significant volume of blood shifts toward the thorax, or chest, during immersion, leading to changes in the mechanical loading of the heart. It is possible that this phenomenon wholly or partially explains the changes in the vibrations observed during immersion. Finally, how can we make accurate physiologic measurements from the underwater wearable BCG? These are open questions, and further investigation is needed.

Tags: health, cardio, devices, water, wearables