3aBA12 – Sternal vibrations reflect hemodynamic changes during immersion: underwater ballistocardiography – Andrew Wiens, Andrew Carek, Omar T. Inan

Andrew Wiens – Andrew.wiens@gatech.edu
Andrew Carek
Omar T. Inan
Georgia Institute of Technology
Electrical and Computer Engineering

Popular version of poster 3aBA12 “Sternal vibrations reflect hemodynamic changes during immersion: underwater ballistocardiography.”
Presented Wednesday, May 20, 2015, 11:30 am, Kings 2
169th ASA Meeting, Pittsburgh
———————————————————————-

In 2014, one out of every four internet users in the United States wore a wearable device such as a smart watch or fitness monitor. As more people incorporate wearable devices into their daily lives, better techniques are needed to make the health measurements these devices provide genuinely accurate.
Currently, wearable devices can make simple measurements of metrics such as heart rate, general activity level, and sleep cycles. Heart rate is usually measured from small changes in the intensity of light reflected from light-emitting diodes, or LEDs, placed on the surface of the skin; in medical parlance, this technique is known as photoplethysmography. Activity level and sleep cycles, on the other hand, are usually measured from relatively large motions of the human body using small sensors called accelerometers.
Recently, researchers have improved a technique called ballistocardiography, or BCG, that uses one or more mechanical sensors, such as an accelerometer worn on the body, to measure the very small vibrations originating from the beating heart. Using this technique, changes in the heart’s time intervals and in the volume of blood pumped per minute, or cardiac output, have been measured. These are capabilities that other types of noninvasive wearable sensors currently cannot provide from a single point on the body, such as the wrist or chest wall. This method could also become crucial for blood pressure measurement via pulse transit time, a promising noninvasive, cuffless approach that estimates blood pressure from the time interval between when blood is ejected from the heart and when it arrives at a point farther along a major artery.
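To make that concrete, here is a minimal sketch of the kind of processing involved: band-pass filter a chest accelerometer trace to the low-frequency band where most BCG energy lies, then detect one peak per heartbeat to obtain beat-to-beat intervals. This is an illustration in Python, not the authors’ pipeline; the sampling rate, band edges, and peak-detection settings are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bcg_beat_intervals(accel, fs=250.0):
    """Estimate beat-to-beat intervals (in seconds) from a chest
    accelerometer trace.

    Minimal illustration, not the authors' pipeline: the 1-20 Hz band,
    the sampling rate, and the peak settings are assumptions.
    """
    # Band-pass to the approximate BCG band, removing slow posture
    # drift and high-frequency sensor noise.
    b, a = butter(2, [1.0, 20.0], btype="bandpass", fs=fs)
    bcg = filtfilt(b, a, accel)

    # One peak per heartbeat: enforce a 0.4 s refractory period
    # (at most 150 beats per minute) and a simple amplitude threshold.
    peaks, _ = find_peaks(bcg, distance=int(0.4 * fs),
                          height=2.0 * np.std(bcg))
    return np.diff(peaks) / fs
```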
The goal of the preliminary study reported here was to demonstrate similar measurements during immersion in an aquatic environment. Three volunteers wore a waterproof accelerometer on the chest while immersed in water up to the neck. An example of these vibrations recorded at rest appears in Figure 1. The subjects performed the Valsalva maneuver, a forced exhalation against a closed airway, to temporarily modulate the cardiovascular system. Two water temperatures and three body postures were also tested to uncover differences in signal morphology that could arise under different conditions.
Figure 1. The underwater BCG recorded at rest.
Measurements of the vibrations that occurred during single heartbeats appear in Figure 2. Investigation of the recorded signals shows that the amplitude of the signal increased during immersion compared with standing in air, while the median frequency of the vibrations decreased substantially.
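The median frequency mentioned here is the frequency below which half of the signal’s power lies. As an illustration of how such a shift can be quantified (this is not the authors’ analysis code, and the sampling rate and segment length are assumptions), it can be estimated from a Welch power spectral density:

```python
import numpy as np
from scipy.signal import welch

def median_frequency(x, fs=250.0):
    """Frequency below which half of the signal's power lies.

    Illustrative only; the sampling rate and FFT segment length
    are assumptions.
    """
    f, pxx = welch(x, fs=fs, nperseg=1024)
    cumulative = np.cumsum(pxx)
    # First frequency bin at which the cumulative power reaches 50%.
    return f[np.searchsorted(cumulative, 0.5 * cumulative[-1])]
```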

Figure 2. Single heartbeats of the underwater BCG from three subjects in three different environments and body postures.
One remaining question is why these changes occurred. It is known that a significant volume of blood shifts toward the thorax, or chest, during immersion, changing the mechanical loading of the heart. It is possible that this phenomenon wholly or partially explains the changes in the vibrations observed during immersion. Finally, how can we make accurate physiological measurements from the underwater wearable BCG? These are open questions, and further investigation is needed.

Tags: health, cardio, devices, water, wearables

1aSC4 – Downstream effects of accented speech on memory – by Kristin Van Engen

WASHINGTON, D.C., May 18, 2015 — Struggling to understand someone else talking can be a taxing mental activity. A wide range of studies have already documented that individuals with hearing loss, or those listening to degraded speech — for example, over a bad phone line or in a loud room — have greater difficulty remembering and processing spoken information than individuals who hear it clearly.

Now researchers at Washington University in St. Louis are investigating the relatively unexplored question of whether listening to accented speech similarly affects the brain’s ability to process and store information. Their preliminary results suggest that foreign-accented speech, even when intelligible, may be slightly more difficult to recall than native speech.

The researchers will present their findings at the 169th meeting of the Acoustical Society of America, held May 18 – 22 in Pittsburgh, Pennsylvania.

Listening to accented speech is different from other, more widely studied forms of “effortful listening” — think loud cocktail parties — because the accented speech itself deviates from listener expectations in (often) systematic ways, said Kristin Van Engen, a postdoctoral research associate in the linguistics program at Washington University in St. Louis.

How the brain processes information delivered in an accent has relevance to real-world settings like schools and hospitals. “If you’re working hard to understand a professor or doctor with a foreign accent, are you going to have more difficulty encoding the information you’re learning in memory?” Van Engen asked. The answer is not really known, and the issue has received relatively little attention in either the scientific literature on foreign accent processing or the literature on effortful listening, she said.

To begin to answer her question, Van Engen and her colleagues tested the ability of young-adult native English speakers to store spoken words in short-term memory. The test subjects listened to lists of English words, voiced either with a standard American accent or with a pronounced, but still intelligible, Korean accent. The lists stopped at random points, and the listeners were asked to recall the last three words they had heard.
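For readers unfamiliar with this kind of running memory span task, the sketch below shows one plausible way to score it: for each trial, check whether the listener correctly reported each of the last three words played. The scoring scheme and word lists are purely illustrative, not the study’s actual materials.

```python
def recall_rates(trials):
    """Proportion correct at each of the last three serial positions.

    `trials` is a list of (heard_words, reported_words) pairs; an
    illustrative scoring scheme, not the study's actual materials.
    """
    positions = 3
    correct = [0] * positions
    for heard, reported in trials:
        targets = heard[-positions:]  # the last three words played
        for i, word in enumerate(targets):
            if i < len(reported) and reported[i] == word:
                correct[i] += 1
    return [c / len(trials) for c in correct]

# Example: the third-from-last word is missed in the second trial.
trials = [(["cat", "door", "lamp", "tree"], ["door", "lamp", "tree"]),
          (["pen", "fish", "road", "milk"], ["bird", "road", "milk"])]
print(recall_rates(trials))  # [0.5, 1.0, 1.0]
```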

All the volunteer listeners selected for the study were unfamiliar with Korean-accented English.

The listeners’ rate of recall for the most recently heard words was similarly high with both accents, but Van Engen and her team found that volunteers remembered the third word back only about 70 percent of the time when listening to a Korean accent, compared to about 80 percent when listening to a standard American accent.

All of the words spoken with the accent had been previously tested to ensure that they were understandable before they were used in the experiment, Van Engen said. The difference in recall rates might be due to the brain using some of its executive processing regions, which are generally used to focus attention and integrate and store information, to understand words spoken in an unfamiliar accent, Van Engen said.

The results are preliminary, and Van Engen and her team are working to gather data on larger sets of listeners, as well as to test other brain functions that require processing spoken information, such as listening to a short lecture and later recalling and using the concepts discussed. She said work might also be done to explore whether becoming familiar with a foreign accent would lessen the observed difference in memory functions.

Van Engen hopes the results might help shape strategies for both listeners and foreign-accented speakers to communicate better and to ensure that the information they discuss is remembered. For example, it might help listeners to use standard strategies such as looking at the person speaking and asking for repetition. Accented speakers might be able to improve communication by talking more slowly or by matching their intonation, rhythm, and stress patterns more closely to those of native speakers, Van Engen said.

2pNSa3 – Tuning the cognitive environment: Sound masking with ‘natural’ sounds in open-plan offices. – Alana G. DeLoach, Jeff P. Carter, and Jonas Braasch

WASHINGTON, D.C., May 19, 2015 — Playing natural sounds such as flowing water in offices could boost worker moods and improve cognitive abilities, in addition to providing speech privacy, according to a new study from researchers at Rensselaer Polytechnic Institute. They will present the results of their experiment at the 169th Meeting of the Acoustical Society of America in Pittsburgh.

An increasing number of modern open-plan offices employ sound masking systems that raise the background sound of a room so that speech is rendered unintelligible beyond a certain distance and distractions are less annoying.

“If you’re close to someone, you can understand them. But once you move farther away, their speech is obscured by the masking signal,” said Jonas Braasch, an acoustician and musicologist at the Rensselaer Polytechnic Institute in New York.

Sound masking systems are custom designed for each office space by consultants and are typically installed as speaker arrays discreetly tucked away in the ceiling. For the past 40 years, the standard masking signal has been random, steady-state electronic noise — also known as “white noise.”
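As a rough illustration of what such a signal is (all parameter values here are assumptions, not taken from any deployed system), a conventional steady-state masker can be generated as band-limited random noise scaled to a target level:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def white_noise_masker(duration_s, fs=44100, level_db=-30.0):
    """Generate a steady-state random-noise masking signal.

    Illustrative parameters: real systems are tuned per office by
    consultants, usually rolling off high frequencies so the noise
    sounds less hissy.
    """
    rng = np.random.default_rng()
    noise = rng.standard_normal(int(duration_s * fs))

    # Gentle low-pass shaping (the cutoff is an assumption) so the
    # masker is less harsh than pure white noise.
    sos = butter(2, 5000.0, btype="lowpass", fs=fs, output="sos")
    shaped = sosfilt(sos, noise)

    # Scale to a target RMS level relative to digital full scale.
    target_rms = 10.0 ** (level_db / 20.0)
    return shaped * (target_rms / np.sqrt(np.mean(shaped ** 2)))
```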

Braasch and his team had previously tested whether masking signals inspired by natural sounds might work just as well as, or better than, the conventional signal. The idea was inspired by earlier work by Braasch and his graduate student Mikhail Volf, which showed that people’s ability to regain focus improved when they were exposed to natural sounds rather than to silence or machine-based sounds.

Recently, Braasch and his graduate student Alana DeLoach built upon those results in a new experiment. They exposed [HOW MANY??] human participants to three different sound stimuli while the participants performed a task requiring close attention: an office soundscape with the conventional random electronic masker; an office soundscape with a “natural” masker; and an office soundscape with no masker. The test subjects encountered only one of the three stimuli per visit.

The natural sound used in the experiment was designed to mimic the sound of flowing water in a mountain stream. “The mountain stream sound possessed enough randomness that it did not become a distraction,” DeLoach said. “This is a key attribute of a successful masking signal.”
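The researchers’ exact signal design is not reproduced here, but one common way to approximate the broadband, gently fluctuating character of running water is to give noise a 1/f-like spectral slope and a slow level fluctuation. The sketch below is only a guess at that general flavor, not the masker used in the study:

```python
import numpy as np

def stream_like_masker(duration_s, fs=44100):
    """Noise with a 1/f-like spectral slope and a slow level fluctuation.

    A rough, illustrative approximation of a flowing-water texture;
    not the masker used in the study.
    """
    n = int(duration_s * fs)
    rng = np.random.default_rng()

    # Shape white noise to a 1/f-like ("pink") spectrum in the
    # frequency domain.
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum[1:] /= np.sqrt(freqs[1:])
    pink = np.fft.irfft(spectrum, n)

    # Slow (0.5 Hz) sinusoidal level fluctuation with a random phase,
    # a crude stand-in for the burbling of a stream.
    t = np.arange(n) / fs
    envelope = 1.0 + 0.3 * np.sin(2.0 * np.pi * 0.5 * t
                                  + rng.uniform(0.0, 2.0 * np.pi))
    masker = pink * envelope
    return masker / np.max(np.abs(masker))
```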

They found that participants who listened to the natural sounds were more productive than those exposed to the other stimuli and reported being in better moods.

Braasch said using natural sounds as a masking signal could have benefits beyond the office environment. “You could use it to improve the moods of hospital patients who are stuck in their rooms for days or weeks on end,” Braasch said.

For those who might be wary of employers using sounds to influence their moods, Braasch argued that using natural masking sounds is no different from a company that wants to construct a new building near the coast so that its workers can be exposed to the soothing influence of ocean surf.

“Everyone would say that’s a great employer,” Braasch said. “We’re just using sonic means to achieve that same effect.”

3pAB2 – A design for a biomimetic dynamic sonar head. – Philip Caspers, Yanqing Fu, and Rolf Müller

WASHINGTON, D.C., May 20, 2015 — Engineers at Virginia Tech have taken the first steps toward building a novel dynamic sonar system inspired by horseshoe bats that could be more efficient and take up less space than current man-made sonar arrays. They are presenting a prototype of their “dynamic biomimetic sonar” at the 169th Meeting of the Acoustical Society of America in Pittsburgh, Pennsylvania.

Bats use biological sonar, called echolocation, to navigate and hunt, and horseshoe bats are especially skilled at using sounds to sense their environment. “Not all bats are equal when it comes to biosonar,” said Rolf Müller, a mechanical engineer at Virginia Tech. “Horseshoe bats hunt in very dense forests, and they are able to navigate and capture prey without bumping into anything. In general, they are able to cope with difficult sonar sensing environments much better than we currently can.”

To uncover the secrets behind the animal’s abilities, Müller and his team studied the ears and noses of bats in the laboratory. Using the same motion-capture technology used in Hollywood films, the team revealed that the bats rapidly deform their outer ear shapes to filter sounds according to frequency and direction and to suit different sensing tasks.

“They can switch between different ear configurations in only a tenth of a second – three times faster than a person can blink their eyes,” said Philip Caspers, a graduate student in Müller’s lab.

Unlike bat species that employ a less sophisticated sonar system, horseshoe bats emit ultrasound squeaks through their noses rather than their mouths. Using laser-Doppler measurements that detect velocity, the team showed that the noses of horseshoe bats also deform during echolocation–much like a megaphone whose walls are moving as the sound comes out.

The team has now applied the insights they’ve gathered about horseshoe bat echolocation to develop a robotic sonar system. The team’s sonar system incorporates two receiving channels and one emitting channel that are able to replicate some of the key motions in the bat’s ears and nose. For comparison, modern naval sonar arrays can have receivers that measure several meters across and many hundreds of separate receiving elements for detecting incoming signals.
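To see why even two receiving channels carry useful directional information, consider the textbook approach of estimating a source bearing from the time difference of arrival between two receivers. The sketch below illustrates that principle only; it is not the Virginia Tech group’s processing:

```python
import numpy as np

def bearing_from_tdoa(left, right, fs, spacing_m, c=343.0):
    """Estimate a source bearing (degrees) from two receiver channels.

    Textbook cross-correlation time-difference-of-arrival estimation,
    shown only to illustrate how few channels can encode direction;
    not the biomimetic sonar head's actual processing.
    """
    # Lag (in samples) at which the two channels align best.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)

    # Convert the lag to a time delay, then to an angle of arrival
    # for two receivers spaced `spacing_m` apart.
    delay = lag / fs
    sin_theta = np.clip(delay * c / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```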

By reducing the number of elements in their prototype, the team hopes to create small, efficient sonar systems that use less power and computing resources than current arrays. “Instead of getting one huge signal and letting a supercomputer churn away at it, we want to focus on getting the right signal,” Müller said.

3aBA9 – Acoustic separation of milk fat globules – Principles in large scale processing. – Thomas Leong, Linda Johansson, Pablo Juliano and Richard Manasseh

WASHINGTON, D.C., May 20, 2015 — Recently, scientists from Swinburne University of Technology in Australia and the Commonwealth Scientific and Industrial Research Organization (CSIRO), have jointly demonstrated cream separation from natural whole milk at liter-scales for the first time using ultrasonic standing waves–a novel, fast and nondestructive separation technique typically used only in small-scale settings.

At the 169th Meeting of the Acoustical Society of America (ASA), being held May 18-22, 2015, in Pittsburgh, Pennsylvania, the researchers will report the key design and effective operating parameters for milk fat separation in batch and continuous systems.

The project, co-funded by the Geoffrey Gardiner Dairy Foundation and the Australian Research Council, has established a proven ultrasound technique to separate fat globules from milk at volume throughputs of up to 30 liters per hour, opening doors for processing dairy and biomedical particulates on an industrial scale.

“We have successfully established operating conditions and design limitations for the separation of fat from natural whole milk in an ultrasonic liter-scale system,” said Thomas Leong, an ultrasound engineer and a postdoctoral researcher from the Faculty of Science, Engineering and Technology at the Swinburne University of Technology. “By tuning system parameters according to acoustic fundamentals, the technique can be used to specifically select milk fat globules of different sizes in the collected fractions, achieving fractionation outcomes desired for a particular dairy product.”

The Ultrasonic Separation Technique

According to Leong, when a sound wave is reflected upon itself, the reflected wave can superimpose on the original wave to form an acoustic standing wave. Such waves are characterized by regions of minimum local pressure, where destructive interference occurs at pressure nodes, and regions of high local pressure, where constructive superposition occurs at pressure antinodes.
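In symbols, this is the standard textbook superposition (background material, not an equation from the presentation): an incident wave and its reflection combine as

```latex
p(x,t) = A\sin(\omega t - kx) + A\sin(\omega t + kx) = 2A\cos(kx)\,\sin(\omega t)
```

Pressure nodes fall where \cos(kx) = 0, so they are spaced half a wavelength apart, \Delta x = \lambda/2 = c/(2f). For two-megahertz ultrasound in milk, where the speed of sound is roughly 1,500 meters per second, that spacing is about 0.38 millimeters.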

When an acoustic standing wave field is sustained in a liquid containing particles, the wave interacts with the particles and produces what is known as the primary acoustic radiation force. This force acts on the particles, causing them to move toward either the nodes or the antinodes of the standing wave, depending on their density and compressibility relative to the surrounding fluid. Positioned thus, the individual particles rapidly aggregate into larger entities at the nodes or antinodes.
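For a small sphere of radius R in a one-dimensional standing wave, the standard Gor’kov-type expression for this force is given below as textbook background; the presentation itself is not quoted here and may use different notation.

```latex
F_{\mathrm{rad}} = 4\pi\,\Phi\,k R^{3} E_{\mathrm{ac}} \sin(2kx),
\qquad
\Phi = \frac{1}{3}\left(\frac{5\tilde{\rho}-2}{2\tilde{\rho}+1} - \tilde{\kappa}\right)
```

Here k is the wavenumber, E_ac the acoustic energy density, \tilde{\rho} the particle-to-fluid density ratio, and \tilde{\kappa} the corresponding compressibility ratio. Particles with a positive contrast factor \Phi (relatively dense and stiff) collect at the pressure nodes, while milk fat globules, being lighter and more compressible than the surrounding serum, have negative \Phi and gather at the antinodes.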

To date, ultrasonic separation has mostly been applied in small-scale settings, such as microfluidic devices for biomedical applications. Few demonstrations have been at volume scales relevant to industrial application, largely because acoustic radiation forces attenuate over large distances.

Acoustic Separation of Milk Fat Globules at Liter Scales

To remedy this, Leong and his colleagues designed a system consisting of two fully submersible plate transducers placed at either end of a length-tunable, rectangular reaction vessel that can hold up to two liters of milk.

For single-plate operation, one of the plates produces one- or two-megahertz ultrasound waves while the other acts as a reflector. For dual-plate operation, both plates are switched on simultaneously, providing greater power to the system and increasing the acoustic radiation forces sustained.

To establish the optimal operating conditions, the researchers tested design parameters such as power input level, process time, transducer-reflector distance, and single versus dual transducer setups.

They found that, compared with conventional methods, ultrasonic separation leaves the top streams of the milk with a greater concentration of large fat globules (cream) and the bottom streams with more small fat globules (skimmed milk).

“These streams can be further fractionated to obtain smaller and larger sized fat globules, which can be used to produce novel dairy products with enhanced properties,” Leong said. Dairy studies suggest that cheeses made from milk with a higher proportion of small fat globules have superior taste and texture, while milk or cream with more large fat globules makes for tastier butter.

Leong said the ultrasonic separation process takes only about 10 to 20 minutes at the liter scale – much faster than the traditional methods of natural fat sedimentation and buoyancy processing, commonly used today in the manufacture of Parmesan cheeses in Northern Italy, which can take more than six hours.

The researchers’ next step is to work with small cheese makers to demonstrate the efficacy of the technique in cheese production.

On Bleats, in the Year of the Sheep

David G. Browning, 139 Old North Road, Kingston, RI 02881 decibeldb@aol.com

Peter M. Scheifele, Dept. of Communication Science, Univ. of Cincinnati, Cincinnati, OH 45267

 

A bleat is usually defined as the cry of a sheep or goat, but those are just two voices in a large worldwide animal chorus that we are only beginning to understand.

A bleat is a simple, short burst of sound composed of harmonic tones. It is easily voiced by young or small animals, which make up the majority of bleaters. From deer to polar bears, muskoxen to sea lions, the bleats of the young carry enough character to allow easy detection, and possibly identification, by concerned mothers. As these animals mature, their voices usually shift lower, longer, and louder, and they develop a vocabulary of other vocalizations.
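As a toy illustration of that structure (every parameter here is invented rather than measured from any animal), a bleat-like call can be synthesized as a brief stack of harmonics with a slow pitch wobble:

```python
import numpy as np

def synth_bleat(f0=400.0, duration_s=0.5, fs=44100, harmonics=6):
    """Synthesize a bleat-like call: a short stack of harmonics with a
    slow pitch wobble and a smooth onset/offset envelope.

    Toy parameters only; real bleats vary widely across species.
    """
    t = np.arange(int(duration_s * fs)) / fs

    # An 8 Hz pitch wobble of about +/-3% around the fundamental gives
    # the characteristic quaver of a bleat.
    phase = (2.0 * np.pi * f0 * t
             + (0.03 * f0 / 8.0) * np.sin(2.0 * np.pi * 8.0 * t))

    # Sum harmonics with 1/n amplitude roll-off.
    call = sum(np.sin(n * phase) / n for n in range(1, harmonics + 1))

    # Smooth rise and fall so the burst starts and ends cleanly.
    call *= np.sin(np.pi * t / duration_s) ** 2
    return call / np.max(np.abs(call))
```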

For some notable exceptions, however, this is not the case. Sheep and goats, for example, retain bleating as their principal vocalization through adulthood – hence bleating is usually associated with them. Their bleats have been the most studied and show a characteristic varietal structure, along with at least a limited ability for maternal recognition of specific individuals.

For another example, at least four small varieties of toad, such as the Australian Bleating Toad and, in America, the Eastern Narrow-Mouthed Toad, are strong bleaters throughout their lives. Bleats provide them a signature signal that carries in the night and is easily repeated and sustained. But why these four amphibians? Our lack of an answer speaks to our still-limited knowledge of the vast field of animal communication.

Perhaps most interestingly, the Giant Panda retains bleating while developing a complex mix of other vocalizations. In the visually challenging environment of a dense bamboo thicket, these animals likely must retain every possible vocal tool to communicate. Researchers link their bleating to male size and female age.

In summary, bleating is an important aspect of youth for many animals; for some it is the principal vocalization for life; and, for a few, a retained tool among many.