Blood Bubbles Reveal Oxygen Levels

Acoustic tools detect vibrating microbubbles that act as oxygen sensors

Media Contact:
Larry Frum
AIP Media
301-209-3090
media@aip.org

SEATTLE, November 29, 2021 – Blood carries vital oxygen through the circulatory system to muscles and organs. Acoustic tools can create small bubbles in the blood that change in response to oxygen and thereby signal oxygen levels.

During the 181st Meeting of the Acoustical Society of America, which will be held Nov. 29 to Dec. 3, Shashank Sirsi, from the University of Texas at Dallas, will discuss how circulating microbubbles can be used to measure oxygen levels. The talk, “Hemoglobin Microbubbles for In Vivo Blood Oxygen Level Dependent Imaging: Boldly Moving Beyond MRI,” will take place Monday, Nov. 29, at 11:25 a.m. Eastern U.S.

Microbubbles are smaller than one hundredth of a millimeter in diameter and can be made by emulsifying lipids or proteins with a gas. The gas filling of microbubbles causes them to oscillate and vibrate when ultrasound is applied, scattering energy and generating an acoustic response that can be detected by a clinical ultrasound scanner. They are routinely used in medical imaging to provide greater contrast in tissue.
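For intuition about why such tiny bubbles respond so strongly to clinical ultrasound, the classic Minnaert formula estimates the resonance frequency of a free (unencapsulated) gas bubble; a protein shell stiffens the bubble and shifts this frequency upward. Below is a minimal back-of-envelope sketch in Python; the parameter values are generic assumptions, not figures from the talk.

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101_325.0, rho=1000.0):
    """Free-bubble (Minnaert) resonance frequency in Hz.

    radius_m: bubble radius in meters
    gamma:    polytropic exponent of the gas (1.4 for air)
    p0:       ambient pressure in Pa
    rho:      density of the surrounding liquid in kg/m^3
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A bubble of 2-micron radius resonates in the low-megahertz range,
# right where clinical ultrasound scanners operate.
print(f"{minnaert_frequency(2e-6) / 1e6:.1f} MHz")  # ~1.6 MHz
```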

Hemoglobin, the protein that gives red blood cells their signature color, will form a stable shell around microbubbles. It then continues to carry out its typical role of binding and releasing oxygen in blood.

Sirsi and his team developed microbubbles that acoustically detect blood oxygen levels: the microbubble shells are altered by the structural changes hemoglobin undergoes in response to oxygen. The hemoglobin shell remains responsive to oxygen after it forms around the bubble and has been optimized to perform in the circulation of living organisms.

“When oxygen binds to hemoglobin, there are structural changes in the protein that change the mechanical properties,” said Sirsi. “The mechanical properties of the shell dictate the acoustic response of a bubble, so our hypothesis was that different acoustic responses would be seen as the shell gets stiffer or more elastic.”

Preliminary results show a strong correlation between oxygen concentration and the acoustic bubble response, highlighting the potential use of microbubbles as oxygen sensors. This capability would have many benefits for medicine and imaging, including evaluating oxygen-deprived regions of tumors and in the brain.
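As a rough illustration of how such a sensor could be calibrated, the sketch below fits a straight line between oxygen level and a measured acoustic feature, then inverts the fit to estimate oxygen from a new reading. Every number here, and the linear model itself, is a hypothetical placeholder rather than data from the study.

```python
import numpy as np

# Hypothetical calibration points: oxygen partial pressure (mmHg)
# versus a measured acoustic feature (echo amplitude in dB).
po2_mmHg = np.array([ 20.0,  40.0,  60.0,  80.0, 100.0])
echo_db  = np.array([-18.2, -16.9, -15.8, -14.6, -13.5])

# First-order calibration curve, inverted to act as the sensor readout.
slope, intercept = np.polyfit(po2_mmHg, echo_db, 1)

def estimate_po2(measured_db):
    return (measured_db - intercept) / slope

print(f"{estimate_po2(-15.0):.0f} mmHg")  # ~74 mmHg for this toy fit
```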

———————– MORE MEETING INFORMATION ———————–
USEFUL LINKS
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eventpilotadmin.com/web/planner.php?id=ASASPRING22
Press Room: https://acoustics.org/world-wide-press-room/

WORLDWIDE PRESS ROOM
In the coming weeks, ASA’s Worldwide Press Room will be updated with additional tips on dozens of newsworthy stories and with lay language papers, which are 300- to 500-word summaries of presentations written by scientists for a general audience and accompanied by photos, audio, and video. You can visit the site during the meeting at https://acoustics.org/world-wide-press-room/.

PRESS REGISTRATION
We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact AIP Media Services at media@aip.org. For urgent requests, staff at media@aip.org can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

1pBAb5 – Predicting Spontaneous Preterm Birth Risk is Improved when Quantitative Ultrasound Data are Included with Prior Clinical Data

Barbara L. McFarlin, bmcfar1@uic.edu
Yuxuan Liu
Shashi Roshan
Aiguo Han
Douglas G. Simpson
William D. O’Brien, Jr.

Popular version of 1pBAb5 – Predicting spontaneous preterm birth risk is improved when quantitative ultrasound data are included with prior clinical data
Presented Monday afternoon, November 29, 2021
181st ASA Meeting
Click here to read the abstract

Preterm birth (PTB) is defined as birth before 37 completed weeks’ gestation. Annually, more than 400,000 infants are born preterm in the U.S., and an estimated 15 million worldwide. The consequences of PTB for survivors are severe, can be life-long, and cost society $30 billion annually, a cost that far exceeds that of any major adult diagnosis. Identifying women at risk for spontaneous PTB (sPTB) has been medically challenging due to 1) the lack of signs and symptoms of preterm labor until intervention is too late, and 2) the lack of screening tools that signal sPTB risk early enough for an intervention to be effective. Spontaneous preterm labor is a syndrome with multiple etiologies, only a portion of which may be associated with cervical insufficiency; however, regardless of the cause of PTB, the cervix (the opening to the womb) must get ready for birth to allow passage of the baby.

Our novel quantitative ultrasound (QUS) technology was developed by a multidisciplinary team (ultrasound, engineering, and nurse midwifery) and shows promise of becoming a widely available and useful method for early detection of spontaneous preterm birth risk. In a preliminary study of 275 pregnant women who each received two ultrasounds during pregnancy, QUS improved the prediction of preterm birth when added to current clinical and patient risk factors. QUS is a capability that can readily be added to current clinical ultrasound systems, shortening the path from basic science innovation to improved clinical care of women.
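One way to picture "QUS added to clinical risk factors" is a classifier trained with and without the QUS input. The sketch below does this with logistic regression on synthetic data and compares the two models by area under the ROC curve (AUC); the data, feature names, and model choice are all illustrative assumptions, not the study’s actual analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: two clinical predictors (e.g., prior PTB history,
# cervical length) plus one QUS feature (e.g., tissue attenuation).
n = 1000
clinical = rng.normal(size=(n, 2))
qus = rng.normal(size=(n, 1))
risk = 0.8 * clinical[:, 0] - 0.6 * clinical[:, 1] + 1.2 * qus[:, 0]
y = (risk + rng.normal(size=n) > 0.5).astype(int)  # 1 = preterm birth

for name, X in [("clinical only", clinical),
                ("clinical + QUS", np.hstack([clinical, qus]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")  # adding QUS raises AUC here
```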

This research was supported by National Institutes of Health grant R01 HD089935.

Can We Perceive Gender from Children’s Voices?

Classification rates for individual talkers (ages 5-11), with numbers indicating talker age (males in circles). Voices in shaded quadrants were identified correctly based on both sentences and isolated syllables. CREDIT: Barreda and Assmann

WASHINGTON, November 23, 2021 — The perception of gender in children’s voices is of special interest to researchers, because voices of young boys and girls are very similar before the age of puberty. Adult male and female voices are often quite different acoustically, making gender…

From the Journal: The Journal of the Acoustical Society of America
Link to article: Perception of gender in children’s voices
DOI: 10.1121/10.0006785

4aAA10 – Acoustic Effects of Face Masks on Speech: Impulse Response Measurements Between Two Head and Torso Simulators

Victoria Anderson – vranderson@unomaha.edu
Lily Wang – lilywang@unl.edu
Chris Stecker – cstecker@spatialhearing.org
University of Nebraska-Lincoln (Omaha campus)
1110 S 67th Street
Omaha, Nebraska

Popular version of 4aAA10 – Acoustic effects of face masks on speech: Impulse response measurements between two binaural mannikins
Presented Thursday morning, December 2nd, 2021
181st ASA Meeting
Click here to read the abstract

Due to the COVID-19 pandemic, masks that cover both the mouth and nose have been used to reduce the spread of illness. While they are effective at preventing the transmission of COVID-19, they have also had a noticeable impact on communication. Many find it difficult to understand a speaker who is wearing a mask. Masks affect the sound level and direction of speech and, if they are opaque, can block visual cues that help in understanding speech. Many studies have explored the effect face masks have on understanding speech.

The purpose of this project was to begin assembling a database of the effects that common face masks have on impulse responses from one head and torso simulator (HATS) to another. An impulse response is a measurement of how sound radiates out from a source and bounces through a space. The resulting impulse response data can be used by researchers to simulate masked verbal communication scenarios.

To see how the masks specifically affect the impulse response, all measurements were taken in an anechoic chamber so no reverberant sound would be included. One HATS was placed in the middle of the chamber to serve as the source, and another HATS was placed at varying distances to act as the receiver. The mouth of the source HATS was covered with various face masks: paper, cloth, N95, nano, and face shield. These were worn individually and in combination with a face shield to cover a wider range of masked conditions that would reasonably occur in real life. The receiver HATS took measurements at 90° and 45° from the source, at distances of 6’ and 8’. A sine sweep, which is a signal that changes frequency over a set amount of time, was played to determine the impulse response of each masked condition at every location. The receiver HATS measured the impulse response at both the right and left ears, and the software used to produce the sine sweep was used to analyze and store the measurement data. This data will be available for use in simulated communication scenarios to better portray how sound behaves in a space when coming from a masked speaker.
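For readers curious how an impulse response is extracted from a sine sweep, the sketch below uses the widely used exponential-sweep method: the recorded signal is convolved with an amplitude-compensated, time-reversed copy of the sweep, which collapses the sweep into an impulse. This is a generic Python illustration with assumed parameter values, not the specific software used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000                  # sample rate in Hz (assumed)
T = 5.0                      # sweep duration in seconds
f1, f2 = 20.0, 20_000.0      # start and end frequencies of the sweep

t = np.arange(int(T * fs)) / fs
L = T / np.log(f2 / f1)

# Exponential (log) sine sweep: the excitation played by the source HATS.
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

# Inverse filter: the time-reversed sweep with a -6 dB/octave envelope.
# Convolving any recording of the sweep with this filter collapses the
# sweep into an impulse, leaving the measured impulse response.
inverse = sweep[::-1] * np.exp(-t / L)

# 'recorded' would be the signal captured at one receiver ear; reusing
# the dry sweep here just yields a band-limited impulse as a check.
recorded = sweep
ir = fftconvolve(recorded, inverse, mode="full")
ir /= np.max(np.abs(ir))     # the main peak marks the direct sound
```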

3aEA7 – Interactive Systems for Immersive Spaces

Samuel Chabot – chabos2@rpi.edu
Jonathan Mathews – mathej4@rpi.edu
Jonas Braasch – braasj@rpi.edu
Rensselaer Polytechnic Institute
110 8th St
Troy, NY, 12180

Popular version of 3aEA7 – Multi-user interactive systems for immersive virtual environments
Presented Wednesday morning, December 01, 2021
181st ASA Meeting
Click here to read the abstract

In the past few years, immersive spaces have become increasingly popular. These spaces, most prevalently used as exhibits and galleries, incorporate large displays that completely envelop groups of people, speaker arrays, and even reactive elements that respond to the actions of the visitors within. One of the primary challenges in creating productive applications for these environments is the integration of intuitive interaction frameworks. For users to take full advantage of these spaces, whether for productivity, education, or entertainment, the interfaces used to interact with data should be easy to understand and should provide predictable feedback.

In the Collaborative Research-Augmented Immersive Virtual Environment, or CRAIVE-Lab, at Rensselaer Polytechnic Institute, we have integrated a variety of technologies to foster natural interaction with the space. First, we developed a dynamic display environment for our immersive screen, written in JavaScript, to easily create display modules for everything from images to remote desktops. Second, we have incorporated spatial information into these display objects, so that audiovisual content presented on the screen generates spatialized audio at the corresponding location over our 128-channel speaker array. Finally, we have installed a multi-sensor platform that integrates a top-down camera array and a 16-channel spherical microphone to provide continuous tracking of multiple users, voice activity detection for each user, and isolated audio.
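As a rough illustration of how an on-screen position can drive spatialized audio over a speaker ring, the sketch below computes constant-power panning gains between the two loudspeakers nearest a source direction. The CRAIVE-Lab’s actual rendering method is not described here, so everything beyond the 128-speaker count is an assumption for illustration.

```python
import numpy as np

N_SPEAKERS = 128   # loudspeaker count in the CRAIVE-Lab array (from the text)

def ring_gains(source_angle_rad, n=N_SPEAKERS):
    """Pairwise amplitude panning on an evenly spaced speaker ring.

    Returns one gain per speaker; at most the two speakers bracketing
    the source direction are nonzero, with a constant-power crossfade.
    """
    spacing = 2 * np.pi / n
    pos = source_angle_rad / spacing          # position in "speaker units"
    lo = int(np.floor(pos)) % n               # speaker just below the source
    hi = (lo + 1) % n                         # speaker just above
    frac = pos - np.floor(pos)                # where we sit between them
    gains = np.zeros(n)
    gains[lo] = np.cos(frac * np.pi / 2)      # constant-power pan law
    gains[hi] = np.sin(frac * np.pi / 2)
    return gains

# A display object one third of the way around the room:
g = ring_gains(2 * np.pi / 3)
print(np.nonzero(g)[0], g[np.nonzero(g)])     # speakers 42 and 43 active
```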

By combining these technologies, we can create a user experience within the room that encourages dynamic interaction with data. For example, delivering a presentation in this space, a process that typically involves several file transfers and a lackluster visual experience, can now be performed with minimal setup, using the presenter’s own device, and with spatial audio when needed.

Control of lights and speakers is handled through a unified control system. Feedback from the sensor system allows display elements to be positioned relative to the user. Identified users can take ownership of specific elements on the display and interact with the system concurrently, which makes group interactions and shared presentations far less cumbersome than typical methods. The elements that make up the CRAIVE-Lab are not particularly novel, as far as contemporary immersive rooms are concerned. However, they intertwine into a network that provides functionality for the occupants far greater than the sum of its parts.