Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, North Rhine-Westphalia, 52074, Germany
Fabian Kiessling – fkiessling@ukaachen.de
Popular version of 1aBAb3 – Monitoring of neoadjuvant chemotherapy response of breast cancer with ultrasound localization microscopy
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026657
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
One in every eight women develops breast cancer over her lifetime. Despite tremendous advances in therapy over the last decades, particularly aggressive breast cancers remain challenging to treat. The current clinical standard is to give these patients chemotherapy before their tumor is surgically removed. The ultimate goal of this treatment is to make the tumor disappear completely, so that surgery serves solely to confirm that no cancer cells remain in the tissue. However, the majority of patients do not respond sufficiently to chemotherapy. In these cases, the therapeutic benefit must be weighed critically against the expected side effects and risks. Therefore, it is highly important to 1) better identify patients who are unlikely to respond sufficiently to therapy (patient preselection) and 2) make sure that patients undergoing therapy do indeed respond (therapy monitoring).
Figure 1: Super-resolution ultrasound image of a human breast tumor.
In our study, we are investigating the use of super-resolution ultrasound (Fig. 1) for these two applications. This emerging technique provides histology-like images of vessel trees and makes it possible to determine the blood flow within each individual vessel, thus providing new information on microvascular perfusion. Because the vascular system is tightly bound to tumor development, we hypothesize that super-resolution ultrasound might reveal differences between fully and incompletely responding patients.
To investigate this, we examined breast cancer patients during their chemotherapy treatment. More precisely, we characterized their tumors right before they received their first, second, and fourth doses of chemotherapeutics. At each time point, we measured the tumor size and recorded ultrasound videos in which the vessels were highlighted using a contrast agent. We then post-processed these videos to obtain super-resolution images of the vessel architecture and blood flow velocities. Finally, we extracted a multitude of morphological and functional vessel features, which together form a vascular fingerprint.
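As a rough illustration of the kind of feature extraction involved (a minimal sketch of our own, not the study's actual pipeline; the vessel mask, centerline, and example values are all hypothetical), two common vessel features, vessel density and tortuosity, could be computed like this:

import numpy as np

def vessel_density(vessel_mask):
    """Fraction of pixels in the analyzed region that belong to vessels.

    vessel_mask: binary 2-D array (1 = vessel, 0 = background).
    """
    return vessel_mask.sum() / vessel_mask.size

def tortuosity(centerline):
    """Path length of a vessel centerline divided by its end-to-end distance.

    centerline: (N, 2) array of ordered (x, y) points along one vessel.
    A straight vessel gives 1.0; more tortuous vessels give larger values.
    """
    steps = np.diff(centerline, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return path_length / chord

# Toy examples with made-up data:
mask = np.zeros((64, 64), dtype=int)
mask[30:34, :] = 1  # one straight vessel crossing the region
print(f"vessel density: {vessel_density(mask):.4f}")  # 0.0625

t = np.linspace(0, np.pi, 50)
curvy = np.column_stack([t, 0.2 * np.sin(t)])
print(f"tortuosity: {tortuosity(curvy):.3f}")  # slightly above 1.0

Many such features, computed per vessel and aggregated over the tumor, would together form the kind of vascular fingerprint described above.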
Our approach revealed that patients who responded fully to therapy already differed noticeably from partial responders before the start of chemotherapy. Their tumors were more vascularized, their vessels were more tortuous, and the vessel architecture was different. The two patient groups could be distinguished with high accuracy using an initial classification approach. During chemotherapy, many features associated with malignancy normalized in full responders, meaning that their vessels increasingly resembled those of healthy tissue. In contrast, this was not observed for partial responders.
These findings show that super-resolution ultrasound might be able to fill a tremendous gap in therapy monitoring and, especially, patient preselection for breast cancer. Being able to identify patients who respond insufficiently to therapy would spare them the side effects of an ineffective treatment and would allow physicians to look for a more adequate one. In that way, super-resolution ultrasound could considerably improve the therapeutic outcome for these patients.
U.S. Army Engineer Research and Development Center, Vicksburg, MS, 39180, United States
Popular version of 1pPAb4 – The Infrasonic Choir: Decoding Songs to Inform Decisions
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026838
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Figure 1. Infrasound is low-frequency sound below the threshold of human hearing that can propagate over long distances (tens to thousands of kilometers). Image courtesy of the author.
The world around us is continuously evolving due to the actions of Mother Nature and man-made activities, impacting how we interact with the environment. Many of these activities generate infrasound, which is sound below the frequency threshold of human hearing (Figure 1). These signals can travel long distances, tens to hundreds of kilometers depending on source strength, while maintaining key information about what generated them. The multitude of signals can be thought of as an infrasonic choir with voices from a wide variety of sources, including natural ones such as surf and volcanic activity and man-made ones such as infrastructure or industrial activities. Listening to, and deciphering, this infrasonic choir allows us to better understand how the world around us is evolving.
The infrasonic choir is observed by placing groupings of specialized sensors, called arrays, around the environment we wish to understand. These sensors are microphones designed to capture very low-frequency sounds. An array's geometry enables us to identify the direction from which a signal arrives, and using multiple arrays around a region allows the source location to be identified.
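To give a feel for the geometry (a minimal sketch of the idea, not the Center's actual processing; the array positions and bearings are invented), two arrays that each report a bearing to the same source let us intersect the two bearing lines to estimate where the source is:

import numpy as np

def locate_source(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing lines to estimate a source position.

    p1, p2: (x, y) positions of two infrasound arrays (e.g., in km).
    bearing*_deg: direction of arrival measured by each array, in degrees
    counterclockwise from the +x axis.
    """
    d1 = np.array([np.cos(np.radians(bearing1_deg)),
                   np.sin(np.radians(bearing1_deg))])
    d2 = np.array([np.cos(np.radians(bearing2_deg)),
                   np.sin(np.radians(bearing2_deg))])
    # Solve p1 + t1 * d1 = p2 + t2 * d2 for the distances t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Hypothetical example: two arrays 20 km apart hear the same source.
print(locate_source((0, 0), 45.0, (20, 0), 135.0))  # -> [10. 10.]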
One useful application of decoding infrasonic songs is listening to infrastructure, such as a bridge. Bridges vibrate at frequencies related to the engineering characteristics of the structure, such as mass and stiffness. Because bridges are surrounded by the fluid atmosphere, their vibrations create waves that can be measured with infrasound sensor arrays, much like the ripples that spread after a rock is thrown into a pond. As a bridge's overall health degrades, whether through time or other events, its engineering characteristics change, shifting its vibrational frequencies. Being able to identify the change from a healthy, "in-tune" structure to an unhealthy, "out-of-tune" one without having to see or inspect the bridge would enable continuous monitoring of entire regional road networks. The ability to conduct this type of monitoring after a natural disaster, such as a hurricane or earthquake, would enable quick identification of damaged structures so that limited structural assessment resources can be prioritized.
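As a sketch of what detecting an "out-of-tune" bridge could look like (our illustration only; the sampling rate, baseline frequency, and alert threshold are invented), one could track the dominant peak of the recorded spectrum and flag a drift from the healthy baseline:

import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the strongest peak in the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Hypothetical bridge: healthy fundamental at 2.4 Hz, sampled at 50 Hz.
fs, baseline_hz, threshold_hz = 50.0, 2.4, 0.1
rng = np.random.default_rng(0)
t = np.arange(0, 120, 1.0 / fs)                  # two minutes of data
recording = np.sin(2 * np.pi * 2.1 * t)          # simulated degraded bridge
recording += 0.3 * rng.standard_normal(len(t))   # ambient noise

peak = dominant_frequency(recording, fs)
if abs(peak - baseline_hz) > threshold_hz:
    print(f"Possible degradation: peak at {peak:.2f} Hz "
          f"vs. {baseline_hz} Hz baseline")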
Understanding how to decode the infrasonic choir within the symphony of the environment is the focus of ongoing research at the U.S. Army Engineer Research and Development Center. This effort focuses on moving monitoring into source-rich urban environments, designing lightweight, low-cost sensors and mobile arrays, and developing automated processing methods for analysis. If successful, continuous monitoring of this largely untapped source of information will provide a way to understand the environment and better inform decisions.
Permission to publish was granted by the Director, Geotechnical and Structures Laboratory, U.S. Army Engineer Research and Development Center.
Popular version of 2pAB8 – Moving Cargo, Keeping Whales: Investigating Solutions for Ocean Noise Pollution
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027065
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Figure 1. Image courtesy of ZoBell, Vanessa M., John A. Hildebrand, and Kaitlin E. Frasier. "Comparing pre-industrial and modern ocean noise levels in the Santa Barbara Channel." Marine Pollution Bulletin 202 (2024): 116379.
Southern California waters are lit up with noise pollution (Figure 1). The Port of Los Angeles and the Port of Long Beach are the first- and second-busiest shipping ports in the western hemisphere, supporting transits by large container ships that radiate noise throughout the region. Underwater noise generated by these vessels dominates ocean soundscapes, negatively affecting marine organisms, such as mammals, fish, and invertebrates, that rely on sound for daily life functions. In this project, we modeled what the ocean would sound like without human activity and compared it with what it sounds like today. We found that in this region, which encompasses the Channel Islands National Marine Sanctuary and feeding grounds of the endangered northeastern Pacific blue whale, modern ocean noise levels were up to 15 dB higher than pre-industrial levels. This is like having a picnic in a meadow versus having a picnic on an airport tarmac.
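To put 15 dB in perspective (a standard decibel conversion, added here for illustration rather than taken from the study): decibel levels are logarithmic, so a 15 dB increase corresponds to roughly a 30-fold increase in acoustic power:

\[
10 \log_{10}\!\left(\frac{P_{\text{modern}}}{P_{\text{pre-industrial}}}\right) = 15
\quad\Rightarrow\quad
\frac{P_{\text{modern}}}{P_{\text{pre-industrial}}} = 10^{1.5} \approx 31.6
\]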
Reducing ship noise in critical habitats has become an international priority for protecting marine organisms. A variety of noise reduction techniques have been discussed, and some are already operational. Understanding the effectiveness of these techniques requires broad stakeholder engagement, robust funding, and advanced signal processing. We modeled a variety of noise reduction simulations and identified effective strategies for quieting whale habitats in the Santa Barbara Channel region. Simulating conservation scenarios allows more techniques to be explored without having to be implemented, saving time, money, and resources in the pursuit of protecting the ocean.
Jian-yu Lu – jian-yu.lu@ieee.org
X (Twitter): @Jianyu_lu
Instagram: @jianyu.lu01
Department of Bioengineering, College of Engineering, The University of Toledo, Toledo, Ohio, 43606, United States
Popular version of 1pBAb4 – Reconstruction methods for super-resolution imaging with PSF modulation
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026777
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Imaging is a fundamental tool for advancing science, engineering, and medicine, and it is indispensable in our daily life. A few examples: acoustical and optical microscopes have helped advance biology. Ultrasound imaging, X-ray radiography, X-ray computerized tomography (X-ray CT), magnetic resonance imaging (MRI), gamma cameras, single-photon emission computerized tomography (SPECT), and positron emission tomography (PET) are routinely used for medical diagnosis. Electron and scanning tunneling microscopes have revealed structures at the nanometer or atomic scale, where one nanometer is one billionth of a meter. And photography, including the cameras in cell phones, is part of our everyday life.
Despite the importance of imaging, Ernst Abbe first recognized in 1873 that wave-based imaging systems have a fundamental resolution limit, known as the diffraction limit, caused by the diffraction of waves. This affects acoustical, optical, electromagnetic, and other waves.
Recently (see Lu, IEEE TUFFC, January 2024), the researcher developed a general method to overcome this long-standing diffraction limit. The method is not only applicable to wave-based imaging systems such as ultrasound, optical, electromagnetic, radar, and sonar systems; in principle it is also applicable to other linear shift-invariant (LSI) imaging systems such as X-ray radiography, X-ray CT, MRI, gamma cameras, SPECT, and PET, since it increases image resolution by introducing high spatial frequencies through modulating the point-spread function (PSF) of an LSI imaging system. The modulation can be induced remotely from outside the object to be imaged, or it can come from small particles introduced into the object or onto its surface and manipulated remotely. An LSI system can be understood by analogy with a geometric-distortion-corrected optical camera: the photo of a person stays the same, or invariant, in size and shape if the person merely shifts position perpendicular to the camera's optical axis within the field of view.
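The forward model behind this can be sketched in a few lines (a minimal illustration of the LSI convolution model, with an arbitrary object and PSF of our own choosing, not the paper's data): the recorded image is the object convolved with the system's PSF, and shifting the object simply shifts the image:

import numpy as np
from scipy.signal import fftconvolve

# Hypothetical 1-D "object": two point scatterers closer together
# than the PSF width, so a conventional image cannot separate them.
obj = np.zeros(256)
obj[100] = 1.0
obj[110] = 0.7

x = np.arange(-32, 33)
psf = np.exp(-(x / 8.0) ** 2)  # hypothetical Gaussian PSF

image = fftconvolve(obj, psf, mode="same")  # LSI imaging model

# Shift invariance: shifting the object shifts the image identically
# (up to tiny numerical edge effects).
shifted = fftconvolve(np.roll(obj, 20), psf, mode="same")
print(np.allclose(shifted, np.roll(image, 20), atol=1e-6))  # True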
Figure 1 below demonstrates the efficacy of the method using an acoustical wave. The method was used to image a passive object (first row) with pulse-echo imaging and to image wave source distributions (second row) with a receiver. The best images obtainable under Abbe's diffraction limit are in the second column, and the super-resolution images (better than the diffraction limit) obtained with the new method are in the last column. The super-resolution images had a resolution close to one third of the wavelength used, obtained from a distance with an f-number (focal distance divided by the diameter of the transducer) close to 2.
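To gauge the size of this gain (standard diffraction arithmetic of our own, not taken from the abstract): the lateral resolution of a conventional focused system is roughly the wavelength times the f-number, so at an f-number of 2 the diffraction-limited resolution is about two wavelengths, and a resolution of one third of a wavelength is therefore about six times finer:

\[
\delta_{\text{diffraction}} \approx \lambda \, f_{\#} = 2\lambda,
\qquad
\frac{2\lambda}{\lambda/3} = 6.
\]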
Because the method is based on the convolution model of an LSI system and many practical imaging systems are LSI, it opens an avenue for various new applications in science, engineering, and medicine. With a proper choice of modulator and imaging system, nanoscale imaging with resolution similar to that of a scanning electron microscope (SEM) may be possible even with visible or infrared light.
Emma Holmes – emma.holmes@ucl.ac.uk
X (Twitter): @Emma_Holmes_90
University College London (UCL), Department of Speech, Hearing and Phonetic Sciences, London, Greater London, WC1N 1PF, United Kingdom
Popular version of 4aPP4 – How does voice familiarity affect speech intelligibility?
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027437
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
It's much easier to understand what others are saying if you're listening to a close friend or family member rather than a stranger. And if you practice listening to the voices of people you've never met before, you might become better at understanding them too.
Many people struggle to understand what others are saying in noisy restaurants or cafés. This can become much more challenging as people get older. It’s often one of the first changes that people notice in their hearing. Yet, research shows that these situations are much easier if people are listening to someone they know very well.
In our research, we ask people to visit the lab with a friend or partner. We record their voices while they read sentences aloud. We then invite the volunteers back for a listening test. During the test, they hear sentences and click words on a screen to show what they heard. This is made more difficult by playing a second sentence at the same time, which the volunteers are told to ignore. This is like having a conversation when there are other people talking around you. Our volunteers listen to many sentences over the course of the experiment. Sometimes, the sentence is one recorded from their friend or partner. Other times, it’s one recorded from someone they’ve never met. Our studies have shown that people are best at understanding the sentences spoken by their friend or partner.
In one study, we manipulated the sentence recordings to change the sound of the voices. The voices still sounded natural, yet volunteers could no longer recognize them as their friend or partner. We found that participants were still better at understanding these sentences, even though they no longer recognized the voice.
In other studies, we've investigated how people learn to become familiar with new voices. Each volunteer learns the names of three new people. They've never met these people, but we play them lots of recordings of their voices. This is like listening to a new podcast or radio show. We've found that people become very good at understanding these new voices. In other words, we can train people to become familiar with new voices.
In new work that hasn’t yet been published, we found that voice familiarization training benefits both older and younger people. So, it may help older people who find it very difficult to listen in noisy places. Many environments contain background noise—from office parties to hospitals and train stations. Ultimately, we hope that we can familiarize people with voices they hear in their daily lives, to make it easier to listen in noisy places.