A general method to obtain clearer images at a resolution higher than the theoretical limit

Jian-yu Lu – jian-yu.lu@ieee.org
X (Twitter): @Jianyu_lu
Instagram: @jianyu.lu01
Department of Bioengineering, College of Engineering, The University of Toledo, Toledo, Ohio, 43606, United States

Popular version of 1pBAb4 – Reconstruction methods for super-resolution imaging with PSF modulation
Presented at the 186th ASA Meeting
Read the abstract at https://eppro02.ativ.me/web/index.php?page=IntHtml&project=ASASPRING24&id=3675355

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Imaging is a fundamental tool for advancing science, engineering, and medicine, and it is indispensable in daily life. Here are a few examples: acoustical and optical microscopes have helped advance biology. Ultrasound imaging, X-ray radiography, X-ray computerized tomography (X-ray CT), magnetic resonance imaging (MRI), the gamma camera, single-photon emission computerized tomography (SPECT), and positron emission tomography (PET) are routinely used for medical diagnoses. Electron and scanning tunneling microscopes have revealed structures at the nanometer or atomic scale, where one nanometer is one billionth of a meter. And photography, including the cameras in cell phones, is part of our everyday lives.

Despite the importance of imaging, it was first recognized by Ernst Abbe in 1873 that wave-based imaging systems face a fundamental resolution limit, known as the diffraction limit, due to the diffraction of waves. This limit affects acoustical, optical, electromagnetic, and other waves.

Recently (see Lu, IEEE TUFFC, January 2024), the researcher developed a general method to overcome this long-standing diffraction limit. The method is not only applicable to wave-based imaging systems such as ultrasound, optical, electromagnetic, radar, and sonar systems; in principle it also applies to other linear shift-invariant (LSI) imaging systems such as X-ray radiography, X-ray CT, MRI, the gamma camera, SPECT, and PET, since it increases image resolution by introducing high spatial frequencies through modulation of the point-spread function (PSF) of an LSI imaging system. The modulation can be induced remotely from outside the object to be imaged, or produced by small particles that are introduced into, or onto the surface of, the object and manipulated remotely. An LSI system can be understood through the example of a geometric-distortion-corrected optical camera in photography: the photo of a person stays the same, or invariant, in size and shape if the person merely shifts position in a direction perpendicular to the camera's optical axis within the camera's field of view.

Figure 1 below demonstrates the efficacy of the method using an acoustical wave. The method was used to image a passive object (first row) through pulse-echo imaging, and to image wave source distributions (second row) with a receiver. The best images obtainable under Abbe's diffraction limit are in the second column, and the super-resolution (better than the diffraction limit) images obtained with the new method are in the last column. The super-resolution images had a resolution close to 1/3 of the wavelength used, imaged from a distance with an f-number (focal distance divided by the diameter of the transducer) close to 2.

Figure 1. This figure was adapted courtesy of IEEE (doi.org/10.1109/TUFFC.2023.3335883).

Because the developed method is based on the convolution model of an LSI system, and many practical imaging systems are LSI, the method opens an avenue for various new applications in science, engineering, and medicine. With a proper choice of modulator and imaging system, nanoscale imaging with resolution similar to that of a scanning electron microscope (SEM) may be possible even with visible or infrared light.
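The convolution picture behind this can be illustrated with a toy one-dimensional example. The sketch below (all numbers hypothetical, not from the paper) models an LSI imaging system as the object convolved with a point-spread function, and shows that a PSF containing more high spatial frequencies (here, simply a narrower Gaussian) resolves two point targets that a diffraction-limited PSF blurs into one. It demonstrates only the underlying principle, not the paper's actual PSF-modulation scheme.

```python
import numpy as np

# Hypothetical 1-D LSI imaging system: image = object convolved with the PSF.
x = np.linspace(-1, 1, 401)                # spatial axis, arbitrary units
obj = np.zeros_like(x)
obj[190], obj[210] = 1.0, 1.0              # two point targets close together

def gaussian_psf(width):
    p = np.exp(-x**2 / (2 * width**2))
    return p / p.sum()                     # normalized PSF

blurred = np.convolve(obj, gaussian_psf(0.08), mode="same")  # wide, diffraction-limited PSF
sharper = np.convolve(obj, gaussian_psf(0.02), mode="same")  # PSF with more high spatial frequencies

def n_peaks(img):
    """Count strict local maxima (resolved targets)."""
    return int(np.sum((img[1:-1] > img[:-2]) & (img[1:-1] > img[2:])))

print(n_peaks(blurred), n_peaks(sharper))  # prints: 1 2
```

With the wide PSF the two targets merge into a single blob; with the narrower PSF they remain two distinct peaks, which is the sense in which added high spatial frequencies translate into resolution.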

These Sounds Are Out of This World! #ASA184

Software program predicts environmental noise and modulates voices to simulate sound on other planets.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 11, 2023 – You may know how other planets look, like the rust orange, dusty surface of Mars or the vibrant teal of Uranus. But what do those planets sound like?

This illustration depicts Mars helicopter Ingenuity during a test flight on Mars. Ingenuity was taken to the red planet strapped to the belly of the Perseverance rover (seen in the background). Credit: NASA/JPL-Caltech

Timothy G. Leighton from the University of Southampton in the U.K. designed a software program that produces extraterrestrial environmental sounds and predicts how human voices might change in distant worlds. He will demonstrate his work at the upcoming 184th Meeting of the Acoustical Society of America, running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel. His presentation will take place Thursday, May 11, at 12:00 p.m. Eastern U.S. in the Chicago room.

The presentation is part of a special session that brings together the acoustics and planetary science communities. Acoustical studies became essential during the Huygens lander’s descent into Titan’s atmosphere in 2005 and in the more recent Mars InSight and Mars 2020 missions. These successful missions carried customized active and passive acoustic sensors operating over a wide spectrum, from very low frequencies (infrasound, below the human hearing threshold) to ultrasound (above human hearing).

“For decades, we have sent cameras to other planets in our solar system and learned a great deal from them. However, we never really heard what another planet sounded like until the very recent Mars Perseverance mission,” said Leighton.

Scientists can harness sound on other worlds to learn about properties that might otherwise require a lot of expensive equipment, like the chemical composition of rocks, how atmospheric temperature changes, or the roughness of the ground.

Extraterrestrial sounds could also be used in the search for life. At first glance, Jupiter’s moon Europa may seem a hostile environment, but below its shell of ice lies a potentially life-sustaining ocean.

“The idea of sending a probe on a seven-year trip through space, then drilling or melting to the seabed, poses mind-boggling challenges in terms of finance and technology. The ocean on Europa is 100 times deeper than Earth’s Arctic Ocean, and the ice cap is roughly 1,000 times thicker,” said Leighton. “However, instead of sending a physical probe, we could let sound waves travel to the seabed and back and do our exploring for us.”

Planets’ unique atmospheres impact sound speed and absorption. For example, the thin, carbon dioxide-rich Martian atmosphere absorbs more sound than Earth’s, so distant noises appear fainter. Anticipating how sound travels is important for designing and calibrating equipment like microphones and speakers.
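One way to see how an atmosphere's composition and temperature set the sound speed is the ideal-gas relation c = sqrt(γRT/M). The sketch below uses rough, assumed values for Earth and Mars (illustrative only, not mission data) to show why sounds on Mars travel more slowly than on Earth.

```python
import math

# Ideal-gas estimate of sound speed: c = sqrt(gamma * R * T / M).
# The gas properties below are rough illustrative assumptions.
R = 8.314  # universal gas constant, J/(mol K)

def sound_speed(gamma, molar_mass_kg, temperature_k):
    return math.sqrt(gamma * R * temperature_k / molar_mass_kg)

earth = sound_speed(1.40, 0.0290, 288)  # N2/O2 air near 15 C
mars  = sound_speed(1.30, 0.0440, 210)  # thin CO2 atmosphere near -63 C

print(f"Earth ~{earth:.0f} m/s, Mars ~{mars:.0f} m/s")  # prints: Earth ~340 m/s, Mars ~227 m/s
```

The heavier CO2 molecules and colder temperatures give Mars a noticeably lower sound speed; absorption, which makes distant Martian sounds fainter, depends on further atmospheric details not captured by this simple formula.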

Hearing the sound from other planets is beneficial not just for scientific purposes, but also for entertainment. Science-fiction films contain vivid imagery to mimic the look of other worlds but often lack the immersive quality of how those worlds would sound.

Leighton’s software will showcase predictions of the sounds of other worlds at planetariums and museums. In the case of Mars, it will include actual sounds thanks to the U.S./European Perseverance team and China’s Zhurong mission.

The special session, chaired by Leighton and Andi Petculescu, is the third forum on acoustics in planetary science organized at a meeting of the Acoustical Society of America.

“The success of the first two ASA special sessions on this subject has led to quite a few collaborations between the two communities, a trend that we hope will carry on,” said Petculescu.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Fighting Racial Bias in Next-Gen Breast Cancer Screening #ASA184

A new virtual framework has enabled investigations into the effectiveness of optoacoustic tomography for cancer screening in darker-skinned individuals.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

CHICAGO, May 9, 2023 – Breast cancer is one of the most common and deadly types of cancer, and the best outcomes stem from early detection. But some screening techniques may be less effective for people with darker skin.

Seonyeong Park of the University of Illinois Urbana-Champaign will discuss experiments to measure this bias in her talk, “Virtual imaging trials to investigate impact of skin color on three-dimensional optoacoustic tomography of the breast.” The presentation will take place Tuesday, May 9, at 6:15 p.m. Eastern U.S. in room Chicago F/G, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.

Virtual imaging trials to investigate the impact of skin color on 3D optoacoustic tomography of the breast: (a) An example schematic of a 3D OAT scan (left) and a clinical image (right) scanned by TomoWave Laboratories, Inc. (Houston) using LOUISA-3D at MD Anderson Cancer Center; (b) VITs of OAT using numerical breast phantoms (top); (c) 3D visualization of region-of-interest difference images between the reconstructed images with and without a lesion under different skin tones, obtained via VITs of OAT. Credit: Seonyeong Park

Current standard screening for breast cancer is done with X-ray mammography, which can be uncomfortable and is less effective on dense breast tissue. An alternative, optoacoustic tomography (OAT), uses laser light to induce sound vibrations in breast tissue. The vibrations can be measured and analyzed to spot tumors. This method is safe and effective and does not require compression during imaging.

The technology that underlies OAT imaging is not new; it has been used in pulse oximetry for decades. Concerns about its interaction with darker skin have existed for almost as long.

“In 1990, a study found that pulse oximetry was about 2.5 times less accurate in patients with dark skin,” said Park. “Recently, an article suggested that unreliable measurements from pulse oximeters may have contributed to increased mortality rates in Black patients during the COVID-19 pandemic.”

With OAT emerging as an effective breast cancer screening method, Park and her team, led by professors Mark Anastasio at UIUC and Umberto Villa at the University of Texas at Austin, collaborating with professor Alexander Oraevsky of TomoWave Laboratories, Inc. in Houston, wanted to determine if this same bias was present. Rather than navigate the cost and ethics issues surrounding human test subjects, the team instead simulated a range of skin colors and tumor locations.

“By using an ensemble of realistic numerical breast phantoms, i.e., digital breasts, the evaluation can be conducted rapidly and cost-effectively,” said Park.

The results confirmed that tumors could be harder to locate in individuals with darker skin depending on the design of the OAT imager and the location of the tumor. Fortunately, a virtual framework developed by Park allows for more comprehensive investigations and can serve as a tool for evaluating and optimizing new OAT imaging systems in their early stages of development.

“To improve detectability in dark skin, the laser power to acoustic noise ratio should be increased,” said Park. “It is recommended that skin color-dependent detectability should be evaluated when designing new OAT breast imagers. Our team is actively conducting in-depth investigations utilizing our virtual framework to propose effective strategies for designing imaging systems that can help mitigate racial bias in OAT breast imaging.”

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASASPRING23&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Can we detect volcanic eruptions and venusquakes from a balloon floating high above Venus?

Siddharth Krishnamoorthy – siddharth.krishnamoorthy@jpl.nasa.gov

NASA Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109, United States

Daniel C. Bowman2, Emalee Hough3, Zach Yap3, John D. Wilding4, Jamey Jacob3, Brian Elbing3, Léo Martire1, Attila Komjathy1, Michael T. Pauken1, James A. Cutts1, Jennifer M. Jackson4, Raphaël F. Garcia5, and David Mimoun5

1. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA
2. Sandia National Laboratories, Albuquerque, New Mexico, USA
3. Oklahoma State University, Stillwater, OK, USA
4. Seismological Laboratory, California Institute of Technology, Pasadena, CA, USA
5. Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO), Toulouse, France

Popular version of 4aPAa1 – Development of Balloon-Based Seismology for Venus through Earth-Analog Experiments and Simulations
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018837

Venus has often been described as a “hellscape,” and deservedly so: the surface of Venus simultaneously scorches and crushes spacecraft that land on it, with temperatures exceeding 460 degrees Celsius (~860 F) and atmospheric pressures exceeding 90 atmospheres. While conditions on the surface of Venus are extreme, temperature and pressure drop dramatically with altitude. At about 50-60 km above the surface, the temperature (-10 to 70 C) and pressure (~0.2 to 1 atmosphere) resemble those on Earth. At this altitude, the challenge of surviving clouds of sulfuric acid is more manageable than that of surviving the simultaneous squeeze and scorch at the surface. This is evidenced by the fact that the two VeGa balloons floated in the atmosphere of Venus by the Soviet Union in 1985 transmitted data for approximately 48 hours (and presumably survived for much longer), compared with 2 hours and 7 minutes, the longest any spacecraft landed on the surface has survived. A new generation of Venus balloons is now being designed that can last over 100 days and can change altitude to navigate different layers of Venus’ atmosphere. Our research focuses on developing technology to detect signatures of volcanic eruptions and “venusquakes” from balloons in the Venus clouds. Doing so allows us to quantify the level of ongoing activity on Venus and associate that activity with maps of the surface, which in turn allows us to study the planet’s interior from high above it. Conducting this experiment from a balloon floating at an altitude of 50-60 km above the surface of Venus provides a significantly longer observation period than the lifespan of any spacecraft landed on the surface with current technology.

We propose to utilize low-frequency sound waves known as infrasound to detect and characterize venusquakes and volcanic activity. These waves are generated by coupling between the ground and the atmosphere of the planet: when the ground moves, it acts like a drum that produces weak infrasound waves in the atmosphere, which can then be detected by pressure sensors deployed from balloons, as shown in Figure 1. On Venus, the conversion from ground motion to infrasound is up to 60 times more efficient than on Earth.
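A back-of-the-envelope estimate shows where a factor of that size comes from. The pressure radiated by a large vibrating surface scales with the specific acoustic impedance (density times sound speed) of the atmosphere above it. The sketch below uses rough, assumed near-surface values (illustrative only, not mission data) and lands in the same ballpark as the "up to 60 times" figure.

```python
# Rough ground-to-atmosphere coupling comparison.
# For a large vibrating surface, radiated pressure p ~ rho * c * v,
# i.e., the atmosphere's specific acoustic impedance times ground velocity.
# The atmospheric values below are rough illustrative assumptions.
rho_earth, c_earth = 1.2, 340.0    # kg/m^3 and m/s near Earth's surface
rho_venus, c_venus = 65.0, 410.0   # kg/m^3 and m/s near Venus' surface

ratio = (rho_venus * c_venus) / (rho_earth * c_earth)
print(f"Same ground motion -> ~{ratio:.0f}x larger pressure amplitude on Venus")
```

Because Venus' near-surface atmosphere is tens of times denser than Earth's, the same ground motion pushes on far more gas, producing a much stronger infrasound signal.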

Figure 1: Infrasound is generated when the atmosphere reverberates in response to the motion of the ground and can be detected on balloons. Infrasound can travel directly from the site of the event to the balloon (epicentral) or be generated by seismic waves as they pass underneath the balloon and travel vertically upward (surface wave infrasound).

We are developing this technique by first demonstrating that earthquakes and volcanic eruptions on Earth can be detected by instruments suspended from balloons. These data also allow us to validate our simulation tools and generate estimates of what such signals may look like on Venus. Flight experiments over the last few years have detected not only several earthquakes of varying magnitudes and volcanic eruptions, but also other Venus-relevant phenomena such as lightning and mountain waves, as shown in Figure 2.

Figure 2: Venus-relevant events on Earth detected on high-altitude balloons using infrasound. Pressure waves from the originating event travel to the balloon and are recorded by barometers suspended from the balloon.

In the next phase of the project, we will generate a catalog of analogous signals on Venus and develop signal identification tools that can autonomously identify signals of interest on a Venus flight.

Copyright 2023, all rights reserved. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).

Noise reduction for low frequency sound measurements from balloons on Venus

Taylor Swaim – tswaim@okstate.edu

Oklahoma State University
Stillwater, Oklahoma 74078
United States

Kate Spillman
Emalee Hough
Zach Yap
Jamey D. Jacob
Brian R. Elbing (twitter: @ElbingProf)

Popular version of 2pCA6 – Infrasound noise mitigation on high altitude balloons
Presented at the 184th ASA Meeting
Read the article in Proceedings of Meetings on Acoustics

While there is great interest in studying the structure of Venus because it is believed to be similar to Earth's, there are no direct seismic measurements on Venus. This is because the Venus surface is too hot for electronics, but conditions are milder in the middle of the Venus atmosphere. This has motivated interest in studying seismic activity using low-frequency sound measurements from high-altitude balloons. Recently, this method was demonstrated on Earth, with weak earthquakes detected from balloons flying at twice the altitude of commercial airplanes. Video 1 shows a balloon launch for these test flights. Due to the denser atmosphere on Venus, the coupling between a venusquake and the resulting sound waves should be much stronger, making the sound louder on Venus. However, the higher-density atmosphere, combined with vertical changes in wind speed, is also likely to increase the wind noise on these sensors. Thus, a new technology is needed to reduce wind noise on high-altitude balloons.

Video 1. Video of a balloon launch during the summer of 2021. Video courtesy of Jamey Jacob.

Several different designs were proposed and ground tested to identify potential materials for compact windscreens. The testing included a long-term outdoor deployment so that the sensors would be exposed to a wide range of wind speeds and conditions. Separately, the sensors were exposed to controlled low-frequency sounds to test whether the windscreens were also reducing the loudness of the signals of interest. All of the designs showed a significant reduction in wind noise with minimal reduction of the controlled sounds, but one design in particular outperformed the others. This design uses a canvas fabric on the outside of a box, as shown in Figure 1, combined with a dense foam material on the inside.

Figure 1. Picture of the balloon carrying the low-frequency sound sensors. This flight compared an early windscreen design against no windscreen. Image courtesy of Brian Elbing.

The next step is to fly this windscreen on a high-altitude balloon, especially on windier days and with a long flight line, to increase the amount of wind that the sensors experience. The wind direction at the float altitude of these balloons will change in May, and wind speeds will then rapidly increase, making this the target window to test the new design.

What is a webchuck?

Chris Chafe – cc@ccrma.stanford.edu

Stanford University
CCRMA / Music
Stanford, CA 94305
United States

Ge Wang
Stanford University

Michael Mulshine
Stanford University

Jack Atherton
Stanford University

Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058

Take all of computer music, the advances in programming digital sound, and the web and web browsers, and create an enjoyable playground for sound exploration. That's Webchuck, a new platform for real-time web-based music synthesis. What would it chuck? Primarily musical and artistic projects, in the form of webapps featuring real-time sound generation. For example, The Metered Tide, in the video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.

Video 1: The Metered Tide

The backing track plays a sonification of a century’s worth of sea level data collected at the location while the musician records the live session. Webchuck has fulfilled a long-sought promise for accessible music making and simplicity of experimentation.

Video 2: The Metered Tide with backing track

Example webapps from this new Webchuck critter are popping up rapidly and a growing body of musicians and students enjoy how they are able to produce music easily and on any system. New projects are fun to program and can be made to appear anywhere. Sharing work and adapting prior examples is a breeze. New webapps are created by programming in the Chuck musical programming language and can be extended with JavaScript for open-ended possibilities.

Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding is the most common way of using Webchuck: developing a program by hearing changes as they are made. Rapid prototyping in sound has been made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck's ease of abstraction so that programmers can build up from low-level details to higher-level features.

Combining the expressive music programming power of Chuck with the ubiquity of web browsers is a game changer that researchers have observed in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music and then some.