Muyinatu Bell — mledijubell@jhu.edu
Johns Hopkins University
3400 N. Charles St.
Baltimore, MD 21218
Popular version of paper 1aBAa4
Presented Monday morning, December 7, 2020
179th ASA Meeting, Acoustics Virtually Everywhere
Injuries to major blood vessels and nerves during surgical procedures such as neurosurgery, spinal fusion surgery, hysterectomies, and biopsies can lead to severe complications for the patient, including paralysis or even death. Adding to the difficulty is that, in many cases, these structures are not visible from the surgeon's immediate viewpoint.
Photoacoustic imaging is a technique with great potential to aid surgeons: it uses the acoustic responses generated by light transmission to make images of blood vessels and nerves. However, this technique is challenged by confusing artifacts in the photoacoustic images, caused by acoustic reflections from bone and other highly reflective structures, which violate the assumptions needed to form accurate images.
Demonstration of ideal image formation (also known as beamforming) vs. beamforming that yields artifacts, distortions, incorrect localization, and acoustic reflections.
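To give a sense of what image formation (beamforming) means in practice, here is a minimal delay-and-sum sketch in Python. It is a textbook-style illustration, not the beamformer used in the paper: the variable names and parameter values are assumptions, and the key point is the single speed-of-sound, direct-path assumption that reflection artifacts violate.

```python
import numpy as np

def delay_and_sum(channel_data, sensor_x, fs, c, pixels_x, pixels_z):
    """Toy delay-and-sum beamformer: assumes every recorded wave traveled
    directly from the pixel to the sensor at a single speed of sound `c`.
    Reflections from bone violate this assumption and appear as artifacts."""
    image = np.zeros((len(pixels_z), len(pixels_x)))
    n_samples = channel_data.shape[1]
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # One-way (photoacoustic) travel time from pixel (x, z) to each sensor
            distances = np.sqrt((sensor_x - x) ** 2 + z ** 2)
            samples = np.round(distances / c * fs).astype(int)
            valid = samples < n_samples
            image[iz, ix] = channel_data[np.where(valid)[0], samples[valid]].sum()
    return image

# Example: 64 sensors, 5 MHz sampling, 1540 m/s speed of sound (soft-tissue assumption)
sensor_x = np.linspace(-0.01, 0.01, 64)      # sensor positions (m)
channel_data = np.random.randn(64, 2000)     # recorded acoustic signals (random stand-in)
img = delay_and_sum(channel_data, sensor_x, 5e6, 1540.0,
                    np.linspace(-0.01, 0.01, 50), np.linspace(0.005, 0.03, 50))
print(img.shape)  # (50, 50) beamformed image
```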
This paper summarizes novel methods developed by the Photoacoustic and Ultrasonic Systems Engineering (PULSE) Lab at Johns Hopkins University to eliminate surgical complications by creating more informative images for surgical guidance.
The overall goal of the proposed approach is to learn the unique shape-to-depth relationship of data from point-like photoacoustic sources (such as needle tips, catheter tips, or the tips of surgical tools) in order to provide a deep learning-based replacement for image formation that can more clearly guide surgeons. Accurately determining the proximity of these point-like tips to anatomical landmarks that appear in photoacoustic images, like major blood vessels and nerves, is a critical feature of the entire photoacoustic technology for surgical guidance. Convolutional neural networks (CNNs), a class of deep neural networks most commonly applied to analyzing visual imagery, were trained, tested, and implemented to achieve the end goal of producing clear and interpretable photoacoustic images.
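To make the idea concrete, the sketch below shows, in simplified form, how a small convolutional network might classify a patch of raw photoacoustic channel data as a true source wavefront or a reflection artifact. This is an illustrative example only; the layer sizes, class labels, and input dimensions are assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class WavefrontClassifier(nn.Module):
    """Toy CNN that labels a patch of raw photoacoustic channel data
    as either a true point source or a reflection artifact.
    (Illustrative only; not the network used in the paper.)"""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A 64x64 patch of channel data (sensor elements x time samples), batch of 1
patch = torch.randn(1, 1, 64, 64)
logits = WavefrontClassifier()(patch)
print(logits.shape)  # torch.Size([1, 2]) -> scores for "source" vs. "artifact"
```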
After training on photoacoustic computer simulations, CNNs that achieved greater than 90% source classification accuracy were transferred to real photoacoustic data. These networks were trained to output the locations of both sources and artifacts, as well as classifications of the detected wavefronts. These outputs were then displayed in an image format called CNN-based images, which show each detected point source location, such as a needle or catheter tip, along with its location error, as illustrated below.
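As a rough illustration of that display step, the sketch below takes a list of network detections (each a 2D location, a class label, and an estimated location error) and renders them as a simple CNN-based image. The detection format, class names, and plotting choices are assumptions for illustration, not the exact display used by the PULSE Lab.

```python
import matplotlib.pyplot as plt

# Hypothetical network outputs: (lateral mm, depth mm, class, location error mm)
detections = [
    (2.0, 15.0, "source", 0.3),     # e.g., a catheter tip
    (-4.5, 22.0, "artifact", 1.1),  # e.g., a reflection from bone
]

fig, ax = plt.subplots()
for x, z, label, err in detections:
    color = "green" if label == "source" else "red"
    # Marker at the detected location, circle radius indicating location error
    ax.plot(x, z, "o", color=color)
    ax.add_patch(plt.Circle((x, z), err, fill=False, color=color))
    ax.annotate(f"{label} (+/- {err} mm)", (x, z),
                textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Lateral position (mm)")
ax.set_ylabel("Depth (mm)")
ax.invert_yaxis()  # depth increases downward, as in ultrasound/photoacoustic displays
ax.set_title("Toy CNN-based image: detected sources vs. artifacts")
plt.show()
```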
Overall, classification rates ranged from 92% to 99.62% for simulated data. The network that utilized ResNet101 achieved both the highest classification performance (99.62%) and the lowest misclassification rate (0.28%). A similar result was achieved with experimental water bath, phantom, ex vivo, and in vivo tissue data when using the Faster R-CNN architecture with the plain VGG16 convolutional neural network.
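For readers curious how such a detector is typically assembled, the sketch below builds a Faster R-CNN with a VGG16 backbone using the torchvision library's standard custom-backbone recipe. The anchor sizes, number of classes, and input shape are assumptions for illustration, not the exact configuration reported in the paper.

```python
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# VGG16 convolutional layers serve as the feature extractor (backbone);
# pretrained weights could be loaded here instead of None
backbone = torchvision.models.vgg16(weights=None).features
backbone.out_channels = 512  # VGG16's final conv block outputs 512 channels

# Single-feature-map backbone, so anchors and RoI pooling are defined explicitly
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

# Classes: background, true source wavefront, reflection artifact (assumed labels)
model = FasterRCNN(
    backbone,
    num_classes=3,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

model.eval()
with torch.no_grad():
    # A single frame of data replicated to 3 channels, as the detector expects
    frame = torch.randn(3, 256, 256)
    predictions = model([frame])
print(predictions[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])
```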
This success demonstrates two major breakthroughs for the field of deep learning applied to photoacoustic image formation. First, computer simulations of acoustic wave propagation can be used to successfully train deep neural networks, meaning that extensive experiments are not necessary to generate the thousands of examples needed to train CNNs for the proposed task. Second, these networks transfer well to real experimental data that were not included during training, meaning that CNN-based images can potentially be incorporated into future products that use the photoacoustic process to minimize errors during surgeries and interventions.
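A minimal sketch of that simulation-to-experiment workflow is shown below, assuming hypothetical stand-ins for the simulated and experimental datasets and reusing the toy classifier sketched earlier. It simply trains on simulated frames and then evaluates the same weights on experimental frames, without any retraining.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical placeholders: many labeled simulated channel-data frames,
# plus a smaller experimental set (water bath, phantom, ex vivo, in vivo)
# that is used only for testing.
sim_data = TensorDataset(torch.randn(1000, 1, 64, 64), torch.randint(0, 2, (1000,)))
exp_data = TensorDataset(torch.randn(50, 1, 64, 64), torch.randint(0, 2, (50,)))

model = WavefrontClassifier()  # the toy classifier from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# Train only on simulated data
for epoch in range(5):
    for x, y in DataLoader(sim_data, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

# Evaluate the simulation-trained network directly on experimental data
model.eval()
correct = 0
with torch.no_grad():
    for x, y in DataLoader(exp_data, batch_size=32):
        correct += (model(x).argmax(dim=1) == y).sum().item()
print(f"Experimental accuracy: {correct / len(exp_data):.2%}")
```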