Extracting Human Skull Properties by Using Ultrasound and Artificial Intelligence
Churan He1– firstname.lastname@example.org
Yun Jing2 – email@example.com
Aiguo Han1 – firstname.lastname@example.org
1. Department of Electrical and Computer Engineering
The University of Illinois at Urbana-Champaign
306 North Wright Street
Urbana, IL 61801
2. Graduate Program in Acoustics
Pennsylvania State University
201 Applied Science Building
University Park, PA 16802
Popular version of paper ‘1aBAb9 – Human skull profile and speed of sound estimation using pulse-echo ultrasound signals with deep learning’
Presented Monday morning, November 29, 2021
181st Meeting of the Acoustical Society of America in Seattle, Washington.
Ultrasound is a tremendously valuable tool for medical imaging and therapy of the human body. When it comes to applications in the brain, however, the presence of the skull poses severe challenges to both imaging and therapy. The adult human skull induces significant distortions (also called phase aberrations) in the acoustic waves. These aberrations result in blurred brain images that are extremely challenging to interpret. The skull also distorts and shifts the acoustic focus, complicating therapies of the brain (such as treating essential tremor and brain tumors) that use high-intensity focused ultrasound.
Prior research has shown that phase aberrations can be most accurately corrected if the skull profile (i.e., thickness distribution) and speed of sound are known a priori. Various methods have been proposed to estimate these properties. The gold-standard method used in treatment planning derives the skull properties from computed-tomography (CT) images of the skull. The CT-based method, however, exposes patients to ionizing radiation, which can be harmful.
We propose an ultrasound-based method to extract the skull properties. This method is safer because ultrasound involves no ionizing radiation. We developed an artificial intelligence (AI) algorithm (specifically, a deep learning algorithm) that predicts the skull thickness and speed of sound from ultrasound echo signals reflected by the skull.
We tested the feasibility of our method through a simulation study (Figure 1). We performed acoustic simulations using realistic skull models built from CT scans of five ex vivo human skulls (see animation). The simulations generated a large number (7,891) of ultrasound signals from skull segments for which the thickness and speed of sound were known. We used 80% of the data to train our AI algorithm and 20% for testing. We developed and tested two versions of the algorithm: one took the original echo signal as the input, and the other used a transformed signal (i.e., the Fourier transform, which displays the signal's frequency spectrum).
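The data preparation described above can be sketched in a few lines of code. This is a hypothetical illustration only, with synthetic placeholder signals rather than the study's simulated data; the array sizes, random signals, and variable names are all assumptions made for the example.

```python
import numpy as np

# Synthetic stand-ins for the simulated pulse-echo signals (hypothetical sizes).
rng = np.random.default_rng(0)
n_signals, n_samples = 100, 256
signals = rng.standard_normal((n_signals, n_samples))

# Version 1 input: the original time-domain echo signal.
time_domain_inputs = signals

# Version 2 input: the frequency spectrum, obtained via the Fourier transform
# (magnitude of the real FFT of each signal).
freq_domain_inputs = np.abs(np.fft.rfft(signals, axis=1))

# 80% of the data for training, 20% for testing, as described in the study.
n_train = int(0.8 * n_signals)
train_x, test_x = freq_domain_inputs[:n_train], freq_domain_inputs[n_train:]

print(train_x.shape, test_x.shape)  # → (80, 129) (20, 129)
```

In practice the split would be randomized and the spectra normalized before being fed to the deep learning model; those steps are omitted here for brevity.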
Both versions of our AI algorithm achieved accurate results, with the version using transformed signals being more accurate. Using the original signal as the input, we obtained a mean absolute error of 0.3 mm for skull thickness prediction and 31 m/s for sound speed prediction. When transformed signals were used, the error in thickness prediction was reduced to 0.2 mm (3% of the average skull thickness of 6.3 mm), and the error in sound speed prediction was reduced to 25 m/s (1% of the average sound speed of 2340 m/s). With transformed signals, the correlation between predicted values and the ground truth was 0.98 for thickness and 0.81 for speed of sound (Figure 2), where a correlation value of 1 represents perfect correlation.
Collectively, our preliminary results demonstrate that the developed AI algorithm can accurately estimate skull thickness and speed of sound, providing a potentially powerful tool to correct skull phase aberration for transcranial ultrasound brain imaging and therapy.
[Animation: 3-dimensional density map of one of the skulls used in the study]
Figure 1. Schematic diagram of the simulation study
Figure 2. a) Scatter plot of extracted speed of sound versus ground truth; b) scatter plot of extracted thickness versus ground truth.