2aBAa5 – Sound Waves Help Assess Bone Condition

Max Denis – denis.max@mayo.edu
507-266-7449

Leighton Wan – wan.leighton@mayo.edu
Matthew Cheong – cheong.matthew@mayo.edu
Mostafa Fatemi – fatemi.mostafa@mayo.edu
Azra Alizad – alizad.azra@mayo.edu
507-254-5970

Mayo Clinic College of Medicine
200 1st St SW
Rochester, MN 55905

Popular version of paper 2aBAa5, “Bone demineralization assessment using acoustic radiation force”
Presented Tuesday morning, May 24, 2016, 9:00 AM in Snowbird/Brighton room
171st ASA Meeting, Salt Lake City, Utah

The assessment of human skeletal health is of great importance across the lifespan, from newborn infants to the elderly. Annually, approximately fifty percent of the 550,000 premature infants born in the United States suffer from bone metabolism disorders such as osteopenia, which affect the bone development process into childhood. As we age through adulthood, bone mass declines due to imbalanced activity in the bone remodeling process, leading to bone diseases such as osteoporosis and putting a person at risk for fractures of the neck, hip, and forearm.

Current bone assessment tools include dual-energy X-ray absorptiometry (DEXA) and quantitative ultrasound (QUS). DEXA is the leading clinical bone quality assessment tool, detecting small changes in bone mineral content and density. However, DEXA uses ionizing radiation for imaging, exposing patients to low doses of radiation. This can be problematic when frequent clinical visits are needed to monitor the efficacy of prescribed medications and therapies.

QUS has been sought as a nonionizing, noninvasive alternative to DEXA. QUS measures ultrasonic waves traveling between a transmitting and a receiving transducer aligned in parallel along the bone surface. Speed of sound (SOS) measurements of the received ultrasonic signal are used to characterize the bone material properties. However, the SOS measurement is sensitive to the amount of soft tissue between the skin surface and the bone. We therefore propose using a high-intensity ultrasonic wave, known as a “push beam,” to exert a force on the bone surface and set it vibrating, minimizing the effects of the soft tissue. The sound waves radiated by these vibrations are captured and used to analyze the bone’s mechanical properties.

This work demonstrates the feasibility of evaluating bone mechanical properties from the sound waves produced by bone vibrations. Under a protocol approved by the Mayo Clinic Institutional Review Board (IRB), human volunteers were recruited to undergo our noninvasive bone assessment technique. Our cohort consisted of patients with clinically confirmed osteopenia or osteoporosis, as well as normal volunteers without a history of bone fractures. An ultrasound probe and a hydrophone were placed along each volunteer’s tibia (Figure 1a). B-mode ultrasound imaging was used to guide placement of the push beam focal point onto the bone surface beneath the skin (Figure 1b). The SOS was then obtained from the measurements.


Figure 1. (a) Probe and hydrophone alignment along the tibia bone. (b) Diagram of an image-guided push beam focal point excitation on the bone surface.

In total, 14 volunteers have been recruited in our ongoing study. A boxplot comparison of SOS between normal and bone-diseased (osteopenic and osteoporotic) volunteers (Figure 2) shows that sound typically travels faster in healthy bone than in diseased bone, with median SOS values (red line) of 3733 m/s and 2566 m/s, respectively. Hence, our technique may be useful as a noninvasive method for monitoring the skeletal health of premature infants and the aging population.


Figure 2. Speed of sound comparison between normal and bone-diseased volunteers.

This ongoing project is being conducted under a protocol approved by the Mayo Clinic Institutional Review Board.

3pBA5 – Using Acoustic Levitation to Understand, Diagnose, and Treat Cancer and Other Diseases

Brian D. Patchett – brian.d.patchett@gmail.com
Natalie C. Sullivan – nhillsullivan@gmail.com
Timothy E. Doyle – Timothy.Doyle@uvu.edu

Department of Physics
Utah Valley University
800 West University Parkway, MS 179
Orem, Utah 84058

Popular version of paper 3pBA5, “Acoustic Levitation Device for Probing Biological Cells With High-Frequency Ultrasound”
Presented Wednesday afternoon, November 4, 2015
170th ASA Meeting, Jacksonville

Imagine a new medical advancement that would allow scientists to measure the physical characteristics of diseased cells involved in cancer, Alzheimer’s disease, and autoimmune diseases. Using high-frequency ultrasonic waves, such an advancement would allow scientists to measure the normal healthy range of density and stiffness for virtually any cell type, providing new capabilities for analyzing healthy cell development as well as insight into the changes in cell characteristics that occur as diseases develop.

Prior methods of probing cells with ultrasound have relied upon growing the cells on the bottom of a Petri dish, which not only distorts the cells’ shape and structure but also interferes with the ultrasonic signals. A new method was therefore needed to probe the cells without disturbing their natural form, and to “clean up” the signals received by the ultrasound device. Research presented at the 2015 ASA meeting in Jacksonville, Florida, shows that acoustic levitation is effective in providing the ideal conditions for probing the cells.

Acoustic levitation is a phenomenon whereby the pressure differences of stationary (standing) sound waves can suspend small objects in gases or fluids such as air or water. We are currently exploring a new frontier in the acoustic levitation of cellular structures in a fluid medium, perfecting a method of manipulating the shape and frequency of sound waves inside special containers. By manipulating these sound waves in just the right fashion, it is possible to isolate a layer of cells in a fluid such as water, which can then be probed with an ultrasound device. The cells are then in a more natural form and environment, and interference from the floor of the Petri dish is no longer a hindrance.
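The spacing of the trapped layers follows from basic standing-wave physics: small particles collect at pressure nodes, which sit half a wavelength apart. The sketch below illustrates that relationship with assumed, illustrative numbers (a 1 MHz drive and a nominal speed of sound in water); it is not a description of the actual device.

```python
# Minimal sketch of the standing-wave physics behind acoustic levitation:
# pressure nodes, where small particles collect, are spaced lambda/2 apart.
# The frequency and medium sound speed below are illustrative assumptions.

def node_spacing_m(speed_of_sound_m_s: float, frequency_hz: float) -> float:
    """Spacing between adjacent pressure nodes: lambda / 2 = c / (2 f)."""
    return speed_of_sound_m_s / (2.0 * frequency_hz)

# Example: 1 MHz ultrasound in water (c ~ 1480 m/s) would trap layers
# of cells roughly 0.74 mm apart.
d = node_spacing_m(1480.0, 1.0e6)
print(f"node spacing: {d * 1e3:.2f} mm")
```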

This method has proven effective in the laboratory with neutrally buoyant beads that are roughly the same size and shape as human blood cells, and a study is currently underway to test the effectiveness of the method with biological samples. If effective, this will give researchers new experimental methods for studying cellular processes, leading to a better understanding of how certain diseases develop in the human body.

2pSCb11 – Effect of Menstrual Cycle Hormone Variations on Dichotic Listening Results

Richard Morris – Richard.morris@cci.fsu.edu
Alissa Smith

Florida State University
Tallahassee, Florida

Popular version of poster presentation 2pSCb11, “Effect of menstrual phase on dichotic listening”
Presented Tuesday afternoon, November 3, 2015, 3:30 PM, Grand Ballroom 8

How speech is processed by the brain has long been of interest to researchers and clinicians. One method to evaluate how the two sides of the brain work when hearing speech is called a dichotic listening task. In a dichotic listening task two words are presented simultaneously to a participant’s left and right ears via headphones. One word is presented to the left ear and a different one to the right ear. These words are spoken at the same pitch and loudness levels. The listener then indicates what word was heard. If the listener regularly reports hearing the words presented to one ear, then there is an ear advantage. Since most language processing occurs in the left hemisphere of the brain, most listeners attend more closely to the right ear. The regular selection of the word presented to the right ear is termed a right ear advantage (REA).
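A common way to quantify the ear advantage described above (though not necessarily the exact scoring used in this study) is a laterality index, LI = (R − L) / (R + L), where R and L are the counts of correctly reported right-ear and left-ear words; a positive LI indicates a right ear advantage. A minimal sketch:

```python
# Hypothetical scoring sketch for a dichotic listening task, assuming the
# standard laterality index LI = (R - L) / (R + L). The counts below are
# made up for illustration.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """Positive values indicate a right ear advantage (REA)."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no responses to score")
    return (right_correct - left_correct) / total

# Example: a listener correctly reports 42 right-ear and 28 left-ear words.
li = laterality_index(42, 28)
print(f"LI = {li:+.2f}")  # positive, i.e., an REA
```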

Previous researchers reported different responses from males and females to dichotic presentation of words. Those investigators found that males more consistently heard the word presented to the right ear and demonstrated a stronger REA, while the female listeners in those studies showed more variability in which ear’s word was heard. Further research seemed to indicate that women exhibit different lateralization of speech processing at different phases of their menstrual cycle. In addition, data from recent studies indicate that the degree to which women can focus on the input to one ear or the other varies with their menstrual cycle.

However, the previous studies used small numbers of participants. The purpose of the present study was to complete a dichotic listening study with a larger sample of female participants. In addition, the previous studies focused on women who did not take oral contraceptives, since oral contraceptive users were assumed to have smaller shifts in the lateralization of speech processing. Although this assumption is reasonable, it needs to be tested. For this study, it was hypothesized, based on the previous research reports, that the women would exhibit a greater REA during the days that they menstruate than during other days of their menstrual cycle. In addition, it was hypothesized that women taking oral contraceptives would exhibit smaller fluctuations in the lateralization of their speech processing.

Participants in the study were 64 females, 19-25 years of age; 41 were taking oral contraceptives (OC) and 23 were not. The participants listened to the sound files during nine sessions that occurred once per week. All of the women were in good general health and had no speech, language, or hearing deficits.

The dichotic listening task was executed using the Alvin software package for speech perception research. The sound file consisted of consonant-vowel syllables formed from the six plosive consonants /b/, /d/, /g/, /p/, /t/, and /k/ paired with the vowel “ah”. The listeners heard the syllables over stereo headphones, and each listener set the loudness of the syllables to a comfortable level.

At the beginning of each listening session, each participant wrote the date of the start of her most recent menstrual period on a participant sheet identified by her participant number. She then heard the recorded syllables and indicated the consonant heard by striking that key on the computer keyboard. Each listening session consisted of three presentations of the syllables, each with a different randomization. In the first presentation, the stimuli were presented in a non-forced condition, in which the listener indicated the plosive she heard most clearly. After the first presentation, the files were presented in forced-left and forced-right conditions, in which the participant was directed to focus on the signal in the left or the right ear. The order of left-ear and right-ear focus was counterbalanced over the sessions.
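The session structure described above can be sketched in a few lines: fresh randomizations of the six CV syllables for each presentation, and a forced-ear order that alternates across sessions. The function names, the odd/even counterbalancing rule, and the seeding are illustrative assumptions, not details of the Alvin setup.

```python
# Illustrative sketch of the stimulus scheduling, not the authors' actual
# Alvin configuration: three independently randomized presentations of the
# six plosive + "ah" syllables, with forced-ear order counterbalanced
# across sessions (the odd/even rule here is an assumption).
import random

PLOSIVES = ["b", "d", "g", "p", "t", "k"]
SYLLABLES = [c + "ah" for c in PLOSIVES]

def session_presentations(rng: random.Random) -> list:
    """Three presentations, each a fresh randomization of all syllables."""
    return [rng.sample(SYLLABLES, len(SYLLABLES)) for _ in range(3)]

def forced_order(session_number: int) -> list:
    """Counterbalance the forced-ear conditions across sessions."""
    return ["left", "right"] if session_number % 2 == 1 else ["right", "left"]

rng = random.Random(0)  # seeded only so the sketch is reproducible
print(session_presentations(rng))
print(forced_order(1), forced_order(2))
```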

The statistical analyses of the listeners’ responses revealed no significant differences between the women using oral contraceptives and those who were not. In addition, correlations between the day of the women’s menstrual cycle and their responses were consistently low. However, some patterns did emerge across the experimental sessions, as opposed to the days of the menstrual cycle: participants in both groups exhibited a higher REA and a lower percentage of errors in the final sessions compared with earlier sessions.

The results from the current subjects differ from those previously reported. Possibly the larger sample size of the current study, the additional month of data collection, or the data recording method affected the results. The larger sample size may have better represented how most women respond to dichotic listening tasks. The additional month of data collection may have allowed the women to learn the task and then respond more consistently; the short data collection periods of earlier studies may have confounded learning a novel task with a hormonally dependent response. Finally, previous studies had the experimenter record the subjects’ responses, a method that may have biased the data collection. Further studies with large data sets and multiple months of data collection are needed to determine any effects of sex and oral contraceptive use on the REA.

1pAB6 – Long-lasting suppression of spontaneous firing in inferior colliculus neurons: implication to the residual inhibition of tinnitus

A.V. Galazyuk – agalaz@neomed.edu
Northeast Ohio Medical University

Popular version of poster 1pAB6
Presented Monday afternoon, November 2, 2015, 3:25 PM – 3:45 PM, City Terrace 9
170th ASA Meeting, Jacksonville

More than one hundred years ago, the US clinician James Spalding first described an interesting phenomenon in patients suffering from tinnitus, a perceived phantom ringing [1]. Many of his patients reported that a loud, long-lasting sound produced by a violin or piano made their tinnitus disappear for about a minute after the sound was presented. Nearly 70 years later, the first scientific study was conducted to investigate how this phenomenon, termed residual inhibition, provides tinnitus relief [2]. Further research has sought to understand the basic properties of this “inhibition of ringing” and to identify which sounds are most effective at producing residual inhibition [3].

The research indicated that residual inhibition is indeed an internal mechanism of temporary tinnitus suppression. At present, however, little is known about the neural mechanisms underlying it. Increased knowledge about residual inhibition may not only shed light on the cause of tinnitus, but may also open an opportunity to develop an effective tinnitus treatment.

For the last four years we have studied a fascinating phenomenon of sound processing in neurons of the auditory system that may explain what causes residual inhibition in tinnitus patients. After presenting a sound to a normal-hearing animal, we observed that the firing activity of auditory neurons is suppressed for some time afterward [4, 5]. There are several striking similarities between this suppression in the normal auditory system and the residual inhibition observed in tinnitus patients:

  1. Relatively loud sounds trigger both the neuronal firing suppression and residual inhibition.
  2. Both the suppression and residual inhibition last for the same amount of time after a sound, and increasing the duration of the sound makes both phenomena last longer.
  3. Simple tones produce more robust suppression and residual inhibition compared to complex sounds or noises.
  4. Multiple attempts to induce suppression or residual inhibition within a short timeframe make both much weaker.

These similarities make us believe that the normal sound-induced suppression of spontaneous firing is an underlying mechanism of residual inhibition.

The most unexpected outcome from our research is that the phenomenon of residual inhibition, which focuses on tinnitus patients, appears to be a natural feature of sound processing, because suppression was observed in both the normal hearing mice and in mice with tinnitus. If so, why is it that people with tinnitus experience residual inhibition whereas those without tinnitus do not?

It is well known that hyperactivity in auditory regions of the brain is linked to tinnitus: in tinnitus, auditory neurons have elevated spontaneous firing rates [6], and the brain interprets this hyperactivity as phantom sound. Therefore, suppression of this increased activity by a loud sound should eliminate or suppress the tinnitus. Normal-hearing people also experience this suppression after loud sounds; however, the spontaneous firing of their auditory neurons remains low enough that they never perceive the phantom ringing that tinnitus sufferers do. Thus, even though loud sounds suppress neuronal firing in normal-hearing people as well, the suppression is not perceived.

Most importantly, our research has helped us identify a group of drugs that can alter this suppression response [5], as well as the spontaneous firing of the auditory neurons responsible for tinnitus. These drugs will be further investigated in our future research to develop effective tinnitus treatments.

This research was supported by the research grant RO1 DC011330 from the National Institute on Deafness and Other Communication Disorders of the U.S. Public Health Service.

[1] Spalding J.A. (1903). Tinnitus, with a plea for its more accurate musical notation. Archives of Otology, 32(4), 263-272.

[2] Feldmann H. (1971). Homolateral and contralateral masking of tinnitus by noise-bands and by pure tones. International Journal of Audiology, 10(3), 138-144.

[3] Roberts L.E. (2007). Residual inhibition. Progress in Brain Research, Tinnitus: Pathophysiology and Treatment, Elsevier, 166, 487-495.

[4] Voytenko S.V., Galazyuk A.V. (2010). Suppression of spontaneous firing in inferior colliculus neurons during sound processing. Neuroscience, 165, 1490-1500.

[5] Voytenko S.V., Galazyuk A.V. (2011). mGluRs modulate neuronal firing in the auditory midbrain. Neuroscience Letters, 492, 145-149.

[6] Eggermont J.J., Roberts L.E. (2015). Tinnitus: animal models and findings in humans. Cell and Tissue Research, 361, 311-336.

3aBA5 – Fabricating Blood Vessels with Ultrasound

Diane Dalecki, Ph.D.
Eric S. Comeau, M.S.
Denise C. Hocking, Ph.D.
Rochester Center for Biomedical Ultrasound
University of Rochester
Rochester, NY 14627

Popular version of paper 3aBA5, “Applications of acoustic radiation force for microvascular tissue engineering”
Presented Wednesday morning, May 20, 2015, 9:25 AM, in room Kings 2
169th ASA Meeting, Pittsburgh

Tissue engineering is the field of science dedicated to fabricating artificial tissues and organs that can be made available for patients in need of organ transplantation or tissue reconstructive surgery. Tissue engineers have successfully fabricated relatively thin tissues, such as skin substitutes, that can receive nutrients and oxygen by simple diffusion. However, recreating larger and/or more complex tissues and organs will require developing methods to fabricate functional microvascular networks to bring nutrients to all areas of the tissue for survival.

In the laboratories of Diane Dalecki, Ph.D. and Denise C. Hocking, Ph.D., research is underway to develop new ultrasound technologies to control and enhance the fabrication of artificial tissues1. Ultrasound fields are sound fields at frequencies higher than humans can hear (i.e., > 20 kHz). Dalecki and Hocking have developed a technology that uses a particular type of ultrasound field, called an ultrasound standing wave field, as a tool to non-invasively engineer complex spatial patterns of cells2 and fabricate microvessel networks3,4 within artificial tissue constructs.

When a solution of collagen and cells is exposed to an ultrasound standing wave field, the forces associated with the field lead to the alignment of the cells into planar bands (Figure 1). The distance between the bands of cells is controlled by the ultrasound frequency, and the density of cells within each band is controlled by the intensity of the sound field. The collagen polymerizes into a solid gel during the ultrasound exposure, thereby maintaining the spatial organization of the cells after the ultrasound is turned off. More complex patterning can be achieved by use of more than one ultrasound transducer.
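The frequency control described above follows from the half-wavelength geometry of a standing wave: cell bands form at intervals d = c / (2f). As a sketch, one can invert this to ask what drive frequency would yield the 750 µm band spacing reported in Figure 1; the sound speed used here (~1540 m/s, a typical soft-tissue/water-like value for the collagen solution) is an assumption, not a reported parameter.

```python
# Sketch of the band-spacing relationship for ultrasound standing wave
# patterning: bands form half a wavelength apart, d = c / (2 f).
# The assumed sound speed (1540 m/s) is illustrative, not from the paper.

def band_spacing_m(c_m_s: float, f_hz: float) -> float:
    """Spacing between planar cell bands: half the acoustic wavelength."""
    return c_m_s / (2.0 * f_hz)

def frequency_for_spacing_hz(c_m_s: float, d_m: float) -> float:
    """Drive frequency needed to produce a desired band spacing."""
    return c_m_s / (2.0 * d_m)

# What frequency would give the 750 micrometer spacing shown in Figure 1?
f = frequency_for_spacing_hz(1540.0, 750e-6)
print(f"~{f / 1e6:.2f} MHz for 750 um band spacing")
```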


Figure 1. Acoustic patterning of microparticles (dark bands) using an ultrasound standing wave field. Distance between planar bands is 750 µm. Scale bar = 100 μm.

An exciting application of this technology is the fabrication of microvascular networks within artificial tissue constructs. Specifically, acoustic patterning of endothelial cells into planar bands within collagen hydrogels leads to the rapid development of microvessel networks throughout the entire volume of the hydrogel. Interestingly, the structure of the resultant microvessel network can be controlled by the choice of ultrasound exposure parameters. As shown in Figure 2, ultrasound standing wave fields can be employed to fabricate microvessel networks with different physiologically relevant morphologies, including capillary-like networks (left panel), aligned non-branching vessels (center panel), or aligned vessels with hierarchically branching microvessels (right panel). Ultrasound fields provide an ideal technology for microvascular engineering: the approach is rapid and noninvasive, can be broadly applied to many types of cells and hydrogels, and can be adapted to commercial fabrication processes.


Figure 2. Ultrasound-fabricated microvessel networks within collagen hydrogels. The ultrasound pressure amplitude used for initial patterning determines the final microvessel morphology, which can resemble tortuous capillary-like networks (left panel), aligned non-branching vessels (center panel), or aligned vessels with hierarchically branching microvessels (right panel). Scale bars = 100 μm.

To learn more about this research, please view this informative video (https://www.youtube.com/watch?v=ZL-cx21SGn4).

References:

[1] Dalecki D, Hocking DC. Ultrasound technologies for biomaterials fabrication and imaging. Annals of Biomedical Engineering 43:747-761; 2015.

[2] Garvin KA, Hocking DC, Dalecki D. Controlling the spatial organization of cells and extracellular matrix proteins in engineered tissues using ultrasound standing wave fields. Ultrasound Med. Biol. 36:1919-1932; 2010.

[3] Garvin KA, Dalecki D, Hocking DC. Vascularization of three-dimensional collagen hydrogels using ultrasound standing wave fields. Ultrasound Med. Biol. 37:1853-1864; 2011.

[4] Garvin KA, Dalecki D, Youssefhussien M, Helguera M, Hocking DC. Spatial patterning of endothelial cells and vascular network formation using ultrasound standing wave fields. J. Acoust. Soc. Am. 134:1483-1490; 2013.