2pBA2 – Medical ultrasound imaging for the detection of netrin-1 in breast cancer – Jennifer Wischhusen

Medical ultrasound imaging for the detection of netrin-1 in breast cancer

Jennifer Wischhusen- jennifer.wischhusen@inserm.fr
Rodolfo Molina
Frederic Padilla
LabTAU U1032, INSERM
French National Institute of Health and Medical Research
University of Lyon
Lyon, France

Jean-Guy Delcros
Benjamin Gibert
Patrick Mehlen
Cancer Research Center Lyon
French National Institute of Health and Medical Research
University of Lyon
Lyon, France

Katheryne E. Wilson
Juergen K. Willmann
Radiology, MIPS, School of Medicine
Stanford University
Stanford, CA, United States

Popular version of paper 2pBA2, “Ultrasound molecular imaging of the secreted tumor marker Netrin-1 in multiple breast cancer models”

Presented Monday, December 04, 2017, 1:15-1:30 PM, Balcony N

174th ASA meeting, New Orleans

Cancer is a disease defined by uncontrolled growth of cells in our body. The aberrant growth is caused by genetic errors that lead either to the gain of growth signals or to the loss of growth inhibitors. Both scenarios result in normal cells growing and replicating abnormally, leading to tumors. Today, molecularly targeted therapies aim at re-establishing the equilibrium of cell growth regulators in order to stop tumor growth. Unfortunately, the abnormal signals causing tumors can vary between patients. In fact, even different tumors in the same patient can have different underlying growth signals. This phenomenon is called heterogeneity. It is crucial to understand which abnormal signaling molecules are causing the patient’s tumor prior to treatment. With this information, a physician can make a more educated decision on treatment choices for each patient and their particular tumor in order to increase the chances for a positive response to therapy. This new approach is known as personalized or precision medicine.

Netrin-1 is a tumor-stimulating molecule which was discovered to contribute to tumor growth in different types of cancer, including 60% of metastatic breast cancers (breast cancer is the most frequent cancer in women worldwide). A therapy was developed that aims to inhibit netrin-1’s activity and thereby reduce tumor growth. Only tumors presenting netrin-1 are expected to benefit from netrin-1-targeted therapy, while tumors without netrin-1 require alternative therapies (Figure 1). To identify breast cancer patients presenting netrin-1, we propose the use of medical ultrasound imaging. To do so, we used microbubbles, which serve as a contrast medium in ultrasound imaging. These microbubbles were modified to recognize the netrin-1 molecule when injected into the blood circulation (Figure 2).

In an imaging study, the signal of netrin-1-targeted microbubbles and control microbubbles was collected from breast tumors that were known to either present or lack netrin-1. Our results showed an increased signal with netrin-1-targeted microbubbles in netrin-1-presenting tumors, while a much lower signal was observed with control microbubbles in the same tumors (Figure 3). Tumors that lacked netrin-1 showed no accumulation of netrin-1-targeted microbubbles.

In conclusion, our imaging study showed that these netrin-1-targeted microbubbles enable non-invasive, near real-time visualization of netrin-1 in breast tumors using medical ultrasound imaging. We are convinced that medical ultrasound imaging can allow the detection of tumor-promoting molecules, such as netrin-1, and enable personalized medicine: diagnosing the molecular profile of a breast cancer patient’s tumor and adapting the therapy to the specific needs of that patient.

Figure 1: Differences in molecular composition between different tumors. The tumor of the right patient presents netrin-1 and makes the patient eligible for netrin-1-targeted therapy. The patient on the left lacks netrin-1 and requires an alternative therapy.

Figure 2: With medical ultrasound, microbubble contrast medium can be detected. In tumors lacking netrin-1, no microbubbles accumulate and only weak background signal is detected. In tumors presenting netrin-1, netrin-1-targeting microbubbles accumulate and generate a strong signal in ultrasound imaging.

Figure 3: Medical ultrasound imaging with netrin-1-targeted microbubbles. Netrin-1-targeted microbubbles accumulate in breast tumors that present netrin-1 and were shown to cause a higher imaging signal than control microbubbles. The difference in imaging signal was confirmed by statistical analysis.

3aPPa3 – When cognitive demand increases, does the right ear have an advantage? – Danielle Sacchinelli

When cognitive demand increases, does the right ear have an advantage?

Danielle Sacchinelli – dms0043@auburn.edu
Aurora J. Weaver – ajw0055@auburn.edu
Martha W. Wilson – paxtomw@auburn.edu
Anne Rankin Cannon – arc0073@auburn.edu
Auburn University
1199 Haley Center
Auburn, AL 36849

 

Popular version of paper 3aPPa3, “Does the right ear advantage persist in mature auditory systems when cognitive demand for processing increases?”

Presented Wednesday morning, December 6, 2017, 8:00 AM-12:00 PM, Studios Foyer

174th ASA Meeting, New Orleans

 

A dichotic listening task presents two different sound sequences simultaneously, one to each ear. Performance on these tasks measures selective auditory attention for each ear, via either binaural separation or binaural integration (see Figure 1 for examples). Based on the anatomical model of auditory processing, the right ear has a slight advantage over the left ear on dichotic listening tasks. This is due to left-hemisphere dominance for language, since the left hemisphere receives direct auditory input from the right ear (i.e., the strong contralateral auditory pathway; Kimura, 1967).

Clinical tests of auditory function quantify this right ear advantage for dichotic listening tasks to assess maturity of the auditory system, in addition to other clinical implications. Accurate performance on dichotic tests relies on both sensory organization and memory. As a child matures, the right ear advantage decreases until it is no longer clinically significant. However, clinically available dichotic-digits tests use only 1, 2, (e.g., Dichotic digits test; Musiek, 1983; Musiek, et al., 1991) or 3 (i.e., Dichotic DigitsMAPA; Schow, Seikel, Brockett, & Whitaker, 2007) digit sets in each ear for testing. See Figure 1 for maximum task demands of clinical tests for binaural integration, instructions “B”, using free recall protocol (Guenette, 2006).

Daily listening often requires an adult to process competing information that extends beyond six items of sensory input. This study investigated the impact of increasing cognitive demands on ear performance asymmetries (i.e., right versus left) in mature auditory systems. Forty-two participants (aged 19-28 years) performed dichotic binaural separation tasks (adapted from the Dspan Task; Nagaraj, 2017) for 2-, 3-, 4-, 5-, 6-, 7-, 8-, and 9-digit lists. Listeners recalled the sequence presented to one ear while ignoring the sequence presented to the opposite ear (i.e., binaural separation; directed-ear protocol). See Figure 1 for an example of the experimental binaural separation tasks (i.e., digit length = 3, used for condition 2) and instructions “A” for directed-ear recall.

Results in Figure 2 show a significant effect for directed ear performance as task demands increase (i.e., digit list length). The overall evaluation of the list length (Figure 2) does not reveal the impact of working memory capacity limits (i.e., maximum items that can be recalled for an ongoing task) for each participant. Therefore, a digit span was measured to estimate each participant’s simple working memory capacity. Planned comparisons for ear performance relative to a participant’s digit span (i.e., below = n-2, at span = n, and above span = n+2 digit lists, where n = digit span) evaluated the role of cognitive demand on ear asymmetries.

Planned t-test comparisons revealed a significant performance asymmetry above span (i.e., n+2). No significant differences were identified for performance relative to, or below, an individual’s simple memory capacity. This indicates the persistence of the right ear advantage in mature auditory systems when listening demands exceeded an individual’s auditory memory capacity.

Overall, the study found that the right ear continues to show better performance on dichotic listening tasks, even in mature adults. This persistent right ear advantage occurred when the number of digits in the sequence exceeded the participants’ digit span capacity. We believe such demands are a realistic aspect of everyday listening, as individuals attempt to retain sensory information in demanding listening environments. These results may help us modify existing clinical tests, or develop a new task, to more precisely reveal performance asymmetries based on an individual’s auditory working memory capacity.

Figure 1. Displays an example of dichotic digit stimuli presentation, with both “A” binaural separation tasks (i.e., directed ear) and “B” binaural integration (i.e., free recall) instructions.

Figure 2. Displays ear performance on the binaural separation task across all participants. Note: the orange box highlights the maximum demands of commercially available dichotic-digits tests; participant performance reflects a lack of asymmetry under these cognitive demands.

Figure 3. Displays participant ear performance on the binaural separation task relative to digit span.

 

  1. Kimura, D. (1967). Functional asymmetry of the brain in dichotic listening. Cortex, 3(2), 163- 176.
  2. Musiek, F., (1983). Assessment of central auditory dysfunction: The dichotic digit test revisited. Ear and Hearing, 4(2), 79-83.
  3. Musiek, F., Gollegly, K., Kibbe, K., & Verkest-Lenz, S. (1991). Proposed screening test for central auditory disorders: Follow-up on the dichotic digits test. The American Journal of Otology, 12(2), 109-113.
  4. Schow, R., Seikel, A., Brockett, J., & Whitaker, M. (2007). Multiple Auditory Processing Assessment (MAPA); Test Manual 1.0 version. AUDITEC, St. Louis, MO. PDF available from http://www2.isu.edu/csed/audiology/mapa/MAPA_Manual.pdf
  5. Guenette, L.A. (2006). How to administer the Dichotic Digit Test. The Hearing Journal, 59 (2), 50.
  6. Nagaraj, N. K. (2017). Working Memory and Speech Comprehension in Older Adults with Hearing Impairment. Journal of Speech Language and Hearing Research, 60(10), 2949-2964. doi: 10.1044/2017_JSLHR-H-17-0022.

2pBA3 – Semi-Automated Smart Detection of Prostate Cancer using Machine Learning and a Novel Near-Microscopic Imaging Platform – Daniel Rohrbach

Semi-Automated Smart Detection of Prostate Cancer using
Machine Learning and a Novel Near-Microscopic Imaging Platform

Daniel Rohrbach – drohrbach@RiversideResearch.org, Jonathan Mamou, and Ernest Feleppa
Lizzi Center for Biomedical Engineering, Riverside Research
New York, NY, USA, 10038

Brian Wodlinger and Jerrold Wen
Exact Imaging, Markham
Ontario, Canada, L3R 2N2

 

Popular version of paper 2pBA3, “Quantitative-ultrasound-based prostate-cancer imaging by means of a novel micro-ultrasound scanner”

Presented Tuesday, December 05, 2017, 1:45-2:00 PM, Balcony M

174th ASA meeting, New Orleans

 

Prostate cancer is the second-leading cause of male cancer-related death in the U.S. with approximately 1 in 7 men being diagnosed with prostate cancer during their lifetime[i].  Detection and diagnosis of this significant disease presents a major clinical challenge because the current standard-of-care imaging method, conventional transrectal ultrasound, cannot reliably distinguish cancerous from non-cancerous prostate tissue.  Therefore, prostate biopsies for definitively diagnosing cancer are currently delivered in a systematic but “blind” pattern.  Other imaging methods, such as MRI, have been investigated for guiding biopsies, but MRI involves complicated procedures, is costly, is poorly tolerated by most patients, and  demonstrates significant variability among clinical sites and practitioners.  Our study investigated sophisticated tissue-typing algorithms for possible use in a novel, fine-resolution, ultrasound instrument called the ExactVu™ micro-ultrasound instrument by Exact Imaging, Markham, Ontario.  The ExactVu recently has been approved for commercial sale in North America and Europe.  The term micro-ultrasound refers to the near-microscopic resolution of the device.  This new, fine-resolution instrument allows clinicians to visualize previously unseen features of the prostate in real time and enables them to differentiate suspicious regions of the prostate so that they can “target” biopsies to those suspicious regions.  To enable more-objective interpretation of tissue features made visible by the ExactVu, a cancer-risk-identification protocol – called PRI-MUS™ (prostate risk Identification using micro-ultrasound)[ii] – has been developed and validated to distinguish benign prostate tissue from tissue that has a high probability of being cancerous based on its appearance in a micro-ultrasound image.

The paper, “High-frequency quantitative ultrasound for prostate-cancer imaging using a novel micro-ultrasound scanner,” which is being presented at the 174th Meeting of the Acoustical Society of America, shows promising results from a collaborative research project undertaken by Riverside Research, a leading biomedical research institution in New York, NY, and Exact Imaging.  The paper describes an approach that successfully applies a combination of (1) sophisticated ultrasound signal-processing methods known as quantitative ultrasound and (2) machine learning and artificial intelligence to the analysis of fine-resolution data acquired with the novel micro-ultrasound imaging platform to automate detection of cancerous tissue in the prostate.  Results of the study were very encouraging and showed a promising ability of the methods to distinguish cancerous from non-cancerous prostate tissue.

A database of 12,000 fine-resolution, micro-ultrasound images and correlated biopsy histology has been developed.  The new algorithm for automated detection continues to evolve and is applied to this growing data set.

Future clinical application of the algorithms implemented in the ExactVu would involve scanning a patient with indications of prostate cancer (e.g., as a result of a transrectal palpation or a high level of prostate-specific antigen in the blood) to identify regions of the gland that are sufficiently suspicious for cancer to warrant a biopsy.  As the scan proceeds, the algorithm continuously analyzes the ultrasound signals and automatically indicates to the examining urologist any regions that have a significant risk of being cancerous.  The urologist evaluates the indicated region and makes a clinical judgement on whether the region in fact warrants a biopsy.

The results of this study show an encouraging ability of ultrasound-signal processing and the machine-learning algorithm together with the novel micro-ultrasound instrumentation to depict regions of the prostate that are cancerous with high reliability.  The study demonstrates a promising potential of the algorithms and micro-ultrasound to improve targeting of biopsies, to increase cancer-detection rates, to avoid unnecessary biopsies and associated risks, to support focal therapy more effectively, and consequently to achieve better patient outcomes.

 

Referenced abstract:
High-frequency quantitative ultrasound for prostate-cancer imaging using a novel micro-ultrasound scanner

[i] American Cancer Society: https://cancerstatisticscenter.cancer.org/?_ga=2.177940773.1025752599.1511161127-1043893878.1511161127#!/

[ii] Ghai S, et al: Assessing Cancer Risk on Novel 29 MHz Micro-Ultrasound Images of the Prostate: Creation of the Micro-Ultrasound Protocol for Prostate Risk Identification. J. Urol. 2016; 196: 562–569.

1pAO9 – The Acoustic Properties of Crude Oil – Scott Loranger

The Acoustic Properties of Crude Oil

Scott Loranger – sloranger@ccom.unh.edu
Department of Earth Science
University of New Hampshire
Durham, NH, United States

Christopher Bassett – chris.bassett@noaa.gov
Alaska Fisheries Science Center
National Marine Fisheries Service
Seattle, WA, United States

Justin P. Cole – jpq68@wildcats.unh.edu
Department of Chemistry
Colorado State University
Fort Collins, CO, United States

Thomas C. Weber – Weber@ccom.unh.edu
Department of Mechanical Engineering
University of New Hampshire
Durham, NH, United States

 

Popular version of paper 1pAO9, “The Acoustic Properties of Three Crude Oils at Oceanographically Relevant Temperatures and Pressures”

Presented Monday afternoon, December 04, 2017, 3:35-3:50 PM, Balcony M

174th ASA Meeting, New Orleans, LA

 

The difficulty of detecting and quantifying oil in the ocean has limited our understanding of the fate and environmental impact of oil from natural seeps and man-made spills. Oil on the surface can be detected by satellite (Figure 1) and studied with optical instrumentation; however, as researchers look deeper to study oil as it rises through the water column, the rapid attenuation of light in the ocean limits the usefulness of these systems. Active sonar – where an acoustic transmitter generates a pulse of sound and a receiver listens for the sound reflected from an object – takes advantage of the low attenuation of sound in the ocean to detect objects farther away than optical instruments can. However, oil is difficult to detect acoustically because oil and seawater have similar physical properties. The amount of sound reflected from an object depends on the object’s size, shape, and a physical property called the acoustic impedance – the product of the density and the sound speed of the material. When an object has an acoustic impedance similar to that of the medium surrounding it, the object reflects relatively little sound. The acoustic impedances of oil (which differ by type of oil) and seawater are often very similar. In fact, under certain conditions oil droplets could be acoustically invisible. To study oil acoustically, we need to better understand the physical properties that affect its acoustic impedance.
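As a rough numerical illustration of the point above, the sketch below computes characteristic acoustic impedances (Z = density × sound speed) and the resulting pressure reflection coefficient at a flat oil/seawater boundary at normal incidence. The material values are illustrative ballpark figures, not measurements from this study.

```python
# Illustrative sketch: acoustic impedance and the pressure reflection
# coefficient at a flat oil/seawater interface (normal incidence).
# Material values below are rough, illustrative numbers only.

def impedance(density, sound_speed):
    """Characteristic acoustic impedance Z = rho * c (in kg/m^2/s)."""
    return density * sound_speed

def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient R = (Z2 - Z1) / (Z2 + Z1)."""
    return (z2 - z1) / (z2 + z1)

z_water = impedance(1025.0, 1500.0)  # seawater: ~1025 kg/m^3, ~1500 m/s
z_oil = impedance(870.0, 1400.0)     # a light crude: ~870 kg/m^3, ~1400 m/s

r = reflection_coefficient(z_water, z_oil)
print(f"Z_water = {z_water:.3e}, Z_oil = {z_oil:.3e}, R = {r:.3f}")
```

With these illustrative numbers, only about 12% of the incident pressure amplitude is reflected, consistent with the weak scattering from oil described above; if the two impedances were exactly equal, R would be zero and the droplet would be acoustically invisible.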

Most measurements of the density and sound speed of oil come from oil-exploration research, which focuses on studying oil under reservoir conditions – the high temperatures and pressures associated with oil deep underground.  As oil cools to oceanographically relevant temperatures, it can transition from a liquid to a waxy semisolid. This transition may significantly change the acoustic properties of the oil in ways that would not be predicted by measurements made at reservoir conditions. To inform models of acoustic scattering from oil and produce quantitatively meaningful measurements, it is necessary to have well-understood properties at relevant temperatures and pressures. Density and sound speed can be measured directly, while the shape of an oil droplet can be predicted from the density and viscosity. Density and viscosity determine how quickly a droplet rises and how the drag force of the surrounding water modifies its shape. Droplets can range from spheres to more pancake-like shapes, such as one could produce by pushing down on an inflated balloon.

To better understand these important properties, we obtained samples of three different crude oils. Each sample was sent for “fingerprinting” to identify differences in the molecular composition of the oils. “Fingerprinting” is a technique used by oil exploration scientists and spill responders to identify different crude oils. Measurements of the sound speed, density, and viscosity were made from -10°C (14°F) to 30°C (86°F). A sound speed chamber was specifically designed to measure sound speed at the same temperature range but with the added effects of pressure (0 to 2500 psi – equivalent to approximately 1700 m depth, deeper than the Deepwater Horizon well).

Light, medium, and heavy crude oils were tested. Each is typically classified by its American Petroleum Institute (API) gravity, a common descriptor of oils that measures the density of an oil relative to water. The properties of the medium and heavy crude oils are shown in Figure 2 below. The sound speed curves differ in both amplitude and shape, while the viscosity differs only in amplitude, suggesting that the changes in the shape of the sound speed curve may not be related to the viscosity. The heavy oil is currently limited to measurements above 5°C because below that temperature it becomes very difficult to transmit sound through the oil. Part of this ongoing research is to develop new techniques to measure sound speed, and to use these techniques to extend our measurements of heavy oils to cold temperatures similar to those found in Arctic regions, where oil can be trapped in ice. By better understanding these physical properties of oil, the methods and models used to detect and quantify oil in the marine environment can be improved.
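The API gravity classification mentioned above can be sketched in code. The formula (API = 141.5/SG − 131.5, where SG is the specific gravity at 60°F) and the light/medium/heavy cutoffs below follow common petroleum-industry convention; they are background context, not values taken from this paper.

```python
# Sketch of the standard API gravity formula and the common
# light/medium/heavy classification cutoffs (industry convention).

def api_gravity(specific_gravity):
    """API gravity = 141.5 / SG - 131.5, with SG measured at 60 F."""
    return 141.5 / specific_gravity - 131.5

def classify(api):
    """Common cutoffs: light > 31.1, medium 22.3-31.1, heavy < 22.3."""
    if api > 31.1:
        return "light"
    if api >= 22.3:
        return "medium"
    return "heavy"

for sg in (0.85, 0.90, 0.95):
    api = api_gravity(sg)
    print(f"SG {sg:.2f} -> API {api:.1f} ({classify(api)})")
```

Note that denser oils have lower API gravity: an oil with the same density as water (SG = 1.0) has an API gravity of 10, and heavy crudes sit below the 22.3 cutoff.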

Figure 1: Satellite image of surface oil slicks from natural seeps.

Figure 2: Experimental measurements of the physical properties of a Medium and Heavy crude oil.

4pSC11 – The role of familiarity in audiovisual speech perception – Chao-Yang Lee

The role of familiarity in audiovisual speech perception

Chao-Yang Lee – leec1@ohio.edu
Margaret Harrison – mh806711@ohio.edu
Ohio University
Grover Center W225
Athens, OH 45701

Seth Wiener – sethw1@cmu.edu
Carnegie Mellon University
160 Baker Hall, 5000 Forbes Avenue
Pittsburgh, PA 15213

 

Popular version of paper 4pSC11, “The role of familiarity in audiovisual speech perception”

Presented Thursday afternoon, December 7, 2017, 1:00-4:00 PM, Studios Foyer

174th ASA Meeting, New Orleans

 

When we listen to someone talk, we hear not only the content of the spoken message, but also the speaker’s voice carrying the message. Although understanding content does not require identifying a specific speaker’s voice, familiarity with a speaker has been shown to facilitate speech perception (Nygaard & Pisoni, 1998) and spoken word recognition (Lee & Zhang, 2017).

Because we often communicate with a visible speaker, what we hear is also affected by what we see. This is famously demonstrated by the McGurk effect (McGurk & MacDonald, 1976). For example, an auditory “ba” paired with a visual “ga” usually elicits a perceived “da” that is not present in the auditory or the visual input.

Since familiarity with a speaker’s voice affects auditory perception, does familiarity with a speaker’s face similarly affect audiovisual perception? Walker, Bruce, and O’Malley (1995) found that familiarity with a speaker reduced the occurrence of the McGurk effect. This finding supports the “unity” assumption of intersensory integration (Welch & Warren, 1980), but challenges the proposal that processing facial speech is independent of processing facial identity (Bruce & Young, 1986; Green, Kuhl, Meltzoff, & Stevens, 1991).

In this study, we explored audiovisual speech perception by investigating how familiarity with a speaker affects the perception of English fricatives “s” and “sh”. These two sounds are useful because they contrast visibly in lip rounding. In particular, the lips are usually protruded for “sh” but not “s”, meaning listeners can potentially identify the contrast based on visual information.

Listeners were asked to watch/listen to stimuli that were audio-only, visual-only, audiovisual-congruent, or audiovisual-incongruent (e.g., audio “save” paired with visual “shave”). The listeners’ task was to identify whether the first sound of the stimuli was “s” or “sh”. We tested two groups of native English listeners – one familiar with the speaker who produced the stimuli and one unfamiliar with the speaker.

The results showed that listeners familiar with the speaker identified the fricatives faster in all conditions (Figure 1) and more accurately in the visual-only condition (Figure 2). That is, listeners familiar with the speaker were more efficient in identifying the fricatives overall, and were more accurate when visual input was the only source of information.
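Accuracy in Figure 2 is reported as d′, a standard sensitivity measure from signal detection theory. As a brief sketch (the hit and false-alarm rates here are invented for illustration, not taken from the study), d′ is typically computed as the difference between the z-transformed hit rate and false-alarm rate:

```python
# Sketch of the standard d' (d-prime) sensitivity computation.
# Example rates are illustrative, not data from this study.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate), z = inverse normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g., 90% of "s" stimuli identified as "s" (hits) and 20% of "sh"
# stimuli misidentified as "s" (false alarms)
print(round(d_prime(0.90, 0.20), 2))
```

Unlike raw percent correct, d′ separates a listener’s true sensitivity to the “s”/“sh” contrast from any bias toward one response.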

We also examined whether visual familiarity affects the occurrence of the McGurk effect. Listeners were asked to identify syllable-initial stops (“b”, “d”, “g”) from stimuli that were audiovisual-congruent or incongruent (e.g., audio “ba” paired with visual “ga”). A blended (McGurk) response was indicated by a “da” response to an auditory “ba” paired with a visual “ga”.

Contrary to the “s”-“sh” findings reported earlier, the results from our identification task showed no difference between the familiar and unfamiliar listeners in the proportion of McGurk responses. This finding did not replicate Walker, Bruce, and O’Malley (1995).

In sum, familiarity with a speaker facilitated the speed of identifying fricatives from audiovisual stimuli. Familiarity also improved the accuracy of fricative identification when visual input was the only source of information. Although we did not find an effect of familiarity on the McGurk responses, our findings from the fricative task suggest that processing audiovisual speech is affected by speaker identity.

Figure 1- Reaction time of fricative identification from stimuli that were audio-only, visual-only, audiovisual-congruent, or audiovisual-incongruent. Error bars indicate 95% confidence intervals.

 

Figure 2- Accuracy of fricative identification (d’) from stimuli that were audio-only, visual-only, audiovisual-congruent, or audiovisual-incongruent (e.g., audio “save” paired with visual “shave”). Error bars indicate 95% confidence intervals.

Figure 3- Proportion of McGurk response (“da” response to audio “ba” paired with visual “ga”).

 

Video 1- Example of an audiovisual-incongruent stimulus (audio “save” paired with visual “shave”).

 

Video 2- Example of an audiovisual-incongruent stimulus (audio “ba” paired with visual “ga”).

 

References:

 

Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305-327.

Green, K. P., Kuhl, P. K., Meltzoff, A. N., & Stevens, E. B. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics, 50, 524-536.

Lee, C.-Y., & Zhang, Y. (in press). Processing lexical and speaker information in repetition and semantic/associative priming. Journal of Psycholinguistic Research.

McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-748.

Nygaard, L. C., & Pisoni, D. B. (1998). Talker-specific learning in speech perception. Perception & Psychophysics, 60, 355-376.

Walker, S., Bruce, V., & O’Malley, C. (1995). Facial identity and facial speech processing: Familiar faces and voices in the McGurk effect. Perception and Psychophysics, 57, 1124-1133.

Welch, R. B., & Warren, D. H. (1980). Immediate perceptual response to intersensory discrepancy. Psychological Bulletin, 88, 638-667.