4pSC15 – Reading aloud in a clear speaking style may interfere with sentence recognition memory

Sandie Keerstock – keerstock@utexas.edu
Rajka Smiljanic – rajka@austin.utexas.edu
Department of Linguistics, The University of Texas at Austin
305 E 23rd Street, B5100, Austin, TX 78712

Popular version of paper 4pSC15
Presented Thursday afternoon, May 16, 2019
177th ASA Meeting, Louisville, KY

Can you improve your memory by speaking clearly? If, for example, you are rehearsing for a presentation, which speaking style will better help you remember the material: reading aloud in a clear speaking style, or reciting the words casually, as if speaking with a friend?

When conversing with a non-native listener or someone with a hearing problem, talkers spontaneously switch to clear speech: they slow down, speak louder, use a wider pitch range, and hyper-articulate their words. Compared to more casual speech, clear speech enhances a listener’s ability to understand speech in a noisy environment. Listeners also better recognize previously heard sentences and recall what was said if the information was spoken clearly.

Figure 1. Illustration of the procedure of the recognition memory task.

In this study, we set out to examine whether talkers, too, have better memory of what they said if they pronounced it clearly. In the training phase of the experiment, 60 native and 30 non-native English speakers were instructed to read aloud and memorize 60 sentences containing high-frequency words, such as “The hot sun warmed the ground,” as they were presented one by one on a screen. Each screen directed the subject with regard to speaking style, alternating between “clear” and “casual” every ten slides. During the test phase, participants were asked to identify as “old” or “new” 120 sentences written on the screen one at a time: 60 they had read aloud in either style, and 60 they had not.


Figure 2. Average of d’ (discrimination sensitivity index) for native (n=60) and non-native English speakers (n=30) for sentences produced in clear (light blue) and casual (dark blue) speaking styles. Higher d’ scores denote enhanced accuracy during the recognition memory task. Error bars represent standard error.

Unexpectedly, both native and non-native talkers in this experiment showed enhanced recognition memory for sentences they had read aloud in a casual style. Unlike in perception, where hearing clearly spoken sentences improved listeners’ memory, findings from the present study indicate a memory cost when talkers themselves produced clear sentences. This asymmetry between the production and perception effects on memory may nevertheless stem from the same underlying mechanism, namely the Effortfulness Hypothesis (McCoy et al. 2005). In perception, more cognitive resources are used during processing of harder-to-understand casual speech, leaving fewer resources available for storing information in memory. Conversely, cognitive resources may be more depleted during the production of hyper-articulated clear sentences, which could lead to poorer memory encoding. This study suggests that the memory benefit of clear speech may be limited to listeners’ retention of spoken information in long-term memory, and may not extend to talkers.
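For readers curious how the d’ scores in Figure 2 are derived, the sketch below shows the standard signal-detection calculation from hit and false-alarm counts in an old/new recognition test. The counts are invented for illustration and are not the study’s data; the study’s exact correction procedure may differ.

```python
from statistics import NormalDist

# A minimal sketch of the d' (sensitivity) calculation behind Figure 2,
# using the standard signal-detection formula with a log-linear correction.

def d_prime(hits, misses, false_alarms, correct_rejections):
    # corrected hit and false-alarm rates, kept strictly between 0 and 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)   # d' = z(H) - z(FA)

# e.g., 50 of 60 "old" sentences correctly recognized, 12 of 60 "new"
# sentences wrongly called "old"
print(round(d_prime(50, 10, 12, 48), 2))   # ~1.77
```

Higher d’ means the talker separated old from new sentences more reliably; guessing yields a d’ near zero.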

4aSP4 – Streaming Video through Biological Tissues using Ultrasonic Communication

Gizem Tabak – tabak2@illinois.edu
Michael Oelze – oelze@illinois.edu
Andrew Singer – acsinger@illinois.edu
University of Illinois at Urbana-Champaign
306 N Wright St
Urbana, IL 61801

Popular version of paper 4aSP4
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

Researchers at the University of Illinois at Urbana-Champaign have developed a fast, wireless communication alternative that also has biomedical implications. Instead of using radio frequency (RF) to transmit signals, the team is using ultrasonic waves to send signals at high enough data rates to transmit video through animal or human tissue.

The team of electrical and computer engineering professors Andrew Singer and Michael Oelze and graduate researcher Gizem Tabak has achieved a transmission rate of 4 megabits per second through animal tissue with 2-mm transmitting devices. This rate is high enough to carry high-definition video (roughly 3 Mbps) and about 15 times faster than what RF waves can currently deliver.


Figure 1 – Experimental setup for streaming at 4 Mbps through 2” of beef liver

The team is using this approach to communicate with implanted medical devices, like those used to scan tissue in a patient’s gastrointestinal (GI) tract.

Currently, one of two methods is used to image the GI tract. The first is video endoscopy, which involves inserting a long probe with a camera and light down the throat to take real-time video and send it to an attached computer. This method is highly invasive and cannot reach the midsection of the GI tract.

The second method involves a patient swallowing a pill that contains a mini camera that can take images throughout the tract. After a day or so, the pill is retrieved, and the physician can extract the images. This method, however, is entirely offline, meaning there is no real-time interaction with the camera inside the patient.

A third option uses the camera pill approach but sends the images through RF waves, which are absorbed by the surrounding tissue. Due to safety regulations governing electromagnetic radiation, the transmitted signal power is limited, resulting in data rates of only 267 kilobits per second.
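A quick back-of-the-envelope check, using only the rates quoted in this article, shows where the “15 times faster” figure comes from and why 4 Mbps is enough for real-time HD video:

```python
# Back-of-the-envelope comparison using the rates quoted in this article.
rf_rate = 267e3   # RF capsule data rate, bits per second
us_rate = 4e6     # demonstrated ultrasonic data rate, bits per second
hd_video = 3e6    # approximate HD-video bitrate, bits per second

print(f"ultrasound vs. RF: {us_rate / rf_rate:.0f}x faster")   # ~15x
print(f"supports HD video: {us_rate >= hd_video}")             # True
```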

The Illinois team is proposing to use ultrasound, a method that has already proven safe for medical imaging, as a communication method. Having achieved data rates of 4 Mbps with this system through animal tissue, the team is translating the approach to operate in real time in the human body.

Pairing this communication technology with the camera-pill approach, the device could not only send real-time video but also be remotely controlled. For example, it might travel to specific areas and rotate to arbitrary orientations. It may even be possible to take tissue samples for biopsy, essentially replacing endoscopic procedures or surgeries with such miniature remote-controlled robotic devices.

4aPA – Using Sound Waves to Quantify Erupted Volumes and Directionality of Volcanic Explosions

Alexandra Iezzi – amiezzi@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

David Fee – dfee1@alaska.edu
Geophysical Institute, Alaska Volcano Observatory
University of Alaska Fairbanks
2156 Koyukuk Drive
Fairbanks, AK 99775

Popular version of paper 4aPA
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

Volcanic eruptions can produce serious hazards, including ash plumes, lava flows, pyroclastic flows, and lahars. Volcanic phenomena, especially explosions, produce a substantial amount of sound, particularly in the infrasound band (<20 Hz, below the range of human hearing), which can be detected at both local and global distances using dedicated infrasound sensors. Recent research has focused on inverting infrasound data collected within a few kilometers of an explosion, which can provide robust estimates of the mass and volume of erupted material in near real time. While the backbone of local geophysical monitoring of volcanoes typically relies on seismometers, it can sometimes be difficult to determine from seismic data alone whether a signal originates only from the subsurface or has become subaerial (i.e., erupting). Volcano infrasound recordings can be combined with seismic monitoring to help establish whether material is actually coming out of the volcano and therefore poses a potential threat to society.

This presentation aims to summarize results from many recent studies on acoustic source inversions for volcanoes, including a recent study by Iezzi et al. (in review) at Yasur volcano, Vanuatu. Yasur is easily accessible and produces explosions every 1 to 4 minutes, making it a great place to study volcanic explosion mechanisms (Video 1).

Video 1 – Video of a typical explosion at Yasur volcano, Vanuatu.

Most volcano infrasound inversion studies assume that sound radiates equally in all directions. However, acoustic directionality of the volcano infrasound source mechanism is not well understood, because infrasound sensors are usually deployed only on Earth’s surface. In our study, we placed an infrasound sensor on a tethered balloon that was walked around the volcano to measure the acoustic wavefield above Earth’s surface and investigate possible acoustic directionality (Figure 1).

Figure 1 [file missing] – Image showing the aerostat on the ground prior to launch (left) and when tethered near the crater rim of Yasur (right).

Volcanoes typically have high topographic relief that can significantly distort the waveforms we record, even at distances of only a few kilometers. We can account for this effect by modeling the acoustic propagation over the topography (Video 2).

Video 2 – Video showing the pressure field that results from inputting a simple compressional source at the volcanic vent and propagating the wavefield over a model of topography. The red denotes positive pressure (compression) and blue denotes negative pressure (rarefaction). We note that all complexity past the first red band is due to topography.
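For readers who want a feel for how such simulations work, here is a minimal two-dimensional finite-difference sketch in the spirit of Video 2. It is a toy model with assumed grid and source values, not the full solver used in the actual study, and its boundaries are simplified as noted in the comments.

```python
import numpy as np

# Toy 2D finite-difference acoustic simulation: a pressure pulse at the
# vent propagates over a volcano-shaped profile. Grid, source, and
# boundary handling are all simplified assumptions.

nx, nz = 300, 150          # grid cells (horizontal, vertical)
dx = 10.0                  # grid spacing, m
c0 = 340.0                 # speed of sound in air, m/s
dt = 0.4 * dx / c0         # time step chosen to satisfy CFL stability

# volcano-shaped topography: cells at and below the surface are "ground"
cols = np.arange(nx)
surface = (80 - 60 * np.exp(-((cols - 150) / 30.0) ** 2)).astype(int)
solid = np.zeros((nz, nx), dtype=bool)
for i, s in enumerate(surface):
    solid[s:, i] = True

p = np.zeros((nz, nx))           # current pressure field
p_old = np.zeros((nz, nx))       # previous time step
p[surface[150] - 2, 150] = 1.0   # impulsive source just above the vent

for _ in range(400):
    # discrete Laplacian (edges wrap; a real solver uses absorbing borders)
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4 * p) / dx**2
    p_new = 2 * p - p_old + (c0 * dt) ** 2 * lap
    p_new[solid] = 0.0   # crude ground boundary (a real solver enforces
                         # a rigid, zero-normal-velocity condition instead)
    p_old, p = p, p_new
```

Plotting the positive and negative parts of the field in red and blue at each step reproduces the kind of animation shown in Video 2, with the wavefront visibly distorted by the topography.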

Once the effects of topography are constrained, we can assume that when we are very close to the source, all remaining complexity in the infrasound data is due to the acoustic source. This allows us to solve for the volume flow rate of erupted material (potentially in real time). In addition, we can examine each explosion for directionality, which may mean volcanic ejecta are launched more often, and farther, in one direction than in others. Such directionality poses a great hazard to tourists and locals near the volcano, and studying the acoustic source from a safe distance using infrasound may help mitigate it.
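The volume-flow-rate estimate mentioned above rests on a simple acoustic source model: for a compact monopole source, the recorded pressure is proportional to the second time derivative of the erupted volume, p(r, t) = ρ V̈(t − r/c) / (4πr). The sketch below applies that relation to a synthetic pressure pulse; all numbers are illustrative assumptions, and a real analysis (like the Yasur study) would first correct for topography.

```python
import numpy as np

# Minimal monopole inversion sketch: p(r,t) = rho * Vddot(t - r/c) / (4*pi*r),
# so twice-integrating the range-scaled pressure gives the volume flow rate
# and the cumulative erupted volume. The trace below is synthetic, not a
# real Yasur recording; rho, r, and fs are assumed values.

rho = 1.2        # air density, kg/m^3
r = 500.0        # source-to-sensor distance, m (assumed)
fs = 100.0       # sampling rate, Hz (assumed)

t = np.arange(0.0, 10.0, 1.0 / fs)
g = np.exp(-((t - 2.0) / 0.3) ** 2)
p = 10.0 * np.gradient(g, 1.0 / fs)   # synthetic bipolar explosion pulse, Pa

v_ddot = 4.0 * np.pi * r * p / rho    # invert the monopole relation for Vddot
v_dot = np.cumsum(v_ddot) / fs        # volume flow rate, m^3/s
volume = np.cumsum(v_dot) / fs        # cumulative erupted volume, m^3

print(f"peak volume flow rate: {v_dot.max():.3g} m^3/s")
print(f"total erupted volume:  {volume[-1]:.3g} m^3")
```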

4aPP28 – Listening to music with bionic ears: Identification of musical instruments and genres by cochlear implant listeners

Ying Hsiao – ying_y_hsiao@rush.edu
Chad Walker
Megan Hebb
Kelly Brown
Jasper Oh
Stanley Sheft
Valeriy Shafiro – Valeriy_Shafiro@rush.edu
Department of Communication Disorders and Sciences
Rush University
600 S Paulina St
Chicago, IL 60612, USA

Kara Vasil
Aaron Moberly
Department of Otolaryngology – Head & Neck Surgery
Ohio State University Wexner Medical Center
410 W 10th Ave
Columbus, OH 43210, USA

Popular version of paper 4aPP28
Presented Thursday morning, May 16, 2019
177th ASA Meeting, Louisville, KY

For many people, music is an integral part of everyday life. We hear it everywhere: cars, offices, hallways, elevators, restaurants, and, of course, concert halls and peoples’ homes. It can often make our day more pleasant and enjoyable, but its ubiquity also makes it easy to take it for granted. But imagine if the music you heard around you sounded garbled and distorted. What if you could no longer tell apart different instruments that were being played, rhythms were no longer clear, and much of it sounded out of tune? This unfortunate experience is common for people with hearing loss who hear through cochlear implants, or CIs, the prosthetic devices that convert sounds around a person to electrical signals that are then delivered directly to the auditory nerve, bypassing the natural sensory organ of hearing – the inner ear. Although CIs have been highly effective in improving speech perception for people with severe to profound hearing loss, music perception has remained difficult and frustrating for people with CIs.

Audio 1.mp4, “Music processed with the cochlear implant simulator, AngelSim by Emily Shannon Fu Foundation”

Audio 2.mp4, “Original version [“Take Five” by Francesco Muliedda is licensed under CC BY-NC-SA]”
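The simulated excerpt above was produced with AngelSim; the sketch below shows the generic noise-band vocoding technique that CI simulators of this kind are built on. It is an illustrative stand-in, not AngelSim itself, and the channel count and band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

# A generic noise-band vocoder: split the signal into a few frequency bands,
# keep only each band's slow amplitude envelope, and use it to modulate
# band-limited noise. This mimics how a CI conveys envelopes on a small
# number of electrode channels while discarding fine spectral detail.

def vocode(x, fs, n_channels=8, lo=100.0, hi=8000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced bands, cochlea-like
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)                         # isolate one frequency band
        env = np.abs(hilbert(band))                    # keep its amplitude envelope
        noise = sosfilt(sos, np.random.randn(len(x)))  # noise limited to the same band
        out += env * noise                             # envelope-modulated noise channel
    return out / np.max(np.abs(out))                   # sum channels and normalize
```

With few channels, the pitch and timbre cues that distinguish a flute from a violin are largely destroyed, while rhythmic envelopes (drums) survive, which is consistent with the identification pattern reported below.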

To find out how well CI listeners identify musical instruments and music genres, we used a version of a previously developed test, the Appreciation of Music in Cochlear Implantees (AMICI). Unlike tests that examine music perception in CI listeners using simple, structured musical stimuli to pinpoint specific perceptual challenges, AMICI takes a more synthetic approach and uses real-world musical pieces, which are acoustically more complex. Our findings confirmed that CI listeners indeed have considerable deficits in music perception. Participants with CIs correctly identified musical instruments only 69% of the time and musical genres 56% of the time, whereas their age-matched normal-hearing peers identified instruments and genres with 99% and 96% accuracy, respectively. The easiest instrument for CI listeners to identify was the drum, recognized 98% of the time. In contrast, the most difficult was the flute, with only 18% identification accuracy; it was confused with string instruments 77% of the time. Among the genres, classical music was the easiest to identify, at 83% correct, while Latin and rock/pop music were the most difficult (41% correct). Remarkably, CI listeners’ ability to identify musical instruments and genres correlated with their ability to identify common environmental sounds (such as a dog barking or a car horn) and spoken sentences in noise. These results provide a foundation for future work on rehabilitation of music perception in CI listeners, so that music may sound pleasing and enjoyable to them once again, with possible additional benefits for speech and environmental-sound perception.

1aSP1 – From Paper Cranes to New Tech Gains: Frequency Tuning through Origami Folding

Kazuko Fuchi – kfuchi1@udayton.edu
University of Dayton Research Institute
300 College Park, Dayton, OH 45469

Andrew Gillman – andrew.gillman.1.ctr@us.af.mil
Alexander Pankonien – alexander.pankonien.1@us.af.mil
Philip Buskohl – philip.buskohl.1@us.af.mil
Air Force Research Laboratory
Wright-Patterson Air Force Base, OH 45433

Deanna Sessions – deanna.sessions@psu.edu
Gregory Huff – ghuff@psu.edu
Department of Electrical Engineering and Computer Science
Penn State University
207 Electrical Engineering West, University Park, PA 16802

Popular version of lecture: 1aSP1 Topology optimization of origami-inspired reconfigurable frequency selective surfaces
Presented Monday morning, 9:00 AM – 11:15 AM, May 13, 2019
177th ASA Meeting, Louisville, Kentucky

The use of mathematics and computer algorithms by origami artists has led to a renaissance of the art of origami in recent decades. Combining scientific tools with their imagination and artistic skills, these artists discover intricate origami designs that expand the possibilities of the art form.

The intrigue of realizing incredibly complex creatures and exquisite patterns from a piece of paper has captured the attention of the scientific and engineering communities. Our research team, like others in the engineering community, wanted to make use of the language of origami, which gives us a natural way to navigate complex geometric transformations through 2D (flat), 3D (folded), and 4D (folding motion) spaces. This beautiful language has enabled numerous innovative technologies, including foldable and deployable satellites, self-folding medical devices, and shape-changing robots.

Origami, as it turns out, is also useful in controlling how sound and radio waves travel. An electromagnetic device called an origami frequency selective surface can be created by laser-scoring a plastic sheet, folding it into a repeating pattern called a periodic tessellation, and printing electrically conductive copper decorations aligned with the pattern on the sheet (Figure 1). We have shown that this origami-folded device can be used as a filter to block unwanted radio signals at a specific operating frequency. We can fold and unfold the device to tune the operating frequency, or we can design a device that can be folded, unfolded, bent, and twisted into a complex surface shape without changing the operating frequency, all depending on the design of the folding and printing patterns. These findings encourage further research into origami-based designs to meet demanding goals for radar, communication, and sensor technologies.

Figure 1: Fabricated prototype of an origami-folded frequency selective surface made of a folded plastic sheet and copper prints, ready to be tested in an anechoic chamber – a room padded with radio-wave-absorbing foam pyramids.
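As a rough intuition for the frequency tuning described above, a dipole-like element resonates when it is about a half-wavelength long, so its operating frequency scales inversely with its effective length, and folding changes that effective length. The toy model below illustrates this with an assumed element size and a simple projection rule; it is not the authors’ actual design model.

```python
import numpy as np

# Toy model of frequency tuning by folding: a dipole-like element resonates
# near a half-wavelength, f ~ c / (2 * L_eff). Folding shortens the element's
# effective projected length, shifting the resonance upward. The element
# length and projection rule are illustrative assumptions.

c = 3.0e8   # speed of light, m/s
L = 15e-3   # printed element length when flat, m (assumed)

for fold_deg in (0, 30, 60):
    L_eff = L * np.cos(np.radians(fold_deg / 2))  # projected length of a folded element
    f_res = c / (2 * L_eff)
    print(f"fold angle {fold_deg:2d} deg -> resonance near {f_res / 1e9:.1f} GHz")
```

In this simplified picture, folding the sheet from flat to a 60-degree fold shifts the resonance from about 10.0 GHz to about 11.5 GHz, the same qualitative tuning behavior the fabricated surface exhibits.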

Origami can be used to choreograph complex geometric rearrangements of the active components. In the case of our frequency selective surface, the folded plastic sheet acts as the medium that hosts the electrically active copper prints. As the sheet is folded, the copper prints fold and move relative to each other in a controlled manner. We used theoretical knowledge, along with insight gained from computer simulations, to understand how these rearrangements affect the physics of the device’s working mechanism and to decide which designs to fabricate and test in the real world. In this way, we attempt to imitate, in the engineering domain, the origami artist’s magical creation of awe-inspiring art.