3aEAa4 – Creating virtual touch using ultrasonic waves

Brian Kappus – brian.kappus@ultrahaptics.com
Ben Long – ben.long@ultrahaptics.com
Ultrahaptics Ltd.
The West Wing, Glass Wharf
Bristol, BS2 0EL, United Kingdom

Popular version of paper 3aEAa4 “Spatiotemporal modulation for mid-air haptic feedback from an ultrasonic phased array” presented Wednesday morning, May 9, 2018, 9:00 AM, Greenway D, 175th ASA Meeting, Minneapolis

Haptic feedback is the use of the sense of touch in computing and interface design to communicate with a user. The average person most often experiences haptic feedback when interfacing with modern mobile devices. These uses are relatively basic: a buzz to alert the user to an incoming call or a vibration when using a touchscreen keyboard.

Interfaces enabled by gesture recognition and virtual/augmented reality, however, typically lack haptic feedback. In this paper, we present “virtual touch” technology developed at Ultrahaptics. It enables the generation of haptic feedback in mid-air, on a user’s bare hands, by the efficient creation and manipulation of ultrasonic waves (i.e. frequencies beyond the range of human hearing).

There are a variety of sensory receptors in the hand that respond to different types of stimulation, including temperature, static pressure, and vibration [1]. Receptors sensitive to vibration can be stimulated by focused acoustic pressure. Ultrahaptics’ technology drives an array of ultrasonic transducers with carefully chosen phase delays so that the resulting interference pattern concentrates acoustic pressure at focal points. The pressure is sufficient to create tactile sensations without generating audible sound.
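
For readers who like to see the arithmetic, here is a minimal sketch (in Python) of how phase delays can be chosen to focus an array at a point. The array layout, the 40 kHz operating frequency, and the speed of sound are illustrative assumptions, not Ultrahaptics’ actual parameters or algorithm.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air (assumed)
FREQUENCY = 40e3         # Hz, a typical ultrasonic transducer frequency (assumed)
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def focusing_phases(transducer_positions, focal_point):
    """Phase (radians) for each transducer so that all waves arrive
    at the focal point in phase (simple geometric focusing)."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    # Compensate each element for its propagation delay to the focus.
    return (2 * np.pi * distances / WAVELENGTH) % (2 * np.pi)

# Example: a 16 x 16 grid of transducers spaced 1 cm apart (assumed layout),
# focusing 20 cm above the centre of the array.
xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
positions -= positions.mean(axis=0)   # centre the array at the origin
phases = focusing_phases(positions, np.array([0.0, 0.0, 0.2]))
```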

Because the vibration-sensitive receptors in the hand cannot perceive ultrasonic frequencies, the acoustic pressure must be switched off and back on again (modulated) at lower frequencies, roughly 40-400 Hz, to create tactile sensations.
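
As a rough illustration of this kind of modulation (not the company’s implementation), the sketch below smoothly switches an assumed 40 kHz carrier on and off at 200 Hz; the carrier frequency, modulation rate, and sampling rate are all assumed values.

```python
import numpy as np

carrier_freq = 40e3   # Hz, assumed ultrasonic carrier
mod_freq = 200.0      # Hz, within the 40-400 Hz range the skin can feel
sample_rate = 1e6     # Hz, simulation sampling rate (assumed)

t = np.arange(0, 0.05, 1 / sample_rate)                    # 50 ms of signal
envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_freq * t))    # 0..1 modulation envelope
drive_signal = envelope * np.sin(2 * np.pi * carrier_freq * t)
```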

Previous versions of this technology have been limited to discrete points of acoustic pressure that are turned on and off at the necessary frequencies to create a tactile effect [2][3][4]. Another way to create the effect is to move the focal point back and forth: its movement away and back provides the modulation at a frequency the receptors can perceive. While the focal point is away, the pressure at the starting position is low; when it returns, the pressure is high again. From the perspective of that starting position, the acoustic pressure therefore varies in amplitude, and this variation creates the tactile sensation.

This technique is called spatiotemporal modulation. Using it, the focal point can be swept around a closed curve almost continuously, forming a robust area of stimulation instead of discrete points. Among its advantages is the ability to render an effectively unlimited variety of curves and volumes.
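
A minimal sketch of the idea, again with assumed numbers (a path repetition rate of 200 Hz, as in the oil-bath visualization below, plus an assumed focal-point update rate, circle radius, and height): the focal point is stepped around a circle, and in a real system the array’s phases would be recomputed for each step.

```python
import numpy as np

path_rate = 200.0    # times per second the closed curve is traced (as in the video)
update_rate = 40e3   # focal-point updates per second (assumed hardware rate)
radius = 0.02        # m, radius of the circular path (assumed)
height = 0.2         # m, height of the path above the array (assumed)

steps_per_loop = int(update_rate / path_rate)
angles = np.linspace(0, 2 * np.pi, steps_per_loop, endpoint=False)

# Focal-point coordinates for one trip around the circle; the focusing
# phases would be recomputed for each of these points in turn.
focal_points = np.column_stack([radius * np.cos(angles),
                                radius * np.sin(angles),
                                np.full(steps_per_loop, height)])
```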

Previously, spatiotemporal modulation was impractical due to the hardware computing requirements. In this paper, we present algorithms developed at Ultrahaptics that realize the required acoustic fields with a fraction of the computing power. This enables fast update rates on reasonable hardware and opens up a new class of haptics.

[Video file missing]
Caption: Oil-bath visualization of spatiotemporal modulation creating 3 simultaneous shapes. Acoustic pressure is able to deform the surface of a liquid, and the acoustic field is visualized here using edge lighting. In this example, 3 shapes are traced 200 times per second to create continuous lines of pressure.

[1] S. J. Lederman and R. L. Klatzky, “Haptic perception: A tutorial,” Attention Perception & Psychophysics, vol. 71, no. 7, pp. 1439-1459, 2009.
[2] T. Iwamoto, M. Tatezono and H. Shinoda, “Non-contact Method for Producing Tactile Sensation Using Airborne Ultrasound,” EuroHaptics, LNCS 5024, 504-513, 2008.
[3] T. Carter, S. A. Seah, B. Long, B. Drinkwater and S. Subramanian, “UltraHaptics: Multi-Point Mid-Air Haptic Feedback for Touch Surfaces,” Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST ’13), New York, NY, USA, 8-11 October 2013.
[4] B. Long, S. A. Seah, T. Carter and S. Subramanian, “Rendering Volumetric Haptic Shapes in Mid-Air Using Ultrasound,” ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), vol. 33, no. 6, article 181, 2014.

1pUW4 – Videos of ultrasonic wave propagation through transparent acrylic objects in water for introductory physics courses produced using refracto-vibrometry

Matthew Mehrkens – mmehrken@gustavus.edu
Benjamin Rorem – brorem@gustavus.edu
Thomas Huber – huber@gustavus.edu
Gustavus Adolphus College
Department of Physics
800 West College Avenue
Saint Peter, MN 56082

Popular version of paper 1pUW4, “Videos of ultrasonic wave propagation through transparent acrylic objects in water for introductory physics courses produced using refracto-vibrometry”
Presented Monday afternoon, May 7, 2018, 2:30pm – 2:45pm, Greenway B
175th ASA Meeting, Minneapolis

In most introductory physics courses, there are units on sound waves and optics. These may include readings, computer simulations, and lab experiments in which properties such as the reflection and refraction of light are studied. Similarly, students may study how an object traveling faster than the speed of sound, such as an airplane, can produce a Mach cone. Equations such as Snell’s law of refraction or the Mach angle equation are derived or presented so that students can perform calculations. However, an important piece is missing for some students – they are not able to actually see the sound or light waves traveling.

The goal of this project was to produce videos of ultrasonic wave propagation through transparent acrylic samples that could be incorporated into introductory high-school and college physics courses. Students can observe and quantitatively study wave phenomena such as reflection, refraction, and Mach cone formation. Using rulers, protractors, and simple equations together with these videos, students can determine the velocity of sound in water and in acrylic.

Video demonstrating ultrasonic waves propagating through acrylic samples, measured using refracto-vibrometry.

To produce these videos, an optical technique called refracto-vibrometry was used. As shown in Figure 1, the laser from a scanning laser Doppler vibrometer was directed through a water-filled tank at a retroreflective surface.

Figure 1: (a) front view and (b) top view. The pulse from an ultrasound transducer passes through water and is incident on a transparent rectangular target. To measure the propagating wave fronts using refracto-vibrometry, the laser from the vibrometer traveled through the water and was reflected off a retroreflector.

The vibrometer detected the density changes as the ultrasound pulse passed through the laser beam. This measurement of the ultrasound arrival time was repeated thousands of times as the laser was directed at a large collection of scan points. These data sets were used to create videos of the propagating ultrasound.

In one measurement, a transparent rectangular acrylic block, tilted at an angle, was placed in the water tank. Figure 2 is a single frame from a video showing the traveling ultrasonic waves emitted from a transducer and reflected/refracted by the block. By using the video, along with a ruler and protractor, students can determine the speed of sound in the water and acrylic block.
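
As an illustration of the kind of calculation students can do, Snell’s law relates the angles measured with a protractor to the sound speeds in the two media. The sketch below (in Python) uses assumed example angles and an approximate speed of sound in water, not values measured from the video.

```python
import numpy as np

c_water = 1480.0                    # m/s, approximate speed of sound in water
incident_angle = np.radians(30.0)   # angle in water, from the normal (assumed value)
refracted_angle = np.radians(55.0)  # angle inside the acrylic block (assumed value)

# Snell's law for sound: sin(theta_water) / c_water = sin(theta_acrylic) / c_acrylic
c_acrylic = c_water * np.sin(refracted_angle) / np.sin(incident_angle)
print(f"Estimated speed of sound in acrylic: {c_acrylic:.0f} m/s")
```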

Video showing ultrasonic waves traveling through water as they are reflected and refracted by a transparent acrylic block.

Figure 2: Ultrasonic wave pulses (cyan and red colored bands) as they travel from water into the acrylic block (the region outlined in magenta). The paths of the wave maxima are shown by the green and blue dots.

In a similar measurement, a transparent acrylic cylinder was suspended in the water tank by fine monofilament string. As an ultrasonic pulse traveled through the cylinder, it created a small bulge in the surface. Because this bulge traveled faster than the speed of sound in water, it produced a Mach cone that can be seen in the video and in Figure 3. Students can determine the speed of sound in the cylinder by measuring the angle of this cone.
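
The corresponding calculation uses the Mach angle equation, sin(theta) = c_water / c_cylinder, so measuring the cone’s half-angle gives the speed in the cylinder. The angle below is an assumed example value, not a measurement from the video.

```python
import numpy as np

c_water = 1480.0                    # m/s, approximate speed of sound in water
mach_half_angle = np.radians(33.0)  # cone half-angle from the protractor (assumed value)

# sin(half-angle) = c_water / c_cylinder  =>  c_cylinder = c_water / sin(half-angle)
c_cylinder = c_water / np.sin(mach_half_angle)
print(f"Estimated wave speed in the acrylic cylinder: {c_cylinder:.0f} m/s")
```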

Figure 3: Mach cone produced by ultrasonic waves traveling faster in the acrylic cylinder than in water.

Video showing formation of a Mach cone resulting from ultrasonic waves traveling faster through an acrylic cylinder than in water.

By interacting with these videos, students should be able to gain a better understanding of wave behavior. The videos are available for download from http://physics.gustavus.edu/~huber/acoustics

This material is based upon work supported by the National Science Foundation under Grant Numbers 1300591 and 1635456. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

4aPP7 – The Loudness of an Auditory Scene

William A. Yost – william.yost@asu.edu
Michael Torben Pastore – m.torben.pastore@gmail.com

Speech and Hearing Science
Arizona State University
PO Box 870102
Tempe AZ, 85287-0102

Popular version of paper 4aPP7
Presented Thursday morning, May 10, 2018
175th ASA Meeting, Minneapolis, MN

This paper is part of a special session honoring Dr. Neil Viemeister, University of Minnesota, for his brilliant career. One of the topics Dr. Viemeister studies is loudness perception. Our presentation deals with the perceived loudness of an auditory scene when several people talk at about the same time. In the real world, the sounds of all the talkers are combined into one complex sound before they reach a listener’s ears. The auditory brain sorts this single complex sound into acoustic “images”, where each image represents the sound of one of the talkers. In our research, we try to understand how many such images can be “pulled out” of an auditory scene so that they are perceived as separate, identifiable talkers.

In one simple type of experiment, listeners are asked how many talkers must be added to a scene before they notice that the number of talkers has increased. When we increase the number of talkers, the additional talkers make the overall sound louder, and this change in loudness can serve as a cue that helps listeners decide which scene has more talkers. If we instead make the overall loudness of, say, a four-talker scene and a six-talker scene the same, the loudness of the individual talkers in the six-talker scene will be lower than in the four-talker scene.
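
A rough back-of-the-envelope version of why this happens (a simplification, not the analysis in the paper): if the talkers are assumed to be independent and equal in level, and the combined power is held constant, each individual talker’s level must drop as talkers are added.

```python
import numpy as np

def per_talker_level_change_db(n_before, n_after):
    """Change in each talker's level (dB) when the number of equal-level,
    independent talkers changes but the combined power is held constant."""
    return 10 * np.log10(n_before / n_after)

# Going from a four-talker to a six-talker scene at equal overall level,
# each individual talker drops by about 1.8 dB.
print(per_talker_level_change_db(4, 6))   # ~ -1.76
```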

If listeners can focus on the individual talkers in the two scenes, they might be able to use the change in loudness of individual talkers as a cue for discrimination. If listeners cannot focus on individual talkers in a scene, then the two scenes may not be discriminable and they are likely to be judged as equally loud. We have found that listeners can make loudness judgments of the individual talkers for scenes of two or three talkers, but not more. This indicates that the loudness of a complex sound may depend on how well the individual components of the sound are perceived and, if so, that only two or three such components (images, talkers) can be processed by the auditory brain at a given time.

Trying to listen to one or more people when many people are talking at the same time is difficult, especially for people who are hard of hearing. If the normal auditory system can only process a few sound sources presented at the same time, this reduces the complexity of devices (e.g., hearing aids) that might be designed to help people with hearing impairment process sounds in complex acoustic environments. In auditory virtual reality (AVR) scenarios, there is a computational cost associated with processing each sound source. If an AVR system only has to process a few sound sources to mimic normal hearing, it would be much less expensive than a system that has to process many sound sources. (Supported by grants from the National Institutes of Health, NIDCD, and Oculus VR, LLC.)

2aEA3 – Insect Ears Inspire Miniature Microphones

James Windmill – james.windmill@strath.ac.uk
University of Strathclyde
204 George Street
Glasgow, G1 1XW
United Kingdom

Popular version of paper 2aEA3
Presented Tuesday morning, May 8, 2018
175th ASA Meeting, Minneapolis, MN

Miniature microphones are a technology that everyone uses every day without thinking about it. They are used in smartphones, laptops, tablets, and more recently in smart home equipment. However, working with sound technology always brings challenges, like how to deal with background noise. Engineers have always looked for ways to make technology better, and for miniature microphones one path to improvement has been to look at how insects hear. If you want to design a really small microphone, then why not look at how the ear of a really small animal works?

In the 1990s, researchers discovered that a small fly (Ormia ochracea) has a very directional ear. That is, it can tell the direction a sound is coming from with much higher accuracy than predicted. Since that discovery, many engineers have tried to make microphones that copy the mechanism of the Ormia ear. Much of the effort has been spent getting around the problem that the Ormia is only interested in hearing one specific frequency, whereas humans want microphones that cover all the frequencies we can hear. Why bother copying this insect ear? If you could make a tiny directional microphone, a lot of background noise would drop out simply because the microphone points towards the person speaking.

At Strathclyde we have developed a variety of microphones based on the Ormia ear mechanism. The main push in this work has been to try and get more sensitive microphones working across more frequencies. To do this we have put four microphones into one Ormia type design, as in Figure 1. So instead of a single frequency, the microphone works as a miniature directional microphone across four main frequencies [1].

Figure 1. Four-frequency, Ormia-inspired miniature microphone.

Work on the Ormia system at Strathclyde encouraged us to think about other things that insect ears do, and about how they are built, to see if there are other advantages to be found. This work has taken two main themes. Firstly, many hearing systems in nature are not just simple mechanical systems; they are active sensors. That is, they change how they function depending on the sound they are listening to: for a quiet sound they increase the amplification of the signal in the ear, and for a loud sound they turn it down. Some ears also change their frequency response, shifting the frequencies they are tuned to. Strathclyde researchers have taken these ideas and produced miniature microphone systems that can do the same thing [2]. Why do this, when you can just do it in signal processing? By making the microphone “smart” you can free up processor power to do other things, or reduce the delay between a sound arriving and the electronic signal being used.
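
As a purely illustrative sketch of that idea (not the Strathclyde design), a level-dependent gain boosts quiet inputs more than loud ones so that the output sits near a target level; the target level and gain limit are assumed values.

```python
import numpy as np

def adaptive_gain(signal, target_rms=0.1, max_gain=20.0):
    """Boost quiet signals and attenuate loud ones so the output sits
    near a target level -- a crude stand-in for an 'active' ear."""
    rms = np.sqrt(np.mean(signal ** 2))
    gain = min(max_gain, target_rms / max(rms, 1e-9))
    return gain * signal

quiet = 0.01 * np.random.randn(48000)   # a quiet input
loud = 0.5 * np.random.randn(48000)     # a loud input
print(np.sqrt(np.mean(adaptive_gain(quiet) ** 2)))   # both outputs end up
print(np.sqrt(np.mean(adaptive_gain(loud) ** 2)))    # near the target level
```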

Figure 2. Graphs showing the results of a miniature microphone actively changing its frequency (A) and gain response (B).

Secondly, we thought about how miniature microphones are made. The ones we use in phones, computers and other devices today are made using computer-chip technology, so they are very flat and made of very hard silicon. Insect ears are made of relatively soft material and come in a huge variety of three-dimensional shapes. The obvious thing, it seemed to us, was to try making insect-inspired microphones using 3D printing techniques. This is very early work, and it is not easy to do, but we have had some success making microphone sensors using 3D printers [3]. Figure 3 shows an “acoustic sensor” that was inspired by how the locust hears sound.

Figure 3. 3D printed acoustic sensor inspired by the ear of a locust.

There is still a lot of work to do, both on developing these techniques and technologies, and on working out how best to use them in everyday technologies like the smartphone. Then again, a huge number of different insects have ears, each working in slightly different ways to hear different things for different reasons, so there are a lot of ears out there we can take inspiration from.

[1] Bauer R et al. (2017), Housing influence on multi-band directional MEMS microphones inspired by Ormia ochracea, IEEE Sensors Journal, 17: 5529-5536.
http://dx.doi.org/10.1109/JSEN.2017.2729619

[2] Guerreiro J et al. (2017), Simple Ears Inspire Frequency Agility in an Engineered Acoustic Sensor System, IEEE Sensors Journal, 17: 7298-7305.
http://dx.doi.org/10.1109/JSEN.2017.2699697

[3] Domingo-Roca R et al. (2018), Bio-inspired 3D-printed piezoelectric device for acoustic frequency selection, Sensors & Actuators: A. Physical, 271: 1-8.
https://doi.org/10.1016/j.sna.2017.12.056

4aSC12 – When it comes to recognizing speech, being in noise is like being old

Kristin Van Engen – kvanengen@wustl.edu
Avanti Dey
Nichole Runge
Mitchell Sommers
Brent Spehar
Jonathan E. Peelle

Washington University in St. Louis
1 Brookings Drive
St. Louis, MO 63130

Popular version of paper 4aSC12
Presented Thursday morning, May 10, 2018
175th ASA Meeting, Minneapolis, MN

How hard is it to recognize a spoken word?

Well, that depends. Are you old or young? How is your hearing? Are you at home or in a noisy restaurant? Is the word one that is used often, or one that is relatively uncommon? Does it sound similar to lots of other words in the language?

As people age, understanding speech becomes more challenging, especially in noisy situations like parties or restaurants. This is perhaps unsurprising, given the large proportion of older adults who have some degree of hearing loss. However, hearing measurements do not actually do a very good job of predicting the difficulty a person will have with speech recognition, and older adults tend to do worse than younger adults even when their hearing is good.

We also know that some words are more difficult to recognize than others. Words that are used rarely are more difficult than common words, and words that sound similar to many other words in the language are recognized less accurately than unique-sounding words. Relatively little is known, however, about how these kinds of challenges interact with background noise to affect the process of word recognition or how such effects might change across the lifespan.

In this study, we used eye tracking to investigate how noise and word frequency affect the process of understanding spoken words. Listeners were shown a computer screen displaying four images and listened to the instruction “Click on the” followed by a target word (e.g., “Click on the dog.”). As the speech signal unfolds, the eye tracker records the moment-by-moment direction of the person’s gaze (60 times per second). Because listeners direct their gaze toward the visual information that matches the incoming auditory information, this allows us to observe the process of word recognition in real time.
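
As an illustration of how such gaze data can be summarized (not the authors’ actual analysis pipeline, and using a made-up coding of where the eyes are looking on each 1/60-second sample), one can compute the proportion of trials fixating the target picture at each moment in time:

```python
import numpy as np

SAMPLE_RATE = 60  # gaze samples per second, as recorded by the eye tracker

def proportion_looks_to_target(gaze_regions):
    """Proportion of trials fixating the target image at each time sample.

    gaze_regions: trials x samples array of labels such as 'target',
    'competitor', or 'other' (hypothetical coding scheme).
    """
    gaze_regions = np.asarray(gaze_regions)
    return (gaze_regions == "target").mean(axis=0)

# Two made-up trials, 6 samples (0.1 s) each, just to show the shape of the output.
trials = [["other", "other", "target", "target", "target", "target"],
          ["other", "competitor", "competitor", "target", "target", "target"]]
print(proportion_looks_to_target(trials))   # [0. 0. 0.5 1. 1. 1.]
```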

Our results indicate that word recognition is slower in noise than in quiet, slower for low-frequency words than high-frequency words, and slower for older adults than younger adults. Interestingly, young adults were more slowed down by noise than older adults. The main difference, however, was that young adults were considerably faster to recognize words in quiet conditions. That is, word recognition by older adults didn’t differ much from quiet to noisy conditions, but young listeners looked like older listeners when tasked with listening to speech in noise.