Making Table Tennis Accessible for Blind Players #Acoustics23

Object tracking combined with a speaker array can provide real-time audio feedback in three dimensions.

SYDNEY, Dec. 6, 2023 – Table tennis has been played for decades as a more accessible version of tennis. The sport is particularly beginner-friendly while maintaining a rich level of competitive play. However, like many sports, it remains inaccessible to people who are blind or have low vision.

Phoebe Peng, an Engineering Honours student at the University of Sydney, is researching ways to allow people with low vision and blindness to play pingpong using sound.

The system uses neuromorphic cameras and an array of loudspeakers, designed to let players track the ball's position and movement by sound. Peng will present her work Dec. 6 at 10:20 a.m. Australian Eastern Daylight Time, as part of Acoustics 2023 Sydney, running Dec. 4-8 at the International Convention Centre Sydney.

Motion tracking cameras and an array of linked speakers give real-time audio feedback to table tennis players with low vision. Credit: Phoebe Peng

According to Peng, table tennis makes a perfect test case for this kind of technology.

“The small size of the ball and table, along with the movement of the ball in 3D space, are things that make table tennis difficult to play for those with low vision and complete blindness,” said Peng, who completed the work as part of her Honours thesis. “Making this sport more accessible while also exploring the potential of neuromorphic cameras were my two biggest motivators.”

The neuromorphic cameras Peng employed are ideal for tracking small objects like table tennis balls. Unlike conventional cameras, which capture complete images of a scene, neuromorphic cameras record only the changes in a scene over time. Using two precisely positioned cameras, Peng could identify and track a ball in three dimensions in real time. She then fed that data into an algorithm controlling an array of loudspeakers along the sides of the table, which created a sound field matching the position of the ball.
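For readers curious about the mechanics, here is a minimal sketch of the two steps involved: triangulating the ball's position from a pair of cameras, then weighting the loudspeaker array toward that position. The camera geometry, speaker layout, and inverse-square panning law are all assumptions for illustration, not Peng's actual implementation.

```python
# Minimal sketch of the two steps described above; camera geometry,
# speaker layout, and the panning law are assumptions, not Peng's code.
import numpy as np

FOCAL_PX = 1000.0       # assumed focal length, in pixels
BASELINE_M = 1.0        # assumed spacing between the two cameras (m)
CX, CY = 640.0, 360.0   # assumed principal point of each sensor

def triangulate(u_left, v_left, u_right):
    """Ball position (x, y, z) from pixel coordinates in two parallel cameras."""
    disparity = u_left - u_right
    z = FOCAL_PX * BASELINE_M / disparity   # depth from stereo disparity
    x = (u_left - CX) * z / FOCAL_PX        # lateral position
    y = (v_left - CY) * z / FOCAL_PX        # height
    return np.array([x, y, z])

# Assumed layout: three speakers along each long side of the table.
SPEAKERS = np.array([[sx, 0.0, sz] for sz in (0.3, 1.2, 2.1)
                     for sx in (-0.9, 0.9)])

def speaker_gains(ball_pos):
    """Inverse-square amplitude panning: speakers nearer the ball play louder."""
    dist = np.linalg.norm(SPEAKERS - ball_pos, axis=1)
    weights = 1.0 / np.maximum(dist, 0.1) ** 2
    return weights / weights.sum()          # normalise the total output

print(speaker_gains(triangulate(900.0, 400.0, 400.0)))
```

In a real system, the camera's event stream would first be clustered to locate the ball in each view, and the gains would be updated continuously at the event rate rather than once per frame.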

While this system works well, Peng says more experimentation is needed before it will be ready for actual play.

“An ongoing technical challenge is the matter of human perception of sound,” said Peng. “There are limitations on how accurately people can perceive sound localization. What type of sound should be used? Should the sound be continuous? This is a technical challenge we’ll be tackling in the next stage of development.”

###

Contact:
AIP Media
301-209-3090
media@aip.org

———————– MORE MEETING INFORMATION ———————–

The Acoustical Society of America is joining the Australian Acoustical Society to co-host Acoustics 2023 Sydney. This collaborative event will incorporate the Western Pacific Acoustics Conference and the Pacific Rim Underwater Acoustics Conference.

Main meeting website: https://acoustics23sydney.org/
Technical program: https://eppro01.ativ.me/src/EventPilot/php/express/web/planner.php?id=ASAFALL23

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are summaries (300-500 words) of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at
https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

ABOUT THE AUSTRALIAN ACOUSTICAL SOCIETY
The Australian Acoustical Society (AAS) is the peak technical society for individuals working in acoustics in Australia. The AAS aims to promote and advance the science and practice of acoustics in all its branches to the wider community and provide support to acousticians. Its diverse membership is drawn from academia, consultancies, industry, equipment manufacturers and retailers, and all levels of government. The Society supports research and provides regular forums for those who practice or study acoustics across a wide range of fields. The principal activities of the Society are technical meetings held by each State Division, annual conferences held by the State Divisions and the ASNZ in rotation, and publication of the journal Acoustics Australia. https://www.acoustics.org.au/

Let’s go soundwalking!

David Woolworth – dwoolworth@rwaconsultants.net

Roland, Woolworth & Associates, Oxford, MS, 38655, United States

Bennett Brooks and Brigitte Schulte-Fortkamp

Popular version of 4pAAb1 – Introduction to Soundwalking – an important part of the soundscape method
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023505

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Our acoustic environment is a critical part of our everyday experience; it is often unconsciously processed with all other stimuli to form an impression of a place and time, but its impact is not always fully understood. Soundscape is a method of assessing the acoustic environment where perception is prioritized. The soundscape method and the soundwalk tool integrate measurements of the human perception of sound with other observations that characterize the environment, such as the sound levels, the type of location and the various sound sources. The combination of these perceptual measurements with other observations helps us to understand how the acoustic environment impacts the people there and can provide directions for possible changes that can improve their quality of life.

The soundscape method suggests assessing all sounds which occur in an environment using collected data related to human perception, the physical acoustic setting, and context. Context includes visual cues, geographic, social, psychological and cultural aspects, including one’s mental image or memory of a place. Soundscape transcends the common studies of noise and sound levels, and is a powerful tool for effecting positive results with regard to the quality of life for stakeholders in the acoustic environment; standardized methodology has been developed that can be adapted to various applications, using sound as a resource. Soundwalks are an important part of the soundscape method and are a useful way to engage stakeholders who participate by consciously observing and evaluating the soundscape.

Figure 1

A soundwalk is an element of the soundscape method that typically includes a walking tour of observation locations over a predetermined route to solicit perceptual feedback from the participants regarding the acoustic environment (see Figures 1 and 2). The participants of the soundwalk typically include stakeholders or “local experts”: members of the community who experience the soundscape daily, users and patrons of a space, residents, business people, and local officials. Soundwalks can be performed in settings ranging from urban areas to wilderness, indoors and outdoors; the information collected has many applications, including ordinances and planning, preservation or improvement of the acoustic environment, and building public and self-awareness of the acoustic environment.

Figure 2

The perceptual information collected during a soundwalk includes the sounds heard by the participants and, often, responses to directed questions with scaled answers; this information, along with objective sound level measurements and audio recordings, can be used to assess one or more acoustic spaces in service of the soundwalk’s purpose (see Figures 3 and 4). In some cases, the participants are interviewed to gain a deeper understanding of their responses, or the data can be taken to a lab for further study.
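As a simple illustration of how the scaled answers and objective measurements might be tabulated together, the sketch below pairs hypothetical pleasantness ratings with an A-weighted level measured at each stop; the stops, scale, and numbers are invented for illustration.

```python
# Illustrative only: pairing scaled questionnaire answers from a soundwalk
# with the sound level measured at each observation stop. All data are
# hypothetical.
from statistics import mean

# Each stop: measured A-weighted level (dB) and 1-5 "pleasantness" ratings
# from several participants (5 = very pleasant).
stops = {
    "park entrance": {"laeq_db": 58.0, "pleasant": [4, 5, 4, 3]},
    "main street":   {"laeq_db": 72.0, "pleasant": [2, 1, 2, 2]},
    "courtyard":     {"laeq_db": 55.0, "pleasant": [5, 4, 5, 4]},
}

for name, data in stops.items():
    print(f"{name:14s} LAeq {data['laeq_db']:5.1f} dB  "
          f"mean pleasantness {mean(data['pleasant']):.2f}")
```

Comparing the perceptual averages against the measured levels stop by stop shows where loudness and pleasantness diverge, which is the kind of perception-first insight the soundscape method prioritizes.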

Figure 3

The soundwalk and the post-processing of the collected information are flexible within the standardized soundscape methods, so they can be targeted to a particular acoustic space and purpose of investigation. This makes the soundwalk an adaptable and powerful tool for assessing an acoustic environment and improving the quality of life for those who live in or use that environment, using their own perceptions and feedback.

Figure 4

Enhancing Museum Experiences: The Impact of Sounds on Visitor Perception

Milena J. Bem – jonasm@rpi.edu

School of Architecture, Rensselaer Polytechnic Institute, Troy, New York, 12180, United States

Samuel R.V. Chabot – Rensselaer Polytechnic Institute
Jonas Braasch – Rensselaer Polytechnic Institute

Popular version of 4aAA8 – Effects of sounds on the visitors’ experience in museums
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023459

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.


Have you ever wondered how a museum’s subtle backdrop of sound affects your experience? Are you drawn to the tranquility of silence, the ambiance of exhibition-congruent sounds, or perhaps the hum of people chatting and footsteps echoing through the halls?

Museums increasingly realize that acoustics are crucial in shaping a visitor’s experience. Museum environments pose acoustic challenges, such as striking the right balance between speech intelligibility and privacy, particularly in open-plan exhibition halls and coupled rooms with large volumes and highly reflective surfaces.

Addressing the Challenge
Our proposal focuses on using sound masking systems to tackle these challenges. Sound masking is a proven and widely used technique in diverse settings, from offices to public spaces. Conventionally, it involves introducing low-level broadband noise to mask or diminish unwanted sounds, reducing distractions.
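To make “low-level broadband noise” concrete, here is a minimal sketch of a conventional masker; the band limits and level are illustrative assumptions, not parameters from this study.

```python
# Minimal sketch of a conventional sound masker: steady low-level,
# band-limited broadband noise. All parameter values are illustrative
# assumptions, not settings from this study.
import numpy as np
from scipy.signal import butter, lfilter

FS = 48000                  # sample rate (Hz)
DURATION_S = 5.0
LEVEL = 0.05                # low amplitude relative to full scale

rng = np.random.default_rng(0)
noise = rng.standard_normal(int(FS * DURATION_S))

# Confine the masker to an assumed 200 Hz - 8 kHz band.
b, a = butter(4, [200 / (FS / 2), 8000 / (FS / 2)], btype="bandpass")
masker = lfilter(b, a, noise)
masker *= LEVEL / np.max(np.abs(masker))   # scale to the target low level
```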

Context is Key
Recognizing the pivotal role of context in shaping human perception, we propose strategically integrating sounds as design elements: masking sounds that are congruent with the museum environment should work more effectively than conventional maskers like low-level broadband noise. This approach reduces background-noise distractions and enhances engagement with the artwork, creating a more immersive and comprehensive museum experience.

Evaluating the Effects: The Cognitive Immersive Room (CIR)
We assessed these effects using the Cognitive Immersive Room at Rensselaer Polytechnic Institute. This cutting-edge space features a 360° visual display and an eight-channel loudspeaker system for spatial audio rendering. We projected panoramic photographs and ambisonic audio recordings from 16 exhibitions across five museums: MASS MoCA, the New York State Museum, the Williams College Museum of Art, the UAlbany Art Museum, and the Hessel Museum of Art.
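To give a sense of the spatial rendering involved, the sketch below decodes first-order ambisonic (B-format) signals to a horizontal ring of eight loudspeakers. This is a generic textbook decoder under an assumed, evenly spaced speaker geometry, not necessarily the CIR’s actual pipeline.

```python
# Generic first-order ambisonic decode to an assumed ring of eight
# evenly spaced loudspeakers; not the CIR's actual rendering pipeline.
import numpy as np

N_SPK = 8
AZIMUTHS = np.arange(N_SPK) * 2 * np.pi / N_SPK   # speaker angles (rad)

def decode_foa(w, x, y):
    """Decode horizontal B-format signals (1-D arrays) to speaker feeds."""
    feeds = [(w * np.sqrt(0.5) + x * np.cos(az) + y * np.sin(az)) / N_SPK
             for az in AZIMUTHS]
    return np.stack(feeds)    # shape: (8, n_samples)

# Example: a 1 kHz tone encoded at 45 degrees, then decoded to the ring.
t = np.arange(48000) / 48000.0
src = np.sin(2 * np.pi * 1000 * t)
w, x, y = src * np.sqrt(0.5), src * np.cos(np.pi / 4), src * np.sin(np.pi / 4)
feeds = decode_foa(w, x, y)
```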

The Study Setup
Each participant experienced four soundscape scenarios: the original soundscape recorded in each exhibition, the recorded soundscape combined with a conventional sound masker, the recorded soundscape combined with a congruent sound masker, and “silence,” in which no recording was played and only the ambient room noise of 41 dB remained. Figure 1 shows one of the displays used in the experiment, with the presented sound stimuli listed below it.

Figure 1: Birds of New York exhibition – New York State Museum. The author took the photo with the permission of the museum’s Director of Exhibitions.

Scenario 1: the original soundscape recorded in situ.
Scenario 2: the recorded soundscape combined with a conventional sound masker.
Scenario 3: the recorded soundscape combined with a congruent sound masker.

After each sound stimulus, participants responded to a questionnaire, administered through a program developed for this research that let them answer the questions on an iPad. After experiencing the four soundscapes, participants were asked a final question about which soundscape they preferred within the exhibition context. Figure 2 shows the experiment design.

Figure 2

Key Findings
The results were statistically significant and showed a clear preference for congruent sounds, which reduced distractions, enhanced focus, and fostered a more comprehensive and immersive experience. A majority of participants (58%) preferred the congruent sound scenario, followed by silence at 20%, the original soundscape at 14%, and conventional maskers at 8%.

A virtual reality system to ‘test drive’ hearing aids in real-world settings

Matthew Neal – mathew.neal.2@louisville.edu
Instagram: @matthewneal32

Department of Otolaryngology and Communicative Disorders
University of Louisville
Louisville, Kentucky 40208
United States

Popular version of 3pID2 – A hearing aid “test drive”: Using virtual acoustics to accurately demonstrate hearing aid performance in realistic environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018736

Many of the struggles experienced by patients and audiologists during the hearing aid fitting process stem from a simple difficulty: it is really hard to describe in words how something will sound, especially if you have never heard it before. Currently, audiologists use brochures and their own words to counsel a patient during the hearing aid purchase process, but devices often must be purchased before patients can try them in their everyday lives. This research project has developed virtual reality (VR) hearing aid demonstration software that allows patients to listen to what hearing aids will sound like in real-world settings, such as noisy restaurants, churches, and the places where they need devices the most. Using the system, patients can make more informed purchasing decisions, and audiologists can program hearing aids to an individual’s needs and preferences more quickly.

This technology can also be thought of as a VR ‘test drive’ of wearing hearing aids, letting audiologists act as tour guides as patients try out features on a hearing aid. After turning a new hearing aid feature on, a patient will hear the devices update in a split second, and the audiologist can ask, “Was it better before or after the adjustment?” On top of getting device settings correct, hearing aid purchasers must also decide which ‘technology level’ they would like to purchase. Patients are given a choice among three or four technology levels, ranging from basic to premium, with an added cost of around $1,000 per increase in level. Higher technology levels incorporate the latest processing algorithms, but patients must decide whether they are worth the price, often without the ability to hear the difference. The VR hearing aid demonstration lets patients try out these different levels of technology, hear the benefits of premium devices, and decide whether the increase in speech intelligibility or listening comfort is worth the added cost.

A patient using the demo first puts on a custom pair of wired hearing aids. These hearing aids are the same devices that are sold in audiology clinics, but their microphones have been removed and replaced with wired inputs. The wires connect back to the VR program running on a computer, which simulates the audio in a given scene. For example, in the VR restaurant scene shown in Video 1, the software maps the audio of a complex, noisy restaurant to the hearing aid microphone positions while the devices are worn by a patient. The wires send the audio that would have been picked up in the simulated restaurant to the custom hearing aids, and the devices process and amplify the sound just as they would in that setting. All of the audio is updated in real time so that a listener can rotate their head, just as they might do in the real world. The system is currently being developed further, and it is planned to be implemented in audiology clinics as an advanced hearing aid fitting and patient counseling tool.
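In simplified form, the rendering works something like the sketch below: scene audio is convolved with head-orientation-dependent impulse responses for each hearing-aid microphone and streamed down the wires. The impulse-response bank, the 5-degree orientation grid, and the function names are illustrative assumptions, not the published system.

```python
# Conceptual sketch only: render a scene source to the two wired
# hearing-aid microphone inputs using head-rotation-dependent impulse
# responses. The IR bank and parameters are placeholders, not real data.
import numpy as np
from scipy.signal import fftconvolve

FS = 48000

# Assumed: one stereo impulse response (left/right mic) per 5 degrees of
# head rotation, measured or simulated for the restaurant scene.
rng = np.random.default_rng(0)
ir_bank = {deg: rng.standard_normal((2, 256)) * 0.01
           for deg in range(0, 360, 5)}

def render(source, head_deg):
    """Convolve the source with the IRs nearest the current head angle."""
    nearest = min(ir_bank, key=lambda d: abs((d - head_deg + 180) % 360 - 180))
    left = fftconvolve(source, ir_bank[nearest][0])
    right = fftconvolve(source, ir_bank[nearest][1])
    return left, right   # audio sent down the wires to the hearing aids

talker = rng.standard_normal(FS)        # one second of stand-in audio
left, right = render(talker, head_deg=27.0)
```

A real-time version would cross-fade between orientations in short blocks as the head tracker updates, rather than convolving whole signals at once.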

Video 1: The VR software being used to demonstrate the Speech in Loud Noise program on a Phonak Audeo Paradise hearing aid. The audio in this video is the directly recorded output of the hearing aid, overlaid with a video of the VR system in operation. When the hearing aid is switched to the Speech in Loud Noise program on the phone app, it becomes much easier and more comfortable to listen to the frontal talker, highlighting the benefits of this feature in a premium hearing aid.

Improving pitch sensitivity by cochlear-implant users

John Middlebrooks – middlebj@hs.uci.edu

University of California, Irvine, Irvine, CA, 92697-5310, United States

Matthew Richardson and Harrison Lin
University of California, Irvine

Robert Carlyon
University of Cambridge

Popular version of 2aPP6 – Temporal pitch processing in an animal model of normal and electrical hearing
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018352

A cochlear implant can restore reasonable speech perception to a deaf individual. Sensitivity to the pitches of sounds, however, typically is negligible. Lack of pitch sensitivity deprives implant users of appreciation of musical melodies, disrupts pitch cues that are important for picking out a voice amid competing sounds, and impairs understanding of lexical tones in tonal languages (like Mandarin or Vietnamese, for example). Efforts to improve pitch perception by cochlear-implant users could benefit from studies in experimental animals, in which the investigator can control the history of deafness and electrical stimulation and can evaluate novel implanted devices. We are evaluating cats for studies of pitch perception in normal and electrical hearing.

We train normal-hearing cats to detect changes in the pitches of trains of sound pulses – this is “temporal pitch” sensitivity. The cat presses a pedal to start a pulse train at a particular base rate. After a random delay, the pulse rate is changed and the cat can release the pedal to receive a food reward. The range of temporal pitch sensitivity by cats corresponds well to that of humans, although the pitch range of cats is shifted somewhat higher in frequency in keeping with the cat’s higher frequency range of hearing.
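In code, the stimulus for a single behavioral trial might look like the sketch below; the sample rate, pulse rates, and timing values are illustrative assumptions rather than the exact parameters used with the cats.

```python
# Sketch of a single behavioural trial's stimulus; sample rate, pulse
# rates, and timing are assumptions, not the experiment's parameters.
import numpy as np

FS = 44100  # audio sample rate (Hz)

def pulse_train(rate_hz, duration_s):
    """Unit impulses at the given rate (heard as clicks when played)."""
    out = np.zeros(int(FS * duration_s))
    times = np.arange(0.0, duration_s, 1.0 / rate_hz)
    out[(times * FS).astype(int)] = 1.0
    return out

rng = np.random.default_rng()
hold_s = rng.uniform(1.0, 4.0)  # random delay before the rate change
stimulus = np.concatenate([
    pulse_train(300.0, hold_s),   # base rate: cat presses pedal and waits
    pulse_train(360.0, 1.0),      # rate change: cat releases for reward
])
```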

We record small voltages from the scalps of sedated cats. The frequency-following response (FFR) consists of voltages originating in the brainstem that synchronize to the stimulus pulses. We can detect FFR signals across the range of pulse rates that is relevant for temporal pitch sensitivity. The acoustic change complex (ACC) is a voltage that arises from the auditory cortex in response to a change in an ongoing stimulus. We can record ACC signals in response to pitch changes across ranges similar to the sensitive ranges seen in the behavioral trials in normal-hearing cats.
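One simple way to test for such synchrony, sketched below, is to compare the scalp recording’s spectral magnitude at the stimulus pulse rate against neighboring frequency bins; this is a generic illustration, not the lab’s actual analysis.

```python
# Illustrative FFR check, not the lab's analysis code: look for a
# spectral peak at the stimulus pulse rate relative to nearby bins.
import numpy as np

FS = 10000   # assumed sampling rate of the scalp recording (Hz)

def ffr_snr(recording, pulse_rate_hz):
    """Ratio of spectral magnitude at the pulse rate to neighbouring bins."""
    spectrum = np.abs(np.fft.rfft(recording))
    freqs = np.fft.rfftfreq(len(recording), 1.0 / FS)
    peak = np.argmin(np.abs(freqs - pulse_rate_hz))
    neighbours = np.r_[spectrum[peak - 6:peak - 1], spectrum[peak + 2:peak + 7]]
    return spectrum[peak] / neighbours.mean()   # >> 1 suggests an FFR

# Example: a weak 300 Hz following response buried in noise.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
eeg = 0.1 * np.sin(2 * np.pi * 300 * t) + rng.standard_normal(FS)
print(ffr_snr(eeg, 300.0))
```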

We have implanted cats with devices like the cochlear implants used by humans. Both the FFR and the ACC could be recorded in response to electrical stimulation through the implants.

The ACC could serve as a surrogate for behavioral training for conditions in which a cat’s learning might not keep up with changes in stimulation strategies, like when a cochlear implant is newly implanted or a novel stimulating pattern is tested.

We have found previously, in short-term experiments in anesthetized cats, that an electrode inserted into the auditory (hearing) nerve can selectively stimulate pathways that are specialized for transmission of timing information, e.g., for pitch sensation. In ongoing experiments, we plan to place long-term indwelling electrodes in the auditory nerve. Pitch sensitivity with those electrodes will be evaluated with FFR and ACC recording. If the performance of the auditory nerve electrodes in the animal model turns out as anticipated, such electrodes could offer improved pitch sensitivity to human cochlear implant users.

The ability to differentiate between talkers based on their voice cues changes with age

Yael Zaltz – yaelzalt@tauex.tau.ac.il

Department of Communication Disorders, Steyer School of Health Professions, Faculty of Medicine, and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel

Popular version of 4aPP2 – The underlying mechanisms for voice discrimination across the life span
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018859

By using voice cues, a listener can keep track of a specific talker and tell that talker apart from other relevant and irrelevant talkers. Voice cues help listeners understand speech in everyday, noisy environments that include multiple talkers. The present study demonstrates that both young children and older adults are not as good at voice discrimination as young adults and that they rely more on top-down, higher-order cognitive resources to perform it.

Four experiments were designed to assess voice discrimination based on two voice cues: the speaker’s fundamental frequency and formant frequencies, the resonant frequencies of the vocal tract that reflect vocal tract length. Two of the experiments assessed voice discrimination in quiet conditions, one assessed the effect of noise, and one assessed the effect of different testing methods. In all experiments, an adaptive procedure was used to assess voice discrimination, and higher-order cognitive abilities such as non-verbal intelligence, attention, and processing speed were also evaluated.

The results showed that the youngest children and the older adults displayed the poorest voice discrimination, with significant correlations between voice discrimination and top-down cognitive abilities: children and older adults with better attention skills and faster processing speed (Figure 1) achieved better voice discrimination. In addition, voice discrimination in children was shown to depend more on comprehensive acoustic and linguistic information than in young adults, and children were less efficient at forming an acoustic template in memory to serve as a perceptual anchor for the task. The outcomes provide important insight into the effect of age on basic auditory abilities and suggest that voice discrimination is less automatic for children and older adults, perhaps as a result of less mature or deteriorated peripheral (spectral and/or temporal) processing. These findings may partly explain the difficulties of children and older adults in understanding speech in multi-talker situations.

Figure 1: Individual voice discrimination results for (a) the children and (b) the older adults as a function of their scores on the Trail Making Test, which assesses attention skills and processing speed.
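The paper does not specify which adaptive procedure was used, but a common choice in psychoacoustics is a two-down/one-up staircase, which converges on the voice-cue difference a listener can discriminate about 71% of the time. The sketch below is a generic illustration of that idea; the function names and parameter values are assumed.

```python
# Generic two-down/one-up adaptive staircase, for illustration only;
# the study's actual procedure and parameters are not specified here.
import random

def staircase(respond, start=30.0, step=2.0, n_reversals=8):
    """respond(delta) -> True if the listener detected a difference of
    `delta` (e.g., percent change in fundamental frequency)."""
    delta, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(delta):
            run += 1
            if run == 2:                  # two correct in a row: harder
                run = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, 0.5)
        else:                             # any miss: easier
            run = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals[-6:]) / 6        # threshold: mean of last reversals

# Example with a simulated listener whose true threshold is near 8%:
print(staircase(lambda d: random.random() < (0.95 if d > 8.0 else 0.3)))
```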