4aAA9 – The successful application of acoustic lighting in restaurants

Zackery Belanger, zb@arcgeometer.com
Arcgeometer LC, Detroit, MI

Popular version of 4aAA9 – The successful application of acoustic lighting in restaurants
Presented Thursday morning, December 2, 2021
181st ASA Meeting, Seattle, WA
Click here to read the abstract

To understand the rise of acoustic lighting in restaurants, it is best to go back to the beginning of modern architectural acoustics. In 1895, Harvard University opened the new Fogg Art Museum with its centerpiece lecture hall, which failed immediately because of the room’s tendency to sustain sound. The reverberation was so long that the lecturer’s voice would drown itself out. Since neither an acoustic product industry nor acousticians to prescribe its products yet existed, Harvard could do only one thing: appeal to its own physics department. A sympathetic ear was found in graduate student Wallace Clement Sabine who, through tedious work and seat cushions borrowed from a nearby theater, came to understand the sound of the hall and how to fix it. Seat cushions started all this, and the remarkable thing is that they were likely never intended to be acoustic at all.

As 20th-century design marched on, ornament went away, surfaces flattened, and reverberation flourished. The acoustic product industry arose with dedicated, engineered acoustic surfaces to counter this change, and architecture largely forgot that everything else in a building still had acoustic properties.

Lighting, like seat cushions, has always been acoustic because it has form and physical presence. It was only a matter of time before lighting designers began to ask how they could have more acoustic influence. This happened recently, with informed shifts in the material and scale of their designs, and lighting stepped firmly into the realm of reverberation control.

[Arcgeometer-Light-Fixture-Simulation.mp4, Simulation of the acoustic influence of traditional lighting]

The common problem in restaurant acoustics is excessive noise, which results when patrons feel they are not being heard. They subconsciously raise their voices to compensate for poor acoustics. The solution can be quite simple: get enough absorption in the room to change the behavior of the crowds. Give them acoustic comfort. Since restaurant owners and patrons tend to enjoy a sense of liveliness, the amount of absorption needed to fix a room is usually fairly low.
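
As a rough illustration of why a modest amount of absorption goes a long way, the sketch below applies Sabine’s classic formula, RT60 = 0.161 V/A, to a hypothetical dining room. The dimensions and absorption coefficients are assumed purely for illustration and are not measurements from any project described here.

```python
# A minimal sketch, assuming a hypothetical dining room: Sabine's formula
# RT60 = 0.161 * V / A relates room volume V (m^3) and total absorption A
# (metric sabins) to reverberation time. All numbers below are invented for
# illustration, not measurements from any project described above.

def sabine_rt60(volume_m3, absorption_sabins):
    """Sabine reverberation time in seconds."""
    return 0.161 * volume_m3 / absorption_sabins

volume = 15 * 10 * 4                            # a 15 m x 10 m x 4 m room
surface_area = 2 * (15 * 10 + 15 * 4 + 10 * 4)  # walls, floor, ceiling
base_absorption = surface_area * 0.05           # hard surfaces, avg. coefficient ~0.05

# Suppose 40 m^2 of absorptive light fixtures with an assumed coefficient of 0.8.
fixture_absorption = 40 * 0.8

print(f"RT60 before: {sabine_rt60(volume, base_absorption):.1f} s")
print(f"RT60 after:  {sabine_rt60(volume, base_absorption + fixture_absorption):.1f} s")
```

Under these assumed numbers the reverberation time drops from roughly 3.9 s to about 1.7 s, which is the kind of change a relatively small amount of well-placed absorption can make.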

[LightArt-Echo-300-S-Wacker.jpg, A Chicago cafe with a prevalence of planar glass]
Credit: Courtesy of LightArt
300 S Wacker St.
Photo by Huntsman Architectural Group

Other barriers to good acoustics arise, including visual design, conflicts with elements like sprinklers, post-opening timing, and a lack of confidence in proposed solutions. Restaurant owners tend to consider a noisy crowd a good problem to have, and are averse to big changes no matter how poor the acoustics of the space. This is where tapping into lighting makes sense. Lighting is meant to be seen, is accepted in more central locations within a space, can be integrated with other acoustically absorptive surfaces, is easy to install, and has the indispensable primary function of providing light.

[LightArt-Penn-State.jpg, Acoustic lighting integrated with other absorption]
Credit: Courtesy of LightArt
Penn State Health, Hampden Medical Center
Photo by CannonDesign

The efficacy of this approach has been demonstrated with lab results that confirm the performance of these fixtures, and with numerous case studies in existing and new restaurants. Acoustic lighting brings reverberation control in a way that is palatable to restaurant owners, and in doing so may lead the way into a future for acoustics that re-integrates the forgotten influence of everything else. It is hard to imagine furniture, art, textiles, plants, and all manner of visual presences not following suit.

[LightArt-Echo-Portage-Bay.jpg, A restaurant that was acoustically improved with lighting.]
Credit: Courtesy of LightArt
Portage Bay Cafe
Photo by Chris Bowden

1pBAb5 – Predicting Spontaneous Preterm Birth Risk is Improved when Quantitative Ultrasound Data are Included with Prior Clinical Data

Barbara L. McFarlin, bmcfar1@uic.edu
Yuxuan Liu
Shashi Roshan
Aiguo Han
Douglas G. Simpson
William D. O’Brien, Jr.

Popular version of 1pBAb5 – Predicting spontaneous preterm birth risk is improved when quantitative ultrasound data are included with prior clinical data
Presented Monday afternoon, November 29, 2021
181st ASA Meeting
Click here to read the abstract

Preterm birth (PTB) is defined as birth before 37 completed weeks’ gestation. Annually, more than 400,000 infants are born preterm in the U.S., and an estimated 15 million worldwide. The consequences of PTB for survivors are severe, can be life-long, and cost society $30 billion annually, a cost that far exceeds that of any major adult diagnosis. Predicting which women are at risk for spontaneous preterm birth (sPTB) has been medically challenging due to 1) a lack of signs and symptoms of preterm labor until intervention is too late, and 2) a lack of screening tools that signal sPTB risk early enough for an intervention to be effective. Spontaneous preterm labor is a syndrome associated with multiple etiologies, of which only a portion may be related to cervical insufficiency; however, regardless of the cause of PTB, the cervix (the opening to the womb) must get ready for birth to allow passage of the baby.

Our novel quantitative ultrasound (QUS) technology has been developed by our multidisciplinary investigative team (ultrasound, engineering, and nurse midwifery) and shows promise of becoming a widely available and useful method for early detection of spontaneous preterm birth risk. Our preliminary results, from 275 pregnant women who each received two ultrasounds during pregnancy, showed that QUS improved prediction of preterm birth when added to current clinical and patient risk factors. QUS is a feature that can readily be added to current clinical ultrasound systems, thereby shortening the path from basic science innovation to improved clinical care for women.
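
The study’s actual statistical model is not described in this summary. Purely as an illustrative sketch, one common way to ask whether an added feature set improves risk prediction is to compare cross-validated classifiers with and without it; all feature names and data below are synthetic placeholders, not the study’s data.

```python
# A purely illustrative sketch (not the authors' model): compare risk
# prediction from clinical features alone versus clinical + quantitative
# ultrasound (QUS) features using cross-validated AUC. All feature names
# and data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 275                                  # cohort size mentioned in the summary
clinical = rng.normal(size=(n, 4))       # e.g. history, cervical length, age, BMI (hypothetical)
qus = rng.normal(size=(n, 2))            # e.g. attenuation, backscatter (hypothetical)
y = (rng.random(n) < 0.12).astype(int)   # synthetic preterm/term outcomes

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc_clinical = cross_val_score(model, clinical, y, cv=5, scoring="roc_auc").mean()
auc_combined = cross_val_score(
    model, np.hstack([clinical, qus]), y, cv=5, scoring="roc_auc"
).mean()
print(f"AUC, clinical only: {auc_clinical:.2f}  AUC, clinical + QUS: {auc_combined:.2f}")
```

With real data, an improvement would appear as a higher AUC for the combined model; with the random placeholders here, both scores hover near chance.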

This research was supported by National Institutes of Health grant R01 HD089935.

4aAA10 – Acoustic Effects of Face Masks on Speech: Impulse Response Measurements Between Two Head and Torso Simulators

Victoria Anderson – vranderson@unomaha.edu
Lily Wang – lilywang@unl.edu
Chris Stecker – cstecker@spatialhearing.org
University of Nebraska Lincoln at the Omaha Campus
1110 S 67th Street
Omaha, Nebraska

Popular version of 4aAA10 – Acoustic effects of face masks on speech: Impulse response measurements between two binaural mannikins
Presented Thursday morning, December 2nd, 2021
181st ASA Meeting
Click here to read the abstract

Due to the COVID-19 pandemic, masks that cover both the mouth and nose have been used to reduce the spread of illness. While they are effective at preventing the transmission of COVID, they have also had a noticeable impact on communication. Many find it difficult to understand a speaker who is wearing a mask. Masks affect the sound level and direction of speech and, if they are opaque, can block visual cues that help in understanding speech. Many studies have explored the effect face masks have on understanding speech. The purpose of this project was to begin assembling a database of the effect that common face masks have on impulse responses from one head and torso simulator (HATS) to another. An impulse response is a measurement of how sound radiates out from a source and bounces through a space. The resulting impulse response data can be used by researchers to simulate masked verbal communication scenarios.

To see how the masks specifically affect the impulse response, all measurements were taken in an anechoic chamber so that no reverberant noise would be included in the measurement. The measurements were taken with one HATS in the middle of the chamber serving as the source, and another HATS placed at varying distances to act as the receiver. The mouth of the source HATS was covered with various face masks: paper, cloth, N95, nano, and face shield. These were put on individually and in combination with a face shield to cover a wider range of masked conditions that would reasonably occur in real life. The receiver HATS took measurements at 90° and 45° from the source, at distances of 6’ and 8’. A sine sweep, which is a signal that changes frequency over a set amount of time, was played to determine the impulse response of each masked condition at every location. The receiver HATS measured the impulse response in both the right and left ears, and the software used to produce the sine sweep was used to analyze and store the measurement data. This data will be available for use in simulated communication scenarios to better portray how sound behaves in a space when coming from a masked speaker.
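
For readers unfamiliar with the sine-sweep method, the minimal sketch below shows the general idea: play a sweep, record what arrives at the receiver, and deconvolve to recover the impulse response. It uses standard NumPy/SciPy calls, and a synthetic “toy system” stands in for the real source-to-receiver path; it is not the measurement software used in this project.

```python
# A minimal sketch of the sine-sweep impulse-response method, using standard
# NumPy/SciPy calls. A synthetic "toy system" stands in for the real path from
# the source HATS, through a mask, to the receiver HATS; in the actual
# measurement the recorded signal comes from the receiver's microphones.
import numpy as np
from scipy.signal import chirp, fftconvolve

fs = 48000
t = np.arange(0, 5, 1 / fs)
sweep = chirp(t, f0=20, f1=20000, t1=t[-1], method="logarithmic")

# Stand-in for what the receiver would record: the sweep passed through a
# short filter (a direct arrival plus one weaker reflection).
toy_system = np.zeros(2400)
toy_system[0] = 1.0
toy_system[800] = 0.3
recorded = fftconvolve(sweep, toy_system)[: len(sweep)]

# Deconvolve by spectral division (with a tiny regularization term) to
# recover the impulse response between source and receiver.
n = len(sweep)
H = np.fft.rfft(recorded, n) / (np.fft.rfft(sweep, n) + 1e-12)
impulse_response = np.fft.irfft(H, n)

print("peak of recovered IR at sample", int(np.argmax(np.abs(impulse_response))))
```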


3aEA7 – Interactive Systems for Immersive Spaces

Samuel Chabot – chabos2@rpi.edu
Jonathan Mathews – mathej4@rpi.edu
Jonas Braasch – braasj@rpi.edu
Rensselaer Polytechnic Institute
110 8th St
Troy, NY, 12180

Popular version of 3aEA7 – Multi-user interactive systems for immersive virtual environments
Presented Wednesday morning, December 01, 2021
181st ASA Meeting
Click here to read the abstract

In the past few years, immersive spaces have become increasingly popular. These spaces, most often used as exhibits and galleries, incorporate large displays that completely envelop groups of people, speaker arrays, and even reactive elements that respond to the actions of the visitors within. One of the primary challenges in creating productive applications for these environments is the integration of intuitive interaction frameworks. For users to take full advantage of these spaces, whether for productivity, education, or entertainment, the interfaces used to interact with data should be easy to understand and provide predictable feedback.

In the Collaborative Research-Augmented Immersive Virtual Environment, or CRAIVE-Lab, at Rensselaer Polytechnic Institute, we have integrated a variety of technologies to foster natural interaction with the space. First, we developed a dynamic display environment for our immersive screen, written in JavaScript, that makes it easy to create display modules for everything from images to remote desktops. Second, we have incorporated spatial information into these display objects, so that audiovisual content presented on the screen generates spatialized audio over our 128-channel speaker array at the corresponding location. Finally, we have installed a multi-sensor platform that integrates a top-down camera array and a 16-channel spherical microphone to provide continuous tracking of multiple users, voice activity detection associated with each user, and isolated audio.
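
As a purely hypothetical sketch of the spatialization idea (not the CRAIVE-Lab’s actual software), the snippet below maps a display object’s horizontal position on an enclosing screen to constant-power gains on the two nearest loudspeakers of an idealized 128-channel ring.

```python
# A purely hypothetical sketch (not the CRAIVE-Lab software): map a display
# object's horizontal position on an enclosing screen to constant-power gains
# on the two nearest loudspeakers of an idealized 128-channel ring.
import math

NUM_SPEAKERS = 128                     # channel count mentioned above
SPEAKER_STEP = 360.0 / NUM_SPEAKERS    # angular spacing of the idealized ring

def screen_x_to_azimuth(x_norm):
    """Map a normalized screen x position (0..1) to an azimuth in degrees."""
    return (x_norm % 1.0) * 360.0

def pan_gains(azimuth_deg):
    """Constant-power gains for the two speakers bracketing an azimuth."""
    pos = (azimuth_deg % 360.0) / SPEAKER_STEP
    lower = int(pos) % NUM_SPEAKERS
    upper = (lower + 1) % NUM_SPEAKERS
    frac = pos - int(pos)              # 0 at the lower speaker, 1 at the upper
    theta = frac * math.pi / 2         # constant-power crossfade
    return {lower: math.cos(theta), upper: math.sin(theta)}

# A display module sitting 30% of the way along the screen:
print(pan_gains(screen_x_to_azimuth(0.30)))
```

A real system would use the measured loudspeaker layout and room response, but the core idea is the same: the on-screen position of the content determines which channels carry its audio.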

By combining these technologies together, we can create a user experience within the room that encourages dynamic interaction with data. For example, delivering a presentation in this space, a process that typically consists of several file transfers and a lackluster visual experience, can now be performed with minimal setup, using the presenter’s own device, and with spatial audio when needed.

Control of lights and speakers can be done via a unified control system. Feedback from the sensor system allows display elements to be positioned relative to the user. Identified users can take ownership of specific elements on the display and interact with the system concurrently, which makes group interactions and shared presentations far less cumbersome than with typical methods. The elements that make up the CRAIVE-Lab are not particularly novel, as far as contemporary immersive rooms are concerned. However, they intertwine into a network that provides functionality for the occupants far greater than the sum of its parts.

2pAB4 – Towards understanding how dolphins use sound to understand their environment

YeonJoon Cheong – yjcheong@umich.edu
K. Alex Shorter – kshorter@umich.edu
Bogdan-Ioan Popa – bipopa@umich.edu
University of Michigan, Ann Arbor
2350 Hayward St
Ann Arbor, MI 48109-2125

Popular version of 2pAB4 – Acoustic scene modeling for echolocation in bottlenose dolphin
Presented Tuesday Morning, November 30, 2021
181st ASA Meeting
Click here to read the abstract

Dolphins are excellent at using ultrasound to discover their surroundings and find hidden objects. In a process called echolocation, dolphins project outgoing ultrasound pulses called clicks and receive echoes from distant objects, which are converted into a model of the surroundings. Despite significant research on echolocation, how dolphins process echoes to find objects in cluttered environments, and how they adapt their searching strategy based on the received echoes are still open questions.

Fig. 1. A target discrimination task where the dolphin finds and touches the target of interest. During the experiment the animal was asked to find a target shape in the presence of up to three additional “distraction” objects randomly placed in four locations (red dashed locations). The animal was blindfolded using “eye-cups”, and data from the trials were collected using sound (Dtag) and motion recording tags (MTag) on the animal, overhead video, and acoustic recorders at the targets.

Here we developed a framework that combines experimental measurements with physics-based models of the acoustic source and environment to provide new insight into echolocation. We conducted echolocation experiments at Dolphin Quest Oahu, Hawaii, which consisted of two stages. In the first stage, a dolphin was trained to search for a designated target using both vision and sound. In the second stage, the dolphin was asked to find the designated target placed randomly in the environment among distraction objects while blindfolded with suction-cup "eye-cups" (Fig. 1). After each trial, the dolphin was rewarded with a fish if it selected the correct target.

Target discrimination tasks have been used by many research groups to investigate echolocation, and interesting behavior has been observed during these tasks. For example, animals sometimes swim from object to object, carefully inspecting them before making a decision. Other times they swim without hesitation straight to the target. These types of behavior are often characterized using measurements of animal acoustics and movement, but how clutter in the environment changes the difficulty of the discrimination task, or how much information the animals gather about the acoustic scene before target selection, is not fully understood.

Our approach assumes that the dolphins memorize target echoes from different locations in the environment during training. We hypothesize that in a cluttered environment the dolphin selects the object that best matches the learned target echo signature, even if it is not an exact match. Our framework enables the calculation of a parameter, the "likelihood parameter," that quantifies how well a received echo matches the learned echo. This parameter was used to build a map of the most likely target locations in the acoustic scene.
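
To make the idea of a likelihood parameter concrete, here is a minimal, hypothetical sketch: score each candidate echo by its peak normalized cross-correlation with the learned target template, and treat higher scores as more likely target locations. This illustrates the concept only; it is not the authors’ actual model, and all signals below are synthetic.

```python
# A hedged sketch of the idea behind a "likelihood parameter": score how well
# a received echo matches a learned target echo template, here with a peak
# normalized cross-correlation. Illustration of the concept only, not the
# authors' actual model; all signals below are synthetic.
import numpy as np

def likelihood(received, template):
    """Peak normalized cross-correlation between an echo and the template."""
    r = (received - received.mean()) / (received.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    xcorr = np.correlate(r, t, mode="full") / len(t)
    return float(np.max(xcorr))

rng = np.random.default_rng(1)
template = rng.normal(size=256)                       # learned target echo
target_echo = template + 0.2 * rng.normal(size=256)   # noisy echo from the target
clutter_echo = rng.normal(size=256)                   # echo from a distractor

scores = {"target position": likelihood(target_echo, template),
          "distractor position": likelihood(clutter_echo, template)}
print(scores)   # higher score -> more likely target location in the scene map
```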

During the experiments, the dolphin swam to and investigated positions in the environment with high predicted target likelihood, as estimated by our approach. When the cluttered scene resulted in multiple objects with high likelihood values, the animal was observed to move toward and scan those areas to collect information before making a decision. In other scenarios, the computed likelihood parameter was large at only one position, which explained why the animal swam to that position without hesitation. These results suggest that dolphins might create a similar "likelihood map" as information is gathered before target selection.

The proposed approach provides important additional insight into the acoustic scene formed by echolocating dolphins and how the animals use this evolving information to classify and locate targets. Our framework will lead to a more complete understanding of the complex perception process used by echolocating animals.