1/2-22 Kirkham Road West, Keysborough, Melbourne, Victoria, 3173, Australia
Ulrich Gerhaher
Helmut Bertsch
Sebastian Wiederin
Popular version of 4pEA7 – Bringing free weight areas under acoustic control
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023540
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
In fitness studios, the ungentle dropping of weights, such as heavy dumbbells from a height of 2 meters, is part of everyday life. Because the studios are often integrated into residential or office buildings, the floor structures must be selected so that the impact energy is insulated well enough to meet the airborne noise criteria in other parts of the building. Normally, an accurate prediction of the expected sound level, needed to select the optimal floor covering, can only be achieved through extensive measurements with different floor coverings on site.
To be able to make accurate predictions without on-site measurements, Getzner Werkstoffe GmbH carried out more than 300 drop tests (see Figure 1) and measured the ceiling vibrations and the sound pressure level in the room below. Dumbbells weighing 10 kg up to packages of 100 kg were dropped from heights of 10 cm up to 160 cm, covering approximately the entire range from dumbbell drops to heavy barbells. The collection of test results is integrated into a prediction tool developed by Getzner.
The tested g-fit Shock Absorb superstructures consist of 1 to 3 layers of PU foam mats with different dynamic stiffnesses and damping values. These superstructures are optimized for the respective area of application: soft superstructures for low weights or drop heights and stiffer superstructures for heavy weights and high drop heights to prevent impact on the subfloor. The high dynamic damping of the materials reduces the rebound of the dumbbells to prevent injuries.
Heat maps of the maxHold values of the vibrations were created for each of the four g-fit Shock Absorb superstructures and a sports floor covering (see Figure 2). This database can now be used in the prediction tool for two different forecasting approaches.
Knowing the dumbbell weight and the drop height, the sound pressure level in the room below can be determined for all superstructure variants, taking the ceiling thickness into account, using mean-value curves. No additional measurement on site is required. Figure 3 shows measured values of a real project versus the predicted values. The deviations between measurement and prediction lie between -1.5 dB and 4.6 dB, which is insignificant. The improvement over the baseline setup (40 mm rubber granulate sports flooring) is -9.5 dB for the advanced version and -22.5 dB for the pro version of the g-fit Shock Absorb floor construction.
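As a rough illustration of this first approach, the sketch below looks up a mean-value curve at the computed drop energy. The curve values, the single-number ceiling correction, and the function names are hypothetical, standing in for the tool's measured per-band database.

```python
# Minimal sketch of the mean-value-curve lookup (hypothetical numbers).
import numpy as np

G = 9.81  # gravitational acceleration in m/s^2

# Illustrative mean-value curve: maxHold vibration level vs. drop energy.
drop_energies = np.array([10.0, 50.0, 100.0, 500.0, 1000.0, 1600.0])  # J
vibration_levels = np.array([40.0, 52.0, 58.0, 70.0, 76.0, 80.0])     # dB

def predict_spl(mass_kg, drop_height_m, ceiling_correction_db=-10.0):
    """Interpolate the mean-value curve at the drop energy and apply an
    assumed ceiling-thickness correction to estimate the level below."""
    energy = mass_kg * G * drop_height_m  # impact energy in joules
    level = np.interp(energy, drop_energies, vibration_levels)
    return level + ceiling_correction_db

# Example: a 50 kg package dropped from 1 m.
print(f"Predicted level: {predict_spl(50.0, 1.0):.1f} dB")
```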
To predict the sound pressure level in another room of the building, the sound level of three simple drops is measured in the receiver room using a medium-thickness floor structure. Based on these measured values and the drop-test database, the expected frequency spectrum and sound pressure level in that room can then be predicted.
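This second approach amounts to shifting the database spectrum by a building-specific correction derived from the reference drops. The sketch below shows that arithmetic with invented per-band values; the real tool works with the full measured spectra.

```python
# Sketch of the transfer-correction idea (all band values invented).
import numpy as np

# Database mean-value spectrum for the floor structure of interest (dB).
database_spectrum = np.array([62.0, 60.0, 55.0, 48.0])

# The same three reference drops: measured on site vs. in the database.
onsite_ref_drops = np.array([[58.0, 57.0, 51.0, 44.0],
                             [59.0, 56.0, 52.0, 45.0],
                             [57.0, 55.0, 50.0, 43.0]])
database_ref_spectrum = np.array([61.0, 59.0, 54.0, 47.0])

# Building-specific correction per frequency band.
correction = onsite_ref_drops.mean(axis=0) - database_ref_spectrum
predicted_spectrum = database_spectrum + correction
print(predicted_spectrum)
```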
The tool described makes it easier for Getzner to evaluate the planned floor structures of fitness studios. The solution subsequently offered enables compliance with the required sound insulation limits.
Figure 1: Carrying out the drop tests in the laboratory.
Figure 2: Maximum value of the ceiling vibration per third-octave band as a function of the drop energy.
Figure 3: Measured and predicted values of a CrossFit studio; on the left, only sports flooring without g-fit Shock Absorb, in the middle with additional g-fit Shock Absorb advanced, and on the right with g-fit Shock Absorb pro; dumbbell weights up to 100 kg.
School of Architecture, Rensselaer Polytechnic Institute, Troy, New York, 12180, United States
Samuel R.V. Chabot – Rensselaer Polytechnic Institute
Jonas Braasch – Rensselaer Polytechnic Institute
Popular version of 4aAA8 – Effects of sounds on the visitors’ experience in museums
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023459
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Have you ever wondered how a museum’s subtle backdrop of sound affects your experience? Are you drawn to the tranquility of silence, the ambiance of exhibition-congruent sounds, or perhaps the hum of people chatting and footsteps echoing through the halls?
Museums increasingly realize that acoustics are crucial in shaping a visitor’s experience. Museum environments pose acoustic challenges, such as finding the right balance between speech intelligibility and privacy, particularly in open-plan exhibition halls and coupled rooms with large volumes and highly reflective surfaces.
Addressing the Challenge
Our proposal focuses on using sound masking systems to tackle these challenges. Sound masking is a proven and widely used technique in diverse settings, from offices to public spaces. Conventionally, it involves introducing low-level broadband noise to mask or diminish unwanted sounds, reducing distractions.
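For a concrete sense of what a conventional masker is, the sketch below generates low-level pink (1/f) broadband noise. Real sound-masking systems shape and calibrate their spectra to the space, so this is only a minimal illustration.

```python
# Minimal pink-noise masker sketch (illustrative, uncalibrated).
import numpy as np

def pink_noise(n_samples, rng=None):
    """Shape white noise to a 1/f power spectrum via the frequency domain."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                 # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)          # 1/f power = 1/sqrt(f) amplitude
    pink = np.fft.irfft(spectrum, n_samples)
    return pink / np.max(np.abs(pink))  # normalise to full scale

fs = 48000                              # sample rate in Hz
masker = 0.05 * pink_noise(10 * fs)     # 10 s of masker at a low level
```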
Context is Key
Recognizing the pivotal role of context in shaping human perception, strategically integrating sounds as design elements emerges as a powerful tool for enhancing visitor experiences. In line with this, we propose that sounds congruent with the museum environment can mask unwanted noise more effectively than conventional masking sounds such as low-level broadband noise. This approach reduces background-noise distractions and enhances engagement with the artwork, creating a more immersive and comprehensive museum experience.
Evaluating the Effects: The Cognitive Immersive Room (CIR)
We assessed these effects using the Cognitive Immersive Room at Rensselaer Polytechnic Institute. This cutting-edge space features a 360° visual display and an eight-channel loudspeaker system for spatial audio rendering. We projected panoramic photographs and ambisonic audio recordings from 16 exhibitions across five relevant museums — MASS MoCA, New York State Museum, Williams College Museum of Art, UAlbany Art Museum, and Hessel Museum of Art.
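As an aside for curious readers, first-order ambisonic material can be rendered to a ring of eight loudspeakers with a basic "sampling" decoder, sketched below. The speaker layout, channel convention, and decoder design here are our assumptions, since the paper does not detail the CIR's actual rendering chain.

```python
# Hypothetical first-order ambisonic decode to an eight-speaker ring.
import numpy as np

N_SPK = 8
angles = np.arange(N_SPK) * 2.0 * np.pi / N_SPK  # evenly spaced azimuths

def decode_foa(w, x, y):
    """Basic sampling decoder: sample the horizontal B-format sound field
    (W, X, Y channels) at each loudspeaker direction."""
    return np.stack([(w + x * np.cos(a) + y * np.sin(a)) / N_SPK
                     for a in angles])

# Example: one second of a source panned to 45 degrees.
fs, theta = 48000, np.pi / 4
sig = np.random.default_rng().standard_normal(fs)
speaker_feeds = decode_foa(sig, sig * np.cos(theta), sig * np.sin(theta))
```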
The Study Setup
Each participant experienced four soundscape scenarios: the original soundscape recorded in each exhibition, the recorded soundscape combined with a conventional sound masker, the recorded soundscape combined with a congruent sound masker, and “silence”, which involved no recording, only the ambient room noise of 41 dB. Figure 1 shows one of the displays used in the experiment, with the presented sound stimuli described below it.
Figure 1: Birds of New York exhibition – New York State Museum. The author took the photo with the permission of the museum’s Director of Exhibitions.
Scenario 1: the originally recorded soundscape in situ. Scenario 2: the recorded soundscape combined with a conventional sound masker. Scenario 3: the recorded soundscape combined with a congruent sound masker.
After each sound stimulus, participants responded to a questionnaire, administered through a program developed for this research that let them answer the questions on an iPad. After experiencing the four soundscapes, a final question asked for the participants’ soundscape preference within the exhibition context. Figure 2 shows the experiment design.
Figure 2
Key Findings
The results showed a statistically significant preference for congruent sounds, which reduced distractions, enhanced focus, and fostered a more comprehensive and immersive experience. A majority of 58% of participants preferred the congruent sound scenario, followed by silence at 20%, the original soundscape at 14%, and conventional maskers at 8%.
Centre for Marine Science and Technology, Curtin University, Bentley, Western Australia, 6102, Australia
Benjamin Saunders
School of Molecular and Life Sciences
Curtin University
Bentley, Western Australia, Australia
Christine Erbe, Iain Parnum, Chong Wei, and Robert McCauley
Centre for Marine Science and Technology
Curtin University
Bentley, Western Australia, Australia
Popular version of 5aAB6 – The search to identify the fish species chorusing along the southern Australian continental shelf
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023649
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Unknown fish species are singing in large aggregations along almost the entire southern Australian continental shelf on a daily basis, yet we still have little idea of what species these fish are or what the singing means to them. These singing aggregations are known as fish choruses; they occur when many individuals call continuously for a prolonged period, producing a cacophony of sound that can be detected kilometres away. Identifying the fish species that chorus in offshore marine environments is difficult: the current scientific understanding of the sound-producing abilities of fish species is limited, and offshore marine environments are challenging to access. This project was a pilot study that attempted to identify the source species of three fish chorus types (shown below) detected along the southern Australian continental shelf off Bremer Bay in Western Australia, using previously collected acoustic recordings.
Each fish chorus type occurred over the hours of sunset, dominating the soundscape within a unique frequency band. Have a listen to the audio file below to get a feeling for how noisy the waters off Bremer Bay become as the sun goes down and the fish start singing. The activity of each chorus type changed over time, indicating seasonality in presence and intensity: Choruses I and II peaked in calling presence and intensity from late winter to early summer, while Chorus III peaked from late winter to late spring. This informed the sampling methodology of the pilot study, and in December 2019, underwater acoustic recorders and unbaited video recorders were deployed simultaneously on the seafloor along the continental shelf off Bremer Bay to collect evidence of any large aggregations of fish present during the production of the choruses. Chorus I and the start of Chorus II were detected on the acoustic recordings, corresponding with video recordings of large aggregations of Red Snapper (Centroberyx gerrardi) and Deep Sea Perch (Nemadactylus macropterus). A spectrogram of the acoustic recordings and snapshots from the corresponding underwater video recordings are shown below.
The presence of large aggregations of Red Snapper while Chorus I was being produced was of particular interest to the authors. Previous dissections of this species had revealed anatomical features that could support sound production through vibration of the swimbladder by specialised muscles. To explore this further, computerized tomography (CT) scans of several Red Snapper specimens were undertaken. We are currently undertaking 3D modelling of the sound-producing mechanisms of this species to compute its resonance frequency and better understand whether it could be producing Chorus I.
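To give a feel for this kind of calculation, a swimbladder is sometimes approximated, very crudely, as a resonating spherical gas bubble (the Minnaert resonance). The sketch below uses that approximation with illustrative values; the authors' 3D models account for the bladder's true shape and the surrounding tissue, which this ignores.

```python
# First-order swimbladder resonance via the Minnaert bubble formula.
import math

def minnaert_frequency(radius_m, depth_m, gamma=1.4, rho=1025.0):
    """Resonance frequency (Hz) of a spherical gas bubble at depth."""
    pressure = 101325.0 + rho * 9.81 * depth_m  # static pressure in Pa
    return math.sqrt(3.0 * gamma * pressure / rho) / (2.0 * math.pi * radius_m)

# Example with invented values: a 2 cm radius bladder at 100 m depth.
print(f"{minnaert_frequency(0.02, 100.0):.0f} Hz")
```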
Listening to fish choruses can tell us where these fish live, what habitats they use, and how they spawn and feed; it can indicate their biodiversity and, in certain circumstances, can determine the local abundance of a fish population. For this information to be applied to marine spatial planning and fish species management, it is necessary to identify which fish species are producing these choruses. This pilot study was the first step in an attempt to develop an effective methodology for the challenging task of identifying the source species of fish choruses in offshore environments. We recommend that future studies take an integrated approach to species identification, including the use of arrays of hydrophones paired with underwater video recorders.
Council for Scientific and Industrial Research, Gauteng, 0001, South Africa
Popular version of 3pAAb – Classroom acoustics: a case study of the cost-benefit of retrofitted interventions
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023323
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Achieving the right acoustic conditions in classrooms is often dismissed by school planners as too difficult or too expensive. This is to the detriment of students who cannot hear the teacher properly, especially children being taught in their second language, as is common in South Africa. This study shows that acoustic treatment need not be difficult or costly to achieve.
To refute the notion that acoustic improvements are expensive and specialized, this experimental case study was designed and carried out in a typical classroom in the small rural village of Cofimvaba in the Eastern Cape, South Africa. The ideal classroom environment has a low ambient noise level of 35 dB and a reverberation time below 0.7 seconds, but this classroom had a reverberation time of 1 second. Reverberation time, the time it takes for a sound to die away, essentially describes how much a room echoes, which negatively affects speech clarity. The experimental intervention simulated the installation of floating ceiling islands by installing different materials on the roofs of temporary gazebos in the classroom.
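For intuition about why hanging absorptive material shortens the reverberation time, here is a rough Sabine-equation estimate. The room dimensions and added absorption below are assumed values, not the study's, which measured reverberation times directly.

```python
# Rough Sabine estimate of reverberation time (assumed room and absorption).
def rt60_sabine(volume_m3, absorption_m2):
    """Sabine reverberation time in seconds."""
    return 0.161 * volume_m3 / absorption_m2

volume = 7.0 * 9.0 * 3.0            # hypothetical classroom volume, m^3
a_existing = 0.161 * volume / 1.0   # absorption implied by the measured 1.0 s
a_added = 15.0                      # assumed extra absorption from ceiling islands, m^2

print(f"{rt60_sabine(volume, a_existing + a_added):.2f} s")  # ~0.67 s
```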
The four materials used were acoustic ceiling tiles, representing a typical commercial solution, and three DIY alternatives: cardboard egg cartons, thermal insulation batting, and sponge foam bed mattresses. Each material improved the reverberation time. The best performer was the sponge at 0.6 seconds, while the other three materials performed equally at 0.8 seconds.
The cost of each material was reduced to a rate per square meter. The most expensive material was the acoustic ceiling tiles at R 363.85/m², while the cheapest was the egg cartons at R 22.22/m², or less if they can be sourced as waste items.
The availability of materials was evaluated in terms of the distance to supply and whether the product is available in a retail store or requires a special order and delivery. The batting is available from hardware stores nationwide and could be purchased by walk-in from the local hardware store, within a 2 km radius of the site. The egg cartons could be ordered online and delivered from a packaging company within a 150 km radius. The foam mattresses could be purchased by walk-in at a local retailer within a 5 km radius of the site. The acoustic ceiling tiles were ordered online and delivered from the warehouse within a 700 km radius of the site.
Using the weighted sum model and assigning equal weighting to each attribute (acoustic performance, cost, distance to supply, and walk-in availability), a performance score was calculated for each intervention material. The batting ranked first, followed in order by the sponge, the egg cartons, and lastly the acoustic tiles.
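The weighted sum model itself is straightforward arithmetic, sketched below with illustrative normalised scores (0 to 1, higher is better). The numbers are our assumptions chosen to reproduce the reported ranking, not the study's measured attribute values.

```python
# Weighted sum model with equal weights (scores are illustrative only).
materials = {
    #                 acoustics, cost, distance, walk-in availability
    "acoustic tiles": [0.7, 0.1, 0.1, 0.0],
    "egg cartons":    [0.7, 1.0, 0.6, 0.0],
    "batting":        [0.7, 0.8, 1.0, 1.0],
    "sponge foam":    [1.0, 0.5, 0.9, 1.0],
}
weights = [0.25, 0.25, 0.25, 0.25]  # equal weighting of the four attributes

scores = {name: sum(w * s for w, s in zip(weights, vals))
          for name, vals in materials.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```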
The case study demonstrates that an improvement in acoustic conditions of at least a 0.2 second reduction in reverberation time can be achieved without significant cost. Although the batting did not achieve the ideal reverberation time overall, when only the speech frequencies were considered it fell within the recommended maximum of 0.7 seconds.
The recommended design intervention is a frame containing batting, covered with a taut fabric and suspended from ceiling hooks, thus avoiding disruptive construction works. This shows that improved classroom acoustics can be achieved without high cost or technical difficulty.
Centre for Marine Science and Technology, Curtin University, Bentley, WA, 6102, Australia
David Dall’Osto
Applied Physics Laboratory
University of Washington
Seattle, Washington
United States
Popular version of 1pAO2 – Long-range underwater acoustic detection of aircraft surface impacts – the influence of acoustic propagation conditions and impact parameters
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022761
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
In the right circumstances, sound can travel thousands of kilometres through water, so when Malaysia Airlines flight MH370 went missing in the Indian Ocean in 2014, we searched recordings from underwater microphones called hydrophones for any signal that could be connected to that tragic event. One signal of interest was found, but closer examination suggested it was unlikely to be related to the loss of the aircraft.
Fast-forward five years: in 2019 the fatal crash of an F35 fighter aircraft in the Sea of Japan was detected by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) using hydrophones near Wake Island in the north-western Pacific, some 3000 km from the crash site [1].
Fig. 1. Locations of the F35 crash and the CTBTO HA11N hydroacoustic station near Wake Island that detected it.
With the whereabouts of MH370 still unknown, we decided to compare the circumstances of the F35 crash with those of the loss of MH370 to see whether we should change our original conclusions about the signal of interest.
Fig. 2. Location of the CTBTO HA01 hydroacoustic station off the southwest corner of Australia. The two light blue lines are the measured bearing of the signal of interest with an uncertainty of +/- 0.75 degrees.
We found that long-range hydrophone detection of the crash of MH370 is much less likely than detection of the F35 crash, so our conclusions still stand. However, there is some fascinating science behind the differences.
Fig. 3. Top: comparison of modelled received signal strengths versus distance from the hydrophones for the MH370 and F35 cases. Bottom: water depth and deep sound channel (DSC) axis depth along each path.
Aircraft impacts generate a lot of underwater sound, but most of it travels steeply downward, then bounces up and down between the seafloor and sea surface, losing energy with each bounce and dying out before it can travel far horizontally. For long-range detection to be possible, the sound must be trapped in the deep sound channel (DSC), a depth region where the water properties keep the sound from hitting the seabed or sea surface. There are two ways to get the sound from a surface impact into the DSC: the first is by reflections from a downward-sloping seabed, and the second is if the impact occurs somewhere the deep sound channel comes close to the sea surface. Both mechanisms operated in the F35 case, creating very favourable conditions for coupling the sound into the deep sound channel.
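To put numbers on the deep sound channel, the sketch below evaluates the canonical Munk sound-speed profile, a textbook idealisation of deep-ocean conditions, and finds the channel axis at the speed minimum. The actual analysis used measured sound-speed data along each path (Figs. 4 and 5), not this idealised profile.

```python
# Canonical Munk sound-speed profile and its deep-sound-channel axis.
import numpy as np

def munk_profile(z, z_axis=1300.0, B=1300.0, c_axis=1500.0, eps=0.00737):
    """Sound speed in m/s at depth z (m) for the idealised Munk profile."""
    eta = 2.0 * (z - z_axis) / B
    return c_axis * (1.0 + eps * (eta - 1.0 + np.exp(-eta)))

depths = np.linspace(0.0, 5000.0, 501)
speeds = munk_profile(depths)
print(f"DSC axis (speed minimum) at about {depths[np.argmin(speeds)]:.0f} m")
```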
Fig. 4. Sound speed and water depth along the track from CTBTO’s HA11N hydroacoustic station (magenta circle) to the estimated F35 crash location (magenta triangle). The broken white line is the deep sound channel axis.
We don’t know where MH370 crashed, but the signal of interest came from somewhere along a bearing extending northwest into the Indian Ocean from the southwest corner of Australia. This rules out the second mechanism, and there are very few locations along this bearing where the first mechanism would come into play.
Fig. 5. Sound speed and water depth in the direction of interest from CTBTO’s HA01 hydroacoustic station off Cape Leeuwin, Western Australia (magenta circle). The broken white line is the deep sound channel axis.
This analysis doesn’t completely rule out the signal of interest being related to MH370, but it still seems less likely than it being due to low-level seismic activity, something that results in signals at HA01 from similar directions about once per day.
[1] Metz D, Obana K, Fukao Y, “Remote Hydroacoustic Detection of an Airplane Crash”, Pure and Applied Geophysics, 180 (2023), 1343-1351. https://doi.org/10.1007/s00024-022-03117-6
Arup, L5 Barrack Place 151 Clarence Street, Sydney, NSW, 2000, Australia
Additional authors: Mitchell Allen (Arup), Kashlin McCutcheon
Popular version of 3aSP4 – Development of a Data Sonification Toolkit and Case Study Sonifying Astrophysical Phenomena for Visually Impaired Individuals
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023301
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Have you ever listened to stars appearing in the night sky?
Acousticians at Arup had the exciting opportunity to collaborate with astrophysicist Chris Harrison to produce data sonifications of astronomical events for visually impaired individuals. The sonifications were presented at the 2019 British Science Festival (at a show entitled A Dark Tour of The Universe).
There are many sonification tools available online. However, many of these tools require in-depth knowledge of computer programming or audio software.
The researchers aimed to develop a sonification toolkit that would allow engineers at Arup to produce accurate representations of complex datasets in Arup’s spatial audio lab (called the SoundLab), without in-depth knowledge of computer programming or audio software.
Using sonifications to analyse data has some benefits over data visualisation. For example:
Humans are capable of processing and interpreting many different sounds simultaneously in the background while carrying out a task (for example, a pilot can focus on flying and interpret important alarms in the background, without having to look away at a screen or gauge),
The human auditory system is incredibly powerful and flexible and is capable of effortlessly performing extremely complex pattern recognition (for example, the health and emotional state of a speaker, as well as the meaning of a sentence, can be determined from just a few spoken words) [source],
and of course, sonification also allows visually impaired individuals the opportunity to understand and interpret data.
The researchers scaled down and mapped each stream of astronomical data to a parameter of sound, and they successfully used their toolkit to create accurate sonifications of astronomical events for the show at the British Science Festival. The sonifications were vetted by visually impaired astronomer Nicolas Bonne to confirm that they faithfully represented the data.
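Parameter mapping boils down to rescaling each data stream into a perceptual range. The sketch below shows the idea for stars appearing in the night sky; the data fields, ranges, and mapping choices are hypothetical, not the toolkit's actual design.

```python
# Hypothetical parameter mapping: star data to note onset, pitch and level.
import numpy as np

def linmap(x, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi]."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# One star per row: (time of appearance in s, apparent magnitude).
stars = np.array([[0.5, 1.2],
                  [2.0, 4.5],
                  [3.1, 2.8]])

events = []
for onset, magnitude in stars:
    # Brighter stars (lower magnitude) map to higher, louder notes.
    pitch_hz = linmap(magnitude, 6.0, 0.0, 200.0, 2000.0)
    level_db = linmap(magnitude, 6.0, 0.0, -30.0, -6.0)
    events.append((onset, pitch_hz, level_db))
print(events)
```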
Information on A Dark Tour of the Universe is available at the European Southern Observatory website, as are links to the sonifications. Make sure you listen to stars appearing in the night sky and galaxies merging! Table 1 gives specific examples of parameter mapping for these two sonifications. The concept of parameter mapping is further illustrated in Figure 1.
Table 1
Figure 1: image courtesy of NASA’s Space Physics Data Facility