Determination of the sound pressure level in fitness studios

Max Brahman – max.brahman@getzner.com

1/2-22 Kirkham Road West, Keysborough, Melbourne, Victoria, 3173, Australia

Ulrich Gerhaher
Helmut Bertsch
Sebastian Wiederin

Popular version of 4pEA7 – Bringing free weight areas under acoustic control
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023540

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

In fitness studios, the ungentle dropping of weights, such as heavy dumbbells released from a height of 2 meters, is part of everyday life. As studios are often integrated into residential or office buildings, the floor structure must be selected so that the impact energy is insulated well enough to meet the airborne noise criteria in other parts of the building. Normally, an accurate prediction of the expected sound level, needed to select the optimal floor covering, can only be achieved through extensive on-site measurements with different floor coverings.

To be able to make accurate predictions without on-site measurements, Getzner Werkstoffe GmbH carried out more than 300 drop tests (see Figure 1) and measured the ceiling vibrations and the sound pressure level in the room below. Weights from 10 kg dumbbells up to 100 kg packages were dropped from heights of 10 cm up to 160 cm. This covers approximately the entire range of drops, from light dumbbells to heavy barbells. The collection of test results is integrated into a prediction tool developed by Getzner.

The tested g-fit Shock Absorb superstructures consist of 1 to 3 layers of PU foam mats with different dynamic stiffnesses and damping values. These superstructures are optimized for the respective area of application: soft superstructures for low weights or drop heights and stiffer superstructures for heavy weights and high drop heights to prevent impact on the subfloor. The high dynamic damping of the materials reduces the rebound of the dumbbells to prevent injuries.

Heat maps of the maxHold values of the vibrations were created for each of the four g-fit Shock Absorb superstructures and a sports floor covering (see Figure 2). This database can now be used in the prediction tool for two different forecasting approaches.

Knowing the dumbbell weight and the drop height, the sound pressure level in the room below can be determined for all build-up variants, taking the ceiling thickness into account using mean value curves. No additional measurement on site is required. Figure 3 compares measured values from a real project with the predicted values. The deviations between measurement and prediction tool lie between -1.5 dB and 4.6 dB, which is insignificant. The improvement over the original setup (40 mm rubber granulate sports flooring) is -9.5 dB for the advanced version and -22.5 dB for the pro version of the g-fit Shock Absorb floor construction.
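The first forecasting approach amounts to a table lookup: compute the drop energy from weight and height, then read the expected level off the measured mean value curves. A minimal sketch of that idea follows; the energies, levels, and the linear interpolation are illustrative assumptions, not the actual Getzner tool or its data.

```python
import numpy as np

# Hypothetical mean-value curve: maxHold vibration level (dB) in one
# third-octave band, tabulated against drop energy (J) for a single
# floor build-up. Values are illustrative only.
drop_energies = np.array([10.0, 100.0, 500.0, 1000.0, 1600.0])  # J
vibration_levels = np.array([55.0, 68.0, 78.0, 83.0, 87.0])     # dB

def drop_energy(mass_kg: float, height_m: float, g: float = 9.81) -> float:
    """Potential energy released by the drop, E = m * g * h."""
    return mass_kg * g * height_m

def predict_level(mass_kg: float, height_m: float) -> float:
    """Interpolate the mean-value curve at the drop energy of interest."""
    e = drop_energy(mass_kg, height_m)
    return float(np.interp(e, drop_energies, vibration_levels))

# A 50 kg weight dropped from 1 m releases about 490 J.
print(round(drop_energy(50, 1.0), 1))   # 490.5
print(round(predict_level(50, 1.0), 2))
```

In the real tool the lookup runs per third-octave band and is followed by a ceiling-dependent correction before the sound pressure level in the room below is reported.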

To predict the sound pressure level in another room of the building, the sound level for three simple drops is measured in the receiver room using a floor structure of medium thickness. Based on these measured values and the drop-test database, the expected frequency spectrum and sound pressure level in that room can then be predicted.
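One plausible way to use those three reference drops is to derive a per-band correction between the laboratory database and the actual room, then apply that correction to any other build-up. The sketch below assumes exactly this simple difference-spectrum approach; the band levels are invented for illustration and the paper does not specify the tool's internal method.

```python
import numpy as np

# Hypothetical third-octave band levels (dB): what the drop-test database
# predicts for a medium-stiffness build-up, and what was actually measured
# for three simple drops in the receiver room.
bands_hz = np.array([63, 125, 250, 500])
database_db = np.array([70.0, 66.0, 60.0, 52.0])
measured_db = np.array([[64.0, 59.0, 55.0, 46.0],
                        [65.0, 60.0, 54.0, 47.0],
                        [63.0, 61.0, 56.0, 45.0]])  # three drops

# Building-specific correction: mean difference per band between the
# database prediction and what this particular room receives.
correction_db = database_db - measured_db.mean(axis=0)

def predict_in_room(database_spectrum_db: np.ndarray) -> np.ndarray:
    """Shift any database spectrum by the room-specific correction."""
    return database_spectrum_db - correction_db

# Predicted receiver-room spectrum for a different (e.g. softer) build-up:
other_buildup_db = np.array([60.0, 55.0, 48.0, 40.0])
print(predict_in_room(other_buildup_db))
```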

The tool described makes it easier for Getzner to evaluate the planned floor structures of fitness studios. The solution subsequently offered enables compliance with the required sound insulation limits.

Figure 1: Carrying out the drop tests in the laboratory.
Figure 2: Maximum value of the ceiling vibration per third-octave band as a function of the drop energy.
Figure 3: Measured and predicted values for a CrossFit studio; left: sports flooring only, without g-fit Shock Absorb; middle: with additional g-fit Shock Absorb advanced; right: with g-fit Shock Absorb pro. Dumbbell weights up to 100 kg.

Data sonification & case study: presenting astronomical events to the visually impaired via sound

Kim-Marie Jones – kim.jones@arup.com

Arup, L5 Barrack Place 151 Clarence Street, Sydney, NSW, 2000, Australia

Additional authors: Mitchell Allen (Arup), Kashlin McCutcheon

Popular version of 3aSP4 – Development of a Data Sonification Toolkit and Case Study Sonifying Astrophysical Phenomena for Visually Impaired Individuals
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023301

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Have you ever listened to stars appearing in the night sky?

Image courtesy of NASA & ESA; CC BY 4.0

Data is typically presented in a visual manner. Sonification, by contrast, is the use of non-speech audio to convey information.

Acousticians at Arup had the exciting opportunity to collaborate with astrophysicist Chris Harrison to produce data sonifications of astronomical events for visually impaired individuals. The sonifications were presented at the 2019 British Science Festival (at a show entitled A Dark Tour of The Universe).

There are many sonification tools available online. However, many of these tools require in-depth knowledge of computer programming or audio software.

The researchers aimed to develop a sonification toolkit which would allow engineers working at Arup to produce accurate representations of complex datasets in Arup’s spatial audio lab (called the SoundLab), without needing to have an in-depth knowledge of computer programming or audio software.

Using sonifications to analyse data has some benefits over data visualisation. For example:

  • Humans are capable of processing and interpreting many different sounds simultaneously in the background while carrying out a task (for example, a pilot can focus on flying and interpret important alarms in the background, without having to turn their attention away to look at a screen or gauge),
  • The human auditory system is incredibly powerful and flexible and is capable of effortlessly performing extremely complex pattern recognition (for example, the health and emotional state of a speaker, as well as the meaning of a sentence, can be determined from just a few spoken words),
  • and of course, sonification also allows visually impaired individuals the opportunity to understand and interpret data.

The researchers scaled down and mapped each stream of astronomical data to a parameter of sound, and they successfully used their toolkit to create accurate sonifications of astronomical events for the show at the British Science Festival. The sonifications were vetted by visually impaired astronomer Nicolas Bonne to confirm that they were accurate and interpretable.
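The core of parameter mapping is a rescaling step: each data stream is normalised and mapped onto an audible property such as pitch. The sketch below illustrates the concept with a sine-tone renderer; the frequency range, tone duration, and the data themselves are assumptions for demonstration, not the Arup toolkit.

```python
import numpy as np

def map_to_frequency(values, f_min=220.0, f_max=880.0):
    """Linearly rescale a data stream into an audible frequency range
    (here two octaves starting at A3), a common parameter-mapping choice."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min())
    return f_min + norm * (f_max - f_min)

def sonify(values, duration=0.25, sr=44100):
    """Render each data point as a short sine tone and concatenate them."""
    tones = []
    for f in map_to_frequency(values):
        t = np.arange(int(duration * sr)) / sr
        tones.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)

# e.g. hypothetical star brightnesses mapped to pitch: brighter -> higher
audio = sonify([1.0, 3.0, 2.0, 5.0])
print(audio.shape)  # (44100,) -- one quarter-second tone per data point
```

The resulting array can be written to a WAV file or, as in the SoundLab, routed to a spatial loudspeaker array with position as an additional mapped parameter.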

Information on A Dark Tour of the Universe is available at the European Southern Observatory website, as are links to the sonifications. Make sure you listen to stars appearing in the night sky and galaxies merging! Table 1 gives specific examples of parameter mapping for these two sonifications. The concept of parameter mapping is further illustrated in Figure 1.

Table 1
Figure 1: image courtesy of NASA’s Space Physics Data Facility

Playability maps as an aid for musicians

Vasileios Chatziioannou – chatziioannou@mdw.ac.at

Department of Music Acoustics, University of Music and Performing Arts Vienna, Vienna, Vienna, 1030, Austria

Alex Hofmann
Department of Music Acoustics
University of Music and Performing Arts Vienna
Vienna, Vienna, 1030
Austria

Popular version of 5aMU6 – Two-dimensional playability maps for single-reed woodwind instruments
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023675

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Musicians show incredible flexibility when generating sounds with their instruments. Nevertheless, some control parameters need to stay within certain limits for this to occur. Take, for example, a clarinet player. Using too much or too little blowing pressure would result in no sound being produced by the instrument. The required pressure value (depending on the note being played and other instrument properties) has to stay within certain limits. A way to study these limits is to generate ‘playability diagrams’. Such diagrams have been commonly used to analyze bowed-string instruments, but may also be informative for wind instruments, as suggested by Woodhouse at the 2023 Stockholm Music Acoustics Conference. Following this direction, such diagrams in the form of playability maps can highlight the playable regions of a musical instrument, subject to variation of certain control parameters, and eventually support performers in choosing their equipment.

One way to fill in these diagrams is via physical modeling simulations. Such simulations allow predicting the generated sound while slowly varying some of the control parameters. Figure 1 shows such an example, where a playability region is obtained while varying the blowing pressure and the stiffness of the clarinet reed. (In fact, the parameter varied on the y-axis is the effective stiffness per unit area of the reed, corresponding to the reed stiffness after it has been mounted on the mouthpiece and the musician’s lip is in contact with it.) Black regions indicate ‘playable’ parameter combinations, whereas white regions indicate parameter combinations where no sound is produced.

Figure 1: Pressure-stiffness playability map. The black regions correspond to parameter combinations that generate sound.

One possible observation is that, when players wish to play with a larger blowing pressure (resulting in louder sounds), they should use stiffer reeds. As indicated by the plot, for a reed with a stiffness per area of 0.6 Pa/m (a soft reed) it is not possible to generate a note with a blowing pressure above 2750 Pa. When using a harder reed (say, with a stiffness of 1 Pa/m), one can play with larger blowing pressures, but it is then impossible to play with a pressure lower than 3200 Pa. Varying other types of control parameters could highlight similar effects for various instrument properties. For instance, playability maps for different mouthpiece geometries could be obtained, which would be valuable information for musicians and instrument makers alike.
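The map itself is conceptually simple: scan a grid of (pressure, stiffness) pairs and mark which combinations produce a note. The sketch below uses a toy threshold criterion in place of a full physical-model simulation; the threshold formulas and all numbers are invented for illustration and do not reproduce Figure 1.

```python
import numpy as np

def is_playable(pressure_pa: float, stiffness_pa_per_m: float) -> bool:
    """Toy stand-in for the physical model: a note sounds once the blowing
    pressure exceeds an oscillation threshold that grows with reed
    stiffness, but not once the reed is simply blown shut. Illustrative
    numbers only."""
    threshold = 2000.0 + 2500.0 * stiffness_pa_per_m  # onset of oscillation
    closing = 6000.0 * stiffness_pa_per_m             # reed pressed shut
    return threshold < pressure_pa < closing

pressures = np.linspace(1000, 6000, 50)    # Pa, x-axis of the map
stiffnesses = np.linspace(0.4, 1.2, 40)    # Pa/m (effective, per area), y-axis

# Boolean map: True (plotted black) where the combination produces a note.
playable = np.array([[is_playable(p, k) for p in pressures]
                     for k in stiffnesses])
print(playable.shape)
```

In the actual study each grid point is evaluated by running a time-domain simulation of the reed-bore system and checking whether a sustained oscillation develops.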

Documenting the sounds of southwest Congo: the case of North Boma

Lorenzo Maselli – lorenzo.maselli@ugent.be

Instagram: @mundenji

FWO, UGent, UMons, BantUGent, Ghent, Oost-Vlaanderen, 9000, Belgium

Popular version of 1aSC2 – Retroflex nasals in the Mai-Ndombe (DRC): the case of nasals in North Boma B82
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022724

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

“All language sounds are equal but some language sounds are more equal than others” – or, at least, that is the case in academia. While French i’s and English t’s are constantly re-dotted and re-crossed, the vast majority of the world’s linguistic communities remain undocumented, with their unique sound heritage gradually fading into silence. The preservation of humankind’s linguistic diversity relies solely on detailed documentation and description.

Over the past few years, a team of linguists from Ghent, Mons, and Kinshasa have dedicated their efforts to recording the phonetic and phonological oddities of southwest Congo’s Bantu varieties. Among these, North Boma (Figure 1) stands out for its display of rare sounds known as “retroflexes”. These sounds are particularly rare in central Africa, which mirrors a more general state of under-documentation of the area’s sound inventories. Through extensive fieldwork in the North Boma area, meticulous data analysis, and advanced statistical processing, these researchers have unveiled the first comprehensive account of North Boma’s retroflexes. As it turns out, North Boma retroflexes are exclusively nasal, a striking typological circumstance. Their work, presented in Sydney this year, not only enriches our understanding of these unique consonants but also unveils potential historical implications behind their prevalence in the region.

Figure 1 – the North Boma area

The study highlights the remarkable salience of North Boma’s retroflexes, characterised by distinct acoustic features that sometimes align with and sometimes deviate from those reported in the existing literature. This is clearly shown in Figure 2, where the North Boma nasal space is plotted using a technique known as “Multiple Factor Analysis”, which allows the study of small corpora organised into clear variable groups. As can be seen, their behaviour differs greatly from that of the other nasals of North Boma. This uniqueness also suggests that their presence in the area may stem from interactions with long-lost hunter-gatherer forest languages, providing invaluable insights into the region’s history.

Figure 2 – MFA results show that retroflex and non-retroflex nasals behave very differently in North Boma
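Multiple Factor Analysis balances groups of variables before a global decomposition, so that no single group (say, spectral measures) dominates the plotted space. A minimal NumPy sketch of that weighting idea follows; the data are random stand-ins, not the North Boma measurements, and production analyses would use a dedicated statistics package.

```python
import numpy as np

def mfa(groups):
    """Minimal Multiple Factor Analysis sketch: centre each variable
    group, weight it by 1/(its first singular value) so groups contribute
    comparably, then run a global PCA on the concatenated table."""
    weighted = []
    for X in groups:
        Xc = X - X.mean(axis=0)
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]  # first singular value
        weighted.append(Xc / s1)
    Z = np.hstack(weighted)
    # Global PCA via SVD: each row (here, a nasal token) gets factor scores.
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U * S  # plot the first two columns to get a map like Figure 2

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 30 nasal tokens measured in two variable
# groups (e.g. spectral vs. temporal acoustic measures).
scores = mfa([rng.normal(size=(30, 4)), rng.normal(size=(30, 3))])
print(scores.shape)  # (30, 7)
```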

Extraordinary sound patterns are waiting to be discovered in the least documented language communities of the world. North Boma serves as just one compelling example among many. As we navigate towards an unprecedented language loss crisis, the imperative for detailed phonetic documentation becomes increasingly evident.

Turning Up Ocean Temperature & Volume – Underwater Soundscapes in a Changing Climate

Lauren Freeman – lauren.a.freeman3.civ@us.navy.mil

Instagram: @laur.freeman

NUWC Division Newport, NAVSEA, Newport, RI, 02841, United States

Dr. Lauren A. Freeman, Dr. Daniel Duane, Dr. Ian Rooney from NUWC Division Newport and
Dr. Simon E. Freeman from ARPA-E

Popular version of 1aAB1 – Passive Acoustic Monitoring of Biological Soundscapes in a Changing Climate
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018023

Climate change is impacting our oceans and marine ecosystems across the globe. Passive acoustic monitoring of marine ecosystems has been shown to provide a window into the heartbeat of an ecosystem, its relative health, and even information such as how many whales or fish are present in a given day or month. By studying marine soundscapes, we collate all of the ambient noise at an underwater location and attribute parts of the soundscape to wind and waves, to boats, and to different types of biology. Long term biological soundscape studies allow us to track changes in ecosystems with a single, small instrument called a hydrophone.

I’ve been studying coral reef soundscapes for nearly a decade now, and am starting to have time series long enough to begin to see how climate change affects soundscapes. Some of the most immediate and pronounced impacts of climate change on shallow ocean soundscapes are evident in varying levels of ambient biological sound. We found a ubiquitous trend at research sites in both the tropical Pacific (Hawaii) and sub-tropical Atlantic (Bermuda): warmer water tends to be associated with higher ambient noise levels.

Different frequency bands provide information about different ecological processes (such as fish calls, invertebrate activity, and algal photosynthesis). The response of each of these processes to temperature changes is not uniform; however, each type of ambient noise increases in warmer water. At some point, ocean warming and acidification will fundamentally change the ecological structure of a shallow water environment. This would also be reflected in a fundamentally different soundscape, as described by peak frequencies and sound intensity.
While I have not monitored the phase shift of an ecosystem at a single site, I have documented and shown that healthy coral reefs with high levels of parrotfish and reef fish have fundamentally different soundscapes, as reflected in their acoustic signature at different frequency bands, than coral reefs that are degraded and overgrown with fleshy macroalgae. This suggests that long term soundscape monitoring could also track these ecological phase shifts under climate stress and other impacts to marine ecosystems such as overfishing.
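Splitting a hydrophone recording into ecologically meaningful frequency bands reduces each recording to a handful of band levels that can then be tracked against temperature over months or years. A minimal sketch of that band-level computation follows; the band edges and the synthetic signal are assumptions for illustration, not the bands or data of the study.

```python
import numpy as np
from scipy.signal import welch

def band_levels(signal, sr, bands):
    """Estimate the power spectral density of a recording and return the
    mean level (in dB) within each named frequency band."""
    f, psd = welch(signal, fs=sr, nperseg=4096)
    levels = {}
    for name, (lo, hi) in bands.items():
        mask = (f >= lo) & (f < hi)
        levels[name] = 10 * np.log10(psd[mask].mean())
    return levels

sr = 48000
t = np.arange(sr * 2) / sr  # two seconds of synthetic 'ambient noise'
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 400 * t) + 0.1 * rng.normal(size=t.size)

# Hypothetical band edges standing in for fish vocalizations, parrotfish
# scrapes, and invertebrate clicks / algal photosynthesis bubbles.
bands = {"fish": (100, 1000),
         "parrotfish": (1000, 8000),
         "invertebrates": (8000, 20000)}
levels = band_levels(sig, sr, bands)
print({k: round(v, 1) for k, v in levels.items()})
```

Repeating this for every recording in a multi-year deployment yields the per-band time series that can be regressed against water temperature.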

A healthy coral reef research site in Hawaii with vibrant corals, many reef fish, and copious nooks and crannies for marine invertebrates to make their homes.
Soundscape segmented into three frequency bands capturing fish vocalizations (blue), parrotfish scrapes (red), and invertebrate clicks along with algal photosynthesis bubbles (yellow). All features show an increase in ambient noise level (PSD, y-axis) with increasing ocean temperature at each site studied in Hawaii.