Popular version of 2pAB8 – Moving Cargo, Keeping Whales: Investigating Solutions for Ocean Noise Pollution
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0027065
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Figure 1. Image Courtesy of ZoBell, Vanessa M., John A. Hildebrand, and Kaitlin E. Frasier. “Comparing pre-industrial and modern ocean noise levels in the Santa Barbara Channel.” Marine Pollution Bulletin 202 (2024): 116379.
Southern California waters are lit up with noise pollution (Figure 1). The Port of Los Angeles and the Port of Long Beach are the first and second busiest shipping ports in the Western Hemisphere, supporting transits by large container ships that radiate noise throughout the region. Underwater noise generated by these vessels dominates ocean soundscapes, negatively affecting marine organisms, such as mammals, fish, and invertebrates, that rely on sound for daily life functions. In this project, we modeled what the ocean would sound like without human activity and compared it with what it sounds like today. We found that in this region, which encompasses the Channel Islands National Marine Sanctuary and feeding grounds of the endangered northeastern Pacific blue whale, modern ocean noise levels were up to 15 dB higher than pre-industrial levels, roughly a 30-fold increase in acoustic power. This would be like having a picnic in a meadow versus having a picnic on an airport tarmac.
Reducing ship noise in critical habitats has become an international priority for protecting marine organisms. A variety of noise reduction techniques have been discussed, and some are already operational. Understanding the effectiveness of these techniques requires broad stakeholder engagement, robust funding, and advanced signal processing. We simulated a variety of noise reduction scenarios and identified effective strategies for quieting whale habitats in the Santa Barbara Channel region. Simulating conservation scenarios allows more techniques to be explored without having to implement them, saving time, money, and resources in the pursuit of protecting the ocean.
Jian-yu Lu – jian-yu.lu@ieee.org
X (Twitter): @Jianyu_lu
Instagram: @jianyu.lu01
Department of Bioengineering, College of Engineering, The University of Toledo, Toledo, Ohio, 43606, United States
Popular version of 1pBAb4 – Reconstruction methods for super-resolution imaging with PSF modulation
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026777
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Imaging is a fundamental tool for advancing science, engineering, and medicine, and it is indispensable in our daily life. Here are a few examples: acoustical and optical microscopes have helped to advance biology. Ultrasound imaging, X-ray radiography, X-ray computerized tomography (X-ray CT), magnetic resonance imaging (MRI), gamma cameras, single-photon emission computerized tomography (SPECT), and positron emission tomography (PET) are routinely used for medical diagnoses. Electron and scanning tunneling microscopes have revealed structures at the nanometer or atomic scale, where one nanometer is one billionth of a meter. And photography, including the cameras in our cell phones, is part of everyday life.
Despite the importance of imaging, Ernst Abbe first recognized in 1873 that the resolution of wave-based imaging systems has a fundamental limit, known as the diffraction limit, caused by the diffraction of waves; in optics it is commonly written as d ≈ λ/(2NA), which at best is roughly half the wavelength used. The limit affects acoustical, optical, electromagnetic, and other waves alike.
Recently (see Lu, IEEE TUFFC, January 2024), the researcher developed a general method to overcome this long-standing diffraction limit. The method is not only applicable to wave-based imaging systems such as ultrasound, optical, electromagnetic, radar, and sonar; in principle it also applies to other linear shift-invariant (LSI) imaging systems such as X-ray radiography, X-ray CT, MRI, gamma camera, SPECT, and PET, since it increases image resolution by introducing high spatial frequencies through modulation of the point-spread function (PSF) of an LSI imaging system. The modulation can be induced remotely from outside the object to be imaged, or produced by small particles that are introduced into or onto the surface of the object and manipulated remotely. An LSI system can be understood through the example of a geometric-distortion-corrected optical camera: the photo of a person keeps the same size and shape (it is shift invariant) if the person merely shifts position perpendicular to the camera's optical axis within the field of view.
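To make the LSI picture concrete, the minimal one-dimensional sketch below models the recorded image as the object convolved with a Gaussian PSF. The Gaussian shape, spatial scale, and scatterer spacing are assumptions chosen purely for illustration, and this is not the paper's PSF-modulation reconstruction; it only shows how a diffraction-limited PSF blurs closely spaced points together, which is exactly the high-spatial-frequency detail that PSF modulation aims to recover.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative LSI imaging model: image = object convolved with the PSF.
# A diffraction-limited PSF acts as a low-pass filter that blurs fine detail.

x = np.linspace(-50.0, 50.0, 2001)        # spatial axis (arbitrary units)
obj = np.zeros_like(x)
obj[[950, 1000, 1050]] = 1.0              # three closely spaced point scatterers

sigma = 5.0                               # assumed Gaussian PSF width ~ diffraction limit
psf = np.exp(-x**2 / (2.0 * sigma**2))
psf /= psf.sum()

img = fftconvolve(obj, psf, mode="same")  # LSI model: image = object (*) PSF
# The three points merge into a single broad lobe; resolving them again requires
# the high spatial frequencies that PSF modulation is designed to reintroduce.
```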
Figure 1 below demonstrates the efficacy of the method using acoustical waves. The method was used to image a passive object (first row) through pulse-echo imaging and to image wave-source distributions (second row) with a receiver. The best images obtainable under Abbe's diffraction limit are shown in the second column, and the super-resolution images (better than the diffraction limit) obtained with the new method are in the last column. The super-resolution images reached a resolution close to 1/3 of the wavelength used, obtained from a distance with an f-number (focal distance divided by the diameter of the transducer) close to 2.
Because the method developed is based on the convolution theory of an LSI system and many practical imaging systems are LSI, the method opens an avenue for various new applications in science, engineering, and medicine. With a proper choice of a modulator and imaging system, nanoscale imaging with resolution similar to that of a scanning electron microscope (SEM) is possible even with visible or infrared light.
1/2-22 Kirkham Road West, Keysborough, Melbourne, Victoria, 3173, Australia
Ulrich Gerhaher
Helmut Bertsch
Sebastian Wiederin
Popular version of 4pEA7 – Bringing free weight areas under acoustic control
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023540
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
In fitness studios, the ungentle dropping of weights, such as heavy dumbbells from a height of 2 meters, is part of everyday life. As the studios are often integrated into residential or office buildings, the floor structures must be selected so that the impact energy is adequately insulated and the airborne noise criteria in other parts of the building are met. Normally, an accurate prediction of the expected sound level, needed to select the optimal floor covering, can only be achieved through extensive on-site measurements with different floor coverings.
To be able to make accurate predictions without on-site measurements, Getzner Werkstoffe GmbH carried out more than 300 drop tests (see Figure 1) and measured the ceiling vibrations and the sound pressure level in the room below. Dumbbells weighing 10 kg up to packages of 100 kg were dropped from heights of 10 cm up to 160 cm, covering approximately the entire range from light dumbbell drops to heavy barbells. The collection of test results is integrated into a prediction tool developed by Getzner.
The tested g-fit Shock Absorb superstructures consist of 1 to 3 layers of PU foam mats with different dynamic stiffnesses and damping values. These superstructures are optimized for the respective area of application: soft superstructures for low weights or drop heights and stiffer superstructures for heavy weights and high drop heights to prevent impact on the subfloor. The high dynamic damping of the materials reduces the rebound of the dumbbells to prevent injuries.
Heat maps of the maxHold values of the vibrations were created for each of the four g-fit Shock Absorb superstructures and a sports floor covering (see Figure 2). This database can now be used in the prediction tool for two different forecasting approaches.
Knowing the dumbbell weight and the drop height, the sound pressure level in the room below can be determined for all tested build-up variants, taking the ceiling thickness into account by means of mean-value curves. No additional measurement on site is required. Figure 3 shows measured values from a real project versus the predicted values. The deviations between measurement and prediction tool are -1.5 dB and 4.6 dB, which is insignificant. The improvement over the baseline setup (40 mm rubber granulate sports flooring) is -9.5 dB for the advanced version and -22.5 dB for the pro version of the g-fit Shock Absorb floor construction.
To predict the sound pressure level in another room of the building, the sound level is measured for three simple drops in the receiver room, using a floor structure of medium thickness. Based on these measured values and the drop-test database, the expected frequency spectrum and sound pressure level in that room can then be predicted.
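As an illustration of how a database-plus-reference-drops prediction can work in principle, the sketch below averages the per-band offset between the on-site reference measurements and the corresponding database spectra, then applies that offset to the database spectrum of any other drop. The band count, data layout, and simple dB arithmetic are assumptions made for this example; it is not Getzner's actual tool.

```python
import numpy as np

def predict_spectrum(db_ref, meas_ref, db_target):
    """db_ref:    database spectra (dB per third-octave band) for the reference drops
       meas_ref:  on-site measured spectra for the same reference drops
       db_target: database spectrum of the drop to be predicted on site"""
    offsets = [np.asarray(m) - np.asarray(d) for m, d in zip(meas_ref, db_ref)]
    room_correction = np.mean(offsets, axis=0)   # average per-band room/ceiling offset
    return np.asarray(db_target) + room_correction

def overall_level(spectrum_db):
    """Energetic summation of band levels into a single overall level."""
    return 10.0 * np.log10(np.sum(10.0 ** (np.asarray(spectrum_db) / 10.0)))

# Example with made-up numbers: three reference drops, four frequency bands
db_ref   = [[60, 58, 55, 50], [65, 62, 58, 53], [70, 66, 61, 55]]
meas_ref = [[57, 56, 52, 48], [62, 60, 55, 51], [67, 64, 58, 53]]
predicted = predict_spectrum(db_ref, meas_ref, db_target=[75, 70, 64, 57])
print(predicted, overall_level(predicted))
```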
The tool described makes it easier for Getzner to evaluate the planned floor structures of fitness studios. The solution subsequently offered enables compliance with the required sound insulation limits.
Figure 1: Carrying out the drop tests in the laboratory.
Figure 2: Maximum value of the ceiling vibration per third-octave band as a function of the drop energy.
Figure 3: Measured and predicted values for a CrossFit studio; left: sports flooring only, without g-fit Shock Absorb; middle: with additional g-fit Shock Absorb advanced; right: with g-fit Shock Absorb pro; dumbbell weights up to 100 kg.
Arup, L5 Barrack Place 151 Clarence Street, Sydney, NSW, 2000, Australia
Additional authors: Mitchell Allen (Arup), Kashlin McCutcheon
Popular version of 3aSP4 – Development of a Data Sonification Toolkit and Case Study Sonifying Astrophysical Phenomena for Visually Impaired Individuals
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023301
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Have you ever listened to stars appearing in the night sky?
Acousticians at Arup had the exciting opportunity to collaborate with astrophysicist Chris Harrison to produce data sonifications of astronomical events for visually impaired individuals. The sonifications were presented at the 2019 British Science Festival (at a show entitled A Dark Tour of The Universe).
There are many sonification tools available online. However, many of these tools require in-depth knowledge of computer programming or audio software.
The researchers aimed to develop a sonification toolkit which would allow engineers working at Arup to produce accurate representations of complex datasets in Arup’s spatial audio lab (called the SoundLab), without needing to have an in-depth knowledge of computer programming or audio software.
Using sonifications to analyse data has some benefits over data visualisation. For example:
Humans are capable of processing and interpreting many different sounds simultaneously in the background while carrying out a task (for example, a pilot can focus on flying and interpret important alarms in the background, without having to turn his/her attention away to look at a screen or gauge),
The human auditory system is incredibly powerful and flexible and is capable of effortlessly performing extremely complex pattern recognition (for example, the health and emotional state of a speaker, as well as the meaning of a sentence, can be determined from just a few spoken words) [source],
and of course, sonification also allows visually impaired individuals the opportunity to understand and interpret data.
The researchers scaled down and mapped each stream of astronomical data to a parameter of sound, and they successfully used their toolkit to create accurate sonifications of astronomical events for the show at the British Science Festival. The sonifications were vetted by visually impaired astronomer Nicolas Bonne to confirm that they faithfully represented the data.
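To illustrate the parameter-mapping idea in its simplest form, the sketch below (not Arup's toolkit, which runs in the SoundLab) turns a tiny, made-up set of "star appearance" events into short sine tones: time of appearance is mapped to note onset and brightness to pitch. The event values and mapping ranges are invented purely for illustration.

```python
import numpy as np
from scipy.io import wavfile

# Toy parameter-mapping sonification: each event becomes a short sine tone.
fs = 44100
events = [(0.2, 0.1), (0.9, 0.6), (1.5, 0.9), (2.3, 0.3)]   # (time in s, brightness 0..1)

duration = 3.0
audio = np.zeros(int(duration * fs))
for t0, brightness in events:
    freq = 220.0 + brightness * (1760.0 - 220.0)   # map brightness -> pitch in Hz
    n = int(0.25 * fs)                             # 250 ms tone
    t = np.arange(n) / fs
    tone = 0.3 * np.sin(2.0 * np.pi * freq * t) * np.hanning(n)
    start = int(t0 * fs)
    audio[start:start + n] += tone                 # place the tone at its onset time

wavfile.write("stars_sonified.wav", fs, (audio * 32767).astype(np.int16))
```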
Information on A Dark Tour of the Universe is available at the European Southern Observatory website, as are links to the sonifications. Make sure you listen to stars appearing in the night sky and galaxies merging! Table 1 gives specific examples of parameter mapping for these two sonifications. The concept of parameter mapping is further illustrated in Figure 1.
Table 1
Figure 1: image courtesy of NASA’s Space Physics Data Facility
Department of Music Acoustics, University of Music and Performing Arts Vienna, Vienna, Vienna, 1030, Austria
Alex Hofmann
Popular version of 5aMU6 – Two-dimensional playability maps for single-reed woodwind instruments
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023675
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Musicians show incredible flexibility when generating sounds with their instruments. Nevertheless, some control parameters need to stay within certain limits for a sound to be produced at all. Take, for example, a clarinet player: using too much or too little blowing pressure results in no sound from the instrument. The required pressure (which depends on the note being played and other instrument properties) has to stay within certain limits. One way to study these limits is to generate 'playability diagrams'. Such diagrams have commonly been used to analyze bowed-string instruments, but they may also be informative for wind instruments, as suggested by Woodhouse at the 2023 Stockholm Music Acoustics Conference. Following this direction, such diagrams, in the form of playability maps, can highlight the playable regions of a musical instrument subject to variation of certain control parameters and eventually support performers in choosing their equipment.
One way to fill in these diagrams is via physical-modeling simulations. Such simulations predict the generated sound while some of the control parameters are slowly varied. Figure 1 shows such an example, where a playability region is obtained while varying the blowing pressure and the stiffness of the clarinet reed. (In fact, the parameter varied on the y-axis is the effective stiffness per unit area of the reed, i.e., the reed stiffness once it is mounted on the mouthpiece and the musician's lip is in contact with it.) Black regions indicate 'playable' parameter combinations, whereas white regions indicate combinations for which no sound is produced.
Figure 1: Pressure-stiffness playability map. The black regions correspond to parameter combinations that generate sound.
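As a rough illustration of how such a map can be filled in, the sketch below sweeps two control parameters of a crude digital-waveguide clarinet (in the spirit of classic McIntyre/Schumacher/Woodhouse and STK-style models) and marks which combinations produce a sustained oscillation. The normalized blowing pressure and reed-table slope are stand-ins for the physical blowing pressure (Pa) and effective reed stiffness (Pa/m) of the paper; the model and its parameter ranges are illustrative assumptions, not the authors' simulation.

```python
import numpy as np

FS = 44100

def simulate(p_blow, reed_slope, dur=0.3, bore_delay=100):
    dl = np.zeros(bore_delay)      # delay line: round trip along the bore
    lp = 0.0                       # one-pole low-pass state (open-end reflection)
    out = np.zeros(int(dur * FS))
    idx = 0
    for i in range(out.size):
        lp = 0.7 * lp + 0.3 * dl[idx]                       # returning, filtered wave
        dp = -0.95 * lp - p_blow                            # pressure difference across the reed
        refl = np.clip(0.7 + reed_slope * dp, -1.0, 1.0)    # reed reflection coefficient
        dl[idx] = p_blow + dp * refl                        # wave sent back into the bore
        out[i] = dl[idx]
        idx = (idx + 1) % bore_delay
    return out

def is_playable(signal, fs=FS, threshold=1e-3):
    tail = signal[-int(0.05 * fs):]          # last 50 ms, after the attack transient
    return np.std(tail) > threshold          # sustained oscillation present?

# Sweep the two control parameters and mark which combinations 'speak'
pressures = np.linspace(0.0, 1.5, 31)
slopes = np.linspace(-0.6, -0.1, 26)
playability_map = np.array([[is_playable(simulate(p, s)) for p in pressures] for s in slopes])
```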
One observation is that when players wish to play with a larger blowing pressure (resulting in louder sounds), they should use stiffer reeds. As indicated by the plot, for a reed with a stiffness per area of 0.6 Pa/m (a soft reed) it is not possible to generate a note with a blowing pressure above 2750 Pa. When using a harder reed (say, with a stiffness of 1 Pa/m), one can play with larger blowing pressures, but in this case it is impossible to play with a pressure lower than 3200 Pa. Varying other control parameters could highlight similar effects for various instrument properties. For instance, playability maps for different mouthpiece geometries could be obtained, which would be valuable information for musicians and instrument makers alike.