–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
Consider a black hole in outer space, where gravity is so strong that not even light waves can escape. Now, imagine a device here on Earth that can slow sound waves so much that they cannot escape. Scientists call this intriguing phenomenon an “acoustic black hole” (ABH). An ABH structure can trap sound waves and produce a unique environment for acoustic measurement and manipulation.
How can a structure be designed to trap sound in this way? The acoustic black hole effect is achieved by altering the way sound travels down a duct. Traditional ABHs are based upon the pioneering research of Mironov and Pislyakov (2002) that used specific shapes to guide sound waves, such as rings with inner radii that vary down the length of the duct. However, in this work, the approach is different: varying the mechanical impedance of the duct walls themselves (see Figure 1). Mechanical impedance refers to how much a structure resists motion when sound waves press against it. By engineering an impedance profile—essentially, the way the walls respond to sound throughout the duct—researchers can create a situation where sound waves decrease in speed as they travel through the duct. A gradual reduction in speed effectively simulates the event horizon of a black hole, causing the sound waves to be trapped and significantly attenuated (see Figure 2).
Figure 1. A sound wave enters a duct where the walls are stiffer at the entrance and softer at the base. As the wave moves through the duct, it slows down due to the changing properties of the walls.
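To see in equations why a vanishing sound speed acts like an event horizon, consider an assumed power-law speed profile along the duct (an illustration only, not necessarily the profile engineered in this work). The travel time to the end of the duct diverges, so an ideal wave keeps slowing but never arrives at the end to be reflected:

```latex
\[
  c(x) = c_0\left(1 - \frac{x}{L}\right)^{n}, \qquad
  T(x) = \int_0^x \frac{\mathrm{d}x'}{c(x')} .
\]
\[
  \text{For } n = 1:\quad
  T(x) = \frac{L}{c_0}\,\ln\!\frac{1}{\,1 - x/L\,}
  \;\longrightarrow\; \infty \quad \text{as } x \to L .
\]
```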
To better understand this phenomenon, the researchers derived and solved governing equations using two methods. First, they used a mathematical technique called the WKB approximation, which helps find approximate solutions to wave equations. Second, they used numerical simulation, which involves using computers to model complex systems. The solutions they obtained from these approaches revealed that specific impedance profiles could effectively decelerate and absorb acoustic waves, resulting in very little reflection or transmission of sound.
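To give a feel for what such a WKB solution looks like, here is a minimal Python sketch. It assumes a one-dimensional pressure wave and a prescribed, slowly varying sound speed c(x); the frequency, duct length, and power-law profile are illustrative stand-ins, and the model does not attempt to capture the paper's actual wall-impedance formulation.

```python
import numpy as np

# Toy WKB sketch: a 1D pressure wave in a duct whose effective sound speed
# c(x) decreases smoothly toward the far end (assumed profile, for illustration).
# WKB solution of p'' + (w/c(x))^2 p = 0:
#   p(x) ~ sqrt(c(x)/c0) * cos( w * integral_0^x dx'/c(x') )
# so the local wavelength shrinks with c(x) and the amplitude drops as sqrt(c).

f = 500.0                       # driving frequency, Hz (illustrative)
w = 2 * np.pi * f
c0, L, n = 343.0, 1.0, 1.0      # entrance sound speed (m/s), duct length (m), profile power

x = np.linspace(0.0, 0.99 * L, 2000)       # stop just short of x = L, where c -> 0
c = c0 * (1 - x / L) ** n                  # assumed power-law speed profile
phase = w * np.cumsum(np.gradient(x) / c)  # accumulated phase, w * integral dx/c
p = np.sqrt(c / c0) * np.cos(phase)        # WKB pressure field (arbitrary units)

# The crests bunch together and the amplitude decays toward the far end,
# matching the qualitative picture in Figure 2.
print(f"local wavelength at the entrance: {c[0] / f:.3f} m")
print(f"local wavelength near the end:    {c[-1] / f:.4f} m")
print(f"amplitude near the end / at entrance: {abs(p[-200:]).max() / abs(p[:200]).max():.2f}")
```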
To verify their findings, the researchers employed a sophisticated program called Sierra/SD. This program uses a fully coupled structural-acoustic finite element algorithm. In brief, this algorithm allows researchers to create a computer model of any design they want and test how it responds to any sound source. This tool allows for detailed simulations of how sound interacts with various structures and provides a robust framework for testing theoretical predictions.
Overall, this research not only enhances understanding of the acoustic black hole effect, but also paves the way for the development of innovative acoustic materials and devices. By using the principles of ABH, these advancements could lead to improved noise control and enhanced manipulation of sound waves, with potential applications in various fields such as engineering, architecture, and environmental science.
Figure 2. An illustration of a sound wave vanishing in an acoustic black hole structure. The ABH effect is seen from the wavefronts becoming closer together (slower sound speed) and lower in amplitude (lower peaks, higher troughs) at the right end.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525.
Department of Interior Architecture and Environmental Design, Bilkent University, Ankara 06800, Turkey
Ela Fasllija, Enkela Alimadhi, Zekiye Şahin, Elif Mercan, Donya Dalirnaghadeh
Popular version of 5aPP9 – A Corpus-based Approach to Define Turkish Soundscape Attributes
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0019179
We hear sound wherever we are: on buses, in streets, in cafeterias, museums, universities, halls, churches, mosques, and so forth. How we describe sound environments (soundscapes) changes according to the different experiences we have throughout our lives. Based on this, we wondered how people delineate sound environments and, thus, how they perceive them.
There are reasons to believe that soundscape affective attributes may be named differently in a Turkish context. Given the historical and cultural differences between countries, we thought it important to assess the sound environment by asking individuals of different ages from all over Turkey. To this end, we used the Corpus-driven approach (CDA), a method from cognitive linguistics, which allowed us to collect data from laypersons and identify how they describe soundscapes through the adjectives they use.
The aim of this study was to identify linguistically and culturally appropriate Turkish equivalents of soundscape attributes. The study involved two phases. In the first phase, an online questionnaire was distributed to native Turkish speakers proficient in English, seeking adjective descriptions of their auditory environments and English-to-Turkish translations. This CDA phase yielded 79 adjectives.
Figure 1. Example public spaces: a library and a restaurant.
In the second phase, a semantic-scale questionnaire was used to evaluate recordings of different acoustic environments in public spaces. The set comprised seven distinct types of public space: cafes, restaurants, concert halls, masjids, libraries, study areas, and design studios. The recordings were collected at various times of the day so that they also captured different levels of crowdedness and other specific features. A total of 24 audio recordings were evaluated for validity; each was listened to 10 times by different participants. In total, 240 listening evaluations were made in random order, with participants rating 79 adjectives per recording on a five-point Likert scale.
Figure 2. The research process and results.
The results of the study were analyzed using a principal component analysis (PCA), which showed that soundscape attributes have two main components: Pleasantness and Eventfulness. The components were organized in a two-dimensional model, in which each is associated with a main orthogonal axis, such as annoying-comfortable and dynamic-uneventful. This circular organization of soundscape attributes is supported by two additional axes, namely chaotic-calm and monotonous-enjoyable. It was also observed that, in the Turkish circumplex, the Pleasantness axis was formed by adjectives derived from verbs in a causative form, describing the emotion the space causes the user to feel. Turkish turned out to differ from many other languages in its lexical composition: several suffixes are added to the root term to impose different meanings. For instance, the translation of tranquilizer in Turkish is sakin-leş (reciprocal suffix) -tir (causative suffix) -ici (adjective suffix).
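For readers curious about the mechanics of that step, the sketch below runs a PCA on a stand-in ratings matrix with the study's dimensions (240 evaluations by 79 adjectives). The data here are random placeholders, not the study's ratings, so the printed numbers only demonstrate the workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: rows = one listener's rating of one recording,
# columns = the 79 adjectives, values = 1..5 Likert scores.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(240, 79)).astype(float)

# Standardize each adjective, then project onto the first two principal components.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
pca = PCA(n_components=2)
scores = pca.fit_transform(z)

# In the study, the first two components were interpreted as Pleasantness and
# Eventfulness; each adjective's loadings place it around the circumplex.
print("variance explained:", pca.explained_variance_ratio_)
print("loadings matrix shape:", pca.components_.shape)  # (2, 79)
```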
The study demonstrates how cultural differences shape sound perception and the role language plays in expressing it. Its method extends beyond soundscape research and may benefit other translation projects. Further investigations could probe parallel cultures and undertake cross-cultural analyses.
3D-Printed Violins Bring Music into More Hands #ASA183
Modern materials and techniques can revolutionize music and provide access to low-cost instruments for music students.
Media Contact: Ashley Piccone AIP Media 301-209-3090 media@aip.org
NASHVILLE, Tenn., Dec. 6, 2022 – There’s nothing quite like the sound of a Stradivarius violin. Building such a quality string instrument takes time, perfect materials, and a lot of skill, and the best ones can cost millions of dollars. Even mediocre violins can cost thousands, which puts them out of reach for most beginners and music classrooms.
Dr. Mary-Elizabeth Brown rehearses Harry Stafylakis’ concerto “Singularity” on an early iteration of the 3D printed violin. Credit: Shawn Peters
One group is looking to rectify this by 3D-printing low-cost, durable violins for music students. In the process, they explored the factors that result in the best violin sounds and performed a concerto composed specifically for 3D-printed instruments.
Mary-Elizabeth Brown, Director of the AVIVA Young Artists Program, will discuss the steps taken and the lessons learned in her presentation, “Old meets new: 3D printing and the art of violin-making.” The presentation will take place on Dec. 6 at 10:35 a.m. Eastern U.S. in the Golden Eagle B room, as part of the 183rd Meeting of the Acoustical Society of America running Dec. 5-9 at the Grand Hyatt Nashville Hotel.
“The team’s inspiration roots in multiple places,” said Brown. “Our goals were to explore the new sound world created by using new materials, to leverage the new technology being used in other disciplines, and to make music education sustainable and accessible through the printing of more durable instruments.”
The 3D-printed violin was created in two sections. The violin’s body is made of a plastic polymer material, in the same manner as a traditional acoustic violin, and designed to produce a resonant tone, while the neck and fingerboard are printed in smooth ABS plastic to be comfortable in the musician’s hands. The result is a violin that produces a darker, more mellow sound than traditionally made instruments.
“The next step is to explore design modifications as well as efforts to lower the costs of production while making such instruments more widely available, especially in the realm of education,” said Brown.
ASA PRESS ROOM In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.
LAY LANGUAGE PAPERS ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.
PRESS REGISTRATION ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.
Christopher Kube – kube@psu.edu
Twitter: @_chriskube
Penn State University, 212 Earth and Engineering Sciences Bldg, University Park, PA, 16802, United States
Tao Sun, University of Virginia
Samuel Clark, Advanced Photon Source, Twitter: @advancedphoton
Find the authors on LinkedIn:
www.linkedin.com/in/chriskube
www.linkedin.com/in/suntao
Popular version of 3pID2 – Acoustics for in-process melt pool monitoring during metal additive manufacturing, presented at the 183rd ASA Meeting.
3D-printed, or additively manufactured (AM), metal parts are disrupting the status quo in a variety of industries, including defense, transportation, energy, and space exploration. Engineers now design and produce customizable parts that were unimaginable only a decade ago. The geometric freedom inherent to AM has already led to parts that often outperform their traditionally manufactured counterparts. In the years to come, another revolutionary jump in performance is expected from enabling the AM process to control the grain layout and structural features at the microscopic scale. Grains are the building blocks of metal parts and dictate many of the performance metrics behind the descriptors bigger, faster, and stronger.
The second performance revolution of AM metal parts requires uncovering new knowledge about the complicated physics at play during the AM process. 3D-printed metals are built by an energy source, such as a laser or electron beam, that selectively melts feedstock material at microscopic locations dictated by the computerized part drawing. Melted locations temporarily form liquid metal melt pools that solidify after the energy source moves on to another location. The resulting grain structure and pore/defect formation strongly depend on how the melt pool cools and solidifies.
Over the past five years, high-energy X-rays available only at particle accelerators have been used for direct, real-time visualization of AM melt pool dynamics and solidification. Figure 1 shows an example X-ray frame, which captured a laser-generated melt pool moving in a single direction at a speed of 800 mm/s.
This situation mimics the laser and melt pool movement found while 3D printing metal parts. Being able to directly observe melt pool behavior has led to new and improved understanding of the underlying physics. Unfortunately, experiment time at such X-ray sources is difficult to obtain because of extremely high demand across the sciences. Additionally, the measurement technique tied to high-energy X-ray sources is not transferrable to the metal 3D printers found in normal industrial settings. For these reasons, ultrasound is being explored as a melt pool monitoring technology that can be deployed within real 3D printers.
Ultrasound is commonly used for imaging and detecting features inside solid materials; for example, it is applied in medical settings during pregnancy or for diagnostics. Using ultrasound for melt pool monitoring is possible because ultrasound tends to scatter from the melt pool's solid/liquid boundary. The technique is being developed alongside X-ray imaging at the Advanced Photon Source at Argonne National Laboratory, where the X-ray imaging provides the extremely important ground-truth melt pool behavior and allows easy interpretation of the ultrasonic response. In Figure 1, the ultrasonic response from the exact same melt pool shown in the X-ray video is given for two different sensors. As the melt pool enters the field of view of the ultrasonic sensors (see online video), features in the ultrasound response confirm the sensors' sensitivity to the melt pool.
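As a rough illustration of why the echo is informative, here is a minimal pulse-echo sketch with assumed numbers (the wave speed, substrate thickness, and melt pool depth are placeholders, not values from the experiment): the reflection from the solid/liquid boundary arrives slightly earlier than the reflection from the top surface, so tracking that arrival time tracks the melt pool.

```python
# Assumed geometry: a transducer on the bottom of a metal substrate sends a
# pulse upward; part of it reflects from the melt pool's solid/liquid boundary.
c_solid = 6000.0      # assumed longitudinal wave speed in the solid metal, m/s
substrate = 5.0e-3    # assumed substrate thickness, m
melt_depth = 150e-6   # assumed melt pool depth below the top surface, m

# Round-trip times for echoes from the melt pool boundary and from the top surface
t_pool = 2 * (substrate - melt_depth) / c_solid
t_top = 2 * substrate / c_solid

print(f"echo from melt pool boundary: {t_pool * 1e6:.3f} microseconds")
print(f"echo from top surface:        {t_top * 1e6:.3f} microseconds")
print(f"separation:                   {(t_top - t_pool) * 1e9:.0f} nanoseconds")
```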
In this research, high-energy X-rays are being used to develop the ultrasonic technique and technology. In the coming year, the knowledge developed will be leveraged such that ultrasound can be applied on its own for melt pool monitoring in real metal 3D printers. Currently, no existing technology can capture the highly dynamic melt pool behavior through the depth of the part or substrate.
Practical benefits and value of melt pool monitoring within 3D printers are significant. Ultrasound can provide a quick check of laser power and speed combinations, accelerating the determination of optimal process parameters. Currently, determining the optimal process parameters requires destructive postmortem microscopy techniques that are extremely costly, time-consuming (sometimes more than a year), and wasteful. Ultrasound has the potential to reduce these factors by an order of magnitude. Furthermore, metal 3D printing processes are highly variable over many months, across different machines, and even when using feedstock powder from different suppliers. Ultrasonic melt pool monitoring can provide periodic checks to ensure this variability is minimized.
When most people think of microphones, they picture the ones singers use or the microphone in a karaoke machine, but they may not realize that much smaller microphones are all around us. A current smartphone contains three or four tiny microphones, so miniaturization is a real goal in microphone development. These microphones are strategically placed to achieve directionality, meaning the microphone detects and transmits the desired sound signal while discarding noise coming from directions other than the speaker's. This functionality is also desirable for hearing implant users: ideally, you want to be able to tell what direction a sound is coming from, as people with unimpaired hearing do.
But combining small size with directionality presents problems. People with unimpaired hearing can tell where sound is coming from by comparing the input received by each ear; the two ears conveniently sit on opposite sides of the head and therefore receive sounds at slightly different times and with different intensities. The brain does the math and computes what direction the sound must be coming from. The problem is that, to use this trick, you need two microphones separated far enough that the differences in arrival time and intensity are not negligible, and that works against microphone miniaturization. What to do, then, if you want a small but directional microphone?
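The sketch below illustrates that two-sensor trick in code, with assumed spacing, sample rate, and a simulated noise source (none of these numbers come from the research): cross-correlating the two signals recovers the arrival-time difference, and simple geometry turns that delay into an angle.

```python
import numpy as np

c = 343.0           # speed of sound in air, m/s
d = 0.15            # assumed sensor spacing, m (roughly an ear-to-ear distance)
fs = 48000          # sample rate, Hz
true_angle = 40.0   # assumed arrival angle from broadside, degrees

# Simulate the same noise burst arriving at both sensors with a small delay.
delay_samples = int(round(d * np.sin(np.radians(true_angle)) / c * fs))
rng = np.random.default_rng(1)
sig = rng.standard_normal(fs // 10)                              # 100 ms of noise
mic1 = sig
mic2 = np.concatenate([np.zeros(delay_samples), sig])[: len(sig)]

# Cross-correlate to find the lag, then invert the geometry for the angle.
lags = np.arange(-len(sig) + 1, len(sig))
best_lag = lags[np.argmax(np.correlate(mic2, mic1, mode="full"))]
est_angle = np.degrees(np.arcsin(best_lag / fs * c / d))
print(f"estimated arrival angle: {est_angle:.1f} degrees (true: {true_angle})")
```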
When looking for inspiration for novel solutions, scientists often turn to nature, where evolution favors energy efficiency and simple designs. Insects are one example of animals that face the challenge of directional hearing at small scales. The researchers chose to look at the lesser wax moth (Fig. 1), which was observed to have directional hearing in the 1980s. Males produce a mating call that females can track even when one of their ears is pierced, implying that, instead of using both ears as humans do, these moths achieve directional hearing with just one ear.
Figure 1. Lesser wax moth specimen with scale bar. Image courtesy of Birgit E. Rhode (CC BY 4.0).
The working hypothesis is that directionality is achieved by the asymmetrical shape and properties of the moth's ear itself. To test this hypothesis, the researchers designed a model that resembles the moth's ear and checked how it behaved when exposed to sound. The model consists of a thin elliptical membrane with two halves of different thicknesses. To fabricate it, they used a readily available commercial 3D printer, which allows the design to be customized and samples to be made in just a few hours. The samples were then placed on a turning surface, and the membrane's response to sound coming from different directions was investigated (Fig. 2). The membrane was found to move more when sound comes from one particular direction than from any other (Fig. 3), so the structure is passively directional. This means it could inspire a single, small directional microphone in the future.
Figure 2. Laboratory setup used to turn the sample (orange, center of the picture) and expose it to sound from the speaker (left of the picture). Researchers' own picture.
Figure 3. Sound coming from the 0° direction elicits stronger movement of the membrane than sound from other directions. Image adapted from Lara Díaz-García's original paper.
Peter Stepanishen, steppipr@uri.edu, University of Rhode Island, Department of Ocean Engineering, Narragansett, RI 02871
Popular version of paper 2pSA9, presented Tuesday, December 3, 2019, at the 178th ASA Meeting, San Diego, CA.
The origin of wind chimes dates back to 1100 BC in Eastern and Southern Asia, where the chimes were intended to ward off evil spirits and attract benevolent ones. Modern wind chimes typically consist of 4 to 8 aluminum tubes with varying lengths and associated resonant frequencies corresponding to a specific musical scale. In addition, wind chimes include a wind catcher and an associated wind clapper that impacts the chimes, as illustrated in Figure 1 below:
The present paper addresses the underlying physics of wind chimes from the viewpoint of a structural acoustician. The impact excitation and vibration of the structure are addressed, including the effects of the surrounding air on the vibration characteristics of the wind chime, which is modeled as a sum of cylindrical pipes or beams. The directional characteristics of the transient acoustic field are then addressed.
The dominant sound-producing features of each cylindrical pipe/beam are simply described as a sum of temporally decaying modal beam vibrations with different resonant frequencies, which are inversely related to the square of the length of the pipe. The predominant sound-producing, lowest-frequency modal vibration is illustrated in the accompanying video:
The video illustrates the lowest modal vibration of a free-free beam, which is presented as a simple model of a cylindrical wind chime vis-à-vis a cylindrical shell model. The ends of the beam/pipe undergo the maximum transverse deflection, whereas two nodal points with zero deflection are also apparent for this fundamental mode. In contrast to stringed musical instruments, the higher-order modal vibrations form a nonharmonic series of resonant frequencies, with an increasing number of nodal points as the mode number increases. Experimental results confirm the validity and usefulness of the cylindrical beam model of the wind chime pipes.
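As a hedged illustration of that beam model, the short sketch below evaluates the standard free-free Euler-Bernoulli beam formula for an assumed aluminum tube (the dimensions and material values are placeholders, not the paper's chimes). It shows both the nonharmonic overtone ratios and the inverse-square dependence of frequency on length mentioned above.

```python
import numpy as np

# Assumed aluminum tube, modeled as a free-free Euler-Bernoulli beam.
E = 69e9                     # Young's modulus, Pa
rho = 2700.0                 # density, kg/m^3
L = 0.40                     # tube length, m
r_o, r_i = 0.0125, 0.0105    # outer / inner radius, m

A = np.pi * (r_o**2 - r_i**2)        # cross-sectional area
I = np.pi * (r_o**4 - r_i**4) / 4    # area moment of inertia

# (beta_n * L) roots for the first free-free bending modes
betaL = np.array([4.7300, 7.8532, 10.9956, 14.1372])
f = (betaL**2 / (2 * np.pi * L**2)) * np.sqrt(E * I / (rho * A))

print("modal frequencies (Hz):", np.round(f, 1))
print("ratios to the fundamental:", np.round(f / f[0], 2))  # nonharmonic: ~1, 2.76, 5.40, 8.93
# Note the 1/L^2 dependence: halving the tube length raises every frequency fourfold.
```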
Acoustic transient radiation from a vibrating chime is addressed in the paper using a space-time superposition of ring sources along the axis of the chime. The ring sources are shown to result in a space-time varying force on the air in contact with the chime. Furthermore, the force is simply related to the previously noted sum of temporally decaying modal beam vibrations. The directional properties of the acoustic field are discussed and it is shown that the field exhibits nulls in the directions along and perpendicular to the axis of the chime.