Popular version of poster 5aMU1 Presented Friday morning, May 22, 2015, 8:35 AM – 8:55 AM, Kings 4 169th ASA Meeting, Pittsburgh
In this paper the relationship between musical instruments and the rooms they are performed in was investigated. A musical instrument is typically characterized as a system consisting of a tone generator combined with a resonator. A saxophone, for example, has a reed as a tone generator and a conically shaped resonator whose effective length can be changed with keys to produce different musical notes. Often neglected is the fact that for all wind instruments a second resonator is coupled to the tone generator: the vocal cavity. We use our vocal cavity every day when we speak to form characteristic formants, i.e., local enhancements in the frequency spectrum that shape vowels. This is achieved by varying the diameter of the vocal tract at specific positions along its axis. In contrast to the resonator of a wind instrument, the vocal tract is fixed in length by the distance between the vocal cords and the lips. Consequently, the vocal tract cannot be used to change the fundamental frequency over a larger melodic range; for our voice, the change in frequency is controlled via the tension of the vocal cords. The instrument's resonator, in turn, is not an adequate device to control the timbre (harmonic spectrum) of an instrument, because it can only be varied in length but not in width. Therefore, the player's adjustment of the vocal tract is necessary to control the timbre of the instrument. While some instruments possess additional mechanisms to control timbre, e.g., via the embouchure, which controls the tone generator directly through the lip muscles, others, like the recorder, rely solely on changes in the wind supply provided by the lungs and on adjustments of the vocal tract. The role of the vocal tract has not been addressed systematically in the literature and in learning guides, for two obvious reasons. Firstly, there is no known systematic approach to quantifying the internal body movements that shape the vocal tract.
Each performer has to figure out the best vocal tract configurations intuitively. For the resonator system, the changes are described through the musical notes, and in cases where multiple ways exist to produce the same note, additional signs indicate how to finger it (e.g., by specifying a key combination). Secondly, in Western classical music culture, vocal tract adjustments predominantly have a corrective function: they balance out the harmonic spectrum to make the instrument sound as even as possible across the register.
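The formant mechanism described above can be illustrated with the standard quarter-wave model of the vocal tract: a uniform tube closed at the glottis and open at the lips resonates at odd multiples of c/(4L). The tract length and sound speed below are textbook approximations, not measurements from this work:

```python
# Quarter-wave resonator sketch of the vocal tract: a tube closed at the
# glottis and open at the lips resonates at (2n - 1) * c / (4 * L).
# The ~17 cm length and c = 343 m/s are generic textbook values.
c = 343.0   # speed of sound in air, m/s
L = 0.17    # approximate vocal tract length, m

resonances = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
for n, f in enumerate(resonances, start=1):
    print(f"resonance {n}: {f:.0f} Hz")
```

Narrowing or widening the tube locally shifts these resonances, which is exactly the formant-shaping mechanism the text describes.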
PVC-Didgeridoo adapter for soprano saxophone
In non-Western cultures, the role of the oral cavity can be much more important in conveying musical meaning. The didgeridoo, for example, has a fixed resonator with no keyholes and can consequently produce only a single-pitched drone. The musical parameter space is then defined by modulating the overtone spectrum above that tone, changing the vocal tract dimensions and creating vocal sounds on top of the buzzing lips at the didgeridoo's edge. Mouthpieces of Western brass instruments have a cup behind the rim with a very narrow opening to the resonator, the throat. The didgeridoo has no cup; its rim is simply the edge of the resonator, finished with a ring of beeswax. While the narrow throat of a Western mouthpiece mutes additional sounds produced with the voice, a didgeridoo is open from end to end and carries the voice much better.
The room a musical instrument is performed in acts as a third resonator, which also affects the timbre of the instrument. In our case, the room was simulated using a computer model with early reflections and late reverberation.
Tone generators for soprano saxophone from left to right: Chinese Bawu, soprano saxophone, Bassoon reed, cornetto.
In general, it is difficult to assess the effect of a mouthpiece and a resonator individually, because both vary across instruments. The trumpet, for example, has a narrow cylindrical bore with a brass mouthpiece; the saxophone has a wide conical bore with a reed-based mouthpiece. To mitigate this effect, several tone generators were adapted for a soprano saxophone, including a brass mouthpiece from a cornetto, a bassoon mouthpiece, and a didgeridoo adapter made from a 140 cm folded PVC pipe that can also be attached to the saxophone. It turns out that exchanging the tone generator changes the timbre of the saxophone significantly. The cornetto mouthpiece gives the instrument a much mellower tone. Like the baroque cornetto, the instrument then sounds better in a bright room with lots of high frequencies, while the saxophone is at home in a 19th-century concert hall with a steeper roll-off at high frequencies.
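As a rough illustration of why the 140 cm PVC adapter behaves like a didgeridoo, one can estimate its drone pitch from the closed-open tube model (fundamental = c/(4L)). This is a back-of-envelope sketch only; the actual pitch of the combined adapter-plus-saxophone system depends on the bore profile and the player's lips, and no pitch is quoted in the article:

```python
# Hypothetical drone-pitch estimate for the 140 cm folded PVC pipe,
# modelled as a tube closed at the lips and open at the far end.
c = 343.0   # speed of sound in air, m/s
L = 1.40    # acoustic length of the folded PVC pipe, m

f0 = c / (4 * L)
print(f"estimated drone fundamental: {f0:.0f} Hz")
```

The estimate lands near 61 Hz, in the low drone register typical of didgeridoos.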
Sound penetrates our lives everywhere. It is an essential component of our social life: we need it for communication, for orientation, and as a warning signal. The auditory system continuously analyzes acoustic information, including unwanted and disturbing sound, which is filtered and interpreted by different cortical (conscious perception and processing) and sub-cortical brain structures (non-conscious perception and processing). The terms “sound” and “noise” are often used synonymously. Sound becomes noise when it causes adverse health effects such as annoyance, sleep disturbance, cognitive impairment, and mental or physiological disorders, including hearing loss and cardiovascular disorders. Evidence is increasing that ambient noise levels below hearing-damaging intensities are associated with the occurrence of metabolic disorders (type 2 diabetes), high blood pressure (hypertension), coronary heart disease (including myocardial infarction), and stroke. Environmental noise from transportation sources, including road, rail, and air traffic, is increasingly recognized as a significant public health issue.
Systematic research on the non-auditory physiological effects of noise has been carried out for a long time, starting in the post-war period of the last century. The reasoning that long-term exposure to environmental noise causes cardiovascular health effects is based on the following experimental and empirical findings:
Short-term laboratory studies carried out on humans have shown that the exposure to noise affects the autonomous nervous system and the endocrine system. Heart rate, blood pressure, cardiac output, blood flow in peripheral blood vessels and stress hormones (including epinephrine, nor-epinephrine, cortisol) are affected. At moderate environmental noise levels such acute reactions are found, particularly, when the noise interferes with activities of the individuals (e.g. concentration, communication, relaxation).
Noise-induced instantaneous autonomic responses occur not only during waking hours but also in sleeping subjects, even when they report not being disturbed by the noise.
The responses do not adapt on a long-term basis. Subjects who had lived for several years in a noisy environment still respond to acute noise stimuli.
The long-term effects of chronic noise exposure have been studied in animals at high noise levels, showing manifest vascular changes (thickening of vascular walls) and alterations in the heart muscle (increases in connective tissue) that indicate accelerated aging of the heart and a higher risk of cardiovascular mortality.
Long-term effects of chronic noise exposure in humans have been studied in workers exposed to high noise levels in the occupational environment showing higher rates of hypertension and ischemic heart diseases in exposed subjects compared with less exposed subjects.
These findings make it plausible to deduce that similar long-term effects of chronic noise exposure may also occur at comparably moderate or low environmental noise levels. It is important to note that non-auditory noise effects do not follow the toxicological principle of dosage. This means that it is not simply the accumulated total sound energy that causes the adverse effects. Instead, the individual situation and the disturbed activity need to be taken into account (time-activity patterns). It may very well be that an average sound pressure level of 85 decibels (dB) at work causes less of an effect than 65 dB at home when carrying out mental tasks or relaxing after a stressful day, or 50 dB when asleep. This makes a substantial difference compared to many other environmental exposures where the accumulated dose is the hazardous factor, e.g., air pollution (“dealing with decibels is not like summing up micrograms as we do for chemical exposures”).
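For contrast, the purely energetic way of combining sound levels, the equivalent continuous level Leq, can be sketched as below, using the three illustrative periods from the text. The point of the passage is precisely that this single energy-based number is dominated by the loudest period and misses the situational differences between work, home, and sleep:

```python
import math

def leq(levels_hours):
    """Energy-equivalent continuous level over (level_dB, hours) periods."""
    total_hours = sum(h for _, h in levels_hours)
    energy = sum(h * 10 ** (level / 10) for level, h in levels_hours)
    return 10 * math.log10(energy / total_hours)

# 8 h at 85 dB (work), 8 h at 65 dB (home), 8 h at 50 dB (sleep):
day = [(85, 8), (65, 8), (50, 8)]
print(f"Leq over 24 h: {leq(day):.1f} dB")
# The result sits just a few dB below the 85 dB work period: the quieter
# home and sleep periods contribute almost nothing to the energy sum.
```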
The general stress theory is the rationale and biological model for the non-auditory physiological effects of noise on humans. According to the general stress concept, repeated temporary changes in biological responses disturb the biorhythm and cause permanent dysregulation, resulting in physiological and metabolic imbalance and disturbed homeostasis of the organism, leading to chronic diseases in the long run. In principle, a variety of body functions may be affected, including, for example, the cardiovascular, gastrointestinal, and immune systems. Noise research has focused on cardiovascular health outcomes because cardiovascular diseases have a high prevalence in the general population. Noise-induced cardiovascular effects may therefore be relevant for public health and provide a strong argument for noise abatement policies within the global context of adverse health effects due to community noise, including annoyance and sleep disturbance.
Figure 1 shows a simplified reaction scheme used in epidemiological noise research. It depicts the cause-effect chain: sound > disturbance > stress response > (biological) risk factors > disease. Noise affects the organism either directly, through nervous interactions of the acoustic nerve with other regions of the central nervous system, or indirectly, through the emotional and cognitive perception of sound. The objective noise exposure (sound level) and the subjective noise exposure (annoyance) may both be predictors in the relationship between noise and health endpoints. The direct, non-conscious pathway may be predominant in sleeping subjects.
The body of epidemiological studies on the association between transportation noise (mainly road traffic and aircraft noise) and cardiovascular diseases (hypertension, coronary heart disease, stroke) has grown considerably in recent years. Most of the studies suggest a continuous increase in risk with increasing noise level. Exposure modifiers such as long years of residence and the location of rooms (facing the street) have been associated with a stronger risk, supporting the causal interpretation of the findings. The question is no longer whether environmental noise causes cardiovascular disorders; the question is rather to what extent (the slope of the exposure-response curve) and above which threshold (the empirical onset of the exposure-response curve, i.e., the reference level). Noise sources differ in their characteristics with respect to maximum noise level, time course (including the number of events), the rise time of a single event, frequency spectrum, tonality, and informational content. In principle, different exposure-response curves must therefore be considered for different noise sources. This not only applies to noise annoyance, where aircraft noise is found to be more annoying than road traffic noise and railway noise (at the same average noise level), but may, in principle, also be true for the physiological effects of noise.
So-called meta-analyses have been carried out, pooling the results of relevant studies on the same associations to derive common exposure-response relationships that can be used for quantitative risk assessment. Figure 2 shows pooled exposure-response relationships of the associations between road traffic noise and hypertension (24 studies, weighted pooled reference level 50 dB), road traffic noise and coronary heart disease (14 studies, weighted pooled reference level 52 dB), aircraft noise and hypertension (5 studies, weighted pooled reference level 49 dB), and aircraft noise and coronary heart disease (3 studies, weighted pooled reference level 48 dB). Different noise indicators were converted to the 24-hour day (+0 dB), evening (+5 dB), night (+10 dB) weighted annual A-weighted equivalent continuous sound pressure level Lden, which is commonly used for noise mapping in Europe and elsewhere and refers to the most exposed façade of the buildings. The curves suggest increases in risk (hypertension, coronary heart disease) of between 5 and 10 percent per 10 dB increase of the noise indicator Lden, starting at noise levels around 50 dB. This corresponds to approximately 10 dB lower night noise levels Lnight of approximately 40 dB. According to the graphs, subjects who live in areas where the ambient average noise level Lden exceeds 65 dB run an approximately 15-25 percent higher risk of cardiovascular disease compared with subjects who live in comparably quiet areas. With respect to high blood pressure, the risk tends to be larger for aircraft noise than for road traffic noise, which may have to do with the fact that people have no access to a quiet side when the noise comes from above. However, the number of aircraft noise studies is much smaller than the number of road traffic noise studies, and more research is needed in this field. Nevertheless, the available data provide a basis for taking action.
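The Lden conversion described above follows the standard EU definition (Directive 2002/49/EC): 12 daytime hours unweighted, 4 evening hours with a 5 dB penalty, and 8 night hours with a 10 dB penalty, combined energetically. A minimal sketch, with illustrative input levels that are not taken from any of the pooled studies:

```python
import math

def lden(l_day, l_evening, l_night):
    """Day-evening-night level per EU Directive 2002/49/EC:
    12 h day (+0 dB), 4 h evening (+5 dB), 8 h night (+10 dB)."""
    energy = (12 * 10 ** (l_day / 10)
              + 4 * 10 ** ((l_evening + 5) / 10)
              + 8 * 10 ** ((l_night + 10) / 10))
    return 10 * math.log10(energy / 24)

# Illustrative levels: night roughly 10 dB below day, as in the text
print(f"Lden = {lden(55, 52, 45):.1f} dB")
```

Because of the night penalty, Lden ends up close to the daytime level even though the night itself is much quieter, which is why an Lden of 50 dB corresponds to an Lnight of about 40 dB.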
The decision on critical noise levels and “accepted” public health risks within a social and economic context is not a scientific one but a political one. Expert groups had concluded that average A-weighted road traffic noise levels at the facades of houses exceeding 65 dB during the daytime and 55 dB during the night were to be considered detrimental to health. New studies that were able to assess the noise level in more detail at the lower end of the exposure range (e.g., including secondary roads) tended to find lower threshold values for the onset of the increase in risk than earlier studies for which noise data were not available area-wide (e.g., only for the primary road network). Based on current knowledge regarding the cardiovascular health effects of environmental noise, it seems justified to refine the recommendations towards lower critical noise levels, particularly with respect to exposure during the night. Sleep is an important modulator of cardiovascular function, and some studies showed stronger associations of cardiovascular outcomes with the exposure during the night than with the exposure during the day. Noise-disturbed sleep must therefore be considered a particular potential pathway for the development of cardiovascular disorders.
The WHO (World Health Organization) Regional Office for Europe is currently developing a new set of guidelines (“WHO Environmental Noise Guidelines for the European Region”) to provide suitable scientific evidence and recommendations for policy makers of the Member States in the European Region. The activity can be viewed as an initiative to update the WHO Community Noise Guidelines from 1999 where cardiovascular effects of environmental noise were not explicitly considered in the recommendations. This may change in the new version of the document.
Figure 1. Noise reaction model according to Babisch (2014) [Babisch, W. (2014). Updated exposure-response relationship between road traffic noise and coronary heart diseases: A meta-analysis. Noise Health 16 (68): 1-9.]
Figure 2. Exposure-response relationships of the associations between transportation noise and cardiovascular health outcomes. Data taken from:
Babisch, W. and I. van Kamp (2009). Exposure-response relationship of the association between aircraft noise and the risk of hypertension. Noise Health 11 (44): 149-156.
van Kempen, E. and W. Babisch (2012). The quantitative relationship between road traffic noise and hypertension: a meta-analysis. Journal of Hypertension 30(6): 1075-1086.
Babisch, W. (2014). Updated exposure-response relationship between road traffic noise and coronary heart diseases: A meta-analysis. Noise Health 16 (68): 1-9.
Vienneau, D., C. Schindler, et al. (2015). The relationship between transportation noise exposure and ischemic heart disease: A meta analysis. Environmental Research 138: 372-380.
Note: Study-specific reference values were pooled after conversion to Lden using the derived meta-analysis weights of each study (according to Vienneau et al. (2015)).
Popular version of poster 2pSC14 “Improving the accuracy of speech emotion recognition using acoustic landmarks and Teager energy operator features.” Presented Tuesday afternoon, May 19, 2015, 1:00 pm – 5:00 pm, Ballroom 2 169th ASA Meeting, Pittsburgh
“You know, I can feel the fear that you carry around and I wish there was… something I could do to help you let go of it because if you could, I don’t think you’d feel so alone anymore.” — Samantha, a computer operating system in the movie “Her”
Introduction Computers that can recognize human emotions could react appropriately to a user’s needs and provide more human-like interactions. Emotion recognition could also serve as a diagnostic tool for medical purposes, in onboard car driving systems that keep the driver alert if stress is detected, in similar systems in aircraft cockpits, and in electronic tutoring and interaction with virtual agents or robots. But is it really possible for computers to detect the emotions of their users?
During the past fifteen years, computer and speech scientists have worked on the automatic detection of emotion in speech. To interpret emotions from speech, the machine gathers acoustic information in the form of sound signals, extracts related information from the signals, and finds patterns that relate the acoustic information to the emotional state of the speaker. In this study, new combinations of acoustic feature sets were used to improve the performance of emotion recognition from speech. A comparison of feature sets for detecting different emotions is also provided.
Methodology Three sets of acoustic features were selected for this study: Mel-Frequency Cepstral Coefficients, Teager Energy Operator features and Landmark features.
Mel-Frequency Cepstral Coefficients: To produce vocal sounds, the vocal cords vibrate and produce periodic pulses, which result in the glottal wave. The vocal tract, starting at the vocal cords and ending at the mouth and nose, acts as a filter on the glottal wave. The cepstrum is a signal analysis tool useful for separating source from filter in acoustic waves. Since the vocal tract acts as a filter on the glottal wave, we can use the cepstrum to extract information related only to the vocal tract.
The mel scale is a perceptual scale of pitches judged by listeners to be equal in distance from one another. Using mel frequencies in cepstral analysis approximates the human auditory system’s response more closely than using linearly spaced frequency bands. If we map the power spectrum of the original speech wave onto the mel scale and then perform cepstral analysis, we get Mel-Frequency Cepstral Coefficients (MFCCs). Previous studies have used MFCCs for speaker and speech recognition; they have also been used to detect emotions.
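The pipeline just described (power spectrum, mel filterbank, logarithm, cepstral transform) can be sketched for a single frame as below. This is an illustration only: the filterbank size, frame length, and coefficient count are common textbook defaults, not the settings used in the study, and production code would normally use a library such as librosa.

```python
import numpy as np

def mfcc(frame, sr, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch for one windowed frame (illustration only)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

    # Triangular filters spaced evenly on the mel scale
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_pts = np.linspace(mel(0), mel(sr / 2), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    fbank = np.zeros(n_mels)
    for m in range(n_mels):
        lo, ctr, hi = hz_pts[m], hz_pts[m + 1], hz_pts[m + 2]
        up = np.clip((freqs - lo) / (ctr - lo), 0, None)      # rising edge
        down = np.clip((hi - freqs) / (hi - ctr), 0, None)    # falling edge
        fbank[m] = np.sum(spectrum * np.minimum(up, down))

    log_mel = np.log(fbank + 1e-10)
    # DCT-II of the log filterbank energies gives the cepstral coefficients
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return basis @ log_mel

sr = 16000
t = np.arange(400) / sr                        # one 25 ms frame
frame = np.hanning(400) * np.sin(2 * np.pi * 220 * t)
coeffs = mfcc(frame, sr)
print(coeffs.shape)                            # (13,)
```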
Teager Energy Operator features: Another approach to modeling speech production is to focus on the pattern of airflow in the vocal tract. While speaking in emotional states such as panic or anger, physiological changes like muscle tension alter the airflow pattern and can be used to detect stress in speech. The airflow is difficult to model mathematically, so Teager proposed the Teager Energy Operator (TEO), which computes the energy of the vortex-flow interaction at each instant in time. Previous studies show that TEO-related features contain information that can be used to determine stress in speech.
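The discrete-time TEO is simple enough to state in a few lines: psi[x(n)] = x(n)^2 - x(n-1)*x(n+1). A minimal numpy sketch (the test signal and its parameters are illustrative, not from the study):

```python
import numpy as np

def teager(x):
    """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*sin(w*n), the TEO equals A^2 * sin(w)^2, constant over
# time; deviations from that flat profile track changes in amplitude and
# frequency, such as those produced by tense, stressed speech.
n = np.arange(1000)
tone = 0.5 * np.sin(0.1 * n)
energy = teager(tone)
print(energy.mean(), energy.std())
```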
Acoustic landmarks: Acoustic landmarks are locations in the speech signal where important and easily perceptible speech properties are rapidly changing. Previous studies show that the number of landmarks in each syllable might reflect underlying cognitive, mental, emotional, and developmental states of the speaker.
Figure 1 – Spectrogram (top) and acoustic landmarks (bottom) detected in neutral speech sample
Sound File 1 – A speech sample with neutral emotion
Figure 2 – Spectrogram (top) and acoustic landmarks (bottom) detected in anger speech sample
Sound File 2 – A speech sample with anger emotion
Classification: The data used in this study came from the Linguistic Data Consortium’s Emotional Prosody Speech and Transcripts corpus. In this database, four actresses and three actors, all in their mid-20s, read a series of semantically neutral utterances (four-syllable dates and numbers) in fourteen emotional states. A description of each emotional state was handed to the participants so the utterances would be articulated in the proper emotional context. The acoustic features described previously were extracted from the speech samples in this database and used to train and test Support Vector Machine classifiers with the goal of detecting emotions from speech. The target emotions included anger, fear, disgust, sadness, joy, and neutral.
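The training-and-testing step can be sketched as below. This is a hypothetical stand-in: the synthetic feature vectors merely mimic class-separable acoustics, since the real MFCC, TEO, and landmark features come from the licensed LDC corpus and are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
emotions = ["anger", "fear", "disgust", "sadness", "joy", "neutral"]

# 60 fake utterances per emotion; 20-dimensional feature vectors whose
# per-class means differ -- a crude proxy for separable acoustic features.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(60, 20))
               for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # one SVM, six classes
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The real pipeline would replace the synthetic matrix X with per-utterance feature vectors concatenating the MFCC, TEO, and landmark statistics.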
Results The results of this study show an average detection accuracy of approximately 91% among these six emotions. This is 9% better than a previous study conducted at CMU on the same data set.
Specifically, TEO features improved the detection of anger and fear, and landmark features improved the results for detecting sadness and joy. The classifier had its highest accuracy, 92%, in detecting anger and its lowest, 87%, in detecting joy.
Australian researchers are the first to demonstrate milk fat separation at large scales using an ultrasonic separation technique, with potential industrial dairy applications
WASHINGTON, D.C., May 20, 2015 — Scientists from Swinburne University of Technology in Australia and the Commonwealth Scientific and Industrial Research Organization (CSIRO) have jointly demonstrated cream separation from natural whole milk at liter scales for the first time using ultrasonic standing waves, a novel, fast, and nondestructive separation technique previously used only in small-scale settings.
At the 169th Meeting of the Acoustical Society of America (ASA), being held May 18-22, 2015, in Pittsburgh, Pennsylvania, the researchers will report the key design and effective operating parameters for milk fat separation in batch and continuous systems.
The project, co-funded by the Geoffrey-Gardiner Dairy Foundation and the Australian Research Council, has established a proven ultrasound technique to separate fat globules from milk with high volume throughputs up to 30 liters per hour, opening doors for processing dairy and biomedical particulates on an industrial scale.
“We have successfully established operating conditions and design limitations for the separation of fat from natural whole milk in an ultrasonic liter-scale system,” said Thomas Leong, an ultrasound engineer and a postdoctoral researcher from the Faculty of Science, Engineering and Technology at the Swinburne University of Technology. “By tuning system parameters according to acoustic fundamentals, the technique can be used to specifically select milk fat globules of different sizes in the collected fractions, achieving fractionation outcomes desired for a particular dairy product.”
The Ultrasonic Separation Technique According to Leong, when a sound wave is reflected upon itself, the reflected wave can superimpose on the original wave to form an acoustic standing wave. Such waves are characterized by regions of minimum local pressure, where destructive interference occurs, at pressure nodes, and regions of high local pressure, where constructive superposition occurs, at pressure antinodes.
When an acoustic standing wave field is sustained in a liquid containing particles, the wave interacts with the particles and produces what is known as the primary acoustic radiation force. This force causes the particles to move towards either the nodes or the antinodes of the standing wave, depending on their density and compressibility relative to the surrounding liquid. Positioned thus, the individual particles then rapidly aggregate into larger entities at the nodes or antinodes.
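The geometry of the collection sites follows directly from the wavelength: adjacent pressure nodes sit half a wavelength apart. A back-of-envelope sketch, assuming the speed of sound in milk is close to that in water (roughly 1480 m/s, a value not quoted in the article):

```python
# Node spacing of the standing wave: adjacent pressure nodes are half a
# wavelength apart, half_wavelength = c / (2 * f).
c = 1480.0                    # assumed speed of sound in milk, m/s
for f in (1e6, 2e6):          # the one- and two-megahertz transducers
    half_wavelength = c / (2 * f)
    print(f"{f/1e6:.0f} MHz: nodes every {half_wavelength * 1e3:.2f} mm")
```

At these frequencies the fat globules therefore collect in bands a fraction of a millimeter apart, far finer than the centimeter-scale vessel, which is why many parallel collection planes form across the chamber.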
To date, ultrasonic separation has mostly been applied in small-scale settings, such as microfluidic devices for biomedical applications. Few demonstrations exist at volume scales relevant to industrial application, due to the attenuation of acoustic radiation forces over larger distances.
Acoustic Separation of Milk Fat Globules at Liter Scales To remedy this, Leong and his colleagues have designed a system consisting of two fully-submersible plate transducers placed on either end of a length-tunable, rectangular reaction vessel that can hold up to two liters of milk.
For single-plate operation, one of the plates produces one- or two-megahertz ultrasound waves while the other acts as a reflector. For dual-plate operation, both plates are switched on simultaneously, providing greater power to the system and increasing the acoustic radiation forces sustained.
To establish the optimal operating conditions, the researchers tested various design parameters, such as power input level, process time, transducer-reflector distance, and single versus dual transducer set-ups.
They found that, compared with conventional methods, ultrasound separation leaves the top stream of the milk with a greater concentration of large fat globules (cream) and the bottom stream with more small fat globules (skimmed milk).
“These streams can be further fractionated to obtain smaller and larger sized fat globules, which can be used to produce novel dairy products with enhanced properties,” Leong said. Dairy studies suggest that cheeses made from milk with a higher portion of small fat globules have superior taste and texture, while milk or cream with more large fat globules can make for tastier butter.
Leong said the ultrasonic separation process only takes about 10 to 20 minutes on a liter scale – much faster than traditional methods of natural fat sedimentation and buoyancy processing, commonly used today for the manufacture of Parmesan cheeses in Northern Italy, which can take more than six hours.
The researchers’ next step is to work with small cheese makers to demonstrate the efficacy of the technique in cheese production.
WORLDWIDE PRESS ROOM In the coming weeks, ASA’s Worldwide Press Room will be updated with additional tips on dozens of newsworthy stories and with lay language papers, which are 300 to 500 word summaries of presentations written by scientists for a general audience and accompanied by photos, audio and video. You can visit the site during the meeting at https://acoustics.org/world-wide-press-room/.
PRESS REGISTRATION We will grant free registration to credentialed journalists and professional freelance journalists. If you are a reporter and would like to attend, contact AIP Media Services at media@aip.org. For urgent requests, staff at media@aip.org can also help with setting up interviews and obtaining images, sound clips, or background information.
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.
“Natural” Sounds Improve Mood and Productivity, Study Finds
Work presented at the 169th Acoustical Society of America (ASA) Meeting in Pittsburgh may have benefits from the office cube to the in-patient ward
WASHINGTON, D.C., May 19, 2015 — Playing natural sounds such as flowing water in offices could boost worker moods and improve cognitive abilities, in addition to providing speech privacy, according to a new study from researchers at Rensselaer Polytechnic Institute. They will present the results of their experiment at the 169th Meeting of the Acoustical Society of America in Pittsburgh.
An increasing number of modern open-plan offices employ sound masking systems that raise the background sound of a room so that speech is rendered unintelligible beyond a certain distance and distractions are less annoying.
“If you’re close to someone, you can understand them. But once you move farther away, their speech is obscured by the masking signal,” said Jonas Braasch, an acoustician and musicologist at the Rensselaer Polytechnic Institute in New York.
Sound masking systems are custom designed for each office space by consultants and are typically installed as speaker arrays discreetly tucked away in the ceiling. For the past 40 years, the standard masking signal has been random, steady-state electronic noise, also known as “white noise.”
Braasch and his team had previously tested whether masking signals inspired by natural sounds might work just as well as, or better than, the conventional signal. The idea was inspired by earlier work by Braasch and his graduate student Mikhail Volf, which showed that people’s ability to regain focus improved when they were exposed to natural sounds rather than silence or machine-based sounds.
Recently, Braasch and his graduate student Alana DeLoach built upon those results in a new experiment. They exposed [HOW MANY??] human participants to three different sound stimuli while performing a task that required them to pay close attention: typical office noises with the conventional random electronic signal; an office soundscape with a “natural” masker; and an office soundscape with no masker. The test subjects only encountered one of the three stimuli per visit.
The natural sound used in the experiment was designed to mimic the sound of flowing water in a mountain stream. “The mountain stream sound possessed enough randomness that it did not become a distraction,” DeLoach said. “This is a key attribute of a successful masking signal.”
They found that workers who listened to natural sounds were more productive than the workers exposed to the other sounds and reported being in better moods.
Braasch said using natural sounds as a masking signal could have benefits beyond the office environment. “You could use it to improve the moods of hospital patients who are stuck in their rooms for days or weeks on end,” Braasch said.
For those who might be wary of employers using sounds to influence their moods, Braasch argued that using natural masking sounds is no different from a company that wants to construct a new building near the coast so that its workers can be exposed to the soothing influence of ocean surf.
“Everyone would say that’s a great employer,” Braasch said. “We’re just using sonic means to achieve that same effect.”