Central Washington University, Department of Physics, Ellensburg, WA, 98926, United States
Seth Lowery
Ph.D. candidate, University of Texas
Dept. of Mechanical Engineering
Austin, TX
Popular version of 4pMU3 – An experiment to measure changes in violin instrument response due to playing-in
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023547
How is a violin like a pair of hiking boots? Many violinists would respond, “They both improve with use.” Just as boots need to be “broken in” by being worn several times to make them more supple, many musicians believe that a new violin, cello, or guitar needs to be “played in” for a period of time, typically months, in order to fully develop its acoustic properties. There is even a commercial product, the Tone-Rite, that is marketed as a way to accelerate the playing-in process, with the claim of dramatically increasing “resonance, balance, and range,” and some builders of stringed instruments, known as luthiers, offer a service of pre-playing-in their instruments, using their own methods of mechanical stimulus, before selling them. But do we know whether violins actually improve with use?
We tested the hypothesis that putting vibrational energy into a violin will, over time, change how the violin body responds to the vibration of the strings, which is measured as the frequency response. We used three violins in our experiment: one was left alone, serving as a control, while the two test violins were “played” by applying mechanical vibrations directly to the bridge. One of the mechanical sources was the Tone-Rite; the other was a shaker driven with a signal created from a Vivaldi violin concerto, as shown in the video below. The total time of vibration exceeded 1600 hours, equivalent to ten months of being played six hours per day.
Approximately once per week, we measured the frequency response of all three violins using two standard methods: bridge admittance, which characterizes the vibration of the violin body, and acoustic radiativity, which is based on the sound radiated by the violin. The measurement setup is illustrated in Figure 1.
Figure 1: Measuring the frequency response of a violin in an anechoic chamber.
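For readers curious how such a frequency response is computed, the sketch below shows the standard H1 estimator used in vibration testing: the cross-spectrum between the excitation force and the response velocity, divided by the force auto-spectrum. The signals here are synthetic stand-ins, and the code is a generic illustration, not the authors’ measurement software.

```python
import numpy as np
from scipy.signal import csd, welch

# Minimal sketch of the standard H1 frequency-response estimate,
# with synthetic stand-in signals (not the authors' measurement code).
fs = 48000                                    # sample rate (Hz), assumed
rng = np.random.default_rng(0)
force = rng.standard_normal(10 * fs)          # stand-in for the excitation force at the bridge
# Stand-in "violin body": filter the force through a random impulse response.
velocity = np.convolve(force, rng.standard_normal(256), mode="same")

f, S_fv = csd(force, velocity, fs=fs, nperseg=4096)  # cross-spectral density
_, S_ff = welch(force, fs=fs, nperseg=4096)          # force auto-spectrum
admittance = S_fv / S_ff                             # complex FRF: velocity per unit force
```

Averaging over many windows in this way suppresses measurement noise, which matters when the quantity of interest is a small week-to-week change.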
Having a control violin allowed us to account for factors not associated with playing-in, such as fluctuating environmental conditions or simple aging, that might affect the frequency response. If mechanical vibrations had the hypothesized effect of physically altering the violin body, such as creating microcracks in the wood, glue, or varnish, and if the result were an increase in “resonance, balance, and range”, then we would expect a noticeable and cumulative change in the frequency response of the test violins compared to the control violin.
We did not observe any changes in the frequency responses of the violins that correlate with the amount of vibration. In Figure 2a, we plot a normalized difference in the bridge admittance between the two test violins and the control violin; Figure 2b shows a similar plot for the acoustic radiativity.
In both plots, we see no evidence that the difference between the test violins and the control violin increases with more vibration; instead, we see random fluctuations that can be attributed to the slightly different experimental conditions of each measurement. This holds for both the Tone-Rite, which vibrates primarily at the 60 Hz frequency of the electric power it is plugged into, and the shaker, which provided the same frequencies that a violinist practicing her instrument would create.
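The precise difference metric is not spelled out here, but a session-by-session comparison of two frequency responses can be reduced to a single number with something like the following sketch; the metric shown is an illustrative assumption, not necessarily the one used in the study.

```python
import numpy as np

def normalized_difference(H_test, H_control):
    """Single-number summary of how far a test violin's FRF sits from the
    control's. Illustrative metric only; both FRFs must be complex arrays
    sampled on the same frequency grid."""
    num = np.mean(np.abs(np.abs(H_test) - np.abs(H_control)))
    den = np.mean(np.abs(H_control))
    return num / den
```

Tracked week by week, a genuine playing-in effect would appear as a steady upward trend in this number for the test violins.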
Our conclusion is that long-term vibrational stimulus of a violin, whether applied mechanically or through actual playing, does not produce a physical change in the violin body that could affect its tonal characteristics.
Musicians show incredible flexibility when generating sounds with their instruments. Nevertheless, some control parameters need to stay within certain limits for this to occur. Take, for example, a clarinet player. Using too much or too little blowing pressure results in no sound being produced by the instrument. The required pressure, which depends on the note being played and other instrument properties, must remain within a specific range. One way to study these limits is to generate ‘playability diagrams’. Such diagrams have commonly been used to analyze bowed-string instruments, but they may also be informative for wind instruments, as suggested by Woodhouse at the 2023 Stockholm Music Acoustics Conference. Following this direction, such diagrams, in the form of playability maps, can highlight the playable regions of a musical instrument as certain control parameters are varied, and ultimately support performers in choosing their equipment.
One way to fill in these diagrams is via physical modeling simulations. Such simulations make it possible to predict the generated sound while slowly varying some of the control parameters. Figure 1 shows such an example, where a playability region is obtained by varying the blowing pressure and the stiffness of the clarinet reed. (In fact, the parameter varied on the y-axis is the effective stiffness per unit area of the reed, corresponding to the stiffness of the reed once it has been mounted on the mouthpiece and the musician’s lip is in contact with it.) Black regions indicate ‘playable’ parameter combinations, whereas white regions indicate parameter combinations where no sound is produced.
Figure 1: Pressure-stiffness playability map. The black regions correspond to parameter combinations that generate sound.
One observation is that players who wish to play with a larger blowing pressure (resulting in louder sounds) should use stiffer reeds. As indicated by the plot, for a reed with stiffness per area equal to 0.6 Pa/m (a soft reed), it is not possible to generate a note with a blowing pressure above 2750 Pa. When using a harder reed (say, with a stiffness of 1 Pa/m), one can play with larger blowing pressures, but in this case it is impossible to play with a pressure lower than 3200 Pa. Varying other control parameters could highlight similar effects for various instrument properties. For instance, playability maps for different mouthpiece geometries could be obtained, which would be valuable information for musicians and instrument makers alike.
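As a toy illustration of how such a map can be filled in, the sketch below uses the textbook quasi-static result for single-reed instruments: self-sustained oscillation is possible roughly when the blowing pressure lies between one third of the reed’s closing pressure and the closing pressure itself. All numerical ranges and scalings are invented for illustration; the figure above comes from a full physical model and will not be reproduced by this shortcut.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy playability map from the quasi-static single-reed threshold:
# playable roughly when p_close / 3 < p_blow < p_close, with
# p_close = k_eff * H0 (effective stiffness per area times rest opening).
# All numbers are illustrative assumptions, not the authors' model.
H0 = 4e-3                                   # assumed rest-opening scale (arbitrary)
k_eff = np.linspace(0.4, 1.4, 400)          # reed stiffness axis (arbitrary units)
p_blow = np.linspace(500, 6000, 400)        # blowing pressure axis (Pa)

K, P = np.meshgrid(k_eff, p_blow)
p_close = K * 1e6 * H0                      # assumed scaling to land in a plausible Pa range
playable = (P > p_close / 3) & (P < p_close)

plt.pcolormesh(k_eff, p_blow, playable.astype(float), cmap="gray_r", shading="auto")
plt.xlabel("effective reed stiffness (arbitrary units)")
plt.ylabel("blowing pressure (Pa)")
plt.title("Toy playability map: black = playable")
plt.show()
```

Even this crude threshold model reproduces the qualitative trend of the figure: the playable band of blowing pressures shifts upward as the reed gets stiffer.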
Songs of the Oceans Raise Environmental Awareness #ASA184
Oceanic data is transformed into hypnotic and impactful music that encourages reflection.
Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org
CHICAGO, May 10, 2023 – For many people, there are few sounds as relaxing as ocean waves. But the sound of the seas can also convey deeper emotions and raise awareness about pollution.
Malloy with the oil drum used for his musical performances. Credit: Colin Malloy
At the upcoming 184th Meeting of the Acoustical Society of America, Colin Malloy of Ocean Networks Canada will present his method for transforming ocean data into captivating solo percussion pieces. The talk, “Sonification of ocean data in art-science,” will take place Wednesday, May 10, at 3:25 p.m. in the Indiana/Iowa room. The meeting will run May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.
To construct his compositions, Malloy employs sound from underwater microphones, called hydrophones, and introduces elements inspired by ocean-related data such as temperature, acidity, and oxygenation. Listeners can find performances of Malloy’s music on YouTube.
In his piece, Oil & Water, Malloy represents the impact of oil production on the oceans. He plays an eerily catchy melody on steel drums and inserts noise to represent oil production over the past 120 years. The interjections increase throughout the piece to mimic the increased production in recent years. Near the end of the song, he uses oil consumption data as the oscillator of a synthesizer.
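A common way to realize that last step, offered here as a hypothetical illustration rather than Malloy’s actual patch, is to treat the data series as one cycle of an oscillator’s waveform, so the shape of the curve literally becomes the timbre of the tone.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical data-as-oscillator sketch: a data series becomes one cycle
# of a wavetable, and the cycle is then looped at an audible pitch.
fs = 44100
rng = np.random.default_rng(1)
data = np.cumsum(rng.random(120))           # placeholder for ~120 years of consumption data
idx = np.linspace(0, len(data) - 1, 512)
cycle = np.interp(idx, np.arange(len(data)), data)      # resample to one 512-point cycle
cycle = 2 * (cycle - cycle.min()) / np.ptp(cycle) - 1   # normalize to [-1, 1]

f0, dur = 110.0, 3.0                        # pitch (Hz) and duration (s) of the drone
phase = np.cumsum(np.full(int(fs * dur), f0 / fs)) % 1.0  # phase in [0, 1)
table = np.append(cycle, cycle[0])          # wrap the table for interpolation
drone = np.interp(phase * len(cycle), np.arange(len(table)), table)
wavfile.write("data_drone.wav", fs, (0.5 * drone).astype(np.float32))
```

Because the waveform’s shape comes directly from the data, the listener hears the curve itself.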
By representing data in this way, he hopes his music encourages listeners to reflect on the meaning and the medium.
“Art helps people digest information on an emotional level that typical science communication may not,” Malloy said. “I hope that in listening to these pieces, people use them as a space to reflect on what each piece is trying to portray. Ultimately, I’d like for them to help create awareness of the various issues surrounding the oceans.”
The aptly named field ArtScience encourages scientists and artists to learn from each other about communication, connection, and science. Ocean Networks Canada’s artist-in-residence program recruits artists to work with scientists, engage with research, and connect to a larger cultural audience.
Malloy, who has an educational background in mathematics, computer science, and music, believes working in the balance of science and art provides him with a unique perspective.
“There is a lot of art in science and a lot of science to art — more than most people realize for either direction,” said Malloy.
ASA PRESS ROOM In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.
LAY LANGUAGE PAPERS ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300- to 500-word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.
PRESS REGISTRATION ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org. For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.
ABOUT THE ACOUSTICAL SOCIETY OF AMERICA The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.
Beyond Necessity, Hearing Aids Bring Enjoyment Through Music #ASA184
Hearing aids aren’t particularly good at preserving the sound quality of music – but some manufacturers do better than others.
Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org
CHICAGO, May 8, 2023 – For decades, hearing aids have focused on improving communication by separating speech from background noise. While the technology has made strides with speech, it remains subpar when it comes to music.
Over the years, hearing aids have improved in terms of speech. But they are still subpar when it comes to music. Credit: Emily Sandgren
In their talk, “Evaluating the efficacy of music programs in hearing aids,” Emily Sandgren and Joshua Alexander of Purdue University will describe experiments to determine the best hearing aids for listening to music. The presentation will take place Monday, May 8, at 11:45 a.m. Eastern U.S. in the Indiana/Iowa room, as part of the 184th Meeting of the Acoustical Society of America running May 8-12 at the Chicago Marriott Downtown Magnificent Mile Hotel.
“Americans listen to music for more than two hours a day on average, and music can be related to mental and emotional health. But research over the past two decades has shown that hearing aid users are dissatisfied with the sound quality of music when using their hearing aids,” said Sandgren. “People with hearing loss deserve both ease of communication and to maintain quality of life by enjoying sources of entertainment like music.”
In response to this problem, hearing aid manufacturers have designed music programs for their devices. To test and compare these programs, Sandgren and Alexander made more than 200 recordings of music samples as processed by hearing aids from seven popular manufacturers.
They asked study participants to rate the sound quality of these recordings and found that the hearing aids received lower ratings for music than the control stimuli did. The researchers found bigger differences in music quality between hearing aid brands than between speech and music programs, with two manufacturers standing out from the rest.
The team is still trying to determine the causes behind these differences.
“One contributing factor is how hearing aids adapt to loud, sudden sounds,” said Sandgren. “When you’re listening to a conversation, if a door slams behind you, you don’t want that door slam to be amplified very much. But with music, there are loud sudden sounds that we do want to hear, like percussion instruments.”
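That trade-off is visible in even the simplest dynamic-range compressor. The sketch below is a generic textbook design, not any manufacturer’s algorithm: a fast attack pulls a sudden door slam down almost immediately, but the same behavior flattens musical transients such as drum hits.

```python
import numpy as np

def compress(x, fs, threshold=0.1, ratio=4.0, attack_ms=1.0, release_ms=100.0):
    """Generic feed-forward compressor sketch; x is a float signal in [-1, 1].
    Illustrative only; real hearing aids use far more elaborate multi-band schemes."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        a = a_att if level > env else a_rel          # fast attack, slow release
        env = a * env + (1.0 - a) * level            # envelope follower
        if env <= threshold:
            gain = 1.0
        else:                                        # reduce level above threshold
            gain = (threshold + (env - threshold) / ratio) / env
        out[n] = gain * s                            # a door slam is pulled down quickly
    return out
```

Applied to music, the same fast gain reduction that tames a door slam also blunts the percussion hits a listener wants to hear.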
Distortion may be one of the biggest problems. Unlike speech, music often has intense low-frequency harmonics.
“Our analyses suggest that brands rated highest in music quality processed the intense ultralow frequency peaks with less distortion than those rated lowest in music quality,” said Alexander.
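One standard way to quantify such distortion, shown here as an illustrative sketch rather than the authors’ analysis, is total harmonic distortion: play a low-frequency test tone through the device, record the output, and compare the energy at the tone’s harmonics to the energy at the tone itself.

```python
import numpy as np

def total_harmonic_distortion(recording, fs, f0, n_harmonics=5):
    """THD of a recorded test tone: RMS level at harmonics 2..n of f0,
    relative to the level at f0. Illustrative sketch; assumes f0 and its
    harmonics sit well below the Nyquist frequency."""
    windowed = recording * np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(recording), 1 / fs)

    def peak(f):
        # Strongest bin within +/-2% of the target frequency.
        band = (freqs > 0.98 * f) & (freqs < 1.02 * f)
        return spectrum[band].max()

    fundamental = peak(f0)
    harmonics = [peak(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
```

A device that adds audible grit to an intense bass note would show up here as extra energy at the harmonics, and hence a higher THD.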
This work will improve future technology and help audiologists select the best current hearing aids for their patients.
Settler Scholar and Public Historian with Brothertown Indian Nation, Ridgewood, NY, 11385, United States
Jessica Ryan – Vice Chair of the Brothertown Tribal Council
Popular version of 3pAA6 – Case study of a Brothertown Indian Nation cultural heritage site–toward a framework for acoustics heritage research in simulation, analysis, and auralization
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018718
The Brothertown Indian Nation has a centuries-old heritage of group singing. Although this singing is an intangible heritage, these aural practices have left a tangible record through published music, as well as extensive personal correspondence and journal entries about the importance of singing in the political formation of the Tribe. One specific tangible artifact of Brothertown ancestral aural heritage, and the focus of the acoustic research in this case study, is a house built in the 18th century by Andrew Curricomp, a Tunxis Indian.
Figure 1: Images courtesy of authors
In step with the construction of the house at Tunxis Sepus, Brothertown political formation also solidified in the 18th century between members of seven parent Tribes: various Native communities of Southern New England, including Mohegan, Montauk, Narragansett, Niantic, Stonington (Pequot), Groton/Mashantucket (Pequot), and Farmington (Tunxis). Settler colonial pressure along the Northern Atlantic coast forced Brothertown Indian ancestors to leave various Indigenous towns and settlements to form a body politic named Brotherton (Eeyamquittoowauconnuck). Nearly a century later, after multiple forced relocations, the Tribe, including many of Andrew Curricomp’s grandchildren and great-grandchildren, was displaced again to the Midwest. Today, after nearly two more centuries, the Brothertown Indian Nation Community Center and museum are located in Fond du Lac, WI, just south of the Tribe’s original Midwestern settlement.
During contemporary trips back to visit parent tribes, members of the Brothertown Indian Nation have visited the Curricomp House at Tunxis Sepus.
Figure 2: Image courtesy of authors
However, by then it was known as the William Day Museum of Indian Artifacts. After the many relocations of Brothertown and their parent Tribes, the Curricomp house was purchased by a local landowner of European descent. The landowner’s groundskeeper, Bill Day, had a hobby of collecting stone lithic artifacts that he found while gardening around the property. Because it was locally told that the house had belonged to the last living Indian in the town, the landowner decided the Curricomp house would be a perfect home for his groundskeeper’s collection. He had the house moved to his property and named it for his gardener: the William Day Museum of Indian Artifacts.
The myth of the vanishing Indian is a common trope in popular Western culture. This colonial, “last living Indian” history that dominates the archive includes no real information about what Native communities actually used the space for, or where the descendants of Tunxis are now living. This acoustics case study intends for the living descendants of Tunxis Sepus to have sovereignty over the digital content created, as the house serves as a tangible cultural signifier of their intangible aural heritage.
Architectural acoustic heritage throughout Brothertown’s history of displacement is of value to their vibrant contemporary culture. Many of these tangible heritage sites have been made intangible to the Brothertown Community, as they are settler owned, demolished, or geographically inaccessible to the Brothertown diaspora, requiring creative solutions to make this heritage available. Both in-situ and web-based immersive interfaces are being designed to interact with the acoustic properties of the Curricomp house.
Figure 3: Image courtesy of authors
These interfaces use various music and speech source media that feature Brothertown aural heritage. The acoustic simulations and auralizations created during this case study of the Curricomp House are tools: a means by which living descendants might hear one another in the difficult-to-access acoustic environments of their ancestors.
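At its heart, an auralization places a “dry” (echo-free) recording inside a simulated space by convolving it with the space’s impulse response. The sketch below shows that basic operation with placeholder file names; it illustrates the general technique, not the project’s actual pipeline.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Basic auralization: convolve a dry recording with a simulated room
# impulse response (RIR). File names are placeholders, not project assets.
fs, dry = wavfile.read("dry_singing.wav")           # anechoic source recording
fs_ir, rir = wavfile.read("curricomp_rir_sim.wav")  # simulated impulse response
assert fs == fs_ir, "source and RIR must share a sample rate"
assert dry.ndim == 1 and rir.ndim == 1, "sketch assumes mono files"

wet = fftconvolve(dry.astype(np.float64), rir.astype(np.float64))
wet *= 0.9 / np.max(np.abs(wet))                    # normalize to avoid clipping
wavfile.write("auralization.wav", fs, wet.astype(np.float32))
```

The result approximates how the source material would sound inside the modeled room.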
Stanford University, CCRMA / Music, Stanford, CA, 94305, United States
Ge Wang Stanford University
Michael Mulshine Stanford University
Jack Atherton Stanford University
Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058
Take all of computer music, advances in programming digital sound, and the web and its browsers, and create an enjoyable playground for sound exploration. That’s Webchuck, a new platform for real-time web-based music synthesis. What would it chuck? Primarily, musical and artistic projects in the form of webapps featuring real-time sound generation. For example, The Metered Tide, shown in the video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.
Video 1: The Metered Tide
The backing track plays a sonification of a century’s worth of sea level data collected at that location while the musician records the live session. Webchuck delivers on a long-sought promise: accessible music making and simple experimentation.
Video 2: The Metered Tide with backing track
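To give a flavor of what such a sonification involves, here is a hypothetical parameter-mapping sketch in Python: a century of sea-level readings (placeholder numbers below) steers the pitch of a slowly gliding tone. The actual piece was built in Webchuck, not with this code.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical parameter-mapping sonification: sea level -> pitch.
fs, dur = 44100, 20.0                        # 20 seconds standing in for ~100 years
years = np.linspace(0.0, 1.0, 100)           # normalized time axis of the data
sea_level = 0.2 * years + 0.02 * np.sin(20 * np.pi * years)  # placeholder trend + tide-like wiggle

t = np.linspace(0.0, 1.0, int(fs * dur))
level = np.interp(t, years, sea_level)       # resample the data to audio length
norm = (level - level.min()) / np.ptp(level)
freq = 220.0 * 2.0 ** (2.0 * norm)           # map level onto a two-octave glide above 220 Hz

phase = 2 * np.pi * np.cumsum(freq) / fs     # integrate frequency to get phase
tone = 0.5 * np.sin(phase)
wavfile.write("sea_level_glide.wav", fs, tone.astype(np.float32))
```

Rising pitch then becomes an audible stand-in for rising water.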
Example webapps from this new Webchuck critter are popping up rapidly, and a growing body of musicians and students enjoy how easily they can produce music on any system. New projects are fun to program and can be deployed anywhere. Sharing work and adapting prior examples is a breeze. New webapps are written in the Chuck music programming language and can be extended with JavaScript for open-ended possibilities.
Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding, in which a program is developed by hearing changes as they are made, is the most common way of using Webchuck. Rapid prototyping in sound is made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck’s ease of abstraction so that programmers can build up from low-level details to higher-level features.
Combining the expressive music programming power of Chuck with the ubiquity of web browsers is a game changer that researchers have observed in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music and then some.