Walk to the Beat: How Your Playlist Can Shape Your Emotional Balance

Man Hei LAW – mhlawaa@connect.ust.hk

Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong

Andrew HORNER
Computer Science and Engineering
Hong Kong University of Science and Technology
Hong Kong

Popular version of 1aCA2 – Exploring the Therapeutic Effects of Emotion Equalization App During Daily Walking Activities
Presented at the 187th ASA Meeting
Read the abstract at https://eppro01.ativ.me/web/index.php?page=Inthtml&project=ASAFALL24&id=3771973

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–


We spend much of our day simply getting from one task to the next. Some people find walking boring and feel that time drags; others see it as a chance to think and plan ahead. We believe this short window can be used to help people rebalance their emotions, so that they arrive at their next destination feeling refreshed and energized.

Our idea was to give each participant a specific music playlist to listen to while walking. The playlists consisted of Uplifting, Relaxing, Angry, and Sad music, each lasting 15 minutes. While walking, listeners used our Emotion Equalization App (Figures 1a to 1d) to access the playlists, and the app collected their data.

Figures 1a to 1d: The interface of the Emotion Equalization App

The key data we focused on was the change in emotions. To understand the listeners’ emotions, we used the Self-Assessment Manikin (SAM) scale, a visual tool that depicts emotions along two dimensions: internal energy level and mood positivity (see Figure 2). After the tests, we analyzed how their emotions changed from before to after listening to the music.

Figure 2: The Self-Assessment Manikin scale, showing energy levels at the top and mood positivity at the bottom [1]
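For readers curious how such before-and-after comparisons might be tallied, here is a minimal sketch in Python. The column names and rating values are invented placeholders, not the study’s actual data or code; the SAM ratings are assumed to lie on 1-9 scales.

```python
# Hypothetical sketch: computing pre/post emotion changes from SAM ratings.
# The layout, column names, and values are illustrative placeholders.
import pandas as pd

# Each row: one participant's SAM ratings (assumed 1-9) before and after a walk.
ratings = pd.DataFrame({
    "playlist":      ["Uplifting", "Angry", "Relaxing", "Sad"],
    "energy_before": [4, 3, 5, 4],
    "energy_after":  [7, 6, 5, 4],
    "mood_before":   [4, 3, 5, 3],
    "mood_after":    [7, 6, 5, 5],
})

# Positive deltas mean the walk raised internal energy or mood positivity.
ratings["energy_change"] = ratings["energy_after"] - ratings["energy_before"]
ratings["mood_change"]   = ratings["mood_after"] - ratings["mood_before"]

print(ratings.groupby("playlist")[["energy_change", "mood_change"]].mean())
```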

The study found that the type of music influenced how far participants walked: those listening to Uplifting music walked the farthest, followed by Angry, Relaxing, and Sad music. As expected, the music’s energy appeared to carry over into the participants’ physical energy.

So, if music can affect physical energy, can it also have a positive effect on emotions? Can negative music help regulate mood? An unexpected finding was that Angry music was the most effective therapeutic music for walking. Surprisingly, listening to Angry music while walking not only raised internal energy levels but also promoted positive feelings. Uplifting and Sad music, by contrast, elicited positive emotions but no increase in energy, while Relaxing music during walking contributed to neither. This result challenges common assumptions about the therapeutic use of music during walking: Angry music has a negative vibe, but our study found it helped individuals relieve stress while walking, ultimately enhancing both internal energy and mood.

If you’re having a tough day, consider listening to an Angry music playlist while taking a walk. It can help in balancing your emotions and uplifting your mood for your next activity.

[1] A. Mehrabian and J. A. Russell, An Approach to Environmental Psychology. Cambridge, MA: The MIT Press, 1974.

Listen In: Infrasonic Whispers Reveal the Hidden Structure of Planetary Interiors and Atmospheres

Quentin Brissaud – quentin@norsar.no
X (twitter): @QuentinBrissaud

Research Scientist, NORSAR, Kjeller, 2007, Norway

Sven Peter Näsholm, University of Oslo and NORSAR
Marouchka Froment, NORSAR
Antoine Turquet, NORSAR
Tina Kaschwich, NORSAR

Popular version of 1pPAb3 – Exploring a planet with infrasound: challenges in probing the subsurface and the atmosphere
Presented at the 186th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0026837

–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–

Low-frequency sound, called infrasound, can help us better understand our atmosphere and explore distant planetary atmospheres and interiors.

Low-frequency sound waves below 20 Hz, known as infrasound, are inaudible to the human ear. They can be generated by a variety of natural phenomena, including volcanoes, ocean waves, and earthquakes. These waves travel over large distances and can be recorded by instruments such as microbarometers, which are sensitive to small pressure variations. The data can give unique insight into the source of the infrasound and the properties of the media it traveled through, whether solid, oceanic, or atmospheric. In the future, infrasound data might be key to building more robust weather prediction models and to understanding the evolution of our solar system.

Infrasound has been used on Earth to monitor stratospheric winds, to analyze the characteristics of man-made explosions, and even to detect earthquakes. But its potential extends beyond our home planet. Infrasound waves generated by meteor impacts on Mars have provided insight into the planet’s shallow seismic velocities, as well as near-surface winds and temperatures. On Venus, recent research suggests that balloons floating in its atmosphere and recording infrasound waves could be one of the few viable ways to detect “venusquakes” and explore the planet’s interior, since surface pressures and temperatures are too extreme for conventional instruments.

Sonification of sound generated by the Flores Sea earthquake as recorded by a balloon flying at 19 km altitude.

Until recently, it has been challenging to connect infrasound signals to the planetary phenomena behind them, including ocean waves, atmospheric winds, and planetary interiors. However, our research team and collaborators have made significant strides in this field, developing tools to unlock the potential of infrasound-based planetary research. We retrieve the connections between source properties, media properties, and sound signatures through three different techniques: (1) training neural networks to learn the complex relationships between observed waveforms and source and media characteristics, (2) performing large-scale numerical simulations of seismic and sound waves from earthquakes and explosions, and (3) incorporating knowledge about sources and seismic media from adjacent fields, such as geodynamics and atmospheric chemistry, to inform our modeling work. Our recent work highlights the potential of infrasound-based inversions to predict high-altitude winds from the sound of ocean waves with machine learning, to map an earthquake’s mechanism to its local sound signature, and to assess the detectability of venusquakes from high-altitude balloons.
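As an illustration of technique (1), the sketch below trains a small neural network to map infrasound spectral features to a single atmospheric property, here a wind speed. Everything in it, the feature layout, the synthetic data, and the network size, is an assumption made for illustration, not the team’s actual dataset or architecture.

```python
# Hypothetical sketch of technique (1): a neural network mapping infrasound
# spectral features to an atmospheric property. All data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_records, n_bands = 500, 32

# Placeholder inputs: band-averaged infrasound spectra from microbarometers.
X = rng.random((n_records, n_bands))
# Placeholder target: high-altitude wind speed (m/s) with a made-up dependence
# on the low-frequency bands, plus measurement noise.
y = 50 * X[:, :4].mean(axis=1) + rng.normal(0, 2, n_records)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```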

To ensure the long-term success of infrasound research, dedicated Earth missions will be crucial to collect new data, support the development of efficient global modeling tools, and create rigorous inversion frameworks suited to various planetary environments. Nevertheless, infrasound research already shows that tuning into a planet’s whisper unlocks crucial insights into its state and evolution.

Consumer label for the noise properties of tires and road pavements

Ulf Sandberg – ulf.sandberg@vti.se

Swedish National Road and Transport Research Institute (VTI), Linköping, SE-58195, Sweden

Popular version of 1pNSb9 – Acoustic labelling of tires, road vehicles and road pavements: A vision for substantially improved procedures
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022814

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Not many vehicle owners know that they can help reduce traffic noise by making an informed choice of tires, without sacrificing safety or economy. At least you can do so in Europe, where a regulation requires tires to carry a label stating their noise level (among other properties). But the labelling scheme has substantial flaws, for which we propose state-of-the-art and innovative solutions.

This is where consumer labels come in. In most parts of the world, we have consumer labels, including noise levels, on household appliances, lawn mowers, printers, and so on. But when it comes to vehicles, tires, and road pavements, a noise label on the product is rare. So far, it is mandatory only on tires sold in the European Union, and it took a lot of effort by noise researchers to get it accepted alongside the more “popular” labels for energy (rolling resistance) and wet grip (skid resistance). Figure 1 shows and explains the European label.

Figure 1: The present European tire label, which must be attached to all tires sold in the European Union, here supplemented by explanations.

Why so much focus on tires? Figure 2 illustrates how much of the noise energy comes from European car tires compared to “propulsion noise”, i.e. noise from the engine, exhaust, transmission, and fans. At speeds above 50 km/h (31 mph), over 80% of the noise comes from the tires. For trucks and buses the picture is similar, although above 50 km/h the tires’ share may be 50-80%. For electric vehicles, of course, the tires dominate as a noise source at essentially all speeds. Thus, already now, and even more in the future, consumer choices favouring quieter tires will reduce traffic noise exposure. To achieve this, tire labels that include noise are needed, and they must be fair and must discriminate between the quiet and the noisy.

Figure 2: Distribution of tire/road vs propulsion noise. Calculated for typical traffic with 8 % heavy vehicles in Switzerland [Heutschi et al., 2018].

The EU label is a good start, but there are some problems. When we purchased tires and measured their noise (in A-weighted dB), we found almost no correlation between the noise labels and our measured levels. To identify the cause of the problem and suggest improvements, the European Road Administrations (CEDR) funded a project named STEER (Strengthening the Effect of quieter tyres on European Roads), supplemented by a supporting project from the Swedish Road Administration. STEER found two severe problems in the noise measurement procedure: (1) the test track pavement defined in an ISO standard varied considerably from test site to test site, and (2) in many cases only the noisiest tires were measured, and all other tires of the same type (“family”) were labelled with the same value even though they could be up to 6 dB quieter. Such “families” may include over 100 different dimensions, as well as load and speed ratings. Consequently, the labelling system falls far short of its full potential.
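To make the mismatch concrete, the sketch below shows the kind of check involved: correlating labelled noise values against measured levels. The dB values are invented placeholders, not STEER’s data; with a well-functioning label, the correlation should be strongly positive.

```python
# Hypothetical sketch: how well do label values track measured noise levels?
# The dB values below are invented placeholders, not STEER's measurements.
import numpy as np

label_db    = np.array([70, 71, 72, 70, 72, 71, 73, 70])  # values printed on the labels
measured_db = np.array([68, 72, 69, 73, 70, 71, 69, 72])  # A-weighted measured levels

r = np.corrcoef(label_db, measured_db)[0, 1]
print(f"Pearson correlation between label and measurement: {r:.2f}")
```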

The author’s presentation at Acoustics 2023 will deal with the noise labelling problem and suggest in more detail how the measurement procedures can be made much more reproducible and representative. This includes using special reference tires to calibrate test track surfaces, producing such test track surfaces by additive manufacturing (3D printing) from digitally described originals, and calculating noise levels by digital simulation, modelling, and AI. Most, if not all, noise measurements could move indoors (see an existing facility in Figure 3), to be conducted in laboratories equipped with large steel drums. In that case, too, a drum surface made by 3D printing is needed.

 

Figure 3: Laboratory drum facility for measurement of both rolling resistance and noise emission of tires (both for cars and trucks). Note the microphones. The tire is loaded and rolled against one of the three surfaces on the drum. Photo from the Gdansk University of Technology, courtesy of Dr P Mioduszewski.

What is a webchuck?

Chris Chafe – cc@ccrma.stanford.edu

Stanford University
CCRMA / Music
Stanford, CA 94305
United States

Ge Wang
Stanford University

Michael Mulshine
Stanford University

Jack Atherton
Stanford University

Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058

Take all of computer music, advances in programming digital sound, the web, and web browsers, and create an enjoyable playground for sound exploration. That’s Webchuck. Webchuck is a new platform for real-time web-based music synthesis. What would it chuck? Primarily, musical and artistic projects in the form of webapps featuring real-time sound generation. For example, The Metered Tide, in the video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.

Video 1: The Metered Tide

The backing track plays a sonification of a century’s worth of sea level data collected at the location while the musician records the live session. Webchuck has fulfilled a long-sought promise of accessible music making and simple experimentation.

Video 2: The Metered Tide with backing track

Example webapps from this new Webchuck critter are popping up rapidly, and a growing body of musicians and students enjoy how easily they can produce music on any system. New projects are fun to program and can be made to appear anywhere. Sharing work and adapting prior examples is a breeze. New webapps are created by programming in the Chuck music programming language and can be extended with JavaScript for open-ended possibilities.

Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding is the most common way of using Webchuck: developing a program by hearing changes as they are made. Rapid prototyping in sound has been made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck’s ease of abstraction so that programmers can build up from low-level details to higher-level features.

Combining the expressive music programming power of Chuck with the ubiquity of web browsers is a game changer that researchers have observed in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music and then some.

Virtual Reality Musical Instruments for the 21st Century

Rob Hamilton – hamilr4@rpi.edu
Twitter: @robertkhamilton

Rensselaer Polytechnic Institute, 110 8th St, Troy, New York, 12180, United States

Popular version of 1aCA3 – Real-time musical performance across and within extended reality environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018060

Have you ever wanted to just wave your hands and make beautiful music? Sad that your epic air-guitar skills don’t translate into pop/rock superstardom? Given the speed and accessibility of modern computers, it may come as little surprise that artists and researchers have been looking to virtual and augmented reality to build the next generation of musical instruments. Borrowing heavily from video game design, a new generation of digital luthiers is already exploring new techniques to bring the joys and wonders of live musical performance into the 21st century.

Image courtesy of Rob Hamilton.

One such instrument is ‘Coretet’: a virtual reality bowed string instrument that can be reshaped by the user into familiar forms such as a violin, viola, cello, or double bass. While wearing a virtual reality headset such as Meta’s Oculus Quest 2, performers bow and pluck the instrument in familiar ways, albeit without any physical interaction with strings or wood. Sound is generated in Coretet by a computer model of a bowed or plucked string, called a ‘physical model’, driven by the motion of the performer’s hands and their VR game controllers. And, borrowing from multiplayer online games, Coretet performers can join a shared network server and perform music together.
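Coretet’s exact synthesis code is not shown here, but the sketch below illustrates the general idea of a physical model using one classic algorithm, the Karplus-Strong plucked string, in which a short feedback delay line stands in for the vibrating string. In a VR instrument, the performer’s bow speed and pressure would continuously drive such a model rather than a one-shot noise burst.

```python
# Minimal sketch of a classic plucked-string physical model (Karplus-Strong).
# Coretet's actual bowed-string model is more elaborate; this only illustrates
# synthesizing a string sound from a feedback delay line.
import numpy as np

def pluck(freq_hz=220.0, sample_rate=44100, duration_s=2.0, damping=0.996):
    delay = int(sample_rate / freq_hz)      # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, delay)   # a burst of noise acts as the "pluck"
    out = np.empty(int(sample_rate * duration_s))
    for i in range(len(out)):
        out[i] = buf[i % delay]
        # Averaging neighboring samples while damping makes the tone decay
        # and mellow over time, like a real string losing energy.
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

samples = pluck()  # write to a WAV file or stream to an audio device to hear it
```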

Our understanding of music and live musical performance on traditional physical instruments is tightly coupled to time: when a finger plucks a string or a stick strikes a drum head, a sound is generated immediately, without any delay or latency. And while modern computers can stream large amounts of data at nearly the speed of light, significantly faster than the speed of sound, bottlenecks in the CPUs or GPUs themselves, in the code designed to mimic our physical interactions with instruments, or in the network connections between users and computers often introduce latency, making virtual performances feel sluggish or awkward.
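To give a sense of scale, here is a back-of-the-envelope calculation with illustrative values rather than figures from this research: even before any network delay, the audio buffer itself contributes several milliseconds.

```python
# Back-of-the-envelope latency arithmetic (illustrative values, not measurements).
buffer_samples = 256    # a typical audio callback buffer size
sample_rate = 48_000    # samples per second
buffer_latency_ms = 1000 * buffer_samples / sample_rate
print(f"audio buffer alone: {buffer_latency_ms:.1f} ms")   # about 5.3 ms

network_rtt_ms = 40     # a plausible round trip between networked performers
print(f"with a network round trip: {buffer_latency_ms + network_rtt_ms:.1f} ms")
```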

This research focuses on some common causes of this kind of latency and looks at ways that musicians and instrument designers can work around or mitigate it, both technically and artistically.

Coretet overview video: Video courtesy of Rob Hamilton.

Listen to the Toilet — It Could Detect Disease #ASA183

A microphone sensor and machine learning can classify excretion events and identify cholera or other bowel diseases, all without collecting identifiable information.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

NASHVILLE, Tenn., Dec. 5, 2022 – Cholera, a bacterial disease that induces diarrhea, affects millions of people and causes about 150,000 deaths each year. Identifying potential communal spread of such a disease early would alert health professionals and improve the allocation of resources and aid. However, for obvious reasons, monitoring this and other bowel diseases is a sensitive matter.

The sensor in use over a toilet. Credit: Maia Gatlin

In her presentation, “The feces thesis: Using machine learning to detect diarrhea,” Maia Gatlin of the Georgia Institute of Technology will describe how a noninvasive microphone sensor could identify bowel diseases without collecting any identifiable information. The presentation will take place Dec. 5 at 4:35 p.m. Eastern U.S. in Summit C, as part of the 183rd Meeting of the Acoustical Society of America running Dec. 5-9 at the Grand Hyatt Nashville Hotel.

Gatlin and her team tested the technique on audio data from online sources. Each audio sample of an excretion event was transformed into a spectrogram, which essentially captures the sound in an image. Different events produce different features in the audio and the spectrogram. For example, urination creates a consistent tone, while defecation may have a singular tone. In contrast, diarrhea is more random.

Spectrogram images were fed to a machine learning algorithm that learned to classify each event based on its features. The algorithm’s performance was tested against data with and without background noises to make sure it was learning the right sound features, regardless of the sensor’s environment.
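The team’s exact model is not described in this summary, but the sketch below shows the general shape of such a pipeline: turn each audio clip into a log-spectrogram, flatten it into a feature vector, and train a classifier. The synthetic “recordings” (a steady tone versus random broadband bursts, echoing the event descriptions above) and the simple classifier are placeholder assumptions.

```python
# Hypothetical sketch of the pipeline: audio -> spectrogram -> classifier.
# The audio is synthetic placeholder noise, not the study's data, and the
# model is a generic classifier rather than the team's actual one.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs = 16000               # sample rate in Hz
t = np.arange(fs) / fs   # one second of audio per clip

def features(audio):
    # Turn a 1-D audio clip into a flattened log-spectrogram feature vector.
    _, _, sxx = spectrogram(audio, fs=fs, nperseg=256)
    return np.log(sxx + 1e-10).ravel()

X, y = [], []
for _ in range(40):
    tone = np.sin(2 * np.pi * rng.uniform(200, 300) * t)   # consistent tone
    burst = rng.normal(0, 1, fs) * (rng.random(fs) > 0.5)  # random bursts
    X += [features(tone), features(burst)]
    y += [0, 1]

clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))
```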

“The hope is that this sensor, which is small in footprint and noninvasive in approach, could be deployed to areas where cholera outbreaks are a persistent risk,” said Gatlin. “The sensor could also be used in disaster zones (where water contamination leads to spread of waterborne pathogens), or even in nursing/hospice care facilities to automatically monitor bowel movements of patients. Perhaps someday, our algorithm can be used with existing in-home smart devices to monitor one’s own bowel movements and health!”

In the future, Gatlin and her colleagues plan to gather real-world acoustic data so that their machine learning model can adapt to work in a variety of bathroom environments.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASAFALL22&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.