Consumer label for the noise properties of tires and road pavements

Ulf Sandberg – ulf.sandberg@vti.se

Swedish National Road and Transport Research Institute (VTI), SE-58195 Linköping, Sweden

Popular version of 1pNSb9 – Acoustic labelling of tires, road vehicles and road pavements: A vision for substantially improved procedures
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022814

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Not many vehicle owners know that they can help reduce traffic noise, without sacrificing safety or economy, by making an informed choice of tires. At least you can do so in Europe, where a regulation requires that tires be labelled with their noise level (among other properties). But the scheme has substantial flaws, for which we propose solutions based on state-of-the-art and innovative methods.

This is where consumer labels come in. In most parts of the world, we have consumer labels, including noise levels, on household appliances, lawn mowers, printers, etc. But when it comes to vehicles, tires, and road pavements, a noise label on the product is rare. So far, it is mandatory only on tires sold in the European Union, and it took a great deal of effort by noise researchers to get it accepted alongside the more “popular” labels for energy (rolling resistance) and wet grip (skid resistance). Figure 1 shows and explains the European label.

Figure 1: The present European tire label, which must be attached to all tires sold in the European Union, here supplemented by explanations.

Why so much focus on tires? Figure 2 illustrates how much of the noise energy from European cars comes from the tires compared with “propulsion noise”; i.e. noise from the engine, exhaust, transmission, and fans. For speeds above 50 km/h (31 mph), over 80 % of the noise comes from the tires. For trucks and buses the picture is similar, although above 50 km/h the tire share may be 50-80 %. For electrically powered vehicles, of course, the tires dominate almost entirely as a noise source at all speeds. Thus, already now but even more in the future, consumer choices favouring lower-noise tires will have an impact on traffic noise exposure. To achieve this progress, tire labels including noise are needed, and they must be fair and discriminate between the quiet and the noisy.

Figure 2: Distribution of tire/road vs propulsion noise. Calculated for typical traffic with 8 % heavy vehicles in Switzerland [Heutschi et al., 2018].
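Why does the tire share grow with speed? Decibel levels combine by adding sound energies, not dB values, and tire/road noise rises with speed much faster than propulsion noise. The minimal sketch below illustrates this crossover; the coefficients are invented purely for illustration and are not the values behind Figure 2.

```python
import math

# Illustrative crossover between tire/road noise (grows steeply with the
# logarithm of speed) and propulsion noise (nearly speed-independent for
# cars). Coefficients are invented for this sketch only.

def tire_level_db(v_kmh: float) -> float:
    """A-weighted tire/road noise level with a generic log-speed growth."""
    return 70.0 + 30.0 * math.log10(v_kmh / 50.0)

def propulsion_level_db(v_kmh: float) -> float:
    """A-weighted propulsion noise level with a weak speed dependence."""
    return 68.0 + 5.0 * math.log10(v_kmh / 50.0)

def tire_share(v_kmh: float) -> float:
    """Fraction of the total acoustic energy coming from the tires."""
    e_tire = 10 ** (tire_level_db(v_kmh) / 10)   # dB -> relative energy
    e_prop = 10 ** (propulsion_level_db(v_kmh) / 10)
    return e_tire / (e_tire + e_prop)

for v in (30, 50, 80, 110):
    print(f"{v:>3} km/h: tires carry {100 * tire_share(v):.0f}% of the energy")
```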

The EU label is a good start, but there are some problems. When we have purchased tires and made noise measurements on them (in A-weighted dB), there is almost no correlation between the noise labels and our measured dB levels. To identify the cause of the problem and suggest improvements, the European road administrations (CEDR) funded a project named STEER (Strengthening the Effect of quieter tyres on European Roads), complemented by a supporting project funded by the Swedish Road Administration. STEER found two severe problems in the noise measuring procedure: (1) the test track pavement defined in an ISO standard varied considerably from test site to test site, and (2) in many cases only the noisiest tires were measured, and all other tires of the same type (“family”) were labelled with the same value although they could be up to 6 dB quieter. Such “families” may include over 100 different dimensions, as well as load and speed ratings. Consequently, the full potential of the labelling system is far from being used.

The author’s presentation at Acoustics 2023 deals with the noise labelling problem and suggests in more detail how the measurement procedures may be made much more reproducible and representative. This includes using special reference tires to calibrate test track surfaces, producing such test track surfaces by additive manufacturing (3D printing) from digitally described originals, and calculating noise levels by digital simulation, modelling, and AI. Most if not all of the noise measurements can move indoors (see an existing facility in Figure 3) to laboratories equipped with large steel drums. In that case, a drum surface made by 3D printing is needed as well.


Figure 3: Laboratory drum facility for measurement of both rolling resistance and noise emission of tires (both for cars and trucks). Note the microphones. The tire is loaded and rolled against one of the three surfaces on the drum. Photo from the Gdansk University of Technology, courtesy of Dr P Mioduszewski.

What is a webchuck?

Chris Chafe – cc@ccrma.stanford.edu

Stanford University
CCRMA / Music
Stanford, CA 94305
United States

Ge Wang
Stanford University

Michael Mulshine
Stanford University

Jack Atherton
Stanford University

Popular version of 1aCA1 – What would a Webchuck Chuck?
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018058

Take all of computer music, advances in programming digital sound, the web, and web browsers, and create an enjoyable playground for sound exploration. That’s Webchuck. Webchuck is a new platform for real-time web-based music synthesis. What would it chuck? Primarily, musical and artistic projects in the form of webapps featuring real-time sound generation. For example, The Metered Tide, in the video below, is a composition for electric cellist and the tides of San Francisco Bay. A Webchuck webapp produces a backing track that plays in a mobile phone browser, as shown in the second video.

Video 1: The Metered Tide

The backing track plays a sonification of a century’s worth of sea level data collected at the location while the musician records the live session. Webchuck has fulfilled a long-sought promise for accessible music making and simplicity of experimentation.

Video 2: The Metered Tide with backing track

Example webapps from this new Webchuck critter are popping up rapidly, and a growing community of musicians and students enjoys how easily it lets them produce music on any system. New projects are fun to program and can be made to appear anywhere. Sharing work and adapting prior examples is a breeze. New webapps are created by programming in the Chuck musical programming language and can be extended with JavaScript for open-ended possibilities.

Webchuck is deeply rooted in the computer music field. Scientists and engineers enjoy the precision that comes with its parent language, Chuck, and the ease with which large-scale audio programs can be designed for real-time computation within the browser. Similar capabilities in the past have relied on special-purpose apps requiring installation (often proprietary). Webchuck is open source, runs everywhere a browser does, and newly spawned webapps are available as freely shared links. As in any browser application, interactive graphics and interface objects (sliders, buttons, lists of items, etc.) can be included. Live coding is the most common way of using Webchuck: developing a program by hearing changes as they are made. Rapid prototyping in sound has been made possible by the Web Audio API browser standard, and Webchuck combines this with Chuck’s ease of abstraction so that programmers can build up from low-level details to higher-level features.

Combining the expressive music programming power of Chuck with the ubiquity of web browsers is a game changer that researchers have observed in recent teaching experiences. What could a Webchuck chuck? Literally everything that has been done before in computer music and then some.

Virtual Reality Musical Instruments for the 21st Century

Rob Hamilton – hamilr4@rpi.edu
Twitter: @robertkhamilton

Rensselaer Polytechnic Institute, 110 8th St, Troy, New York, 12180, United States

Popular version of 1aCA3 – Real-time musical performance across and within extended reality environments
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018060

Have you ever wanted to just wave your hands to make beautiful music? Sad that your epic air-guitar skills don’t translate into pop/rock superstardom? Given the speed and accessibility of modern computers, it may come as little surprise that artists and researchers have been looking to virtual and augmented reality to build the next generation of musical instruments. Borrowing heavily from video game design, a new generation of digital luthiers is already exploring new techniques to bring the joys and wonders of live musical performance into the 21st Century.

Image courtesy of Rob Hamilton.

One such instrument is ‘Coretet’: a virtual reality bowed string instrument that can be reshaped by the user into familiar forms such as a violin, viola, cello, or double bass. While wearing a virtual reality headset such as Meta’s Oculus Quest 2, performers bow and pluck the instrument in familiar ways, albeit without any physical interaction with strings or wood. Sound is generated in Coretet using a computer model of a bowed or plucked string, called a ‘physical model’, driven by the motion of the performer’s hands and their VR game controllers. And, borrowing from multiplayer online games, Coretet performers can join a shared network server and perform music together.
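This summary does not spell out which physical model Coretet uses. As an illustrative stand-in, the sketch below implements the classic Karplus-Strong plucked-string algorithm, a well-known minimal example of how a physical model turns a short excitation into a decaying string tone.

```python
import numpy as np

# Karplus-Strong plucked string: a noise burst circulates through a short
# delay line whose length sets the pitch; averaging adjacent samples acts
# as a low-pass "string" filter so the tone decays naturally.

def karplus_strong(freq_hz=220.0, duration_s=2.0, sample_rate=44100,
                   damping=0.996):
    n = int(sample_rate / freq_hz)        # delay-line length sets the pitch
    delay = np.random.uniform(-1, 1, n)   # noise burst models the pluck
    out = np.empty(int(duration_s * sample_rate))
    for i in range(len(out)):
        out[i] = delay[i % n]
        # low-pass feedback: average the current and next delay samples
        delay[i % n] = damping * 0.5 * (delay[i % n] + delay[(i + 1) % n])
    return out

samples = karplus_strong()
print(f"Synthesized {len(samples)} samples of a 220 Hz plucked string")
```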

Our understanding of music, and of live performance on traditional physical instruments, is tightly coupled to time: when a finger plucks a string or a stick strikes a drum head, we expect a sound immediately, without any delay or latency. And while modern computers can stream large amounts of data at nearly the speed of light, significantly faster than the speed of sound, bottlenecks in the CPUs or GPUs themselves, in the code designed to mimic our physical interactions with instruments, or in the network connections between users and computers often introduce latency, making virtual performances feel sluggish or awkward.
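A rough latency budget shows how quickly these delays add up. The numbers below are illustrative assumptions, not measurements from Coretet; ensemble playing is commonly said to start feeling awkward once the total exceeds roughly 20-30 ms.

```python
# Back-of-the-envelope latency budget for a networked VR instrument.
# All numbers are assumptions chosen for illustration.

SAMPLE_RATE = 48_000   # Hz
BUFFER_SIZE = 256      # samples per audio callback

audio_buffer_ms = 1000 * BUFFER_SIZE / SAMPLE_RATE   # ~5.3 ms per buffer
controller_to_render_ms = 11.1                       # one 90 Hz VR frame
network_round_trip_ms = 30.0                         # assumed regional ping

total_ms = audio_buffer_ms + controller_to_render_ms + network_round_trip_ms
print(f"audio buffer : {audio_buffer_ms:.1f} ms")
print(f"VR frame     : {controller_to_render_ms:.1f} ms")
print(f"network RTT  : {network_round_trip_ms:.1f} ms")
print(f"total        : {total_ms:.1f} ms")
```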

This research focuses on some common causes for this kind of latency and looks at ways that musicians and instrument designers can work around or mitigate these latencies both technically and artistically.

Coretet overview video: Video courtesy of Rob Hamilton.

Listen to the Toilet — It Could Detect Disease #ASA183

Microphone sensor and machine learning can classify excretion events, identify cholera or other bowel diseases, all without identifiable information.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

NASHVILLE, Tenn., Dec. 5, 2022 – Cholera, a bacterial disease that induces diarrhea, affects millions of people and results in about 150,000 deaths each year. Identifying potential communal disease spread for such an outbreak would alert health professionals early and improve the allocation of resources and aid. However, for obvious reasons, monitoring this and other bowel diseases is a sensitive matter.

The sensor in use over a toilet. Credit: Maia Gatlin

In her presentation, “The feces thesis: Using machine learning to detect diarrhea,” Maia Gatlin of the Georgia Institute of Technology will describe how a noninvasive microphone sensor could identify bowel diseases without collecting any identifiable information. The presentation will take place Dec. 5 at 4:35 p.m. Eastern U.S. in Summit C, as part of the 183rd Meeting of the Acoustical Society of America running Dec. 5-9 at the Grand Hyatt Nashville Hotel.

Gatlin and her team tested the technique on audio data from online sources. Each audio sample of an excretion event was transformed into a spectrogram, which essentially captures the sound in an image. Different events produce different features in the audio and the spectrogram. For example, urination creates a consistent tone, while defecation may have a singular tone. In contrast, diarrhea is more random.
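The presentation summary does not specify the exact transform, but a common recipe for turning an audio clip into a spectrogram image looks like the sketch below; the file name is a placeholder.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

# Read a clip (hypothetical file name) and mix it down to mono.
rate, audio = wavfile.read("excretion_event.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)

# Short-time Fourier analysis: each column is the spectrum of one window.
freqs, times, sxx = signal.spectrogram(
    audio.astype(np.float64),
    fs=rate,
    nperseg=1024,    # window length in samples
    noverlap=512,    # 50% overlap between windows
)
log_sxx = 10 * np.log10(sxx + 1e-12)   # dB scale; epsilon avoids log(0)
print(f"Spectrogram image: {log_sxx.shape[0]} freq bins x "
      f"{log_sxx.shape[1]} time frames")
```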

Spectrogram images were fed to a machine learning algorithm that learned to classify each event based on its features. The algorithm’s performance was tested against data with and without background noises to make sure it was learning the right sound features, regardless of the sensor’s environment.
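The summary does not name the algorithm used. A typical choice for classifying spectrogram images is a small convolutional neural network; the sketch below is a stand-in with an assumed class list and input size, not the authors’ actual model.

```python
import torch
from torch import nn

# Assumed event classes and a 128x128 single-channel spectrogram input.
CLASSES = ["urination", "defecation", "diarrhea", "flatulence"]

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, len(CLASSES)),  # 128 -> 64 -> 32 after pooling
)

batch = torch.randn(8, 1, 128, 128)   # stand-in batch of spectrograms
logits = model(batch)
print(logits.shape)                   # torch.Size([8, 4]): one score per class
```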

“The hope is that this sensor, which is small in footprint and noninvasive in approach, could be deployed to areas where cholera outbreaks are a persistent risk,” said Gatlin. “The sensor could also be used in disaster zones (where water contamination leads to spread of waterborne pathogens), or even in nursing/hospice care facilities to automatically monitor bowel movements of patients. Perhaps someday, our algorithm can be used with existing in-home smart devices to monitor one’s own bowel movements and health!”

In the future, Gatlin and her colleagues plan to gather real-world acoustic data so that their machine learning model can adapt to work in a variety of bathroom environments.

———————– MORE MEETING INFORMATION ———————–
Main meeting website: https://acousticalsociety.org/asa-meetings/
Technical program: https://eppro02.ativ.me/web/planner.php?id=ASAFALL22&proof=true

ASA PRESS ROOM
In the coming weeks, ASA’s Press Room will be updated with newsworthy stories and the press conference schedule at https://acoustics.org/asa-press-room/.

LAY LANGUAGE PAPERS
ASA will also share dozens of lay language papers about topics covered at the conference. Lay language papers are 300 to 500 word summaries of presentations written by scientists for a general audience. They will be accompanied by photos, audio, and video. Learn more at https://acoustics.org/lay-language-papers/.

PRESS REGISTRATION
ASA will grant free registration to credentialed and professional freelance journalists. If you are a reporter and would like to attend the meeting or virtual press conferences, contact AIP Media Services at media@aip.org.  For urgent requests, AIP staff can also help with setting up interviews and obtaining images, sound clips, or background information.

ABOUT THE ACOUSTICAL SOCIETY OF AMERICA
The Acoustical Society of America (ASA) is the premier international scientific society in acoustics devoted to the science and technology of sound. Its 7,000 members worldwide represent a broad spectrum of the study of acoustics. ASA publications include The Journal of the Acoustical Society of America (the world’s leading journal on acoustics), JASA Express Letters, Proceedings of Meetings on Acoustics, Acoustics Today magazine, books, and standards on acoustics. The society also holds two major scientific meetings each year. See https://acousticalsociety.org/.

Machine Learning Diagnoses Pneumonia by Listening to Coughs #ASA183

A new algorithm could spot early signs of respiratory diseases in hospitals and at home.

Media Contact:
Ashley Piccone
AIP Media
301-209-3090
media@aip.org

NASHVILLE, Tenn., Dec. 5, 2022 – Pneumonia is one of the world’s leading causes of death and affects over a million people a year in the United States. The disease disproportionately impacts children, older adults, and hospitalized patients. To give them the greatest chance at recovery, it is crucial to catch and treat it early. Existing diagnosis methods consist of a range of blood tests and chest scans, and a doctor needs to suspect pneumonia before ordering them.

A machine learning algorithm identifies cough sounds and determines whether the subject is suffering from pneumonia. Credit: Jin Yong Jeon

Jin Yong Jeon of Hanyang University will discuss a technique to diagnose pneumonia through passive listening in his session, “Pneumonia diagnosis algorithm based on room impulse responses using cough sounds.” The presentation will take place Dec. 5 at 4:20 p.m. Eastern U.S. in Summit C, as part of the 183rd Meeting of the Acoustical Society of America running Dec. 5-9 at the Grand Hyatt Nashville Hotel.

Jeon and fellow researchers developed a machine learning algorithm to identify cough sounds and determine whether the subject was suffering from pneumonia. Because every room and recording device is different, they augmented their recordings with room impulse responses, which measure how the acoustics of a space react to different sound frequencies. By combining this data with the recorded cough sounds, the algorithm can work in any environment.
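The augmentation step can be pictured as a convolution: filtering a “dry” recording with a room impulse response (RIR) simulates how that cough would be heard in that room. A minimal sketch, assuming mono placeholder files, follows.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; both assumed to be mono recordings.
rate, cough = wavfile.read("cough_dry.wav")
_, rir = wavfile.read("room_impulse.wav")

cough = cough.astype(np.float64)
rir = rir.astype(np.float64)

# Convolving with the RIR "places" the cough in the measured room.
wet = fftconvolve(cough, rir)[: len(cough)]
wet *= np.max(np.abs(cough)) / (np.max(np.abs(wet)) + 1e-12)  # match level

wavfile.write("cough_in_room.wav", rate, wet.astype(np.float32))
```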

“Automatically diagnosing a health condition through information on coughing sounds that occur continuously during daily life will facilitate non-face-to-face treatment,” said Jeon. “It will also be possible to reduce overall medical costs.”

Currently, one company has plans to apply this algorithm for remote patient monitoring. The team is also looking to implement it as an app for in-home care, and they plan to make the experience simpler and more user-friendly.

“Our research team is planning to automate each step-by-step process that is currently performed manually to improve convenience and applicability,” said Jeon.


2pCA8 – Sonic boom propagation using an improved ray tracing technique

Kimberly Riegel – kriegel@qcc.cuny.edu
William Costa
George Seaton
Christian Gomez
Queensborough Community College
222-05 56th Avenue
Bayside, NY 11364

Popular version of 2pCA8 – Sonic boom propagation in a non-homogeneous atmosphere using a stratified ray tracing technique
Presented Tuesday afternoon, November 30, 2021
181st ASA Meeting
Click here to read the abstract

Supersonic air travel could cut flight times in half, vastly improving long-range air travel. To make this type of travel commercially viable, however, the current ban on overland supersonic flight would need to be lifted while still protecting residents below from the high noise levels along the flight paths of these new aircraft. Investment in supersonic aircraft has recently increased. United Airlines has agreed to purchase 15 supersonic jets from Boom Supersonic; these aircraft are expected to fly in 2029 but will remain restricted to over-water flight. Lockheed Martin, in partnership with NASA, is building a low-boom demonstrator aircraft, which is expected to perform some community-based test flights next year. A computationally efficient tool that can predict the impact of sonic booms in urban areas would therefore be useful for researchers and legislators.

Previously, we developed a ray tracing simulation tool to predict sound behavior in urban environments. The simulation can read in 3D renderings of the environment, making it possible to simulate any complicated shape, including detailed buildings and groups of buildings. All surfaces are represented by a mesh of triangular faces; the more complicated the building, the more triangles are required to represent it accurately. The biggest limitation of the code was that one simulation of a complicated building could take several days to complete. The purpose of this work is to reduce the computational time, making the numerical simulation more accessible without sacrificing the accuracy of the results.
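The paper does not give its intersection routine, but the core operation of any such ray tracer is testing a ray against a triangular facet. The standard Möller-Trumbore test, sketched below for illustration, is a representative building block.

```python
import numpy as np

# Moller-Trumbore ray/triangle intersection: returns the hit distance t
# along the ray, or None if the ray misses the triangle.

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv             # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv     # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None      # hit must lie in front of the origin

t = ray_hits_triangle(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                      np.array([-1.0, -1.0, 5.0]),
                      np.array([2.0, -1.0, 5.0]),
                      np.array([-1.0, 2.0, 5.0]))
print(t)   # 5.0: the ray meets the triangle 5 units from its origin
```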

To reduce the computation time for complex geometries, the entire environment is cut into horizontal slices, and only the slice containing the current origin of the ray is considered at any one time. This allows a significant reduction in the number of building facets that need to be assessed at each step. Figure 1 shows the total building in grey and the slice under consideration in green.


Figure 1. Representation of a simple building/ray interaction and the horizontal slices into which the building is segmented.
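A minimal sketch of the slicing idea follows: triangles are binned by the horizontal slices their vertical extent overlaps, so a ray only needs to be tested against the facets in the slice containing its current origin. The names and slice count are illustrative.

```python
import numpy as np

def build_slices(triangles, z_min, z_max, n_slices):
    """triangles: (N, 3, 3) array of vertex coordinates.
    Returns, for each slice, the indices of triangles overlapping it."""
    edges = np.linspace(z_min, z_max, n_slices + 1)
    lo = triangles[:, :, 2].min(axis=1)   # each triangle's z-range
    hi = triangles[:, :, 2].max(axis=1)
    # a triangle belongs to every slice its z-range overlaps
    return [np.flatnonzero((lo < edges[i + 1]) & (hi > edges[i]))
            for i in range(n_slices)], edges

def candidates_for(origin, slices, edges):
    """Indices of the only facets a ray starting at `origin` can hit
    before it crosses into the next slice (where it is re-binned)."""
    i = np.clip(np.searchsorted(edges, origin[2]) - 1, 0, len(slices) - 1)
    return slices[i]

tris = np.random.rand(1000, 3, 3) * 100.0           # toy mesh: 1000 facets
slices, edges = build_slices(tris, 0.0, 100.0, 10)
ray_origin = np.array([5.0, 5.0, 42.0])
print(len(candidates_for(ray_origin, slices, edges)), "of", len(tris),
      "facets need testing for this ray segment")
```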

To determine how the modifications improved the code, several environments were run and the results compared with those from the previous version. Table 1 shows the improvements. The timing of the different versions makes clear that the updates have drastically reduced the computation times for complex environments, while the resulting pressures at the receivers show no noticeable difference. This will improve the usability of the simulation and make it more convenient to predict sonic booms in urban areas.