Decibel Diversity: A Sonic Exploration of Varied Noise Requirements on Inland Rail

Arvind Deivasigamani – adeivasigamani@slrconsulting.com

Associate – Acoustics and Vibration, SLR Consulting Australia Pty Ltd, Melbourne, Victoria, 3002, Australia

Aaron McKenzie
Technical Director – Acoustics and Vibration
SLR Consulting Australia Pty Ltd

Susan Kay
Senior Program Environment Advisor – Acoustics
Australian Rail Track Corporation

Popular version of 1pNSb3 – Rail Noise Across Three States in Australia – Operational Noise Assessment on Inland Rail
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022808

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

How do we manage noise emissions from the largest rail project in Australia? The answer is not trivial, especially when the project spans the three eastern states of Australia. Inland Rail, currently Australia's longest rail project, is a proposed 1600 km rail line that will move freight between Melbourne and Brisbane in 24 hours via the states of Victoria, New South Wales (NSW) and Queensland, using a combination of new rail infrastructure and upgrades to existing infrastructure.

Image courtesy of inlandrail.com.au

Each state regulates and manages rail noise differently through its own guidelines and policy documents. Victoria and NSW apply separate day and night decibel thresholds, whilst Queensland applies a 24-hour exposure threshold. Similarly, for sections where existing rail is being upgraded, the three states set slightly different thresholds: an absolute threshold in Queensland, or a combination of an absolute threshold and a relative increase in noise in Victoria and NSW. Furthermore, factors that affect rail noise, such as rail speeds, track joints, level crossing bells and train horns, are treated differently across the three states. The modelling of future rail noise levels therefore needs to account carefully for these differences so that predicted impacts in each jurisdiction can be assessed against the respective thresholds.

One important parameter for assessing rail noise impacts is the pass-by maximum noise level (Lmax). This parameter is critical for a freight-dominated project like Inland Rail as it quantifies the impact of locomotives as they pass residences. Typically, this is assessed as a 95th percentile Lmax, which means that unusually rare and loud events are excluded (as they fall within the top 5%). In Queensland, however, the criterion is a Single Event Maximum (SEM), defined as the arithmetic average of the 15 loudest pass-by maximum levels within a given 24-hour period. This parameter is challenging to predict, especially for new rail infrastructure where it is not possible to measure the SEM in the field. To overcome this challenge, a prediction method based on a 'Monte Carlo' statistical model was adopted. In this model, rail pass-by noise levels are randomly drawn from databases of measured pass-by noise levels to simulate the noise levels on a given day, and the 15 loudest simulated levels are averaged to obtain the SEM. This random selection of train pass-bys is repeated several thousand times to obtain a trend and derive the most likely SEM that can be expected in the field. This prediction technique was tested on existing rail lines and found to correlate well with field measurements.
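The Monte Carlo procedure described above can be sketched in a few lines of code. The snippet below is an illustrative simplification, not the authors' model: the pass-by level database, the number of trains per day, and the sampling scheme are all hypothetical placeholders chosen for the example.

```python
import random
import statistics

def simulate_sem(passby_db_levels, trains_per_day, n_trials=5000, seed=1):
    """Illustrative Monte Carlo estimate of the Queensland Single Event
    Maximum (SEM): the arithmetic average of the 15 loudest pass-by
    maximum levels (Lmax) within a 24-hour period.

    passby_db_levels: database of measured pass-by Lmax values (dB)
    trains_per_day:   number of pass-bys simulated per 24-hour period
    """
    rng = random.Random(seed)
    sems = []
    for _ in range(n_trials):
        # Randomly draw one day's worth of pass-by maximum levels
        day = [rng.choice(passby_db_levels) for _ in range(trains_per_day)]
        # SEM for that day: arithmetic mean of the 15 loudest events
        loudest_15 = sorted(day, reverse=True)[:15]
        sems.append(statistics.mean(loudest_15))
    # Most likely SEM: the median over all simulated days
    return statistics.median(sems)

# Hypothetical database of pass-by Lmax levels (dB), for illustration only
levels = [78, 80, 81, 82, 83, 84, 85, 86, 87, 88]
sem = simulate_sem(levels, trains_per_day=40)
print(round(sem, 1))
```

Because the SEM averages only the 15 loudest events of the day, the estimate sits well above the average pass-by level, which is why it cannot simply be read off a long-term average.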

There is a need for a consistent project-wide rail noise criterion that addresses all the nuanced differences between the state criteria, whilst remaining simple for all stakeholders to implement and understand. We recommend technical assessments and engagement with state authorities early in the project development phase to investigate noise emissions and controls and to develop appropriate criteria. Once approved, the project criteria can be applied across all sections of the project so that residents adjacent to the project receive a consistent outcome.

Why Australian Aboriginal languages have small vowel systems

Andrew Butcher – endymensch@gmail.com

Flinders University, GPO Box 2100, Adelaide, SA, 5001, Australia

Popular version of 1pSC6 – On the Small Flat Vowel Systems of Australian Languages
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022855

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Australia originally had 250-350 Aboriginal languages. Today, about 20 of these survive, and none has more than 5,000 speakers. Most of the original languages shared very similar sound systems. About half of them had just three vowels, another 10% or so had four, and a further 25% or so had a five-vowel system. Only 16% of the world's languages have a vowel inventory of four or fewer (the average number is six; some Germanic languages, such as Danish, have 20 or so).

This paper asks why many Australian languages have so few vowels. Our research shows that the vowels of Aboriginal languages are much more ‘squashed down’ in the acoustic space than those of European languages (Fig 1), indicating that the tongue does not come as close to the roof of the mouth as in European languages. The two ‘closest’ vowels are [e] (a sound with the tongue at the front of the mouth, between ‘pit’ and ‘pet’) and [o] (at the back of the mouth with rounded lips, between ‘put’ and ‘pot’). The ‘open’ (low-tongue) vowel is best transcribed [ɐ], a sound between ‘pat’ and ‘putt’, but with a less open jaw. Four- and five-vowel systems squeeze the extra vowels in between these, adding [ɛ] (between ‘pet’ and ‘pat’) and [ɔ] (more or less exactly as in ‘pot’), with little or no expansion of the acoustic space. Thus, the majority of Australian languages lack any true close (high-tongue) vowels (as in ‘peat’ and ‘pool’).

So why do Australian languages have a ‘flattened’ vowel space? The answer may lie in the ears of the speakers rather than in their mouths. Aboriginal Australians have by far the highest prevalence of chronic middle ear infection in the world. Our research with Aboriginal groups of diverse age, language and geographical location shows 30-60% of speakers have a hearing impairment in one or both ears (Fig 2). Nearly all Aboriginal language groups have developed an alternate sign language to complement the spoken one. Our previous analysis has shown that the sound systems of Australian languages resemble those of individual hearing-impaired children in several important ways, leading us to hypothesise that the consonant systems and the word structure of these languages have been influenced by the effects of chronic middle ear infection over generations.

A reduction in the vowel space is another of these resemblances. Middle ear infection affects the low frequency end of the scale (under 500 Hz), thus reducing the prominence of the distinctive lower resonances of close vowels, such as in ‘peat’ and ‘pool’ (Fig 3). It is possible that, over generations, speakers have raised the frequencies of these resonances to make them more hearable, thereby constricting the acoustic space the languages use. If so, we may ask whether, on purely acoustic grounds, communicating in an Aboriginal language in the classroom – using a sound system optimally attuned to the typical hearing profile of the speech community – might offer improved educational outcomes for indigenous children in the early years.

Playability maps as aid for musicians

Vasileios Chatziioannou – chatziioannou@mdw.ac.at

Department of Music Acoustics, University of Music and Performing Arts Vienna, Vienna, Vienna, 1030, Austria

Alex Hofmann
Department of Music Acoustics
University of Music and Performing Arts Vienna
Vienna, Vienna, 1030
Austria

Popular version of 5aMU6 – Two-dimensional playability maps for single-reed woodwind instruments
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023675

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Musicians show incredible flexibility when generating sounds with their instruments. Nevertheless, some control parameters need to stay within certain limits for sound to be produced at all. Take, for example, a clarinet player. Using too much or too little blowing pressure results in no sound being produced by the instrument: the required pressure (depending on the note being played and other instrument properties) has to stay within certain limits. A way to study these limits is to generate ‘playability diagrams’. Such diagrams have commonly been used to analyze bowed-string instruments, but may also be informative for wind instruments, as suggested by Woodhouse at the 2023 Stockholm Music Acoustics Conference. Following this direction, such diagrams in the form of playability maps can highlight the playable regions of a musical instrument, subject to variation of certain control parameters, and eventually support performers in choosing their equipment.

One way to fill in these diagrams is via physical modeling simulations. Such simulations allow the generated sound to be predicted while slowly varying some of the control parameters. Figure 1 shows such an example, where a playability region is obtained while varying the blowing pressure and the stiffness of the clarinet reed. (In fact, the parameter varied on the y-axis is the effective stiffness per unit area of the reed, corresponding to the reed stiffness after it has been mounted on the mouthpiece with the musician’s lip in contact with it.) Black regions indicate ‘playable’ parameter combinations, whereas white regions indicate parameter combinations where no sound is produced.

Figure 1: Pressure-stiffness playability map. The black regions correspond to parameter combinations that generate sound.

One possible observation is that when players wish to play with a larger blowing pressure (resulting in louder sounds), they should use stiffer reeds. As indicated by the plot, for a reed of stiffness per area equal to 0.6 Pa/m (a soft reed) it is not possible to generate a note with a blowing pressure above 2750 Pa. When using a harder reed (say with a stiffness of 1 Pa/m) one can play with larger blowing pressures, but in this case it is impossible to play with a pressure lower than 3200 Pa. Varying other types of control parameters could highlight similar effects for various instrument properties. For instance, playability maps for different mouthpiece geometries could be obtained, which would be valuable information for musicians and instrument makers alike.
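To illustrate how such a map is assembled, the sketch below sweeps a grid of blowing pressures and reed stiffnesses through a toy playability test. The threshold formulas and every coefficient in them are invented for illustration; they are not the physical model used in the study, where playable regions come from time-domain simulations of the reed-bore system.

```python
def playable(blowing_pressure_pa, stiffness_pa_per_m):
    """Hypothetical playability test: sound is produced only between a
    lower oscillation-threshold pressure and an upper 'choking' pressure,
    both of which rise with reed stiffness. Coefficients are illustrative."""
    lower = 1500.0 * stiffness_pa_per_m + 500.0   # oscillation threshold (Pa)
    upper = 3500.0 * stiffness_pa_per_m + 1500.0  # choking pressure (Pa)
    return lower <= blowing_pressure_pa <= upper

def playability_map(pressures, stiffnesses):
    """Return a 2-D grid of booleans: True = 'black' (playable) cell."""
    return [[playable(p, k) for p in pressures] for k in stiffnesses]

pressures = [1000 + 250 * i for i in range(17)]   # 1000..5000 Pa on the x-axis
stiffnesses = [0.4 + 0.1 * j for j in range(9)]   # 0.4..1.2 Pa/m on the y-axis
grid = playability_map(pressures, stiffnesses)

# Crude text rendering of the map, stiffness increasing upward as in Fig 1
for row in reversed(grid):
    print("".join("#" if cell else "." for cell in row))
```

Even this toy model reproduces the qualitative pattern described above: as stiffness increases, both the minimum and the maximum usable blowing pressure shift upward, so the playable band slides diagonally across the map.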

Documenting the sounds of southwest Congo: the case of North Boma

Lorenzo Maselli – lorenzo.maselli@ugent.be

Instagram: @mundenji

FWO, UGent, UMons, BantUGent, Ghent, Oost-Vlaanderen, 9000, Belgium

Popular version of 1aSC2 – Retroflex nasals in the Mai-Ndombe (DRC): the case of nasals in North Boma B82
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022724

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

“All language sounds are equal but some language sounds are more equal than others” – or, at least, that is the case in academia. While French i’s and English t’s are constantly re-dotted and re-crossed, the vast majority of the world’s linguistic communities remain undocumented, with their unique sound heritage gradually fading into silence. The preservation of humankind’s linguistic diversity relies solely on detailed documentation and description.

Over the past few years, a team of linguists from Ghent, Mons, and Kinshasa have dedicated their efforts to recording the phonetic and phonological oddities of southwest Congo’s Bantu varieties. Among these, North Boma (Figure 1) stands out for its display of rare sounds known as “retroflexes”. These sounds are particularly rare in central Africa, which mirrors a more general state of under-documentation of the area’s sound inventories. Through extensive fieldwork in the North Boma area, meticulous data analysis, and advanced statistical processing, these researchers have unveiled the first comprehensive account of North Boma’s retroflexes. As it turns out, North Boma retroflexes are exclusively nasal, a striking typological circumstance. Their work, presented in Sydney this year, not only enriches our understanding of these unique consonants but also unveils potential historical implications behind their prevalence in the region.

Figure 1 – the North Boma area

The study highlights the remarkable salience of North Boma’s retroflexes, characterised by distinct acoustic features that sometimes align and sometimes deviate from those reported in the existing literature. This is clearly shown in Figure 2, where the North Boma nasal space is plotted using a technique known as “Multiple Factor Analysis” allowing for the study of small corpora organised into clear variable groups. As can be seen, their behaviour differs greatly from that of the other nasals of North Boma. This uniqueness also suggests that their presence in the area may stem from interactions with long-lost hunter-gatherer forest languages, providing invaluable insights into the region’s history.

Figure 2 – MFA results show that retroflex and non-retroflex nasals behave very differently in North Boma

Extraordinary sound patterns are waiting to be discovered in the least documented language communities of the world. North Boma is just one compelling example among many. As we head towards an unprecedented language loss crisis, the imperative for detailed phonetic documentation becomes increasingly evident.

The Secret Symphony of City Nightlife: Unveiling the Soundscapes of Pubs and Bars

Wai Ming To – wmto@mpu.edu.mo

Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, Macao, 00000, Macao

Andy Chung

Popular version of 3aNSb – Noise Dynamics in City Nightlife: Assessing Impact and Potential Solutions for Residential Proximity to Pubs and Bars
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023229

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

Picture a typical evening in the heart of a bustling city: pubs and bars come alive, echoing with laughter, music, and the clink of glasses. These hubs of social life create a vibrant tapestry of sounds. But what happens when this symphony overshadows the tranquility of those living just around the corner?

Image courtesy of Kvikoo, Singapore

Our journey begins in the lively interiors of these establishments. In countries rich in nightlife, you’ll find a high concentration of pubs and bars – sometimes up to 150 per 100,000 people. Inside a pub in Hong Kong, for instance, noise levels can soar to 80 decibels during peak hours, akin to the din of city traffic. Even during ‘happy hours,’ the decibel count hovers around 75, still significant.

But let’s step outside these walls. Here, the story takes a different turn. In residential areas adjacent to these nightspots, the evening air is often filled with an unintended soundtrack: the persistent hum of nightlife. In a study from Macedonia, for instance, residents experienced noise levels of about 67 decibels in the evening – a consistent background murmur disrupting the peace of homes.

This issue isn’t just about sound; it’s about the voices of those affected. Residents’ complaints about noise pollution have become a chorus in many parts of the world, including the United Kingdom, Hong Kong, and Australia. These complaints highlight a pressing question: How can we maintain the lively spirit of our cities while respecting the need for quiet?

Governments and communities are tuning into this challenge. Their responses, colored by cultural and historical factors, range from strict regulations to innovative solutions. For example, in Hong Kong, efforts to control noise at its source, as detailed in a government booklet, showcase one way of striking a balance.

This is a story of harmony – finding a middle ground where the joyous buzz of pubs and bars coexists with the serene rhythm of residential life. It’s about understanding that in the symphony of city life, every note, from the loudest cheer to the softest whisper, plays a crucial role.

Behaviors produced by a variety of sounds among eagles: A study with survival implications

JoAnn McGee – mcgeej@umn.edu

University of Minnesota
75 East River Parkway
Minneapolis, MN 55455
United States

Christopher Feist
Christopher Milliren
Lori Arent
Julia B. Ponder
Peggy Nelson
Edward J. Walsh

Popular version of 3aABb4 – Behavioral responses of bald eagles (Haliaeetus leucocephalus) to acoustic stimuli in a laboratory setting
Presented at the 184th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0018607

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

The ultimate goal of this project is to protect eagles by discouraging these charismatic birds from entering the airspace of wind energy facilities. The specific question under consideration is whether an acoustic cue, a sound, can be used for that purpose: to steer eagles out of harm’s way. Our goal in this particular study was to take the next step along our overall research path and determine whether the behaviors of bald eagles were affected by different sound stimuli in a controlled laboratory environment.

Perhaps as expected, behavioral responses varied significantly. Some birds explored their immediate airspace avidly, while others exhibited a more restrained set of behavioral responses to sound stimulation.

To get a feeling for the task, consider the reaction of this eagle to a sound stimulus in a quiet laboratory setting.

To collect these data, a bird was placed in a sound-damped room and the experiment was conducted from a control center just outside the exposure space. Birds were videotaped as sounds were delivered to one of two speakers, and a group of unbiased judges was asked to (1) determine whether the bird responded to the sound based on its behavior, (2) qualitatively assess the strength of the response, and (3) identify the behaviors associated with the response. Twelve sounds were tested, and judges were instructed to observe the eagle during a specified time window without knowing which sound, if any, had been played. Spectrograms of the sounds tested are shown in the figure.


By far the most common response was an attempt to localize the sound source, indicated by head turning toward a speaker, although birds also frequently tilted their heads in response to stimuli. Females were slightly more responsive to sound stimuli than males, and, not surprisingly, stimuli that elicited a higher number of responses also elicited stronger or more evident responses. Complex and natural sounds, for example, calls produced by eagles and eaglets and the sounds of pesky mobbing crows, elicited more and stronger responses than man-made stimuli. Generally, bald eagles were fairly accurate in locating the direction from which a sound originated, and, as before, females performed better than males.

The results from this study provide a critical step in an effort to protect eagles as we move away from the use of fossil fuels and rely more on wind power. We come away from this study with a better understanding of the types of sound signals that elicit more and stronger responses in bald eagles, and with the confidence that we will be able to objectively assess behavioral responses in more natural settings. We now know what these magnificent birds can hear, and we know that certain sound stimuli are more effective than others in evoking behavioral responses, taking us one step closer to our ultimate goal, to save bald eagles from undesirable outcomes and to give wind facility developers the tools needed to manage their facilities in an even more eco-friendly manner.