Arup, L5 Barrack Place 151 Clarence Street, Sydney, NSW, 2000, Australia
Additional authors: Mitchell Allen (Arup), Kashlin McCutcheon
Popular version of 3aSP4 – Development of a Data Sonification Toolkit and Case Study Sonifying Astrophysical Phenomena for Visually Impaired Individuals
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023301
Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.
Have you ever listened to stars appearing in the night sky?
Acousticians at Arup had the exciting opportunity to collaborate with astrophysicist Chris Harrison to produce data sonifications of astronomical events for visually impaired individuals. The sonifications were presented at the 2019 British Science Festival (at a show entitled A Dark Tour of The Universe).
There are many sonification tools available online. However, many of these tools require in-depth knowledge of computer programming or audio software.
The researchers aimed to develop a sonification toolkit which would allow engineers working at Arup to produce accurate representations of complex datasets in Arup’s spatial audio lab (called the SoundLab), without needing to have an in-depth knowledge of computer programming or audio software.
Using sonifications to analyse data has some benefits over data visualisation. For example:
Humans are capable of processing and interpreting many different sounds simultaneously in the background while carrying out a task (for example, a pilot can focus on flying and still interpret important alarms in the background, without having to turn their attention away to look at a screen or gauge),
The human auditory system is incredibly powerful and flexible, capable of effortlessly performing extremely complex pattern recognition (for example, the health and emotional state of a speaker, as well as the meaning of a sentence, can be determined from just a few spoken words),
and of course, sonification also allows visually impaired individuals the opportunity to understand and interpret data.
The researchers scaled down each stream of astronomical data and mapped it to a parameter of sound, and they successfully used their toolkit to create accurate sonifications of astronomical events for the show at the British Science Festival. The sonifications were vetted for accuracy by visually impaired astronomer Nicolas Bonne.
Information on A Dark Tour of the Universe is available at the European Southern Observatory website, as are links to the sonifications. Make sure you listen to stars appearing in the night sky and galaxies merging! Table 1 gives specific examples of parameter mapping for these two sonifications. The concept of parameter mapping is further illustrated in Figure 1.
Table 1
Figure 1: image courtesy of NASA’s Space Physics Data Facility
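The specific mappings used for the show are those listed in Table 1; as an illustration of the general idea of parameter mapping, the sketch below maps a stream of data values linearly onto pitch and renders each value as a short sine tone. The brightness values, frequency range and file name are invented for illustration and are not from the Arup toolkit:

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def map_value(x, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value in [lo, hi] to a pitch in [f_lo, f_hi] Hz."""
    t = (x - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

def sonify(values, out_path, note_s=0.25):
    """Render each data value as a short sine tone and write a mono WAV file."""
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = map_value(v, lo, hi)
        for n in range(int(RATE * note_s)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / RATE)
            frames += struct.pack("<h", int(sample * 32767))  # 16-bit PCM
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(bytes(frames))

# Hypothetical star-brightness readings, quietest to brightest
sonify([2.1, 3.5, 5.0, 7.2, 9.9], "stars.wav")
```

Here the mapping is a simple linear scaling to pitch; the toolkit described above supports mappings to other sound parameters (such as loudness or spatial position) in the same spirit.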
Associate – Acoustics and Vibration, SLR Consulting Australia Pty Ltd, Melbourne, Victoria, 3002, Australia
Aaron McKenzie
Technical Director – Acoustics and Vibration
SLR Consulting Australia Pty Ltd
Susan Kay
Senior Program Environment Advisor – Acoustics
Australian Rail Track Corporation
Popular version of 1pNSb3 – Rail Noise Across Three States in Australia – Operational Noise Assessment on Inland Rail
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022808
How do we manage noise emissions from the largest rail project in Australia? The answer to that question is not trivial, especially when the project spans the three eastern coast states of Australia. Currently Australia’s longest rail project, Inland Rail is a proposed 1600 km rail line connecting Melbourne to Brisbane via the States of Victoria, New South Wales (NSW) and Queensland, designed to move freight between the two cities within 24 hours using a combination of new rail infrastructure and upgrades of existing infrastructure.
image courtesy of inlandrail.com.au
Rail noise in each State is regulated and managed differently under its respective guidelines and policy documents. Victoria and NSW set separate day and night decibel thresholds, whilst Queensland applies a 24-hour exposure threshold. Similarly, for sections where existing rail lines are being upgraded, the three States apply slightly different thresholds: an absolute threshold in Queensland, and a combination of an absolute threshold and a relative increase in noise in Victoria and NSW. Furthermore, factors which affect rail noise, such as rail speeds, track joints, level crossing bells and train horns, are treated differently across the three States. The modelling of future rail noise levels therefore needs to account carefully for these differences, so that the predicted impacts in each jurisdiction can be assessed against the respective thresholds.
One important parameter for assessing rail noise impacts is the pass-by maximum noise level (Lmax). This parameter is critical for a freight-dominated project like Inland Rail as it quantifies the impact of locomotives as they pass residences. Typically, this is assessed as a 95th percentile Lmax, which means that unusually rare and loud events are excluded (as they fall within the top 5%). However, in Queensland, the criterion is a Single Event Maximum (SEM), defined as the arithmetic average of the 15 loudest pass-by maximum levels within a given 24-hour period. This parameter is challenging to predict, especially for new rail infrastructure where it is not possible to measure the SEM in the field. To overcome this challenge, a prediction method based on a Monte Carlo statistical model was adopted. In this model, rail pass-by noise levels are randomly drawn from databases of measured pass-by noise levels to simulate the noise levels on a given day, and these random values are averaged to obtain the SEM. This random selection of train pass-bys is repeated several thousand times to obtain a trend and derive the most likely SEM that can be expected in the field. This mathematical prediction technique was tested on existing rail lines and found to correlate well with field measurements.
There is a need for consistent project-wide rail noise criteria that address all the nuanced differences between the State requirements, whilst remaining simple for all stakeholders to implement and understand. We recommend technical assessments and engagement with state authorities early in the project development phase to investigate noise emissions and controls and to develop appropriate criteria. Once approved, the project criteria can be applied across all sections of the project to ensure that residents adjacent to the project receive a consistent outcome.
Flinders University, GPO Box 2100, Adelaide, SA, 5001, Australia
Popular version of 1pSC6 – On the Small Flat Vowel Systems of Australian Languages
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022855
Australia originally had 250-350 Aboriginal languages. Today, about 20 of these survive, and none has more than 5,000 speakers. Most of the original languages shared very similar sound systems. About half of them had just three vowels, another 10% or so had four, and a further 25% or so had a five-vowel system. Only 16% of the world’s languages have a vowel inventory of four or fewer (the average number is six; some Germanic languages, such as Danish, have 20 or so).
This paper asks why many Australian languages have so few vowels. Our research shows that the vowels of Aboriginal languages are much more ‘squashed down’ in the acoustic space than those of European languages (Fig 1), indicating that the tongue does not come as close to the roof of the mouth as in European languages. The two ‘closest’ vowels are [e] (a sound with the tongue at the front of the mouth, between ‘pit’ and ‘pet’) and [o] (at the back of the mouth with rounded lips, between ‘put’ and ‘pot’). The ‘open’ (low-tongue) vowel is best transcribed [ɐ], a sound between ‘pat’ and ‘putt’, but with a less open jaw. Four- and five-vowel systems squeeze the extra vowels in between these, adding [ɛ] (between ‘pet’ and ‘pat’) and [ɔ] (more or less exactly as in ‘pot’), with little or no expansion of the acoustic space. Thus, the majority of Australian languages lack any true close (high-tongue) vowels (as in ‘peat’ and ‘pool’).
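One way to quantify how ‘squashed down’ a vowel system is: treat each vowel as a point in formant (F1/F2) space and measure the area of the region its vowels span. The sketch below computes that area as a convex hull; the formant values are purely illustrative, not measured data from this study:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices of a 2-D point set."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    """Shoelace formula for the area of a simple polygon."""
    n = len(verts)
    s = sum(verts[i][0]*verts[(i+1) % n][1] - verts[(i+1) % n][0]*verts[i][1]
            for i in range(n))
    return abs(s) / 2

def vowel_space_area(formants):
    """Area (Hz^2) of the convex hull of (F1, F2) vowel tokens."""
    return polygon_area(convex_hull(formants))

# Illustrative (F1, F2) values in Hz -- not measured data
european = [(280, 2250), (300, 750), (750, 1300), (450, 2050), (500, 900)]
three_vowel = [(450, 1900), (480, 1100), (650, 1400)]
print(vowel_space_area(european) > vowel_space_area(three_vowel))  # True
```

A smaller hull area corresponds to the compressed acoustic space described above, with no vowels near the close (high-F2, low-F1) corners occupied by the vowels of ‘peat’ and ‘pool’.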
So why do Australian languages have a ‘flattened’ vowel space? The answer may lie in the ears of the speakers rather than in their mouths. Aboriginal Australians have by far the highest prevalence of chronic middle ear infection in the world. Our research with Aboriginal groups of diverse age, language and geographical location shows 30-60% of speakers have a hearing impairment in one or both ears (Fig 2). Nearly all Aboriginal language groups have developed an alternate sign language to complement the spoken one. Our previous analysis has shown that the sound systems of Australian languages resemble those of individual hearing-impaired children in several important ways, leading us to hypothesise that the consonant systems and the word structure of these languages have been influenced by the effects of chronic middle ear infection over generations.
A reduction in the vowel space is another of these resemblances. Middle ear infection affects the low frequency end of the scale (under 500 Hz), thus reducing the prominence of the distinctive lower resonances of close vowels, such as in ‘peat’ and ‘pool’ (Fig 3). It is possible that, over generations, speakers have raised the frequencies of these resonances to make them more audible, thereby constricting the acoustic space the languages use. If so, we may ask whether, on purely acoustic grounds, communicating in an Aboriginal language in the classroom – using a sound system optimally attuned to the typical hearing profile of the speech community – might offer improved educational outcomes for indigenous children in the early years.
Popular version of 1aSC2 – Retroflex nasals in the Mai-Ndombe (DRC): the case of nasals in North Boma B82
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0022724
“All language sounds are equal but some language sounds are more equal than others” – or, at least, that is the case in academia. While French i’s and English t’s are constantly re-dotted and re-crossed, the vast majority of the world’s linguistic communities remain undocumented, with their unique sound heritage gradually fading into silence. The preservation of humankind’s linguistic diversity relies solely on detailed documentation and description.
Over the past few years, a team of linguists from Ghent, Mons, and Kinshasa have dedicated their efforts to recording the phonetic and phonological oddities of southwest Congo’s Bantu varieties. Among these, North Boma (Figure 1) stands out for its display of rare sounds known as “retroflexes”. These sounds are particularly rare in central Africa, which mirrors a more general state of under-documentation of the area’s sound inventories. Through extensive fieldwork in the North Boma area, meticulous data analysis, and advanced statistical processing, these researchers have unveiled the first comprehensive account of North Boma’s retroflexes. As it turns out, North Boma retroflexes are exclusively nasal, a striking typological circumstance. Their work, presented in Sydney this year, not only enriches our understanding of these unique consonants but also points to potential historical explanations for their prevalence in the region.
Figure 1 – the North Boma area
The study highlights the remarkable salience of North Boma’s retroflexes, characterised by distinct acoustic features that sometimes align with and sometimes deviate from those reported in the existing literature. This is clearly shown in Figure 2, where the North Boma nasal space is plotted using a technique known as “Multiple Factor Analysis”, which is suited to the study of small corpora organised into clear variable groups. As can be seen, their behaviour differs greatly from that of the other nasals of North Boma. This uniqueness also suggests that their presence in the area may stem from interactions with long-lost hunter-gatherer forest languages, providing invaluable insights into the region’s history.
Figure 2 – MFA results show that retroflex and non-retroflex nasals behave very differently in North Boma
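Multiple Factor Analysis can be understood as a two-stage procedure: each variable group is centred and down-weighted by its first singular value, so that no group dominates, and a global principal component analysis is then run on the concatenated groups. The simplified sketch below uses synthetic random data in place of the nasal corpus, and invented group names; it is an illustration of the technique, not the study’s actual pipeline:

```python
import numpy as np

def mfa_scores(blocks, n_components=2):
    """Simplified Multiple Factor Analysis: centre each variable group,
    divide it by its first singular value to equalise group influence,
    then run a global PCA (via SVD) on the concatenated groups."""
    weighted = []
    for X in blocks:
        Xc = X - X.mean(axis=0)                      # centre the block
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]  # first singular value
        weighted.append(Xc / s1)
    Z = np.hstack(weighted)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_components] * S[:n_components]    # observation scores

# Synthetic stand-in for the corpus: one "acoustic" and one "durational" block
rng = np.random.default_rng(0)
acoustic = rng.normal(size=(30, 4))   # e.g. 30 nasal tokens, 4 acoustic measures
duration = rng.normal(size=(30, 2))   # e.g. 2 durational measures
scores = mfa_scores([acoustic, duration])
print(scores.shape)  # (30, 2)
```

Plotting the two score columns against each other gives a map like Figure 2, in which tokens that behave similarly across all variable groups cluster together.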
Extraordinary sound patterns are waiting to be discovered in the least documented language communities of the world. North Boma serves as just one compelling example among many. As we navigate towards an unprecedented language loss crisis, the imperative for detailed phonetic documentation becomes increasingly evident.
Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao
Andy Chung
Popular version of 3aNSb – Noise Dynamics in City Nightlife: Assessing Impact and Potential Solutions for Residential Proximity to Pubs and Bars
Presented at the 185th ASA Meeting
Read the abstract at https://doi.org/10.1121/10.0023229
Picture a typical evening in the heart of a bustling city: pubs and bars come alive, echoing with laughter, music, and the clink of glasses. These hubs of social life create a vibrant tapestry of sounds. But what happens when this symphony overshadows the tranquility of those living just around the corner?
Image courtesy of Kvikoo, Singapore
Our journey begins in the lively interiors of these establishments. In countries rich in nightlife, you’ll find a high concentration of pubs and bars – sometimes up to 150 per 100,000 people. Inside a pub in Hong Kong, for instance, noise levels can soar to 80 decibels during peak hours, akin to the din of city traffic. Even during ‘happy hours,’ the decibel count hovers around 75, still significant.
But let’s step outside these walls. Here, the story takes a different turn. In residential areas adjacent to these nightspots, the evening air is often filled with an unintended soundtrack: the persistent hum of nightlife. In a study from Macedonia, for instance, residents experienced noise levels of about 67 decibels in the evening – a consistent background murmur disrupting the peace of homes.
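Decibel figures like these combine on an energy basis rather than arithmetically: two equally loud sources together raise the level by only about 3 dB, and an average over time must be taken over sound energy, not over the dB values themselves. A minimal sketch, with illustrative levels rather than the measured values cited above:

```python
import math

def combine(levels_db):
    """Total level of simultaneous incoherent sources, in dB."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db))

def energy_average(levels_db):
    """Energy (Leq-style) average of sound pressure levels, in dB."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Two equal 77 dB sources together read about 80 dB, not 154 dB
print(round(combine([77, 77]), 1))  # 80.0
# A quiet hour barely lowers an energy average dominated by a loud one
print(round(energy_average([80, 60]), 1))
```

This is why a single loud interval can dominate an evening average, and why reducing the loudest source is usually far more effective than trimming the quieter ones.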
This issue isn’t just about sound; it’s about the voices of those affected. Residents’ complaints about noise pollution have become a chorus in many parts of the world, including the United Kingdom, Hong Kong, and Australia. These complaints highlight a pressing question: How can we maintain the lively spirit of our cities while respecting the need for quiet?
Governments and communities are tuning into this challenge. Their responses, colored by cultural and historical factors, range from strict regulations to innovative solutions. For example, in Hong Kong, efforts to control noise at its source, as detailed in a government booklet, showcase one way of striking a balance.
This is a story of harmony – finding a middle ground where the joyous buzz of pubs and bars coexists with the serene rhythm of residential life. It’s about understanding that in the symphony of city life, every note, from the loudest cheer to the softest whisper, plays a crucial role.