2aBAa3 – Towards a better understanding of myopia with high-frequency ultrasound

Jonathan Mamou – jmamou@riversideresearch.org
Daniel Rohrbach
Lizzi Center for Biomedical Engineering, Riverside Research, New York, NY, USA

Sally A. McFadden – sally.mcfadden@newcastle.edu.au
Vision Sciences, Hunter Medical Research Institute and School of Psychology, Faculty of Science, University of Newcastle, NSW, Australia

Quan V. Hoang – donny.hoang@snec.com.sg
Department of Ophthalmology, Columbia University Medical Center, New York, NY USA
Singapore Eye Research Institute, Singapore National Eye Centre, DUKE-NUS, Singapore

Myopia, or near-sightedness, affects up to 2.3 billion people. Although low levels of myopia are considered a minor inconvenience, high myopia is associated with sight-threatening pathology in 70% of patients and is especially prevalent in East Asians. By 2050, an estimated one billion people will have high myopia. Patients with high myopia are prone to developing “pathologic myopia”, which carries a high likelihood of permanent vision loss. Myopia is caused by an eye that is too long for its focusing power. Pathologic myopia occurs at extreme levels of lifelong, progressive eye elongation, with subsequent thinning of the eye wall (sclera) and development of localized outpouchings (staphylomas). A breakdown in the structural integrity of the eye wall likely underlies myopic progression and precedes irreversible vision loss.

The guinea pig is a well-established animal model of myopia. When blur is imposed on the animal’s vision early in life, guinea pigs experience excessive eye elongation and develop high myopia within a week, which leads to pathologic myopia within 6 weeks. We therefore investigated two fine-resolution ultrasound-based approaches to better understand and quantify the microstructural changes in the posterior sclera associated with high-myopia development. The first approach, termed quantitative ultrasound (QUS), was applied to intact, ex-vivo myopic and control guinea-pig eyes using an 80-MHz ultrasound transducer (Figure 1).


QUS yields parameters associated with the microstructure of tissue and was therefore hypothesized to provide contrast between control and myopic tissues. The second approach used a scanning-acoustic-microscopy (SAM) system operating at 250 MHz to form two-dimensional maps of the acoustic properties of thin sections of the sclera at 7-μm resolution (Figure 2).
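For readers curious about what “QUS parameters” look like in practice, spectral-based QUS is often computed along the lines of the minimal Python sketch below: a backscattered echo is normalized by a reference spectrum, and a line is fitted over the usable bandwidth. The analysis band and variable names here are illustrative assumptions, not the exact processing chain used in this work.

```python
# Minimal sketch of spectral-based QUS parameter estimation (illustrative only;
# the exact processing chain used in this study is not described here).
import numpy as np

def qus_spectral_params(rf_segment, rf_reference, fs, band=(40e6, 120e6)):
    """Estimate spectral slope, intercept, and midband fit from gated RF data.

    rf_segment   : backscattered RF data gated from the tissue (1-D array)
    rf_reference : RF from a reference reflector, same settings and length
    fs           : sampling frequency in Hz
    band         : analysis bandwidth in Hz (an assumption for an 80-MHz probe)
    """
    n = len(rf_segment)
    window = np.hanning(n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Normalized power spectrum (dB): removes the system/transducer response
    ps_tissue = np.abs(np.fft.rfft(rf_segment * window)) ** 2
    ps_ref = np.abs(np.fft.rfft(rf_reference * window)) ** 2
    norm_db = 10 * np.log10(ps_tissue / (ps_ref + 1e-20))

    # Linear fit over the usable bandwidth (frequencies in MHz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    slope, intercept = np.polyfit(freqs[in_band] / 1e6, norm_db[in_band], 1)

    midband = slope * np.mean(band) / 1e6 + intercept  # dB at band centre
    return slope, intercept, midband  # dB/MHz, dB, dB
```

Parameters such as the spectral slope and midband fit are sensitive to scatterer size and concentration, which is why they can differentiate microstructurally altered tissue.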


Like QUS, SAM maps provide striking contrast between the mechanical properties of control and myopic tissues at fine resolution. Initial results indicated that the properties sensed by QUS and SAM are altered in myopia, and that both methods can provide new contrast mechanisms to quantify the progression and severity of the disease, as well as to determine which regions of the sclera are most affected. Ultimately, these methods will provide novel knowledge about the microstructure of the myopic sclera that can improve the monitoring and management of patients with high myopia.

5aSC1 – Understanding how we speak using computational models of the vocal tract

Connor Mayer – connomayer@ucla.edu
Department of Linguistics – University of California, Los Angeles

Ian Stavness – ian.stavness@usask.ca
Department of Computer Science – University of Saskatchewan

Bryan Gick – gick@mail.ubc.ca
Department of Linguistics – University of British Columbia; Haskins Labs

Popular version of poster 5aSC1, “A biomechanical model for infant speech and aerodigestive movements”
Presented Friday morning, November 9, 2018, 8:30-11:30 AM, Upper Pavilion
176th ASA Meeting and 2018 Acoustics Week in Canada, Victoria, Canada

Speaking is arguably the most complex voluntary movement behaviour in the natural world. Speech is also uniquely human, making it an extremely recent innovation in evolutionary history. How did our species develop such a complex and precise system of movements in so little time? And how can human infants learn to speak long before they can tie their shoes, and with no formal training?

Answering these questions requires a deep understanding of how the human body makes speech sounds. Researchers have used a variety of techniques to understand the movements we make with our vocal tracts while we speak: acoustic analysis, ultrasound, brain imaging, and so on. While these approaches have increased our understanding of speech movements, they have limits. For example, the anatomy of the vocal tract is quite complex, and tools that measure muscle activation, such as electromyography (EMG), are too invasive or imprecise to be used effectively for speech movements.

Computational modeling has become an increasingly promising method for understanding speech. The biomechanical modeling platform Artisynth (https://www.artisynth.org), for example, allows scientists to study realistic 3D models of the vocal tract that are built using anatomical and physiological data.

These models can be used to see aspects of speech that are hard to visualize with other tools. For example, we can see what shape the tongue takes when a specific set of muscles activates. Or we can have the model perform a certain action and measure aspects of the outcome, such as having the model produce the syllable “ba” and measuring how much the lips deform by mutual compression during the /b/ closure. We can also predict how changes to typical vocal tract anatomy, such as the removal of part of the tongue in response to oral cancer, affect the ability to perform speech movements.

In our project at the 176th ASA Meeting, we present a model of the vocal tract of an 11-month-old infant. A detailed model of the adult vocal tract, named ‘Frank’, has already been implemented in Artisynth, but the infant vocal tract has different proportions than an adult vocal tract. Using Frank as a starting point, we modified the relative scale of the different structures based on measurements taken from CT scan images of an infant vocal tract (see Figure 1).
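To give a sense of what this scaling step involves, the sketch below derives per-structure scale factors from paired adult/infant measurements and applies them to a structure’s geometry. The structure names and numbers are hypothetical placeholders, not the actual Frank data or the Artisynth API.

```python
# Hypothetical sketch of per-structure scaling (illustrative values only,
# not the real Frank measurements or Artisynth code).
import numpy as np

adult_mm = {"tongue_length": 70.0, "pharynx_height": 80.0}   # from adult model
infant_mm = {"tongue_length": 40.0, "pharynx_height": 35.0}  # from infant CT

# Ratio of infant to adult measurement gives each structure's scale factor
scale = {k: infant_mm[k] / adult_mm[k] for k in adult_mm}

def scale_structure(vertices, factor, origin):
    """Uniformly scale a structure's mesh vertices about a reference origin."""
    vertices = np.asarray(vertices, dtype=float)
    return origin + factor * (vertices - origin)

# e.g. shrink a (hypothetical) tongue mesh toward its centroid
tongue = np.random.rand(100, 3) * 70.0
tongue_infant = scale_structure(tongue, scale["tongue_length"],
                                tongue.mean(axis=0))
```

Because infant and adult vocal tracts differ in their relative proportions, not just overall size, each structure gets its own factor rather than one global scale.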

Going forward, we plan to use this infant vocal tract model (see Figure 2) to simulate both aerodigestive movements and speech movements. One of the hypotheses for how infants learn to speak so quickly is that they build on movements they can carry out at birth, such as swallowing or suckling. The results of these simulations will help supplement neurological, clinical, and kinematic evidence bearing on this hypothesis. In addition, the model will be generally useful for researchers interested in the infant vocal tract. 

Figure 1: Left: A cross-section of the Frank model of an adult vocal tract with measurement lines. Right: A cross-sectional CT scan image of an 11-month-old infant with measurement lines. The relative proportions of each vocal tract were compared to generate the infant model.

 Figure 2: A modified Frank vocal tract conforming to infant proportions.

2pNS3 – Love thy (Gym) Neighbour – A Case Study on Noise Mitigation for Specialty Fitness Centres

Brigette Martin – martin@bkl.ca
BKL Consultants Ltd.
#308-1200 Lynn Valley Road
North Vancouver, BC V7J 2A2

Paul Marks – marks@bkl.ca
BKL Consultants Ltd.
#308-1200 Lynn Valley Road
North Vancouver, BC V7J 2A2

Popular version of paper “Specialty fitness centres – a case study”
Presented November 5, 2018
176th ASA Meeting, Victoria, BC, Canada

Please keep in mind that the research described in this Lay Language Paper may not have yet been peer reviewed.

The sudden rise of group fitness rooms, CrossFit gyms, and spin cycling studios over the last decade is undeniable. These specialty fitness centres are often located in mixed-use buildings (adjacent to residential areas or retail stores) and can emit noise that is obtrusive to their neighbours. Many specialty fitness centres have been proactive in ensuring they meet the appropriate noise standards by seeking support from acousticians. This exploratory paper considers the noise levels produced by various popular specialty fitness centres and outlines noise mitigation options for each one.

Multi-purpose group fitness rooms are versatile in the activities they host, including weight-training classes that involve repeated high-impact movements to improve anaerobic fitness. These sounds are often accompanied by music blasting through loudspeakers suspended from the ceiling. In one case, a building landlord engaged our team to conduct sound level measurements at their group fitness room to determine the noise transmitted to adjacent residential apartments. After simulating impact activities (e.g., people jumping, the dropping of 20-lb kettlebells and sandbags) on seven different potential floor build-ups and quantifying the music levels played in group fitness rooms, we determined noise mitigation options that achieved the landlord’s level of acceptability: installing isolated flooring and keeping music levels within an acceptable threshold.

Combining aspects of running, weightlifting and gymnastics, CrossFit spaces are unquestionably noisy. To lessen the audibility of noise in adjoining office spaces, a CrossFit gym’s landlord asked our team to undertake measurements and a noise assessment. Together, we worked on a noise management plan for the gym that combined acoustical treatments, such as additional cushioned matting and dedicated lifting platforms, with management procedures limiting the types of activities performed in the gym.

With amplified music and enthusiastic instructors constantly cheering on rows of avid cyclists, spin classes reach sound levels comparable to nightclubs. These studios can be adjacent to general offices, retail spaces or even residential apartments. Solutions for these types of spaces have included limiting the noise level or “bass beat” in the studio, providing masking noise in the adjacent space, and increasing the sound isolation of the demising wall or shared floor/ceiling assemblies.

In an effort to address numerous noise complaints, we left an unattended sound analyzer to capture noise levels in an adjacent retail space during spin classes and during times without classes. We determined that the bass content is ultimately the most audible part of the noise for the retail unit’s occupants during spin classes, and we recommended that the spin studio additionally control bass sounds to ameliorate the intrusive effects.
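As a rough illustration of how bass content can be quantified, the sketch below compares a low-frequency octave-band level with the overall level of a recorded pressure signal. The band edges, filter settings, and synthetic test signal are assumptions for illustration, not our firm’s exact measurement procedure.

```python
# Illustrative sketch: compare the 63-Hz octave-band level (~44-88 Hz) with
# the overall level, to quantify how much "bass" dominates a recording.
import numpy as np
from scipy.signal import butter, sosfilt

def band_level_db(p, fs, f_lo, f_hi, p_ref=20e-6):
    """RMS sound pressure level (dB) in a band, via a Butterworth bandpass."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    p_band = sosfilt(sos, p)
    return 20 * np.log10(np.sqrt(np.mean(p_band ** 2)) / p_ref)

def overall_level_db(p, p_ref=20e-6):
    return 20 * np.log10(np.sqrt(np.mean(p ** 2)) / p_ref)

# Demo with a synthetic signal: a 63-Hz "bass beat" plus weak broadband noise
fs = 8000
t = np.arange(0, 5.0, 1.0 / fs)
p = 0.5 * np.sin(2 * np.pi * 63 * t) + 0.05 * np.random.randn(len(t))
print(overall_level_db(p), band_level_db(p, fs, 44.0, 88.0))
```

When the 63-Hz band level sits close to the overall level, as in this synthetic example, low-frequency energy dominates what the neighbours hear.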

While a “one-size-fits-all” solution does not exist for all specialty fitness centres, it is clear that by being proactive and building mitigation measures into their original studio designs, fitness centres can better control the noise they emit to adjacent spaces.

2aAA8 – Nature as Muse: The characteristics of caves can help us add an individual touch to our music

Yuri Lysoivanov – yuri.lysoivanov@columbiacollege.edu
Flashpoint Chicago, A Campus of Columbia College Hollywood
28 N. Clark St. #500
Chicago, IL 60602

Popular version of paper 2aAA8
Presented Tuesday morning, November 6, 2018
176th ASA Meeting, Victoria, Canada

The use of artificial reverberation in recorded music dates to the late 1940s and is commonly credited to the ingenuity of Bill Putnam Sr. [1]. Following decades of technological development, audio engineers can now access an ever-growing variety of echo chambers, metal plates, springs, and digital emulations of an abundance of environments. A popular method in use today is convolution reverb, a digital technique that takes a controlled recording of a real space (called an impulse response, or IR) and convolves it with a source sound, achieving a remarkably realistic simulation of that sound in the space.
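At its core, this operation really is just a convolution. The minimal sketch below, with hypothetical file names and assuming mono recordings, applies a measured IR to a dry signal:

```python
# Minimal convolution-reverb sketch (file names are hypothetical; assumes
# mono WAV files at the same sampling rate).
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs_dry, dry = wavfile.read("dry_vocal.wav")   # source recording
fs_ir, ir = wavfile.read("cave_ir.wav")       # measured impulse response
assert fs_dry == fs_ir, "resample so both files share one sampling rate"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)

wet = fftconvolve(dry, ir)                    # the space applied to the source
wet /= np.max(np.abs(wet))                    # normalize to avoid clipping

wavfile.write("vocal_in_cave.wav", fs_dry, (wet * 32767).astype(np.int16))
```

Commercial convolution reverbs add partitioned, low-latency convolution and tone-shaping controls, but the underlying mathematics is the same.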

Curiously, given their unique acoustic qualities, impulse responses of caves are generally underrepresented in the audio engineer’s toolkit. A browse through the responses in Altiverb, a popular high-end convolution reverb (Figure 1), shows a small selection of caves relegated to the post-production (i.e., film sound) category, ready for enterprising sound designers to use. This selection is far smaller than the range of concert halls, churches, tombs, rooms and other acoustically critical spaces on offer.

Figure 1: A search for “cave” in Altiverb reveals Howe’s Cavern in NY and two locations in Malta, in addition to several man-made structures.

One potential reason for the scarcity of cave impulse responses could be the logistical difficulty of getting recording and measuring equipment into caves. Another may simply be a lack of consumer interest, given how many fantastic impulse responses of man-made structures are readily available.

For this paper, we sought to explore nature as architect and to demonstrate how incorporating the characteristics of these distinct structures can make a meaningful contribution to the audio engineer’s creative palette. With the aid of scientists from the National Park Service, we chose a few locations for analysis within Mammoth Cave, the longest cave system in the world.

After capturing impulse responses, we analyzed the spaces to develop a set of useful applications for audio professionals. The Methodist Church was found to have a warm, pleasant-sounding reverb (Figure 2), with decay characteristics similar to a small concert hall. Lake Lethe, an isolated, lengthy subterranean waterway, presents a smooth, long decay (Figure 3) and is ideal for a multitude of echo applications. The Wooden Bowl Room (Figures 4 and 5) and Cleveland Avenue (Figures 6 and 7) were selected by our host scientist for their beautiful low, sustained resonances (which we measured at 106.2 Hz and 118.6 Hz, respectively), suitable for adding depth and tension to a variety of sounds.
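For readers curious how reverberation times such as the T20 and T30 values in Figures 2 and 3 are typically obtained, the sketch below estimates them from an impulse response via Schroeder backward integration. It is illustrative of the standard method rather than the exact software we used.

```python
# Sketch of reverberation-time estimation via Schroeder backward integration
# (the standard textbook method; illustrative, not our exact tooling).
import numpy as np

def reverb_time(ir, fs, lo_db=-5.0, hi_db=-25.0):
    """Estimate RT60 by fitting the Schroeder decay curve between two levels.

    lo_db/hi_db = -5/-25 gives T20; use -5/-35 for T30. The fitted slope is
    extrapolated to a full 60-dB decay.
    """
    energy = ir.astype(float) ** 2
    schroeder = np.cumsum(energy[::-1])[::-1]        # backward integration
    decay_db = 10 * np.log10(schroeder / schroeder[0])

    t = np.arange(len(ir)) / fs
    fit = (decay_db <= lo_db) & (decay_db >= hi_db)
    slope, _ = np.polyfit(t[fit], decay_db[fit], 1)  # dB per second (negative)
    return -60.0 / slope                             # seconds to decay 60 dB
```

Fitting over a limited range (e.g., −5 to −25 dB) avoids both the direct sound at the start of the response and the noise floor at its tail.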

Figure 2: Reverb Time (T20) measurement for the Methodist Church.

Figure 3: Reverb Time (T30) measurement for Lake Lethe
Figure 4: Interior of the Wooden Bowl Room
Figure 5: 1000-ms waterfall analysis of the Wooden Bowl Room showing a sustained resonance at 106.2 Hz

These locations, carved over millions of years, provide opportunities for engineers to sculpt sounds with an idiosyncratic character beyond the common reverbs available on the market. We hope that our work lays a foundation for further analysis of the characteristics of cave interiors and for a more individualized approach to using cave ambiences in music and sound design.

Figure 6: Cleveland Avenue
Figure 7: 1000-ms waterfall analysis of Cleveland Avenue showing a sustained resonance at 118.6 Hz

[1] Weir, William. (2012, June 21). How Humans Conquered Echo. The Atlantic. Retrieved from https://www.theatlantic.com/

5pSP6 – Assessing the Accuracy of Head Related Transfer Functions in a Virtual Reality Environment

Joseph Esce – esce@hartford.edu
Eoin A King – eoking@hartford.edu
Acoustics Program and Lab
Department of Mechanical Engineering
University of Hartford
200 Bloomfield Avenue
West Hartford
CT 06119
U.S.A

Popular version of paper 5pSP6: “Assessing the Accuracy of Head Related Transfer Functions in a Virtual Reality Environment”, presented Friday afternoon, November 9, 2018, 2:30 – 2:45pm, RATTENBURY A/B, ASA 176th Meeting/2018 Acoustics Week in Canada, Victoria, Canada.

Introduction
While visual graphics in virtual reality (VR) systems are very well developed, the methods used to recreate acoustic environments and sounds in VR are not. Currently, the standard procedure for representing sound in a virtual environment is to use a generic head-related transfer function (HRTF): the user selects a generic HRTF from a library, with only limited personal information. It is essentially a ‘best-guess’ representation of an individual’s perception of a sound source. This limits the accuracy of the reproduced acoustic environment, because every person’s HRTF is unique to themselves.

What is a HRTF?
If you close your eyes and someone jangles keys behind your head, you can identify the general location of the keys just from the sound you hear. You can do this because the sound is filtered by your head, torso, and outer ears before it reaches your eardrums. An HRTF is a mathematical function that captures these transformations, and it can be used to recreate the sound of those keys in a pair of headphones, so that the recording of the keys appears to come from a particular direction. However, because everyone has a differently shaped head and ears, HRTFs are unique to each person. The objective of our work was to determine how the accuracy of sound localization in a VR world varies across users, and how we can improve it.
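As a rough illustration, the sketch below shows how a measured HRTF, in the form of left- and right-ear impulse responses (HRIRs) for one direction, is applied to a mono sound. The array names are placeholders for measured data.

```python
# Minimal binaural-rendering sketch: convolve a mono source with the left- and
# right-ear impulse responses measured for one direction (placeholder arrays).
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Return a stereo signal that, over headphones, should appear to come
    from the direction at which the HRIRs were measured."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))   # normalize to avoid clipping
```

Interaural time and level differences, plus the direction-dependent spectral shaping of the outer ear, are all baked into those two impulse responses, which is why using someone else’s HRIRs degrades the illusion.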

Test procedure
In our tests, volunteers entered a VR world that was essentially an empty room, in which an invisible sound source emitted short bursts of noise at various positions. Volunteers were asked to point to the location of the sound source, and results were captured to the nearest millimeter using the VR system’s motion tracking. We tested three cases: 1) volunteers were not allowed to move their heads to assist in localization; 2) slight head movements were allowed to assist in sound localization; and 3) volunteers could turn around freely and ‘search’ (with their ears) for the sound source. Head movement was monitored by using the VR system to track the volunteer’s eye movement, and in the restricted conditions the sound source was switched off if the volunteer moved.
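For illustration, a natural error metric here is the angle between the direction a volunteer pointed and the true source direction, both derived from tracked positions. The sketch below, with assumed variable names rather than our exact analysis code, shows one way to compute it:

```python
# Hypothetical sketch of a localization-error metric: the angle between the
# pointing direction and the true source direction (variable names assumed).
import numpy as np

def angular_error_deg(head_pos, pointed_at, source_pos):
    """Angle (degrees) between the pointed direction and the true direction,
    both measured from the listener's head position."""
    v_point = np.asarray(pointed_at, float) - np.asarray(head_pos, float)
    v_true = np.asarray(source_pos, float) - np.asarray(head_pos, float)
    cosang = np.dot(v_point, v_true) / (
        np.linalg.norm(v_point) * np.linalg.norm(v_true))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# e.g. pointing 10 cm off a source 1 m away gives roughly a 5.7-degree error
print(angular_error_deg([0, 0, 0], [1.0, 0.1, 0.0], [1.0, 0.0, 0.0]))
```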

Results
We observed that the accuracy with which volunteers localized the sound source varied significantly from person to person. Errors were large when volunteers’ head movements were restricted, but accuracy improved substantially when people were able to move and listen toward the sound source. This suggests that the initial impression of a sound’s location in a VR world is refined once the user can move their head while searching.

Future Work
We are currently analyzing our results in more detail to account for the different characteristics of each user (e.g., head size, and the size and shape of the ears). Further, we aim to develop the experimental methodology to use machine learning algorithms that let each user create a pseudo-personalized HRTF, which would improve the immersive experience for all VR users.

1pAB – Could lobsters use sounds to communicate with each other?

Youenn Jézéquel1, Julien Bonnel2, Jennifer Coston-Guarini1, Jean Marc Guarini1, Laurent Chauvaud1

1Laboratoire des Sciences de l’Environnement Marin, UBO, CNRS, IRD, Ifremer, LIA BeBEST, UMR 6539, rue Dumont D’Urville, 29280 Plouzané, France
2Woods Hole Oceanographic Institution, Woods Hole, MA 02543 USA

Session 1pAB, Fish and Marine Invertebrate Bioacoustics II
Buzzing sounds as a means of intraspecific communication during agonistic encounters in male European lobsters (Homarus gammarus)?

An important application of marine ecological knowledge today is designing new indicators of marine ecosystem health. Passive acoustics, which simply consists of listening to sounds, is promising because it is non-invasive and non-destructive. However, to develop passive acoustics as a monitoring tool, we need to identify sound-emitting species with high potential for this type of application. Then, the sounds need to be analysed and understood within their ecological context. In the coastal waters of Brittany (France), crustaceans seem to be a good study model because they emit a wide range of sounds and also have high commercial and cultural importance.

Figure 1: The European lobster (Homarus gammarus). Photographer: E. Amice (CNRS)

My PhD research is focussed on the European lobster (Homarus gammarus, Figure 1). In our first study, we showed that when stressed, the European lobster produces a species-specific sound that we call a “buzz” (Jézéquel et al. 2018). These buzzes are characteristically low-frequency, continuous sounds, similar to those produced by the American lobster.

While no studies have described the behaviours of the European lobster with ethograms (sequences of behaviours observed during behavioural experiments), there is a large literature on the behaviours of American lobsters. Researchers have found that male American lobsters engage in agonistic encounters, using aggressive behaviours to establish dominance between individuals (Figure 2A).


Figure 2: Agonistic encounters between male American lobsters (A) (Atema and Voigt 1995) and male European lobsters (B) (Photographer: Y. Jézéquel, Université de Bretagne Occidentale)

Dominance allows them to gain easier access to shelters and suitable mates during reproductive periods. Researchers have also shown that visual and chemical signals are used during these encounters, but no studies have reported the use of sounds to communicate during these events. In our study, we staged agonistic encounters between male European lobsters to determine whether they use sounds as a means of intraspecific communication (Figure 2B).

Our results show that male European lobsters use a complex repertoire of behaviours, from physical displays to aggressive claw contact, to establish dominance. Once the dominant and submissive individuals are determined, each adopts different behaviours: the “winners” (dominants) continue physical and aggressive displays toward the submissive individuals, which attempt to escape from their opponent’s presence.

During these experiments, we did not record buzzing sounds, probably because low frequencies (like those of the buzzing sounds) propagate poorly in experimental tanks; this could explain why the hydrophones installed for the experiments did not detect them (Jézéquel et al. 2018).

The mechanism of sound production in both American and European lobsters is known: they rapidly contract internal muscles located at the base of their antennae, vibrating the carapace and producing the buzzing sound. We completed a new series of agonistic encounters with male European lobsters, this time attaching high-frequency-sampling accelerometers to their carapaces. The accelerometry data clearly showed that European lobsters vibrated their carapaces during agonistic encounters (up to 90 vibration episodes per individual per 15-minute experiment), but the associated buzzing sounds were not recorded by the hydrophones. Carapace vibrations were emitted by both dominant and submissive individuals, although submissive individuals produced significantly more vibration episodes than dominant ones. These vibrations were associated with particular behaviours such as physical displays and fleeing.
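As an illustration of how vibration episodes might be counted from such recordings, the sketch below thresholds the envelope of an accelerometer signal and merges nearby bursts into single episodes. The threshold and timing values are assumptions for illustration, not our exact analysis settings.

```python
# Illustrative sketch: count vibration episodes in accelerometer data by
# thresholding the signal envelope (threshold and gap values are assumptions).
import numpy as np
from scipy.signal import hilbert

def count_episodes(accel, fs, threshold, min_gap_s=0.5):
    """Count envelope excursions above a threshold, merging bursts that occur
    closer together than min_gap_s into a single episode."""
    envelope = np.abs(hilbert(accel))
    active = envelope > threshold
    # Rising edges of the boolean mask mark candidate episode starts
    starts = np.flatnonzero(np.diff(active.astype(int)) == 1)
    episodes = 0
    last = -np.inf
    for s in starts:
        if (s - last) / fs >= min_gap_s:
            episodes += 1
        last = s
    return episodes
```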

We have shown for the first time that male European lobsters exhibit complex, rapid patterns of movement during agonistic encounters that include carapace vibration episodes. However, during these events, the reactions of the receivers to these signals remain unclear. We remain uncertain whether the lobsters “sense” the carapace vibrations or their associated buzzing sounds in the experimental tanks.

Although it is still too soon to speak of a new type of communication in crustaceans, we have shown that buzzing sounds might play a role in the intraspecific interactions displayed during agonistic encounters between male European lobsters. Field experiments with better sound-propagation conditions are in progress to determine whether these sounds are indeed used as a means of communication (Figure 3).

Figure 3: Bioacoustic experiments conducted in cages in coastal waters with European lobsters. Photographer: E. Amice (CNRS)