3aPPb7 – Influence of Age and Instrumental-Musical Training on Pitch Memory in Children and Adults

Aurora J. Weaver – ajw0055@auburn.edu
Molly Murdock – mem0092@auburn.edu
Auburn University
1199 Haley Center
Auburn, AL 36849

Jeffrey J. DiGiovanni – digiovan@ohio.edu
Ohio University
W151a Grover Center
Athens, Ohio

Dennis T. Ries – Dennis.Ries@ucdenver.edu
University of Colorado Anschutz Medical Campus
Building 500, Mailstop F546
13001 East 17th Place, Room E4326C
Aurora, CO 80045

Popular version of paper 3aPPb7
Presented Wednesday morning, December 6, 2017
174th ASA Meeting, New Orleans

Infants are inherently sensitive to the relational properties of music (e.g., musical intervals, melody).1 Knowledge of complex structural properties of music (e.g., key, scale), however, is learned to varying degrees through early school age.1-3 Acquisition of some features does not require specialized instruction, but extensive musical training further enhances the ability to learn musical structures.4 Relevant to this project, formal musical instruction is linked to improvement in listening tasks (other than music) that stress attention in adult participants.5-7

Musical training influences sound processing in the brain through learning-based processes while also enhancing lower-level acoustic processing within the brainstem.8 Behavioral and physiological evidence suggests there is a critical period for pitch-processing refinement within these systems between the ages of 7 and 11 years.9-13 The purpose of this project was to determine the contributions of musical training and age to the refinement of pitch processing beyond this critical period.

Individuals with extensive and active instrumental musical training were matched in age with individuals with limited instrumental musical training. This comparison served as a baseline to evaluate the extent of presumed physiologic changes within the brain and brainstem relative to the amount and duration of musical training.14,15 We hypothesized that the processing mechanisms of active musicians become increasingly efficient over time due to training, so this group can devote more mental resources to the retention of sound information during pitch perception tasks of varying difficulty. Sixty-six participants in three age groups (10-12 year-olds, 13-15 year-olds, and adults) completed two experiments.

The first experiment included a measure of non-verbal auditory working memory (pitch pattern span [PPS]).16 The second experiment used a pitch matching task, which closely modeled the procedure implemented by Ross and colleagues.17-19 Figure 1 displays the individual PPS scores for each instrumental training group as a function of age in years.


Figure 1. Individual PPS scores (y-axis) for each instrumental training group as a function of age in years (x-axis). Scores of participants in the active group are represented by filled circles; those of participants with limited instrumental training by open circles.

The second experimental task, a pitch matching production task, eliminated the typical need to understand musical terminology (e.g., naming musical notes). This method provided a direct comparison of musicians and non-musicians when they could rely only on their listening skills to remember a target and match its pitch within an ongoing tonal sequence.17-19 We wanted to evaluate pitch matching accuracy (via constant error) and consistency (via standard deviation) in individuals with limited and active instrumental musical training. Figure 2 illustrates the timing pattern and describes the task procedure. Each participant completed thirty pitch matches.

Figure 2. Schematic representation of the timing pattern of the pure tones, showing the target and examples of the first three comparison tones that might have followed. Once the pitch target had been presented, an adjustable dial appeared on a touch screen, and the first comparison stimulus was presented 0.2 seconds later. Note that the frequency of the first comparison tone was placed randomly 4-6 semitones above or below the target tone (not represented in this figure). The values of subsequent tones were controlled by the participant through movement of the onscreen dial. Presentation of comparison tones continued, at the same time interval, until the participant had adjusted the pitch of the ongoing comparison tones to match the pitch target.
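For readers who want to see how the two measures taken from each participant's matches relate, a match's deviation from the target in semitones is 12·log2(f_match/f_target); constant error is the mean of those deviations, and consistency is their standard deviation. A minimal sketch with made-up data (an illustration only, not the study's analysis code):

```python
import numpy as np

def semitone_deviation(f_match_hz, f_target_hz):
    """Deviation of each matched pitch from the target, in semitones (half steps)."""
    return 12.0 * np.log2(np.asarray(f_match_hz) / f_target_hz)

# Hypothetical example: one participant's 30 matches to a 440 Hz (A4) target.
rng = np.random.default_rng(0)
matches_hz = 440.0 * 2.0 ** (rng.normal(0.5, 1.2, size=30) / 12.0)

dev = semitone_deviation(matches_hz, 440.0)
constant_error = dev.mean()    # accuracy: positive values mean matches tend sharp
consistency = dev.std(ddof=1)  # smaller standard deviation = more consistent

print(f"constant error = {constant_error:+.2f} semitones, SD = {consistency:.2f}")
```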

Figure 3 depicts the distribution of responses across age groups and instrumental training groups (see figure legend). Statistical analyses (MANOVA and linear regression) revealed that duration of instrumental musical training and age each uniquely contribute to enhanced memory for pitch, indicated by greater PPS scores and smaller standard deviations of the pitch matches. Unexpectedly, given that the task procedure makes participants equally likely to match a pitch above or below the target, the youngest children (ages 10-12) demonstrated significantly sharper pitch matches (i.e., positive constant error) than the older participants (13 and older; see Figure 3, dashed lines). That is, across music groups, the youngest participants on average tended to produce matches sharper than the presented target pitch.

Figure 3. Proportion of response matches produced as a function of the deviation in half steps (the smallest musical distance between notes, e.g., progressively going up the white and black keys on a piano) across age groups in rows (ages 10-12 years, top; ages 13-15 years, middle; ages 18-35 years, bottom) and instrumental training groups in columns (Limited, left; Active, right). The dashed line depicts the overall accuracy (i.e., constant error) across pitch matches produced by each participant subgroup.
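As a rough sketch of the kind of regression reported above (the variable names and values below are hypothetical; the study's actual modeling details are in the paper):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant (not the study's data).
df = pd.DataFrame({
    "pps_score": [8, 11, 9, 14, 12, 16],
    "age_years": [10, 12, 14, 15, 25, 30],
    "training_years": [0, 4, 1, 6, 2, 12],
})

# Does training duration predict PPS score over and above age?
model = smf.ols("pps_score ~ age_years + training_years", data=df).fit()
print(model.summary())
```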

Matching individuals in age groups with and without active musical training allowed comparison of the unique contributions of age and duration of musical training to pitch memory. Consistent with our hypothesis, individuals with active and longer durations of musical training produced greater PPS scores, and their pitch matching performance was less degraded (i.e., they produced smaller standard deviations across pitch matches) than that of age-matched groups. Most individuals can distinguish pitch changes in half steps, although they may have considerable difficulty establishing a reliable relationship between a frequency and its note value.20,21,23,24 There are individuals with absolute pitch, however, who have the capacity to name a musical note without the use of a reference tone.24 While no participant in either music group (Active or Limited) reported absolute pitch, two participants in the active music group produced all thirty pitch matches within one semitone, that is, within one half step (HS) of the target. This may indicate that these two listeners were using memory of categorical notes to facilitate their pitch matches (e.g., memory of the note A4 could help when matching a target pitch close to 440 Hz). Consistent with previous applications of this method,17-19 the pitch matching production task did identify participants who possess similar categorical memory for tonal pitch even when musical notes and terminology were removed from the production method.

References

  1. Schellenberg, E. G., & Trehub, S. E. (1996). Natural musical intervals: Evidence from infant listeners. Psychological Science, 7(5), 272-277.
  2. Fujioka, T., Ross, B., Kakigi, R., Pantev, C., & Trainor, L. (2006). One year of musical training affects development of auditory cortical evoked fields in young children. Brain, 129(10), 2593-2608.
  3. Trehub, S. E., Bull, D., & Thorpe, L. A. (1984). Infants’ perception of melodies: The role of melodic contour. Child Development, 55(3), 821-830. doi:10.1111/1467-8624.ep12424362
  4. Morrongiello, B. A., & Roes, C. L. (1990). Developmental changes in children’s perception of musical sequences: Effects of musical training. Developmental Psychology, 26(5), 814-820.
  5. Strait, D., Kraus, N., Parbery-Clark, A., & Ashley, R. (2010). Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance. Hearing Research, 261, 22-29.
  6. Williamson, V. J., Baddeley, A. D., & Hitch, G. J. (2010). Musicians’ and nonmusicians’ memory for verbal and musical sequences: Comparing phonological similarity and pitch proximity. Memory and Cognition, 38(2), 163-175. doi: 10.3758/MC.38.2.163.
  7. Schön, D., Magne, C., & Besson, M. (2004). The music of speech: Music training facilitates pitch processing in both music and language. Psychophysiology, 41, 341-349.
  8. Kraus, N., Skoe, E., Parbery-Clark, A., & Ashley, R. (2009). Experience-induced malleability in neural encoding of pitch, timbre, and timing. Annals of the New York Academy of Sciences, 1169, 543-557.
  9. Banai, K., Sabin, A.T., Wright, B.A. (2011). Separable developmental trajectories for the abilities to detect auditory amplitude and frequency modulation. Hearing Research, 280, 219-227.
  10. Dawes, P., & Bishop, D.V., 2008. Maturation of visual and auditory temporal processing in school-aged children. J. Speech. Lang. Hear. Res. 51, 1002-1015.
  11. Moore, D., Cowan, J., Riley, A., Edmondson-Jones, A., & Ferguson, M. (2011). Development of auditory processing in 6- to 11-yr-old children. Ear and Hearing, 32, 269-285.
  12. Morrongiello, B. A., & Roes, C. L. (1990). Developmental changes in children’s perception of musical sequences: Effects of musical training. Developmental Psychology, 26, 814-820.
  13. Sutcliffe, P., & Bishop, D. (2005). Psychophysical design influences frequency discrimination performance in young children. Journal of Experimental Child Psychology, 91, 249-270.
  14. Habib, M., & Besson, M. (2009). What do musical training and musical experience teach us about brain plasticity? Music Perception, 26, 279-285.
  15. Zatorre, R. J. (2003). Music and the brain. Annals of the New York Academy of Sciences, 999, 4-14.
  16. Weaver, A.J., DiGiovanni, J.J., & Ries, D.T. (2015). The influence of musical training and maturation on pitch perception and memory. Poster, AAS, Scottsdale, AZ.
  17. Ross, D. A., & Marks, L. E. (2009). Absolute pitch in children prior to the beginning of musical training. Annals of the New York Academy of Sciences, 1169, 199-204. doi:10.1111/j.1749-6632.2009.04847.x
  18. Ross, D. A., Olson, I. R., & Gore, J. (2003). Absolute pitch does not depend on early musical training. Annals of the New York Academy of Sciences, 999(1), 522-526.
  19. Ross, D. A., Olson, I. R., Marks, L., & Gore, J. (2004). A nonmusical paradigm for identifying absolute pitch possessors. Journal of the Acoustical Society of America, 116, 1793-1799.
  20. Levitin, D. (2006). This is your brain on music: The science of human obsession. New York, NY: Dutton.
  21. Moore, B. C. J. (2003). An introduction to the psychology of hearing. London, UK: Academic Press.
  22. Hyde, K. L., Peretz, I., & Zatorre, R. J. (2008). Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia, 46, 632-639.
  23. McDermott, J. H., & Oxenham, A. J. (2008). Music perception, pitch, and the auditory system. Current Opinion in Neurobiology, 18(4), 452-463. doi:10.1016/j.conb.2008.09.005
  24. Dooley, K., & Deutsch, D. (2010). Absolute pitch correlates with high performance on musical dictation. Journal of the Acoustical Society of America, 128(2), 890-893. doi:10.1121/1.3458848

3aAB8 – Sea turtles are silent… until there is something important to communicate: first sound recording of a sea turtle

Amaury Cordero-Tapia  – acordero@cibnor.mx
Eduardo Romero-Vivas – evivas@cibnor.mx
CIBNOR
Mar Bermejo 195
Playa Palo de Santa Rita Sur 23090
La Paz, BCS, Mexico

Popular version of paper 3aAB8, “Opportunistic underwater recording of what might be a distress call of Chelonya mydas agassizii”
Presented Wednesday morning, December 6, 2017, 10:15-10:30 AM, Salon F/G/H
174th ASA Meeting, New Orleans, Louisiana

Sea turtles are considered “the least vocal of all living reptiles” (DOSITS), since their vocalizations have been documented only during nesting (Cook & Forrest, 2005). Although they are distributed worldwide in the oceans, there seem to be no recordings of sounds produced by them, perhaps until now.

In Baja California Sur, Mexico, there is a conservation program run by government authorities, industry, and non-governmental agencies focused on vulnerable, threatened, and endangered marine species. In zones with a high density of sea turtles, special nets, which allow the turtles to surface for breathing, are deployed monthly for monitoring purposes. The nets are checked by divers every 2 hours throughout the 24-hour census.

During one of these checks, a female green turtle (Chelonia mydas agassizii) was video recorded using an action cam. Subsequent analysis of the underwater recording showed a clear pattern of pulsed sound when the diver was in close proximity to the turtle. The signal falls within the reported hearing range for this species (Ketten & Bartol, 2005; Romero-Vivas & Cordero-Tapia, 2008), and given the circumstances we think that it might be a distress call. More recordings will confirm whether this is the case, although this first recording gives an initial hint of what to look for. Maybe sea turtles are not that silent; there was just no need to break the silence.
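For readers curious how such a pulsed pattern is spotted in a recording, here is a minimal sketch of a spectrogram analysis (the file name and parameters are hypothetical, not those used in this study):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical audio track extracted from the action-cam video.
fs, audio = wavfile.read("turtle_dive_audio.wav")
if audio.ndim > 1:       # keep one channel if the recording is stereo
    audio = audio[:, 0]

f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=768)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Pulsed sound pattern (illustration)")
plt.show()
```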

Figure 1. Green turtle in the special net & sound recording


Dosits.org. (2017). DOSITS: How do sea turtles hear?. [online] Available at: http://dosits.org/animals/sound-reception/how-do-sea-turtles-hear/ [Accessed 16 Nov 2017].

Cook, S. L., and T. G. Forrest. 2005, Sounds produced by nesting Leatherback sea turtles (Dermochelys coriacea). Herpetological Review 36:387–390.

Ketten, D.R. and Bartol, S.M. 2005, Functional Measures of Sea Turtle Hearing. Woods Hole Oceanographic Institution: ONR Award No: N00014-02-1-0510.

Romero-Vivas, E. and Cordero-Tapia, A. 2008, Behavioral acoustic response of two endangered sea turtle species: Chelonia mydas agassizii (tortuga prieta) and Lepidochelys olivacea (tortuga golfina). XV Mexican International Congress on Acoustics, Taxco, 380-385.


3pIDa1 – Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles

Lora J. Van Uffelen, Ph.D – loravu@uri.edu
University of Rhode Island
Department of Ocean Engineering &
Graduate School of Oceanography
45 Upper College Rd
Kingston, RI 02881

Popular version of paper 3pIDa1, “Hydronephones: Acoustic Receivers on Unmanned Underwater Vehicles”
Presented Wednesday, December 06, 2017, 1:05-1:25 PM, Salon E
174th ASA meeting, New Orleans

What do you think of when you think of a drone?  A quadcopter that your neighbor flies too close to your yard?  A weaponized military system?  A selfie drone?  The word drone typically refers to an unmanned aerial vehicle (UAV), but it is now also used to refer to an unmanned underwater vehicle (UUV).  Aerial drones are typically outfitted with cameras, but cameras are not always the best way to “see” underwater.  Hydronephones are underwater vehicles, or underwater drones, equipped with hydrophones – underwater microphones that receive and record sound.  Sound is one of the best tools for sensing or “seeing” the underwater environment.

Sound travels 4-5 times faster in the ocean than it does in air. The speed of sound depends on ocean temperature, salinity, and pressure. Sound can also travel far – hundreds of miles under the right conditions! – which makes sound an excellent tool for things like underwater communication, navigation, and even measuring oceanographic properties like temperature and currents.
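That dependence on temperature, salinity, and depth can be captured by simple empirical formulas. As a sketch, using Medwin's (1975) approximation (not part of this paper's analysis):

```python
def sound_speed_medwin(T, S, z):
    """Approximate speed of sound in seawater (m/s), after Medwin (1975).

    T: temperature in deg C, S: salinity in parts per thousand, z: depth in m.
    Valid roughly for 0-35 deg C, 0-45 ppt, and depths to about 1000 m.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Example: cold, fresher Arctic surface water vs. warmer mid-latitude water.
print(sound_speed_medwin(T=-1.0, S=32.0, z=50.0))  # ~1441 m/s
print(sound_speed_medwin(T=20.0, S=35.0, z=50.0))  # ~1522 m/s
```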

Here, the term hydronephone is used specifically to refer to an ocean glider, a subclass of UUV, used as an acoustic receiver [Figure 1].  Gliders are autonomous underwater vehicles (AUVs) because they do not require constant piloting.  A pilot can communicate with a glider only when it is at the sea surface; while underwater it travels autonomously.  Gliders do not have propellers; they move by controlling their buoyancy and using hydrofoil wings to “glide” through the water. Key advantages of these vehicles are that they are relatively quiet, they have low power consumption so they can be deployed for long durations, they can operate in harsh environments, and they are much more cost-effective than traditional ship-based observational methods.


Figure 1: Seaglider hydronephones (SG196 and SG198) on the deck of the USCGC Healy prior to deployment in the Arctic Ocean north of Alaska in August 2016.

Two hydronephones were deployed in August-September of 2016 and 2017 in the Arctic Ocean.  They recorded sound signals at ranges up to 480 kilometers (about 300 miles) from six underwater acoustic sources placed in the Arctic Ocean north of Alaska as part of a large-scale ocean acoustics experiment funded by the Office of Naval Research [Figure 2].  This acoustic system was designed to study how sound travels in the Arctic Ocean, where temperatures and ice conditions are changing.  The hydronephones were a mobile addition to this stationary system, allowing measurements at many different locations.

Figure 2: Map of Seaglider SG196 and SG198 tracks in the Arctic Ocean in August/September of 2016 and 2017. Locations of stationary sound sources are shown as yellow pins.

One of the challenges of using gliders is figuring out exactly where they are when they are underwater.  At the surface, gliders can get their position in latitude and longitude from Global Positioning System (GPS) satellites, much as a handheld GPS or a cellphone does.  Gliders only have access to GPS at the ocean surface because GPS signals are electromagnetic waves, which do not travel far underwater.   The gliders come to the surface only a few times a day and can travel several miles between surfacings, so a different method is needed to determine where they are while they are deep underwater. In the Arctic experiment, the hydronephones' recordings of the acoustic transmissions from the six sources could be used to position them underwater using sound, in a way analogous to how GPS uses electromagnetic signals for positioning.
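The idea can be sketched in a few lines: each received transmission gives a travel time, travel time multiplied by sound speed gives a range to a known source, and ranges to several sources pin down a position. The sketch below assumes synchronized clocks (so one-way travel times are known) and a constant sound speed; the actual positioning problem in the experiment is considerably more involved:

```python
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1440.0  # m/s; a rough Arctic value - in reality it varies with depth

def locate(source_xy, travel_times, guess_xy):
    """Least-squares position estimate from one-way acoustic travel times.

    source_xy: (N, 2) known source positions in local x/y meters.
    travel_times: (N,) measured travel times in seconds.
    """
    measured_ranges = SOUND_SPEED * np.asarray(travel_times)

    def residuals(p):
        return np.linalg.norm(source_xy - p, axis=1) - measured_ranges

    return least_squares(residuals, guess_xy).x

# Hypothetical geometry: three moorings and a glider near the origin.
sources = np.array([[0.0, 100e3], [80e3, -40e3], [-60e3, -60e3]])
true_pos = np.array([5e3, 2e3])
times = np.linalg.norm(sources - true_pos, axis=1) / SOUND_SPEED
print(locate(sources, times, guess_xy=np.zeros(2)))  # ~[5000, 2000]
```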

Improvements in underwater positioning will make hydronephones an even more valuable tool for ocean acoustics and oceanography.  As vehicle and battery technology improves and as data storage continues to become smaller and cheaper, hydronephones will also be able to record for longer periods of time allowing more extensive exploration of the underwater world.

Acknowledgments:  Many investigators contributed to this experiment including Sarah Webster, Craig Lee, and Jason Gobat from the University of Washington, Peter Worcester and Matthew Dzieciuch from Scripps Institution of Oceanography, and Lee Freitag from the Woods Hole Oceanographic Institution. This project was funded by the Office of Naval Research.

4aAB4 – Analysis of bats’ gaze and flight control based on the estimation of their echolocated points with time-domain acoustic simulation

Taito Banda – dmq1001@mail4.doshisha.ac.jp
Miwa Sumiya – miwa1804@gmail.com
Yuya Yamamoto – dmq1050@mail4.doshisha.ac.jp
Yasufumi Yamada – yasufumi.yamada@gmail.com
Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, Kyoto, Japan

Yoshiki Nagatani – nagatani@ultrasonics.jp
Department of Electronics, Kobe City College of Technology, Kobe, Japan.

Hiroshi Araki – Araki.Hiroshi@ak.MitsubishiElectric.co.jp
Advanced Technology R&D Center, Mitsubishi Electric Corporation, Amagasaki, Japan

Kohta I. Kobayasi – kkobayas@mail.doshisha.ac.jp
Shizuko Hiryu – shiryu@mail.doshisha.ac.jp
Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, Kyoto, Japan

Popular version of paper 4aAB4 “Analysis of bats’ gaze and flight control based on the estimation of their echolocated points with time-domain acoustic simulation.”
Presented Thursday morning, December 7, 2017, 8:45-9:00 AM, Salon F/G/H
174th ASA Meeting, New Orleans

Bats broadcast ultrasound and listen to the returning echoes to gather information about their surroundings, a process called echolocation. By analyzing those echoes, for example their arrival times, bats can determine the position, shape, or texture of objects [1-3]. Whereas people rely primarily on visual information, bats use sound to sense the world. How does the world differ between these two ways of sensing? Because the two senses are so different, we cannot easily imagine how bats see the world.

To address this question, we simulated the echoes arriving at bats during obstacle-avoidance flight, based on behavioral data, so that we could investigate how the surrounding objects were described acoustically.

First, we arranged a microphone array (24 microphones) and two high-speed cameras in an experimental flight chamber (Figure 1) [4]. The timing, positions, and directions of the emitted ultrasound, as well as the flight paths, were measured. A small telemetry microphone was attached to the back of the bat so that the intensity of the emitted ultrasound could be recorded accurately [5]. The bat was forced to follow an S-shaped flight pattern to avoid acrylic obstacle boards.

Based on these behavioral data, we simulated sound propagation using the measured emission strengths and directions at each position of the bat, obtaining the echoes reaching the left and right ears from the obstacles. Using the interaural time differences of these echoes, we could acoustically identify the echolocated points in space for all emissions (square plots in Figure 2). We also investigated how the echolocated points changed spatially and temporally as the bats became familiar with the space (top and bottom panels).
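Two textbook relationships underlie this kind of estimate: an echo's round-trip delay gives the range to a reflector, and the interaural time difference gives its direction. A simplified free-field sketch follows (the ear spacing is an assumed value; the paper's time-domain simulation is far more detailed):

```python
import numpy as np

C_AIR = 343.0  # speed of sound in air (m/s) at about 20 deg C

def echo_range(round_trip_delay_s):
    """Distance to a reflector from the round-trip delay of its echo."""
    return C_AIR * round_trip_delay_s / 2.0

def azimuth_from_itd(itd_s, ear_distance_m=0.012):
    """Azimuth (radians) from the interaural time difference, using a
    simple free-field model; the 12 mm ear spacing is an assumption."""
    return np.arcsin(np.clip(itd_s * C_AIR / ear_distance_m, -1.0, 1.0))

# Example: an echo returns 6 ms after emission and arrives 20 microseconds
# earlier at one ear than the other.
print(echo_range(6e-3))                      # ~1.03 m to the board
print(np.degrees(azimuth_from_itd(20e-6)))   # ~35 degrees off the midline
```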

Using this acoustic simulation, we analyzed changes in the echolocated points, which indicate which parts of the objects the bats directed their acoustic gaze toward. Comparing flights before and after habituation to the same obstacle layout revealed differences in the spread of echolocated points on the objects: with repeated flights through the same layout, false detections of objects were reduced and the echolocated fields became narrower.

It is natural for animals to direct their attention toward objects appropriately and to adapt flight and sensing control cooperatively as they become familiar with a space. These findings suggest that our approach, acoustic simulation based on behavioral experiments, is an effective way to visualize how groups of objects are acoustically structured and represented in space for bats echolocating during flight. We believe it might offer a clue to the question: “What is it like to see as a bat?”

Figure 1 Diagram of the bat flight experiment. Blue and red circles indicate microphones on the wall and on the acrylic boards, respectively. Two high-speed video cameras are attached at two corners of the room. Three acrylic boards are arranged to make the bats follow an S-shaped flight pattern to avoid the obstacles.

Figure 2 Comparison of echolocated points before and after habituation to the space. The measured positions where the bat emitted sounds are shown as circles, while the calculated echolocated points are shown as squares. Color variation from blue to red corresponds to the temporal sequence of the flight. The sizes of the circles and squares correspond to the strength of the emissions and of their echoes from the obstacles at the bat, respectively.

References:
[1] Griffin D. R., Listening in the Dark, Yale University Press, New Haven, CT, 1958

[2] Simmons J.A., Echolocation in bats: signal processing of echoes for target range, Science, vol. 171, pp.925-928., 1971

[3] Kick S. A., Target-Detection by the Echolocating Bat, Eptesicus fuscus, J Comp Physiol, A., vol. 145, pp.431-435, 1982

[4] Matsuta N, Hiryu S, Fujioka E, Yamada Y, Riquimaroux H, Watanabe Y., Adaptive beam-width control of echolocation sounds by CF-FM bats, Rhinolophus ferrumequinum nippon, during prey-capture flight, J Exp Biol., vol. 206, pp.1210-1218, 2013

[5] Hiryu S, Shiori Y, Hosokawa T, Riquimaroux H, Watanabe Y., On-board telemetry of emitted sounds from free-flying bats: compensation for velocity and distance stabilizes echo frequency and amplitude, J Comp Physiol A., vol. 194, pp.841-851, 2008

2aEA6 – Carbon Nanotube Speakers – Future of Transparent and Lightweight Solid-State Speakers

Suraj M Prabhu – smprabhu@mtu.edu
Dr. Andrew Barnard – arbarnar@mtu.edu
Dynamic Systems Laboratory, Michigan Technological University
R.L. Smith Building, 1400 Townsend Drive
Houghton, MI 49931

Popular Version of paper 2aEA6, “Carbon Nanotube Coaxial Thermophone for Automotive Exhaust Noise Cancellation”
Presented Tuesday morning, December 5, 2017
174th ASA Meeting, New Orleans

Everyday noise affects human health and is a major irritant for people of all ages. Automotive exhaust noise is one of the most common community noises. Exhaust noise is generated in the engine, travels down the exhaust pipe with the exhaust gases, and is radiated out to the atmosphere. Because of the enormous number of automobiles, people are exposed to significant levels of exhaust noise over their lifetimes, so it is important to control exhaust noise through engineering noise control technology.

The two types of noise control systems used are passive and active. A passive control system uses a muffler to attenuate and/or absorb the exhaust noise. An active control system uses a loudspeaker that generates sound of equal amplitude and opposite phase to cancel the exhaust noise. This works just like the noise-canceling headphones used by many air travelers, except that it is done at the source.
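The principle is plain superposition: if the anti-noise exactly mirrors the noise, the two sum to silence. A toy illustration (real systems track the changing engine noise with adaptive algorithms such as FxLMS; this sketch shows only the ideal case):

```python
import numpy as np

fs = 8000                       # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of signal

# Toy "exhaust noise": a 120 Hz firing-frequency tone plus one harmonic.
noise = np.sin(2 * np.pi * 120 * t) + 0.4 * np.sin(2 * np.pi * 240 * t)

# Ideal anti-noise: equal amplitude, opposite phase (a 180-degree shift).
anti_noise = -noise

residual = noise + anti_noise
print(np.max(np.abs(residual)))  # 0.0 - perfect cancellation in the ideal case
```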

Carbon nanotubes (CNT) are carbon nanostructures which, when stretched, form extremely lightweight, flexible films. These films adhere to any conductive surface to form a thermal speaker: when current is passed through them, their surface temperature fluctuates very rapidly. These fluctuations produce pressure waves in the medium near the film, thereby producing sound. The speaker uses no moving parts to produce sound and hence is solid-state in operation.
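One well-known consequence of this heating mechanism: because Joule heating scales with the square of the current, a pure sinusoidal drive at frequency f heats, and therefore radiates, at 2f, which is why thermophones are typically driven with a DC bias or a preprocessed signal. A quick numerical check of that frequency doubling:

```python
import numpy as np

fs = 48000                        # sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal
f_drive = 1000.0                  # sinusoidal drive current at 1 kHz
current = np.sin(2 * np.pi * f_drive * t)

power = current**2                # Joule heating is proportional to i(t)^2

spectrum = np.abs(np.fft.rfft(power - power.mean()))
freqs = np.fft.rfftfreq(len(power), 1 / fs)
print(freqs[np.argmax(spectrum)]) # 2000.0 - the heat (and sound) is at 2f
```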

Figure 1: Automotive exhaust noise schematic

Because the operating temperature of CNT speakers is high compared to that of the exhaust gases, the speakers can be mounted directly onto the tailpipe. The CNT speaker is designed as a coaxial transducer in the form of a coaxial spool. In addition, compared to the other components of the speaker, the CNT film itself is nearly massless, so the overall weight of the speaker is much lower than that of traditional loudspeakers with magnets. When tested in the laboratory for noise cancellation, an average cancellation of 12-15 dB was achieved across exhaust frequency ranges; a 12 dB reduction corresponds to cutting the sound pressure amplitude by a factor of about four (10^(12/20) ≈ 4).

Figure 2: Passive control system schematic indicating the location of a single muffler along the tailpipe

Figure 3: Active control system schematic indicating the mounting of the loudspeaker at the end of a side branch and the mounting of the entire setup on the tailpipe.


Figure 4: Planar CNT speaker with the film stretched between two electrodes and attached to an insulating base with an electrical connector


Figure 5: Coaxial CNT speaker (prototype) with two end plates (white discs), electrodes with wires for connection, and the CNT film (black) wrapped around the electrodes, protected from the atmosphere by a transparent cover

1pEAa4 – How does the stethoscope actually work?

Lukasz J. Nowak – lnowak@ippt.pan.pl
Institute of Fundamental Technological Research, Polish Academy of Sciences
Pawinskiego 5B
02-106 Warszawa, Poland

Popular version of paper 1pEAa4, “An Experimental Study on the Role and Function of the Diaphragm in Modern Acoustic Stethoscopes”
Presented Monday afternoon, December 04, 2017, 1:45 PM, Balcony N
174th ASA meeting, New Orleans

The acoustic stethoscope, invented over 200 years ago by the French physician René Laennec, is the most commonly used medical diagnostic device and a symbol of the medical profession. It may therefore sound surprising that the physics underlying the operation of this simple mechanical device is still not well understood. The theory of operation of the stethoscope is widely described in the medical literature. However, most of the statements presented there are based on purely intuitive conclusions, subjective impressions, or the results of experiments that do not reflect the complex mechanical problem of the chestpiece-patient interaction. Some recently published findings[1,2] suggest that the state of the art in the field should be verified.

One of the main challenges in determining the acoustic properties of stethoscopes is that under patient-examination conditions (the only case that actually matters for the problem at hand) the chestpiece of the stethoscope is mechanically coupled with the body, and this coupling significantly alters the parameters being sought. Thus, you cannot simply replace a patient with a loudspeaker in order to use harmonic test signals, as one would when measuring the acoustic parameters of a standard audio device. The analysis and conclusions must instead be based on sounds from inside the patient's body, and those are relatively quiet, noisy, and variable in nature.

The present study focuses on the role and function of the diaphragm in modern acoustic stethoscopes. During auscultation, the diaphragm is excited to vibrate by the underlying body surface, making it the source of the sound transmitted through the hollow tubes of the stethoscope to the ears of the physician. The higher the vibration velocities across the surface of the diaphragm, the louder the perceived sound. Loudness is a crucial parameter, as auscultation sounds are generally very quiet and diagnoses often rest on distinguishing very subtle changes in these signals. Different stethoscope manufacturers use various materials, shapes, sizes, and attachment methods for their diaphragms, claiming that specific solutions provide optimal sound parameters. However, no objective data on this question are available, so such claims cannot be accepted from a scientific point of view.

The present study introduces a detailed experimental methodology for determining the vibroacoustic properties of different kinds of diaphragms. A laser Doppler vibrometer is used to measure the vibration velocity at various points on the surface of a diaphragm during heart auscultation (Figure 2). At the same time, an electrocardiography (ECG) signal is recorded. The ECG signal is used to extract only the subset of clean, uncorrupted velocity signals, free of noise and other interfering body sounds (see Figure 3). The parameters of the extracted and selected fragments are then statistically analyzed.
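A minimal sketch of this ECG-gated selection idea (the signal names and thresholds are hypothetical; the study's actual criteria are more detailed):

```python
import numpy as np
from scipy.signal import find_peaks

def extract_heartbeat_segments(velocity, ecg, fs, window_s=0.4):
    """Cut the vibrometer signal into per-beat windows anchored at ECG R-peaks."""
    # R-peaks: the tallest deflections, at least 0.4 s apart (< 150 bpm).
    peaks, _ = find_peaks(ecg, height=0.6 * ecg.max(), distance=int(0.4 * fs))
    n = int(window_s * fs)
    segments = [velocity[p:p + n] for p in peaks if p + n <= len(velocity)]
    # Keep only "clean" beats: reject segments with outlying peak amplitude,
    # a stand-in for the study's rejection of noisy or corrupted signals.
    amps = np.array([np.max(np.abs(s)) for s in segments])
    keep = amps < 2.0 * np.median(amps)  # hypothetical rejection rule
    return [s for s, k in zip(segments, keep) if k]
```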

The box plot in Figure 4 shows the vibration velocities measured at the center and close to the edge of various types of diaphragms found in modern acoustic stethoscopes. The first two boxes on the left correspond to the case without a diaphragm. In general, the higher the values and the smaller the difference between center and edge, the better. As can be seen, the results differ significantly between diaphragm types. These conclusions are especially important from the physicians' point of view, as the acoustic efficiency of a stethoscope translates directly into the quality of diagnosis. An open question remains whether, and how, the efficiency of existing solutions could be significantly improved. The obtained results provide a good foundation for further investigations in this direction, as they allow a better understanding of the phenomena underlying the auscultation examination and support some general assumptions about the most promising designs.

Figure 1. A modern acoustic stethoscope with a diaphragm chestpiece

Figure 2. The laboratory stand used for experimental investigations on the vibroacoustic parameters of various kinds of stethoscope diaphragms

Figure 3. All the vibration velocity signals extracted from a single recording, including noisy and corrupted ones (top), and the corresponding subset of signals selected for further analysis (bottom)


Figure 4. Box plot presenting the distribution of the measured velocity of vibrations values at the center and edge points for each of the considered cases

[1] Nowak, L. J., and Nowak, K. M. (2017). “Acoustic characterization of stethoscopes using auscultation sounds as test signals,” J. Acoust. Soc. Am., 141, 1940–1946. doi:10.1121/1.4978524

[2] Nowak, K. M., and Nowak, L. J. (2017). “Experimental validation of the tuneable diaphragm effect in modern acoustic stethoscopes,” Postgrad. Med. J., doi:10.1136/postgradmedj-2017-134810.