1aPP1 – With two ears and a cochlear implant, each ear is tuned differently

David Landsberger – David.Landsberger@nyumc.org

New York University School of Medicine
Department of Otolaryngology – EAR-Lab
462 First Ave STE NBV 5E5
New York, NY 10016, USA
www.ear-lab.org

Popular version of paper 1aPP1, “Electrode length, placement, and frequency allocation distort place coding for bilateral, bimodal, and single-sided deafened cochlear implant users”
Presented Monday morning, May 7, 2018, 8:05-8:25 AM, Nicollet D2
175th ASA Meeting, Minneapolis, Minnesota.

Imagine listening to the world with two ears that are tuned differently from each other. A key pressed on a piano would be perceived as different notes in the left and right ear. A person talking would sound like two different people simultaneously saying the same thing, one to each ear. This is in fact the experience for many people listening with two ears where one of the two ears has a cochlear implant.

The cochlea in a normal-hearing ear is arranged “tonotopically.” That is, high frequencies are represented at the bottom (base) of the cochlea and low frequencies at the top (apex). The regions in between represent intermediate frequencies, ordered from low (in the apical region) to high (in the basal region) along the cochlea.

Cochlear implants take advantage of this tonotopic property using an array of electrodes inside the cochlea. Stimulation from an electrode placed deeper into the cochlea produces a lower pitch than stimulation from an electrode placed closer to the base. Cochlear implant signal processing therefore presents low-frequency information on apical electrodes and high-frequency information on basal electrodes.

However, there is a mismatch between the frequency represented by a given electrode and the frequency expected by a normal ear at the same location. For example, the deepest electrode might represent 150-200 Hz but be placed in a location that expects approximately 1000 Hz. One factor affecting this relationship is the placement of the electrodes in the cochlea, which depends on electrode length, surgical placement, and the size of the individual’s cochlea. Another factor is the “frequency allocation,” that is, the mapping of which frequency range is represented by each electrode [1]. The result is that the world presented by a cochlear implant is pitch-shifted (and warped) relative to what a normal ear would expect.
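For readers who want to quantify such a mismatch, Greenwood's frequency-position function is a standard model of the normal cochlear map. The specific electrode depth used below is an illustrative assumption, not data from the paper:

```python
import math

def greenwood_frequency(x):
    """Greenwood's frequency-position function for the human cochlea.
    x: position as a fraction of cochlear length (0 = apex, 1 = base).
    Returns the characteristic frequency in Hz expected by a normal ear."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# A normal ear expects roughly 1000 Hz at about 40% of the way up from the
# apex, yet an implant's deepest electrode sitting at that depth might be
# assigned only 150-200 Hz by the processor's frequency allocation.
expected_hz = greenwood_frequency(0.40)   # ~1000 Hz
```

Comparing `expected_hz` with the frequency range a processor actually assigns to an electrode at that depth gives the size of the place-pitch shift discussed above.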

This distortion may or may not be an issue for traditional cochlear implant users who are bilaterally deaf and listen to the world via a single unilateral implant. For these users, although pitch may be transposed, the transposition is consistent and therefore may be easier to perceptually manage. However, it has become more common for cochlear implant users to listen to the world with two ears (i.e. a cochlear implant in each ear, or a cochlear implant in one ear with acoustic hearing in the other). In this situation, each ear will be differently transposed. This may result in a single auditory object being perceived as two independent auditory objects and may provide contralateral spectral interference. The bilateral listener with a cochlear implant will likely listen to the world with conflicting information provided to each ear.

In the following presentation, we quantify the magnitudes of these distortions across ears. We also discuss limitations of (and potential modifications to) electrode designs and frequency allocations to minimize this problem for cochlear implant users listening with two ears.

Audio Demos:

(Figure 1) audio files “chickenleg.wav” and “ring.wav”

“Two audio demonstrations of listening to sounds that are differently tuned in each ear. In each sample, a sound is presented normally to one ear and pitch-shifted to the other ear. The first sample consists of speech and the second of music. These samples simulate only a pitch shift, not hearing loss or the sound quality of a cochlear implant. Note: the demos should be played back over headphones.”
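A crude version of such a binaural demo can be sketched in a few lines: the same waveform is sent to one channel unchanged and to the other channel resampled to a higher pitch. This is only a sketch; unlike the actual demo files, simple resampling also shortens the sound, whereas a proper pitch shifter preserves duration. The tone frequency and shift factor are arbitrary choices:

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)          # the "normal" ear hears A4

# Crude pitch shift by resampling: the same waveform replayed ~1.3x faster.
shift = 1.3
idx = np.arange(0, len(tone) - 1, shift)
shifted = np.interp(idx, np.arange(len(tone)), tone)
shifted = np.pad(shifted, (0, len(tone) - len(shifted)))  # equalize length

stereo = np.column_stack([tone, shifted])   # left = normal, right = shifted
```

Writing `stereo` to a WAV file and listening over headphones reproduces the basic experience described above: one key press heard as two different notes.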

[1] D.M. Landsberger, M. Svrakic, J.T. Roland and M. Svirsky, “The Relationship Between Insertion Angles, Default Frequency Allocations, and Spiral Ganglion Place Pitch in Cochlear Implants,” Ear Hear, vol. 36, pp. e207-e213, 2015.

4aMU6 – How Strings Sound Like Metal: The Illusion of the Duck-Herder’s Musical Cape

Indraswari Kusumaningtyas – i.kusumaningtyas@ugm.ac.id
Gea Parikesit – gofparikesit@ugm.ac.id

Faculty of Engineering, Universitas Gadjah Mada
Jl. Grafika 2, Kampus UGM
Yogyakarta, 55281, INDONESIA

Popular version of paper 4aMU6, “Computational analysis of the Bundengan, an endangered musical instrument from Indonesia”
Presented Thursday morning, May 10, 2018, 10:00-10:15 AM, Lakeshore A
175th ASA Meeting, Minneapolis, MN

The bundengan is an endangered musical instrument from Indonesia. It has a distinctive half-dome structure, originally built by duck herders and used as a cape to protect themselves from adverse weather while tending their flocks. To pass the time in the fields, the duck herders play music and sing. The illusory sound of the bundengan is produced by plucking a set of strings equipped with small bamboo clips, alongside a number of long, thin bamboo plates fitted on the resonating dome; see Figure 1. The clipped strings and the long, thin bamboo plates allow the bundengan to imitate the sounds of the gongs and kendangs (cow-hide drums) of a gamelan ensemble, respectively. Hence, it is sometimes referred to as the poor man’s gamelan. Examples of the bundengan sound can be found at http://www.auralarchipelago.com/auralarchipelago/bundengan.

Figure 1. The construction of the bundengan (left). A set of strings with small bamboo clips and a number of long, thin bamboo plates are fitted on the grid (right).

Amongst the components of the bundengan, arguably the most intriguing are the strings. We use computational simulations to investigate how the clipped strings produce the gong-like sound. By building a finite element model of a bundengan string, we visualize how the string vibration changes when the number, size (hence mass), and position of the bamboo clips are varied.

We first simulate the vibration of a 20 cm string, first with no bamboo clip and then with one bamboo clip placed 6 cm from one of its ends. Compared to the string with no clip (Figure 2a), the addition of the bamboo clip alters the string vibration (Figure 2b) such that two vibrations of different frequencies emerge, one on each section of the string on either side of the clip. A relatively high-frequency vibration occurs on the longer part of the string, whereas a relatively low-frequency vibration occurs on the shorter part. This correlates well with our high-speed recording of the bundengan string vibration; see http://ugm.id/bundengan.

Figure 2. Contour plot of the bundengan string vibration when plucked at the centre of the string for (a) no bamboo clip, and (b) one bamboo clip located at 0.06 m.
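The qualitative effect of the clip can be reproduced with a much simpler model than the paper's finite element one: a finite-difference string in which one node is given extra inertia. The wave speed and clip mass ratio below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Finite-difference sketch of a plucked 20 cm string with a point mass (clip).
L, N = 0.20, 200
c = 80.0                          # transverse wave speed on the string (assumed)
dx = L / N
dt = 0.5 * dx / c                 # stable time step (Courant number 0.5)
steps = 8000

inertia = np.ones(N + 1)          # relative inertia of each node
clip = int(0.06 / dx)             # clip at 6 cm from one end
inertia[clip] = 20.0              # the clip node is much heavier (assumed ratio)

x = np.linspace(0, L, N + 1)
y = np.minimum(x, L - x)          # triangular "pluck" at the string centre
y_prev = y.copy()
rec = []                          # displacement record on the longer segment

r2 = (c * dt / dx) ** 2
for _ in range(steps):
    y_next = np.zeros_like(y)     # fixed ends stay at zero
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + (r2 / inertia[1:-1]) * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next
    rec.append(y[int(0.7 * N)])   # sample a point on the longer section

spectrum = np.abs(np.fft.rfft(rec))   # spectrum of the recorded motion
```

Plotting `spectrum` for different clip positions mimics the kind of analysis behind Figure 3.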

We also simulate how the position of the bamboo clip affects the frequencies of the string vibration and, hence, the sound produced by the clipped string. Figure 3 demonstrates that, for the string with a bamboo clip, there are two strong peaks at frequencies below and above the frequency of the single peak seen with no clip. The magnitudes of these two peaks change as the clip is shifted away from the end of the string, changing the pitch of the sound.

Figure 3. Frequency spectra of the bundengan string vibration when the location of the bamboo clip is shifted from 1 cm to 9 cm from one end of the 20 cm string. The spectrum for the string with no clip is also given (top graph).

In a bundengan string equipped with a bamboo clip, the emergence of two different-frequency vibrations on different sections of the string is the key to the production of the gong-like sound. The vibration spectra allow us to understand how the position of the bamboo clip tunes the bundengan string. This can serve as a guide for designing the bundengan, opening possibilities for future developments.

 


1aNS3 – Low-frequency sound control by means of bio-inspired and fractal designs

Anastasiia O. Krushynska – akrushynska@gmail.com
Federico Bosia – fbosia@unito.it
Nicola M. Pugno – nicola.pugno@unitn.it
Laboratory of Bio-inspired and Graphene Nanomechanics
Department of Civil, Environmental and Mechanical Engineering
University of Trento
Via Mesiano 77
Trento, 38123, ITALY

Popular version of paper 1aNS3, “Fractal and bio-inspired labyrinthine acoustic metamaterials”
Presented Monday morning, May 7, 2018, 9:15-9:35 AM, Nicollet D3
175th ASA Meeting, Minneapolis

Road, rail, airports, industry, urban environments, crowds – all generate high-volume sound. When sound becomes uncomfortable or even painful to the ear, it is generally called noise. Nowadays, noise is one of the most widespread environmental problems in developed countries, negatively affecting public health and quality of life. Recent findings of the World Health Organization show that noise pollution is not only annoying for a large percentage of the population, but also causes sleep disturbance, increases the risk of cardiovascular diseases, intensifies the level of stress and hinders learning processes. Low-frequency noise is the most troublesome type and is mainly produced by road vehicles, aircraft, industrial machinery, wind turbines, compressors, air-conditioning units, etc.

The attenuation or elimination of low-frequency noise is a challenging task due to its numerous sources, its ability to bypass obstacles, and the limited efficiency of most sound barriers. The laws of acoustics tell us that if a solid wall is used to attenuate noise, sound transmission is inversely proportional to its mass per unit area and the sound frequency. This means that very heavy walls, more than ten meters thick (!), are necessary to efficiently reduce typical low-frequency noise in the frequency range between 10 and 1000 Hz.
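The law referred to here is the mass law for normal-incidence transmission loss, roughly TL ≈ 20·log10(f·m) − 47 dB with f in Hz and m in kg/m². A quick calculation, assuming a concrete-like density, shows why low frequencies are so demanding:

```python
import math

def mass_law_tl(f_hz, m_kg_m2):
    """Approximate normal-incidence mass-law transmission loss (dB) of a
    solid wall with mass per unit area m_kg_m2 at frequency f_hz."""
    return 20 * math.log10(f_hz * m_kg_m2) - 47

def wall_thickness(f_hz, tl_db, density=2300.0):
    """Wall thickness (m) needed for a target transmission loss,
    assuming a concrete-like density of 2300 kg/m^3."""
    m = 10 ** ((tl_db + 47) / 20) / f_hz   # invert the mass law
    return m / density

thick = wall_thickness(10, 70)   # ~31 m of concrete for 70 dB at 10 Hz
```

Each halving of frequency costs 6 dB of transmission loss, so strong attenuation at the bottom of the 10-1000 Hz range quickly requires meters of solid material.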

Fortunately, modern technology can provide more innovative and efficient solutions, based on so-called acoustic metamaterials. These are engineered structures capable of effectively slowing down sound speed and reducing sound intensity thanks to enhanced internal structural losses. The latter can be induced by incorporating internal resonators, which transfer mechanical vibrational energy into heat, or by using a geometry-related mechanism, exploiting the artificial elongation of sound propagation paths by means of narrow, so-called “labyrinthine” channels. In this work, we develop labyrinthine acoustic metamaterials with long narrow channels inspired by the structure of spider webs or arranged along fractal space-filling curves. These particular designs help to extend the metamaterial functionalities as compared to simpler configurations analyzed in previous years.

What happens if a sound wave enters a straight narrow channel? Depending on the channel geometry, it can either propagate through it or be attenuated. In narrow channels, friction near the channel walls hinders wave propagation and can eventually lead to total attenuation. In moderately wide channels, if the distance between the two channel edges equals an integer number of half wavelengths, resonance takes place, amplifying the sound transmission. Both of these effects occur at single frequencies.
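The resonance condition just described, an integer number of half wavelengths fitting the channel, gives the familiar frequency series f_n = n·c/(2L). A minimal sketch:

```python
def channel_resonances(length_m, n_modes=3, c=343.0):
    """Frequencies at which an integer number of half wavelengths fits a
    channel of the given length (c = speed of sound in air, m/s)."""
    return [n * c / (2 * length_m) for n in range(1, n_modes + 1)]

channel_resonances(0.5)   # first modes of a 0.5 m channel: 343, 686, 1029 Hz
```

Coiling the channel lengthens L without enlarging the device, which is exactly how labyrinthine designs push these resonances down to low frequencies.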

But what happens if the channels are arranged in the shape of a maze, or if there is a set of coiled channels? We now know that for certain configurations, another type of collective resonance can be induced – Mie resonances – enabling total reflection over rather wide frequency ranges.

We have found that natural spider-web designs for the channel labyrinths provide sufficient freedom to develop metamaterials that can be switched between total-transmission and total-reflection regimes, and that can easily be adapted for controlling low-frequency sound. In particular, we have shown that a lightweight, re-configurable structure with a square cross section of 0.81 m² is capable of totally reflecting airborne sound at frequencies of 50-100 Hz and above [1]. Moreover, by modifying the channel thickness and length, we can tune the operating frequencies to desired ranges. In fact, the proposed metamaterials provide exceptional versatility for low-frequency sound control and noise abatement.

Incorporating more advanced designs, e.g., coiling wave paths along space-filling curves, enables more compact configurations and opens a route to creating efficient sound absorbers [2]. Space-filling curves are lines constructed by an infinite iterative process with the aim of filling a certain area, e.g., a square or cube. From the work of G. Peano (1890) until the 1980s, these curves were considered no more than mathematical curiosities, and only recently have they found application in fields like data science and routing systems. The use of space-filling curves for wave-path labyrinths, combined with the added effect of friction in narrow channels, has allowed us to achieve total reflection or improved absorption of low-frequency sound. The absorption can be increased up to 100% at selected frequencies if a hybrid configuration with incorporated Helmholtz resonators is used [3]. This could be the next chapter in the story of efficient noise abatement through innovative metamaterials.


[1] A.O. Krushynska, F. Bosia, M. Miniaci and N. M. Pugno, “Spider web-structured labyrinthine acoustic metamaterials for low-frequency sound control,” New J. Phys., vol. 19, pp. 105001, 2017.

[2] A.O. Krushynska, F. Bosia, and N.M. Pugno, “Labyrinthine acoustic metamaterials with space-coiling channels for low-frequency sound control,” Acta Acust. united Ac., vol. 104, pp. 200–210, 2018.

[3] A.O. Krushynska, V. Romero-García, F. Bosia, N.M. Pugno, J.P. Groby, “Extra-thin metamaterials with space-coiling designs for perfect sound absorption”, (working paper), 2018.

4pBA5 – Plane-wave vector-flow imaging of adult mouse heart

Jeffrey Ketterling – jketterling@riversideresearch.org
Lizzi Center for Biomedical Engineering
Riverside Research
New York, NY 10038

Akshay Shekhar, Orlando Aristizabal
Skirball Institute of Biomolecular Medicine
NYU School of Medicine
New York, NY

Anthony Podkowa
Electrical and Computer Engineering
4251 Beckman Institute MC 251
405 N. Mathews, Urbana Illinois 61801

Billy Y.S. Yiu, Alfred C.H. Yu
Department of Electrical and Computer Engineering
University of Waterloo
Waterloo, ON, Canada

Popular version of paper 4pBA5, “Plane-wave vector-flow imaging of adult mouse heart”
Presented Thursday afternoon, May 10, 2018, 4:30 PM
175th ASA Meeting, Minneapolis

The blood entering the left ventricle of the mouse heart forms a vortex pattern, just as in humans.

Doppler ultrasound is a well-established clinical technique for measuring blood flow in humans. The method makes use of the Doppler effect to detect small changes in position over time. It is used extensively in cardiovascular evaluations to detect abnormal blood-flow conditions. Traditional Doppler is used either to detect the presence of blood or to assess flow conditions in blood vessels where the flow is more or less steady. Traditional Doppler can only assess flow along the direction in which the ultrasound propagates, i.e., the direction normal to the transducer. To estimate a flow velocity that is not in this direction, the angle between the beam direction and the flow direction must itself be estimated. Traditional Doppler is therefore not very effective for imaging complex flow patterns, such as the vortex patterns that form in the heart.

In recent years, advances in ultrasound equipment and computational power have permitted the detection of flow patterns through estimates of local flow vectors, using Doppler and other approaches. These methods have been used on humans; the equipment required to perform this type of blood-flow imaging is becoming more widespread, and clinical applications are slowly emerging.

Mice are used extensively for cardiovascular studies because many human diseases are represented in mouse models. Specialized ultrasound equipment is available to perform Doppler studies on mice. The main difference between the equipment for humans and the equipment for mice is the operating ultrasound frequency: humans require frequencies around 10 MHz, and mice upwards of 20 MHz. Because of this, the vector-flow methods applied to humans have not yet been adapted to imaging mice. The ability to apply vector-flow approaches to mice would allow direct translational studies, facilitating understanding of how the complex blood-flow patterns in the heart relate to healthy heart function.

We undertook initial studies to obtain vector-flow information from the left ventricle of a mouse. Data were acquired by transmitting plane-wave ultrasound at an absolute rate of 30,000 frames per second. The effective frame rate after processing was 10,000 frames per second. In terms of flow, the maximum velocity that could be resolved before aliasing in the direction of the ultrasound was 21 cm/s. A video clip [movie] showing 3 heart cycles, spanning 300 ms, is shown. The flow is indicated by vectors that point in the direction of flow and are colored by flow velocity. Over the heart cycle, the left ventricle can clearly be seen filling via the mitral valve [Fig 1] before a vortex pattern develops [Fig 2]; the blood is then ejected through the aortic valve.
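The 21 cm/s aliasing limit follows from the standard Doppler Nyquist relation v_max = c·PRF/(4·f0). The transmit frequency below is our assumption for illustration; the stated limit is consistent with roughly 18 MHz at the 10,000 fps effective rate:

```python
def nyquist_velocity(prf_hz, f0_hz, c=1540.0):
    """Maximum axial velocity resolvable before Doppler aliasing,
    with c the speed of sound in soft tissue (m/s)."""
    return c * prf_hz / (4.0 * f0_hz)

v_max = nyquist_velocity(prf_hz=10_000, f0_hz=18e6)   # ~0.21 m/s = 21 cm/s
```

This also shows why the very high frame rates of plane-wave imaging matter: the resolvable velocity scales linearly with the effective pulse repetition frequency.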

Figure 1. Blood flow into the left ventricle through the mitral valve; the flow velocity is near 100 cm/s. A Doppler spectrogram from a region near the mitral valve is also shown.

Figure 2. After the mitral valve closes, a vortex pattern develops in the left ventricle prior to ejection of the blood.

These initial studies show that the sophisticated methods used to image cardiac mechanics and hemodynamics in humans can be translated to mice. Having similar tools for mice and humans will assist in developing vector-flow applications and in understanding fundamental properties of cardiovascular function as they relate to blood flow, mechanics, and the forces connecting the two.

This movie shows several heart cycles and the blood flow patterns.

[1] B. Y. S. Yiu and A. C. H. Yu, “Least-squares multi-angle Doppler estimators for plane wave vector flow imaging.” IEEE Trans Ultrason Ferroelectr Freq Control, vol. 63, no. 11, pp. 1733–1744, 2016.

[2] J.A. Ketterling, O. Aristizábal, B.Y.S. Yiu, D.H. Turnbull, C.K.L. Phoon, A.C.H. Yu and R.H. Silverman, “High-speed, high-frequency ultrasound, in utero vector-flow imaging of mouse embryos,” Scientific Reports, vol. 7, 16558, 2017.

5aPA – A Robust Smartphone Based Multi-Channel Dynamic-Range Audio Compression for Hearing Aids

Yiya Hao – yxh133130@utdallas.edu
Ziyan Zou – ziyan.zou@utdallas.edu
Dr. Issa M S Panahi – imp015000@utdallas.edu

Statistical Signal Processing Laboratory (SSPRL)
The University of Texas at Dallas
800W Campbell Road, Richardson, TX – 75080, USA

Popular Version of Paper 5aPA, “A Robust Smartphone Based Multi-Channel Dynamic-Range Audio Compression for Hearing Aids”
Presented Friday morning, May 11, 2018, 10:15 – 10:30 AM, GREENWAY J
175th ASA Meeting, Minneapolis

Records from the National Institute on Deafness and Other Communication Disorders (NIDCD) indicate that in the United States, nearly 15% of adults aged 18 and over (37 million people) report some kind of hearing loss. Worldwide, 360 million people suffer from hearing loss.

Hearing impairment degrades the perception of speech and audio signals due to elevated, frequency-dependent audible threshold levels. Hearing aid devices (HADs) apply prescription gains and dynamic-range compression to improve users’ audibility without increasing loudness to uncomfortable levels. Multi-channel dynamic-range compression enhances the quality and intelligibility of the audio output by processing each frequency band with different compression parameters, such as compression ratio (CR), attack time (AT), and release time (RT).

Increasing the number of compression channels can produce more comfortable audio output when appropriate parameters are defined for each channel. However, using more channels increases the computational complexity of the multi-channel compression algorithm, limiting its application in some HADs. In this paper, we propose a nine-channel dynamic-range compression (DRC) with an optimized structure capable of running on smartphones and other portable digital platforms in real time. Test results showing the performance of the proposed method are also presented. The block diagram of the proposed method is shown in Fig. 1, and the block diagram of the compressor in Fig. 2.

Fig.1. Block Diagram of 9-Channel Dynamic-Range Audio Compression

Fig.2. Block Diagram of Compressor
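As a rough illustration of what a single compression channel does (all parameter values below are hypothetical defaults, not the paper's tuned settings), here is a feed-forward compressor with a threshold, a compression ratio, and attack/release smoothing of the level estimate:

```python
import numpy as np

def compress_channel(x, fs, threshold_db=-30.0, ratio=4.0,
                     attack_ms=5.0, release_ms=50.0):
    """One band of a multi-channel DRC: level detection with attack/release
    smoothing, then gain reduction above the threshold by the given ratio."""
    a_at = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rt = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env_db = -120.0                          # running level estimate, dBFS
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level_db = 20 * np.log10(max(abs(s), 1e-9))
        coeff = a_at if level_db > env_db else a_rt   # attack vs release
        env_db = coeff * env_db + (1 - coeff) * level_db
        over_db = max(env_db - threshold_db, 0.0)
        gain_db = -over_db * (1 - 1 / ratio)          # static compression curve
        out[i] = s * 10 ** (gain_db / 20)
    return out

fs = 16000
loud = 0.5 * np.ones(fs // 10)           # -6 dBFS: well above the threshold
quiet = 0.001 * np.ones(fs // 10)        # -60 dBFS: below the threshold
loud_out = compress_channel(loud, fs)    # attenuated once the attack settles
quiet_out = compress_channel(quiet, fs)  # passed through essentially unchanged
```

A nine-channel DRC would split the input into nine frequency bands, run each band through such a compressor with its own CR/AT/RT, and sum the band outputs.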

Several experiments were carried out, including processing-time measurements of a real-time implementation of the proposed method on an Android smartphone, along with objective and subjective evaluations. A commercial audio compressor and limiter provided by Hotto Engineering [1], running on a laptop, was used for comparison. The proposed method ran on a Google Pixel smartphone with operating system 6.0.1. The sampling rate was set to 16 kHz and the frame size to 10 ms.

The Hearing in Noise Test (HINT) sentence database, at a 16 kHz sampling rate, was used. The first experiment measured processing time on the smartphone. Two processing times were measured: round-trip latency and algorithm processing time. The Larsen test was used to measure round-trip latency [2]; the test setup and the average processing-time results are shown in Fig. 3. Perceptual evaluation of speech quality (PESQ) [3] and short-time objective intelligibility (STOI) [4] were used to assess the objective quality and intelligibility of the proposed nine-channel DRC.

The results can be found in Fig. 4. Subjective tests, including a mean opinion score (MOS) test [5] and a word recognition (WR) test, were also conducted; Fig. 5 shows the results. Based on these results, the proposed nine-channel DRC runs efficiently on the smartphone and provides good quality and intelligibility.

Fig.3. Processing Time Measurements and Results

Fig.4. Objective evaluation results of speech quality and intelligibility.

Fig.5. Subjective evaluation results of speech quality and intelligibility.

In summary, the proposed nine-channel dynamic-range audio compression provides good quality and intelligibility while running on a smartphone. All of its parameters can be pre-set based on an individual’s audiogram. With the proposed compression, multi-channel DRC is no longer limited to costly dedicated hardware such as hearing aids or laptops. The proposed method also provides a portable audio framework that is not limited to the current version of the DRC but can be extended or upgraded for further research.

Please refer to our lab website http://www.utdallas.edu/ssprl/hearing-aid-project/ for video demos; the sample audio files are attached below.

Audio files:

Unprocessed_MaleSpeech.wav

Unprocessed_FemaleSpeech.wav

Unprocessed_Song.wav

Processed_MaleSpeech.wav

Processed_FemaleSpeech.wav

Processed_Song.wav

Key References:

[1] Hotto Engineering. 2018. [Online]. Available: http://www.hotto.de/
[2] Android audio latency measurements. 2018. [Online]. Available: https://source.android.com/devices/audio/latency_measurements
[3] Rix, A. W., Beerends, J. G., Hollier, M. P., Hekstra, A. P., “Perceptual evaluation of speech quality (PESQ) – a new method for speech quality assessment of telephone networks and codecs,” IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP), vol. 2, pp. 749-752, May 2001.
[4] Taal, C. H., Hendriks, R. C., Heusdens, R., Jensen, J., “An algorithm for intelligibility prediction of time-frequency weighted noisy speech,” IEEE Trans. Audio, Speech, Lang. Process., 19(7), pp. 2125-2136, 2011.
[5] Streijl, R. C., Winkler, S., Hands, D. S., “Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives,” Multimedia Systems, 22(2), pp. 213-227, 2016.

*This work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health (NIH) under grant number 5R01DC015430-02. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors are with the Statistical Signal Processing Research Laboratory (SSPRL), Department of Electrical and Computer Engineering, The University of Texas at Dallas.

2pBA5 – Sensing Osteoporosis by Acoustic Waves of Ultrasound

Siavash Ghavami – roudsari.seyed@mayo.edu
Max Denis – denis.max@mayo.edu
Adriana Gregory – gregory.adriana@mayo.edu
Jeremy Webb – webb.jeremy@mayo.edu
Mahdi Bayat – bayat.mahdi@mayo.edu
Mostafa Fatemi – fatemi.mostafa@mayo.edu
Azra Alizad – alizad.azra@mayo.edu

Mayo Clinic, College of Medicine and Science,
Department of Radiology, Department of Physiology and Biomedical Engineering
200 1st St SW, Rochester, MN 55905, USA

Popular version of paper 2pBA5, “Vibro-Acoustic Method for Detection of Osteopenia and Osteoporosis”
Presented Tuesday afternoon, May 8, 2018, 2:15-2:30 PM, GREENWAY F/G
175th ASA Meeting, Minneapolis

Osteoporosis, a condition of low bone mass and micro-architectural deterioration, is the most common bone disease in adults, leading to skeletal fragility and increased risk of fracture. Age-related osteoporosis is by far the most common form of the disease, occurring most often in women after menopause and in older men. Bone density is a measurement of how dense and strong the bones are. Osteopenia refers to bone density that is lower than the normal peak density but not low enough to be classified as osteoporosis. Having osteopenia means there is a greater risk that, as time passes, bone density will fall far below normal, developing into osteoporosis.

Assessment of bone mass and bone quality is essential for early detection of osteopenia and osteoporosis in people at risk, as well as for monitoring the efficacy of therapeutic regimens intended to reduce the fractures associated with these diseases. Estimates of bone mineral density (BMD) from dual-energy X-ray absorptiometry (DXA) have played an important role in bone evaluation and fracture-risk prediction in recent years. Although DXA is now the gold standard for bone mass measurements in adults, it uses X-rays, which can be harmful, especially with repeated use.

In this study, a new noninvasive method is proposed for the detection of osteoporosis and osteopenia. In this method, a pulse of ultrasound is used to induce vibrations in the bone; these vibrations produce an acoustic wave that is measured by a sensitive hydrophone placed on the skin. The resulting acoustic signals are used to measure the wave velocity in the bone, which is in turn used to assess bone quality. The accuracy of the wave-velocity estimate is affected by the complex acoustic environment. The acoustic wave in this environment can be thought of as a composition of several simpler wave components, so we used an efficient technique to decompose the received signal into its constituent components. This allowed us to choose the wave component that represents the bone vibration. Using this component, we estimate the wave velocity in the bone and use it to judge bone abnormality.
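The paper's estimator relies on the decomposition described above. As a simplified stand-in, the basic idea of a wave-velocity measurement can be illustrated with an arrival-time (cross-correlation) estimate between two recording positions a known distance apart. All signals and numbers below are synthetic:

```python
import numpy as np

def wave_velocity(sig_a, sig_b, fs, distance_m):
    """Estimate wave speed from the arrival-time delay between two recording
    positions a known distance apart, found via cross-correlation.
    A simplified stand-in for the paper's decomposition-based estimator."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # delay in samples
    return distance_m * fs / lag

# Synthetic check: a pulse delayed by 50 samples over 5 cm at 1 MHz sampling
fs = 1_000_000
t = np.arange(2000)
pulse = np.exp(-((t - 300) / 30.0) ** 2)
delayed = np.roll(pulse, 50)
v = wave_velocity(pulse, delayed, fs, 0.05)    # 0.05 m * 1e6 / 50 = 1000 m/s
```

In practice the received components overlap and are dispersive, which is why the paper first decomposes the signal and estimates velocity on the bone component alone.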

The study was done on 27 volunteers: 8 had osteopenia, 6 had osteoporosis, and 13 were healthy with no bone abnormality. For each volunteer, the right and left tibia (the long bone of the lower leg) were tested. By comparing the wave velocities, we were able to correctly distinguish individuals with osteoporosis or osteopenia from healthy individuals in up to 89% of cases. This technique can provide physicians with a safe, low-cost, and portable tool for the diagnosis of osteoporosis and osteopenia.


Fig. 1. Estimated wave velocity in osteopenic, osteoporotic, and normal bones.