2aPA6 – An acoustic approach to assess natural gas quality in real time

Andi Petculescu – andi@louisiana.edu
University of Louisiana at Lafayette
Lafayette, Louisiana, US

Popular version of paper 2aPA6 “An acoustic approach to assess natural gas quality in real time.”
Presented Tuesday morning, December 5, 2017, 11:00-11:20 AM, Balcony L
174th ASA Meeting, New Orleans

Infrared laser spectroscopy offers remarkable measurement resolution for gas sensing applications, ranging from 1 part per million (ppm) down to a few parts per billion (ppb).

There are applications, however, that require sensor hardware able to operate in harsh conditions, without the need for periodic maintenance or recalibration. Examples are monitoring of natural gas composition in transport pipes, explosive gas accumulation in grain silos, and ethylene concentration in greenhouse environments. A robust alternative is gas-coupled acoustic sensing. Such gas sensors operate on the principle that sound waves are intimately coupled to the gas under study, hence any perturbation of the gas will affect i) how fast the waves travel and ii) how much energy they lose during propagation.

The former effect is represented by the so-called speed of sound, which is the typical “workhorse” of acoustic sensing. The speed of sound of a gas mixture changes with composition because it depends on two gas parameters besides temperature. The first is the mass of the molecules forming the gas mixture; the second is the heat capacity, which describes the ability of the gas to follow, via the amount of heat exchanged, the temperature oscillations accompanying the sound wave. All commercial gas-coupled sonic gas monitors rely solely on the dependence of sound speed on molecular mass. This traditional approach, however, can only sense relative changes in the speed of sound, hence in mean molecular mass; it therefore cannot perform a truly quantitative analysis.

Heat capacity, on the other hand, is the thermodynamic “footprint” of the amount of energy exchanged during molecular collisions. It therefore opens up the possibility of quantitative gas sensing. Furthermore, the attenuation coefficient, which describes how fast energy is lost from the coherent (“acoustic”) motion to the incoherent (random) motion of the gas molecules, has largely been ignored. We have shown that measurements of sound speed and attenuation at only two acoustic frequencies can be used to infer the intermolecular energy-transfer rates, which depend on the species present in the gas. The foundation of our model is summarized in the pyramid of Figure 1. One can either predict the sound speed and attenuation if the composition is known (bottom-to-top arrow) or perform quantitative analysis or sensing based on measured sound speed and attenuation (top-to-bottom arrow).
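
As a rough illustration of the bottom-to-top (prediction) arrow, the sketch below computes the low-frequency, ideal-gas speed of sound of a methane-based mixture from its composition. The molar masses and heat capacities are approximate textbook values chosen for illustration; this is not the relaxation model used in the paper.

```python
import numpy as np

# Ideal-gas, low-frequency ("fully relaxed") speed of sound for a mixture.
# Molar masses (kg/mol) and molar heat capacities at constant pressure
# (J/(mol*K)) are approximate room-temperature values, for illustration only.
R = 8.314  # J/(mol*K)

species = {
    #        M (kg/mol)   Cp (J/(mol*K))
    "CH4": (0.01604, 35.7),
    "N2":  (0.02801, 29.1),
    "CO2": (0.04401, 37.1),
}

def sound_speed(mole_fractions, T=293.15):
    """Speed of sound (m/s) of an ideal-gas mixture at temperature T (K)."""
    M  = sum(x * species[s][0] for s, x in mole_fractions.items())
    Cp = sum(x * species[s][1] for s, x in mole_fractions.items())
    gamma = Cp / (Cp - R)          # Cv = Cp - R for an ideal gas
    return np.sqrt(gamma * R * T / M)

# Example: pure methane vs. methane contaminated with 5% nitrogen.
print(sound_speed({"CH4": 1.0}))              # ~445 m/s
print(sound_speed({"CH4": 0.95, "N2": 0.05})) # slightly lower
```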


Figure 1. The prediction/sensing pyramid of molecular acoustics. Direct problem: prediction of sound wave propagation (speed and attenuation). Inverse problem: quantifying a gas mixture from measured sound speed and attenuation.

We are developing physics-based algorithms that not only quantify a gas mixture but also help identify contaminant species in a base gas. With the right optimization, the algorithms can be used in real time to measure the composition of piped natural gas as well as its degree of contamination by CO2, N2, O2 and other species. It is these features that have sparked the interest of the gas flow-metering industry. Figure 2 shows model predictions and experimental data for the attenuation coefficient for mixtures of nitrogen in methane (Fig. 2a) and ethylene in nitrogen (Fig. 2b).


Figure 2. The normalized (dimensionless) attenuation coefficient in mixtures of N2 in CH4 (a) and C2H4 in N2 (b). Solid lines–theory; symbols–measurements

The sensing algorithm, which we named “Quantitative Acoustic Relaxational Spectroscopy” (QARS), is based on a purely geometric interpretation of the frequency-dependent heat capacity of the mixture of polyatomic molecules. This characteristic makes it highly amenable to implementation as a robust real-time sensing/monitoring technique. The results of the algorithm are shown in Figure 3 for a nitrogen-methane mixture. The example shows how the normalized attenuation curve arising from intermolecular exchanges is reconstructed (or synthesized) from data at just two frequencies. The prediction of the first-principles model (dashed line) shows two relaxation times: a main one of approximately 50 µs (= 1/(20,000 Hz)) and a secondary one around 1 ms (= 1/(1000 Hz)). Probing the gas with only two frequencies yields the main relaxation process, around 20,000 Hz, from which the composition of the mixture can be inferred with relatively high accuracy.

Figure 3. The normalized (dimensionless) attenuation as a function of frequency. Dashed line–theoretical prediction; solid line–reconstructed curve.
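
To give a flavor of the two-frequency idea, the following toy sketch assumes a single relaxation process of the form αλ = A·(f/f_r)/(1 + (f/f_r)²); with that assumption, attenuation measured at two frequencies determines both the strength A and the relaxation frequency f_r in closed form. The actual QARS algorithm is more general (it works with the mixture's full frequency-dependent heat capacity), so this is only an illustration of how two measurements can pin down a relaxation peak.

```python
import numpy as np

def single_relaxation(f, A, f_r):
    """Normalized relaxational attenuation (dimensionless) of one process:
    alpha*lambda = A * (f/f_r) / (1 + (f/f_r)**2), peaking at f = f_r."""
    x = f / f_r
    return A * x / (1.0 + x**2)

def invert_two_frequencies(f1, y1, f2, y2):
    """Recover (A, f_r) of a single relaxation process from attenuation
    measured at two frequencies (closed form under the one-process model)."""
    r = y1 / y2
    fr2 = f1 * f2 * (f2 - r * f1) / (r * f2 - f1)
    f_r = np.sqrt(fr2)
    A = y1 * (fr2 + f1**2) / (f1 * f_r)
    return A, f_r

# Synthetic check: "measure" a 20 kHz relaxation at 5 kHz and 50 kHz.
A_true, fr_true = 0.3, 20_000.0
f1, f2 = 5_000.0, 50_000.0
y1, y2 = (single_relaxation(f, A_true, fr_true) for f in (f1, f2))
print(invert_two_frequencies(f1, y1, f2, y2))   # ~ (0.3, 20000.0)
```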

5aPA3 – Elastic Properties of a Self-Healing Thermal Plastic

Kenneth A. Pestka II – pestkaka@longwood.edu
Jacob W. Hull – jacob.hull@live.longwood.edu
Jonathan D. Buckley – jonathan.buckley@live.longwood.edu

Department of Chemistry and Physics
Longwood University
Farmville, Virginia, 23909, USA

Stephen J. Kalista Jr. – kaliss@rpi.edu
Department of Biomedical Engineering,
Rensselaer Polytechnic Institute
Troy, New York, 12180, USA

Popular version of paper 5aPA3
Presented Friday morning, May 11, 2018
175th ASA Meeting, Minneapolis, MN

In our lab at Longwood University we have recently used resonant acoustic and ultrasonic spectroscopy to improve our understanding of a self-healing thermoplastic ionomer composed of poly(ethylene-co-methacrylic acid) (EMAA-0.6Na), both before and after damage [1]. Resonant Ultrasound Spectroscopy (RUS) is a powerful technique ideally suited to characterizing and determining the elastic properties of novel materials, especially those that are only available in small sample sizes or have exotic attributes, and EMAA-0.6Na is one of the more exotic materials [1,2]. EMAA-0.6Na is a thermoplastic capable of autonomously self-healing after energetic impact and even after penetration by a bullet [3].

Material samples, including those composed of EMAA-0.6Na, exhibit normal modes of vibration and resonant frequencies that are governed by their geometry, mass, and elastic properties, as illustrated in Fig. 1. The standard RUS approach uses an initial set of approximate elastic constants as input parameters in a computer program that calculates a set of theoretical resonant frequencies. The elastic constants are then iteratively adjusted, with the calculated frequencies compared to the experimentally measured ones at each step, until the two sets agree; the result is the actual elastic properties of the material.

Figure 1. 3D-model of a self-healing EMAA-0.6Na sample illustrating the first six vibrational modes.
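
A minimal sketch of this inverse step is shown below: elastic constants are adjusted until a forward model reproduces the measured frequencies. The forward model here is a deliberately crude placeholder (mode frequencies scaling as the square root of modulus over density times fixed geometric factors, with illustrative numbers); a real RUS code computes the modes of the actual sample geometry, e.g. by the Rayleigh-Ritz method of Migliori and Maynard [2].

```python
import numpy as np
from scipy.optimize import least_squares

rho = 950.0                                        # kg/m^3, assumed sample density
L = 7.0e-3                                         # m, characteristic sample size
geom = np.array([1.0, 1.35, 1.6, 2.1, 2.4, 2.9])   # placeholder mode factors

def forward_model(c11, c44):
    """Toy predictor of the first six resonant frequencies (Hz)."""
    v = np.sqrt(np.array([c44, c44, c11, c44, c11, c11]) / rho)
    return geom * v / (2.0 * L)

# Pretend "measured" spectrum: forward model with known constants plus noise.
rng = np.random.default_rng(0)
f_meas = forward_model(0.30e9, 0.10e9) * (1 + 1e-3 * rng.standard_normal(6))

def residuals(p):
    return forward_model(*p) - f_meas

fit = least_squares(residuals, x0=[1.0e9, 0.5e9], bounds=(1e7, 1e11))
print("recovered c11, c44 (Pa):", fit.x)           # ~ 0.30e9, 0.10e9
```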

However, EMAA-0.6Na is a relatively soft material, so sample resonances are often difficult to isolate and identify. A partial spectrum from an EMAA-0.6Na sample is shown in Fig. 2. In order to extract individual resonant frequencies, a multiple peak-fitting algorithm was used, as shown in Fig. 2(b).


Figure 2. Undamaged sample behavior: Time dependence of the partial resonant spectrum of an approximately 7 × 7.5 × 1.4 mm³ EMAA sample over 48 hours (a). Lorentzian multi-peak fit to the signal used to extract individual resonances (b). Time evolution of the resonant frequencies at approximately 8.7 kHz (c) and 9.8 kHz (d) for the undamaged EMAA sample. Adapted from [1].
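
The peak-extraction step can be pictured with the short sketch below, which fits a sum of Lorentzians to a synthetic spectrum containing two overlapping resonances near 8.7 and 9.8 kHz. The peak count, amplitudes, widths, and noise level are illustrative, not the measured data of Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, amp, f0, gamma):
    """Single Lorentzian line centered at f0 with half-width gamma."""
    return amp * gamma**2 / ((f - f0)**2 + gamma**2)

def two_peaks(f, a1, f1, g1, a2, f2, g2, offset):
    return lorentzian(f, a1, f1, g1) + lorentzian(f, a2, f2, g2) + offset

# Synthetic "measurement": two overlapping resonances plus noise.
f = np.linspace(8000, 10500, 600)
rng = np.random.default_rng(1)
data = (two_peaks(f, 1.0, 8700, 120, 0.7, 9800, 150, 0.05)
        + 0.02 * rng.standard_normal(f.size))

p0 = [1.0, 8600, 100, 0.5, 9900, 100, 0.0]        # initial guesses
popt, _ = curve_fit(two_peaks, f, data, p0=p0)
print("fitted center frequencies (Hz):", popt[1], popt[4])
```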

Interestingly, the resonant frequencies of undamaged EMAA-0.6Na samples changed over time, as shown in Fig. 2(c) and 2(d), but the observed rate of elastic evolution was quite gradual. However, once the samples were damaged, in this case by a 3 mm pinch punch hammered directly into approximately 1 mm thick samples, dramatic changes occurred in the resonant spectrum, as shown in Fig. 3. Using this approach we were able to determine the approximate healing timescale of several EMAA-0.6Na samples after exposure to damage.


Figure 3. Partial time-dependent spectrum of an approximately 7 × 7.5 × 1.4 mm³ EMAA sample before damage (a) and after damage (b). The Lorentzian multi-peak fits are shown just after damage (c) and over an hour after damage (d). Adapted from [1].

Building on this approach, we have been able to identify a sufficient number of resonant frequencies of undamaged EMAA-0.6Na samples to determine the complete set of elastic constants. In addition, it should be possible to assess the evolution of the EMAA-0.6Na elastic constants for both undamaged and damaged samples, with the ultimate goal of quantifying the material parameters and environmental conditions that most significantly affect the elastic and self-healing behavior of this unusual material.

[1] Pestka II, K. A., Buckley, J. D., Kalista Jr., S. J., and Bowers, N. R., “Elastic evolution of a self-healing ionomer observed via acoustic and ultrasonic resonant spectroscopy,” Sci. Rep. 7, 14417 (2017). doi:10.1038/s41598-017-14321-z

[2] Migliori, A. and Maynard, J. D., “Implementation of a modern resonant ultrasound spectroscopy system for the measurement of the elastic moduli of small solid specimens,” Rev. Sci. Instrum. 76, 121301 (2005).

[3] Kalista, S. J. and Ward, T. C., “Self-Healing of Poly(ethylene-co-methacrylic acid) Copolymers Following Ballistic Puncture,” Proceedings of the First International Conference on Self Healing Materials, Noordwijk aan Zee, The Netherlands: Springer (2007).

4bPA2 – Perception of sonic booms from supersonic aircraft of different sizes

Alexandra Loubeau – a.loubeau@nasa.gov
Structural Acoustics Branch
NASA Langley Research Center
MS 463
Hampton, VA 23681
USA

Popular version of paper 4bPA2, “Evaluation of the effect of aircraft size on indoor annoyance caused by sonic booms and rattle noise”
Presented Thursday afternoon, May 10, 2018, 2:00-2:20 PM, Greenway J
175th Meeting of the ASA, Minneapolis, MN, USA

Continuing interest in flying faster than the speed of sound has led researchers to develop new tools and technologies for future generations of supersonic aircraft. One important breakthrough for these designs is that the sonic boom noise will be significantly reduced compared to that of previous planes, such as the Concorde. Currently, U.S. and international regulations prohibit civil supersonic flight over land because of people’s annoyance with the impulsive sound of sonic booms. In order for regulators to consider lifting the ban and introducing a new rule for supersonic flight, surveys of the public’s reactions to the new sonic boom noise are required. For community overflight studies, a quiet sonic boom demonstration research aircraft will be built. A NASA design for such an aircraft is shown in Fig. 1.


Figure 1. Artist rendering of a NASA design for a low-boom demonstrator aircraft, exhibiting a characteristic slender body and carefully shaped swept wings.

To keep costs down, this demonstration plane will be small and only include space for one pilot, with no passengers.  The smaller size and weight of the plane are expected to result in a sonic boom that will be slightly different from that of a full-size plane.  The most noticeable difference is that the demonstration plane’s boom will be shorter, which corresponds to less low-frequency energy.

A previous study assessed people’s reactions, in the laboratory, to simulated sonic booms from small and full-size planes.  No significant differences in annoyance were found for the booms from different size airplanes.  However, these booms were presented without including the secondary rattle sounds that would be expected in a house under the supersonic flight path.

The goal of the current study is to extend this assessment to include indoor window rattle sounds that are predicted to occur when a supersonic aircraft flies over a house.  Shown in Fig. 2, the NASA Langley indoor sonic boom simulator that was used for this test reproduces realistic sonic booms at the outside of a small structure, built to model a corner room of a house.  The sonic booms transmit to the inside of the room that is furnished to resemble a living room, which helps the subjects imagine that they are at home.  Window rattle sounds are played back through a small speaker below the window inside the room.  Thirty-two volunteers from the community rated the sonic booms on a scale ranging from “Not at all annoying” to “Extremely annoying”.  The ratings for 270 sonic boom and rattle combinations were averaged for each boom to obtain an estimate of the general public’s reactions to the sounds.


Figure 2. Inside of NASA Langley’s indoor sonic boom simulator.

The analysis shows that aircraft size is still not significant when realistic window rattles are included in the simulated indoor sound field.  Hence a boom from a demonstration plane is predicted to result in approximately the same level of annoyance as a full-size plane’s boom, as long as they are of the same loudness level.  This further confirms the viability of plans to use the demonstrator for community studies.  While this analysis is promising, additional calculations would be needed to confirm the conclusions for a variety of house types.

5aPA – A Robust Smartphone Based Multi-Channel Dynamic-Range Audio Compression for Hearing Aids

Yiya Hao – yxh133130@utdallas.edu
Ziyan Zou – ziyan.zou@utdallas.edu
Dr. Issa M S Panahi – imp015000@utdallas.edu

Statistical Signal Processing Laboratory (SSPRL)
The University of Texas at Dallas
800W Campbell Road, Richardson, TX – 75080, USA

Popular Version of Paper 5aPA, “A Robust Smartphone Based Multi-Channel Dynamic-Range Audio Compression for Hearing Aids”
Presented Friday morning, May 11, 2018, 10:15 – 10:30 AM, GREENWAY J
175th ASA Meeting, Minneapolis

Records from the National Institute on Deafness and Other Communication Disorders (NIDCD) indicate that nearly 15% of adults (37 million) aged 18 and over in the United States report some form of hearing loss. Worldwide, 360 million people suffer from hearing loss.

Hearing impairment degrades the perception of speech and audio signals because the audible threshold levels are raised in a frequency-dependent way. Hearing aid devices (HADs) apply prescription gains and dynamic-range compression to improve the user’s audibility without amplifying sounds to uncomfortable loudness levels. Multi-channel dynamic-range compression enhances the quality and intelligibility of the audio output by applying different compression parameters, such as compression ratio (CR), attack time (AT), and release time (RT), to each frequency band.

Increasing the number of compression channels can result in more comfortable audio output when appropriate parameters are defined for each channel. However, using more channels increases the computational complexity of the multi-channel compression algorithm, limiting its application in some HADs. In this paper, we propose a nine-channel dynamic-range compression (DRC) scheme with an optimized structure capable of running in real time on smartphones and other portable digital platforms. Test results showing the performance of the proposed method are also presented. The block diagram of the proposed method is shown in Fig. 1, and the block diagram of the compressor is shown in Fig. 2.

Fig.1. Block Diagram of 9-Channel Dynamic-Range Audio Compression

Fig.2. Block Diagram of Compressor
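
To make the compressor block concrete, here is a minimal single-channel sketch (not the authors’ optimized implementation): a static gain curve defined by a threshold and compression ratio, smoothed by attack/release time constants, and applied to the band signal. A nine-channel system would run one such compressor per band of a filter bank, each with its own CR, AT, and RT.

```python
import numpy as np

def compress_channel(x, fs, threshold_db=-30.0, ratio=3.0,
                     attack_ms=5.0, release_ms=50.0):
    """Return the compressed version of one frequency-band signal x."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)

    # Static curve: above threshold, output level grows 1/ratio as fast.
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db_target = -over * (1.0 - 1.0 / ratio)

    # Attack/release smoothing of the gain (one-pole filters).
    a_att = np.exp(-1.0 / (fs * attack_ms * 1e-3))
    a_rel = np.exp(-1.0 / (fs * release_ms * 1e-3))
    gain_db = np.empty_like(gain_db_target)
    g = 0.0
    for n, target in enumerate(gain_db_target):
        a = a_att if target < g else a_rel        # attack when reducing gain
        g = a * g + (1.0 - a) * target
        gain_db[n] = g
    return x * 10.0 ** (gain_db / 20.0)

# Example: compress a 1 kHz tone burst sampled at 16 kHz.
fs = 16000
t = np.arange(0, 0.2, 1.0 / fs)
tone = 0.5 * np.sin(2 * np.pi * 1000 * t) * (t > 0.05)
y = compress_channel(tone, fs)
```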

Several experiments were carried out, including processing-time measurements of the real-time implementation of the proposed method on an Android smartphone, as well as objective and subjective evaluations; a commercial audio compressor and limiter provided by Hotto Engineering [1], running on a laptop, was used for comparison. The proposed method ran on a Google Pixel smartphone with operating system 6.0.1. The sampling rate was set to 16 kHz and the frame size to 10 ms.

The Hearing in Noise Test (HINT) sentence database, at a 16 kHz sampling rate, was used. The first experiment measured the processing time on the smartphone. Two processing times were measured: the round-trip latency and the algorithm processing time. The Larsen test was used to measure the round-trip latency [2]; the test setup and the average processing-time results are shown in Fig. 3. Perceptual evaluation of speech quality (PESQ) [3] and short-time objective intelligibility (STOI) [4] were used to assess the objective quality and intelligibility of the proposed nine-channel DRC.

The results can be found in Fig. 4. Subjective tests, including a mean opinion score (MOS) test [5] and a word recognition (WR) test, were also conducted; Fig. 5 shows the results. Based on these results, the proposed nine-channel DRC can run efficiently on a smartphone while providing good quality and intelligibility.

Fig.3. Processing Time Measurements and Results

Fig.4. Objective evaluation results of speech quality and intelligibility.

Fig.5. Subjective evaluation results of speech quality and intelligibility.

In summary, the proposed nine-channel dynamic-range audio compression runs on smartphones and provides good quality and intelligibility. All of its parameters can be pre-set based on an individual user’s audiogram. With the proposed compression, multi-channel DRC is no longer limited to costly dedicated hardware such as hearing aids or laptops. The proposed method also provides a portable audio framework that is not limited to the current version of the DRC and can be extended or upgraded for further research.

Please refer to our lab website http://www.utdallas.edu/ssprl/hearing-aid-project/ for video demos; the sample audio files are attached below.

Audio files:

Unprocessed_MaleSpeech.wav

Unprocessed_FemaleSpeech.wav

Unprocessed_Song.wav

Processed_MaleSpeech.wav

Processed_FemaleSpeech.wav

Processed_Song.wav

Key References:

[1] Hotto Engineering, 2018. [Online]. Available: http://www.hotto.de/
[2] Android Open Source Project, “Audio latency measurements,” 2018. [Online]. Available: https://source.android.com/devices/audio/latency_measurements
[3] Rix, A. W., Beerends, J. G., Hollier, M. P., Hekstra, A. P., “Perceptual evaluation of speech quality (PESQ) – a new method for speech quality assessment of telephone networks and codecs,” IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP), 2, pp. 749-752, May 2001.
[4] Taal, C. H., Hendriks, R. C., Heusdens, R., Jensen, J., “An algorithm for intelligibility prediction of time-frequency weighted noisy speech,” IEEE Trans. Audio, Speech, Lang. Process., 19(7), pp. 2125-2136, 2011.
[5] Streijl, R. C., Winkler, S., Hands, D. S., “Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives,” Multimedia Systems, 22(2), pp. 213-227, 2016.

*This work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD) of the National Institutes of Health (NIH) under grant number 5R01DC015430-02. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The authors are with the Statistical Signal Processing Research Laboratory (SSPRL), Department of Electrical and Computer Engineering, The University of Texas at Dallas.

3aPA7 – Moving and sorting living cells with sound and light

Gabriel Dumy – gabriel.dumy@espci.fr
Mauricio Hoyos – mauricio.hoyos@espci.fr
Jean-Luc Aider – jean-luc.aider@espci.fr
ESPCI Paris – PMMH Lab
10 rue Vauquelin
Paris, 75005, FRANCE

Popular version of paper 3aPA7, “Investigation on a novel photoacoustofluidic effect”

Presented Wednesday morning, December 6, 2017, 11:00-11:15 AM, Balcony L

174th ASA Meeting, New Orleans

Amongst the various ways of manipulating suspensions, acoustic levitation is one of the most practical, yet it is not well known to the public. Allowing contactless concentration of microscopic bodies (from particles to living cells) in fluids (whether air, water, blood…), this technique requires only a small amount of power and material. It is thus smaller and less power-consuming than other technologies using magnetic or electric fields, for instance, and does not require any preliminary tagging.

Acoustic levitation occurs when standing ultrasonic waves are trapped between two reflecting walls. If the acoustic wavelength is matched to the distance between the two walls (the gap has to be an integer number of half wavelengths), then the acoustic pressure field forces the particles or cells to move toward the region where the acoustic pressure is minimal (this region is called a pressure node) [1]. Once the particles or cells have reached the pressure node, they can be kept in so-called “acoustic levitation” as long as needed. They are literally trapped in an “acoustic tweezer”. Using this method, it is easy to force cells or particles to form large clusters or aggregates that can be kept in acoustic levitation as long as the ultrasonic field is on.
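
As a back-of-the-envelope illustration of this resonance condition (gap L = n·λ/2, so the drive frequency is f_n = n·c/(2L)), the snippet below uses assumed values for a small water-filled cavity; the numbers are examples, not the authors’ setup.

```python
# Resonance frequencies of a half-wavelength acoustic cavity.
c = 1480.0          # m/s, speed of sound in water (approximate)
L = 370e-6          # m, assumed cavity height (example value)

for n in (1, 2, 3):
    f_n = n * c / (2.0 * L)
    print(f"n = {n}: drive at about {f_n/1e6:.2f} MHz, "
          f"{n} pressure node(s) in the gap")
```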

What happens if we illuminate the aforementioned aggregates of fluorescent particles or cells with a strong monochromatic (single-color) light? If this light is absorbed by the levitating objects, then the previously very stable aggregate explodes.

We can observe that the particles are ejected at great speed from the periphery of the illuminated aggregate. But they are still kept in acoustic levitation, which is not affected by the introduction of light.

We determined that the key parameter is the absorption of light by the levitating objects, because the explosions happened even with non-fluorescent particles. Moreover, this phenomenon exhibits a strong coupling between light and sound, as it requires the two sources of energy to be present at the same time. If the particles are not in acoustic levitation, resting on the bottom of the cavity or floating in the suspending medium, even a very strong light does not move them. Without the adequate illumination, we only observe a classical acoustic aggregation process.

Using this light-absorption property together with acoustic levitation opens the way to more complex and challenging experiments, like advanced manipulation of micro-objects in acoustic levitation or fast and highly selective sorting of mixed suspensions, since we can discriminate particles not only by their mechanical properties but also by their optical ones.

We performed preliminary experiments with living cells. We observed that human red blood cells (RBCs), which strongly absorb blue light, could be easily manipulated by both sound and light; we were able to break up RBC aggregates very quickly. This new effect coupling acoustics and light suggests entirely new perspectives for living-cell manipulation and sorting, like cell washing (removing unwanted cells from a target cell population). Indeed, most living cells absorb light at certain wavelengths and can already be manipulated using acoustic fields. This discovery should allow very selective manipulation and/or sorting of living cells in a very simple and easy way, using a low-cost setup.

Figure 1. Illustration of the acoustic manipulation of suspensions. A suspension is first focused under the influence of the vertical acoustic pressure field, shown in red (a and b). Once in the pressure node, the suspension is radially aggregated (c) by secondary acoustic forces [2]. In (d), when the stable aggregate is illuminated with light of an adequate wavelength, it explodes laterally.

Figure 2 (videos missing): Explosion (red_explosion) of a previously formed aggregate of 1.6 µm red-fluorescent polystyrene beads under green light. Explosion (green_explosion) of an aggregate of 1.7 µm green-fluorescent polystyrene beads under blue light.

Figure 3 (videos missing): Illustration of the separation potential of the phenomenon. We take an aggregate (a) that is a mix of two kinds of polystyrene particles of the same diameter, one absorbing blue light and fluorescing green (b), the other absorbing green light and fluorescing red (c); they cannot be separated by acoustics alone. We expose the aggregate to blue light for 10 seconds. The bottom row shows the effect of this light: the blue-absorbing particles (e) are effectively separated from the green-absorbing ones (f).

Movie (missing): Top view of the regular acoustic aggregation process of a suspension of 1.6 µm polystyrene beads.

[1] K. Yosioka and Y. Kawasima, “Acoustic radiation pressure on a compressible sphere,” Acustica, vol. 5, pp. 167–173, 1955.

[2] G. Whitworth, M. A. Grundy, and W. T. Coakley, “Transport and harvesting of suspended particles using modulated ultrasound,” Ultrasonics, vol. 29, pp. 439–444, 1991.

3aPA3 – Standing Surface Acoustic Wave Enabled Acoustofluidics for Bioparticle Manipulation

Xiaoyun Ding – Xiaoyun.Ding@Colorado.edu
Department of Mechanical Engineering
University of Colorado at Boulder
Boulder, CO 80309

Popular version of paper 3aPA3, “Standing Surface Acoustic Wave Enabled Acoustofluidics for Bioparticle Manipulation”
Presented Wednesday, December 06, 2017, 9:30-10:00 AM, Balcony L
174th ASA meeting, New Orleans

Techniques that can noninvasively and dexterously manipulate cells and other bioparticles (such as organisms, DNAs, proteins, and viruses) in a compact system are invaluable for many applications in life sciences and medicine. Historically, optical tweezers have been the primary tool used in the scientific community for bioparticle manipulation. Despite their remarkable capability and success, optical tweezers have notable limitations, such as complex and bulky instrumentation, high equipment costs, and low throughput. To overcome the limitations of optical tweezers and other particle-manipulation methods, we have developed a series of acoustic-based, on-chip devices (Figure to the left) called acoustic tweezers that can manipulate cells and other bioparticles using sound waves in a microfluidic channel. Cell viability and proliferation assays were also conducted to confirm the non-invasiveness of our technique. The simple structure/setup of these acoustic tweezers can be integrated with a small radio-frequency power supply and basic electronics to function as a fully integrated, portable, and inexpensive cell-manipulation system. Along with my colleagues, I have demonstrated that our acoustic tweezers can achieve the following functions: 1) single cell/organism manipulation [1]; 2) high-efficiency cell separation [2]; and 3) multichannel cell sorting [3].

Acoustic tweezers based single cell/organism manipulation
The acoustic tweezers I developed were the first acoustic manipulation method that can trap and dexterously manipulate single microparticles, cells, and entire organisms (i.e., Caenorhabditis elegans) along a programmed route in two dimensions within a microfluidic chip [1]. We demonstrated that the acoustic tweezers can move a single 10-µm polystyrene bead to write the word “PNAS” and a bovine red blood cell to trace the letters “PSU” (Figure to the right). It was also the first technology capable of touchless trapping and manipulation of Caenorhabditis elegans, a one-millimeter-long roundworm that is one of the most important model systems for studying diseases and development in humans. To the best of our knowledge, this was the first demonstration of non-invasive, non-contact manipulation of C. elegans, a function that is challenging for optical tweezers.

Acoustic tweezers based high-efficiency cell separation
Simple and high-efficiency cell-separation techniques are fundamentally important in biological and chemical analyses such as cancer cell detection, drug screening, and tissue engineering. In particular, the ability to separate cancer cells (such as leukemia cells) from human blood can be invaluable for cancer biology, diagnostics, and therapeutics. We have developed a standing surface acoustic wave based cell-separation technique that can achieve high-efficiency (>95%) separation of human leukemia cells (HL-60) from human blood cells, as well as high-efficiency separation of breast cancer cells from human blood, based on their size difference (Figure to the right). This method is simple and versatile, capable of separating virtually all kinds of cells (regardless of charge/polarization or optical properties) with high separation efficiency and low power consumption.
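
Size matters here because the primary acoustic radiation force on a small particle in a standing wave scales with the particle’s volume, so larger cells migrate toward the pressure node much faster than smaller ones. The sketch below evaluates the standard Yosioka-Kawashima expression with assumed, approximate material and field values; these are illustrative numbers, not the parameters of our devices.

```python
import numpy as np

def radiation_force(radius, p0, wavelength,
                    rho_p=1050.0, beta_p=4.0e-10,     # particle (cell-like)
                    rho_f=1000.0, beta_f=4.5e-10):    # fluid (water-like)
    """Peak primary acoustic radiation force (N) on a particle of given radius (m)
    in a standing wave of pressure amplitude p0 (Pa) and given wavelength (m)."""
    V = 4.0 / 3.0 * np.pi * radius**3
    # Acoustic contrast factor: positive -> particle moves to the pressure node.
    phi = (5 * rho_p - 2 * rho_f) / (2 * rho_p + rho_f) - beta_p / beta_f
    return np.pi * p0**2 * V * beta_f / (2.0 * wavelength) * phi

p0, lam = 0.2e6, 100e-6        # assumed 0.2 MPa amplitude, 100 um wavelength
for r in (2e-6, 5e-6, 10e-6):  # small-to-large cell-scale radii
    print(f"radius {r*1e6:4.1f} um -> force {radiation_force(r, p0, lam):.2e} N")
```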

Acoustic tweezers based multichannel cell sorting
Cell sorting is essential for many fundamental cell studies, cancer research, clinical medicine, and transplantation immunology. I developed an acoustic-based method that can precisely sort cells into five separate outlets (Figure to the right), rendering it particularly desirable for multi-type cell sorting [3]. Our device requires small sample volumes (~100 μl), making it an ideal tool for research labs and point-of-care diagnostics. Furthermore, it can be conveniently integrated with a small power supply, a fluorescent detection module, and a high-speed electrical feedback module to function as a fully integrated, portable, inexpensive, multi-color, miniature fluorescence-activated cell sorting (μFACS) system.

 

References:

1. Xiaoyun Ding, et al., “On-Chip Manipulation of Single Microparticles, Cells, and Organisms Using Surface Acoustic Waves,” Proceedings of the National Academy of Sciences (PNAS), 109, 11105-11109 (2012).
2. Xiaoyun Ding, et al., “Cell separation using tilted-angle standing surface acoustic waves,” Proceedings of the National Academy of Sciences (PNAS), 111, 12992-12997 (2014).
3. Xiaoyun Ding, et al., “Standing surface acoustic wave (SSAW) based multichannel cell sorting,” Lab on a Chip, 12, 4228-4231 (2012). (Cover article)
4. Xiaoyun Ding, et al., Lab on a Chip, 12, 2491-2497 (2012). (Cover article)