Acoustical Society of America
151st Meeting Press Release


21st CENTURY RADIO THEATER,
WHALE BUBBLE NETS, AND
ULTRASONIC VERSION OF THE LASER
AT UPCOMING ACOUSTICS MEETING

FOR IMMEDIATE RELEASE

Melville, New York, May 9, 2006


How does the hearing health of today's adults compare with that of adults thirty years ago? How can the judicious use of echoes disrupt offensive chants at a sporting event? How does the bowing technique of expert violinists differ from that of amateurs?

These and other questions will be addressed at the 151st Meeting of the Acoustical Society of America (ASA), which will take place June 5-9, 2006, at the Rhode Island Convention Center (One Sabin Street, Providence, RI) and The Westin Providence Hotel (One West Exchange St, Providence, RI). More than 1000 papers will be presented. The Acoustical Society of America is the largest scientific organization in the United States devoted to acoustics, the science of sound.


PRESS LUNCHEON AT MEETING

On Tuesday, June 6, ASA will hold a press luncheon, from 11:30 a.m. to 1:30 p.m., featuring speakers on numerous topics that will be presented at the meeting. The speakers and location will be announced in a subsequent release. Reporters interested in attending the luncheon and meeting sessions should return the reply form at the end of this release.


ASA Press Room

We encourage you to visit ASA's "ASA Press Room" http://www.acoustics.org/press before and during the meeting. Starting the week of May 22, the site will contain lay language versions of selected meeting papers. These papers will enable you to cover the meeting, even if you can't leave your desk.


MEDIA INQUIRIES AND ONSITE REGISTRATION

Reporters covering the meeting can receive a complimentary press badge to attend all sessions. Please fill out the reply form if you are interested in attending the meeting and/or receiving a copy of the book of abstracts when it becomes available. For media inquiries during the meeting, please feel free to contact Ben Stein (bstein@aip.org, 301-209-3091), who will be available to facilitate your requests, from contacting speakers at the meeting to obtaining background material on meeting topics.

PROGRAM HIGHLIGHTS

The following items describe some highlights from among the many papers being given at the meeting. Full abstracts of the presentations mentioned below can be viewed at the ASA Meeting Abstracts Database (http://asa.aip.org/asasearch.html) by typing in the last name of the author or the appropriate paper code.

ULTRASOUND ACCELERATES BONE HEALING
THE SAXOPHONE'S DISTINCT SOUND
SUPPRESSING OFFENSIVE SPORTS CHANTS
TELEPHONE SPEECH AND FOOTPRINT SOUNDS
UASER---ULTRASOUND VERSION OF THE LASER
CHINA BLUE MULTIMEDIA EXHIBIT
PET SCANS SHED LIGHT ON UNUSUAL DOLPHIN PHYSIOLOGY
RETHINKING THE ACOUSTICS OF MODERN LIBRARIES
NEW DATA ON HEARING HEALTH
WHALE TAILSLAPPING AND BUBBLE NETS FOR CATCHING HERRING
LISTENING TO KATRINA AND THE TSUNAMI
ALLIGATOR BELLOWING YIELDS SIGNS OF EXTRA-COCHLEAR HEARING
FIFTY YEARS OF SPEECH PRIVACY
FAST SUBSURFACE IMAGING, PORT SECURITY APPLICATIONS POSSIBLE WITH AUTONOMOUS UNDERWATER VEHICLE
NEW AUDIO POSSIBILITIES FOR THE 21ST CENTURY
NEW ENGLAND ACOUSTICS
VIEWING NANOMACHINES WITH A PHOTOACOUSTIC MICROSCOPE

ULTRASOUND ACCELERATES BONE HEALING

Researchers have found that ultrasound seems to accelerate the healing of particularly troublesome bone injuries called delayed union and nonunion fractures. However, the reason for ultrasound's healing effects has remained unclear. Now, in experiments conducted in vitro, James Greenleaf (jfg@mayo.edu) and his colleagues at the Mayo Clinic College of Medicine in Rochester, MN have found that pulsed ultrasound triggers genes inside bone cells to increase production of the protein aggrecan in chondrocytes, a key cell type in the fracture-healing process. These results may have implications for using ultrasound to treat other conditions in the body beyond fracture healing (2aBB3). Other talks cover the potential of ultrasound for characterizing plaque in heart disease (2aBB10) as well as for measuring the health and elasticity of artery walls (2aBB11).

THE SAXOPHONE'S DISTINCT SOUND

The saxophone has an unmistakable sound, not easily confused with other instruments. What makes the sound of the saxophone so distinct? Recording the notes of various saxophones and comparing them with those of other instruments such as the oboe, Jean-Pierre Dalmont of the Universite du Maine in France (Jean-Pierre.Dalmont@univ-lemans.fr) will reveal the acoustical and geometrical features that give the instrument its distinctive sound (2aMU4). To study the bowing technique of violinists, Diana Young of MIT (young@media.mit.edu) will present a custom-built system that uses accelerometers, gyroscopes, and force sensors to measure bowing properties such as force, speed, and distance from the violin bridge. Young will discuss bowing distinctions between novices and experts, as well as differences in style and technique among experts (5aMU12). In his paper "Musical Coffee Mugs, Singing Machines, and Laptop orchestras," Perry Cook of Princeton University (prc@cs.princeton.edu) will present live demonstrations as well as audio and visual examples of new instruments that his group has created over the last ten years (5aMU1).
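
For readers who want to experiment with this kind of timbre comparison, the Python sketch below computes the harmonic spectrum of a recorded note so that two instruments can be compared. It is a generic illustration rather than Dalmont's analysis, and the file names are placeholders.

import numpy as np
from scipy.io import wavfile

def spectrum_db(path):
    # Load a WAV recording and return its magnitude spectrum in decibels.
    rate, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:
        x = x.mean(axis=1)                 # mix stereo down to mono
    windowed = x * np.hanning(x.size)
    magnitude = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(x.size, 1.0 / rate)
    return freqs, 20.0 * np.log10(magnitude / magnitude.max() + 1e-12)

# Hypothetical recordings of the same pitch played on two instruments.
sax_freqs, sax_db = spectrum_db("saxophone_A4.wav")
oboe_freqs, oboe_db = spectrum_db("oboe_A4.wav")
# The relative strengths of the harmonics (peaks at multiples of the
# fundamental) account for part of each instrument's characteristic timbre.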

SUPPRESSING OFFENSIVE SPORTS CHANTS

Few things can disrupt the mood of a sporting event more than an offensive or otherwise inappropriate chant. Now, human-factors engineers have introduced a novel approach for disrupting undesired chants at sporting events. Noting that people find it difficult to speak coherently when they hear a loud, delayed echo of their own voices, Sander J. van Wijngaarden (sander.vanwijngaarden@tno.nl) and Johan S. van Balken of TNO Human Factors in the Netherlands reasoned that such "delayed auditory feedback" would have similar effects on a group of people. By broadcasting an artificially delayed version of an offending chant, the researchers showed that they can severely disrupt the timing of others trying to join in the chant. For the approach to succeed, the feedback signal had to be at least as loud as the original chant, and the delay had to be greater than 0.2 seconds. One challenge to implementing the technique in real-world conditions, the researchers report, is that the loud echo currently required can lead to unstable feedback loops that could create unpleasant effects of their own. (3aPP9)
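
The essence of the approach is simply to play back a sufficiently loud copy of the chant delayed by more than 0.2 seconds. The Python sketch below shows one way such a delayed-feedback signal could be generated from a recorded chant; it is an illustration only, not the TNO system, and the file name and parameter values are assumptions.

import numpy as np
from scipy.io import wavfile

DELAY_S = 0.25    # delay greater than the reported 0.2-second threshold
GAIN = 1.0        # feedback at least as loud as the original chant

rate, chant = wavfile.read("chant.wav")       # hypothetical chant recording
chant = chant.astype(np.float64)

delay_samples = int(DELAY_S * rate)
feedback = np.zeros_like(chant)
feedback[delay_samples:] = GAIN * chant[:-delay_samples]   # delayed copy

# What the crowd would hear: the chant plus its loud, delayed echo.
mix = chant + feedback
mix /= np.max(np.abs(mix))                    # normalize to avoid clipping
wavfile.write("chant_with_feedback.wav", rate, (mix * 32767).astype(np.int16))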

TELEPHONE SPEECH AND FOOTPRINT SOUNDS

The audio in telephone speech leaves much to be desired, especially for those who suffer from hearing loss. Present-day telecommunications infrastructure limits the frequency range of telephone speech to 300-3400 Hz, whereas conversational speech falls mostly within the range of 50-8000 Hz. The resulting telephone speech suffers a considerable loss in sound quality, in terms of both naturalness and intelligibility. Employing a technique called bandwidth extension (BWE), Harsha M. Sathyendra (hsathyen@cnel.ufl.edu), in collaboration with Ismail Uysal (uysal@ufl.edu), both of the University of Florida, will present a technique for restoring the missing low- and high-frequency information. This work has the potential to improve telephone audio in the future (3aSC5). In efforts to eliminate the wind noise that plagues hearing-aid users when they are outdoors, Gary Elko (gwe@mhacoustics.com) and Jens Meyer of mh acoustics LLC in Summit, NJ will present an "electronic windscreen," a signal-processing algorithm that reduces wind noise picked up by the microphones in hearing aids (5aSP4). In work that might be useful for detecting individuals in homeland security applications, Alexander Ekimov of the National Center for Physical Acoustics at the University of Mississippi (aekimov@olemiss.edu) will present a system for identifying the vibration signature of human footsteps. (4aSAb1)
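
To hear what the telephone channel does to speech, one can band-limit a wideband recording to 300-3400 Hz. The Python sketch below does exactly that, as a simple illustration of the problem BWE addresses; it is not the authors' bandwidth-extension algorithm, and the file name is a placeholder.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, speech = wavfile.read("wideband_speech.wav")   # hypothetical 16-kHz file
speech = speech.astype(np.float64)

# Fourth-order Butterworth band-pass approximating the telephone channel.
sos = butter(4, [300.0, 3400.0], btype="bandpass", fs=rate, output="sos")
narrowband = sosfiltfilt(sos, speech, axis=0)

# Listening to the result reveals the loss of naturalness the BWE work targets.
scaled = narrowband / np.max(np.abs(narrowband))
wavfile.write("telephone_band_speech.wav", rate, (scaled * 32767).astype(np.int16))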

UASER---ULTRASOUND VERSION OF THE LASER

Scientists at the University of Illinois at Urbana-Champaign have built an array of circuits, each of which can detect incident ultrasound waves and produce a train of sound waves of its own in synchrony with the incident waves. By carefully tuning the circuit parameters, one can achieve the acoustic equivalent of a laser, namely stimulated emission and amplification. One of the Illinois researchers, Richard Weaver (217-333-3656, r-weaver@uiuc.edu), says that, like laser light, the acoustic output of his device is extremely coherent (all the waves are coordinated) and mono-frequency. This UASER (ultrasound amplification by stimulated emission of radiation) is an example of classical, not quantum, physics. Weaver said he anticipates various applications for the device, such as a high-sensitivity detector of changes in enclosed acoustic chambers. Also, with its longer wavelengths and more convenient frequencies, the acoustic analog might be useful for modeling the dynamics of some optical lasers, which can otherwise be difficult to study. (4pPA1, 4pPA3)
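
The threshold behavior at the heart of any laser-like device, in which gain wins out over loss and the output then settles at a steady level, can be illustrated with a textbook saturable-gain model. The Python sketch below is only that generic toy, with invented parameter values; it is not a model of the Illinois circuits.

def simulate(gain0, loss, a0=1e-6, a_sat=1.0, dt=0.01, steps=20000):
    # Euler integration of da/dt = (gain0 / (1 + (a/a_sat)**2) - loss) * a,
    # a standard saturable-gain model: below threshold the amplitude dies
    # away, above threshold it grows and then saturates.
    a = a0
    for _ in range(steps):
        a += dt * (gain0 / (1.0 + (a / a_sat) ** 2) - loss) * a
    return a

print("gain below loss:", simulate(gain0=0.8, loss=1.0))   # decays toward zero
print("gain above loss:", simulate(gain0=2.0, loss=1.0))   # saturates near 1.0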

CHINA BLUE MULTIMEDIA EXHIBIT

China Blue is ASA's invited artist at the Providence meeting. She will present innovative sound works and paintings in an exhibit open to all meeting registrants on Thursday, June 8, from 4:00-9:00 p.m. in rooms 550 A&B. Based in New York City, China Blue creates artworks in a variety of media focusing on sound and how it shapes space. Her current works are based on what she terms "urban bioacoustics," recording activities from day-to-day life to examine acoustical energy. The current show will include two sound pieces. "Skratch" uses spatial recording and post-production processing to capture and manipulate the acoustic elements of a billiards game. "Mikey vs. Fabio," a study of the acoustics of a ping-pong game, immerses the listeners in both the spatialized dynamics of the ball and the human conversations punctuated by the play. She will also display a number of paintings of her visual interpretation of acoustic flow in different environments. (For video and images, go to http://www.chinablueart.com.)

PET SCANS SHED LIGHT ON UNUSUAL DOLPHIN PHYSIOLOGY

The dolphin brain is roughly the same size as the human brain, but it contains larger regions devoted to processing sound. Dolphins even have an additional component of the brain, called the paralimbic lobe, which does not occur in humans or in mammals other than porpoises and whales. Now, Sam Ridgway and colleagues at the University of California School of Medicine and the SPAWAR Systems Center in San Diego have imaged this traditionally inaccessible part of the brain using positron emission tomography (PET), a noninvasive technique conventionally used in medical settings. As the dolphins listened to acoustic pulses and tones, the researchers found interesting patterns of activity in the paralimbic lobe and other auditory centers of the dolphin brain (4pAO4). In a second study, the researchers report evidence for another unique aspect of dolphin physiology, an acoustic version of the tapetum. The tapetum is a reflective membrane in the back of the eyes of many mammals such as dogs, cats, and dolphins (but not humans). The visual tapetum helps improve vision in low-light conditions by giving the retina a second chance to detect light. Playing acoustic tones to the dolphin during another PET scan, the researchers detected increased activity in tissue around the air sinuses, suggesting that the air-filled sinuses reflect sound so as to provide an "acoustic tapetum" that helps dolphins detect sound (2aUW9).

RETHINKING THE ACOUSTICS OF MODERN LIBRARIES

The library was once a sanctuary of quiet---but modern technology has irreversibly transformed its soundscape. At session 4pAAc, acoustical consultants and other researchers will discuss the new acoustical landscape and describe ways to manage the acoustics. "Printers, copiers, wireless computers, and espresso machines all contribute to the soundscape of typical libraries," writes Acentech's Benjamin Markham, who will explore the implications of wireless technology for library acoustics (4pAAc5). Presenting architectural designs of modern public libraries in Philadelphia, Vancouver and Salt Lake City, Isaac Franco of Moshe Safdie and Associates in Somerville, MA will demonstrate the trend of making libraries into civic focal points, containing large light-filled gathering spaces that encourage community interaction (4pAAc6). Many places of worship have also undergone changes because of technology. Another set of sessions (4aAAb and 4pAAb) explores the acoustics in almost 100 worship spaces.

NEW DATA ON HEARING HEALTH

Investigating the hearing levels of today's adults and comparing the data to those collected 35 years ago, William J. Murphy (wjm4@cdc.gov) and his colleagues at the National Institute for Occupational Safety and Health in Cincinnati, OH will report the results of the National Health and Nutrition Examination Survey (NHANES), which tested a nationally representative sample of over 5000 individuals in the US population from 1999-2004. The median hearing levels of persons aged 20-69 have changed little from those found in the NHANES conducted from 1971-1975, which examined adults 25-74 years of age. In addition, the recent NHANES data indicate that, across ethnic groups, non-Hispanic blacks have the best hearing and non-Hispanic whites have the poorest hearing thresholds (i.e., the softest sound that an individual with average hearing in this group can detect is louder than the softest sound that average individuals in the other groups can hear), particularly among males and in the older age groups (2aPPb5). Does noise pose equal risks to all listeners? "After 40+ years of study, we can conclude that there is a great range of individual differences in susceptibility to noise-induced hearing loss (NIHL)," says Donald Henderson of the Center for Hearing and Deafness at the State University of New York at Buffalo. Henderson will review the different acoustical, environmental, and biological factors that can lead to these variations (2aPPb1). Lynne Marshall of the Naval Submarine Medical Research Laboratory (marshall@nsmrl.navy.mil) in Groton, CT will discuss the possibility of measuring otoacoustic emissions (sounds produced inside the ear) to indicate susceptibility to noise-induced hearing loss (2aPPb3).

WHALE TAILSLAPPING AND BUBBLE NETS FOR CATCHING HERRING

Killer whales in Norway and Iceland work cooperatively to perform underwater tail slaps that stun herring prey. Icelandic killer whales employ an additional tactic, as discovered by Lee Miller of the University of Southern Denmark (lee@biology.sdu.dk) and his colleagues. Right before the tail slap, the Icelandic whales emit a three-second, 680-Hertz call, which could possibly herd the herring into tighter schools for easier capture but conveniently falls outside of the killer whale's own hearing range (4aAO2). In the Gulf of Alaska, humpback whales work in groups to capture herring, with one whale broadcasting sound at a herring school to drive it to the water surface. A second whale blows a "net" of bubbles to encircle the rising school. As Orest Diachok of the Johns Hopkins University Applied Physics Laboratory (orest.diachok@jhuapl.edu) in Laurel, MD reports, during this process one or more of the whales emits long "trumpet" tones at several different frequencies, one of which resonates with, and is attenuated (absorbed) by, the swim bladders of the herring (analogous to x-rays being absorbed by water in human lungs). Diachok proposes that the whales might use this phenomenon to infer the length and species of the fish and the size of the school (4aAO5). What whales may do naturally is something that fisheries managers covet: Diachok and his collaborators have been working toward developing artificial acoustic systems that work in much the same way to detect and monitor fish populations.
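
This approach relies on the fact that a gas-filled swim bladder has an acoustic resonance whose frequency depends on its size and on the fish's depth. As a rough illustration only (not Diachok's actual model), the Python sketch below applies the textbook Minnaert formula for a resonating gas bubble, with an assumed bladder radius.

import math

def minnaert_frequency(radius_m, depth_m, gamma=1.4, rho=1025.0,
                       p_atm=101325.0, g=9.81):
    # Textbook Minnaert estimate: f = sqrt(3*gamma*P/rho) / (2*pi*R),
    # where P is the ambient pressure at the bladder's depth.
    pressure = p_atm + rho * g * depth_m
    return math.sqrt(3.0 * gamma * pressure / rho) / (2.0 * math.pi * radius_m)

# A gas pocket with an equivalent radius of ~5 mm (assumed, for illustration)
# resonates at several hundred hertz near the surface; the resonance rises
# as the fish swims deeper and the gas is compressed.
for depth in (1.0, 10.0, 50.0):
    print(f"depth {depth:5.1f} m -> {minnaert_frequency(0.005, depth):6.0f} Hz")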

LISTENING TO KATRINA AND THE TSUNAMI

Underwater listening stations designed for other purposes, such as detecting signs of illicit underground nuclear tests, have obtained a wealth of useful information on the 2004 Sumatra earthquake and subsequent tsunami. These hydrophones captured various kinds of signals, including T waves, slow-moving acoustic waves that manifest themselves as rumblings of the ocean floor. Emile Okal of Northwestern University (emile@earth.northwestern.edu) will discuss the contributions of such data to investigating the earthquake's source (1eID1). Catherine de Groot-Hedlin of the Scripps Institution of Oceanography in San Diego (chedlin@ucsd.edu) will present acoustical evidence that the rupture along the fault associated with the earthquake proceeded in two phases: initially moving northwest at approximately 2.4 km/s, and then slowing to 1.5 km/s at 600 km from the epicenter (2aAO1). Other stations recorded some of the loudest and longest T waves ever observed, lasting for approximately 15 minutes (2aAO3 and 2aAO4). According to Maya Tolstoy (tolstoy@ldeo.columbia.edu) of Columbia University's Lamont-Doherty Earth Observatory, T-wave data offer the potential of rapidly assessing the size, location, and geographic extent of large underwater earthquakes, and hence their potential to create tsunamis (2aAO2). Peter Gerstoft of Scripps (gerstoft@ucsd.edu, www.mpl.ucsd.edu/people/gerstoft) will report that seismic stations in Southern California were able to detect signs of Hurricane Katrina from thousands of miles away (2aAO6). As the hurricane traveled over the Gulf of Mexico and made landfall, the resulting ocean waves generated seismic waves that propagated to a depth of 1100 km. Such observations indicate that even distant seismic stations may have useful functions in monitoring and studying hurricanes.

ALLIGATOR BELLOWING YIELDS SIGNS OF EXTRA-COCHLEAR HEARING

Recent evidence indicates that the sacculus, a primordial inner-ear organ found in all vertebrates including humans, plays a role in hearing and communication. This means that hearing in humans and many other species involves more than the cochlea, the inner-ear structure traditionally associated with auditory function. Crocodilians are interesting animals to study because they are aquatic, highly vocal, and have a large sacculus; alligators in particular are among the most vocal species. To investigate this, Neil McAngus Todd of the University of Manchester (neil.todd@manchester.ac.uk) carried out an acoustic analysis of alligator vocal displays recorded simultaneously in air and water in order to estimate the role of the sacculus in crocodilian acoustic communication. The analysis showed that most of the power in adult vocalizations in water is very low pitched, in the near-infrasound range of 30-50 Hz, and very loud, up to 140 dB. Because these bellowing sounds lie outside the optimal hearing range of the cochlea, the cochlea may not be the primary receptor during crocodilian water-borne communication. Todd estimated that the sacculus could detect the 30-50 Hz sounds over 30 m in water. The sacculus, he says, may be an ideal mechanism for mediating vocal courtship responses in these animals (5aABa6). In another paper (5aABa1), Todd will review recent evidence suggesting that the sacculus plays an auditory role in all vertebrate species. Other talks in the session explore its function in fish (5aABa2), mammals (5aABa3), and humans (5aABa4).

FIFTY YEARS OF SPEECH PRIVACY

Researchers have been working for fifty years to measure and maximize speech privacy, an issue that is more important than ever because of new patient privacy rights, security concerns, and workplace issues. On Wednesday, June 7, internationally known leaders in the field (including Leo Beranek, winner of the 2003 President's National Medal of Science) will gather for the first-ever scientific colloquium on the subject of international privacy laws and speech privacy. The event will end with a dinner featuring a keynote speech on the topic (for more information, contact David Sykes at dsykes@speechprivacy.org). During the day-long symposium (sessions 3aNS and 3pNS), Rein Pirn of Acentech, Inc. in Cambridge, MA (pirus@rcn.com) will review a systematic approach to speech privacy in open-plan offices (often referred to as "cubicles") that was first proposed 36 years ago, highlight the lessons learned, and identify the shortcomings in current design practice (3aNS5). Bradford Gover and John Bradley of the National Research Council of Canada will present a new procedure for accurately measuring the degree of speech privacy of a closed office or meeting room, as well as a microphone array that can quickly locate potential sound leaks or "hot spots" in a room (3pNS1 and 3aNS7). David Sykes of Remington Partners will explain how the landscape for speech privacy in the US has been dramatically altered by new laws, such as the Patriot Act and HIPAA, the healthcare law that stipulates protection of speech privacy (3aNS8).

FAST SUBSURFACE IMAGING, PORT SECURITY APPLICATIONS POSSIBLE WITH AUTONOMOUS UNDERWATER VEHICLE

Using a small, torpedo-shaped autonomous underwater vehicle (AUV) towing an oil-filled hose containing hydrophones (underwater microphones), researchers at Boston University and the Woods Hole Oceanographic Institution (Jason Holmes, jholmes@bu.edu) have demonstrated a fast, efficient technique for imaging the sub-surface sediments in shallow water, where sound interacts strongly with the bottom. Measuring the acoustic properties of the first ~50 m beneath the sea floor is both technically challenging and important for various naval and civilian purposes, e.g., better low-frequency underwater communication. The researchers made measurements in both Buzzards Bay and Nantucket Sound, which are close to the Woods Hole Oceanographic Institution. In addition to performing rapid surveys of sub-floor properties over a large area of shallow water, the system can also work as a surveillance vehicle for defense and homeland security. For example, while measuring the properties of the Nantucket Sound ocean floor, the system was also able to track the movements of the Nantucket Ferry as it cruised back and forth between Nantucket and Cape Cod. This capability demonstrates the potential for the robotic vehicle to find and identify unauthorized sea vessels for better port security (3aUW12).

NEW AUDIO POSSIBILITIES FOR THE 21ST CENTURY

Making use of some of the latest recording and playback technology, presenters at session 3aAA will describe a variety of "composed spaces" that, in some cases, would have been impossible to reproduce in typical audio systems a few years ago. First performed live in the early 1990s with many special effects and new instruments, "Flowers and Wreaths" provides an audio setting for the poems of 20th century French writer Jacques Prevert. With today's technology, it is now possible to record and reproduce this work on present-day surround sound systems, showing the possibilities of a modern-day radio theater. James Moses of Brown University will present "Passing Landscapes," a 15-minute piece that uses modern synthesis to endow environmental recordings with harmonic and melodic content. In his piece "Vis-a-vis," Joseph Rovan, also of Brown, uses computer synthesis to transform the voice of a solo singer into a surround-sound mix of various electronic sounds. In "human space factory," Hans Tutschku of Harvard creates distinguishable acoustical spaces that play simultaneously in the same recording. Eric Chasalow of Brandeis will show how electro-acoustic music has made possible new forms of counterpoint. Twelve of these composed pieces will be performed in their entirety on Thursday morning (4aAAa) and repeated on Thursday afternoon (4pAAa) of the meeting. Also of note are sessions on surround sound (2aAAa and 2pAAa) featuring presentations by Grammy winners George Massenburg (Billy Joel, Linda Ronstadt) and Bob Ludwig (Mariah Carey, Dire Straits).

NEW ENGLAND ACOUSTICS

Several papers at the meeting deal with acoustical phenomena in New England. Bill Hartmann of Michigan State (hartmann@pa.msu.edu) will present a study of the fascinating acoustical effects that occur inside Boston's Mapparium, a 30-foot-diameter stained-glass globe that visitors can enter. Having conducted extensive listening and recording experiments within the Mapparium prior to its renovations of 2002, Hartmann and Boston University colleagues Steve Colburn and Gerald Kidd have analyzed some of its remarkable acoustical effects, such as sounds that change in elevation as a listener walks, and auditory illusions in which a sound on a listener's left is heard on the right (1aAA11). To optimize the ability of the New England Seismic Network to detect local earthquakes of the smallest possible magnitude, John E. Ebel of Boston College's Weston Observatory (ebel@bc.edu) has developed an automated event detection and identification system that has greatly increased the sensitivity for detecting and locating small earthquakes and quarry blasts in the New England region (2pSP12). Exploring the recent conversions of century-old New England mill buildings into multi-residential apartments and condominiums, acoustical consultant Benjamin Markham of Acentech Inc. in Cambridge, MA (bmarkham@acentech.com) will describe pitfalls---and solutions---for chopping up mill buildings into acoustically acceptable living spaces (1pAA5).

VIEWING NANOMACHINES WITH A PHOTOACOUSTIC MICROSCOPE

Activating and imaging nano-electro-mechanical systems (NEMS) usually works like this: cause a tiny limb of matter such as a membrane or a slender silicon beam to vibrate and then watch what happens to it, whether for the purpose of sensing magnetic fields or detecting nearby nanoparticles. Todd Murray (twmurray@bu.edu) and Kamil Ekinci (ekinci@bu.edu) and their colleagues at Boston University have built a photoacoustic microscope that first excites tiny vibrations (acoustic waves set in motion by local heating) in a micron-long plank-shaped beam of silicon (held at the ends by clamps) and then aims laser light at the faintly undulating surface. The reflected light is then detected. The result: tiny acoustic pulses in the beam, with an amplitude of mere picometers (trillionths of a meter), can be monitored. Unlike other acoustic microscopy ventures, the Boston approach does not require a water interface between the transducer (the energy source exciting motion) and the specimen. Furthermore, the displacements of the beam's surface can be measured even though the device is smaller than the spot size of the optical detection probe. Two other advantages of the Boston University design: the readout is remote and the signal-to-noise ratio (which ultimately limits the space and time resolution of a microscope) is optimized by narrowing the frequency sensitivity of the detection system to match the bandwidth of the excited beam vibrations (2pBB6).
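
The last point, narrowing the detection bandwidth around a known excitation frequency to pull a tiny signal out of noise, is the familiar principle behind lock-in detection. The Python sketch below illustrates that general principle with invented numbers; it is not the Boston University instrument's actual signal chain.

import numpy as np

rng = np.random.default_rng(0)
fs = 1_000_000.0                      # sample rate, Hz
t = np.arange(0, 0.05, 1.0 / fs)      # 50-millisecond record
f_drive = 10_000.0                    # known excitation frequency, Hz
amplitude = 1e-3                      # tiny oscillation amplitude (arb. units)

signal = amplitude * np.sin(2 * np.pi * f_drive * t)
noisy = signal + rng.normal(scale=0.02, size=t.size)   # noise ~20x the signal

# Multiply by in-phase and quadrature references at the drive frequency and
# average; only components in a very narrow band around f_drive survive.
i = np.mean(noisy * np.sin(2 * np.pi * f_drive * t))
q = np.mean(noisy * np.cos(2 * np.pi * f_drive * t))
recovered = 2.0 * np.hypot(i, q)

print(f"true amplitude {amplitude:.2e}, recovered ~{recovered:.2e}")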


REPORTER'S REPLY FORM

Please return the REPLY FORM if you are interested in attending the meeting or receiving additional information.

