Melville, New York, October 1, 2003
How might subtle echoes inside the ear shed light on the origins of attention deficit disorder? How does a person's voice reveal clues about his or her age? Does speech-to-text software perform better if it's trained with "baby-talk"?
These and other questions will be addressed at the 146th Meeting of the Acoustical Society of America, to be held November 10-14, 2003 at the Renaissance Austin Hotel, 9721 Arboretum Boulevard, 512-343-2626, in Austin, Texas. Over 700 papers will be presented. The ASA is the largest scientific organization in the United States devoted to acoustics, with over 7000 members worldwide.
COOL TRUMPET TREATMENT INSIGNIFICANT
THE CASE OF THE ACOUSTICAL SOLUTION
DO PEOPLE'S VOICES REVEAL THEIR AGES?
ULTRASOUND AND LIGHT TEAM UP FOR EARLY CANCER DETECTION
TREATING VOICE-RECOGNIZER SOFTWARE LIKE A BABY
SOUND AFFECTS OUR PERCEPTION OF FORCE
COULD THE SOUND OF RAMS' HORNS KNOCK DOWN JERICHO'S WALLS?
A SMART HEARING PROTECTOR
EAR SOUNDS MAY PROVIDE CLUES TO ATTENTION-DEFICIT/HYPERACTIVITY DISORDER
DOES CLASSICAL MUSIC SOUND BETTER AT A LOWER PITCH?
ACOUSTICS OF THE HUMAN FORM
SONAR TECHNIQUES FOR HELPING FISHERIES
ELECTRONIC ARCHITECTURE
MUSICAL SIMULATIONS GET HELP FROM FEATHERED FRIENDS
MAKING HOSPITAL ROOMS MORE ACOUSTICALLY PLEASANT
GOLDFISH SHARE A "FUNDAMENTAL" SIMILARITY WITH HUMANS
TIME REVERSAL FOR COMMUNICATIONS IN HOSTILE ENVIRONMENTS

These items were prepared by Ben Stein, James Riordon, Martha Heil, Phil Schewe, and Emilie Lorditch of the American Institute of Physics in cooperation with the Acoustical Society of America.
Some trumpet players are convinced that cryogenically treating their instruments can improve the sounds they produce. The treatments involve cooling the instruments down to -195 degrees Celsius (-319 degrees Fahrenheit) and then letting them slowly return to room temperature. Despite many testimonials from musicians who believe in the practice, it has not been entirely clear how or why the treatment would affect a trumpet's sound. Now Jesse Jones IV (email@example.com) and Chris Rogers of Tufts University (firstname.lastname@example.org) have analyzed ten trumpets, half of which were cryogenically treated. The researchers found no statistically significant difference between treated and untreated instruments. In fact, differences from player to player and instrument to instrument overshadow any changes that cryogenic treatment might have produced. (Paper 2pMUa6)
What good does it do to put a scientist in the witness box? Two sessions, chaired by acoustical consultant Jack E. Randorff (806-829-2521), will feature papers on forensic acoustics, the intersection of acoustics and law. The sessions, on Wednesday morning and afternoon, will explore topics such as why acoustic scientists and engineers should provide expert testimony (3aAA1), the results of specific cases, and psychoacoustics, the human perception of sound (3pAA2, 3pAA3). Several case studies of trials that relied on acousticians' expert testimony, including "The sounds of a murder" (3aAA2), "Could the gunshot be heard?" (3pAA4), and the uncovering of evidence of employee fraud (3aAA4), have plots worthy of Agatha Christie or Dr. Watson. The sessions will illustrate how acoustic expertise can apply to legal cases.
In a series of ongoing experiments, researchers at the University of Florida are exploring the acoustical properties of the aging voice and how people judge a speaker's age based on a voice's acoustical qualities. Such knowledge can be applied to forensics (courts can better judge the reliability of witnesses who estimate the age of a suspect they hear but do not see), medicine (doctors can better separate the effects of normal aging from the symptoms of diseases, such as Parkinson's, that may alter patients' voices), and even drama (actors can better learn how to sound old or young). In the first phase of the study, Rahul Shrivastav (email@example.com) and colleagues identified the acoustical parameters (such as the rate of speech) that signal a person's age. In the second phase, the researchers systematically shifted these parameters to see if listeners would perceive a corresponding shift in age. In this study, the researchers contrasted standard speech samples of 16 males aged 70-90 years with those of 14 males aged 20-33 years. The researchers found significant differences between the younger and older groups in such features as sentence duration, word duration, consonant-vowel ratios, number of pauses, pause duration, and fundamental frequency. Then the researchers synthesized the voices and modified their acoustical properties so that young voices sounded older and old voices younger. According to the authors, preliminary data suggest that these modifications are especially effective at making older individuals sound like younger men. The researchers have also begun similar studies with female voices. (2aSC14)
A new imaging technique can detect cancers and map brain function by examining hemoglobin. Combining optical and ultrasonic techniques, the hybrid imaging method yields clearer pictures of large areas of tissue. Developed by Lihong Wang of Texas A&M University (LWang@tamu.edu) and colleagues, the new imaging technique overcomes the individual drawbacks of non-ionizing electromagnetic waves (limited spatial resolution in tissue) and ultrasound waves (limited contrast). In addition, light waves are safer than traditional x-rays, which can "ionize," or remove electrons from, atoms in cells. (3aBB2)
Once a science-fiction dream, voice-recognizer software now allows users to dictate words to a computer and see them appear on the screen. But in practice, computers are less than perfect at converting speech into text. Striving to improve the performance of voice-to-text software, Katrin Kirchhoff (firstname.lastname@example.org) and Steven Schimmel (email@example.com) of the University of Washington trained automatic speech recognizers with infant-directed, rather than adult-directed, speech. Infant-directed speech, or "motherese," often exaggerates the pronunciation of important language sounds, presumably so that infants can more easily learn the key sounds of their native language. In their study, the researchers found that some speech recognizers trained on infant-directed (ID) speech performed significantly better than those trained on adult-directed (AD) speech. Although the ID-trained speech recognizer suffered performance losses when trying to convert AD test speech into text, it performed better than an AD-trained recognizer attempting to transcribe ID test speech. According to the authors, the results suggest that speech with over-emphasized features may in certain cases constitute better training material for speech recognizers (4pSC10).

We take for granted our ability to hear the difference between talking and singing. David Gerhard of the University of Regina (firstname.lastname@example.org) will present an automated computer model designed to distinguish between these two forms of vocal expression (4aSP5).
Video and computer games increasingly utilize "force feedback," in which the game controller applies forces, vibrations, or rumble effects in response to a player's actions or onscreen events. Now, researchers in Germany (M. Ercan Altinsoy, Ruhr-Universität Bochum, Germany, Ercan.Altinsoy@ruhr-uni-bochum.de) are exploring an interesting connection between a player's tactile and auditory perceptions when using these controllers. In recent experiments, subjects hit a special force-feedback surface that both played a drum sound and produced a tactile response when struck. The researchers investigated how the loudness of the drum sound affected players' perception of the strength of the tactile response from the force-feedback unit. Interestingly, when the researchers increased the loudness of the drum sound without increasing the force from the feedback unit, subjects still perceived a stronger force from the feedback device. The results contribute to researchers' knowledge of how to effectively combine tactile and auditory information in virtual environments. (2aPP5)
In a popular Biblical story, the Old Testament figure Joshua breaks down the walls of Jericho by ordering seven priests to trumpet their rams' horns and commanding all the men in his army to shout at the same time. Consulted by the producers of an upcoming Discovery Channel "Ancient Evidence" series on Biblical stories, acoustical consultant David Lubman (email@example.com) has calculated the sound power needed to damage the city of Jericho's stone or mud-brick walls. Such a feat, he found, would require acoustical power almost a million times greater than a generous estimate of the power produced by the horns and the sounds of an estimated 300 shouting men. Viewed from a scientific perspective, Lubman says, such sounds plausibly could have emboldened the army and engendered dread in their enemies. Having found many other Biblical references to the ram's horn, or "shofar," Lubman will give a talk on the acoustics of the ancient instrument, which continues to be used today on Jewish holidays. The shofar is unique, Lubman says, in having retained its primitive form. Lubman speculates that the ram's horn originally had mundane uses by shepherds, for gathering their flocks and for signaling over long distances. Gradually, he says, the instrument came to signify the shepherd culture itself. Its subsequent use in religious rituals, Lubman says, froze its form. Lubman will acoustically analyze sound recordings of modern shofars played by experienced musicians. (2aMU7)
Ed Nykaza of Penn State (firstname.lastname@example.org) will present the Exposure Smart Hearing Protector (ESP), a device that measures personal noise exposure levels in an effort to prevent noise-induced hearing loss. The device consists of left and right earplugs containing small microphones and a dosimeter that measures cumulative noise exposure. A warning light indicates to the ESP user that the instantaneous noise exposure exceeds a safe level, suggesting that the worker may need to wear the device more effectively. If the noise level in the user's ear canal drops to a safe level, the warning light goes off. More importantly, the device measures the user's cumulative daily noise exposure, covering both periods with the plugs in the user's ear canals (protected) and periods with the plugs removed (unprotected). The user downloads the dosimeter data via infrared into a PC at the end of each work shift. These measurements are intended to be performed daily, thereby establishing a complete noise exposure history for the worker. The information is then formatted, analyzed, and stored in a software program, which helps management easily monitor workers' daily noise exposures and intervene when necessary. Together, the ESP and software program allow the user to modify his or her behavior at work to maintain a safe personal noise exposure. Nykaza will demonstrate the device and software. (2aPP8)
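The arithmetic behind a cumulative noise dose can be sketched briefly. The paper does not specify the ESP's actual algorithm, so the snippet below assumes the standard OSHA exchange-rate formula (permissible time halves for every 5 dB above a 90 dBA criterion) purely as an illustration:

```python
def permissible_hours(level_dba, criterion=90.0, exchange=5.0):
    # OSHA rule of thumb: allowed exposure time halves for every
    # `exchange` dB the level rises above the `criterion` level
    return 8.0 / (2 ** ((level_dba - criterion) / exchange))

def daily_dose(exposures):
    # exposures: list of (level in dBA, duration in hours) pairs;
    # returns the cumulative daily dose as a percentage (100% = limit)
    return 100.0 * sum(hours / permissible_hours(level)
                       for level, hours in exposures)

# Example: 4 hours at 90 dBA plus 4 hours at 95 dBA
dose = daily_dose([(90, 4), (95, 4)])  # 100 * (4/8 + 4/4) = 150%
```

A dosimeter like the ESP would accumulate such contributions continuously, including unprotected intervals when the plugs are removed.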
For the last 25 years, acoustical researchers have known that the ear "talks": receptor cells in the inner ear produce subtle echoes in response to brief acoustic stimuli such as clicks. These so-called "otoacoustic" emissions, or OAEs, are generally weaker in males than in females. According to a research team headed by experimental psychologist Dennis McFadden of the University of Texas at Austin (email@example.com), this male-female disparity may reflect the fact that males are exposed to higher levels of hormones known as androgens during early development, when the ear and the rest of the body are being formed. Attention-deficit/hyperactivity disorder (ADHD) is more common in boys than in girls, suggesting that some of the symptoms of ADHD may result from irregularities in androgen exposure during early development. If so, this relationship might be revealed by the OAEs of people diagnosed with ADHD. Greg Westhafer, a research assistant to McFadden, measured OAEs in children diagnosed with two different forms of ADHD. Children who have problems with inattention but are not hyperactive had weaker OAEs than a control group of children without ADHD. One simple interpretation is that the children with ADHD/inattentive disorder were exposed to higher-than-normal levels of androgens at some period during development, perhaps prenatally. By contrast, the Texas investigators did not find significant OAE differences in boys and girls diagnosed with the better-known subtype of ADHD that involves hyperactivity as well as inattention. These findings suggest that the causal factors for the two forms of ADHD may not be identical. (2aSC19)
Today, concert pianos are tuned so that the A note above middle C has a frequency of 440 Hertz (Hz). Eighty years ago, however, pianos were tuned to slightly lower pitches, so that the A note had a frequency of around 432 Hz. Hugo Fastl of the Munich Institute of Technology (firstname.lastname@example.org) and his colleagues will address a German music-lovers' group's claim that the sound quality of a grand piano tuned to 432 Hz is much superior to one tuned to 440 Hz. For perspective, the acoustical difference is small: the two pitches are only about a sixth of a tone apart. Playing back grand-piano recordings of selected classical pieces, each performed at both tunings, Fastl will report on listeners' evaluations of the alternate versions (4aAAa2). Other meeting talks discuss unanswered questions on the acoustics of the piano (3aMU2), the physics of piano keys (3aMU3), an acoustical comparison between an upright and a grand piano (3aMU5), holograms of piano soundboard motion (3aMU6), and an attempt to describe the motion of hammers, strings, soundboards, and surrounding air from first principles using Newton's laws (3aMU4). Moving to another popular classical instrument, Colin Gough of the University of Birmingham explores the role of vibrato in the perception of violin quality (4aAAa2). Presenting 40 years of research into a longstanding question, Leo L. Beranek (email@example.com) will discuss the physical and subjective attributes of good concert-hall acoustics (4aAAa1).
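The "sixth of a tone" figure can be checked with the standard formula for musical intervals in cents (1200 times the base-2 logarithm of the frequency ratio; a whole tone is 200 cents):

```python
import math

def cents(f1, f2):
    # Musical interval between two frequencies, in cents
    # (100 cents = 1 equal-tempered semitone, 200 cents = 1 whole tone)
    return 1200 * math.log2(f2 / f1)

interval = cents(432.0, 440.0)      # about 31.8 cents
fraction_of_tone = interval / 200   # about 0.16, i.e. roughly a sixth of a tone
```

So the disputed tunings differ by under a third of a semitone, a shift many listeners would barely notice in isolation.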
There's an old joke about how a theoretical physicist would simplify the study of agricultural science: "First," the physicist said, "assume the cow has a spherical shape." In the same spirit, one might well ask what "shape" best characterizes the human species. The answer, at least from an acoustical point of view, is that humans are approximately ellipsoidal (shaped like an elongated sphere), not spherical or cylindrical. In a recent experiment conducted at UC San Diego, sound waves were scattered from human subjects walking about a room. By detecting the scattered waves, the researchers could work out the effective acoustical properties of humans, with implications for the design of home entertainment centers and concert halls. Stephane G. Conti of the National Oceanic and Atmospheric Administration (firstname.lastname@example.org) and colleagues find that sound waves see the human form basically as a hard ellipsoid, and that the absorption of sound waves increases with the amount of clothing one wears. (2pPA12)
In North America alone, salmon, cod, red snapper, and numerous similar species are becoming scarce because of overfishing, the depletion of important fish populations through excessive harvesting. In an effort to supply much-needed information for monitoring fish populations, acoustical oceanographers around the world are busily developing new tools for fisheries. One emerging tool is "multibeam sonar," in which several sound beams are aimed over a swath of angles to take 3-D images of the underwater environment. Researchers are developing this technique to study movement patterns in schools of fish and to estimate fish biomass, with the ultimate goal of counting, tracking, and monitoring commercially important fish populations. Advances in this technique will be discussed in numerous papers, such as 1aAO1, 1aAO2, 1aAO8, 1pAO6, 1pAO7, and 3aAO4.
Modern concert halls often use subtle electronic enhancements to help the halls achieve superior acoustics. Acoustical consultant Christopher Jaffe, whose firm has designed successful natural-acoustic halls as well as electronically enhanced facilities, will provide an overview of what he calls "electronic architecture," describing halls renovated for the Indianapolis, Milwaukee, and San Antonio Symphonies (2pAA1). Takayuki Watanabe of Yamaha Corporation in Japan (email@example.com) will describe Active Field Control, an electro-acoustic enhancement system that has been employed at the 5000-seat Tokyo International Forum to endow it with the acoustical properties of a smaller, 2500-seat hall, and at the Osaka Central Public Hall, where the system was used to improve the acoustics without altering historically important architecture (2pAA4). Ronald Freiheit, an acoustical consultant in Minnesota, will discuss how "virtual acoustics" enables musicians in rehearsal rooms to recreate electronically the actual venues in which they will perform (2pAA5).
Musicians produce sounds in brass and reed instruments by adjusting the pressure of the airflow through the instruments. Similarly, "pressure-controlled valves" are the primary sound mechanism of the larynx and the syrinx, the vocal organ in birds. When a reed or brass player's reed or lips move to an extreme, the flow of air into the instrument may be entirely blocked for a brief time. Simulations that model such musical events often handle these situations poorly, leading to inaccurate results. Drawing upon knowledge of bioacoustic vocal mechanisms that exhibit similar periodically interrupted airflows, Tamara Smyth (firstname.lastname@example.org) and colleagues from Stanford University and Universal Audio, Inc. have developed improved solutions for the time evolution of air-volume flow, leading to smoother, more accurate simulations of the transition from open to closed air channels in instruments. The "feathered collisions," as the researchers call them, refine the sound quality produced by numerical simulations of clarinets and other beating-reed instruments. (2aMU5)
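The general numerical difficulty can be illustrated with a toy example. This is not the authors' algorithm, just a sketch of the underlying idea: a naive model clamps the reed-channel opening abruptly to zero at closure, which introduces a discontinuous slope that degrades a simulation, while a smoothed ("feathered") transition stays differentiable through the collision:

```python
def hard_opening(x):
    # Naive model: channel area snaps to zero the instant the reed beats,
    # giving a kink (discontinuous derivative) at x = 0
    return max(0.0, x)

def feathered_opening(x, eps=0.05):
    # Hypothetical smoothed alternative: blend to zero over a small region
    # [-eps, eps] with a quadratic, so both the value and the slope are
    # continuous across the closure point
    if x <= -eps:
        return 0.0
    if x >= eps:
        return x
    return (x + eps) ** 2 / (4 * eps)
```

The quadratic matches the closed branch (value 0, slope 0) at x = -eps and the open branch (value eps, slope 1) at x = +eps, so a time-stepping scheme never sees a kink.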
Competing acoustical requirements make it surprisingly challenging to design quiet rooms for patients who wish to sleep restfully in hospitals, nursing homes, and rehabilitation facilities. For example, nurses and other caregivers must be able to see and hear patients and their vital-signs monitors relatively easily. To complicate matters further, a new Federal law says that "appropriate physical safeguards" must protect the confidentiality of patient health information, meaning, for example, that casual passers-by should not be able to hear a conversation between a doctor and patient. Potential strategies for resolving these competing needs will be discussed by Bennett Brooks (email@example.com) of Brooks Acoustics Corporation in Connecticut (2aNS1). Magnetic resonance imaging (MRI) machines provide extremely important information for patients, but unfortunately these machines create a large amount of noise and vibration in their surroundings. Compounding this problem is the fact that hospitals are increasingly installing MRI machines on upper floors, where they can create an acoustical nuisance, for example by residing near (or even above) critical care areas. Authors of several papers (2aNS4, 2aNS5, 2aNS6) will address these problems and explore solutions.
Although a musical note typically contains many different audio frequencies, the frequency most crucial to a note's identity is the lowest one, called the "fundamental." If that fundamental frequency is removed, humans notice that the sound is different, but they can still correctly identify the note. Loyola University of Chicago psychologist Richard Fay (firstname.lastname@example.org), also the director of Loyola's Parmly Hearing Institute, has found over the course of 30 years many similarities in the way humans and fish hear. His work suggests that all vertebrates, even those whose evolutionary lines diverged hundreds of millions of years ago, may share the same basic mechanism for hearing. But do goldfish share the remarkable ability to detect a missing fundamental frequency? Fay played a harmonic sound (with a fundamental of 100 Hz) to two groups of goldfish. One group's sound contained the fundamental frequency and the other group's sound did not. The subtle responses of both groups provide evidence that goldfish perceive the missing fundamental in a way similar to humans. (4aAB7)
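A short numerical sketch shows why the pitch survives the missing fundamental. The toy signal below is my own construction, not Fay's stimulus: removing the 100 Hz component from a harmonic complex leaves a waveform that still repeats every 1/100 of a second, and that repetition period is what conveys the pitch:

```python
import numpy as np

fs = 8000                      # sample rate in Hz
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of samples
f0 = 100.0                     # fundamental frequency

# Harmonic complex: fundamental plus harmonics 2 through 6
full = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 7))
# Same complex with the 100 Hz fundamental removed
missing = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(2, 7))

# Both waveforms still repeat every 1/f0 = 10 ms
period = int(fs / f0)
assert np.allclose(missing[:-period], missing[period:])
```

The harmonics at 200, 300, 400 Hz and so on share no shorter common period than 10 ms, so the 100 Hz periodicity is preserved even though no energy remains at 100 Hz itself.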
Complex environments often garble transmitted sounds: echoes and delays cause the components of an acoustic signal to follow different paths on their way from transmitter to receiver. In recent years, researchers have found that by recording the distorted signal at the receiver, playing it backwards, and broadcasting it back toward the original transmitter, much of the distortion is removed as the signal travels back through the same complex environment. Time-reversal communications, as this technique is known, potentially provide a way to transmit information reliably in environments that would severely limit conventional communications. James Candy and colleagues at Lawrence Livermore National Laboratory (email@example.com) will present recent acoustic time-reversal communications experiments in hostile, reverberant environments in paper 2pPA1.
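The principle can be demonstrated with a toy simulation (an illustration of the general technique, not the LLNL experiment): model the reverberant environment as a random impulse response, distort a pulse by convolving with it, then time-reverse the result and pass it back through the same channel. The return trip computes the channel's autocorrelation, which compresses the scattered energy back into a sharp peak:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "hostile" channel: many random echoes with decaying amplitude
n = 200
h = rng.normal(size=n) * np.exp(-np.arange(n) / 50.0)

pulse = np.zeros(50)
pulse[0] = 1.0                        # short probe pulse
received = np.convolve(pulse, h)      # garbled by multipath propagation

# Time-reverse the received signal and send it back through the SAME channel
refocused = np.convolve(received[::-1], h)

# The peak-to-background ratio shows how sharply the energy refocuses
peak = np.max(np.abs(refocused))
background = np.median(np.abs(refocused))
```

The refocusing works because the second pass through the channel correlates the time-reversed echoes with themselves; every path's delay is exactly undone at one instant, while at all other instants the contributions add incoherently.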
Please return the REPLY FORM if you are interested in attending the meeting or receiving additional information.