University of Washington, Department of Speech and Hearing Sciences, Seattle, WA, 98105, United States
Additional authors: DJ Audet Jr, Aoi A. Hunsaker, Mallory Butler, Carol Sammeth, Alexandria Podolski, Theodore F. Argo, David A. Anderson, Nathaniel T. Greene
–The research described in this Acoustics Lay Language Paper may not have yet been peer reviewed–
In noisy professions – from manufacturing to the military – hearing protection and perception are often at odds. The sense of hearing normally enables listeners to detect and locate sounds arriving from any direction – an especially valuable ability in settings with low visibility (darkness, fog, smoke), visual clutter, or in which important sound sources may be outside the field of vision altogether, whether off in the distance or “right behind you!” However, when noisy settings demand the use of hearing protectors (usually earplugs or earmuffs), the ability to determine sound direction is reduced. Hearing protectors lower the level of transmitted sound – their designed purpose – but they also change the quality of the transmitted sound, disrupting the subtle bits of acoustic information the brain relies on to determine sound direction. This means listeners may confuse forward and rearward sounds, or struggle to locate sounds overhead. The trade-off between protection and perception can contribute to disuse of hearing protectors in critical settings where situational awareness and personal safety may be acutely valued above long-term hearing health.
Methods to evaluate hearing protector impacts have varied widely across previous studies; hearing protectors come in many shapes and sizes, and directional hearing ability varies across people even before hearing protectors enter the picture. Here, in an effort to identify key factors that mediate hearing protector impacts, we measured directional hearing during hearing protector use in a large sample of listeners across two different sites (130 subjects enrolled study-wide). Listeners were asked to orient to sounds that varied in horizontal and vertical location while wearing a variety of commercially available hearing protector styles, with orientation accuracy measured using wireless sensors.
All hearing protectors reduced directional hearing ability, but variation across devices pointed to key variables that may impact performance – and may be captured using relatively simple acoustic measurements. This work is part of an effort to develop metrics beyond the industry-standard “Noise Reduction Rating” that consumers and hearing conservation professionals alike might use to select job-appropriate hearing protectors, and that hearing protection manufacturers might leverage to design and build better devices.
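For context on how the Noise Reduction Rating is used today: in hearing-conservation practice, an NRR is typically not subtracted directly from a workplace level. The sketch below shows the standard arithmetic (subtracting 7 dB to account for A- versus C-weighting, with OSHA's optional 50% field derating of the remainder). It is illustrative background only, not an analysis from this study.

```python
# Estimate a protected exposure level from an A-weighted workplace
# level and a hearing protector's Noise Reduction Rating (NRR).
# Standard hearing-conservation arithmetic: subtract 7 dB from the
# NRR (A- vs. C-weighting correction); OSHA guidance optionally
# halves the remainder as a real-world "field" derating.

def protected_level_dba(workplace_dba: float, nrr: float, derate: bool = True) -> float:
    effective = nrr - 7.0
    if derate:
        effective /= 2.0
    return workplace_dba - effective

# Example: a 95 dBA shop floor with an NRR-29 earplug.
print(protected_level_dba(95.0, 29.0))                 # 84.0 dBA (derated)
print(protected_level_dba(95.0, 29.0, derate=False))   # 73.0 dBA (labeled)
```

Note that this rating says nothing about sound quality or localization, which is exactly the gap the metrics described above aim to fill.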
This work was funded by the US Department of Defense Joint Warfighter Medical Research Program.
People often associate restorative experiences with nature: the sound of birds, wind, or flowing water. But what if indoor spaces could offer their own kind of mental escape, not through what we see, but through how we interact with sound?
This idea began with a simple observation. When you walk into a space and notice how your footsteps and voice are reflected back to you, the echoes create a subtle sense of awe. According to Attention Restoration Theory, experiences that evoke fascination and effortless engagement can help replenish mental resources. We wanted to explore whether these moments of acoustic interaction between a person and a space could invite gentle attention and, in turn, support cognitive restoration. In Attention Restoration Theory, this is referred to as soft fascination, a type of stimulus that is engaging but not overwhelming.
Exploring Echoes as a Path to Mental Restoration:
During a live demonstration at the MIT Museum, we used auralization, a technology that allows you to hear your voice as if you were in a different place by applying that place’s sound signature, or impulse response. A volunteer hummed into the acoustic signature of Hagia Sophia. Later, the entire audience hummed together and reflected on their experiences. The conversation pointed to the potential of such acoustic interaction to support a meditative state by impacting one’s sense of space, time, and self.
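Technically, auralization of this kind is usually implemented by convolving a “dry” recording of the voice with the impulse response measured in the target space. The sketch below illustrates that operation; the tone and the decaying-echo impulse response are synthetic placeholders, not the Hagia Sophia measurement used in the demonstration.

```python
import numpy as np

# Auralization sketch: convolve a dry signal with a room impulse
# response (IR). Real systems use IRs measured in the actual space;
# here the IR is a synthetic placeholder with a few discrete echoes.
fs = 16000                                            # sample rate (Hz)
t = np.arange(fs) / fs                                # 1 second of time
dry = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)    # a short hummed tone

ir = np.zeros(fs // 2)
ir[0] = 1.0        # direct sound
ir[2000] = 0.5     # early reflection (~125 ms)
ir[6000] = 0.25    # later reflection (~375 ms)

wet = np.convolve(dry, ir)   # the "auralized" signal you would play back
print(len(wet))              # len(dry) + len(ir) - 1 samples
```

Before the first echo arrives, the wet signal is identical to the dry one; the sense of "being in" the other space comes entirely from the reflections the impulse response adds.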
This inspired a controlled experiment to study the restorative potential of indoor acoustic environments. We asked people to experience different sound environments (Figure 1) and measured their cognitive activity before and after each interaction. Early results suggest that interactive acoustics may support attention restoration, depending on the acoustic characteristics, opening a new way of thinking about how sound affects us indoors.
Figure 1: Virtual interaction with an acoustic environment during the experiment, where a person hears their own voice transformed through the acoustic signature of another space.
Why does this matter?
We spend most of our time indoors, yet discussions of restorative environments often focus on natural settings. This is especially relevant for workplaces and schools, where mental fatigue is common. It may also hold meaningful promise for neurodivergent individuals, including those with ADHD, who often benefit from environments that support attention without overstimulating it.
We imagine applications in immersive restorative spaces where people can interact with sound to reset and return to their activities with greater clarity. We also envision subtle integration into transitional spaces such as staircases, corridors, and building entrances that provide gentle cognitive relief as people move throughout their day.
Sound(e)scape reframes acoustics not as background, but as a tool for well-being. By understanding how interactive sound shapes attention and cognition, we can design buildings that do not simply avoid harmful noise. They can actively help the mind take a restorative break.
Figure 2: Visualization of interacting with different acoustic environments. Left: Max Addae vocalizing in an office environment (MIT Media Lab). Middle: “Hagia Sophia – Muhammad, Allah, Abu Bakr” by Rabe!, licensed under CC BY-SA 3.0 (https://commons.wikimedia.org/wiki/File:Hagia_Sophia_-_Muhammad,_Allah,_Abu_Bakr.jpg) Cropped and one person (Max Addae) added by Alaa Algargoosh. Right: Max Addae vocalizing in Boston Symphony Hall.
Note on Publication
This article is a new version prepared for the Acoustics Lay Language Paper. Our research was originally published in the Journal of Music Perception and Cognition: Kim, K-H., Yamaguchi, M., Iwamiya, S. (2021). Optimal listening level for audio-visual media: Influence of gender difference, presence or absence of video, and display size. Journal of Music Perception and Cognition, 26(2), 67-80. (in Japanese with English abstract)
Tired of arguing with family or a partner over the TV volume? One person often says it’s “too loud” while the other insists they “can’t hear it well.” This common conflict suggests that the preferred volume is not just an acoustic phenomenon. Our research reveals that gender and the presence or absence of video play a crucial role in determining the volume people find “just right.”
In our daily lives, we constantly process sound alongside visual cues. The preferred playback volume for a comfortable experience is known as the Optimal Listening Level (OLL). Our study demonstrates that simply measuring physical sound intensity is insufficient; we must adopt a multisensory approach to fully comprehend loudness perception.
To clarify the effects of video and gender on OLL, we examined twenty Japanese university students (10 men and 10 women). All participants used a remote control to adjust the volume freely until they reached their “most comfortable level” (OLL). They did this while watching various video clips of diverse genres or simply listening to the audio only. We then precisely measured the sound level at their ear position.
The Main Discovery: Video Affects Women’s Volume More Than Men’s
The most important finding is that the multisensory integration effect—the way we integrate sight and sound—is significantly stronger in women when setting the OLL:
1. Women Turn Up the Volume with Video
When women transitioned from listening to audio only to watching an audio-visual (AV) clip, they increased their preferred volume by an average of 1.7 dB (up to 3.3 dB). This increase was a statistically significant change, demonstrating that visual information leads women to set the volume louder.
2. Men’s Volume Setting Stays Consistent
For men, the addition of the video element resulted in no significant change in their OLL.
This indicates that female viewers tend to use visual context to modify their ideal sound level, a sensitivity that male viewers did not exhibit.
Figure 1: Gender differences in the multisensory integration effect on the Optimal Listening Level (OLL). † p < .10, * p < .05, ** p < .01, n.s.: not significant
Other Findings
Beyond the influence of video, we confirmed other substantial factors influencing the OLL:
1. The Overall Gender Difference: Men Prefer It Louder
Across all experimental conditions, men consistently preferred a higher listening level than women. On average, the volume set by men was 5.3 dB higher than the volume set by women. This difference is large enough to be easily perceived as a noticeable difference in loudness. This gender difference was maintained regardless of whether video was present.
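To put those decibel figures in physical terms, a level difference can be converted to a sound-power ratio. This is a back-of-the-envelope conversion for readers unfamiliar with the decibel scale, not an analysis from the study:

```python
# Convert a level difference in decibels to a sound-power ratio:
# ratio = 10 ** (dB / 10). Rules of thumb: ~1 dB is near the
# just-noticeable difference, and ~10 dB is commonly perceived
# as "twice as loud".
def power_ratio(db: float) -> float:
    return 10 ** (db / 10)

print(round(power_ratio(5.3), 2))  # the 5.3 dB gender gap: ~3.4x the sound power
print(round(power_ratio(1.7), 2))  # the 1.7 dB video effect: ~1.5x the sound power
```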
2. The Influence of Content and Display Size
We also found that the preferred volume varied significantly based on the type of content. In particular, the listening level was notably higher for music-related productions (pop and classical concerts) than for other genres. However, the size of the display (16-inch small vs. 46-inch large) had no significant effect on the volume setting.
Conclusions and Takeaways
To create a truly comfortable listening experience in movies, television, and gaming, we must look beyond sound alone. Recognizing gender differences and the multisensory interaction effects—specifically, the shift in women’s preferred volume with video—highlights the necessity of considering gender-specific viewing experiences in all AV productions. Adopting this approach leads to more inclusive AV experiences for all viewer-listeners.
Language is a uniquely human capacity. Members of other species communicate, but those communications are neither as complex nor as interactional as human language. In spite of its greater complexity, however, human language evolved within the constraints of the mammalian auditory system. For individual children, spoken language must develop within the constraints of their own auditory systems. But even though the great majority of children can hear sounds at birth, a tremendous amount of development in the auditory system takes place after birth, extending through puberty. This development happens in the central auditory pathways, which means the ability to perform more complex functions on acoustic signals does not reach maturity until near puberty. Thus, a reasonable proposal is that any condition that delays the development of the child’s auditory system can disrupt language development, especially for aspects of language most dependent upon having sophisticated auditory functions. This proposal was explored in this study. Furthermore, the idea was explored that two conditions heretofore known to negatively affect language development may exert some of that influence by disturbing the normal timing of auditory development. These conditions are poverty and premature birth.
Developmental scientists have long searched for the roots of the delays in language acquisition exhibited by children living in poverty. That work has focused on language models in the child’s environment, which are fewer in quantity and poorer in quality than what a middle-class child hears. But even though this factor has been found to explain effects of poverty on child language abilities to some extent, those relationships are never found to be very strong. This means that some other factor(s) must also be contributing.
Children born prematurely are known to have delayed language development, and the usual explanation is that the auditory environment in the neonatal intensive care unit is at once too noisy and too devoid of the human voice, which is available in utero. Again, those explanations might account for some of the deficit, but animal studies show that the simple act of being removed from the womb before full gestation leads to neurodevelopmental challenges. Obviously, those challenges for animals do not include language acquisition, but for human children born too early, language acquisition can be a challenge.
Our primary findings are:
Relatively strong relationships exist between measures of auditory function and language measures, and these relationships were strongest for the most complex language skills.
Socioeconomic status and gestational age at birth were related to measures of both auditory and language development.
Effects on language development of both socioeconomic status and gestational age at birth could be explained by their effects on auditory function, to at least some extent.
These results mean that developmental delays in the biological structures and functions underlying language are happening long before a language problem can be diagnosed. We need to provide intensive interventions right from birth, focused not only on discrete language targets but on the whole child.
Should people have a legal right to quiet in their homes? There already is a legal doctrine called the Right to Quiet Enjoyment, based on English Common Law, but this has nothing to do with quiet. It means that a landlord can’t bother a tenant unnecessarily in a rented apartment or house.
The official definition of noise is “unwanted sound,” but a newer definition, already adopted by the International Commission on Biological Effects of Noise, is “unwanted and/or harmful sound.” Noise has both auditory and non-auditory health effects. Too much noise causes hearing loss, tinnitus (ringing in the ears), and hyperacusis (a sensitivity to noise that doesn’t bother others). Non-auditory health effects include high blood pressure, heart disease, and increased mortality. Possible non-auditory health effects also include obesity, diabetes, and infertility.
The Environmental Protection Agency calculated safe noise levels in 1974. These are not standards or regulations, but were calculated as mandated by Congress.
Table 2. World Health Organization Noise Level Recommendations
Noise damages hearing, but how does noise damage overall health? As shown by the National Park Service noise map (Figure 1), without human noise, nature is quiet. Loud noise usually signals danger. The perception of danger leads to a three-part involuntary response: 1) an almost immediate increase in blood pressure and pulse, mediated by the autonomic nervous system; 2) a slower increase in stress hormone levels, involving the brain, the pituitary gland, and the adrenal glands; and 3) inflammation of blood vessel linings. An illustration of how this might occur is shown in Figure 2.
In 1981, the Environmental Protection Agency estimated that 100 million Americans were exposed to harmful levels of noise pollution. That number is undoubtedly larger now. Multiple studies document excessive noise exposure for those living in cities, largely from road traffic noise. In London, the median daytime road traffic noise level was 55.6 decibels (dB), with increased cardiovascular disease and mortality among those exposed to >60 dB, especially older people. In the HYENA study, increased aircraft and road traffic noise exposure was correlated with increased blood pressure. Average noise levels may hide intermittent noise disrupting sleep. Nighttime noise has particularly deleterious effects on health due to sleep disruption.
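The point that averages can hide intermittent noise follows from how equivalent levels are computed: decibels are averaged on an energy basis, Leq = 10·log10(mean of 10^(L/10)), so one loud event dominates many quiet minutes. A minimal illustration with made-up numbers (not data from the cited studies):

```python
import math

def leq(levels_db):
    """Energy-average a list of sound levels in dB:
    Leq = 10 * log10(mean of 10^(L/10))."""
    mean_power = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_power)

# A quiet night (30 dB) with a single 85 dB pass-by
# among 60 one-minute samples:
night = [30.0] * 59 + [85.0]
print(round(leq(night), 1))  # ~67.2 dB: one event dominates the "average"
```

A listener sleeping through that hour experiences 59 quiet minutes and one awakening, yet the hourly average sits far above the 30 dB baseline; conversely, an average can look acceptable while individual events are loud enough to disrupt sleep.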
Figure 1. National Park Service Noise Map Without Anthropogenic Noise
Figure 2. Proposed pathophysiological mechanisms of noise-induced cardiometabolic disease. Reproduced with permission from Munzel T, Schmidt FP, Steven S, et al. J Am Coll Cardiol. 2018 Feb 13;71(6):688-697. https://pubmed.ncbi.nlm.nih.gov/29420965/
There can be no rational doubt that noise is harmful. The Noise Control Act of 1972 established a national policy to promote an environment for all Americans free from noise that jeopardizes their health and welfare. Expanding the Right to Quiet Enjoyment to a literal right to enjoy quiet in one’s home, whether rented or owned, will take either litigation or legislation at the local, state, or national levels. This presentation is a preliminary discussion of the topic; any expansion of the right to quiet enjoyment will undoubtedly take many years to accomplish. One thing is for sure: a quieter world, with homes that are free from unhealthy noise disturbances inside and in their outdoor spaces, will be a better and healthier world for all.
Is hearing loss in older people normal? It certainly is common, but the radical conclusion proposed in this summary paper is that it isn’t part of normal aging. Hearing loss in older people, technically called presbycusis or age-related hearing loss, is really the result of exposure to too much noise over one’s lifetime. The hearing loss common in old age is entirely preventable by reducing exposure to loud noise. Figure 1 shows how too much noise causes hearing loss by damaging the hair cells in the cochlea in the inner ear.
Figure 1. Top: Auditory structures from external ear (pinna) to auditory nerve. Bottom: Normal and damaged hair cells. From Centers for Disease Control and Prevention. How does loud noise cause hearing loss?
Why does this matter? If something is caused by normal aging, like thinning gray hair, nothing can be done about it. But if a condition common in old age is due to something that can be changed, like diet, exercise, or avoiding harmful exposures, maybe it can be delayed or prevented entirely.
Many conditions common in older people, once thought to be due to normal aging, have been shown to be preventable. These include obesity, diabetes, high blood pressure, muscle weakness, heart disease, skin cancers, and even dementia. Age-related hearing loss should be added to this list.
A number of studies done in the 1960s in isolated populations not exposed to loud noise found good hearing preserved to age 70. One example is a 1962 study of hearing in the isolated Mabaan population in the Sudan. Figure 2 shows that anything more than a 10-decibel hearing loss may not be normal.
Figure 2. Hearing loss in women and men in industrial societies and the non-industrialized Mabaans. Adapted by Kathleen Romito MD from Figure 11 in Kryter KD. Presbycusis, sociocusis and nosocusis. J. Acoust. Soc. Am. 1 June 1983; 73 (6): 1897–1917. https://doi.org/10.1121/1.389580.
Other lines of evidence supporting the conclusion that hearing loss in old people isn’t due to normal aging include:
Occupational studies showing exactly how much noise causes hearing loss. This is the basis of noise exposure limits for workers. Everyone’s ears are the same. If noise causes hearing loss in workers, it has to cause hearing loss in everyone.
Boys and girls have equal hearing at birth, but by the teen years and into adulthood, women have better hearing than men. [See Figure 2.] Girls and women generally don’t do noisy things like hunting or woodworking, work in noisy factories or mines, or operate heavy equipment.
Workplace hearing loss occurs in the frequencies the ear is exposed to. For example, dentists have high-frequency hearing loss in the ear nearest the drill.
How noise damages hearing is well-understood, down to the cellular, subcellular, and molecular levels.
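The occupational exposure limits mentioned above rest on a simple dose formula: allowable time halves for every fixed increase in level. Under the NIOSH recommended exposure limit (85 dBA for 8 hours, with a 3-dB exchange rate), that arithmetic can be sketched as follows; this illustrates the general rule, not a substitute for the official tables:

```python
def allowable_hours(level_dba: float, criterion: float = 85.0,
                    exchange_rate: float = 3.0) -> float:
    """NIOSH-style permissible exposure duration: 8 hours at the
    criterion level, halved for every exchange-rate dB above it."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

print(allowable_hours(85.0))   # 8.0 hours
print(allowable_hours(94.0))   # 1.0 hour
print(allowable_hours(100.0))  # 0.25 hour (15 minutes)
```

The steep halving is why even moderately loud recreational sound, sustained over years, can add up to hearing damage.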
What else could cause age-related hearing loss? Some experts mention drugs that damage the ear, hardening of the arteries, genes that cause hearing loss, or nutritional factors, but seem to ignore or downplay noise. The published evidence, though, doesn’t support a major role for any of these other factors.
Recent research supports the conclusion that hearing loss in older people can be prevented. The upper left-hand graph in Figure 3 shows that normal hearing loss in older people is minimal, about 10 decibels at 4,000 Hertz (cycles per second), as in Figure 2.
Figure 3. Mean audiograms and standard errors of exemplars (filled symbols) and non-exemplars (open symbols) in four audiometric phenotypes. Reproduced with permission from Dubno JR, Eckert MA, Lee FS, et al. Classifying human audiometric phenotypes of age-related hearing loss from animal models. J Assoc Res Otolaryngol. 2013 Oct;14(5):687-701. https://pmc.ncbi.nlm.nih.gov/articles/PMC3767874/
Why does prevention of age-related hearing loss matter? Hearing aids are expensive. Only one-third of older Americans who might benefit from hearing aids have them. Even in countries where hearing aids are provided by the national health insurance program, many people don’t want them. There is a stigma attached to hearing loss and to wearing hearing aids. Also, hearing aids don’t restore normal hearing and don’t work as well as desired in noisy restaurants or at parties.
CDC states that noise-induced hearing loss is the only type of hearing loss that is 100% preventable. Preventing age-related hearing loss is simple and inexpensive: reduce lifetime noise exposure. If something sounds loud, it’s too loud, and one’s auditory health is at risk. Turn down the volume, insert earplugs, or leave the noisy environment and you won’t need hearing aids when you get old.