Room Design Considerations for Optimal Podcasting

Madeline Didier –

Jaffe Holden, 114-A Washington Street, Norwalk, CT, 06854, United States

Twitter: @JaffeHolden
Instagram: @jaffeholden

Popular version of 1aAA2-Podcast recording room design considerations and best practices, presented at the 183rd ASA Meeting.

Podcast popularity has been on the rise, with over two million active podcasts as of 2021. There are countless options when choosing a podcast to listen to, and unacceptable audio quality will cause a listener to quickly move on to another option. Poor acoustics in the space where a podcast was recorded are noticeable even to an untrained ear, and listeners can hear differences in room acoustics without ever seeing the space. Podcasters use a variety of setups to record episodes, ranging from closets to professional recording spaces. One trend is recording spaces that feel comfortable and look aesthetically pleasing, more like living rooms than radio stations.

Figure 1: Podcast studio with a living room aesthetic. Image courtesy of The Qube.

A high-quality podcast recording is one that does not capture sounds other than the podcaster’s voice. Unwanted sounds include noise from mechanical systems, vocal reflections, or ambient noise such as exterior traffic or people in a neighboring room. Listen to the examples below.

More ideal recording conditions:
Media courtesy of Home Cooking Podcast, Episode: Kohlrabi – Turnip for What

Less ideal recording conditions:
Media courtesy of The Birding Life Podcast, Episode 15: Roberts Bird Guide Second Edition

The first example is a higher quality recording where the voices can be clearly heard. In the second example, the podcast guest is not recording in an acoustically suitable room. The voice reflects off the wall surfaces and detracts from the overall quality and listener experience.

Every room design project comes with its own challenges and considerations related to budget, adjacent spaces, and expected quality. Each room may have different design needs, but best practice recommendations for designing a podcasting room remain the same.

Background noise: Mechanical noise should be controlled so that you cannot hear HVAC systems in a recording. Computers and audio interfaces should ideally be located remotely so that noises, such as computer fans, are not picked up on the recording.
Room shape: Square room proportions should be avoided as this can cause room modes, or buildup of sound energy in spots of the room, creating an uneven acoustic environment.
Room finishes: Carpet is ideal for flooring, and an acoustically absorptive material should be attached to the wall(s) in the same plane as the podcaster’s voice. Wall materials should be 1-2” thick. Ceiling materials should be acoustically absorptive, and window glass should be angled upward to reduce resonance within the room.
Sound isolation: Strategies for improving sound separation may include sound rated doors or standard doors with full perimeter gaskets, sound isolation ceilings, and full height wall constructions with insulation and multiple layers of gypsum wallboard.
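
The warning about square proportions can be made concrete with a little arithmetic. The standing-wave (mode) frequencies of an idealized rectangular room follow a simple formula; the short script below (an illustrative sketch, not a design tool) shows why a cube stacks energy at single frequencies while unequal dimensions spread it out.

```python
from itertools import product
from math import sqrt

def room_modes(lx, ly, lz, c=343.0, n_max=2):
    """Standing-wave (room mode) frequencies, in Hz, of an idealized
    rectangular room with dimensions lx, ly, lz in metres."""
    modes = []
    for nx, ny, nz in product(range(n_max + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue
        f = (c / 2.0) * sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append(((nx, ny, nz), f))
    return sorted(modes, key=lambda m: m[1])

# In a 4 m cube, the three lowest modes all land on the same frequency,
# piling sound energy up at one pitch -- the problem with square or
# cubic proportions.
for m, f in room_modes(4, 4, 4)[:3]:
    print(m, f"{f:.1f} Hz")

# A 5 x 4 x 3 m room spreads its lowest modes across distinct frequencies.
for m, f in room_modes(5, 4, 3)[:3]:
    print(m, f"{f:.1f} Hz")
```

In practice, designers also check that modes are evenly distributed rather than clustered, but the coincident-mode problem of equal dimensions is the clearest case.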

In the example below, the podcast studio (circled) is strategically located at the back of a dedicated corridor for radio and podcasting. It is physically isolated from the main corridor, creating more acoustical separation. Absorptive ceiling tile (not shown) and 2” thick wall panels help limit vocal reflections, and background noise is controlled.

Figure 2: Podcast recording room within a radio and podcasting suite. Image courtesy of BWBR and RAMSA.

While the challenges for any podcast room may differ, the acoustical goals remain the same. With thoughtful consideration of background noise, room shape, finishes, and sound isolation, any room can support high-quality podcast recording.

Ultrasonics to monitor liquid metal melt pool dynamics for improving metal 3D printing

Christopher Kube –
Twitter: @_chriskube

Penn State University, 212 Earth and Engineering Sciences Bldg, University Park, PA, 16802, United States

Tao Sun, University of Virginia
Samuel Clark, Advanced Photon Source, Twitter: @advancedphoton

Popular version of 3pID2-Acoustics for in-process melt pool monitoring during metal additive manufacturing, presented at the 183rd ASA Meeting.

3D printed or additively manufactured (AM) metal parts are disrupting the status quo in a variety of industries including defense, transportation, energy, and space exploration. Engineers now design and produce customizable parts unimaginable only a decade ago. The geometrical, or part shape, freedom inherent to AM has already led to parts that often outperform their traditionally manufactured counterparts. In the years to come, another revolutionary performance jump is expected from enabling the AM process to control the grain layout and structural features on the microscopic scale. Grains are the building blocks of metal parts and dictate many of the performance metrics behind the descriptors bigger, faster, and stronger.

The second performance revolution of AM metal parts requires uncovering new knowledge about the complicated physics present during the AM process. 3D printed metals are born from an energy source, such as a laser or electron beam, selectively melting feedstock material at microscopic locations dictated by the computerized part drawing. Melted locations temporarily form liquid metal melt pools that solidify after the energy source moves to another location. The resulting grain structure and pore/defect formation depend strongly on how the melt pool cools and solidifies.

Over the past five years, high-energy X-rays, available only at particle accelerators, have been used for direct, real-time visualization of AM melt pool dynamics and solidification. Figure 1 shows an example X-ray frame, which captured a laser-generated melt pool moving in a single direction at a speed of 800 mm/s.

This situation mimics the laser and melt pool movement found during 3D printing of metal parts. Direct observation of melt pool behavior has led to new and improved understanding of the underlying physics. Unfortunately, experiment time at such X-ray sources is difficult to obtain because of extremely high demand across the sciences. Additionally, the measurement technique, being tied to high-energy X-ray sources, is not transferable to the metal 3D printers found in normal industrial settings. For these reasons, ultrasonics is being explored as a melt pool monitoring technology that can be deployed within real 3D printers.

Ultrasound is commonly used for imaging and detecting features inside solid materials. For example, ultrasound is applied in medical settings during pregnancy or for diagnostics. Applying ultrasound to melt pool monitoring is possible because ultrasound tends to scatter from the melt pool's solid/liquid boundary. The technique is being developed alongside X-ray imaging at the Advanced Photon Source at Argonne National Laboratory. X-ray imaging provides the extremely important ground-truth melt pool behavior, allowing for easy interpretation of the ultrasonic response. In Figure 1, the ultrasonic response from the exact same melt pool shown in the X-ray video is displayed for two different sensors. As the melt pool enters the field of view of the ultrasonic sensors (see online video), features in the ultrasound response confirm their sensitivity to the melt pool.
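
The scattering at the solid/liquid boundary comes from the mismatch in acoustic impedance between solid and molten metal. A small sketch using the textbook normal-incidence reflection formula makes the point; the aluminium densities and sound speeds below are rough literature figures used only for illustration, not measurements from this work.

```python
def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient at normal incidence for a wave
    travelling from medium 1 (impedance z1) into medium 2 (z2)."""
    return (z2 - z1) / (z2 + z1)

# Rough, illustrative values (kg / m^2 / s):
# acoustic impedance = density * longitudinal sound speed.
z_solid = 2700 * 6320    # solid aluminium (approximate)
z_liquid = 2375 * 4700   # molten aluminium near the melting point (approximate)

r = reflection_coefficient(z_solid, z_liquid)
print(f"About {abs(r):.0%} of the wave amplitude reflects at the melt boundary")
```

Even a partial reflection of this size is enough to make the melt pool boundary visible in the ultrasonic signal.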

In this research, high-energy X-rays are being used to develop the ultrasonic technique and technology. In the coming year, the knowledge developed will be leveraged such that ultrasound can be applied on its own for melt pool monitoring in real metal 3D printers. Currently, no existing technology can capture the highly dynamic melt pool behavior through the depth of the part or substrate.

Practical benefits and value of melt pool monitoring within 3D printers are significant. Ultrasound can provide a quick check to determine the optimal laser power and speed combinations, accelerating the determination of process parameters. Currently, determining the optimal process parameters requires destructive postmortem microscopy techniques that are extremely costly, time-consuming (sometimes more than a year), and wasteful. Ultrasound has the potential to reduce these costs by an order of magnitude. Furthermore, metal 3D printing processes are highly variable over many months, across different machines, and even when using feedstock powder from different suppliers. Ultrasonic melt pool monitoring can provide periodic checks to ensure variability is minimized.

Assessment of road surfaces using sound analysis

Andrzej Czyzewski –

Multimedia Systems, The Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Gdansk, Pomorskie, 80-233, Poland

Jozef Kotus – Multimedia Systems, The Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology
Grzegorz Szwoch – Multimedia Systems, The Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology
Bozena Kostek – Audio Acoustics Lab., Gdansk University of Technology, Gdansk, Poland

Popular version of 3pPAb1-Assessment of road surface state with acoustic vector sensor, presented at the 183rd ASA Meeting.

Have you ever listened to the sound of road vehicles passing by? Perhaps you’ve noticed that the sound differs depending on whether the road surface is dry or wet (for example, after the rain). This observation is the basis of the presented algorithm that assesses the road surface state using sound analysis.

Listen to the sound of a car moving on a dry road.
And this is the sound of a car on a wet road.

A wet road surface not only sounds different, but it also affects road safety for drivers and pedestrians. Knowing the state of the road (dry/wet), it is possible to notify the drivers about dangerous road conditions, for example, using signs displayed on the road.

There are various methods of assessing the road surface. For example, there are optical (laser) sensors, but they are expensive. Therefore, we have decided to develop an acoustic sensor that "listens" to the sound of vehicles moving along the road and determines whether the surface is dry or wet.

The task may seem simple, but we must remember that the sensor records the sound of road vehicles and other environmental sounds (people speaking, aircraft, animals, etc.). Therefore, instead of a single microphone, we use a special acoustic sensor built from six miniature digital microphones mounted on a small cube (10 mm side length). With this sensor, we can select sounds incoming from the road, ignoring sounds from other directions, and also detect the direction in which a vehicle moves.
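
One classic way a microphone pair yields direction is from the arrival-time difference between the two microphones. The sketch below illustrates only that principle; the actual acoustic vector sensor described here uses more sophisticated processing across all six microphones, so treat the function and its parameters as a hypothetical illustration.

```python
import numpy as np

def doa_from_delay(delay_s, spacing_m, c=343.0):
    """Direction of arrival (degrees from broadside) for one microphone
    pair, estimated from the inter-microphone time delay. A probe with
    pairs along all three axes extends this idea to full 3D directions."""
    s = np.clip(c * delay_s / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# A wavefront arriving 10 microseconds earlier at one mic of a
# 10 mm pair (the cube's side length) comes from about 20 degrees
# off broadside.
print(f"{doa_from_delay(10e-6, 0.01):.1f} degrees")
```

With direction estimates available, sounds arriving from directions other than the road can simply be discarded before further analysis.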

Since the sounds of road vehicles moving on dry and wet surfaces differ, performing frequency analysis of the vehicle sounds is recommended.

The figures below present how the sound spectrum changes in time when a vehicle moves on a dry surface (left figure) and a wet surface (right figure). It is evident that in the case of a wet surface, the spectrum extends toward higher frequencies (the upper part of the plot) compared with the dry surface plot. Colors on the plot represent the direction of arrival of the sound generated by a vehicle passing by (the angle in degrees). You can observe how the vehicles moved in relation to the sensor.

Plots of the sound spectrum for cars moving on a dry road (left) and a wet road (right). Color denotes the sound source azimuth. In both cases, two vehicles moving in opposite directions were observed.

In our algorithm, we have developed a parameter that describes the amount of water on the road. The parameter value is low for a dry surface and increases as the road surface becomes wetter during rainfall.
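
The article does not spell out the parameter's formula, but the spectral shift described above suggests the flavor of such a measure. The sketch below computes a hypothetical wetness index as the ratio of high-frequency to low-frequency spectral energy; the 4 kHz split point and the index itself are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def wetness_index(signal, fs, split_hz=4000):
    """Hypothetical wetness parameter: ratio of spectral energy above
    split_hz to the energy below it. A wet surface shifts tyre noise
    toward higher frequencies, raising this ratio."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    high = spectrum[freqs >= split_hz].sum()
    low = spectrum[freqs < split_hz].sum()
    return high / (low + 1e-12)

# Synthetic check: low-frequency rumble (dry-like) vs broadband hiss (wet-like).
fs = 44100
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
rumble = np.sin(2 * np.pi * 200 * t)  # energy concentrated at 200 Hz
hiss = rng.standard_normal(fs)        # energy spread across all frequencies
print(wetness_index(rumble, fs) < wetness_index(hiss, fs))  # True
```

A deployed system would of course compute such a measure only on sound segments already attributed to the road direction by the vector sensor.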

The results obtained from our algorithm were verified by comparing them with data from a professional road surface sensor that measures the thickness of a water layer on the road using a laser beam (VAISALA Remote Road Surface State Sensor DSC111). The plot below shows the results from analyzing sounds recorded from 1200 road vehicles passing by the sensor, compared with data obtained from the reference sensor. The data were obtained from a continuous 6-hour observation period, starting from a dry surface, then observing rainfall until the road surface had dried.

A surface state measure calculated with the proposed algorithm and obtained from the reference device

As one can see, the results obtained from our algorithm are consistent with data from the professional device. The results are promising, and the low-cost sensor is easy to install at multiple points within a road network, making the proposed solution an attractive method of road condition assessment for intelligent road management systems.

Connecting industry to a more diverse student population

Felicia Doggett –

Instagram: @metropolitan_acoustics

Metropolitan Acoustics, 1628 JFK Blvd., Suite 1902, Philadelphia, PA, 19103, United States

Popular version of 4pED4-Internships in the acoustical disciplines: How can we attract a more diverse student population?, presented at the 183rd ASA Meeting.

Metropolitan Acoustics has employed 26 interns over a 27-year period. Of those 26, six students pursued careers in the acoustics fields; of those six, only one was both a woman and a minority, and she was a foreign-born student who came to the United States for school. Not one woman or minority student from the United States who interned with us since 1995 entered the acoustics fields after graduation. This is a telling microcosm of the Acoustical Society of America as a whole.

Within the acoustics fields, we need to ask ourselves how we are connecting to underrepresented student groups. The engineering disciplines are not very diverse, and the few women and minority-group members who enter the field often leave for a variety of reasons, most of which lead back to a lack of inclusion. It doesn't have to be a mountain; a molehill can be enough to send someone off the track of a sustained and productive career in the science and engineering fields.

At Metropolitan Acoustics, the large majority of our interns have been 6-month co-ops rather than 3-month summer interns (23 versus 3). For the most part, the students were fairly productive, and we found that interest, enthusiasm, engagement, and work ethic are all factors in their success. Six of the 26 went into careers in acoustics, and one of them works for us currently. The gender and racial breakdown is as follows:

  • Gender diversity: 20 male, 6 female
  • Racial diversity: 20 Caucasian, 6 minority; of the 6 minorities, 4 male and 2 female
  • Of the 6 interns who went into careers in acoustics, 5 are Caucasian males and 1 is a minority female who is not native to the US

As an organization, what are we doing to attract a more diverse pipeline of candidates to the acoustics fields? And perhaps the bigger question is how we plan to keep them in the field, which is all about inclusiveness. Dedicated student portals on organizational websites, populated with videos, student awards, lists of schools with acoustics programs, and other items, are a start. This information can be shared with underrepresented student organizations such as the National Society of Black Engineers, the Society of Women Engineers, the Society of Hispanic Professional Engineers, the Society of STEM Women of Color, and the American Indian Science and Engineering Society, among others, in the hope that it may light a spark in some students to enter the field.

Artificial intelligence in music production: controversy and opportunity

Joshua Reiss –
Twitter: @IntelSoundEng

Queen Mary University of London, Mile End Road, London, England, E1 4NS, United Kingdom

Popular version of 3aSP1-Artificial intelligence in music production: controversy and opportunity, presented at the 183rd ASA Meeting.

Music production
In music production, one typically has many sources. They each need to be heard simultaneously, but they can all be created in different ways, in different environments, and with different attributes. The mix should keep every source distinct yet have them all contribute to a clean blend of the sounds. Achieving this is labour-intensive and requires a professional engineer. Modern production systems help, but they are incredibly complex and still require manual manipulation. As the technology has grown, it has become more functional but not simpler for the user.

Intelligent music production
Intelligent systems could analyse all the incoming signals and determine how they should be modified and combined. This has the potential to revolutionise music production, in effect putting a robot sound engineer inside every recording device, mixing console or audio workstation. Could this be achieved? This question gets to the heart of what is art and what is science, what is the role of the music producer and why we prefer one mix over another.

Figure 1 Caption: The architecture of an automatic mixing system. [Image courtesy of the author]

Perception of mixing
But there is little understanding of how we perceive audio mixes. Almost all studies have been restricted to lab conditions, such as measuring the perceived level of a tone in the presence of background noise. This tells us very little about real-world cases. It doesn't say how well one can hear lead vocals when there are guitar, bass and drums.

Best practices
And we don’t know why one production will sound dull while another makes you laugh and cry, even though both are on the same piece of music, performed by competent sound engineers. So we needed to establish what is good production, how to translate it into rules and exploit it within algorithms. We needed to step back and explore more fundamental questions, filling gaps in our understanding of production and perception.

Knowledge engineering
We used an approach that incorporated one of the earliest machine learning methods, knowledge engineering. It's so old school that it's gone out of fashion. It assumes experts have already figured things out; they are experts, after all. So let's capture best practices as a set of rules and processes. But this is no easy task. Most sound engineers can't articulate what they did. Ask a famous producer what he or she did on a hit song and you often get an answer like 'I turned the knob up to 11 to make it sound phat.' How do you turn that into a mathematical equation? Or worse, they say it was magic and can't be put into words.

We systematically tested all the assumptions about best practices and supplemented them with listening tests that helped us understand how people perceive complex sound mixtures. We also curated multitrack audio, with detailed information about how it was recorded, multiple mixes and evaluations of those mixes.

This enabled us to develop intelligent systems that automate much of the music production process.
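
As a taste of what such a system must decide, the simplest automatic-mixing operation is level balancing: bringing every track to a comparable loudness before summing. The sketch below uses plain RMS matching for brevity; an actual intelligent mixing system, including the one described here, would rely on perceptual loudness and masking models rather than this simplification.

```python
import numpy as np

def auto_mix(tracks, target_rms=0.1):
    """Gain-balance each track to a common RMS level, then sum.
    Plain RMS stands in for the perceptual loudness models a real
    automatic mixing system would use."""
    n = max(len(trk) for trk in tracks)
    mix = np.zeros(n)
    for trk in tracks:
        rms = np.sqrt(np.mean(trk ** 2))
        gain = target_rms / (rms + 1e-12)
        mix[:len(trk)] += gain * trk
    peak = np.max(np.abs(mix))  # normalise only if the sum would clip
    return mix / peak if peak > 1.0 else mix

# Two test tones recorded at very different levels end up equally loud.
fs = 8000
t = np.arange(fs) / fs
quiet = 0.01 * np.sin(2 * np.pi * 220 * t)
loud = 0.8 * np.sin(2 * np.pi * 330 * t)
mix = auto_mix([quiet, loud])
```

Level balancing is only the first of many interdependent decisions (equalisation, panning, dynamics) that a full automatic mixing system has to make jointly.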

Video Caption: An automatic mixing system based on a technology we developed.

Transformational impact
I gave a talk about this once in a room that had panel windows all around. These talks are usually half full. But this time it was packed, and I could see faces outside pressed up against the windows. They all wanted to find out about this idea of automatic mixing. It's a unique opportunity for academic research to have a transformational impact on an entire industry. It addresses the fact that music production technologies are often not fit for purpose. Intelligent systems open up new opportunities. Amateur musicians could create high-quality mixes of their content, small venues could put on live events without needing a professional engineer, time and preparation for soundchecks could be drastically reduced, and large venues and broadcasters could significantly cut manpower costs.

Taking away creativity
It's controversial. We entered an automatic mix in a student recording competition as a sort of Turing test. Technically we cheated, because the mixes were supposed to be made by students, not by an 'artificial intelligence' created by a student. Afterwards I asked the judges what they thought of the mix. The first two were surprised and curious when I told them how it was done. The third judge offered useful comments when he thought it was a student mix. But when I told him it was an 'automatic mix', he suddenly switched and said it was rubbish and he could tell all along.

Mixing is a creative process where stylistic decisions are made. Is this taking away creativity, is it taking away jobs? Such questions come up time and time again with new technologies, going back to 19th century protests by the Luddites, textile workers who feared that time spent on their skills and craft would be wasted as machines could replace their role in industry.

Not about replacing sound engineers
These are valid concerns, but it's important to see other perspectives. A tremendous amount of music production work is technical, and audio quality would be improved by addressing these technical problems. As the graffiti artist Banksy said, "All artists are willing to suffer for their work. But why are so few prepared to learn to draw?"

Creativity still requires technical skills. To achieve something wonderful when mixing music, you first have to achieve something pretty good and address issues with masking, microphone placement, level balancing and so on.

Video Caption: Time offset (comb filtering) correction, a technical problem in music production solved by an intelligent system.

The real benefit is not replacing sound engineers. It's dealing with all those situations when a talented engineer is not available: the band practicing in the garage, the small restaurant venue that does not provide any support, or game audio, where dozens of sounds need to be mixed and there is no miniature sound engineer living inside the games console.

Atom Tones – A periodic table of audible elements

Jill A. Linz –

Skidmore College, 815 N. Broadway, Saratoga Springs, NY, 12866, United States

Christian Howat
Skidmore College, Class of 2022
815 N. Broadway
Saratoga Springs, NY 12866

Popular version of 4aMU5-Atom Tones: investigating waveforms and spectra of atomic elements in an audible periodic chart using techniques found in music production, presented at the 183rd ASA Meeting.

Atom Tones is an audible periodic table that allows us to identify elements through sound and to investigate the atomic world with methods used by sound engineers. The periodic table of Atom Tones can be accessed on the Atom Tones website. The Atom Music project was introduced in 2019 and explained the background ideas for creating audible tones for each atom. Each tone is clearly unique and can be used to identify the element by its sound. Audible tones can also be used in conjunction with the visual interpretations of the sound’s waveform to possibly gain insight into the atom.

In the same way that sunlight can be decomposed into individual colors of the rainbow, light produced from different elements can be decomposed into rainbow-like patterns that are unique to that element. The rainbow colors of the element appear as a series of bright lines known as spectral lines, or atomic spectra. Figure 1 shows examples of several element patterns, along with the element’s signature tone. The pattern of lines is unique to each atom.

Figure 1: Spectral lines produced by three different elements. These lines are unique to each element and are used to identify the element itself. The tones can be heard by clicking on each image. Image courtesy of Linz's original paper (Proceedings of Meetings on Acoustics)

The relationship between music and physics is so intertwined that translating the spectral lines into sound is a relatively easy thing to do. Tedious perhaps, but not difficult. We can translate those colors into sounds of varying frequency, or pitch. These frequencies act like notes in a scale that can be played individually or combined. It is with these notes that we created the sounds of the elements.
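
The translation itself can be sketched in a few lines. The mapping below preserves the ratios between a set of spectral lines' optical frequencies and anchors the lowest at 440 Hz; this is one plausible scaling chosen for illustration, not necessarily the exact method used in Atom Tones.

```python
import numpy as np

# Visible-line wavelengths (nm) of hydrogen's Balmer series.
hydrogen_lines_nm = [656.3, 486.1, 434.0, 410.2]

def lines_to_tone(lines_nm, fs=44100, duration=1.0, f_ref=440.0):
    """Map each spectral line to an audible partial and sum the sines.
    The mapping (an assumption for this sketch) keeps the ratios between
    the lines' optical frequencies and pins the lowest one to f_ref Hz."""
    optical = np.array([3e8 / (w * 1e-9) for w in lines_nm])  # Hz
    audible = f_ref * optical / optical.min()
    t = np.arange(int(fs * duration)) / fs
    tone = sum(np.sin(2 * np.pi * f * t) for f in audible)
    return tone / np.max(np.abs(tone)), audible

tone, partials = lines_to_tone(hydrogen_lines_nm)
print(", ".join(f"{f:.1f} Hz" for f in partials))
```

Because the partials are generally not harmonically related, the resulting tone has the inharmonic, bell-like character that makes each element's sound distinctive.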

A sound engineer can easily identify specific types of musical instruments as well as the musical intervals and chords played by those instruments by observing the digital waveforms and spectra produced in a recording, in addition to simply listening by ear. Digital audio software adds an extra layer of insight to the sound. Figure 2 shows the different waveforms and spectral lines for a French Horn and Bassoon each playing the same note, D3.

Figure 2: Waveform and spectra of a French horn compared to a bassoon. Image courtesy of Linz's original paper (Proceedings of Meetings on Acoustics)

Using the techniques developed for audio recording and music synthesis, we can create an audible representation of each element. Possible ways to interpret the tones produced are being investigated. Figure 3 shows the waveforms and spectra for a few elements that exhibit wave patterns that repeat themselves. This is what a sound engineer would expect to see when the recording sounds harmonic, or musical.

Figure 3: These are a few atom tones whose waveforms exhibit similar patterns that repeat themselves. Image courtesy of Linz and Howat's original paper (Proceedings of Meetings on Acoustics)

Other elements exhibit very different patterns. The software allows you to zoom in and observe the pattern from different perspectives. Not only are we hearing the atoms for the first time; perhaps we are also seeing them in a new light.