1aNS5 – Noise, vibration, and harshness (NVH) of smartphones – Inman Jang

Noise, vibration, and harshness (NVH) of smartphones

Inman Jang – kgpbjim@yonsei.ac.kr
Tae-Young Park – pty0948@yonsei.ac.kr
Won-Suk Ohm – ohm@yonsei.ac.kr
Yonsei University
50, Yonsei-ro, Seodaemun-gu
Seoul 03722
Korea

Heungkil Park – heungkil.park@samsung.com
Samsung Electro Mechanics Co., Ltd.
150, Maeyeong-ro, Yeongtong-gu
Suwon-si, Gyeonggi-do 16674
Korea

Popular version of paper 1aNS5, “Controlling smartphone vibration and noise”
Presented Monday morning, November 28, 2016
172nd ASA Meeting, Honolulu

Noise, vibration, and harshness, also known as NVH, refers to the comprehensive engineering of the noise and vibration of a device through the stages of production, transmission, and human perception. NVH is a primary concern in the car and home-appliance industries because many consumers take the quality of noise into account when making buying decisions. For example, a car that sounds too quiet (unsafe) or too loud (uncomfortable) is a definite turnoff. That said, a smartphone may strike you as an acoustically innocuous device (unless Metallica ringtones are not your thing), for which the application of NVH seems unwarranted. After all, who would expect the roar of a Harley from a smartphone? But think again. Although small in amplitude (less than 30 dB), the audible buzz a smartphone emits can, because of its close proximity to the ear, degrade call quality and cause annoyance.

Figure 1: Smartphone noise caused by MLCCs
The major culprit behind smartphone noise is the collective vibration of tiny electronic components known as multi-layered ceramic capacitors (MLCCs). An MLCC is basically a capacitor made of piezoelectric ceramics, which expand and contract upon the application of voltage (hence piezoelectric). A typical smartphone has a few hundred MLCCs soldered to the circuit board inside. The almost simultaneous pulsations of these MLCCs are transmitted to and amplified by the circuit board, whose vibration eventually produces the distinct buzzing noise shown in Fig. 1. (Imagine a couple hundred rambunctious little kids jumping up and down on a floor almost in unison!) The problem has been exacerbated by the recent trend in which the name of the game is "the slimmer the better": because a slimmer circuit board flexes more easily, it transmits and produces more vibration and noise.
Recently, Yonsei University and Samsung Electro-Mechanics in South Korea joined forces to address this problem. Their comprehensive NVH regime includes the visualization of smartphone noise and vibration (transmission), the identification and replacement of the most problematic MLCCs (production), and the evaluation of the harshness of the smartphone noise (human perception).

To visualize smartphone noise, a technique known as near-field acoustic holography is used to produce a sound map, shown in Fig. 2, in which the spatial distribution of sound pressure, acoustic intensity, or surface velocity is overlaid on a snapshot of the smartphone. Such sound maps help smartphone designers draw a detailed mental picture of what is going on acoustically and identify the groups of MLCCs most responsible for the vibration of the circuit board. Engineers can then take corrective action by replacing the (cheap) problematic MLCCs with (expensive) low-vibration MLCCs.

Lastly, the outcome of the noise and vibration engineering is measured not only in terms of physical attributes such as sound pressure level, but also in terms of psychological correlates such as loudness and overall psychoacoustic annoyance. This three-pronged strategy (addressing production, transmission, and human perception) has proven highly effective, and Samsung Electro-Mechanics currently offers the NVH service to a number of major smartphone vendors around the world.

Figure 2: Sound map of a smartphone surface
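For the technically curious, here is a minimal sketch of how a sound map like the one in Figure 2 can be reconstructed using near-field acoustic holography, via the standard angular-spectrum method. It is written in Python with NumPy as an illustration only: the array size, grid spacing, frequency, and distances below are hypothetical stand-ins, not the parameters of the Yonsei/Samsung system.

import numpy as np

def nah_backpropagate(p_holo, dx, freq, z_holo, z_src, c=343.0):
    """Back-propagate a complex pressure hologram measured on the plane
    z = z_holo to the source plane z = z_src (z_src < z_holo)."""
    k = 2 * np.pi * freq / c                        # acoustic wavenumber
    ny, nx = p_holo.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)       # spatial frequencies
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    # Real kz: propagating waves; imaginary kz: evanescent waves, which decay
    # away from the source and are therefore amplified by back-propagation.
    kz = np.where(kz2 >= 0, np.sqrt(np.abs(kz2)), 1j * np.sqrt(np.abs(kz2)))
    P = np.fft.fft2(p_holo)                         # angular spectrum
    P_src = P * np.exp(1j * kz * (z_src - z_holo))  # inverse propagator
    # Crude regularization: discard strongly evanescent components that the
    # inverse propagator would otherwise blow up into measurement noise.
    P_src[KX**2 + KY**2 > (2.0 * k)**2] = 0.0
    return np.fft.ifft2(P_src)

# Hypothetical use: a 64 x 64 hologram sampled every 5 mm, measured 10 mm
# above the phone, reconstructed at the phone surface for a 2 kHz component.
p_holo = np.zeros((64, 64), dtype=complex)          # stand-in for array data
p_surface = nah_backpropagate(p_holo, dx=0.005, freq=2000.0,
                              z_holo=0.010, z_src=0.0)

From the reconstructed surface pressure, quantities such as surface velocity follow from Euler's equation, which is how the different overlay layers of a sound map can be obtained.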

What Does Your Signature Sound Like? – Daichi Asakura

What Does Your Signature Sound Like?

Daichi Asakura – asakura@pa.info.mie-u.ac.jp
Mie University
Tsu, Mie, Japan

Popular version of poster, “Writer recognition with a sound in hand-writing”

172nd ASA Meeting, Honolulu

We can notice a car approaching by the noise it makes on the road, and we can recognize a person by the sound of their footsteps. There are many studies analyzing and recognizing such noises. In the computer security field, studies have even been proposed to estimate what is being typed from the sound of typing on a keyboard [1] and to extract RSA keys from the noises made by a PC [2].

Of course, there is a relationship between a noise and its cause, and the noise therefore contains information. The sound of a person writing, or "handwriting sound," is one of the noises in our everyday environment. A previous study addressed the recognition of handwritten numeric characters from the resulting sound, finding an average recognition rate of 88.4% [3]. Building on this work, we explored the possibility of recognizing and identifying a writer from the sound of their handwriting. If accurate identification is possible, it could become a method of signature verification that never requires looking at the signature itself.

We conducted recognition experiments using the handwriting sounds of nine participants. We asked them to write the same text, names in Kanji (Chinese characters), under several different conditions, such as writing slowly or writing on a different day. Figure 1 shows an example spectrogram of a handwriting sound we analyzed. The horizontal axis represents time and the vertical axis frequency. Colors represent the magnitude, or intensity, of the frequencies, where red indicates high intensity and blue low.

The spectrograms showed features corresponding to the number of strokes in the Kanji. We used a recognition system based on a hidden Markov model (HMM), of the kind typically used for speech recognition, which represents transitions of spectral patterns as they evolve in time. The results showed an average identification rate of 66.3%, indicating that writer identification is possible in this manner. However, the identification rate decreased under certain conditions, especially slow writing.
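For readers curious how such a recognizer might be put together, below is a minimal Python sketch of HMM-based writer identification from handwriting sound. It assumes the open-source hmmlearn and librosa libraries and uses MFCC features; the feature and model choices are illustrative guesses, not the settings used in our study.

import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(wav_path):
    """Load one handwriting-sound recording and return its MFCC frames
    (one row per time frame, one column per coefficient)."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_writer_models(recordings_by_writer, n_states=5):
    """Fit one Gaussian HMM per writer on that writer's recordings."""
    models = {}
    for writer, paths in recordings_by_writer.items():
        feats = [mfcc_features(p) for p in paths]
        X = np.vstack(feats)                  # stack all frames
        lengths = [len(f) for f in feats]     # per-recording frame counts
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        models[writer] = model
    return models

def identify_writer(models, wav_path):
    """Label a test sound with the writer whose HMM scores it highest."""
    X = mfcc_features(wav_path)
    return max(models, key=lambda w: models[w].score(X))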

To improve performance, we need to increase the number of handwriting samples and include a wider variety of written texts and participants. We also intend to include the writing of English characters and numbers. We expect that deep learning, which is attracting increasing attention around the world, will also help us achieve a higher recognition rate in future experiments.

  1. Zhuang, L., Zhou, F., and Tygar, J. D., Keyboard Acoustic Emanations Revisited, ACM Transactions on Information and Systems Security, 2009, vol.13, no.1, article 3, pp.1-26.
  2. Genkin, D., Shamir, A., and Tromer, E., RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis, Proceedings of CRYPTO 2014, 2014, pp.444-461.
  3. Kitano, S., Nishino, T., and Naruse, H., Handwritten digit recognition from writing sound using HMM, 2013, Technical Report of the Institute of Electronics, Information and Communication Engineers, vol.113, no.346, pp.121-125.

1aSC31 – Shape changing artificial ear inspired by bats enriches speech signals – Anupam K Gupta

Shape changing artificial ear inspired by bats enriches speech signals

Anupam K. Gupta1,2, Jin-Ping Han2, Philip Caspers1, Xiaodong Cui2, Rolf Müller1

1 Dept. of Mechanical Engineering, Virginia Tech, Blacksburg, VA, USA
2 IBM T. J. Watson Research Center, Yorktown Heights, NY, USA

Contact: Jin-Ping Han – hanjp@us.ibm.com

Popular version of paper 1aSC31, “Horseshoe bat inspired reception dynamics embed dynamic features into speech signals.”
Presented Monday morning, November 28, 2016
172nd ASA Meeting, Honolulu

Have you ever had difficulty understanding what someone was saying to you while walking down a busy big city street, or in a crowded restaurant? Even if that person was right next to you? Words can become difficult to make out when they get jumbled with the ambient noise – cars honking, other voices – making it hard for our ears to pick up what we want to hear. But this is not so for bats. Their ears can move and change shape to precisely pick out specific sounds in their environment.

This biosonar capability inspired our research into artificial ears, with the goal of improving the accuracy of automatic speech recognition (ASR) systems and speaker localization. We asked: could we enrich a speech signal with direction-dependent, dynamic features by using bat-inspired reception dynamics?

Horseshoe bats, for example, are found throughout Africa, Europe, and Asia, and are so named for the shape of their noses. They can change the shape of their outer ears to help extract additional information about the environment from incoming ultrasonic echoes. Their sophisticated biosonar systems emit ultrasonic pulses and listen to the echoes that reflect back after hitting surrounding objects, changing ear shape as they listen (something other mammals cannot do). This allows them to learn about the environment, helping them navigate and hunt in their home of dense forests.

While probing the environment, horseshoe bats change their ear shape to modulate the incoming echoes, increasing the information content embedded in them. We believe this shape change is one of the reasons bats' sonar exhibits such high performance compared with technical sonar systems of similar size.

To test this, we first built a robotic bat head that mimics the ear shape changes we observed in horseshoe bats.

Figure 1: Horseshoe bat inspired robotic set-up used to record speech signals

We then recorded speech signals to explore whether using shape change, inspired by the bats, could embed direction-dependent dynamic features into speech signals. The potential applications range from improving hearing-aid accuracy to helping a machine more accurately hear, and learn from, sounds in real-world environments.

We compiled a digital dataset of 11 US English speakers from open-source speech collections provided by Carnegie Mellon University. The utterances were shifted into the ultrasonic domain and played to the robot while the biomimetic bat head actively moved its ears; microphones at the base of the ears picked up the modulated signals, which were then translated back to the speech domain to recover the original signal.
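The shifting into and out of the ultrasonic domain can be done with a generic single-sideband heterodyne built on the analytic signal. The Python/SciPy sketch below shows the idea only; the 40 kHz shift and the sample rate are assumptions for illustration, not the actual parameters of our signal chain.

import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, shift_hz, fs):
    """Shift every spectral component of x by shift_hz (positive = up)."""
    analytic = hilbert(x)                 # analytic signal: no negative freqs
    t = np.arange(len(x)) / fs
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

fs = 192_000                              # rate high enough to hold ultrasound
speech = np.random.randn(fs)              # stand-in for a 1-second speech clip
ultra = frequency_shift(speech, 40_000, fs)       # up into the ultrasonic band
recovered = frequency_shift(ultra, -40_000, fs)   # back down to the speech band
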
This pilot study, performed at IBM Research in collaboration with Virginia Tech, showed that the ear shape change was, in fact, able to significantly modulate the signal, and concluded that these changes, as in horseshoe bats, embed dynamic patterns into speech signals.

The dynamically enriched data improved the accuracy of speech recognition. Compared with a traditional system for hearing and recognizing speech in noisy environments, adding structural movement to a complex outer shape surrounding a microphone, mimicking an ear, significantly improved its performance and access to directional information. In the future, this might improve the performance of devices operating in difficult hearing scenarios, like a busy street in a metropolitan center.

Figure 2: Example of a speech signal recorded without and with the dynamic ear. Top row: speech signal without the dynamic ear; bottom row: speech signal with the dynamic ear.

4aPPa24 – Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task – Takahiro Tamesue

Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task

Takahiro Tamesue – tamesue@yamaguchi-u.ac.jp
Yamaguchi University
1677-1 Yoshida, Yamaguchi
Yamaguchi Prefecture 753-8511
Japan

Popular version of poster 4aPPa24, “Effects of meaningful or meaningless noise on psychological impression for annoyance and selective attention to stimuli during intellectual task”
Presented Thursday morning, December 1, 2016
172nd ASA Meeting, Honolulu

Open offices that make effective use of limited space and encourage dialogue, interaction, and collaboration among employees are becoming increasingly common. However, productive work-related conversation might actually decrease the performance of other employees within earshot, more so than random, meaningless noise. When carrying out intellectual activities involving memory or arithmetic tasks, it is a common experience for noise to heighten the psychological impression of annoyance, leading to a decline in performance, and this is more apparent for meaningful noise, such as conversation, than for meaningless noise. In this study, we investigated the impact of meaningless and meaningful noise on volunteers' selective attention and cognitive performance, as well as the degree of subjective annoyance caused by those noises, through physiological and psychological experiments.

The experiments were based on the so-called odd-ball paradigm, a test used to examine selective attention and information-processing ability. In the odd-ball paradigm, subjects detect and count rare target events embedded in a series of repetitive events; completing the task requires sustained attention to the stimuli. In one trial, subjects had to count the number of times an infrequent target sound occurred amid meaningless or meaningful noise over a 10-minute period. The infrequent sound, appearing 20% of the time, was a 2 kHz tone burst; the frequent sound was a 1 kHz tone burst. In a visual odd-ball test, subjects observed pictures flashing on a PC monitor while meaningless or meaningful sounds were played to both ears through headphones. The infrequent image was a 10 x 10 centimeter red square; the frequent one was a green square. At the end of each trial, the subjects also rated their level of annoyance at each sound on a seven-point scale.

During the experiments, the subjects' brain waves were measured through electrodes placed on their scalps. In particular, we looked at what are called event-related potentials: very small voltages, generated in brain structures in response to specific events or stimuli, that appear in electroencephalograph waveforms. Example waveforms of event-related potentials under no external noise, after appropriate averaging, are shown in Figure 1. The so-called N100 component peaks negatively about 100 milliseconds after the stimulus, and the P300 component peaks positively around 300 milliseconds after the stimulus; both are related to selective attention and working memory. Figures 2 and 3 show the event-related potentials for the infrequent sound under meaningless and meaningful noise. Under meaningful noise, the N100 and P300 components are smaller in amplitude and longer in latency than under meaningless noise.

Figure 1. Averaged waveforms of event-related potentials for the infrequent sound under no external noise.

Figure 2. Averaged waveforms of event-related potentials for the infrequent sound under meaningless noise.

Figure 3. Averaged waveforms of auditory event-related potentials under meaningful noise.
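The averaging behind Figures 1-3 works by cutting the EEG into short epochs time-locked to each stimulus and averaging them, so that brain activity unrelated to the stimulus tends to cancel while the stimulus-locked N100 and P300 components remain. Below is a minimal Python sketch of this generic procedure, with an assumed sample rate and window lengths rather than our actual recording parameters.

import numpy as np

def average_erp(eeg, onsets, fs, pre=0.1, post=0.5):
    """Average EEG epochs time-locked to stimulus onsets (sample indices)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in onsets:
        if onset - n_pre < 0 or onset + n_post > len(eeg):
            continue                      # skip epochs running off the record
        epoch = eeg[onset - n_pre:onset + n_post].astype(float)
        epoch -= epoch[:n_pre].mean()     # baseline-correct on the pre-stimulus part
        epochs.append(epoch)
    # Averaging N epochs shrinks non-time-locked activity roughly as 1/sqrt(N).
    return np.mean(epochs, axis=0)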

We employed a statistical method called principal component analysis to identify the latent components of the event-related potentials; four principal components were extracted, as shown in Figure 4. Because the component scores under meaningful noise were smaller than under the other noise conditions, meaningful noise appears to reduce these components. Thus, selective attention to cognitive tasks was influenced by the degree of meaningfulness of the noise.

Figure 4. Loadings of principal component analysis
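As a rough illustration of this analysis step, the sketch below applies principal component analysis, via scikit-learn, to a matrix of averaged ERP waveforms, one row per waveform and one column per time sample. The data here are random stand-ins with an assumed shape; only the four-component choice follows the text.

import numpy as np
from sklearn.decomposition import PCA

erps = np.random.randn(40, 600)        # stand-in: 40 waveforms x 600 samples

pca = PCA(n_components=4)              # extract four latent components
scores = pca.fit_transform(erps)       # component scores per waveform
loadings = pca.components_             # time courses ("loadings") of the components
print(pca.explained_variance_ratio_)   # variance captured by each component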

Figure 5 shows the results for annoyance in the auditory odd-ball paradigm. The subjective experience of annoyance in response to noise increased with the meaningfulness of the noise. In short, whether noise is meaningless or meaningful strongly influences not only selective attention to auditory stimuli during cognitive tasks but also the subjective experience of annoyance.

Figure 5. Subjective experience of annoyance (auditory odd-ball paradigm)

This means that when designing sound environments in spaces used for cognitive tasks, such as workplaces or schools, it is appropriate to consider not only the sound level but also the meaningfulness of the noise likely to be present. Surrounding conversations often disturb the work conducted in open offices. Because it is difficult to soundproof an open office, a way to mask meaningful speech with some other sound would be of great benefit in achieving a comfortable sound environment.

 

2aNS – How virtual reality technologies can enable better soundscape design – Chung

How virtual reality technologies can enable better soundscape design

W.M. To – wmto@ipm.edu.mo
Macao Polytechnic Institute, Macao SAR, China.
A. Chung – ac@smartcitymaker.com
Smart City Maker, Denmark.
B. Schulte-Fortkamp – b.schulte-fortkamp@tu-berlin.de
Technische Universität Berlin, Berlin, Germany.

Popular version of paper 2aNS, “How virtual reality technologies can enable better soundscape design”
Presented Tuesday morning, November 29, 2016
172nd ASA Meeting, Honolulu

Good quality of life, including good sound quality, is sought by community members as part of smart city initiatives. While many governments have paid special attention to waste management and air and water pollution, attention to the acoustic environment in cities has been directed mainly toward the control of noise, in particular transportation noise. Governments that care about tranquility in cities rely primarily on setting so-called acceptable noise levels, i.e., quantities used to judge compliance and improvement [1]. Sound quality is most often ignored. Recently, the International Organization for Standardization (ISO) released a standard on soundscape [2]. However, sound quality is a subjective matter and depends heavily on the perception of humans in different contexts [3]. For example, China's public parks are well known to be rather noisy in the morning due to the activities of boisterous amateur musicians and dancers, many of them retirees and housewives, or "Da Ma" [4]. These activities would cause numerous complaints if they happened in other parts of the world, but in China they are part of everyday life.

According to the ISO soundscape guideline, people can use soundwalks, questionnaire surveys, and even lab tests to determine sound quality during a soundscape design process [3]. With the advance of virtual reality technologies, we believe it is now possible to create an application that immerses designers and community stakeholders in a virtual environment, letting them perceive and compare changes in sound quality and provide feedback on different soundscape designs. An app has been developed specifically for this purpose. Figure 1 shows a simulated environment in which a student or visitor arrives at the school's campus, walks through the lawn, passes a multifunctional court, and enters an open area with table tennis tables. She or he can experience different ambient sounds and can click an object to increase or decrease the volume of the sound from that object. After hearing sounds from different sources at different locations, the person can evaluate the level of acoustic comfort at each location and express their feelings toward the overall soundscape. She or he can rate the sonic environment by its perceived loudness and its pleasantness on 5-point scales from 1 = "heard nothing/not at all pleasant" to 5 = "very loud/very pleasant." In addition, she or he can describe the acoustic environment and soundscape in free words, reflecting the multi-dimensional nature of the sonic environment.

Figure 1. A simulated soundwalk in a school campus.
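As a sketch of the interaction and the data such an app must collect, the Python snippet below models per-object volume clicks and the 5-point loudness/pleasantness ratings described above. The object names and data layout are hypothetical illustrations, not taken from the actual app.

from dataclasses import dataclass, field

@dataclass
class SoundObject:
    name: str
    gain: float = 1.0                  # linear playback gain for this source

    def click(self, up=True, step=0.1):
        """One click on the object raises or lowers its volume."""
        self.gain = max(0.0, self.gain + (step if up else -step))

@dataclass
class LocationRating:
    location: str
    loudness: int                      # 1 = heard nothing ... 5 = very loud
    pleasantness: int                  # 1 = not at all ... 5 = very pleasant
    free_words: list = field(default_factory=list)

scene = [SoundObject("table_tennis"), SoundObject("court"), SoundObject("birdsong")]
scene[0].click(up=False)               # the visitor turns the table tennis down
ratings = [LocationRating("open_area", loudness=4, pleasantness=2,
                          free_words=["clattering", "lively"])]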

  1. To, W. M., Mak, C. M., and Chung, W. L. Are the noise levels acceptable in a built environment like Hong Kong? Noise and Health, 2015, 17(79): 429-439.
  2. ISO. ISO 12913-1:2014 Acoustics – Soundscape – Part 1: Definition and Conceptual Framework, Geneva: International Organization for Standardization, 2014.
  3. Kang, J. and Schulte-Fortkamp, B. (Eds.). Soundscape and the Built Environment, CRC Press, 2016.
  4. Buckley, C. and Wu, A. In China, the 'Noisiest Park in the World' Tries to Tone Down Rowdy Retirees, NYTimes.com, http://www.nytimes.com/2016/07/04/world/asia/china-chengdu-park-noise.html, 2016.