Archi Banerjee – archibanerjee7@iitkgp.ac.in
Priyadarshi Patnaik – bapi@hss.iitkgp.ac.in
Rekhi Centre of Excellence for the Science of Happiness
Indian Institute of Technology Kharagpur, 721301, INDIA
Shankha Sanyal – ssanyal.ling@jadavpuruniversity.in
Souparno Roy – thesouparnoroy@gmail.com
Sir C. V. Raman Centre for Physics and Music
Jadavpur University, Kolkata: 700032, INDIA
Popular version of paper 4aMUa1, "Lyrics on the melody or melody of the lyrics?"
Presented Thursday morning, December 10, 2020
179th ASA Meeting, Acoustics Virtually Everywhere
Read the article in Proceedings of Meetings on Acoustics
Musicians often say, "Only when a marriage happens between a lyric and a melody is a true song born." But which affects the audience more in a song: the melody or the lyrics? The answer is still unknown. What happens when the melody is hummed on its own, without the lyrics? How does that change the acoustical waveform of the original song? Does the emotional appraisal stay the same in both cases? The present work attempts to answer these questions using songs from different genres of Indian music.

Recordings of two pairs of Raga bandishes evoking contrasting emotions (happy-sad) from Indian Classical Music (ICM) and one pair of Bengali contemporary songs of opposite emotions (happy-sad) were taken from an experienced female professional singer. She was asked to sing each piece with its proper, meaningful lyrics and then to hum it (without any lyrics or meaningful words), keeping the melodic structure, pitch, and tempo the same. In ICM, the basic building blocks are Ragas; a bandish is a song composed in a particular Raga. The chosen audio clips are:
| Genre | Chosen song | Primary emotion | Tempo |
| --- | --- | --- | --- |
| Indian Classical Music | Raga Multani vilambit bandish | Sad | ~45 bpm |
| Indian Classical Music | Raga Hamsadhwani vilambit bandish | Happy | ~50 bpm |
| Indian Classical Music | Raga Multani drut bandish | Sad | ~90 bpm |
| Indian Classical Music | Raga Hamsadhwani drut bandish | Happy | ~110 bpm |
| Bengali Contemporary Music | O tota pakhi re | Sad | ~50 bpm |
| Bengali Contemporary Music | Ami cheye cheye dekhi | Happy | ~130 bpm |
Audio 1(a,b): Sample audios of (a) Humming and (b) Song versions of same melodic part
Figure 1(a,b): Sample acoustical waveforms and pitch contours of (a) Humming and (b) Song versions of same melodic part
Next, using different humming-song pairs from the chosen pieces as stimuli, electroencephalogram (EEG) recordings were taken from 5 musically untrained participants who understood Hindi (the language of the bandishes) and Bengali. Both music and EEG signals have highly complex structures, but their inherent geometry features self-similarity, i.e., structural repetitions across scales. A chaos-based nonlinear fractal technique, Detrended Fluctuation Analysis (DFA), was applied both to the acoustical waveforms and to the corresponding EEG signals. The change in self-similarity was calculated for each humming-song pair to study the impact of lyrics at both the acoustical and the neurological level.
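The paper does not spell out the DFA computation, but the standard first-order algorithm it refers to can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline; the function name `dfa_exponent` and the choice of window sizes are assumptions for the example.

```python
import numpy as np

def dfa_exponent(signal, scales=None):
    """Estimate the DFA scaling exponent (the self-similarity measure).

    Standard first-order DFA: integrate the mean-removed signal,
    split the profile into non-overlapping windows of several sizes,
    remove a linear trend from each window, and fit the log-log slope
    of the average fluctuation versus window size.
    """
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())                      # integrated profile
    if scales is None:                               # ~12 log-spaced window sizes (assumed choice)
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n_win = len(y) // s
        segments = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)             # linear detrending per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # The slope alpha of log F(s) vs log s is the scaling exponent:
    # alpha ~ 0.5 for uncorrelated noise, higher for self-similar signals.
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

As a sanity check, white noise yields an exponent near 0.5, while its running sum (Brownian-like, strongly self-similar) yields an exponent near 1.5.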
Figure 2(a,b): Variation in DFA scaling exponent in acoustical signals of humming-song pairs taken from songs of (a) Indian Classical music and (b) Bengali contemporary music
Acoustical analysis revealed that in songs where the lyrics are highly projected or emphasized (the slow-tempo vilambit bandishes and the Bengali contemporary songs), the DFA scaling exponent, and hence the self-similarity, decreases from the humming to the song version even though the melodic pattern remains the same. The sudden, spontaneous fluctuations in pitch and intensity in the song versions, introduced by the many consonants, rhythmic variations, and pauses between words embedded in the lyrics, may account for this lowering of self-similarity.
Figure 3(a,b): Average of differences in DFA scaling exponent in (a) Frontal electrodes (F3, F4, F7, F8 & FZ) and (b) Occipital (O1 & O2), Parietal (P3 & P4) and Temporal (T3 & T4) electrodes for different Humming-Song pairs of Bengali contemporary songs
EEG analysis revealed that, for both genres, in songs with highly projected lyrics the self-similarity at the frontal-lobe electrodes increases from the humming to the song version of the same melody. At the occipital, temporal, and parietal electrodes, the DFA scaling exponent increased from humming to song for the slow-tempo songs but decreased for the high-tempo songs.
Combining the acoustic and EEG results, the impact of lyrics was found to be significantly higher in slower-tempo songs than in faster-tempo songs, at both the acoustical and the neuro-cognitive level. This pilot study on Indian music attempts to quantitatively analyze, with a single scaling exponent, the contribution of lyrics to songs of different genres and different emotional content in both the acoustical and the neuro-cognitive domain.