
Proverbio, A., Russo, F. (2022). Multimodal recognition of emotions in music and language. PSYCHOLOGY OF MUSIC, 50(1), 54-68 [10.1177/0305735620978697].

Multimodal recognition of emotions in music and language

Proverbio A. M. (first author)
2022

Abstract

We investigated through electrophysiological recordings how music-induced emotions are recognized and combined with the emotional content of written sentences. Twenty-four sad, joyful, and frightening musical tracks were presented to 16 participants reading 270 short sentences conveying a sad, joyful, or frightening emotional meaning. Audiovisual stimuli could be emotionally congruent or incongruent with each other; participants were asked to pay attention and respond only to filler sentences containing city names, ignoring the rest. The amplitude values of event-related potentials (ERPs) were subjected to repeated-measures ANOVAs. Distinct electrophysiological markers were identified for the processing of stimuli inducing fear (N450, whether linguistic or musical), for language-induced sadness (P300), and for joyful music (positive P2 and LP potentials). The music/language emotional discordance elicited a large N400 mismatch response (p = .032). Its strongest intracranial source was the right superior temporal gyrus (STG), a region devoted to multisensory integration of emotions. The results suggest that music can communicate emotional meaning as distinctively as language.
Journal article - Scientific article
event-related potentials; incongruence; mismatch response; multimodal stimuli; musical emotions; reading; STG
English
7 Jan 2021
Year: 2022
Volume: 50
Issue: 1
Pages: 54-68
Rights: reserved
Files in this product:
File: PsycholMusic.pdf (restricted to archive managers; request a copy)
Attachment type: Publisher's Version (Version of Record, VoR)
Size: 1.05 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/299083
Citations
  • Scopus 8
  • Web of Science 6