Dataset Information
Multisensory integration of musical emotion perception in singing.


ABSTRACT: We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio-visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio-visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness was applied as a control. The uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate emotions composed into the music, and observers do not rely on audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.

SUBMITTER: Lange EB 

PROVIDER: S-EPMC9470688 | biostudies-literature | 2022 Oct

REPOSITORIES: biostudies-literature


Publications

Multisensory integration of musical emotion perception in singing.

Lange, Elke B.; Fünderich, Jens; Grimm, Hartmut

Psychological Research, 2022-01-10; (7)


Similar Datasets

| S-EPMC9907018 | biostudies-literature
| S-EPMC5638301 | biostudies-literature
| S-EPMC9285530 | biostudies-literature
| S-EPMC10904801 | biostudies-literature
| S-EPMC4324720 | biostudies-literature
| S-EPMC9924934 | biostudies-literature
| S-EPMC3485368 | biostudies-literature
| S-EPMC3409119 | biostudies-literature
| S-EPMC1978520 | biostudies-literature
| S-EPMC9902668 | biostudies-literature