
Dataset Information


Crossmodal benefits to vocal emotion perception in cochlear implant users.


ABSTRACT: Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), disregarding the communicative importance of efficiently integrating audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed below NH individuals, and VER correlated with quality of life. Importantly, CI users showed larger benefits to VER from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate the use of AV stimuli during CI rehabilitation and suggest the potential of caricaturing for both perceptual training and sound processor technology.

SUBMITTER: von Eiff CI 

PROVIDER: S-EPMC9791346 | biostudies-literature | 2022 Dec

REPOSITORIES: biostudies-literature


Publications

Crossmodal benefits to vocal emotion perception in cochlear implant users.

von Eiff, Celina Isabelle; Frühholz, Sascha; Korth, Daniela; Guntinas-Lichius, Orlando; Schweinberger, Stefan Robert

iScience, 2022 Dec 02, issue 12



Similar Datasets

| S-EPMC3448664 | biostudies-literature
| S-EPMC9197138 | biostudies-literature
| S-EPMC6518352 | biostudies-literature
| S-EPMC4617902 | biostudies-literature
| S-EPMC8785216 | biostudies-literature
| S-EPMC8669965 | biostudies-literature
| S-EPMC4297259 | biostudies-literature
| S-EPMC10593103 | biostudies-literature
| S-EPMC9560480 | biostudies-literature
| S-EPMC4430230 | biostudies-literature