Temporal Dynamics of Natural Static Emotional Facial Expressions Decoding: A Study Using Event- and Eye Fixation-Related Potentials.
ABSTRACT: This study examines the precise temporal dynamics of emotional facial decoding as it unfolds in the brain, according to the emotions displayed. To characterize this processing as it occurs in ecological settings, we focused on unconstrained visual exploration of natural emotional faces (i.e., free eye movements). The General Linear Model (GLM; Smith and Kutas, 2015a,b; Kristensen et al., 2017a) enables such a depiction: it deconvolves the adjacent, overlapping responses of the eye fixation-related potentials (EFRPs) elicited by successive fixations and of the event-related potentials (ERPs) elicited at stimulus onset. Nineteen participants were presented with spontaneous static facial expressions of emotion (Neutral, Disgust, Surprise, and Happiness) from the DynEmo database (Tcherkassof et al., 2013). Behavioral results on participants' eye movements show that the usual diagnostic features in emotional decoding (the eyes for negative facial displays and the mouth for positive ones) are consistent with the literature. The impact of emotional category on the temporal dynamics of emotional facial expression processing is observed for both the ERPs and the EFRPs elicited by free exploration of the emotional faces. Regarding the ERP at stimulus onset, the ERPs computed by averaging show a significant emotion-dependent modulation of the amplitude of the P2-P3 complex and of the LPP component at the left frontal site. The GLM, however, reveals the impact of subsequent fixations on the ERPs time-locked to stimulus onset. Results are also in line with the valence hypothesis. The observed differences between the two estimation methods (average vs. GLM) suggest the predominance of the right hemisphere at stimulus onset and the involvement of the left hemisphere in processing the information encoded by subsequent fixations. Concerning the first EFRP, the lambda response and the P2 component at parieto-occipital sites are modulated by surprise relative to the neutral expression, suggesting an impact of high-level factors; no difference is observed for the second and subsequent EFRPs. Taken together, the results stress the significant gain obtained by analyzing EFRPs with the GLM method and pave the way toward efficient analyses of ecological, dynamic emotional stimuli.
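As an illustration of the deconvolution idea behind this GLM approach, the following minimal Python sketch builds a stick-function (FIR) design matrix with one regressor per post-event latency, separately for stimulus onsets and fixation onsets, and solves it by least squares so that the overlapping stimulus-locked and fixation-locked responses are estimated jointly. The sampling rate, event times, window length, and variable names are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np

# Minimal sketch of GLM (linear deconvolution) of overlapping responses:
# a stick-function design matrix with one column per post-event latency,
# for two event types (stimulus onset vs. fixation onset), solved jointly.
fs = 500                      # sampling rate (Hz), assumed
n_samples = 5 * fs            # 5 s of one EEG channel
win = int(0.6 * fs)           # 600-ms estimation window per event type

stim_onsets = np.array([int(0.5 * fs)])                 # stimulus onset (samples)
fix_onsets = np.array([int(0.8 * fs), int(1.1 * fs)])   # subsequent fixations

def stick_design(onsets, n_samples, win):
    """FIR basis: one column per post-event latency."""
    X = np.zeros((n_samples, win))
    for t0 in onsets:
        for lag in range(win):
            if t0 + lag < n_samples:
                X[t0 + lag, lag] += 1.0
    return X

# Full design matrix: [stimulus-locked FIR | fixation-locked FIR]
X = np.hstack([stick_design(stim_onsets, n_samples, win),
               stick_design(fix_onsets, n_samples, win)])

eeg = np.random.randn(n_samples)   # placeholder for one channel of data

# Least squares: first `win` betas = deconvolved ERP, next `win` = EFRP,
# with the overlap between adjacent responses accounted for jointly.
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)
erp_est, efrp_est = betas[:win], betas[win:]
```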
Project description:Visual mismatch negativity (vMMN), a component in event-related potentials (ERPs), can be elicited when rarely presented "deviant" facial expressions violate the regularity formed by repeated "standard" faces. vMMN is observed as differential ERPs elicited between the deviant and standard faces. It is not clear, however, whether differential ERPs to rare emotional faces interspersed with repeated neutral ones reflect true vMMN (i.e., detection of regularity violation) or merely encoding of the emotional content in the faces. Furthermore, the face-sensitive N170 response, which reflects structural encoding of facial features, can be modulated by emotional expressions. Because its latency and scalp topography are similar to those of the vMMN, the two components are difficult to separate. We recorded ERPs to neutral, fearful, and happy faces in two different stimulus presentation conditions in adult humans. For the oddball condition group, frequently presented neutral expressions (p = 0.8) were rarely replaced by happy or fearful expressions (p = 0.1), whereas for the equiprobable condition group, fearful, happy, and neutral expressions were presented with equal probability (p = 0.33). Independent component analysis (ICA) revealed two prominent components in both stimulus conditions in the relevant latency range and scalp location. A component peaking at 130 ms post stimulus showed a difference in scalp topography between the oddball (bilateral) and the equiprobable (right-dominant) conditions. The other component, peaking at 170 ms post stimulus, showed no difference between the conditions. The bilateral component at the 130-ms latency in the oddball condition conforms to vMMN. Moreover, it was distinct from the N170, which was modulated only by the emotional expression. The present results suggest that future studies on vMMN to facial expressions should take into account possible confounding effects caused by the differential processing of the emotional expressions as such.
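The following minimal sketch illustrates how ICA can separate two temporally overlapping ERP components (one peaking near 130 ms, one near 170 ms) using scikit-learn's FastICA on a channels-by-time matrix. The data are synthetic and the component labels are assumptions, not the study's recordings.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Sketch: separate two temporally overlapping ERP components with ICA.
rng = np.random.default_rng(0)
fs, n_ch = 500, 32
t = np.arange(0, 0.4, 1 / fs)                      # 0-400 ms epoch

# Two latent sources: one peaking ~130 ms (vMMN-like), one ~170 ms (N170-like)
src_130 = np.exp(-((t - 0.130) ** 2) / (2 * 0.015 ** 2))
src_170 = np.exp(-((t - 0.170) ** 2) / (2 * 0.015 ** 2))
S = np.vstack([src_130, src_170])                  # (2, n_times)

A = rng.normal(size=(n_ch, 2))                     # mixing matrix = scalp topographies
erp = A @ S + 0.05 * rng.normal(size=(n_ch, len(t)))  # channels x time

# ICA on time courses: rows = observations (time points), columns = channels
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(erp.T)                 # (n_times, 2) component waveforms
topographies = ica.mixing_                         # (n_ch, 2) scalp maps

# Peak latencies of the recovered components (order and sign are arbitrary)
peak_latencies_ms = t[np.abs(sources).argmax(axis=0)] * 1000
print(peak_latencies_ms)                           # approximately 130 ms and 170 ms
```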
Project description:Eye fixations on packaging elements are not necessarily correlated with consumer attention or positive emotions toward those elements. This study aimed to assess links between the emotional responses of consumers and their eye fixations on areas of interest (AOI) of different chocolate packaging designs using eye trackers. Sixty participants were exposed to six novel and six familiar (commercial) chocolate packaging concepts on tablet PC screens. Analysis of variance (ANOVA) and multivariate analyses were performed on eye tracking, facial expression, and self-reported responses. The results showed significant positive correlations between liking and familiarity for the commercially available concepts (r = 0.88), whereas for the novel concepts there were no significant correlations. Overall, the total number of fixations on the familiar packaging was positively correlated (r = 0.78) with the positive emotion measured with FaceReader™ (Happy), while fixations on the novel packaging were not correlated with any emotion. Fixations on a specific AOI were not linked to positive emotions, since in some cases they were related to negative emotions or not associated with any emotion at all. These findings can be used by package designers to better understand the link between the emotional responses of consumers and their eye fixation patterns for specific AOI.
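A minimal sketch of the kind of correlation reported above (total fixations on a packaging concept vs. a FaceReader "Happy" score per participant), using synthetic data; the variable names and values are illustrative only.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch: correlate total fixations on familiar packaging with the
# FaceReader "Happy" intensity, per participant (synthetic data).
rng = np.random.default_rng(1)
n_participants = 60
total_fixations_familiar = rng.poisson(lam=25, size=n_participants)
happy_score = 0.02 * total_fixations_familiar + rng.normal(0, 0.1, n_participants)

r, p = pearsonr(total_fixations_familiar, happy_score)
print(f"r = {r:.2f}, p = {p:.3f}")   # the study reports r = 0.78 for familiar packaging
```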
Project description:There is evidence that women are better at recognizing their own and others' emotions. This female advantage in emotion recognition becomes even more apparent under conditions of rapid stimulus presentation. Affective priming paradigms have been developed to examine empirically whether facial emotion stimuli presented outside of conscious awareness color our impressions. It has been observed that masked emotional facial expressions have an affect-congruent influence on subsequent judgments of neutral stimuli. The aim of the present study was to examine the effect of gender on affective priming based on negative and positive facial expressions. In our priming experiment, a sad, happy, neutral, or no facial expression was briefly presented (for 33 ms) and masked by a neutral face that had to be evaluated. Eighty-one young healthy volunteers (53 women) participated in the study. Subjects had no subjective awareness of the emotional primes. Women did not differ from men with regard to age, education, intelligence, trait anxiety, or depressivity. In the whole sample, happy but not sad facial expressions elicited valence-congruent affective priming. Between-group analyses revealed that women manifested greater affective priming in response to happy faces than men did. Women thus seem to have a greater ability than men to perceive and respond to positive facial emotion at an automatic processing level. High perceptual sensitivity to minimal social-affective signals may contribute to women's advantage in understanding other persons' emotional states.
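A minimal sketch of the between-group comparison described above, computing a per-participant priming index and comparing women and men; all values are synthetic and the index definition is an assumption for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

# Sketch: affective priming index per participant (e.g., mean evaluation of
# neutral targets after happy primes minus after no prime), compared between
# women and men. Synthetic values; not the study's data.
rng = np.random.default_rng(8)
n_women, n_men = 53, 28
priming_women = rng.normal(0.30, 0.25, n_women)   # valence-congruent shift
priming_men = rng.normal(0.15, 0.25, n_men)

t_stat, p_val = ttest_ind(priming_women, priming_men)
print(f"women vs. men priming: t = {t_stat:.2f}, p = {p_val:.3f}")
```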
Project description:Both facial expression and tone of voice are key signals of emotional communication, but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task in which human faces and voices with neutral, happy, and angry valence were presented simultaneously, within the context of a monkey face and voice recognition task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 normal healthy subjects; N100, P200, N250, and P300 components were observed at electrodes in the frontal-central region, while P100, N170, and P270 components were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not in the parietal-occipital region (P100, N170, and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas the N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between the angry and happy conditions. The results suggest that a general effect of emotion on audiovisual processing can emerge as early as 200 ms (the P200 peak latency) after stimulus onset, despite the implicit affective processing task demands, and that this effect is mainly distributed over the frontal-central region.
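A minimal sketch of how a component's peak amplitude and latency can be extracted from an ERP waveform within a fixed search window (here a hypothetical P200 window); the waveform, window, and sampling rate are assumptions for illustration.

```python
import numpy as np

# Sketch: extract peak amplitude and latency of an ERP component (e.g., P200)
# from one channel in a fixed search window. Toy waveform peaking at 200 ms.
fs = 500
t = np.arange(-0.1, 0.6, 1 / fs)
erp = np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))

window = (t >= 0.15) & (t <= 0.25)                  # assumed P200 search window
idx = np.flatnonzero(window)[np.argmax(erp[window])]
p200_amplitude, p200_latency_ms = erp[idx], t[idx] * 1000
print(p200_amplitude, p200_latency_ms)
```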
Project description:BACKGROUND:Facial emotion perception is a major social skill, but its molecular signal pathway remains unclear. The MET/AKT cascade affects neurodevelopment in general populations and face recognition in patients with autism. This study explores the possible role of the MET/AKT cascade in facial emotion perception. METHODS:One hundred and eighty-two unrelated healthy volunteers (82 men and 100 women) were recruited. Four single nucleotide polymorphisms (SNPs) of MET (rs2237717, rs41735, rs42336, and rs1858830) and AKT rs1130233 were genotyped and tested for their effects on facial emotion perception. Facial emotion perception was assessed with the face task of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Neurocognitive functions were also thoroughly assessed. RESULTS:Regarding MET rs2237717, individuals with the CT genotype performed better in facial emotion perception than those with TT (p = 0.016 by ANOVA, 0.018 by a general linear regression model [GLM] controlling for age, gender, and education duration), and showed no difference from those with CC. Carriers of the most common MET CGA haplotype (frequency = 50.5%) performed better than non-carriers of CGA in facial emotion perception (p = 0.018, df = 1, F = 5.69; p = 0.009 by GLM). For the MET rs2237717/AKT rs1130233 interaction, the C carrier/G carrier group showed better facial emotion perception than the TT/AA genotype group (p = 0.035 by ANOVA, 0.015 by GLM), even when neurocognitive functions were controlled for (p = 0.046 by GLM). CONCLUSIONS:To our knowledge, this is the first study to suggest that genetic factors can affect performance in facial emotion perception. The findings indicate that MET variants and the MET/AKT interaction may affect facial emotion perception, suggesting that the MET/AKT cascade plays a significant role in facial emotion perception. Further replication studies are needed.
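A minimal sketch of a GLM testing a genotype effect on a facial emotion perception score while controlling for age, gender, and education, using statsmodels with a synthetic data frame; the variable names and values are illustrative and do not reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch: genotype effect on facial emotion perception, adjusted for covariates.
rng = np.random.default_rng(2)
n = 182
df = pd.DataFrame({
    "face_score": rng.normal(100, 15, n),                # e.g., MSCEIT face task score
    "genotype": rng.choice(["CC", "CT", "TT"], size=n),  # e.g., MET rs2237717
    "age": rng.integers(20, 60, n),
    "gender": rng.choice(["F", "M"], size=n),
    "education_years": rng.integers(9, 22, n),
})

model = smf.ols("face_score ~ C(genotype) + age + C(gender) + education_years",
                data=df).fit()
print(model.summary())   # genotype terms test the effect adjusted for the covariates
```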
Project description:The human brain is tuned to recognize emotional facial expressions in faces having a natural upright orientation. The relative contributions of featural, configural, and holistic processing to decision-making are as yet poorly understood. This study used a diffusion decision model (DDM) of decision-making to investigate the contribution of early face-sensitive processes to emotion recognition from physiognomic features (the eyes, nose, and mouth) by determining how experimental conditions tapping those processes affect early face-sensitive neuroelectric reflections (P100, N170, and P250) of processes determining evidence accumulation at the behavioral level. We first examined the effects of both stimulus orientation (upright vs. inverted) and stimulus type (photographs vs. sketches) on behavior and neuroelectric components (amplitude and latency). Then, we explored the sources of variance common to the experimental effects on event-related potentials (ERPs) and the DDM parameters. Several results suggest that the N170 indicates core visual processing for emotion recognition decision-making: (a) the additive effect of stimulus inversion and impoverishment on N170 latency; and (b) multivariate analysis suggesting that N170 neuroelectric activity must be increased to counteract the detrimental effects of face inversion on drift rate and of stimulus impoverishment on the stimulus encoding component of non-decision times. Overall, our results show that emotion recognition is still possible even with degraded stimulation, but at a neurocognitive cost, reflecting the extent to which our brain struggles to accumulate sensory evidence of a given emotion. Accordingly, we theorize that: (a) the P100 neural generator would provide a holistic frame of reference to the face percept through categorical encoding; (b) the N170 neural generator would maintain the structural cohesiveness of the subtle configural variations in facial expressions across our experimental manipulations through coordinate encoding of the facial features; and (c) building on the previous configural processing, the neurons generating the P250 would be responsible for a normalization process adapting to the facial features to match the stimulus to internal representations of emotional expressions.
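A minimal sketch of a drift diffusion model simulation showing how a lower drift rate (e.g., for inverted or impoverished faces) yields slower and less accurate decisions; the parameter values are illustrative assumptions, not parameters fitted to the study's data.

```python
import numpy as np

# Sketch: simple two-boundary drift diffusion model (DDM) simulation.
rng = np.random.default_rng(3)

def simulate_ddm(drift, boundary=1.0, non_decision=0.3, dt=0.001,
                 noise=1.0, n_trials=500):
    """Simulate decisions; return mean RT (s) and accuracy."""
    rts, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:          # accumulate noisy evidence
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + non_decision)             # add non-decision time
        correct.append(evidence > 0)             # upper boundary = correct
    return np.mean(rts), np.mean(correct)

# Illustrative drift rates: higher for upright photographs, lower for inverted sketches
for label, drift in [("upright photo", 2.0), ("inverted sketch", 0.8)]:
    mean_rt, acc = simulate_ddm(drift)
    print(f"{label}: mean RT = {mean_rt:.3f} s, accuracy = {acc:.2f}")
```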
Project description:Recognizing and distinguishing the emotional states of those around us is crucial for adaptive social behavior. Previous work has shown that damage to the ventromedial frontal lobe (VMF) impairs recognition of subtle emotional facial expressions and affects fixation patterns to face stimuli. However, whether this relates to deficits in acquiring or interpreting facial expression information remains unclear. We tested 37 patients with frontal lobe damage, including 17 subjects with VMF lesions, in a series of emotion recognition tasks with different gaze manipulations. Subjects were asked to rate neutral, subtle and extreme emotional expressions while freely examining faces, while instructed to look only at the eyes, and in a gaze-contingent condition that required top-down direction of eye movements to reveal the stimulus. People with VMF damage were worse at detecting subtle disgust during free viewing and confused extreme emotional expressions more than healthy controls. However, fixation patterns did not differ systematically between groups during free or gaze-contingent viewing conditions. Moreover, instruction to fixate only the eyes did not improve the performance of VMF damaged subjects. These data argue that VMF is not necessary for normal fixations to emotional face stimuli, and that impairments in emotion recognition after VMF damage do not stem from impaired information gathering, as indexed by patterns of fixation.
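A minimal sketch of a gaze-contingent display, in which the stimulus is revealed only within an aperture around the current gaze position, as in the gaze-contingent viewing condition described above; the image, gaze coordinates, and aperture radius are assumptions for illustration.

```python
import numpy as np

# Sketch: gaze-contingent window that reveals the stimulus only around
# the current fixation point (synthetic grayscale "face" image).
image = np.random.default_rng(6).random((480, 640))    # placeholder face image
gaze_x, gaze_y, radius = 320, 240, 60                   # current gaze and aperture (px)

yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
aperture = (xx - gaze_x) ** 2 + (yy - gaze_y) ** 2 <= radius ** 2
display = np.where(aperture, image, image.mean())       # hide everything outside the aperture
```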
Project description:Spatial frequency (SF) contents have been shown to play an important role in emotion perception. This study employed event-related potentials (ERPs) to explore the time course of neural dynamics involved in the processing of facial expression conveying specific SF information. Participants completed a dual-target rapid serial visual presentation (RSVP) task, in which SF-filtered happy, fearful, and neutral faces were presented. The face-sensitive N170 component distinguished emotional (happy and fearful) faces from neutral faces in a low spatial frequency (LSF) condition, while only happy faces were distinguished from neutral faces in a high spatial frequency (HSF) condition. The later P3 component differentiated between the three types of emotional faces in both LSF and HSF conditions. Furthermore, LSF information elicited larger P1 amplitudes than did HSF information, while HSF information elicited larger N170 and P3 amplitudes than did LSF information. Taken together, these results suggest that emotion perception is selectively tuned to distinctive SF contents at different temporal processing stages.
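A minimal sketch of producing low-spatial-frequency (LSF) and high-spatial-frequency (HSF) versions of a face image with a Gaussian low-pass filter and its complement; the filter width and the synthetic image are assumptions (studies typically specify cutoffs in cycles per face or cycles per degree).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: SF filtering of a (placeholder) grayscale face image.
rng = np.random.default_rng(4)
face = rng.random((256, 256))               # placeholder for a grayscale face image

sigma = 8                                   # assumed blur width in pixels
lsf = gaussian_filter(face, sigma=sigma)    # low-pass: keeps coarse configural information
hsf = face - lsf                            # high-pass: keeps fine featural detail
```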
Project description:Previous research suggests declines in emotion perception in older as compared to younger adults, but the underlying neural mechanisms remain unclear. Here, we address this by investigating how "face-age" and "face emotion intensity" affect both younger and older participants' behavioural and neural responses using event-related potentials (ERPs). Sixteen young and fifteen older adults viewed and judged the emotion type of facial images with old or young face-age and with high or low emotion intensity while EEG was recorded. The ERP results revealed that young and older participants exhibited significant ERP differences in two neural clusters when perceiving neutral faces: the left frontal and centromedial regions (100-200 ms post stimulus onset) and the frontal region (250-900 ms). Older participants also exhibited significantly larger ERPs within these two neural clusters during anger and happiness emotion perception tasks. However, while this pattern of activity supported neutral emotion processing, it was not sufficient to support the effective processing of facial expressions of anger and happiness, as older adults showed reductions in performance when perceiving these emotions. These age-related changes are consistent with theoretical models of age-related changes in neurocognitive abilities and may reflect a general age-related cognitive neural compensation in older adults, rather than a specific emotion-processing neural compensation.
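A minimal sketch of comparing mean ERP amplitude between age groups within a predefined cluster time window (here the 250-900 ms frontal window mentioned above); the group data and the single averaged channel are synthetic assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

# Sketch: group comparison of mean ERP amplitude in a cluster time window.
rng = np.random.default_rng(7)
fs = 500
t = np.arange(-0.1, 1.0, 1 / fs)
window = (t >= 0.25) & (t <= 0.9)                  # assumed 250-900 ms window

young = rng.normal(0.0, 1.0, size=(16, t.size))    # 16 young adults, one averaged channel
older = rng.normal(0.5, 1.0, size=(15, t.size))    # 15 older adults

young_mean = young[:, window].mean(axis=1)
older_mean = older[:, window].mean(axis=1)
t_stat, p_val = ttest_ind(older_mean, young_mean)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```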
Project description:Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual, and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and of the zygomaticus major in response to audiovisual laughter was observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, neither the visual nor the auditory unimodal emotion stimuli activated the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is available.
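A minimal sketch of quantifying mimicry as baseline-corrected, rectified EMG activity in the 500-1000 ms post-stimulus window, separately for each muscle; the signals and sampling rate are synthetic assumptions matching the time range reported above.

```python
import numpy as np

# Sketch: baseline-corrected rectified EMG in the 500-1000 ms window,
# per muscle (corrugator supercilii vs. zygomaticus major). Synthetic signals.
fs = 1000                                    # sampling rate (Hz), assumed
t = np.arange(-0.5, 1.5, 1 / fs)             # epoch: -500 to 1500 ms
rng = np.random.default_rng(5)
emg = {"corrugator": rng.normal(size=t.size),
       "zygomaticus": rng.normal(size=t.size)}

baseline = (t >= -0.5) & (t < 0.0)
window = (t >= 0.5) & (t < 1.0)              # 500-1000 ms after stimulus onset

for muscle, signal in emg.items():
    rectified = np.abs(signal)               # rectify the raw EMG
    change = rectified[window].mean() - rectified[baseline].mean()
    print(f"{muscle}: mean rectified change = {change:.3f}")
```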