Selective eye fixations on diagnostic face regions of dynamic emotional expressions: KDEF-dyn database.
ABSTRACT: Prior research using static facial stimuli (photographs) has identified diagnostic face regions (i.e., regions functional for recognition) of emotional expressions. In the current study, we aimed to determine attentional orienting, engagement, and the time course of fixation on these diagnostic regions. To this end, we assessed the eye movements of observers inspecting dynamic expressions that changed from a neutral to an emotional face. A new stimulus set (KDEF-dyn) was developed, comprising 240 video clips of 40 human models portraying six basic emotions (happy, sad, angry, fearful, disgusted, and surprised). For validation purposes, 72 observers categorized the expressions while gaze behavior was measured (probability of first fixation, entry time, gaze duration, and number of fixations). Specific visual scanpath profiles characterized each emotional expression: the eye region was looked at earlier and longer for angry and sad faces; the mouth region, for happy faces; the nose/cheek region, for disgusted faces; and the eye and mouth regions attracted attention in a more balanced manner for surprised and fearful faces. These profiles reflect enhanced selective attention to expression-specific diagnostic face regions. The KDEF-dyn stimuli and the validation data will be made available to the scientific community as a useful tool for research on emotional facial expression processing.
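For readers who want to derive comparable measures from their own eye-tracking output, the sketch below computes the four gaze indices named above from a generic fixation log. It is a minimal Python illustration with made-up data; the column names, AOI labels, and values are assumptions, not the KDEF-dyn validation format.

    import pandas as pd

    # Hypothetical fixation log: one row per fixation, with onset and
    # duration in ms and the area of interest (AOI) it landed in.
    fixations = pd.DataFrame({
        "trial":    [1, 1, 1, 2, 2],
        "onset_ms": [120, 380, 900, 150, 520],
        "dur_ms":   [240, 480, 310, 350, 600],
        "aoi":      ["eyes", "mouth", "eyes", "nose", "mouth"],
    })

    def aoi_measures(df, aoi):
        """Trial-averaged gaze measures for one AOI."""
        hits = df[df["aoi"] == aoi]
        first_fix = df.sort_values("onset_ms").groupby("trial").head(1)
        return pd.Series({
            # probability that a trial's first fixation lands in the AOI
            "p_first_fixation": first_fix["aoi"].eq(aoi).mean(),
            # entry time: onset of the earliest fixation inside the AOI
            "entry_time_ms": hits.groupby("trial")["onset_ms"].min().mean(),
            # gaze duration: total fixation time inside the AOI per trial
            "gaze_duration_ms": hits.groupby("trial")["dur_ms"].sum().mean(),
            # number of fixations inside the AOI per trial
            "n_fixations": hits.groupby("trial").size().mean(),
        })

    print(aoi_measures(fixations, "eyes"))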
Project description: Wearing face masks is one of the essential means of preventing the transmission of certain respiratory diseases, such as coronavirus disease 2019 (COVID-19). Although acceptance of such masks is increasing in the Western hemisphere, many people feel that social interaction is affected by wearing a mask. In the present experiment, we tested the impact of face masks on the readability of emotions. The participants (N = 41, determined by an a priori power analysis; a random sample of healthy persons of different ages, 18-87 years) assessed the emotional expressions displayed by 12 different faces. Each face was randomly presented with six different expressions (angry, disgusted, fearful, happy, neutral, and sad) while being fully visible or partly covered by a face mask. Lower accuracy and lower confidence in one's own assessment of the displayed emotions indicate that emotion reading was strongly disrupted by the presence of a mask. We further detected specific confusion patterns, most pronounced in the misinterpretation of disgusted faces as angry and in the assessment of many other emotions (e.g., happy, sad, and angry) as neutral. We discuss compensatory actions that can keep social interaction effective (e.g., body language, gesture, and verbal communication) even when relevant visual information is crucially reduced.
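As an aside on the a priori power analysis mentioned above: the snippet below shows how such a sample-size calculation is commonly done in Python with statsmodels. The effect size, alpha, and power are conventional placeholder values, not the ones the authors report.

    from statsmodels.stats.power import TTestPower

    # Required N for a one-sample/paired t-test at placeholder settings:
    # medium effect (d = 0.5), alpha = .05 (two-sided), power = .80.
    n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                 alternative="two-sided")
    print(f"required sample size: {n:.1f}")  # ~33.4 -> round up to 34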
Project description: Most experimental studies of facial expression processing have used static stimuli (photographs), yet facial expressions in daily life are generally dynamic. The Karolinska Directed Emotional Faces (KDEF) database has been frequently used in its original photographic format. In the current study, we validate a dynamic version of this database, the KDEF-dyn. To this end, we used morphing software to animate 40 KDEF models from a neutral expression to each of six emotional expressions (happy, sad, angry, fearful, disgusted, and surprised), with a 1,033-ms unfolding. Ninety-six human observers categorized the expressions of the resulting 240 video-clip stimuli, and automated face analysis assessed the evidence for 6 expressions and 20 facial action units (AUs) at 31 intensities. Low-level image properties (luminance, signal-to-noise ratio, etc.) and other purely perceptual factors (e.g., size, unfolding speed) were controlled. Human recognition performance patterns (accuracy, efficiency, and confusions) were consistent with prior research using static and other dynamic expressions. Automated assessment of expressions and AUs was sensitive to the intensity manipulations. Significant correlations emerged between human observers' categorization and automated classification. The KDEF-dyn database aims to provide a balance between experimental control and ecological validity for research on emotional facial expression processing. The stimuli and the validation data are available to the scientific community.
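The unfolding manipulation can be approximated, for illustration only, by a linear cross-fade between the neutral and apex frames; real morphing software additionally warps facial geometry. The sketch below (Python/NumPy; function and parameter names are ours) generates 31 intensity levels over the 1,033-ms ramp described above.

    import numpy as np

    def unfold(neutral, apex, n_levels=31):
        """Frames running from neutral (0% intensity) to apex (100%)."""
        weights = np.linspace(0.0, 1.0, n_levels)
        return [(1 - w) * neutral + w * apex for w in weights]

    # 1,033 ms spread over 31 frames is ~33 ms per frame, i.e., ~30 fps.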
Project description: There is increasing interest in clarifying how different facial emotion expressions are perceived by people from different cultures, of different ages, and of different sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and of how these might be modulated by age and sex differences. We present a database of East Asian facial expression stimuli, enacted by young and older, male and female Taiwanese adults using the Facial Action Coding System (FACS). Combined with a prior database, the present database comprises 90 identities with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for the intensities of multiple dimensions of emotion and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified the effects of age and sex. We also applied commercial software to extract computer-based metrics of the emotions in the photographs. Taiwanese raters perceived happy faces as one category; sad, angry, and disgusted expressions as one category; and fearful and surprised expressions as one category. Younger females were more sensitive to facial emotions than younger males. Whereas older males showed reduced facial emotion sensitivity, older females' sensitivity was similar to or accentuated relative to that of young females. The commercial software dissociated the six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than the Western-based definitions built into face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable for interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, available upon request, for conducting such research.
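One plausible way to characterize the dimensionality of perceived emotions, sketched below with placeholder data, is a principal component analysis over the mean intensity ratings, with photographs as observations and emotion scales as variables; scales loading on a common component would group into one perceived category. This illustrates the general approach, not the authors' exact pipeline.

    import numpy as np
    from sklearn.decomposition import PCA

    scales = ["happy", "sad", "angry", "fearful",
              "disgusted", "surprised", "neutral"]
    rng = np.random.default_rng(0)
    # Placeholder matrix: 628 photographs x 7 emotion scales (mean ratings).
    mean_ratings = rng.uniform(0, 7, size=(628, len(scales)))

    pca = PCA()
    pca.fit(mean_ratings)
    print(pca.explained_variance_ratio_)          # variance per component
    print(dict(zip(scales, pca.components_[0])))  # loadings on component 1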
Project description: Facial expressions of emotion play a key role in guiding social judgements, including deciding whether or not to approach another person. However, no research has examined how situational context modulates the approachability judgements assigned to emotional faces, or the relationship between perceived threat and approachability judgements. Fifty-two participants provided approachability judgements for angry, disgusted, fearful, happy, neutral, and sad faces across three situational contexts: no context, giving help, and receiving help. Participants also rated the emotional faces for level of perceived threat and labelled the facial expressions. Results indicated that context modulated approachability judgements of faces depicting negative emotions. Specifically, faces depicting distress-related emotions (i.e., sadness and fear) were considered more approachable in the giving-help context than in both the receiving-help and no-context conditions. Furthermore, higher ratings of threat were associated with the assessment of angry, happy, and neutral faces as less approachable. These findings are the first to demonstrate the significant role that context plays in the evaluation of an individual's approachability, and they illustrate the important relationship between perceived threat and the evaluation of approachability.
Project description: Newborns and infants depend heavily on successfully communicating their needs, e.g., through crying and facial expressions. Although there is growing interest in the mechanisms of, and possible influences on, the recognition of facial expressions in infants, no validated database of emotional infant faces has existed to date. In the present article we introduce a standardized and freely available face database containing Caucasian infant face images from 18 infants aged 4 to 12 months. The development and validation of the Tromsø Infant Faces (TIF) database is presented in Study 1. Over 700 adults categorized the photographs into seven emotion categories (happy, sad, disgusted, angry, afraid, surprised, neutral) and rated their intensity, clarity, and valence. In order to examine the relevance of TIF, we then present its first application in Study 2, investigating differences in emotion recognition across different stages of parenthood. We found a small gender effect, with women giving higher intensity and clarity ratings than men. Moreover, parents of young children rated the images as clearer than all the other groups did, and parents rated "neutral" expressions as clearer and more intense. Our results suggest that caretaking experience provides an implicit advantage in the processing of emotional expressions in infant faces, especially for the more difficult, ambiguous expressions.
Project description: Expressions of emotion are often brief, providing only fleeting images on which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d' analysis and found that categorization was usually above chance for angry versus happy and for fearful versus happy expressions, but consistently poor for fearful versus angry expressions. Fearful-versus-angry categorization remained poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry-versus-happy categorization but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorization. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms.
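The d' analysis referred to above is standard signal detection theory: d' is the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the 1/(2N) correction for extreme rates is one common convention, not necessarily the one used in this study.

    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate, n_trials):
        """d' = z(hit rate) - z(false-alarm rate)."""
        # Nudge rates of exactly 0 or 1 so the z-transform stays finite.
        clamp = lambda p: min(max(p, 1 / (2 * n_trials)),
                              1 - 1 / (2 * n_trials))
        return norm.ppf(clamp(hit_rate)) - norm.ppf(clamp(fa_rate))

    print(d_prime(0.80, 0.30, 100))  # 80% hits, 30% false alarms -> ~1.37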
Project description: Adult aging is associated with difficulties in recognizing negative facial expressions such as fear and anger, whereas recognition of happiness and disgust is generally found to be less affected. Eye-tracking studies indicate that the diagnostic features of fearful and angry faces are situated in the upper regions of the face (the eyes), and those of happy and disgusted faces in the lower regions (nose and mouth). These studies also indicate age differences in visual scanning behavior, suggesting a role for attention in the emotion recognition deficits of older adults. However, because facial features can be processed extrafoveally, and because expression recognition occurs rapidly, eye-tracking has been questioned as a measure of attention during emotion recognition. In this study, the Moving Window Technique (MWT) was used as an alternative to conventional eye-tracking technology. By restricting the visual field to a movable window, this technique provides a more direct measure of attention. We found a strong bias to explore the mouth in both age groups. Relative to young adults, older adults focused less on the left eye and marginally more on the mouth and nose. Despite these different exploration patterns, older adults were most impaired in recognition accuracy for disgusted expressions. Correlation analysis revealed that among older adults, more mouth exploration was associated with faster recognition of both disgusted and happy expressions. As a whole, these findings suggest that both attentional differences and perceptual deficits contribute to older adults' less accurate emotion recognition.
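In the spirit of the MWT, the sketch below masks everything outside a circular aperture centered on the current window position, so only the windowed region of the face remains visible. The window shape, size, and gray fill are arbitrary illustrative choices (assuming a grayscale float image), not the study's parameters.

    import numpy as np

    def apply_window(img, cx, cy, radius=40, fill=0.5):
        """Keep a circular region around (cx, cy); gray out the rest."""
        h, w = img.shape[:2]
        yy, xx = np.ogrid[:h, :w]
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        out = np.full_like(img, fill)
        out[mask] = img[mask]
        return out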
Project description: BACKGROUND: Difficulties with facial expression processing may be associated with the characteristic social impairments in individuals with autism spectrum disorder (ASD). Emotional face processing in ASD has been investigated in an abundance of behavioral and EEG studies, yielding, however, mixed and inconsistent results. METHODS: We combined fast periodic visual stimulation (FPVS) with EEG to assess the neural sensitivity for implicitly detecting briefly presented facial expressions among a stream of neutral faces, in 23 boys with ASD and 23 matched typically developing (TD) boys. Neutral faces with different identities were presented at 6 Hz, periodically interleaved with an expressive face (angry, fearful, happy, or sad, in separate sequences) as every fifth image (i.e., a 1.2 Hz oddball frequency). These distinguishable frequency tags for neutral and expressive stimuli allow direct and objective quantification of the expression-categorization responses, requiring only four 60-s sequences of recording per condition. RESULTS: Both groups showed equal neural synchronization to the general face stimulation and similar neural responses to happy and sad faces. However, the ASD group displayed significantly reduced responses to angry and fearful faces compared to TD boys. At the individual subject level, these neural responses predicted membership of the ASD group with an accuracy of 87%. Whereas TD participants showed a significantly lower sensitivity to sad faces than to the other expressions, ASD participants showed an equally low sensitivity to all the expressions. CONCLUSIONS: Our results indicate an emotion-specific processing deficit rather than a general emotion-processing problem: boys with ASD are less sensitive than TD boys at rapidly and implicitly detecting angry and fearful faces. The implicit, fast, and straightforward nature of FPVS-EEG opens new perspectives for clinical diagnosis.
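A standard way to quantify such frequency-tagged responses, sketched below, is to take the EEG amplitude spectrum and express the amplitude at the 1.2 Hz oddball bin relative to its neighboring bins. The frequencies come from the abstract; the neighborhood size and the exclusion of immediately adjacent bins are common conventions, not necessarily this study's exact pipeline.

    import numpy as np

    def oddball_snr(eeg, fs, f_odd=1.2, n_neighbors=10):
        """Amplitude at the oddball frequency divided by the mean amplitude
        of surrounding bins (the two immediately adjacent bins excluded)."""
        amp = np.abs(np.fft.rfft(eeg)) / len(eeg)
        freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
        k = int(np.argmin(np.abs(freqs - f_odd)))  # bin of the 1.2 Hz response
        neigh = np.r_[k - n_neighbors:k - 1, k + 2:k + n_neighbors + 1]
        return amp[k] / amp[neigh].mean()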
Project description: The present study examined whether 6-month-old infants can transfer amodal information (i.e., information independent of sensory modality) from emotional voices to emotional faces. Sequences of successive emotional stimuli crossing from one sensory modality (auditory) to another (visual), corresponding to a cross-modal transfer, were presented to 24 infants. Each sequence presented a single emotional (angry or happy) or neutral voice, followed by the simultaneous presentation of two static emotional faces (angry or happy, congruent or incongruent with the emotional voice). Eye movements in response to the visual stimuli were recorded with an eye-tracker. Results suggested no difference in infants' looking time to the happy or angry face after listening to the neutral or the angry voice. However, after listening to the happy voice, infants looked longer at the incongruent angry face (the mouth area in particular) than at the congruent happy face. These results reveal that a cross-modal transfer (from the auditory to the visual modality) is possible for 6-month-old infants only after the presentation of a happy voice, suggesting that they recognize this emotion amodally.
Project description: BACKGROUND: Social anxiety disorder (SAD) is characterized by intense fear when facing a crowd. Biased processing of crowd-related information has been suggested as contributing to the etiology and maintenance of the disorder. Here we tested whether patients with SAD display aberrant patterns when extracting the mean emotional tone from sets of faces. METHODS: Twenty-one participants with SAD and 24 unanxious control participants had to determine the average emotional expression of sets of six different morphed faces ranging from happy to angry. In 20% of trials, the six faces were randomly sampled from the entire happy-angry range. The remaining 80% of trials, considered the critical trials, contained an emotional outlier: five faces were sampled from one half of the emotional range, whereas the sixth face was sampled from the opposite half. RESULTS: Participants with SAD were less accurate than controls in extracting the mean emotional tone from sets of faces. Unanxious participants underweighted negative outliers and overweighted positive outliers when extracting the mean, whereas participants with SAD exhibited no such biases. CONCLUSIONS: The results suggest a possible mechanism underlying the anxiety experienced by socially anxious individuals when facing a crowd.
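For concreteness, the sketch below generates one "critical" trial as described above: five faces drawn from one half of the happy-angry morph continuum plus an outlier from the opposite half. The 0-100 morph scale is an assumption for illustration; the study's actual morph units are not specified here.

    import numpy as np

    rng = np.random.default_rng(0)

    def critical_trial():
        """Six morph values: a five-face majority plus one emotional outlier."""
        half = rng.choice([0, 50])                    # majority's half-range
        majority = rng.uniform(half, half + 50, 5)    # five faces, one half
        outlier = rng.uniform(50 - half, 100 - half)  # sixth face, other half
        return np.append(majority, outlier)

    faces = critical_trial()
    print(faces.round(1), faces.mean())  # objective mean the observer reports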