Warsaw set of emotional facial expression pictures: a validation study of facial display photographs.
ABSTRACT: Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides the background for a new database of basic emotional expressions. The goal in creating this set was to provide high-quality photographs of genuine facial expressions; thus, after proper training, participants were encouraged to express "felt" emotions. A novel judgment task was then used to establish whether a given expression was perceived as intended by untrained judges; the task was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. This allowed us to measure the purity and intensity of emotional displays, parameters that the validation methods used by other researchers do not capture. The final set comprises the pictures that received the highest recognition marks (i.e., accuracy with the intended display) from independent judges: 210 high-quality photographs of 30 individuals. Accuracy, intensity, and purity of the displayed emotion, as well as FACS action unit (AU) codes, are provided for each picture. Given the unique methodology applied in gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for non-commercial use by request at http://www.emotional-face.org.
Project description: The stimulus sets presently used to study emotion processing are primarily static pictures of individuals (mostly adults) making emotional facial expressions. However, the dynamic, stereotyped movements associated with emotional expressions contain rich information missing from static pictures, such as the difference between happiness and pride. We created a set of 1.1 s dynamic emotional facial stimuli representing boys and girls aged 8-18. A separate group of 36 individuals (mean [M] age = 19.5 years, standard deviation [SD] = 1.95, 13 male) chose the most appropriate emotion label for each video from a superset of 250 videos. Validity and reliability statistics were computed across all stimuli and used to determine which stimuli should be included in the final set. The criterion for inclusion was 70% agreement with the modal response made for each video. The final stimulus set contains 142 videos of 36 actors (M age = 13.24 years, SD = 2.09, 14 male) making negative (disgust, embarrassment, fear, sadness), positive (happiness, pride), and neutral facial expressions. The percent correct among the final stimuli was high (median = 88.89%; M = 88.38%, SD = 7.74%), as was reliability (coefficient = 0.753).
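The 70% modal-agreement criterion is straightforward to reproduce. The sketch below (Python) shows one way to compute it; the judgment records and video identifiers are hypothetical, used only to illustrate the calculation.

```python
from collections import Counter, defaultdict

# Hypothetical judgments: one (video_id, chosen_label) pair per rating.
judgments = [
    ("vid_001", "happiness"), ("vid_001", "happiness"), ("vid_001", "pride"),
    ("vid_002", "fear"), ("vid_002", "fear"), ("vid_002", "fear"),
]

# Group the chosen labels by video.
labels_by_video = defaultdict(list)
for video_id, label in judgments:
    labels_by_video[video_id].append(label)

# Keep a video if at least 70% of raters chose its modal (most frequent) label.
CRITERION = 0.70
included = {}
for video_id, labels in labels_by_video.items():
    modal_label, modal_count = Counter(labels).most_common(1)[0]
    agreement = modal_count / len(labels)
    if agreement >= CRITERION:
        included[video_id] = (modal_label, agreement)

print(included)  # {'vid_002': ('fear', 1.0)}; vid_001 (0.67 agreement) is excluded
```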
Project description: Access to validated stimuli depicting children's facial expressions is useful across research domains (e.g., developmental, cognitive, or social psychology). Yet such databases are scarce in comparison to those portraying adult models, and validation procedures are typically restricted to emotion recognition accuracy. This work presents subjective ratings for a subset of 283 photographs selected from the Child Affective Facial Expression set (CAFE). Extending beyond the original emotion recognition accuracy norms, our main goal was to validate this database across eight subjective dimensions related to the model (e.g., attractiveness, familiarity) or the specific facial expression (e.g., intensity, genuineness), using a sample of a different nationality (N = 450 Portuguese participants). We also assessed emotion recognition (a forced-choice task with seven options: anger, disgust, fear, happiness, sadness, surprise, and neutral). Overall, most photographs were rated as highly clear, genuine, and intense facial expressions. The models were rated as moderately familiar and likely to belong to the in-group, and obtained high attractiveness and arousal ratings. Results also showed that, as in the original study, the facial expressions were accurately recognized. Normative and raw data are available as supplementary material at https://osf.io/mjqfx/.
Project description: Experimental research examining emotional processes is typically based on the observation of images with affective content, including facial expressions. Future studies will benefit from databases of emotion-inducing stimuli in which stimulus characteristics that could influence results can be controlled. This study presents Portuguese normative data for the identification of seven facial expressions of emotion (plus a neutral face) from the Radboud Faces Database (RaFD). The effects of participants' gender and models' sex on emotion recognition were also examined. Participants (N = 1249) were exposed to 312 pictures of white adults displaying emotional and neutral faces with a frontal gaze. Recognition agreement between the displayed expression and the participants' chosen expression ranged from 69% (anger) to 97% (happiness). Recognition was significantly higher among women than among men only for anger and contempt, and, depending on the emotion, was higher for either female or male models. Overall, the results show high recognition levels for the facial expressions presented, indicating that the RaFD provides adequate stimuli for studies examining the recognition of facial expressions of emotion among college students. Participants' gender had a limited influence on emotion recognition, but the sex of the model requires additional consideration.
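Per-emotion agreement rates and rater-gender differences of the kind reported above can be computed as in the following sketch; the counts are invented for illustration, and a generic chi-square test of independence stands in for whatever statistics the original validation used.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts for one emotion (e.g., anger):
# rows = participant gender, columns = [correct, incorrect] identifications.
table = np.array([
    [420, 180],  # women
    [380, 220],  # men
])

# Agreement rate per rater group.
agreement = table[:, 0] / table.sum(axis=1)
print(f"agreement: women = {agreement[0]:.1%}, men = {agreement[1]:.1%}")

# Chi-square test: does recognition accuracy depend on rater gender?
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```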
Project description: Most existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. In real-life social interactions, however, emotional facial expressions vary in intensity, and low-intensity expressions occur frequently. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride), each expressed at the three intensities, plus neutral. Accuracy scores (raw hit rates and unbiased hit rates, Hu) were calculated, as well as response times. Accuracy rates above chance level were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high-intensity expressions than to intermediate-intensity expressions, which in turn had higher accuracies and faster responses than low-intensity expressions. A second study with standardised display times replicated this pattern, further validating the intensities. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
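The unbiased hit rate (Hu; Wagner, 1993) mentioned above corrects raw accuracy for response bias: for each category it is the squared count of correct responses divided by the product of the number of stimuli presented in that category and the number of times that response label was used. A minimal sketch from a confusion matrix (counts invented for illustration):

```python
import numpy as np

# Hypothetical confusion matrix: rows = presented emotion, cols = chosen label.
emotions = ["anger", "happiness", "fear"]
confusion = np.array([
    [50,  5,  5],  # anger stimuli
    [ 2, 56,  2],  # happiness stimuli
    [10,  4, 46],  # fear stimuli
])

hits = np.diag(confusion).astype(float)
presented = confusion.sum(axis=1)  # stimuli shown per emotion
chosen = confusion.sum(axis=0)     # times each label was chosen

raw = hits / presented
hu = hits**2 / (presented * chosen)  # Wagner's (1993) unbiased hit rate

for emotion, r, h in zip(emotions, raw, hu):
    print(f"{emotion}: raw hit rate = {r:.2f}, Hu = {h:.2f}")
```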
Project description: BACKGROUND: Facial expressions convey key cues of human emotions and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates such as macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals, which raises serious concerns for activities such as wildlife tourism, where humans closely interact with wild animals. Understanding which factors (i.e., experience and type of emotion) affect the ability to recognise the emotional state of nonhuman primates from their facial expressions can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. METHODS: The present study investigated whether different levels of experience with Barbary macaques, Macaca sylvanus, affect the ability to correctly assess facial expressions related to aggressive, distressed, friendly, or neutral states, using an online questionnaire. Participants' level of experience was defined as: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaque facial expressions, along with descriptions and the corresponding emotions, prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. RESULTS: Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaque facial expressions improved the ability of inexperienced participants to discriminate neutral and distressed faces, with a similar trend for aggressive faces. However, these participants, even when previously exposed to pictures, had difficulty recognising aggressive, distressed, and friendly faces above chance level. DISCUSSION: These results do not support the universality hypothesis, as exposed and naïve participants had difficulty correctly identifying aggressive, distressed, and friendly faces, although exposure to the facial expressions improved recognition. In addition, the findings suggest that simple exposure to 2D pictures (for example, information signs explaining animals' facial signalling in zoos or animal parks) is not a sufficient educational tool to reduce tourists' misinterpretations of macaque emotion. Additional measures, such as keeping a safe distance between tourists and wild animals and reinforcing learning via videos or supervised visits led by expert guides, could reduce such issues and improve both animal welfare and the tourist experience.
Project description: Individuals with schizophrenia show deficits both in facial emotion recognition and in context processing (Kohler, C.G., Walker, J.B., Martin, E.A., Healey, K.M., Moberg, P.J., 2010. Facial emotion perception in schizophrenia: a meta-analytic review. Schizophr. Bull. 36, 1009-1019). Recent evidence suggests that context information can affect facial emotion recognition (Aviezer, H., Bentin, S., Hassin, R.R., Meschino, W.S., Kennedy, J., Grewal, S., Esmail, S., Cohen, S., Moscovitch, M., 2009. Not on the face alone: perception of contextualized face expressions in Huntington's disease. Brain 132, 1633-1644). Thus, individuals with schizophrenia may have deficits in facial emotion processing at least in part because of impairments in processing context information (Green, M.J., Waldron, J.H., Coltheart, M., 2007. Emotional context processing is impaired in schizophrenia. Cogn. Neuropsychiatry 12, 259-280). We used a novel experimental task, the Emotion Context Processing Task (ECPT), to examine the influence of emotional context (IAPS pictures) on the processing of subtly surprised faces in schizophrenia. One task condition included a manipulation designed to determine whether enhancing attention to the context (by requiring a categorization judgment on the context pictures) would facilitate the influence of context on facial emotion processing in schizophrenia. In addition, we tested whether deficits in non-social context processing would predict deficits in the influence of context on facial emotion processing in schizophrenia. We administered the Dot Probe Expectancy Task (a non-social context processing task) and the ECPT to individuals with schizophrenia (n = 35) and healthy controls (n = 32). Individuals with schizophrenia showed an intact influence of context information on facial emotion recognition. The manipulation designed to enhance attention to emotional context reduced the effect of context for both groups. In schizophrenia, better processing of non-social context was associated with a stronger influence of context on valence ratings of facial expressions in the negative context condition. These results suggest that, in schizophrenia, similar mechanisms may influence the processing of context for both social and non-social information.
Project description: The spontaneous tendency to synchronize our facial expressions with those of others is often termed emotional contagion. It is unclear, however, whether emotional contagion depends on visual awareness of the eliciting stimulus and which processes underlie the unfolding of expressive reactions in the observer. It has been suggested either that emotional contagion is driven by motor imitation (i.e., mimicry), or that it is one observable aspect of the emotional state arising when we see the corresponding emotion in others. Emotional contagion reactions to different classes of consciously seen and "unseen" stimuli were compared by presenting pictures of facial or bodily expressions either to the intact or blind visual field of two patients with unilateral destruction of the visual cortex and ensuing phenomenal blindness. Facial reactions were recorded using electromyography, and arousal responses were measured with pupil dilatation. Passive exposure to unseen expressions evoked faster facial reactions and higher arousal compared with seen stimuli, therefore indicating that emotional contagion occurs also when the triggering stimulus cannot be consciously perceived because of cortical blindness. Furthermore, stimuli that are very different in their visual characteristics, such as facial and bodily gestures, induced highly similar expressive responses. This shows that the patients did not simply imitate the motor pattern observed in the stimuli, but resonated to their affective meaning. Emotional contagion thus represents an instance of truly affective reactions that may be mediated by visual pathways of old evolutionary origin bypassing cortical vision while still providing a cornerstone for emotion communication and affect sharing.
Project description: Facial expressions that show emotion play an important role in human social interactions. In previous theoretical studies, researchers have suggested that there are universal, prototypical facial expressions specific to basic emotions. However, the results of some empirical studies that tested the production of emotional facial expressions based on particular scenarios only partially supported the theoretical predictions. In addition, all of the previous studies were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. The participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios. Under a baseline condition, the participants imitated photographs of prototypical facial expressions. The produced facial expressions were automatically coded using FaceReader in terms of the intensities of emotions and facial action units. Whereas all target emotions were displayed clearly in the photograph condition, the scenario condition elicited clear target emotions only for happy and surprised expressions. The photograph and scenario conditions also yielded different profiles of emotion and facial action unit intensities for all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions, but suggest that the theory may need to be modified based on empirical evidence.
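One simple way to quantify how closely a scenario-condition profile matches the corresponding photograph-condition profile is to correlate the two intensity vectors, as in the sketch below; the channel names and values are invented for illustration, and this is a generic comparison rather than the authors' analysis.

```python
import numpy as np

# Hypothetical FaceReader-style output: mean intensity (0-1) of each
# emotion channel for "anger" trials under the two conditions.
channels = ["angry", "disgusted", "scared", "happy", "sad", "surprised"]
photograph = np.array([0.72, 0.10, 0.05, 0.02, 0.08, 0.03])
scenario   = np.array([0.31, 0.12, 0.09, 0.05, 0.20, 0.11])

# Pearson correlation between the two profiles: values near 1 would mean
# the scenario condition reproduced the prototypical (photograph) profile.
r = np.corrcoef(photograph, scenario)[0, 1]
print(f"anger profile correlation: r = {r:.2f}")
```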
Project description: Facial expressions are fundamental to interpersonal communication, including social interaction, and allow people of different ages, cultures, and languages to quickly and reliably convey emotional information. Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion, particularly positive and negative affect intensity. This is likely, in part, because achieving inter-rater reliability for facial action and affect intensity ratings is painstaking and labor-intensive. We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants, which show strong correspondences to positive and negative affect intensity ratings obtained from highly trained coders. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings when combined with interpretable machine learning methods, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases. Further, we show that CVML can be applied to individual human judges to infer which facial actions they use to generate perceptual emotion ratings from facial expressions.
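A minimal sketch of the general approach (not the authors' pipeline): treat per-video facial action unit (AU) intensities as features, fit an interpretable model against affect intensity ratings, and inspect which AUs carry the prediction. The AU names, simulated data, and choice of a random forest are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical design matrix: one row per video, columns = mean AU intensities.
au_names = ["AU04_brow_lowerer", "AU06_cheek_raiser",
            "AU12_lip_corner_puller", "AU15_lip_corner_depressor"]
X = rng.uniform(0, 5, size=(500, len(au_names)))

# Simulated positive-affect ratings driven mostly by AU6 and AU12 (smiling).
y = 0.6 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.5, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Feature importances suggest which facial actions drive the ratings.
for name, imp in sorted(zip(au_names, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.3f}")
```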
Project description: The ability to judge others' emotions is required for establishing and maintaining smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions the observer produces while viewing it. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. We investigated this issue by measuring emotion judgments on the valence and arousal dimensions while comparing dynamic and static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to judge the gender of the faces while activating either the corrugator supercilii muscle (brow lowering) or the zygomaticus major muscle (cheek raising). They then evaluated the internal states of the stimuli using the affect grid, maintaining the facial action until they finished responding. The cheek-raising condition increased the attributed valence scores relative to the brow-lowering condition, and this effect of facial actions was observed for static as well as dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to judgments of the valence of emotional facial expressions.