Children's Facial Expression Production: Influence of Age, Gender, Emotion Subtype, Elicitation Condition and Culture.
ABSTRACT: The production of facial expressions (FEs) is an important skill that allows children to share and adapt emotions with their relatives and peers during social interactions. These skills are impaired in children with Autism Spectrum Disorder. However, the way in which typically developing children master the production of FEs has still not been clearly assessed. This study explored factors that could influence the production of FEs in childhood: age, gender, emotion subtype (sadness, anger, joy, and neutral), elicitation task (on request, imitation), area of recruitment (French Riviera and Paris region) and emotion multimodality. A total of 157 children aged 6-11 years were enrolled in Nice and Paris, France. We asked them to produce FEs in two different tasks: imitation with an avatar model and production on request without a model. Results from a multivariate analysis revealed that (1) children's performance improved with age; (2) positive emotions were easier to produce than negative emotions; (3) children produced FEs better on request than in imitation; and (4) French Riviera children performed better than Parisian children, suggesting regional influences on emotion production. We conclude that facial emotion production is a complex developmental process influenced by several factors that need to be acknowledged in future research.
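The multivariate analysis described above could be sketched, under stated assumptions, as a mixed-effects model regressing FE quality ratings on age, gender, emotion subtype, elicitation task, and recruitment site, with a random intercept per child for the repeated measures. All variable names and the toy data below are illustrative assumptions, not the authors' actual variables or results.

```python
# Illustrative sketch only: models quality ratings of produced FEs as a
# function of the factors named in the abstract. Variable names and toy
# data are assumptions, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 157 * 8  # toy layout: one row per child x emotion x task
df = pd.DataFrame({
    "rating": rng.random(n),
    "age": rng.integers(6, 12, n),
    "gender": rng.choice(["girl", "boy"], n),
    "emotion": rng.choice(["joy", "anger", "sadness", "neutral"], n),
    "task": rng.choice(["on_request", "imitation"], n),
    "site": rng.choice(["Nice", "Paris"], n),
    "child_id": rng.integers(0, 157, n),
})
model = smf.mixedlm(
    "rating ~ age + C(gender) + C(emotion) + C(task) + C(site)",
    data=df,
    groups=df["child_id"],  # random intercept: repeated measures per child
).fit()
print(model.summary())
```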
Project description: According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with this, published research has found that viewing others' facial emotion elicits automatic matched facial muscle activation, which in turn facilitates emotion recognition. Making congruent facial muscle activity explicit might therefore produce an even greater recognition advantage, whereas conflicting sensory information, i.e., incongruent facial muscle activity, might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) conditions (a) and (b) would result in greater facial muscle activity than (c), (2) condition (a) would increase emotion recognition accuracy from others' faces compared to (c), and (3) condition (b) would lower recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The order of the experimental conditions was counterbalanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.
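As a rough illustration of the EMG side of such a design, the sketch below applies standard facial-EMG preprocessing (band-pass filtering, rectification, mean-amplitude extraction) to compare muscle activity across conditions. The sampling rate, filter band, and data are assumptions, not details reported by the study.

```python
# A minimal sketch of conventional facial-EMG preprocessing: band-pass
# filter, full-wave rectify, average amplitude. FS and the 20-450 Hz band
# are assumed values, not the study's recording parameters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # Hz, assumed EMG sampling rate

def emg_amplitude(raw: np.ndarray) -> float:
    """Band-pass filter (20-450 Hz), rectify, return mean amplitude."""
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, raw)
    return float(np.mean(np.abs(filtered)))

# e.g. compare zygomaticus activity during imitation vs. passive viewing
rng = np.random.default_rng(0)
imitation, viewing = rng.normal(size=5000), rng.normal(size=5000)
print(emg_amplitude(imitation), emg_amplitude(viewing))
```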
Project description: Background: Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD). Methods: We recruited 157 children with typical development (TD) and 36 children with ASD in Paris and Nice to perform two experimental tasks designed to produce FEs with emotional valence. FEs were explored through judges' ratings and through random forest (RF) classifiers. To do so, we located a set of 49 facial landmarks in the task videos, generated a set of geometric and appearance features, and used RF classifiers to explore how children with ASD differed from TD children when producing FEs. Results: Using multivariate models including other factors known to predict FEs (age, gender, intellectual quotient, emotion subtype, cultural background), ratings from expert raters showed that children with ASD had more difficulty producing FEs than TD children. In addition, when we explored how the RF classifiers performed, we found that the classification tasks, except for sadness, were highly accurate and that the classifiers needed more facial landmarks to achieve the best classification for children with ASD. Confusion matrices showed that when RF classifiers were tested on children with ASD, anger was often confused with happiness. Limitations: The sample size of the ASD group was smaller than that of the TD group; we used several control calculations to compensate for this limitation. Conclusion: Children with ASD have more difficulty producing socially meaningful FEs. The computer vision methods we used to explore FE dynamics also highlight that the production of FEs by children with ASD carries more ambiguity.
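A minimal sketch of a landmark-based RF pipeline of the kind described is given below. The specific feature construction (pairwise distances between the 49 landmarks) and the synthetic data are illustrative assumptions, not the authors' exact geometric and appearance feature set.

```python
# Hedged sketch: geometric features from 49 facial landmarks feed a random
# forest classifier of emotion labels, as in the pipeline described above.
# Pairwise-distance features and toy data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pairwise_distances(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (n_frames, 49, 2) -> flattened pairwise-distance features."""
    diffs = landmarks[:, :, None, :] - landmarks[:, None, :, :]
    d = np.linalg.norm(diffs, axis=-1)          # (n_frames, 49, 49)
    iu = np.triu_indices(49, k=1)
    return d[:, iu[0], iu[1]]                   # (n_frames, 1176)

rng = np.random.default_rng(0)
X = pairwise_distances(rng.normal(size=(300, 49, 2)))
y = rng.integers(0, 4, size=300)                # e.g. joy/anger/sadness/neutral
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
```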
Project description: Background: It remains inconclusive whether children with autism spectrum disorder (ASD) experience a deficit in facial emotion recognition. The dopaminergic pathway has been implicated in the pathogenesis of ASD. This study aimed to determine facial emotion recognition performance and its correlation with polymorphisms in dopaminergic pathway genes in children with ASD. Methods: Facial emotion recognition was examined in 98 children with ASD and 60 age- and gender-matched healthy controls. The severity of ASD was evaluated using the Childhood Autism Rating Scale (CARS). DNA from blood cells was used to genotype single-nucleotide polymorphisms (SNPs) in dopaminergic pathway genes: DBH rs1611115, DDC rs6592961, DRD1 rs251937, DRD2 rs4630328, and DRD3 rs167771. Results: Children with ASD took significantly longer to recognize all facial emotions, and their interpretations were less accurate for anger at low intensity and for fear at both low and high intensities. Disease severity was associated with significant delays in recognizing all facial emotions and with decreased accuracy in recognizing happiness and anger at low intensity. Accuracy in recognizing fear at high intensity and sadness at low intensity was associated with rs251937 and rs4630328, respectively, in children with ASD. Multivariate logistic regression analysis revealed that SNP rs167771, response time for the recognition of happiness, sadness, and fear, and accuracy in recognizing anger and fear were all associated with the risk of childhood ASD. Conclusions: Children with ASD experience a deficit in facial emotion recognition, and certain SNPs in dopaminergic pathway genes are associated with accuracy in recognizing specific facial emotions.
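The multivariate logistic regression step might look like the following hedged sketch, with ASD status regressed on a genotype factor plus recognition response times and accuracies. All variable names, the genotype coding, and the toy data are assumptions rather than the authors' actual specification.

```python
# Hedged sketch: multivariate logistic regression of ASD status on a SNP
# genotype factor plus recognition response times and accuracies. Names,
# genotype labels, and toy data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 158  # toy: 98 ASD + 60 controls
df = pd.DataFrame({
    "asd": rng.integers(0, 2, n),
    "rs167771": rng.choice(["AA", "AG", "GG"], n),  # assumed genotype coding
    "rt_happy": rng.normal(1.5, 0.3, n),
    "rt_sad": rng.normal(1.8, 0.3, n),
    "rt_fear": rng.normal(2.0, 0.3, n),
    "acc_anger": rng.random(n),
    "acc_fear": rng.random(n),
})
model = smf.logit(
    "asd ~ C(rs167771) + rt_happy + rt_sad + rt_fear + acc_anger + acc_fear",
    data=df,
).fit()
print(np.exp(model.params))  # coefficients as odds ratios
```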
Project description: Despite theoretical claims that emotions are primarily communicated through prototypic facial expressions, empirical evidence is surprisingly scarce. This study aimed to (a) test whether children produced more components of a prototypic emotional facial expression during situations judged or self-reported to involve the corresponding emotion than during situations involving other emotions ("intersituational specificity"), (b) test whether children produced more components of the prototypic expression corresponding to a situation's judged or self-reported emotion than components of other emotional expressions ("intrasituational specificity"), and (c) examine coherence between children's self-reported emotional experience and observers' judgments of children's emotions. One hundred and twenty children (ages 7-9) were video-recorded during a discussion with their mothers, and emotion ratings were obtained for 441 episodes. Children's nonverbal behaviors were judged by observers and coded by FACS-trained researchers. Children's self-reported emotion corresponded significantly to observers' judgments of joy, anger, fear, and sadness, but not surprise. Multilevel modeling revealed that children produced joy facial expressions more in joy episodes than in non-joy episodes (supporting intersituational specificity for joy) and produced more joy and surprise expressions than other emotional expressions in joy and surprise episodes (supporting intrasituational specificity for joy and surprise). However, children produced anger, fear, and sadness expressions more in noncorresponding episodes and produced these expressions less than other expressions in corresponding episodes. The findings suggest that the communication of negative emotion during social interactions, as indexed by agreement between self-report and observer judgments, may rely less on prototypic facial expressions than is often theoretically assumed.
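A multilevel model of this kind can be sketched as below, with expression-component counts per episode nested within children via a random intercept. Variable names, the data layout, and the toy data are assumptions about the analysis, not the authors' actual model.

```python
# Hedged sketch: multilevel model with episodes nested within children
# (random intercept per child), testing whether joy-expression components
# are more frequent in joy episodes. Names and toy data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 441  # toy: one row per rated episode
df = pd.DataFrame({
    "joy_components": rng.poisson(1.5, n).astype(float),
    "episode_emotion": rng.choice(["joy", "anger", "fear", "sadness"], n),
    "child_id": rng.integers(0, 120, n),
})
model = smf.mixedlm(
    "joy_components ~ C(episode_emotion)",
    data=df,
    groups=df["child_id"],  # episodes nested within children
).fit()
print(model.summary())
```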
Project description: There is surprisingly little empirical evidence supporting theoretical and anecdotal claims regarding the spontaneous production of the prototypic facial expressions used in numerous emotion recognition studies. Proponents of innate prototypic expressions believe that this lack of evidence may be due to ethical restrictions against presenting powerful elicitors in the lab. The current popularity of internet platforms designed for the public sharing of videos allows investigators to shed light on this debate by examining naturally occurring facial expressions outside the laboratory. An Internet prank ("Scary Maze") has provided a unique opportunity to observe children reacting to a consistent fear- and surprise-inducing stimulus: the unexpected presentation of a "scary face" during an online maze game. The purpose of this study was to examine children's facial expressions in this naturalistic setting. Emotion ratings of non-facial behaviour (provided by untrained undergraduates) and anatomically based facial codes were obtained from 60 YouTube videos of children (ages 4-7). Emotion ratings were highest for fear and surprise, and correspondingly, children displayed more facial expressions of fear and surprise than of other emotions (e.g., anger, joy). These findings provide partial support for the ecological validity of fear and surprise expressions; still, prototypic expressions were produced by fewer than half of the children.
Project description: Similar to adults with schizophrenia, youth at high risk for developing schizophrenia have difficulty recognizing emotions in faces. These difficulties might index vulnerability for schizophrenia and play a role in the development of the illness. Facial emotion recognition (FER) impairments have been implicated in declining social functioning during the prodromal phase of illness and are thus a potential target for early intervention efforts. This study examined 9- to 14-year-old children: 34 children who presented a triad of well-replicated antecedents of schizophrenia (ASz), including motor and/or speech delays, clinically relevant internalizing and/or externalizing problems, and psychotic-like experiences (PLEs), and 34 typically developing (TD) children who presented none of these antecedents. An established FER task (ER40) was used to assess correct recognition of happy, sad, angry, fearful, and neutral expressions, and misperception responses were examined for each emotion type. Relative to TD children, ASz children showed an overall impairment in FER. Further, ASz children misattributed neutral expressions to faces displaying other emotions and also more often mislabeled a neutral expression as sad compared with healthy peers. The inability to accurately discriminate subtle differences in facial emotion and the misinterpretation of neutral expressions as sad may contribute to the initiation and/or persistence of PLEs. Interventions that are effective in teaching adults to recognize emotions in faces could potentially benefit children presenting with antecedents of schizophrenia.
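The misattribution pattern reported above is the kind of result read off a confusion matrix over ER40 responses. The toy sketch below only illustrates the format; the labels and trial data are invented for illustration.

```python
# Hedged sketch: a confusion matrix over emotion-recognition responses
# reveals misattributions such as neutral-as-sad. Toy data, not ER40 data.
from sklearn.metrics import confusion_matrix

labels = ["happy", "sad", "angry", "fearful", "neutral"]
# true expression vs. child's response, one pair per trial (invented)
y_true = ["neutral", "neutral", "sad", "happy", "neutral", "angry"]
y_pred = ["sad",     "neutral", "sad", "happy", "sad",     "happy"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)  # the "neutral" row shows how often neutral was labeled as sad
```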
Project description: Physiological signals may be used as objective markers to identify emotions, which play relevant roles in social and daily life. Measuring these signals with contact-free techniques, such as infrared thermal imaging (IRTI), is indispensable for individuals with sensory sensitivity. The goal of this study was to propose an experimental design to analyze five emotions (disgust, fear, happiness, sadness and surprise) from facial thermal images of typically developing (TD) children aged 7-11 years using emissivity variation, as recorded by IRTI. The emotion analysis considered emotional dimensions (valence and arousal), the two facial sides and emotion classification accuracy. The results demonstrate the efficiency of the experimental design, with notable findings such as a correlation between valence and thermal decrement in the nose; disgust and happiness as potent triggers of facial emissivity variations; and significant emissivity variations in the nose, cheeks and periorbital regions associated with different emotions. Moreover, facial thermal asymmetry was revealed, with a distinct thermal tendency in the cheeks, and classification accuracy reached a mean value greater than 85%. Emissivity variations thus proved an efficient marker for analyzing emotions in facial thermal images, and IRTI was confirmed to be an outstanding technique for studying emotions. This study contributes a robust dataset for analyzing the emotions of 7-11-year-old TD children, an age range for which there is a gap in the literature.
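Region-of-interest emissivity features of the kind used here can be sketched as follows: the per-region mean signal change from a pre-stimulus baseline, for regions such as the nose, cheeks, and periorbital area. The ROI coordinates, frame size, and data below are placeholder assumptions, not the study's calibration.

```python
# Hedged sketch: per-ROI change in mean thermal signal relative to a
# pre-stimulus baseline frame. ROI positions and frame size are assumed.
import numpy as np

ROIS = {  # (row_slice, col_slice) in a 240x320 thermal frame -- assumed
    "nose": (slice(120, 160), slice(140, 180)),
    "left_cheek": (slice(130, 170), slice(80, 120)),
    "right_cheek": (slice(130, 170), slice(200, 240)),
}

def roi_deltas(baseline: np.ndarray, frame: np.ndarray) -> dict:
    """Per-ROI change in mean thermal signal relative to baseline."""
    return {
        name: float(frame[r, c].mean() - baseline[r, c].mean())
        for name, (r, c) in ROIS.items()
    }

rng = np.random.default_rng(0)
print(roi_deltas(rng.normal(size=(240, 320)), rng.normal(size=(240, 320))))
```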
Project description: Automatic motor mimicry is essential to the normal processing of perceived emotion, and disrupted automatic imitation might underpin socio-emotional deficits in neurodegenerative diseases, particularly the frontotemporal dementias. However, the pathophysiology of emotional reactivity in these diseases has not been elucidated. We studied facial electromyographic responses during emotion identification on viewing videos of dynamic facial expressions in 37 patients representing canonical frontotemporal dementia syndromes versus 21 healthy older individuals. Neuroanatomical associations of emotional expression identification accuracy and facial muscle reactivity were assessed using voxel-based morphometry. Controls showed characteristic profiles of automatic imitation, and this response predicted correct emotion identification. Automatic imitation was reduced in the behavioural and right temporal variant groups, while the normal coupling between imitation and correct identification was lost in the right temporal and semantic variant groups. Grey matter correlates of emotion identification and imitation were delineated within a distributed network including primary visual and motor, prefrontal, insular, anterior temporal and temporo-occipital junctional areas, with common involvement of supplementary motor cortex across syndromes. Impaired emotional mimesis may be a core mechanism of disordered emotional signal understanding and reactivity in frontotemporal dementia, with implications for the development of novel physiological biomarkers of socio-emotional dysfunction in these diseases.
Project description: Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection we imitate observed expressions by engaging similar facial muscles, and it has been proposed that a cognitive system that matches observed and performed actions controls imitation and contributes to emotion understanding. However, little is known regarding the consequences of recognizing affective states for other forms of imitation that are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions, and a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait levels of agreeableness, had no impact on imitation: although participants readily identified trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between two and five times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate "in the moment" states than to enduring traits. These data support the view that a smile primes multiple forms of imitation, including the copying of actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation.
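One common way to quantify evidence for a null effect, as in the Bayesian analysis mentioned above, is the BIC approximation to the Bayes factor (Wagenmakers, 2007). The sketch below applies that method to toy data; the method choice and all variable names are assumptions, not the authors' reported procedure.

```python
# Hedged sketch: BIC approximation of the Bayes factor, comparing a model
# with the agreeableness effect against a null model.
# BF01 ~ exp((BIC_effect - BIC_null) / 2); larger values favour the null.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rt_diff": rng.normal(50, 20, 80),        # automatic-imitation effect (toy)
    "agreeableness": rng.integers(0, 2, 80),  # low/high trait cue (toy)
})
m_null = smf.ols("rt_diff ~ 1", df).fit()
m_eff = smf.ols("rt_diff ~ agreeableness", df).fit()
bf01 = np.exp((m_eff.bic - m_null.bic) / 2)   # evidence for the null
print(bf01)
```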
Project description: There is increasing interest in clarifying how different facial emotion expressions are perceived by people from different cultures, of different ages and sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and of how these might be modulated by age and sex. We present a database of East Asian facial expression stimuli, enacted by young and older, male and female Taiwanese posers using the Facial Action Coding System (FACS). Combined with a prior database, the present set consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for the intensities of multiple dimensions of emotion and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified the effects of age and sex. We also applied commercial software to extract computer-based metrics of emotion in the photographs. Taiwanese raters perceived happy faces as one category; sad, angry, and disgusted expressions as a second; and fearful and surprised expressions as a third. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar to, or accentuated relative to, that of young females. The commercial software dissociated the six emotions according to the FACS, demonstrating that the defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than the Western-based definitions used in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable for interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, available upon request, for conducting such research.
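One standard way to characterize the dimensionality of perceived emotions from rater scores is principal component analysis. The sketch below shows that approach on a toy matrix of the stated shape (628 photographs by seven emotion-intensity ratings); PCA is an assumption about the analysis, not necessarily the authors' exact multivariate method.

```python
# Hedged sketch: PCA over rater intensity scores to see how many dimensions
# carry the variance and how emotions load together. Toy data, assumed layout.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 628 photographs rated on 7 emotion-intensity dimensions (toy data)
scores = rng.random((628, 7))
pca = PCA().fit(scores)
print(pca.explained_variance_ratio_)  # variance carried by each component
print(pca.components_[:3].round(2))   # loadings hint at emotion groupings
```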