A Wearable High-Resolution Facial Electromyography for Long Term Recordings in Freely Behaving Humans.
ABSTRACT: Human facial expressions are a complex capacity, carrying important psychological and neurological information. Facial expressions typically involve the co-activation of several muscles; they vary between individuals, between voluntary and spontaneous expressions, and depend strongly on personal interpretation. Accordingly, while high-resolution recording of muscle activation in a non-laboratory setting offers exciting opportunities, it remains a major challenge. This paper describes a wearable, non-invasive method for the objective mapping of facial muscle activation and demonstrates its application in a natural setting. We focus on muscle activation associated with "enjoyment", "social" and "masked" smiles, three categories with distinct social meanings. We use an innovative dry, soft electrode array designed specifically for facial surface electromyography recording, a customized independent component analysis (ICA) algorithm, and a short training procedure to achieve the desired mapping. First, identification of the orbicularis oculi and the levator labii superioris was demonstrated from voluntary expressions. Second, the zygomaticus major was identified from voluntary and spontaneous Duchenne and non-Duchenne smiles. Finally, using a wireless device in an unmodified work environment revealed expressions of diverse emotions during face-to-face interaction. Our high-resolution, crosstalk-free mapping, along with excellent user convenience, opens new opportunities in gaming, virtual reality, biofeedback, and objective psychological and neurological assessment.
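The core of such a pipeline, unmixing a multichannel recording into muscle-specific sources, can be illustrated compactly. Below is a minimal Python sketch that substitutes off-the-shelf FastICA for the paper's customized ICA; the sampling rate, band-pass range, array shapes, and source count are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of ICA-based unmixing of multichannel facial sEMG.
# The paper uses a customized ICA; off-the-shelf FastICA stands in here.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

FS = 2000  # sampling rate in Hz (assumed)

def separate_sources(emg, n_sources=8):
    """emg: (n_samples, n_channels) raw recordings from the electrode array."""
    # Band-pass to the typical surface-EMG band before unmixing.
    sos = butter(4, [20, 450], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, emg, axis=0)
    ica = FastICA(n_components=n_sources, random_state=0)
    sources = ica.fit_transform(filtered)  # (n_samples, n_sources) activations
    mixing = ica.mixing_                   # (n_channels, n_sources) spatial maps
    # Each column of `mixing` is a spatial pattern over the array; its peak
    # channels hint at which muscle (e.g., zygomaticus major) drives the source.
    return sources, mixing
```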
Project description: The automatic detection of facial expressions of pain is needed to ensure accurate pain assessment in patients who are unable to self-report pain. To address the challenges of automatically determining pain levels from facial expressions in clinical patient monitoring, a surface electromyography method was tested for feasibility in healthy volunteers. In the current study, two types of experimental, gradually increasing pain stimuli were induced in thirty-one healthy volunteers. We used surface electromyography to measure the activity of five facial muscles and thereby detect facial expressions during pain induction. Statistical tests were used to analyze the continuous electromyography data, and supervised machine learning was applied to build a pain intensity prediction model. Muscle activation of the corrugator supercilii was most strongly associated with self-reported pain, and the levator labii superioris and orbicularis oculi showed a statistically significant increase in muscle activation when the pain stimulus reached subjects' self-reported pain thresholds. The two features most strongly associated with pain, the waveform lengths of the corrugator supercilii and levator labii superioris signals, were selected for a prediction model. The pain prediction model achieved a c-index of 0.64. The most detectable differences in muscle activity during the pain experience were connected to eyebrow lowering, nose wrinkling, and upper lip raising. As the performance of the prediction model remains modest, albeit with statistically significant ordinal classification, we suggest testing with a larger sample size to further explore the variables that affect variation in expressiveness and subjective pain experience.
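To make the selected feature concrete, the sketch below computes sliding-window waveform length in Python. The window and hop sizes are illustrative assumptions, not the study's settings.

```python
# Minimal sketch of the waveform-length (WL) EMG feature named above.
import numpy as np

def waveform_length(x, win=400, hop=200):
    """Sliding-window waveform length of a 1-D EMG signal.

    WL over a window is the sum of absolute sample-to-sample differences;
    it grows with both the amplitude and the frequency of muscle activity.
    """
    diffs = np.abs(np.diff(x))
    starts = range(0, len(diffs) - win + 1, hop)
    return np.array([diffs[s:s + win].sum() for s in starts])
```

Per-window WL traces of the corrugator supercilii and levator labii superioris channels could then feed any ordinal classifier, mirroring the shape of the prediction setup described above.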
Project description: Smiles are the facial expressions most commonly and frequently used by human beings. Some scholars have claimed that the low accuracy in recognizing genuine smiles is explained by the perceptual-attentional hypothesis: observers either do not attend to the responsible cues or are unable to recognize them (usually the Duchenne marker, AU6, visible as contraction of the muscles around the eyes). We investigated whether training (instructing participants to attend either to the Duchenne marker or to mouth movement) improves the recognition of genuine smiles, in terms of both accuracy and confidence. Results indicated that attention to mouth movement improves participants' ability to distinguish between genuine and posed smiles, while ruling out alternative explanations such as sample distribution and the intensity of lip pulling (AU12). Generalizing this conclusion requires further investigation. This study further argues that the perceptual-attentional hypothesis can explain the recognition of smile genuineness.
Project description: Tracking emotional responses as they unfold has been one of the hallmarks of applied neuroscience and related disciplines, but recent studies suggest that automatic tracking of facial expressions has low validity. In this study, we focused on the direct measurement of the facial muscles involved in expressions such as smiling. We used single-channel surface electromyography (sEMG) to evaluate activity of the zygomaticus major face muscle while participants watched music videos. Participants then rated each video on its emotional tone ("Valence"), their personal preference ("Liking"), and whether the video displayed strength and impression ("Dominance"). Using a minimal recording setup, we employed three measures to characterize muscular activity associated with spontaneous smiles: the total time spent smiling (ZygoNum), the average duration of smiles (ZygoLen), and instances of high valence (ZygoTrace). Our results demonstrate that Valence was the emotional dimension most related to zygomaticus activity, with ZygoNum showing higher discriminatory power than ZygoLen for Valence quantification. An additional investigation using fractal properties of the sEMG time series confirmed previous Facial Action Coding System (FACS) studies documenting a smoother contraction of facial muscles during enjoyment smiles. Further analysis using ZygoTrace responses over time to the video events discerned "high valence" stimuli with 76% accuracy. This approach was further validated against previous findings on valence detection using features derived from a single-channel EEG setup. We discuss these results in light of the recent replication problems of facial expression measures and the need for methods that reliably assess emotional responses in more challenging conditions, such as virtual reality, in which facial expressions are often covered by the equipment used.
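As a minimal sketch of the first two descriptors, the Python snippet below assumes a rectified, smoothed zygomaticus sEMG envelope and a per-participant activation threshold; the envelope sampling rate and threshold are assumptions, and the study's exact preprocessing may differ.

```python
# Minimal sketch of ZygoNum (total time smiling) and ZygoLen (mean smile
# duration) from a thresholded sEMG envelope.
import numpy as np

FS = 1000  # envelope sampling rate in Hz (assumed)

def smile_features(envelope, threshold):
    active = envelope > threshold                    # boolean smile mask
    edges = np.diff(active.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    if active[0]:                                    # smile already in progress
        onsets = np.r_[0, onsets]
    if active[-1]:                                   # smile runs to the end
        offsets = np.r_[offsets, active.size]
    durations = (offsets - onsets) / FS              # seconds per smile bout
    zygo_num = active.sum() / FS                     # total time spent smiling
    zygo_len = durations.mean() if durations.size else 0.0
    return zygo_num, zygo_len
```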
Project description: Facial mimicry is described by embodied cognition theories as a human mirror system-based neural mechanism underpinning emotion recognition. It could play a critical role in the Self-Mirroring Technique (SMT), a method used in psychotherapy to foster patients' emotion recognition by showing them a video of their own face recorded during an emotionally salient moment. However, dissociation in facial mimicry during the perception of one's own versus others' emotions has not yet been investigated. In the present study, we measured electromyographic (EMG) activity from three facial muscles, namely the zygomaticus major (ZM), the corrugator supercilii (CS), and the levator labii superioris (LLS), while participants viewed video clips depicting their own face or unknown faces expressing anger, happiness, sadness, disgust, fear, or a neutral state. The results showed that processing self versus other expressions differentially modulated emotion perception at both the explicit and implicit muscular levels. Participants were significantly less accurate in recognizing their own versus others' neutral expressions and rated fearful, disgusted, and neutral expressions as more arousing in the self condition than in the other condition. Facial EMG likewise showed different activation for self versus other facial expressions. Increased activation of the ZM muscle was found in the self condition compared to the other condition for anger and disgust. Activation of the CS muscle was lower for self than for others' expressions while processing happy, sad, fearful, or neutral expressions. Finally, the LLS muscle showed increased activation in the self condition compared to the other condition for sad and fearful expressions, but increased activation in the other condition compared to the self condition for happy and neutral expressions. Taken together, this complex pattern of results suggests a dissociation at both the explicit and implicit levels in the emotional processing of self versus other emotions, which, in light of the Emotion in Context view, suggests that SMT effectiveness is primarily due to a contextual-interpretative process that occurs before facial mimicry takes place.
Project description: Smiles that vary in muscular configuration also vary in how they are perceived. Previous research suggests that "Duchenne smiles", indicated by the combined action of the orbicularis oculi (cheek raiser) and zygomaticus major (lip corner puller) muscles, signal enjoyment. That research compared perceptions of Duchenne and non-Duchenne smiles among individuals voluntarily innervating or inhibiting the orbicularis oculi muscle. Here we used a novel set of highly controlled stimuli to test the perception of smiles: photographs of patients taken before and after botulinum toxin treatment for crow's feet lines, which selectively paralyzed the lateral orbicularis oculi muscle and removed visible lateral eye wrinkles. Smiles in which the orbicularis muscle was active (prior to treatment) were rated as more felt, spontaneous, intense, and happier. Post-treatment, patients looked younger, although not more attractive. We discuss the potential implications of these findings within the context of emotion science and clinical research on botulinum toxin.
Project description: A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.
Project description: Physical proximity is important in social interactions. Here, we assessed whether simulated physical proximity modulates the perceived intensity of facial emotional expressions and their associated physiological signatures during observation or imitation of these expressions. Forty-four healthy volunteers rated the intensities of dynamic angry or happy facial expressions presented at two simulated locations, proximal (0.5 m) and distant (3 m) from the participants. We tested whether simulated physical proximity affected the spontaneous (in the observation task) and voluntary (in the imitation task) physiological responses (activity of the corrugator supercilii face muscle and pupil diameter) as well as subsequent ratings of emotional intensity. Angry expressions provoked relative activation of the corrugator supercilii muscle and pupil dilation, whereas happy expressions induced a decrease in corrugator supercilii muscle activity. In the proximal condition, these responses were enhanced during both observation and imitation of the facial expressions and were accompanied by an increase in subsequent affective ratings. In addition, individual variation in condition-related EMG activation during the imitation of angry expressions predicted the increase in subsequent emotional ratings. In sum, our results reveal novel insights into the impact of physical proximity on the perception of emotional expressions, with early proximity-induced enhancements of physiological responses followed by increased intensity ratings of facial emotional expressions.
Project description: When people are being evaluated, their whole body responds. Verbal feedback causes robust activation in the hypothalamic-pituitary-adrenal (HPA) axis. What about nonverbal evaluative feedback? Recent discoveries about the social functions of facial expression have documented three morphologically distinct smiles, which serve the functions of reinforcement, social smoothing, and social challenge. In the present study, participants saw instances of one of three smile types from an evaluator during a modified social stress test. We find evidence in support of the claim that functionally different smiles are sufficient to augment or dampen HPA axis activity. We also find that responses to the meanings of smiles as evaluative feedback are more differentiated in individuals with higher baseline high-frequency heart rate variability (HF-HRV), which is associated with facial expression recognition accuracy. The differentiation is especially evident in response to smiles that are more ambiguous in context. Findings suggest that facial expressions have deep physiological implications and that smiles regulate the social world in a highly nuanced fashion.
Project description: According to the familiar axiom, the eyes are the window to the soul. However, wearing masks to prevent the spread of viruses such as COVID-19 involves obscuring a large portion of the face. Do the eyes carry sufficient information to allow for the accurate perception of emotions in dynamic expressions obscured by masks? What about the perception of the meanings of specific smiles? We addressed these questions in two studies. In the first, participants saw dynamic expressions of happiness, disgust, anger, and surprise that were covered by N95, surgical, or cloth masks or were uncovered and rated the extent to which the expressions conveyed each of the same four emotions. Across conditions, participants perceived significantly lower levels of the expressed (target) emotion in masked faces, and this was particularly true for expressions composed of more facial action in the lower part of the face. Higher levels of other (non-target) emotions were also perceived in masked expressions. In the second study, participants rated the extent to which three categories of smiles (reward, affiliation, and dominance) conveyed positive feelings, reassurance, and superiority, respectively. Masked smiles communicated less of the target signal than unmasked smiles, but not more of other possible signals. The present work extends recent studies of the effects of masked faces on the perception of emotion in its novel use of dynamic facial expressions (as opposed to still images) and the investigation of different types of smiles.
Project description: Time is a key factor to consider in Autism Spectrum Disorder: detecting the condition as early as possible is crucial to treatment success. Despite advances in the literature, it is still difficult to identify early markers that effectively forecast the manifestation of symptoms. Artificial intelligence (AI) provides effective alternatives for behavior screening. To this end, we investigated facial expressions in 18 autistic and 15 typical infants during their first ecological interactions, between 6 and 12 months of age. We employed OpenFace, AI-based software designed to systematically analyze facial micro-movements in images, to extract the subtle dynamics of social smiles in unconstrained home videos. Reduced frequency and activation intensity of social smiles were found for children with autism. Machine learning models enabled us to map facial behavior consistently, exposing early differences hardly detectable by the non-expert naked eye. This outcome contributes to enhancing the potential of AI as a supportive tool for the clinical framework.
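As an illustration of this kind of pipeline, the sketch below derives per-video smile frequency and mean intensity from OpenFace per-frame output. The AU06_r and AU12_r column names follow OpenFace 2.x conventions, while the file path, intensity threshold, and smile definition are assumptions rather than the study's criteria.

```python
# Minimal sketch: social-smile frequency and intensity from OpenFace output.
import pandas as pd

def smile_stats(csv_path, threshold=1.0):
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()  # OpenFace pads column names with spaces
    # Treat a frame as a "social smile" when lip-corner puller (AU12) and
    # cheek raiser (AU6) intensities both exceed the (assumed) threshold.
    smiling = (df["AU12_r"] > threshold) & (df["AU06_r"] > threshold)
    freq = smiling.mean()                # fraction of frames spent smiling
    intensity = df.loc[smiling, "AU12_r"].mean() if smiling.any() else 0.0
    return freq, intensity
```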