Project description:Background: Hypnotherapy has a potential role in modulating attention bias to treat social anxiety disorder (SAD). This study aimed to verify whether hypnotherapy can reduce social anxiety by changing attentional bias. The primary objective was to explore the influence of hypnosis on various aspects of attention processing, specifically how it affects attention bias and social anxiety. Methods: This study included 69 participants with SAD who were assigned to three groups based on their scores on the Liebowitz Social Anxiety Scale (LSAS). The experimental group (n = 23) received hypnosis treatment once a week for a total of six sessions, while the control group (n = 23) and the baseline group (n = 23) did not receive any treatment. To evaluate whether hypnosis could alleviate SAD and attention bias towards threatening stimuli, we employed questionnaires and an odd-one-out task accompanied by electroencephalography (EEG) recordings. Results: Under the attention sensitivity conditions, the experimental group exhibited reduced N170 and LPP amplitudes at posttest, with a similar N170 and LPP reduction under the attention disengagement conditions. Notably, symptom improvements were positively correlated with the reduction in N170 and LPP amplitude across conditions. Conclusion: Hypnosis treatment modulates the early face processing and late emotional evaluation of threat-related stimuli in SAD patients. These findings suggest that the N170 and LPP are important biomarkers for the treatment of SAD.
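To make the amplitude-based analysis above concrete, the following is a minimal sketch of how mean N170/LPP amplitudes over fixed time windows can be extracted from epoched EEG and how their pre-to-post reduction can be correlated with symptom improvement. It uses synthetic data; the sampling rate, baseline, component windows, and channel indices are illustrative assumptions, not the study's actual parameters.

import numpy as np
from scipy.stats import pearsonr

FS = 500          # sampling rate in Hz (assumed)
BASELINE = 0.2    # pre-stimulus baseline in seconds (assumed)

def mean_amplitude(epochs, tmin, tmax, channels):
    # epochs: array of shape (n_trials, n_channels, n_samples), baseline included
    start = int((tmin + BASELINE) * FS)
    stop = int((tmax + BASELINE) * FS)
    return epochs[:, channels, start:stop].mean()

def amplitude_reduction(pre_epochs, post_epochs, window, channels):
    # "Reduction" = drop in absolute mean amplitude from pre- to post-treatment,
    # so the same convention covers the negative-going N170 and positive-going LPP.
    pre = abs(mean_amplitude(pre_epochs, *window, channels))
    post = abs(mean_amplitude(post_epochs, *window, channels))
    return pre - post

N170_WINDOW = (0.13, 0.20)   # seconds post-stimulus (illustrative)
LPP_WINDOW = (0.40, 0.80)    # seconds post-stimulus (illustrative)
OT_CHANNELS = [14, 15, 16]   # occipito-temporal channel indices (illustrative)

rng = np.random.default_rng(0)
n170_reduction, lsas_improvement = [], []
for _ in range(23):                              # one entry per experimental-group participant
    pre = rng.normal(size=(40, 32, 550))         # synthetic pre-treatment epochs
    post = rng.normal(size=(40, 32, 550))        # synthetic post-treatment epochs
    n170_reduction.append(amplitude_reduction(pre, post, N170_WINDOW, OT_CHANNELS))
    lsas_improvement.append(rng.normal())        # synthetic pre-minus-post LSAS change

r, p = pearsonr(lsas_improvement, n170_reduction)   # the LPP is handled analogously with LPP_WINDOW
print(f"r = {r:.2f}, p = {p:.3f}")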
Project description:Prior work has provided conceptual support for developmental changes in face and object processing, such that face processing, as captured by the N290 event-related potential (ERP) component in infancy, may develop into the N170 in adulthood, and motivated attention, as captured by the negative central (Nc) in infancy, may develop into the late positive potential (LPP). The present study examined these neural correlates in 12-month-old infants and their mothers (N = 33 dyads). Dyads completed a viewing task consisting of familiar and novel face and toy stimuli while electroencephalography was recorded. Results suggest that for mothers, the N170 was larger for faces than toys, regardless of familiarity, and the LPP was largest for familiar faces. In infants, the N290 was somewhat larger for faces than toys (p < .10); the Nc did not vary by condition. Adult ERPs demonstrated fair to good reliability; reliability of infant ERPs was lower and was influenced by looking behaviors. Intergenerational associations were strongest between the LPP and Nc, particularly when electrode and time window were taken into account. Refinement of data handling and ERP scoring procedures for infant ERPs is a crucial next step for estimating intergenerational associations and for further examining developmental changes in face and object processing.
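For readers unfamiliar with how ERP reliability of the kind reported above is commonly quantified, here is a small illustrative sketch (not the authors' procedure) of a split-half estimate with Spearman-Brown correction; the participant count and trial numbers are synthetic placeholders.

import numpy as np
from scipy.stats import pearsonr

def split_half_reliability(trial_scores):
    # trial_scores: list of 1-D arrays, one array of single-trial component
    # amplitudes per participant; split into odd/even halves and correlate.
    odd = np.array([s[::2].mean() for s in trial_scores])
    even = np.array([s[1::2].mean() for s in trial_scores])
    r, _ = pearsonr(odd, even)
    return 2 * r / (1 + r)        # Spearman-Brown correction to full test length

rng = np.random.default_rng(12)
scores = [rng.normal(loc=rng.normal(), size=30) for _ in range(33)]  # synthetic participants
print(split_half_reliability(scores))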
Project description:Studies examining children's face perception have revealed developmental changes in early, face-sensitive event-related potential (ERP) components. Children also tend to show racial biases in their face perception and evaluation of others. The current study examined how early face-sensitive ERPs are influenced by face race in children and adults. A second objective examined face recognition proficiency and implicit racial bias in relation to ERP responses to own- and other-race faces. Electroencephalographic responses were recorded while Caucasian children and adults viewed Caucasian and East Asian faces. Participants also completed recognition tasks and an Implicit Association Test (IAT) with Caucasian and East Asian faces. Other-race faces elicited larger P100 amplitudes than own-race faces. Furthermore, adults with better other-race recognition proficiency showed larger P100 amplitude responses to other-race faces compared with adults with worse other-race recognition proficiency. In addition, larger implicit biases favoring own-race individuals were associated with larger P100 to N170 peak-to-peak amplitudes for other-race faces in adults. In contrast, larger implicit biases favoring own-race individuals were associated with smaller P100 to N170 peak-to-peak amplitudes for both own- and other-race faces in 8- to 10-year-olds. There was also an age-related decrease in P100 to N170 peak-to-peak amplitudes for own-race faces among 5- to 10-year-olds with better own-race recognition proficiency. The age-related decrease in N170 latency for other-race faces was also more pronounced in 5- to 10-year-olds with better other-race recognition proficiency. Thus, recognition proficiency and implicit racial bias are associated with early ERP responses in adults and children, but in different ways.
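Since several of the findings above hinge on P100-to-N170 peak-to-peak amplitudes, the sketch below shows how such a measure is typically computed from an averaged waveform: find the positive P100 peak and the negative N170 peak in their respective windows and take their difference. The sampling rate, baseline, and peak windows are illustrative assumptions rather than the study's settings.

import numpy as np

FS = 500        # sampling rate in Hz (assumed)
BASELINE = 0.1  # pre-stimulus interval in seconds (assumed)

def window_idx(tmin, tmax):
    return int((tmin + BASELINE) * FS), int((tmax + BASELINE) * FS)

def p100_n170_peak_to_peak(erp):
    # erp: averaged waveform at one occipito-temporal channel, shape (n_samples,)
    p1_lo, p1_hi = window_idx(0.08, 0.13)   # P100 search window (assumed)
    n1_lo, n1_hi = window_idx(0.13, 0.20)   # N170 search window (assumed)
    p100 = erp[p1_lo:p1_hi].max()           # most positive point in the P100 window
    n170 = erp[n1_lo:n1_hi].min()           # most negative point in the N170 window
    return p100 - n170

erp = np.random.default_rng(3).normal(size=400)   # synthetic averaged ERP
print(p100_n170_peak_to_peak(erp))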
Project description:Background: Atypicalities in the perception and interpretation of faces and emotional facial expressions have been reported in both autism and attention-deficit/hyperactivity disorder (ADHD) during childhood and adulthood. Investigation of face processing during young adulthood (18 to 25 years), a transition period to full-fledged adulthood, could provide important information on the adult outcomes of autism and ADHD. Methods: In this study, we investigated event-related potentials (ERPs) related to visual face processing in autism, ADHD, and co-occurring autism and ADHD in a large sample of young adults (N = 566). The groups were based on the Diagnostic Interview for ADHD in Adults 2.0 (DIVA-2) and the Autism Diagnostic Observation Schedule-2 (ADOS-2). We analyzed ERPs from two passive viewing tasks previously used in childhood investigations: (1) upright and inverted faces with direct or averted gaze; (2) faces expressing different emotions. Results: Across both tasks, we consistently found lower N170 amplitudes and longer N170 latencies in participants with autism compared to those without. Longer P1 latencies and smaller P3 amplitudes in response to emotional expressions, and longer P3 latencies for upright faces, were also characteristic of the autistic group. Those with ADHD had longer N170 latencies, specific to the face-gaze task. Individuals with both autism and ADHD showed additional alterations in gaze modulation and a lack of the face inversion effect, indexed by a delayed N170. Conclusion: The alterations in N170 in autistic young adults are largely consistent with studies on autistic adults and with some studies in autistic children. These findings suggest that there are identifiable and measurable socio-functional atypicalities in young adults with autism.
Project description:Background: Parkinson's disease is associated with an impaired ability to recognize emotional facial expressions. In addition to a visual processing disorder, a visual recognition disorder may be involved in these patients. Pareidolia is a type of complex visual illusion that permits the interpretation of a vague stimulus as something known to the observer, and Parkinson's patients experience pareidolic illusions. The N170 and N250 waveforms are two event-related potential (ERP) components involved in emotional facial expression recognition. Objective: In this study, we investigated how Parkinson's patients process face and face-pareidolia stimuli at the neural level using the N170, vertex positive potential (VPP), and N250 components of event-related potentials. Methods: To examine face and face-pareidolia processing in Parkinson's patients, we measured the N170, VPP, and N250 components of the event-related brain potentials in a group of 21 participants with Parkinson's disease and 26 control participants. Results: We found that the latencies and amplitudes of N170 and VPP responses to both face and face-pareidolia stimuli were increased, and the amplitude of N250 responses was decreased, in Parkinson's patients compared to the control group. In both control and Parkinson's patients, face stimuli elicited larger ERP amplitudes and shorter latencies than face-pareidolia stimuli. Conclusion: The results of our study show that the ERPs associated with the processing of face and face-pareidolia stimuli reflect altered early-stage neurophysiological activity in the temporoparietal cortex of Parkinson's patients.
Project description:Purpose: Recent findings in preschool children indicated that novel adjective recall was enhanced when adjectives were learned using repeated retrieval with contextual reinstatement (RRCR) compared to repeated study (RS). Recall was similar for learned pictures used during training and new (generalized) pictures with the same adjective features. The current study compared the effects of learning method and learned/generalized pictures on the neural processes mediating the recognition of novel adjectives. Method: Twenty typically developing children aged 4;6-5;11 (years;months) learned four novel adjectives, two using RRCR and two using RS. Five-minute and 1-week tests assessed adjective recall using learned and generalized pictures. Also, at the 1-week visit, event-related potentials (ERPs) were recorded to assess children's processing of learned/generalized pictures, followed by naturally spoken novel adjectives in a match-mismatch paradigm. Results: Naming recall and match-mismatch judgment accuracy were similar for the RS and RRCR conditions and across learned/generalized pictures. However, ERPs revealed more reliable condition effects in the phonological mapping negativity, indexing phonological expectations, and the late positive component, indexing semantic reanalysis, for the adjectives learned in the RRCR relative to the RS condition. Unfamiliar (generalized) pictures elicited larger-amplitude N300 and N400 components relative to learned pictures. Conclusions: Although behavioral accuracy measures suggest similar effects of the RS and RRCR learning conditions, subtle differences in the ERPs underlying novel adjective processing indicate advantages of RRCR for phonological processing and semantic reanalysis. While children readily generalized the novel adjectives, ERPs revealed greater cognitive resources for processing unfamiliar compared to learned pictures of the novel adjective characteristics. Supplemental Material: https://doi.org/10.23641/asha.13683214.
Project description:It has been widely accepted that moral violations that involve impurity (such as spitting in public) induce the emotion of disgust, but there has been debate about whether moral violations that do not involve impurity (such as swearing in public) induce the same emotion. The answer to this question may have implications for understanding where morality comes from and how people make moral judgments. This study aimed to compare the neural mechanisms underlying the two kinds of moral violation by using an affective priming task to test the effect of sentences depicting moral violations with and without physical impurity on the subsequent detection of disgusted faces in a visual search task. After reading each sentence, participants completed the face search task. Behavioral and electrophysiological (event-related potential, or ERP) indices of affective priming (P2, N400, LPP) and attention allocation (N2pc) were analyzed. The behavioral and ERP data showed that moral violations both with and without impurity promoted the detection of disgusted faces (RT, N2pc), whereas moral violations without impurity impeded the detection of neutral faces (N400). No priming effect was found on the P2 or LPP. The results suggest that both types of moral violation influenced the processing of disgusted and neutral faces, but that the temporal characteristics of the underlying neural activity differed.
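Because the N2pc serves as the index of attention allocation in the description above, the following is a small sketch of how an N2pc is conventionally derived: the contralateral-minus-ipsilateral difference at posterior electrodes relative to the target's visual field, averaged over a fixed window. The electrodes (PO7/PO8), time window, and sampling parameters here are assumptions for illustration, not the study's settings.

import numpy as np

FS = 500                                  # sampling rate in Hz (assumed)
BASELINE = 0.2                            # pre-stimulus interval in seconds (assumed)
lo, hi = int((0.20 + BASELINE) * FS), int((0.30 + BASELINE) * FS)   # ~200-300 ms window

def n2pc(po7_left_target, po8_left_target, po7_right_target, po8_right_target):
    # Each input: averaged waveform (n_samples,) for trials with the target
    # (here, the disgusted face) in the named visual field.
    contra = (po8_left_target + po7_right_target) / 2   # electrode opposite the target side
    ipsi = (po7_left_target + po8_right_target) / 2     # electrode on the same side
    return (contra - ipsi)[lo:hi].mean()

rng = np.random.default_rng(7)
waves = [rng.normal(size=400) for _ in range(4)]         # synthetic averaged waveforms
print(n2pc(*waves))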
Project description:A pattern of components from brain event-related potentials (ERPs), which are non-invasive electrical measures of cognitive brain activity, performed well in separating early-stage Alzheimer's disease (AD) subjects from normal-aging control subjects and shows promise for developing a clinical diagnostic for probable AD. A Number-Letter task elicited brain activity related to cognitive processes. In response to the task stimuli, brain activity was recorded as ERPs, whose components were measured by principal components analysis (PCA). The ERP component scores to relevant and irrelevant stimuli were used in discriminant analyses to develop functions that successfully classified individuals as belonging to an early-stage Alzheimer's disease group or a like-aged Control group, with probabilities of an individual belonging to each group. Applying the discriminant function to the developmental half of the data showed that 92% of the subjects were correctly classified into either the AD group or the Control group, with a sensitivity of 1.00. The two cross-validation results were good, with sensitivities of 0.83 and classification accuracies of 0.75-0.79. P3 and CNV components, as well as other, earlier ERP components, e.g. C145 and the memory "Storage" component, were useful in the discriminant functions.
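As a rough illustration of the general PCA-plus-discriminant-analysis approach described above (not the study's actual pipeline or data), the sketch below reduces ERP feature vectors with PCA, classifies AD versus control with a linear discriminant function, and reports cross-validated accuracy and sensitivity on synthetic placeholder data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(42)
X = rng.normal(size=(40, 300))        # 40 subjects x 300 ERP feature values (synthetic)
y = np.repeat([0, 1], 20)             # 0 = control, 1 = early-stage AD (synthetic labels)

# Reduce the ERP features to component scores, then classify with a discriminant function.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = cross_val_predict(clf, X, y, cv=5)

print("cross-validated accuracy:", accuracy_score(y, pred))
print("sensitivity (AD recall):", recall_score(y, pred, pos_label=1))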
Project description:Though previous studies with autistic individuals have provided behavioral evidence of animacy perception difficulties, the spatio-temporal dynamics of animacy processing in autism remain underexplored. This study investigated how animacy is neurally encoded in autistic adults, and whether potential deficits in animacy processing have cascading deleterious effects on their social functioning skills. We employed a picture naming paradigm that recorded accuracy and response latencies to animate and inanimate pictures in young autistic adults and age- and IQ-matched healthy individuals, while also employing high-density EEG analysis to map the spatio-temporal dynamics of animacy processing. Participants' social skills were also assessed through a social comprehension task. The autistic adults exhibited lower accuracy than controls on the animate pictures of the task and also exhibited altered brain responses, including larger and smaller N100 amplitudes than controls on inanimate and animate stimuli, respectively. At late stages of processing, there were shorter slow negative wave latencies for the autistic group as compared to controls for the animate trials only. The autistic individuals' altered brain responses negatively correlated with their social difficulties. The results suggest deficits in brain responses to animacy in the autistic group, which were related to the individuals' social functioning skills.
Project description:Language understanding requires the integration of the input with preceding context. Event-related potentials (ERPs) have contributed significantly to our understanding of what contextual information is accessed and when. Much of this research has, however, been limited to experimenter-designed stimuli with highly atypical lexical and context statistics. This raises questions about the extent to which previous findings generalize to everyday language processing of natural stimuli with typical linguistic statistics. We ask whether context can affect ERPs over natural stimuli early, before the N400 time window. We re-analyzed a data set of ERPs over ~700 visually presented content words in sentences from English novels. To increase power, we employed linear mixed-effects regression simultaneously modeling random variance by subject and by item. To reduce concerns about Type I error inflation common to any type of time series analysis, we introduced a simple approach to model and discount auto-correlations at multiple, empirically determined time lags. We compared this approach to Bonferroni correction. Planned follow-up analyses used Generalized Additive Mixed Models to assess the linearity of contextual effects, including lexical surprisal, found within the N400 time window. We found that contextual information affects ERPs in both early (~200 ms after word onset) and late (N400) time windows, supporting a cascading, interactive account of lexical access.
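To make the modeling approach concrete, here is a minimal sketch (not the authors' code) of a crossed-random-effects regression of single-word ERP amplitude on a contextual predictor such as lexical surprisal, with random intercepts for subjects and items fitted as variance components in statsmodels. The column names and simulated data are placeholders, and the authors' lag-based autocorrelation correction and GAMM follow-ups are not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
subjects = np.repeat(np.arange(24), 100)      # 24 subjects x 100 words (synthetic design)
items = np.tile(np.arange(100), 24)
surprisal = rng.gamma(2.0, 2.0, size=subjects.size)
subj_re = rng.normal(0, 1.0, 24)[subjects]    # simulated by-subject offsets
item_re = rng.normal(0, 1.0, 100)[items]      # simulated by-item offsets
amp = -0.5 * surprisal + subj_re + item_re + rng.normal(0, 2, size=subjects.size)

df = pd.DataFrame({"amp": amp, "surprisal": surprisal,
                   "subject": subjects, "item": items,
                   "all": 1})                 # single pooled group for crossed random effects

# statsmodels fits crossed random effects as variance components within one group;
# re_formula="0" drops the redundant random intercept for that single group.
model = smf.mixedlm("amp ~ surprisal", df, groups="all", re_formula="0",
                    vc_formula={"subject": "0 + C(subject)",
                                "item": "0 + C(item)"})
result = model.fit()
print(result.summary())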