The reliability of the N400 in single subjects: implications for patients with disorders of consciousness.
ABSTRACT: Functional neuroimaging assessments of residual cognitive capacities, including those that support language, can improve diagnostic and prognostic accuracy in patients with disorders of consciousness. Because electroencephalography is portable and relatively inexpensive, the N400 event-related potential component has been proposed as a clinically valid means to identify preserved linguistic function in non-communicative patients. Across three experiments, we show that changes in both stimuli and task demands significantly influence the probability of detecting statistically significant N400 effects - that is, the difference in N400 amplitudes caused by the experimental manipulation. In terms of task demands, passively heard linguistic stimuli were significantly less likely to elicit N400 effects than task-relevant stimuli. Because the majority of patients with disorders of consciousness cannot follow task commands, the insensitivity of passive listening would impede the identification of residual language abilities even when such abilities exist. In terms of stimuli, passively heard normatively associated word pairs produced the highest detection rate of N400 effects (50% of the participants), compared with semantically similar word pairs (0%) and high-cloze sentences (17%). This result is consistent with a prediction error account of N400 magnitude, with highly predictable targets leading to smaller N400 waves, and therefore larger N400 effects. Overall, our data indicate that non-repeating normatively associated word pairs provide the highest probability of detecting single-subject N400s during passive listening, and may thereby provide a clinically viable means of assessing residual linguistic function. We also show that more liberal analyses may further increase the detection rate, but at the potential cost of increased false alarms.
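The single-subject detection logic described above can be sketched as a nonparametric permutation test on trial-level N400 amplitudes. This is a minimal illustration, not the exact pipeline used in the experiments: the 300-500 ms window, the electrode choice, and all function names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def n400_effect_pvalue(related, unrelated, n_perm=5000, rng=rng):
    """Permutation test for one participant's N400 effect.

    related, unrelated: 1-D arrays of per-trial amplitudes (microvolts),
    e.g., mean voltage in a 300-500 ms window at a centro-parietal
    electrode. Returns a two-sided p-value for the condition difference.
    """
    related = np.asarray(related, dtype=float)
    unrelated = np.asarray(unrelated, dtype=float)
    # Observed effect: condition difference in mean window amplitude.
    observed = related.mean() - unrelated.mean()
    pooled = np.concatenate([related, unrelated])
    n = related.size
    count = 0
    for _ in range(n_perm):
        # Shuffle condition labels and recompute the difference.
        perm = rng.permutation(pooled)
        diff = perm[:n].mean() - perm[n:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # Add-one correction so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)
```

Under this framing, a conventional two-sided threshold (e.g., p < .05) corresponds to the stricter analysis; relaxing the threshold or widening the search over windows and electrodes raises the detection rate at the cost of false alarms, as the abstract notes.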
Project description:Language occurs naturally in conversations. However, the study of the neural underpinnings of language has mainly taken place in single individuals using controlled language material. The interactive elements of a conversation (e.g., turn-taking) are often not part of neurolinguistic setups. The prime reason is the difficulty of combining open, unrestricted conversations with the requirements of neuroimaging. It is necessary to find a trade-off between the naturalness of a conversation and the restrictions imposed by neuroscientific methods to allow for ecologically more valid studies. Here, we make an attempt to study the effects of a conversational element, namely turn-taking, on linguistic neural correlates, specifically the N400 effect. We focus on the physiological aspect of turn-taking, the speaker switch, and its effect on the detectability of the N400 effect. The N400 event-related potential reflects expectation violations in a semantic context; the N400 effect describes the difference in N400 amplitude between semantically expected and unexpected items. Sentences with semantically congruent and incongruent final words were presented in two turn-taking modes: (1) reading the first part of the sentence aloud and listening to a speaker switch for the final word, and (2) listening to the first part of the sentence with a speaker switch for the final word. A significant N400 effect was found for both turn-taking modes, and was not influenced by the mode itself. However, the mode significantly affected the P200, which was increased in the reading-aloud mode compared to the listening mode. Our results show that an N400 effect can be detected during a speaker switch. Speech articulation (reading aloud) before the analyzed sentence fragment also did not impede detection of the N400 effect for the final word. The speaker switch, however, seems to influence earlier components of the electroencephalogram related to the processing of salient stimuli.
We conclude that the N400 can effectively be used to study neural correlates of language in conversational settings that include speaker switches.
Project description:Listeners use lexical information to resolve ambiguity in the speech signal, resulting in the restructuring of speech sound categories. Recent findings suggest that lexically guided perceptual learning is attenuated when listeners use a perception-focused listening strategy (which directs attention towards surface variation) compared to when listeners use a comprehension-focused listening strategy (which directs attention towards higher-level linguistic information). However, previous investigations used the word position of the ambiguity to manipulate listening strategy, raising the possibility that attenuated learning reflected decreased strength of lexical recruitment rather than a perception-oriented listening strategy. The current work tests this hypothesis. Listeners completed an exposure phase followed by a test phase. During exposure, listeners heard an ambiguous fricative embedded in word-medial lexical contexts that supported realization of the ambiguity as /ʃ/. At test, listeners categorized members of an /ɑsi/-/ɑʃi/ continuum. Listening strategy was manipulated via exposure task (experiment 1) and explicit acknowledgement of the ambiguity (experiment 2). Compared to control participants, listeners who were exposed to the ambiguity showed more /ʃ/ responses at test; critically, the magnitude of learning did not differ across listening strategy conditions. These results suggest that given sufficient lexical context, lexically guided perceptual learning is robust to task-based changes in listening strategy.
Project description:To examine the neural signatures of language co-activation and control during bilingual spoken word comprehension, Korean-English bilinguals and English monolinguals were asked to make overt or covert semantic relatedness judgments on auditorily presented English word pairs. In two critical conditions, participants heard word pairs consisting of an English-Korean interlingual homophone (e.g., the sound /mu:n/ means "moon" in English and "door" in Korean) as the prime and an English word as the target. In the homophone-related condition, the target (e.g., "lock") was related to the homophone's Korean meaning, but not to its English meaning. In the homophone-unrelated condition, the target was unrelated to both the homophone's Korean meaning and its English meaning. When responses were overt, ERP results revealed that the reduced N400 effect in bilinguals for homophone-related word pairs correlated positively with the amount of their daily exposure to Korean. When responses were covert, ERP results showed a reduced late positive component for homophone-related word pairs in the right hemisphere, and this late positive effect was related to the neural efficiency of suppressing interference in a non-linguistic task. Together, these findings suggest 1) that the degree of language co-activation in bilingual spoken word comprehension is modulated by the amount of daily exposure to the non-target language; and 2) that bilinguals who are less influenced by cross-language activation may also have greater efficiency in suppressing interference in a non-linguistic task.
Project description:Recent evidence suggests that lexical-semantic activation spread during language production can be dynamically shaped by contextual factors. In this study we investigated whether semantic processing modes can also affect lexical-semantic activation during word production. Specifically, we tested whether the processing of linguistic ambiguities, presented in the form of puns, influences the co-activation of unrelated meanings of homophones in a subsequent language production task. In a picture-word interference paradigm with word distractors that were semantically related or unrelated to the non-depicted meanings of homophones, we found facilitation from related words only when participants listened to puns before object naming, but not when they heard jokes with unambiguous linguistic stimuli. This finding suggests that a semantic processing mode of ambiguity perception can induce the co-activation of alternative homophone meanings during speech planning.
Project description:ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords; and a significant somewhat later (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to phonotactic probability of the stimuli, and that lexically-based redintegration occurred for words but did not occur for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical level redintegration.
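The phonotactic probability manipulation above is conventionally quantified from positional segment frequencies in a lexicon (in the style of Vitevitch and Luce's norms). The sketch below illustrates the idea with a hypothetical three-word mini-lexicon; real norms use large, frequency-weighted phonemic dictionaries.

```python
from collections import Counter

def positional_probs(lexicon):
    """Estimate position-specific segment probabilities from a toy
    phonemic lexicon (a list of phoneme tuples)."""
    counts = {}          # position -> Counter of segments seen there
    totals = Counter()   # position -> number of words with that position filled
    for word in lexicon:
        for i, seg in enumerate(word):
            counts.setdefault(i, Counter())[seg] += 1
            totals[i] += 1
    return counts, totals

def phonotactic_probability(word, counts, totals):
    """Sum of positional segment probabilities: higher values mean the
    string is more word-like, lower values mean fewer lexical neighbors."""
    score = 0.0
    for i, seg in enumerate(word):
        if totals[i]:
            score += counts.get(i, Counter())[seg] / totals[i]
    return score
```

On this measure, pseudowords derived from words inherit high positional probabilities, while nonwords with no lexical neighbors score low, matching the early P200 contrast reported above.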
Project description:From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory-to-visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted of presenting a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening to the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing, since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis.
Project description:Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions where relevant information at linguistic timescales is either available or absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language, or rather results from attending to the timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks, corresponding to attending to four different rates: the rate they would naturally attend to, the syllable rate, the word rate, and the phrasal rate, respectively. We replicated overall findings of stronger phrasal-rate tracking, measured with mutual information, for sentences compared to word lists across the classical language network. However, in the inferior frontal gyrus (IFG) we found a task effect suggesting stronger phrasal-rate tracking during the word-combination task independent of the presence of linguistic structure, as well as stronger delta-band connectivity during this task. These results suggest that extracting linguistic information at phrasal rates occurs automatically with or without an additional task, but also that the IFG might be important for temporal integration across various perceptual domains.
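Tracking at a given timescale, as in the study above, is commonly quantified as the mutual information between a band-limited stimulus signal and a band-limited brain signal. The following is a minimal histogram-based sketch of that estimator; actual MEG analyses add filtering, source localization, careful binning, and bias correction, and none of the parameter choices here are taken from the study.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information (in bits) between two 1-D
    signals, e.g., a phrasal-rate stimulus time course and a phrasal-rate
    MEG source time course. A crude estimator: the histogram binning
    introduces a small positive bias for finite samples.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

A brain signal that tracks the stimulus yields a clearly higher value than an unrelated signal, which is the sentences-versus-word-lists contrast the study relies on.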
Project description:The N400 ERP component is a direct neural index of word meaning. Studies show that the N400 component is already present in early infancy, albeit often delayed. Many researchers capitalize on this finding, using the N400 component to better understand how early language acquisition unfolds. However, variability in how researchers quantify the N400 makes it difficult to set clear predictions or build theory. Not much is known about how the N400 component develops in the first 2 years of life in terms of its latency and topographical distribution, nor do we know how task parameters affect its appearance. In the current paper we carry out a systematic review, comparing over 30 studies that report the N400 component as a proxy of semantic processing in infants between 0 and 24 months old who listened to linguistic stimuli. Our main finding is large heterogeneity across semantic-priming studies in the reported characteristics of the N400, with respect to both latency and distribution. With age, N400 onset latency decreases slightly (not significantly), while its offset increases slightly. We also examined whether the N400 differs for recently acquired novel words versus existing words: both cases reveal heterogeneity across studies. Finally, we inspected whether the N400 was modulated differently in studies using a between-subjects design. In infants with more proficient language skills, the N400 was more often present or appeared with an earlier latency compared to their peers, but no consistent patterns were observed for its distribution characteristics. One limitation of the current review is that we compared studies that differed widely in choice of EEG recordings, pre-processing steps, and quantification of the N400, all of which could affect the characteristics of the infant N400. The field still lacks research that systematically tests the development of the N400 using the same paradigm across infancy.
Project description:Previous research has pointed out that combining orthographic and semantic-associative training is a more advantageous strategy for the lexicalization of novel written word-forms than orthographic training alone. However, paradigms used previously involve explicit stimulus categorization (lexical decision), which likely influences word learning. In the present study, we used a more automatic task (silent reading) to determine the advantage of the associative training, by comparing the brain electrical signals elicited in combined (orthographic and semantic) and single (only orthographic) training conditions. In addition, the learning effect (in terms of similar neurophysiological activity between novel and known words) was also tested under a categorization paradigm, enabling determination of the possible influence of the training task on the lexicalization process. Results indicated that novel words repeatedly associated with meaningful cues showed greater attenuation of N400 responses than those trained in the single orthographic condition, confirming greater facilitation of the lexico-semantic processing of these stimuli as a consequence of semantic associations. Moreover, only when the combined training was carried out in the reading task did novel words show N400 responses similar to those elicited by known words, suggesting the achievement of lexical processing comparable to known words. Crucially, when the training was carried out under a demanding task context (lexical decision), known words exhibited a positive enhancement within the N400 time window, maintaining N400 differences with novel trained words and confounding the outcome of the learning. Such a deflection, compatible with modulation of the categorization-related P300 component, suggests that novel word learning could be influenced by the activation of categorization-related processes.
Thus, the use of low-demand tasks emerges as a more appropriate approach to study novel word learning, enabling the build-up of mental representations that probably depends on purely lexical and semantic factors rather than being guided by categorization demands.
Project description:Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that has recently been used in a variety of cognitive paradigms. Yet, it remains unclear whether fNIRS is suitable for studying complex cognitive processes such as categorization or discrimination. Previously, functional imaging has suggested a role for both inferior frontal cortices in attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we extended paradigms used in functional magnetic resonance imaging (fMRI) to investigate the suitability of fNIRS for studying frontal lateralization of human emotion vocalization processing during explicit and implicit categorization and discrimination, using mini-blocks and event-related stimuli. Participants heard speech-like but semantically meaningless pseudowords spoken in various tones and evaluated them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and processed the linguistic content of stimuli faster than the emotional content. Interactions between condition (emotion/word), task (discrimination/categorization) and emotion content (anger, fear, neutral) influenced accuracy and reaction time. At the brain level, we found a modulation of Oxy-Hb changes in the IFG depending on condition, task, emotion and hemisphere (right or left), highlighting the involvement of the right hemisphere in processing fear stimuli, and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable for studying vocal emotion evaluation, fostering its application to complex cognitive paradigms.