Listeners' personality traits predict changes in pupil size during auditory language comprehension.
ABSTRACT: Research suggests that listeners' comprehension of spoken language is concurrently affected by linguistic and non-linguistic factors, including individual-difference factors. However, there has been no systematic research on whether general personality traits affect language processing. We correlated 88 native English-speaking participants' Big Five traits with their pupillary responses to spoken sentences that contained grammatical errors ("He frequently have burgers for dinner"), semantic anomalies ("Dogs sometimes chase teas"), or statements incongruent with gender-stereotyped expectations ("I sometimes buy my bras at Hudson's Bay," spoken by a male speaker). Generalized additive mixed models showed that the listener's Openness, Extraversion, Agreeableness, and Neuroticism traits modulated resource allocation to the three types of unexpected stimuli. No personality trait affected changes in pupil size across the board: less open participants showed greater pupil dilation when processing sentences with grammatical errors, whereas more introverted listeners showed greater pupil dilation in response to both semantic anomalies and socio-cultural clashes. Our study is the first to demonstrate that personality traits systematically modulate listeners' online language processing. Our results suggest that individuals with different personality profiles allocate cognitive resources differently during real-time language comprehension.
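For illustration only, a minimal sketch of the kind of smooth-term model involved, assuming a hypothetical long-format table of pupil samples and the Python pygam library; the published analysis used generalized additive mixed models with random effects for participants and items, which this simplified sketch omits, and all file and column names are assumptions.

```python
# Simplified illustration only; assumes a hypothetical long-format CSV with
# one row per pupil sample. The published analysis used generalized additive
# mixed models; random effects are omitted here for brevity.
import pandas as pd
from pygam import LinearGAM, s, te

df = pd.read_csv("pupil_samples.csv")        # hypothetical file
X = df[["time_ms", "openness"]].to_numpy()   # time from sentence onset, trait score
y = df["pupil_size"].to_numpy()

# Smooth of time plus a time-by-Openness tensor interaction: does the shape
# of the pupillary response over time vary with the listener's trait score?
gam = LinearGAM(s(0) + te(0, 1)).fit(X, y)
gam.summary()
```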
Project description: Spoken words carry linguistic and indexical information to listeners. Abstractionist models of spoken word recognition suggest that indexical information is stripped away in a process called normalization, allowing processing of the linguistic message to proceed. In contrast, exemplar models of the lexicon suggest that indexical information is retained in memory and influences the process of spoken word recognition. In the present study, native Spanish listeners heard Spanish words that varied in grammatical gender (masculine, ending in -o, or feminine, ending in -a) produced by either a male or a female speaker. When asked to indicate the grammatical gender of the words, listeners were faster and more accurate when the sex of the speaker "matched" the grammatical gender than when the sex of the speaker and the grammatical gender "mismatched." No such interference was observed when listeners heard the same stimuli but identified whether the speaker was male or female. This finding suggests that indexical information, in this case the sex of the speaker, influences not only processes associated with word recognition but also higher-level processes associated with grammatical processing. The result also raises questions about the widespread assumption that grammatical processes are cognitively independent and automatic.
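As a hedged illustration of how the match/mismatch contrast could be tested on trial-level response times with a participant random intercept, using statsmodels; the data file and variable names are hypothetical, and this is not necessarily the analysis used in the project.

```python
# Hypothetical trial-level data: one row per response.
# Columns assumed: participant, rt_ms, congruence ("match"/"mismatch"),
# task ("gender_decision"/"speaker_sex").
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("gender_decision_trials.csv")  # hypothetical file

# Random-intercept model: is there a mismatch cost, and does it depend on
# whether listeners judge grammatical gender or speaker sex?
model = smf.mixedlm("rt_ms ~ C(congruence) * C(task)",
                    data=trials,
                    groups=trials["participant"]).fit()
print(model.summary())
```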
Project description: There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in a similar fashion. In Experiment 2, listeners judged sentence meaning in both high- and low-constraint sentence frames in which the final word was either present or replaced with a matching environmental sound. Sentence constraint affected decision time similarly for speech and nonspeech: high-constraint sentences (i.e., frame plus completion) were processed faster than low-constraint sentences in both cases. Thus, linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as it does for spoken words. This argues against a simple speech-motor account of predictive coding in spoken language understanding and instead supports conceptual-level predictions.
Project description: Listeners identify talkers more accurately when listening to their native language than to an unfamiliar, foreign language. This language-familiarity effect in talker identification has been shown to arise from familiarity with both the sound patterns (phonetics and phonology) and the linguistic content (words) of one's native language. However, it has remained unknown whether these two sources of information contribute independently to talker identification, in particular whether hearing familiar words can facilitate talker identification in the absence of familiar phonetics. To isolate the contribution of lexical familiarity, we conducted three experiments that tested listeners' ability to identify talkers saying familiar words but with unfamiliar phonetics. In two experiments, listeners identified talkers from recordings of their native language (English), an unfamiliar foreign language (Mandarin Chinese), or "hybrid" speech stimuli (sentences spoken in Mandarin that can be convincingly coerced to sound like English when presented with subtitles that prime plausible English-language lexical interpretations of the Mandarin phonetics). In a third experiment, we explored natural variation in lexical-phonetic congruence as listeners identified talkers with varying degrees of a Mandarin accent. Priming listeners to hear English speech did not improve their ability to identify talkers speaking Mandarin, even after additional training, and talker identification accuracy decreased as talkers' phonetics became increasingly dissimilar to American English. Together, these experiments indicate that unfamiliar sound patterns preclude the talker identification benefits otherwise afforded by familiar words. These results suggest that linguistic representations contribute hierarchically to talker identification: the facilitatory effect of familiar words requires the availability of familiar phonological forms.
Project description: Congenital blindness modifies the neural basis of language: "visual" cortices respond to linguistic information, and fronto-temporal language networks are less left-lateralized. We tested the hypothesis that this plasticity follows a sensitive period by comparing the neural basis of sentence processing among adult-onset blind (AB, n = 16), congenitally blind (CB, n = 22), and blindfolded sighted adults (n = 18). In Experiment 1, participants made semantic judgments for spoken sentences and, in a control condition, solved math equations. In Experiment 2, participants answered "who did what to whom" yes/no questions for grammatically complex (with syntactic movement) and simpler sentences. In a control condition, participants performed a memory task with non-words. In both experiments, the visual cortices of CB and AB but not sighted participants responded more to sentences than to control conditions, but the effect was much larger in the CB group. Only the "visual" cortex of CB participants responded to grammatical complexity. Unlike the CB group, the AB group showed no reduction in the left-lateralization of the fronto-temporal language network relative to the sighted. These results suggest that congenital blindness modifies the neural basis of language differently from adult-onset blindness, consistent with a developmental sensitive-period hypothesis.
Project description: The neurobiology of sentence comprehension is well studied, but the properties and characteristics of sentence processing networks remain unclear and highly debated. Sign languages (i.e., visual-manual languages), like spoken languages, have complex grammatical structures and can therefore provide valuable insights into the specificity and function of brain regions supporting sentence comprehension. The present study aims to characterize how these well-studied spoken language networks adapt in adults to become responsive to sign language sentences, which contain combinatorial semantic and syntactic visual-spatial linguistic information. Twenty native English-speaking undergraduates who had completed introductory American Sign Language (ASL) courses viewed videos of the following conditions during fMRI acquisition: signed sentences, signed word lists, English sentences, and English word lists. Overall, our results indicate that native language (L1) sentence processing resources are responsive to ASL sentence structures in late L2 learners, but that certain L1 sentence processing regions respond differently to L2 ASL sentences, likely because of the nature of their contribution to language comprehension. For example, L1 sentence regions in Broca's area were significantly more responsive to L2 than to L1 sentences, supporting the hypothesis that Broca's area contributes to sentence comprehension as a cognitive resource when increased processing is required. Anterior temporal L1 sentence regions were sensitive to L2 ASL sentence structure but showed no significant difference in activation between L1 and L2 sentences, suggesting that their contribution to sentence processing is modality-independent. Posterior superior temporal L1 sentence regions also responded to ASL sentence structure but were more activated by English than by ASL sentences. An exploratory analysis of the neural correlates of L2 ASL proficiency indicates that ASL proficiency is positively correlated with activation in response to ASL sentences in L1 sentence processing regions. Overall, these results suggest that the well-established fronto-temporal spoken language networks involved in sentence processing exhibit functional plasticity with late L2 ASL exposure and are thus adaptable to syntactic structures widely different from those of an individual's native language. Our findings also provide valuable insights into the distinct contributions of the inferior frontal and superior temporal regions, which are frequently implicated in sentence comprehension but whose exact roles remain highly debated.
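As a hedged illustration of the kind of condition contrast described (signed sentences versus signed word lists), a minimal first-level GLM sketch using nilearn; the image and events files, condition labels, and acquisition parameters are all hypothetical, and nilearn is an assumed tool rather than the software actually used in the project.

```python
# Illustrative first-level contrast only; file names, condition labels, and
# acquisition parameters are hypothetical, and nilearn is an assumed tool.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# events.csv assumed to have the standard columns onset, duration, trial_type,
# with trial_type in {"asl_sentence", "asl_wordlist", "eng_sentence", "eng_wordlist"}.
events = pd.read_csv("events.csv")

glm = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6)
glm = glm.fit("sub-01_task-lang_bold.nii.gz", events=events)

# Sentence-structure effect within ASL: signed sentences minus signed word lists.
z_map = glm.compute_contrast("asl_sentence - asl_wordlist", output_type="z_score")
z_map.to_filename("asl_sentence_vs_wordlist_zmap.nii.gz")
```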
Project description: We form very rapid personality impressions of speakers on hearing a single word. This implies that the acoustical properties of the voice (e.g., pitch) are very powerful cues for forming social impressions. Here, we aimed to explore how personality impressions of brief social utterances transfer across languages and whether acoustical properties play a similar role in driving personality impressions. Additionally, we examined whether evaluations are similar in the native and a foreign language of the listener. In two experiments, we asked Spanish listeners to evaluate personality traits from different instances of the Spanish word "Hola" (Experiment 1) and the English word "Hello" (Experiment 2), the native and the foreign language respectively. The results revealed that listeners form very similar personality impressions across languages, irrespective of whether the voices belong to their native or a foreign language. A social voice space was summarized by two main personality traits, one emphasizing valence (e.g., trust) and the other strength (e.g., dominance). Conversely, the acoustical properties that listeners attend to when judging others' personality vary across languages. These results provide evidence that some elements of social voice perception are invariant across cultures/languages, while others are modulated by the cultural/linguistic background of the listener.
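For illustration, a minimal sketch of how a two-dimensional social voice space could be recovered from average trait ratings per voice using scikit-learn PCA; the data layout and dimension labels are hypothetical, and this shows the general approach rather than the project's exact procedure.

```python
# Illustration only: hypothetical table of mean ratings per voice,
# one column per rated personality trait (e.g., trustworthiness, dominance).
import pandas as pd
from sklearn.decomposition import PCA

ratings = pd.read_csv("voice_trait_ratings.csv", index_col="voice_id")

pca = PCA(n_components=2)
scores = pca.fit_transform(ratings.to_numpy())

# Variance captured by the two dimensions, and which traits load on each
# (interpreted here as valence-like and strength-like dimensions).
print(pca.explained_variance_ratio_)
loadings = pd.DataFrame(pca.components_.T,
                        index=ratings.columns,
                        columns=["valence_like", "strength_like"])
print(loadings)
```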
Project description: Individuals with congenital amusia have a lifelong history of unreliable pitch processing. Accordingly, they downweight pitch cues during speech perception and instead rely on other dimensions, such as duration. We investigated the neural basis of this strategy. During fMRI, individuals with amusia (N = 15) and controls (N = 15) read sentences in which a comma indicated a grammatical phrase boundary. They then heard two spoken sentences that differed only in pitch and/or duration cues and selected the best match for the written sentence. Prominent reductions in functional connectivity were detected in the amusia group between left prefrontal language-related regions and right-hemisphere pitch-related regions, and these reductions reflected the between-group differences in cue weights in the same listeners. Connectivity differences between these regions were not present during a control task. Our results indicate that the reliability of perceptual dimensions is linked with functional connectivity between frontal and perceptual regions, and they suggest a compensatory mechanism.
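For illustration, a minimal sketch of the kind of ROI-to-ROI functional connectivity comparison described, assuming pre-extracted time series per participant; the arrays below are simulated placeholders, and the actual pipeline was more involved.

```python
# Illustration only: simulated stand-ins for per-participant ROI time series
# (left prefrontal and right pitch-related regions).
import numpy as np
from scipy import stats

def roi_connectivity(ts_frontal, ts_pitch):
    """Fisher z-transformed Pearson correlation between two ROI time series."""
    r = np.corrcoef(ts_frontal, ts_pitch)[0, 1]
    return np.arctanh(r)

rng = np.random.default_rng(0)
amusia_ts = [(rng.standard_normal(200), rng.standard_normal(200)) for _ in range(15)]
control_ts = [(rng.standard_normal(200), rng.standard_normal(200)) for _ in range(15)]

z_amusia = [roi_connectivity(f, p) for f, p in amusia_ts]
z_control = [roi_connectivity(f, p) for f, p in control_ts]

# Between-group comparison of connectivity strength.
t, p = stats.ttest_ind(z_control, z_amusia)
print(f"t = {t:.2f}, p = {p:.3f}")
```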
Project description: Language ordinarily emerges in young children as a consequence of both linguistic experience (for example, exposure to a spoken or signed language) and innate abilities (for example, the ability to acquire certain types of language patterns). One way to discern which aspects of language acquisition are controlled by experience and which arise from innate factors is to remove or manipulate linguistic input. However, experimental manipulations that involve depriving a child of language input are impossible. The present work examines the communication systems resulting from natural situations of language deprivation and thus explores the inherent tendency of humans to build communication systems of particular kinds, without any conventional linguistic input. We examined the gesture systems that three isolated deaf Nicaraguans (ages 14-23 years) have developed for use with their hearing families. These deaf individuals have had no contact with any conventional language, spoken or signed. To communicate with their families, they have each developed a gestural communication system within the home called "home sign." Our analysis focused on whether these systems show evidence of the grammatical category of Subject. Subjects are widely considered to be universal to human languages. Using specially designed elicitation tasks, we show that home signers also demonstrate the universal characteristics of Subjects in their gesture productions, despite the fact that their communicative systems have developed without exposure to a conventional language. These findings indicate that abstract linguistic structure, particularly the grammatical category of Subject, can emerge in the gestural modality without linguistic input.
Project description: Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, suggesting that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices as well as the basal ganglia, medial frontal, and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages.
Project description: An important question in understanding language processing is whether there are distinct neural mechanisms for processing specific types of grammatical structure, such as syntax versus morphology, and, if so, what the basis of the specialization might be. However, this question is difficult to study: a given language typically conveys its grammatical information in one way (e.g., English marks "who did what to whom" using word order, and German uses inflectional morphology). American Sign Language permits either device, enabling a direct within-language comparison. During functional (f)MRI, native signers viewed sentences that used only word order and sentences that included inflectional morphology. The two sentence types activated an overlapping network of brain regions, but with differential patterns. Word order sentences activated left-lateralized areas involved in working memory and lexical access, including the dorsolateral prefrontal cortex, the inferior frontal gyrus, the inferior parietal lobe, and the middle temporal gyrus. In contrast, inflectional morphology sentences activated areas involved in building and analyzing combinatorial structure, including bilateral inferior frontal and anterior temporal regions as well as the basal ganglia and medial temporal/limbic areas. These findings suggest that for a given linguistic function, neural recruitment may depend on the cognitive resources required to process specific types of linguistic cues.