Sign and Speech Share Partially Overlapping Conceptual Representations.
ABSTRACT: Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1-3]. This suggests that conceptual representations are "modality independent." However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes [6]. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are "language independent." However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent when comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world.
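The abstract does not name the analysis, but shared versus distinct semantic representations across modalities are typically assessed with representational similarity analysis (RSA). The sketch below is a minimal illustration under that assumption: it builds a representational dissimilarity matrix (RDM) from simulated multivoxel patterns for each language and correlates the two; all data shapes and variable names are hypothetical, not taken from the study.

```python
# Minimal RSA sketch (hypothetical data): do signs and spoken words evoke
# similar representational geometries for the same concepts?
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_concepts, n_voxels = 20, 100  # illustrative sizes, not from the study

# Simulated multivoxel patterns: one row per concept, per language.
patterns_speech = rng.standard_normal((n_concepts, n_voxels))
patterns_sign = patterns_speech + rng.standard_normal((n_concepts, n_voxels))

# Representational dissimilarity matrix (RDM) per language: pairwise
# correlation distance between concept patterns (condensed form).
rdm_speech = pdist(patterns_speech, metric="correlation")
rdm_sign = pdist(patterns_sign, metric="correlation")

# Shared geometry shows up as a positive rank correlation between RDMs.
rho, p = spearmanr(rdm_speech, rdm_sign)
print(f"speech-sign RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```

Run at the category level (averaging patterns within semantic categories) versus the item level, this is the kind of contrast that can dissociate shared category representations from unshared item-level ones.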
Project description: Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. Differences between unimodal and bimodal bilinguals have implications for how the brain is organized to control, process, and represent two languages. Evidence from code-blending (simultaneous production of a word and a sign) indicates that the production system can access two lexical representations without cost, and the comprehension system must be able to simultaneously integrate lexical information from two languages. Further, evidence of cross-language activation in bimodal bilinguals indicates the necessity of links between languages at the lexical or semantic level. Finally, the bimodal bilingual brain differs from the unimodal bilingual brain with respect to the degree and extent of neural overlap for the two languages, with less overlap for bimodal bilinguals.
Project description: During spoken language comprehension, auditory input activates a bilingual's two languages in parallel based on phonological representations that are shared across languages. However, it is unclear whether bilinguals access phonotactic constraints from the non-target language during target language processing. For example, Spanish words cannot begin with s+ consonant onsets; the phonotactic constraint calls for epenthesis (addition of a vowel, e.g., stable/estable). Native Spanish speakers may produce English words such as estudy ("study") with epenthesis, suggesting that these bilinguals apply Spanish phonotactic constraints when speaking English. The present study is the first to examine whether bilinguals access Spanish phonotactic constraints during English comprehension. In an English cross-modal priming lexical decision task, Spanish-English bilinguals and English monolinguals heard English cognate and non-cognate primes containing s+ consonant onsets or controls without s+ onsets, followed by a lexical decision on visual targets with the /e/ phonotactic constraint or controls without /e/. Results revealed that bilinguals were faster to respond to /es/ non-word targets preceded by s+ cognate primes and to /es/ and /e/ non-word targets preceded by s+ non-cognate primes, confirming that English primes containing s+ onsets activated Spanish phonotactic constraints. These findings are discussed within current accounts of parallel activation of two languages during bilingual spoken language comprehension, which may be expanded to include activation of phonotactic constraints from the irrelevant language.
Project description: To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Project description: Formal and semantic overlap across languages plays an important role in bilingual language processing systems. In the present study, Japanese (first language; L1)-English (second language; L2) bilinguals rated 193 Japanese-English word pairs, including cognates and noncognates, in terms of phonological and semantic similarity. We show that the degree of cross-linguistic overlap varies, such that words can be more or less "cognate," in terms of their phonological and semantic overlap. Bilinguals also translated these words in both directions (L1-L2 and L2-L1), providing a measure of translation equivalency. Notably, we reveal for the first time that Japanese-English cognates are "special," in the sense that they are usually translated using one English term (e.g., コール /kooru/ is always translated as "call"), but the English word is translated into a greater variety of Japanese words. This difference in translation equivalency likely extends to other non-etymologically related, different-script languages in which cognates are all loanwords (e.g., Korean-English). Norming data were also collected for L1 age of acquisition, L1 concreteness, and L2 familiarity, because such information had been unavailable for the item set. Additional information on L1/L2 word frequency, L1/L2 number of senses, and L1/L2 word length and number of syllables is also provided. Finally, correlations and characteristics of the cognate and noncognate items are detailed, so as to provide a complete overview of the lexical and semantic characteristics of the stimuli. This creates a comprehensive bilingual data set for these different-script languages and should be of use in bilingual word recognition and spoken language research.
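The translation-equivalency asymmetry described above reduces to a simple count: how many distinct translations an item receives in each direction, and how dominant the most frequent one is. A minimal sketch, with made-up response lists standing in for the norming data:

```python
# Translation-equivalency sketch (hypothetical responses): count distinct
# translations per item and the share captured by the dominant one.
from collections import Counter

# Item -> translations produced across participants (made-up examples).
l1_to_l2 = {"kooru": ["call", "call", "call", "call"]}         # Japanese -> English
l2_to_l1 = {"call": ["kooru", "yobu", "denwa suru", "kooru"]}  # English -> Japanese

def equivalency(responses):
    """Return (number of distinct translations, share of the dominant one)."""
    counts = Counter(responses)
    dominant_share = counts.most_common(1)[0][1] / len(responses)
    return len(counts), dominant_share

for item, resp in {**l1_to_l2, **l2_to_l1}.items():
    n_types, share = equivalency(resp)
    print(f"{item}: {n_types} distinct translation(s), dominant share {share:.2f}")
```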
Project description: The present study sought to explain why bilingual speakers are disadvantaged relative to monolingual speakers when it comes to speech understanding in noise. Exemplar models of the mental lexicon hold that each encounter with a word leaves a memory trace in long-term memory. Words that we encounter frequently will be associated with richer phonetic representations in memory and therefore recognized faster and more accurately than less frequently encountered words. Because bilinguals are exposed to each of their languages less often than monolinguals by virtue of speaking two languages, they encounter all words less frequently and may therefore have poorer phonetic representations of all words compared to monolinguals. In the present study, vocabulary size was taken as an estimate of language exposure, and the prediction was made that both vocabulary size and word frequency would be associated with recognition accuracy for words presented in noise. Forty-eight early Spanish-English bilingual and 53 monolingual English young adults were tested on speech understanding in noise (SUN) ability, English oral verbal ability, verbal working memory (WM), and auditory attention. Results showed that, as a group, monolinguals recognized significantly more words than bilinguals. However, this effect was attenuated by language proficiency; higher proficiency was associated with higher accuracy on the SUN test in both groups. This suggests that greater language exposure is associated with better SUN. Word frequency modulated recognition accuracy, and the difference between groups was largest for low frequency words, suggesting that the bilinguals' insufficient exposure to these words hampered recognition. The effect of WM was not significant, likely because of its large shared variance with language proficiency. The effect of auditory attention was small but significant. These results are discussed within the Ease of Language Understanding model (Rönnberg et al., 2013), which provides a framework for explaining individual differences in SUN.
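The reported pattern (a bilingual disadvantage that is largest for low-frequency words and attenuated by proficiency) corresponds to a trial-level accuracy model with a group-by-frequency interaction term. A minimal sketch on simulated data; the predictors, effect sizes, and model form are illustrative assumptions, not the study's analysis:

```python
# Sketch: trial-level logistic regression for word recognition in noise,
# testing the group x frequency interaction on simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "bilingual": rng.integers(0, 2, n),   # 0 = monolingual, 1 = bilingual
    "log_freq": rng.normal(0, 1, n),      # standardized log word frequency
    "proficiency": rng.normal(0, 1, n),   # standardized vocabulary score
})
# Build in the reported pattern: accuracy rises with frequency and
# proficiency; the bilingual deficit shrinks as frequency increases.
eta = (0.8 + 0.6 * df.log_freq + 0.5 * df.proficiency
       - 0.5 * df.bilingual + 0.3 * df.bilingual * df.log_freq)
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

model = smf.logit("correct ~ bilingual * log_freq + proficiency", data=df).fit(disp=0)
print(model.summary().tables[1])  # look for a positive bilingual:log_freq term
```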
Project description: Bilinguals' two languages are both active in parallel, and controlling co-activation is one of bilinguals' principal challenges. Trilingualism multiplies this challenge. To investigate how third language (L3) learners manage interference between languages, Spanish-English bilinguals were taught an artificial language that conflicted with English and Spanish letter-sound mappings. Interference from existing languages was higher for L3 words that were similar to L1 or L2 words, but this interference decreased over time. After mastering the L3, learners continued to experience competition from their other languages. Notably, spoken L3 words activated orthography in all three languages, causing participants to experience cross-linguistic orthographic competition in the absence of phonological overlap. Results indicate that L3 learners are able to control between-language interference from the L1 and L2. We conclude that while the transition from two languages to three presents additional challenges, bilinguals are able to successfully manage competition between languages in this new context.
Project description: Recent neuroimaging studies suggest that monolingual infants activate a left-lateralized frontotemporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months of age): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, whereas sign language elicited activation in the right temporoparietal area. A significant difference in brain lateralization was observed between groups. Activation in the posterior temporal region was not lateralized in monolinguals and bimodal bilinguals, but right lateralized in response to both language modalities in unimodal bilinguals. This suggests that the experience of two spoken languages influences brain activation for sign language when it is encountered for the first time. Multivariate pattern analyses (MVPAs) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language and that unimodal bilingual experience has greater impact on early brain lateralization than bimodal bilingual experience.
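The MVPA result (classifying spoken versus signed language from distributed activation patterns at 0.68 proportion correct) follows the standard cross-validated decoding recipe. A minimal sketch on simulated channel data; trial counts, channel counts, and the classifier choice are assumptions, not details of the published fNIRS pipeline:

```python
# MVPA sketch (simulated data): cross-validated classification of spoken
# vs. signed language trials from distributed activation patterns.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_channels = 40, 12  # stand-ins, not the study's numbers

labels = np.repeat([0, 1], n_trials // 2)  # 0 = speech, 1 = sign
X = rng.standard_normal((n_trials, n_channels))
X[labels == 1, :4] += 0.8  # inject a weak class difference to decode

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv)
# Significance (the paper's p = 0.039) would come from a permutation test.
print(f"proportion correct: {scores.mean():.2f}")
```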
Project description: During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension.
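BLINCS builds its interconnected levels from self-organizing maps (SOMs). The published model is not reproduced here; the sketch below shows only the core SOM update that such an architecture rests on, trained on toy inputs:

```python
# Minimal self-organizing map (SOM) sketch: the building block BLINCS
# uses for its levels of processing. Toy inputs; not the published model.
import numpy as np

rng = np.random.default_rng(3)
grid, dim = (8, 8), 4  # an 8x8 map of 4-dimensional units (arbitrary sizes)
weights = rng.random((*grid, dim))
coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.5, sigma=2.0):
    """Pull the best-matching unit (and its neighbors) toward input x."""
    global weights
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
    dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)  # grid distance to BMU
    neighborhood = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    weights = weights + lr * neighborhood * (x - weights)

for _ in range(500):             # after training, similar inputs activate
    train_step(rng.random(dim))  # neighboring units, i.e., a topographic map
```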
Project description: The human capacity to master multiple languages is remarkable and leads to structural and functional changes in the brain. Understanding how the brain accommodates multiple languages simultaneously is crucial to developing a complete picture of our species' linguistic capabilities. To examine the neural mechanisms involved in processing two languages, we examined cortical activation in Spanish-English bilinguals in response to phonological competition either between two languages or within a language. Participants recognized spoken words in a visual world task while their brains were scanned using functional magnetic resonance imaging (fMRI). Results revealed that between-language competition recruited a larger network of frontal control and basal ganglia regions than within-language competition. Bilinguals also recruited more neural resources to manage between-language competition from the dominant language compared to competition from the less dominant language. Additionally, bilinguals' activation of the basal ganglia was inversely correlated with their executive function ability, suggesting that bilinguals compensated for lower levels of cognitive control by recruiting a broader neural network to manage more difficult tasks. These results provide evidence for differences in neural responses to linguistic competition between versus within languages, and demonstrate the brain's remarkable plasticity: language experience can change neural processing.
Project description: The relationship between orthography (spelling) and phonology (speech sounds) varies across alphabetic languages. Consequently, learning to read a second alphabetic language that uses the same letters as the first increases the phonological associations that can be linked to the same orthographic units. In subjects with English as their first language, previous functional imaging studies have reported increased left ventral prefrontal activation for reading words with spellings that are inconsistent with their orthographic neighbors (e.g., PINT) compared with words that are consistent with their orthographic neighbors (e.g., SHIP). Here, using functional magnetic resonance imaging (fMRI) in 17 Italian-English and 13 English-Italian bilinguals, we demonstrate that left ventral prefrontal activation for first language reading increases with second language vocabulary knowledge. This suggests that learning a second alphabetic language changes the way that words are read in the first alphabetic language. Specifically, first language reading is more reliant on both lexical/semantic and nonlexical processing when new orthographic to phonological mappings are introduced by second language learning. Our observations were in a context that required participants to switch between languages. They motivate future fMRI studies to test whether first language reading is also altered in contexts where the second language is not in use.