Semantic and phonological schema influence spoken word learning and overnight consolidation.
ABSTRACT: We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Project description: Listeners identify talkers more accurately when listening to their native language compared to an unfamiliar, foreign language. This language-familiarity effect in talker identification has been shown to arise from familiarity with both the sound patterns (phonetics and phonology) and the linguistic content (words) of one's native language. However, it has been unknown whether these two sources of information contribute independently to talker identification abilities, particularly whether hearing familiar words can facilitate talker identification in the absence of familiar phonetics. To isolate the contribution of lexical familiarity, we conducted three experiments that tested listeners' ability to identify talkers saying familiar words, but with unfamiliar phonetics. In two experiments, listeners identified talkers from recordings of their native language (English), an unfamiliar foreign language (Mandarin Chinese), or "hybrid" speech stimuli (sentences spoken in Mandarin, but which can be convincingly coerced to sound like English when presented with subtitles that prime plausible English-language lexical interpretations based on the Mandarin phonetics). In a third experiment, we explored natural variation in lexical-phonetic congruence as listeners identified talkers with varying degrees of a Mandarin accent. Priming listeners to hear English speech did not improve their ability to identify talkers speaking Mandarin, even after additional training, and talker identification accuracy decreased as talkers' phonetics became increasingly dissimilar to American English. Together, these experiments indicate that unfamiliar sound patterns preclude talker identification benefits otherwise afforded by familiar words. These results suggest that linguistic representations contribute hierarchically to talker identification; the facilitatory effect of familiar words requires the availability of familiar phonological forms.
Project description: Five- and six-year-old children (n = 160) participated in three studies designed to explore language discrimination. After an initial exposure period (during which children heard either an unfamiliar language, a familiar language, or music), children performed an ABX discrimination task involving two unfamiliar languages that were either similar (Spanish vs. Italian) or different (Spanish vs. Mandarin). On each trial, participants heard two sentences spoken by two individuals, each spoken in an unfamiliar language. The pair was followed by a third sentence spoken in one of the two languages. Participants were asked to judge whether the third sentence was spoken by the first speaker or the second speaker. Across studies, both the difficulty of the discrimination contrast and the relation between exposure and test materials affected children's performance. In particular, language discrimination performance was facilitated by an initial exposure to a different unfamiliar language, suggesting that experience can help tune children's attention to the relevant features of novel languages.
Project description: Purpose Previous studies with children and adults have demonstrated a familiar talker advantage: better word recognition for familiar talkers. The goal of the current study was to test whether this phenomenon is modulated by a child's language ability. Method Sixty children with a range of language ability were trained to learn the voices of 3 foreign-accented, German-English bilingual talkers and received feedback about their performance. Both before and after this talker voice training, children completed a spoken word recognition task in which they heard consonant-vowel-consonant words mixed with noise that were spoken by the 3 familiarized talkers and by 3 unfamiliar German-English bilinguals. Results Two findings emerged from this study: First, children with both higher and lower language ability performed similarly on the familiarized talkers. Second, children with higher language scores performed similarly on both the familiarized and unfamiliar talkers, whereas children with lower language scores performed worse on the unfamiliar talkers compared to familiar talkers, suggesting an inability to generalize to novel, unfamiliar talkers who spoke with a similar accent. Discussion Together, these findings indicate that children with higher language scores are able to generalize knowledge about foreign-accented talkers to help spoken word recognition for novel talkers with the same accent. In contrast, children with lower language skills did not exhibit the same magnitude of generalization. This lack of generalization to similar talkers may mean that children with lower language skills are at a disadvantage in spoken language tasks because they are unable to process speech as well when listening to unfamiliar talkers.
Project description: Purpose Older native speakers of English have difficulty in understanding Spanish-accented English compared to younger native English speakers. However, it is unclear if this age effect would be observed among native speakers of Spanish. The current study investigates the effects of age and native language experience with Spanish on the ability to recognize words spoken in English by Spanish-accented and unaccented talkers. Method English monosyllabic words, recorded by native speakers of English and Spanish, were presented to 4 groups of listeners with normal hearing: younger native Spanish listeners (n = 15), older native Spanish listeners (n = 16), younger native English listeners (n = 15), and older native English listeners (n = 15). Speech recognition accuracy was assessed for the unaccented and accented words in both quiet and noise. Results In all conditions, the native English listeners performed better than the native Spanish listeners. More specifically, the native speakers of Spanish consistently recognized accented English less accurately than the native speakers of English, demonstrating no advantage of shared native language experience between nonnative listeners and accented talkers. Older listeners in the native Spanish language group also performed less accurately than their younger counterparts, for English words spoken by both unaccented and accented talkers. Finally, whereas listeners who were native speakers of English showed marked declines in recognition of Spanish-accented English relative to unaccented English, listeners who were native speakers of Spanish (both younger and older) showed less decline. Conclusions The general pattern of results suggests that both native language experience in a language other than English and age limit the ability to recognize Spanish-accented English. The implication of the overall findings is that older nonnative listeners will have considerable difficulty in understanding English, regardless of the talker's accent, in both clinical and everyday listening situations.
Project description: To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.e., /deɪ/ "day" and /ðeɪ/ "they"). Because adult listeners are less sensitive to allophonic than phonemic contrasts in their native language (L1), novel target language contrasts between L1 allophones may pose special difficulty for L2 learners. We investigate whether advanced Spanish late-learners of English overcome native language mappings to establish new phonological relations between familiar phones. We report behavioral and magnetoencephalographic (MEG) evidence from two experiments that measured the sensitivity and pre-attentive processing of three listener groups (L1 English, L1 Spanish, and advanced Spanish late-learners of English) to differences between three nonword stimulus pairs ([idi]-[iði], [idi]-[iɾi], and [iði]-[iɾi]), which differ in phones that play a different functional role in Spanish and English. Spanish and English listeners demonstrated greater sensitivity (larger d' scores) for nonword pairs distinguished by phonemic than by allophonic contrasts, mirroring previous findings. Spanish late-learners demonstrated sensitivity (large d' scores and MMN responses) to all three contrasts, suggesting that these L2 learners may have established a novel [d]-[ð] contrast despite the phonological relatedness of these sounds in the L1. Our results suggest that phonological relatedness influences perceived similarity, as evidenced by the results of the native speaker groups, but may not cause persistent difficulty for advanced L2 learners. Instead, L2 learners are able to use cues that are present in their input to establish new mappings between familiar phones.
Project description: A large-scale study of 484 elementary school children (6-10 years) performing word repetition tasks in their native language (L1-Japanese) and a second language (L2-English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation were investigated: language (L1/L2), word frequency (high/low), and hemisphere (left/right). L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant), and in the inferior parietal region with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children's brains, with greater reliance on right-hemispheric processing when acquiring unfamiliar or low-frequency words. A right-to-left shift in laterality is thus expected in the inferior parietal region as lexical knowledge increases, irrespective of language.
Project description: Bilinguals' two languages are both active in parallel, and controlling co-activation is one of bilinguals' principal challenges. Trilingualism multiplies this challenge. To investigate how third language (L3) learners manage interference between languages, Spanish-English bilinguals were taught an artificial language that conflicted with English and Spanish letter-sound mappings. Interference from existing languages was higher for L3 words that were similar to L1 or L2 words, but this interference decreased over time. After mastering the L3, learners continued to experience competition from their other languages. Notably, spoken L3 words activated orthography in all three languages, causing participants to experience cross-linguistic orthographic competition in the absence of phonological overlap. Results indicate that L3 learners are able to control between-language interference from the L1 and L2. We conclude that while the transition from two languages to three presents additional challenges, bilinguals are able to successfully manage competition between languages in this new context.
Project description: Three groups of native English speakers named words aloud in Spanish, their second language (L2). Intermediate proficiency learners in a classroom setting (Experiment 1) and in a domestic immersion program (Experiment 2) were compared to a group of highly proficient English-Spanish speakers. All three groups named cognate words more quickly and accurately than matched noncognates, indicating that all speakers experienced cross-language activation during speech planning. However, only the classroom learners exhibited effects of cross-language activation in their articulation: Cognate words were named with shorter overall durations, but longer (more English-like) voice onset times. Inhibition of the first language during L2 speech planning appears to impact the stages of speech production at which cross-language activation patterns can be observed.
Project description: It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.
Project description: We examined whether iconic pictures belonging to one's native culture interfere with second language production in bilinguals in an object naming task. Bengali-English bilinguals named pictures in both L1 and L2 against iconic cultural images representing Bengali culture or neutral images. Participants named in both "Blocked" and "Mixed" language conditions. In both conditions, participants were significantly slower in naming in English when the background was an iconic Bengali culture picture than a neutral image. These data suggest that native language culture cues lead to activation of the L1 lexicon, which competes against L2 words and creates interference. These results provide further support for earlier observations in which such culture-related interference has been observed in bilingual language production. We discuss the results in the context of cultural influences on psycholinguistic processes in bilingual object naming.