Positive effects of grasping virtual objects on memory for novel words in a second language.
ABSTRACT: Theories of embodied cognition describe language processing and representation as inherently connected to the sensorimotor experiences collected during acquisition. While children grasp their world, collect bodily experiences, and name them, second language (L2) students typically learn from bilingual word lists. Experimental evidence shows that embodiment by means of gestures enhances memory for words in L2. However, no study has examined the effects of grasping in L2. In a virtual scenario, we trained 46 participants on 18 two- and three-syllable words of Vimmi, an artificial corpus created for experimental purposes. The words were assigned concrete meanings of graspable objects. Six words were learned audio-visually, by reading the words projected on the wall and hearing them. Another six words were trained by observation of virtual objects. The remaining six words were learned by observing and additionally grasping the virtual objects. Thereafter, participants were administered free recall, cued recall, and reaction time tests to assess word retention and word recognition. After 30 days, the recall tests were repeated remotely to assess long-term memory. The results show that grasping virtual objects can lead to superior memory performance and lower reaction times during recognition.
Project description: We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Project description: Non-verbal enrichment in the form of pictures or gesture can support word learning in first and foreign languages. The present study compares the effects of viewing pictures vs. imitating iconic gestures on learning second language (L2) vocabulary. In our study, participants learned L2 words (nouns, verbs, and adjectives) together with a virtual, pedagogical agent. The to-be-learned items were (i) enriched with pictures, (ii) enriched with gestures that had to be imitated, or (iii) presented without any non-verbal enrichment as a control. Results showed that gesture imitation was particularly supportive for learning nouns, whereas pictures proved most beneficial for memorizing verbs. These findings, suggesting that the type of vocabulary learning strategy must match the type of linguistic material to be learned, have important educational implications for L2 classrooms and technology-enhanced tutoring systems.
Project description: The effects of cognate synonymy in L2 word learning are explored. Participants learned the names of well-known concrete concepts in a new fictional language following a picture-word association paradigm. Half of the concepts (set A) had two possible translations in the new language (i.e., both words were synonyms): one was a cognate in the participants' L1 and the other was not. The other half of the concepts (set B) had only one possible translation in the new language, a non-cognate word. After learning the new words, participants' memory was tested in a picture-word matching task and a translation recognition task. In line with previous findings, our results clearly indicate that cognates are much easier to learn, as the cognate translation was remembered much better than both its non-cognate synonym and the non-cognate from set B. Our results also suggest that non-cognates without cognate synonyms (set B) are better learned than non-cognates with cognate synonyms (set A). This suggests that, at early stages of L2 acquisition, learning a cognate produces poorer acquisition of its non-cognate synonym, as compared to a solely learned non-cognate. These results are discussed in light of different theories and models of the bilingual mental lexicon.
Project description: Physiological evidence was sought for a center-surround attentional mechanism (CSM), which has been proposed to assist in the retrieval of weakly activated items from semantic memory. The CSM operates by facilitating strongly related items in the "center" of the weakly activated area of semantic memory, and inhibiting less strongly related items in its "surround". In this study, weak activation was created by having subjects acquire the meanings of new words to a recall criterion of only 50%. Subjects who attained this approximate criterion level of performance were subsequently included in a semantic priming task, during which ERPs were recorded. Primes were newly learned rare words, and targets were synonyms, non-synonymously related words, or unrelated words. All stimuli were presented to the RVF/LH (right visual field/left hemisphere) or the LVF/RH (left visual field/right hemisphere). Under RVF/LH stimulation, the newly learned word primes produced facilitation on the N400 for synonym targets, and inhibition for related targets. No differences were observed under LVF/RH stimulation. The LH thus supports a CSM, whereby a synonym in the "center" of attention, focused on the newly learned word, is facilitated, whereas a related word in the "surround" is inhibited. The data are consistent with this laboratory's view that semantic memory is subserved by a spreading activation system in the LH. Also consistent with our view, there was no evidence of spreading activation in the RH. The findings are discussed in the context of additional recent theories of semantic memory. Finally, the adult right hemisphere may require more learning than the LH in order to demonstrate evidence of meaning acquisition.
Project description: Ambiguous words are hard to learn, yet little is known about what causes this difficulty. The current study aimed to investigate the relationship between the representations of new and prior meanings of ambiguous words in second language (L2) learning, and to explore the function of inhibitory control in L2 ambiguous word learning at the initial stage of learning. During a 4-day learning phase, Chinese-English bilinguals learned 30 novel English words for 30 min per day using bilingual flashcards. Half of the words to be learned were unambiguous (had one meaning) and half were ambiguous (had two semantically unrelated meanings learned in sequence). Inhibitory control was introduced as a subject variable measured by a Stroop task. The semantic representations established for the studied items were probed using a cross-language semantic relatedness judgment task, in which the learned English words served as the primes, and the targets were either semantically related or unrelated to the prime. Results showed that response latencies for the second meaning of ambiguous words were slower than for the first meaning and for unambiguous words, and that performance on only the second meaning of ambiguous words was predicted by inhibitory control ability. These results suggest that, at the initial stage of L2 ambiguous word learning, the representation of the second meaning is weak, probably interfered with by the representation of the previously learned meaning. Moreover, inhibitory control may modulate learning of the new meanings, such that individuals with better inhibitory control may more effectively suppress interference from the first meaning, and thus learn the new meaning more quickly.
Project description: Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects; e.g., carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children's word learning. We therefore examined the extent to which consistency in the co-occurrence of particular actions and words impacted children's learning of novel word-object associations. Children (18 months, 30 months, and 36-48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word-object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels, to examine whether they recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independent of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood.
In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child's perceptual abilities to shape the learning experience.
Project description: Certain manipulations, such as testing oneself on newly learned word associations (recall), or the act of repeating a word during training (reproduction), can lead to better learning and retention relative to simply providing more exposure to the word (restudy). Such benefits have been observed for written words. Here, we test how these training manipulations affect learning of words presented aurally, when participants are required to produce these novel phonological forms in a recall task. Participants (36 English-speaking adults) learned 27 pseudowords, which were paired with 27 unfamiliar pictures. They were given cued recall practice for 9 of the words, reproduction practice for another set of 9 words, and the remaining 9 words were restudied. Participants were tested on their recognition (3-alternative forced choice) and recall (saying the pseudoword in response to a picture) of these items immediately after training, and a week after training. Our hypotheses were that reproduction and restudy practice would lead to better learning immediately after training, but that cued recall practice would lead to better retention in the long term. In all three conditions, recognition performance was extremely high both immediately after training and a week following training, indicating that participants had acquired associations between the novel pictures and novel words. In addition, recognition and cued recall performance was better immediately after training relative to a week later, confirming that participants forgot some words over time. However, results in the cued recall task did not support our hypotheses. Immediately after training, participants showed an advantage for the Recall condition over the Restudy condition, but not over the Reproduce condition. Furthermore, there was no boost for the Recall condition over time relative to the other two conditions. Results from a Bayesian analysis also supported this null finding.
Nonetheless, we found a clear effect of word length, with shorter words being better learned than longer words, indicating that our method was sufficiently sensitive to detect differences in learning. Our primary hypothesis, that training conditions confer specific advantages for production of novel words presented aurally, especially over long intervals, was not supported by these data. Although there may be practical reasons for preferring a particular method for training expressive vocabulary, no difference in effectiveness was detected when presenting words aurally: reproducing, recalling, or restudying a word led to the same production accuracy.
Project description: OBJECTIVE: To determine whether sleep talkers with REM sleep behavior disorder (RBD) would utter during REM sleep sentences learned before sleep, and to evaluate their verbal memory consolidation during sleep. METHODS: Eighteen patients with RBD and 10 controls performed two verbal memory tasks (16 words from the Free and Cued Selective Reminding Test and a 220-263 word long modified Story Recall Test) in the evening, followed by nocturnal video-polysomnography and morning recall (night-time consolidation). In 9 patients with RBD, daytime consolidation (morning learning/recall, evening recall) was also evaluated with the modified Story Recall Test in a cross-over order. Two RBD patients with dementia were studied separately. Sleep talking was recorded using video-polysomnography, and the utterances were compared to the studied texts by two external judges. RESULTS: Sleep-related verbal memory consolidation was maintained in patients with RBD (+24±36% words) as in controls (+9±18%, p=0.3). The two demented patients with RBD also exhibited excellent night-time consolidation. The post-sleep performance was unrelated to the sleep measures (including continuity, stages, fragmentation, and apnea-hypopnea index). Daytime consolidation (-9±19%) was worse than night-time consolidation (+29±45%, p=0.03) in the subgroup of 9 patients with RBD. Eleven patients with RBD spoke during REM sleep and pronounced a median of 20 words; sleep with spoken language represented 0.0003% of total sleep. A single patient uttered a sentence that was judged to be semantically (but not literally) related to the text learned before sleep. CONCLUSION: Verbal declarative memory normally consolidates during sleep in patients with RBD. The incorporation of learned material within REM sleep-associated sleep talking in one patient (unbeknownst to himself) at the semantic level suggests replay at a highly cognitive, creative level.
Project description: Supraspan verbal list-learning tests, such as the Rey Auditory Verbal Learning Test (RAVLT), are classic neuropsychological tests for assessing verbal memory. In this study, we investigated the impact of the meaning of the words to be learned on three memory stages [short-term recall (STR), learning, and delayed recall (DR)] in a cohort of 447 healthy adults. First, we compared scores obtained from the RAVLT (word condition) to those of an alternative version of this test using phonologically similar but meaningless items (pseudoword condition) and observed how each score varied as a function of age and sex. Then, we collected the participants' self-reported strategies for retaining the word and pseudoword lists and examined whether these strategies mediated the age and sex effects on memory scores. The word condition resulted in higher memory scores than the pseudoword condition at each memory stage and, for the learning stage, even canceled out the detrimental effect of age that was observed for STR and DR. When taking sex into account, the word advantage was observed only in women for STR. The self-reported strategies, which were similar for words and pseudowords, were based on the position of the item on the list (word: 53%, pseudoword: 37%) or the meaning of the item (word: 64%, pseudoword: 58%) and were used alone or in combination. The best memory performance was associated with the meaning strategy in the word condition and with the combination of the meaning and position strategies in the pseudoword condition. Finally, we found that the word advantage observed in women for STR was mediated by the use of the meaning strategy. The RAVLT scores were thus highly dependent on word meaning, notably because meaning allowed efficient semantic knowledge-based strategies.
Within the framework of Tulving's declarative memory model, these results are at odds with the depiction of the RAVLT as a verbal episodic memory test, as it is increasingly characterized in the literature.
Project description: This study investigates whether listeners' experience with a second language learned later in life affects their use of fundamental frequency (F0) as a cue to word boundaries in the segmentation of an artificial language (AL), particularly when the cues to word boundaries conflict between the first language (L1) and second language (L2). F0 signals phrase-final (and thus word-final) boundaries in French but word-initial boundaries in English. Participants were functionally monolingual French listeners, functionally monolingual English listeners, bilingual L1-English L2-French listeners, and bilingual L1-French L2-English listeners. They completed the AL-segmentation task with F0 signaling word-final boundaries or without prosodic cues to word boundaries (monolingual groups only). After listening to the AL, participants completed a forced-choice word-identification task in which the foils were either non-words or part-words. The results show that the monolingual French listeners, but not the monolingual English listeners, performed better in the presence of F0 cues than in their absence. Moreover, bilingual status modulated listeners' use of F0 cues to word-final boundaries, with bilingual French listeners performing less accurately than monolingual French listeners on both word types, but with bilingual English listeners performing more accurately than monolingual English listeners on non-words. These findings not only confirm that speech segmentation is modulated by the L1, but also newly demonstrate that listeners' experience with the L2 (French or English) affects their use of F0 cues in speech segmentation. This suggests that listeners' use of prosodic cues to word boundaries is adaptive and non-selective, and can change as a function of language experience.