Learning abstract words and concepts: insights from developmental language disorder.
ABSTRACT: Some explanations of abstract word learning suggest that these words are learnt primarily from the linguistic input, using statistical co-occurrences of words in language, whereas concrete words can also draw on non-linguistic, experiential information. On this hypothesis, if a learner cannot fully exploit the information in the linguistic input, abstract words should be affected more than concrete ones. Embodied approaches instead argue that both abstract and concrete words can rely on experiential information, so there need be no linguistic primacy. Here, we test the role of linguistic input in the development of abstract knowledge in children with developmental language disorder (DLD) and typically developing children aged 8-13. We show that children with DLD, who by definition have impoverished language, show no disproportionate impairment for abstract words in lexical decision and definition tasks. These results indicate that linguistic information does not play a primary role in the learning of abstract concepts and words; rather, it plays a significant role in semantic development across all domains of knowledge. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.
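The "statistical co-occurrences of words in language" mentioned above can be made tangible with a toy sketch (entirely illustrative; the corpus, window size, and target words are invented for this example). A distributional model represents a word by the counts of its neighbouring words, so words used in similar contexts end up with similar vectors:

```python
# Toy distributional model: a word's meaning as its co-occurrence counts.
# Corpus and words are invented for illustration only.
from collections import Counter
from math import sqrt

corpus = [
    "the idea of justice guides the law",
    "the notion of justice shapes the law",
    "the dog chased the ball in the park",
    "the dog fetched the ball in the yard",
]

WINDOW = 2  # count neighbours within 2 words on either side

def cooc_vector(target, sentences, window=WINDOW):
    """Count words co-occurring with `target` within the window."""
    counts = Counter()
    for sent in sentences:
        toks = sent.split()
        for i, tok in enumerate(toks):
            if tok == target:
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[toks[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words sharing contexts ("dog"/"ball") come out more similar than
# words from different contexts ("justice"/"dog").
sim_related = cosine(cooc_vector("dog", corpus), cooc_vector("ball", corpus))
sim_unrelated = cosine(cooc_vector("justice", corpus), cooc_vector("dog", corpus))
```

On this view, a learner who cannot fully exploit the linguistic input would build poorer co-occurrence statistics, which is exactly the prediction the study tests.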
Project description: While embodied approaches to cognition have proved successful in explaining concrete concepts and words, they have more difficulty accounting for abstract ones, and several proposals have been put forward. This work tests the Words As Tools proposal, according to which both abstract and concrete concepts are grounded in perceptual, action and emotional systems, but linguistic information matters more for abstract than for concrete concept representation, owing to the different ways they are acquired: linguistic information might play a role in the acquisition of concrete concepts, but it is crucial for the acquisition of abstract ones. We investigated the acquisition of concrete and abstract concepts and words, and verified its impact on conceptual representation. In Experiment 1, participants explored and categorized novel concrete and abstract entities and were taught a novel label for each category. They later performed a categorical recognition task and an image-word matching task to verify (a) whether and how the introduction of language changed the previously formed categories, (b) whether language carried greater weight for the representation of abstract than of concrete words, and (c) whether this difference had consequences for bodily responses. The results confirm that, even though both concrete and abstract concepts are grounded, language facilitates the acquisition of abstract concepts and plays a major role in their representation, resulting in faster responses with the mouth, an effector typically associated with language production. Experiment 2 was a rating test designed to verify whether the findings of Experiment 1 were simply due to heterogeneity, i.e. to the fact that the members of abstract categories were more heterogeneous than those of concrete categories.
The results confirmed the effectiveness of our operationalization, showing that abstract concepts are more associated with the mouth and concrete ones with the hand, independently of heterogeneity.
Project description: This study explores the impact of the extensive use of an oral device (the pacifier) from infancy on the acquisition of concrete, abstract, and emotional concepts. While recent evidence showed a negative relation between pacifier use and children's emotional competence (Niedenthal et al., 2012), the possible interaction between pacifier use and the processing of emotional and abstract language has not been investigated. According to recent theories, while all concepts are grounded in sensorimotor experience, abstract concepts activate linguistic and social information more than concrete ones. Specifically, the Words As Social Tools (WAT) proposal predicts that simulating the meaning of abstract concepts leads to an activation of the mouth (Borghi and Binkofski, 2014; Borghi and Zarcone, 2016). Since the pacifier affects facial mimicry by forcing the mouth muscles into a static position, we hypothesized that it would interfere more with the acquisition/consolidation of abstract emotional and abstract non-emotional concepts, which are mainly conveyed during social and linguistic interactions, than with that of concrete concepts. Fifty-nine first-grade children, with histories of differing frequency of pacifier use, provided oral definitions of the meaning of abstract non-emotional, abstract emotional, and concrete words. A main effect of concept type emerged, with higher accuracy in defining concrete and abstract emotional concepts than abstract non-emotional concepts, independently of pacifier use. Accuracy in definitions was not influenced by pacifier use, but correspondence and hierarchical clustering analyses suggest that pacifier use differently modulates the conceptual relations elicited by abstract emotional and abstract non-emotional concepts.
While the majority of the children produced a similar pattern of conceptual relations, analyses of the few children (6) who overused the pacifier (for more than 3 years) showed that they tended to distinguish less clearly between concrete and abstract emotional concepts, and between concrete and abstract non-emotional concepts, than children who did not use it (5) or used it only briefly (17). As to the conceptual relations they produced, children who overused the pacifier tended to refer less to their own experience and to social and emotional situations, to use more exemplifications and functional relations, and to produce fewer free associations.
Project description: The brain is thought to combine linguistic knowledge of words and nonlinguistic knowledge of their referents to encode sentence meaning. However, functional neuroimaging studies aiming to decode language meaning from neural activity have mostly relied on distributional models of word semantics, which are based on patterns of word co-occurrence in text corpora. Here, we present initial evidence that modeling nonlinguistic "experiential" knowledge contributes to decoding neural representations of sentence meaning. We model attributes of people's sensory, motor, social, emotional, and cognitive experiences with words using behavioral ratings. We demonstrate that fMRI activation elicited in sentence reading is more accurately decoded when this experiential attribute model is integrated with a text-based model than when either model is applied in isolation (participants were 5 males and 9 females). Our decoding approach exploits a representation-similarity-based framework, which benefits from being parameter-free while performing at accuracy levels comparable with those of parameter-fitting approaches, such as ridge regression. We find that the text-based model contributes particularly to the decoding of sentences containing linguistically oriented "abstract" words, and we reveal tentative evidence that the experiential model improves decoding of more concrete sentences. Finally, we introduce a cross-participant decoding method to estimate an upper bound on model-based decoding accuracy. We demonstrate that a substantial fraction of neural signal remains unexplained, and leverage this gap to pinpoint characteristics of weakly decoded sentences and hence identify model weaknesses to guide future model development. SIGNIFICANCE STATEMENT: Language gives humans the unique ability to communicate about historical events, theoretical concepts, and fiction.
Although words are learned through language and defined by their relations to other words in dictionaries, our understanding of word meaning presumably draws heavily on our nonlinguistic sensory, motor, interoceptive, and emotional experiences with words and their referents. Behavioral experiments lend support to the intuition that word meaning integrates aspects of linguistic and nonlinguistic "experiential" knowledge. However, behavioral measures do not provide a window on how meaning is represented in the brain and tend to necessitate artificial experimental paradigms. We present a model-based approach that reveals early evidence that experiential and linguistically acquired knowledge can be detected in brain activity elicited in reading natural sentences.
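The representation-similarity-based decoding framework described above can be sketched in miniature. The following is an illustrative simulation, not the study's actual pipeline: the data are synthetic, the dimensions arbitrary, and `sim_profile` a hypothetical helper. It shows the parameter-free, leave-two-out scheme in which a held-out pair of neural patterns is matched to model vectors via their similarity profiles to the remaining items:

```python
# Sketch of representational-similarity pairwise decoding on synthetic data.
# All names, sizes, and the noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_items, n_voxels, n_feats = 12, 50, 8
model = rng.normal(size=(n_items, n_feats))            # model vectors (e.g. experiential ratings)
neural = model @ rng.normal(size=(n_feats, n_voxels))  # simulated fMRI patterns driven by the model
neural += 0.5 * rng.normal(size=neural.shape)          # plus measurement noise

def sim_profile(X, i, held_out):
    """Correlations of item i's row with every item outside the held-out pair."""
    rest = [j for j in range(len(X)) if j not in held_out]
    return np.array([np.corrcoef(X[i], X[j])[0, 1] for j in rest])

# Leave-two-out decoding: for each pair of items, compare the correctly
# matched model/neural similarity profiles against the swapped assignment.
# No parameters are fit, which is what makes the framework parameter-free.
correct = total = 0
for a in range(n_items):
    for b in range(a + 1, n_items):
        pair = {a, b}
        m_a, m_b = sim_profile(model, a, pair), sim_profile(model, b, pair)
        n_a, n_b = sim_profile(neural, a, pair), sim_profile(neural, b, pair)
        matched = np.corrcoef(m_a, n_a)[0, 1] + np.corrcoef(m_b, n_b)[0, 1]
        swapped = np.corrcoef(m_a, n_b)[0, 1] + np.corrcoef(m_b, n_a)[0, 1]
        correct += int(matched > swapped)
        total += 1

accuracy = correct / total  # chance level is 0.5
```

Integrating a second model (e.g. text-based plus experiential) could be sketched in the same framework by averaging the two models' similarity profiles before matching.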
Project description: Children with Developmental Language Disorder (DLD) often struggle to learn to spell. However, it is still unclear where their spelling difficulties lie, and whether these reflect ongoing difficulties with specific linguistic domains. It is also unclear whether the spelling profiles of these children vary across orthographies. The present study compares the spelling profiles of monolingual children with DLD in France and England at the end of primary school. By contrasting these cohorts, we explored the linguistic constraints that affect spelling, beyond phono-graphemic transparency, in two opaque orthographies. Seventeen French and seventeen English children with DLD were compared to typically developing children matched for age or spelling level. Participants wrote a 5-minute sample of free writing and spelled 12 controlled dictated words. Spelling errors were analyzed to capture areas of difficulty in each language in the phonological, morphological, orthographic and semantic domains. Overall, the nature of the errors produced by children with DLD is representative of their spelling level in both languages. However, areas of difficulty vary with language and task, with more morphological errors in French than in English across both tasks, and more orthographic errors in English than in French for dictated words. The error types produced by children with DLD also differed between the two languages: segmentation and contraction errors were found in French, whilst morphological ending errors were found in English. It is hypothesized that these differences reflect the phonological salience of the misspelled units in each language. The present study also provides a detailed breakdown of the spelling errors found in both languages for children with DLD and typical peers aged 5-11.
Project description: Space and shape are distinct perceptual categories. In language, perceptual information can also be used to describe abstract semantic concepts like a "rising income" (space) or a "square personality" (shape). Despite being inherently concrete, co-speech gestures depicting space and shape can accompany concrete or abstract utterances. Here, we investigated how abstractness influences the neural processing of the perceptual categories of space and shape in gestures, testing the hypothesis that the neural processing of perceptual categories depends strongly on linguistic context. In a two-factorial design, we investigated the neural basis of processing gestures containing shape (SH) and spatial (SP) information when accompanying concrete (c) or abstract (a) verbal utterances. During fMRI data acquisition, participants were presented with short video clips of the four conditions (cSP, aSP, cSH, aSH) while performing an independent control task. Abstract (a) as opposed to concrete (c) utterances activated the temporal lobes bilaterally and the left inferior frontal gyrus (IFG) for both shape-related (SH) and space-related (SP) utterances. An interaction of perceptual category and semantic abstractness in a more anterior part of the left IFG and an inferior part of the posterior temporal lobe (pTL) indicates that abstractness strongly influenced the neural processing of space and shape information. Despite the concrete visual input of co-speech gestures in all conditions, space and shape information is thus processed differently depending on the semantic abstractness of its linguistic context.
Project description: Eight to eleven percent of children have a clinical disorder of oral language (Developmental Language Disorder, DLD). Language deficits in DLD can affect all levels of language and persist into adulthood. Word-level processing may be critical, as words link phonology, orthography, syntax and semantics; a lexical deficit could thus cascade throughout language. Cognitively, word recognition is a competition process: as the input (e.g., lizard) unfolds, multiple candidates (liver, wizard) compete for recognition. Children with DLD do not fully resolve this competition, but the cognitive mechanisms underlying this are unclear. We examined lexical inhibition, the ability of more active words to suppress competitors, in 79 adolescents with and without DLD. Participants heard words (e.g. net) whose onset was manipulated to briefly favor a competitor (neck). This was predicted to inhibit the target, slowing recognition. Word recognition was measured using a task in which participants heard the stimulus and clicked on a picture of the item from an array of competitors, while eye movements were monitored as a measure of how strongly the participant was committed to a given interpretation over time. Typically developing (TD) listeners showed evidence of inhibition, with greater interference for stimuli that briefly activated a competitor word; listeners with DLD did not. This suggests that deficits in DLD may stem from a failure to engage lexical inhibition, which in turn could have ripple effects throughout the language system. This supports theoretical approaches to DLD that emphasize lexical-level deficits and deficits in real-time processing.
Project description: According to the dual coding theory, differences in ease of retrieval between concrete and abstract words stem from the exclusive dependence of abstract semantics on linguistic information. Argument structure can be considered a measure of the complexity of the linguistic contexts that accompany a verb. If the retrieval of abstract verbs relies more on the linguistic codes they are associated with, we would expect a larger effect of argument structure on the processing of abstract verbs. In this study, sets of length- and frequency-matched verbs, comprising 40 intransitive verbs, 40 transitive verbs taking simple complements, and 40 transitive verbs taking sentential complements, were presented in separate lexical and grammatical decision tasks. Half of the verbs were concrete and half were abstract. Similar results were obtained in the two tasks, with significant effects of imageability and transitivity. However, the interaction between these two variables was not significant. These results conflict with hypotheses assuming a stronger reliance of abstract semantics on linguistic codes. In contrast, our data are in line with theories that link ease of retrieval with the availability and robustness of semantic information.
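The "interaction between these two variables" can be illustrated with a hypothetical 2x2 sketch on simulated reaction times (the design factors come from the description above, but all numbers are invented): the interaction contrast asks whether the argument-structure effect differs in size for abstract versus concrete verbs.

```python
# Illustrative 2x2 simulation: concreteness x transitivity on lexical-decision
# RTs. Effect sizes and noise are invented; built with main effects only,
# i.e. no interaction, mirroring the reported result.
import numpy as np

rng = np.random.default_rng(1)

base, abstr_cost, trans_gain = 600.0, 30.0, -20.0  # ms, assumed values
cells = {}
for conc in ("concrete", "abstract"):
    for trans in ("intransitive", "transitive"):
        mu = base \
            + (abstr_cost if conc == "abstract" else 0.0) \
            + (trans_gain if trans == "transitive" else 0.0)
        cells[(conc, trans)] = mu + rng.normal(0, 30, size=100)  # 100 RTs per cell

means = {k: v.mean() for k, v in cells.items()}

# Interaction contrast: difference of transitivity effects across concreteness.
# A value near zero (relative to its noise) means no interaction.
interaction = (means[("abstract", "transitive")] - means[("abstract", "intransitive")]) \
            - (means[("concrete", "transitive")] - means[("concrete", "intransitive")])
```

A stronger reliance of abstract semantics on linguistic codes would predict a reliably non-zero contrast; the null interaction reported above corresponds to a contrast indistinguishable from noise.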
Project description: Linguistic bias is the differential use of linguistic abstraction (as defined by the Linguistic Category Model) to describe the same behaviour for members of different groups. Essentially, it is the tendency to use concrete language for belief-inconsistent behaviours and abstract language for belief-consistent behaviours. Having found that linguistic bias is produced without intention or awareness in many contexts, researchers argue that linguistic bias reflects, reinforces, and transmits pre-existing beliefs, thus playing a role in belief maintenance. Based on the Linguistic Category Model, this assumes that concrete descriptions reduce the impact of belief-inconsistent behaviours while abstract descriptions maximize the impact of belief-consistent behaviours. However, a key study by Geschke, Sassenberg, Ruhrmann, and Sommer found that concrete descriptions of belief-inconsistent behaviours actually had a greater impact than abstract descriptions, a finding that does not fit easily within the linguistic bias paradigm. Abstract descriptions (e.g. the elderly woman is athletic) are, by definition, more open to interpretation than concrete descriptions (e.g. the elderly woman works out regularly). It is thus possible that abstract descriptions are (1) perceived as having less evidentiary strength than concrete descriptions, and (2) understood in context (i.e. athletic for an elderly woman). In this study, the design of Geschke et al. was modified to address this possibility. We expected that the differences in the impact of concrete and abstract descriptions would be reduced or reversed, but instead we found that differences were largely absent. This study did not support the findings of Geschke et al. or the linguistic bias paradigm. We encourage further attempts to understand the strong effect of concrete descriptions for belief-inconsistent behaviour.
Project description: The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-minute semi-structured conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit than at the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language.
Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language-learning environments. This is particularly important given the relationships identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a conversational-eliciting rather than a directive style.
Project description: Much is known about the acquisition of phonological competence and lexical categories, but there has been substantially less research into the development of word meaning. To contribute to this debate, a group of 24 children aged 4-11 and a control group of 12 adults were asked to define a set of words. The stimuli included both concrete and abstract words, in particular words exhibiting a rare form of polysemy known as copredication, which permits the simultaneous attribution of concrete and abstract senses to a single nominal, creating an 'impossible' entity. The results were used to track the developmental trajectory of copredication, previously unexplored in the language acquisition literature.