Animacy or case marker order?: priority information for online sentence comprehension in a head-final language.
ABSTRACT: It is well known that case marker information and animacy information are incrementally used to comprehend sentences in head-final languages. However, it is still unclear how these two kinds of information are processed when they are in competition in a sentence's surface expression. The current study used sentences conveying the potentiality of some event (henceforth, potential sentences) in the Japanese language with theoretically canonical word order (dative-nominative/animate-inanimate order) and with scrambled word order (nominative-dative/inanimate-animate order). In Japanese, nominative-first case order and animate-inanimate animacy order are preferred to their reversed patterns in simplex sentences. Hence, in these potential sentences, case information and animacy information are in competition. The experiment consisted of a self-paced reading task testing two conditions (that is, canonical and scrambled potential sentences). Forty-five native speakers of Japanese participated. In our results, the canonical potential sentences showed a scrambling cost at the second argument position (the nominative argument). This result indicates that the theoretically scrambled case marker order (nominative-dative) is processed as a mentally canonical case marker order, suggesting that case information is used preferentially over animacy information when the two are in competition. The implications of our findings are discussed with regard to incremental simplex sentence comprehension models for head-final languages.
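The scrambling-cost comparison at the heart of this design can be illustrated as a region-of-interest reading-time contrast at the second argument position. The sketch below is a minimal illustration with entirely hypothetical per-trial reading times and a plain mean difference; the actual study would analyze trial-level data from all 45 participants with appropriate statistics.

```python
from statistics import mean

# Hypothetical reading times (ms) at the second argument region
# (the nominative argument), one list per condition; values are invented.
rt_canonical = [612, 598, 640, 655, 601, 620]  # dative-nominative order
rt_scrambled = [575, 560, 590, 584, 571, 566]  # nominative-dative order

def scrambling_cost(canonical, scrambled):
    """Mean RT difference (canonical - scrambled) at a region of interest.

    A positive value here mirrors the paper's finding: the theoretically
    canonical dative-nominative order is read more slowly, i.e., it behaves
    like the mentally scrambled order.
    """
    return mean(canonical) - mean(scrambled)

cost = scrambling_cost(rt_canonical, rt_scrambled)
print(f"Scrambling cost at second argument: {cost:.1f} ms")
```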
Project description: The acquisition of languages by children using two languages is a matter of debate, as many factors contribute to the success of this type of acquisition. We focus on how the competence of dual-language children changes in their two languages as a function of length of exposure, and we establish whether there are reciprocal influences during language development. We examined the comprehension of subject and object relative clauses in a group of 6-year-old (younger) and 8-year-old (older) Mandarin-Italian dual-language children. After 3 years of regular and intensive exposure to Italian, the younger group reached the same level of competence in the comprehension of relative clauses in their two languages, and after 5 years of exposure to Italian, the older group comprehended relative clauses better in Italian than in Mandarin. Acquiring two languages leads to bidirectional influence, beyond reciprocal support. Finally, some penalty may be observed in the acquisition of subject head-final relative clauses that is not evident in that of subject head-initial relative clauses.
Project description: In explicit memory recall and recognition tasks, elaboration and contextual isolation both facilitate memory performance. Here, we investigate these effects in the context of sentence processing: targets for retrieval during online processing of English object relative clause constructions differ in the amount of elaboration associated with the target noun phrase, or in the homogeneity of superficial features (text color). Experiment 1 shows that greater elaboration of targets during the encoding phase reduces reading times at retrieval sites, whereas elaboration of non-targets has considerably weaker effects. Experiment 2 illustrates that processing isolated superficial features of target noun phrases (here, a green word in a sentence whose other words are colored white) does not lead to enhanced memory performance, despite triggering longer encoding times. These results are interpreted in light of the memory models of Nairne (1990, 2001, 2006), which state that encoding remnants contribute to the set of retrieval cues that provide the basis for similarity-based interference effects.
Project description: Much research in cognitive neuroscience supports prediction as a canonical computation of cognition across domains. Is such predictive coding implemented by feedback from higher-order domain-general circuits, or is it locally implemented in domain-specific circuits? What information sources are used to generate these predictions? This study addresses these two questions in the context of language processing. We present fMRI evidence from a naturalistic comprehension paradigm (1) that predictive coding in the brain's response to language is domain-specific, and (2) that these predictions are sensitive both to local word co-occurrence patterns and to hierarchical structure. Using a recently developed continuous-time deconvolutional regression technique that supports data-driven hemodynamic response function discovery from continuous BOLD signal fluctuations in response to naturalistic stimuli, we found effects of prediction measures in the language network but not in the domain-general multiple-demand network, which supports executive control processes and has been previously implicated in language comprehension. Moreover, within the language network, surface-level and structural prediction effects were separable. The predictability effects in the language network were substantial, with the model capturing over 37% of explainable variance on held-out data. These findings indicate that human sentence processing mechanisms generate predictions about upcoming words using cognitive processes that are sensitive to hierarchical structure and specialized for language processing, rather than via feedback from high-level executive control mechanisms.
Project description: Parkinson's disease (PD) affects language processes, with a significant impact on patients' daily communication. We aimed to describe specific alterations in the comprehension of syntactically complex sentences in patients with PD (PwPD) compared to healthy controls (HC), and to identify the neural underpinnings of these deficits using a functional connectivity analysis of the striatum. A total of 20 PwPD and 15 HC participated in the fMRI study. We analyzed their performance on a Test of Sentence Comprehension (ToSC) adjusted for fMRI. A task-dependent functional connectivity analysis of the striatum was conducted using the psychophysiological interaction (PPI) method. On the behavioral level, the PwPD scored significantly lower on the total ToSC (mean ± SD: 77.3 ± 12.6) than the HC did (mean ± SD: 86.6 ± 8.0), p = 0.02, and the difference was also significant specifically for sentences with a non-canonical word order (PwPD mean ± SD: 69.9 ± 14.1; HC mean ± SD: 80.2 ± 11.5; p = 0.04). Using PPI, we found a statistically significant difference between the PwPD and the HC in connectivity from the right striatum to the supplementary motor area [SMA, (4 8 53)] for non-canonical sentences. This PPI connectivity was negatively correlated with ToSC accuracy on non-canonical sentences in the PwPD. Our results showed disturbed sentence reading comprehension in the PwPD, with altered task-dependent functional connectivity from the right striatum to the SMA, which supports the synchronization of the temporal and sequential aspects of language processing. The study revealed that subcortical-cortical networks (the striatal-frontal loop) are compromised in PwPD, leading to impaired comprehension of syntactically complex sentences.
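The PPI analysis reported above can be sketched as a regression in which a target region's signal is modeled by the seed signal, the task regressor, and their interaction; the interaction beta is the task-dependent connectivity estimate. All signals below are simulated with assumed weights, and the sketch omits the deconvolution and HRF convolution steps that a real PPI pipeline would include.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of fMRI volumes

# Psychological regressor: 1 during non-canonical sentence blocks, else 0
psych = (np.arange(n) // 20) % 2          # alternating 20-volume blocks
seed = rng.standard_normal(n)             # hypothetical striatal seed signal
ppi = seed * (psych - psych.mean())       # interaction term (simplified)

# Hypothetical target signal (e.g., SMA) with negative task-dependent coupling
target = 0.4 * seed - 0.6 * ppi + 0.1 * rng.standard_normal(n)

# GLM: target ~ intercept + seed + psych + PPI; the PPI beta estimates
# how connectivity changes during non-canonical sentence blocks
X = np.column_stack([np.ones(n), seed, psych, ppi])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (seed x task interaction): {betas[3]:.2f}")
```

A negative PPI beta, as in the simulated data, corresponds to the reduced task-dependent striatum-SMA coupling pattern the abstract describes.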
Project description: The neurobiology of sentence comprehension is well studied, but the properties and characteristics of sentence processing networks remain unclear and highly debated. Sign languages (i.e., visual-manual languages), like spoken languages, have complex grammatical structures and can thus provide valuable insights into the specificity and function of brain regions supporting sentence comprehension. The present study aims to characterize how these well-studied spoken language networks adapt in adults to become responsive to sign language sentences, which contain combinatorial semantic and syntactic visual-spatial linguistic information. Twenty native English-speaking undergraduates who had completed introductory American Sign Language (ASL) courses viewed videos of the following conditions during fMRI acquisition: signed sentences, signed word lists, English sentences, and English word lists. Overall, our results indicate that native language (L1) sentence processing resources are responsive to ASL sentence structures in late L2 learners, but that certain L1 sentence processing regions respond differently to L2 ASL sentences, likely owing to the nature of their contribution to language comprehension. For example, L1 sentence regions in Broca's area were significantly more responsive to L2 than to L1 sentences, supporting the hypothesis that Broca's area contributes to sentence comprehension as a cognitive resource when increased processing is required. Anterior temporal L1 sentence regions were sensitive to L2 ASL sentence structure but showed no significant differences in activation between L1 and L2, suggesting that their contribution to sentence processing is modality-independent. Posterior superior temporal L1 sentence regions also responded to ASL sentence structure but were more activated by English than by ASL sentences.
An exploratory analysis of the neural correlates of L2 ASL proficiency indicates that ASL proficiency is positively correlated with increased activation in response to ASL sentences in L1 sentence processing regions. Overall, these results suggest that the well-established fronto-temporal spoken language networks involved in sentence processing exhibit functional plasticity with late L2 ASL exposure, and are thus adaptable to syntactic structures widely different from those in an individual's native language. Our findings also provide valuable insights into the unique contributions of the inferior frontal and superior temporal regions, which are frequently implicated in sentence comprehension but whose exact roles remain highly debated.
Project description: Previous studies have indicated that sentences are comprehended via widespread brain regions in the fronto-temporo-parietal network in explicit language tasks (e.g., semantic congruency judgment tasks), and through restricted temporal or frontal regions in implicit language tasks (e.g., font size judgment tasks). This discrepancy raises two questions: whether a common network supports sentence comprehension regardless of task, and whether different tasks modulate the properties of that network. To this end, we constructed brain functional networks from fMRI data collected while 27 subjects performed explicit and implicit language tasks. We found that the network properties and network hubs of the implicit language task were similar to those of the explicit language task. We also found common hubs in occipital, temporal, and frontal regions in both tasks. Compared with the implicit language task, the explicit language task resulted in greater global efficiency and increased integrated betweenness centrality of the left inferior frontal gyrus, a key region for sentence comprehension. These results suggest that brain functional networks support both explicit and implicit sentence comprehension, and that the two types of language task may modulate the properties of those networks.
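Global efficiency and betweenness centrality, the two network properties highlighted above, can be computed on a toy graph. The regions and edges below are hypothetical, chosen only to illustrate the metrics, not the study's actual network.

```python
import networkx as nx

# Toy functional network: nodes stand for brain regions, edges for
# suprathreshold functional connections (all hypothetical)
G = nx.Graph()
G.add_edges_from([
    ("LIFG", "LSTG"), ("LIFG", "LMTG"), ("LIFG", "SMA"),
    ("LSTG", "LMTG"), ("LMTG", "Occ"), ("SMA", "Occ"),
])

eff = nx.global_efficiency(G)       # network-level integration
bc = nx.betweenness_centrality(G)   # how "hub-like" each region is
print(f"global efficiency: {eff:.2f}")
print(f"LIFG betweenness: {bc['LIFG']:.2f}")
```

In this toy graph, LIFG has the highest betweenness because it sits on the shortest paths between otherwise distant regions, mirroring the hub role the study ascribes to the left inferior frontal gyrus.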
Project description: The present study investigates how heritage speakers conduct good-enough processing at the interface of home-language proficiency, cognitive skills (inhibitory control; working memory), and task type (acceptability judgement; self-paced reading). For this purpose, we employ two word-order patterns (verb-final vs. verb-initial) of two clausal constructions in Korean, the suffixal passive and the morphological causative, which contrast in the mapping between thematic roles and case-marking and in the interpretive procedures driven by verbal morphology. We find that, while Korean heritage speakers demonstrate the same kind of acceptability-rating behaviour as monolingual Korean speakers, their reading-time patterns are notably modulated by construction-specific properties, cognitive skills, and proficiency. This suggests that heritage speakers are able and willing to conduct both parsing routes, induced by linguistic cues in the non-dominant language, in proportion to the computational complexity of those cues. The implications of this study are expected to advance our understanding of learners' minds for underrepresented languages and populations in the field.
Project description: Purpose: We assessed the potential direct and indirect (mediated) influences of four cognitive mechanisms we believe are theoretically relevant to canonical and noncanonical sentence comprehension in school-age children with and without developmental language disorder (DLD). Method: One hundred seventeen children with DLD and 117 propensity-matched typically developing (TD) children participated. Comprehension was indexed by children identifying the agent in implausible sentences. Children completed cognitive tasks indexing the latent predictors of fluid reasoning (FLD-R), controlled attention (CATT), complex working memory (cWM), and long-term memory language knowledge (LTM-LK). Results: Structural equation modeling revealed that the best-fitting model was an indirect model in which cWM mediated the relationship among FLD-R, CATT, LTM-LK, and sentence comprehension. For TD children, comprehension of both sentence types was indirectly influenced by FLD-R (pattern recognition) and LTM-LK (linguistic chunking). For children with DLD, canonical sentence comprehension was indirectly influenced by LTM-LK and CATT, and noncanonical comprehension was indirectly influenced by CATT alone. Conclusions: cWM mediates sentence comprehension in children with DLD and in TD children. For TD children, comprehension occurs automatically through pattern recognition and linguistic chunking. For children with DLD, comprehension is cognitively effortful. Whereas canonical comprehension occurs through chunking, noncanonical comprehension proceeds on a word-by-word basis. Supplemental material: https://doi.org/10.23641/asha.7178939.
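A mediated (indirect) pathway of the kind this model describes, for example fluid reasoning influencing comprehension through working memory, amounts to a product of path coefficients. The sketch below simulates standardized scores under assumed path weights and recovers the indirect effect as a * b; it is a deliberate simplification of SEM (one observed mediator, no covariates, simulated data, hypothetical weights).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 234  # matches the combined sample size (117 DLD + 117 TD); data are simulated

# Hypothetical standardized scores with assumed path weights
fld_r = rng.standard_normal(n)                    # fluid reasoning
cwm = 0.5 * fld_r + 0.8 * rng.standard_normal(n)  # working memory (path a = 0.5)
comp = 0.6 * cwm + 0.7 * rng.standard_normal(n)   # comprehension (path b = 0.6)

def slope(x, y):
    """OLS slope of y on x (both centered)."""
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / (x @ x))

a = slope(fld_r, cwm)   # predictor -> mediator
b = slope(cwm, comp)    # mediator -> outcome (simplified: no covariates)
indirect = a * b        # indirect effect of FLD-R via cWM
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}")
```

With these assumed weights the recovered indirect effect is close to 0.5 * 0.6 = 0.3; a full SEM would estimate all paths simultaneously over latent variables.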
Project description: Purpose: Compared with same-age typically developing peers, school-age children with specific language impairment (SLI) exhibit significant deficits in spoken sentence comprehension. They also demonstrate a range of memory limitations. Whether these two deficit areas are related is unclear. The present review article aims to (a) review two main theoretical accounts of SLI sentence comprehension and the studies supporting each and (b) offer a new, broader, more integrated memory-based framework to guide future SLI research, as we believe the available evidence favors a memory-based perspective on SLI comprehension limitations. Method: We reviewed the literature on the sentence comprehension abilities of English-speaking children with SLI from two theoretical perspectives. Results: The sentence comprehension limitations of children with SLI appear to be more fully captured by a memory-based perspective than by a syntax-specific deficit perspective. Conclusions: Although a memory-based view appears to be the better account of SLI sentence comprehension deficits, this view requires refinement and expansion. Current memory-based perspectives on adult sentence comprehension, with proper modification, offer SLI investigators new, more integrated memory frameworks within which to study and better understand the sentence comprehension abilities of children with SLI.
Project description: Since Saussure, the relationship between the sound and the meaning of words has been regarded as largely arbitrary. Here, however, we show that a probabilistic relationship exists between the sound of a word and its lexical category. Corpus analyses of nouns and verbs indicate that the phonological properties of the individual words in these two lexical categories form relatively separate and coherent clusters, with some nouns sounding more typical of the noun category than others, and likewise for verbs. Additional analyses reveal that the phonological properties of nouns and verbs affect lexical access, and we also demonstrate the influence of such properties during the on-line processing of both simple, unambiguous and syntactically ambiguous sentences. Thus, although the sound of a word may not provide cues to its specific meaning, phonological typicality, the degree to which the sound properties of an individual word are typical of other words in its lexical category, affects both word- and sentence-level language processing. The findings are consistent with a perspective on language comprehension in which sensitivity to multiple syntactic constraints in adulthood emerges as a product of language-development processes that are driven by the integration of multiple cues to linguistic structure, including phonological typicality.