The motor system contributes to comprehension of abstract language.
ABSTRACT: If language comprehension requires a sensorimotor simulation, how can abstract language be comprehended? We show that preparation to respond in an upward or downward direction affects comprehension of the abstract quantifiers "more and more" and "less and less", as indexed by an N400-like component. Conversely, the semantic content of the sentence affects the motor potential measured immediately before the upward or downward action is initiated. We propose that this bidirectional link between the motor system and language arises because the motor system implements forward models that predict the sensory consequences of actions. Because the same movement (e.g., raising the arm) can be governed by multiple forward models for different contexts, those models make different predictions depending on whether the arm is raised, for example, to place an object or as a threat. Thus, different linguistic contexts invoke different forward models, and the predictions constitute different understandings of the language.
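To make the proposal concrete, here is a minimal illustrative sketch (ours, not the authors'; all names and the toy prediction mapping are hypothetical) of how the same motor command can yield different sensory predictions under context-specific forward models:

```python
# Illustrative sketch: the same motor command yields different sensory
# predictions under different context-specific forward models.
# All names and mappings here are hypothetical, not from the study.
from dataclasses import dataclass

@dataclass
class ForwardModel:
    context: str  # linguistic/situational context that selects this model

    def predict(self, motor_command: str) -> str:
        # A real forward model maps an efference copy of the motor command
        # to predicted sensory feedback; a toy lookup table stands in here.
        predictions = {
            ("raise arm", "place object"): "feel object contact the shelf",
            ("raise arm", "threat"): "see the other person flinch",
        }
        return predictions.get((motor_command, self.context), "no prediction")

# Same movement, different contexts -> different predicted consequences,
# i.e., different "understandings" of the described action.
for ctx in ("place object", "threat"):
    print(ctx, "->", ForwardModel(ctx).predict("raise arm"))
```

The point of the sketch is only that the prediction, not the movement itself, carries the meaning: swapping the context swaps the predicted consequences of an identical command.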
Project description: Embodied metaphor theory suggests that abstract concepts are metaphorically linked to more experientially basic ones and recruit sensorimotor cortex for their comprehension. To test whether words associated with spatial attributes reactivate traces in sensorimotor cortex, we recorded EEG from the scalp of healthy adults as they read words while performing a concurrent task involving either upward- or downward-directed arm movements. ERPs were time-locked to words associated with vertical space, either literally (ascend, descend) or metaphorically (inspire, defeat), as participants made vertical movements that were either congruent or incongruent with the words. Congruency effects emerged 200-300 ms after word onset for literal words, but not until after 500 ms post-onset for metaphorically related words. Results argue against a strong version of embodied metaphor theory, but support a role for sensorimotor simulation in concrete language.
Project description: Previous studies have shown that language can modulate visual perception by biasing and/or enhancing perceptual performance. However, it is still debated where in the brain visual and linguistic information are integrated, and whether the effects of language on perception are automatic and persist even in the absence of awareness of the linguistic material. Here, we explored the automaticity of language-perception interactions and their neural loci in an fMRI study. Participants engaged in a visual motion discrimination task (upward or downward moving dots). Before each trial, a word prime was briefly presented that implied upward or downward motion (e.g., "rise", "fall"). These word primes strongly influenced behavior: congruent motion words sped up reaction times and improved performance relative to incongruent motion words. Neural congruency effects were observed only in the left middle temporal gyrus, which showed higher activity for congruent than incongruent conditions. This suggests that higher-level conceptual areas, rather than sensory areas, are the locus of language-perception interactions. When motion words were masked from awareness, they still affected visual motion perception, suggesting that language-perception interactions may rely on automatic feed-forward integration of perceptual and semantic material in language areas of the brain.
Project description: Motion events in language describe the movement of an entity to another location along a path. In two eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, the goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye movements is constrained by the visual world and by the fact that perceivers rarely fixate regions of empty space.
Project description: Correlational evidence suggests that the experience of reading and writing in a certain direction can induce spatial biases in both low-level perceptuo-motor skills and high-level conceptual representations. However, in order to support a causal relationship, experimental evidence is required. In this study, we asked whether the direction of the script is a sufficient cause of spatial biases in the mental models that understanders build when listening to language. In order to establish causality, we manipulated the experience of reading a script with different directionalities. Spanish monolinguals read either normal (left-to-right), mirror-reversed (right-to-left), rotated downward (up-down), or rotated upward (down-up) texts, and then drew the contents of auditory descriptions such as "the square is between the cross and the triangle". The directionality of the drawings showed that a brief reading experience is enough to cause congruent and very specific spatial biases in mental model construction. However, there were also clear limits to this flexibility: there was a strong overall preference to arrange the models along the horizontal dimension. Spatial preferences when building mental models from language are thus the result of both short-term and long-term biases.
Project description: Semantic memory includes all acquired knowledge about the world and is the basis for nearly all human activity, yet its neurobiological foundation is only now becoming clear. Recent neuroimaging studies demonstrate two striking results: the participation of modality-specific sensory, motor, and emotion systems in language comprehension, and the existence of large brain regions that participate in comprehension tasks but are not modality-specific. These latter regions, which include the inferior parietal lobe and much of the temporal lobe, lie at convergences of multiple perceptual processing streams. These convergences enable increasingly abstract, supramodal representations of perceptual experience that support a variety of conceptual functions, including object recognition, social cognition, language, and the remarkable human capacity to remember the past and imagine the future.
Project description: We investigate three potential mechanisms underlying the deficit in idiom comprehension seen in aphasia: difficulty inhibiting literal meanings, inability to recognize that a figurative interpretation is required, and difficulty processing abstract words and concepts. Unimpaired adults and persons with aphasia (PWA) read high- and moderate-familiarity idioms either preceded or followed by a figuratively biasing context sentence. They then completed a string-to-word probe selection task, choosing between a figurative target, a literal lure, and unrelated concrete and abstract lures. PWA chose the figurative target more often for more familiar idioms and after figuratively biasing contexts, suggesting that difficulty accessing figurative meanings may be a key contributor to idiom impairment in aphasia. Importantly, PWA chose abstract lures at the same rate as literal lures, suggesting that abstract lures may be considered equally good matches for weak idiomatic representations in PWA, and therefore that idiomatic figurative meanings may be represented similarly to abstract concepts for PWA. These results have implications for models of idiom comprehension in aphasia, as well as for the design of future studies of idiom comprehension in PWA.
Project description: Many neurocognitive studies on the role of motor structures in action-language processing have implicitly adopted a "dictionary-like" framework within which lexical meaning is constructed on the basis of an invariant set of semantic features. The debate has thus centered on the question of whether motor activation is an integral part of lexical semantics (embodied theories) or the result of a post-lexical construction of a situation model (disembodied theories). However, research in psycholinguistics shows that lexical semantic processing and context-dependent meaning construction are closely integrated. An understanding of the role of motor structures in action-language processing might thus be better achieved by focusing on the linguistic contexts under which such structures are recruited. Here, we therefore analyzed online modulations of grip force while subjects listened to target words embedded in different linguistic contexts. When the target word was a hand action verb and the sentence focused on that action ("John signs the contract"), an early increase in grip force was observed. No comparable increase was detected when the same word occurred in a context that shifted the focus toward the agent's mental state ("John wants to sign the contract"). The mere presence of an action word is thus not sufficient to trigger motor activation. Moreover, when the linguistic context set up a strong expectation for a hand action, a grip force increase was observed even when the tested word was a pseudo-verb. The presence of a known action word is thus not required to trigger motor activation. Importantly, however, the same linguistic contexts that sufficed to trigger motor activation with pseudo-verbs failed to do so when the target words were verbs with no motor action reference. Context is thus not by itself sufficient to supersede an "incompatible" word meaning. We argue that motor structure activation is part of a dynamic process that integrates the lexical meaning potential of a term with the context in the online construction of a situation model, a process crucial for fluent and efficient online language comprehension.
Project description: Forty native Italian children (ages 6-15) performed a sentence plausibility judgment task. ERP recordings were available for 12 children with specific language impairment (SLI), 11 children with nonverbal learning disabilities (NVLD), and 13 control children. Participants listened to verb-object combinations and judged them as acceptable or unacceptable. Stimuli belonged to four conditions in which concreteness and congruency were manipulated. All groups made more errors responding to abstract and to congruent sentences. Moreover, SLI participants performed worse than NVLD participants on abstract sentences. ERPs were analyzed in the 300-500 ms time window. SLI children showed atypical, reversed effects of concreteness and congruence compared to control and NVLD children, respectively. The results suggest that linguistic impairments disrupt abstract language processing more than visual-motor impairments do. Moreover, ROI and SPM analyses of the ERPs point to a predominant involvement of the left rather than the right hemisphere in the comprehension of figurative expressions.
Project description: Natural language provides an intuitive and effective interface between human beings and robots. Multiple approaches have been proposed to address natural language visual grounding for human-robot interaction. However, most existing approaches handle the ambiguity of natural language queries and achieve target object grounding via dialogue systems, which makes the interaction cumbersome and time-consuming. In contrast, we address interactive natural language grounding without auxiliary information. Specifically, we first propose a referring expression comprehension network to ground natural referring expressions. This network extracts visual semantics via a visual semantic-aware network and exploits the rich linguistic contexts in expressions with a language attention network. Furthermore, we combine the referring expression comprehension network with scene graph parsing to achieve unrestricted and complicated natural language grounding. Finally, we validate the performance of the referring expression comprehension network on three public datasets, and we evaluate the effectiveness of the interactive natural language grounding architecture by conducting extensive natural language query groundings in different household scenarios.
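As a minimal sketch of the kind of matching such a grounding network performs (a hypothetical simplification, not the paper's architecture; the attention query below is random rather than learned), candidate object regions can be ranked by the similarity between an attention-pooled expression embedding and each region's visual embedding:

```python
# Hypothetical sketch of referring-expression grounding: score each candidate
# object region by similarity between an attention-pooled language embedding
# and the region's visual embedding. Module names and shapes are assumptions.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def ground(word_vecs: np.ndarray, region_vecs: np.ndarray) -> int:
    """word_vecs: (n_words, d) expression embeddings;
    region_vecs: (n_regions, d) visual embeddings.
    Returns the index of the best-matching region."""
    # Language attention: weight words by a query vector (random stand-in
    # for a learned parameter).
    query = np.random.default_rng(0).standard_normal(word_vecs.shape[1])
    weights = softmax(word_vecs @ query)
    lang = weights @ word_vecs  # attended expression vector
    # Visual-semantic matching via cosine similarity to each region.
    sims = region_vecs @ lang / (
        np.linalg.norm(region_vecs, axis=1) * np.linalg.norm(lang) + 1e-8)
    return int(np.argmax(sims))

# Toy usage: 4 words, 3 candidate regions, 8-dimensional embeddings.
rng = np.random.default_rng(1)
print(ground(rng.standard_normal((4, 8)), rng.standard_normal((3, 8))))
```

In a trained system, the word and region embeddings would come from the language attention and visual semantic-aware networks respectively; the sketch shows only the final attention-and-match step that selects the grounded object.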
Project description: BACKGROUND: Journals are trying to make their papers more accessible by creating a variety of research summaries, including graphical abstracts, video abstracts, and plain language summaries. It is unknown whether individuals with science, science-related, or non-science careers prefer different summaries, which approach is most effective, or even what criteria should be used to judge effectiveness. A survey was created to address this gap. Two papers from Nature on similar research topics were chosen, and different kinds of research summaries were created for each one. To determine the relative merits of each summary, participants answered questions measuring comprehension of the research, along with self-evaluations of enjoyment of the summary, perceived understanding after viewing it, and desire for more updates of that summary type. RESULTS: Participants (n = 538) were randomly assigned to one of the summary types. The responses of adults with science, science-related, and non-science careers differed slightly, but they showed similar trends. All groups performed well on a post-summary test, but participants reported higher perceived understanding when presented with a video or plain language summary (p < 0.0025). All groups enjoyed video abstracts the most, followed by plain language summaries, and then graphical abstracts and published abstracts. Reported preference for different summary types was generally not correlated with comprehension of the summaries. Here we show that original abstracts and graphical abstracts are not as successful as video abstracts and plain language summaries at producing comprehension, a feeling of understanding, and enjoyment. Our results indicate the value of relaxing abstract word limits to allow for more plain language, or of including a plain language summary section alongside the abstract.