Typical neural representations of action verbs develop without vision.
ABSTRACT: Many empiricist theories hold that concepts are composed of sensory-motor primitives. For example, the meaning of the word "run" is in part a visual image of running. If action concepts are partly visual, then the concepts of congenitally blind individuals should be altered in that they lack these visual features. We compared semantic judgments and neural activity during action verb comprehension in congenitally blind and sighted individuals. Participants made similarity judgments about pairs of nouns and verbs that varied in the visual motion they conveyed. Blind adults showed the same pattern of similarity judgments as sighted adults. We identified the left middle temporal gyrus (lMTG), the brain region that putatively stores visual-motion features relevant to action verbs. The functional profile and location of this region were identical in sighted and congenitally blind individuals. Furthermore, the lMTG was more active for all verbs than nouns, irrespective of visual-motion features. We conclude that the lMTG contains abstract representations of verb meanings rather than visual-motion images. Our data suggest that conceptual brain regions are not altered by the sensory modality of learning.
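As an illustration of the behavioral comparison, the two groups' mean similarity ratings can be correlated across word pairs. The sketch below is purely schematic: the ratings are simulated, and the rating scale and pair count are assumptions made for illustration rather than details taken from the study.

```python
# Hypothetical sketch: comparing blind vs. sighted similarity judgments.
# Ratings are simulated; the 1-7 scale and pair count are assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_pairs = 45                                   # e.g., all pairs of 10 words
sighted = rng.uniform(1, 7, n_pairs)           # group-mean rating per pair
blind = sighted + rng.normal(0, 0.5, n_pairs)  # similar profile plus noise

rho, p = spearmanr(sighted, blind)             # rank correlation of profiles
print(f"blind-sighted agreement: rho = {rho:.2f}, p = {p:.3g}")
```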
Project description: Several regions of the posterior lateral temporal cortex (PLTC) are reliably recruited when participants read or listen to action verbs, relative to other word and nonword types. This PLTC activation is generally interpreted as reflecting the retrieval of visual-motion features of actions. This interpretation supports the broader theory that concepts are composed of sensory-motor features. We investigated an alternative interpretation of the same activations: PLTC activity for action verbs reflects the retrieval of modality-independent representations of event concepts, or of the grammatical types associated with them, i.e., verbs. During a functional magnetic resonance imaging scan, participants made semantic-relatedness judgments on word pairs that varied in the amount of visual-motion information they conveyed. Replicating previous results, several PLTC regions showed higher responses to words that describe actions than to words that describe objects. However, we found that these PLTC regions did not overlap with visual-motion regions. Moreover, their response was higher for verbs than nouns, regardless of visual-motion features. For example, the PLTC response was equally high for action verbs (e.g., "to run") and mental verbs (e.g., "to think"), and equally low for animal nouns (e.g., "the cat") and inanimate natural-kind nouns (e.g., "the rock"). Thus, PLTC activity for action verbs might reflect the retrieval of event concepts, or of the grammatical information associated with verbs. We conclude that concepts are abstracted away from sensory-motor experience and organized according to conceptual properties.
Project description: What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing, but what information do these networks represent? We used multivoxel pattern analysis (MVPA) to investigate whether these networks are sensitive to lexicosemantic distinctions among verbs and among nouns and, if so, whether they are more sensitive to distinctions among words in their preferred grammatical class. Participants heard four types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and four types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG+) and left inferior frontal gyrus (LIFG) responded more to verbs, whereas the left inferior parietal lobule (LIP), left precuneus (LPC), and left inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double dissociation in lexicosemantic sensitivity: classification was more accurate among verbs than nouns in the LMTG+, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the nonpreferred category in all regions. These results suggest that lexicosemantic information about verbs and nouns is represented in partially nonoverlapping networks.
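To make the MVPA logic concrete, the following sketch classifies word categories from ROI voxel patterns with leave-one-run-out cross-validation. It is a generic illustration under assumed data shapes (simulated patterns; hypothetical trial, voxel, and run counts), not the authors' pipeline.

```python
# Schematic MVPA: linear classification of 4 word categories from ROI
# patterns, cross-validated across runs. All data here are simulated.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(1)
n_trials, n_voxels, n_runs = 160, 300, 8
X = rng.normal(size=(n_trials, n_voxels))      # trial-by-voxel ROI patterns
y = np.tile(np.arange(4), n_trials // 4)       # 4 verb (or noun) subtypes
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```

With real data, above-chance cross-validated accuracy in a region is taken as evidence that its activity patterns carry information about the classified distinction.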
Project description: The middle temporal complex (MT/MST) is a brain region specialized for the perception of motion in the visual modality. However, this specialization depends on visual experience: after long-standing blindness, MT/MST responds to sound. Recent evidence also suggests that the auditory response of MT/MST is selective for motion. The developmental time course of this plasticity is not known. To test for a sensitive period in MT/MST development, we used fMRI to compare MT/MST function in congenitally blind, late-blind, and sighted adults. MT/MST responded to sound in congenitally blind adults, but not in late-blind or sighted adults, and not in an individual who lost his vision between the ages of 2 and 3 years. All blind adults had reduced functional connectivity between MT/MST and other visual regions. Functional connectivity was increased between MT/MST and lateral prefrontal areas in congenitally blind relative to sighted and late-blind adults. These data suggest that early blindness affects the function of feedback projections from prefrontal cortex to MT/MST. We conclude that there is a sensitive period for visual specialization in MT/MST. During typical development, early visual experience either maintains or creates a vision-dominated response. Once established, this response profile is not altered by long-standing blindness.
Project description: Classic theories emphasize the primacy of first-person sensory experience for learning the meanings of words: to know what "see" means, one must be able to use the eyes to perceive. Contrary to this idea, blind adults and children acquire normative meanings of "visual" verbs, e.g., interpreting "see" and "look" to mean perceiving with the eyes for sighted agents. Here we ask the flip side of this question: how easily do sighted children acquire the meanings of "visual" verbs as they apply to blind agents? We asked sighted 4-, 6-, and 9-year-olds to tell us what part of the body a blind or a sighted agent would use to "see" or "look" (and other visual verbs, n = 5), vs. "listen" or "smell" (and other non-visual verbs, n = 10). Even the youngest children consistently reported the correct body parts for sighted agents (eyes for "look", ears for "listen"). By contrast, there was striking developmental change in applying "visual" verbs to blind agents. Adults, 9-year-olds, and 6-year-olds either extended visual verbs to other modalities for blind agents (e.g., "seeing" with hands or a cane) or stated that the blind agent "cannot" "look" or "see". By contrast, 4-year-olds said that a blind agent would use her eyes to "see", "look", etc., even while explicitly acknowledging that the agent's "eyes don't work". Young children also endorsed "she is looking at the dax" descriptions of photographs in which the blind agent had the object in her "line of sight", irrespective of whether she had physical contact with the object. This pattern held for leg-motion verbs ("walk", "run") applied to wheelchair users. The ability to modify verb modality for agents with disabilities undergoes developmental change between ages 4 and 6. Despite this, we find that 4-year-olds are sensitive to the semantic distinction between active ("look") and stative ("see") verbs, even when applied to blind agents. These results challenge the primacy of first-person sensory experience and highlight the importance of linguistic input and social interaction in the acquisition of verb meaning.
Project description: Recent evidence suggests that blindness enables visual circuits to contribute to language processing. We examined whether this dramatic functional plasticity has a sensitive period. BOLD fMRI signal was measured in congenitally blind, late blind (blindness onset at age 9 or later), and sighted participants while they performed a sentence comprehension task. In a control condition, participants listened to backwards speech and made match/non-match-to-sample judgments. In both congenitally and late blind participants, BOLD signal increased in bilateral foveal-pericalcarine cortex during response preparation, irrespective of whether the stimulus was a sentence or backwards speech. However, left occipital areas (pericalcarine, extrastriate, fusiform, and lateral) responded more to sentences than to backwards speech only in congenitally blind people. We conclude that age of blindness onset constrains the non-visual functions of occipital cortex: while plasticity is present in both congenitally and late blind individuals, recruitment of visual circuits for language depends on blindness during childhood.
Project description: Sense of body ownership and body representation are fundamental parts of human consciousness, but the contribution of the visual modality to their development remains unclear. We tested congenitally and late blind adults on a somatosensory version of the rubber hand illusion, and on the Aristotle illusion, in which sighted controls touching a single sphere with crossed fingers commonly report perceiving two spheres. We found that congenitally and late blind individuals did not report subjectively experiencing the rubber hand illusion. However, on an objective measure, the congenitally blind did not show a recalibration of the position of their hand towards the rubber hand, while late blind and sighted individuals did. By contrast, all groups experienced the Aristotle illusion. This pattern of results provides evidence for a dissociation between the concepts of body ownership and spatial recalibration and, furthermore, suggests different reference frames for hands (external space) and fingers (anatomical space).
Project description: Both sighted and blind individuals can readily interpret the meaning of everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated by recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, lifelong visual experience might impact functions related to auditory perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognizable, backward-played versions of those sounds. Here, we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG in the blind group. These results indicate that portions of the posterolateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, whereas the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when hearing and attempting to recognize action sounds.
Project description: Congenital blindness modifies the neural basis of language: "visual" cortices respond to linguistic information, and fronto-temporal language networks are less left-lateralized. We tested the hypothesis that this plasticity follows a sensitive period by comparing the neural basis of sentence processing between adult-onset blind (AB, n = 16), congenitally blind (CB, n = 22), and blindfolded sighted adults (n = 18). In Experiment 1, participants made semantic judgments for spoken sentences and, in a control condition, solved math equations. In Experiment 2, participants answered "who did what to whom" yes/no questions for grammatically complex (with syntactic movement) and simpler sentences. In a control condition, participants performed a memory task with non-words. In both experiments, the visual cortices of CB and AB participants, but not sighted participants, responded more to sentences than to control conditions, although the effect was much larger in the CB group. Only the "visual" cortex of CB participants responded to grammatical complexity. Unlike the CB group, the AB group showed no reduction in left-lateralization of the fronto-temporal language network relative to the sighted. These results suggest that congenital blindness modifies the neural basis of language differently from adult-onset blindness, consistent with a developmental sensitive-period hypothesis.
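For readers unfamiliar with the lateralization claim, one common way to quantify it is a lateralization index of the form LI = (L - R) / (L + R), computed over left- and right-hemisphere activation measures. The snippet below illustrates that generic formula with made-up values; it is not necessarily the measure used in this study.

```python
# Generic lateralization index; the activation values are hypothetical.
def lateralization_index(left: float, right: float) -> float:
    """LI = (L - R) / (L + R): +1 fully left-lateralized, -1 fully right."""
    return (left - right) / (left + right)

# e.g., suprathreshold voxel counts in left vs. right language ROIs
print(lateralization_index(820, 310))   # ~0.45: left-lateralized
print(lateralization_index(560, 540))   # ~0.02: essentially bilateral
```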
Project description: Nouns and verbs are fundamental grammatical building blocks of all languages. Studies of brain-damaged patients and healthy individuals have demonstrated that verb processing can be dissociated from noun processing at a neuroanatomical level. In cases where bilingual patients have a noun or verb deficit, the deficit has been observed in both languages. This suggests that the noun-verb distinction may be based on neural components that are common across languages. Here we investigated the cortical organization of grammatical categories in healthy, early Spanish-English bilinguals using functional magnetic resonance imaging (fMRI) in a morphophonological alternation task. Four regions showed greater activity for verbs than for nouns in both languages: the left posterior middle temporal gyrus (LMTG), left middle frontal gyrus (LMFG), pre-supplementary motor area (pre-SMA), and right middle occipital gyrus (RMOG); no regions showed greater activation for nouns. Multivoxel pattern analysis within verb-specific regions showed indistinguishable activity patterns for English and Spanish, indicating language-invariant bilingual processing. In the LMTG and LMFG, patterns were more similar within than across grammatical category, both within and across languages, indicating language-invariant grammatical-class information. These results suggest that the neural substrates underlying verb-specific processing are largely independent of language in bilinguals, both at the macroscopic neuroanatomical level and at the level of voxel activity patterns.
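The within- versus across-category pattern-similarity logic can be sketched as follows. The patterns here are simulated with a built-in class signal so that the predicted ordering appears; the ROI size and noise level are assumptions, and this is not the study's analysis code.

```python
# Schematic pattern-similarity analysis: correlate condition-mean ROI
# patterns within vs. across grammatical class, across languages.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 200
verb_signal = rng.normal(size=n_voxels)   # shared "verb" component
noun_signal = rng.normal(size=n_voxels)   # shared "noun" component

def pattern(signal):
    return signal + 0.8 * rng.normal(size=n_voxels)   # class signal + noise

patterns = {
    ("english", "verb"): pattern(verb_signal),
    ("spanish", "verb"): pattern(verb_signal),
    ("english", "noun"): pattern(noun_signal),
    ("spanish", "noun"): pattern(noun_signal),
}

def r(a, b):
    return np.corrcoef(patterns[a], patterns[b])[0, 1]

# Language-invariant grammatical-class information predicts:
#   within-class, across-language r > across-class r
print(f"EN verb vs ES verb: r = {r(('english', 'verb'), ('spanish', 'verb')):.2f}")
print(f"EN verb vs ES noun: r = {r(('english', 'verb'), ('spanish', 'noun')):.2f}")
```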
Project description: There is ample evidence that congenitally blind individuals rely more strongly on non-visual information than sighted controls when interacting with the outside world. Although brain imaging studies indicate that congenitally blind individuals recruit occipital areas when performing various non-visual and cognitive tasks, it remains unclear through which pathways this is accomplished. To address this question, we compared resting-state functional connectivity in a group of congenitally blind and matched sighted control subjects. We used a seed-based analysis with a priori specified regions of interest (ROIs) within visual, somatosensory, auditory, and language areas. Between-group comparisons revealed increased functional connectivity within both the ventral and the dorsal visual streams in blind participants, whereas connectivity between the two streams was reduced. In addition, our data revealed stronger functional connectivity in blind participants between the visual ROIs and areas implicated in language and tactile (Braille) processing, such as the inferior frontal gyrus (Broca's area), thalamus, supramarginal gyrus, and cerebellum. The observed group differences underscore the extent of the cross-modal reorganization in the brain and the supra-modal function of the occipital cortex in congenitally blind individuals.
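The seed-based logic can be sketched schematically: extract a seed time course, correlate it with other regions' time courses, Fisher z-transform the correlations, and compare groups. Everything below is simulated (the time-point and ROI counts, group sizes, and effect size are assumptions), not the study's pipeline.

```python
# Schematic seed-based resting-state functional connectivity analysis.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n_tp = 240                                    # resting-state time points

def subject_connectivity(n_rois=5):
    seed = rng.normal(size=n_tp)              # e.g., a visual-cortex seed
    rois = rng.normal(size=(n_rois, n_tp)) + 0.3 * seed   # shared signal
    r = np.array([np.corrcoef(seed, ts)[0, 1] for ts in rois])
    return np.arctanh(r)                      # Fisher z for group statistics

blind = np.array([subject_connectivity() for _ in range(20)])
sighted = np.array([subject_connectivity() for _ in range(20)])
t, p = ttest_ind(blind, sighted, axis=0)      # per-ROI group comparison
print(np.round(t, 2), np.round(p, 3))
```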