Functional reorganization of the conceptual brain system after deafness in early childhood.
ABSTRACT: The neurodevelopmental consequences of deafness for the functional neuroarchitecture of the conceptual system have received little systematic investigation so far. Using functional magnetic resonance imaging (fMRI), we therefore identified brain areas involved in conceptual processing in deaf and hearing participants. Conceptual processing was probed with a pictorial animacy decision task. Furthermore, brain areas sensitive to observing verbal signs and to observing non-verbal visual hand actions were identified in deaf participants. In hearing participants, brain areas responsive to environmental sounds and to the observation of visual hand actions were determined. We found stronger recruitment of superior and middle temporal cortex in deaf compared to hearing participants during animacy decisions. This region, which corresponds to auditory cortex in hearing people as delineated by the sound listening task, was also activated in deaf participants when they observed sign language, but not when they observed non-verbal hand actions. These results indicate that conceptual processing in deaf people depends more strongly on language representations than in hearing people. Furthermore, enhanced activation of visual and motor areas in deaf versus hearing participants during animacy decisions, together with more frequent reports of visual and motor features in a property listing task, suggests that the loss of the auditory channel is partially compensated by an increased importance of visual and motor information in constituting object knowledge. Hence, our results indicate that conceptual processing in deaf compared to hearing people is more strongly based on the language system, complemented by an enhanced contribution of the visuo-motor system.
Project description:Audition dominates the other senses in temporal processing, and in the absence of auditory cues, temporal perception can be compromised. Moreover, after auditory deprivation, visual attention is selectively enhanced for peripheral visual stimuli. In this study, we assessed whether early hearing loss affects motor-sensory recalibration, the ability to adjust the perceived timing of an action and its sensory effect based on recent experience. Early deaf participants and hearing controls were asked to discriminate the temporal order of a motor action (a keypress) and a visual stimulus (a white circle) before and after adaptation to a delay between the two events. To examine the effects of spatial modulation, we presented visual stimuli in both the central and the peripheral visual field. Results showed overall higher temporal JNDs (Just Noticeable Differences) for deaf participants than for hearing controls, suggesting that auditory information is important for the calibration of motor-sensory timing. Adaptation to a motor-sensory delay induced distinct effects in the two groups: hearing controls showed a recalibration effect for central stimuli only, whereas deaf individuals showed it for peripheral stimuli only. Our results suggest that auditory deprivation affects motor-sensory recalibration and that the mechanism underlying motor-sensory recalibration is susceptible to spatial modulation.
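As an illustration of how the JND and a recalibration effect are typically quantified in such a temporal order judgement task (this is not the authors' analysis code), the sketch below fits a cumulative Gaussian to response proportions and reads off the point of subjective simultaneity (PSS) and the JND. All SOA values and proportions are hypothetical.

```python
# Minimal sketch (hypothetical data): estimating the PSS and JND from
# temporal-order-judgement data by fitting a cumulative Gaussian.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """Probability of reporting 'visual stimulus second' at a given SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical data: SOA between keypress and circle (ms, negative = circle first)
soas = np.array([-200, -120, -60, 0, 60, 120, 200])
p_visual_second = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.90, 0.97])

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_second, p0=(0.0, 80.0))
jnd = norm.ppf(0.75) * sigma      # half the 25%-75% interval of the fitted curve
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
# A recalibration effect would appear as a shift in the PSS after adaptation
# to a delay between the keypress and the visual stimulus.
```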
Project description:Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1-3]. This suggests that conceptual representations are "modality independent." However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are "language independent." However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent when comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world.
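A hypothetical sketch of the kind of cross-language pattern comparison described here (not the study's actual pipeline): it asks whether category-level activity patterns measured during speech can be matched to the corresponding patterns measured during sign. All data, dimensions, and the decoding rule are simulated assumptions for illustration.

```python
# Illustrative sketch only: testing whether activity patterns for the same
# semantic categories are shared across two languages/modalities.
import numpy as np

rng = np.random.default_rng(0)
n_categories, n_voxels = 4, 200

# Hypothetical patterns (categories x voxels) for speech and for sign,
# constructed so that they share some category structure.
speech = rng.normal(size=(n_categories, n_voxels))
sign = 0.6 * speech + 0.4 * rng.normal(size=(n_categories, n_voxels))

def cross_modal_accuracy(a, b):
    """Assign each row of `a` to its most correlated row of `b`; chance = 1/n."""
    corr = np.corrcoef(a, b)[:len(a), len(a):]   # cross-correlation block
    return np.mean(np.argmax(corr, axis=1) == np.arange(len(a)))

print("category decoding across modalities:", cross_modal_accuracy(speech, sign))
```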
Project description:Deaf individuals are known to process visual stimuli in the periphery better than the normal-hearing population. However, very few studies have examined attention orienting in the oculomotor domain in deaf individuals, particularly when targets appear at variable eccentricities. In this study, we examined whether the visual perceptual processing advantage reported in deaf people also modulates spatial attentional orienting with eye movement responses. We used a spatial cueing task with cued and uncued targets that appeared at two different eccentricities and explored attentional facilitation and inhibition. Both saccadic and manual responses were elicited. The deaf participants showed a larger cueing effect for ocular responses than the normal-hearing participants. However, there was no group difference for manual responses. There was also greater facilitation at the periphery for both saccadic and manual responses, irrespective of group. These results suggest that, owing to their superior visual processing ability, deaf individuals may orient attention faster to targets. We discuss the results in terms of previous studies on cueing and attentional orienting in deaf individuals.
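For readers unfamiliar with the metric, the cueing effect is conventionally the uncued-minus-cued reaction-time difference, computed separately for each response modality and eccentricity. The snippet below illustrates that arithmetic with entirely made-up reaction times; it is not the study's data or code.

```python
# Sketch with hypothetical mean RTs (ms): the cueing effect as the
# uncued-minus-cued difference per response type and eccentricity.
mean_rt = {
    ("saccadic", "central"):    {"cued": 210, "uncued": 245},
    ("saccadic", "peripheral"): {"cued": 225, "uncued": 270},
    ("manual",   "central"):    {"cued": 380, "uncued": 405},
    ("manual",   "peripheral"): {"cued": 395, "uncued": 430},
}
for (response, eccentricity), rt in mean_rt.items():
    effect = rt["uncued"] - rt["cued"]  # positive = facilitation at the cued location
    print(f"{response:9s} {eccentricity:10s} cueing effect = {effect} ms")
```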
Project description:Objects contain rich visual and conceptual information, but do these two types of information interact? Here, we examine whether visual and conceptual information interact when observers see novel objects for the first time. We then address how this interaction influences the acquisition of perceptual expertise. We used two types of novel objects (Greebles), designed to resemble either animals or tools, and two lists of words, which described non-visual attributes of people or man-made objects. Participants first judged whether a word was more suitable for describing people or objects while ignoring a task-irrelevant image, and showed faster responses if the words and the unfamiliar objects were congruent in terms of animacy (e.g., animal-like objects with words that described humans). Participants then learned to associate objects and words that were either congruent or incongruent in animacy, before receiving expertise training to rapidly individuate the objects. Congruent pairing of visual and conceptual information facilitated observers' ability to become perceptual experts, as revealed in a matching task that required visual identification at the basic or subordinate level. Taken together, these findings show that visual and conceptual information interact at multiple levels in object recognition.
Project description:Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than to consonant strings, showing over-reliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.
Project description:Deafness results in greater reliance on the remaining senses. It is unknown whether the cortical architecture of the intact senses is optimized to compensate for lost input. Here we performed widefield population receptive field (pRF) mapping of primary visual cortex (V1) with functional magnetic resonance imaging (fMRI) in hearing and congenitally deaf participants, all of whom had learnt sign language after the age of 10 years. We found larger pRFs encoding the peripheral visual field of deaf compared to hearing participants. This was likely driven by larger facilitatory center zones of the pRF profile concentrated in the near and far periphery in the deaf group. pRF density was comparable between groups, indicating pRFs overlapped more in the deaf group. This could suggest that a coarse coding strategy underlies enhanced peripheral visual skills in deaf people. Cortical thickness was also decreased in V1 in the deaf group. These findings suggest deafness causes structural and functional plasticity at the earliest stages of visual cortex.
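For context, pRF mapping of this kind typically follows the approach of Dumoulin and Wandell (2008), in which each voxel's response is modelled as the overlap between a 2-D Gaussian receptive field and the stimulus aperture over time. The sketch below illustrates only that forward model, with an arbitrary visual-field grid and a toy bar stimulus; it is not the pipeline used in this project, and all parameter values are assumptions.

```python
# Minimal sketch of the pRF forward model: predict a voxel's response from the
# overlap of a 2-D Gaussian with a stimulus aperture movie (illustrative values).
import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    """Isotropic 2-D Gaussian pRF evaluated on a (y, x) visual-field grid."""
    yy, xx = grid
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

# Visual-field grid in degrees and a toy aperture movie (time x height x width)
coords = np.linspace(-10, 10, 101)
grid = np.meshgrid(coords, coords, indexing="ij")
aperture = np.zeros((50, 101, 101))
for t in range(50):                      # a bar sweeping left to right
    aperture[t, :, t * 2:t * 2 + 6] = 1.0

prf = gaussian_prf(x0=6.0, y0=0.0, sigma=2.5, grid=grid)   # a peripheral pRF
prediction = (aperture * prf).sum(axis=(1, 2))             # predicted neural response
print("peak response at frame", int(prediction.argmax()))
# In practice the prediction is convolved with a haemodynamic response function,
# the (x0, y0, sigma) best fitting each voxel's BOLD time series are retained,
# and sigma (pRF size) is what gets compared between deaf and hearing groups.
```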
Project description:Auditory cortex in congenitally deaf early sign language users reorganizes to support cognitive processing in the visual domain. However, evidence suggests that the potential benefits of this reorganization are largely unrealized. At the same time, there is growing evidence that experience of playing computer and console games improves visual cognition, in particular visuospatial attentional processes. In the present study, we investigated in a group of deaf early signers whether those who reported recently playing computer or console games (deaf gamers) had better visuospatial attentional control than those who reported not playing such games (deaf non-gamers), and whether any such effect was related to cognitive processing in the visual domain. Using a classic test of attentional control, the Eriksen Flanker task, we found that deaf gamers performed on a par with hearing controls, while the performance of deaf non-gamers was poorer. Among hearing controls there was no effect of gaming. This suggests that deaf gamers may have better visuospatial attentional control than deaf non-gamers, probably because they are less susceptible to parafoveal distractions. Future work should examine the robustness of this potential gaming benefit and whether it is associated with neural plasticity in early deaf signers, as well as whether gaming intervention can improve visuospatial cognition in deaf people.
Project description:Compensatory changes resulting from auditory deprivation in the deaf lead to enhanced visual processing skills. In two experiments, we explored whether such brain plasticity in the deaf modulates processing of masked stimuli in the visual modality. Deaf and normal-hearing participants responded to targets either voluntarily or by instruction. Masked primes related to the responses were presented briefly before the targets at the center and the periphery. In Experiment 1, targets appeared only at the foveal region, whereas in Experiment 2, they appeared both at the fovea and in the periphery. The deaf participants showed higher sensitivity to masked primes in both experiments: they chose the primed response more often and were also faster on congruent responses compared to the normal-hearing participants. These results suggest that neuroplasticity in the deaf modulates how they perceive and use information with reduced visibility for action selection and execution.
Project description:Background:Previous research has examined the effect of hearing loss on supra-second duration estimation in the visual channel and position effects on visual abilities in deaf populations. The current study aimed to investigate sub-second duration perception across different visual fields in profoundly deaf individuals. Methods:A total of 16 profoundly deaf undergraduates and 16 hearing undergraduates completed a visual duration bisection task in which participants judged whether each of a series of probe durations, linearly spaced from 200 ms to 800 ms in 100 ms steps, was more similar to a standard short duration (200 ms) or a standard long duration (800 ms). The probe stimuli were presented in the center, left, or right of the screen. A repeated-measures analysis of variance (ANOVA) with a between-participants factor of group and a within-participants factor of position, and one-sample t-tests, were conducted. Results:The Weber ratio (WR) values of deaf participants were significantly higher than those of hearing participants, regardless of the position of the visual stimulus. The bisection point (BP) value of deaf participants was significantly lower than 500 ms (the arithmetic mean of 200 ms and 800 ms), whereas the BP value of hearing participants did not differ significantly from 500 ms, although the overall difference in BP values between the deaf and hearing groups did not reach significance. For deaf participants, the BP value in the center condition was significantly lower than 500 ms, whereas the difference between the BP value in the left condition and 500 ms did not reach significance, indicating that their duration discrimination accuracy in the left visual field was better than in the center visual field. Conclusions:Hearing loss impaired visual sub-second duration perception, and deaf individuals showed a left visual field advantage in duration discrimination accuracy during the visual duration bisection task.
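For clarity, the bisection point (BP) and Weber ratio (WR) reported above are standard summary statistics of the bisection psychometric function: the BP is the probe duration judged "long" on half of the trials, and the WR scales the spread of the function by the BP, so higher values indicate poorer temporal sensitivity. The sketch below estimates both from hypothetical data with a logistic fit; the exact fitting procedure used in the study may differ.

```python
# Sketch (hypothetical data): estimating the bisection point (BP) and Weber
# ratio (WR) from a duration-bisection psychometric function.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, bp, slope):
    """Probability of a 'long' response for probe duration t (ms)."""
    return 1.0 / (1.0 + np.exp(-(t - bp) / slope))

durations = np.array([200, 300, 400, 500, 600, 700, 800])        # probe durations (ms)
p_long = np.array([0.03, 0.10, 0.30, 0.55, 0.80, 0.93, 0.98])    # hypothetical proportions

(bp, slope), _ = curve_fit(logistic, durations, p_long, p0=(500.0, 60.0))
t25 = bp + slope * np.log(0.25 / 0.75)   # duration judged 'long' on 25% of trials
t75 = bp + slope * np.log(0.75 / 0.25)   # duration judged 'long' on 75% of trials
wr = (t75 - t25) / (2 * bp)              # Weber ratio: higher = poorer sensitivity
print(f"BP = {bp:.0f} ms, WR = {wr:.2f}")
```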
Project description:We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl's gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework of the multimodal nature of human communication.