Oculomotor Remapping of Visual Information to Foveal Retinotopic Cortex.
ABSTRACT: Our eyes continually jump around the visual scene to bring the high-resolution, central part of our vision onto objects of interest. We are oblivious to these abrupt shifts, perceiving the visual world as reassuringly stable. A process called remapping has been proposed to mediate this perceptual stability for attended objects by shifting their retinotopic representation to compensate for the effects of the upcoming eye movement. In everyday vision, observers make goal-directed eye movements towards items of interest, bringing them to the fovea; for these items, the remapped activity should impinge on foveal regions of the retinotopic maps in visual cortex. Previous research has instead focused on remapping for targets that were not saccade goals, where activity is remapped to a new peripheral location rather than to the foveal representation. We used functional magnetic resonance imaging (fMRI) and a phase-encoding design to investigate remapping of spatial patterns of activity towards the fovea/parafovea for saccade targets that were removed before completion of the eye movement. We found strong evidence of foveal remapping in retinotopic visual areas, which did not occur when observers merely attended to the same peripheral target without making eye movements towards it. Importantly, the spatial profile of the remapped response matched the orientation and size of the saccade target and was appropriately scaled to reflect the retinal extent the stimulus would have had if foveated. We conclude that this remapping of spatially structured information to the fovea may serve as an important mechanism supporting our world-centered sense of location across goal-directed eye movements under natural viewing conditions.
Project description: Neurons in the lateral intraparietal area, frontal eye field, and superior colliculus exhibit a pattern of activity known as remapping. When a salient visual stimulus is presented shortly before a saccade, the representation of that stimulus is updated, or remapped, at the time of the eye movement. This updating is presumably based on a corollary discharge of the eye movement command. To investigate whether visual areas also exhibit remapping, we recorded from single neurons in extrastriate and striate cortex while monkeys performed a saccade task. Around the time of the saccade, a visual stimulus was flashed either at the location occupied by the neuron's receptive field (RF) before the saccade (old RF) or at the location occupied by it after the saccade (new RF). More than half (52%) of V3A neurons responded to a stimulus flashed in the new RF even though the stimulus had already disappeared before the saccade. These neurons responded to a trace of the flashed stimulus brought into the RF by the saccade. In 16% of V3A neurons, remapped activity began even before saccade onset. Remapping also was observed at earlier stages of the visual hierarchy, including in areas V3 and V2. At these earlier stages, the proportion of neurons that exhibited remapping decreased, and the latency of remapped activity increased relative to saccade onset. Remapping was very rare in striate cortex. These results indicate that extrastriate visual areas are involved in the process of remapping.
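The geometry behind the old-RF/new-RF logic can be made concrete. The following minimal sketch (not from the study; the function name and example values are hypothetical) shows the single computation a corollary discharge signal would enable: predicting a stimulus's post-saccadic retinal position from its current retinal position and the impending saccade vector.

```python
import numpy as np

def future_retinal_position(stim_retinal, saccade_vector):
    """Predict where a stimulus will fall on the retina after a saccade.

    When the eye moves by `saccade_vector` (degrees of visual angle), a
    stationary stimulus shifts on the retina by the opposite amount. A
    corollary discharge of the motor command would let a neuron compute
    this shift before the eye actually moves.
    """
    return np.asarray(stim_retinal) - np.asarray(saccade_vector)

# A stimulus flashed 5 deg right of fixation: a 10-deg rightward saccade
# brings it 5 deg LEFT of the new fixation point, i.e. into the "new RF"
# of a neuron whose presaccadic RF was centered at (-5, 0).
print(future_retinal_position([5.0, 0.0], [10.0, 0.0]))  # -> [-5.  0.]
```

A neuron that responds at the new-RF location before any reafferent input arrives is, in effect, evaluating exactly this subtraction.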
Project description: The retinal location of visual information changes each time we move our eyes. Although it is now known that visual information is remapped in retinotopic coordinates across eye movements (saccades), it is currently unclear how head-centered auditory information is remapped across saccades. Keeping track of the location of a sound source in retinotopic coordinates requires a rapid multi-modal reference frame transformation when making saccades. To reveal this reference frame transformation, we designed an experiment where participants attended an auditory or visual cue and executed a saccade. After the saccade had landed, an auditory or visual target could be presented either at the prior retinotopic location or at an uncued location. We observed that both auditory and visual targets presented at prior retinotopic locations elicited faster responses than targets at other locations. In a second experiment, we observed that spatial attention pointers obtained via audition are available in retinotopic coordinates immediately after an eye movement is made. In a third experiment, we found evidence for an asymmetric cross-modal facilitation of information presented at the retinotopic location. In line with prior single-cell recording studies, this study provides the first behavioral evidence for immediate auditory and cross-modal transsaccadic updating of spatial attention. These results indicate that our brain has efficient solutions for solving the challenges in localizing sensory input that arise in a dynamic context.
Project description: Humans typically make several saccades per second. This poses a challenge for the visual system, as locations are largely coded in retinotopic (eye-centered) coordinates. Spatial remapping, the updating of retinotopic location coordinates of items in visuospatial memory, is typically assumed to be limited to robust, capacity-limited, and attention-demanding working memory (WM). Are pre-attentive, maskable, sensory memory representations (e.g. fragile memory, FM) also remapped? We directly compared trans-saccadic WM (tWM) and trans-saccadic FM (tFM) in a retro-cue change-detection paradigm. Participants memorized oriented rectangles, made a saccade, and reported whether they saw a change in a subsequent display. On some trials a retro-cue indicated the to-be-tested item prior to probe onset. This allowed sensory memory items to be included in the memory capacity estimate. The observed retro-cue benefit demonstrates a tFM capacity considerably above tWM. This provides evidence that some, if not all, sensory memory was remapped to spatiotopic (world-centered, task-relevant) coordinates. In a second experiment, we show backward masks to be effective in retinotopic as well as spatiotopic coordinates, demonstrating that FM was indeed remapped to world-centered coordinates. Together, this provides conclusive evidence that trans-saccadic spatial remapping is not limited to higher-level WM processes but also occurs for sensory memory representations.
Project description: Visual processing can be facilitated by covert attention at behaviorally relevant locations. If the eyes move while a location in the visual field is facilitated, what happens to the internal representation of the attended location? With each eye movement, the retinotopic (eye-centered) coordinates of the attended location change while the spatiotopic (world-centered) coordinates remain stable. To investigate whether the neural substrates of spatial attention reside in retinotopically and/or spatiotopically organized maps, we used a novel gaze-contingent behavioral paradigm that probed spatial attention at various times after eye movements. When task demands required maintaining a spatiotopic representation after the eye movement, we found facilitation at the retinotopic location of the spatial cue for 100-200 ms after the saccade, although this location had no behavioral significance. This task-irrelevant retinotopic representation dominated immediately after the saccade, whereas at later delays, the task-relevant spatiotopic representation prevailed. However, when task demands required maintaining the cue in retinotopic coordinates, a strong retinotopic benefit persisted long after the saccade, and there was no evidence of spatiotopic facilitation. These data suggest that the cortical and subcortical substrates of spatial attention primarily reside in retinotopically organized maps that must be dynamically updated to compensate for eye movements when behavioral demands require a spatiotopic representation of attention. Our conclusion is that the visual system's native or low-level representation of endogenously maintained spatial attention is retinotopic, and remapping of attention to spatiotopic coordinates occurs slowly and only when behaviorally necessary.
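The distinction between the two candidate reference frames determines where a post-saccadic probe must appear on screen. The helper below is a hypothetical sketch (not code from the study) of how a gaze-contingent paradigm would place its two probe locations after the saccade.

```python
def probe_locations(cue_screen, saccade_vector):
    """Screen positions (deg) probing the two candidate reference frames
    after a saccade. Hypothetical helper, not from the study.

    - "spatiotopic": the cued SCREEN location, unchanged by the eye
      movement (world-centered frame).
    - "retinotopic": the screen location that lands on the same RETINAL
      position the cue occupied before the saccade, i.e. the cue shifted
      by the saccade vector.
    """
    cx, cy = cue_screen
    dx, dy = saccade_vector
    return {"spatiotopic": (cx, cy), "retinotopic": (cx + dx, cy + dy)}

# Cue 5 deg right of the initial fixation point; 10-deg rightward saccade.
print(probe_locations((5.0, 0.0), (10.0, 0.0)))
# -> {'spatiotopic': (5.0, 0.0), 'retinotopic': (15.0, 0.0)}
```

Facilitation at the "retinotopic" screen position after the saccade is what the study calls the task-irrelevant retinotopic representation: that location is behaviorally meaningless, yet shares the cue's pre-saccadic retinal coordinates.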
Project description: Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.
Project description: The fundamental role of the visual system is to guide behavior in natural environments. To optimize information transmission, many animals have evolved a non-homogeneous retina and serially sample visual scenes by saccadic eye movements. Such eye movements, however, introduce high-speed retinal motion and decouple external and internal reference frames. Until now, these processes have only been studied with unnatural stimuli, eye movement behavior, and tasks. These experiments confound retinotopic and geotopic coordinate systems and may probe a non-representative functional range. Here we develop a real-time, gaze-contingent display with precise spatiotemporal control over high-definition natural movies. In an active condition, human observers freely watched nature documentaries and indicated the location of periodic narrow-band contrast increments relative to their gaze position. In a passive condition under central fixation, the same retinal input was replayed to each observer by updating the video's screen position. Comparison of visual sensitivity between conditions revealed three mechanisms that the visual system has adapted to compensate for peri-saccadic vision changes. Under natural conditions we show that reduced visual sensitivity during eye movements can be explained simply by the high retinal speed during a saccade without recourse to an extra-retinal mechanism of active suppression; we give evidence for enhanced sensitivity immediately after an eye movement indicative of visual receptive fields remapping in anticipation of forthcoming spatial structure; and we demonstrate that perceptual decisions can be made in world rather than retinal coordinates.
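The passive-condition replay rests on one equation: to give a fixating observer the same retinal input that the active observer received, the movie must shift opposite to the recorded gaze path. A minimal sketch, with a hypothetical function name and made-up gaze samples:

```python
import numpy as np

def replay_offsets(gaze_trace, fixation):
    """Screen offsets (deg) that reproduce an active observer's retinal
    input for a passively fixating observer. Illustrative sketch only.

    If the original observer's gaze was at g(t) on a static screen, the
    content at screen point p fell on retinal position p - g(t). After
    shifting the movie by offset o(t), a viewer fixating at `fixation` f
    sees that content at retinal position p + o(t) - f. Matching the two
    for every p gives o(t) = f - g(t).
    """
    return np.asarray(fixation) - np.asarray(gaze_trace)

# Hypothetical recorded gaze samples (deg) during free viewing, replayed
# with central fixation at (0, 0): the movie shifts against the gaze path.
gaze = np.array([[0.0, 0.0], [4.0, 1.0], [8.0, 2.0]])
print(replay_offsets(gaze, [0.0, 0.0]))
```

Because the offset cancels the gaze displacement sample by sample, both conditions deliver identical retinal input while only the active condition carries the extra-retinal (motor) signals, which is what lets the comparison isolate them.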
Project description: The mammalian visual system contains an extensive web of feedback connections projecting from higher cortical areas to lower areas, including primary visual cortex. Although multiple theories have been proposed, the role of these connections in perceptual processing is not understood. We found that the pattern of functional magnetic resonance imaging response in human foveal retinotopic cortex contained information about objects presented in the periphery, far away from the fovea, which had not been predicted by prior theories of feedback. This information was position invariant, correlated with perceptual discrimination accuracy, and was found only in foveal, but not peripheral, retinotopic cortex. Our data cannot be explained by differential eye movements, activation from the fixation cross, or spillover activation from peripheral retinotopic cortex or from lateral occipital complex. Instead, our findings indicate that position-invariant object information from higher cortical areas is fed back to foveal retinotopic cortex, enhancing task performance.
Project description: With each eye movement, the image of the world received by the visual system changes dramatically. To maintain stable spatiotopic (world-centered) visual representations, the retinotopic (eye-centered) coordinates of visual stimuli are continually remapped, even before the eye movement is completed. Recent psychophysical work has suggested that updating of attended locations occurs as well, although on a slower timescale, such that sustained attention lingers in retinotopic coordinates for several hundred milliseconds after each saccade. To explore where and when this "retinotopic attentional trace" resides in the cortical visual processing hierarchy, we conducted complementary functional magnetic resonance imaging and event-related potential (ERP) experiments using a novel gaze-contingent task. Human subjects executed visually guided saccades while covertly monitoring a fixed spatiotopic target location. Although subjects responded only to stimuli appearing at the attended spatiotopic location, blood oxygen level-dependent responses to stimuli appearing after the eye movement at the previously, but no longer, attended retinotopic location were enhanced in visual cortical area V4 and throughout visual cortex. This retinotopic attentional trace was also detectable with higher temporal resolution in the anterior N1 component of the ERP data, a well established signature of attentional modulation. Together, these results demonstrate that, when top-down spatiotopic signals act to redirect visuospatial attention to new retinotopic locations after eye movements, facilitation transiently persists in the cortical regions representing the previously relevant retinotopic location.
Project description: Most people easily learn to recognize new faces and places, and with more extensive practice they can become experts at visual tasks as complex as radiological diagnosis and action video games. Such perceptual plasticity has been thoroughly studied in the context of training paradigms that require constant fixation. In contrast, when observers learn under more natural conditions, they make frequent saccadic eye movements. Here we show that such eye movements can play an important role in visual learning. Observers performed a task in which they executed a saccade while discriminating the motion of a cued visual stimulus. Additional stimuli, presented simultaneously with the cued one, permitted an assessment of the perceptual integration of information across visual space. Consistent with previous results on perisaccadic remapping [M. Szinte, D. Jonikaitis, M. Rolfs, P. Cavanagh, H. Deubel, J. Neurophysiol. 116, 1592-1602 (2016)], most observers preferentially integrated information from locations representing the presaccadic and postsaccadic retinal positions of the cue. With extensive training on the saccade task, these observers gradually acquired the ability to perform similar motion integration without making eye movements. Importantly, the newly acquired pattern of spatial integration was determined by the metrics of the saccades made during training. These results suggest that oculomotor influences on visual processing, long thought to subserve the function of perceptual stability, also play a role in visual plasticity.
Project description: Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism to transform the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of special neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We wanted to test in humans whether a remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) is an apparent movement in the direction opposite to that of a previously viewed moving stimulus. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from the pre-saccadic to the post-saccadic position could be observed when subjects prepared saccades. In the remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). By contrast, after a saccade or without saccade planning, the MAE was weak or absent when adaptation and test stimulus were located at different retinal locations, i.e. the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5), corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization.
The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant for a stable percept despite saccades.