Correlates of transsaccadic integration in the primary visual cortex of the monkey.
ABSTRACT: We make several eye movements per second when we explore a visual scene. Each eye movement sweeps the scene's projection across the retina and changes its representation in retinotopic areas of the visual cortex, but we nevertheless perceive a stable world. Here we investigate the neuronal correlates of visual stability in the primary visual cortex. Monkeys were trained to make two saccades along a single curve and to ignore another, distracting curve. Attention enhanced neuronal responses to the entire relevant curve before the first saccade. This response enhancement was rapidly reestablished after the saccade, although the image was shifted across the primary visual cortex. We argue that this fast postsaccadic restoration of the attentional response enhancement contributes to the stability of vision across eye movements, and reduces the impact of saccades on visual cognition.
Project description: As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as a large rotational jump in the direction opposite to the rotation (a backward jump). We first show that accumulated motion information is updated spatiotopically across saccades. We then show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. These findings suggest that presaccadic information is used to facilitate postsaccadic perception and support a forward model of transsaccadic perception that anticipates the consequences of eye movements and operates within the narrow perisaccadic time window.
Project description: Across saccades, humans can integrate the low-resolution presaccadic information of an upcoming saccade target with the high-resolution postsaccadic information. There is converging evidence to suggest that transsaccadic integration occurs at the saccade target. However, given divergent evidence on the spatial specificity of related mechanisms such as attention, visual working memory, and remapping, it is unclear whether integration is also possible at locations other than the saccade target. We tested the spatial profile of transsaccadic integration by measuring perceptual performance at six locations around the saccade target and between the saccade target and initial fixation. Results show that integration benefits do not differ between the saccade target and surrounding locations. Transsaccadic integration benefits are not specific to the saccade target and can occur at other locations when they are behaviorally relevant, although there is a trend for worse performance at the location above initial fixation compared with those in the direction of the saccade. This suggests that transsaccadic integration may be a more general mechanism used to reconcile task-relevant pre- and postsaccadic information at attended locations other than the saccade target. NEW & NOTEWORTHY: This study shows that integration of pre- and postsaccadic information across saccades is not restricted to the saccade target. We found performance benefits of transsaccadic integration at attended locations other than the saccade target, and these benefits did not differ from those found at the saccade target. This suggests that transsaccadic integration may be a more general mechanism used to reconcile pre- and postsaccadic information at task-relevant locations.
Project description: Neurons in the visual cortex quickly adapt to constant input, which should lead to perceptual fading within a few tens of milliseconds. However, perceptual fading is rarely observed in everyday perception, possibly because eye movements refresh retinal input. Recently, it has been suggested that the amplitudes of large saccadic eye movements are scaled to maximally decorrelate presaccadic and postsaccadic inputs and thus to annul perceptual fading. However, this argument builds on the assumption that adaptation within naturally brief fixation durations is strong enough to survive any visually disruptive saccade and affect perception. We tested this assumption by measuring the effect of luminance adaptation on postsaccadic contrast perception. We found that postsaccadic contrast perception was affected by presaccadic luminance adaptation during brief periods of fixation. This adaptation effect emerges within 100 milliseconds and persists over seconds. These results indicate that adaptation during natural fixation periods can affect perception even after visually disruptive saccades.
Project description: Whenever we move our eyes, some visual information obtained before a saccade is combined with the visual information obtained after a saccade. Interestingly, saccades rarely land exactly on the saccade target, which may pose a problem for transsaccadic perception as it could affect the quality of postsaccadic input. Recently, however, we showed that transsaccadic feature integration is actually unaffected by deviations of saccade landing points. Possibly, transsaccadic integration remains unaffected because the presaccadic shift of attention follows the intended saccade target and not the actual saccade landing point during regular saccades. Here, we investigated whether saccade landing point errors can in fact alter transsaccadic perception when the presaccadic shift of attention follows the saccade landing point deviation. Given that saccadic adaptation changes not only the saccade vector but also the presaccadic shift of attention, we combined a feature report paradigm with saccadic adaptation. Observers reported the color of the saccade target, which occasionally changed slightly during a saccade to the target. This task was performed before and after saccadic adaptation. The results showed that, after adaptation, presaccadic color information became less precise and transsaccadic perception relied more strongly on the postsaccadic color estimate. Therefore, although previous studies have shown that transsaccadic perception is generally unaffected by saccade landing point deviations, our results reveal that this cannot be considered a general property of the visual system. When presaccadic shifts of attention follow altered saccade landing points, transsaccadic perception is affected, suggesting that transsaccadic feature perception might depend on visual spatial attention.
Project description: Perception of a stable visual world despite eye movements requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.
Project description: Saccadic eye movements can drastically affect motion perception: during saccades, the stationary surround is swept rapidly across the retina and contrast sensitivity is suppressed. However, after saccades, contrast sensitivity is enhanced for color and high-spatial-frequency stimuli, and reflexive tracking movements known as ocular following responses (OFR) are enhanced in response to large-field motion. Additionally, OFR and postsaccadic enhancement of neural activity in primate motion processing areas are well correlated. It is not yet known how this postsaccadic enhancement arises. Therefore, we tested whether the enhancement can be explained by changes in the balance of centre-surround antagonism in motion processing, where spatial summation is favoured at low contrasts and surround suppression is favoured at high contrasts. We found that motion perception was selectively enhanced immediately after saccades for high-spatial-frequency stimuli, consistent with previously reported selective postsaccadic enhancement of contrast sensitivity for flashed high-spatial-frequency stimuli. The observed enhancement was also associated with changes in spatial summation and suppression, as well as contrast facilitation and inhibition, suggesting that motion processing is augmented to maximise visual perception immediately after saccades. The results highlight that the spatial and contrast properties of the neural mechanisms underlying motion processing can be affected by an antecedent saccade for highly detailed stimuli, and are in line with studies showing behavioural and neuronal enhancement of motion processing in non-human primates.
Project description: Anti-saccades are eye movements that require inhibition to stop the automatic saccade to the visual target and to perform instead a saccade in the opposite direction. The inhibitory processes underlying anti-saccades have been primarily associated with frontal cortex areas for their role in executive control. Impaired performance in anti-saccades has also been associated with the parietal cortex, but its role in inhibitory processes remains unclear. Here, we tested the assumption that the dorsal parietal cortex contributes to spatial inhibition of contralateral visual targets. We measured anti-saccade performance in 2 unilateral optic ataxia patients and 15 age-matched controls. Participants performed 90 degree (across and within visual fields) and 180 degree inversion anti-saccades, as well as pro-saccades. The main result was that our patients took longer to inhibit visually guided saccades when the visual target was presented in the ataxic hemifield and the task required a saccade across hemifields. This was observed in both anti-saccade latencies and error rates. These deficits show the crucial role of the dorsal posterior parietal cortex in spatial inhibition of contralateral visual target representations to plan an accurate anti-saccade toward the ipsilesional side.
Project description: Most people easily learn to recognize new faces and places, and with more extensive practice they can become experts at visual tasks as complex as radiological diagnosis and action video games. Such perceptual plasticity has been thoroughly studied in the context of training paradigms that require constant fixation. In contrast, when observers learn under more natural conditions, they make frequent saccadic eye movements. Here we show that such eye movements can play an important role in visual learning. Observers performed a task in which they executed a saccade while discriminating the motion of a cued visual stimulus. Additional stimuli, presented simultaneously with the cued one, permitted an assessment of the perceptual integration of information across visual space. Consistent with previous results on perisaccadic remapping [M. Szinte, D. Jonikaitis, M. Rolfs, P. Cavanagh, H. Deubel, J. Neurophysiol. 116, 1592-1602 (2016)], most observers preferentially integrated information from locations representing the presaccadic and postsaccadic retinal positions of the cue. With extensive training on the saccade task, these observers gradually acquired the ability to perform similar motion integration without making eye movements. Importantly, the newly acquired pattern of spatial integration was determined by the metrics of the saccades made during training. These results suggest that oculomotor influences on visual processing, long thought to subserve the function of perceptual stability, also play a role in visual plasticity.
Project description: Humans are able to integrate pre- and postsaccadic percepts of an object across saccades to maintain perceptual stability. Previous studies have used Maximum Likelihood Estimation (MLE) to determine that integration occurs in a near-optimal manner. Here, we compared three different models to investigate the mechanism of integration in more detail: an early-noise model, where noise is added to the pre- and postsaccadic signals before integration occurs; a late-noise model, where noise is added to the integrated signal after integration occurs; and a temporal summation model, where integration benefits arise from the longer transsaccadic presentation duration compared to pre- or postsaccadic presentation only. We also measured spatiotemporal aspects of integration to determine whether integration can occur for very brief stimulus durations, across two hemifields, and in spatiotopic and retinotopic coordinates. Pre-, post-, and transsaccadic performance was measured at different stimulus presentation durations, both at the saccade target and at a location where the pre- and postsaccadic stimuli were presented in different hemifields across the saccade. Results showed that for both within- and between-hemifields conditions, integration could occur when pre- and postsaccadic stimuli were presented only briefly, and that the pattern of integration followed an early-noise model. Whereas integration occurred when the pre- and postsaccadic stimuli were presented in the same spatiotopic coordinates, there was no integration when they were presented in the same retinotopic coordinates. This contrast suggests that transsaccadic integration is limited by early, independent sensory noise acting separately on pre- and postsaccadic signals.
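The near-optimal MLE integration referenced above has a simple computational form: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either input. The following Python sketch illustrates this standard cue-combination rule with purely hypothetical numbers; it is not code or data from the study.

```python
import math

def mle_integrate(mu_pre, sigma_pre, mu_post, sigma_post):
    """Combine pre- and postsaccadic estimates by reliability weighting.

    Reliability is the inverse variance; the MLE-combined estimate is the
    reliability-weighted mean, and its variance is the inverse of the summed
    reliabilities, so it never exceeds the smaller input variance.
    """
    r_pre = 1.0 / sigma_pre ** 2
    r_post = 1.0 / sigma_post ** 2
    mu = (r_pre * mu_pre + r_post * mu_post) / (r_pre + r_post)
    sigma = math.sqrt(1.0 / (r_pre + r_post))
    return mu, sigma

# Hypothetical example: a coarse presaccadic percept (sigma = 2.0) combined
# with a sharper postsaccadic percept (sigma = 1.0). The combined estimate
# lies closer to the more reliable cue, with sigma = sqrt(0.8) ~ 0.89.
mu, sigma = mle_integrate(mu_pre=10.0, sigma_pre=2.0, mu_post=12.0, sigma_post=1.0)
```

Under the early-noise model favored by the results, the independent noise terms enter on `mu_pre` and `mu_post` before this combination step, rather than on the integrated output.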
Project description: Spatial computations underlying the coordination of the hand and eye present formidable geometric challenges. One way for the nervous system to simplify these computations is to directly encode the relative position of the hand and the center of gaze. Neurons in the dorsal premotor cortex (PMd), which is critical for the guidance of arm-reaching movements, encode the relative position of the hand, gaze, and goal of reaching movements. This suggests that PMd can coordinate reaching movements with eye movements. Here, we examine saccade-related signals in PMd to determine whether they also point to a role for PMd in coordinating visual-motor behavior. We first compared the activity of a population of PMd neurons with a population of parietal reach region (PRR) neurons. During center-out reaching and saccade tasks, PMd neurons responded more strongly before saccades than PRR neurons, and PMd contained a larger proportion of exclusively saccade-tuned cells than PRR. During a saccade relative-position-coding task, PMd neurons encoded saccade targets in a relative position code that depended on the relative position of gaze, the hand, and the goal of a saccadic eye movement. This relative position code for saccades is similar to the way that PMd neurons encode reach targets. We propose that eye movement and eye position signals in PMd do not drive eye movements, but rather provide spatial information that links the control of eye and arm movements to support coordinated visual-motor behavior.