The parallel programming of landing position in saccadic eye movement sequences.
ABSTRACT: Saccadic eye movements occur in sequences, gathering new information about the visual environment to support successful task completion. Here, we examine the control of these saccadic sequences and, specifically, the extent to which the spatial aspects of the saccadic responses are programmed in parallel. We asked participants to saccade to a series of visual targets and, while they shifted their gaze around the display, we displaced select targets. We found that saccade landing position deviated toward the previous location of the target, suggesting that partial parallel programming of target location information was occurring. Landing position was also affected by the new target location, demonstrating that it was partially updated following the shift. This pattern was present even for targets that were the subject of the next fixation. The amount of preview of the sequence path influenced saccade accuracy: saccades were less affected by relocations when less preview information was available. The results demonstrate that landing positions for a saccade sequence are programmed in parallel and combined with more immediate visual signals.
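The combination described above can be sketched as a weighted average of the pre-shift and post-shift target locations. This is a minimal illustration of the idea, not the authors' model; the weight `w` and the coordinates are hypothetical:

```python
import numpy as np

def predicted_landing(old_target, new_target, w):
    """Weighted average of old and new target positions (w = weight on old)."""
    old = np.asarray(old_target, dtype=float)
    new = np.asarray(new_target, dtype=float)
    return w * old + (1.0 - w) * new

# A target shifted from (10, 0) to (12, 0) deg: with w = 0.4, the predicted
# landing lies between the two locations, biased toward whichever weight favors.
landing = predicted_landing((10.0, 0.0), (12.0, 0.0), w=0.4)
```

A landing position exactly at `new_target` would correspond to `w = 0`, i.e. full updating; intermediate landings are the signature of partial parallel programming.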
Project description:Saccades are made thousands of times a day and are the principal means of localizing objects in our environment. However, the saccade system faces the challenge of accurately localizing objects as they are constantly moving relative to the eye and head. Any delays in processing could cause errors in saccadic localization. To compensate for these delays, the saccade system might use one or more sources of information to predict future target locations, including changes in position of the object over time, or its motion. Another possibility is that motion influences the represented position of the object for saccadic targeting, without requiring an actual change in target position. We tested whether the saccade system can use motion-induced position shifts to update the represented spatial location of a saccade target, using stationary Gabor patches with drifting carriers and either a soft or a hard aperture as saccade targets. In both conditions, the aperture always remained at a fixed retinal location. The soft-aperture Gabor patch resulted in an illusory position shift, whereas the hard-aperture stimulus maintained the motion signals but resulted in a smaller illusory position shift. Thus, motion energy and target location were equated, but a position shift was generated in only one condition. We measured saccadic localization of these targets and found that saccades were indeed shifted, but only with a soft-aperture Gabor patch. Our results suggest that motion shifts the programmed locations of saccade targets, and this remapped location guides saccadic localization.
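The two aperture conditions can be sketched numerically. This is a minimal illustration with hypothetical size, frequency, and envelope parameters, not the study's stimulus code:

```python
import numpy as np

def gabor_frame(size=128, spatial_freq=0.05, phase=0.0, sigma=20.0, soft=True):
    """One frame of a Gabor: a sinusoidal carrier windowed by an aperture.

    Drift is produced by advancing `phase` across frames; the aperture itself
    stays at a fixed location, as in both conditions of the experiment.
    """
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    carrier = np.sin(2 * np.pi * spatial_freq * x + phase)
    if soft:
        aperture = np.exp(-(x**2 + y**2) / (2 * sigma**2))   # Gaussian envelope
    else:
        aperture = ((x**2 + y**2) <= sigma**2).astype(float)  # hard-edged disk
    return carrier * aperture

soft_patch = gabor_frame(soft=True)   # soft aperture: illusory position shift
hard_patch = gabor_frame(soft=False)  # hard aperture: same motion energy
```

Because only the envelope differs, the drifting motion energy inside the aperture is matched across conditions, which is what lets the study isolate the illusory position shift.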
Project description:Whenever we move our eyes, some visual information obtained before a saccade is combined with the visual information obtained after a saccade. Interestingly, saccades rarely land exactly on the saccade target, which may pose a problem for transsaccadic perception as it could affect the quality of postsaccadic input. Recently, however, we showed that transsaccadic feature integration is actually unaffected by deviations of saccade landing points. Possibly, transsaccadic integration remains unaffected because the presaccadic shift of attention follows the intended saccade target and not the actual saccade landing point during regular saccades. Here, we investigated whether saccade landing point errors can in fact alter transsaccadic perception when the presaccadic shift of attention follows the saccade landing point deviation. Given that saccadic adaptation not only changes the saccade vector, but also the presaccadic shift of attention, we combined a feature report paradigm with saccadic adaptation. Observers reported the color of the saccade target, which occasionally changed slightly during a saccade to the target. This task was performed before and after saccadic adaptation. The results showed that, after adaptation, presaccadic color information became less precise and transsaccadic perception had a stronger reliance on the postsaccadic color estimate. Therefore, although previous studies have shown that transsaccadic perception is generally unaffected by saccade landing point deviations, our results reveal that this cannot be considered a general property of the visual system. When presaccadic shifts of attention follow altered saccade landing points, transsaccadic perception is affected, suggesting that transsaccadic feature perception might be dependent on visual spatial attention.
Project description:Saccadic eye movements enable us to rapidly direct our high-resolution fovea onto relevant parts of the visual world. However, while we can intentionally select a location as a saccade target, the wider visual scene also influences our executed movements. In the presence of multiple objects, eye movements may be "captured" to the location of a distractor object, or be biased toward the intermediate position between objects (the "global effect"). Here we examined how the relative strengths of the global effect and visual object capture changed with saccade latency, the separation between visual items and stimulus contrast. Importantly, while many previous studies have omitted giving observers explicit instructions, we instructed participants to either saccade to a specified target object or to the midpoint between two stimuli. This allowed us to examine how their explicit movement goal influenced the likelihood that their saccades terminated at either the target, distractor, or intermediate locations. Using a probabilistic mixture model, we found evidence that both visual object capture and the global effect co-occurred at short latencies and declined as latency increased. As object separation increased, capture came to dominate the landing positions of fast saccades, with reduced global effect. Using the mixture model fits, we dissociated the proportion of unavoidably captured saccades to each location from those intentionally directed to the task goal. From this we could extract the time course of competition between automatic capture and intentional targeting. We show that task instructions substantially altered the distribution of saccade landing points, even at the shortest latencies.NEW & NOTEWORTHY When making an eye movement to a target location, the presence of a nearby distractor can cause the saccade to unintentionally terminate at the distractor itself or the average position in between stimuli. 
With probabilistic mixture models, we quantified how both unavoidable capture and goal-directed targeting were influenced by changing the task and the target-distractor separation. Using this novel technique, we could extract the time course over which automatic and intentional processes compete for control of saccades.
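The mixture-model approach can be illustrated with a short sketch (our own, not the authors' code; the component means, spread, and simulated data are hypothetical): landing positions are treated as draws from three Gaussians centered on the target, the distractor, and their midpoint, and EM estimates the mixing proportions.

```python
import numpy as np

def fit_mixture_weights(landings, means, sd=0.5, n_iter=200):
    """EM for mixing weights with fixed component means and a common sd."""
    landings = np.asarray(landings, dtype=float)
    means = np.asarray(means, dtype=float)
    weights = np.full(len(means), 1.0 / len(means))
    for _ in range(n_iter):
        # E-step: responsibility of each component for each landing point.
        lik = np.exp(-0.5 * ((landings[:, None] - means[None, :]) / sd) ** 2)
        post = weights * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: new weights are the mean responsibilities.
        weights = post.mean(axis=0)
    return weights

rng = np.random.default_rng(0)
# Simulated landings: mostly at the target (0 deg), some captured at the
# distractor (4 deg), a few at the intermediate "global effect" position (2 deg).
data = np.concatenate([rng.normal(0, 0.5, 70),
                       rng.normal(4, 0.5, 20),
                       rng.normal(2, 0.5, 10)])
w = fit_mixture_weights(data, means=[0.0, 4.0, 2.0])
```

The fitted weights estimate the proportions of saccades landing at the target, distractor, and intermediate locations; computing them separately per latency bin is one way to recover the time course of capture versus goal-directed targeting.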
Project description:Human participants made saccadic eye movements to various features in a modified vertical Poggendorff figure, to measure errors in the location of key geometrical features. In one task, subjects (n = 8) made saccades to the vertex of the oblique T-intersection between a diagonal pointer and a vertical line. Results showed both a small tendency to shift the saccade toward the interior of the angle, and a larger bias in the direction of a shorter saccade path to the landing line. In a different kind of task (visual extrapolation), the same subjects fixated the tip of a 45° pointer and made a saccade to the implicit point of intersection between pointer and a distant vertical line. Results showed large errors in the saccade landing positions and the saccade polar angle, in the direction predicted from the perceptual Poggendorff bias. Further experiments manipulated the position of the fixation point relative to the implicit target, such that the Poggendorff bias would be in the opposite direction from a bias toward taking the shortest path to the landing line. The bias was still significant. We conclude that the Poggendorff bias in eye movements is in part due to the mislocation of visible target features but also to biases in planning a saccade to a virtual target across a gap. The latter kind of error comprises both a tendency to take the shortest path to the landing line, and a perceptual error that overestimates the vector component orthogonal to the gap.
Project description:Previous studies have shown that face stimuli elicit extremely fast and involuntary saccadic responses toward them, relative to other categories of visual stimuli. In the present study, we further investigated to what extent face stimuli influence the programming and execution of saccades examining their amplitude. We performed two experiments using a saccadic choice task: two images (one with a face, one with a vehicle) were simultaneously displayed in the left and right visual fields of participants who had to initiate a saccade toward the image (Experiment 1) or toward a cross in the image (Experiment 2) containing a target stimulus (a face or a vehicle). Results revealed shorter saccades toward vehicle than face targets, even if participants were explicitly asked to perform their saccades toward a specific location (Experiment 2). Furthermore, error saccades had smaller amplitude than correct saccades. Further analyses showed that error saccades were interrupted in mid-flight to initiate a concurrently-programmed corrective saccade. Overall, these data suggest that the content of visual stimuli can influence the programming of saccade amplitude, and that efficient online correction of saccades can be performed during the saccadic choice task.
Project description:After a saccade, most MST neurons respond to moving visual stimuli that had existed in their post-saccadic receptive fields and turned off before the saccade ("trans-saccadic memory remapping"). Neuronal responses in higher visual processing areas are known to be modulated in relation to gaze angle to represent image location in spatiotopic coordinates. In the present study, we investigated the eye position effects after saccades and found that the gaze angle modulated the visual sensitivity of MST neurons after saccades both to the actually existing visual stimuli and to the visual memory traces remapped by the saccades. We suggest that two mechanisms, trans-saccadic memory remapping and gaze modulation, work cooperatively in individual MST neurons to represent a continuous visual world.
Project description:When the inside texture of a moving object moves, the perceived position of the object is often shifted toward the direction of the texture's motion (motion-induced position shift), and such perceptual distortion accumulates while the object is watched, causing what is known as the curveball illusion. In a recent study, however, the accumulation of the position error was not observed in saccadic eye movements. Here, we examined whether the position of the illusory object is represented independently in the perceptual and saccadic systems. In the experiments, the stimulus of the curveball illusion was adopted to examine the temporal change in the position representation for saccadic eye movements and for perception by varying the elapsed time from the input of visual information to saccade onset and perceptual judgment, respectively. The results showed that the temporal accumulation of the motion-induced position shift is observed not only in perception but also in saccadic eye movements. In the saccade tasks, the landing positions of saccades gradually shifted to the illusory perceived position as the elapsed time from the target offset to the saccade "go" signal increased. Furthermore, in the perception task, shortening the time between the target offset and the perceptual judgment reduced the size of the illusion effect. Therefore, these results argue against the idea of dissociation between saccadic and perceptual localization of a moving object suggested in the previous study, in which saccades were measured in a rushed way while perceptual responses were measured without time constraint. Instead, the similar temporal trends of these effects imply a common or similar target representation for perception and eye movements which dynamically changes over the course of evidence accumulation.
Project description:Saccadic adaptation is the motor learning process that keeps saccade amplitudes on target. This process is eye position specific: amplitude adaptation that is induced for a saccade at one particular location in the visual field transfers incompletely to saccades at other locations. In our current study, we investigated whether this eye position signal corresponds to the initial or to the final eye position of the saccade. Each case would have different implications for the mechanisms of adaptation. The initial eye position is not directly available when the adaptation-driving postsaccadic error signal is received. On the other hand, the final eye position signal is not available when the motor command for the saccade is calculated. In six human subjects we adapted a saccade of 15 degree amplitude that started at a constant position. We then measured the transfer of adaptation to test saccades of 10 and 20 degree amplitude. In each case we compared test saccades that matched the start position of the adapted saccade to those that matched the target of the adapted saccade. We found significantly more transfer of adaptation to test saccades with the same start position than to test saccades with the same target position. The results indicate that saccadic adaptation is specific to the initial eye position. This is consistent with a previously proposed effect of gain field modulated input from areas like the frontal eye field, the lateral intraparietal area and the superior colliculus into the cerebellar adaptation circuitry.
Project description:Control of saccadic gain is often viewed as a simple compensatory process in which gain is adjusted over many trials by the postsaccadic retinal error, thereby maintaining saccadic accuracy. Here, we propose that gain might also be changed by a reinforcement process not requiring a visual error. To test this hypothesis, we used experimental paradigms in which retinal error was removed by extinguishing the target at the start of each saccade and either an auditory tone or the vision of the target on the fovea was provided as reinforcement after those saccades that met an amplitude criterion. These reinforcement procedures caused a progressive change in saccade amplitude in nearly all subjects, although the rate of adaptation differed greatly among subjects. When we reversed the contingencies and reinforced those saccades landing closer to the original target location, saccade gain changed back toward normal gain in most subjects. When subjects had saccades adapted first by reinforcement and a week later by conventional intrasaccadic step adaptation, both paradigms yielded similar degrees of gain changes and similar transfer to new amplitudes and to new starting positions of the target step as well as comparable rates of recovery. We interpret these changes in saccadic gain in the absence of postsaccadic retinal error as showing that saccade adaptation is not controlled by a single error signal. More generally, our findings suggest that normal saccade adaptation might involve general learning mechanisms rather than only specialized mechanisms for motor calibration.
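A toy simulation of reinforcement-driven gain change can make the idea concrete. This is our sketch, not the paper's method; the learning rate, motor noise, and amplitude criterion are hypothetical. Saccades shorter than a criterion are "reinforced", and the gain drifts toward the reinforced amplitudes with no retinal error signal at all:

```python
import numpy as np

def adapt_gain_by_reinforcement(target=15.0, criterion=14.5, gain=1.0,
                                lr=0.01, noise_sd=1.0, n_trials=300, seed=1):
    """Gain-decrease adaptation: reinforce saccades shorter than `criterion`.

    Each saccade amplitude is gain * target plus motor noise. Only reinforced
    trials update the gain, nudging it toward the rewarded amplitude (expressed
    as a gain, amplitude / target). No postsaccadic visual error is used.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        amplitude = gain * target + rng.normal(0.0, noise_sd)
        if amplitude < criterion:                        # reinforced trial
            gain += lr * (amplitude / target - gain)     # move toward rewarded gain
    return gain

final_gain = adapt_gain_by_reinforcement()
```

Because only short saccades are rewarded, the gain drifts below 1 over trials; reversing the criterion (rewarding amplitudes nearer the original target) would drive it back, mirroring the recovery phase of the experiment.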
Project description:For humans, visual tracking of moving stimuli often triggers catch-up saccades during smooth pursuit. The switch between these continuous and discrete eye movements is a trade-off between tolerating sustained position error (PE) when no saccade is triggered or a transient loss of vision during the saccade due to saccadic suppression. de Brouwer et al. (2002b) demonstrated that catch-up saccades were less likely to occur when the target re-crosses the fovea within 40-180 ms. To date, there is no mechanistic explanation for how the trigger decision is made by the brain. Recently, we proposed a stochastic decision model for saccade triggering during visual tracking (Coutinho et al., 2018) that relies on a probabilistic estimate of predicted PE (PEpred). Informed by model predictions, we hypothesized that saccade trigger time length and variability will increase when pre-saccadic predicted errors are small or visual uncertainty is high (e.g., for blurred targets). Data collected from human participants performing a double step-ramp task showed that large pre-saccadic PEpred (>10°) produced short saccade trigger times regardless of the level of uncertainty while saccade trigger times preceded by small PEpred (<10°) significantly increased in length and variability, and more so for blurred targets. Our model also predicted increased signal-dependent noise (SDN) as retinal slip (RS) increases; in our data, this resulted in longer saccade trigger times and more smooth trials without saccades. In summary, our data supports our hypothesized predicted error-based decision process for coordinating saccades during smooth pursuit.
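The core trigger logic can be caricatured in a few lines. This is a deterministic simplification of the cited stochastic model, not an implementation of it, and the horizon and foveal window values are hypothetical: extrapolate position error with retinal slip, and withhold the saccade when the target is predicted to re-cross or land near the fovea.

```python
def predicted_position_error(pe, rs, dt):
    """Extrapolate position error PE (deg) using retinal slip RS (deg/s)."""
    return pe + rs * dt

def should_trigger_saccade(pe, rs, horizon=0.15, fovea=0.5):
    """Trigger a catch-up saccade unless PE is predicted to resolve itself.

    No saccade is needed when the target is predicted to re-cross the fovea
    (sign change of PE) or to end up within a small foveal window.
    """
    pe_pred = predicted_position_error(pe, rs, horizon)
    crossing = pe * pe_pred < 0        # target re-crosses the fovea
    small = abs(pe_pred) < fovea       # predicted error already negligible
    return not (crossing or small)
```

For example, a 2 deg error with slip carrying the target back across the fovea suppresses the saccade, while the same error with slip moving the target further away triggers one; the full model additionally makes this decision probabilistic, which is what produces the longer, more variable trigger times for small predicted errors and blurred targets.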