Two distinct visual motion mechanisms for smooth pursuit: evidence from individual differences.
ABSTRACT: Smooth-pursuit eye velocity to a moving target is more accurate after an initial catch-up saccade than before, an enhancement that is poorly understood. We present an individual-differences-based method for identifying mechanisms underlying a physiological response and use it to test whether visual motion signals driving pursuit differ pre- and postsaccade. Correlating moment-to-moment measurements of pursuit over time with two psychophysical measures of speed estimation during fixation, we find two independent associations across individuals. Presaccadic pursuit acceleration is predicted by the precision of low-level (motion-energy-based) speed estimation, and postsaccadic pursuit precision is predicted by the precision of high-level (position-tracking) speed estimation. These results provide evidence that a low-level motion signal influences presaccadic acceleration and an independent high-level motion signal influences postsaccadic precision, thus presenting a plausible mechanism for postsaccadic enhancement of pursuit.
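The individual-differences method described above boils down to correlating, across observers, a moment-to-moment pursuit measure with a psychophysical precision measure obtained during fixation. A minimal sketch in Python, using synthetic data and hypothetical variable names (not values from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation of two per-observer measure lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-observer measures (one value per observer):
presaccadic_accel   = [4.2, 5.1, 3.8, 6.0, 4.9]  # pursuit acceleration, deg/s^2
low_level_precision = [0.9, 1.2, 0.8, 1.5, 1.1]  # precision of motion-energy speed estimates

# The claim in the abstract corresponds to a reliable r across individuals
r = pearson_r(presaccadic_accel, low_level_precision)
```

In the study's logic, one such correlation is computed for each pursuit epoch (pre- vs. postsaccadic) against each psychophysical measure (low-level vs. high-level), and the dissociation lies in which pairings are significant.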
Project description: As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as large rotational jumps in the direction opposite to the rotation (backward jumps). We first show that accumulated motion information is updated spatiotopically across saccades. Then, we show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception and support a forward model of transsaccadic perception that anticipates the consequences of eye movements and operates within the narrow perisaccadic time window.
Project description: Neurons in the visual cortex quickly adapt to constant input, which should lead to perceptual fading within a few tens of milliseconds. However, perceptual fading is rarely observed in everyday perception, possibly because eye movements refresh retinal input. Recently, it has been suggested that amplitudes of large saccadic eye movements are scaled to maximally decorrelate presaccadic and postsaccadic inputs and thus to annul perceptual fading. However, this argument builds on the assumption that adaptation within naturally brief fixation durations is strong enough to survive any visually disruptive saccade and affect perception. We tested this assumption by measuring the effect of luminance adaptation on postsaccadic contrast perception. We found that postsaccadic contrast perception was affected by presaccadic luminance adaptation during brief periods of fixation. This adaptation effect emerges within 100 milliseconds and persists over seconds. These results indicate that adaptation during natural fixation periods can affect perception even after visually disruptive saccades.
Project description: Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ~300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ~300 ms after pursuit onset.
Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature.
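The perceptual and oculometric thresholds in the study above are conventionally obtained by fitting a psychometric function to the forced-choice responses and reading off a criterion point. A minimal sketch, assuming a zero-mean cumulative Gaussian and a 75%-correct criterion; the synthetic data and function names are illustrative, not values from the study:

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_threshold(deltas, p_resp, sigmas):
    """Grid-search the slope (sigma) of a zero-mean cumulative Gaussian by
    least squares; the 75%-correct threshold of that function is
    sigma * Phi^-1(0.75) ~= 0.6745 * sigma."""
    best_sigma, best_err = None, float("inf")
    for s in sigmas:
        err = sum((cum_gauss(d, 0.0, s) - p) ** 2
                  for d, p in zip(deltas, p_resp))
        if err < best_err:
            best_sigma, best_err = s, err
    return 0.6745 * best_sigma

# Synthetic curvature differences (deg) and proportion "test flatter" responses,
# generated from a cumulative Gaussian with sigma = 2.0:
deltas = [-3.0, -1.5, 0.0, 1.5, 3.0]
p_resp = [0.07, 0.23, 0.50, 0.77, 0.93]
sigmas = [0.5 + 0.01 * i for i in range(400)]  # candidate slopes 0.5 .. 4.49
threshold = fit_threshold(deltas, p_resp, sigmas)  # deg of curvature difference
```

The same fit applied to an oculometric decision variable (e.g., a curvature estimate read out from the eye trace in each interval) yields the oculometric thresholds the abstract compares against perception.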
Project description: Whenever we move our eyes, some visual information obtained before a saccade is combined with the visual information obtained after a saccade. Interestingly, saccades rarely land exactly on the saccade target, which may pose a problem for transsaccadic perception as it could affect the quality of postsaccadic input. Recently, however, we showed that transsaccadic feature integration is actually unaffected by deviations of saccade landing points. Possibly, transsaccadic integration remains unaffected because the presaccadic shift of attention follows the intended saccade target and not the actual saccade landing point during regular saccades. Here, we investigated whether saccade landing point errors can in fact alter transsaccadic perception when the presaccadic shift of attention follows the saccade landing point deviation. Given that saccadic adaptation not only changes the saccade vector, but also the presaccadic shift of attention, we combined a feature report paradigm with saccadic adaptation. Observers reported the color of the saccade target, which occasionally changed slightly during a saccade to the target. This task was performed before and after saccadic adaptation. The results showed that, after adaptation, presaccadic color information became less precise and transsaccadic perception had a stronger reliance on the postsaccadic color estimate. Therefore, although previous studies have shown that transsaccadic perception is generally unaffected by saccade landing point deviations, our results reveal that this cannot be considered a general property of the visual system. When presaccadic shifts of attention follow altered saccade landing points, transsaccadic perception is affected, suggesting that transsaccadic feature perception might be dependent on visual spatial attention.
Project description: Our knowledge about objects in our environment reflects an integration of current visual input with information from preceding gaze fixations. Such a mechanism may reduce uncertainty but requires the visual system to determine which information obtained in different fixations should be combined or kept separate. To investigate the basis of this decision, we conducted three experiments. Participants viewed a stimulus in their peripheral vision and then made a saccade that shifted the object into the opposite hemifield. During the saccade, the object underwent changes of varying magnitude in two feature dimensions (Experiment 1, color and location; Experiments 2 and 3, color and orientation). Participants reported whether they detected any change and estimated one of the postsaccadic features. Integration of presaccadic with postsaccadic input was observed as a bias in estimates toward the presaccadic feature value. In all experiments, presaccadic bias weakened as the magnitude of the transsaccadic change in the estimated feature increased. Changes in the other feature, despite having a similar probability of detection, had no effect on integration. Results were quantitatively captured by an observer model where the decision whether to integrate information from sequential fixations is made independently for each feature and coupled to awareness of a feature change.
Project description: Retinal image motion is produced with each eye movement, yet we usually do not perceive this self-produced "reafferent" motion, nor are motion judgments much impaired when the eyes move. To understand the neural mechanisms involved in processing reafferent motion and distinguishing it from the motion of objects in the world, we studied the visual responses of single cells in middle temporal (MT) and medial superior temporal (MST) areas during steady fixation and smooth-pursuit eye movements in awake, behaving macaques. We measured neuronal responses to random-dot patterns moving at different speeds in a stimulus window that moved with the pursuit target and the eyes. This allowed us to control retinal image motion at all eye velocities. We found the expected high proportion of cells selective for the direction of visual motion. Pursuit tracking changed both response amplitude and preferred retinal speed for some cells. The changes in preferred speed were on average weakly but systematically related to the speed of pursuit for area MST cells, as would be expected if the shifts in speed selectivity were compensating for reafferent input. In area MT, speed tuning did not change systematically during pursuit. Many cells in both areas also changed response amplitude during pursuit; the most common form of modulation was response suppression when pursuit was opposite in direction to the cell's preferred direction. These results suggest that some cells in area MST encode retinal image motion veridically during eye movements, whereas others in both MT and MST contribute to the suppression of visual responses to reafferent motion.
Project description: We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions.
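The "ultimate bound in the precision of the measurement of time" set by the quantum Fisher information is the quantum Cramér-Rao bound; for M independent repetitions of the measurement on the clock state ρ_t it reads

```latex
\Delta t \;\ge\; \frac{1}{\sqrt{M \, F_Q(\rho_t)}}
```

so any motion-induced change in the state enters the precision only through its effect on the quantum Fisher information F_Q.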
Project description: Most people easily learn to recognize new faces and places, and with more extensive practice they can become experts at visual tasks as complex as radiological diagnosis and action video games. Such perceptual plasticity has been thoroughly studied in the context of training paradigms that require constant fixation. In contrast, when observers learn under more natural conditions, they make frequent saccadic eye movements. Here we show that such eye movements can play an important role in visual learning. Observers performed a task in which they executed a saccade while discriminating the motion of a cued visual stimulus. Additional stimuli, presented simultaneously with the cued one, permitted an assessment of the perceptual integration of information across visual space. Consistent with previous results on perisaccadic remapping [M. Szinte, D. Jonikaitis, M. Rolfs, P. Cavanagh, H. Deubel, J. Neurophysiol. 116, 1592-1602 (2016)], most observers preferentially integrated information from locations representing the presaccadic and postsaccadic retinal positions of the cue. With extensive training on the saccade task, these observers gradually acquired the ability to perform similar motion integration without making eye movements. Importantly, the newly acquired pattern of spatial integration was determined by the metrics of the saccades made during training. These results suggest that oculomotor influences on visual processing, long thought to subserve the function of perceptual stability, also play a role in visual plasticity.
Project description: Across saccades, humans can integrate the low-resolution presaccadic information of an upcoming saccade target with the high-resolution postsaccadic information. There is converging evidence to suggest that transsaccadic integration occurs at the saccade target. However, given divergent evidence on the spatial specificity of related mechanisms such as attention, visual working memory, and remapping, it is unclear whether integration is also possible at locations other than the saccade target. We tested the spatial profile of transsaccadic integration, by testing perceptual performance at six locations around the saccade target and between the saccade target and initial fixation. Results show that integration benefits do not differ between the saccade target and surrounding locations. Transsaccadic integration benefits are not specific to the saccade target and can occur at other locations when they are behaviorally relevant, although there is a trend for worse performance for the location above initial fixation compared with those in the direction of the saccade. This suggests that transsaccadic integration may be a more general mechanism used to reconcile task-relevant pre- and postsaccadic information at attended locations other than the saccade target. NEW & NOTEWORTHY This study shows that integration of pre- and postsaccadic information across saccades is not restricted to the saccade target. We found performance benefits of transsaccadic integration at attended locations other than the saccade target, and these benefits did not differ from those found at the saccade target. This suggests that transsaccadic integration may be a more general mechanism used to reconcile pre- and postsaccadic information at task-relevant locations.
Project description: Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders.