The faster you decide, the more accurate localization is possible: Position representation of "curveball illusion" in perception and eye movements.
ABSTRACT: When the internal texture of a moving object moves, the perceived position of the object is often distorted toward the direction of the texture's motion (motion-induced position shift), and this perceptual distortion accumulates while the object is watched, causing what is known as the curveball illusion. In a recent study, however, the accumulation of the position error was not observed in saccadic eye movements. Here, we examined whether the position of the illusory object is represented independently in the perceptual and saccadic systems. In the experiments, a curveball-illusion stimulus was used to examine the temporal change in the position representation for saccadic eye movements and for perception, by varying the elapsed time from the input of visual information to saccade onset and to perceptual judgment, respectively. The results showed that the temporal accumulation of the motion-induced position shift occurs not only in perception but also in saccadic eye movements. In the saccade tasks, the landing positions of saccades gradually shifted toward the illusory perceived position as the elapsed time from target offset to the saccade "go" signal increased. Furthermore, in the perception task, shortening the time between target offset and the perceptual judgment reduced the size of the illusion. These results therefore argue against the dissociation between saccadic and perceptual localization of a moving object suggested in the previous study, in which saccades were measured under time pressure while perceptual responses were measured without time constraints. Instead, the similar temporal trends of these effects imply a common or similar target representation for perception and eye movements, one that changes dynamically over the course of evidence accumulation.
Project description: Despite growing evidence for perceptual interactions between motion and position, no unifying framework exists to account for these two key features of our visual experience. We show that percepts of both object position and motion derive from a common object-tracking system, a system that optimally integrates sensory signals with a realistic model of motion dynamics, effectively inferring their generative causes. The object-tracking model provides an excellent fit to both position and motion judgments for simple stimuli. With no changes in model parameters, the same model also accounts for subjects' novel illusory percepts of more complex moving stimuli. The resulting framework is characterized by a strong bidirectional coupling between position and motion estimates and provides a rational, unifying account of a number of motion and position phenomena currently thought to arise from independent mechanisms, including motion-induced shifts in perceived position, perceptual slow-speed biases, slowing of motion viewed in the visual periphery, and the well-known curveball illusion. These results reveal that motion perception cannot be isolated from position signals. Even in the simplest displays with no change in object position, our perception is driven by the output of an object-tracking system that rationally infers the generative causes of motion signals. Taken together, we show that object tracking plays a fundamental role in the perception of visual motion and position.
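The optimal-integration idea described above can be illustrated with a toy Kalman filter that jointly estimates position and velocity; this is a sketch under our own assumptions (the function name, parameters, and noise values are illustrative and are not taken from the authors' model). Because position and velocity are coupled in one state estimate, a biased velocity signal, such as internal texture motion, drags the position estimate along, and weaker position evidence (as in peripheral viewing) yields a larger shift:

```python
import numpy as np

def track_object(pos_obs, vel_obs, dt=0.01,
                 sigma_pos=0.5, sigma_vel=0.2, sigma_q=0.1):
    """Toy 1-D Kalman filter over state x = [position, velocity].

    Each step combines a noisy position observation and a noisy
    velocity (motion) observation with constant-velocity dynamics.
    All parameter values are illustrative assumptions.
    """
    A = np.array([[1.0, dt], [0.0, 1.0]])      # dynamics: pos += vel * dt
    Q = sigma_q**2 * np.eye(2)                 # process noise
    H = np.eye(2)                              # observe both pos and vel
    R = np.diag([sigma_pos**2, sigma_vel**2])  # measurement noise

    x = np.array([pos_obs[0], vel_obs[0]])     # initial state
    P = np.eye(2)                              # initial uncertainty
    estimates = []
    for z in zip(pos_obs, vel_obs):
        # predict step
        x = A @ x
        P = A @ P @ A.T + Q
        # update step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Object physically stationary (position observations = 0) while its
# texture signals rightward motion (velocity observations = 1): the
# estimated position drifts rightward, a toy analogue of the
# motion-induced position shift.
n = 200
est = track_object(np.zeros(n), np.ones(n))
```

With a position observation present, the shift settles at an offset rather than growing without bound; raising `sigma_pos` (poorer position evidence) increases that offset, in the spirit of the stronger illusion seen with weak peripheral position signals.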
Project description: Introspection makes it clear that we do not see the visual motion generated by our saccadic eye movements. We refer to this lack of awareness of the motion that a saccade sweeps across the retina as saccadic omission: the visual stimulus generated by the saccade is omitted from our subjective awareness. In the laboratory, saccadic omission is often studied by investigating saccadic suppression, the reduction in visual sensitivity before and during a saccade (see Ross et al. and Wurtz for reviews). We investigated whether perceptual stability requires that a mechanism like saccadic suppression remove perisaccadic stimuli from visual processing to prevent their presumed harmful effect on perceptual stability [4, 5]. Our results show that a stimulus that undergoes saccadic omission can nevertheless generate a shape contrast illusion. This illusion can be generated when the inducer and test stimulus are separated in space and is therefore thought to arise at a later stage of visual processing. This shows that perceptual stability is attained without removing stimuli from processing, and it suggests a conceptually new view of perceptual stability in which perisaccadic stimuli are processed by the early visual system but are prevented from reaching awareness at a later stage of processing.
Project description: The information used by conscious perception may differ from that which drives certain actions. A dramatic illusion caused by an object's internal texture motion has been put forward as one example. The motion causes an illusory position shift that accumulates over seconds into a large effect, yet saccadic targeting of the object (by a rapid eye movement) is not affected by the illusion. While this has been described as a dissociation between perception and action, an alternative explanation is that, rather than saccade targeting having privileged access to the correct position, the shift of attention that precedes a saccade resets the accumulated illusory position shift to zero. In support of this possibility, we found that the accumulation of the illusory position shift can be reset by transients near the moving object, creating an impression of the object returning to near its actual position. Repetitive luminance changes of the object also reset the accumulation, but less so when attention to the object was reduced by a concurrent digit-identification task. Finally, judgments of the object's position around the time of saccade onset reflected the veridical rather than the illusory position. These results suggest that attentional shifts, including those preceding saccades, can update the perceived position of moving objects and mediate the previously reported dissociation between conscious perception and saccades.
Project description: Saccades are made thousands of times a day and are the principal means of localizing objects in our environment. However, the saccade system faces the challenge of accurately localizing objects that are constantly moving relative to the eye and head. Any delay in processing could cause errors in saccadic localization. To compensate for these delays, the saccade system might use one or more sources of information to predict future target locations, including changes in the position of the object over time, or its motion. Another possibility is that motion influences the represented position of the object for saccadic targeting without requiring an actual change in target position. We tested whether the saccade system can use motion-induced position shifts to update the represented spatial location of a saccade target by using stationary drifting-Gabor patches with either a soft or a hard aperture as saccade targets. In both conditions, the aperture always remained at a fixed retinal location. The soft-aperture Gabor patch produced an illusory position shift, whereas the hard-aperture stimulus carried the same motion signals but produced a smaller illusory position shift. Thus, motion energy and target location were equated, but a position shift was generated in only one condition. We measured saccadic localization of these targets and found that saccades were indeed shifted, but only with the soft-aperture Gabor patch. Our results suggest that motion shifts the programmed locations of saccade targets, and this remapped location guides saccadic localization.
Project description: Across saccades, small displacements of a visual target are harder to detect, and their directions more difficult to discriminate, than during steady fixation. Prominent theories of this effect, known as saccadic suppression of displacement, propose that it is due to a bias to assume object stability across saccades. Recent studies comparing the saccadic effect to masking effects suggest that suppression of displacement is not saccade-specific. Further evidence for this account is presented from two experiments in which participants judged the size of displacements on a continuous scale in saccade and mask conditions, with and without blanking. Saccades and masks both reduced the proportion of correctly perceived displacements and increased the proportion of missed displacements. Blanking improved performance in both conditions by reducing the proportion of missed displacements. Thus, if suppression of displacement reflects a bias for stability, it is not a saccade-specific bias, but a more general stability assumption revealed under conditions of impoverished vision. Specifically, I discuss the potentially decisive role of motion and other transient signals in displacement perception. Without transients or motion, the quality of relative position signals is poor, and saccadic and mask-induced suppression of displacement reflect performance when the decision must be made on these signals alone. Blanking may improve these position signals by providing a transient onset or a longer time to encode the pre-saccadic target position.
Project description: Feature integration theory proposes that visual features, such as shape and color, can only be combined into a unified object when spatial attention is directed to their location in retinotopic maps. Eye movements cause dramatic changes on our retinae and are associated with obligatory shifts of spatial attention. In two experiments, we measured the prevalence of conjunction errors (that is, reporting an object as having an attribute that belonged to another object) for brief stimulus presentations before, during, and after a saccade. Planning and executing a saccade did not itself disrupt feature integration. Motion did disrupt feature integration, leading to an increase in conjunction errors. However, retinal motion of equal extent caused by a saccadic eye movement was spared this disruption, showing conjunction-error rates similar to those for static stimuli presented to a static eye. The results suggest that extra-retinal signals can compensate for the motion caused by saccadic eye movements, thereby preserving the integrity of objects across saccades and preventing their features from mixing or mis-binding.
Project description: Eye movements affect object localization and object recognition. Around saccade onset, briefly flashed stimuli appear compressed towards the saccade target, receptive fields dynamically change position, and the recognition of objects near the saccade target is improved. These effects have been attributed to different mechanisms. We provide a unifying account of peri-saccadic perception that explains all three phenomena with a quantitative computational approach simulating cortical cell responses at the population level. Contrary to the common view of spatial attention as a spotlight, our model suggests that oculomotor feedback alters the receptive field structure in multiple visual areas at an intermediate level of the cortical hierarchy to dynamically recruit cells for processing a relevant part of the visual field. The compression of visual space occurs at the expense of this locally enhanced processing capacity.
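The population-level intuition behind peri-saccadic compression can be sketched with a toy population code; this is our own illustrative construction, not the authors' model, and every name and parameter below is an assumption. A flash is decoded as the activity-weighted mean of the preferred positions of Gaussian-tuned cells; boosting the gain of cells near the saccade target (standing in for oculomotor feedback) pulls the decoded location toward the target:

```python
import numpy as np

def decode_position(stim_pos, target_pos=10.0, gain_width=3.0, boost=4.0):
    """Toy population read-out of a flashed stimulus position (degrees).

    Gaussian tuning curves tile space; a gain field centered on the
    saccade target models oculomotor feedback. All parameters are
    illustrative assumptions, not values from the paper.
    """
    prefs = np.linspace(-20.0, 20.0, 401)   # preferred positions (deg)
    # population response to a flash at stim_pos (tuning width 2 deg)
    resp = np.exp(-0.5 * ((prefs - stim_pos) / 2.0) ** 2)
    # multiplicative gain enhancement around the saccade target
    gain = 1.0 + boost * np.exp(-0.5 * ((prefs - target_pos) / gain_width) ** 2)
    resp = resp * gain
    # read-out: activity-weighted mean of preferred positions
    return np.sum(prefs * resp) / np.sum(resp)

# A flash at 5 deg, with a saccade target at 10 deg, is decoded at a
# location shifted toward the target, i.e. compressed.
```

With `boost=0.0` (no oculomotor feedback) the read-out recovers the veridical flash position, so the compression in this sketch arises entirely from the locally enhanced gain, in line with the account summarized above.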
Project description: Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism that transforms the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of certain neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We tested in humans whether a remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) arises after viewing of a moving stimulus as apparent movement in the opposite direction. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from the pre-saccadic to the post-saccadic position could be observed when subjects prepared saccades. In the remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). In contrast, after a saccade or without saccade planning, the MAE was weak or absent when the adaptation and test stimuli were located at different retinal locations, i.e., the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5), corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization.
The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant for a stable percept despite saccades.
Project description: As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates and is then replaced with a second texture (a motion transient). With increasing rotation durations, observers consistently perceive the transient as a large rotational jump in the direction opposite to the rotation (a backward jump). We first show that the accumulated motion information is updated spatiotopically across saccades. We then show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception, and they support a forward model of transsaccadic perception that anticipates the consequences of eye movements and operates within the narrow perisaccadic time window.
Project description: Perception of a stable visual world despite eye motion requires the integration of visual information across saccadic eye movements. To investigate how the visual system localizes moving visual stimuli across saccades, we observed spatiotemporal changes of the receptive fields (RFs) of motion-sensitive neurons across saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with the shifts of eye position caused by saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccade. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.