Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements.
ABSTRACT: With each eye movement, the image of the world received by the visual system changes dramatically. To maintain stable spatiotopic (world-centered) visual representations, the retinotopic (eye-centered) coordinates of visual stimuli are continually remapped, even before the eye movement is completed. Recent psychophysical work has suggested that updating of attended locations occurs as well, although on a slower timescale, such that sustained attention lingers in retinotopic coordinates for several hundred milliseconds after each saccade. To explore where and when this "retinotopic attentional trace" resides in the cortical visual processing hierarchy, we conducted complementary functional magnetic resonance imaging and event-related potential (ERP) experiments using a novel gaze-contingent task. Human subjects executed visually guided saccades while covertly monitoring a fixed spatiotopic target location. Although subjects responded only to stimuli appearing at the attended spatiotopic location, blood oxygen level-dependent responses to stimuli appearing after the eye movement at the previously, but no longer, attended retinotopic location were enhanced in visual cortical area V4 and throughout visual cortex. This retinotopic attentional trace was also detectable with higher temporal resolution in the anterior N1 component of the ERP data, a well-established signature of attentional modulation. Together, these results demonstrate that, when top-down spatiotopic signals act to redirect visuospatial attention to new retinotopic locations after eye movements, facilitation transiently persists in the cortical regions representing the previously relevant retinotopic location.
Project description: Visual processing can be facilitated by covert attention at behaviorally relevant locations. If the eyes move while a location in the visual field is facilitated, what happens to the internal representation of the attended location? With each eye movement, the retinotopic (eye-centered) coordinates of the attended location change while the spatiotopic (world-centered) coordinates remain stable. To investigate whether the neural substrates of spatial attention reside in retinotopically and/or spatiotopically organized maps, we used a novel gaze-contingent behavioral paradigm that probed spatial attention at various times after eye movements. When task demands required maintaining a spatiotopic representation after the eye movement, we found facilitation at the retinotopic location of the spatial cue for 100-200 ms after the saccade, although this location had no behavioral significance. This task-irrelevant retinotopic representation dominated immediately after the saccade, whereas at later delays, the task-relevant spatiotopic representation prevailed. However, when task demands required maintaining the cue in retinotopic coordinates, a strong retinotopic benefit persisted long after the saccade, and there was no evidence of spatiotopic facilitation. These data suggest that the cortical and subcortical substrates of spatial attention primarily reside in retinotopically organized maps that must be dynamically updated to compensate for eye movements when behavioral demands require a spatiotopic representation of attention. Our conclusion is that the visual system's native or low-level representation of endogenously maintained spatial attention is retinotopic, and remapping of attention to spatiotopic coordinates occurs slowly and only when behaviorally necessary.
Project description: During natural vision, eye movements can drastically alter the retinotopic (eye-centered) coordinates of locations and objects, yet the spatiotopic (world-centered) percept remains stable. Maintaining visuospatial attention in spatiotopic coordinates requires updating of attentional representations following each eye movement. However, this updating is not instantaneous; attentional facilitation temporarily lingers at the previous retinotopic location after a saccade, a phenomenon known as the retinotopic attentional trace. At various times after a saccade, we probed attention at an intermediate location between the retinotopic and spatiotopic locations to determine whether a single locus of attentional facilitation slides progressively from the previous retinotopic location to the appropriate spatiotopic location, or whether retinotopic facilitation decays while a new, independent spatiotopic locus concurrently becomes active. Facilitation at the intermediate location was not significant at any time, suggesting that top-down attention can result in enhancement of discrete retinotopic and spatiotopic locations without passing through intermediate locations.
Project description: Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating, a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the "spatial congruency bias," to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the spatial congruency bias was present for both Gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (Gabors, shapes, and faces all showed a retinotopic congruency bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates.
Project description: The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye-centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher-level human visual cortex, which is important for stable object recognition and action, contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a "searchlight" analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher-level visual cortex, object location continues to be represented in retinotopic coordinates.
Project description: Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from its native retinotopic coordinates into spatiotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
Project description: The activity of neurons in the primate posterior parietal cortex reflects the location of visual stimuli relative to the eye, body, and world, and is modulated by selective attention and task rules. It is not known, however, how these effects interact with each other. To address this question, we recorded neuronal activity from area 7a of monkeys trained to perform two variants of a delayed match-to-sample task. The monkeys attended to a spatial location defined in either spatiotopic (world-centered) or retinotopic (eye-centered) coordinates. We found neuronal responses to be remarkably plastic depending on the task. In contrast to previous studies using the simple version of the delayed match-to-sample task, we discovered that after training in a task where the locus of attention shifted during the trial, neural responses were typically enhanced for a match stimulus. Our results further revealed that responses were mostly enhanced for stimuli matching in spatiotopic coordinates, although the proportion of neurons modulated by either coordinate frame was influenced by the behavioral task executed.
Project description: We experience the visual world as phenomenally invariant to eye position, but almost all cortical maps of visual space in monkeys use a retinotopic reference frame; that is, the cortical representation of a point in the visual world is different across eye positions. It was recently reported that human cortical area MT (unlike monkey MT) represents stimuli in a reference frame linked to the position of stimuli in space, a "spatiotopic" reference frame. We used visuotopic mapping with blood oxygen level-dependent functional magnetic resonance imaging signals to define 12 human visual cortical areas, and then determined whether the reference frame in each area was spatiotopic or retinotopic. We found that all 12 areas, including MT, represented stimuli in a retinotopic reference frame. Although there were patches of cortex in and around these visual areas that were ostensibly spatiotopic, none of these patches exhibited reliable stimulus-evoked responses. We conclude that the early, visuotopically organized visual cortical areas in the human brain (like their counterparts in the monkey brain) represent stimuli in a retinotopic reference frame.
Project description: To interact successfully with objects, we must maintain stable representations of their locations in the world. However, their images on the retina may be displaced several times per second by large, rapid eye movements. A number of studies have demonstrated that visual processing is heavily influenced by gaze-centered (retinotopic) information, including a recent finding that memory for an object's location is more accurate and precise in gaze-centered (retinotopic) than world-centered (spatiotopic) coordinates (Golomb & Kanwisher, 2012b). This effect is somewhat surprising, given our intuition that behavior is successfully guided by spatiotopic representations. In the present experiment, we asked whether the visual system may rely on a more spatiotopic memory store depending on the mode of responding. Specifically, we tested whether reaching toward and tapping directly on an object's location could improve memory for its spatiotopic location. Participants performed a spatial working memory task under four conditions, crossing task (retinotopic vs. spatiotopic) with response mode (computer mouse click vs. touchscreen reaching). When participants responded by clicking with a mouse on the screen, we replicated Golomb & Kanwisher's original results, finding that memory was more accurate in retinotopic than spatiotopic coordinates and that the accuracy of spatiotopic memory deteriorated substantially more than retinotopic memory with additional eye movements during the memory delay. Critically, we found the same pattern of results when participants responded by using their finger to reach and tap the remembered location on the monitor. These results further support the hypothesis that spatial memory is natively retinotopic; we found no evidence that engaging the motor system improves spatiotopic memory across saccades.
Project description: A central question in vision is whether spatial attention is represented in an eye-centered (retinotopic) or world-centered (spatiotopic) reference frame. Most previous studies of this question focused on how coordinates are modulated across saccades. In the present study, we investigated the reference frame of attention across smooth pursuit eye movements using a goal-directed saccade task. In two experiments, participants were asked to pursue a moving target while attending to one or two grating stimuli. On each trial, one stimulus was constant in its retinal position and the other was constant in its spatial position. Upon detection of a slight change in stimulus orientation, participants were asked to stop pursuing and perform a fast saccade toward the modified stimulus. In the focused attention condition, they attended a single, predefined stimulus, and in the divided attention condition they attended both. In Experiment 1, the angle of the orientation change marking the target event was constant across participants and conditions. In Experiment 2, the angle was individually adapted to equate performance across participants and conditions. Findings of the two experiments were consistent and showed that the enhancement of mean visual sensitivity in the focused relative to the divided attention condition was similar in magnitude for both retinotopic and spatiotopic targets. This indicates that during smooth pursuit, endogenous attention was proportionally divided between targets in retinotopic and spatiotopic frames of reference.
Project description: Inhibition of return (IOR), typically explored in cueing paradigms, is a performance cost associated with previously attended locations and has been suggested as a crucial attentional mechanism that biases orienting towards novelty. In their seminal IOR paper, Posner and Cohen (1984) showed that IOR is coded in spatiotopic, or environment-centered, coordinates. Recent studies, however, have consistently reported IOR effects in both spatiotopic and retinotopic (eye-centered) coordinates. One overlooked methodological confound in all previous studies is that the spatial gradient of IOR is not considered when selecting the baseline for estimating IOR effects. This methodological issue makes it difficult to tell whether the IOR effects reported in previous studies were coded in retinotopic or spatiotopic coordinates, or in both. The present study addresses this issue by incorporating no-cue trials into a modified cueing paradigm in which the cue and target are always separated by a gaze shift. The results revealed that (a) IOR is indeed coded in both spatiotopic and retinotopic coordinates, and (b) the methodology of previous work may have underestimated spatiotopic and retinotopic IOR effects.