The perceived position of moving objects: transcranial magnetic stimulation of area MT+ reduces the flash-lag effect.
ABSTRACT: How does the visual system assign the perceived position of a moving object? This question is surprisingly complex, since sluggish responses of photoreceptors and transmission delays along the visual pathway mean that visual cortex does not have immediate information about a moving object's position. In the flash-lag effect (FLE), a moving object is perceived ahead of an aligned flash. Psychophysical work on this illusion has inspired models for the visual localization of moving objects. However, little is known about the underlying neural mechanisms. Here, we investigated the role of neural activity in areas MT+ and V1/V2 in localizing moving objects. Using short trains of repetitive transcranial magnetic stimulation (TMS) or single pulses at different time points, we measured the influence of TMS on the perceived location of a moving object. We found that TMS delivered to MT+ significantly reduced the FLE; single-pulse timings revealed broad temporal tuning, with the maximum effect for TMS pulses delivered 200 ms after the flash. Stimulation of V1/V2 did not significantly influence perceived position. Our results demonstrate that area MT+ contributes to the perceptual localization of moving objects and is involved in the integration of position information over a long time window.
Project description:When a flash of light is presented in physical alignment with a moving object, the flash is perceived to lag behind the position of the object. This phenomenon, known as the flash-lag effect, has been of particular interest to vision scientists because of the challenge it presents to understanding how the visual system generates perceptions of objects in motion. Although various explanations have been offered, the significance of this effect remains a matter of debate. Here, we show that: (i) contrary to previous reports based on limited data, the flash-lag effect is an increasing nonlinear function of image speed; and (ii) this function is accurately predicted by the frequency of occurrence of image speeds generated by the perspective transformation of moving objects. These results support the conclusion that perceptions of the relative position of a moving object are determined by accumulated experience with image speeds, in this way allowing for visual behavior in response to real-world sources whose speeds and positions cannot be perceived directly.
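The empirical-ranking idea above can be made concrete: if the perceived offset of a moving object tracks the cumulative frequency (percentile rank) of image speeds experienced by observers, the predicted flash-lag magnitude is an increasing but nonlinear, saturating function of speed. The sketch below is illustrative only; the log-normal speed distribution and its parameters are assumptions for demonstration, not the authors' measured distribution.

```python
import random

random.seed(0)

# Hypothetical sample of image speeds (deg/s) generated by projecting
# moving objects onto the image plane; log-normal is an assumption
# made here for illustration, not the empirically measured distribution.
image_speeds = [random.lognormvariate(1.0, 0.8) for _ in range(100_000)]

def percentile_rank(speed, samples):
    """Fraction of sampled image speeds at or below `speed`."""
    return sum(s <= speed for s in samples) / len(samples)

# Predicted flash-lag magnitude (up to a scaling constant) follows the
# cumulative frequency of image speeds: increasing, but with diminishing
# increments at high speeds, i.e. a nonlinear saturating function.
test_speeds = [1.0, 4.0, 16.0]
fle_pred = [percentile_rank(v, image_speeds) for v in test_speeds]
print(fle_pred)
```

The key qualitative prediction is visible in the output: the rank rises steeply over common (low) speeds and flattens over rare (high) speeds.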
Project description:How is visual space represented in cortical area MT+? At a relatively coarse scale, the organization of MT+ is debated; retinotopic, spatiotopic, and mixed representations have all been proposed. However, none of these representations entirely explains the perceptual localization of objects at a fine spatial scale, the scale relevant for tasks like navigating or manipulating objects. For example, perceived positions of objects are strongly modulated by visual motion; stationary flashes appear shifted in the direction of nearby motion. Does spatial coding in MT+ reflect these shifts in perceived position? We performed an fMRI experiment employing this "flash-drag" effect and found that flashes presented near motion produced patterns of activity similar to those of physically shifted flashes in the absence of motion. This reveals a motion-dependent change in the neural representation of object position in human MT+, a process that could help compensate for perceptual and motor delays in localizing objects in dynamic scenes.
Project description:The mechanism of positional localization has recently been debated due to interest in the flash-lag effect, which occurs when a briefly flashed stationary stimulus is perceived to lag behind a spatially aligned moving stimulus. Here we report positional mislocalization observed at motion offsets as well as at onsets. In the 'flash-lead' effect, a moving object is perceived to be behind a spatially concurrent stationary flash before the two disappear. With 'reverse-repmo', subjects mislocalize the final position of a moving bar in the direction opposite to the trajectory of motion. Finally, we demonstrate that simultaneous onset and offset effects lead to a perceived compression of visual space. By characterizing illusory effects observed at motion offsets as well as at onsets, we provide evidence that the perceived position of a moving object is the result of an averaging process over a short time period, weighted towards the most recent positions. Our account explains a variety of motion illusions, including the compression of moving shapes when viewed through apertures.
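The averaging account above can be sketched directly: the reported position is a weighted mean of recently sampled physical positions, with weights biased toward the most recent samples. In the minimal sketch below, the exponential weighting, the 80 ms window, and the 30 ms time constant are illustrative assumptions, not the authors' fitted parameters.

```python
import math

def perceived_position(trajectory, dt=0.01, window=0.08, tau=0.03):
    """Weighted average of positions within the last `window` seconds,
    weighted toward the most recent samples via exponential decay with
    time constant `tau`. All parameter values are illustrative."""
    n = max(1, int(window / dt))
    recent = trajectory[-n:]
    # Newer samples (later in the list) get larger weights.
    weights = [math.exp(-(len(recent) - 1 - i) * dt / tau)
               for i in range(len(recent))]
    return sum(w * x for w, x in zip(weights, recent)) / sum(weights)

# A bar moving at 10 deg/s, sampled every 10 ms, that abruptly stops:
moving = [0.1 * i for i in range(50)]   # positions in deg, final = 4.9
# The averaged estimate lags behind the true final position (4.9 deg),
# reproducing the reverse-repmo style mislocalization at motion offset.
print(perceived_position(moving))
```

Because the average always includes slightly older positions, the model predicts the perceived endpoint falls short of the physical endpoint, in the direction opposite to motion.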
Project description:By the time the brain has determined the position of a moving object, anatomical and processing delays mean that the object will have already moved to a new location. Given the statistical regularities present in natural motion, the brain may have acquired compensatory mechanisms to minimize the mismatch between the perceived and real positions of moving objects. A well-known visual illusion, the flash-lag effect, points toward such a possibility. Although many psychophysical models have been suggested to explain this illusion, their predictions have not been tested at the neural level, particularly in a species known to perceive the illusion. To this end, we recorded neural responses to flashed and moving bars from primary visual cortex (V1) of awake, fixating macaque monkeys. We found that the response latency to moving bars of varying speed, motion direction, and luminance was shorter than that to flashes, in a manner consistent with psychophysical results. At the level of V1, our results support the differential latency model, which posits that flashed and moving bars have different latencies. As we found a neural correlate of the illusion in passively fixating monkeys, our results also suggest that judging the instantaneous position of the moving bar at the time of the flash (as required by the postdiction/motion-biasing model) may not be necessary for observing a neural correlate of the illusion. Our results further suggest that the brain may have evolved mechanisms to process moving stimuli faster and closer to real time than briefly appearing stationary stimuli. NEW & NOTEWORTHY We report several observations in awake macaque V1 that support the differential latency model of the flash-lag illusion. We find that the equal latency of flashed and moving stimuli assumed by motion-integration/postdiction models does not hold in V1. We show that in macaque V1, motion processing latency depends on stimulus luminance, speed, and motion direction in a manner consistent with several psychophysical properties of the flash-lag illusion.
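Under the differential latency account described above, the flash-lag magnitude follows directly from the latency difference: while the flash's percept is still in transit, the faster-processed moving bar keeps advancing, so the perceived spatial offset is the speed multiplied by the latency difference. A minimal sketch (the latency values below are illustrative, not the recorded V1 numbers):

```python
def flash_lag_offset(speed_deg_per_s, latency_flash_ms, latency_moving_ms):
    """Spatial offset (deg) predicted by the differential latency model:
    the moving bar, perceived with a shorter latency, is seen at the
    position it reaches (latency_flash - latency_moving) ms after the
    flash's physical moment of alignment."""
    dlat_s = (latency_flash_ms - latency_moving_ms) / 1000.0
    return speed_deg_per_s * dlat_s

# Illustrative values: a 20 ms latency advantage for motion at 10 deg/s
# predicts a 0.2 deg lag of the flash behind the moving bar.
print(flash_lag_offset(10.0, 70.0, 50.0))
```

The model thus predicts a lag that grows linearly with speed for a fixed latency difference, and that shrinks as stimulus manipulations (e.g. luminance) narrow the latency gap.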
Project description:As the visual world changes, its representation in our consciousness must be constantly updated. Given that the external changes are continuous, it appears plausible that conscious updating is continuous as well. Alternatively, this updating could be periodic, if, for example, its implementation at the neural level relies on oscillatory activity. The flash-lag illusion, where a briefly presented flash in the vicinity of a moving object is misperceived to lag behind the moving object, is a useful tool for studying the dynamics of conscious updating. Here, we show that the trial-by-trial variability in updating, measured by the flash-lag effect (FLE), is highly correlated with the phase of spontaneous EEG oscillations in occipital (5-10 Hz) and frontocentral (12-20 Hz) cortices just around the reference event (flash onset). Further, the periodicity in each region independently influences the updating process, suggesting a two-stage periodic mechanism. We conclude that conscious updating is not continuous; rather, it follows a rhythmic pattern.
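The phase dependence reported above is typically quantified with a circular-linear correlation: project each trial's prestimulus oscillatory phase onto sine and cosine, correlate each projection with the trial's FLE, and combine the two (Mardia's formula). The self-contained sketch below runs on synthetic data; the simulated phase-modulated FLE values are assumptions for illustration, not the EEG dataset.

```python
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def circular_linear_corr(phases, values):
    """Circular-linear correlation (Mardia): combine the correlations of
    the linear variable with sin(phase) and cos(phase)."""
    cosp = [math.cos(p) for p in phases]
    sinp = [math.sin(p) for p in phases]
    rc, rs, rcs = pearson(cosp, values), pearson(sinp, values), pearson(cosp, sinp)
    num = rc ** 2 + rs ** 2 - 2 * rc * rs * rcs
    return math.sqrt(num / (1 - rcs ** 2))

random.seed(1)
phases = [random.uniform(-math.pi, math.pi) for _ in range(500)]
# Simulated trial-by-trial FLE, modulated by prestimulus phase plus noise:
fle = [1.0 + 0.3 * math.cos(p) + random.gauss(0, 0.1) for p in phases]
r = circular_linear_corr(phases, fle)
print(round(r, 2))  # large r: strong phase dependence in this simulation
```

On real data one would compute phases from bandpassed EEG (e.g. 5-10 Hz occipital, 12-20 Hz frontocentral) around flash onset and assess significance by permutation.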
Project description:When an object moves back and forth, its trajectory appears significantly shorter than it actually is. The object appears to stop and reverse well before its actual reversal point, as if there is some averaging of location within a window of about 100 ms (Sinico et al., 2009). Surprisingly, if a bar is flashed at the physical end point of the trajectory, right on top of the object, just as it reverses direction, the flash is also shifted (grabbed by the object) and is seen at the perceived endpoint of the trajectory rather than the physical endpoint. This can shift the perceived location of the flash by as much as 2 or 3 times its physical size and by up to several degrees of visual angle. We first show that the position shift of the flash is generated by the trajectory shortening, as the same shift is seen with or without the flash. The flash itself is only grabbed if it is presented within a small spatiotemporal attraction zone around the physical end point of the trajectory. Any flash falling in that zone is pulled toward the perceived endpoint. The effect scales linearly with speed, up to a maximum, and is independent of the contrast of the moving stimulus once it is above 5%. Finally, we demonstrate that this position shift requires attention. These results reveal a new "flash grab" effect in the family of motion-induced position shifts. Although it most resembles the flash-drag effect, it differs in the following ways: (1) it has a different temporal profile, (2) it requires attention, and (3) it is about 10 times larger.
Project description:A gradually fading moving object is perceived to disappear at positions beyond its luminance detection threshold, whereas abrupt offsets are usually localized accurately. What role does retinotopic activity in visual cortex play in this motion-induced mislocalization of the endpoint of fading objects? Using functional magnetic resonance imaging (fMRI), we localized regions of interest (ROIs) in retinotopic maps abutting the trajectory endpoint of a bar moving either toward or away from this position while gradually decreasing or increasing in luminance. Area V3A showed predictive activity, with stronger fMRI responses for motion toward versus away from the ROI. This effect was independent of the change in luminance. In Area V1 we found higher activity for high-contrast onsets and offsets near the ROI, but no significant differences between motion directions. We suggest that perceived final positions of moving objects are based on an interplay of predictive position representations in higher motion-sensitive retinotopic areas and offset transients in primary visual cortex.
Project description:Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.
Project description:Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased, and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
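The reliability-weighted integration at the heart of the model above has a standard closed form: with Gaussian likelihoods, two cue estimates of the same quantity are combined with weights proportional to their inverse variances. A minimal sketch of that rule (the specific position/orientation estimates and uncertainties below are illustrative, not fitted to the behavioral data):

```python
def integrate_cues(est_pos, var_pos, est_ori, var_ori):
    """Inverse-variance (reliability-weighted) combination of two cue
    estimates of the same quantity; the statistically optimal rule for
    independent Gaussian cues. Returns the combined estimate and its
    (reduced) variance."""
    w_pos = (1 / var_pos) / (1 / var_pos + 1 / var_ori)
    w_ori = 1 - w_pos
    combined = w_pos * est_pos + w_ori * est_ori
    combined_var = 1 / (1 / var_pos + 1 / var_ori)
    return combined, combined_var

# Illustrative numbers: an orientation cue four times less reliable than
# the position cue pulls the combined estimate only weakly toward itself.
est, var = integrate_cues(est_pos=10.0, var_pos=1.0, est_ori=14.0, var_ori=4.0)
print(est, var)
```

This reproduces the trade-off reported above: as `var_ori` grows, `w_pos` approaches 1 and position dominates perception, and vice versa; the combined variance is always below either single-cue variance.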
Project description:Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed that overcomes this spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only at extra-foveal retinal locations. We tested it using two foveal scotomas: the scotoma to dim light and the scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea at motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, extrapolation at motion-termination was found only with the blue moving dot. These results provide new evidence for the correction-for-extrapolation hypothesis in the region with the highest spatial acuity, the fovea.