ABSTRACT: We present a series of novel observations about interactions between flicker and motion that lead to three distinct perceptual effects. We use the term flicker to describe alternating changes in a stimulus's luminance or color (i.e., a circle that flickers from black to white and vice versa). When objects flicker, three distinct phenomena can be observed: (1) Flicker-Induced Motion (FLIM), in which a single, stationary object appears to move when it flickers at certain rates; (2) Flicker-Induced Motion Suppression (FLIMS), in which a moving object appears to be stationary when it flickers at certain rates; and (3) Flicker-Induced Induced-Motion (FLIIM), in which moving objects that are flickering induce another flickering, stationary object to appear to move. Across four psychophysical experiments, we characterize key stimulus parameters underlying these flicker-motion interactions. Interactions were strongest in the periphery and at flicker frequencies above 10 Hz. Induced motion occurred not just for luminance flicker but for isoluminant color changes as well. We also found that the greater the number of physically moving objects, the stronger the motion induction in stationary objects. We present demonstrations that the effects reported here cannot be fully accounted for by eye movements: multiple stationary objects induced to move via flicker can appear to move independently and in random directions, whereas eye movements would have caused all of the objects to appear to move coherently. These effects highlight the fundamental role of spatiotemporal dynamics in the representation of motion and the intimate relationship between flicker and motion.
Project description: An important role of visual systems is to detect nearby predators, prey, and potential mates, which may be distinguished in part by their motion. When an animal is at rest, an object moving in any direction may easily be detected by motion-sensitive visual circuits. During locomotion, however, this strategy is compromised because the observer must detect a moving object within the pattern of optic flow created by its own motion through the stationary background. Objects whose movement creates back-to-front (regressive) motion, by contrast, may be unambiguously distinguished from stationary objects, because forward locomotion creates only front-to-back (progressive) optic flow. Thus, moving animals should exhibit an enhanced sensitivity to regressively moving objects. We explicitly tested this hypothesis by constructing a simple fly-sized robot programmed to interact with a real fly. Our measurements indicate that whereas walking female flies freeze in response to a regressively moving object, they ignore a progressively moving one. Regressive-motion salience also explains behaviors exhibited by pairs of walking flies. Because the assumptions underlying the regressive-motion salience hypothesis are general, we suspect that the behavior we have observed in Drosophila may be widespread among eyed, motile organisms.
Project description: Recently, Flavell et al. (2019) demonstrated that an object's motion fluency (how smoothly and predictably it moves) influences liking of the object itself. Though the authors demonstrated learning of object-motion associations, participants only preferred fluently associated objects over disfluently associated objects when ratings followed a moving presentation, not a stationary one. In the present experiment, we tested the possibility that this apparent failure of associative learning / evaluative conditioning was due to stimulus choice. To do so, we replicated part of the original work but replaced the 'naturally stationary' household-object stimuli with winged insects, whose movement resembles the motions used originally. Though these more ecologically valid stimuli should have facilitated object-motion associations, we again found that preference effects were only apparent following moving presentations. These results confirm the potential of motion fluency for 'in the moment' preference change, and they demonstrate a critical boundary condition that should be considered when attempting to generalise fluency effects across contexts such as advertising or behavioural interventions.
Project description: The human visual system has specialised mechanisms for encoding mirror-symmetry and for detecting symmetric motion directions for objects that loom or recede from the observer. The contribution of motion to mirror-symmetry perception has never been investigated. Here we examine symmetry-detection thresholds for stationary (static and dynamic-flicker) and symmetrically moving patterns (inwards, outwards, random directions), with and without positional symmetry. We also measured motion-detection and direction-discrimination thresholds for horizontal (left, right) and symmetrically moving patterns with and without positional symmetry. We found that symmetry-detection thresholds were (a) significantly higher for static patterns, with no difference between the dynamic-flicker and symmetrical-motion conditions, and (b) higher than motion-detection and direction-discrimination thresholds for horizontal or symmetrical motion, with or without positional symmetry. In addition, symmetrical motion was as easy to detect or discriminate as horizontal motion. We conclude that whilst symmetrical motion per se does not contribute to symmetry perception, limiting the lifetime of pattern elements does improve performance by increasing the number of element locations as elements move from one location to the next. This may be explained by a temporal integration process in which weak, noisy symmetry signals are combined to produce a stronger signal.
Project description: Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation (detection, identification and capture) in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture, regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely 'break' camouflage.
Project description: We continually move our body and our eyes when exploring the world, causing our sensory surfaces, the skin and the retina, to move relative to external objects. In order to estimate object motion consistently, an ideal observer would transform estimates of motion acquired from the sensory surface into fixed, world-centered estimates, by taking the motion of the sensor into account. This ability is referred to as spatial constancy. Human vision does not follow this rule strictly and is therefore subject to perceptual illusions during eye movements, where immobile objects can appear to move. Here, we investigated whether one of these, the Filehne illusion, had a counterpart in touch. To this end, observers estimated the movement of a surface from tactile slip, with a moving or with a stationary finger. We found the perceived movement of the surface to be biased if the surface was sensed while moving. This effect exemplifies a failure of spatial constancy that is similar to the Filehne illusion in vision. We quantified this illusion by using a Bayesian model with a prior for stationarity, applied previously in vision. The analogy between vision and touch points to a modality-independent solution to the spatial constancy problem.
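The Bayesian account above can be illustrated with a minimal sketch (assuming Gaussian measurement noise and a zero-mean Gaussian prior for stationarity; the function name and all parameter values are illustrative, not those estimated in the study): combining the two yields a posterior-mean speed estimate that shrinks toward zero, and the shrinkage grows as the measurement becomes noisier.

```python
def posterior_speed(measured_speed, sigma_meas, sigma_prior):
    """MAP/posterior-mean estimate of surface speed under a zero-mean
    Gaussian 'stationarity' prior: the estimate shrinks toward 0."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * measured_speed

# A noisier measurement (larger sigma_meas, e.g. when the sensor itself
# is moving) is pulled more strongly toward stationarity:
still_finger = posterior_speed(10.0, sigma_meas=1.0, sigma_prior=5.0)
moving_finger = posterior_speed(10.0, sigma_meas=4.0, sigma_prior=5.0)
```

Under these assumptions, `moving_finger < still_finger < 10.0`, which mirrors the reported bias when the surface is sensed with a moving finger.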
Project description: Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component; that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (experiment 1) and direction (experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects.
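In its simplest reading, recovering the object-motion component amounts to subtracting, at the object's retinal location, the flow component predicted from self-motion. A minimal sketch with 2-D image velocities (the function name and all numeric values are hypothetical, for illustration only):

```python
def parse_object_motion(retinal_velocity, self_flow_velocity):
    """World-relative object velocity = retinal velocity minus the
    optic-flow component attributable to self-motion at that location."""
    rx, ry = retinal_velocity
    sx, sy = self_flow_velocity
    return (rx - sx, ry - sy)

# If self-motion predicts leftward flow of 2 deg/s at the object's
# location, a retinally stationary object must be moving rightward
# in the world:
world_v = parse_object_motion((0.0, 0.0), (-2.0, 0.0))
```

An object whose retinal motion exactly matches the predicted self-motion flow comes out with zero world-relative velocity, i.e., it is perceived as part of the stationary environment.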
Project description: The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
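The qualitative causal inference prediction, that stationarity reports decline as object speed grows, can be sketched as a comparison of two hypotheses: under "stationary in the world" the object's retinal motion should match the flow predicted by self-motion, while under "moving" a much broader range of retinal motions is plausible. This is a minimal toy model, not the study's fitted model; the function names, the Gaussian likelihoods, and all parameter values are illustrative assumptions.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_stationary(retinal_speed, predicted_flow_speed, sigma, prior_stationary=0.5):
    """Posterior probability that the object is stationary in the world.
    Stationary hypothesis: retinal speed ~ predicted self-motion flow
    (narrow likelihood). Moving hypothesis: broad likelihood (width
    chosen arbitrarily here as 10 * sigma)."""
    sigma_moving = 10.0 * sigma  # illustrative width for the broad alternative
    like_stat = gauss(retinal_speed, predicted_flow_speed, sigma)
    like_move = gauss(retinal_speed, predicted_flow_speed, sigma_moving)
    num = prior_stationary * like_stat
    return num / (num + (1.0 - prior_stationary) * like_move)
```

As the object's retinal speed departs from the flow predicted by self-motion, `p_stationary` falls, matching the reported dependence of stationarity judgments on object speed.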
Project description: Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world's global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements. NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object's motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object's global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.
Project description: Insects can detect the presence of discrete objects in their visual fields based on a range of differences in spatiotemporal characteristics between the images of object and background. This includes but is not limited to relative motion. Evidence suggests that edge detection is an integral part of this capability, and this study examines the ability of a bio-inspired processing model to detect the presence of boundaries between two regions of a one-dimensional visual field, based on general differences in image dynamics. The model consists of two parts. The first is an early vision module inspired by insect visual processing, which implements adaptive photoreception, ON and OFF channels with transient and sustained characteristics, and delayed and undelayed signal paths. This is replicated for a number of photoreceptors in a small linear array. It is followed by an artificial neural network trained to discriminate the presence vs. absence of an edge based on the array output signals. Input data are derived from natural imagery and feature both static and moving edges between regions with moving texture, flickering texture, and static patterns in all possible combinations. The model can discriminate the presence of edges, stationary or moving, at rates far higher than chance. The resources required (numbers of neurons and visual signals) are realistic relative to those available in the insect second optic ganglion, where the bulk of such processing would be likely to take place.
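The early-vision stages described above can be sketched in a few lines. This is a toy version under stated assumptions, not the study's exact model: adaptive photoreception is approximated by removing the temporal mean (a crude high-pass), ON/OFF channels by half-wave rectification, and the delayed signal paths by a fixed sample shift; the function name and array sizes are illustrative.

```python
import numpy as np

def early_vision(frames, dt_delay=1):
    """Toy insect-inspired early-vision front end for a 1-D
    photoreceptor array. frames: (time, receptors) luminance array.
    Returns (time, receptors, 4) features: ON, OFF, delayed ON,
    delayed OFF."""
    # 'Adaptive photoreception': subtract the slow mean over time
    # (a crude temporal high-pass).
    hp = frames - frames.mean(axis=0, keepdims=True)
    # ON channel responds to luminance increments, OFF to decrements.
    on = np.maximum(hp, 0.0)
    off = np.maximum(-hp, 0.0)
    # Delayed signal paths: shift each channel by dt_delay samples
    # (np.roll wraps around; adequate for a sketch).
    on_d = np.roll(on, dt_delay, axis=0)
    off_d = np.roll(off, dt_delay, axis=0)
    return np.stack([on, off, on_d, off_d], axis=-1)

# 100 time steps over a linear array of 8 photoreceptors:
frames = np.random.rand(100, 8)
features = early_vision(frames)  # shape (100, 8, 4)
```

A small classifier (the paper's second stage) would then be trained on these per-receptor feature vectors to report edge presence vs. absence.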
Project description: Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to 'compensate for' or 'parse out' the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement.