Project description:Most people easily learn to recognize new faces and places, and with more extensive practice they can become experts at visual tasks as complex as radiological diagnosis and action video games. Such perceptual plasticity has been thoroughly studied in the context of training paradigms that require constant fixation. In contrast, when observers learn under more natural conditions, they make frequent saccadic eye movements. Here we show that such eye movements can play an important role in visual learning. Observers performed a task in which they executed a saccade while discriminating the motion of a cued visual stimulus. Additional stimuli, presented simultaneously with the cued one, permitted an assessment of the perceptual integration of information across visual space. Consistent with previous results on perisaccadic remapping [M. Szinte, D. Jonikaitis, M. Rolfs, P. Cavanagh, H. Deubel, J. Neurophysiol. 116, 1592-1602 (2016)], most observers preferentially integrated information from locations representing the presaccadic and postsaccadic retinal positions of the cue. With extensive training on the saccade task, these observers gradually acquired the ability to perform similar motion integration without making eye movements. Importantly, the newly acquired pattern of spatial integration was determined by the metrics of the saccades made during training. These results suggest that oculomotor influences on visual processing, long thought to subserve the function of perceptual stability, also play a role in visual plasticity.
Project description:During steady fixation, observers make small fixational saccades at a rate of around 1-2 per second. Presentation of a visual stimulus triggers a biphasic modulation in fixational saccade rate: an initial inhibition followed by a period of elevated rate and a subsequent return to baseline. Here we show that, during passive viewing, this rate signature is highly sensitive to small changes in stimulus contrast. By training a linear support vector machine to classify trials in which a stimulus is either present or absent, we directly compared the contrast sensitivity of fixational eye movements with individuals' psychophysical judgements. Classification accuracy closely matched psychophysical performance, and predicted individuals' threshold estimates with less bias and overall error than those obtained using specific features of the signature. Performance of the classifier was robust to changes in the training set (novel subjects and/or contrasts) and good prediction accuracy was obtained with a practicable number of trials. Our results indicate a tight coupling between the sensitivity of visual perceptual judgements and fixational eye control mechanisms. This raises the possibility that fixational saccades could provide a novel and objective means of estimating visual contrast sensitivity without the need for observers to make any explicit judgement.
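The classification approach described above (a linear SVM separating stimulus-present from stimulus-absent trials on the basis of the biphasic rate signature) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis: the shape and parameters of the simulated rate signatures, the trial counts, and the Pegasos-style sub-gradient training loop are all assumptions standing in for the real eye-movement recordings and SVM implementation.

```python
import random

random.seed(1)

def rate_signature(contrast, n_bins=20):
    """Synthetic fixational-saccade rate over time bins: a baseline rate,
    a contrast-scaled inhibition (dip), then a contrast-scaled rebound."""
    sig = []
    for t in range(n_bins):
        base = 1.5                                        # baseline rate (saccades/s)
        dip = -contrast * 1.2 if 3 <= t < 7 else 0.0      # early inhibition
        rebound = contrast * 0.8 if 8 <= t < 14 else 0.0  # later elevation
        sig.append(max(0.0, base + dip + rebound + random.gauss(0, 0.2)))
    return sig

def make_trials(n, contrast=0.8):
    """Half the trials contain a stimulus (label +1), half do not (label -1)."""
    X, y = [], []
    for _ in range(n):
        present = random.random() < 0.5
        X.append(rate_signature(contrast if present else 0.0))
        y.append(1 if present else -1)
    return X, y

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Pegasos-style sub-gradient descent on the hinge loss of a linear SVM."""
    w, b, t = [0.0] * len(X[0]), 0.0, 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decaying learning rate
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrinkage
            if margin < 1:                          # hinge-loss violation
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
    return w, b

def accuracy(w, b, X, y):
    preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else -1
             for xi in X]
    return sum(p == yi for p, yi in zip(preds, y)) / len(y)

X_train, y_train = make_trials(300)
X_test, y_test = make_trials(200)
w, b = train_linear_svm(X_train, y_train)
acc = accuracy(w, b, X_test, y_test)  # classification accuracy on held-out trials
```

In the study's framing, sweeping `contrast` toward zero would shrink the dip and rebound until `acc` falls to chance, and the contrast at which the classifier reaches a criterion accuracy would serve as an objective sensitivity estimate.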
Project description:Seeing a talker's face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signal affected AV integration. Where people look on the face in this context is also of interest; Buchan, Paré and Munhall (Brain Research, 1242, 162-171, 2008) found fixations on the mouth increased in the presence of auditory noise whilst Wilson, Alsius, Paré and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601-615, 2016) found mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech, and in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials when the McGurk effect was perceived. Adding auditory noise led to people fixating the mouth more, while visual degradation led to people fixating the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables vary according to the quality of information available.
Project description:High visual acuity is essential for many tasks, from recognizing distant friends to driving a car. While much is known about how the eye's optics and anatomy contribute to spatial resolution, possible influences from eye movements are rarely considered. Yet humans incessantly move their eyes, and it has long been suggested that oculomotor activity enhances fine pattern vision. Here we examine the role of eye movements in the most common assessment of visual acuity, the Snellen eye chart. By precisely localizing gaze and actively controlling retinal stimulation, we show that fixational behavior improves acuity by more than 0.15 logMAR, at least 2 lines of the Snellen chart. This improvement is achieved by adapting both microsaccades and ocular drifts to precisely position the image on the retina and adjust its motion. These findings show that humans finely tune their fixational eye movements so that they greatly contribute to normal visual acuity.
Project description:A patient was transferred for management of "medication-refractory seizures" after failure of levetiracetam and valproate dual therapy. She had a life-long history of two types of events: periods in which she would rapidly and uncontrollably lapse into unconsciousness, and spells in which she would "pass out" but maintain consciousness, the latter of late occurring with increasing frequency in association with laughing. She also reported hypnagogic/hypnopompic hallucinations, sleep paralysis, and disrupted nocturnal sleep. A clinical diagnosis of narcolepsy was made. The prevailing pathophysiological concept of narcolepsy involves partial intrusions of REM sleep into wakefulness. Healthy REM sleep includes generalized atonia, but with preservation of eye movements, respiratory function, and sphincter tone. Cataplexy recapitulates this pattern, and is often induced by extreme emotions, laughter in this case. Despite generalized and severe weakness and areflexia during this patient's cataplectic events, she was able to volitionally move her eyes, which is consistent with the physiology of REM sleep. The diagnosis of cataplexy is often missed because clinicians are unfamiliar with its findings and because sufficiently strong emotional responses to trigger an episode cannot readily be induced. This example of cataplexy is also quite characteristic of the "cataplectic facies." The opportunity to observe the infrequently witnessed phenomenon of cataplexy serves as a reminder that consciousness is preserved, as are extra-ocular muscle movements.
Project description:Ocular saccades bringing the gaze toward the straight-ahead direction (centripetal) exhibit higher dynamics than those steering the gaze away (centrifugal). This is generally explained by oculomotor determinants: centripetal saccades are more efficient because they pull the eyes back toward their primary orbital position. However, visual determinants might also be invoked: elements located straight-ahead trigger saccades more efficiently because they receive a privileged visual processing. Here, we addressed this issue by using both pro- and anti-saccade tasks in order to dissociate the centripetal/centrifugal directions of the saccades, from the straight-ahead/eccentric locations of the visual elements triggering those saccades. Twenty participants underwent alternating blocks of pro- and anti-saccades during which eye movements were recorded binocularly at 1 kHz. The results confirm that centripetal saccades are always executed faster than centrifugal ones, irrespective of whether the visual elements have straight-ahead or eccentric locations. However, by contrast, saccades triggered by elements located straight-ahead are consistently initiated more rapidly than those evoked by eccentric elements, irrespective of their centripetal or centrifugal direction. Importantly, this double dissociation reveals that the higher dynamics of centripetal pro-saccades stem from both oculomotor and visual determinants, which act respectively on the execution and initiation of ocular saccades.
Project description:Aggression and trait anger have been linked to attentional biases toward angry faces and attribution of hostile intent in ambiguous social situations. Memory and emotion play a crucial role in social-cognitive models of aggression but their mechanisms of influence are not fully understood. Combining a memory task and a visual search task, this study investigated the guidance of attention allocation toward naturalistic face targets during visual search by visual working memory (WM) templates in 113 participants who self-reported having served a custodial sentence. Searches were faster when angry faces were held in working memory regardless of the emotional valence of the visual search target. Higher aggression and trait anger predicted a stronger WM-modulated attentional bias. These results are consistent with the Social-Information Processing model, demonstrating that internal representations bias attention allocation to threat and that the bias is linked to aggression and trait anger.
Project description:Extracting statistical regularities from the environment is crucial for survival. It allows us to learn cues for where and when future events will occur. Can we learn these associations even when the cues are not consciously perceived? Can these unconscious processes integrate information over long periods of time? We show that the human visual system can track the probability of location contingency between an unconscious prime and a conscious target over a period of minutes. In a series of psychophysical experiments, we adopted an exogenous priming paradigm and manipulated the location contingency between a masked prime and a visible target (i.e., how likely the prime location predicted the target location). The prime's invisibility was verified both subjectively and objectively. Although the participants were unaware of both the existence of the prime and the prime-target contingency, our results showed that the probability of location contingency was tracked and manifested in the subsequent priming effect. When participants were first entrained into the fully predictive prime-target probability, they exhibited faster responses to the more predictive location. By contrast, when no contingency existed between the prime and target initially, participants later showed faster responses to the less predictive location. These results were replicated in two more experiments with increased statistical power and a fine-grained delineation of prime awareness. Together, we report that the human visual system is capable of tracking unconscious probability over time, demonstrating how implicit and uncertain regularity guides behavior.
Project description:Visual attention is an important aspect of everyday life, which can be incorporated in the assessment of many diagnoses. Another important characteristic of visual attention is that it can be improved via therapeutic interventions. Fifteen subjects with normal binocular vision were presented with visual distractor stimuli at various spatial locations while initiating disparity vergence eye movements (inward or outward rotation of the eyes) within a haploscope system. First, a stationary distractor stimulus was presented in either the far, middle, or near visual space while the subjects were instructed to follow a target stimulus that was either stationary, converging (moving toward the subject), or diverging (moving away from the subject). For the second experiment, a dynamic distractor stimulus within the far, middle, or near visual space that was converging or diverging was presented while the target stimulus was also converging or diverging. The subjects were instructed to visually follow the target stimulus and ignore the distractor stimulus. The vergence responses reached a final vergence angle between those of the target and distractor stimuli, which has been termed a center-of-gravity (CoG) effect. Statistically significant differences were observed between the convergence peak velocities (p < 0.001) and response amplitudes (p < 0.001) comparing responses without distractors to responses with the presence of a vergence distractor. The results support that vergence eye movements are influenced by visual distractors, similar to how distractors influence saccadic eye movements. The influence of visual distractors on vergence eye movements may be useful for assessing binocular dysfunction and visual distractibility, which are common after brain injury.
Project description:Visually guided reaching, a regular feature of human life, comprises an intricate neural control task. It includes identifying the target's position in 3D space, passing the representation to the motor system that controls the respective appendages, and adjusting ongoing movements using visual and proprioceptive feedback. Given the complexity of the neural control task, invertebrates, with their numerically constrained central nervous systems, are often considered incapable of this level of visuomotor guidance. Here, we provide mechanistic insights into visual appendage guidance in insects by studying the probing movements of the hummingbird hawkmoth's proboscis as they search for a flower's nectary. We show that visually guided proboscis movements fine-tune the coarse control provided by body movements in flight. By impairing the animals' view of their proboscis, we demonstrate that continuous visual feedback is required and actively sought out to guide this appendage. In doing so, we establish an insect model for the study of neural strategies underlying eye-appendage control in a simple nervous system.