Is congruent movement training more effective than standard visual scanning therapy to ameliorate symptoms of visuospatial neglect? Study protocol of a randomised controlled trial.
ABSTRACT: INTRODUCTION: Approximately 30% of all patients with stroke show visuospatial neglect (VSN). Currently, visual scanning therapy (VST) is applied in clinical settings to attenuate neglect symptoms. VST builds on the premise that eye movements to the affected hemifield lead to a concurrent shift of visual attention. Congruent movements with different effectors of the motor system, for example, eye and hand, can produce an even larger boost of attention compared with a single effector. This congruency principle may produce a powerful bias in the motor system, which may counteract the pathological biases in the attentional system of neglect patients. Therefore, an intervention with congruent eye and hand movements may result in greater attenuation of neglect compared with an intervention with single eye movements as applied in standard VST. The current randomised controlled trial will investigate the beneficial effects of this updated version of VST by comparing changes in performance on standard neuropsychological neglect tasks and in the severity of neglect in activities of daily living (ADL). METHODS AND ANALYSIS: Thirty VSN patients in the subacute phase poststroke onset will be randomly assigned to one of two groups: congruent eye and hand movement training (experimental group) versus standard VST (control group). Each patient will receive 10 sessions of training, 30 min each, within 2 weeks. Performance on standard neuropsychological neglect tasks, a visual discrimination task, severity of neglect in ADL, and eye movement characteristics before and after the intervention will be compared within and between groups. ETHICS AND DISSEMINATION: This study has been approved by the ethical committee of the University Medical Centre Utrecht. All subjects will participate voluntarily and will give written informed consent. Results of this study will be published in peer-reviewed scientific journals and presented at international conferences. TRIAL REGISTRATION NUMBER: NTR7005.
Project description: INTRODUCTION: Eye movements and spatial attention are closely related, and eye-tracking can provide valuable information in research on visual attention. We investigated the pathology of overt attention in right hemisphere (RH) stroke patients differing in the severity of their neglect symptoms by using eye-tracking during a dynamic attention task. METHODS: Eye movements were recorded in 26 RH stroke patients (13 with and 13 without unilateral spatial neglect) and a matched group of 26 healthy controls during a Multiple Object Tracking task. We assessed the frequency and spatial distributions of fixations, as well as the frequencies of eye movements to the left and to the right side of visual space, in order to investigate individuals' efficiency of visual processing, distribution of attentional processing resources, and oculomotor orienting mechanisms. RESULTS: Both patient groups showed increased fixation frequencies compared with controls. A spatial bias was found in the neglect patients' fixation distribution, depending on neglect severity (indexed by scores on the Behavioral Inattention Test). Patients with more severe neglect had more fixations within the right field, whereas patients with less severe neglect had more fixations within the left field. Eye movement frequencies depended on direction in the neglect patient group, as these patients made more eye movements toward the right than toward the left. CONCLUSION: The patient groups' higher fixation rates suggest that patients are generally less efficient in visual processing. The spatial bias in fixation distribution, dependent on neglect severity, suggests that patients with less severe neglect were able to use compensatory mechanisms in their contralesional space.
The relation between eye movement rates and directions observed in neglect patients provides a measure of the degree of difficulty these patients may encounter in dynamic situations in daily life, and supports the idea that directional oculomotor hypokinesia may be a relevant component of this syndrome.
Project description: Both eye and hand movements have been shown to selectively interfere with visual working memory. We investigated working memory during simultaneous eye-hand movements to ask whether the eye and hand movement systems interact with visual working memory independently. Participants memorized several locations and performed eye, hand, or simultaneous eye-hand movements during the maintenance interval. Subsequently, we tested spatial working memory at the eye or the hand motor goal, and at action-irrelevant locations. We found that for single eye and single hand movements, memory at the eye or hand target was significantly improved compared with action-irrelevant locations. Remarkably, when an eye and a hand movement were prepared in parallel, but to distinct locations, memory at both motor targets was enhanced, with no tradeoff between the two separate action goals. This suggests that eye and hand movements independently enhance visual working memory at their goal locations, resulting in overall working memory performance higher than would be expected when recruiting only one effector.
Project description: Variance stabilization is a step in the preprocessing of microarray data that can greatly benefit the performance of subsequent statistical modeling and inference. Due to the often limited number of technical replicates for Affymetrix and cDNA arrays, achieving variance stabilization can be difficult. Although the Illumina microarray platform provides a larger number of technical replicates on each array (usually over 30 randomly distributed beads per probe), these replicates have not been leveraged in the current log2 data transformation process. We devised a variance-stabilizing transformation (VST) method that takes advantage of the technical replicates available on an Illumina microarray. We have compared VST with log2 and variance-stabilizing normalization (VSN) using the Kruglyak bead-level data (2006) and Barnes titration data (2005). The results of the Kruglyak data suggest that VST stabilizes variances of bead-replicates within an array. The results of the Barnes data show that VST can improve the detection of differentially expressed genes and reduce false-positive identifications. We conclude that although both VST and VSN are built upon the same model of measurement noise, VST stabilizes the variance better and more efficiently for the Illumina platform by leveraging the availability of a larger number of within-array replicates. The algorithms and Supplementary Data are included in the lumi package of Bioconductor, available at: www.bioconductor.org.
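An additive-plus-multiplicative noise model of the kind underlying both VST and VSN admits a closed-form stabilizer: when sd(x) ≈ sqrt(s0² + (c·mean)²), an arcsinh (generalized-log) transform flattens the variance across intensity levels. The Python sketch below is a minimal toy demonstration of that principle with made-up parameters (`s0`, `c`), not the lumi implementation or its fitted noise model:

```python
import numpy as np

def vst_arcsinh(x, s0, c):
    """Arcsinh (generalized-log) transform; stabilizes variance when
    sd(x) ~ sqrt(s0**2 + (c * mean)**2)."""
    return np.arcsinh(c * x / s0) / c

rng = np.random.default_rng(0)
s0, c = 50.0, 0.1            # hypothetical noise parameters, not lumi's fits
means = np.array([100.0, 1000.0, 10000.0])

# simulate 30 "bead-level replicates" per intensity, as on an Illumina array
sds = np.sqrt(s0**2 + (c * means)**2)
reps = rng.normal(means[:, None], sds[:, None], size=(3, 30))

raw_sd = reps.std(axis=1)
vst_sd = vst_arcsinh(reps, s0, c).std(axis=1)

print(raw_sd.max() / raw_sd.min())   # SD spread across intensities: large
print(vst_sd.max() / vst_sd.min())   # after the transform: close to 1
```

With many replicates per probe, `s0` and `c` can be estimated per array rather than assumed, which is the advantage the abstract attributes to the within-array bead replicates.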
Project description: Selective spatial attention is a crucial cognitive process that guides us to behaviorally relevant objects in a complex visual world through exploratory eye movements. The spatial location of objects, their (bottom-up) saliency, and their (top-down) relevance are assumed to be encoded in one "attentional priority map" in the brain, using different egocentric (eye-, head- and trunk-centered) spatial reference frames. In patients with hemispatial neglect, this map is thought to be imbalanced, leading to a spatially biased exploration of the visual environment. As a proof of concept, we altered the visual saliency (and thereby the attentional priority) of objects in a naturalistic scene along a left-right spatial gradient and investigated whether this can induce a bias in the exploratory eye movements of healthy humans (n = 28; all right-handed; mean age: 23 years, range 19-48). Using high-end gaze-contingent display (GCD) technology, we developed a computerized mask that immediately and continuously reduced the saliency of objects on the left: "left" with respect to the head (body-centered) and to the current position on the retina (eye-centered). In both experimental conditions, task-free viewing and goal-driven visual search, this manipulation induced a mild but significant bias in visual exploration similar to hemispatial neglect. Accordingly, global eye movement parameters changed (reduced number and increased duration of fixations), and the spatial distribution of fixations indicated an attentional bias towards the right (a rightward shift of first orienting, with fixations favoring the scene's outermost right over its outermost left). Our results support the concept of an attentional priority map in the brain as an interface between perception and behavior and as one pathophysiological ground of hemispatial neglect.
Project description: In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY: In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared.
Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.
Project description: Sensorimotor coupling in healthy humans is demonstrated by the higher accuracy of visually tracking intrinsically- rather than extrinsically-generated hand movements in the fronto-parallel plane. It is unknown whether this coupling also facilitates vergence eye movements for tracking objects in depth, or whether it can overcome symmetric or asymmetric binocular visual impairments. Human observers were therefore asked to track with their gaze a target moving horizontally or in depth. The movement of the target was either directly controlled by the observer's hand or followed hand movements executed by the observer in a previous trial. Visual impairments were simulated by blurring stimuli independently in each eye. Accuracy was higher for self-generated movements in all conditions, demonstrating that motor signals are employed by the oculomotor system to improve the accuracy of vergence as well as horizontal eye movements. Asymmetric monocular blur affected horizontal tracking less than symmetric binocular blur, but impaired tracking in depth as much as binocular blur. There was a critical blur level up to which pursuit and vergence eye movements maintained tracking accuracy independent of blur level. Hand-eye coordination may therefore help compensate for functional deficits associated with eye disease and may be employed to augment visual impairment rehabilitation.
Project description: Both eye and hand movements bind visual attention to their target locations during movement preparation. However, it remains contentious whether eye and hand targets are selected jointly by a single selection system or individually by independent systems. To resolve this controversy, we investigated the deployment of visual attention, a proxy of motor target selection, in coordinated eye-hand movements. Results show that attention builds up in parallel at both the eye and the hand target. Importantly, the allocation of attention to one effector's motor target was not affected by the concurrent preparation of the other effector's movement at any time during movement preparation. This demonstrates that eye and hand targets are represented in separate, effector-specific maps of action-relevant locations. The eye-hand synchronisation that is frequently observed at the behavioral level must therefore emerge from mutual influences of the two effector systems at later, post-attentional processing stages.
Project description:We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt.
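The expected-gain framing in the study above can be made concrete with a toy calculation: searching longer with the eye raises the probability that the reach lands on the target, but the reward shrinks with the total search-plus-reach time, so there is an optimal moment to launch the hand. All parameters below (`R`, `cost_per_s`, `t_reach`, the localization curve) are invented for illustration and are not the paper's model:

```python
import numpy as np

R = 100.0           # hypothetical reward for touching the target
cost_per_s = 40.0   # hypothetical reward lost per second elapsed
t_reach = 0.4       # fixed reach duration in seconds

t_wait = np.linspace(0.0, 2.0, 201)                 # candidate search durations
p_hit = 1.0 - (5.0 / 6.0) * np.exp(-3.0 * t_wait)   # 1 target, 5 distractors:
                                                    # chance 1/6 at t = 0,
                                                    # certainty as t grows

expected_gain = p_hit * R - cost_per_s * (t_wait + t_reach)
best = t_wait[np.argmax(expected_gain)]
print(round(best, 2))   # gain-maximizing time to launch the reach: 0.61
```

Maximizing expected gain rather than accuracy alone is what makes early, "risky" reaches rational under time costs, which is the benchmark against which the observers' suboptimal strategies were compared.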
Project description:Spatial computations underlying the coordination of the hand and eye present formidable geometric challenges. One way for the nervous system to simplify these computations is to directly encode the relative position of the hand and the center of gaze. Neurons in the dorsal premotor cortex (PMd), which is critical for the guidance of arm-reaching movements, encode the relative position of the hand, gaze, and goal of reaching movements. This suggests that PMd can coordinate reaching movements with eye movements. Here, we examine saccade-related signals in PMd to determine whether they also point to a role for PMd in coordinating visual-motor behavior. We first compared the activity of a population of PMd neurons with a population of parietal reach region (PRR) neurons. During center-out reaching and saccade tasks, PMd neurons responded more strongly before saccades than PRR neurons, and PMd contained a larger proportion of exclusively saccade-tuned cells than PRR. During a saccade relative position-coding task, PMd neurons encoded saccade targets in a relative position code that depended on the relative position of gaze, the hand, and the goal of a saccadic eye movement. This relative position code for saccades is similar to the way that PMd neurons encode reach targets. We propose that eye movement and eye position signals in PMd do not drive eye movements, but rather provide spatial information that links the control of eye and arm movements to support coordinated visual-motor behavior.
Project description: Humans can distinguish visual stimuli that differ by features the size of only a few photoreceptors. This is possible despite the incessant image motion due to fixational eye movements, which can be many times larger than the features to be distinguished. To perform well, the brain must identify the retinal firing patterns induced by the stimulus while discounting similar patterns caused by spontaneous retinal activity. This is a challenge since the trajectory of the eye movements, and consequently the stimulus position, are unknown. We derive a decision rule for using retinal spike trains to discriminate between two stimuli, given that their retinal image moves with an unknown random walk trajectory. This algorithm dynamically estimates the probability of the stimulus at different retinal locations, and uses this to modulate the influence of retinal spikes acquired later. Applied to a simple orientation-discrimination task, the algorithm's performance is consistent with human acuity, whereas naive strategies that neglect eye movements perform much worse. We then show how a simple, biologically plausible neural network could implement this algorithm using a local, activity-dependent gain and lateral interactions approximately matched to the statistics of eye movements. Finally, we discuss evidence that such a network could be operating in the primary visual cortex.
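The recursive estimate described above (diffuse a belief over the unknown eye position according to the random-walk statistics, then reweight it by the likelihood of the incoming spikes under each candidate stimulus) has the structure of a discrete Bayesian filter. The sketch below is a toy 1-D construction with invented firing-rate profiles and a coarse position grid, not the paper's retinal model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 21, 200      # retinal positions (toy 1-D retina), time steps
sigma = 1.0         # random-walk step size of the eye (positions per step)
xs = np.arange(N)

# two hypothetical stimuli as spiking-probability profiles over the retina
stim = {
    "A": np.where(np.abs(xs - N // 2) <= 2, 0.4, 0.05),  # central bump
    "B": np.full(N, 0.12),                               # flat profile
}

# transition matrix for the eye's random walk over positions
d = xs[:, None] - xs[None, :]
K = np.exp(-d**2 / (2 * sigma**2))
K /= K.sum(axis=1, keepdims=True)

def simulate(true_stim):
    """Bernoulli spikes from a retinal image drifting with a random walk."""
    pos, spikes = N // 2, []
    for _ in range(T):
        pos = int(np.clip(round(pos + rng.normal(0, sigma)), 0, N - 1))
        rate = np.roll(stim[true_stim], pos - N // 2)
        spikes.append(rng.random(N) < rate)
    return spikes

def decode(spikes):
    """Accumulate log evidence per stimulus while tracking a posterior over
    the unknown eye offset, diffused by K at every step."""
    belief = {s: np.full(N, 1.0 / N) for s in stim}   # P(offset | stimulus)
    loglik = {s: 0.0 for s in stim}
    for obs in spikes:
        for s in stim:
            prior = K @ belief[s]                      # random-walk diffusion
            lik = np.array([
                np.prod(np.where(obs, np.roll(stim[s], o - N // 2),
                                 1 - np.roll(stim[s], o - N // 2)))
                for o in range(N)])
            post = prior * lik
            loglik[s] += np.log(post.sum())            # evidence for stimulus s
            belief[s] = post / post.sum()
    return max(loglik, key=loglik.get)

print(decode(simulate("A")), decode(simulate("B")))
```

Modulating later spikes by the current position estimate corresponds here to weighting each step's likelihood by the diffused `belief`; a naive strategy that neglects eye movements would instead fix the offset belief at its initial uniform value.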