Project description: The majority of subjects who attempt to learn control of a brain-computer interface (BCI) can do so with adequate training. Much like when one learns to type or ride a bicycle, BCI users report transitioning from a deliberate, cognitively focused mindset to near-automatic control as training progresses. What are the neural correlates of this process of BCI skill acquisition? Seven subjects were implanted with electrocorticography (ECoG) electrodes and had multiple opportunities to practice a 1D BCI task. As subjects became proficient, strong initial task-related activation was followed by a lessening of activation in prefrontal cortex, premotor cortex, and posterior parietal cortex, areas that have previously been implicated in the cognitive phase of motor sequence learning and abstract task learning. These results demonstrate that, although the use of a BCI requires modulation of only a local population of neurons, a distributed network of cortical areas is involved in the acquisition of BCI proficiency.
Project description: BACKGROUND: During planning and execution of reaching movements, the activity of cortical motor neurons is modulated by a diversity of motor, sensory, and cognitive signals. Brain-machine interfaces (BMIs) extract part of these modulations to directly control artificial actuators. However, cortical modulations that emerge in the novel context of operating the BMI are poorly understood. METHODOLOGY/PRINCIPAL FINDINGS: Here we analyzed the changes in neuronal modulations that occurred in different cortical motor areas as monkeys learned to use a BMI to control reaching movements. Using spike-train analysis methods, we demonstrate that the modulations of the firing rates of cortical neurons increased abruptly after the monkeys started operating the BMI. Regression analysis revealed that these enhanced modulations were not correlated with the kinematics of the movement. The initial enhancement in firing-rate modulations declined gradually with subsequent training, in parallel with the improvement in behavioral performance. CONCLUSIONS/SIGNIFICANCE: We conclude that the enhanced modulations are related to computational tasks that are especially significant in novel motor contexts. Although the function and neuronal mechanism of the enhanced cortical modulations remain open for further inquiry, we discuss their potential role in processing execution errors and representing corrective or explorative activity. These representations are expected to contribute to the formation of internal models of the external actuator, and their decoding may facilitate BMI improvement.
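The regression analysis described above can be illustrated with a small synthetic sketch (all data, coefficients, and the non-kinematic component below are fabricated for the example; this is not the study's actual pipeline): a unit's firing rate is regressed on movement kinematics, and the residual captures any enhanced modulation that the kinematics do not explain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one unit's firing rate over T bins, driven partly by
# 2D movement velocity and partly by a component unrelated to kinematics.
T = 500
velocity = rng.normal(size=(T, 2))                     # fabricated kinematics
unexplained = 0.5 * np.sin(np.linspace(0.0, 20.0, T))  # non-kinematic modulation
rates = velocity @ np.array([1.2, -0.8]) + unexplained + 0.1 * rng.normal(size=T)

# Ordinary least squares: firing rate regressed on kinematics (+ intercept)
X = np.column_stack([velocity, np.ones(T)])
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
residual = rates - X @ coef

# Variance explained by kinematics; the residual carries the modulation
# that is not correlated with movement.
r2 = 1.0 - residual.var() / rates.var()
```

In the study's framing, an abrupt rise in firing-rate modulation with little change in `r2` would indicate enhanced modulations uncorrelated with kinematics.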
Project description: Several groups have developed brain-machine interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while the monkeys remained seated in the robotic wheelchair, passive navigation was used to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, the monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair's translational and rotational velocities. Over time, the monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also observed a cortical representation of the distance to the reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.
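The passive-navigation training step might look roughly like the sketch below, assuming a simple ridge-regression mapping from binned firing rates to the two velocity channels; the dimensions, data, and penalty `lam` are hypothetical stand-ins, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical passive-navigation data: binned firing rates of N units
# and the wheelchair's (translational, rotational) velocity per bin.
T, N = 1000, 50
rates = rng.poisson(5.0, size=(T, N)).astype(float)
true_map = rng.normal(scale=0.2, size=(N, 2))          # fabricated ground truth
kinematics = rates @ true_map + 0.1 * rng.normal(size=(T, 2))

# Ridge-regularized linear decoder: kinematics ~ rates @ W
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(N), rates.T @ kinematics)

# Decoding quality on the training data
r2 = 1.0 - ((kinematics - rates @ W) ** 2).mean() / kinematics.var()

# Closed-loop use: decode the two velocities from a fresh activity window
decoded = rng.poisson(5.0, size=(1, N)).astype(float) @ W
```

Once trained, the same weight matrix `W` is applied bin by bin to drive the wheelchair's translational and rotational velocities.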
Project description: Working memory, the brain's ability to internalize information and use it flexibly to guide behaviour, is an essential component of cognition. Although activity related to working memory has been observed in several brain regions [1-3], how neural populations actually represent working memory [4-7] and the mechanisms by which this activity is maintained [8-12] remain unclear [13-15]. Here we describe the neural implementation of visual working memory in mice alternating between a delayed non-match-to-sample task and a simple discrimination task that does not require working memory but has identical stimulus, movement and reward statistics. Transient optogenetic inactivations revealed that distributed areas of the neocortex were required selectively for the maintenance of working memory. Population activity in visual area AM and premotor area M2 during the delay period was dominated by orderly low-dimensional dynamics [16,17] that were, however, independent of working memory. Instead, working memory representations were embedded in high-dimensional population activity, were present in both cortical areas, persisted throughout the inter-stimulus delay period, and predicted behavioural responses during the working memory task. To test whether the distributed nature of working memory depended on reciprocal interactions between cortical regions [18-20], we silenced one cortical area (AM or M2) while recording the feedback it received from the other. Transient inactivation of either area led to the selective disruption of inter-areal communication of working memory. Therefore, reciprocally interconnected cortical areas maintain bound high-dimensional representations of working memory.
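The contrast between orderly low-dimensional dynamics and a high-dimensional memory representation can be sketched on synthetic data, as below; the simulation, dimensions, and linear readout are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic delay-period activity: K trials x N units, where dominant
# low-dimensional shared dynamics are label-independent and a weak
# high-dimensional component encodes the remembered stimulus (0 or 1).
K, N = 200, 100
labels = rng.integers(0, 2, size=K)
shared = 2.0 * rng.normal(size=(K, 3)) @ rng.normal(size=(3, N))
axis = rng.normal(size=N)
axis /= np.linalg.norm(axis)
activity = shared + 1.5 * np.outer(labels - 0.5, axis) + 0.3 * rng.normal(size=(K, N))

# The top principal components are dominated by the orderly shared dynamics...
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_top3 = (s[:3] ** 2).sum() / (s ** 2).sum()

# ...yet a linear readout of the full population still recovers the memory.
X = np.column_stack([centered, np.ones(K)])
w, *_ = np.linalg.lstsq(X, labels - 0.5, rcond=None)
accuracy = ((X @ w > 0) == (labels == 1)).mean()
```

Here the leading components explain most variance while carrying no memory information, mirroring the finding that memory is embedded outside the dominant low-dimensional dynamics.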
Project description: Diverse organisms, from insects to humans, actively seek out sensory information that best informs goal-directed actions. Efficient active sensing requires congruity between sensor properties and motor strategies, as typically honed through evolution. However, it has been difficult to study whether active sensing strategies are also modified with experience. Here, we used a sensory brain-machine interface paradigm, permitting both free behavior and experimental manipulation of sensory feedback, to study learning of active sensing strategies. Rats performed a searching task in a water maze in which the only task-relevant sensory feedback was provided by intracortical microstimulation (ICMS) encoding egocentric bearing to the hidden goal location. The rats learned to use the artificial goal-direction sense to find the platform with the same proficiency as natural vision. Manipulation of the acuity of the ICMS feedback revealed distinct search-strategy adaptations. Using an optimization model, the different strategies were found to minimize the effort required to extract the most salient task-relevant information. The results demonstrate that animals can adjust motor strategies to match novel sensor properties for efficient goal-directed behavior.
Project description: Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.
Project description: Motor cortex plays a substantial role in driving movement, yet the details underlying this control remain unresolved. We analyzed the extent to which movement-related information could be extracted from single-trial motor cortical activity recorded while monkeys performed center-out reaching. Using information theoretic techniques, we found that single units carry relatively little speed-related information compared with direction-related information. This result is not mitigated at the population level: simultaneously recorded population activity predicted speed with significantly lower accuracy relative to direction predictions. Furthermore, a unit-dropping analysis revealed that speed accuracy would likely remain lower than direction accuracy, even given larger populations. These results suggest that the instantaneous details of single-trial movement speed are difficult to extract using commonly assumed coding schemes. This apparent paucity of speed information takes particular importance in the context of brain-machine interfaces (BMIs), which rely on extracting kinematic information from motor cortex. Previous studies have highlighted subjects' difficulties in holding a BMI cursor stable at targets. These studies, along with our finding of relatively little speed information in motor cortex, inspired a speed-dampening Kalman filter (SDKF) that automatically slows the cursor upon detecting changes in decoded movement direction. Effectively, SDKF enhances speed control by using prevalent directional signals, rather than requiring speed to be directly decoded from neural activity. SDKF improved success rates by a factor of 1.7 relative to a standard Kalman filter in a closed-loop BMI task requiring stable stops at targets. BMI systems enabling stable stops will be more effective and user-friendly when translated into clinical applications.
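The core idea behind the SDKF, slowing the cursor when the decoded direction changes, can be sketched as below. This is not the authors' Kalman-filter implementation; the damping law and the exponent `alpha` are illustrative assumptions.

```python
import numpy as np

def dampen(prev_dir, new_vel, alpha=4.0):
    """Shrink decoded speed when the decoded direction changes sharply.

    prev_dir: unit vector of the previous decoded direction
    new_vel:  newly decoded 2D velocity
    alpha:    illustrative damping exponent (not a published value)
    """
    speed = np.linalg.norm(new_vel)
    if speed == 0.0:
        return new_vel
    # 1 when the direction is unchanged, near 0 when it reverses
    agreement = 0.5 * (1.0 + prev_dir @ (new_vel / speed))
    return (agreement ** alpha) * new_vel

# Steady movement passes through almost unchanged...
v_hold = dampen(np.array([1.0, 0.0]), np.array([1.0, 0.05]))
# ...while a direction reversal, typical of hovering at a target,
# is damped to a near stop.
v_stop = dampen(np.array([1.0, 0.0]), np.array([-1.0, 0.05]))
```

The gain depends only on direction, so the cursor can stop stably even when speed itself is poorly decoded from neural activity.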
Project description: Innovation in the field of brain-machine interfacing offers a new approach to managing human pain. In principle, it should be possible to use brain activity to directly control a therapeutic intervention in an interactive, closed-loop manner. But this raises the question of whether the brain activity changes as a function of this interaction. Here, we used real-time decoded functional MRI responses from the insula cortex as input into a closed-loop control system aimed at reducing pain and looked for co-adaptive neural and behavioral changes. As subjects engaged in active cognitive strategies oriented toward the control system, such as trying to enhance their brain activity, pain encoding in the insula was paradoxically degraded. From a mechanistic perspective, we found that cognitive engagement was accompanied by activation of the endogenous pain modulation system, manifested by the attentional modulation of pain ratings and enhanced pain responses in pregenual anterior cingulate cortex and periaqueductal gray. Further behavioral evidence of endogenous modulation was confirmed in a second experiment using an EEG-based closed-loop system. Overall, the results show that implementing brain-machine control systems for pain induces a parallel set of co-adaptive changes in the brain, and this can interfere with the brain signals and behavior under control. More generally, this illustrates a fundamental challenge of brain-decoding applications: the brain inherently adapts to being decoded, especially as a result of cognitive processes related to learning and cooperation. Understanding the nature of these co-adaptive processes informs strategies to mitigate or exploit them.
Project description: Brain-machine interfaces (BMI) connect brains directly to the outside world, bypassing natural neural systems and actuators. Neuronal-activity-to-motion transformation algorithms allow applications such as control of prosthetics or computer cursors. These algorithms lie within a spectrum between bio-mimetic control and bio-feedback control. The bio-mimetic approach relies on increasingly complex algorithms to decode neural activity by mimicking the natural relationship between the neural system and the actuator, while focusing on machine learning: the supervised fitting of decoder parameters. On the other hand, the bio-feedback approach uses simple algorithms and relies primarily on user learning, which may take some time but can facilitate control of novel, non-biological appendages. An increasing amount of work has focused on the arguably more successful bio-mimetic approach. However, as chronic recordings have become more accessible and the use of novel appendages such as computer cursors has become more widespread, users can more easily spend time learning in a bio-feedback control paradigm. We believe a simple approach that leverages user learning and makes few assumptions will provide users with good control ability. To test the feasibility of this idea, we implemented a simple firing-rate-to-motion correspondence rule, assigning groups of neurons to virtual "directional keys" for control of a 2D cursor. Though not strictly required, to facilitate initial control we selected neurons with similar preferred directions for each group. The groups of neurons were kept the same across multiple recording sessions to allow learning. Two rhesus monkeys used this BMI to perform a center-out cursor movement task. After about a week of training, the monkeys performed the task better and neuronal signal patterns changed on a group basis, indicating learning. While our experiments did not compare this bio-feedback BMI to bio-mimetic BMIs, the results demonstrate the feasibility of our control paradigm and pave the way for further research into multi-dimensional bio-feedback BMIs.
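A minimal sketch of such a firing-rate-to-motion correspondence rule is given below; the group assignments, baselines, and gain are hypothetical values for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical layout: four neuron groups act as virtual "directional
# keys" (right, left, up, down) for a 2D cursor; rates above each
# group's baseline "press" the corresponding key.
KEY_DIRECTIONS = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

def decode_step(group_rates, baseline, gain=0.1):
    """Map per-group firing rates (Hz) to a 2D cursor displacement."""
    drive = np.clip(group_rates - baseline, 0.0, None)  # rectify below baseline
    return gain * drive @ KEY_DIRECTIONS

baseline = np.full(4, 10.0)
# 'Right' group firing well above baseline, 'up' group slightly above:
# the cursor moves mostly rightward and slightly upward.
step = decode_step(np.array([18.0, 9.0, 12.0, 10.0]), baseline)
```

Because the rule is fixed and transparent, improvement over sessions must come from the user's neural patterns adapting to it, which is the bio-feedback premise of the study.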
Project description: Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model spikes directly and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control, and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain's behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user's motor intention during CLDA, a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted-training technique. The point process model allows for neural processing, control, and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation, unlike current CLDA methods that use batch-based adaptation on much slower time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications for the development of clinically viable neuroprosthetics.
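The key difference from batch CLDA is that decoder parameters are nudged at every time bin or spike event rather than after long batches. A minimal sketch of spike-event-based adaptation under a point-process (exponential-link) rate model follows; the setup, values, and learning rate are fabricated for illustration and are not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: a unit spikes as a point process with conditional intensity
# lambda = exp(b0 + b1 * intent); the decoder adapts its estimate of b1
# at every time bin using the single-bin log-likelihood gradient.
dt = 0.001                       # bin width (s)
b0, b1_true = np.log(5.0), 0.8   # fabricated ground-truth tuning
b1_hat, eta = 0.0, 0.05          # decoder estimate and learning rate

trace = []
for _ in range(50000):
    intent = rng.normal()        # inferred intention for this bin (e.g. from an OFC model)
    # Bernoulli approximation of the point process in a small bin
    spike = rng.random() < np.exp(b0 + b1_true * intent) * dt
    lam_hat = np.exp(b0 + b1_hat * intent)
    # spike-event-based update: gradient of the point-process likelihood
    b1_hat += eta * (spike - lam_hat * dt) * intent
    trace.append(b1_hat)

# Average the second half of the trace to smooth per-event fluctuations
b1_avg = float(np.mean(trace[25000:]))
```

Because the estimate moves with every bin, adaptation operates on a much faster time-scale than batch refits, at the cost of per-event noise that is typically handled by small step sizes or averaging.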