Feasibility of identifying the ideal locations for motor intention decoding using unimodal and multimodal classification at 7T-fMRI.
ABSTRACT: Invasive Brain-Computer Interfaces (BCIs) require surgeries that carry high health risks. The risk-to-benefit ratio of the procedure could potentially be improved by pre-surgically identifying the ideal locations for mental strategy classification. We recorded high-spatiotemporal-resolution blood-oxygenation-level-dependent (BOLD) signals using functional MRI at 7 Tesla in eleven healthy participants during two motor imagery tasks. The BCI diagnostic task isolated the intent to imagine movements, while the BCI simulation task simulated the neural states that may arise in a real-life BCI-operation scenario. Imagined movements were classified from the BOLD signals in sub-regions of activation within a single dorsal motor network region or across multiple regions. The participant's decoding performance during the BCI simulation task was then predicted from the BCI diagnostic task. The results revealed that drawing information from multiple regions, compared to a single region, increased the classification accuracy of imagined movements. Importantly, systematic unimodal and multimodal classification revealed the combination of regions that yielded the best classification accuracy at the individual level. Lastly, a given participant's decoding performance during the BCI simulation task could be predicted from the BCI diagnostic task. These results demonstrate the feasibility of using unimodal and multimodal classification at 7T-fMRI to identify ideal sites for mental strategy classification.
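The region-combination search described above can be sketched as follows: for every subset of motor-network ROIs, concatenate their BOLD features ("multimodal" classification) and score a cross-validated classifier, keeping the best-performing combination. The ROI names, the linear SVM, and the synthetic data are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: exhaustive search over ROI subsets for the best-decoding combination.
# All ROI names and the random "BOLD" features are hypothetical placeholders.
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 40
rois = {                       # trials x voxel-features per hypothetical ROI
    "M1":  rng.normal(size=(n_trials, 30)),
    "PMd": rng.normal(size=(n_trials, 25)),
    "SMA": rng.normal(size=(n_trials, 20)),
}
y = rng.integers(0, 2, size=n_trials)   # two imagined movements

best = None
for k in range(1, len(rois) + 1):
    for combo in combinations(rois, k):
        X = np.hstack([rois[r] for r in combo])   # multimodal = concatenated ROIs
        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        if best is None or acc > best[1]:
            best = (combo, acc)

print("best ROI combination:", best[0], "accuracy: %.2f" % best[1])
```

With real data, the subset achieving the highest cross-validated accuracy would mark the candidate implantation sites for that individual.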
Project description: Laboratory demonstrations of brain-computer interface (BCI) systems show promise for reducing disability associated with paralysis by directly linking neural activity to the control of assistive devices. Surveys of potential users have revealed several key BCI performance criteria for clinical translation of such a system. Of these, high accuracy, short response latency, and multi-functionality are three key characteristics directly impacted by the neural decoding component of the BCI system, the algorithm that translates neural activity into control signals. Building a decoder that simultaneously addresses all three criteria is complicated because optimizing for one criterion may lead to undesirable changes in the others. Unfortunately, there has been little work to date quantifying how decoder design simultaneously affects these performance characteristics. Here, we systematically explore the trade-off between accuracy, response latency, and multi-functionality for discrete movement classification using two different decoding strategies: a support vector machine (SVM) classifier, which represents the current state of the art for discrete movement classification in laboratory demonstrations, and a proposed deep neural network (DNN) framework. We utilized historical intracortical recordings from a human tetraplegic study participant who imagined performing several different hand and finger movements. For both decoders, we found that response time increases (i.e., slower reaction) and accuracy decreases as the number of functions increases. However, both the increase in response time and the decline in accuracy with additional functions are smaller for the DNN than for the SVM. We also show that data preprocessing steps can affect the performance characteristics of the two decoders in drastically different ways.
Finally, we evaluated the performance of our tetraplegic participant using the DNN decoder in real time to control functional electrical stimulation (FES) of his paralyzed forearm. We compared his performance to that of able-bodied participants performing the same task, establishing a quantitative target for ideal BCI-FES performance on this task. Cumulatively, these results help quantify BCI decoder performance characteristics relevant to potential users and the complex interactions between them.
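The accuracy-versus-function-count trade-off discussed above can be illustrated with a minimal sketch: a linear SVM classifying synthetic "neural" features as movement classes are added. The feature dimensions, class separability, and SVM settings are invented for illustration and do not reproduce the study's decoders or recordings.

```python
# Sketch: cross-validated accuracy of a linear SVM as the number of discrete
# movement classes ("functions") grows, at fixed class separability.
# Synthetic Gaussian features stand in for intracortical recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def accuracy_for(n_classes, trials_per_class=30, n_features=50, sep=0.15):
    """Cross-validated accuracy for a synthetic n_classes movement problem."""
    X, y = [], []
    for c in range(n_classes):
        mean = rng.normal(scale=sep, size=n_features)   # class-specific pattern
        X.append(mean + rng.normal(size=(trials_per_class, n_features)))
        y.append(np.full(trials_per_class, c))
    return cross_val_score(SVC(kernel="linear"),
                           np.vstack(X), np.concatenate(y), cv=5).mean()

for n in (2, 4, 6):
    print(n, "functions -> accuracy %.2f" % accuracy_for(n))
```

At fixed separability and trial counts, adding classes generally lowers accuracy, which is the trend the study quantifies for both SVM and DNN decoders.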
Project description: OBJECTIVE: In recent years, brain-computer interface (BCI) systems have been investigated for their potential as a communication device to assist people with severe paralysis. Decoding speech sensorimotor cortex activity is a promising avenue for the generation of BCI control signals, but it is complicated by variability in neural patterns, which leads to suboptimal decoding. We investigated whether the neural pattern variability associated with sound pronunciation can be explained by prior pronunciations, and determined to what extent prior speech affects BCI decoding accuracy. APPROACH: Neural patterns in speech motor areas were evaluated with electrocorticography in five epilepsy patients who performed a simple speech task involving pronunciation of the /i/ sound, preceded by silence, the /a/ sound, or the /u/ sound. MAIN RESULTS: The neural pattern related to the /i/ sound depends on the preceding sound and is therefore associated with multiple distinct sensorimotor patterns, likely reflecting differences in the movements towards this sound. We also show that these patterns still contain a commonality that is distinct from the other vowel sounds (/a/ and /u/). Classification accuracies for the decoding of different sounds do increase, however, when the multiple patterns for the /i/ sound are taken into account. Simply including multiple forms of the /i/ vowel in the training set for a single /i/ model performs as well as training individual models for each /i/ variation. SIGNIFICANCE: Our results are of interest for the development of BCIs that aim to decode speech sounds from the sensorimotor cortex: they argue that the multitude of cortical activity patterns associated with speech movements can be reduced to a basis set of models reflecting meaningful language units (vowels), yet it is important to account for the variety of neural patterns associated with a single sound during training.
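The comparison above (one pooled /i/ model versus one model per /i/ variant) can be sketched with synthetic data: a classifier trained with the variants collapsed into a single class, against a classifier trained with fine variant labels whose predictions are mapped back to /i/. The feature distributions and class means are invented stand-ins for the ECoG sensorimotor patterns.

```python
# Sketch: pooled /i/ model vs. per-variant models mapped back to a vowel label.
# Synthetic Gaussian features are hypothetical, not the paper's ECoG data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, d = 30, 12
# Hypothetical trials: three /i/ variants (by preceding sound), then /a/, then /u/.
blocks = [rng.normal(loc=m, size=(n, d)) for m in (0.4, 0.6, 0.8, -1.5, 2.5)]
X = np.vstack(blocks)
y_fine = np.repeat(np.arange(5), n)              # i_sil, i_a, i_u, a, u
y_coarse = np.where(y_fine <= 2, 0, y_fine - 2)  # collapse /i/ variants: i, a, u

Xtr, Xte, ftr, fte, ctr, cte = train_test_split(
    X, y_fine, y_coarse, test_size=0.3, random_state=0)

pooled = LogisticRegression(max_iter=1000).fit(Xtr, ctr)  # one model, pooled /i/
fine = LogisticRegression(max_iter=1000).fit(Xtr, ftr)    # one label per /i/ variant
pf = fine.predict(Xte)
variant_as_coarse = np.where(pf <= 2, 0, pf - 2)          # map variants back to /i/

pooled_acc = (pooled.predict(Xte) == cte).mean()
variant_acc = (variant_as_coarse == cte).mean()
print("pooled /i/ acc: %.2f, per-variant acc: %.2f" % (pooled_acc, variant_acc))
```

The study's finding is that these two strategies perform comparably, so the simpler pooled model suffices provided the variants are represented in its training set.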
Project description: Electroencephalography (EEG) and near-infrared spectroscopy (NIRS) are non-invasive neuroimaging methods that record the electrical and metabolic activity of the brain, respectively. Hybrid EEG-NIRS brain-computer interfaces (hBCIs), which use complementary EEG and NIRS information to enhance BCI performance, have recently emerged to overcome the limitations of unimodal BCIs, such as vulnerability to motion artifacts for EEG-BCI or low temporal resolution for NIRS-BCI. However, for NIRS-BCI, a relatively long trial length (≥10 s) is needed to fully induce task-related brain activation, owing to the inherent hemodynamic delay, which lowers the information transfer rate (ITR; bits/min). To alleviate this ITR degradation, we propose a more practical hBCI operated by intuitive mental tasks, such as mental arithmetic (MA) and word chain (WC) tasks, performed within a short trial length (5 s). In addition, we assessed the suitability of the WC task for BCI, which has so far rarely been used in the field. In this experiment, EEG and NIRS data were simultaneously recorded while participants performed MA and WC tasks without preliminary training and remained relaxed (baseline; BL). Each task was performed for 5 s, a shorter duration than in previous hBCI studies. Subsequently, a classification was performed to discriminate MA-related or WC-related brain activations from BL-related activations. Using the hBCI in offline/pseudo-online analyses, average classification accuracies of 90.0 ± 7.1/85.5 ± 8.1% and 85.8 ± 8.6/79.5 ± 13.4% were achieved for MA vs. BL and WC vs. BL, respectively. These were significantly higher than those of the unimodal EEG- or NIRS-BCI in most cases. Given the short trial length and improved classification accuracy, the average ITRs improved by more than 96.6% for MA vs. BL and 87.1% for WC vs. BL compared to those reported in previous studies.
These results validate the feasibility of a more practical hBCI based on intuitive mental tasks, requiring no preliminary training and a shorter trial length than in previous studies.
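The ITR gains reported above follow directly from the trial length: the standard Wolpaw formula scales bits per selection by selections per minute, so halving the trial duration doubles the ITR at equal accuracy. The sketch below implements that formula; the accuracy and trial-length values are illustrative, not the study's exact numbers.

```python
# Wolpaw ITR: bits per selection, scaled to selections per minute.
from math import log2

def itr_bits_per_min(n_classes, accuracy, trial_sec):
    """ITR = [log2 N + P log2 P + (1-P) log2((1-P)/(N-1))] * 60 / T."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = log2(n)
    else:
        bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_sec

# Same 90% binary accuracy at 5 s vs. 10 s trials: ITR doubles.
print(itr_bits_per_min(2, 0.90, 5.0))   # ~6.37 bits/min
print(itr_bits_per_min(2, 0.90, 10.0))  # ~3.19 bits/min
```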
Project description: Brain-computer interfaces (BCIs) are being developed to assist paralyzed people and amputees by translating neural activity into movements of a computer cursor or prosthetic limb. Here we introduce a novel BCI task paradigm intended to help accelerate improvements to BCI systems. Through this task, we can push the performance limits of BCI systems, quantify more accurately how well a BCI system captures the user's intent, and increase the richness of the BCI movement repertoire. We have implemented an instructed path task, wherein the user must drive a cursor along a visible path. The instructed path task provides a versatile framework for increasing the difficulty of the task and thereby pushing the limits of performance. Relative to traditional point-to-point tasks, it allows more thorough analysis of decoding performance and greater richness of movement kinematics. We demonstrate that monkeys are able to perform the instructed path task in a closed-loop BCI setting. We further investigate how performance under BCI control compares to native arm control, whether users can decrease their movement variability in the face of a more demanding task, and how kinematic richness is enhanced in this task. The use of the instructed path task has the potential to accelerate the development of BCI systems and their clinical translation.
Project description: Long calibration times hinder the feasibility of brain-computer interfaces (BCIs). If other subjects' data could be used to train the classifier, BCI-based neurofeedback practice could start without the initial calibration. Here, we compare methods for inter-subject decoding of left- vs. right-hand motor imagery (MI) from MEG and EEG. Six methods were tested on MEG and EEG measurements from healthy participants. Inter-subject decoders were trained on subjects showing good within-subject accuracy and tested on all subjects, including poor performers. Three methods were based on Common Spatial Patterns (CSP), and three on logistic regression with l1- or l2,1-norm regularization. Decoding accuracy was evaluated using (1) MI and (2) passive movements (PM) for training, separately for MEG and EEG. With MI training, the best accuracies across subjects (mean 70.6% for MEG, 67.7% for EEG) were obtained using multi-task learning (MTL) with logistic regression and l2,1-norm regularization. MEG yielded slightly better average accuracies than EEG. With PM training, none of the inter-subject methods yielded above-chance-level (58.7%) accuracy. In conclusion, MTL and training on other subjects' MI are efficient for inter-subject decoding of MI. Passive movements of other subjects are likely suboptimal for training MI classifiers.
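Since three of the six methods above build on Common Spatial Patterns, a minimal CSP sketch may help: the spatial filters are the solutions of a generalized eigenproblem on the two class covariance matrices, and the filters at both ends of the eigenvalue spectrum yield the standard log-variance features. The channel counts and synthetic trials are illustrative; this is not the paper's implementation.

```python
# Minimal CSP: generalized eigendecomposition of normalized class covariances.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: (n_trials, n_channels, n_samples). Returns (2*n_pairs, n_channels)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # trace-normalized
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve ca w = lambda (ca + cb) w; eigenvalues come back in ascending order.
    vals, vecs = eigh(ca, ca + cb)
    # Filters at both spectrum ends maximize variance for one class vs. the other.
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T

rng = np.random.default_rng(2)
left  = rng.normal(size=(20, 8, 100))           # 20 trials, 8 channels, 100 samples
right = rng.normal(size=(20, 8, 100)) * 1.5     # crude class difference in variance
W = csp_filters(left, right)
feats = np.log(np.var(W @ left[0], axis=1))     # standard log-variance CSP features
print(W.shape, feats.shape)
```

In the inter-subject setting of the study, such filters (or regularized logistic-regression weights) are learned from the donor subjects and applied to the new subject's data.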
Project description: Brain-computer interfaces (BCIs) have been studied extensively as a means of establishing a non-muscular communication channel, mainly for patients with impaired motor functions. However, many limitations remain for BCIs in clinical use. In this study, we propose a hybrid BCI that is based only on frontal brain areas and can be operated in an eyes-closed state, for end users with impaired motor and declining visual functions. In our experiment, electroencephalography (EEG) and near-infrared spectroscopy (NIRS) were simultaneously measured while 12 participants performed mental arithmetic (MA) and remained relaxed (baseline state: BL). To evaluate the feasibility of the hybrid BCI, we classified MA-related from BL-related brain activation. We then compared the classification accuracies of the two unimodal BCIs (EEG and NIRS) and the hybrid BCI in an offline mode. The classification accuracy of the hybrid BCI (83.9 ± 10.3%) was significantly higher than those of the unimodal EEG-based (77.3 ± 15.9%) and NIRS-based BCI (75.9 ± 6.3%). These results confirmed the performance improvement of the hybrid BCI even when restricted to frontal brain areas. Our study shows that an eyes-closed hybrid BCI approach based on frontal areas could be applied to neurodegenerative patients who have lost motor functions, including oculomotor functions.
Project description: Event-related potentials (ERPs) represent neuronal activity in the brain elicited by external visual or auditory stimulation and are widely used in brain-computer interface (BCI) systems. ERP responses are elicited a few hundred milliseconds after attending to an oddball stimulus; target and non-target stimuli are repeatedly flashed, and the ERP trials are averaged over time to improve decoding accuracy. To reduce this time-consuming process, previous studies have attempted to evoke stronger ERP responses by changing certain experimental parameters, such as color, size, or the use of a face image as a target symbol. Since these exogenous potentials can be evoked naturally by merely looking at a target symbol, the BCI system could generate unintended commands while subjects gaze at one of the symbols in a non-intentional mental state. We approached this problem of unintended command generation by assuming that greater user effort in a short-term imagery task would evoke a more discriminative ERP response. Three tasks were defined: passive attention, counting, and pitch-imagery. Users were instructed to passively attend to a target symbol, to mentally tally the number of target presentations, or to perform the novel task of imagining a high-pitch tone when the target symbol was highlighted. The decoding accuracies were 71.4%, 83.5%, and 89.2% for passive attention, counting, and pitch-imagery, respectively, after the fourth averaging procedure. We found stronger deflections in the N500 component corresponding to the levels of mental effort (passive attention: −1.094 ± 0.88 μV, counting: −2.226 ± 0.97 μV, and pitch-imagery: −2.883 ± 0.74 μV), which strongly influenced the decoding accuracy.
In addition, the binary classification rate between the passive attention and pitch-imagery tasks was 73.5%, an adequate rate that motivated us to propose a two-stage classification strategy, wherein the target symbol is estimated in the first stage and the passive or active mental state is decoded in the second stage. In this study, we found that the ERP response and the decoding accuracy are strongly influenced by the user's voluntary mental tasks. This could benefit practical ERP systems in two respects. Firstly, user-voluntary tasks can be easily utilized in many different types of BCI systems, and performance enhancement is less dependent on manipulating the system's external visual stimulus parameters. Secondly, we propose an ERP system that classifies the brain state as intended or unintended by considering the measurable EEG differences between passively gazing and actively performing the pitch-imagery task, thus minimizing unintended commands to the BCI system.
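The proposed two-stage strategy can be sketched as a gate: stage 1 decodes which symbol was attended, stage 2 decides whether the mental state was passive gaze or active pitch-imagery, and a command is issued only in the active case. Both classifiers and the feature data below are placeholders for the paper's ERP features.

```python
# Sketch of a two-stage ERP decoder: symbol estimation gated by intent detection.
# Synthetic features and logistic-regression models are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_feat = 120, 16
X = rng.normal(size=(n_trials, n_feat))
symbol = rng.integers(0, 6, size=n_trials)      # which of 6 symbols was attended
active = rng.integers(0, 2, size=n_trials)      # 0 = passive gaze, 1 = pitch-imagery

stage1 = LogisticRegression(max_iter=1000).fit(X, symbol)   # target-symbol decoder
stage2 = LogisticRegression(max_iter=1000).fit(X, active)   # intent gate

def decode(x):
    x = x.reshape(1, -1)
    if stage2.predict(x)[0] == 0:        # passive state: suppress output,
        return None                      # no command issued
    return int(stage1.predict(x)[0])     # active state: issue the decoded symbol

print(decode(X[0]))
```

The gate is what suppresses the unintended commands described above: a symbol is only emitted when the second-stage classifier judges the user to be in the active imagery state.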
Project description: Support vector machines (SVMs) have developed into a gold standard for accurate classification in brain-computer interfaces (BCIs). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of hidden Markov models (HMMs) for online BCIs and discuss strategies to improve their performance. We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from electrocorticograms of four subjects performing a finger-tapping experiment. The classifier decisions are based on a subset of low-frequency time-domain and high-gamma oscillation features. We show that decoding performance depends more on how features are extracted and selected than on the choice of classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high-gamma cortical response providing the most important decoding information for both techniques. We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online BCIs.
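HMM-based classification of the kind described above scores a trial under one model per movement class and picks the class with the higher likelihood; a left-to-right transition matrix is one example of the "model constraints" mentioned. The sketch below hand-sets two tiny Gaussian HMMs and evaluates them with the forward algorithm in log space; all numbers are illustrative, and the paper's features and training procedure are not reproduced.

```python
# Sketch: classify a 1-D observation sequence by forward-algorithm likelihood
# under two hand-set left-to-right Gaussian HMMs (hypothetical parameters).
import numpy as np

def log_forward(obs, log_A, log_pi, means, var):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM."""
    def log_emission(x):
        return -0.5 * (np.log(2 * np.pi * var) + (x - means) ** 2 / var)
    alpha = log_pi + log_emission(obs[0])
    for x in obs[1:]:
        alpha = log_emission(x) + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Two hypothetical 2-state models differing only in their emission means.
log_A = np.log(np.array([[0.9, 0.1], [1e-12, 1.0]]))  # left-to-right constraint
log_pi = np.log(np.array([1.0, 1e-12]))               # must start in state 0
model_low  = dict(log_A=log_A, log_pi=log_pi, means=np.array([0.0, 1.0]), var=1.0)
model_high = dict(log_A=log_A, log_pi=log_pi, means=np.array([3.0, 4.0]), var=1.0)

trial = np.array([2.8, 3.1, 4.2, 3.9])              # resembles the "high" class
scores = [log_forward(trial, **m) for m in (model_low, model_high)]
print("decoded class:", int(np.argmax(scores)))     # expect 1 (the "high" model)
```

Constraining transitions (here, forbidding backward jumps) shrinks the model's effective parameter space, which is one way constraints can improve decoding as the study reports.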
Project description: We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography to continuously control a formant-frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual (AV) feedback. Audio feedback was provided by a formant-frequency artificial speech synthesizer, and visual feedback was given as a 2-D cursor on a graphical representation of the plane defined by the first two formant frequencies. We found that combined AV feedback led to the best performance in terms of percent accuracy, distance to target, and movement time to target, compared with unimodal feedback of either auditory or visual information. These results indicate that performance is enhanced when multimodal feedback is meaningful for the BCI task goals, rather than serving as a generic biofeedback signal of BCI progress.
Project description: OBJECTIVE: Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: MT = a + b·log2(D/R + 1) (where MT is movement time, D is target distance, R is target radius, and a, b are parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R), and (2) movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). APPROACH: Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. MAIN RESULTS: We found that movement times were better described by an equation in which movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (no logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder.
SIGNIFICANCE:The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
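The scale-invariance property of Fitts' law discussed above can be demonstrated numerically: two targets with the same D/R ratio receive identical predicted movement times regardless of absolute scale, which is exactly the property the iBCI movements in the study violate. The parameters a and b below are arbitrary illustrative values.

```python
# Fitts' law prediction: MT depends only on the ratio D/R, not absolute scale.
from math import log2

def fitts_mt(distance, radius, a=0.2, b=0.15):
    """Fitts' law: MT = a + b * log2(D/R + 1); a, b are illustrative parameters."""
    return a + b * log2(distance / radius + 1)

# Same D/R = 10 at two absolute scales -> identical Fitts prediction.
print(fitts_mt(10.0, 1.0))   # large-scale movement
print(fitts_mt(1.0, 0.1))    # small-scale movement, same ratio
```

For the iBCI data in the study, the small-scale case would instead take much longer, because signal-independent decoder noise sets a floor on achievable precision.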