Project description: By decoding brain signals into control commands, brain-computer interfaces (BCIs) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERPs) of the electroencephalogram, auditory BCI systems must cope with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: "CharStreamer". The speller can be used with an instruction as simple as "please attend to what you want to spell". The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself rather than by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (from sound stimuli to actions) is thus minimized. Usability is further supported by alphabetical stimulus presentation: unlike with random presentation orders, the user can anticipate when the target letter sound will be presented. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. Online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer, and in user ratings it clearly outperformed a control setup with random presentation sequences.
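The study's own analysis pipeline is described only at a high level above. As a grounding sketch, the following shows a standard baseline for target-vs-non-target ERP classification (windowed mean amplitudes fed to shrinkage LDA) on simulated epochs; all shapes, the injected "ERP", and the classifier choice are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 300, 32, 100   # hypothetical: 1 s at 100 Hz
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)            # 1 = attended (target) sound

# Inject a toy "ERP" into target epochs so the example runs end to end.
X[y == 1, :, 40:60] += 0.5

# Features: mean amplitude per channel in ten consecutive 100 ms windows,
# a common choice for ERP classification.
features = np.concatenate([w.mean(axis=2) for w in np.split(X, 10, axis=2)],
                          axis=1)

# Shrinkage LDA copes well with many correlated ERP features.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(features[:200], y[:200])
print("held-out accuracy:", clf.score(features[200:], y[200:]))
```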
Project description: Most P300-based brain-computer interface (BCI) approaches use the visual modality for stimulation. For patients suffering from amyotrophic lateral sclerosis (ALS), this may not be the preferable choice because of sight deterioration. Moreover, using a modality other than the visual one minimizes interference with possible visual feedback. Therefore, a multi-class BCI paradigm is proposed that uses spatially distributed auditory cues. Ten healthy subjects participated in an offline oddball task with the spatial location of the stimuli serving as the discriminating cue. Experiments were conducted in free field, with an individual speaker for each location. Inter-stimulus intervals of 1000 ms, 300 ms, and 175 ms were tested. With averaging over multiple repetitions, selection scores exceeded 90% for most conditions, i.e., the correct location was selected in over 90% of the trials. One subject reached a 100% correct score. Corresponding information transfer rates were high, up to an average of 17.39 bits/minute for the 175 ms condition (best subject: 25.20 bits/minute). When the stimuli were presented through a single speaker, effectively canceling the spatial properties of the cue, selection scores dropped below 70% for most subjects. We conclude that the proposed spatial auditory paradigm is successful for healthy subjects and shows promising results that may lead to a fast BCI relying solely on the auditory sense.
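Figures such as "17.39 bits/minute" are conventionally computed with Wolpaw's information transfer rate: B = log2(N) + P log2(P) + (1 - P) log2((1 - P)/(N - 1)) bits per selection, scaled by selections per minute. A minimal sketch follows; the class count and trial timing in the example call are illustrative assumptions, not the study's parameters.

```python
import math

def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)                      # perfect accuracy edge case
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / seconds_per_selection

# e.g. an 8-class spatial selection at 90% accuracy, one selection
# every 8 s (hypothetical timing): ~16.9 bits/min.
print(round(itr_bits_per_min(8, 0.90, 8.0), 2), "bits/min")
```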
Project description: Gaze-independent brain-computer interfaces (BCIs) are a potential communication tool for persons with paralysis. This study investigates the effects of affective auditory stimuli in a P300 BCI. Fifteen able-bodied participants operated the P300 BCI with positive and negative affective sounds (PA: a meowing cat sound; NA: a screaming cat sound). Permuted versions of the positive and negative affective sounds (permuted-PA, permuted-NA) were used for comparison. Electroencephalography data were collected, and offline classification accuracies were compared. We used a visual analog scale (VAS) to measure positive and negative affective feelings in the participants. The mean classification accuracies were 84.7% for PA and 67.3% for permuted-PA, while the VAS scores were 58.5 for PA and -12.1 for permuted-PA; the positive affective stimulus thus showed significantly higher accuracy and VAS scores than its permuted counterpart. In contrast, mean classification accuracies were 77.3% for NA and 76.0% for permuted-NA, with VAS scores of -50.0 for NA and -39.2 for permuted-NA, which are not significantly different. We determined that a positive affective stimulus accompanied by positive affective feelings significantly improved BCI accuracy. Additionally, an ALS patient achieved 90% online classification accuracy. These results suggest that affective stimuli may be useful for building a practical auditory BCI system for patients with disabilities.
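The "significantly higher / not significantly different" statements above are paired, per-participant contrasts between a stimulus and its permuted control. A sketch of that kind of comparison follows; the actual statistical procedure is not stated here, and the accuracy arrays below are placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Placeholder per-participant offline accuracies for two conditions.
acc_pa = rng.uniform(0.75, 0.95, size=15)
acc_permuted_pa = acc_pa - rng.uniform(0.05, 0.25, size=15)

# Paired, non-parametric test across the 15 participants.
stat, p = wilcoxon(acc_pa, acc_permuted_pa)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```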
Project description: Background: Brain-computer interfaces (BCIs) can restore communication in movement- and/or speech-impaired individuals by enabling neural control of computer typing applications. Single-command "click" decoders provide a basic yet highly functional capability. Methods: We sought to test the performance and long-term stability of click decoding using a chronically implanted high-density electrocorticographic (ECoG) BCI with coverage of the sensorimotor cortex in a human clinical trial participant (ClinicalTrials.gov, NCT03567213) with amyotrophic lateral sclerosis (ALS). We trained the participant's click decoder using a small amount of training data (<44 minutes across four days) collected up to 21 days prior to BCI use, and then tested it over a period of 90 days without any retraining or updating. Results: Using this click decoder to navigate a switch-scanning spelling interface, the study participant was able to maintain a median spelling rate of 10.2 characters per minute. Though a transient reduction in signal power modulation interrupted testing with this fixed model, a new click decoder achieved comparable performance despite being trained with even less data (<15 minutes, within one day). Conclusion: These results demonstrate that a click decoder can be trained with a small ECoG dataset while retaining robust performance for extended periods, providing functional text-based communication to BCI users.
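For intuition, a switch-scanning speller of the kind navigated by such a click decoder steps through rows and then through letters within the selected row, with each decoded click committing the current highlight. The toy model below illustrates the selection logic only; the grid layout and indices are invented, not the trial's interface.

```python
# Hypothetical letter grid; '_' stands in for the space character.
GRID = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ_.,?"]

def scan_select(click_at_row, click_at_col):
    """Return the letter selected when the decoder clicks at the given
    row-scan step and then at the given column-scan step (0-based)."""
    row = GRID[click_at_row % len(GRID)]     # first click picks a row
    return row[click_at_col % len(row)]      # second click picks a letter

print(scan_select(1, 4))  # -> 'K'
```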
Project description: Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or who have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli have been suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code the rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on two consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min), which increased to 90% (4.23 bits/min) on the second day. Spelling accuracy for the participant with ALS was 20% in the first session and 47% in the second. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracy. The study demonstrates the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCIs with disease.
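The row/column coding can be pictured as a two-cue lookup: one attended animal voice selects a row, one attended direction selects a column. The sketch below illustrates that mapping; the specific voices, directions, and matrix layout are invented for illustration and are not the study's assignments.

```python
# Toy 5x5 letter matrix with invented row/column cue assignments.
MATRIX = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXY"]
ROW_VOICE = {"dog": 0, "cat": 1, "bird": 2, "frog": 3, "owl": 4}
COL_DIRECTION = {"far-left": 0, "left": 1, "center": 2,
                 "right": 3, "far-right": 4}

def decode_letter(attended_voice, attended_direction):
    """Combine the two decoded attention targets into one letter."""
    return MATRIX[ROW_VOICE[attended_voice]][COL_DIRECTION[attended_direction]]

print(decode_letter("cat", "center"))  # -> 'H'
```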
Project description: The auditory brain-computer interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. Auditory BCIs can exploit many stimulus characteristics as cues, such as tone, pitch, and voice. Spatial information about auditory stimuli also provides useful information for a BCI. In a portable system, however, virtual auditory stimuli must be presented spatially through earphones or headphones instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables virtual auditory stimuli to be presented to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended to the direction of a presented stimulus. The mean accuracy across subjects was 70.0% for single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performance. These results suggest that out-of-head sound localization makes a high-performance, loudspeaker-less, portable BCI system possible.
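The jump from 70.0% (single-trial) to 89.5% (10-trial averaging) reflects a general principle: averaging same-class epochs raises the signal-to-noise ratio before classification. The sketch below reproduces that effect on simulated data with a linear SVM; the feature count, signal strength, and split are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_features = 600, 64
pattern = rng.normal(size=n_features)        # toy "attended" EEG pattern
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1] += 0.15 * pattern                  # weak single-trial signal

def k_trial_average(X, y, k):
    """Average k same-class trials, mimicking k-trial epoch averaging."""
    Xa, ya = [], []
    for c in (0, 1):
        Xc = X[y == c]
        for i in range(0, len(Xc) - k + 1, k):
            Xa.append(Xc[i:i + k].mean(axis=0))
            ya.append(c)
    return np.array(Xa), np.array(ya)

for k in (1, 10):
    Xk, yk = k_trial_average(X, y, k)
    Xtr, Xte, ytr, yte = train_test_split(Xk, yk, test_size=0.5,
                                          random_state=0, stratify=yk)
    acc = SVC(kernel="linear").fit(Xtr, ytr).score(Xte, yte)
    print(f"{k}-trial averaging: accuracy {acc:.2f}")
```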
Project description: Efforts to study the neural correlates of learning are hampered by the size of the network in which learning occurs. To understand the importance of learning-related changes in a network of neurons, it is necessary to understand how the network acts as a whole to generate behavior. Here we introduce a paradigm in which the output of a cortical network can be perturbed directly and the neural basis of the compensatory changes studied in detail. Using a brain-computer interface, dozens of simultaneously recorded neurons in the motor cortex of awake, behaving monkeys are used to control the movement of a cursor in a three-dimensional virtual-reality environment. This device creates a precise, well-defined mapping between the firing of the recorded neurons and an expressed behavior (cursor movement). In a series of experiments, we force the animal to relearn the association between neural firing and cursor movement in a subset of neurons and assess how the network changes to compensate. We find that changes in neural activity reflect not only an alteration of behavioral strategy but also the relative contributions of individual neurons to the population error signal.
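The core manipulation can be pictured as a linear readout of population firing rates whose weights are altered for a chosen subset of units, forcing the rest of the network to compensate. The linear form, unit counts, and sign-flip perturbation below are illustrative stand-ins; the study's actual decoder is not specified here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units = 40
W = rng.normal(size=(3, n_units))            # 3-D cursor readout weights

def cursor_velocity(rates, weights):
    """Linear population decode: velocity = weights @ firing rates."""
    return weights @ rates

rates = rng.poisson(5.0, size=n_units).astype(float)
v_before = cursor_velocity(rates, W)

# Perturb the mapping for a subset of units (toy sign flip); the animal
# must relearn the association for exactly these units.
W_pert = W.copy()
subset = rng.choice(n_units, size=10, replace=False)
W_pert[:, subset] *= -1.0

v_after = cursor_velocity(rates, W_pert)
print("velocity before:", v_before, "\nvelocity after: ", v_after)
```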
Project description: This article investigates a possible brain-computer interface (BCI) based on semantic relations. The BCI determines which prime word a subject has in mind by presenting probe words selected by an intelligent algorithm. Subjects indicate that a presented probe word is related to the prime word with a single finger tap. The detection of the neural signal associated with this movement is used by the BCI to decode the prime word. The movement detector combines both the evoked (ERP) and induced (ERD) responses elicited by the movement. Single-trial movement detection had an average accuracy of 67%. Decoding of the prime word had an average accuracy of 38% when using 100 probes and 150 possible targets, and 41% after applying a dynamic stopping criterion, which reduced the average number of probes to 47. The article shows that the intelligent algorithm used to select the probe words performs significantly better than a random selection of probes. Simulations demonstrate that the BCI also works with larger vocabularies, with performance scaling logarithmically with vocabulary size.
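One way to realize such an "intelligent" probe sequence is a Bayesian search: maintain a posterior over candidate prime words, pick the probe whose expected response splits the posterior mass closest to 50/50 (which is what yields the logarithmic scaling), and stop once one candidate dominates. The relatedness graph, symmetric noise model, and 0.95 threshold below are assumptions for illustration, not the article's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n_words = 150                                  # vocabulary size (from abstract)
rel = rng.random((n_words, n_words)) < 0.1     # toy graph: rel[cand, probe]
np.fill_diagonal(rel, True)
p_detect = 0.67                                # single-trial detection accuracy

true_prime = 42
posterior = np.full(n_words, 1.0 / n_words)

for n_probes in range(1, 101):
    # Choose the probe whose "related" posterior mass is closest to 1/2.
    mass = rel.T.astype(float) @ posterior
    probe = int(np.argmin(np.abs(mass - 0.5)))

    # Simulate the noisy movement detector (symmetric noise assumed).
    tapped = rng.random() < (p_detect if rel[true_prime, probe]
                             else 1 - p_detect)

    # Bayesian update of every candidate prime.
    like = np.where(rel[:, probe],
                    p_detect if tapped else 1 - p_detect,
                    (1 - p_detect) if tapped else p_detect)
    posterior *= like
    posterior /= posterior.sum()

    if posterior.max() > 0.95:                 # dynamic stopping criterion
        break

print(f"decoded word {int(posterior.argmax())} after {n_probes} probes")
```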