Decoding the processing stages of mental arithmetic with magnetoencephalography.
ABSTRACT: Elementary arithmetic is highly prevalent in our daily lives. However, despite decades of research, we are only beginning to understand how the brain solves simple calculations. Here, we applied machine learning techniques to magnetoencephalography (MEG) signals in an effort to decompose the successive processing stages and mental transformations underlying elementary arithmetic. Adult subjects verified single-digit addition and subtraction problems such as 3 + 2 = 9, in which each successive symbol was presented sequentially. MEG signals revealed a cascade of partially overlapping brain states. While the first operand could be transiently decoded above chance level, primarily based on its visual properties, the decoding of the second operand was more accurate and lasted longer. Representational similarity analyses suggested that this decoding rested on both visual and magnitude codes. We were also able to decode the operation type (additions vs. subtractions) during practically the entire trial after the presentation of the operation sign. At the decision stage, MEG indicated fast and highly overlapping temporal dynamics for (1) identifying the proposed result, (2) judging whether it was correct or incorrect, and (3) pressing the response button. Surprisingly, however, the internally computed result could not be decoded. Our results provide a first comprehensive picture of the unfolding processing stages underlying arithmetic calculations at the single-trial level, and suggest that externally and internally generated neural codes may have different neural substrates.
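The time-resolved decoding described above can be sketched as training one classifier per time point of the epoched MEG data. Everything below is synthetic and illustrative; the array shapes, the injected effect, and the classifier choice are assumptions, not the study's actual pipeline:

```python
# Sketch of time-resolved ("decoding over time") analysis: one classifier
# per time point. Data, shapes, and effect size are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 30, 20
X = rng.standard_normal((n_trials, n_sensors, n_times))  # epoched MEG trials
y = rng.integers(0, 2, n_trials)                         # e.g. two stimulus classes
X[y == 1, :, 10:] += 0.8                                 # decodable signal after t = 10

# Train and cross-validate an independent classifier at each time point.
scores = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# Accuracy stays near chance (0.5) early and rises once the signal appears.
```

Plotting `scores` against time yields the kind of decoding time course from which onset, duration, and overlap of successive brain states can be read off.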
Project description: Positive number arithmetic is based on combining and separating sets of items, with systematic differences in brain activity in specific regions depending on the operation. In contrast, arithmetic with negative numbers involves manipulating abstract values worth less than zero, possibly involving different operation-activity relationships in these regions. Use of procedural arithmetic knowledge, including transformative rules like "minus a negative is plus a positive," may also differ by operand sign. Here, we examined whether the activity evoked by negative number arithmetic was similar to that seen for positive problems, using region of interest (ROI) analyses to examine a specific set of brain regions. Negative-operand problems demonstrated a positive-like effect of operation in the inferior parietal lobule, with more activity for subtraction than addition, as well as increased activity across operations. Interestingly, while positive-operand problems demonstrated the expected addition > subtraction activity difference in the angular gyrus, negative problems showed a reversed effect, with relatively more activity for subtraction than addition. Negative subtraction problems may be understood after translation to addition via rule, thereby invoking more addition-like activity. Whole-brain analyses showed increased right caudate activity for negative-operand problems across operations, indicating a possible overall increase in the use of procedural rules. Arithmetic with negative numbers may thus show some operation-activity relationships similar to those for positive numbers, but may also be affected by strategy. This study examines the flexibility of the mental number system by exploring the degree to which an applied use of a difficult, abstract mathematical concept is processed similarly to positive numbers.
Project description: We introduce a novel method capable of dissecting the succession of processing stages underlying mental arithmetic, thus revealing how two numbers are transformed into a third. We asked adults to point to the result of single-digit additions and subtractions on a number line, while their finger trajectory was continuously monitored. We found that the two operands are processed serially: the finger first points toward the larger operand, then slowly veers toward the correct result. This slow deviation unfolds proportionally to the size of the smaller operand, in both additions and subtractions. We also observed a transient operator effect, in which a plus sign attracted the finger to the right and a minus sign to the left, as well as a transient activation of the absolute value of the subtrahend. These findings support a model whereby addition and subtraction are computed by a stepwise displacement on the mental number line, starting with the larger number and incrementally adding or subtracting the smaller number.
Project description: Turner syndrome (TS) is a neurogenetic disorder characterized by the absence of one X chromosome in a phenotypic female. Individuals with TS are at risk for impairments in mathematics. We investigated the neural mechanisms underlying arithmetic processing in TS. Fifteen subjects with TS and 15 age-matched typically developing controls were scanned using functional MRI while they performed easy (two-operand) and difficult (three-operand) versions of an arithmetic processing task. Both groups activated fronto-parietal regions involved in arithmetic processing during the math tasks. Compared with controls, the TS group recruited additional neural resources in frontal and parietal regions during the easier, two-operand math task. During the more difficult three-operand task, individuals with TS demonstrated significantly less activation in frontal, parietal and subcortical regions than controls. However, the TS group's performance on both math tasks was comparable to controls. Individuals with TS demonstrate activation differences in fronto-parietal areas during arithmetic tasks compared with controls. They must recruit additional brain regions during a relatively easy task and demonstrate a potentially inefficient response to increased task difficulty compared with controls.
Project description: It has been proposed that elementary arithmetic induces spatial shifts of attention. However, the timing of this arithmetic-space association remains unknown. Here we investigate this issue with a target detection paradigm. Detecting targets in the right visual field was faster than in the left visual field when preceded by an addition operation, while detecting targets in the left visual field was faster than in the right visual field when preceded by a subtraction operation. The arithmetic-space association was found both at the end of the arithmetic operation and during calculation. In contrast, the processing of the operators themselves did not induce spatial biases. Our results suggest that the arithmetic-space association resides in the mental arithmetic operation rather than in the individual numbers or the operators. Moreover, the temporal course of this effect differed between addition and subtraction.
Project description: We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes.
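The zero-shot scheme implied above, predicting a noun's 20-dimensional property vector from MEG activity and then matching the prediction against the two held-out candidate nouns, can be sketched on synthetic data. All shapes, the ridge penalty, and the nearest-candidate rule here are illustrative assumptions:

```python
# Zero-shot noun decoding sketch: ridge-regress MEG patterns onto 20 binary
# semantic properties, then match held-out predictions to candidate nouns.
# All data are synthetic; shapes and the ridge penalty are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_nouns, n_props, n_meg = 60, 20, 50
F = rng.integers(0, 2, (n_nouns, n_props)).astype(float)   # yes/no properties per noun
M = F @ rng.standard_normal((n_props, n_meg)) \
    + 0.1 * rng.standard_normal((n_nouns, n_meg))          # simulated MEG patterns

train = np.arange(58)                                      # hold out the last two nouns
A = M[train]
B = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n_meg), A.T @ F[train])  # ridge weights

def closer_to_first(m, f_a, f_b):
    """True if the predicted property vector for pattern m is nearer f_a than f_b."""
    pred = m @ B
    return np.linalg.norm(pred - f_a) < np.linalg.norm(pred - f_b)

# The mapping never saw nouns 58 and 59; match each to the correct candidate.
correct = closer_to_first(M[58], F[58], F[59]) and closer_to_first(M[59], F[59], F[58])
```

Because the classifier operates on predicted semantic features rather than on noun identities, it can discriminate stimuli absent from the training set, which is what makes the approach zero-shot.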
Project description: It has been proposed that recent cultural inventions such as symbolic arithmetic recycle evolutionarily older neural mechanisms. A central assumption of this hypothesis is that the degree to which a preexisting mechanism is recycled depends on the degree of similarity between its initial function and the novel task. To test this assumption, we investigated whether the brain region involved in magnitude comparison in the intraparietal sulcus (IPS), localized by a numerosity comparison task, is recruited to a greater degree by arithmetic problems that involve number comparison (single-digit subtractions) than by problems that involve retrieving number facts from memory (single-digit multiplications). Our results confirmed that subtractions are associated with greater activity in the IPS than multiplications, whereas multiplications elicit greater activity than subtractions in regions involved in verbal processing, including the middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), that were localized by a phonological processing task. Pattern analyses further indicated that the neural mechanisms more active for subtraction than multiplication in the IPS overlap with those involved in numerosity comparison, and that the strength of this overlap predicts interindividual performance in the subtraction task. These findings provide novel evidence that elementary arithmetic relies on the cooption of evolutionarily older neural circuits.
Project description: It is crucial to understand what brain signals can be decoded from single trials with different recording techniques for the development of Brain-Machine Interfaces. A specific challenge for non-invasive recording methods is activations confined to small spatial areas of the cortex, such as the finger representation of one hand. Here we study the information content of single-trial brain activity in non-invasive MEG and EEG recordings elicited by finger movements of one hand. We investigate the feasibility of decoding which of four fingers of one hand performed a slight button press. With MEG we demonstrate reliable discrimination of single button presses performed with the thumb, the index, the middle or the little finger (average over all subjects and fingers: 57%; best subject: 70%; empirical guessing level: 25.1%). EEG decoding performance was less robust (average over all subjects and fingers: 43%; best subject: 54%; empirical guessing level: 25.1%). Spatiotemporal patterns of amplitude variations in the time series provided the best information for discriminating finger movements. Non-phase-locked changes of mu and beta oscillations were less predictive. Movement-related high gamma oscillations were observed in average induced oscillation amplitudes in the MEG but did not provide sufficient information about the finger's identity in single trials. Importantly, pre-movement neuronal activity provided information about the preparation of the movement of a specific finger. Our study demonstrates the potential of non-invasive MEG to provide informative features for individual finger control in a Brain-Machine Interface neuroprosthesis.
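An empirical guessing level such as the 25.1% reported above (slightly different from the theoretical 25% for four classes) is typically estimated by re-running the decoder on shuffled labels. A minimal sketch, using synthetic data and an arbitrary linear classifier as assumptions:

```python
# Sketch of estimating an empirical guessing level for four-class finger
# decoding by decoding with shuffled labels. Data and the classifier choice
# are illustrative assumptions, not the study's pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_features = 120, 40
X = rng.standard_normal((n_trials, n_features))   # single-trial features
y = rng.integers(0, 4, n_trials)                  # four fingers

null_scores = [
    cross_val_score(LinearSVC(max_iter=2000), X, rng.permutation(y), cv=5).mean()
    for _ in range(20)                            # label-permutation loop
]
empirical_chance = float(np.mean(null_scores))    # close to 0.25 for balanced classes
```

Comparing real decoding accuracy against this permutation-based null distribution accounts for finite-sample bias and class imbalance that a nominal 1/4 chance level ignores.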
Project description: In face-to-face communication, audio-visual (AV) stimuli can be fused, combined or perceived as mismatching. While the left superior temporal sulcus (STS) is presumably the locus of AV integration, the process leading to combination is unknown. Based on previous modelling work, we hypothesize that combination results from a complex dynamic originating in a failure to integrate AV inputs, followed by a reconstruction of the most plausible AV sequence. In two different behavioural tasks and one MEG experiment, we observed that combination is more time demanding than fusion. Using time-/source-resolved human MEG analyses with linear and dynamic causal models, we show that both fusion and combination involve early detection of AV incongruence in the STS, whereas combination is further associated with enhanced activity of AV asynchrony-sensitive regions (auditory and inferior frontal cortices). Based on neural signal decoding, we finally show that only combination can be decoded from the IFG activity and that combination is decoded later than fusion in the STS. These results indicate that the AV speech integration outcome primarily depends on whether or not the STS converges onto an existing multimodal syllable representation, and that combination results from subsequent temporal processing, presumably the off-line re-ordering of incongruent AV stimuli.
Project description: <b>Objective:</b> Brain-machine interfaces (BMIs) are useful for inducing plastic changes in cortical representation. A BMI first decodes hand movements using cortical signals and then converts the decoded information into movements of a robotic hand. By using the BMI robotic hand, the cortical representation decoded by the BMI is modulated to improve decoding accuracy. We developed a BMI based on real-time magnetoencephalography (MEG) signals to control a robotic hand using decoded hand movements. Subjects were trained to use the BMI robotic hand freely for 10 min to evaluate plastic changes in the cortical representation due to the training. <b>Method:</b> We trained nine young healthy subjects with normal motor function. In open-loop conditions, they were instructed to grasp or open their right hands during MEG recording. Time-averaged MEG signals were then used to train a real decoder to control the robotic hand in real time. Then, subjects were instructed to control the BMI-controlled robotic hand by moving their right hands for 10 min while watching the robot's movement. During this closed-loop session, subjects tried to improve their ability to control the robot. Finally, subjects performed the same offline task to compare cortical activities related to the hand movements. As a control, we used a random decoder trained by the MEG signals with shuffled movement labels. We performed the same experiments with the random decoder as a crossover trial. To evaluate the cortical representation, cortical currents were estimated using a source localization technique. Hand movements were also decoded by a support vector machine using the MEG signals during the offline task. The classification accuracy of the movements was compared among offline tasks.
<b>Results:</b> During BMI training with the real decoder, the subjects succeeded in improving their accuracy in controlling the BMI robotic hand, with correct rates increasing from 0.28 ± 0.13 to 0.50 ± 0.11 (<i>p</i> = 0.017, <i>n</i> = 8, paired Student's <i>t</i>-test). Moreover, the classification accuracy of hand movements during the offline task significantly increased after BMI training with the real decoder, from 62.7 ± 6.5 to 70.0 ± 11.1% (<i>p</i> = 0.022, <i>n</i> = 8, <i>t</i><sub>(7)</sub> = 2.93, paired Student's <i>t</i>-test), whereas accuracy did not significantly change after BMI training with the random decoder, from 63.0 ± 8.8 to 66.4 ± 9.0% (<i>p</i> = 0.225, <i>n</i> = 8, <i>t</i><sub>(7)</sub> = 1.33). <b>Conclusion:</b> BMI training is a useful tool to train the cortical activity necessary for BMI control and to induce plastic changes in that activity.
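The before/after comparison above rests on a paired Student's t-test across subjects. A minimal sketch with invented per-subject accuracy values (these numbers are for illustration only; they are not the study's data):

```python
# Illustrative paired t-test of per-subject offline decoding accuracy
# before vs. after BMI training. The values below are invented for the
# example; they are NOT the study's actual data.
import numpy as np
from scipy import stats

before = np.array([60.1, 55.3, 68.2, 64.0, 59.8, 66.5, 61.7, 65.9])  # % correct, n = 8
after  = np.array([68.4, 63.0, 75.1, 70.2, 66.8, 74.0, 69.5, 72.9])

t_stat, p_val = stats.ttest_rel(after, before)   # paired Student's t-test, df = 7
```

A paired test is the right choice here because each subject serves as their own control, removing between-subject variability from the comparison.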
Project description: Although distinct categories are reliably decoded from fMRI brain responses, it has proved more difficult to distinguish visually similar inputs, such as different faces. Here, we apply a recently developed deep learning system to reconstruct face images from human fMRI. We trained a variational auto-encoder (VAE) neural network using a GAN (Generative Adversarial Network) unsupervised procedure over a large data set of celebrity faces. The auto-encoder latent space provides a meaningful, topologically organized 1024-dimensional description of each image. We then presented several thousand faces to human subjects, and learned a simple linear mapping between the multi-voxel fMRI activation patterns and the 1024 latent dimensions. Finally, we applied this mapping to novel test images, translating fMRI patterns into VAE latent codes, and codes into face reconstructions. The system not only performed robust pairwise decoding (>95% correct), but also accurate gender classification, and even decoded which face was imagined, rather than seen.
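The linear mapping step, regressing voxel patterns onto latent codes and scoring pairwise decoding on held-out images, can be sketched on synthetic data. The shapes and the correlation-based pairwise rule are illustrative assumptions (the actual latent space has 1024 dimensions):

```python
# Sketch of the voxels -> latent-codes linear mapping with pairwise decoding
# on held-out items. All data are synthetic; the correlation-based pairwise
# rule and all shapes are assumptions, not the study's exact procedure.
import numpy as np

rng = np.random.default_rng(3)
n_imgs, n_latent, n_vox = 200, 32, 150
Z = rng.standard_normal((n_imgs, n_latent))              # latent codes (e.g. from a VAE)
V = Z @ rng.standard_normal((n_latent, n_vox)) \
    + 0.3 * rng.standard_normal((n_imgs, n_vox))         # simulated voxel patterns

train, test = np.arange(180), np.arange(180, 200)
W, *_ = np.linalg.lstsq(V[train], Z[train], rcond=None)  # least-squares mapping
Z_hat, Z_true = V[test] @ W, Z[test]

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Pairwise decoding: a pair counts as correct when matching each predicted
# code to its own true code beats the swapped assignment.
wins = total = 0
for i in range(len(test)):
    for j in range(i + 1, len(test)):
        total += 1
        same = corr(Z_hat[i], Z_true[i]) + corr(Z_hat[j], Z_true[j])
        swap = corr(Z_hat[i], Z_true[j]) + corr(Z_hat[j], Z_true[i])
        wins += same > swap
pairwise_acc = wins / total
```

Because the mapping is linear, it can also be inverted or applied to imagined-face activity; reconstruction then amounts to passing the predicted latent code through the VAE decoder.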