Project description:Gamma oscillations are widely seen in the awake and sleeping cerebral cortex, but the exact role of these oscillations is still debated. Here, we used biophysical models to examine how Gamma oscillations may participate in the processing of afferent stimuli. We constructed conductance-based network models of Gamma oscillations, based on different cell types found in cerebral cortex. The models were adjusted to extracellular unit recordings in humans, where Gamma oscillations always coexist with the asynchronous firing mode. We considered three different mechanisms to generate Gamma: first, a mechanism based on the interaction between pyramidal neurons and interneurons (PING); second, a mechanism in which Gamma is generated by interneuron networks (ING); and third, a mechanism which relies on Gamma oscillations generated by pacemaker chattering neurons (CHING). We find that all three mechanisms generate features consistent with human recordings, but that the ING mechanism is most consistent with the firing rate change inside Gamma bursts seen in the human data. We next evaluated the responsiveness and resonant properties of these networks, contrasting Gamma oscillations with the asynchronous mode. We find that for both slowly-varying stimuli and precisely-timed stimuli, the responsiveness is generally lower during Gamma compared to asynchronous states, while resonant properties are similar around the Gamma band. We could not find conditions where Gamma oscillations were more responsive. We therefore predict that asynchronous states provide the highest responsiveness to external stimuli, while Gamma oscillations tend to overall diminish responsiveness.
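The resonance analysis sketched in the description above can be illustrated with a much simpler stand-in: a damped harmonic oscillator whose response gain peaks near a Gamma-band resonant frequency. This is a minimal sketch only, assuming a linear rate-model caricature; the function name, parameter values, and the damping constant are all illustrative choices, not the paper's conductance-based models.

```python
import math

def response_gain(f_stim, f_res=40.0, damping=10.0):
    """Amplitude gain of a damped harmonic-oscillator rate model at a
    stimulus frequency f_stim (Hz). Toy stand-in for probing network
    resonance; f_res=40 Hz places the peak in the Gamma band."""
    w, w0 = 2 * math.pi * f_stim, 2 * math.pi * f_res
    return 1.0 / math.sqrt((w0 ** 2 - w ** 2) ** 2 + (damping * w) ** 2)

# The gain curve peaks near the resonant (Gamma-band) frequency.
gains = {f: response_gain(f) for f in (10, 20, 40, 60, 80)}
```

Sweeping the stimulus frequency and comparing gains is the same logic as probing a network's resonant properties with periodic input, just reduced to one closed-form equation.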
Project description:Researchers working with neural networks have historically focused on either non-spiking neurons tractable for running on computers or more biologically plausible spiking neurons typically requiring special hardware. However, in nature, homogeneous networks of neurons do not exist. Instead, spiking and non-spiking neurons cooperate, each bringing a different set of advantages. A well-researched biological example of such a mixed network is a sensorimotor pathway, responsible for mapping sensory inputs to behavioral changes. This type of pathway is also well-researched in robotics, where it is applied to achieve closed-loop operation of legged robots by adapting the amplitude, frequency, and phase of the motor output. In this paper we investigate how spiking and non-spiking neurons can be combined to create a sensorimotor pathway capable of shaping network output based on analog input. We propose sub-threshold operation of an existing spiking neuron model to create a non-spiking neuron able to interpret analog information and communicate with spiking neurons. The validity of this methodology is confirmed through a simulation of a closed-loop amplitude regulating network inspired by the internal feedback loops found in insects for posturing. Additionally, we show that non-spiking neurons can effectively manipulate post-synaptic spiking neurons in an event-based architecture. The ability to work with mixed networks provides an opportunity for researchers to investigate new network architectures for adaptive controllers, potentially improving locomotion strategies of legged robots.
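The core idea above, running a spiking neuron model below its firing threshold so its membrane potential acts as a graded analog signal, can be sketched with a generic leaky integrate-and-fire unit. This is a hypothetical minimal model, not the specific neuron model used in the paper; the class name, time constants, and drive scaling are illustrative assumptions.

```python
import math

class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron (illustrative parameters)."""
    def __init__(self, tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
        self.tau, self.v_rest, self.v_thresh, self.dt = tau, v_rest, v_thresh, dt
        self.v = v_rest

    def step(self, i_in):
        # Euler step of dv/dt = (v_rest - v + i_in) / tau
        self.v += self.dt * (self.v_rest - self.v + i_in) / self.tau
        if self.v >= self.v_thresh:
            self.v = self.v_rest
            return 1  # spike event
        return 0

# Sub-threshold operation: the threshold is set far above the reachable
# voltage range, so this neuron never spikes and its membrane potential
# serves as a graded (analog) output driving a downstream spiking cell.
analog = LIFNeuron(v_thresh=10.0)
spiking = LIFNeuron(v_thresh=1.0)

trace = []
for t in range(200):
    sensory = 0.5 * (1 + math.sin(2 * math.pi * t / 50))  # analog input in [0, 1]
    analog.step(sensory)
    spiking.step(2.0 * analog.v)  # graded potential modulates downstream drive
    trace.append(analog.v)
```

The same neuron class serves both roles; only the threshold setting decides whether it behaves as a spiking or a non-spiking (analog) unit, which is the interoperability the paper exploits.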
Project description:Interictal high-frequency oscillations (HFO) detected in electroencephalography recordings have been proposed as biomarkers of epileptogenesis, seizure propensity, disease severity, and treatment response. Automatic HFO detectors typically analyze the data offline using complex time-consuming algorithms, which limits their clinical application. Neuromorphic circuits offer the possibility of building compact and low-power processing systems that can analyze data on-line and in real time. In this review, we describe a fully automated detection pipeline for HFO that uses, for the first time, spiking neural networks and neuromorphic technology. We demonstrated that our HFO detection pipeline can be applied to recordings from different modalities (intracranial electroencephalography, electrocorticography, and scalp electroencephalography) and validated its operation in a custom-designed neuromorphic processor. Our HFO detection approach resulted in high accuracy and specificity in the prediction of seizure outcome in patients implanted with intracranial electroencephalography and electrocorticography, and in the prediction of epilepsy severity in patients recorded with scalp electroencephalography. Our research provides a further step toward the real-time detection of HFO using compact and low-power neuromorphic devices. The real-time detection of HFO in the operating room may improve the seizure outcome of epilepsy surgery, while the use of our neuromorphic processor for non-invasive therapy monitoring might allow for more effective medication strategies to achieve seizure control. Therefore, this work has the potential to improve the quality of life in patients with epilepsy by improving epilepsy diagnostics and treatment.
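For context on what an HFO detector must do, here is a sketch of a conventional offline baseline: an RMS-threshold event detector applied to an already band-passed trace, in the spirit of classic energy-based detectors. This is not the spiking/neuromorphic pipeline described above; the function name, window lengths, and threshold settings are illustrative assumptions.

```python
import math

def detect_hfo(signal, fs=2000, rms_win_ms=10, thresh_sd=3.0, min_dur_ms=6):
    """Flag events where the sliding-window RMS of a band-passed signal
    exceeds mean + thresh_sd * SD for at least min_dur_ms milliseconds.
    Returns (start, end) indices into the RMS trace."""
    win = max(1, int(fs * rms_win_ms / 1000))
    rms = [math.sqrt(sum(x * x for x in signal[i:i + win]) / win)
           for i in range(len(signal) - win + 1)]
    mean = sum(rms) / len(rms)
    sd = math.sqrt(sum((r - mean) ** 2 for r in rms) / len(rms))
    thr = mean + thresh_sd * sd
    min_len = int(fs * min_dur_ms / 1000)
    events, start = [], None
    for i, r in enumerate(rms):
        if r > thr and start is None:
            start = i
        elif r <= thr and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(rms) - start >= min_len:
        events.append((start, len(rms)))
    return events

# Synthetic test trace: low-amplitude 120 Hz background with one
# high-amplitude 50 ms burst standing in for a ripple-band HFO.
fs = 2000
signal = [0.1 * math.sin(2 * math.pi * 120 * t / fs) for t in range(2000)]
for t in range(1000, 1100):
    signal[t] = 2.0 * math.sin(2 * math.pi * 120 * t / fs)
events = detect_hfo(signal, fs=fs)
```

The appeal of the neuromorphic approach is precisely that it replaces this kind of sliding-window, whole-recording computation with on-line spiking dynamics that flag events as they occur.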
Project description:We propose a novel, scalable, and accurate method for detecting neuronal ensembles from a population of spiking neurons. Our approach offers a simple yet powerful tool to study ensemble activity. It relies on clustering synchronous population activity (population vectors), allows the participation of neurons in different ensembles, has few parameters to tune and is computationally efficient. To validate the performance and generality of our method, we generated synthetic data, where we found that our method accurately detects neuronal ensembles for a wide range of simulation parameters. We found that our method outperforms current alternative methodologies. We used spike trains of retinal ganglion cells obtained from multi-electrode array recordings under a simple ON-OFF light stimulus to test our method. We found a consistent stimulus-evoked ensemble activity intermingled with spontaneously active ensembles and irregular activity. Our results suggest that the early visual system activity could be organized in distinguishable functional ensembles. We provide a Graphical User Interface, which facilitates the use of our method by the scientific community.
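The key step above, clustering synchronous population vectors, can be sketched with a simple greedy scheme: each binary population vector is assigned to the most similar existing ensemble centroid, or seeds a new one. This is a simplified hypothetical stand-in for the paper's method; the function, similarity threshold, and synthetic data are illustrative.

```python
import math

def detect_ensembles(pop_vectors, sim_threshold=0.6):
    """Greedily cluster binary population vectors by cosine similarity
    to running centroids. Returns per-vector labels and centroid means."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    centroids, counts, labels = [], [], []
    for v in pop_vectors:
        sims = [cosine(v, [c / m for c in cen])
                for cen, m in zip(centroids, counts)]
        if sims and max(sims) >= sim_threshold:
            k = sims.index(max(sims))          # join the closest ensemble
            centroids[k] = [c + x for c, x in zip(centroids[k], v)]
            counts[k] += 1
        else:
            centroids.append(list(v))          # seed a new ensemble
            counts.append(1)
            k = len(centroids) - 1
        labels.append(k)
    means = [[c / m for c in cen] for cen, m in zip(centroids, counts)]
    return labels, means

# Two synthetic ensembles over 20 neurons: cells 0-9 vs. cells 10-19,
# firing in alternating synchronous frames (an ON-OFF-like toy case).
on_cells = [1] * 10 + [0] * 10
off_cells = [0] * 10 + [1] * 10
frames = [on_cells, off_cells] * 5
labels, centroids = detect_ensembles(frames)
```

On this toy input the two alternating patterns are recovered as two ensembles; the real method additionally handles neurons shared between ensembles and noisy participation.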
Project description:Traditional object detection methods usually underperform when locating tiny or small drones against complex backgrounds, since the appearance features of the targets and the backgrounds are highly similar. To address this, inspired by the magnocellular motion processing mechanisms, we propose to exploit the spatial-temporal characteristics of flying drones using spiking neural networks, developing the Magno-Spiking Neural Network (MG-SNN) for drone detection. The MG-SNN learns to identify potential regions of moving targets through motion saliency estimation and integrates this information into popular object detection algorithms, yielding a retinal-inspired spiking neural network module for drone motion extraction and an object detection architecture that fuses motion and spatial features before detection to enhance accuracy. To design and train the MG-SNN, we propose a new backpropagation method called Dynamic Threshold Multi-frame Spike Time Sequence (DT-MSTS), which effectively extracts and updates visual motion features, and we establish a dataset for the training and validation of MG-SNN. Experimental results indicate that incorporating the MG-SNN significantly improves the accuracy of low-altitude drone detection compared to popular small object detection algorithms, acting as a cheap plug-and-play module for detecting small flying targets against complex backgrounds.
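The motion-saliency idea above, highlighting regions whose intensity changes between frames while suppressing the static background, can be reduced to a temporal-difference-with-threshold sketch. This toy stand-in for the magnocellular-inspired pathway uses a hypothetical function and threshold; the MG-SNN itself is a learned spiking network, not a fixed rule.

```python
def motion_saliency(prev_frame, frame, thresh=0.2):
    """Binary saliency map: a pixel 'spikes' (1) when its intensity
    changes by more than thresh between consecutive frames."""
    return [[1 if abs(a - b) > thresh else 0
             for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev_frame, frame)]

# A static 3x5 background with a one-pixel "drone" moving right:
# only the vacated and newly occupied pixels are salient.
f0 = [[0.0] * 5 for _ in range(3)]; f0[1][1] = 1.0
f1 = [[0.0] * 5 for _ in range(3)]; f1[1][2] = 1.0
sal = motion_saliency(f0, f1)
```

The salient pixels mark candidate target regions that a downstream detector can then focus on, which is the fusion-before-detection role the MG-SNN module plays.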
Project description:The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
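One intuition from the description above, that homogeneous spike thresholds synchronize a population while heterogeneous thresholds disperse its activity, can be demonstrated with a bare integrate-to-threshold toy model. This is not the paper's mean-field model; the function, drive, and threshold spread are illustrative assumptions.

```python
import random

def first_spike_times(thresholds, drive=0.05, steps=400):
    """Integrate a constant shared drive in each unit and return the
    time step of its first threshold crossing (toy model)."""
    times = []
    for th in thresholds:
        v, t_spike = 0.0, None
        for t in range(steps):
            v += drive
            if v >= th:
                t_spike = t
                break
        times.append(t_spike)
    return times

rng = random.Random(0)
homog = [1.0] * 50                                    # identical thresholds
heterog = [1.0 + rng.uniform(-0.5, 0.5) for _ in range(50)]  # spread thresholds

t_homog = first_spike_times(homog)
t_het = first_spike_times(heterog)
```

The homogeneous population fires in a single synchronized volley, while the heterogeneous one spreads its spikes over time, the kind of desynchronization that, per the paper, preserves the dynamic repertoire of a coupled excitatory population.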
Project description:State-of-the-art computer vision systems use frame-based cameras that sample the visual scene as a series of high-resolution images. These are then processed using convolutional neural networks built from neurons with continuous outputs. Biological vision systems use a quite different approach, where the eyes (cameras) sample the visual scene continuously, often with a non-uniform resolution, and generate neural spike events in response to changes in the scene. The resulting spatio-temporal patterns of events are then processed through networks of spiking neurons. Such event-based processing offers advantages in terms of focusing constrained resources on the most salient features of the perceived scene, and those advantages should also accrue to engineered vision systems based upon similar principles. Event-based vision sensors, and event-based processing exemplified by the SpiNNaker (Spiking Neural Network Architecture) machine, can be used to model the biological vision pathway at various levels of detail. Here we use this approach to explore structural synaptic plasticity as a possible mechanism whereby biological vision systems may learn the statistics of their inputs without supervision, pointing the way to engineered vision systems with similar online learning capabilities.
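Structural synaptic plasticity, as opposed to weight-only plasticity, means physically rewiring connections: pruning weak synapses and growing new candidate ones. A minimal sketch of one rewiring step, as a toy rule rather than the SpiNNaker implementation from the work above, with all names and constants hypothetical:

```python
import random

def rewire(weights, n_inputs, rng, n_prune=1, nascent_w=0.5):
    """One structural-plasticity step on a neuron's afferent synapses:
    delete the weakest connection(s), then grow a replacement synapse
    to a randomly chosen currently unconnected input."""
    for _ in range(n_prune):
        weakest = min(weights, key=weights.get)
        del weights[weakest]
        free = [i for i in range(n_inputs) if i not in weights]
        weights[rng.choice(free)] = nascent_w  # nascent synapse
    return weights

rng = random.Random(42)
syn = {0: 0.9, 3: 0.05, 7: 0.6}      # input index -> synaptic weight
syn = rewire(syn, n_inputs=10, rng=rng)
```

Interleaving such rewiring steps with ordinary weight updates lets the connectivity pattern itself, not just the weights, adapt to input statistics, which is the unsupervised-learning mechanism the work explores.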
Project description:In computer simulations of spiking neural networks, it is often assumed that any two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate the networks with pulses and then measure their dynamic range, the dominant frequency of population activity, the total duration of activity, the maximum population rate, and the occurrence time of that maximum rate. The results are organized in phase diagrams. These phase diagrams give insight into the space of parameters - the excitatory-to-inhibitory ratio, the sparseness of connections, and the synaptic weights - and can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that this dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independent of external stimuli.
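The sweep described above, simulating many networks across the excitatory/inhibitory ratio and connection sparseness and summarizing each run with a scalar measure, can be sketched with a toy binary-threshold network probed by a pulse. The network model, thresholds, and measured quantity here are illustrative simplifications, not the paper's spiking simulations.

```python
import random

def simulate(n=100, p_conn=0.02, inh_frac=0.2, weight=0.4, steps=100, seed=0):
    """Pulse-stimulate a random binary-threshold network and return how
    many steps the population activity persists (toy measure)."""
    rng = random.Random(seed)
    n_inh = int(n * inh_frac)
    sign = [-1] * n_inh + [1] * (n - n_inh)   # inhibitory then excitatory
    w = [[weight * sign[j] if rng.random() < p_conn else 0.0
          for j in range(n)] for _ in range(n)]
    active = [1] * n                          # the initial pulse
    duration = 0
    for t in range(steps):
        drive = [sum(w[i][j] * active[j] for j in range(n)) for i in range(n)]
        active = [1 if d > 0.5 else 0 for d in drive]
        if sum(active) == 0:
            break                             # activity has died out
        duration = t + 1
    return duration

# One slice of a "phase diagram": activity duration over the
# (inhibitory fraction) x (connection probability) plane.
grid = {(inh, p): simulate(inh_frac=inh, p_conn=p)
        for inh in (0.1, 0.2, 0.5) for p in (0.02, 0.1, 0.3)}
```

Replacing the scalar here with dynamic range, dominant frequency, or peak rate, and sweeping synaptic weight as a third axis, reproduces the structure of the phase diagrams the study builds.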
Project description:Somatosensation is composed of two distinct modalities: touch, arising from sensors in the skin, and proprioception, resulting primarily from sensors in the muscles, combined with these same cutaneous sensors. In contrast to the wealth of information about touch, we know considerably less about the nature of the signals giving rise to proprioception at the cortical level. Likewise, while there is considerable interest in developing encoding models of touch-related neurons for application to brain machine interfaces, much less emphasis has been placed on an analogous proprioceptive interface. Here we investigate the use of Artificial Neural Networks (ANNs) to model the relationship between the firing rates of single neurons in area 2, a largely proprioceptive region of somatosensory cortex (S1), and several types of kinematic variables related to arm movement. To gain a better understanding of how these kinematic variables interact to create the proprioceptive responses recorded in our datasets, we train ANNs under different conditions, each involving a different set of input and output variables. We explore the kinematic variables that provide the best network performance, and find that adding information about joint angles and/or muscle lengths significantly improves the prediction of neural firing rates. Our results thus provide new insight regarding the complex representations of limb motion in S1: the firing rates of neurons in area 2 may be more closely related to the activity of peripheral sensors than to extrinsic hand position. In addition, we conduct numerical experiments to determine the sensitivity of ANN models to various choices of training design and hyper-parameters. Our results provide a baseline and new tools for future research that utilizes machine learning to better describe and understand the activity of neurons in S1.
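The comparison described above, fitting encoding models with different input sets and asking which kinematic variables best predict firing rates, can be sketched with plain gradient-descent regression on synthetic data. This is a deliberately simplified linear stand-in for the paper's ANNs; the variable names and the synthetic "firing rate" are assumptions for illustration.

```python
import random

def fit_linear(X, y, lr=0.05, epochs=500):
    """Stochastic-gradient linear regression; returns weights, bias,
    and final mean squared error on the training set."""
    w, b, n = [0.0] * len(X[0]), 0.0, len(X)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sum(wi * x for wi, x in zip(w, xi)) + b - yi
            w = [wi - lr * err * x for wi, x in zip(w, xi)]
            b -= lr * err
    mse = sum((sum(wi * x for wi, x in zip(w, xi)) + b - yi) ** 2
              for xi, yi in zip(X, y)) / n
    return w, b, mse

rng = random.Random(1)
hand_x = [rng.uniform(-1, 1) for _ in range(200)]   # extrinsic hand position
muscle = [rng.uniform(-1, 1) for _ in range(200)]   # muscle length
# Synthetic "area 2" firing rate driven mostly by muscle length.
rate = [5 + 0.3 * hx + 2.0 * ml for hx, ml in zip(hand_x, muscle)]

_, _, mse_hand = fit_linear([[hx] for hx in hand_x], rate)
_, _, mse_full = fit_linear([[hx, ml] for hx, ml in zip(hand_x, muscle)], rate)
```

Because the synthetic rate depends mainly on muscle length, the model given both inputs fits far better than the hand-position-only model, the same style of comparison the paper uses (with nonlinear ANNs and real recordings) to argue for peripheral-sensor-like representations.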
Project description:In biological neural systems, different neurons are capable of self-organizing to form different neural circuits for achieving a variety of cognitive functions. However, the current design paradigm of spiking neural networks is based on structures derived from deep learning. Such structures are dominated by feedforward connections and do not take into account different types of neurons, which significantly limits the ability of spiking neural networks to realize their potential on complex tasks. It remains an open challenge to apply the rich dynamical properties of biological neural circuits to model the structure of current spiking neural networks. This paper provides a more biologically plausible evolutionary space by combining feedforward and feedback connections with excitatory and inhibitory neurons. We exploit the local spiking behavior of neurons to adaptively evolve neural circuits such as forward excitation, forward inhibition, feedback inhibition, and lateral inhibition by the local law of spike-timing-dependent plasticity, and update the synaptic weights in combination with the global error signals. By using the evolved neural circuits, we construct spiking neural networks for image classification and reinforcement learning tasks. Using the brain-inspired Neural circuit Evolution strategy (NeuEvo) with rich neural circuit types, the evolved spiking neural network greatly enhances capability on perception and reinforcement learning tasks. NeuEvo achieves state-of-the-art performance on CIFAR10, DVS-CIFAR10, DVS-Gesture, and N-Caltech101 datasets and achieves advanced performance on ImageNet. Combined with on-policy and off-policy deep reinforcement learning algorithms, it achieves comparable performance with artificial neural networks. The evolved spiking neural circuits lay the foundation for the evolution of complex networks with diverse functions.
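The local rule invoked above, spike-timing-dependent plasticity (STDP), has a standard pair-based form: a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed otherwise, with exponentially decaying magnitude. A minimal sketch of that kernel follows; the amplitudes and time constant are conventional illustrative values, and this is the generic rule, not NeuEvo's full update combining STDP with global error signals.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP kernel. delta_t = t_post - t_pre (ms):
    potentiation for delta_t > 0 (pre before post), depression
    otherwise, both decaying exponentially with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

# Accumulate the weight change over all pre/post spike pairs.
pre_spikes = [10.0, 50.0]
post_spikes = [15.0, 45.0]
dw = sum(stdp_dw(tpost - tpre)
         for tpre in pre_spikes for tpost in post_spikes)
```

Applied across a recurrent population, the sign structure of this kernel is what lets locally correlated firing carve out motifs like forward excitation or lateral inhibition, the circuit types the evolutionary search then selects among.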