Cellular and Network Mechanisms for Temporal Signal Propagation in a Cortical Network Model
ABSTRACT: The mechanisms that allow high-intensity information to propagate effectively over a background of irregular firing and response latency in cognitive processes remain unclear. Here we propose an SSCCPI circuit to address this issue. We hypothesize that when a high-intensity thalamic input triggers synchronous spike events (SSEs), dense spikes are scattered to many receiving neurons within a cortical column in layer IV, many sparse spike trains are propagated in parallel along minicolumns at substantially high speed, and these trains are finally integrated into an output spike train toward or in layer Va. We derive sufficient conditions for an effective (fast, reliable, and precise) SSCCPI circuit: (i) SSEs are asynchronous (near-synchronous); (ii) cortical columns prevent both the repeated triggering of SSEs and incorrect synaptic connections between adjacent columns; and (iii) propagation through interneurons preserves complete temporal fidelity and reliability. We encode the membrane-potential responses to stimuli using a nonlinear autoregressive integrated process derived by applying Newton's second law to stochastic resilience systems, and we introduce a multithreshold decoder to correct encoding errors. Evidence supporting an effective SSCCPI circuit includes the following. For condition (i): time delay enhances SSEs, suggesting that response latency induces SSEs under high-intensity stimuli; irregular firing causes asynchronous SSEs; asynchronous SSEs are associated with healthy neurons; and strictly synchronous SSEs are associated with brain disorders. For condition (ii): neurons within a given minicolumn are stereotypically interconnected in the vertical dimension, which prevents repeated triggering of SSEs and ensures parallel signal propagation; columnar segregation avoids incorrect synaptic connections between adjacent columns; and signal propagation across layers overwhelmingly prefers the columnar direction.
For condition (iii): accumulating experimental evidence supports temporal transfer precision with millisecond fidelity and reliability in interneurons; homeostasis supports a stable fixed-point encoder by regulating changes in synaptic size, synaptic strength, and membrane ion-channel function; all-or-none modulation, active backpropagation, the additive effects of graded potentials, and response variability together functionally support the multithreshold decoder; and our simulations demonstrate that the encoder-decoder pair achieves complete temporal fidelity and reliability in special intervals contained within the stable fixed-point range. Hence, the SSCCPI circuit provides a possible mechanism for effective signal propagation in cortical networks.
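The multithreshold decoder is not specified in detail in this abstract. As an illustration of the general idea only (the threshold values, membrane-potential samples, and noise scale below are hypothetical, not taken from the paper), a minimal sketch quantizes each membrane-potential sample against an ordered set of thresholds, so that small encoding errors that stay between adjacent thresholds decode to the same level:

```python
import numpy as np

def multithreshold_decode(v, thresholds):
    """Map each membrane-potential sample to the index of the highest
    threshold it crosses (0 = subthreshold). Small perturbations that do
    not cross a threshold boundary decode to the same level, which is how
    a multilevel quantizer can absorb encoding errors."""
    levels = np.zeros(len(v), dtype=int)
    for i, th in enumerate(sorted(thresholds), start=1):
        levels[v >= th] = i
    return levels

rng = np.random.default_rng(0)
clean = np.array([-65.0, -50.0, -30.0, -10.0])   # mV, hypothetical samples
noisy = clean + rng.normal(0.0, 1.0, size=4)     # small encoding noise
ths = [-55.0, -40.0, -20.0]                      # hypothetical thresholds
print(multithreshold_decode(noisy, ths))
```

Here the noisy trace decodes to the same levels as the clean one because the noise is small relative to the gaps between thresholds.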
Project description: The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.
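In finite-blocklength analyses of this kind, the capacity-dispersion trade-off is typically expressed through a Gaussian approximation. As a generic sketch (the exact statement and error terms depend on the channel and the fidelity criterion, and are not given in this abstract):

```latex
% Second-order (Gaussian) approximation for the best achievable rate R
% at blocklength n and error tolerance \epsilon, with quantum capacity Q
% and channel dispersion V; \Phi^{-1} is the inverse standard normal CDF.
R(n,\epsilon) \approx Q + \sqrt{\frac{V}{n}}\,\Phi^{-1}(\epsilon) + O\!\left(\frac{\log n}{n}\right)
```

Since \(\epsilon < 1/2\), the correction term \(\Phi^{-1}(\epsilon)\) is negative, so at finite blocklength the achievable rate falls below the capacity by an amount controlled by the dispersion.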
Project description:Sensory processing requires mechanisms of fast coincidence detection to discriminate synchronous from asynchronous inputs. Spike threshold adaptation enables such a discrimination but is ineffective in transmitting this information to the network. We show here that presynaptic axonal sodium channels read and transmit precise levels of input synchrony to the postsynaptic cell by modulating the presynaptic action potential (AP) amplitude. As a consequence, synaptic transmission is facilitated at cortical synapses when the presynaptic spike is produced by synchronous inputs. Using dual soma-axon recordings, imaging, and modeling, we show that this facilitation results from enhanced AP amplitude in the axon due to minimized inactivation of axonal sodium channels. Quantifying local circuit activity and using network modeling, we found that spikes induced by synchronous inputs produced a larger effect on network activity than spikes induced by asynchronous inputs. Therefore, this input synchrony-dependent facilitation may constitute a powerful mechanism, regulating synaptic transmission at proximal synapses.
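The core biophysical intuition (less time spent at depolarized voltages means less sodium-channel inactivation, hence a larger axonal AP) can be sketched with a toy model. All parameters below (the steady-state availability curve, time constant, and ramp shapes) are hypothetical illustrations, not the paper's fitted values:

```python
import math

def h_inf(v):
    """Steady-state Na-channel availability vs. voltage (hypothetical fit)."""
    return 1.0 / (1.0 + math.exp((v + 60.0) / 6.0))

def availability_after_ramp(t_ramp_ms, v0=-70.0, v_th=-50.0,
                            tau_h_ms=5.0, dt=0.01):
    """Integrate dh/dt = (h_inf(V) - h)/tau_h while V ramps linearly
    from rest to spike threshold over t_ramp_ms (Euler method)."""
    h = h_inf(v0)
    steps = int(t_ramp_ms / dt)
    for i in range(steps):
        v = v0 + (v_th - v0) * i / steps
        h += dt * (h_inf(v) - h) / tau_h_ms
    return h

h_fast = availability_after_ramp(2.0)    # synchronous inputs: fast rise
h_slow = availability_after_ramp(50.0)   # asynchronous inputs: slow rise
# More channels remain available after the fast rise, so the axonal AP
# (whose amplitude grows with availability) is larger.
print(h_fast, h_slow)
```

The fast depolarization leaves substantially more channels available than the slow one, consistent with the synchrony-dependent facilitation described above.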
Project description: Data prediction and imputation are important parts of marine animal movement trajectory analysis, as they can help researchers understand animal movement patterns and address missing-data issues. Compared with traditional methods, deep learning methods can usually provide enhanced pattern-extraction capabilities, but their applications in marine data analysis are still limited. In this research, we propose a composite deep learning model to improve the accuracy of marine animal trajectory prediction and imputation. The model extracts patterns from the trajectories with an encoder network and reconstructs the trajectories from these patterns with a decoder network. We also use attention mechanisms to highlight certain extracted patterns for the decoder, and we feed these patterns into a second decoder for prediction and imputation. Our approach therefore couples unsupervised learning (the encoder and the first decoder) with supervised learning (the encoder and the second decoder). Experimental results demonstrate that our approach reduces errors by at least 10% on average compared with other methods.
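The data flow of such a composite model (one encoder, attention, and two decoders) can be sketched with a single forward pass. This is a schematic with random weights and a toy trajectory, not the paper's architecture; the layer shapes, tanh encoder, and dot-product attention are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d_in, d_h = 12, 2, 8          # trajectory length, (lat, lon), hidden size

traj = rng.normal(size=(T, d_in))                 # toy trajectory
W_enc = rng.normal(scale=0.3, size=(d_in, d_h))
W_rec = rng.normal(scale=0.3, size=(d_h, d_in))   # decoder 1: reconstruction
W_prd = rng.normal(scale=0.3, size=(d_h, d_in))   # decoder 2: prediction

H = np.tanh(traj @ W_enc)                 # encoder: per-step pattern vectors
# Dot-product attention: each step attends over all encoder states
scores = H @ H.T / np.sqrt(d_h)
A = np.exp(scores - scores.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)
C = A @ H                                 # attention-weighted context

recon = C @ W_rec                         # unsupervised branch: reconstruct
pred = C[-1] @ W_prd                      # supervised branch: next position
print(recon.shape, pred.shape)            # (12, 2) (2,)
```

Training would fit the reconstruction branch with an unsupervised loss and the prediction branch with a supervised loss, sharing the encoder, as the description outlines.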
Project description: Knowledge about the collective dynamics of cortical spiking is very informative about the underlying coding principles. However, even the most basic properties are not known with certainty, because their assessment is hampered by spatial subsampling, i.e., the limitation that only a tiny fraction of all neurons can be recorded simultaneously with millisecond precision. Building on a novel, subsampling-invariant estimator, we fit and carefully validate a minimal model of cortical spike propagation. The model interpolates between two prominent states: asynchronous and critical. We find neither of them in cortical spike recordings across various species, but instead identify a narrow "reverberating" regime. This approach enables us to predict yet-unknown properties from very short recordings and for every circuit individually, including responses to minimal perturbations, intrinsic network timescales, and the strength of external input compared to recurrent activation, thereby informing about the underlying coding principles for each circuit, area, state, and task.
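The minimal model referred to here is a branching-type process whose branching ratio m interpolates between the asynchronous (m near 0) and critical (m near 1) states, and the subsampling-invariant idea rests on the fact that the regression slope of activity k steps ahead decays as m**k. A toy sketch, with illustrative parameters (the actual estimator and recordings in the paper are more involved):

```python
import numpy as np

rng = np.random.default_rng(2)
m, h, T = 0.9, 10.0, 20000        # branching ratio, external drive, length

# Minimal branching model of population spike counts:
# A[t+1] ~ Poisson(m * A[t] + h)
A = np.empty(T)
A[0] = h / (1.0 - m)              # start near the stationary mean
for t in range(T - 1):
    A[t + 1] = rng.poisson(m * A[t] + h)

# Multistep-regression idea: the slope of A[t+k] regressed on A[t]
# decays as m**k, a relation that survives random subsampling of neurons.
ks = np.arange(1, 6)
slopes = [np.cov(A[:-k], A[k:])[0, 1] / np.var(A[:-k]) for k in ks]
m_hat = np.exp(np.polyfit(ks, np.log(slopes), 1)[0])
print(round(m_hat, 3))            # close to the true m = 0.9
```

A "reverberating" regime corresponds to m clearly below 1 but well above 0, so perturbations decay over an intermediate intrinsic timescale.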
Project description: Autoencoders are commonly used in representation learning. They consist of an encoder and a decoder, which provide a straightforward method to map n-dimensional data in input space to a lower m-dimensional representation space and back. The decoder itself defines an m-dimensional manifold in input space. Inspired by manifold learning, we show that the decoder can be trained on its own by learning the representations of the training samples along with the decoder weights using gradient descent. A sum-of-squares loss then corresponds to optimizing the manifold to have the smallest Euclidean distance to the training samples, and similarly for other loss functions. We derive expressions for the number of samples needed to specify the encoder and decoder and show that the decoder generally requires far fewer training samples to be well specified than the encoder. We discuss the training of autoencoders in this perspective and relate it to previous work in the field that uses noisy training examples and other types of regularization. On the natural-image data sets MNIST and CIFAR10, we demonstrate that the decoder is much better suited to learning a low-dimensional representation, especially when trained on small data sets. Using simulated gene-regulatory data, we further show that the decoder alone leads to better generalization and meaningful representations. Our approach of training the decoder alone facilitates representation learning even on small data sets and can lead to improved training of autoencoders. We hope that the simple analyses presented here will also contribute to an improved conceptual understanding of representation learning.
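Decoder-only training, as described above, means optimizing the per-sample representations jointly with the decoder weights by gradient descent on the reconstruction loss. A minimal sketch with a linear decoder and synthetic low-dimensional data (the data, learning rate, and linear decoder are illustrative assumptions; the paper works with images, gene-regulatory data, and general decoders):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 200, 10, 2              # samples, input dim, representation dim

# Toy data lying near a 2-D linear manifold in 10-D input space
X = rng.normal(size=(n, m)) @ rng.normal(size=(m, d)) \
    + 0.01 * rng.normal(size=(n, d))

# Learn per-sample representations Z *and* decoder weights W jointly
# by gradient descent on the sum-of-squares loss ||Z W - X||^2.
Z = 0.1 * rng.normal(size=(n, m))
W = 0.1 * rng.normal(size=(m, d))
lr, losses = 0.02, []
for _ in range(8000):
    R = Z @ W - X                 # residual between manifold and data
    losses.append((R ** 2).mean())
    W -= lr * (Z.T @ R) / n       # decoder-weight update
    Z -= lr * R @ W.T             # per-sample representation update
print(losses[0], losses[-1])      # loss drops as the manifold fits the data
```

Note that no encoder is ever trained: each sample's representation is a free parameter, which is why far fewer samples suffice to specify the decoder than an encoder mapping all of input space.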
Project description: Deep learning methods have been widely applied in visual and acoustic technology. In this paper, we propose an odor-labeling convolutional encoder-decoder (OLCE) for odor identification in machine olfaction. OLCE comprises a convolutional neural-network encoder and decoder in which the encoder output is constrained to odor labels. An electronic nose was used to collect gas responses, following a normative experimental procedure. Several evaluation indexes were calculated to assess the algorithm's effectiveness: accuracy 92.57%, precision 92.29%, recall 92.06%, F1-score 91.96%, and Kappa coefficient 90.76%. We also compared the model with other algorithms used in machine olfaction; the comparison demonstrated that OLCE had the best performance among them.
Project description: Reliable propagation of slow modulations of the firing rate across multiple layers of a feedforward network (FFN) has proven difficult to capture in spiking neural models. In this paper, we explore necessary conditions for the reliable and stable propagation of time-varying asynchronous spikes whose instantaneous rate of change, over fairly short time windows (20-100 ms), represents information about slow fluctuations of the stimulus. Specifically, we study the effects of network size, the level of background synaptic noise, and the variability of synaptic delays in an FFN with all-to-all connectivity. We show that network size and the level of background synaptic noise, together with the strength of synapses, are substantial factors enabling the propagation of asynchronous spikes in deep layers of an FFN. In contrast, the variability of synaptic delays has only a minor effect on signal propagation.
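The setting can be sketched with a toy all-to-all FFN of threshold units: a slow sinusoidal modulation of the input firing probability is passed layer by layer through noisy synaptic drive, and the deep-layer population rate is compared against the input signal. All parameters (gain, threshold, noise level, layer count) are illustrative assumptions tuned so the layer-to-layer rate transfer has roughly unit gain; they are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(4)
N, layers, T = 500, 4, 1000       # neurons/layer, depth, time bins (1 ms)

# Slow stimulus modulation carried by the instantaneous firing probability
t = np.arange(T)
f_in = 0.03 + 0.012 * np.sin(2 * np.pi * t / 250)

g, theta, sigma = 17.6, 2.41, 1.0  # synaptic gain, threshold, noise level
rates = np.zeros((layers, T))
for ti in range(T):
    spikes = rng.random(N) < f_in[ti]            # input layer
    for l in range(layers):
        # all-to-all drive from the previous layer plus background noise
        drive = g * spikes.mean() + sigma * rng.normal(size=N)
        spikes = drive > theta                   # threshold units
        rates[l, ti] = spikes.mean()

# Smooth the deep-layer population rate and compare to the slow signal
k = np.ones(25) / 25
deep = np.convolve(rates[-1], k, mode="same")
corr = np.corrcoef(f_in, deep)[0, 1]
print(round(corr, 2))             # positive: the slow modulation propagated
```

Shrinking N or raising sigma in this sketch degrades the correlation, mirroring the abstract's claim that network size and background noise (with synaptic strength) govern whether asynchronous rate signals survive in deep layers.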
Project description: The release of neurotransmitters from synapses obeys complex and stochastic dynamics. Depending on the recent history of synaptic activation, many synapses reduce the probability of releasing more neurotransmitter, a phenomenon known as synaptic depression. Our understanding of how synaptic depression affects information efficacy, however, is limited. Here we propose a mathematically tractable model of both synchronous spike-evoked release and asynchronous release that permits us to quantify the information conveyed by a synapse. The model transitions between discrete states of a communication channel, with the present state depending on many past time steps, emulating the gradual depression and exponential recovery of the synapse. Asynchronous and spontaneous releases play a critical role in shaping the information efficacy of the synapse. We prove that depression can enhance both the information rate and the information rate per unit of energy expended, provided that synchronous spike-evoked release depresses less (or recovers faster) than asynchronous release. Furthermore, we explore the theoretical implications of short-term synaptic depression adapting on longer time scales, as part of the phenomenon of metaplasticity. In particular, we show that a synapse can adjust its energy expenditure by changing the dynamics of short-term synaptic depression without affecting the net information conveyed by each successful release. Moreover, the optimal input spike rate is independent of the amplitude or time constant of synaptic depression. We analyze the information efficacy of three types of synapses for which the short-term dynamics of both synchronous and asynchronous release have been measured experimentally. In hippocampal autaptic synapses, the persistence of asynchronous release during depression cannot compensate for the reduction of synchronous release, so the rate of information transmission declines with synaptic depression.
In the calyx of Held, the information rate per release remains constant despite large variations in the measured asynchronous release rate. Lastly, we show that dopamine, by controlling asynchronous release at corticostriatal synapses, increases synaptic information efficacy in the nucleus accumbens.
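The "gradual depression and exponential recovery" dynamics mentioned above can be captured by a standard depressing-synapse recurrence: each spike scales the release variable by (1 - d), and between spikes it recovers exponentially toward its resting value. A sketch with hypothetical parameters (a regular 20 Hz train; the paper's channel model and information-rate calculation are more elaborate):

```python
import math

def steady_state_release(p0, d, isi_ms, tau_ms):
    """Closed-form steady state of a depressing synapse driven by a
    regular spike train: each spike multiplies the release variable by
    (1 - d); between spikes it recovers toward p0 with time constant tau."""
    e = math.exp(-isi_ms / tau_ms)
    return p0 * (1.0 - e) / (1.0 - (1.0 - d) * e)

def simulate(p0, d, isi_ms, tau_ms, n_spikes=200):
    """Iterate the spike-by-spike recurrence until it settles."""
    p = p0
    for _ in range(n_spikes):
        p *= (1.0 - d)                                   # depletion at spike
        p = p0 - (p0 - p) * math.exp(-isi_ms / tau_ms)   # recovery to next
    return p

p0, d, isi, tau = 0.6, 0.4, 50.0, 300.0   # hypothetical 20 Hz train
print(simulate(p0, d, isi, tau), steady_state_release(p0, d, isi, tau))
```

The simulated steady state matches the closed form; giving synchronous and asynchronous release different d or tau values is what lets depression trade off information rate against energy, as the abstract describes.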
Project description: Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands, and an encoder delivers sensory information collected from the environment directly to the brain, creating a closed-loop system. These two modules are typically integrated into bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip, on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that bidirectionally connects the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact while remaining highly computationally powerful and adaptive.
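The learning rule realized by the on-chip synapse circuits is spike-timing-dependent plasticity (STDP). A generic software sketch of the standard pair-based kernel (this illustrates the rule in the abstract, not the chip's analog circuit implementation; amplitudes and time constant are hypothetical):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt < 0) depresses it; the effect decays exponentially with |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(10.0), stdp_dw(-10.0))   # potentiation, then depression
```

Applied on-line to recorded spike pairs, updates of this form let the decoding network strengthen exactly the input-output timing relationships that predict the desired motor command.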
Project description: A brain-to-brain interface (BTBI) enabled the real-time transfer of behaviorally meaningful sensorimotor information between the brains of two rats. In this BTBI, an "encoder" rat performed sensorimotor tasks that required it to select between two choices of tactile or visual stimuli. While the encoder rat performed the task, samples of its cortical activity were transmitted to matching cortical areas of a "decoder" rat using intracortical microstimulation (ICMS). The decoder rat learned to make similar behavioral selections, guided solely by the information provided by the encoder rat's brain. These results demonstrate that a complex system was formed by coupling the animals' brains, suggesting that BTBIs can enable dyads or networks of animal brains to exchange, process, and store information and, hence, serve as the basis for studies of novel types of social interaction and for biological computing devices.