Feedforward inhibition and synaptic scaling--two sides of the same coin?
ABSTRACT: Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. However, the functional implications of their interaction remain unclear. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum-likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Together, these findings point to a close connection between feedforward inhibition and synaptic scaling that may have important functional implications for general cortical processing.
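The abstract's model can be illustrated with a toy sketch (our construction, not the authors' code): Poisson-noise inputs drawn from a normalized mixture, soft winner-take-all responses standing in for lateral inhibition, and a single update combining a Hebbian term with multiplicative synaptic scaling. All dimensions, rates, and the exact form of the rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: K hidden causes, D input channels, Poisson observations whose
# mean patterns are normalized (a stand-in for fast feedforward inhibition).
D, K, A = 8, 3, 50.0
protos = rng.random((K, D)) + 0.1
protos /= protos.sum(axis=1, keepdims=True)   # normalized mixture components

W = rng.random((K, D)) + 0.5
W /= W.sum(axis=1, keepdims=True)             # synaptic scaling: rows sum to 1

eps = 0.05
for _ in range(300):
    z = rng.integers(K)                       # hidden cause
    x = rng.poisson(A * protos[z])            # Poisson-noise input
    # Soft winner-take-all via lateral inhibition: posterior-like responsibilities
    log_r = x @ np.log(W).T
    s = np.exp(log_r - log_r.max())
    s /= s.sum()
    # One rule, two ingredients: a Hebbian term (s_k * x_d) and a scaling
    # term (-s_k * W_kd) that together keep each weight row normalized.
    W += eps * s[:, None] * (x / x.sum() - W)
```

Because each update moves a weight row toward the normalized input, the row sums are preserved exactly, which is the "scaling" half of the same coin; the weight rows drift toward the mixture prototypes.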
Project description:Recently, deep learning algorithms have outperformed human experts in various tasks across several domains; however, their characteristics are distant from current knowledge of neuroscience. The simulation results of biological learning algorithms presented herein outperform state-of-the-art optimal learning curves in supervised learning of feedforward networks. The biological learning algorithms comprise asynchronous input signals with decaying input summation, weight adaptation, and multiple outputs per input signal. In particular, the generalization error for such biological perceptrons decreases rapidly with an increasing number of examples and is independent of the size of the input. This is achieved using either synaptic learning, or solely through dendritic adaptation with a mechanism of swinging between reflecting boundaries, without learning steps. The proposed biological learning algorithms outperform the optimal scaling of the learning curve of a traditional perceptron. They also confer considerable robustness to disparity between the weights of two networks with very similar outputs in biological supervised learning scenarios. The simulation results indicate the potency of neurobiological mechanisms and open opportunities for developing a superior class of deep learning algorithms.
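The baseline the abstract compares against is the classical teacher-student perceptron, whose learning curve can be reproduced in a few lines. This is the standard textbook setup only, not the paper's biological variants (decaying summation, dendritic adaptation), and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Teacher-student perceptron: the classical baseline whose learning-curve
# scaling the abstract's biological algorithms are said to outperform.
N = 100                                      # input dimension
w_teacher = rng.standard_normal(N)           # ground-truth "teacher" weights

X_test = rng.standard_normal((500, N))
y_test = np.sign(X_test @ w_teacher)

def gen_error(w):
    # Fraction of held-out examples the student misclassifies.
    return np.mean(np.sign(X_test @ w + 1e-12) != y_test)

w = np.zeros(N)
err_start = gen_error(w)                     # chance level, about 0.5
for _ in range(2000):                        # online presentation of examples
    x = rng.standard_normal(N)
    y = np.sign(x @ w_teacher)
    if np.sign(x @ w + 1e-12) != y:
        w += y * x                           # classical perceptron update
err_end = gen_error(w)                       # shrinks roughly like N / #examples
```

For the traditional perceptron the generalization error decays in proportion to the input size N over the number of examples, which is exactly the scaling the abstract claims its biological variants beat by becoming independent of N.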
Project description:The synaptic connectivity within neuronal networks is thought to determine the information processing they perform, yet network structure-function relationships remain poorly understood. By combining quantitative anatomy of the cerebellar input layer and information theoretic analysis of network models, we investigated how synaptic connectivity affects information transmission and processing. Simplified binary models revealed that the synaptic connectivity within feedforward networks determines the trade-off between information transmission and sparse encoding. Networks with few synaptic connections per neuron and network-activity-dependent threshold were optimal for lossless sparse encoding over the widest range of input activities. Biologically detailed spiking network models with experimentally constrained synaptic conductances and inhibition confirmed our analytical predictions. Our results establish that the synaptic connectivity within the cerebellar input layer enables efficient lossless sparse encoding. Moreover, they provide a functional explanation for why granule cells have approximately four dendrites, a feature that has been evolutionarily conserved since the appearance of fish.
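A minimal binary sketch of the idea (our simplification; the network sizes and the top-k selection standing in for the activity-dependent threshold are assumptions): each model granule cell samples four mossy-fibre inputs, and only a fixed fraction of the most strongly driven cells fires, yielding a sparse code across a wide range of input activity levels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each of 200 model granule cells samples d = 4 of 50 mossy-fibre inputs,
# mirroring the ~4 dendrites highlighted in the abstract.
N_mf, N_gc, d, k = 50, 200, 4, 20
conn = np.array([rng.choice(N_mf, size=d, replace=False) for _ in range(N_gc)])

def encode(x):
    drive = x[conn].sum(axis=1)              # summed binary input per cell
    code = np.zeros(N_gc, dtype=int)
    # Top-k selection stands in for a network-activity-dependent threshold:
    # whatever the input activity, only k cells fire.
    code[np.argsort(drive)[-k:]] = 1
    return code

# Sparse, distinct output codes for low, medium, and high input activity.
inputs = [(rng.random(N_mf) < f).astype(int) for f in (0.2, 0.5, 0.8)]
codes = [encode(x) for x in inputs]
```

With an activity-dependent threshold the output sparsity stays fixed regardless of input density, while distinct inputs still map to distinct codes, which is the lossless sparse encoding the abstract describes.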
Project description:The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task through unsupervised practice without getting worse in the other? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning.
Project description:Memories are believed to be encoded by changes in the synaptic connections between neurons. Although many forms of synaptic plasticity have been identified, it remains unknown how such changes affect local circuits. Feedforward inhibitory networks are a common type of local circuitry and occur when principal neurons and their afferent inhibitory interneurons receive the same input. Using slices of cerebellar cortex, we explored how synaptic plasticity at multiple sites within a feedforward inhibitory network consisting of parallel fibers, interneurons, and Purkinje neurons alters the output of this circuit. We found that stimuli resembling baseline activity potentiated feedforward excitatory and simultaneously depressed feedforward inhibitory pathways. In contrast, stimuli resembling sensory-evoked patterns of firing potentiated both types of feedforward connections. These distinct forms of ensemble plasticity change the way Purkinje neurons subsequently respond to inputs. Such concerted changes in the circuitry of cerebellar cortex may contribute to certain forms of sensorimotor learning.
Project description:We developed a biologically plausible unsupervised learning algorithm, the error-gated Hebbian rule (EGHR-β), that performs principal component analysis (PCA) and independent component analysis (ICA) in a single-layer feedforward neural network. If the parameter β = 1, it can extract the subspace spanned by the major principal components, similarly to Oja's subspace rule for PCA. If β = 0, it can separate independent sources, similarly to the Bell-Sejnowski ICA rule, but without requiring the same number of input and output neurons. Unlike these engineering rules, the EGHR-β can be easily implemented in a biological or neuromorphic circuit because it uses only local information available at each synapse. We analytically and numerically demonstrate the reliability of the EGHR-β in extracting and separating major sources from high-dimensional input. By adjusting β, the EGHR-β can extract sources that are missed by the conventional engineering approach that first applies PCA and then ICA. Namely, the proposed rule can successfully extract hidden natural images even in the presence of dominant or non-Gaussian noise components. The results highlight the reliability and utility of the EGHR-β for large-scale parallel computation of PCA and ICA and its future implementation in neuromorphic hardware.
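The abstract specifies the EGHR only by analogy, so as a concrete reference point here is Oja's subspace rule itself, which the β = 1 case is said to resemble. This is the standard rule, not the EGHR, and all sizes and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Oja's subspace rule: a local Hebbian update that extracts the subspace
# spanned by the major principal components of the input.
D, K = 5, 2
A = np.zeros((D, K))
A[0, 0], A[1, 1] = 3.0, 2.0                    # two dominant source directions
W = 0.1 * rng.standard_normal((K, D))

eta = 0.005
for _ in range(5000):
    s = rng.standard_normal(K)
    x = A @ s + 0.1 * rng.standard_normal(D)   # data with a 2-D principal subspace
    y = W @ x
    # Local update: Hebbian growth (y x^T) minus a decay term (y y^T W) that
    # keeps the rows of W bounded and confined to the principal subspace.
    W += eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Fraction of weight energy in the true principal subspace (coordinates 0, 1).
energy = np.sum(W[:, :2] ** 2) / np.sum(W ** 2)
```

Like the β = 1 limit described in the abstract, the rule uses only quantities available at each synapse (pre- and postsynaptic activity and the local weight); the source-separation behaviour of the β = 0 case is not reproduced here.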
Project description:The perirhinal (PER) and lateral entorhinal (LEC) cortex form an anatomical link between the neocortex and the hippocampus. However, neocortical activity is transmitted through the PER and LEC to the hippocampus with low probability, suggesting the involvement of an inhibitory network. This study explored the role of interneuron-mediated inhibition, activated by electrical stimulation in the agranular insular cortex (AiP), in the deep layers of the PER and LEC. Synaptic input activated by AiP stimulation rarely evoked action potentials in PER-LEC deep-layer excitatory principal neurons, most probably because the evoked synaptic response consisted of a small excitatory and a large inhibitory conductance. Furthermore, parvalbumin-positive (PV) interneurons, a subset of interneurons projecting onto the axo-somatic region of principal neurons, received synaptic input earlier than principal neurons, suggesting recruitment of feedforward inhibition. This synaptic input evoked varying trains of action potentials in PV interneurons, explaining the fast-rising, long-lasting synaptic inhibition received by deep-layer principal neurons. Altogether, the excitatory input from the AiP onto deep-layer principal neurons is overruled by strong feedforward inhibition. PV interneurons, with their fast, extensive stimulus-evoked firing, are able to deliver this fast evoked inhibition to principal neurons. This indicates an essential role for PV interneurons in the gating mechanism of the PER-LEC network.
Project description:Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks' convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.
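The flavour of such networks can be conveyed with a heavily simplified, greedy spike-coding sketch (ours, with assumed decoder weights and time constants, not the paper's derived learning rules): a neuron fires only when its spike reduces the readout error, so spikes are precise and non-redundant by construction, and the firing threshold acquires the functional interpretation of a spike cost.

```python
import numpy as np

# Greedy spike-coding sketch of the coordinated-coding idea.
N = 10
d = np.array([0.1] * 5 + [-0.1] * 5)   # fixed decoding weights (assumed)
dt, tau = 1e-3, 0.02                   # time step and readout time constant
r = np.zeros(N)                        # filtered spike trains (readout state)
spikes = np.zeros(N)
errs = []

x = 1.0                                # constant signal to encode
for _ in range(2000):
    xhat = d @ r                       # linear readout of the population
    V = d * (x - xhat)                 # "voltage" = projected coding error
    i = np.argmax(V - d ** 2 / 2)
    if V[i] > d[i] ** 2 / 2:           # spike only if it reduces the error
        r[i] += 1.0
        spikes[i] += 1
    r *= np.exp(-dt / tau)             # leaky decay of the readout
    errs.append((x - d @ r) ** 2)

final_err = np.mean(errs[-500:])       # small: the population tracks the signal
```

In this reading, voltage is the projection of the coding error onto a neuron's decoding weight and the threshold equals the cost of one spike, which is the kind of precise functional interpretation of biophysical quantities the abstract refers to.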
Project description:Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.
Project description:Input-timing-dependent plasticity (ITDP) is a circuit-based synaptic learning rule by which paired activation of entorhinal cortical (EC) and Schaffer collateral (SC) inputs to hippocampal CA1 pyramidal neurons (PNs) produces a long-term enhancement of SC excitation. We now find that paired stimulation of EC and SC inputs also induces ITDP of SC excitation of CA2 PNs. However, whereas CA1 ITDP results from long-term depression of feedforward inhibition (iLTD) as a result of activation of CB1 endocannabinoid receptors on cholecystokinin-expressing interneurons, CA2 ITDP results from iLTD through activation of δ-opioid receptors on parvalbumin-expressing interneurons. Furthermore, whereas CA1 ITDP has been previously linked to enhanced specificity of contextual memory, we find that CA2 ITDP is associated with enhanced social memory. Thus, ITDP may provide a general synaptic learning rule for distinct forms of hippocampal-dependent memory mediated by distinct hippocampal regions.
Project description:Pattern separation is a fundamental function of the brain. The divergent feedforward networks thought to underlie this computation are widespread, yet exhibit remarkably similar sparse synaptic connectivity. Marr-Albus theory postulates that such networks separate overlapping activity patterns by mapping them onto larger numbers of sparsely active neurons. But spatial correlations in synaptic input and those introduced by network connectivity are likely to compromise performance. To investigate the structural and functional determinants of pattern separation we built models of the cerebellar input layer with spatially correlated input patterns, and systematically varied their synaptic connectivity. Performance was quantified by the learning speed of a classifier trained on either the input or output patterns. Our results show that sparse synaptic connectivity is essential for separating spatially correlated input patterns over a wide range of network activity, and that expansion and correlations, rather than sparse activity, are the major determinants of pattern separation.
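The core mechanism can be sketched in a few lines (our illustration; sizes, connectivity, and threshold are assumed, not the paper's fitted parameters): spatially correlated binary inputs are mapped through a sparsely connected expansion layer with a firing threshold, and the pairwise correlation between the recoded patterns shrinks relative to the inputs.

```python
import numpy as np

rng = np.random.default_rng(5)

# Expansion recoding: 100 correlated inputs fan out to 400 units, each of
# which samples only d = 4 inputs (sparse synaptic connectivity).
N_in, N_out, d, theta = 100, 400, 4, 3
conn = np.array([rng.choice(N_in, size=d, replace=False) for _ in range(N_out)])

base = (rng.random(N_in) < 0.5).astype(int)
def correlated_pattern(flip=0.1):
    # Each pattern shares ~90% of its bits with `base`, so inputs overlap heavily.
    mask = rng.random(N_in) < flip
    return np.where(mask, 1 - base, base)

def expand(x):
    # A unit fires only if at least `theta` of its d sampled inputs are active.
    return (x[conn].sum(axis=1) >= theta).astype(int)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

pats = [correlated_pattern() for _ in range(6)]
codes = [expand(p) for p in pats]
pairs = [(i, j) for i in range(6) for j in range(i)]
in_corr = np.mean([corr(pats[i], pats[j]) for i, j in pairs])
out_corr = np.mean([corr(codes[i], codes[j]) for i, j in pairs])
```

Because the threshold sits above a unit's mean drive, overlapping input patterns are pushed apart in the larger output space, illustrating how expansion and decorrelation, rather than sparse activity per se, drive pattern separation.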