From correlation to causation: Estimating effective connectivity from zero-lag covariances of brain signals.
ABSTRACT: Knowing brain connectivity is of great importance both in basic research and for clinical applications. We propose a method to infer directed connectivity from zero-lag covariances of neuronal activity recorded at multiple sites, which allows us to identify causal relations that are reflected in neuronal population activity. To derive our strategy, we assume a generic linear model of interacting continuous variables, the components of which represent the activity of local neuronal populations. The suggested method exploits the fact that the covariance matrix derived from the observed activity contains information about the existence, direction, and sign of connections. Assuming a sparsely coupled network, we disambiguate the underlying causal structure via L1-minimization, which is known to favor sparse solutions. In general, this method is suited to inferring effective connectivity from resting-state data of various types. We show that our method is applicable over a broad range of structural parameters, including network size and connection probability, and we also explore parameters affecting the network's activity dynamics, such as the eigenvalue spectrum. Moreover, by simulating suitable Ornstein-Uhlenbeck processes to model BOLD dynamics, we show that our method can estimate directed connectivity from the zero-lag covariances of such signals. We consider measurement noise and unobserved nodes as additional confounding factors, and we investigate the amount of data required for a reliable estimate. Finally, we apply the proposed method to whole-brain resting-state fast-fMRI datasets. The resulting network exhibits a tendency for nearby areas to be connected, as well as inter-hemispheric connections between corresponding areas.
In addition, we found that a surprisingly large fraction of all identified connections, more than one third, were inhibitory.
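The link between connectivity and zero-lag covariance in such a linear model can be illustrated with a small simulation. The sketch below (all parameters are illustrative and not taken from the study) simulates a sparsely coupled Ornstein-Uhlenbeck process and checks its empirical zero-lag covariance against the analytic solution of the Lyapunov equation A C + C Aᵀ + D = 0, which is the relation that any covariance-based inference of the connectivity A has to invert:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse, stable connectivity matrix A: leak on the diagonal,
# one excitatory and one inhibitory off-diagonal connection.
n = 4
A = -np.eye(n)
A[1, 0] = 0.6   # excitatory link 0 -> 1
A[3, 2] = -0.5  # inhibitory link 2 -> 3
D = np.eye(n)   # noise (diffusion) covariance

# Analytic zero-lag covariance from the Lyapunov equation A C + C A^T + D = 0,
# solved via vectorization: (I (x) A + A (x) I) vec(C) = -vec(D).
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
C_theory = np.linalg.solve(K, -D.flatten()).reshape(n, n)

# Euler-Maruyama simulation of the OU process dx = A x dt + dW.
dt, steps = 0.01, 100_000
x = np.zeros(n)
samples = np.empty((steps, n))
for t in range(steps):
    x = x + dt * A @ x + np.sqrt(dt) * rng.standard_normal(n)
    samples[t] = x

C_emp = np.cov(samples[steps // 10:].T)  # discard initial transient
err = np.linalg.norm(C_emp - C_theory) / np.linalg.norm(C_theory)
```

Note that many different matrices A are compatible with the same symmetric C, which is exactly the ambiguity the sparsity (L1) constraint is meant to resolve.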
Project description: In this study, we investigate whether phase-locking of fast oscillatory activity relies on the anatomical skeleton and whether simple computational models informed by structural connectivity can further help explain missing links in the structure-function relationship. We use diffusion tensor imaging data and alpha band-limited EEG signals recorded in a group of healthy individuals. Our results show that about 23.4% of the variance in empirical networks of resting-state functional connectivity is explained by the underlying white matter architecture. Simulating functional connectivity with a simple computational model based on the structural connectivity increases the match to 45.4%. In a second step, we use our modeling framework to explore several technical alternatives along the modeling path. First, we find that augmenting homotopic connections in the structural connectivity matrix improves the link to functional connectivity, while correcting for fiber distance slightly decreases the performance of the model. Second, a more complex computational model based on Kuramoto oscillators leads to a slight improvement of the model fit. Third, we show that comparing modeled and empirical functional connectivity at the source level is much more specific for the underlying structural connectivity; however, different source reconstruction algorithms gave comparable results. Fourth, the model fit was markedly better when zero-phase-lag components were preserved in the empirical functional connectome, indicating that a considerable amount of functionally relevant synchrony takes place at or near zero phase lag. Combining the best-performing alternatives at each stage of the pipeline results in a model that explains 54.4% of the variance in the empirical EEG functional connectivity.
Our study shows that large-scale brain circuits of fast neural network synchrony rely strongly on the structural connectome, and that simple computational models of neural activity can explain missing links in the structure-function relationship.
Project description: Functional connectivity metrics have been widely used to infer the underlying structural connectivity in neuronal networks. Maximum-entropy Ising models have been suggested as a way to discount the effect of indirect interactions, and they give good results in inferring the true anatomical connections. However, no benchmarking is currently available that assesses the performance of Ising couplings against other functional connectivity metrics at the microscopic scale of neuronal networks across a wide range of network conditions and structures. In this paper, we study the ability of Ising model couplings to infer the synaptic connectivity in in silico networks of neurons and compare their performance against partial and cross-correlations for different correlation levels, firing rates, network sizes, network densities, and topologies. Our results show that the relative performance among the three functional connectivity metrics depends primarily on the network correlation level. Ising couplings detected the most structural links at very weak network correlation levels, whereas partial correlations outperformed Ising couplings and cross-correlations at strong correlation levels. These results were consistent across varying firing rates, network sizes, and topologies. The findings of this paper serve as a guide for choosing the appropriate functional connectivity metric for reconstructing structural connectivity.
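The three metrics can be contrasted on a toy system. The sketch below (an illustrative 5-spin chain, not the study's benchmark) Gibbs-samples an Ising model and compares cross-correlations, partial correlations, and naive mean-field Ising couplings, where the latter are estimated as minus the inverse covariance; this is one standard approximation, not necessarily the inference scheme used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: a chain of 5 spins with nearest-neighbour couplings.
n, J_true = 5, 0.6
J = np.zeros((n, n))
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = J_true

# Gibbs sampling of P(s) ~ exp(sum_{i<j} J_ij s_i s_j), s_i in {-1, +1}.
s = rng.choice([-1.0, 1.0], size=n)
samples = np.empty((20_000, n))
for t in range(20_000):
    for i in range(n):
        h = J[i] @ s
        s[i] = 1.0 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1.0
    samples[t] = s

kept = samples[2_000:]                 # discard burn-in
C = np.cov(kept.T)                     # connected correlations
corr = np.corrcoef(kept.T)             # cross-correlations
P = np.linalg.inv(C)
J_nmf = -P                             # naive mean-field Ising couplings
np.fill_diagonal(J_nmf, 0.0)
pcorr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))  # partial correlations
```

The raw correlation between spins 0 and 2 is substantial even though they are not directly coupled, while both the inverse-covariance-based estimates suppress this indirect link, which is the effect the benchmarking in the paper quantifies.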
Project description: Brain processes occur at various timescales, ranging from milliseconds (neurons) to minutes and hours (behavior). Characterizing functional coupling among brain regions at these diverse timescales is key to understanding how the brain produces behavior. Here, we apply instantaneous and lag-based measures of conditional linear dependence, based on Granger-Geweke causality (GC), to infer network connections at distinct timescales from functional magnetic resonance imaging (fMRI) data. Because of the slow sampling rate of fMRI, it is widely held that GC produces spurious and unreliable estimates of functional connectivity when applied to fMRI data. We challenge this claim with simulations and a novel machine learning approach. First, we show with simulated fMRI data that instantaneous and lag-based GC identify distinct timescales and complementary patterns of functional connectivity. Next, we analyze fMRI scans from 500 subjects and show that a linear classifier trained on either instantaneous or lag-based GC connectivity reliably distinguishes task from rest brain states, with ~80-85% cross-validation accuracy. Importantly, instantaneous and lag-based GC exploit markedly different spatial and temporal patterns of connectivity to achieve robust classification. Our approach enables the identification of functionally connected networks that operate at distinct timescales in the brain.
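The two kinds of GC measure can be sketched for a single signal pair (toy autoregressive data with illustrative coefficients, not the study's fMRI pipeline): lag-based GC compares residual variances of restricted and full autoregressive models, while instantaneous GC measures the dependence left between the two innovation series at zero lag.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: x drives y with a one-sample lag, plus correlated innovations
# to create an instantaneous (zero-lag) dependence as well.
T = 5000
e = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=T)
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + e[t, 0]
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + e[t, 1]

def residuals(target, regressors):
    beta = np.linalg.lstsq(regressors, target, rcond=None)[0]
    return target - regressors @ beta

# Lag-based GC (one lag): variance reduction from adding the source's past.
yt = y[1:]
res_r = residuals(yt, y[:-1, None])                       # own past only
res_f = residuals(yt, np.column_stack([y[:-1], x[:-1]]))  # own + source past
gc_lag_xy = np.log(res_r.var() / res_f.var())

res_rx = residuals(x[1:], x[:-1, None])
res_x = residuals(x[1:], np.column_stack([x[:-1], y[:-1]]))
gc_lag_yx = np.log(res_rx.var() / res_x.var())

# Instantaneous GC: zero-lag dependence between the two innovation series.
rho = np.corrcoef(res_f, res_x)[0, 1]
gc_inst = -np.log(1 - rho ** 2)
```

The lag-based measure is strongly asymmetric (x drives y, not vice versa), while the instantaneous measure picks up the correlated innovations; this is the complementarity the abstract refers to.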
Project description: In the neonatal rodent hippocampus, the first and predominant pattern of correlated neuronal network activity is the early sharp wave (eSPW). Whether and how eSPWs are organized bilaterally remains unknown. Here, using simultaneous silicon probe recordings from the left and right hippocampi of neonatal rats in vivo, we found that eSPWs are highly synchronized bilaterally, with nearly zero time lag between the two sides. The amplitudes of eSPWs in the left and right hippocampi were also highly correlated, and eSPWs supported bilateral synchronization of multiple-unit activity (MUA). We suggest that bilateral correlated activity supported by synchronized eSPWs participates in the formation of bilateral connections in the hippocampal system.
Project description: The traditional resting-state network concept is based on computing the linear dependence of spontaneous low-frequency fluctuations of the BOLD signals of different brain areas, which assumes temporally stable zero-lag synchrony across regions. However, a growing body of experimental findings suggests that functional connectivity exhibits dynamic changes and a complex time-lag structure, which cannot be captured by static zero-lag correlation analysis. Here we propose a new approach that applies the Dynamic Time Warping (DTW) distance to evaluate functional connectivity strength while accounting for non-stationarity and phase lags between the observed signals. Using simulated fMRI data, we found that DTW captures dynamic interactions and is less sensitive to linearly combined global noise in the data than traditional correlation analysis. We tested our method on resting-state fMRI data from repeated measurements of an individual subject and showed that DTW analysis yields more stable connectivity patterns, reducing within-subject variability and increasing robustness to preprocessing strategies. Classification results on a public dataset revealed a superior sensitivity of DTW analysis to group differences: DTW-based classifiers significantly outperformed classifiers based on zero-lag correlation and maximal-lag cross-correlation. Our findings suggest that analysing resting-state functional connectivity with DTW provides an efficient new way of characterizing functional networks.
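Why DTW can see coupling that zero-lag correlation misses is easy to demonstrate with the textbook dynamic-programming DTW (illustrative toy signals; the study's implementation and preprocessing may differ):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(4)
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t)
y = np.sin(t + np.pi / 2)               # quarter-period phase lag
z = rng.standard_normal(200) * x.std()  # unstructured noise, matched scale

corr_xy = np.corrcoef(x, y)[0, 1]       # near zero despite perfect coupling
dtw_xy = dtw_distance(x, y)             # small: warping absorbs the lag
dtw_xz = dtw_distance(x, z)             # large: no structure to align
```

The phase-lagged copy of the signal has near-zero Pearson correlation with the original, yet its DTW distance stays far below that of an unrelated noise signal, which is the behaviour exploited for the connectivity analysis.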
Project description: A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems such as dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study, we focus on the inference of excitatory synaptic links. Being grounded in information theory, our method requires no prior assumptions about the statistics of neuronal firing or neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is markedly more accurate. In particular, it provides a good estimate of the excitatory network clustering coefficient, allowing discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (although not extreme) level of clustering compared to a random graph and can be markedly non-local.
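The core quantity can be illustrated with a minimal plug-in estimator of Transfer Entropy for binarized time series with one-sample history (a simplified sketch on toy data; the study's estimator additionally conditions on the global mean activity and works on calcium fluorescence):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in TE(x -> y) in bits, for binary series, one-sample history:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    hist_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / hist_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

rng = np.random.default_rng(5)
T = 20_000
x = (rng.random(T) < 0.5).astype(int)
# y copies x with a one-step delay, flipped with probability 0.1
y = np.empty(T, dtype=int)
y[0] = 0
flip = rng.random(T) < 0.1
y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])

te_xy = transfer_entropy(x.tolist(), y.tolist())
te_yx = transfer_entropy(y.tolist(), x.tolist())
```

The estimator is strongly asymmetric for this driven pair (TE in the driving direction is about 0.5 bits, near zero in the reverse direction), which is the directional information that distinguishes TE from symmetric correlation measures.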
Project description: Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states, a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to constitute a separate channel for processing information in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments have indeed shown that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations lock tightly to an oscillatory cycle. Here we demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically driven random networks, and we identify the mechanisms by which covariances depend on a periodic driving stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single-unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even when the mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to quantitative analysis.
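One of the two mechanisms named above, the time-varying variance of single binary units, can be seen in a minimal trial-based simulation of a periodically driven binary network (illustrative parameters and a simple parallel update; this is not the mean-field calculation itself). Because a binary unit with mean activity m has variance m(1 - m), a drive that modulates m around a low baseline necessarily modulates the variance along the cycle:

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(u):
    return 1 / (1 + np.exp(-u))

N, trials, steps, period = 20, 2000, 80, 40
# weak, effectively inhibitory random couplings
J = rng.normal(-0.1, 0.05, (N, N))
np.fill_diagonal(J, 0.0)

h0, a = -1.0, 0.8                      # baseline input and drive amplitude
act = np.zeros((trials, steps, N))
s = (rng.random((trials, N)) < 0.2).astype(float)
for t in range(steps):
    drive = h0 + a * np.sin(2 * np.pi * t / period)
    p = sigmoid(drive + s @ J.T)       # activation probability per unit
    s = (rng.random((trials, N)) < p).astype(float)
    act[:, t, :] = s

# across-trial variance of single-unit activity at each phase of the cycle
var_t = act.var(axis=0).mean(axis=1)
```

The cyclostationary variance `var_t` is markedly larger near the peak of the drive than at its trough, illustrating one source of time-dependent covariances; the susceptibility mechanism requires the linear-response treatment of the paper.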
Project description: The specific connectivity of a neuronal network is reflected in the dynamics of the signals recorded at its nodes. Analysing how the activity of one node predicts the behaviour of another reveals the directionality of their relationship. However, each node is composed of many different elements that define the properties of its links; for instance, excitatory and inhibitory neuronal subtypes determine the functionality of a connection. Classic indices such as Granger causality (GC) quantify these interactions, but they do not reveal the mechanism behind them. Here, we introduce an extension of the well-known GC that analyses the correlation associated with the specific influence that a transmitter node has over the receiver. In this way, a G-causal link is assigned a positive or negative effect depending on whether the predicted activity follows the sender's dynamics directly or inversely. The method is validated in a neuronal population model, testing the paradigm that excitatory and inhibitory neurons have differential effects on connectivity. Our approach correctly infers the positive or negative coupling produced by different types of neurons. These results suggest that the proposed approach provides additional information for the characterization of G-causal connections, which is potentially relevant for understanding interactions in brain circuits.
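The idea of attaching a sign to a G-causal link can be sketched on toy data. In this simplified proxy (not the correlation-based extension of the paper itself), the strength of the link is the usual GC log-variance ratio, and the sign is read from the lagged regression coefficient of the sender in the receiver's full model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy network: node x inhibits node y (negative coupling, one-sample lag).
T = 5000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.4 * y[t - 1] - 0.7 * x[t - 1] + rng.standard_normal()

# Full model: y_t ~ y_{t-1} + x_{t-1}; restricted model: y_t ~ y_{t-1}.
yt = y[1:]
Xf = np.column_stack([y[:-1], x[:-1]])
Xr = y[:-1, None]
beta_f = np.linalg.lstsq(Xf, yt, rcond=None)[0]
res_f = yt - Xf @ beta_f
res_r = yt - Xr @ np.linalg.lstsq(Xr, yt, rcond=None)[0]

gc_xy = np.log(res_r.var() / res_f.var())  # strength of the x -> y link
sign_xy = np.sign(beta_f[1])               # sign of the influence
```

For this inhibitory coupling the link is detected as strong (positive GC) with a negative sign, the kind of extra information a plain GC index discards.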
Project description: Learning in neuronal networks has developed in many directions, in particular toward reproducing cognitive tasks such as image recognition and speech processing. Implementations have been inspired by stereotypical neuronal responses such as tuning curves in the visual system, where, for example, ON/OFF cells fire or remain silent depending on the contrast in their receptive fields. Classical models of neuronal networks therefore map a set of input signals to a set of activity levels in the output of the network, and each category of inputs is predominantly characterized by its mean. In the case of time series, fluctuations around this mean constitute noise in this view. For this paradigm, the high variability exhibited by cortical activity may imply limitations or constraints, which have been discussed for many years; one example is the need to average neuronal activity over long periods or large groups of cells to assess a robust mean and to diminish the effect of noise correlations. To reconcile robust computations with variable neuronal activity, we propose a conceptual change of perspective: variability of activity serves as the basis for stimulus-related information to be learned by neurons, rather than merely being the noise that corrupts the mean signal. In this new paradigm, both afferent and recurrent weights in a network are tuned to shape the input-output mapping for covariances, the second-order statistics of the fluctuating activity. When time lags are included, covariance patterns define a natural metric for time series that captures their propagating nature. We develop the theory for the classification of time series based on their spatio-temporal covariances, which reflect dynamical properties, and we demonstrate that recurrent connectivity is able to transform information contained in the temporal structure of a signal into spatial covariances.
Finally, we use the MNIST database to show how the covariance perceptron can capture specific second-order statistical patterns generated by moving digits.
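The spirit of a covariance-based readout can be sketched in a few lines (an illustrative toy with made-up covariance patterns and plain gradient descent, not the learning rule derived in the study): a linear readout B maps an input covariance P to an output covariance B P Bᵀ, and B is trained so that two input categories, which have identical means and differ only in their covariance, map to distinct target output covariances.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two input categories defined purely by their spatial covariance.
P1 = np.array([[1.0, 0.8, 0.0, 0.0],
               [0.8, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]])
P2 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.8],
               [0.0, 0.0, 0.8, 1.0]])
Q1 = np.array([[1.0, 0.8], [0.8, 1.0]])    # target: correlated outputs
Q2 = np.array([[1.0, -0.8], [-0.8, 1.0]])  # target: anti-correlated outputs

B = 0.1 * rng.standard_normal((2, 4))

def loss(B):
    return sum(np.sum((B @ P @ B.T - Q) ** 2)
               for P, Q in [(P1, Q1), (P2, Q2)])

loss0 = loss(B)
lr = 0.01
for _ in range(3000):
    for P, Q in [(P1, Q1), (P2, Q2)]:
        E = B @ P @ B.T - Q
        B -= lr * 4 * E @ B @ P   # gradient of ||B P B^T - Q||_F^2

def predict(P):
    """Classify an input covariance by the nearest target output covariance."""
    Qhat = B @ P @ B.T
    return 1 if np.sum((Qhat - Q1) ** 2) < np.sum((Qhat - Q2) ** 2) else 2

loss_final = loss(B)
```

Note that the mapping P -> B P Bᵀ is bilinear in B, so the readout genuinely operates on second-order statistics; a mean-based classifier would see nothing, since both categories have zero mean.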
Project description: The analysis of the activity of neuronal cultures is considered a good proxy for the functional connectivity of in vivo neuronal tissue. The functional complex network inferred from activity patterns is thus a promising way to unravel the interplay between the structure and functionality of neuronal systems. Here, we monitor the spontaneous self-sustained dynamics in neuronal cultures formed by interconnected aggregates of neurons (clusters). The dynamics are characterized by the fast activation of groups of clusters in sequences termed bursts, and the analysis of the time delays between cluster activations within the bursts allows the reconstruction of the directed functional connectivity of the network. We propose a method to statistically infer this connectivity and analyze the properties of the resulting complex networks. Surprisingly, and in contrast to what has been reported for many biological networks, the clustered neuronal cultures exhibit assortative mixing, meaning that clusters preferentially link to other clusters with similar functional connectivity, as well as a rich-club core, which shapes a 'connectivity backbone' in the network. These results point to the grouping of neurons and the assortative connectivity between clusters as intrinsic survival mechanisms of the culture.
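The two network properties highlighted here, assortative mixing and a rich-club core, have simple degree-based definitions that can be computed directly from an adjacency matrix. The sketch below (undirected toy graphs, not the cultures' inferred networks) implements Newman's degree assortativity and a basic rich-club density:

```python
import numpy as np

def degree_assortativity(adj):
    """Newman's degree assortativity for an undirected adjacency matrix:
    the Pearson correlation of the degrees at the two ends of each edge."""
    deg = adj.sum(axis=0)
    i, j = np.nonzero(np.triu(adj))
    # each undirected edge contributes both (d_i, d_j) and (d_j, d_i)
    x = np.concatenate([deg[i], deg[j]])
    y = np.concatenate([deg[j], deg[i]])
    return np.corrcoef(x, y)[0, 1]

def rich_club(adj, k):
    """Fraction of possible edges realized among nodes of degree > k."""
    deg = adj.sum(axis=0)
    rich = np.nonzero(deg > k)[0]
    if len(rich) < 2:
        return np.nan
    sub = adj[np.ix_(rich, rich)]
    return sub.sum() / (len(rich) * (len(rich) - 1))

# Star graph: one hub connected to 5 leaves -> maximally disassortative.
star = np.zeros((6, 6))
star[0, 1:] = star[1:, 0] = 1

# 'Rich club': 4 hubs fully interconnected, each with 3 private leaves.
n = 4 + 4 * 3
club = np.zeros((n, n))
for a in range(4):
    for b in range(4):
        if a != b:
            club[a, b] = 1
    for leaf in range(4 + a * 3, 4 + (a + 1) * 3):
        club[a, leaf] = club[leaf, a] = 1
```

The star graph yields assortativity -1 (hubs connect only to leaves), whereas the hub block of the second graph forms a perfect rich club (density 1 among nodes above the degree threshold), the configuration found in the clustered cultures.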