The causal connectivity of a network is often inferred to understand network function. It is widely acknowledged that the inferred causal connectivity depends on the causality measure one applies, and it may differ from the network’s underlying structural connectivity. However, the interpretation of causal connectivity remains to be fully clarified, in particular, how causal connectivity depends on causality measures and how causal connectivity relates to structural connectivity. Here, we focus on nonlinear networks with pulse signals as measured output, e.g., neural networks with spike output, and address the above issues based on four commonly utilized causality measures, i.e., time-delayed correlation coefficient, time-delayed mutual information, Granger causality, and transfer entropy. We theoretically show how these causality measures are related to one another when applied to pulse signals. Taking a simulated Hodgkin–Huxley network and a real mouse brain network as two illustrative examples, we further verify the quantitative relations among the four causality measures and demonstrate that the causal connectivity inferred by any of the four coincides well with the underlying network structural connectivity, therefore illustrating a direct link between the causal and structural connectivity. We stress that the structural connectivity of pulse-output networks can be reconstructed pairwise without conditioning on the global information of all other nodes in a network, thus circumventing the curse of dimensionality. Our framework provides a practical and effective approach for pulse-output network reconstruction.
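As a rough, self-contained illustration of the simplest of the four measures (not the paper's implementation), the time-delayed correlation coefficient can be computed between binned spike trains; the toy coupling, bin size, and delay below are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_delayed_corr(x, y, delay):
    """Pearson correlation between x(t) and y(t + delay), delay > 0,
    for binned (0/1) spike trains."""
    return np.corrcoef(x[:-delay], y[delay:])[0, 1]

# Toy pulse-output pair: y tends to spike 2 bins after x does.
T = 20000
x = (rng.random(T) < 0.05).astype(float)
y = (rng.random(T) < 0.01).astype(float)
y[2:] = np.maximum(y[2:], x[:-2] * (rng.random(T - 2) < 0.5))

# The putative x -> y direction peaks at the true delay; the reverse stays flat.
assert time_delayed_corr(x, y, 2) > 0.3
assert abs(time_delayed_corr(y, x, 2)) < 0.1
```

Scanning `delay` over a range and locating the peak gives a pairwise estimate of both the direction and the conduction delay, which is the sense in which reconstruction here avoids conditioning on the rest of the network.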
2023
Mathematical Modeling and Analysis of Spatial Neuron Dynamics: Dendritic Integration and Beyond
Neuromatch Academy (https://academy.neuromatch.io; van Viegen et al., 2021) was designed as an online summer school to cover the basics of computational neuroscience in three weeks. The materials cover dominant and emerging computational neuroscience tools, how they complement one another, and specifically focus on how they can help us to better understand how the brain functions. An original component of the materials is their focus on modeling choices, i.e., how to choose the right approach, how to build models, and how to evaluate models to determine whether they provide real (meaningful) insight. This meta-modeling component of the instructional materials asks what questions can be answered by different techniques, and how to apply them meaningfully to gain insight about brain function.
Hierarchical timescales in the neocortex: Mathematical mechanism and biological insights
In the neocortex, while early sensory areas encode and process external inputs rapidly, higher-association areas are endowed with slow dynamics suitable for accumulating information over time. Such a hierarchy of temporal response windows along the cortical hierarchy naturally emerges in a model of multiareal primate cortex. This finding raises the question of why diverse temporal modes are not mixed in roughly the same way across the whole cortex, despite high connection density and an abundance of feedback loops. We investigate this question by mathematically analyzing the anatomically based network model of macaque cortex and theoretically show that three sufficient conditions of synaptic excitation and inhibition give rise to timescale segregation in a hierarchy, a functionally important characteristic of the cortex.
A striatal SOM-driven ChAT-iMSN loop generates beta oscillations and produces motor deficits
Qian, Dandan, Li, Wei,
Xue, Jinwen, and
13 more authors
Enhanced beta oscillations within the cortico-basal ganglia-thalamic (CBT) network are correlated with motor deficits in Parkinson’s disease (PD), whose generation has been associated recently with amplified network dynamics in the striatum. However, how distinct striatal cell subtypes interact to orchestrate beta oscillations remains largely unknown. Here, we show that optogenetic suppression of dopaminergic control over the dorsal striatum (DS) elevates the power of local field potentials (LFPs) selectively at beta band (12–25 Hz), accompanied by impairments in locomotion. The amplified beta power originates from a striatal loop driven by somatostatin-expressing (SOM) interneurons and constituted by choline acetyltransferase (ChAT)-expressing interneurons and dopamine D2 receptor (D2R)-expressing medium spiny neurons (iMSNs). Moreover, closed-loop intervention selectively targeting striatal iMSNs or ChATs diminishes beta oscillations and restores motor function. Thus, we reveal a striatal microcircuit motif that underlies beta oscillation generation and accompanied motor deficits upon perturbation of dopaminergic control over the striatum.
2021
Maximum Entropy Principle Underlies Wiring Length Distribution in Brain Networks
A brain network comprises a substantial amount of short-range connections with an admixture of long-range connections. The portion of long-range connections in brain networks is observed to be quantitatively dissimilar across species. It is hypothesized that the length of connections is constrained by the spatial embedding of brain networks, yet fundamental principles that underlie the wiring length distribution remain unclear. By quantifying the structural diversity of a brain network using Shannon’s entropy, here we show that the wiring length distribution across multiple species—including Drosophila, mouse, macaque, human, and C. elegans—follows the maximum entropy principle (MAP) under the constraints of limited wiring material and the spatial locations of brain areas or neurons. In addition, by considering stochastic axonal growth, we propose a network formation process capable of reproducing wiring length distributions of the 5 species, thereby implementing MAP in a biologically plausible manner. We further develop a generative model incorporating MAP, and show that, for the 5 species, the generated network exhibits high similarity to the real network. Our work indicates that the brain connectivity evolves to be structurally diversified by maximizing entropy to support efficient interareal communication, providing a potential organizational principle of brain networks.
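The principle invoked above has a compact numerical check (a hedged sketch, not the paper's model): among densities on wiring lengths with a fixed mean, the exponential density maximizes Shannon's entropy, which can be verified on a grid against any competitor with the same mean.

```python
import numpy as np

lengths = np.linspace(0.01, 20, 4000)   # discretized wiring-length support
dl = lengths[1] - lengths[0]

def entropy(p):
    """Differential entropy (nats) of a density sampled on `lengths`."""
    p = p / (p * dl).sum()               # renormalize on the grid
    nz = p > 0
    return -(p[nz] * np.log(p[nz]) * dl).sum()

mean_len = 1.0
# Maximum-entropy density under a fixed-mean constraint: exponential
p_exp = np.exp(-lengths / mean_len) / mean_len
# A competitor with the same mean: Gamma(shape=2, scale=0.5)
p_gamma = 4 * lengths * np.exp(-2 * lengths)

assert entropy(p_exp) > entropy(p_gamma)
```

The paper's setting adds further constraints (spatial locations of areas or neurons), so the resulting maximum-entropy distribution is more structured than a plain exponential; the sketch only shows the variational logic.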
Neuromatch Academy: Teaching Computational Neuroscience with Global Accessibility
Viegen, Tara, Akrami, Athena,
Bonnen, Kathryn, and
32 more authors
Neuromatch Academy (NMA) designed and ran a fully online 3-week Computational Neuroscience Summer School for 1757 students with 191 teaching assistants (TAs) working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity and universal accessibility.
Network mechanism for insect olfaction
Pyzza, Pamela B., Newhall, Katherine A.,
Kovačič, Gregor, and
2 more authors
Early olfactory pathway responses to the presentation of an odor exhibit remarkably similar dynamical behavior across phyla from insects to mammals, and frequently involve transitions among quiescence, collective network oscillations, and asynchronous firing. We hypothesize that the time scales of fast excitation and fast and slow inhibition present in these networks may be the essential element underlying this similar behavior, and design an idealized, conductance-based integrate-and-fire model to verify this hypothesis via numerical simulations. To better understand the mathematical structure underlying the common dynamical behavior across species, we derive a firing-rate model and use it to extract a slow passage through a saddle-node-on-an-invariant-circle bifurcation structure. We expect this bifurcation structure to provide new insights into the understanding of the dynamical behavior of neuronal assemblies and that a similar structure can be found in other sensory systems.
2020
A computational investigation of electrotonic coupling between pyramidal cells in the cortex
Crodelle, Jennifer, Douglas Zhou,
Kovačič, Gregor, and
1 more author
The existence of electrical communication among pyramidal cells (PCs) in the adult cortex has been debated by neuroscientists for several decades. Gap junctions (GJs) among cortical interneurons have been well documented experimentally and their functional roles have been proposed by both computational neuroscientists and experimentalists alike. Experimental evidence for similar junctions among pyramidal cells in the cortex, however, has remained elusive due to the apparent rarity of these couplings among neurons. In this work, we develop a neuronal network model that includes observed probabilities and strengths of electrotonic coupling between PCs and gap-junction coupling among interneurons, in addition to realistic synaptic connectivity among both populations. We use this network model to investigate the effect of electrotonic coupling between PCs on network behavior with the goal of theoretically addressing this controversy of existence and purpose of electrotonically coupled PCs in the cortex.
Computational neuroscience: a frontier of the 21st century
Wang, Xiao-Jing, Hu, Hailan,
Huang, Chengcheng, and
11 more authors
The exponential time differencing (ETD) method allows using a large time step to efficiently evolve stiff systems such as Hodgkin-Huxley (HH) neural networks. For pulse-coupled HH networks, the synaptic spike times cannot be predetermined and are coupled to the neuron’s own trajectory. This presents a challenging issue for the design of an efficient numerical simulation algorithm. The stiffness of the HH equations differs markedly, for example, between the spike and non-spike regions. Here, we design a second-order adaptive exponential time differencing algorithm (AETD2) for the numerical evolution of HH neural networks. Compared with the regular second-order Runge-Kutta method (RK2), our AETD2 method can use time steps one order of magnitude larger and improve computational efficiency more than tenfold while accurately capturing the traces of membrane potentials of HH neurons. This high accuracy and efficiency are robustly obtained and do not depend on the dynamical regimes, connectivity structure, or network size.
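A minimal sketch of the exponential time differencing idea on a scalar stiff test problem (the AETD2 scheme itself is second-order and adaptive; the first-order ETD recursion, test equation, and step size below are illustrative assumptions): the stiff linear part is integrated exactly, so the step size is no longer limited by the stability bound that cripples forward Euler.

```python
import numpy as np

# Stiff test equation: u' = -lam*u + lam*cos(t); forward Euler needs h < 2/lam.
lam, h, steps = 50.0, 0.1, 100          # here lam*h = 5, far past Euler's limit

u_fe, u_etd, t = 0.0, 0.0, 0.0
for _ in range(steps):
    f = lam * np.cos(t)                  # slow forcing term
    u_fe = u_fe + h * (-lam * u_fe + f)  # forward Euler: unstable at this step
    # ETD1: exact integration of the linear part, forcing frozen over the step
    u_etd = u_etd * np.exp(-lam * h) + (1 - np.exp(-lam * h)) / lam * f
    t += h

assert abs(u_fe) > 1e6                   # Euler blows up
assert abs(u_etd - np.cos(t)) < 0.2      # ETD tracks the slow solution u ~ cos(t)
```

The AETD2 method in the abstract additionally switches step size between the stiff spike region and the non-stiff region, which this scalar sketch does not attempt to show.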
The extended Granger causality analysis for Hodgkin-Huxley neuronal models
How to extract directions of information flow in dynamical systems based on empirical data remains a key challenge. The Granger causality (GC) analysis has been identified as a powerful method to achieve this capability. However, the framework of the GC theory requires that the dynamics of the investigated system can be statistically linearized; i.e., the dynamics can be effectively modeled by linear regressive processes. Under such conditions, the causal connectivity can be directly mapped to the structural connectivity that mediates physical interactions within the system. However, for nonlinear dynamical systems such as the Hodgkin-Huxley (HH) neuronal circuit, the validity of the GC analysis has yet to be addressed; namely, whether the constructed causal connectivity is still identical to the synaptic connectivity between neurons remains unknown. In this work, we apply the nonlinear extension of the GC analysis, i.e., the extended GC analysis, to the voltage time series obtained by evolving the HH neuronal network. In addition, we add a certain amount of measurement or observational noise to the time series to account for realistic conditions of experimental data acquisition. Our numerical results indicate that the causal connectivity obtained through the extended GC analysis is consistent with the underlying synaptic connectivity of the system. This consistency is also insensitive to dynamical regimes, e.g., a chaotic or non-chaotic regime. Since the extended GC analysis could in principle be applied to any nonlinear dynamical system as long as its attractor is low dimensional, our results may potentially be extended to the GC analysis in other settings.
A Combined Offline–Online Algorithm for Hodgkin–Huxley Neural Networks
Tian, Zhong-qi Kyle, Crodelle, Jennifer, and Douglas Zhou
Spiking neural networks are widely applied to simulate cortical dynamics in the brain and are regarded as the next generation of machine learning. The classical Hodgkin–Huxley (HH) neuron is the foundation of all spiking neural models. In numerical simulation, however, the stiffness of the nonlinear HH equations during an action potential (a spike) period prohibits the use of large time steps for numerical integration. Outside of this stiff period, the HH equations can be efficiently simulated with a relatively large time step. In this work, we present an efficient and accurate offline–online combined method that stops evolving the HH equations during an action potential period, uses a pre-computed (offline) high-resolution data set to determine the voltage value during the spike, and restarts the time evolution of the HH equations after the stiff period using reset values interpolated from the offline data set. Our method allows for time steps an order of magnitude larger than those used in the standard Runge–Kutta (RK) method, while accurately capturing dynamical properties of HH neurons. In addition, this offline–online method robustly achieves a maximum of a tenfold decrease in computation time as compared to RK methods, a result that is independent of network size.
Neural networks of different species, brain areas and states can be characterized by the probability polling state
Xu, Zhi-Qin John, Gu, Xiaowei,
Li, Chengyu, and
3 more authors
Cortical networks are complex systems of a great many interconnected neurons that operate in collective dynamical states. To understand how cortical neural networks function, it is important to identify their common dynamical operating states from the probabilistic viewpoint. Probabilistic characteristics of these operating states often underlie network functions. Here, using multi-electrode data from three separate experiments, we identify and characterize a cortical operating state (the “probability polling” or “p-polling” state), common across mouse and monkey with different behaviors. If the interaction among neurons is weak, the p-polling state provides a quantitative understanding of how the high dimensional probability distribution of firing patterns can be obtained by the low-order maximum entropy formulation, effectively utilizing a low dimensional stimulus-coding structure. These results show evidence for generality of the p-polling state and in certain situations its advantage of providing a mathematical validation for the low-order maximum entropy principle as a coding strategy.
2019
Compressive Sensing Inference of Neuronal Network Connectivity in Balanced Neuronal Dynamics
Determining the structure of a network is of central importance to understanding its function in both neuroscience and applied mathematics. However, recovering the structural connectivity of neuronal networks remains a fundamental challenge both theoretically and experimentally. While neuronal networks operate in certain dynamical regimes, which may influence their connectivity reconstruction, there is widespread experimental evidence of a balanced neuronal operating state in which strong excitatory and inhibitory inputs are dynamically adjusted such that neuronal voltages primarily remain near resting potential. Utilizing the dynamics of model neurons in such a balanced regime in conjunction with the ubiquitous sparse connectivity structure of neuronal networks, we develop a compressive sensing theoretical framework for efficiently reconstructing network connections by measuring individual neuronal activity in response to a relatively small ensemble of random stimuli injected over a short time scale. By tuning the network dynamical regime, we determine that the highest fidelity reconstructions are achievable in the balanced state. We hypothesize the balanced dynamics observed in vivo may therefore be a result of evolutionary selection for optimal information encoding and expect the methodology developed to be generalizable for alternative model networks as well as experimental paradigms.
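The compressive sensing step described above can be sketched in miniature (a toy sketch under stated assumptions, not the paper's algorithm): a sparse connection vector is recovered from far fewer random linear measurements than unknowns by l1-regularized least squares; the ISTA solver, dimensions, and coefficient scales below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5                     # unknown connections, measurements, nonzeros
w_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
w_true[support] = rng.uniform(0.5, 1.5, k) * rng.choice([-1.0, 1.0], k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random stimulus ensemble
y = A @ w_true                                   # measured responses

def ista(A, y, lam=0.01, iters=3000):
    """Iterative soft-thresholding for min 0.5*||A w - y||^2 + lam*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = w - A.T @ (A @ w - y) / L    # gradient step on the quadratic term
        w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return w

w_hat = ista(A, y)
assert np.max(np.abs(w_hat - w_true)) < 0.15   # sparse vector recovered from m << n
```

The paper's contribution is showing that a balanced dynamical regime makes the effective measurement matrix well-conditioned for such recovery; the sketch only shows the recovery step itself.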
Emergence of spatially periodic diffusive waves in small-world neuronal networks
Gu, Qinglong L., Xiao, Yanyang, Songting Li, and
1 more author
It has been observed in experiment that the anatomical structure of neuronal networks in the brain possesses the feature of small-world networks. Yet how the small-world structure affects network dynamics remains to be fully clarified. Here we study the dynamics of a class of small-world networks consisting of pulse-coupled integrate-and-fire (I&F) neurons. Under stochastic Poisson drive, we find that the activity of the entire network resembles diffusive waves. To understand its underlying mechanism, we analyze the simplified regular-lattice network consisting of firing-rate-based neurons as an approximation to the original I&F small-world network. We demonstrate both analytically and numerically that, with strongly coupled connections, in the absence of noise, the activity of the firing-rate-based regular-lattice network spatially forms a static grating pattern that corresponds to the spatial distribution of the firing rate observed in the I&F small-world neuronal network. We further show that the spatial grating patterns with different phases comprise the continuous attractor of both the I&F small-world and firing-rate-based regular-lattice network dynamics. In the presence of input noise, the activity of both networks is perturbed along the continuous attractor, which gives rise to the diffusive waves. Our numerical simulations and theoretical analysis may potentially provide insights into the understanding of the generation of wave patterns observed in cortical networks.
Dendritic computations captured by an effective point neuron model
Songting Li, Liu, Nan,
Zhang, Xiaohui, and
3 more authors
Complex dendrites in general present formidable challenges to understanding neuronal information processing. To circumvent the difficulty, a prevalent viewpoint simplifies the neuronal morphology as a point representing the soma, and the excitatory and inhibitory synaptic currents originated from the dendrites are treated as linearly summed at the soma. Despite its extensive applications, the validity of the synaptic current description remains unclear, and the existing point neuron framework fails to characterize the spatiotemporal aspects of dendritic integration supporting specific computations. Using electrophysiological experiments, realistic neuronal simulations, and theoretical analyses, we demonstrate that the traditional assumption of linear summation of synaptic currents is oversimplified and underestimates the inhibition effect. We then derive a form of synaptic integration current within the point neuron framework to capture dendritic effects. In the derived form, the interaction between each pair of synaptic inputs on the dendrites can be reliably parameterized by a single coefficient, suggesting the inherent low-dimensional structure of dendritic integration. We further generalize the form of synaptic integration current to capture the spatiotemporal interactions among multiple synaptic inputs and show that a point neuron model with the synaptic integration current incorporated possesses the computational ability of a spatial neuron with dendrites, including direction selectivity, coincidence detection, logical operation, and a bilinear dendritic integration rule discovered in experiment. Our work amends the modeling of synaptic inputs and improves the computational power of a modeling neuron within the point neuron framework.
A Role for Electrotonic Coupling Between Cortical Pyramidal Cells
Crodelle, Jennifer, Douglas Zhou,
Kovačič, Gregor, and
1 more author
Many brain regions communicate information through synchronized network activity. Electrical coupling among the dendrites of interneurons in the cortex has been implicated in forming and sustaining such activity in the cortex. Evidence for the existence of electrical coupling among cortical pyramidal cells, however, has been largely absent. A recent experimental study measured properties of electrical connections between pyramidal cells in the cortex deemed “electrotonic couplings.” These junctions were seen to occur pair-wise, sparsely, and often coexist with electrically-coupled interneurons. Here, we construct a network model to investigate possible roles for these rare, electrotonically-coupled pyramidal-cell pairs. Through simulations, we show that electrical coupling among pyramidal-cell pairs significantly enhances coincidence-detection capabilities and increases network spike-timing precision. Further, a network containing multiple pairs exhibits large variability in its firing pattern, possessing a rich coding structure.
Determination of effective synaptic conductances using somatic voltage clamp
Songting Li, Liu, Nan,
Yao, Li, and
3 more authors
The interplay between excitatory and inhibitory neurons imparts rich functions of the brain. To understand the synaptic mechanisms underlying neuronal computations, a fundamental approach is to study the dynamics of excitatory and inhibitory synaptic inputs of each neuron. The traditional method of determining input conductance, which has been applied for decades, employs the synaptic current-voltage (I-V) relation obtained via voltage clamp. Due to the space clamp effect, the measured conductance is different from the local conductance on the dendrites. Therefore, the interpretation of the measured conductance remains to be clarified. Using theoretical analysis, electrophysiological experiments, and realistic neuron simulations, here we demonstrate that there does not exist a transform between the local conductance and the conductance measured by the traditional method, due to the neglect of a nonlinear interaction between the clamp current and the synaptic current in the traditional method. Consequently, the conductance determined by the traditional method may not correlate with the local conductance on the dendrites, and its value could be unphysically negative as observed in experiment. To circumvent the challenge of the space clamp effect and elucidate synaptic impact on neuronal information processing, we propose the concept of effective conductance which is proportional to the local conductance on the dendrite and reflects directly the functional influence of synaptic inputs on somatic membrane potential dynamics, and we further develop a framework to determine the effective conductance accurately. Our work suggests re-examination of previous studies involving conductance measurement and provides a reliable approach to assess synaptic influence on neuronal computation.
Maximum entropy principle analysis in network systems with short-time recordings
Xu, Zhi-Qin John, Crodelle, Jennifer, Douglas Zhou, and
1 more author
In many realistic systems, maximum entropy principle (MEP) analysis provides an effective characterization of the probability distribution of network states. However, to implement the MEP analysis, a sufficiently long data recording is generally required, e.g., hours of spiking recordings of neurons in neuronal networks. The issue of whether the MEP analysis can be successfully applied to network systems with data from short-time recordings has yet to be fully addressed. In this work, we investigate relationships underlying the probability distributions, moments, and effective interactions in the MEP analysis and then show that, with short-time recordings of network dynamics, the MEP analysis can be applied to reconstructing probability distributions of network states that are much more accurate than those directly measured from the short-time recording. Using spike trains obtained from both Hodgkin-Huxley neuronal networks and electrophysiological experiments, we verify our results and demonstrate that the MEP analysis provides a tool to investigate the neuronal population coding properties even for short-time recordings.
Balanced Active Core in Heterogeneous Neuronal Networks
Gu, Qing-long L., Songting Li,
Dai, Wei P., and
2 more authors
It is hypothesized that cortical neuronal circuits operate in a global balanced state, i.e., the majority of neurons fire irregularly by receiving balanced inputs of excitation and inhibition. Meanwhile, it has been observed in experiments that sensory information is often sparsely encoded by only a small set of firing neurons, while neurons in the rest of the network are silent. The phenomenon of sparse coding challenges the hypothesis of a global balanced state in the brain. To reconcile this, here we address the issue of whether a balanced state can exist in a small number of firing neurons by taking account of the heterogeneity of network structure such as scale-free and small-world networks. We propose necessary conditions and show that, under these conditions, for sparsely but strongly connected heterogeneous networks with various types of single-neuron dynamics, despite the fact that the whole network receives external inputs, there is a small active subnetwork (active core) inherently embedded within it. The neurons in this active core have relatively high firing rates while the neurons in the rest of the network are quiescent. Surprisingly, although the whole network is heterogeneous and unbalanced, the active core possesses a balanced state and its connectivity structure is close to a homogeneous Erdős-Rényi network. The dynamics of the active core can be well predicted using the Fokker-Planck equation. Our results suggest that the balanced state may be maintained by a small group of spiking neurons embedded in a large heterogeneous network in the brain. The existence of the small active core reconciles the balanced state and the sparse coding, and also provides a potential dynamical scenario underlying sparse coding in neuronal networks.
Dynamical and Coupling Structure of Pulse-Coupled Networks in Maximum Entropy Analysis
Maximum entropy principle (MEP) analysis with few non-zero effective interactions successfully characterizes the distribution of dynamical states of pulse-coupled networks in many fields, e.g., in neuroscience. To better understand the underlying mechanism, we establish a relation between the dynamical structure, i.e., the effective interactions in the MEP analysis, and the anatomical coupling structure of pulse-coupled networks, which helps explain how a sparse coupling structure can lead to sparse coding by effective interactions. This relation quantitatively displays how the dynamical structure is closely related to the anatomical coupling structure.
Fast algorithms for simulation of neuronal dynamics based on the bilinear dendritic integration rule
We aim to develop fast algorithms for neuronal simulations to capture the dynamics of a neuron with realistic dendritic morphology. To achieve this, we perform the asymptotic analysis on a cable neuron model with branched dendrites. Using the second-order asymptotic solutions, we derive a bilinear dendritic integration rule to characterize the voltage response at the soma when receiving multiple spatiotemporal synaptic inputs from dendrites, with a dependency on the voltage state of the neuron at input arrival times. Based on the derived bilinear rule, we finally propose two fast algorithms and demonstrate numerically that, in comparison with solving the original cable neuron model numerically, the algorithms can reduce the computational cost of simulation for neuronal dynamics enormously while retaining relatively high accuracy in terms of both sub-threshold dynamics and firing statistics.
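The bilinear rule referred to above can be stated compactly; the schematic form below is a sketch (the coefficient value and voltage amplitudes are made-up numbers for illustration, and in the derived rule the coefficient depends on input locations and arrival times): the somatic response to concurrent excitatory and inhibitory inputs is the sum of the individual responses plus a bilinear correction.

```python
def bilinear_response(v_e, v_i, k):
    """Schematic bilinear dendritic integration rule: the somatic response to a
    paired E/I input is the sum of the individual responses v_e and v_i plus a
    bilinear correction with coefficient k (units 1/mV)."""
    return v_e + v_i + k * v_e * v_i

# Hypothetical values: a 4 mV EPSP, a -2 mV IPSP, and k = 0.1 /mV.
v = bilinear_response(4.0, -2.0, 0.1)   # 4 - 2 + 0.1*4*(-2) = 1.2 mV
```

Because evaluating this rule costs a few multiplications per input pair rather than a full cable-equation solve, precomputing the coefficients once and applying the rule online is what yields the large speedups claimed in the abstract.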
The role of sparsity in inverse problems for networks with nonlinear dynamics
Barranca, Victor J., Kovačič, Gregor, and Douglas Zhou
Sparsity is a fundamental characteristic of numerous biological, social, and technological networks. Network connectivity frequently demonstrates sparsity on multiple spatial scales and network inputs may also possess sparse representations in appropriate domains. In this work, we address the role of sparsity for solving inverse problems in networks with nonlinear and time-evolving dynamics. In the context of pulse-coupled integrate-and-fire networks, we demonstrate that nonlinear network dynamics imparts a compressive coding of both network connectivity and inputs provided they possess a sparse structure. This work thereby formulates an efficient sparsity-based framework for solving several classes of inverse problems in nonlinear network dynamics. Driving the network with a small ensemble of random inputs, we derive a mean-field set of underdetermined linear systems relating the network inputs to the corresponding activity of the nodes via the feed-forward connectivity matrix. In reconstructing the network connections, we utilize compressive sensing theory, which facilitates the recovery of sparse solutions to such underdetermined linear systems. Using the reconstructed connectivity matrix, we are capable of accurately recovering detailed network inputs, which may vary in time, distinct from the random input ensemble. This framework underlines the central role of sparsity in information transmission through network dynamics, providing new insight into the structure-function relationship for high-dimensional networks with nonlinear dynamics. Considering the reconstruction of structural connectivity in large networks is a significant and challenging problem in the study of complex networks, we hypothesize that analogous reconstruction methods taking advantage of sparsity may facilitate key advances in the field.
Representing conditional Granger causality by vector auto-regressive parameters
Granger Causality (GC) has been widely applied in various scientific fields to reveal causal relationships between dynamical variables. The mathematical framework of GC is based on the vector auto-regression (VAR) model, and the GC value from one variable to the other is defined as the logarithmic ratio of the variances of the two prediction errors obtained by excluding and including the second variable in the VAR model, respectively. Beyond its definition, GC is also reflected in the regression parameters of the VAR model; e.g., larger regression coefficients generally indicate stronger causal interactions. Yet an explicit description of how the GC value depends on the VAR parameters for a general multi-variable case remains lacking. In this work, we aim to bridge this gap by expressing conditional GC in terms of the VAR parameters, which provides an alternative interpretation of GC with new intuition. The analysis developed in this work may also benefit future studies of the VAR model.
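The definition restated in this abstract can be written directly in code; the bivariate, order-1 example below is a minimal sketch with assumed coupling coefficients (the paper's subject, the general conditional multi-variable case, is more involved).

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50000
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):                       # x drives y with coefficient 0.4
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def gc(source, target):
    """Order-1 GC value: log ratio of the residual variances of the
    restricted (target's own past) vs full (plus source's past) AR models."""
    Y = target[1:]
    X_restricted = target[:-1, None]
    X_full = np.column_stack([target[:-1], source[:-1]])
    def resid_var(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.var(Y - X @ beta)
    return np.log(resid_var(X_restricted) / resid_var(X_full))

assert gc(x, y) > 0.1     # including x's past shrinks the prediction error for y
assert gc(y, x) < 0.01    # no feedback from y to x, so the GC value is near zero
```

Expressing this log variance ratio in terms of the fitted `beta` coefficients, for arbitrary order and with conditioning variables, is the gap the paper addresses.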
2018
Mechanisms underlying contrast-dependent orientation selectivity in mouse V1
Dai, Wei P., Douglas Zhou,
McLaughlin, David W., and
1 more author
Recently, sophisticated optogenetic tools for mouse have enabled many detailed studies of the neuronal circuits of its primary visual cortex (V1), providing much more specific information than is available for cat or monkey. Among various other differences, they show a striking contrast dependency in orientation selectivity in mouse V1 rather than the well-known contrast invariance for cat and monkey. Constrained by the existing experiment data, we develop a comprehensive large-scale model of an effective input layer of mouse V1 that successfully reproduces the contrast-dependent phenomena and many other response properties. The model helps to probe different mechanisms based on excitation–inhibition balance that underlie both contrast dependencies and invariance, and it provides implications for future studies on these circuits.
The Dynamics of Balanced Spiking Neuronal Networks Under Poisson Drive Is Not Chaotic
Gu, Qing-long L., Zhong-qi K. Tian,
Kovačič, Gregor, and
2 more authors
Some previous studies have shown that chaotic dynamics in the balanced state, i.e., one with balanced excitatory and inhibitory inputs into cortical neurons, is the underlying mechanism for the irregularity of neural activity. In this work, we focus on networks of current-based integrate-and-fire neurons with delta-pulse coupling. While we show that the balanced state robustly persists in this system within a broad range of parameters, we mathematically prove that the largest Lyapunov exponent of this type of neuronal network is negative. Therefore, irregular firing activity can exist in the system without chaotic dynamics. That is, the irregularity of balanced neuronal networks need not arise from chaos.
Causal inference in nonlinear systems: Granger causality versus time-delayed mutual information
The Granger causality (GC) analysis has been extensively applied to infer causal interactions in dynamical systems arising from economy and finance, physics, bioinformatics, neuroscience, social science, and many other fields. In the presence of potential nonlinearity in these systems, the validity of the GC analysis in general is questionable. To illustrate this, here we first construct minimal nonlinear systems and show that the GC analysis fails to infer causal relations in these systems - it gives rise to all types of incorrect causal directions. In contrast, we show that the time-delayed mutual information (TDMI) analysis is able to successfully identify the direction of interactions underlying these nonlinear systems. We then apply both methods to neuroscience data collected from experiments and demonstrate that the TDMI analysis but not the GC analysis can identify the direction of interactions among neuronal signals. Our work exemplifies inference hazards in the GC analysis in nonlinear systems and suggests that the TDMI analysis can be an appropriate tool in such a case.
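The contrast drawn above can be illustrated with a minimal histogram-based TDMI estimator (a generic sketch, not the estimator or data of the paper); the quadratic toy coupling below is an illustrative case where linear cross-correlation vanishes while TDMI still detects the interaction and its delay.

```python
import numpy as np

def tdmi(x, y, delay, bins=16):
    """Histogram estimate of the time-delayed mutual information
    I(x(t); y(t + delay)) in nats."""
    if delay > 0:
        a, b = x[:-delay], y[delay:]
    elif delay < 0:
        a, b = x[-delay:], y[:delay]
    else:
        a, b = x, y
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# purely nonlinear coupling x -> y with a 3-step delay:
# the linear correlation of x and y is zero, but TDMI is not
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
y = np.empty_like(x)
y[3:] = x[:-3] ** 2 + 0.1 * rng.standard_normal(len(x) - 3)
y[:3] = rng.standard_normal(3)

mi_causal = tdmi(x, y, 3)   # peaks at the true interaction delay
mi_zero = tdmi(x, y, 0)     # essentially the estimator's bias floor
```

Scanning `delay` over a range and locating the TDMI peak is the usual way such an analysis reads off both the presence and the latency of a directed interaction.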
Effects of Firing Variability on Network Structures with Spike-Timing-Dependent Plasticity
Synaptic plasticity is believed to be the biological substrate underlying learning and memory. One of the most widespread forms of synaptic plasticity, spike-timing-dependent plasticity (STDP), uses the spike timing information of presynaptic and postsynaptic neurons to induce synaptic potentiation or depression. An open question is how STDP organizes the connectivity patterns in neuronal circuits. Previous studies have placed much emphasis on the role of firing rate in shaping connectivity patterns. Here, we go beyond the firing rate description to develop a self-consistent linear response theory that incorporates the information of both firing rate and firing variability. By decomposing the pairwise spike correlation into one component associated with local direct connections and the other associated with indirect connections, we identify two distinct regimes regarding the network structures learned through STDP. In one regime, the contribution of the direct-connection correlations dominates over that of the indirect-connection correlations in the learning dynamics; this gives rise to a network structure consistent with the firing rate description. In the other regime, the contribution of the indirect-connection correlations dominates in the learning dynamics, leading to a network structure different from the firing rate description. We demonstrate that the heterogeneity of firing variability across neuronal populations induces a temporally asymmetric structure of indirect-connection correlations. This temporally asymmetric structure underlies the emergence of the second regime. Our study provides a new perspective that emphasizes the role of high-order statistics of spiking activity in the spike-correlation-sensitive learning dynamics.
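For readers unfamiliar with the learning rule itself, a standard pair-based STDP kernel can be sketched as follows (a textbook form, not the self-consistent linear-response theory developed in the paper; all parameter values are illustrative).

```python
import numpy as np

def stdp_dw(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Total pair-based STDP weight change for two spike trains (times in ms):
    pre-before-post pairs potentiate, post-before-pre pairs depress."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:                              # causal pair: potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:                            # anti-causal pair: depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw

dw_causal = stdp_dw([0.0, 50.0], [5.0, 55.0])       # pre leads post: dw > 0
dw_anticausal = stdp_dw([5.0, 55.0], [0.0, 50.0])   # post leads pre: dw < 0
```

The sign flip with spike ordering is exactly the timing sensitivity that makes the learned connectivity depend on spike correlations, not just firing rates.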
2017
Spike-Triggered Regression for Synaptic Connectivity Reconstruction in Neuronal Networks
Zhang, Yaoyu, Xiao, Yanyang, Douglas Zhou, and
1 more author
How neurons are connected in the brain to perform computation is a key issue in neuroscience. Recently, the development of calcium imaging and multi-electrode array techniques has greatly enhanced our ability to measure the firing activities of neuronal populations at the single-cell level. Meanwhile, the intracellular recording technique is able to measure the subthreshold voltage dynamics of a neuron. Our work addresses the issue of how to combine these measurements to reveal the underlying network structure. We propose the spike-triggered regression (STR) method, which employs both the voltage trace and the firing activity of the neuronal population to reconstruct the underlying synaptic connectivity. Our numerical study of conductance-based integrate-and-fire neuronal networks shows that only a short data record of 20 ∼ 100 s is required for an accurate recovery of network topology as well as the corresponding coupling strengths. Our method can yield an accurate reconstruction of a large neuronal network even in the case of dense connectivity and nearly synchronous dynamics, which many other network reconstruction methods cannot successfully handle. In addition, we point out that, for sparse networks, the STR method can infer the coupling strength between each pair of neurons with high accuracy in the absence of global information about all other neurons.
The characterization of hippocampal theta-driving neurons-A time-delayed mutual information approach
Songting Li, Xu, Jiamin,
Chen, Guifen, and
3 more authors
Interneurons are important for computation in the brain, in particular, in the information processing involving the generation of theta oscillations in the hippocampus. Yet the functional role of interneurons in the theta generation remains to be elucidated. Here we use time-delayed mutual information to investigate information flow related to a special class of interneurons, theta-driving neurons in the hippocampal CA1 region of the mouse, to characterize the interactions between theta-driving neurons and theta oscillations. For freely behaving mice, our results show that information flows from the activity of theta-driving neurons to the theta wave, and the firing activity of theta-driving neurons shares a substantial amount of information with the theta wave regardless of behavioral states. Via realistic simulations of a CA1 pyramidal neuron, we further demonstrate that theta-driving neurons possess the characteristics of the cholecystokinin-expressing basket cells (CCK-BC). Our results suggest that it is important to take into account the role of CCK-BC in the generation and information processing of theta oscillations.
A dynamical state underlying the second order maximum entropy principle in neuronal networks
Xu, Zhi-Qin John, Bi, Guoqiang, Douglas Zhou, and
1 more author
The maximum entropy principle is widely used in diverse fields. We address the issue of why the second-order maximum entropy model, using only firing rates and second-order correlations of neurons as constraints, can well capture the observed distribution of neuronal firing patterns in many neuronal networks, thus conferring its great advantage that the degree of complexity in the analysis of neuronal activity data reduces drastically from O(2^n) to O(n^2), where n is the number of neurons under consideration. We first derive an expression for the effective interactions of the nth-order maximum entropy model using all orders of correlations of neurons as constraints and show that there exists a recursive relation among the effective interactions in the model. Then, via a perturbative analysis, we explore a possible dynamical state in which this recursive relation gives rise to strengths of higher-order interactions that are always smaller than those of the lower orders. Finally, we invoke this hierarchy of effective interactions to provide a possible mechanism underlying the success of the second-order maximum entropy model and to predict whether such a model can successfully capture the observed distribution of neuronal firing patterns.
2016
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J., Kovačič, Gregor, Douglas Zhou, and
1 more author
Compressive sensing (CS) theory demonstrates that by using uniformly random sampling, rather than uniformly spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the optimal parameter choice for localized random CS is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
Compressive sensing reconstruction of feed-forward connectivity in pulse-coupled nonlinear networks
Barranca, Victor J., Douglas Zhou, and
Cai, David
Utilizing the sparsity ubiquitous in real-world network connectivity, we develop a theoretical framework for efficiently reconstructing sparse feed-forward connections in a pulse-coupled nonlinear network through its output activities. Using only a small ensemble of random inputs, we solve this inverse problem through compressive sensing theory, based on a hidden linear structure intrinsic to the nonlinear network dynamics. The accuracy of the reconstruction is further verified by the fact that complex inputs can be well recovered using the reconstructed connectivity. We expect that this Rapid Communication provides a new perspective for understanding the structure-function relationship as well as the compressive sensing principle in nonlinear network dynamics.
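The core CS step, recovering a sparse vector from a few random linear measurements, can be illustrated with a generic orthogonal-matching-pursuit sketch (a stand-in for whatever solver the paper uses; the sizes, noiseless sensing model, and known sparsity level are all illustrative assumptions).

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal matching pursuit: greedily pick the s columns of A most
    correlated with the residual, then refit by least squares each step."""
    residual, support = b.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# sparse "connectivity" vector observed through a few random projections
rng = np.random.default_rng(1)
n, m, s = 200, 60, 5                     # ambient size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
b = A @ x_true                                 # m << n noiseless measurements

x_hat = omp(A, b, s)                     # exact recovery despite m << n
```

With far fewer measurements than unknowns, the sparsity prior is what makes the inverse problem well-posed, which is the same leverage the reconstruction framework above exploits for sparse feed-forward connections.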
Granger causality analysis with nonuniform sampling and its application to pulse-coupled nonlinear dynamics
Zhang, Yaoyu, Xiao, Yanyang, Douglas Zhou, and
1 more author
The Granger causality (GC) analysis is an effective approach to infer causal relations for time series. However, for data obtained by uniform sampling (i.e., with an equal sampling time interval), it is known that GC can yield unreliable causal inference due to aliasing if the sampling rate is not sufficiently high. To solve this unreliability issue, we consider the nonuniform sampling scheme as it can mitigate aliasing. By developing an unbiased estimation of the power spectral density of nonuniformly sampled time series, we establish a framework of spectrum-based nonparametric GC analysis. Applying this framework to a general class of pulse-coupled nonlinear networks and utilizing a particular spectral structure possessed by these nonlinear network data, we demonstrate that, for such nonlinear networks with nonuniformly sampled data, reliable GC inference can be achieved at a low nonuniform mean sampling rate at which the traditional uniform-sampling GC may lead to spurious causal inference.
Efficient image processing via compressive sensing of integrate-and-fire neuronal network dynamics
Barranca, Victor J., Kovačič, Gregor, Douglas Zhou, and
1 more author
Integrate-and-fire (I&F) neuronal networks are ubiquitous in diverse image processing applications, including image segmentation and visual perception. While conventional I&F network image processing requires the number of nodes composing the network to be equal to the number of image pixels driving the network, we determine whether I&F dynamics can accurately transmit image information when there are significantly fewer nodes than network input-signal components. Although compressive sensing (CS) theory facilitates the recovery of images using very few samples through linear signal processing, it does not address whether similar signal recovery techniques facilitate reconstructions through measurement of the nonlinear dynamics of an I&F network. In this paper, we present a new framework for recovering sparse inputs of nonlinear neuronal networks via compressive sensing. By recovering both one-dimensional inputs and two-dimensional images, resembling natural stimuli, we demonstrate that input information can be well-preserved through nonlinear I&F network dynamics even when the number of network-output measurements is significantly smaller than the number of input-signal components. This work suggests an important extension of CS theory potentially useful in improving the processing of medical or natural images through I&F network dynamics and understanding the transmission of stimulus information across the visual system.
2015
Low-rank network decomposition reveals structural characteristics of small-world networks
Barranca, Victor J., Douglas Zhou, and
Cai, David
Small-world networks occur naturally throughout biological, technological, and social systems. With their prevalence, it is particularly important to prudently identify small-world networks and further characterize their unique connection structure with respect to network function. In this work we develop a formalism for classifying networks and identifying small-world structure using a decomposition of network connectivity matrices into low-rank and sparse components, corresponding to connections within clusters of highly connected nodes and sparse interconnections between clusters, respectively. We show that the network decomposition is independent of node indexing and define associated bounded measures of connectivity structure, which provide insight into the clustering and regularity of network connections. While many existing network characterizations rely on constructing benchmark networks for comparison or fail to describe the structural properties of relatively densely connected networks, our classification relies only on the intrinsic network structure and is quite robust with respect to changes in connection density, producing stable results across network realizations. Using this framework, we analyze several real-world networks and reveal new structural properties, which are often indiscernible by previously established characterizations of network connectivity.
A Novel Characterization of Amalgamated Networks in Natural Systems
Barranca, Victor J., Douglas Zhou, and
Cai, David
Densely-connected networks are prominent among natural systems, exhibiting structural characteristics often optimized for biological function. To reveal such features in highly-connected networks, we introduce a new network characterization determined by a decomposition of network-connectivity into low-rank and sparse components. Based on these components, we discover a new class of networks we define as amalgamated networks, which exhibit large functional groups and dense connectivity. Analyzing recent experimental findings on cerebral cortex, food-web, and gene regulatory networks, we establish the unique importance of amalgamated networks in fostering biologically advantageous properties, including rapid communication among nodes, structural stability under attacks, and separation of network activity into distinct functional modules. We further observe that our network characterization is scalable with network size and connectivity, thereby identifying robust features significant to diverse physical systems, which are typically undetectable by conventional characterizations of connectivity. We expect that studying the amalgamation properties of biological networks may offer new insights into understanding their structure-function relationships.
Analysis of the dendritic integration of excitatory and inhibitory inputs using cable models
We address the question of how a neuron integrates excitatory (E) and inhibitory (I) synaptic inputs from different dendritic sites. For an idealized neuron model with an unbranched dendritic cable, we construct its Green’s function and carry out an asymptotic analysis to obtain its solutions. Using these asymptotic solutions, in the presence of E and I inputs, we can successfully reveal the underlying mechanisms of a dendritic integration rule, which was discovered in a recent experiment. Our analysis can be extended to the multi-branch case to characterize the E-I dendritic integration on any branches. The novel characterization is confirmed by the numerical simulation of a biologically realistic neuron.
2014
Network dynamics for optimal compressive-sensing input-signal recovery
Barranca, Victor J., Kovačič, Gregor, Douglas Zhou, and
1 more author
By using compressive sensing (CS) theory, a broad class of static signals can be reconstructed through a sequence of very few measurements in the framework of a linear system. For networks with nonlinear and time-evolving dynamics, is it similarly possible to recover an unknown input signal from only a small number of network output measurements? We address this question for pulse-coupled networks and investigate the network dynamics necessary for successful input signal recovery. Determining the specific network characteristics that correspond to a minimal input reconstruction error, we are able to achieve high-quality signal reconstructions with few measurements of network output. Using various measures to characterize dynamical properties of network output, we determine that networks with highly variable and aperiodic output can successfully encode network input information with high fidelity and achieve the most accurate CS input reconstructions. For time-varying inputs, we also find that high-quality reconstructions are achievable by measuring network output over a relatively short time window. Even when network inputs change with time, the same optimal choice of network characteristics and corresponding dynamics apply as in the case of static inputs.
A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony
Zhang, Jiwei, Douglas Zhou,
Cai, David, and
1 more author
Homogeneously structured networks of neurons driven by noise can exhibit a broad range of dynamic behavior. This dynamic behavior can range from homogeneity to synchrony, and often incorporates brief spurts of collaborative activity which we call multiple-firing-events (MFEs). These multiple-firing-events depend on neither structured architecture nor structured input, and are an emergent property of the system. Although these MFEs likely play a major role in the neuronal avalanches observed in culture and in vivo, the mechanisms underlying them cannot easily be captured using current population-dynamics models. In this work we introduce a coarse-grained framework which illustrates certain dynamics responsible for the generation of MFEs. By using a new kind of ensemble average, this coarse-grained framework can not only address the nucleation of MFEs, but can also faithfully capture a broad range of dynamic regimes ranging from homogeneity to synchrony.
Sparsity and Compressed Coding in Sensory Systems
Barranca, Victor J., Kovačič, Gregor, Douglas Zhou, and
1 more author
Considering that many natural stimuli are sparse, can a sensory system evolve to take advantage of this sparsity? We explore this question and show that significant downstream reductions in the numbers of neurons transmitting stimuli observed in early sensory pathways might be a consequence of this sparsity. First, we model an early sensory pathway using an idealized neuronal network comprised of receptors and downstream sensory neurons. Then, by revealing a linear structure intrinsic to neuronal network dynamics, our work points to a potential mechanism for transmitting sparse stimuli, related to compressed-sensing (CS) type data acquisition. Through simulation, we examine the characteristics of networks that are optimal in sparsity encoding, and the impact of localized receptive fields beyond conventional CS theory. The results of this work suggest a new network framework of signal sparsity, freeing the notion from any dependence on specific component-space representations. We expect our CS network mechanism to provide guidance for studying sparse stimulus transmission along realistic sensory pathways as well as engineering network designs that utilize sparsity encoding.
Analysis of sampling artifacts on the Granger causality analysis for topology extraction of neuronal dynamics
Douglas Zhou, Zhang, Yaoyu,
Xiao, Yanyang, and
1 more author
Granger causality (GC) is a powerful method for causal inference for time series. In general, the GC value is computed using discrete time series sampled from continuous-time processes with a certain sampling interval length τ, i.e., the GC value is a function of τ. Using the GC analysis for the topology extraction of the simplest integrate-and-fire neuronal network of two neurons, we discuss behaviors of the GC value as a function of τ, which exhibits (i) oscillations, often vanishing at certain finite sampling interval lengths, and (ii) linear vanishing of the GC value as one uses finer and finer sampling. We show that these sampling effects can occur in both linear and nonlinear dynamics: the GC value may vanish in the presence of true causal influence or become nonzero in the absence of causal influence. Without properly taking this issue into account, GC analysis may produce unreliable conclusions about causal influence when applied to empirical data. These sampling artifacts greatly complicate the reliability of causal inference using the GC analysis in general, and the validity of topology reconstruction for networks in particular. We use idealized linear models to illustrate possible mechanisms underlying these phenomena and to gain insight into the general spectral structures that give rise to these sampling effects. Finally, we present an approach to circumvent these sampling artifacts and obtain reliable GC values.
Distribution of correlated spiking events in a population-based approach for Integrate-and-Fire networks
Zhang, Jiwei, Newhall, Katherine, Douglas Zhou, and
1 more author
Randomly connected populations of spiking neurons display a rich variety of dynamics. However, much of the current modeling and theoretical work has focused on two dynamical extremes: on one hand homogeneous dynamics characterized by weak correlations between neurons, and on the other hand total synchrony characterized by large populations firing in unison. In this paper we address the conceptual issue of how to mathematically characterize the partially synchronous "multiple firing events" (MFEs) which manifest in between these two dynamical extremes. We further develop a geometric method for obtaining the distribution of magnitudes of these MFEs by recasting the cascading firing event process as a first-passage time problem, and deriving an analytical approximation of the first passage time density valid for large neuron populations. Thus, we establish a direct link between the voltage distributions of excitatory and inhibitory neurons and the number of neurons firing in an MFE that can be easily integrated into population-based computational methods, thereby bridging the gap between homogeneous firing regimes and total synchrony.
Reliability of the Granger causality inference
Douglas Zhou, Zhang, Yaoyu,
Xiao, Yanyang, and
1 more author
How to characterize information flows in physical, biological, and social systems remains a major theoretical challenge. Granger causality (GC) analysis has been widely used to investigate information flow through causal interactions. We address one of the central questions in GC analysis, that is, the reliability of the GC evaluation and its implications for the causal structures extracted by this analysis. Our work reveals that the manner in which a continuous dynamical process is projected or coarse-grained to a discrete process has a profound impact on the reliability of the GC inference, and different sampling may potentially yield completely opposite inferences. This inference hazard is present for both linear and nonlinear processes. We emphasize that there is a hazard of reaching incorrect conclusions about network topologies, even including statistical (such as small-world or scale-free) properties of the networks, when GC analysis is blindly applied to infer the network topology. We demonstrate this using a small-world network for which a drastic loss of small-world attributes occurs in the reconstructed network using the standard GC approach. We further show how to resolve the paradox that the GC analysis seemingly becomes less reliable when more information is incorporated using finer and finer sampling. Finally, we present strategies to overcome these inference artifacts in order to obtain a reliable GC result.
Granger Causality Network Reconstruction of Conductance-Based Integrate-and-Fire Neuronal Systems
Douglas Zhou, Xiao, Yanyang,
Zhang, Yaoyu, and
2 more authors
Reconstruction of anatomical connectivity from measured dynamical activities of coupled neurons is one of the fundamental issues in the understanding of the structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches to explore the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on conductance-based integrate-and-fire (I&F) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing the system or prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of the system, i.e., the spike timing of neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based I&F neuronal networks, and show that the GC value is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings.
Bilinearity in Spatiotemporal Integration of Synaptic Inputs
Songting Li, Liu, Nan,
Zhang, Xiao, and
2 more authors
Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on an asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third, bilinear term proportional to their product with a proportionality coefficient k. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient k is demonstrated to be nearly independent of the input strengths but dependent on the input times and locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments on rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse.
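The flavor of the bilinear rule can be reproduced with a toy single-compartment passive conductance model (a sketch, not the two-compartment asymptotics of the paper; all constants, time courses, and amplitudes below are illustrative): fit k at one pair of input strengths and check that it predicts the summed response at another, since k should be nearly strength-independent.

```python
import numpy as np

def psp(a_e, a_i, T=60.0, dt=0.05):
    """Passive conductance neuron (voltage relative to rest, mV; time in ms)
    driven by alpha-function E and I conductances; forward-Euler trace."""
    g_l, e_e, e_i = 0.05, 70.0, -10.0        # leak rate (1/ms), reversal offsets
    t = np.arange(0.0, T, dt)
    g_e = a_e * (t / 2.0) * np.exp(1 - t / 2.0)   # tau_E = 2 ms
    g_i = a_i * (t / 5.0) * np.exp(1 - t / 5.0)   # tau_I = 5 ms
    v = np.zeros_like(t)
    for i in range(len(t) - 1):
        dv = -g_l * v[i] - g_e[i] * (v[i] - e_e) - g_i[i] * (v[i] - e_i)
        v[i + 1] = v[i] + dt * dv
    return v

def rule_terms(a_e, a_i):
    """V_E, V_I, V_S read out at the EPSP peak time, as in the rule."""
    v_e, v_i, v_s = psp(a_e, 0.0), psp(0.0, a_i), psp(a_e, a_i)
    p = int(np.argmax(v_e))
    return v_e[p], v_i[p], v_s[p]

# fit k at one input strength, then predict at doubled strengths
ve1, vi1, vs1 = rule_terms(0.005, 0.01)
k = (vs1 - ve1 - vi1) / (ve1 * vi1)
ve2, vi2, vs2 = rule_terms(0.01, 0.02)
err_linear = abs(vs2 - (ve2 + vi2))                  # ignores the interaction
err_bilinear = abs(vs2 - (ve2 + vi2 + k * ve2 * vi2))
```

The linear sum misses the shunting interaction entirely, while the bilinear correction with the previously fitted k absorbs most of it, consistent with k depending on input configuration but not amplitude.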
2013
Causal and Structural Connectivity of Pulse-Coupled Nonlinear Networks
Douglas Zhou, Xiao, Yanyang,
Zhang, Yaoyu, and
2 more authors
We study the reconstruction of structural connectivity for a general class of pulse-coupled nonlinear networks and show that the reconstruction can be successfully achieved through linear Granger causality (GC) analysis. Using spike-triggered correlation of whitened signals, we obtain a quadratic relationship between GC and the network couplings, thus establishing a direct link between the causal connectivity and the structural connectivity within these networks. Our work may provide insight into the applicability of GC in the study of the function of general nonlinear networks.
Spatiotemporal dynamics of neuronal population response in the primary visual cortex
Douglas Zhou, Rangan, Aaditya V.,
McLaughlin, David W., and
1 more author
One of the fundamental questions in system neuroscience is how the brain encodes external stimuli in the early sensory cortex. It has been found in experiments that even some simple sensory stimuli can activate large populations of neurons. It is believed that information can be encoded in the spatiotemporal profile of these collective neuronal responses. Here, we use a large-scale computational model of the primary visual cortex (V1) to study the population responses in V1 as observed in experiments in which monkeys performed visual detection tasks. We show that our model can capture very well the spatiotemporal activities measured by voltage-sensitive-dye-based optical imaging in V1 in the awake state. In our model, the properties of horizontal long-range connections with NMDA conductance play an important role in the correlated population responses and have strong implications for the spatiotemporal coding of neuronal populations. Our computational modeling approach allows us to reveal intrinsic cortical dynamics, separating them from statistical effects arising from averaging procedures in experiment. For example, in experiments, it was shown that there was a spatially antagonistic center-surround structure in the optimal weights of signal detection theory, which was believed to underlie the efficiency of population coding. However, our study shows that this feature is an artifact of data processing.
Phenomenological Incorporation of Nonlinear Dendritic Integration Using Integrate-and-Fire Neuronal Frameworks
It has been discovered recently in experiments that the dendritic integration of excitatory glutamatergic inputs and inhibitory GABAergic inputs in hippocampal CA1 pyramidal neurons obeys a simple arithmetic rule, V_S^Exp ≈ V_E^Exp + V_I^Exp + k V_E^Exp V_I^Exp, where V_S^Exp, V_E^Exp, and V_I^Exp are the respective values of the summed somatic potential, the excitatory postsynaptic potential (EPSP), and the inhibitory postsynaptic potential, measured at the time when the EPSP reaches its peak value. Moreover, the shunting coefficient k in this rule depends only on the spatial location, not the amplitude, of the excitatory or inhibitory input on the dendrite. In this work, we address the theoretical issue of how much of the above dendritic integration rule can be accounted for by subthreshold membrane potential dynamics in the soma, as characterized by the conductance-based integrate-and-fire (I&F) model. We then propose a simple I&F neuron model that incorporates the spatial dependence of the shunting coefficient k through a phenomenological parametrization. Our analytical and numerical results show that this dendritic-integration-rule-based I&F (DIF) model captures many experimental observations and also yields predictions that can be used to verify its validity experimentally. In addition, the DIF model incorporates dendritic integration effects dynamically and is applicable to more general situations than those in experiments, in which excitatory and inhibitory inputs occur simultaneously in time. Finally, we generalize the DIF model to incorporate multiple inputs and obtain a similar dendritic integration rule that is consistent with results obtained using a realistic multi-compartment neuronal model. This generalized DIF model can potentially be used to study network dynamics that involve effects arising from dendritic integration. © 2013 Zhou et al.
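As a minimal illustration of the arithmetic rule above, the sketch below evaluates the summed somatic potential from given EPSP and IPSP amplitudes; the shunting-coefficient value used here is an invented placeholder, since k is location-dependent and must be determined from data.

```python
def summed_potential(v_e, v_i, k=-0.05):
    """Somatic potential from the rule V_S = V_E + V_I + k * V_E * V_I.
    k (the shunting coefficient) depends on the dendritic location of the
    inputs; the default here is purely illustrative, not from the paper."""
    return v_e + v_i + k * v_e * v_i
```

With no inhibitory input (v_i = 0), the rule reduces to linear summation, consistent with the bilinear form of the correction term.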
2012
Coarse-grained event tree analysis for quantifying Hodgkin-Huxley neuronal network dynamics
Yi Sun, Aaditya V. Rangan, Douglas Zhou, and 1 more author
We present an event tree analysis for studying the dynamics of Hodgkin-Huxley (HH) neuronal networks. Our study relies on a coarse-grained projection onto event trees, and onto the event chains that comprise these trees, using a statistical collection of spatiotemporal sequences of relevant physiological observables (such as spiking sequences of multiple neurons). This projection retains information about network dynamics across multiple features, swiftly and robustly. We demonstrate that, even for small differences in inputs, some dynamical regimes of HH networks contain sufficiently high-order statistics, as reflected in the event chains within the event tree analysis; this analysis is therefore effective in discriminating small differences in inputs. Moreover, we use event trees to analyze the results computed from an efficient library-based numerical method proposed in our previous work, in which a pre-computed high-resolution data library of typical neuronal trajectories during the interval of an action potential (spike) allows us to avoid resolving the spikes in detail. In this way, we can evolve the HH networks using time steps one order of magnitude larger than the typical time steps required to resolve the trajectories without the library, while achieving comparable statistical accuracy in terms of average firing rate and power spectra of voltage traces. Our numerical simulation results show that the library method is efficient in the sense that the results generated with much larger time steps contain high-order statistical structure of firing events similar to that obtained using a regular HH solver. We use our event tree analysis to demonstrate these statistical similarities. © 2011 Springer Science+Business Media, LLC.
2010
Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type
Douglas Zhou, Yi Sun, Aaditya V. Rangan, and 1 more author
We discuss how to characterize the long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F)-like neuronal networks, using Lyapunov exponents, and present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions, as in the firing-reset dynamics, and (ii) degeneracy, as in the refractory period, during which voltage-like variables of the network collapse to a single constant value. Using networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with adaptive threshold, we illustrate our method and discuss the rich dynamics of these networks. © Springer Science + Business Media, LLC 2009.
Pseudo-Lyapunov exponents and predictability of Hodgkin-Huxley neuronal network dynamics
Yi Sun, Douglas Zhou, Aaditya V. Rangan, and 1 more author
We present a numerical analysis of the dynamics of all-to-all coupled Hodgkin-Huxley (HH) neuronal networks with Poisson spike inputs. Since the dynamical vector of the system contains discontinuous variables, we propose a pseudo-Lyapunov exponent, adapted from the classical definition using only continuous dynamical variables, and apply it in our numerical investigation. The numerical results for the largest Lyapunov exponent using this new definition are consistent with the dynamical regimes of the network. Three typical dynamical regimes, asynchronous, chaotic, and synchronous, are found as the synaptic coupling strength increases from weak to strong. We use the pseudo-Lyapunov exponent and power spectrum analysis of voltage traces to characterize the type of network behavior. In the nonchaotic (asynchronous or synchronous) dynamical regimes, i.e., the weak or strong coupling limits, the pseudo-Lyapunov exponent is negative and there is good numerical convergence of the solution in the trajectory-wise sense using our numerical methods; consequently, in these regimes the evolution of the neuronal networks is reliable. For the chaotic dynamical regime at intermediate coupling strength, the pseudo-Lyapunov exponent is positive, there is no numerical convergence of the solution, and only statistical quantifications of the numerical results are reliable. Finally, we present numerical evidence that the value of the pseudo-Lyapunov exponent coincides with that of the standard Lyapunov exponent for the systems we have been able to examine. © Springer Science+Business Media, LLC 2009.
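The pseudo-Lyapunov exponent above is defined over the network's continuous variables; as a generic illustration of how a largest Lyapunov exponent is estimated numerically, the sketch below applies the standard derivative-accumulation idea to a toy one-dimensional system (the logistic map, whose exponent at r = 4 is known to be ln 2). This is not the paper's network computation, only the textbook estimator on an invented example.

```python
import math

def largest_lyapunov_logistic(r=4.0, x0=0.2, n=100000, burn=1000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along the orbit."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # local stretching rate
        x = r * x * (1 - x)
    return total / n
```

A positive estimate indicates a chaotic regime and a negative one a regular regime, mirroring how the sign of the pseudo-Lyapunov exponent distinguishes the network's dynamical regimes.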
Dynamics of current-based, Poisson driven, integrate-and-fire neuronal networks
David Cai, Gregor Kovacic, Peter R. Kramer, and 3 more authors
Synchronous and asynchronous dynamics in all-to-all coupled networks of identical, excitatory, current-based, integrate-and-fire (I&F) neurons with delta-impulse coupling currents and Poisson spike-train external drive are studied. Repeating synchronous total firing events, during which all the neurons fire simultaneously, are observed using numerical simulations and found to be the attracting state of the network for a large range of parameters. Mechanisms leading to such events are then described in two regimes of external drive: superthreshold and subthreshold. In the former, a probabilistic argument similar to the proof of the Central Limit Theorem yields the oscillation period, while in the latter, this period is analyzed via an exit time calculation utilizing a diffusion approximation of the Kolmogorov forward equation. Asynchronous dynamics are observed computationally in networks with random transmission delays. Neuronal voltage probability density functions (PDFs) and gain curves (graphs depicting the dependence of the network firing rate on the external drive strength) are analyzed using the steady solutions of the self-consistency problem for a Kolmogorov forward equation. All the voltage PDFs are obtained analytically, and asymptotic solutions for the gain curves are obtained in several physiologically relevant limits. The absence of chaotic dynamics is proved for the type of network under investigation by demonstrating convergence in time of its trajectories. © 2010 International Press.
2009
Library-based numerical reduction of the Hodgkin–Huxley neuron for network simulation
Yi Sun, Douglas Zhou, Aaditya V. Rangan, and 1 more author
We present an efficient library-based numerical method for simulating Hodgkin–Huxley (HH) neuronal networks. The key components of our numerical method are (i) a pre-computed high-resolution data library that contains typical neuronal trajectories (i.e., the time courses of membrane potential and gating variables) during the interval of an action potential (spike), allowing us to avoid resolving the spikes in detail and to use large numerical time steps for evolving the HH neuron equations; and (ii) an algorithm of spike-spike corrections within groups of strongly coupled neurons, to account for spike-spike interactions within a single large time step. Using the library method, we can evolve the HH networks with time steps one order of magnitude larger than the typical time steps required to resolve the trajectories without the library, while achieving comparable resolution in statistical quantifications of network activity, such as average firing rate, interspike interval distribution, and power spectra of voltage traces. Moreover, our large time steps can exceed the stability constraint of standard methods (such as Runge-Kutta (RK) methods) for the original dynamics. We compare our library-based method with RK methods and find that our method captures very well the phase-locked, synchronous, and chaotic dynamics of HH neuronal networks. In essence, our library-based HH neuron solver can be viewed as a numerical reduction of the HH neuron to an integrate-and-fire (I&F) neuronal representation that does not sacrifice the gating dynamics (as is normally done in the analytical reduction to an I&F neuron). © Springer Science+Business Media, LLC 2009.
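To convey the flavor of the library idea, the sketch below precomputes a stereotyped action-potential waveform once and then reads voltages from it during a spike, instead of integrating the stiff HH equations at a small time step. The Gaussian-shaped waveform and all numerical values are invented placeholders, not real HH output or the paper's actual library.

```python
import math

def build_spike_library(dt=0.0001, duration=0.003):
    """Precompute a high-resolution spike template once (done offline).
    Here: a hypothetical Gaussian-shaped waveform peaking at 45 mV,
    resting near -65 mV; a real library would store HH trajectories."""
    n = round(duration / dt)
    return [110.0 * math.exp(-(((i * dt) - 0.001) / 0.0005) ** 2) - 65.0
            for i in range(n)]

def lookup_voltage(library, t_since_spike, duration=0.003):
    """During a spike, read the voltage from the library by phase,
    avoiding small-time-step integration of the spike itself."""
    n = len(library)
    idx = min(round(t_since_spike / duration * n), n - 1)
    return library[idx]
```

The network solver would fall back to this lookup only while a neuron is inside its spike interval, and integrate the smooth subthreshold dynamics with a large time step elsewhere.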
Network-induced chaos in integrate-and-fire neuronal ensembles
Douglas Zhou, Aaditya V. Rangan, Yi Sun, and 1 more author
It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulse-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high-dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which overcomes difficulties encountered by traditional methods for these nonsmooth dynamical systems with degeneracy induced by, e.g., the refractoriness of neurons. © 2009 The American Physical Society.