Comparing the Dynamics of Neural Networks using Edge-Ordered Multi-Directed Graphlets

Gabriel A. Silva
Apr 14, 2022

The integration and propagation of information in the brain depend on the interplay between structural and dynamical properties across many scales of organization. This interplay spans molecular and diffusion interactions in neurons and other neural cells, and physiological scales ranging from individual cells to networks of cells and, eventually, networks of brain regions. To arrive at a systems engineering view of the brain, we need to understand how physically imposed constraints determine neural dynamics and what the brain is able to do.

Implicit in any theoretical or computational work aimed at understanding neural dynamics from appropriate sets of mathematically bounded conditions is the notion of an underlying fundamental structure-function constraint. This fundamental constraint is imposed by the geometry of the structural networks that make up the brain at different scales, and the resultant latencies associated with the flow and transfer of information within and between functional scales. It is a constraint produced by the very way the brain is wired up, and how its constituent parts necessarily interact, e.g. neurons at one scale and brain regions at a higher scale.

The networks that make up the brain, across all the various scales of organization, are physical constructions over which signals and information must travel. These signals are subject to processing times and finite signaling speeds (conduction velocities), and must traverse finite distances to exert their effects, that is, to transfer the information they carry to the next stage in the system. Nothing is infinitely fast. The latencies created by the interplay between structural geometries and signaling speeds are generally at a temporal scale similar to that of the functional processes being considered. So it matters from the perspective of understanding how structure determines function in the brain, and how function modulates structure, for example in learning or plasticity.

Part of the work in our lab is focused on deriving and studying the mathematical relationships and consequences that this fundamental structure-function constraint produces. A central thesis of this work is that the mathematical relationships we discover and prove about information integration and computation in biological neural networks should be independent, as much as possible, of (unnecessary) biological and physiological details that, while important in how the brain works, are not themselves a part of the mathematical description of the algorithms they support.

We want to identify and understand the most basic and simple set of conditions, and write them down in mathematical forms amenable to deep theoretical analyses. In other words, we want to discover fundamental algorithms associated with neural dynamics and neurobiological information representations.

By ‘unnecessary biological and physiological details’ we mean that our theoretical constructions should retain only features deemed essential to the algorithms themselves, while remaining as independent as possible of the details responsible for their implementation in the ‘wetware’ environment of the brain. The algorithms and the mathematical relationships that underlie them should be independent of the neurobiological specifics of any particular experimental model, for example.

In order to formalize these ideas, we constructed a mathematical framework derived from the canonical neurophysiological principles of spatial and temporal summation in neurons.

This framework models the competing interactions of signals incident on a target downstream node (e.g. a neuron) along directed edges coming from other upstream nodes that connect into it in a network. We considered how temporal latencies produce offsets in the timing of the summation of incoming discrete events due to the geometry (physical structure) of the network, and how this results in the activation of the target node (neuron). The framework models how the timing of different signals compete to ‘activate’ nodes they connect into. This could be a network of neurons or a network of brain regions, for example.

At the core of the model is the notion of a refractory period or refractory state for each node. This reflects a period of internal processing, or period of inability to react to other inputs at the individual node level. It is important to note that we do not assume anything about the internal model that produces this refractory state, which could include an internal processing time during which the node is making a decision about how to react. In a geometric network, temporal latencies are due to the relationship between signaling speeds (conduction velocities) and the geometry of the edges on the network (i.e. edge path lengths).
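To make this concrete, here is a minimal sketch of this kind of model as a discrete-event simulation. This is an illustration rather than our actual implementation: the function name `simulate`, the threshold summation rule, and all parameter values are assumptions made for the example.

```python
import heapq

def simulate(edges, latencies, initial_events,
             threshold=1.0, refractory=2.0, t_max=50.0):
    """Toy discrete-event simulation on a directed geometric network.

    edges: dict mapping each node to its downstream nodes
    latencies: dict mapping (upstream, downstream) to signal travel time
    initial_events: list of (time, node, amplitude) seed signals
    """
    event_queue = list(initial_events)  # (arrival_time, node, amplitude)
    heapq.heapify(event_queue)
    running_sum = {}       # node -> summed input since its last activation
    refractory_until = {}  # node -> earliest time it can react again
    activations = []

    while event_queue:
        t, node, amplitude = heapq.heappop(event_queue)
        if t > t_max:
            break
        # Assumed rule: inputs arriving while a node is refractory are lost.
        if t < refractory_until.get(node, float('-inf')):
            continue
        running_sum[node] = running_sum.get(node, 0.0) + amplitude
        if running_sum[node] >= threshold:
            # The node activates, becomes refractory, and emits signals that
            # reach downstream nodes after geometry-imposed latencies.
            activations.append((t, node))
            running_sum[node] = 0.0
            refractory_until[node] = t + refractory
            for target in edges.get(node, []):
                heapq.heappush(
                    event_queue,
                    (t + latencies[(node, target)], target, amplitude))
    return activations

# Two upstream nodes converge on 'c' along edges with different path
# lengths, so their signals arrive offset in time.
edges = {'a': ['c'], 'b': ['c'], 'c': []}
latencies = {('a', 'c'): 1.0, ('b', 'c'): 2.5}
print(simulate(edges, latencies, [(0.0, 'a', 1.0), (0.0, 'b', 1.0)]))
# [(0.0, 'a'), (0.0, 'b'), (1.0, 'c')] -- the signal from 'b' arrives at
# t = 2.5, while 'c' is still refractory until t = 3.0, and is lost.
```

In the toy network above, the later-arriving signal is discarded because the target node is still refractory, which is exactly the kind of interplay between latencies and internal node dynamics the framework is designed to capture.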

We have shown that the interplay between temporal latencies of propagating discrete signaling events on the network relative to the internal dynamics of the individual nodes — when they become refractory and for how long — can have profound effects on the dynamics of the network.

We were also able to derive mathematical bounds on the conditions required for a formal definition of efficient signaling, and we have shown that at least one subtype of neuron (inhibitory Basket cells) optimizes its morphology (shape) in order to preserve what we call the refraction ratio: a balance between the temporal dynamics of individual nodes relative to the dynamics of the entire network. The refraction ratio is one of the mathematical predictions that came out of this work.
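As a simple numerical illustration, one can compute this kind of ratio from the basic quantities involved. The specific form below, the ratio of a node's refractory period to the signaling latency along an incoming edge, is a simplified reading for the purposes of this post; the definitions in our papers are stated in terms of effective refractory periods and effective latencies measured from the dynamics.

```python
# Illustrative only: a simplified refraction-ratio calculation, assuming
# the ratio of a node's refractory period to the geometry-imposed
# signaling latency along one of its incoming edges.
def refraction_ratio(refractory_period, path_length, conduction_velocity):
    latency = path_length / conduction_velocity  # travel time on the edge
    return refractory_period / latency

# A ratio near 1 balances how long a node is internally 'busy' against
# how long signals take to reach it along the network's geometry.
print(refraction_ratio(refractory_period=2.0, path_length=4.0,
                       conduction_velocity=2.0))  # -> 1.0
```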

The framework we developed allows us to compute and study the local dynamics that govern, and give rise to, emergent global dynamics on a network. This framework and its theoretical analysis are a concrete example of brain-invoked algorithms that result directly from the fundamental structure-function constraint, and that are independent of any neurobiological or biophysical implementation details.

An important part of this research program is the requirement for a set of mathematical methods that allow us to catalog, theoretically analyze, and numerically study the rich dynamic patterns that result from network simulations. One direction we have explored is an extension of the theory of graphlets. Graphlets are small connected induced subgraphs of a larger network. They are similar to network motifs in the sense that both analyze networks using subgraphs. However, graphlets are induced subgraphs, whereas network motifs allow partial subgraphs.
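The distinction is easy to see in code. The short example below uses networkx purely for illustration (our analyses do not depend on it): an induced subgraph keeps every edge of the parent graph among the chosen nodes, while a motif-style partial subgraph may omit some.

```python
import networkx as nx

# Parent graph: a triangle on nodes 1, 2, 3 plus a pendant edge to 4.
G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4)])

# A graphlet is an induced subgraph: choosing nodes {1, 2, 3} forces all
# three edges among them to be included.
induced = G.subgraph([1, 2, 3])
print(sorted(induced.edges()))  # [(1, 2), (1, 3), (2, 3)]

# A network motif may be a partial subgraph: the path 1-2-3 also 'occurs'
# in G, even though G additionally contains the edge (1, 3).
partial = nx.Graph([(1, 2), (2, 3)])
print(sorted(partial.edges()))  # [(1, 2), (2, 3)]
```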

While our motivation was to study dynamic simulations from our framework, this work has broader implications: it contributes to network theory and can be used to analyze a wide range of dynamic spatiotemporal networks.

Specifically, we introduced an extension of graphlets that maps the topological transition of a network from one moment in time to the next while preserving causal relationships, including in situations where multiple signals contribute to the initiation of a downstream activation event in a target node.

The approach we developed is capable of analyzing the similarity between two dynamic patterns operating on an underlying structural network with a fixed connectivity topology. The technical approach we took was to create directed graphlets with multiple edges to encode multiple signals between connected nodes, and then apply edge-ordering in order to account for variable (synaptic) weight contributions to the running summation of arriving signals at a given target node.

These constructions extend graphlets to multi-digraphlets (directed graphlets) and edge-ordered multi-digraphlets.
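As a sketch of what these objects look like in practice, using networkx's MultiDiGraph as a stand-in representation (the paper defines the constructions combinatorially), parallel directed edges encode multiple signals between the same pair of nodes, and an integer attribute, here called order, records the arrival order of each signal at its target:

```python
import networkx as nx

g = nx.MultiDiGraph()
g.add_edge('u', 'w', order=1)  # first signal from u to w
g.add_edge('u', 'w', order=3)  # a second, later signal on the same pair
g.add_edge('v', 'w', order=2)  # a signal from v arriving in between

# Reading the edges back in arrival order recovers the ordered sequence of
# contributions to the running summation at the target node w.
for u, v, data in sorted(g.edges(data=True), key=lambda e: e[2]['order']):
    print(f"{u} -> {v} (arrival order {data['order']})")
```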

A technical challenge this presented for us was the enumeration process. We overcame this by describing the space of graphlets with a vector space in which each coordinate represents a multi-digraphlet class. We then studied the graph transitions going from one node activation time to the next. The crucial observation was that a subgraph is preserved between transitions, which creates a constraint on the graphlet-orbit transition matrices. Another important consideration was to conduct pairwise comparisons by only comparing graphlets observed in one or both synaptic signal graphs, which saves computational resources.
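A minimal sketch of that last point, assuming each dynamic pattern has already been reduced to counts of multi-digraphlet classes; the class labels, the use of cosine similarity as the metric, and the function name are all assumptions made for this example:

```python
from collections import Counter
import math

def similarity(counts_a, counts_b):
    """Cosine similarity over only the graphlet classes actually observed."""
    observed = sorted(set(counts_a) | set(counts_b))
    va = [counts_a.get(g, 0) for g in observed]
    vb = [counts_b.get(g, 0) for g in observed]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = (math.sqrt(sum(x * x for x in va))
            * math.sqrt(sum(y * y for y in vb)))
    return dot / norm if norm else 0.0

pattern_1 = Counter({'class_17': 4, 'class_03': 1})
pattern_2 = Counter({'class_17': 3, 'class_42': 2})
print(similarity(pattern_1, pattern_2))
```

Restricting the comparison to classes observed in at least one of the two patterns keeps it tractable even though the full space of edge-ordered multi-digraphlet classes is combinatorially large.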

In future work we plan to explore newer and more efficient algorithms for computing these graphlet-based analyses. We are also examining how edge-graphlets perform in contrast to vertex-graphlets, and we are extending graphlets into a persistent framework, in an attempt to extend the work that has been done with persistent homology and topological data analysis (TDA).

--

Gabriel A. Silva

Professor of Bioengineering and Neurosciences, University of California San Diego