Leveraging Quantum Superposition to Infer the Dynamic Behavior of a Neural Network Model

Gabriel A. Silva
Dec 16, 2024

UPDATE: The third version of this paper includes quantum computing simulations and results showing the implementation of the theory and solutions to test problems. You will also be able to find all the code used. You can download the full paper on arXiv.

Can quantum computing and quantum algorithms solve questions and problems about the dynamic behavior of neural networks in a mathematically formal and rigorous way? The answer is yes, but very carefully.

A few months ago, I wrote the first draft of a paper exploring a theoretical proof-of-concept problem applying two specific foundational quantum algorithms to a question about the dynamic behavior of a particular class of neural network model. I also wrote a Medium piece on it.

A second (rewritten) version of that paper now considerably expands on the description of the problem, the choice and details of the neural signaling model we used, and the mathematical setup and details of the quantum algorithmic solution to the problem.

Research into quantum computing applications has historically focused on a few well-established topics: cryptography, data security, and the simulation of complex physical and chemical systems, including quantum mechanics itself. In fact, studying quantum mechanics was one of the original motivations for exploring the potential of quantum computing in the first place.

But there is a big push to explore and discover new problem classes particularly suitable for quantum computation. This is partly because quantum computers are not a replacement for existing classical computers. Realistically, they will be very good at solving certain types of problems that classical computers cannot, but they are overkill for, or simply inefficient at, most other problems. The problems they will be able to solve, however, may be huge, and inaccessible even to the most powerful classical supercomputers that exist or could ever be built.

For example, Google Quantum AI and XPRIZE recently announced a competition to promote the development of practical, real-world quantum computing algorithms and applications.

One of the most significant and unique challenges in exploring the applicability of quantum computing is that there is no known or obvious process for systematically structuring problems or questions to conform to the mathematical and algorithmic requirements of quantum computation. The structure and solution of each problem are essentially a bespoke pursuit. A well-posed problem needs to leverage what is unique about quantum mechanics and quantum computing, properties such as superposition, entanglement, and interference, so that the measurement outcomes, i.e., the outputs of the computations, make sense and are interpretable in the context of the question being asked. Achieving this requires designing problems that demonstrate the computational advantages of quantum algorithms while also showing their relevance to practical or theoretical topics and problems of interest.

The Problem We Addressed

In the paper we wrote, we introduced and solved a novel problem class related to dynamics on large-scale networks. In other words, a question about how the activity of the network behaves under different conditions. We chose this question because it is relevant to neurobiology, machine learning, and artificial intelligence.

Specifically, we asked: given a particular neural network model and some period of activity in the network, can the network inherently sustain dynamic activity beyond some arbitrary observation time, or does the activity cease through quiescence or through saturation in an ‘epileptic’-like state?

In a neuroscientific context, this question informs how and when brain networks can encode and represent information given their physical structure and temporal properties. In machine learning, this work relates to understanding activation patterns in artificial neural networks, which have implications for stability, efficiency, and memory.

From a computational perspective, solving this problem classically would require a brute-force evaluation of the network’s state, assessing the internal state of every node in the network individually, an approach that becomes increasingly expensive and eventually prohibitive for very large networks.
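To make that scaling concrete, here is a minimal sketch of the classical brute-force check. It uses a generic random threshold network as a stand-in, not the neural signaling model from the paper, and the parameters are purely illustrative: the network is stepped forward to the observation time and every node’s state has to be inspected individually.

```python
import numpy as np

# Toy stand-in for the classical brute-force check (NOT the signaling model
# used in the paper): step a simple threshold network forward in time, then
# inspect every node's state at the observation time.
rng = np.random.default_rng(0)
num_nodes = 1000
weights = rng.normal(scale=0.1, size=(num_nodes, num_nodes))  # illustrative connection weights
state = (rng.random(num_nodes) < 0.2).astype(float)           # initial firing pattern

observation_time = 50
for t in range(observation_time):
    drive = weights @ state               # summed input arriving at each node
    state = (drive > 0.5).astype(float)   # simple threshold firing rule

# The classical answer requires touching every node individually, and the cost
# grows with both the number of nodes and the number of time steps.
active = int(state.sum())
print(f"{active} of {num_nodes} nodes firing at the observation time")
```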

We showed that this problem can be formulated and structured to take advantage of quantum superposition, and solved efficiently using two specific and well-studied quantum algorithms, the Deutsch-Jozsa and Grover algorithms. We achieved this by carefully constructing inputs that are computable given how the algorithms work, while at the same time ensuring that the measurement outputs can be interpreted as meaningful properties of the network dynamics. This, in turn, allows us to answer the question we pose.

How We Set Up the Problem

Being more precise about the problem we are asking the algorithms to solve, the specific question is: After some period of dynamic evolution, can the dynamics (i.e., activity) of a neural network sustain inherent activity beyond some arbitrary observation time, or are the dynamics guaranteed to stop, either due to ‘epileptic’-like saturation or quiescence?

By inherent dynamic activity, we mean the ability of the neurons (nodes) within the network to continue firing and maintaining the network’s internal dynamics autonomously without requiring an external driving input. In this scenario, the network functions as a self-sustaining system where activity persists due to its internal structure and the interactions between connected nodes.

By epileptic, we mean a state where all nodes in the network fire simultaneously at the observation time. If this occurs, the dynamics will likely perpetuate indefinitely in a state of uniform, saturated firing. In this case, every node is active all of the time, and the network cannot process any further information.

In contrast, quiescence refers to the complete cessation of neuronal firing across the network, resulting in no continued activity without external input or stimulation.
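Putting these three definitions side by side, a firing pattern observed at the observation time is just a vector of 0’s and 1’s, one entry per node, and the classification is simple. The sketch below follows the definitions above, not the paper’s formal construction.

```python
def classify_activity(firing_pattern):
    """Classify a 0/1 firing pattern recorded at the observation time."""
    values = set(firing_pattern)
    if values == {0}:
        return "quiescent"        # no node fires; activity has ceased
    if values == {1}:
        return "epileptic-like"   # every node fires at once; saturated
    return "mixed"                # some nodes fire, some are silent: activity may be sustained

print(classify_activity([0, 0, 0, 0]))  # -> quiescent
print(classify_activity([1, 1, 1, 1]))  # -> epileptic-like
print(classify_activity([0, 1, 1, 0]))  # -> mixed
```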

We spent much of the technical setup in the paper defining and describing the neural dynamic model we used. While it is not the only model we could have explored, we chose it because:

1. It is derived from foundational physical and neurophysiological principles (spatial and temporal summation) relevant to neurobiological neurons.

2. It is computationally tractable and builds directly on prior peer-reviewed published work.

3. It is of relevance to real neurobiological systems. We have previously shown how individual neurons and networks of neurons (connectomes) use the model’s specific theoretical and computational properties as a functional optimization principle.

4. Its computational outputs exhibit sufficient complexity to make the application of quantum superposition both challenging and insightful.

How We Solved The Problem

To solve it, we had to map the network dynamics question, as outlined above, into a framework suitable for quantum computation. This involved formulating the problem to exploit quantum superposition using the Deutsch-Jozsa and Grover algorithms.

We were able to show that the quantum computations produce interpretable output measurements that let us infer whether the network dynamics can sustain inherent activity or will cease through epileptic-like saturation or quiescence. The key was to structure the problem carefully, so that the quantum algorithmic solution is computationally more efficient than purely classical methods while remaining interpretable in terms of how the network behaves.

The Deutsch-Jozsa algorithm is a quantum algorithm designed to efficiently determine whether a function behaves the same way for all of its inputs, outputting either all 0’s or all 1’s (constant), or whether its output is an equal mix of 0’s and 1’s (balanced). In our case, we provided the algorithm with a list of 0’s and 1’s that encoded the activation (firing) patterns representing the network’s dynamics. The algorithm processes this information all at once, leveraging the quantum property of superposition. If the outputs are all 0’s or all 1’s (constant), we can interpret the network’s activity pattern as being either in a quiescent state (if all 0’s), where no activity happens, or in an epileptic-like state (if all 1’s), where everything fires at once.
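To make this concrete, below is a small classical statevector simulation of the Deutsch-Jozsa decision, written from the standard textbook form of the algorithm rather than taken from the paper’s code. With the ancilla qubit prepared appropriately, the oracle contributes a phase of (-1)^f(x) to each input x, and the final Hadamards concentrate the constant-versus-balanced answer entirely in the probability of measuring all zeros.

```python
import numpy as np

def deutsch_jozsa(f_values):
    """Classically simulate the Deutsch-Jozsa decision for a function f
    given as a list of 2**n output bits (one per n-bit input).
    Returns 'constant' or 'balanced' (assuming f satisfies that promise)."""
    f_values = np.asarray(f_values)
    N = len(f_values)

    # Hadamards on the input register: an equal superposition over all inputs.
    state = np.full(N, 1.0 / np.sqrt(N))

    # Oracle with the ancilla in |->: phase kickback of (-1)^f(x) on each |x>.
    state = state * (-1.0) ** f_values

    # Final Hadamards: the amplitude of |00...0> becomes (1/N) * sum_x (-1)^f(x),
    # which is +/-1 if f is constant and 0 if f is balanced.
    amp_all_zeros = state.sum() / np.sqrt(N)
    return "constant" if np.isclose(abs(amp_all_zeros), 1.0) else "balanced"

print(deutsch_jozsa([0] * 8))     # constant function on 3 bits -> 'constant'
print(deutsch_jozsa([0, 1] * 4))  # balanced function on 3 bits -> 'balanced'
```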

If the outputs are not all the same, it implies the network’s dynamics are more complex, with activity in the network having the potential to continue processing information.

However, there’s a challenge: the Deutsch-Jozsa algorithm requires evaluating all possible patterns of activity in the network at once, including patterns that never actually occurred. That would make the algorithm’s output impossible to interpret in terms of the network’s real dynamics. To work around this, we split the input into two parts: the patterns of activity that actually occurred in the network keep their observed values, while the patterns that did not occur are assigned fixed, known values (either 0 or 1), depending on how we run the algorithm.

This approach allows us to run the Deutsch-Jozsa algorithm twice in the following way. In the first run, we assign all non-occurring patterns a value of 0. In the second run, we assign them a value of 1. The patterns of activity that actually occurred are already encoded in the collection of observed or measured values of 0’s and 1’s. If the output of the algorithm in the first run (where non-occurring patterns were assigned 0’s) is all 0’s, this implies that the activity patterns in the network were also all 0’s, meaning the network was in a quiescent state. Similarly, if the output of the second run (where non-occurring patterns were assigned 1’s) is all 1’s, it implies that the network was in an epileptic-like state. If the outputs are not constant (i.e., not all 0’s or not all 1’s) in either case, it implies that the activity patterns in the network were a mix of 0’s and 1’s, and we can interpret this as the network having the potential to sustain dynamic activity.
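A minimal sketch of that two-run bookkeeping, reusing the deutsch_jozsa simulator above, might look like the following. The way activity patterns are indexed and encoded here is a simplification for illustration, not the paper’s exact construction.

```python
def infer_network_state(observed, num_bits):
    """Classify the network's dynamics from two Deutsch-Jozsa runs.

    observed: dict mapping the index of an activity pattern that actually
    occurred (0 .. 2**num_bits - 1) to the 0/1 value recorded for it.
    """
    N = 2 ** num_bits

    # Run 1: every non-occurring pattern is padded with 0.
    f_run1 = [observed.get(x, 0) for x in range(N)]
    # Run 2: every non-occurring pattern is padded with 1.
    f_run2 = [observed.get(x, 1) for x in range(N)]

    # The classical simulator just checks whether the all-zeros outcome has
    # probability 1; on real hardware the constant-or-balanced promise matters.
    if deutsch_jozsa(f_run1) == "constant":
        return "quiescent"            # observed values were all 0
    if deutsch_jozsa(f_run2) == "constant":
        return "epileptic-like"       # observed values were all 1
    return "sustained activity"       # observed values were a mix of 0s and 1s

# Toy usage: three observed patterns with mixed firing values.
print(infer_network_state({0: 0, 3: 1, 5: 1}, num_bits=3))  # -> sustained activity
```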

By interpreting the outputs of these two runs, we were able to derive the state of the activity pattern of the network and determine if the activity can continue or if it stops. The quantum computational advantage lies in the fact that we do not need to evaluate each pattern individually, one at a time. Instead, the Deutsch-Jozsa algorithm evaluates the entire set of patterns simultaneously in superposition, saving an enormous amount of computational effort. The larger the network being interrogated, the greater the computational speedup and resource savings achieved through this approach.
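As a rough sense of the scale involved (counting oracle evaluations only, and assuming the standard constant-versus-balanced promise): a classical algorithm may need to check just over half of all 2^n patterns in the worst case before it can be certain, whereas Deutsch-Jozsa needs a single oracle query.

```python
# Worst-case oracle evaluations needed to decide constant vs. balanced
# over n-bit inputs: classical deterministic check vs. Deutsch-Jozsa.
for n in (10, 20, 30):
    classical_worst = 2 ** (n - 1) + 1   # must see a majority of inputs to be certain
    quantum_queries = 1                  # a single oracle call in Deutsch-Jozsa
    print(f"n={n}: {classical_worst:,} classical queries vs. {quantum_queries} quantum query")
```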

Where Do We Go From Here (aka Open Future Questions and Challenges)

As with any research, especially proof-of-concept work that bridges different technical topics, we raise more questions than we answer. A few of the key questions and challenges are the following:

First and foremost, while we showed a novel application of quantum computing to neural network dynamics and demonstrated a significant theoretical computational speedup, implementing the approach we developed on current quantum hardware, i.e., real quantum computers, presents many practical challenges. Noise, decoherence, and the limited number of logical qubits in today’s quantum processors remain prohibitive limitations to modeling or computing properties of networks of any meaningful size. Future work could focus on testing our approach on new hardware, and even on adopting it as a benchmark problem.

We intentionally used the Deutsch-Jozsa and Grover algorithms in this work in part because we wanted to see if we could structure and solve a meaningful neural network problem by leveraging two foundational and extensively studied quantum algorithms. However, other quantum algorithms may offer complementary or even enhanced capabilities for similar types of problems. For example, algorithms designed for graph analysis, or hybrid quantum-classical approaches, could further expand the scope of network dynamics problems solvable by quantum computing.

Lastly, the methods developed here could inspire the design and engineering of quantum neural networks relevant to quantum machine learning that incorporate dynamic feedback mechanisms similar to biological neurons. Such networks could have applications in learning systems where temporal patterns and adaptive dynamics are critical.


Written by Gabriel A. Silva

Professor of Bioengineering and Neurosciences, University of California San Diego
