Quantum Techniques for Reaction Networks

Reaction networks are a general formalism for describing collections of classical entities interacting in a random way. While reaction networks are mainly studied by chemists, they are equivalent to Petri nets, which are used for similar purposes in computer science and biology. As noted by Doi and others, techniques from quantum field theory can be adapted to apply to such systems. Here we use these techniques to study how the "master equation" describing stochastic time evolution for a reaction network is related to the "rate equation" describing the deterministic evolution of the expected number of particles of each species in the large-number limit. We show that the relation is especially strong when a solution of the master equation is a "coherent state", meaning that the numbers of entities of each kind are described by independent Poisson distributions.


Introduction
A 'reaction network' describes how various kinds of classical particles can interact and turn into other kinds. Reaction networks are commonly used in chemistry. For example, this reaction network gives a very simplified picture of what is happening in a glass of water:
$$\mathrm{H_2O} \;\leftrightharpoons\; \mathrm{H^+} + \mathrm{OH^-}, \qquad \mathrm{H_2O} + \mathrm{H^+} \;\leftrightharpoons\; \mathrm{H_3O^+}$$
The reactions here are all reversible, but this is not required by the reaction network formalism. The relevant particles here are not atoms, but rather 'species' including the water molecule $\mathrm{H_2O}$, the proton $\mathrm{H^+}$, the hydroxyl ion $\mathrm{OH^-}$ and the hydronium ion $\mathrm{H_3O^+}$.
While the dynamics of the reactions is fundamentally quantum-mechanical, reaction networks give a simplified description of what is going on. First, given a 'rate constant' for each reaction, one can write down a differential equation called the rate equation, which describes the time evolution of the concentration of each species in a deterministic way. This is especially useful in the limit where there are many particles of each species. At a more fine-grained level, one can describe reactions stochastically, using the master equation. In the rate equation the concentration of each species is treated as a continuous variable, but in the master equation, the number of particles of each species is an integer.
In this paper we explain how techniques from quantum field theory can be used to study reaction networks. This work is part of a broader program emphasizing the links between quantum mechanics and a subject we call 'stochastic mechanics', where probabilities replace amplitudes [4,5,6].
Using ideas from quantum field theory to model classical stochastic systems goes back at least to 1976, with the work of Doi [10]. It has been further developed by Grassberger and Scheunert [12] and many others. The big surprise one meets at the very beginning of this subject is that the canonical commutation relations between creation and annihilation operators, long viewed as a hallmark of quantum theory, are also perfectly suited to systems of identical classical particles interacting in a probabilistic way.
In quantum theory, the simplest example of a Fock space is used to describe the states of a quantum harmonic oscillator. When such an oscillator is in its $\ell$th energy level, its state is a vector that we can formally write as $z^\ell$ for some formal variable $z$, or better as $z^\ell/\sqrt{\ell!}$, where the normalization factor comes in handy later. A general quantum state is a linear combination of such vectors, so we can write it as a power series
$$\Psi = \sum_{\ell = 0}^\infty \psi_\ell \, \frac{z^\ell}{\sqrt{\ell!}}$$
where the coefficient $\psi_\ell \in \mathbb{C}$ is the amplitude for the oscillator to be in its $\ell$th energy level. We can increase the energy level using the 'raising operator' $a^\dagger$, which multiplies any power series by $z$:
$$a^\dagger \Psi = z \Psi.$$
We can decrease the energy level using the 'lowering operator' $a$, which differentiates any power series:
$$a \Psi = \frac{d}{dz} \Psi.$$
If we choose an inner product for which the vectors $z^\ell/\sqrt{\ell!}$ are orthonormal, these operators are indeed adjoint to each other:
$$\langle a^\dagger \Phi, \Psi \rangle = \langle \Phi, a \Psi \rangle.$$
In checking this one sees why the normalization factor is required.

We can think of $z^\ell$ as a state containing $\ell$ quanta of energy. In applications to quantum field theory, we think of these quanta as actual particles, for example photons. If we have $k$ different kinds of particles, or a single kind of particle with $k$ different states, we describe the quantum state of a collection of particles using a power series in $k$ variables. For example, if $k = 2$ there is a Fock space consisting of power series in two variables $z_1$ and $z_2$:
$$\Psi = \sum_{\ell_1, \ell_2 = 0}^\infty \psi_{\ell_1 \ell_2} \, \frac{z_1^{\ell_1} z_2^{\ell_2}}{\sqrt{\ell_1! \, \ell_2!}}$$
where $\psi_{\ell_1 \ell_2} \in \mathbb{C}$ is the amplitude for having $\ell_1$ particles of the first kind and $\ell_2$ particles of the second kind. We have operators that create and annihilate particles of each kind:
$$a_i^\dagger \Psi = z_i \Psi, \qquad a_i \Psi = \frac{\partial}{\partial z_i} \Psi.$$
Each creation operator $a_i^\dagger$ is adjoint to the corresponding annihilation operator $a_i$, and these operators obey the 'canonical commutation relations':
$$[a_i, a_j^\dagger] = \delta_{ij}, \qquad [a_i, a_j] = [a_i^\dagger, a_j^\dagger] = 0.$$
These rules are the basis of many practical calculations in particle physics, where products of creation and annihilation operators are used to model processes in which particles interact and turn into other particles.
Doi noticed that all this has a stochastic analogue where we work with probabilities rather than amplitudes. For example, there is a stochastic Fock space whose elements are power series
$$\Psi = \sum_{\ell_1, \ell_2 = 0}^\infty \psi_{\ell_1 \ell_2} \, z_1^{\ell_1} z_2^{\ell_2}.$$
Here $\psi_{\ell_1 \ell_2} \in \mathbb{R}$ is the probability of having $\ell_1$ particles of the first kind and $\ell_2$ particles of the second kind. The formalism is somewhat simpler than in the quantum case: for example, the normalization factors are no longer required. For $\Psi$ to describe a probability distribution, we require that all the coefficients be nonnegative and sum to 1. We call $\Psi$ with these properties a 'mixed state'. A special case is a monomial $z_1^{\ell_1} z_2^{\ell_2}$. This is called a 'pure state': it describes a completely definite situation where there are $\ell_1$ particles of the first kind and $\ell_2$ of the second kind.
This method of describing a probability distribution using a power series is far from new. It was introduced by de Moivre [9], and perfected by Laplace [19] in a paper published in 1782. He dubbed these power series 'generating functions', and they have been used in probability theory ever since. Doi's new realization was that spaces of generating functions are analogous to the Fock spaces in quantum theory. In particular, products of creation and annihilation operators can be used to describe stochastic processes in which objects interact and turn into other objects.
A natural example is chemistry, where reactions are often described stochastically using the master equation or deterministically using the rate equation. There is a long line of work on both these equations, initiated by Horn and Jackson [13] along with Feinberg [11] in the late 1970s, and continuing to this day. This line of work does not use quantum techniques, but it might profit from them, and also contribute to their development. To illustrate how, here we use these techniques to study how the master equation reduces to the rate equation in the large-number limit. This is formally similar to the 'classical limit' in which a quantum field theory reduces to a classical field theory.
In Theorem 8 we start with the master equation and derive an equation for the rate of change of the expected number of particles of each species. This equation is similar to the rate equation, and reduces to it in a certain approximation. However, in Theorem 9 we show that this equation gives the rate equation exactly when the numbers of particles of each species are described by independent Poisson distributions. This relies on a nice fact: the generating function of a product of independent Poisson distributions is an eigenvector of all the annihilation operators.
In quantum mechanics, eigenvectors of the annihilation operators are called 'coherent states'. Many facts about classical mechanics have simple generalizations to quantum mechanics if we restrict attention to coherent states [15]. Here we are seeing a similar phenomenon in stochastic mechanics: the deterministic dynamics given by the rate equation matches the stochastic dynamics given by the master equation in the special case of coherent states. We explore this further in a companion paper [6].
The application of quantum techniques to reaction networks is just one of many interesting connections between quantum physics and network theory. For example, the evolution of networks displays phase transitions analogous to Bose-Einstein condensation [8,14]. For more, see the review by Biamonte, Faccin and De Domenico [7].

Reaction networks
A reaction network, first defined by Aris [3], consists of a set of different kinds of particles-each kind being called a 'species'-together with a set of processes, called 'reactions', that turn collections of particles of various species into collections of particles of other species. As the name suggests, it is convenient to draw a reaction network as a graph.
While reaction networks first arose in chemistry, their uses are not limited to this context. Here is an example that arose in work on HIV, the human immunodeficiency virus [18]:
$$0 \xrightarrow{\;\alpha\;} H \qquad H \xrightarrow{\;\beta\;} 0 \qquad H + V \xrightarrow{\;\gamma\;} I \qquad I \xrightarrow{\;\delta\;} I + V \qquad I \xrightarrow{\;\epsilon\;} 0 \qquad V \xrightarrow{\;\zeta\;} 0 \qquad (1)$$
Here we have three species:
• H: healthy white blood cells,
• I: infected white blood cells,
• V: virions (that is, individual virus particles).
We also have six reactions:
• α: the birth of one healthy cell, which has no input and one H as output.
• β: the death of a healthy cell, which has one H as input and no output.
• γ: the infection of a healthy cell, which has one H and one V as input, and one I as output.
• δ: 'production', or the reproduction of the virus in an infected cell, which has one I as input and one I and one V as output.
• ǫ: the death of an infected cell, which has one I as input and no output.
• ζ: the death of a virion, which has one V as input and no output.
So, we have a finite set of species S = {H, I, V } but reactions go between 'complexes', which are finite linear combinations of species with natural number coefficients, such as H + V or 2H + 3V or 0. (In the literature on reaction networks the complex 0, corresponding to nothing at all, is often denoted ∅.) We can think of complexes as elements of N S , the set of functions from S to the natural numbers. The reaction network is a graph with certain complexes as vertices and reactions as edges.
We can formalize this as follows. There are various kinds of graphs; the kind we want are sometimes called 'directed multigraphs' or 'quivers', but to keep our terminology simple, we make the following definition:

Definition 1. A graph $s, t \colon T \to K$ consists of a set $T$ of edges, a set $K$ of vertices, and maps $s, t \colon T \to K$ assigning to each edge its source and target.

Then we may say:

Definition 2. A reaction network $(S, K, T, s, t)$ consists of:
• a finite set $S$ of species,
• a finite set of complexes $K \subseteq \mathbb{N}^S$,
• a graph $s, t \colon T \to K$ with complexes as vertices and some finite set of reactions $T$ as edges.
In the chemistry literature this sort of thing is called a 'chemical reaction network', but we want to emphasize that they are useful more generally, so we drop the adjective. Petri nets are an alternative formalism that is entirely equivalent to reaction networks, often used in subjects other than chemistry [16,17,20]. In the Petri net literature the species are called 'places' and the reactions are called 'transitions'. For a more complete translation guide between reaction networks and Petri nets, see [6].
For convenience we shall often write $k = |S|$ for the number of species present in a reaction network, and identify the set $S$ with the set $\{1, \dots, k\}$. This lets us write any complex as a $k$-tuple of natural numbers. In particular, we write the source and target of any reaction $\tau$ as
$$s(\tau) = (s_1(\tau), \dots, s_k(\tau)), \qquad t(\tau) = (t_1(\tau), \dots, t_k(\tau)) \in \mathbb{N}^k.$$
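As a concrete illustration, the definitions above can be encoded directly in code. The following sketch (the names `Reaction` and `hiv_network`, and the representation, are ours, not from the paper) represents the HIV network of Equation 1 with species ordered (H, I, V), writing each complex as a 3-tuple of natural numbers:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Reaction:
    """An edge of a reaction network: a source and a target complex in N^k."""
    name: str
    source: Tuple[int, ...]  # s(tau)
    target: Tuple[int, ...]  # t(tau)

# The HIV network, species ordered (H, I, V):
hiv_network = [
    Reaction("alpha",   (0, 0, 0), (1, 0, 0)),  # birth of a healthy cell
    Reaction("beta",    (1, 0, 0), (0, 0, 0)),  # death of a healthy cell
    Reaction("gamma",   (1, 0, 1), (0, 1, 0)),  # infection: H + V -> I
    Reaction("delta",   (0, 1, 0), (0, 1, 1)),  # production: I -> I + V
    Reaction("epsilon", (0, 1, 0), (0, 0, 0)),  # death of an infected cell
    Reaction("zeta",    (0, 0, 1), (0, 0, 0)),  # death of a virion
]

def net_change(r: Reaction) -> Tuple[int, ...]:
    """The vector t(tau) - s(tau): how each species count changes."""
    return tuple(t - s for s, t in zip(r.source, r.target))
```

For instance, `net_change` applied to the infection reaction γ returns the vector saying that one H and one V are consumed and one I is produced.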

The rate equation
The amount of each species is represented by a 'state' of the reaction network. There are various options here. A pure state is a vector $x \in \mathbb{N}^k$ with $i$th entry $x_i$ specifying the number of instances of the $i$th species. This generalizes in two ways. First, we define a classical state to be any vector $x \in [0, \infty)^k$ of nonnegative real numbers, and think of such a state as specifying the expected number or concentration of each species present. Second, we define a mixed state to be a probability distribution over the pure states. This assigns a probability $\psi_\ell$ to each $\ell \in \mathbb{N}^k$. The rate equation describes the time evolution of a classical state, while the master equation describes the time evolution of a mixed state.
To write down either of these equations, we must first specify the system's dynamics. In this paper we consider systems that evolve by the law of mass action [3]. Very roughly, this law states that the rate at which a reaction occurs is proportional to the product of the concentrations of all its input species. We call the constant of proportionality for each reaction its rate constant:

Definition 3. A stochastic reaction network $(S, K, T, s, t, r)$ consists of a reaction network together with a map $r \colon T \to (0, \infty)$ assigning a rate constant to each reaction.
This concept is equivalent to another concept in the literature, namely that of a stochastic Petri net [6].
Given a stochastic reaction network, the rate equation says that for each reaction $\tau$ the time derivative of a classical state $x$ picks up a term which is the product of:
• the vector $t(\tau) - s(\tau)$ whose $i$th component is the change in the number of the $i$th species due to the reaction $\tau$;
• the concentration of each input species $i$ of $\tau$ raised to the power given by the number of times it appears as an input, namely $s_i(\tau)$;
• the rate constant $r(\tau)$ of $\tau$.
In short, the rate equation is
$$\frac{d}{dt} x(t) = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, x(t)^{s(\tau)} \qquad (2)$$
where $x \colon \mathbb{R} \to [0, \infty)^k$ and we have used multi-index notation to define
$$x^{s(\tau)} = x_1^{s_1(\tau)} \cdots x_k^{s_k(\tau)}.$$
For the HIV reaction network in Equation 1, if we also use the Greek letter names of the reactions as names for their rate constants, we get this rate equation:
$$\begin{aligned} \dot H &= \alpha - \beta H - \gamma H V \\ \dot I &= \gamma H V - \epsilon I \\ \dot V &= \delta I - \gamma H V - \zeta V \end{aligned}$$
This example involves only complexes where the coefficient of each species is at most 1. It is worthwhile comparing an example with larger coefficients, for example the dissociation of a diatomic gas into atoms and the recombination of these atoms into a molecule:
$$\mathrm{Cl}_2 \xrightarrow{\;\alpha\;} \mathrm{Cl} + \mathrm{Cl}, \qquad \mathrm{Cl} + \mathrm{Cl} \xrightarrow{\;\beta\;} \mathrm{Cl}_2.$$
This reaction network gives the following rate equation:
$$\begin{aligned} \frac{d}{dt}[\mathrm{Cl}_2] &= -\alpha \, [\mathrm{Cl}_2] + \beta \, [\mathrm{Cl}]^2 \\ \frac{d}{dt}[\mathrm{Cl}] &= 2\alpha \, [\mathrm{Cl}_2] - 2\beta \, [\mathrm{Cl}]^2 \end{aligned}$$
The recombination of two atoms of chlorine into a diatomic molecule occurs at a rate proportional to the square of the concentration of atomic chlorine, so both terms arising from reaction $\beta$ are proportional to $[\mathrm{Cl}]^2$. The reaction $\alpha$ produces two atoms of Cl while the reaction $\beta$ uses up two atoms of Cl, so both terms in the second equation have a coefficient of 2. All this follows from Equation 2.
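The rate equation lends itself to direct numerical solution. Here is a minimal sketch for the chlorine example, with made-up rate constants α = 2 and β = 1, integrated by the forward Euler method (the representation and all names are ours). Each reaction contributes $r(\tau)\,(t(\tau)-s(\tau))\,x^{s(\tau)}$ to the derivative, and the total amount of chlorine, $2 x_{\mathrm{Cl}_2} + x_{\mathrm{Cl}}$, is conserved:

```python
alpha, beta = 2.0, 1.0
reactions = [
    # (rate constant, source complex, target complex), complexes as (Cl2, Cl)
    (alpha, (1, 0), (0, 2)),  # dissociation: Cl2 -> Cl + Cl
    (beta,  (0, 2), (1, 0)),  # recombination: Cl + Cl -> Cl2
]

def rate_eq(x):
    """Right-hand side of the rate equation (Equation 2) for this network."""
    dx = [0.0, 0.0]
    for r, s, t in reactions:
        flux = r
        for xi, si in zip(x, s):
            flux *= xi ** si          # mass action: x^{s(tau)}
        for i in range(2):
            dx[i] += flux * (t[i] - s[i])
    return dx

# Forward-Euler integration from a state of pure Cl2:
x = [1.0, 0.0]
dt = 1e-4
for _ in range(100000):
    d = rate_eq(x)
    x = [xi + dt * di for xi, di in zip(x, d)]
```

With these constants the state relaxes toward the equilibrium determined by $\alpha [\mathrm{Cl}_2] = \beta [\mathrm{Cl}]^2$ together with the conservation law.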

The master equation
The master equation describes the time evolution of mixed states. For the rest of this paper we fix a stochastic reaction network $(S, K, T, s, t, r)$. Recall that for each $\ell \in \mathbb{N}^k$, a 'mixed state' gives the probability $\psi_\ell$ that for each $i$, exactly $\ell_i \in \mathbb{N}$ instances of the $i$th species are present. The master equation says that
$$\frac{d}{dt} \psi_{\ell'}(t) = \sum_{\ell \in \mathbb{N}^k} H_{\ell' \ell} \, \psi_\ell(t)$$
for some matrix of numbers $H$ determined by the stochastic reaction network. The matrix element $H_{\ell' \ell}$ is the probability per time for the pure state $\ell$ to evolve to the pure state $\ell'$. This probability is a sum over reactions:
$$H_{\ell' \ell} = \sum_{\tau \in T} H(\tau)_{\ell' \ell}. \qquad (3)$$
To describe the matrix $H(\tau)$, we need 'falling powers'. For any natural numbers $n$ and $p$, we define the $p$th falling power of $n$ by
$$n^{\underline{p}} = n (n - 1) \cdots (n - p + 1).$$
This is the number of ways of choosing an ordered $p$-tuple of distinct elements from a set with $n$ elements. Note that $n^{\underline{p}} = 0$ if $p > n$. More generally, for any multi-indices $\ell$ and $m$, we define
$$\ell^{\underline{m}} = \ell_1^{\underline{m_1}} \cdots \ell_k^{\underline{m_k}}.$$
This is the number of ways to choose, for each species $i = 1, \dots, k$, an ordered list of $m_i$ distinct things from a collection of $\ell_i$ things of that species. Thus, we expect a factor of this sort to appear in the master equation.
Using this notation, we define the matrix $H(\tau)$ as follows:
$$H(\tau)_{\ell' \ell} = r(\tau) \left( \delta_{\ell', \, \ell + t(\tau) - s(\tau)} - \delta_{\ell' \ell} \right) \ell^{\underline{s(\tau)}}. \qquad (4)$$
Here $\delta$ is the Kronecker delta, which equals 1 if its subscripts are equal and 0 otherwise. The first term describes the rate for the complex $\ell$ to become the complex $\ell'$ via the reaction $\tau$: it equals the rate constant for this reaction times the number of ways this reaction can occur. The second term describes the rate at which $\ell$ goes away due to this reaction. Thus, the size of the second term is equal to that of the first, but its sign is opposite. Putting together Equations 3 and 4, we obtain this formula for the Hamiltonian:
$$H_{\ell' \ell} = \sum_{\tau \in T} r(\tau) \left( \delta_{\ell', \, \ell + t(\tau) - s(\tau)} - \delta_{\ell' \ell} \right) \ell^{\underline{s(\tau)}}. \qquad (5)$$
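To make Equation 5 concrete, one can build the matrix $H_{\ell'\ell}$ explicitly for a small example. The sketch below (the rate constants and truncation are illustrative choices of ours) does this for a one-species birth-death network, $0 \to X$ at rate κ and $X \to 0$ at rate μ, truncating the particle number at `nmax`. Away from the truncation boundary, each column of H sums to zero, reflecting the fact that the first and second terms of Equation 4 have equal size and opposite sign, so total probability is conserved:

```python
kappa, mu, nmax = 1.0, 0.5, 50

def falling(n, p):
    """The falling power n(n-1)...(n-p+1)."""
    out = 1
    for j in range(p):
        out *= n - j
    return out

# Reactions as (rate, source, target); complexes are just naturals since k = 1.
reactions = [(kappa, 0, 1), (mu, 1, 0)]

# H[lp][l] = sum over reactions of
#   r(tau) * (delta_{lp, l + t - s} - delta_{lp, l}) * falling(l, s)
H = [[0.0] * (nmax + 1) for _ in range(nmax + 1)]
for r, s, t in reactions:
    for l in range(nmax + 1):
        amt = r * falling(l, s)      # rate constant times number of ways
        lp = l + t - s
        if 0 <= lp <= nmax:
            H[lp][l] += amt          # inflow to the new complex
        H[l][l] -= amt               # outflow from the old complex
```

For instance, the entry taking the empty state to the one-particle state is just κ, while the diagonal entry at one particle is −(κ + μ).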

The stochastic Fock space
We can understand the Hamiltonian for the master equation at a deeper level using techniques borrowed from quantum field theory. The first step is to introduce a stochastic version of Fock space. To do this, we write any mixed state as a formal power series in some variables $z_1, \dots, z_k$, with the coefficient of $z^\ell = z_1^{\ell_1} \cdots z_k^{\ell_k}$ being the probability $\psi_\ell$. That is, we write any mixed state as
$$\Psi = \sum_{\ell \in \mathbb{N}^k} \psi_\ell \, z^\ell.$$
Because a mixed state is a probability distribution, the coefficients $\psi_\ell$ must be nonnegative and sum to 1. Indeed, in this formalism mixed states are precisely the formal power series $\Psi$ with
$$\psi_\ell \ge 0, \qquad \sum_{\ell \in \mathbb{N}^k} \psi_\ell = 1.$$
The simplest mixed states are the monomials $z^\ell$; these are the pure states, where there is a definite number of things of each species. In quantum mechanics we use a similar sort of Fock space, but with power series having complex rather than real coefficients. To obtain a Hilbert space, we often restrict attention to power series obeying
$$\sum_{\ell \in \mathbb{N}^k} \ell! \, |\psi_\ell|^2 < \infty,$$
for which a certain norm is finite. However, power series not obeying this condition are also useful, for example to ensure that operators of interest are everywhere defined, instead of merely densely defined.
Thus, for the purposes of studying the master equation, we define the stochastic Fock space to be the vector space $\mathbb{R}[[z_1, \dots, z_k]]$ of all real formal power series in the variables $z_1, \dots, z_k$. We treat the Hamiltonian $H$ for the master equation as the operator on stochastic Fock space with
$$H z^\ell = \sum_{\ell' \in \mathbb{N}^k} H_{\ell' \ell} \, z^{\ell'},$$
where the matrix entries $H_{\ell' \ell}$ are given by Equation 5. This allows us to express the Hamiltonian in terms of certain special operators on the stochastic Fock space: the creation and annihilation operators. Our notation here follows that used in quantum field theory, where the creation and annihilation operators are adjoints of each other. Let $1 \le i \le k$. The creation operator $a_i^\dagger$ is given by
$$a_i^\dagger \Psi = z_i \Psi.$$
This takes any pure state to the pure state with one additional instance of the $i$th species:
$$a_i^\dagger z^n = z_1^{n_1} \cdots z_i^{n_i + 1} \cdots z_k^{n_k}.$$
The corresponding annihilation operator is given by formal differentiation:
$$a_i \Psi = \frac{\partial}{\partial z_i} \Psi.$$
This takes any pure state to the pure state with one fewer instance of the $i$th species, but multiplied by a coefficient $n_i$:
$$a_i z^n = n_i \, z_1^{n_1} \cdots z_i^{n_i - 1} \cdots z_k^{n_k}.$$
This represents the fact that there are $n_i$ ways to annihilate one of the instances of the $i$th species present.
In what follows we use multi-index notation to define, for any $n \in \mathbb{N}^k$,
$$a^n = a_1^{n_1} \cdots a_k^{n_k}, \qquad (a^\dagger)^n = (a_1^\dagger)^{n_1} \cdots (a_k^\dagger)^{n_k}.$$
These expressions are well-defined because all the annihilation operators commute with each other, and similarly for the creation operators.
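These operators are easy to realize on a computer by storing a formal power series as a dictionary from exponent multi-indices to coefficients. The sketch below (the representation and all names are our own choices) implements $a_i$ as formal differentiation and $a_i^\dagger$ as multiplication by $z_i$, and checks the canonical commutation relation $[a_i, a_i^\dagger] = 1$ on a sample state:

```python
def adagger(i, psi):
    """Creation operator: multiply the power series psi by z_i."""
    out = {}
    for l, c in psi.items():
        lp = list(l); lp[i] += 1
        out[tuple(lp)] = out.get(tuple(lp), 0) + c
    return out

def a(i, psi):
    """Annihilation operator: formal d/dz_i of the power series psi."""
    out = {}
    for l, c in psi.items():
        if l[i] > 0:
            lp = list(l); lp[i] -= 1
            out[tuple(lp)] = out.get(tuple(lp), 0) + c * l[i]
    return out

def sub(p, q):
    """Difference of two power series, dropping zero coefficients."""
    keys = set(p) | set(q)
    return {l: p.get(l, 0) - q.get(l, 0) for l in keys if p.get(l, 0) != q.get(l, 0)}

# Check [a_0, a_0^dagger] = 1 on an arbitrary state in two variables:
psi = {(2, 0): 0.25, (1, 1): 0.5, (0, 3): 0.25}
comm = sub(a(0, adagger(0, psi)), adagger(0, a(0, psi)))
```

Here `comm` reproduces `psi` coefficient by coefficient, which is what the commutation relation $a_i a_i^\dagger - a_i^\dagger a_i = 1$ asserts.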

Theorem 5. For any stochastic reaction network, the Hamiltonian is given by
$$H = \sum_{\tau \in T} r(\tau) \left( (a^\dagger)^{t(\tau)} - (a^\dagger)^{s(\tau)} \right) a^{s(\tau)}.$$
Proof. We start with a basic result on annihilation and creation operators:

Lemma 6. For any multi-indices $\ell$ and $m$ we have
$$(a^\dagger)^m z^\ell = z^{\ell + m}, \qquad a^m z^\ell = \ell^{\underline{m}} \, z^{\ell - m}.$$

Proof. The first equation follows inductively from the definition of the creation operators, which implies
$$a_i^\dagger z^\ell = z_1^{\ell_1} \cdots z_i^{\ell_i + 1} \cdots z_k^{\ell_k}.$$
The second follows inductively from the definition of the annihilation operators, which implies
$$a_i z^\ell = \ell_i \, z_1^{\ell_1} \cdots z_i^{\ell_i - 1} \cdots z_k^{\ell_k}.$$
Note that $\ell^{\underline{m}} = 0$ if $m_i > \ell_i$ for any $i$, and in this case we also have $a^m z^\ell = 0$, so the equation $a^m z^\ell = \ell^{\underline{m}} z^{\ell - m}$ holds in this case because both sides vanish.
We now prove the theorem. For any multi-index $\ell$, Equation 5 gives
$$H z^\ell = \sum_{\tau \in T} r(\tau) \left( z^{\ell + t(\tau) - s(\tau)} - z^\ell \right) \ell^{\underline{s(\tau)}}.$$
Using Lemma 6 we obtain
$$H z^\ell = \sum_{\tau \in T} r(\tau) \left( (a^\dagger)^{t(\tau)} - (a^\dagger)^{s(\tau)} \right) a^{s(\tau)} z^\ell,$$
and since this holds for every monomial, the theorem follows. The last step requires a little justification. Technically speaking, the monomials $z^\ell$ are not a basis of the stochastic Fock space $\mathbb{R}[[z_1, \dots, z_k]]$, because not every formal power series is a finite linear combination of monomials. However, these monomials form a 'topological basis' of $\mathbb{R}[[z_1, \dots, z_k]]$, in the sense that every element of this vector space can be expressed as a convergent infinite linear combination of these monomials, with respect to a certain topology. (In this topology, a sequence of formal power series converges if each coefficient converges.) The annihilation and creation operators, and indeed all the operators discussed in this paper, are continuous in this topology. So, to check equations between these operators, it suffices to check them on the monomials $z^\ell$.
Next we investigate what the master equation says about the time evolution of expected values. For this, we start by defining
$$\langle \Psi \rangle = \sum_{\ell \in \mathbb{N}^k} \psi_\ell$$
for any formal power series $\Psi = \sum_{\ell \in \mathbb{N}^k} \psi_\ell \, z^\ell$. In general the sum defining $\langle \Psi \rangle$ may not converge, but it converges and equals 1 when $\Psi$ is a mixed state. Suppose $O$ is an operator on the space of formal power series in the variables $z_1, \dots, z_k$. We define the expected value of $O$ in the mixed state $\Psi$ to be $\langle O \Psi \rangle$, assuming this converges. This generalizes the usual definition of the expected value of a random variable, in a way that emphasizes the analogy between stochastic mechanics and quantum mechanics. While in quantum mechanics we compute expected values of observables using expressions of the form $\langle \Psi, O \Psi \rangle$, where the brackets denote an inner product, in stochastic mechanics we use $\langle O \Psi \rangle$. For more details see [4,5].
We are especially interested in the expected values of the number operators
$$N_i = a_i^\dagger a_i.$$
Since
$$\langle N_i \Psi \rangle = \sum_{\ell \in \mathbb{N}^k} \ell_i \, \psi_\ell$$
and $\psi_\ell$ is the probability of being in a pure state where there are $\ell_i$ things of the $i$th species, the above formula indeed gives the expected value of the number of things of this species. Our goal is to compute
$$\frac{d}{dt} \langle N_i \, \Psi(t) \rangle$$
assuming that $\Psi(t)$ obeys the master equation. For this, we need to define the falling powers of an operator $A$ as follows:
$$A^{\underline{p}} = A (A - 1) \cdots (A - p + 1)$$
for any $p \in \mathbb{N}$. If $m$ is a multi-index we define
$$N^{\underline{m}} = N_1^{\underline{m_1}} \cdots N_k^{\underline{m_k}}.$$
This is well defined because the number operators $N_i$ commute.

Lemma 7. For any multi-indices $\ell$ and $m$ we have
$$N^{\underline{m}} z^\ell = \ell^{\underline{m}} \, z^\ell.$$
Proof. By Lemma 6 and the definition of the number operators we have
$$N_i \, z^\ell = a_i^\dagger a_i \, z^\ell = \ell_i \, z^\ell.$$
The lemma follows directly from this.
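Lemma 7 can be checked numerically: since $N z^\ell = \ell \, z^\ell$, the operator falling power $N^{\underline{p}}$ multiplies $z^\ell$ by the scalar falling power $\ell^{\underline{p}}$. The one-variable sketch below (the representation and names are ours) applies $N^{\underline{p}} = N(N-1)\cdots(N-p+1)$ one factor at a time:

```python
def shift_N(psi, j):
    """Apply (N - j) to a power series in one variable z, stored as a
    dict {exponent: coefficient}: on z^l this multiplies by (l - j)."""
    return {l: (l - j) * c for l, c in psi.items()}

def op_falling(psi, p):
    """Apply the operator falling power N(N-1)...(N-p+1) to psi."""
    for j in range(p):
        psi = shift_N(psi, j)
    return psi

def scalar_falling(n, p):
    """The scalar falling power n(n-1)...(n-p+1)."""
    out = 1
    for j in range(p):
        out *= n - j
    return out
```

On the monomial $z^5$, for example, $N^{\underline{3}}$ multiplies by $5 \cdot 4 \cdot 3 = 60$, and on $z^2$ the falling power $N^{\underline{4}}$ gives zero, matching $\ell^{\underline{m}} = 0$ when $m > \ell$.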
There is a way to extract a classical state from a mixed state, which lets us connect the rate equation to the master equation. To do this, we write $N$ for the vector, or list, of number operators $(N_1, \dots, N_k)$. Then, given a mixed state $\Psi$, we set
$$\langle N \Psi \rangle = \left( \langle N_1 \Psi \rangle, \dots, \langle N_k \Psi \rangle \right).$$
If the expected values here are well-defined, $\langle N \Psi \rangle \in [0, \infty)^k$ is a classical state. It is a simplified description of the mixed state $\Psi$, which only records the expected number of instances of each species.
The next result says how this sort of classical state evolves in time, assuming that the mixed state it comes from obeys the master equation:

Theorem 8. For any stochastic reaction network and any mixed state $\Psi(t)$ evolving in time according to the master equation, we have
$$\frac{d}{dt} \langle N \, \Psi(t) \rangle = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, \langle N^{\underline{s(\tau)}} \, \Psi(t) \rangle,$$
assuming the expected values and their derivatives exist.
Proof. First note, using Lemmas 6 and 7, that for any multi-indices $\ell, m, n$ and any natural number $i$ with $1 \le i \le k$,
$$\langle N_i \, (a^\dagger)^n a^m \, z^\ell \rangle = (\ell - m + n)_i \; \ell^{\underline{m}}.$$
Thus if at any given time
$$\Psi(t) = \sum_{\ell \in \mathbb{N}^k} \psi_\ell(t) \, z^\ell$$
and $\Psi(t)$ evolves in time according to the master equation, then by Theorem 5 we have
$$\frac{d}{dt} \langle N_i \, \Psi(t) \rangle = \langle N_i \, H \Psi(t) \rangle = \sum_{\tau \in T} r(\tau) \, (t_i(\tau) - s_i(\tau)) \, \langle N^{\underline{s(\tau)}} \, \Psi(t) \rangle.$$
Recall that $N$ is our notation for the vector of number operators $(N_1, \dots, N_k)$. Thus, the above equation is equivalent to
$$\frac{d}{dt} \langle N \, \Psi(t) \rangle = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, \langle N^{\underline{s(\tau)}} \, \Psi(t) \rangle.$$

The above theorem makes clear that the rate equation is closely related to the master equation, since the former says
$$\frac{d}{dt} x(t) = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, x(t)^{s(\tau)}$$
while the latter implies
$$\frac{d}{dt} \langle N \, \Psi(t) \rangle = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, \langle N^{\underline{s(\tau)}} \, \Psi(t) \rangle.$$
Suppose we let $x(t)$ be the classical state coming from the mixed state $\Psi(t)$:
$$x(t) = \langle N \, \Psi(t) \rangle.$$
The rate equation for $x(t)$ would then follow from the master equation for $\Psi(t)$ if we had
$$\langle N^{\underline{m}} \, \Psi \rangle = \langle N \, \Psi \rangle^m$$
for every multi-index $m$. This equation is not true in general. However, it should hold approximately in a suitable limit of large numbers. And surprisingly, it holds exactly for coherent states.
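Theorem 8 can be watched in action numerically. For the birth-death network $0 \to X$ at rate κ and $X \to 0$ at rate μ, every source complex contains at most one particle, so the falling powers reduce to ordinary powers and Theorem 8 gives exactly $\frac{d}{dt}\langle N \Psi(t)\rangle = \kappa - \mu \langle N \Psi(t)\rangle$, whose solution from an initially empty state is $(\kappa/\mu)(1 - e^{-\mu t})$. The sketch below (the rates, truncation, and step size are illustrative choices of ours) integrates the master equation directly and compares:

```python
from math import exp

kappa, mu, nmax = 1.0, 0.5, 60
psi = [0.0] * (nmax + 1)
psi[0] = 1.0                      # start in the pure state with 0 particles

def step(psi, dt):
    """One forward-Euler step of the master equation d psi/dt = H psi."""
    new = psi[:]
    for n in range(nmax + 1):
        # birth: n -> n+1 at rate kappa; death: n -> n-1 at rate mu * n
        if n + 1 <= nmax:
            new[n + 1] += dt * kappa * psi[n]
            new[n]     -= dt * kappa * psi[n]
        new[n] -= dt * mu * n * psi[n]
        if n - 1 >= 0:
            new[n - 1] += dt * mu * n * psi[n]
    return new

dt, T = 1e-3, 4.0
for _ in range(int(T / dt)):
    psi = step(psi, dt)

mean = sum(n * p for n, p in enumerate(psi))      # <N Psi(T)>
exact = (kappa / mu) * (1 - exp(-mu * T))         # prediction from Theorem 8
```

Total probability is conserved by the evolution, and the expected particle number tracks the prediction of Theorem 8 up to discretization error.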

Coherent states
For any $c \in [0, \infty)^k$, we define the coherent state with expected value $c$ to be the mixed state
$$\Psi_c = \frac{e^{c \cdot z}}{e^c}$$
where $c \cdot z$ is the dot product of the vectors $c$ and $z$, and we set $e^c = e^{c_1 + \cdots + c_k}$. Equivalently,
$$\Psi_c = e^{-c} \sum_{n \in \mathbb{N}^k} \frac{c^n z^n}{n!}$$
where $c^n$ and $z^n$ are defined as products in our usual way, and $n! = n_1! \cdots n_k!$. The name 'coherent state' comes from quantum mechanics [15], where we think of the coherent state $\Psi_c$ as the quantum state that best approximates the classical state $c$. In the state $\Psi_c$, the probability of having $n_i$ things of the $i$th species is equal to
$$e^{-c_i} \, \frac{c_i^{n_i}}{n_i!}.$$
This is precisely the definition of a Poisson distribution with mean equal to $c_i$. The state $\Psi_c$ is thus a product of independent Poisson distributions.

Theorem 9. Suppose $\Psi(t)$ is a solution of the master equation that is a coherent state at some time $t_0$. Then $x(t) = \langle N \, \Psi(t) \rangle$ obeys the rate equation at time $t_0$.

To prove Theorem 9, we start by setting
$$x(t) = \langle N \, \Psi(t) \rangle$$
where $\Psi(t)$ is a solution of the master equation. Theorem 8 implies that the time derivative of $x$ equals
$$\dot x(t) = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, \langle N^{\underline{s(\tau)}} \, \Psi(t) \rangle.$$
If $\Psi(t)$ is a coherent state at time $t = t_0$, Lemma 13 implies that
$$\langle N^{\underline{s(\tau)}} \, \Psi(t_0) \rangle = \langle N \, \Psi(t_0) \rangle^{s(\tau)} = x(t_0)^{s(\tau)},$$
so we have
$$\dot x(t_0) = \sum_{\tau \in T} r(\tau) \, (t(\tau) - s(\tau)) \, x(t_0)^{s(\tau)},$$
which is the rate equation at time $t_0$, as desired.
In general, the above result applies at only one moment in time. The reason is that if a solution of the master equation is a coherent state at some time, it need not be a coherent state at later (or earlier) times. Still, Theorem 8 implies that as long as the expected values $\langle N^{\underline{m}} \, \Psi(t) \rangle$ are close to the powers $\langle N \, \Psi(t) \rangle^m$, at least when $m$ is the source of some reaction, the expected values $x(t) = \langle N \, \Psi(t) \rangle$ will approximately obey the rate equation. This could be studied in detail using either the techniques introduced here or those discussed by Anderson and Kurtz [2]. Furthermore, in some cases when $\Psi(t)$ is initially a coherent state it continues to be so for all times. In this case, $x(t)$ continues to exactly obey the rate equation. For example, this is true when all the complexes in the reaction network consist of a single species. It is also true for a large class of equilibrium solutions of the master equation, as shown by Anderson, Craciun and Kurtz [1]. For more on how quantum techniques apply to this situation, see the companion to this paper [6].
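The key computation behind Theorem 9, that the expected falling power of a Poisson random variable with mean $c$ equals $c^m$, is easy to verify numerically. The sketch below (the truncation point and test values are ours) sums over the Poisson distribution directly:

```python
from math import exp, factorial

def poisson_pmf(c, n):
    """Probability of the value n for a Poisson distribution with mean c."""
    return exp(-c) * c**n / factorial(n)

def falling(n, p):
    """The falling power n(n-1)...(n-p+1)."""
    out = 1
    for j in range(p):
        out *= n - j
    return out

def expected_falling(c, m, nmax=60):
    """Truncated sum of n^{falling m} * P(n) over a Poisson distribution."""
    return sum(falling(n, m) * poisson_pmf(c, n) for n in range(nmax + 1))
```

For small means the truncation error is negligible, and the sums reproduce $c^m$ to high accuracy: for example $\langle N^{\underline{2}} \rangle = c^2$, in contrast with the ordinary second moment $\langle N^2 \rangle = c^2 + c$.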