Vaccination Control in a Stochastic SVIR Epidemic Model

For a stochastic differential equation SVIR epidemic model with vaccination, we prove almost sure exponential stability of the disease-free equilibrium for ℛ₀ < 1, where ℛ₀ denotes the basic reproduction number of the underlying deterministic model. We study an optimal control problem for the stochastic model as well as for the underlying deterministic model. In order to solve the stochastic problem numerically, we use an approximation based on the solution of the deterministic model.


Introduction
The discovery of the first vaccine marked a major breakthrough in the battle against infectious diseases. The subsequent development of vaccines for various diseases has brought about remarkable results. Vaccination gained increasing popularity and success after the eradication of smallpox following the outbreak of 1976 [1].
Vaccination is a commonly used method to control diseases such as measles, polio, and tuberculosis. Usually there are different dosage schedules for different diseases and vaccines. For some diseases, vaccinees must receive several doses separated by fixed time intervals (see for instance Gabbuti et al. [2]). In a given population, the proportion of susceptibles who go on to vaccination depends on different factors, one of which is the availability of the necessary resources.
There are numerous examples of vaccination models in the literature; see, for instance, the book of Brauer and Castillo-Chávez [3] or the journal papers [4][5][6][7][8][9][10]. With vaccination models one is interested in the extent to which a vaccination program reduces the basic reproduction number, or in how one can optimally roll out a vaccination program over time in order to reach a certain target. In the latter case, optimal control theory is the obvious candidate to employ in the analysis [11][12][13][14]. Usually the proportion of susceptibles admitted into vaccination is used as the control variable.
As a way of accommodating randomness in a compartmental epidemic model, several authors have proposed models with stochastic perturbation. This means that one modifies a system of ordinary differential equations (ODEs) by adding stochastic noise or stochastic perturbations, giving rise to a system of stochastic differential equations (SDEs) [7,8,[15][16][17][18]. We shall refer to the original system of ODEs as the underlying deterministic model.
It is known (see the book of Mao [19], for instance) that the stability of a system can be improved by adding stochastic perturbations. SDE epidemic models have been studied by various authors, and it has been proved in the research articles [9,15,20,21] that stochastic perturbation improves stability of the disease-free equilibrium in the given models. SDE models with vaccination have been studied by Tornatore et al. [7,8]. In particular, in [8] it is shown that the model permits solutions that are almost surely positive, and a theorem on exponential stability in mean square is proved. The model of Tornatore et al. [8] will be further analyzed in this paper.
For SDE models in epidemiology, optimal control has not been studied (or at least not published) extensively. One of the reasons for this could very well be the difficulty with high dimensionality of the resulting partial differential equation (PDE) for the value function; see the paper of Sulem and Tapiero [22], for instance. A four-compartmental SIVR model such as in [7,8] could easily lead to a PDE having the time variable together with three state variables. In this contribution we study the exponential stability of the stochastic model of Tornatore et al. [8] and the control of vaccination, both in the underlying deterministic model and in the stochastic model. In both control problems, our goal is to characterize the vaccination rate (as control variable) which, on a finite time interval, minimizes the number of infected individuals balanced against the cost of vaccination. In the stochastic case we minimize an expected value. We use standard methods for the deterministic case and the Hamilton-Jacobi-Bellman equation in the stochastic case.
In Section 2 we study exponential stability of the disease-free equilibrium of the stochastic SIVR model that was introduced in [8]. We prove that almost sure exponential stability of the disease-free equilibrium prevails whenever R₀ < 1, R₀ being the basic reproduction number of the underlying deterministic model. The deterministic control problem is treated in Section 3, and we find a numerical solution for the optimal control problem by using the fourth-order Runge-Kutta method. The optimal control of the SDE model is presented in Section 4. We observe a similarity in the form of the control in the two cases, stochastic versus deterministic. On this basis, in Section 5 we propose that the numerical solution of the control problem of the underlying deterministic model can be used to compute an approximate solution to the stochastic control problem, assuming the perturbation parameter to be small. We present computational examples to illustrate our findings.

The Stochastic Model and Stability
In this section we introduce the SVIR model and we analyse the stability of the disease-free equilibrium. Firstly, we formulate the necessary assumptions for modeling with SDEs. We assume given a filtered complete probability space (Ω, ℱ, {ℱₜ}ₜ≥₀, P), and we let W(t) be a one-dimensional Wiener process defined on this probability space.
We consider a stochastic SVIR model similar to the stochastic SIVR model of [8]. As in [8], the population is subdivided into four compartments/classes. These classes consist of all the individuals who are susceptible to the disease (S), under vaccination (V), infected with the disease (I), and removed (R).
It is assumed that births and deaths occur at the same constant rate μ and that all newborns enter the susceptible class. New infections occur at a rate βSI, for a constant β which is called the contact rate. A fraction u(t) of the susceptible class is being vaccinated at time t. The vaccination may reduce but not completely eliminate susceptibility to the disease. Therefore the model includes a factor η in the contact rate of vaccinated members, such that 0 ≤ η ≤ 1. If η = 0 the vaccination is perfectly effective, while η = 1 means the vaccination has no effect at all. Furthermore, immunity to the disease is assumed to be permanent, so that a fraction γ of infectives goes into the removed class.
The total population is constant and the variables are normalized so that

S(t) + V(t) + I(t) + R(t) = 1. (1)

The stochastic SVIR model is given as follows:

dS = [μ − βSI − μS − u(t)S] dt − σSI dW,
dV = [u(t)S − ηβVI − μV] dt − σηVI dW, (2)
dI = [βSI + ηβVI − (μ + γ)I] dt + σ(S + ηV)I dW,
dR = [γI − μR] dt.

Of course the parameters μ, β, and γ are positive constants. The parameter σ determines the intensity of the stochastic perturbation. If σ = 0, then the model reduces to the underlying deterministic model.
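For a quick numerical look at the model, the system above can be integrated with an Euler-Maruyama scheme. The sketch below is illustrative only: the function name, the parameter values, and the initial state are our own assumptions, with drift and diffusion terms taken in the form displayed above.

```python
import numpy as np

def simulate_svir(mu=0.02, beta=0.5, gamma=0.4, eta=0.1, sigma=0.1, u=0.3,
                  x0=(0.8, 0.1, 0.1, 0.0), T=50.0, n=5000, seed=0):
    """Euler-Maruyama discretization of the stochastic SVIR system.

    All parameter values here are illustrative assumptions, not values
    taken from the paper.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    S, V, I, R = x0
    path = np.empty((n + 1, 4))
    path[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment over one step
        dS = (mu - beta*S*I - mu*S - u*S) * dt - sigma*S*I*dW
        dV = (u*S - eta*beta*V*I - mu*V) * dt - sigma*eta*V*I*dW
        dI = ((beta*S*I + eta*beta*V*I - (mu + gamma)*I) * dt
              + sigma*(S + eta*V)*I*dW)
        dR = (gamma*I - mu*R) * dt
        # clip at zero to preserve positivity of the discretized path
        S = max(S + dS, 0.0)
        V = max(V + dV, 0.0)
        I = max(I + dI, 0.0)
        R = max(R + dR, 0.0)
        path[k + 1] = (S, V, I, R)
    return path
```

Note that the drift terms sum to μ(1 − S − V − I − R) and the diffusion terms cancel, so the simulated total S + V + I + R stays close to 1, consistent with the normalization.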
If u is constant, then the model above is identical to that in [8], and the point

E₀ = (μ/(μ + u), u/(μ + u), 0, 0) (3)

is the disease-free equilibrium point of the underlying deterministic model and is the only equilibrium point of the stochastic model. For some n ∈ ℕ, some x₀ ∈ ℝⁿ, and an n-dimensional Wiener process W(t), consider the general n-dimensional stochastic differential equation

dX(t) = f(X(t), t) dt + g(X(t), t) dW(t), X(0) = x₀. (4)

A solution to the above equation is denoted by X(t; x₀). We assume that f(t, 0) = g(t, 0) = 0 for all t ≥ 0, so that the origin is an equilibrium point of (4).

Definition 1.
The equilibrium X = 0 of the system (4) is said to be almost surely exponentially stable if, for all x₀ ∈ ℝⁿ,

lim sup_{t→∞} (1/t) ln |X(t; x₀)| < 0 almost surely.

Let us denote by L the differential operator associated with equation (4), defined for a function V(t, x) ∈ C^{1,2}(ℝ₊ × ℝⁿ) by

LV(t, x) = V_t(t, x) + V_x(t, x) f(x, t) + (1/2) Trc[g^{trp}(x, t) V_{xx}(t, x) g(x, t)].

Here Trc means trace and trp denotes the transpose of a matrix. For the remainder of this section we shall study stability, and for this purpose we regard u(t) as a positive constant, u(t) = u for all t > 0. In [8] it was shown that the system of SDEs has unique solutions which are almost surely positive. Thus we shall restrict ourselves to sample paths ω ∈ Ω for which the coordinates are positive for all t ≥ 0. The following observation is a step towards the proof of the stability theorem. Recall that we have introduced the number k = μ(μ + u)^{−1} in expression (3). Let us define, for any constants c₁ > 0 and c₂ > 0, the stochastic processes M₁(t) and M₂(t).

Note that, by the normalization, S(t) + ηV(t) ≤ 1. Therefore, by the strong law of large numbers for martingales (see [19]), it follows that (1/t)Mᵢ(t) → 0 almost surely as t → ∞, for i = 1, 2.

Proof. Using the Itô formula, and with M₁(t) and M₂(t) defined as above, we can express ln I(t) in terms of these processes; from the resulting bound the claim of the proposition follows readily.
The following observation from [9], stated below as Lemma 3, is useful in exponential stability analysis. Now we present our stability theorem, Theorem 4, which asserts almost sure exponential stability of the disease-free equilibrium whenever R₀ < 1. Recall that R₀ is the basic reproduction number of the underlying deterministic model, R₀ = β/(μ + γ). Also recall from expression (3) that k = μ(μ + u)^{−1}.

In view of Lemma 3 we can define the required limits along a suitable increasing, unbounded sequence (tₙ), together with the corresponding lim sup. We find an upper bound for the sum of the three resulting terms, Λ₁ + Λ₂ + Λ₃. Since η² ≤ 1 and |S(t) − k| < 1, and noting that Λ₃ ≤ 0 and η < 1, we obtain an inequality satisfied by Λ. Now we note that |2σ(S − k)| ≤ 2σ, so that the coefficients in this bound are negative (see (13)). Furthermore, the corresponding terms cannot all be zero. Therefore Λ < 0, and the proof is complete.

For R₀ < 1, Theorem 4 guarantees the disease-free equilibrium to be almost surely exponentially stable, and indeed the I-curves in Figure 2 appear to converge to 0. For R₀ > 1 the theorem does not predict almost sure exponential stability, and in fact the I-curves in Figure 4 (and many simulations with R₀ = 1.724, not shown) certainly do not show any clear tendency to converge to 0.
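The almost sure exponential stability asserted by Theorem 4 can be probed numerically by estimating (1/t) ln I(t) along a single sample path. The sketch below is a hypothetical illustration: the helper name and all parameter values are our own assumptions, and the model is discretized by Euler-Maruyama in the form displayed in Section 2.

```python
import numpy as np

def lyapunov_estimate(beta, mu=0.02, gamma=0.4, eta=0.1, sigma=0.1, u=0.3,
                      T=200.0, n=20000, seed=1):
    """Estimate (1/t) * ln I(t) along one Euler-Maruyama sample path.

    A clearly negative estimate is consistent with almost sure exponential
    decay of I(t). Parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    S, V, I = 0.8, 0.1, 0.1
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dS = (mu - beta*S*I - mu*S - u*S) * dt - sigma*S*I*dW
        dV = (u*S - eta*beta*V*I - mu*V) * dt - sigma*eta*V*I*dW
        dI = ((beta*S*I + eta*beta*V*I - (mu + gamma)*I) * dt
              + sigma*(S + eta*V)*I*dW)
        # floor at a tiny positive value so that ln I(t) stays defined
        S = max(S + dS, 1e-12)
        V = max(V + dV, 1e-12)
        I = max(I + dI, 1e-12)
    return np.log(I) / T
```

With μ = 0.02 and γ = 0.4, a contact rate β = 0.2 gives R₀ = β/(μ + γ) ≈ 0.48 < 1, and the resulting estimate comes out negative, in line with the theorem.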

The Deterministic Optimal Control Problem
We now formulate and solve the deterministic version of the control problem. Recall from Section 2 that in our SVIR model, u(t) represents the fraction of the susceptible class being vaccinated at time t. We wish to design an optimal vaccination schedule u*(t) which minimizes a combination of the number of infectives on the one hand and the overall cost of the vaccination on the other hand, over a certain time horizon [0, T]. The cost of the vaccination is assumed to be proportional to the square of u(t).
For the purposes of optimization we introduce the functions f₁, f₂, and f₃ appearing as the drift terms of the S-, V-, and I-equations of the SDE system (2):

f₁(S, V, I, R, u) = μ − βSI − μS − uS,
f₂(S, V, I, R, u) = uS − ηβVI − μV,
f₃(S, V, I, R, u) = βSI + ηβVI − (μ + γ)I.

Now we can formulate the optimization problem.

Problem 5. Minimize the objective function

J(u) = ∫₀ᵀ [A u²(t) + I(t)] dt,

where A > 0 is a weight constant balancing the cost of vaccination against the number of infectives, subject to

Ṡ = f₁(S, V, I, R, u),
V̇ = f₂(S, V, I, R, u),
İ = f₃(S, V, I, R, u).

The control variable u(t) is assumed to be a measurable function of time and bounded: 0 ≤ u(t) ≤ b ≤ 1.
We solve Problem 5 using well-established control theory such as in the book [23] of Lenhart and Workman. We construct the Hamiltonian function, and to this end we introduce Lagrange multipliers λ₁(t), λ₂(t), and λ₃(t), also referred to as the costate variables. In the theorem below, the control variable, the state variables, and the costate variables are functions of time, but this dependence is suppressed except where required explicitly. The Hamiltonian of Problem 5 has the form

H(S, V, I, λ, u) = A u² + I + λ₁ f₁(S, V, I, R, u) + λ₂ f₂(S, V, I, R, u) + λ₃ f₃(S, V, I, R, u). (30)

Theorem 6. An optimal solution for Problem 5 exists and satisfies the following system of differential equations:

λ̇₁ = λ₁(βI + μ + u) − λ₂u − λ₃βI,
λ̇₂ = λ₂(ηβI + μ) − λ₃ηβI,
λ̇₃ = −1 + λ₁βS + λ₂ηβV − λ₃(βS + ηβV − μ − γ),

with transversality conditions λ₁(T) = 0, λ₂(T) = 0, and λ₃(T) = 0. Furthermore, the optimal vaccination rate is given by

u*(t) = min{max(0, (λ₁ − λ₂)S/(2A)), b}.

Proof. The Hamiltonian is convex with respect to u(t), and the existence of a solution follows. We check the first-order conditions for this optimization problem. We calculate the partial derivatives of the Hamiltonian with respect to the different state variables to obtain the time derivatives λ̇ᵢ(t) of the costate variables. The calculation of λ̇₁(t) = −∂H/∂S, λ̇₂(t) = −∂H/∂V, and λ̇₃(t) = −∂H/∂I leads to the equations asserted in the theorem. We now turn to the final part of the proof, which is about the form of the optimal control u*(t). Since the function u*(t) must optimize the Hamiltonian, we calculate

∂H/∂u = 2Au − (λ₁ − λ₂)S.

Consider now a fixed value of t. If 2Au − (λ₁ − λ₂)S is zero for some value u(t) in the interval [0, b], then the given value of u(t) is optimal. If for every u ∈ [0, b] we have 2Au − (λ₁ − λ₂)S ≥ 0 (resp., 2Au − (λ₁ − λ₂)S ≤ 0), then we must choose u(t) = 0 (resp., u(t) = b). Thus we must have u*(t) = min{max(0, (λ₁ − λ₂)S/(2A)), b}.

In Figure 5 we plot, for T = 3 years, the S-, V-, and I-trajectories of the deterministic model subject to the optimal control u*. In Figure 6 we plot u*(t) for the deterministic model; the dashed curve represents u*(t), that is, the optimal vaccination rate. In this illustration, the optimal vaccination roll-out starts with an initial vaccination rate of approximately 0.11 and decreases gradually over the next 3 years.
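The optimality system pairs a forward state problem with a backward costate problem, and the forward-backward sweep method of Lenhart and Workman [23] solves it iteratively: integrate the states forward, integrate the costates backward from the transversality conditions, and update the control from its characterization. The sketch below uses simple Euler steps for brevity (the paper uses fourth-order Runge-Kutta); the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def forward_backward_sweep(mu=0.02, beta=0.5, gamma=0.4, eta=0.1, A=10.0,
                           b=0.9, T=3.0, n=300, x0=(0.8, 0.1, 0.1),
                           iters=200, relax=0.5):
    """Forward-backward sweep for the deterministic vaccination problem.

    Minimizes int_0^T (A*u^2 + I) dt with the control characterization
    u* = min(max(0, (l1 - l2)*S/(2A)), b). Parameter values are
    illustrative assumptions.
    """
    dt = T / n
    u = np.zeros(n + 1)
    S = np.empty(n + 1); V = np.empty(n + 1); I = np.empty(n + 1)
    l1 = np.zeros(n + 1); l2 = np.zeros(n + 1); l3 = np.zeros(n + 1)
    for _ in range(iters):
        # forward sweep: state equations from the initial condition
        S[0], V[0], I[0] = x0
        for k in range(n):
            S[k+1] = S[k] + dt*(mu - beta*S[k]*I[k] - mu*S[k] - u[k]*S[k])
            V[k+1] = V[k] + dt*(u[k]*S[k] - eta*beta*V[k]*I[k] - mu*V[k])
            I[k+1] = I[k] + dt*(beta*S[k]*I[k] + eta*beta*V[k]*I[k]
                                - (mu + gamma)*I[k])
        # backward sweep: costate equations with l(T) = 0
        l1[n] = l2[n] = l3[n] = 0.0
        for k in range(n, 0, -1):
            dl1 = l1[k]*(beta*I[k] + mu + u[k]) - l2[k]*u[k] - l3[k]*beta*I[k]
            dl2 = l2[k]*(eta*beta*I[k] + mu) - l3[k]*eta*beta*I[k]
            dl3 = (-1.0 + l1[k]*beta*S[k] + l2[k]*eta*beta*V[k]
                   - l3[k]*(beta*S[k] + eta*beta*V[k] - mu - gamma))
            l1[k-1] = l1[k] - dt*dl1  # Euler step backwards in time
            l2[k-1] = l2[k] - dt*dl2
            l3[k-1] = l3[k] - dt*dl3
        # control update from the characterization, with relaxation
        u_new = np.clip((l1 - l2)*S/(2.0*A), 0.0, b)
        u = relax*u_new + (1.0 - relax)*u
    return u, S, V, I
```

The relaxation step u ← ½(u_new + u) is a standard damping device that helps the sweep converge.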

The Stochastic Optimal Control Problem
In this section we formulate the stochastic version of the optimization problem and describe its solution. For stochastic control theory we refer to the book [24] of Øksendal. Our objective is to find an optimal vaccination rate u*(t) that minimizes the objective functional which for an initial state x₀ is defined by

J(u) = E[ ∫₀ᵀ (A u²(t) + I(t)) dt ].

Here the expectation is obtained on the condition that the initial state (at time t = 0) of the system is x₀. In step with the deterministic problem of earlier, we assume that there is a fixed constant b ≤ 1 with u(t) ≤ b (a.s.). The class of admissible control laws is

A = {u : u is adapted to {ℱₜ}, and 0 ≤ u(t) ≤ b a.s.}. (36)

To solve this stochastic control problem, we define the performance criterion as follows:

J(t, x; u) = E[ ∫ₜᵀ (A u²(s) + I(s)) ds | X(t) = x ], (37)

where the expectation is conditional on the state of the system being a fixed value x at time t. We define the value function as

V(t, x) = inf_{u ∈ A} J(t, x; u),

where, in this section, V(t, x) denotes the value function rather than the vaccinated class. We determine a control law that minimizes the expected value J : A → ℝ₊ given by (37). We now formulate the stochastic analogue of the optimal control problem, subsequent to which we present the solution formulae.
Problem 7. Given the stochastic system (2) and given A as in (36) with J as in (37), find the value function V(t, x) and an optimal control function

u*(t) = arg inf_{u ∈ A} J(t, x; u).

We can find an expression for the optimal vaccination strategy u* through the following theorem.

Proof. We determine (41) via the dynamic programming approach. First we calculate L^u V(t, x). Applying the Hamilton-Jacobi-Bellman theory (see, for instance, [24]), we must find the infimum

inf_{0 ≤ u ≤ b} [ L^u V(t, x) + A u² + I ],

and for this purpose we need to find the partial derivative of this expression with respect to u.
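In the notation above, the dynamic programming argument rests on the Hamilton-Jacobi-Bellman equation, sketched below under the assumption that the running cost A u² + I is carried over from the deterministic problem; here V(t, x) denotes the value function, not the vaccinated class.

```latex
% Sketch of the HJB equation for Problem 7 (assumption: running cost
% A u^2 + I carried over from the deterministic problem; V(t,x) is the
% value function, not the vaccinated class).
\[
  \frac{\partial V}{\partial t}(t,x)
  + \inf_{0 \le u \le b}
      \left\{ \mathcal{L}^{u} V(t,x) + A u^{2} + I \right\} = 0,
  \qquad V(T,x) = 0.
\]
% Setting the u-derivative of the bracketed expression to zero (when the
% unconstrained minimizer is admissible) yields the same structure as in
% the deterministic case:
\[
  u^{*}(t)
  = \min\left\{ \max\!\left( 0,
      \frac{\left(V_{S} - V_{V}\right) S}{2A} \right),\, b \right\},
\]
% where V_S and V_V denote partial derivatives of the value function
% with respect to the S- and V-coordinates of the state.
```

This structural match with the deterministic control is precisely the similarity exploited in the numerical example, where the deterministic costates stand in for the partial derivatives of the value function.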

Numerical Example
In the discussion that follows, the computations are done for several values of the perturbation parameter σ. For each value of σ, we use the results of the deterministic control problem to find an approximate numerical solution for the stochastic control problem. In particular, we use λ₁ − λ₂ as a proxy for V_S − V_V in the calculation of u* in this case. We note that the presence of the factor S(t) makes u* a stochastic variable even with the said proxy (in the stochastic case).
For each value of σ, we show in Table 1 the J(0)-values obtained as the average over 3000 runs made for different candidate controls û, taking u = û. In these runs we use the costates λ₁ and λ₂ of the underlying deterministic model.
Using a contact rate β₀ = β − σ²/2 instead of β, we choose the first candidate û₀. The adjustment of the contact rate is motivated by the general stabilizing effect of this type of stochastic perturbation, the perturbation being associated with the parameter σ. We also take a candidate û₁ corresponding to the rate β₀ + ε, with ε = 0.001. Furthermore, we consider candidates corresponding to the contact rate β. Just for further comparison we choose a linear curve candidate û₅(t), with parameters 0.0001 and 0.00015, also corresponding to a contact rate of β. Let us use the notation Jᵢ(0) for the value of J(0) corresponding to the control ûᵢ. We notice that J₄(0) and J₁(0) < J₂(0) < J₃(0) all exceed J₀(0). This creates the impression that J₀(0) is close to being a minimum. At least, with all the difficulties of a more precise solution, the choice of û₀ seems a viable option in a real application. The inequalities J₋₃(0) > J₋₂(0) > J₋₁(0) > J₀(0) and J₀(0) < J₁(0) < J₂(0) < J₃(0) have been tested for other parameter choices too, with σ small. All the simulations tested revealed the same behaviour.
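The approximation procedure just described can be sketched as a Monte Carlo evaluation of J(0) for a candidate control. The code below is illustrative: the function name and parameter values are our own assumptions, the deterministic costates λ₁ and λ₂ are passed in as callables (in practice they would come from the solved deterministic problem), and the factor S(t) keeps the control stochastic.

```python
import numpy as np

def estimate_cost(l1, l2, mu=0.02, beta=0.5, gamma=0.4, eta=0.1, sigma=0.3,
                  A=10.0, b=0.9, T=3.0, n=300, x0=(0.8, 0.1, 0.1),
                  runs=500, seed=0):
    """Monte Carlo estimate of J(0) = E int_0^T (A*u^2 + I) dt.

    The control is u(t) = min(max(0, (l1(t) - l2(t))*S(t)/(2A)), b), with
    l1, l2 the costates of the deterministic problem given as callables.
    Model form and parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    total = 0.0
    for _ in range(runs):
        S, V, I = x0
        cost = 0.0
        for k in range(n):
            t = k * dt
            # deterministic costates, stochastic factor S(t)
            u = min(max((l1(t) - l2(t)) * S / (2.0 * A), 0.0), b)
            cost += (A * u * u + I) * dt
            dW = rng.normal(0.0, np.sqrt(dt))
            dS = (mu - beta*S*I - mu*S - u*S) * dt - sigma*S*I*dW
            dV = (u*S - eta*beta*V*I - mu*V) * dt - sigma*eta*V*I*dW
            dI = ((beta*S*I + eta*beta*V*I - (mu + gamma)*I) * dt
                  + sigma*(S + eta*V)*I*dW)
            S = max(S + dS, 0.0)
            V = max(V + dV, 0.0)
            I = max(I + dI, 0.0)
        total += cost
    return total / runs
```

Candidate controls can then be compared by evaluating this estimate for each choice of costate proxy, in the spirit of Table 1.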
In Figures 7-9 we simulate the optimal solutions S*, V*, and I* of the deterministic and stochastic (σ = 0.3) models. We use the same values for the parameters μ, β, γ, η, σ, A, and b, and the same initial conditions S₀, V₀, and I₀, as in the discussion preceding these simulations.
An important point to note about our approximation is that it fully accommodates the stochasticity, embodied and concentrated in the factor S(t) of the expression for u*(t). Therefore it gives a very good approximation, at least in the sense that, in Table 1, the minimum value of J(0) consistently corresponds to û₀.

Conclusion
This paper presents some further insights into a stochastic vaccination model introduced in the paper of Tornatore et al. [8]. Our investigations covered two important aspects: exponential stability of the disease-free equilibrium and optimal control of vaccination.
Regarding stability, the main result of our paper has a particularly simple formulation. Essentially it says that we have almost sure exponential stability whenever the basic reproduction number of the underlying deterministic model is below unity. It would be interesting to know whether an increase in the vaccination rate leads to better stability of the stochastic model. Nevertheless, the result as it stands is a good assurance.
As for the control problem, on the basis of a popular type of objective functional, we designed an efficient strategy for the roll-out of vaccination. It roughly amounts to minimizing the number of infections, balanced against the cost of vaccination. We exploited a similarity between the forms of the optimal controls for the stochastic model and the underlying deterministic model, and then used the relative simplicity of the latter to find approximate numerical solutions for the stochastic optimal control. Numerical simulation enabled us to assess the feasibility of this approach for a specific example. A more formal approach to the numerical solution of the stochastic optimal control problem is far more intricate and labour-intensive, and our method is a workable alternative. This could be the starting point for more sophisticated approximation methods.