On the Optimal Dynamic Control Strategy of Disruptive Computer Virus

Disruptive computer viruses have inflicted huge economic losses.This paper addresses the development of a cost-effective dynamic control strategy of disruptive viruses. First, the development problem ismodeled as an optimal control problem. Second, a criterion for the existence of an optimal control is given.Third, the optimality system is derived. Next, some examples of the optimal dynamic control strategy are presented. Finally, the performance of actual dynamic control strategies is evaluated.


Introduction
The proliferation of computer networks has brought huge benefits to human society.Meanwhile, it offers a shortcut to spread computer viruses, inflicting large economic losses [1].Consequently, containing the prevalence of digital viruses has been one of the major concerns in the field of cybersecurity.The spreading dynamics of computer virus has been widely adopted as the standard method for assessing the viral prevalence [2].Since the seminal work by Kephart and White [3,4], a multitude of computer virus-spreading models, ranging from the population-level models [5][6][7][8][9][10][11][12] and the network-level models [13][14][15][16][17] to the node-level models [18][19][20][21][22], have been proposed.
One of the central tasks in cybersecurity is to develop control strategies of computer virus so that, subject to limited budgets, the losses caused by computer infections are minimized [23].In recent years, the optimal design problem of virus control strategies has been modeled as static optimization problems [24][25][26][27][28].The optimal static control strategies, however, only apply to the small-timescale situations where the network state keeps unchanged.In the realistic situations where the network state is varying over time, the optimal design problem of virus control strategies can be modeled as dynamic optimal control problems [29][30][31][32][33].The optimal dynamic control strategies outperform their static counterparts, because the former not only are more cost-effective but apply to different timescales.
A disruptive computer virus is defined as a computer virus whose life period consists of two consecutive phases: the latent phase and the disruptive phase.In the latent phase, a disruptive virus staying in a host does not perform any disruptive operations.Rather, the virus tries to infect as many hosts as possible by sending its copies to them.In the disruptive phase, a disruptive virus staying in a host performs a variety of operations that disrupt the host, such as distorting data, deleting data or files, and destroying the operating system.To assess the prevalence of disruptive viruses, a number of virus-spreading models, which are referred to as the Susceptible-Latent-Bursting-Susceptible (SLBS) models, have been suggested [34][35][36][37][38].The main distinction between the SLBS models and the traditional SEIS models lies in that the latent hosts in the former possess strong infecting capability, whereas the exposed individuals in the latter possess no infecting capability at all.Recently, the basic SLBS models have been extended towards different directions [39][40][41][42][43].At the population-level, Chen et al. [44] developed an optimal dynamic control strategy of disruptive viruses.All of the above-mentioned SLBS models are populationlevel; that is, they are based on the assumption that every infected host in the population is equally likely to infect any other susceptible host.These models have two striking defects: (a) the personalized features of different hosts cannot be taken into consideration and (b) the impact of the structure of the virus-propagating network on the viral prevalence cannot be revealed by studying the models.To overcome these defects, Yang et al. [45] presented a node-level SLBS model.In our opinion, optimal dynamic control strategies of disruptive viruses should be developed at the node-level, so as to achieve the best cost-efficiency.
This paper is intended to develop at the node-level an optimal dynamic control strategy of disruptive computer viruses.First, the development problem is modeled as an optimal control problem.Second, a criterion for the existence of an optimal control for the optimal control problem is given.Third, the optimality system for the optimal control problem is presented.Next, some exemplar optimal dynamic control strategies are given.Finally, the difference between the costefficiency of an arbitrary control strategy and that of the optimal dynamic strategy is estimated.
The subsequent materials of this work are organized as follows.Section 2 presents the preliminary knowledge on optimal control theory.Sections 3 and 4 formulate and study the optimal control problem, respectively.Some numerical examples are given in Section 5. Section 6 estimates the aforementioned difference.Finally, Section 7 closes this work.

Fundamental Knowledge
For fundamental knowledge on optimal control theory, see [46].
Consider the following optimal control problem. Minimize Lemma 1. Problem (P) has an optimal control if the following five conditions hold simultaneously.
(C 1 ) U is closed and convex.
(C 2 ) There is u(⋅) ∈ U such that the adjunctive dynamical system is solvable.
) is bounded by a linear function in x.
As with the traditional SLBS models, assume that at any time every node in the population is in one of three possible states: susceptible, latent, and disruptive.Susceptible nodes are those that are not infected with any disruptive computer virus.Latent nodes are those that are infected with some disruptive viruses and all of them are in the latent phase.Disruptive nodes are those that are infected with some disruptive viruses and some of them are in the disruptive phase.Let   () = 0, 1, and 2 denote that at time  node  is susceptible, latent, and disruptive, respectively.Let As   () +   () +   () ≡ 1 (1 ≤  ≤ ), the vector    Let Δ > 0 denote a very small time interval.Hypotheses (H 1 )-(H 5 ) imply the following relations.
As a result, we have By the total probability formula, we get Transposing the terms   () and   () from the right to the left and dividing both sides by Δ, we get Letting Δ → 0, we get the following dynamical model.
where  ≥ 0, 1 ≤  ≤ .We refer to the model as the controlled SLBS model, where the control,  () = ( ,1 () , . . .,  , () ,  ,1 () , . . ., stands for a dynamic control strategy of disruptive computer viruses.The admissible set of controls is Model ( 7) can be written in matrix notation as Given a dynamic control strategy (⋅).The total loss can be measured by ], and the total cost can be gauged by As a result, the performance of a dynamic control strategy (⋅) can be measured by Hence, developing an optimal dynamic control strategy of disruptive viruses can be modeled as solving the following optimal control problem. Minimiz A solution to the optimal control problem (P * ) stands for an optimal dynamic control strategy of disruptive viruses.For convenience, let

A Theoretical Study of the Optimal Control Problem
In this section, we shall study the optimal control problem (P * ) presented in the previous section.

Existence of an Optimal Control.
As a solution to the optimal control problem (P * ) stands for an optimal dynamic control strategy of disruptive viruses, it is critical to show that there is such an optimal control.For that purpose, let us show that the five conditions in Lemma 1 hold true simultaneously.
Lemma 2. The admissible set Γ is closed.
Lemma 5. f(I, ) is bounded by a linear function in I.
Proof.The claim follows from the observation that, for  = 1, 2, . . ., , Proof.The Hessian of  with respect to , is always positive semidefinite.This implies the convexity of . Lemma We are ready to present the main result of this subsection.
Proof.Lemmas 2-7 show that the five conditions in Lemma 1 are all met.Hence, the existence of an optimal control follows from Lemma 1.
By applying the forward-backward Euler scheme to the optimality system, we can obtain the numerical solution to the optimal control problem (P * ), that is, an optimal dynamic control strategy of disruptive viruses.

Numerical Examples
This section gives some examples of the optimal dynamic control strategy of disruptive computer viruses.Given a dynamic control strategy ().Define the average control (AC) function, the average cumulative loss (ACL) function, the average cumulative cost (ACC) function, and the average cumulative performance (ACP) function as follows.
These functions form an evaluation criterion of dynamic control strategies of disruptive viruses.

Scale-Free
Network.Scale-free networks are a large class of networks having widespread applications.For our purpose, generate a scale-free network  with  = 100 nodes using the Barabasi-Albert method [48].
For the optimal dynamic control strategy to the optimal control problem and some static control strategies, the AC functions, the ACL functions, the ACC functions, and the ACP function are shown in Figure 2.

Small-World
Network.Small-world networks are another large class of networks having widespread applications.For our purpose, generate a small-world network  with  = 100 nodes using the Watts-Strogatz method [49].
For the optimal dynamic control strategy to the optimal control problem and some static control strategies, the AC functions, the ACL functions, the ACC functions, and the ACP function are shown in Figure 3.(d)   (0) = 0.1 and   (0) = 0, 1 ≤  ≤ .
For the optimal dynamic control strategy to the optimal control problem and some static control strategies, the AC functions, the ACL functions, the ACC functions, and the ACP function are shown in Figure 4.

Performance Evaluation
The previous discussions manifest that if the parameters in the optimal control problem (P * ) are all available, then an optimal dynamic control strategy can be obtained by numerically solving the optimality system.In realistic scenarios, however, some of these parameters might be unavailable.In such situations, it is necessary to estimate the performance of an actual dynamical control strategy in comparison with that of the optimal dynamical control strategy.Now let us present such an estimation.
Theorem 13.Consider the optimal control problem (P * ).Let  * (⋅) be the optimal dynamic control strategy, (⋅) an arbitrary dynamic control strategy.Then, As Although this estimation is rough, it takes the first step towards the accurate performance evaluation of actual dynamic control strategies of disruptive computer viruses.

Conclusions and Remarks
This paper has studied the problem of containing disruptive computer viruses in a cost-effective way.The problem has been modeled as an optimal control problem.A criterion for the existence of an optimal control has been given, and the optimality system has been derived.Some examples of the optimal dynamic control strategy have been presented.Finally, the performance of an actual control strategy of disruptive viruses has been estimated.Towards this direction, there are a number of problems that are worth studying.First, the bandwidth resources consumed in the virus control process should be measured and incorporated in the cost.Second, the optimal dynamic control problem should be investigated under sophisticated epidemic models such as the impulsive epidemic models [51,52], the stochastic epidemic models [53][54][55], and the epidemic models on time-varying networks [56][57][58].Last, it is rewarding to apply the methodology developed in this paper to the optimal dynamic control of rumor spreading [59][60][61].

Figure 2 :Figure 3 :diag ( 1 −diag ( 1 −
Figure 2: (a) The AC functions, (b) the ACL functions, (c) the ACC functions, and (d) the ACP functions for the optimal dynamic control strategies in Example 10.

Figure 4 :
Figure 4: (a) The AC functions, (b) the ACL functions, (c) the ACC functions, and (d) the ACP functions for the optimal dynamic control strategies in Example 12.