An Optimal Cooperative Control Design for State Consensus of Periodic Multiagent Systems

The problem of cooperative optimal control of multiagent systems with linear periodic continuous-time dynamics is considered. The state consensus problem is formulated as an optimal control problem in which the consensus requirement is reflected in the cost. The cost of each subsystem is optimized over a finite horizon while the states of the agents converge to a common value, with a control signal that depends on the interactions with the neighboring subsystems. The proposed control law consists of local and regional terms that capture local measurements and measurements due to interactions with the neighboring agents, respectively. These two terms are obtained by solving a Hamilton-Jacobi-Bellman (HJB) partial differential equation. A numerical example is presented to demonstrate the effectiveness of the proposed method.

Various dynamics can be considered for the agents in a multiagent team. Among these, multiagent systems with linear periodic dynamics, which arise in several applications such as satellite networks and robotic systems, are of particular interest. The study of periodic control systems is important because the properties of periodicity can be exploited to achieve more suitable designs, and periodic models more accurately describe physical dynamics with cyclic behavior, such as Low Earth Orbit (LEO) satellites or robotic systems [17][18][19]. Furthermore, the theory of periodic systems provides useful tools to improve the control performance of closed-loop systems and can solve problems for time-invariant systems where time-invariant controllers are inadequate [20,21].
One of the first works on periodic multiagent systems was presented in [22] for discrete-time dynamical systems by extending a gradient-based optimization approach for a single periodic system to the multiagent case. The method treats the network's cost as a linear combination of the agents' costs and minimizes it by finding a single periodic output feedback. A stable optimal state feedback is proposed in [23] to control a group of periodic agents that can move at different speeds but only in the same direction. However, the designed control signal for each agent is completely independent of the other agents' feedback.
The consensus problem is a fundamental topic in the area of cooperative control of multiagent systems. In a network of agents, consensus means reaching an agreement (either in state or output) regarding a certain quantity of interest while the agents start from different initial states [24]. Various control methods, such as optimal control, event-based control, and sliding mode control, have been investigated for the consensus problem of multiagent systems (see [25,26] and the references therein). In [27], the consensus problem is solved via periodically intermittent control for a class of second-order agents, but optimality is not considered. In [28], a networked system whose agents can only communicate with their local neighbors is considered, and a decentralized event-based control strategy is proposed for the consensus problem, again without optimality conditions. In [29], an optimal controller is designed to achieve state consensus for a team of agents with discrete-time dynamics.
A closed-loop solution to an optimal control problem can be obtained by solving the HJB equation if a suitable solution candidate can be guessed [30][31][32]. Among studies on the optimization of multiagent networks based on the HJB equation, coupled quadratic HJB equations with a nonquadratic input energy cost were formed in [33] to investigate optimal synchronization control for generic linear multiagent systems with input saturation. The authors in [34] focused on interactions among the agents to design a controller for a continuous-time multiagent team to reach output consensus; the result was implemented on a group of mobile robot vehicles. However, for a solution of the HJB equation to exist, that method requires nonsingular square input matrices. A similar method is used in [35] to obtain an optimal state consensus controller for a team of nonlinear agents. The authors in [36] designed an optimal consensus-based formation controller for a team of mobile robots via the HJB equation, considering both full and partial connectivity of the network.
The main focus of this paper is on designing optimal group cooperation, in the sense of optimal consensus, for a team of agents with continuous-time periodic dynamics. We assume that each agent of the team is connected to some other agents, known as its neighbors, and that all agents are connected to each other directly or indirectly, so the connectivity of the entire network is guaranteed. Having only limited access to information from their neighbors, the agents must exchange information to achieve cooperation. Considering this challenge, we design a distributed control system with a separate controller for each agent that uses partial information from its neighbors. Each agent in the proposed team has its own cost function that must be minimized while the main goal of the team is to reach state consensus. Unlike the design in [22], we design distributed controllers that minimize each agent's cost according to the feedback signals received from its neighbors as well as its own information. Similar to the method used in [37], we assume that the control signal consists of two parts, but, unlike [34], in our proposed design method both parts are in feedback form. By the connectivity assumption, to reach state consensus it suffices to consider state consensus of each agent only with its neighbors. The consensus requirement is reflected in the cost function of each agent. Similar to [34], we use the general HJB method to solve the minimization problem, but, unlike that work, the input matrices do not need to be invertible. In addition, we employ interval numbers for the first time for periodic systems in order to extend a result for LTI systems to periodic systems.
The paper is organized as follows: Section 2 provides the preliminary background and the problem definition. In Section 3, our strategy for reaching the optimal control signal is explained and the proposed design method is presented. In Section 4, a numerical example and simulation results are provided. Finally, concluding remarks are given in Section 5.

The Problem Definition and Preliminaries
A multiagent team consists of a set of agents A = {a_i, i = 1, . . ., N}, where N denotes the number of agents in the team. Similar to [32,38,39], all agents are assumed to have the same dynamical model, which is typical in many applications. Each agent a_i is considered a linear periodic continuous-time system of the following form:

ẋ_i(t) = A(t) x_i(t) + B(t) u_i(t),  i = 1, . . ., N, (1)

where x_i(t) ∈ R^{n×1} and u_i(t) ∈ R^{m×1} are the state and input vectors, respectively, and A(t) ∈ R^{n×n} and B(t) ∈ R^{n×m} are periodic matrices with period T, i.e., A(t + T) = A(t) and B(t + T) = B(t).
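As a concrete illustration of the model (1) (a hedged sketch, not from the paper: the matrices A(t), B(t), the period T, and the step size below are all hypothetical), a linear periodic system can be simulated with a fixed-step RK4 integrator, and the defining periodicity A(t + T) = A(t), B(t + T) = B(t) can be checked numerically:

```python
import numpy as np

T = 2 * np.pi  # assumed period of the system matrices

def A(t):
    # hypothetical T-periodic state matrix, A(t + T) = A(t)
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.5 * np.cos(t), -0.2]])

def B(t):
    # hypothetical T-periodic input matrix
    return np.array([[0.0],
                     [1.0 + 0.3 * np.sin(t)]])

def rk4_step(t, x, u, dt):
    """One RK4 step of dx/dt = A(t)x + B(t)u with u held constant over dt."""
    f = lambda tau, xi: A(tau) @ xi + B(tau) @ u
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# periodicity check: the matrices repeat after one period
assert np.allclose(A(1.3 + T), A(1.3))
assert np.allclose(B(1.3 + T), B(1.3))
```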
For cooperation and coordination among the team's agents, each agent must be aware of the status of the other agents, which in this work is taken to be their states. Hence, the agents need to communicate with each other through two-way links. It is assumed that the network is partially connected such that there is no isolated agent in the team and that all agents are connected to each other either directly or indirectly via their neighboring sets. The connections among neighboring agents are defined by an N × N nonsingular and symmetric matrix H = [h_ij], known as the neighboring matrix, with h_ii = 0 and the other entries given by h_ij = 1 if agents a_i and a_j are neighbors and h_ij = 0 otherwise. In this work, we consider the problem of optimal state consensus for a team of periodic agents; i.e., all agents in a neighboring set must reach the same state. To see how consensus propagates, suppose that one arbitrary agent a_i approaches the desired state; then any other agent a_j for which h_ij = 1 can directly reach the common reference state too. For an agent a_j such that h_ij = 0, the connectivity assumption implies the existence of an information path between a_i and a_j through their neighbors, which helps a_j reach the desired value. Under these hypotheses, the main aim of this work is to minimize each individual agent's cost function via its local and regional feedback signals such that all agents in the team reach state consensus.
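The two assumptions on the neighboring matrix H (symmetry from two-way links, and no isolated agent with the whole network connected) can be checked mechanically. A minimal sketch with a hypothetical 3-agent path topology (the matrix below is an example, not the one from the paper):

```python
import numpy as np

# Hypothetical 3-agent neighboring matrix H: h_ij = 1 if agents i and j
# are neighbors, 0 otherwise, with zero diagonal (h_ii = 0).
H = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

def is_connected(H):
    """The network is connected iff every agent is reachable from agent 0
    through neighbor links (simple graph traversal)."""
    n = H.shape[0]
    seen = {0}
    stack = [0]
    while stack:
        i = stack.pop()
        for j in range(n):
            if H[i, j] and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

assert (H == H.T).all()   # two-way links: H is symmetric
assert is_connected(H)    # no isolated agent; direct or indirect paths exist
```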
According to the above, we define the cost function of agent a_i, importing the state consensus condition as a part of the cost, by the following expression:

J_i = (1/2) x_i^T(t_f) F_i x_i(t_f) + (1/2) ∫_0^{t_f} [ Σ_{j: h_ij=1} (x_i(t) − x_j(t))^T Q (x_i(t) − x_j(t)) + u_i^T(t) R u_i(t) ] dt, (4)

where the matrix F_i ∈ R^{n×n} is positive semi-definite, Q ∈ R^{n×n} is a symmetric positive definite matrix, and R ∈ R^{m×m} is a symmetric positive definite matrix.

Main Results
In this section, we propose N distinct control laws that minimize the cost functions (4) for the periodic multiagent system (1). To this end, we use HJB partial differential equations [40,41].
For a team of agents with dynamical equations (1) and costs (4), the corresponding HJB equation for agent a_i is

−∂V_i/∂t = min_{u_i} [ (1/2) Σ_{j: h_ij=1} (x_i − x_j)^T Q (x_i − x_j) + (1/2) u_i^T R u_i + (∂V_i/∂x_i)^T (A(t) x_i + B(t) u_i) + Σ_{j: h_ij=1} (∂V_i/∂x_j)^T (A(t) x_j + B(t) u_j) ], (5)

where V_i is a value function that must be chosen such that the partial differential equation (5) is satisfied with the boundary condition V_i at the terminal time t_f equal to the terminal cost (1/2) x_i^T(t_f) F_i x_i(t_f). Due to the connectivity assumption, each agent of the introduced model receives information from its neighboring agents and passes its own information to them through the existing communication links. As mentioned before, the control law of each agent consists of a local feedback signal as well as regional feedback signals received from its neighbors. Thus, we can decompose the control signal u_i of agent a_i into the following two parts:

u_i(t) = Γ_i(t) x_i(t) + Σ_{j: h_ij=1} Λ_ij(t) x_j(t), (6)

where Γ_i is the local feedback matrix and Λ_ij (for h_ij = 1) are the regional feedback matrices corresponding to the neighboring agents. We also use the following lemma to construct the required feedback matrices such that consensus is guaranteed.
Lemma 1 (see [42]). There exists a state feedback u = Kx such that the system (2) achieves consensus if and only if there exist a positive definite matrix X and a matrix Y satisfying the three relations in (7), where S is an orthonormal matrix in R^{n×r}, for some r, and S_⊥ is its orthonormal complement. The control law can then be reconstructed as K = YX^{-1}.
Remark 2. The entries of a periodic matrix over one period can be expressed as interval numbers, i.e., sets of real numbers between two bounds: a = [a̲, ā] = {x ∈ R | a̲ ≤ x ≤ ā} [43]. Although Lemma 1 was originally given for time-invariant systems, using the calculus of interval numbers and following the same derivations as in the proof of Lemma 1, one can obtain the same result for periodic systems.
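The interval-number calculus invoked in Remark 2 can be sketched as follows. This is an illustration only (the `Interval` class and the sample periodic entry are hypothetical, not from [43]): each time-varying entry of a periodic matrix is enclosed in a fixed interval over one period, and the basic operations on such intervals are interval addition and multiplication.

```python
import numpy as np

class Interval:
    """Closed interval [lo, hi] with the basic arithmetic used to bound
    the range of a periodic matrix entry over one period."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        # sum of intervals: endpoints add
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # product interval: extremes among the four endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __contains__(self, x):
        return self.lo <= x <= self.hi

# e.g., the periodic entry a(t) = 1 + 0.5*cos(t) lies in [0.5, 1.5]
a = Interval(0.5, 1.5)
ts = np.linspace(0, 2 * np.pi, 1000)
assert all((1 + 0.5 * np.cos(t)) in a for t in ts)
```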
The following theorem gives Γ_i and Λ_ij leading to the optimal control law for the minimization of the cost function (4) while state consensus is achieved.

Theorem 3. Consider a team of periodic agents whose dynamics are governed by (1) with controllable pair (A, B), with the entire system dynamics given by (2), and with cost functions governed by (4). For agent a_i (i = 1, . . ., N), consider the control law (6) with the local feedback matrix Γ_i(t) = −R^{-1} B^T(t) K_i(t) and the neighboring feedback matrices Λ_ij(t) = −R^{-1} B^T(t) E_ij(t), in which K_i is an n × n positive definite symmetric matrix with continuously differentiable entries computed from the matrix differential Riccati equation (10), and E_ij (for j = 1, . . ., N such that h_ij = 1) are n × n positive definite matrices with continuously differentiable entries computed from the linear matrix differential equations (11). Then the proposed control protocol (6) minimizes the cost (4) while state consensus is achieved.
Proof. Consider a continuously differentiable value function candidate of the following form:

V_i(t, x) = (1/2) x_i^T K_i(t) x_i + Σ_{j: h_ij=1} x_i^T E_ij(t) x_j + (1/2) Σ_{j,k: h_ij=h_ik=1} x_j^T G_jk(t) x_k, (12)

which should satisfy the HJB equation (5). Here, K_i, E_ij, and G_jk are n × n matrices with continuously differentiable entries, where K_i is symmetric positive definite and E_ij is positive definite. Substituting (12) into (5) yields (13). To carry out the minimization on the right-hand side, we set the derivative with respect to u_i to zero:

R u_i + B^T(t) ∂V_i/∂x_i = 0, (14)

which yields

u_i* = −R^{-1} B^T(t) K_i(t) x_i − Σ_{j: h_ij=1} R^{-1} B^T(t) E_ij(t) x_j, (15)

where the first term is the local control signal and the second term is its regional counterpart. To show that the resulting u_i* minimizes the objective function, we consider the second derivative of the right-hand side of (14) with respect to u_i, which equals R. According to our assumptions, R is a positive definite matrix, and thus the resulting u_i* in (15) is the optimal control signal.
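The stationarity argument around (14)-(15) can be checked numerically. A minimal sketch (the random R > 0, B, and the stand-in gradient p for ∂V_i/∂x_i are all hypothetical values): since R is positive definite, the input-dependent part of the Hamiltonian, (1/2) u^T R u + p^T B u, is strictly convex in u, so u* = −R^{-1} B^T p is its global minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
B = rng.standard_normal((n, m))
M = rng.standard_normal((m, m))
R = M @ M.T + m * np.eye(m)       # symmetric positive definite weight
p = rng.standard_normal(n)        # stand-in for the gradient dV/dx

# input-dependent Hamiltonian term and its stationary point
H = lambda u: 0.5 * u @ R @ u + p @ B @ u
u_star = -np.linalg.solve(R, B.T @ p)   # u* = -R^{-1} B^T p

# u* beats every random perturbation (strict convexity, R > 0)
for _ in range(100):
    du = 0.1 * rng.standard_normal(m)
    assert H(u_star) <= H(u_star + du) + 1e-12
```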
To determine K_i, E_ij, and G_jk, one can substitute (15) into (13), which, by matching the coefficients of the quadratic terms, leads to the ODEs in (17): a matrix differential Riccati equation for K_i and linear matrix differential equations for E_ij and G_jk. The last two ODEs must hold for all j = 1, . . ., N such that h_ij = 1.
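The structure of the Riccati part of these ODEs can be sketched numerically. The following is a hedged illustration (the matrices A(t), B, Q, R, F, horizon, and step size are hypothetical, not the paper's): the finite-horizon matrix differential Riccati equation −K̇ = A^T(t) K + K A(t) − K B(t) R^{-1} B^T(t) K + Q is integrated backward from the terminal condition K(t_f) = F, and the resulting K(0) is checked to be positive definite, consistent with the existence result cited from [44].

```python
import numpy as np

T = 2 * np.pi                                   # assumed horizon / period
A = lambda t: np.array([[0.0, 1.0],
                        [-1.0 - 0.5 * np.cos(t), -0.2]])
B = lambda t: np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
F = np.zeros((2, 2))                            # terminal weight (assumed)
Rinv = np.linalg.inv(R)

def riccati_dot(t, K):
    # dK/dt = -(A^T K + K A - K B R^{-1} B^T K + Q)
    return -(A(t).T @ K + K @ A(t) - K @ B(t) @ Rinv @ B(t).T @ K + Q)

# integrate backward from t_f = T to 0 with small explicit Euler steps
tf, n_steps = T, 20000
dt = tf / n_steps
K = F.copy()
for k in range(n_steps):
    t = tf - k * dt
    K = K - dt * riccati_dot(t, K)              # stepping backward in time

K = 0.5 * (K + K.T)                             # remove symmetry drift
assert np.all(np.linalg.eigvalsh(K) > 0)        # K(0) is positive definite
```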
The first equation in (17) is a matrix differential Riccati equation. According to [44], the positive definiteness of Q and F_i and the controllability of (A, B) guarantee the existence of a unique K_i satisfying the first equation. The other two differential equations in (17) are linear in E_ij and G_jk, respectively. By the theorem on existence and uniqueness of solutions to linear first-order differential equations [45], the existence of unique solutions to the two remaining equations in (17) is guaranteed by the existence of K_i. This implies that the HJB equation (5) has a solution of the form (12) that satisfies the boundary conditions, which leads to the optimal solution. Now consider the representation of the entire system as in (2); the optimal control signal (15) for the entire system takes the form (18), where A_d = diag{A, . . ., A}, K = diag{K_1, . . ., K_N}, and E = [E_ij] with the E_ij above for h_ij = 1 and E_ij = 0 for h_ij = 0. The entire closed-loop system is then Ẋ = A_cl X with A_cl = A_d − B_d R_d^{-1} B_d^T (K + E). Now let S ∈ R^{n×r} be an orthonormal basis for the nullspace of A_cl and let S_⊥ be its orthonormal complement, i.e., S_⊥^T S_⊥ = I and S_⊥^T S = 0; then, for every X > 0, the third relation in (7) of Lemma 1 is satisfied. Otherwise, there would exist X > 0 such that (19) holds; left-multiplying (19) by S_⊥^T and right-multiplying it by S_⊥ yields S_⊥^T X S_⊥ ≠ S_⊥^T X S_⊥, which is a contradiction. Due to the definition of S, the matrix [S_⊥ S] is invertible. Moreover, A_cl S = 0, or equivalently (20) holds. As K and E are positive definite matrices, K + E is also positive definite and hence invertible. Therefore, (21) follows. By choosing X = (K + E)^{-1} and Y = −R_d^{-1} B_d^T, relation (21) can be rewritten as (22). Now, right-multiplying (22) by S^T S, making use of (20), and noting that S S^T S = S, we obtain (23). Thus, the first relation in (7) is satisfied. To verify the second relation, let (24); then the entire closed-loop system can be written as (25), or equivalently (26), where S_⊥ is the orthonormal complement of S. For every X(0) = X_0, the system ζ̇ = S_⊥^T A_cl S_⊥ ζ converges, X converges to a point in span{S}, and lim_{t→∞} ζ(t) = 0.
Therefore, ζ(t) = e^{S_⊥^T A_cl S_⊥ t} ζ(0) implies that the eigenvalues of S_⊥^T A_cl S_⊥ have negative real parts. Thus, there exists a matrix P > 0 such that (27) holds. By choosing P = S_⊥^T X S_⊥ and substituting the abovementioned X and Y, the second relation in (7) is obtained and consensus is achieved.

Numerical Example
In this section, the proposed approach is applied to a multiagent team consisting of 3 identical SISO periodic subsystems. The matrices of each linear time-periodic agent are borrowed from the aeromechanic system in [46]. The neighboring matrix of this system and the matrices required for the cost functions are chosen as given below. Applying the ODEs (16) to the controllable pair (A, B) leads to the resulting gain matrices. Figure 1, which shows the states of the system, illustrates the state consensus of the team for an arbitrary initial state x_1(0) over one period T = 2. Figures 2, 3, and 4 show the differences between the states of each pair of agents. These figures verify that the state differences converge to zero over one period, which is equivalent to state consensus.
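The consensus check performed by Figures 2-4 (all pairwise state differences decaying to zero) can be reproduced in simplified form. The sketch below is an illustration only, not the paper's aeromechanic example: it uses hypothetical single-integrator agents on a 3-agent path graph with the standard consensus protocol u_i = −Σ_j h_ij (x_i − x_j), and then verifies that the maximum pairwise difference vanishes.

```python
import numpy as np

# hypothetical 3-agent path topology (agents 1-2 and 2-3 are neighbors)
H = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

x = np.array([1.0, -2.0, 4.0])        # arbitrary initial states
dt, steps = 0.01, 2000
for _ in range(steps):
    # u_i = -sum_j h_ij (x_i - x_j), i.e., u = -(D - H) x with D = diag(deg)
    u = -(x * H.sum(axis=1) - H @ x)
    x = x + dt * u                    # explicit Euler step

# all pairwise state differences have converged to (numerical) zero
assert np.max(np.abs(x[:, None] - x[None, :])) < 1e-3
```

On a connected graph this protocol drives all states to the average of the initial states, which is why the pairwise differences in the figures are a faithful consensus certificate.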

Conclusion
In this paper, the problem of cooperative control of a multiagent team with periodic dynamics was investigated, and a control protocol was introduced to minimize the agents' individual costs subject to the availability of some other agents' information. The control law was obtained by solving the HJB partial differential equation and consists of local and regional terms stated in feedback form. In addition, the states of all agents in the team converge to a common reference value, and thus state consensus is achieved. The effectiveness of the proposed method was demonstrated through an illustrative example.

Figure 1: States of the system.

Figure 2: Difference of states of agent 1 and agent 2.

Figure 3: Difference of states of agent 1 and agent 3.

Figure 4: Difference of states of agent 2 and agent 3.