Distributed Event-Triggered Control of Multiagent Systems with General Linear Dynamics

This paper discusses the event-triggered consensus problem of multiagent systems. To investigate distributed event-triggering strategies for general linear dynamics, we employ a dynamic controller that converts the general linear dynamics to the single-integrator model by a change of variables. The consensus value of the new states is a constant, so a distributed event-triggering scheme can be obtained under periodic event detection, in which agents with general linear dynamics require knowledge only of the relative states of their neighbors. Further, an event-triggered observer is proposed to address the case where only relative output information is available. Hence, consensus in both the state-based and observer-based cases is achieved by the distributed event-triggered dynamic controller. Finally, numerical simulations are provided to demonstrate the effectiveness of the theoretical results.


Introduction
Distributed coordination of multiagent systems has attracted considerable attention recently due to its extensive applications [1, 2]. The implementation of distributed algorithms is significant for multiagent systems with limited resources in real applications. In general, sampled-data control is natural for implementation on a digital platform. Traditionally, periodic sampling control performs well as long as the sampling frequency is high enough. However, this time-triggered approach is not suitable for large-scale multiagent systems, where energy, computation, and communication constraints must be explicitly addressed. In contrast, event-triggered control limits sensing, control computation, and/or communication to instants when the system needs attention. It offers clear advantages, such as a reduction of information transmissions and control updates while guaranteeing a certain level of performance.
Motivated by these advantages, event-triggered control strategies have been studied for both general dynamic systems and distributed networked dynamic systems. Reference [3] provided an introductory overview of some recent works in these areas. In [4] the author proposed an execution rule for the control task based on an ISS-Lyapunov function for the closed-loop system, which became the main idea of event-triggered control in many subsequent works. The author of [5] provided a unifying Lyapunov-based framework for the event-triggered control of nonlinear systems, modeled as hybrid systems, including the case where the Lyapunov function need not decrease monotonically. In [6], output-based event-triggered control was considered, since full state measurements are often not available for feedback in practice. Moreover, continuously monitoring the event-triggering condition defeats the original purpose of introducing event-based control for resource conservation. As a result, periodic event-triggered control was proposed in [7], where event detection occurs periodically. Model-based event-triggered control was proposed in [8], which provides a larger bound on the minimum intersampling time than a zero-order hold. Reference [9] extended this to model-based periodic event-triggered control with observers.
The issues of resource limitation are more critical in distributed networked dynamic systems [10-13]. Typically, multiagent systems equipped with resource-limited microprocessors have necessitated event-triggering strategies for actuating control updates, as in [14-26] and so on. Most of the existing works focus on distributed event-triggered control for single-integrator multiagent systems. In [14], a distributed event-triggered scheme was proposed for a single-integrator model; this scheme was then improved by [15-18] and extended to double-integrator models [19] and general linear models [20, 21]. From these results, we can hardly find distributed event-triggered consensus schemes applicable to double-integrator models by the Lyapunov method, to say nothing of general linear models. This is attributed to the fact that the consensus state of double-integrator agents is no longer a constant; that is, it is impossible to find a measurable error without global information with which to design the triggering condition via a Lyapunov function. The triggering thresholds of the measurement errors in most of the aforementioned references are state dependent, which is natural and convenient for constructing the ISS-Lyapunov function. On the other hand, [15, 20] proposed event-triggering schemes taking a constant or a time-dependent variable as the triggering threshold. These schemes with state-independent triggering thresholds cannot reflect the evolution of the states; however, they do not require monitoring the neighbors' states for event triggering and can easily be extended to the double-integrator [15] or general linear models [20].
Most of the references define the measurement error as e_i(t) = x_i(t_k^i) − x_i(t), with the exceptions of [16, 22, 23]. In [16] the author proposed a combinational measuring approach to event design, by which the control input of each agent is piecewise constant between its own successive events. And [22, 23] exploited the relative state errors for event design, known as edge events. In large-scale multiagent systems, absolute information measurements are unavailable or expensive; consequently, it is more reasonable to define a relative state-based measurement error instead of the conventional measurement error e_i(t) = x_i(t_k^i) − x_i(t). In view of the limitations mentioned above, the objective of this paper is to find an event design for general linear dynamics that uses relative information in a distributed fashion to achieve consensus. Firstly, it is notable that the consensus value of general linear models depends on e^{At}, taking the form ((1/N)1_N^T ⊗ e^{At})x(0) [27]. Accordingly, the consensus value of general linear models can be represented as a constant by a change of variables. By virtue of a dynamic controller, the new variable evolves according to the single-integrator model. Consequently, a distributed event-triggering scheme can be achieved based on the relative information of neighbors. This idea of a dynamic controller is inspired by [24] and was also introduced into event design by [25, 26]. Nevertheless, [25] considered the absolute measurement error and a state-independent triggering threshold; moreover, its event-triggering scheme needs an exact model of the real system. By applying a similar variable-substitution method, [26] solved the consensus problem of a special type of high-order linear multiagent system via event-triggered control. Secondly, as the full states are not available in practice, a distributed event-triggered observer is proposed using relative output information under a mild assumption. Meanwhile, the event triggering of the dynamic controller is independent of that of the observer. Thirdly, the detections of the triggering conditions of all agents occur at a sequence of discrete times, without requiring continuous monitoring.
The rest of this paper is organized as follows. Section 2 presents a formal problem description. In Section 3, a distributed event-triggered dynamic controller is given, which is extended to the case of output feedback by proposing a distributed event-triggered observer in Section 4. Section 5 gives simulations to validate the theoretical results. Conclusions are given in Section 6.
Notation. Throughout the paper, the sets R^n and R^{n×m} denote the n-dimensional real vectors and the n × m real matrices, respectively. The notation ‖⋅‖ refers to the Euclidean norm for vectors and the induced 2-norm for matrices. The superscript "T" stands for transposition. P > 0 means that P is symmetric and positive definite, and the symbol I_n represents the identity matrix of dimension n. For any matrix A, λ_min(A), λ_max(A), ρ(A), and σ_max(A) are the minimum eigenvalue, maximum eigenvalue, spectral radius, and maximal singular value of A, respectively.

Problem Description
This paper addresses the sampled-data consensus problems for multiagent systems taking into account event-triggered strategies.
We use a graph G = (V, E) to model the network topology of the multiagent system, where V = {v_1, v_2, ..., v_N} is the node set and E ⊆ V × V is the edge set. An edge (v_j, v_i) ∈ E denotes that there is a directed information path from agent j to agent i. G is called undirected if (v_j, v_i) ∈ E ⇔ (v_i, v_j) ∈ E. A = [a_ij] ∈ R^{N×N} is the weighted adjacency matrix associated with G, such that a_ij > 0 if (v_j, v_i) ∈ E and a_ij = 0 otherwise, with a_ii = 0 for all i = 1, ..., N. The neighbor set of the ith agent is denoted by N_i = {j | a_ij > 0, j ≠ i}. L = [l_ij] ∈ R^{N×N} is the Laplacian matrix, where l_ij = −a_ij for i ≠ j and l_ii = Σ_{j=1}^N a_ij. For undirected graphs, the Laplacian matrix is symmetric and positive semidefinite. Hence, the eigenvalues of L are real and can be ordered as λ_1 ≤ λ_2 ≤ ⋅⋅⋅ ≤ λ_N, with λ_1 = 0, and λ_2 is the smallest nonzero eigenvalue for connected graphs. Here, we impose the following assumption.

Assumption 1. Graph G is fixed, undirected, and connected.
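As a quick numerical check of the properties stated above (and not part of the paper's development), the following sketch builds the Laplacian L = D − A of a small, hypothetical undirected graph and verifies that its eigenvalues are real and nonnegative, with λ_1 = 0 and λ_2 > 0 when the graph is connected:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A for a weighted adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

# Undirected 4-agent ring (symmetric adjacency, zero diagonal) -- an
# illustrative topology, not one used in the paper's examples.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = laplacian(A)
eigs = np.sort(np.linalg.eigvalsh(L))   # L symmetric => real eigenvalues
print(eigs)  # lambda_1 = 0; lambda_2 > 0 since the ring is connected
```

For this ring, the spectrum is {0, 2, 2, 4}, so λ_2 = 2 quantifies the algebraic connectivity used implicitly in the convergence analysis.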
The dynamics of the ith agent are described by the general linear time-invariant differential equation

ẋ_i(t) = A x_i(t) + B u_i(t),  y_i(t) = C x_i(t),  (1)

where x_i ∈ R^n is the state, y_i ∈ R^q is the output, and u_i ∈ R^p is the input. A, B, and C are real constant matrices with appropriate dimensions, subject to the following assumption.
Assumption 2. (, ) is stabilizable, (, ) is detectable, and all the eigenvalues of  lie in the closed left-half plane.
We say that the multiagent system (1) solves a consensus problem asymptotically under given u_i(t), i = 1, ..., N, if for any initial states and any i, j = 1, ..., N, lim_{t→∞} ‖x_i(t) − x_j(t)‖ = 0. When the information transmission among all agents is continuous, a general consensus protocol for the ith agent has the form

u_i(t) = K Σ_{j∈N_i} a_ij (x_j(t) − x_i(t)),  (2)

where K is the feedback gain matrix. A majority of references have facilitated the implementation of the above control law by event-triggered strategies; unfortunately, these existing results are generally not applicable to general linear dynamics.
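To make the continuous protocol concrete, here is a minimal sketch for the single-integrator special case (A = 0, B = 1, K = 1), in which protocol (2) reduces to u_i = Σ_{j∈N_i} a_ij (x_j − x_i) = −(Lx)_i; the graph and initial states are hypothetical:

```python
import numpy as np

# Single-integrator special case of protocol (2): x' = -L x.
# The ring topology and initial states below are illustrative choices.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
L = np.diag(adj.sum(axis=1)) - adj

x = np.array([3.0, -1.0, 2.0, 0.0])   # initial states
avg = x.mean()                        # consensus value for undirected graphs
dt = 0.01
for _ in range(2000):                 # forward-Euler integration of x' = -L x
    x = x - dt * (L @ x)

print(x)  # all states approach the initial average
```

Because 1^T L = 0 for an undirected graph, the state average is invariant, so all agents converge to the mean of the initial states; for general linear A this limit instead evolves as e^{At}, which is exactly the obstruction the paper's dynamic controller removes.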
In this paper, we denote by q_i(t) = Σ_{j∈N_i} a_ij (x_i(t) − x_j(t)) the relative state information of agent i and its neighbors and by p_i(t) = Σ_{j∈N_i} a_ij (v_i(t) − v_j(t)) the difference between the controller states of agent i and its neighbors. Instead of using continuous event detectors, we intend to implement event-triggered control with discrete event detection. That is, the event detections of all agents occur at a sequence of times denoted by t_0, t_1, t_2, ..., which are periodic in the sense that t_{s+1} − t_s = h, s ∈ N, for some properly chosen sampling interval h > 0.
Consequently, we discretize (1) with sampling interval h and propose the discrete-time event-triggered dynamic control law (4) for each agent i in the case C = I, where v_i(t_s) ∈ R^n is the state of the dynamic controller, K ∈ R^{p×n} is the control gain matrix to be designed, f_i(q_i(t_s), p_i(t_s), e_i(t_s)) is the triggering condition to be designed, and e_i(t_s) is the measurement error at time t_s defined in (6). The main objective of this paper is to design an event-triggered scheme for the dynamic controller (4) with respect to the measurement error e_i(t_s), detected at the time instants t_s, s ∈ N, such that the multiagent system (1) achieves consensus asymptotically.
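The periodic detection mechanism can be sketched for single integrators as follows. The trigger |e_i| > σ|q_i| used here is an illustrative stand-in for the paper's condition (16), not its exact form, and the graph, h, and σ are hypothetical choices:

```python
import numpy as np

# Minimal sketch of periodic event-triggered consensus for single
# integrators.  At each detection instant t_s = s*h, agent i compares its
# measurement error e_i(t_s) = x_i(t_k^i) - x_i(t_s) against a
# state-dependent threshold and rebroadcasts only when it is exceeded.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
L = np.diag(adj.sum(axis=1)) - adj
h, sigma = 0.05, 0.1                  # detection period and threshold gain

x = np.array([3.0, -1.0, 2.0, 0.0])   # agent states
x_hat = x.copy()                      # last-broadcast states x_i(t_k^i)
events = 0
for s in range(600):                  # periodic detection instants
    e = x_hat - x                     # measurement errors e_i(t_s)
    q = L @ x                         # relative states q_i(t_s)
    trig = np.abs(e) > sigma * np.abs(q)
    x_hat[trig] = x[trig]             # broadcast only when triggered
    events += int(trig.sum())
    u = -(L @ x_hat)                  # input held constant on [t_s, t_s + h)
    x = x + h * u
print(x, events)
```

Between detections, each agent's input is held by a zero-order hold, so no continuous monitoring is required; the event counter shows how many broadcasts the threshold actually admits.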

Distributed Event-Triggered Control of Multiagent Systems
In this section, we focus on the case where state feedback is available. According to (3) and (4), we can define d_i(t_s) = x_i(t_s) − v_i(t_s) and obtain the closed-loop equation by combining (3), (4), and (6). Introducing a change of variables, the closed-loop dynamics reduce to the single-integrator form (9). Before giving the main result of this section, a lemma is presented as follows.

Lemma 3. Consider system (9) under Assumption 1. If the triggering condition (10) holds, together with the parameter condition (11), then the states of system (9) asymptotically converge to a common value.

Proof. Consider the ISS-Lyapunov function (12) for system (9), where the time t_s = sh is briefly denoted by s.
Remark 4. It is known that the consensus states of general linear dynamics take the form ((1/N)1_N^T ⊗ e^{At})x(0), which is not a common constant value except in the case of single-integrator dynamics. Thus, the usual Lyapunov function with regard to x(t) for general linear dynamics, which is exploited in the design of distributed event-triggered controllers, becomes invalid, as it does not converge to zero. Notice that the design of a distributed event-triggering scheme for general linear dynamics is converted to the single-integrator case by virtue of the dynamic controller (4). Nevertheless, whether the consensus of system (9) implies the consensus of the original system (1) depends on the stability of the state matrix A, which is explicated by the next result.

Theorem 5. Consider system (1) with the dynamic control law (4), and suppose that Assumptions 1 and 2 hold. The triggering condition of (4) is determined by (16) with (11). In addition, suppose |Im[λ_i − λ_j]| ≠ 2kπ/h, k = 1, 2, ..., whenever Re[λ_i − λ_j] = 0, for all i, j, where λ_i, λ_j denote the eigenvalues of A. Then for any initial states there exists a matrix K such that all agents asymptotically achieve the consensus state ((1/N)1_N^T ⊗ e^{A(t−t_0)})x(t_0).
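The variable substitution underlying the dynamic controller can be illustrated on the open-loop dynamics: if ẋ = Ax, then z(t) = e^{−At}x(t) is constant, so a consensus trajectory that evolves as e^{At}c in the x-coordinates is the constant c in the z-coordinates. A minimal numerical check, using a hypothetical double-integrator A:

```python
import numpy as np
from scipy.linalg import expm

# The substitution z(t) = exp(-A t) x(t) freezes the drift of A: along any
# open-loop trajectory x' = A x, the transformed state z stays constant.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # double integrator (marginally stable)
x0 = np.array([1.0, -2.0])

for t in (0.0, 0.5, 1.0, 2.0):
    x_t = expm(A * t) @ x0          # open-loop trajectory at time t
    z_t = expm(-A * t) @ x_t        # transformed state
    print(t, z_t)                   # z_t equals x0 at every t
```

This is why consensus of the transformed system (9) yields a constant consensus value, while translating it back through e^{At} requires the eigenvalue condition on A stated in Theorem 5.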

Distributed Event-Triggered Observer-Based Control of Multiagent Systems
As in many applications, full state measurements are not always available for feedback, and absolute output measurements of each agent may also be impractical. In this section, we design a distributed event-triggered observer using relative output information under the following assumption.
Assumption 7. There exists at least one agent in the graph G that knows its own absolute output information.
Remark 8. Assumption 7 is not a very strong restriction on the system; for example, in practical applications of large-scale multirobot systems, it suffices that only one robot be equipped with a high-performance GPS.
We denote by x̂_i(t) ∈ R^n the estimate of the state x_i(t) and by ŷ_i(t) = C x̂_i(t) the consequent estimate of the output y_i(t). Then, we denote by ζ_i(t) = Σ_{j∈N_i} a_ij (y_i(t) − y_j(t)) the relative output information of agent i and its neighbors and by ζ̂_i(t) = Σ_{j∈N_i} a_ij (ŷ_i(t) − ŷ_j(t)) the estimate of this relative output information. The distributed event-triggered observer is designed in the form (19), where g_i(⋅) is the triggering condition to be designed, δ_i = 1 if agent i knows its own absolute output information, and δ_i = 0 otherwise. The observer measurement error at time t_s is defined analogously to (6). Based on the above estimate of x_i(t), q_i(t_s) in (4), (6), and (16) can be replaced by q̂_i(t_s) = Σ_{j∈N_i} a_ij (x̂_i(t_s) − x̂_j(t_s)). Then, we can derive the following result.

Theorem 9. Consider system (3) with the observer (19), and suppose that Assumptions 1, 2, and 7 hold. The triggering condition of (19) is determined by (22), where t_{s+1} − t_s = h, s ∈ N, is identical to the detection period of the controller update in Theorem 5. Then there exist a matrix F and a coupling gain c such that the estimation error dynamics (23) are asymptotically stable.
Corollary 10. Under Assumptions 1, 2, and 7, consider system (1) with the dynamic control law (4) and its triggering condition (16), where q_i(t_s) in (4) and (16) is estimated by the observer (19) with its triggering condition (22). Assume that the sampling period of event detections and the parameters of the triggering conditions satisfy the conditions stated in Theorem 5. Then, for any initial states, there exist matrices K and F and a coupling gain c such that all agents achieve consensus asymptotically.
Remark 11. Notice that the events of the dynamic controller and the observer are triggered independently, although both require measuring the observer states for event detection. Theorem 9 proves that the state of each agent can be estimated by the proposed event-triggered observer. Thus, Corollary 10 is established by the separation principle. However, a drawback of the triggering condition (22) is that it is not clear how the state-independent triggering function should be designed.
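The observer (19) builds on the standard Luenberger output-injection mechanism. The following sketch shows that mechanism for a single agent that measures its own absolute output (the δ_i = 1 case), with a hypothetical discretized double integrator and gain F; it is a simplification, not the paper's distributed relative-output observer:

```python
import numpy as np

# Basic discrete-time Luenberger observer: x_hat(+) = Ad x_hat + F (y - y_hat).
# Ad, C, and F below are illustrative; F places rho(Ad - F C) = 0.8 < 1.
Ad = np.array([[1.0, 0.1],
               [0.0, 1.0]])     # discretized double integrator, h = 0.1
C = np.array([[1.0, 0.0]])      # only position is measured
F = np.array([[0.6], [0.8]])    # observer gain (eigenvalues 0.8 and 0.6)

x = np.array([2.0, -1.0])       # true state
x_hat = np.zeros(2)             # observer state
for _ in range(200):
    y = C @ x                           # measured output
    y_hat = C @ x_hat                   # predicted output
    x_hat = Ad @ x_hat + F @ (y - y_hat)
    x = Ad @ x                          # open-loop plant (u = 0)

print(np.linalg.norm(x - x_hat))  # estimation error decays geometrically
```

Since plant and observer share Ad, the estimation error obeys e(+) = (Ad − F C)e exactly, so its decay rate is set entirely by the spectral radius of Ad − F C; the event-triggered version in the paper additionally holds the injected outputs constant between observer events.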

Examples
In this section, two agent models, double-integrator dynamics and the linearized dynamics of the Caltech multivehicle wireless testbed (CMVWT) vehicles, are considered to illustrate the theoretical results.

Example 1. Consider system (1) with six agents and the corresponding system states and matrices. The fixed network topology in Figure 1 is chosen. By calculation, λ_N = 4.5616, from which the sampling period h of the event detections is obtained. By using the dynamic controller (4) and triggering condition (16), the state trajectories of the six agents over the time interval [0, 10] are shown in Figure 2. Event-triggering instants and control inputs of the six agents are shown in Figures 3 and 4, respectively. It can easily be seen that consensus is achieved with discrete-time event detection; moreover, the controller of each agent updates only at its own event-triggering instants.
Example 2. A linearized model of the Caltech multivehicle wireless testbed vehicles from [31] is considered here. The system states and matrices of the six agents are given in (37), where x_{i1}, x_{i2} are the positions of the ith agent along the x and y coordinates, respectively, and x_{i3} is the orientation of the ith agent. The initial states of the agents are specified in the simulation setup. It should be noticed that [26] is incapable of dealing with the above system matrices because they do not satisfy the rank condition required in [26]. The computer simulations illustrate the validity of the proposed dynamic controller (4) and triggering condition (16) on system (37). The state trajectories of the agents are depicted in Figure 5. Also, event-triggering instants and control inputs of the six agents are shown in Figures 6 and 7, respectively.
The simulation results of Figures 3 and 6 are summarized in Tables 1 and 2, respectively, which show that both the actuation and communication updates are evidently reduced. In addition, a minimum positive interevent interval is guaranteed by the sampling period of the event detectors. As observed from Figure 8, the convergence rate of the estimation errors is fast, so the evolution of the agent states approximates the case of Example 1. As shown in Figure 9, the mean event intervals are only slightly affected by the estimated states, while the number of observer events under the triggering condition (22) is considerable. Unfortunately, it remains an open problem to reduce the unnecessary updates triggered by the time-dependent triggering function.

Conclusion
This paper studies the distributed event-triggered control of multiagent systems under a fixed undirected network topology. In order to find distributed event-triggering schemes applicable to general linear dynamics, a dynamic controller is employed to convert the general linear dynamics to the single-integrator model by a change of variables. Therefore, a distributed event-triggering scheme using only the relative state information of neighbors is obtained to update the control law, where the triggering condition is detected periodically. Then, the result is extended to design an event-triggered observer-based controller via relative output information. These theoretical results have been verified by simulations. Further work will focus on the event-triggered consensus of general linear dynamics in multiagent systems with switching topologies and/or time delays, and on other issues in multiagent systems, such as event-triggered formation and/or containment control.

Figure 2: The evolution of states of each agent with double-integrator dynamics.

Figure 4: Control inputs of each agent with double-integrator dynamics.

Figure 5: The evolution of states of the CMVWT vehicles.

Figure 7: Control inputs of the CMVWT vehicles.

Figure 8: The evolutions of state and estimate error of each agent.

Example 3.
Consider the case of output feedback on the dynamics of Example 1 with C = [1 0]. From Theorem 9, we obtain F = [1.002 0.999]^T and c = 0.15. The parameters of the triggering condition (22) are chosen as 0.3 and 0.5. The initial states of the agents, the design of the dynamic controller with its triggering condition, and the network topology are the same as in Example 1. As expected, the simulation results demonstrate consensus.

Figure 9: Event-triggering instants of controller and observer of each agent.