In this paper the suboptimal event-triggered consensus problem of multiagent systems is investigated. Using the combinational measurement approach, each agent updates its control input only at its own event times. Thus the total number of events and the number of controller updates can be significantly reduced in practice. Then, motivated by the tradeoff between increasing the consensus rate and reducing the number of triggering events, we propose a time-average cost for the agent system and develop a suboptimal approach to determine the triggering condition. The effectiveness of the proposed strategy is illustrated by numerical examples.
1. Introduction
In practical applications of networked dynamical systems, individual subsystems such as robots, vehicles, or mobile sensors are required to work cooperatively to accomplish complex tasks. This has motivated research in recent years on the analysis and synthesis of networked dynamical systems and on distributed coordination control of multiagent systems. Typical research directions in this field include, but are not limited to, networked systems with unreliable communication links and quantized measurements [1–3], multiagent consensus [4–6], distributed tracking [7], formation control [8], connectivity preservation [9], agent flocking [10–12], rendezvous [13, 14], and coverage and deployment [15–17].
To reduce the total cost of practical systems, a possible design for implementing the communication and controller actuation schemes of a multiagent system equips each agent with a small embedded microprocessor and simple communication and actuation modules. However, these low-cost processors and modules usually have only limited energy and computational ability. As a result, event-triggered schemes for practical control systems on digital platforms have been proposed; see [18–21]. Recently, event-based distributed control strategies have also been proposed for multiagent systems in [22–25]. Under the deterministic strategy introduced in [19], the control input of each agent is updated only when the magnitude of the measurement error exceeds a certain threshold. It is also proved that the inter-event times are lower bounded by a strictly positive constant, which excludes Zeno behavior. Other works on event-triggered control of multiagent systems include decentralized control over wireless sensor networks [26], event-triggered consensus with second-order dynamics [27], and event-based leader-follower tracking [28].
The motivation for employing event-triggered control in multiagent systems is to reduce the costs of communication and controller updates, so as to meet hardware limitations and save energy. However, in the existing event-triggered control literature on multiagent systems, the performance of the event-triggered controller has not yet been studied [22–25]. In this paper, we first present a short review of the combinational measurement approach proposed in [29]. Compared with existing approaches, it allows the controller of each agent to be triggered only at that agent's own event times, which reduces the frequency of controller updates in practice. Based on this control approach, we then investigate the optimal control problem of the event-triggered system by formulating the average cost of the system. It is noted that the combinational measurement approach can be used to decouple the costs of different agents. Using this cost decoupling strategy, a suboptimal approach to determine the triggering condition is proposed. Numerical examples show that the proposed approach can reduce the total cost of the agent system during consensus tasks.
The contributions of this work are as follows. First, we propose a formulation of the time-average cost for multiagent systems with event-based controllers. This cost describes the tradeoff between increasing the consensus rate and reducing resource consumption. To the best of our knowledge, there have been very few works on this issue for event-triggered multiagent systems so far. Second, we decouple the costs of different agents and find an upper bound on the cost of each agent. By this approach, we are able to propose a distributed suboptimal controller for the multiagent consensus problem.
The rest of this paper is organized as follows. Section 2 presents the event-triggered controller design of the multiagent system and the results of its convergence. In Section 3, the average cost of the system is formulated and the suboptimal triggering condition is obtained. In Section 4, simulations are provided to illustrate the proposed strategies. Finally the paper is concluded in Section 5.
2. Event-Triggered Consensus
In this section we provide a review of the event-triggered control with combinational measurements proposed in [29]. Consider a multiagent system with N agents, labeled 1, 2, …, N, which are required to achieve consensus. The state of agent i at time t is x_i(t) ∈ ℝ^n, i = 1, …, N. The dynamics of agent i are

(1) \( \dot{x}_i(t) = u_i(t) \).
The communication links among agents are considered to be undirected and the communication topology of the system is represented by an undirected graph G=(V,E), where V is the vertex set and E is the edge set. Agent j is said to be a neighbor of agent i if and only if (j,i)∈E (or (i,j)∈E). All the neighbors of agent i constitute the neighbor set Ni.
The event-triggering mechanism is introduced into the agent control. The control input of agent i remains fixed until the next triggering event occurs. Assume the triggering time sequence of agent i is t_0^i, t_1^i, …, t_k^i, …, where t_0^i = 0 is always a default triggering time. In the agent group, each agent can obtain the state information of its communication neighbors. When t ∈ [t_k^i, t_{k+1}^i), the control input of agent i depends on the states of itself and its neighbors at time t_k^i.
To develop decentralized control, agent i's local coordinate system is introduced, with origin at x_i(t). The real-time average state of agent i and all its neighbors in this local coordinate system is

(2) \( q_i(t) = \frac{1}{n_i+1}\sum_{j\in N_i}\bigl(x_j(t)-x_i(t)\bigr) \).
At each triggering time, agent i measures this average state and takes the measurement as its target; see Figure 1 for an illustration in a 2D plane. This target state remains fixed until the next triggering time. Thus the target state of agent i for t ∈ [t_k^i, t_{k+1}^i) is

(3) \( q_i(t_k^i) = \frac{1}{n_i+1}\sum_{j\in N_i}\bigl(x_j(t_k^i)-x_i(t_k^i)\bigr) \).
For t ∈ [t_k^i, t_{k+1}^i), the control law for agent i is proposed in [29] as follows:

(4) \( u_i(t) = \xi_i q_i(t_k^i) = \frac{\xi_i}{n_i+1}\sum_{j\in N_i}\bigl(x_j(t_k^i)-x_i(t_k^i)\bigr) \),

where ξ_i is a positive real number to be determined. The measurement error of agent i is

(5) \( e_i(t) = q_i(t_k^i) - q_i(t) \).
Since t_0^i = 0 is a triggering time instant, one has e_i(0) = 0. In the sequel we show how to use this error to define the triggering event that guarantees consensus of the agent group.
Target state and measurement error of agent i.
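As an illustration, the combinational measurement (2), the frozen target (3), the control input (4), and the measurement error (5) can be sketched numerically. The three-agent path graph, the planar states, and the gain value below are hypothetical, chosen only for this example:

```python
import numpy as np

def q(i, x, nbrs):
    # Combinational measurement (2): average offset of agent i's
    # neighbors from agent i, expressed in agent i's local frame.
    return sum(x[j] - x[i] for j in nbrs[i]) / (len(nbrs[i]) + 1)

# Hypothetical 3-agent path graph 0-1-2 with planar states.
nbrs = {0: [1], 1: [0, 2], 2: [1]}
x = {0: np.array([0.0, 0.0]),
     1: np.array([2.0, 0.0]),
     2: np.array([2.0, 2.0])}

xi_gain = 0.4                 # an assumed positive feedback gain
target = q(1, x, nbrs)        # target (3), frozen at the event time t_k^1
u1 = xi_gain * target         # piecewise-constant input (4)
e1 = target - q(1, x, nbrs)   # error (5); zero at the event itself
```

At the event instant the error is exactly zero; between events the states move while `target` stays fixed, so the error grows until the next trigger.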
We denote by x(t) = (x_1^T(t), …, x_N^T(t))^T the augmented state of the system and let e(t) = (e_1^T(t), …, e_N^T(t))^T. From (4) and (5) one has

(6) \( \dot{x}_i(t) = -\frac{\xi_i}{n_i+1}\sum_{j\in N_i}\bigl(x_i(t)-x_j(t)\bigr) + \xi_i e_i(t) \).
Let L be the Laplacian matrix of the underlying graph G. Also let Ξ = diag{ξ_1, …, ξ_N} and M = diag{n_1+1, …, n_N+1}. Then the compact form of the system equation is

(7) \( \dot{x}(t) = -((\Lambda L)\otimes I_n)\,x(t) + (\Xi\otimes I_n)\,e(t) \),
where Λ = M^{-1}Ξ. Since the communication is bidirectional, graph G is undirected and hence L is symmetric [5]. Consider the candidate Lyapunov function

(8) \( V(t) = \tfrac{1}{2}\,x^T(t)(L\otimes I_n)\,x(t) \).
One has

(9) \( \dot{V}(t) = x^T(t)(L\otimes I_n)\dot{x}(t) = x^T(t)(L\otimes I_n)\bigl(-((\Lambda L)\otimes I_n)x(t) + (\Xi\otimes I_n)e(t)\bigr) \).
Let z_i(t) = ∑_{j∈N_i} (x_i(t) − x_j(t)) and

(10) \( z(t) = (z_1^T(t), \ldots, z_N^T(t))^T \).

Then one has

(11) \( z(t) = (L\otimes I_n)\,x(t) \).
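Identity (11), together with the relation z_i(t) = −(n_i+1) q_i(t) used later in (16), can be checked numerically on a small hypothetical example (a 3-agent path graph with arbitrary planar states):

```python
import numpy as np

# Path graph 0-1-2: adjacency A and Laplacian L = D - A.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A

n = 2                                   # planar agents
x = np.array([0., 0., 2., 0., 2., 2.])  # stacked x = (x_0^T, x_1^T, x_2^T)^T

z = np.kron(L, np.eye(n)) @ x           # relative states via (11)

# Direct computation of z_1 = sum over j in N_1 of (x_1 - x_j).
x0, x1, x2 = x[0:2], x[2:4], x[4:6]
z1 = (x1 - x0) + (x1 - x2)

# Agent 1 has n_1 = 2 neighbors, so q_1 = -z_1 / (n_1 + 1) as in (16).
q1 = -z1 / 3
```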
From (9),
(12) \( \dot{V}(t) = -z^T(t)(\Lambda\otimes I_n)z(t) + z^T(t)(\Xi\otimes I_n)e(t) = -\sum_{i=1}^{N}\frac{\xi_i}{n_i+1}\|z_i(t)\|^2 + \sum_{i=1}^{N}\xi_i z_i^T(t)e_i(t) \).
Note that for any a > 0 and any x, y ∈ ℝ^n, one always has \( |x^T y| \le \frac{a}{2}\|x\|^2 + \frac{1}{2a}\|y\|^2 \). Thus

(13) \( \dot{V}(t) \le -\sum_{i=1}^{N}\frac{\xi_i}{n_i+1}\|z_i(t)\|^2 + \sum_{i=1}^{N}\frac{a_i\xi_i}{2}\|z_i(t)\|^2 + \sum_{i=1}^{N}\frac{\xi_i}{2a_i}\|e_i(t)\|^2 \).
Enforcing

(14) \( \|e_i(t)\| \le \eta_i\|z_i(t)\| \)

yields

(15) \( \dot{V}(t) \le -\sum_{i=1}^{N}\Bigl(\frac{\xi_i}{n_i+1} - \frac{a_i\xi_i}{2} - \frac{\xi_i\eta_i^2}{2a_i}\Bigr)\|z_i(t)\|^2 \).
Thus \( \dot{V}(t) \le 0 \) if \( \frac{1}{n_i+1} - \frac{a_i}{2} - \frac{\eta_i^2}{2a_i} > 0 \). From this, one has \( a_i < \frac{2}{n_i+1} \) and \( \eta_i < \sqrt{\frac{2a_i}{n_i+1} - a_i^2} \). Notice that, when \( a_i = \frac{1}{n_i+1} \), \( \sqrt{\frac{2a_i}{n_i+1} - a_i^2} \) reaches its maximum \( \frac{1}{n_i+1} \). Also notice that

(16) \( z_i(t) = -(n_i+1)\,q_i(t) \).
Thus, setting β_i = (n_i+1)η_i, (14) can be rewritten as

(17) \( \|e_i(t)\| \le \beta_i\|q_i(t)\|, \quad 0 < \beta_i < 1 \).
Then (15) becomes

(18) \( \dot{V}(t) \le -\sum_{i=1}^{N}\frac{\xi_i(1-\beta_i^2)}{2(n_i+1)}\|z_i(t)\|^2 \).
The triggering function for agent i is

(19) \( g_i(e_i(t), q_i(t)) = \|e_i(t)\| - \beta_i\|q_i(t)\|, \quad 0 < \beta_i < 1 \),

and an event of agent i is triggered when

(20) \( g_i(e_i(t), q_i(t)) = 0 \).
Notice that when an event is triggered, the control input changes and the error ei(t) is automatically reset to 0.
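A minimal sketch of the rule (19)-(20) for a single agent, with hypothetical numbers: between events the frozen target drifts away from the true neighborhood average, and when the error norm reaches the threshold the target is refreshed and the error resets to zero.

```python
import numpy as np

beta = 0.5                       # an assumed threshold in (0, 1)

q_true = np.array([1.0, 0.0])    # current combinational measurement q_i(t)
target = np.array([1.5, 0.4])    # q_i(t_k^i), frozen at the last event

e = target - q_true              # measurement error (5)
# Triggering function (19); an event fires when it reaches zero from below.
fired = np.linalg.norm(e) - beta * np.linalg.norm(q_true) >= 0
if fired:
    target = q_true.copy()       # controller update at the new event time
    e = target - q_true          # error automatically reset to 0
```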
It is noted that the triggering mechanism is designed so that the time derivative of the Lyapunov function is enforced to be nonpositive by (17). However, this alone does not guarantee convergence of the closed-loop system. In a hybrid system, the inter-event times may become shorter and shorter as k increases, so that infinitely many events are triggered in a finite time interval. Such an execution of a hybrid system is called Zeno; see [30] and the references therein for more details. Generally speaking, in controller design one expects the agents to be triggered regularly, with no Zeno behavior. A comprehensive triggering behavior analysis has been provided in [29], which yields the following lemma.
Lemma 1.
Consider an agent i with a nonempty neighbor set N_i. Its kinematics are given in (1) and its controller is the event-triggered control (4), with (20) being the triggering condition. If t_k^i exists and q_i(t_k^i) ≠ 0, agent i exhibits only regular triggering behavior for all t > t_k^i.
Proof.
The proof of this lemma follows directly from Lemmas 2, 3, and 4 in [29].
We are now in a position to present the consensus result for the proposed event-triggered controller.
Theorem 2 (see [29]).
Consider a group of N agents moving in the working space ℝ^n. The dynamics of each agent are given by (1). Assume that the communication graph G is fixed and connected. If no agent is located at the average state of its neighbors, the group will achieve consensus asymptotically under the event-triggered control law (4) with the triggering condition (20).
Remark 3.
We note that the agent group may not achieve consensus if more than one agent is located at the average state of all its neighbors. One strategy for solving this problem is to use a subset of N_i, for example, N_i^s ⊂ N_i, to compute q_i(t_{k+1}^i) and t_{k+1}^i. Then, at t_{k+1}^i, when agent i is no longer at the average state of all its neighbors, the controller is switched back to using N_i.
3. Suboptimal Triggering
In a practical multiagent system, fast achievement of coordination tasks with the least resource consumption is often expected. For the consensus problem discussed in this work, one may expect the highest consensus rate with the least number of events and controller updates. However, there is a tradeoff between these two factors. On the one hand, to achieve fast consensus and precise control, one may require the norm of the measurement error ‖e_i(t)‖ to be as small as possible, which calls for a high frequency of event triggering and controller updating. On the other hand, to save energy and communication bandwidth, one may reduce the triggering frequency and thus the number of controller updates, which contradicts the consensus rate expectation above. The goal of this section is to balance the tradeoff between increasing the consensus rate and reducing events and controller updates.
It is noted that, if ‖e_i(t)‖ = 0, agent i takes the real-time neighborhood center q_i(t) as its target point, and may thus achieve consensus faster than when using the frozen target q_i(t_k^i). Hence ‖e_i(t)‖ can be regarded as the measurement cost of agent i: the smaller this cost, the faster the consensus. However, directly using ‖e_i(t)‖ as the measurement cost is not a good choice since, when all the agents are very close to each other, ‖e_i(t)‖ goes to 0 and no longer reflects the consensus rate. To avoid this problem, we let the measurement cost of agent i be
(21) \( s_i(t) = \frac{\|e_i(t)\|}{\|q_i(t)\|} \).
This definition represents the relative deviation of the measured neighborhood center from its real-time value. It is preferable to ‖e_i(t)‖ itself because s_i(t) remains well defined and informative about the consensus rate even as ‖e_i(t)‖ tends to 0 when time goes to infinity. The lower this cost, the faster the consensus.
To formulate the tradeoff mentioned above, we also need a way to count the triggering events of all the agents. Let the total triggering cost of agent i be its total number of triggering events; then during the time interval [t_k^i, t_{k+1}^i) the triggering cost of agent i is 1. Thus we define the time-average triggering cost as
(22) \( \sigma_i(t) = \frac{2(t - t_k^i)}{(t_{k+1}^i - t_k^i)^2}, \quad t \in [t_k^i, t_{k+1}^i) \).
This definition implies that the total cost in a single inter-event interval [t_k^i, t_{k+1}^i) is always 1; that is,

(23) \( \int_{t_k^i}^{t_{k+1}^i} \sigma_i(t)\,dt = 1 \).
The lower the cost σ_i(t), the smaller the amount of event triggering and controller updating, and thus the lower the resource consumption.
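The normalization (23) follows by integrating the ramp in (22) over one inter-event interval; a quick midpoint-rule check, with hypothetical event times, confirms it:

```python
# Midpoint-rule check that sigma_i integrates to 1 over one
# inter-event interval, for hypothetical event times t_k, t_{k+1}.
tk, tk1 = 1.0, 3.5
sigma = lambda t: 2.0 * (t - tk) / (tk1 - tk) ** 2   # the ramp in (22)

steps = 10_000
dt = (tk1 - tk) / steps
total = sum(sigma(tk + (s + 0.5) * dt) * dt for s in range(steps))
```

Since the integrand is linear in t, the midpoint rule is exact up to floating-point roundoff, so `total` equals 1 to machine precision.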
We then define the comprehensive time-average cost of agent i, that is, the per-period cost, as

(24) \( s_i(t) + \lambda_i\sigma_i(t) \),

where λ_i is the cost coupling strength. The objective is to find a balance between the estimation error and the triggering frequency. Namely, we aim to find a set of optimal triggering policies minimizing the average cost of the agent group, defined by

(25) \( J(g) = \lim_{T\to\infty}\frac{1}{T}\int_0^T \sum_{i=1}^{N}\bigl(s_i(t) + \lambda_i\sigma_i(t)\bigr)\,dt \),
where g = {g_1, …, g_N} is the collection of triggering functions. Since all the agents are coupled by the event-triggered control, the whole group behaves as a complex hybrid system, so minimizing J(g) by designing g is rather challenging. However, based on the behavior analysis presented in [29], one can find a suboptimal solution to this problem.
Denote J(g) = ∑_{i=1}^{N} J_i(g), where

(26) \( J_i(g) = \lim_{T\to\infty}\frac{1}{T}\int_0^T \bigl(s_i(t) + \lambda_i\sigma_i(t)\bigr)\,dt \)

is the average cost of agent i. Let t_p and t_q be two event instants of agent i with t_q > t_p, and let

(27) \( T = t_q - t_p \).

We consider this cost over a sufficiently long time period. From (25) one notices that the finite-time form of agent i's average cost over the interval [t_p, t_q] is

(28) \( J_i[t_p, t_q] = \frac{1}{T}\int_{t_p}^{t_q}\bigl(s_i(t) + \lambda_i\sigma_i(t)\bigr)\,dt = \frac{1}{T}\sum_{k=p+1}^{q}\int_{t_{k-1}^i}^{t_k^i} s_i(t)\,dt + \frac{\lambda_i}{T}\sum_{k=p+1}^{q}\int_{t_{k-1}^i}^{t_k^i}\sigma_i(t)\,dt \).
From (17) one has s_i(t) ≤ β_i, with equality only at t = t_k^i. Then from (23) one has

(29) \( J_i[t_p, t_q] < \frac{\beta_i}{T}\sum_{k=p+1}^{q}\bigl(t_k^i - t_{k-1}^i\bigr) + \frac{\lambda_i}{T}\sum_{k=p+1}^{q} 1 = \beta_i + \frac{\lambda_i M}{T} \),

with M being the number of inter-event intervals in [t_p, t_q]. Thus, when t_q → ∞, the average cost of agent i can be upper bounded by

(30) \( J_i[t_p, t_q] < \beta_i + \frac{\lambda_i}{\tau_i} \),
where τ_i = T/M is the average length of agent i's triggering intervals. It is difficult to estimate τ_i directly. However, one may consider the average cost of agent i on [t_p, t_q] when t_p is sufficiently large. In this case, τ_i is lower bounded by the limit of τ_k^i derived in [29]:

(31) \( \lim_{k\to\infty}\tau_k^i = \frac{\beta_i}{2\zeta_i(1+\beta_i)} \),
where ζ_i = max{ξ_l | l ∈ N_i ∪ {i}}. Thus one has

(32) \( J_i[t_p, t_q] < \beta_i + \frac{\lambda_i}{\beta_i/(2\zeta_i(1+\beta_i))} = 2\lambda_i\zeta_i + \beta_i + \frac{2\lambda_i\zeta_i}{\beta_i} \).

The right-hand side reaches its minimum if and only if \( \beta_i = \sqrt{2\lambda_i\zeta_i} \). If all agents take the same ξ_i, this condition becomes

(33) \( \beta_i = \sqrt{2\lambda_i\xi_i} \).
Thus a suboptimal triggering condition is given by

(34) \( g_i(e_i(t), q_i(t)) = \|e_i(t)\| - \sqrt{2\lambda_i\xi_i}\,\|q_i(t)\| = 0 \),

provided that 2λ_iξ_i < 1.
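The choice (33) can be sanity-checked by minimizing the bound in (32) over a grid of β values; the weights below are taken from the last column of Table 1, and all agents are assumed to share the same gain so that ζ_i = ξ_i:

```python
import math

lam, xi_gain = 0.1, 0.8                 # (lambda_i, xi_i) as in Table 1
zeta = xi_gain                          # identical gains: zeta_i = xi_i

def bound(b):
    # Right-hand side of (32) as a function of beta_i.
    return 2 * lam * zeta + b + 2 * lam * zeta / b

b_star = math.sqrt(2 * lam * zeta)      # suboptimal choice (33)
b_grid = min((k / 1000 for k in range(1, 1000)), key=bound)  # beta in (0, 1)
```

Here b_star = 0.4 and the grid search lands on the same point, confirming that (33) minimizes the upper bound; the admissibility condition 2λ_iξ_i < 1 also holds for these weights.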
Remark 4.
Equation (33) shows the relationship between the triggering execution and the relative importance of the measurement and triggering costs. For example, a larger λ_i means that reducing the triggering cost is more important; one then obtains a larger β_i, which lengthens the time between consecutive triggering executions and reduces the triggering cost.
4. Simulations
In this section simulations are provided to illustrate the proposed event-triggered control strategy. Consider a group of N = 6 agents in the working space ℝ³. Each agent has the dynamics (1) and the controller (4). The parameters in the control input (4) and the triggering function (19) are ξ_i = 0.4 and β_i = 0.9 for all agents. The randomly selected initial states of the agents are as follows:
(35)
x_1(0) = (0.1725, 0.4469, 0.8357)^T,
x_2(0) = (6.1630, −1.8492, 4.1066)^T,
x_3(0) = (−4.0025, 2.9335, 5.9033)^T,
x_4(0) = (−3.7577, 4.8591, 9.9656)^T,
x_5(0) = (−1.9829, 3.3818, 8.4150)^T,
x_6(0) = (−1.8028, 3.6658, 9.5204)^T.
The communication graph G is shown in the first subfigure of Figure 2. In the simulation, the group is considered to have achieved consensus when the sum of the distances from the agents to the group average is smaller than Δ = 0.01, that is, when ∑_{i=1}^{N} ‖x_i(t) − (1/N)∑_{i=1}^{N} x_i(t)‖ ≤ Δ.
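The setup above can be reproduced with a simple Euler discretization. The communication graph is given only pictorially in Figure 2, so a connected ring topology is assumed here; the resulting trajectories and consensus time therefore differ from the paper's, but the qualitative behavior (piecewise-constant inputs, shrinking disagreement, per-agent events) is the same.

```python
import numpy as np

N, xi_gain, beta, Delta, dt = 6, 0.4, 0.9, 0.01, 0.01
nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # assumed ring graph

x = np.array([[ 0.1725,  0.4469, 0.8357],
              [ 6.1630, -1.8492, 4.1066],
              [-4.0025,  2.9335, 5.9033],
              [-3.7577,  4.8591, 9.9656],
              [-1.9829,  3.3818, 8.4150],
              [-1.8028,  3.6658, 9.5204]])

def q(i, x):
    # Combinational measurement (2).
    return sum(x[j] - x[i] for j in nbrs[i]) / (len(nbrs[i]) + 1)

def spread(x):
    # Sum of distances to the group average (the consensus criterion).
    return np.sum(np.linalg.norm(x - x.mean(axis=0), axis=1))

target = np.array([q(i, x) for i in range(N)])  # t = 0 is an event for all
s0, t, events = spread(x), 0.0, [0] * N
while spread(x) > Delta and t < 60.0:
    x = x + dt * xi_gain * target               # inputs (4), held between events
    t += dt
    for i in range(N):
        qi = q(i, x)
        if np.linalg.norm(target[i] - qi) >= beta * np.linalg.norm(qi):
            target[i] = qi                      # event (20): refresh the target
            events[i] += 1
```

With the assumed ring the disagreement shrinks monotonically and each agent triggers at its own event times only, mirroring Figures 2-5 qualitatively.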
Communication topology and trajectories of agents.
The trajectories of the agents in the simulation are shown in Figure 2. The agents are represented by small circles and their trajectories by solid lines. Notice that the agent group eventually achieves consensus at t = 54.63 s under the proposed control law. One can also note from the trajectories that the control inputs of all the agents, shown in Figure 3, are fixed during each inter-event interval.
Control inputs of agents.
Figure 4 shows the evolution of the error norms of all the agents. From (20) one concludes that these error norms stay below the threshold β_i‖q_i(t)‖. The error grows within each triggering interval and is automatically reset to 0 when an event occurs.
Measurement error norms of agents.
The event time instants of all the agents are shown in Figure 5. From this figure one can observe that all agents are triggered regularly and the inter-event intervals have strictly positive lengths, which implies there is no Zeno behavior in the system evolution. Moreover, the input of each agent is updated only when its own event occurs.
Event instants and time intervals of agents.
To verify the proposed suboptimal triggering approach, a set of similar simulations is carried out with different parameter selections. The initial conditions of the agents are the same as in Figure 2. The results are listed in Table 1. The table shows that, under the proposed strategy, the cost is clearly reduced compared with other seemingly appropriate choices of β_i. The simulations also show that, in some cases, the suboptimal choices are very close to the optimal ones.
Average cost J(g) with different feedback gains and cost coupling strengths; Δ=0.01.
β_i \ (λ_i, ξ_i)    (0.7, 0.2)    (0.6, 0.6)    (0.1, 0.8)
0.15                 2.6352        6.9189        1.5428
√(2λ_iξ_i)           2.0066        4.2239        1.4984
0.90                 3.0605        4.3262        2.7932
5. Conclusions
In this paper, the suboptimal event-triggered consensus problem for multiagent systems is considered. The event design is based on the measurement error determined by a combined state of the neighbors. As a result, each agent updates its controller only at its own event times, which reduces the amount of interagent communication and the number of controller updates in practice. We have then proposed a novel definition of the time-average cost for the agent system and developed a suboptimal triggering approach to determine the event condition. It has been shown that the proposed approach is effective in reducing the average cost of the system. Future work includes extending the proposed approach to multiagent systems with directed communication networks and developing better optimization approaches to reduce the system cost.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The work described in this paper was partially supported by grants from the National Natural Science Foundation of China (no. 61203027), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20123401120011), and the Anhui Provincial Natural Science Foundation (no. 1208085QF108).
References

[1] J. Qiu, G. Feng, and H. Gao, "Fuzzy-model-based piecewise H-infinity static output feedback controller design for networked nonlinear systems."
[2] J. Qiu, G. Feng, and H. Gao, "Nonsynchronized-state estimation of multichannel networked nonlinear systems with multiple packet dropouts via T-S fuzzy-affine dynamic models."
[3] J. Qiu, G. Feng, and H. Gao, "Observer-based piecewise affine output feedback controller synthesis of continuous-time T-S fuzzy affine dynamic systems using quantized measurements."
[4] W. Ren and R. W. Beard, "Consensus seeking in multiagent systems under dynamically changing interaction topologies."
[5] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems."
[6] L. Xiao, X. Liao, and H. Wang, "Cluster consensus on discrete-time multi-agent networks."
[7] J. Hu and G. Feng, "Distributed tracking control of leader-follower multi-agent systems under noisy measurement."
[8] J. A. Marshall and D. Tsai, "Periodic formations of multivehicle systems."
[9] Y. Fan, L. Liu, G. Feng, C. Song, and Y. Wang, "Virtual neighbor based connectivity preserving of multi-agent systems with bounded control inputs in the presence of unreliable communication links."
[10] M. M. Zavlanos, H. G. Tanner, A. Jadbabaie, and G. J. Pappas, "Hybrid control for connectivity preserving flocking."
[11] Z. Chen and H.-T. Zhang, "No-beacon collective circular motion of jointly connected multi-agents."
[12] Z. Chen and H.-T. Zhang, "Analysis of joint connectivity condition for multi-agents with boundary constraints."
[13] J. Cortés, S. Martínez, and F. Bullo, "Robust rendezvous for mobile autonomous agents via proximity graphs in arbitrary dimensions."
[14] Y. Fan, G. Feng, Y. Wang, and J. Qiu, "A novel approach to coordination of multiple robots with communication failures via proximity graph."
[15] C. Song, G. Feng, Y. Fan, and Y. Wang, "Decentralized adaptive awareness coverage control for multi-agent networks."
[16] C. Nowzari and J. Cortés, "Self-triggered coordination of robotic networks for optimal deployment."
[17] C. Song, L. Liu, G. Feng, Y. Wang, and Q. Gao, "Persistent awareness coverage control for mobile sensor networks."
[18] K. J. Åström and B. M. Bernhardsson, "Comparison of Riemann and Lebesgue sampling for first order stochastic systems," in Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, Nev, USA, December 2002, pp. 2011–2016.
[19] P. Tabuada, "Event-triggered real-time scheduling of stabilizing control tasks."
[20] M. Mazo Jr., A. Anta, and P. Tabuada, "An ISS self-triggered implementation of linear controllers."
[21] X. Wang and M. D. Lemmon, "Event-triggering in distributed networked control systems."
[22] D. V. Dimarogonas and K. H. Johansson, "Event-triggered cooperative control," in Proceedings of the European Control Conference, Budapest, Hungary, August 2009, pp. 3015–3020.
[23] D. V. Dimarogonas and K. H. Johansson, "Event-triggered control for multi-agent systems," in Proceedings of the 48th IEEE Conference on Decision and Control held jointly with the 28th Chinese Control Conference (CDC/CCC '09), Shanghai, China, December 2009, pp. 7131–7136. doi: 10.1109/CDC.2009.5399776.
[24] D. V. Dimarogonas, E. Frazzoli, and K. H. Johansson, "Distributed event-triggered control for multi-agent systems."
[25] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson, "Event-based broadcasting for multi-agent average consensus."
[26] M. Mazo Jr. and P. Tabuada, "Decentralized event-triggered control over wireless sensor/actuator networks."
[27] J. Hu, Y. Zhou, and Y. Lin, "Second-order multiagent systems with event-driven consensus control."
[28] J. Hu, G. Chen, and H.-X. Li, "Distributed event-triggered tracking control of leader-follower multi-agent systems with communication delays."
[29] Y. Fan, G. Feng, Y. Wang, and C. Song, "Distributed event-triggered control of multi-agent systems with combinational measurements."
[30] J. Zhang, K. H. Johansson, J. Lygeros, and S. Sastry, "Zeno hybrid systems."