The distributed discrete-time coordinated tracking control problem is investigated for multiagent systems in the ideal case where a group of agents with a fixed graph is combined with a leader-following group, with the aim of extending the function of the original system in certain scenarios. The modified union switching topology is derived by applying a set of Markov chains to the edges of the graph through a novel mapping. The question of how to guarantee that all agents track the leader is solved via a PD-like consensus algorithm. An admissible sampling period and a feasible control gain are calculated by means of trigonometric function theory, and a mean-square bound on the tracking errors is finally provided. A simulation example is presented to demonstrate the validity of the theoretical results.
1. Introduction
Inspired by potential applications in engineering, such as networked autonomous vehicles, sensor networks [1], and formation control [2], distributed coordination of multiagent systems has attracted much attention from researchers [3, 4]. Recently, consensus problems have been studied extensively [5, 6]. Many methods have been developed to deal with the consensus problem, such as linear system theory [7], impulsive control [8], and convex optimization [9]. Due to the complexity of the network, the control of multiagent systems is challenging. Therefore, how to design a consensus control protocol for multiagent systems has become a significant research focus.
In many cases the connectivity of the graph is not fixed, which may degrade system performance and even cause instability. Therefore, some existing results concentrate on the case where multiagent systems operate under a dynamic topology [10, 11]. Among those works, the main results and progress on distributed coordination control were surveyed, and systems under dynamic topologies were addressed by various methods [12, 13]. For example, the distributed consensus problem was studied for discrete-time multiagent systems with switching graphs, where each agent's velocity was constrained to lie in a nonconvex set [14]. Moreover, two consensus problems were solved under a switching topology that was only assumed to be uniformly connected [15]. The aforementioned works focused not only on first-order and second-order systems [16] but also on Euler-Lagrange models [17, 18], and some even took time delays and noises into account [19, 20].
Due to random link failures, demand-driven variations, and sudden environmental disturbances, many dynamical systems can be modeled as Markovian switching systems, an area that is developing rapidly [21–23]. The leader-following consensus problem was studied for sampled-data multiagent systems under Markovian switching topologies [24], and a more interesting case with multiple dynamic leaders was considered in [25]. In [26], under a switching topology governed by Markov chains, the consensus-seeking problem was solved through a guaranteed cost control method; it was unnecessary for the Markov chain to be ergodic, since each topology had a spanning tree. In addition, it is often difficult to obtain all the elements of the transition rate matrix, and some of them are not necessary for guaranteeing system stability. A Markovian switching model with partially unknown transition rates was considered in [27], where no knowledge of the unknown elements was needed in the design of the finite-time synchronization controller.
However, many practical systems must be modeled dynamically, for example, when broken agents in a group are replaced or when the function of an existing system is extended. A time-varying reference can be tracked reliably in the original system, but it is unclear whether the union system can still achieve consensus after some followers with a fixed graph are combined with it. Besides, to the best of our knowledge, Markov chains are usually used to model the modes of the topologies, in which case all of the subgraphs and the transition rate matrix must be fully or partly known. In contrast, in this paper Markov chains are applied to the edges of the graph, so that the union system can be described by introducing a novel mapping; together with the distributed tracking problem for the union system, this is a valuable topic to be researched.
The main purpose of this paper is to establish Markovian switching topologies for the union system with two subgraphs. Through a novel mapping, the Markovian switching topologies are governed by a set of Markov chains assigned to the edges of the graph. The distributed coordinated tracking control problem is then solved via a PD-like consensus algorithm adopted from [16]. Different from [16], a sufficient condition for system stability is obtained based on trigonometric function theory. As shown later, the tracking errors are ultimately bounded, with a bound partly determined by the bounded changing rate of the reference and the number of agents. A simulation example demonstrates the effectiveness of the strategy.
The rest of this paper is organized as follows. In Section 2, graph theory based on a novel Markov process is given and the PD-like consensus algorithm is introduced. In Section 3, the stability analysis and main results are provided. A simulation example is presented in Section 4, and the paper is concluded in Section 5.
2. Preliminaries

2.1. Graph Theory
Define a directed leader-following graph $\mathcal{G} \triangleq (\mathcal{V}, \varepsilon)$ with one leader labeled as node 0 and $n$ followers. $\mathcal{V} = \{v_0, \ldots, v_n\}$ is a nonempty finite set of nodes and $\varepsilon \subseteq \mathcal{V} \times \mathcal{V}$ is a set of edges. For an edge $(v_i, v_j) \in \varepsilon$, if node $v_j$ can obtain information from $v_i$, then $v_i$ is a neighbor of node $v_j$. A directed path is a sequence of edges of the form $(v_1, v_2), (v_2, v_3), \ldots$, with all nodes in $\mathcal{V}$. The adjacency matrix $A = [a_{ij}] \in \mathbb{R}^{(n+1)\times(n+1)}$ is associated with $\mathcal{G}$, where $a_{ij} > 0$ if agent $v_i$ can obtain information from agent $v_j$ and $a_{ij} = 0$ otherwise. Assume $a_{ii} = 0$ and that the leader does not receive information from the followers. Thus, the adjacency matrix of $\mathcal{G}$ is denoted by
$$A = \begin{bmatrix} 0 & 0_{1\times n} \\ A_{01} & A_1 \end{bmatrix}, \tag{1}$$
where $A_1 = [a_{ij}] \in \mathbb{R}^{n\times n}$ and $A_{01} = [a_{i0}] \in \mathbb{R}^{n\times 1}$. Let $\mathcal{G}_2 \triangleq (\mathcal{V}_2, \varepsilon_2)$ be a fixed, directed graph with $m$ agents. The adjacency matrix of $\mathcal{G}_2$ is given by $A_2 = [a_{ij}] \in \mathbb{R}^{m\times m}$. The union graph is denoted by $\mathcal{G}_u = \mathcal{G} \cup \mathcal{G}_2$ with node set $\mathcal{V}_u = \mathcal{V} \cup \mathcal{V}_2$ and edge set $\varepsilon_u = \varepsilon \cup \varepsilon_2$. Hence
$$A_u = \begin{bmatrix} 0 & 0_{1\times n} & 0_{1\times m} \\ A_{01} & A_1 & S_{21}(\theta_k) \\ S_{02}(\theta_k) & S_{12}(\theta_k) & A_2 \end{bmatrix}, \tag{2}$$
$$A(\theta_k) = \begin{bmatrix} A_1 & S_{21}(\theta_k) \\ S_{12}(\theta_k) & A_2 \end{bmatrix}, \tag{3}$$
where $A_u = [a_{ij}(\theta_k)] \in \mathbb{R}^{(n+1+m)\times(n+1+m)}$ is the adjacency matrix of $\mathcal{G}_u$; $S_{02}(\theta_k) \in \mathbb{R}^{m\times 1}$, $S_{12}(\theta_k) \in \mathbb{R}^{m\times n}$, and $S_{21}(\theta_k) \in \mathbb{R}^{n\times m}$ are the switching submatrices linking the nodes of $\mathcal{G}$ and $\mathcal{G}_2$; and $\theta(k)$ (for brevity, denoted by $\theta_k$) is a finite homogeneous Markov process, detailed in the following section.
2.2. Markov Chains
Define a finite set $\Delta = \{1, \ldots, \gamma\}$, $\gamma \in \mathbb{Z}^+$, and a set of matrices $S(\theta_k) \in \mathbb{R}^{e\times f}$, $e, f \in \mathbb{Z}^+$, with elements $s_{ij}(\theta_k)$, $i \in \{1, \ldots, e\}$, $j \in \{1, \ldots, f\}$. Consider two sets $\Gamma^{ef} = \Gamma \oplus \Gamma \oplus \cdots \oplus \Gamma$ and $\overline{\Gamma}_{ef} = \{1, 2, \ldots, \gamma^{ef}\}$, where $\oplus$ represents a novel operation on matrices and $\Gamma$ is the set corresponding to $S(\theta_k)$. Meanwhile, introduce the mapping $\Xi: \Gamma^{ef} \to \overline{\Gamma}_{ef}$ with
$$\Xi(S(\theta_k)) = s_{11}(\theta_k) + (s_{12}(\theta_k) - 1)\gamma + \cdots + (s_{e,f-1}(\theta_k) - 1)\gamma^{ef-2} + (s_{ef}(\theta_k) - 1)\gamma^{ef-1}. \tag{4}$$
Then the mapping $\Xi(\cdot)$ is a bijection from $\Gamma^{ef}$ to $\overline{\Gamma}_{ef}$.
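Since each entry of $S(\theta_k)$ takes values in $\{1, \ldots, \gamma\}$, the mapping (4) is simply a base-$\gamma$ positional encoding of the matrix entries read in row-major order. A minimal sketch (the function names `encode`/`decode` are ours, not the paper's):

```python
def encode(S, gamma):
    """Map a matrix with entries in {1, ..., gamma} to an index in
    {1, ..., gamma**(e*f)}, following the base-gamma expansion in Eq. (4)."""
    flat = [s for row in S for s in row]  # row-major: s11, s12, ..., sef
    return 1 + sum((s - 1) * gamma ** p for p, s in enumerate(flat))

def decode(index, gamma, e, f):
    """Inverse mapping: recover the e-by-f matrix from its index."""
    index -= 1
    flat = []
    for _ in range(e * f):
        flat.append(index % gamma + 1)  # peel off base-gamma digits
        index //= gamma
    return [flat[i * f:(i + 1) * f] for i in range(e)]
```

Running `encode` over all $\gamma^{ef}$ matrices hits every index in $\{1, \ldots, \gamma^{ef}\}$ exactly once, which is the bijection property claimed above.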
Remark 1.
Based on the bijection in (4), the transition probability $\Theta(\theta_k) = \Pr(\theta_k \mid \theta_{k-1})$ can be derived as follows.
Firstly, for the matrix $S_{12}(\theta_k) \in \mathbb{R}^{m\times n}$, each $\theta_k \in \overline{\Gamma}_{mn}$ corresponds to a unique matrix in the set $\Gamma^{mn}$. In modes $\theta_k$ and $\theta_{k-1}$, the following is yielded:
$$S_{12}(\theta_k) = \Xi^{-1}(\theta_k) = \begin{bmatrix} s_{11}(\theta_k) & s_{12}(\theta_k) & \cdots & s_{1n}(\theta_k) \\ s_{21}(\theta_k) & s_{22}(\theta_k) & \cdots & s_{2n}(\theta_k) \\ \vdots & \vdots & \ddots & \vdots \\ s_{m1}(\theta_k) & s_{m2}(\theta_k) & \cdots & s_{mn}(\theta_k) \end{bmatrix}, \qquad
S_{12}(\theta_{k-1}) = \Xi^{-1}(\theta_{k-1}) = \begin{bmatrix} s_{11}(\theta_{k-1}) & s_{12}(\theta_{k-1}) & \cdots & s_{1n}(\theta_{k-1}) \\ s_{21}(\theta_{k-1}) & s_{22}(\theta_{k-1}) & \cdots & s_{2n}(\theta_{k-1}) \\ \vdots & \vdots & \ddots & \vdots \\ s_{m1}(\theta_{k-1}) & s_{m2}(\theta_{k-1}) & \cdots & s_{mn}(\theta_{k-1}) \end{bmatrix}. \tag{5}$$
Assume that each edge of $\mathcal{G}_u$ takes values in the set $\Delta$ with unequal probabilities. The transition rate matrix is given by
$$\delta = \begin{bmatrix} \delta_{11} & \delta_{12} & \cdots & \delta_{1\gamma} \\ \delta_{21} & \delta_{22} & \cdots & \delta_{2\gamma} \\ \vdots & \vdots & \ddots & \vdots \\ \delta_{\gamma 1} & \delta_{\gamma 2} & \cdots & \delta_{\gamma\gamma} \end{bmatrix}, \tag{6}$$
where
$$\delta_{xy} \ge 0, \quad x, y \in \Delta; \qquad \sum_{y\in\Delta} \delta_{xy} = 1, \quad x \in \Delta. \tag{7}$$
In addition, the Markov chains are assumed to be ergodic throughout this paper. It is obvious that
$$\Pr(\theta_k \mid \theta_{k-1}) = \Pr\big(\Xi^{-1}(\theta_k) \mid \Xi^{-1}(\theta_{k-1})\big) = \Pr\big(S_{12}(\theta_k) \mid S_{12}(\theta_{k-1})\big) = \prod_{i=1}^{m}\prod_{j=1}^{n} \delta\big(s_{ij}(\theta_k) \mid s_{ij}(\theta_{k-1})\big), \tag{8}$$
where $\delta(\cdot\mid\cdot)$ represents the transition probability from one mode to another. Let the transition probability be $\delta_{ij}^{xy}$ when $s_{ij}(\theta_{k-1}) = x$ and $s_{ij}(\theta_k) = y$; then
$$\Pr(\theta_k \mid \theta_{k-1}) = \prod_{i=1}^{m}\prod_{j=1}^{n} \delta_{ij}^{xy}. \tag{9}$$
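Because the edges switch independently, (8)-(9) reduce to a product of per-edge transition probabilities. A small sketch (the function name is ours):

```python
def edge_transition_prob(S_prev, S_next, delta):
    """Probability of moving from edge-mode matrix S_prev to S_next when
    every entry evolves as an independent Markov chain with transition
    rate matrix delta, as in Eqs. (8)-(9)."""
    p = 1.0
    for row_prev, row_next in zip(S_prev, S_next):
        for x, y in zip(row_prev, row_next):
            p *= delta[x - 1][y - 1]  # modes are 1-indexed in the paper
    return p
```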
The same procedure is applied to the matrices $S_{02}(\theta_k) \in \mathbb{R}^{m\times 1}$ and $S_{21}(\theta_k) \in \mathbb{R}^{n\times m}$. Taking all of them together, for brevity, denote the total probability as
$$\frac{1}{\omega} = \prod_{i=1}^{m}\prod_{j=1}^{n} \delta_{ij}^{xy} + \prod_{i=1}^{n}\prod_{j=1}^{m} \delta_{ij}^{xy} + \prod_{i=1}^{m}\prod_{j=1}^{1} \delta_{ij}^{xy}. \tag{10}$$
Finally, the total number of system modes is $\eta = \gamma^{2mn+m}$, and the transition rate matrix is
$$\Pi = \frac{1}{\omega}\,\mathbf{1}_\eta \mathbf{1}_\eta^{T}. \tag{11}$$
2.3. PD-Like Consensus Algorithm
Suppose the discrete-time dynamics of the $i$th follower are
$$\xi_i(k+1) = \xi_i(k) + T u_i(k), \tag{12}$$
where $\xi_i(k)$ is the state at $t = kT$, $k$ is the discrete-time index, $T$ is the sampling period, and $u_i(k)$ is the control input.
Let the reference state be $\xi_0(k) = \xi_r(k)$. Consider the discrete-time coordinated tracking algorithm adopted from [16]; together with the Markovian parameter $\theta_k$, consensus algorithm (13) is applied to the agents in graph $\mathcal{G}_u$:
$$u_i(k) = \frac{1}{\sum_{j=0}^{n+m} a_{ij}(\theta_k)} \sum_{j=1}^{n+m} a_{ij}(\theta_k)\left[\frac{\xi_j(k)-\xi_j(k-1)}{T} - q\big(\xi_i(k)-\xi_j(k)\big)\right] + \frac{a_{i0}(\theta_k)}{\sum_{j=0}^{n+m} a_{ij}(\theta_k)}\left[\frac{\xi_r(k)-\xi_r(k-1)}{T} - q\big(\xi_i(k)-\xi_r(k)\big)\right], \tag{13}$$
where $a_{ij}(\theta_k)$, $i = 1, \ldots, n+m$, $j = 0, \ldots, n+m$, is the $(i,j)$th entry of $A_u$ and $q$ is a positive constant. Suppose that each follower has at least one neighbor; thus $\sum_{j=0}^{n+m} a_{ij}(\theta_k) \ne 0$, $i = 1, \ldots, n+m$. Applying (12) and (13) yields
$$\xi_i(k+1) = \xi_i(k) + \frac{T}{\sum_{j=0}^{n+m} a_{ij}(\theta_k)} \sum_{j=1}^{n+m} a_{ij}(\theta_k)\left[\frac{\xi_j(k)-\xi_j(k-1)}{T} - q\big(\xi_i(k)-\xi_j(k)\big)\right] + \frac{T a_{i0}(\theta_k)}{\sum_{j=0}^{n+m} a_{ij}(\theta_k)}\left[\frac{\xi_r(k)-\xi_r(k-1)}{T} - q\big(\xi_i(k)-\xi_r(k)\big)\right]. \tag{14}$$
Define $e_i(k) = \xi_i(k) - \xi_r(k)$, and let $E(k) = [e_1(k), \ldots, e_{m+n}(k)]^T$ and $\sigma(k) = [E^T(k), E^T(k-1)]^T$; it follows that
$$\sigma(k+1) = M(\theta_k)\sigma(k) + N X_r(k), \tag{15}$$
where
$$M(\theta_k) = \begin{bmatrix} (1-Tq) I_{n+m} + (1+Tq)\bar{D}(\theta_k)A(\theta_k) & -\bar{D}(\theta_k)A(\theta_k) \\ I_{n+m} & 0_{(n+m)\times(n+m)} \end{bmatrix},$$
$$D(\theta_k) = \operatorname{diag}\left(\sum_{j=0}^{n+m} a_{1j}(\theta_k), \ldots, \sum_{j=0}^{n+m} a_{(n+m)j}(\theta_k)\right), \qquad N = \begin{bmatrix} I_{n+m} \\ 0_{(n+m)\times(n+m)} \end{bmatrix},$$
$$X_r(k) = \mathbf{1}_{n+m}\big(2\xi_r(k) - \xi_r(k+1) - \xi_r(k-1)\big), \tag{16}$$
with $\bar{D}(\theta_k)$ the inverse of $D(\theta_k)$.
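For intuition, the per-agent update (14) can be sketched directly. The helper below is ours and assumes scalar agent states; `A` is the follower-to-follower adjacency block and `a0` holds the leader weights $a_{i0}$:

```python
def pd_like_step(xi, xi_prev, xr, xr_prev, A, a0, T, q):
    """One step of the PD-like update (14) for scalar follower states.
    xi, xi_prev : follower states at steps k and k-1
    xr, xr_prev : reference states at steps k and k-1
    """
    n = len(xi)
    xi_next = []
    for i in range(n):
        d = a0[i] + sum(A[i])  # row sum of A_u, assumed nonzero
        u = sum(A[i][j] * ((xi[j] - xi_prev[j]) / T - q * (xi[i] - xi[j]))
                for j in range(n))
        u += a0[i] * ((xr - xr_prev) / T - q * (xi[i] - xr))
        xi_next.append(xi[i] + T * u / d)
    return xi_next
```

For example, a follower connected only to the leader moves a fraction $Tq$ of its tracking error toward the reference at each step, plus the feedforward difference term $\xi_r(k) - \xi_r(k-1)$.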
$\{\sigma(k), k \in \mathbb{Z}^+\}$ is not a Markov process, but the joint process $\{\sigma(k), \theta_k\}$ is. Assume that the reference trajectory is a deterministic signal instead of a random one. The initial state of the joint process is denoted by $(\sigma(0), \theta_0)$. It follows that the solution of (15) is
$$\sigma(k) = M(\theta_{k-1})M(\theta_{k-2})\cdots M(\theta_0)\sigma(0) + N X_r(k-1) + \sum_{l=0}^{k-2} M(\theta_{k-1})M(\theta_{k-2})\cdots M(\theta_{l+1}) N X_r(l) = \hat{M}_1\sigma(0) + N X_r(k-1) + \sum_{l=0}^{k-2}\hat{M}_2(l) N X_r(l). \tag{17}$$
Note that the eigenvalues of $\hat{M}_1$ play an important role in determining $\sigma(k)$ as $k \to \infty$.
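As a sanity check on the solution formula (17), the scalar version of recursion (15) can be unrolled step by step and compared against the closed form (illustrative only; random scalars stand in for the matrices $M(\theta_k)$ and inputs $N X_r(l)$):

```python
import random

def closed_form_check(seed=1, k=5):
    """Scalar analogue of (15)/(17): iterate sigma <- M*sigma + x and
    compare with prod(M)*sigma0 + x[k-1] + sum of weighted earlier inputs."""
    rng = random.Random(seed)
    M = [rng.uniform(-0.9, 0.9) for _ in range(k)]
    x = [rng.uniform(-1.0, 1.0) for _ in range(k)]
    sigma0 = 0.5
    # direct iteration of the recursion
    s = sigma0
    for i in range(k):
        s = M[i] * s + x[i]
    # closed form, mirroring (17)
    prod = 1.0
    for Mi in M:
        prod *= Mi
    cf = prod * sigma0 + x[k - 1]
    for l in range(k - 1):
        w = 1.0
        for i in range(l + 1, k):
            w *= M[i]
        cf += w * x[l]
    return s, cf
```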
3. Convergence Analysis

Theorem 2.
Suppose that the leader has directed paths to all followers $1$ to $n+m$ in $\mathcal{G}_u$; then
$$\frac{1}{\omega}\sum_{i=1}^{\eta} \bar{D}_i A_i \tag{18}$$
has all eigenvalues within the unit circle, where $\bar{D}_i A_i$ denotes $\bar{D}(\theta_k)A(\theta_k)$ in the $i$th mode and $\bar{D}(\theta_k)$ is the inverse of $D(\theta_k)$.
Proof.
There exist $D_{12}(\theta_k)$ (resp., $D_{21}(\theta_k)$) corresponding to $S_{12}(\theta_k)$ (resp., $S_{21}(\theta_k)$) as denoted in (16); it follows from (3) that
$$D(\theta_k) = \begin{bmatrix} D_1 + D_{21}(\theta_k) & 0_{n\times m} \\ 0_{m\times n} & D_{12}(\theta_k) + D_2 \end{bmatrix}. \tag{19}$$
Then, it is obvious that
$$\bar{D}(\theta_k)A(\theta_k) = \begin{bmatrix} \big(D_1 + D_{21}(\theta_k)\big)^{-1}A_1 & \big(D_1 + D_{21}(\theta_k)\big)^{-1}S_{21}(\theta_k) \\ \big(D_{12}(\theta_k) + D_2\big)^{-1}S_{12}(\theta_k) & \big(D_{12}(\theta_k) + D_2\big)^{-1}A_2 \end{bmatrix}. \tag{20}$$
All the elements of $(D_1 + D_{21}(\theta_k))^{-1}A_1$, $(D_1 + D_{21}(\theta_k))^{-1}S_{21}(\theta_k)$, $(D_{12}(\theta_k) + D_2)^{-1}S_{12}(\theta_k)$, and $(D_{12}(\theta_k) + D_2)^{-1}A_2$ are less than 1. Based on Lemma 3.1 in [5], (18) has all eigenvalues within the unit circle.
Lemma 3.
Let $L = (\Pi^T \otimes I_{4(n+m)^2})\operatorname{diag}(M_1 \otimes M_1, \ldots, M_\eta \otimes M_\eta)$ and $\hat{L} = (\Pi^T \otimes I_{2(n+m)})\operatorname{diag}(M_1, \ldots, M_\eta)$, where $M_i$, $i \in \{1, \ldots, \eta\}$, is defined in (16) and $\otimes$ represents the Kronecker product of matrices. If $\rho(L) < 1$, then $\rho(\hat{L}) < 1$, where $\rho(\cdot)$ denotes the matrix spectral radius.
Theorem 4.
Suppose that the leader has directed paths to all nodes in the union graph $\mathcal{G}_u$, in which case $\tau_{i1}, \tau_{i2} > 0$ hold. If the positive scalars $T > 0$ and $q > 0$ satisfy
$$Tq < \min_{i=1,\ldots,n+m}\{\tau_{i1}, \tau_{i2}\}, \tag{21}$$
where
$$\tau_{i1} = \frac{1 - (2\cos^2\phi - 1)r^2 - \sqrt{2\big(2r^3\cos^3\phi - r^3\cos\phi - 6r^2\cos^2\phi + 3r^2 + 3r\cos\phi - 1\big)}}{(2\cos^2\phi - 1)r^2 - 2r\cos\phi + 1},$$
$$\tau_{i2} = \frac{1 - (2\cos^2\phi - 1)r^2 + \sqrt{2\big((2\cos^2\phi - 1)r^2 - 1\big)\big(r\cos\phi - 1\big)}}{(2\cos^2\phi - 1)r^2 - 2r\cos\phi + 1}, \tag{22}$$
with $r$ and $\phi$ the modulus and argument of the corresponding eigenvalue of $\bar{D}_i A_i$ (defined in the proof), then $\hat{L}$ has all eigenvalues within the unit circle.
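The bounds in (22) are the positive roots of two quadratics $g_1$, $g_2$ in $Tq$ derived in the proof below. A numerical sketch (ours) that computes them from the coefficients in (36)-(39) and, independently, checks the root locations of the characteristic equation (27) for a given $Tq$:

```python
import cmath
import math

def tau_bounds(r, phi):
    """Bounds tau_1, tau_2 on T*q from the quadratics g1, g2 in (35), for an
    eigenvalue s = r*exp(j*phi) of the normalized adjacency matrix.
    tau_1 is +inf when g1 has no real root (g1 then stays positive)."""
    C = (2 * math.cos(phi) ** 2 - 1) * r ** 2
    a = C - 2 * r * math.cos(phi) + 1
    b = 2 * (C - 1)
    disc1 = b * b - 4 * a * (C + 3)  # Delta_1 in (37)
    disc2 = b * b - 4 * a * (C - 1)  # Delta_2 in (39)
    tau1 = (-b - math.sqrt(disc1)) / (2 * a) if disc1 > 0 else float('inf')
    tau2 = (-b + math.sqrt(disc2)) / (2 * a)
    return tau1, tau2

def roots_in_unit_circle(Tq, r, phi):
    """Numerically check whether both roots of (27) lie inside the unit circle."""
    s = r * cmath.exp(1j * phi)
    b_ = Tq - 1 - (1 + Tq) * s
    disc = cmath.sqrt(b_ * b_ - 4 * s)
    return all(abs((-b_ + sign * disc) / 2) < 1 for sign in (1, -1))
```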
Proof.
Step 1. The matrix $A_u$ has $\eta$ modes based on the analysis in Section 2. If the leader has directed paths to all followers, it follows from Theorem 2 that (18) has all eigenvalues within the unit circle. It will be shown that $\rho(\hat{L}) < 1$ through a perturbation argument. $\hat{L}$ can be written as
$$\hat{L} = (\Pi^T \otimes I_{2(n+m)})\operatorname{diag}(M_1, \ldots, M_\eta) = \frac{1}{\omega}\begin{bmatrix} M_1 & M_2 & \cdots & M_\eta \\ M_1 & M_2 & \cdots & M_\eta \\ \vdots & \vdots & \ddots & \vdots \\ M_1 & M_2 & \cdots & M_\eta \end{bmatrix}. \tag{23}$$
Denote the elementary transformation block matrix $P_1 \in \mathbb{R}^{2\eta(m+n)\times 2\eta(m+n)}$ as
$$P_1 = \begin{bmatrix} I_{2(n+m)} & 0_{2(n+m)\times 2(n+m)} & \cdots & I_{2(n+m)} \\ 0_{2(n+m)\times 2(n+m)} & I_{2(n+m)} & \cdots & I_{2(n+m)} \\ \vdots & \vdots & \ddots & \vdots \\ 0_{2(n+m)\times 2(n+m)} & 0_{2(n+m)\times 2(n+m)} & \cdots & I_{2(n+m)} \end{bmatrix}. \tag{24}$$
The characteristic polynomial can be calculated as
$$\det\big(\lambda I_{2\eta(n+m)} - \hat{L}\big) = \det\Big(P_1^{-1}\big(\lambda I_{2\eta(n+m)} - \hat{L}\big)P_1\Big) = \lambda^{2(\eta-1)(n+m)}\det\Big(\lambda I_{2(n+m)} - \frac{1}{\omega}\sum_{i=1}^{\eta} M_i\Big). \tag{25}$$
The issue is thus converted into finding conditions ensuring that all eigenvalues of $(1/\omega)\sum_{i=1}^{\eta} M_i$, which has the same block structure as each $M_i$, lie within the unit circle.
Step 2. The characteristic polynomial of $M_i$ is given by
$$\det(\lambda I_{2(n+m)} - M_i) = \det\begin{bmatrix} \lambda I_{n+m} - (1-Tq)I_{n+m} - (1+Tq)\bar{D}_i A_i & \bar{D}_i A_i \\ -I_{n+m} & \lambda I_{n+m} \end{bmatrix} = \det\Big(\big(\lambda^2 + (Tq-1)\lambda\big)I_{n+m} + \big(1 - (1+Tq)\lambda\big)\bar{D}_i A_i\Big). \tag{26}$$
Let $s_h^i$ be the $h$th eigenvalue of $\bar{D}_i A_i$, which lies within the unit circle. Define $s_h^i = r\cos\phi + jr\sin\phi$, where $r \in (0,1)$ is the modulus of $s_h^i$, $\phi \in [0, 2\pi)$ is its argument, and $j$ is the imaginary unit. Therefore, the roots satisfy
$$\lambda^2 + \big(Tq - 1 - (1+Tq)s_h^i\big)\lambda + s_h^i = 0. \tag{27}$$
It can be noted that
$$\lambda_1 + \lambda_2 = -\big(Tq - 1 - (1+Tq)r\cos\phi\big) + (1+Tq)r\sin\phi\, j, \qquad \lambda_1\lambda_2 = r\cos\phi + r\sin\phi\, j. \tag{28}$$
Let $\lambda_1 = l_1\cos\alpha + l_1\sin\alpha\, j$ and $\lambda_2 = l_2\cos\beta + l_2\sin\beta\, j$. Based on (28),
$$\sin\phi = \sin(\alpha+\beta), \qquad \cos\phi = \cos(\alpha+\beta), \qquad r = l_1 l_2, \tag{29}$$
$$l_1\cos\alpha + l_2\cos\beta = -\big(Tq - 1 - (1+Tq)r\cos\phi\big), \qquad l_1\sin\alpha + l_2\sin\beta = (1+Tq)r\sin\phi. \tag{30}$$
It follows from (30) that
$$\big(l_1\cos\alpha + l_2\cos\beta\big)^2 - \big(l_1\sin\alpha + l_2\sin\beta\big)^2 = \big(Tq - 1 - (1+Tq)r\cos\phi\big)^2 - \big((1+Tq)r\sin\phi\big)^2. \tag{31}$$
Using (29), after some manipulation, (31) can be rewritten as
$$2l_1^2\cos^2\alpha + 2l_2^2\cos^2\beta - \big(l_1^2 + l_2^2\big) = (1+Tq)^2 r^2\big(\cos^2\phi - \sin^2\phi\big) - 2(Tq)^2 r\cos\phi + (Tq-1)^2. \tag{32}$$
To ensure that $\lambda_1$ and $\lambda_2$ are within the unit circle, that is, $l_1 \le 1$ and $l_2 \le 1$, it must hold that
$$-2 < 2l_1^2\cos^2\alpha + 2l_2^2\cos^2\beta - \big(l_1^2 + l_2^2\big) < 2. \tag{33}$$
Then the following holds:
$$(1+Tq)^2 r^2\big(\cos^2\phi - \sin^2\phi\big) - 2(Tq)^2 r\cos\phi + (Tq-1)^2 + 2 > 0, \qquad (1+Tq)^2 r^2\big(\cos^2\phi - \sin^2\phi\big) - 2(Tq)^2 r\cos\phi + (Tq-1)^2 - 2 < 0. \tag{34}$$
To obtain the condition on $Tq$, (34) is rearranged as
$$g_1(Tq) = \big((2\cos^2\phi - 1)r^2 - 2r\cos\phi + 1\big)(Tq)^2 + 2\big((2\cos^2\phi - 1)r^2 - 1\big)Tq + (2\cos^2\phi - 1)r^2 + 3 > 0,$$
$$g_2(Tq) = \big((2\cos^2\phi - 1)r^2 - 2r\cos\phi + 1\big)(Tq)^2 + 2\big((2\cos^2\phi - 1)r^2 - 1\big)Tq + (2\cos^2\phi - 1)r^2 - 1 < 0. \tag{35}$$
With the constraints on $r$ and $\phi$ and with (35), the range of $Tq$ can be obtained.
Firstly, in the analysis of $g_1(Tq)$, let
$$a = (2\cos^2\phi - 1)r^2 - 2r\cos\phi + 1, \qquad b = 2\big((2\cos^2\phi - 1)r^2 - 1\big) < 0, \qquad c = (2\cos^2\phi - 1)r^2 + 3; \tag{36}$$
then
$$g_1(0) = (2\cos^2\phi - 1)r^2 + 3 > 0, \qquad \Delta_1 = b^2 - 4ac = 8\big(2r^3\cos^3\phi - r^3\cos\phi - 6r^2\cos^2\phi + 3r^2 + 3r\cos\phi - 1\big). \tag{37}$$
After some manipulation, this yields
$$Tq \in (0, \tau_{i1}), \tag{38}$$
where $\tau_{i1}$ satisfies (22) when $\Delta_1 > 0$.
Then, for the condition $g_2(Tq) < 0$, proceeding as for $g_1(Tq)$, it can be obtained that
$$g_2(0) = (2\cos^2\phi - 1)r^2 - 1 < 0, \qquad \Delta_2 = b^2 - 4ac = 8\big((2\cos^2\phi - 1)r^2 - 1\big)\big(r\cos\phi - 1\big) > 0. \tag{39}$$
Similarly, we have
$$Tq \in (0, \tau_{i2}), \tag{40}$$
where $\tau_{i2}$ satisfies (22) with $a > 0$.
Finally, sufficient condition (21) is proved. It follows from Lemma 3.1 in [5] that $(1/\omega)\sum_{i=1}^{\eta} M_i$ has all eigenvalues within the unit circle. Thus, under (21), the system tracking errors converge stably.
Remark 5.
The Markov chains are required to be ergodic; this ensures that the leader has directed paths to all followers in $\mathcal{G}_u$. Certainly, the results can be extended to the case where all the links are governed by Markov chains; through the mapping in (4), the Markovian switching topologies can still be handled. In this way, a large number of system modes can be described, and the traditional Markovian switching topologies can be recovered by adjusting the link modes and the transition rate matrix. However, this magnifies the computational load, and the case of unknown or partly unknown transition probabilities will be considered in future work.
Lemma 6.
Let $L$ be defined as in Lemma 3. For small enough $Tq$, $\rho(L) < 1$ if and only if the leader has directed paths to all followers in the union graph rather than merely in the subgraphs.
Theorem 7.
Assume that $\xi_r(k)$ has a bounded changing rate; that is,
$$\left|\frac{\xi_r(k) - \xi_r(k-1)}{T}\right| \le \bar{\xi}, \tag{41}$$
and that the leader has directed paths to all followers $1$ to $n+m$ in the union graph. When Theorem 4 holds, using algorithm (13), there exist $0 < \mu < 1$ and $\nu \ge 1$ such that the tracking errors of the agents are ultimately mean-square bounded by
$$\frac{2\sqrt{m+n}\,T\bar{\xi}\,\nu}{1-\mu}. \tag{42}$$
Proof.
It follows from (15) that
$$\|\sigma(k)\|_E \le \big\|M(\theta_{k-1})M(\theta_{k-2})\cdots M(\theta_0)\sigma(0)\big\|_E + \big\|N X_r(k-1)\big\|_E + \sum_{l=0}^{k-2}\big\|M(\theta_{k-1})M(\theta_{k-2})\cdots M(\theta_{l+1}) N X_r(l)\big\|_E. \tag{43}$$
Noting that $N X_r(k-1)$ is deterministic, based on (41),
$$\big\|N X_r(k-1)\big\|_E \le 2\sqrt{m+n}\,T\bar{\xi}. \tag{44}$$
Based on Lemmas 3.4 and 3.5 and Theorem 3.9 in [28], there exist $0 < \mu < 1$ and $\nu \ge 1$ such that
$$\big\|M(\theta_{k-1})\cdots M(\theta_0)\sigma(0)\big\|_E^2 \le (m+n)\mu^{2k}\nu^2\|\sigma(0)\|_2^2, \qquad \big\|M(\theta_{k-1})\cdots M(\theta_{l+1})N X_r(l)\big\|_E^2 \le 4(m+n)T^2\bar{\xi}^2\mu^{2k-2l-2}\nu^2. \tag{45}$$
Noting that $2\sqrt{m+n}\,T\bar{\xi} \le 2\sqrt{m+n}\,T\bar{\xi}\nu$, after some manipulation it follows that
$$\|\sigma(k)\|_E \le \sqrt{m+n}\,\mu^k\nu\|\sigma(0)\|_2 + \frac{2\sqrt{m+n}\,T\bar{\xi}\nu\big(1-\mu^k\big)}{1-\mu}. \tag{46}$$
Therefore, as $k \to \infty$, it can be obtained that $\|\sigma(k)\|_E \le 2\sqrt{m+n}\,T\bar{\xi}\nu/(1-\mu)$. As in Theorem 3.2 in [16], the tracking errors go to zero ultimately as $T \to 0$. For the original interaction topology $\mathcal{G}$, however, the ultimate mean-square bound obtained by the same method is $2\sqrt{n}\,T\bar{\xi}\nu/(1-\mu)$, which is smaller than that of the union system.
4. Simulation Results
In this section, a simulation example is given to verify the effectiveness of the theoretical results. For brevity, let $a_{ij}(\theta_k) = 1$ if $(v_j, v_i) \in \varepsilon_u$, $i \in \{1, \ldots, n+m\}$, $j \in \{0, \ldots, n+m\}$. The subgraphs $\mathcal{G}$ and $\mathcal{G}_2$ are shown in Figure 1.
Figure 1: Two fixed subgraphs.
It follows from $\mathcal{G}_u$ that each Markov chain has two modes, which means $\gamma = 2$ and $\eta = 2^{3\times 5 + 3\times 4} = 2^{27}$, and the transition rate matrices are given in Table 1.
Table 1: Transition rate matrices for Markov chains.

Edges $(v_1,v_5)$, $(v_2,v_7)$, $(v_6,v_3)$, $(v_4,v_5)$, $(v_2,v_6)$: $\delta = \begin{bmatrix} 0.3 & 0.7 \\ 0.4 & 0.6 \end{bmatrix}$
Edges $(v_1,v_6)$, $(v_2,v_7)$, $(v_3,v_6)$, $(v_5,v_4)$, $(v_7,v_4)$: $\delta = \begin{bmatrix} 0.2 & 0.8 \\ 0.4 & 0.6 \end{bmatrix}$
Edges $(v_1,v_7)$, $(v_2,v_5)$, $(v_3,v_5)$, $(v_4,v_6)$, $(v_7,v_3)$: $\delta = \begin{bmatrix} 0.5 & 0.5 \\ 0.8 & 0.2 \end{bmatrix}$
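Each edge's next mode can be simulated by drawing from the corresponding row of its transition rate matrix $\delta$; a minimal sampler (ours):

```python
import random

def sample_next_mode(current, delta, rng=random):
    """Draw the next mode of one edge from row `current` of the transition
    rate matrix delta (modes are 1-indexed, as in the paper)."""
    u, acc = rng.random(), 0.0
    for mode, p in enumerate(delta[current - 1], start=1):
        acc += p
        if u < acc:
            return mode
    return len(delta[current - 1])  # guard against rounding at u ~ 1
```

Over many draws from mode 1 of the first matrix above, roughly 30% of transitions stay in mode 1 and 70% move to mode 2, matching the first row of $\delta$.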
As an example, some modes of the edges are shown in Figure 2.
Figure 2: Modes of the edges generated by Markov chains.
For the PD-like discrete-time consensus algorithm, the initial states of the agents in $\mathcal{G}$ and $\mathcal{G}_2$ are $[\sigma_1(0), \sigma_2(0), \sigma_3(0), \sigma_4(0)] = [3, 1, 0, 2.5]$ and $[\sigma_5(0), \sigma_6(0), \sigma_7(0)] = [-2, -3, -2.5]$. Furthermore, $[\sigma_1(-1), \sigma_2(-1), \sigma_3(-1), \sigma_4(-1)] = [3, 1, 0, 2.5]$ should also be defined at the initial time, and let $\xi_r(k) = \cos(kT)$. Distributed controller (13) is implemented with the parameters in the following four cases.
Case 1. $T = 0.1$, $q = 4$.
Case 2. $T = 0.05$, $q = 4$.
Case 3. $T = 0.05$, $q = 2$.
Case 4. $T = 0.4$, $q = 2$.
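A minimal closed-loop sketch of controller (13) in the spirit of these cases; the fixed chain leader → follower 1 → follower 2 is a hypothetical stand-in, since the exact edge sets of Figure 1 are not reproduced in the text:

```python
import math

def simulate(T=0.05, q=4.0, steps=400):
    """Track xr(k) = cos(kT) over a hypothetical fixed chain
    leader -> follower 1 -> follower 2, using update (14).
    Returns the absolute tracking errors at the final step."""
    xr = lambda k: math.cos(k * T)
    xi = [3.0, 1.0]        # follower states at step k
    xi_prev = [3.0, 1.0]   # follower states at step k - 1
    for k in range(1, steps):
        nxt = list(xi)
        # follower 1 hears only the leader (d_1 = a_10 = 1)
        nxt[0] = xi[0] + T * ((xr(k) - xr(k - 1)) / T - q * (xi[0] - xr(k)))
        # follower 2 hears only follower 1 (d_2 = a_21 = 1)
        nxt[1] = xi[1] + T * ((xi[0] - xi_prev[0]) / T - q * (xi[1] - xi[0]))
        xi_prev, xi = xi, nxt
    return [abs(x - xr(steps)) for x in xi]
```

With $T = 0.05$ and $q = 4$ the final tracking errors are on the order of $T/q$, consistent with the $O(T)$ ultimate bound of Theorem 7; shrinking $T$ shrinks the residual error further.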
Simulation results are shown in Figures 3–6.
Figure 3: Simulation results in Case 1: state tracking and tracking errors of all agents.
Figure 4: Simulation results in Case 2: state tracking and tracking errors of all agents.
Figure 5: Simulation results in Case 3: state tracking and tracking errors of all agents.
Figure 6: Simulation results in Case 4: state tracking and tracking errors of all agents.
Figure 3 shows the system states and tracking errors with a time-varying reference when $T = 0.1$, $q = 4$. Once the two fixed groups are combined at half of the simulation time, all the followers can still track the reference. More specifically, the state and tracking-error curves are smooth in the first half of the time, but in the rest of the time, as shown in the partial enlarged details, there are many burrs in the plots for all agents, since the links between agents in $\mathcal{G}$ and $\mathcal{G}_2$ are governed by the random Markov chains.
Under the same reference, compared with Figure 3, the system states track more effectively and the tracking errors are smaller when $T = 0.05$, $q = 4$. Furthermore, as shown in Figures 4 and 5, a bigger control gain $q$ with the same $T$ gives a quicker response and smaller ultimate tracking errors. But for Figure 6, when $T = 0.4$, $q = 2$ in Case 4, the behavior is unpredictable, because $Tq$ does not satisfy the condition of Theorem 4; the tracking errors become unbounded in this case. It should be stressed that, based on Theorem 4, the largest admissible value of $Tq$ is approximately 0.44. In addition, a quantitative comparison among the four cases is given in Table 2, which shows the mean and standard deviation of the tracking errors in the second half of the time. The comparison results show that the tracking errors clearly depend on $T$ and $q$.
Table 2: Comparison results of the tracking errors among the four cases.

Case      Mean                       Standard deviation
Case 1    0.0305                     0.1718
Case 2    $-0.0163$                  0.1483
Case 3    $-0.0788$                  0.2349
Case 4    $-4.4958\times 10^{5}$     $2.5529\times 10^{7}$
5. Conclusion
In this paper, distributed discrete-time coordinated tracking control for multiagent systems is investigated to solve the tracking problem on a union graph with Markov chains. Based on a novel mapping, the Markovian switching topologies are redesigned by applying Markov chains to the edge set. The PD-like discrete-time consensus algorithm is applied to deal with the time-varying reference. A sufficient condition on the matched sampling period and a feasible control gain is obtained in terms of trigonometric functions. Both the theoretical and simulation results show that the ultimate tracking errors are related to the sampling period. Although we focus on discrete-time multiagent systems with an ideal communication network, an extended analysis may consider time delays, which will be addressed in our future work.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant 61473248 and Natural Science Foundation of Hebei Province under Grant F2016203496.
[1] J. Liang, Z. Wang, and X. Liu, "Distributed state estimation for discrete-time sensor networks with randomly varying nonlinearities and missing measurements."
[2] X. Dong, Y. Zhou, Z. Ren, and Y. Zhong, "Time-varying formation tracking for second-order multi-agent systems subjected to switching topologies with application to quadrotor formation flying."
[3] Q. Zhang, P. Li, Z. Yang, and Z. Chen, "Adaptive flocking of non-linear multi-agents systems with uncertain parameters."
[4] T. Liu and Z.-P. Jiang, "Distributed nonlinear control of mobile autonomous multi-agents."
[5] S. Su and Z. Lin, "Distributed consensus control of multi-agent systems with higher order agent dynamics and dynamically changing directed interaction topologies."
[6] P. Li and K. Qin, "Distributed robust H∞ consensus control for uncertain multiagent systems with state and input delays."
[7] P. Lin, Y. Jia, and L. Li, "Distributed robust H∞ consensus control in directed networks of agents with time-delay."
[8] W. He, G. Chen, Q.-L. Han, and F. Qian, "Network-based leader-following consensus of nonlinear multi-agent systems via distributed impulsive control."
[9] P. Lin, W. Ren, and J. A. Farrell, "Distributed continuous-time optimization: nonuniform gradient gains, finite-time convergence, and convex constraint set."
[10] Y. Cao, W. Ren, and M. Egerstedt, "Distributed containment control with multiple stationary or dynamic leaders in fixed and switching directed networks."
[11] H. Cai and J. Huang, "Leader-following consensus of multiple uncertain Euler-Lagrange systems under switching network topology."
[12] Y. Cao, W. Yu, W. Ren, and G. Chen, "An overview of recent progress in the study of distributed multi-agent coordination."
[13] Y. Wang and Q. Wu, "Distributed robust H∞ consensus for multi-agent systems with nonlinear dynamics and parameter uncertainties."
[14] P. Lin, W. Ren, and H. Gao, "Distributed velocity-constrained consensus of discrete-time multi-agent systems with nonconvex constraints, switching topologies, and delays."
[15] Y. Su and J. Huang, "Stability of a class of linear switching systems with applications to two consensus problems."
[16] H. Zhao, W. Ren, D. Yuan, and J. Chen, "Distributed discrete-time coordinated tracking with Markovian switching topologies."
[17] F. Chen, G. Feng, L. Liu, and W. Ren, "Distributed average tracking of networked Euler-Lagrange systems."
[18] J. Mei, W. Ren, and G. Ma, "Distributed containment control for Lagrangian networks with parametric uncertainties under a directed graph."
[19] X. Li, H. Wu, and Y. Yang, "Consensus of heterogeneous multiagent systems with arbitrarily bounded communication delay."
[20] S. Yu and X. Long, "Finite-time consensus for second-order multi-agent systems with disturbances by integral sliding mode."
[21] P. Ming, J. Liu, S. Tan, S. Li, L. Shang, and X. Yu, "Consensus stabilization in stochastic multi-agent systems with Markovian switching topology, noises and delay."
[22] C. Ma, "Finite-time passivity and passification design for Markovian jumping systems with mode-dependent time-varying delays."
[23] D. Xie and Y. Cheng, "Bounded consensus tracking for sampled-data second-order multi-agent systems with fixed and Markovian switching topology."
[24] H. Zhao, "Leader-following consensus of data-sampled multi-agent systems with stochastic switching topologies."
[25] W. Li, L. Xie, and J.-F. Zhang, "Containment control of leader-following multi-agent systems with Markovian switching network topologies and measurement noises."
[26] Y. Zhao, G. Guo, and L. Ding, "Guaranteed cost control of mobile sensor networks with Markov switching topologies."
[27] X. Liu, X. Yu, and H. Xi, "Finite-time synchronization of neutral complex networks with Markovian switching based on pinning controller."
[28] O. L. V. Costa, M. D. Fragoso, and R. P. Marques.