Event-Triggered Discrete-Time Distributed Consensus Optimization over Time-Varying Graphs

This paper studies a class of event-triggered discrete-time distributed consensus optimization algorithms for a set of agents whose communication topology is described by a sequence of time-varying networks. The communication process is driven by independent trigger conditions that are fully decentralized: each agent's condition depends only on its own state. At each time step, every agent has access only to its private locally Lipschitz convex objective function, and it updates its state using this objective function together with the information received from its neighboring agents. Under the assumption that the network topology is uniformly strongly connected and weight-balanced, the proposed event-triggered distributed subgradient algorithm steers the whole network of agents to converge asymptotically to an optimal solution of the convex optimization problem. Finally, a simulation example is given to validate the effectiveness of the introduced algorithm and the feasibility of the theoretical analysis.


Introduction
In the last decade, multiagent systems have achieved notable progress in both theory and application, for example, in the consensus problem [1-7], the flocking problem [8-10], resource allocation control [11-13], and so on [14-17]. Multiagent systems not only feature resource sharing, good coordination, high distribution, and strong autonomy, but can also cooperatively solve large-scale complex tasks [18-23]. However, as an emerging field, the coordination control of multiagent systems still faces many open problems in both theoretical research and practical applications. As one of the most important research subjects in the field of coordination control of multiagent systems, the consensus problem has attracted considerable attention over the past few years due to its expanding applications in cooperative control of highway systems, mobile multirobot systems, the design of distributed sensor networks, and other areas [24-31]. Generally speaking, consensus means that the states of all agents asymptotically, or even exponentially, reach agreement on a common value by effectively exploiting each agent's local information. Naturally, in order to guide practical applications and achieve greater value, further study is still needed.
Among the existing papers, consensus-based subgradient methods for solving the distributed convex optimization problem have drawn a surge of attention since Nedic and Ozdaglar presented a systematic analysis in [32]. To date, many valuable consensus-based subgradient algorithms have been proposed. A projection-based distributed subgradient algorithm for distributed optimization was put forward by Nedic et al. [33], where each agent was constrained to an individual closed convex set. Convergence to the same optimal solution was proved for the cases when the weights were constant and equal and when the weights were time-varying but all agents had the same constraint set. Distributed algorithms for set-constrained optimization were further investigated by Bianchi and Jakubowicz [34] and Lou et al. [35]. To handle distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, distributed Lagrangian and penalty primal-dual subgradient algorithms were developed by Zhu and Martinez [36] and Towfic and Sayed [37]; both were designed for function-constrained problems. Meanwhile, distributed convex optimization problems over general networks were settled in Lobel et al. [38] and Matei and Baras [39]. Recent works [40-45] have put coordinated effort into the consensus of multiagent systems by designing control protocols based on event-triggered sampling schemes. Lu and Tang [41] proposed a continuous-time consensus-based zero-gradient-sum (ZGS) algorithm, which is built on the condition that the gradient sum is zero. Seyboth et al. [42] studied a variant of the event-triggered average consensus problem for single integrators and double integrators, where a novel control strategy for multiagent coordination was employed to simplify the performance and convergence analysis of the method. To solve the optimization problem in the more general case where the mean-square consensus of multiple agents is affected by noise over directed networks, Hu et al. [44] proposed novel centralized and decentralized event-triggered protocols and established their convergence. In more recent literature, Li et al. [45] investigated event-triggered nonlinear consensus in directed multiagent systems with combinational state measurements. In general, event-triggered sampling schemes were introduced into the implementation of each of the aforementioned methods.
Our method builds on the pioneering works [33, 46, 47]. Nedic et al. [33] assumed that each agent was constrained to remain in a closed convex set, and convergence to the same optimal solution was proved for the cases when the weights were constant and equal and when the weights were time-varying but all agents had the same constraint set. Furthermore, [46] developed a broadcast-based algorithm, called the subgradient-push, which guides each agent to an optimal value under a standard assumption of subgradient boundedness; the subgradient-push requires neither knowledge of the number of agents nor of the graph sequence to implement. In order to avoid unnecessary communication and ensure fast and exact convergence, Chen and Ren [47] presented an event-triggered zero-gradient-sum distributed consensus optimization method over directed networks under the general assumption that the objective function is twice continuously differentiable.
Contributions. Inspired by the previous works, this paper proposes a novel distributed subgradient algorithm for multiagent convex optimization with an event-triggered sampling scheme. Previous works fall short when applied to distributed algorithms over multiagent networks; for example, they may study discrete-time distributed consensus optimization over time-varying graphs without a trigger condition, or they may only consider event-based distributed consensus of multiagent systems over general networks. In contrast, our method integrates the event-triggered scheme with discrete-time distributed consensus optimization over time-varying graphs. Moreover, we only require that the network topology be uniformly strongly connected and weight-balanced, which makes our algorithm more broadly applicable. More precisely, the contribution of this paper is threefold. Firstly, we study the convex optimization problem for discrete-time multiagent systems via a distributed event-triggered sampling control scheme, where the event-triggered control strategy eliminates unnecessary communications among neighboring agents, reducing computation costs and energy consumption in practice. Secondly, under the assumption that the digraph is weight-balanced and uniformly strongly connected, we introduce a novel distributed subgradient algorithm driven by this event-triggered sampling scheme. Thirdly, we establish the convergence of the algorithm and prove that it achieves the optimal point of the sum of the agents' local objective functions while satisfying the trigger condition.
The remainder of this paper is organized as follows. Some essential concepts and background on graph theory are given, and the problem is formulated, in Section 2. The main results are presented in Section 3. Furthermore, the effectiveness of the algorithm is verified by a numerical example in Section 4. Finally, the conclusion is drawn in Section 5.

Preliminaries and Concepts
In this section, we present some important mathematical preliminaries, including algebraic graph theory, notations, and the problem formulation (see [48, 49]).

Algebraic Graph Theory.
We employ a graph to describe the information exchange between nodes. The information exchange among $n$ nodes in an information interaction topology can be modeled as a weighted directed graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}, \mathcal{A}\}$, where $\mathcal{V} = \{1, 2, \ldots, n\}$ is the set of vertices, with $i$ representing the $i$th vertex, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges. The graph is assumed to be simple, that is, there are no repeated edges or self-loops. A directed graph is said to be balanced if the weighted out-degrees and in-degrees of all nodes in the directed graph are equal. A directed graph with $n$ nodes is called a directed tree if it contains $n-1$ edges and there exists a root node with directed paths to every other node. For a directed graph, a directed tree is regarded as a directed spanning tree if it contains all the network nodes.

Notations.
Some standard mathematical notations used throughout this paper are listed as follows. $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{n \times n}$ refer to the set of real numbers, the set of $n \times 1$ real vectors, and the set of $n \times n$ real matrices, respectively. $I_n$ and $0_{n \times n}$ denote the identity matrix and the $n \times n$ zero matrix, respectively. Let $\mathbf{1}_n \in \mathbb{R}^n$ and $\mathbf{0}_n \in \mathbb{R}^n$ refer to the vectors with all entries equal to one and zero, respectively. We let $x^T$ or $A^T$ denote the transpose of a vector $x$ or a matrix $A$. For a vector $x \in \mathbb{R}^n$, we denote $|x| = (|x_1|, \ldots, |x_n|)^T$, while $\|x\|$ is the standard Euclidean norm in the Euclidean space. $\nabla f : \mathbb{R}^n \to \mathbb{R}^n$ denotes the gradient of $f$. For a matrix $A$, we write $a_{ij}$ or $[A]_{ij}$ to denote its $(i, j)$th entry.
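As a concrete check of the weight-balanced property, the following sketch (in Python; the convention that row $i$ of the adjacency matrix holds agent $i$'s outgoing weights is our assumption, not fixed by the paper) tests whether every node's weighted in-degree equals its weighted out-degree:

```python
import numpy as np

def is_weight_balanced(A, tol=1e-9):
    """Check that a weighted adjacency matrix is weight-balanced:
    for every node, weighted in-degree equals weighted out-degree.
    Convention (assumed): row i holds node i's outgoing edge weights."""
    out_deg = A.sum(axis=1)   # row sums: weighted out-degrees
    in_deg = A.sum(axis=0)    # column sums: weighted in-degrees
    return np.allclose(out_deg, in_deg, atol=tol)

# A directed 3-cycle with equal weights is weight-balanced.
A_cycle = np.array([[0.0, 0.5, 0.0],
                    [0.0, 0.0, 0.5],
                    [0.5, 0.0, 0.0]])
# A single directed edge is not balanced.
A_chain = np.array([[0.0, 1.0],
                    [0.0, 0.0]])
print(is_weight_balanced(A_cycle), is_weight_balanced(A_chain))  # True False
```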

Distributed Optimization Problem.
In this subsection, we consider a network of $n$ nodes whose objective is to solve the following distributed minimization problem:
$$\min_{x \in \mathbb{R}^m} \ f(x) = \sum_{i=1}^{n} f_i(x), \tag{1}$$
where $f_i : \mathbb{R}^m \to \mathbb{R}$ is the convex objective function of agent $i$ and $x$ is a global decision vector. We assume that $f_i$ is known only by agent $i$ and that the $f_i$ may differ across agents. Under the condition that the set of optimal solutions $X^* = \arg\min_{x \in \mathbb{R}^m} f(x)$ is nonempty, we denote by $f^*$ the optimal value of (1) and by $x^*$ an optimizer of (1).
In this paper, we do not assume differentiability of the local objective functions $f_i$. At points where a function is not differentiable, the subgradient plays the role of the gradient. For a given convex function $f : \mathbb{R}^m \to \mathbb{R}$ and a point $\bar{x} \in \mathbb{R}^m$, a subgradient of the function $f$ at $\bar{x}$ is a vector $\nabla f(\bar{x}) \in \mathbb{R}^m$ such that the following subgradient inequality holds for any $x \in \mathbb{R}^m$:
$$f(x) \ \ge \ f(\bar{x}) + \nabla f(\bar{x})^T (x - \bar{x}). \tag{2}$$
The following assumptions are necessary in the analysis of the distributed optimization algorithm throughout this paper.
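The subgradient inequality can be illustrated numerically. The sketch below (our illustration, not taken from the paper) checks it for the nondifferentiable convex function $f(x) = |x|$ at $\bar{x} = 0$, where every $g \in [-1, 1]$ is a valid subgradient:

```python
import numpy as np

# f(x) = |x| is convex but not differentiable at 0; any g in [-1, 1]
# is a subgradient there.  We verify the subgradient inequality
# f(x) >= f(xbar) + g * (x - xbar) on a grid of points.
f = np.abs
xbar = 0.0
xs = np.linspace(-5, 5, 1001)
for g in (-1.0, -0.3, 0.0, 0.7, 1.0):          # candidate subgradients at 0
    assert np.all(f(xs) >= f(xbar) + g * (xs - xbar) - 1e-12)
print("subgradient inequality holds for all tested g")
```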
There exists an integer $B \ge 1$ such that, for every edge $(j, i) \in \mathcal{E}_\infty$, agent $j$ sends its information to the neighboring agent $i$ at least once every $B$ consecutive time slots, that is, at time $kB$ or at time $kB + 1$ and so on up to (at latest) time $kB + B - 1$, for any $k \ge 0$. In other words, the graph sequence $\{\mathcal{G}(k)\}$ is uniformly strongly connected.
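Uniform strong connectivity over a window of $B$ slots can be checked mechanically. A minimal sketch, under the assumption that each graph is given as a nonnegative adjacency matrix and that the union graph over every window of $B$ consecutive slots must be strongly connected:

```python
import numpy as np

def strongly_connected(A):
    """Strong connectivity via boolean reachability (repeated squaring)."""
    n = len(A)
    reach = (np.eye(n) + (np.asarray(A) > 0)).astype(float)
    for _ in range(n):
        reach = np.minimum(reach @ reach, 1.0)   # paths of doubling length
    return bool((reach > 0).all())

def uniformly_strongly_connected(graphs, B):
    """Check that the union graph over every window of B consecutive
    time slots is strongly connected."""
    return all(strongly_connected(sum(graphs[k:k + B]))
               for k in range(0, len(graphs) - B + 1))

# Two alternating directed edges whose union is a 2-cycle: the sequence is
# uniformly strongly connected with B = 2 but not with B = 1.
g1 = np.array([[0, 1], [0, 0]], dtype=float)
g2 = np.array([[0, 0], [1, 0]], dtype=float)
seq = [g1, g2, g1, g2]
print(uniformly_strongly_connected(seq, 2))  # True
print(uniformly_strongly_connected(seq, 1))  # False
```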

Main Results
In this section, motivated by [35, 36], we provide a novel distributed subgradient algorithm to solve optimization problem (1), followed by its convergence properties. To this end, we consider a group of agents $\mathcal{V} = \{1, \ldots, n\}$ with the communication topology described by a sequence of uniformly strongly connected time-varying digraphs $\mathcal{G}(k) = \{\mathcal{V}, \mathcal{E}(k), \mathcal{A}(k)\}$ as before. The distributed subgradient algorithm is a discrete-time dynamical system, described as follows.
3.1. Distributed Subgradient Algorithm. Consider a set $\mathcal{V} = \{1, \ldots, n\}$ of agents. Formally, at each iteration $k \ge 0$, agent $i$ updates its state according to the following law:
$$x_i(k+1) = \hat{x}_i(k) + h \sum_{j=1}^{n} a_{ij}(k) \bigl( \hat{x}_j(k) - \hat{x}_i(k) \bigr) - \alpha(k)\, \nabla f_i(x_i(k)), \tag{3}$$
where $k$ is the iteration number, $h$ is the control gain, $t^i_\ell$ denotes the instant when the $\ell$th event happens for agent $i$, $\hat{x}_i(k) = x_i(t^i_\ell)$ for $t^i_\ell \le k < t^i_{\ell+1}$, the positive scalars $\alpha(k) > 0$ are step-sizes, the scalars $a_{ij}(k)$ are nonnegative weights with an upper bound $\bar{a}$, and the vector $\nabla f_i(x_i(k))$ is a subgradient of agent $i$'s objective function $f_i$ at $x = x_i(k)$.
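To make the update law concrete, here is a minimal simulation sketch of an event-triggered subgradient iteration of this general shape. The specific objectives ($f_i(x) = |x - t_i|$), the fixed complete graph, and the decaying trigger threshold are illustrative assumptions of ours, not the paper's choices:

```python
import numpy as np

# Event-triggered subgradient sketch: each agent i keeps its last broadcast
# value xhat_i and updates from neighbors' broadcasts plus a local
# subgradient step.  Agents rebroadcast when the measurement error
# |xhat_i - x_i| exceeds a decaying threshold (a simplified trigger rule).
rng = np.random.default_rng(0)
n, T, h = 4, 400, 0.1
targets = np.array([-2.0, -1.0, 1.0, 4.0])   # f_i(x) = |x - targets[i]|
A = (np.ones((n, n)) - np.eye(n)) / n        # fixed weight-balanced graph
x = rng.normal(size=n)
xhat = x.copy()

for k in range(T):
    alpha = 1.0 / (k + 10)                        # diminishing step-size
    trigger = np.abs(xhat - x) > 0.5 * 0.99 ** k  # per-agent event condition
    xhat[trigger] = x[trigger]                    # triggered agents broadcast
    subgrad = np.sign(x - targets)                # subgradient of |x - t_i|
    # Update of the form (3): broadcast base + coupling - subgradient step.
    x = xhat + h * (A @ xhat - A.sum(axis=1) * xhat) - alpha * subgrad

print(x)  # agents reach approximate consensus
```

Note how the agents update from the last broadcast values $\hat{x}_j$, not the true states, so communication is only needed at event times.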

Remark 4. Denote the measurement error as
$$e_i(k) = \hat{x}_i(k) - x_i(k). \tag{4}$$
Then, we can rewrite algorithm (3) in the following form:
$$x_i(k+1) = x_i(k) + h \sum_{j=1}^{n} a_{ij}(k) \bigl( x_j(k) - x_i(k) \bigr) - \alpha(k)\, \nabla f_i(x_i(k)) + \varepsilon_i(k), \tag{5}$$
where $\varepsilon_i(k) = e_i(k) + h \sum_{j=1}^{n} a_{ij}(k) \bigl( e_j(k) - e_i(k) \bigr)$ is an error term. Letting $w_{ii}(k) = 1 - h\, d_i(k)$ with $d_i(k) = \sum_{j=1}^{n} a_{ij}(k)$, and $w_{ij}(k) = -h\, l_{ij}(k) = h\, a_{ij}(k)$ for $j \neq i$, where $l_{ij}(k)$ is the $(i,j)$th entry of the graph Laplacian, one has
$$x_i(k+1) = \sum_{j=1}^{n} w_{ij}(k)\, x_j(k) - \alpha(k)\, \nabla f_i(x_i(k)) + \varepsilon_i(k). \tag{8}$$
Since graph $\mathcal{G}(k)$ is balanced, we then have $\sum_{i=1}^{n} w_{ij}(k) = \sum_{j=1}^{n} w_{ij}(k) = 1$, that is, the weight matrix $W(k) = [w_{ij}(k)]$ is doubly stochastic. Before giving some supporting lemmas, we need the following assumption on the step-size sequence $\{\alpha(k)\}$.
Next, the transition matrices are introduced as follows:
$$\Phi(k, s) = W(k)\, W(k-1) \cdots W(s+1)\, W(s), \qquad k \ge s,$$
where $\Phi(s, s) = W(s)$ for all $s$. We write $[\Phi(k, s)]_j$ for the $j$th column of $\Phi(k, s)$, and $[\Phi(k, s)]_{ij}$ for the entry in the $i$th row and $j$th column of $\Phi(k, s)$. A crucial property of the transition matrices, which plays a key role in our analysis of the algorithm, is given in the following.
Lemma 7 (see [35]). Let the weight-balanced Assumption 1, the uniformly strongly connected Assumption 2, and the nondegeneracy Assumption 6 hold. Then the entries $[\Phi(k, s)]_{ij}$ of the transition matrices converge to $1/n$ as $k \to \infty$ at a geometric rate, uniformly with respect to $s$; that is, for all $i, j \in \{1, \ldots, n\}$,
$$\left| [\Phi(k, s)]_{ij} - \frac{1}{n} \right| \ \le \ 2\, \frac{1 + \eta^{-k_0}}{1 - \eta^{k_0}} \left( 1 - \eta^{k_0} \right)^{(k-s)/k_0},$$
where $\eta$ is the lower bound of the nondegeneracy Assumption 6, $n$ is the number of agents, $k_0 = (n-1)B$, and $B$ is given in the uniformly strongly connected Assumption 2.
Proof.We refer the reader to the papers [35,37] for proofs of this and similar assertions.
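The geometric convergence in Lemma 7 can be observed numerically. The following sketch multiplies randomly generated doubly stochastic matrices (a stand-in for the weight matrices $W(k)$; the Sinkhorn-style construction is our assumption) and tracks the deviation of the product from the averaging matrix $(1/n)\mathbf{1}\mathbf{1}^T$:

```python
import numpy as np

# Products of doubly stochastic matrices with strictly positive entries
# approach the averaging matrix (1/n) 11^T; the max-entry deviation from
# 1/n shrinks geometrically with the number of factors.
rng = np.random.default_rng(1)
n = 5

def random_doubly_stochastic(n):
    # Sinkhorn-style normalization of a positive random matrix.
    M = rng.uniform(0.1, 1.0, size=(n, n))
    for _ in range(200):
        M /= M.sum(axis=1, keepdims=True)  # normalize rows
        M /= M.sum(axis=0, keepdims=True)  # normalize columns
    return M

Phi = np.eye(n)
errors = []
for k in range(30):
    Phi = random_doubly_stochastic(n) @ Phi   # transition matrix product
    errors.append(np.abs(Phi - 1.0 / n).max())

print(errors[0], errors[-1])  # deviation from 1/n shrinks rapidly
```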
Before moving on, it is important to introduce the following lemma.

Lemma 8 (see [33]). Let $0 < \beta < 1$ and let $\{\gamma_k\}$ be a positive scalar sequence with $\lim_{k \to \infty} \gamma_k = 0$. Then
$$\lim_{k \to \infty} \sum_{\ell=0}^{k} \beta^{\,k-\ell}\, \gamma_\ell = 0.$$
Proof. We do not give the proof of Lemma 8 since it is almost identical to that of Lemma 7 in [33].
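Lemma 8 is easy to verify numerically via the recursion $s_k = \beta s_{k-1} + \gamma_k$, which equals the convolution sum; the particular sequence $\gamma_k = 1/(k+1)$ is an illustrative choice of ours:

```python
import numpy as np

# Numerical illustration of Lemma 8: for 0 < beta < 1 and gamma_k -> 0,
# the sum s_k = sum_{l=0}^{k} beta^(k-l) * gamma_l also tends to 0.
beta = 0.9
K = 5000
gamma = 1.0 / (np.arange(K) + 1.0)   # positive sequence tending to 0

s = 0.0
history = []
for k in range(K):
    s = beta * s + gamma[k]          # recursive form of the convolution sum
    history.append(s)

print(history[10], history[-1])  # the tail values are much smaller
```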
Chen and Ren [47] provided an event-triggered consensus algorithm for multiagent systems. Inspired by this work, we design a new event-triggered scheme, applied as follows.
We now define the triggering time sequence $\{t^i_\ell\}$ for agent $i$ by
$$t^i_{\ell+1} = \inf\bigl\{ k > t^i_\ell : \|e_i(k)\| \ge c\, \sigma^k \bigr\}, \tag{15}$$
where $\|e_i(k)\| - c\, \sigma^k$ is referred to as the trigger function, for appropriately chosen $0 < \sigma < 1$ and $c > 0$, with $D = \max_{i \in \mathcal{V}} D_i$. Substituting (15) into (18) yields (19). Taking the limits of both sides of (19) as $k \to \infty$, one can see that $\lim_{k \to \infty} \|e_i(k)\| = 0$. Thus, the proof is completed.
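The behavior of a decaying-threshold trigger rule of this kind can be sketched as follows. The scalar state signal and its drift are illustrative assumptions; only $\sigma$ and $c$ mirror the values used later in the simulation section:

```python
import numpy as np

# Sketch of the decaying-threshold trigger rule: the agent rebroadcasts
# (resetting its measurement error to zero) whenever |xhat - x| reaches
# the threshold c * sigma**k.
sigma = np.exp(-0.015)
c = 1.0 / (1.0 - sigma)
rng = np.random.default_rng(2)

x, xhat = 0.0, 0.0
events = []
for k in range(800):
    x += 0.01 * (1.0 + rng.normal())      # stand-in state evolution (drifts)
    if abs(xhat - x) >= c * sigma ** k:   # trigger condition met
        xhat = x                          # event: broadcast, error resets
        events.append(k)

print(len(events), events[:3])
```

Because the threshold decays geometrically, no events fire while it dominates the measurement error; once it shrinks below the per-step drift, events become frequent and the error is forced to zero along with the threshold.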
Remark 10. Lemma 9 shows that if the error term in system (8) decays, the differences between the states of all the agents will vanish asymptotically.

Convergence Analysis.
We next show that the agents reach a consensus asymptotically, which means that the agent estimates $x_i(k)$ converge to the same point as $k$ goes to infinity.
We now introduce a well-known convergence result, stated in the following lemma.
Proof. The proof follows along the same lines as that in [38] and is thus omitted.
In what follows, we present a key lemma that is important in the analysis of the distributed optimization algorithm. Thereafter, we study the convergence behavior of the subgradient algorithm and show that the optimal solution is asymptotically reached.
Proof. Applying (3) to the average process and using $\bar{x}(k) = (1/n) \sum_{i=1}^{n} x_i(k)$, the control law (33) can be rewritten as
$$\bar{x}(k+1) = \bar{x}(k) - \frac{\alpha(k)}{n} \sum_{i=1}^{n} \nabla f_i(x_i(k)) + \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i(k).$$
Now, let $z \in \mathbb{R}^m$ be an arbitrary vector; we have for all $k \ge 0$ that (36) holds. Since the subgradient of each $f_i$ is uniformly bounded by $D_i$, with $D = \max_{i \in \mathcal{V}} D_i$, it follows for all $k \ge 0$ that
$$\left\| \frac{1}{n} \sum_{i=1}^{n} \nabla f_i(x_i(k)) \right\| \le D.$$
We next study the cross-term $\nabla f_i(x_i(k))^T (\bar{x}(k) - z)$ in (36). For this term, we write
$$\nabla f_i(x_i(k))^T (\bar{x}(k) - z) = \nabla f_i(x_i(k))^T (\bar{x}(k) - x_i(k)) + \nabla f_i(x_i(k))^T (x_i(k) - z).$$
Using the subgradient boundedness, we can lower-bound the first term $\nabla f_i(x_i(k))^T (\bar{x}(k) - x_i(k))$ as
$$\nabla f_i(x_i(k))^T (\bar{x}(k) - x_i(k)) \ \ge \ -D\, \|\bar{x}(k) - x_i(k)\|.$$
As for the second term $\nabla f_i(x_i(k))^T (x_i(k) - z)$, we use the convexity of $f_i$ to obtain
$$\nabla f_i(x_i(k))^T (x_i(k) - z) \ \ge \ f_i(x_i(k)) - f_i(z),$$
from which, by adding and subtracting $f_i(\bar{x}(k))$ and using the Lipschitz continuity of $f_i$ (implied by the subgradient boundedness), we further obtain
$$\nabla f_i(x_i(k))^T (x_i(k) - z) \ \ge \ f_i(\bar{x}(k)) - f_i(z) - D\, \|\bar{x}(k) - x_i(k)\|,$$
where in the second inequality we used the subgradient boundedness. Now, employing (41) with $z = x^*$, where $f^*$ is the optimal value, summing (43) over $[1, \infty)$, dropping the nonnegative term on the left-hand side, and multiplying by $n$ on both sides, we obtain inequality (44). We are now in a position to analyze inequality (44), whose right-hand side can be partitioned into three terms. The bound on the first term follows easily; similarly, under the step-size Assumption 5, the bound on the third term follows immediately. Thus, we place emphasis on the second term of (44).
Example 1. Consider optimization problem (1) with $f_i(x) = 0.5\, i^2 \ln(1 + \|x\|^2) + \|x\|^2$ ($i = 1, 2, 3, 4, 5$), where $f_i : \mathbb{R}^3 \to \mathbb{R}$ is the convex objective function of agent $i$ and $x \in \mathbb{R}^3$ is the global decision vector. Moreover, we use algorithm (3) with the triggering function (15). In the simulation, we select the design parameters $\sigma = e^{-0.015}$, $c = 1/(1 - e^{-0.015})$, and $h = 0.005$, and the step-sizes $\alpha(k) = 0.02/(k+1)$. The simulation results of the distributed subgradient algorithm (3) are shown in Figures 1-4. The state evolutions of all agents are shown in Figure 1, from which we can observe that all the agents asymptotically achieve the optimal solution within 800 iterations. Figure 2 shows that the distributed control input $u_i(k)$ tends to 0 as consensus is achieved. The event-triggered sampling instants for each agent are depicted in Figure 3, from which we can observe that the updates of the control inputs are asynchronous. According to the statistics, the numbers of sampling instants for the five agents are [20, 20, 20, 19, 18], so the average number of sampling instants is about 20. Thus, the average update rate of the control inputs is 20/800 = 2.5%. In Figure 4, for agent 1, it is clear that the norm of the measurement error $\|e_1(k)\|$ is asymptotically reduced to zero.
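A compact sketch in the spirit of this example is given below. The objective $f_i(x) = 0.5\, i^2 \ln(1 + \|x\|^2) + \|x\|^2$, its gradient, the complete unit-weight graph, and the step-size $\alpha(k) = 0.02/(k+1)$ are assumptions of this sketch rather than the paper's exact setup; the qualitative outcome it demonstrates is the approximate consensus of all agents:

```python
import numpy as np

# Five agents, decision variable in R^3, event-triggered subgradient updates
# with threshold c * sigma**k; parameter values mirror Example 1.
n, m, T = 5, 3, 800
h, sigma = 0.005, np.exp(-0.015)
c = 1.0 / (1.0 - sigma)
rng = np.random.default_rng(3)

def grad(i, x):
    # gradient of f_i(x) = 0.5*(i+1)^2*ln(1+||x||^2) + ||x||^2 (agents 1..5)
    return (i + 1) ** 2 * x / (1.0 + x @ x) + 2.0 * x

A = np.ones((n, n)) - np.eye(n)             # complete graph, weight-balanced
x = rng.normal(size=(n, m))
xhat = x.copy()

for k in range(T):
    alpha = 0.02 / (k + 1)
    for i in range(n):
        if np.linalg.norm(xhat[i] - x[i]) >= c * sigma ** k:
            xhat[i] = x[i]                  # event: broadcast current state
    lap = A.sum(axis=1)[:, None] * xhat - A @ xhat   # Laplacian acting on xhat
    x = xhat - h * lap - alpha * np.array([grad(i, x[i]) for i in range(n)])

print(np.abs(x - x.mean(axis=0)).max())  # maximum disagreement: small
```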

Conclusion and Future Work
In this paper, a novel consensus-based event-triggered algorithm for solving the distributed convex optimization problem over time-varying directed networks has been analyzed in detail. We have proved that, based on the designed distributed event-triggered scheme and the uniformly strongly connected communication graph sequence $\{\mathcal{G}(k)\}$, the algorithm makes all the nodes converge to the optimal point asymptotically. Moreover, the theoretical results are demonstrated through a numerical example. Future work will concentrate on event-triggered algorithms for the constrained convex optimization problem; the convergence rate of the algorithm introduced in this paper also deserves further investigation.