Distributed Constraint Optimization with Flocking Behavior

This paper studies distributed optimization with flocking behavior and local constraint sets. Multiagent systems with continuous-time, second-order dynamics are studied. Each agent has a local constraint set and a local objective function, both known only to that agent. The objective is for the agents to optimize the sum of the local functions using only local interaction and information. First, a bounded potential function used to construct the controller is given, and a distributed optimization algorithm that makes the group of agents avoid collisions during the evolution is presented. Then, it is proved that all agents track the optimal velocity while avoiding collisions. The proof of the main result is divided into three steps: global set convergence, consensus analysis, and optimal set convergence. Finally, a simulation is included to illustrate the results.


Introduction
The study of distributed optimization of multiagent systems has attracted extensive attention in recent years. The objective of distributed optimization is to optimize the sum of local functions using only local interaction and information. Both unconstrained and constrained models of distributed optimization problems have been researched. Nedic et al. [1] first gave a distributed subgradient algorithm for unconstrained distributed optimization problems and proved that all agents optimize the sum of local functions over a time-varying graph. Wang et al. [2] first introduced a continuous-time algorithm with undirected topology. Rahili et al. [3] studied a distributed optimization problem with first-order and second-order dynamics, in which the local functions are time-varying. Zhao [4] studied a distributed continuous-time optimization problem for general linear multiagent systems; under edge- and node-based frameworks, respectively, they developed two adaptive algorithms to minimize the team performance function. Yang et al. [5] studied distributed unconstrained optimization with flocking and proposed a distributed adaptive protocol for multiagent systems to realize flocking behavior. For distributed constrained optimization problems, there are also some results. Nedic et al. [6] presented a distributed projected subgradient algorithm for the constrained optimization problem and studied its convergence properties. Some modified or extended models of distributed constrained optimization were also given [7][8][9][10][11][12][13][14][15][16]. Qiu et al. [8] studied distributed convex optimization of continuous-time dynamics with a common state set constraint; they showed that if the time-varying gains of the gradients satisfy a persistence condition, the states of all agents converge to the optimal point within the set constraint. Lin et al. [9] proposed a nonuniform gradient-gain control method and a finite-time control method for distributed constrained optimization. Zeng et al.
[10] studied a nonsmooth convex optimization problem and proposed a distributed continuous-time algorithm combining primal-dual methods for saddle-point seeking with projection methods for set constraints. Lin [11] addressed distributed optimization problems with nonuniform convex constraint sets and nonuniform step-sizes. Liu and Wang [12] proposed a group of coupled two-layer projection networks with bounded constraints. Li et al. [13] gave a distributed discrete-time control law for solving the nonconvex problem with inequality constraints; they transformed the nonconvex problem into a sequence of strongly convex subproblems through a convex approximation technique. Zhang et al. [14] optimized a sum of convex functions defined over a graph, where every edge in the graph carries a linear equality constraint. Hong et al. [15] studied two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm and the Gradient Alternating Direction Method of Multipliers, for solving a class of linearly constrained nonconvex optimization problems. Gu et al. [16] proposed a solution tool for distributed convex problems with coupling equality constraints; the proposed algorithm was implemented over time-varying directed networks.
As these works show, there are a great number of results on distributed optimization. However, distributed optimization with flocking behavior is rarely considered. The flocking problem is a significant issue considered by many researchers [17][18][19][20][21][22][23][24][25]. The purpose of the flocking problem is to control a group of agents to move using local information while maintaining connectivity, avoiding collisions, and reaching a common speed. Nevertheless, the above results cannot be directly applied to such more complex flocking problems. In this paper, a distributed optimization problem that also accounts for flocking behavior is researched; the aim is to solve this problem with local constraint sets. Due to the coexistence of constraint sets, flocking behavior, and optimization objectives, the analysis poses great challenges. There are three major contributions in this manuscript. First, a bounded potential function is used to construct the controller, which makes the group of agents avoid collisions during the evolution. Second, the proposed control law drives each agent's velocity state into its local constraint set in finite time. Third, the convergence of the control law is proved in three steps.
An outline of the paper is as follows. The notation and some essential concepts used in this paper are given in Section 2. In Section 3, we formulate the distributed constrained optimization problem with flocking behavior. In Section 4, we present the main result and prove it in three steps. In Section 5, a simulation example is presented. Finally, conclusions are drawn in Section 6.

Notations and Preliminaries
Notations. The identity matrix in R^{n×n} is denoted by I_n. The index set {1, 2, . . . , N} is denoted by 𝒩. A ⊗ B is the Kronecker product of A and B. sgn(x) is the component-wise sign function of x. The gradient of f(x) at x is denoted by ∇f(x). The Euclidean norm of a vector x is denoted by ‖x‖, and ‖x‖_1 denotes its 1-norm. P_X(x) denotes the projection of the vector x onto the closed convex set X; i.e., P_X(x) = arg min_{y∈X} ‖x − y‖.
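When X is a box (as in the simulation of Section 5), the projection P_X has a coordinate-wise closed form; a minimal sketch, with illustrative bounds:

```python
def project_box(x, lo, hi):
    """P_X(x) for the box X = [lo, hi]^m: since arg min_{y in X} ||x - y||
    decouples across coordinates, the projection is an element-wise clamp."""
    return [min(max(xi, lo), hi) for xi in x]

# An exterior point is clamped to the nearest boundary point;
# an interior point is its own projection.
print(project_box([1.5, -2.0, 0.3], -1.0, 1.0))  # -> [1.0, -1.0, 0.3]
```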

Problem Formulation
We consider N agents operating in R^m, with dynamics expressed by the double integrators

ẋ_i = v_i,  v̇_i = u_i,  i ∈ 𝒩,  (2)

where x_i ∈ R^m is the position vector of agent i, v_i ∈ R^m is the velocity vector, and u_i ∈ R^m is the control input acting on agent i. A local cost function f_i(v): R^m → R is assigned to agent i (i ∈ 𝒩), and it is known only to agent i. The global cost function is f(v) = Σ_{i∈𝒩} f_i(v). The topology graph studied in this paper is dynamic. In the remainder of this paper, the dynamic undirected graph, the adjacency matrix, the incidence matrix, and the Laplacian matrix at time t are written simply as G, A, D, and L, respectively. In the dynamic graph, we assume (i, j) ∈ E if and only if ‖x_i − x_j‖ < r, where r > 0 is the communication radius of an agent.
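The double-integrator dynamics (2) and the radius-based dynamic edge set can be sketched with a simple Euler discretization (the step size and list-of-lists state layout are implementation choices, not from the paper):

```python
import math

def step(pos, vel, u, dt):
    """One Euler step of the double-integrator dynamics (2):
    x_i' = v_i, v_i' = u_i, applied to every agent."""
    pos = [[p + dt * v for p, v in zip(pi, vi)] for pi, vi in zip(pos, vel)]
    vel = [[v + dt * a for v, a in zip(vi, ui)] for vi, ui in zip(vel, u)]
    return pos, vel

def edges(pos, r):
    """Edge set of the dynamic graph: (i, j) in E iff ||x_i - x_j|| < r."""
    n = len(pos)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(pos[i], pos[j]) < r}
```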
The aim of this paper is to design the controller u_i for system (2), using the local function and local information gathered from neighbors, such that all agents track the optimal velocity v* while maintaining connectivity and avoiding collisions. The optimal velocity v* satisfies

v* = arg min_{v∈Ω} Σ_{i∈𝒩} f_i(v),  Ω = ∩_{i∈𝒩} Ω_i,  (4)

where Ω_i is the local constraint set of agent i, and each Ω_i is closed and convex. Let X ⊂ Ω denote the optimal set of problem (4).
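As a centralized sanity check (not the distributed algorithm of this paper), problem (4) can be solved by projected gradient descent; the quadratic local costs and box bounds below are illustrative assumptions, chosen so the constrained minimizer lands on the boundary of Ω as in Section 5:

```python
def grad_sum(v, centers):
    """Gradient of f(v) = sum_i ||v - c_i||^2 / 2, an illustrative choice
    of strictly convex local costs (Assumption 3 holds for it)."""
    return [sum(v[k] - c[k] for c in centers) for k in range(len(v))]

def project_box(v, lo, hi):
    return [min(max(x, lo), hi) for x in v]

def projected_gradient(centers, lo, hi, eta=0.1, iters=500):
    """Centralized baseline: v <- P_Omega(v - eta * grad f(v))."""
    v = [0.0] * len(centers[0])
    for _ in range(iters):
        g = grad_sum(v, centers)
        v = project_box([vk - eta * gk for vk, gk in zip(v, g)], lo, hi)
    return v
```

With centers averaging to [−1, −1] and the box Ω = [−0.6, 0.6]², the minimizer is clipped to the corner [−0.6, −0.6], mirroring the boundary optimum reported in the simulation section.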

Complexity 3
The problem defined in (4) is equivalent to minimizing the global cost f(v) over v ∈ Ω. To ensure the solvability of problem (4), the following assumption is needed.

Assumption 3. Each function f_i(v) is strictly convex and differentiable. Let ∇f_i(v) = αv + φ_i(v), ∀i ∈ 𝒩, where α ≥ 0 and φ_i(v) is a continuous function satisfying ‖φ_i(v)‖ < θ for a certain positive number θ and all v.
Because each Ω_i is a closed, bounded, and convex set, there is a constant δ > 0 such that ‖v‖ ≤ δ, ∀v ∈ Ω_i. Moreover, since the function f(v) is convex and differentiable and the constraint set Ω is convex and closed, the optimal solution set X ⊂ Ω of problem (4) is nonempty, closed, and bounded.
In order to smooth the controller, we adopt the σ-norm first presented in [21]:

‖z‖_σ = (1/ε)(√(1 + ε‖z‖²) − 1),  ε > 0.

The function ‖z‖_σ, unlike the norm ‖z‖, which is not differentiable at z = 0, is differentiable everywhere. The gradient of ‖z‖_σ is given by

∇‖z‖_σ = z / √(1 + ε‖z‖²).

To propose a control law with collision avoidance, we need a smooth collective potential function V_ij.

Definition 4. The potential function V_ij is a differentiable nonnegative function of ‖x_i − x_j‖_σ that satisfies the following conditions.
(1) V_ij = V_ji has a unique minimum at ‖x_i − x_j‖ = d, where d is the desired distance between agents i and j, 0 < d < r, and r > 0 is the communication radius.
(2) V_ij is bounded; that is, there exists a constant V̄ > 0 such that V_ij ≤ V̄ for all x_i, x_j.

To meet the objective of this paper, we present the following algorithm:

u_i = −Σ_{j∈N_i} ∇_{x_i} V_ij − λ Σ_{j∈N_i} a_ij (v_i − v_j) − ∇f_i(v_i) − β sgn(v_i − P_{Ω_i}(v_i)),  (9)

where λ and β are positive constants and P_{Ω_i}(·) is the projection onto Ω_i. It is worth pointing out that ∇f_i(v_i) depends only on agent i's own velocity.
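Assuming the σ-norm above is the standard one from the flocking literature [21], it and its everywhere-defined gradient can be sketched as follows (ε = 0.1 is an illustrative value):

```python
import math

def sigma_norm(z, eps=0.1):
    """||z||_sigma = ((1 + eps*||z||^2)**0.5 - 1) / eps;
    smooth at z = 0, unlike the Euclidean norm."""
    s = sum(x * x for x in z)
    return (math.sqrt(1.0 + eps * s) - 1.0) / eps

def sigma_grad(z, eps=0.1):
    """Gradient of the sigma-norm: z / (1 + eps*||z||^2)**0.5."""
    s = sum(x * x for x in z)
    return [x / math.sqrt(1.0 + eps * s) for x in z]
```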
Remark 7. In (9), the first term regulates the relative positions between agent i and its neighbors and is responsible for collision avoidance and cohesion in the group; the second term drives the agents toward velocity alignment; the third term is the negative gradient −∇f_i(v_i); and the fourth term pulls the velocity vector onto Ω_i.
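A per-agent sketch of how the four terms described in Remark 7 combine (the callbacks grad_V, grad_f, and project, and the default gains, are placeholders for the paper's specific choices):

```python
def control(i, pos, vel, nbrs, grad_V, grad_f, project, lam=1.0, beta=20.0):
    """Sketch of control law (9), term by term as in Remark 7:
    potential gradient (cohesion / collision avoidance), velocity
    alignment, negative local gradient, and a signed pull of v_i
    toward its projection onto Omega_i."""
    sgn = lambda x: (x > 0) - (x < 0)
    m = len(vel[i])
    u = [0.0] * m
    for j in nbrs:                            # neighbors N_i of agent i
        gV = grad_V(pos[i], pos[j])           # dV_ij / dx_i
        for k in range(m):
            u[k] -= gV[k] + lam * (vel[i][k] - vel[j][k])
    gf = grad_f(vel[i])                       # gradient of local cost f_i
    pv = project(vel[i])                      # P_{Omega_i}(v_i)
    for k in range(m):
        u[k] -= gf[k] + beta * sgn(vel[i][k] - pv[k])
    return u
```

With zero potential gradient and zero local gradient, only the alignment term acts, so a faster agent is decelerated toward its neighbor's velocity.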

Main Theorem and Convergence Analysis
We give the main theorem of this paper and propose the convergence of the control law in this subsection.First, we give the main result of this paper as follows.
Theorem 8. Assume that the graph G(t) is connected for all t and that Assumption 3 holds. For system (2) with algorithm (9), if the gain λ > 4‖Φ‖/λ₂(L) and the gain β is larger than a constant determined by the bounds in Assumption 3 and on the sets Ω_i, then the agents' velocities track the optimal velocity and the agents avoid interagent collisions.
In the following, we prove Theorem 8. To do so, we first verify global set convergence, second give the consensus analysis, and third prove that all agents converge to the optimal set.

Proof. First, to verify global set convergence, consider the Lyapunov function candidate V₁ built from the distances ‖v_i − P_{Ω_i}(v_i)‖. In computing its time derivative, we use the facts that P_{Ω_i}(v_i) ∈ Ω_i and ‖P_{Ω_i}(v_i)‖ ≤ δ. From (11), it follows that V̇₁ is bounded above by a negative constant whenever V₁ > 0. Integrating both sides of this inequality shows that V₁(t) converges to zero in finite time. Namely, there is a constant t₁ > t₀ such that, for all t > t₁, ‖v_i − P_{Ω_i}(v_i)‖ = 0; that is, under control (9), v_i(t) ∈ Ω_i for all t > t₁.
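The finite-time claim of this first step can be checked numerically for the decoupled sign dynamics v̇ = −β sgn(v − P_Ω(v)) with a box Ω (a simplified, illustrative stand-in for the full closed-loop system):

```python
def simulate_set_convergence(v0, lo, hi, beta=2.0, dt=0.001, T=5.0):
    """Euler simulation of v' = -beta * sgn(v - P_Omega(v)) for the box
    Omega = [lo, hi]^m: the distance to Omega shrinks at rate beta,
    so v enters Omega in finite time (step 1 of the proof)."""
    sgn = lambda x: (x > 0) - (x < 0)
    v = list(v0)
    t = 0.0
    while t < T:
        p = [min(max(x, lo), hi) for x in v]        # P_Omega(v)
        if all(x == y for x, y in zip(v, p)):
            return t, v                              # entered Omega
        v = [x - dt * beta * sgn(x - y) for x, y in zip(v, p)]
        t += dt
    return t, v
```

Starting at distance 1 from the box with β = 2, the state reaches Ω near t ≈ 0.5, not merely asymptotically.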
Second, we present the consensus analysis. From the above proof, for all t > t₁ we have v_i(t) ∈ Ω_i. From Assumption 3, ∇f_i(v_i) is continuous; since each Ω_i is a bounded closed convex set, the property of continuous functions on closed bounded sets implies that ∇f_i(v_i) is bounded on Ω_i. For all t > t₁, consider the Lyapunov function candidate V₂ composed of the potential functions and the velocity disagreement. Note that, due to the symmetric nature of V_ij, the cross terms in the derivative pair up. Taking the time derivative of V₂ and letting Φ collect the bounded gradient terms, with ‖Φ‖ and hence ‖Φ‖² upper bounded for all t > t₁, we get V̇₂ ≤ 0 provided λ > 4‖Φ‖/λ₂(L). By LaSalle's invariance principle, the velocities of all agents converge to a common value in ∩_{i∈𝒩} Ω_i as t → ∞; that is, the velocities of all agents in system (2) asymptotically become the same.
From the convexity of the functions f_i(v_i) and (15), inequality (24) follows. So, for all t > t₁, both (14) and (24) hold.
Third, for t > t₂, denote the Lyapunov function candidate V₃. Similar to the proof above, in view of the arbitrariness of ε, letting ε → 0, we find that V̇₃(t) is bounded above by a positive multiple of f(P_X(v)) − f(v) ≤ 0 for t > t₂. By LaSalle's invariance principle and the uniqueness of the global minimum of f(v) on Ω, we get lim_{t→∞} ‖v_i − P_X(v)‖ = 0. That is, the global cost function in (4) is minimized as t → ∞.
Remark 9. Compared with distributed optimization under a common constraint [18][19][20], we extend the control law to distributed optimization with local constraints. Besides, the result of [8] requires the time-varying gains of the gradients to satisfy a persistence condition, whereas the gain conditions of this paper are relatively easy to satisfy. Compared with the control laws in [10, 12], the control law of this paper is relatively simple. Moreover, this paper addresses the distributed optimization problem with flocking behavior and local constraint sets.

Simulation
In this section, a numerical example is presented to verify the feasibility of the proposed algorithm and the correctness of our theoretical analysis.
To construct the potential function V_ij, we choose an action function ψ(z) that satisfies condition (2) in Definition 4, together with the corresponding repulsive potential function. The following parameters remain fixed throughout the simulation: d = 3 and r = 5. The potential function ψ(z) has the shape shown in Figure 1, and the potential function V_ij is chosen to satisfy Definition 4. In the illustration, we consider N = 8 agents in a 2D plane. The task of the multiagent system is to make the velocities minimize the total cost function Σ_{i=1}^{8} f_i(v_i(t)), where v_i(t) = (v_{i1}(t), v_{i2}(t))ᵀ is the velocity of agent i. We consider the second-order dynamic system (2) with the control algorithm (9). The local cost functions are those used in [9]. We assume that each local constraint set Ω_i is a square in R². Minimizing Σ_{i=1}^{8} f_i(v) over the constraint set Ω yields v* = [−0.6, −0.6]ᵀ as the optimal point of the objective function (4).
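The paper's exact ψ is not reproduced here; the following is a hypothetical bounded action potential with the stated d = 3 and r = 5, a unique minimum at the desired distance, and a constant value beyond the communication radius:

```python
def psi(s, d=3.0, r=5.0):
    """Illustrative bounded potential: nonnegative, unique minimum at the
    desired distance s = d, and saturated (constant) for s >= r so the
    potential stays bounded, as condition (2) of Definition 4 requires."""
    s = min(s, r)                 # freeze beyond the communication radius
    return (s - d) ** 2 / (1.0 + (s - d) ** 2)
```

The saturating quotient keeps ψ below 1 everywhere, so a lone agent far from the group cannot make the Lyapunov function blow up.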
The simulation results are shown in Figures 3-5. For simplicity, we choose the coefficients in algorithm (9) as λ = 1 and β = 20. Figure 3 shows the initial state of the group, including initial positions and velocities. All initial positions are set on a line, and all initial velocities have arbitrary directions and magnitudes within the range [−1, 1] m/s. Figure 4 gives the final steady-state configuration and the final velocities of the agent group, where the solid lines represent the neighboring relations between agents and the dotted arrows represent the velocities of all agents. Figure 5 plots the velocities v_i, i ∈ 𝒩. It is easy to see that flocking motion is obtained and the velocities of all agents converge to the optimal velocity.

Conclusion
We studied distributed optimization with flocking behavior and local constraint sets in this paper. Multiagent systems with continuous-time, second-order dynamics were considered. Each agent has a local constraint set and a local objective function, both known only to that agent. The objective is for the agents to optimize the sum of the local functions using only local interaction and information. First, a bounded potential function used to construct the controller was given, and a distributed constrained optimization algorithm was presented that makes all agents avoid collisions during the evolution. Then, it was proved that all agents track the optimal velocity while avoiding interagent collisions. The proof of the main result was divided into three steps: global set convergence, consensus analysis, and optimal set convergence. Finally, a simulation was included to illustrate the results.

Figure 2: The local constraint sets Ω_i and their intersection Ω.