Error Bounds and Finite Termination for Constrained Optimization Problems

We present a global error bound for the projected gradient of nonconvex constrained optimization problems and a local error bound for the distance from a feasible solution to the optimal solution set of convex constrained optimization problems, by using the merit function involved in the sequential quadratic programming (SQP) method. For the solution sets (the stationary points set and the KKT points set) of nonconvex constrained optimization problems, we establish the definitions of generalized nondegeneration and generalized weak sharp minima. On this basis, we give necessary and sufficient conditions for a feasible solution sequence of a nonconvex constrained optimization problem to terminate finitely at each of these two solution sets. Accordingly, the results in this paper improve and generalize existing results known in the literature. Further, since the merit function is easy to compute, we use it, through the global error bound for the projected gradient, to describe these necessary and sufficient conditions.

It is well known that SQP is an important method for solving the problem (P). Its essential idea is to approximate the solutions of the problem (P) by the optimal solutions of a sequence of quadratic programs. The solutions here may be the stationary points, KKT points, or optimal solutions of the problem (P).
Suppose that x ∈ S is the current iterate for the problem (P). During the iterative process of the SQP method, the next iterate is generally generated by solving a subproblem of the following form:

(QP(x))  min ⟨∇f(x), y − x⟩ + (1/2)⟨y − x, H(x)(y − x)⟩  s.t.  y ∈ S(x),

where S(x) = {y ∈ R^n | g_i(x) + ⟨∇g_i(x), y − x⟩ ≤ 0, i ∈ I(x)}, ∇f(x) and ∇g_i(x) are the gradients of f and g_i, respectively, and H(x) is a symmetric positive definite matrix. The matrix H(x) is modified and reselected along the iterative process.
For the matrix H(x), we assume in this paper that there exist positive numbers λ_min and λ_max such that λ_min‖d‖² ≤ ⟨d, H(x)d⟩ ≤ λ_max‖d‖² for all d ∈ R^n and all x ∈ S, where ‖⋅‖ stands for the Euclidean norm. Due to the convexity of g_i(⋅) and I(x) ⊆ {1, 2, ..., m}, it is easy to see that S ⊆ S(x) for all x ∈ S.
The negative of the optimal value of (QP(x)),

φ(x) = −min_{y∈S(x)} {⟨∇f(x), y − x⟩ + (1/2)⟨y − x, H(x)(y − x)⟩},

is referred to as a merit function.
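Computationally, evaluating φ(x) amounts to solving one convex QP. The following is a minimal sketch (our own illustration, not code from the paper): the callables grad_f, g, grad_g, and H are assumed to be supplied by the user, and for simplicity all m constraints are linearized, that is, I(x) = {1, ..., m}.

    import numpy as np
    from scipy.optimize import minimize

    def merit_phi(x, grad_f, g, grad_g, H):
        """Evaluate phi(x) = -(optimal value of QP(x)) and the QP solution.

        grad_f(x): gradient of f at x (n-vector); g(x): constraint values
        (m-vector); grad_g(x): m-by-n Jacobian of g; H(x): symmetric
        positive definite n-by-n matrix.  Here I(x) = {1, ..., m}.
        """
        gx, Jx, Hx, cx = g(x), grad_g(x), H(x), grad_f(x)

        def quad_model(y):                 # objective of QP(x)
            d = y - x
            return cx @ d + 0.5 * d @ (Hx @ d)

        # linearized constraints: g_i(x) + <grad g_i(x), y - x> <= 0
        cons = [{'type': 'ineq',
                 'fun': lambda y, i=i: -(gx[i] + Jx[i] @ (y - x))}
                for i in range(len(gx))]
        res = minimize(quad_model, x, constraints=cons, method='SLSQP')
        return -res.fun, res.x             # phi(x) and x*(x)

Since y = x is feasible for (QP(x)) whenever x ∈ S, the optimal value of the subproblem is nonpositive, so the returned φ(x) is nonnegative, in line with the discussion in Section 3.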
It is well known that the SQP method has wide and effective application in solving optimization problems (see [1–7]). However, the aim of this paper is not to propose more algorithms or computational techniques for SQP methods but to study global error bounds for the projected gradient of nonconvex problems (P) and local error bounds for the distance from a feasible solution to the optimal solution set of convex problems (P), with both error bounds expressed by means of the merit function φ(x). This is one of the main contents of this paper.
The theory of error bounds has attracted a lot of attention, and many good results have been obtained. In particular, [8,9] established several types of global error bounds for monotone affine variational inequality problems; [10–12] developed global error bounds in R^n for strongly monotone variational inequality problems; [13,14] obtained global error bounds for generalized linear complementarity problems and monotone variational inequality problems. However, to the best of our knowledge, there is not much work concerning error bounds in SQP methods. This motivates our research.
It should be noted that the merit function φ(x) considered here is different from the regular gap function (another kind of merit function) discussed in [10,11,15,16], since φ(x) is generated over a polyhedral set S(x) containing S, rather than over S itself. The main advantage of this modification is that x is a KKT point of the problem (P) if and only if φ(x) = 0. In addition, φ(x) is easier to compute than the regular gap function.
Another main contribution of this paper is the finite termination of a feasible solution sequence; that is, the feasible solution sequence converges finitely to the solution sets (the stationary points set and the KKT points set) of problem (P). This topic has received considerable attention (see [17–25]). Among these, under the assumption that the solution set is a set of weak sharp minima or is nondegenerate, [20–25] studied finite termination, respectively; [17–19] discussed, under stronger conditions, finite termination for some new efficient algorithms. It is worth mentioning that Burke and Ferris (see [20]), assuming that the solution set of a convex optimization problem satisfies the condition of weak sharp minima together with other hypotheses, put forward a necessary and sufficient condition for a feasible solution sequence to converge finitely to the solution set (see [20, Theorem 4.7]). Afterwards, under the same conditions, [21] verified the conclusion for pseudomonotone⁺ variational inequality problems. Recently, [25, Theorem 2] simplified the conditions of [20, Theorem 4.7] and confirmed that [20, Theorem 4.7] remains valid assuming only that the solution set is a set of weak sharp minima. For nonconvex optimization problems (P), however, establishing a necessary and sufficient condition for a feasible solution sequence to converge finitely to the solution set, when that set satisfies the condition of weak sharp minima or nondegeneration, is undoubtedly of great significance, but up to now we have not seen research literature on this issue.
In this paper, inspired by [25, Theorem 2], we solve this problem. We first extend the concepts of nondegeneration and weak sharp minima and, for the solution sets of the problem (P), establish the definitions of generalized nondegeneration and generalized weak sharp minima. Based on these two generalized concepts, we prove the following main results: (1) a feasible solution sequence converges finitely to the solution set of the problem (P) if and only if the corresponding sequence of projected gradients converges to zero.
When the feasible solution set S of (P) is a general closed convex set, the calculation of the projected gradient is very difficult. However, the calculation of the merit function is much easier, and φ(x) = 0 is equivalent to x being a KKT point. Based on this feature of φ(x), we characterize the necessary and sufficient condition for a sequence to terminate finitely at a generalized nondegenerate KKT points set via the global error bound for the projected gradient in terms of the merit function φ(x); that is, (2) for the problem (P), a feasible solution sequence terminates finitely at a generalized nondegenerate KKT points set if and only if the corresponding sequence of merit function values converges to zero.
For generalized weak sharp minima, we prove the following: (3) suppose the stationary points set of the nonconvex optimization problem (P) is convex; then a feasible solution sequence terminates finitely at a generalized weak sharp minima stationary points set if and only if the corresponding sequence of projected gradients converges to zero. As a straightforward corollary of this result, we obtain [25, Theorem 2]; therefore, we extend [20, Theorem 4.7] to nonconvex optimization problems.
The rest of this paper is organized as follows. In Section 2, we introduce some concepts and notation which are used in the following discussion. In Section 3, we develop some basic properties of φ(x) and use them to obtain a global error bound for the projected gradient and a local error bound for the distance from a feasible solution to the optimal solution set of problem (P). Finally, in Section 4, we give necessary and sufficient conditions for a feasible solution sequence to terminate finitely at generalized nondegenerate sets and generalized weak sharp minima sets, respectively, and give a number of meaningful conclusions.

Definitions and Notation
Let ‖⋅‖ and ⟨⋅, ⋅⟩ stand for the standard Euclidean norm and inner product in R^n, respectively. Denote by S̄, Ŝ, and S* the stationary points set, the KKT points set, and the (global) optimal solutions set of problem (P), respectively. In view of the assumption on the matrix H(x), the optimal solution of the subproblem (QP(x)) is unique; we write it as x*(x). For simplicity, we denote this optimal solution by x*; any ambiguity can be resolved from the context.
Given a nonempty subset C of R^n, its closure is denoted by cl C and its polar cone is defined as C° = {y ∈ R^n | ⟨y, z⟩ ≤ 0, ∀z ∈ C}. The tangent cone of S at x ∈ S is given by T_S(x) = {d ∈ R^n | there exist t_k ↓ 0 and d_k → d with x + t_k d_k ∈ S}, and the normal cone of S at x ∈ S is defined as N_S(x) = T_S(x)°. In particular, if S is convex, then the tangent and normal cones of S at x ∈ S take the following forms, respectively: T_S(x) = cl{λ(y − x) | λ ≥ 0, y ∈ S} and N_S(x) = {d ∈ R^n | ⟨d, y − x⟩ ≤ 0, ∀y ∈ S}. The projection of a point x ∈ R^n onto a closed set C is defined by P(x | C) = arg min{‖y − x‖ | y ∈ C}, and the distance from x ∈ R^n to C is given by dist(x, C) = ‖x − P(x | C)‖.
Given a vector x ∈ R^n and a nonnegative scalar ε, we use the notation x + εB to mean {y ∈ R^n | ‖y − x‖ ≤ ε}, where B stands for the closed unit ball in R^n.
A mapping ∇_S f(⋅) : R^n → R^n is said to be the projected gradient of f(⋅) with respect to a set S if ∇_S f(x) = P(−∇f(x) | T_S(x)). Clearly, x ∈ S is a stationary point of (P) if and only if −∇f(x) ∈ N_S(x) or, equivalently, ∇_S f(x) = 0. Since T_S(x) is convex, the projection is well defined, and it is easy to see that Ŝ ⊆ S̄ and S* ⊆ S̄. Clearly, we have S* = S̄ for convex optimization problems.
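When S is a simple set, ∇_S f(x) can be computed in closed form. The sketch below (our own illustration; it assumes S is a box {x | l ≤ x ≤ u}, a case where T_S(x) is known explicitly) projects −∇f(x) onto T_S(x) componentwise:

    import numpy as np

    def projected_gradient_box(grad_fx, x, l, u, tol=1e-12):
        """Projected gradient P(-grad f(x) | T_S(x)) for the box S = [l, u]."""
        v = -grad_fx
        pg = v.copy()
        at_lower = np.abs(x - l) <= tol    # T_S(x) requires d_i >= 0 here
        at_upper = np.abs(u - x) <= tol    # T_S(x) requires d_i <= 0 here
        pg[at_lower] = np.maximum(v[at_lower], 0.0)
        pg[at_upper] = np.minimum(v[at_upper], 0.0)
        return pg                          # x is stationary iff pg == 0

For a general polyhedral or closed convex S, the projection onto T_S(x) is itself a nontrivial optimization problem, which is one reason the easily computed merit function φ(x) is attractive.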
For the optimal solution set S* of the optimization problem (P), Burke and Ferris [20] extended the concept of a sharp (strongly unique) minimum to weak sharp minima, which can be used to deal with the case where the solution set is not a singleton. More precisely, a set S* is said to be a set of weak sharp minima if there exists a positive constant α, which depends only on f, S*, and S, such that

f(x) ≥ f* + α dist(x, S*), ∀x ∈ S, (11)

where f* is the optimal value of (P). We say that a sequence {x_k} terminates finitely at a set Ω if there exists k_0 such that x_k ∈ Ω for all k ≥ k_0. Given c ∈ R, the level set of f(x) is defined as L(c) = {x ∈ S | f(x) ≤ c}.
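A standard one-dimensional picture (our illustration, not from the paper) separates the two situations:

\[
f(x)=\lvert x\rvert,\quad S^{*}=\{0\}:\qquad f(x)-f^{*}=\lvert x\rvert = 1\cdot\operatorname{dist}(x,S^{*}),
\]

so (11) holds with \(\alpha = 1\), whereas \(f(x)=x^{2}\) has the same solution set but admits no such \(\alpha>0\), since \(x^{2}/\operatorname{dist}(x,S^{*})\to 0\) as \(x\to 0\).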

Properties of 𝜙(𝑥)
This subsection mainly deals with the basic properties of the merit function φ(x). From the definition of the subproblem (QP(x)), since y = x is feasible for (QP(x)) whenever x ∈ S, we know that φ(x) ≥ 0 for all x ∈ S. Since S(x) is polyhedral, we have the following.
Lemma 1. Given x ∈ S, a point x* ∈ S(x) is the unique solution of the subproblem (QP(x)) if and only if there exist Lagrange multipliers λ_i* ≥ 0 for i ∈ I(x) such that

∇f(x) + H(x)(x* − x) + Σ_{i∈I(x)} λ_i* ∇g_i(x) = 0, (13)

λ_i* (g_i(x) + ⟨∇g_i(x), x* − x⟩) = 0, i ∈ I(x), (14)

where, for simplicity, we use λ_i* to denote the Lagrange multipliers λ_i*(x) associated with the unique optimal solution of the problem (QP(x)).

Lemma 2. For any x ∈ S, one has

φ(x) = (1/2)⟨x − x*, H(x)(x − x*)⟩ − Σ_{i∈I(x)} λ_i* g_i(x).

Proof. Left-multiplying the two sides of (13) by (x − x*)^T and using (14), we have

⟨∇f(x), x − x*⟩ = ⟨x − x*, H(x)(x − x*)⟩ − Σ_{i∈I(x)} λ_i* g_i(x).

By the definition of φ(x), we obtain the claimed identity.

Lemma 3. For any x ∈ S, one has

φ(x) ≥ (1/2)⟨x − x*, H(x)(x − x*)⟩ ≥ (λ_min/2)‖x − x*‖².
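In detail, the computation behind Lemma 2 can be expanded as follows (our own worked steps, using only (13), (14), and the definition of φ(x)):

\begin{align*}
0 &= \Bigl\langle x-x^{*},\ \nabla f(x)+H(x)(x^{*}-x)+\sum_{i\in I(x)}\lambda_i^{*}\nabla g_i(x)\Bigr\rangle\\
  &= \langle\nabla f(x),\,x-x^{*}\rangle-\langle x-x^{*},\,H(x)(x-x^{*})\rangle+\sum_{i\in I(x)}\lambda_i^{*}g_i(x),
\end{align*}

where (14) gives \(\lambda_i^{*}\langle\nabla g_i(x),x-x^{*}\rangle=\lambda_i^{*}g_i(x)\). Substituting the resulting expression for \(\langle\nabla f(x),x-x^{*}\rangle\) into \(\phi(x)=\langle\nabla f(x),x-x^{*}\rangle-\tfrac12\langle x-x^{*},H(x)(x-x^{*})\rangle\) yields Lemma 2; since \(\lambda_i^{*}\ge 0\) and \(g_i(x)\le 0\) for \(x\in S\), the multiplier term is nonnegative, which yields Lemma 3.

With the preparation of these lemmas, we obtain the following result.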
From conclusions (1) and (2) of Theorem 4 and the nonnegativity of φ(x), it is immediate to verify the following result.
Corollary 5. Suppose that the KKT points set of (P) is nonempty. Then, x* is a KKT point of problem (P) if and only if x* is an optimal solution of the problem min{φ(x) | x ∈ S}. In this case, φ(x*) = 0 and x* = x*(x*).

Lemma 6. For any x ∈ S and y ∈ S(x), one has

⟨∇f(x) + H(x)(x* − x), y − x*⟩ ≥ 0.

Proof. Clearly, (QP(x)) is a convex program over the polyhedral set S(x), so its optimal solutions coincide with its stationary points; that is, the inequality above holds for all y ∈ S(x).
Theorem 7. For any x ∈ S, one has

c_2 φ(x)^{1/2} ≤ ‖∇_{S(x)} f(x)‖ ≤ c_1 φ(x)^{1/2},

where c_1 = λ_max(2/λ_min)^{1/2} and c_2 = (λ_min/2)^{1/2}.

Proof. Since S(x) is a polyhedral set, it is easy to see that the tangent cone of S(x) at x takes the following form:

T_{S(x)}(x) = {d ∈ R^n | ⟨∇g_i(x), d⟩ ≤ 0, i ∈ I(x) with g_i(x) = 0},

where we have used the fact that the constraints of S(x) active at x are exactly those with g_i(x) = 0. Take d ∈ T_{S(x)}(x) with ‖d‖ ≤ 1. Left-multiplying both sides of (13) by d^T and using Lemma 3, we obtain

⟨−∇f(x), d⟩ ≤ λ_max ‖x − x*‖ ≤ λ_max (2φ(x)/λ_min)^{1/2}.

In other words, we obtain

sup{⟨−∇f(x), d⟩ | d ∈ T_{S(x)}(x), ‖d‖ ≤ 1} ≤ c_1 φ(x)^{1/2},

where c_1 = λ_max(2/λ_min)^{1/2}. This, together with the properties of the projected gradient given by Calamai and Moré [26], implies that

‖∇_{S(x)} f(x)‖ ≤ c_1 φ(x)^{1/2}.

Thus, the right inequality is proved.
On the other hand, Lemma 3 implies that

‖x − x*‖ ≤ (2φ(x)/λ_min)^{1/2}, (29)

and, by the definition of φ(x),

⟨−∇f(x), x* − x⟩ = φ(x) + (1/2)⟨x − x*, H(x)(x − x*)⟩ ≥ φ(x). (30)

Let c_2 = (λ_min/2)^{1/2}. Since x* − x ∈ T_{S(x)}(x), from (29) and (30) we obtain

‖∇_{S(x)} f(x)‖ ≥ ⟨−∇f(x), x* − x⟩/‖x* − x‖ ≥ c_2 φ(x)^{1/2}.

So, the left inequality is valid.
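As a quick numerical sanity check of Theorem 7 (our own toy construction, not from the paper; it reuses projected_gradient_box from the earlier sketch, and all box constraints are linearized so that S(x) = S; with H(x) = I we have λ_min = λ_max = 1, so c_1 = √2 and c_2 = 1/√2):

    import numpy as np

    # f(y) = 0.5*||y - z||^2 on the box S = [0, 1]^3; the constraints are
    # affine, so their linearization is exact and S(x) = S.  With H = I,
    # QP(x) has the closed-form solution x*(x) = P(x - grad f(x) | S).
    z = np.array([2.0, -1.0, 0.3])
    x = np.array([1.0, 0.0, 0.5])          # feasible, touching the boundary
    l, u = np.zeros(3), np.ones(3)
    grad_fx = x - z

    y_star = np.clip(x - grad_fx, l, u)    # QP solution x*(x)
    d = y_star - x
    phi_x = -(grad_fx @ d + 0.5 * d @ d)   # merit function value

    pg = projected_gradient_box(grad_fx, x, l, u)
    c1, c2 = np.sqrt(2.0), 1.0 / np.sqrt(2.0)
    # Theorem 7 predicts: c2*sqrt(phi) <= ||pg|| <= c1*sqrt(phi)
    print(c2 * np.sqrt(phi_x), np.linalg.norm(pg), c1 * np.sqrt(phi_x))

For this data the three printed values are approximately 0.1, 0.2, and 0.2, so both inequalities hold, with the upper bound tight.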
The following result can be obtained by Theorems 4 and 7.

Finite Termination
In this section, we study necessary and sufficient conditions for a feasible solution sequence of the nonconvex optimization problem (P) to terminate finitely at its solution sets (the stationary points set and the KKT points set). From [20, Theorem 3.1], we know that the gradient ∇f(⋅) is constant on the optimal solution set of a convex optimization problem. So, according to the monotonicity of ∇f(⋅), we get the following.

Proposition 16. Let S*, the optimal solution set of a convex optimization problem (P), be a nondegenerate set. Then, for any {x_k} ⊂ R^n, S* is a generalized nondegenerate set for {x_k}.
We now give a necessary and sufficient condition for a feasible solution sequence of the nonconvex optimization problem (P) to terminate finitely at its generalized nondegenerate solution set.
which contradicts the positivity of the constant above.
Corollary 18. For the nonconvex optimization problem (P), let {x_k} ⊂ S be a bounded sequence and let S̄ be a nondegenerate set. Then, {x_k} terminates finitely at S̄ if and only if (45) holds.
Proof. The necessity is obvious, so we only need to prove the sufficiency. Suppose (45) holds; that is, lim_{k→∞} ∇_S f(x_k) = 0. According to the lower semicontinuity of ∇_S f(⋅) and the boundedness of {x_k}, every accumulation point x* of {x_k} lies in S̄. Therefore, the conclusion follows from the corresponding proposition on generalized nondegenerate sets and Theorem 17.

In the following, we use the global error bounds for the projected gradient obtained in the last section to characterize the necessary and sufficient condition for finite termination of a feasible solution sequence by means of the merit function φ(⋅), which is easy to calculate.
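Schematically, the practical content of this characterization is a cheap stopping rule (our own illustration, not an algorithm from the paper): since φ(x_k) → 0 is equivalent to finite termination at a generalized nondegenerate KKT points set, one monitors the merit function along the iterates.

    def first_terminal_index(iterates, phi, tol=1e-10):
        """Return the first k with phi(x_k) <= tol (x_k accepted as a KKT
        point up to the tolerance), or None if no iterate qualifies."""
        for k, x_k in enumerate(iterates):
            if phi(x_k) <= tol:
                return k
        return None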
Then, there exist an index set I_K ⊂ {1, 2, ..., m} and an infinite subsequence K ⊆ {1, 2, ...} such that the active index set at x_k equals I_K for every k ∈ K (60). The tangent cone of S(x) at x ∈ S is

T_{S(x)}(x) = {d ∈ R^n | ⟨∇g_i(x), d⟩ ≤ 0, i ∈ I(x) with g_i(x) = 0}, (61)

and its normal cone is

N_{S(x)}(x) = {Σ λ_i ∇g_i(x) | λ_i ≥ 0, i ∈ I(x) with g_i(x) = 0}. (62)

Therefore, by (60) and (62), for every k ∈ K there exist λ_i^(k) ≥ 0, i ∈ I_K, such that the normal component of −∇f(x_k) is Σ_{i∈I_K} λ_i^(k) ∇g_i(x_k) (63), from which, using the orthogonal projection decomposition, for all k ∈ K we get

−∇f(x_k) = ∇_{S(x_k)} f(x_k) + Σ_{i∈I_K} λ_i^(k) ∇g_i(x_k). (64)

We now show that the sequence {Σ_{i∈I_K} λ_i^(k)}_{k∈K} must be bounded. Suppose on the contrary that lim_{k∈K, k→∞} Σ_{i∈I_K} λ_i^(k) = ∞ (65). Dividing the two sides of (64) by Σ_{i∈I_K} λ_i^(k) and taking the limit, due to (65) and (66) and the continuity of ∇g_i(⋅), we obtain

Σ_{i∈I_K} μ_i* ∇g_i(x*) = 0, (67)

where μ_i* is the limit along K of λ_i^(k)/Σ_{j∈I_K} λ_j^(k), and, according to (66), we have

μ_i* ≥ 0, Σ_{i∈I_K} μ_i* = 1. (68)

According to the assumption, x* satisfies the Mangasarian–Fromovitz constraint qualification; that is, there is h ∈ R^n such that

⟨∇g_i(x*), h⟩ < 0, ∀i ∈ I(x*). (69)

Since I_K ⊆ I(x*), (69), (67), and (68) are contradictory. So we may assume that lim_{k∈K, k→∞} λ_i^(k) = λ_i* for i ∈ I_K. By (58) and Theorem 7, we know that lim_{k→∞} ∇_{S(x_k)} f(x_k) = 0. Then, by (64) and the continuity of ∇f(⋅), we obtain

−∇f(x*) = Σ_{i∈I_K} λ_i* ∇g_i(x*).

Generally speaking, for a nonconvex optimization problem, (72) is only a necessary condition for the weak sharp minima condition (11); that is, (72) is weaker than (11). However, considering its importance in analyzing the finite termination of algorithms, some previous studies, for example, [21,22], directly use (72) as the definition of weak sharp minima for the optimal solution set. In this paper, we also use the weaker condition (72) to define the set of weak sharp minima.

Definition 25. Let S° ⊂ S̄. Then S° is said to be a set of weak sharp minima if and only if (72) holds.

Now, we further extend the definition of weak sharp minima as follows.
Definition 26. Let S° ⊂ S̄ and {x_k} ⊂ R^n. Then S° is said to be a set of generalized weak sharp minima for {x_k} if, for every subsequence {x_k}_{k∈K}, there exists a corresponding positive constant such that condition (72) holds along the subsequence.

As with generalized nondegeneration, it is easy to verify that the sets in the following propositions are special cases of generalized weak sharp minima.
Proposition 27. Let S° ⊂ S̄ be a set of weak sharp minima, and let {x_k} ⊂ R^n be a bounded sequence every accumulation point x* of which belongs to S°. Then, S° is a set of generalized weak sharp minima for {x_k}.
Proposition 29. Let the optimal solution set S* of a convex optimization problem (P) be a set of weak sharp minima. Then, for any {x_k} ⊂ R^n, S* is a set of generalized weak sharp minima for {x_k}.
We now give a necessary and sufficient condition for a feasible solution sequence of a nonconvex optimization problem (P) to terminate finitely at its set of generalized weak sharp minima.
Proof. Since S ⊆ S(x) and x ∈ S, it follows from the definition of the tangent cone that T_S(x) ⊆ T_{S(x)}(x). Thus, the result is established by invoking Theorem 7.

Now, we consider the case where the problem (P) is convex; that is, the functions involved in (P) are all convex. Obviously, in this case, S* is a closed convex set. Under condition (11), we use φ(x) to give a local error bound for dist(x, S*). Suppose that (P) is a convex program. If S* is a set of weak sharp minima, that is, (11) holds, then there exist positive constants such that, for every feasible x sufficiently close to S*, dist(x, S*) is bounded above by a constant multiple of φ(x). If S is polyhedral, then T_S(x) = T_{S(x)}(x); that is, ∇_S f(x) = ∇_{S(x)} f(x).
Then, {x_k} terminates finitely at the solution set S̄ (or Ŝ) if and only if (45) holds.

Proof. This follows from Proposition 15 and Theorem 17.

For the convex optimization problem (P), let {x_k} ⊂ S, and let S̄ be a nondegenerate set. Then, {x_k} terminates finitely at S* if and only if (45) holds.

Proof. Here, we have S* = S̄. Then, the claim follows from Proposition 16 and Theorem 17.
This means that x* is a KKT point of the problem (P) [20].

For the nonconvex optimization problem (P), suppose Ŝ is a nondegenerate set, {x_k} ⊂ S is bounded, and each of its accumulation points x* satisfies the Mangasarian–Fromovitz constraint qualification. Then, {x_k} terminates finitely at Ŝ if and only if the sequence satisfies (58).

For the nonconvex optimization problem (P), suppose {x_k} ⊂ S, {∇f(x_k)} is a bounded sequence, and, for each of its accumulation points ∇*, there exists x* ∈ Ŝ such that −∇* ∈ int N_S(x*). Then, {x_k} terminates finitely at Ŝ if and only if the sequence satisfies (58).

Generalized Weak Sharp Minima

In [20], Burke and Ferris gave an equivalent condition for weak sharp minima for a convex optimization problem; that is, the optimal solution set S* of the convex optimization problem is a set of weak sharp minima if and only if

−∇f(x̄) ∈ int ⋂_{x∈S*} [T_S(x) ∩ N_{S*}(x)]°, ∀x̄ ∈ S*.