A Globally Convergent Line Search Filter SQP Method for Inequality Constrained Optimization

A line search filter SQP method for inequality constrained optimization is presented. This method makes use of a backtracking line search procedure to generate the step size and the filter technique to determine step acceptance. At each iteration, the subproblem is always consistent, and only one QP subproblem needs to be solved. Under some mild conditions, the global convergence property can be guaranteed. Finally, numerical experiments show that the method in this paper is effective.

It is well known that the sequential quadratic programming (SQP) method is one of the most efficient methods to solve problem (P). Because of its superlinear convergence rate, it has been widely studied; see Boggs and Tolle [1] for an excellent literature survey. In SQP methods, at each iteration the search direction is generally obtained by solving the following subproblem:

min ∇f(x_k)^T d + (1/2) d^T B_k d
s.t. g_i(x_k) + ∇g_i(x_k)^T d ≤ 0, i ∈ I = {1, 2, ..., m},     (1)

where B_k ∈ R^{n×n} is a symmetric positive definite matrix. However, this QP subproblem has a serious shortcoming: the constraints in (1) may be inconsistent. To overcome this disadvantage, much attention has been paid to the problem [2-7]. Burke and Han [2], Zhou [3], and J.-L. Zhang and X.-S. Zhang [4] modified the constraints in subproblem (1) to ensure that the revised QP subproblem is consistent, and their methods are globally convergent. Burke and Han's method is only conceptual and cannot be implemented practically, Zhou's method is based on an exact line search, and Zhang's method focuses on an inexact line search. Furthermore, Pantoja and Mayne [5] presented a modification of the SQP algorithm that alters both the constraints and the objective function in (1), so that the search direction is obtained by solving a new QP subproblem which always has an optimal solution. In addition, Liu and Li [6] and Liu and Zeng [7] proposed SQP algorithms with cautious update criteria, which can be considered modifications of the SQP algorithm given in [5].
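When B_k is positive definite, subproblem (1) is a convex QP. As an illustration only (the modified subproblem used later in this paper differs), the plain subproblem can be handed to an off-the-shelf solver; the following sketch uses SciPy's SLSQP, and all function and variable names here are our own:

```python
import numpy as np
from scipy.optimize import minimize

def solve_qp_subproblem(grad_f, B, g, A):
    """Solve the standard SQP subproblem (1):
        min_d  grad_f @ d + 0.5 * d @ B @ d
        s.t.   g + A @ d <= 0   (componentwise)
    grad_f: gradient of f at x_k; B: symmetric positive definite
    Hessian approximation; g: constraint values g_i(x_k);
    A: Jacobian with rows grad g_i(x_k)^T.  Returns the direction d,
    or None if SLSQP reports the linearized constraints infeasible."""
    n = len(grad_f)
    obj = lambda d: grad_f @ d + 0.5 * d @ B @ d
    jac = lambda d: grad_f + B @ d
    # SLSQP uses the convention fun(d) >= 0 for inequality constraints,
    # so g + A d <= 0 is passed as -(g + A d) >= 0.
    cons = {"type": "ineq", "fun": lambda d: -(g + A @ d), "jac": lambda d: -A}
    res = minimize(obj, np.zeros(n), jac=jac, constraints=[cons], method="SLSQP")
    return res.x if res.success else None
```

For example, with ∇f(x_k) = (2, 2), B_k = I, and one linearized constraint 1 + d_1 + d_2 ≤ 0, the unconstrained minimizer d = (−2, −2) already satisfies the constraint and is returned.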
In the previous methods [2-7], a penalty function is always used as a merit function to test the acceptability of iterate points. However, as is well known, there are several difficulties associated with the use of a penalty function, in particular with the choice of the penalty parameter: too low a choice may result in the loss of an optimal solution; on the other hand, too large a choice damps out the effect of the objective function. For this reason, the filter method was first introduced by Fletcher and Leyffer [8] in a trust region SQP method for constrained nonlinear optimization problems.

The Structure of Line Search Filter Technique
In this section, we will present the structure of the line search filter technique and some related strategies. Instead of combining the objective function and the constraint violation into a single function, filter methods view nonlinear optimization as a biobjective optimization problem that minimizes the objective function and the constraint violation. We now give the structure of the line search filter technique.
For the sake of simplified notation, the filter is defined in this paper not as a list but as a set F_k containing all (h, f) pairs that are prohibited at iteration k. We say that a trial point x_k(α_{k,l}) is acceptable to the filter if its (h, f) pair does not lie in the taboo region, that is, if

h(x_k(α_{k,l})) ≤ (1 − γ_h) h(x_k)  or  f(x_k(α_{k,l})) ≤ f(x_k) − γ_f h(x_k),     (2)

where γ_h, γ_f ∈ (0, 1) are fixed constants. At the beginning of the algorithm, the filter is initialized as F_0 = {(h, f) ∈ R^2 : h ≥ h_max}, where h_max is a large positive number. Throughout the optimization the filter is then augmented in some iterations after the new iterate point x_{k+1} has been accepted. For this, the following updating formula is used:

F_{k+1} = F_k ∪ {(h, f) ∈ R^2 : h ≥ (1 − γ_h) h(x_k) and f ≥ f(x_k) − γ_f h(x_k)}.     (4)

Similar to the traditional strategy of filter methods, to avoid convergence to a feasible point that is not an optimal solution, we consider the following f-type switching condition:

m_k(α_{k,l}) < 0  and  [−m_k(α_{k,l})]^{s_f} [α_{k,l}]^{1−s_f} > δ [h(x_k)]^{s_h},     (5)

where m_k(α_{k,l}) = α_{k,l} ∇f(x_k)^T d_k, δ > 0, and s_f, s_h are fixed constants. When condition (5) holds, the step d_k is a descent direction for the current objective function. Then, instead of insisting on (2), the following Armijo-type reduction condition is employed:

f(x_k(α_{k,l})) ≤ f(x_k) + η m_k(α_{k,l}),     (6)

where η ∈ (0, 1/2) is a fixed constant. If (5) and (6) hold for the accepted trial step size, we call it an f-type point, and accordingly this iteration is called an f-type iteration. An f-type point should be accepted as x_{k+1} without updating of the filter; that is, F_{k+1} = F_k. If a trial point x_k(α_{k,l}) does not satisfy the switching condition (5) but satisfies (2), we call it an h-type point (and accordingly the iteration an h-type iteration).
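The filter tests above can be sketched in code as follows. This is a hedged illustration: the class, the corner-point representation of the taboo region, and the parameter values (gamma_h, gamma_f, delta, s_f, s_h) are our own assumptions, not the paper's exact choices:

```python
class Filter:
    """Set of prohibited (h, f) pairs, stored by their corner points."""
    def __init__(self, h_max):
        # F_0: every pair with h >= h_max is taboo.
        self.corners = [(h_max, float("-inf"))]

    def acceptable(self, h, f, gamma_h=0.01, gamma_f=0.01):
        # Acceptability: the trial pair must improve on every stored
        # corner by a margin, mirroring condition (2) entry by entry.
        return all(h <= (1 - gamma_h) * hc or f <= fc - gamma_f * hc
                   for hc, fc in self.corners)

    def augment(self, h_k, f_k):
        # Updating formula (4): add the current pair as a new corner,
        # prohibiting the region it dominates (up to the margins).
        self.corners.append((h_k, f_k))

def f_type_switching(alpha, grad_f_d, h_k, delta=1.0, s_f=2.3, s_h=1.1):
    """Switching condition (5): the predicted decrease
    m_k(alpha) = alpha * grad_f(x_k)^T d_k must be negative and must
    dominate a power of the current constraint violation h_k."""
    m = alpha * grad_f_d
    return m < 0 and (-m) ** s_f * alpha ** (1 - s_f) > delta * h_k ** s_h
```

A small usage check: after augmenting with (1.0, 5.0), that very pair is no longer acceptable, while a point with smaller violation still is.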
An h-type point should be accepted as x_{k+1} with updating of the filter, and we denote by A ⊂ N the set of indices of those iterations in which the filter has been augmented.
In our method, the line search is performed when ∇f(x_k)^T d_k < 0. In the situation where no admissible step size can be found, the method switches to a feasibility restoration phase, whose purpose is to find a new iterate point that satisfies (2) and is also acceptable to the current filter by trying to decrease the constraint violation. In order to detect the situation where no admissible step size can be found and the restoration phase has to be invoked, we approximate a minimum desired step size using linear models of the involved functions.
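The inner loop just described (try a step, test the switching condition (5), then the Armijo condition (6) or the reduction condition (2), backtrack, and fall back to restoration) can be sketched as follows; the minimum step size `alpha_min`, the backtracking factor `tau`, and the callback names are illustrative assumptions, not the paper's notation:

```python
def line_search(x_k, d_k, f, h, acceptable, switching, armijo,
                alpha_min=1e-8, tau=0.5):
    """Backtracking line search of the filter method (sketch).
    Returns (alpha, kind) with kind in {'f', 'h'}, or None to signal
    that the feasibility restoration phase must be invoked."""
    alpha = 1.0
    while alpha >= alpha_min:  # below alpha_min: give up, restore feasibility
        x_trial = [xi + alpha * di for xi, di in zip(x_k, d_k)]
        if switching(alpha):
            # f-type test: Armijo decrease (6) plus filter acceptability;
            # the filter is left unchanged for such points.
            if armijo(alpha, x_trial) and acceptable(h(x_trial), f(x_trial)):
                return alpha, "f"
        else:
            # h-type test: sufficient reduction (2) / filter acceptability;
            # the caller augments the filter for such points.
            if acceptable(h(x_trial), f(x_trial)):
                return alpha, "h"
        alpha *= tau  # backtrack
    return None  # no admissible step size: switch to restoration phase
```

With trivial callbacks that always accept, the full step α = 1 is returned as an f-type step, matching the intended control flow.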

Description of the Algorithm
We now propose a line search filter SQP method for inequality constrained optimization and formally describe the algorithmic details in this section. The proposed algorithm contains inner loops, and in the next section we show that there are no endless loops and that the method is implementable and globally convergent.
In our algorithm, motivated by [5-7], the quadratic subproblem (1) is modified so that it is always consistent. Now, the algorithm for solving problem (P) can be stated as follows.
Algorithm 1. Consider the following.
Step 5. Augment the filter if necessary. If k is not an f-type iteration, augment the filter using (4); otherwise leave the filter unchanged; that is, set F_{k+1} = F_k.
Step 6. Update the parameters.

Step 7. Update x_k to x_{k+1}. Go to Step 2 with k replaced by k + 1.
Step 8. Obtain a new point x_{k+1} from the feasibility restoration phase. Set k = k + 1, and go to Step 2.
Remark 3. The feasibility restoration phase in Step 8 could be any iterative algorithm with the goal of finding a less infeasible point; for example, a nonlinear optimization algorithm that minimizes the constraint violation h, such as Algorithm B in [14], can be applied.
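A minimal sketch of such a restoration phase (our own construction, not Algorithm B of [14]) simply minimizes a smooth measure of the constraint violation with an unconstrained solver:

```python
import numpy as np
from scipy.optimize import minimize

def restoration_phase(x, constraints):
    """Find a less infeasible point by minimizing the smooth squared
    violation  sum_i max(0, g_i(x))^2  with BFGS.  `constraints` is a
    list of callables g_i; all names here are illustrative only."""
    def violation_sq(y):
        return sum(max(0.0, g(y)) ** 2 for g in constraints)
    res = minimize(violation_sq, x, method="BFGS")
    return res.x
```

For example, starting from (2, 2) with the single constraint x_1 + x_2 − 1 ≤ 0, the solver drives the iterate to (near) the feasible half-plane x_1 + x_2 ≤ 1.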

Global Convergence of Algorithm
In this section, we will show that the proposed algorithm is well defined and globally convergent under some mild conditions. Throughout this section, we always assume that the stated assumptions hold.

Proof. Suppose by contradiction that d_{k+j} = 0 and h_{k+j} > 0 for all j ≥ 1 and some k ∈ N. Then we have x_{k+j} = x_k, and hence h_{k+j} = h(x_{k+j}) = h(x_k) = h_k > 0 and v_{k+j} = 0. It follows from (9) and (10) that
The next lemma shows that the value of the parameter μ_k is increased only a finite number of times.

Lemma 5. Let {x_k} be an infinite sequence generated by Algorithm 1; then there exists an integer k_0 such that μ_k = μ_max for all k ≥ k_0. In addition, μ_max = max_k {μ_k}.
Proof. The proof is by contradiction. Suppose that μ_k → +∞ as k → +∞. Step 6 of Algorithm 1 implies that the updating inequality holds for infinitely many k, and there exists a corresponding subsequence {x_{k_j}}. Without loss of generality, using Assumption (A1), we suppose that x_{k_j} → x̄ as k_j → ∞. In a way similar to the proof of Lemma 4, we can derive the corresponding estimates for all sufficiently large k_j. By (9) we have that, for all k_j sufficiently large, (λ_{k_j})_i = 0 for all i ∉ I(x̄). We can also assume that lim_{k_j→+∞} λ_{k_j}/‖λ_{k_j}‖_1 = λ̄. Dividing both sides of the last equality by ‖λ_{k_j}‖_1 and taking limits, we obtain

∑_{i∈I(x̄)} λ̄_i ∇g_i(x̄) = 0,  λ̄_i ≥ 0,  i ∈ I(x̄).
Because μ_k is increased only finitely many times, it is easy to see that μ_max = max_k {μ_k}.

Lemma 6. The line search is actually performed; that is, our method can generate d_k such that ∇f(x_k)^T d_k < 0 at some iteration.
Proof. The proof is by contradiction. Suppose that d_k satisfying ∇f(x_k)^T d_k < 0 cannot be generated and the current iterate point cannot be changed as in Step 3; then the iterates cannot escape from the feasibility restoration phase. Therefore, all points of {x_k} are generated by the feasibility restoration phase, which implies |A| = ∞. We first prove that lim_{k→∞} h(x_k) = 0. Consider an infinite subsequence {x_{k_j}} of A with h(x_{k_j}) ≥ ε for some ε > 0. At each iteration k_j, the pair (h(x_{k_j}), f(x_{k_j})) is added to the filter, which means that no other (h, f) pair within the area [h(x_{k_j}) − γ_h h(x_{k_j}), h(x_{k_j})] × [f(x_{k_j}) − γ_f h(x_{k_j}), f(x_{k_j})] can be added to the filter at a later stage, and the area of each such square is at least γ_f γ_h ε². By Assumption (A1), we have 0 ≤ h(x) ≤ h_max and f_min ≤ f(x) ≤ f_max, and then the (h, f) pairs associated with the filter are restricted to B = [0, h_max] × [f_min, f_max]. Hence B is completely covered by at most a finite number of such areas, which contradicts the existence of the infinite subsequence {x_{k_j}} satisfying (3). This means that (23) is true.
Proof. Since lim_{k→∞} h(x_{k_j}) = 0 by Theorem 8, it follows from Lemma 10 that there exist constants c_1, c_2 > 0 such that the corresponding estimates hold for k_j sufficiently large and α ∈ (0, 1]. Without loss of generality, we can assume that (45) is valid for all k_j. We can now apply Lemmas 13 and 14 to obtain the required positive constants, and, as lim_{k→∞} h(x_{k_j}) = 0, we can choose a sufficiently large K ∈ {k_j} for which (46) holds. Lemmas 13 and 14 then imply that a trial step size α_{K,l} ≤ α_K satisfies

f(x_K(α_{K,l})) ≤ f(x_K) + η m_K(α_{K,l}).

If we now denote by α_{K,L} the first step size satisfying (50), the backtracking line search procedure in Step 4 then implies that the switching and reduction conditions hold for all l ≤ L, and therefore α_{K,L} and all previous trial step sizes are f-step sizes. Hence the method does not switch to the feasibility restoration phase in Step 4 for those trial step sizes, and α_{K,L} is indeed the accepted step size α_K. Since it satisfies both f-type switching conditions (5) and (6), the filter is not augmented in iteration K.
Lemma 16. Suppose that the filter is augmented an infinite number of times; that is, |A| = ∞. Then lim_{k→∞, k∈A} h(x_k) = 0. Let K = A; the claim is also true.
In a word, if Algorithm 1 generates an infinite sequence of iterates {x_k}, then all limit points are feasible, and there exists at least one limit point x* of {x_k} which is a KKT point for the inequality constrained NLP (1).

(xii) CSQP: the algorithm proposed in [7];
(xiii) FilterSQP2: the algorithm proposed in [18];
(xiv) FilterSQP3: the algorithm proposed in [23].
From Tables 1 and 2, we can see that our algorithm performs well on these problems taken from [28]. Whether the initial point is feasible or not, the results are promising. From the computational efficiency reported in Tables 3 and 4, we can see that our algorithm is competitive with some existing SQP methods in terms of the number of iterations, for example, those of [5, 7, 18, 23].
According to the forms of the examples listed in these two groups of papers, we report the numerical results in two different representations in Tables 3 and 4. From Table 3, our algorithm is competitive with the first group of methods, that is, the two methods in [5, 7]. In particular, with the algorithm in this paper, the number of iterations for problem hs001 (i.e., PBRT11) is much smaller than those of the other two algorithms, and problem hs011 (i.e., QQRT12) converges to a good approximate optimal solution in few iterations, while the algorithms CSQP and JSQP both fail on it. From Table 4, our algorithm is also competitive with the second group of methods, that is, the two methods in [18, 23]. Although the filter technique is used in all three algorithms, the quadratic subproblem (1) in [18, 23] may be inconsistent, whereas in our algorithm the subproblem is always consistent and only one QP subproblem needs to be solved at each iteration, which is simpler and more convenient to apply.
All results summarized in Tables 1-4 show that our algorithm is practical and effective.

Conclusion
In this paper, combining a modification strategy for the QP subproblem with the filter technique, we present a line search filter SQP method for inequality constrained optimization. This method can start from an arbitrary initial point rather than a feasible one; it makes use of a backtracking line search procedure to generate the step size and the filter technique to determine step acceptance; and it only needs to solve one QP subproblem at each iteration, with the subproblem always consistent. Under some mild conditions, the method is well defined, and the global convergence property is obtained. The numerical experiments in Section 5 show that our algorithm is effective.

Table 1:
The detailed information of the numerical results for a feasible initial point.

Table 2:
The detailed information of the numerical results for an infeasible initial point.

Proof. Suppose that lim sup_{k→∞, k∈A} ‖d_k‖ > 0. Then there exist a subsequence {x_{k_j}} of {x_k : k ∈ A} and a constant ε > 0 such that lim_{j→∞} h(x_{k_j}) = 0 and ‖d_{k_j}‖ ≥ ε for all k_j. Applying Lemma 15 to {x_{k_j}}, we see that there is an iteration k_j in which the filter is not augmented; that is to say, k_j ∉ A. This contradicts the choice of {x_{k_j}}, and then the claim follows.

Although 0 ≤ v_k ≤ h(x_k) and Theorem 8 already imply that lim_{k→∞} v_k = 0, the next lemma shows a stronger result: let {x_{k_j}} be a subsequence with v_{k_j} → 0; then v_{k_j} = 0 for k_j sufficiently large.

Proof. Note that v_{k_j} → 0 together with Lemma 5 implies that, for k_j sufficiently large, the minimum defining v_{k_j} is attained at zero, so we get v_{k_j} = 0 for k_j sufficiently large.
Theorem 18. Suppose that all stated assumptions hold; then the outcome of applying Algorithm 1 is one of the following.

Table 3:
Comparison of our algorithm with two methods in [5, 7].

All limit points are feasible, and there exists at least one limit point x* of {x_k} which is a KKT point for the inequality constrained NLP (1).