Improved Filter-SQP Algorithm with Active Set for Constrained Minimax Problems

An improved filter-SQP algorithm with active set for constrained finite minimax problems is proposed. Firstly, an active constraint subset is obtained by a pivoting operation procedure. Then, a new quadratic programming (QP) subproblem is constructed based on the active constraint subset. The main search direction d_k is obtained by solving this QP subproblem, which is feasible at each iteration point; by using the filter technique, no penalty function is needed. Under some suitable conditions, the global convergence of our algorithm is established. Finally, some numerical results are reported to show the effectiveness of the proposed algorithm.


Introduction
Many real-life problems in engineering, economics, management, finance, and other fields can be described as minimax problems, in which one minimizes the maximum of a finite family of functions (see, e.g., [1,2]). In this paper, we consider the following constrained minimax optimization problem:

min F(x)  s.t.  g_j(x) ≤ 0, j ∈ J = {1, 2, ..., m_1},   (1)

where the functions f_i(x), g_j(x): R^n → R are continuously differentiable. For convenience, we denote

F(x) = max{f_i(x), i ∈ I = {1, 2, ..., m}}.

Obviously, the objective function F(x) is not necessarily differentiable even if the f_i(x), i ∈ I, are all differentiable. Consequently, the classical algorithms for smooth optimization problems may fail to reach an optimum if they are applied directly to the constrained minimax optimization problem (1). In view of the importance of minimax problems, many methods have been proposed for solving problem (1). For example, in [3,4], the minimax optimization problem is viewed as an unconstrained nonsmooth optimization problem, which can be solved by general methods such as subgradient methods, bundle methods, and cutting plane methods. The other type of methods for solving problem (1) is the so-called smoothing methods, whose approach is to transform the minimax problem (1) into the equivalent smooth problem

min_{(x,z)} z  s.t.  f_i(x) − z ≤ 0, i ∈ I;  g_j(x) ≤ 0, j ∈ J,   (4)

where z ∈ R is an artificial variable. Obviously, from problem (4), the Karush-Kuhn-Tucker (KKT) conditions of (1) can be stated as follows:

∑_{i∈I} λ_i ∇f_i(x) + ∑_{j∈J} μ_j ∇g_j(x) = 0,
∑_{i∈I} λ_i = 1,  λ_i ≥ 0,  λ_i (f_i(x) − F(x)) = 0,  i ∈ I,
μ_j ≥ 0,  μ_j g_j(x) = 0,  j ∈ J,   (5)

where λ_i, μ_j are the corresponding multipliers. In view of the equivalent relationship between the KKT points of (4) and the stationary points of (1), many methods focus on finding the stationary points of problem (1), namely, solving (5), and a lot of methods have been proposed to solve the minimax problem [5-13]. For finding the minima of convex functions that are not necessarily differentiable, the algorithms in [5-8] combine a nonmonotone line search with a second-order correction technique, which can effectively avoid the Maratos effect; many other effective algorithms for solving the minimax problems are presented in, for example, [10-13].
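As an illustration of the smooth reformulation (4), the following Python sketch lifts a small minimax problem, with two hypothetical pieces f1, f2 and one constraint g, into the problem min z subject to f_i(x) ≤ z and g(x) ≤ 0, and solves it with an off-the-shelf SQP solver (SciPy's SLSQP). All function names and data here are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth pieces and one inequality constraint g(x) <= 0.
def f1(x): return x[0]**2 + x[1]**2
def f2(x): return (x[0] - 2.0)**2 + x[1]**2
def g(x):  return -x[0] - x[1] + 0.5

# Lift to the smooth problem (4): minimize z over y = (x, z)
# subject to f_i(x) - z <= 0 and g(x) <= 0.
def solve_minimax(x0):
    z0 = max(f1(x0), f2(x0))                 # feasible starting value of z
    y0 = np.append(x0, z0)
    cons = [
        {"type": "ineq", "fun": lambda y: y[2] - f1(y[:2])},  # z - f1 >= 0
        {"type": "ineq", "fun": lambda y: y[2] - f2(y[:2])},  # z - f2 >= 0
        {"type": "ineq", "fun": lambda y: -g(y[:2])},         # -g >= 0
    ]
    res = minimize(lambda y: y[2], y0, constraints=cons, method="SLSQP")
    return res.x[:2], res.x[2]

x_star, z_star = solve_minimax(np.array([3.0, 1.0]))
```

For this data the two pieces balance at the solution, i.e., f1(x*) = f2(x*) = z*, which is the typical behavior of a minimax optimum.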
To solve the minimax problem efficiently, that is, to save time, reduce computation, and reduce the number of iterations, we aim for a fast convergent algorithm. It is well known that sequential quadratic programming (SQP) can be considered one of the best nonlinear programming methods for smooth constrained optimization problems (see, e.g., [14-20]), as it outperforms other nonlinear programming methods in terms of efficiency, accuracy, and percentage of successful solutions over a large number of test problems. Hence, some authors have directly applied SQP techniques to minimax problems and obtained satisfactory results (such as [5,9]). In a typical SQP method for a minimum problem, the main procedure is to solve the following quadratic programming subproblem:

min ∇f(x_k)^T d + (1/2) d^T B_k d  s.t.  g_j(x_k) + ∇g_j(x_k)^T d ≤ 0, j ∈ J_0,

where J_0 is an index set and B_k is an approximation of the Hessian matrix of the Lagrangian function.
Applied to the minimax problem through the reformulation (4), the QP subproblem (8) takes the form

min_{(d,z)} z + (1/2) d^T H_k d  s.t.  f_i(x_k) + ∇f_i(x_k)^T d ≤ z, i ∈ I;  g_j(x_k) + ∇g_j(x_k)^T d ≤ 0, j ∈ J,

where H_k is a symmetric positive definite matrix. However, it is well known that the solution d_k of (8) may not be a feasible descent direction and cannot avoid the Maratos effect. Recently, many researchers have extended the popular SQP scheme to minimax problems (see [21-25], etc.). Jian et al. [22] and Hu et al. [23] perform a pivoting operation to generate an ε-active constraint subset associated with the current iteration point. At each iteration of their proposed algorithms, a main search direction is obtained by solving a reduced quadratic program which always has a solution.
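The structure of such an SQP subproblem can be sketched as follows. This is a generic illustration: the data and the solver call are hypothetical, and in practice a dedicated QP solver would replace the general-purpose SLSQP call.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch of the SQP quadratic subproblem: given a gradient
# grad, constraint values c and Jacobian A at the current iterate, and
# a positive definite H, find the direction d minimizing
#   grad^T d + 0.5 d^T H d   subject to   c + A d <= 0.
def sqp_direction(grad, H, c, A):
    n = len(grad)
    obj = lambda d: grad @ d + 0.5 * d @ H @ d
    cons = [{"type": "ineq", "fun": lambda d: -(c + A @ d)}]  # c + A d <= 0
    res = minimize(obj, np.zeros(n), constraints=cons, method="SLSQP")
    return res.x

# Hypothetical data: grad = (1, 0), H = I, one linearized constraint
# -d0 - d1 - 0.25 <= 0, i.e. d0 + d1 >= -0.25.
d = sqp_direction(np.array([1.0, 0.0]), np.eye(2),
                  np.array([-0.25]), np.array([[-1.0, -1.0]]))
```

Since H is positive definite and the constraints are linear, this subproblem is a convex QP with a unique solution whenever its feasible set is nonempty.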
As an alternative to merit functions, Fletcher et al. [26] proposed in 2002 the filter-SQP method for inequality constrained optimization problems, instead of the classic merit function SQP methods. The main idea of this method is that a trial point is accepted if it improves either the objective function or the constraint violation. Furthermore, the global and superlinear local convergence of a trust region filter-SQP method was shown by Ulbrich [27]. In recent years, the filter method has attracted considerable attention, and many filter-based algorithms have been proposed; see [28-30] and so forth.
In this paper, an improved filter-SQP algorithm with active set for the constrained minimax problem (1) is proposed. At each iteration of our algorithm, an active constraint subset is first obtained by a pivoting operation procedure; then, we construct a new quadratic programming (QP) subproblem based on the active constraint subset. To obtain a main search direction d_k for (1), we only need to solve this QP subproblem, which is feasible at each iteration point; by using the filter technique, no penalty function is needed. Furthermore, under some mild conditions, the global convergence of our algorithm is established.
The remainder of this paper is organized as follows: an improved filter-SQP algorithm is proposed in Section 2. In Section 3, we prove that the algorithm is globally convergent. Some preliminary numerical tests are reported in Section 4, and concluding remarks are given in the last section.

Improved Filter-SQP Algorithm
As in the traditional filter technique, define the violation function h(c(x)) as follows:

h(c(x)) = ‖c(x)^+‖,  where c_j(x)^+ = max{0, g_j(x)}, j ∈ J.

It is easy to see that h(c(x)) = 0 if and only if x is a feasible point.
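A minimal sketch of one common choice of violation function, assuming h is taken as the l1 norm of the positive parts of the constraint values (the exact norm used in the paper may differ):

```python
import numpy as np

# h(c) = sum of positive parts of the stacked constraint values c,
# where the constraints are written as c_j(x) <= 0.
def violation(c):
    return float(np.sum(np.maximum(c, 0.0)))

h_feas = violation(np.array([-1.0, -0.5]))   # all constraints satisfied
h_infeas = violation(np.array([0.2, -0.5]))  # first constraint violated
```

As required, h vanishes exactly on the feasible set and is positive otherwise.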
Definition 2. A pair (h(c(x_k)), F(x_k)) is said to dominate another pair (h(c(x_l)), F(x_l)) if h(c(x_k)) ≤ h(c(x_l)) and F(x_k) ≤ F(x_l). A filter is a list of pairs (h(c(x_l)), F(x_l)) such that no pair dominates any other. A pair (h(c(x_k)), F(x_k)) is said to be acceptable for the filter if it is not dominated by any pair in the filter.
We use F_k ≜ {(h(c(x_l)), F(x_l)) ∈ R^2, l < k} to denote the set of pairs, indexed by the iterations l (l < k), that are entries in the current filter. Then, we say that a point x_k is acceptable for the filter if and only if

h(c(x_k)) ≤ β h(c(x_l))  or  F(x_k) ≤ F(x_l) − γ h(c(x_l))

for all (h(c(x_l)), F(x_l)) ∈ F_k, where β ∈ (0, 1) and γ > 0 is close to zero. We may also update the filter, which means that the pair (h(c(x_k)), F(x_k)) is added to the list of pairs in the filter, and any pairs in the filter that are dominated by (h(c(x_k)), F(x_k)) are removed.
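The filter bookkeeping described above, the acceptance test against all stored pairs and the update that removes dominated pairs, can be sketched as follows; the parameter values beta and gamma are illustrative.

```python
# A minimal sketch of the filter mechanism: a trial pair (h, F) is
# acceptable if, against every stored pair (h_l, F_l), it satisfies
#   h <= beta * h_l   or   F <= F_l - gamma * h_l.
class Filter:
    def __init__(self, beta=0.99, gamma=1e-5):
        self.pairs = []                  # stored (h, F) entries
        self.beta, self.gamma = beta, gamma

    def acceptable(self, h, F):
        return all(h <= self.beta * hl or F <= Fl - self.gamma * hl
                   for (hl, Fl) in self.pairs)

    def add(self, h, F):
        # drop entries dominated by the new pair, then store it
        self.pairs = [(hl, Fl) for (hl, Fl) in self.pairs
                      if not (h <= hl and F <= Fl)]
        self.pairs.append((h, F))
```

For example, after adding (1.0, 10.0), a trial pair (0.5, 12.0) is acceptable because it sufficiently reduces the violation, while resubmitting (1.0, 10.0) itself is rejected.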
As a criterion for accepting or rejecting a trial step, we use the filter technique combined with SQP method.
Step 1. Computation of an active constraint set is as follows.
Step 7. Update the filter F_k to F_{k+1}, and obtain H_{k+1} by updating the positive definite matrix H_k using some quasi-Newton formula. Set k := k + 1 and go back to Step 1.
Remark 3. In Step 1, by using the pivoting operation POP, we obtain an active set A_k = I_k ∪ J_k ⊆ I ∪ J. Based on this ε-active constraint subset, we construct the new QP (14), which is helpful for discussing the convergence of our algorithm.
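The pivoting operation POP itself is not reproduced here. As a rough stand-in, the following sketch identifies an ε-active subset by simple thresholding, which conveys the idea of restricting attention to the nearly active objective pieces and constraints; all names and tolerances are hypothetical.

```python
import numpy as np

# Collect the objective pieces within eps of the max value F(x_k) and
# the constraints within eps of their bound 0, giving index sets
# (I_k, J_k) whose union plays the role of the active set A_k.
def eps_active_set(f_vals, g_vals, eps=1e-4):
    F = np.max(f_vals)
    I_k = [i for i, fi in enumerate(f_vals) if F - fi <= eps]
    J_k = [j for j, gj in enumerate(g_vals) if gj >= -eps]
    return I_k, J_k

I_k, J_k = eps_active_set(np.array([1.0, 0.99999, 0.3]),
                          np.array([-2.0, -1e-6]))
```

Working with only these indices shrinks the QP subproblem while retaining the constraints that matter near the current iterate.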

Global Convergence of Algorithm
In this section, we analyze the convergence of the algorithm.
The following general assumptions hold throughout this paper.
Proof. By contradiction: if the conclusion is false, then Algorithm A runs infinitely between Step 5 and Step 6, so we have t_{k,j} → 0 (j → ∞) while the point x_k + t_{k,j}(d_k + d̃_k) is not acceptable for the filter. The following two cases need to be considered.

Case 1. Consider h(c(x_k)) = 0.
From the definition of h(c(·)), together with (19) and (20), we conclude that x_k + t_{k,j}(d_k + d̃_k) must be acceptable for the filter and for x_k, which is a contradiction.
Case 2. Consider h(c(x_k)) > 0. Expanding F by Taylor's formula, where ξ denotes some point on the line segment from x_k to x_k + t_{k,j}(d_k + d̃_k), and using the fact that x_k is acceptable for the filter, one of the two acceptance inequalities holds. Similar to Case 1, we can also get the corresponding relation. From the assumption, x_k + t_{k,j}(d_k + d̃_k) is not acceptable for the filter, and we have

F(x_k + t_{k,j}(d_k + d̃_k)) > F(x_k) − γ h(c(x_k)).
In the remainder of this section, we show the global convergence of the algorithm.

Theorem 9. Suppose that H1-H3 hold, and let {x_k} be the sequence of iterates produced by Algorithm A. The algorithm either stops at a KKT point x_k of problem (1) in a finite number of steps or generates an infinite sequence {x_k} such that each accumulation point x* of {x_k} is a KKT point of problem (1).
Proof. The first statement is easy to show, the only stopping point being in Step 1. Thus, assume that the algorithm generates an infinite sequence {x_k}. Since {(x_k, d_k, λ_k, H_k)} is bounded under the above-mentioned assumptions, we can assume without loss of generality that there exists an infinite index set K such that x_k → x*, d_k → d*, λ_k → λ*, and H_k → H* for k ∈ K, k → ∞. Obviously, according to Lemma 6, it is only necessary to prove that d* = 0.
Let K_1 = {k ∈ K | z_k + (1/2) d_k^T H_k d_k > 0} ⊆ K; two cases need to be considered.

Case 1. K_1 is an infinite index set. Suppose by contradiction that d* ≠ 0. Passing to the limit for k ∈ K_1, k → ∞, it follows that the corresponding quadratic programming subproblem (32) at x* has a nonempty feasible set. Moreover, from d* ≠ 0 and Theorem 2.4 in [9], it is not difficult to show that (d*, z*) is the unique solution of (32). Considering the KKT conditions of problem (14), we then obtain, for some integer k_0, a bound implying that ‖d_k‖ → 0 on K_1, which contradicts d* ≠ 0. Hence d* = 0, and thereby x* is a KKT point of problem (1).

Numerical Experiments
In this section, we select some problems from [9,10] to show the efficiency of the algorithm proposed in Section 2. Some preliminary numerical experiments were carried out on a computer with an Intel(R) Celeron(R) 2.40 GHz CPU. The code of the proposed algorithm is written in MATLAB 7.0, and the optimization toolbox is used to solve the quadratic programs (14) and (15). The results show that the proposed algorithm is efficient.
During the numerical experiments, the parameters are chosen as follows.
(2) H_k is updated by a BFGS formula similar to that in [15]:

H_{k+1} = H_k − (H_k s_k s_k^T H_k)/(s_k^T H_k s_k) + (ŷ_k ŷ_k^T)/(s_k^T ŷ_k),

where s_k = x_{k+1} − x_k and ŷ_k = θ_k y_k + (1 − θ_k) H_k s_k is the damped modification of the gradient difference y_k, with θ_k ∈ (0, 1] chosen so that s_k^T ŷ_k ≥ 0.2 s_k^T H_k s_k.

(3) In the implementation, the stopping criterion of Step 2 is changed to: if ‖d_k‖ ≤ 10^{-6}, STOP.
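The damped BFGS update of item (2) can be sketched as follows, assuming Powell's standard damping rule with constant 0.2; the variable names are illustrative.

```python
import numpy as np

# One damped BFGS update: replace y by a convex combination with H s so
# that the curvature condition s^T yhat >= 0.2 s^T H s holds, keeping
# the updated matrix symmetric positive definite.
def damped_bfgs(H, s, y):
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    theta = 1.0 if sy >= 0.2 * sHs else 0.8 * sHs / (sHs - sy)
    yhat = theta * y + (1.0 - theta) * Hs
    return (H - np.outer(Hs, Hs) / sHs
              + np.outer(yhat, yhat) / (s @ yhat))

# Example step: H = I, s = (1, 0), y = (0.5, 0); curvature holds, so
# no damping is triggered (theta = 1).
H1 = damped_bfgs(np.eye(2), np.array([1.0, 0.0]), np.array([0.5, 0.0]))
```

Damping is what allows the quasi-Newton matrices to stay positive definite even when the true curvature along s is small or negative.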
The algorithm has been tested on some problems from [9,10]. The results are summarized in Tables 1 and 2. The columns of these tables have the following meanings: No.: the number of the test problem in [9,10]; n: the dimension of the problem; m: the number of objective functions; m_1: the number of inequality constraints; NT: the number of iterations; IP: the initial point; LWM: the proposed Algorithm A; XUE: the method in [9]; RNM: the method in [10]; ZZM: the method in [21]; FV: the final value of the objective function.
In Table 2, the performance of algorithm LWM is compared with that of the other algorithms. For problems 1 and 2, the results we obtain are slightly better than those in [9] when an appropriate initial point is chosen. From the iteration results for test problems 3 to 7, our method appears somewhat more efficient than those in [10,21] in terms of the number of iterations.

Concluding Remarks
In this paper, we propose a filter method combined with a sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of a pivoting operation procedure, an active constraint subset is first obtained. At each iteration, a main search direction is obtained by solving only one quadratic programming subproblem, which is feasible at each iteration point; by using the filter technique, no penalty function is needed. Then, a correction direction is yielded by solving another quadratic program, in order to avoid the Maratos effect and to guarantee the global convergence properties under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective. However, to prove that our algorithm is globally convergent, we impose some rigorous conditions, such as hypotheses H2-H3; we hope to remove them in future work. In addition, some problems are worth further discussion, such as extending the algorithm to problems with both inequality and equality constraints, or obtaining the main search direction by other techniques, for example, the sequential systems of linear equations technique.

Remark 4.
Step 1.1-Step 1.3 and Step 4-Step 6 are called the inner loop iteration, while Step 1-Step 7 are called the outer loop steps.

Table 1 :
The information of numerical experiments.

Table 2 :
Test results obtained by Algorithm A and other related algorithms.