Journal of Applied Mathematics (Hindawi Publishing Corporation)
ISSN 1110-757X (print), 1687-0042 (online) · Article ID 293475 · doi:10.1155/2014/293475

Research Article

Improved Filter-SQP Algorithm with Active Set for Constrained Minimax Problems

Zhijun Luo (1) and Lirong Wang (2)
Academic Editor: Kazutake Komori

(1) Department of Mathematics and Econometrics, Hunan University of Humanities, Science and Technology, Loudi 417000, China
(2) Department of Information Science and Engineering, Hunan University of Humanities, Science and Technology, Loudi 417000, China

Received 15 March 2014; Revised 19 July 2014; Accepted 25 August 2014; Published 2 September 2014

Copyright © 2014 Zhijun Luo and Lirong Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An improved filter-SQP algorithm with an active set for constrained finite minimax problems is proposed. First, an active constraint subset is obtained by a pivoting operation procedure. Then, a new quadratic programming (QP) subproblem is constructed based on this active constraint subset. The main search direction d_k is obtained by solving the QP subproblem, which is feasible at every iteration point; by using the filter technique, no penalty function is needed. Under suitable conditions, the global convergence of the algorithm is established. Finally, numerical results are reported to show the effectiveness of the proposed algorithm.

1. Introduction

Many real-life problems in engineering, economics, management, finance, and other fields can be described as minimax problems, in which one minimizes the maximum of a collection of functions (see [1, 2]). In this paper, we consider the following constrained minimax optimization problem:

(1) min_{x ∈ R^n} F(x)  s.t.  g_i(x) ≤ 0, i ∈ I = {1, 2, …, m_1},

where

(2) F(x) = max{ f_j(x) | j ∈ J = {1, 2, …, m} },

and f_j(x), g_i(x) : R^n → R are continuously differentiable. For convenience, we denote

(3) J(x) = { j ∈ J | f_j(x) = F(x) },  I(x) = { i ∈ I | g_i(x) = φ(x) ≡ max{ g_i(x), i ∈ I; 0 } }.

Obviously, the objective function F(x) is not necessarily differentiable even if all the f_j(x), j ∈ J, are differentiable. Consequently, classical algorithms for smooth optimization may fail to reach an optimum when applied directly to the constrained minimax problem (1). In view of the practical value of minimax problems, many methods have been proposed for solving problem (1). For example, in [3, 4], the minimax problem is viewed as an unconstrained nonsmooth optimization problem, which can be solved by general methods such as subgradient methods, bundle methods, and cutting plane methods. Another class of methods, the so-called smoothing methods, transforms the minimax problem (1) into the following equivalent smooth constrained nonlinear program:

(4) min z  s.t.  f_j(x) ≤ z, j ∈ J,  g_i(x) ≤ 0, i ∈ I,

where z ∈ R is an artificial variable. From problem (4), the Karush-Kuhn-Tucker (KKT) conditions of (1) can be stated as follows:

(5) ∑_{j∈J} λ_j ∇f_j(x) + ∑_{i∈I} μ_i ∇g_i(x) = 0,
    λ_j ≥ 0, f_j(x) − F(x) ≤ 0, λ_j (f_j(x) − F(x)) = 0, j ∈ J,
    μ_i ≥ 0, g_i(x) ≤ 0, μ_i g_i(x) = 0, i ∈ I,
    ∑_{j∈J} λ_j = 1,

where λ_j, μ_i are the corresponding multipliers.
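To make the smooth reformulation (4) concrete, the following sketch (not the paper's Algorithm A) solves a tiny two-function minimax problem with SciPy's general-purpose SLSQP solver; the test functions, the constraint, and the starting point are hypothetical choices for illustration only.

```python
# Illustrative sketch of reformulation (4): minimize the artificial
# variable z subject to f_j(x) <= z and g_i(x) <= 0.
# Toy problem: minimize max(x^2 + y^2, (x-2)^2 + y^2)  s.t.  -x <= 0.
import numpy as np
from scipy.optimize import minimize

f = [lambda v: v[0] ** 2 + v[1] ** 2,
     lambda v: (v[0] - 2.0) ** 2 + v[1] ** 2]
g = [lambda v: -v[0]]                      # g_i(x) <= 0

def objective(w):                          # w = (x, z); minimize z
    return w[-1]

cons = ([{'type': 'ineq', 'fun': (lambda w, fj=fj: w[-1] - fj(w[:-1]))}
         for fj in f] +                    # enforces f_j(x) <= z
        [{'type': 'ineq', 'fun': (lambda w, gi=gi: -gi(w[:-1]))}
         for gi in g])                     # enforces g_i(x) <= 0

w0 = np.array([1.0, 5.0, 50.0])            # (x_0, z_0), z_0 >= F(x_0)
res = minimize(objective, w0, constraints=cons, method='SLSQP')
x_star, z_star = res.x[:-1], res.x[-1]
```

For this toy problem the two quadratics balance at x = (1, 0) with optimal value 1, which SLSQP recovers from the feasible starting pair above.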
In view of the equivalent relationship between the KKT points of (4) and the stationary points of (1), many methods focus on finding a stationary point of problem (1), namely, solving (5), and a number of algorithms have been proposed for the minimax problem. For finding the minima of convex functions that are not necessarily differentiable, in , a nonmonotone line search is combined with a second-order correction technique, which can effectively avoid the Maratos effect; many other effective algorithms for solving minimax problems have also been presented, such as .

To solve the minimax problem efficiently, that is, to save time and reduce both the amount of computation and the number of iterations, a fast convergent algorithm is desirable. It is well known that sequential quadratic programming (SQP) can be considered one of the best nonlinear programming methods for smooth constrained optimization problems (see, e.g., ), outperforming other nonlinear programming methods in terms of efficiency, accuracy, and percentage of successful solutions over a large number of test problems. Hence, some authors have applied SQP techniques directly to minimax problems and obtained satisfactory results (such as [5, 9]). For a typical SQP method, the main procedure for the minimization problem

(6) min { f(x) | g_j(x) ≤ 0, j ∈ J_0 }

is to solve the following quadratic program:

(7) min ∇f(x)^T d + (1/2) d^T B d  s.t.  g_j(x) + ∇g_j(x)^T d ≤ 0, j ∈ J_0,

where J_0 is an index set and B is an approximation of the Hessian matrix of the Lagrangian function. Since the objective function F(x) contains the max operator, it is continuous but nondifferentiable even if every function f_j(x) (j ∈ J) is differentiable. Hence, this method may fail to reach an optimum for the minimax problem. In view of this, and in combination with (4), in a similar way to , one considers the following quadratic program obtained by introducing an auxiliary variable z:

(8) min z + (1/2) d^T H d  s.t.  f_j(x) + ∇f_j(x)^T d ≤ z, j ∈ J,  g_i(x) + ∇g_i(x)^T d ≤ 0, i ∈ I,

where H is a symmetric positive definite matrix. However, it is well known that the solution d of (8) may not be a feasible descent direction and cannot avoid the Maratos effect. Recently, many researchers have extended the popular SQP scheme to minimax problems (see , etc.). Jian et al.  and Hu et al.  use a pivoting operation to generate an ε-active constraint subset associated with the current iteration point.
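The direction-finding subproblem (8) can be sketched as follows; SciPy's SLSQP is used here simply as a convenient solver for a convex QP with linear constraints (a dedicated QP code would normally be used), and the helper name `minimax_qp` and its callable-based signature are illustrative assumptions.

```python
# Sketch: assemble and solve QP (8) at a point x, given callables for
# f_j, grad f_j, g_i, grad g_i and a positive definite model Hessian H.
import numpy as np
from scipy.optimize import minimize

def minimax_qp(x, fs, grads_f, gs, grads_g, H):
    """min z + 0.5 d^T H d
       s.t. f_j(x) + grad_f_j(x)^T d <= z,  g_i(x) + grad_g_i(x)^T d <= 0."""
    n = len(x)

    def obj(w):                      # w = (d, z)
        d, z = w[:n], w[n]
        return z + 0.5 * d @ H @ d

    cons = [{'type': 'ineq',         # SLSQP convention: fun(w) >= 0
             'fun': (lambda w, fv=fj(x), gv=gfj(x): w[n] - fv - gv @ w[:n])}
            for fj, gfj in zip(fs, grads_f)]
    cons += [{'type': 'ineq',
              'fun': (lambda w, gv=gi(x), ggv=ggi(x): -(gv + ggv @ w[:n]))}
             for gi, ggi in zip(gs, grads_g)]

    w0 = np.zeros(n + 1)
    w0[n] = max(f(x) for f in fs)    # start from d = 0, z = F(x)
    res = minimize(obj, w0, constraints=cons, method='SLSQP')
    return res.x[:n], res.x[n]
```

When d = 0, z = F(x) is always feasible for (8), which is why the sketch can start the solver there.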
At each iteration of their proposed algorithm, a main search direction is obtained by solving a reduced quadratic program which always has a solution.

As an alternative to merit functions, in 2002, Fletcher et al.  proposed the filter-SQP method for inequality constrained optimization problems in place of the classical merit-function SQP methods. The main idea of this method is that a trial point is accepted if it improves either the objective function or the constraint violation. Furthermore, the global and superlinear local convergence of a trust-region filter-SQP method was shown by Ulbrich . In recent years, the filter idea has attracted considerable attention, and many filter algorithms have been proposed , and so forth.

In this paper, an improved filter-SQP algorithm with an active set for the constrained minimax problem (1) is proposed. At each iteration of our algorithm, an active constraint subset is first obtained by a pivoting operation procedure, and then a new quadratic programming (QP) subproblem is constructed based on this active constraint subset. To obtain a main search direction d_k for (1), we only need to solve this QP subproblem, which is feasible at every iteration point; by using the filter technique, no penalty function is needed. Furthermore, under mild conditions, the global convergence of our algorithm is established.

The remainder of this paper is organized as follows. The improved filter-SQP algorithm is proposed in Section 2. In Section 3, we prove that the algorithm is globally convergent. Some preliminary numerical tests are reported in Section 4, and concluding remarks are given in the last section.

2. Improved Filter-SQP Algorithm

Following the traditional filter technique, define the violation function h(c(x)) as follows:

(9) h(c(x)) = ||c^+(x)||,

where

(10) c_j(x) = f_j(x) − F(x) = f_j(x) − z  (letting z ≡ F(x)),  c_j^+(x) = max{ 0, c_j(x), g_i(x) },  i ∈ I, j ∈ J.

It is easy to see that h(c(x)) = 0 if and only if x is a feasible point.
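A minimal sketch of the violation measure (9)-(10): the infinity norm of the positive parts is assumed here, since the paper only writes a generic norm, and the callable-based interface is an illustrative choice.

```python
# Sketch of h(c(x)) from (9)-(10): c_j = f_j(x) - z for the objective
# pieces, g_i(x) for the constraints, then the positive-part sup-norm.
def violation(x, z, fs, gs):
    """h(c(x)) with z held at (an estimate of) F(x)."""
    c = [f(x) - z for f in fs] + [g(x) for g in gs]
    return max(0.0, max(c))   # ||c^+(x)||_inf

# violation vanishes exactly when x is feasible and z >= max_j f_j(x)
```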

Definition 1.

A pair (h(c(x_k)), z(x_k)) is said to dominate another pair (h(c(x_l)), z(x_l)) if and only if both h(c(x_k)) ≤ h(c(x_l)) and z(x_k) ≤ z(x_l).

Definition 2.

A filter is a list of pairs ( h ( c ( x k ) ) , z ( x k ) ) such that no pair dominates any other. A pair ( h ( c ( x k ) ) , z ( x k ) ) is said to be acceptable for the filter if it is not dominated by any point in the filter.

We use F_k ⊆ { (h(c(x_l)), z(x_l)) ∈ R², l < k } to denote the set of iteration indices l (l < k) such that (h(c(x_l)), z(x_l)) is an entry in the current filter. Then, we say that a point x_k is acceptable for the filter if and only if

(11) h(c(x_k)) ≤ (1 − γ) h(c(x_l))  or  z(x_k) ≤ z(x_l) − γ h(c(x_l)),

for all (h(c(x_l)), z(x_l)) ∈ F_k, where γ > 0 is close to zero. We may also update the filter, which means that the pair (h(c(x_k)), z(x_k)) is added to the list of pairs in the filter, and any pairs in the filter that are dominated by (h(c(x_k)), z(x_k)) are removed.
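The acceptance test (11) and the filter update can be sketched as follows; the value of γ and the plain-list representation of the filter are illustrative assumptions.

```python
GAMMA = 0.1  # illustrative; the paper only requires 0 < gamma < beta < 1

def acceptable(h_new, z_new, filt):
    """Acceptance test (11) against every entry (h_l, z_l) of the filter."""
    return all(h_new <= (1 - GAMMA) * h_l or z_new <= z_l - GAMMA * h_l
               for h_l, z_l in filt)

def update(filt, h_new, z_new):
    """Add the new pair and drop the entries it dominates (Definition 1)."""
    kept = [(h_l, z_l) for h_l, z_l in filt
            if not (h_new <= h_l and z_new <= z_l)]
    kept.append((h_new, z_new))
    return kept
```

Note that the update keeps the filter free of dominated pairs, which is exactly the invariant in Definition 2.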

As a criterion for accepting or rejecting a trial step, we use the filter technique combined with SQP method.

Algorithm A

Step 0. Given an initial point x_0 ∈ R^n and a symmetric positive definite matrix H_0 ∈ R^{n×n}, choose parameters ε_0 > 0, α ∈ (0, 1/2), τ ∈ (2, 3), and 0 < γ < β < 1. Set k = 0 and F_0 = {(u, +∞)} with some u ≥ β h(c(x_0)).

Step 1. Computation of an active constraint set is as follows.

Step  1.1. Set i = 0 and ε k i = ε 0 .

Step 1.2. Generate the ε-active constraint subsets I(x_k, ε_k^i), J(x_k, ε_k^i) and the matrix M_k by

(12) I(x_k, ε_k^i) = { i ∈ I | −ε_k^i ≤ g_i(x_k) − φ(x_k) ≤ 0 },
     J(x_k, ε_k^i) = { j ∈ J | −ε_k^i ≤ f_j(x_k) − F(x_k) ≤ 0 },
     M_k = [ (−1, ∇f_j(x_k)^T)^T, j ∈ J(x_k, ε_k^i); (0, ∇g_i(x_k)^T)^T, i ∈ I(x_k, ε_k^i) ].

Step 1.3. If det(M_k^T M_k) ≥ ε_k^i, set

(13) I_k = I(x_k, ε_k^i),  J_k = J(x_k, ε_k^i),  L_k = I_k ∪ J_k,

and go to Step 2; otherwise, let i = i + 1, set ε_k^i = 0.5 ε_k^{i−1}, and repeat Step 1.2.
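Steps 1.1-1.3 can be sketched as the following loop, which shrinks ε until the determinant test det(M^T M) ≥ ε passes; the callable-based interface is a hypothetical convenience, and the sketch assumes at least one constraint is ε-active so that M is nonempty.

```python
# Sketch of Step 1: identify the eps-active sets of (12) and apply the
# pivoting test of Step 1.3, halving eps on failure.
import numpy as np

def active_set(x, F_val, phi_val, fs, grads_f, gs, grads_g, eps0=0.5):
    """Return the eps-active index sets I_k, J_k and the final eps."""
    eps = eps0
    while True:
        J_act = [j for j, f in enumerate(fs) if -eps <= f(x) - F_val <= 0]
        I_act = [i for i, g in enumerate(gs) if -eps <= g(x) - phi_val <= 0]
        cols = ([np.concatenate(([-1.0], grads_f[j](x))) for j in J_act] +
                [np.concatenate(([0.0], grads_g[i](x))) for i in I_act])
        M = np.column_stack(cols)            # columns as in (12)/(16)
        if np.linalg.det(M.T @ M) >= eps:    # pivoting test of Step 1.3
            return I_act, J_act, eps
        eps *= 0.5                           # shrink eps, repeat Step 1.2
```

Lemma 5 below guarantees that, under the stated hypotheses, this loop terminates after finitely many halvings.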

Step 2. Compute (d_k, z_k) by solving the quadratic program (14) at x_k:

(14) min z + (1/2) d^T H_k d  s.t.  f_j(x_k) + ∇f_j(x_k)^T d − F(x_k) ≤ z, j ∈ J_k,  g_i(x_k) + ∇g_i(x_k)^T d ≤ 0, i ∈ I_k.

Let (λ_k, μ_k) be the corresponding KKT multiplier vector. If d_k = 0, then stop.

Step 3. Compute (d̃_k, z̃_k) by solving the quadratic program (15):

(15) min z + (1/2)(d_k + d)^T H_k (d_k + d)  s.t.  f_j(x_k + d_k) + ∇f_j(x_k + d_k)^T d − F(x_k + d_k) ≤ z,  g_i(x_k + d_k) + ∇g_i(x_k + d_k)^T d ≤ −||d_k||^τ,

where j ∈ J_k, i ∈ I_k. Let (λ̃_k, μ̃_k) be the corresponding KKT multiplier vector. If ||d̃_k|| > ||d_k||, set d̃_k = 0; otherwise, let d_k = d_k + d̃_k.

Step 4. Initial line search: set α k , 0 = 1 , l = 0 .

Step 5. If x k + 1 = x k + α k , l d k is not acceptable for the filter, go to Step 6; otherwise let α k = α k , l , x k + 1 = x k + α k d k , and add x k + 1 to the filter; go to Step 7.

Step 6. Set α k , l + 1 = α k , l / 2 , l = l + 1 , and go to Step 5.

Step 7. Update filter F k to F k + 1 , and obtain H k + 1 by updating the positive definite matrix H k using some quasi-Newton formulas. Set k k + 1 . Go back to Step 1.
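The Step 4-Step 6 backtracking can be sketched as a halving loop driven by the filter acceptance test; `evaluate` and `acceptable` are hypothetical callables standing in for (9)-(11), and the iteration cap is only a safeguard (Lemma 7 below shows the inner loop terminates finitely).

```python
# Sketch of Steps 4-6: halve alpha until x_k + alpha*d_k is acceptable
# to the filter.
def backtrack(x, d, evaluate, acceptable, filt, max_halvings=30):
    """evaluate(point) -> (h, z); return the accepted step length alpha."""
    alpha = 1.0                               # Step 4: alpha_{k,0} = 1
    for _ in range(max_halvings):
        trial = [xi + alpha * di for xi, di in zip(x, d)]
        h_new, z_new = evaluate(trial)
        if acceptable(h_new, z_new, filt):    # Step 5: test (11)
            return alpha
        alpha *= 0.5                          # Step 6: halve the step
    raise RuntimeError("no acceptable step found")
```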

Remark 3.

In Step 1, by using the pivoting operation (POP), we obtain an active set L_k = I_k ∪ J_k ⊆ I ∪ J. Based on this ε-active constraint subset, we construct the new QP (14), which is helpful for discussing the convergence of our algorithm.

Remark 4.

Steps 1.1–1.3 and Steps 4–6 are called inner iterations, while Steps 1–7 are called outer iterations.

3. Global Convergence of Algorithm

In this section, we analyze the convergence of the algorithm. The following general assumptions are supposed to hold throughout this paper.

(H1) The functions f_j(x), j ∈ J, and g_i(x), i ∈ I, are continuously differentiable.

(H2) For all x ∈ R^n, the set of vectors

(16) { (−1, ∇f_j(x)^T)^T, j ∈ J(x); (0, ∇g_i(x)^T)^T, i ∈ I(x) }

is linearly independent.

(H3) There exist a, b > 0 such that a||d||² ≤ d^T H_k d ≤ b||d||² for all k and all d ∈ R^n.

Similar to Lemmas 2.1 and 2.3 in , the following lemma holds which describes some beneficial properties of the pivoting operation POP.

Lemma 5.

Suppose that H1–H3 hold and let x_k ∈ R^n. Then

the pivoting operation POP can be finished in a finite number of computations; that is, there is no infinite loop between Step 1.2 and Step 1.3;

if the sequence {x_k} of points is bounded, then there exists a constant ε̄ > 0 such that the associated sequence {ε_k^i} of parameters generated by POP satisfies ε_k^i ≥ ε̄ for all k.

Lemma 6.

Suppose that H1–H3 hold, the matrix H_k is symmetric positive definite, and (d_k, z_k) is an optimal solution of (14). Then

z_k + (1/2)(d_k)^T H_k d_k ≤ 0 and z_k ≤ 0;

if d_k = 0, then x_k is a KKT point of problem (1);

if d_k ≠ 0, then z_k < 0; moreover, d_k is a descent direction of F(x) at the point x_k.

Lemma 7.

If d_k ≠ 0, Steps 4–6 of Algorithm A are well defined; that is, the inner loop of Step 5-Step 6 terminates finitely.

Proof.

By contradiction, if the conclusion is false, then Algorithm A runs infinitely between Step 5 and Step 6, so we have α_{k,l} → 0 (l → ∞) and the point x_k + α_{k,l}(d_k + d̃_k) is never acceptable for the filter. The following two cases need to be considered.

Case 1. Consider h ( c ( x k ) ) = 0 .

From the definition of h(c(x_k)), we can obtain

(17) h(c(x_k + α_{k,l}(d_k + d̃_k))) = max{ 0, f_j(x_k) + α_{k,l} ∇f_j(x_k)^T d_k − z_k + o(α_{k,l}), g_i(x_k) + α_{k,l} ∇g_i(x_k)^T d_k + o(α_{k,l}) }.

Since d_k is a solution of problem (14), we have

(18) ∇f_j(x_k)^T d_k < 0,  ∇g_i(x_k)^T d_k < 0.

Together with α_{k,l} → 0, there exists a constant β such that

(19) h(c(x_k + α_{k,l}(d_k + d̃_k))) ≤ max{ 0, β(f_j(x_k) − z_k), β g_i(x_k) } = β h(c(x_k)) ≤ (1 − γ) h(c(x_k)).

Moreover, since f_j(x_k) − F(x_k) < 0 and 1 − α_{k,l} > 0, we have

(20) f_j(x_k + α_{k,l}(d_k + d̃_k)) − F(x_k) = f_j(x_k) + α_{k,l} ∇f_j(x_k)^T d_k + o(α_{k,l}) − F(x_k) = α_{k,l}(f_j(x_k) + ∇f_j(x_k)^T d_k − F(x_k)) + (1 − α_{k,l})(f_j(x_k) − F(x_k)) + o(α_{k,l}) ≤ α_{k,l} z_k + o(α_{k,l}).

From (19) and (20), we conclude that x_k + α_{k,l}(d_k + d̃_k) must be acceptable for the filter and for x_k, which is a contradiction.

Case 2. Consider h(c(x_k)) > 0. By Taylor's formula, we have

(21) f_j(x_k + α_{k,l}(d_k + d̃_k)) − F(x_k) = f_j(x_k) + α_{k,l} ∇f_j(x_k)^T (d_k + d̃_k) − F(x_k) + (α_{k,l}²/2) d_k^T ∇²f_j(y) d_k ≤ (α_{k,l}²/2) d_k^T ∇²f_j(y) d_k,

where y denotes some point on the line segment from x_k to x_k + α_{k,l}(d_k + d̃_k). Since x_k is acceptable for the filter, we have

(22) h(c(x_k)) ≤ (1 − γ) h(c(x_l)),

or

(23) z(x_k) ≤ z(x_l) − γ h(c(x_l)).

Similar to Case 1, we can also obtain the relation

(24) h(c(x_k + α_{k,l}(d_k + d̃_k))) ≤ (1 − γ) h(c(x_k)).

By assumption, x_k + α_{k,l}(d_k + d̃_k) is not acceptable for the filter, so

(25) h(c(x_k + α_{k,l}(d_k + d̃_k))) > (1 − γ) h(c(x_l)),
(26) z(x_k + α_{k,l}(d_k + d̃_k)) > z(x_l) − γ h(c(x_l)).

For the point x_k, if (22) holds, then by α_{k,l} → 0 and (18) we have

(27) h(c(x_k + α_{k,l}(d_k + d̃_k))) ≤ max{ 0, f_j(x_k) − z_k, g_i(x_k) } = h(c(x_k)) ≤ (1 − γ) h(c(x_l)),

which contradicts (25). If inequality (23) holds, then by α_{k,l} → 0 and (18) we have

(28) z(x_k + α_{k,l}(d_k + d̃_k)) ≤ z_k ≤ z(x_l) − γ h(c(x_l)),

which contradicts (26). From the above analysis, the desired conclusion holds.

Lemma 8.

Suppose that infinitely many points are added to the filter; then lim_{k→∞} h(c(x_k)) = 0.

In the remainder of this section, we show the global convergence of the algorithm.

Theorem 9.

Suppose that H1–H3 hold and let {x_k} be the sequence of iterates produced by Algorithm A. Then the algorithm either stops at a KKT point x_k of problem (1) in a finite number of steps or generates an infinite sequence {x_k} such that each accumulation point x* of {x_k} is a KKT point of problem (1).

Proof.

The first statement is easy to show, the only stopping point being in Step 2. Thus, assume that the algorithm generates an infinite sequence {x_k}; since {(d_k, z_k, λ_k, μ_k)} is bounded under the above-mentioned assumptions, we may assume without loss of generality that there exists an infinite index set K such that

(29) x_k → x*, H_k → H*, d_k → d*, z_k → z*, λ_k → λ*, μ_k → μ*, k ∈ K.

Obviously, according to Lemma 6, it is only necessary to prove that d* = 0.

Let K_1 = { k ∈ K | z_k + (1/2) d_k^T H_k d_k > 0 } ⊆ K; two cases need to be considered.

Case 1. K_1 is an infinite index set. Suppose by contradiction that d* ≠ 0. Since

(30) f_j(x_k) + ∇f_j(x_k)^T d_k − F(x_k) ≤ z_k, j ∈ J,  g_i(x_k) + ∇g_i(x_k)^T d_k ≤ 0, i ∈ I,

taking the limit k ∈ K, k → ∞, we obtain

(31) f_j(x*) + ∇f_j(x*)^T d* − F(x*) ≤ z*, j ∈ J,  g_i(x*) + ∇g_i(x*)^T d* ≤ 0, i ∈ I.

This shows that the corresponding quadratic programming subproblem (32) at x*,

(32) min z + (1/2) d^T H* d  s.t.  f_j(x*) + ∇f_j(x*)^T d − F(x*) ≤ z, j ∈ J,  g_i(x*) + ∇g_i(x*)^T d ≤ 0, i ∈ I,

has a nonempty feasible set. Moreover, from d* ≠ 0 and Theorem 2.4 in , it is not difficult to show that (z*, d*) is the unique solution of (32). So it holds that

(33) z* < 0,  ∇f_j(x*)^T d* ≤ z* < 0, j ∈ J(x*),  ∇g_i(x*)^T d* < 0, i ∈ I(x*).

Considering the KKT conditions of problem (14), together with the complementarity relations λ_j*(f_j(x*) + ∇f_j(x*)^T d* − F(x*) − z*) = 0 and ∑_{j∈J} λ_j* = 1, we have

(34) ∑_{j∈J} λ_j* ∇f_j(x*)^T d* = z* − ∑_{j∈J} λ_j* (f_j(x*) − F(x*)),  z* + d*^T H* d* = ∑_{j∈J} λ_j* (f_j(x*) − F(x*)) + ∑_{i∈I} μ_i* g_i(x*) ≤ 0,  hence  z* + (1/2) d*^T H* d* ≤ −(1/2) d*^T H* d* < 0,

which contradicts the definition of K_1.

Case 2. K_1 is a finite index set. That means z_k < −(1/2) d_k^T H_k d_k for all sufficiently large k. There exists a constant ᾱ > 0 such that, for α ∈ (0, ᾱ), we have

(35) F(x_k) − f_j(x_k + α(d_k + d̃_k)) = −f_j(x_k) − α ∇f_j(x_k)^T d_k − o(α) + F(x_k) = −α(f_j(x_k) + ∇f_j(x_k)^T d_k − F(x_k)) − (1 − α)(f_j(x_k) − F(x_k)) + o(α) ≥ −α z_k + o(α) ≥ (α/2) d_k^T H_k d_k.

Then, for some integer i_0, we have

(36) ∞ > ∑_{k=i_0}^∞ (F(x_k) − f_j(x_k + α(d_k + d̃_k))) ≥ ∑_{k=i_0}^∞ (aα/2) ||d_k||².

This means ||d_k||² → 0. Thereby, x* is a KKT point of problem (1).

4. Numerical Experiments

In this section, we select some problems from [9, 10] to show the efficiency of the algorithm of Section 2. The preliminary numerical experiments were carried out on a computer with an Intel(R) Celeron(R) 2.40 GHz CPU. The proposed algorithm was coded in MATLAB 7.0, and the Optimization Toolbox was used to solve the quadratic programs (14) and (15). The results show that the proposed algorithm is efficient.

During the numerical experiments, the parameters are chosen as follows:

ε_0 = 0.5, α = 0.25, γ = 0.1, β = 0.2, τ = 2.25, and H_0 = I, the n × n identity matrix.

H_k is updated by a BFGS formula similar to . Consider

(37) H_{k+1} = H_k − (H_k s_k (H_k s_k)^T)/(s_k^T H_k s_k) + (η̂_k η̂_k^T)/(s_k^T η̂_k),

where

(38) s_k = x_{k+1} − x_k,  η̂_k = θ_k γ̂_k + (1 − θ_k) H_k s_k,
     γ̂_k = ∇_x L(x_{k+1}, λ_k, μ_k) − ∇_x L(x_k, λ_k, μ_k),
     ∇_x L(x, λ, μ) = ∑_{j∈J} λ_j ∇f_j(x) + ∑_{i∈I} μ_i ∇g_i(x),
     θ_k = 1 if s_k^T γ̂_k ≥ 0.2 s_k^T H_k s_k,  θ_k = (0.8 s_k^T H_k s_k)/(s_k^T H_k s_k − s_k^T γ̂_k) otherwise.
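The damped update (37)-(38) can be sketched directly; this is Powell's modification, in which θ_k keeps s_k^T η̂_k positive so that H_{k+1} stays positive definite even under negative curvature.

```python
# Sketch of the damped BFGS update (37)-(38).
import numpy as np

def damped_bfgs(H, s, gamma_hat):
    """One damped BFGS update of H with step s and gradient change gamma_hat."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ gamma_hat
    # Powell damping: theta_k as in (38)
    theta = 1.0 if sy >= 0.2 * sHs else 0.8 * sHs / (sHs - sy)
    eta = theta * gamma_hat + (1.0 - theta) * Hs      # eta_hat_k
    return H - np.outer(Hs, Hs) / sHs + np.outer(eta, eta) / (s @ eta)
```

With s = (1, 0)^T and γ̂ = (−1, 0)^T (negative curvature) starting from H = I, the damping gives θ = 0.4 and the update returns diag(0.2, 1), which is still positive definite.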

In the implementation, the stopping criterion of Step 2 is changed to: if ||d_k|| ≤ 10^{−6}, STOP.

The algorithm has been tested on some problems from [9, 10]. The results are summarized in Tables 1 and 2. The columns of these tables have the following meanings:

No.: the number of the test problem in [9, 10];

n : the dimension of the problem;

m : the number of objective functions;

m 1 : the number of inequality constraints;

NT: the number of iterations;

IP: the initial point;

LWM: the proposed Algorithm A;

XUE: the method in ;

RNM: the method in ;

ZZM: the method in ;

FV: the final value of the objective function.

Table 1: The information of numerical experiments.

No.                  n   m   m_1  IP
1 (problem 1 in )    2   3   0    (1, 5)^T
2 (problem 4 in )    2   3   0    (3, 1)^T
3 (problem 1 in )    2   3   2    (0, 0)^T
4 (problem 2 in )    2   6   2    (1, 3)^T
5 (problem 4 in )    2   3   2    (4, 2)^T
6 (problem 5 in )    4   4   3    (0, 1, 1, 0)^T
7 (problem 6 in )    2   3   2    (0, 1)^T

Table 2: Test results obtained by Algorithm A and other related algorithms.

No. Method NT FV
1 (problem 1 in ) LWM 9 1.952224
XUE 11 1.952224
ZZM 11 1.952224

2 (problem 4 in ) LWM 11 0.616234
XUE 15 0.616234
ZZM 11 0.616234

3 (problem 1 in ) LWM 5 1.952224
RNM 10 1.952224
ZZM 10 1.952224

4 (problem 2 in ) LWM 10 0.616432
RNM 15 0.616234
ZZM 12 0.616234

5 (problem 4 in ) LWM 8 2.250000
RNM 15 2.250000
ZZM 11 2.250000

6 (problem 5 in ) LWM 21 −44.000000
RNM 23 −44.000000
ZZM 23 −43.998000

7 (problem 6 in ) LWM 4 2.000000
RNM 4 2.000000
ZZM 4 2.000000

In Table 2, the performance of algorithm LWM is compared with the other algorithms. For problems 1 and 2, the results we obtain are slightly better than those in  when an appropriate initial point is chosen. From the iteration results for test problems 3 to 7, our method appears somewhat more efficient than those in [10, 21] when the number of iterations is considered.

5. Concluding Remarks

In this paper, we propose a filter method combined with a sequential quadratic programming algorithm for inequality constrained minimax problems. With the help of the pivoting operation procedure, an active constraint subset is first obtained. At each iteration, a main search direction is obtained by solving only one quadratic programming subproblem, which is feasible at every iteration point; by using the filter technique, no penalty function is needed. Then, a correction direction is obtained by solving another quadratic program to avoid the Maratos effect and to guarantee the global convergence properties under mild conditions. The preliminary numerical results also show that the proposed algorithm is effective.

However, to show that our algorithm is globally convergent, we impose some rather strong conditions, such as hypotheses H2 and H3; we hope to remove them in future work. In addition, some problems are worthy of further study, such as extending the algorithm to problems with both inequality and equality constraints. The main search direction could also be obtained by other techniques, for example, the sequential systems of linear equations technique.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors are deeply indebted to the editor Professor Wenyu Sun and the anonymous referees whose insightful comments helped the authors a lot to improve the quality of the paper. The first author would also like to thank Professor Zhibin Zhu for valuable work on numerical experiments. This research was supported by Scientific Research Fund of Hunan Provincial Education Department (nos. 12A077,12C0743,13C453, and 14C0609).

References

[1] A. Baums, "Minimax method in optimizing energy consumption in real-time embedded systems," Automatic Control and Computer Sciences, vol. 43, no. 2, pp. 57–62, 2009.
[2] E. Y. Rapoport, "Minimax optimization of stationary states in systems with distributed parameters," Journal of Computer and Systems Sciences International, vol. 52, no. 2, pp. 165–179, 2013.
[3] E. Polak, D. Q. Mayne, and J. E. Higgins, "Superlinearly convergent algorithm for min-max problems," Journal of Optimization Theory and Applications, vol. 69, no. 3, pp. 407–439, 1991.
[4] R. Reemtsen, "A cutting plane method for solving minimax problems in the complex plane," Numerical Algorithms, vol. 2, no. 3-4, pp. 409–436, 1992.
[5] J. L. Zhou and A. L. Tits, "Nonmonotone line search for minimax problems," Journal of Optimization Theory and Applications, vol. 76, no. 3, pp. 455–476, 1993.
[6] L. Grippo, F. Lampariello, and S. Lucidi, "A nonmonotone line search technique for Newton's method," SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.
[7] Y. H. Yu and L. Gao, "Nonmonotone line search algorithm for constrained minimax problems," Journal of Optimization Theory and Applications, vol. 115, no. 2, pp. 419–446, 2002.
[8] F. Wang and Y. Wang, "Nonmonotone algorithm for minimax optimization problems," Applied Mathematics and Computation, vol. 217, no. 13, pp. 6296–6308, 2011.
[9] Y. Xue, "A SQP method for minimax problems," Journal of System Science and Math Science, vol. 22, no. 3, pp. 355–364, 2002.
[10] B. Rustem and Q. Nguyen, "An algorithm for the inequality-constrained discrete min-max problem," SIAM Journal on Optimization, vol. 8, no. 1, pp. 265–283, 1998.
[11] E. Obasanjo, G. Tzallas-Regas, and B. Rustem, "An interior-point algorithm for nonlinear minimax problems," Journal of Optimization Theory and Applications, vol. 144, no. 2, pp. 291–318, 2010.
[12] B. Rustem, S. Žaković, and P. Parpas, "An interior point algorithm for continuous minimax: implementation and computation," Optimization Methods and Software, vol. 23, no. 6, pp. 911–928, 2008.
[13] Y. Feng, L. Hongwei, Z. Shuisheng, and L. Sanyang, "A smoothing trust-region Newton-CG method for minimax problem," Applied Mathematics and Computation, vol. 199, no. 2, pp. 581–589, 2008.
[14] S. P. Han, "A globally convergent method for nonlinear programming," Journal of Optimization Theory and Applications, vol. 22, no. 3, pp. 297–309, 1977.
[15] M. J. D. Powell, "A fast algorithm for nonlinearly constrained optimization calculations," in Numerical Analysis, pp. 144–157, Springer, Berlin, Germany, 1978.
[16] G. He, Z. Gao, and Y. Lai, "New sequential quadratic programming algorithm with consistent subproblems," Science in China A: Mathematics, vol. 40, no. 2, pp. 137–150, 1997.
[17] E. R. Panier and A. L. Tits, "A superlinearly convergent feasible method for the solution of inequality constrained optimization problems," SIAM Journal on Control and Optimization, vol. 25, no. 4, pp. 934–950, 1987.
[18] Z. Wan, "A modified SQP algorithm for mathematical programs with linear complementarity constraints," Acta Scientiarum Naturalium Universitatis Normalis Hunanensis, vol. 26, pp. 9–12, 2001.
[19] Z. Zhu, K. Zhang, and J. Jian, "An improved SQP algorithm for inequality constrained optimization," Mathematical Methods of Operations Research, vol. 58, no. 2, pp. 271–282, 2003.
[20] Z. Luo, G. Chen, S. Luo, and Z. Zhu, "Improved feasible SQP algorithm for nonlinear programs with equality constrained sub-problems," Journal of Computers, vol. 8, no. 6, pp. 1496–1503, 2013.
[21] Z. Zhu and C. Zhang, "A superlinearly convergent sequential quadratic programming algorithm for minimax problems," Journal of Numerical Methods and Applications, vol. 27, no. 4, pp. 15–32, 2005.
[22] J. Jian, R. Quan, and Q. Hu, "A new superlinearly convergent SQP algorithm for nonlinear minimax problems," Acta Mathematicae Applicatae Sinica, vol. 23, no. 3, pp. 395–410, 2007.
[23] Q. Hu, Y. Chen, N. Chen, and X. Li, "A modified SQP algorithm for minimax problems," Journal of Mathematical Analysis and Applications, vol. 360, no. 1, pp. 211–222, 2009.
[24] W. Xue, C. Shen, and D. Pu, "A new non-monotone SQP algorithm for the minimax problem," International Journal of Computer Mathematics, vol. 86, no. 7, pp. 1149–1159, 2009.
[25] J. Jian, X. Zhang, R. Quan, and Q. Ma, "Generalized monotone line search SQP algorithm for constrained minimax problems," Optimization, vol. 58, no. 1, pp. 101–131, 2009.
[26] R. Fletcher, S. Leyffer, and P. L. Toint, "On the global convergence of a filter-SQP algorithm," SIAM Journal on Optimization, vol. 13, no. 1, pp. 44–59, 2002.
[27] S. Ulbrich, "On the superlinear local convergence of a filter-SQP method," Mathematical Programming, vol. 100, no. 1, pp. 217–245, 2004.
[28] K. Su and J. Che, "A modified SQP-filter method and its global convergence," Applied Mathematics and Computation, vol. 194, no. 1, pp. 92–101, 2007.
[29] C. Shen, W. Xue, and X. Chen, "Global convergence of a robust filter SQP algorithm," European Journal of Operational Research, vol. 206, no. 1, pp. 34–45, 2010.
[30] C. Gu and D. Zhu, "A non-monotone line search multidimensional filter-SQP method for general nonlinear programming," Numerical Algorithms, vol. 56, no. 4, pp. 537–559, 2011.