Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2014, Article ID 749475, doi:10.1155/2014/749475

Research Article: A Relax Inexact Accelerated Proximal Gradient Method for the Constrained Minimization Problem of Maximum Eigenvalue Functions

Wei Wang, Shanghua Li, Jingjing Gao, and Yuesheng Xu. School of Mathematics, Liaoning Normal University, Dalian 116029, China

Received 26 February 2014; Accepted 21 June 2014; Published 9 July 2014

Copyright © 2014 Wei Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

For the constrained minimization problem of maximum eigenvalue functions, the objective function is nonsmooth, so the approximate inexact accelerated proximal gradient (AIAPG) method (Wang et al., 2013) can be applied to its smooth approximation. In the problem $\min\{\lambda_{\max}(X)+g(X) : X\in S^n\}$, where $\lambda_{\max}(X)$ is the maximum eigenvalue function and $g(X)$ is a proper lower semicontinuous convex function (possibly nonsmooth), we take $g(X)=\delta_\Omega(X)$, where $\Omega=\{X\in S^n : \mathcal{F}(X)=b,\ X\succeq 0\}$ and $\delta_\Omega(X)$ denotes the indicator function of $\Omega$. The approximate minimizer generated by the AIAPG method must be contained in $\Omega$; otherwise the method is invalid. In this paper we consider the case where the approximate minimizer cannot be guaranteed to lie in $\Omega$, and we propose two different strategies: constructing a feasible solution, and designing a new method named the relax inexact accelerated proximal gradient (RIAPG) method. An advantage of the latter strategy over the former is that it overcomes the drawback that the conditions required by the former are too strict. Furthermore, the RIAPG method inherits the global iteration complexity and attractive computational advantages of the AIAPG method.

1. Introduction

The minimization problem of maximum eigenvalue functions in nonsmooth optimization presents a fascinating mathematical challenge. Such problems arise in many areas of applied mathematics, especially in engineering design , and combine classical mathematical techniques with contemporary optimization theory. The constrained minimization problem of maximum eigenvalue functions can be transformed into the minimization of the sum of two convex functions, and various methods have been proposed for problems of this type. In , a forward-backward splitting algorithm was used to minimize the sum of two proper lower semicontinuous convex functions. In addition, several fixed point algorithms based on the proximity operator were introduced in  for the ROF denoising model, which is also the minimization of the sum of two convex functions. More recently, the AIAPG method, which is based on the accelerated proximal gradient (APG) method , was introduced in  for minimizing the sum of a maximum eigenvalue function and a proper lower semicontinuous convex function. If the approximate minimizer is infeasible, that is, not contained in the feasible set $\Omega$, the AIAPG method is not applicable. Hence we design the RIAPG method, based on the AIAPG method, to solve the smooth approximation of the constrained minimization problem of maximum eigenvalue functions.

We consider the following constrained minimization problem of the maximum eigenvalue function:
$$(P)\quad \min\ \lambda_{\max}(X) \quad \text{s.t.}\quad \mathcal{F}(X)=b,\quad X\succeq 0,$$
where $\lambda_{\max}(X)$ is the maximum eigenvalue function, $\mathcal{F}:S^n\to R^m$ is a linear map, $b\in R^m$, and $X\succeq 0$ means that $X$ is positive semidefinite. Here $S^n$ is the space of $n\times n$ real symmetric matrices. Problem $(P)$ is equivalent to the following form:
$$(P_1)\quad \min\{\lambda_{\max}(X)+\delta_\Omega(X) : X\in S^n\},$$
where $\Omega=\{X\in S^n : \mathcal{F}(X)=b,\ X\succeq 0\}$ and $\delta_\Omega(X)$ denotes the indicator function of $\Omega$. We then consider a smooth approximation $(h_\varepsilon\circ\lambda)(X)$  to the maximum eigenvalue function $\lambda_{\max}(X)$; it is a proper, lower semicontinuous, convex function, and its gradient $\nabla(h_\varepsilon\circ\lambda)$ is Lipschitz continuous. This approach resembles the technique used in . Hence the approximate form of $(P_1)$ is given by
$$(P_2)\quad \min\{(h_\varepsilon\circ\lambda)(X) : X\in\Omega\}.$$
Problem $(P_2)$ can be solved by the AIAPG method in the feasible case. For the infeasible case we propose two strategies. On the one hand, we use the infeasible approximate minimizer to construct a feasible solution satisfying the conditions required by the AIAPG method. On the other hand, we enlarge the feasible set $\Omega$ suitably and present the RIAPG method for solving problem $(P_2)$.
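The paper does not spell out $h_\varepsilon$ here, but the gradient formula (7) in Section 3 (softmax weights of the eigenvalues) corresponds to the standard log-sum-exp smoothing $(h_\varepsilon\circ\lambda)(X)=\varepsilon\ln\sum_i e^{\lambda_i(X)/\varepsilon}$ studied in the smoothing literature. A minimal numerical sketch under that assumption (the function name is ours):

```python
import numpy as np

def smoothed_max_eig(X, eps):
    """Log-sum-exp smoothing of lambda_max: eps * log(sum_i exp(lambda_i / eps)).

    It satisfies lambda_max(X) <= value <= lambda_max(X) + eps*log(n),
    so it converges uniformly to lambda_max(X) as eps -> 0.
    """
    lam = np.linalg.eigvalsh(X)          # eigenvalues in ascending order
    lmax = lam[-1]
    # shift by lambda_max before exponentiating (log-sum-exp trick, avoids overflow)
    return lmax + eps * np.log(np.sum(np.exp((lam - lmax) / eps)))
```

As $\varepsilon$ shrinks the approximation tightens, but the Lipschitz constant of its gradient grows like $1/\varepsilon$, the usual smoothing trade-off.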

The rest of the paper is organized as follows. Section 2 introduces a technique for constructing a feasible approximate minimizer that satisfies the requirements of the AIAPG method. A drawback of this construction is that the required conditions are strict, which challenges the efficiency of its practical performance and the accuracy of the computation. Hence, the relax inexact accelerated proximal gradient method is presented more formally in Section 3. Section 4 is devoted to a series of lemmas and theorems establishing the convergence analysis of the method. Finally, we give a conclusion section.

Notation. For any $X, Y$ in $S^n$, $\langle X,Y\rangle$ denotes the standard trace inner product, and $\|\cdot\|$ and $\|\cdot\|_2$ stand for the Frobenius norm and the spectral norm, respectively. $\mathcal{F}^*: R^m\to S^n$ is the adjoint operator of the linear operator $\mathcal{F}$, so that $\langle \mathcal{F}^*X, Y\rangle = \langle X, \mathcal{F}Y\rangle$ for all $(X,Y)\in R^m\times S^n$. To simplify later statements, we also fix the following notation. Let $\mathcal{N}_k$ be a self-adjoint positive definite operator chosen by the user. In addition, $\{\zeta_k\}$, $\{\rho_k\}$, and $\{\theta_k\}$ are given convergent sequences of nonnegative numbers such that $\sum_{k=1}^\infty \zeta_k<\infty$, $\sum_{k=1}^\infty \rho_k<\infty$, and $\sum_{k=1}^\infty \theta_k<\infty$.
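As a concrete sanity check of the adjoint identity $\langle\mathcal{F}^*y,X\rangle=\langle y,\mathcal{F}(X)\rangle$, one can represent a linear map $\mathcal{F}:S^n\to R^m$ by symmetric matrices $A_1,\dots,A_m$ with $\mathcal{F}(X)_i=\langle A_i,X\rangle$, so that $\mathcal{F}^*y=\sum_i y_iA_i$; the instance below is purely illustrative:

```python
import numpy as np

n, m = 4, 2
rng = np.random.default_rng(1)
# represent F by symmetric matrices A_i: F(X)_i = <A_i, X> (trace inner product)
A = [(M + M.T) / 2 for M in rng.standard_normal((m, n, n))]

def F(X):
    return np.array([np.tensordot(Ai, X) for Ai in A])

def F_adj(y):
    # adjoint: F* y = sum_i y_i A_i
    return sum(yi * Ai for yi, Ai in zip(y, A))

M = rng.standard_normal((n, n))
X = (M + M.T) / 2
y = rng.standard_normal(m)
# <F* y, X> == <y, F(X)>
assert np.isclose(np.tensordot(F_adj(y), X), y @ F(X))
```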

2. Construction of Feasible Solution

Problem $(P_2)$ can be solved by the AIAPG method , but note that the approximate minimizer $X_k$ generated by that method must be feasible; that is, $\mathcal{F}(X_k)=b$ and $X_k\succeq 0$. At the same time, given $Y_k$ as in , the approximate solution $(X_k,P_k,Z_k)$ should satisfy the KKT optimality conditions. More precisely,
$$\begin{aligned} &\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)-\mathcal{F}^*P_k-Z_k =: \delta_k \approx 0,\\ &\mathcal{F}(X_k)=b,\qquad \langle X_k,Z_k\rangle =: \varepsilon_k \approx 0,\qquad X_k\succeq 0,\quad Z_k\succeq 0. \end{aligned} \tag{1}$$

In practice, positive semidefiniteness of the approximate solution $X_k$ is easy to enforce by projecting onto $S_+^n$, but the vector $R_k := \mathcal{F}(X_k)-b$ is usually not exactly equal to $0$. Hence we present a strategy that uses the infeasible solution $X_k$ to construct a feasible solution $\hat{X}_k$ such that $(\hat{X}_k,P_k,Z_k)$ satisfies the corresponding KKT optimality conditions.

Suppose $(X_k,P_k,Z_k)$ satisfies the conditions $X_k\succeq 0$, $Z_k\succeq 0$, $\|\mathcal{N}_k^{-1/2}\delta_k\| \le \rho_k/(2t_k)$, and $\varepsilon_k \le \zeta_k/(2t_k^2)$, and suppose there exists $\bar{X}\succ 0$ such that $\mathcal{F}(\bar{X})=b$, where $t_1=1$, $t_{k+1}=\frac{1}{2}\bigl(1+\sqrt{1+4t_k^2}\bigr)$, and $\mathcal{F}$ is surjective. Then the feasible solution is constructed as follows:
$$\hat{X}_k = \lambda(X_k+W_k)+(1-\lambda)\bar{X}, \tag{2}$$
where $\lambda\in[0,1]$ and $W_k = -\mathcal{F}^*(\mathcal{F}\mathcal{F}^*)^{-1}(R_k)$.

We now show that, for the above construction, $\hat{X}_k$ is feasible and satisfies the corresponding KKT optimality conditions. By the definitions of $\hat{X}_k$, $W_k$, and $R_k$, we have
$$\mathcal{F}(\hat{X}_k) = \mathcal{F}[\lambda(X_k+W_k)+(1-\lambda)\bar{X}] = \lambda\,\mathcal{F}(X_k+W_k)+(1-\lambda)\mathcal{F}(\bar{X}) = \lambda[\mathcal{F}(X_k)-R_k]+(1-\lambda)b = b. \tag{3}$$
It is easy to see that $\|W_k\|_2 \le \|R_k\|/\sigma_{\min}(\mathcal{F})$ and that $\hat{X}_k$ is positive semidefinite whenever $\lambda = 1-\|W_k\|_2/(\|W_k\|_2+\lambda_{\min}(\bar{X}))$. In addition, the following results hold for $\hat{X}_k$:
$$0 \le \langle \hat{X}_k, Z_k\rangle \le 2\varepsilon_k, \qquad \|\mathcal{N}_k^{-1/2}\hat{\delta}_k\| \le \frac{\rho_k}{2t_k}, \qquad \nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(\hat{X}_k-Y_k)-\mathcal{F}^*P_k-Z_k = \delta_k+\mathcal{N}_k(\hat{X}_k-X_k) =: \hat{\delta}_k. \tag{4}$$
These results, however, rely on the following requirement on $W_k$:
$$\|W_k\|_2 \le \min\left\{\frac{\zeta_k}{4t_k^2 n\|Z_k\|}\left(1+\frac{\lambda_{\max}(\bar{X})}{\lambda_{\min}(\bar{X})}\right)^{-1},\ \frac{\rho_k}{2\sqrt{2n}\,t_k}\bigl(\lambda_{\max}(\mathcal{N}_1)\bigr)^{-1/2}\left(1+\frac{\|\bar{X}-X_k\|_2}{\lambda_{\min}(\bar{X})}\right)^{-1}\right\}. \tag{5}$$
The proof of these conclusions is similar to that in  and is omitted here.
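The construction (2) can be exercised on a toy instance. Below, $\mathcal{F}(X)=\mathrm{tr}(X)$ and $b=1$ (so $\mathcal{F}^*t=tI$ and $(\mathcal{F}\mathcal{F}^*)^{-1}=1/n$), with $\bar{X}=I/n$ strictly feasible; the instance and function names are ours, purely for illustration:

```python
import numpy as np

n = 3
# illustrative instance: F(X) = tr(X), b = 1, so F* t = t*I and (F F*)^{-1} = 1/n
b = 1.0
Xbar = np.eye(n) / n                      # strictly feasible: tr(Xbar) = 1, Xbar > 0

def construct_feasible(Xk):
    Rk = np.trace(Xk) - b                 # residual R_k = F(X_k) - b
    Wk = -(Rk / n) * np.eye(n)            # W_k = -F*(F F*)^{-1} R_k
    w2 = np.linalg.norm(Wk, 2)            # spectral norm of W_k
    lam = 1.0 - w2 / (w2 + np.min(np.linalg.eigvalsh(Xbar)))
    return lam * (Xk + Wk) + (1.0 - lam) * Xbar

# an infeasible PSD iterate (trace slightly off)
Xk = np.diag([0.7, 0.25, 0.1])
Xhat = construct_feasible(Xk)
assert np.isclose(np.trace(Xhat), b)              # F(Xhat) = b
assert np.min(np.linalg.eigvalsh(Xhat)) >= -1e-12  # Xhat >= 0
```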

Although we have succeeded in constructing a feasible solution $\hat{X}_k\in\Omega$, the requirement on $W_k$ is too stringent, which hurts computational efficiency. To overcome this drawback, we propose the RIAPG method for solving problem $(P_2)$, in which the iterates $X_k$ generated by the method need not be strictly contained in $\Omega$.

3. A Relax Inexact Accelerated Proximal Gradient Method

The RIAPG algorithm for solving the problem ( P 2 ) is described as follows.

Given a tolerance $\varepsilon>0$, input $Y_1=X_0\in \mathrm{dom}(\delta_\Omega)$ and $t_1=1$. Set $k=1$ and iterate the following steps.

Step 1.

Find an approximate minimizer
$$X_k \approx \operatorname*{argmin}\left\{(h_\varepsilon\circ\lambda)(Y_k)+\langle \nabla(h_\varepsilon\circ\lambda)(Y_k), X-Y_k\rangle+\frac{1}{2}\langle X-Y_k, \mathcal{N}_k(X-Y_k)\rangle : X\in\Omega\right\}, \tag{6}$$
where $X_k$ is allowed to lie in a suitable enlargement $\Omega_k := \{X\in S^n : \|\mathcal{F}(X)-b\| \le \theta_k/t_k^2,\ X\succeq 0\}$ of $\Omega$, and the sequence $\{\theta_k/t_k^2\}$ is monotonically decreasing. Here
$$\nabla(h_\varepsilon\circ\lambda)(Y_k) = Q\,\mathrm{Diag}[\nabla h_\varepsilon(\lambda)]\,Q^T = Q\,\mathrm{Diag}[\alpha(\varepsilon,\lambda)]\,Q^T, \qquad \alpha_i(\varepsilon,\lambda) = \frac{e^{\lambda_i/\varepsilon}}{\sum_{j=1}^n e^{\lambda_j/\varepsilon}} = \frac{e^{(\lambda_i-\lambda_1)/\varepsilon}}{\sum_{j=1}^n e^{(\lambda_j-\lambda_1)/\varepsilon}}, \qquad i=1,\dots,n. \tag{7}$$
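Formula (7) is a matrix softmax. A sketch of evaluating it, using the $\lambda_1$-shift from (7) for numerical stability (the function name is ours):

```python
import numpy as np

def grad_smoothed_max_eig(Y, eps):
    """Gradient formula (7): Q Diag[alpha(eps, lambda)] Q^T, where alpha is the
    softmax of the eigenvalues of Y at temperature eps."""
    lam, Q = np.linalg.eigh(Y)            # ascending eigenvalues, so lam[-1] is the largest
    w = np.exp((lam - lam[-1]) / eps)     # shift by the largest eigenvalue, as in (7)
    alpha = w / w.sum()
    return (Q * alpha) @ Q.T              # Q diag(alpha) Q^T
```

Since the weights $\alpha_i$ are nonnegative and sum to one, the gradient is always a positive semidefinite matrix of trace one.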

Step 2.

Compute $t_{k+1}=\frac{1}{2}\bigl(1+\sqrt{1+4t_k^2}\bigr)$.
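The convergence analysis in Section 4 repeatedly uses the standard bound $t_k\ge(k+1)/2$, which follows from this recursion by induction; a quick numerical check:

```python
# t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2 with t_1 = 1 grows at least like (k+1)/2
t = 1.0
for k in range(1, 100):
    assert t >= (k + 1) / 2.0
    t = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
```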

Step 3.

Compute $Y_{k+1}=X_k+\frac{t_k-1}{t_{k+1}}(X_k-X_{k-1})$.
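For intuition, the three steps above can be assembled into a toy sketch. We take $\Omega=\{X : \mathrm{tr}\,X=1,\ X\succeq 0\}$, $\mathcal{N}_k=L\mathcal{I}$ with $L=1/\varepsilon$ (the Lipschitz constant of the gradient of the log-sum-exp smoothing), and solve subproblem (6) exactly by a projected gradient step, so the enlargement $\Omega_k$ and the inexactness conditions play no role here; everything below (names, instance) is illustrative, not the paper's implementation:

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto the simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.max(np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0])
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def proj_spectrahedron(X):
    """Projection onto Omega = {X in S^n : tr X = 1, X >= 0} via the eigenvalues."""
    lam, Q = np.linalg.eigh(X)
    return (Q * proj_simplex(lam)) @ Q.T

def grad_h(Y, eps):
    """Gradient (7) of the log-sum-exp smoothing of lambda_max."""
    lam, Q = np.linalg.eigh(Y)
    w = np.exp((lam - lam[-1]) / eps)
    return (Q * (w / w.sum())) @ Q.T

def riapg_sketch(X0, eps=1e-2, iters=200):
    L = 1.0 / eps                       # Lipschitz constant of the smoothed gradient
    X_prev, Y, t = X0, X0, 1.0
    for _ in range(iters):
        # Step 1 with N_k = L*I: a projected gradient step solves (6) exactly
        X = proj_spectrahedron(Y - grad_h(Y, eps) / L)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))   # Step 2
        Y = X + ((t - 1.0) / t_next) * (X - X_prev)         # Step 3
        X_prev, t = X, t_next
    return X_prev

# minimizing lambda_max over the spectrahedron pushes the spectrum toward I/n
X = riapg_sketch(np.diag([0.7, 0.2, 0.1]))
```

On this instance the minimizer of $\lambda_{\max}$ over $\Omega$ is $I/3$ with optimal value $1/3$, and the iterates approach it to within the $O(\varepsilon\log n)$ smoothing accuracy.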

Let $l_k(X) = (h_\varepsilon\circ\lambda)(Y_k)+\langle\nabla(h_\varepsilon\circ\lambda)(Y_k), X-Y_k\rangle+\frac{1}{2}\langle X-Y_k,\mathcal{N}_k(X-Y_k)\rangle$. When $\Omega_k=\Omega$, the dual of (6) is given by
$$\max\{\,l_k(X)-\langle\nabla l_k(X),X\rangle+\langle b,P\rangle : \nabla l_k(X)-\mathcal{F}^*P-Z=0,\ Z\succeq 0,\ X\succeq 0\,\}. \tag{8}$$
We assume that the approximate minimizer $X_k$ in (6) and its corresponding dual variables $(P_k,Z_k)$ satisfy the following conditions:
$$\begin{aligned} &(h_\varepsilon\circ\lambda)(X_k) \le l_k(X_k)+\frac{\zeta_k}{2t_k^2}, \qquad |\langle\nabla l_k(X_k),X_k\rangle-\langle b,P_k\rangle| \le \Delta,\\ &\nabla l_k(X_k)-\mathcal{F}^*P_k-Z_k=\delta_k \ \text{with}\ \|\mathcal{N}_k^{-1/2}\delta_k\| \le \frac{\rho_k}{2t_k},\\ &\langle X_k,Z_k\rangle \le \frac{\zeta_k}{2t_k^2}, \qquad \|R_k\| \le \frac{\theta_k}{t_k^2}, \qquad X_k\succeq 0,\quad Z_k\succeq 0, \end{aligned} \tag{9}$$
where $\Delta$ is a given positive number, and we also assume that the sequence $\{\rho_k/t_k\}$ is monotonically decreasing.

Let $X^*$ be the optimal solution of $(P_2)$. The dual of $(P_2)$ is given as follows:
$$\max\{\,(h_\varepsilon\circ\lambda)(X)-\langle\nabla(h_\varepsilon\circ\lambda)(X),X\rangle+\langle b,P\rangle : \nabla(h_\varepsilon\circ\lambda)(X)-\mathcal{F}^*P-Z=0,\ Z\succeq 0,\ X\succeq 0\,\}. \tag{10}$$
Let $(X^*,P^*,Z^*)$ be the optimal solution of this dual problem.

To facilitate the later proofs, we define the following quantities:
$$\begin{aligned} &v_k = (h_\varepsilon\circ\lambda)(X_k)-(h_\varepsilon\circ\lambda)(X^*), \qquad u_k = t_kX_k-(t_k-1)X_{k-1}-X^*,\\ &a_k = t_k^2v_k, \qquad b_k = \frac{1}{2}\langle u_k,\mathcal{N}_k(u_k)\rangle \ge 0, \qquad e_k = t_k\langle\delta_k,u_k\rangle,\\ &\eta_k = \langle P_k,\ t_k^2R_k-t_{k-1}^2R_{k-1}\rangle \ (k\ge 2), \qquad \eta_1 = \langle P_1,R_1\rangle,\\ &\chi_k = \|P_{k-1}-P_k\|\,\theta_{k-1} \ (k\ge 2), \qquad \chi_1 = 0, \qquad \tau = \frac{1}{2}\langle X_0-X^*,\mathcal{N}_1(X_0-X^*)\rangle,\\ &\bar{\rho}_k = \sum_{j=1}^k\rho_j, \qquad \bar{\zeta}_k = \sum_{j=1}^k(\zeta_j+\rho_j^2), \qquad \bar{\chi}_k = \sum_{j=1}^k\chi_j. \end{aligned} \tag{11}$$

It should be noted that, compared with the corresponding quantities in the AIAPG method, $a_k$ and $v_k$ here may be negative, owing to the possible infeasibility of $X_k$.

4. Convergence Analysis

In the following paragraphs, a series of lemmas and theorems establishes the convergence analysis of the RIAPG method. We should mention that the possible infeasibility of $X_k$ introduces nontrivial technical difficulties into the convergence proof.

Lemma 1.

Given $Y_k\in S^n$ and a positive definite linear operator $\mathcal{N}_k$ on $S^n$ such that the conditions in (9) hold, then for all $X\in S_+^n$ we have
$$(h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(X_k) \ge \frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle+\langle Y_k-X,\mathcal{N}_k(X_k-Y_k)\rangle+\langle\delta_k+\mathcal{F}^*P_k, X-X_k\rangle-\frac{\zeta_k}{t_k^2}. \tag{12}$$

Proof.

Noting the first inequality of (9), we have
$$(h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(X_k) \ge (h_\varepsilon\circ\lambda)(X)-l_k(X_k)-\frac{\zeta_k}{2t_k^2} = (h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(Y_k)-\langle\nabla(h_\varepsilon\circ\lambda)(Y_k),X_k-Y_k\rangle-\frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle-\frac{\zeta_k}{2t_k^2}. \tag{13}$$
By the convexity of $(h_\varepsilon\circ\lambda)(X)$, we have
$$(h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(Y_k) \ge \langle\nabla(h_\varepsilon\circ\lambda)(Y_k),X-Y_k\rangle. \tag{14}$$
Then
$$(h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(X_k) \ge \langle\nabla(h_\varepsilon\circ\lambda)(Y_k),X-X_k\rangle-\frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle-\frac{\zeta_k}{2t_k^2}. \tag{15}$$
Since $\nabla l_k(X_k)=\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)$, by the third condition of (9) we have
$$\begin{aligned}(h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(X_k) &\ge \langle\delta_k+\mathcal{F}^*P_k+Z_k-\mathcal{N}_k(X_k-Y_k),X-X_k\rangle-\frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle-\frac{\zeta_k}{2t_k^2}\\ &= \langle\delta_k+\mathcal{F}^*P_k,X-X_k\rangle+\langle Z_k,X\rangle-\langle Z_k,X_k\rangle-\langle\mathcal{N}_k(X_k-Y_k),X-X_k\rangle-\frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle-\frac{\zeta_k}{2t_k^2}. \end{aligned} \tag{16}$$
The required result then follows from the facts that $\langle Z_k,X\rangle \ge 0$ and $\langle X_k,Z_k\rangle \le \zeta_k/(2t_k^2)$.

Lemma 2.

Suppose that $\mathcal{N}_{k-1}\succeq\mathcal{N}_k\succ 0$ for all $k$. Then:

(i) $a_{k-1}+b_{k-1} \ge a_k+b_k-e_k-\zeta_k-\eta_k$;

(ii) if, in addition, the conditions in (9) are satisfied for all $k$, then
$$a_k+b_k \le (\sqrt{\tau}+\bar{\rho}_k)^2+\|P_k\|\theta_k+2(\bar{\zeta}_k+\bar{\chi}_k+J_k), \tag{17}$$

where $J_k=\sum_{j=1}^k\rho_j\sqrt{A_j}$, $A_j=\|P_j\|\theta_j+a_j^*$, and $a_j^*=\max\{0,-a_j\}$.

Proof.

(i) According to Lemma 1, taking $X=X_{k-1}$ in (12), we have
$$v_{k-1}-v_k \ge \frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle+\langle Y_k-X_{k-1},\mathcal{N}_k(X_k-Y_k)\rangle+\langle\delta_k+\mathcal{F}^*P_k,X_{k-1}-X_k\rangle-\frac{\zeta_k}{t_k^2}. \tag{18}$$
Similarly, taking $X=X^*$ in (12), we have
$$-v_k \ge \frac{1}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle+\langle Y_k-X^*,\mathcal{N}_k(X_k-Y_k)\rangle+\langle\delta_k+\mathcal{F}^*P_k,X^*-X_k\rangle-\frac{\zeta_k}{t_k^2}. \tag{19}$$
Multiplying (18) throughout by $t_k-1$ and adding the result to (19), we have
$$(t_k-1)v_{k-1}-t_kv_k \ge \frac{t_k}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle+\langle t_kY_k-(t_k-1)X_{k-1}-X^*,\mathcal{N}_k(X_k-Y_k)\rangle-\langle\delta_k+\mathcal{F}^*P_k,t_kX_k-(t_k-1)X_{k-1}-X^*\rangle-\frac{\zeta_k}{t_k}. \tag{20}$$
In addition, multiplying (20) throughout by $t_k$ and using $t_{k-1}^2=t_k^2-t_k$, we have
$$\begin{aligned}a_{k-1}-a_k &\ge \frac{t_k^2}{2}\langle X_k-Y_k,\mathcal{N}_k(X_k-Y_k)\rangle+t_k\langle t_kY_k-(t_k-1)X_{k-1}-X^*,\mathcal{N}_k(X_k-Y_k)\rangle-\langle\delta_k+\mathcal{F}^*P_k,t_k^2X_k-t_{k-1}^2X_{k-1}-t_kX^*\rangle-\zeta_k\\ &\ge \frac{1}{2}\langle u_k,\mathcal{N}_ku_k\rangle-\frac{1}{2}\langle u_{k-1},\mathcal{N}_ku_{k-1}\rangle-\langle\delta_k+\mathcal{F}^*P_k,t_ku_k\rangle-\zeta_k\\ &\ge b_k-b_{k-1}-e_k-\langle\mathcal{F}^*P_k,t_ku_k\rangle-\zeta_k. \end{aligned} \tag{21}$$
The second inequality above follows from the definition of $Y_k$, which gives $t_kY_k-(t_k-1)X_{k-1}-X^*=u_{k-1}$, together with $t_k^2X_k-t_{k-1}^2X_{k-1}-t_kX^*=t_ku_k$; the third uses $\mathcal{N}_{k-1}\succeq\mathcal{N}_k\succ 0$. By $t_{k-1}^2=t_k^2-t_k$ and (11), we have
$$\langle\mathcal{F}^*P_k,t_ku_k\rangle = \langle P_k,\mathcal{F}(t_ku_k)\rangle = \langle P_k,\ t_k^2(\mathcal{F}(X_k)-b)-t_{k-1}^2(\mathcal{F}(X_{k-1})-b)\rangle = \langle P_k,\ t_k^2R_k-t_{k-1}^2R_{k-1}\rangle = \eta_k. \tag{22}$$
Then result (i) is proved.

(ii) First note that $|e_k| = |t_k\langle\delta_k,u_k\rangle| \le t_k\|\mathcal{N}_k^{-1/2}\delta_k\|\,\|\mathcal{N}_k^{1/2}u_k\| \le t_k\cdot\frac{\rho_k}{2t_k}\cdot\sqrt{2b_k} \le \rho_k\sqrt{b_k}$.

First, we show that $a_1+b_1 \le \tau+|\langle P_1,R_1\rangle|+\rho_1\sqrt{b_1}+\zeta_1$. Note that $a_1=(h_\varepsilon\circ\lambda)(X_1)-(h_\varepsilon\circ\lambda)(X^*)$, $b_1=\frac{1}{2}\langle X_1-X^*,\mathcal{N}_1(X_1-X^*)\rangle$, $Y_1=X_0$, and $t_1=1$. Applying Lemma 1 with $X=X^*$, we have
$$\begin{aligned}-a_1 &= (h_\varepsilon\circ\lambda)(X^*)-(h_\varepsilon\circ\lambda)(X_1)\\ &\ge \frac{1}{2}\langle X_1-Y_1,\mathcal{N}_1(X_1-Y_1)\rangle+\langle Y_1-X^*,\mathcal{N}_1(X_1-Y_1)\rangle+\langle\delta_1+\mathcal{F}^*P_1,X^*-X_1\rangle-\frac{\zeta_1}{t_1^2}\\ &= \frac{1}{2}\langle X_1-X^*,\mathcal{N}_1(X_1-X^*)\rangle-\frac{1}{2}\langle Y_1-X^*,\mathcal{N}_1(Y_1-X^*)\rangle+\langle\delta_1+\mathcal{F}^*P_1,X^*-X_1\rangle-\zeta_1\\ &= b_1-\tau+\langle\delta_1,X^*-X_1\rangle+\langle\mathcal{F}^*P_1,X^*-X_1\rangle-\zeta_1. \end{aligned} \tag{23}$$
Since $\|\mathcal{N}_1^{-1/2}\delta_1\| \le \rho_1/2$ and $e_1=\langle\delta_1,X_1-X^*\rangle$, we obtain
$$a_1+b_1 \le \tau+e_1+\langle P_1,R_1\rangle+\zeta_1 \le \tau+|\langle P_1,R_1\rangle|+\rho_1\sqrt{b_1}+\zeta_1. \tag{24}$$
Next we show that
$$a_k+b_k \le \tau+|\langle P_k,t_k^2R_k\rangle|+s_k, \tag{25}$$
where $s_k=\sum_{j=1}^k\rho_j\sqrt{b_j}+\sum_{j=1}^k\zeta_j+\sum_{j=1}^k\chi_j$. By (i) and $|e_k|\le\rho_k\sqrt{b_k}$, we get
$$\begin{aligned}\tau &\ge a_1+b_1-\rho_1\sqrt{b_1}-\zeta_1-\eta_1\\ &\ge a_2+b_2-e_2-\zeta_2-\eta_2-\rho_1\sqrt{b_1}-\zeta_1-\eta_1\\ &\ge a_2+b_2-\rho_2\sqrt{b_2}-\rho_1\sqrt{b_1}-\zeta_1-\zeta_2-\eta_1-\eta_2\\ &\ \,\vdots\\ &\ge a_k+b_k-\sum_{j=1}^k\rho_j\sqrt{b_j}-\sum_{j=1}^k\zeta_j-\sum_{j=1}^k\eta_j. \end{aligned} \tag{26}$$
We use the fact that
$$\sum_{j=1}^k\eta_j = \langle P_k,t_k^2R_k\rangle+\sum_{j=1}^{k-1}\langle P_j-P_{j+1},t_j^2R_j\rangle \le |\langle P_k,t_k^2R_k\rangle|+\sum_{j=1}^{k-1}\|P_j-P_{j+1}\|\,t_j^2\cdot\frac{\theta_j}{t_j^2} = |\langle P_k,t_k^2R_k\rangle|+\sum_{j=1}^k\chi_j; \tag{27}$$
consequently, (25) holds. Hence
$$b_k \le \tau_k+s_k, \qquad \text{where } \tau_k := \tau+|\langle P_k,t_k^2R_k\rangle|-a_k \le \tau+A_k. \tag{28}$$
Then we get
$$s_k = s_{k-1}+\rho_k\sqrt{b_k}+\zeta_k+\chi_k \le s_{k-1}+\rho_k\sqrt{\tau_k+s_k}+\zeta_k+\chi_k. \tag{29}$$
According to (28) with $k=1$, we have $\tau_1 \ge b_1-\rho_1\sqrt{b_1}-\zeta_1$; this implies
$$\sqrt{b_1} \le \frac{1}{2}\left(\rho_1+\sqrt{\rho_1^2+4(\tau_1+\zeta_1)}\right) \le \rho_1+\sqrt{\tau_1+\zeta_1}, \tag{30}$$
and then
$$s_1 = \rho_1\sqrt{b_1}+\zeta_1 \le \rho_1\left(\rho_1+\sqrt{\tau_1+\zeta_1}\right)+\zeta_1 \le \rho_1^2+\zeta_1+\rho_1\sqrt{\tau_1+\zeta_1}. \tag{31}$$
Adding $\tau_k$ to both sides of (29) and rearranging, we get
$$(\tau_k+s_k)-\rho_k\sqrt{\tau_k+s_k}-(\tau_k+s_{k-1}+\zeta_k+\chi_k) \le 0; \tag{32}$$
this implies
$$\sqrt{\tau_k+s_k} \le \frac{1}{2}\left[\rho_k+\sqrt{\rho_k^2+4(\tau_k+s_{k-1}+\zeta_k+\chi_k)}\right]; \tag{33}$$
thus
$$\begin{aligned}s_k &\le s_{k-1}+\frac{1}{2}\rho_k\left[\rho_k+\sqrt{\rho_k^2+4(\tau_k+s_{k-1}+\zeta_k+\chi_k)}\right]+\zeta_k+\chi_k\\ &\le s_{k-1}+\frac{1}{2}\rho_k^2+\frac{1}{2}\rho_k\sqrt{\rho_k^2+4(\tau+A_k+s_{k-1}+\zeta_k+\chi_k)}+\zeta_k+\chi_k\\ &\le s_{k-1}+(\rho_k^2+\zeta_k)+\chi_k+\rho_k\sqrt{\tau+A_k}+\rho_k\sqrt{s_{k-1}+\zeta_k+\chi_k}\\ &\le s_1+\sum_{j=2}^k(\rho_j^2+\zeta_j)+\sum_{j=2}^k\chi_j+\sum_{j=2}^k\rho_j\sqrt{\tau+A_j}+\sum_{j=2}^k\rho_j\sqrt{s_{j-1}+\zeta_j+\chi_j}\\ &\le \sum_{j=1}^k(\rho_j^2+\zeta_j)+\sum_{j=1}^k\chi_j+\sum_{j=1}^k\rho_j\sqrt{\tau}+\sum_{j=1}^k\rho_j\sqrt{A_j}+\sum_{j=1}^k\rho_j\sqrt{s_j}\\ &\le \bar{\zeta}_k+\bar{\chi}_k+\bar{\rho}_k\sqrt{\tau}+J_k+\bar{\rho}_k\sqrt{s_k}. \end{aligned} \tag{34}$$
In the last two inequalities, we use the facts that $s_{j-1}+\zeta_j+\chi_j \le s_j$ and $0 \le s_1 \le s_2 \le \cdots \le s_k$.

Let $\omega_k := \bar{\zeta}_k+\bar{\chi}_k+J_k+\bar{\rho}_k\sqrt{\tau}$; then $\sqrt{s_k} \le \frac{1}{2}\left(\bar{\rho}_k+\sqrt{\bar{\rho}_k^2+4\omega_k}\right)$, which implies
$$s_k \le \bar{\rho}_k^2+2\omega_k. \tag{35}$$
The result (ii) follows from (35), (25), and the fact that $|\langle P_k,t_k^2R_k\rangle| \le \|P_k\|\theta_k$.

Lemma 3.

(i) Suppose that there exists $(X,P,Z)$ such that
$$\mathcal{F}(X)=b,\quad X\succeq 0,\quad \nabla(h_\varepsilon\circ\lambda)(X)=\mathcal{F}^*P+Z,\quad Z\succ 0. \tag{36}$$
If the sequence $\{(h_\varepsilon\circ\lambda)(X_k)\}$ is bounded from above, then the sequence $\{X_k\}$ is bounded.

(ii) Suppose that $\mathcal{N}_{k-1}\succeq\mathcal{N}_k\succ 0$ for all $k$, $\{X_k\}$ is bounded, and there exists $\tilde{X}$ such that
$$\mathcal{F}(\tilde{X})=b, \qquad \tilde{X}\succ 0. \tag{37}$$
Then the sequence $\{Z_k\}$ is bounded. In addition, the sequence $\{P_k\}$ is also bounded.

Proof.

(i) Using the convexity of $(h_\varepsilon\circ\lambda)(\cdot)$, $X_k\in\Omega_k$, and the monotonicity of the sequence $\{\theta_k/t_k^2\}$, we have
$$\begin{aligned}(h_\varepsilon\circ\lambda)(X)-(h_\varepsilon\circ\lambda)(X_k) &\le \langle\nabla(h_\varepsilon\circ\lambda)(X),X-X_k\rangle = \langle\mathcal{F}^*P+Z,X-X_k\rangle\\ &= \langle P,\mathcal{F}(X)-\mathcal{F}(X_k)\rangle+\langle Z,X\rangle-\langle Z,X_k\rangle\\ &\le \|P\|\,\|b-\mathcal{F}(X_k)\|+\langle Z,X\rangle-\langle Z,X_k\rangle \le \|P\|\theta_1+\langle Z,X\rangle-\langle Z,X_k\rangle. \end{aligned} \tag{38}$$
Thus
$$\lambda_{\min}(Z)\,\mathrm{Tr}(X_k) \le \langle X_k,Z\rangle \le \|P\|\theta_1+\langle Z,X\rangle-(h_\varepsilon\circ\lambda)(X)+(h_\varepsilon\circ\lambda)(X_k). \tag{39}$$
The required result then follows, since $Z\succ 0$ and $\{(h_\varepsilon\circ\lambda)(X_k)\}$ is bounded from above.

(ii) Noting (9) and the monotonicity of the sequence $\{\rho_k/t_k\}$, we have
$$\begin{aligned}\lambda_{\min}(\tilde{X})\,\mathrm{Tr}(Z_k) &\le \langle\tilde{X},Z_k\rangle = \langle\tilde{X},\nabla l_k(X_k)-\mathcal{F}^*P_k-\delta_k\rangle\\ &= \left(\langle X_k,\nabla l_k(X_k)\rangle-\langle b,P_k\rangle\right)+\langle\tilde{X}-X_k,\nabla l_k(X_k)\rangle-\langle\tilde{X},\delta_k\rangle\\ &\le \Delta+\langle\tilde{X}-X_k,\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)\rangle+\langle\mathcal{N}_k^{1/2}\tilde{X},-\mathcal{N}_k^{-1/2}\delta_k\rangle\\ &\le \Delta+\|\tilde{X}-X_k\|\,\|\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)\|+\|\mathcal{N}_k^{1/2}\tilde{X}\|\frac{\rho_1}{2}. \end{aligned} \tag{40}$$

The boundedness of $\{Y_k\}$ then follows from the boundedness of $\{X_k\}$. By the continuity of $\nabla(h_\varepsilon\circ\lambda)$ and the fact that $\mathcal{N}_1\succeq\mathcal{N}_k\succ 0$, the sequence $\{\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)\}$ is also bounded; hence, by (40) and $\tilde{X}\succ 0$, the sequence $\{Z_k\}$ is bounded.

Next, we show that $\{P_k\}$ is bounded. Take $\mathcal{F}^\dagger = (\mathcal{F}\mathcal{F}^*)^{-1}\mathcal{F}$; from (9) we get $P_k = \mathcal{F}^\dagger(\nabla l_k(X_k)-Z_k-\delta_k)$. Hence
$$\|P_k\| \le \|\mathcal{F}^\dagger\|\,\|\nabla l_k(X_k)-Z_k-\delta_k\| \le \|\mathcal{F}^\dagger\|\left(\|\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)\|+\|Z_k\|+\|\delta_k\|\right). \tag{41}$$
Moreover, $\|\delta_k\| \le \sqrt{\lambda_{\max}(\mathcal{N}_1)}\,\|\mathcal{N}_k^{-1/2}\delta_k\| \le \sqrt{\lambda_{\max}(\mathcal{N}_1)}\,\frac{\rho_1}{2}$ directly from $\lambda_{\max}(\mathcal{N}_1)\mathcal{I}\succeq\mathcal{N}_1\succeq\mathcal{N}_k$. The boundedness of $\{P_k\}$ then follows from the boundedness of the sequences $\{\nabla(h_\varepsilon\circ\lambda)(Y_k)+\mathcal{N}_k(X_k-Y_k)\}$ and $\{Z_k\}$.

Lemma 4.

For all $k\ge 1$, we have
$$0 \le (h_\varepsilon\circ\lambda)(X^*)-(h_\varepsilon\circ\lambda)(X_*^k) \le \|P^*\|\frac{\theta_k}{t_k^2}, \tag{42}$$
where $X_*^k$ is an optimal solution of the problem $\min\{(h_\varepsilon\circ\lambda)(X) : X\in\Omega_k\}$.

Proof.

By the convexity of $(h_\varepsilon\circ\lambda)(X)$ and the facts that $\mathcal{F}(X^*)=b$, $\langle X^*,Z^*\rangle=0$, and $\langle Z^*,X_*^k\rangle \ge 0$, we have
$$\begin{aligned}0 &\le (h_\varepsilon\circ\lambda)(X^*)-(h_\varepsilon\circ\lambda)(X_*^k) \le \langle\nabla(h_\varepsilon\circ\lambda)(X^*),X^*-X_*^k\rangle = \langle\mathcal{F}^*P^*+Z^*,X^*-X_*^k\rangle\\ &= \langle P^*,\mathcal{F}(X^*)-\mathcal{F}(X_*^k)\rangle+\langle Z^*,X^*\rangle-\langle Z^*,X_*^k\rangle \le \|P^*\|\,\|b-\mathcal{F}(X_*^k)\| \le \|P^*\|\frac{\theta_k}{t_k^2}. \end{aligned} \tag{43}$$

Theorem 5.

Suppose that $\mathcal{N}_{k-1}\succeq\mathcal{N}_k\succ 0$ for all $k$. Taking $M_k := \max_{1\le j\le k}\sqrt{(\|P^*\|+\|P_j\|)\theta_j}$, we have
$$-\frac{4\|P^*\|\theta_k}{(k+1)^2} \le (h_\varepsilon\circ\lambda)(X_k)-(h_\varepsilon\circ\lambda)(X^*) \le \frac{4}{(k+1)^2}\left((\sqrt{\tau}+\bar{\rho}_k)^2+2\bar{\rho}_kM_k+2(\bar{\zeta}_k+\bar{\chi}_k)+\|P_k\|\theta_k\right). \tag{44}$$

Proof.

Consider the problem $\min\{(h_\varepsilon\circ\lambda)(X) : X\in\Omega_k\}$, of which $X_*^k$ is an optimal solution. Since $X^*, X_k\in\Omega_k$, we have
$$(h_\varepsilon\circ\lambda)(X_*^k)-(h_\varepsilon\circ\lambda)(X^*) \le (h_\varepsilon\circ\lambda)(X_k)-(h_\varepsilon\circ\lambda)(X^*). \tag{45}$$
The inequality on the left side of (44) then follows from Lemma 4, (45), and the fact that $t_k \ge (k+1)/2$.

Next, we show the inequality on the right side of (44). By Lemma 2(ii) and $b_k\ge 0$, we have
$$t_k^2v_k = t_k^2\left((h_\varepsilon\circ\lambda)(X_k)-(h_\varepsilon\circ\lambda)(X^*)\right) = a_k \le (\sqrt{\tau}+\bar{\rho}_k)^2+\|P_k\|\theta_k+2(\bar{\zeta}_k+\bar{\chi}_k+J_k). \tag{46}$$
Since $-a_j = t_j^2\left((h_\varepsilon\circ\lambda)(X^*)-(h_\varepsilon\circ\lambda)(X_j)\right) \le t_j^2\left((h_\varepsilon\circ\lambda)(X^*)-(h_\varepsilon\circ\lambda)(X_*^j)\right) \le \|P^*\|\theta_j$, we have
$$a_j^* \le \|P^*\|\theta_j, \qquad J_k \le \sum_{j=1}^k\rho_j\sqrt{(\|P_j\|+\|P^*\|)\theta_j} \le M_k\bar{\rho}_k. \tag{47}$$
Using $t_k \ge (k+1)/2$ again, the required result is proved.

From the assumptions on the sequences $\{\zeta_k\}$, $\{\theta_k\}$, and $\{\rho_k\}$, the sequences $\{\bar{\rho}_k\}$ and $\{\bar{\zeta}_k\}$ are bounded. Moreover, by Lemma 3 the sequence $\{P_k\}$ is bounded, and hence $\{M_k\}$ and $\{\bar{\chi}_k\}$ are bounded as well. Therefore the RIAPG method converges with rate $O(1/k^2)$.

5. Conclusion

The principal result of this paper is an implementable and globally convergent method (the RIAPG method) for solving the constrained minimization problem of maximum eigenvalue functions. The RIAPG method, an extension of the AIAPG method, is especially suited to the case where the approximate minimizer generated by the AIAPG method may not lie in the feasible set. Although the method relies on some assumptions, it enriches the toolbox for dealing with the constrained minimization problem of maximum eigenvalue functions.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the referees for their helpful suggestions for improving this paper. This work was supported by the National Natural Science Foundation of China under Project no. 11171138.

References

1. E. S. Mistakidis and G. E. Stavroulakis, Nonconvex Optimization in Mechanics. Smooth and Nonsmooth Algorithms, Heuristics and Engineering Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
2. P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Modeling & Simulation, vol. 4, no. 4, pp. 1168-1200, 2005. doi:10.1137/050626090
3. C. A. Micchelli, L. Shen, and Y. Xu, "Proximity algorithms for image models: denoising," Inverse Problems, vol. 27, no. 4, article 045009, 2011. doi:10.1088/0266-5611/27/4/045009
4. Y. Nesterov, "Gradient methods for minimizing composite functions," Mathematical Programming, vol. 140, no. 1, pp. 125-161, 2013. doi:10.1007/s10107-012-0629-5
5. W. Wang, J. J. Gao, and S. H. Li, "Approximate inexact accelerated proximal gradient method for the minimization problem of a class of maximum eigenvalue functions," Journal of Liaoning Normal University, vol. 36, no. 3, pp. 314-317, 2013.
6. X. Chen, H. Qi, L. Qi, and K.-L. Teo, "Smooth convex approximation to the maximum eigenvalue function," Journal of Global Optimization, vol. 30, no. 2-3, pp. 253-270, 2004. doi:10.1007/s10898-004-8271-2
7. Y. Nesterov, "Smooth minimization of non-smooth functions," Mathematical Programming, vol. 103, no. 1, pp. 127-152, 2005. doi:10.1007/s10107-004-0552-5
8. K. Jiang, D. Sun, and K. Toh, "An inexact accelerated proximal gradient method for large scale linearly constrained convex SDP," SIAM Journal on Optimization, vol. 22, no. 3, pp. 1042-1064, 2012. doi:10.1137/110847081