Mathematical Problems in Engineering, Hindawi Publishing Corporation, vol. 2015, Article ID 379734. doi:10.1155/2015/379734. ISSN 1024-123X (print), 1563-5147 (online).

Research Article

An Alternating Direction Method for Convex Quadratic Second-Order Cone Programming with Bounded Constraints

Xuewen Mu (1) (http://orcid.org/0000-0001-8491-1511) and Yaling Zhang (1,2) (http://orcid.org/0000-0001-9132-7612)

1 School of Mathematics and Statistics, Xidian University, Xi'an 710071, China
2 School of Computer Science, Xi'an Science and Technology University, Xi'an 710054, China

Academic Editor: Gerhard-Wilhelm Weber

Received 20 October 2014; Revised 26 March 2015; Accepted 30 March 2015; Published 30 April 2015

Copyright © 2015 Xuewen Mu and Yaling Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An alternating direction method is proposed for convex quadratic second-order cone programming problems with bounded constraints. In the algorithm, the primal problem is reformulated as an equivalent convex quadratic program with separable structure over second-order cones and a bounded set. At each iteration, we only need to compute the metric projection onto the second-order cones and the projection onto the bounded set. A convergence result is given. Numerical results demonstrate that our method is efficient for convex quadratic second-order cone programming problems with bounded constraints.

1. Introduction

In this paper, we consider a convex quadratic second-order cone programming (CQSOCP) problem with bounded constraints, defined by minimizing a convex quadratic function over the intersection of an affine set, a bounded set, and a product of second-order cones. The primal convex quadratic second-order cone programming problem is
$$\min \; \frac{1}{2} x^T Q x + c^T x \quad \text{s.t.} \quad Ax = b, \quad x \in K, \quad x \in \Omega, \tag{1}$$
where $\Omega = \{x : l \le x \le u\}$ is a bounded set, $Q$ is an $n \times n$ symmetric positive semidefinite matrix, $A \in \mathbb{R}^{m \times n}$, $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, and $x = [x_1, \dots, x_N] \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_N}$ is viewed as a column vector in $\mathbb{R}^{n_1 + \cdots + n_N}$ with $\sum_{i=1}^N n_i = n$. In addition, $K = K^{n_1} \times K^{n_2} \times \cdots \times K^{n_N}$ and $x_i \in K^{n_i}$, where $K^{n_i}$ is the $n_i$-dimensional second-order cone given by
$$K^{n_i} = \left\{ x_i = \begin{pmatrix} x_{i1} \\ x_{i0} \end{pmatrix} \in \mathbb{R}^{n_i - 1} \times \mathbb{R} : \|x_{i1}\|_2 \le x_{i0} \right\}, \tag{2}$$
where $\|\cdot\|_2$ is the standard Euclidean norm.

Convex quadratic second-order cone programming with bounded constraints is a nonlinear programming problem; it arises, for example, as the trust region subproblem in trust region methods for nonlinear second-order cone programming [1, 2]. Since $Q$ is symmetric positive semidefinite, we can compute a square root $Q^{1/2}$ (for example, by a Cholesky-type factorization). Problem (1) can then be equivalently transformed into the following mixed linear and second-order cone programming (MLSOCP) problem:
$$\min \; t + c^T x \quad \text{s.t.} \quad Ax = b, \quad \left\| \begin{pmatrix} t - 1 \\ 2Q^{1/2}x \end{pmatrix} \right\|_2 \le t + 1, \quad l \le x \le u, \quad x \in K. \tag{3}$$
In [1, 2], the authors use well-developed and publicly available interior-point solvers, such as SeDuMi [4] and SDPT3 [5], to solve the equivalent MLSOCP (3).
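To see why the norm constraint in (3) encodes the quadratic term, one can square both sides (both are nonnegative on the feasible set) and use $\|Q^{1/2}x\|_2^2 = x^T Q x$:

```latex
\left\| \begin{pmatrix} t-1 \\ 2Q^{1/2}x \end{pmatrix} \right\|_2 \le t+1
\;\Longleftrightarrow\; (t-1)^2 + 4\,x^T Q x \le (t+1)^2
\;\Longleftrightarrow\; x^T Q x \le t,
```

since $(t+1)^2 - (t-1)^2 = 4t$. The second-order cone constraint in (3) is therefore exactly the epigraph condition on the quadratic term.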

Interior-point methods are well developed for linear symmetric cone programming [6–8]. However, at each iteration these solvers must formulate and solve a dense Schur complement system, which for the CQSOCP problem with bounded constraints amounts to a linear system of dimension $(m + 3n + 2) \times (m + 3n + 2)$. In addition, the transformation requires computing the square root of the positive semidefinite matrix $Q$. When $n$ is large, the very large size and ill-conditioning of this linear system make it difficult for interior-point methods to solve the transformed MLSOCP problem efficiently.

The alternating direction method (ADM) has been an effective first-order approach for solving large optimization problems, such as linear programming [9], linear semidefinite programming (LSDP) [10, 11], nonlinear convex optimization [12], and nonsmooth $\ell_1$ minimization arising from compressive sensing [13, 14]. A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs is proposed in [15]. In the thesis [3], a semismooth Newton-CG augmented Lagrangian method is proposed for large scale convex quadratic symmetric cone programming. In [16], an alternating direction dual augmented Lagrangian method for solving linear semidefinite programming problems in standard form is presented and extended to SDPs with inequality constraints and positivity constraints.

In this paper, an alternating direction method for the CQSOCP problem with bounded constraints is proposed. First, the primal problem is reformulated as an equivalent convex quadratic program with separable structure over second-order cones and a bounded set. The alternating direction method is then applied to this separable problem. At each iteration, we only need to compute the metric projection onto the second-order cones and the projection onto the bounded set. We also give convergence results and numerical results.

2. The Projection on the Second-Order Cone and the Bounded Set

In this section, we give the projection formulas for the second-order cones and the bounded set.

Let $x_i = \begin{pmatrix} x_{i1} \\ x_{i0} \end{pmatrix} \in \mathbb{R}^{n_i - 1} \times \mathbb{R}$ for $i = 1, 2, \dots, N$; then the spectral decomposition of $x_i$ associated with the second-order cone $K^{n_i}$ can be written as [17]
$$x_i = \lambda_1(x_i)\, c_1(x_i) + \lambda_2(x_i)\, c_2(x_i), \quad i = 1, 2, \dots, N, \tag{4}$$
where
$$\lambda_1(x_i) = x_{i0} - \|x_{i1}\|_2, \quad \lambda_2(x_i) = x_{i0} + \|x_{i1}\|_2, \quad c_1(x_i) = \frac{1}{2}\begin{pmatrix} -w \\ 1 \end{pmatrix}, \quad c_2(x_i) = \frac{1}{2}\begin{pmatrix} w \\ 1 \end{pmatrix}, \tag{5}$$
with $w = x_{i1}/\|x_{i1}\|_2$ if $x_{i1} \neq 0$, and $w$ any vector in $\mathbb{R}^{n_i - 1}$ satisfying $\|w\|_2 = 1$ if $x_{i1} = 0$.
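As an illustration, the decomposition (4)-(5) is easy to compute numerically. The following Python/NumPy sketch (the paper itself uses MATLAB; names here are ours) stores each block with $x_{i0}$ as the last entry:

```python
import numpy as np

def spectral_decomposition(x):
    """Spectral decomposition of x = (x1, x0) w.r.t. the second-order cone.

    x is a 1-D array whose last entry is x0 and whose leading entries are x1.
    Returns (lam1, lam2, c1, c2) with x = lam1*c1 + lam2*c2, as in (4)-(5).
    """
    x1, x0 = x[:-1], x[-1]
    nrm = np.linalg.norm(x1)
    lam1, lam2 = x0 - nrm, x0 + nrm
    if nrm > 0:
        w = x1 / nrm
    else:
        # any unit vector works when x1 = 0
        w = np.zeros_like(x1)
        if w.size:
            w[0] = 1.0
    c1 = 0.5 * np.concatenate([-w, [1.0]])
    c2 = 0.5 * np.concatenate([w, [1.0]])
    return lam1, lam2, c1, c2
```

For example, $x = (3, 4, 10)$ gives $\lambda_1 = 5$, $\lambda_2 = 15$, and the two eigenvectors reassemble $x$ exactly.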

Next we introduce the projection lemma for the second-order cone [17–19].

Lemma 1 (see [17–19]).

For any $x_i = \begin{pmatrix} x_{i1} \\ x_{i0} \end{pmatrix} \in \mathbb{R}^{n_i - 1} \times \mathbb{R}$, let $P_{K^{n_i}}(x_i)$ be the projection of $x_i$ onto the second-order cone $K^{n_i}$; then we have
$$P_{K^{n_i}}(x_i) = [\lambda_1(x_i)]_+ \, c_1(x_i) + [\lambda_2(x_i)]_+ \, c_2(x_i), \quad i = 1, 2, \dots, N, \tag{6}$$
where $[s]_+ = \max(0, s)$ for $s \in \mathbb{R}$.

Let $x = [x_1, \dots, x_N] \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_N}$; then the projection $P_K(x)$ of $x$ onto the cone $K$ is
$$P_K(x) = \left[ P_{K^{n_1}}(x_1), \dots, P_{K^{n_N}}(x_N) \right] \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_N}. \tag{7}$$

Let $y \in \mathbb{R}^n$; then the projection onto the bounded set $\Omega$ is easy to carry out, namely, componentwise:
$$P_\Omega(y) = \max\left(l, \min(y, u)\right). \tag{8}$$
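Both projections from Lemma 1 and (8) are a few lines of code. A Python/NumPy sketch (the paper uses MATLAB; the helper names are ours), with each cone block ordered as $(x_{i1}, x_{i0})$:

```python
import numpy as np

def project_soc(x):
    """Projection of x = (x1, x0) onto one second-order cone (Lemma 1)."""
    x1, x0 = x[:-1], x[-1]
    nrm = np.linalg.norm(x1)
    if nrm <= x0:           # already in the cone
        return x.copy()
    if nrm <= -x0:          # in the polar cone: the projection is 0
        return np.zeros_like(x)
    t = 0.5 * (x0 + nrm)    # = lambda_2(x)/2; only lambda_2 survives the (.)_+
    return np.concatenate([t * x1 / nrm, [t]])

def project_K(x, dims):
    """Blockwise projection onto K = K^{n_1} x ... x K^{n_N}; dims = [n_1, ..., n_N]."""
    out, start = [], 0
    for n_i in dims:
        out.append(project_soc(x[start:start + n_i]))
        start += n_i
    return np.concatenate(out)

def project_box(y, l, u):
    """Componentwise projection onto Omega = {l <= y <= u}, eq. (8)."""
    return np.maximum(l, np.minimum(y, u))
```

For instance, projecting $(3, 4, 0)$ onto $K^3$ gives $(1.5, 2, 2.5)$, a point on the cone boundary.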

3. An Alternating Direction Method for CQSOCP Problems with Bounded Constraints

In this section, we give an alternating direction method for convex quadratic second-order cone programming problems with bounded constraints.

First, we give the equivalent convex quadratic program with separable structure over second-order cones and the bounded set:
$$\min \; \frac{1}{2} x^T Q x + c^T y \quad \text{s.t.} \quad Ax = b, \quad x \in K, \quad x = y, \quad y \in \Omega. \tag{9}$$

The Lagrangian dual of the separable convex quadratic programming problem is written as
$$\max_{\lambda, \mu} \; \min_{x \in K, \, y \in \Omega} \; L(x, y, \lambda, \mu) = \frac{1}{2} x^T Q x + c^T y - \lambda^T (Ax - b) - \mu^T (x - y), \tag{10}$$
where $\lambda \in \mathbb{R}^m$ and $\mu \in \mathbb{R}^n$.

Under mild constraint qualifications (e.g., the Slater condition), strong duality holds for problem (9); hence, $x^*$ is an optimal solution of (9) if and only if there exists $(x^*, y^*, \lambda^*, \mu^*) \in K \times \Omega \times \mathbb{R}^m \times \mathbb{R}^n$ satisfying the following KKT system in variational inequality form:
$$\begin{aligned} &\langle x - x^*, \; Qx^* - A^T\lambda^* - \mu^* \rangle \ge 0, \quad \forall x \in K, \\ &\langle y - y^*, \; c + \mu^* \rangle \ge 0, \quad \forall y \in \Omega, \\ &Ax^* = b, \qquad x^* = y^*. \end{aligned} \tag{11}$$

The augmented Lagrangian function for the separable convex quadratic programming problem is defined as
$$L(x, y, \lambda, \mu) = \frac{1}{2} x^T Q x + c^T y - \lambda^T (Ax - b) - \mu^T (x - y) + \frac{1}{2\beta_1} \|Ax - b\|_2^2 + \frac{1}{2\beta_2} \|x - y\|_2^2, \tag{12}$$
where $\beta_1, \beta_2 > 0$.

The alternating direction method for (12), written in variational inequality form, is as follows.

3.1. The Original Alternating Direction Method

Given $x^0 \in K$, $y^0 \in \Omega$, $\lambda^0 \in \mathbb{R}^m$, $\mu^0 \in \mathbb{R}^n$, and $\beta_1, \beta_2 > 0$, for $k = 0, 1, 2, \dots$ perform the following steps.

Step 1.

Consider $(x^k, y^k, \lambda^k, \mu^k) \to (x^{k+1}, y^k, \lambda^k, \mu^k)$; we compute $x^{k+1}$, which satisfies
$$\left\langle x - x^{k+1}, \; Qx^{k+1} - A^T\lambda^k - \mu^k + \frac{1}{\beta_1} A^T (Ax^{k+1} - b) + \frac{1}{\beta_2}(x^{k+1} - y^k) \right\rangle \ge 0, \quad \forall x \in K. \tag{13}$$

Step 2.

Consider $(x^{k+1}, y^k, \lambda^k, \mu^k) \to (x^{k+1}, y^{k+1}, \lambda^k, \mu^k)$; we compute $y^{k+1}$, which satisfies
$$\left\langle y - y^{k+1}, \; c + \mu^k - \frac{1}{\beta_2}(x^{k+1} - y^{k+1}) \right\rangle \ge 0, \quad \forall y \in \Omega. \tag{14}$$

Step 3.

Consider $(x^{k+1}, y^{k+1}, \lambda^k, \mu^k) \to (x^{k+1}, y^{k+1}, \lambda^{k+1}, \mu^k)$; update the Lagrange multiplier by
$$\lambda^{k+1} = \lambda^k - \frac{1}{\beta_1}(Ax^{k+1} - b). \tag{15}$$

Step 4.

Consider $(x^{k+1}, y^{k+1}, \lambda^{k+1}, \mu^k) \to (x^{k+1}, y^{k+1}, \lambda^{k+1}, \mu^{k+1})$; update the Lagrange multiplier by
$$\mu^{k+1} = \mu^k - \frac{1}{\beta_2}(x^{k+1} - y^{k+1}). \tag{16}$$

In Steps 1 and 2 we must solve variational inequalities. In the following analysis, we convert them into simple projection operations.

Lemma 2 (see [20]).

Let $\Theta$ be a closed convex set in a Hilbert space and let $P_\Theta(x)$ be the projection of $x$ onto $\Theta$. Then
$$\langle z - y, \; y - x \rangle \ge 0, \quad \forall z \in \Theta \quad \Longleftrightarrow \quad y = P_\Theta(x). \tag{17}$$

Taking $x = x^{k+1} - \alpha_1 \left( Qx^{k+1} - A^T\lambda^k - \mu^k + (1/\beta_1) A^T(Ax^{k+1} - b) + (1/\beta_2)(x^{k+1} - y^k) \right)$ and $y = x^{k+1}$ in (17), we see that (13) is equivalent to the following nonlinear equation:
$$x^{k+1} = P_K\left( x^{k+1} - \alpha_1 \left( Qx^{k+1} - A^T\lambda^k - \mu^k + \frac{1}{\beta_1} A^T(Ax^{k+1} - b) + \frac{1}{\beta_2}(x^{k+1} - y^k) \right) \right), \tag{18}$$
where $\alpha_1$ can be any positive number.

Taking $x = y^{k+1} - \alpha_2 \left( c + \mu^k - (1/\beta_2)(x^{k+1} - y^{k+1}) \right)$ and $y = y^{k+1}$ in (17), we see that (14) is equivalent to the following nonlinear equation:
$$y^{k+1} = P_\Omega\left( y^{k+1} - \alpha_2 \left( c + \mu^k - \frac{1}{\beta_2}(x^{k+1} - y^{k+1}) \right) \right), \tag{19}$$
where $\alpha_2$ can be any positive number.

Due to the terms $Qx^{k+1}$ and $A^TAx^{k+1}$ in (18), we cannot compute $x^{k+1}$ directly. We therefore use the following approximation, similar to the one in [15]. For constants $\gamma_1$ and $\gamma_2$, let
$$R_1(x^k, x^{k+1}) = Qx^{k+1} - Qx^k - \gamma_1(x^{k+1} - x^k), \qquad R_2(x^k, x^{k+1}) = A^TAx^{k+1} - A^TAx^k - \gamma_2(x^{k+1} - x^k) \tag{20}$$
be the residuals between $Qx^{k+1}$, $A^TAx^{k+1}$ and their linearizations at $x^k$, respectively.

Instead of computing (18), we compute
$$\begin{aligned} x^{k+1} &= P_K\left( x^{k+1} - \alpha_1 \left( Qx^{k+1} - A^T\lambda^k - \mu^k + \frac{1}{\beta_1} A^T(Ax^{k+1} - b) + \frac{1}{\beta_2}(x^{k+1} - y^k) - R_1(x^k, x^{k+1}) - \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right) \right) \\ &= P_K\left( x^{k+1} - \alpha_1 \left( \left( \frac{1}{\beta_2} + \gamma_1 + \frac{\gamma_2}{\beta_1} \right) x^{k+1} - A^T\lambda^k - \mu^k - \frac{1}{\beta_1} A^T b - \frac{1}{\beta_2} y^k + Qx^k - \gamma_1 x^k + \frac{1}{\beta_1} A^TAx^k - \frac{\gamma_2}{\beta_1} x^k \right) \right). \end{aligned} \tag{21}$$
We choose $\gamma_1, \gamma_2$ so that $\gamma_1 > \lambda_{\max}(Q)$ and $\gamma_2 > \lambda_{\max}(A^TA)$, where $\lambda_{\max}(Q)$ and $\lambda_{\max}(A^TA)$ are the largest eigenvalues of $Q$ and $A^TA$, respectively.

Setting
$$\alpha_1 = \left( \frac{1}{\beta_2} + \gamma_1 + \frac{\gamma_2}{\beta_1} \right)^{-1} \tag{22}$$
in (21), we have
$$x^{k+1} = P_K\left( \alpha_1 \left( A^T\lambda^k + \mu^k + \frac{1}{\beta_1} A^T b + \frac{1}{\beta_2} y^k - Qx^k + \gamma_1 x^k - \frac{1}{\beta_1} A^TAx^k + \frac{\gamma_2}{\beta_1} x^k \right) \right), \tag{23}$$
which will be used as an approximation to the solution of variational inequality (13).

Letting $\alpha_2 = \beta_2$ in (19), we have
$$y^{k+1} = P_\Omega\left( x^{k+1} - \beta_2 (c + \mu^k) \right). \tag{24}$$

In summary, the modified alternating direction method is given as follows.

3.2. The Modified Alternating Direction Method

Given $x^0 \in K$, $y^0 \in \Omega$, $\lambda^0 \in \mathbb{R}^m$, $\mu^0 \in \mathbb{R}^n$, and $\beta_1, \beta_2 > 0$, for $k = 0, 1, 2, \dots$ perform the following steps.

Step 1.

Consider $(x^k, y^k, \lambda^k, \mu^k) \to (x^{k+1}, y^k, \lambda^k, \mu^k)$; we compute
$$x^{k+1} = P_K\left( \left( \frac{1}{\beta_2} + \gamma_1 + \frac{\gamma_2}{\beta_1} \right)^{-1} \left( A^T\lambda^k + \mu^k + \frac{1}{\beta_1} A^T b + \frac{1}{\beta_2} y^k - Qx^k + \gamma_1 x^k - \frac{1}{\beta_1} A^TAx^k + \frac{\gamma_2}{\beta_1} x^k \right) \right). \tag{25}$$

Step 2.

Consider $(x^{k+1}, y^k, \lambda^k, \mu^k) \to (x^{k+1}, y^{k+1}, \lambda^k, \mu^k)$; we compute
$$y^{k+1} = P_\Omega\left( x^{k+1} - \beta_2 (c + \mu^k) \right). \tag{26}$$

Step 3.

Consider $(x^{k+1}, y^{k+1}, \lambda^k, \mu^k) \to (x^{k+1}, y^{k+1}, \lambda^{k+1}, \mu^k)$; update the Lagrange multiplier by
$$\lambda^{k+1} = \lambda^k - \frac{1}{\beta_1}(Ax^{k+1} - b). \tag{27}$$

Step 4.

Consider $(x^{k+1}, y^{k+1}, \lambda^{k+1}, \mu^k) \to (x^{k+1}, y^{k+1}, \lambda^{k+1}, \mu^{k+1})$; update the Lagrange multiplier by
$$\mu^{k+1} = \mu^k - \frac{1}{\beta_2}(x^{k+1} - y^{k+1}). \tag{28}$$

From Steps 1 and 2, the modified alternating direction method only needs to compute metric projections onto $K$ and $\Omega$. From Steps 3 and 4, $1/\beta_1$ and $1/\beta_2$ can be interpreted as dual stepsizes. Each iteration of our method is therefore simple and fast.
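Steps 1-4 of the modified method (25)-(28) can be sketched in a few dozen lines. The following Python/NumPy sketch (the paper uses MATLAB) inlines the cone projection for self-containment; for brevity, the stopping test uses only iterate differences rather than the full rule used later in the paper, and the helper names are ours:

```python
import numpy as np

def project_soc(x):
    """Projection onto one second-order cone (block ordered (x1, x0), x0 last)."""
    x1, x0 = x[:-1], x[-1]
    nrm = np.linalg.norm(x1)
    if nrm <= x0:
        return x.copy()
    if nrm <= -x0:
        return np.zeros_like(x)
    t = 0.5 * (x0 + nrm)
    return np.concatenate([t * x1 / nrm, [t]])

def madm(Q, A, b, c, l, u, dims, beta1=0.8, beta2=0.8, eps=1e-6, max_iter=20000):
    """Modified alternating direction method, a sketch of Steps 1-4 in Section 3.2."""
    n, m = Q.shape[0], A.shape[0]
    gamma1 = np.linalg.eigvalsh(Q)[-1] + 1e-4            # gamma1 > lambda_max(Q)
    gamma2 = np.linalg.eigvalsh(A.T @ A)[-1] + 1e-4      # gamma2 > lambda_max(A^T A)
    alpha1 = 1.0 / (1.0 / beta2 + gamma1 + gamma2 / beta1)
    x, y = np.ones(n), np.ones(n)                        # e_n initial points
    lam, mu = np.ones(m), np.ones(n)
    AtA, Atb = A.T @ A, A.T @ b
    starts = np.cumsum([0] + list(dims[:-1]))
    for _ in range(max_iter):
        x_old, y_old, lam_old, mu_old = x, y, lam, mu
        # Step 1: projection onto K, eq. (25)
        v = (A.T @ lam + mu + Atb / beta1 + y / beta2
             - Q @ x + gamma1 * x - AtA @ x / beta1 + gamma2 * x / beta1)
        z = alpha1 * v
        x = np.concatenate([project_soc(z[s:s + d]) for s, d in zip(starts, dims)])
        # Step 2: projection onto the box, eq. (26)
        y = np.clip(x - beta2 * (c + mu), l, u)
        # Steps 3-4: dual updates, eqs. (27)-(28)
        lam = lam - (A @ x - b) / beta1
        mu = mu - (x - y) / beta2
        if max(np.linalg.norm(x - x_old), np.linalg.norm(y - y_old),
               np.linalg.norm(lam - lam_old), np.linalg.norm(mu - mu_old)) <= eps:
            break
    return x
```

On a tiny instance (one 3-dimensional cone, one equality constraint), the iterate converges to a point that is feasible for the cone, the box, and the affine constraint.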

4. The Convergence Result

In this section, we extend and modify the convergence results of the alternating direction method for convex quadratically constrained quadratic semidefinite programs in [15], and then give the convergence analysis of the alternating direction method for CQSOCP problems with bounded constraints.

Lemma 3.

The sequence $\{(x^k, y^k, \lambda^k, \mu^k)\}$ generated by the modified alternating direction method satisfies
$$\left\langle x^{k+1} - x^*, \; R_1(x^k, x^{k+1}) + \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right\rangle + \frac{1}{\beta_2} \langle y^{k+1} - y^*, \; y^k - y^{k+1} \rangle + \beta_1 \langle \lambda^{k+1} - \lambda^*, \; \lambda^k - \lambda^{k+1} \rangle + \beta_2 \langle \mu^{k+1} - \mu^*, \; \mu^k - \mu^{k+1} \rangle \ge 0, \tag{29}$$
where $(x^*, y^*, \lambda^*, \mu^*)$ is a KKT point of system (11).

Proof.

Letting $y = y^{k+1}$ in the second inequality of system (11), we have
$$\langle y^{k+1} - y^*, \; c + \mu^* \rangle \ge 0. \tag{30}$$
Letting $y = y^*$ in (14) and using (16), we have
$$\langle y^* - y^{k+1}, \; c + \mu^{k+1} \rangle \ge 0. \tag{31}$$
Adding (30) and (31) together gives
$$\langle y^{k+1} - y^*, \; \mu^* - \mu^{k+1} \rangle \ge 0. \tag{32}$$
In addition, from (14) and (16) we have
$$\langle y^k - y^{k+1}, \; c + \mu^{k+1} \rangle \ge 0, \qquad \langle y^{k+1} - y^k, \; c + \mu^k \rangle \ge 0. \tag{33}$$
Adding the two inequalities above gives
$$\langle y^{k+1} - y^k, \; \mu^k - \mu^{k+1} \rangle \ge 0. \tag{34}$$
Note that (21) can be written equivalently as
$$\left\langle x - x^{k+1}, \; Qx^{k+1} - A^T\lambda^{k+1} - \mu^{k+1} + \frac{1}{\beta_2}(y^{k+1} - y^k) - R_1(x^k, x^{k+1}) - \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right\rangle \ge 0, \quad \forall x \in K. \tag{35}$$
Setting $x = x^*$, we have
$$\left\langle x^* - x^{k+1}, \; Qx^{k+1} - A^T\lambda^{k+1} - \mu^{k+1} + \frac{1}{\beta_2}(y^{k+1} - y^k) - R_1(x^k, x^{k+1}) - \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right\rangle \ge 0. \tag{36}$$
Letting $x = x^{k+1}$ in the first inequality of system (11), we have
$$\langle x^{k+1} - x^*, \; Qx^* - A^T\lambda^* - \mu^* \rangle \ge 0. \tag{37}$$
Adding (36) and (37) together, and using the positive semidefiniteness of $Q$, we have
$$\langle x^{k+1} - x^*, \; A^T(\lambda^{k+1} - \lambda^*) \rangle + \langle x^{k+1} - x^*, \; \mu^{k+1} - \mu^* \rangle + \frac{1}{\beta_2}\langle x^{k+1} - x^*, \; y^k - y^{k+1} \rangle + \left\langle x^{k+1} - x^*, \; R_1(x^k, x^{k+1}) + \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right\rangle \ge \langle x^{k+1} - x^*, \; Q(x^{k+1} - x^*) \rangle \ge 0. \tag{38}$$
From the first term on the left side of (38), the third equation in system (11), and (15), we have
$$\langle x^{k+1} - x^*, \; A^T(\lambda^{k+1} - \lambda^*) \rangle = \langle Ax^{k+1} - Ax^*, \; \lambda^{k+1} - \lambda^* \rangle = \langle Ax^{k+1} - b, \; \lambda^{k+1} - \lambda^* \rangle = \beta_1 \langle \lambda^{k+1} - \lambda^*, \; \lambda^k - \lambda^{k+1} \rangle. \tag{39}$$
From (16), the last equation in system (11), and the second term on the left side of (38), we have
$$\langle x^{k+1} - x^*, \; \mu^{k+1} - \mu^* \rangle + \langle y^{k+1} - y^*, \; \mu^* - \mu^{k+1} \rangle = \langle \mu^{k+1} - \mu^*, \; x^{k+1} - x^* - y^{k+1} + y^* \rangle = \langle \mu^{k+1} - \mu^*, \; x^{k+1} - y^{k+1} \rangle = \beta_2 \langle \mu^{k+1} - \mu^*, \; \mu^k - \mu^{k+1} \rangle. \tag{40}$$
In addition, from the third term on the left side of (38), we have
$$\frac{1}{\beta_2} \langle x^{k+1} - x^*, \; y^k - y^{k+1} \rangle = \frac{1}{\beta_2} \langle y^{k+1} - y^*, \; y^k - y^{k+1} \rangle + \frac{1}{\beta_2} \langle x^{k+1} - y^{k+1}, \; y^k - y^{k+1} \rangle = \frac{1}{\beta_2} \langle y^{k+1} - y^*, \; y^k - y^{k+1} \rangle - \langle y^{k+1} - y^k, \; \mu^k - \mu^{k+1} \rangle. \tag{41}$$
It follows from (32)–(34) and (38)–(41) that
$$\left\langle x^{k+1} - x^*, \; R_1(x^k, x^{k+1}) + \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right\rangle + \frac{1}{\beta_2} \langle y^{k+1} - y^*, \; y^k - y^{k+1} \rangle + \beta_1 \langle \lambda^{k+1} - \lambda^*, \; \lambda^k - \lambda^{k+1} \rangle + \beta_2 \langle \mu^{k+1} - \mu^*, \; \mu^k - \mu^{k+1} \rangle \ge 0. \tag{42}$$

Now we give the convergence result.

Theorem 4.

The sequence $\{x^k\}$ generated by the modified alternating direction method converges to a solution point $x^*$ of problem (9).

Proof.

We denote
$$w = \begin{pmatrix} x \\ y \\ \lambda \\ \mu \end{pmatrix}, \qquad G = \begin{pmatrix} \gamma_1 I_n - Q + \frac{1}{\beta_1}(\gamma_2 I_n - A^TA) & 0 & 0 & 0 \\ 0 & \frac{1}{\beta_2} I_n & 0 & 0 \\ 0 & 0 & \beta_1 I_m & 0 \\ 0 & 0 & 0 & \beta_2 I_n \end{pmatrix}, \tag{43}$$
where $I_n$ denotes the $n \times n$ identity matrix; by the choice of $\gamma_1, \gamma_2$, the matrix $G$ is positive definite. We define the $G$-inner product of $w$ and $\bar{w}$ as
$$\langle w, \bar{w} \rangle_G = \left\langle x, \left( \gamma_1 I_n - Q + \frac{1}{\beta_1}(\gamma_2 I_n - A^TA) \right) \bar{x} \right\rangle + \frac{1}{\beta_2} \langle y, \bar{y} \rangle + \beta_1 \langle \lambda, \bar{\lambda} \rangle + \beta_2 \langle \mu, \bar{\mu} \rangle \tag{44}$$
and the associated $G$-norm as
$$\|w\|_G = \left( \|x\|_M^2 + \frac{1}{\beta_2}\|y\|_2^2 + \beta_1\|\lambda\|_2^2 + \beta_2\|\mu\|_2^2 \right)^{1/2}, \tag{45}$$
where $\|x\|_M^2 = x^T \left( \gamma_1 I_n - Q + \frac{1}{\beta_1}(\gamma_2 I_n - A^TA) \right) x$.

Observe that, by Lemma 2, solving the optimality condition (11) for problem (9) is equivalent to finding a zero of the residual function
$$e(w) = \begin{pmatrix} x - P_K\left( x - \alpha_1 (Qx - A^T\lambda - \mu) \right) \\ y - P_\Omega\left( y - \alpha_2 (c + \mu) \right) \\ Ax - b \\ x - y \end{pmatrix}. \tag{46}$$
From (15), (16), and the first equation in (21), we have
$$x^{k+1} = P_K\left( x^{k+1} - \alpha_1 \left( Qx^{k+1} - A^T\lambda^{k+1} - \mu^{k+1} + \frac{1}{\beta_2}(y^{k+1} - y^k) - R_1(x^k, x^{k+1}) - \frac{1}{\beta_1} R_2(x^k, x^{k+1}) \right) \right). \tag{47}$$
From (19) and (16), we have
$$y^{k+1} = P_\Omega\left( y^{k+1} - \alpha_2 \left( c + \mu^k - \frac{1}{\beta_2}(x^{k+1} - y^{k+1}) \right) \right) = P_\Omega\left( y^{k+1} - \alpha_2 (c + \mu^{k+1}) \right). \tag{48}$$
Based on (47)-(48), (15)-(16), and the nonexpansiveness of the projection operator, we have
$$\|e(w^{k+1})\|_2 \le \left\| \begin{pmatrix} \frac{\alpha_1}{\beta_2}(y^k - y^{k+1}) + \alpha_1 R_1(x^k, x^{k+1}) + \frac{\alpha_1}{\beta_1} R_2(x^k, x^{k+1}) \\ 0 \\ \beta_1(\lambda^k - \lambda^{k+1}) \\ \beta_2(\mu^k - \mu^{k+1}) \end{pmatrix} \right\|_2 \le \left\| \begin{pmatrix} \alpha_1 R_1(x^k, x^{k+1}) + \frac{\alpha_1}{\beta_1} R_2(x^k, x^{k+1}) \\ 0 \\ \beta_1(\lambda^k - \lambda^{k+1}) \\ \beta_2(\mu^k - \mu^{k+1}) \end{pmatrix} \right\|_2 + \left\| \begin{pmatrix} \frac{\alpha_1}{\beta_2}(y^k - y^{k+1}) \\ 0 \\ 0 \\ 0 \end{pmatrix} \right\|_2 \le \delta \|w^k - w^{k+1}\|_G, \tag{49}$$
where $\delta$ is a positive constant depending on the parameters $\alpha_1$, $\beta_1$, $\beta_2$, $\gamma_1$, $\gamma_2$ and the largest eigenvalues of $Q$ and $A^TA$; for example, one may set
$$\delta = \max\left\{ \beta_1, \; \beta_2, \; \frac{\alpha_1^2}{\beta_2}, \; \alpha_1^2 \lambda_{\max}\left( \gamma_1 I_n - Q + \frac{1}{\beta_1}(\gamma_2 I_n - A^TA) \right) \right\}. \tag{50}$$

From Lemma 3, (29) can be written as
$$\langle w^{k+1} - w^*, \; w^k - w^{k+1} \rangle_G \ge 0, \tag{51}$$
which implies that
$$\langle w^k - w^*, \; w^k - w^{k+1} \rangle_G \ge \|w^k - w^{k+1}\|_G^2. \tag{52}$$
Thus
$$\begin{aligned} \|w^{k+1} - w^*\|_G^2 &= \|(w^k - w^*) - (w^k - w^{k+1})\|_G^2 \\ &= \|w^k - w^*\|_G^2 - 2\langle w^k - w^*, \; w^k - w^{k+1} \rangle_G + \|w^k - w^{k+1}\|_G^2 \\ &\le \|w^k - w^*\|_G^2 - \|w^k - w^{k+1}\|_G^2 \\ &\le \|w^k - w^*\|_G^2 - \frac{1}{\delta^2}\|e(w^{k+1})\|_2^2. \end{aligned} \tag{53}$$
From the above inequality, we have
$$\|w^{k+1} - w^*\|_G^2 \le \|w^k - w^*\|_G^2, \quad k = 1, 2, \dots. \tag{54}$$
That is, the sequence $\{w^k\}$ is bounded, so there exists at least one cluster point of $\{w^k\}$.

It also follows from (53) that
$$\sum_{k=0}^{\infty} \frac{1}{\delta^2}\|e(w^{k+1})\|_2^2 < +\infty, \tag{55}$$
and thus
$$\lim_{k \to \infty} \|e(w^{k+1})\|_2 = 0. \tag{56}$$

Let $\bar{w}$ be a cluster point of $\{w^k\}$ and let the subsequence $\{w^{k_j}\}$ converge to $\bar{w}$. By the continuity of $e(\cdot)$, we have
$$\|e(\bar{w})\|_2 = \lim_{j \to \infty} \|e(w^{k_j})\|_2 = 0, \tag{57}$$
so $\bar{w}$ satisfies system (11). Setting $w^* = \bar{w}$ in (54), we have
$$\|w^{k+1} - \bar{w}\|_G \le \|w^k - \bar{w}\|_G. \tag{58}$$
Hence the whole sequence $\{w^k\}$ satisfies
$$\lim_{k \to \infty} w^k = \bar{w}. \tag{59}$$

5. Simulation Experiments

In this section we present computational results comparing the modified alternating direction method with an interior-point method. The interior-point method is used to solve the transformed mixed linear and second-order cone programming problem (3). All algorithms are run in the MATLAB 7.0 environment on a personal computer with a 1.80 GHz Intel Core processor and 2.00 GB of RAM.

The test problems are generated randomly as follows:

Given the values of $n$, $m$, $N$, and $n_i$, $i = 1, 2, \dots, N$, with $\sum_{i=1}^N n_i = n$.

Generate a random matrix $\tilde{Q} \in \mathbb{R}^{n \times n}$ and set $Q = \tilde{Q}^T \tilde{Q}$. At the same time, generate a random matrix $A \in \mathbb{R}^{m \times n}$ with full row rank.

Set l = - e , u = e , where e is a vector whose components are all ones.

Given $x^* = [x_1^*, x_2^*, \dots, x_N^*] \in \mathbb{R}^{n_1} \times \cdots \times \mathbb{R}^{n_N}$, generate each random block $x_i^*$ inside $[l_i, u_i]$ and make it an interior point of the second-order cone $K^{n_i}$, for $i = 1, 2, \dots, N$.

We obtain $b$ by computing $b = Ax^*$.
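The generation recipe above can be sketched in Python/NumPy (the paper uses MATLAB and does not specify exactly how the interior cone point is drawn, so the scaling used below is our own illustrative choice):

```python
import numpy as np

def make_test_problem(m, n, dims, rng=None):
    """Random CQSOCP instance following the recipe above; dims = [n_1, ..., n_N]."""
    rng = np.random.default_rng(rng)
    Qt = rng.standard_normal((n, n))
    Q = Qt.T @ Qt                        # symmetric positive semidefinite
    A = rng.standard_normal((m, n))      # full row rank with probability 1
    c = rng.standard_normal(n)
    l, u = -np.ones(n), np.ones(n)
    # build a point strictly inside each cone and inside the box [l, u]
    x = np.empty(n)
    start = 0
    for n_i in dims:
        xi1 = rng.uniform(-1.0, 1.0, n_i - 1)
        nrm = np.linalg.norm(xi1)
        if nrm > 0.5:                    # keep the block well inside the box
            xi1 *= 0.5 / nrm
        xi0 = np.linalg.norm(xi1) + 0.1  # strictly inside K^{n_i}
        x[start:start + n_i - 1] = xi1
        x[start + n_i - 1] = xi0
        start += n_i
    b = A @ x                            # b = A x*, so the instance is feasible
    return Q, A, b, c, l, u
```

By construction every generated instance has a strictly feasible point, which is consistent with the Slater-type condition assumed in Section 3.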

The first set of test problems includes 16 small scale CQSOCP problems with bounded constraints, which is shown in Table 1. In Tables 1 and 3, an entry of the form “ 20 × 5 ” in the “SOC” column means that there are 20 5-dimensional second-order cones, and the “ratio” denotes the ratio between the number of the second-order cones and the value of n .

The test problems with small scale.

Problems m n SOC Ratio
P01 40 100 1 × 100 1.00%
P02 40 100 1 × 40 ; 20 × 3 21.00%
P03 40 100 20 × 5 20.00%
P04 40 100 1 × 4 ; 32 × 3 33.00%

P05 120 200 1 × 200 0.50%
P06 120 200 1 × 100 ; 1 × 4 ; 32 × 3 17.00%
P07 120 200 40 × 5 20.00%
P08 120 200 1 × 5 ; 65 × 3 33.00%

P09 200 400 1 × 400 0.25%
P10 200 400 1 × 200 ; 1 × 5 ; 65 × 3 16.75%
P11 200 400 80 × 5 20.00%
P12 200 400 1 × 4 ; 132 × 3 33.25%

P13 300 600 1 × 600 0.16%
P14 300 600 1 × 400 ; 1 × 5 ; 65 × 3 11.16%
P15 300 600 120 × 5 20.00%
P16 300 600 200 × 3 33.33%

As is well known, interior-point methods are among the most efficient classes of methods for SOCP. Here the MATLAB codes for the interior-point method come from the SeDuMi software package [4]. In SeDuMi, we set the desired accuracy parameter pars.eps $= 10^{-6}$.

Let Δ f k = f x k - f x k - 1 , where f ( x ) = x T Q x + c T x . In the alternating direction method, we stop our algorithm when (60) max x k - x k - 1 2 , y k - y k - 1 2 , λ k - λ k - 1 2 , μ k - μ k - 1 2 , Δ f k ϵ for ϵ > 0 . Here we set β 1 = 0.8 , β 2 = 0.8 , γ 1 = λ max ( Q ) + 0.0001 , γ 2 = λ max ( A T A ) + 0.0001 and ϵ = 1 0 - 6 . We choose the initial point x 0 = e n , y 0 = e n , λ 0 = e m , and μ 0 = e n , where e n is the n -dimensional vector of ones.
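The stopping rule (60) is straightforward to implement; a small Python/NumPy helper (function names are ours, not the paper's):

```python
import numpy as np

def objective(Q, c, x):
    """f(x) = (1/2) x^T Q x + c^T x."""
    return 0.5 * x @ Q @ x + c @ x

def converged(prev, curr, df, eps=1e-6):
    """Stopping rule (60): all successive iterate differences and the
    objective change |df| must fall below eps.
    prev and curr are (x, y, lam, mu) tuples from consecutive iterations."""
    diffs = [np.linalg.norm(a - b) for a, b in zip(curr, prev)]
    return max(diffs + [abs(df)]) <= eps
```

In practice this test is evaluated once per iteration, after Steps 3 and 4, using the objective values of $x^{k-1}$ and $x^k$.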

For the first set of test problems, the iteration number and average CPU time are used to evaluate the performance of the modified alternating direction method and the interior-point method in SeDuMi. The test results are shown in Table 2. In Tables 2 and 4, "Time" denotes the average CPU time (in seconds) and "Iter." denotes the average number of iterations. In addition, "MADM" denotes the modified alternating direction method. In Table 4, "/" indicates that the method does not work on our personal computer because it runs out of memory.

The results for the test problems with small scale.

Problems | MADM: Iter. Time Value | SeDuMi: Iter. Time Value
P01 216 0.14 6.8296723 9 0.34 6.8296742
P02 245 0.25 71.9788074 9 0.31 71.9788099
P03 264 0.30 76.2533942 11 0.39 76.2533927
P04 261 0.34 143.5300081 11 0.44 143.5300023

P05 269 0.33 47.1664568 10 0.81 47.1664516
P06 271 0.56 164.0914031 12 1.16 164.0914052
P07 293 0.66 467.8962158 13 1.28 467.8962123
P08 320 0.89 671.0620751 13 1.34 671.0620812

P09 322 1.53 71.2857981 12 5.19 71.2857926
P10 337 2.28 85.4731057 13 7.41 85.4731061
P11 304 2.14 1085.3976583 15 7.94 1085.3976475
P12 316 2.62 2256.4765633 16 10.11 2256.4765646

P13 351 4.17 89.3228874 11 15.03 89.3228832
P14 377 5.14 113.5586948 12 16.16 113.5586923
P15 327 5.13 2198.2136742 16 27.53 2198.2136739
P16 326 5.81 2727.7797204 16 32.67 2727.7797233

The test problems with medium scale.

Problems m n SOC Ratio
P21 400 1000 100 × 10 10.00%
P22 400 1000 1 × 200 ; 160 × 5 16.10%
P23 400 1000 1 × 4 ; 332 × 3 33.30%

P24 600 2000 50 × 40 2.5%
P25 600 2000 1 × 400 ; 1 × 4 ; 532 × 3 26.70%
P26 600 2000 1 × 5 ; 665 × 3 33.33%

P27 800 3000 100 × 30 3.33%
P28 800 3000 1 × 600 ; 800 × 3 26.70%
P29 800 3000 1000 × 3 33.33%

P30 1000 4000 100 × 40 2.50%
P31 1000 4000 1 × 200 ; 760 × 5 19.02%
P32 1000 4000 1 × 4 ; 1332 × 3 33.32%

P33 2000 5000 100 × 50 2.00%
P34 2000 5000 1 × 400 ; 920 × 5 18.42%
P45 2000 5000 1 × 5 ; 1665 × 3 33.32%

The results for the test problems with medium scale.

Problems | MADM: Iter. Time Value | SeDuMi: Iter. Time Value
P21 326 10.88 1382.4709177 17 136.73 1382.4709168
P22 330 11.61 206.0023854 15 130.14 206.0023835
P23 378 16.78 7714.2171166 17 178.88 7714.2171243

P24 429 58.73 1678.7355144 19 1013.98 1678.7355313
P25 449 65.80 312.2865058 18 1596.53 312.2865057
P26 520 64.86 26103.1426757 19 1640.63 26103.1426826

P27 448 109.83 2882.0125678 / / /
P28 536 141.35 414.5181029 / / /
P29 566 150.15 35810.2685515 / / /

P30 425 190.92 6190.8005238 / / /
P31 506 239.69 801.0388729 / / /
P32 599 289.32 53966.4593065 / / /

P33 348 273.73 10471.3329318 / / /
P34 383 310.31 1145.2045565 / / /
P45 507 413.41 170350.7785126 / / /

Table 2 shows that the modified alternating direction method costs less CPU time than the interior-point method in SeDuMi, although the interior-point method needs fewer iterations.

In addition, Table 1 gives different kinds of test problems, including problems with only one large second-order cone (P01, P05, P09, P13), problems with many small second-order cones (P04, P08, P12, P16), and problems with one large second-order cone and some small second-order cones (P02, P06, P10, P14). The results in Table 2 show that the modified alternating direction method can solve these different kinds of convex quadratic second-order cone programming problems with reasonable CPU time and accuracy.

The second set of test problems includes 15 medium scale problems, which are shown in Table 3. For this set of test problems, the results are shown in Table 4.

The results in Table 4 show that the interior-point method in SeDuMi fails on the transformed problem (3) with an "out of memory" error on our personal computer when $n > 2000$, while the modified alternating direction method remains efficient because it needs much less memory than the interior-point method.

In addition, we report results for P04 and P12 with a smaller stopping tolerance and with random initial points. The smaller tolerance for our method is $\epsilon = 10^{-10}$. We also run one hundred experiments with random initial points. The test results are shown in Table 5. In SeDuMi, we set the desired accuracy parameter pars.eps $= 10^{-8}$.

The results in smaller criteria and with random initial points.

Problems | MADM: Iter. Time Value | SeDuMi: Iter. Time Value
P04 ( 1 0 - 6 , random point) 275 0.37 169.24049123 21 0.88 169.24049003
P04 ( 1 0 - 6 , fixed point) 287 0.41 169.24049234 21 0.88 169.24049003
P04 ( 1 0 - 10 , random point) 485 0.62 169.24049007 21 0.88 169.24049003
P04 ( 1 0 - 10 , fixed point) 494 0.67 169.24049007 21 0.88 169.24049003

P12 ( 1 0 - 6 , random point) 355 2.55 1904.587401 27 20.52 1904.587416
P12 ( 1 0 - 6 , fixed point) 367 2.95 1904.587402 27 20.52 1904.587416
P12 ( 1 0 - 10 , random point) 628 4.37 1904.587409 27 20.52 1904.587416
P12 ( 1 0 - 10 , fixed point) 642 5.20 1904.587408 27 20.52 1904.587416

Table 5 shows that the performance of MADM with random initial points is slightly better than that of MADM with the fixed initial point under both stopping tolerances. In addition, MADM with $\epsilon = 10^{-10}$ needs more iterations and more CPU time than MADM with $\epsilon = 10^{-6}$.

6. Conclusion

In this paper, a modified alternating direction method is proposed for solving convex quadratic second-order cone programming problems with bounded constraints. The proposed method does not require solving variational inequality subproblems over the second-order cones and the bounded set. At each iteration, we only need to compute the metric projection onto the second-order cones and a projection onto the bounded set. The method does not require second-order information and is easy to implement. The random simulation results show that our method can efficiently solve convex quadratic second-order cone programming problems of vector size up to 5000 within reasonable time and accuracy on a desktop computer.

Disclosure

This work was conducted while Xuewen Mu was visiting the Department of Mathematics, Ohio University.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work is supported by China Scholarship Council (CSC). This work was also supported by the National Science Foundations for Young Scientists of China (11101320, 61201297) and the Fundamental Research Funds for the Central Universities (JB150713).

[1] X. Zhang, Z. Liu, and S. Liu, "A trust region SQP-filter method for nonlinear second-order cone programming," Computers & Mathematics with Applications, vol. 63, no. 12, pp. 1569–1576, 2012.
[2] H. Kato and M. Fukushima, "An SQP-type algorithm for nonlinear second-order cone programs," Optimization Letters, vol. 1, no. 2, pp. 129–144, 2007.
[3] X. Y. Zhao, A Semismooth Newton-CG Augmented Lagrangian Method for Large Scale Linear and Convex Quadratic SDPs, Ph.D. thesis, National University of Singapore, 2009.
[4] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods and Software, vol. 11, no. 1–4, pp. 625–653, 1999.
[5] R. H. Tütüncü, K. C. Toh, and M. J. Todd, "Solving semidefinite-quadratic-linear programs using SDPT3," Mathematical Programming, vol. 95, pp. 189–217, 2003.
[6] S. H. Schmieta and F. Alizadeh, "Associative and Jordan algebras, and polynomial time interior-point algorithms for symmetric cones," Mathematics of Operations Research, vol. 26, no. 3, pp. 543–564, 2001.
[7] S. H. Schmieta and F. Alizadeh, "Extension of primal-dual interior point algorithms to symmetric cones," Mathematical Programming, vol. 96, no. 3, pp. 409–438, 2003.
[8] R. D. Monteiro and T. Tsuchiya, "Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions," Mathematical Programming, vol. 88, no. 1, pp. 61–83, 2000.
[9] J. Eckstein and D. P. Bertsekas, An Alternating Direction Method for Linear Programming, Report LIDS-P, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, Mass, USA.
[10] Z. Yu, "Solving semidefinite programming problems via alternating direction methods," Journal of Computational and Applied Mathematics, vol. 193, no. 2, pp. 437–445, 2006.
[11] J. Malick, J. Povh, F. Rendl, and A. Wiegele, "Regularization methods for semidefinite programming," SIAM Journal on Optimization, vol. 20, no. 1, pp. 336–356, 2009.
[12] P. Tseng, "Alternating projection-proximal methods for convex programming and variational inequalities," SIAM Journal on Optimization, vol. 7, no. 4, pp. 951–965, 1997.
[13] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[14] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842–2865, 2009.
[15] J. Sun and S. Zhang, "A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs," European Journal of Operational Research, vol. 207, no. 3, pp. 1210–1220, 2010.
[16] Z. Wen, D. Goldfarb, and W. Yin, "Alternating direction augmented Lagrangian methods for semidefinite programming," Mathematical Programming Computation, vol. 2, no. 3-4, pp. 203–230, 2010.
[17] J. Faraut and A. Korányi, Analysis on Symmetric Cones, Oxford University Press, New York, NY, USA, 1994.
[18] J. V. Outrata and D. Sun, "On the coderivative of the projection operator onto the second-order cone," Set-Valued Analysis, vol. 16, no. 7-8, pp. 999–1014, 2008.
[19] L. C. Kong, L. Tunçel, and N. H. Xiu, "Clarke generalized Jacobian of the projection onto symmetric cones," Set-Valued and Variational Analysis, vol. 17, pp. 135–151, 2009.
[20] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, NY, USA, 1980.