An alternating direction method is proposed for convex quadratic second-order cone programming (CQSOCP) problems with bounded constraints. In the algorithm, the primal problem is first reformulated as an equivalent convex quadratic program with separable structure over second-order cones and a bounded set. At each iteration, we only need to compute the metric projection onto the second-order cones and the projection onto the bounded set. A convergence result is given. Numerical results demonstrate that our method is efficient for convex quadratic second-order cone programming problems with bounded constraints.
1. Introduction
In this paper, we consider a convex quadratic second-order cone programming (CQSOCP) problem with bounded constraints, defined by minimizing a convex quadratic function over the intersection of an affine set, a bounded set, and a product of second-order cones. The primal convex quadratic second-order cone programming problem is
$$\min\ \tfrac12 x^TQx + c^Tx \quad \text{s.t.}\quad Ax=b,\ \ x\in K,\ \ x\in\Omega, \tag{1}$$
where $\Omega=\{x \mid l\le x\le u\}$ is a bounded set, $Q$ is an $n\times n$ symmetric positive semidefinite matrix, $A\in\mathbb{R}^{m\times n}$, $c\in\mathbb{R}^n$, $b\in\mathbb{R}^m$, and $x=[x_1,\dots,x_N]\in\mathbb{R}^{n_1}\times\cdots\times\mathbb{R}^{n_N}$ is viewed as a column vector in $\mathbb{R}^{n_1+\cdots+n_N}$ with $\sum_{i=1}^N n_i=n$. In addition, $K=K^{n_1}\times K^{n_2}\times\cdots\times K^{n_N}$ and $x_i\in K^{n_i}$, where $K^{n_i}$ is the $n_i$-dimensional second-order cone
$$K^{n_i}=\left\{ x_i=\begin{pmatrix}x_{i1}\\ x_{i0}\end{pmatrix}\in\mathbb{R}^{n_i-1}\times\mathbb{R} : \|x_{i1}\|_2\le x_{i0} \right\}, \tag{2}$$
where $\|\cdot\|_2$ is the standard Euclidean norm.
The convex quadratic second-order cone programming problem with bounded constraints is a nonlinear programming problem, which arises, for example, as the trust region subproblem in trust region methods for nonlinear second-order cone programming [1, 2]. Since $Q$ is symmetric positive semidefinite, we can compute a square root factor $Q^{1/2}$ with $(Q^{1/2})^TQ^{1/2}=Q$ (e.g., by a Cholesky factorization). Then problem (1) can be equivalently transformed into the following mixed linear and second-order cone programming (MLSOCP) problem [3]:
$$\min\ t + c^Tx \quad \text{s.t.}\quad Ax=b,\quad \left\|\begin{pmatrix}\sqrt{2}\,Q^{1/2}x\\ t-1\end{pmatrix}\right\|_2\le t+1,\quad l\le x\le u,\ \ x\in K. \tag{3}$$
In [1, 2], the authors use well-developed and publicly available interior-point software packages, such as SeDuMi [4] and SDPT3 [5], to solve the equivalent MLSOCP (3).
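To see why the second-order cone constraint in (3) encodes the quadratic objective, square both sides: $(t+1)^2-(t-1)^2=4t$ and $\|\sqrt2\,Q^{1/2}x\|_2^2=2x^TQx$, so the constraint is exactly $\tfrac12 x^TQx\le t$. A scalar sanity check of this equivalence (all numerical values below are hypothetical, with $Q=q>0$):

```python
import math

def soc_holds(q, x, t):
    # SOC constraint of (3) in the scalar case: ||(sqrt(2*q)*x, t-1)||_2 <= t+1
    lhs = math.hypot(math.sqrt(2.0 * q) * x, t - 1.0)
    return lhs <= t + 1.0

def quad_holds(q, x, t):
    # original epigraph form: (1/2) * q * x^2 <= t
    return 0.5 * q * x * x <= t

# the two constraints agree wherever t >= 0 (the SOC side needs t + 1 >= 0)
for q in (0.5, 1.0, 3.0):
    for x in (-2.0, 0.0, 1.5):
        for t in (0.0, 0.7, 2.1, 10.0):
            assert soc_holds(q, x, t) == quad_holds(q, x, t)
```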
Interior-point methods are well developed for linear symmetric cone programming [6–8]. However, at each iteration these solvers must form and factorize a dense Schur complement matrix, which for the CQSOCP problem with bounded constraints amounts to a linear system of dimension $(m+3n+2)\times(m+3n+2)$. In addition, the transformation requires computing a square root of the positive semidefinite matrix $Q$. When $n$ is large, because of the very large size and ill-conditioning of this linear system, interior-point methods have difficulty solving the transformed MLSOCP problem efficiently [3].
The alternating direction method (ADM) has been an effective first-order approach for solving large optimization problems, such as linear programming [9], linear semidefinite programming (LSDP) [10, 11], nonlinear convex optimization [12], and nonsmooth $\ell_1$ minimization arising in compressive sensing [13, 14]. A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs is proposed in [15]. In the thesis [3], a semismooth Newton-CG augmented Lagrangian method is proposed for large scale convex quadratic symmetric cone programming. In [16], an alternating direction dual augmented Lagrangian method for solving linear semidefinite programming problems in standard form is presented and extended to SDPs with inequality and positivity constraints.
In this paper, an alternating direction method for the CQSOCP problem with bounded constraints is proposed. First, the primal problem is reformulated as an equivalent convex quadratic program with separable structure over second-order cones and a bounded set. Then an alternating direction method is proposed to solve this separable problem. At each iteration of the method, we only need to compute the metric projection onto the second-order cones and the projection onto the bounded set. We also give convergence results and numerical results.
2. The Projection on the Second-Order Cone and the Bounded Set
In this section, we give the projection formulas for the second-order cones and the bounded set.
Let $x_i=(x_{i1};x_{i0})\in\mathbb{R}^{n_i-1}\times\mathbb{R}$ for $i=1,2,\dots,N$; then the spectral decomposition of $x_i$ associated with the second-order cone $K^{n_i}$ can be written as [17–19]
$$x_i=\lambda_1(x_i)c_1(x_i)+\lambda_2(x_i)c_2(x_i),\quad i=1,2,\dots,N, \tag{4}$$
where
$$\lambda_1(x_i)=x_{i0}-\|x_{i1}\|_2,\quad \lambda_2(x_i)=x_{i0}+\|x_{i1}\|_2,\quad c_1(x_i)=\frac12\begin{pmatrix}-w\\1\end{pmatrix},\quad c_2(x_i)=\frac12\begin{pmatrix}w\\1\end{pmatrix}, \tag{5}$$
with $w=x_{i1}/\|x_{i1}\|_2$ if $x_{i1}\ne 0$, and $w$ any vector in $\mathbb{R}^{n_i-1}$ satisfying $\|w\|_2=1$ if $x_{i1}=0$.
Next we introduce the projection lemma over the second-order cone [17–19].
Lemma 1 (see [<xref ref-type="bibr" rid="B17">17</xref>–<xref ref-type="bibr" rid="B19">19</xref>]).
For any $x_i=(x_{i1};x_{i0})\in\mathbb{R}^{n_i-1}\times\mathbb{R}$, let $P_{K^{n_i}}(x_i)$ be the projection of $x_i$ onto the second-order cone $K^{n_i}$; then we have
$$P_{K^{n_i}}(x_i)=[\lambda_1(x_i)]_+\,c_1(x_i)+[\lambda_2(x_i)]_+\,c_2(x_i),\quad i=1,2,\dots,N, \tag{6}$$
where $[s]_+:=\max(0,s)$ for $s\in\mathbb{R}$.
Let $x=[x_1,\dots,x_N]\in\mathbb{R}^{n_1}\times\cdots\times\mathbb{R}^{n_N}$; then the projection $P_K(x)$ of $x$ onto the cone $K$ is
$$P_K(x)=\big(P_{K^{n_1}}(x_1),\dots,P_{K^{n_N}}(x_N)\big)\in\mathbb{R}^{n_1}\times\cdots\times\mathbb{R}^{n_N}. \tag{7}$$
Let $y\in\mathbb{R}^n$; then the projection onto the bounded set $\Omega$ is easy to carry out, namely, componentwise:
$$P_\Omega(y)=\max\big(l,\min(y,u)\big). \tag{8}$$
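Formulas (6)–(8) can be sketched directly in code. The following is a minimal pure-Python illustration (not the authors' implementation), with a vector stored as a list whose last entry is the scalar part $x_{i0}$:

```python
import math

def project_soc(x):
    """Projection onto the second-order cone K^n for x = (x1; x0), given as a
    list with vector part x[:-1] and scalar part x[-1], via (4)-(6)."""
    x1, x0 = x[:-1], x[-1]
    nrm = math.sqrt(sum(v * v for v in x1))
    lam1, lam2 = x0 - nrm, x0 + nrm            # spectral values (5)
    if lam1 >= 0.0:                            # x already lies in the cone
        return list(x)
    if lam2 <= 0.0:                            # both spectral values clipped: project to 0
        return [0.0] * len(x)
    w = [v / nrm for v in x1] if nrm > 0 else [0.0] * len(x1)
    # only [lam2]_+ survives in (6): P(x) = (lam2/2) * (w; 1)
    return [0.5 * lam2 * wi for wi in w] + [0.5 * lam2]

def project_box(y, l, u):
    """Componentwise projection onto Omega = {l <= y <= u}, formula (8)."""
    return [max(li, min(yi, ui)) for yi, li, ui in zip(y, l, u)]
```

For example, `project_soc([3.0, 4.0, 0.0])` returns approximately `[1.5, 2.0, 2.5]`, a point on the boundary of the cone, since $\lambda_2 = 0 + 5 = 5$ and $w = (0.6, 0.8)$.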
3. An Alternating Direction Method for CQSOCP Problems with Bounded Constraints
In this section, we give an alternating direction method for convex quadratic second-order cone programming problems with bounded constraints.
First, we give an equivalent convex quadratic program with separable structure over second-order cones and a bounded set:
$$\min\ \tfrac12 x^TQx + c^Ty \quad \text{s.t.}\quad Ax=b,\ \ x\in K,\ \ x=y,\ \ y\in\Omega. \tag{9}$$
The Lagrangian function of the separable problem (9) is
$$L(x,y,\lambda,\mu)=\tfrac12 x^TQx + c^Ty - \lambda^T(Ax-b) - \mu^T(x-y), \tag{10}$$
where $\lambda\in\mathbb{R}^m$ and $\mu\in\mathbb{R}^n$, and the corresponding dual problem is $\max_{\lambda,\mu}\min_{x\in K,\,y\in\Omega}L(x,y,\lambda,\mu)$.
Under mild constraint qualifications (e.g., the Slater condition), strong duality holds for problem (9); hence $x^*$ is an optimal solution of (9) if and only if there exists $(x^*,y^*,\lambda^*,\mu^*)\in K\times\Omega\times\mathbb{R}^m\times\mathbb{R}^n$ satisfying the following KKT system in variational inequality form:
$$\langle x-x^*,\ Qx^*-A^T\lambda^*-\mu^*\rangle\ge 0\ \ \forall x\in K,\qquad \langle y-y^*,\ c+\mu^*\rangle\ge 0\ \ \forall y\in\Omega,\qquad Ax^*=b,\qquad x^*=y^*. \tag{11}$$
The augmented Lagrangian function for the separable problem (9) is defined as
$$\mathcal{L}(x,y,\lambda,\mu)=\tfrac12 x^TQx + c^Ty - \lambda^T(Ax-b) - \mu^T(x-y) + \frac{1}{2\beta_1}\|Ax-b\|_2^2 + \frac{1}{2\beta_2}\|x-y\|_2^2, \tag{12}$$
where $\beta_1,\beta_2>0$.
The alternating direction method for (12), stated in variational inequality form, is as follows.
3.1. The Original Alternating Direction Method
Given $x^0\in K$, $y^0\in\Omega$, $\lambda^0\in\mathbb{R}^m$, $\mu^0\in\mathbb{R}^n$, and $\beta_1,\beta_2>0$, for $k=0,1,2,\dots$, perform the following steps.
Step 1.
Consider $(x^k,y^k,\lambda^k,\mu^k)\to(x^{k+1},y^k,\lambda^k,\mu^k)$; we compute $x^{k+1}$, which satisfies
$$\Big\langle x-x^{k+1},\ Qx^{k+1}-A^T\lambda^k-\mu^k+\frac{1}{\beta_1}A^T(Ax^{k+1}-b)+\frac{1}{\beta_2}(x^{k+1}-y^k)\Big\rangle\ge 0,\quad\forall x\in K. \tag{13}$$
Step 2.
Consider $(x^{k+1},y^k,\lambda^k,\mu^k)\to(x^{k+1},y^{k+1},\lambda^k,\mu^k)$; we compute $y^{k+1}$, which satisfies
$$\Big\langle y-y^{k+1},\ c+\mu^k-\frac{1}{\beta_2}(x^{k+1}-y^{k+1})\Big\rangle\ge 0,\quad\forall y\in\Omega. \tag{14}$$
Step 3.
Consider $(x^{k+1},y^{k+1},\lambda^k,\mu^k)\to(x^{k+1},y^{k+1},\lambda^{k+1},\mu^k)$; update the Lagrange multiplier by
$$\lambda^{k+1}=\lambda^k-\frac{1}{\beta_1}(Ax^{k+1}-b). \tag{15}$$
Step 4.
Consider $(x^{k+1},y^{k+1},\lambda^{k+1},\mu^k)\to(x^{k+1},y^{k+1},\lambda^{k+1},\mu^{k+1})$; update the Lagrange multiplier by
$$\mu^{k+1}=\mu^k-\frac{1}{\beta_2}(x^{k+1}-y^{k+1}). \tag{16}$$
Steps 1 and 2 require solving variational inequalities. In the following analysis, we convert them into simple projection operations.
Lemma 2 (see [<xref ref-type="bibr" rid="B20">20</xref>]).
Let $\Theta$ be a closed convex set in a Hilbert space and let $P_\Theta(x)$ be the projection of $x$ onto $\Theta$. Then
$$\langle z-y,\ y-x\rangle\ge 0\ \ \forall z\in\Theta \iff y=P_\Theta(x). \tag{17}$$
Taking $x=x^{k+1}-\alpha_1\big(Qx^{k+1}-A^T\lambda^k-\mu^k+\frac{1}{\beta_1}A^T(Ax^{k+1}-b)+\frac{1}{\beta_2}(x^{k+1}-y^k)\big)$ and $y=x^{k+1}$ in (17), we see that (13) is equivalent to the nonlinear equation
$$x^{k+1}=P_K\Big(x^{k+1}-\alpha_1\Big(Qx^{k+1}-A^T\lambda^k-\mu^k+\frac{1}{\beta_1}A^T(Ax^{k+1}-b)+\frac{1}{\beta_2}(x^{k+1}-y^k)\Big)\Big), \tag{18}$$
where $\alpha_1$ can be any positive number.
Taking $x=y^{k+1}-\alpha_2\big(c+\mu^k-\frac{1}{\beta_2}(x^{k+1}-y^{k+1})\big)$ and $y=y^{k+1}$ in (17), we see that (14) is equivalent to the nonlinear equation
$$y^{k+1}=P_\Omega\Big(y^{k+1}-\alpha_2\Big(c+\mu^k-\frac{1}{\beta_2}(x^{k+1}-y^{k+1})\Big)\Big), \tag{19}$$
where $\alpha_2$ can be any positive number.
Because of the terms $Qx^{k+1}$ and $A^TAx^{k+1}$ in (18), we cannot compute $x^{k+1}$ directly. We therefore use the following approximation, which is similar to the one in [15]. For constants $\gamma_1$ and $\gamma_2$, let
$$R_1(x^k,x^{k+1})=Qx^{k+1}-Qx^k-\gamma_1(x^{k+1}-x^k),\qquad R_2(x^k,x^{k+1})=A^TAx^{k+1}-A^TAx^k-\gamma_2(x^{k+1}-x^k) \tag{20}$$
be the residuals between $Qx^{k+1}$, $A^TAx^{k+1}$ and their linearizations at $x^k$, respectively.
Instead of computing (18), we compute
$$\begin{aligned}
x^{k+1}&=P_K\Big(x^{k+1}-\alpha_1\Big(Qx^{k+1}-A^T\lambda^k-\mu^k+\frac{1}{\beta_1}A^T(Ax^{k+1}-b)+\frac{1}{\beta_2}(x^{k+1}-y^k)-R_1(x^k,x^{k+1})-\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big)\Big)\\
&=P_K\Big(x^{k+1}-\alpha_1\Big(\Big(\frac{1}{\beta_2}+\gamma_1+\frac{\gamma_2}{\beta_1}\Big)x^{k+1}-A^T\lambda^k-\mu^k-\frac{1}{\beta_1}A^Tb-\frac{1}{\beta_2}y^k+Qx^k-\gamma_1x^k+\frac{1}{\beta_1}A^TAx^k-\frac{\gamma_2}{\beta_1}x^k\Big)\Big).
\end{aligned} \tag{21}$$
We choose $\gamma_1,\gamma_2$ so that $\gamma_1>\lambda_{\max}(Q)$ and $\gamma_2>\lambda_{\max}(A^TA)$, where $\lambda_{\max}(Q)$ and $\lambda_{\max}(A^TA)$ are the largest eigenvalues of $Q$ and $A^TA$, respectively.
Setting
$$\alpha_1=\Big(\frac{1}{\beta_2}+\gamma_1+\frac{\gamma_2}{\beta_1}\Big)^{-1} \tag{22}$$
in (21), we obtain
$$x^{k+1}=P_K\Big(\alpha_1\Big(A^T\lambda^k+\mu^k+\frac{1}{\beta_1}A^Tb+\frac{1}{\beta_2}y^k-Qx^k+\gamma_1x^k-\frac{1}{\beta_1}A^TAx^k+\frac{\gamma_2}{\beta_1}x^k\Big)\Big), \tag{23}$$
which is used as an approximation to the solution of variational inequality (13).
Letting $\alpha_2=\beta_2$ in (19), we have
$$y^{k+1}=P_\Omega\big(x^{k+1}-\alpha_2(c+\mu^k)\big). \tag{24}$$
In summary, the modified alternating direction method is given as follows.
3.2. The Modified Alternating Direction Method
Given $x^0\in K$, $y^0\in\Omega$, $\lambda^0\in\mathbb{R}^m$, $\mu^0\in\mathbb{R}^n$, and $\beta_1,\beta_2>0$, for $k=0,1,2,\dots$, perform the following steps.
Step 1.
Consider $(x^k,y^k,\lambda^k,\mu^k)\to(x^{k+1},y^k,\lambda^k,\mu^k)$; we compute
$$x^{k+1}=P_K\Big(\Big(\frac{1}{\beta_2}+\gamma_1+\frac{\gamma_2}{\beta_1}\Big)^{-1}\Big(A^T\lambda^k+\mu^k+\frac{1}{\beta_1}A^Tb+\frac{1}{\beta_2}y^k-Qx^k+\gamma_1x^k-\frac{1}{\beta_1}A^TAx^k+\frac{\gamma_2}{\beta_1}x^k\Big)\Big). \tag{25}$$
Step 2.
Consider $(x^{k+1},y^k,\lambda^k,\mu^k)\to(x^{k+1},y^{k+1},\lambda^k,\mu^k)$; we compute
$$y^{k+1}=P_\Omega\big(x^{k+1}-\beta_2(c+\mu^k)\big). \tag{26}$$
Step 3.
Consider $(x^{k+1},y^{k+1},\lambda^k,\mu^k)\to(x^{k+1},y^{k+1},\lambda^{k+1},\mu^k)$; update the Lagrange multiplier by
$$\lambda^{k+1}=\lambda^k-\frac{1}{\beta_1}(Ax^{k+1}-b). \tag{27}$$
Step 4.
Consider $(x^{k+1},y^{k+1},\lambda^{k+1},\mu^k)\to(x^{k+1},y^{k+1},\lambda^{k+1},\mu^{k+1})$; update the Lagrange multiplier by
$$\mu^{k+1}=\mu^k-\frac{1}{\beta_2}(x^{k+1}-y^{k+1}). \tag{28}$$
From Steps 1 and 2, the modified alternating direction method only needs the metric projections of vectors onto $K$ and $\Omega$. From Steps 3 and 4, $1/\beta_1$ and $1/\beta_2$ can be interpreted as dual stepsizes. Therefore, each iteration of our method is simple and fast.
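The four projection-based steps above can be sketched in a few lines. The following is a minimal pure-Python illustration (not the authors' MATLAB code), on a toy instance in which every second-order cone is one-dimensional, so that $K$ reduces to the nonnegative orthant and $P_K$ is a componentwise max with zero; the matrices, vectors, and parameter values are made up for the example:

```python
def madm(Q, A, c, b, l, u, gamma1, gamma2, beta1=0.8, beta2=0.8, iters=5000):
    """Modified alternating direction method, Steps 1-4, in the special case
    K = R^n_+ (all cones one-dimensional), so P_K(v) = max(v, 0) componentwise."""
    m, n = len(A), len(Q)
    mv = lambda M, v: [sum(r[j] * v[j] for j in range(len(v))) for r in M]       # M v
    mtv = lambda M, v: [sum(M[i][j] * v[i] for i in range(len(M)))
                        for j in range(len(M[0]))]                               # M^T v
    alpha1 = 1.0 / (1.0 / beta2 + gamma1 + gamma2 / beta1)   # stepsize (22)
    Atb = mtv(A, b)
    x, y, lam, mu = [1.0] * n, [1.0] * n, [1.0] * m, [1.0] * n
    for _ in range(iters):
        Atlam, Qx, AtAx = mtv(A, lam), mv(Q, x), mtv(A, mv(A, x))
        # Step 1, formula (25): projection onto K (here: onto R^n_+)
        x = [max(0.0, alpha1 * (Atlam[i] + mu[i] + Atb[i] / beta1 + y[i] / beta2
                                - Qx[i] + gamma1 * x[i] - AtAx[i] / beta1
                                + gamma2 * x[i] / beta1)) for i in range(n)]
        # Step 2, formula (26): projection onto the box [l, u]
        y = [max(l[i], min(x[i] - beta2 * (c[i] + mu[i]), u[i])) for i in range(n)]
        # Steps 3-4, formulas (27)-(28): dual updates with stepsizes 1/beta1, 1/beta2
        Ax = mv(A, x)
        lam = [lam[j] - (Ax[j] - b[j]) / beta1 for j in range(m)]
        mu = [mu[i] - (x[i] - y[i]) / beta2 for i in range(n)]
    return x

# Toy instance: min x1^2 + 0.5*x2^2 - x1 - x2  s.t.  x1 + x2 = 1,  0 <= x <= 1;
# the unique minimizer is x* = (1/3, 2/3).
Q = [[2.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]
x = madm(Q, A, c=[-1.0, -1.0], b=[1.0], l=[0.0, 0.0], u=[1.0, 1.0],
         gamma1=2.001, gamma2=2.001)   # gamma_i slightly above the largest eigenvalues
```

Here $\lambda_{\max}(Q)=\lambda_{\max}(A^TA)=2$, so `gamma1 = gamma2 = 2.001` satisfies the strict inequalities required after (21).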
4. The Convergence Result
In this section, we extend and modify the convergence results of the alternating direction method for convex quadratically constrained quadratic semidefinite programs in [15], and give a convergence analysis of the modified alternating direction method for CQSOCP problems with bounded constraints.
Lemma 3.
The sequence $\{(x^k,y^k,\lambda^k,\mu^k)\}$ generated by the modified alternating direction method satisfies
$$\Big\langle x^{k+1}-x^*,\ R_1(x^k,x^{k+1})+\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big\rangle+\frac{1}{\beta_2}\langle y^{k+1}-y^*,\ y^k-y^{k+1}\rangle+\beta_1\langle\lambda^{k+1}-\lambda^*,\ \lambda^k-\lambda^{k+1}\rangle+\beta_2\langle\mu^{k+1}-\mu^*,\ \mu^k-\mu^{k+1}\rangle\ge 0, \tag{29}$$
where $(x^*,y^*,\lambda^*,\mu^*)$ is a KKT point of system (11).
Proof.
Let $y=y^{k+1}$ in the second inequality of system (11); we have
$$\langle y^{k+1}-y^*,\ c+\mu^*\rangle\ge 0. \tag{30}$$
Let $y=y^*$ in (14); combined with (16), we have
$$\langle y^*-y^{k+1},\ c+\mu^{k+1}\rangle\ge 0. \tag{31}$$
Adding (30) and (31), we have
$$\langle y^{k+1}-y^*,\ \mu^*-\mu^{k+1}\rangle\ge 0. \tag{32}$$
In addition, from (14) and (16), we have
$$\langle y^k-y^{k+1},\ c+\mu^{k+1}\rangle\ge 0,\qquad \langle y^{k+1}-y^k,\ c+\mu^k\rangle\ge 0. \tag{33}$$
Adding the two inequalities above, we have
$$\langle y^{k+1}-y^k,\ \mu^k-\mu^{k+1}\rangle\ge 0. \tag{34}$$
Note that (21) can be written equivalently as
$$\Big\langle x-x^{k+1},\ Qx^{k+1}-A^T\lambda^{k+1}-\mu^{k+1}+\frac{1}{\beta_2}(y^{k+1}-y^k)-R_1(x^k,x^{k+1})-\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big\rangle\ge 0,\quad\forall x\in K. \tag{35}$$
Setting $x=x^*$, we have
$$\Big\langle x^*-x^{k+1},\ Qx^{k+1}-A^T\lambda^{k+1}-\mu^{k+1}+\frac{1}{\beta_2}(y^{k+1}-y^k)-R_1(x^k,x^{k+1})-\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big\rangle\ge 0. \tag{36}$$
Let $x=x^{k+1}$ in the first inequality of system (11); we have
$$\langle x^{k+1}-x^*,\ Qx^*-A^T\lambda^*-\mu^*\rangle\ge 0. \tag{37}$$
Adding (36) and (37), we have
$$\langle x^{k+1}-x^*,\ A^T(\lambda^{k+1}-\lambda^*)\rangle+\langle x^{k+1}-x^*,\ \mu^{k+1}-\mu^*\rangle+\Big\langle x^{k+1}-x^*,\ \frac{1}{\beta_2}(y^k-y^{k+1})\Big\rangle+\Big\langle x^{k+1}-x^*,\ R_1(x^k,x^{k+1})+\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big\rangle\ge\langle x^{k+1}-x^*,\ Q(x^{k+1}-x^*)\rangle\ge 0. \tag{38}$$
From the first term on the left side of (38) and the third equation of system (11), we have
$$\langle x^{k+1}-x^*,\ A^T(\lambda^{k+1}-\lambda^*)\rangle=\langle Ax^{k+1}-Ax^*,\ \lambda^{k+1}-\lambda^*\rangle=\langle Ax^{k+1}-b,\ \lambda^{k+1}-\lambda^*\rangle=\beta_1\langle\lambda^{k+1}-\lambda^*,\ \lambda^k-\lambda^{k+1}\rangle. \tag{39}$$
From (16), the last equation of system (11), and the second term on the left side of (38), we have
$$\langle x^{k+1}-x^*,\ \mu^{k+1}-\mu^*\rangle+\langle y^{k+1}-y^*,\ \mu^*-\mu^{k+1}\rangle=\langle\mu^{k+1}-\mu^*,\ x^{k+1}-x^*-y^{k+1}+y^*\rangle=\langle\mu^{k+1}-\mu^*,\ x^{k+1}-y^{k+1}\rangle=\beta_2\langle\mu^{k+1}-\mu^*,\ \mu^k-\mu^{k+1}\rangle. \tag{40}$$
In addition, for the third term on the left side of (38), using $x^*=y^*$ and (16), we have
$$\frac{1}{\beta_2}\langle x^{k+1}-x^*,\ y^k-y^{k+1}\rangle=\frac{1}{\beta_2}\langle y^{k+1}-y^*,\ y^k-y^{k+1}\rangle+\frac{1}{\beta_2}\langle x^{k+1}-y^{k+1},\ y^k-y^{k+1}\rangle=\frac{1}{\beta_2}\langle y^{k+1}-y^*,\ y^k-y^{k+1}\rangle-\langle y^{k+1}-y^k,\ \mu^k-\mu^{k+1}\rangle. \tag{41}$$
It follows from (32)–(34) and (38)–(41) that
$$\Big\langle x^{k+1}-x^*,\ R_1(x^k,x^{k+1})+\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big\rangle+\frac{1}{\beta_2}\langle y^{k+1}-y^*,\ y^k-y^{k+1}\rangle+\beta_1\langle\lambda^{k+1}-\lambda^*,\ \lambda^k-\lambda^{k+1}\rangle+\beta_2\langle\mu^{k+1}-\mu^*,\ \mu^k-\mu^{k+1}\rangle\ge 0. \tag{42}$$
Now we give the convergence result.
Theorem 4.
The sequence {xk} generated by the modified alternating direction method converges to a solution point x∗ of problem (9).
Proof.
We denote
$$w=\begin{pmatrix}x\\y\\\lambda\\\mu\end{pmatrix},\qquad G=\begin{pmatrix}\gamma_1I_n-Q+\frac{1}{\beta_1}(\gamma_2I_n-A^TA)&0&0&0\\0&\frac{1}{\beta_2}I_n&0&0\\0&0&\beta_1I_m&0\\0&0&0&\beta_2I_n\end{pmatrix}, \tag{43}$$
where $I_n$ denotes the $n\times n$ identity matrix; $G$ is positive definite since $\gamma_1>\lambda_{\max}(Q)$ and $\gamma_2>\lambda_{\max}(A^TA)$. We define the $G$-inner product of $w$ and $\bar w$ as
$$\langle w,\bar w\rangle_G=\Big\langle x,\ \Big(\gamma_1I_n-Q+\frac{1}{\beta_1}(\gamma_2I_n-A^TA)\Big)\bar x\Big\rangle+\frac{1}{\beta_2}\langle y,\bar y\rangle+\beta_1\langle\lambda,\bar\lambda\rangle+\beta_2\langle\mu,\bar\mu\rangle \tag{44}$$
and the associated $G$-norm as
$$\|w\|_G=\Big(\|x\|_M^2+\frac{1}{\beta_2}\|y\|_2^2+\beta_1\|\lambda\|_2^2+\beta_2\|\mu\|_2^2\Big)^{1/2}, \tag{45}$$
where $\|x\|_M^2=x^TMx$ with $M=\gamma_1I_n-Q+\frac{1}{\beta_1}(\gamma_2I_n-A^TA)$.
Observe that, by Lemma 2, solving the optimality condition (11) for problem (9) is equivalent to finding a zero of the residual function
$$e(w)=\begin{pmatrix}x-P_K\big(x-\alpha_1(Qx-A^T\lambda-\mu)\big)\\ y-P_\Omega\big(y-\alpha_2(c+\mu)\big)\\ Ax-b\\ x-y\end{pmatrix}. \tag{46}$$
From (15), (16), and (21), we have
$$x^{k+1}=P_K\Big(x^{k+1}-\alpha_1\Big(Qx^{k+1}-A^T\lambda^{k+1}-\mu^{k+1}+\frac{1}{\beta_2}(y^{k+1}-y^k)-R_1(x^k,x^{k+1})-\frac{1}{\beta_1}R_2(x^k,x^{k+1})\Big)\Big). \tag{47}$$
From (19) and (16), we have
$$y^{k+1}=P_\Omega\Big(y^{k+1}-\alpha_2\Big(c+\mu^k-\frac{1}{\beta_2}(x^{k+1}-y^{k+1})\Big)\Big)=P_\Omega\big(y^{k+1}-\alpha_2(c+\mu^{k+1})\big). \tag{48}$$
Based on (47)-(48), (15)-(16), and the nonexpansiveness of the projection operator, we have
$$\|e(w^{k+1})\|_2\le\left\|\begin{pmatrix}\frac{\alpha_1}{\beta_2}(y^k-y^{k+1})+\alpha_1R_1(x^k,x^{k+1})+\frac{\alpha_1}{\beta_1}R_2(x^k,x^{k+1})\\0\\\beta_1(\lambda^k-\lambda^{k+1})\\\beta_2(\mu^k-\mu^{k+1})\end{pmatrix}\right\|_2\le\left\|\begin{pmatrix}\alpha_1R_1(x^k,x^{k+1})+\frac{\alpha_1}{\beta_1}R_2(x^k,x^{k+1})\\0\\\beta_1(\lambda^k-\lambda^{k+1})\\\beta_2(\mu^k-\mu^{k+1})\end{pmatrix}\right\|_2+\left\|\begin{pmatrix}\frac{\alpha_1}{\beta_2}(y^k-y^{k+1})\\0\\0\\0\end{pmatrix}\right\|_2\le\delta\|w^k-w^{k+1}\|_G, \tag{49}$$
where $\delta$ is a positive constant depending on $\alpha_1,\beta_1,\beta_2,\gamma_1,\gamma_2$ and the largest eigenvalues of $Q$ and $A^TA$; for example,
$$\delta=\max\left\{\beta_1,\ \beta_2,\ \alpha_1\sqrt{2\beta_2},\ \alpha_1\sqrt{2\lambda_{\max}\Big(\gamma_1I_n-Q+\frac{1}{\beta_1}(\gamma_2I_n-A^TA)\Big)}\right\}. \tag{50}$$
From Lemma 3, we can write (29) as
$$\langle w^{k+1}-w^*,\ w^k-w^{k+1}\rangle_G\ge 0, \tag{51}$$
which implies that
$$\langle w^k-w^*,\ w^k-w^{k+1}\rangle_G\ge\|w^k-w^{k+1}\|_G^2. \tag{52}$$
Thus
$$\|w^{k+1}-w^*\|_G^2=\|(w^k-w^*)-(w^k-w^{k+1})\|_G^2=\|w^k-w^*\|_G^2-2\langle w^k-w^*,\ w^k-w^{k+1}\rangle_G+\|w^k-w^{k+1}\|_G^2\le\|w^k-w^*\|_G^2-\|w^k-w^{k+1}\|_G^2\le\|w^k-w^*\|_G^2-\frac{1}{\delta^2}\|e(w^{k+1})\|_2^2. \tag{53}$$
From the above inequality, we have
$$\|w^{k+1}-w^*\|_G^2\le\|w^k-w^*\|_G^2,\quad k=1,2,\dots. \tag{54}$$
That is, the sequence $\{w^k\}$ is bounded. Thus there exists at least one cluster point of $\{w^k\}$.
It also follows from (53) that
$$\sum_{k=0}^{\infty}\frac{1}{\delta^2}\|e(w^{k+1})\|_2^2<+\infty, \tag{55}$$
and thus
$$\lim_{k\to\infty}\|e(w^{k+1})\|_2=0. \tag{56}$$
Let $\bar w$ be a cluster point of $\{w^k\}$ and let the subsequence $\{w^{k_j}\}$ converge to $\bar w$. By continuity of $e$, we have
$$\|e(\bar w)\|_2=\lim_{j\to\infty}\|e(w^{k_j})\|_2=0, \tag{57}$$
so $\bar w$ satisfies system (11). Setting $w^*=\bar w$ in (54), we have
$$\|w^{k+1}-\bar w\|_G\le\|w^k-\bar w\|_G. \tag{58}$$
Hence the whole sequence satisfies
$$\lim_{k\to\infty}w^k=\bar w. \tag{59}$$
5. Simulation Experiments
In this section we present computational results comparing the modified alternating direction method with an interior-point method. The interior-point method is used to solve the transformed mixed linear and second-order cone programming problems (3). All algorithms are run in the MATLAB 7.0 environment on a personal computer with an Intel Core 1.80 GHz processor and 2.00 GB of RAM.
The test problems are generated randomly as follows:
Given the values of $n$, $m$, $N$, and $n_i$, $i=1,2,\dots,N$, with $\sum_{i=1}^N n_i=n$.
Generate a random matrix $\widetilde Q\in\mathbb{R}^{n\times n}$ and set $Q=\widetilde Q^T\widetilde Q$. At the same time, generate a random matrix $A\in\mathbb{R}^{m\times n}$ with full row rank.
Set $l=-e$ and $u=e$, where $e$ is the vector whose components are all ones.
Given $x=[x_1,x_2,\dots,x_N]\in\mathbb{R}^{n_1}\times\cdots\times\mathbb{R}^{n_N}$, generate each random vector $x_i$ within $[l_i,u_i]$ and make it an interior point of the second-order cone $K^{n_i}$, $i=1,2,\dots,N$.
Obtain $b$ by computing $b=Ax$.
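The recipe above can be sketched as follows. This is a simplified illustration (the function name and signature are ours, not the paper's), using a single $n$-dimensional second-order cone rather than a product of cones:

```python
import math
import random

def make_instance(n, m, seed=0):
    """Random CQSOCP test data following the recipe above (simplified to a
    single n-dimensional cone; an illustrative sketch, not the paper's code)."""
    rng = random.Random(seed)
    # Q = Qt^T Qt is symmetric positive semidefinite by construction
    Qt = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
    Q = [[sum(Qt[k][i] * Qt[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    A = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]
    l, u = [-1.0] * n, [1.0] * n                     # l = -e, u = e
    # x = (x1; x0): small vector part so ||x1||_2 < 0.5 < x0 = 0.9,
    # hence x is interior to K^n and to [l, u]
    s = 0.5 / math.sqrt(n)
    x = [rng.uniform(-s, s) for _ in range(n - 1)] + [0.9]
    b = [sum(Ai[j] * x[j] for j in range(n)) for Ai in A]   # b = A x
    return Q, A, b, l, u, x
```

With this construction the point $x$ is feasible for (1) by design, so the generated problem always has a nonempty feasible set.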
The first set of test problems includes 16 small scale CQSOCP problems with bounded constraints, shown in Table 1. In Tables 1 and 3, an entry of the form "20×5" in the "SOC" column means that there are 20 five-dimensional second-order cones, and "Ratio" denotes the ratio between the number of second-order cones and the value of n.
Table 1: The test problems with small scale.

| Problems | m | n | SOC | Ratio |
|---|---|---|---|---|
| P01 | 40 | 100 | 1×100 | 1.00% |
| P02 | 40 | 100 | 1×40; 20×3 | 21.00% |
| P03 | 40 | 100 | 20×5 | 20.00% |
| P04 | 40 | 100 | 1×4; 32×3 | 33.00% |
| P05 | 120 | 200 | 1×200 | 0.50% |
| P06 | 120 | 200 | 1×100; 1×4; 32×3 | 17.00% |
| P07 | 120 | 200 | 40×5 | 20.00% |
| P08 | 120 | 200 | 1×5; 65×3 | 33.00% |
| P09 | 200 | 400 | 1×400 | 0.25% |
| P10 | 200 | 400 | 1×200; 1×5; 65×3 | 16.75% |
| P11 | 200 | 400 | 80×5 | 20.00% |
| P12 | 200 | 400 | 1×4; 132×3 | 33.25% |
| P13 | 300 | 600 | 1×600 | 0.16% |
| P14 | 300 | 600 | 1×400; 1×5; 65×3 | 11.16% |
| P15 | 300 | 600 | 120×5 | 20.00% |
| P16 | 300 | 600 | 200×3 | 33.33% |
As is well known, interior-point methods are among the most efficient classes of methods for SOCP. Here the MATLAB codes for the interior-point method come from the software package SeDuMi [4]. In SeDuMi, we set the desired accuracy parameter pars.eps = $10^{-6}$.
Let $\Delta f_k=f(x^k)-f(x^{k-1})$, where $f(x)=x^TQx+c^Tx$. In the alternating direction method, we stop the algorithm when
$$\max\big\{\|x^k-x^{k-1}\|_2,\ \|y^k-y^{k-1}\|_2,\ \|\lambda^k-\lambda^{k-1}\|_2,\ \|\mu^k-\mu^{k-1}\|_2,\ \Delta f_k\big\}\le\epsilon \tag{60}$$
for $\epsilon>0$. Here we set $\beta_1=0.8$, $\beta_2=0.8$, $\gamma_1=\lambda_{\max}(Q)+0.0001$, $\gamma_2=\lambda_{\max}(A^TA)+0.0001$, and $\epsilon=10^{-6}$. We choose the initial point $x^0=e_n$, $y^0=e_n$, $\lambda^0=e_m$, and $\mu^0=e_n$, where $e_n$ is the $n$-dimensional vector of ones.
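Criterion (60) can be written as a small helper. The dict-based interface below is hypothetical (our own convention, not the paper's), and we take the absolute value of $\Delta f_k$, an assumption made so that a large decrease in $f$ does not satisfy the test trivially:

```python
import math

def stop_test(prev, curr, eps, Q, c):
    """Stopping criterion (60); 'prev' and 'curr' are dicts with keys
    'x', 'y', 'lam', 'mu' holding the previous and current iterates."""
    def f(x):  # f(x) = x^T Q x + c^T x, as in the text
        Qx = [sum(Qi[j] * x[j] for j in range(len(x))) for Qi in Q]
        return (sum(xi * qi for xi, qi in zip(x, Qx))
                + sum(ci * xi for ci, xi in zip(c, x)))
    def dist(a, b):  # Euclidean distance between successive iterates
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    gaps = [dist(prev[k], curr[k]) for k in ('x', 'y', 'lam', 'mu')]
    gaps.append(abs(f(curr['x']) - f(prev['x'])))   # |Delta f_k| (assumed)
    return max(gaps) <= eps
```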
For the first set of test problems, the iteration number and average CPU time are used to evaluate the performance of the modified alternating direction method and the interior-point method in SeDuMi. The test results are shown in Table 2. In Tables 2 and 4, "Time" represents the average CPU time (in seconds), and "Iter." denotes the average number of iterations. In addition, "MADM" represents the modified alternating direction method. In Table 4, "/" indicates that the method ran out of memory on our personal computer.
Table 2: The results for the test problems with small scale.

| Problems | MADM Iter. | MADM Time (s) | MADM Value | SeDuMi Iter. | SeDuMi Time (s) | SeDuMi Value |
|---|---|---|---|---|---|---|
| P01 | 216 | 0.14 | 6.8296723 | 9 | 0.34 | 6.8296742 |
| P02 | 245 | 0.25 | 71.9788074 | 9 | 0.31 | 71.9788099 |
| P03 | 264 | 0.30 | 76.2533942 | 11 | 0.39 | 76.2533927 |
| P04 | 261 | 0.34 | 143.5300081 | 11 | 0.44 | 143.5300023 |
| P05 | 269 | 0.33 | 47.1664568 | 10 | 0.81 | 47.1664516 |
| P06 | 271 | 0.56 | 164.0914031 | 12 | 1.16 | 164.0914052 |
| P07 | 293 | 0.66 | 467.8962158 | 13 | 1.28 | 467.8962123 |
| P08 | 320 | 0.89 | 671.0620751 | 13 | 1.34 | 671.0620812 |
| P09 | 322 | 1.53 | 71.2857981 | 12 | 5.19 | 71.2857926 |
| P10 | 337 | 2.28 | 85.4731057 | 13 | 7.41 | 85.4731061 |
| P11 | 304 | 2.14 | 1085.3976583 | 15 | 7.94 | 1085.3976475 |
| P12 | 316 | 2.62 | 2256.4765633 | 16 | 10.11 | 2256.4765646 |
| P13 | 351 | 4.17 | 89.3228874 | 11 | 15.03 | 89.3228832 |
| P14 | 377 | 5.14 | 113.5586948 | 12 | 16.16 | 113.5586923 |
| P15 | 327 | 5.13 | 2198.2136742 | 16 | 27.53 | 2198.2136739 |
| P16 | 326 | 5.81 | 2727.7797204 | 16 | 32.67 | 2727.7797233 |
Table 3: The test problems with medium scale.

| Problems | m | n | SOC | Ratio |
|---|---|---|---|---|
| P21 | 400 | 1000 | 100×10 | 10.00% |
| P22 | 400 | 1000 | 1×200; 160×5 | 16.10% |
| P23 | 400 | 1000 | 1×4; 332×3 | 33.30% |
| P24 | 600 | 2000 | 50×40 | 2.50% |
| P25 | 600 | 2000 | 1×400; 1×4; 532×3 | 26.70% |
| P26 | 600 | 2000 | 1×5; 665×3 | 33.33% |
| P27 | 800 | 3000 | 100×30 | 3.33% |
| P28 | 800 | 3000 | 1×600; 800×3 | 26.70% |
| P29 | 800 | 3000 | 1000×3 | 33.33% |
| P30 | 1000 | 4000 | 100×40 | 2.50% |
| P31 | 1000 | 4000 | 1×200; 760×5 | 19.02% |
| P32 | 1000 | 4000 | 1×4; 1332×3 | 33.32% |
| P33 | 2000 | 5000 | 100×50 | 2.00% |
| P34 | 2000 | 5000 | 1×400; 920×5 | 18.42% |
| P45 | 2000 | 5000 | 1×5; 1665×3 | 33.32% |
Table 4: The results for the test problems with medium scale.

| Problems | MADM Iter. | MADM Time (s) | MADM Value | SeDuMi Iter. | SeDuMi Time (s) | SeDuMi Value |
|---|---|---|---|---|---|---|
| P21 | 326 | 10.88 | 1382.4709177 | 17 | 136.73 | 1382.4709168 |
| P22 | 330 | 11.61 | 206.0023854 | 15 | 130.14 | 206.0023835 |
| P23 | 378 | 16.78 | 7714.2171166 | 17 | 178.88 | 7714.2171243 |
| P24 | 429 | 58.73 | 1678.7355144 | 19 | 1013.98 | 1678.7355313 |
| P25 | 449 | 65.80 | 312.2865058 | 18 | 1596.53 | 312.2865057 |
| P26 | 520 | 64.86 | 26103.1426757 | 19 | 1640.63 | 26103.1426826 |
| P27 | 448 | 109.83 | 2882.0125678 | / | / | / |
| P28 | 536 | 141.35 | 414.5181029 | / | / | / |
| P29 | 566 | 150.15 | 35810.2685515 | / | / | / |
| P30 | 425 | 190.92 | 6190.8005238 | / | / | / |
| P31 | 506 | 239.69 | 801.0388729 | / | / | / |
| P32 | 599 | 289.32 | 53966.4593065 | / | / | / |
| P33 | 348 | 273.73 | 10471.3329318 | / | / | / |
| P34 | 383 | 310.31 | 1145.2045565 | / | / | / |
| P45 | 507 | 413.41 | 170350.7785126 | / | / | / |
Table 2 shows that the modified alternating direction method costs less CPU time than the interior-point method in SeDuMi, although the interior-point method needs fewer iterations.
In addition, Table 1 contains different kinds of test problems: problems with only one large second-order cone (P01, P05, P09, P13), problems with many small second-order cones (P04, P08, P12, P16), and problems with one large second-order cone together with some small ones (P02, P06, P10, P14). The results in Table 2 show that the modified alternating direction method can solve these different kinds of convex quadratic second-order cone programming problems with appropriate CPU time and accuracy.
The second set of test problems includes 15 medium scale problems, which are shown in Table 3. For this second set, the test results are shown in Table 4.
The results in Table 4 show that the interior-point method in SeDuMi runs out of memory on our personal computer for the transformed problem (3) when n > 2000, whereas the modified alternating direction method remains efficient, since it requires much less memory than the interior-point method.
In addition, we report results for P04 and P12 with a smaller stopping tolerance and with random initial points. The smaller tolerance for our method is $10^{-10}$, and we perform one hundred runs with random initial points. The test results are shown in Table 5. In SeDuMi, we set the desired accuracy parameter pars.eps = $10^{-8}$.
Table 5: The results with the smaller stopping tolerance and with random initial points.

| Problems | MADM Iter. | MADM Time (s) | MADM Value | SeDuMi Iter. | SeDuMi Time (s) | SeDuMi Value |
|---|---|---|---|---|---|---|
| P04 ($10^{-6}$, random point) | 275 | 0.37 | 169.24049123 | 21 | 0.88 | 169.24049003 |
| P04 ($10^{-6}$, fixed point) | 287 | 0.41 | 169.24049234 | 21 | 0.88 | 169.24049003 |
| P04 ($10^{-10}$, random point) | 485 | 0.62 | 169.24049007 | 21 | 0.88 | 169.24049003 |
| P04 ($10^{-10}$, fixed point) | 494 | 0.67 | 169.24049007 | 21 | 0.88 | 169.24049003 |
| P12 ($10^{-6}$, random point) | 355 | 2.55 | 1904.587401 | 27 | 20.52 | 1904.587416 |
| P12 ($10^{-6}$, fixed point) | 367 | 2.95 | 1904.587402 | 27 | 20.52 | 1904.587416 |
| P12 ($10^{-10}$, random point) | 628 | 4.37 | 1904.587409 | 27 | 20.52 | 1904.587416 |
| P12 ($10^{-10}$, fixed point) | 642 | 5.20 | 1904.587408 | 27 | 20.52 | 1904.587416 |
Table 5 shows that the performance of MADM with random initial points is slightly better than that of MADM with fixed initial points under both stopping criteria. In addition, MADM with $\epsilon=10^{-10}$ needs more iterations and more CPU time than MADM with $\epsilon=10^{-6}$.
6. Conclusion
In this paper, a modified alternating direction method is proposed for solving convex quadratic second-order cone programming problems with bounded constraints. The proposed method does not require solving subvariational inequality problems over the second-order cones and the bounded set. At each iteration, we only need to compute the metric projection onto the second-order cones and the projection onto the bounded set. The method does not require second-order information and is easy to implement. The random simulation results show that our method can efficiently solve convex quadratic second-order cone programming problems with vector size up to 5000 within reasonable time and accuracy on a desktop computer.
Disclosure
This work was conducted while Xuewen Mu was visiting the Department of Mathematics, Ohio University.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The work is supported by China Scholarship Council (CSC). This work was also supported by the National Science Foundations for Young Scientists of China (11101320, 61201297) and the Fundamental Research Funds for the Central Universities (JB150713).
References

[1] X. Zhang, Z. Liu, and S. Liu, "A trust region SQP-filter method for nonlinear second-order cone programming."
[2] H. Kato and M. Fukushima, "An SQP-type algorithm for nonlinear second-order cone programs."
[3] X. Y. Zhao, "A Semismooth Newton-CG Augmented Lagrangian Method for Semidefinite Programming," Ph.D. thesis.
[4] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones."
[5] R. H. Tütüncü, K. C. Toh, and M. J. Todd, "Solving semidefinite-quadratic-linear programs using SDPT3."
[6] S. H. Schmieta and F. Alizadeh, "Associative and Jordan algebras, and polynomial time interior-point algorithms for symmetric cones."
[7] S. H. Schmieta and F. Alizadeh, "Extension of primal-dual interior point algorithms to symmetric cones."
[8] R. D. Monteiro and T. Tsuchiya, "Polynomial convergence of primal-dual algorithms for the second-order cone program based on the MZ-family of directions."
[9] J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators."
[10] Z. Yu, "Solving semidefinite programming problems via alternating direction methods."
[11] J. Malick, J. Povh, F. Rendl, and A. Wiegele, "Regularization methods for semidefinite programming."
[12] P. Tseng, "Alternating projection-proximal methods for convex programming and variational inequalities."
[13] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction."
[14] J. Yang, Y. Zhang, and W. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise."
[15] J. Sun and S. Zhang, "A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs."
[16] Z. Wen, D. Goldfarb, and W. Yin, "Alternating direction augmented Lagrangian methods for semidefinite programming."
[17] J. Faraut and A. Korányi, Analysis on Symmetric Cones.
[18] J. V. Outrata and D. Sun, "On the coderivative of the projection operator onto the second-order cone."
[19] L. C. Kong, L. Tunçel, and N. H. Xiu, "Clarke generalized Jacobian of the projection onto symmetric cones."
[20] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications.