Based on the semidefinite programming relaxation of binary quadratic programming, a rank-two feasible direction algorithm is presented. The proposed algorithm restricts the rank of the matrix variable in the semidefinite programming relaxation to two, which yields a quadratic objective function with simple quadratic constraints. A feasible direction algorithm is used to solve the resulting nonlinear program, and its convergence analysis and time complexity are given. Coupled with a randomized algorithm, a suboptimal solution is obtained for the binary quadratic program. Finally, we report numerical examples comparing our algorithm with the randomized algorithm based on the interior point method and with the feasible direction algorithm on the max-cut problem. Simulation results show that our method is faster than the other two methods.
1. Introduction
In this paper, we consider the following binary quadratic programming:
(1) \min\; x^T Q x + 2 r^T x \quad \text{s.t.}\; x_i^2 = 1, \; i = 1, \dots, m,
where Q is a real m×m symmetric matrix and r is a real m-dimensional column vector.
Binary quadratic programming is a fundamental problem in optimization theory and practice. Many combinatorial optimization and engineering problems can be modeled as binary quadratic programs, such as VLSI design, statistical physics, the max-cut problem [1], optimal multiuser detection [2–5], image processing [6], and the design of FIR filters with discrete coefficients [7]. These problems are known to be NP-hard [1]. One typical approach is to construct lower bounds that approximate the optimal value. The semidefinite programming (SDP) relaxation approach has been studied extensively and proven quite powerful for finding approximate optimal solutions. Based on solving the SDP relaxation, Goemans and Williamson [8] developed a randomized algorithm for the max-cut problem which provides an approximate solution guaranteed to be within a factor of 0.87856 of the optimal value. The interior point method is powerful for SDPs of small and moderate scale, but it cannot solve large-scale SDPs efficiently [1]. Hence Goemans and Williamson's method based on the interior point method is not suited to large-scale max-cut problems.
Several efficient nonlinear programming algorithms based only on gradients have been developed for the SDP relaxation of the max-cut problem. Homer and Peinado [9] proposed parallel and distributed approximation algorithms for the max-cut problem; they transformed the max-cut SDP relaxation into a constrained nonlinear program in a new variable V through the change of variables X = VV^T, V ∈ R^{n×n}, where X is the primal matrix variable of the SDP relaxation. Burer and Monteiro [10] proposed a projected gradient algorithm for solving the max-cut SDP relaxation via the change of variable X = LL^T, where L is a lower triangular matrix. The rank-two relaxation heuristic in [11] relaxes the max-cut problem into an unconstrained optimization problem by replacing each binary variable with a unit vector in R^2 and then using polar coordinates. In [12], a rank-two SDP relaxation model is proposed for the maximal independent set problem. Based on the low-rank decomposition of the semidefinite matrix, Liu et al. [13] proposed a feasible direction method for a nonlinear programming model of binary quadratic programming.
In this paper, we propose a rank-two feasible direction method for binary quadratic programming. We restrict the rank of the matrix variable in the semidefinite programming relaxation to two and obtain a quadratic objective function with simple quadratic constraints. A feasible direction method is used to solve the resulting nonlinear program, and we analyze its convergence and complexity. A randomized algorithm is then used to obtain a suboptimal solution of the binary quadratic program. Finally, we compare our method with the randomized algorithm based on the interior point method and with the feasible direction method on the max-cut problem. Simulation results show that our method costs less CPU time than both.
2. The SDP Relaxation Method for Binary Quadratic Programming
In this section, we introduce the SDP relaxation of binary quadratic programming problem [1].
In problem (1), let n = m + 1, z = [x^T, x_n]^T with x_n = 1, and
C_1 = \begin{pmatrix} Q & r \\ r^T & 0 \end{pmatrix};
then problem (1) can be formulated as
(2) \min\; z^T C_1 z \quad \text{s.t.}\; z \in \{-1, 1\}^n.
It is well known that problem (2) is also NP-hard [1].
Let C_2 = C_1 - (\lambda_{\max}(C_1) + 1) I_n, where \lambda_{\max}(C_1) denotes the largest eigenvalue of the matrix C_1 and I_n denotes the identity matrix; then C_2 is a negative definite matrix. Problem (2) is equivalent to the following problem:
(3) \min\; z^T C_2 z + (\lambda_{\max}(C_1) + 1) n \quad \text{s.t.}\; z \in \{-1, 1\}^n.
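To make the reduction concrete, the lifting (1) → (2) → (3) can be sketched numerically. This is a minimal illustration under our own naming (the helper `lift_problem` is not from the paper):

```python
import numpy as np

def lift_problem(Q, r):
    """Lift min x^T Q x + 2 r^T x with x_i^2 = 1 to the homogeneous
    form (2), min z^T C1 z over z in {-1,1}^n with z = [x; 1], and
    then shift to the negative definite C2 of problem (3)."""
    m = Q.shape[0]
    C1 = np.zeros((m + 1, m + 1))
    C1[:m, :m] = Q          # top-left block Q
    C1[:m, m] = r           # last column r
    C1[m, :m] = r           # last row r^T; the corner entry stays 0
    lam_max = np.linalg.eigvalsh(C1).max()
    C2 = C1 - (lam_max + 1.0) * np.eye(m + 1)
    return C1, C2
```

For any x with x_n = 1 appended, z^T C_1 z reproduces the objective of (1), and the shift changes the objective only by a constant on the feasible set because z^T z = n there.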
Letting Z = zz^T and ignoring the constant term, problem (3) is equivalent to the following problem:
(4) \min\; C_2 \bullet Z \quad \text{s.t.}\; z_{ii} = 1, \; i = 1, \dots, n, \quad \mathrm{rank}(Z) = 1, \quad Z \succeq 0,
where C_2 \bullet Z = \mathrm{tr}(C_2^T Z) and z_{ii} are the diagonal elements of the matrix Z. In addition, Z \succeq 0 denotes that the matrix Z is positive semidefinite. Dropping the nonconvex rank-one constraint gives the SDP relaxation [1]:
(5) \min\; C_2 \bullet Z \quad \text{s.t.}\; z_{ii} = 1, \; i = 1, \dots, n, \quad Z \succeq 0.
Interior point methods have proved quite efficient for small- and moderate-scale SDPs. In the 0.878 randomized method of Goemans and Williamson [8], the SDP relaxation (5) is solved by an interior point method. However, interior point methods are second-order methods; they are time and memory intensive and thus not suited to large-scale binary quadratic problems. The complexity of the primal-dual interior point method based on the AHO search direction for the SDP relaxation (5) of max-cut is O(n^{4.5} \ln(1/\epsilon)) [14, 15].
3. The Rank-Two SDP Relaxation for Binary Quadratic Programming
In [11], the rank-two SDP relaxation model is proposed for max-cut problem based on the polar direction. In [12], the rank-two SDP relaxation model is proposed for maximal independent set problem. In this section, we present a rank-two SDP relaxation based on the rank-two approximate matrix for binary quadratic programming.
In SDP relaxation problem (5), let C=-C2; we have
(6) \max\; C \bullet Z \quad \text{s.t.}\; z_{ii} = 1, \; i = 1, \dots, n, \quad Z \succeq 0.
Obviously, matrix C is positive definite.
Let Z=xxT+yyT, x,y∈Rn [12]; then C•Z=xTCx+yTCy and Zii=xi2+yi2=1. We obtain the rank-two SDP relaxation of binary quadratic programming as follows:
(7) \max\; x^T C x + y^T C y \quad \text{s.t.}\; x_i^2 + y_i^2 = 1, \; i = 1, \dots, n,
where x_i and y_i are the elements of the vectors x and y. Obviously, the matrix Z satisfies rank(Z) = 2 and Z \succeq 0, so problem (7) is a rank-two SDP relaxation problem.
Problem (7) is a nonlinear program with a quadratic objective function and quadratic constraints. Compared with the n^2 variables of the SDP relaxation, the rank-two relaxation has only 2n variables, so this approach scales to large binary quadratic programming problems.
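The substitution Z = xx^T + yy^T behind problem (7) admits a one-line numerical sanity check (a standalone snippet of ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
C = rng.standard_normal((n, n))
C = (C + C.T) / 2.0                       # symmetric C, as in problem (6)
x, y = rng.standard_normal(n), rng.standard_normal(n)
Z = np.outer(x, x) + np.outer(y, y)       # rank-two matrix Z
lhs = np.trace(C.T @ Z)                   # C • Z = tr(C^T Z)
rhs = x @ C @ x + y @ C @ y               # rank-two objective of (7)
assert abs(lhs - rhs) < 1e-9
```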
Let
(8) f(x, y) = x^T C x + y^T C y, \quad h_i(x, y) = x_i^2 + y_i^2 - 1, \; i = 1, 2, \dots, n;
then the gradients of the function f(x,y) and hi(x,y) are
(9) \nabla f(x, y) = (2Cx, 2Cy), \quad \nabla h_i(x, y) = (2 e_i e_i^T x, \, 2 e_i e_i^T y), \; i = 1, 2, \dots, n.
The KKT condition for problem (7) is given here. If the variable (x,y)∈Rn×2 in problem (7) satisfies the following condition:
(10) \nabla f(x, y) = \sum_{i=1}^{n} \mu_i \nabla h_i(x, y), \quad x_i^2 + y_i^2 = 1, \; i = 1, 2, \dots, n, \quad \mu_i \ge 0, \; \mu_i > 0 \text{ for some } i,
then (x,y) is a KKT point for problem (7).
It is simple to obtain that
(11) \nabla f(x, y) = \sum_{i=1}^{n} \mu_i \nabla h_i(x, y) \iff ((Cx)_i, (Cy)_i) = \mu_i (x_i, y_i), \; i = 1, 2, \dots, n.
Then we have the equivalent KKT condition for problem (7) as follows:
(12) ((Cx)_i, (Cy)_i) = \mu_i (x_i, y_i), \; i = 1, 2, \dots, n, \quad x_i^2 + y_i^2 = 1, \; i = 1, 2, \dots, n, \quad \mu_i \ge 0, \; \mu_i > 0 \text{ for some } i.
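Condition (12) is easy to check numerically: on the feasible circles, the multiplier \mu_i is recovered by projecting the gradient pair onto (x_i, y_i). A small diagnostic sketch (`kkt_residual` is our own helper, not part of the paper):

```python
import numpy as np

def kkt_residual(C, x, y):
    """Residual of KKT condition (12): ((Cx)_i, (Cy)_i) = mu_i (x_i, y_i).
    Since x_i^2 + y_i^2 = 1, mu_i is recovered by the projection
    mu_i = (Cx)_i x_i + (Cy)_i y_i; the residual measures how far the
    gradient pair is from being parallel to (x_i, y_i)."""
    cx, cy = C @ x, C @ y
    mu = cx * x + cy * y
    res = np.sqrt((cx - mu * x) ** 2 + (cy - mu * y) ** 2)
    return mu, res
```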
4. The Rank-Two Feasible Direction Algorithm for Binary Quadratic Programming
The feasible direction algorithm is efficient for some special nonlinear programming problems. In [13], it is applied to the low-rank nonlinear programming relaxation of binary quadratic programming problems; in [16], it is applied to a rank-one nonlinear programming relaxation of the max-cut problem.
In this section, we extend the feasible direction algorithm to solve problem (7). The algorithm employs only gradient evaluations of the objective function in problem (7), with no matrix factorizations and no line searches, which greatly reduces the computational cost and increases the efficiency of the algorithm.
In the rank-two feasible direction algorithm, we give the following iteration for problem (7):
(13) (x_i^{k+1}, y_i^{k+1}) = \frac{((2Cx^k)_i, (2Cy^k)_i)}{\left( \|(2Cx^k)_i\|^2 + \|(2Cy^k)_i\|^2 \right)^{0.5}}, \quad i = 1, 2, \dots, n, \; k = 0, 1, \dots,
where (x_i^{k+1}, y_i^{k+1}) denotes the i-th element pair of the iterate (x^{k+1}, y^{k+1}).
The iteration (13) is very simple and has the following characteristics.
No matrix calculations and no line searches are required, and only one gradient evaluation is needed to get the new iteration.
The new iteration point (xk+1,yk+1) is feasible to problem (7).
If the sequence (xk+1,yk+1) converges to (x*,y*), then (x*,y*) is feasible to problem (7).
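Iteration (13), together with the stopping rule introduced later in this section, can be sketched as follows. This is a minimal illustration under the assumption that no gradient pair ((2Cx^k)_i, (2Cy^k)_i) vanishes (the function name `rank2_fd` and the iteration cap are ours):

```python
import numpy as np

def rank2_fd(C, x0, y0, eps=1e-4, max_iter=10000):
    """Rank-two feasible direction iteration (13): map each coordinate
    pair of the scaled gradient (2Cx^k, 2Cy^k) back onto the circle
    x_i^2 + y_i^2 = 1, stopping once ||d^k||_F < eps."""
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(max_iter):
        gx, gy = 2.0 * C @ x, 2.0 * C @ y
        norms = np.sqrt(gx ** 2 + gy ** 2)     # pairwise gradient norms
        x_new, y_new = gx / norms, gy / norms  # iteration (13)
        d = np.sqrt(np.sum((x_new - x) ** 2 + (y_new - y) ** 2))
        x, y = x_new, y_new
        if d < eps:                            # termination criterion
            break
    return x, y
```

Each sweep costs two matrix-vector products with C and no line search, matching the O(n^2) per-iteration cost derived at the end of this section.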
Define direction ((dx)k,(dy)k) as follows:
(14)((dx)ik,(dy)ik)=(xik+1,yik+1)-(xik,yik),i=1,2,…,n,
as a search direction, where ((dx)_i^k, (dy)_i^k) is the element pair of ((dx)^k, (dy)^k). Then the iteration (13) can be written as
(15)(xk+1,yk+1)=(xk,yk)+((dx)k,(dy)k).
The following lemmas show that if ((dx)k,(dy)k)=0, then (xk,yk) is a KKT point of problem (7), and if ((dx)k,(dy)k)≠0, then ((dx)k,(dy)k) is a feasible increasing direction for problem (7).
Lemma 1.
If ((dx)k,(dy)k)=0, then (xk,yk) is a KKT point of (7).
Proof.
It is clear that (x^k, y^k) satisfies the constraints in problem (7). Since ((dx)^k, (dy)^k) = 0, we have
(16) ((Cx^k)_i, (Cy^k)_i) = \left( \|(Cx^k)_i\|^2 + \|(Cy^k)_i\|^2 \right)^{0.5} (x_i^k, y_i^k), \quad i = 1, 2, \dots, n.
Let μik=(∥(Cx)ik∥2+∥(Cy)ik∥2)0.5; by the KKT condition (12), we have that (xk,yk) is a KKT point of (7). This completes the proof of the lemma.
Lemma 2.
Suppose that ((dx)k,(dy)k)≠0; then ((dx)k,(dy)k) is a feasible increasing direction for problem (7), and iteration point (xk+1,yk+1) is feasible to problem (7).
Proof.
The feasibility of the iteration point (xk+1,yk+1) directly comes from definition (13). Using the fact that (xik)2+(yik)2=1, we have
(17) \nabla^T f(x^k, y^k) \, ((dx)^k, (dy)^k) = \nabla^T f(x^k, y^k) \, \big( (x^{k+1}, y^{k+1}) - (x^k, y^k) \big)
= \sum_{i=1}^{n} \left\{ \frac{((2Cx^k)_i, (2Cy^k)_i)^T ((2Cx^k)_i, (2Cy^k)_i)}{\left( \|(2Cx^k)_i\|^2 + \|(2Cy^k)_i\|^2 \right)^{0.5}} - ((2Cx^k)_i, (2Cy^k)_i)^T (x_i^k, y_i^k) \right\}
= \sum_{i=1}^{n} \left\{ \left( \|(2Cx^k)_i\|^2 + \|(2Cy^k)_i\|^2 \right)^{0.5} - ((2Cx^k)_i, (2Cy^k)_i)^T (x_i^k, y_i^k) \right\} \ge 0,
where the last inequality follows from the Cauchy-Schwarz inequality.
So direction ((dx)k,(dy)k) is a feasible increasing direction for problem (7).
The convergence of the feasible direction method is established by the following lemmas.
Lemma 3.
Suppose that ((dx)^k, (dy)^k) → 0; then any accumulation point (x^*, y^*) of the sequence {(x^k, y^k)} is a KKT point of (7).
Proof.
Let (x*,y*) be an accumulation point of the sequence {(xk,yk)}; it is simple to obtain the result by Lemma 1.
Lemma 4 (see [1]).
Let the matrix A be positive definite and the matrix B be positive semidefinite; then A \bullet B is bounded by
(18) \lambda_{\min}(A) \, \mathrm{tr}(B) \le A \bullet B \le \lambda_{\max}(A) \, \mathrm{tr}(B),
where λmin(A) and λmax(A) denote the smallest and the largest eigenvalues of the matrix A.
Lemma 5.
If ((dx)k,(dy)k)≠0 for all k>0, then ((dx)k,(dy)k)→0.
Proof.
Since
(19) f(x^{k+1}, y^{k+1}) = f\big( (x^k, y^k) + ((dx)^k, (dy)^k) \big) = f(x^k, y^k) + \nabla^T f(x^k, y^k) ((dx)^k, (dy)^k) + C \bullet \big( ((dx)^k, (dy)^k)((dx)^k, (dy)^k)^T \big),
from Lemma 4, we have
(20) f(x^{k+1}, y^{k+1}) - f(x^k, y^k) \ge C \bullet \big( ((dx)^k, (dy)^k)((dx)^k, (dy)^k)^T \big) \ge \lambda_{\min}(C) \, \mathrm{tr}\big( ((dx)^k, (dy)^k)((dx)^k, (dy)^k)^T \big) = \lambda_{\min}(C) \, \|((dx)^k, (dy)^k)\|_F^2,
where \|\cdot\|_F denotes the Frobenius norm.
From Lemma 2 and Lemma 4, for any K>0, we have
(21) \sum_{k=0}^{K-1} \|((dx)^k, (dy)^k)\|_F^2 \le \frac{1}{\lambda_{\min}(C)} \sum_{k=0}^{K-1} \big( f(x^{k+1}, y^{k+1}) - f(x^k, y^k) \big) = \frac{1}{\lambda_{\min}(C)} \big( f(x^K, y^K) - f(x^0, y^0) \big) \le \frac{1}{\lambda_{\min}(C)} f(x^K, y^K) = \frac{1}{\lambda_{\min}(C)} C \bullet \big( (x^K, y^K)(x^K, y^K)^T \big) \le \frac{\lambda_{\max}(C)}{\lambda_{\min}(C)} \|(x^K, y^K)\|_F^2 = \frac{n \lambda_{\max}(C)}{\lambda_{\min}(C)}.
This shows that \sum_{k=0}^{K-1} \|((dx)^k, (dy)^k)\|_F^2 is bounded for all K, so the series converges, and hence ((dx)^k, (dy)^k) → 0 holds.
In view of Lemmas 1 and 5, the termination criterion used in the rank-two feasible direction algorithm is \|((dx)^k, (dy)^k)\|_F < \epsilon, where \epsilon is a prespecified constant.
Lemma 6.
For any initial point (x^0, y^0) and \epsilon > 0, the rank-two feasible direction algorithm terminates within [n\lambda_{\max}(C)/(\epsilon^2 \lambda_{\min}(C))] iterations, where [t] denotes the largest integer not exceeding t.
Proof.
Based on Lemma 5, the number of iterations of the rank-two feasible direction algorithm is finite. Let K be the number of iterations; before termination, \|((dx)^k, (dy)^k)\|_F \ge \epsilon for k = 0, 1, \dots, K-1, so
(22) K \epsilon^2 \le \sum_{k=0}^{K-1} \|((dx)^k, (dy)^k)\|_F^2 \le \frac{n \lambda_{\max}(C)}{\lambda_{\min}(C)}.
So we obtain
(23) K \le \frac{n \lambda_{\max}(C)}{\lambda_{\min}(C) \, \epsilon^2}.
Now we conclude that [nλmax(C)/ϵ2λmin(C)] is an upper bound on the number of iterations.
Since problem (7) is nonconvex, there is no guarantee that the solution generated from the feasible direction method is a global solution. However, numerical experiments in Section 5 show that the proposed algorithm always converges to the optimal solution set of problem (7).
Now, we derive the complexity of our algorithm.
The complexity of computing the gradient is O(n^2), and each norm of the gradient pairs can be computed in O(n), so the overall cost of evaluating the next iterate is O(n^2). Together with Lemma 6, the overall complexity of the rank-two feasible direction algorithm is O((\lambda_{\max}(C)/\epsilon^2 \lambda_{\min}(C)) n^3). We can choose C satisfying \lambda_{\max}(C)/\lambda_{\min}(C) < 2, so the complexity does not exceed O((2/\epsilon^2) n^3). The complexity of the primal-dual interior point method based on the AHO direction is O(n^{4.5} \ln(1/\epsilon)) [14, 15], which is clearly higher, so our algorithm is faster than the interior-point method for large-scale SDP relaxations of binary quadratic programming problems. In addition, the complexity of the low-rank feasible direction method is O((2/\epsilon^2) n^{3.5}) [13], which is also higher than that of our method.
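The paper does not spell out how to choose C with \lambda_{\max}(C)/\lambda_{\min}(C) < 2. One possible construction (an assumption of ours; adding a multiple of the identity only shifts the objective of (7) by a constant on the feasible set, since x^T x + y^T y = n there) is:

```python
import numpy as np

def shifted_cost(C1):
    """Hypothetical helper: pick the shift sigma so that
    C = sigma*I - C1 is positive definite with condition number
    lambda_max(C)/lambda_min(C) < 2. Any sigma greater than
    2*lambda_max(C1) - lambda_min(C1) works, since the eigenvalues
    of C are sigma - lambda_j(C1)."""
    w = np.linalg.eigvalsh(C1)           # ascending eigenvalues
    lam_min, lam_max = w[0], w[-1]
    sigma = 2.0 * lam_max - lam_min + 1.0
    return sigma * np.eye(C1.shape[0]) - C1
```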
Let (x^*, y^*) be the KKT point of problem (7); then we obtain the rank-two solution Z^* = x^*(x^*)^T + y^*(y^*)^T. Since the rank-two relaxation has the same form as Goemans and Williamson's relaxation [8], except that ours has variables in R^{n×2} rather than R^{n×n}, the same analysis as Goemans and Williamson's, with minimal changes, can be applied. By the randomized cut generation scheme, a suboptimal solution of the binary quadratic programming problem is obtained.
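The randomized cut generation scheme specializes nicely in rank two: each index i carries a unit vector (x_i^*, y_i^*) in R^2, and a random line through the origin induces a sign vector. A sketch of this rounding step (the helper name `round_rank2` and the trial count are our own choices, not from the paper):

```python
import numpy as np

def round_rank2(x, y, C, trials=100, rng=None):
    """Goemans-Williamson style rounding for the rank-two solution:
    cut the unit vectors (x_i, y_i) with a random line through the
    origin and keep the best sign vector z in {-1,1}^n."""
    rng = np.random.default_rng(rng)
    best_z, best_val = None, -np.inf
    for _ in range(trials):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        z = np.sign(np.cos(theta) * x + np.sin(theta) * y)
        z[z == 0] = 1.0                  # break ties deterministically
        val = z @ C @ z
        if val > best_val:
            best_z, best_val = z, val
    return best_z, best_val
```

Keeping the best of several random cuts mirrors the usual practice with the Goemans-Williamson scheme.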
5. Numerical Results
In this section we present computational results comparing our method with the GW randomized algorithm [8] based on the interior point method and with the low-rank feasible direction algorithm, for finding approximate solutions to the max-cut problem.
For the interior point method, we solve the SDP relaxation with three SDP solvers: SDPpack [17], SeDuMi [18], and DSDP [19]. SeDuMi is one of the state-of-the-art SDP solvers. The code DSDP uses a dual-scaling interior-point algorithm and an iterative linear-equation solver, and it is currently one of the fastest interior-point codes for solving SDP problems. The low-rank feasible direction algorithm is an efficient method for the max-cut problem and is faster than the projected gradient algorithm [13]; the projected gradient algorithm [10] is in turn faster than the Homer and Peinado algorithm [9].
All the algorithms are run in the MATLAB 7.0 environment on an Intel Core2 Duo 2.0 GHz personal computer with 2.0 GB of RAM.
5.1. Max-Cut Problem
The max-cut problem is one of the standard NP-complete problems defined on graphs [8]. Let G = (V, E) denote an edge-weighted undirected graph without loops or multiple edges. We use V = {1, …, n}, write ij for an edge with endpoints i and j, and a_ij for the weight of an edge ij ∈ E. For S ⊆ V, the cut δ(S) is the set of edges ij ∈ E that have one endpoint in S and the other endpoint in V∖S. The max-cut problem asks for the cut maximizing the sum of the weights of its edges. Here, we work only with the complete graph K_n; to model an arbitrary graph in this setting, define a_ij = 0 for ij ∉ E. The matrix A = (a_ij) ∈ S^n is referred to as the weighted adjacency matrix of the graph. An algebraic formulation is obtained by introducing cut vectors x ∈ {-1, 1}^n with x_i = 1 for i ∈ S and x_i = -1 for i ∈ V∖S. The max-cut problem can then be formulated as the integer quadratic program
(24) \max\; \frac{1}{2} \sum_{i<j} a_{ij} (1 - x_i x_j) \quad \text{s.t.}\; x_i \in \{-1, 1\}, \; i = 1, \dots, n.
The matrix L(G) = Diag(Ae) - A is called the Laplace matrix of the graph G, where e is the all-ones vector and Diag(Ae) is the diagonal matrix whose diagonal elements are Ae. Letting C = (1/4)L, the max-cut problem may be interpreted as a special case of problem (1).
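The correspondence between cut weights and this quadratic form is easy to verify numerically: for z ∈ {-1,1}^n, z^T (L/4) z equals the weight of the cut that z encodes. A small sketch (the helper name `maxcut_matrix` is ours):

```python
import numpy as np

def maxcut_matrix(A):
    """Build C = L/4 from a weighted adjacency matrix A, where
    L = Diag(A e) - A is the Laplace matrix of the graph."""
    L = np.diag(A.sum(axis=1)) - A
    return L / 4.0
```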
5.2. Numerical Results for the Random Graphs
The first set of test problems contains random graphs with two edge densities, 0.8 and 0.2, corresponding to dense and sparse random graphs, respectively. The weight on each edge is 1. We select problems of size n = 50 to n = 350 to compare the suboptimal values for the max-cut problem and the CPU times of the four methods.
For the interior point method, we use two SDP solvers: SDPpack [17] and SeDuMi [18]. In our algorithm, the iteration stops once \|((dx)^k, (dy)^k)\|_F < \epsilon. The results are shown in Table 1.
Table 1: Comparison results for the random graphs.

Size | Density | SDPpack (Values, CPU) | SeDuMi (Values, CPU) | FD (Values, CPU) | R2FD (Values, CPU)
50   | 0.2 | 319, 0.73      | 319, 0.59    | 319, 0.05    | 319, 0.04
50   | 0.8 | 781, 0.67      | 781, 0.59    | 781, 0.06    | 779, 0.04
100  | 0.2 | 1185, 9.27     | 1185, 1.64   | 1185, 0.21   | 1182, 0.09
100  | 0.8 | 3074, 10.00    | 3074, 1.69   | 3075, 0.28   | 3071, 0.11
150  | 0.2 | 2521, 54.46    | 2521, 4.23   | 2524, 0.92   | 2525, 0.19
150  | 0.8 | 6811, 41.99    | 6811, 4.17   | 6813, 1.15   | 6811, 0.22
200  | 0.2 | 4404, 121.33   | 4404, 8.71   | 4407, 2.19   | 4413, 0.35
200  | 0.8 | 12046, 152.46  | 12046, 8.02  | 12051, 2.98  | 12052, 0.39
250  | 0.2 | 6729, 301.64   | 6729, 12.32  | 6739, 4.64   | 6738, 0.47
250  | 0.8 | 18703, 336.20  | 18703, 15.15 | 18701, 6.87  | 18711, 0.62
300  | 0.2 | 9521, 618.06   | 9521, 27.17  | 9527, 10.83  | 9551, 0.86
300  | 0.8 | 26501, 687.51  | 26501, 26.44 | 26513, 13.85 | 26541, 1.05
350  | 0.2 | 12934, 1076.24 | 12934, 39.13 | 12945, 18.60 | 12964, 1.33
350  | 0.8 | 36067, 1345.38 | 36067, 40.92 | 36064, 25.31 | 36094, 1.43
In Table 1, “SDPpack” stands for the randomized algorithm based on the interior point method solved with the SDPpack software, “SeDuMi” for the randomized algorithm based on the interior point method solved with the SeDuMi software, “FD” for the feasible direction algorithm coupled with the randomized method, “R2FD” for our rank-two feasible direction algorithm coupled with the randomized method, “CPU” for the CPU time, “Values” for the suboptimal value of the max-cut problem obtained by these methods, and “Density” for the edge density of the random graphs.
SDPpack and SeDuMi carry the currently best performance guarantee in theory. The results in Table 1 show that the approximate solutions obtained by R2FD are as good as those generated by SDPpack, SeDuMi, and FD, while the CPU time of our method is less than that of all three. In particular, as the size of the max-cut problem grows, the ratio of our method's CPU time to that of the other three methods decreases quickly.
5.3. Numerical Results for the G-Set Graphs
The second set of test problems comes from the so-called G-set graphs, randomly generated by the procedure rudy, a machine-independent graph generator written by Rinaldi [20] and used in [21, 22]. The set includes 14 randomly generated large test problems with 800 to 2000 nodes. Recently, Choi and Ye [19] reported computational results on a subset of the G-set graphs that were solved as max-cut problems using their SDP code COPL-DSDP, or simply DSDP. The code DSDP uses a dual-scaling interior-point algorithm and an iterative linear-equation solver. The SDPpack software does not work when the size of the max-cut problem is larger than 350, so we instead report results for the randomized method based on the dual-scaling algorithm solved by the DSDP software [19].
Table 2 compares our R2FD method, the FD method, and the randomized methods based on DSDP and SeDuMi on the 14 large test problems in the second set. In Table 2, “DSDP” denotes the randomized method based on the dual-scaling algorithm of the DSDP software.
Table 2: Comparison results for G-set graphs.

Graph | Size | DSDP (Values, CPU) | SeDuMi (Values, CPU) | FD (Values, CPU) | R2FD (Values, CPU)
G01 | 800  | 11404, 66   | 11389, 590   | 11446, 258  | 11469, 4
G03 | 800  | 11403, 28   | 11413, 626   | 11431, 72   | 11444, 4
G13 | 800  | 552, 22     | 552, 542     | 554, 384    | 540, 3
G14 | 800  | 2979, 45    | 2979, 645    | 2986, 336   | 2998, 4
G15 | 800  | 2972, 32    | 2974, 728    | 2975, 55    | 2974, 4
G22 | 2000 | 12978, 817  | 12979, 11213 | 12995, 5630 | 13146, 35
G23 | 2000 | 12971, 898  | 12976, 12546 | 12969, 6637 | 13074, 34
G24 | 2000 | 12959, 802  | 12960, 10859 | 12993, 5236 | 13062, 37
G35 | 2000 | 7438, 1387  | 7438, 13236  | 7446, 7397  | 7504, 38
G36 | 2000 | 7421, 1717  | 7422, 14078  | 7427, 7104  | 7480, 32
G37 | 2000 | 7441, 1390  | 7445, 13569  | 7449, 6891  | 7495, 35
G43 | 1000 | 6504, 91    | 6497, 1192   | 6525, 487   | 6528, 8
G44 | 1000 | 6470, 90    | 6479, 1195   | 6507, 496   | 6520, 7
G53 | 1000 | 3731, 70    | 3738, 1135   | 3747, 738   | 3753, 7
The results in Table 2 show that the approximate solutions obtained by our method are nearly as good as those of the DSDP cuts, but our method reaches solutions at least 10 times faster than the FD method, 7 times faster than DSDP, and 100 times faster than SeDuMi. In particular, for G35, G36, and G37, the CPU time of our method is almost 40 times less than that of DSDP. Furthermore, we observe that R2FD took less than 5 minutes to return approximate solutions to all 14 test problems, which required more than 2 hours of computation with DSDP, more than 11 hours with FD, and more than 22 hours with SeDuMi.
6. Conclusion
The interior-point method and the feasible direction method increase the dimension of the problem from n to n^2 and rn (where r is a function of n), respectively, so these two methods cost more CPU time than ours on large binary quadratic programming problems, especially problems with a large number of edges. The rank-two feasible direction method increases the dimension only from n to 2n, so it is efficient for solving large binary quadratic programming problems.
Acknowledgments
The work of Xuewen Mu is supported by the National Science Foundation for Young Scientists of China (Grants nos. 11101320 and 61201297) and the Fundamental Research Funds for the Central Universities (Grant no. K50511700007). The work of Yaling Zhang is supported by the Xi'an University of Science and Technology Cultivation Foundation in Shaanxi Province of China (Program no. 2010032).
References

1. C. Helmberg, Semidefinite Programming for Combinatorial Optimization, Konrad-Zuse-Zentrum für Informationstechnik, Berlin, Germany, 2000.
2. F. Barahona, M. Groetschel, M. Juenger, and G. Reinelt, "An application of combinatorial optimization to statistical physics and circuit layout design," vol. 36, no. 3, pp. 493–513, 1988.
3. P. H. Tan and L. K. Rasmussen, "The application of semidefinite programming for detection in CDMA," vol. 19, no. 8, pp. 1442–1449, 2001.
4. F. Hasegawa, J. Luo, K. Pattipati, and P. Willett, "Speed and accuracy comparison of techniques to solve a binary programming problem with application to synchronous CDMA," vol. 52, pp. 2775–2780, 2004.
5. X. Mu and Y. Zhang, "A new rank-two semidefinite programming relaxation method for multiuser detection problem," vol. 65, no. 2, pp. 223–233, 2012.
6. J. Keuchel, C. Schnörr, C. Schellewald, and D. Cremers, "Binary partitioning, perceptual grouping, and restoration with semidefinite programming," vol. 25, no. 11, pp. 1364–1379, 2003.
7. S.-P. Wu, S. Boyd, and L. Vandenberghe, "FIR filter design via semidefinite programming and spectral factorization," in Proceedings of the 35th IEEE Conference on Decision and Control, Kobe, Japan, December 1996, pp. 271–276.
8. M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," vol. 42, no. 6, pp. 1115–1145, 1995.
9. S. Homer and M. Peinado, "Design and performance of parallel and distributed approximation algorithms for maxcut," vol. 46, no. 1, pp. 48–61, 1997.
10. S. Burer and R. D. C. Monteiro, "A projected gradient algorithm for solving the maxcut SDP relaxation," vol. 15, no. 3-4, pp. 175–200, 2001.
11. S. Burer, R. D. C. Monteiro, and Y. Zhang, "Rank-two relaxation heuristics for max-cut and other binary quadratic programs," vol. 12, no. 2, pp. 503–521, 2001.
12. S. Burer, R. D. C. Monteiro, and Y. Zhang, "Maximum stable set formulations and heuristics based on continuous optimization," vol. 94, no. 1, pp. 137–166, 2002.
13. H. Liu, X. Wang, and S. Liu, "Feasible direction algorithm for solving the SDP relaxations of quadratic {-1,1} programming problems," vol. 19, no. 2, pp. 125–136, 2004.
14. F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton, "Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results," vol. 8, no. 3, pp. 746–768, 1998.
15. M. J. Todd, "Semidefinite optimization," vol. 10, pp. 515–560, 2001.
16. C.-X. Xu, X.-L. He, and F.-M. Xu, "An effective continuous algorithm for approximate solutions of large scale max-cut problems," vol. 24, no. 6, pp. 749–760, 2006.
17. F. Alizadeh, J. P. Haeberly, M. V. Nayakkankuppam, M. L. Overton, and S. Schmieta, SDPpack User's Guide, Version 0.9 Beta, TR1997-737, Courant Institute of Mathematical Sciences, New York, NY, USA, June 1997.
18. J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," vol. 11, no. 1–4, pp. 625–653, 1999.
19. C. Choi and Y. Ye, Solving Sparse Semidefinite Programs Using the Dual Scaling Algorithm with an Iterative Solver, Department of Management Science, University of Iowa, Iowa City, Iowa, USA, 2000.
20. G. Rinaldi, Rudy graph generator, http://www-user.tu-chemnitz.de/~helmberg/rudy.tar.gz.
21. C. Helmberg and F. Rendl, "A spectral bundle method for semidefinite programming," vol. 10, no. 3, pp. 673–696, 2000.
22. H. Alperin and I. Nowak, "Lagrangian smoothing heuristics for max-cut," vol. 11, no. 5-6, pp. 447–463, 2005.