We propose a branch and bound reduced algorithm for quadratic programming problems with quadratic constraints. In this algorithm, we determine the lower bound of the optimal value of the original problem by constructing a linear relaxation programming problem. At the same time, in order to improve the degree of approximation and accelerate the rate of convergence, a rectangular reduction strategy is used in the algorithm. Numerical experiments show that the proposed algorithm is feasible and effective and can solve small- and medium-sized problems.
1. Introduction
Quadratic programming problems with quadratic constraints play a very important role in global optimization because quadratic functions are among the simplest nonlinear functions and can approximate many other functions well. Studying quadratic problems is therefore a natural step toward studying general nonlinear problems, and quadratic programming problems with quadratic constraints have important applications in science and technology. Consequently, whether in local or in global optimization, quadratic programming problems have received extensive attention, and research on this class of problems is clearly necessary. In this paper, we consider the following quadratic programming problem with quadratic constraints:
(QP) min f0(x)=xTQ0x+(d0)Tx+c0,
s.t. fi(x)=xTQix+(di)Tx+ci≤0, i=1,2,…,p,
x∈S={x∈Rn: l≤x≤u},
where Qi=(qj1j2i)n×n are n×n symmetric matrices, di=(d1i,d2i,…,dni)T∈Rn, l∈Rn, u∈Rn, ci∈R, for i=0,1,…,p.
In recent years, many researchers have studied this class of problems and made considerable progress. In [1], an effective lower bound of the optimal value of the original problem is obtained using Lagrangian underestimates, and the interval Newton method is used to find local optimal solutions and to accelerate convergence to global optimal solutions. A decomposition-and-approximation method is put forward in [2]. Literature [3] organically combines the outer approximation method with the branch and bound technique and presents a new branch-reduce algorithm. Literature [4] combines the cutting plane algorithm with the branch and bound algorithm and puts forward a new algorithm. Literature [5] presents a branch and bound algorithm based on linear lower functions of bilinear functions. Based on [5], literature [6] puts forward a branch-reduce method aimed at the objective function and constraint conditions of the linear relaxation programming. A simplicial branch and bound algorithm is given in [7]. Many other methods for solving quadratic programming problems with quadratic constraints can be found in [8–15].
The rest of this paper is organized as follows. In Section 2, we give the linear relaxation programming problem (LP) of the problem (QP). In Section 3, we give the rectangle subdivision and reduction strategy. In Section 4, we describe the branch and bound algorithm in detail and prove its convergence. Finally, some numerical results demonstrate the effectiveness of the presented algorithm.
2. Linear Relaxation Programming
In this section, we construct a linear relaxation programming problem of the original problem.
Assume that λmini is the minimum eigenvalue of the matrix Qi, for i=0,1,…,p. If λmini≥0, let θi=0; otherwise, let θi=|λmini|+τi, where τi≥0; then Qi+θiI is positive semidefinite.
On the rectangle Sk={x∈Rn: lk≤x≤uk}, for each i, we construct a linear lower function of fi(x) on Sk.
We have
(1) fi(x)=xTQix+(di)Tx+ci
=xT(Qi+θiI)x+(di)Tx+ci-θi∥x∥2
=(x-lk)T(Qi+θiI)(x-lk)+(di)Tx+ci-θi∑j=1nxj2+2(lk)T(Qi+θiI)x-(lk)T(Qi+θiI)lk.
Suppose that ljk and ujk are the jth components of lk and uk, respectively. For each j∈{1,2,…,n} and xj∈[ljk,ujk], we have (xj-ljk)(ujk-xj)≥0, that is, xj2≤(ujk+ljk)xj-ujkljk; hence a linear lower function of -xj2 on the interval [ljk,ujk] is -(ujk+ljk)xj+ujkljk. Therefore,
(2)φSk(x)≜∑j=1n(-(ujk+ljk)xj+ujkljk)=-(lk+uk)Tx+(lk)Tuk
is a linear lower function of -∑j=1nxj2 on the rectangle [lk,uk]; we construct the following linear function:
(3)lSki(x)=(aSki)Tx+bSki,
where
(4)aSki=di+2(Qi+θiI)lk-θi(lk+uk),bSki=ci-(lk)T(Qi+θiI)lk+θi(lk)Tuk.
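To make this construction concrete, the coefficients aSki and bSki of (4) can be computed directly. The following NumPy sketch is our own illustration (the function name and the choice τ = 1e-6 are ours, not from the paper); it returns a, b, and θi for a single quadratic function fi:

```python
import numpy as np

def linear_underestimator(Q, d, c, l, u, tau=1e-6):
    """Coefficients (a, b) of the linear lower function a^T x + b of
    f(x) = x^T Q x + d^T x + c on the box [l, u], following (3)-(4)."""
    lam_min = np.linalg.eigvalsh(Q).min()            # smallest eigenvalue of Q
    theta = 0.0 if lam_min >= 0 else -lam_min + tau  # makes Q + theta*I PSD
    M = Q + theta * np.eye(len(d))
    a = d + 2 * M @ l - theta * (l + u)
    b = c - l @ M @ l + theta * (l @ u)
    return a, b, theta
```

Sampling random points of the box and checking a^T x + b ≤ f(x) is a quick sanity check of Theorem 1 below.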
We can obtain the following two theorems.
Theorem 1.
For each i∈{0,1,…,p}, let Qi+θiI be positive semidefinite. Then the linear function lSki(x) is a lower function of fi(x) on the rectangle Sk; that is, fi(x)≥lSki(x) for all x∈Sk.
Proof.
From the formula (1) and the definitions of the functions φSk(x) and lSki(x), for each i∈{0,1,…,p}, we have
(5)fi(x)≥(x-lk)T(Qi+θiI)(x-lk)+lSki(x),∀x∈Sk.
Moreover, since the matrix Qi+θiI is positive semidefinite,
(6)(x-lk)T(Qi+θiI)(x-lk)≥0,∀x∈Sk.
Consequently, fi(x)≥lSki(x), for all x∈Sk, i∈{0,1,…,p}.
Theorem 2.
Assume that ρ(Qi+θiI) is the spectral radius of the matrix Qi+θiI; then
(7)max{|fi(x)-lSki(x)|:x∈Sk}≤(ρ(Qi+θiI)+θi)∥uk-lk∥2,i∈{0,1,…,p}.
Proof.
From the formula (1) and the definitions of the functions φSk(x) and lSki(x), we have
(8)fi(x)-lSki(x)=(x-lk)T(Qi+θiI)(x-lk)+θi(-∥x∥2-φSk(x))≤ρ(Qi+θiI)∥uk-lk∥2+θi|(x-lk)T(uk-x)|≤ρ(Qi+θiI)∥uk-lk∥2+θi∥uk-lk∥2=(ρ(Qi+θiI)+θi)∥uk-lk∥2.
Hence, the conclusion is established.
Therefore, from Theorem 1, we obtain the linear relaxation programming problem of (QP) on the rectangle Sk:
(LP(Sk))minlSk0(x)s.t.lSki(x)≤0,i=1,2,…,p,x∈Sk.
Solving the problem (LP(Sk)) yields its optimal value, which is a lower bound of the global optimal value of the problem (QP) on the rectangle Sk.
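Each LP(Sk) is an ordinary linear program. As an illustration (not the authors' code), the sketch below builds the relaxation of Example 1 from Section 5 on the initial box [2,5]×[1,3] and solves it with scipy.optimize.linprog; the helper name `underestimator` and the tolerance τ are our own choices, and the numerical value of the bound depends slightly on τ.

```python
import numpy as np
from scipy.optimize import linprog

# Example 1 of Section 5 in the form (QP):
# min x1^2 + x2^2  s.t.  -0.3*x1*x2 + 1 <= 0,  x in [2,5] x [1,3].
Q = [np.eye(2), np.array([[0.0, -0.15], [-0.15, 0.0]])]
d = [np.zeros(2), np.zeros(2)]
c = [0.0, 1.0]
l, u = np.array([2.0, 1.0]), np.array([5.0, 3.0])

def underestimator(Q, d, c, l, u, tau=1e-8):
    """Coefficients (a, b) of the linear lower function (4) on [l, u]."""
    lam = np.linalg.eigvalsh(Q).min()
    theta = 0.0 if lam >= 0 else -lam + tau
    M = Q + theta * np.eye(len(d))
    return d + 2 * M @ l - theta * (l + u), c - l @ M @ l + theta * (l @ u)

(a0, b0), (a1, b1) = (underestimator(Q[i], d[i], c[i], l, u) for i in range(2))
# LP(S): min a0^T x + b0  s.t.  a1^T x <= -b1,  l <= x <= u
res = linprog(a0, A_ub=[a1], b_ub=[-b1], bounds=list(zip(l, u)))
lower_bound = res.fun + b0
print(lower_bound)   # a valid lower bound on the global optimum 6.7778
```

The printed value never exceeds the true optimum, which is the only property the branch and bound scheme relies on.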
3. The Subdivision and Reduction of the Rectangle
In this section, we give the bisection and reduction methods of the rectangle. Let Sk={lk≤x≤uk} be a rectangle on Rn, and xk∈Sk.
3.1. The Subdivision of the Rectangle
The method of the subdivision of the rectangle is described as follows.
Select the longest edge of the rectangle Sk; that is, the edge s with usk-lsk=max{ujk-ljk: j=1,2,…,n}.
Let vsk=(lsk+usk)/2. Then
(9) Sk1=∏j=1s-1[ljk,ujk]×[lsk,vsk]×∏j=s+1n[ljk,ujk],
Sk2=∏j=1s-1[ljk,ujk]×[vsk,usk]×∏j=s+1n[ljk,ujk].
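In code, this bisection is only a few lines; the sketch below (our own helper, with boxes stored as coordinate lists) returns the two subrectangles of formula (9):

```python
def bisect(l, u):
    """Split the box [l, u] along its longest edge, as in formula (9)."""
    s = max(range(len(l)), key=lambda j: u[j] - l[j])  # index of the longest edge
    v = (l[s] + u[s]) / 2.0                            # midpoint of that edge
    u1, l2 = list(u), list(l)
    u1[s], l2[s] = v, v
    return (list(l), u1), (l2, list(u))
```

For the initial box of Example 1, bisect([2.0, 1.0], [5.0, 3.0]) splits along x1 and returns [2,3.5]×[1,3] and [3.5,5]×[1,3], matching the subdivision reported in Section 5.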
3.2. The Reduction of the Rectangle
Based on [8], in order to improve the convergence of the algorithm, we give two pruning methods based on problem (LP). For any Sk={x∈Rn: lk≤x≤uk}⊆S with Sjk=[ljk,ujk], suppose that the objective function of (LP(Sk)) is φ0k(x)=∑j=1ncjkxj+c0k, that the constraint functions are ∑j=1naijkxj≤bik, and that the current upper bound of the optimal value of (QP) is denoted by UB; let
(10)rCk=∑j=1nmin{cjkljk,cjkujk},rLik=∑j=1nmin{aijkljk,aijkujk},i=1,…,p.
Theorem 3 (see [8]).
For any Sk⊆S, if rCk+c0k>UB, then there is no optimal solution of (QP) on Sk; otherwise, if crk>0(r∈{1,…,n}), then there is no optimal solution of (QP) on S¯k=(S¯jk)n×1; if crk<0(r∈{1,…,n}), then there is no optimal solution of (QP) on S_k=(S_jk)n×1, where
(11) S¯jk={Sjk, j≠r, j=1,…,n; ((UB-c0k-rCk+cjkljk)/cjk, ujk]∩Sjk, j=r},
S_jk={Sjk, j≠r, j=1,…,n; [ljk, (UB-c0k-rCk+cjkujk)/cjk)∩Sjk, j=r}.
Theorem 4 (see [8]).
For any i=1,…,p, if rLik>bik, then there is no optimal solution of (QP) on Sk; otherwise, if airk>0(r∈{1,…,n}), then there is no optimal solution of (QP) on S¯¯k=(S¯¯jk)n×1; if airk<0(r∈{1,…,n}), then there is no optimal solution of (QP) on S__k=(S__jk)n×1, where
(12) S¯¯jk={Sjk, j≠r, j=1,…,n; ((bik-rLik+aijkljk)/aijk, ujk]∩Sjk, j=r},
S__jk={Sjk, j≠r, j=1,…,n; [ljk, (bik-rLik+aijkujk)/aijk)∩Sjk, j=r}.
From Theorems 3 and 4, we can construct the following pruning rules to delete or reduce the rectangle Sk.
Rule 1.
Compute rCk. If rCk+c0k>UB, then Sk is deleted; otherwise, for each j=1,…,n:
If cjk>0, let ujk=min{ujk,(UB-c0k-rCk+cjkljk)/cjk}.
If cjk<0, let ljk=max{ljk,(UB-c0k-rCk+cjkujk)/cjk}.
Rule 2.
Compute rLik. If rLik>bik, then Sk is deleted; otherwise, for each j=1,…,n:
If aijk>0, let ujk=min{ujk,(bik-rLik+aijkljk)/aijk}.
If aijk<0, let ljk=max{ljk,(bik-rLik+aijkujk)/aijk}, where i=1,…,p.
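Both rules tighten a box using one linear form and one scalar bound, so they share a common core. The sketch below is our own formulation, not the authors' code: Rule 1 corresponds to rule(c, UB - c0k, l, u) and Rule 2 to rule(ai, bik, l, u), and a return value of None means the whole rectangle can be deleted.

```python
def rule(coef, bound, l, u):
    """Interval-reduction step for sum_j coef[j]*x[j] <= bound on the box [l, u].

    Returns None when the box can be deleted, otherwise the (possibly
    tightened) bounds, following Rules 1 and 2."""
    r = sum(min(cj * lj, cj * uj) for cj, lj, uj in zip(coef, l, u))
    if r > bound:
        return None                      # delete the whole rectangle
    l, u = list(l), list(u)
    for j, cj in enumerate(coef):
        if cj > 0:
            u[j] = min(u[j], (bound - r + cj * l[j]) / cj)
        elif cj < 0:
            l[j] = max(l[j], (bound - r + cj * u[j]) / cj)
    return l, u
```

For instance, with the objective form x1 + x2, c0k = 0, and UB = 3 on the box [0,10]², the rule shrinks the box to [0,3]² without losing any candidate optimal point.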
4. The Algorithm Description and Convergence Analysis
Next, we can describe a branch and bound reduced algorithm of problem (QP) as follows.
Suppose that at iteration k the feasible region of the problem (QP) is denoted by D, Q represents the set of feasible points found so far, Sk represents the rectangle to be divided next, the set of rectangles remaining after pruning is denoted by T, and the current upper and lower bounds of the global optimal value of the problem (QP) are denoted by αk and βk, respectively.
Step 1 (initializing).
Set ε>0, and let T={S}, k=1, Sk=S, and αk=+∞. Solving the problem (LP(Sk)), its optimal solution and optimal value are denoted by xk and β(Sk), respectively. Let βk=β(Sk); then βk is a lower bound of the global optimal value of the problem (QP). If xk∈D, let Q=Q∪{xk}, update the upper bound αk=min{f0(x): x∈Q}, and take a current optimal solution x*∈arg min{f0(x): x∈Q}.
Step 2 (termination rule).
If αk-βk≤ε or T=∅, then stop; the global optimal solution x* and the global optimal value f0(x*) are output; otherwise, go to the next step.
Step 3 (selection rule).
Select a rectangle with the smallest lower bound from the rectangle set T; that is, Sk∈arg min{β(S): S∈T}.
Step 4 (subdivision rule).
Using the subdivision method of Section 3.1, the rectangle Sk is divided into subrectangles Sk1 and Sk2 with int Sk1∩int Sk2=∅.
Step 5 (reduction technique).
Reduce the subrectangles obtained in Step 4 using the reduction method of Section 3.2; without loss of generality, the rectangles remaining after reduction are still denoted by Skj, j∈Γ, where Γ is the index set of the rectangles that remain.
Step 6 (bounding rule).
The lower bound is βk*=min{β(S): S∈T}; the upper bound is αk*=min{f0(x): x∈Q}.
The current best feasible solution is x*∈arg min{f0(x):x∈Q}.
Step 7 (pruning rule).
Let T=T∖{S∈T: β(S)≥αk*}.
Step 8.
Set k=k+1; go to Step 2.
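The steps above can be assembled into a compact best-first branch and bound loop. The sketch below is our own illustrative implementation, not the authors' code: the function names, tolerances, and the use of scipy.optimize.linprog with a heap are assumptions, and the reduction step of Section 3.2 is omitted for brevity. It is run on Example 1 of Section 5.

```python
import heapq
import numpy as np
from scipy.optimize import linprog

def bb_qcqp(Q, d, c, l, u, eps=1e-2, feas_tol=1e-4, max_iter=400, tau=1e-8):
    """Best-first branch-and-bound sketch for (QP).

    Q[i], d[i], c[i] define f_i; index 0 is the objective, indices
    1..p the constraints f_i(x) <= 0.  Returns (best_x, UB, LB)."""
    n = len(l)

    def f(i, x):                      # quadratic function f_i
        return x @ Q[i] @ x + d[i] @ x + c[i]

    def lin(i, lo, hi):               # coefficients of l_i(x) = a^T x + b, formula (4)
        lam = np.linalg.eigvalsh(Q[i]).min()
        th = 0.0 if lam >= 0 else -lam + tau
        M = Q[i] + th * np.eye(n)
        return d[i] + 2 * M @ lo - th * (lo + hi), c[i] - lo @ M @ lo + th * (lo @ hi)

    def bound(lo, hi):                # solve LP(S) over the box [lo, hi]
        a0, b0 = lin(0, lo, hi)
        rows = [lin(i, lo, hi) for i in range(1, len(Q))]
        res = linprog(a0, A_ub=[a for a, _ in rows], b_ub=[-b for _, b in rows],
                      bounds=list(zip(lo, hi)))
        return (res.fun + b0, res.x) if res.status == 0 else None

    best_x, UB = None, np.inf
    root = bound(l, u)
    if root is None:                  # the relaxation itself is infeasible
        return None, UB, np.inf
    LB, count, heap = root[0], 0, [(root[0], 0, l, u, root[1])]
    for _ in range(max_iter):
        if not heap:
            break
        LB, _, lo, hi, x = heapq.heappop(heap)
        if UB - LB <= eps:            # termination rule (Step 2)
            break
        if all(f(i, x) <= feas_tol for i in range(1, len(Q))) and f(0, x) < UB:
            best_x, UB = x, f(0, x)   # incumbent update (Steps 1 and 6)
        s = int(np.argmax(hi - lo))   # bisect the longest edge (Step 4)
        v = (lo[s] + hi[s]) / 2.0
        for clo, chi in ((lo, np.where(np.arange(n) == s, v, hi)),
                         (np.where(np.arange(n) == s, v, lo), hi)):
            child = bound(clo, chi)
            if child is not None and child[0] < UB:   # pruning rule (Step 7)
                count += 1
                heapq.heappush(heap, (child[0], count, clo, chi, child[1]))
    return best_x, UB, LB

# Example 1: min x1^2 + x2^2  s.t.  -0.3*x1*x2 + 1 <= 0,  x in [2,5] x [1,3].
Q = [np.eye(2), np.array([[0.0, -0.15], [-0.15, 0.0]])]
d = [np.zeros(2), np.zeros(2)]
c = [0.0, 1.0]
x, ub, lb = bb_qcqp(Q, d, c, np.array([2.0, 1.0]), np.array([5.0, 3.0]))
print(x, ub, lb)   # lb is a guaranteed lower bound on the optimum 6.7778
```

The unique push counter in each heap entry breaks ties between nodes with equal bounds, and accepting nearly feasible LP points (violation at most feas_tol) as incumbents is a pragmatic substitute for the exact membership test xk∈D used in the paper.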
Theorem 5.
(a) If the algorithm terminates in finitely many steps, then xk is an ε-global optimal solution of problem (QP).
(b) For each k≥1, let xk be the solution obtained at step k. If the algorithm does not terminate, then {xk} is a sequence of feasible solutions of problem (QP), any accumulation point of which is a global optimal solution of problem (QP), and limk→∞αk=limk→∞βk=v.
Proof.
(a) If the algorithm is finite, suppose that it terminates at step k (k≥1). Because xk is obtained by solving (LP(Sk)), we have xk∈Sk⊆S, and xk is a feasible solution of problem (QP). When αk-βk≤ε, the algorithm terminates. From Steps 1 and 6, we have f0(xk)-βk≤ε; from the algorithm, βk≤v, where v is the global optimal value of problem (QP). Because xk is a feasible solution of problem (QP), f0(xk)≥v. Thus
(13)v≤f0(xk)≤v+ε.
Therefore, xk is an ε-global optimal solution of problem (QP).
(b) If the algorithm is infinite, then it produces a solution sequence {xk} of problem (QP), where, for each k≥1, xk is obtained by solving problem (LP(Sk)) and xk∈Sk⊆S; from the iteration of the algorithm, we have
(14)βk≤v≤αk=f0(xk),k=1,2,….
Because the sequence {βk} is nondecreasing and bounded above, and the sequence {αk} is nonincreasing and bounded below, both sequences are convergent. Taking limits on both sides of (14), we have
(15)limk→∞βk≤v≤limk→∞αk=limk→∞f0(xk).
Let limk→∞βk=β¯ and limk→∞αk=α¯; then formula (15) becomes
(16)β¯≤v≤limk→∞f0(xk)=α¯.
Without loss of generality, assume that the sequence of rectangles {Sk=[lk,uk]} satisfies xk∈Sk and Sk+1⊂Sk. In our algorithm, the rectangles are bisected continually; then ⋂k=1∞Sk={x*}, and, by the continuity of the function f0(x),
(17)β¯=v=α¯=limk→∞f0(xk)=f0(x*).
So any accumulation point of {xk} is a global optimal solution of problem (QP).
5. Numerical Experiment
Several experiments are given to demonstrate the feasibility and effectiveness of our algorithm.
Example 1.
(18)minx12+x22s.t.0.3x1x2≥1,2≤x1≤5,1≤x2≤3.
From the algorithm, the initial rectangle is S1=[2.0000,5.0000]×[1.0000,3.0000]. First, we solve the problem LP(S1); its optimal solution is x1=(2.0000,3.0000), and its optimal value is β1=β(S1)=4.9996; then 4.9996 is a lower bound of the global optimal value of problem (QP). Because x1=(2.0000,3.0000) is feasible, Q={(2.0000,3.0000)} is the set of current feasible solutions, the current upper bound is α1=f0(x1)=13.0000, and the current optimal solution is x*=x1=(2.0000,3.0000).
After that, based on our selection rule, we select the rectangle with the minimum lower bound, S1, to divide; S1 is split into the two subrectangles S1,1=[2.0000,3.5000]×[1.0000,3.0000] and S1,2=[3.5000,5.0000]×[1.0000,3.0000] by the dividing method in Section 3.1. We then reduce the rectangles using the reduction technique in Section 3.2, and the new rectangle after reduction is denoted by S2=S1,1=[2.0000,3.5000]×[1.0000,3.0000]. Solving the linear relaxation programming problem LP on the rectangle S2, its optimal value is β2=β(S1,1)=4.9996, so the lower bound of the original problem is not updated and remains 4.9996. Next, we choose S2 to divide, and so on, until the 15th iteration, where S14=[2.0000,2.0408]×[1.6538,1.7019]. Solving the linear relaxation programming problem LP(S14), its optimal solution is (2.0000,1.6665) and its optimal value is 6.7765, while the current upper bound is 6.8151 and the current optimal solution is (2.0000,1.6778). Because |6.8151-6.7765|<0.1, the termination rule is satisfied; then the optimal value of the original problem is 6.8151, the lower bound of the optimal value is 6.7765, and the optimal solution is x=(2.0000,1.6778); here the lower bound of the optimal value is also the approximate optimal value, where the accuracy is ε=0.1.
Table 2 shows the different results of Example 1 under different accuracy.
Example 2.
(19)min6x12+4x22+5x1x2s.t.-6x1x2≤-48,0≤x1,x2≤10.
The optimal value is 118.3838.
Example 3.
(20)minx12+x22s.t.-0.3x1x2≤-1,-x1-x2≤1,x∈X0={2≤x1≤5,1≤x2≤3}.
The optimal value is 6.7778.
Example 4.
(21) min x1
s.t. (1/4)x1+(1/2)x2-(1/6)x22-(1/6)x12≤1,
(1/14)x12+(1/14)x22-(3/7)x1-(3/7)x2≤1,
1≤x1≤5.5, 1≤x2≤5.5.
The optimal value is 1.0000.
Example 5.
(22)min-x1+x1x20.5-x2s.t.-6x1+8x2≤3,3x1-x2≤3,x∈X0={x∣0≤xi≤1.5,i=1,2}.
The optimal value is −1.1629.
Example 6.
(23)min6x12+4x22+2.5(x1+x2)2-2.5(10x1+10x2)s.t.3(x1-x2)2-3(10x1+10x2)≤-48,0≤x1,x2≤10.
The optimal value is −31.8878.
Example 7.
(24)min21x12+34x1x2-24x22+2x1-14x2s.t.2x12+4x1x2+2x22+8x1+6x2-9≤0,-5x12-8x1x2-5x22-4x1+4x2+4≤0,x1+2x2≤2,x∈[0,1]2.
The optimal value is −3.3205.
Example 8.
(25)min5.3578x32+0.8357x1x5+37.2392x1s.t.2.584×10-5x3x5-6.663×10-3x2x5-7.34×10-5x1x4≤1,8.53007×10-4x2x5+9.395×10-5x1x4-3.3.85×10-4x3x5≤1,-x2x5-0.42x1x2-0.30586x32≤-1.3303294×103,-x3x5-0.2668x1x3-0.40584x3x4≤-2.2751327×103,2.4186×10-4x2x5+1.0159×10-4x1x2+7.379×10-5x32≤1,2.9955×10-4x3x5+7.992×10-5x1x3+1.2157×10-4x3x4≤1,x∈X0={x∣78≤x1≤102,33≤x2≤45,27≤xi≤45,i=3,4,5}.
The optimal value is 1.0128×104.
We choose ε=1.0e-4; then the approximate optimal value satisfying the accuracy and the CPU running time are obtained; the results are shown in Table 1.
Table 1: Numerical results for Examples 1–8 (ε = 1.0e-4).

Example   Optimal solution within accuracy (or one among the solutions)   Approximate optimal value   Iterations   CPU (s)
1   (2.0000, 1.6667)                                 6.7778      33    8.301128
2   (2.5576, 3.1279)                                 118.3837    49    32.696565
3   (2.0000, 1.6667)                                 6.7778      29    6.676444
4   (1.0000, 5.5000)                                 1.0000      9     3.162106
5   (1.5000, 1.2247)                                 −1.1629     17    4.429806
6   (1.0156, 1.5594)                                 −31.8878    130   54.024140
7   (0.4267, 0.5879)                                 −3.3304     20    5.410943
8   (78.0000, 33.0001, 29.9958, 44.9998, 36.7753)    10128       98    193.921992
Table 2: Different results of Example 1 under different accuracy.

ε        Approximate optimal value   Optimal value
1.0e-2   6.777772334392922           6.784953802104409
1.0e-3   6.777777695638590           6.778685210977349
1.0e-4   6.777777810403491           6.777777840618791
6. Conclusion
In this paper, we presented a branch and bound reduced algorithm for solving quadratic programming problems with quadratic constraints. By constructing a linear relaxation programming problem, a lower bound of the optimal value of the original problem can be obtained. Meanwhile, we used a rectangle reduction technique to improve the degree of approximation and accelerate the rate of convergence. Numerical experiments show the effectiveness of our algorithm.
Acknowledgment
The work is supported by the National Natural Science Foundation of China under Grant no. 11161001.
References

[1] T. Van Voorhis, "A global optimization algorithm using Lagrangian underestimates and the interval Newton method," Journal of Global Optimization, vol. 24, no. 3, pp. 349–370, 2002.
[2] X. J. Zheng, X. L. Sun, and D. Li, "Convex relaxations for nonconvex quadratically constrained quadratic programming: matrix cone decomposition and polyhedral approximation," Mathematical Programming, vol. 129, no. 2, pp. 301–329, 2011.
[3] Y. Gao, H. Xue, and P. Shen, "A new rectangle branch-and-reduce approach for solving nonconvex quadratic programming problems," Applied Mathematics and Computation, vol. 168, no. 2, pp. 1409–1418, 2005.
[4] C. Audet, P. Hansen, B. Jaumard, and G. Savard, "A branch and cut algorithm for nonconvex quadratically constrained quadratic programming," Mathematical Programming, vol. 87, no. 1, pp. 131–152, 2000.
[5] S.-J. Qu, Y. Ji, and K.-C. Zhang, "A deterministic global optimization algorithm based on a linearizing method for nonconvex quadratically constrained programs," Mathematical and Computer Modelling, vol. 48, no. 11-12, pp. 1737–1743, 2008.
[6] H. Wu and K. Zhang, "A new accelerating method for global non-convex quadratic optimization with non-convex quadratic constraints," Applied Mathematics and Computation, vol. 197, no. 2, pp. 810–818, 2008.
[7] J. Linderoth, "A simplicial branch-and-bound algorithm for solving quadratically constrained quadratic programs," Mathematical Programming, vol. 103, no. 2, pp. 251–282, 2005.
[8] H. Tuy and N. T. Hoai-Phuong, "A robust algorithm for quadratic optimization under quadratic constraints," Journal of Global Optimization, vol. 37, no. 4, pp. 557–569, 2007.
[9] X. L. Sun, J. L. Li, and H. Z. Luo, "Convex relaxation and Lagrangian decomposition for indefinite integer quadratic programming," Optimization, vol. 59, no. 5-6, pp. 627–641, 2010.
[10] M. Salahi, "Convex optimization approach to a single quadratically constrained quadratic minimization problem," Central European Journal of Operations Research, vol. 18, no. 2, pp. 181–187, 2010.
[11] X. J. Zheng, X. L. Sun, and D. Li, "Nonconvex quadratically constrained quadratic programming: best D.C. decompositions and their SDP representations," Journal of Global Optimization, vol. 50, no. 4, pp. 695–712, 2011.
[12] X. Bao, N. V. Sahinidis, and M. Tawarmalani, "Semidefinite relaxations for quadratically constrained quadratic programming: a review and comparisons," Mathematical Programming, vol. 129, no. 1, pp. 129–157, 2011.
[13] D. S. Kim, N. N. Tam, and N. D. Yen, "Solution existence and stability of quadratically constrained convex quadratic programs," Optimization Letters, vol. 6, no. 2, pp. 363–373, 2012.
[14] S. Burer and H. Dong, "Representing quadratically constrained quadratic programs as generalized copositive programs," Operations Research Letters, vol. 40, no. 3, pp. 203–206, 2012.
[15] X. J. Zheng, X. L. Sun, D. Li, and Y. F. Xu, "On zero duality gap in nonconvex quadratic programming problems," Journal of Global Optimization, vol. 52, no. 2, pp. 229–242, 2012.