A New Spatial Branch and Bound Algorithm for Quadratic Program with One Quadratic Constraint and Linear Constraints

This paper proposes a novel second-order cone programming (SOCP) relaxation for a quadratic program with one quadratic constraint and several linear constraints (QCQP) that arises in various real-life fields. This new SOCP relaxation fully exploits the simultaneous matrix diagonalization technique, which has become an attractive tool in the area of quadratic programming. We first demonstrate that the new SOCP relaxation is as tight as the semidefinite programming (SDP) relaxation for the QCQP when the objective matrix and the constraint matrix are simultaneously diagonalizable. We further derive a spatial branch-and-bound algorithm based on the new SOCP relaxation in order to obtain the global optimal solution. Extensive numerical experiments compare the new SOCP relaxation-based branch-and-bound algorithm with the SDP relaxation-based branch-and-bound algorithm. The computational results illustrate that the new SOCP relaxation achieves a good balance between bound quality and computational efficiency and thus leads to a highly efficient global algorithm.


Introduction
In this paper, we consider a quadratic program with one quadratic constraint and several linear constraints in the following form:

min x^T Q_0 x + q_0^T x
s.t. x^T Q_1 x + q_1^T x ≤ c, Dx ≤ d,   (1)

where x ∈ R^n is the decision variable, Q_i ∈ R^{n×n} is a symmetric matrix and q_i ∈ R^n for i = 0, 1, D ∈ R^{m×n}, d ∈ R^m, and c ∈ R. Problem (1) covers many important combinatorial optimization and engineering problems, such as the binary quadratic programming problem [1], the max-cut problem [2], the quadratic knapsack problem [3,4], the binary least-squares problem [5], the image processing problem [6], the multiuser detection problem [7], the project selection and resource distribution problems [8], the multisensor beamforming problem [9], and the system equilibrium problem [10]. It is known that if Q_0 and Q_1 are both positive semidefinite, then the problem becomes convex and can be solved efficiently by SOCP methods [11]. In this paper, we assume that Q_0 is positive semidefinite while Q_1 is not, and that the feasible region of (1) is bounded with a nonempty relative interior.
As a quadratically constrained quadratic programming problem, a typical approach to solving (1) is the branch-and-bound method [12,13]. Note that the tightness of the convex relaxation of (1) and the efficiency of solving it at each node are critical factors affecting the performance of a branch-and-bound algorithm. Thus, designing an effective convex relaxation of (1) is a hot topic in the literature. To the best of our knowledge, the SDP relaxation has become an attractive approach for obtaining good convex relaxations [14,15].
Though the SDP relaxation is tight, it tremendously enlarges the dimension of the problem by lifting the original n-dimensional variable vector x to an (n + 1) × (n + 1) variable matrix. Consequently, solving the SDP relaxation of a large-scale problem can take a very long computational time. Hence, designing a convex relaxation that can be solved efficiently even for large-scale problems while maintaining the strength of the relaxation has been investigated in the literature [16,17].
In this paper, we develop a new SOCP relaxation for (1) employing the simultaneous matrix diagonalization technique. We first give the following definition.

Definition 1. M ∈ R^{n×n} and N ∈ R^{n×n} are called simultaneously diagonalizable (SD) if there exists a nonsingular matrix F ∈ R^{n×n} such that F^T M F and F^T N F are both diagonal matrices.
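Numerically, when one of the two matrices is positive definite (the setting of Lemma 1 below), a simultaneously diagonalizing F can be obtained from the generalized eigenproblem N v = λ M v: the M-orthonormal eigenvectors form a congruence F with F^T M F = I and F^T N F diagonal. The following sketch is an illustration of this fact using `scipy.linalg.eigh`, not code from the paper:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)  # symmetric positive definite
N = rng.standard_normal((n, n)); N = (N + N.T) / 2            # symmetric, possibly indefinite

# Generalized eigenproblem N v = lam M v; eigh(N, M) returns eigenvectors
# normalized so that F.T @ M @ F = I, hence F.T @ N @ F = diag(lam).
lam, F = eigh(N, M)

assert np.allclose(F.T @ M @ F, np.eye(n), atol=1e-8)
assert np.allclose(F.T @ N @ F, np.diag(lam), atol=1e-8)
```

The congruence F is nonsingular but in general not orthogonal, which is exactly what Definition 1 requires.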
In the past few years, many studies have shown that, for some quadratically constrained quadratic programming problems, embedding the simultaneous matrix diagonalization technique into a convex relaxation may result in a tighter relaxation than one without it. Ben-Tal and den Hertog [18] showed that a quadratic program with one or two quadratic constraints has a hidden conic quadratic representation if the matrices in the quadratic forms are simultaneously diagonalizable. Jiang and Li [19] also applied simultaneous diagonalization techniques to solve a quadratic programming problem. Zhou and Xu [20] proposed a simultaneous diagonalization-based SOCP relaxation for the convex quadratic program with linear complementarity constraints. They also designed a new SOCP relaxation for the generalized trust-region problem and provided a sufficient condition under which the proposed SOCP relaxation is exact [21]. The main contributions of this paper are twofold.
(1) We first decompose the matrix Q_1 according to the signs of its eigenvalues such that Q_1 = A − B, where A and B are both positive semidefinite. Then, we propose a new SOCP relaxation via simultaneous diagonalization of the two positive semidefinite matrices Q_0 and B. We further show that the new SOCP relaxation is as tight as the SDP relaxation when Q_0 is positive definite or Q_1 is negative semidefinite.
(2) We derive a spatial branch-and-bound algorithm based on the new SOCP relaxation, and extensive numerical results show that the proposed algorithm outperforms the branch-and-bound algorithm derived from the SDP relaxation. This implies that the new SOCP relaxation balances bound tightness and computing efficiency better than the SDP relaxation does.

The rest of the paper is organized as follows. In Section 2, a new SOCP relaxation for (1) is introduced. We also show that the new SOCP relaxation is as tight as the SDP relaxation in certain cases. In Section 3, we propose a spatial branch-and-bound algorithm based on the new SOCP relaxation. Numerical experiments illustrating the effectiveness of the proposed algorithm are conducted in Section 4. Finally, some concluding remarks are provided in Section 5.
Notations. For two n × n real matrices A = (A_ij) and B = (B_ij), A^{-1} represents the inverse of A, and A · B = trace(A^T B) = Σ_{i,j=1}^n A_ij B_ij. For a real symmetric matrix X, X ≽ 0 and X ≻ 0 mean that X is positive semidefinite and positive definite, respectively. I_n denotes the n × n identity matrix, and e is the vector with all elements equal to 1. Given a vector a ∈ R^n, diag(a) denotes the n × n diagonal matrix with diagonal elements equal to the entries of a.

A Simultaneous Diagonalization-Based SOCP Relaxation
In this section, we derive a new SOCP relaxation by exploiting a difference-of-convex (DC) decomposition together with the simultaneous diagonalization of two positive semidefinite matrices. First of all, we present two lemmas concerning simultaneous matrix diagonalization that play a central role in this paper.
Lemma 1 (see [22,23]). If M ∈ R^{h×h} and N ∈ R^{h×h} are two symmetric matrices and M ≻ 0, then there exists a nonsingular matrix F ∈ R^{h×h} such that F^T M F and F^T N F are both diagonal matrices.
Lemma 2 (see [22,23]). If M ∈ R^{h×h} and N ∈ R^{h×h} are two positive semidefinite matrices, then there exists a nonsingular matrix F ∈ R^{h×h} such that F^T M F and F^T N F are both diagonal matrices.
Let r be the number of negative eigenvalues of Q_1; without loss of generality, suppose that the first r eigenvalues of Q_1 are negative. Thus, Q_1 = A − B with A = Σ_{i=r+1}^n σ_i η_i η_i^T and B = −Σ_{i=1}^r σ_i η_i η_i^T, where σ_1, ..., σ_n are the eigenvalues and η_i, i = 1, ..., n, are the corresponding eigenvectors of Q_1, so that A and B are both positive semidefinite.
Since Q_0 ≽ 0, B ≽ 0, and rank(B) = r, the proof of Lemma 2 in [20] provides a method to find a nonsingular matrix F ∈ R^{n×n} such that F^T Q_0 F = diag(μ_1, ..., μ_n) and F^T B F = diag(β_1, ..., β_r, 0, ..., 0). Let x = Fy, p_0 = F^T q_0, and p_1 = F^T q_1; then, (1) can be reformulated as

min Σ_{j=1}^n μ_j y_j^2 + p_0^T y
s.t. y^T F^T A F y − Σ_{j=1}^r β_j y_j^2 + p_1^T y ≤ c, DFy ≤ d.   (2)

Problem (2) still cannot be solved in polynomial time, as the quadratic constraint remains nonconvex. Then, we derive a new SOCP relaxation for (1) by introducing auxiliary variables t_i for i = 1, ..., r:

min Σ_{j=1}^r μ_j t_j + Σ_{j=r+1}^n μ_j y_j^2 + p_0^T y
s.t. y^T F^T A F y − Σ_{j=1}^r β_j t_j + p_1^T y ≤ c, y_j^2 ≤ t_j for j = 1, ..., r, DFy ≤ d.   (3)

In general, the new SOCP relaxation provides a weaker lower bound than the SDP relaxation. However, while the SDP relaxation lifts the original n-dimensional variable vector to an (n + 1) × (n + 1) variable matrix, the new SOCP relaxation only lifts it to an (n + r)-dimensional variable vector, which implies that (3) can be solved more quickly than (4). Therefore, the SOCP relaxation presents greater potential in some real-life applications.
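The eigenvalue-sign splitting Q_1 = A − B underlying the relaxation can be sketched as follows; `dc_split` is a hypothetical helper (not the paper's code) that collects the nonnegative-eigenvalue part into A and the negated negative part into B, so that both are positive semidefinite:

```python
import numpy as np

def dc_split(Q1):
    """Split a symmetric matrix as Q1 = A - B with A, B both PSD,
    according to the signs of the eigenvalues of Q1."""
    sigma, eta = np.linalg.eigh(Q1)                   # Q1 = eta @ diag(sigma) @ eta.T
    A = eta @ np.diag(np.maximum(sigma, 0)) @ eta.T   # nonnegative-eigenvalue part
    B = eta @ np.diag(np.maximum(-sigma, 0)) @ eta.T  # negated negative part
    return A, B

rng = np.random.default_rng(1)
Q1 = rng.standard_normal((5, 5)); Q1 = (Q1 + Q1.T) / 2
A, B = dc_split(Q1)
assert np.allclose(A - B, Q1)
assert np.all(np.linalg.eigvalsh(A) >= -1e-9)
assert np.all(np.linalg.eigvalsh(B) >= -1e-9)
```

Here rank(B) equals the number r of negative eigenvalues of Q1, matching the construction in the text.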
Next, we will show that (3) is as tight as the SDP relaxation under certain circumstances. The SDP relaxation of (1) is [14]:

min Q_0 · X + q_0^T x
s.t. Q_1 · X + q_1^T x ≤ c, Dx ≤ d, X ≽ xx^T.   (4)

Suppose that the nonsingular matrix F from (3) also diagonalizes A, say F^T A F = diag(α_1, ..., α_n); this in fact implies that Q_0 and Q_1 are simultaneously diagonalizable. In such a case, the new SOCP relaxation takes the separable form

min Σ_{j=1}^r μ_j t_j + Σ_{j=r+1}^n μ_j y_j^2 + p_0^T y
s.t. Σ_{j=1}^r (α_j − β_j) t_j + Σ_{j=r+1}^n α_j y_j^2 + p_1^T y ≤ c, y_j^2 ≤ t_j for j = 1, ..., r, DFy ≤ d.   (5)

Theorem 1. When Q_0 and Q_1 are simultaneously diagonalizable, (5) is as tight as (4).
Proof. On one hand, if (y, t) is a feasible solution of (5), then let x = Fy and X = F(yy^T + diag(t_1 − y_1^2, ..., t_r − y_r^2, 0, ..., 0))F^T. It is easy to see that X ≽ xx^T, that Q_0 · X + q_0^T x equals the objective value of (5) at (y, t), and that Q_1 · X + q_1^T x ≤ c. This implies that (x, X) is a feasible solution of (4), and the optimal objective function value of (4) is no more than that of (5).

On the other hand, if (x, X) is a feasible solution of (4), then let y = F^{-1}x and t_j = (F^{-1}X F^{-T})_{jj} for j = 1, ..., r; since X ≽ xx^T, we have t_j ≥ y_j^2. This implies that (y, t) is a feasible solution of (5) whose objective value is no more than Q_0 · X + q_0^T x, and hence the optimal objective function value of (5) is no more than that of (4). Therefore, (5) is as tight as (4). □

Proposition 1. When Q_0 is positive definite or Q_1 is negative semidefinite, (5) is as tight as (4).

Proof. If Q_0 is positive definite or Q_1 is negative semidefinite, then Q_0 and Q_1 are simultaneously diagonalizable according to Lemmas 1 and 2. Hence, (5) is as tight as (4) according to Theorem 1.

□
In what follows, we use two simple examples to show that (5) is as tight as (4) under the conditions of Proposition 1, while (5) can be solved faster than (4).
Example 1. The optimal objective function value of (4) is 7.9022, and the CPU time is 0.0448 seconds. Since Q_0 is positive definite, Q_0 and Q_1 are simultaneously diagonalizable, and (5) attains the same optimal value in less CPU time.

Example 2. The optimal objective function value of (4) is −1, and the CPU time is 0.1364 seconds. Since Q_1 is negative definite, we can find a nonsingular matrix F that simultaneously diagonalizes Q_0 and Q_1, and (5) again attains the same optimal value in less CPU time.

A Spatial Branch-and-Bound Algorithm
In this section, we develop a spatial branch-and-bound algorithm based on (3) for (1). Kim and Kojima [24] pointed out that it is necessary to add appropriate constraints on the auxiliary variables to improve the lower bound. In order to strengthen the relaxation and design a branch-and-bound algorithm, we add reformulation-linearization technique (RLT) constraints to (3). The feasible region of (1) is bounded, so there exist a lower bound l_j and an upper bound u_j for each y_j, j = 1, ..., r. Therefore, we can add the r RLT constraints t_j ≤ (l_j + u_j)y_j − l_j u_j, j = 1, ..., r, to (3). The new SOCP relaxation with RLT constraints takes the form

min Σ_{j=1}^r μ_j t_j + Σ_{j=r+1}^n μ_j y_j^2 + p_0^T y   (10)

subject to the constraints of (3), the RLT constraints t_j ≤ (l_j + u_j)y_j − l_j u_j, and l_j ≤ y_j ≤ u_j for j = 1, ..., r. The initial bounds l_j^0 and u_j^0 of y_j, j = 1, ..., r, are obtained by solving the 2r linear programs l_j^0 = min{y_j : DFy ≤ d} and u_j^0 = max{y_j : DFy ≤ d}. Taking Example 1 as an instance, the optimal value of (10) is 15.3976, which is better than the optimal value 7.9022 of (3).

Lemma 3. Let (y, t) be an optimal solution of (10) over the initial box [l^0, u^0]. If y_j^2 = t_j for j = 1, ..., r, then x = Fy is an optimal solution of (1).
Proof. If y_j^2 = t_j for j = 1, ..., r, then y is a feasible solution of (2) whose objective value equals the lower bound provided by (10); consequently, x = Fy is an optimal solution of (1). □

According to Lemma 3, if (y, t) is not an optimal solution, then there must exist some j ∈ {1, ..., r} satisfying t_j > y_j^2. Thus, we can select an index j* and split the initial box [l^0, u^0] into two new boxes [l^a, u^a] and [l^b, u^b] with l^a = l^0 and u^a = u^0 except u^a_{j*} = (l^0_{j*} + u^0_{j*})/2, and l^b = l^0 and u^b = u^0 except l^b_{j*} = (l^0_{j*} + u^0_{j*})/2. Consequently, we generate two new nodes over [l^a, u^a] and [l^b, u^b], respectively, in the branch-and-bound tree. Before describing the spatial branch-and-bound algorithm, we give the following definition.
Definition 2. For a given ε > 0 and a vector y ∈ R^n, let x = Fy. If x^T Q_1 x + q_1^T x ≤ c + ε, then x is called an ε-feasible solution of (1). Let V_QCQP be the optimal objective function value of (1). If the ε-feasible solution x also satisfies x^T Q_0 x + q_0^T x ≤ V_QCQP + ε, then x is called an ε-optimal solution of (1). The proposed algorithm is presented in Algorithm 1. Although the branch-and-bound algorithm based on the SDP relaxation is not detailed here, it can be obtained by replacing (3) with (4) in the proposed algorithm.
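The ε-feasibility test of Definition 2 amounts to a single quadratic-form evaluation. A minimal sketch, with a hypothetical function name and the paper's tolerance ε = 5e−4 as the default:

```python
import numpy as np

def is_eps_feasible(x, Q1, q1, c, eps=5e-4):
    """Check the eps-feasibility condition of Definition 2:
    x^T Q1 x + q1^T x <= c + eps."""
    return float(x @ Q1 @ x + q1 @ x) <= c + eps

Q1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # indefinite constraint matrix
q1 = np.zeros(2)
x = np.array([1.0, 1.0])                  # x^T Q1 x = 2
assert is_eps_feasible(x, Q1, q1, c=2.0)      # exactly on the boundary
assert not is_eps_feasible(x, Q1, q1, c=1.0)  # violated by more than eps
```

The ε-optimality test is analogous, comparing x^T Q_0 x + q_0^T x against the incumbent bound.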
Next, we will prove that Algorithm 1 converges after exploring finitely many nodes and returns an ε-optimal solution.

Lemma 4. Suppose that the node (y^k, t^k, lb^k, l^k, u^k) is chosen from D in Line 11 of Algorithm 1 such that lb^k = min{lb^i | lb^i ∈ D} and j* = argmax_{j∈{1,...,r}} λ_j(t_j^k − (y_j^k)^2). For any ε > 0, there exists a δ > 0 such that if u^k_{j*} − l^k_{j*} ≤ δ, then Algorithm 1 terminates in Line 13.
Proof. Combining the RLT constraint t^k_{j*} ≤ (l^k_{j*} + u^k_{j*})y^k_{j*} − l^k_{j*}u^k_{j*} with t^k_{j*} ≥ (y^k_{j*})^2 gives t^k_{j*} − (y^k_{j*})^2 ≤ (u^k_{j*} − l^k_{j*})^2/4 ≤ δ^2/4. Since j* maximizes λ_j(t^k_j − (y^k_j)^2), every weighted gap λ_j(t^k_j − (y^k_j)^2) is at most λ_{j*}δ^2/4. Hence, for δ small enough, x^k = Fy^k is ε-feasible and its objective value differs from the lower bound lb^k by at most ε, so the termination test in Line 13 succeeds. □
Thus, we aim to reduce Σ_{j=1}^r λ_j(t^k_j − (y^k_j)^2) in the branch-and-bound algorithm. Hence, we choose j* = argmax_{j∈{1,...,r}} λ_j(t^k_j − (y^k_j)^2) as the variable selection strategy.
Proof. Algorithm 1 shows that if the algorithm does not terminate in Line 13, then a chosen box is split in half and two new boxes are generated. After exploring k nodes, the initial box [l^0, u^0] has been split into k + 1 boxes. If Algorithm 1 does not obtain an ε-optimal solution in Line 13, it is easy to check that, for each box [l^k, u^k] among those k + 1 boxes, u^k_j − l^k_j ≥ min{δ/2, u^0_j − l^0_j} for j = 1, ..., r. Otherwise, if there were a j* such that u^k_{j*} − l^k_{j*} ≤ δ and the j*-th edge had been selected as a branching direction, then it would follow from Lemma 4 that x^k = Fy^k is an ε-optimal solution, contradicting the assumption that Algorithm 1 does not obtain an ε-optimal solution in Line 13 at node k. Hence, the volume of each box is no smaller than Π_{j=1}^r min{δ/2, u^0_j − l^0_j}. Since the total volume of the k + 1 boxes cannot exceed that of the initial box [l^0, u^0], Algorithm 1 returns an ε-optimal solution after exploring a finite number of nodes.
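The bisection step used throughout Section 3 can be sketched as a standalone helper (hypothetical code, not the paper's implementation; the paper's Algorithm 1 embeds this step in the full branch-and-bound loop):

```python
import numpy as np

def split_box(l, u, j_star):
    """Bisect the box [l, u] along coordinate j_star, producing the
    two child boxes [l^a, u^a] and [l^b, u^b] used as new nodes."""
    mid = (l[j_star] + u[j_star]) / 2.0
    la, ua = l.copy(), u.copy()
    lb, ub = l.copy(), u.copy()
    ua[j_star] = mid   # left child:  u_{j*} replaced by the midpoint
    lb[j_star] = mid   # right child: l_{j*} replaced by the midpoint
    return (la, ua), (lb, ub)

l = np.array([-1.0, -2.0]); u = np.array([1.0, 2.0])
(la, ua), (lb, ub) = split_box(l, u, 1)
assert ua[1] == 0.0 and lb[1] == 0.0          # split at the midpoint of [-2, 2]
assert la[0] == -1.0 and ub[0] == 1.0         # other coordinates unchanged
```

Each split halves the width along j*, which is why the RLT gap bound (u_{j*} − l_{j*})^2/4 used in the convergence argument shrinks to zero along any infinite branch.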

Numerical Experiments
In this section, we compare the proposed algorithm (SOCP_BB) with Algorithm 1 in [25], which is an SDP relaxation-based branch-and-bound algorithm (SDP_BB). These algorithms are implemented in MATLAB R2013b on a PC with Windows 7, a 2.50 GHz Intel dual-core CPU, and 8 GB RAM.
The SDP relaxations are solved by SeDuMi 1.02 [26], and the SOCP relaxations are solved by the CPLEX solver 12.6.3. The error tolerance is set to ε = 5e−4. The instances for the experiments are generated as follows. The entries of q_i are integers uniformly drawn at random from the interval [−8, 18] for i = 0, 1. The entries of the matrices Q_i are integers uniformly drawn at random from the interval [−100, 10] (see [17]). Then, Q_0 and Q_1 are replaced by their symmetric parts. Q_0 is decomposed as Q_0 = V D V^T, where D is the diagonal matrix of eigenvalues and V is the orthogonal matrix whose columns are the corresponding eigenvectors, and Q_0 is then replaced by V|D|V^T. We generate a vector a ∈ R^n with entries uniformly drawn at random from [0, 1]. Then, we let c = ceil(a^T q_1 + a^T Q_1 a), Q_0 = Q_0/|a^T q_0 + a^T Q_0 a|, q_0 = q_0/sqrt(|a^T q_0 + a^T Q_0 a|), D = [I_n; −I_n], and d = [e; 0], i.e., 0 ≤ x ≤ e. Five instances are generated for each given problem size in Table 1. The explored nodes (exp nodes) and the CPU time in seconds (CPU time) are displayed for each algorithm in Table 1. The symbol "-" means the instance cannot be solved within the maximum computing time of 10000 seconds. Table 1 shows the following:

(1) The average time cost per explored node, i.e., CPU time/exp nodes, of the proposed algorithm SOCP_BB is much lower than that of SDP_BB for each instance, and this advantage becomes more pronounced as n grows. This observation further illustrates that solving the SOCP relaxation is computationally more efficient than solving the SDP relaxation.
(2) For most instances, both the number of explored nodes and the CPU time of the proposed algorithm SOCP_BB are smaller than those of SDP_BB. This implies that the proposed SOCP relaxation offers a good trade-off between computing efficiency and bound quality for (1).
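The instance-generation procedure described above can be sketched as follows. This is a reimplementation under stated assumptions (e.g., that D x ≤ d encodes 0 ≤ x ≤ e); the original MATLAB code is not reproduced in the paper:

```python
import numpy as np

def generate_instance(n, seed=0):
    """Generate one random QCQP instance following the procedure in the text."""
    rng = np.random.default_rng(seed)
    q0 = rng.integers(-8, 19, size=n).astype(float)        # integers in [-8, 18]
    q1 = rng.integers(-8, 19, size=n).astype(float)
    Q0 = rng.integers(-100, 11, size=(n, n)).astype(float)  # integers in [-100, 10]
    Q1 = rng.integers(-100, 11, size=(n, n)).astype(float)
    Q0, Q1 = (Q0 + Q0.T) / 2, (Q1 + Q1.T) / 2               # symmetric parts
    # Make Q0 positive semidefinite: Q0 = V |D| V^T.
    w, V = np.linalg.eigh(Q0)
    Q0 = V @ np.diag(np.abs(w)) @ V.T
    # Pick a in [0,1]^n and set c so that a is feasible for the quadratic constraint.
    a = rng.uniform(0.0, 1.0, size=n)
    c = float(np.ceil(a @ q1 + a @ Q1 @ a))
    # Normalize the objective data.
    s = abs(a @ q0 + a @ Q0 @ a)
    Q0, q0 = Q0 / s, q0 / np.sqrt(s)
    # Linear constraints D x <= d encoding 0 <= x <= e (assumed reading of D, d).
    D = np.vstack([np.eye(n), -np.eye(n)])
    d = np.concatenate([np.ones(n), np.zeros(n)])
    return Q0, q0, Q1, q1, D, d, c

Q0, q0, Q1, q1, D, d, c = generate_instance(6)
assert np.all(np.linalg.eigvalsh(Q0) >= -1e-8)   # Q0 is positive semidefinite
assert np.all(D @ np.zeros(6) <= d)              # x = 0 lies in the feasible box
```

The box constraints make the feasible region bounded with nonempty relative interior, matching the standing assumption on (1).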

Conclusion
By exploiting the simultaneous diagonalization technique, we derive a new SOCP relaxation for (1). Then, we prove that the SOCP relaxation is as tight as the well-known SDP relaxation in certain cases. Furthermore, we design a spatial branch-and-bound algorithm based on the new SOCP relaxation. Finally, numerical experiments comparing it with the SDP relaxation-based branch-and-bound algorithm demonstrate the efficiency of the proposed method, and the promising results illustrate that the simultaneous diagonalization-based SOCP relaxation indeed balances bound quality and computing time well.
Data Availability

The author declares that all data and material in the paper are available and verifiable.

Conflicts of Interest
The author declares no conflicts of interest.