A Mixed Line Search Smoothing Quasi-Newton Method for Solving Linear Second-Order Cone Programming Problem

Firstly, we give the Karush-Kuhn-Tucker (KKT) optimality conditions of the primal problem and briefly introduce Jordan algebra. On the basis of Jordan algebra, we extend the smoothing Fischer-Burmeister (F-B) function to Jordan algebra and smooth the complementarity condition. The first-order optimality conditions can then be reformulated as a nonlinear system. Secondly, we use a mixed line search quasi-Newton method to solve this nonlinear system. Finally, we prove the global and local superlinear convergence of the algorithm.


Introduction
Linear second-order cone programming (SOCP) problems are convex optimization problems that minimize a linear function over the intersection of an affine linear manifold with the Cartesian product of second-order cones. Linear programming (LP), linear second-order cone programming (SOCP), and semidefinite programming (SDP) all belong to symmetric cone programming. LP is a special case of SOCP, and SOCP is a special case of SDP. SOCP can be solved by adapting algorithms for SDP, but SOCP also has efficient solution methods of its own. Nesterov and Todd [1, 2] carried out early research on primal-dual interior point methods. In recent years, solution methods for SOCP have developed quickly, and many scholars have concentrated on SOCP.
Besides interior point methods, semismooth and smoothing Newton methods can also be used to solve SOCP. In [3], the Karush-Kuhn-Tucker (KKT) optimality conditions of the primal-dual problem were reformulated as a semismooth nonlinear system, which was solved by a Newton method following the central path. In [4], the KKT optimality conditions of the primal-dual problem were reformulated as a smooth system of nonlinear equations, which was then solved by combining a Newton method with the central path. References [3, 4] proved global and local quadratic convergence of their algorithms.

Preliminaries and Algorithm
In this section, we introduce the Jordan algebra and derive the nonlinear system that comes from the Karush-Kuhn-Tucker (KKT) optimality conditions. Finally, we introduce two kinds of derivative-free line search rules.
Euclidean Jordan algebra is associated with second-order cones. For now we assume that all vectors consist of a single block $x = (x_0; \bar{x}) \in \mathbb{R}^n$. Associated with each vector $x \in \mathbb{R}^n$, there is an arrow-shaped matrix $\operatorname{Arw}(x)$ which is defined as follows:
$$\operatorname{Arw}(x) = \begin{pmatrix} x_0 & \bar{x}^{T} \\ \bar{x} & x_0 I_{n-1} \end{pmatrix}.$$
For two vectors $x$ and $y$, define the following multiplication:
$$x \circ y = \left(x^{T} y;\; x_0 \bar{y} + y_0 \bar{x}\right) = \operatorname{Arw}(x)\, y = \operatorname{Arw}(x) \operatorname{Arw}(y)\, e,$$
where $e = (1; 0)$ is the identity element of the algebra.
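To make these operations concrete, the following sketch implements the arrow-shaped matrix and the Jordan product in Python/NumPy; the helper names `arw` and `jordan_product` are ours, and the final assertion numerically checks the identity $x \circ y = \operatorname{Arw}(x)y$.

```python
import numpy as np

def arw(x):
    """Arrow-shaped matrix Arw(x) for a single-block vector x = (x0; x_bar)."""
    n = x.size
    A = x[0] * np.eye(n)
    A[0, 1:] = x[1:]
    A[1:, 0] = x[1:]
    return A

def jordan_product(x, y):
    """Jordan product x o y = (x^T y; x0*y_bar + y0*x_bar) = Arw(x) y."""
    return np.concatenate(([x @ y], x[0] * y[1:] + y[0] * x[1:]))

# Quick numerical check of x o y = Arw(x) y on random single-block vectors.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
assert np.allclose(jordan_product(x, y), arw(x) @ y)
```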
It is well known that the vector $x \in \mathbb{R}^n$ has a spectral decomposition
$$x = \lambda_1 u^{1} + \lambda_2 u^{2},$$
where $\lambda_1, \lambda_2$ and $u^{1}, u^{2}$ are the spectral values and spectral vectors of $x$, given by
$$\lambda_i = x_0 + (-1)^{i} \|\bar{x}\|, \qquad u^{i} = \tfrac{1}{2}\left(1;\, (-1)^{i} v\right),$$
where $i = 1, 2$, $v = \bar{x}/\|\bar{x}\|$ if $\bar{x} \neq 0$, and otherwise $v$ is any vector in $\mathbb{R}^{n-1}$ satisfying $\|v\| = 1$.
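The decomposition can be computed directly from these formulas. The sketch below (our own helper, continuing the NumPy code above) returns the spectral values and vectors and verifies that they reassemble $x$.

```python
import numpy as np

def spectral_decomposition(x):
    """Spectral values lam1 <= lam2 and spectral vectors u1, u2 of x = (x0; x_bar)."""
    x0, xbar = x[0], x[1:]
    norm = np.linalg.norm(xbar)
    if norm > 0:
        v = xbar / norm
    else:
        v = np.zeros_like(xbar)
        v[0] = 1.0  # when x_bar = 0, any unit vector v works
    lam1, lam2 = x0 - norm, x0 + norm
    u1 = 0.5 * np.concatenate(([1.0], -v))
    u2 = 0.5 * np.concatenate(([1.0], v))
    return lam1, lam2, u1, u2

x = np.array([2.0, 1.0, 0.5])
lam1, lam2, u1, u2 = spectral_decomposition(x)
assert np.allclose(lam1 * u1 + lam2 * u2, x)  # x = lam1*u1 + lam2*u2
```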
Apparently, Φ(x, y, s, μ) = 0 is equivalent to (8). Let z = (x, y, s, μ); then Φ(x, y, s, μ) = H(z). So the KKT optimality conditions are equivalent to the nonlinear system H(z) = 0, namely (12). Next, we solve (12) by the Broyden rank-one quasi-Newton method. When we solve (12) with a quasi-Newton method, the gradient or Jacobian does not appear, which reduces the amount of computation. However, the usual line searches, such as the Wolfe or Powell rules, are not suitable here. Thus, we suggest two kinds of derivative-free line search rules.
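As an illustration of the smoothing step, here is a minimal sketch of one common form of the smoothing F-B function over a single second-order cone, φ_μ(x, s) = x + s − (x ∘ x + s ∘ s + 2μ²e)^{1/2}, where the square root is taken in the Jordan algebra via the spectral decomposition. The exact smoothing function used in the paper may be parameterized differently; this version reuses `jordan_product` and `spectral_decomposition` from the sketches above.

```python
import numpy as np

def jordan_sqrt(w):
    """Jordan-algebra square root of w (requires w in the interior of the cone)."""
    lam1, lam2, u1, u2 = spectral_decomposition(w)
    return np.sqrt(lam1) * u1 + np.sqrt(lam2) * u2

def smoothing_fb(x, s, mu):
    """Smoothing F-B function phi_mu(x,s) = x + s - sqrt(x o x + s o s + 2 mu^2 e).
    For mu != 0 the square-root argument lies in the interior of the cone, so
    phi_mu is smooth; as mu -> 0 it approaches the nonsmooth F-B function."""
    e = np.zeros_like(x)
    e[0] = 1.0  # identity element e = (1; 0) of the Jordan algebra
    w = jordan_product(x, x) + jordan_product(s, s) + 2.0 * mu**2 * e
    return x + s - jordan_sqrt(w)
```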
In 1986, Griewank [6] put forward a kind of monotone line search. Set the merit function as in (13), and let the step size α_k satisfy the following inequality (14), where σ ∈ (0, 1/6) is a constant. Combining (13) and (14), we obtain a monotone decrease of the merit function; clearly, the line search yields a normal descent method.
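A backtracking implementation of such a monotone derivative-free search might look as follows. Since inequality (14) is not reproduced above, the acceptance test used here, ‖H(z + αd)‖² ≤ ‖H(z)‖² − σα²‖d‖², is an assumed stand-in, with σ ∈ (0, 1/6) as in the text.

```python
import numpy as np

def griewank_line_search(H, z, d, sigma=0.1, beta=0.5, max_backtracks=30):
    """Backtracking search for alpha satisfying a Griewank-type descent test.
    The acceptance inequality below is an assumed stand-in for (14)."""
    h0 = np.dot(H(z), H(z))  # squared residual ||H(z)||^2 at the current point
    dd = np.dot(d, d)
    alpha = 1.0
    for _ in range(max_backtracks):
        hz = H(z + alpha * d)
        if np.dot(hz, hz) <= h0 - sigma * alpha**2 * dd:
            return alpha
        alpha *= beta  # shrink the trial step and test again
    return None  # the search can fail, which motivates the second rule from [7]
```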
Because this line search rule may fail, many scholars have put forward various derivative-free line search rules; one of them is given in [7]. On the basis of the preceding discussion, we suggest a Broyden rank-one quasi-Newton method for solving SOCP, and we now present an algorithm for solving (12); a simplified sketch of the basic scheme is given below.
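For orientation, here is a minimal sketch of the overall scheme: a Broyden rank-one quasi-Newton iteration for H(z) = 0, combined with the derivative-free search above. It is a simplification of the paper's Algorithm A, which additionally mixes two line search rules and manages the smoothing parameter μ; we do not reproduce those details here.

```python
import numpy as np

def broyden_solve(H, z0, tol=1e-7, max_iter=200):
    """Broyden rank-one quasi-Newton method for the nonlinear system H(z) = 0."""
    z = np.asarray(z0, dtype=float)
    B = np.eye(z.size)  # B0 = identity, as in the numerical experiments
    for _ in range(max_iter):
        hz = H(z)
        if np.linalg.norm(hz) <= tol:  # stopping rule ||H(z_k)|| <= 10^-7
            return z
        d = np.linalg.solve(B, -hz)  # quasi-Newton direction: B_k d = -H(z_k)
        alpha = griewank_line_search(H, z, d)
        if alpha is None:
            alpha = 1e-4  # small fallback step if the search fails
        step = alpha * d
        y = H(z + step) - hz
        B += np.outer(y - B @ step, step) / (step @ step)  # Broyden rank-one update
        z = z + step
    return z
```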

Global Convergence
In this section, we prove the global convergence of the algorithm.
For this reason, we define some quantities used in the algorithm and give some hypotheses.

Proposition 3. The sequence {z_k} generated by the algorithm satisfies the stated descent inequality, and hence the level set is bounded, where the constant is the one defined in (17).
Based on results in [8, 9], we obtain the following.
Specifically, there is a subsequence of {‖H(z_k)‖} that converges to zero. Moreover, under the additional assumption, the whole sequence {‖H(z_k)‖} → 0. It is then clear from (50) that H(z̄) = 0, and therefore the result holds.

Local Superlinear Convergence
In this section, we prove the local superlinear convergence of the algorithm.
Lemma 12. If (H1) holds and the sequence {z_k} is generated by the algorithm, then there exist a constant δ > 0 and an index k̄ such that the stated estimate holds whenever ‖z_k − z*‖ ≤ δ and for all k sufficiently large. Due to (20), we obtain the required bound whenever ‖z_k − z*‖ ≤ δ. Hence, when z_k is sufficiently close to z* and k is large enough, the unit step α_k = 1 satisfies (62), which proves the conclusion.
Theorem 13. Assume that (H1) holds; then the sequence {z_k} generated by the algorithm converges superlinearly to the unique solution of (12).
Proof. From (65), it is sufficient to verify that the sequence {‖z_k − z*‖} → 0. Let δ and k̄ be determined by Lemma 12. By Lemmas 6 and 9, the stated limit holds, so there exists an index k̂ such that the required bound holds whenever k ≥ k̂. This implies the desired contraction estimate, and therefore ‖z_k − z*‖ → 0 as k → ∞. The proof is finished.

Numerical Experiments
In this section, we carry out a number of numerical experiments based on Algorithm A; the results show that the algorithm is effective. The algorithm stops when the condition ‖H(z_k)‖ ≤ 10⁻⁷ holds.
In the tables of test results, itr denotes the number of iterations; prec denotes the final residual ‖H(z_k)‖ when the algorithm stops; pval and dval denote the optimal values of the primal and dual problems of the test problems; and gap denotes the duality gap of the primal-dual pair. For the first nine experiments, the entries of the vectors b and c, the matrix A, and the initial points x, y, s are random numbers between 0 and 10, and B₀ is the identity matrix. In the first example, the initial point was (x, y, s, μ), with one cone, Q⁴ ⊂ ℝ⁴; the results are listed in Table 1. In another example, the initial point was (x, y, s, μ), with three cones, Q⁸ = Q² × Q³ × Q³; the results are listed in Table 3.
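The random test setup described above can be reproduced along the following lines. The uniform range [0, 10], the identity B₀, and the single-cone case Q⁴ ⊂ ℝ⁴ are from the text; the variable names and the number of rows of A are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
m, n = 4, 4  # illustrative dimensions of A for the single-cone case Q^4 in R^4
A = rng.uniform(0.0, 10.0, size=(m, n))  # coefficient matrix, entries in [0, 10]
b = rng.uniform(0.0, 10.0, size=m)       # right-hand side
c = rng.uniform(0.0, 10.0, size=n)       # cost vector
x0 = rng.uniform(0.0, 10.0, size=n)      # initial primal point
y0 = rng.uniform(0.0, 10.0, size=m)      # initial multiplier
s0 = rng.uniform(0.0, 10.0, size=n)      # initial dual slack
B0 = np.eye(n + m + n + 1)               # initial quasi-Newton matrix: identity
```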
In Table 4, we give the final numerical results of the next six experiments, where m and n denote the numbers of rows and columns of the matrix A, respectively, and NO denotes the index of the numerical experiment. In these experiments, we let the number of cones equal 1.
From Table 5, we can see that Algorithm A is efficient. The algorithm can solve not only the case of a dense coefficient matrix A but also the case of a sparse coefficient matrix A. From Table 5, we also see that the algorithm is efficient whether or not the cone is partitioned.

Example 16. The coefficients were chosen as