Modified Three-Term Conjugate Gradient Method and Its Applications

We propose a modified three-term conjugate gradient method with the Armijo line search for solving unconstrained optimization problems. The proposed method possesses the sufficient descent property. Under mild assumptions, the global convergence of the proposed method with the Armijo line search is proved. Owing to its simplicity, low storage requirements, and good convergence properties, the proposed method is applied to solving M-tensor systems and a class of nonsmooth optimization problems with the ℓ1-norm. Finally, numerical experiments show the efficiency of the proposed method.


Introduction
We consider the following unconstrained optimization problem:

min_{x ∈ ℝⁿ} f(x),   (1)

where f : ℝⁿ → ℝ is a continuous function. It is well known that the nonlinear conjugate gradient method is one of the most effective methods for solving large-scale unconstrained optimization problems due to its simplicity and low storage [1][2][3][4][5][6][7][8]. Let x₀ be the initial approximation of the solution to (1); the general iteration of the nonlinear conjugate gradient method is

x_{k+1} = x_k + α_k d_k,

where the step length α_k can be obtained by some line search, e.g., [6][7][8], and the search direction d_k is computed by

d_k = −g_k + β_k d_{k−1},  d_0 = −g_0,

where g_k = ∇f(x_k) is the gradient of f at the point x_k and β_k is a parameter. Different choices of the parameter β_k correspond to different nonlinear conjugate gradient methods. The Fletcher–Reeves (FR) method, the Polak–Ribière–Polyak (PRP) method, the Hestenes–Stiefel (HS) method, the Dai–Yuan (DY) method, and the Conjugate Descent (CD) method are some famous nonlinear conjugate gradient methods [1,2,9–12], and their parameters β_k are, respectively, defined by

β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖²,
β_k^{PRP} = g_kᵀ y_{k−1} / ‖g_{k−1}‖²,
β_k^{HS} = g_kᵀ y_{k−1} / (d_{k−1}ᵀ y_{k−1}),
β_k^{DY} = ‖g_k‖² / (d_{k−1}ᵀ y_{k−1}),
β_k^{CD} = −‖g_k‖² / (d_{k−1}ᵀ g_{k−1}),

where y_{k−1} = g_k − g_{k−1} and ‖ · ‖ is the Euclidean norm. Because of the good numerical performance of the conjugate gradient method, the nonlinear three-term conjugate gradient method has received much attention from researchers in recent years, such as the three-term conjugate gradient method [5], the three-term form of the L-BFGS method [13], the three-term PRP conjugate gradient method [14], and a new-type conjugate gradient update parameter [15]. On the other hand, the Armijo line search is widely used in solving optimization problems; see, e.g., [8]. Therefore, in this paper, we propose a new modified three-term conjugate gradient method with the Armijo line search. The proposed method is used to solve M-tensor systems [16,17] and a class of nonsmooth optimization problems with the ℓ1-norm [18][19][20][21][22].
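As an illustration of the general iteration and direction above, the following sketch implements the classical two-term PRP method with an Armijo backtracking line search on a simple quadratic test function. The test function, the restart safeguard, and the parameter values (σ = 10⁻⁴, ρ = 0.5) are our own illustrative choices; this is not the paper's Algorithm 1.

```python
import math

def quad(x):
    # Strongly convex quadratic test function with minimizer (1, 1, 1).
    c = [1.0, 10.0, 100.0]
    return sum(ci * (xi - 1.0) ** 2 for ci, xi in zip(c, x))

def quad_grad(x):
    c = [1.0, 10.0, 100.0]
    return [2.0 * ci * (xi - 1.0) for ci, xi in zip(c, x)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def cg_prp_armijo(f, grad, x0, tol=1e-6, max_iter=5000):
    """Two-term PRP conjugate gradient with Armijo backtracking."""
    x = list(x0)
    g = grad(x)
    d = [-t for t in g]                       # d_0 = -g_0
    for _ in range(max_iter):
        if norm(g) <= tol:
            break
        gTd = sum(gi * di for gi, di in zip(g, d))
        if gTd >= 0.0:                        # safeguard: restart with steepest descent
            d = [-t for t in g]
            gTd = -norm(g) ** 2
        # Armijo backtracking: accept the largest alpha = rho^i with sufficient decrease.
        alpha, sigma, rho = 1.0, 1e-4, 0.5
        fx = f(x)
        for _ in range(60):
            if f([xi + alpha * di for xi, di in zip(x, d)]) <= fx + sigma * alpha * gTd:
                break
            alpha *= rho
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        beta = sum(gn * yi for gn, yi in zip(g_new, y)) / (norm(g) ** 2)  # PRP parameter
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

On this test function the iteration drives the gradient norm below the tolerance and returns a point close to (1, 1, 1).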
The remainder of this paper is organized as follows. In the next section, we present the new modified three-term conjugate gradient method. We first give the smooth case of the proposed method and prove its sufficient descent property and global convergence. Then, we give the nonsmooth case of the proposed method. In Section 3, we present M-tensor systems and a class of nonsmooth minimization problems with the ℓ1-norm, both of which can be solved by the proposed method, and we report numerical results showing the efficiency of the method. Section 4 concludes the paper.

Modified Three-Term Conjugate Gradient Method
In this section, we consider the nonlinear conjugate gradient method for solving (1). We discuss the problem in two cases: (1) f is a smooth function; (2) f is a nonsmooth function.

Smooth Case.
Based on the nonlinear conjugate gradient methods in [5,8], we propose a modified three-term conjugate gradient method with the Armijo line search. We consider the search direction d_k given by (5), with the parameters defined in (6) and (7). From (5), (6), and (7), we can obtain that d_k satisfies the sufficient descent condition g_kᵀ d_k ≤ −c‖g_k‖² for some constant c > 0. Now, we present the modified three-term conjugate gradient method.
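The concrete formulas (5)–(7) are not reproduced here; as a hedged illustration of the three-term structure, the sketch below implements the well-known three-term PRP direction of [14], d_k = −g_k + β_k d_{k−1} − θ_k y_{k−1}, which satisfies g_kᵀ d_k = −‖g_k‖² by construction. It stands in for, but is not, the proposed direction.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def three_term_prp_direction(g, g_prev, d_prev):
    """Three-term PRP direction in the spirit of [14]:
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1},
    with beta_k = g_k^T y_{k-1} / ||g_{k-1}||^2 and
    theta_k = g_k^T d_{k-1} / ||g_{k-1}||^2."""
    y = [a - b for a, b in zip(g, g_prev)]
    denom = dot(g_prev, g_prev)
    beta = dot(g, y) / denom
    theta = dot(g, d_prev) / denom
    return [-gi + beta * di - theta * yi
            for gi, di, yi in zip(g, d_prev, y)]
```

A direct computation shows g_kᵀ d_k = −‖g_k‖² (the β and θ terms cancel), so this direction is a descent direction independently of the line search.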
Next, we will give the global convergence analysis of Algorithm 1. Firstly, we give the following assumptions.
Assumption 2. The level set Ω₀ = {x ∈ ℝⁿ | f(x) ≤ f(x₀)} is bounded; i.e., there exists a positive constant B > 0 such that ‖x‖ ≤ B for all x ∈ Ω₀.

Assumption 3. In a neighborhood Ω of Ω₀, f is continuously differentiable and its gradient g is Lipschitz continuous; that is, there exists a positive constant L > 0 such that

‖g(x) − g(y)‖ ≤ L‖x − y‖,  ∀x, y ∈ Ω.

Proof. Firstly, we prove that there exists a constant c > 0 such that (13) holds for all sufficiently large k. The proof of (13) can be divided into the following two cases.
Now we can get the global convergence of Algorithm 1.
Theorem 6. Suppose {x_k} and {g_k} are generated by Algorithm 1; then

lim inf_{k→∞} ‖g_k‖ = 0.

Proof. Using a technique similar to that of Theorem 3.1 in [5], we can prove this theorem.

Nonsmooth Case.
If, for any fixed μ > 0, f̃(·, μ) is continuously differentiable on ℝⁿ and, for any x ∈ ℝⁿ, lim_{z→x, μ↓0} f̃(z, μ) = f(x), then we call f̃ : ℝⁿ × ℝ₊ → ℝ a smoothing function of f.
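As a concrete instance of this definition, the nonsmooth function f(t) = |t| admits the standard smoothing function f̃(t, μ) = √(t² + μ²); this particular choice is our illustration, not necessarily the smoothing function used in the paper.

```python
import math

def smooth_abs(t, mu):
    # Smoothing of f(t) = |t|: continuously differentiable for mu > 0,
    # and smooth_abs(t, mu) -> |t| as mu -> 0.
    return math.sqrt(t * t + mu * mu)

def smooth_abs_grad(t, mu):
    # The gradient exists everywhere for mu > 0, including at t = 0,
    # where |t| itself is not differentiable.
    return t / math.sqrt(t * t + mu * mu)
```

Note that 0 ≤ f̃(t, μ) − |t| ≤ μ uniformly in t, so the smoothing error is controlled directly by the parameter μ.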
Denote g_k = ∇_x f̃(x_k, μ_k). Now, we present the following smoothing modified three-term conjugate gradient method.
Step 2. Compute the search direction d_k by using g_k and d_{k−1}, where y_{k−1} = g_k − g_{k−1}.
Next, we give the global convergence analysis of Algorithm 9.

Applications
In this section, applications of the proposed modified three-term conjugate gradient method are given. The conjugate gradient method is suitable for solving unconstrained optimization problems. In the first subsection, we consider M-tensor systems, which can be transformed into an unconstrained minimization problem and solved by Algorithm 1. In the second subsection, we consider a class of nonsmooth optimization problems with the ℓ1-norm, which can be solved by Algorithm 9. In each subsection, numerical results are given to show the feasibility of the proposed method.

Applications in Solving M-Tensor Systems.
In this subsection, we consider M-tensor systems, which can be transformed into a general unconstrained minimization problem; we use Algorithm 1 to solve it. The problem of tensor systems [16,17] is an important problem in tensor optimization [23][24][25][26]. We consider the tensor system

A x^{m−1} = b,   (31)

where A ∈ C^{[m,n]} := C^{n×n×⋯×n} and b ∈ C^n. The i-th element of the left-hand side of (31) is defined as

(A x^{m−1})_i = Σ_{i₂,…,i_m=1}^{n} a_{i i₂⋯i_m} x_{i₂} ⋯ x_{i_m}.

If z ∈ C^n \ {0} and λ ∈ C satisfy

A z^{m−1} = λ z^{[m−1]},

where z^{[m−1]} = (z₁^{m−1}, …, z_n^{m−1})ᵀ, then we call λ an eigenvalue of A and z a corresponding eigenvector of A [25]. The spectral radius [26] of a tensor A is defined as

ρ(A) = max{|λ| : λ is an eigenvalue of A}.

Let I ∈ C^{[m,n]} be the identity tensor [17], i.e., its entry with indices i₁, i₂, …, i_m equals 1 if i₁ = i₂ = ⋯ = i_m and 0 otherwise, for all 1 ≤ i₁, i₂, …, i_m ≤ n. If there exist a nonnegative tensor B and a positive real number s ≥ ρ(B) such that A = sI − B, then the tensor A is called an M-tensor [16]; if s > ρ(B), it is called a nonsingular M-tensor. Suppose A is a nonsingular M-tensor; then, for every positive vector b, (31) has a unique positive solution [16]. Thus (31) can be transformed into the following unconstrained minimization problem:

min_x f(x) = (1/2) ‖A x^{m−1} − b‖².

Now, we present numerical experiments for solving M-tensor systems. Some examples are taken from [16]. We implement Algorithm 1 in Matlab R2014a with Tensor Toolbox 2.6 on a laptop with an Intel(R) Core(TM) i5-2520M CPU (2.50 GHz) and 4.00 GB of RAM. The parameter values used in the algorithm are 0.2, 0.25, 10⁻⁶, and 0.6.
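To make the reformulation concrete, the following sketch evaluates A x^{m−1} for m = 3 and the least-squares residual f(x) = ½‖A x^{m−1} − b‖². The tiny diagonal test tensor below is our own hypothetical example, not Example 11 from the paper.

```python
def tensor_apply(A, x):
    """(A x^2)_i = sum_{j,k} a_{ijk} x_j x_k for a 3rd-order tensor A
    stored as a nested n x n x n list."""
    n = len(x)
    return [sum(A[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
            for i in range(n)]

def residual(A, x, b):
    # f(x) = 0.5 * ||A x^{m-1} - b||^2, the least-squares reformulation of (31).
    r = [ai - bi for ai, bi in zip(tensor_apply(A, x), b)]
    return 0.5 * sum(t * t for t in r)

# Diagonal (identity-like) test tensor: a_iii = 1, all other entries 0.
n = 2
A = [[[1.0 if i == j == k else 0.0 for k in range(n)] for j in range(n)]
     for i in range(n)]
b = [4.0, 9.0]   # here A x^2 = b has the positive solution x = (2, 3)
```

A minimizer of the residual with f(x) = 0 is exactly a solution of the tensor system, which is what Algorithm 1 searches for.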
Example 11. Consider (31) with a 3rd-order 2-dimensional M-tensor, where A = sI − B ∈ ℝ^{2×2×2}, B contains the entries b_{ijk} = 1 with i = 1, 2 and j, k ≥ i, and all other entries are zeros. Let s = 10 > ρ(B). Hence A is an upper triangular nonsingular M-tensor. The starting point x₀ is set to rand(n, 1) and b is set to ones(n, 1). The numerical results are given in Table 1 and Figure 1.

Example 12. In this example, A is a symmetric nonsingular M-tensor. The starting point x₀ is set to rand(n, 1) and b is set to ones(n, 1).
When n = 2, the corresponding numerical results are given in Table 2 and Figure 2.
Applications in Solving Nonsmooth Problems with the ℓ1-Norm.
In this subsection, we consider a class of nonsmooth minimization problems with the ℓ1-norm. We wish to find x ∈ ℝⁿ such that Ax = b, where A ∈ ℝ^{m×n} and b ∈ ℝ^m; this leads to the minimization problem

min_x (1/2)‖Ax − b‖² + τ‖x‖₁,   (40)

where τ > 0 is a regularization parameter. Now, we give some numerical experiments for Algorithm 9 on problems also considered in [19,21,22,27,28]. The numerical results of all examples indicate that the modified three-term conjugate gradient method is also effective for solving the ℓ1-norm minimization problem (40). In our numerical experiments, all codes run in Matlab R2014a. For Examples 13 and 14, two of the parameters used in Algorithm 9 are set to 0.2, and the remaining ones to 0.5, 10⁻⁶, and 0.4. In Example 13, we choose m = 30, n = 100. The numerical results are given in Figure 4.
In Example 14, we take m = 200, n = 210. The numerical results are given in Figure 5.
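The experiments above minimize a nonsmooth ℓ1-regularized objective. The sketch below evaluates a smoothed version, F̃(x, μ) = ½‖Ax − b‖² + τ Σᵢ √(xᵢ² + μ²), and its gradient, which is the kind of smooth surrogate a smoothing conjugate gradient method such as Algorithm 9 can work with. The concrete smoothing and data here are our illustrative assumptions, not the paper's exact setup.

```python
import math

def smoothed_l1_obj(A, b, x, tau, mu):
    # F(x) = 0.5 * ||Ax - b||^2 + tau * sum_i sqrt(x_i^2 + mu^2)
    r = [sum(aij * xj for aij, xj in zip(row, x)) - bi
         for row, bi in zip(A, b)]
    return (0.5 * sum(t * t for t in r)
            + tau * sum(math.sqrt(t * t + mu * mu) for t in x))

def smoothed_l1_grad(A, b, x, tau, mu):
    # Gradient: A^T (Ax - b) + tau * x_j / sqrt(x_j^2 + mu^2) componentwise.
    r = [sum(aij * xj for aij, xj in zip(row, x)) - bi
         for row, bi in zip(A, b)]
    return [sum(A[i][j] * r[i] for i in range(len(b)))
            + tau * x[j] / math.sqrt(x[j] * x[j] + mu * mu)
            for j in range(len(x))]
```

The gradient can be sanity-checked against central finite differences, which is a useful test before plugging any smoothed objective into a gradient-based solver.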

Conclusion
In this paper, we propose a modified three-term conjugate gradient method and give its applications in solving M-tensor systems and a class of nonsmooth optimization problems with the ℓ1-norm. The global convergence of the proposed method is proved. Finally, we present some numerical experiments to demonstrate the efficiency of the proposed method.
Mathematical Problems in Engineering

Figure 4: Numerical results for solving Example 13 with Algorithm 9.

Figure 6: Numerical results for solving Example 15 with Algorithm 9.

Table 1: The numerical results of Example 11.

Table 2: The numerical results of Example 12.