Modified Three-Term Liu–Storey Conjugate Gradient Method for Solving Unconstrained Optimization Problems and Image Restoration Problems

A new three-term conjugate gradient method is proposed in this article. The new method can solve unconstrained optimization problems, image restoration problems, and compressed sensing problems. The method is a convex combination of the steepest descent method and the classical LS method. Without any line search, the new method has the sufficient descent property and the trust region property. Unlike previous methods, information about the function f(x) is incorporated into the search direction d_k. Next, we make some reasonable assumptions and establish the global convergence of this method under a modified Armijo line search. The results of subsequent numerical experiments show that the new algorithm is more competitive than comparable algorithms and has good application prospects.


Introduction
Consider the following unconstrained optimization problem:

min f(x), x ∈ R^n, (1)

where f: R^n → R is continuously differentiable. According to the research of other scholars, there are many effective methods for solving unconstrained optimization problems, for example, the steepest descent method, the Newton method, and the conjugate gradient method [1][2][3][4][5][6][7][8]. The nonlinear conjugate gradient method is very effective for solving large-scale unconstrained optimization problems. It has attracted more and more attention [9][10][11][12][13][14] because it is easy to compute, has low memory requirements, and finds application in many fields [15][16][17][18]. The conjugate gradient iteration for (1) is defined as x_{k+1} = x_k + α_k d_k, where α_k is the step size of the k-th iteration, obtained by some line search rule, and d_k is the search direction at step k; these are the two key ingredients for solving unconstrained optimization problems. The search direction is defined as d_k = −g_k + β_k d_{k−1} (with d_0 = −g_0), where the scalar parameter β_k is called the CG parameter and g_k = g(x_k) = ∇f(x_k). The way the CG parameter is computed affects the performance and stability of the whole algorithm, so the CG parameter plays an utterly significant part. For nonlinear conjugate gradient methods, set y_k = g_k − g_{k−1} and let ‖·‖ denote the Euclidean norm. The classical methods then include the following:

HS method [19]: β_k^{HS} = g_k^T y_k / (d_{k−1}^T y_k)
FR method [20]: β_k^{FR} = ‖g_k‖² / ‖g_{k−1}‖²
PRP method [21, 22]: β_k^{PRP} = g_k^T y_k / ‖g_{k−1}‖²
LS method [23]: β_k^{LS} = g_k^T y_k / (−d_{k−1}^T g_{k−1})
CD method [24]: β_k^{CD} = ‖g_k‖² / (−d_{k−1}^T g_{k−1})
DY method [25]: β_k^{DY} = ‖g_k‖² / (d_{k−1}^T y_k)

This paper mainly studies the Liu–Storey method. Liu and Storey first proposed this conjugate gradient method for solving optimization problems in [9], where they proved its convergence under the Wolfe line search.
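The generic iteration x_{k+1} = x_k + α_k d_k with the classical β_k choices above can be sketched as follows. This is only an illustrative implementation, not the paper's TTMLS algorithm: the crude backtracking step and the restart to −g_k whenever the direction fails to be a descent direction are simplifying assumptions of the sketch.

```python
import numpy as np

# Classical CG parameters from the list above, with y = g_k - g_{k-1}.
BETAS = {
    "HS":  lambda g, gp, dp, y: (g @ y) / (dp @ y),
    "FR":  lambda g, gp, dp, y: (g @ g) / (gp @ gp),
    "PRP": lambda g, gp, dp, y: (g @ y) / (gp @ gp),
    "LS":  lambda g, gp, dp, y: (g @ y) / -(dp @ gp),
    "CD":  lambda g, gp, dp, y: (g @ g) / -(dp @ gp),
    "DY":  lambda g, gp, dp, y: (g @ g) / (dp @ y),
}

def cg_minimize(f, grad, x0, rule="PRP", tol=1e-8, max_iter=500):
    """Nonlinear CG: x_{k+1} = x_k + alpha_k d_k, d_k = -g_k + beta_k d_{k-1}.
    The backtracking step and the descent-restart are simplifications."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0  # crude Armijo backtracking to pick the step size
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        d = -g_new + BETAS[rule](g_new, g, d, g_new - g) * d
        if g_new @ d >= 0:  # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

For example, minimizing the convex quadratic f(x) = (x_1² + 4x_2²)/2 with `rule="PRP"` drives the iterates to the origin.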
The experimental results show that the algorithm is feasible; it later became known as the LS method. With exact line search, the LS method coincides with the PRP method, so the two methods have similar forms. As is well known, the HS method and the PRP method are considered the two most effective methods in practical computation, and many results have been obtained for them. Consequently, we hope that the theory and analysis techniques developed for the PRP method can also be applied to the LS method. However, the LS method does not automatically produce descent directions. To overcome this drawback, Li [26] proposed an improved Liu–Storey method using the Grippo–Lucidi line search technique, with a correspondingly modified search direction d_k. Nevertheless, this method requires a minimum step size, mainly because the direction lacks the trust region property. Therefore, a number of scholars have considered combining the conjugate gradient method with the trust region property. Three-term conjugate gradient algorithms converge readily because they automatically possess sufficient descent, so this kind of method has attracted many scholars. The conjugate gradient method with three terms was first proposed by Zhang et al. [27], who added a third term to the definition of d_k. Recently, a three-term conjugate gradient algorithm was proposed in Yuan's paper [28]; its search direction d_k is based on the LS method and the gradient descent method. Building on Yuan's work, we propose a modified three-term conjugate gradient method that carries more function information than the original one, with d_k computed by formula (7), in which y_k is replaced by a modified vector y_k^*. The vector y_k^* contains not only gradient information but also function-value information, with good theoretical results and numerical performance (see [29]). One might expect the resulting method to be better than the original one; that is why we use y_k^* instead of y_k.
From the definitions of y_k^* and s_k^*, a corresponding inequality is obtained. In [28], Yuan et al. used a modified Armijo line search that selects the largest α_k in a prescribed set as the step size, where α_k is the trial step length, often set to 1, and ρ_1 ∈ (0, 1), δ ∈ (0, 1) are given constants. As is well known, the Armijo line search is a basic and the cheapest line search technique, and it is used in various algorithms [30][31][32]; many other line search methods can be regarded as modifications of it. In an unpublished article by Yuan, a new modification of the Armijo line search was designed based on [16, 33]; it is stated as condition (12), where c ∈ (0, 1), c_1 ∈ (0, c), and α_k = max{ρ^i | i = 0, 1, 2, 3, ...} satisfying (12). This modified Armijo line search has been verified to improve the efficiency of the algorithm. The rest of the paper is organized as follows. In the second section, we propose the improved algorithm and establish two important properties of it (the sufficient descent property and the trust region property) without using any line search; then some other properties of the algorithm are given, and the global convergence of the algorithm is proved under appropriate assumptions. The third section gives numerical experiments on standard unconstrained optimization problems, image restoration problems, and compressive sensing problems. The last section concludes the paper.
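For reference, the classical Armijo rule that condition (12) modifies can be sketched as follows; the exact modified condition (12) is not reproduced here, and `armijo_step`, `delta`, and `rho` are illustrative names of this sketch, not the paper's notation.

```python
import numpy as np

def armijo_step(f, x, d, g, delta=0.1, rho=0.5, alpha0=1.0, max_backtracks=60):
    """Classical Armijo rule: return the largest alpha = alpha0 * rho^i with
    f(x + alpha d) <= f(x) + delta * alpha * g^T d   (g^T d < 0 assumed).
    The paper's condition (12) adds extra terms to this acceptance test."""
    fx = f(x)
    gtd = float(g @ d)
    alpha = alpha0
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + delta * alpha * gtd:
            break
        alpha *= rho
    return alpha
```

Because the right-hand side decreases linearly in α while f is smooth, the test is always satisfied for small enough α along a descent direction, which is exactly the fact Theorem 1 establishes for the modified condition.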

Conjugate Gradient Algorithm and Convergence Analysis
This section gives the new modified Liu–Storey method, which combines (7) and (12).

Lemma 1. Let d_k be generated by formula (7). Then the sufficient descent and trust region bounds stated below hold.

Proof. When k = 0, it is obvious that g_0^T d_0 = −‖g_0‖² and ‖d_0‖ = ‖g_0‖.
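The sufficient descent property of Lemma 1 can be checked numerically. Since formula (7) is not reproduced in the text above, the sketch below uses the structurally analogous three-term PRP direction of Zhang et al. [27] instead of the paper's LS-based direction: for it, the two cross terms cancel and g_k^T d_k = −‖g_k‖² holds identically, independent of any line search.

```python
import numpy as np

def three_term_direction(g, g_prev, d_prev):
    """Three-term PRP direction of Zhang et al. [27]:
        d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1},
        beta_k  = g_k^T y_{k-1} / ||g_{k-1}||^2,
        theta_k = g_k^T d_{k-1} / ||g_{k-1}||^2.
    The cross terms cancel in g_k^T d_k, so g_k^T d_k = -||g_k||^2
    holds for any g, g_prev, d_prev, with no line search required."""
    y = g - g_prev
    denom = g_prev @ g_prev
    beta = (g @ y) / denom
    theta = (g @ d_prev) / denom
    return -g + beta * d_prev - theta * y
```

Evaluating the identity on random vectors confirms that the descent product g_k^T d_k equals −‖g_k‖² up to rounding error.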
Next, we focus on the global convergence of the algorithm. To this end, we make the following assumptions.

Assumption 1. (i) The level set Ω = {x : f(x) ≤ f(x_0)} is bounded. (ii) The function f is continuously differentiable and its gradient g(x) is Lipschitz continuous; that is, there exists a constant (the Lipschitz constant) L > 0 such that ‖g(x) − g(y)‖ ≤ L‖x − y‖ for all x, y ∈ R^n.

Theorem 1. Let Assumption 1 hold. Then there is a positive constant α that satisfies (12) in Algorithm 1.
Proof. We construct an auxiliary function φ(α) from condition (12). From Assumption 1, f(x_k + αd_k) and f_k are bounded; note that 0 < δ_1 < δ < 1, and from (13) we have g_k^T d_k ≤ 0. Obviously, φ(0) = 0 holds. For all sufficiently small positive α, we obtain inequalities (20) and (21). So, from (20) and (21), there exists at least one positive constant ω such that φ(ω) < 0, which implies that there must be at least one local minimum point α^* ∈ (0, ω) of φ; that is, α^* satisfies (12). The proof is completed. Hence, Algorithm 1 is well defined. □

Theorem 2. Let Assumption 1 be satisfied and let the iterative sequences {x_k}, {α_k}, {d_k}, {g_k} be generated by Algorithm 1. Then (24) holds.

Proof. We argue by contradiction. If (24) does not hold, there exists a constant ε_0 > 0 such that ‖g_k‖ ≥ ε_0 for all k. From the third step of Algorithm 1 and (12), we obtain (26), namely (27); summing both sides of these inequalities and using Assumption 1 (ii), we obtain (28). Thus, from (28), we have (29). Our argument now splits into the following two cases.

Case 1: the step size α_k stays bounded away from zero as k → ∞. From (29), it is obvious that ‖g_k‖ → 0 as k → ∞, which contradicts ‖g_k‖ ≥ ε_0.

Case 2: the step size α_k → 0 as k → ∞.
Combining (30)–(32), we again arrive at a contradiction. Thus, result (24) is true, and this completes the proof. □

Numerical Experiments
In this section, we design three experiments to study the computational efficiency and performance of the proposed algorithm. The first subsection considers standard unconstrained optimization problems, the second the image restoration problem, and the third compressive sensing problems. All programs are compiled with MATLAB R2017a and run on an Intel(R) Core(TM) i7-4710MQ CPU @ 2.50 GHz with 8.0 GB RAM under the Windows 10 operating system.

Normal Unconstrained Optimization Problems.
In order to test the numerical performance of the TTMLS algorithm, the NLS algorithm [28], the LS method with the standard WWP line search (LS-WWP), and the PRP method with the standard WWP line search (PRP-WWP) are run as the comparison group. The results can be seen in Tables 1-4. The experimental settings are as follows:

Dimension: we test problems of dimension 3000, 6000, and 9000.

Parameters: all algorithms run with c = 0.6, c_1 = 0.3, ρ = 0.9, δ = 0.7, and ε = 10^{-6}.

Stop rule (the Himmelblau stop rule [34]): we stop the process if ‖g(x)‖ < ε or stop1 < e_2 is satisfied, or if the iteration number exceeds 1000, where e_1 = e_2 = 10^{-5}.

Symbol representation: NI: the iteration number. NFG: the total number of function and gradient evaluations. CPU: the CPU time in seconds.
Test problems: we tested 74 unconstrained optimization problems; the list of problems can be found in Yuan's work [16].
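The Himmelblau stop rule above compares successive function values through a quantity stop1 whose exact definition is not reproduced in the text; the sketch below uses the form in which this rule is commonly stated (an assumption of the sketch): relative decrease when |f(x_k)| > e_1, absolute decrease otherwise.

```python
def himmelblau_stop(f_k, f_next, g_norm, eps=1e-6, e1=1e-5, e2=1e-5):
    """Assumed form of the Himmelblau stop rule [34]: stop1 is the relative
    decrease of f when |f(x_k)| > e1 and the absolute decrease otherwise;
    terminate when ||g|| < eps or stop1 < e2 or elsewhere on iteration cap."""
    if abs(f_k) > e1:
        stop1 = abs(f_k - f_next) / abs(f_k)
    else:
        stop1 = abs(f_k - f_next)
    return g_norm < eps or stop1 < e2
```

This makes the criterion scale-invariant for large function values while avoiding division by a vanishing |f(x_k)| near a zero-valued minimum.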
Dolan and Moré [35] provided a way to analyze the efficiency of these algorithms. From Figures 1-3, we can see that the performance of the TTMLS algorithm and the NLS algorithm is significantly better than that of the LS-WWP method and the PRP-WWP method. Figures 1 and 2 show that the TTMLS and NLS algorithms approximate the target function better than the LS-WWP and PRP-WWP algorithms; thus, the number of iterations and the total number of function and gradient evaluations are smaller. The reason is that the search direction d_k of the TTMLS algorithm contains more function information. As for the CPU time in Figure 4, the TTMLS algorithm is basically the same as the NLS algorithm, and both are better than the other two methods. To sum up, the proposed algorithm has significant advantages.
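The Dolan–Moré comparison behind Figures 1-3 can be computed as sketched below; `performance_profile` is an illustrative name, and marking failed runs with `np.inf` is a convention of this sketch.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile [35]. T is an (n_problems, n_solvers)
    array of costs (NI, NFG, or CPU time; np.inf marks a failure). Returns
    rho_s(tau): the fraction of problems each solver s handles within a
    factor tau of the best solver on that problem."""
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)  # best cost per problem
    ratios = T / best                    # performance ratios r_{p,s}
    return np.array([np.mean(ratios <= tau, axis=0) for tau in taus])
```

Plotting rho_s(tau) against tau for each solver yields exactly the profile curves shown in the figures: higher curves indicate a more efficient and more robust solver.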

Image Restoration Problems.
The image restoration problem is a difficult problem in the field of optimization. We use the TTMLS and NLS algorithms to minimize the objective Z in order to recover the original image from an image corrupted by impulse noise, and then compare the performance of the two algorithms. The experimental settings are as follows:

Parameters: all algorithms run with c = 0.6, c_1 = 0.3, ρ = 0.9, and δ = 0.7.

Stop rule: we stop the process if ‖g_{k+1} − g_k‖/‖g_k‖ < 10^{-4} or ‖x_{k+1} − x_k‖/‖x_k‖ < 10^{-4} is satisfied.

Symbol representation: CPU: the CPU time in seconds. Total: the total CPU time over the four pictures.

Noise levels: 30%, 45%, and 60% salt-and-pepper noise.

Test problems: we restore the original image from images corrupted by impulse noise. The experiments use Lena (512 × 512), Barbara (512 × 512), Man (1024 × 1024), and Baboon (512 × 512) as the test images.

From Table 5 we can see that both algorithms recover the images corrupted by 30%, 45%, and 60% salt-and-pepper noise very well. The data show that, for image restoration problems, the TTMLS algorithm needs less CPU time than the NLS algorithm at all three noise levels. In conclusion, the TTMLS algorithm is promising and competitive.
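A minimal sketch of the salt-and-pepper corruption used to generate such test images follows; the 50/50 split between salt and pepper pixels and the [0, 255] gray range are assumptions of the sketch, not details taken from the paper.

```python
import numpy as np

def add_salt_pepper(img, ratio, rng=None):
    """Corrupt a grayscale image (values in [0, 255]) with salt-and-pepper
    noise: a `ratio` fraction of pixels (e.g. 0.30, 0.45, 0.60) is forced
    to 0 (pepper) or 255 (salt), half each."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = np.asarray(img, dtype=float).copy()
    u = rng.random(noisy.shape)
    noisy[u < ratio / 2] = 0.0
    noisy[(u >= ratio / 2) & (u < ratio)] = 255.0
    return noisy
```

Because impulse noise leaves the uncorrupted pixels untouched, restoration methods of this type first detect the extreme-valued pixels and then minimize a smooth objective over only those positions.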

Compressive Sensing Problems.
The main task of this section is to accurately recover an image from a few random projections by compressive sensing. The experimental setup follows the model proposed by Dai and Sha [36]. We then compare the performance of the TTMLS algorithm and the LS method with line search (12).
Note that the gradients g_k and the directions d_k are square matrices in this experiment, and the matrix g_k^T d_k may be singular, which would invalidate the algorithm. However, when computing d_{k+1}, we only need a scalar value for g_k^T d_k rather than the full matrix, so in this experiment we set g_k^T d_k = ‖g_k^T d_k‖_∞ in the update of d_{k+1}. The experimental settings are as follows:

Parameters: all algorithms run with c = 0.6, c_1 = 0.3, ρ = 0.9, and δ = 0.7.

Stop rule: we stop the process if ‖g_k^T d_k‖_∞ < 10^{-3} or the number of iterations exceeds 500.

Symbol representation: PSNR: peak signal-to-noise ratio, an objective criterion for image evaluation.

From Figure 7 and Table 6, we can see that both algorithms are effective on compressive sensing problems. Moreover, the experimental data show that the TTMLS algorithm has advantages over the LS algorithm.
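The PSNR criterion used in Table 6 has the standard definition sketched below; `peak = 255` assumes 8-bit images.

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    peak = 255 assumes 8-bit grayscale images; higher PSNR means a
    restored image closer to the original."""
    diff = np.asarray(original, dtype=float) - np.asarray(restored, dtype=float)
    mse = float(np.mean(diff ** 2))
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

PSNR is a purely pixel-wise criterion; it ranks restorations by mean squared error on a logarithmic scale, which is why it is reported alongside visual comparisons in Figure 7.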

Conclusions
In this paper, based on the well-known LS method and combined with an improved Armijo line search, a new three-term conjugate gradient algorithm is presented. Without any line search, the search direction of the new three-term conjugate gradient algorithm is proved to have two good properties: the sufficient descent property and the trust region property. The global convergence of the algorithm is also established. The numerical results indicate that the new algorithm is effective, and its good performance on image restoration problems and compressive sensing problems shows that it is competitive.

Data Availability
The data used in this study can be obtained from the corresponding author upon reasonable request.

Conflicts of Interest
There are no potential conflicts of interest.