A Descent Conjugate Gradient Algorithm for Optimization Problems and Its Applications in Image Restoration and Compression Sensing

It is well known that the nonlinear conjugate gradient algorithm is one of the effective algorithms for optimization problems since it has low storage requirements and a simple structure. This motivates us to design a modified conjugate gradient formula for the optimization model. The proposed conjugate gradient algorithm possesses several properties: (1) the search direction uses not only the gradient value but also the function value; (2) the presented direction has both the sufficient descent property and the trust region feature; (3) the proposed algorithm is globally convergent for nonconvex functions; (4) experiments on image restoration problems and compressive sensing are done to demonstrate the performance of the new algorithm.


Introduction
Consider the following unconstrained optimization model:

min f(x), x ∈ R^n, (1)

where f: R^n ⟶ R is a continuous function. The above problem (1) has many practical application fields, such as economics, biology, and engineering. It is well known that the nonlinear conjugate gradient (CG) method is one of the most effective methods for (1). The CG algorithm has the iterative formula

x_{k+1} = x_k + α_k d_k, k = 0, 1, 2, . . ., (2)

where α_k denotes the steplength, x_k is the kth iterative point, and d_k is the search direction designed by

d_k = −g_k + β_k d_{k−1} for k ≥ 1, d_0 = −g_0, (3)

where g_k = ∇f(x_k) is the gradient and β_k is a scalar which determines the different CG algorithms ([1–7], etc.). The Polak–Ribière–Polyak (PRP) formula [6, 7] is one of the well-known nonlinear CG formulas, with

β_k^{PRP} = g_k^T(g_k − g_{k−1})/‖g_{k−1}‖², (4)

where g_{k−1} = ∇f(x_{k−1}) and ‖ · ‖ is the Euclidean norm. The PRP method has been studied by many scholars, and many results have been obtained by this technique (see [7–12, 18–20], etc.). It has been proved that nonlinear CG algorithms can be applied to nonlinear equations, nonsmooth optimization, and image restoration problems (see [21–24], etc.). We all know that the sufficient descent property

g_k^T d_k ≤ −c‖g_k‖², (5)

where c > 0 is a constant, plays an important role in the convergence analysis of CG methods (see [13, 14, 24], etc.). There is another crucial condition, the scalar condition β_k ≥ 0, which was pointed out by Powell [10] and further emphasized in global convergence analyses [11, 12].
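The basic iteration (2)–(4) can be sketched as follows (illustrative Python; the paper's experiments were written in MATLAB, and the helper names here are ours):

```python
import numpy as np

def beta_prp(g_new, g_old):
    """PRP scalar (4): beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2."""
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)

def prp_direction(g_new, g_old, d_old):
    """CG search direction (3): d_k = -g_k + beta_k d_{k-1} (with d_0 = -g_0)."""
    return -g_new + beta_prp(g_new, g_old) * d_old
```

For example, with g_{k−1} = (1, 0) and g_k = (0, 1), the PRP scalar equals 1, and the direction mixes the new negative gradient with the previous direction.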
Thus, under the assumption of the sufficient descent condition and the WWP (weak Wolfe–Powell) line search technique, a modified PRP formula

β_k^{PRP+} = max{0, β_k^{PRP}} (6)

was presented by Gilbert and Nocedal [13], and its global convergence for nonconvex functions was established. All of these observations tell us that both property (5) and β_k ≥ 0 are very important in CG algorithms. To obtain one or both of these conditions, many scholars have made further studies and obtained many interesting results. Yu [25] presented a modified PRP nonlinear CG formula designed by

β_k^{YuPRP} = g_{k+1}^T y_k/‖g_k‖² − μ(‖y_k‖²/‖g_k‖⁴)g_{k+1}^T d_k, (7)

where μ > (1/4) is a positive constant and y_k = g_{k+1} − g_k; this formula has property (5) with c = 1 − (1/4μ). Yuan [12] proposed a further formula, β_k^{mPRP2}, which possesses not only property (5) with c = 1 − (1/4μ) but also the nonnegativity β_k^{mPRP2} ≥ 0. To obtain a greater descent, a three-term FR CG direction was given by Zhang et al. [26], which satisfies (5) with equality, g_k^T d_k = −‖g_k‖². Dai and Tian [27] gave another CG direction which also satisfies g_k^T d_k = −‖g_k‖². The global convergence of the above CG method was proved by Dai and Tian [27] for β_k = β_k^{YuanPRP} and β_k = β_k^{YuPRP}. However, for nonconvex functions with the effective Armijo line search, they did not provide an analysis. One of the main reasons lies in the lack of the trust region feature. To overcome this, we [28] proposed a CG direction involving the scaling factor c_k = (|β_k|‖d_k‖/‖g_{k+1}‖), which possesses not only the descent property g_{k+1}^T d_{k+1} = −‖g_{k+1}‖² but also the trust region property; we denote this direction by (10). It has been proved that a CG formula will have better numerical performance if it exploits not only gradient values but also function values [29]. This motivates us to present a CG direction (11) based on (10), in which the gradient difference y_k is replaced by the modified vector y*_k. The vector y*_k [30], which embeds function-value information into the gradient difference, has been proved to have good properties in theory and in experiments, and Yuan et al. [29] used it in a CG formula and obtained good results.
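The descent guarantee of the three-term constructions mentioned above comes from an exact cancellation: for a direction of the form d_{k+1} = −g_{k+1} + β_k d_k − θ_k y_k with β_k = g_{k+1}^T y_k/‖g_k‖² and θ_k = g_{k+1}^T d_k/‖g_k‖², one gets g_{k+1}^T d_{k+1} = −‖g_{k+1}‖² identically. A minimal numerical check (illustrative Python; this is a Zhang-et-al.-style construction, not the paper's direction (11)):

```python
import numpy as np

rng = np.random.default_rng(0)
g_old = rng.normal(size=5)   # g_k
g_new = rng.normal(size=5)   # g_{k+1}
d_old = rng.normal(size=5)   # d_k
y = g_new - g_old            # y_k

s = g_old @ g_old
beta = g_new @ y / s                         # PRP-type scalar
theta = g_new @ d_old / s                    # three-term coefficient
d_new = -g_new + beta * d_old - theta * y    # three-term direction

# The beta and theta terms cancel in g_{k+1}^T d_{k+1}, leaving -||g_{k+1}||^2.
assert abs(g_new @ d_new + g_new @ g_new) < 1e-10
```

The cancellation holds for arbitrary vectors, which is why the sufficient descent property of such directions does not depend on the line search.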
These achievements inspire us to propose the new CG direction (11). This paper possesses the following features:

(i) The sufficient descent property and the trust region feature are obtained
(ii) The new direction possesses not only the gradient value but also the function value
(iii) The given algorithm has global convergence under the Armijo line search for nonconvex functions
(iv) Experiments on image restoration problems and compressive sensing are done to test the performance of the new algorithm

The next section states the given algorithm. The convergence analysis is given in Section 3, and experiments are reported in Section 4. The last section presents the conclusion.

Algorithm
Based on the discussion in the above section, the CG algorithm is listed in Algorithm 1.
Step 0: choose an initial point x_0 ∈ R^n, constants ε ∈ (0, 1), σ, σ_0, δ ∈ (0, 1), and μ > 0; set d_0 = −g_0 = −∇f(x_0) and k ≔ 0.
Step 1: stop if ‖g_k‖ ≤ ε is true.
Step 2: find the steplength α_k = max{σ_0σ^i : i = 0, 1, 2, . . .} satisfying the Armijo-type condition f(x_k + α_k d_k) ≤ f(x_k) + δα_k g_k^T d_k.
Step 3: set x_{k+1} = x_k + α_k d_k and compute the next direction d_{k+1} by (11).
Step 4: set k ≔ k + 1 and go to Step 1.
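Algorithm 1 can be sketched as follows (illustrative Python; a three-term PRP direction stands in for the paper's direction (11), which additionally uses the modified vector y*_k, and the line search parameters are those reported in the experiments section):

```python
import numpy as np

def armijo(f, grad, x, d, sigma0=0.1, sigma=0.5, delta=0.9, max_back=30):
    """Step 2: try alpha = sigma0 * sigma^i, i = 0, 1, ..., until
    f(x + alpha d) <= f(x) + delta * alpha * g^T d."""
    fx, gd = f(x), grad(x) @ d
    alpha = sigma0
    for _ in range(max_back):
        if f(x + alpha * d) <= fx + delta * alpha * gd:
            break
        alpha *= sigma
    return alpha

def cg_descent(f, grad, x0, eps=1e-6, max_iter=10_000):
    """Sketch of Algorithm 1; the three-term PRP direction used here
    satisfies g^T d = -||g||^2 but is only a stand-in for (11)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:        # Step 1: stopping test
            break
        alpha = armijo(f, grad, x, d)        # Step 2: line search
        x = x + alpha * d                    # Step 3: next iterate
        g_new = grad(x)
        y = g_new - g
        s = g @ g
        d = -g_new + (g_new @ y / s) * d - (g_new @ d / s) * y
        g = g_new
    return x
```

For example, minimizing f(x) = (1/2)‖x‖² from x_0 = (1, 2, 3) drives the gradient norm below the tolerance ε.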

Remark 1.
The relation (13) is the so-called trust region feature, and the above theorem tells us that direction (11) has not only the sufficient descent property but also the trust region feature. Both relations (12) and (13) make the proof of the global convergence of Algorithm 1 easy to establish.

Global Convergence
For nonconvex functions, the global convergence of Algorithm 1 is established under the following assumptions.

Assumption 1.
Assume that the function f(x) has at least one stationary point x*, namely, ‖g(x*)‖ = 0 holds. Suppose that the level set Ω = {x | f(x) ≤ f(x_0)} defined by the initial point x_0 is bounded. The function f(x) is twice continuously differentiable and bounded below, and its gradient g(x) is Lipschitz continuous; that is, there exists a positive constant L > 0 such that

‖g(x) − g(y)‖ ≤ L‖x − y‖, ∀x, y ∈ R^n.

Now, we prove the global convergence of Algorithm 1.

Theorem 2. Let Assumption 1 hold. Then, we get

lim_{k⟶∞} ‖g_k‖ = 0. (17)
Proof. Using (12) and Step 2 of Algorithm 1, we obtain

f(x_{k+1}) ≤ f(x_k) + δα_k g_k^T d_k ≤ f(x_k) − cδα_k‖g_k‖²,

which means that the sequence {f(x_k)} is decreasing and the relation

cδα_k‖g_k‖² ≤ f(x_k) − f(x_{k+1})

is true. Summing these inequalities for k from 0 to ∞ and using Assumption 1 (f is bounded below), we deduce that

Σ_{k=0}^{∞} α_k‖g_k‖² < +∞

holds. Thus, we have

lim_{k⟶∞} α_k‖g_k‖² = 0.

This implies that either

lim_{k⟶∞} ‖g_k‖ = 0, (22)

or

lim_{k⟶∞} α_k = 0. (23)

If (22) holds, the proof of this theorem is complete. Assuming instead that (23) is true, we aim to derive a contradiction. Since the stepsize α_k satisfies the line search of Step 2 in Algorithm 1 as the largest candidate, the trial steplength α_k/σ fails the test, namely,

f(x_k + (α_k/σ)d_k) > f(x_k) + δ(α_k/σ)g_k^T d_k.

By (12) and (13) and the well-known mean value theorem, we obtain

(1 − δ)(−g_k^T d_k) < L(α_k/σ)‖d_k‖²,

which implies that

α_k > σ(1 − δ)c‖g_k‖²/(L‖d_k‖²) ≥ σ(1 − δ)c/(Lc̃²) > 0

is true, where c̃ > 0 is the trust region constant of (13) with ‖d_k‖ ≤ c̃‖g_k‖. This is a contradiction to (23). Then, only relation (22) holds. We complete the proof.
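The key chain of inequalities in the proof can be written compactly as follows (a sketch assuming the sufficient descent property (12) with constant c > 0 and the Armijo condition of Step 2):

```latex
% Armijo condition plus sufficient descent gives a guaranteed decrease:
f(x_{k+1}) \le f(x_k) + \delta \alpha_k g_k^T d_k
           \le f(x_k) - c\,\delta\,\alpha_k \|g_k\|^2 .
% Summing over k and using that f is bounded below (Assumption 1):
\sum_{k=0}^{\infty} \alpha_k \|g_k\|^2 < +\infty
\quad\Longrightarrow\quad
\lim_{k\to\infty} \alpha_k \|g_k\|^2 = 0 ,
% so either \|g_k\| \to 0 (relation (22)) or \alpha_k \to 0 (relation (23)),
% and the trust region feature (13) rules out the second case.
```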

Remark 2.
We can see that the proof of the global convergence is very simple since the defined direction (11) has not only the sufficient descent property (12) but also the trust region feature (13).

Numerical Results
The numerical experiments for image restoration problems and compressive sensing are done with Algorithm 1 and the normal PRP algorithm, respectively. All codes are run on a PC with an Intel(R) Core(TM) i7-7700T CPU @ 2.9 GHz, 16.00 GB of RAM, and the Windows 10 operating system, and are written in MATLAB R2014a. The parameters are chosen as σ = 0.5, σ_0 = 0.1, δ = 0.9, and μ = 300.
Image restoration problems: suppose that ζ is the observed noisy image of an original image x corrupted by salt-and-pepper noise. Let ϕ_{i,j} = {(i, j − 1), (i, j + 1), (i − 1, j), (i + 1, j)} be the neighborhood of the pixel (i, j), and let s_max and s_min denote the maximum and minimum values of a noisy pixel, respectively. By applying an adaptive median filter to the noisy image, the index set N of the noise candidates is obtained: a pixel is flagged when its filtered value differs from the observed value and the observed value is extreme (s_min or s_max). The following conclusions can be obtained: (i) if (i, j) ∉ N, the pixel (i, j) is identified as uncorrupted and its observed value is kept, which means that w*_{i,j} = ζ_{i,j}, where w*_{i,j} is an element of the denoised image w* of the two-phase method; (ii) if (i, j) ∈ N, the pixel is restored in the second phase using its uncorrupted neighbors (m, n) ∈ ϕ_{i,j}. Chan et al. [31] presented the function f_α and minimized it to restore the images without a nonsmooth term, which has the following form:

f_α(w) = Σ_{(i,j)∈N} [ Σ_{(m,n)∈ϕ_{i,j}∖N} ψ_α(w_{i,j} − ζ_{m,n}) + (1/2) Σ_{(m,n)∈ϕ_{i,j}∩N} ψ_α(w_{i,j} − w_{m,n}) ],

where α is a constant and ψ_α is an even edge-preserving potential function. The numerical performance of f_α is noteworthy [32, 33].
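Phase 1 of the two-phase method, the detection of salt-and-pepper candidates, can be sketched as follows (illustrative Python using a plain 3 × 3 median filter instead of the adaptive median filter of [31]; all names are ours):

```python
import numpy as np

def detect_noise(x, s_min=0.0, s_max=255.0):
    """Flag pixel (i, j) as a noise candidate when the median-filtered image
    differs from the observation AND the pixel attains an extreme value."""
    h, w = x.shape
    padded = np.pad(x, 1, mode='edge')
    med = np.empty_like(x)
    for i in range(h):                       # 3x3 median filter
        for j in range(w):
            med[i, j] = np.median(padded[i:i + 3, j:j + 3])
    extreme = (x == s_min) | (x == s_max)
    return (med != x) & extreme              # boolean mask of the index set N

# Corrupt a smooth image with salt-and-pepper noise and check the detection.
img = np.tile(np.arange(50, 58, dtype=float), (8, 1))
noisy = img.copy()
noisy[2, 3] = 255.0   # "salt" pixel
noisy[5, 6] = 0.0     # "pepper" pixel
mask = detect_noise(noisy)
assert mask[2, 3] and mask[5, 6]
assert int(mask.sum()) == 2   # uncorrupted pixels keep their values
```

Pixels outside the mask are copied unchanged into the restored image; only the flagged set N enters the minimization of f_α in phase 2.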
We choose Barbara (512 × 512), Man (256 × 256), Baboon (512 × 512), and Lena (256 × 256) as the test images. The well-known PRP CG algorithm (PRP algorithm) is also run for comparison with Algorithm 1. The detailed performances are listed in Figures 1 and 2, which tell us that the two algorithms (Algorithm 1 and the PRP algorithm) successfully solve these image restoration problems, and the results are good.
To compare their performances directly, the restoration quality is assessed by the peak signal-to-noise ratio (PSNR) defined in [34–36]:

PSNR = 10 log_10 (255²/((1/MN) Σ_{i,j} (w*_{i,j} − x_{i,j})²)),

where M × N is the image size, x is the original image, and w* is the restored image. For the compressive sensing experiments, the so-called Fourier transform technology is used, and the measurements are taken in the Fourier domain.
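The PSNR computation can be sketched as follows (illustrative Python; `peak = 255` assumes 8-bit grayscale images):

```python
import numpy as np

def psnr(restored, original, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(restored, float) - np.asarray(original, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4))
b = np.ones((4, 4))   # every pixel off by 1 -> MSE = 1
assert abs(psnr(b, a) - 10.0 * np.log10(255.0 ** 2)) < 1e-9
```

With MSE = 1 this gives about 48.13 dB; larger PSNR values indicate a restoration closer to the original image.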
Figures 3–5 show that the two algorithms work well on these problems and can solve them successfully.

Conclusion
This paper studies unconstrained optimization problems by designing a CG algorithm. The given method possesses not only the sufficient descent property but also the trust region feature. The global convergence is proved in a simple way. Image restoration and compressive sensing problems are tested to show that the proposed algorithm performs better than the normal PRP algorithm. In the future, we will pay attention to the following aspects: (i) we believe there are many other effective CG algorithms which can be successfully applied to image restoration problems and compressive sensing; (ii) more experiments will be done to test the performance of the new algorithm.

Data Availability
All data are included in the paper.

Conflicts of Interest
There are no potential conflicts of interest.