Research Article: New Nonsmooth Equations-Based Algorithms for ℓ1-Norm Minimization and Applications

Recently, Xiao et al. (2011) proposed a nonsmooth equations-based method to solve the ℓ1-norm minimization problem. The advantages of this method are its simplicity and low storage requirements. In this paper, based on a new nonsmooth equations reformulation, we investigate new nonsmooth equations-based algorithms for solving ℓ1-norm minimization problems. Under mild conditions, we show that the proposed algorithms are globally convergent. Preliminary numerical results demonstrate the effectiveness of the proposed algorithms.


Introduction
We consider the ℓ1-norm minimization problem

min_{x ∈ R^n} f(x) + ρ‖x‖_1, where f(x) := (1/2)‖Ax − b‖^2, (1.1)

A ∈ R^{m×n}, b ∈ R^m, and ρ is a nonnegative parameter. Throughout the paper, we use ‖v‖ = (Σ_{i=1}^n |v_i|^2)^{1/2} and ‖v‖_1 = Σ_{i=1}^n |v_i| to denote the Euclidean norm and the ℓ1-norm of a vector v ∈ R^n, respectively. Problem (1.1) has many important practical applications, particularly in compressed sensing (abbreviated as CS) [1] and image restoration [2]. It can also be viewed as a regularization technique to overcome the ill-conditioned, or even singular, nature of the underlying least-squares problem.
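For concreteness, the problem can be set up and solved numerically with a standard proximal-gradient (ISTA) baseline. The sketch below is illustrative only: the dimensions, noise level, and the ISTA solver are our own choices for the demo and are not the algorithms proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.choice([-1.0, 1.0], size=5)
b = A @ x_true + 1e-3 * rng.standard_normal(m)
rho = 0.05 * np.max(np.abs(A.T @ b))  # regularization parameter

def objective(x):
    # f(x) + rho*||x||_1 with f(x) = 0.5*||Ax - b||^2
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + rho * np.linalg.norm(x, 1)

def soft(y, kappa):
    # soft-thresholding, the proximal map of kappa*||.||_1
    return np.sign(y) * np.maximum(np.abs(y) - kappa, 0.0)

# ISTA: x <- soft(x - tau*grad f(x), tau*rho) with tau <= 1/||A^T A||
tau = 1.0 / np.linalg.norm(A.T @ A, 2)
x = np.zeros(n)
for _ in range(2000):
    x = soft(x - tau * (A.T @ (A @ x - b)), tau * rho)
```

Running the loop drives the objective below its value at x = 0 and produces a sparse iterate, which is the qualitative behavior the regularization term is designed to induce.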

Preliminaries
By nonsmooth analysis, a necessary condition for a vector x ∈ R^n to be a local minimizer of a nonsmooth function φ : R^n → R is that

0 ∈ ∂φ(x), (2.1)

where ∂φ(x) denotes the subdifferential of φ at x [23]. If φ is convex, then (2.1) is also sufficient for x to be a global minimizer. The subdifferential of the absolute value function |t| is given by the signum function sign(t), that is,

∂|t| = sign(t) := { {1} if t > 0; [−1, 1] if t = 0; {−1} if t < 0. } (2.2)

For problem (1.1), the optimality conditions therefore translate to

0 ∈ ∇_i f(x) + ρ ∂|x_i|, i = 1, …, n, (2.3)

where ∇_i f(x) = ∂f(x)/∂x_i, i = 1, …, n. It is clear that the objective function of (1.1) is convex. Therefore, a point x* ∈ R^n is a solution of problem (1.1) if and only if it satisfies

∇_i f(x*) + ρ sign(x*_i) = 0 if x*_i ≠ 0, and |∇_i f(x*)| ≤ ρ if x*_i = 0, i = 1, …, n. (2.4)

Formally, we call the above conditions the optimality conditions for problem (1.1).
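The optimality conditions are easy to verify numerically. The helper below is an illustrative sketch (the function name and tolerances are our own); it is exercised on a case where the minimizer is known in closed form, namely A = I, for which the solution is the soft-thresholded vector sign(b_i)·max{|b_i| − ρ, 0}.

```python
import numpy as np

def check_l1_optimality(A, b, rho, x, tol=1e-8):
    """Check the optimality conditions for min 0.5*||Ax-b||^2 + rho*||x||_1:
    grad_i f(x) = -rho*sign(x_i) when x_i != 0, |grad_i f(x)| <= rho when x_i = 0."""
    g = A.T @ (A @ x - b)
    nz = np.abs(x) > tol   # indices with x_i != 0
    z = np.abs(x) <= tol   # indices with x_i == 0
    return (np.all(np.abs(g[nz] + rho * np.sign(x[nz])) <= 1e-6)
            and np.all(np.abs(g[z]) <= rho + 1e-6))

A = np.eye(5)
b = np.array([1.5, -0.2, 0.7, 0.0, -2.0])
rho = 0.5
# for A = I the exact minimizer is the soft-thresholding of b
x_star = np.sign(b) * np.maximum(np.abs(b) - rho, 0.0)
```

Here `check_l1_optimality(A, b, rho, x_star)` holds, while the unthresholded vector b itself fails the test, since its nonzero components do not satisfy the sign condition.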
For any given τ > 0, we define a mapping H_τ : R^n → R^n componentwise by

(H_τ(x))_i := mid{ x_i, τ(∇_i f(x) − ρ), τ(∇_i f(x) + ρ) }, i = 1, …, n, (2.5)

where mid{a, b, c} denotes the median of the scalars a, b, c. Then H_τ is a continuous mapping and is closely related to problem (1.1) through the system of nonsmooth equations

H_τ(x) = 0. (2.6)
Proposition 2.1 below reformulates problem (1.1) as a system of nonsmooth equations. Compared with the nonsmooth equation reformulation in [17], the dimension of (2.6) is only half of the dimension of the equation in [17].
Given a, b, c, d ∈ R, it is easy to verify (see, e.g., [25]) that there exist scalars s and t such that

mid{a, c, d} − mid{b, c, d} = s(a − b), and mid{a, c + h, d + h} − mid{a, c, d} = t·h for any h ∈ R. (2.8)

It is clear that 0 ≤ s, t ≤ 1. By (2.5), for any x, y ∈ R^n and each i = 1, …, n, it holds that

(H_τ(x))_i − (H_τ(y))_i = s_i (x_i − y_i) + t_i τ (∇_i f(x) − ∇_i f(y)), (2.10)

for some s_i, t_i ∈ [0, 1]. Writing S := diag(s_1, …, s_n) and T := diag(t_1, …, t_n), we then obtain

H_τ(x) − H_τ(y) = S(x − y) + τ T (∇f(x) − ∇f(y)). (2.11)

Since ∇f(x) = A^T(Ax − b), we get

H_τ(x) − H_τ(y) = (S + τ T A^T A)(x − y). (2.12)

The next proposition shows the Lipschitz continuity of H_τ defined by (2.5).
Proposition 2.2. For each τ > 0, there exists a positive constant L_τ such that

‖H_τ(x) − H_τ(y)‖ ≤ L_τ ‖x − y‖ for all x, y ∈ R^n. (2.13)

Proof. By (2.10) and (2.12), we have

‖H_τ(x) − H_τ(y)‖ = ‖(S + τ T A^T A)(x − y)‖ ≤ (‖S‖ + τ‖T‖‖A^T A‖)‖x − y‖ ≤ (1 + τ‖A^T A‖)‖x − y‖. (2.14)

Let L_τ := 1 + τ‖A^T A‖. Then (2.13) holds. The proof is complete.
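The bound L_τ = 1 + τ‖A^T A‖ can be probed numerically. The sketch below implements a componentwise-median mapping of the type described in this section (an illustrative reconstruction, not necessarily identical to (2.5)) and checks the bound on random pairs; for such a mapping the bound holds because each component of H_τ(x) − H_τ(y) is at most |x_i − y_i| + τ|∇_i f(x) − ∇_i f(y)| in magnitude.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 60
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
rho, tau = 0.1, 0.05

def H(x):
    # componentwise median of {x_i, tau*(g_i - rho), tau*(g_i + rho)}
    g = A.T @ (A @ x - b)
    return np.median(np.stack([x, tau * (g - rho), tau * (g + rho)]), axis=0)

L = 1.0 + tau * np.linalg.norm(A.T @ A, 2)
worst = 0.0
for _ in range(200):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    worst = max(worst, np.linalg.norm(H(x) - H(y)) / np.linalg.norm(x - y))
```

Over all sampled pairs the observed ratio stays below L, as the proposition predicts.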
The following proposition shows another useful property of the system of nonsmooth equations (2.6).

Proposition 2.3.
There exists a constant τ* > 0 such that for any 0 < τ ≤ τ*, the mapping H_τ : R^n → R^n is monotone, that is,

⟨H_τ(x) − H_τ(y), x − y⟩ ≥ 0 for all x, y ∈ R^n. (2.15)

Proof. Let D_ii be the ith diagonal element of A^T A. It is clear that D_ii > 0, i = 1, …, n. Set τ* := min_i {1/D_ii}. Note that A^T A is symmetric and positive semidefinite. Consequently, for any τ ∈ (0, τ*], the matrix S + τ T A^T A appearing in (2.12) is also positive semidefinite. Therefore, it follows from (2.12) that

⟨H_τ(x) − H_τ(y), x − y⟩ = (x − y)^T (S + τ T A^T A)(x − y) ≥ 0.

This completes the proof.
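Monotonicity can likewise be probed numerically. The sketch below uses a componentwise-median mapping of the type discussed in this section (an illustrative reconstruction, not necessarily the exact mapping (2.5)) and checks ⟨H_τ(x) − H_τ(y), x − y⟩ ≥ 0 on random pairs, with the stepsize τ chosen well inside the admissible range (note that 1/‖A^T A‖ ≤ min_i 1/D_ii).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 60
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
rho = 0.1
B = A.T @ A
# 1/||B||_2 <= min_i 1/B_ii, so this tau lies inside (0, tau*]
tau = 0.9 / np.linalg.norm(B, 2)

def H(x):
    g = B @ x - A.T @ b
    return np.median(np.stack([x, tau * (g - rho), tau * (g + rho)]), axis=0)

smallest = np.inf
for _ in range(200):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    smallest = min(smallest, (H(x) - H(y)) @ (x - y))
```

In these trials the inner product never drops below zero (up to rounding), consistent with the proposition.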

Algorithms and Their Convergence
In this section, we describe the proposed algorithms in detail and establish their convergence.
Algorithm 3.1 (spectral gradient projection method, abbreviated as SGP).

Step 0. Given an initial point x_0 ∈ R^n and constants r > 0, ν ≥ 0, σ ∈ (0, 1), γ ∈ (0, 1), set k := 0.

Step 1. If ‖H(x_k)‖ = 0, stop. Otherwise compute the direction d_k = −θ_k H(x_k), where θ_0 = 1 and, for each k ≥ 1, θ_k is defined by

θ_k = (s_{k−1}^T s_{k−1}) / (s_{k−1}^T y_{k−1}), (3.1)

with

s_{k−1} = x_k − x_{k−1}, y_{k−1} = H(x_k) − H(x_{k−1}) + r‖H(x_k)‖^ν s_{k−1}. (3.2)

Step 2. Determine the steplength α_k = γ^{m_k}, with m_k being the smallest nonnegative integer m such that

−⟨H(x_k + γ^m d_k), d_k⟩ ≥ σ γ^m ‖d_k‖^2. (3.3)

Step 3. Compute

z_k = x_k + α_k d_k. (3.4)

If ‖H(z_k)‖ = 0, stop. Otherwise compute

x_{k+1} = x_k − (⟨H(z_k), x_k − z_k⟩ / ‖H(z_k)‖^2) H(z_k). (3.5)

Set k := k + 1, and go to Step 1.

Remark 3.2. (i) The idea of the above algorithm comes from [18]. The major difference between Algorithm 3.1 and the method in [18] lies in the definition of y_{k−1}. The choice of y_{k−1} in Step 1 follows the modified BFGS method [26]. The purpose of the term r‖H(x_k)‖^ν s_{k−1} is to make y_{k−1} approach H(x_k) − H(x_{k−1}) as x_k tends to a solution of (2.6); moreover, when H is monotone it guarantees s_{k−1}^T y_{k−1} > 0, so that θ_k is well defined.
(ii) Step 3 is called the projection step. It originated in [20]. The advantage of the projection step is that it makes x_{k+1} closer than x_k to the solution set of (2.6). We refer to [20] for details.
(iii) Since −⟨H(x_k), d_k⟩ = ‖H(x_k)‖·‖d_k‖ > 0, by the continuity of H it is easy to see that inequality (3.3) holds for all m sufficiently large. Therefore Step 2 is well defined, and so is Algorithm 3.1.
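The scheme above can be sketched compactly in code. The mapping H below is an illustrative componentwise-median stand-in for the reformulated optimality conditions (our own choice, since any continuous monotone mapping vanishing exactly at solutions can play this role), and the parameter values r, ν, σ, γ and the tolerances are arbitrary demo settings, not the paper's recommendations.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.choice([-1.0, 1.0], size=6)
b = A @ x_true + 1e-3 * rng.standard_normal(m)
rho = 0.05 * np.max(np.abs(A.T @ b))
tau = 0.5 / np.linalg.norm(A.T @ A, 2)  # stepsize inside the monotonicity range

def H(x):
    # illustrative nonsmooth-equation mapping: zero exactly at minimizers
    g = A.T @ (A @ x - b)
    return np.median(np.stack([x, tau * (g - rho), tau * (g + rho)]), axis=0)

r, nu, sigma, gamma = 0.1, 1.0, 1e-4, 0.5
x = np.zeros(n)
Hx = H(x)
theta = 1.0
for k in range(500):
    if np.linalg.norm(Hx) < 1e-10:
        break
    d = -theta * Hx                      # Step 1: spectral direction
    alpha = 1.0                          # Step 2: derivative-free line search
    while -(H(x + alpha * d) @ d) < sigma * alpha * (d @ d) and alpha > 1e-12:
        alpha *= gamma
    z = x + alpha * d
    Hz = H(z)
    if np.linalg.norm(Hz) < 1e-10:
        x, Hx = z, Hz
        break
    # Step 3: hyperplane projection step
    x_new = x - ((Hz @ (x - z)) / (Hz @ Hz)) * Hz
    Hx_new = H(x_new)
    s = x_new - x
    y = Hx_new - Hx + r * np.linalg.norm(Hx_new) ** nu * s
    sy = s @ y
    theta = (s @ s) / sy if sy > 1e-16 else 1.0  # spectral coefficient (3.1)
    x, Hx = x_new, Hx_new
```

On this random instance the residual ‖H(x_k)‖ falls well below its initial value, matching the convergence behavior established below.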
The following lemma comes from [20].

Lemma 3.3. Let H be monotone and let x, z ∈ R^n be such that ⟨H(z), x − z⟩ > 0. Set

x⁺ = x − (⟨H(z), x − z⟩ / ‖H(z)‖^2) H(z).

Then for any x* ∈ R^n satisfying H(x*) = 0, it holds that

‖x⁺ − x*‖^2 ≤ ‖x − x*‖^2 − ‖x⁺ − x‖^2. (3.6)
The following theorem establishes the global convergence for Algorithm 3.1.
Theorem 3.4. Let {x_k} be generated by Algorithm 3.1 and let x* be a solution of (2.6). Then one has

‖x_{k+1} − x*‖^2 ≤ ‖x_k − x*‖^2 − ‖x_{k+1} − x_k‖^2 for all k. (3.7)

In particular, {x_k} is bounded. Furthermore, either {x_k} is finite and the last iterate is a solution of the system of nonsmooth equations (2.6), or the sequence is infinite and lim_{k→∞} ‖x_{k+1} − x_k‖ = 0; moreover, {x_k} converges to some solution of (2.6).

Proof. The proof is similar to that in [18]. We omit it here.
Remark 3.5. The computational complexity of each of SGP's steps is clear. In large-scale problems, most of the work consists of matrix-vector multiplications involving A and A^T. Steps 1 and 2 of SGP each require two matrix-vector multiplications involving A or A^T, whereas each iteration of GPSR-BB involves only two such multiplications in total. This may make SGP more expensive per iteration. Therefore, we give a modification of SGP. The modified algorithm, which will be called MSGP in the rest of the paper, coincides with SGP except at Step 3, whose description is given below.
Step 3. Let m = k/M, where M is a preset positive integer. If m is a positive integer, compute

z_k = x_k + α_k d_k, x_{k+1} = x_k − (⟨H(z_k), x_k − z_k⟩ / ‖H(z_k)‖^2) H(z_k); (3.8)

otherwise, set x_{k+1} = z_k = x_k + α_k d_k. We refer to the resulting method as Algorithm 3.6 (MSGP).

Lemma 3.7. Let λ_max(A^T A) be the maximum eigenvalue of A^T A, let τ ∈ (0, 1/λ_max(A^T A)), and let {x_k} be generated by Algorithm 3.6. Then for any solution x* of (2.6),

‖x_{k+1} − x*‖ ≤ ‖x_k − x*‖ for all k. (3.9)

Proof. If x_{k+1} is generated by (3.8), it follows from Lemma 3.3 that (3.9) holds. In the following, we assume that x_{k+1} = z_k. Then we obtain

Journal of Applied Mathematics
This together with (2.12) implies inequality (3.11).
Let τ ∈ (0, 1/λ_max(A^T A)). Then we get ‖x_{k+1} − x*‖ ≤ ‖x_k − x*‖, which completes the proof. Now we establish a global convergence theorem for Algorithm 3.6.
Theorem 3.8. Let λ_max(A^T A) be the maximum eigenvalue of A^T A and let τ ∈ (0, 1/λ_max(A^T A)). Assume that {x_k} is generated by Algorithm 3.6 and that x* is a solution of (2.6). Then one has

‖x_{k+1} − x*‖ ≤ ‖x_k − x*‖ for all k. (3.13)

In particular, {x_k} is bounded. Furthermore, either {x_k} is finite and the last iterate is a solution of the system of nonsmooth equations (2.6), or the sequence is infinite and lim_{k→∞} ‖x_{k+1} − x_k‖ = 0. Moreover, {x_k} converges to some solution of (2.6).
Proof. We first note that if the algorithm terminates at some iteration k, then d_k = 0 or H(z_k) = 0. Since θ_k > 0, d_k = 0 implies H(x_k) = 0. This shows that either x_k or z_k is a solution of (2.6).
Suppose that d_k ≠ 0 and H(z_k) ≠ 0 for all k. Then an infinite sequence {x_k} is generated. It follows from (3.3) that

⟨H(z_k), x_k − z_k⟩ = −α_k ⟨H(z_k), d_k⟩ ≥ σ α_k^2 ‖d_k‖^2 > 0. (3.14)

Let x* be an arbitrary solution of (2.6). By Lemmas 3.7 and 3.3, we obtain

‖x_{mM+1} − x*‖^2 ≤ ‖x_{mM} − x*‖^2 − ‖x_{mM+1} − x_{mM}‖^2, (3.15)

where m is a nonnegative integer. In particular, the sequence {‖x_k − x*‖} is nonincreasing and hence convergent. Moreover, the sequence {x_k} is bounded, and

lim_{m→∞} ‖x_{mM+1} − x_{mM}‖ = 0. (3.16)
From (3.8) and (3.14), we have

‖x_{mM+1} − x_{mM}‖ = ⟨H(z_{mM}), x_{mM} − z_{mM}⟩ / ‖H(z_{mM})‖ ≥ σ α_{mM}^2 ‖d_{mM}‖^2 / ‖H(z_{mM})‖. (3.17)

This together with (3.16) yields

lim_{m→∞} α_{mM} ‖d_{mM}‖ = 0. (3.18)
Now we consider the following two possible cases: (i) lim inf_{m→∞} ‖H(x_{mM})‖ = 0; (ii) lim inf_{m→∞} ‖H(x_{mM})‖ > 0. If (i) holds, then by the continuity of H and the boundedness of {x_{mM}}, the sequence {x_{mM}} has some accumulation point x̂ such that H(x̂) = 0. Since the sequence {‖x_k − x̂‖} converges, it must hold that {x_k} converges to x̂.
If (ii) holds, then by the boundedness of {x_{mM}} and the continuity of H, there exist positive constants C, C̄ and a positive integer m_0 such that

C ≤ ‖H(x_{mM})‖ ≤ C̄ for all m ≥ m_0. (3.19)

On the other hand, from (3.2) and the definitions of s_{k−1} and y_{k−1}, we have

s_{k−1}^T y_{k−1} = s_{k−1}^T (H(x_k) − H(x_{k−1})) + r‖H(x_k)‖^ν ‖s_{k−1}‖^2, (3.20)

which together with (3.19) and Propositions 2.2 and 2.3 implies

r‖H(x_k)‖^ν ‖s_{k−1}‖^2 ≤ s_{k−1}^T y_{k−1} ≤ (L_τ + r‖H(x_k)‖^ν)‖s_{k−1}‖^2. (3.21)

Consequently, we obtain from (3.1), (3.19), and (3.21) that

‖d_{mM}‖ = θ_{mM} ‖H(x_{mM})‖ ≥ C / (L_τ + r C̄^ν) > 0 for all m ≥ m_0. (3.22)

Therefore, it follows from (3.18) that lim_{m→∞} α_{mM} = 0. By the line search rule, for all m sufficiently large, m_{mM} − 1 does not satisfy (3.3). This means

−⟨H(x_{mM} + γ^{−1} α_{mM} d_{mM}), d_{mM}⟩ < σ γ^{−1} α_{mM} ‖d_{mM}‖^2. (3.23)

Since {x_{mM}} and {d_{mM}} are bounded, we can choose subsequences of {x_{mM}} and {d_{mM}} converging to x** and d**, respectively. Taking the limit in (3.23) along these subsequences, we obtain

−⟨H(x**), d**⟩ ≤ 0. (3.24)

However, it is not difficult to deduce from (3.1), (3.19), and (3.21) that −⟨H(x**), d**⟩ > 0. This yields a contradiction. Consequently, lim inf_{m→∞} ‖H(x_{mM})‖ > 0 is not possible. The proof is then complete.
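To make the SGP/MSGP distinction concrete, the sketch below implements the modified Step 3: the projection step is applied only when k is a multiple of M, and otherwise the iterate is simply x_{k+1} = z_k. The mapping H is an illustrative componentwise-median stand-in for the reformulated optimality conditions, and M = 5 and the other parameters are arbitrary demo values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 50, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ np.where(rng.random(n) < 0.05, 1.0, 0.0) + 1e-3 * rng.standard_normal(m)
rho = 0.05 * np.max(np.abs(A.T @ b))
tau = 0.5 / np.linalg.norm(A.T @ A, 2)

def H(x):
    # illustrative nonsmooth-equation mapping: zero exactly at minimizers
    g = A.T @ (A @ x - b)
    return np.median(np.stack([x, tau * (g - rho), tau * (g + rho)]), axis=0)

M, r, nu, sigma, gamma = 5, 0.1, 1.0, 1e-4, 0.5
x, Hx, theta = np.zeros(n), H(np.zeros(n)), 1.0
for k in range(1, 501):
    if np.linalg.norm(Hx) < 1e-10:
        break
    d = -theta * Hx
    alpha = 1.0
    while -(H(x + alpha * d) @ d) < sigma * alpha * (d @ d) and alpha > 1e-12:
        alpha *= gamma
    z = x + alpha * d
    Hz = H(z)
    if k % M == 0 and Hz @ Hz > 0:   # modified Step 3: project only every M steps
        x_new = x - ((Hz @ (x - z)) / (Hz @ Hz)) * Hz
    else:
        x_new = z                    # otherwise accept the line-search point
    Hx_new = H(x_new)
    s = x_new - x
    y = Hx_new - Hx + r * np.linalg.norm(Hx_new) ** nu * s
    sy = s @ y
    theta = (s @ s) / sy if sy > 1e-16 else 1.0
    x, Hx = x_new, Hx_new
```

Skipping most projection steps saves the extra evaluations of H they require, which is the motivation for MSGP discussed in Remark 3.5.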

Applications to Compressed Sensing and Image Restoration
In this section, we apply the proposed algorithms, that is, SGP and MSGP, to some practical problems arising from compressed sensing and image restoration. We compare the proposed algorithms with SGCS, SpaRSA, and GPSR-BB. The system of nonsmooth equations solved by SGCS is the 2n-dimensional system of [17], with z, c, and H defined as in [17]. The test problems are associated with applications in the areas of compressed sensing and image restoration. All experiments were carried out on a Lenovo PC (2.53 GHz, 2.00 GB of RAM) using Matlab 7.8. The parameters in SGCS are specified as follows: The parameters in SGP and MSGP are specified as follows: Throughout the experiments, we choose the initial iterate to be x_0 = 0. In our first experiment, we consider a typical CS scenario, where the goal is to reconstruct a length-n sparse signal in the canonical basis from m observations, where m < n. The m × n matrix A is obtained by first filling it with independent samples of the standard Gaussian distribution and then orthonormalizing the rows. Due to the storage limitations of the PC, we test a small-size signal with m = 1024 and n = 4096. The observed vector is b = A x_orig + ξ, where ξ is Gaussian white noise with variance σ^2 = 10^{−4} and x_orig is the original signal with 50 randomly placed ±1 spikes and zeros in the remaining elements. The regularization parameter is chosen as ρ = 0.05 ‖A^T b‖_∞. We compare the performance of SGP and MSGP with that of SGCS, SpaRSA, and GPSR-BB by solving problem (1.1) and assessing the restored signal x. To perform this comparison, we first run the SGCS algorithm until its stopping criterion is satisfied, and then run each of the other algorithms until it reaches the same value of the objective function reached by SGCS. The original signal and the estimate obtained by solving (1.1) using the MSGP method are shown in Figure 1. We can see from Figure 1 that MSGP does an excellent job at locating the spikes of the original signal. In Figure 2, we plot
the evolution of the objective function versus iteration number and CPU time for these algorithms. It is readily seen that MSGP worked faster than the other algorithms.
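The sensing-matrix construction described above (Gaussian fill followed by row orthonormalization) can be sketched as follows. The dimensions are scaled down from m = 1024, n = 4096, and 50 spikes to keep the demo fast, and the generic proximal-gradient (ISTA) solver at the end is only a stand-in for the algorithms compared in the text.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, spikes = 128, 512, 10          # scaled-down stand-ins for 1024/4096/50

# Gaussian matrix with orthonormalized rows: QR on G^T gives orthonormal columns
G = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(G.T)             # Q: n x m with orthonormal columns
A = Q.T                              # rows of A are orthonormal

x_orig = np.zeros(n)
x_orig[rng.choice(n, spikes, replace=False)] = rng.choice([-1.0, 1.0], size=spikes)
b = A @ x_orig + 1e-2 * rng.standard_normal(m)   # noise variance sigma^2 = 1e-4
rho = 0.05 * np.max(np.abs(A.T @ b))

# generic proximal-gradient reconstruction; ||A^T A|| = 1 here, so tau = 1 is valid
x = np.zeros(n)
for _ in range(1000):
    v = x - A.T @ (A @ x - b)
    x = np.sign(v) * np.maximum(np.abs(v) - rho, 0.0)

rel_err = np.linalg.norm(x - x_orig) / np.linalg.norm(x_orig)
```

Because the rows of A are orthonormal, A A^T equals the identity and the spectral norm of A^T A is one, which simplifies the stepsize choice; the reconstruction recovers the spike locations well on this instance.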
In the second experiment, we test MSGP on three image restoration problems based on the images House, Cameraman, and Barbara. The House and Cameraman images are of size 256 × 256, and Barbara is of size 512 × 512. All pixels are contaminated by Gaussian noise. We measure restoration quality by the signal-to-noise ratio (SNR), computed from the original image x_orig and the restored image x. We first run SGCS and stop the process once its stopping tolerance of 10^{−5} is reached, and then run the other algorithms until their objective function values reach the value attained by SGCS. Table 1 reports the number of iterations (Iter), the CPU time in seconds (Time), and the SNR of the restored images (SNR). It is easy to see from Table 1 that MSGP is competitive with the well-known algorithms SpaRSA and GPSR-BB in computing time and number of iterations, and that it improves on SGCS greatly. Therefore, we conclude that MSGP provides a valid approach for solving ℓ1-norm minimization problems arising from image restoration.
Preliminary numerical experiments show that the SGP and MSGP algorithms improve on the SGCS algorithm greatly. This may be because the system of nonsmooth equations solved here has lower dimension than that in [17], and because our modification of the projection step reduces the computational cost.

Figure 1: From top to bottom: original signal, observation, and reconstruction obtained by MSGP.
The mapping H_τ is generally not differentiable in the sense of the Fréchet derivative, but it is semismooth in the sense of Qi and Sun [24]. The following proposition shows that the ℓ1-norm minimization problem (1.1) is equivalent to a system of nonsmooth equations. It can be easily obtained from the optimality conditions and the convexity of the objective function of (1.1).

Proposition 2.1. Let τ > 0 be any given constant. A point x* ∈ R^n is a solution of problem (1.1) if and only if it satisfies H_τ(x*) = 0.