Approximate Sparsity and Nonlocal Total Variation Based Compressive MR Image Reconstruction

Recent developments in compressive sensing (CS) show that it is possible to accurately reconstruct a magnetic resonance (MR) image from undersampled k-space data by solving nonsmooth convex optimization problems, thereby significantly reducing the scanning time. In this paper, we propose a new MR image reconstruction method based on a compound regularization model that combines the nonlocal total variation (NLTV) and wavelet approximate sparsity. Nonlocal total variation restores periodic textures and local geometric information better than total variation. The wavelet approximate sparsity achieves more accurate sparse reconstruction than the fixed wavelet ℓ0 and ℓ1 norms. Furthermore, a variable splitting and augmented Lagrangian algorithm is presented to solve the proposed minimization problem. Experimental results on MR image reconstruction demonstrate that the proposed method outperforms many existing MR image reconstruction methods in both quantitative and visual quality assessment.


Introduction
Magnetic resonance imaging (MRI) is a noninvasive and nonionizing imaging technique. Due to its noninvasive nature and intuitive visualization of both anatomical structure and physiological function, MRI has been widely applied in clinical diagnosis. Imaging speed is important in many MRI applications, and both the scanning and the reconstruction speed of MRI affect the quality of the reconstructed image. In spite of advances in hardware and pulse sequences, the speed at which data can be collected in MRI is fundamentally limited by physical and physiological constraints. Therefore many researchers are seeking methods to reduce the amount of acquired data without degrading the image quality [1][2][3].
In recent years, the compressive sensing (CS) framework has been successfully used to reconstruct MR images from highly undersampled k-space data [4][5][6][7][8][9]. According to CS theory [10,11], signals/images can be accurately recovered from significantly fewer measurements than the number of unknowns, or than mandated by traditional Nyquist sampling. MR image acquisition can be viewed as a special case of CS in which the sampled linear combinations are simply individual Fourier coefficients (k-space samples). Therefore, CS makes it possible to obtain accurate reconstructions from a small subset of k-space data. In compressive sensing MRI (CSMRI), we can reconstruct an MR image of good quality from only a small number of measurements. The application of CS to MRI therefore has the potential for significant scan time reductions, with benefits for patients and health care economics.
Because of the ill-posed nature of the CSMRI reconstruction problem, regularization terms are required for a reasonable solution. In existing CSMRI models, the most popular regularizers are ℓ0 and ℓ1 sparsity [4,9,12] and total variation (TV) [3,13]. The ℓ0 sparsity regularized CSMRI model can be understood as a penalized least squares problem with an ℓ0 norm penalty. It is well known that the complexity of this model grows with the number of variables; when the number is large, solving the model is generally intractable. The ℓ1 regularization problem can be transformed into an equivalent convex quadratic optimization problem and can therefore be solved very efficiently. Under some conditions, the resulting solution of ℓ1 regularization coincides with one of the solutions of ℓ0 regularization [14]. Nevertheless, while ℓ1 regularization provides the best convex approximation to ℓ0 regularization and is computationally efficient, it often introduces extra bias in estimation and cannot reconstruct an image with the fewest measurements when applied to CSMRI [15]. In recent years, ℓp (0 < p < 1) regularization [16,17] was introduced into CSMRI, since it can generate much sparser solutions than ℓ1 regularization. Although the ℓp regularizations achieve better performance, they are prone to falling into local minima, and which p yields the best result is also an open problem. Trzasko and Manduca [18] proposed a CSMRI paradigm based on homotopic approximation of the ℓ0 quasinorm. Although this method has no guarantee of reaching a global minimum, it achieves accurate MR image reconstructions at higher undersampling rates than ℓ1 regularization, and it is faster than the ℓp regularization methods. Recently, Chen and Huang [19] accelerated MRI by introducing wavelet tree structural sparsity into CSMRI.
Despite their high effectiveness in CSMRI recovery, sparsity and TV regularizers often suffer from undesirable visual artifacts and staircase effects. To overcome these drawbacks, hybrid sparsity and TV regularization methods [5][6][7][8] have been proposed. In [5], Huang et al. proposed a new optimization algorithm for MR image reconstruction, named the fast composite splitting algorithm (FCSA), which is based on the combination of variable and operator splitting techniques. In [8], Yang et al. proposed a variable splitting method (RecPF) to solve the hybrid sparsity and TV regularized MR image reconstruction problem. Ma et al. [20] proposed an operator splitting algorithm (TVCMRI) for MR reconstruction. To deal with the problem of measuring low and high frequency coefficients, Zhang et al. [6] proposed the so-called TVWL2-L1 model, which measures low frequency coefficients with the ℓ2 norm and high frequency coefficients with the ℓ1 norm. In [7], an experimental study on the choice of CSMRI regularizations was given. Although the classical TV regularization performs well in CSMRI reconstruction while preserving edges, especially for cartoonlike MR images, it is well known that TV regularization is not suitable for images with fine details and textures, which it tends to oversmooth. Nonlocal TV regularization extends the classical TV regularization via the nonlocal means filter [21] and has been shown to outperform TV in several inverse problems such as image denoising [22], deconvolution [23], and compressive sensing [24,25]. To improve the signal-to-noise ratio and preserve the fine details of MR images, Gopi et al. [26], Huang and Yang [27], and Liang et al. [28] proposed nonlocal TV regularization based MR reconstruction and sensitivity encoding reconstruction.
In this paper, we propose a novel compound regularization based compressive MR image reconstruction method, which exploits the nonlocal total variation (NLTV) and an approximate sparsity prior. The approximate sparsity, which replaces the traditional ℓ0 and ℓ1 regularizers of the compressive MR image reconstruction model, is sparser and much easier to solve. The NLTV is much better than TV at preserving sharp edges while recovering local structure details. To solve the compound regularization model, we develop an alternating iterative scheme using the variable splitting and augmented Lagrangian algorithm. Experimental results show that the proposed method can effectively improve the quality of MR image reconstruction. The rest of the paper is organized as follows. In Section 2 we review compressive sensing and MRI reconstruction. In Section 3 we propose our model and algorithm. The experimental results and conclusions are presented in Sections 4 and 5, respectively.

Compressive Sensing and MRI Reconstruction
Compressive sensing [10,11], as a new sampling and compression theory, is able to reconstruct an unknown signal from a very limited number of samples. It provides a firm theoretical foundation for the accurate reconstruction of MRI from highly undersampled k-space measurements and significantly reduces the MRI scan duration. Suppose u ∈ R^n is an MR image and F ∈ R^{m×n} is a partial Fourier transform; then the sampling measurement b ∈ R^m of the MR image u in k-space can be defined as

b = Fu.  (1)

The compressive MR image reconstruction problem is to recover u given the measurement b and the sampling matrix F. Undersampling occurs whenever the number of k-space samples is less than the number of unknowns (m < n). In that case, compressive MR image reconstruction is an underdetermined problem.
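As a minimal sketch of the measurement model just described, the following NumPy fragment forms an undersampled "k-space" observation by keeping only a random subset of FFT rows of a stand-in image; the image content, size, and 25% sampling ratio are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
u = rng.standard_normal((n, n))                    # stand-in MR image

rows = rng.choice(n, size=n // 4, replace=False)   # 25% of k-space rows
mask = np.zeros((n, n), dtype=bool)
mask[rows, :] = True

# undersampled k-space data b = Fu: 2-D FFT restricted to sampled rows
b = np.fft.fft2(u, norm="ortho")[mask]

# m < n*n, so recovering u from b alone is underdetermined
print(b.size, u.size)
```

Because `b` has far fewer entries than `u` has pixels, any reconstruction must bring in prior knowledge, which is exactly the role of the regularizers discussed below.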
In general, compressive sensing reconstructs the unknowns u from the measurements b by minimizing the ℓ0 norm of the sparsified image Φu, where Φ represents a sparsity transform for the image. In this paper, we choose the orthonormal wavelet transform as the sparsity transform. The typical compressive MR image reconstruction is then obtained by solving the following constrained optimization problem [4,9,12]:

min_u ‖Φu‖_0  subject to  Fu = b.  (2)

However, in terms of computational complexity, the ℓ0 norm optimization problem (2) is a typical NP-hard problem and is difficult to solve. Under certain conditions related to the restricted isometry property, the ℓ0 norm can be replaced by the ℓ1 norm. Therefore, the optimization problem (2) is relaxed to the following alternative convex optimization problem:

min_u ‖Φu‖_1  subject to  Fu = b.  (3)

When the measurements b are contaminated with noise, the typical compressive MR image reconstruction problem using the ℓ1 relaxation of the ℓ0 norm is formulated as the following unconstrained Lagrangian version:

min_u α‖Φu‖_1 + (1/2)‖Fu − b‖²_2,  (4)

where α is a positive parameter.
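The unconstrained ℓ1 problem above can be attacked with very simple first-order methods. The sketch below (not the paper's algorithm) uses iterative soft-thresholding (ISTA) in 1-D, with the identity as sparsifying transform and a random undersampling of the orthonormal FFT; the sizes, step size, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 48
u_true = np.zeros(n)
u_true[rng.choice(n, 5, replace=False)] = 1.0        # 5-sparse signal
rows = rng.choice(n, m, replace=False)               # sampled "k-space" rows

def F(x):                                            # partial Fourier operator
    return np.fft.fft(x, norm="ortho")[rows]

def Ft(y):                                           # its adjoint (zero-fill)
    z = np.zeros(n, dtype=complex)
    z[rows] = y
    return np.fft.ifft(z, norm="ortho")

b = F(u_true)
lam = 0.01                                           # l1 weight (assumption)
u = np.zeros(n)
for _ in range(300):                                 # ISTA iterations
    g = u - Ft(F(u) - b).real                        # gradient step, L = 1
    u = np.sign(g) * np.maximum(np.abs(g) - lam, 0)  # soft threshold

print(np.flatnonzero(u_true), np.flatnonzero(np.abs(u) > 0.5))
```

With a random partial Fourier matrix and a sufficiently sparse signal, the support of `u_true` is recovered; the small amplitude bias of the recovered spikes is the ℓ1 estimation bias discussed in the introduction.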
Despite the high effectiveness of sparsity regularized compressive MR image reconstruction methods, they often suffer from undesirable visual artifacts such as Gibbs ringing in the result. Due to its desirable ability to preserve edges, the total variation (TV) model has been successfully used in compressive MR image reconstruction [3,13]. But the TV regularizer still has some limitations that restrict its performance: it cannot generate good enough results for images with many small structures and often suffers from staircase artifacts. In order to combine the advantages of the sparsity-based and TV models and avoid their main drawbacks, a TV regularizer, corresponding to finite differences as the sparsifying transform, is typically incorporated into the sparsity regularized compressive MR image reconstruction [5][6][7][8]. In this case the optimization problem is written as

min_u α‖Φu‖_1 + β TV(u) + (1/2)‖Fu − b‖²_2,  (5)

where β is a positive parameter. The TV is defined discretely as TV(u) = Σ_i √(|(∇_h u)_i|² + |(∇_v u)_i|²), where ∇_h and ∇_v are the horizontal and vertical gradient operators, respectively. The compound optimization model (5) is based on the fact that piecewise smooth MR images can be sparsely represented by the wavelet transform and should have small total variation.
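The discrete isotropic TV above can be sketched in a few lines; the forward-difference scheme with replicated boundary below is one common convention (an assumption, since the paper does not specify its boundary handling).

```python
import numpy as np

def total_variation(u):
    """Isotropic discrete TV: sum over pixels of the gradient magnitude."""
    dh = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward difference
    dv = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward difference
    return np.sqrt(dh**2 + dv**2).sum()

# A piecewise-constant image: TV concentrates on its single vertical edge.
u = np.zeros((8, 8))
u[:, 4:] = 1.0
print(total_variation(u))  # 8 rows, one unit jump each -> 8.0
```

This illustrates why TV favors piecewise-constant (cartoonlike) images: the penalty counts only the jumps, which is also why textured regions get flattened.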

Proposed Model and Algorithm
As mentioned above, the joint TV and ℓ1 norm minimization model is a useful way to reconstruct MR images. However, these regularizers still have limitations that restrict their performance. The ℓ0 norm requires a combinatorial search for its minimization and is too sensitive to noise. ℓ1 problems can be solved very efficiently, but the solution is not sparse enough, which affects the performance of MRI reconstruction. The TV model can preserve edges, but it tends to flatten inhomogeneous areas, such as textures. To overcome these shortcomings, a novel method is proposed for compressive MR imaging based on the wavelet approximate sparsity and nonlocal total variation (NLTV) regularization, named WasNLTV.
3.1. Approximate Sparsity. The problems of using the ℓ0 norm in compressive MR imaging (i.e., the need for a combinatorial search for its minimization and its high sensitivity to noise) are both due to the fact that the ℓ0 norm of a vector is a discontinuous function of that vector. As in [29,30], our idea is to approximate this discontinuous function by a continuous one, named the approximate sparsity function, which provides a smooth measure of the ℓ0 norm and better sparsity than the ℓ1 regularizer. The approximate sparsity function is defined as

f_σ(t) = exp(−t²/(2σ²)).  (6)

The parameter σ may be used to control the accuracy with which f_σ approximates the Kronecker delta. In mathematical terms, we have

lim_{σ→0} f_σ(t) = 1 if t = 0, and 0 if t ≠ 0.  (7)

Define the continuous multivariate approximate sparsity function Ψ_σ(x) as

Ψ_σ(x) = Σ_{i=1}^{n} f_σ(x_i).  (8)

It is clear from (7) that Ψ_σ(x) is an indicator of the number of zero entries in x for small values of σ. Therefore, the ℓ0 norm can be approximated by

‖x‖_0 ≈ n − Ψ_σ(x).  (9)

Note that the larger the value of σ, the smoother Ψ_σ(x) and the worse the approximation to the ℓ0 norm; the smaller the value of σ, the closer the behavior of n − Ψ_σ(x) to the ℓ0 norm.
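The trade-off governed by σ can be seen directly in code. The sketch below implements the Gaussian approximate-sparsity measure described above (the smoothed-ℓ0 idea of [29,30]); the particular test vector and σ values are illustrative assumptions.

```python
import numpy as np

def approx_l0(x, sigma):
    """Approximate l0 norm: n - Psi_sigma(x), with Gaussian f_sigma."""
    psi = np.exp(-x**2 / (2.0 * sigma**2)).sum()  # Psi_sigma(x)
    return x.size - psi

x = np.array([0.0, 0.0, 0.0, 1.0, -2.0])          # true l0 norm = 2

print(approx_l0(x, sigma=0.01))   # small sigma: close to 2
print(approx_l0(x, sigma=10.0))   # large sigma: oversmoothed, near 0
```

For small σ the measure counts the nonzeros almost exactly but the function is nearly as discontinuous as ℓ0 itself; for large σ it is smooth but uninformative, which is why smoothed-ℓ0 schemes typically anneal σ from large to small.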

3.2. Nonlocal Total Variation.
Although the classical TV is surprisingly efficient at preserving edges, it is well known that TV is not suitable for images with fine structures, details, and textures, which are very important in MR images. The NLTV is a variational extension of the nonlocal means filter proposed by Wang et al. [30]. NLTV uses the whole image information, instead of only adjacent pixel information, to calculate the gradients in the regularization term. The NLTV has been proven to be more efficient than TV at improving the signal-to-noise ratio and at preserving not only sharp edges but also fine details and repetitive patterns [26][27][28]. In this paper, we use the NLTV to replace the TV in compound regularization based compressive MR image reconstruction.
Let Ω ⊂ R², x, y ∈ Ω, let u : Ω → R be a real function, and let w(x, y) be a weight function. For a given image u(x), the weighted graph gradient ∇_w u(x) is defined as the vector of all directional derivatives ∇_w u(x, ⋅) at x:

∇_w u(x, y) = (u(y) − u(x)) √(w(x, y)),  y ∈ Ω.  (10)

Analogous to the classical TV, the NLTV regularizer takes the ℓ1 norm of the weighted graph gradients, which is in general more efficient than the ℓ2 norm for sparse gradient fields:

NLTV(u) = Σ_{x∈Ω} ‖∇_w u(x)‖ = Σ_{x∈Ω} √( Σ_{y∈Ω} (u(y) − u(x))² w(x, y) ).  (11)

The weight function w(x, y) denotes how much the difference between pixels x and y is penalized in the image, and is calculated by

w(x, y) = (1/c(x)) exp( −‖P_u(x) − P_u(y)‖²_2 / h² ),  (12)

where P_u(x) and P_u(y) denote small patches in image u centered at the coordinates x and y, respectively, c(x) = Σ_{y∈Ω} exp(−‖P_u(x) − P_u(y)‖²_2 / h²) is the normalizing factor, and h is a filtering parameter.
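A minimal sketch of the weight computation in (12), for a single reference pixel over the whole image: patch size and the filtering parameter h are illustrative assumptions, and no search-window restriction is applied (real implementations restrict y to a window around x for speed).

```python
import numpy as np

def nl_weights(img, x, patch=1, h=0.5):
    """Normalized nonlocal weights w(x, y) from pixel x to every pixel y
    whose (2*patch+1)^2 patch fits inside the image."""
    pad = patch
    H, W = img.shape
    px = img[x[0]-pad:x[0]+pad+1, x[1]-pad:x[1]+pad+1]  # patch at x
    w = {}
    for i in range(pad, H - pad):
        for j in range(pad, W - pad):
            py = img[i-pad:i+pad+1, j-pad:j+pad+1]      # patch at y
            w[(i, j)] = np.exp(-np.sum((px - py)**2) / h**2)
    c = sum(w.values())                                  # c(x)
    return {y: v / c for y, v in w.items()}

img = np.zeros((6, 6))
img[:, 3:] = 1.0                 # two flat regions separated by an edge
w = nl_weights(img, (1, 1))
# pixels whose patch is identical to the one at (1, 1) get the top weight
print(max(w, key=w.get))
```

Pixels on the same side of the edge, however far away, receive large weights, which is exactly how NLTV exploits self-similarity and repetitive patterns that local TV cannot see.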

3.3. The Description of the Proposed Model and Algorithm.
According to the compressive MR image reconstruction models described in Section 2, the proposed WasNLTV model for compressive MR image reconstruction is

min_u α(N − Ψ_σ(Φu)) + β NLTV(u) + (1/2)‖Fu − b‖²_2.  (14)

It should be noted that the optimization problem in (14) is very hard to solve owing to its nonsmooth terms and huge dimensionality. To solve (14), we use the variable splitting and augmented Lagrangian algorithm, following closely the methodology introduced in [31]. The core idea is to introduce a new variable per regularizer and then exploit the alternating direction method of multipliers (ADMM) to solve the resulting constrained optimization problem. By introducing an intermediate variable vector (v1, v2, v3), problem (14) can be transformed into an equivalent constrained problem (15), which can be written in the compact form

min_{u,v} f(v)  subject to  Gu + Bv = 0,  (16)

where v ≡ (v1, v2, v3) and G and B collect the linear operators of the splitting (17). The augmented Lagrangian of problem (16) is

L(u, v, d) = f(v) + (μ/2)‖Gu + Bv − d‖²_2,  (18)

where μ > 0 is a positive constant, d ≡ (d1, d2, d3), and d/μ denotes the Lagrange multipliers associated with the constraint Gu + Bv = 0. The basic idea of the augmented Lagrangian method is to seek a saddle point of L(u, v, d), which is also the solution of problem (16). Using the ADMM algorithm, we solve problem (16) by iteratively minimizing L over (u, v) and updating d (19). It is evident that the minimization problem (19) is still hard to solve efficiently in a direct way, since it involves a nonseparable quadratic term and nondifferentiable terms.
To solve this problem, the ADMM algorithm is employed, which alternately minimizes over one variable while fixing the others. Using ADMM, problem (19) can be solved via the following four subproblems with respect to u and v.
(1) u subproblem: fixing v and d, the optimization problem (19) reduces to the quadratic subproblem (21). By direct computation, we obtain the Euler-Lagrange equation for (21) and, from it, the closed-form solution (23). Due to the computational complexity of NLTV, as in [27], the NLTV regularization in this paper is run only once.
(2) v1 subproblem: fixing v2, v3, u, and d, the optimization problem (19) reduces to subproblem (24). Clearly, problem (24) is quadratic, and its solution (25) is obtained simply in closed form.

(3) v2 subproblem: fixing v1, v3, u, and d, the optimization problem (19) reduces to subproblem (26). As with problem (24), problem (26) is quadratic and its gradient ∇_{v2} takes a simple form (27). The steepest descent method is used to solve (26) iteratively.

(4) v3 subproblem: fixing v1, v2, u, and d, the optimization problem (19) reduces to subproblem (28). Problem (28) is an ℓ1-norm regularized optimization problem, whose solution is the well-known soft threshold (29) [32], where soft(t, τ) = sign(t) max{|t| − τ, 0} denotes the component-wise application of the soft-threshold function.
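The soft-threshold operator used in the v3 subproblem is one line of NumPy; the sketch below illustrates its shrink-and-kill behavior on a small test vector (the vector and threshold are illustrative assumptions).

```python
import numpy as np

def soft(t, tau):
    """Component-wise soft threshold: sign(t) * max(|t| - tau, 0)."""
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

t = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
# entries below the threshold are zeroed; the rest shrink toward 0 by tau
print(soft(t, 0.5))
```

This is the proximal operator of the ℓ1 norm, which is why it appears whenever an ℓ1 term is isolated in a splitting scheme like the one above.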
In conclusion, the ADMM algorithm for optimization problem (16) is shown in Algorithm 1.

Experimental Results
In this section, a series of experiments on four 2D MR images (named brain, chest, artery, and cardiac) is implemented to evaluate the proposed and existing methods. Figure 1 shows the test images. All experiments are conducted on a PC with an Intel Core i7-3520M 2.90 GHz CPU, in the MATLAB environment. The proposed method (named WasNLTV) is compared with existing methods including TVCMRI [20], RecPF [8], and FCSA [5]. We evaluate the performance of the various methods both visually and quantitatively in terms of the signal-to-noise ratio (SNR),

SNR = 10 log10 (V_s / V_n),

where V_n = E(‖u − ũ‖²) is the mean squared error between the original image u and the reconstructed image ũ, V_s = E(‖u − E(u)‖²) is the variance of the original image, and E(⋅) is the mean function.
For fair comparison, the experiments use the same observation method as TVCMRI: in k-space, we randomly acquire more samples at low frequencies and fewer samples at higher frequencies. This sampling scheme is widely used for compressed MR image reconstruction. Suppose an MR image u has n pixels and the partial Fourier transform F in problem (1) consists of m rows of the n × n matrix corresponding to the full 2D discrete Fourier transform. The m chosen rows correspond to the sampling measurements b. Therefore, the sampling ratio is defined as m/n. In the experiments, Gaussian white noise generated by σ_n × randn(m, 1) in MATLAB is added, where the standard deviation σ_n = 0.01. The regularization parameters α, β, and μ are set as 0.001, 0.035, and 1, respectively. Comparing the reconstruction results, it can be observed that WasNLTV always obtains the best visual effect on all MR images. In particular, the edges of organs and tissues obtained by WasNLTV are much clearer and easier to identify. Figure 7 gives performance comparisons between the different methods at a sampling ratio of 20% in terms of CPU time versus SNR. In general, the computational complexity of NLTV is much higher than that of TV. To reduce the computational complexity of WasNLTV, in the experiments we perform the NLTV regularization only once every few iterations. Despite its higher computational complexity, WasNLTV obtains the best reconstruction results on all MR images, achieving the highest SNR in less CPU time.
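For reference, the SNR measure can be sketched as follows, assuming the common definition SNR = 10 log10(V_s / V_n) with V_n the mean squared reconstruction error and V_s the variance of the original image; the test image and 10% noise level are illustrative assumptions.

```python
import numpy as np

def snr_db(u, u_rec):
    """SNR in dB between original u and reconstruction u_rec."""
    v_n = np.mean((u - u_rec) ** 2)        # mean squared error
    v_s = np.mean((u - u.mean()) ** 2)     # variance of original image
    return 10.0 * np.log10(v_s / v_n)

rng = np.random.default_rng(1)
u = rng.standard_normal((32, 32))          # stand-in "original" image
u_rec = u + 0.1 * rng.standard_normal((32, 32))
print(snr_db(u, u_rec))                    # roughly 20 dB for 10% noise
```

A perfect reconstruction drives V_n to zero (SNR → ∞), and each factor-of-10 reduction in MSE adds 10 dB, which is the scale on which the tables below compare the methods.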

Conclusions
In this paper, we propose a new compound regularization based compressive sensing MRI reconstruction model, which exploits the NLTV regularization and a wavelet approximate sparsity prior. The approximate sparsity prior is used in the compressive MR image reconstruction model instead of the ℓ0 or ℓ1 norm; it produces much sparser results, and the resulting optimization problem is much easier to solve. Because the NLTV takes advantage of the redundancy and self-similarity in an MR image, it can effectively avoid the blocky artifacts caused by traditional TV regularization and preserve the fine edges of organs and tissues. As for the algorithm, we apply the variable splitting and augmented Lagrangian algorithm to solve the proposed minimization problem.

Figure 2: Performance comparisons (sampling rate versus SNR) on different MR images.

Algorithm 1: Pseudocode of WasNLTV based compressive MR image reconstruction.
Initialization: set k = 0; choose μ, α, β, u^0, v1^0, v2^0, v3^0.
repeat
  update u^{k+1} by (23);
  update v1^{k+1}, v2^{k+1}, v3^{k+1} by (25), (27), and (29);
  update the Lagrange multipliers d^{k+1};
  update iteration: k = k + 1;
until the stopping criterion is satisfied.

Figure 7: Performance comparisons (CPU-time versus SNR) on different MR images.

Table 1: SNR (dB) results of different methods with different sampling ratios.

Table 2: RMSE results of different methods with different sampling ratios.

Experiments on the test images demonstrate that the proposed method achieves higher SNR and, more importantly, preserves the details and edges of MR images.