An l1-Norm and l0-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection Data

The l1-norm regularization has attracted attention for image reconstruction in computed tomography (CT). The l0-norm of the gradients of an image provides a measure of the sparsity of those gradients. In this paper, we present a new combined l1-norm and l0-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework that solves the optimization effectively using the nonmonotone alternating direction algorithm with hard thresholding. Numerical experiments indicate that the new algorithm achieves substantial improvement by incorporating the l0-norm regularization.


Introduction
Computed tomography (CT) has been widely applied in medical imaging and industry for decades. In some applications, the reconstructions do not require a high-quality image, or complete projection data for exact reconstruction is not available because of scanning conditions or radiation dose concerns. CT reconstruction from limited projection data is therefore of particular importance, and statistical and iterative reconstruction algorithms outperform analytic approaches in these applications.
The theory of compressed sensing [1, 2] has provided a novel framework for CT image reconstruction from limited projection data, using the prior knowledge that the gradients of CT images are sparse. The total variation (TV) based regularization [3] for CT image reconstruction is connected to compressed sensing under the sparse transform of the gradient operator [4, 5]. The TV of an image is the l1-norm of the gradients of the image, so TV minimization is also known as l1-minimization.
The l1-norm-based regularization can greatly reduce the streak artifacts arising in few-view CT [6]. However, the results tend to be oversmoothed and lose detailed features, including contrast, which blurs edges [7]. In addition, the l1-norm regularization does not provide a sufficiently sparse representation. To overcome these drawbacks, many improved TV-based reconstruction algorithms have been developed; one of them is the reweighted l1 minimization [8]. Algorithms for regularization involving multiple norms have also been developed. For example, the alternating direction method (ADM) [9, 10] and the nonmonotone alternating direction algorithm (NADA) [11, 12] were proposed for solving an l1-norm minimization with an l2-norm constraint.
The l0-norm of a vector, defined as the number of its nonzero elements, measures the sparsity of the vector directly. Thus, l0-norm-based regularization for CT represents the sparsity of the gradient image better than l1-norm-based regularization and preserves edges while suppressing streak artifacts [13]. However, the l0-norm is a nonconvex function, and the l0-norm regularization problem is NP-hard [14], so many variants of the regularization algorithms have been developed. An edge-preserving image reconstruction method for limited-angle CT based on an l0-norm regularized gradient prior was investigated [15]. An image reconstruction model for few-view CT based on l0 sparse regularization was proposed [16]. A new l0 regularization with wavelet tight framelets was introduced to suppress the slope artifacts in limited-angle X-ray CT reconstruction [17]. Regularization involving multiple norms has also been developed to address the individual drawbacks of the l1-norm and the l0-norm. An image reconstruction model based on l0-norm and l2-norm regularization for limited-angle CT was proposed [18]; numerical experiments indicate that this algorithm has an advantage in suppressing slope artifacts. A combined smoothed l0-norm and l1-norm regularization algorithm using the NADA method for CT image reconstruction was proposed and demonstrated to perform better than l1-norm regularization [19].
In this paper, we develop a new combined l1-norm and l0-norm regularization model for CT image reconstruction from limited projection data. The main contributions of this paper are as follows: (1) We generalize the l1-norm regularization to a combined l1-norm and l0-norm regularization for CT image reconstruction from limited projection data. The proposed model can achieve superior performance compared with existing related methods for several reasons. Firstly, due to the l1-norm regularization term, the proposed model reduces streak artifacts in limited-data CT. Secondly, the l0-norm regularization term mitigates the oversmoothing caused by the l1-norm regularization term: the proposed model is less smoothing, thus better edge-preserving, and provides a better sparsity representation. Thirdly, since the l0-norm is adopted directly, without a smoothed l0-norm approximation, the proposed model has an advantage in suppressing slope artifacts. (2) We propose an algorithm in the algebraic framework to solve the optimization effectively using the nonmonotone alternating direction algorithm with hard thresholding (NADA-HT).
The rest of the paper is organized as follows. In Section 2, we present the combined l1-norm and l0-norm regularization model and propose an algorithm for CT image reconstruction from limited projection data. Numerical experiments comparing the regularization models with and without the l0-norm, together with a discussion, are given in Section 3.

Materials and Methods
The projection data in CT can be modelled as a linear system

u = Φf + e, (1)

where Φ is an m × n² projection matrix, f ∈ R^(n²) represents a 2D n × n image to be reconstructed, e is additive noise with ‖e‖₂ ≤ ε for some known ε > 0, and u ∈ R^m is the noisy projection data. For limited-data reconstruction, the underdetermined system (m ≪ n²) has infinitely many solutions. An optimal solution representing the original image as well as possible can be sought by minimizing an energy function with a TV regularization term [8], where the total variation ‖f‖_TV is the l1-norm of the magnitudes of the discrete gradients,

‖f‖_TV = Σ_{p=1}^{n²} |(∇f)_p|.

Then a 2D image f with sparse gradients and noisy measurements in CT can be reconstructed by solving the following l1-norm minimization with an l2-norm constraint,

min_f ‖f‖_TV subject to ‖Φf − u‖₂ ≤ ε. (4)

However, the image reconstructed by solving (4) exhibits slope artifacts near the edges of the object for limited-angle projection data [20].
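As a concrete illustration, the total variation defined above can be computed with forward differences. The following minimal Python sketch assumes a replicated last row/column (one common discretization; the exact boundary handling used in the paper is not specified):

```python
import numpy as np

def total_variation(f):
    # Forward differences D_p f in the vertical and horizontal directions;
    # the last row/column is replicated, so the boundary differences are zero.
    dx = np.diff(f, axis=0, append=f[-1:, :])  # f[i+1, j] - f[i, j]
    dy = np.diff(f, axis=1, append=f[:, -1:])  # f[i, j+1] - f[i, j]
    # l1-norm of the gradient magnitudes
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))
```

For a piecewise-constant image the TV counts weighted jumps: a 2 × 2 image with one row of zeros and one row of ones has ‖f‖_TV = 2, while any constant image has ‖f‖_TV = 0.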
We consider the following minimization model:

min_f ‖f‖_TV + α‖∇f‖₀ + (γ/2)‖f‖₂² subject to ‖Φf − u‖₂ ≤ ε,

where ‖∇f‖₀ stands for the number of nonzero gradients (∇f)_p, 1 ≤ p ≤ n². The term (γ/2)‖f‖₂², with γ → 0⁺, keeps the energy of the reconstructed image small and reduces the ill-posedness of the CT reconstruction [18].
For convenience, we denote by D_p f ∈ R² the forward difference of f at pixel p in both the horizontal and vertical directions, and the notation ‖D_p f‖₀ is set to be 1 for nonzero D_p f and 0 otherwise. With a penalty parameter β > 0 and auxiliary vectors v_p, the above minimization can be converted to

min_{f,v} Σ_p (‖v_p‖₂ + α‖v_p‖₀) + (β/2)‖Φf − u‖₂² + (γ/2)‖f‖₂² subject to v_p = D_p f, 1 ≤ p ≤ n². (8)

Adopting Lagrangian vectors λ_p ∈ R², 1 ≤ p ≤ n², we convert Minimization (8) into the following problem using the augmented Lagrangian method:

min_{f,v} Σ_p (‖v_p‖₂ + α‖v_p‖₀ − λ_pᵀ(v_p − D_p f) + (μ/2)‖v_p − D_p f‖₂²) + (β/2)‖Φf − u‖₂² + (γ/2)‖f‖₂². (10)

For simplicity, we write Df, v, and λ as vectors whose p-th components are D_p f, v_p, and λ_p, 1 ≤ p ≤ n², respectively. Next, we develop an algorithm for solving Minimization (10). By convention, Minimization (10) can be solved by ADM iteratively. With L denoting the objective function in (10), the k-th step of the ADM involves three procedures:

f^{k+1} = argmin_f L(f, v^k; λ^k), (11)
v^{k+1} = argmin_v L(f^{k+1}, v; λ^k), (12)
λ_p^{k+1} = λ_p^k − μ(v_p^{k+1} − D_p f^{k+1}), 1 ≤ p ≤ n².

First, Minimization (11) can be solved by the gradient descent method with a nonmonotone line search, for example, the NADA.
Next, we seek a solution of (12). Minimization (12) decouples into independent two-dimensional problems, one per pixel:

min_{v_p ∈ R²} ‖v_p‖₂ + α‖v_p‖₀ + (μ/2)‖v_p − w_p‖₂², where w_p = D_p f^{k+1} + λ_p^k/μ, 1 ≤ p ≤ n². (14)

We introduce a hard thresholding (HT) operator H_κ(w) on R² with threshold κ, defined as

H_κ(w) = w if ‖w‖₂ > κ, and H_κ(w) = 0 otherwise.

Theorem 1. Suppose 2μα > 1. Then Minimization (14) has the solution v_p = (1 − 1/(μ‖w_p‖₂)) H_κ(w_p) with κ = (1 + √(2μα))/μ.

Proof. For simplicity, denote z = v_p and w = w_p. The objective function in Minimization (14) is

g(z) = ‖z‖₂ + α‖z‖₀ + (μ/2)‖z − w‖₂².

Without loss of generality, we assume that w ≠ 0, since z_opt = 0 in the case of w = 0.
(i) Case 1: μ‖w‖₂ > 1. We claim that the minimum over nonzero z is g_min = ‖w‖₂ + α − 1/(2μ), attained at z = (1 − 1/(μ‖w‖₂))w. Indeed, for any fixed value of ‖z‖₂, the term ‖z − w‖₂² is minimized when z is a nonnegative scalar multiple of w, so a minimizer z is a scalar multiple of w. Let z = tw, where t > 0 due to the minimization problem. Then the function

g(tw) = t‖w‖₂ + α + (μ/2)(t − 1)²‖w‖₂²

reaches its minimum value at t = 1 − 1/(μ‖w‖₂), which gives g_min = ‖w‖₂ + α − 1/(2μ). The claim is proved.
(ii) Case 2: μ‖w‖₂ ≤ 1. For z = tw with t > 0, the derivative (d/dt) g(tw) = ‖w‖₂ + μ(t − 1)‖w‖₂² ≥ ‖w‖₂(1 − μ‖w‖₂) ≥ 0 for t ≤ 1, so g(tw) ≥ α + (μ/2)‖w‖₂² > g(0) and z_opt = 0.
Finally, comparing the nonzero candidate of Case 1 with z = 0, for which g(0) = (μ/2)‖w‖₂², the nonzero candidate wins precisely when ‖w‖₂ + α − 1/(2μ) < (μ/2)‖w‖₂², i.e., when ‖w‖₂ > (1 + √(2μα))/μ = κ, which yields the hard-thresholded solution stated in Theorem 1.
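The closed form derived in the proof can be sketched in Python as follows. The function name is illustrative; the objective g(z) = ‖z‖₂ + α‖z‖₀ + (μ/2)‖z − w‖₂² and the threshold κ = (1 + √(2μα))/μ follow the derivation above, assuming 2μα > 1 as in the Remark:

```python
import numpy as np

def solve_pixel_subproblem(w, mu, alpha):
    # Minimize g(z) = ||z||_2 + alpha*||z||_0 + (mu/2)*||z - w||_2^2 over z in R^2.
    # Assumes 2*mu*alpha > 1, so the hard threshold kappa decides between
    # the shrunk nonzero candidate (Case 1) and z = 0.
    norm_w = np.linalg.norm(w)
    kappa = (1.0 + np.sqrt(2.0 * mu * alpha)) / mu
    if norm_w > kappa:
        # Nonzero candidate: g = ||w||_2 + alpha - 1/(2*mu) < g(0) = (mu/2)*||w||_2^2
        return (1.0 - 1.0 / (mu * norm_w)) * w
    return np.zeros_like(w)
```

Note that the solver either shrinks w toward the origin by a fixed amount 1/μ in norm or sets it to zero outright, which is what preserves strong edges while zeroing small, noise-induced gradients.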
Remark. In practice, the condition 2μα > 1 for nonzero α in Theorem 1 is fulfilled. Now we summarize our algorithm as follows.
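The summarized algorithm box is not reproduced here; the following Python sketch outlines one plausible NADA-HT iteration consistent with the derivation above. The f-subproblem (11) is approximated by a few plain gradient-descent steps (a simplified stand-in for the NADA's nonmonotone line search), the v-subproblem (12) is solved pixelwise by the hard-thresholding formula of Theorem 1, and the multipliers are then updated. All names, the dense difference matrices, and the fixed-step f-update are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def diff_matrices(n):
    # Dense forward-difference matrices for an n x n image flattened
    # column-wise; a small-scale stand-in for the operator D.
    B = -np.eye(n) + np.eye(n, k=1)
    B[-1, :] = 0.0                       # zero difference at the boundary
    Dx = np.kron(np.eye(n), B)           # differences along columns
    Dy = np.kron(B, np.eye(n))           # differences along rows
    return Dx, Dy

def nada_ht(Phi, u, n, alpha, beta, mu, gamma, iters=100, inner=5):
    Dx, Dy = diff_matrices(n)
    N = n * n
    f = np.zeros(N)
    vx = np.zeros(N); vy = np.zeros(N)   # auxiliary gradient variables v_p
    lx = np.zeros(N); ly = np.zeros(N)   # Lagrangian multipliers lambda_p
    kappa = (1.0 + np.sqrt(2.0 * mu * alpha)) / mu
    # Fixed step from a crude Lipschitz bound for the smooth f-subproblem.
    L = beta * np.linalg.norm(Phi, 2) ** 2 + 8.0 * mu + gamma
    for _ in range(iters):
        # (11) f-step: gradient descent on the augmented Lagrangian in f.
        for _ in range(inner):
            grad = (beta * Phi.T @ (Phi @ f - u) + gamma * f
                    - Dx.T @ (mu * (vx - Dx @ f) - lx)
                    - Dy.T @ (mu * (vy - Dy @ f) - ly))
            f -= grad / L
        # (12) v-step: pixelwise shrinkage with hard thresholding (Theorem 1).
        wx = Dx @ f + lx / mu
        wy = Dy @ f + ly / mu
        wn = np.sqrt(wx ** 2 + wy ** 2)
        shrink = np.where(wn > kappa,
                          1.0 - 1.0 / (mu * np.maximum(wn, 1e-12)), 0.0)
        vx = shrink * wx
        vy = shrink * wy
        # (13) multiplier update.
        lx -= mu * (vx - Dx @ f)
        ly -= mu * (vy - Dy @ f)
    return f
```

On a small synthetic problem with a piecewise-constant image, this sketch recovers the image to modest accuracy; replacing the inner loop with a genuine nonmonotone line search would reproduce the NADA behavior more faithfully.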

Results and Discussion
In this section, the performance of the proposed algorithm is compared with that of the l1-norm regularization algorithm.

Figure 1: Reconstructions of the Shepp-Logan phantom and the cardiac image by the l1 regularization and the combined l0+l1 regularization.
Both algorithms are implemented in MATLAB using the NADA and tested with the 2D Shepp-Logan phantom and a cardiac image [21] of size 128 × 128 on an Intel Core i7 3.40 GHz PC. In each test, the same system u = Φf + e is created, where Φ ∈ R^(m×n²) (m ≈ 0.3n²) is a random matrix and the noise e = 0.02 * mean(Φf) * randn(m). The weight parameter α of the l0-norm term is chosen as 0 for the l1-norm regularization and 1 for the proposed algorithm. Two other important parameters in the objective function are taken as β = 2^8, μ = 2^4 for the Shepp-Logan phantom and β = 2^7, μ = 2^2 for the cardiac image, respectively. Experiments are conducted to compare the reconstructions by the two algorithms after the same number of iterations. The original and reconstructed images after the same numbers of iterations are shown in Figure 1. The images demonstrate that the proposed algorithm yields much better reconstructions. The quality of the reconstructed images is compared using several commonly used criteria: the relative error, the root-mean-square error (RMSE), the normalized root-mean-square deviation (NRMSD), the normalized mean absolute deviation (NMAD), and the structural similarity index (SSIM). The experimental results from 100 tests are summarized in Table 1. They show that the proposed algorithm improves the evaluation measures significantly, by 88%-167% for the Shepp-Logan phantom and 67%-114% for the cardiac image, respectively, while taking less CPU time. Overall, the new algorithm provides better accuracy after the same number of iterations. Figure 2 shows the relative error vs. the number of iterations for the two algorithms. The curves indicate that the proposed algorithm requires far fewer iterations and less time to achieve the same accuracy and converges much faster.
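The evaluation criteria are not restated in the text; the sketch below uses one common set of definitions for the first four measures (the exact conventions behind Table 1 may differ slightly, and SSIM is omitted for brevity):

```python
import numpy as np

def reconstruction_metrics(f_rec, f_true):
    # Representative definitions of the error measures; the paper's
    # exact formulas are not restated and may differ.
    d = f_rec - f_true
    rel_err = np.linalg.norm(d) / np.linalg.norm(f_true)          # relative error
    rmse = np.sqrt(np.mean(d ** 2))                               # RMSE
    nrmsd = np.linalg.norm(d) / np.linalg.norm(f_true - f_true.mean())  # NRMSD
    nmad = np.sum(np.abs(d)) / np.sum(np.abs(f_true))             # NMAD
    return {"rel_err": rel_err, "rmse": rmse, "nrmsd": nrmsd, "nmad": nmad}
```

All four measures are zero for a perfect reconstruction and grow with the deviation from the ground truth, so lower is better in each column of a comparison table.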
After a certain number of iterations, the relative error of the proposed algorithm drops sharply, while the relative error of the l1-norm regularization decreases slowly and does not improve much over further iterations. The proposed regularization model introduces an l0-norm term in addition to the l1-norm term. The l0-norm of a vector is the number of nonzero elements in the vector; thus, the l0-norm is appropriate for representing sparsity. Meanwhile, the l0-norm term makes the model less smoothing. Numerical experiments demonstrate that, in the proposed model, edges are better preserved and a better sparsity representation is provided. Thus, the proposed algorithm improves both the quality and the efficiency of the reconstruction, even with the extra computation from the added l0-norm term.
The effect of the weight parameter α of the l0-norm is also tested. In our experiments, varying α from 0.5 to 1 has almost no effect on the efficiency of the algorithm. Selecting good values of the other parameters β, μ, and γ in Minimization (10) is a challenging problem and will be investigated in future work.
High-quality CT image reconstruction from limited projection data is challenging. Learning-based algorithms such as structure-aware sparse Bayesian learning [22] could yield improved performance in reconstructing tomographic images from limited data, since structural prior knowledge is exploited. Enhancing the proposed algorithm in a learning-based framework using prior knowledge will be a topic of future research.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.