Weighting Algorithm and Relaxation Strategies of the Landweber Method for Image Reconstruction

The iterative approach is important for image reconstruction with ill-posed problems, especially for limited angle reconstruction. Most iterative algorithms can be written in the general Landweber scheme. In this context, appropriate relaxation strategies and appropriately chosen weights are critical for reconstructed images of high quality. In this paper, based on reducing the condition number of the matrix $A^TA$, we derive a method of choosing weighting matrices for the general Landweber method that improves the reconstructed results. For high resolution images, the approximate iterative matrix is derived, and new weighting matrices and corresponding relaxation strategies are proposed for the general Landweber method with a large dimensional number. Numerical simulations show that the proposed weighting methods are effective in improving the quality of the reconstructed image for both complete projection data and limited angle projection data.


Introduction
Image reconstruction plays a major role in many fields such as X-ray computed tomography (CT). Many imaging processes can be modeled as the linear system
$$Ax = b, \qquad (1)$$
where $x = (x_1, x_2, \cdots, x_N)^T$ denotes the reconstructed image, $b = (b_1, b_2, \cdots, b_M)^T$ is the measurement data, and $A = (a_{ij})_{M\times N}$, where $a_{ij}$ equals the intersection length of the $i$-th projection ray with the $j$-th pixel $x_j$. Therefore, the reconstructed image can be obtained by solving this system of linear equations. If (1) has a solution, it is called consistent; otherwise, it is called inconsistent. Iterative algorithms are important for image reconstruction. In this paper, we use a superscript $T$ to denote the transpose of a vector or matrix and $\|\cdot\|$ to denote the 2-norm of a vector or matrix. We use $\mathbb{K}^N$ to denote the Hilbert space with the canonical inner product $\langle\cdot,\cdot\rangle$, where the number field $\mathbb{K}$ can be the reals $\mathbb{R}$ or the complexes $\mathbb{C}$. The orthogonal complement of a subspace $S$ in $\mathbb{K}^N$ is denoted by $S^\perp$. For a matrix $A$, the null space and the range of $A$ are denoted by $N(A)$ and $R(A)$, respectively. The orthogonal projection from $\mathbb{K}^N$ onto the null space $N(A)$ is denoted by $P_A$. Because of the rapid increase in computing power, iterative reconstruction algorithms have become more and more effective. The general Landweber iterative method is the extensively used scheme
$$x^{(n+1)} = x^{(n)} + \lambda_n V A^T W \big(b - A x^{(n)}\big), \qquad (2)$$
where $\lambda_n$ is the relaxation coefficient and the weighting matrices $V$ and $W$ are symmetric positive definite [1]. The weighting and relaxation strategies are important for the quality of the reconstructed image. Formula (2) reduces to specific iterative methods for different weighting matrices. If $V = I_{N\times N}$, where $I$ denotes the identity matrix, and $W = (1/M)\,\mathrm{diag}(1/\|a^1\|^2, \cdots, 1/\|a^M\|^2)$, where $a^i$ denotes the $i$-th row of $A$, then (2) is Cimmino's iterative method [2,3]. If $V^{-1} = \mathrm{diag}(s_j)$ and $W = \mathrm{diag}(1/\sum_{j} s_j a_{ij}^2)$, where $s_j$ ($j = 1, 2, \cdots, N$) is the number of nonzeros in the $j$-th column of $A$, then (2) becomes the simultaneous
version of the diagonally relaxed orthogonal projection (DROP) method [4,5]. The component averaging (CAV) method is obtained with $V = I_{N\times N}$ and $W = \mathrm{diag}(1/\sum_{j} s_j a_{ij}^2)$ [6]. With $V = I_{N\times N}$ and $W = I_{M\times M}$, the Landweber method is as follows:
$$x^{(n+1)} = x^{(n)} + \lambda_n A^T \big(b - A x^{(n)}\big). \qquad (3)$$
Convergence conditions of the method (2) have been established [7-9]. The semiconvergence behavior means that initially the iteration vector approaches a regularized solution, while continuing the iteration often leads to iteration vectors corrupted by noise [10]. Based on the analysis of the semiconvergence behavior, T. Elfving et al. proposed two relaxation strategies [4], which they later improved [11]. If the matrix $A^TA$ is positive definite, then, as a special case of the Richardson iteration and based on minimizing the spectral radius of the iterative matrix, the relaxation coefficient of the Landweber iteration (3) is given by
$$\lambda_n = \frac{2}{\lambda_1 + \lambda_{\min}},$$
where $\lambda_1$ and $\lambda_{\min}$ are the largest and the smallest eigenvalues of $A^TA$ [12,13]. However, if the matrix $A^TA$ is semipositive definite and singular, it is proved that the iteration $x^{(n)}$ generated by the Landweber scheme (3) with this coefficient is divergent because $\lambda_{\min} = 0$ [8]. In our previous work, based on minimizing the spectral radius of the new iterative matrix, a new relaxation coefficient and an accelerating convergence strategy $\lambda_n \in [1/\lambda_1, 2/\lambda_1)$ ($n = 1, 2, \cdots$) were proposed for the semipositive definite matrix $A^TA$, where $\lambda_1$, $\lambda_m$ are the largest and smallest positive eigenvalues of $A^TA$, respectively [9].
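To make the roles of $V$, $W$, and $\lambda_n$ concrete, here is a minimal numpy sketch of the general scheme (2), instantiated with Cimmino's weights; the matrix and data are made-up toy values, not the paper's experiments.

```python
import numpy as np

def general_landweber(A, b, V, W, lam, n_iters, x0=None):
    """General Landweber scheme (2): x <- x + lam * V A^T W (b - A x).

    V (N x N) and W (M x M) are the symmetric positive definite weighting
    matrices; lam is the relaxation coefficient (held constant here,
    although the scheme allows it to vary with n).
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float)
    for _ in range(n_iters):
        x = x + lam * V @ (A.T @ (W @ (b - A @ x)))
    return x

# Toy consistent system (not the paper's phantom).
A = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true

# Cimmino's choice: V = I, W = (1/M) diag(1/||a^i||^2).
M, N = A.shape
V = np.eye(N)
W = np.diag(1.0 / np.sum(A * A, axis=1)) / M
x = general_landweber(A, b, V, W, lam=1.0, n_iters=3000)
print(np.allclose(x, x_true, atol=1e-8))  # the iterates reach the solution
```

Choosing $V = I$ and $W = I$ in the same routine recovers the plain Landweber method (3).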
For high resolution images, the number of discretization points of the original image is large. The image reconstruction problem is ill-posed, and the coefficient matrix of the reconstruction linear system has a high condition number. Therefore, the motivation of this work is to improve the condition number by applying suitable weighting matrices. In this paper, we derive an appropriate weighting matrix that can be used to improve the condition number. In addition, to achieve a suitably high resolution of the reconstructed image, the grid number needs to be increased, and the dimensional number of the original discrete image becomes large. Given an adequately large number of projections, the discrete original image converges to the continuous original image and $A^TA$ approaches $R^*R$, where $R$ is the Radon transform and $R^*$ is its adjoint operator. Here, we do not give a rigorous theoretical proof. Consequently, the smaller eigenvalues of the matrix $A^TA$ tend to zero. The discrete original image can be represented as a linear combination of the eigenvectors of $A^TA$, and the remainder in this combination tends to the zero vector. We then derive the approximate iterative matrix and propose new weighting algorithms for the Landweber method. Based on minimizing the spectral radius of the iterative matrix, we obtain the corresponding relaxation strategies for the proposed methods. Numerical simulations show the advantages of the proposed algorithms in both convergence speed and iteration error compared with the Landweber method and Cimmino's method. The proposed methods are also applied to reconstruction from limited angle projection data and improve the condition number of the matrix $A^TA$.
The organization of the paper is as follows. Section 2 analyzes the relation between the condition number and the spectral radius of the iterative matrix. In Section 3, we give a method of choosing weighting matrices to improve the condition number. For higher resolution images, based on the derived approximate iterative matrix, we propose a new weighting algorithm for the general Landweber method together with the corresponding relaxation strategies. In Section 4, numerical simulations of image reconstruction are carried out. The advantages of the proposed algorithms are verified from two aspects: reconstruction from complete projection data and reconstruction from limited angle projection data. Finally, some relevant conclusions are discussed.

Preliminaries
2.1. Preparations. In this section, the condition number is analyzed through the newly derived representation of the iterative matrix. For the sake of argument, some notations are explained as follows. Let $\{\lambda_k\}_{k=1}^{m}$ be the distinct positive eigenvalues of $A^TA$, ordered such that $\lambda_1 > \cdots > \lambda_m > 0$. Each $\lambda_k$ has algebraic multiplicity $q_k$ for $1 \le k \le m$. Let $u_{j,k}$ ($1 \le j \le q_k$) be the corresponding orthonormal eigenvectors. It follows that
$$A^TA\,u_{j,k} = \lambda_k u_{j,k}.$$
If $0$ is an eigenvalue of $A^TA$ with algebraic multiplicity $q$, let $v_j$ be the corresponding orthonormal eigenvectors for $1 \le j \le q$; then $A^TA\,v_j = \theta$, where $\theta$ is the zero vector. With the above notations, the 2-norm of the matrix $A$ satisfies $\|A\| = \sqrt{\lambda_1}$. Let $U_k = (u_{1,k}, \cdots, u_{q_k,k})$ and $U = (U_1, \cdots, U_m, V)$ with $V = (v_1, \cdots, v_q)$; then we have
$$U^T A^TA\,U = \operatorname{diag}\big(\lambda_1 I_{q_1}, \cdots, \lambda_m I_{q_m}, O_q\big), \qquad (9)$$
where $I_{q_k}$ represents the $q_k$-order identity matrix and $O_q$ denotes the $q$-order null matrix. The least-squares functional $f(x) = \frac{1}{2}\|Ax-b\|^2$ is used to find the minimal 2-norm solution of (1). The gradient of $f(x)$ is
$$\nabla f(x) = A^T(Ax-b).$$
All the minimizers of $f(x)$ satisfy the equation $\nabla f(x) = \theta$, i.e.,
$$A^TA\,x = A^T b, \qquad (12)$$
which is called the normal equation. Equation (12) is always solvable and has a minimal 2-norm solution $x^+$, which is equal to $A^\dagger b$, where $A^\dagger$ is the Moore-Penrose inverse of $A$ [14]. If (1) is inconsistent, which here is caused only by noise, $b$ can be decomposed into two parts $b = b_0 + b_\delta$, where $b_0 \in R(A)$, $b_\delta \in R(A)^\perp$, and $b_\delta$ is caused by the noise. By the orthogonal decomposition, $R(A)^\perp = N(A^T)$. Thus, for the inconsistent equation, the treatment is as follows:
$$A^T b = A^T(b_0 + b_\delta) = A^T b_0.$$
The minimal 2-norm solution $x^+$ can be written as
$$x^+ = \sum_{k=1}^{m}\sum_{j=1}^{q_k} c_{j,k}\,u_{j,k},$$
where $c_{j,k} \in \mathbb{K}$, for $1 \le k \le m$, $1 \le j \le q_k$. The initial guess $x^{(0)}$ can be decomposed as
$$x^{(0)} = \sum_{k=1}^{m}\sum_{j=1}^{q_k} c^{(0)}_{j,k}\,u_{j,k} + P_A\big(x^{(0)}\big),$$
where $c^{(0)}_{j,k} \in \mathbb{K}$, for $1 \le k \le m$, $1 \le j \le q_k$. We use the following convergence results from [8,9].

Theorem 1. For $n = 1, 2, \cdots$, the $n$-th iterate $x^{(n)}$ generated by the Landweber method (3) satisfies
$$x^{(n)} = \sum_{k=1}^{m}\sum_{j=1}^{q_k}\Big[c_{j,k} + \prod_{i=0}^{n-1}\big(1-\lambda_i\lambda_k\big)\big(c^{(0)}_{j,k}-c_{j,k}\big)\Big]u_{j,k} + P_A\big(x^{(0)}\big), \qquad (16)$$
where the $\lambda_i$ are the relaxation coefficients and $\lambda_1$ is the largest eigenvalue of $A^TA$.
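The key facts about the normal equation, that it is always solvable and that its minimal 2-norm solution equals the Moore-Penrose solution $A^\dagger b$ orthogonal to $N(A)$, can be checked numerically. A small sketch with made-up values (not from the paper):

```python
import numpy as np

# A is deliberately rank-deficient: the third column is the sum of the
# first two, so N(A) is nontrivial and (12) is singular but solvable.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

x_plus = np.linalg.pinv(A) @ b        # minimal 2-norm least-squares solution

# x^+ solves the normal equation A^T A x = A^T b ...
print(np.allclose(A.T @ A @ x_plus, A.T @ b))

# ... and is orthogonal to the null space N(A), hence has minimal norm.
null_vec = np.array([1.0, 1.0, -1.0])  # A @ null_vec = 0
print(np.allclose(A @ null_vec, 0), abs(x_plus @ null_vec) < 1e-10)
```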

Theorem 2. (a) For $n \ge 1$, the deviation $x^{(n)} - (x^+ + P_A(x^{(0)}))$ is spanned by the eigenvectors $u_{j,k}$ and is governed by the products $\prod_{i=0}^{n-1}(1-\lambda_i\lambda_k)$ in (16).
(b) Assume that the relaxation coefficients are chosen such that $0 < \lambda_n < 2/\lambda_1$ for $n \ge 0$. Then the iteration $x^{(n)}$ generated by the Landweber method (3) converges to a solution of the normal equation (12) for any $b \in \mathbb{K}^M$ and any initial guess $x^{(0)}$. Under these conditions, the limit of the iteration $x^{(n)}$ is equal to $x^+ + P_A(x^{(0)})$.

For the Landweber method (3), by (16), we define the error vector $e^{(n)}$ and the iterative matrix $Q(\lambda_n)$:
$$e^{(n)} = x^{(n)} - \big(x^+ + P_A(x^{(0)})\big), \qquad Q(\lambda_n) = I - \lambda_n A^TA.$$
Then
$$e^{(n+1)} = Q(\lambda_n)\,e^{(n)}$$
and
$$e^{(n)} = \prod_{i=0}^{n-1} Q(\lambda_i)\,e^{(0)},$$
where $e^{(0)} = x^{(0)} - (x^+ + P_A(x^{(0)}))$. The following two relaxation strategies were proposed in [9].
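The error recursion $e^{(n+1)} = Q(\lambda_n)\,e^{(n)}$ can be verified numerically; the following sketch uses illustrative values of our own (not the paper's data), with a rank-deficient $A$ so that the null-space component $P_A(x^{(0)})$ is nontrivial:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.normal(size=(5, 2))
A = np.column_stack([C, C[:, 0] + C[:, 1]])   # rank 2, N(A) = span{(1,1,-1)}
b = rng.normal(size=5)                        # possibly inconsistent data

x_plus = np.linalg.pinv(A) @ b                # minimal-norm solution of (12)
x = rng.normal(size=3)                        # initial guess x^(0)
PA_x0 = x - np.linalg.pinv(A) @ (A @ x)       # projection of x^(0) onto N(A)
lam = 1.0 / np.linalg.eigvalsh(A.T @ A).max() # a fixed admissible relaxation

limit = x_plus + PA_x0                        # the limit from Theorem 2(b)
Q = np.eye(3) - lam * A.T @ A                 # iterative matrix Q(lam)
e = x - limit
ok = True
for _ in range(5):
    x = x + lam * A.T @ (b - A @ x)           # Landweber step (3)
    e_pred = Q @ e                            # predicted e^{(n+1)} = Q e^{(n)}
    e = x - limit                             # actual error vector
    ok = ok and np.allclose(e, e_pred)
print(ok)
```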

Relation between the Condition Number and Spectral Radius

Definition 5. The condition number of the matrix $A^TA$ is defined as
$$K\big(A^TA\big) = \frac{\lambda_1}{\lambda_m},$$
where $\lambda_1$, $\lambda_m$ are the largest and smallest positive eigenvalues of the matrix $A^TA$, respectively.
In the following, we analyze the relation between the condition number and the spectral radius. Let $\varepsilon_n = \lambda_n \lambda_1$; then the equivalent form of (3) can be written as
$$x^{(n+1)} = x^{(n)} + \frac{\varepsilon_n}{\lambda_1}\,A^T\big(b - Ax^{(n)}\big). \qquad (22)$$
Based on minimizing the spectral radius of the newly derived iterative matrix $Q(\varepsilon_n)$ in (22), if $\lambda_1$ and $\lambda_m$ are both known, then the optimal relaxation strategy is
$$\varepsilon_n = \frac{2\lambda_1}{\lambda_1 + \lambda_m}. \qquad (25)$$
If only $\lambda_1$ is known, then the second relaxation strategy is $\varepsilon_n \in [1, 2)$. For clarity, we use $\rho(Q(\varepsilon_n), A)$ to represent the spectral radius of the iterative matrix $Q(\varepsilon_n)$ in (22) relative to the matrix $A$. By (22) and Lemma 4, the spectral radius can be expressed as in (31). According to (32), the minimum is attained as stated in (34); therefore, in this case, combining (32) and (34) yields (35)-(37). If the matrix $A$ in (28) is taken to be $A_1$ and $A_2$, respectively, two iterative algorithms (38) and (39) are obtained, where $\lambda^{(1)}_1$, $\lambda^{(2)}_1$ are the largest eigenvalues of $A_1^T A_1$, $A_2^T A_2$, respectively. The iterative matrices corresponding to the two algorithms (38), (39) are denoted by $Q_1(\varepsilon_n)$, $Q_2(\varepsilon_n)$. By (35), (36), and (37), we have the following theorem.
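The link between the condition number of Definition 5 and the spectral radius can be illustrated numerically: with the optimal relaxation $\lambda = 2/(\lambda_1 + \lambda_m)$, the iterative matrix $I - \lambda A^TA$ has spectral radius $(K-1)/(K+1)$ on the range of $A^T$, so reducing $K$ speeds up convergence. A sketch of our own with a random full-column-rank matrix:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(40, 10))
eig = np.linalg.eigvalsh(A.T @ A)
lam1, lamm = eig.max(), eig.min()   # A has full column rank, so lamm > 0
K = lam1 / lamm                     # condition number of Definition 5

lam_opt = 2.0 / (lam1 + lamm)       # optimal relaxation coefficient
# spectral radius of I - lam*A^T A over the positive eigenvalues
rho = max(abs(1.0 - lam_opt * lam1), abs(1.0 - lam_opt * lamm))
print(np.isclose(rho, (K - 1.0) / (K + 1.0)))
```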

Reduction of Condition Number.
According to Proposition 6, we consider making $K(A^TA)$ smaller by means of the weighting matrices $V$, $W$ in (2). Since $\lambda_1 > \lambda_2 > \cdots > \lambda_m > 0$ is decreasing, by (9), using $U$ as the transformation matrix of the similarity diagonalization, the positive eigenvalues of the diagonalized matrix constitute an increasing sequence.
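Although stated in general terms above, the effect of weighting on the condition number can be seen in a toy numpy example of our own (the matrix and the Cimmino-style row weighting are illustrative, not the paper's $V$, $W$):

```python
import numpy as np

# A badly row-scaled matrix: A^T A is ill-conditioned.
A = np.array([[100.0, 0.0],
              [  0.0, 1.0],
              [  1.0, 1.0]])

# Cimmino-style weighting W = diag(1/||a^i||^2) rebalances the rows.
W = np.diag(1.0 / np.sum(A * A, axis=1))

def cond(B):
    """Ratio of largest to smallest eigenvalue of a positive definite B."""
    e = np.linalg.eigvalsh(B)
    return e.max() / e.min()

# The weighted matrix A^T W A has a much smaller condition number.
print(cond(A.T @ W @ A) < cond(A.T @ A))
```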

Analysis as the Dimensional Number Increases.
In this subsection, we analyze the relations between the original continuous image and the discrete image, the Radon transform, and the coefficient matrix of the reconstruction algebraic system. Based on these relations, the weighting methods are proposed. The original image is generally a continuous function $f$, assumed to be square-integrable. Algebraic iterative reconstruction discretizes the original continuous image and the Radon transform $R$ into a discrete image and a linear system, respectively, and then finds an iterative solution of the linear system. To reach a suitably high reconstructed resolution for applications, the discretization grid is refined and the grid number increases, so the dimensional numbers of the iterative solution and of the coefficient matrix $A$ become larger. If there are sufficiently many projection data, the discrete original image converges to the continuous original image and $A^TA$ approaches $R^*R$ as the grid number increases in a certain form. Since the imaging object lies in a bounded domain, the image function is compactly supported. For the Radon transform $R$ on a bounded domain, $R$ has a singular value decomposition and $R^*R$ is a positive definite compact operator [10]. By the spectral theory of positive definite compact operators, as the dimension of the matrix $A^TA$ becomes very large, the smaller positive eigenvalues of $A^TA$ tend to $0$ [10]. Since the Radon transform $R$ is uniquely invertible, the linear system (1) has a unique solution when the number of projection data is appropriately large [10]. Due to the space limit of the paper, we do not prove these statements. The original discrete image $x$ can then be written as
$$x = \sum_{k=1}^{m}\sum_{j=1}^{q_k} c_{j,k}\,u_{j,k} + \sum_{j=1}^{q} d_j v_j, \qquad (77)$$
where $c_{j,k} \in \mathbb{K}$, for $1 \le k \le m$, $1 \le j \le q_k$, and the eigenvectors $u_{j,k}$ ($j = 1, \ldots, q_k$; $k = 1, \ldots, m$), together with the $v_j$, constitute an orthonormal basis of the Hilbert space $\mathbb{K}^N$. As the grid number increases, the value of $m$ increases and the original discrete image $x$ tends to the original image $f$. Assume that $\varepsilon$ is an allowed iterative error. The remainder in (77) is denoted by $r_{m_0}$, where $m_0$ is unknown. If the reconstruction error vector satisfies $\|e^{(n)}\| < \varepsilon$, there exists $m_0$ such that $\|r_{m_0}\| < \varepsilon/2$, so $r_{m_0}$ can be neglected. By Theorem 2, the iteration then evolves, up to this negligible remainder, only in the span of the eigenvectors associated with the retained eigenvalues. By (47), the approximate iterative matrix is given in (81).

Weighting and Relaxation Strategies.
Based on the analysis in Section 3.2.1 and on minimizing the condition number of the approximate iterative matrix (81), by Proposition 7, the parameter $a$ in (46) is chosen accordingly.

The criteria used to evaluate the reconstructions are as follows: the first describes the overall nearness between the original and the reconstructed image, the second is the mean relative error of the reconstruction, and the third is the maximum difference between the original and the reconstructed image. The smaller the values of these criteria, the better the reconstructed results.
Based on the relaxation strategies proposed in Section 3, for the Landweber, Cimmino, NWL-1, and NWL-2 algorithms, the corresponding relaxation coefficients are obtained from the conclusions (25), (87), and (88), respectively. For the NWL-a algorithm, the values of the parameter $a$ and of $\lambda_n$ satisfy $a \in [1.002, 1.5]$, $\lambda_n \in [2/(a^2\lambda_1), 8/(a^2\lambda_1)]$. The principle for determining the number of iterations is that the relative error should not decrease significantly as the number of iterations increases further. According to repeated experiments, the number of iterations is chosen as 200 and 1000 for the complete projection data and the limited angle projection data, respectively.
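A hypothetical helper (names are ours, not the paper's) encoding the reported NWL-a settings: $a$ is drawn from $[1.002, 1.5]$ and the relaxation coefficient from $[2/(a^2\lambda_1), 8/(a^2\lambda_1)]$, where $\lambda_1$ is the largest eigenvalue of $A^TA$:

```python
def nwl_a_relaxation(a, lam1, c):
    """Return lam_n = c/(a^2*lam1), validating the reported ranges."""
    if not (1.002 <= a <= 1.5):
        raise ValueError("a outside the reported interval [1.002, 1.5]")
    if not (2.0 <= c <= 8.0):
        raise ValueError("c outside the reported interval [2, 8]")
    return c / (a * a * lam1)

lam1 = 3.7                                   # made-up largest eigenvalue
lam_n = nwl_a_relaxation(1.01, lam1, c=7.0)  # the pairing reported for a = 1.01
lo, hi = 2.0 / (1.01**2 * lam1), 8.0 / (1.01**2 * lam1)
print(lo <= lam_n <= hi)
```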

Complete Projection Data.
We use 180 projections with 95 rays per projection for the reconstruction from complete projection data. The calculated coefficient matrix $A$ has dimensions $17100 \times 4096$. For the NWL-a algorithm, the corresponding values of $a$ and $\lambda_n$ are as follows.
The largest eigenvalue of $A^TA$ is denoted by $\lambda_1$.
For $a = 1.002$, $\lambda_n = 8/(a^2\lambda_1)$; for $a = 1.005$, $\lambda_n = 7.5/(a^2\lambda_1)$; for $a = 1.01$, $\lambda_n = 7/(a^2\lambda_1)$; and for $a = 1.2$, $\lambda_n = 5/(a^2\lambda_1)$. We compare the performance of the different algorithms in terms of reconstruction error and image distances. Table 1 shows the computation time for completing 200 iterations by formula (90). It can be seen that the times spent by the algorithms are similar, except that the Cimmino algorithm takes a little longer.

Noise-Free Projection Data.
The comparison of relative errors from the proposed algorithms is shown in Figure 1, from which we observe that the relative errors obtained from the NWL-a method are smaller than those obtained from the Cimmino method, the NWL-1 and NWL-2 methods, and the Landweber method. Figure 2 shows the reconstructed phantom images from the noise-free projections. It can be seen that the reconstructed images obtained from the NWL-a method are clearer than those obtained from the Cimmino method, the NWL-1 and NWL-2 methods, and the Landweber method. The image distances for the noise-free projection data are given in Table 2, from which we see that the image distances from the NWL-a algorithm are smaller than those from the Cimmino and Landweber algorithms. Figure 3 plots the curves of relative error and shows that the relative errors from the NWL-a method are smaller than those from the other methods. Figure 4 shows the comparisons of the 32nd column data in the original and reconstructed images. It is observed that the main features of the reconstructed images obtained from the NWL-a method with $a = 1.01$ are closest to those of the original image. The image distances generated by the different algorithms are given in Table 3, from which we see that the image distances from the NWL-a algorithm are smaller than those from the DROP and CAV algorithms. The image distances for the noisy projection data are given in Table 4, from which we see that the image distances from the NWL-a algorithm are smaller than those from the Cimmino and Landweber algorithms. The conclusion is the same as above.
The quality of the reconstructed images obtained from the NWL-a method is better than the quality of those obtained from the other methods.

Limited Angle Reconstruction.
In the theory of the continuous case, the problem of limited angle reconstruction has been discussed in [17-20]. Assume that $f \in L^2(\mathbb{R}^2)$ is compactly supported with $\operatorname{supp}(f) \subset B = \{x \in \mathbb{R}^2 : |x| \le \rho\}$. In two-dimensional space, $S^1_+ = \{\theta = (\cos\varphi, \sin\varphi) : 0 \le \varphi \le \pi\}$. Let $\Theta_0 = \{\theta = (\cos\varphi, \sin\varphi) : 0 \le \varphi \le \Phi < \pi\}$. The Radon transform of $f$ is defined by
$$Rf(s,\theta) = \int_{x\cdot\theta = s} f(x)\,dx.$$
Let $\Omega = \mathbb{R}\times S^1_+$ and $\Omega_0 = \mathbb{R}\times\Theta_0$. If $Rf(s,\theta)$ is known only for $s \in \mathbb{R}$ and a subset $\Theta_0 \subset S^1_+$, the data are called angle-limited. By the projection slice theorem, the angle-limited data determine the Fourier transform $\mathcal{F}f$ on the corresponding set of directions. Let $\chi_B$ and $\chi_{\Omega_0}$ be the characteristic functions of $B$ and $\Omega_0$, and let $g = \chi_{\Omega_0}\mathcal{F}f$. Then
$$g = \chi_{\Omega_0}\mathcal{F}\mathcal{B}f, \qquad (101)$$
where $\mathcal{B}f = \chi_B f$ and $\chi_{\Omega_0}\mathcal{F}\mathcal{B}$ is the composed operator. We let $\mathcal{A} = \chi_{\Omega_0}\mathcal{F}\mathcal{B}$. It follows from (101) that the adjoint operator $\mathcal{A}^*$ of $\mathcal{A}$ is given by (102). Multiplying both sides of (101) by $\mathcal{A}^*$, we obtain
$$\mathcal{A}^*\mathcal{A}f = \mathcal{A}^*g. \qquad (103)$$
The integral equation (101) can be uniquely solved since $\mathcal{A}^*\mathcal{A}$ is self-adjoint and positive definite [18]. Therefore the eigenvalues $\mu_k$ of $\mathcal{A}^*\mathcal{A}$ satisfy $1 > \mu_1 \ge \mu_2 \ge \cdots$. It turns out that the eigenvalues $\mu_k$ split into two rather distinct subsets: some are "close to 1" and the rest are "close to 0" [18]. When the range of known limited angles becomes small, more eigenvalues are "close to 0", which makes the ill-posed problem more serious and degrades the quality of the reconstructed image. Here we do not study the theoretical relation between the eigenvalue distribution of the derived equation (103) in the continuous case and that of the discrete algebraic linear systems of the limited angle reconstruction problem; we simply use our methods to improve the condition number of the discrete algebraic linear systems, so as to improve the results of limited angle reconstruction. From Figure 8, we see that the reconstructed images from the NWL-a method are clearer than those obtained from the other methods. Although the results obtained from the Cimmino method are good, it is time-consuming to calculate the weighting matrix $W$ of the Cimmino method before the iteration begins. Figure 9 shows that the NWL-a method makes the relative
error smaller. The comparison of the 32nd column data is shown in Figure 10, from which we observe that, for incomplete projection data, the reconstructed images deviate somewhat from the original image. However, the reconstructed images obtained from the NWL-a method are the closest to the original image.
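To make the angle-limited restriction concrete, the following toy numpy sketch (our own construction, using a simplistic nearest-pixel ray model rather than the intersection-length model of (1)) builds a small projection matrix and shows that limited-angle data amounts to deleting the rows of the missing views:

```python
import numpy as np

def projection_matrix(n, angles):
    """Nearest-pixel parallel-beam projector: one 0/1 indicator row per ray."""
    rows = []
    c = (n - 1) / 2.0
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for th in angles:
        # signed coordinate of each pixel centre along the detector axis
        s = (jj - c) * np.cos(th) + (ii - c) * np.sin(th)
        bins = np.clip(np.round(s + c).astype(int), 0, n - 1)
        for k in range(n):
            rows.append((bins == k).astype(float).ravel())
    return np.array(rows)

n = 8
full_angles = np.linspace(0.0, np.pi, 18, endpoint=False)
limited_angles = full_angles[full_angles < np.pi / 3 + 1e-9]  # keep 0..60 deg

A_full = projection_matrix(n, full_angles)
A_limited = projection_matrix(n, limited_angles)

# Every pixel is hit by exactly one ray per view, so the column sums equal
# the number of angles; the limited-angle system simply drops the rows
# (the chi_{Omega_0} restriction) belonging to the missing views.
print(np.allclose(A_full.sum(axis=0), len(full_angles)))
print(A_limited.shape[0] < A_full.shape[0])
```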
Figure 11 shows the results reconstructed by the FBP (filtered backprojection), NWL-1, NWL-2, and Cimmino methods. Figure 12 shows the results reconstructed by the NWL-a method with $a = 1.005$, $a = 1.01$, and $a = 1.2$. From these figures, it is observed that the quality of the reconstructed images becomes worse as the limited angle range narrows. Corresponding to Figure 12, the image distances from the NWL-a method are reported in Tables 5 and 6. It can be seen that the image distances obtained by the NWL-a method with $a = 1.005$, $1.01$, $1.2$ are similar for each kind of limited angle projection data. The values of the image distances become larger as the limited angle range narrows, which is consistent with expectations. However, the reconstructions of the main features of the original image are clear when the range of the limited angle is at least $90^\circ$; in contrast with the other methods, the reconstructed images obtained via the NWL-a method are clearer. The reconstructed images are not clear when the range of the limited angle is less than $90^\circ$. Figures 13 and 15 show the comparison of relative errors from the six iterative methods. It is observed that the convergence speed of the NWL-a method is the fastest. From Figures 14 and 16, we see that the reconstructed image obtained from the NWL-a method is closer to the original image. The numerical simulations of limited angle reconstruction show that the NWL-a method improves the reconstruction results.
The simulation results show that the proposed weighting method NWL-a ($a \in [1.002, 1.5]$) has better convergence results.

Conclusion
Based on the analysis of the relation between the condition number and the spectral radius of the iterative matrix, new weighting matrices have been derived to improve the condition number. For higher resolution reconstructed images, the approximate iterative matrix has been derived for the weighted Landweber method with a large dimensional number. Accordingly, the new weighting algorithm NWL-a and the corresponding relaxation strategies for the weighted Landweber method are proposed. Numerical simulations verify the feasibility and the better performance of the NWL-a method for complete projection data, and especially for limited angle projection data. For $a \in [1.002, 1.5]$, the reconstruction results obtained by NWL-a are similar; therefore, when using the NWL-a method, the value of the parameter $a$ can be selected from this interval. Since we have no imaging laboratory here, the proposed algorithm has not yet been applied to actual experiments, which will be part of our future work.

Figure 1: Relative error histories generated from seven iterative algorithms.

Figure 4: Comparisons of the 32nd column in the reconstructed and original images. The red line is the reconstruction and the blue line represents the original image.

Figure 9: Relative error histories generated from seven iterative algorithms for the limited angle $0^\circ$-$119^\circ$.

Figure 10: Comparisons of the 32nd column in the reconstructed and original images. The red line is the reconstruction and the blue line represents the original image.

Figure 11: The first to fourth rows are the results reconstructed by FBP, NWL-1, NWL-2, and Cimmino, respectively, corresponding to four kinds of limited angle projection data.

Figure 12: The first to third rows are the results reconstructed by NWL-a with $a = 1.005$, $a = 1.01$, and $a = 1.2$, respectively, corresponding to four kinds of limited angle projection data.

Figure 15: Relative error histories generated from seven iterative algorithms for the limited angle $0^\circ$-$79^\circ$.

Figure 16: Comparisons of the 32nd column in the reconstructed and original images. The red line is the reconstruction.

Table 1: Computation time for completing 200 iterations.
4.1.2. Noisy Projection Data. In this section, we test the performance of the different algorithms on projection data with noise. For this purpose, white Gaussian noise with a signal-to-noise ratio (SNR) of 10 is added to the projections $b$. The reconstructed results are shown in Figures 5, 6, and 7.

Table 2: Image distances from different methods for the noise-free projection data.

Table 3: Comparison of image distances with the DROP and CAV algorithms.

Table 4: Image distances from different methods for the noisy projection data.
4.2.1. Limited Angle $0^\circ$-$119^\circ$. To test the effectiveness of the new algorithms in this respect, image reconstruction using limited angle projection data is simulated in this section. The projection angles are set from $0^\circ$ to $119^\circ$, and we use 120 projections with 95 rays per projection. The calculated coefficient matrix $A$ has dimensions $11400 \times 4096$. The number of iterations is 1000. For the NWL-a algorithm, the corresponding values of $a$ and $\lambda_n$ are as follows. The largest eigenvalue of $A^TA$ is denoted by $\lambda_1$. For $a = 1.005$, $\lambda_n = 8/(a^2\lambda_1)$; for $a = 1.01$, $\lambda_n = 7/(a^2\lambda_1)$; and for $a = 1.2$, $\lambda_n = 5/(a^2\lambda_1)$. To complete 1000 iterations, the Cimmino method and the proposed methods each take about 15 seconds.