Antireflective Boundary Conditions for Deblurring Problems

This survey deals with the use of antireflective boundary conditions for deblurring problems. The issues we consider are the precision of the reconstruction when noise is absent, the linear algebra related to these boundary conditions, and iterative and noniterative regularization solvers when noise is present, both from the viewpoint of computational cost and from that of reconstruction quality. In the latter case, we consider a reblurring approach that replaces the transposition operation with correlation. For many of the items considered, the antireflective algebra arising from the given boundary conditions is the optimal choice. Numerical experiments corroborating the previous statements and a concluding section end the paper.


Introduction
Formation of a blurred signal/image is typically modeled as a linear system

g = A f,

where f is the true object, g is the blurred object, and A models the blurring process. Obtaining an accurate deblurring model (i.e., the matrix A) requires essentially two main pieces of information: (1) identification of the blur operator, called the point spread function (PSF); (2) the choice of an appropriate boundary condition (BC), since the observed image is necessarily finite.
The identification of the blur operator is related to the infinite-dimensional problem and determines the essential structure of the matrix A. In the spatially invariant case, thanks to the shift invariance of the blurring, for image deblurring we obtain a two-level Toeplitz structure (i.e., a two-level Toeplitz matrix or, in a different language, a block Toeplitz matrix with Toeplitz blocks). The choice of the BCs influences small sections of A through a low-rank plus low-norm correction term. However, these correction matrices have a substantial impact in two important directions: (a) the precision of the reconstruction, especially close to the boundaries (presence of ringing effects); (b) the cost of the computation for recovering the "true" image from the blurred one, with or without noise.
On the other hand, the involved correction matrices do not significantly modify the ill-conditioned subspaces (those where the coefficient matrix A shows small eigenvalues). This happens because the global matrix is of convolution type. Therefore, the eigenvectors look globally like the Fourier vectors m_k (they are exactly the Fourier vectors when periodic BCs are imposed). Conversely, the small-rank matrices induced by the chosen BCs are not generic, since they are necessarily localized in space. More specifically, in one dimension there exists k ≪ n, n being the size of the matrix, such that the low-rank correction belongs to the span of the canonical matrices E_{i,j} = e_i e_j^T with i, j ≤ k and/or i, j ≥ n + 1 − k, e_k being the kth vector of the canonical basis. In that case, we observe ‖E_{i,j} m_k‖_2 ≪ ‖m_k‖_2, with ‖·‖_2 being the classical Euclidean norm, and hence the term E_{i,j} is unable to modify significantly the frequency characterization of the eigenspaces. In actuality, due to the relation ‖E_{i,j} m_k‖_2 ≪ ‖m_k‖_2, the effect induced by any E_{i,j} is more or less uniformly distributed over all the Fourier directions.
For image deblurring, a classical choice is to use zero (Dirichlet) BCs (see Andrews and Hunt [1], pp. 211−220), where we assume that the values of the image f outside the domain of consideration are zero. The corresponding blurring matrix is two-level Toeplitz, which is known to be computationally expensive to invert when direct solvers are used (see for instance the review [2]): the fastest available direct Toeplitz solver requires O(n^4) operations for an n^2-by-n^2 two-level Toeplitz matrix [3]. In addition, there is no good news when applying iterative methods, due to the negative results in [4–6], where it is proved that we should not expect the classical matrix-algebra preconditioners to lead to optimal iterative solvers in the ill-conditioned case. More specifically, we lose the strong cluster at unity of the preconditioned matrix sequence [4, 5] and the spectral boundedness of the preconditioned matrix sequence and of its inverse [6]. Therefore, by the convergence analysis of Axelsson and coworkers (see, e.g., the pioneering paper [7]), we cannot expect optimal convergence (i.e., a convergence rate independent of the matrix size) either for Krylov-type methods or for classical stationary solvers (with the partial exception of multigrid-type techniques; see [8, 9] for a recent adaptation in the context of image deblurring). In image restoration problems we consider regularized problems, so the previous statement is less relevant; however, for linear systems associated with Tikhonov regularization, optimality could guarantee a convergence speed independent of the regularization parameter (for a discussion of this kind of robustness see [9, 10]). Moreover, an image has boundaries, and if the true image is not close to zero at the boundaries, Dirichlet BCs may introduce an artificial discontinuity, which in turn results in the well-known Gibbs phenomenon (observed as a ringing effect in the reconstructed image). Therefore, we face substantial problems
both with respect to (a) and (b). A possibility for drastically reducing the computational cost is to impose periodic BCs, since this implies that A is a two-level circulant matrix, which can be efficiently inverted by means of a few fast Fourier transforms (FFTs). In this case we have a good solution to problem (b) (see Gonzalez and Woods [11], p. 258), but often the image at the right (upper) boundary has nothing to do with the image at the left (lower) boundary. Again, this implies that we are introducing an artificial discontinuity, which in turn leads to ringing effects. An important proposal in the direction of reducing the ringing effects and of designing fast algorithms is the one by Ng et al. [12] (see also Strang [13], page 145, and Chan et al. [14]). In particular, they consider Neumann (reflective) BCs, which means that the data outside the domain of consideration are taken as a reflection of the data inside. The approach is fast for circularly symmetric PSFs, since the corresponding blurring matrix belongs to a special two-level algebra simultaneously diagonalized by the fast cosine transform (DCT III). Because a two-level DCT III requires only real multiplications and can be done at half the cost of a single FFT (see Rao and Yip [15], pp. 59-60), inversion of these matrices is substantially faster than for those obtained from zero BCs and is comparable to (or a little better than) the case of periodic BCs. Thus, for circularly symmetric PSFs, requirement (b) is satisfied when using reflective BCs. Moreover, the artificial boundary discontinuities are eliminated, and therefore the ringing effects are strongly reduced. However, although reflection guarantees continuity, it generally fails to guarantee continuity of the normal derivative, except in the nongeneric case where the normal derivative is zero at the boundary. Therefore, if the image is smooth, the reflective BCs only preserve continuity but introduce an artificial discontinuity of the normal derivative (though
requirement (b) is substantially improved with respect to periodic and Dirichlet BCs). In the image processing literature, other methods have also been proposed: boundary values can be assigned as a local image mean, or obtained by extrapolation methods in order to ensure continuity (see Lagendijk and Biemond [16], p. 22, and the references therein). However, these techniques, while meeting the approximation requirement (a), generally do not satisfy the computational requirement (b), so we still have to face a difficult linear algebra problem.
Here we follow the approach in [17], in which new BCs are proposed that can further reduce ringing effects and that, under the assumption of symmetry of the PSF, lead to a matrix A belonging to the sine algebra of type I, for which fast (real) algorithms are available. More precisely, we consider the use of antireflective BCs (AR-BCs), where the data outside the domain of consideration are taken as an antireflection of the data inside. Consider the original signal f = (..., f_{−m+1}, ..., f_0, f_1, ..., f_n, f_{n+1}, ..., f_{n+m}, ...)^T and the normalized blurring function

h = (h_{−m}, ..., h_0, ..., h_m)^T, Σ_j h_j = 1, h_j = 0 for |j| > m. (2)

The blurred signal g is the convolution of h and f, and consequently g_i = Σ_{j=−∞}^{∞} h_j f_{i−j}. The deblurring problem is to recover the vector f = (f_1, ..., f_n)^T given the blurring function h and a blurred signal g = (g_1, ..., g_n)^T of finite length, that is,

g_i = Σ_{j=−m}^{m} h_j f_{i−j}, i = 1, ..., n. (3)

Thus the blurred signal g is determined not by f only, but also by (f_{−m+1}, ..., f_0)^T and (f_{n+1}, ..., f_{n+m})^T, and the linear system (3) is underdetermined. To overcome this, we make certain assumptions (called BCs) on the unknown boundary data f_{−m+1}, ..., f_0 and f_{n+1}, ..., f_{n+m}, in such a way that the number of unknowns equals the number of equations.
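As a quick numerical illustration (a sketch in NumPy; the PSF, signal, and sizes are made up for the example), one can check that the finite data g in (3) depend on samples of f outside the field of view, so that the n equations alone are underdetermined, while the interior equations are unaffected by the boundary data:

```python
import numpy as np

m, n = 2, 6
rng = np.random.default_rng(0)
h = rng.random(2 * m + 1)
h /= h.sum()                        # normalized PSF, stored as [h_{-m}, ..., h_m]

f_inner = rng.random(n)             # samples f_1, ..., f_n inside the field of view

def observe(left, right):
    # g_i = sum_{j=-m}^{m} h_j f_{i-j}, i = 1, ..., n, for given boundary data
    f_ext = np.concatenate([left, f_inner, right])
    return np.convolve(f_ext, h, mode='valid')

g1 = observe(np.zeros(m), np.zeros(m))        # zero boundary data
g2 = observe(rng.random(m), rng.random(m))    # different boundary data
# same inner signal, different g: the boundary samples matter near the borders
```

Only the first m and last m entries of g change, which is exactly the low-rank, spatially localized correction discussed above.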
In terms of the objects defined so far, we recall that zero (Dirichlet) BCs mean f_{1−j} = f_{n+j} = 0 for all j ≥ 1 in (3), so that a Toeplitz structure is encountered. If we consider periodic BCs, we set f_j = f_{n+j} for all j in (3): the matrix of system (3) becomes an n-by-n circulant, so that it can be diagonalized by the discrete Fourier matrix, and the above system can be solved by using three FFTs (one for finding the eigenvalues of the blurring matrix and two for solving the system). For the Neumann BCs, we assume that the data outside f are a reflection of the data inside f (refer to [12]), so that f_{1−j} = f_j and f_{n+j} = f_{n+1−j} for all j = 1, ..., m in (3). Thus (3) becomes C f = g, where C is neither Toeplitz nor circulant, but is a special n-by-n Toeplitz-plus-Hankel matrix which is diagonalized by the discrete cosine transform, provided that the blurring function h is symmetric, that is, h_j = h_{−j} for all j in (2). It follows that the above system can be solved by using three DCT III transforms in O(n log n) operations (refer to [12]). The latter approach is computationally interesting, since a DCT III requires only real operations and is about twice as fast as the FFT (see [15], pp. 59-60), and this is true in two dimensions as well. With the help of a different Toeplitz-plus-Hankel structure, we establish similar results for the antireflective BCs, both in one dimension and in two dimensions. Finally, it is worth remarking that La Spina [18] has analyzed the BCs associated with all the known trigonometric algebras. Two facts are of interest: first, some matrix algebras do not lead to any boundary condition; second, the highest precision is reached by the classical DCT III algebra and by the classical sine transform algebra of type I, which ensure only the continuity of the signal. Therefore, in this sense we can claim that, among the known algebras, the antireflective one is the one related to the most precise reconstructions.
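The three-FFT solution under periodic BCs can be sketched as follows (a minimal NumPy example; the PSF is chosen so that the circulant is well conditioned, which is an assumption made for the illustration):

```python
import numpy as np

n, m = 256, 1
h = np.array([0.2, 0.6, 0.2])    # symmetric normalized PSF; symbol 0.6 + 0.4 cos(y) > 0

rng = np.random.default_rng(0)
f = rng.random(n)

# first column of the n-by-n circulant generated by h
c = np.zeros(n)
c[:m + 1] = h[m:]                # h_0, ..., h_m
c[-m:] = h[:m]                   # h_{-m}, ..., h_{-1}

lam = np.fft.fft(c)              # FFT no. 1: eigenvalues of the circulant
g = np.fft.ifft(lam * np.fft.fft(f)).real        # periodic blur of f
f_rec = np.fft.ifft(np.fft.fft(g) / lam).real    # FFTs no. 2 and 3: solve A f = g
```

The deblurring step costs O(n log n) operations, which is the computational advantage referred to in requirement (b).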
The paper is organized as follows. In Section 2 we examine the antireflective BCs, the related algebra, and the related transforms, also from a computational viewpoint. At the end of Section 2 we briefly consider the multilevel extension, while in Section 3 we study some regularization techniques specifically adapted to the features of the antireflective algebra. Finally, Section 4 contains numerical experiments both with Tikhonov-like solvers (with reblurring) and with Krylov techniques with early termination as stopping criterion. A final section of conclusions and future problems ends the paper.

The Algebra of AR Matrices
In this section we describe the AR-BCs and the algebra AR_n ≡ AR, n ≥ 3, of matrices arising from the imposition of AR-BCs.

The Antireflective BCs.
When defining the antireflective boundary conditions, we assume that the data outside f are an antireflection of the data inside f. More precisely, if x is a point outside the domain and x* is the closest boundary point, then x = x* − δx and the quantity f(x) is approximated by f(x*) − (f(x* + δx) − f(x*)) = 2f(x*) − f(x* + δx), so that in (3) we impose

f_{1−j} = 2f_1 − f_{1+j}, f_{n+j} = 2f_n − f_{n−j}, j = 1, ..., m. (4)

If we define the vector z whose components are z_j = 2 Σ_{k=j}^{m} h_k for j ≤ m and zero otherwise, and the vector w whose components are w_{n+1−j} = 2 Σ_{k=j}^{m} h_{−k} for j ≤ m and zero otherwise, then (3) becomes an n-by-n linear system

A f = g, (5)

whose coefficient matrix collects the Toeplitz part T of the convolution, the rank-1 terms z e_1^T and w e_n^T (e_k being the kth vector of the canonical basis), and the Hankel-type terms (0 | T_l)J̃ and (T_r | 0)Ĵ, where J̃ = [0 0; 0 J] and Ĵ = [J 0; 0 0], with J denoting the (n − 1)-dimensional flip matrix ([J]_{s,t} = 1 if s + t = n, and zero otherwise), and where T_l and T_r are the triangular Toeplitz blocks collecting the PSF coefficients that act on the left and right external samples, respectively. We remark that the coefficient matrix A in (5) is neither Toeplitz nor circulant, but is a Toeplitz plus Hankel plus rank-2 correction matrix, where the correction is placed at the first and the last columns. We will show that the linear system (5) can be reduced to a new (n − 2)-by-(n − 2) system whose coefficient matrix can always be diagonalized by the discrete sine transform DST I (associated with the τ class), provided that the blurring function h is symmetric, that is, h_j = h_{−j} for all j in (2). It follows that (5) can be solved by using three FSTs in O(n log n) operations. This approach is computationally attractive, as the FST requires only real operations and is about twice as fast as the FFT; hence solving a problem with AR-BCs is twice as fast as solving a problem with periodic BCs, has the same cost as solving a problem with reflective BCs, and one gains one order of precision thanks to the C^1 continuity. Moreover, all these remarks stand in two dimensions as well, and indeed an abstract treatment of the two-level and multilevel settings is reported in Section 2.5.
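The C^1 gain at the boundary is easy to see numerically: NumPy's odd reflection padding implements exactly the antireflection f_{1−j} = 2f_1 − f_{1+j}. In the sketch below (PSF and sizes are illustrative), blurring a linear ramp, which is nonzero and non-flat at the borders, is exact under AR-BCs, while reflective and zero BCs leave boundary artifacts:

```python
import numpy as np

n, m = 32, 2
h = np.array([0.05, 0.25, 0.4, 0.25, 0.05])    # symmetric normalized PSF

x = np.linspace(0.0, 1.0, n)
f = 1.0 + x                                    # smooth signal, nonzero at both borders

def blur(f, **pad_kwargs):
    fp = np.pad(f, m, **pad_kwargs)
    return np.convolve(fp, h, mode='valid')

g_zero = blur(f, mode='constant')                        # zero (Dirichlet) BCs
g_refl = blur(f, mode='reflect')                         # reflective (Neumann) BCs
g_ar   = blur(f, mode='reflect', reflect_type='odd')     # antireflective BCs

# the antireflective extension of a linear signal is the signal itself, so the
# blur of the ramp is reproduced exactly; the other BCs distort the borders
err = [np.max(np.abs(g - f)) for g in (g_zero, g_refl, g_ar)]
```

The boundary errors decrease from Dirichlet to reflective to antireflective, mirroring the discussion of requirement (a).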
Finally, it is worth mentioning that the use of AR-BCs has been considered by several authors (see, e.g., [18–26]). In particular, in [24] the mean BCs were introduced as a slight variation of the AR-BCs. The order of approximation is the same, but the hidden constants are smaller, so that the mean BCs are generally more precise than the AR-BCs. However, the resulting matrices do not form an algebra, so that we cannot define a new transform; this is a confirmation of the negative result found in [18].

The τ Algebra.
Let Q be the type I sine transform matrix of order n (see [27]), with entries

[Q]_{i,j} = √(2/(n+1)) sin(ijπ/(n+1)), i, j = 1, ..., n. (8)

It is known that the real matrix Q is orthogonal and symmetric (Q^{−1} = Q^T = Q). For any n-dimensional real vector v, the matrix-vector multiplication Qv (DST-I transform) can be computed in O(n log(n)) real operations by using the algorithm FST-I. Let τ be the space of all the matrices that can be diagonalized by Q:

τ = {QDQ : D is a real diagonal matrix of size n}. (9)

Let X = QDQ ∈ τ; then QX = DQ. Consequently, if we let e_1 denote the first column of the identity matrix, then the relationship QXe_1 = DQe_1 implies that the eigenvalues d_k of X satisfy d_k = [QXe_1]_k/[Qe_1]_k, with [Qe_1]_k ≠ 0 for every k. Therefore, the eigenvalues of X can be obtained by applying a DST-I transform to the first column of X and, in addition, any matrix in τ is uniquely determined by its first column. Now we report a characterization of the τ class, which is important for analyzing the structure of AR matrices. Let us define the shift of any vector h = [h_0, ..., h_{n−1}]^T as σ(h) = [h_1, h_2, ..., h_{n−1}, 0]^T. According to a Matlab-like notation, we define T(x) to be the n-by-n symmetric Toeplitz matrix whose first column is x, and H(x, y) to be the n-by-n Hankel matrix whose first and last columns are x and y, respectively. Every matrix of the class (9) can be written as (see [27])

X = T(h) − H(σ²(h), Jσ²(h)), (10)

where h = [h_0, ..., h_{n−1}]^T ∈ R^n and J is the flip matrix. The structure defined by (10) means that matrices in the τ class are special instances of Toeplitz-plus-Hankel matrices. Moreover, the eigenvalues of the τ matrix in (10) are given by the cosine function

h(y) = Σ_{j=−(n−1)}^{n−1} h_j e^{ijy} (11)

evaluated at the grid points vector G_n = [kπ/(n+1)]_{k=1}^{n}, where i² = −1 and h_j = h_{|j|} for |j| ≤ n − 1. The τ matrix in (10) is usually denoted by τ(h) and is called the τ matrix generated by the function, or symbol, h = h(·) (see the seminal paper [27], where this notation was originally proposed).
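Both the diagonalization by Q and the first-column trick can be checked numerically. The sketch below (illustrative size and coefficients) builds a tridiagonal τ matrix, for which the Hankel correction in (10) vanishes:

```python
import numpy as np

n = 10
h0, h1 = 0.6, 0.2                     # symbol h(y) = h0 + 2*h1*cos(y)

# tridiagonal tau matrix generated by h
X = h0 * np.eye(n) + h1 * (np.eye(n, k=1) + np.eye(n, k=-1))

# DST-I matrix Q: orthogonal and symmetric
k = np.arange(1, n + 1)
Q = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(k, k) * np.pi / (n + 1))

D = Q @ X @ Q                         # diagonal, since X belongs to tau
grid = k * np.pi / (n + 1)            # G_n = [k*pi/(n+1)], k = 1, ..., n
eigs = h0 + 2 * h1 * np.cos(grid)     # symbol evaluated on the grid

# eigenvalues recovered from the first column of X alone (one DST-I transform)
d = (Q @ X[:, 0]) / Q[:, 0]
```

In practice the product Qv would be replaced by a fast sine transform (e.g., a DST-I routine) to reach the O(n log n) cost mentioned above.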

The AR-Algebras AR.
Let h(y) = Σ_{j=−l}^{l} h_j e^{ijy}, with h_j = h_{−j}, be a real-valued cosine polynomial. The matrix AR_n(h) is defined blockwise (see (12)): its central (n − 2)-by-(n − 2) block coincides with the matrix τ_{n−2}(h) in (10)-(11) (when l ≤ k − 1), its first and last rows are h(0)e_1^T and h(0)e_n^T, and its first and last columns carry a correction expressed through the operator φ. It is interesting to observe that h(y) − h(0) has a zero of order at least 2 at zero (since h is a cosine polynomial) and therefore φ(h) = (φ(h))(·), defined by

(φ(h))(y) = (h(y) − h(0)) / (2(cos(y) − 1)),

is still a cosine polynomial, of degree l − 1, whose value at zero is −h''(0)/2 (in other words, the function is well defined at zero). As proved in [28], with the above definition of the operator AR_n(·), we have, for real α and β and for cosine polynomials h_1, h_2: (1) AR_n(αh_1 + βh_2) = αAR_n(h_1) + βAR_n(h_2); (2) AR_n(h_1h_2) = AR_n(h_1)AR_n(h_2). These properties allow us to define AR as the algebra (closed under linear combinations, product, and inversion) of matrices AR_n(h), with h being a cosine polynomial. By standard interpolation arguments it is easy to see that AR can be defined as the set of matrices AR_n(h), where h is a cosine polynomial of degree at most n − 3. Therefore, it is clear that dim(AR) = n − 2. Moreover, the algebra AR is commutative thanks to item (2), since h_1(y)h_2(y) = h_2(y)h_1(y) at every y. Consequently, if the matrices of the form AR_n(h) are diagonalizable, then they must share the same set of eigenvectors [29]. This means there must exist an "antireflective transform" that diagonalizes the matrices in AR. Unfortunately, this transform fails to be unitary, since the matrices in AR are generically not normal. However, the AR-transform and its inverse are close in rank to orthogonal linear mappings and only moderately ill conditioned.
Following the analysis given in [17], if the blurring function (the PSF) h is symmetric (i.e., h_i = h_{−i} for all i ∈ Z), if h_i = 0 for |i| ≥ n − 2 (degree condition), and if h is normalized so that Σ_{i=−m}^{m} h_i = 1, then the n × n antireflective blurring matrix A has the Toeplitz-plus-Hankel structure, plus rank-2 correction, displayed in (14)-(15). According to the brief discussion of Section 2.2, relation (15) implies that the central (n − 2)-by-(n − 2) block of A is the τ matrix generated by h (see (10) and (11)). Moreover, in [28] it is proved that A = AR_n(h).

Eigenvalues and Eigenvectors of AR-BC Matrices.
We first describe the spectrum of AR-BC matrices under the usual mild degree condition (i.e., the PSF h has finite support), with symmetric, normalized PSFs. Then we describe the eigenvector structure and introduce the AR-transform.
The spectral structure of any AR-BC matrix, with symmetric PSF h, is concisely described in the following result.
Theorem 1 (see [28]). Let the blurring function (PSF) h be symmetric (i.e., h_s = h_{−s}), normalized, and satisfying the usual degree condition. Then the eigenvalues of the n × n AR-BC blurring matrix A in (14), n ≥ 3, are given by h(0) = 1, with multiplicity two, and by the values h(G_{n−2}).
The proof can be easily derived from (12), which shows that the eigenvalues of AR_n(h) are h(0), with multiplicity 2, and those of τ_{n−2}(h), that is, h(G_{n−2}), each with multiplicity 1.
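Theorem 1 is easy to verify numerically. The sketch below (PSF and size chosen for illustration) builds the AR-BC matrix column by column, using NumPy's odd reflection as the antireflective extension, and also checks that the sampled linear ramp is an eigenvector for the eigenvalue h(0) = 1:

```python
import numpy as np

def ar_blur_matrix(h, n):
    """Blurring matrix obtained by imposing AR-BCs (symmetric normalized PSF h)."""
    m = len(h) // 2
    A = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = 1.0
        ep = np.pad(e, m, mode='reflect', reflect_type='odd')  # antireflection
        A[:, k] = np.convolve(ep, h, mode='valid')
    return A

n = 10
h = np.array([0.25, 0.5, 0.25])       # symbol h(y) = 0.5 + 0.5*cos(y), h(0) = 1
A = ar_blur_matrix(h, n)

grid = np.arange(1, n - 1) * np.pi / (n - 1)      # G_{n-2} = [k*pi/(n-1)]
expected = np.concatenate([[1.0, 1.0], 0.5 + 0.5 * np.cos(grid)])
computed = np.linalg.eigvals(A).real

# the sampled linear ramp 1 - x is an eigenvector for the eigenvalue h(0) = 1
ramp = 1.0 - np.arange(n) / (n - 1)
```

The agreement between `computed` and `expected` reproduces the double eigenvalue h(0) = 1 plus the τ eigenvalues h(G_{n−2}).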
Here we determine the eigenvectors of every matrix AR_n(h). In particular, we show that every AR-BC matrix is diagonalizable, and we demonstrate the independence of the eigenvectors from the symbol h. With reference to the notation in (8)-(11), calling q_j^{(n−2)} the jth column of Q_{n−2}, the vectors [0, (q_j^{(n−2)})^T, 0]^T are eigenvectors of AR_n(h) associated with the eigenvalues h(G_{n−2}). If [1, p^T, 0]^T is an eigenvector of AR_n(h) related to the eigenvalue h(0), then the other is its flip, that is, [0, (Jp)^T, 1]^T. Let us look for this eigenvector by imposing the equality

AR_n(h)[1, p^T, 0]^T = h(0)[1, p^T, 0]^T,

which, by the block structure of AR_n(h) (see (12) and the lines below) and because of the algebra structure of τ_{n−2}, is equivalent to seeking a vector p that satisfies the relation

(h(0)I − τ_{n−2}(h)) p = c, (19)

where c denotes the central part of the first column of AR_n(h). Therefore, by (19), the solution is given by p = (h(0)I − τ_{n−2}(h))^{−1} c; if h(0)I − τ_{n−2}(h) is invertible (as happens for every nontrivial PSF, since the unique maximum of the function h is reached at y = 0, which is not a grid point of G_{n−2}), then the solution is unique. Hence, independently of h, we obtain the same eigenvector structure. Now we observe that the jth eigenvector, j = 2, ..., n − 1, has unit Euclidean norm, because Q_{n−2} is orthogonal; we wish to impose the same condition on the first and the last eigenvectors. The interesting fact is that p has an explicit expression: by using standard finite difference techniques, it follows that p_j = 1 − j/(n − 1), so that the first eigenvector is exactly the sampling of the function 1 − x on the grid {j/(n − 1) : j = 0, ..., n − 1}.
Let α_n denote the Euclidean norm of [1, p^T, 0]^T, so that α_n ∼ √(n/3), where, for nonnegative sequences β_n, γ_n, the relation γ_n ∼ β_n means γ_n = β_n(1 + o(1)). In this way, the (normalized) AR-transform T_n can be defined as in (21)-(22) below. Remark 2. With the normalization condition in (21), all the columns of T_n have unit norm. However, orthogonality is only partially fulfilled, since it holds for the central columns, while the first and last columns are not orthogonal to each other, and neither one is orthogonal to the central columns.
We can solve the first problem: the sum of the first and last columns (suitably normalized) and the difference of the first and last columns (suitably normalized) become orthonormal, and are still eigenvectors related to the eigenvalue h(0). However, since q_1^{(n−2)} has only positive components and the vector space generated by the first and last columns of T_n contains positive vectors, it follows that T_n cannot be made orthonormal just by operating on the first and last columns. Indeed, we do not want to change the central block of T_n, since it is related to a fast O(n log(n)) real transform; hence, necessarily, we cannot get rid of this quite mild lack of orthogonality.
Remark 3. There is a suggestive functional interpretation of the transform T_n. When considering periodic BCs, the transform of the related matrices is the Fourier transform: its jth column vector, up to a normalizing scalar factor, can be viewed as a sampling, over a suitable uniform gridding of [0, 2π], of the frequency function exp(−ijy). Analogously, when imposing reflective BCs with a symmetric PSF, the transform of the related matrices is the cosine transform: its jth column vector, up to a normalizing scalar factor, can be viewed as a sampling, over a suitable uniform gridding of [0, π], of the frequency function cos(jy). Here the imposition of the antireflective BCs can be functionally interpreted in terms of a linear combination of sine functions and of linear polynomials (whose use is exactly what is required for imposing C^1 continuity at the borders).
The previous observation becomes evident in the expression of T_n in (21). Indeed, by defining the one-dimensional grid G̃_n = [jπ/(n−1)]_{j=0}^{n−1}, which is a subset of [0, π] whose interior points form G_{n−2}, we infer that the first column of T_n is given by α_n^{−1}(1 − y/π)|_{G̃_n}, the central columns of T_n are given by √(2/(n−1))(sin(ky))|_{G̃_n}, k = 1, ..., n − 2, and finally the last column of T_n is given by α_n^{−1}(y/π)|_{G̃_n}, that is,

T_n = [α_n^{−1}(1 − y/π)|_{G̃_n} | √(2/(n−1))(sin(ky))|_{G̃_n}, k = 1, ..., n−2 | α_n^{−1}(y/π)|_{G̃_n}]. (22)

Finally, it is worth mentioning that the inverse transform is also described in terms of the same block structure (see (23)).

Theorem 4 (AR_n(·) Jordan Canonical Form). With the notation and assumptions of Theorem 1, the n × n AR-BC blurring matrix A in (14), n ≥ 3, coincides with

A = T_n D_n T_n^{−1},

where T_n and T_n^{−1} are defined in (22) and (23), while D_n = diag(h(0), h(G_{n−2}), h(0)) collects the eigenvalues.
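Theorem 4 can be verified directly: the sketch below (illustrative PSF and size) assembles T_n from the two sampled ramps and the zero-padded DST-I columns and checks that T_n^{−1} A T_n is the expected diagonal matrix:

```python
import numpy as np

def ar_blur_matrix(h, n):
    m = len(h) // 2
    A = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = 1.0
        ep = np.pad(e, m, mode='reflect', reflect_type='odd')  # antireflective BCs
        A[:, k] = np.convolve(ep, h, mode='valid')
    return A

n = 8
h = np.array([0.25, 0.5, 0.25])                  # symbol h(y) = 0.5 + 0.5*cos(y)
A = ar_blur_matrix(h, n)

y = np.arange(n) * np.pi / (n - 1)               # grid [j*pi/(n-1)], subset of [0, pi]
k = np.arange(1, n - 1)
Q = np.sqrt(2.0 / (n - 1)) * np.sin(np.outer(k, k) * np.pi / (n - 1))  # DST-I, size n-2

alpha = np.linalg.norm(1.0 - y / np.pi)          # normalization of the ramp columns
T = np.zeros((n, n))
T[:, 0] = (1.0 - y / np.pi) / alpha              # sampled 1 - y/pi
T[1:-1, 1:-1] = Q                                # zero-padded sine columns
T[:, -1] = (y / np.pi) / alpha                   # sampled y/pi

D = np.linalg.solve(T, A @ T)                    # T^{-1} A T
diag_expected = np.concatenate([[1.0], 0.5 + 0.5 * np.cos(k * np.pi / (n - 1)), [1.0]])
```

Note that T is invertible but not orthogonal, reflecting the mild lack of orthogonality of the first and last columns discussed in Remark 2.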

Multilevel Extension.
Here we provide some comments on the extension of our findings to d-dimensional objects with d > 1. When d = 1, h is a vector; when d = 2, h is a 2D array; when d = 3, h is a 3D tensor; and so forth. With reference to Section 2.3, we propose a (canonical) multidimensional extension of the algebras AR and of the operators AR_n(·), n = (n_1, ..., n_d): the idea is to use tensor products. If h = h(·) is a d-variate real-valued cosine polynomial, then its Fourier coefficients form a real d-dimensional tensor which is strongly symmetric, that is, symmetric with respect to every direction: h_j = h_{|j|} for all j ∈ Z^d. In addition, h(y), y = (y_1, ..., y_d), can be written as a linear combination of independent terms of the form m(y) = ∏_{j=1}^{d} cos(α_j y_j), where every α_j is a nonnegative integer. Therefore, we define

AR_n(m) = AR_{n_1}(cos(α_1 y_1)) ⊗ ··· ⊗ AR_{n_d}(cos(α_d y_d)), (25)

where ⊗ denotes the Kronecker product, and we force linearity,

AR_n(αh_1 + βh_2) = αAR_n(h_1) + βAR_n(h_2), (26)

for every real α and β and for every pair of d-variate real-valued cosine polynomials h_1, h_2 (we impose this property in (26) by definition). This is sufficient for defining the operator completely in the d-dimensional setting.
With the above definition of the operator AR_n(·), we have, for real α and β and for d-variate cosine polynomials h_1, h_2: (1) AR_n(αh_1 + βh_2) = αAR_n(h_1) + βAR_n(h_2); (2) AR_n(h_1h_2) = AR_n(h_1)AR_n(h_2). The latter properties of algebra homomorphism allow us to define a commutative algebra AR of the matrices AR_n(h), with h(·) being a d-variate cosine polynomial. By standard interpolation arguments it is easy to see that AR can be defined as the set of matrices AR_n(h), where h is a d-variate cosine polynomial of degree at most n_j − 3 in the jth variable for every j ranging in {1, ..., d}: we denote the latter polynomial set by P^{(d,even)}_{n−2e}, with e being the vector of all ones.
Here we have to be a bit careful in specifying the meaning of algebra when talking of polynomials. More precisely, given h_1, h_2 ∈ P^{(d,even)}_{n−2e}, their product is defined as the polynomial h ∈ P^{(d,even)}_{n−2e} satisfying the following interpolation condition: h(y) = h_1(y)h_2(y) for every y ∈ G^{(d)}_{n−2}. If the degree of h_1 plus the degree of h_2 in the jth variable does not exceed n_j − 2, j = 1, ..., d, then the uniqueness of the interpolant implies that h coincides with the product between polynomials in the usual sense. The uniqueness holds also for d ≥ 2, thanks to the tensor form of the grid G^{(d)}_{n−2} (see [28] for more details). The very same idea applies when considering inversion. In conclusion, with this careful definition of the product/inversion and with the standard definition of addition, P^{(d,even)}_{n−2e} has become an algebra, with vector-space dimension equal to (n_1 − 2)···(n_d − 2). Without loss of generality, and for the sake of notational clarity, in the following we assume n_j = n for j = 1, ..., d. Thanks to the tensor structure emphasized in (25)-(26), and by using Theorem 4 for every term AR_n(cos(α_j y_j)), j = 1, ..., d, of AR_n(m), the d-level extension of that theorem easily follows. More precisely, if h is a d-variate real-valued cosine symbol related to a d-dimensional strongly symmetric and normalized mask h, then

AR_n(h) = T^{(d)}_n D_n (T^{(d)}_n)^{−1}, T^{(d)}_n = T_n ⊗ ··· ⊗ T_n (d times), (28)

where D_n is the diagonal matrix containing the eigenvalues of AR_n(h). The description of D_n in d dimensions is quite involved when compared with the case d = 1, implicitly reported in Theorem 1.
For a complete analysis of the spectrum of AR_n(h) we refer the reader to [28]. Here we give details on a specific aspect: we establish, in a precise and simple way, the correspondence between eigenvalues and eigenvectors, by making recourse only to the main d-variate symbol h(·). Let x_n = x^{(1)}_n ⊗ x^{(2)}_n ⊗ ··· ⊗ x^{(d)}_n be the generic eigenvector, that is, the generic column of T^{(d)}_n, where every factor x^{(j)}_n is a column of T_n: either the first column, the last column, or a vector of the form [0, q^T_{s_j}, 0]^T, 1 ≤ s_j ≤ n − 2, with q_{s_j} the (s_j)th column of Q_{n−2}. The eigenvalue related to x_n is the value of h at the corresponding point of the d-dimensional grid G^{(d)}_n in (31), where we can evaluate the d-variate function h on G^{(d)}_n by

h(reshape(G^{(d)}_n, n)), (32)

where reshape(X, n) arranges the entries of X in a d-dimensional array of length n along each direction, according to Matlab notation. Using this notation, the following compact and elegant result can be stated (its proof is omitted, since it is simply the combination of the eigenvalue analysis in [28], of Theorem 4, and of the previous tensor arguments).
Theorem 5 (AR_n(·) Jordan Canonical Form). The n^d × n^d AR-BC blurring matrix A, obtained when using a strongly symmetric d-dimensional mask h with d-variate symbol h, satisfies

A = AR_n(h) = T^{(d)}_n diag(h(G^{(d)}_n)) (T^{(d)}_n)^{−1},

where T^{(d)}_n and G^{(d)}_n are defined in (28) and (31).
It is worth observing that the matrix AR_n(h), spectrally analyzed in the previous theorem, is exactly the matrix arising from the imposition of AR-BCs applied separately in every direction, when h is the multivariate cosine symbol coming from the d-dimensional tensor mask h defining the shift-invariant d-dimensional blurring operator.
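A minimal 2D check of the tensor construction (separable, strongly symmetric PSF; sizes are illustrative): blurring an image with antireflective padding along each axis coincides with applying the Kronecker product of two 1D AR matrices to the vectorized image, and the eigenvalues of the two-level matrix are all the products of the 1D eigenvalues, as in Theorem 5.

```python
import numpy as np

def ar_blur_1d(h, x):
    m = len(h) // 2
    xp = np.pad(x, m, mode='reflect', reflect_type='odd')  # antireflective extension
    return np.convolve(xp, h, mode='valid')

def ar_matrix(h, n):
    A = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = 1.0
        A[:, k] = ar_blur_1d(h, e)
    return A

n = 6
h = np.array([0.25, 0.5, 0.25])
A = ar_matrix(h, n)

rng = np.random.default_rng(0)
F = rng.random((n, n))

# 2D blur: 1D AR blur along the rows, then along the columns
G = np.apply_along_axis(lambda r: ar_blur_1d(h, r), 1, F)
G = np.apply_along_axis(lambda c: ar_blur_1d(h, c), 0, G)

A2 = np.kron(A, A)                 # two-level AR matrix for the separable PSF
e1 = np.linalg.eigvals(A).real     # 1D eigenvalues; products give the 2D spectrum
```

This is the "AR-BCs applied separately in every direction" statement made concrete for d = 2.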

Regularization by Reblurring
When the observed signal (or image) is noise-free, there is a substantial gain of the reflective boundary conditions (R-BCs) with respect to both the periodic and the zero Dirichlet BCs and, at the same time, a significant improvement of the AR-BCs with respect to the R-BCs (see [17, 30]). Since the deconvolution problem is ill posed (components of the solution related to high-frequency data errors are greatly amplified) regardless of the chosen BCs, it is evident that we have to regularize the problem. Two classical approaches can be used: Tikhonov regularization, with direct or iterative solution of the Tikhonov linear system, and iterative regularization solvers with early termination applied to the normal equations (CG [31] or the Landweber method [32]). We observe that in both cases the coefficient matrix involves a shift of A^T A and the right-hand side is given by A^T g. Quite surprisingly, the AR-BCs may be spoiled by this approach, at least for d = 1 and if we compute explicitly the matrix A^T A and the vector A^T g (see [33]): in more detail, even in the presence of moderate noise and a strongly symmetric PSF, the accuracy of AR-BC restorations becomes worse in some examples than the accuracy of R-BC restorations (see [33]). The reason for this fact lies in the properties of the matrix A^T, and we give some insights in the following.

The Reblurring Operator.
A key point is that, for zero Dirichlet, periodic, and reflective BCs, when the kernel h is symmetric, the matrix A^T is still a blurring operator, since A^T = A; in the case of the AR-BC matrix, instead, A^T cannot be interpreted as a blurring operator. A (normalized) blurring operator is characterized by nonnegative coefficients such that every row sum equals 1 (and it is still acceptable if it is not higher than 1): in the case of A^T with AR-BCs, the row sums of the first and last rows can be substantially larger than 1. This means that the modified signal A^T g may show artifacts at the borders, and this seems to be a potential reason for the observed reduction of the reconstruction quality.
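This claim about the row sums can be checked directly (illustrative 1D sketch): the AR-BC matrix A has unit row sums, as a normalized blurring operator should, while the first and last rows of A^T exceed 1.

```python
import numpy as np

def ar_matrix(h, n):
    m = len(h) // 2
    A = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = 1.0
        ep = np.pad(e, m, mode='reflect', reflect_type='odd')  # antireflective BCs
        A[:, k] = np.convolve(ep, h, mode='valid')
    return A

n = 8
h = np.array([0.25, 0.5, 0.25])     # symmetric normalized PSF
A = ar_matrix(h, n)

row_sums_A = A.sum(axis=1)          # A applied to the constant signal: all ones
row_sums_At = A.T.sum(axis=1)       # first and last entries exceed 1
```

Applied to a constant signal, A^T therefore overshoots at the borders, which is the boundary artifact described above.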
Furthermore, the structure of the matrix A^T A is also spoiled and, in the case of images (d = 2), we lose the O(n² log(n)) computational cost for solving a generic system A^T A x = b. The cost of solving linear systems of this type is proportional to n³, using, for example, Sherman-Morrison formulae (which, by the way, can be numerically unstable [34]). In higher dimensions the scenario is even worse [35], because in the d-dimensional setting the cost of solving the normal-equation linear system is asymptotic to n^{3(d−1)}, where n^d is the size of the matrix A. In order to overcome these problems (which arise only with the most precise AR-BCs for strongly symmetric PSFs), we replace A^T by A′, where A′ is the matrix obtained by imposing the current BCs on the center-flipped PSF (i.e., in the 2D case, on the PSF rotated by 180 degrees).
For the sake of simplicity, we first consider a strongly symmetric PSF, so that the associated normal equations read A² f = A g whenever zero Dirichlet, periodic, or reflective BCs are considered. Therefore, the observed image g is reblurred. The reblurring is the key to the success of the classical regularization techniques (Tikhonov, or CG and Landweber for the normal equations), since the noise is blurred as well, and this makes its contribution less evident. We notice that the two systems A² f = A g and A f = g are algebraically equivalent if A is invertible: in any case, if A is also positive definite, the first represents the minimization of the functional ‖A f − g‖²₂, while the second represents the minimization of the functional ‖A^{1/2} f − A^{−1/2} g‖²₂, so that the first can be considered the blurred version of the second; in fact, the first approach is uniformly better than the second. On these grounds, in the case of AR-BCs, since A^T ≠ A, we can replace A^T by A, which is again a low-pass filter (see [33]). In this way, we also overcome the computational problems.
In order to provide a general reblurring approach also in the case of nonsymmetric PSFs, we consider the correlation operation instead of transposition (see [36]). In the discrete 2D case, the correlation performs the same operation as the convolution, but with the mask (the PSF in our case) rotated by 180 degrees. Moreover, we note that, in the continuous case over an infinite domain, the correlation and the adjoint are exactly the same operation, provided that the convolution kernel is real. Indeed, let K be the convolution operator with shift-invariant kernel k(s); then [K f](x) = ∫ k(x − y) f(y) dy. Since the PSF (and then k) is real (and hence real-valued), the adjoint operator is [K* f](x) = ∫ k(y − x) f(y) dy, which is a correlation operator. We remark that here the convolution and the correlation use the same kernel except for the sign of the variable (i.e., k(s) vs. k(−s)) and that, in the 2D case, the change of sign in the variable s of the kernel can be viewed as a 180-degree rotation of the PSF mask.
By virtue of these arguments, in order to overcome the problems arising with the normal equations approach for AR-BCs 2D restorations, we propose to replace A^T by Ã (the reblurring matrix), where Ã is the matrix obtained by imposing the BCs to the PSF rotated by 180 degrees. Using Matlab notation, if H is a q × q PSF, its 180 degrees rotated version is H̃ = fliplr(flipud(H)) = J_q H J_q, where J_q is the flip matrix defined by [J_q]_{i,j} = 1 if i + j = q + 1, for i, j = 1, …, q, and zero otherwise. For a d-dimensional problem, Ã is obtained by imposing the BCs to the PSF flipped with respect to the origin or, in other words, to the new PSF where all the coefficients are flipped with respect to every variable.
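The flip identity above can be sketched in a few lines of NumPy (in place of Matlab; the function name rotate_psf_180 is ours, for illustration, and is not part of RestoreTools):

```python
import numpy as np

def rotate_psf_180(H):
    """Rotate a PSF mask by 180 degrees, i.e. flip it in both variables."""
    return np.flipud(np.fliplr(H))   # Matlab: fliplr(flipud(H))

q = 5
H = np.arange(q * q, dtype=float).reshape(q, q)   # arbitrary q x q PSF
J = np.fliplr(np.eye(q))   # flip matrix: [J]_{i,j} = 1 iff i + j = q + 1 (1-based)
# the 180-degree rotation coincides with the two-sided flip J H J
assert np.allclose(rotate_psf_180(H), J @ H @ J)
```

The same idea extends to d dimensions by flipping every axis (np.flip(H) with no axis argument flips all axes at once).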
In this way Ã has the same computational properties as A (it belongs to the AR algebra in the case of AR-BCs) and it is certainly a low-pass filter. In the reblurring approach the normal equations are replaced by the reblurred system (34), Ã A f = Ã g. Furthermore, using Tikhonov regularization in the reblurred version, we propose to use (35), with L being any discrete differential operator with AR-BCs. In general Ã A is nonsymmetric, but the asymptotic eigenvalue analysis in [28, Section 3.3] has shown that the spectrum is clustered around the positive real interval given by the range of |h|^2, h being the symbol associated with the PSF. Such a cluster is strong if the decay of the PSF coefficients is fast enough, as occurs for real-world PSFs. The latter statements give reasonable support to the applicability of CGLS when A^T is replaced by Ã and, indeed, in our numerical tests such a regularizing method has always worked perfectly. From the viewpoint of the modeler, the previous considerations can be summarized in the following motivation. The image restoration problem is the restriction to the FOV of an infinite dimensional problem. We can follow two ways to design the linear system to solve: (1) impose the BCs and then look for a least-squares solution; (2) formulate a least-squares problem on the infinite dimensional problem, and then impose the BCs to the two infinite dimensional operators K and K*, separately.
A third possibility is to formulate a least-squares problem on the infinite dimensional problem and then to impose the BCs on this minimum problem: a difficulty in this case is that, even without noise, the resulting system is not algebraically equivalent to the original equations A f = g.
In the first case we resort to the normal equations in the finite dimensional space. In the second case, on the contrary, we apply the BCs to K and K* in the infinite dimensional normal equations (where the adjoint operator K* performs a correlation) and then we obtain (34). More precisely, the discretization of K and K* in the continuous equation K* K f = K* g with any fixed BCs gives (34).

Linear Algebra and Computational Issues.
We note that in the 1D case Ã_n = J_n A_n J_n. In the d-dimensional case, let n = (n_1, n_2, …, n_d) be the partial dimensions of the problem, whose total size is ∏_{i=1}^d n_i. By flipping each variable, we obtain Ã_n = J_n A_n J_n with J_n = J_{n_1} ⊗ ⋯ ⊗ J_{n_d}. For the analysis of the properties of the reblurring scheme (34) with respect to all the different BCs, we now study the discretization of the continuous operator K. Let us consider the d-level Toeplitz matrix T_n(φ) of partial dimensions n = (n_1, …, n_d) ∈ N_+^d and generating function φ [2], defined as T_n(φ) = Σ_j φ̂_j Z_{j_1}^{(n_1)} ⊗ ⋯ ⊗ Z_{j_d}^{(n_d)}, |j_i| ≤ n_i − 1, where the φ̂_j are the Fourier coefficients of φ. Here Z_j^{(m)} ∈ R^{m×m} is the matrix whose (s, r)th entry is 1 if s − r = j, and 0 elsewhere. As is well known, for multilevel Toeplitz matrices T_n^H(φ) = T_n(φ̄), where φ̄ is the conjugate of the function φ, whose Fourier coefficients are those of φ, conjugated and flipped. Moreover, since the PSF coefficients are real, flipping them yields exactly the Fourier coefficients of φ̄; hence, whenever A_n = T_n(φ), we have Ã_n = T_n(φ̄) = A_n^T. This means that for Dirichlet BCs (D-BCs) and periodic BCs (P-BCs) the reblurring approach is exactly equal to the classical normal equations approach, since in these two cases the corresponding blurring matrix A_n is multilevel Toeplitz: indeed, concerning P-BCs, we notice that the resulting multilevel circulant structure is a special instance of the multilevel Toeplitz case. Unfortunately, in the case of Hankel matrices (or multilevel mixed Toeplitz-Hankel matrices with at least one Hankel level) this is no longer true in general. However, a necessary and sufficient condition to have Ã_n = A_n^T is J_n A_n = (J_n A_n)^T, a multilevel antidiagonal symmetry called persymmetry. Therefore in the case of R-BCs, where the matrix A_n involves nested Hankel parts, in general Ã_n ≠ A_n^T, while Ã_n = A_n^T holds when the PSF is strongly symmetric, since in this case J_n A_n = (J_n A_n)^T. Dealing with the AR-BCs, the situation is even more involved, since Ã_n ≠ A_n^T also for strongly symmetric PSFs, owing to the low-rank correction term. Hence, we can state that the reblurring is a new proposal not only for the AR-BCs, but also for all those BCs for which J_n A_n ≠ (J_n A_n)^T. As a nontrivial and unexpected example, it is important to stress that the imposition of R-BCs with non strongly symmetric PSFs implies J_n A_n ≠ (J_n A_n)^T, and hence Ã_n ≠ A_n^T. We now provide a computational motivation for the choice of Ã as an alternative to A^T: Ã corresponds to the operation which is usually implemented to perform the adjoint operation in the Fourier domain. Indeed, the convolution with prescribed BCs can be implemented by first enlarging the image according to the considered BCs and then computing the matrix-vector product by a simple circular convolution, see [37]. More precisely, let X and H be two matrices such that X represents an n × n image and H is the discrete q × q 2D PSF, q = 2m + 1, which leads to the blurring matrix A. Using the Matlab notation x = X(:) (i.e., the vector x is the column-stacked version of X), the product A x in the 2D case can be implemented (1) by forming an enlarged image X̃, namely the (n + q − 1) × (n + q − 1) image X extended at the boundaries according to the imposed BCs, (2) by computing H * X̃, where the symbol "*" denotes the circular convolution operator (H should be zero padded to have the same size as X̃), (3) and then by taking the inner part of the result having the same size as X.
The circular convolution can be computed using the 2D discrete Fourier transform (DFT2) and its inverse (IDFT2), since H * X̃ = IDFT2(DFT2(H) ⊙ DFT2(X̃)), where "⊙" is the componentwise product. If C_k(f) denotes the block circulant matrix with circulant blocks, of block size k with blocks of size k and generating function f, then (39) represents the same operation as C_{n+q−1}(φ) x̃, where x̃ = X̃(:) (according to a 2D ordering). Conversely, it is well known that the operation corresponding to the product with the adjoint operator in the Fourier domain is ỹ = IDFT2(conj(DFT2(H)) ⊙ DFT2(X̃)), where conj(·) denotes complex conjugation. As a result, since the transform H → H̃ is equivalent to the transform φ → φ̄ (because φ(x, y) = Σ_{i,j∈Z} H_{i,j} exp(î (i, j)·(x, y))), and since C_{n+q−1}^H(φ) = C_{n+q−1}(φ̄), the vector ỹ equals C_{n+q−1}(φ̄) x̃. Therefore, for any of the considered BCs, the inner part of ỹ is exactly Ã x. Here it is worthwhile to specify exactly what we mean by inner part: if the vector ỹ is partitioned into n + q − 1 blocks of size n + q − 1, q = 2m + 1, then by inner part we mean that we exclude the first and the last m blocks and, in each of the remaining blocks, we delete the first and the last m entries. More generally, if the PSF is arbitrary (e.g., nonsymmetric), that is, the nonzero coefficients of the PSF have first index belonging to [−m_1^−, m_1^+] and second index in the range [−m_2^−, m_2^+], then we have to delete the first m_1^− and the last m_1^+ blocks and, in each of the other blocks, we have to exclude the first m_2^− and the last m_2^+ entries. Since the DFT and its inverse can be computed in O(n^2 log(n)) arithmetic operations using FFTs, the previous approach is implemented in the Matlab toolbox RestoreTools [37]. We have added the AR-BCs to this toolbox for the matrix-vector product, suitable for iterative regularizing methods. This code has been used for the numerical tests of Section 3 and is downloadable from the homepage "http://scienzecomo.uninsubria.it/mdonatelli/software.html".
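The three-step procedure above (pad, circularly convolve, crop the inner part) can be sketched as follows. This is a hedged illustration of the mechanism, not of the full toolbox: np.pad supplies periodic ("wrap") or reflective ("symmetric") padding, while antireflective padding would need custom extension code.

```python
import numpy as np

def blur_matvec(X, H, pad_mode="wrap"):
    """Compute A x for an n-by-n image X and a (2m+1)-square PSF H by
    (1) padding X according to the chosen BCs, (2) circular convolution
    via FFT2, (3) cropping the inner n-by-n part.
    pad_mode='wrap' gives periodic BCs, 'symmetric' reflective ones."""
    n = X.shape[0]
    q = H.shape[0]
    m = (q - 1) // 2
    Xe = np.pad(X, m, mode=pad_mode)           # (n+q-1) x (n+q-1) enlarged image
    Hp = np.zeros_like(Xe, dtype=float)
    Hp[:q, :q] = H                             # zero-pad the PSF
    Hp = np.roll(Hp, (-m, -m), axis=(0, 1))    # center the PSF at the origin
    Y = np.real(np.fft.ifft2(np.fft.fft2(Xe) * np.fft.fft2(Hp)))
    return Y[m:m + n, m:m + n]                 # inner part
```

With a centered delta PSF this returns X unchanged, and with pad_mode="wrap" it reproduces exactly the n × n circulant matrix-vector product, as the argument in the text predicts.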

Filtering Methods for AR-BCs Matrices.
As mentioned in Section 1, regardless of the imposed BCs, the matrices A arising in signal and image restoration are typically severely ill conditioned, and regularization is needed in order to compute a stable approximation of the solution of (1). A class of regularization methods is obtained through spectral filtering [38, 39]. Specifically, if the spectral decomposition of A is A = V D V^{−1}, D = diag(d), with d = h(G_n), then a spectral filter solution is given by f_filt = V Φ D^{−1} V^{−1} g, where Φ = diag(φ_i) and the φ_i are filter factors satisfying φ_i ≈ 1 for the large |d_i| and φ_i ≈ 0 for the small |d_i|. The small eigenvalues correspond to eigenvectors with high-frequency components, typically associated with the noise space, while the large eigenvalues correspond to eigenvectors with low-frequency components, associated with the signal space. Thus filtering methods attempt to reconstruct signal space components of the solution, while avoiding reconstruction of noise space components. For example, the filter factors for two well-known filtering methods, the truncated spectral value decomposition (TSVD) and Tikhonov regularization, are φ_i^{tsvd} = 1 if |d_i| > δ and 0 otherwise, and φ_i^{tik} = d_i^2/(d_i^2 + λ^2), where the problem dependent regularization parameters δ and λ must be chosen [39]. Several techniques can be used to estimate appropriate choices of the regularization parameters when the SVD is used for filtering (i.e., when the d_i are the singular values), including generalized cross validation (GCV), the L-curve, and the discrepancy principle [38, 40, 41].
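The two filter-factor formulas can be sketched on a toy diagonal problem. This is a hedged illustration: d stands for a generic spectrum, the true object satisfies a source condition (it is itself smooth with respect to d), and the Tikhonov factor d_i^2/(d_i^2 + λ^2) is the standard convention for singular values [39].

```python
import numpy as np

def tsvd_filter(d, delta):
    """TSVD filter factors: phi_i = 1 if |d_i| > delta, else 0."""
    return (np.abs(d) > delta).astype(float)

def tikhonov_filter(d, lam):
    """Tikhonov filter factors: phi_i = d_i^2 / (d_i^2 + lam^2)."""
    return d**2 / (d**2 + lam**2)

# Toy diagonal example: decaying spectrum d, smooth true object f = d * w
# with w = 1, and a small deterministic perturbation playing the noise.
d = 2.0 ** (-np.arange(10))
f_true = d.copy()
g = d * f_true + 1e-3 * np.cos(np.arange(10))
f_naive = g / d                             # unfiltered: errors amplified by 1/d_i
f_filt = tikhonov_filter(d, 0.05) * g / d   # filtered: small-d_i components damped
```

Here filtering reduces the reconstruction error by roughly an order of magnitude; in the AR-BCs setting the same factors are applied to the eigenvalues of the Jordan canonical form rather than to singular values, as discussed next.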
In our case, the notation in (44) involves a slight abuse, because the eigenvalues d_i are not the singular values: in fact the Jordan canonical form (CF) in (24) is different from the singular value decomposition (SVD), since the transform T_n is not orthogonal (indeed it is a rank-2 correction of a symmetric orthogonal matrix). Therefore, note that the use of φ_i^{tsvd} in (42) defines a filtering of the eigenvalues in the Jordan CF instead of the more classical filtering of the singular values in the SVD. However, we note that in general computing the SVD can be computationally very expensive, especially in the multidimensional case and also in the strongly symmetric case. Moreover, quite surprisingly, a recent and rather exhaustive set of numerical tests, both for signals and for images (see [42]), has shown that the truncated Jordan CF is more or less equivalent to the truncated SVD in terms of quality of the restored object: indeed this is a delicate issue that deserves more attention in the future.
Our final aim is to compute (42) in a fast and stable way. This is the classical approach implemented, for instance, with periodic BCs by using three FFTs. In our case we employ the AR-transform (21), its inverse (23), and a fast algorithm for computing the eigenvalues described in [28].
The remaining eigenvalues are those of τ_{n−2}(h), which can be computed by a fast sine transform (FST).
The product T_n f can clearly be computed in a fast and stable way by one FST. Indeed, the inner components are obtained by one FST of size n − 2 applied to f(2 : n − 1), where x(2 : n − 1) in Matlab notation is the vector x with components indexed from 2 to n − 1. A similar strategy can be followed for computing the matrix-vector product T_n^{−1} g.
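A hedged sketch of these two ingredients, using SciPy's orthonormal DST-I as the fast sine transform; the grid convention y_j = jπ/(n + 1) for a size-n τ matrix is the standard one and is assumed here:

```python
import numpy as np
from scipy.fft import dst

def fst(x):
    """Orthonormal fast sine transform (DST-I): multiplication by the
    symmetric orthogonal matrix Q with entries sqrt(2/(n+1)) sin(ij*pi/(n+1)).
    Q is its own inverse, so the same call implements Q^{-1}."""
    return dst(x, type=1, norm="ortho")

def tau_eigenvalues(h_hat, n):
    """Eigenvalues of tau_n(h) for a symmetric symbol
    h(y) = h_hat[0] + 2*sum_k h_hat[k]*cos(k*y), sampled at y_j = j*pi/(n+1)."""
    y = np.arange(1, n + 1) * np.pi / (n + 1)
    return h_hat[0] + 2 * sum(c * np.cos(k * y)
                              for k, c in enumerate(h_hat[1:], start=1))
```

For instance, for the symbol of the 1D discrete Laplacian, h_hat = [2, −1], tau_eigenvalues reproduces the classical values 2 − 2 cos(jπ/(n + 1)) of the tridiagonal Toeplitz matrix.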
Remark 7. As discussed in Remark 3, there is a natural interpretation in terms of frequencies when considering one-dimensional periodic and reflective BCs. The eigenvalue obtained as a sampling of the symbol h at a grid point close to zero, that is, close to the maximum point of h, has an associated eigenvector carrying low-frequency (signal space) information, while the eigenvalue obtained as a sampling of the symbol h at a grid point far away from zero (and, in particular, close to π) has an associated eigenvector carrying high-frequency (noise space) information. Concerning antireflective BCs, the same situation occurs when dealing with the frequency eigenvectors √(2/(n − 1)) sin(j y)|_{G_n}, j = 2, …, n − 1. The other two exceptional eigenvectors generate the space of linear polynomials and therefore correspond to low-frequency information: this intuition is well supported by the fact that the related eigenvalue is h(0), that is, the maximum and the infinity norm of h, and by the fact that AR-BCs are more precise than other classical BCs.
Remark 8. For d = 1 and with reference to the previous sections, we have proved that, thanks to the definition of a (fast) AR-transform, it is possible to define a truncated spectral decomposition which is computationally very effective and, surprisingly enough, quite competitive with the celebrated but costly truncated SVD in terms of restoration quality. However, we are well aware that the real challenge is represented by a general extension to the multidimensional setting. Indeed, except for more involved multi-index notation, all the techniques generalize plainly to the multilevel setting, maintaining a cost proportional to three d-level FSTs of size (n − 2)^d, the key tool being the simplified eigenvalue-eigenvector correspondence concisely indicated in Theorem 5. In reality, regarding the previous Algorithm 6, the only difficult task is the computation in step (2), where we have to compute the eigenvalues in the right order. For this task we refer to [28], where an algorithm is proposed and studied: more specifically, the related procedure in [28] is based on a single d-level FST of size (n − 2)^d plus lower order computations.

Convergence of the Reblurring Approach.
We remark that the antireflective transform can also be defined via an eigenvector matrix V_n whose first and last columns, denoted by 1 and ℓ, span the grid samples of the linear polynomials. Note that 1 and ℓ differ from the corresponding eigenvectors [1, p^T, 0]^T and [0, J p^T, 1]^T used in Section 2.4, but they are linear combinations of them. They have been chosen here to form an orthonormal basis of the grid samples of all linear polynomials, and this property will be useful in the following.
According to Theorem 4, the spectral decomposition of AR_n(h) can now be written as AR_n(h) = V_n Λ V_n^{−1}, where the diagonal entries λ_{jj} of Λ are samplings of the symbol h. Here we prove that the reblurring approach for Tikhonov regularization in (35) with L = I is a regularization method. For a complete analysis we need to compute the SVD of V_n. We use the notation y ≐ z if the two vectors y and z depend on n and, for each entry y_i of y and the corresponding entry z_i of z, it holds that y_i/z_i → 1 as n → ∞.
Theorem 9 (see [43]). The two smallest singular values of V_n decay like 1/√n, while all the others lie between one and two; the associated left singular vectors u_{n−1} and u_n are essentially zero, except for their two boundary values.

Remark 11. It is important to note that the ill-conditioned subspace of V_n^{−1} has dimension two, independent of n, since V_n has two singular values that decay like 1/√n while all the others are between one and two. Also, V_n^{−1} only amplifies vectors that fail to be orthogonal to U = span{u_{n−1}, u_n}. According to Theorem 9, the vectors from U are essentially zero, except for their two boundary values.
Convergence bounds for the Tikhonov reblurring approach can be obtained with the usual remedy from the theory of ill-posed problems, which consists of so-called smoothness assumptions, the simplest one being the following.
Assumption 12. Let f itself be a blurred version of a continuous signal w, that is, f = A w, where w satisfies the same BCs as f (i.e., periodic, reflective, or antireflective ones).
On the grounds of Assumption 12 we may therefore assume that f satisfies the smoothness condition (56) with a moderate bound ρ as in (57). Since the observed object is usually affected by noise, instead of the blurred object g we have to work with the blurred and noisy object g̃ = g + e, where ‖e‖_∞ < ε. In this way, for L = I, (35) becomes (Ã A + μI) f = Ã g̃.

Theorem 13. Let the exact solution f of (1) satisfy (56) with (57). Then the total error of the reblurring strategy with AR-BCs satisfies ‖f_α − f‖_2 = O(√(ρε)) for α = α(ε) = ε/ρ, where the constant in the O(·)-notation is independent of the dimension n.
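For periodic BCs, where reblurring coincides with the normal equations and everything diagonalizes in the Fourier domain, the regularized system (Ã A + μI) f = Ã g̃ can be solved with three FFTs. A minimal sketch follows; the function name and the periodic assumption are ours, for illustration only:

```python
import numpy as np

def tikhonov_reblur_periodic(G, H, mu):
    """Solve (A~ A + mu I) f = A~ g under periodic BCs.  In the Fourier
    domain the blur has eigenvalues hat_h and the reblurring matrix A~
    has conj(hat_h), so the solution is obtained by applying the filter
    conj(hat_h) / (|hat_h|^2 + mu) componentwise."""
    q = H.shape[0]
    m = (q - 1) // 2
    Hp = np.zeros_like(G, dtype=float)
    Hp[:q, :q] = H
    Hp = np.roll(Hp, (-m, -m), axis=(0, 1))   # center the PSF at the origin
    hh = np.fft.fft2(Hp)
    F = np.conj(hh) * np.fft.fft2(G) / (np.abs(hh) ** 2 + mu)
    return np.real(np.fft.ifft2(F))
```

With AR-BCs the analogous computation applies the same filter in the AR-transform domain, at the cost of three FSTs instead of three FFTs.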
Note that the upper bound from Theorem 13 is the same as for Tikhonov regularization with reflective or periodic BCs; only the constant hidden in the O(·)-notation may be somewhat larger for the reblurring strategy.

Numerical Results
This section is devoted to numerical experiments, with reference to Tikhonov regularization in the reblurred version and to the classical conjugate gradient (CG) regularization with early termination. In both cases the use of antireflective BCs improves the quality of the restored image, without penalizing the computational cost. Another promising approach, not discussed here, both from the viewpoint of quality and of complexity, is the one based on a regularized version of multigrid-type techniques (see [44, 45]): this idea too can be successfully implemented in combination with the choice of AR-BCs.
In our numerical experiments we use Matlab 6.5 and the toolbox RestoreTools [37], suitably extended for dealing with AR-BCs. The relative restoration error (RRE) is ‖f̃ − f‖_2/‖f‖_2, where f̃ is the computed approximation of the true image f. The signal-to-noise ratio (SNR) is computed as 20 log_10(‖g_b‖_2/‖ν‖_2), where g_b is the blurred image without noise and ν is the noise vector [32]. Several numerical tests can be found in the literature (see [19, 33, 46]), where the advantage, in terms of restored image quality, of applying AR-BCs on some classes of images has been emphasized. Here we present only a 2D image deblurring example with Gaussian blur and various levels of white Gaussian noise. The true and the observed images are in Figure 1, where the observed image is affected by a Gaussian blur and 1% noise. We compare the AR-BCs only with the reflective BCs, since for this test other BCs like periodic or Dirichlet do not produce satisfactory restorations. In Figure 2 we observe a better restoration and reduced ringing effects at the edges for AR-BCs with respect to reflective BCs. The restored images in Figure 2 are those attaining the minimum relative restoration error over several values of the regularization parameter λ.
From Table 1 we note that, in the 10% noise case, all the approaches give comparable restorations. On the other hand, decreasing the noise, that is, passing to 1% and then to 0.1% noise, the AR-BCs improve the restoration while the reflective BCs are not able to do so, due to the barrier of the ringing effects.

Reblurring and CG Regularization.
In the following tests, the reblurring strategy is applied with R-BCs and AR-BCs when the PSF is not necessarily strongly symmetric. Indeed, in the case of D-BCs and P-BCs the reblurring approach coincides with the classical normal equations, while in the case of R-BCs and AR-BCs this is no longer true in general.
We consider two problems using iterative regularization by CG.
Test I: Cameraman. The first test is reported in Figure 3. The true image is a cameraman and the 61 × 61 PSF is associated with a Gaussian distribution in the square domain [−30, 30] × [−30, 30], with variance equal to four. We add Gaussian noise.
Test II: Astronomy. We deal with a nonsymmetric experimental 256 × 256 PSF developed by the US Air Force Phillips Laboratory, Lasers and Imaging Directorate, Kirtland Air Force Base, New Mexico, widely used in the literature (see, e.g., [12, 47]). The true object is the image of Saturn in Figure 4; Poissonian noise is added, as is customary when dealing with astronomical images. We show the results corresponding only to P-BCs, R-BCs, and AR-BCs. For brevity, we do not report the reconstructions coming from D-BCs, since the related restorations are usually not better than those with P-BCs.
Tables 2 and 3 show the best RREs for various levels of noise. In Table 2 we also report the L_2 norm of the residuals, that is, ‖g − A f̃‖_2, where g is the observed image, f̃ is the computed approximation, and A is the coefficient matrix constructed according to the considered BCs: the latter measure is the sum of squared errors and it represents, up to the scaling of the variance, the χ^2 statistical measure of the error. As already pointed out, the choice of the BCs is important mainly for low levels of noise, that is, for high values of SNR. Indeed, in the last row of these tables (SNR = 10), the errors due to noise start to dominate the restoration process and therefore the choice of particular BCs is not relevant for the restoration accuracy. In the other rows, where the noise is lower, the choice of the BCs becomes crucial. In particular, the AR-BCs substantially improve the quality of the restorations with respect to the other BCs. This is especially evident in Test II (see Table 3). The reason for the observed large improvement lies in the shape of the PSF since, basically, the larger the support of the PSF, the more dominant the ringing effects (and hence the BCs) become.
To emphasize the quality of the restored images, we consider the reconstruction in the case of Test I for a fixed SNR equal to 40. In Figure 5 we report the restored images and in Figure 6 the residuals of the computed solutions, pixel by pixel, divided by the variance of the noise. The latter should have a normal distribution in the case of a good restoration, since we add Gaussian noise. In Figure 5 the reduction of the ringing effects is evident when passing from P-BCs to R-BCs (ringings such as the horizontal white line on the top and the horizontal black line on the bottom disappear) and from R-BCs to AR-BCs (ringings such as the two vertical white lines in the bottom left disappear). Indeed, in the same figure we note a higher level of detail in the case of AR-BCs, especially concerning the face of the cameraman. The image restored with R-BCs is smoother when compared with the one restored with AR-BCs, where we can see the "freckles" effect, typical of the L_2 norm restoration. Indeed the CGLS method computes the least-squares solution, which is well known to be affected by such a phenomenon [39]. When passing to the R-BCs, the considered effect is less evident: it seems that the slightly greater ringing plays a role here. Moreover, the choice of the stopping iteration, a crucial and difficult task, can be done with low precision. Indeed, in order to stress the applicability of the AR-BCs to real problems, we consider for Test I the discrepancy principle widely used with iterative methods [31]. Since we know the L_2 norm of the error, we stop the CG when the L_2 norm of the residual becomes lower than the L_2 norm of the error. Such a criterion seems to work quite well for the AR-BCs, as we can see in Figure 8. The restored images are good enough with respect to the optimal solution and also the stopping iteration, at least in this example, is close to the optimal one. On the other hand, such a criterion is not always effective for the other BCs in this example. For instance, the stopping iteration for R-BCs is greater than 1000 in the case of SNR = 50 and it is 13 for SNR = 30. However, the stopping criterion, in connection with AR-BCs, should be further investigated in the future.
Finally, we remark that also for Test II the CG applied to the linear system (34) works without breakdown, both with R-BCs and AR-BCs. Therefore, it is possible that the applicability of the CG to (34) is a general property, which does not depend on the particular choice of the BCs.

Conclusions
In this contribution we have dealt with the use of antireflective BCs for deblurring problems, where the considered issues have been: the precision of the reconstruction when no noise is present, the linear algebra related to these BCs, the computational costs, and the reconstruction quality associated with iterative and noniterative regularizing solvers when the noise is considered. For many of the considered items, the antireflective algebra coming from the given BCs is the optimal choice: for instance, in the work of La Spina it is proven that none of the trigonometric algebras can be associated with BCs of the same precision as the antireflective ones. Numerical experiments corroborating the previous statements have been reported and discussed: in this direction it remains an open problem to understand why the CG works without any numerical problem even though the antireflective structure is nonsymmetric (and in fact nonnormal, as emphasized by the Jordan decomposition).

Corollary 10.
The remaining singular values of V_n are equal to one, and the corresponding left and right singular vectors have homogeneous boundary values. We are now in a position to determine the condition number of the antireflective transform to first order: since two singular values decay like 1/√n while all the others are between one and two, the condition number of the antireflective transform grows like √n.

Figure 4: Test II: true image, PSF, and the blurred image without noise.

Table 1: RRE for the test problem in Figure 1.

Table 2: Test I: best RREs and L_2 norms of the residuals ‖g − A f̃‖_2 for CG with reblurring, varying the SNR (SNR = ∞ means 0% noise).