Vanishing Moments for Scaling Vectors

One advantage of scaling vectors over a single scaling function is the compatibility of symmetry and orthogonality. This paper investigates the relationship between symmetry, vanishing moments, orthogonality, and support length for a scaling vector Φ. Some general results on scaling vectors and vanishing moments are developed, as well as some necessary conditions for the symbol entries of a scaling vector with both symmetry and orthogonality. If an orthogonal scaling vector Φ has some kind of symmetry and a given number of vanishing moments, we can characterize the type of symmetry for Φ, give some information about the form of the symbol P(z), and place some bounds on the support of each φ_i. We then construct an L^2(R) orthogonal, symmetric scaling vector with one vanishing moment having minimal support.


1. Introduction.
In this paper, we will discuss conditions for vanishing moments for scaling vector Φ = (φ_1, …, φ_r)^T solutions to the matrix refinement equation (MRE)

Φ(x) = Σ_k C_k Φ(2x − k),   (1.1)

with r × r matrix coefficients C_k. Taking the Fourier transform of this MRE gives

Φ̂(w) = P(w/2) Φ̂(w/2),   (1.2)

where the 2-scale symbol P is the r × r matrix

P(w) = (1/2) Σ_k C_k e^{−ikw}.   (1.3)

We will use p_ij(z) to denote the i-j entry of P, and we will occasionally interchange z = e^{−iw} and w for the argument of the symbol P; the context should prevent confusion.
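As a quick sanity check on the refinement equation, consider the scalar case r = 1 with C_0 = C_1 = 1, whose solution is the Haar scaling function φ = 1_[0,1). This example is an illustration only, not part of the paper's construction; the sketch below verifies φ(x) = φ(2x) + φ(2x − 1) on a sample grid.

```python
# Scalar (r = 1) sanity check of the refinement equation
# phi(x) = sum_k c_k phi(2x - k) with c_0 = c_1 = 1,
# whose solution is the Haar scaling function 1_[0,1).

def phi(x):
    """Haar scaling function: indicator of [0, 1)."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

# Check phi(x) = phi(2x) + phi(2x - 1) on a sample grid.
grid = [i / 64.0 for i in range(-32, 96)]
max_err = max(abs(phi(x) - (phi(2 * x) + phi(2 * x - 1))) for x in grid)
print(max_err)  # 0.0
```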
Scaling vectors and solutions to MREs have been studied in [1,3,5,6,7] and many others. In particular, Heil and Colella [5] showed that the eigenvalues of Δ_∞ = lim_{j→∞} P(1)^j are crucial to understanding solutions of the MRE. The unconstrained solutions are precisely the compactly supported distributional solutions to the MRE (1.1). For this reason, we will restrict our attention in this paper to unconstrained solutions to (1.1). It is also shown in [5] that unconstrained MRE solutions Φ satisfy

Φ̂(w) = [∏_{j=1}^{∞} P(2^{−j}w)] Φ̂(0).   (1.4)

As is standard in the literature, we say the polynomial accuracy p of a scaling vector is the maximum integer p for which x^k ∈ V_0 for k = 0, 1, …, p − 1. A common sufficient condition for the density condition of a multiresolution analysis (MRA) generated by Φ requires polynomial accuracy at least p ≥ 1 (e.g., [3]). Throughout this paper, all scaling vectors are assumed to have accuracy p ≥ 1.
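The role of Δ_∞ can be illustrated numerically. The sketch below iterates powers of a hypothetical 2 × 2 matrix P(1) with eigenvalues 1 and 1/2 (this matrix is an arbitrary illustration, not a symbol from the paper); the powers converge, so Δ_∞ exists and is the spectral projection onto the 1-eigenspace.

```python
# Powers of a hypothetical P(1) with eigenvalues 1 and 0.5.
# Delta_inf = lim_j P(1)^j exists because every eigenvalue is
# either 1 or strictly inside the unit circle.

def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P1 = [[1.0, 0.5],
      [0.0, 0.5]]                  # eigenvalues 1 and 0.5

power = [[1.0, 0.0], [0.0, 1.0]]   # identity
for _ in range(60):
    power = matmul2(power, P1)

print(power)  # approximately [[1, 1], [0, 0]]
```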
The following combines some important results on accuracy and smoothness found in [1,7] that are necessary for our discussion.

Theorem 1.2. For a compactly supported scaling vector Φ with symbol P, the following are equivalent:
(1) Φ provides polynomial accuracy m;
(2) the elements of P(w) are trigonometric polynomials, and there are vectors y_k, with y_0 ≠ 0, satisfying the accuracy conditions (1.5);
(3) the symbol P(z) factors as in (1.6).

Whether the MRA generated by Φ is orthogonal is determined by the following condition, found in [10].

Theorem 1.3. The scaling vector Φ generates an orthonormal basis {φ_i(x − k) : 1 ≤ i ≤ r, k ∈ Z} if and only if

θ(z) := P(z)P(z^{−1})^T + P(−z)P(−z^{−1})^T = I_r,  |z| = 1.   (1.8)

One advantage to scaling vectors over a single scaling function is the compatibility of symmetry and orthogonality, as seen in [3] and elsewhere. The symmetry condition given below for a scaling vector was proved by Plonka and Strela and appears in [7].

Theorem 1.4. A compactly supported scaling vector Φ is symmetric, with each φ_j symmetric or antisymmetric about T_j, if and only if its symbol satisfies

P(z) = E(z^2) P(z^{−1}) E^{−1}(z),  where E(z) = diag(±z^{2T_1}, …, ±z^{2T_r}),   (1.9)

where the T_j are the points of symmetry for the φ_j and the ± sign is + when φ_j is symmetric and − when antisymmetric.
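In the scalar case r = 1, the orthogonality condition (1.8) reads p(z)p(z^{−1}) + p(−z)p(−z^{−1}) = 1 on |z| = 1. A quick numeric sketch, using the standard Haar symbol p(z) = (1 + z)/2 as an illustration, confirms the condition:

```python
import cmath

def p(z):
    """Scalar Haar symbol p(z) = (1 + z)/2."""
    return (1 + z) / 2

# theta(z) = p(z)p(1/z) + p(-z)p(-1/z) should equal 1 on |z| = 1.
thetas = []
for j in range(8):
    z = cmath.exp(-1j * 2 * cmath.pi * j / 8)  # points on the unit circle
    thetas.append(p(z) * p(1 / z) + p(-z) * p(-1 / z))

print(max(abs(t - 1) for t in thetas))  # ~0 up to rounding
```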
When r = 2 and T_1 = 0, the condition P(z) = E(z^2)P(z^{−1})E^{−1}(z) is equivalent to the following entrywise conditions, useful in the sequel:

p_11(z) = p_11(z^{−1}),  p_12(z) = ε z^{−2T_2} p_12(z^{−1}),
p_21(z) = ε z^{4T_2} p_21(z^{−1}),  p_22(z) = z^{2T_2} p_22(z^{−1}),   (1.10)

where ε = ±1 is the product of the symmetry signs of φ_1 and φ_2.

Remark 1.5. We will say that a scaling vector Φ is symmetric if all component functions of Φ are either symmetric or antisymmetric.
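The entrywise reading of the symmetry condition can be verified against the matrix identity directly. The sketch below builds a hypothetical 2 × 2 symbol for T_1 = 0, T_2 = 1/2, with φ_1 symmetric and φ_2 antisymmetric (all coefficients are arbitrary illustrations, not from the paper's construction), and checks P(z) = E(z^2)P(z^{−1})E^{−1}(z) numerically on the unit circle.

```python
import cmath

# Hypothetical 2x2 symbol whose entries satisfy, entrywise:
#   p11(z) = p11(1/z),          p12(z) = -z^(-1) p12(1/z),
#   p21(z) = -z^2 p21(1/z),     p22(z) =  z      p22(1/z),
# corresponding to T_1 = 0, T_2 = 1/2, signs (+, -).

def P(z):
    return [[0.3 / z + 0.4 + 0.3 * z, 0.2 * (1 - 1 / z)],
            [0.1 * (1 - z * z),       0.5 * (1 + z)]]

def E(z):
    # E(z) = diag(+z^(2*0), -z^(2*(1/2))) = diag(1, -z)
    return [[1, 0], [0, -z]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Check P(z) = E(z^2) P(1/z) E(z)^{-1} at sample points on |z| = 1.
errs = []
for j in range(1, 7):
    z = cmath.exp(-1j * j)
    Einv = [[1, 0], [0, -1 / z]]   # inverse of diag(1, -z)
    rhs = matmul2(matmul2(E(z * z), P(1 / z)), Einv)
    lhs = P(z)
    errs.append(max(abs(lhs[a][b] - rhs[a][b])
                    for a in range(2) for b in range(2)))

print(max(errs))  # ~0 up to rounding
```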
The following result, needed in the sequel, shows that the length of each polynomial entry in the symbol is unaffected by integer shifts of the scaling functions.

Proposition 1.6. Suppose Ψ = (ψ_1, …, ψ_r)^T is obtained from Φ by the integer shifts ψ_i(x) = φ_i(x − d_i), d_i ∈ Z. Then the symbol Q(z) of Ψ has entries

q_ij(z) = z^{2d_i − d_j} p_ij(z),   (1.12)

so the length of each q_ij equals the length of the corresponding p_ij.

2. Scaling vectors with vanishing moments.
The study of vanishing moments for a single scaling function is discussed in [2] and many other papers. In [2], Daubechies shows how to link the symbol condition for vanishing moments with the condition for polynomial accuracy via Bezout's theorem to obtain a family of coiflets. In the scaling vector situation, the symbol is a matrix, and matters are more complex. In this section, we develop some general results on scaling vectors and vanishing moments, as well as some necessary conditions for the symbol entries of a scaling vector with both symmetry and orthogonality. We first characterize the vanishing moments of Φ in terms of the derivatives D^k P(0) of its symbol, following a result first presented in [8].
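In outline, the link runs through the Fourier transform. The display below is a sketch of that standard computation, under the convention φ̂(w) = ∫_R φ(x)e^{−ixw} dx, and is not a statement of the result from [8]: the moments of Φ appear as derivatives of Φ̂ at 0, and differentiating the transformed MRE relates these to D^k P(0).

```latex
% Moments of \Phi as derivatives of its Fourier transform:
D^k \hat{\Phi}(0) = (-i)^k \int_{\mathbb{R}} x^k\, \Phi(x)\, dx .
% Differentiating \hat{\Phi}(2w) = P(w)\,\hat{\Phi}(w) a total of
% k times (Leibniz rule) and evaluating at w = 0 gives
2^k D^k \hat{\Phi}(0)
  = \sum_{j=0}^{k} \binom{k}{j}\, D^{k-j} P(0)\, D^{j} \hat{\Phi}(0),
% so vanishing moments of order up to m translate into linear
% conditions on D^k P(0), k = 1, \dots, m, applied to \hat{\Phi}(0).
```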

Proof. That not all φ_i can be antisymmetric follows immediately from Corollary 2.4.
To prove that not all φ_i can be symmetric, we prove the contrapositive. Begin by shifting each φ_i so that it is symmetric about 0, if necessary, and let P(z) be the symbol of this shifted Φ. By the symmetry Theorem 1.4, P(z) = P(z^{−1}), so each diagonal entry of θ in (1.8) reduces to θ_kk(z) = Σ_j [p_kj(z)^2 + p_kj(−z)^2]. Letting n_kj denote the degree of the highest-degree nonzero term a_{n_kj} z^{n_kj} of p_kj(z) and n_k = max_j(n_kj), we see that θ_kk has degree-2n_k term

Σ_{j : n_kj = n_k} [(a_{n_kj} z^{n_kj})^2 + (a_{n_kj}(−z)^{n_kj})^2] = 2z^{2n_k} Σ_{j : n_kj = n_k} a_{n_kj}^2 ≠ 0.
Since the basis is nontrivial, P(z) is not constant, so n_k > 0 for at least one value of k, whence θ_kk is not constant for this value of k. Thus θ is not the identity matrix, and so by (1.8), Φ does not generate an orthonormal basis.
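The failure of (1.8) for components symmetric about 0 is easy to see numerically, even in the scalar case. Using the hat-function symbol p(z) = (z^{−1} + 2 + z)/4 as a standard symmetric illustration (not an example from the paper), the quantity θ(z) = p(z)^2 + p(−z)^2 is nonconstant, so it cannot be identically 1:

```python
def p(z):
    """Symbol of the hat function, symmetric about 0: p(z) = p(1/z)."""
    return (1 / z + 2 + z) / 4

def theta(z):
    # With p(z) = p(1/z), the diagonal orthogonality expression
    # reduces to p(z)^2 + p(-z)^2.
    return p(z) ** 2 + p(-z) ** 2

print(theta(1))    # 1.0
print(theta(1j))   # 0.5, so theta is not constant and (1.8) fails
```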
We introduce the notation degL(R) = n − m for a polynomial R(z) = a_m z^m + ⋯ + a_n z^n, where m ≤ n. The following results are useful for analyzing the symbol P(z) when the scaling vector Φ is symmetric.

Lemma 2.6. Let the polynomial R(z) = a_m z^m + ⋯ + a_n z^n satisfy z^k R(z) = ±R(z^{−1}), where a_m, a_n ≠ 0 and m ≤ n. Then m = −n − k, and degL(R) = 2n + k is even if and only if k is even.
Proof. The condition z^k R(z) = ±R(z^{−1}) can be rewritten as

a_m z^{m+k} + ⋯ + a_n z^{n+k} = ±(a_m z^{−m} + ⋯ + a_n z^{−n}).   (2.9)

Then the highest-degree terms of (2.9) must be a_n z^{n+k} = ±a_m z^{−m}, so n + k = −m, whence m = −n − k and degL(R) = n − m = 2n + k.
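The degree bookkeeping in Lemma 2.6 can be checked on a small example. The Laurent polynomial below is an arbitrary illustration chosen to satisfy zR(z) = R(z^{−1}), so k = 1, n = 1, m = −2, and degL(R) = 2n + k = 3, which is odd, matching the lemma:

```python
import cmath

# R(z) = z^-2 + 2 z^-1 + 2 + z, chosen so that z * R(z) = R(1/z):
# the coefficient of z^j must equal the coefficient of z^(-j-1).
coeffs = {-2: 1.0, -1: 2.0, 0: 2.0, 1: 1.0}

def R(z):
    return sum(a * z ** j for j, a in coeffs.items())

# Verify the functional equation z^k R(z) = R(1/z) with k = 1
# at sample points on the unit circle.
k = 1
for j in range(1, 6):
    z = cmath.exp(-1j * j)
    assert abs(z ** k * R(z) - R(1 / z)) < 1e-12

# Lemma 2.6: m = -n - k and degL(R) = n - m = 2n + k.
n, m = max(coeffs), min(coeffs)
print(m == -n - k, n - m == 2 * n + k)  # True True
```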

Suppose Φ = (φ_1, φ_2)^T generates a nontrivial orthonormal basis with the φ_k (anti)symmetric about integers. Then
Proof. By Theorem 2.5, we may assume φ_1 is symmetric and φ_2 is antisymmetric. Shift each φ_k to be (anti)symmetric about 0; note that this does not affect degL(p_jk) for any j, k, by Proposition 1.6. Letting P(z) denote the symbol of this shifted scaling vector, Theorem 1.4 applies.
Proof. Substituting u = x − S, apply the binomial theorem and the hypothesis.

3. Support bounds for orthogonal, symmetric scaling vectors with vanishing moments.

We now limit discussion to scaling vectors with r = 2 component functions.
If an orthogonal Φ = (φ_1, φ_2)^T has some kind of symmetry and a given number of vanishing moments, we can characterize the type of symmetry for Φ, give some information about the form of the symbol P(z), and place some bounds on the support of each φ_i. We also demonstrate how to construct an L^2(R) orthogonal, symmetric scaling vector with minimal support for one vanishing moment. This support length depends on the type of symmetry.
We first introduce some standard notation about support for scaling vectors, needed for our analysis of the interplay between vanishing moments and support. We will say that the support of Φ = (φ_1, …, φ_r)^T is supp(Φ) = ∪_i supp(φ_i), with each supp(φ_i) being the convex hull of the support points of φ_i, as in [9]. The support length of φ_i, denoted by suppL(φ_i), is the length of the interval supp(φ_i).
A family {f_i} of functions on R is locally linearly independent (LLI) if Σ_i c_i f_i(x) = 0 on any nontrivial interval I implies that c_i = 0 for all i for which supp(f_i) ∩ I ≠ ∅. We say a scaling vector Φ is LLI if the family {φ_i(x − k) : 1 ≤ i ≤ r, k ∈ Z} is LLI. Wavelets and the LLI property have been studied together in [4,9], and So and Wang [9] proved the following result connecting the degrees of the symbol entries p_ij(z) to each supp(φ_i). This result allows us to put some bounds on the support length of φ_i.
The next two lemmas contain useful information about scaling vectors with vanishing moments.

Lemma 3.3. Suppose the symmetric, orthogonal scaling vector Φ = (φ_1, φ_2)^T has accuracy p ≥ 1 and at least one vanishing moment. If φ_1 is symmetric about 0, then Φ̂(1) must be a multiple of (1, 0)^T, P(1) has a one-dimensional right 1-eigenspace, and y_0 of Theorem 1.2 must be a nonzero scalar multiple of Φ̂(1).
Proof. By Proposition 2.3, Corollary 2.4, and Theorem 2.5, φ_2 must be either antisymmetric or symmetric about some β ≠ 0. In either case, φ̂_2(1) = ∫_R φ_2 = 0, so Φ̂(1) must be a multiple of (1, 0)^T and P(1)Φ̂(1) = Φ̂(1); so the first column of P(1) is (1, 0)^T, and the remaining eigenvalue of P(1) is y = p_22(1). By Definition 1.1, the eigenvalue y must be 1 or less than 1 in magnitude. For the sake of contradiction, suppose that y = 1. Then P(1) has two linearly independent eigenvectors F̂_1(1), F̂_2(1) corresponding to two linearly independent scaling vector solutions F_1, F_2, by [5, Theorem 2.3]. Thus one of these, say F_2 = (f_2^1, f_2^2)^T, must satisfy f̂_2^2(1) = ∫_R f_2^2 ≠ 0. On the other hand, since Φ is symmetric and orthogonal, its symbol P has the form required by Theorems 1.4 and 1.3, forcing all other scaling vector solutions to be symmetric and orthogonal, with the first component function symmetric about 0. So, just as with Φ, f_2^2 must be either antisymmetric, or symmetric about some β ≠ 0, and so f̂_2^2(1) = ∫_R f_2^2 = 0. This is a contradiction; so |y| < 1, and P(1) has a one-dimensional right 1-eigenspace.
Example 3.8. Set L = 2. We use this theorem to guide a construction of an L^2(R) LLI, orthogonal, symmetric scaling vector with one vanishing moment and minimal support. Naturally φ_1 is symmetric about 0 and φ_2 is antisymmetric about 1/2, with suppL(φ_1) ≥ 3 and suppL(φ_2) ≥ 4. We begin with a symbol of the form given in part 3 of Theorem 3.7. Brute force with (1.8) shows that orthogonality cannot be obtained if degL(Q_11) < 2, so we try degL(Q_11) = 2, and thus degL(Q_12) = 2 is minimal. We put the unknowns Q_11, Q_12, P_22, R_21 into a form ensuring the symmetry conditions, and solved for the parameters guaranteeing orthogonality with the help of Mathematica. Theorem 1.2 was used to verify that the solution is indeed in L^2(R). The supports suppL(φ_1) = 4 and suppL(φ_2) = 4 are minimal for the conditions required of Φ.