This paper addresses the problem of two-dimensional (2D) autoregressive (AR) parameter estimation from noisy observations. The Yule-Walker equations are solved using an adaptive steepest descent (SD) algorithm. Performance comparisons with existing methods demonstrate the merits of the proposed approach.
1. Introduction
Two-dimensional (2D) autoregressive modeling is important in many signal processing applications, including image processing, radar, sonar, and communications. In image processing, it has been applied to image modeling [1], texture analysis [2], and hyperspectral imagery [3]. In radar and sonar, it arises in direction finding, model-based detection, and spectral estimation [4–6]. It has also been applied to fading channel estimation in communications [7].
In the noise-free case, 2D AR estimation is investigated in [8–10]. A 2D lattice structure is proposed for 2D AR modeling in [8], capable of simultaneously providing all possible types of 2D causal quarter-plane (QP) and asymmetric half-plane (ASHP) AR models. Model identification of a noncausal 2D AR process is presented in [9], where an order-recursive algorithm solves the 2D Yule-Walker equations. Modeling of 2D AR processes with various regions of support is considered in [10].
One-dimensional AR parameter estimation from noisy observations is well investigated in the literature (see [11–14]). By contrast, the literature on noisy two-dimensional autoregressive fields is sparse. In [15], a method based on a combination of the Yule-Walker equations and third-order moments is proposed; it requires the driving noise process to be non-Gaussian. Recently, in [16], combinations of the low-order and high-order Yule-Walker equations are solved as a quadratic eigenvalue problem, extending the method proposed by Davila in [11].
In this paper, we propose a Yule-Walker based method for estimating 2D AR parameters from noisy observations. Because of the observation noise variance, the least-squares (LS) estimate of the parameters is biased. We estimate the observation noise variance from additional Yule-Walker equations beyond the order of the model and then compensate for the bias using a steepest descent (SD) algorithm. Numerical examples show the effectiveness of the proposed estimation method.
2. Problem Formulation
A 2D autoregressive (AR) field is given by
(1) $x(m,n) = \sum_{i=0}^{p_1}\ \sum_{\substack{j=0\\(i,j)\neq(0,0)}}^{p_2} a(i,j)\, x(m-i,\,n-j) + u(m,n),$
where a(0,0) = 1, (p1, p2) is the order of the field, x(m,n) is the output of the model, and the driving input field u(m,n) is a 2D zero-mean white stationary field with variance σ_u^2. In practice, the observed field is given by
(2) $y(m,n) = x(m,n) + v(m,n),$
where v(m,n) is a zero-mean stationary observation noise field with variance σ_v^2, assumed uncorrelated with u(m,n).
The model is assumed causal and stable, with quarter-plane (QP) support, and the model order (p1, p2) is assumed known. Order selection methods for the 2D AR case are presented in [17].
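As an illustration, the observation model (1)-(2) can be simulated directly. The sketch below is our own illustrative code, not part of the paper; the function name, burn-in handling, and seed are assumptions made for the example.

```python
import numpy as np

def simulate_noisy_ar2d(a, sigma_u, sigma_v, M, N, burn=40, seed=0):
    """Generate a quarter-plane 2D AR field per (1) and its noisy observation per (2).

    a       : (p1+1, p2+1) array of AR coefficients; a[0, 0] is ignored
              (the recursion excludes the (0, 0) term).
    sigma_u : std. dev. of the white driving field u(m, n).
    sigma_v : std. dev. of the white observation noise v(m, n).
    burn    : extra rows/columns discarded to reduce transient effects.
    """
    rng = np.random.default_rng(seed)
    p1, p2 = a.shape[0] - 1, a.shape[1] - 1
    Mb, Nb = M + burn, N + burn
    u = rng.normal(0.0, sigma_u, (Mb, Nb))
    x = np.zeros((Mb, Nb))
    for m in range(Mb):
        for n in range(Nb):
            s = u[m, n]
            # quarter-plane support: only past samples (m-i, n-j) enter
            for i in range(p1 + 1):
                for j in range(p2 + 1):
                    if (i, j) != (0, 0) and m >= i and n >= j:
                        s += a[i, j] * x[m - i, n - j]
            x[m, n] = s
    x = x[burn:, burn:]
    y = x + rng.normal(0.0, sigma_v, (M, N))  # observed field, eq. (2)
    return x, y
```

The coefficient values of Example 1 in Section 4 can be passed as `a` to reproduce a comparable setup.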
The power spectral density (PSD) corresponding to the noiseless 2D AR field in (1) is given by [18]
(3) $S(\omega_1,\omega_2) = \dfrac{\sigma_u^2}{\left|\,1 - \sum_{k=0}^{p_1}\ \sum_{\substack{l=0\\(k,l)\neq(0,0)}}^{p_2} a(k,l)\, e^{-i(\omega_1 k + \omega_2 l)}\right|^2}.$
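For reference, (3) can be evaluated numerically at a given frequency pair; the helper below is a straightforward transcription of the formula (the function name is our own).

```python
import numpy as np

def ar2d_psd(a, sigma_u2, w1, w2):
    """Evaluate the 2D AR PSD of (3) at frequencies (w1, w2) in rad/sample.

    a is a (p1+1, p2+1) coefficient array; a[0, 0] is ignored because the
    sum in (3) excludes (k, l) = (0, 0)."""
    p1, p2 = a.shape[0] - 1, a.shape[1] - 1
    s = 0j
    for k in range(p1 + 1):
        for l in range(p2 + 1):
            if (k, l) != (0, 0):
                s += a[k, l] * np.exp(-1j * (w1 * k + w2 * l))
    return sigma_u2 / abs(1 - s) ** 2
```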
The Yule-Walker (YW) equations for the noiseless 2D AR field given by (1) are as follows [19]:
(4) $r_x(k,l) = \sum_{i=0}^{p_1}\ \sum_{\substack{j=0\\(i,j)\neq(0,0)}}^{p_2} a(i,j)\, r_x(k-i,\,l-j) + \sigma_u^2\,\delta(k,l), \quad k \ge 0,\ l \ge 0,$
where r_x(k,l) = E{x(m,n) x(m−k,n−l)} is the 2D autocorrelation function, E is the expectation operator, and δ(k,l) is the Kronecker delta function, defined by
(5) $\delta(k,l) = \begin{cases} 1, & (k,l) = (0,0) \\ 0, & \text{otherwise.} \end{cases}$
Because u(m,n) and v(m,n) are uncorrelated, E{u(m,n) v(m,n)} = 0 for all m, n, and the autocorrelation function of the observed field is given by
(6) $r_y(k,l) = r_x(k,l) + \sigma_v^2\,\delta(k,l).$
Our objective is to estimate a(i,j) for i = 0, …, p1, j = 0, …, p2, (i,j) ≠ (0,0), from the observations y(m,n), m = 1, …, M, n = 1, …, N.
3. The Algorithm
The YW equations in (4) can be written in a matrix form as
(7) $\bar{R}_y \bar{a} = \sigma_v^2\,\bar{a} + \sigma_u^2 \begin{bmatrix} b_0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},$

where $\bar{a} = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{p_1} \end{bmatrix}$,

(8) $\bar{R}_y = \begin{bmatrix} R_y(0) & R_y(-1) & \cdots & R_y(-p_1) \\ R_y(1) & R_y(0) & \cdots & R_y(-p_1+1) \\ \vdots & \vdots & \ddots & \vdots \\ R_y(p_1) & R_y(p_1-1) & \cdots & R_y(0) \end{bmatrix}, \qquad R_y(i) = \begin{bmatrix} r_y(i,0) & r_y(i,-1) & \cdots & r_y(i,-p_2) \\ r_y(i,1) & r_y(i,0) & \cdots & r_y(i,-p_2+1) \\ \vdots & \vdots & \ddots & \vdots \\ r_y(i,p_2) & r_y(i,p_2-1) & \cdots & r_y(i,0) \end{bmatrix},$

and $a_i = [a(i,0)\ \ a(i,1)\ \cdots\ a(i,p_2)]^T$ and $b_0 = [1\ \ 0\ \cdots\ 0]^T$ are $(p_2+1)\times 1$ vectors.
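Assembling the block-Toeplitz matrix of (8) from autocorrelation values is mechanical; the sketch below (function names are our own) follows the indexing of (8), where block (k, l) is R_y(k − l) and entry (s, t) of R_y(i) is r_y(i, s − t).

```python
import numpy as np

def Ry_block(rfun, i, p2):
    """The (p2+1) x (p2+1) Toeplitz block R_y(i) of (8); rfun(k, l) = r_y(k, l)."""
    return np.array([[rfun(i, s - t) for t in range(p2 + 1)]
                     for s in range(p2 + 1)])

def block_toeplitz_Ry(rfun, p1, p2):
    """The (p1+1)(p2+1)-dimensional block-Toeplitz matrix of (8)."""
    return np.block([[Ry_block(rfun, k - l, p2) for l in range(p1 + 1)]
                     for k in range(p1 + 1)])
```

Because r_y(−k,−l) = r_y(k,l), the assembled matrix is symmetric.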
Because σ_u^2 is unknown, the first row of (7) is removed; after rearranging the remaining equations in terms of a, the YW equations can be rewritten as
(9) $R_y a - \sigma_v^2 a = r_y,$
where R_y is a (p1 p2 + p1 + p2) × (p1 p2 + p1 + p2) matrix and r_y and a are (p1 p2 + p1 + p2) × 1 vectors. Note that a is ā with the first element a(0,0) removed.
Multiplying both sides of (9) by $R_y^{-1}$, we obtain

(10) $a = a_{LS} + \sigma_v^2 R_y^{-1} a.$
If σ_v^2 is assumed known, we can iteratively estimate a using the SD algorithm as follows [20]:

(11) $a^{(j)} = a^{(j-1)} + \mu\left[a_{LS} + \sigma_v^2 R_y^{-1} a^{(j-1)} - a^{(j-1)}\right],$

where $a_{LS} = R_y^{-1} r_y$ is the least-squares (LS) estimate of a, μ is the step-size parameter, and r_y and R_y can be estimated from the observations. The stability and convergence rate of the algorithm are controlled through μ: the iteration converges if μ lies between zero and 2/(1 − λ_min), where λ_min is the minimum eigenvalue of the matrix σ_v^2 R_y^{-1}.
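The iteration (11) is a fixed-point scheme whose limit satisfies (10). A minimal sketch, assuming σ_v^2, R_y, and r_y are given (function name and stopping rule are our own):

```python
import numpy as np

def sd_iterate(Ry, ry, sigma_v2, mu=1.0, tol=1e-8, max_iter=500):
    """Run the SD iteration of (11):
    a_j = a_{j-1} + mu * (a_LS + sigma_v2 * Ry^{-1} a_{j-1} - a_{j-1})."""
    a_ls = np.linalg.solve(Ry, ry)       # LS estimate a_LS = Ry^{-1} ry
    B = sigma_v2 * np.linalg.inv(Ry)     # bias-compensation operator
    a = a_ls.copy()
    for _ in range(max_iter):
        a_new = a + mu * (a_ls + B @ a - a)
        if np.linalg.norm(a_new - a) <= tol * max(np.linalg.norm(a_new), 1e-30):
            return a_new
        a = a_new
    return a
```

When it converges, the limit solves (I − σ_v^2 R_y^{-1}) a = a_LS, i.e., the rearranged form of (10).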
In many signal processing applications, the observation noise variance is unknown, so we must estimate it. In the following subsection, we present a method to estimate the observation noise variance.
3.1. The Observation Noise Variance Estimation Given a
Consider the YW equations for lag p1 + 1; we have

(12) $\left[\,R_y(p_1+1)\ \ R_y(p_1)\ \cdots\ R_y(1)\,\right] \bar{a} = 0.$
We arrange the equations in (12) in terms of a as
(13) $R_q a = r_q,$

where R_q is a (p2+1) × (p1 p2 + p1 + p2) matrix and r_q is a (p2+1) × 1 vector.
Multiplying both sides of (10) by Rq and using (13), we obtain
(14) $\sigma_v^2\, R_q R_y^{-1} a = r_q - R_q a_{LS}.$
The LS estimate of σ_v^2 can be obtained via

(15) $\sigma_v^2 = \dfrac{\left(R_q R_y^{-1} a\right)^T \left(r_q - R_q a_{LS}\right)}{\left\| R_q R_y^{-1} a \right\|^2},$

where ‖·‖ denotes the Euclidean norm. Now a and σ_v^2 can be estimated by an iterative algorithm, summarized in the following subsection.
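Equation (15) is a one-line least-squares projection. A sketch (function name ours), assuming R_q, R_y, r_q, a_LS, and a current estimate a are available:

```python
import numpy as np

def noise_variance_ls(Rq, Ry, rq, a, a_ls):
    """LS estimate (15) of sigma_v^2: project r_q - R_q a_LS onto R_q Ry^{-1} a."""
    g = Rq @ np.linalg.solve(Ry, a)          # g = R_q R_y^{-1} a
    return float(g @ (rq - Rq @ a_ls)) / float(g @ g)
```

If the data exactly satisfy (14), this recovers σ_v^2 without error.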
3.2. The Proposed Algorithm
The proposed estimation algorithm can be summarized as follows.
Step 1.
Estimate the autocorrelations r̂_y(k,l), k = 0, …, p1 + 1, l = 0, …, p2, from the data samples as follows [15]:
(16)r^yk,l=1MN∑m=k+1M∑n=l+1Nym,nym-k,n-l,r^yk,l=r^y-k,-l,r^yk,-l=r^y-k,l=1MN∑m=1M-k∑n=l+1Nym,ny(m+k,n-l).
Step 2.
Form $\hat{R}_y$, $\hat{r}_y$, $\hat{R}_q$, and $\hat{r}_q$, and compute $\hat{a}_{LS} = \hat{R}_y^{-1} \hat{r}_y$.
Step 3.
Set j = 0 and

(17) $\hat{a}^{(0)} = \hat{a}_{LS}.$
Step 4.
Set j = j + 1 and compute

(18) $\hat{\sigma}_v^2 = \dfrac{\left(\hat{R}_q \hat{R}_y^{-1} \hat{a}^{(j-1)}\right)^T \left(\hat{r}_q - \hat{R}_q \hat{a}_{LS}\right)}{\left\|\hat{R}_q \hat{R}_y^{-1} \hat{a}^{(j-1)}\right\|^2}, \qquad \hat{a}^{(j)} = \hat{a}^{(j-1)} + \mu\left[\hat{a}_{LS} + \hat{\sigma}_v^2 \hat{R}_y^{-1} \hat{a}^{(j-1)} - \hat{a}^{(j-1)}\right].$
Step 5.
If [13]

(19) $\dfrac{\left\|\hat{a}^{(j)} - \hat{a}^{(j-1)}\right\|}{\left\|\hat{a}^{(j)}\right\|} \le \delta,$

where δ is a small positive number (e.g., δ = 10^{-3}), convergence is achieved and the iteration terminates; otherwise, go to Step 4.
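The five steps above can be collected into a single routine. The sketch below is an illustrative implementation in our own code (function names and the on-demand autocorrelation helper are assumptions); it builds the index set {(i, j) ≠ (0, 0)} for system (9) and the high-order system (13) directly from the sample autocorrelations of (16).

```python
import numpy as np

def acorr2d(y, k, l):
    """Sample autocorrelation r_hat_y(k, l) of (16), for any sign of k, l."""
    M, N = y.shape
    if k < 0:                      # use r(-k, -l) = r(k, l)
        k, l = -k, -l
    if l >= 0:
        return float(np.sum(y[k:, l:] * y[:M - k, :N - l])) / (M * N)
    l = -l                         # the r_hat_y(k, -l) formula of (16)
    return float(np.sum(y[:M - k, l:] * y[k:, :N - l])) / (M * N)

def noisy_ar2d_estimate(y, p1, p2, mu=1.0, delta=1e-3, max_iter=200):
    """Steps 1-5: bias-compensated SD estimate of the 2D AR parameters."""
    r = lambda k, l: acorr2d(y, k, l)                       # Step 1
    idx = [(i, j) for i in range(p1 + 1) for j in range(p2 + 1)
           if (i, j) != (0, 0)]
    # Step 2: low-order system (9) and high-order system (13)
    Ry = np.array([[r(k - i, l - j) for (i, j) in idx] for (k, l) in idx])
    ry = np.array([r(k, l) for (k, l) in idx])
    Rq = np.array([[r(p1 + 1 - i, l - j) for (i, j) in idx]
                   for l in range(p2 + 1)])
    rq = np.array([r(p1 + 1, l) for l in range(p2 + 1)])
    Ryinv = np.linalg.inv(Ry)
    a_ls = Ryinv @ ry
    a = a_ls.copy()                                         # Step 3, eq. (17)
    sv2 = 0.0
    for _ in range(max_iter):                               # Steps 4-5
        g = Rq @ (Ryinv @ a)
        sv2 = float(g @ (rq - Rq @ a_ls)) / float(g @ g)    # (18), variance
        a_new = a + mu * (a_ls + sv2 * (Ryinv @ a) - a)     # (18), update
        if np.linalg.norm(a_new - a) <= delta * np.linalg.norm(a_new):  # (19)
            a = a_new
            break
        a = a_new
    return a, sv2, idx
```

The returned `idx` records the (i, j) ordering of the entries of the estimated vector.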
3.3. Convergence Analysis
The convergence analysis of the proposed algorithm is similar to that of the method proposed in [21]. The following theorem establishes the convergence condition of the proposed algorithm; in this analysis, the observation noise variance is assumed known.
Theorem 1.
A necessary and sufficient condition for convergence of the proposed algorithm is that the step-size parameter μ satisfies

(20) $0 < \mu < \dfrac{2}{1 - \lambda_{\min}},$

where λ_min is the minimum eigenvalue of the matrix σ_v^2 R_y^{-1}.
Proof.
Defining the estimation error Δa^{(j)} = a^{(j)} − a and substituting a − a_LS − σ_v^2 R_y^{-1} a = 0 into (11), we obtain

(21) $\Delta a^{(j)} = \left[I - \mu\left(I - \sigma_v^2 R_y^{-1}\right)\right] \Delta a^{(j-1)}.$

The eigenvalues of I − μ(I − σ_v^2 R_y^{-1}) are 1 − μ(1 − λ_j), where λ_j = λ_j(σ_v^2 R_y^{-1}) lies between 0 and 1. If −1 < 1 − μ(1 − λ_j) < 1 for every j, then lim_{j→∞} Δa^{(j)} = 0. Each such condition gives 0 < μ < 2/(1 − λ_j), and their intersection is 0 < μ < 2/λ_max(I − σ_v^2 R_y^{-1}) = 2/(1 − λ_min).
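The bound in (20) is easy to check numerically for a given symmetric positive definite R_y; since the eigenvalues of σ_v^2 R_y^{-1} are σ_v^2 divided by those of R_y, λ_min equals σ_v^2 / λ_max(R_y). The helper name below is our own.

```python
import numpy as np

def sd_step_bound(Ry, sigma_v2):
    """Step-size upper bound 2 / (1 - lambda_min) of (20), where lambda_min
    is the smallest eigenvalue of sigma_v2 * Ry^{-1} = sigma_v2 / lambda_max(Ry)."""
    lam_min = sigma_v2 / np.max(np.linalg.eigvalsh(Ry))
    return 2.0 / (1.0 - lam_min)
```

For any μ below this bound, the iteration matrix I − μ(I − σ_v^2 R_y^{-1}) of (21) has spectral radius below one.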
4. Simulation Results
In order to evaluate the performance of the proposed method and compare it with the least-squares (LS) method, two examples are presented. In the first, data are generated from a synthetic 2D noisy AR model; in the second, the methods are applied to 2D sinusoidal spectral estimation.
Example 1.
Consider a synthetic 2D noisy AR model as follows:
(22) $x(m,n) = a(0,1)\,x(m,n-1) + a(0,2)\,x(m,n-2) + a(1,0)\,x(m-1,n) + a(1,1)\,x(m-1,n-1) + a(1,2)\,x(m-1,n-2) + a(2,0)\,x(m-2,n) + a(2,1)\,x(m-2,n-1) + a(2,2)\,x(m-2,n-2) + u(m,n),$
$y(m,n) = x(m,n) + v(m,n),$
where u(m,n) and v(m,n) are mutually uncorrelated white Gaussian noise fields.
The signal-to-noise ratio (SNR) is calculated as

(23) $\mathrm{SNR} = 10 \log_{10} \dfrac{\sigma_x^2}{\sigma_v^2}\ \mathrm{dB},$

where σ_x^2 is the variance of the signal x(m,n). We assume σ_u^2 = 1 and adjust σ_v^2 to produce the desired SNR. We set N = M, and the step size μ is set to one.
The methods are compared in terms of the normalized root mean squared error (RMSE), defined as

(24) $\mathrm{RMSE} = \dfrac{\sqrt{\sum_{j=1}^{J} \left\|\hat{a}_j - a\right\|^2 / J}}{\left\|a\right\|},$

where â_j is the estimate of a in the jth trial and J is the total number of trials. The mean number of iterations per test (NIPT) for the proposed method is also reported, using the stopping criterion (19) with δ = 0.001.
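The RMSE of (24) can be computed directly from the per-trial estimates (the helper name is our own):

```python
import numpy as np

def normalized_rmse(estimates, a_true):
    """Normalized RMSE of (24): sqrt(mean over trials of ||a_hat_j - a||^2) / ||a||.

    estimates : (J, d) array of per-trial parameter estimates.
    a_true    : length-d true parameter vector."""
    E = np.asarray(estimates, dtype=float) - np.asarray(a_true, dtype=float)
    return np.sqrt(np.mean(np.sum(E ** 2, axis=1))) / np.linalg.norm(a_true)
```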
In this example, we set N = M = 128, SNR = 5 dB, and J = 100. The simulation results are summarized in Table 1, which shows that the proposed method outperforms the LS method.
Table 1: Simulation results (J = 100, N = 128, and SNR = 5 dB).

Parameter (true value)   LS mean    LS std. dev.   Proposed mean   Proposed std. dev.
a(0,1) =  0.3             0.2221     0.0088          0.2947          0.0217
a(0,2) =  0.2             0.1602     0.0069          0.1957          0.0153
a(1,0) =  0.3             0.2226     0.0073          0.2952          0.0246
a(1,1) = -0.09           -0.0366     0.0084         -0.0863          0.0223
a(1,2) = -0.06           -0.0284     0.0082         -0.0574          0.0137
a(2,0) =  0.2             0.1602     0.0079          0.1955          0.0128
a(2,1) = -0.06           -0.0284     0.0087         -0.0565          0.0140
a(2,2) = -0.04           -0.0217     0.0085         -0.0387          0.0138
RMSE (%)                 26.5437     —               6.1187          —
NIPT                      —          —               5.5400          —
Example 2.
Consider a 2D sinusoidal signal in noise:

(25) $x(m,n) = a_1 \cos(\omega_1 m + \omega_2 n + \phi_1), \qquad y(m,n) = x(m,n) + v(m,n),$

where ω1 = 1 rad/sample, ω2 = 2 rad/sample, φ1 is a random phase uniformly distributed over [0, 2π], and v(m,n) is zero-mean Gaussian noise with variance σ_v^2 = 1. By the linear prediction property of sinusoidal signals, x(m,n) can be modeled as a 2D AR process of order (2,2) with zero driving noise, u(m,n) = 0. We then estimate the model parameters and compute the corresponding normalized power spectrum.
In this example, the amplitude a1 is adjusted to produce an SNR of −10 dB, with N = M = 128. The mean of the estimated spectrum is depicted in Figures 1 and 2 for the proposed and LS methods, respectively.
Mean of the normalized power spectrum, the proposed method, N=128, SNR = −10 (dB), and J=100.
Mean of the normalized power spectrum, the LS method, N=128, SNR = −10 (dB), and J=100.
The power spectrum figures show that the proposed method produces a sharper spectral estimate than the LS method. Note that, in all simulations presented in this section, the proposed method usually converged within a few iterations (NIPT = 6 iterations on average).
5. Conclusion
The two-dimensional noisy AR estimation problem has been addressed. The Yule-Walker equations are solved using an adaptive steepest descent algorithm, and the bias induced by the observation noise variance is removed using Yule-Walker equations beyond the order of the model. Simulation results show that the proposed method yields more accurate parameter estimates than the LS method and resolves the frequencies of sinusoidal signals more sharply.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
References

[1] X. Zhang and X. Wu, "Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation."
[2] S. Oe, "Texture segmentation method by using two-dimensional AR model and Kullback information."
[3] L. He, Z. Yu, Z. Gu, and Y. Li, "Long-tail distribution based multiscale-multiband autoregressive detection for hyperspectral imagery."
[4] R. R. Hansen Jr. and R. Chellappa, "Noncausal 2-D spectrum estimation for direction finding."
[5] M. Kay, V. Nagesha, and J. Salisbury, "Broad-band detection based on two-dimensional mixed autoregressive model."
[6] S. M. Kay and S. B. Doyle, "Rapid estimation of the range-Doppler scattering function."
[7] D. Umansky and M. Pätzold, "A two-dimensional autoregressive model for MIMO wideband mobile radio channels," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '08), New Orleans, La, USA, December 2008.
[8] A. H. Kayran and E. Camcioglu, "New efficient 2-D lattice structures for general autoregressive modeling of random fields."
[9] B. Choi, "Model identification of a noncausal 2-D AR process using a causal 2-D AR model on the nonsymmetric half-plane."
[10] B. Choi and D. N. Politis, "Modeling 2-D AR processes with various regions of support."
[11] C. E. Davila, "A subspace approach to estimation of autoregressive parameters from noisy measurements."
[12] A. Mahmoudi, M. Karimi, and H. Amindavar, "Parameter estimation of autoregressive signals in presence of colored AR(1) noise as a quadratic eigenvalue problem."
[13] W. X. Zheng, "Autoregressive parameter estimation from noisy data."
[14] A. Mahmoudi and M. Karimi, "Estimation of the parameters of multichannel autoregressive signals from noisy observations."
[15] S. Lee and T. Stathaki, "Two-dimensional autoregressive modelling using joint second and third order statistics and a weighting scheme," in Proceedings of the 12th European Signal Processing Conference (EUSIPCO '04), 2004.
[16] A. Mahmoudi, "Two dimensional autoregressive estimation from noisy observations as a quadratic eigenvalue problem."
[17] B. Aksasse and L. Radouane, "Two-dimensional autoregressive (2-D AR) model order estimation."
[18] X.-D. Zhang and J. Cheng, "High resolution two-dimensional ARMA spectral estimation."
[19] S. M. Kay.
[20] S. Haykin.
[21] A. Mahmoudi, "Adaptive algorithm for multichannel autoregressive estimation in spatially correlated noise."