A Stochastic Total Least Squares Solution of Adaptive Filtering Problem

An efficient, computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm recursively computes an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis establishes the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean squares (LMS) and normalized least mean squares (NLMS) algorithms: it attains a smaller mean square deviation and exhibits better convergence in misalignment for unknown system identification under noisy inputs.


Introduction
Ordinary least squares methods are extensively used in many signal processing applications to extract system parameters from input/output data [1, 2]. These methods yield an unbiased solution of the adaptive least squares problem when there is no interference in either the inputs or the outputs, or when interference is present only in the outputs of the unknown system and the inputs are clean. However, if interference exists in both the input and the output of the unknown system or adaptive filtering problem, the ordinary least squares solution becomes biased [3].
The total least squares (TLS) method [4] is an efficient technique for obtaining an unbiased estimate of the system parameters when both input and output are contaminated by noise. Golub and Van Loan [5] provided an analytical procedure to obtain an unbiased solution of the TLS problem using the singular value decomposition (SVD) of the data matrices. This technique is extensively used in data processing and control applications [4, 6, 7]. However, the application of TLS methods in signal processing is still limited, because computing the SVD of an M × N matrix requires a high complexity of O(N³).
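A minimal numerical sketch in the spirit of the SVD-based TLS procedure of Golub and Van Loan: take the SVD of the augmented matrix [A | b] built from noisy data, pick the right singular vector associated with the smallest singular value, and rescale it so its last entry is −1. All names, sizes, and noise levels below are illustrative assumptions, not taken from [5].

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 200, 4                            # observations, parameters (assumed)
x_true = rng.standard_normal(N)          # unknown parameter vector

A = rng.standard_normal((M, N))          # clean data matrix
b = A @ x_true                           # clean outputs
A_noisy = A + 0.05 * rng.standard_normal((M, N))   # noise on the inputs
b_noisy = b + 0.05 * rng.standard_normal(M)        # noise on the outputs

Z = np.column_stack([A_noisy, b_noisy])  # augmented matrix [A | b]
_, _, Vt = np.linalg.svd(Z)
v = Vt[-1]                               # right singular vector of smallest sigma
x_tls = -v[:N] / v[N]                    # rescale so the last entry is -1

print(np.linalg.norm(x_tls - x_true))    # small estimation error
```

Because both sides carry noise of equal variance here, the TLS estimate stays close to the true parameters, whereas an ordinary least squares fit on the same noisy data would be biased toward zero.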
TLS solutions of the adaptive filtering problem gained importance after the pioneering work of Pisarenko [8]. He presented an efficient solution of the adaptive TLS problem by adaptively computing the eigenvector corresponding to the smallest eigenvalue of the autocorrelation matrix of the augmented input/output signal. Since then, several algorithms have been proposed based on adaptive implementations of Pisarenko's method. The adaptive TLS algorithms proposed in [9-11] achieve an unbiased TLS solution of the adaptive filtering problem with a complexity of O(M), where M is the filter length. However, they are sensitive to the correlation properties of the input signals and perform poorly under correlated inputs.
In this paper, an iterative algorithm is presented to find an optimal TLS solution of the adaptive FIR filtering problem. A stochastic technique similar to that of the least mean squares (LMS) algorithm of adaptive least squares filtering is employed to develop a total least mean squares (TLMS) algorithm for the adaptive total least squares problem. Instead of being based on the minimum mean square error, as the LMS algorithm is, the proposed TLMS algorithm is based on the total mean square error, obtained by minimizing the weighted cost function for the TLS solution of the adaptive filtering problem. The proposed algorithm maintains the O(M) complexity of adaptive TLS algorithms with the additional quality of steady state convergence under correlated inputs. A convergence analysis is presented to show the global convergence of the proposed algorithm under all kinds of inputs, provided the stepsize parameter is suitably chosen.
This paper is outlined as follows: we start with a mathematical formulation of the adaptive total least squares problem in Section 2; the derivation of the TLMS algorithm is given in Section 3, including its convergence analysis in Section 3.1. The efficiency of the proposed algorithm is then tested in Section 4 by applying it to an unknown system identification problem and comparing the results with the conventional LMS and normalized LMS (NLMS) algorithms. Concluding remarks are given in Section 5.

Mathematical Formulation of Adaptive Total Least Squares Problem
Consider an unknown system to be identified by an adaptive FIR filter of length M with response vector w_k at time k, under the assumption that both input and output are corrupted by additive white Gaussian noise (AWGN). The noise-free input vector a_k ∈ R^M is formed from the input signals a(k), such that

a_k = [a(k), a(k − 1), . . . , a(k − M + 1)]^T. (1)

The desired output of the unknown system is then given by

d̃(k) = d(k) + Δd(k), (2)

where d(k) = w^T a_k is the system's output and Δd(k) is an added white Gaussian noise of zero mean and variance σ²_Δd. The primary assumption of an adaptive least squares (ALS) problem is that perturbations occur in the output signals only and that the input signals are exactly known. This assumption is not practical enough, because perturbations due to sampling, modeling, or measurement errors affect the input signals too. A sensible choice to overcome such situations is to introduce perturbations in the input signals in addition to the perturbations of the output signals. A schematic diagram of an adaptive filter with perturbed input is depicted in Figure 1.
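An illustrative sketch of the perturbed signal model described above: the tap vector a_k collects the last M input samples, the system output is d(k) = w^T a_k, and independent white Gaussian noise corrupts both the input and the output. The filter length, noise levels, and signal length are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 10                                   # filter length (assumed)
w = rng.standard_normal(M)               # unknown system response

a = rng.standard_normal(2000)            # clean input signal a(k)
sigma_in, sigma_out = 0.1, 0.1           # noise std deviations (assumed)

# d(k) = w^T a_k with a_k = [a(k), a(k-1), ..., a(k-M+1)]^T
d = np.array([w @ a[k - M + 1:k + 1][::-1] for k in range(M - 1, len(a))])
a_noisy = a + sigma_in * rng.standard_normal(a.shape)    # noisy input samples
d_noisy = d + sigma_out * rng.standard_normal(d.shape)   # noisy desired output
```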
If Δa_k = [Δa(k), Δa(k − 1), . . . , Δa(k − M + 1)]^T ∈ R^M denotes the perturbation in the input vector a_k, where Δa(k) is an additive white Gaussian noise (uncorrelated with the output noise) of zero mean and variance σ²_Δa, then the noisy input vector is

ã_k = a_k + Δa_k. (3)

It is clear from Figure 1 that the presence of noise in the filter input degrades the estimation of the solution of the adaptive filtering problem. Casting the adaptive filtering problem as a total least squares problem can, however, remedy this poor estimation under noisy input [10, 11]. The following definition is made to adopt a more general signal model for ATLS-based filtering.

Definition 1 (augmented data vector). Define an (M + 1) × 1 augmented data vector z̃_k as

z̃_k = [ã_k^T, d̃(k)]^T. (4)

An alternate form of the error e(k), in terms of the augmented data vector of Definition 1, is obtained as follows:

e(k) = d̃(k) − w_k^T ã_k = −z̃_k^T w̃_k, (5)

where w̃_k = [w_k^T, −1]^T denotes the (M + 1) × 1 extended parameter vector. The TLS solution of the adaptive filtering problem is an eigenvector associated with the smallest eigenvalue of the extended autocorrelation matrix

R̃ = E{z̃_k z̃_k^T}. (6)

Instead of minimizing the mean square error E{e²(k)}, the adaptive total least squares problem is concerned with minimizing the total mean square error E{ε²(k)}, where the total error ε(k) is given by

ε(k) = z̃_k^T w̃_k / ‖w̃_k‖. (7)

The TLS cost function J̃(w̃_k) is then defined in terms of the total error as

J̃(w̃_k) = E{ε²(k)} = w̃_k^T R̃ w̃_k / (w̃_k^T w̃_k). (8)

The adaptive total least squares problem is a minimization problem of the form [10, 11]

min over w̃ of (w̃^T R̃ w̃) / (w̃^T w̃). (9)

Note that an optimal solution w̃_opt of the TLS problem (9) is an eigenvector corresponding to the smallest eigenvalue of R̃. In practice, the SVD technique is used to solve TLS problems, since it offers lower sensitivity to computational errors; however, it is computationally expensive [5]. An alternate choice for estimating the eigenvector corresponding to the smallest eigenvalue is to use an adaptive algorithm [1, 2].
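A small numerical check, under an assumed stationary Gaussian model, of the statement above: the TLS solution is the eigenvector of the extended autocorrelation matrix R̃ = E{z̃ z̃^T} belonging to its smallest eigenvalue, and rescaling that eigenvector so its last component equals −1 recovers the filter vector. Sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 4, 50000                          # filter length, sample count (assumed)
w_true = rng.standard_normal(M)          # unknown system

A = rng.standard_normal((K, M))          # clean input vectors a_k (rows)
d = A @ w_true                           # clean outputs d(k) = w^T a_k
Z = np.column_stack([A + 0.05 * rng.standard_normal((K, M)),   # noisy inputs
                     d + 0.05 * rng.standard_normal(K)])       # noisy outputs

R = Z.T @ Z / K                          # sample estimate of the extended R
eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
v = eigvecs[:, 0]                        # eigenvector of smallest eigenvalue
w_est = -v[:M] / v[M]                    # force the last component to -1

print(np.linalg.norm(w_est - w_true))    # close to the true system
```

With equal noise variances on input and output, the smallest eigenvalue of R̃ approaches the noise variance, and its eigenvector aligns with the extended vector [w^T, −1]^T, which is exactly the TLS property the derivation relies on.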

Derivation of Total LMS Algorithm for Adaptive Filtering Problem
In the adaptive least squares problem, the conventional LMS algorithm is a steepest descent method which uses an instantaneous cost function J = e²(k) for the computation of the gradient vector [1]. Using a similar implementation for the TLS problem, the total LMS (TLMS) algorithm is obtained by taking an instantaneous estimate of the cost function (8) as J̃ = ε²(k).
The recursive update equation of the TLMS algorithm is then given as

w̃_{k+1} = w̃_k − μ ∇_{w̃} J̃, (10)

where μ is the stepsize or convergence parameter. The gradient of the instantaneous cost J̃ = ε²(k) is

∇_{w̃} J̃ = 2 ε(k) ∇_{w̃} ε(k). (11)

Using ε(k) = z̃_k^T w̃_k / ‖w̃_k‖ = w̃_k^T z̃_k / ‖w̃_k‖ and ‖w̃_k‖ = √(w̃_k^T w̃_k), the above equation becomes

∇_{w̃} J̃ = (2 ε(k) / ‖w̃_k‖) (z̃_k − (ε(k) / ‖w̃_k‖) w̃_k). (12)

Substituting (12) in (10), the update equation of the TLMS algorithm becomes

w̃_{k+1} = w̃_k − (2 μ ε(k) / ‖w̃_k‖) (z̃_k − (ε(k) / ‖w̃_k‖) w̃_k). (13)

Once w̃_{k+1} is computed using (13), the TLS solution update w_{k+1} is obtained by the following formula:

w_{k+1} = −[w̃_{k+1}]_{1:M} / [w̃_{k+1}]_{M+1}, (14)

that is, by rescaling the extended vector so that its last component equals −1. The detailed TLMS algorithm is summarized in Table 1.
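A runnable sketch of a TLMS-style stochastic update under the signal model of Section 2: at each step form the augmented vector z̃_k from the noisy input taps and the noisy output, compute the normalized total error ε(k) = z̃^T w̃ / ‖w̃‖, descend along the gradient of ε², and recover the filter estimate by rescaling the extended vector so its last entry is −1. The stepsize, noise levels, and sizes below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
M, mu = 8, 0.002                        # filter length, stepsize (assumed)
w_true = rng.standard_normal(M)         # unknown system

w_ext = np.zeros(M + 1)                 # extended vector, initialized [0, -1]
w_ext[M] = -1.0

for _ in range(20000):
    a = rng.standard_normal(M)                                  # clean input taps
    z = np.append(a + 0.05 * rng.standard_normal(M),            # noisy input part
                  w_true @ a + 0.05 * rng.standard_normal())    # noisy output part
    nrm = np.linalg.norm(w_ext)
    eps = (z @ w_ext) / nrm                               # total error eps(k)
    grad = (2.0 * eps / nrm) * (z - (eps / nrm) * w_ext)  # gradient of eps^2
    w_ext -= mu * grad                                    # TLMS update

w_hat = -w_ext[:M] / w_ext[M]           # recover the filter taps
print(np.linalg.norm(w_hat - w_true))
```

Note that the gradient is orthogonal to w̃_k to first order, so the norm of the extended vector stays roughly constant during adaptation; this is what makes the plain gradient step stable without explicit renormalization.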
A complexity measure of the algorithm shows that it is computationally linear, requiring a total of 6M + 9 multiplications/divisions per iteration. This computational simplicity makes the adaptive TLMS algorithm a better choice than the computationally expensive SVD-based TLS algorithm, which requires on the order of 6M³ computations per iteration [5].
This shows that the proposed algorithm is a variable stepsize algorithm, with effective stepsize μ̃ = μ/‖w̃_k‖². An appropriate way to choose μ is to initialize the algorithm such that ‖w̃_0 − w̃_opt‖ is less than 2‖w̃_opt‖ [13]. According to this result, for w_0 = 0 we have ‖w̃_0‖ = 1 and μ̃ = μ, provided that ‖z̃_k‖² − J̃(k) > 0.

Application of TLMS Algorithm in System Identification
To examine the performance of the proposed TLMS algorithm, the unknown system identification model shown in Figure 2 is used. A white Gaussian input signal of variance σ² = 1 is passed through a first-order coloring filter with frequency response H(z) = √(1 − b²)/(1 − b z⁻¹) [1], where |b| < 1 is a correlation parameter that controls the eigenvalue spread of the input signals; b = 0 corresponds to the case when the eigenvalue spread of the input signals is close to 1, and the eigenvalue spread increases with increasing b. A white Gaussian noise of SNR = 30 dB is added to the input signal to get the noisy input signal, and the noisy output signal is obtained by corrupting the output signal with an additive white Gaussian noise of SNR 30 dB. The proposed TLMS algorithm is compared with the LMS and NLMS algorithms of [1] for an FIR filter of length M = 10. The least squares misalignment ‖w_k − w_opt‖ is compared with the total least squares misalignment ‖w̃_k − w̃_opt‖/√(w̃_k^T w̃_k), and simulation results are recorded for 2000 iterations with an ensemble average over 1000 independent runs. Figure 3(a) shows that the algorithm still converges.
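A small sketch of the colored-input generation used in experiments of this kind. The exact coloring filter is not fully legible in the text above, so we assume a standard first-order filter H(z) = √(1 − b²)/(1 − b z⁻¹), which keeps the output at unit variance while |b| < 1 controls the correlation; the sketch confirms that the eigenvalue spread of the input autocorrelation matrix grows with b.

```python
import numpy as np

rng = np.random.default_rng(4)
M = 10                                   # filter length from the experiment
spreads = {}

for b in (0.3, 0.6, 0.9):                # correlation parameters from Section 4
    white = rng.standard_normal(50000)   # unit-variance white Gaussian input
    colored = np.empty_like(white)
    prev, g = 0.0, np.sqrt(1.0 - b * b)
    for k, x in enumerate(white):
        prev = b * prev + g * x          # y(k) = b*y(k-1) + sqrt(1-b^2)*x(k)
        colored[k] = prev

    # Sample M x M autocorrelation matrix of the colored signal
    taps = np.lib.stride_tricks.sliding_window_view(colored, M)
    R = taps.T @ taps / len(taps)
    eigs = np.linalg.eigvalsh(R)
    spreads[b] = eigs[-1] / eigs[0]
    print(b, spreads[b])                 # spread increases with b
```

The growing eigenvalue spread is exactly what slows down LMS-type algorithms under correlated inputs, which is the regime the convergence curves in Figure 3 probe.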

Convergence Behavior Corresponding to the Correlation Parameter b

To check the effect of changes in the correlation parameter b on the steady state convergence behavior of the TLMS algorithm, different simulations are presented in Figure 3, each showing four learning curves of the total misalignment of the TLMS algorithm corresponding to μ = 0.5, 0.25, 0.125, and 0.0625. The correlation parameter is b = 0.3 in Figures 3(a) and 3(b), 0.6 in Figure 3(c), and 0.9 in Figure 3(d). It is clear from all these simulation curves that an increase in the correlation of the data signals does not affect the steady state performance of the algorithm; although the convergence speed slows down, all the curves converge to the optimal solution. Figure 4 shows the comparison of the misalignment of the three algorithms, that is, the LMS, NLMS, and TLMS algorithms. The first two compute a least squares solution of the adaptive filtering problem, while the third computes the TLS solution of the adaptive total least squares problem. Taking b = 0.3, the stepsize parameter is chosen as 0.015 for the LMS algorithm, 0.3 for the NLMS algorithm, and 0.25 for the TLMS algorithm. The results of this simulation show that the misalignment of the TLMS algorithm keeps decreasing as the iterations proceed, and it provides a better solution of the adaptive TLS problem.

Conclusion
In this paper, an efficient TLMS algorithm is presented for the total least squares solution of the adaptive filtering problem. The proposed algorithm is derived by using a cost function of weighted instantaneous error signals and an efficient computation of the misalignment in terms of the mean square deviation. The TLMS algorithm is better able to tackle perturbations of both the input and output signals, because it is chiefly derived for that purpose. Since in real-life problems both input and output signals are contaminated by noise, the TLMS algorithm has wide applicability. The convergence analysis shows that the proposed algorithm is globally convergent, provided that the stepsize parameter is chosen appropriately. Furthermore, it is computationally simple and requires only O(M) complexity, while other algorithms for TLS problems either require higher complexity or are sensitive to the correlation properties of the data signals.