A Sequential Selection Normalized Subband Adaptive Filter with Variable Step-Size Algorithms

This letter proposes a sequential selection normalized subband adaptive filter (SS-NSAF) to reduce computational complexity. In addition, a variable step-size algorithm is proposed based on a mean-square deviation analysis of the SS-NSAF. To further enhance the convergence speed, we propose an improved variable step-size SS-NSAF using a two-stage concept. Simulation results show that the proposed algorithms achieve low computational complexity and low misalignment errors.


Introduction
The normalized least mean-square (NLMS) algorithm has been used in a variety of applications, such as network echo cancellation, system identification, and channel estimation, because of its low computational complexity and ease of implementation. However, it suffers from slow convergence for colored input signals. To address this drawback, the affine projection algorithm (APA), zero-attracting algorithms, $\ell_0$-norm constrained algorithms, and the normalized subband adaptive filter (NSAF) have been proposed [1-4].
The NSAF improves the convergence rate for colored input signals by using multiple subbands, with small computational complexity compared to the APA. To achieve better performance in terms of the steady-state error and the convergence rate, variable step-size schemes have also been proposed. In [5], the variable step-size NSAF (VSS-NSAF) was developed by minimizing the mean-square deviation (MSD) of the NSAF. In [6], the variable step-size matrix NSAF effectively estimates the noise variance of each subband. Furthermore, several algorithms have been proposed to reduce the computational complexity of the NSAF. The simplified selective partial update subband adaptive filter reduces the computational complexity by updating partial filter coefficients in each subband rather than the entire filter at every adaptation [7]. The dynamic selection NSAF (DS-NSAF) updates only a selected subset of subband filters, chosen to yield the largest decrease between successive MSDs at every adaptation [8]. Recently, Rabiee [9] introduced the flexible complexity VSS-NSAF (FC-VSS-NSAF) to reduce the computational complexity and improve the convergence performance.
This paper proposes a sequential selection NSAF (SS-NSAF) to reduce complexity and a variable step-size algorithm to improve the convergence speed and the misalignment errors. The variable step-size algorithm is derived from the MSD analysis of the SS-NSAF. In addition, to improve the convergence speed further, we propose an improved variable step-size algorithm using a two-stage concept [10]. It first runs the conventional NSAF with a fixed step size to achieve a fast convergence rate and then performs the SS-NSAF with a variable step size. We confirm that the proposed algorithm attains low misalignment errors while reducing the computational complexity.

Sequential Selection NSAF (SS-NSAF)
We consider data $d(n)$ derived from an unknown system:
$$d(n) = \mathbf{u}^{T}(n)\mathbf{w}_{o} + v(n),$$
where $\mathbf{w}_{o}$ is an $M$-dimensional unknown vector to be estimated, $v(n)$ accounts for measurement noise with zero mean and variance $\sigma_{v}^{2}$, and $\mathbf{u}(n)$ denotes an $M$-dimensional input vector.
The conventional NSAF update equation is
$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_{i}(k)}{\|\mathbf{u}_{i}(k)\|^{2}}\, e_{i,D}(k),$$
where $\mu$ is a step size, $N$ is the number of subbands, $\mathbf{u}_{i}(k)$ is the $i$th subband input vector, and $e_{i,D}(k)$ is the $i$th decimated subband error.
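As a concrete illustration, one NSAF adaptation can be sketched in NumPy as follows (a minimal sketch, not the authors' code; the function name, the regularization constant `eps`, and the array layout are our own assumptions):

```python
import numpy as np

def nsaf_update(w_hat, U, d_sub, mu, eps=1e-8):
    """One NSAF adaptation step.

    w_hat : current length-M filter estimate
    U     : (N, M) array whose rows are the subband input vectors u_i(k)
    d_sub : length-N vector of decimated subband desired samples
    mu    : step size
    Each subband contributes a gradient correction normalized by ||u_i(k)||^2.
    """
    e = d_sub - U @ w_hat                  # decimated subband errors e_{i,D}(k)
    norms = np.sum(U * U, axis=1) + eps    # ||u_i(k)||^2 (regularized)
    return w_hat + mu * (U.T @ (e / norms)), e
```

Because each subband correction is individually normalized, the recursion behaves like $N$ parallel NLMS updates on near-white subband signals, which is the source of the NSAF's improved convergence for colored inputs.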
The SS-NSAF updates using $P$ sequentially selected subband filters to reduce the computational complexity, where $P \le N$. The proposed SS-NSAF update is
$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu \sum_{i=0}^{N-1} s_{i}(k) \frac{\mathbf{u}_{i}(k)}{\|\mathbf{u}_{i}(k)\|^{2}}\, e_{i,D}(k), \quad (3)$$
where $s_{i}(k) \in \{0,1\}$ is an element of the selection vector $\mathbf{S}(k) = [s_{0}(k)\; s_{1}(k)\; \cdots\; s_{N-1}(k)]^{T}$. At every iteration, $\mathbf{S}(k)$ marks the next $P$ subband indices in cyclic order, so that the subbands are visited sequentially; the initial selection covers the first $P$ subbands.
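The recursion defining $\mathbf{S}(k)$ is not fully legible in this extraction; assuming the sequential rule cycles through the subbands in round-robin fashion, the selection can be sketched as (function name and return convention are our own):

```python
import numpy as np

def sequential_selection(N, P, start):
    """Build the 0/1 selection vector S(k): mark P of the N subbands,
    starting at index `start` and wrapping cyclically, then advance the
    start pointer for the next iteration."""
    s = np.zeros(N, dtype=int)
    for j in range(P):
        s[(start + j) % N] = 1
    return s, (start + P) % N
```

Over $N$ consecutive iterations each subband is selected exactly $P$ times, so the long-run selection probability is $P/N$, the value used later in the MSD analysis.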

Variable Step-Size SS-NSAF (VSS-SS-NSAF)
A weight-error vector is defined as $\tilde{\mathbf{w}}(k) \triangleq \mathbf{w}_{o} - \hat{\mathbf{w}}(k)$, and the SS-NSAF update equation (3) can be rewritten in terms of $\tilde{\mathbf{w}}$ as follows:
$$\tilde{\mathbf{w}}(k+1) = \mathbf{A}(k)\tilde{\mathbf{w}}(k) - \mu \sum_{i=0}^{N-1} s_{i}(k) \frac{\mathbf{u}_{i}(k)}{\|\mathbf{u}_{i}(k)\|^{2}}\, v_{i,D}(k), \quad (6)$$
where $\mathbf{A}(k) = \mathbf{I} - \mu \sum_{i=0}^{N-1} s_{i}(k)\, \mathbf{u}_{i}(k)\mathbf{u}_{i}^{T}(k)/\|\mathbf{u}_{i}(k)\|^{2}$ and $\mathbf{I}$ is the identity matrix. For a given set $\mathcal{U}(k) \triangleq \{\mathbf{u}_{i}(j) \mid 0 \le i \le N-1,\; 0 \le j \le k\}$, the covariance matrix and MSD are defined as
$$E[\tilde{\mathbf{w}}(k)\tilde{\mathbf{w}}^{T}(k)], \quad (7)$$
$$\mathrm{MSD}(k) \triangleq E[\tilde{\mathbf{w}}^{T}(k)\tilde{\mathbf{w}}(k)] \equiv E[\mathrm{tr}(\mathbf{P}(k))], \quad (8)$$
where $E\{\cdot\}$ is the expectation of a random variable and $\mathrm{tr}(\cdot)$ is the trace of a matrix. Let us define $\mathbf{P}(k)$ as the conditional covariance matrix of (7) as follows:
$$\mathbf{P}(k) \triangleq E[\tilde{\mathbf{w}}(k)\tilde{\mathbf{w}}^{T}(k) \mid \mathcal{U}(k)]. \quad (9)$$
By substituting (6) into (9), we have
$$\mathbf{P}(k+1) = \mathbf{A}(k)\mathbf{P}(k)\mathbf{A}^{T}(k) + \mu^{2} \sum_{i=0}^{N-1} s_{i}(k)\, \sigma_{v_{i},D}^{2} \frac{\mathbf{u}_{i}(k)\mathbf{u}_{i}^{T}(k)}{\|\mathbf{u}_{i}(k)\|^{4}}. \quad (10)$$
After taking the trace on both sides of (10), the update recursion of $\mathrm{tr}(\mathbf{P})$ has the form
$$\mathrm{tr}(\mathbf{P}(k+1)) = \mathrm{tr}(\mathbf{P}(k)) - (2\mu - \mu^{2}) \sum_{i=0}^{N-1} s_{i}(k) \frac{\mathbf{u}_{i}^{T}(k)\mathbf{P}(k)\mathbf{u}_{i}(k)}{\|\mathbf{u}_{i}(k)\|^{2}} + \mu^{2} \sum_{i=0}^{N-1} s_{i}(k) \frac{\sigma_{v_{i},D}^{2}}{\|\mathbf{u}_{i}(k)\|^{2}}, \quad (11)$$
where cross-subband terms are neglected owing to the near-orthogonality of the analysis filters. Using the result of [11], we can assume that
$$\frac{\mathbf{u}_{i}^{T}(k)\mathbf{P}(k)\mathbf{u}_{i}(k)}{\|\mathbf{u}_{i}(k)\|^{2}} \approx \frac{\mathrm{tr}(\mathbf{P}(k))}{M} \quad (12)$$
is available, which leads to
$$\mathrm{tr}(\mathbf{P}(k+1)) = \left(1 - \frac{(2\mu - \mu^{2})P}{M}\right) \mathrm{tr}(\mathbf{P}(k)) + \mu^{2} \sum_{i=0}^{N-1} s_{i}(k) \frac{\sigma_{v_{i},D}^{2}}{\|\mathbf{u}_{i}(k)\|^{2}}. \quad (13)$$
From (7), after taking the expectation of both sides of (13), because the selection probability is $\Pr(s_{i}(k) = 1) = P/N$, the update recursion of the MSD is derived as
$$\mathrm{MSD}(k+1) = \left(1 - \frac{(2\mu - \mu^{2})P}{M}\right) \mathrm{MSD}(k) + \mu^{2} \frac{P}{N} \sum_{i=0}^{N-1} \sigma_{v_{i},D}^{2}\, E\!\left[\frac{1}{\|\mathbf{u}_{i}(k)\|^{2}}\right], \quad (14)$$
where $\sigma_{v_{i},D}^{2} = \sigma_{v}^{2}/N$ [12]. From the MSD analysis given by (14), the steady-state value of the MSD can be obtained as
$$\mathrm{MSD}(\infty) = \frac{\mu M}{(2-\mu)N} \sum_{i=0}^{N-1} \sigma_{v_{i},D}^{2}\, E\!\left[\frac{1}{\|\mathbf{u}_{i}(k)\|^{2}}\right]. \quad (15)$$
Note that $P$ cancels in (15): the steady-state MSD values are the same for various $P$ when the same step size is used.
By minimizing $\mathrm{tr}(\mathbf{P}(k+1))$ in (13) with respect to the step size $\mu$, the optimal step size $\mu(k)$ is derived as
$$\mu(k) = \frac{\mathrm{tr}(\mathbf{P}(k))}{\mathrm{tr}(\mathbf{P}(k)) + \dfrac{M}{P} \displaystyle\sum_{i=0}^{N-1} s_{i}(k) \frac{\sigma_{v_{i},D}^{2}}{\|\mathbf{u}_{i}(k)\|^{2}}}. \quad (16)$$
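The optimal step size can be computed per iteration as follows (a sketch of an assumed form of (16), the minimizer of the one-step-ahead $\mathrm{tr}(\mathbf{P}(k+1))$; all names are our own):

```python
import numpy as np

def optimal_step_size(trP, s, norms, sigma2_v_sub, M):
    """mu(k) = tr(P(k)) / (tr(P(k)) + (M/P) * sum_i s_i * sigma_{v_i,D}^2 / ||u_i||^2),
    where P = sum_i s_i is the number of selected subbands.

    trP          : current tr(P(k)) estimate
    s            : 0/1 selection vector S(k)
    norms        : per-subband input energies ||u_i(k)||^2
    sigma2_v_sub : per-subband noise variances sigma_{v_i,D}^2
    M            : adaptive filter length
    """
    P = int(np.sum(s))
    noise = (M / P) * np.sum(s * sigma2_v_sub / norms)
    return trP / (trP + noise)
```

The step size stays in $(0, 1)$: it approaches 1 while the deviation term dominates the noise term (fast convergence) and shrinks toward 0 near steady state (low misadjustment).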

Improved VSS-SS-NSAF (IVSS-SS-NSAF)
Since the SS-NSAF updates only $P$ subband filters at every iteration, a small $P$ leads to low computational complexity but a slow convergence rate. Therefore, the VSS-SS-NSAF also converges slowly when $P$ is small. To improve the convergence rate of the VSS-SS-NSAF, we propose the IVSS-SS-NSAF, which has two stages. In the first stage, the NSAF with a large step size $\mu(0)$ is performed on all subbands to guarantee fast convergence. Then, when the NSAF reaches steady state, the proposed algorithm uses only $P$ subband filters with the optimal step size (16), resulting in low computational complexity. From (15), the steady-state value of the MSD with $\mu(0)$ can be obtained as
$$\mathrm{MSD}_{ss} = \frac{\mu(0) M}{(2-\mu(0))N} \sum_{i=0}^{N-1} \sigma_{v_{i},D}^{2}\, E\!\left[\frac{1}{\|\mathbf{u}_{i}(k)\|^{2}}\right]. \quad (17)$$
Therefore, the proposed algorithm uses all subband filters while $\mathrm{tr}(\mathbf{P}(k)) > \mathrm{MSD}_{ss}$.
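The two-stage schedule reduces to a simple per-iteration decision, sketched below under our reading of the switching rule (function name and return convention are assumptions):

```python
def ivss_stage(trP_est, msd_ss, N, P, mu0):
    """Stage 1 (transient): estimated tr(P(k)) still above the full-band
    steady-state MSD attainable with mu(0) -> update all N subbands with
    the fixed step mu(0).
    Stage 2 (steady state): update only P sequentially selected subbands;
    the step size is then recomputed each iteration by the variable
    step-size rule, so None is returned in its place.
    Returns (number of subbands to update, fixed step or None)."""
    if trP_est > msd_ss:
        return N, mu0
    return P, None
```

This captures why the IVSS-SS-NSAF keeps the fast transient of the full NSAF while paying the reduced per-iteration cost of the SS-NSAF once near steady state.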

Practical Considerations.
The proposed algorithm obtains the optimal step size from the current MSD value; however, the recursion for $\mathrm{tr}(\mathbf{P}(k))$ is monotonically decreasing, which leads to performance degradation in a fast-varying system. To reflect the nonstationarity of the system, the current MSD value can be rewritten using the output error variance as follows [11, 13]:
$$\mathrm{tr}(\mathbf{P}(k)) \approx \frac{M}{N} \sum_{i=0}^{N-1} \frac{\sigma_{e_{i}}^{2}(k) - \sigma_{v_{i},D}^{2}}{\sigma_{u_{i}}^{2}(k)},$$
where $\sigma_{e_{i}}^{2}(k) = E[e_{i,D}^{2}(k)]$ and $\sigma_{u_{i}}^{2}(k) = \|\mathbf{u}_{i}(k)\|^{2}/M$. Because it is hard to obtain the exact values of $\sigma_{e_{i}}^{2}(k)$ and $\sigma_{u_{i}}^{2}(k)$, they are estimated by a moving-average method as follows:
$$\hat{\sigma}_{e_{i}}^{2}(k) = \lambda \hat{\sigma}_{e_{i}}^{2}(k-1) + (1-\lambda)\, e_{i,D}^{2}(k),$$
$$\hat{\sigma}_{u_{i}}^{2}(k) = \lambda \hat{\sigma}_{u_{i}}^{2}(k-1) + (1-\lambda)\, \frac{\|\mathbf{u}_{i}(k)\|^{2}}{M},$$
where $\lambda \in [0, 1)$ is a smoothing factor. Algorithm 1 (initialization: $\hat{\mathbf{w}}(0) = \mathbf{0}$, $\mu(0)$, $\mathrm{tr}(\mathbf{P}(0))$, $\mathbf{S}(0)$, $\hat{\sigma}_{e_{i}}^{2}(0) = 0$) summarizes the proposed algorithm, and Table 1 shows the computational complexity of the NSAF, FC-VSS-NSAF, VSS-SS-NSAF, and IVSS-SS-NSAF per iteration. To reduce the computational complexity, we used a recursive computation of the norms of the input signals [6, 14]. The FC-VSS-NSAF and IVSS-SS-NSAF have higher computational complexity than the conventional NSAF when all subbands are selected; however, they reduce the average computational complexity and power consumption through the number of selected subbands, which is governed by their respective complexity reduction factors.
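The smoothing recursions and the resulting nonstationarity-aware MSD estimate can be sketched as follows (a minimal sketch of our reconstruction of the error-variance form; the state is carried in NumPy arrays, one entry per subband, and the clipping at zero is our own safeguard):

```python
import numpy as np

def update_variance_estimates(sig2_e, sig2_u, e_sub, norms, lam, M):
    """Exponentially weighted (moving-average) estimates of the subband
    error variances and input powers, with smoothing factor lam in [0, 1)."""
    sig2_e = lam * sig2_e + (1.0 - lam) * e_sub ** 2
    sig2_u = lam * sig2_u + (1.0 - lam) * norms / M
    return sig2_e, sig2_u

def estimate_trP(sig2_e, sig2_u, sigma2_v_sub, M, N):
    """tr(P(k)) estimate from output error variances: the excess of each
    subband error variance over the subband noise floor, normalized by
    the subband input power (clipped at zero for numerical safety)."""
    return max(0.0, (M / N) * np.sum((sig2_e - sigma2_v_sub) / sig2_u))
```

Because the estimate is driven by the measured errors rather than the monotone analytic recursion, it rises again after an abrupt system change, restoring a large step size when needed.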

Simulation Results
In this section, computer simulations of system identification are used to illustrate the performance of the proposed algorithm. In these simulations, the unknown system is the acoustic impulse response of a room truncated to 512 or 128 taps. We assume that the adaptive filter and the unknown system have the same number of taps, i.e., $M = 512$ or $M = 128$. The colored input signals are generated by filtering white Gaussian noise through the coloring system $G_{1}(z)$. The measurement noise is added to the output $\mathbf{u}^{T}(n)\mathbf{w}_{o}$ such that the signal-to-noise ratio (SNR) is 30 dB, where
$$\mathrm{SNR} \triangleq 10 \log_{10} \frac{E[(\mathbf{u}^{T}(n)\mathbf{w}_{o})^{2}]}{\sigma_{v}^{2}}.$$
In addition, the normalized mean-square deviation (NMSD) is defined as
$$\mathrm{NMSD} \triangleq 10 \log_{10} \frac{E[\tilde{\mathbf{w}}^{T}(k)\tilde{\mathbf{w}}(k)]}{\|\mathbf{w}_{o}\|^{2}}.$$
We assume that $\sigma_{v}^{2}$ is known, since it can easily be estimated during silences and online [15-18]. Each subband adaptive filter uses an eight-band filter bank whose analysis filters have length 32, i.e., $N = 8$ and $L = 32$. We set $\mathrm{tr}(\mathbf{P}(0))$ to 1 for initialization of the proposed algorithm. In addition, the proposed algorithm uses $\lambda = 0.995$ when $M = 512$ and $\lambda = 0.95$ when $M = 128$. The simulation results were obtained by ensemble averaging over 30 trials.
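The NMSD plotted in the learning curves can be computed per trial as follows (a trivial sketch; the ensemble averaging over trials is done outside this function):

```python
import numpy as np

def nmsd_db(w_true, w_hat):
    """NMSD = 10*log10(||w_true - w_hat||^2 / ||w_true||^2) in dB."""
    w_true = np.asarray(w_true, dtype=float)
    dev = np.sum((w_true - np.asarray(w_hat, dtype=float)) ** 2)
    return 10.0 * np.log10(dev / np.sum(w_true ** 2))
```

Here 0 dB corresponds to an estimate no better than $\hat{\mathbf{w}} = \mathbf{0}$, and each further -10 dB is a tenfold reduction in squared deviation.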

MSD Estimation of SS-NSAF.
Figures 2, 3, and 4 show the estimated MSD curves (as in (18)) together with the simulation results for $\mu = 1$, $\mu = 0.5$, and $\mu = 0.1$, respectively, for different $P$. As shown, all curves reach the same steady-state value regardless of $P$; however, a smaller $P$ leads to a slower convergence rate. The estimated MSD curves closely match the simulation results.

Performance Comparison. Figures 5(a) and 5(b) compare
the NMSD learning curves of the conventional NSAFs, SM-NLMS [19] (with bound $\sqrt{2\sigma_{v}^{2}}$), VSS-NLMS [20] (with $\mu(0) = 1$ and the remaining parameters set to $10^{-15}$, 1, 30, and 0.9995 or 0.9999 as in [20]), FC-VSS-NSAFs, VSS-SS-NSAFs, and IVSS-SS-NSAFs ($\mu(0) = 1$) for colored input generated by $G_{1}(z)$ when $M = 512$ and $M = 128$, respectively. Figure 6 shows the average number of selected subband filters used for updating in the FC-VSS-NSAFs, VSS-SS-NSAFs, and IVSS-SS-NSAFs when $M = 512$. This simulation abruptly changes the unknown system at the midpoint of the test interval. The FC-VSS-NSAFs perform differently depending on the value of the complexity reduction factor [9]. The IVSS-SS-NSAFs also perform differently depending on $P$, but the IVSS-SS-NSAF with $P = 2$ has lower steady-state errors and lower computational complexity than the FC-VSS-NSAF with complexity reduction factor 0, which has the best performance among them. These simulation results confirm that the proposed algorithm has lower computational complexity than the FC-VSS-NSAF while performing well in terms of misalignment errors. Figures 7(a) and 7(b) show the results for an echo-cancellation application, in which the input is a speech signal sampled at 8 kHz.

Conclusion
In this paper, we proposed the SS-NSAF and variable step-size algorithms to reduce the computational complexity and achieve low steady-state errors. The variable step-size algorithm was derived from the MSD analysis of the SS-NSAF. In addition, to improve the convergence performance, we proposed the IVSS-SS-NSAF using a two-stage concept. The simulation results showed that the proposed algorithm outperforms the FC-VSS-NSAF in terms of misalignment errors.

Figure 2: NMSD learning curves of the experimental results and the estimated MSD values at $\mu = 1$ for colored input generated by $G_{1}(z)$.

Table 1: Computational complexity of various NSAF algorithms and data memory usage.