Robust Normalized Subband Adaptive Filter Algorithm with a Sigmoid-Function-Based Step-Size Scaler and Its Convex Combination Version

In this paper, by inserting the logarithm cost function of the normalized subband adaptive filter algorithm with the step-size scaler (SSS-NSAF) into the sigmoid function structure, the proposed sigmoid-function-based SSS-NSAF algorithm yields improved robustness against impulsive interferences and a lower steady-state error. In order to identify sparse impulse responses further, a series of sparsity-aware algorithms, including the sigmoid L0 norm constraint SSS-NSAF (SL0-SSS-NSAF), sigmoid step-size scaler improved proportionate NSAF (S-SSS-IPNSAF), and sigmoid L0 norm constraint step-size scaler improved proportionate NSAF (SL0-SSS-IPNSAF), is derived by inserting the logarithm cost function into the sigmoid function structure, together with the L0 norm of the weight coefficient vector, to act as a new cost function. Because the proposed SL0-SSS-IPNSAF algorithm uses a fixed step size, it must trade off between a fast convergence rate and a low steady-state error. Thus, the convex combination version of the SL0-SSS-IPNSAF (CSL0-SSS-IPNSAF) algorithm is proposed. Simulations in the acoustic echo cancellation (AEC) scenario have justified the improved performance of these proposed algorithms in impulsive interference environments and even in the impulsive-interference-free condition.


Introduction
Adaptive filtering is famous for its numerous practical applications, such as system identification, acoustic echo cancellation, channel equalization, and signal denoising [1][2][3][4][5]. Due to their easy implementation and low computational complexity, the least mean square (LMS) algorithm and the normalized least mean square (NLMS) algorithm have become prominent. However, the main disadvantage of these two algorithms is their slower convergence speed when the input signal is colored. To settle this issue, the subband adaptive filter (SAF) structure has been presented.
This is because the colored input signal can be decomposed into multiple mutually independent white subband signals by the analysis filter bank [6]. Based on this structure and by solving a multiple-constraint optimization problem, the normalized SAF (NSAF) algorithm has been developed to speed up the convergence rate of the NLMS algorithm [7].
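As a concrete point of reference for the algorithms above, a minimal NLMS iteration can be sketched as follows; this is an illustrative sketch, with the step size mu and the regularization constant delta chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

def nlms_update(w, u, d, mu=0.5, delta=1e-6):
    """One NLMS iteration: an LMS step normalized by the input power,
    which makes the adaptation speed insensitive to the input scale."""
    e = d - u @ w                         # a-priori error
    w = w + mu * e * u / (u @ u + delta)  # normalized gradient step
    return w, e
```

For white input this update converges quickly; for colored input its convergence slows markedly, which is precisely what motivates the subband structure discussed next.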
When identifying a sparse system, the traditional NSAF algorithm applies the same step size to all components of the weight coefficient vector, regardless of the characteristics of the sparse system. Thus, its convergence rate is dramatically degraded [8,9]. To improve the convergence behavior of the NSAF algorithm for sparse systems, a family of proportionate NSAF algorithms [10,11], such as the proportionate NSAF (PNSAF), μ-law proportionate NSAF (MPNSAF), and improved proportionate NSAF (IPNSAF), has been proposed, wherein each tap of the filter is updated independently by allocating different step sizes in proportion to the magnitude of the estimated filter coefficient.
However, all the above-mentioned algorithms, including the NLMS algorithm, the NSAF algorithm, and its improved proportionate versions, have poor robustness against impulsive interferences. The classical sign subband adaptive filter (SSAF) algorithm, derived from an L1-norm optimization criterion, uses only the sign information of the subband error signal, thus obtaining a superb capability of suppressing impulsive interference [12], while its weaknesses are a relatively higher steady-state error and a slower convergence rate [13]. For the purpose of decreasing the steady-state error and speeding up the convergence rate of the SSAF algorithm, the variable regularization parameter SSAF (VRP-SSAF) [12], some variable step-size SSAF algorithms [14,15], and affine projection SSAF algorithms [16,17] have been proposed. Nowadays, many researchers have demonstrated that making full use of the saturation property of error nonlinearities can provide splendid robustness against impulsive interferences; examples include the normalized logarithmic SAF (NLSAF) [18], arctangent-based NSAF algorithms (Arc-NSAFs) [19], the maximum correntropy criterion (MCC) [20], the adaptive algorithms based on the step-size scaler (SSS) [21,22] and on the sigmoid function [23,24], and the M-estimate-based subband adaptive filter algorithm [25].
In this paper, by inserting the logarithm cost function of the normalized subband adaptive filter algorithm with the step-size scaler (SSS-NSAF) [22] into the sigmoid function structure, the proposed sigmoid-function-based SSS-NSAF (S-SSS-NSAF) algorithm yields improved robustness against impulsive interferences and a lower steady-state error. In order to identify sparse impulse responses further, a series of sparsity-aware algorithms, including the sigmoid L0 norm constraint SSS-NSAF (SL0-SSS-NSAF), sigmoid step-size scaler improved proportionate NSAF (S-SSS-IPNSAF), and sigmoid L0 norm constraint step-size scaler improved proportionate NSAF (SL0-SSS-IPNSAF), is derived by inserting the logarithm cost function into the sigmoid function structure, together with the L0 norm of the weight coefficient vector, to act as a new cost function. Because the proposed SL0-SSS-IPNSAF algorithm uses a fixed step size, it must trade off between a fast convergence rate and a low steady-state error.
Thus, the convex combination version of the SL0-SSS-IPNSAF (CSL0-SSS-IPNSAF) algorithm is proposed. Simulations in the AEC scenario with impulsive interference have justified the improved performance of these proposed algorithms.

Review of the SSS-NSAF Algorithms
Suppose w_o ∈ R^{L×1} is the weight coefficient vector of the unknown system in a system identification model, u(n) = [u(n), u(n − 1), ..., u(n − L + 1)]^T stands for the input signal, and L denotes the filter length, where T represents vector or matrix transposition. The desired output signal d(n) is usually modeled as d(n) = u^T(n)w_o + ϑ(n), where ϑ(n) is additive noise which contains Gaussian measurement noise v(n) plus impulsive interferences η(n), i.e., ϑ(n) = v(n) + η(n). Figure 1 shows the multiband structure of the NSAF algorithm. The input signal u(n) and desired output signal d(n) are, respectively, separated into N subband signals u_i(n) and d_i(n) by the analysis filter bank H_0(z), H_1(z), ..., H_{N−1}(z). The subband output signals y_i(n) are obtained by filtering the subband input signals u_i(n) through an adaptive filter w(k) = [w_0(k), w_1(k), ..., w_{L−1}(k)]^T, which is an estimate of the unknown w_o. Then, the subband signals d_i(n) and y_i(n) are decimated at a lower sampling rate to generate the signals d_{i,D}(k) and y_{i,D}(k), respectively. Here, n and k are used to index the original sequences and the decimated sequences. The decimated subband output signal is expressed as y_{i,D}(k) = u_i^T(k)w(k), and thus the ith decimated subband error signal is computed as e_{i,D}(k) = d_{i,D}(k) − y_{i,D}(k). In [22], two types of cost functions, i.e., a tanh-type cost function and an ln-type cost function, which use the square value of the error signal normalized with respect to the input signal, are introduced into the subband structure to generate two novel SSS-NSAF algorithms. However, the tanh-type cost function needs to use the exponential function, which contains the sum of the subband output errors normalized with respect to the subband input vectors. As a result, it brings about a heavy calculation burden. In contrast, the ln-type cost function reduces the computational complexity to a large extent.
Therefore, due to its low computational cost, the proposed algorithm in this paper is primarily based on the simplified ln-type version of the step-size scaler. For the convenience of the discussion in the next section, the SSS-NSAF algorithm based on the tanh-type cost function is no longer presented. The ln-type cost function of the SSS-NSAF algorithm is given as follows: where α > 0 is a constant parameter which controls the sharpness of the cost function's shape. By using the gradient descent method, the SSS-NSAF algorithm is derived by minimizing the ln-type cost function with respect to the normalized subband error signal, and the update equation of its weight coefficient vector can be derived easily as follows: where μ is the step size and A(m(k)) plays the role of the step-size scaler, which helps to shrink the step size μ whenever impulsive noise happens and thereby eliminates the unfavorable effect of impulsive interferences on the system update.
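The equations of [22] are not reproduced above, so the following is only an illustrative sketch of one SSS-NSAF-style iteration. The scaler form A(m(k)) = 1/(1 + α·m(k)), with m(k) the sum of squared normalized subband errors, is an assumed form chosen to match the described behavior (it shrinks toward zero when the subband errors blow up), not the exact expression from the paper:

```python
import numpy as np

def sss_nsaf_update(w, U, dD, mu=0.1, alpha=10.0, delta=1e-6):
    """One SSS-NSAF-style iteration (illustrative, assumed ln-type scaler).

    U  : N x L matrix whose rows are the decimated subband input vectors u_i(k)
    dD : length-N vector of decimated subband desired signals d_{i,D}(k)
    """
    norms = np.sum(U * U, axis=1) + delta   # ||u_i(k)||^2 (regularized)
    e = dD - U @ w                          # subband errors e_{i,D}(k)
    m = np.sum(e * e / norms)               # sum of squared normalized errors
    A = 1.0 / (1.0 + alpha * m)             # assumed step-size scaler A(m(k))
    w = w + mu * A * (U.T @ (e / norms))    # scaled NSAF update
    return w, A
```

When the subband errors are small, A stays near 1 and the update behaves like a plain NSAF step; an impulsive sample inflates m(k) and collapses A toward zero, freezing the update for that instant.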

3.1. Derivation of the Proposed SL0-SSS-NSAF Algorithm.
By inserting the ln-type cost function of the SSS-NSAF algorithm into the sigmoid function structure, a new sigmoid function is defined as follows: where β > 0 determines the sharpness of the sigmoid function. The aim of embedding the cost function of the SSS-NSAF algorithm into the sigmoid structure is to better eliminate the adverse influence of impulsive interferences, especially when the probability of impulsive interferences is large.
Combining the above sigmoid function and exploiting the L0 norm constraint of the estimated weight vector, a new robust cost function is introduced as follows: where J_po(k) stands for the cost function of the proposed algorithm, ‖·‖_0 denotes the L0 norm, and ρ is a small positive value that controls the weight between the sigmoid function and the L0 norm constraint term. Taking the derivative of (6) with respect to the estimated weight vector w(k), we get the following: By employing the gradient descent rule, the update equation of the coefficient vector of the sigmoid L0 norm constraint SSS-NSAF (SL0-SSS-NSAF) algorithm is obtained as follows: where μ is the step size. Considering that L0 norm minimization is an NP-hard problem, the following continuous differentiable function is usually used to approximate ‖w(k)‖_0 [13,26,27], where ϕ determines the attraction strength with regard to the small-magnitude components of w(k). Therefore, for m = 0, 1, ..., L − 1, the mth component of the derivative of ‖w(k)‖_0 is easily calculated as follows: Discussion 1. If the L0 norm constraint of the estimated weight vector is not considered, i.e., ρ = 0, the proposed SL0-SSS-NSAF algorithm becomes the sigmoid SSS-NSAF (S-SSS-NSAF) algorithm. Therefore, its coefficient vector update equation and cost function are expressed as follows: Combining the cost function formula (1) of the original SSS-NSAF algorithm and the above S-SSS-NSAF updating formula (11), it is easy to find that J_{SSS-NSAF}(k) ⟶ 0 for small subband error signals e_{i,D}(k); then S(k) ≈ 0.5 and S(k)[1 − S(k)] ≈ 0.25, making the performance of the S-SSS-NSAF algorithm similar to that of the original SSS-NSAF algorithm.
Whenever impulsive interferences occur, however, the subband error signal e_{i,D}(k) becomes very large, as does the value of J_{SSS-NSAF}(k); thus S(k) approaches one and S(k)[1 − S(k)] approaches zero, which effectively halts the iteration of the SL0-SSS-NSAF algorithm.
This demonstrates that the proposed sigmoid-function-based algorithms not only retain the outstanding performance of the original SSS-NSAF algorithm in the impulsive-interference-free condition but also possess strong robustness against impulsive noise.
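The smooth L0 surrogate referred to above is the standard exponential approximation used in [26]-style sparsity-aware filters. A sketch, with ϕ an illustrative value:

```python
import numpy as np

def l0_approx(w, phi=5.0):
    """Smooth surrogate for ||w||_0: sum over taps of (1 - exp(-phi*|w_m|)).
    Each term is ~0 for a zero tap and ~1 for a large-magnitude tap."""
    return np.sum(1.0 - np.exp(-phi * np.abs(w)))

def l0_approx_grad(w, phi=5.0):
    """Elementwise derivative: phi * sign(w_m) * exp(-phi*|w_m|).
    Large for small-magnitude taps (the zero attractor), ~0 for large taps,
    and exactly 0 at w_m = 0 since sign(0) = 0."""
    return phi * np.sign(w) * np.exp(-phi * np.abs(w))
```

This is what gives the SL0 variants their zero-attracting behavior: small taps are pulled toward zero while significant taps are left essentially untouched.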
In fact, the robustness of the SSS-NSAF algorithm against impulsive noise primarily relies on the step-size scaler. When impulsive noise appears, the step-size scaler instantly scales down the step size to restrain the adverse effect of the contaminated subband error signal. Contrasting the update equation (3) of the weight coefficient vector of the SSS-NSAF algorithm with the S-SSS-NSAF update equation (11), A(m(k)) and S(k)[1 − S(k)]A(m(k)) are the step-size scalers of the original SSS-NSAF and the proposed S-SSS-NSAF algorithms, respectively. As a matter of fact, the suppressing effect of the proposed algorithms on impulsive noise is stronger than that of the original SSS-NSAF algorithm, which can be observed from their cost functions. Figure 2 presents the stochastic cost functions of the proposed S-SSS-NSAF with β = 0.5 and the original SSS-NSAF algorithm. Obviously, the stochastic cost function of the proposed S-SSS-NSAF algorithm is less steep than that of the original SSS-NSAF algorithm for both large and small perturbations of the normalized subband error signal, which illustrates that the proposed S-SSS-NSAF algorithm can still obtain improved performance even in the impulsive-interference-free environment when compared with the original SSS-NSAF algorithm.
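The scaler comparison can be made concrete with a small numerical sketch. Since equations (1) and (3) are not reproduced above, the cost J(k) = ln(1 + α·m(k)) and scaler A(m(k)) = 1/(1 + α·m(k)) below are assumed illustrative forms, and S(k) is taken as the logistic function of the cost:

```python
import numpy as np

def sss_scaler(m, alpha=10.0):
    """Assumed ln-type step-size scaler A(m(k)) of the SSS-NSAF algorithm."""
    return 1.0 / (1.0 + alpha * m)

def s_sss_scaler(m, alpha=10.0, beta=0.5):
    """Effective scaler S(k)[1 - S(k)]A(m(k)) of the S-SSS-NSAF algorithm,
    with the sigmoid S(k) applied to an assumed ln-type cost J(k)."""
    J = np.log(1.0 + alpha * m)           # assumed cost; grows slowly with m
    S = 1.0 / (1.0 + np.exp(-beta * J))   # S -> 0.5 as J -> 0, S -> 1 as J grows
    return S * (1.0 - S) * sss_scaler(m, alpha)
```

Near m = 0 the sigmoid-wrapped scaler is ≈ 0.25·A(m(k)), a constant factor that can be absorbed into μ, while for large m it is driven toward zero by both A(m(k)) and S(k)[1 − S(k)], i.e., a strictly stronger suppression of impulsive samples.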

3.2. The Proportionate Version of the SL0-SSS-NSAF Algorithm. Inspired by the work in [10], adaptive filtering algorithms containing zero-attracting terms and a proportionate control matrix have gained improved performance in terms of convergence rate and steady-state error. Therefore, to obtain the fast convergence rate of the proportionate control matrix and the low steady-state error of the zero-attracting term simultaneously, a gain control matrix is introduced into the SL0-SSS-NSAF algorithm to further accelerate its convergence rate. As a result, the proportionate version of the SL0-SSS-NSAF algorithm (SL0-SSS-IPNSAF) is obtained in an analogous way, where G(k), named the proportionate matrix, is a diagonal matrix with diagonal elements g_0(k), g_1(k), ..., g_{L−1}(k). So far, different methods of choosing G(k) have been put forward [10]. Among them, due to its robustness to different sparseness degrees of the unknown impulse response, the following strategy is the most widely used procedure to compute the diagonal elements of G(k): where τ ∈ [−1, 1], w_l(k) is the lth element of w(k), and ζ is a small positive constant to avoid division by zero.

Discussion 2. From the update equation (13) of the SL0-SSS-IPNSAF algorithm, some related algorithms can be derived:

(1) Letting the weight ρ, which controls the balance between the sigmoid function and the L0 norm constraint term, equal zero, the SL0-SSS-IPNSAF algorithm becomes the S-SSS-IPNSAF algorithm
(2) When the proportionate matrix G(k) becomes the identity matrix, the SL0-SSS-IPNSAF algorithm reduces to the SL0-SSS-NSAF algorithm
(3) If ρ = 0 and G(k) = I, the SL0-SSS-IPNSAF algorithm turns into the S-SSS-NSAF algorithm
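In the proportionate-filtering literature, the widely used gain rule with τ ∈ [−1, 1] and a small ζ that the text describes is the IPNLMS-style rule; the sketch below is written under that assumption and is not taken verbatim from the paper:

```python
import numpy as np

def proportionate_gains(w, tau=0.0, zeta=1e-6):
    """IPNLMS-style diagonal gains g_l(k) of the proportionate matrix G(k).

    tau = -1 recovers uniform (NLMS-like) gains 1/L; tau -> 1 gives fully
    proportionate gains that favor large-magnitude taps. zeta avoids
    division by zero when w is the all-zero vector."""
    L = len(w)
    return (1.0 - tau) / (2.0 * L) \
        + (1.0 + tau) * np.abs(w) / (2.0 * np.sum(np.abs(w)) + zeta)
```

Note that the gains sum to (approximately) one for any τ, so the proportionate matrix redistributes, rather than inflates, the overall adaptation energy across the taps.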

Adaptive Convex Combination of Two SL0-SSS-IPNSAF Algorithms (CSL0-SSS-IPNSAF)
Similar to all fixed-step-size adaptive filter algorithms, the proposed SL0-SSS-IPNSAF algorithm with a large step size has a fast convergence rate but a high steady-state error. Therefore, there always exist conflicting demands of a fast convergence rate and a low steady-state error in the proposed SL0-SSS-IPNSAF. In order to address this issue, the CSL0-SSS-IPNSAF algorithm is proposed by combining two SL0-SSS-IPNSAF algorithms with different step sizes, and the diagram of the adaptive combination scheme for the ith subband is presented in Figure 3, where w_1(k) stands for the weight coefficient vector of the SL0-SSS-IPNSAF algorithm with the large step size μ_1, w_2(k) corresponds to the small step size μ_2, and w_j(k) = [w_{0,j}(k), w_{1,j}(k), ..., w_{L−1,j}(k)]^T, j = 1, 2. The coefficient vector w(k) of the overall filter is generated by using a variable mixing parameter λ(k), where 0 ≤ λ(k) ≤ 1. Based on the convex combination strategy, the ith subband output signal y_{i,D}(k) of the overall filter can be formulated as follows: where y_{i,D,j}(k), j = 1, 2, are the decimated subband outputs of the component filters and y_{i,D,j}(k) = u_i^T(k)w_j(k). Similarly, the overall subband error signal can be expressed as From (15), we know that the performance of the overall filter largely relies on the choice of λ(k). Thus, an appropriate method to recursively compute λ(k) is important. To constrain the value of λ(k) to [0, 1], a sigmoidal function which depends on an auxiliary variable a(k) is applied. According to the gradient descent method, the auxiliary variable a(k) can be recursively updated by minimizing the power of the system output error, which is equal to the sum of squared subband errors of the overall filter, where μ_a is the step size for adapting a(k), and ε is introduced to prevent the update process of a(k) from stalling whenever λ(k) is equal to 0 or 1.
Besides, to maintain a minimum level of adaptation, a(k) is suggested to lie in [−a+, a+] [28].
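Since the recursion for a(k) is not reproduced above, the following sketch derives the gradient step directly from minimizing the sum of squared overall subband errors; λ(k) = sigmoid(a(k)), and the values of μ_a, ε, and a_max (the a+ bound) are illustrative assumptions:

```python
import numpy as np

def combination_step(w1, w2, a, U, dD, mu_a=1.0, eps=1e-2, a_max=4.0):
    """One update of the auxiliary variable a(k) and the overall filter.

    lambda(k) = sigmoid(a(k)) mixes the large-step filter w1 and the
    small-step filter w2. a(k) takes a gradient-descent step on the sum of
    squared overall subband errors; eps keeps the update alive when
    lambda(k) saturates at 0 or 1, and a(k) is clipped to [-a_max, a_max]."""
    lam = 1.0 / (1.0 + np.exp(-a))
    e1 = dD - U @ w1                     # subband errors of the fast filter
    e2 = dD - U @ w2                     # subband errors of the slow filter
    e = lam * e1 + (1.0 - lam) * e2      # overall subband errors
    # negative gradient of sum(e^2) w.r.t. a, with the eps floor on lam*(1-lam)
    a = a + mu_a * (lam * (1.0 - lam) + eps) * np.sum(e * (e2 - e1))
    a = np.clip(a, -a_max, a_max)
    lam = 1.0 / (1.0 + np.exp(-a))
    return lam * w1 + (1.0 - lam) * w2, a
```

When the fast filter is the more accurate one, e1 is smaller than e2, the gradient term is positive, and a(k) grows so that λ(k) → 1; near steady state the sign reverses and the combination hands over to the small-step filter.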

Mathematical Problems in Engineering
Actually, the component filter with the small step size may reduce the convergence rate of the overall filter in the initial phase of adaptation. The following weight transfer scheme is utilized to avoid this.

Simulation Results
In order to measure the performance of the proposed S-SSS-NSAF, SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms, simulations are presented in the system identification and acoustic echo cancellation contexts with impulsive interferences. A cosine-modulated filter bank is utilized with the number of subbands N = 4. The unknown impulse responses w_o, each of length L = 512 taps, are illustrated in Figure 4. The adaptive filter is assumed to have the same length as the unknown vector. For examining the robustness of these proposed algorithms against impulsive interferences, the impulsive interference η(n) is added to the output of the identified unknown system and is modeled as η(n) = B(n)g(n), where B(n) denotes a Bernoulli process with probability mass function p(B(n) = 1) = P_r and p(B(n) = 0) = 1 − P_r (P_r = 0.1 gives the occurrence probability of impulsive interferences), and g(n) is a zero-mean white Gaussian noise with variance σ_g². An independent white Gaussian measurement noise v(n) is added to the unknown system output at a 30 dB signal-to-noise ratio (SNR). In the subsequent Sections 5.1 and 5.2, the input signal is an AR(1) input and the unknown impulse responses are multiplied by −1 at the middle of the iterations to investigate the tracking capability of all algorithms, while in Section 5.3, a speech input signal is used.
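The Bernoulli-Gaussian interference model above can be sketched as follows; the variance value sigma2_g is an assumed placeholder, since the variance stated in the original text did not survive extraction:

```python
import numpy as np

def impulsive_noise(n_samples, pr=0.1, sigma2_g=1000.0, rng=None):
    """Bernoulli-Gaussian impulsive interference eta(n) = B(n) * g(n).

    B(n) ~ Bernoulli(pr) switches the impulses on, and g(n) is zero-mean
    white Gaussian noise with (assumed, large) variance sigma2_g."""
    rng = np.random.default_rng(rng)
    B = rng.random(n_samples) < pr           # impulse occurrence indicator
    g = rng.normal(0.0, np.sqrt(sigma2_g), n_samples)
    return B * g
```

On average a fraction pr of the samples carry a large-variance impulse and the rest are exactly zero, which is what makes this model a standard stress test for robust adaptive filters.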
The performance of all algorithms is measured by the normalized mean square deviation (NMSD), defined as 20 log10(‖w_o − w(k)‖_2/‖w_o‖_2). All learning curves are obtained by averaging over 10 independent trials (except for the speech input).
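The NMSD metric just defined amounts to a one-liner:

```python
import numpy as np

def nmsd_db(w_o, w):
    """Normalized mean square deviation in dB:
    20*log10(||w_o - w||_2 / ||w_o||_2)."""
    return 20.0 * np.log10(np.linalg.norm(w_o - w) / np.linalg.norm(w_o))
```

An estimate equal to the zero vector scores 0 dB, and every 20 dB drop corresponds to a tenfold reduction of the coefficient-error norm relative to ‖w_o‖.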
Since the proposed S-SSS-NSAF algorithm does not belong to the sparsity-aware family, the identified unknown system here is the dispersive impulse response illustrated in Figure 4(a). Figure 5 compares the performance of the proposed S-SSS-NSAF algorithm with that of the SSAF and SSS-NSAF algorithms. Compared with the conventional SSS-NSAF algorithms, the proposed S-SSS-NSAF algorithm obtains a lower steady-state error with almost the same initial convergence rate, although its tracking capability when the unknown system changes suddenly is not very good. The performance comparison of the proposed SL0-SSS-NSAF and SL0-SSS-IPNSAF algorithms with the SSAF and SSS-NSAF algorithms is shown in Figure 6. The impulse response used is sparse, as given in Figure 4(b). The SSS-NSAF-2 algorithm has almost the same performance as the SSS-NSAF-1 algorithm before the unknown system is changed abruptly, while the SSS-NSAF-2 algorithm has better tracking capability than the SSS-NSAF-1 algorithm. The proposed SL0-SSS-NSAF algorithm obtains a lower steady-state error and stronger robustness against impulsive interference than the SSS-NSAF algorithms with the same convergence rate. It is noted that, as the proportionate version of the proposed SL0-SSS-NSAF algorithm, the proposed SL0-SSS-IPNSAF algorithm achieves significantly improved convergence behavior and better tracking capability than the SL0-SSS-NSAF algorithm with the same steady-state error.
This demonstrates that the role of the proportionate scheme is to speed up the convergence rate of the original algorithm. Figure 7 compares the performance of the proposed S-SSS-IPNSAF and SL0-SSS-IPNSAF algorithms with that of the SSAF and SSS-NSAF algorithms for the sparse impulse response with P_r = 0.1. The behavior of the SSS-NSAF-1 and SSS-NSAF-2 algorithms is similar to that in Figure 6. As can be observed from Figure 7, the proposed S-SSS-IPNSAF algorithm provides a faster convergence rate, lower steady-state error, and better tracking ability than the SSS-NSAF algorithms. By adding the L0-norm constraint term to the S-SSS-IPNSAF algorithm, the proposed SL0-SSS-IPNSAF algorithm obtains the same convergence rate but a lower steady-state error and better tracking capability. It can be concluded that the L0-norm constraint term offers a lower steady-state error in comparison to the original algorithm.
As can be seen from Figure 8, since the step-size parameter of the proposed SL0-SSS-IPNSAF algorithm is fixed, the algorithm with a large step size μ = 1.8 has a fast convergence rate but a high steady-state error, whereas the one with a small step size μ = 0.2 obtains a lower steady-state error but a slower convergence rate. By utilizing the convex combination scheme, the proposed CSL0-SSS-IPNSAF algorithm possesses the fast convergence rate of the large-step-size algorithm and the low steady-state error of the small-step-size algorithm simultaneously. Figures 9 and 10 illustrate the NMSD learning curves of the standard SSAF, SSS-NSAF-1, SSS-NSAF-2, and the proposed SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms with P_r = 0. Obviously, even in an impulsive-interference-free environment, the proposed algorithms with a sigmoid-function-based step-size scaler gain improved performance over the original SSS-NSAF algorithms in terms of convergence rate, steady-state error, and tracking capability. With the same steady-state error, the SL0-SSS-NSAF algorithm converges more slowly than the SL0-SSS-IPNSAF algorithm in the initial phase of the iteration but obtains a faster convergence rate near the steady state. In Figure 10, the proposed S-SSS-IPNSAF algorithm gains a faster convergence rate than the standard SSS-NSAF algorithms with the same steady-state error. As in the impulsive interference environment, the proposed SL0-SSS-IPNSAF algorithm obtains the same initial convergence rate but a lower steady-state error and better tracking capability than the S-SSS-IPNSAF algorithm.

5.3. AEC Scenario.
The comparison of the NMSD learning curves of the standard SSAF, SSS-NSAF-1, SSS-NSAF-2, and the proposed SL0-SSS-NSAF, S-SSS-IPNSAF, and SL0-SSS-IPNSAF algorithms in AEC is presented in Figures 11 and 12. The speech input signal used is given in Figure 13. As shown, the proposed SL0-SSS-NSAF algorithm does not perform very well, while the proposed S-SSS-IPNSAF algorithm achieves a significantly faster convergence rate, lower steady-state NMSD, and better tracking ability than the original SSS-NSAF algorithm. Furthermore, due to the combination of the benefits of the proportionate scheme and the L0 norm constraint, the SL0-SSS-IPNSAF algorithm performs even better than the S-SSS-IPNSAF algorithm in the AEC scenario.
Figure 11: NMSD learning curves comparison of the SSAF, SSS-NSAF-1, SSS-NSAF-2, the proposed SL0-SSS-NSAF, and SL0-SSS-IPNSAF algorithms for speech input with P_r = 0.1 (the parameter settings of all algorithms are the same as those of Figure 6).
As observed from Figure 14, by utilizing the convex combination scheme and the weight transfer strategy, the proposed CSL0-SSS-IPNSAF algorithm inherits the fast convergence rate of the large-step-size SL0-SSS-IPNSAF algorithm and the low steady-state error of the small-step-size SL0-SSS-IPNSAF algorithm simultaneously.

Conclusion
In order to improve the performance of the SSS-NSAF algorithm when identifying a sparse system, a series of sparsity-aware algorithms, including the SL0-SSS-NSAF, S-SSS-NSAF, and SL0-SSS-IPNSAF algorithms, is proposed by inserting the logarithm cost function of the SSS-NSAF algorithm into the sigmoid function structure. Besides, the convex combination version of the SL0-SSS-IPNSAF is proposed so that the SL0-SSS-IPNSAF algorithm obtains a fast convergence rate and a low steady-state error simultaneously. Simulations in the AEC scenario with impulsive interference have justified the improved performance of these proposed algorithms.
Although the proposed sparsity-aware algorithms in this paper essentially belong to the linear adaptive filtering scheme, they can also be extended to active noise control in linear and/or nonlinear systems [30,31] and to other fields [32] in the future.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.