Sparse Multipath Channel Estimation Using Norm Combination Constrained Set-Membership NLMS Algorithms

A norm combination penalized set-membership NLMS algorithm with independent ℓ0- and ℓ1-norm constraints, denoted as the ℓ0- and ℓ1-independently-constrained set-membership (SM) NLMS (L0L1SM-NLMS) algorithm, is presented for sparse adaptive multipath channel estimation. The L0L1SM-NLMS algorithm achieves fast convergence and a small estimation error by independently penalizing the channel coefficients: the large- and small-magnitude groups of coefficients are controlled by ℓ0- and ℓ1-norm constraints, respectively. Additionally, a further improved algorithm, denoted as the reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm, is presented by integrating a reweighting factor into the L0L1SM-NLMS algorithm to properly adjust its zero-attracting capability. The RL0L1SM-NLMS algorithm provides better estimation behavior than the L0L1SM-NLMS algorithm when estimating sparse channels. The estimation performance of the L0L1SM-NLMS and RL0L1SM-NLMS algorithms is evaluated on sparse channel estimation tasks. The simulation results show that our L0L1SM- and RL0L1SM-NLMS algorithms are superior to the traditional LMS, NLMS, SM-NLMS, ZA-LMS, RZA-LMS, and ZA-, RZA-, ZASM-, and RZASM-NLMS algorithms in terms of convergence speed and steady-state performance.


Introduction
Broadband communication technology has attracted much attention in modern wireless communications, and signal transmission over such broadband wireless channels usually encounters frequency-selective fading [1-5]. Moreover, this frequency-selective behavior often gives such broadband channels a sparse structure [1, 6]. In broadband wireless multipath communication channels, most of the unknown channels are sparse [1-3], in that only a few coefficients are large in magnitude. In a sparse channel, only a few channel taps are dominant, meaning that many channel taps are zero or close to zero. On the other hand, signals passing through such frequency-selective sparse channels and noisy environments may suffer reduced communication quality. Therefore, precise channel estimation methods are desired to recover the unknown channel, which is often modeled as a finite impulse response (FIR) filter [1-3]. Adaptive filtering techniques are regarded as useful channel estimation methods owing to their fast convergence speed and good estimation behavior [7-9]. Accordingly, various adaptive channel estimation algorithms have been presented to guarantee stable propagation and effective signal transmission [8-13], such as the LMS and SM-NLMS algorithms [10-13]. Although these adaptive filtering algorithms achieve robust estimation performance, they do not handle sparse channel estimation well.
To make use of the prior sparseness of multipath broadband channels, several sparse LMS algorithms have been reported by modifying the cost function of the conventional LMS algorithm [14-18], inspired by compressive sensing (CS) [19, 20]. In [14], a sparse LMS algorithm was reported by adding an ℓ1-norm penalty to the cost function, resulting in the zero-attraction (ZA) LMS (ZA-LMS) algorithm. The ZA-LMS provides good behavior for sparse adaptive channel estimation by exerting a zero attraction on all channel taps; however, its estimation behavior may be degraded by this uniform penalty. A reweighted ZA-LMS (RZA-LMS) algorithm was then presented by using a log-sum function as the constraint on the channel coefficient vector [14]; the RZA-LMS achieves improved channel estimation behavior in terms of convergence and estimation misalignment. Subsequently, the zero-attracting technique has been realized with an ℓp-norm [21], a smooth approximation of the ℓ0-norm [15], and nonuniform constraints [18] to further exploit the sparse properties of broadband channels. However, the estimation behavior of these methods may still be degraded, since the LMS is sensitive to the scaling of the input signal. The zero-attracting methods were therefore extended to the NLMS [22], LMF [23], LMS/F [24-27], leaky LMS [28], and affine projection algorithms [29-33]. However, some of these are high in complexity, and others need an extra balance parameter to adjust the combination of the LMF and LMS algorithms.
Recently, zero-attraction methods and set-membership filtering theory have been combined to exploit the sparseness of the channel and reduce computational complexity [10, 34, 35]. In [35], the ZA and RZA methods were applied to the SM-NLMS to develop the ZASM- and RZASM-NLMS algorithms. As a result, the RZASM- and ZASM-NLMS algorithms provide better channel estimation behavior than the NLMS and its variants. However, these two sparse SM-NLMS adaptive filtering algorithms cannot effectively adjust the zero attraction according to the channel coefficients in real time.
We propose a norm combination penalized SM-NLMS algorithm with independent ℓ0- and ℓ1-norm constraints, realized by employing a constraint on the channel taps according to the values of the channel coefficients; it is named the ℓ0- and ℓ1-independently-constrained set-membership NLMS (L0L1SM-NLMS) algorithm. Our L0L1SM-NLMS is devised by integrating the ℓ0-norm and ℓ1-norm into the SM-NLMS cost function to construct a zero-attraction term which separates the channel taps into small and large groups. The two groups are then attracted to zero based on ℓ0-norm and ℓ1-norm penalties, respectively. The proposed L0L1SM-NLMS algorithm is presented in detail. Also, a reweighting method is introduced into the zero-attraction term to enhance the robustness of the L0L1SM-NLMS algorithm, resulting in a reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm. We evaluate the developed L0L1SM-NLMS and RL0L1SM-NLMS algorithms on designated sparse channels. The results obtained by estimating sparse channels illustrate that our L0L1SM- and RL0L1SM-NLMS algorithms outperform the LMS, NLMS, ZA-LMS, RZA-LMS, and ZA-, RZA-, SM-, ZASM-, and RZASM-NLMS algorithms in terms of convergence and estimation misalignment.
The rest of the paper is organized as follows. Section 2 reviews the conventional SM-NLMS algorithm and SM filtering theory, and discusses the previously proposed ZASM-NLMS algorithm. In Section 3, our developed L0L1SM- and RL0L1SM-NLMS algorithms are derived thoroughly. In Section 4, the channel estimation behavior of the L0L1SM-NLMS and RL0L1SM-NLMS algorithms is investigated and discussed in detail. The last section concludes the paper.

Conventional SM-NLMS Algorithm
2.1. SM Filtering Theory. A training signal vector $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^T$ is considered in the discussion of the SM-NLMS. An AWGN signal $v(n)$ and an expected signal $d(n)$ are used to describe the SM filtering (SMF) theory and a typical adaptive channel estimation (ACE) system. $\mathbf{x}(n)$ is conveyed through an unknown FIR wireless communication channel $\mathbf{w}$, and the multipath fading channel's output is $y(n) = \mathbf{x}^T(n)\mathbf{w}$. At the receiver side, the expected signal $d(n)$ is this channel output contaminated by $v(n)$. The ACE aims to minimize the estimation error $e(n)$, which denotes the difference between $d(n)$ and the adaptive filter output. Thus, we have $e(n) = d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n)$.
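As a concrete sketch of this system model, the following Python snippet generates one input/desired-output pair and the corresponding estimation error. The channel length N = 16, the tap values, and the noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16                      # assumed channel length for illustration
w = np.zeros(N)             # sparse FIR channel w
w[3] = 1.0                  # a single dominant tap

w_hat = np.zeros(N)         # current channel estimate w_hat(n)
x = rng.normal(size=N)      # regressor x(n) holding the N latest input samples
v = 0.1 * rng.normal()      # AWGN sample v(n)

d = x @ w + v               # expected signal d(n) = x^T(n) w + v(n)
e = d - x @ w_hat           # estimation error e(n) = d(n) - x^T(n) w_hat(n)
```

With the estimate initialized to zero, the error simply equals the desired signal; the adaptive algorithms below drive it toward the noise floor.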
Typical adaptive filter (AF) algorithms estimate the unknown FIR channel $\mathbf{w}$ by minimizing an error function of the estimation error $e(n)$. For instance, the LMS algorithm uses the second-order error $e^2(n)$, while the NLMS algorithm normalizes by the power of $\mathbf{x}(n)$ to improve on the LMS. In SMF theory, a prescribed bound is imposed on $e(n)$ [10-13]. Assume that a model space $S$ is comprised of the input-vector-desired-output pairs (IVDOPs) $(d(n), \mathbf{x}(n))$ of interest. SMF bounds the estimation error by a parameter $\gamma$ for all the data in $S$ [10-13]:

$|e(n)|^2 = |d(n) - \mathbf{x}^T(n)\mathbf{w}|^2 \le \gamma^2, \quad \forall\, (d(n), \mathbf{x}(n)) \in S.$ (1)

For arbitrary $(d(n), \mathbf{x}(n)) \in S$, the set of feasible vectors $\mathbf{w}$ is obtained from [10-13]

$\Theta = \bigcap_{(d,\,\mathbf{x}) \in S} \{\mathbf{w} \in \mathbb{R}^N : |d - \mathbf{x}^T\mathbf{w}| \le \gamma\},$ (2)

where $\mathbb{R}^N$ is a vector space of dimension $N$. If $k$ IVDOPs $\{d(i), \mathbf{x}(i)\}_{i=1}^{k}$ are used for training the filter, the constraint set contributed by the $i$-th pair can be written as [10-13]

$H_i = \{\mathbf{w} \in \mathbb{R}^N : |d(i) - \mathbf{x}^T(i)\mathbf{w}| \le \gamma\},$ (3)

and the SMF algorithm seeks solutions in the exact membership set built from the $k$ observed IVDOPs,

$\Psi_k = \bigcap_{i=1}^{k} H_i.$ (4)

Thus, an SMF algorithm chooses a set in the parameter space rather than a point estimate, and $\Theta$ is a subset of $\Psi_k$ in each iteration.
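To make the set-membership criterion concrete, the following sketch checks whether a candidate vector lies in each constraint set and hence in the intersection of all of them. The 4-tap channel and noise-free IVDOPs are illustrative assumptions:

```python
import numpy as np

def in_constraint_set(w, d, x, gamma):
    """True if w satisfies |d - x^T w| <= gamma for a single IVDOP (d, x)."""
    return abs(d - x @ w) <= gamma

def in_membership_set(w, pairs, gamma):
    """True if w lies in the intersection of all k constraint sets."""
    return all(in_constraint_set(w, d, x, gamma) for d, x in pairs)

rng = np.random.default_rng(3)
w_true = np.array([0.5, 0.0, -0.25, 0.0])
pairs = []
for _ in range(10):
    x = rng.normal(size=4)
    pairs.append((w_true @ x, x))   # noise-free desired outputs
```

With noise-free data, the true channel lies in the membership set for any positive bound, while a poor candidate such as the zero vector is excluded.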
The SM-NLMS algorithm updates the channel estimate only when the error bound is violated, by solving the constrained problem

$\min_{\hat{\mathbf{w}}(n+1)} \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 \quad \text{subject to} \quad d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1) = \gamma.$ (5)

We use the Lagrange multiplier method to find the minimization of (5). Therefore, the SM-NLMS updating equation is

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \varepsilon},$ (6)

where

$\alpha(n) = \begin{cases} 1 - \gamma/|e(n)|, & |e(n)| > \gamma, \\ 0, & \text{otherwise}, \end{cases}$ (7)

and $\varepsilon > 0$ avoids division by zero. From (6), we can see that the SM-NLMS updating formula is similar to that of the basic NLMS algorithm; herein, $\alpha(n)$ acts as a data-dependent step factor. Inspired by the CS and ZA techniques, a sparse SM-NLMS algorithm denoted as ZASM-NLMS was presented by exerting an ℓ1-norm constraint on the SM-NLMS cost function; hence, the ZASM-NLMS solves the following problem [35]:

$\min_{\hat{\mathbf{w}}(n+1)} \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \lambda_1 \|\hat{\mathbf{w}}(n+1)\|_1 \quad \text{subject to} \quad d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1) = \gamma,$ (8)

where $\lambda_1$ denotes a ZA parameter. Similarly, we employ the Lagrange multiplier method to acquire the updating equation of the reported ZASM-NLMS:

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\mathbf{x}(n)} + \lambda_1\,\mathbf{G}(n)\,\mathrm{sgn}[\hat{\mathbf{w}}(n)],$ (9)

where $\mathbf{G}(n) = \mathbf{x}(n)\mathbf{x}^T(n)/(\mathbf{x}^T(n)\mathbf{x}(n)) - \mathbf{I}$. Since this updating equation is complex, a simpler method can be employed to solve (8), which leads to an optimization problem without constraint. For the sake of compatibility with the traditional SM-NLMS, we can use the following equation as the solution of (8) [35]:

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \varepsilon} - \rho\,\mathrm{sgn}[\hat{\mathbf{w}}(n)],$ (10)

where $\rho > 0$ denotes a ZA strength factor giving a balance between the estimation misalignment and the sparsity constraint on $\hat{\mathbf{w}}(n)$, and $\mathrm{sgn}[\cdot]$ is a component-wise sign function given by [14]

$\mathrm{sgn}[w] = \begin{cases} w/|w|, & w \neq 0, \\ 0, & w = 0. \end{cases}$ (11)

In contrast to the update equation in (6), the ZASM-NLMS has an extra term $-\rho\,\mathrm{sgn}[\hat{\mathbf{w}}(n)]$, which rapidly forces the small-magnitude coefficients toward zero. The ZA strength is controlled by the parameter $\rho$, and the ZASM-NLMS applies the same zero attraction to all channel coefficients. As a result, the ZASM-NLMS algorithm cannot effectively distinguish the zero and nonzero channel coefficients, which may degrade its performance for less sparse systems. To improve its performance, an enhanced sparse SM-NLMS, named the RZASM-NLMS algorithm, was presented by replacing the ℓ1-norm in the ZASM-NLMS with a sum-log function. Although the RZASM-NLMS algorithm improves on the ZASM-NLMS, it also increases the computational complexity.
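The SM-NLMS update and the simplified ZASM-NLMS zero attractor described above can be sketched in a few lines. The channel, the error bound gamma, and the ZA strength rho below are illustrative assumptions:

```python
import numpy as np

def sm_nlms_step(w_hat, x, d, gamma, eps=1e-8):
    """One SM-NLMS update: adapt only when |e(n)| exceeds the bound gamma."""
    e = d - x @ w_hat
    alpha = 1.0 - gamma / abs(e) if abs(e) > gamma else 0.0
    return w_hat + alpha * e * x / (x @ x + eps)

def zasm_nlms_step(w_hat, x, d, gamma, rho, eps=1e-8):
    """ZASM-NLMS: the SM-NLMS step plus the uniform zero attractor
    -rho * sgn(w_hat) applied to every tap."""
    return sm_nlms_step(w_hat, x, d, gamma, eps) - rho * np.sign(w_hat)

# identify a sparse 8-tap channel from noise-free data
rng = np.random.default_rng(1)
w = np.zeros(8); w[2] = 0.8; w[5] = -0.5
w_hat = np.zeros(8)
for _ in range(500):
    x = rng.normal(size=8)
    w_hat = sm_nlms_step(w_hat, x, w @ x, gamma=0.01)
```

A useful property of the set-membership step is that most iterations perform no update once the error falls below gamma, which is the source of its reduced complexity.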

Proposed L0L1SM-NLMS Algorithms
As is known, the previously presented ZASM- and RZASM-NLMS algorithms estimate the sparse channel using an ℓ1-norm and a sum-log function penalty, respectively, to achieve good channel estimation performance. However, they may neglect the in-nature sparse structure of the unknown channel in practical applications. Inspired by the CS and ZA techniques [14-20], we propose an adaptive-norm penalized SM-NLMS algorithm by exerting ℓ0-norm and ℓ1-norm constraints on the general cost function of the basic SM-NLMS to construct a desired independent norm penalty. The proposed algorithms produce zero-attraction terms which divide the filter coefficients into large and small groups. By dynamically assigning the ℓ0-norm and ℓ1-norm penalties to the different channel taps, the proposed algorithms attract the large and small groups of channel taps to zero via ℓ0-norm and ℓ1-norm penalties, respectively. Here, the mixture of the ℓ0-norm and ℓ1-norm penalties is implemented by using an ℓp-norm penalty, described as

$\|\mathbf{w}\|_p = \left(\sum_{i=1}^{N} |w_i|^p\right)^{1/p},$ (12)

where $0 \le p \le 1$. The ℓ0-norm and ℓ1-norm are then obtained as the limiting case $p \rightarrow 0$ and the case $p = 1$, respectively. We note that $\|\hat{\mathbf{w}}(n+1)\|_0$ counts the nonzero channel coefficients of the sparse channel.
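A minimal sketch of these two norms, with illustrative tap values:

```python
import numpy as np

def lp_norm(w, p):
    """||w||_p = (sum_i |w_i|^p)^(1/p); for 0 < p <= 1 this is the
    sparsity-promoting penalty (not a true norm when p < 1)."""
    return float(np.sum(np.abs(w) ** p) ** (1.0 / p))

def l0_norm(w):
    """||w||_0: the number of nonzero channel coefficients."""
    return int(np.count_nonzero(w))

# a sparse 8-tap example: three nonzero coefficients
w = np.array([0.0, 0.9, 0.0, -0.05, 0.0, 1.2, 0.0, 0.0])
```

Lowering p increases the relative penalty on small-magnitude taps, which is exactly the behavior the norm-combination constraint exploits.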
To exploit the channel sparsity, an ℓp-norm zero-attraction penalty is imposed on the update,

$t_p(n) = \lambda_{\mathrm{pro}}\,\|\hat{\mathbf{w}}(n+1)\|_p,$ (16)

where $\lambda_{\mathrm{pro}}$ is a ZA parameter used for controlling the sparsity and convergence rate. As a result, the modified cost function of our developed algorithm is

$J(n) = \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \lambda_{\mathrm{pro}}\,\|\hat{\mathbf{w}}(n+1)\|_p + \lambda\,[d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1) - \gamma],$ (17)

where $\lambda$ is a Lagrange multiplier. The Lagrange multiplier method is adopted to find the minimization of (17). Setting the gradient of (17) with respect to $\hat{\mathbf{w}}(n+1)$ to zero gives

$2\,[\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)] + \lambda_{\mathrm{pro}}\,\nabla\|\hat{\mathbf{w}}(n+1)\|_p - \lambda\,\mathbf{x}(n) = \mathbf{0}.$ (18)

From (18), we get

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\lambda}{2}\,\mathbf{x}(n) - \frac{\lambda_{\mathrm{pro}}}{2}\,\nabla\|\hat{\mathbf{w}}(n+1)\|_p,$ (19)

while the constraint in (17) requires

$d(n) - \mathbf{x}^T(n)\,\hat{\mathbf{w}}(n+1) = \gamma.$ (20)
By left-multiplying $\mathbf{x}^T(n)$ on both sides of (19) and combining it with equation (20), we obtain the Lagrange multiplier

$\lambda = \frac{2\,[e(n) - \gamma] + \lambda_{\mathrm{pro}}\,\mathbf{x}^T(n)\,\nabla\|\hat{\mathbf{w}}(n+1)\|_p}{\mathbf{x}^T(n)\,\mathbf{x}(n)}.$ (21)

Then, substituting (21) into (19) and approximating the penalty gradient at $\hat{\mathbf{w}}(n)$ gives

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n)} - \frac{\lambda_{\mathrm{pro}}}{2}\,\mathbf{G}(n)\,\nabla\|\hat{\mathbf{w}}(n)\|_p,$ (22)

where $\mathbf{G}(n) = \mathbf{x}(n)\mathbf{x}^T(n)/(\mathbf{x}^T(n)\mathbf{x}(n)) - \mathbf{I}$, as before. In order to prevent division by zero, we add a very small constant $\varepsilon > 0$:

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n) + \varepsilon} - \frac{\lambda_{\mathrm{pro}}}{2}\,\mathbf{G}(n)\,\nabla\|\hat{\mathbf{w}}(n)\|_p.$ (23)

However, the ZA term in this update equation inevitably biases the sparse channel estimate. Fortunately, the exponent $p$ is a free variable according to our previous definition. Thus, we can redefine the ℓp-norm penalty further, assigning each tap its own exponent:

$\|\hat{\mathbf{w}}(n+1)\|_{p}^{*} = \sum_{i=1}^{N} |\hat{w}_i(n+1)|^{p_i}, \quad p_i \in \{0, 1\}.$ (24)

By considering the definition in (24), (23) is updated to be

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n) + \varepsilon} - \frac{\lambda_{\mathrm{pro}}}{2}\,\mathbf{G}(n)\,\nabla\|\hat{\mathbf{w}}(n)\|_{p}^{*}.$ (25)

From (25), we find that the zero attractor suggests dividing the entries of $\hat{\mathbf{w}}(n+1)$ into small and large groups. Herein, we define an expectation value of the tap magnitudes as a threshold,

$th(n) = E\,[\,|\hat{w}_i(n)|\,] \approx \frac{1}{N}\sum_{i=1}^{N} |\hat{w}_i(n)|.$ (26)

For the large group, we aim to minimize the ℓ0-norm penalty, subject to $|\hat{w}_i(n)| > th(n)$; for the small group, we use $p_i = 1$ to balance the solution [18]. Thereby, $p_i$ is assigned to 0 or 1 for $|\hat{w}_i(n)| > th(n)$ or $|\hat{w}_i(n)| < th(n)$, respectively. Till now, the proposed ℓp-norm is separated into an ℓ0-norm and an ℓ1-norm by considering the mean of the channel taps. Therefore, the proposed algorithm can be regarded as an ℓ0-norm and ℓ1-norm penalized SM-NLMS for the large and small groups, respectively. Then, the updating equation of (25) is changed to be

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n) + \varepsilon} - \frac{\lambda_{\mathrm{pro}}}{2}\,\mathbf{G}(n)\,\mathbf{F}(n)\,\mathrm{sgn}[\hat{\mathbf{w}}(n)],$ (28)

where $\lambda_{\mathrm{pro}}$ acts as a ZA strength controlling factor used to provide a balance between the convergence and the sparseness, and each diagonal entry $f_i(n)$ of $\mathbf{F}(n)$ is

$f_i(n) = \begin{cases} 0, & |\hat{w}_i(n)| > th(n), \\ 1, & |\hat{w}_i(n)| \le th(n). \end{cases}$ (29)

In fact, we can use the diagonal matrix

$\mathbf{F}(n) = \mathrm{diag}\,(f_1(n), f_2(n), \ldots, f_N(n)) \in \mathbb{R}^{N \times N}$ (30)

to collect all $f_i$ from (29) in each iteration. As calculating $\mathbf{G}(n)$ is complex, we can use a simple method to find the solution of (16) by considering (24) and the separation of the ℓ0-norm and ℓ1-norm. Then, the L0L1SM-NLMS algorithm solves an unconstrained optimization problem whose solution is

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n) + \varepsilon} - \rho_{\mathrm{pro}}\,\mathbf{F}(n)\,\mathrm{sgn}[\hat{\mathbf{w}}(n)],$ (33)

where $\rho_{\mathrm{pro}} > 0$ is the resulting ZA strength factor. The developed L0L1SM-NLMS thus has a final zero attractor term which exerts ℓ0-norm or ℓ1-norm penalties on the channel taps according to the sparse channel property in practical applications. The ℓ0-norm or ℓ1-norm penalty is assigned via the matrix $\mathbf{F}$ to the separated large and small groups, respectively. Thus, our L0L1SM-NLMS algorithm enhances the convergence rate and reduces the estimation bias for estimating sparse channels. It synthesizes the advantages of the ℓ0-norm and ℓ1-norm penalties and serves as a dynamic adaptive norm constraint, giving an ℓ0-norm penalty to the large channel taps and exerting an ℓ1-norm on the small channel taps. Similar to the RZASM-NLMS, the proposed L0L1SM-NLMS algorithm can be further enhanced by introducing a reweighting factor into (33), yielding the reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm. The corresponding updating equation of the RL0L1SM-NLMS is

$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \alpha(n)\,\frac{\mathbf{x}(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n) + \varepsilon} - \rho_{\mathrm{RL0L1}}\,\mathbf{F}(n)\,\frac{\mathrm{sgn}[\hat{\mathbf{w}}(n)]}{1 + \delta\,|\hat{\mathbf{w}}(n)|},$ (34)

where $\rho_{\mathrm{RL0L1}}$ is a ZA parameter and $\delta$ is a positive parameter that adjusts the reweighting strength. By considering all the channel taps and the training signals, (34) is written component-wise as

$\hat{w}_i(n+1) = \hat{w}_i(n) + \alpha(n)\,\frac{x_i(n)\,e(n)}{\mathbf{x}^T(n)\,\mathbf{x}(n) + \varepsilon} - \rho_{\mathrm{RL0L1}}\,f_i(n)\,\frac{\mathrm{sgn}[\hat{w}_i(n)]}{1 + \delta\,|\hat{w}_i(n)|}, \quad i = 1, \ldots, N.$ (35)

Here, the matrix $\mathbf{F}$ is the same as that in (33). We observe that, in comparison with the L0L1SM-NLMS algorithm, the RL0L1SM-NLMS update contains a reweighting factor $1/(1 + \delta|\hat{\mathbf{w}}(n)|)$. Thus, we can use this factor to adjust the zero-attractor ability by properly choosing the parameter $\delta$. A suitable $\delta$ creates an attracting constraint on the small or the large grouped channel coefficients to effectively adjust the ℓ1-norm or ℓ0-norm constraints, respectively.
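A hedged sketch of the simplified updates above, with the reweighted variant enabled by an optional delta parameter. The threshold th(n) is taken as the mean tap magnitude, and the channel, gamma, rho, and delta values are illustrative assumptions:

```python
import numpy as np

def l0l1sm_nlms_step(w_hat, x, d, gamma, rho, delta=None, eps=1e-8):
    """One L0L1SM-NLMS update; passing delta switches on the
    RL0L1SM-NLMS reweighting factor 1/(1 + delta*|w_hat|).

    Taps with |w_hat_i| <= th(n) (small group) get f_i = 1 and an
    l1-style attraction toward zero; taps above th(n) (large group)
    get f_i = 0, i.e. the l0-style penalty leaves them unattracted.
    """
    e = d - x @ w_hat
    alpha = 1.0 - gamma / abs(e) if abs(e) > gamma else 0.0
    th = np.mean(np.abs(w_hat))               # threshold th(n)
    f = (np.abs(w_hat) <= th).astype(float)   # diagonal of F(n)
    attractor = f * np.sign(w_hat)
    if delta is not None:                     # RL0L1SM-NLMS variant
        attractor /= 1.0 + delta * np.abs(w_hat)
    return w_hat + alpha * e * x / (x @ x + eps) - rho * attractor

# identify a sparse channel; small taps are attracted toward zero
rng = np.random.default_rng(2)
w = np.zeros(16); w[1] = 1.0; w[9] = -0.6
w_hat = np.zeros(16)
for _ in range(1000):
    x = rng.normal(size=16)
    d = w @ x + 0.01 * rng.normal()
    w_hat = l0l1sm_nlms_step(w_hat, x, d, gamma=0.02, rho=1e-4, delta=10.0)
```

Because the dominant taps sit above the mean-magnitude threshold, they converge essentially unbiased, while the near-zero taps are driven to zero by the selective attractor.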

Simulation Results
Here, we investigate the behavior of our presented L0L1SM-NLMS and RL0L1SM-NLMS algorithms.
Specifically, the convergence properties and steady-state misalignment of our developed L0L1SM- and RL0L1SM-NLMS algorithms are assessed over sparse channels with different sparsity levels K. Furthermore, we also study the effects of the ZA parameters. The performance is obtained by computer simulation, and the behaviors of the L0L1SM- and RL0L1SM-NLMS algorithms are compared with the classic LMS, NLMS, and SM-NLMS algorithms and the popular ZA-LMS, RZA-LMS, and ZA-, RZA-, ZASM-, and RZASM-NLMS algorithms. We adopt the MSE criterion to evaluate the sparse channel estimators; herein, the MSE is defined as $\mathrm{MSE}(n) = E\{\|\mathbf{w} - \hat{\mathbf{w}}(n)\|^2\}$. In all the experiments, x(n) denotes a Gaussian random signal which is independent of v(n). The power of x(n) and of the white noise is 1 and 1 × 10⁻², respectively, so the signal-to-noise ratio (SNR) is 20 dB. In the first experiment, we consider a channel of length N = 16 with only one nonzero tap (K = 1), in order to study the effects of ρ_L0L1 and ρ_RL0L1 on our developed L0L1SM- and RL0L1SM-NLMS algorithms, respectively. Here, the simulation parameters of the L0L1SM-NLMS algorithm are set to 0.01 and $\gamma = \sqrt{2\sigma_v^2}$. Figure 1 gives the effects of ρ_L0L1 on our L0L1SM-NLMS algorithm. As we can see from Figure 1, there is a large MSE for ρ_L0L1 = 1 × 10⁻¹. When ρ_L0L1 decreases from 5 × 10⁻² to 1 × 10⁻³, the MSE of our developed L0L1SM-NLMS algorithm is reduced, indicating that the algorithm achieves a low estimation misalignment. If ρ_L0L1 continues to decrease, the MSE of our L0L1SM-NLMS algorithm grows larger again. Next, the effect of ρ_RL0L1 is shown in Figure 2, with the corresponding parameters set to 0.05 and $\gamma = \sqrt{2\sigma_v^2}$. It is clear that the MSE gradually decreases as ρ_RL0L1 is reduced from 1 × 10⁻¹ to 5 × 10⁻³. When ρ_RL0L1 ranges from 1 × 10⁻³ to 5 × 10⁻⁵, the MSE grows larger with decreasing ρ_RL0L1. Thus, ρ_L0L1 and ρ_RL0L1 should be properly selected to obtain the best behavior of our presented L0L1SM- and RL0L1SM-NLMS algorithms.
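The MSE evaluation used in these experiments can be reproduced with a short single-run sketch; the NLMS baseline and all parameter values below are illustrative assumptions rather than the paper's exact settings:

```python
import numpy as np

def nlms_step(w_hat, x, d, mu=0.5, eps=1e-8):
    """Baseline NLMS update used for comparison."""
    e = d - x @ w_hat
    return w_hat + mu * e * x / (x @ x + eps)

def mse_curve(w_true, step, n_iters, noise_std, seed=0):
    """Record ||w - w_hat(n)||^2 per iteration for one identification run."""
    rng = np.random.default_rng(seed)
    w_hat = np.zeros_like(w_true)
    curve = np.empty(n_iters)
    for n in range(n_iters):
        x = rng.normal(size=w_true.size)    # unit-power Gaussian input
        d = w_true @ x + noise_std * rng.normal()
        w_hat = step(w_hat, x, d)
        curve[n] = np.sum((w_true - w_hat) ** 2)
    return curve

w = np.zeros(16); w[0] = 1.0               # K = 1, as in the first experiment
curve = mse_curve(w, nlms_step, 500, 0.1)  # noise power 1e-2, i.e. SNR = 20 dB
```

Averaging such curves over many independent runs gives the ensemble MSE learning curves plotted in the figures.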
Then, we verify the estimation behaviors of the L0L1SM- and RL0L1SM-NLMS algorithms over a sparse channel with different sparsity levels, compared with the LMS, NLMS, SM-NLMS, and their sparse forms. The convergence of our presented L0L1SM- and RL0L1SM-NLMS algorithms compared with the NLMS algorithms is shown in Figure 3. We observe that both the L0L1SM- and RL0L1SM-NLMS algorithms converge faster than those algorithms. Here, μ_NLMS denotes the step size for the NLMS algorithm and its sparse forms, and ρ_ZASM, ρ_ZANLMS, and ρ_RZASM are the ZA parameters for the already reported ZASM-, ZA-, and RZASM-NLMS algorithms, respectively. The steady-state misalignment of our L0L1SM- and RL0L1SM-NLMS algorithms gives lower estimation error floors than the traditional NLMS and SM-NLMS and their recently developed sparse variants, because our proposed algorithms can adaptively adjust the ZA by dividing the sparse channel taps into small and large groups and exerting ℓ1- and ℓ0-norm penalties on each group, respectively. The proposed RL0L1SM-NLMS algorithm provides the lowest estimation misalignment compared with the recently reported ZASM-NLMS and RZASM-NLMS algorithms when K equals 2.
When K increases from 4 to 8, the steady-state misalignment becomes higher in comparison with that of K = 2. However, the steady-state error is still better than those of the sparse SM-NLMS and NLMS algorithms and their related ZA-based sparse variants. The steady-state behaviors of our L0L1SM- and RL0L1SM-NLMS algorithms are also verified against the existing LMS-family algorithms, including the LMS, ZA-LMS, RZA-LMS, ZA-NLMS, and RZA-NLMS. Here, the parameters of our L0L1SM- and RL0L1SM-NLMS algorithms are the same as those in the last experiment, while the extra simulation parameters are μ_LMS = 0.05, ρ_ZA-LMS = 5 × 10⁻⁴, and ρ_RZA-LMS = 9 × 10⁻⁴, where μ_LMS is the step size of the LMS algorithm and ρ_ZA-LMS and ρ_RZA-LMS are the ZA parameters of the ZA- and RZA-LMS algorithms, respectively. The estimation behaviors for K = 2, K = 4, and K = 8 are described in Figures 7, 8, and 9, respectively. It is found that our RL0L1SM-NLMS algorithm achieves fast convergence and the lowest estimation misalignment. Additionally, the proposed L0L1SM-NLMS algorithm is also better than the mentioned algorithms in terms of MSE. When the number of dominant coefficients increases to 8, the steady-state misalignment of our developed algorithms increases. However, they still yield excellent steady-state behavior, implying that our presented L0L1SM- and RL0L1SM-NLMS algorithms are more powerful than the popular LMS and its sparse versions. In addition, the estimation behaviors of our constructed L0L1SM- and RL0L1SM-NLMS algorithms are compared in Figure 10. The estimation misalignment of the RL0L1SM-NLMS algorithm is lower than that of the L0L1SM-NLMS algorithm for every sparsity level K, indicating that the proposed RL0L1SM-NLMS algorithm is more robust.
Finally, the estimation behavior of our L0L1SM- and RL0L1SM-NLMS algorithms is studied on an echo channel, illustrated in Figure 11 as an example. There are 16 nonzero taps (K = 16) within the echo channel, and its length is 256 (N = 256). In this experiment, the sparsity is measured as $\xi_{12}(\mathbf{w}) = \frac{N}{N - \sqrt{N}}\left(1 - \frac{\|\mathbf{w}\|_1}{\sqrt{N}\,\|\mathbf{w}\|_2}\right)$ [28, 35]. The SNR is 30 dB, and the parameters used in this experiment are μ_NLMS = μ_ZA-NLMS = 0.5, ρ_ZA-NLMS = 3 × 10⁻⁶, ρ_ZASM = 5 × 10⁻⁶, ρ_RZASM = 6 × 10⁻⁶, ρ_L0L1 = 7 × 10⁻⁶, and ρ_RL0L1 = 1 × 10⁻⁵; the other parameters are the same as in the experiments above. We use ξ₁₂(w) = 0.8222 and ξ₁₂(w) = 0.7362 for the first 10000 and the second 10000 iterations, respectively. Herein, only the NLMS and its sparse forms are used for comparison, since the NLMS algorithm outperforms the LMS algorithm [22]. The computer simulation result is given in Figure 12. As described in Figure 12, the steady-state misalignment of our developed L0L1SM- and RL0L1SM-NLMS algorithms is superior to that of the previously developed RZASM-NLMS algorithm. After the first 10000 iterations, the steady-state error of our algorithms becomes larger. However, their estimation behaviors are still better than those of the mentioned channel estimation algorithms. Therefore, we conclude that our developed L0L1SM- and RL0L1SM-NLMS algorithms perform excellently, in terms of both convergence and estimation behavior, for sparse adaptive channel estimation.
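The sparsity measure ξ₁₂(w) can be computed directly; the sketch below assumes the standard form with ‖w‖₁/(√N ‖w‖₂):

```python
import numpy as np

def xi12(w):
    """xi_12(w) = (N/(N - sqrt(N))) * (1 - ||w||_1 / (sqrt(N) * ||w||_2)).

    Ranges from 0 for a uniform (non-sparse) vector to 1 for a vector
    with a single nonzero tap.
    """
    N = w.size
    l1 = np.sum(np.abs(w))
    l2 = np.sqrt(np.sum(w ** 2))
    return float((N / (N - np.sqrt(N))) * (1.0 - l1 / (np.sqrt(N) * l2)))
```

For the 256-tap echo channel considered here, intermediate values such as 0.8222 correspond to a small fraction of dominant taps among many near-zero ones.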

Conclusion
An L0L1SM-NLMS algorithm and an RL0L1SM-NLMS algorithm have been proposed, and their derivations have been presented in detail. The proposed L0L1SM-NLMS algorithm was realized by integrating an ℓp-norm penalty and separating it into ℓ0- and ℓ1-norms via the mean of the channel taps, while the RL0L1SM-NLMS algorithm was implemented by adding a reweighting factor to the L0L1SM-NLMS algorithm to enhance the ZA strength. Both proposed algorithms provide a ZA-like attraction on the small channel coefficients and an ℓ0-norm penalty on the large channel taps. The proposed L0L1SM- and RL0L1SM-NLMS algorithms have been evaluated on a sparse channel with different sparsity levels and on a designated echo channel. The computer simulations over these channels show that our RL0L1SM-NLMS algorithm has the best behavior with respect to convergence and steady-state channel estimation. Furthermore, both the proposed L0L1SM- and RL0L1SM-NLMS algorithms provide better estimation behaviors than the traditional LMS, NLMS, and SM-NLMS algorithms and the previously reported popular sparse algorithms.

Figure 5: Steady-state behavior comparisons between the proposed algorithms and NLMS algorithms for K = 4.

Figure 7: Steady-state behavior comparisons between our algorithms and LMS algorithms for K = 2.

Figure 9: Steady-state behavior comparisons between our algorithms and LMS algorithms for K = 8.