Sparse Adaptive Channel Estimation Based on lp-Norm-Penalized Affine Projection Algorithm

We propose an lp-norm-penalized affine projection algorithm (LP-APA) for broadband multipath adaptive channel estimation. The proposed LP-APA is realized by incorporating an lp-norm penalty into the cost function of the conventional affine projection algorithm (APA) to exploit the sparsity of the broadband wireless multipath channel, by which the convergence speed and steady-state performance of the APA are significantly improved. The implementation of the LP-APA is equivalent to adding a zero attractor to its iterations. Simulation results obtained from sparse channel estimation demonstrate that the proposed LP-APA can efficiently improve channel estimation performance in terms of both convergence speed and steady-state performance when the channel is exactly sparse.


Introduction
Recently, with the rapidly increasing demand for high data rates and wide bandwidth in wireless mobile communication, broadband signal transmission has become an important technique for next-generation wireless communication systems, for instance, 3GPP Long-Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX) [1][2][3]. Coherent detection and equalization in broadband communication systems require perfect channel state information [4], which is not known at the receiver. Therefore, the achievable performance of coherent detection in such broadband communication systems relies heavily on the accuracy of the channel estimation [2][3][4][5][6][7], which helps to improve communication quality. Fortunately, accurate channel estimates can be obtained by means of adaptive filter algorithms, such as least mean square (LMS), recursive least squares (RLS), and the affine projection algorithm (APA) [8,9]. The normalized LMS (NLMS) algorithm, an improved LMS algorithm, has been widely studied and applied in channel estimation owing to its low complexity, high stability, and easy implementation. However, the NLMS algorithm converges slowly, making it difficult to track rapidly time-varying channels. Consequently, the APA, whose computational complexity lies between those of the NLMS and RLS algorithms, has been extensively developed and applied in echo cancellation and channel estimation [9,10].
On the other hand, measurement results for broadband channels show that the wireless multipath channel consists of only a few dominant active propagation paths with nonzero magnitudes, even though these paths may have large propagation delays [5,11,12]. Thus, such a channel can be regarded as a sparse channel with a few dominant nonzero taps, while the other, inactive taps are zero or close to zero because of the noise in the channel. However, classical adaptive channel estimation algorithms, such as the NLMS algorithm and the APA, may perform poorly when the channel is exactly sparse [13]. As a consequence, a great number of sparse signal estimation algorithms have been presented to improve the estimation performance for sparse channels, such as compressed sensing (CS) [5, 14-16] and zero-attracting adaptive channel estimation algorithms [13, 17-23]. However, CS reconstruction algorithms are sensitive to the noise in the channel and have high computational complexity [19].
Other effective adaptive channel estimation algorithms, known as zero-attracting algorithms, have been obtained by combining CS theory [15,16] with the LMS algorithm [8]; the best known are the zero-attracting LMS (ZA-LMS) and reweighted ZA-LMS (RZA-LMS) algorithms [13]. Recently, these zero-attracting (ZA) techniques have been extended to the APA in order to improve the convergence speed of the zero-attracting LMS algorithms [20], yielding the zero-attracting APA (ZA-APA) and the reweighted ZA-APA (RZA-APA). As a result, the zero-attracting APAs converge faster than the ZA LMS algorithms owing to the data-reuse scheme in the APA. However, these previously proposed zero-attracting algorithms, including the ZA-LMS algorithm and the ZA-APA, are realized by integrating an l1-norm penalty into the cost functions of the standard LMS algorithm and APA, respectively. Moreover, these l1-norm-penalized algorithms impose the condition that the number of active taps must be very small compared with the number of inactive channel taps.
In this paper, we propose an lp-norm-penalized APA (LP-APA) that incorporates an lp-norm into the cost function of the conventional APA, on the basis of the zero-attracting concepts proposed in [13, 17-23], by which the convergence speed and steady-state performance of the conventional APA can be significantly improved when the channel is exactly sparse. Moreover, the proposed LP-APA has an extra parameter p, which makes it more flexible than the previously proposed zero-attracting APAs [20-22]. The LP-APA is realized by introducing a zero attractor into its iterations, which attracts the inactive taps to zero quickly. In other words, our proposed LP-APA inherits the benefits of both the conventional APA and the earlier zero-attracting algorithms and, hence, achieves faster convergence and a smaller steady-state error than the conventional APA. In this study, the proposed LP-APA is implemented over a sparse multipath channel in a single-antenna system in order to verify its channel estimation performance in comparison with the NLMS, APA, ZA-APA, and RZA-APA algorithms. Computer simulation results demonstrate that the proposed LP-APA achieves better estimation performance in terms of both convergence speed and steady-state behavior for sparse channel estimation.
The remainder of this paper is organized as follows. In Section 2, we briefly discuss the previously proposed conventional APA, ZA-APA, and RZA-APA on the basis of a sparse multipath communication system. In Section 3, we derive the LP-APA by introducing an lp-norm penalty into the cost function of the conventional APA; the update function of the LP-APA is obtained by using the Lagrange multiplier method. In Section 4, the channel estimation performance of the proposed LP-APA is experimentally investigated over a sparse channel and compared with those of the ZA-APA, RZA-APA, APA, and NLMS algorithms. Finally, Section 5 concludes the paper.

Conventional Channel Estimation Algorithms
In this section, the sparse multipath communication system shown in Figure 1 is employed in order to illustrate the conventional channel estimation algorithms, namely, the APA, ZA-APA, and RZA-APA. The input signal x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T, which contains the N most recent samples, is transmitted over an unknown finite impulse response (FIR) channel with channel impulse response (CIR) h = [h_0, h_1, ..., h_(N-1)]^T, where (·)^T denotes transposition. The input signal x(n) is also the input of the channel estimator ĥ(n) with N coefficients, which generates the estimation output ŷ(n); the desired signal d(n), obtained at the receiver, is composed of the channel output y(n) and the channel noise v(n). The purpose of the channel estimation is to estimate the unknown channel h by using the APA, ZA-APA, and RZA-APA.
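As a concrete illustration of this system model (our own sketch, not the paper's code; the tap position and noise level are arbitrary), the desired signal at the receiver can be synthesized as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32            # channel length (N = 32 in the simulations later)
n_samples = 500   # number of transmitted samples

# Sparse CIR h: a single active tap at a hypothetical position, rest zero.
h = np.zeros(N)
h[5] = 1.0

x = rng.standard_normal(n_samples)          # Gaussian input signal x(n)
y = np.convolve(x, h)[:n_samples]           # channel output y(n)
v = 0.01 * rng.standard_normal(n_samples)   # additive channel noise v(n)
d = y + v                                   # desired signal d(n) at the receiver
```

Here the channel output is simply the input delayed by the active tap's position, which makes the sparse structure of the CIR easy to see.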

Review of Conventional APA.
The APA adopts a multiple-projection scheme by utilizing the past input vectors from time iteration n back to time iteration (n - P + 1), where P is defined as the affine projection order. In the APA, we assume that the last P input signal vectors are organized as the P x N matrix

U(n) = [x(n), x(n-1), ..., x(n-P+1)]^T, (1)
where x(n) is the input signal vector. We also define the following P x 1 vectors to further describe the APA: the desired signal d(n) = [d(n), d(n-1), ..., d(n-P+1)]^T, the estimation output of the APA filter ŷ(n) = U(n)ĥ(n), and the additive white Gaussian noise vector v(n) = [v(n), v(n-1), ..., v(n-P+1)]^T. (2)

For channel estimation, the APA is used to minimize

||ĥ(n+1) - ĥ(n)||^2 subject to d(n) = U(n)ĥ(n+1). (3)

Here, the Lagrange multiplier method is employed in order to find the solution that minimizes the cost function J_APA(n+1) of the APA, which is given by

J_APA(n+1) = ||ĥ(n+1) - ĥ(n)||^2 + [d(n) - U(n)ĥ(n+1)]^T λ_APA, (4)

where λ_APA is a P x 1 Lagrange multiplier vector. By calculating the gradient of J_APA(n+1) with respect to ĥ(n+1), we have

∂J_APA(n+1)/∂ĥ(n+1) = 2ĥ(n+1) - 2ĥ(n) - U^T(n) λ_APA. (5)

Setting (5) to zero and using the constraint d(n) = U(n)ĥ(n+1) to solve for λ_APA yields

ĥ(n+1) = ĥ(n) + U^+(n) e(n), (6)

where U^+(n) = U^T(n)[U(n)U^T(n)]^(-1) is the pseudo-inverse of U(n) and e(n) = d(n) - U(n)ĥ(n) is the estimation error vector. In order to balance the convergence speed and the steady-state performance, a step size μ_APA is introduced into (6), and hence the update function (6) of the APA can be modified to

ĥ(n+1) = ĥ(n) + μ_APA U^+(n) e(n). (7)
It is worthwhile to note that the APA reduces to the NLMS algorithm when the affine projection order P is set to one.
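The APA recursion above can be sketched as follows (a minimal illustration, not the authors' code; the small regularization delta added to the P x P inverse and the toy identification loop are our own assumptions):

```python
import numpy as np

def apa_update(h_hat, U, d, mu=0.5, delta=1e-4):
    """One APA iteration: h_hat(n+1) = h_hat(n) + mu * U^+(n) e(n).

    U is the P x N matrix of the last P input vectors, d the P x 1
    desired vector; delta regularizes the P x P matrix inversion.
    """
    e = d - U @ h_hat                                   # a-priori error e(n)
    P = U.shape[0]
    # Pseudo-inverse applied to e: U^T (U U^T + delta*I)^{-1} e
    g = np.linalg.solve(U @ U.T + delta * np.eye(P), e)
    return h_hat + mu * (U.T @ g)

# Quick check on a toy (non-sparse) identification problem
rng = np.random.default_rng(1)
N, P = 8, 2
h = rng.standard_normal(N)
x = rng.standard_normal(2000)
h_hat = np.zeros(N)
for n in range(N + P, 2000):
    U = np.array([x[n - k - np.arange(N)] for k in range(P)])
    d = U @ h   # noise-free desired vector for this sketch
    h_hat = apa_update(h_hat, U, d)
```

In the noise-free toy run, the estimate converges to the true impulse response, which is the behavior the step size mu trades off against steady-state error in the noisy case.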

Review of the ZA-APA and RZA-APA.
In this subsection, we briefly review the ZA-APA and RZA-APA. On the basis of past studies, the cost function of the ZA-APA is defined by combining the cost function J_APA(n+1) of the standard APA with an l1-norm penalty on the channel estimator and is expressed as

J_ZA(n+1) = ||ĥ(n+1) - ĥ(n)||^2 + [d(n) - U(n)ĥ(n+1)]^T λ_ZA + ρ_ZA ||ĥ(n+1)||_1, (8)

where λ_ZA is a Lagrange multiplier vector with a size of P x 1, while ρ_ZA is a regularization parameter which balances the estimation error and the sparse l1-norm penalty on ĥ(n+1). To minimize the cost function of the ZA-APA, we apply the Lagrange multiplier method to J_ZA(n+1) and obtain

∂J_ZA(n+1)/∂ĥ(n+1) = 2ĥ(n+1) - 2ĥ(n) - U^T(n) λ_ZA + ρ_ZA sgn[ĥ(n+1)], (9)

where sgn[x] is a component-wise sign function defined as

sgn[x] = x/|x| for x ≠ 0, and sgn[x] = 0 for x = 0. (10)

In order to minimize (9), the left-hand side (LHS) of (9) is set to zero. Therefore, we have

ĥ(n+1) = ĥ(n) + (1/2) U^T(n) λ_ZA - (ρ_ZA/2) sgn[ĥ(n+1)]. (11)

Then, by multiplying both sides of (11) by U(n) and using e(n) = d(n) - U(n)ĥ(n) together with the constraint d(n) = U(n)ĥ(n+1), we can get

(1/2) U^T(n) λ_ZA = U^+(n) e(n) + (ρ_ZA/2) U^+(n) U(n) sgn[ĥ(n+1)]. (12)

Substituting (12) into (11), assuming sgn[ĥ(n+1)] ≈ sgn[ĥ(n)] at the steady stage, and introducing a step size μ_ZA to balance the convergence speed and the steady-state performance, we obtain the update function of the ZA-APA:

ĥ(n+1) = ĥ(n) + μ_ZA U^+(n) e(n) + ρ_ZA U^+(n) U(n) sgn[ĥ(n)] - ρ_ZA sgn[ĥ(n)], (13)

where the factor 1/2 arising in the derivation has been absorbed into ρ_ZA. From the update equation (13) of the ZA-APA, we find that there are two additional terms in comparison with the update equation (7) of the conventional APA, which attract the inactive taps to zero when the tap magnitudes of the sparse channel are zero or close to zero. These two additional terms are regarded as zero attractors, whose zero-attracting strength is controlled by the regularization parameter ρ_ZA. In a word, the zero attractor can speed up the convergence of the ZA-APA when the majority of the taps of the channel h are inactive, as in a sparse channel.
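The ZA-APA update, including its two zero-attractor terms, can be sketched as a single step (our own illustration; the inverse regularization delta and the toy sparse setup are assumptions, not part of the paper):

```python
import numpy as np

def za_apa_update(h_hat, U, d, mu=0.5, rho=5e-4, delta=1e-4):
    """One ZA-APA iteration: the APA correction plus the zero attractor
    terms +rho*U^+ U sgn(h_hat) - rho*sgn(h_hat)."""
    P = U.shape[0]
    e = d - U @ h_hat
    R_inv = np.linalg.inv(U @ U.T + delta * np.eye(P))
    U_plus = U.T @ R_inv                     # pseudo-inverse U^+(n)
    s = np.sign(h_hat)                       # sgn[h_hat(n)]
    return h_hat + mu * (U_plus @ e) + rho * (U_plus @ (U @ s)) - rho * s

# Toy sparse identification: one active tap out of N = 16
rng = np.random.default_rng(2)
N = 16
h = np.zeros(N)
h[3] = 1.0
x = rng.standard_normal(3000)
h_hat = np.zeros(N)
for n in range(N + 1, 3000):
    U = np.array([x[n - k - np.arange(N)] for k in range(2)])  # order P = 2
    h_hat = za_apa_update(h_hat, U, U @ h)
```

After the run, the inactive taps hover near zero (pulled by the attractor), while the single active tap is recovered up to a small shrinkage bias controlled by rho.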
Unfortunately, the ZA-APA cannot distinguish between the active taps and the inactive taps of the sparse channel, so it exerts the same penalty on all the channel taps, which forces all the channel taps toward zero uniformly [13,20]. Therefore, the performance of the ZA-APA might be degraded for less sparse channels. In order to improve the estimation performance of the ZA-APA, a heuristic method, first investigated in [24] and used in [20] to reinforce the zero attractor, was proposed, which is denoted the RZA-APA. In the RZA-APA, the log-sum penalty Σ_(i=0)^(N-1) log(1 + ε_RZA |ĥ_i(n+1)|) is adopted instead of the ||ĥ(n+1)||_1 used in the ZA-APA. Thus, the cost function of the RZA-APA can be written as

J_RZA(n+1) = ||ĥ(n+1) - ĥ(n)||^2 + [d(n) - U(n)ĥ(n+1)]^T λ_RZA + ρ_RZA Σ_(i=0)^(N-1) log(1 + ε_RZA |ĥ_i(n+1)|), (14)

where ρ_RZA is a regularization parameter for balancing the estimation error and the strength of the zero attractor, ε_RZA > 0 is a positive threshold, which is set to 10 in [13,20] to obtain optimal performance, and λ_RZA is a Lagrange multiplier vector with a size of P x 1. We apply the Lagrange multiplier method to the cost function of the RZA-APA and assume sgn[ĥ(n+1)]/(1 + ε_RZA|ĥ(n+1)|) ≈ sgn[ĥ(n)]/(1 + ε_RZA|ĥ(n)|) in the steady stage. Then, proceeding as for the ZA-APA, we get the update equation of the RZA-APA:

ĥ(n+1) = ĥ(n) + μ_RZA U^+(n) e(n) + ρ_RZA U^+(n) U(n) (sgn[ĥ(n)]/(1 + ε_RZA|ĥ(n)|)) - ρ_RZA (sgn[ĥ(n)]/(1 + ε_RZA|ĥ(n)|)), (15)

where μ_RZA is the step size of the RZA-APA and the constants arising in the derivation have been absorbed into ρ_RZA.
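The reweighted zero attractor of the RZA-APA replaces sgn[ĥ(n)] with sgn[ĥ(n)]/(1 + ε_RZA|ĥ(n)|); a minimal sketch of that weighting (our own illustration, using the ρ and ε values from the simulation section):

```python
import numpy as np

def rza_attractor(h_hat, rho=5e-4, eps=10.0):
    """Reweighted zero-attractor vector of the RZA-APA.

    Taps with |h| much smaller than 1/eps feel nearly the full pull
    rho*sgn(h); dominant active taps with |h| >> 1/eps are barely
    penalized, which reduces the shrinkage bias of the ZA-APA.
    """
    return rho * np.sign(h_hat) / (1.0 + eps * np.abs(h_hat))
```

In the update, this vector takes the place of the uniform rho*sgn term of the ZA-APA, so the active taps are largely spared while near-zero taps are still attracted strongly.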

Results and Discussions
In this section, computer simulations are used to investigate the channel estimation performance of our proposed LP-APA over a sparse multipath communication system. The simulation results are compared with those of the previously proposed sparsity-aware algorithms, including the ZA-APA and RZA-APA, as well as the standard APA and NLMS algorithms.
Here, we consider a sparse channel h whose length N is 32 and whose number of dominant active taps K is set to two different sparsity levels, namely, K = 1 and K = 4, similarly to past studies in [13, 17-19]. In all the simulations, the dominant active channel taps are drawn from a Gaussian distribution subject to ||h||_2^2 = 1, and the positions of these dominant active channel taps are randomly distributed within the length of the channel. The input signal x(n) used in this paper is a Gaussian random signal, while v(n) is additive zero-mean Gaussian noise with variance σ_v^2, which is independent of the input signal x(n). An example of a typical sparse multipath channel with a channel length of N = 32 and a sparsity level of K = 3 is depicted in Figure 2. In all the simulations, the power of the received signal is P_s = 1, and hence the signal-to-noise ratio (SNR) is defined as SNR = 10 log(P_s/σ_v^2). The difference between the actual and estimated channels obtained with these sparse adaptive channel estimation algorithms is evaluated by using the mean square error (MSE), defined as MSE(n) = E[||h - ĥ(n)||^2]. In this paper, the following parameters are used to obtain the channel estimation performance: μ_NLMS = 0.73, μ_APA = μ_ZA = μ_RZA = μ_LP = 0.5, ρ_ZA = ρ_RZA = 5 × 10^-4, ε_RZA = 10, p = 0.5, ρ_LP = 4 × 10^-5, and ε_LP = 0.05. Here, μ_NLMS is the step size of the NLMS algorithm. When investigating the effect of one of these parameters, we vary that parameter while holding the others fixed.
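The sparse channel generation and MSE evaluation described above can be sketched as follows (our own code; the random seed and helper names are arbitrary):

```python
import numpy as np

def random_sparse_channel(N=32, K=4, rng=None):
    """Sparse CIR: K Gaussian active taps at random positions, ||h||_2^2 = 1."""
    if rng is None:
        rng = np.random.default_rng()
    h = np.zeros(N)
    pos = rng.choice(N, size=K, replace=False)  # random active-tap positions
    h[pos] = rng.standard_normal(K)
    return h / np.linalg.norm(h)                # normalize to unit energy

def mse_db(h, h_hat):
    """Single-run MSE estimate in dB: 10 log10 ||h - h_hat||_2^2."""
    return 10.0 * np.log10(np.sum((h - h_hat) ** 2))
```

Since ||h||_2^2 = 1, an all-zero estimate gives an MSE of 0 dB, which is the natural starting level of the learning curves in Figures 3-6.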

Effects of Parameters on the Proposed LP-APA.
In the proposed LP-APA, two additional parameters, p and ρ_LP, are introduced to design the zero attractor compared with the conventional APA. Furthermore, we also investigate the performance of the LP-APA with different affine projection orders P. Next, we show how these three parameters affect the proposed LP-APA over a sparse channel with channel length N = 32 and sparsity level K = 4. The computer simulation results for different values of p, ρ_LP, and P are presented in Figures 3, 4, and 5, respectively. We can see from Figure 3 that the steady-state error of the proposed LP-APA is reduced as p increases from 0.4 to 0.5. When p = 0.6, the LP-APA achieves the same steady-state error as for p = 0.5. However, the steady-state performance worsens for p = 0.8 and p = 1. In fact, when p = 1, the proposed LP-APA reduces to the ZA-APA. In addition, we observe that the LP-APA achieves the same convergence speed in the early iteration stage; after that, the convergence speed of the LP-APA slows down with increasing p. Now we turn to the effects of ρ_LP on the proposed LP-APA. We can see from Figure 4 that the steady-state performance of our proposed LP-APA improves as ρ_LP decreases, as long as ρ_LP is greater than 4 × 10^-5. When ρ_LP decreases further, the steady-state error increases again. This is because a small ρ_LP results in a weak zero-attracting strength, which consequently reduces the convergence speed and degrades the steady-state performance. From the discussion of the effects of p and ρ_LP, it is observed that a small p can speed up the convergence and reduce the steady-state error of the LP-APA for 0.5 < p < 1. The effect of ρ_LP shown in Figure 4 is similar to that of ρ_ZA in the ZA-APA [20]. Thus, we can fix ρ_LP on the basis of the investigation of the ZA-APA and select a small p to obtain better performance.
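The derivation of the LP-APA itself appears in Section 3, but lp-norm-penalized adaptive filters commonly regularize the gradient of ||h||_p^p into an attractor of the form rho*p*sgn(h)/(eps + |h|^(1-p)). The following sketch of that selective pull is our own assumption, not the paper's exact expression, using the simulation parameters p = 0.5, ρ_LP = 4 × 10^-5, and ε_LP = 0.05:

```python
import numpy as np

def lp_attractor(h_hat, rho=4e-5, p=0.5, eps=0.05):
    """Assumed lp zero attractor: rho * p * sgn(h) / (eps + |h|^(1-p)).

    For p -> 1 this tends toward a uniform rho*sgn(h) pull (ZA-APA-like);
    for p < 1 small taps are attracted much more strongly than large ones,
    which matches the ZA-APA behavior of the LP-APA reported at p = 1.
    """
    return rho * p * np.sign(h_hat) / (eps + np.abs(h_hat) ** (1.0 - p))
```

With these values, a near-zero tap of magnitude 0.01 receives roughly a seven-times stronger attraction than a dominant tap of magnitude 1, which illustrates why p controls the balance between zero attraction and shrinkage bias.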
Then, we show the channel estimation performance of the LP-APA for different values of P. The simulation results are shown in Figure 5. It is found that the convergence speed improves significantly as the affine projection order P increases, for both the proposed LP-APA and the ZA-APA, while the steady-state errors of both algorithms increase. This is due to the data-reuse scheme in the APAs, which accelerates their convergence. Furthermore, we found that the LP-APA achieves faster convergence than the APA and NLMS algorithms with the same steady-state error floor when P = 8. Thus, we can conclude from the discussion above that the parameters p, ρ_LP, and P should be carefully selected to balance the convergence speed and the steady-state performance of the proposed LP-APA.

Effects of Sparsity Level K on the Proposed LP-APA.
In view of the results discussed above, we choose p = 0.5, ρ_LP = 4 × 10^-5, and P = 2 to evaluate the channel estimation performance of the LP-APA over a sparse channel with channel length N = 32 and K = 1 and 4; the simulation results obtained at an SNR of 30 dB are given in Figure 6. We can see from Figure 6(a) that our proposed LP-APA achieves the fastest convergence speed and the lowest steady-state error when K = 1, in comparison with the previously proposed ZA-APA and RZA-APA and the conventional APA and NLMS algorithms. When K = 4, we can see from Figure 6(b) that our proposed LP-APA still has the highest convergence speed; however, it achieves nearly the same steady-state error floor as the RZA-APA. This is because the sparsity-aware algorithms attract the inactive taps to zero quickly when K = 1, which greatly improves their convergence speed, whereas their convergence speed is reduced by the smaller number of zero taps when K = 4. As the sparsity of the channel decreases, the steady-state error floors deteriorate and the convergence speeds decrease for all the sparsity-aware APAs. Moreover, our proposed LP-APA still has the fastest convergence speed from K = 1 to K = 4. Thus, we can summarize this discussion by saying that the convergence speed and the steady-state performance of the LP-APA can be improved for sparse channel estimation applications by proper selection of its parameters.

Conclusion
In this paper, we proposed the LP-APA to exploit the sparsity of the broadband multipath channel and to speed up the convergence of the standard APA. The LP-APA was realized by incorporating an lp-norm into the cost function of the conventional APA, resulting in a zero attractor in its iterations, which attracts the inactive taps to zero quickly and hence accelerates the convergence of the APA. The simulation results showed that, with acceptable computational complexity, our proposed LP-APA increases the convergence speed and reduces the steady-state error relative to the APA as well as the ZA-APA and RZA-APA for sparse channel estimation.

Figure 4: Effects of ρ_LP on the proposed LP-APA.

Figure 5: Effects of the affine projection order P on the proposed LP-APA.

Figure 6: Performance of sparse channel estimation with different sparsity levels.