Performance Evaluation of LMS and CM Algorithms for Beamforming

In this paper, we compare the performance of the least mean square (LMS) and constant modulus (CM) algorithms for beamforming. Our interest in these algorithms stems from their reliability as a source-receiver pair. In addition, their use brings a great deal of frequency diversity, allowing a quick response to the increasing spectral demand. The results suggest that the greater the number of elements in the antenna array, the better the directivity, for both LMS and CM. We also note that a judicious choice of the control parameter μ leads to a better convergence speed for the two algorithms. Let us note, however, that LMS is more efficient. Our simulations show that in an environment affected by white Gaussian noise, LMS is more robust than CM. This confirms the theoretical result, which follows from the fact that LMS uses a learning sequence. Performance analyses of the two techniques are simulated in the MATLAB environment.


Introduction
Forecasts for the mobile communications market predict that traffic density will continue to increase through 2030, leading to the coexistence of a large number of users and a multitude of standards [1]. Given the high density of expected traffic, this brings us back to the challenges posed by the efficient allocation of resources between users. The impending problem of channel interference will be much more intense. Given their promising characteristics, smart antennas with adaptive beamforming algorithms can be used to suppress interference [2]. Adaptive filtering is frequently used for nonstationary signals and in applications that require low processing times. Applications of adaptive filters include noise cancellation, cellular mobile channel equalization, and echo cancellation. The reason for this interest in beamforming is that if the signal is transmitted correctly and then received by the base station in the direction of the desired receiver, interference in the considered system is reduced considerably. The output signal is adaptively filtered to detect the desired signal and reject interfering signals. The adaptive beamforming technique can be used at both transmission and reception to improve spatial selectivity [3]. The first algorithm our study addresses is the least mean square (LMS). It is widely used in the design of adaptive filters thanks to its various advantages. It was first introduced by Widrow and Hoff in 1959 and is based on stochastic gradient descent [4].
First, we examine the performance of the LMS algorithm from different angles, namely: the number of elements constituting the array, the angle of arrival, the step size, and finally the impact of noise. However, we point out that only additive Gaussian noise is considered; since LMS belongs to the category of nonblind adaptive algorithms, i.e., those with access to the statistical properties of the transmitted signal, this algorithm is theoretically expected to behave robustly with respect to this type of noise.
It is worth mentioning that, when an update brings no innovation in the data, one may appeal to the work carried out by Chien Ying-Ren and Chih-Hsiang [5]. Indeed, their algorithm offers great update possibilities in the presence of impulse noise (IN). It should also be remembered that the algorithm developed in this paper can be used in association with the innovative approach that Diniz developed in [6], commonly known as DS-LMS (data-selective least mean square).
Second, conventional equalization techniques use a training sequence known in advance by the receiver [7]. The receiver adapts the equalizer so that its output matches the known reference training signal as closely as possible. However, the inclusion of such sequences comes up against bandwidth limitations. In this case, blind adaptation is preferred. The most widely used and most interesting blind adaptation algorithm is the constant modulus algorithm (CMA). Briefly, CMA minimizes a cost defined by the constant modulus (CM) criterion: it penalizes deviations of the modulus of the equalized signal from a fixed value [8]. We submit this algorithm to the same analysis described above for the LMS. The article is organized as follows: in section 2, we briefly describe the model used. Section 3 gives a brief description of adaptive beamforming, which automatically adapts to changes in the environment.
This section also summarily describes the two algorithms, LMS and CM, while simulation-based experimental comparisons between these two filters are discussed in section 4. Section 5 presents our conclusions and our outlook for the future.

System Models
Let us assume a uniform linear array (ULA) with N isotropic elements. The output of the antenna array is described by the following equation:

y(k) = W^H x(k),  (1)

where x(k) is the (N × 1) complex snapshot vector of array observations, W = [W_1, W_2, ..., W_N]^T is the excitation weight vector, the superscript H denotes the Hermitian transpose, and k denotes the discrete time index. The main objective of the algorithms considered in this work is to reduce the error between the desired signal d(k), k ∈ {1, 2, ..., N}, and the output of the antenna array. This error can be written as

e(k) = d(k) − y(k).  (2)

The role of the various beamforming algorithms is to minimize this error by optimizing the weight vector so as to obtain the best correspondence between the output of the antenna and the desired signal. These algorithms are described in the next section.
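As an illustrative sketch (not part of the paper, whose simulations run in MATLAB), the ULA model above can be expressed in Python/NumPy; the steering angle, element count, and uniform weights below are assumptions chosen only for the example:

```python
import numpy as np

def steering_vector(theta_deg, N, d=0.5):
    """Response of an N-element ULA (spacing d in wavelengths) to a plane wave at theta."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

N = 8
W = np.ones(N, dtype=complex) / N      # uniform weights, purely illustrative
x = steering_vector(25, N)             # one noiseless snapshot x(k) from a source at 25 degrees
y = np.conj(W) @ x                     # array output y(k) = W^H x(k), as in equation (1)
e = 1.0 - y                            # error e(k) = d(k) - y(k) for a pilot sample d(k) = 1
```

The same `steering_vector` helper serves for any arrival angle, which is how the interference directions are modelled later on.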

Adaptive Beamforming Algorithms
As opposed to beamforming with fixed weights, adaptive beamforming adjusts the weights of the antenna elements adaptively to optimize the quality of the received signal under certain performance criteria. There are several types of adaptive beamforming algorithms, most of which belong to two major classes. Figure 1 shows the algorithm types, namely, those which use or do not use a training signal.
Nonblind adaptive beamforming algorithms make use of training sequences to update their weight vectors. Indeed, upon receipt, this information is used to calculate the new complex weight [9].
Adaptive beamforming is today an important and inescapable aspect of array processing, and it has been widely used in various technologies, namely radar, sonar, wireless communications, radio astronomy, and other fields [10].
The process is to transmit the signal correctly in the direction of the base station of the desired receiver. The output signal is then the linear combination of the weight vector W and the discrete input samples. We assume that the data and weights are complex because, in many applications, a quadrature receiver is used on each sensor to generate in-phase and quadrature data.

Least Mean Square Algorithm.
Generally, adaptive beamforming algorithms iteratively adjust the weights to minimize a cost function. In this sense, the least mean square (LMS) algorithm is commonly used, given its low complexity and its ability to integrate new observations. LMS performs a linear update of the weight vector along the negative direction of the gradient, following the steepest descent method. Beamforming therefore estimates the signal from the received signal by minimizing the error between the reference signal, which closely approaches the estimate of the desired signal d(k), and the output of the beamformer. Figure 2 gives a summary of the approach used by the LMS algorithm.
It should, however, be noted that the quadratic error function |e(k)|^2 has only one minimum, which implies the convergence of the steepest descent method [11].
For each index k, the LMS algorithm uses the gradient operator to update the weight vector, minimizing the mean square error J = E[|e(k)|^2], where E denotes the expectation operator. The weight update equation is

W(k + 1) = W(k) + μ x(k) e*(k),  (3)

where μ is the step-size parameter introduced to control the rate of adaptation and the steady-state behavior of the LMS algorithm. The process describes a series of iterations controlled by the step μ, each one updating the gradient estimate and the weights. The term e(k) = d(k) − y(k) is the prediction error, and d(k) designates a signal closely correlated with the desired original signal [12].
Under the assumption that the parameter μ is small enough, it is proved in [13] that the LMS algorithm converges if the step size satisfies the inequality 0 < μ < 2/λ_max, where λ_max denotes the largest eigenvalue of the (N × N) correlation matrix R_xx. We also point out that the positive constant μ controls the incremental correction applied to the weight vector; consequently, its size influences the speed of convergence and the shape of the learning curve [12]. The gradient vector in equation (3) can be calculated as

∇J(k) = −2 r_xd + 2 R_xx W(k),

where r_xd = E[x(k) d*(k)] denotes the (N × 1) cross-correlation vector between the input and the desired signal.
Clearly, the convergence of the LMS algorithm depends on the eigenvalue structure. This convergence can be slow if the eigenvalues are widely spread. Consequently, the largest admissible step size is dictated by the eigenvalues of the correlation matrix. Note also that the procedure used here requires a priori knowledge of the transmitted signal. This is achieved by sending a pilot sequence known at the receiver.
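The LMS recursion of equation (3) can be sketched as follows. This is a Python/NumPy illustration rather than the paper's MATLAB code; the unit-modulus pilot, the single interferer, the noise level, and the zero initialisation are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def steering_vector(theta_deg, N, d=0.5):
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

N, mu, K = 8, 0.05, 400                     # array size, step size, iterations
a_sig = steering_vector(25, N)              # desired source at 25 degrees
a_int = steering_vector(-35, N)             # one interferer at -35 degrees
W = np.zeros(N, dtype=complex)

for k in range(K):
    d = np.exp(2j * np.pi * rng.random())           # unit-modulus pilot symbol d(k)
    s_i = np.exp(2j * np.pi * rng.random())         # independent interfering symbol
    noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    x = d * a_sig + s_i * a_int + noise             # snapshot x(k)
    y = np.conj(W) @ x                              # y(k) = W^H x(k), equation (1)
    e = d - y                                       # e(k) = d(k) - y(k), equation (2)
    W = W + mu * x * np.conj(e)                     # W(k+1) = W(k) + mu x(k) e*(k), equation (3)
```

After convergence, the gain towards the desired direction approaches unity while the interferer direction is nearly nulled, which is the behaviour reported in the simulations below.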

Constant Modulus Algorithm.
A wide range of phase- or frequency-modulated signals have a constant complex envelope.
Therefore, these kinds of signals have a property called constant modulus (CM) [14]. Since the creation of CMA in the 1980s, the algorithm has been used successfully in numerous applications, including equalization for microwave radio links and blind beamforming, among others.
In some telecom systems, the use of a training sequence is not very desirable because it consumes a lot of resources. Techniques based on the constant modulus (CM) approach are of great help in this case [15].
CMA adjusts the weight vector of the adaptive filter to minimize the variation of the modulus of the signal at the array output. The CMA minimizes a cost function of the form

J(p, q) = E[(|y(k)|^p − 1)^q].

Once p and q are fixed, this defines the (p, q) CM cost function [16]; we also point out that the choice p = 1 and q = 2 leads to a very efficient signal-to-interference-plus-noise ratio (SINR). By applying the steepest descent method, the update of the weight vector is given by

W(k + 1) = W(k) − μ ∇J(k),

where the parameter μ designates the step size. When the (1, 2) cost function is chosen, applying the gradient to the cost function above and neglecting the expectation operation yields the recurrent formula

W(k + 1) = W(k) + μ x(k) e*(k),

where e(k) = y(k)/|y(k)| − y(k). A striking similarity is that the term y(k)/|y(k)| in the CM algorithm plays the same role as the desired signal d(k) in the LMS algorithm; the latter, however, remains dependent on the existence of a reference signal [16, 17].
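The (1, 2) CMA recursion differs from LMS only in how the error is formed, as the following Python/NumPy sketch shows (not the paper's code; the single constant-modulus source, noise level, and nonzero initialisation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def steering_vector(theta_deg, N, d=0.5):
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

N, mu, K = 8, 0.05, 400
a_sig = steering_vector(25, N)               # constant-modulus source at 25 degrees
W = np.zeros(N, dtype=complex)
W[0] = 1.0                                   # CMA needs a nonzero initial weight vector

for k in range(K):
    s = np.exp(2j * np.pi * rng.random())            # unit-modulus symbol, unknown to the receiver
    noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    x = s * a_sig + noise                            # snapshot x(k)
    y = np.conj(W) @ x                               # y(k) = W^H x(k)
    e = y / abs(y) - y                               # blind error e(k) = y(k)/|y(k)| - y(k)
    W = W + mu * x * np.conj(e)                      # same recursion as LMS, but no pilot needed
```

No training symbol appears anywhere in the loop: y(k)/|y(k)| stands in for d(k), which is exactly the blind property exploited by the algorithm.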

Results and Discussion
In this section, we evaluate the performance of the LMS and CM algorithms. We consider a phase-modulated signal with an arrival angle of 25°. Two interfering sources are assumed to impinge on the antenna array from the 0° and −35° directions. The noise variance is between 0.001 and 0.2 in all our simulations. N denotes the number of elements in the linear smart antenna, and the spacing between adjacent elements is fixed at half a wavelength (d = 0.5λ). The channel is assumed to be a flat-fading channel. All the simulation plots that follow are obtained over 400 independent iterations.
Keeping in view the scenario described above, we evaluate the effect of varying the number of elements constituting the antenna on the performance of the two algorithms. We set the step size to 0.05. Experiment 1 shows the amplitude response of the LMS algorithm for different numbers of elements (Figure 3). It is evident that with the LMS algorithm, the response has maximum amplitude at the desired angle (25°) and completely cancels out the unwanted signals (0° and −35°).

Advances in Materials Science and Engineering
Using 8 elements in the array clearly improved the response of the algorithm by narrowing the response band. Under the same assumptions, experiment 2 concerns the CM algorithm. The results shown in Figure 4 indicate that the final weight vector produces a peak in the desired direction of 25° and two nulls in the interference directions of 0° and −35°. It is crucial to note once again that the width of the beam narrows as the number of elements increases.
In Figure 3, which deals with the case of LMS, the optimal SLL obtained is approximately −17.74 dB. It is clear that the SLL is reduced from −17.23 dB (for N = 4) to −17.74 dB (for N = 8).
In Figure 4, which deals with the case of the CM algorithm, the optimal value of the SLL is approximately −15.26 dB. Note that the value of the SLL is reduced from −15.70 dB (for N = 4) to −15.26 dB (for N = 8).
From Figures 3 and 4, it can be seen that the level of the SLL decreases with the number of elements. The results of the simulations of the two algorithms, LMS and CM, show that the higher the number of elements in the array, the better the response of the algorithms. Note, however, that the weights in the LMS are updated using a training signal.
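The trend discussed above (a narrower main lobe as N grows) can be checked with a simple pattern sweep. The following Python/NumPy sketch uses conventional steered weights as an illustrative stand-in for the converged adaptive weights, so the exact SLL values will differ from those reported in Figures 3 and 4:

```python
import numpy as np

def steering_vector(theta_deg, N, d=0.5):
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

def beam_pattern_db(W, angles_deg):
    """Normalised amplitude response |W^H a(theta)| in dB over a grid of angles."""
    N = len(W)
    g = np.array([abs(np.conj(W) @ steering_vector(t, N)) for t in angles_deg])
    g = g / g.max()
    return 20 * np.log10(np.maximum(g, 1e-12))

angles = np.linspace(-90, 90, 361)           # 0.5 degree grid
for N in (4, 8):
    W = steering_vector(25, N) / N           # conventional weights steered to 25 degrees
    p = beam_pattern_db(W, angles)           # main lobe at 25 degrees, narrower for N = 8
```

Plotting `p` against `angles` for each N reproduces the qualitative behaviour of Figures 3 and 4: a 0 dB peak at 25° whose width shrinks as the array grows.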
It is also clear that our simulations illustrate the ability of the two algorithms to focus a wireless signal on a specific receiving device, instead of broadcasting it in all directions.
Note, however, that the complexity increases with the number of array elements while the beamwidth decreases [18], which results in an improvement in angular resolution. Figures 5 and 6 show two distinct results, the first for conventional LMS and the second for CM-based filtering. For the learning curves of the two techniques, we fix the number of elements in the array to 6 and establish the curves for two values of μ, namely, 0.05 and 0.09.
It is shown in Figure 5 that when the learning constant is set to the lowest value, μ = 0.05, the algorithm converges towards its minimum value after about 30 iterations. The magnitude of the error in this case is of the order of 11 × 10⁻⁴ dB.
This result indicates the great speed of the algorithm in updating the filter weights. However, when the step increases, the process takes more time to converge; indeed, it takes 100 iterations to achieve the expected result. Moreover, the magnitude of the error observed is of the order of 19 × 10⁻², which is strictly greater than in the case where μ is equal to 0.05. Note that the CM algorithm converges more slowly than the LMS algorithm (see Figure 6).
This slowness of convergence can be an obstacle, especially when the environment is dynamic. Note also that the main drawback of the two algorithms is that they require a judicious choice of the step-size parameter used to adjust their behavior. The effect of noise variance on the convergence of LMS and CM was also studied. To do this, two experiments were performed in which we varied the amount of input noise by varying the noise variance. As stated above, the performance of both algorithms depends on the step-size parameter, so we set μ equal to 0.05 for both techniques.
According to the results of the simulation (Figure 7), when the noise level of the system is multiplied by 10, the LMS algorithm shows great robustness and keeps its convergence rate almost stable. For the CMA, when the noise variance increases, its capabilities degrade; indeed, one notes from Figure 8 instability and a rather slow convergence speed.
For all practical purposes, in this work we have only considered the case of white Gaussian noise. The reader can refer to the work carried out in [5] for the case where the noise is not white Gaussian.
As a future perspective, it will be relevant to analyze the performance in a noisy environment by varying the number of iterations, the signal-to-noise ratio (SNR), and the step size, using the mean square error (MSE) as a performance criterion.

Conclusion
The results obtained for both LMS and CM proved that increasing the number of elements, with a fixed spacing between them in a linear array, results in higher directivity and a smaller beamwidth. This performance can be used to reject interfering signals in the directions specified by the antenna pattern. The results of this study also confirm the theoretical results concerning the speed of convergence. In fact, convergence is faster the smaller the step size, for both algorithms in question. The CM algorithm is well suited to situations where no reference signal is available, even if the results obtained have shown that this method is not the best in terms of speed and suppression of interfering signals. LMS filtering remains a strong candidate in terms of the performance trade-off between array dimensions and computational complexity.
Another aspect that tilts the balance towards LMS is its robustness, achieving high performance in noisy environments.
This gives it a great reputation and explains its adoption to overcome noise interference in various signal processing applications.
In a future perspective, we plan to examine the contribution of binary genetic algorithms in the synthesis of beamforming.
Data Availability

The present study examines the impact of the parameterization (angle of arrival, step size, noise variance, and number of elements in the array) on the behavior of the algorithms in question. The analyses of the convergence and of the sensitivity of the two techniques to system noise are supported by simulations in the MATLAB environment.

Conflicts of Interest
The authors declare that they have no conflicts of interest.