Optimized Hyper Beamforming of Linear Antenna Arrays Using Collective Animal Behaviour

A novel optimization technique developed by mimicking collective animal behaviour (CAB) is applied to the optimal design of hyper beamforming of linear antenna arrays. Hyper beamforming is based on the sum and difference beam patterns of the array, each raised to the power of a hyperbeam exponent parameter. The optimized hyperbeam is achieved by optimizing the current excitation weights and the uniform interelement spacing. Compared with conventional hyper beamforming of a linear antenna array, real coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) applied to the hyperbeam of the same array achieve a reduction in sidelobe level (SLL) and the same or smaller first null beam width (FNBW), keeping the same value of the hyperbeam exponent. Further reductions of SLL and FNBW are achieved by the proposed collective animal behaviour (CAB) algorithm, which, unlike RGA, PSO, and DE, finds a near-global optimal solution for the present problem. The comparative optimization is illustrated through 10-, 14-, and 20-element linear antenna arrays to establish the optimization efficacy of CAB.


Introduction
Beamforming is a signal processing technique used to control the directionality of the transmission and reception of radio signals [1]. This is achieved by arranging the elements of the array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. Hyper beamforming refers to a spatial processing algorithm that focuses an array of spatially distributed elements (called sensors) to increase the signal to interference plus noise ratio at the receiver. This processing significantly improves the gain of the wireless link over conventional technology, thereby increasing range, rate, and penetration [2][3][4]. It has found numerous applications in radar, sonar, seismology, wireless communication, radio astronomy, acoustics, and biomedicine [5]. Beamforming is generally classified as either conventional (switched and fixed) or adaptive. A switched beamforming system [6,7] chooses one pattern from many predefined patterns in order to enhance the received signals. Fixed beamforming uses a fixed set of weights and time delays (or phasing) to combine the signals received from the sensors in the array, primarily using only information about the locations of the sensors in space and the wave direction of interest [8]. Adaptive beamforming is based on maximizing the desired signal and minimizing the interference signal [9][10][11]; it is able to place the desired signal at the maximum of the main lobe. Hyper beamforming, like any other beamforming technique, offers high detection performance (narrow beam width and accurate target bearing estimation) and reduces false alarms through sidelobe suppression.
A new optimized hyper beamforming technique is presented in this paper, and collective animal behaviour (CAB) approach is applied to obtain optimal hyperbeam patterns [12,13] of linear antenna arrays.
The classical gradient-based optimization methods are not suitable for the optimal design of hyper beamforming of linear antenna arrays for the following reasons: (i) they are highly sensitive to starting points when the number of solution variables, and hence the size of the solution space, increases; (ii) they frequently converge to a local optimum solution, diverge, or revisit the same suboptimal solution; (iii) they require a continuous and differentiable objective function (gradient search methods); (iv) they require a piecewise linear cost approximation (linear programming); and (v) they suffer from problems of convergence and algorithmic complexity (nonlinear programming). So, evolutionary methods with better parameter control have been employed for the optimal design of hyper beamforming of linear antenna arrays. Different evolutionary optimization algorithms such as simulated annealing [14] and the genetic algorithm (GA) [15][16][17][18][19] have been widely used for synthesis problems with design constraints that would otherwise be unattainable. Among global optimization methods for antenna array design, GA seems to be a promising one. Standard GA (herein referred to as real coded GA (RGA)) performs well in finding the promising regions of the search space, but it is prone to revisiting the same suboptimal solutions.
The DE algorithm [31][32][33][34][35][36][37][38][39][40][41][42] was first introduced by Storn and Price in 1995 [31]. Like RGA, it is a randomized stochastic search technique built on the operations of crossover, mutation, and selection. DE is also prone to premature convergence and stagnation. So, to enhance the performance of optimization in both the global search (exploration) stage and the local search (exploitation) stage, the authors propose the collective animal behaviour (CAB) algorithm as an alternative technique for the hyper beamforming optimization problem.
The rest of the paper is arranged as follows. In Section 2, the design equations of hyper beamforming of a linear antenna array are formulated. Section 3 briefly discusses the evolutionary algorithms RGA, PSO, DE, and CAB employed for the designs. Section 4 describes the simulation results obtained by employing the algorithms. Finally, Section 5 concludes the paper.

Design Equations
The hyperbeam technique generates a narrower beam than the conventional beam, with improved SLL and FNBW performance that depends on the value of the hyperbeam exponent parameter u. In hyper beamforming for a linear antenna array, the interelement spacing in either direction is λ/2 in order to steer the beam in that particular direction. The sum beam is created by summation of the absolute values of the complex left and right half beams, as shown in Figure 1. The difference beam is the absolute magnitude of the difference between the complex right half beam and left half beam signals. Furthermore, the difference beam has a minimum in the direction of the sum beam at zero degrees, as shown in Figure 2. The resulting hyperbeam is obtained by subtraction of the sum and difference beams, each raised to the power of the exponent u. Consider a broadside linear array of equally spaced isotropic elements as shown in Figure 3. The array is symmetric in both geometry and excitation with respect to the array center [8].
For broadside beams, the array factor is given in [6]:

AF(θ) = 2 Σ_{n=1}^{N/2} I_n cos[((2n − 1)/2) k d sin θ],

where θ = angle of radiation of the electromagnetic plane wave; d = interelement spacing; k = 2π/λ = propagation constant; N = total number of elements in the array; I_n = excitation amplitude of the nth element. The equations for the creation of the sum, difference, and simple hyperbeam patterns in terms of the two half beams are as follows [8]:
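As an illustrative sketch (not the paper's code; all function and variable names here are mine), the symmetric array factor above can be evaluated numerically:

```python
import numpy as np

def array_factor(theta, weights, spacing_wl):
    """AF(theta) = 2 * sum_n I_n cos(((2n-1)/2) * k * d * sin(theta))
    for a symmetric broadside array of 2*len(weights) isotropic elements;
    spacing_wl is the interelement spacing d in wavelengths, so k*d =
    2*pi*spacing_wl."""
    kd = 2.0 * np.pi * spacing_wl
    n = np.arange(1, len(weights) + 1)          # element index in one half
    phase = np.outer(np.sin(theta), (2.0 * n - 1.0) / 2.0 * kd)
    return 2.0 * np.cos(phase) @ np.asarray(weights, dtype=float)

# 10-element uniform array (I_n = 1) with d = lambda/2
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af = array_factor(theta, np.ones(5), 0.5)
```

The broadside peak at θ = 0 equals 2 Σ I_n, as expected from the formula.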

Sum Pattern

|AF_sum(θ)| = |AF_L(θ)| + |AF_R(θ)|,

where AF_L(θ) and AF_R(θ) are the complex left and right half beams, each formed from N/2 elements.

Difference Pattern

|AF_diff(θ)| = |AF_R(θ) − AF_L(θ)|.

The hyperbeam is obtained by subtraction of the sum and difference beams, each raised to the power of the exponent u; the general equation of the hyperbeam as a function of the hyperbeam exponent u is

AF_Hyper(θ, u) = [(|AF_L(θ)| + |AF_R(θ)|)^u − |AF_R(θ) − AF_L(θ)|^u]^(1/u),

where u ranges from 0.2 to 1. If u lies below 0.2, the hyperbeam pattern will contain a deep spike at the peak of the main beam without any other change in the hyperbeam pattern. If u increases beyond 1, the sidelobes of the hyperbeam become larger than those of the conventional radiation pattern. All the antenna elements are assumed isotropic. Only the amplitude excitations and the interelement spacing are used to change the antenna radiation pattern. The cost function (CF) for improving the SLL of the radiation pattern of hyperbeam linear antenna arrays is given by

CF = (|AF_Hyper(θ_msl1, u)| + |AF_Hyper(θ_msl2, u)|) / |AF_Hyper(θ_0, u)|,

where θ_0 is the angle at which the highest maximum of the central beam is attained in θ ∈ [−π/2, π/2]; θ_msl1 is the angle at which the maximum sidelobe AF_Hyper(θ_msl1, u) is attained in the lower band of the hyperbeam pattern; and θ_msl2 is the angle at which the maximum sidelobe AF_Hyper(θ_msl2, u) is attained in the upper band of the hyperbeam pattern. In CF, both the numerator and the denominator are absolute magnitudes. Minimization of CF means maximum reduction of SLL. RGA, PSO, DE, and CAB are employed individually for minimization of CF by optimizing the current excitation weights of the elements and the interelement spacing. Results of the minimization of CF and SLL are described in Section 4.
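A numerical sketch of the hyperbeam pattern and the cost function follows, assuming the common [(sum)^u − (difference)^u]^(1/u) form of the hyperbeam and a simple first-null search for the sidelobe maxima; the function names are illustrative:

```python
import numpy as np

def hyperbeam(theta, weights, spacing_wl, u):
    """Hyperbeam sketch: left/right half beams of a symmetric
    2*len(weights)-element array, combined as (sum^u - diff^u)^(1/u)."""
    kd = 2.0 * np.pi * spacing_wl
    n = np.arange(1, len(weights) + 1)
    phase = np.outer(np.sin(theta), (2.0 * n - 1.0) / 2.0 * kd)
    w = np.asarray(weights, dtype=float)
    right = np.exp(1j * phase) @ w              # right half beam
    left = np.exp(-1j * phase) @ w              # left half beam
    s = np.abs(left) + np.abs(right)            # sum beam
    d = np.abs(right - left)                    # difference beam
    return np.maximum(s**u - d**u, 0.0) ** (1.0 / u)

def cost_function(theta, weights, spacing_wl, u):
    """CF = (msl1 + msl2) / main-beam peak, with the sidelobe maxima
    searched beyond the first nulls on each side of the main beam."""
    af = hyperbeam(theta, weights, spacing_wl, u)
    i0 = int(np.argmax(af))
    lo = i0
    while lo > 0 and af[lo - 1] < af[lo]:
        lo -= 1                                  # lower first null
    hi = i0
    while hi < len(af) - 1 and af[hi + 1] < af[hi]:
        hi += 1                                  # upper first null
    msl1 = af[:lo].max() if lo > 0 else 0.0
    msl2 = af[hi + 1:].max() if hi < len(af) - 1 else 0.0
    return (msl1 + msl2) / af[i0]

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
cf_u1 = cost_function(theta, np.ones(5), 0.5, 1.0)
cf_u05 = cost_function(theta, np.ones(5), 0.5, 0.5)
```

For a 10-element uniform array with d = λ/2, a smaller exponent yields a lower CF, consistent with the sidelobe-reduction behaviour described above.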

Real Coded Genetic Algorithm (RGA).
The real coded genetic algorithm (RGA) is mainly a probabilistic search technique based on the principles of natural selection and evolution. At each generation, it maintains a population of individuals, where each individual is a coded form of a possible solution of the problem at hand, called a chromosome. Chromosomes are constructed over some particular alphabet, for example, the binary alphabet {0, 1}, so that chromosome values are uniquely mapped onto the real decision variable domain. Each chromosome is evaluated by a function known as the fitness function, which is usually the objective function of the corresponding optimization problem [15][16][17][18][19].
The basic steps of RGA are shown as follows.
Step 1. Initialize the real chromosome strings of the population, each consisting of a set of current excitation weight coefficients and the uniform interelement spacing.
Step 2. Decode the strings and evaluate the CF value of each string.
Step 3. Select the elite strings in ascending order of CF values, starting from the minimum.
Step 4. Copy the elite strings over the nonselected strings.
Step 5. Generate the offspring through crossover and mutation.
Step 6. Stop the iteration when the maximum number of cycles is reached. The grand minimum CF and its corresponding chromosome string, that is, the desired solution of optimal current excitation weight coefficients and optimal interelement spacing, are finally obtained.
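The RGA steps above can be sketched as a minimal real-coded GA. The sphere function stands in for the paper's CF, and all parameter values and names are illustrative, not the paper's settings:

```python
import random

def rga_minimize(cost, bounds, pop_size=40, gens=200,
                 p_cross=0.8, p_mut=0.05, n_elite=2):
    """Minimal real-coded GA sketch: tournament selection, arithmetic
    crossover, uniform mutation, and elitism. `bounds` is a list of
    (lo, hi) limits, one per decision variable."""
    def tournament(scored):
        return min(random.sample(scored, 3), key=cost)

    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=cost)
        new_pop = [s[:] for s in scored[:n_elite]]      # copy elite strings
        while len(new_pop) < pop_size:
            p1, p2 = tournament(scored), tournament(scored)
            if random.random() < p_cross:               # arithmetic crossover
                a = random.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            else:
                child = p1[:]
            for i, (lo, hi) in enumerate(bounds):       # uniform mutation
                if random.random() < p_mut:
                    child[i] = random.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)

random.seed(1)
best = rga_minimize(lambda x: sum(v * v for v in x), [(-1.0, 1.0)] * 4)
```

Elitism makes the best CF value non-increasing across generations, matching the "grand minimum CF" notion in Step 6.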

Particle Swarm Optimization (PSO)
PSO is a flexible, robust, population-based stochastic search and optimization technique with implicit parallelism, which can easily handle nondifferentiable objective functions, unlike traditional gradient-based optimization methods. PSO is less susceptible to getting trapped in local optima than GA, simulated annealing, and so forth. Eberhart et al. developed the PSO concept from the behaviour of a swarm of birds [20][21][22][23][24][25][26][27][28][29][30]. PSO was developed through simulation of bird flocking and fish schooling in multidimensional space, in which the flock optimizes a certain objective function. Each particle knows its best value so far (pbest); this information corresponds to the personal experience of each particle. Moreover, each particle knows the best value so far in the group (gbest) among all the pbests. Namely, each particle tries to modify its position using the following information: (i) the distance between the current position and its pbest; (ii) the distance between the current position and the gbest.

Mathematically, the velocities of the particle vectors are modified according to the following equation:

V_i^(k+1) = w · V_i^(k) + C1 · rand · (pbest_i − S_i^(k)) + C2 · rand · (gbest − S_i^(k)),   (7)

where V_i^(k) is the velocity of vector i at iteration k; w is the weighting function; C1 and C2 are called the social and cognitive constants, respectively; rand is a random number between 0 and 1; S_i^(k) is the current position of vector i at iteration k; pbest_i is the pbest of vector i; gbest is the gbest of the group of vectors at iteration k. The first term of (7) is the previous velocity of the vector. The second and third terms are used to change the velocity of the vector. Without the second and third terms, the vector would keep on "flying" in the same direction until it hit the boundary. The parameter w corresponds to a kind of inertia and tries to explore new areas. Here, each vector is the string of real current excitation weight coefficients (N/2 in number) and the uniform interelement spacing (one in number). The total number of variables in each vector is n_var = N/2 + 1.
The best values of C1 and C2, and the resulting CF values, are found to vary with the design sets.
The inertia weight w^(k+1) at the (k + 1)th cycle is given by

w^(k+1) = w_max − ((w_max − w_min)/k_max) · (k + 1),

where w_max = 1.0; w_min = 0.4; k_max = maximum number of iteration cycles. The searching point (updated vector) in the solution space is modified by

S_i^(k+1) = S_i^(k) + V_i^(k+1).   (11)

The basic steps of PSO are shown as follows.
Step 1 (initialization). Set the population (swarm size) of particle vectors, n_p = 120; the maximum number of iteration cycles, 100; N/2 current excitation weights and one uniform interelement spacing, so that the total number of optimizing coefficients is n_var = N/2 + 1; fix the values of C1 and C2 at 1.5; set the minimum and maximum values of the current excitation coefficients, I_min = 0 and I_max = 1; set the minimum and maximum values of the interelement spacing, d_min = 0.5λ and d_max = λ; initialize the velocities of all the particle vectors.
Step 2. Generate the initial particle vectors, each consisting of current excitation weights and a uniform interelement spacing chosen randomly within the limits, and compute the initial CF values of the total population.

Step 3. Compute the population-based minimum CF value, the personal best solution vectors (pbest), and the group best solution vector (gbest).

Step 4. Update the velocities as per (7); update the particle vectors as per (11), checking against the limits of the current excitation weight coefficients and the single uniform interelement spacing; finally, compute the updated CF values of the particle vectors and the population-based minimum CF value.

Step 5. Update the pbest vectors and the gbest vector; reuse the updated particle vectors as initial particle vectors for Step 4.

Step 6. Continue the iteration from Step 4 until the maximum number of iteration cycles is reached or the minimum CF values converge; finally, gbest is the vector of optimal current excitation weights (N/2 in number) and the uniform interelement spacing (one in number).
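The PSO steps above can be sketched as follows, using the C1 = C2 = 1.5 and w ∈ [0.4, 1.0] values quoted in the text; the sphere function again stands in for the paper's CF, and all names are illustrative:

```python
import random

def pso_minimize(cost, bounds, swarm=30, iters=150, c1=1.5, c2=1.5,
                 w_max=1.0, w_min=0.4):
    """Minimal PSO sketch with a linearly decreasing inertia weight,
    velocity update as in (7), and position update as in (11)."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters     # inertia weight schedule
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # limits
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)[:]
    return gbest

random.seed(2)
best = pso_minimize(lambda x: sum(v * v for v in x), [(-1.0, 1.0)] * 4)
```

Clamping each updated coordinate to its bounds corresponds to the limit check in Step 4.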

Differential Evolution (DE) Algorithm.
The crucial idea behind the DE algorithm is a scheme for generating trial parameter vectors by adding the weighted difference between two population vectors to a third vector. Like any other evolutionary algorithm, DE evolves a population of NP D-dimensional parameter vectors (so-called individuals), which encode the candidate solutions X_i,G = {x_1,i,G, x_2,i,G, ..., x_D,i,G}, where i = 1, 2, 3, ..., NP. The initial population (at G = 0) should cover the entire search space as much as possible by uniformly randomizing individuals within the search space constrained by the prescribed minimum and maximum parameter bounds. For example, the initial value of the jth parameter of the ith vector is

x_j,i,0 = x_j,min + rand(0, 1) · (x_j,max − x_j,min),

where i = 1, 2, 3, ..., NP. The random number generator rand(0, 1) returns a uniformly distributed random number from within the range [0, 1). Commonly used mutation strategies are:

"DE/best/1": V_i,G = X_best,G + F · (X_r1,G − X_r2,G);
"DE/rand-to-best/1": V_i,G = X_i,G + F · (X_best,G − X_i,G) + F · (X_r1,G − X_r2,G);
"DE/best/2": V_i,G = X_best,G + F · (X_r1,G − X_r2,G) + F · (X_r3,G − X_r4,G);
"DE/rand/2": V_i,G = X_r1,G + F · (X_r2,G − X_r3,G) + F · (X_r4,G − X_r5,G).

The indices r1, r2, r3, r4, r5 are mutually exclusive integers randomly chosen from the range [1, NP], and all are different from the base index i. These indices are randomly generated once for each mutant vector. The scaling factor F is a positive control parameter for scaling the difference vectors. X_best,G is the individual vector with the best fitness value in the population at generation G. In the present work, (17) has been used.
(b) Crossover. To complement the differential mutation search strategy, a crossover operation is applied to increase the potential diversity of the population. The mutant vector V_i,G exchanges its components with the target vector X_i,G to generate a trial vector U_i,G = {u_1,i,G, u_2,i,G, ..., u_D,i,G}. In the basic version, DE employs the binomial (uniform) crossover defined as

u_j,i,G = v_j,i,G if (rand_j,i(0, 1) ≤ CR or j = j_rand), otherwise u_j,i,G = x_j,i,G,

where j = 1, 2, ..., D. The crossover rate CR is a user-specified constant within the range (0, 1), which controls the fraction of parameter values copied from the mutant vector. j_rand is a randomly chosen integer in the range [1, D]. The binomial crossover operator copies the jth parameter of the mutant vector V_i,G to the corresponding element in the trial vector U_i,G if rand_j,i(0, 1) ≤ CR or j = j_rand; otherwise, it is copied from the corresponding target vector X_i,G.
(c) Selection. To keep the population size constant over subsequent generations, the next step of the algorithm calls for selection to determine whether the target or the trial vector survives to the next generation, that is, at G = G + 1. The selection operation is described as

X_i,G+1 = U_i,G if f(U_i,G) ≤ f(X_i,G), otherwise X_i,G+1 = X_i,G,

where f(·) is the CF (in this work) to be minimized. So, if the new trial vector yields an equal or lower value of the objective function, it replaces the corresponding target vector in the next generation; otherwise, the target is retained in the population. Hence, the population either improves (with respect to the minimization of the objective function) or remains the same in fitness status, but never deteriorates. The above three steps are repeated generation after generation until some specific termination criteria are satisfied.
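A minimal DE sketch of the mutation, crossover, and selection steps follows. Which of the listed mutation strategies the paper's (17) denotes is not recoverable here, so DE/best/1 is shown as a representative choice, with the sphere function standing in for the CF:

```python
import random

def de_minimize(cost, bounds, NP=30, gens=150, F=0.8, CR=0.9):
    """Minimal DE sketch: DE/best/1 mutation, binomial crossover, and
    greedy one-to-one selection. `bounds` is a list of (lo, hi) limits."""
    D = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(NP)]
    for _ in range(gens):
        best = min(pop, key=cost)
        for i in range(NP):
            r1, r2 = random.sample([j for j in range(NP) if j != i], 2)
            # mutation (DE/best/1): v = x_best + F * (x_r1 - x_r2)
            mutant = [best[d] + F * (pop[r1][d] - pop[r2][d])
                      for d in range(D)]
            j_rand = random.randrange(D)
            # binomial crossover: take the mutant component if
            # rand <= CR or j == j_rand, else keep the target component
            trial = [mutant[d] if (random.random() <= CR or d == j_rand)
                     else pop[i][d] for d in range(D)]
            trial = [min(max(t, lo), hi)
                     for t, (lo, hi) in zip(trial, bounds)]
            if cost(trial) <= cost(pop[i]):     # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

random.seed(4)
best = de_minimize(lambda x: sum(v * v for v in x), [(-1.0, 1.0)] * 4)
```

The greedy selection step guarantees the population never deteriorates, as noted above.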

Control Parameter Selection of DE.
Proper selection of the control parameters is very important for the success and performance of the algorithm. The optimal control parameters are problem-specific, so the set of control parameters that best fits each problem has to be chosen carefully. Values of F lower than 0.3 may result in premature convergence, while values greater than 1 tend to slow down the convergence speed. Large populations help maintain diverse individuals but also slow down the convergence speed. In order to avoid premature convergence, F or NP should be increased, or CR should be decreased. Larger values of F result in larger perturbations and better probabilities of escaping from local optima, while a lower CR preserves more diversity in the population, thus also helping to avoid local optima.

Algorithmic Description of DE
Step 1 (generation of initial population). Set the generation counter G = 0 and randomly initialize NP D-dimensional individuals (parameter vectors/target vectors) X_i,G, where i = 1, 2, 3, ..., NP. The initial population (at G = 0) should cover the entire search space as much as possible by uniformly randomizing individuals within the search space constrained by the prescribed minimum and maximum parameter bounds.

Step 2 (mutation). For i = 1 to NP, generate a mutated vector V_i,G = {v_1,i,G, v_2,i,G, ..., v_D,i,G} corresponding to the target vector X_i,G via mutation strategy (17).

Collective Animal Behavior (CAB).
CAB is an optimization technique which mimics the collective behaviour of animals [12,13]. The CAB algorithm assumes the existence of a set of operations that resemble the interaction rules that model collective animal behaviour. In this approach, each solution within the search space represents an animal position. The "fitness value" refers to the animal's dominance with respect to the group. The complete process mimics collective animal behaviour. CAB implements a memory for storing the best solutions (animal positions), mimicking the aforementioned biological process. Such memory is divided into two different elements, one (M_g) for maintaining the best locations at each generation and the other (M_h) for storing the best historical positions during the complete evolutionary process. The CAB algorithm [12,13] is an iterative process that starts by initializing the population randomly (with generated random solutions, or animal positions). Then, the following four operations are applied until a termination criterion is met (i.e., the maximum number of iteration cycles NI):

Description of the CAB Algorithm.
(i) keep the position of the best individuals; (ii) move from or to nearby neighbours (local attraction and repulsion); (iii) move randomly; (iv) compete for the space within a determined distance (update the memory).

Initializing the Population. The algorithm begins by initializing a set A of N_p animal positions (A = {a_1, a_2, ..., a_Np}). Each position is randomly chosen between the prescribed parameter bounds, with j and i being the parameter and individual indexes, respectively; a_j,i is the jth parameter of the ith individual. All the initial positions are sorted according to the fitness function (dominance) to form a new individual set X = {x_1, x_2, ..., x_Np}, so that the best B positions can be chosen and initially stored in both memories M_g and M_h. Thus, both memories share the same information only at this initial stage.

Keep the Position of the Best Individuals.
Analogous to the biological metaphor, this behavioural rule, typical of animal groups, is implemented as an evolutionary operation in our approach. In this operation, the first B elements ({a_1, a_2, ..., a_B}) of the new animal position set A are generated. Such positions are computed from the values contained in the historical memory M_h, considering a slight random perturbation around them. This operation is modelled as

a_j = m_h^j + v,

where j ∈ {1, 2, ..., B}, m_h^j represents the jth element of the historical memory M_h, and v is a random vector of small enough length.

Move from or to Nearby Neighbours.
Following the biological inspiration, animals experience a random local attraction or repulsion according to an internal motivation. Therefore, new evolutionary operators are implemented that mimic this biological pattern. For this operation, if a uniform random number generated within the range [0, 1] is less than a threshold H, a given individual position is attracted/repelled considering the nearest best historical position within the group (i.e., the nearest position in M_h); otherwise, it is attracted/repelled considering the nearest best location within the group for the current generation (i.e., the nearest position in M_g). Such operations are modelled as

a_i = x_i + r · (m_h^nearest − x_i)   or   a_i = x_i + r · (m_g^nearest − x_i),

where i ∈ {B + 1, B + 2, ..., N_p}, m_h^nearest and m_g^nearest represent the nearest elements of M_h and M_g to x_i, respectively, and r is a random number within [−1, 1]. Therefore, if r > 0, the individual position x_i is attracted to the position m_h^nearest or m_g^nearest; otherwise, the movement is a repulsion.
Move Randomly. With a certain probability, some individuals are moved to random positions:

a_i = r,

where i ∈ {B + 1, B + 2, ..., N_p} and r is a random vector defined within the search space. This operator is similar to reinitializing the individual at a random position, as is done by (27).

Compete for the Space within a Determined Distance (Update the Memory).
Once the operations to keep the position of the best individuals, to move from or to nearby neighbours, and to move randomly have been applied to all animal positions, generating new positions, it is necessary to update the memory M_h. To update the memory M_h, the concept of dominance is used. Animals that interact within a group maintain a minimum distance among themselves. Such a distance ρ, defined within the search space, depends on how aggressively the animal behaves. Hence, when two animals confront each other inside this distance, the more dominant individual prevails, while the other withdraws. The historical memory M_h is updated by the following procedure.
(1) The elements of M_h and M_g are merged into a combined memory of 2B elements, {m_1, m_2, ..., m_2B}.

(2) Each element of the combined memory is compared pairwise to the remaining memory elements ({m_1, m_2, ..., m_2B−1}). If the distance between the two elements is less than ρ, the element with the better fitness value prevails, while the other is removed.

(3) From the elements resulting from step (2), the best B values are selected to build the new M_h.
The computational steps for the CAB algorithm can be summarized as follows.
Step 1. Set the population size of N_p vectors (each having N/2 current excitation weight coefficients and one uniform interelement spacing, n_var = N/2 + 1, in an n_var-dimensional search space), the CAB control parameters, and NI (the maximum number of generations).
Step 2. Randomly generate the position set A of N_p animal positions within the parameter bounds.

Step 3. Sort A according to the CF (dominance) to build the set X.

Step 4. Choose the first B positions of X and store them in the memory M_g.
Step 5. Update M_h according to the dominance-based memory update procedure described above.

Step 6. Generate the first B positions of the new animal position set A from the elements of M_h, making a slight random perturbation around them: a_j = m_h^j + v, v being a random vector of small enough length.
Step 7. Generate the rest of the elements of A using the attraction/repulsion and random movements: if (r_1 < P), apply the attraction and repulsion movement; otherwise, apply the random movement, r_1 being a uniform random number in [0, 1].

Step 8. If the maximum number of iteration cycles (NI) is completed, the process is finished; otherwise, go back to Step 3. The best value in M_h represents the global solution for the current excitation weight coefficients (N/2 in number) and the uniform interelement spacing (one in number).
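The CAB steps above can be sketched as follows, again with the sphere function standing in for the CF; the parameter names (B, P, H, rho) and values are assumptions for illustration, not the paper's notation:

```python
import random

def cab_minimize(cost, bounds, n_p=30, B=6, P=0.8, H=0.5, rho=0.05, NI=150):
    """Minimal CAB sketch: keep the B best positions (slightly perturbed),
    move the rest by attraction/repulsion toward the nearest memory element
    or randomly, and update the historical memory by dominance within a
    minimum distance rho."""
    D = len(bounds)
    span = [hi - lo for lo, hi in bounds]

    def rand_pos():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    X = [rand_pos() for _ in range(n_p)]
    Mg = sorted(X, key=cost)[:B]            # best of the current generation
    Mh = [m[:] for m in Mg]                 # best historical positions
    for _ in range(NI):
        newX = []
        for j in range(B):                  # keep best, slightly perturbed
            newX.append(clip([Mh[j][d] + random.gauss(0.0, 0.01 * span[d])
                              for d in range(D)]))
        for j in range(B, n_p):             # attraction/repulsion or random
            if random.random() < P:
                mem = Mh if random.random() < H else Mg
                near = min(mem, key=lambda m: dist(m, X[j]))
                r = random.uniform(-1.0, 1.0)   # r > 0 attracts, r < 0 repels
                newX.append(clip([X[j][d] + r * (near[d] - X[j][d])
                                  for d in range(D)]))
            else:
                newX.append(rand_pos())     # random movement
        X = newX
        Mg = sorted(X, key=cost)[:B]
        merged = sorted(Mh + Mg, key=cost)  # dominance-based memory update
        Mh = []
        for c in merged:
            if len(Mh) < B and all(dist(c, m) >= rho for m in Mh):
                Mh.append(c)
        while len(Mh) < B:                  # pad if pruning removed too many
            Mh.append(rand_pos())
    return min(Mh, key=cost)

random.seed(3)
best = cab_minimize(lambda x: sum(v * v for v in x), [(-1.0, 1.0)] * 2)
```

Because the best merged element is always retained in M_h, the best fitness value never deteriorates across generations.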

Simulation Results
All simulation results were obtained by programming in the MATLAB language using MATLAB 7.5 on a dual-core processor, 2.88 GHz, with 2 GB RAM. Table 1 shows the best chosen parameters for RGA, PSO, DE, and CAB, respectively.

Table 2 lists, for each algorithm, the optimized current excitation weights and the optimized interelement spacing. The following observations are made from Table 3 and the corresponding radiation patterns.

Convergence Profiles of RGA, PSO, DE, and CAB
The algorithms can be compared in terms of their cost function (CF) values. Figures 10 and 11 show the convergence of the log10(CF) values obtained for the 10-element array sets, for u = 0.5 and u = 1, respectively, as RGA, PSO, DE, and CAB are employed. CAB converges to the least minimum CF, whereas RGA, PSO, and DE yield suboptimal, higher values of CF. CAB thus yields the near-global optimal current excitation weights and optimal interelement spacing for the hyperbeam of linear antenna arrays. The computational times for CAB are less than those of RGA and DE but not that of PSO. In view of the above, it may be inferred that the performance of the proposed CAB algorithm is the best among the algorithms for solving the optimization problem of hyper beamforming design.

Conclusions
In this paper, a novel algorithm based on collective animal behaviour (CAB) is used for finding the best optimal nonuniform excitation weights I_n (0 < I_n ≤ 1) and the optimal uniform interelement spacing d (λ/2 ≤ d < λ) for hyper beamforming of linear antenna arrays. Three broad cases of arrays are considered in the study. The first two cases are (i) conventional uniformly excited (I_n = 1) linear antenna arrays with interelement spacing d = λ/2 and (ii) nonoptimized, uniformly excited (I_n = 1) hyper beamforming of linear antenna arrays with interelement spacing d = λ/2. The last case is of actual concern: hyper beamforming of linear antenna arrays with optimized interelement spacing (λ/2 ≤ d < λ) along with optimized nonuniform excitations (0 < I_n ≤ 1). The optimization algorithms considered are RGA, PSO, DE, and CAB. Extensive experimental results reveal that RGA, PSO, and DE become entrapped in suboptimal designs, whereas collective animal behaviour (CAB) yields optimal designs, offering the highest reduction in sidelobe level (SLL) and a much improved first null beam width (FNBW) compared with the other two cases, for any value of the hyperbeam exponent parameter.