Evolutionary algorithms play an important role in synthesizing linear antenna arrays. In this paper, the authors compare quantum particle swarm optimization (QPSO) and the backtracking search algorithm (BSA) for failure correction of linear antenna arrays constructed from uniformly spaced half-wavelength dipoles. QPSO combines classical PSO with quantum mechanics principles to enhance PSO's performance, while BSA can be regarded as a modernized PSO that exploits a historical population. The two algorithms are applied to obtain the voltage excitations of the nondefective elements in the failed array so that the necessary objectives, namely, the minimization of the side lobe level (SLL) and the voltage standing wave ratio (VSWR), are achieved, with the obtained values closely matching the desired ones. The results of both algorithms are compared in terms of the parameters used in the objective function along with their statistical measures. Moreover, to reduce the processing time, the inverse fast Fourier transform (IFFT) is used to obtain the array factor. An example is presented applying the two algorithms to a linear array of 30 parallel half-wavelength dipole antennas with 4 failed elements; the results clearly show the effectiveness of both QPSO and BSA in terms of optimized parameters, statistical values, and processing time.
1. Introduction
Any failure in the elements of an antenna array [1] corrupts the radiation pattern and causes its parameters, namely, side lobe level, first null beam width (FNBW), half power beam width (HPBW), and so forth, to deviate, thus degrading the performance of the communication system. Past research has depicted various ways of recovery [2–4]. A simple way to recover the radiation pattern is to manually replace the failed antenna elements, which is impractical in applications such as satellites and radar. The literature shows that adjusting the beam weights of the remaining nondefective elements of the array, so as to produce a radiation pattern approaching the original one in terms of its parameters, is the approach most extensively used. Mailloux [2] utilized the idea of replacing the signals in a digitally beamformed array in a multiple-signal environment without placing any restrictions on the correlation properties of the signals. A method [3] based on an evolutionary algorithm, namely, the genetic algorithm [3–5], has been utilized for failure correction by treating the beam weights as a vector of complex numbers. This algorithm was also used successfully by Rodríguez et al. [4] in such a way that excitation adjustments were needed for only a minimum number of elements to recover the radiation pattern. Thus, the genetic algorithm established itself as one of the earliest algorithms in this area, primarily because of its different mating schemes, decimal crossover principles, and so forth. It was followed by particle swarm optimization [6], which models the intelligence and cooperation rules followed by a flock of birds.
Since then, evolutionary algorithms have occupied the prime position in providing beam-weight adjustments for the correction of radiation patterns. The beam weights can refer to either the amplitude only or both the amplitude and phase excitations of the individual elements. Amplitude-only control avoids the unnecessary use of phase shifters, thus reducing the overall complexity of the feed circuitry.
Research has also shown the validity of algorithms [7] outperforming others in terms of the optimized output values, convergence time, and statistical measures such as the mean fitness value, standard deviation, and so forth. Moreover, the literature reveals that the inverse fast Fourier transform [5, 8, 9] can be used to speed up the whole failure correction process. Here, the relationship of the inverse fast Fourier transform (IFFT) with the current excitations of the antenna, leading to the array factor, is exploited; it has previously been used for the synthesis of low side lobe radiation patterns [9]. This relationship is used here for the sole purpose of reducing the overall computing time. Wang et al. [5] incorporated the principles of the IFFT to obtain the array factor in a short time and successfully verified the idea for both linear and planar arrays.
In this paper, mutual coupling [8, 10–12] between the elements is taken into consideration because, in practice, mutual coupling deteriorates not only the antenna radiation pattern but also the matching characteristics. In other words, it reduces antenna efficiency and performance in both transmit and receive modes.
Here, the problem is to correct the radiation pattern of a 30-element linear antenna array with 4 failed elements, with the objective of making the SLL and VSWR approach the expected values. Two algorithms, namely, QPSO [13, 14] and BSA [15, 16], have been used for this purpose. The SLL refers to the ratio of the amplitude of the highest side lobe to that of the main lobe of a radiation pattern and is usually expressed in decibels. The second parameter used for correction, the VSWR, is a measure of the reflected power; it describes how well the antenna is matched to the transmission line to which it is connected.
The rest of the paper is organized as follows. Section 2 covers the methodology used which also includes the radiation pattern equation of linear antenna with mutual coupling effects and the fitness function that has been used in this problem. Sections 3 and 4 detail the quantum particle swarm optimization and backtracking search optimization evolutionary algorithm principles. Section 5 depicts the simulated results followed by conclusion in Section 6 and references.
2. Methodology
The radiation pattern F(ϕ) in the x-y plane of a linear array [1, 10] of parallel dipole antennas uniformly spaced a distance d apart along the x-axis is given by
(1) F(\phi) = AF(\phi) \times EP(\phi) = \left[ \sum_{n=1}^{N} I_n \, e^{j(n-1)kd\cos\phi} \right] \times EP(\phi),
where AF is the array factor, EP is the element pattern of a vertical half-wavelength dipole antenna (assumed omnidirectional in azimuth), N is the total number of elements, n is the element number, d is the spacing between individual elements in the array, k = 2π/λ is the wave number, λ is the wavelength of operation, ϕ is the azimuth angle as shown in Figure 1, and In is the current amplitude of the nth dipole element. Substituting u = cos ϕ in (1) leads to the far-field pattern in the u-domain.
Figure 1: Linear array of parallel half-wavelength dipole antennas along the x-axis.
Moreover, the current amplitudes are obtained by dividing the input voltage excitations of the array by the mutual impedance components stored in a mutual impedance matrix Z [10, 11]. The terms of this matrix Z, namely, the self-impedances and mutual impedances, are calculated by assuming sinusoidal current distributions on the dipoles. The array pattern has been computed directly through a 4096-point IFFT [5, 8, 9] operation on the current excitation amplitudes, which drastically reduces the array factor computation time.
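As a concreteness check, the IFFT shortcut can be sketched in Python/NumPy as follows. This is an illustrative sketch, not the authors' MATLAB code: it evaluates the array factor of (1) on a dense u = cos ϕ grid with a single 4096-point IFFT of the zero-padded excitations and, for comparison, by direct summation. Function names and the `d_over_lambda` parameter are ours.

```python
import numpy as np

def af_direct(I, u, d_over_lambda=0.5):
    """Direct evaluation of AF(u) = sum_n I_n exp(j*(n-1)*k*d*u)."""
    n = np.arange(len(I))
    return np.array([np.sum(I * np.exp(1j * 2 * np.pi * d_over_lambda * n * ui))
                     for ui in u])

def af_ifft(I, d_over_lambda=0.5, K=4096):
    """Same array factor on a dense u-grid from one K-point IFFT.

    K*ifft of the zero-padded excitations samples AF at psi_m = 2*pi*m/K,
    i.e. at u_m = (m/K)/(d/lambda); only the visible region |u| <= 1
    is returned."""
    x = np.zeros(K, dtype=complex)
    x[:len(I)] = I                                   # zero-pad to K points
    af = np.fft.fftshift(K * np.fft.ifft(x))         # AF at psi_m = 2*pi*m/K
    u = np.fft.fftshift(np.fft.fftfreq(K)) / d_over_lambda
    vis = np.abs(u) <= 1.0                           # visible region, u = cos(phi)
    return u[vis], af[vis]

# Uniform excitation: the array factor peaks at broadside (u = 0)
u, af = af_ifft(np.ones(30))
```

For half-wavelength spacing the visible region |u| ≤ 1 maps onto half of the K IFFT bins, so one transform yields the whole pattern at once instead of N multiply-accumulate operations per look angle.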
A measure of the voltage [10, 11] across the nth dipole can be given by
(2) V_n = I_n Z_{nn} + \sum_{m \neq n} I_m Z_{nm},
where Znn is the self-impedance of dipole n and Znm is the mutual impedance between dipoles n and m. The active impedance [10, 11] of dipole n, ZnA, is given by
(3) Z_n^A = \frac{V_n}{I_n} = Z_{nn} + \sum_{m \neq n} \left( \frac{I_m}{I_n} \right) Z_{nm}, \qquad \mathrm{VSWR}_{\max} = \frac{1 + |\Gamma|_{\max}}{1 - |\Gamma|_{\max}},
where Γ is the reflection coefficient across the nth dipole, equal to (ZnA − Zo)/(ZnA + Zo), and Zo is the characteristic impedance, taken as 50 Ω in this paper. The smaller the value of VSWR, the better the matching condition of the antenna, that is, the better the match between the antenna and the feed line connected to it. Failed elements are not considered in the calculation of VSWR, as their active impedance is zero. Failure of a dipole element refers to a zero voltage excitation at its input; however, the current flowing through such a faulty element is not zero because of mutual coupling, so the faulty element acts as a parasitic radiator. The fitness function F is formulated to obtain new excitation voltage amplitudes for the remaining unfailed elements (excluding the failed elements), and the algorithms used to minimize it are QPSO and BSA:
(4) F = w_1 F_1^2 + w_2 \cdot \mathrm{error},
where
(5) F_1 = \begin{cases} \mathrm{SLL}_o - \mathrm{SLL}_d, & \text{if } \mathrm{SLL}_o > \mathrm{SLL}_d \\ 0, & \text{if } \mathrm{SLL}_o \le \mathrm{SLL}_d, \end{cases} \qquad \mathrm{error} = \begin{cases} 0, & \text{if } \mathrm{VSWR}_{\max} \le 1.4 \\ \left| \mathrm{VSWR}_{\max} - 1.4 \right|, & \text{otherwise.} \end{cases}
The coefficients w1 and w2 refer to the weights given to their corresponding terms associated and are made equal to unity here. SLLd and SLLo are desired (−25 dB) and obtained values of SLL, respectively.
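Equations (2)–(5) can be made concrete with the following Python/NumPy sketch, which solves V = ZI for the element currents, forms the active impedances and worst-case VSWR over the driven elements, and evaluates the fitness under the paper's settings (SLLd = −25 dB, VSWR threshold 1.4, w1 = w2 = 1). The function name and argument layout are ours, for illustration only.

```python
import numpy as np

def vswr_and_fitness(Z, V, sll_obtained, sll_desired=-25.0,
                     Z0=50.0, w1=1.0, w2=1.0):
    """Sketch of equations (2)-(5).

    Z is the N x N mutual impedance matrix, V the voltage excitations.
    Failed elements are flagged by V[n] == 0 and, as in the text, are
    excluded from the VSWR calculation."""
    I = np.linalg.solve(Z, V)                  # currents, from eq. (2)
    live = V != 0.0                            # driven (non-failed) elements
    ZA = V[live] / I[live]                     # active impedance, eq. (3)
    gamma = np.abs((ZA - Z0) / (ZA + Z0))      # reflection coefficient
    vswr_max = np.max((1 + gamma) / (1 - gamma))
    f1 = max(sll_obtained - sll_desired, 0.0)             # eq. (5)
    error = vswr_max - 1.4 if vswr_max > 1.4 else 0.0
    return vswr_max, w1 * f1**2 + w2 * error              # eq. (4)
```

In the actual optimization, the SLL term would come from the IFFT-computed pattern of the candidate excitations; here it is passed in as a number so the penalty structure of (4)–(5) stands out.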
3. Quantum Particle Swarm Optimization
Various versions of the classical PSO [6], with Newtonian rules, introduced by Kennedy and Eberhart in 1995, have proved superior to other algorithms such as the genetic algorithm. These versions added or tuned parameters to guarantee convergence, improve convergence speed, and increase the success rate of the algorithm. This led to the introduction of QPSO [13, 14] in 2004, which uses quantum mechanics principles; it is similar to PSO but uses fewer control parameters, and its superiority over the classical algorithm has been demonstrated on numerous standard benchmark functions. The other main reason for its success is its fast global convergence characteristic.
In classical PSO, convergence to a particular location is made possible by careful tuning of the social and cognitive factors of the algorithm, and, to confine all the particles of the population within the space or boundary required by the objective function, a proper choice of maximum velocity in each dimension is required. Moreover, the state of any particle in the population is defined by its velocity and position. An additional parameter is also introduced to raise the convergence speed. Still, global convergence is not guaranteed.
Unlike the above, in QPSO the particle's state is defined by a wave function instead of position and velocity. There are no cognitive and social factors in the algorithm, which reduces the total number of control parameters. The steps involved in this algorithm [14] are summarized as follows.
(i) Initialize randomly, based on the dynamic range, the positions of all particles in the population.
(ii) Evaluate the particles and compare the personal best (pb) of each particle with its current fitness value obtained from the objective function; the stored pb is replaced with the current value if the current value is better.
(iii) The overall mean best (mb) position of all P particles is obtained using
(6) mb = \frac{1}{P} \sum_{i=1}^{P} pb_i.
(iv) The above procedure is applied to the whole population, and the current best fitness value is designated as the global best (gb) if it is found to be better than the previous one.
(v) The next step is to obtain each particle's local attractor using (7):
(7) x_{iw}^{k} = rand1_{iw}^{k} \cdot pb_{iw} + \left( 1 - rand1_{iw}^{k} \right) \cdot gb.
(vi) Update the position of the wth dimension of the ith particle using (8) and repeat steps (ii) to (vi) until the global best is obtained, which is regarded as the final optimized solution. One has
(8) X_{iw}^{k} = x_{iw}^{k} + (-1)^{\lceil 0.5 + rand2_{iw}^{k} \rceil} \cdot \alpha \cdot \left| mb - X_{iw}^{k-1} \right| \cdot \log\!\left( 1 / rand3_{iw}^{k} \right).
If X_{iw}^{k} < X_{mn}^{k}, then

(9) X_{iw}^{k} = X_{mn}^{k} + 0.25 \cdot rand4_{iw}^{k} \cdot \left( X_{mx}^{k} - X_{mn}^{k} \right).

If X_{iw}^{k} > X_{mx}^{k}, then

(10) X_{iw}^{k} = X_{mx}^{k} - 0.25 \cdot rand5_{iw}^{k} \cdot \left( X_{mx}^{k} - X_{mn}^{k} \right),
where rand1, rand2, rand3, rand4, and rand5 denote uniform random numbers between 0 and 1, X_{mn}^{k} and X_{mx}^{k} are the desired minimum and maximum limits, x refers to the particle's local attractor, and α = 0.75 is the coefficient governing convergence. Equations (9) and (10) keep the positions within the specified limits, thus preventing the particles from exploding.

4. Backtracking Search Algorithm
The backtracking search algorithm (BSA), an evolutionary algorithm, was proposed by Civicioglu [15, 16]. BSA is a population-based iterative stochastic search algorithm that is widely used to solve nonlinear, nondifferentiable, and complex numerical optimization problems.
Unlike many search algorithms, BSA has a single control parameter called dim_rate. Moreover, BSA’s problem-solving performance is not oversensitive to the initial value of this parameter.
BSA uses three basic genetic operators, namely, selection, mutation, and crossover, to generate trial individuals. It uses a nonuniform crossover strategy that is more complex than the crossover strategies used in many genetic algorithms, and a random mutation strategy that uses only one direction individual for each target individual.
Step 1.
The algorithm is run N times. At the start of every run, randomly generate an initial population (P×D) of P individuals, each of dimension D, within the variable constraint range. Also generate another population of size (P×D) within the variable constraint range, called the historical population. Calculate the fitness value of the initial population. The initial population (IP) and historical population (HP) are different in every run. Consider
(11) IP_{i,j} = U(low_j, up_j), \qquad HP_{i,j} = U(low_j, up_j), \qquad i = 1, 2, \ldots, P, \; j = 1, 2, \ldots, D,
where U(a, b) denotes a uniform random number between a and b.
Step 2.
Start the iteration loop. The number of iterations is fixed for every run. A provision is also available for setting the historical population equal to the initial population [15]. The order of the individuals in the historical population (HP) is then randomly changed through a random shuffling function.
Step 3.
New offspring Ti,j are generated from the combination of the initial and historical populations through the mutation and crossover strategy as follows:
(12) T_{i,j} = IP_{i,j} + \Lambda_{i,j} \cdot F \cdot \left( HP_{i,j} - IP_{i,j} \right), \qquad i = 1, 2, \ldots, P, \; j = 1, 2, \ldots, D,
where Λ is a binary-valued matrix of size (P×D) that controls how many elements of each individual mutate in a trial, and F is a scale factor equal to 3·randn, where randn is a standard normal random number.
Two predefined strategies are used at random to define BSA's Λ. The first strategy uses the dim_rate parameter, and the second allows only one randomly chosen element of an individual to mutate in each trial. BSA's crossover process is more complex than that used in differential evolution.
As shown in Algorithm 1, the randperm(D) function randomly permutes the integers from 1 to D.
Some offspring obtained at the end of BSA’s crossover process and mutation strategy can be out of bound and therefore out of the allowed search space limits. The individuals beyond the search-space limits are restricted to lie within the upper and lower bound of every element of any individual. This is done in the following way:
(13) T_{i,j} = \begin{cases} rand \cdot \left( up_j - low_j \right) + low_j, & \text{if } T_{i,j} < low_j \\ rand \cdot \left( up_j - low_j \right) + low_j, & \text{if } T_{i,j} > up_j. \end{cases}
Algorithm 1:
Λ = zeros(P, D);
if rand < rand
    for n = 1:P, t = randperm(D); Λ(n, t(1:ceil(DIM_RATE*rand*D))) = 1; end
else
    for n = 1:P, Λ(n, randi(D)) = 1; end
end
Step 4.
Calculate the fitness value of offspring.
Compare the fitness value of every offspring with that of the corresponding individual in the initial population. If the fitness value of an offspring is better, assign the offspring's fitness value and coordinates to the initial population. Thus, a new population is generated, which becomes the initial population in the next iteration. One has
(14) fitness(IP_i) = fitness(T_i), \quad IP_i = T_i, \qquad \text{if } fitness(T_i) < fitness(IP_i).
Step 5.
Determine the current best fitness value in the whole newly generated population and its coordinates. This will give global best value and its coordinates in the current iteration.
Step 6.
Repeat Steps 2–5 until a stopping criterion, such as the completion of a maximum number of iterations, is satisfied. The best-scoring individual in the population is stored at the end of the maximum number of iterations in every run. This completes the iteration loop of one run.
Step 7.
Repeat Steps 1–6 until a stopping criterion, such as a sufficiently good solution being discovered or a maximum number of runs being completed, is satisfied. The best-scoring individual among the N gbest individuals (one gbest individual per run) over all the runs is taken as the final answer. The mean and standard deviation of all N gbest values are then calculated.
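Steps 1–6 above (a single run) can be condensed into the following Python/NumPy sketch. This is our illustrative reimplementation, not the authors' MATLAB code; a simple sphere function stands in for the antenna fitness, and the helper name `bsa` and its defaults are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bsa(objective, low, up, pop_size=30, iterations=400, dim_rate=1.0):
    """Minimal single-run BSA sketch following Steps 1-6 of the text.

    `objective` maps a D-vector to a scalar to be minimised; `low` and
    `up` are the per-dimension bounds; dim_rate is BSA's single control
    parameter."""
    low, up = np.asarray(low, float), np.asarray(up, float)
    D = low.size
    P = rng.uniform(low, up, (pop_size, D))        # initial population, eq. (11)
    oldP = rng.uniform(low, up, (pop_size, D))     # historical population
    fit = np.array([objective(x) for x in P])
    for _ in range(iterations):
        # Step 2: optionally redirect the historical population, then shuffle it
        if rng.random() < rng.random():
            oldP = P.copy()
        oldP = rng.permutation(oldP)
        # Step 3: mutation and non-uniform crossover, eq. (12)
        F = 3.0 * rng.standard_normal()            # scale factor
        M = np.zeros((pop_size, D))                # the binary map Lambda
        if rng.random() < rng.random():            # strategy 1: dim_rate
            for n in range(pop_size):
                t = rng.permutation(D)
                M[n, t[:int(np.ceil(dim_rate * rng.random() * D))]] = 1.0
        else:                                      # strategy 2: one element
            M[np.arange(pop_size), rng.integers(0, D, pop_size)] = 1.0
        T = P + M * F * (oldP - P)
        # Boundary control, eq. (13): regenerate out-of-bound elements
        out = (T < low) | (T > up)
        T = np.where(out, rng.uniform(low, up, T.shape), T)
        # Steps 4-5: greedy selection of the better individual
        fitT = np.array([objective(x) for x in T])
        better = fitT < fit
        P[better], fit[better] = T[better], fitT[better]
    best = int(np.argmin(fit))
    return P[best], fit[best]
```

In the paper's setting the objective would be the fitness F of (4) evaluated on candidate voltage excitations of the 26 unfailed elements; Step 7 simply wraps this run in an outer loop over N independent runs and keeps the best gbest.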
5. Results
In this paper, the authors used a linear antenna array of 30 half-wavelength parallel dipoles placed along the x-axis with a uniform spacing of λ/2 between individual elements. The requirement is a broadside pattern in the azimuthal plane with an SLL of −25 dB and a VSWR of 1.4. The dipole radius used here is 0.005λ. Four elements, namely, 3, 6, 20, and 28, are chosen at random to be the failed elements (V = 0).
Three cases are discussed here.
Case 1.
For generation of original pattern without any failures, QPSO is run for 100 iterations with a population of 60. A set of excitation voltage amplitudes is obtained.
Case 2.
In this case, damaged pattern is obtained by considering the dipole elements 3, 6, 20, and 28 to be failed ones (randomly chosen). This is done by setting V (3, 6, 20, and 28) to be equal to zero in the voltage excitations obtained using Case 1.
Case 3.
For obtaining the corrected pattern, QPSO is run 5 times, each for 1000 iterations. A set of new excitation voltage amplitudes is obtained for nondefective elements at the end of five runs.
The best fitness value among five runs is taken as the final answer and regarded as global best value.
The above three cases are repeated for BSA algorithm also.
A MATLAB program was written and run for the above on a PC with an Intel Core Duo CPU E8400 at 2.99 GHz and 1.94 GB of RAM. Table 1 shows the voltage excitations for the original, damaged, and corrected patterns (using QPSO and BSA), and Table 2 summarizes the comparative results of both algorithms. Figure 2 shows the fitness value versus the number of iterations for both algorithms, and Figures 3 and 4 show the radiation patterns obtained.
Table 1: Voltage amplitude distribution for the original and corrected patterns.

| Element | Original | QPSO   | BSA    | Element | Original | QPSO   | BSA    |
|---------|----------|--------|--------|---------|----------|--------|--------|
| 1       | 0.2766   | 0.0563 | 0.0700 | 16      | 0.8300   | 0.7611 | 0.6549 |
| 2       | 0.2456   | 0.1627 | 0.2717 | 17      | 0.6931   | 0.8722 | 0.9331 |
| 3       | 0.1485   | 0.0000 | 0.0000 | 18      | 0.7520   | 0.7207 | 0.6330 |
| 4       | 0.4343   | 0.1953 | 0.2654 | 19      | 0.9573   | 0.7747 | 0.9004 |
| 5       | 0.2666   | 0.2701 | 0.4432 | 20      | 0.8130   | 0.0000 | 0.0000 |
| 6       | 0.5048   | 0.0000 | 0.0000 | 21      | 0.7879   | 0.7581 | 0.7960 |
| 7       | 0.5010   | 0.4821 | 0.6664 | 22      | 0.3656   | 0.5205 | 0.5042 |
| 8       | 0.4498   | 0.3314 | 0.3976 | 23      | 0.8085   | 0.5717 | 0.6534 |
| 9       | 0.5969   | 0.5435 | 0.7057 | 24      | 0.3853   | 0.4186 | 0.4441 |
| 10      | 0.5295   | 0.3504 | 0.4351 | 25      | 0.4461   | 0.4129 | 0.4574 |
| 11      | 0.5183   | 0.7543 | 0.8206 | 26      | 0.4322   | 0.2631 | 0.3286 |
| 12      | 0.8032   | 0.6261 | 0.6572 | 27      | 0.3077   | 0.4083 | 0.3357 |
| 13      | 0.8796   | 0.6012 | 0.7785 | 28      | 0.2245   | 0.0000 | 0.0000 |
| 14      | 0.6779   | 0.8560 | 0.8408 | 29      | 0.3390   | 0.1570 | 0.1470 |
| 15      | 0.7857   | 0.6352 | 0.8312 | 30      | 0.2010   | 0.1507 | 0.1309 |

"Original" is the failure-free pattern; "QPSO" and "BSA" are the corrected patterns, with the failed elements (3, 6, 20, and 28) held at zero.
Table 2: Comparative results.

| Design parameters                         | Case 1 (original pattern) | Case 2 (damaged pattern) | Case 3 (corrected, QPSO) | Case 3 (corrected, BSA) |
|-------------------------------------------|---------------------------|--------------------------|--------------------------|-------------------------|
| SLL (dB)                                  | −25.7022                  | −20.4538                 | −24.9824                 | −24.9797                |
| VSWR                                      | 1.3970                    | 1.6121                   | 1.4806                   | 1.5281                  |
| Mean fitness value of five runs           | —                         | —                        | 0.1127                   | 0.1737                  |
| Standard deviation                        | —                         | —                        | 0.0318                   | 0.0283                  |
| Processing time (s)                       | —                         | —                        | 772                      | 660                     |
| Global best fitness value among five runs | —                         | —                        | 0.0808                   | 0.1286                  |
Figure 2: Fitness value versus number of iterations for the best run.
Figure 3: Normalized original, damaged, and corrected power patterns (dB) using QPSO.
Figure 4: Normalized original, damaged, and corrected power patterns (dB) using BSA.
6. Conclusions
This paper presented a comparative analysis between the QPSO and BSA algorithms in terms of the expected design parameters as well as statistical ones. The tables show that, although both algorithms are well suited to correcting the damaged pattern, the processing time of QPSO is slightly higher than that of BSA. The obtained values of the antenna parameters, namely, SLL and VSWR, match each other very closely, with QPSO slightly edging out BSA by providing a better VSWR value. The same holds for the statistical parameters, where the two algorithms again match closely, with QPSO showing slightly better results than BSA in global best fitness value and mean, while BSA requires less processing time. This comparative analysis can be extended to other antenna array configurations as well.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References

[1] Balanis, C. A.
[2] Mailloux, R. J., "Array failure correction with a digitally beamformed array."
[3] Yeo, B.-K. and Lu, Y., "Array failure correction with a genetic algorithm."
[4] Rodríguez, J. A., Ares, F., Moreno, E., and Franceschetti, G., "Genetic algorithm procedure for linear array failure correction."
[5] Wang, L. L., Fang, D. G., and Sheng, W. X., "Combination of genetic algorithm (GA) and fast Fourier transform (FFT) for synthesis of arrays."
[6] Kennedy, J. and Eberhart, R., "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks, December 1995, pp. 1942–1948.
[7] Boeringer, D. W. and Werner, D. H., "Particle swarm optimization versus genetic algorithms for phased array synthesis."
[8] Pathak, N., Basu, B., and Mahanti, G. K., "Combination of inverse fast Fourier transform and modified particle swarm optimization for synthesis of thinned mutually coupled linear array of parallel half-wavelength dipole antennas."
[9] Keizer, W. P. M. N., "Low-sidelobe pattern synthesis using iterative Fourier techniques coded in MATLAB."
[10] Elliott, R. S.
[11] Rodríguez, J. A., Ares, F., and Moreno, E., "Feeding in-phase dipole arrays: a tutorial and a MATLAB program."
[12] Thevenot, M., Menudier, C., El Sayed Ahmad, A., Zakka El Nashef, G., Fezai, F., Abdallah, Y., Arnaud, E., Torres, F., and Monediere, T., "Synthesis of antenna arrays and parasitic antenna arrays with mutual couplings."
[13] Sun, J., Feng, B., and Xu, W., "Particle swarm optimization with particles having quantum behavior," Proceedings of the Congress on Evolutionary Computation (CEC '04), Portland, Ore, USA, June 2004, pp. 325–331.
[14] Sun, J., Xu, W., and Feng, B., "A global search strategy of quantum behaved particle swarm optimization," Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, Singapore, December 2004, pp. 111–116.
[15] Civicioglu, P., "Backtracking search optimization algorithm for numerical optimization problems."
[16] Civicioglu, P., "Circular antenna array design by using evolutionary search algorithms."