Passive Target Localization Problem Based on Improved Hybrid Adaptive Differential Evolution and Nelder-Mead Algorithm



Introduction
Determining the position of a passive target using time of arrival (TOA) measurements has become an important issue for a number of different applications, such as radar, sonar, telecommunications, mobile communications, and navigation [1,2]. In general, localization systems can be classified into active and passive [1]. In an active localization system, the target actively participates in the localization process. In contrast, in passive localization, the target is not involved in the localization process and can only reflect or scatter the signals from the transmitter to the receivers [2]. Therefore, passive localization has been widely applied in different fields, such as robotics, underwater acoustics, radar, crime prevention, surveillance, and urban environments [3,4].
Hence, this paper considers a passive target localization problem using noisy TOA measurements. In this way, the signal propagation time from the transmitter via the target to the receiver can be measured and used to determine range measurements, i.e., transmitter-target-receiver distances. Due to the nonlinear relationship between the target position and the measurements, various efficient estimation methods have been proposed, such as nonlinear least squares (NLS) and maximum likelihood (ML) [5]. To obtain a closed-form solution, the NLS problem has been linearized using the linear least squares (LLS) and the weighted least squares (WLS) algorithms [6,7]. In order to improve the accuracy, especially in the case of high measurement noise, the constrained weighted least squares (CWLS) method was introduced [8]. Hence, the CWLS problem can be formulated as a quadratically constrained quadratic program (QCQP), which is converted to an unconstrained optimization problem using the method of Lagrange multipliers [8]. Another widely used estimation method is the ML estimator, which is commonly applied when the measurement error distribution is known in advance [2]. In general, the ML objective function is highly nonlinear, with multiple local optima, and thus a closed-form solution cannot be obtained. Therefore, finding the global optimal solution by conventional optimization methods is difficult, due to the multimodal objective function [9]. For this reason, various efficient optimization algorithms have been derived to overcome these types of difficulties and to answer the challenges of complex optimization problems [10].
Motivated by these facts, this paper proposes evolutionary algorithms (EAs) and their hybrid variants to overcome these drawbacks and improve the localization performance [11]. Generally, the optimization process of EAs consists of two phases, namely, exploration and exploitation [10]. In such a context, the first phase consists of exploring the search space and locating the region of the global optimum, while the exploitation phase investigates the promising area to refine the solution around the current global best solution. Therefore, finding an appropriate balance between the exploration and exploitation abilities during the evolution process is a significant challenge for optimization algorithms.
Various EAs have been successfully applied to solve different complex optimization problems, such as the particle swarm optimization (PSO) [9], the differential evolution (DE) [12], the artificial bee colony (ABC) [13], and the cuckoo search algorithm (CS) [14]. Among these algorithms, the DE has emerged as a very efficient and robust EA for global optimization with successful applications in finding a global optimum [15].
The DE is a population-based EA, which has been widely used to solve numerous optimization problems in different fields of science and engineering [15,16]. Easy implementation, compact structure, reliability, and robustness are the main advantages of the DE algorithm [15]. However, some difficulties occur, such as a weak local search and slow convergence [17]. In general, the main factors that affect the performance are the mutation and crossover operators and their corresponding control parameters, such as the scale factor (F), crossover rate (CR), and population size (NP). In this direction, various empirical guidelines have been provided for selecting the most suitable values of the control parameters, such as F ∈ [0.4, 1] and CR ∈ [0, 1] [18]. To obtain acceptable results for a given problem, different control parameter values at different stages of the evolution process are needed. In this way, a large F is required at the early stage of the evolution process to prevent premature convergence, while a smaller F is preferred to accelerate the convergence at the later stage of the evolution process [19]. A large value of CR in the early stage of evolution can increase the population diversity. On the contrary, a smaller CR can improve the local exploitation ability and convergence at the later stage of evolution [19]. In this regard, different adaptive mechanisms eliminate the need for manual tuning of the control parameters, which are instead adjusted adaptively using feedback from the evolution process [20,21].
Furthermore, choosing the appropriate mutation operator can significantly affect the optimization performance of the DE algorithm. In this way, a number of different mutation operators have been developed, such as DE/rand/1, DE/rand/2, DE/best/1, DE/best/2, DE/current-to-best/1, DE/current-to-rand/1, and different variants of them [15]. The DE/rand/1 and DE/rand/2 generally have powerful global exploration ability, while the DE/best/1 and DE/current-to-best/1 have strong local exploitation ability [20].
Hybridization of DE with other algorithms is another way to overcome the drawbacks of both algorithms and further enhance the optimization performance. Depending on the type of algorithm, the DE can be hybridized with other EAs, such as ABC, CS, and PSO [13,22] or with different local search methods such as Powell's method, the Hook-Jeeves (HJ), and the Nelder-Mead (NM) [23][24][25]. Among them, the NM method has been chosen, due to its excellent local search ability. However, the convergence of the NM method is extremely sensitive to the choice of the initial point [25], and thus, it cannot be used alone to find the global optimum of a multimodal objective function.
Based on the above considerations, this paper is aimed at proposing an improved HADENM algorithm, in order to efficiently find the estimated position of a passive target. The proposed algorithm is a two-stage method, where in the first stage, the adaptive differential evolution (ADE) algorithm is used as the global optimizer, to quickly locate the promising region containing the global optimum. Then, in the second stage, the NM method is employed to improve the accuracy of the best solution obtained from the ADE algorithm. Moreover, an adaptive adjustment parameter is introduced in the mutation phase of the ADE algorithm to automatically apply the appropriate mutation operator, in order to avoid the problem of stagnation and premature convergence. In addition, to further improve the optimization performance, the control parameters of the ADE algorithm are automatically and adaptively adjusted during the evolution process. Therefore, the purpose of this paper is to propose a robust optimization algorithm for the passive target localization problem for a wide range of measurement noise levels.
Finally, the Cramer-Rao lower bound (CRLB), which provides a lower bound on the variance of any unbiased estimator, has been derived and used as a benchmark to evaluate the localization performance [26]. Hence, the CRLB for the passive target localization problem is compared with the root mean square error (RMSE) performance of the proposed HADENM algorithm and the existing CWLS and DE algorithms.
The contributions of this paper are summarized as follows: (i) The passive target localization problem in a LOS (line-of-sight) environment is formulated using the TOA measurements obtained from a set of receivers and a single transmitter in Wireless Sensor Networks (WSNs). (ii) An improved HADENM algorithm, as the hybridization of the ADE and NM algorithms, is proposed in order to efficiently solve the passive target localization problem. Furthermore, the scale factor and the crossover rate are updated using an adaptive strategy. In addition, an adaptive mutation operator has been designed in order to maintain a balance between exploration and exploitation. To further increase the exploitation ability, the NM method is employed with the aim of enhancing the accuracy of the best solution previously obtained by the ADE algorithm. (iii) Experiments are carried out to evaluate the benefits of the proposed modifications on the optimization performance of the proposed HADENM algorithm. Based on the statistical analysis of the proposed algorithm and its versions, it can be concluded that the proposed modifications improve the overall optimization performance. Furthermore, the simulation results show the effectiveness of the HADENM algorithm for a wide range of measurement noise levels compared to the CWLS and DE algorithms. Additionally, the proposed algorithm attains the CRLB accuracy and shows better robustness against variations in network topology.

The structure of the rest of the paper is organized as follows. Section 2 presents a review of background and related work. In Section 3, the passive target localization problem using noisy TOA measurements is presented. Section 4 gives the derivation of the ML objective function for the passive target localization problem. In Section 5, the CWLS algorithm is described. Section 6 introduces the DE algorithm followed by the corresponding modifications.
A local search NM method is described in Section 7. In Section 8, the HADENM algorithm is introduced to solve the considered passive target localization problem. In Section 9, the derivation of the CRLB is developed. To evaluate the localization accuracy, experimental results are presented in Section 10. The conclusion and future work are given in Section 11.

Background and Related Work
Highly accurate passive target localization can be considered as one of the most significant and challenging issues in WSNs [26]. The accuracy of the estimated target position depends on the geometric configuration of sensors and the measurement accuracy [1]. The localization algorithms in WSNs can be divided into range-based and range-free [5]. Range-based algorithms use range measurements including distance or angle between the target and the receivers, such as TOA [26], time difference of arrival (TDOA) [8], received signal strength (RSS) [27], angle of arrival (AOA) [28], or a combination of them [29]. Among them, TOA is the most commonly used algorithm for solving the localization problems, which can achieve high localization accuracy [26]. On the contrary, the range-free algorithms do not measure the distance or angle information; these algorithms use connectivity information in WSNs to estimate sensor positions [30]. Compared with range-based algorithms, the range-free algorithms do not require complex hardware structure. Hence, the range-free algorithms are cost effective and easy to implement; however, they are less accurate in estimating the sensor position [30].
There are many applications, in which global positioning system (GPS) or other navigation systems cannot be used, such as in indoor, underwater acoustics, and urban environments [31]. In these cases, when GPS signals are not available or do not have sufficient accuracy, the passive localization system can be employed as an efficient alternative. In this regard, the passive target localization system has become an attractive solution for determining the unknown position of the passive target under various circumstances and environments [32]. In this way, a novel two-step algorithm for TOA passive target localization has been proposed for synchronous networks [26]. Furthermore, the two new algorithms based on belief propagation on a factor graph have been developed to localize multiple passive targets, in the case where the receiver position errors exist [2]. In addition, the two-step linear algorithm has been proposed to simultaneously estimate the position of the passive target and the unknown time offset in a quasisynchronous network using TOA measurements [33].
Numerous estimation methods have been proposed to find the position of a target, where the well-known NLS estimation method is commonly employed. To obtain a closed-form solution, the LLS and WLS [6,7] methods have been widely applied and implemented. However, these linear estimation methods cannot provide high localization accuracy of the target for different measurement noise levels [34]. To further improve the localization performance, especially in the presence of high noise levels, the CWLS algorithm is developed [8]. An additional widely used estimation method in the literature is the ML estimator, which maximizes the likelihood function of the unknown target position. However, the corresponding ML objective function is a highly nonlinear function, under the Gaussian noise assumption [9]. In order to avoid difficulties related to the multimodal nature of the ML objective function, a semidefinite programing (SDP) relaxation is applied to transform the nonconvex problem into a convex problem [35]. However, in the presence of significant measurement noise, the SDP has some disadvantages in terms of accuracy; thus, it can only provide a near-global optimal solution [35]. Thus, to solve the ML estimation problem with high accuracy and improve the convergence, for a given localization problem, researchers have developed and applied various efficient EAs [10].
A number of different EAs like DE, PSO, ABC, and CS have been widely applied in order to solve the localization problem, in terms of determining the unknown position of the target [36]. In this regard, the PSO algorithm and its improved variants have been applied to accurately estimate the position of the passive target using TDOA measurements [37]. The simulation results have shown that improved variants of PSO provide better localization performance compared to the conventional PSO and well-known LLS and WLS algorithms. In addition, the cuckoo search (CS) algorithm has been used to estimate the position of the target in the passive localization system based on TDOA measurements [14]. Based on the simulation results, it can be concluded that the CS algorithm has faster convergence speed and more efficient global exploration ability compared to the PSO and Newton iteration algorithms.
In recent years, a number of hybrid variants of the DE algorithm have been created in order to efficiently balance the exploration and exploitation abilities during the evolution process [23][24][25]. With respect to this, the direct search Powell's method has been combined with DE (DESAP) [23], where the near-global solution obtained by the DE algorithm is improved using Powell's method. To enhance the performance, the hybridization between DE and a local search algorithm (MLSHADE-SPA), with linear population size reduction and semiparameter adaptation, has been proposed [38]. Furthermore, the DE algorithm has been hybridized with Hook-Jeeves in distributed memetic DE algorithm (DMDE), to efficiently find the global optimum and achieve a better trade-off between the exploration and the exploitation [24]. In addition, a new reflection-based mutation operation, inspired by the reflection operations in the NM method, has been incorporated into the DE algorithm, with a combination of multiple mutation strategies based on roulette wheel selection (MM-RDE) [25]. Therefore, this paper proposes an improved HADENM algorithm, based on the hybridization of ADE and NM algorithms, to solve the multimodal passive target localization problem with high accuracy even in the presence of high measurement noise.

Localization Problem
This section considers the passive target localization problem in two dimensions, using noisy TOA measurements. The unknown position of a passive target can be determined using one transmitter T_x and a set of N receivers, where N ≥ 3. Let x = [x, y]^T ∈ R^2 be the unknown position of the passive target, x_{r_i} = [x_{r_i}, y_{r_i}]^T ∈ R^2, ∀i ∈ {1, 2, ⋯, N}, be the known coordinates of the ith receiver, and x_T = [0, 0]^T ∈ R^2 be the coordinates of the transmitter, as shown in Figure 1.
The passive target localization system starts with the transmitter emitting a signal, which is reflected by the target upon arrival. The considered TOA algorithm employs the absolute signal travel time from the transmitter to the receivers to obtain range measurements. Therefore, the signal departure time must be known, which is achieved by synchronization between the transmitter and the receivers [26]. In addition, Gaussian noise is widely used in localization algorithms under a LOS environment [2]. In this case, the noisy TOA measurements are obtained as follows:

t_i = (||x||_2 + ||x − x_{r_i}||_2) / c + n_i, i = 1, 2, ⋯, N,

where ||·||_2 denotes the Euclidean distance, c is the speed of light, and n_i denotes the zero-mean Gaussian measurement noise with variance σ_i^2. Then, multiplying t_i by the speed of light c, the range measurements (transmitter-target-receiver distances), denoted by {r_i}_{i=1}^N, can be written as

r_i = c t_i = d_i + ñ_i, i = 1, 2, ⋯, N,

where d_i = ||x||_2 + ||x − x_{r_i}||_2 is the true range measurement and ñ_i = c n_i is the measurement noise, which follows the Gaussian distribution N(0, σ_{n_i}^2), with zero mean and variance σ_{n_i}^2 = c^2 σ_i^2. From the geometric interpretation in Figure 1, in the absence of measurement noise, the true range measurement d_i defines an ellipse E_i with focal points placed at x_T and x_{r_i}, respectively. Therefore, the corresponding ellipse E_i can be defined as

E_i = {x ∈ R^2 : ||x||_2 + ||x − x_{r_i}||_2 = d_i}.

In this regard, the true passive target position x = [x, y]^T ∈ R^2 is determined at the intersection point of at least three solid-line ellipses {E_i}_{i=1}^{N≥3}, corresponding to the TOA measurements, as shown in Figure 1.
Then, in the presence of measurement noise, the vector form of Equation (2) can be written as

r = d + ñ,

in which r = [r_1, ⋯, r_N]^T is the vector of range measurements, d = [d_1, ⋯, d_N]^T is the vector of true range measurements, and ñ = [ñ_1, ⋯, ñ_N]^T is the corresponding measurement noise vector. In this case, three or more ellipses derived from measurements do not have a unique intersection point. Thus, an estimated position of the passive target can be found inside the bounded region, i.e., the region surrounded by the bold black curve, as depicted in Figure 1. In this way, the main goal is to estimate the unknown position of the passive target based on noisy TOA measurements, which involves solving a highly nonlinear and multimodal ML estimation problem explained in the following section.
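The measurement model above can be sketched in a few lines of Python. The target position, receiver layout, and noise level below are hypothetical values chosen only for illustration; the transmitter sits at the origin, as in the text:

```python
import math
import random

def true_range(target, receiver):
    """d_i = ||x||_2 + ||x - x_ri||_2: transmitter-target plus
    target-receiver distance, with the transmitter at the origin."""
    tx_to_target = math.hypot(target[0], target[1])
    target_to_rx = math.hypot(target[0] - receiver[0], target[1] - receiver[1])
    return tx_to_target + target_to_rx

def noisy_ranges(target, receivers, sigma, rng):
    """Range measurements r_i = d_i + n_i with zero-mean Gaussian noise
    of standard deviation sigma (range domain)."""
    return [true_range(target, rx) + rng.gauss(0.0, sigma) for rx in receivers]

# Hypothetical network: target at (4, 3) and three receivers (N >= 3).
target = (4.0, 3.0)
receivers = [(10.0, 0.0), (0.0, 10.0), (-10.0, -10.0)]
rng = random.Random(42)
r = noisy_ranges(target, receivers, sigma=0.1, rng=rng)
```

With zero noise each r_i lies exactly on the ellipse E_i; with noise, the ellipses no longer share a common intersection point, which is what motivates the estimation problem below.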

Maximum Likelihood Method
The unknown position of the passive target can be estimated by maximizing the likelihood function. Under the assumption of independent and identically distributed Gaussian noisy TOA measurements, the likelihood function L(x) for the passive target position [26] can be expressed as

L(x) = f(r|x) = (1 / ((2π)^{N/2} det(C)^{1/2})) exp(−(1/2)(r − d)^T C^{−1} (r − d)),

where f(r|x) denotes the probability density function and C = diag{σ_1^2, ⋯, σ_N^2} is a diagonal covariance matrix. In order to simplify the maximization problem, it is often convenient to take the logarithm of the likelihood function, which can be written as

ln L(x) = c̄ − (1/2)(r − d)^T C^{−1} (r − d),

where c̄ = ln(1/((2π)^{N/2} det(C)^{1/2})) is a constant independent of x. Consequently, the ML estimator requires the maximization of the log-likelihood function, which is equivalent to minimizing the negative logarithm of the likelihood function. Then, after neglecting the constant terms, the ML estimation problem can be formulated as

J_ML(x) = (r − d)^T C^{−1} (r − d),

where J_ML(x) denotes the corresponding ML objective function. Thus, the goal is to find the optimal solution x̂ that minimizes the objective function J_ML(x) with respect to x, such that x̂ = argmin_x J_ML(x). Hence, J_ML(x) for the passive target localization problem is depicted in Figure 2, which provides information about the nature of the optimization problem and shows that J_ML(x) is a highly nonlinear and nonconvex function with multiple local optima. Therefore, finding the global optimal solution for a given optimization problem becomes a significant challenge. Thus, in order to solve this kind of complex optimization problem, it is necessary to employ different optimization algorithms, which are described in the next sections.
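As a minimal sketch, the objective J_ML(x) expands into a weighted sum of squared range residuals. The code below assumes the sigmas are the range-domain noise standard deviations and reuses a hypothetical network geometry; with noiseless measurements, the true position is a global minimum:

```python
import math

def j_ml(x, receivers, ranges, sigmas):
    """ML objective J_ML(x) = sum_i (r_i - d_i(x))^2 / sigma_i^2,
    where d_i(x) = ||x||_2 + ||x - x_ri||_2 (transmitter at the origin)."""
    total = 0.0
    for (rx, ry), r_i, s in zip(receivers, ranges, sigmas):
        d_i = math.hypot(x[0], x[1]) + math.hypot(x[0] - rx, x[1] - ry)
        total += (r_i - d_i) ** 2 / s ** 2
    return total

# Hypothetical geometry with noiseless measurements for illustration.
target = (4.0, 3.0)
receivers = [(10.0, 0.0), (0.0, 10.0), (-10.0, -10.0)]
sigmas = [0.1, 0.1, 0.1]
ranges = [math.hypot(*target) + math.hypot(target[0] - rx, target[1] - ry)
          for rx, ry in receivers]
```

This is the function whose multimodal landscape Figure 2 depicts; the DE, NM, and hybrid algorithms discussed later all minimize this same objective.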

Constrained Weighted Least Squares Algorithm
As a well-known and widely used localization algorithm in WSNs, the CWLS algorithm is considered in this paper in order to compare and evaluate the localization performance of the proposed HADENM algorithm. The passive target position is estimated using the CWLS algorithm by squaring both sides of Equation (2) and introducing R_t = ||x||_2 = sqrt(x^2 + y^2) [8], which yields Equation (10).

[Figure 1: Passive target localization system using noisy TOA measurements, showing the ellipses with and without range error.]

where the second-order term of noise n_i^2 can be neglected for small noise levels. Hence, Equation (10) is linear and can be written in matrix form. Then, based on Equation (11) through Equation (15), the following WLS optimization problem can be formulated, where, under the small-noise assumption, W ∈ R^{N×N} is a symmetric weighting matrix. It should be noted that, since the measurement noise m from Equation (15) is Gaussian distributed and due to the linear relationship in Equation (11), the objective function of the ML estimator J_ML(x), given in Equation (8), is equivalent to that of the WLS estimator [8]. Then, by incorporating the relationship between the unknown target position x and the auxiliary variable R_t as a second-order equality constraint in the WLS estimator, the CWLS target localization problem is obtained, where S = diag{1, 1, −1} is a diagonal matrix. A closed-form solution is not available due to the nonlinearity of the constrained problem. In this regard, an unconstrained optimization problem is obtained by forming the auxiliary Lagrange function ℒ(θ, λ), where λ is the Lagrange multiplier associated with the equality constraint. The corresponding optimal solution θ̂_CWLS to the CWLS problem is obtained from the stationary point of ℒ(θ, λ), and the Lagrange multiplier λ is found as a root of the 4th-order polynomial in Equation (21) [8]. Then, based on the CWLS method, the algorithm for passive target localization can be stated as follows:

Step 1. Set the symmetric weighting matrix W as the unit matrix I, i.e., W = I.
Step 2. Find all roots of Equation (21) taking into consideration only real roots.

Differential Evolution Algorithm and the Proposed Modified Version

6.1. Differential Evolution Algorithm. The DE is a population-based EA successfully applied for global optimization, introduced by Storn and Price [12]. The evolution process of the DE algorithm is based on four basic steps, i.e., initialization, mutation, crossover, and selection, which are described below.
6.1.1. Initialization. The evolution process of the DE algorithm starts with an initial population of N_P n-dimensional individuals {x_i^{(G)} : 1 ≤ i ≤ N_P}, in which each individual represents a candidate solution of the problem and G denotes the current generation. For the considered localization problem, every individual x_i^{(G)} in two-dimensional space (i.e., n = 2) has two variables corresponding to the x and y coordinates of the target position. Here, x_{i,j}^L and x_{i,j}^U are the lower and upper bounds of the jth component of the ith individual, respectively. At G = 0, every individual is randomly generated as

x_{i,j}^{(0)} = x_{i,j}^L + rand_j · (x_{i,j}^U − x_{i,j}^L),

where rand_j is a uniformly distributed random number in the range [0, 1].
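The initialization step can be sketched as below; the search bounds and population size are illustrative choices, not values taken from the paper:

```python
import random

def init_population(np_, lower, upper, rng):
    """Generate NP random individuals:
    x_{i,j}^(0) = x^L_j + rand_j * (x^U_j - x^L_j), rand_j uniform in [0, 1]."""
    n = len(lower)
    return [[lower[j] + rng.random() * (upper[j] - lower[j]) for j in range(n)]
            for _ in range(np_)]

# Hypothetical 2-D search region for the target position.
rng = random.Random(7)
pop = init_population(30, lower=[-20.0, -20.0], upper=[20.0, 20.0], rng=rng)
```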
6.1.2. Mutation. For each target vector x_i^{(G)}, a mutant vector v_i^{(G)} is generated through a mutation operator. The most widely used mutation operators in the literature [15] are

DE/rand/1: v_i^{(G)} = x_{r1}^{(G)} + F(x_{r2}^{(G)} − x_{r3}^{(G)}),
DE/rand/2: v_i^{(G)} = x_{r1}^{(G)} + F(x_{r2}^{(G)} − x_{r3}^{(G)}) + F(x_{r4}^{(G)} − x_{r5}^{(G)}),
DE/best/1: v_i^{(G)} = x_{best}^{(G)} + F(x_{r1}^{(G)} − x_{r2}^{(G)}),
DE/current-to-best/1: v_i^{(G)} = x_i^{(G)} + F(x_{best}^{(G)} − x_i^{(G)}) + F(x_{r1}^{(G)} − x_{r2}^{(G)}),

where r_1, r_2, r_3, r_4, and r_5 are distinct integers randomly selected from {1, 2, ⋯, N_P} \ {i}, x_{best}^{(G)} is the individual with the lowest objective function value, and F ∈ [0, 1] is the scale factor.
6.1.3. Crossover. After mutation, the binomial crossover operation is applied to each pair of the target vector x_i^{(G)} and the mutant vector v_i^{(G)} to generate a trial vector u_i^{(G)}, whose components are given by

u_{i,j}^{(G)} = v_{i,j}^{(G)} if rand_{i,j} ≤ CR or j = j_rand, and u_{i,j}^{(G)} = x_{i,j}^{(G)} otherwise,

where rand_{i,j} is a uniform random number generated within [0, 1], CR ∈ [0, 1] is the crossover rate, and j_rand ∈ {1, 2, ⋯, n} is a randomly selected integer.
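The DE/rand/1 operator and binomial crossover described above can be sketched as follows; the population values are random illustrative data:

```python
import random

def mutate_rand1(pop, i, f, rng):
    """DE/rand/1: v_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct and different from i."""
    r1, r2, r3 = rng.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[r1][j] + f * (pop[r2][j] - pop[r3][j]) for j in range(len(pop[i]))]

def crossover_bin(target_vec, mutant, cr, rng):
    """Binomial crossover: u_j = v_j if rand <= CR or j == j_rand, else x_j;
    j_rand guarantees at least one component comes from the mutant."""
    n = len(target_vec)
    j_rand = rng.randrange(n)
    return [mutant[j] if (rng.random() <= cr or j == j_rand) else target_vec[j]
            for j in range(n)]

rng = random.Random(3)
pop = [[rng.uniform(-20.0, 20.0) for _ in range(2)] for _ in range(10)]
v = mutate_rand1(pop, 0, f=0.7, rng=rng)
u = crossover_bin(pop[0], v, cr=0.9, rng=rng)
```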
6.1.4. Selection. Finally, the selection operation compares the trial vector u_i^{(G)} with the target vector x_i^{(G)}, and the vector with the better objective function value survives into the next generation, according to

x_i^{(G+1)} = u_i^{(G)} if J(u_i^{(G)}) ≤ J(x_i^{(G)}), and x_i^{(G+1)} = x_i^{(G)} otherwise.

6.2. A Modified Differential Evolution Algorithm. In order to improve the performance of the conventional DE algorithm, this subsection introduces modifications to the scale factor and crossover rate. Furthermore, an automatically adapted mutation operator is proposed, to select an appropriate mutation strategy based on the current optimization state.
6.2.1. Adaptive Scale Factor. The performance of the DE algorithm depends largely on the proper choice of the scale factor, which can affect the convergence. In this context, a higher value of the scale factor, in the early stage, improves the exploration ability, which has a positive effect on the population diversity and, thus, avoids premature convergence at local optima [18,39]. In contrast, a smaller value of the scale factor, in the later stage, improves the exploitation ability, which can enhance the convergence speed [18,39].
Based on the above analysis, an adaptive scale factor F_i^{(G)}, which is dynamically adapted in each generation for each individual, is introduced in Equation (31), where F_min and F_max are the minimum and maximum values of F_i^{(G)}, respectively. In this respect, Figure 3 illustrates the changes of the proposed adaptive scale factor F_i^{(G)} defined in Equation (31) during the evolution process, where F_max = 0.9 and F_min = 0.5.
According to Figure 3, it is evident that in the early stage the adaptive scale factor F_i^{(G)} has a larger value, which is suitable for exploration. During the later stage, with the increase of generations, the adaptive scale factor F_i^{(G)} starts to decrease, which improves exploitation and accelerates the local convergence of the algorithm.

6.2.2. Adaptive Crossover Rate. According to the analysis in [18,40], adapting a suitable value of the crossover rate can maintain the diversity of the population and improve the quality of the solution. From Equation (30), it is evident that for a large value of the crossover rate, the mutant vector v_i^{(G)} has a greater contribution to the trial vector u_i^{(G)}. In this case, a higher crossover rate increases the population diversity and enhances the global search. In contrast, for a smaller crossover rate, the trial vector u_i^{(G)} keeps the previous state x_i^{(G)} with a large probability. This can further refine the trial vector, which is beneficial for improving the quality of the solution.
In this regard, an adaptive crossover rate CR^{(G)} is proposed here to address the above-mentioned issues, defined in Equation (32), where G_max is the maximum number of generations and CR_max and CR_min are the maximum and minimum values of CR^{(G)}, respectively.
In this way, Figure 4 shows the proposed adaptive crossover rate CR^{(G)} versus the number of generations, where CR_max = 0.9 and CR_min = 0.1. As can be seen from Figure 4, CR^{(G)} has a large value in the initial stage of the evolution process and gradually decreases with the increase of generations. In this way, a larger CR^{(G)} can advance the population diversity and strongly enhance the exploration ability. During the later stage of the evolution process, CR^{(G)} takes a small value, which is beneficial for improving the quality of the solution.
6.2.3. The Adaptive Mutation Operator. The performance of the DE algorithm, such as convergence, population diversity, and exploration ability, is greatly affected by mutation operators. The appropriate mutation operators for different evolutionary stages have been proposed here, with the aim to avoid premature convergence and prevent stagnation. In this manner, the DE/rand/1 and DE/rand/2 have strong global exploration ability, while the DE/best/1 operator has a good local exploitation ability [20]. In order to further improve the local search ability, the DE/current-to-best/1 mutation operator can be employed [20].
Based on the above considerations, to dynamically adjust the global exploration and local exploitation abilities, this paper proposes an adaptive adjustment parameter δ^{(G)}, defined in Equation (33), where f_mean^{(G)} is the mean value of the objective function over the current population. The pseudocode of the proposed adaptive mutation operator is shown in Algorithm 1.
As a matter of fact, the adaptive parameter δ^{(G)} ∈ [0, 1] has an important influence on the identification of evolutionary stages in the search process. According to Equation (33), in the early stage, the value of δ^{(G)} is close to 1, which indicates that the population is far from the region of the global optimum, and this corresponds to global exploration of the search space. Hence, DE/rand/1 and DE/rand/2 will be chosen randomly, each with probability 0.5, in order to improve the exploration and find the region of the global optimum. In the later stage, the value of δ^{(G)} is close to 0, which shows that the population is near the region of the global optimum, and this corresponds to local exploitation. Therefore, DE/best/1 and DE/current-to-best/1 will be randomly selected, each with probability 0.5, to strengthen the exploitation ability and further improve the solution quality.
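The stage-switching logic of Algorithm 1 can be sketched as below. Note that the exact expression for δ^{(G)} in Equation (33) is not reproduced in this text, so `delta_illustrative` is only a hypothetical stand-in with the qualitative behaviour described above (near 1 early, near 0 once the population converges):

```python
import random

def select_mutation(delta, rng):
    """Strategy switch per Algorithm 1: exploration operators while delta > 0.5,
    exploitation operators afterwards, each pair picked with probability 0.5."""
    if delta > 0.5:
        return "DE/rand/1" if rng.random() > 0.5 else "DE/rand/2"
    return "DE/best/1" if rng.random() > 0.5 else "DE/current-to-best/1"

def delta_illustrative(f_best, f_mean, f_worst):
    """Hypothetical stand-in for Equation (33): a value in [0, 1] measuring
    how far the population mean objective is from the best objective."""
    if f_worst == f_best:
        return 0.0
    return (f_mean - f_best) / (f_worst - f_best)

rng = random.Random(0)
early = select_mutation(0.9, rng)   # exploration stage
late = select_mutation(0.1, rng)    # exploitation stage
```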

Nelder-Mead Method
The Nelder-Mead is a local search method that does not require derivative information of the objective function [25]. In general, the NM method is described for the minimization of an objective function in n-dimensional space. The evolution process of the NM method starts with an initial simplex in the search space, which is a polyhedron with n + 1 vertices, i.e., {x_i : 1 ≤ i ≤ n + 1}. The objective function is evaluated at each vertex, and then all vertices are ranked based on their objective function values, i.e., the vertex with the best objective function value is denoted as x_1 and the vertex with the worst objective function value is denoted as x_{n+1}. The NM method uses four elementary geometric transformations, called reflection, expansion, contraction, and shrinkage. Using these geometric transformations, the simplex moves through the search space towards the optimal solution. The following steps are executed repeatedly until the stopping criteria are reached:

Step 1. Initialization. Given an initial vertex x_1^{(0)}, obtained by the ADE algorithm, generate the remaining n vertices {x_{i+1}^{(0)} : 1 ≤ i ≤ n} according to

x_{i+1}^{(0)} = x_1^{(0)} + λ e_i,

where e_i is the unit vector of the ith axis and λ is the initial step size, set as λ = 1.
Step 2. Sorting. Rank the vertices such that f(x_1^{(k)}) ≤ f(x_2^{(k)}) ≤ ⋯ ≤ f(x_{n+1}^{(k)}), where k is the current iteration. Here, the objective function value is obtained from the ML objective function in Equation (8).

Step 3. Reflection. Generate the vertex x_r^{(k)} by reflecting the worst vertex x_{n+1}^{(k)} through the centroid, as follows:

x_r^{(k)} = x̄^{(k)} + α(x̄^{(k)} − x_{n+1}^{(k)}),

where α > 0 is the reflection coefficient, usually suggested as α = 1, and x̄^{(k)} = (1/n) Σ_{i=1}^{n} x_i^{(k)} is the centroid of the n best vertices. If f(x_1^{(k)}) ≤ f(x_r^{(k)}) < f(x_n^{(k)}), the reflected vertex replaces the worst vertex; if f(x_r^{(k)}) < f(x_1^{(k)}), proceed to Step 4.

Step 4. Expansion. Calculate the vertex x_e^{(k)} in the same direction as x_r^{(k)}, as follows:

x_e^{(k)} = x̄^{(k)} + β(x_r^{(k)} − x̄^{(k)}),

where β > 1 is the expansion coefficient, usually suggested as β = 2. The better of x_e^{(k)} and x_r^{(k)} replaces the worst vertex.

Step 5. Contraction. After reflection, there are two possible contraction cases, depending on whether x_r^{(k)} is better or worse than the worst vertex x_{n+1}^{(k)} (outside and inside contraction, respectively):

x_c^{(k)} = x̄^{(k)} + γ(x_r^{(k)} − x̄^{(k)}) or x_c^{(k)} = x̄^{(k)} + γ(x_{n+1}^{(k)} − x̄^{(k)}),
where 0 < γ < 1 is the contraction coefficient, usually suggested as γ = 0.5. If the contracted vertex does not improve on the worst vertex, proceed to Step 6.

Step 6. Shrinkage. Perform the shrinkage on the vertices ∀i ∈ {2, ⋯, n + 1} as follows:

x_i^{(k)} = x_1^{(k)} + δ(x_i^{(k)} − x_1^{(k)}),

where 0 < δ < 1 is the shrinkage coefficient, usually suggested as δ = 0.5. The optimization process described in Steps 1-6 is repeated until the termination criteria are reached. The obtained optimal solution represents the estimated position x̂ of the unknown target position x.
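The steps above can be condensed into a compact, simplified sketch: it implements only the inside contraction of Step 5, and the stopping rule on the function-value spread is an assumption, not the paper's termination criterion. It is shown here on a simple quadratic rather than on J_ML:

```python
def nelder_mead(f, x0, step=1.0, max_iter=500, tol=1e-10,
                alpha=1.0, beta=2.0, gamma=0.5, delta=0.5):
    """Simplified Nelder-Mead: simplex built from x0 along the unit axes
    (Step 1), then sorting, reflection, expansion, inside contraction,
    and shrinkage until the simplex collapses."""
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(max_iter):
        simplex.sort(key=f)                                   # Step 2: rank vertices
        best, worst = simplex[0], simplex[-1]
        if f(worst) - f(best) < tol:                          # assumed stopping rule
            break
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        xr = [centroid[j] + alpha * (centroid[j] - worst[j])  # Step 3: reflection
              for j in range(n)]
        if f(xr) < f(best):                                   # Step 4: expansion
            xe = [centroid[j] + beta * (xr[j] - centroid[j]) for j in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:                                                 # Step 5: inside contraction
            xc = [centroid[j] + gamma * (worst[j] - centroid[j]) for j in range(n)]
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                             # Step 6: shrink toward best
                simplex = [best] + [[best[j] + delta * (v[j] - best[j])
                                     for j in range(n)] for v in simplex[1:]]
    return min(simplex, key=f)

# Illustrative run on a convex quadratic with minimum at (3, -1).
est = nelder_mead(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0])
```

In the HADENM context, x0 would be the best solution returned by the ADE stage and f would be the ML objective.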

HADENM Algorithm
In this section, the proposed HADENM algorithm, which hybridizes the ADE algorithm with the local search NM method, is introduced in order to efficiently solve the passive target localization problem given in Equation (8). The pseudocode of the HADENM algorithm for the passive target localization problem is presented in Algorithm 2.
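The hybrid structure can be sketched as follows: a global DE search followed by NM refinement of the best individual. This is a simplified stand-in for Algorithm 2, not the paper's implementation; it uses a basic DE/rand/1 loop with fixed F and CR (the paper adapts both), and SciPy's Nelder-Mead for the local step.

```python
import numpy as np
from scipy.optimize import minimize

def hadenm_sketch(f, bounds, pop_size=30, generations=100, seed=0):
    """Illustrative hybrid: basic DE global search, then NM refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1 mutation with fixed parameters (adaptive in the paper)
            F, CR = 0.8, 0.9
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with a guaranteed component from the mutant
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = pop[np.argmin(fit)]
    # NM local refinement of the best DE solution
    res = minimize(f, best, method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-9})
    return res.x
```

The division of labour is the point: DE explores the multimodal ML surface, while NM polishes the best candidate to high accuracy at low cost.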

Cramer-Rao Lower Bound
In the passive target localization problem, the CRLB can be used as a benchmark for evaluating the performance of different algorithms. The CRLB is obtained from the diagonal elements of the inverse of the Fisher information matrix (FIM) [26], denoted by I(x), which is based on the probability density function f(r|x), defined in Equation (6). The FIM can be defined as I(x) = E[(∂ ln f(r|x)/∂x)(∂ ln f(r|x)/∂x)^T], where the corresponding elements follow from the measurement model in Equation (6). Hence, the relationship between the variance and the CRLB can be determined as var(x̂_k) ≥ [I^{-1}(x)]_{kk}, where x̂ is the estimated value of x.

Algorithm 1: Pseudocode of the adaptive mutation operator.
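The CRLB defined in this section can be evaluated numerically. The sketch below assumes the passive TOA measurement model discussed in the introduction, r_i = ||x − tx|| + ||x − rx_i|| + n_i with independent zero-mean Gaussian noise; under that assumption the FIM is a sum of outer products of the range-map gradients. The function name and setup are illustrative.

```python
import numpy as np

def crlb_passive_toa(x, tx, rxs, sigma2):
    """CRLB sketch for a passive TOA model (assumed Gaussian range noise):
    r_i = ||x - tx|| + ||x - rx_i|| + n_i, n_i ~ N(0, sigma2_i)."""
    x, tx = np.asarray(x, float), np.asarray(tx, float)
    I = np.zeros((len(x), len(x)))
    for rx, s2 in zip(rxs, sigma2):
        rx = np.asarray(rx, float)
        # gradient of h_i(x) = ||x - tx|| + ||x - rx_i|| w.r.t. x
        g = (x - tx) / np.linalg.norm(x - tx) + (x - rx) / np.linalg.norm(x - rx)
        I += np.outer(g, g) / s2   # accumulate Fisher information
    crlb = np.linalg.inv(I)        # covariance lower bound
    return float(np.trace(crlb))   # bound on total position variance
```

The square root of the returned trace is the RMSE floor plotted as "CRLB" in the simulation figures.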

Experimental Study
In this section, experiments are conducted to evaluate the localization performance and to perform the analysis of the benefits of the proposed modifications on the optimization performance of the HADENM algorithm. In this regard, the presentation of the experimental results is divided into two subsections, described below.

A Parametric Study of the HADENM Algorithm.
In this subsection, the experiments are carried out to evaluate the performance of the HADENM algorithm by studying the effects of the proposed adaptive scale factor and crossover rate, as well as the proposed adaptive mutation operator on the optimization performance. Furthermore, the effects of the hybridization between the ADE and NM algorithms have been analysed. Then, the experiments are performed in order to verify that the performance is enhanced after combining the previously described improvements.
To evaluate the performance of the proposed HADENM algorithm, the solution error measure (f(x̂) − f(x*)) has been employed, where x̂ denotes the best solution of the algorithm obtained in one run and x* represents the known global optimal solution of the corresponding ML objective function. All experiments for each algorithm have been independently run 30 times, and statistical results are provided. From the statistical point of view, the quality of the obtained solutions has been analysed and compared using two nonparametric statistical hypothesis tests: the Wilcoxon signed-rank test and the Friedman test.
Firstly, the Wilcoxon signed-rank test can be used to determine the significant differences between two samples obtained by the algorithms. This statistical test has been applied with a significance level of α = 0.05. Here, R+ denotes the sum of ranks for the problems in which the first algorithm outperformed the second, and R− is the sum of ranks for the problems where the first algorithm performed worse than the second. According to [41], the null hypothesis in the Wilcoxon signed-rank test assumes that "there is no difference between the mean results of the two samples." On the contrary, the alternative hypothesis is that "there is a difference in the mean results of the two samples." In the statistical analysis, the p value is used and compared to the significance level. Thus, the null hypothesis can be rejected when the p value is less than or equal to α = 0.05. Based on the obtained results of the statistical test, one of the following three signs (+, −, and ≈) has been assigned for the comparison between any two algorithms. Thus, the plus (+) sign denotes that the first algorithm is significantly better than the second, the minus (−) sign means that the first algorithm is significantly worse than the second, and the

approximation (≈) sign denotes that there is no significant difference between the two algorithms. The second test is the Friedman test, which obtains the ranks of all considered algorithms over every tested function, with the aim of finding significant differences in performance between two or more algorithms. In this statistical test, the algorithm with the minimum rank value is considered the best performing algorithm, while the one with the highest rank value is considered the worst. According to [41], the null hypothesis for the Friedman test states that "there is no difference among the performance of all algorithms," whereas the alternative hypothesis states that "there is a difference among the performance of all algorithms."
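Both tests are available in SciPy, so the comparison procedure described above can be reproduced as follows. The solution-error samples here are synthetic stand-ins generated for illustration, not the paper's results.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(1)
# Synthetic solution-error samples for three hypothetical algorithms
# over 30 independent runs (illustrative data only).
err_a = rng.normal(0.10, 0.02, 30)   # stand-in for HADENM
err_b = rng.normal(0.15, 0.03, 30)   # stand-in for a fixed-parameter variant
err_c = rng.normal(0.20, 0.05, 30)   # stand-in for a third variant

# Wilcoxon signed-rank test between two paired samples (alpha = 0.05)
stat_w, p_w = wilcoxon(err_a, err_b)
print("Wilcoxon p =", p_w,
      "-> significant" if p_w <= 0.05 else "-> not significant")

# Friedman test across all three algorithms
stat_f, p_f = friedmanchisquare(err_a, err_b, err_c)
print("Friedman p =", p_f)
```

A p value at or below 0.05 rejects the corresponding null hypothesis, which is exactly the decision rule used to assign the +, −, and ≈ signs in the tables.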
In this regard, the presentation of the obtained results of the considered evaluations is divided into four sub-subsections. In the first sub-subsection, the effectiveness of the proposed adaptive scale factor and crossover rate has been evaluated. In the second sub-subsection, the effectiveness of the proposed adaptive mutation operator has been considered. The third sub-subsection considers the hybridization of the ADE and NM algorithms and the overall performance improvements. Finally, in the fourth sub-subsection, the statistical results of applying the Friedman test between the considered algorithms have been analysed.
10.1.1. Effectiveness of the Adaptive Scale Factor and Crossover Rate. In this sub-subsection, experimental studies have been performed to evaluate the effectiveness and benefits of the proposed adaptive scale factor F_i^(G) (given in Equation (31)) and adaptive crossover rate CR^(G) (given in Equation (32)) on the optimization performance of the HADENM algorithm. Firstly, the effectiveness of the adaptive scale factor is considered, where the performance of the HADENM algorithm is compared with two variants using fixed values of F, as follows:

(1) HADENM F=0.5, which has the same operators as HADENM, except that the scale factor is set to a fixed value of F = 0.5

(2) HADENM F=0.9, which has the same operators as HADENM, except that the scale factor is set to a fixed value of F = 0.9

The summary of statistical results of applying the Wilcoxon test between the proposed HADENM and the above two algorithms is presented in Table 1.
From the results in Table 1, it can be seen that the proposed HADENM algorithm is significantly better than HADENM F=0.5. In the case of HADENM versus HADENM F=0.9, the proposed algorithm has a higher R+ value than R−. This shows the effectiveness of the proposed adaptive scale factor F_i^(G) given in Equation (31) and of the idea of increasing the scale factor during the evolution process. Hence, the HADENM algorithm with the adaptive scale factor F_i^(G) achieves better performance than the same algorithm with a fixed value of F.
Secondly, in order to show the efficiency of the proposed adaptive crossover rate CR^(G) in Equation (32), the performance of the HADENM algorithm is compared with three variants using fixed values of CR, as follows:

(1) HADENM CR=0.1, which has the same operators as HADENM, except that the crossover rate is set to a fixed value of CR = 0.1

(2) HADENM CR=0.5, which has the same operators as HADENM, except that the crossover rate is set to a fixed value of CR = 0.5

(3) HADENM CR=0.9, which has the same operators as HADENM, except that the crossover rate is set to a fixed value of CR = 0.9

Table 2 shows the summary of statistical results of applying the Wilcoxon test between the proposed HADENM and the above three algorithms.
From the results in Table 2, it can be seen that the proposed HADENM algorithm has significantly better performance than HADENM CR=0.5 and HADENM CR=0.9. Furthermore, in the case of HADENM versus HADENM CR=0.1, it can be observed that the proposed algorithm provides higher R+ values than R− in all considered cases. This shows that the proposed adaptive crossover rate CR^(G) plays a vital role in determining the optimal crossover rate value for the considered optimization problem.

Effectiveness of the Proposed Adaptive Mutation Operator. In this sub-subsection, experimental studies have been performed to evaluate the effectiveness of the proposed adaptive mutation operator on the optimization performance of the HADENM algorithm. Two different versions of the HADENM algorithm have been tested and compared against the proposed one:

(1) HADENM-1, which has the same operators as HADENM, except that only the explorative mutations of the proposed adaptive mutation operator are applied. Hence, only the mutation operators DE/rand/1 and DE/rand/2 are chosen randomly, each with a probability of 0.5

(2) HADENM-2, which has the same operators as HADENM, except that only the exploitative mutations of the proposed adaptive mutation operator are applied. In this case, the mutation operators DE/best/1 and DE/current-to-best/1 are randomly selected, each with a probability of 0.5

In Table 3, the summary of statistical results of applying the Wilcoxon test between the proposed HADENM, HADENM-1, and HADENM-2 algorithms is shown.
According to the Wilcoxon test results given in Table 3, it can be observed that the proposed HADENM algorithm provides higher R+ values than R− in both cases. The obtained results indicate that the proposed adaptive mutation operator can effectively keep the balance between exploration and exploitation abilities, thus improving the overall optimization performance of the HADENM algorithm.
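The four mutation strategies named above can be written out explicitly. The sketch below groups them into the explorative and exploitative pairs described in the text; the switching criterion between the two groups is simplified to a boolean flag here, since the exact adaptive rule is defined in the paper's Algorithm 1.

```python
import numpy as np

def mutate(pop, fitness, i, F, rng, explorative):
    """Sketch of the four DE mutation strategies used by the adaptive operator.
    The explorative/exploitative switch is simplified (the paper's Algorithm 1
    defines the actual adaptive rule)."""
    n_pop = len(pop)
    idx = rng.choice([j for j in range(n_pop) if j != i], 5, replace=False)
    r1, r2, r3, r4, r5 = pop[idx]
    best = pop[np.argmin(fitness)]
    if explorative:
        # explorative pair, each chosen with probability 0.5
        if rng.random() < 0.5:
            return r1 + F * (r2 - r3)                   # DE/rand/1
        return r1 + F * (r2 - r3) + F * (r4 - r5)       # DE/rand/2
    # exploitative pair, each chosen with probability 0.5
    if rng.random() < 0.5:
        return best + F * (r1 - r2)                     # DE/best/1
    return pop[i] + F * (best - pop[i]) + F * (r1 - r2) # DE/current-to-best/1
```

The rand-based strategies sample widely around random individuals (exploration), while the best-based strategies pull trial vectors towards the current best solution (exploitation), which is the balance the HADENM-1/HADENM-2 comparison isolates.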

Effectiveness of the Proposed Hybridization.
To study the effects of the proposed hybridization between the ADE and NM algorithms, in this sub-subsection, the following experiment has been performed. In this regard, the performance of the HADENM algorithm without using the NM method has been compared to the performance of the proposed HADENM algorithm. Thus, the summary of statistical results of applying the Wilcoxon test between the two abovementioned algorithms is shown in Table 4.
According to the obtained statistical results in Table 4, it can be concluded that the proposed HADENM algorithm has better performance compared to the algorithm without using the NM method. This shows that the NM method can further enhance the quality of the obtained solution, and in this way, it can improve the optimization performance of the HADENM algorithm.
10.1.4. Comparison of the Proposed Improvements. In this sub-subsection, experimental studies have been performed in order to demonstrate the effectiveness of the proposed HADENM algorithm. As there are more than two algorithms to compare, the overall performance of the considered algorithms has been compared using the Friedman test. In this regard, Table 5 shows the average ranks according to the Friedman test for the considered algorithms, using different values of the variance of the noise 10 log(σ²_ni), where the proposed HADENM algorithm is selected as the base algorithm. The best ranks are shown in bold, and the second-best ranks are in italics.
From Table 5, it can be noted that the p values computed through the Friedman test for different values of the variance of the noise 10 log(σ²_ni) are less than 0.05. Therefore, there is a significant difference in performance between the considered algorithms. Furthermore, it can be observed that the proposed HADENM algorithm outperforms the other considered algorithms for all values of 10 log(σ²_ni), which demonstrates the effectiveness of the modifications proposed in this paper.

Here, the RMSE is employed to evaluate the localization accuracy, which is expressed as RMSE = sqrt((1/N_m) Σ_{n=1}^{N_m} ||x̂^(n) − x||²), where x̂^(n) is the estimated value of the true target position x in the n-th run and N_m = 1000 is the number of simulation runs for a given variance of the noise σ²_ni. Figure 5 shows the results of the first scenario, where the RMSEs of the considered algorithms versus p = 10 log(σ²_ni) are plotted and compared against the CRLB.
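The RMSE defined above is straightforward to compute from a batch of Monte Carlo estimates:

```python
import numpy as np

def rmse(estimates, x_true):
    """RMSE over N_m Monte Carlo runs:
    sqrt((1/N_m) * sum_n ||x_hat^(n) - x||^2)."""
    diff = np.asarray(estimates, float) - np.asarray(x_true, float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))
```

For instance, two runs with estimates (1, 0) and (0, 1) of a true position at the origin give RMSE = sqrt((1 + 1)/2) = 1.0.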
From Figure 5, it can be observed that the RMSE performance of the HADENM algorithm is better than that of the DE and CWLS algorithms, and the proposed algorithm attains the CRLB over the whole considered range of p. The DE and CWLS algorithms achieve an RMSE performance several dB above the CRLB for small measurement noise. However, as the noise level increases, i.e., p ≥ 20 dB, the CWLS algorithm deviates from the CRLB more rapidly.
The simulation results of the second scenario are presented in Figure 6, where the RMSEs of all the considered algorithms versus measurement noise p are plotted.
According to the simulation results in Figure 6, it can be observed that the HADENM algorithm attains the CRLB accuracy and has a superior localization performance compared to the other considered algorithms. Furthermore, there is a significant degradation in the localization accuracy of the CWLS and DE algorithms compared to the first simulation scenario. It is also noted that the RMSE of the CWLS algorithm diverges rapidly from the CRLB for large values of the measurement noise, i.e., when p ≥ 5 dB.
Moreover, the RMSEs of the considered algorithms versus p are plotted in Figure 7 for the third simulation scenario, where the true position of the passive target is randomly generated within the considered area for each simulation run.
As expected, the results of the third simulation scenario, presented in Figure 7, show that the HADENM algorithm attains the CRLB accuracy and provides superior performance over the CWLS and DE algorithms. Therefore, summarizing the results of the three simulation scenarios, presented in Figures 5-7, it can be concluded that the proposed HADENM algorithm has a better localization performance compared to the considered algorithms and successfully attains the CRLB for every considered scenario. Furthermore, it is observed that the proposed algorithm is less sensitive to the changes in the geometric configuration of the transmitter, receivers, and target.
For a better comparison of the simulation results, the cumulative distribution functions (CDFs) of the passive target localization error of the considered algorithms are investigated, with different variances of the measurement noise σ²_ni. The localization error (LE) is defined as the Euclidean distance between the estimated and the true position of the passive target, i.e., LE^(n) = ||x̂^(n) − x||, ∀n ∈ {1, ⋯, N_m} (49). Figure 8 represents the simulation results for the second scenario with the corresponding CDFs in terms of the localization error obtained for each algorithm for different variances of the measurement noise σ²_ni. From the CDFs of the localization error, depicted in Figure 8, it can be observed that for a fixed CDF percentage, e.g., 90%, under the measurement noise σ²_ni = 1 m², the HADENM algorithm has the lowest localization error of 1.2 m, while the DE and CWLS algorithms have 1.3 m and 1.5 m, respectively. When the variance of the measurement noise is increased to σ²_ni = 10 m², the HADENM algorithm has a localization error of 3.8 m, while the DE and the CWLS show localization errors of 3.9 m and 4.2 m, respectively. For a higher value of measurement noise, e.g., σ²_ni = 20 m², the HADENM algorithm has a localization error of 5.6 m, while the DE and CWLS algorithms have 5.8 m and 6.1 m, respectively. Based on the above, it is evident that the HADENM algorithm has a smaller localization error compared to the other considered algorithms.
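The per-run localization errors and the CDF reading used above (the error at a fixed CDF percentage, e.g., 90%) can be computed as follows; `error_at_percentile` is an illustrative helper name, not from the paper.

```python
import numpy as np

def localization_errors(estimates, x_true):
    """LE^(n) = ||x_hat^(n) - x|| for each Monte Carlo run n."""
    diff = np.asarray(estimates, float) - np.asarray(x_true, float)
    return np.linalg.norm(diff, axis=1)

def error_at_percentile(le, pct=90):
    """Localization error at a given CDF percentage (e.g., the 90% point)."""
    return float(np.percentile(le, pct))
```

Reading the 90% point of the empirical CDF is equivalent to taking the 90th percentile of the per-run errors, which is how the 1.2 m / 1.3 m / 1.5 m comparisons in the text are obtained from the curves.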
Finally, the influence of increasing the number of receivers on the localization accuracy of the different algorithms is investigated for the first simulation scenario. In this regard, an arrangement of N_j = 21 receivers uniformly distributed on a circle of radius R = 60√2 m is considered, where the coordinates of the i-th receiver are obtained as x_r_i = [R cos φ_i, R sin φ_i]^T, ∀i ∈ {1, 2, ⋯, N_j}. Figure 9 shows the RMSE performances of the considered algorithms versus the number of receivers, when the variance of the measurement noise is σ²_ni = 1 m². From Figure 9, it is observed that as the number of receivers increases from 4 to 15, the RMSE performances of the CWLS, DE, and HADENM algorithms improve significantly. A further increase in the number of receivers does not noticeably enhance the localization accuracy, and the differences between the RMSE values of the considered algorithms become smaller. Therefore, based on the obtained results, it can be concluded that the proposed HADENM algorithm is robust against variations in the topology and provides superior performance among all the considered algorithms.
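The receiver geometry above can be generated as follows. Uniformly spaced angles φ_i = 2π(i − 1)/N_j are an assumption for this sketch; the text only states that the receivers are uniformly distributed on the circle.

```python
import numpy as np

def circular_receivers(n_rx, radius):
    """Receivers on a circle: x_r_i = [R cos(phi_i), R sin(phi_i)]^T.
    Uniformly spaced angles are assumed here."""
    phi = 2 * np.pi * np.arange(n_rx) / n_rx
    return np.column_stack((radius * np.cos(phi), radius * np.sin(phi)))

# The arrangement considered in the text: 21 receivers, R = 60*sqrt(2) m
rx = circular_receivers(21, 60 * np.sqrt(2))
```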

Computational Complexity of the Considered Algorithms. In this sub-subsection, the computational complexity of the HADENM algorithm and the other considered algorithms is presented, and an analysis of the average computation time is given for comparison. It should be noted that in this paper, the computational complexity of the considered algorithms is only analysed in a single generation. It has been previously shown that the computational complexity of the CWLS algorithm is O(G_max(n + 1)) [8], while the conventional DE algorithm has a complexity of O(G_max N_P n) [42]. During one generation of the proposed HADENM algorithm, all the individuals in the population are sorted according to the objective function value, where the average computational complexity of this process is O(N_P log(N_P)). Then, all individuals in the population go through the mutation, crossover, and selection operations, which have a computational complexity of approximately O(N_P(n + C_oc + C_os)) [43], where C_oc and C_os denote the costs of the crossover and selection operations in the DE algorithm, respectively. Afterwards, the accuracy of the best solution is further improved by the NM algorithm, which has a computational complexity of O(n log n) [44]. Based on the above considerations, the computational complexity of the HADENM algorithm can be simplified to O(G_max N_P n + n log n).
Next, the average computation times in searching for the global optimum have been measured on the same computer with a 3.2 GHz CPU and 16 GB of RAM. Based on the considered simulation scenarios, a comparison of the average computation times of the CWLS, DE, and HADENM algorithms is shown in Table 6. The CWLS algorithm has the fastest implementation among the considered algorithms, while there is no significant difference between the HADENM and DE algorithms. Combined with the localization accuracy analysis, the results in Table 6 indicate that the proposed HADENM algorithm achieves the best compromise between localization accuracy and average computation time in searching for the global optimum.
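Average computation times of the kind reported in Table 6 can be measured with a simple wall-clock harness; this is a generic sketch, not the paper's measurement code, and `solver` stands for any localization routine.

```python
import time
import numpy as np

def average_runtime(solver, n_runs=30):
    """Average wall-clock time of a solver over repeated independent runs."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        solver()  # one full search for the global optimum
        times.append(time.perf_counter() - t0)
    return float(np.mean(times))
```

Using `time.perf_counter` (a monotonic high-resolution clock) rather than `time.time` avoids distortion from system clock adjustments during the runs.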

Conclusion
In this paper, the passive target localization problem has been considered, based on TOA measurements, for a system with one transmitter and a set of receivers. In order to solve this nonlinear and nonconvex localization problem even in highly noisy environments, an improved HADENM algorithm, based on the hybridization of the ADE and NM algorithms, has been proposed. An adaptive mutation operator has been developed to provide a good balance between the global exploration and local exploitation abilities. In addition, to further enhance the optimization performance, the control parameters of the ADE are adaptively updated during the evolution process. Furthermore, the exploitation ability is enhanced by the NM method, which improves the accuracy of the best solution previously obtained by the ADE algorithm. To evaluate the benefits of the proposed modifications on the optimization performance, a statistical analysis has been conducted.
Based on the comparison results between the HADENM algorithm and its versions, it can be concluded that the modifications proposed in this paper improve the overall optimization performance. Furthermore, the simulation results show that the proposed HADENM algorithm provides superior localization performance, under different measurement noise conditions, compared to the DE and CWLS algorithms. Moreover, the HADENM algorithm attains the CRLB accuracy and exhibits better robustness against variations in network topology and under high measurement noise. In this way, the HADENM algorithm provides both accuracy and robustness compared to the other considered algorithms.

Future research can be focused on further improving the performance of the proposed algorithm and applying it to other complex optimization problems in WSNs.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.