A Novel UAV Path Planning Algorithm Based on Double-Dynamic Biogeography-Based Learning Particle Swarm Optimization

Particle swarm optimization (PSO), one of the classical path planning algorithms, has been applied to unmanned aerial vehicle (UAV) path planning more frequently in recent years, and a large number of studies on UAV path planning based on modified PSO have been reported. However, most UAV path planning algorithms still optimize only one kind of terrain problem, namely mountain terrain. At the same time, many modified PSO algorithms suffer from problems such as insufficient convergence and unsatisfactory efficiency. In this paper, six kinds of terrain functions for UAV path planning are proposed to simulate real-world applications: city, village without houses, village with houses, mountainous area without houses, mountainous area with houses, and mountainous area with a huge building. Inspired by CLPSO and BLPSO, we propose a new double-dynamic biogeography-based learning particle swarm optimization (DDBLPSO) algorithm to solve these problems. A double-dynamic biogeography-based learning strategy, replacing the traditional mechanism of learning from the personal and global best particles, is used to select the learning targets. In this strategy, each particle learns from the better of two selected particles, neither of which is worse than itself. If all components of a particle learn only from itself, one random component is replaced by the corresponding component of another particle. In this way, particles learn sufficiently from better individuals while retaining the ability to jump out of local optima. The superiority of our algorithm is verified against four relevant algorithms, a PSO variant, and a BBO variant on the CEC2015 benchmark suite. A real-world application demonstrates that the proposed algorithm outperforms the same competitors on both small-scale and large-scale problems. This paper thus shows a good application of our novel algorithm.


Introduction
UAV path planning is formulated in a space which represents the environment experienced by the UAV during its flight. A path is a link formed by a set of points and the line segments connecting them, and each path is a candidate solution. Let the population be X = [X_1, X_2, ..., X_N], where N is the population size and X_i is the ith solution. Every path has m points, excluding the fixed starting and end points: X_i = [x_{i1}, y_{i1}, z_{i1}; ...; x_{ij}, y_{ij}, z_{ij}; ...; x_{im}, y_{im}, z_{im}], where (x_{ij}, y_{ij}, z_{ij}) is the coordinate of the jth point of the ith path. A solution is therefore an m × 3 matrix, and the population is an m × 3 × N tensor. The space is divided into m + 1 equal parts along the X axis, which ensures that the flight path goes from the starting point to the end point without looping back on itself.
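The path encoding above can be sketched as follows; this is a minimal illustration, and the function name, argument layout, and bounds are our own assumptions rather than the paper's code.

```python
import numpy as np

def init_paths(N, m, start, end, bounds, rng=None):
    """Initialize N candidate paths of m intermediate waypoints each.

    The x coordinates are fixed on an even grid between the start and end
    points (the space is cut into m + 1 equal parts along the X axis), so
    a path can never loop back on itself; y and z are sampled uniformly
    inside the given search bounds.
    """
    rng = np.random.default_rng(rng)
    (y_lo, y_hi), (z_lo, z_hi) = bounds
    # Even subdivision of the X axis between the fixed endpoints.
    xs = np.linspace(start[0], end[0], m + 2)[1:-1]
    pop = np.empty((N, m, 3))
    pop[:, :, 0] = xs                               # fixed x grid
    pop[:, :, 1] = rng.uniform(y_lo, y_hi, (N, m))  # free y coordinates
    pop[:, :, 2] = rng.uniform(z_lo, z_hi, (N, m))  # free z coordinates
    return pop
```

Each of the N slices of the returned tensor is one m × 3 solution matrix, matching the population layout described above.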
where W_i (i = 1, 2, ..., m + p) are weight coefficients. In this paper, the UAV aims to fly as short a distance as possible, fly as low as possible, avoid crossing danger zones, and avoid colliding with the ground and buildings. Accordingly, three objectives and one constraint are selected, described as follows: where W_1, W_2, W_3, W_4 ∈ (0, 1] are weight coefficients, f_length is a path function penalizing longer paths, f_altitude is an altitude function penalizing higher average altitudes, f_danger is a danger function penalizing paths that pass through danger zones, and c_collision is a collision function penalizing paths that collide with the ground or buildings. f_length, f_altitude, and f_danger are objective functions, and c_collision is a constraint function. The path function, which penalizes longer paths, is modelled as equation (4) in this paper.
where L_P1P2 is the length of the line segment connecting the starting point P1 and the end point P2, and L_traj is the actual length of the trajectory, i.e., the length of the broken line connecting all points from the starting point to the end point. It is easy to see that f_length ∈ (0, 1]. The altitude function, which penalizes higher average altitudes, is modelled as equation (5) in this paper.
where A_traj is the average altitude and Z_min and Z_max are the lower and upper limits of the elevation in the search space. In practice, a very low height rather than zero is the optimal choice, so Z_min is a small positive number. It is easy to see that f_altitude ∈ (0, 1]. The danger function, which penalizes paths passing through danger zones, is modelled as equation (6) in this paper.
where L_danger is the total length of the subsections of the trajectory that pass through danger zones, d_i is the diameter of the ith danger zone, and n is the number of danger zones. In this paper, a danger zone typically represents a region that radar can detect and is described as a hemispheric region. If f_danger > 1, it is set to 1 to ensure f_danger ∈ (0, 1].
The collision function, which penalizes paths colliding with the ground or buildings, is modelled as follows: where L_infeasible is the total length of the subsections of the trajectory that travel below the ground or buildings, P is the penalty constant, and L_traj is the length of the trajectory. It follows that c_collision ∈ {0} ∪ [P, P + 1]. If the cost function F(X_i) is greater than P, then X_i is an infeasible solution.
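The weighted-sum cost described above can be sketched as follows. This is only an illustrative implementation: the function name, the segment-sampling scheme for the danger and collision terms, and the `terrain(x, y)` ground-height interface are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def path_cost(path, start, end, danger_zones, terrain, z_min, z_max,
              W=(1.0, 1.0, 1.0, 1.0), P=100.0, samples=20):
    """Weighted sum of f_length, f_altitude, f_danger and c_collision.

    `path` is the m x 3 waypoint matrix; `danger_zones` holds hemispheres
    as (cx, cy, radius); `terrain(x, y)` returns the ground height.
    """
    pts = np.vstack([start, path, end])
    segs = np.diff(pts, axis=0)
    seg_len = np.linalg.norm(segs, axis=1)
    L_traj = seg_len.sum()

    # f_length: straight-line distance divided by actual trajectory length.
    f_length = np.linalg.norm(np.asarray(end) - np.asarray(start)) / L_traj

    # f_altitude: normalized average altitude of the waypoints.
    f_altitude = (pts[:, 2].mean() - z_min) / (z_max - z_min)

    # Sample each segment to approximate the danger / infeasible lengths.
    ts = np.linspace(0, 1, samples)
    L_danger = L_infeasible = 0.0
    for p0, p1, l in zip(pts[:-1], pts[1:], seg_len):
        q = p0 + ts[:, None] * (p1 - p0)
        in_danger = np.zeros(samples, dtype=bool)
        for cx, cy, r in danger_zones:
            in_danger |= (q[:, 0]-cx)**2 + (q[:, 1]-cy)**2 + q[:, 2]**2 <= r**2
        L_danger += in_danger.mean() * l
        L_infeasible += (q[:, 2] < terrain(q[:, 0], q[:, 1])).mean() * l

    # f_danger: danger length over the sum of zone diameters, capped at 1.
    f_danger = (min(L_danger / sum(2*r for _, _, r in danger_zones), 1.0)
                if danger_zones else 0.0)
    # c_collision: 0 if feasible, else in [P, P + 1].
    c_collision = 0.0 if L_infeasible == 0 else P + L_infeasible / L_traj
    W1, W2, W3, W4 = W
    return W1*f_length + W2*f_altitude + W3*f_danger + W4*c_collision
```

A cost above P immediately flags an infeasible path, matching the {0} ∪ [P, P + 1] range of c_collision described above.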
There are many modified PSO algorithms, and some of them have been applied to UAV path planning [1-10]. However, many PSO variants still suffer from insufficient convergence and accuracy and unsatisfactory efficiency. In this paper, we propose a novel double-dynamic biogeography-based learning particle swarm optimization (DDBLPSO) algorithm to address these problems. The algorithm not only alleviates the insufficient convergence and accuracy and unsatisfactory efficiency in global optimization but can also be applied to UAV path planning.
The main contributions of our paper are summarized as follows.
(i) A novel double-dynamic biogeography-based learning strategy replacing the traditional learning mechanism is proposed. This strategy makes the most of the advantages of superior particles by letting each particle learn only from particles that are not worse than itself.
(ii) Six kinds of terrain functions are designed for UAV path planning.
(iii) The experimental results show that DDBLPSO achieves the best performance compared with four other representative algorithms, a PSO variant, and a BBO variant on the CEC2015 benchmark functions and in UAV path planning.
The remainder of the paper is organized as follows. Section 2 gives the literature review. Section 3 introduces several existing algorithms, namely PSO, BBO, CLPSO, and BLPSO. Section 4 details the proposed DDBLPSO algorithm. Section 5 presents simulation results for global optimization. The application to UAV path planning is elaborated in Section 6. Section 7 draws the conclusions.

Literature Review
Unmanned aerial vehicles (UAVs) have been applied in many areas with the rapid progress of science and technology, and evolutionary-algorithm-based methods for the UAV path planning problem have long been a hot topic. In [1], the authors proposed an evolutionary algorithm based on offline/online path planning for UAV navigation. Phase-angle-encoded and quantum-behaved particle swarm optimization was proposed and applied to three-dimensional route planning for UAVs [2]. Pehlivanoglu [3] introduced a new vibrational genetic algorithm enhanced by the Voronoi diagram for path planning of autonomous UAVs. An improved constrained differential evolution algorithm was introduced for UAV global route planning [4]; the algorithm demonstrates good performance in terms of solution quality, robustness, and constraint-handling ability. A method was proposed to compare planner performance by jointly employing several general and problem-specific quality indexes, taking into account the complexity and particularities of the problem [5]. In [6], a novel predator-prey pigeon-inspired optimization (PPPIO) was proposed to solve the three-dimensional path planning problem for unmanned combat air vehicles (UCAVs) in dynamic environments. This algorithm mainly focuses on optimizing the flight route under different types of constraints in a complex combat environment. The modified wolf pack search (WPS) was applied to compute quasi-optimal trajectories for rotor-wing UAVs in complex three-dimensional (3D) spaces, including real and fake 3D spaces [7]. In [8], the authors proposed a constrained adaptive multiobjective differential evolution algorithm for bistatic SAR path planning. It generates multiple feasible paths for the UAV receiver with different trade-offs between UAV navigation and bistatic SAR imaging performance.
e authors compared genetic algorithm and particle swarm optimization for real-time UAV path planning in [9].
Particle swarm optimization (PSO) [10] is one of the most commonly used algorithms for UAV path planning, and many improved PSO variants have emerged in recent years. In [11], the authors put forward an adaptive particle swarm optimization (APSO) algorithm, in which two main steps adaptively adjust the parameters according to the swarm's evolutionary state (exploration, exploitation, convergence, or jumping out) in each generation; an elitist learning strategy is then used when the evolutionary state is classified as convergence. Nickabadi et al. [12] studied adaptive inertia weights and introduced a novel particle swarm optimization algorithm; numerical experiments show that it is quite effective in adapting the value of w in both dynamic and static environments. Li and Yao [13] presented a new cooperative coevolving particle swarm optimization (CCPSO) algorithm to scale up PSO for large-scale optimization problems (up to 2000 real-valued variables). Comprehensive learning particle swarm optimization (CLPSO) uses a learning strategy whereby all other particles' historical best information is used to update a particle's velocity [14]. In [15], the authors proposed a novel biogeography-based optimization algorithm with momentum migration and taxonomic mutation to deal with problems whose function values change dramatically or barely. In [16], the authors proposed an enhanced particle swarm optimization algorithm (pkPSO) by combining k-nearest neighbours (k-NN) with pattern search (PS) and verified the cooperative effect of the k-NN and PS strategies. A novel particle swarm optimization algorithm was proposed for parameter determination and feature selection of support vector machines [17].
A novel PSO-GA-based hybrid training algorithm with Adam optimization, which performs well in training artificial neural networks, was introduced by Yadav and Anubhav [18]. In [19], the authors presented a cellular-learning-automata-based bare-bones PSO algorithm with maximum likelihood rotated mutations. The PSO technique was used to identify the uncertain physical parameters of a real vehicle ETC system [20]. In [21], the authors proposed a new particle swarm optimization algorithm with a simple PID-based strategy, which has good global optimization ability. Biogeography-based optimization (BBO) is an algorithm that simulates biological migration to search for optimal solutions [22], and biogeography-based learning is an effective strategy for optimization. In [23], biogeography-based learning particle swarm optimization (BLPSO) adopts a learning strategy based on the migration operator of biogeography-based optimization and outperforms other representative algorithms. In [24], the authors introduced a hybrid differential evolution with biogeography-based optimization for global numerical optimization.

Particle Swarm Optimization (PSO).
Particle swarm optimization is a classical evolutionary computation technique first proposed by Eberhart and Kennedy [10]. Inspired by the predation behaviour of bird flocks, the main idea of PSO is cooperation and information sharing among individuals to find the optimal solution. PSO simulates a bird in a flock as a massless particle that has only a velocity and a position: the velocity represents how fast and in which direction the particle moves, and the position represents where the particle is. Each particle learns only from its personal best experience and the global best experience of the entire swarm. Let x_i = (x_{i1}, x_{i2}, ..., x_{iD})^T and v_i = (v_{i1}, v_{i2}, ..., v_{iD})^T denote the position and velocity of particle i (i = 1, 2, ..., N), respectively, where D is the dimension of the search space and N is the population size. Let pbest_i = (pbest_{i1}, pbest_{i2}, ..., pbest_{iD})^T and gbest = (gbest_1, gbest_2, ..., gbest_D)^T be the personal best position of particle i and the global best position of the whole swarm. The velocity and position of each particle are updated as follows: where i = 1, 2, ..., N; d = 1, 2, ..., D; w is the inertia weight; c_1 and c_2 are the acceleration coefficients; and rand_1(0, 1), rand_2(0, 1) ∈ [0, 1] are uniform random numbers. The algorithm caps the velocity at a maximum value to prevent it from growing too large, and a particle that flies out of the initial region is reset to its border.
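The standard PSO update described above can be sketched in a few lines; the default parameter values here are illustrative, not the paper's settings.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             v_max=1.0, x_min=-10.0, x_max=10.0, rng=None):
    """One velocity/position update of standard PSO.

    x, v, pbest: (N, D) arrays; gbest: broadcastable to (N, D).
    Velocity is capped at v_max and positions are clamped to the
    border of the initial region, as described in the text.
    """
    rng = np.random.default_rng(rng)
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    v = np.clip(v, -v_max, v_max)      # cap the velocity
    x = np.clip(x + v, x_min, x_max)   # clamp to the initial region
    return x, v
```

Calling `pso_step` once per generation, after refreshing `pbest` and `gbest`, reproduces the basic PSO loop.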

Biogeography-Based Optimization (BBO).
In [22], the author proposed a new algorithm named biogeography-based optimization, which solves optimization problems by simulating biological migration. In BBO, a solution is also called a habitat, the fitness of a solution is called the habitat suitability index (HSI), and a component of a solution is called a suitability index variable (SIV). The main body of BBO consists of the migration operation and the mutation operation.
Each habitat x_i (i = 1, 2, ..., N) has two parameters describing migration: an immigration rate λ_i and an emigration rate μ_i. Both are closely related to the HSI. A high-HSI habitat has a strong tendency toward outward migration: its emigration rate is high and its immigration rate is low due to the pressure of species competition. For a low-HSI habitat, the opposite holds. Assume habitat x_i currently accommodates S_i species and S_max is the maximum number of species. The immigration rate λ_i and emigration rate μ_i are usually described as follows: where I is the maximum immigration rate and E is the maximum emigration rate. Each component x_id (d = 1, 2, ..., D) of habitat x_i immigrates depending on the immigration rate λ_i, and the source habitat x_j is selected depending on the emigration rate μ_j. The properties of a habitat, such as its HSI and number of species, may change due to unexpected events; the mutation rate therefore depends on the species probability. According to biogeography, when the number of species in a habitat is too large or too small, the species probability is low; when the number of species is moderate, the species probability is high. The mutation rate m_i is described as follows: where P_i is the species probability of x_i, determined by the number of species S_i, P_max is the maximum species probability, and m_max is the maximum mutation rate.
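The linear migration model and the species-probability-based mutation rate can be sketched as below, assuming the common linear forms λ_i = I(1 − S_i/S_max), μ_i = E·S_i/S_max and m_i = m_max(1 − P_i/P_max); the function names are ours.

```python
import numpy as np

def migration_rates(species_counts, I=1.0, E=1.0):
    """Linear immigration/emigration rates for each habitat.

    A habitat with many species (high HSI) gets a high emigration
    rate and a low immigration rate; a sparse habitat the opposite.
    """
    S = np.asarray(species_counts, dtype=float)
    S_max = S.max()
    lam = I * (1.0 - S / S_max)   # immigration: high for poor habitats
    mu = E * S / S_max            # emigration: high for good habitats
    return lam, mu

def mutation_rates(species_probs, m_max=0.01):
    """m_i = m_max * (1 - P_i / P_max): unlikely species counts mutate most."""
    P = np.asarray(species_probs, dtype=float)
    return m_max * (1.0 - P / P.max())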

Comprehensive Learning Particle Swarm Optimization (CLPSO).
Comprehensive learning particle swarm optimization [14] adopts a comprehensive learning strategy to select the objects to learn from, instead of learning only from itself and the global best individual. The velocity updating equation in CLPSO is defined as follows: where f_i defines which particle's pbest particle i should follow and rand(0, 1) ∈ [0, 1] is a uniform random number. The inertia weight w in CLPSO decays linearly. CLPSO assigns a learning probability Pc_i to each particle i using the following equation: Each solution x_i thus learns from many particles instead of only two. Each component of particle i learns from itself or from another particle depending on the learning probability Pc_i. If all components of particle i learn from itself, a random component is made to learn from another particle. The higher the fitness value of a solution, the greater the probability that it will be learned from.

Biogeography-Based Learning Particle Swarm Optimization (BLPSO).
Biogeography-based learning particle swarm optimization [23] adopts the migration operation of BBO to select the objects to learn from. Each component of particle i learns from itself or from a selected particle j depending on the immigration rate λ_i; the particle j is selected depending on the emigration rate μ_j. If all components of particle i learn from itself, a random component is made to learn from another particle. The higher the fitness value of a solution, the smaller its immigration rate and the greater its emigration rate: better particles are more likely to be learned from, and worse particles are more likely to learn from others. The velocity updating equation in BLPSO is the same as equation (11). The learning probability Pc_i for particle i and the selection probability Ps_j for particle j can be expressed as follows:

It should be noted that the learning probability Pc differs from the selection probability Ps. Particle i learns from other particles with probability Pc_i, while particle j is selected with probability Ps_ij as the object from which particle i learns. The relationship between the learning probability and the selection probability is shown in Figure 1.

It is worth mentioning that BLPSO uses the quadratic migration model instead of the linear one. The quadratic migration model is as follows: The differences between CLPSO and BLPSO lie in the learning probability and the strategy for selecting the learning particle.
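The per-particle learning probability Pc_i can be sketched as below, using the exponential ramp from the original CLPSO paper; the 0.05/0.45 constants are the common CLPSO setting and are an assumption here, since the paper's own equation is not reproduced in this excerpt.

```python
import numpy as np

def clpso_learning_probability(N, a=0.05, b=0.45):
    """Learning probability Pc_i for particles i = 0..N-1.

    An exponential ramp from a to a + b, so particles in the swarm
    get different exploration rates: a small Pc means a particle
    mostly follows its own pbest, a large Pc means it often learns
    from other particles' pbests.
    """
    i = np.arange(N)
    return a + b * (np.exp(10.0 * i / (N - 1)) - 1.0) / (np.exp(10.0) - 1.0)
```

The ramp is strictly increasing, so every particle gets a distinct learning probability between 0.05 and 0.5.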

Double-Dynamic Biogeography-Based Learning Particle Swarm Optimization (DDBLPSO)
Although many promising PSO variants have emerged, they still suffer from insufficient convergence and accuracy and unsatisfactory efficiency. These weaknesses cause them to perform poorly on complicated functions such as multimodal functions, hybrid functions, composition functions, and real-world problems. In both CLPSO and BLPSO, there is a probability that a particle learns from particles with lower fitness than itself. Such strategies prevent some superior particles from fully exploiting their superiority and result in slow convergence. A double-dynamic selection strategy based on biogeography-based learning is proposed in this paper, in which each particle learns from the better of two dynamic roulette winners. This strategy guides a particle to learn only from others that are not worse than itself, so the advantages of the superior particles can be greatly exploited.

Dynamic Biogeography-Based Learning.
The original biogeography-based learning strategy guides particles to learn from promising particles, but most particles still have a certain probability of learning from worse ones. For each particle, the dynamic biogeography-based learning strategy guides it to learn only from particles that are not worse than itself. All particles are sorted in ascending order of fitness. Particle i learns from a selected particle if a uniform random number generated in [0, 1] is smaller than the predefined λ_i. The probability Ps_ij of particle j being selected by particle i is defined as follows: where j ≥ i. The greater the index, the higher the fitness of the particle. The learning object, particle j, is selected with a dynamic roulette built only from the more competitive individuals μ_i, μ_{i+1}, ..., μ_N. In this way, a particle never learns from worse particles, which reduces the chance of particles deteriorating step by step. Generally speaking, the dynamic biogeography-based learning strategy gives worse particles larger exploration ability by learning from others, while giving better particles fewer objects to learn from. Note that the best solution can only learn from itself.
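The dynamic roulette can be sketched as follows, assuming (as in the paper's ascending sort) that a larger index means a fitter particle; the function name and array layout are illustrative.

```python
import numpy as np

def dynamic_roulette(i, mu, rng=None):
    """Select a learning target for particle i among particles j >= i.

    Particles are sorted in ascending order of fitness, so every
    candidate j >= i is at least as good as i; the winner is drawn
    with probability proportional to the emigration rates mu_j.
    """
    rng = np.random.default_rng(rng)
    w = np.asarray(mu, dtype=float)[i:]
    probs = w / w.sum()
    return i + rng.choice(len(w), p=probs)
```

For the best particle (i = N − 1) the candidate pool contains only itself, matching the remark that the best solution can only learn from itself.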

Double-Dynamic Biogeography-Based Learning.
The original biogeography-based learning strategy selects only one particle to learn from, while the original comprehensive learning strategy selects two particles and then chooses the better one. Selecting two particles gives a greater chance of producing high-fitness offspring than selecting only one. Inspired by the comprehensive learning strategy, double-dynamic biogeography-based learning also selects two particles and keeps the better one to ensure a high rate of convergence. This strategy allows particles to learn adequately from better particles. For each component x_id, the learning object x_{Li,d} is selected using the following equation: where x_a and x_b are two particles selected by the dynamic roulette, and x_ad and x_bd are the dth components of x_a and x_b, respectively.
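The double-dynamic selection can be sketched as below: draw two candidates with the dynamic roulette over particles j ≥ i and keep the fitter one. It assumes, like the surrounding text, that particles are sorted ascending by fitness and larger fitness is better; the function name is ours.

```python
import numpy as np

def ddbl_target(i, mu, fitness, rng=None):
    """Double-dynamic selection of a learning target for particle i.

    Two indices a, b >= i are drawn with probability proportional to
    the emigration rates mu_j, and the one with higher fitness wins.
    """
    rng = np.random.default_rng(rng)
    w = np.asarray(mu, dtype=float)[i:]
    p = w / w.sum()
    a, b = i + rng.choice(len(w), size=2, p=p)  # two roulette winners
    return a if fitness[a] >= fitness[b] else b
```

Because both candidates come from the pool j ≥ i, the winner is never worse than particle i itself.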

Mutation.
With the above strategies, particles fly toward better areas but can still easily fall into a local optimum. Therefore, if a particle learns only from itself, one random component is replaced by another particle's component. Only high-fitness particles are likely to learn only from themselves, for two reasons: a high-fitness particle has a larger immigration rate, and it has only a few particles better than itself. This strategy helps particles jump out of local optima. If all components of x_i learn only from themselves, the learning object x_Li is set as follows: where j is a random particle index and l ∈ {1, 2, ..., D} is a random component index.
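The mutation fallback can be sketched as follows: when particle i's learning vector ended up equal to its own pbest, one random component is overwritten with the corresponding component of another particle's personal best. The function name and array layout are illustrative.

```python
import numpy as np

def mutate_self_learner(xL_i, pbest, i, rng=None):
    """Replace one random component of particle i's learning vector.

    xL_i: the (D,) learning vector that currently equals pbest[i];
    pbest: the (N, D) matrix of personal best positions.  A random
    other particle j and a random component l are chosen, and
    xL_i[l] is set to pbest[j, l] so the particle can still escape
    a local optimum.
    """
    rng = np.random.default_rng(rng)
    N, D = pbest.shape
    j = rng.choice([k for k in range(N) if k != i])  # random other particle
    l = rng.integers(D)                              # random component
    xL_i = xL_i.copy()
    xL_i[l] = pbest[j, l]
    return xL_i
```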
In the early stage of DDBLPSO, the search area is large because the initial particles are evenly distributed in it. In the late stage, the search area becomes smaller and smaller because most particles crowd closely together. However, owing to the mutation mechanism, a better particle has a greater probability of learning from another particle's personal best position, which prevents the algorithm from falling into a local optimum.

Algorithm Process.
Double-dynamic biogeography-based learning particle swarm optimization is proposed based on the above strategies. The process of DDBLPSO is described in Algorithm 1.
DDBLPSO, CLPSO, and BLPSO have some commonalities and differences. It is necessary to illustrate the differences among them. Two main differences are summarized as follows.
(i) In CLPSO, the learning object is selected uniformly and randomly. In BLPSO, the learning object is selected by roulette based on equation (14). In DDBLPSO, the learning object is selected by the dynamic roulette based on equation (16).
(ii) In BLPSO, each component of a particle has only one learning object. However, in CLPSO and DDBLPSO, each component of a particle has two candidate learning objects and selects the better one.

Test Functions and Parameter Settings.
The CEC2015 [25] benchmark functions are used to verify the performance of DDBLPSO. In brief, f1 and f2 are unimodal functions, f3-f5 are multimodal functions, f6-f8 are hybrid functions, and f9-f15 are composition functions. CLPSO [14] is a classical algorithm based on PSO [10], and BLPSO [23] is an algorithm based on BBO [22] and PSO. DDBLPSO is inspired by CLPSO and BLPSO and thus has an obvious relationship with BBO, PSO, CLPSO, and BLPSO, so these four algorithms were selected for the comparison experiments. We also selected two other representative algorithms, PBSPSO [21] and DEBBO [24]. We conducted four sets of numerical experiments, labelled experiments one to four, in which DDBLPSO is compared with the four relevant algorithms (PSO, BBO, CLPSO, and BLPSO), a PSO variant (PBSPSO), and a BBO variant (DEBBO). The design of the four experiments is shown in Table 1. The maximal number of function evaluations (maxFES) is set to 10000 * D, the search range is [−100, 100]^D, and 30 independent runs are conducted in MATLAB 2017b. Other parameters are shown in Table 2.

Experimental Results and Discussion.
Results of experiments one to four are statistically shown in Tables 3-6, respectively. Tables 7-10 show the ranks of the algorithms according to the Friedman test for experiments one to four, respectively. Min, Mean, Median, and Std denote the minimum, mean, and median function error values and the standard deviation of the error values, respectively. To exhibit the evolution trend of the algorithms more vividly, the convergence curves of the average best fitness in experiments one and three are shown in Figures 2 and 3; the curves for experiments two and four are not shown because they are similar to those of experiments one and three, respectively.

(1) Initialize the maximum velocity v_max, the upper bound x_max and lower bound x_min of the initial region, the population x, the velocity v, the inertia weight w, the acceleration coefficient c, and the maximal number of function evaluations maxFES;
(2) Record the personal best position pbest and the global best position gbest;
(3) while FES < maxFES do
(4)   Sort solutions in ascending order of fitness;
(5)   Calculate the immigration rate λ_i and emigration rate μ_i (i = 1, 2, ..., N), update the inertia weight w, and let the learning population xL = zeros(size(x));
...
      Select pbest_a and pbest_b (a, b ∈ {1, 2, ..., N}) with a dynamic roulette;
(10)    if fitness of x_a is greater than fitness of x_b
(11)      xL_id = pbest_ad;
(12)    else
(13)      xL_id = pbest_bd;
(14)    end if
(15)  else
(16)    xL_id = pbest_id;
(17)  end if
(18) end for
(19) if ...
...
(28) Update the personal best position pbest and the global best position gbest;
(29) end while
(30) Output the final result.
ALGORITHM 1: Double-dynamic biogeography-based learning particle swarm optimization (DDBLPSO).
The boldface in Tables 3-6 indicates the best experimental results in experiments one to four, respectively. Unimodal functions probe the convergence rate of an algorithm. Multimodal functions probe the ability to jump out of local optima. Hybrid functions reflect that, in real-world optimization problems, different subcomponents of the variables may have different properties. Composition functions reflect that, in real-world optimization problems, several simple problems compose a more complex problem.
These properties challenge the algorithmic convergence rate. Most of the multimodal functions contain a number of local optima, which may cause premature convergence of traditional algorithms; it is difficult for them to locate the global optimum on these functions.
It is easy to observe from Table 3 that DDBLPSO dominates the other competitors on unimodal and hybrid functions, although it performs less impressively on multimodal and composition functions. This means that DDBLPSO has a much greater convergence rate than its competitors and performs slightly better than them at jumping out of local optima.
The double-dynamic biogeography-based learning strategy is the main reason for this behaviour. The strategy guides each particle to learn from particles not worse than itself, which gives DDBLPSO a greater convergence rate. At the same time, DDBLPSO jumps out of local optima slightly better than the other competitors because of its simple mutation and the complexity of the functions. On the other hand, learning only from better particles slightly reduces the exploration ability of the algorithm, which is why DDBLPSO is less eye-catching on multimodal and composition functions. Table 7 shows that DDBLPSO attains the best rank, BLPSO the second, and CLPSO the third, followed by BBO and PSO; DDBLPSO is clearly the most competitive algorithm in this group of comparisons. Table 4 reveals that DDBLPSO performs much better than the other four algorithms on f1, f4, f5, f6, f8, f9, f10, and f12. DDBLPSO cannot achieve obvious dominance on every kind of function because the problems are of different types and large scale, but on the whole it still performs best. Table 8 presents the same ranking as Table 7. Figure 2 shows that DDBLPSO has the strongest evolving trend on f1, f2, f4, f6, f7, f8, f10, and f14. DDBLPSO is also the most competitive algorithm in this group of comparisons on relatively large-scale problems.
Generally speaking, Tables 3, 4, 7, and 8 and Figure 2 show that DDBLPSO achieves the best performance among its competitors. Table 5 shows that DDBLPSO beats PBSPSO and DEBBO on most functions, and Table 9 shows that DDBLPSO attains the best rank, DEBBO the second, and PBSPSO the third. Table 6 reveals that DDBLPSO performs much better than the other two algorithms on f1, f4, f5, f6, f8, f11, and f12. Table 10 presents the same ranking as Table 9. Figure 3 shows that DDBLPSO has the strongest evolving trend on f1, f2, f4, f5, f6, f7, f8, f10, f11, and f14. Generally speaking, Tables 5, 6, 9, and 10 and Figure 3 show that DDBLPSO beats a PSO variant and a BBO variant.

Six Kinds of Terrain Functions.
City, village without houses, village with houses, mountainous area without houses, mountainous area with houses, and mountainous area with a huge building are the most common terrains in UAV path planning, and six terrain functions are designed accordingly. A city has flat ground and tall buildings. A village without houses means uneven ground. A village with houses means uneven ground and low buildings. A mountainous area without houses means uneven ground and mountains. A mountainous area with houses has uneven ground, mountains, and low buildings. A mountainous area with a huge building has uneven ground, mountains, and a huge building. Let (x′, y′) be the coordinate of any point on the plane. The details of the six terrain functions are as follows. The terrain function of the city, f_terrain1, is defined as follows: where T_i is the height of tall building i, Ω_i is the area occupied by tall building i, and T is the number of tall buildings. The terrain function of the village without houses, f_terrain2, is defined as follows: where a_1, a_2, a_3, a_4, a_5, and a_6 are terrain parameters that determine the unevenness of the terrain. The terrain function of the village with houses, f_terrain3, is defined as follows: where S_i is the height of low building i, Γ_i is the area occupied by low building i, and S is the number of low buildings. The terrain function of the mountainous area without houses, f_terrain4, is defined as follows: where M_i is the mountain function, R_i, Q_i, and K_i describe the position and shape of mountain i, and M is the number of mountains. The terrain function of the mountainous area with houses, f_terrain5, is defined as follows: where S_i is the height of low building i, Γ_i is the area occupied by low building i, and S is the number of low buildings. The terrain function of the mountainous area with a huge building, f_terrain6, is defined analogously. The detailed parameter settings of the tall, low, and huge buildings are given in Table 16, and those of the mountains are given in Table 17.
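A terrain of the f_terrain4 kind (uneven ground plus mountains) can be sketched as below. The paper's exact expressions for the base surface and the mountain profile are not reproduced in this excerpt, so both the sinusoidal ground and the Gaussian-shaped mountain profile here are stand-in assumptions, as are the parameter names.

```python
import numpy as np

def uneven_ground(x, y, a=(1.0, 0.2, 0.5, 1.5, 0.1, 1.0)):
    """Illustrative uneven-ground surface; a1..a6 control roughness."""
    a1, a2, a3, a4, a5, a6 = a
    r = np.sqrt(x**2 + y**2)
    return a1*np.sin(a2*x) + a3*np.cos(a4*y) + a5*np.sin(a6*r) + 2.0

def mountains(x, y, peaks):
    """Envelope of Gaussian-shaped mountains; each peak is
    (cx, cy, height, spread), standing in for the paper's
    position/shape parameters."""
    z = np.zeros_like(np.asarray(x, dtype=float))
    for cx, cy, h, s in peaks:
        z = np.maximum(z, h * np.exp(-((x-cx)**2 + (y-cy)**2) / (2*s**2)))
    return z

def terrain_mountain_no_houses(x, y, peaks):
    """f_terrain4-style surface: the higher of ground and mountains."""
    return np.maximum(uneven_ground(x, y), mountains(x, y, peaks))
```

The building-based terrains (f_terrain1, f_terrain3, f_terrain5, f_terrain6) follow the same pattern, with boxes of height T_i or S_i over their footprints Ω_i or Γ_i taking the place of the mountain envelope.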
Danger areas are designed according to equation (25). Other parameter settings of the algorithms are the same as in Section 5.
(x′ + 25)^2 + (y′ + 25)^2 + z′^2 = 10^2,  (25)

i.e., a hemispheric region of radius 10 centred at (−25, −25, 0). The convergence curves of the six terrain functions in experiments five and seven are shown in Figures 4 and 5, respectively; the curves for experiments six and eight are not shown because their convergence behaviour is similar to that of experiments five and seven. Table 12 shows that DDBLPSO is the most competitive algorithm on the relatively small-scale problems with 14 points, and Table 18 shows that DDBLPSO attains the best rank, with BLPSO second, followed by CLPSO, PSO, and BBO. For the relatively large-scale problems with 49 points, Tables 13 and 19 reveal that DDBLPSO achieves the best results in experiment six: it attains the best rank, BLPSO is second, and CLPSO is third, followed by PSO and BBO, and the experimental results indicate that it is much better than the other competitors. However, Tables 12 and 13 show that DDBLPSO has a relatively large Std, which suggests that its convergence stability could still be enhanced. Figure 4 shows that DDBLPSO has the strongest evolving trend on all the terrain functions in experiment five. Generally speaking, Tables 12, 13, 18, and 19 and Figure 4 show that DDBLPSO performs very encouragingly in UAV path planning, although there is still room for improvement in terms of Std. Table 14 shows that DDBLPSO beats DEBBO and PBSPSO on the relatively small-scale problems with 14 points, and Table 15 shows that DDBLPSO attains the best rank, DEBBO the second, and PBSPSO the third on small-scale UAV path planning problems. Figure 5 shows that DDBLPSO has a stronger evolving trend on all the terrain functions than DEBBO and PBSPSO in experiment seven. Tables 20 and 21 reveal the same results on the large-scale problems.

Conclusions
This paper presents a novel UAV path planning algorithm based on double-dynamic biogeography-based learning particle swarm optimization. DDBLPSO adopts a double-dynamic biogeography-based learning strategy, under which a particle learns only from itself or from even better individuals. Six terrain functions are presented to simulate the six most common terrains in UAV path planning. Computational experiments and simulations demonstrate the performance advantages of the algorithm in both global optimization and UAV path planning; it is especially competitive on relatively large-scale problems.
Our algorithm has the potential to be applied to other problems such as oil exploration, scheduling, and deep learning.
The UAV path planning problem can also be extended to multiobjective or constrained optimization problems to suit more complex situations.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.

Authors' Contributions
Yisheng Ji was responsible for novelty construction and verification, simulation design, and manuscript writing. Xinchao Zhao was responsible for research motivation, simulation design, and analysis. Junling Hao read through the manuscript and gave some suggestions on structure and representation.

Mobile Information Systems