Improved Laplacian Biogeography-Based Optimization Algorithm and Its Application to QAP

Laplacian Biogeography-Based Optimization (LxBBO) is a BBO variant that substantially improves BBO's performance. On some complex problems, however, it still suffers from poor performance, weak operability, and high complexity, so an improved LxBBO (ILxBBO) is proposed. First, a two-global-best guiding operator is created to guide the worst habitat, mainly to enhance the exploitation of LxBBO. Second, a dynamic two-differential perturbing operator is proposed for updating the first two best habitats, improving the global search ability in the early search phase and the local search ability in the late phase. Third, an improved Laplace migration operator is formulated for updating the other habitats to improve the search ability and operability. Finally, measures such as example learning, removal of the mutation operation, and greedy selection are adopted, mostly to reduce the computational complexity of LxBBO. Extensive experimental results on the complex functions of the CEC-2013 test set show that ILxBBO outperforms LxBBO and quite a few state-of-the-art algorithms. Results on Quadratic Assignment Problems (QAPs) also show that ILxBBO is more competitive than LxBBO, Improved Particle Swarm Optimization (IPSO), and Improved Firefly Algorithm (IFA).


Introduction
Optimization has always been a hot topic for researchers, engineers, and practitioners. In solving complex problems, traditional optimization methods mainly rely on empirical analysis or accurate mathematical models; however, for many problems they cannot find the optimal solution, or cannot do so within a reasonable time. In recent decades, inspired by nature, many Intelligent Optimization Algorithms (IOAs), such as Particle Swarm Optimization (PSO) [1], the Shuffled Frog Leaping Algorithm (SFLA) [2], Differential Evolution (DE) [3], Krill Herd (KH) [4], Harmony Search (HS) [5], Cuckoo Search (CS) [6], the Grey Wolf Optimizer (GWO) [7], and Biogeography-Based Optimization (BBO) [8], have sprung up, and they are widely used in many areas [6, 9-11].
BBO is a biogeography-based IOA proposed by Simon [8]. The mathematical models of biogeography describe the migration and mutation of species, and according to these models, two operators are created in BBO. The migration operator shares information between habitats to improve the quality of poor solutions, reflecting BBO's exploitation ability. The mutation operator, which randomly generates new feature values, maintains population diversity, reflecting the exploration ability. As BBO has a good model, a simple search mechanism, and excellent performance [8], it has not only achieved great success on numerical optimization problems [12] but has also been used in many applications [13-16]. Compared with some IOAs, BBO is more competitive and has attracted widespread attention [17]. However, BBO has some defects, such as easy entrapment in local optima and weak exploration [18].
The Quadratic Assignment Problem (QAP) is a mathematical model for the location of indivisible economic activities and one of the most difficult combinatorial optimization problems [19]. QAP has been studied for many years as an assignment problem that models a variety of real-world problems, such as the backboard wiring problem, campus planning, airport gate assignment, and the traveling salesman problem [20]. The hospital layout problem is a typical QAP that minimizes the total travel distance of patients. By renovating an existing hospital and optimizing the reallocation of its departments, it can bring better service to patients, reduce the time consumption of each patient, and improve the efficiency of hospital service for more patients [20]. IOAs, which search for good-quality solutions, have been applied to solve QAP [19].
Although many BBO variants have been proposed to address the defects of BBO, the migration operator plays an important role in both BBO and its variants. Garg and Deep [21] proposed a novel BBO based on the Laplace migration operator (LxBBO), an improved migration operator that strengthens the search capability of BBO. However, LxBBO still has some drawbacks, such as poor performance, weak operability, and high complexity. Therefore, in this paper, an improved LxBBO (ILxBBO) is proposed and used to solve QAP as an alternative approach for locating hospital departments. The contributions of this paper are as follows: (1) A dynamic two-differential perturbing operator is proposed to update the first two best habitats, mainly enhancing exploration in the early search phase while improving the local search ability in the late phase. (2) A two-global-best guiding operator is presented to update the worst habitat, mainly enhancing exploitation while also improving the global search ability in the early search phase. (3) An improved Laplace operator is formulated for updating the other habitats to accelerate convergence and improve operability. (4) ILxBBO is applied to complex function optimization on CEC-2013 and to QAP; extensive experimental results show that ILxBBO outperforms the comparison algorithms.
The graphical abstract of this paper is shown in Figure 1. The rest of this paper is organized as follows: Section 2 gives the related work; the proposed ILxBBO is elaborated in Section 3; in Section 4, experimental results on the CEC-2013 test set and QAP are reported and analyzed; and Section 5 gives conclusions and future work.

Biogeography-Based Optimization.

BBO mainly uses migration and mutation models of species in biogeography to solve optimization problems. In BBO, each solution is called a "habitat" with a Habitat Suitability Index (HSI) that measures its quality. The factors of a habitat that characterize habitability are called Suitability Index Variables (SIVs). BBO searches for the best solution mainly through migration and mutation steps.

Migration Operator.
In BBO, a good solution tends to have a high HSI and is analogous to a habitat with many species, which has a high emigration rate and a low immigration rate, and vice versa. The purpose of the migration operator is to share information between different solutions: good solutions tend to share their features with poor solutions, and poor solutions accept many features from good solutions. Each habitat has its own immigration rate λ and emigration rate μ, calculated as follows:

λ_k = I (1 − N_k / N), (1)

μ_k = E (N_k / N), (2)

where I is the maximum immigration rate, E is the maximum emigration rate, N_k is the number of species of the habitat H_k, and N is the maximum number of species. Equations (1) and (2) describe a simple linear migration model, but more complex nonlinear models are also used [22]. The migration operator modifies a habitat's SIVs by accepting features from other good habitats, which can be expressed as follows:

H_i(SIV) ← H_k(SIV), (3)

where H_i is the immigration habitat and H_k is the emigration habitat, which is selected by roulette wheel selection [8].
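The linear migration model and roulette-wheel feature sharing described above can be sketched in Python. This is a minimal illustration under the stated linear model; the function names and the roulette wheel details are ours, not from the paper's implementation:

```python
import random

def migration_rates(num_species, N, I=1.0, E=1.0):
    """Linear model of equations (1) and (2): immigration falls and
    emigration rises as a habitat holds more species."""
    lam = I * (1.0 - num_species / N)   # immigration rate, eq. (1)
    mu = E * num_species / N            # emigration rate, eq. (2)
    return lam, mu

def migrate(habitats, rates, rng=random.random):
    """Sketch of equation (3): each SIV of an immigrating habitat may be
    replaced by the same SIV of an emigrating habitat chosen by roulette
    wheel over the emigration rates."""
    new = [h[:] for h in habitats]
    mus = [mu for _, mu in rates]
    total = sum(mus)
    for i, (lam, _) in enumerate(rates):
        for j in range(len(habitats[i])):
            if rng() < lam:                      # immigrate with prob. lambda_i
                r, acc, k = rng() * total, 0.0, 0
                for k, mu in enumerate(mus):     # roulette wheel on mu
                    acc += mu
                    if acc >= r:
                        break
                new[i][j] = habitats[k][j]       # copy SIV from H_k, eq. (3)
    return new
```

A habitat with the maximum species count therefore only emigrates, and an empty one only immigrates.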

Mutation Operator.

Sudden events may drastically alter certain characteristics of a habitat, thereby changing its HSI and causing a significant change in the number of species. In BBO, the mutation rate of a habitat is inversely proportional to its species-count probability. The mutation rate m_i is calculated from the probability p_i of the species number as follows:

m_i = m_max (1 − p_i / p_max), (4)

where m_max is the maximum mutation rate, a user-defined parameter; the computation of p_i is described in [8]; and p_max = max(p_i). The mutation can be conducted as follows:

H_i(j) = lb_j + rand · (ub_j − lb_j), (5)

where H_i is the mutation habitat, j ∈ [1, D] (D is the number of decision variables), lb_j and ub_j are the lower and upper boundary values of the jth SIV of H_i, respectively, and rand is a uniformly distributed random real number between 0 and 1. To save the best solutions found during the search, an elitism strategy keeps some of the best solutions. At each iteration, after migration and mutation are carried out, the population is sorted, the several worst habitats are replaced with the elitists kept before, and the population is sorted again.
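The mutation rate and the mutation step can likewise be sketched. This is a minimal illustration; the default m_max = 0.005 follows the LxBBO parameter settings quoted later in the paper, and the function names are ours:

```python
import random

def mutation_rate(p_i, p_max, m_max=0.005):
    """Equation (4): the mutation rate is inversely proportional to the
    species-count probability p_i."""
    return m_max * (1.0 - p_i / p_max)

def mutate(habitat, lb, ub, m, rng=random.random):
    """Equation (5): a mutated SIV is redrawn uniformly in [lb_j, ub_j];
    each SIV mutates independently with probability m."""
    return [lb[j] + rng() * (ub[j] - lb[j]) if rng() < m else habitat[j]
            for j in range(len(habitat))]
```

The most probable species counts thus mutate least, while rare configurations mutate most.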
The steps of BBO are given as follows:

Step 1: set the parameters and initialize the population randomly.
Step 2: evaluate each habitat and sort the population from the best to the worst by HSI.
Step 3: calculate the immigration, emigration, and mutation rates and keep the elitists.
Step 4: perform the migration operator by equation (3).
Step 5: perform the mutation operator by equation (5).
Step 6: limit each new solution to the boundary.
Step 7: calculate each habitat's HSI and sort the population from the best to the worst.
Step 8: replace the several worst habitats with the elitists.
Step 9: sort the population again from the best to the worst.
Step 10: decide whether the stopping criterion is met.
Step 11: if so, output the best solution; otherwise, return to Step 3.

According to these steps, BBO gains strong local search capability through migration and global search ability through mutation. However, BBO has some drawbacks. For example, the migration operator simply produces new solutions by directly copying features of good solutions and cannot generate new features from promising unexplored areas of the search space, and the mutation operator can harm the accuracy of the solution in the later stage of the search.
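The eleven steps above can be condensed into a short, self-contained Python skeleton. The migration and mutation bodies here are deliberately simplified placeholders, so this sketches only the control flow (sorting, elitism, replacement), not the exact BBO operators; all names and the search box [-5, 5]^D are illustrative:

```python
import random

def bbo_sketch(fitness, D, N=20, max_iter=50, n_elites=2, seed=1):
    """Compact skeleton of BBO's Steps 1-11 on the box [-5, 5]^D,
    with lower-is-better fitness (HSI)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(D)] for _ in range(N)]  # Step 1
    pop.sort(key=fitness)                               # Step 2
    for _ in range(max_iter):
        elites = [h[:] for h in pop[:n_elites]]         # Step 3: keep elitists
        for i in range(N):                              # Steps 4-5 (simplified)
            lam = i / (N - 1)                           # worse habitats immigrate more
            for j in range(D):
                if rng.random() < lam:                  # migration: copy a SIV
                    pop[i][j] = pop[rng.randrange(N)][j]  # (uniform, not roulette)
                if rng.random() < 0.005:                # mutation: redraw a SIV
                    pop[i][j] = rng.uniform(-5, 5)
        pop.sort(key=fitness)                           # Step 7
        pop[-n_elites:] = elites                        # Step 8: restore elitists
        pop.sort(key=fitness)                           # Step 9
    return pop[0]                                       # Step 11: best habitat

# usage: best = bbo_sketch(lambda x: sum(v * v for v in x), D=5)
```

Note the two sorts per iteration; the greedy selection introduced later in the paper removes one of them.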
To remedy BBO's defects, many researchers have made great efforts to improve it. The improvements are mainly divided into the following aspects. (1) Topology.
Zheng et al. [23] used three different topologies, namely, ring topology, square topology, and random topology, to enhance the exploration ability of BBO. Feng et al. [24] presented a hybrid migration operator with random ring topology to enhance the potential population diversity of BBO. (2) Migration operator. Ma and Simon [25] proposed a blended migration operator, in which a new solution consists of two parts: the features of other solutions and its own features. Xiong et al. [18] presented a polyphyletic migration operator to raise the population diversity of BBO and used an orthogonal learning strategy to make a systematic and elaborate search. Li et al. [22] presented a perturbed migration operator to enhance the exploration ability and integrated Gaussian mutation into BBO to improve exploration further. Chen et al. [26] put forward a covariance-matrix-based approach to reduce the dependence on the coordinate system and enhance BBO's rotational invariance. Zhang et al. [27] presented an Efficient and Merged BBO (EMBBO) to enhance the optimization efficiency. (3) Mutation operator. Gong et al. [28] embedded Gaussian, Cauchy, and Levy mutations into BBO, respectively. Lohokare et al. [29] proposed a mutation operator that combines two individuals to generate a new feasible solution and improve the exploration ability. (4) Hybridizing BBO with other IOAs. Gong et al. [30] combined the exploration of DE with the exploitation of BBO to enhance performance. Savsani et al. [31] integrated BBO with the Artificial Immune Algorithm (AIA) and ACO, respectively, and proposed four mixed BBOs. Khademi et al. [32] combined Invasive Weed Optimization (IWO) with BBO to enhance BBO's performance. Zhang et al. [33] combined BBO and GWO to obtain a BBO with strong universal applicability. In addition, many current improvements combine several of these approaches to maximize the performance of BBO. Therefore, further improvements to BBO variants are still necessary.

Laplacian Biogeography-Based Optimization.

Garg and Deep proposed LxBBO based on the Laplace crossover to improve the optimization performance of BBO [21]. The Laplace crossover is described as follows.
There are two parents: x_1, the immigration habitat, and x_2, the emigration habitat, which is selected by roulette wheel selection. A random number β is generated that follows the Laplace distribution, whose density is given by the following equation:

f(x) = (1 / 2b) exp(−|x − a| / b), (6)

where a ∈ R is called the location parameter and b > 0 is called the scale parameter. Then, two new offspring are generated as

y_1 = x_1 + β |x_1 − x_2|, (7)

y_2 = x_2 + β |x_1 − x_2|. (8)

The two new habitats are blended into a new habitat y (see equation (9)) with a blending parameter c given by equation (10):

y = c · y_1 + (1 − c) · y_2, (9)

c = c_min + (c_max − c_min) · k^t, (10)

where c_min and c_max are the minimum and maximum values of c, respectively, both lying in [0, 1], and t is the current iteration number. k is a user-defined parameter less than 1. From equations (7) and (8), the difference between the two equations lies in their first term: the first term of equation (7) is x_1, while that of equation (8) is x_2. The value of c gets smaller as t increases in equation (10) (see Figure 2(a)). From equation (10), in the earlier search phase, the value of c is larger, so y is mostly affected by the offspring y_1; the difference between x_1 and x_2 is larger, the search range around x_1 is larger, and the algorithm has stronger exploration. In the later search phase, the value of c is smaller, so y is mostly affected by the offspring y_2; the difference between x_1 and x_2 is smaller, the search range around x_2 (a good position) is smaller, and the algorithm has stronger exploitation.
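The Laplace crossover described above can be sketched as follows. The inverse-transform form of β and the decay law for c follow our reading of the text (a Laplace draw with the LxBBO parameter values a = 0, b = 0.5, and c decaying from c_max toward c_min by the factor k); treat them as assumptions rather than the paper's exact equations:

```python
import math
import random

def laplace_beta(a=0.0, b=0.5, rng=random.random):
    """Laplace-distributed random number: a -/+ b*log(u) chosen by a
    second uniform draw (a common Laplace-crossover formulation)."""
    u = rng() or 1e-300          # guard against log(0)
    r = rng()
    return a - b * math.log(u) if r <= 0.5 else a + b * math.log(u)

def laplace_crossover(x1, x2, t, c_min=0.1, c_max=1.0, k=0.95):
    """Two offspring spread around each parent by a Laplace step and
    blended with a weight c that decays geometrically with iteration t."""
    beta = laplace_beta()
    y1 = [a + beta * abs(a - b) for a, b in zip(x1, x2)]   # around x1
    y2 = [b + beta * abs(a - b) for a, b in zip(x1, x2)]   # around x2
    c = c_min + (c_max - c_min) * k ** t                   # decays toward c_min
    return [c * u + (1 - c) * v for u, v in zip(y1, y2)]   # blend
```

At t = 0, c = c_max = 1, so the blend returns y_1 exactly; after about 100 iterations with k = 0.95, c is nearly c_min, matching the curve the paper describes for Figure 2(a).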
In LxBBO, except that the migration operator is replaced by the Laplace operator, the rest is the same as in BBO; for example, both use the mutation operator and the elitist strategy. LxBBO uses the immigration and emigration habitats as parents to generate two offspring and then obtain a new habitat. This way, new positions can be generated from promising new areas of the search space, enhancing the optimization performance of BBO.

Defects of Laplacian Biogeography-Based Optimization.
Although LxBBO greatly enhances the performance of BBO, it has some drawbacks. (1) The Laplacian operator has many parameters: from equations (6) to (10), a, b, c_min, c_max, and k need tedious tuning in various applications. In the experiments, according to [21], k is set to 0.95, and the power of the difference between the maximum and minimum values is calculated at each iteration, which incurs some computation cost. (2) LxBBO shares some drawbacks with BBO. For example, when emigration habitats are selected by roulette wheel selection, a poor habitat may be chosen to share its information with a good habitat, reducing the quality of the population, and roulette wheel selection has high computational complexity. (3) Both BBO and LxBBO use the mutation operation. Although the mutation operator can enhance the global exploration ability, it may mutate and destroy some better habitats, causing population degradation and harming the convergence quality, especially in the late search phase. (4) The mutation operator requires computing the mutation rate of each habitat in addition to performing the mutation itself. (5) With the elitist strategy, the population needs to be sorted twice at each iteration, which results in high computational complexity. To address these drawbacks, several creative improvements are brought up in this paper.

Improved Laplace Operator.
To enhance the performance and operability of LxBBO, an improved Laplace operator is proposed. Inspired by the optimality idea of [34], when an individual with better fitness (namely, H_e) subtracts an individual with poor fitness (namely, H_k) (see equations (11) and (12)), the search moves toward the good individual to accelerate convergence. The random number β is calculated by equation (13), and the two habitats H_1 and H_2 are generated by equations (11) and (12). Compared with equations (7) and (8), the difference is that the emigration habitats in equations (11) and (12) are selected by example learning selection [33]. H_e has a better fitness value than H_k, and using the difference between H_e and H_k ensures that the search direction is closer to the better solution, improving the convergence quality:

β = −0.5 · log(u) if u ≤ 0.5, and β = 0.5 · log(u) otherwise, (13)

where u is a uniformly distributed random number in (0, 1). Equation (10) has many parameters to set and high complexity, so a new dynamic weight parameter c is adopted. It is expressed as equation (14), and the difference between c in equations (10) and (14) is shown in Figure 2: Figure 2(a) shows the curve of c in LxBBO, and Figure 2(b) shows the curve of c in ILxBBO. From Figure 2, c gets smaller as t increases and stays almost constant (about 0.1) after roughly 100 iterations in LxBBO. In ILxBBO, however, c increases linearly with t: when t = 0, c = 0.5, and when t = MaxDT, c = 1, so c is a linear value between 0.5 and 1. No parameters need tuning, which gives ILxBBO stronger operability through the dynamic weight c:

c = 0.5 + 0.5 · (t / MaxDT), (14)

where MaxDT is the maximum iteration number. H is given by the following equation:

H = (1 − c) · H_1 + c · H_2. (15)

From equation (15), in the earlier stage, H accepts information from both H_1 and H_2 to increase diversity; in the later stage, H is more affected by H_2 to enhance exploitation and convergence speed.
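The improved Laplace operator can be sketched as follows. The exact offspring forms of equations (11) and (12) are not reproduced in the extracted text, so they are assumed here by analogy with the Laplace crossover, using the signed difference (H_e − H_k) so the search drifts toward the better habitat; the β draw and the linear weight follow the descriptions of equations (13) and (14):

```python
import math
import random

def improved_laplace(H_e, H_k, t, max_dt, rng=random.random):
    """Sketch of the improved Laplace operator: H_e is the (better)
    emigration habitat chosen by example learning, H_k the current one."""
    u = rng() or 1e-300                                           # guard log(0)
    beta = -0.5 * math.log(u) if u <= 0.5 else 0.5 * math.log(u)  # Laplace draw
    H1 = [e + beta * (e - k) for e, k in zip(H_e, H_k)]           # assumed form
    H2 = [k + beta * (e - k) for e, k in zip(H_e, H_k)]           # assumed form
    c = 0.5 + 0.5 * t / max_dt                                    # linear 0.5 -> 1
    return [(1 - c) * a + c * b for a, b in zip(H1, H2)]          # blend
```

At t = max_dt the weight c reaches 1 and the result is exactly H2, i.e., H_k pushed toward H_e, which matches the stated late-stage exploitation behavior.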

Improved Migration Operator.

In LxBBO, the emigration habitats are selected by roulette wheel selection as in BBO, so good solutions may receive features from poor solutions and the population may degenerate. To overcome the drawbacks of roulette wheel selection, example learning selection [33] is adopted in its place. The selection approach is described as follows. When the population is sorted from the best to the worst, the k-th habitat has a better HSI than the habitats behind it. When H_k is selected for migration, there are k − 1 habitats for it to learn from. The index e of the emigration habitat H_e is calculated as follows:

e = ceil(rand · (k − 1)), (16)

where ceil() is the function that rounds toward positive infinity. From equation (16), H_e is not worse than H_k, which improves the quality of the solution by sharing features from good solutions. Furthermore, ILxBBO only needs to calculate the immigration rate, not the emigration rate, which reduces the computational complexity further: the emigration habitat can be selected by equation (16) alone. Example learning selection overcomes the defects of roulette wheel selection, and its calculation is simple, reducing the computational complexity. The improved Laplace operator and the improved migration operator together form an improved Laplace migration operator.
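Assuming the example learning selection draws e = ceil(rand · (k − 1)) as the ceil() wording suggests (an assumption on the exact form of equation (16)), the selection reduces to a one-liner:

```python
import math
import random

def example_habitat(k, rng=random.random):
    """Pick an emigration habitat index for the k-th habitat (1-indexed,
    population sorted best to worst): uniform among the k-1 better ones."""
    return math.ceil(rng() * (k - 1))
```

Every returned index lies in 1..k−1, so the chosen example is never worse than H_k and no emigration rates are needed.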

Dynamic Two-Differential Perturbing Operator.

DE, proposed by Storn and Price in 1997 [3], is a popular optimization algorithm. It generates temporary individuals based on the degree of difference between individuals in the population and realizes evolution through random recombination. In the earlier search period, the individuals in the population differ more, and the algorithm can search over a large range, giving it global exploration ability. In the later search period, the algorithm searches in the vicinity of each individual and obtains local search ability [35]. A dynamic differential evolution algorithm was proposed by Wu et al. [36], and experiments show that the dynamic method has better performance.
LxBBO obtains global exploration ability through its mutation operator. However, mutation is random and can easily destroy better solutions, especially in the late search stage. In addition, the mutation operation requires calculating the mutation rate of each habitat as well as performing the mutation itself, which increases the computational complexity. The mutation operator is therefore removed in ILxBBO. In Section 3.3, emigration habitats are selected by the example learning approach, and poor habitats can accept features from good habitats. However, the best habitats can hardly be updated because there are few examples for them to learn from, leading to low search efficiency. Although the best habitat may serve as the example for the second best habitat, there is often little difference between the two, so the second best habitat remains almost unchanged and its search efficiency is also low. Therefore, a dynamic two-differential perturbing operator is applied to the best and second best habitats to enhance the search ability. The value of the scaling factor w of this operator decreases as the current iteration number increases, as expressed in equation (17). The dynamic two-differential perturbing operator is expressed as follows:

H_k = H_k + w · (H_m − H_r) + w · (H_b − H_k), (18)

where H_k is the best or second best habitat, H_b is the best habitat, and H_r and H_m are two habitats selected randomly from the current population with m ≠ k ≠ r. In the earlier stage, the values of (H_m − H_r), (H_b − H_k), and w are all comparatively large, so H_k searches over a larger range to enhance the global search ability. In the late stage, these values are comparatively small, so H_k searches over a smaller range to enhance the local search ability. From equation (18), H_k is affected by itself, the dynamic coefficient factor, and the two differences; therefore, it is called a dynamic two-differential perturbing operator.
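A sketch of this operator follows. The paper states only that the scaling factor w decreases with the iteration number, so the linear decay and its endpoints w_max, w_min are assumptions, as is applying the same w to both differences:

```python
import random

def two_diff_perturb(H_k, H_b, H_m, H_r, t, max_dt, w_max=0.9, w_min=0.4):
    """Dynamic two-differential perturbation of the best/second-best
    habitat H_k: pulled by a random difference (H_m - H_r) and by the
    gap to the best habitat (H_b - H_k)."""
    w = w_max - (w_max - w_min) * t / max_dt               # assumed linear decay
    return [k + w * (m - r) + w * (b - k)
            for k, b, m, r in zip(H_k, H_b, H_m, H_r)]
```

Early on, w and both differences are large, so the step is wide (exploration); late in the run all three shrink, so the step becomes a fine local adjustment.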

Two-Global-Best Guiding Operator.
In order to further enhance the local search ability, the best and subbest habitats are used to guide the update of the worst habitat. In the first half of the search, the update is expressed as follows:

H_w = H_w + 2 · (rand − 0.5) · (H_b − H_w) + 2 · (rand − 0.5) · (H_s − H_w), (19)

where H_w is the worst habitat, H_b is the best habitat, and H_s is the subbest habitat. In the second half of the search, equations (20) and (21), which are similar to equation (19) but with different random weights, are used to update the worst habitat with equal probability. From equation (19), the worst habitat is affected by three parts: the worst habitat itself, the randomly weighted (−1 to 1) difference between the best habitat and the worst habitat, and the randomly weighted (−1 to 1) difference between the subbest habitat and the worst one. In the early search stage, the difference between the best (or subbest) habitat and the worst habitat is larger, and the range of 2 · (rand − 0.5) is from −1 to 1, so the worst habitat searches over a wider range around itself to obtain some global search ability. In the late stage, the worst habitat is updated by equation (20) or (21). Under the guidance of the best and subbest habitats, the worst habitat obtains local search ability by searching in a small range around itself. From equations (19)-(21), H_w is always affected by the best and subbest habitats, so this operator is called the two-global-best guiding operator.
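The first-half update described above can be sketched directly; the second-half variants differ only in their random weights (which the extracted text does not specify), so they are omitted here:

```python
import random

def two_global_best_guide(H_w, H_b, H_s, rng=random.random):
    """The worst habitat H_w moves by randomly weighted (-1 to 1)
    differences toward the best (H_b) and subbest (H_s) habitats."""
    r1 = 2 * (rng() - 0.5)         # uniform in [-1, 1]
    r2 = 2 * (rng() - 0.5)
    return [w + r1 * (b - w) + r2 * (s - w)
            for w, b, s in zip(H_w, H_b, H_s)]
```

Because the weights can be negative, the worst habitat may also step away from the leaders, which preserves some diversity while it is being guided.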
3.6. Other Improvements. In addition, the following improvements are adopted. First, greedy selection [24, 37] replaces the elitism strategy [8]. On the one hand, the population then only needs to be sorted once at each iteration, reducing the computational complexity; on the other hand, greedy selection avoids setting the elitist parameter. Second, the immigration rate calculation step is moved outside the iteration loop, meaning the immigration rates are calculated only once over all iterations, which reduces the computational complexity further. The flowchart of ILxBBO is given in Figure 3.
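Greedy selection is a simple survivor rule; a minimal sketch, assuming lower-is-better HSI (the function name is illustrative):

```python
def greedy_select(old, new, fitness):
    """Keep the updated habitat only if it is not worse than the old one;
    otherwise the old habitat survives unchanged."""
    return new if fitness(new) <= fitness(old) else old
```

Since every habitat can only improve or stay put, no elitist copies need to be stored or restored, so one of BBO's two per-iteration sorts disappears.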
From the above description, ILxBBO differs from LxBBO as follows. (1) In ILxBBO, the mutation operator is removed, omitting the mutation probability calculation and the mutation operation to reduce the computational complexity. (2) The best and subbest habitats are updated by a dynamic two-differential perturbing operator in ILxBBO, while the first two best habitats use the Laplace operator in LxBBO. (3) The worst habitat is updated by a two-global-best guiding operator in ILxBBO, while it also uses the Laplace operator in LxBBO. (4) The remaining habitats use an improved Laplace migration operator in ILxBBO, while the Laplace operator is again used in LxBBO. (5) Few parameters need tuning in ILxBBO, while LxBBO has many parameters to tune in various applications. (6) LxBBO uses roulette wheel selection, while ILxBBO selects the emigration habitat by the example learning approach; the population tends toward the best direction, the example learning approach does not require calculating the emigration rate, and the emigration habitat can be selected with only a small amount of calculation. (7) LxBBO adopts the elitist strategy, while ILxBBO uses greedy selection instead, saving one sorting step and further reducing the computational complexity.

To be fair, in all the experiments on the CEC-2013 test set, according to [38], the number of independent runs (Run) is 51; MaxDT is adjusted according to the dimension of the problem, being 3000 for 30-D and 5000 for 50-D; and the maximum number of function evaluations (MaxNFEs) is set to D * 10,000. The population size (N) is 100 in both LxBBO and ILxBBO. The maximum immigration rate I of ILxBBO is 1, as in LxBBO. According to [21], the parameters used in LxBBO are as follows: the maximum emigration/immigration rate (E/I) = 1, m_max = 0.005, c_min = 0.1, c_max = 1, a = 0, b = 0.5, and k = 0.95.
We evaluate the statistical average (Mean) and standard deviation (Std) of these algorithms over the independent runs. For an algorithm, Mean represents its optimization ability and Std reflects its stability. The ranking criteria are as follows: first, we compare each algorithm's Mean value on each function; the better the Mean value, the better the ranking. If some algorithms obtain the same Mean value, we then compare their Std values; the better the Std value, the better the ranking. If some algorithms obtain the same Mean and Std values, their rankings are considered the same. In addition, the best values are shown in bold in all the result tables.

Comparison of ILxBBO with Its Incomplete Variants.
To illustrate the effectiveness of each component of ILxBBO, ILxBBO is compared with its incomplete variants and LxBBO on the 30-dimensional functions. These incomplete variants are described as follows:

GLxBBO is LxBBO with only the two-global-best guiding operator, without the dynamic two-differential perturbing operator and the improved Laplace migration operator.

DLxBBO is LxBBO with only the dynamic two-differential perturbing operator, without the improved Laplace migration operator and the two-global-best guiding operator.

OLxBBO is LxBBO with only the improved Laplace migration operator, without the dynamic two-differential perturbing operator and the two-global-best guiding operator.

The experimental results are shown in Table 1. From Table 1, ILxBBO ranks first 14 times, LxBBO 4 times, GLxBBO 9 times, and DLxBBO and OLxBBO 1 time each. The average rankings of ILxBBO, LxBBO, GLxBBO, DLxBBO, and OLxBBO are 1.79, 4.11, 2.32, 3.14, and 3.36, respectively. ILxBBO's average ranking is significantly better than those of its incomplete variants, and the average ranking of LxBBO is the last. This shows that each improvement on LxBBO is effective and essential, with the two-global-best guiding strategy contributing most to ILxBBO.

Comparison with BBO's Variants.
In this experiment group, ILxBBO is compared with quite a few state-of-the-art BBO variants on the 30-dimensional and 50-dimensional functions from the CEC-2013 test set. The comparison algorithms include TDBBO [39], BIBBO [25], BBOM [40], DEBBO [30], BLPSO [41], PRBBO [24], WRBBO [37], EMBBO [27], and BHCS [42]. These algorithms are all BBO variants proposed in recent years and are highly comparable. Their common parameter settings are the same as those of ILxBBO, and the other parameter settings follow their corresponding references. The experimental results on the 30-dimensional and 50-dimensional functions are shown in Tables 2 and 3, respectively. From Table 2, ILxBBO ranks first 9 times and obtains the optimal value (0) on f1. TDBBO ranks first 1 time; BIBBO, BLPSO, and BHCS rank first 0 times; DEBBO ranks first 4 times; and WRBBO and EMBBO each rank first 5 times. On the 5 unimodal functions, ILxBBO ranks first 2 times and second 2 times, showing that the improved migration operator and the two-global-best guiding operator enhance the local search ability. On the 15 basic multimodal functions, ILxBBO ranks first 6 times and second 3 times, showing that the dynamic two-differential perturbing operator enhances the global search ability. On the 8 composition functions, ILxBBO ranks first 1 time. ILxBBO's average ranking (2.36) is significantly better than those of the comparison algorithms.
These comparison results show that, overall, ILxBBO achieves the most significant optimization performance among all the compared algorithms.
From Table 3, ILxBBO ranks first 10 times and obtains the best average ranking (2.14). Its performance is as good as that on the 30-dimensional functions.

Comparison with Other IOAs.
To further verify the effectiveness of ILxBBO, it is compared with other IOAs on the 30-dimensional functions. These algorithms include YYPO [43], DPCABC [44], DFnABC [45], FMPSO [46], GLPSO [47], HFPSO [48], and MEGWO [49], which are recent, well-known IOAs and are competitive and representative. DPCABC and DFnABC are ABC variants, and MEGWO is a GWO variant. FMPSO is a PSO variant, and GLPSO and HFPSO are hybrids of PSO with the Genetic Algorithm (GA) and the Firefly Algorithm (FA), respectively. For the comparison algorithms on the 30-dimensional functions, MaxFEs is 300,000 and Run is 51 (except for FMPSO, which uses 30). The other parameter settings follow their corresponding references, and the data of these comparison algorithms are taken directly from those references. The experimental results are shown in Table 4.
From Table 4, ILxBBO ranks first 9 times, YYPO 2 times, DPCABC 4 times, DFnABC and GLPSO 7 times each, FMPSO 3 times, HFPSO 1 time, and MEGWO 0 times. The average ranking of ILxBBO (2.71) is the first, followed by GLPSO, DFnABC, YYPO, FMPSO, DPCABC, HFPSO, and MEGWO. This further verifies the optimization performance of ILxBBO.

Convergence Analysis.
In order to highlight the difference in convergence between ILxBBO and LxBBO, Figure 4 shows the convergence curves of ILxBBO and LxBBO on the 10-dimensional functions. Figure 5 shows the average runtime comparison between ILxBBO and 9 BBO variants; in Figure 5, the y-coordinate is the average runtime in seconds (s).

Application to Quadratic Assignment Problem (QAP).
QAP is an NP-hard problem first introduced by Koopmans and Beckmann [50], and it is among the most studied problems in combinatorial optimization. QAP can be described as the problem of assigning a set of facilities to a set of locations, with given distances between the locations and given flows between the facilities. The approach uses two matrices of size n × n:

F = (f_ij), i, j ∈ {1, 2, ..., n}, (22)

D = (d_ij), i, j ∈ {1, 2, ..., n}, (23)

where f_ij is the flow or weight between each pair of facilities, representing the flow from facility i to facility j, and d_ij is the distance between each pair of locations, representing the Euclidean distance from Location i to Location j. The objective is to find a permutation π that minimizes the total cost:

min_π Σ_i Σ_j f_ij · d_π(i)π(j). (24)

QAP can be applied to many practical problems, such as modeling the location of hospital departments, optimizing the configuration of departments, reducing the time consumption of each patient, and improving the efficiency of hospital services for patients. As a simple example, on a single floor of a hospital, five departments (D1, D2, D3, D4, and D5) are assigned to five locations (1, 2, 3, 4, and 5), as shown in Figure 6.
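The QAP objective, in its standard flow-times-distance form, can be written compactly (0-indexed permutation; function and argument names are illustrative):

```python
def qap_cost(perm, F, D):
    """Total cost of assigning facility i to location perm[i]:
    sum over all facility pairs of flow * distance between their locations."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n))
```

Evaluating one assignment is O(n^2); the hardness of QAP lies in the n! candidate permutations, which is why IOAs such as ILxBBO are applied.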
On a single floor, the importance of each department differs: departments with more people flow are more important, and departments with less people flow are less important; the importance between two departments is represented by the flow. The matrix F represents the flow between two departments, and the matrix L represents the distance between two locations. The next step is to assign this group of departments to this group of locations so as to minimize the movement cost of patients between departments.
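The QAP objective described above can be sketched as a short function. The instance values below are made up for illustration only and do not come from the paper's hospital example:

```python
import numpy as np

def qap_cost(perm, F, D):
    """Koopmans-Beckmann QAP objective: sum of flow * distance over
    every ordered pair of facilities, where perm[i] is the location
    assigned to facility i (0-based indices)."""
    n = len(perm)
    return sum(F[i, j] * D[perm[i], perm[j]]
               for i in range(n) for j in range(n))

# Tiny illustrative instance (3 facilities, 3 locations; made-up values):
F = np.array([[0, 3, 1],
              [3, 0, 2],
              [1, 2, 0]])   # flows between facilities
D = np.array([[0, 5, 4],
              [5, 0, 6],
              [4, 6, 0]])   # distances between locations

print(qap_cost([0, 1, 2], F, D))  # identity assignment -> 62
print(qap_cost([2, 0, 1], F, D))  # alternative assignment -> 56
```

Minimizing this cost over all permutations yields the best allocation scheme; the 56-cost assignment above is better than the identity one for this toy instance.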
ILxBBO deals with continuous problems, in which the search agents are represented by real values. To solve QAP with ILxBBO, it is first necessary to specify how ILxBBO is changed from a continuous optimizer into a discrete one.
The mapping method from [20] is adopted in this paper and is explained as follows: the dimensionality of each solution equals the number of locations or facilities in QAP, and each solution represents an arrangement of facilities over locations, corresponding to a location allocation scheme. The best allocation scheme is obtained through the IOA's optimization process. Suppose a QAP instance has 10 facilities that need to be allocated to 10 locations, as shown in Figure 7(a): Facility 1 is assigned to Location 8, Facility 2 to Location 5, Facility 3 to Location 1, and so on. QAP is a permutation (discrete) problem, so when an IOA is used to solve QAP, the largest-real-value rule is used to map the real-valued solution into a permutation sequence, as shown in Figure 7(b). That is, the maximum real value 13.26 corresponds to the minimum integer 1, the minimum real value 0.85 corresponds to the maximum integer 10, and so on.
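The largest-real-value mapping can be sketched as follows; `real_to_permutation` is a hypothetical helper name, and the input below is a shortened 4-dimensional example that reuses the paper's endpoint values 13.26 and 0.85:

```python
import numpy as np

def real_to_permutation(x):
    """Map a real-valued vector to a permutation by rank:
    the largest component becomes 1, the smallest becomes n."""
    x = np.asarray(x)
    order = np.argsort(-x)                     # indices sorted by descending value
    perm = np.empty(len(x), dtype=int)
    perm[order] = np.arange(1, len(x) + 1)     # assign ranks 1..n in that order
    return perm

x = [13.26, 2.3, 0.85, 7.1]
print(real_to_permutation(x))  # -> [1 3 4 2]: 13.26 maps to 1, 0.85 maps to 4
```

The same rule applied to a 10-dimensional solution yields the 1-to-10 permutation described in the text.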
ILxBBO is used to solve QAP with two modifications: first, each random or updated real-valued solution is changed into a permutation sequence by the real-value mapping of Figure 7; second, equation (24) is taken as the objective function. The comparison algorithms include IPSO and IFA; both come from [51] and are improved versions of PSO and FA, respectively. They are good at solving QAP and therefore provide strong comparisons.
To be fair, following the recommended common parameter settings from [20], MaxNFEs is 100,000 and Run is 30. The parameters of the 4 comparison algorithms are thus set as follows: Run is 30, N * MaxDT = 100,000, N is 100, and MaxDT is 1000. Table 5 records the Mean and Std of each algorithm on the benchmark data; the data come from [20] and have been used as a standard set for solving QAP. Best-known refers to the best solution found so far.
From Table 5, on Mean, ILxBBO obtains the best results on all 10 data sets. On Std, ILxBBO gets 0 on 7 of the 10 data sets (had12, had14, had16, scr12, tai12a, tai12b, and chr12b), which indicates that ILxBBO has strong stability. On had12, had14, had16, scr12, tai12a, tai12b, and chr12b, the mean value of ILxBBO equals the Best-known value, while on scr15, tai15a, and chr12b the mean value of ILxBBO is slightly worse than the Best-known value. Generally, this shows that ILxBBO can solve QAP better than LxBBO can.

Wilcoxon Signed-Rank Test.
R+ refers to the sum of ranks for the functions on which ILxBBO outperforms the comparison algorithm, and R− refers to the sum of ranks for the opposite. When ILxBBO and the comparison algorithm obtain equal optimization performance, the corresponding ranks are split evenly between R+ and R− [33]. The p values can be computed from the R+ and R− values. "n/w/t/l" means the number of benchmark problems is n, and ILxBBO wins on w functions, ties on t functions, and loses on l functions. The following criterion is applied to compare the results:
(1) When p > 0.05, the difference between both algorithms is not significant
(2) When p ≤ 0.05, the difference between both algorithms is significant
From Table 6, the value of p is more than 0.05 for ILxBBO versus GLPSO, so ILxBBO is not significantly better than GLPSO.
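The R+/R− bookkeeping described above can be sketched in pure Python. The error values are made up for illustration; tied-rank averaging for equal absolute differences is omitted for brevity:

```python
def wilcoxon_ranks(errors_a, errors_b):
    """Sum of ranks favouring algorithm A (R+) and algorithm B (R-)
    over paired error values; zero differences (ties) contribute
    half their rank to each side, as in the paper's convention [33]."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    # rank absolute differences: 1 = smallest |difference|
    idx = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    r_plus = r_minus = 0.0
    for rank, i in enumerate(idx, start=1):
        if diffs[i] < 0:          # A has lower error: rank counts toward R+
            r_plus += rank
        elif diffs[i] > 0:        # B has lower error: rank counts toward R-
            r_minus += rank
        else:                     # tie: split the rank evenly
            r_plus += rank / 2
            r_minus += rank / 2
    return r_plus, r_minus

# Made-up per-function errors for two algorithms on 4 benchmarks:
r_plus, r_minus = wilcoxon_ranks([1.0, 0.5, 2.0, 0.0],
                                 [1.2, 0.5, 1.0, 0.3])
print(r_plus, r_minus)  # -> 5.5 4.5
```

The p value is then obtained from the smaller of R+ and R− via the Wilcoxon signed-rank distribution (e.g., `scipy.stats.wilcoxon` in practice).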

Conclusions and Future Work
In this paper, an improved LxBBO (ILxBBO) is proposed to improve the optimization performance of LxBBO. The improvements are as follows: the mutation operator is deleted to simplify the search process and reduce the computational complexity. A dynamic two-differential perturbing operator is proposed to update the first two best habitats so that the global search ability and the local one can be improved in the early search phase and in the late one, respectively. The worst habitat adopts a two-global-best guiding operator to improve the search ability. An improved Laplace migration operator is used for the other habitats' updating to reduce the number of parameters to set and to speed up convergence. In addition, some approaches such as example learning selection instead of roulette wheel selection, and greedy selection instead of the elitism strategy, are adopted to reduce the computational complexity and enhance the performance. To verify ILxBBO, a large number of experiments are conducted on the CEC-2013 test set. The experimental results verify that ILxBBO outperforms quite a few state-of-the-art algorithms in most cases. Moreover, the obtained results indicate how effective ILxBBO is in solving QAP. In the future, the improved Laplace operator of this paper may be applied to other BBO variants and other IOAs, and it is expected that ILxBBO will be applied to more engineering problems.

Data Availability
CEC-2013 databases used in this paper are publicly available for download and, in particular, can be accessed from http://www.rforge.net/cec2013/files/, whereas QAP databases can be downloaded from http://anjos.mgi.polymtl.ca/qaplib/inst.html.

Disclosure
This article does not contain any studies with human participants performed by any of the authors.

Conflicts of Interest
The authors declare that they have no conflicts of interest.