Bat Algorithm Based on an Integration Strategy and Gaussian Distribution

The bat algorithm (BA) is a recent heuristic optimization algorithm based on the echolocation behavior of bats. However, the bat algorithm tends to fall into local optima and its optimization results are unstable because of its low global exploration ability. To solve these problems, a novel bat algorithm based on an integration strategy (IBA) is proposed in this paper. Through the integration strategy, an appropriate operator is adaptively selected to perform global search, so that the global search ability of the IBA is improved. Furthermore, the IBA disturbs the local optimum through a linear combination of Gaussian functions with different variances to avoid becoming trapped in local optima. The IBA also updates the velocity equation with an adaptive weight to further balance the exploration and exploitation. Moreover, the global convergence of the IBA is proved based on the convergence criterion of a stochastic algorithm. The performance of the IBA is evaluated on CEC2013 benchmark functions and compared with that of the standard BA as well as several of its variants. The results show that the IBA is superior to other algorithms.


Introduction
Optimization usually involves highly nonlinear complex problems with many design variables and complex constraints [1]. Generally, a nonlinear constrained optimization problem can be formulated as follows: minimize f(x), s.t. g_l(x) <= 0, l = 1, 2, ..., k, and h_j(x) = 0, j = 1, 2, ..., p, where x = (x_1, x_2, ..., x_n) is the vector of n decision variables, f(x) is the objective function, g_l(x) <= 0 denotes the inequality constraints, and h_j(x) = 0 denotes the equality constraints. Traditional deterministic methods do not cope well with many practical problems, especially when the objective function is multimodal with many local optima. Over the past years, over a dozen metaheuristic algorithms have been developed based on inspiration from different natural processes. For instance, the genetic algorithm [2] is based on biological evolution, and ant colony optimization [3] is based on swarm behavior. Harmony search is inspired by the music composition process of musicians. The particle swarm optimization (PSO) algorithm [4] is inspired by swarming behaviors such as bird flocking and fish schooling in nature. An evolutionary algorithm named the oriented cuckoo search (OCS) algorithm [5] was motivated by the aggressive breeding habits of the cuckoo. These algorithms are referred to as nature-inspired or bioinspired algorithms and have been used to solve nonlinear complex problems because of their simple structures and ability to obtain a solution. In recent years, many such metaheuristic algorithms have been proposed. Wu et al. [6] proposed an enhanced harmony search algorithm with circular region perturbation, and Gupta and Deep [7] introduced a new crossover operator called the double distribution crossover. An aggregative learning gravitational search algorithm was proposed by Lei et al. [8], and Mohamed et al.
[9] proposed a novel nature-inspired algorithm called the gaining sharing knowledge algorithm, which mimics the process of gaining and sharing knowledge during the human life span. Attention should also be drawn to novel algorithms [10][11][12][13][14] based on sine cosine algorithms. Moreover, many metaheuristic algorithms have been proposed for solving constrained nonlinear programming problems (CNLPPs). Han et al. [15] developed a new hybrid moth search-fireworks algorithm to solve numerical and constrained engineering optimization problems. Baykasoglu et al. [16] introduced a new metaheuristic, the single seekers society algorithm, for solving unconstrained and constrained continuous optimization problems. Shadravan et al. [17] presented a novel nature-inspired metaheuristic optimization algorithm, called the sailfish optimizer, which is inspired by groups of hunting sailfish. Kaur et al. [18] proposed a novel hybrid multiobjective optimization algorithm by synthesizing the strengths of the multiobjective spotted hyena optimizer and the salp swarm algorithm. Montemurro et al. [19] presented a new penalty-based approach, developed within the framework of genetic algorithms, for constrained optimization problems. Montemurro et al. [20] presented a nonclassical genetic algorithm for designing laminates with a minimum number of layers. Costa et al. [21] provided a general methodology for approximating sets of data points through nonuniform rational basis spline (NURBS) curves. The bat algorithm (BA) is a nature-inspired metaheuristic algorithm proposed by Yang in 2010 to imitate the echolocation behavior of bats [22]. The BA has been widely applied in many applications, such as engineering optimization [23,24] and pattern recognition [25]. Next, we introduce three aspects of the BA in detail: parameters, algorithm structure, and applications.

Parameters.
Four main parameters are involved in the standard BA: pulse frequency, pulse rate, velocity, and loudness. For the standard BA, it is difficult to find a balance between global search and local search, which leads to a slow convergence rate. To solve this problem, Gandomi and Yang [26] introduced chaos into the standard BA (CBA) to increase its global search mobility for robust global optimization. In the CBA, different chaotic systems are used to replace the parameters of the BA. Xie et al. [27] proposed an improved BA based on the Lévy flight trajectory. This algorithm can effectively jump out of local optima using a strategy of adaptively adjusted frequency. Gan et al. [28] proposed a new BA based on iterative local search and stochastic inertia weight. A stochastic inertia weight is incorporated into the velocity update equation, which can enhance the diversity and flexibility of the bat population.

Algorithm Structure.
The optimization performance of the standard BA mainly depends on the interaction and influence between individuals, which may lead to a local optimum. Liu and Chunming [29] introduced the Lévy flight behaviors of bats and took full advantage of the uneven random walk to enable the algorithm to avoid becoming trapped in a locally optimal solution. To enhance the ability of the algorithm to escape from local optima, Boudjemaa et al. [30] proposed the fractional Lévy flight BA (FLFBA), in which the velocity is updated through fractional calculus. Fister et al. [31] proposed a hybrid BA by combining it with differential evolution. To improve the global search ability, Al-Betar and Awadallah [32] divided the whole bat population into two subgroups and specified the movement of bat individuals from one group to another by mobility. Jaddi et al. [33] proposed modifying the velocity equation of the standard BA to better balance exploration and exploitation in the population, and Ghanem and Jantan [34] proposed an enhanced BA that increases the diversity of the standard BA using a special mutation operator.

Applications.
Recently, BA has been widely used in the fields of optimization, modeling, and control. Dao et al. [35] used parallel BA to solve a workshop scheduling problem. Osaba et al. [36] proposed a discrete BA to solve the vehicle path problem of drug waste collection and distribution. Aiming at the data loss problem in high-dimensional data, Leke and Marwala [37] proposed to estimate missing data based on BA. To improve the accuracy of the generated fuzzy rules, Cheruku et al. [38] analyzed big data for diabetes detection by combining rough set feature selection with optimized BA. Nakamura et al. [39] proposed a binary BA to solve feature selection problems and proved that the algorithm outperforms other swarm intelligence algorithms. Goyal and Patterh [40] proposed a modified BA to evaluate the precision of node localization in wireless sensor networks.
Although the aforementioned algorithms addressed the problems of BA, they cannot balance exploration and exploitation capabilities, and the stability of the results cannot be guaranteed. Hence, these methods cannot really improve the performance of the standard BA. To tackle the above problems, this paper proposes a novel BA that uses an integration strategy (IBA) to enhance the search ability while maintaining the stability of the results. In this paper, we present three main contributions: (i) we adaptively select the appropriate operator for performing global search through the integration strategy, which can improve the global search ability of the algorithm; (ii) we disturb a local optimum through a linear combination of Gaussian functions with different variances, so that the IBA has the ability to jump out of the local optima; and (iii) we update the velocity equation of the standard BA with an adaptive weight to balance the exploration and exploitation and keep the algorithm stable.
In addition, different constraint-handling methods have been proposed to solve CNLPPs: (i) hybrid methods; (ii) repair algorithms; (iii) special representations and operators; (iv) separation of objectives and constraints; (v) penalty functions. In this paper, we use a penalty function to solve CNLPPs because it is the simplest of the above methods for handling constraints. The rest of the paper is organized as follows: Section 2 describes the structure of the standard BA. Section 3 explains the basic principle of the IBA in detail, which is based on the integration strategy and a local search with adaptive parameters. The goal is to address the problems of the standard BA: local-optima traps, slow convergence, and unstable optimization results. This paper presents the following improvements: (i) an adaptive weight; (ii) a representation of the random disturbance as a linear combination of two Gaussian distributions with different variances; (iii) determination of the optimal solution with an integration strategy; and (iv) an improved local search.

Constraint-Handling Technique-Based Method.
For the constrained nonlinear programming problem (CNLPP), the penalty method is a common approach, whose core idea is to transform the constrained problem into an unconstrained problem with a penalty function. In general, for a metaheuristic algorithm, the equality constraints in equation (1) can be relaxed as |h_j(x)| - δ <= 0, where δ is a small positive tolerance for the equality constraints. Therefore, we define a penalty function that accumulates the constraint violations, and we define the new objective function according to equation (1) as the original objective plus the penalty function scaled by a penalty parameter λ. The new objective function can then be further expressed in terms of φ_e and ψ_j, which denote the violations of the inequality and equality constraints, respectively, and μ_e and ν_j, which denote the penalty parameters of the inequality and equality constraints. The value of the penalty parameter should be taken as large as necessary for the solution quality needed. From the above analysis, we can see that when the inequality constraints are satisfied, the term weighted by μ_e contributes nothing to the objective function; when they are violated, that term grows rapidly and the solution is heavily penalized. The same holds for the penalty parameter ν_j in the case of the equality constraints.
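The transform described above can be sketched in a few lines. This is a minimal illustration under quadratic penalties, not the authors' MATLAB implementation; the penalty weights `mu`, `nu` and the tolerance `delta` are placeholder values.

```python
def penalized_objective(f, ineq, eq, mu=1e6, nu=1e6, delta=1e-4):
    """Build an unconstrained objective from a constrained one via
    quadratic penalties. `ineq` holds g_l(x) <= 0 constraints, `eq`
    holds h_j(x) = 0 constraints relaxed to |h_j(x)| - delta <= 0."""
    def F(x):
        # Inequality constraints: penalize only positive (violated) parts.
        p_ineq = sum(max(0.0, g(x)) ** 2 for g in ineq)
        # Relaxed equality constraints: penalize excess beyond the tolerance.
        p_eq = sum(max(0.0, abs(h(x)) - delta) ** 2 for h in eq)
        return f(x) + mu * p_ineq + nu * p_eq
    return F

# Example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
F = penalized_objective(lambda x: x[0] ** 2,
                        ineq=[lambda x: 1 - x[0]], eq=[])
```

A feasible point such as x = 2 incurs no penalty, while an infeasible point such as x = 0 is dominated by the penalty term, steering the search back toward the feasible region.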

Mathematical Problems in Engineering
When the BA is used to solve CNLPPs, its searching mechanism can be expressed as the following optimization problem: minimize the fitness function f(x) over the search space R^D of the BA. Suppose that position X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}) is a feasible solution. During initialization, the bat individuals are generated randomly in the search space. The bat parameters, including pulse frequency, velocity, and position, are updated according to equations (2)-(4). It can be seen from equations (2)-(4) that the velocity V_i and position X_i are updated according to the randomly generated pulse frequency f_i. After the update, the BA searches for a locally optimal solution in a random manner scaled by the average loudness of all bats, as shown in equation (5). Note that the local search is governed by the pulse rate r_i: it is conducted only if a random number is greater than r_i. From equation (5), we can see that the range of the local search depends on the average loudness. Then, if a random number is lower than A_i and the value of the current solution is lower than that of the optimal solution, the new solution is accepted according to the feasibility-based rules. The loudness and pulse rate are updated according to equations (6) and (7). Similar to [41], the feasibility-based rules for the BA can be defined as follows: (1) any feasible solution is superior to any infeasible solution; (2) between two feasible solutions, the one with the better objective function value is preferred; (3) between two infeasible solutions, the one with the smaller constraint violation is preferred. In summary, these rules choose the solution that lies closer to the feasible region.
According to the above description, we present the pseudocode of BA in Algorithm 1.
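For readers who prefer code to pseudocode, one iteration of the standard BA update can be sketched as follows. This is a simplified one-dimensional version with illustrative parameter values (`f_min`, `f_max`, the pulse rate `r`, and the average loudness `A_mean` are assumptions, not values from the paper).

```python
import random

def ba_step(X, V, X_best, f_min=0.0, f_max=2.0, r=0.5, A_mean=0.9):
    """One BA update for a population of scalar positions X with
    velocities V: draw a pulse frequency (equation (2)), update the
    velocity toward the global best (equation (3)), move (equation (4)),
    and, with probability 1 - r, do a local random walk scaled by the
    average loudness (equation (5))."""
    new_X, new_V = [], []
    for x, v in zip(X, V):
        freq = f_min + (f_max - f_min) * random.random()  # equation (2)
        v = v + (x - X_best) * freq                       # equation (3)
        x = x + v                                         # equation (4)
        if random.random() > r:                           # local search
            x = X_best + random.uniform(-1, 1) * A_mean   # equation (5)
        new_X.append(x)
        new_V.append(v)
    return new_X, new_V
```

In the full algorithm this step is followed by the acceptance test against the loudness A_i and the updates of r_i and A_i in equations (6) and (7).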

Parameter Improvement.
It is generally known that a suitable value of the inertia weight provides a balance between the global and local exploration abilities of an algorithm. Shi and Eberhart [42] pointed out that better performance is obtained if the inertia weight is chosen as a time-varying, linearly decreasing quantity: the system should start with a high inertia weight for global exploration, and this weight should decrease to facilitate finer local exploration in later iterations. The concave model [43], as a nonlinear model, can meet these requirements for the inertia weight. However, an inertia weight generated by a concave function greatly accelerates the convergence rate, which tends to make the algorithm fall into local optima. Inspired by Kentzoglanakis and Poole [44], we define an adaptive weight w with the help of the sine function, where ε → 0+, and we modify the velocity update equation accordingly. Figure 1 shows how the value of w varies with time t: it tends to decrease as the number of iterations increases. In equation (13), ε is introduced to avoid outputting 0 for w when t is equal to T_max. Introducing an adaptive weight into equation (13) makes the velocity update more flexible. At the beginning of the iterations, an individual has a higher speed because the weight is large, which speeds up the search process and improves the global search ability. In contrast, an individual has a lower speed in the last stages of the iterations because the weight is smaller, which improves the local search capability and ensures the stability of the algorithm. In the standard BA, the optimal position of the bat population is adjusted using a random number that follows a uniform distribution on the interval [-1, 1]. To enhance the search performance of the algorithm, He et al. [45] introduced Gaussian perturbations into the standard BA instead of the uniform distribution.
Inspired by Chellapilla et al. [46], we modify the random disturbance in the standard BA into a linear combination of two Gaussian distributions, X_new = X_old + 0.5(N(0, 1) + N(0, 2))A_t, where N(0, 1) is a random number drawn from a Gaussian distribution with zero mean and a standard deviation of one, N(0, 2) is a random number drawn from a Gaussian distribution with zero mean and a standard deviation of two, and A_t is the average loudness. Figure 2 shows the probability density function (PDF) of the linear combination; the two component Gaussian PDFs are also plotted for comparison. Based on Figure 2, the range of the x-axis can be split into two regions: around the mean (-1.8 to 1.8) and far from the mean (below -1.8 or above 1.8). In comparison with N(0, 1), the linear combination generates fewer random numbers around the mean; in comparison with N(0, 2), it generates fewer random numbers far from the mean. Thus, the linear combination achieves better disturbance performance and prevents an individual from falling into local optima.
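The perturbation above is straightforward to sample. A minimal sketch, assuming the disturbed position takes the form X_new = X_old + 0.5(N(0, 1) + N(0, 2))A_t described in the text:

```python
import random

def gaussian_perturbation():
    """Sample the disturbance 0.5*(N(0,1) + N(0,2)): a linear
    combination of two independent zero-mean Gaussians with standard
    deviations 1 and 2."""
    return 0.5 * (random.gauss(0.0, 1.0) + random.gauss(0.0, 2.0))

def perturb_best(x_best, loudness):
    """X_new = X_old + 0.5*(N(0,1) + N(0,2)) * A_t."""
    return x_best + gaussian_perturbation() * loudness

# With independent components, the mixture 0.5*(N1 + N2) is itself
# Gaussian with variance 0.25*(1 + 4) = 1.25, i.e. a standard
# deviation of about 1.118, between the two component deviations.
```

The intermediate spread (neither as concentrated as N(0, 1) nor as heavy-shouldered as N(0, 2)) is what produces the behavior visible in Figure 2.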

Improved Local Search Algorithm.
The standard BA falls into local optima during the iterations. To solve this issue, we propose an improved local search algorithm (ILSA). The basic principle of the ILSA is to find the exact optimal solution according to multiple fitness values. The ILSA operates as follows: Step 1. Generate the neighborhood set of the best position X* according to equation (16), where rand is a random number in the range [0, 1]. We obtain a neighborhood set N(X*) = {X*_1, X*_2} from equation (16), and we assume that X*_1 < X*_2.
Step 2. Calculate the fitness values f(X*_1) and f(X*_2) of the disturbed solutions according to the objective function. Step 3. If f(X*_i) < f(X*) for i = 1 or 2, replace X* with the corresponding X*_i. Step 4. Output the best solution X* and stop the search.
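Steps 1-4 can be sketched as follows, assuming a scalar best position and a two-element neighborhood generated by symmetric perturbations; the neighborhood rule here is an illustrative stand-in for equation (16).

```python
import random

def ilsa(f, x_best, loudness, scale=0.5):
    """Improved local search sketch: generate two neighbors of the
    best position X* (Step 1), evaluate them (Step 2), accept any
    improvement (Step 3), and return the best solution (Step 4)."""
    # Step 1: neighborhood set N(X*) = {X*_1, X*_2}, with X*_1 < X*_2
    neighbors = sorted(x_best + scale * random.uniform(0, 1) * loudness * s
                       for s in (-1.0, 1.0))
    # Steps 2-3: evaluate both neighbors and keep any improvement
    for cand in neighbors:
        if f(cand) < f(x_best):
            x_best = cand
    # Step 4: output the best solution found
    return x_best
```

Because only improving moves are accepted, the returned solution is never worse than the input, which is why the ILSA preserves monotone improvement across iterations.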
In an optimization algorithm, there may exist, at any point of the iterations, a specific operator whose performance is superior to that of the other operators [47]. Therefore, the global search capability of the algorithm can be further improved by selecting the appropriate operator at different times. In this paper, we propose an optimal operator selection strategy for the velocity update. The main idea is to update the velocity by selecting the appropriate operator and thereby further improve the exploration ability of the algorithm. The selection strategy determines whether the IBA can jump out of a local optimum: if the IBA jumps out of the local optimum, we randomly select another operator; otherwise, we keep the current operator.
Based on the above idea, equation (14) can be modified as follows, where mix_k(t) represents the kth velocity update operator.
Algorithm 1: BA.
(1) Initialize position X_i and velocity V_i, i = 1, 2, ..., M;
(2) Initialize pulse rates r_i and loudness A_i;
(3) Define the pulse frequency f_i of the ith bat in the range between f_min and f_max;
(4) Evaluate all elements of the population by the objective function f(X) and the constraint value of each bat by the constraint functions;
(5) Initialize the iteration counter t = 1;
(6) while (t < maximum number of iterations N)
(7) For each bat
(8) Update the locations/solutions (equation (4));
(9) If rand > r_i
(10) Select a solution from among the best solutions;
(11) Generate a local solution around the best solution (equation (5));
(12) End if
(13) Evaluate the fitness values and constraint values of the offspring;
(14) If the new solution is acceptable
(15) Accept the new solution according to the feasibility-based rules;
(16) Update the fitness;
(17) Update the pulse rate r_i and loudness A_i (equations (6) and (7));
(18) End if
(19) End for
(20) Rank the bats and find the current best X*;
(21) t = t + 1;
(22) End while

In this paper, we select the following three velocity update operators:
(1) An operator based on the standard BA [48]:
(2) An operator based on the chaotic BA [27]: where CM_i is the generic name of chaotic maps consisting of several map functions.
(3) An operator based on an improved BA [49]: where W_t is the worst position.
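The selection rule of the integration strategy (switch operators after escaping a local optimum, otherwise keep the current one) can be sketched as follows; the function and argument names are illustrative.

```python
import random

def select_operator(operators, current_idx, escaped_local_optimum):
    """Integration-strategy sketch: if the last update escaped a local
    optimum, explore by switching to a randomly chosen *other* operator;
    otherwise keep exploiting the current one."""
    if escaped_local_optimum and len(operators) > 1:
        choices = [i for i in range(len(operators)) if i != current_idx]
        return random.choice(choices)
    return current_idx
```

In the IBA this rule is evaluated once per bat per iteration, with the three velocity update operators of equations (17)-(21) as the candidate set.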
We present the pseudocode of the IBA in Algorithm 2.

Convergence Analysis of IBA
IBA is a stochastic optimization algorithm just like other heuristic optimization algorithms [50]. In this section, we give the global convergence proof of the IBA based on the convergence criteria of stochastic algorithms [51]. We first introduce the definition and theorem and then prove the global convergence of IBA.

Definition of IBA
Definition 1. State and state space of the IBA: X is the state of a bat, and L represents the feasible solution space, denoted as X ∈ L. ψ denotes the state of the IBA, which consists of all bats, ψ = (X_1, X_2, ..., X_N). Furthermore, S is the state space of the IBA, denoted as S = (ψ_1, ψ_2, ..., ψ_N).

Definition 2.
State transition probability of the IBA: in the IBA, the process of changing from one state X_i to another state X_j is defined as the state transition of a bat, denoted as E_ψ(X_i) = X_j. The probability equation is defined as follows.

Lemma 1 (see [52]). In a probability space, z denotes the existing solution of the random algorithm, D represents the operator by which the algorithm generates a solution, and ζ denotes the solution generated by the random algorithm. For the objective function f, if f(D(z, ζ)) <= f(z), then the inequality can be expressed as follows:

Lemma 2 (see [52]). For any Borel set A in the domain S of an objective function, if L(A) > 0, the formula is as follows: where L(A) is the Lebesgue measure of set A and μ_k(A) is the probability of generating set A.

Lemma 3 (see [52]).

Proof of Theorem 1. In every iteration of the ILSA, the current optimal solution X_old is perturbed, the best solution X* is disturbed, and the acceptance criterion is applied. For this reason, the IBA retains the best solution in every iteration, and Theorem 1 is proved. □

Theorem 2. B is a closed set in IBA.
Proof. For all ψ_i ∈ B and all ψ_j ∉ B, the transition probability from state ψ_i to ψ_j after the Mth (M >= 1) step is as follows: The state transition probability of the IBA is as follows:

Proof of Theorem 3. We make the following assumption: a nonempty closed set E exists in the state space S. Let ..., so that P^M(T_S(ψ_j) = ψ*_j) > 0. As can be seen from Theorem 2, E is then not a closed set, which contradicts Lemma 3. Theorem 3 is proved, which gives us G ∩ B = Φ. □

Theorem 4. The state of the IBA enters the optimal state set B as the number of iterations tends to infinity.
Proof. As can be seen from Theorems 2 and 3, the state space S is not composed of closed sets outside the optimal state set B. If ζ_j ∉ B, we have lim_{n→∞} P(T_S(X_n) = X_j) = 0. Therefore, the optimal state set B eventually contains the state of the IBA, and Theorem 4 is proved. □

Proof. As can be seen from Theorem 4, when the number of iterations approaches infinity, the probability that the globally optimal solution is found becomes 1, where G is the best state set B. Therefore, the theorem is proved. □
Based on the above theoretical analyses, it can be concluded that Lemmas 1 and 2 can be satisfied at the same time and IBA is globally convergent.

Simulation Results
In this section, we demonstrate the superiority of the algorithm on the CEC2013 benchmark functions by comparison with other algorithms. It should be noted that here the proposed algorithm is applied to unconstrained optimization problems and does not handle constrained optimization problems. Firstly, the CEC2013 benchmark functions and parameter settings are introduced; after that, the simulations are presented.

CEC2013 Function and Algorithm Parameter Setting.
Simulation on the CEC2013 benchmark set was carried out to evaluate the performance of the proposed IBA. The test set consists of three groups: unimodal functions, basic multimodal functions, and composition functions. The IBA is compared with the following algorithms:
(i) Bat algorithm (BA) [22]
(ii) Chaotic bat algorithm (CBA) [26]
(iii) Bat algorithm with Lévy distribution (LBA) [29]
(iv) Fractional Lévy flight bat algorithm (FLFBA) for global optimization [30]
(v) Oriented cuckoo search (OCS) [5]
(vi) IBA without the integration strategy (IBA-1)
(vii) IBA without the Gaussian function (IBA-2)
(viii) IBA without the adaptive weight (IBA-3)
Table 3 shows the parameter settings for the nine algorithms according to the CEC2013 benchmark [53]. Note that the parameters of the algorithms are not optimized. As can be seen from Table 3, each algorithm is terminated when the number of iterations reaches 3,000.

Algorithm 2: IBA.
(1) Define the objective function f(X), X = (x_1, x_2, ..., x_D)^T;
(2) Initialize position X_i and velocity V_i, i = 1, 2, ..., M;
(3) Initialize pulse rates r_i and loudness A_i;
(4) Define the pulse frequency f_i of the ith bat in the range between f_min and f_max;
(5) Evaluate all elements of the population by the objective function f(X);
(6) Initialize the iteration counter t = 1;
(7) while (t < maximum number of iterations N)
(8) For each bat
(9) If the ILSA jumps out of the local optimum
(10) Select another velocity update operator randomly and update the velocities (equations (17)-(21));
(11) Else
(12) Select the current velocity update operator and update the velocities (equations (17)-(21));
(13) End if
(14) Update the locations/solutions (equation (4));
(15) If rand > r_i
(16) Select a solution from among the best solutions;
(17) Generate a local solution around the best solution (equation (5));
(18) End if
(19) If the new solution is acceptable
(20) Accept the new solution;
(21) Update the fitness;
(22) Update the pulse rate r_i and loudness A_i (equations (6) and (7));
(23) End if
(24) End for
(25) Obtain the disturbed solutions X*_1, X*_2 (equation (16));
...
(32) End if
(33) Rank the bats and find the current best X*;
(34) t = t + 1;
(35) End while
(36) Postprocess the results and visualization;

We used the following indicator to evaluate the experimental results: the average error between the obtained solutions and the true optimum, where F_i is the ith solution and F_value is the actual optimum of the benchmark function. In the following experiments, we take the average solution of each algorithm over 51 trial runs, where the value of 51 is set according to the CEC2013 benchmark. The resulting average errors are listed in Table 4. On the last line of Table 4, w denotes the number of times the IBA performed better than the other algorithm, t the number of times the IBA performed similarly, and l the number of times the IBA performed worse. In addition, the best results in Table 4 are presented in bold.
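The averaging indicator can be sketched as follows; the names `solutions` and `f_true` are illustrative stand-ins for the 51 trial results F_i and the true optimum F_value.

```python
def mean_error(solutions, f_true):
    """Average absolute error over trial runs, as used to summarize
    the 51 CEC2013 trials of each algorithm."""
    return sum(abs(F_i - f_true) for F_i in solutions) / len(solutions)

# Example: two runs that land at 1.0 and 3.0 around a true optimum of 2.0
# have a mean error of 1.0.
```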

Comparison of the IBA with Other Algorithms. The average error in equation (27) obtained by the evaluated algorithms on the different test functions is shown in Table 4.
As shown in Table 4, the IBA outperformed the other algorithms on 26, 22, 24, 23, 21, 25, 27, and 26 functions when compared with the BA, CBA, LBA, FLFBA, OCS, IBA-1, IBA-2, and IBA-3, respectively. The BA performs the worst. Compared with the IBA, CBA, and LBA, the FLFBA has better performance on certain functions. Therefore, we conclude that the IBA can find effective solutions on most of the benchmark test functions. Figure 3 shows the convergence results for different test functions. As is clear from the figure, the proposed IBA performs well in terms of convergence in most cases. However, the IBA performs poorly compared with the other algorithms on functions F3, F5, F6, F7, F14, F15, and F18.
This is because a suboptimal strategy was selected, which made the algorithm fall into a local optimum. It can be seen clearly from Figure 3 that the convergence rate of the IBA is faster than that of the OCS on most functions. The performance of IBA-2 is superior to those of IBA-1 and IBA-3 on most functions, which indicates that the Gaussian distribution has little impact on the accuracy of the algorithm but can accelerate its convergence rate. IBA-1 converges more slowly than IBA-3 on most functions, which indicates that the integration strategy can improve accuracy. In terms of search accuracy and convergence, IBA-1 and IBA-2 are superior to the BA, which means that the adaptive weight can improve the stability and search speed of the IBA. From these results, it can be concluded that the IBA has higher accuracy and a better convergence rate than the original BA. The results of the Friedman test [52,54] are given in Table 5. Smaller rank values indicate better performance. Compared with the other eight algorithms, the IBA has the smallest rank. Thus, we conclude that the IBA is the best of the nine algorithms.
To evaluate the performance of the proposed algorithm and the related algorithms at different high dimensionalities, scaling simulations were performed on the CEC2008 benchmark set. The parameter settings of the algorithms are the same as in Table 3. The results of the IBA and the other algorithms in different dimensions are compared in Tables 7 and 8. Figures 4 and 5 show the convergence results for different test functions. As is clear from the figures, the proposed IBA performs well in terms of convergence in most cases, although it performs poorly on some functions.
As it can be seen from Tables 6 and 7, as the dimensions of the functions increase, the performances of all algorithms decrease. However, IBA performs better than the other algorithms for most of the functions. IBA had better results for six functions (F1, F3, F4, F5, F6, and F7) in 100 dimensions and for six functions (F1, F2, F3, F5, F6, and F7) in 1,000 dimensions.
The above results show that the IBA is superior to the other algorithms at different high dimensionalities. The results of the Friedman test for the different dimensions are listed in Table 9. Compared with the other five algorithms, the IBA has the smallest ranking value. Thus, we conclude that the IBA is the best of the six methods. The results of the Wilcoxon test for the different dimensions are listed in Table 10. As can be seen from Table 10, the IBA performs better than the other five algorithms.

Comparison on Two Real-World Application Problems.
To test the feasibility and performance of the IBA on real-world applications, we chose two problems from the real world [56]: the design of a gear train [57] and parameter estimation for frequency-modulated (FM) sound waves [58]. We compared the IBA with the following ten algorithms:
(i) Social learning PSO (SLPSO) [56]
(ii) Adaptive PSO (APSO) [59]
(iii) Comprehensive learning PSO (CLPSO) [60]
(iv) Cooperative approach to PSO (CPSOH) [61]
(v) Fully informed particle swarm (FIPS) [62]
(vi) Supervised PSO (SPSO) [63]
(vii) Adaptive differential evolution with optional external archive (JADE) [64]
(viii) Global and local real-coded genetic algorithm (HRCGA) [65]
(ix) Frankenstein's PSO (FPSO) [66]
(x) Restart CMA evolution strategy with increasing population size (G-CMA-ES) [67]
The first problem is to optimize the gear ratio for a compound gear train, where x_j ∈ [12, 60], j = 1, 2, 3, 4. The second problem is to estimate the parameters of an FM synthesizer. The components of the parameter vector are X = (a_1, w_1, a_2, w_2, a_3, w_3), where θ = 2π/100 in the expressions for the estimated and target sound waves in equations (29) and (30), and the parameters are defined in the range [-6.4, 6.35]. The fitness function is the sum of the squared errors between the estimated and target waves. A comparison of the performances on the two real-world problems is listed in Table 11. The parameter settings of the IBA are the same as those listed in Table 3. The IBA and the related algorithms were each run 30 times for a sound comparison, where "Min," "Max," "Mean," and "Std" denote the best, worst, mean, and standard deviation values, respectively.
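The gear train cost function, in its standard form from the benchmark literature, can be written directly; the target ratio 1/6.931 and the four-variable formulation are the usual ones, and this is a sketch rather than the authors' code.

```python
def gear_train_cost(x):
    """Squared error between the target ratio 1/6.931 and the gear
    ratio x1*x2/(x3*x4) of a compound gear train, with each tooth
    count x_j an integer in [12, 60]."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# A known near-optimal integer solution from the literature,
# x = (19, 16, 43, 49), gives a cost on the order of 1e-12.
```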
As can be seen from Table 11, the IBA easily solves the first real-world problem and obtained the best values. For the second problem, the proposed IBA found the optimal solution and obtained the smallest standard deviation, which shows that the IBA has a good ability to find the global optimum when dealing with real-world problems and does so with good stability.

Summary
Although heuristic algorithms perform well on optimization problems, they easily fall into local optima and output unstable results. To solve these problems, the IBA was proposed to improve the performance of the BA. The convergence of the algorithm was proved by mathematical analysis and demonstrated by simulation experiments. In the future, we will use the IBA to solve modeling, optimization, and control problems in different fields. In particular, we will further improve the algorithm to solve constrained optimization problems.
Data Availability

The MATLAB code used in this paper is publicly available at https://github.com/yzbyhjq/iba.git.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.