An Enhanced Grasshopper Optimization Algorithm Applied to the Bin Packing Problem

The grasshopper optimization algorithm (GOA) is a novel metaheuristic algorithm. Because of its easy deployment and high accuracy, it is widely used in a variety of industrial scenarios and obtains good solutions. At the same time, the GOA has some shortcomings: (1) the original linear convergence parameter makes the processes of exploration and exploitation unbalanced; (2) unstable convergence speed; and (3) a tendency to fall into local optima. In this paper, we propose an enhanced grasshopper optimization algorithm (EGOA) that uses a nonlinear convergence parameter, a niche mechanism, and the β-hill climbing technique to overcome these shortcomings. In order to evaluate the EGOA, we first select the benchmark set of the GOA authors to test the performance improvement of the EGOA over the basic GOA. The analysis covers exploration ability, exploitation ability, and convergence speed. Second, we select the recent CEC2019 benchmark set to test the optimization ability of the EGOA on complex problems. The analysis of the results on the two benchmark sets shows that the EGOA performs better than the other five metaheuristic algorithms. To further evaluate the EGOA, we also apply it to an engineering problem, the bin packing problem. We test the EGOA and five other metaheuristic algorithms on the SchWae2 instances. After analyzing the test results with the Friedman test, we find that the performance of the EGOA is better than that of the other algorithms on bin packing problems.


Introduction
The bin packing problem (BPP) is one of the most important combinatorial optimization problems. It arises in many fields of science and engineering and is the basis of many practical engineering optimization problems, including processor scheduling [1] and cloud computing resource allocation [2] in the field of computer science. The BPP is NP-hard [3], so basic heuristic algorithms have been proposed for solving it. These basic heuristic algorithms can ensure fast and accurate solutions for small-scale bin packing problems, but their performance starts to deteriorate as the problem scale increases.
Thus, metaheuristic algorithms, which can find a high-quality solution by adjusting the exploitation and exploration of the search space, have become a new trend for solving the BPP.
In [4], the authors combine a genetic algorithm with a grouping mechanism. In [5], the authors apply first fit and ranked order values (ROVs) in CS to solve the BPP. In [6], the authors solve the BPP using an improved whale optimization algorithm.
Metaheuristic algorithms have developed rapidly and have been widely applied in many different fields, such as signal detection [7], resource allocation [8], load balancing [9], feature selection [10], task scheduling [11], and engineering applications [12]. Representative algorithms include the Dragonfly Algorithm (DA) [13], Ant Lion Optimization (ALO) [14], the Bee Algorithm (BA) [15], and Particle Swarm Optimization (PSO) [16,17]. These algorithms seek inspiration from physical or biological phenomena to solve optimization problems, including ant colony foraging, bird migration, bee colony behavior, grey wolf hunting, and fish schooling [18]. It has been proved in practice that they are superior to traditional optimization methods in many application scenarios. Metaheuristic algorithms form a randomly initialized population for a given problem, evaluate the solutions using the objective function(s), and improve the random solutions over the iterations until a terminating condition is satisfied, in order to find the global optimum. This unique way of finding solutions gives them the following advantages: (i) simplicity, as they are mainly inspired by fairly simple concepts and are easy to apply; (ii) flexibility, with no special requirements on the objective function; and (iii) independence from derivatives, with no need to calculate derivatives in the process of finding the optimal solution. These advantages mean that metaheuristic algorithms are not limited to specific problems but have a wide range of applications.
Although there are differences between metaheuristic algorithms, they all have one feature in common: the search process consists of two phases, exploration and exploitation. In the exploration phase, the algorithm explores the search space as widely as possible for promising regions. In the exploitation phase, it emphasizes local search for the optimal solution around those promising regions. How to balance exploration and exploitation has become the key to designing and improving a new metaheuristic algorithm. The GOA is a state-of-the-art population-based metaheuristic algorithm inspired by the long-range and abrupt movements of adult grasshoppers in groups [19]. The GOA is widely used in a variety of industrial scenarios. Aljarah proposes a hybrid approach based on the grasshopper to optimize the parameters of the SVM model [20]. Hekimoglu Baran and Ekinci Serder employ the GOA to solve many optimization problems in an automatic voltage regulator system [21]. Wu proposes a dynamic GOA for optimizing the distributed trajectories of unmanned aerial vehicles in urban environments [22]. In [23], the authors apply the basic multiobjective GOA to solve several benchmark problems with superior performance.
While the GOA can obtain good solutions in a reasonable timeframe, it presents some shortcomings: (1) the original linear convergence parameter makes the processes of exploration and exploitation unbalanced; (2) unstable convergence speed; and (3) a tendency to fall into local optima. Some studies have proposed improved GOA algorithms. Luo proposes an improved GOA using Lévy flight in [24].
Tharwat applies the GOA to constrained and unconstrained multiobjective optimization problems in [25]. Ewees introduces an opposition-based learning strategy into the GOA in [26]. This strategy improves the ability to jump out of local optima and addresses movement issues. However, the improvement with respect to the other shortcomings is not significant, because these methods consider neither the balance between exploration and exploitation nor the relationship between population diversity and convergence speed during the swarm optimization process.
To overcome these disadvantages, this paper proposes an enhanced grasshopper optimization algorithm. The main contributions of this paper can be summarized as follows: (i) we introduce a nonlinear convergence parameter into the basic GOA to balance the exploration and exploitation phases and improve overall performance; (ii) we apply a niche repulsing mechanism to the basic GOA, which directs the swarm optimization and ensures the diversity of the search space to increase the convergence speed; (iii) we adapt the β-hill climbing (BHC) technique to the GOA to avoid falling into local optima. The remainder of this paper is organized as follows. The basic GOA and the EGOA are introduced in Sections 2 and 3. Several experiments on two benchmark sets are carried out and the analysis results are presented in Section 4. The BPP and the applicability of the EGOA to the BPP are discussed in Section 5. Finally, conclusions and future work are discussed in Section 6.

Biological Inspiration.
Grasshoppers are considered to be pests because of the damage they inflict on crops and vegetation. Instead of acting individually, grasshoppers form some of the largest swarms among all living creatures. Millions of grasshoppers jump and move like large rolling cylinders. The individuals in the swarm, the wind, gravity, and food sources all influence swarm movement.

Main Procedures for the GOA.
The GOA is a novel swarm-intelligence-based metaheuristic algorithm inspired by the long-range and abrupt movements of adult grasshoppers in groups. Seeking food sources is an important behavior of grasshopper swarms. Metaheuristic algorithms logically divide the search process into two phases: exploration and exploitation. The long-range and abrupt movements of grasshoppers represent the exploration phase, and local movements to search for better food sources represent the exploitation phase.
A mathematical model for this behavior was presented by Mirjalili in [19]. This model is defined as follows:

X_i = S_i + G_i + A_i,   (1)

where X_i is the position of the i-th grasshopper, S_i represents the social interaction in the group, G_i is the force of gravity acting on the i-th grasshopper, and A_i is the wind advection. By expanding S_i, G_i, and A_i in (1), the equation can be rewritten as follows:

X_i = Σ_{j=1, j≠i}^{N} s(d_ij) (x_j − x_i)/d_ij − g e_g + u e_w,   (2)

where s(r) = f e^{−r/l} − e^{−r} is a function simulating the impact of social interactions (f is the intensity of attraction and l is the attractive length scale) and N is the number of grasshoppers. g e_g is the expanded G component, where g is the gravitational constant and e_g is a unit vector pointing toward the center of the earth. u e_w is the expanded A component, where u is a constant drift and e_w is a unit vector in the direction of the wind. d_ij = |x_j − x_i| is the distance between the i-th and j-th grasshoppers. Because grasshoppers quickly reach their comfort zones and the swarm then exhibits poor convergence, and because the influences of wind and gravity are far weaker than the interactions between grasshoppers, this mathematical model is modified as follows:

X_i^d = c ( Σ_{j=1, j≠i}^{N} c ((ub_d − lb_d)/2) s(|x_j^d − x_i^d|) (x_j − x_i)/d_ij ) + T_d,   (3)

where ub_d and lb_d are the upper and lower boundaries of the d-th dimension of the search space, T_d is the d-th dimension of the target (the best solution found so far), and c is a decreasing coefficient that balances the processes of exploitation and exploration, defined as follows:

c = c_max − iter (c_max − c_min)/Max_iter,   (4)

where c_max is the maximum value (equal to 1), c_min is the minimum value (equal to 0.00001), iter represents the current iteration, and Max_iter is the maximum number of iterations.
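The position update of equations (2)-(4) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the values of f and l (the attraction intensity and length scale of s(r)) are the illustrative constants commonly used for the GOA.

```python
import numpy as np

def s(r, f=0.5, l=1.5):
    # Social interaction s(r) = f*exp(-r/l) - exp(-r).
    return f * np.exp(-r / l) - np.exp(-r)

def linear_c(it, max_iter, c_max=1.0, c_min=1e-5):
    # Equation (4): linearly decreasing coefficient.
    return c_max - it * (c_max - c_min) / max_iter

def goa_step(X, target, c, lb, ub):
    # One GOA position update following equation (3).
    # X: (N, dim) agent positions; target: (dim,) best solution so far.
    N, dim = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        social = np.zeros(dim)
        for j in range(N):
            if j == i:
                continue
            d = max(np.linalg.norm(X[j] - X[i]), 1e-12)  # avoid divide-by-zero
            # Inner c shrinks the comfort/repulsion zones; (ub-lb)/2
            # normalizes the interaction per dimension.
            social += c * (ub - lb) / 2 * s(np.abs(X[j] - X[i])) * (X[j] - X[i]) / d
        X_new[i] = np.clip(c * social + target, lb, ub)  # keep agents in bounds
    return X_new
```

With c computed from `linear_c`, repeatedly calling `goa_step` and updating `target` with the best agent reproduces the basic GOA loop.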

Enhanced Grasshopper Optimization Algorithm
First, the EGOA utilizes a nonlinear convergence parameter to balance the exploration and exploitation phases. Second, the EGOA is hybridized with a niche mechanism to balance diversity and convergence speed. Finally, the EGOA is combined with the β-hill climbing technique to avoid local optima.

Nonlinear Convergence Parameter.
In metaheuristic algorithms, the convergence parameter c directly affects the step size: the larger the convergence parameter, the larger the step size. In the exploration phase, the step size should be as large as possible so that the algorithm can search for the global optimum over a wide range and avoid falling into local optima. In the exploitation phase, the step size should be as small as possible so that the algorithm gradually converges to the global optimum without skipping over it. In the basic GOA, the convergence parameter c decreases linearly across the exploration and exploitation phases, so the step size cannot be adapted to the different characteristics of the two phases, which limits the performance of the algorithm. To make up for this shortcoming of the linear convergence parameter, we introduce the tanh function, a nonlinear function widely used in the field of deep learning with good performance. Its nonlinear shape matches the characteristics of the exploration and exploitation phases and can therefore improve the performance of the GOA in both phases. The tanh function is defined as follows:

tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}).   (5)

A nonlinear convergence parameter f based on a variant of the tanh function is introduced in equation (6), where the intermediate variable l_c of equation (7) depends on the current iteration iter and an adjustment factor w.
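Since the paper's exact equations (6)-(8) are not reproduced above, the following is one plausible tanh-shaped schedule with the properties described: it stays near c_max early (large exploratory steps) and drops quickly toward c_min later (small exploitative steps). The mapping for l_c and the role of w here are assumptions, not the authors' formulas.

```python
import math

def tanh_c(it, max_iter, c_max=1.0, c_min=1e-5, w=4.0):
    # Assumed tanh-shaped convergence schedule. l_c maps the iteration
    # counter into [-w, w]; (1 - tanh(l_c)) / 2 then decays from ~1 to ~0
    # with a steep transition around the midpoint of the run.
    l_c = w * (2.0 * it / max_iter - 1.0)
    return c_min + (c_max - c_min) * (1.0 - math.tanh(l_c)) / 2.0

def linear_c(it, max_iter, c_max=1.0, c_min=1e-5):
    # Equation (4), for comparison: constant decay rate in both phases.
    return c_max - it * (c_max - c_min) / max_iter
```

Unlike `linear_c`, `tanh_c` keeps the step size large for most of the exploration phase before shrinking it, which is the behavior the section argues for.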

Niche Mechanism.

The idea behind the GOA is that grasshoppers interact with each other. When solving problems, the agents in the GOA mimic this interaction and search for the global optimum in the solution space. During this search, when a local optimum is found, all agents move closer to it. If this movement happens too quickly, the agent swarm loses diversity, which leads to a local optimum and slows down the convergence speed. In order to balance the convergence speed and diversity of the GOA, we introduce niche technology.
Niche technology is inspired by the natural phenomenon that creatures tend to live alongside similar creatures. It can be used to balance the GOA's convergence speed and the diversity of the swarm. Therefore, to maintain the diversity of the swarm, we design a niche repulsing mechanism for the EGOA. Through this mechanism, the algorithm maintains the diversity of the swarm while preserving local optima. The mechanism is simulated as follows. First, calculate d_km, the Euclidean distance between search agents x_k and x_m (there are N search agents). Then, obtain d_i, the minimum distance between search agent x_i and any other search agent in the population, and calculate the niche radius D, where dim is the dimensionality of the search agents. Afterwards, whenever d_km is less than D, compare the fitnesses of x_k and x_m and assign a penalty p to the worse one: if fitness(x_k) > fitness(x_m), then fitness(x_k) is set to p; otherwise, fitness(x_m) is set to p. Finally, after all individuals have been processed, they are ranked according to their fitness, and the m%·N individuals with the worst fitness are eliminated. To maintain the population size and increase diversity, the best and worst m%·N individuals are selected to generate new individuals.
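A minimal sketch of the penalty step described above, assuming minimization, a large penalty value p, and a niche radius D taken as the mean of the nearest-neighbor distances d_i. The paper's exact definition of D (which also involves dim) may differ, so treat this radius as an assumption.

```python
import numpy as np

def niche_penalize(X, fitness, p=1e9):
    # Niche repulsing sketch for minimization: when two agents are closer
    # than the niche radius D, the worse one receives the penalty fitness p,
    # so it will be ranked last and eliminated/regenerated afterwards.
    N = X.shape[0]
    fit = fitness.copy()
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    D = dist.min(axis=1).mean()  # assumed radius: mean nearest-neighbor d_i
    for k in range(N):
        for m in range(k + 1, N):
            if dist[k, m] < D:
                if fit[k] > fit[m]:       # larger fitness is worse here
                    fit[k] = p
                else:
                    fit[m] = p
    return fit
```

After penalization, ranking by `fit` and replacing the worst m%·N agents with recombinations of the best and worst individuals completes the mechanism.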
The β-Hill Climbing Technique.

As the iterative process continues, the GOA behaves like many other metaheuristic algorithms: it focuses on converging to the global optimum and lacks a mechanism for avoiding local optima. When it falls into a local optimum, the GOA search cannot continue. This is fatal when solving practical engineering problems.
To allow the algorithm to escape from local optima, the β-hill climbing (BHC) technique [27] is introduced into the EGOA. The BHC technique iteratively improves a series of randomly generated approximate solutions. In the BHC technique, a stochastic operator called the β-operator is employed to establish a fine balance between exploration and exploitation throughout the global search. The BHC technique begins exploitation with a search agent x_i = (x_{i,1}, x_{i,2}, x_{i,3}, ..., x_{i,dim}) that represents a poor solution. It uses the β-operator throughout its exploitative steps to generate a new solution x_i'. The elements of the new solution keep their current values or are filled randomly with a probability of β as follows:

x'_{i,r} = lb + z (ub − lb), if U(0, 1) ≤ β; otherwise x'_{i,r} = x_{i,r},   (10)

where U(0, 1) and z represent random values in the range (0, 1). Next, the BHC technique compares x_i' to x_i and keeps the better (downhill) solution as follows:

x_i ← x_i', if fitness(x_i') is better than fitness(x_i); otherwise x_i is kept.   (11)

In terms of exploration, the BHC technique's ability to escape from local optima using the β-operator can be considered the key concept for avoiding local optima (Algorithm 1).
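The β-operator and downhill rule can be sketched as follows for a minimization problem. The small neighborhood step of width bw is an assumption in the spirit of [27] (the N-operator of standard BHC), not a formula stated in this paper.

```python
import random

def beta_hill_climb(x, fitness_fn, lb, ub, beta=0.1, bw=0.05, iters=50, rng=None):
    # β-hill climbing sketch for minimization. Each element either takes a
    # small neighborhood step (N-operator, assumed) or is re-sampled
    # uniformly in [lb, ub] with probability beta (β-operator, eq. (10)).
    # A candidate is accepted only if it does not worsen the fitness
    # (downhill rule, eq. (11)).
    rng = rng or random.Random()
    best, best_f = list(x), fitness_fn(x)
    for _ in range(iters):
        cand = []
        for v in best:
            if rng.random() <= beta:                       # β-operator
                cand.append(lb + rng.random() * (ub - lb))
            else:                                          # small local step
                cand.append(min(ub, max(lb, v + rng.uniform(-bw, bw) * (ub - lb))))
        f = fitness_fn(cand)
        if f <= best_f:                                    # downhill rule
            best, best_f = cand, f
    return best, best_f
```

In the EGOA, this routine is invoked only for agents whose fitness is worse than the target, as Algorithm 1 indicates.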

Experimental Results on Benchmarks
In this section, the efficiency of the proposed EGOA is compared to that of the basic GOA and several other metaheuristic algorithms, namely, the Dragonfly Algorithm (DA), Ant Lion Optimizer (ALO), Particle Swarm Optimization (PSO), and opposition-based learning GOA (OBLGOA), on various benchmark problems. All code used in this section was implemented in MATLAB 9.3 (R2017b) and executed on a PC running the Windows 10 64-bit Professional operating system with 16 GB of RAM. In order to fully verify the performance of the EGOA, this paper uses two benchmark sets. Benchmark set 1 is the test environment of the original GOA paper; it is used to verify the performance improvement over the original algorithm and to compare with other algorithms [28,29]. Benchmark set 2 is the recent metaheuristic performance benchmark set CEC2019 [30]; it further verifies that the EGOA also performs well on a recent benchmark set. To investigate the differences between the results obtained by the EGOA and those obtained by the other algorithms, a nonparametric Wilcoxon rank sum test [31] with a significance level of 5% was adopted in this study. The Wilcoxon rank sum test generates a p value to determine whether two datasets come from the same distribution. The higher the p value, the more similar the two datasets are. If the p value is less than 0.05, the two datasets are considered to come from different distributions.

Benchmark Set 1.
Benchmark set 1 used in this paper contains 29 test functions. Different test functions have different characteristics, which together comprehensively measure the performance of an algorithm. The test functions include unimodal, multimodal, fixed-dimension multimodal, and composite functions, which are listed in Tables 1 and 2. In the tables, the column labelled Function represents the objective fitness function, Dim represents the dimensionality of each function, Range represents the boundaries of the search space, f_min represents the optimal value, and Type represents the function type.
For the unimodal, multimodal, and fixed-dimension multimodal functions, each benchmark function uses 30 search agents over 500 iterations. The presented results were recorded over 30 independent trials with random initial conditions to calculate statistical results, including the average fitness (avg), which represents the average performance and reliability of the algorithm; the standard deviation of fitness (std), which represents the stability of the algorithm; the best fitness (best), which represents the best optimization ability of the algorithm; and the worst fitness (worst), which represents the capability boundary.

Evaluation of Exploitation Capability.
Unimodal functions, which have only one global optimum, can test the exploitation capabilities of algorithms. Table 3 shows the results for the unimodal functions. It can be clearly seen that the EGOA achieves the best performance in F1, F2, and F4 and the second best performance in F7. Moreover, the EGOA outperforms the other algorithms in F3 in terms of the avg fitness and the best fitness. In addition, the EGOA achieves the best avg fitness among all compared algorithms in F5. Besides, most of the p values in Table 4 are much less than 0.05, which means that the results of the EGOA and those of the other algorithms can be considered to come from different distributions. From the abovementioned results, we can conclude that the EGOA has a good exploitation capability. The main reasons for the superior performance of the EGOA are the embedded BHC local search and the exploitative patterns inherited from the basic GOA.

Evaluation of Exploration Capability.

The results for the composite functions listed in Table 7 indicate that the EGOA outperforms the other algorithms in F24 and F29, achieves the best avg fitness in F25, and achieves the best std fitness and avg fitness in F27. These results demonstrate that the EGOA is able to escape from local optima.

Analysis of EGOA Convergence Curves.
In this subsection, the convergence speed of the EGOA is analyzed. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for the EGOA, GOA, DA, ALO, PSO, and OBLGOA are presented in Figure 1. The abscissa shows the current iteration number, and the ordinate represents the best fitness value in each iteration. Representative convergence curves are selected from each part of the benchmark functions; they show that the EGOA has outstanding performance in solving different kinds of benchmark functions compared to well-known metaheuristic algorithms. During the iterations on F1, F4, and F10, the convergence speed of the EGOA is approximately in the middle of the five comparison algorithms for roughly the first 250 iterations. After 250 iterations, an inflection point appears and the EGOA accelerates its convergence. This is because the EGOA is hybridized with a niche mechanism that provides better individual direction and faster convergence. Moreover, during the iterations on F8, F15, and F21, the slope of the convergence curve is large in the initial iterations, and after several jumps the algorithm escapes the local optimum and finally obtains the best fitness. This can be explained by the fact that the EGOA uses the nonlinear convergence parameter f to balance the two search phases, and the combination of the EGOA and the β-hill climbing technique improves its ability to jump out of local optima. Overall, the convergence curves demonstrate that the proposed EGOA provides relatively fast convergence in most of the iterations.

Benchmark Set 2.
The CEC2019 benchmark set contains 10 test functions. Compared with previous CEC benchmark sets, the complexity of the test functions is significantly increased, and more attention is paid to an algorithm's ability to find an accurate solution. In Table 8, the column labelled Function represents the objective fitness function, Dim represents the dimensionality of each function, Range represents the boundaries of the search space, and f_min represents the optimal value.
Each test function uses 30 search agents for 500 iterations.
The presented results were recorded over 30 independent trials with random initial conditions to calculate statistical results, including the average fitness (avg), standard deviation of fitness (std), best fitness (best), and worst fitness (worst), which have the same meanings as in benchmark set 1. To assess the accuracy of the algorithms in this part of the solution space, we focus mainly on the average fitness and the best fitness.

Evaluation of EGOA's Performance in CEC2019.
The CEC2019 test functions, which have complicated solution spaces, are more challenging test problems for metaheuristic algorithms. Furthermore, they are suitable for evaluating an algorithm's ability to find accurate solutions in a complicated search process.

Algorithm 1: Enhanced grasshopper optimization algorithm.

    Initialize the swarm positions X (x_i, i = 1, ..., N) and the parameters
    Initialize c_max, c_min, and Max_iter (the maximum number of iterations)
    Calculate the fitness of each search agent and let T represent the best fitness
    while (iter < Max_iter) do
        Calculate f using equation (6)
        for i = 1 : N do
            Update x_i using equation (3)
            Calculate the fitness
            if the current fitness is worse than the target fitness then
                x_i conducts BHC using equations (10) and (11)
            end if
            Bring the current search agent back if it travels outside the boundaries
        end for
        X conducts the niche mechanism using equation (9)
        Update T and the positions
        iter = iter + 1
    end while
    Return the target fitness and target position

The results for the CEC2019 test functions listed in Table 9 indicate that the EGOA works best in f4 and f10 and second best in f5 and f6. Meanwhile, it achieves the best avg fitness in 7 of the 10 benchmark tests and the best fitness in 5 of 10. These results demonstrate that, compared with the other algorithms, the EGOA has good average performance and higher reliability. At the same time, it shows the best optimization ability in most search processes.

Analysis of EGOA's Convergence Curves in CEC2019.
In this subsection, the convergence behavior of the EGOA on the CEC2019 test functions is investigated. Several benchmark functions were chosen and executed with 30 search agents over 500 iterations. The results for part of the benchmark functions are presented in Figure 2. The parameter space in Figure 2 indicates that the solution space of the CEC2019 test functions is more complex than that of the test functions in benchmark set 1, and it is more challenging to find the optimal fitness. During the iterations on f4, f7, and f10, the EGOA maintains a relatively large slope in the initial iterations, and after several sudden drops of the curve, it converges to the optimal fitness in the search space, far outperforming the other algorithms. Although OBLGOA achieves the best result on f2, the EGOA ranks second. Taken together, these results show that the proposed EGOA also has superior search ability in more complex solution spaces compared with well-known metaheuristics.

BPP Formulation.
The BPP consists of packing a set of items with different weights into a minimum number of bins, which may have different capacities. Mathematically,

minimize Σ_{i=1}^{n} y_i
subject to Σ_j w_j z_ij ≤ C y_i, i = 1, ..., n,
Σ_{i=1}^{n} z_ij = 1 for every item j,
y_i, z_ij ∈ {0, 1},

where y_i is a binary variable that indicates whether or not bin i contains any items, z_ij is a binary variable that indicates whether or not item j is assigned to bin i, w_j is the weight of item j, C is the bin capacity, and n is the number of available bins. The goal is to minimize the number of bins, and the sum of the sizes of the items assigned to any bin cannot exceed its capacity. Using the bin count directly as a fitness function may lead to algorithm stagnation, because many solutions use the same number of bins. However, this information can be helpful if integrated with other information, such as the fullness of the bins. The following formulation was introduced in [33]:

maximize f_BPP = ( Σ_{i=1}^{nbin} (ocup_i / C)^s ) / nbin,

where nbin is the number of used bins, ocup_i is the occupancy of bin i (the sum of the weights of all items packed into it), C is the capacity, and s is a constant that defines an equilibrium point for filling the bins.
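The fitness of [33] described above can be sketched directly (to be maximized; bins are represented as lists of item weights, and s > 1 puts a premium on nearly full bins):

```python
def bpp_fitness(bins, capacity, s=2.0):
    # Fitness from [33]: average of (fill ratio)^s over used bins.
    # Rewards well-filled bins instead of merely counting bins, which
    # differentiates solutions that use the same number of bins.
    nbin = len(bins)
    return sum((sum(b) / capacity) ** s for b in bins) / nbin
```

For example, with capacity 10, the packing [[5, 5], [5, 5]] (two full bins) scores 1.0, while [[5, 5], [5], [5]] (same items, one more bin) scores only 0.5, so the search is pulled toward fuller bins.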

Discretization of the Search Space.

The basic version of the GOA was proposed for optimization problems in continuous search spaces. To apply such an algorithm to combinatorial search spaces, different techniques have been proposed, such as largest order values [34], ROV [35], smallest position values [36], and largest ranked values [37]. These techniques transform continuous solutions into permutations with different orders. In this study, the ROV [35] rule was adopted for experimentation. The ROV rule is a simple method based on random key representations for forming permutations, as shown in Table 10. We assign items to available bins according to the first-fit policy, in which each item is placed in the first bin that has the capacity to hold it.
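A minimal sketch of the ROV rule combined with the first-fit policy follows. Here the permutation is simply the item indices sorted by their continuous position values (smallest value gets rank 1); [35] gives the full random-key rule.

```python
def rov(position):
    # Ranked order value sketch: the smallest continuous value is ranked
    # first, the next smallest second, and so on, turning a continuous
    # search-agent position into a permutation of item indices.
    return sorted(range(len(position)), key=lambda i: position[i])

def first_fit(permutation, weights, capacity):
    # First-fit policy: each item goes into the first bin with room left;
    # a new bin is opened only when no existing bin fits the item.
    bins = []
    for idx in permutation:
        w = weights[idx]
        for b in bins:
            if sum(b) + w <= capacity:
                b.append(w)
                break
        else:
            bins.append([w])
    return bins
```

Decoding a continuous agent therefore takes two steps: `rov` maps the position vector to an item ordering, and `first_fit` packs the items in that order.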
In the EGOA, the current best number of bins is compared to the theoretical optimal solution. The optimal number of bins can be computed as the sum of all item sizes divided by the bin capacity C, rounded up to the nearest integer. If the current best number of bins is equal to this optimal number, the search stops.
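Assuming the optimum is the continuous lower bound just described, the stopping threshold is one line:

```python
import math

def theoretical_optimum(weights, capacity):
    # Continuous lower bound on the number of bins: total item weight
    # divided by the bin capacity, rounded up. Reaching this value
    # allows the search to terminate early.
    return math.ceil(sum(weights) / capacity)
```

For instance, items of weights 6, 5, and 4 with capacity 10 give a bound of 2 bins, which the first-fit packing above attains.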

Experimental Results.
In order to test the validity of the proposed algorithm, experiments are conducted on selected benchmarks from the "SchWae2" instances [38], in which the EGOA is compared to other metaheuristic algorithms. For the parameter settings, small numbers of search agents and iterations are selected for the EGOA and the other metaheuristic algorithms; specifically, 30 search agents are used. The weights of the items are generated using the BPPGEN method discussed in [38]. BPPGEN assumes that the weights are uniformly distributed in the interval [v1·C, v2·C] and determines weight values in the following two steps: (i) generation of a realization of a weight w_i distributed in (v1·C, v2·C + 1), w_i = (v1 + (v2 − v1) · rand_i(0, 1)) · C + rand_i(0, 1); (ii) rounding down to the nearest integer, where the weight of the item is the resulting integer (w_i = ⌊w_i⌋). In addition, the Friedman test, a nonparametric test, is applied to compare the average ranking values of these algorithms. The Friedman test [31] is used to analyze the experimental results and show the performance difference between the EGOA and the comparison algorithms. The obtained results for the SchWae2 instances are summarized in Table 11. The first column, labelled Number, represents the number of items to be placed into bins. The column labelled C represents the capacity of the bins. The column labelled v1v2 represents the lower and upper bounds of the weight distributions. The column labelled Optimal represents the theoretical optimal number of bins. The columns labelled GOA, EGOA, ALO, DA, PSO, and OBLGOA represent the results of the corresponding algorithms.
Different pairs of v1 and v2 determine the difficulty levels of the problems. A greater value of v2 results in more varied item weights. When v1 is equal to 0.1 and v2 is equal to 0.2, the item weights range between 100 and 200. Similarly, when v1 is equal to 0.1 and v2 is equal to 0.5, the item weights range between 100 and 500. The bin capacity is equal to 1000. The results of our experiments demonstrate that, in all cases, the EGOA matches the optimal value or comes closer to it than the other algorithms, which indicates its good search performance. In addition, Figure 3 presents the conclusion drawn from the Friedman test. It can be seen intuitively that the EGOA ranks first, indicating that the EGOA has excellent and stable performance on BPPs of different difficulty levels.
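The average ranks behind a Friedman comparison like Figure 3 can be computed as follows. This is a from-scratch sketch of the test statistic; `results[i][j]` holds the score of algorithm j on instance i (lower is better here), and the statistic is compared against a chi-squared distribution with k − 1 degrees of freedom in the full test.

```python
def friedman_stat(results):
    # Friedman test sketch: rank the k algorithms on each of the n
    # instances (average ranks over ties), then compute
    # chi2 = 12 / (n k (k+1)) * sum_j R_j^2 - 3 n (k+1),
    # where R_j is the rank sum of algorithm j.
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        ranks, i = [0.0] * k, 0
        while i < k:                      # average ranks over ties
            j = i
            while j < k and row[order[j]] == row[order[i]]:
                j += 1
            r = (i + 1 + j) / 2.0
            for t in range(i, j):
                ranks[order[t]] = r
            i = j
        for j2 in range(k):
            rank_sums[j2] += ranks[j2]
    avg_ranks = [rs / n for rs in rank_sums]
    chi2 = 12.0 / (n * k * (k + 1)) * sum(rs * rs for rs in rank_sums) - 3.0 * n * (k + 1)
    return chi2, avg_ranks
```

The `avg_ranks` vector is exactly the kind of per-algorithm average ranking plotted in Figure 3: the algorithm with the lowest average rank performs best overall.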

Conclusions
In this paper, a new variant of the GOA, called the EGOA, has been presented. First, a nonlinear convergence parameter, a niche mechanism, and the BHC technique are added to improve the performance of the GOA. Next, we evaluate the EGOA on the benchmark set proposed by the GOA authors and on the CEC2019 benchmark set, comparing its performance with the original GOA, DA, ALO, PSO, and OBLGOA. The results of the experiments and the Wilcoxon rank sum test indicate that the EGOA has excellent exploitation capability and competitive exploration capability. Finally, the EGOA is applied to the BPP. It is tested on the SchWae2 instances and compared to the GOA, DA, ALO, PSO, and OBLGOA algorithms. The experimental results and the Friedman test show the ability of the EGOA to efficiently find optimal solutions for problems of different sizes.
In the future, we wish to apply the EGOA to other variants of the BPP and to further engineering problems. Further studies could reveal additional methods for enhancing the EGOA to solve additional optimization problems.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.