A New Design of Metaheuristic Search Called Improved Monkey Algorithm Based on Random Perturbation for Optimization Problems

The aim of this paper is to present the design of a metaheuristic search called the improved monkey algorithm (MA+) that provides suitable solutions for optimization problems. The proposed algorithm is renewed by a new method that applies random perturbation (RP) to two control parameters (p1 and p2) to solve a wide variety of optimization problems. A novel RP is defined to improve the control parameters and is built into the proposed algorithm. The main advantage of the control parameters is that they largely prevent the proposed algorithm from getting stuck in local optima. Many optimization problems, even at the maximum allowable number of iterations, can sometimes lead to an inferior local optimum. However, the search strategy in the proposed algorithm has proven to reach a successful global optimal solution, to converge to the optimal solution, and to perform much better on many optimization problems in the lowest number of iterations compared with the original monkey algorithm. All details of the improved monkey algorithm are presented in this study. The performance of the proposed algorithm was first evaluated using 12 benchmark functions on different dimensions. These dimensions can be classified into three types: low-dimensional (30), medium-dimensional (60), and high-dimensional (90). In addition, the performance of the proposed algorithm was compared with that of several metaheuristic algorithms using these benchmark functions on these different dimensions. Experimental results show that the improved monkey algorithm is clearly superior to the original monkey algorithm, as well as to other well-known metaheuristic algorithms, in terms of obtaining the best optimal value and accelerating convergence.


Introduction
In this section, background on optimization in calculus, mathematical optimization, heuristics, and metaheuristic approaches is given. Research based on optimization [1][2][3] iteratively seeks a solution for problems whose analytical solutions have been analyzed. The design of an improved monkey algorithm for multivariate systems is then introduced.
Fermat and Lagrange were the first to suggest calculus-based formulas for determining optima. Newton and Gauss were the first to suggest iterative methods for finding the best solution. In calculus, optimization concerns a point on a function of one variable at which the function attains its maximum or minimum. Many optimization problems aim primarily to find the best solution within certain boundaries, that is, the best available values of an objective function in applied mathematics. Formally, "linear programming" was started by Kantorovich in 1939; it is also called linear optimization (LO).
LO is a technique for finding the best solution in a model whose elements are represented by linear relationships. Therefore, linear optimization is a special case of mathematical optimization. The first well-known approach was the simplex method: Dantzig developed the simplex method in 1947 for solving linear programming problems. Since then, many optimization methods and techniques have been developed.
The Karush-Kuhn-Tucker conditions are first-derivative tests for a solution in nonlinear programming (nonlinear optimization) to be optimal in mathematical optimization. Kuhn and Tucker published these first-derivative tests in 1951; Karush had already stated the necessary conditions for a constrained optimum in his Master's thesis in 1939. The Karush-Kuhn-Tucker conditions of nonlinear programming generalize the method of Lagrange multipliers, which allows only equality constraints. Mathematical programming is, briefly, the selection of a best element from some set of available alternatives. Optimization problems of various sorts arise in all quantitative disciplines, from computer science, engineering, and operations research to economics and industry. Many optimization methods and techniques have since been developed, as such problems have been of interest in mathematics for centuries. Thus, mathematical programming is a rising trend in many fields. Its kinds are, respectively, as follows: linear programming [11][12][13], nonlinear programming [14,15], objective programming [16,17], and dynamic programming [18,19]. With regard to nonlinear optimization, that is, problems having at least one nonlinear objective or constraint function, the known approaches have encountered many difficulties. Unfortunately, almost all tasks in engineering design are nonlinear.
Heuristic methods were first used in philosophy and mathematics for finding solutions to complex problems. Heuristics are problem-dependent methods; thus, they are usually adapted to a specific problem and try to make full use of its features. However, they are often too greedy, tend to fall into the local-optimum trap, and generally cannot obtain a global optimal solution. The study of heuristics in human decision-making was developed in the 1970s-1980s by Tversky and Kahneman. In the 1980s, metaheuristic approaches attracted the attention of engineers, who applied them to all kinds of optimization. Metaheuristics are high-level, problem-independent methods that provide a set of strategies for developing heuristic optimization algorithms. In general, they are not greedy; in fact, they can even accept a temporary deterioration of the solution, which allows them to explore the solution space more deeply and thus reach a better solution. One of the most well-known approaches is the genetic algorithm (GA); Holland studied the principle of "survival of the fittest" in the 1960s. Subsequently, Simulated Annealing (SA) was published in 1983 and used to solve optimization problems. SA is formulated with an objective function of many variables subject to several constraints; in practice, a constraint can be penalized as part of the objective function to obtain the best solution. Nowadays, there are many optimization algorithms designed to find the global optimal solutions to optimization problems. Among them, metaheuristic algorithms can be efficiently used to escape "local minima" and determine global solutions of optimization problems.
The set of metaheuristic algorithms includes ant colony optimization (ACO) [20,21], ant lion optimizer (ALO) [22], bat algorithm (BAT) [23], cuckoo search (CS) [24], elephant herding optimization (EHO) [25], particle swarm optimization (PSO) [26], krill herd (KH) [27], moth-flame optimization (MFO) [28], monarch butterfly optimization (MBO) [29,30], mussels wandering optimization (MWO) [31], moth search algorithm (MSA) [32], and whale optimization algorithm (WOA) [33], all aimed at finding good solutions to optimization problems. Even today, new metaheuristics continue to be invented. Other metaheuristic research has drawn on evolutionary theory, such as biogeography-based optimization (BBO) [34], differential evolution (DE) [35], evolution strategies (ES) [36], genetic algorithm (GA) [37,38], harmony search (HS) [39], gravitational search algorithm (GSA) [40], sine cosine algorithm (SCA) [41], dragonfly algorithm (DA), and hybrid ABC/DA (HAD) [42].
Furthermore, the improved monkey algorithm (MA+), which finds the best solution and solves optimization problems, is designed in this study. The proposed algorithm is a new metaheuristic search for the optimization of multivariate systems. The original monkey algorithm has shortcomings in its solution search area that may bring about premature convergence and low search accuracy when solving complex optimization problems of multivariate systems.
Then, considering that the monkey algorithm converges very slowly, a random perturbation method can be used to ensure the diversity of the monkey algorithm against premature convergence. The design of a random perturbation applied to two parameters in a convergence state helps the best monkey position jump out of possible local optima and further increases the performance of the proposed algorithm (MA+). Thus, the search strategy in the proposed algorithm has proven to reach a successful global optimal solution, to converge to the optimal solution, and to perform much better on many complex optimization problems in the lowest number of iterations.
This paper is organized as follows: Section 2 describes the proposed algorithm, and the design of a random perturbation applied to two parameters is explained clearly. Section 3 describes the experimental results and discussion; the information on the twelve benchmark functions is given, and the performance of the proposed algorithm is evaluated and compared with many comparative algorithms (metaheuristic and modified comparative algorithms) on functions of different dimensions. Finally, the conclusion is summarized in Section 4.

Related Work
The aim of this paper is to present the design of a new optimization method called the improved monkey algorithm (MA+) to find good solutions for the optimization of multivariate systems. The proposed algorithm (MA+) is a new metaheuristic search method for optimization problems inspired by the movement behavior of monkeys. The original monkey algorithm (MA) mainly consists of four processes, namely, the initialization process, climb process, watch-jump process, and somersault process. The monkey algorithm is improved by adding random perturbation (RP) to the original four processes. All processes of the proposed algorithm are shown in Figure 1.
Step A. Initialization Process. The proposed algorithm begins with the random generation of a position for each monkey. The population size (number of monkeys) is M. Hence, the position of the ith monkey is x_i = (x_i1, x_i2, ..., x_in), i = 1, 2, ..., M, generated as in the following equation:

x_ij = lb_j + rand(0, 1) · (ub_j − lb_j), j = 1, 2, ..., n,

where lb and ub are the lower and upper boundaries of the searching area. Each monkey's position is evaluated with the objective function and is required to lie inside the searching area; that is, all solutions (monkey positions) must be between the lower boundary (lb) and the upper boundary (ub).
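As an illustration, the initialization step can be sketched in Python (the paper's implementation is in MATLAB; the function and variable names below are illustrative, not the author's):

```python
import numpy as np

def initialize(M, D, lb, ub, rng):
    """Randomly place M monkeys inside the searching area [lb, ub]^D."""
    # x_ij = lb + rand(0, 1) * (ub - lb), drawn independently per coordinate
    return lb + (ub - lb) * rng.random((M, D))
```

Every generated position is feasible by construction, so no boundary repair is needed at this stage.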
Step B. Climb Process. The climb process changes the monkeys' positions step by step from their initial positions to new ones that yield an improvement in the objective function.

The step-length parameter (a) in the design of the climb process governs the movement of the monkeys' positions. The total number of monkeys is M, and the position of the ith monkey is x_i = (x_i1, x_i2, ..., x_in), i = 1, 2, ..., M. The step used to update a monkey's position is given by the following equation:

Δx_ij = a with probability 1/2, and Δx_ij = −a with probability 1/2,

where Δx_ij updates the position of the monkeys (j = 1, 2, ..., n), and a (a positive number) is the step length in the climb process, with a = 10^−3. Over the climb number of iterations (N_c), each ith monkey position evaluates an improvement in the objective function through

f'_ij(x_i) = [f(x_i + Δx_i) − f(x_i − Δx_i)] / (2Δx_ij),

which is called the pseudogradient of the objective function. The step length in the climb process plays a crucial role in the precision of the approximation of the local solution, so the climb process maintains feasibility: y denotes a feasible position for each monkey, and x_i is updated to y whenever y is a feasible improvement; otherwise, x_i does not change.
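A minimal Python sketch of the climb process for a minimization problem, assuming the pseudogradient form above (the paper's code is in MATLAB; names and the acceptance rule shown here are illustrative):

```python
import numpy as np

def climb(x, f, a=1e-3, Nc=50, rng=None):
    """Climb process: move x along the sign of the pseudogradient of f."""
    rng = rng if rng is not None else np.random.default_rng()
    for _ in range(Nc):
        # Delta x_j = +a or -a, each with probability 1/2
        dx = a * rng.choice([-1.0, 1.0], size=x.shape)
        # pseudogradient f'_j(x) = [f(x + dx) - f(x - dx)] / (2 dx_j)
        g = (f(x + dx) - f(x - dx)) / (2.0 * dx)
        y = x - a * np.sign(g)  # step downhill when minimizing
        if f(y) < f(x):         # accept y only if it improves the objective
            x = y
    return x
```

Because a candidate is accepted only when it improves the objective, the climb process can never worsen a monkey's position.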
Step C. Watch-Jump Process. This process checks each monkey's position after the climb process; in other words, it checks whether the position has reached the top or not. Each monkey looks around to see whether there is a position higher than its current one; if so, it jumps from its current position. Otherwise, its position has not reached the top. Naturally, this applies to the monkeys with the best positions (close to or at the top). Each monkey can watch up to a maximal distance from its current position; this maximal distance in the watch-jump process is the parameter b, the eyesight. The process generates a candidate point as follows:

y_j ∼ U(x_ij − b, x_ij + b), j = 1, 2, ..., n, (4)

repeating equation (4) until an appropriate point (y) is reached. Then the climb process is repeated by employing y. Thus, each monkey exploits the maximal distance from its current position.
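The watch-jump step can be sketched as follows for a minimization problem (a Python illustration, not the author's MATLAB code; the retry bound `max_tries` is an assumption):

```python
import numpy as np

def watch_jump(x, f, b=0.5, max_tries=20, rng=None):
    """Watch-jump: look within eyesight b for a better point and jump to it."""
    rng = rng if rng is not None else np.random.default_rng()
    for _ in range(max_tries):
        # y_j drawn uniformly from (x_j - b, x_j + b), per equation (4)
        y = rng.uniform(x - b, x + b)
        if f(y) < f(x):  # a "higher" (better) position within sight
            return y
    return x  # nothing better seen within eyesight: stay put
```

In the full algorithm, the climb process is then re-run starting from the returned point.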
Step D. Somersault Process. This process enables finding new positions (searching domains) relative to the barycentre of all monkeys' current positions, defined as the pivot. Monkeys somersault along the direction pointing to the pivot. All somersault factors lie in the somersault interval [c, d], set to [−1, 1]; the feasible search space of the monkeys grows with increasing values of |c| and d, respectively. In this process, a real number s is generated randomly in the somersault interval [−1, 1], and the new position is expressed as follows:

p_j = (1/M) Σ_{i=1}^{M} x_ij, (5)

y_j = x_ij + s(p_j − x_ij), (6)

where p is the somersault pivot and j = 1, 2, ..., n; then x_i = y with y = (y_1, y_2, ..., y_n) if y is a feasible solution. Otherwise, this process repeats equations (5) and (6) until a feasible solution (y) is found.
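Equations (5) and (6) can be sketched for the whole population at once (a Python illustration under the assumption that each monkey draws its own somersault factor; feasibility repair is omitted):

```python
import numpy as np

def somersault(X, c=-1.0, d=1.0, rng=None):
    """Somersault all monkeys toward the barycentre (pivot) of the population."""
    rng = rng if rng is not None else np.random.default_rng()
    p = X.mean(axis=0)                           # pivot p_j = (1/M) sum_i x_ij
    s = rng.uniform(c, d, size=(X.shape[0], 1))  # somersault factor in [c, d]
    return X + s * (p - X)                       # y_ij = x_ij + s_i (p_j - x_ij)
```

Note that if the population has collapsed onto a single point, the pivot coincides with every position and the somersault leaves the population unchanged, which is one reason the RP step below is useful.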
Step E. Random Perturbation (RP) Process. This process controls each monkey's current position, which can become stuck at some local optimum. After the somersault process, a novel random perturbation (RP) process is built into the proposed algorithm through two control parameters, p1 = 0.5 and p2 = 0.2, respectively. The parameter p1 improves a monkey's current position if it is stuck at a local minimum: when the same or a worse value is found in the search space consecutively, a tolerance number (tolX) of iterations is allowed for improving the monkey's position (one coordinate at a time). The parameter p2 perturbs the position along the direction pointing to the current position with a different perturbation, in order to escape a possible local minimum or to search for other (and better) minima. The pseudocode of the proposed algorithm is shown in Algorithm 1. The design of the proposed algorithm for solving global numerical optimization problems begins with the population size (M), boundaries (lb, ub), eyesight (b), climb number (N_c), somersault interval [−1, 1], and control parameters (p1, p2). All input parameters are designed for the proposed algorithm. In addition, a random dimension is calculated as ceil(rand × D), and a random scalar drawn from the standard normal distribution is multiplied by the parameter p1. Thus, this perturbation prevents the proposed algorithm from getting stuck in local optima while controlling the monkeys' positions. The second perturbation multiplies uniformly distributed random numbers (rand[1 × D]) by the parameter p2; this vector is combined element-wise with the position vector x. Thus, the monkeys' positions are improved along the pointed direction with a different perturbation, escaping some possible local minimum or searching for other (and better) minima. The special steps of the improved control parameters with RP are described in Algorithm 1.
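A sketch of the RP step consistent with the description above (the exact triggering logic of Algorithm 1 is not fully reproduced here; the `stall` counter, the use of `tolX` as its threshold, and the clipping to [lb, ub] are assumptions of this illustration):

```python
import numpy as np

def random_perturbation(x, stall, tolX, lb, ub, p1=0.5, p2=0.2, rng=None):
    """RP step driven by the two control parameters p1 and p2."""
    rng = rng if rng is not None else np.random.default_rng()
    y = x.copy()
    if stall >= tolX:                 # no improvement for tolX iterations
        j = rng.integers(x.size)      # random dimension, as ceil(rand * D)
        y[j] += p1 * rng.standard_normal()          # normal kick scaled by p1
    y += p2 * rng.uniform(-1.0, 1.0, size=x.size)   # uniform kick scaled by p2
    return np.clip(y, lb, ub)         # keep the position inside the boundaries
```

The single-coordinate normal kick unsticks a stalled monkey, while the small uniform kick applied to every coordinate diversifies the search around the current position.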

Benchmark Functions.
The improved monkey algorithm (MA+) was implemented in Matlab (2017). The computer used has the following specification: (1) CPU: Intel i5-6200U. The performance of the improved monkey algorithm and that of the comparative (metaheuristic) algorithms are evaluated against 12 benchmark functions in the next section.

The Performance of the Improved Monkey Algorithm on Different Dimensional Functions.
The performance of the improved monkey algorithm and that of the original monkey algorithm (MA) were evaluated against 12 benchmark functions on different dimensions. Table 2 shows the results on the 12 benchmark optimization functions (F1-F12): the improved monkey algorithm (MA+) achieves the best optimization results in terms of the best, mean, and standard deviation values. Therefore, the experimental results on the benchmark functions show that the performance of the improved monkey algorithm is much better than that of the original monkey algorithm (MA) on functions of all dimensions.
The experimental results are demonstrated more intuitively by the convergence plots and global search ability of the two algorithms (MA and MA+); the convergence plots of both algorithms on the 30-dimensional functions are shown in Figures 2(a)-2(l) for all benchmark functions. Table 2 shows the best mean optimization results for all functions (F1-F12) on 30 dimensions, where the proposed improved algorithm obtained values of 3.49E−54 and 1.15E−37, respectively. Additionally, Figures 2(a) to 2(l) reveal that the original monkey algorithm performs much more poorly on the 30-dimensional functions, while the proposed improved algorithm still shows distinguished searching ability, a global optimal solution, and fast convergence on all functions.
Table 1: Benchmark functions (name of function, equation, and range), beginning with the Sphere function.

Comparison of MA+ with Metaheuristic Algorithms on Different Dimensions.
The improved monkey algorithm (MA+) was compared with many metaheuristic optimization algorithms against 12 benchmark optimization functions on different dimensions. The information on the benchmark functions is listed in Table 1. All algorithms use the same information, the same initial parameters, the same dimensions, and the same number of iterations, and their maximum numbers of function evaluations are equal; the results are reported in Tables 3-6. Tables 3 to 5 show the comparative experimental results for all functions (F1-F12) on 30, 60, and 90 dimensions; in these tables, each algorithm is run 100 times independently for all 12 benchmark optimization functions on all of these dimensions. Table 6 shows the comparative experimental results for some functions (F1-F5) on 20, 50, and 100 dimensions; there, each algorithm is run 30 times independently for each of the 5 benchmark optimization functions on all of these dimensions.
The initial parameters and conditions were identical for all algorithms: the population size is set to 50, and each algorithm runs until it reaches the maximum number of iterations (50).
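The reporting protocol (independent runs, then best, mean, and standard deviation, as in Tables 3-6) can be sketched as follows; `algorithm` here stands for any optimizer that returns a final objective value, and the helper name is illustrative:

```python
import numpy as np

def evaluate(algorithm, f, runs=100):
    """Run an optimizer `runs` times on f and report best, mean, and std."""
    results = np.array([algorithm(f) for _ in range(runs)])
    return results.min(), results.mean(), results.std()
```

This is how the per-function statistics behind each table cell can be reproduced for any of the compared algorithms.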
In the first stage, the performance of the proposed algorithm is compared with the performances of a selected collection of comparative algorithms. The included algorithms are ABC, DA, and HAD. The best mean and best standard deviation of the experimental results are shown in Table 3 for each function. Table 3 shows that the proposed algorithm has an outstanding performance in the majority of the evaluation cases for benchmark functions F1-F7 and F9-F12, respectively. However, the HAD algorithm matches the proposed algorithm on all dimensions only for the Dixon-Price function (F8), and the best standard deviation on the Dixon-Price function was obtained by the HAD algorithm. All details are shown in Table 3.
In the second stage, the performance of the proposed algorithm is compared with those of several metaheuristic optimization algorithms, namely, ACO, BAT, BBO, DE, GA, and PSO. The best mean of the experimental results is listed in Table 4 for each function.

Scientific Programming
In the third stage, the performance of the proposed algorithm is compared with those of other metaheuristic optimization algorithms, namely, EHO, KH, MFO, MSA, SCA, and WOA. The best mean of the experimental results is listed in Table 5 for each function.
Finally, the performance of the proposed algorithm was also compared with the performances of three algorithms, namely, the monarch butterfly optimization (MBO) algorithm, MBO with greedy strategy and self-adaptive crossover operator (GCMBO), and MBO with opposition-based learning and random local perturbation (OPMBO) using five benchmark functions, and all details are listed in Table 6.
To sum up, the comparative experimental results showed that the proposed algorithm reaches much better solutions and exhibits the best convergence performance in escaping local optima when compared with ACO, BAT, BBO, DE, GA, PSO, EHO, KH, MFO, MSA, SCA, WOA, MBO, GCMBO, and OPMBO. All of these comparative results showed an outstanding performance of the proposed algorithm in the majority of the evaluation cases. All details are listed in Tables 4-6.

Conclusions
This paper presented a novel metaheuristic search, a cognitively inspired algorithm based on the monkey algorithm. The proposed algorithm can be widely employed for solving various kinds of optimization problems and was evaluated extensively against 12 benchmark optimization functions on different dimensions for each function. A new random perturbation was defined to improve the control parameters and was built into the proposed algorithm. The main advantage of the control parameters was that they efficiently prevented the improved monkey algorithm from getting stuck in local optima and enabled it to find the global optimal solution for 8 benchmark functions. The function in Figure 2(h) caused the algorithm to get stuck and yielded poor performance only at the lowest numbers of iterations, but the proposed algorithm obtained the best solution for this function at the maximum allowable number of iterations. Briefly, the search strategy in the proposed algorithm has proven to reach a successful global optimal solution, to converge to the optimal solution, and to perform much better on many optimization problems in the lowest number of iterations compared with the original monkey algorithm. The performance of the improved monkey algorithm was compared with many metaheuristic optimization algorithms, a collection of 18 optimizers in total. The comparative results included simple statistics for the best and mean values as well as convergence plots. All of these comparative results showed that the proposed algorithm had an outstanding performance in the majority of the evaluation cases.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The author declares that he has no conflicts of interest.