An Elite Decision Making Harmony Search Algorithm for Optimization Problem

This paper describes a new variant of the harmony search algorithm inspired by the well-known concept of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is exploited to generate new solutions according to a probability rule. The generated solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on standard benchmark optimization problems from the literature, including minimization problems with continuous design variables and with integer variables. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants.


Introduction
In 2001, Geem et al. [1] proposed a new metaheuristic algorithm, the harmony search (HS) algorithm, which imitates the process of music improvisation. In that algorithm, a harmony in music is analogous to a solution vector of the optimization problem, and the musicians' improvisations are analogous to local and global search schemes in optimization techniques. The HS algorithm does not require initial values for the decision variables. Furthermore, instead of a gradient search, the HS algorithm uses a stochastic random search based on the harmony memory considering rate and the pitch adjusting rate, so that derivative information is unnecessary. These features increase the flexibility of the HS algorithm and have led to its application to optimization problems in different areas, including music composition [2], Sudoku puzzle solving [3], structural design [4, 5], and ecological conservation [6]. The general optimization problem considered in this paper can be stated as

minimize f(x), subject to x_i ∈ X_i, i = 1, 2, ..., N,

where f(x) is an objective function, x is the set of decision variables x_i, N is the number of decision variables, and X_i is the set of possible values for each decision variable, that is, x_i^L ≤ x_i ≤ x_i^U, where x_i^L and x_i^U are the lower and upper bounds of each decision variable, respectively.

The General HS Algorithm
The general HS algorithm requires the following parameters: HMS, the harmony memory size; HMCR, the harmony memory considering rate; PAR, the pitch adjusting rate; and bw, the bandwidth vector.
Remarks. HMCR, PAR, and bw are very important factors for the high efficiency of the HS methods and can be useful in adjusting the convergence rate of the algorithm toward the optimal solution. These parameters are introduced to allow the solution to escape from local optima and to improve the global optimum prediction of the HS algorithm.
The procedure for a harmony search consists of Steps 1-4.
Step 1. Create and randomly initialize an HM of size HMS. The HM matrix is initially filled with as many solution vectors as the HMS. Each component of a solution vector is generated using a uniformly distributed random number between the lower and upper bounds [x_i^L, x_i^U] of the corresponding decision variable, where i ∈ {1, ..., N}. The HM of size HMS can thus be represented by a matrix whose jth row is the jth stored solution vector x^j = (x_1^j, x_2^j, ..., x_N^j), j = 1, ..., HMS.

Step 2. Improvise a new harmony from the HM or from the entire possible range. After defining the HM, a new harmony vector x' = (x'_1, x'_2, ..., x'_N) is improvised. Each component of the new harmony vector is generated according to

x'_i ∈ {x_i^1, x_i^2, ..., x_i^HMS} with probability HMCR,
x'_i ∈ X_i with probability 1 − HMCR,

where HMCR is the probability of selecting a component from the HM members, and 1 − HMCR is, therefore, the probability of generating a component randomly from the possible range of values. Every x'_i obtained from the HM is then examined to determine whether it should be pitch adjusted. This operation uses the PAR parameter, the rate of pitch adjustment, as follows:

x'_i ← x'_i ± rand(0, 1) × bw with probability PAR,
x'_i ← x'_i with probability 1 − PAR, (2.4)

where rand(0, 1) is a randomly generated number between 0 and 1.
Step 3. Update the HM.If the new harmony is better than the worst harmony in the HM, include the new harmony into the HM and exclude the worst harmony from the HM.
Step 4. Repeat Steps 2 and 3 until the maximum number of searches is reached. Numerical results reveal that the HS algorithm with variable parameters can find better solutions than HS and other heuristic or deterministic methods and is a powerful search algorithm for various engineering optimization problems; see [11].
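Steps 1-4 can be sketched in Python as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: PAR and bw are held fixed here, whereas the IHS-style variants used later vary them over the iterations.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   max_iters=20000, seed=0):
    """Basic HS for minimization. bounds is a list of (low, high)
    pairs, one per decision variable."""
    rng = random.Random(seed)
    # Step 1: fill the harmony memory (HM) with random solution vectors.
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fitness = [f(x) for x in hm]
    for _ in range(max_iters):
        # Step 2: improvise a new harmony component by component.
        new = []
        for i, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # memory consideration
                xi = hm[rng.randrange(hms)][i]
                if rng.random() < par:           # pitch adjustment (2.4)
                    xi += rng.uniform(-1.0, 1.0) * bw
                    xi = min(max(xi, lo), hi)
            else:                                # random selection
                xi = rng.uniform(lo, hi)
            new.append(xi)
        # Step 3: replace the worst harmony if the new one is better.
        worst = max(range(hms), key=fitness.__getitem__)
        fx = f(new)
        if fx < fitness[worst]:
            hm[worst], fitness[worst] = new, fx
    # Step 4 is the loop bound above; return the best harmony found.
    best = min(range(hms), key=fitness.__getitem__)
    return hm[best], fitness[best]
```

For example, applying it to the two-dimensional sphere function, `harmony_search(lambda x: x[0]**2 + x[1]**2, [(-10, 10)] * 2)`, drives the objective value close to zero.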

Global Best Harmony Search (GHS) Algorithm
In 2008, Omran and Mahdavi [14] presented the GHS algorithm by modifying the pitch adjustment rule. Unlike the basic HS algorithm, the GHS algorithm generates a new harmony vector x' by making use of the best harmony vector x^best = (x^best_1, x^best_2, ..., x^best_n) in the HM. The pitch adjustment rule is given as

x'_i = x^best_k,

where k is a random integer between 1 and n. The performance of the GHS was investigated and compared with HS. The experiments conducted show that the GHS generally outperformed the other approaches when applied to ten benchmark problems.
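The GHS improvisation step can be sketched as follows (a simplified illustration assuming minimization; the variable names are illustrative, not from the original paper):

```python
import random

def ghs_improvise(hm, fitness, bounds, hmcr=0.9, par=0.3, rng=random):
    """GHS sketch: with probability PAR, a memory-considered component is
    replaced by component k of the best harmony, for a random dimension k."""
    best = hm[min(range(len(hm)), key=fitness.__getitem__)]
    n = len(bounds)
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:                      # memory consideration
            xi = hm[rng.randrange(len(hm))][i]
            if rng.random() < par:                   # GHS pitch adjustment
                xi = best[rng.randrange(n)]          # x'_i = x^best_k
            new.append(xi)
        else:
            new.append(rng.uniform(lo, hi))          # random selection
    return new
```

Note that, unlike the basic HS rule (2.4), no bandwidth bw is needed: the adjusted component is copied directly from the best harmony.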

A Self-Adaptive Global Best HS (SGHS) Algorithm
In 2010, Pan et al. [16] presented the SGHS algorithm for solving continuous optimization problems. In that algorithm, a new improvisation scheme is developed so that the good information captured in the current global best solution can be well utilized to generate new harmonies. The pitch adjustment rule is given as

x'_j = x^best_j, (2.9)

where j = 1, ..., n. Numerical experiments based on benchmark problems showed that the proposed SGHS algorithm was more effective in finding better solutions than the existing HS, IHS, and GHS algorithms.

An Elite Decision Making HS Algorithm
The key difference between the proposed EDMHS algorithm and the IHS, GHS, and SGHS algorithms lies in the way the new harmony is improvised.

EDMHS Algorithm for Continuous Design Variables Problems
The EDMHS has exactly the same steps as the IHS, with the exception that Step 3 is modified as follows.
In this step, a new harmony vector x' = (x'_1, x'_2, ..., x'_N)^T is generated according to

x'_i ∈ [HM(s, i), HM(b, i)] with probability HMCR,
x'_i ∈ X_i with probability 1 − HMCR, (3.1)

where HM(s, i) and HM(b, i) are the ith elements of the second-best harmony and the best harmony in the HM, respectively.
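The elite improvisation step can be sketched as below. This is a hedged reading of rule (3.1), assuming the component is drawn uniformly from the interval spanned by the second-best and best harmonies; the helper name is illustrative.

```python
import random

def edmhs_improvise(hm, fitness, bounds, hmcr=0.9, rng=random):
    """EDMHS sketch (continuous case): each component is drawn from
    between the second-best and best harmonies with probability HMCR,
    and from the full range otherwise."""
    order = sorted(range(len(hm)), key=fitness.__getitem__)   # minimization
    best, second = hm[order[0]], hm[order[1]]
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:
            a, b = sorted((second[i], best[i]))
            new.append(rng.uniform(a, b))    # elite interval [HM(s,i), HM(b,i)]
        else:
            new.append(rng.uniform(lo, hi))  # random from the full range X_i
    return new
```

The design intent is that every memory-considered component interpolates between the two elite solutions, concentrating the search in the most promising region of the HM.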

EDMHS Algorithm for Integer Variables Problems
Many real-world applications require the variables to be integers. Methods developed for continuous variables can be used to solve such problems by rounding off the real optimum values to the nearest integers [14, 21]. However, in many cases, the rounding-off approach may result in an infeasible solution or a poor suboptimal solution and may omit alternative solutions.
In the EDMHS algorithm for integer programming, we generate integer solution vectors in both the initialization step and the improvisation step; that is, each component of the new harmony vector is generated according to

x'_i ∈ round([HM(s, i), HM(b, i)]) with probability HMCR,

where round(*) means rounding off * to the nearest integer. The pitch adjustment is operated in the same manner, with the adjusted value rounded to the nearest integer.
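The integer variant can be sketched as follows, under the same assumption as the continuous case (a uniform draw between the two elite components, then rounded); the helper name is illustrative.

```python
import random

def edmhs_improvise_int(hm, fitness, bounds, hmcr=0.9, rng=random):
    """EDMHS sketch (integer case): draw between the second-best and best
    harmonies and round, so the new harmony stays integer-valued without
    a separate repair step."""
    order = sorted(range(len(hm)), key=fitness.__getitem__)   # minimization
    best, second = hm[order[0]], hm[order[1]]
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:
            a, b = sorted((second[i], best[i]))
            new.append(round(rng.uniform(a, b)))   # round([HM(s,i), HM(b,i)])
        else:
            new.append(rng.randint(lo, hi))        # random integer from range
    return new
```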

Numerical Examples
This section examines the performance of the EDMHS algorithm on examples with continuous and integer variables. Several examples taken from the optimization literature are used to show the validity and effectiveness of the proposed algorithm. The parameters for all the algorithms are given as follows: HMS = 20, HMCR = 0.90, PAR_min = 0.4, PAR_max = 0.9, bw_min = 0.0001, and bw_max = 1.0. During the run of the algorithm, PAR and bw are generated according to (2.5) and (2.6), respectively.

Some Simple Continuous Variables Examples
For the following five examples, we adopt the same variable ranges as presented in [4]. Each problem is run for 5 independent replications, and the mean fitness of the solutions for the four HS algorithm variants, IHS, GHS, SGHS, and EDMHS, is presented in the tables.

Rosenbrock Function
Consider the following:

f(x) = 100(x_2 − x_1^2)^2 + (1 − x_1)^2. (4.1)

Because of the long, narrow, curved valley present in the function, the Rosenbrock function [4, 22] is probably the best-known test case. The minimum of the function is located at x* = (1.0, 1.0), with a corresponding objective function value of f(x*) = 0.0. The four algorithms were applied to the Rosenbrock function using bounds between −10.0 and 10.0 for the two design variables x_1 and x_2. After 50,000 searches, we arrived at Table 1.

Goldstein and Price Function I
Consider the following:

f(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1 x_2 + 3x_2^2)]
       × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1 x_2 + 27x_2^2)]. (4.2)

The Goldstein and Price function I [4, 13, 23] is an eighth-order polynomial in two variables. The function has four local minima, one of which is global: f(1.2, 0.8) = 840.0, f(1.8, 0.2) = 84.0, f(−0.6, −0.4) = 30.0, and f(0.0, −1.0) = 3.0 (the global minimum). In this example, the bounds for the two design variables x_1 and x_2 were set between −5.0 and 5.0. After 8000 searches, we arrived at Table 2.

Eason and Fenton's Gear Train Inertia Function
Consider the following:

f(x) = (1/10)[12 + x_1^2 + (1 + x_2^2)/x_1^2 + (x_1^2 x_2^2 + 100)/(x_1 x_2)^4]. (4.3)

This function [4, 24] consists of a minimization problem for the inertia of a gear train. The minimum of the function is located at x* = (1.7435, 2.0297), with a corresponding objective function value of f(x*) = 1.744152006740573. The four algorithms were applied to the gear train inertia function using bounds between 0.0 and 10.0 for the two design variables x_1 and x_2. After 800 searches, we arrived at Table 3.

Wood Function
Consider the following:

f(x) = 100(x_2 − x_1^2)^2 + (1 − x_1)^2 + 90(x_4 − x_3^2)^2 + (1 − x_3)^2
       + 10.1[(x_2 − 1)^2 + (x_4 − 1)^2] + 19.8(x_2 − 1)(x_4 − 1). (4.4)

The Wood function [4, 25] is a fourth-degree polynomial that is a particularly good test of convergence criteria and simulates a feature of many physical problems quite well. The minimum of the function is obtained at x* = (1, 1, 1, 1)^T, and the corresponding objective function value is f(x*) = 0.0. When applying the four algorithms to the function, the four design variables x_1, x_2, x_3, x_4 were initially structured with random values bounded between −5.0 and 5.0. After 70,000 searches, we arrived at Table 4.

Powell Quartic Function
Consider the following:

f(x) = (x_1 + 10x_2)^2 + 5(x_3 − x_4)^2 + (x_2 − 2x_3)^4 + 10(x_1 − x_4)^4. (4.5)

Because the second derivative of the Powell quartic function [4, 26] becomes singular at the minimum point, it is quite difficult to obtain the minimum solution, f(0, 0, 0, 0) = 0.0, using gradient-based algorithms. When applying the EDMHS algorithm to the function, the four design variables x_1, x_2, x_3, x_4 were initially structured with random values bounded between −5.0 and 5.0. After 50,000 searches, we arrived at Table 5.
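For reference, the five benchmark functions of this subsection, in their standard forms from the optimization literature, can be coded directly; the evaluations at the stated optima serve as a sanity check.

```python
def rosenbrock(x):
    # Rosenbrock function; global minimum f(1, 1) = 0.
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def goldstein_price(x):
    # Goldstein-Price function I; global minimum f(0, -1) = 3.
    x1, x2 = x
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2
                                - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2
                                 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b

def gear_train_inertia(x):
    # Eason and Fenton's gear train inertia function;
    # minimum near (1.7435, 2.0297) with value about 1.744152.
    x1, x2 = x
    return 0.1 * (12 + x1**2 + (1 + x2**2) / x1**2
                  + (x1**2 * x2**2 + 100) / (x1 * x2)**4)

def wood(x):
    # Wood function; global minimum f(1, 1, 1, 1) = 0.
    x1, x2, x3, x4 = x
    return (100*(x2 - x1**2)**2 + (1 - x1)**2 + 90*(x4 - x3**2)**2
            + (1 - x3)**2 + 10.1*((x2 - 1)**2 + (x4 - 1)**2)
            + 19.8*(x2 - 1)*(x4 - 1))

def powell_quartic(x):
    # Powell quartic function; global minimum f(0, 0, 0, 0) = 0,
    # with a singular Hessian at the minimum.
    x1, x2, x3, x4 = x
    return ((x1 + 10*x2)**2 + 5*(x3 - x4)**2
            + (x2 - 2*x3)**4 + 10*(x1 - x4)**4)
```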
It can be seen from Tables 1-5 that, compared with the IHS, GHS, and SGHS algorithms, the EDMHS produces much better results on the five test functions. Figures 1-5 present a typical solution history along the iterations for the five functions, respectively. It can be observed that the evolution curves of the EDMHS algorithm reach a lower level than those of the other compared algorithms. Thus, it can be concluded that, overall, the EDMHS algorithm outperforms the other methods on the above examples.

More Benchmark Problems with 30 Dimensions
To test the performance of the proposed EDMHS algorithm more extensively, we evaluate and compare the IHS, GHS, SGHS, and EDMHS algorithms on the following 6 benchmark optimization problems from CEC2005 [27] with 30 dimensions.
The parameters for the IHS algorithm are HMS = 5, HMCR = 0.9, bw_max = (x_j^U − x_j^L)/20, bw_min = 0.0001, PAR_min = 0.01, and PAR_max = 0.99; for the GHS algorithm, HMS = 5, HMCR = 0.9, PAR_min = 0.01, and PAR_max = 0.99.
Table 6 presents the average error (AE) values and standard deviations (SD) over these 30 runs of the compared HS algorithms on the 6 test functions with dimension equal to 30.

Integer Variables Examples
Six commonly used integer programming benchmark problems are chosen to investigate the performance of the EDMHS integer algorithm. For all the examples, the design variables x_i, i = 1, ..., N, are initially structured with random integer values bounded between −100 and 100. Each problem is run for 5 independent replications, each with approximately 800 searches, and in all runs the optimal solution vectors are obtained.

Test Problem 1
Consider the following:

Test Problem 3
Consider the following:

Test Problem 5
Consider the following:

Conclusion
This paper presented an EDMHS algorithm for solving continuous and integer optimization problems. The proposed EDMHS algorithm applies a newly designed scheme to generate candidate solutions so as to benefit from the good information inherent in the best and the second-best solutions in the historical HM. Further work is still needed to investigate the effect of EDMHS and to adopt this strategy for solving real-world optimization problems.

Table 1 :
Four HS algorithms for Rosenbrock function.

Table 2 :
Four HS algorithms for Goldstein and Price function I.

Table 3 :
Four HS algorithms for Eason and Fenton's gear train inertia function.

Table 4 :
Four HS algorithms for Wood function.

Table 5 :
Four HS algorithms for Powell quartic function.

Table 6 :
AE and SD generated by the compared algorithms.