An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. Experimental results on 10 benchmark test functions with dimension N = 30 show that IRPEO is competitive with or even better than various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms, such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO, is also demonstrated by experimental results on some benchmark functions.


Introduction
It has been widely recognized that a variety of real-world complex engineering optimization problems can be formulated as continuous unconstrained optimization problems [1]. On the other hand, the benchmark functions of unconstrained optimization problems have often been used to evaluate the performance of various evolutionary optimization algorithms [2], for example, the genetic algorithm (GA) and its modified versions. This paper focuses on another novel evolutionary algorithm, called extremal optimization (EO), for continuous unconstrained optimization problems.
Originally inspired by the far-from-equilibrium dynamics of self-organized criticality (SOC) [3,4], EO provides a novel insight into the optimization domain because it merely selects against the bad elements, randomly or according to a power-law distribution, instead of favoring the good ones [5,6]. From the perspective of evolutionary computation, EO is much simpler than other popular evolutionary algorithms, such as GA, because it has only selection and mutation operations and fewer adjustable parameters [7,8]. As a consequence, the basic EO algorithm and its modified versions have been successfully applied to a variety of benchmark and real-world engineering optimization problems, such as graph partitioning [9], graph coloring [10], the travelling salesman problem [11,12], the maximum satisfiability (MAX-SAT) problem [13,14], and steel production scheduling [15]. For a more comprehensive introduction to EO, the reader is referred to the surveys [16,17].
However, applications of EO to continuous optimization problems are relatively rare [18][19][20][21][22][23]. Sousa and Ramos [19] presented generalized EO (GEO) for continuous optimization problems, where each variable is encoded as a binary string. Furthermore, the GEO has been successfully

Extremal Optimization
In general, the τ-EO [5,6] algorithm and its modified versions consist of the following basic operations: initialization of a random solution, evaluation of global fitness and local fitness, selection of some bad variables based on a power-law probability distribution, mutation of the selected variables to generate a new solution, and updating of the solution by accepting the new solution unconditionally [7]. The flowchart of τ-EO for a minimization problem is presented in Figure 1.
Remark 1. According to the seminal work [5,6], the global fitness C(S) of a solution S for an optimization problem with N optimized variables should be decomposed into N equivalent degrees of freedom, that is, the local fitnesses λ_i, i = 1, 2, …, N. Furthermore, Liu et al. [12] gave consistency and equivalence conditions between global fitness and local fitness.
Remark 2. The power-law-based probability selection [25] is described as follows:

P(k) ∝ k^(−τ), 1 ≤ k ≤ N, (2)

where P(k) is the probability of the k-th ranked variable (or element) being selected from the N variables (or elements) for mutation and τ is a positive parameter controlling the power-law probability.
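Taking the normalized form P(k) = k^(−τ) / Σ_{j=1}^{N} j^(−τ) (an assumption consistent with (2)), the rank-based power-law selection can be sketched in Python as follows; the function name is illustrative:

```python
import random

def power_law_select(n, tau):
    """Return a rank k in 1..n with probability proportional to k**(-tau).

    Rank 1 is the worst-ranked element, so bad elements are selected
    most often, while every rank keeps a nonzero chance of selection.
    """
    weights = [k ** (-tau) for k in range(1, n + 1)]
    total = sum(weights)
    r = random.random() * total
    cumulative = 0.0
    for k, w in enumerate(weights, start=1):
        cumulative += w
        if r <= cumulative:
            return k
    return n  # guard against floating-point rounding
```

For example, with n = 50 and τ = 1.05, rank 1 (the worst element) is selected far more often than rank 50, yet no rank is ever excluded, which preserves the fluctuation mechanism characteristic of EO.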

The Proposed Algorithm for Continuous Unconstrained Optimization Problems
In this section, we propose an improved real-coded population-based EO (IRPEO) algorithm for continuous unconstrained optimization problems. The basic idea behind IRPEO is population-based iterated optimization consisting of the following operations: generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. The proposed algorithm is described in the following steps.
Input. A continuous unconstrained optimization problem and the control parameters of IRPEO, including the parameter τ of the power-law probability distribution, the population size SP, and the maximum number of iterations I_max.
Output. The best solution S_best and the corresponding fitness f_best.
Step 1. Generate an initial population P_I = {S_1, S_2, …, S_SP} randomly, subject to the given domains, and set P = P_I. More specifically, each variable x_ij in S_i is generated as

x_ij = x_j^min + rand(0, 1) · (x_j^max − x_j^min),

where rand(0, 1) is a random number uniformly distributed between 0 and 1 and [x_j^min, x_j^max] is the domain of the j-th variable.
Step 2. Evaluate the fitness f_i of each solution S_i according to the objective function of the continuous problem to be optimized, and evaluate the fitness F of the population P.

Step 3. Rank the values of {f_i}; that is, find a permutation Π_1 of the labels i such that f_Π1(1) ≥ f_Π1(2) ≥ ⋯ ≥ f_Π1(SP) for minimization problems (f_Π1(1) ≤ f_Π1(2) ≤ ⋯ ≤ f_Π1(SP) for maximization problems), and obtain the best solution S_best = S_Π1(SP) and f_best = f_Π1(SP).
Step 4. Select a bad solution S_Π1(k) based on the power-law probability distribution P(k) defined as (2), and generate a new solution S_new by uniform random mutation. More precisely, first generate a random number r_j uniformly distributed between 0 and 1, and then obtain the j-th element of the new solution by (6):

x_new,j = x_j^min + r_j · (x_j^max − x_j^min). (6)

Denote the new population as P_N = {S_1, S_2, …, S_SP}, where S_Π1(k) is replaced by the new solution S_new and the other solutions are kept unchanged.

Step 5. Evaluate the fitness f_i of each solution S_i in P_N according to the objective function of the continuous problem to be optimized, and rank the values of {f_i}; that is, find a permutation Π_2 of the labels i such that f_Π2(1) ≥ f_Π2(2) ≥ ⋯ ≥ f_Π2(SP) for minimization problems (or f_Π2(1) ≤ f_Π2(2) ≤ ⋯ ≤ f_Π2(SP) for maximization problems).
Step 7. Accept P = P N unconditionally.
Step 8. Repeat Step 2 to Step 7 until the stopping criterion, for example, reaching the maximum number of iterations I_max, is satisfied.
Step 9. Output the best solution S_best and the corresponding fitness f_best.
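Assuming uniform-random initialization and mutation within the variable bounds and power-law selection over the worst-ranked individuals, the steps above can be sketched for a minimization problem as follows; the function and variable names are illustrative rather than the authors' exact implementation:

```python
import random

def irpeo_minimize(objective, bounds, sp=50, i_max=10000, tau=1.05):
    """Sketch of IRPEO for a minimization problem.

    objective: maps a list of real variables to a scalar fitness
    bounds: list of (low, high) pairs, one per variable
    """
    # Step 1: real-coded random initial population within the bounds
    pop = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(sp)]
    # Power-law weights over ranks 1..SP (rank 1 = worst individual)
    weights = [k ** (-tau) for k in range(1, sp + 1)]
    s_best, f_best = None, float("inf")
    for _ in range(i_max):
        # Step 2: evaluate individual fitnesses
        fits = [objective(s) for s in pop]
        # Step 3: rank worst-to-best; the last ranked index is the best
        order = sorted(range(sp), key=lambda i: fits[i], reverse=True)
        if fits[order[-1]] < f_best:
            f_best, s_best = fits[order[-1]], pop[order[-1]][:]
        # Step 4: pick a bad individual via power-law selection ...
        k = random.choices(range(sp), weights=weights)[0]
        victim = order[k]
        # ... and replace it by uniform random mutation within the bounds
        pop[victim] = [lo + random.random() * (hi - lo) for lo, hi in bounds]
        # Steps 5-7: the new population is accepted unconditionally
    return s_best, f_best
```

For instance, `irpeo_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 5)` searches the 5-dimensional sphere function. Note that unconditional acceptance means the best individual of the current population may itself be replaced, which is why the best-so-far solution is archived separately.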
From the above description, it is obvious that the parameters used in IRPEO, including the population size SP, the maximum number of iterations I_max, and the power-law coefficient τ, play critical roles in controlling its performance. From the perspective of algorithm design, IRPEO is simpler than other reported popular algorithms, for example, GA [24], PSO [22], PEO [22], and PSO-EO [22], because IRPEO has fewer parameters to be tuned in practical experiments. More details concerning the parameters used in different evolutionary algorithms and the effects of SP, I_max, and τ on the performance of IRPEO are discussed in the next section.
The optimization dynamics of the proposed algorithm for the benchmark test functions [24] are illustrated in Figure 2. The test function f_3 is a maximization problem with global optimum 1.000, while f_4 is a minimization problem with global optimum 0.000. Obviously, IRPEO quickly locates a promising optimal region and converges rapidly to the optimum (red dashed line) for both minimization and maximization problems.

Experimental Results.
To demonstrate the superiority of the proposed IRPEO algorithm, 10 benchmark functions [24], f_1 to f_10, with dimension N = 30 and 2 benchmark functions [22], f_11 and f_12, with dimension N = 10 and 30, respectively, shown in Table 1, are chosen as test functions. These test functions include unimodal and multimodal functions. The performance of each algorithm on each benchmark test function is measured by the statistical results, including the best fitness, the average fitness, the worst fitness, and the standard deviation (SD), over 30 independent runs. It should be noted that all the experiments were implemented in MATLAB on a 2.50 GHz PC with an Intel Core i5-3210M processor and 2 GB RAM.
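As a small sketch of how such run statistics can be computed (the function name is illustrative), given the final best-fitness values of the independent runs of a minimization algorithm:

```python
import statistics

def run_statistics(final_fitnesses):
    """Summarize independent runs of a minimization algorithm by the
    best, average, and worst final fitness and the standard deviation."""
    return {
        "best": min(final_fitnesses),
        "average": statistics.mean(final_fitnesses),
        "worst": max(final_fitnesses),
        "sd": statistics.stdev(final_fitnesses),
    }
```

A small SD alongside a good average indicates that an algorithm reaches comparable solutions consistently across runs, which is the basis of the comparisons in Tables 2 and 3.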
The control parameters used in IRPEO for the following experiments are set as SP = 50, I_max = 10000–50000, and τ = 1.04–1.06. The comparative performances of IRPEO and the recently reported GA versions with different mutation operations, including GA-ADM, GA-RM, GA-PLM, GA-NUM, GA-MNUM, and GA-PM [24], on the above test functions are shown in Table 2. Clearly, the proposed IRPEO outperforms these modified GA versions [24] and PEO [22] on test functions f_1–f_6 in terms of average fitness and SD, so the comprehensive performance of IRPEO ranks first among these evolutionary algorithms. On test functions f_7–f_10, IRPEO performs worse only than GA-ADM and GA-MNUM while being better than the other four GA versions. For the same random mutation (RM) operation, the proposed IRPEO provides better performance than GA-RM [24] on all these test functions. Furthermore, IRPEO is much simpler than these GA versions because it has only selection and mutation operations with fewer adjustable parameters to tune. In this sense, the proposed IRPEO is competitive with or even better than the various recently reported GA versions with different mutation operations.
Table 3 gives the comparative performances of IRPEO and other reported evolutionary algorithms [22], including PSO-EO, PSO, PEO, and GA, on the test functions f_11 and f_12. It is evident that the proposed IRPEO is also superior to these algorithms in terms of the best, average, and worst fitness and SD.

Parameters versus Performance.
The parameters used in the evolutionary algorithms tested in the last subsection are shown in Table 4. It is clear that the proposed IRPEO is simpler than the other reported popular algorithms, for example, GA [24], PSO [22], PEO [22], and PSO-EO [22], because IRPEO has fewer parameters to be tuned in practical experiments.
The effects of the parameters SP, I_max, and τ on the performance of IRPEO for test function f_2 are illustrated in Figures 3, 4, and 5, respectively. It should be noted that the performance of IRPEO is measured by the error between the statistical results (best, average, and worst fitness) over 10 independent runs and the optimum of the test function. Generally, the performance improves as SP and I_max increase, but the improvement becomes negligible once these values exceed certain thresholds. Additionally, satisfactory performance is obtained when τ ranges from 1.02 to 1.06. The effects of these parameters on the performance of IRPEO for the other test functions can be analyzed in the same way.

Conclusion
In this paper, an improved real-coded population-based EO method (IRPEO) has been proposed to solve continuous unconstrained optimization problems. The proposed IRPEO is a population-based iterated optimization method consisting of the following operations: generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. The experimental results on 10 continuous unconstrained optimization benchmark test functions have shown that the average performance of IRPEO is competitive with or even better than that of various GAs [24] with different mutation operations and the original PEO algorithm [22]. In addition, the superiority of IRPEO over other evolutionary algorithms, such as PSO, the original PEO, and the hybrid PSO-EO algorithm [22], is also demonstrated by the experimental results on these benchmark functions. Furthermore, the effects of the adjustable parameters of IRPEO on its performance have been discussed, and the algorithm can be further enhanced by tuning these parameters carefully. However, more experiments on benchmark test functions and real-world engineering optimization problems are needed to further validate the superiority of IRPEO over other optimization algorithms. Moreover, extending IRPEO to constrained optimization problems is another direction for future research.

Figure 1: The flowchart of the τ-EO algorithm for a minimization problem.

Figure 2: The optimization process of the proposed IRPEO for test functions.

Figure 3: The effect of parameter SP on the performance of IRPEO for test function f_2 when I_max = 50000 and τ = 1.05.

Figure 4: The effect of parameter I_max on the performance of IRPEO for test function f_2 when SP = 50 and τ = 1.05.

Figure 5: The effect of parameter τ on the performance of IRPEO for test function f_2 when I_max = 50000 and SP = 50.

Table 4: Parameters used in different evolutionary algorithms.

PSO: I_max, inertia weight factors w_max and w_min, acceleration parameters c_1 and c_2.
PSO-EO: I_max, w_max, w_min, c_1, c_2, and TC and TG used in the hybrid GC mutation.
IRPEO (3 parameters): SP, I_max, power-law coefficient τ.