A Novel Approach to Improve the Performance of Evolutionary Methods for Nonlinear Constrained Optimization

Evolutionary methods are well-known techniques for solving nonlinear constrained optimization problems. Due to the exploration power of evolution-based optimizers, the population usually converges to a region around the global optimum after several generations. Although this convergence can be efficiently exploited to reduce the search space, most existing optimization methods continue the search over the original space, and considerable time is wasted searching ineffective regions. This paper proposes a simple and general approach based on search space reduction to improve the exploitation power of existing evolutionary methods without adding any significant computational complexity. After a number of generations, when enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search is then continued over this reduced space. If the space reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum. The proposed scheme can help existing evolutionary methods find better near-optimal solutions in a shorter time. To demonstrate the power of the new approach, it is applied to a set of benchmark constrained optimization problems and the results are compared with a previous work in the literature.


Introduction
A significant part of today's engineering problems are constrained optimization problems (COPs). Although efficient methods such as Simplex exist for solving linear COPs, solving nonlinear COPs (NCOPs) is still open to novel investigation. Different methods have been proposed for solving NCOPs. Among them, natural optimization and especially population-based schemes are the most general and promising ones. These methods can be applied to all types of COPs, including convex and nonconvex, analytical and nonanalytical, and real-, integer-, and mixed-valued problems. One of the most widely applied techniques for solving NCOPs is the class of evolutionary methods.
Various techniques have been introduced for handling nonlinear constraints in evolutionary optimization (EO) methods. These approaches can be grouped into four major categories [1,2]: (1) methods based on penalty functions, also known as indirect constraint handling; (2) methods based on a search for feasible solutions, including repairing infeasible individuals [3,4], superiority of feasible points [5], and behavioral memory [6]; (3) methods based on preserving the feasibility of solutions, such as designing special crossover and mutation operators [7], the GENOCOP system [8], searching the boundary of the feasible region [9], and homomorphous mapping [10]; and (4) hybrid methods [11-13]. Decoding, such as transforming the search space, can be considered a fifth, less common category. None of these approaches is complete, and each has both advantages and weak points. For example, although the third approach (preserving feasibility) might perform very well, it is usually problem-dependent, and designing such a method for a given problem may be difficult, computationally expensive, and sometimes impossible. Among these approaches, the most general one is the first.
Penalty-based constraint handling incorporates the constraints into a penalty function that is added to the main fitness function. In this way, the original constrained problem is converted into an unconstrained one. The main advantage of this method is its generality and simplicity (problem-independent penalty functions). Thus, it is the most common approach for handling nonlinear constraints in EO.
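As a minimal sketch of this idea (not the authors' implementation; the function name and arguments are illustrative), a problem-independent static penalty can be built as a wrapper that adds the weighted sum of constraint violations to the objective:

```python
def penalized_fitness(f, constraints, p):
    """Build a penalized objective f_p(x) = f(x) + p * sum(max(0, g_i(x))).

    f           -- original objective to minimize
    constraints -- list of functions g_i, feasible when g_i(x) <= 0
    p           -- static penalty coefficient chosen by the designer
    """
    def f_p(x):
        # Each violated constraint contributes its violation amount max{0, g_i(x)}.
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + p * violation
    return f_p


# Toy usage: minimize x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0.
f_p = penalized_fitness(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]], 20.0)
```

Any unconstrained optimizer can then be applied to `f_p` directly; a feasible point incurs no penalty, while an infeasible one is charged proportionally to its total violation.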
Adding a penalty function to a fitness (objective) function creates a new unconstrained problem that might be more complex. The introduction of penalties may transform a smooth objective function into a rugged one, and the search may then become more easily trapped in local optima [14]. Therefore, several penalty-based constraint handling methods have been proposed to improve the performance of penalty-based constrained evolutionary optimization. In [2], a survey covers several types of these methods, including death penalty [2,15], static penalty [16,17], dynamic penalty [18,19], annealing penalty [20,21], adaptive penalty [22-24], segregated GA [25], and coevolutionary penalty [26]. In addition, other hybrid (e.g., the niched-penalty approach [27]) and heuristic techniques (e.g., stochastic ranking [28]) can be found in the literature.
Due to its generality and applicability, this paper focuses on penalty-based constraint handling, without loss of generality: the proposed approach is independent of the type of constraint handling and optimization technique. This paper demonstrates how the exploitation power of constrained EO (CEO) can be increased by reducing the search space after enough exploration has been performed. The proposed approach is simple and general and does not add any significant computational complexity to the original algorithm. It can also be applied to other optimization techniques, such as constrained PSO and hybrid methods. This paper is organized as follows. In Section 2, the proposed approach is described, and its details are explained and illustrated on a specific constrained optimization problem introduced in [29]. In Section 3, the performance of the proposed scheme is tested on eleven well-known test problems and the results are compared with [10].

Proposed Approach
A general constrained nonlinear programming problem is formulated as follows:

    minimize f(x)
    subject to g_i(x) <= 0, i = 1, . . . , p,
               h_i(x) = 0, i = p + 1, . . . , m,
               l_j <= x_j <= u_j, j = 1, . . . , n,        (1)

where x = (x_1, x_2, . . . , x_n) is the vector of decision variables, f(x) is a scalar lower-bounded objective function, {g_i(x), i = 1, . . . , p} is a set of p inequality constraints, {h_i(x), i = p + 1, . . . , m} is a set of m − p equality constraints, and [l_j, u_j] is the domain of the jth variable. f(x), g_i(x), and h_i(x) are allowed to be either linear or nonlinear, convex or nonconvex, differentiable or nondifferentiable, continuous or discrete, and analytical or nonanalytical. Also, x can be discrete, continuous, or mixed-valued. Without loss of generality, the problem is considered a minimization, since max f(x) is equivalent to −min(−f(x)). As mentioned in [10], it is common practice to replace the equation h_i(x) = 0 by the inequality |h_i(x)| ≤ δ for some small δ > 0. This replacement is adopted in the rest of this paper, and consequently the above problem consists of m inequality constraints. For simplicity, all of the constraints are written as g_i(x) ≤ 0, i = 1, 2, . . . , m. Note that the bound constraints l_j ≤ x_j ≤ u_j can be handled directly in most population-based optimization methods such as EO; thus, only g_i(x) ≤ 0 (i = 1, 2, . . . , m) are considered as the constraints. Without loss of generality, to simplify the understanding of the proposed idea, its details are explained on a specific constrained optimization problem as an illustrative example chosen from the literature.
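The replacement of an equality constraint h_i(x) = 0 by the relaxed inequality |h_i(x)| ≤ δ can be expressed as a one-line transform. The sketch below is illustrative (the function name is not from the paper); it returns a function in the standard g(x) ≤ 0 form:

```python
def equality_to_inequality(h, delta=1e-4):
    """Convert h(x) = 0 into the relaxed inequality g(x) = |h(x)| - delta <= 0.

    Points with |h(x)| <= delta are treated as feasible, which lets
    inequality-only constraint handlers cope with equality constraints.
    """
    return lambda x: abs(h(x)) - delta
```

For example, `equality_to_inequality(lambda x: x[0] - 1.0)` is nonpositive exactly when x_0 lies within δ of 1, so it can be fed to any inequality-based penalty scheme unchanged.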

An Illustrative Example.
Consider the constrained nonlinear programming problem of (2), adopted from [29]. The feasible space and the global minimum of this problem are displayed in Figure 1. In this paper, a simple constrained GA (CGA) is used for solving constrained optimization problems. This CGA is composed of a simple real-valued GA introduced in [30] and a static penalty function that is added to the original objective (fitness) function as follows:

    f_p(x) = f(x) + p · Σ_{i=1}^{m} max{0, g_i(x)}.

In this equation, max{0, g_i(x)} gives the violation value of the constraint g_i(x) ≤ 0, and p is the penalty coefficient that must be determined by the designer. In some cases, assigning a proper value to p is difficult, and this is known as the main disadvantage of static penalty functions. As alternatives, adaptive [22-24] and dynamic [18,19] penalty functions have been proposed. By defining f_p(x), the constrained nonlinear programming problem of (2) is converted into the following unconstrained nonlinear programming problem:

    minimize f_p(x), l_j ≤ x_j ≤ u_j, j = 1, . . . , n.        (4)

This problem (4) has been solved by the above-mentioned CGA, where the maximum number of generations is allowed to be 10. Figure 2 indicates the best individuals (solutions) of 20 independent runs. As the figure shows, all of these individuals converged to the global minimum within 10 generations. Figure 3 illustrates these individuals on the original search space. A new idea emerges from these figures: the best individual converges to a region around the global optimum after a number of generations. In other words, the global optimum will be located in a small neighborhood of the best individual after a number of generations. Consequently, after a number of generations, when enough exploration has been performed, the search space can be reduced to a small subspace around the best individual, and the search can then be continued over this reduced space.
The experimental study of the next section demonstrates that if the space reduction parameters (red_gen and red_factor) are adjusted properly, the reduced space will include the global optimum.
Here, the reduced search space is defined as follows:

    reduced_l_i = max{ l_i, best_ind(red_gen)_i − red_factor · (u_i − l_i) },
    reduced_u_i = min{ u_i, best_ind(red_gen)_i + red_factor · (u_i − l_i) }.

In these equations, [l_i, u_i] and [reduced_l_i, reduced_u_i] are the original and reduced domains of the ith variable, respectively, and best_ind(red_gen) is the best individual of generation red_gen. red_gen specifies the generation at which the reduction of the search space is performed; here, this reduction is done only once, at generation red_gen. The size of the reduced search space is determined by red_factor, which varies between 0 and 1. Also, the value of the mutation rate can be changed after this reduction. Indeed, the value of red_gen determines the total number of generations in the CGA (constrained GA) that are devoted to exploration in order to find a region around the global minimum. The value of red_gen thus determines a tradeoff between exploration and exploitation, and it highly depends on the general characteristics of the given optimization problem.

Table 1: A comparison between the proposed approach, [22], and [29] for the same number of function evaluations in solving the problem of (2). The best results are indicated in boldface.
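The reduction rule above can be sketched in a few lines. This is a plausible reading of the garbled equations in the source (each variable's reduced interval is centered at the best individual with half-width red_factor · (u_i − l_i), clipped to the original bounds); the function name is illustrative:

```python
def reduce_search_space(lower, upper, best_ind, red_factor):
    """Shrink each variable's domain [l_i, u_i] to a neighborhood of the
    best individual, clipped so the reduced box stays inside the original one.

    lower, upper -- original per-variable bounds
    best_ind     -- best individual at generation red_gen
    red_factor   -- in (0, 1); controls the size of the reduced box
    """
    reduced_lower, reduced_upper = [], []
    for l, u, b in zip(lower, upper, best_ind):
        half_width = red_factor * (u - l)
        reduced_lower.append(max(l, b - half_width))  # never below original l_i
        reduced_upper.append(min(u, b + half_width))  # never above original u_i
    return reduced_lower, reduced_upper
```

With red_factor = 0.05 (as in the illustrative example), each variable's domain shrinks to at most 10% of its original width around the best individual, and the clipping keeps the reduced space inside the original bounds.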

    Method               Best        Mean       Worst
    Proposed approach    13.590846   13.61073   13.84861
    [29]                 13.59658    –          244.11616
    [22]                 13.59085    30.74880   152.54840

If the global minimum is inside a narrow (sharp) valley, the CGA usually needs more time to find the global valley, and thus a large value of red_gen should be considered. Also, since the valley is narrow, red_factor can be set to a small value. In contrast, if the global minimum is inside a wide (flat) valley, the CGA usually needs less time to find the global valley, and therefore smaller values can be used for red_gen. Also, since the valley is wide, red_factor should be large. The value of red_gen usually grows with the number of decision variables. In most optimization problems, since the general characteristics of the given problem can be approximately identified by the designer, appropriate values for red_gen and red_factor can be heuristically guessed. In general, red_gen and red_factor should be large enough to guarantee that the global optimum is included in the reduced search space. For this example, the reduced search space is shown in Figure 3 for red_factor = 0.05. The value of red_gen must be determined by the user; however, systematic or adaptive schemes can be developed in further investigations. The value of red_gen influences the power of exploration: it should be large enough to guarantee that, after red_gen generations, the best individual and the global optimum are close together. Moreover, the size of the reduced search space (the value of red_factor) must be large enough to include the global optimum.
This example has been solved by the proposed approach using the above-mentioned CGA. Here, the penalty coefficient is 20, the maximum number of generations is 50, red_factor = 0.05, and red_gen = 5. The mutation rate is 0.2 for the first five generations and, after generation 5 (red_gen), it is changed to 0.05. The results of 50 independent runs are displayed in Table 1. The same experiment has also been performed in [22,29] with the same number of function evaluations. Table 1 compares the results of the proposed approach with these two references, where the best results are indicated in boldface. Although the proposed approach has been incorporated with a simple CEO method, which is trivial in contrast to the other existing CEO techniques in the literature, it obtained better results on this example for the same number of function evaluations in comparison with [22,29].
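The overall procedure can be summarized as a sketch. The inner variation operator below (elitist Gaussian mutation around the incumbent best) is only a toy stand-in for the paper's real-valued CGA of [30]; the reduction logic, the one-shot trigger at red_gen, and the mutation-rate switch follow the text, while all names and default values are illustrative:

```python
import random

def optimize_with_reduction(f_p, lower, upper, pop_size=50, max_gen=50,
                            red_gen=5, red_factor=0.05,
                            mut_rate=0.2, reduced_mut_rate=0.05, seed=0):
    """Minimize a (penalized) objective f_p with one-shot search space reduction.

    For the first red_gen generations the toy EA explores the original box;
    at generation red_gen the box is shrunk around the best individual and
    the mutation rate is lowered for finer exploitation.
    """
    rng = random.Random(seed)
    n = len(lower)
    lo, hi = list(lower), list(upper)  # current (possibly reduced) bounds

    def mutate(x, rate):
        # Gaussian perturbation scaled by the current box width, clipped to bounds.
        return [min(hi[i], max(lo[i], x[i] + rng.gauss(0.0, rate * (hi[i] - lo[i]))))
                for i in range(n)]

    pop = [[rng.uniform(lo[i], hi[i]) for i in range(n)] for _ in range(pop_size)]
    best = min(pop, key=f_p)
    rate = mut_rate
    for gen in range(1, max_gen + 1):
        # Elitist step: keep the best, refill the rest by mutation.
        pop = [mutate(best, rate) for _ in range(pop_size - 1)] + [best]
        best = min(pop, key=f_p)
        if gen == red_gen:
            # One-shot reduction around the current best individual.
            for i in range(n):
                w = red_factor * (hi[i] - lo[i])
                lo[i] = max(lo[i], best[i] - w)
                hi[i] = min(hi[i], best[i] + w)
            rate = reduced_mut_rate  # finer mutations over the reduced space
    return best
```

Running this on a simple penalized objective over [−5, 5]^2 illustrates the intended behavior: a short exploration phase locates the right region, and the remaining generations exploit the small box around the incumbent best.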
According to the above explanations, the proposed method and formulation can be directly applied to different continuous (real-valued) optimization problems and incorporated with different optimization methods. Moreover, since the main contribution of this paper is the notion of search space reduction, which is general and problem-independent, the proposed approach is flexible enough to be generalized (with minor changes) to discrete and mixed-valued optimization problems, such as combinatorial optimization problems, where the domain is not continuous. In the next section, the performance of the new scheme is examined on a set of eleven well-known continuous test problems.

Experimental Study
To demonstrate the power of the proposed approach to improve the performance of constrained evolutionary methods, it is incorporated with the simple CEO of the previous section and applied to a set of eleven well-known test problems introduced in [1,10]. These test problems contain objective functions of different types (linear, quadratic, cubic, polynomial, and nonlinear) and various types and numbers of constraints (linear inequalities, nonlinear equations, and inequalities). The ratio between the size of the feasible search space |F| and the size of the whole search space |S| varies from 0% to almost 100% across these test cases, and the topologies of the feasible search spaces are also quite different [10]. Table 2 summarizes the main characteristics of these test cases.
The CGA routine is the same as in the previous section except for the following parameters. Here, the population size is 70 and the maximum number of generations is 5000. The values of the mutation rate, red_gen, red_factor, and the penalty coefficient are different for each test case, as shown in Table 3. Equality constraints (h(x) = 0) were converted into inequality constraints (|h(x)| ≤ δ) with δ = 0.0001. For each test case, 20 independent runs were performed. The same experiment has been done on this test set in [10]. It should be mentioned that the authors of [10] utilize the third method of constraint handling (preserving feasibility by homomorphous mapping). Table 4 indicates the results of the proposed approach and its comparison with [10] for the same number of function evaluations; the best results are indicated in boldface. For problem G5, we could not find any suitable static penalty coefficient for handling the constraints. This is due to the inadequacy of static penalty functions. However, this problem could not be solved by [10] either. It should be mentioned that this test case (G5) has been solved in some other references [22,28].
According to the comparative results of Table 4, although the proposed approach has been incorporated with a simple CEO method, which is trivial in contrast to the other existing CEO techniques in the literature, its performance (finding a better near-optimal solution) is better than that of [10] in 70% of the cases in this benchmark study. Since the number of function evaluations is the same in this study and in the study of [10], it can be concluded that the proposed approach can also improve the efficiency (convergence speed) of the CEO in 70% of the cases relative to [10].

Conclusions
This paper proposes a general and computationally simple approach to improve the performance of evolution-based optimization methods for solving nonlinear constrained optimization problems. After a number of generations, when enough exploration has been performed, the search space is reduced to a small subspace around the best individual, and the search is continued over this reduced space. If the reduction parameters (red_gen and red_factor) are adjusted properly, the reduced search space will include the global optimum. Here, this method was incorporated with a simple constrained GA, and its performance was tested and compared with the method of [10] on a set of eleven benchmark test problems. The comparative results of the experimental study demonstrate that the proposed approach can considerably improve the performance (finding better near-optimal solutions) and efficiency (convergence speed) of the simple constrained GA in comparison with [10], without adding any considerable computational complexity to the original algorithm. The proposed scheme is general and can be incorporated with other population-based optimization methods for solving nonlinear programming problems.