A Hybrid Search Model for Constrained Optimization

This paper proposes a hybrid model based on decomposition for constrained optimization problems. Firstly, a constrained optimization problem is transformed into a biobjective optimization problem. Then, the biobjective optimization problem is divided into a set of subproblems, and different subproblems are assigned different fitness functions by the direction vectors. Different from decomposition-based multiobjective optimization algorithms, in which each subproblem is optimized by using the information of its neighboring subproblems, in the proposed method the neighbors of each subproblem are defined based on the corresponding direction vector only. By combining three main components, namely, the local search model, the global search model, and the direction vector adjusting strategy, the population can gradually move toward the global optimal solution. Experiments on two sets of test problems and five real-world engineering design problems have shown that the proposed method performs better than or is competitive with other compared methods.


Introduction
Constrained optimization has a wide application background in many important fields, such as economics, engineering, and science [1][2][3]. In general, a constrained optimization problem (COP) is mathematically defined as

$$
\begin{aligned}
&\min\ f(X),\quad X=\left(x_1,\dots,x_d\right)\in\Omega,\quad l_j\le x_j\le u_j,\\
&\text{s.t.}\quad g_r(X)\le 0,\ r=1,\dots,k,\qquad h_w(X)=0,\ w=1,\dots,z,
\end{aligned}
\tag{1}
$$

where X represents a d-dimensional solution vector and f(X) represents the objective function. g_r(X) and h_w(X) denote the k inequality constraints and the z equality constraints, respectively. Each x_j is restricted by the lower and upper bounds l_j and u_j, respectively.
In COPs, an equality constraint is generally converted into the inequality form $|h_w(X)| - \xi \le 0,\ w=1,\dots,z$, where $\xi$ denotes a small positive value (e.g., $10^{-4}$). To judge whether a solution X of a COP is feasible, its overall constraint violation degree is considered, which is computed as

$$
G(X)=\sum_{r=1}^{k}\max\left\{0,\,g_r(X)\right\}+\sum_{w=1}^{z}\max\left\{0,\,\left|h_w(X)\right|-\xi\right\},
\tag{2}
$$

where $G(X)\ge 0$, and X satisfies the constraints if and only if $G(X)=0$. The evolutionary algorithm (EA), a metaheuristic algorithm, has been adopted to deal with COPs over the past two decades. When EAs are employed to solve COPs, constraint-handling techniques (CHTs) must be considered. The currently popular CHTs include the penalty function methods [4][5][6][7], the multiobjective optimization methods [8][9][10][11][12][13], the feasibility rule methods [14][15][16][17][18][19], the ϵ constrained methods [20][21][22], and the hybrid methods [23][24][25][26]. In the penalty function methods, a penalty fitness function is defined by adding a penalty term to the objective function. In the feasibility rule methods, feasible individuals are superior to infeasible individuals. The ϵ constrained method is a representative CHT, in which the ϵ level is utilized to relax the constraints. The hybrid methods solve COPs by combining multiple constraint-handling techniques.
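As a concrete illustration, the equality-constraint conversion and the overall violation degree G(X) can be sketched in a few lines of Python; the function names and the toy constraints below are ours, not from the paper:

```python
def violation(X, ineq_cons, eq_cons, xi=1e-4):
    """Overall constraint violation degree G(X): inequality violations
    max(0, g_r(X)) plus equality violations max(0, |h_w(X)| - xi), after
    converting each equality h_w(X) = 0 into |h_w(X)| - xi <= 0."""
    g_part = sum(max(0.0, g(X)) for g in ineq_cons)
    h_part = sum(max(0.0, abs(h(X)) - xi) for h in eq_cons)
    return g_part + h_part

# Toy COP constraints: g(X) = x0 + x1 - 1 <= 0 and h(X) = x0 - x1 = 0.
ineq = [lambda X: X[0] + X[1] - 1.0]
eq = [lambda X: X[0] - X[1]]

print(violation([0.5, 0.5], ineq, eq))  # 0.0 -> feasible
print(violation([2.0, 0.0], ineq, eq))  # ~2.9999 -> infeasible
```

A solution is feasible exactly when this quantity is zero, which is why G(X) can serve directly as a second objective in the biobjective transformation described below.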
The multiobjective optimization methods have been adopted to solve COPs in the last two decades. These methods usually transform a COP into a biobjective optimization problem (BOP), in which one objective is the overall constraint violation degree G(X) and the other is the original objective f(X).
Then, multiobjective optimization techniques, such as Pareto dominance or the aggregation method, are utilized to compare individuals. For example, Wang et al. [27] employed a dynamic hybrid model for solving COPs, in which Pareto dominance is used for the comparison. Gao et al. [28] proposed a dual-population method to solve COPs, where f(X) and G(X) are optimized by the corresponding subpopulations, respectively. Moreover, Wang et al. [29] utilized the correlation between the objective function and the constraints to deal with COPs.
The decomposition method [30] is a representative multiobjective optimization method. To solve COPs through decomposition-based multiobjective optimization, the transformed BOP is converted into a set of subproblems; that is, a group of fitness functions is constructed by assigning different direction vectors between f(X) and G(X). Generally, in a decomposition-based multiobjective optimization method, each individual optimizes a subproblem by combining the information of its neighboring individuals. However, little effort has been devoted to optimizing each subproblem by using the information of its own direction vector in the objective space.
Based on the above analysis, a hybrid search model for constrained optimization, called HyCO, is designed to solve COPs in this paper. First of all, a BOP is decomposed into K subproblems.
Then, to balance diversity and convergence, the local and global search models are employed to optimize these subproblems. In the local search model, the whole population is decomposed into K subpopulations by a classification operator, and each subpopulation optimizes one subproblem. In the global search model, the whole population is guided by a defined search direction to improve convergence. In the process, differential evolution (DE) is utilized to generate the offspring, and the direction vectors are adjusted to fit the characteristics of COPs. Furthermore, a simple restart strategy proposed by Wang et al. [31] is adopted.

A multiobjective optimization problem (MOP) is generally stated as

$$
\min\ F(X)=\left(f_1(X),\dots,f_n(X)\right),\quad X\in S,
$$

where $S=\prod_{i=1}^{n}[a_i,b_i]$ represents the n-dimensional decision space and $f_i(X)$ denotes the ith objective function. For two solutions X and Y, some concepts related to MOPs are introduced as follows.

Definition 1. X is said to Pareto dominate Y (denoted as X ≼ Y) if $f_i(X)\le f_i(Y)$ for every $i\in\{1,\dots,n\}$ and $f_j(X)<f_j(Y)$ for at least one index j.
Definition 2. X* ∈ S is called a Pareto optimal solution if there does not exist X ∈ S such that X ≼ X*.

Definition 3.
The set of all Pareto optimal solutions is called the Pareto set (PS).

Vector Angle.
In MOPs, the vector angle represents the angle between two individuals in the objective space. Typically, for two individuals X_j and X_q, the vector angle between them is computed as

$$
\theta_{X_j,X_q}=\arccos\frac{\tilde F\left(X_j\right)\cdot \tilde F\left(X_q\right)}{\left\|\tilde F\left(X_j\right)\right\|\,\left\|\tilde F\left(X_q\right)\right\|},
$$

where the normalized objective vector $\tilde F(X_j)=(\tilde f_1(X_j),\dots,\tilde f_n(X_j))$ is computed according to the following equation:

$$
\tilde f_i\left(X_j\right)=\frac{f_i\left(X_j\right)-Z_i^{\min}}{Z_i^{\max}-Z_i^{\min}},
$$

where $Z_i^{\min}$ and $Z_i^{\max}$ represent the minimum and maximum values of the ith objective, $f_i(X_j)$ represents the ith objective function value, and $\|\cdot\|$ represents the norm of a vector. Generally, the vector angle is used to maintain population diversity for MOPs [32, 33]. Specifically, if the vector angle between two individuals is small enough, their search directions are similar. On the contrary, a large vector angle between two individuals means different search directions, so the diversity between them can be maintained. Motivated by these considerations, a new clustering method is designed in HyCO: the population P is clustered into K subpopulations according to the vector angle to maintain diversity.
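The normalization and angle computation above can be sketched as follows (the helper names are illustrative):

```python
import math

def normalize(F, z_min, z_max):
    # Map each objective value into [0, 1] using the population-wide
    # minimum (z_min) and maximum (z_max) of that objective.
    return [(f - lo) / (hi - lo) for f, lo, hi in zip(F, z_min, z_max)]

def vector_angle(Fj, Fq):
    # Angle (radians) between two normalized objective vectors.
    dot = sum(a * b for a, b in zip(Fj, Fq))
    nj = math.sqrt(sum(a * a for a in Fj))
    nq = math.sqrt(sum(b * b for b in Fq))
    return math.acos(max(-1.0, min(1.0, dot / (nj * nq))))

z_min, z_max = [0.0, 0.0], [10.0, 10.0]
Fj = normalize([10.0, 0.0], z_min, z_max)  # lies on the f1 axis
Fq = normalize([0.0, 10.0], z_min, z_max)  # lies on the f2 axis
print(vector_angle(Fj, Fq))  # pi/2: maximally different search directions
```

Two individuals at right angles in the normalized objective space, as here, are maximally diverse, while an angle near zero signals nearly identical search directions.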

Proposed Method
3.1. Motivation. When using EAs to solve COPs, two important issues need to be addressed: first, achieving a balance between diversity and convergence; second, achieving a balance between the constraints and the objective function. In HyCO, local and global search models based on decomposition are proposed to balance diversity and convergence. Specifically, in the local search model, a clustering method is designed to divide the population into several subpopulations, and each subpopulation optimizes one subproblem to maintain population diversity. In the global search model, a direction vector is defined to guide the evolution and enhance population convergence. In addition, a direction vector adjustment strategy (DVA) is used in HyCO to balance the objective and constraints, which can guide the population to converge to the feasible optimal solution.
Based on the above considerations, this paper utilizes the local and global search models based on decomposition to solve COPs.
Input: The population size m, the number of subpopulations K, and the total number of function evaluations T_FEs.
Output: The best feasible solution in P.
(1) Initialize a population P = {X_1, ..., X_m} randomly;
(2) Calculate f(X_i) and G(X_i) of each individual X_i in P;
(3) while stopping conditions are not satisfied do
(4)   Execute the local search model;
(5)   Execute the global search model;
(6)   Execute DVA;
(7)   Execute the restart strategy;
(8) end while
ALGORITHM 1: The framework of HyCO.

(1) for each direction vector (λ_j, 1 − λ_j), j = 1, ..., K − 1 do
(2)   for each remaining individual X_i in P do
(3)     Calculate the angle θ(X_i, λ_j) according to (9);
(4)   end for
(5)   Find the ⌊m/K⌋ individuals with the minimum angle to the direction vector (λ_j, 1 − λ_j) to form a subpopulation;
(6)   Eliminate these individuals from P;
(7) end for
(8) Form the Kth subpopulation with all remaining individuals in P.
ALGORITHM 2: Classification operator.
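A minimal sketch of the classification operator, assuming angles are measured between the normalized (f, G) objective vectors and the direction vectors as defined later in the paper; the helper names and the toy data are ours:

```python
import math

def angle_to_direction(Fx, w):
    # Angle between a normalized objective vector Fx = (f~, G~) and a
    # direction vector w = (lambda_j, 1 - lambda_j).
    dot = Fx[0] * w[0] + Fx[1] * w[1]
    cos = dot / (math.hypot(*Fx) * math.hypot(*w) + 1e-12)
    return math.acos(max(-1.0, min(1.0, cos)))

def classify(pop_objs, lambdas):
    """Split the population into K subpopulations: for each of the first
    K - 1 direction vectors, assign the floor(m/K) unassigned individuals
    with the smallest angle; the K-th subpopulation keeps the remainder."""
    m, K = len(pop_objs), len(lambdas)
    remaining = list(range(m))
    subpops = []
    for lam in lambdas[:-1]:
        w = (lam, 1.0 - lam)
        remaining.sort(key=lambda i: angle_to_direction(pop_objs[i], w))
        subpops.append(remaining[: m // K])
        remaining = remaining[m // K :]
    subpops.append(remaining)  # K-th subpopulation: all leftovers
    return subpops

objs = [(0.9, 0.1), (0.1, 0.9), (0.5, 0.5), (0.8, 0.2)]  # (f~, G~) pairs
print(classify(objs, [0.9, 0.5, 0.1]))  # [[0], [2], [3, 1]]
```

Because the last subpopulation absorbs everything left over, the subpopulation sizes need not be equal, matching the remark about the Kth subpopulation later in the paper.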

HyCO.
In HyCO, a transformed BOP is first decomposed into K subproblems. Next, the local and global search models are designed to optimize these subproblems, and DVA is proposed to adjust the direction vectors. The detailed steps of HyCO are provided in Algorithm 1.
DE [34] is employed to generate the offspring owing to its powerful search capability. Then, the weighted sum approach is adopted to compare the fitness of two candidate solutions. For a solution X_i^t assigned to the direction vector (λ_i^t, 1 − λ_i^t), its weighted sum is defined as

$$
fit\left(X_i^t\right)=\lambda_i^t\,\tilde f\left(X_i^t\right)+\left(1-\lambda_i^t\right)\tilde G\left(X_i^t\right),
$$

where $\tilde f$ and $\tilde G$ denote the objective value and the overall constraint violation normalized by their population-wide minima and maxima.

(1) for j = 1 to K do
(2)   Select the direction vector (λ_j^t, 1 − λ_j^t) corresponding to SubP_j^t;
(3)   for each individual X_i^t in SubP_j^t do
(4)     Randomly generate an F value in [0, 1];
(5)     if rand < 0.5 then
(6)       Generate a candidate solution U_i^t according to (10);
(7)     else
(8)       Generate a candidate solution U_i^t according to (11) and (12);
(9)     end if
(10)    OSubP_j^t = OSubP_j^t ∪ {U_i^t};
(11)   end for
(12) end for
ALGORITHM 4: Local search algorithm.
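Assuming the weighted sum takes the standard decomposition form just described, comparing two candidate solutions can be sketched as follows (normalization is omitted; the values are assumed already normalized, and the names are ours):

```python
def weighted_sum(f_norm, G_norm, lam):
    # Weighted-sum fitness under direction vector (lam, 1 - lam);
    # smaller is better for minimization.
    return lam * f_norm + (1.0 - lam) * G_norm

def better(a, b, lam):
    # Compare two candidates a = (f~, G~) and b = (f~, G~).
    return weighted_sum(*a, lam) < weighted_sum(*b, lam)

# lam = 0.2 weights the constraint violation heavily ...
print(better((0.1, 0.9), (0.9, 0.1), 0.2))  # False
# ... while lam = 0.8 emphasizes the objective value instead.
print(better((0.1, 0.9), (0.9, 0.1), 0.8))  # True
```

The same pair of candidates can thus be ranked differently by different subproblems, which is exactly what lets the direction vectors trade off the objective against the constraints.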
Input: Initial population P^t. Output: Updated population P^(t+1).
(1) Set P^(t+1) = ∅;
(2) Calculate λ_c^t according to (13);
    ⋮
(9) end if
(10) end for
ALGORITHM 5: Global search model.

where ξ is a parameter in DVA; f_min^t and f_max^t represent the minimum and maximum objective function values, respectively; t represents the current generation number; G_min^t and G_max^t represent the minimum and maximum overall constraint violations, respectively; and (λ_c^t, 1 − λ_c^t) is the direction vector.
In the following sections, the local search model, the global search model, and DVA are introduced, respectively.

Local Search Model.
The purpose of the local search model is to optimize each subproblem from different search directions and find the promising search directions. In this model, the whole population is divided into K subpopulations by a classification operator, and each subpopulation is used to optimize its assigned subproblem.
(1) Set OP^t = ∅;
(2) for each individual X_i^t in P^t do
(3)   Randomly select an F value from {1.0, 0.8, 0.6};
(4)   Randomly select a CR value from {1.0, 0.2, 0.1};
(5)   if rand < 0.5 then
(6)     Create a candidate solution U_i^t according to (14);
(7)   else
(8)     Create a candidate solution U_i^t according to (15);
(9)   end if
(10)  OP^t = OP^t ∪ {U_i^t};
(11) end for
ALGORITHM 6: Global search algorithm.

To introduce the classification operator, the vector angle between the direction vector and the normalized objective vector is first defined, as shown in Figure 1, and calculated as follows:

$$
\theta_{X_i,\lambda_j}=\arccos\frac{\tilde F\left(X_i\right)\cdot w\left(\lambda_j\right)}{\left\|\tilde F\left(X_i\right)\right\|\,\left\|w\left(\lambda_j\right)\right\|},
\tag{9}
$$

where $\tilde F(X_i)$ is the normalized objective vector, $w(\lambda_j)=(\lambda_j,\,1-\lambda_j)$ is the direction vector, and $\tilde F(X_i)\cdot w(\lambda_j)$ denotes the inner product between them. The classification operator is given in Algorithm 2. As shown in Figure 2, each subpopulation owns a region centered on a direction vector. Note that the Kth subpopulation contains all the remaining individuals in P, so the sizes of the subpopulations may not be equal. When the direction vectors are adjusted, all individuals are reclassified by the classification operator. Therefore, individuals may be assigned to subpopulations different from their previous ones, which results in the coevolution of the subpopulations. The whole process of the local search model is given in Algorithm 3.
In the local search process, two modified DE operators are employed to generate the offspring:

(i) DE/Modified/1 (equation (10));
(ii) DE/Modified/2 (equations (11) and (12)),

where X_i and U_i represent the ith target vector and trial vector, respectively, and X_{i,j} and U_{i,j} represent their jth dimensions. X_best and X_mean denote the best individual and the mean vector of the subpopulation, respectively. X_r1, X_r2, and X_r3 represent three individuals in P satisfying X_r1 ≠ X_r2 ≠ X_r3 ≠ X_i, with X_{r1,j}, X_{r2,j}, and X_{r3,j} their jth dimensions. l is a random value in [−1, 1] in (11) and an integer selected from {1, 2} in (12). F denotes the scaling factor, (λ_i, 1 − λ_i) represents the ith direction vector, and rand_1 and rand_2 are randomly generated in [0, 1].
As shown in (10), each subpopulation is guided by its best individual, which helps prevent the population from being trapped in a local optimum. As shown in (11) and (12), the information of the individual with the smaller weighted sum is employed to generate the candidate solutions, which can enhance the convergence rate. The procedure of the local search algorithm is described in Algorithm 4.
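The exact modified DE operators (10)-(12) are not reproduced in this excerpt; as a stand-in, a standard best-guided DE mutation of the same flavor (the trial vector is built around the subpopulation's best individual) can be sketched as:

```python
def de_best_1(X_best, X_r1, X_r2, F=0.8):
    # Best-guided mutation: start from the subpopulation's best individual
    # and add a scaled difference of two other members (DE/best/1 style).
    return [xb + F * (a - b) for xb, a, b in zip(X_best, X_r1, X_r2)]

# Deterministic check: the difference X_r1 - X_r2 is (1, 1), so the trial
# vector is X_best + 0.5 * (1, 1).
print(de_best_1([1.0, 1.0], [2.0, 2.0], [1.0, 1.0], F=0.5))
# [1.5, 1.5]
```

Anchoring the mutation on the best member of a subpopulation is what ties each subpopulation's search to its own subproblem, as the text describes.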

Global Search Model.
In the local search model, each subproblem is optimized along its corresponding direction vector, which may lead to a slow convergence rate. Candidate solutions are generated using only the individuals within one subpopulation, resulting in weak information exchange among different subpopulations. Therefore, diversity can be maintained, but convergence cannot be guaranteed in the local search process. In order to improve convergence, the global search model is proposed. In this model, a direction vector is defined to guide the whole population as follows:

$$
\lambda_c^t=\frac{1}{a}\sum_{i\in I}\lambda_i,
\tag{13}
$$

where (λ_i, 1 − λ_i), i ∈ I, are the direction vectors of the subproblems that have been improved, and a = |I| is the number of improved subproblems. The framework of the global search model is described in Algorithm 5.
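Reading equation (13) as an average over the direction vectors of the improved subproblems, the guiding direction vector can be computed as follows (a sketch; the improved subproblems are passed in as their λ values):

```python
def guiding_lambda(improved_lambdas):
    # Average the lambda components of the direction vectors whose
    # subproblems improved during the last local search round; the
    # population-level guiding direction is then (lam_c, 1 - lam_c).
    a = len(improved_lambdas)          # number of improved subproblems
    lam_c = sum(improved_lambdas) / a
    return (lam_c, 1.0 - lam_c)

print(guiding_lambda([0.25, 0.75]))  # (0.5, 0.5)
```

Subproblems that made progress thus pull the whole population toward the search directions that are currently paying off.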
In the global search process, two DE operators are combined to generate the offspring. Their formulations are introduced as follows [30, 35, 36]:

(i) DE/rand-to-best/1/bin (equation (14));
(ii) DE/current-to-rand/1 (equation (15)),

where V_i represents the ith mutant vector and V_{i,j} denotes its jth dimension. r1, r2, and r3 are the indices of three individuals in P satisfying r1 ≠ r2 ≠ r3 ≠ i. X_best is the best individual according to the weighted sum, CR denotes the crossover probability, and j_rand is a random integer in {1, ..., d}.
With respect to (14), the solution X_best is utilized to enhance convergence. In (15), a randomly chosen solution X_r1 is employed to promote diversity. In this paper, these two operators are each executed with a probability of 0.5; the effectiveness of this scheme has been validated in [22, 37]. The whole process of the global search algorithm is introduced in Algorithm 6.
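The two standard DE operators named above can be sketched as follows (parameter handling simplified; indices are zero-based and the vector arguments are plain lists):

```python
import random

def rand_to_best_1_bin(X_i, X_best, X_r1, X_r2, F, CR, d):
    # DE/rand-to-best/1/bin: mutant V = X_i + F*(X_best - X_i) + F*(X_r1 - X_r2),
    # then binomial crossover; dimension j_rand always comes from the mutant.
    V = [x + F * (xb - x) + F * (a - b)
         for x, xb, a, b in zip(X_i, X_best, X_r1, X_r2)]
    j_rand = random.randrange(d)
    return [V[j] if (random.random() < CR or j == j_rand) else X_i[j]
            for j in range(d)]

def current_to_rand_1(X_i, X_r1, X_r2, X_r3, F):
    # DE/current-to-rand/1: U = X_i + rand*(X_r1 - X_i) + F*(X_r2 - X_r3);
    # rotation-invariant, so no crossover step is applied.
    r = random.random()
    return [x + r * (a - x) + F * (b - c)
            for x, a, b, c in zip(X_i, X_r1, X_r2, X_r3)]

# With CR = 1.0 the crossover always takes the mutant, so the result is
# deterministic:
print(rand_to_best_1_bin([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0],
                         F=0.5, CR=1.0, d=2))  # [1.0, 0.0]
```

The best-guided variant pulls trials toward X_best (convergence), while the current-to-rand variant mixes in a random individual with no crossover (diversity), which is why the two are alternated with probability 0.5.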

DVA. DVA is a major component when solving the transformed BOP through the decomposition-based local and global search models. For MOPs, the image of all Pareto optimal solutions is distributed on the PF [38]. However, for a BOP transformed from a COP, only one global optimal solution needs to be obtained. Therefore, the direction vectors need to be adjusted to fit the characteristics of COPs. Following Wang et al. [30], the direction vector is adjusted according to a sigmoid function of the generation number, where T represents the maximum generation number and c and Γ are two positive values that control the change trend of ξ. Moreover, the ε constrained method is adopted to determine whether ξ should be reduced to a small number for COPs. The details of DVA are given in Algorithm 7.
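The exact sigmoid used by DVA is not reproduced in this excerpt; one plausible shape, shown purely for illustration with the Γ = 30 and c = 0.75 settings reported in the experiment section, is:

```python
import math

def xi_schedule(t, T, gamma=30.0, c=0.75):
    # Hypothetical sigmoid-shaped schedule: stays near 1 early in the run
    # and drops toward 0 after roughly c*T generations; gamma controls the
    # steepness of the transition.
    return 1.0 / (1.0 + math.exp(gamma * (t / T - c)))

T = 1000
print(xi_schedule(0, T))     # ~1.0 at the start
print(xi_schedule(750, T))   # 0.5 at the inflection point t = c*T
print(xi_schedule(1000, T))  # ~0.00055 near the end
```

Any schedule of this shape keeps the direction vectors permissive toward infeasible regions early on and then squeezes the population toward the feasible optimum late in the run, which is the behavior the text attributes to DVA.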

Experiment Settings.
To test the performance of HyCO, two sets of COPs are adopted. The first set contains 36 COPs from IEEE CEC 2010 [39] and the second set involves 56 COPs from IEEE CEC 2017 [39]. They have different characteristics, such as multimodality, extremely strong nonlinearity, and rotated landscapes. The population size (m), the number of subpopulations (K), and the total number of function evaluations (T_FEs) are reported in Table 1, where d represents the dimension of the COPs. In addition, each COP is run 25 times independently. μ in the restart strategy is set to 10^−4. ρ and β in the ϵ constrained method are set to 0.85 and 6, respectively. In the sigmoid function of DVA, Γ and c are set to 30 and 0.75, respectively.

Experiments on the 36 COPs from IEEE CEC 2010.
First of all, the 36 COPs from CEC 2010 are tested in this section. Five competitive methods are selected for comparison: FROFI [40], ITLBO [41], DeCODE [30], AIS-IRP [42], and ECHTDE [43]. The experimental results of these methods are taken from the literature [30, 37]. Since the true optimal values of this test suite are unknown, "Mean O" and "S" are selected as the comparison criteria, denoting the mean and standard deviation of the results, respectively. Furthermore, the multiple-problem Wilcoxon test and the Friedman test are performed via the KEEL software [44].
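The R+ and R− statistics of the multiple-problem Wilcoxon signed-ranks test reported in the tables can be computed as follows (a self-contained sketch with toy numbers; KEEL performs the full test including p-values):

```python
def wilcoxon_signed_ranks(alg_a, alg_b):
    """Multiple-problem Wilcoxon signed-ranks statistics for minimization:
    rank the nonzero per-problem differences by absolute value (ties share
    the average rank), then sum the ranks of the problems where each
    algorithm obtains the smaller (better) mean result."""
    diffs = [a - b for a, b in zip(alg_a, alg_b) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of rank positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_a = sum(r for r, d in zip(ranks, diffs) if d < 0)  # alg_a better
    r_b = sum(r for r, d in zip(ranks, diffs) if d > 0)  # alg_b better
    return r_a, r_b

# Toy "Mean O" values on four problems (smaller is better):
hyco = [1.0, 2.0, 3.0, 4.0]
other = [1.5, 1.8, 3.6, 4.4]
print(wilcoxon_signed_ranks(hyco, other))  # (9.0, 1.0)
```

When the first algorithm's rank sum dominates (as 9.0 vs. 1.0 here), the comparison favors it, which is the sense in which "R− cannot exceed R+" is read in the tables below.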
In the case of COPs with 10d, the results of "Mean O" and "S," the Friedman test, and the multiple-problem Wilcoxon test are given in Tables 2-4, respectively. In Table 2, "▽" indicates that the compared algorithm cannot find any feasible solution after the evolution, and "+," "≈," and "−" indicate that HyCO is worse than, competitive with, and better than the compared method, respectively. It can be seen from Table 2 that HyCO surpasses FROFI, ITLBO, DeCODE, AIS-IRP, and ECHTDE on 6, 7, 5, 9, and 9 test problems, respectively. In contrast, FROFI, ITLBO, DeCODE, AIS-IRP, and ECHTDE outperform HyCO on 3, 4, 3, 6, and 6 test problems, respectively. As shown in Table 4, the R− values cannot exceed the R+ values in any comparison.

In the case of COPs with 30d, the corresponding results are given in Tables 5-7, respectively. As described in Table 5, HyCO is superior to FROFI, ITLBO, DeCODE, AIS-IRP, and ECHTDE on 6, 14, 6, 15, and 15 test problems, respectively. In contrast, FROFI, ITLBO, DeCODE, AIS-IRP, and ECHTDE exhibit better performance than HyCO on 1, 0, 2, 2, and 1 test problems, respectively. From Table 6, HyCO obtains the first rank. In addition, according to the multiple-problem Wilcoxon test, the R− values cannot exceed the R+ values in any comparison, and p ≤ 0.05 is observed in three cases (i.e., HyCO vs. AIS-IRP, HyCO vs. ECHTDE, and HyCO vs. ITLBO). In summary, HyCO performs better than the compared methods on the 18 COPs with 30d from CEC 2010.

Experiments on the 56 COPs from IEEE CEC 2017.
To evaluate the performance of HyCO on complicated COPs, 56 high-dimensional COPs from IEEE CEC 2017 are employed. Two methods derived from the IEEE CEC 2017 competition are selected as competitors: LSHADE44 [45] and UDE [46]. The results are reported in Table 8. For the test functions C17, C18, C19, C26, C27, and C28, none of the three algorithms can find feasible solutions, and thus these functions are removed from the comparison.
As described in Table 8, with respect to the 28 COPs with 50d from CEC 2017, HyCO surpasses LSHADE44 and UDE on 12 and 16 test problems, respectively. However, LSHADE44 and UDE provide better results on 6 and 2 test functions, respectively. In terms of the 28 COPs with 100d from CEC 2017, HyCO outperforms LSHADE44 and UDE on 12 and 17 test functions, respectively, while LSHADE44 and UDE perform better than HyCO on 7 and 3 test problems, respectively. Therefore, HyCO exhibits better performance on high-dimensional COPs.

Visualization of the Evolution Process.
The convergence graphs of HyCO on six representative COPs are plotted in Figure 3. As shown in Figure 3, at the early evolutionary stage the convergence speed is slow, and the local search model plays an important role in guiding the population to explore promising areas. As the evolution proceeds, the convergence rate becomes faster and some individuals in the population become feasible. At this stage, the global search model plays an important role in guiding the population toward the feasible region. Therefore, the local and global search models proposed in this paper can achieve a balance between convergence and diversity.

Real-World Application.
To test the performance of HyCO on real-world COPs, five engineering design problems are adopted. The details of these five engineering problems are taken from the literature [47]. CMODE [48], a representative constrained optimization method, is selected as the competitor. The maximum numbers of evaluations for these five problems are set to 500, 70000, 10000, 10000, and 5000, respectively. The population size and the number of subpopulations are set to 100 and 15, respectively. The parameters of CMODE are consistent with the original literature. The results of the two methods are reported in Table 10.
As shown in Table 10, HyCO outperforms CMODE on 3 engineering design problems, while CMODE is not better than HyCO on any problem. In summary, HyCO is effective for solving real-world engineering optimization problems.

Conclusion
In this paper, HyCO is designed to solve COPs. In this method, the local and global search models are designed to balance diversity and convergence. To balance the constraints and the objective function, the direction vector is adjusted according to the direction vector adjustment strategy. Experimental results on two benchmark test suites, namely, 36 COPs from IEEE CEC 2010 and 56 COPs from IEEE CEC 2017, and on 5 real-world engineering design problems demonstrate the following conclusions: (1) HyCO is competitive with or better than the other selected methods. (2) The local and global search models can achieve a balance between diversity and convergence. (3) The direction vector adjustment strategy can guide the population to converge to the feasible optimal solution.
In our future research, it would be meaningful to design a self-adaptive direction vector adjustment strategy in HyCO to solve high-dimensional test functions. In addition, online learning [49][50][51] will be introduced into constrained optimization in the future.

Data Availability
Data and code are available upon request by sending an e-mail to gaoweifeng2004@126.com.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.