Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2013, Article ID 841780, doi:10.1155/2013/841780

Research Article

A Hybrid Multiobjective Differential Evolution Algorithm and Its Application to the Optimization of Grinding and Classification

Yalin Wang,1 Xiaofang Chen,1 Weihua Gui,1 Chunhua Yang,1 Lou Caccetta,2 and Honglei Xu2

1 School of Information Science and Engineering, Central South University, Changsha 410083, China
2 Department of Mathematics & Statistics, Curtin University, Perth, WA 6845, Australia

Received 19 July 2013; Accepted 5 September 2013; Published 19 November 2013

Academic Editor: Dewei Li

Copyright © 2013 Yalin Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Grinding-classification is the prerequisite process for full recovery of nonrenewable minerals, with both production quality and quantity objectives concerned. Since the process is composed of one grinding machine and two classification machines, its natural formulation is a constrained multiobjective optimization problem of complex expression. In this paper, a hybrid differential evolution (DE) algorithm with multiple populations is proposed. Some infeasible solutions with better performance are allowed to be saved, and they participate randomly in the evolution. In order to exploit these meaningful infeasible solutions, a functionally partitioned multi-population mechanism is designed to approach an optimal solution from all possible directions. Meanwhile, a simplex method for local search is inserted into the evolution process to enhance the searching strategy. Simulation results on several benchmark problems indicate that the proposed algorithm converges quickly and effectively to the Pareto frontier with good distribution. Finally, the proposed algorithm is applied to solve a multiobjective optimization model of a grinding and classification process. Based on the technique for order performance by similarity to ideal solution (TOPSIS), a satisfactory solution is obtained by using a multiple-attribute decision-making method.

1. Introduction

Grinding-classification is an important prerequisite process for most mineral processing plants. The grinding process reduces the particle size of raw ores and is usually followed by classification to separate the ground product into different sizes. The grinding-classification operation is required to produce pulp with suitable concentration and fineness for flotation. The pulp quality directly influences the subsequent flotation efficiency and the recovery of valuable metals from tailings. In order to improve economic efficiency and reduce energy consumption, the process optimization objectives include product quality and output yield. Under given mineral source conditions, the objectives are decided by a series of operation variables such as the solid flow of feed ore to the ball mill, the steel ball filling rate, and the flow rates of water added to the first and the second classifier recycles. Solving the optimization model of product output and quality in the grinding-classification process is of great significance for improving the technical and economic specifications, and it has been a continuous endeavor of scientists and engineers.

Grinding-classification is an energy-intensive process influenced by many interacting factors with mutual restraints. The goals of the grinding-classification optimization problem are decided by multiple constrained input control variables with nonlinear relationships. The optimization model of the grinding-classification operation is therefore a complex constrained multiobjective optimization problem (CMOP). Constrained multiobjective problems are generally difficult to solve, so constraint handling techniques and multiobjective optimization methods need to be combined.

Multiobjective optimization problems (MOPs) are often handled by traditional optimization methods through aggregating the multiple objectives into a single scalar objective with weighting factors. MOPs have a set of equally good (nondominated) solutions instead of a single one, called a Pareto optimum, a concept introduced by Edgeworth in 1881 and later generalized by Pareto in 1896. Practical MOPs often involve series of equations, functions, or procedures with complicated constraints. Therefore, evolutionary algorithms are attractive approaches because of their low requirements on the mathematical expression of the problem. Since the mid-1980s, there has been a growing interest in solving MOPs using evolutionary approaches. One of the most successful evolutionary algorithms for the optimization of continuous-space functions is differential evolution (DE). DE is simple and converges efficiently to the global optimum in most cases [12, 13]. Its efficiency has been proven in many application fields such as pattern recognition and mechanical engineering.

There have been many improvements to DE for solving MOPs. Abbass firstly provided a Pareto DE (PDE) algorithm for MOPs, in which DE was employed to create new solutions and only the nondominated solutions were kept as the basis for the next generation. Madavan developed a Pareto differential evolution approach (PDEA) in which new solutions were created by DE and kept in an auxiliary population. Xue et al. introduced multiobjective differential evolution (MODE) and used Pareto-based ranking assignment and the crowding distance metric, but in a different manner from PDEA. Robic and Filipic, also adopting Pareto-based ranking assignment and the crowding distance metric, developed DE for multiobjective optimization (DEMO) with a different population update strategy and achieved good results. Huang et al. extended the self-adaptive DE (SADE) to solve MOPs through a so-called multiobjective self-adaptive DE (MOSADE). They further extended MOSADE by using objectivewise learning strategies. Adeyemo and Otieno provided the multiobjective differential evolution algorithm (MDEA). In MDEA, a new solution was generated by a DE variant and compared with the target solution; if it dominated the target solution, it was added to the new population; otherwise, the target solution was added.

On the other hand, single-objective constrained optimization problems have been studied intensively in the past years. Different constraint handling techniques have been proposed to solve constrained optimization problems. Michalewicz and Schoenauer divided the constraint handling methods used in evolutionary algorithms into four categories: preserving feasibility of solutions, penalty functions, separating the feasible and infeasible solutions, and hybrid methods. These methods differ in how they deal with infeasible individuals throughout the search phases. Currently, the penalty function method is the most widely used, but its performance strongly depends on the choice of the penalty parameter.

Although multiobjective optimization and constraint handling have each received many contributions, CMOPs are still difficult to solve in practice. Coello and Christiansen proposed a simple approach to solve CMOPs by ignoring any solution that violates any of the assigned constraints. Deb et al. proposed a constrained multiobjective algorithm based on the concept of constrained domination, which is also known as superiority of the feasible solution. Woldesenbet et al. introduced a constraint handling technique based on adaptive penalty functions and distance measures by extending the corresponding version for single-objective constrained optimization.

In the MOP of the grinding and classification process, the definitions of Pareto solutions, Pareto frontier, and Pareto dominance are consistent with the classic definitions. The Pareto frontier is the mapping of the Pareto-optimal solutions to the objective space. In the minimization sense, a general constrained MOP can be formulated as follows:

(1) \min F(X) = \min[f_1(X), f_2(X), \ldots, f_r(X)],
    s.t. g_i(X) \le 0 \quad (i = 1, 2, \ldots, p),
         h_j(X) = 0 \quad (j = p+1, \ldots, q),
         x_k \in [x_k^{\min}, x_k^{\max}] \quad (k = 1, 2, \ldots, n),

where F(X) is the objective vector, X = (x_1, \ldots, x_n) \in \mathbb{R}^n is a parameter vector, g_i(X) is the ith inequality constraint, and h_j(X) is the jth equality constraint. x_k^{\min} and x_k^{\max} are, respectively, the lower and upper bounds of the decision variable x_k.
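To make the formulation concrete, a minimal Python sketch of problem (1) follows; the toy objectives, constraint, and bounds are hypothetical illustrations, not from the paper:

```python
# Sketch of the constrained MOP form (1): objectives f_k(X), inequality
# constraints g_i(X) <= 0, and box bounds [x_k^min, x_k^max].
def make_cmop():
    objectives = [
        lambda x: x[0] ** 2 + x[1] ** 2,         # f1 (toy)
        lambda x: (x[0] - 1) ** 2 + x[1] ** 2,   # f2 (toy)
    ]
    inequality = [
        lambda x: x[0] + x[1] - 1.5,             # g1(X) <= 0 (toy)
    ]
    bounds = [(0.0, 2.0), (0.0, 2.0)]            # [x_k^min, x_k^max]
    return objectives, inequality, bounds

def evaluate(x, objectives, inequality):
    """Return the objective vector F(X) and a feasibility flag."""
    F = [f(x) for f in objectives]
    feasible = all(g(x) <= 0 for g in inequality)
    return F, feasible
```
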

In this paper, based on the specific industrial background of continuous bauxite grinding-classification operation, a new hybrid DE algorithm is proposed to solve complex constrained multiobjective optimization problems. Firstly, a hybrid DE algorithm for MOPs with simplex method (SM-DEMO) is designed to overcome the problems of global performance degradation and being trapped in local optimum. Then, for the MOPs with complicated constraints, the proposed algorithm is formed by combining SM-DEMO and functional partitioned multi-population. In this method, the construction of penalty functions is not required, and the meaningful infeasible solutions are fully utilized.

The remainder of the paper is structured as follows. Section 2 describes the SM-DEMO algorithm for unconstrained cases. The proposed algorithm of multipopulation for constrained MOPs is given in Section 3 with verification of performance by benchmark testing results. Section 4 describes the model of products’ output and quality in the grinding-classification process in detail and the application of the proposed algorithm in the optimization model. Finally, the conclusions based on the present study are drawn in Section 5.

2. SM-DEMO Algorithm for Unconstrained MOPs

In order to solve multiobjective optimization problems efficiently and find an approximately complete and near-optimal Pareto frontier, we propose a hybrid DE algorithm for unconstrained multiobjective optimization with the simplex method.

Differential evolution, with initialization, crossover, and selection as in usual genetic algorithms, uses the perturbation of two population members as its mutation operator to produce a new individual. The mutation operator of the DE algorithm is described as follows.

Considering each target individual x_i^G in the Gth generation of size Np, a mutant individual \hat{x}_i^{G+1} is defined by

(2) \hat{x}_i^{G+1} = x_{r3}^G + F (x_{r1}^G - x_{r2}^G),

where the indexes r1, r2, and r3 represent mutually different integers that are also different from i and are randomly generated over [1, Np], and F is the scaling factor.
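The mutation rule (2) can be sketched in Python as follows; this is a generic DE/rand/1 illustration in which the population is a plain list of real-valued vectors:

```python
import random

def de_mutation(pop, i, F=0.8):
    """Mutant per (2): x_r3 + F*(x_r1 - x_r2), with r1, r2, r3 mutually
    different and different from the target index i."""
    idxs = [r for r in range(len(pop)) if r != i]
    r1, r2, r3 = random.sample(idxs, 3)
    return [pop[r3][k] + F * (pop[r1][k] - pop[r2][k])
            for k in range(len(pop[i]))]
```
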

The simplex method, proposed by Spendley, Hext, and Himsworth and later refined by Nelder and Mead (NM), is a derivative-free line-search method particularly designed for traditional unconstrained minimization scenarios. The NM method can be deemed a direct search method of the descent kind. The replacement process consists of four basic operations: reflection, expansion, contraction, and shrinkage. Through these operations, the simplex improves itself and approaches a local optimum point sequentially. Furthermore, the simplex can vary its shape, size, and orientation to adapt itself to the local contour of the objective function.
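The four operations can be illustrated with a textbook one-iteration sketch of the NM method; this is a generic version with standard coefficients, not the paper's exact implementation:

```python
def nm_step(simplex, f, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """One Nelder-Mead iteration on a list of n+1 vertices:
    reflection, expansion, contraction, or shrinkage (in place)."""
    simplex.sort(key=f)                      # best vertex first
    best, worst = simplex[0], simplex[-1]
    n = len(best)
    centroid = [sum(v[k] for v in simplex[:-1]) / n for k in range(n)]
    refl = [centroid[k] + alpha * (centroid[k] - worst[k]) for k in range(n)]
    if f(refl) < f(best):                    # try expansion
        expd = [centroid[k] + gamma * (refl[k] - centroid[k]) for k in range(n)]
        simplex[-1] = expd if f(expd) < f(refl) else refl
    elif f(refl) < f(simplex[-2]):           # accept reflection
        simplex[-1] = refl
    else:                                    # contract toward the worst vertex
        con = [centroid[k] + rho * (worst[k] - centroid[k]) for k in range(n)]
        if f(con) < f(worst):
            simplex[-1] = con
        else:                                # shrink everything toward the best
            simplex[:] = [[best[k] + sigma * (v[k] - best[k]) for k in range(n)]
                          for v in simplex]
    return simplex
```

Iterating `nm_step` drives the simplex toward a local minimum of `f`.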

2.1. Main Strategy of SM-DEMO

Compared with standard DE, the SM-DEMO algorithm is improved in the following three respects.

2.1.1. Modified Selection Operation

After traditional DE evolution, the individual u_{ij}^{G+1} may violate the boundary constraints x_{ij}^{max} and x_{ij}^{min}. In this case, u_{ij}^{G+1} is replaced by a new individual w_{ij}^{G+1} adjusted as follows:

(3) w_{ij}^{G+1} = \begin{cases} x_{ij}^{max} + \mathrm{rand}() \cdot (x_{ij}^{max} - u_{ij}^{G+1}), & u_{ij}^{G+1} > x_{ij}^{max}, \\ x_{ij}^{min} + \mathrm{rand}() \cdot (x_{ij}^{min} - u_{ij}^{G+1}), & u_{ij}^{G+1} < x_{ij}^{min}, \\ u_{ij}^{G+1}, & \text{otherwise}. \end{cases}
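Rule (3) can be sketched as a component-wise repair of a trial vector `u` against the bounds:

```python
import random

def repair(u, xmin, xmax):
    """Boundary repair per (3): a violating component is pulled back from
    the violated bound by a random fraction of the overshoot."""
    return [
        xmax[j] + random.random() * (xmax[j] - u[j]) if u[j] > xmax[j]
        else xmin[j] + random.random() * (xmin[j] - u[j]) if u[j] < xmin[j]
        else u[j]
        for j in range(len(u))
    ]
```
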

The new population is combined with the existing parent population to form a new set Mg of size bigger than Np. A nondominated ranking of Mg is performed, and the Np best individuals are selected. This approach allows a global nondomination check across both the parent and the new generation rather than only within the new generation, as is done in other approaches, although it requires additional computational cost for sorting the combined population.

2.1.2. Nondominated Ranking Based on Euclidean Distance

As in the NSGA-II algorithm developed by Deb et al., the solutions within each nondominated frontier that reside in less crowded regions of the frontier are assigned a higher rank. The crowding distance of the ith solution in its frontier (marked with solid circles) is the average side length of the cuboid (shown with a dashed box in Figure 1(a)). The crowding-distance computation requires sorting the population according to each objective function value in ascending order of magnitude. As shown in Figure 1, A and C are the two solutions nearest to B in the same rank, and the crowding distance σ_0(B) of solution B is traditionally calculated as

(4) \sigma_0(B) = \sum_{j=1}^{n} |f_j(A) - f_j(C)|,

where f_j(A) and f_j(C) are the objective values. For each objective function, the boundary solutions (solutions with the smallest and the largest function values) are assigned an infinite distance value.

Crowding-distance diagram.

A crowding-distance metric is used to estimate the density of solutions surrounding a particular solution in the population and is obtained from the average distance of the two solutions on either side of that solution along each of the objectives. As shown in Figure 1, A, B, and C are individuals of the generation on the same frontier, and the distribution in Figure 1(a) is clearly better than that in Figure 1(b). However, if we use (4) to calculate the crowding distance of C, we only learn that Figure 1(a) is better than Figure 1(b); the crowding distance of C in Figures 1(a) and 1(c) is equal, which contradicts this observation.

To distinguish the situations mentioned above, we propose an improved crowding-distance metric based on Euclidean distance. Let M be the midpoint of the segment AC and f_j the jth objective; the crowding distance σ(B) is defined as

(5) \sigma(B) = |AC| - |BM| = \sqrt{\sum_{j=1}^{n} [f_j(A) - f_j(C)]^2} - \sqrt{\sum_{j=1}^{n} \left( f_j(B) - \frac{f_j(A) + f_j(C)}{2} \right)^2}.
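Definition (5) can be sketched directly; `A`, `B`, `C` are the objective vectors of neighbors in the same frontier:

```python
from math import sqrt

def crowding_euclid(A, B, C):
    """Improved crowding distance (5): |AC| minus the distance from B
    to the midpoint M of the segment AC, in objective space."""
    AC = sqrt(sum((a - c) ** 2 for a, c in zip(A, C)))
    M = [(a + c) / 2 for a, c in zip(A, C)]
    BM = sqrt(sum((b - m) ** 2 for b, m in zip(B, M)))
    return AC - BM
```

A solution B lying exactly on the midpoint of AC gets the full distance |AC|; the closer B drifts toward either neighbor's midpoint region, the smaller the value.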

The crowded-comparison operator guides the selection process at the various stages of the algorithm toward a uniformly distributed Pareto-optimal frontier. To carry out the comparison, we assume that every individual in the population has two attributes: (1) a nondomination rank i_rank and (2) a crowding distance i_distance. A partial order \prec_n is then defined: i \prec_n j if i_rank < j_rank, that is, between two solutions with different nondomination ranks we prefer the solution with the lower (better) rank. Otherwise, if both solutions belong to the same frontier, that is, i_rank = j_rank, we prefer the solution located in the less crowded region, that is, the one with the larger i_distance.

2.1.3. Simplex Method for Local Search

The simplex method for local search is mixed into the evolution process to enhance the search strategy. The goal of integrating the NM simplex method and DE is to enrich the population diversity and avoid being trapped in local minima. We apply the NM simplex operator to the present population when the number of iterations is greater than a particular value (e.g., Gmax/2). The individuals that achieve the single extreme value in each objective function are marked as the initial vertex points of the simplex method. The present population is then updated according to the simplex method until the terminal conditions are satisfied.

The computation steps of the algorithm are included in Section 3.2.

2.2. Evaluation Criteria

Unlike single-objective optimization, solution quality evaluation is more complicated in the multiobjective case. Most of the suggested methods fall into two types. One is to evaluate the convergence degree by computing the proximity between the solution frontier and the actual Pareto frontier. The other is to evaluate the distribution degree of the solutions in objective space by computing the distances among the individuals. Here, we choose one method of each type to evaluate the performance of the SM-DEMO algorithm.

(1) Convergence Evaluation. Deb et al. proposed this method in 2002. It is described as follows:

(6) \gamma = \frac{1}{Q} \sum_{i=1}^{Q} \min \| P_i^* - PF^T \|,

where γ is the extent of convergence to a known set of Pareto-optimal solutions, P^* is the obtained nondominated Pareto frontier, PF^T is the true nondominated Pareto frontier, \| P_i^* - PF^T \| is the Euclidean distance from the ith obtained solution to the nearest point of PF^T, and Q is the number of obtained solutions.

(2) Distribution Degree Evaluation. The nonuniformity of the distribution is measured by SP as follows:

(7) SP = \sqrt{ \frac{1}{Q-1} \sum_{i=1}^{Q} (\bar{d} - d_i)^2 },

where d_i is the Euclidean distance between consecutive solutions in the obtained nondominated set of solutions and \bar{d} is the average of these distances.
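Both indicators can be sketched in a few lines; the SP variant below uses the Q-1 distances between consecutive points of a front sorted along the first objective, which is one common reading of (7):

```python
from math import sqrt

def dist(p, q):
    """Euclidean distance between two objective vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def gamma_metric(obtained, true_front):
    """Convergence metric (6): mean distance from each obtained point
    to its nearest point on the reference Pareto front."""
    return sum(min(dist(p, t) for t in true_front) for p in obtained) / len(obtained)

def sp_metric(front):
    """Spacing metric (7): spread of the distances between consecutive
    solutions (front sorted along the first objective)."""
    front = sorted(front)
    d = [dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    dbar = sum(d) / len(d)
    return sqrt(sum((dbar - di) ** 2 for di in d) / (len(front) - 1))
```

A perfectly converged front gives gamma = 0; a perfectly evenly spaced front gives SP = 0.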

2.3. Experimental Studies

Four well-known benchmark test functions are used here to compare the performance of SM-DEMO with NSGA-II and DEMO/Parent. These four problems are called ZDT2, ZDT3, ZDT4, and ZDT6; each has two objective functions. We describe them in Table 1.

Test problems.

Test problem | Objective functions, minF(X) = min[f_1(X), f_2(X)] | Range of variables

ZDT2: f_1(X) = x_1; f_2(X) = g(X)[1 - (f_1/g(X))^2]; g(X) = 1 + 9\sum_{i=2}^{n} x_i/(n-1). n = 30; 0 \le x_i \le 1.

ZDT3: f_1(X) = x_1; f_2(X) = g[1 - \sqrt{f_1/g} - (f_1/g)\sin(10\pi f_1)]; g(X) = 1 + 9\sum_{i=2}^{n} x_i/(n-1). n = 30; 0 \le x_i \le 1.

ZDT4: f_1(X) = x_1; f_2(X) = g[1 - \sqrt{f_1/g}]; g(X) = 1 + 10(n-1) + \sum_{i=2}^{n} [x_i^2 - 10\cos(4\pi x_i)]. n = 10; 0 \le x_1 \le 1, -5 \le x_i \le 5 (i = 2, \ldots, n).

ZDT6: f_1(X) = 1 - \exp(-4x_1)\sin^6(6\pi x_1); f_2(X) = g[1 - (f_1/g)^2]; g(X) = 1 + 9[\sum_{i=2}^{n} x_i/(n-1)]^{0.25}. n = 10; 0 \le x_i \le 1.

The simulation is carried out under the environment of an Intel Pentium 4 CPU at 3.06 GHz, 512 MB memory, Windows XP Professional, and Matlab 7.1. The initialization parameters are set as follows: population size Np = 100, scaling factor F = 0.8, crossover rate CR = 0.6, maximum evolution generation Gmax = 250, and number of SM evolution iterations GSM = 100.

All three algorithms are real coded, with equal population size and equal maximum evolution generation. Each algorithm independently runs 20 times for each test function. Because the true Pareto-optimal set is not available, we take 60 Pareto-optimal solutions obtained by the three algorithms as the reference Pareto-optimal solution set.

We evaluated the algorithms based on the two performance indexes γ and SP. Table 2 shows the mean and variance of γ and SP for the three algorithms SM-DEMO, NSGA-II, and DEMO/Parent. We can learn from Table 2 that, for the ZDT2 function, all three algorithms perform well, while SM-DEMO is slightly better than the other two. In terms of convergence, for ZDT3, ZDT4, and ZDT6, which are more complex than ZDT2, SM-DEMO is significantly better than DEMO/Parent and NSGA-II.

The performance results of each algorithm on the test functions.

Test function Algorithm γ SP
ZDT2 DEMO/Parent 0.005120 ± 0.000312 0.000630 ± 0.000010
NSGA-II 0.007120 ± 0.000413 0.000540 ± 0.000940
SM-DEMO 0.004013 ± 0.000230 0.000423 ± 0.000011

ZDT3 DEMO/Parent 0.009704 ± 0.000027 0.007512 ± 0.000165
NSGA-II 0.014067 ± 0.000059 0.006540 ± 0.000124
SM-DEMO 0.004704 ± 0.000003 0.004450 ± 0.000153

ZDT4 DEMO/Parent 2.009704 ± 0.901164 0.011031 ± 0.001104
NSGA-II 3.144067 ± 2.100740 0.010122 ± 0.000072
SM-DEMO 0.874001 ± 0.014323 0.008721 ± 0.000159

ZDT6 DEMO/Parent 0.649704 ± 0.004912 0.104520 ± 0.015486
NSGA-II 1.014067 ± 0.010421 0.007942 ± 0.000105
SM-DEMO 0.007750 ± 0.000083 0.002014 ± 0.000117

Figure 2 shows one random run of the SM-DEMO algorithm. It is clear that the SM-DEMO algorithm can produce a good approximation and a uniform distribution.

SM-DEMO simulation curves for ZDT2, ZDT3, ZDT4, and ZDT6.

3. Proposed Hybrid Algorithm for CMOP

The space of a constrained multiobjective optimization problem can be divided into the feasible solution space and the infeasible solution space, as shown in Figure 3, where S is the search space, Ω is the feasible solution space, and Z is the infeasible solution space. x_i (i = 1, 2, 3, 4) are feasible solutions, and y_i (i = 1, 2, 3, 4) are infeasible solutions. Assume that x* is the global optimal solution and y_1 is the infeasible solution closest to x*. If y_1 is not excluded by the evolutionary algorithm, the search is permitted to explore boundary regions from new directions, where the optimum is likely to be found.

Distribution diagram of search space.

3.1. General Idea of the Proposed Algorithm

Researchers have gradually realized the merit of infeasible solutions in searching for the global optimum in the feasible region, and some infeasible solutions with better performance are allowed to be saved. Farmani et al. formulated a method to ensure that infeasible solutions with a slight violation become feasible during evolution. Based on this constraint processing approach for multiobjective optimization problems, the proposed hybrid DE algorithm avoids constructing a penalty function and avoids directly deleting meaningful infeasible solutions.

Here, the proposed algorithm maintains multiple groups of functional partitions, which include an evolutionary population Pg of size Np, an intermediate population Mg to save feasible individuals, an intermediate population Sg to save infeasible individuals, a population Pf to save the optimal feasible solutions found in the search process, and a population Pc to save the optimal infeasible solutions. The relationship of the multiple populations is shown in Figure 4.

Relationship diagram of the multiple populations.

With the description of (1), equality constraints are transformed into inequality constraints as |h_j(X)| - \delta \le 0, where j = p+1, \ldots, q and δ is a positive tolerance value. To evaluate an infeasible solution, the degree of constraint violation of individual X on the ith constraint is calculated as follows:

(8) V_i(X) = \begin{cases} \max\{0, g_i(X)\}, & i = 1, 2, \ldots, p, \\ \max\{0, |h_i(X)| - \delta\}, & i = p+1, \ldots, q. \end{cases}

The final constraint violation of each individual in the population can be obtained by calculating the mean of the normalized constraint violations.
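Equation (8) and the subsequent averaging can be sketched as follows; the normalization constants `scale` are an assumption here, since the paper does not give them explicitly:

```python
def violation_degree(x, ineq, eq, delta=1e-4):
    """Per-constraint violations per (8): an inequality g_i(x) <= 0
    contributes max(0, g_i(x)); an equality h_j(x) = 0, relaxed to
    |h_j(x)| - delta <= 0, contributes max(0, |h_j(x)| - delta)."""
    v = [max(0.0, g(x)) for g in ineq]
    v += [max(0.0, abs(h(x)) - delta) for h in eq]
    return v

def mean_violation(x, ineq, eq, delta=1e-4, scale=None):
    """Overall violation: mean of the (optionally normalized) components."""
    v = violation_degree(x, ineq, eq, delta)
    if scale:
        v = [vi / si for vi, si in zip(v, scale)]
    return sum(v) / len(v)
```
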

In order to take advantage of infeasible solutions with better performance, we propose the following adaptive differential mutation operator, which generates the individual variation by learning from the mutation operator DE/rand-to-best/1/bin, according to the rules defined by Price et al. Considering each individual vector x_i^G, a mutant individual \hat{x}_i^{G+1} is defined by

(9) \hat{x}_i^{G+1} = \begin{cases} x_i^G + F_1 (x_{f1}^G - x_{r1}^G) + F_2 (x_{f2}^G - x_{r2}^G), & RC \ge \mathrm{rand}(), \\ x_i^G + F_1 (y_i^G - x_{r1}^G) + F_2 (x_{f1}^G - x_{r2}^G), & RC < \mathrm{rand}(), \end{cases}

where r1 and r2 represent different integers, also different from i, randomly generated over [1, Np]; F_1 and F_2 are the scaling factors; x_{f1}^G and x_{f2}^G are randomly drawn from Pf; y_i^G is randomly drawn from Pc; and RC is the mutation factor

(10) RC = Rc_0 \cdot \frac{\gamma_G + \mathrm{const}}{\gamma_{G-1} + \mathrm{const}},

where Rc_0 is the initial value of the variability factor, const is a small constant that keeps the fraction meaningful, and \gamma_G is defined as

(11) \gamma_G = \frac{\text{number of infeasible solutions in } P_g}{Np}.
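A sketch of (9)-(11) follows; note that the form of RC is our reconstruction of a garbled formula, read as Rc_0 scaled by the ratio of infeasible fractions in consecutive generations:

```python
import random

def rc_factor(Rc0, gamma_G, gamma_prev, const=1e-6):
    """Adaptive factor per (10), as reconstructed here; `const` keeps
    the fraction defined when gamma_prev = 0."""
    return Rc0 * (gamma_G + const) / (gamma_prev + const)

def adaptive_mutant(x, xf1, xf2, y, xr1, xr2, F1, F2, RC):
    """Mutation (9): learn from the feasible archive Pf (vectors xf1, xf2)
    when RC >= rand(), otherwise pull toward a promising infeasible
    solution y from Pc."""
    if RC >= random.random():
        return [x[k] + F1 * (xf1[k] - xr1[k]) + F2 * (xf2[k] - xr2[k])
                for k in range(len(x))]
    return [x[k] + F1 * (y[k] - xr1[k]) + F2 * (xf1[k] - xr2[k])
            for k in range(len(x))]
```
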

3.2. Framework of the Proposed Algorithm

The proposed algorithm is described as follows.

Step 1 (initialization).

Generate the populations Pg, Pf, and Pc of sizes Np, NP1, and NP2, respectively. Set the values of CR (crossover probability), Gmax (the maximum number of generations), GSM (the number of iterations of evolution by the NM simplex method), g = 1 (the current generation number), and the positive control parameters F1 and F2 for scaling the difference vectors. Randomly generate the parent population Pg from the decision space, and let Pf, Pc, and the intermediate populations Sg and Mg be empty.

Step 2 (DE reproduction).

Using (3) and (9) for mutation, crossover, and selection, an offspring population is created. Judge the constraints of all individuals in Pg: in accordance with (8), first calculate the constraint violation degree V_i(X) of each individual. If V_i(X) = 0, the solution is feasible and is preserved in the intermediate set Mg; if V_i(X) > 0, the solution is infeasible and is preserved in the intermediate set Sg.

Step 3 (simplex method).

Apply the NM simplex method operator to the present population if g ≥ Gmax/2. Update the present population Mg when the number of iterations exceeds the maximum iterations.

Step 4 (Pf construction).

Rank chromosomes in Mg based on (5), and generate the elitist population Pf (the size is Np) from the ranked population Mg.

Step 5 (Pc construction).

Add the chromosomes in Sg with slight constraint violations to the Pc.

Step 6 (mixing the population).

Combine Sg with the existing parent population to form a new set Mg, and remove the duplicate individuals from Mg.

Step 7 (evolution).

Randomly choose chromosomes from Pc, Pg, and Pf. Use the adaptive differential mutation and uniform discrete crossover to obtain the offspring population Pg+1.

Step 8 (termination).

If the stopping criterion is met, stop and output the best solution; else, go to Step 2.
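Steps 1-8 can be condensed into a toy skeleton. The version below is heavily simplified and, as an assumption for brevity, scalarized to a single objective with a greedy feasibility-first update; it only illustrates the control flow of the partitioned populations (Pg, Mg, Sg, Pf, Pc), not the paper's full multiobjective machinery:

```python
def hybrid_de(problem, Np=20, Gmax=40, seed=0):
    """Toy skeleton of Steps 1-8 for one run. `problem` is a triple
    (f, viol, bounds): scalar objective, violation degree, box bounds."""
    import random
    random.seed(seed)
    f, viol, bounds = problem
    def rand_x():                                   # Step 1: random init
        return [lo + random.random() * (hi - lo) for lo, hi in bounds]
    Pg = [rand_x() for _ in range(Np)]
    for _ in range(Gmax):                           # Step 8: generation loop
        Mg, Sg = [], []                             # Step 2: feasibility split
        for x in Pg:
            (Mg if viol(x) == 0 else Sg).append(x)
        Pf = sorted(Mg, key=f)[:Np // 2]            # Step 4: feasible elites
        Pc = sorted(Sg, key=viol)[:Np // 10 + 1]    # Step 5: near-feasible
        nxt = []
        for x in Pg:                                # Step 7: DE-style offspring
            a = random.choice(Pf or Pg)
            b = random.choice(Pc or Pg)
            c = [xi + 0.5 * (ai - bi) for xi, ai, bi in zip(x, a, b)]
            c = [min(max(ci, lo), hi) for ci, (lo, hi) in zip(c, bounds)]
            # greedy feasibility-first acceptance (lexicographic compare)
            nxt.append(c if (viol(c), f(c)) < (viol(x), f(x)) else x)
        Pg = nxt
    return min((x for x in Pg if viol(x) == 0), key=f, default=None)
```
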

3.3. Experimental Study

In this section, we choose three problems, CTP, TNK, and BNH, as shown in Table 3, to test the proposed method and compare it with the current CNSGA-II.

Test functions.

Test function | Objective functions, minF(X) = min[f_1(X), f_2(X)] | Constraints | Range of variables

CTP: f_1(X) = x_1; f_2(X) = c(X)\exp(-f_1(X)/c(X)), where c(X) = 41 + \sum_{i=2}^{5} [x_i^2 - 10\cos(2\pi x_i)]. Constraints: g_1(X) = \cos(\theta)(f_2(X) - e) - \sin(\theta) f_1(X), g_2(X) = a \left| \sin\{ b\pi [\sin(\theta)(f_2(X) - e) + \cos(\theta) f_1(X)]^c \} \right|^d, with g_1(X) \ge g_2(X). Range: 0 \le x_1 \le 1, -5 \le x_i \le 5 (i = 2, 3, 4, 5).

BNH: f_1(X) = 4x_1^2 + 4x_2^2; f_2(X) = (x_1 - 5)^2 + (x_2 - 5)^2. Constraints: g_1(X) = (x_1 - 5)^2 + x_2^2 - 25 \le 0, g_2(X) = -(x_1 - 8)^2 - (x_2 + 3)^2 + 7.7 \le 0. Range: 0 \le x_1 \le 5, 0 \le x_2 \le 3.

TNK: f_1(X) = x_1; f_2(X) = x_2. Constraints: g_1(X) = -x_1^2 - x_2^2 + 1 + 0.1\cos(16\arctan(x_1/x_2)) \le 0, g_2(X) = (x_1 - 0.5)^2 + (x_2 - 0.5)^2 - 0.5 \le 0. Range: 0 \le x_i \le \pi (i = 1, 2).

For the CTP problem, there are six parameters θ, a, b, c, d, and e that must be chosen so that a portion of the unconstrained Pareto-optimal region becomes infeasible. Each constraint is an implicit nonlinear function of the decision variables; thus, it may be difficult to find many solutions on a nonlinear constraint boundary. We take two sets of values for the six parameters of the CTP problem: (1) CTP1: θ = 0.1π, a = 40, b = 0.5, c = 1, d = 2, and e = -2; (2) CTP2: θ = -0.2π, a = 0.2, b = 10, c = 1, d = 6, and e = 1. The Pareto frontiers, the feasible solution spaces, and the infeasible solution spaces are shown in Figure 5.

CTP solution spaces for CTP1 and CTP2.

The parameters are initialized as follows: the size of population Pg is Np = 200, the size of Pf is NP1 = 150, the size of Pc is NP2 = 10, the scaling factors F1 and F2 are randomly generated within [0.5, 1], the crossover rate is CR = 0.6, the maximum evolution generation is Gmax = 300, and the number of SM evolution iterations is GSM = 100. Both the proposed algorithm and CNSGA-II are real coded with equal population size and equal maximum evolution generation. Each algorithm runs 20 times independently for each test function.

Figure 6 shows the result of one random run of the proposed algorithm and CNSGA-II; the smooth curve "—" represents the Pareto frontier, and "◆" stands for the solutions achieved by the proposed algorithm or CNSGA-II.

Comparison of Pareto frontiers for CTP1, CTP2, BNH, and TNK, each calculated by the proposed algorithm and by CNSGA-II.

It is obvious that the proposed algorithm returns a better approximation to the true Pareto-optimal frontier and a more uniform distribution. We also evaluated the algorithms based on the two criteria γ and SP, as shown in Table 4. It can be observed from the data in Table 4 that the proposed algorithm performs significantly better than the classical CNSGA-II algorithm in convergence and distribution uniformity. The simulation results show that this algorithm can accurately converge to the global Pareto solutions while maintaining the diversity of the population.

The comparison of performance.

Test function Algorithm γ SP
CTP1 CNSGA-II 0.021317 ± 0.000323 0.873321 ± 0.08725
The proposed 0.009836 ± 0.000410 0.567933 ± 0.01845

CTP2 CNSGA-II 0.011120 ± 0.000753 0.78314 ± 0.02843
The proposed 0.007013 ± 0.000554 0.296543 ± 0.00453

BNH CNSGA-II 0.014947 ± 0.000632 0.336941 ± 0.00917
The proposed 0.013766 ± 0.000043 0.209321 ± 0.00561

TNK CNSGA-II 0.013235 ± 0.000740 0.464542 ± 0.00730
The proposed 0.006435 ± 0.000017 0.224560 ± 0.00159
4. Optimization of Grinding and Classification Process

4.1. Bauxite Grinding and Classification Process

The grinding and classification process is the key preparation step for bauxite mineral processing. Here, we consider a bauxite grinding process in a mineral company with single-stage grinding and two-stage classification, as shown in Figure 7.

Flow diagram of the grinding and classification process.

The process consists of a grinding ball mill and two spiral classifiers. The first classifier recycle is put back into the ball mill for regrinding, and the first-stage overflow is fed into the second spiral classifier after being mixed with water; the second classifier recycle is prepared for Bayer production as the rough concentrate, and the second-stage overflow is sent to the subsequent flotation process. The production objectives are composed of the production yield, technically represented by the solid flow of feed ore since the process is nonstorable, and the mineral processing quality, represented by the percentage of small-size fractions of mineral particles in the second-stage overflow.

4.2. Predictive Model of the Grinding and Classification Process

Here, we establish the mathematical predictive model of each unit process in the bauxite grinding and classification process. The notations of the indexes, decision variables, and parameters are listed in Table 5. These notations will be used for the model of the grinding and classification process.

Notations for the model of the grinding and classification process.

Notation Description
p_i Particle percentage of the ith size fraction in the ball mill overflow
f_j Particle percentage of the jth size fraction in the feed ore
C Rate of the first classifier recycle
E_ai Efficiency of the first spiral classifier
τ Mean residence time
P_1 Internal concentration in the ball mill
M_MF Solid flow of feed ore
W_1 Water addition of the first classifier recycle
W_2 Classifier water addition
b_ij Breakage distribution function
S_i Breakage rate function
d_i Particle size of the ith fraction
α_ci Particle percentage of the ith size fraction in the first classifier overflow
d_o Unit size of the particle
P_2 First-stage overflow
φ_B Ball filling rate
A_c, B_c Parameters of the first-stage classifier overflow size fraction distribution
d_50c Particle size after corrected separation
m Separation accuracy
k Intermix index
E_amin, d_50c, m, k Key parameters for the efficiency of the second spiral classifier
a, α, μ, Λ Four key parameters controlling the breakage rate function
d_min, d_max Minimum and maximum particle sizes
A_MF, B_MF Parameters of the feed ore size fraction distribution
F(i) Cumulative particle percentage less than the ith size fraction in the feed ore
α_ci Particle percentage of the ith size fraction in the second classifier overflow
E_ai Efficiency of the second spiral classifier

4.2.1. Ball Mill Circuit Model

Here, p_i is the particle percentage of the ith size fraction in the ball mill overflow, f_j is the particle percentage of the jth size fraction in the feed ore, the rate of the first classifier recycle C is known, and E_ai is the efficiency of the first spiral classifier. According to a technical report of field investigation and study, we have

(12)
\[
p_i(1+C) = \frac{d_{ii} f_i + \sum_{j=1}^{i-1} d_{ij}\left[E_{aj}\, p_j (1+C) + f_j\right]}{1 - d_{ii} E_{ai}}, \quad i > 1,
\]
\[
d_{ij} = \begin{cases} e_j, & i = j, \\ \sum_{k=j}^{i-1} c_{ik} c_{jk} (e_k - e_i), & i > j, \end{cases}
\qquad
c_{ij} = \begin{cases} -\sum_{k=i}^{j-1} c_{ik} c_{jk}, & i < j, \\ 1, & i = j, \\ \dfrac{1}{S_i - S_j} \sum_{k=j}^{i-1} S_k b_{ik} c_{kj}, & i > j, \end{cases}
\]
\[
e_j = \frac{1}{(1 + 0.5\,\tau S_j)(1 + 0.25\,\tau S_j)^2}, \qquad \tau = \frac{8 P_1}{M_{MF}},
\]

where τ is the mean residence time of the minerals and P_1 is the internal concentration in the ball mill:

(13)
\[
P_1 = \frac{M_{MF}(1+C)}{M_{MF}(1+C) + W_1 + 0.3\,(W_1 + W_2)},
\]

where M_MF (t/h) is the solid flow of feed ore, W_1 is the water addition of the first classifier recycle, and W_2 is the classifier water addition. b_ij is the breakage distribution function, and S_i is the breakage rate function, which satisfies

(14)
\[
S_i = \frac{a\,(d_i/d_o)^{\alpha}}{1 + (d_i/\mu)^{\Lambda}},
\]

where d_i is the particle size of the ith fraction and d_o is the unit size; when one millimeter is the unit, d_o = 1 and d_i = i (mm). The quantities a, α, μ, and Λ are four key parameters that control the breakage rate function.
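The pieces of the ball mill circuit model above can be sketched numerically. The following is a minimal illustration, with hypothetical parameter values, of the breakage rate function (14), the residence-time factor e_j from (12), and the internal concentration (13); it is not the plant-identified model.

```python
def breakage_rate(d, a, alpha, mu, Lam, d0=1.0):
    """Breakage rate S_i of Eq. (14); d in mm, d0 the unit size (1 mm here)."""
    return a * (d / d0) ** alpha / (1.0 + (d / mu) ** Lam)

def residence_factor(S, tau):
    """e_j of Eq. (12): mixer-cascade factor with mean residence time tau."""
    return 1.0 / ((1.0 + 0.5 * tau * S) * (1.0 + 0.25 * tau * S) ** 2)

def pulp_concentration(M_MF, C, W1, W2):
    """P_1 of Eq. (13): internal pulp concentration in the ball mill."""
    return M_MF * (1 + C) / (M_MF * (1 + C) + W1 + 0.3 * (W1 + W2))
```

Note that S_i grows like d^α for fine sizes and is damped by the (d/μ)^Λ term for coarse particles, matching the usual shape of a ball mill breakage rate curve.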

In a concrete grinding and classification process, the ball mill size is fixed and the ball mill speed is constant. Through data acquisition and testing of the grinding and classification steady-state loop, a regression model can be established between a, α, μ, Λ and the condition variables and size fraction distribution. The input variables are the ball filling rate φ_B, the solid flow of feed ore M_MF, the water addition of the first classifier recycle W_1, and the parameters of the feed ore size fraction distribution A_MF, B_MF. The regression model is

(15)
\[
\begin{bmatrix} a & \alpha & \mu & \Lambda \end{bmatrix}^T
= \begin{bmatrix} x_{11} & \cdots & x_{15} \\ \vdots & & \vdots \\ x_{41} & \cdots & x_{45} \end{bmatrix}
\cdot \begin{bmatrix} W_1 & \phi_B & A_{MF} & B_{MF} & M_{MF} \end{bmatrix}^T.
\]
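The regression (15) is just a linear map applied to the vector of condition variables. A minimal sketch, in which the coefficient matrix is a placeholder rather than plant-identified values:

```python
import numpy as np

def breakage_parameters(X_coef, W1, phi_B, A_MF, B_MF, M_MF):
    """Eq. (15): linear regression map from condition variables to the
    breakage-rate parameters (a, alpha, mu, Lambda).

    X_coef is the 4x5 coefficient matrix [x_ij] identified offline from
    steady-state plant data; any numbers passed in here are illustrative
    placeholders, not plant-identified coefficients.
    """
    u = np.array([W1, phi_B, A_MF, B_MF, M_MF], dtype=float)
    a, alpha, mu, Lam = X_coef @ u
    return a, alpha, mu, Lam
```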

The values of x_ij (i = 1, 2, 3, 4; j = 1, 2, …, 5) can be obtained by regression on experimental data. A_MF and B_MF are obtained from the feed ore size fraction distribution, and F(i), the cumulative particle percentage finer than the ith size fraction in the feed ore, is represented as follows:

(16)
\[
F(i) = 1 - \exp\left(-A_{MF}\, d_i^{\,B_{MF}}\right).
\]
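Equation (16) is a Rosin-Rammler-type cumulative curve; differencing it at the size-class edges gives the mass fraction in each class. A small sketch, with illustrative parameter values:

```python
import numpy as np

def cumulative_passing(d, A_MF, B_MF):
    """Eq. (16): cumulative fraction finer than size d (0 to 1)."""
    return 1.0 - np.exp(-A_MF * np.asarray(d, dtype=float) ** B_MF)

def size_class_fractions(d_edges, A_MF, B_MF):
    """Mass fraction in each size class [d_{i-1}, d_i), obtained by
    differencing the cumulative curve at the class edges."""
    return np.diff(cumulative_passing(d_edges, A_MF, B_MF))
```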

4.2.2. Spiral Classifier Model

α_ci is the particle percentage of the ith size fraction in the first classifier overflow, and p_i is the particle percentage of the ith size fraction in the ball mill overflow. The spiral classifier model is as follows:

(17)
\[
\alpha_{ci} = \frac{p_i \,(1 - E_{ai})}{\sum_i p_i \,(1 - E_{ai})} \times 100\%,
\]

where E_ai is the efficiency of the first spiral classifier, given by the mechanism formula

(18)
\[
E_{ai} = 1 - \exp\left[-0.693 \left(\frac{d_i - d_{\min}}{d_{50c} - d_{\min}}\right)^{m}\right]
+ E_{a\min} \left[1 - \frac{d_i - d_{\min}}{d_{\max}}\right]^{k},
\]

where d_i is the particle size of the ith fraction, d_min and d_max are the minimum and maximum particle sizes, d_50c is the particle size after correction separation (the corrected cut size), m is the separation accuracy, and k is the intermix index.
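Equations (17) and (18) can be sketched as a partition curve plus a renormalization of the overflow distribution. The parameter values in any call are illustrative, not plant-identified:

```python
import numpy as np

def classifier_efficiency(d, d_min, d_max, d50c, m, k, Ea_min):
    """Eq. (18): corrected partition (efficiency) curve of a spiral
    classifier; the Ea_min term models fines bypassing to the coarse
    product."""
    d = np.asarray(d, dtype=float)
    split = 1.0 - np.exp(-0.693 * ((d - d_min) / (d50c - d_min)) ** m)
    bypass = Ea_min * (1.0 - (d - d_min) / d_max) ** k
    return split + bypass

def overflow_distribution(p, Ea):
    """Eq. (17): renormalized size distribution of the overflow, in percent."""
    w = np.asarray(p, dtype=float) * (1.0 - np.asarray(Ea, dtype=float))
    return w / w.sum() * 100.0
```

At d_i = d_50c the exponential term equals 1 − e^(−0.693) ≈ 0.5, which is what makes d_50c the (corrected) cut size.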

Through data acquisition and testing of the grinding and classification steady-state loop, a regression model can be established between the classification parameters and the condition variables and size fraction distribution. The input variables include the solid flow of feed ore M_MF, the classifier water addition W_2, and the parameters of the ball mill overflow size fraction distribution A_MP, B_MP. The regression model is as follows:

(19)
\[
\begin{bmatrix} E_{a\min} & d_{50c} & m & k \end{bmatrix}^T
= \begin{bmatrix} y_{11} & \cdots & y_{14} \\ \vdots & & \vdots \\ y_{41} & \cdots & y_{44} \end{bmatrix}
\cdot \begin{bmatrix} M_{MF} & W_2 & A_{MP} & B_{MP} \end{bmatrix}^T,
\]

where the values of y_ij (i = 1, 2, 3, 4; j = 1, 2, 3, 4) can be obtained by data regression.

The first-stage overflow concentration P_2 is calculated as follows:

(20)
\[
P_2 = \frac{M_{MF}}{M_{MF} + W_1 + W_2}.
\]

Similarly, we can obtain the second spiral classifier model:

(21)
\[
\alpha'_{ci} = \frac{\alpha_{ci}\,(1 - E'_{ai})}{\sum_i \alpha_{ci}\,(1 - E'_{ai})} \times 100\%,
\]

where α′_ci is the particle percentage of the ith size fraction in the second classifier overflow, α_ci is the particle percentage of the ith size fraction in the first classifier overflow, and E′_ai is the efficiency of the second spiral classifier:

(22)
\[
E'_{ai} = 1 - \exp\left[-0.693 \left(\frac{d_i - d_{\min}}{d'_{50c} - d_{\min}}\right)^{m'}\right]
+ E'_{a\min} \left[1 - \frac{d_i - d_{\min}}{d_{\max}}\right]^{k'},
\]

where E′_amin, d′_50c, m′, and k′ are the key parameters of the second spiral classifier efficiency. Through data acquisition and testing of the grinding and classification steady-state loop, a regression model can be established between the classification parameters and the condition variables and size fraction distribution. The input variables include the solid flow of feed ore M_MF and the parameters of the first-stage classifier overflow size fraction distribution A_c, B_c, which are fitted by an equation analogous to (16). The regression model is as follows:

(23)
\[
\begin{bmatrix} E'_{a\min} & d'_{50c} & m' & k' \end{bmatrix}^T
= \begin{bmatrix} y_{11} & \cdots & y_{13} \\ \vdots & & \vdots \\ y_{41} & \cdots & y_{43} \end{bmatrix}
\cdot \begin{bmatrix} M_{MF} & A_c & B_c \end{bmatrix}^T,
\]

where the values of y_ij (i = 1, 2, 3, 4; j = 1, 2, 3) can be obtained by regression on experimental data.

4.3. Optimization Model of Grinding and Classification Process

Two objective functions of the process are identified: one is to maximize the output f_1(X), and the other is to maximize the small-size fractions (less than 0.075 mm) in the second-stage overflow f_2(X). It is also necessary to ensure that the grinding product meets all other technical requirements and causes the least disturbance to the following flotation circuit. As constraints, the feed load of the grinding circuit M_MF, the steel ball filling rate φ_B, the first- and second-stage overflow concentrations P_1 and P_2, and the particle percentages of fine size fraction in the first and second classifier overflows, α_{c,−0.075} and α′_{c,−0.075}, should be within user-specified bounds.

The operation variables are the solid flow of feed ore M_MF, the water addition of the first classifier recycle W_1, the ball filling rate φ_B, and the water addition of the second classifier W_2. Based on all of the above, the multiobjective optimization model of the grinding and classification process is as follows:

(24)
\[
\max F = \max\,[f_1(X), f_2(X)],
\]
\[
f_1(X) = M_{MF}, \qquad
f_2(X) = \alpha'_{c,-0.075} = f(M_{MF}, W_1, W_2, \phi_B),
\]
\[
\text{s.t.}\quad
M_{MF}^{\min} \le M_{MF} \le M_{MF}^{\max}, \quad
\phi_B^{\min} \le \phi_B \le \phi_B^{\max},
\]
\[
P_1^{\min} \le P_1 \le P_1^{\max}, \quad
P_2^{\min} \le P_2 \le P_2^{\max}, \quad
\alpha_{c,-0.075} \ge \alpha_c^{\min}, \quad
\alpha'_{c,-0.075} \ge \alpha_c'^{\min}.
\]
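One way to feed model (24) to a constraint-handling evolutionary algorithm is to return the objective vector together with a total constraint violation. The sketch below assumes a `process_model` callable standing in for the full chain (12)-(23), and the bound names are illustrative:

```python
def evaluate(x, process_model, bounds):
    """Objectives and total constraint violation for model (24).

    x = (M_MF, W1, W2, phi_B). `process_model` must return
    (P1, P2, alpha_fine_1, alpha_fine_2): the two pulp concentrations and
    the fine-fraction percentages of the two classifier overflows.
    """
    M_MF, W1, W2, phi_B = x
    P1, P2, a1, a2 = process_model(M_MF, W1, W2, phi_B)
    objectives = (M_MF, a2)  # f1 and f2, both maximized
    g = [  # each entry is <= 0 when the corresponding constraint holds
        bounds["M_MF"][0] - M_MF, M_MF - bounds["M_MF"][1],
        bounds["phi_B"][0] - phi_B, phi_B - bounds["phi_B"][1],
        bounds["P1"][0] - P1, P1 - bounds["P1"][1],
        bounds["P2"][0] - P2, P2 - bounds["P2"][1],
        bounds["alpha1_min"] - a1,
        bounds["alpha2_min"] - a2,
    ]
    violation = sum(max(0.0, gi) for gi in g)  # zero iff feasible
    return objectives, violation
```

The violation value is exactly what the proposed algorithm's infeasible-solution archive would rank by: infeasible individuals with small total violation are the "meaningful" ones worth preserving.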

In (24), f(M_MF, W_1, W_2, φ_B) denotes the grinding and classification model represented by (12)–(23). The proposed algorithm is applied to solve the problem, and the optimization results are shown in Table 6.

The optimization results calculated by the proposed algorithm.

Number    f_1(X) (t/h)    f_2(X) (%)
1 92.972 90.814
2 91.810 91.900
3 92.360 90.923
4 90.620 93.460
5 91.310 92.494
6 90.170 93.800
7 89.390 95.200
8 89.480 94.932
9 91.530 92.400
10 89.298 95.763
11 89.824 94.549
12 89.800 94.395
13 89.710 94.618
14 91.170 92.800
15 91.880 91.840
16 92.190 91.281
17 89.870 94.090
18 91.000 93.285
19 91.960 91.520
20 91.124 93.221
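Since both objectives are maximized, the solutions in Table 6 should form (approximately) a Pareto front: no row should have both a larger f_1 and a larger f_2 than another. A minimal dominance filter to check this:

```python
def nondominated(points):
    """Filter a list of (f1, f2) pairs, keeping each point that no other
    point dominates when both objectives are maximized."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)]
```

Running it over the 20 rows of Table 6 flags solution 12 (89.800, 94.395) as dominated by solution 11 (89.824, 94.549); the other 19 rows are mutually nondominated.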

With practical process data from a grinding circuit of a mineral plant, the simulation of this hybrid intelligent method adopted the same settings for the variations in fresh slurry feed velocity, density, particle size distribution, and cyclone feed operating configuration.

The comparison of production data and optimization results from Table 6 is shown in Figure 8, where “◆” represents the optimization results of the proposed algorithm and “○” represents the original data collected from the field without optimization. According to the objectives, data points closer to the upper-right edge are more beneficial. The proposed optimization results are clearly far better than the original data, indicating the effectiveness of the optimization approach.

The comparison chart between optimization results and industrial data.

4.4. TOPSIS Method for Solution Selection

The resolution of a multiobjective optimization problem does not end when the Pareto-optimal set is found. In practical operational problems, a single solution must be selected. TOPSIS is a useful technique for dealing with multiattribute or multicriteria decision-making (MADM/MCDM) problems in the real world. The standard TOPSIS method chooses the alternative that simultaneously has the shortest distance from the positive-ideal solution and the farthest distance from the negative-ideal solution. According to the TOPSIS method, the relative closeness coefficients are calculated, and the best solution in Table 6 is solution number 10, with f_1(X) = 89.298 (t/h) and f_2(X) = 95.763 (%). The corresponding decision variables are M_MF = 89.298 (t/h), φ_B = 32%, W_1 = 75.127 (t/h), and W_2 = 16.296 (t/h).
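The standard TOPSIS ranking described above can be sketched compactly. This is a generic implementation for benefit criteria with vector normalization; the weights used in the paper's selection are not specified here, so equal weights are assumed by default:

```python
import numpy as np

def topsis_closeness(D, weights=None):
    """Relative closeness coefficients of standard TOPSIS for a decision
    matrix D (rows = alternatives, columns = benefit criteria to
    maximize); higher closeness = better alternative."""
    D = np.asarray(D, dtype=float)
    w = (np.full(D.shape[1], 1.0 / D.shape[1]) if weights is None
         else np.asarray(weights, dtype=float))
    V = D / np.linalg.norm(D, axis=0) * w        # weighted normalized matrix
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)  # positive / negative ideal
    d_pos = np.linalg.norm(V - v_pos, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(V - v_neg, axis=1)    # distance to negative ideal
    return d_neg / (d_pos + d_neg)
```

Applied to the two objective columns of Table 6 with application-specific weights, the row with the largest closeness coefficient is the recommended operating point.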

5. Conclusions

Motivated by the requirements of engineering optimization in complex practical grinding and classification processes, we proposed a hybrid multiobjective differential evolution algorithm integrating several beneficial features. Firstly, an archiving mechanism for infeasible solutions is established with a functionally partitioned multi-population, which aims to direct the population to approach or land in the feasible region from different directions during evolution. Secondly, we propose a constraint violation function to select infeasible individuals with better performance, so that they are preserved and participate in the subsequent evolution. Thirdly, a nondominated ranking strategy is designed to improve crowding-distance sorting and yield a more uniform distribution of Pareto solutions. Finally, the simplex method is inserted into the differential evolution process to purposefully enrich diversity without excessive computational cost. The advantages of the proposed algorithm are the exemption from constructing a penalty function and the direct preservation of meaningful infeasible solutions. Simulation results on benchmarks indicate that the proposed algorithm can converge quickly and effectively to the true Pareto frontier with better distribution.

Based on the investigated information about the grinding circuit process, we established a multiobjective optimization model with equations derived from mechanism knowledge, parameters identified by data regression, and constraints from technical requirements. The nonlinear multiobjective optimization model is too complicated to be solved by traditional gradient-based algorithms. The proposed hybrid differential evolution algorithm was applied and tested to obtain a Pareto solution set. It proved valuable for operational decision making in the industrial process and showed superiority over the operation carried out in production. In fact, many operating parameters in complex processes are highly coupled and conflict with each other; the optimal operation of the entire production process is very difficult to obtain by manual calculation, let alone under fluctuating process conditions. The application case indicates that the proposed method has good performance and should inspire further research on evolutionary methods for engineering optimization.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (61374156, 61273187, and 61134006), the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2012BAK09B04), the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (61321003), the Fund for Doctor Station of the Ministry of Education (20110162130011 and 20100162120019) and the Hunan Province Science and Technology Plan Project (2012CK4018).
