Cloud manufacturing (CMfg) is a new service-oriented smart manufacturing paradigm that provides a product development model in which users can configure, select, and utilize customized manufacturing services on demand. Because of its massive manufacturing resources, diverse users with individualized demands, heterogeneous manufacturing systems and platforms, and varied data and file types, CMfg is widely recognized as a complex manufacturing system operating in a complex environment and has received considerable attention in recent years. In practical CMfg scenarios, the volume of manufacturing tasks may be very large, and the cloud pool usually contains many candidate manufacturing services for the corresponding subtasks. These candidate services are selected and composed to complete a complex manufacturing task. Manufacturing service composition therefore plays a central role in the CMfg lifecycle and enables a complex manufacturing system to be stable, safe, reliable, efficient, and effective. In this paper, a new manufacturing service composition scheme named Multi-Batch Subtasks Parallel-Hybrid Execution Cloud Service Composition for Cloud Manufacturing (MBSPHE-CSCCM) is proposed; such composition is one of the most difficult combinatorial optimization problems, with NP-hard complexity. To address it, a novel optimization method named Improved Hybrid Differential Evolution and Teaching Based Optimization (IHDETBO) is proposed and described in detail. The results of simulation experiments and a case study validate the effectiveness and feasibility of the proposed algorithm.
National Natural Science Foundation of China (61701443, 61876168, 61403342); Natural Science Foundation of Zhejiang Province (LY18F030020)

1. Introduction
The continuing rise in customer expectations, the demand for environmentally friendly production, the need for rapid responsiveness to market changes, and other market competition pose a critical challenge to the manufacturing industry. Under these pressures, it is important that manufacturing partners along the industry chain work together to provide offerings that contain both material products and immaterial services or functionalities, so as to achieve a win-win situation among users, enterprises, the environment, and society. Cloud manufacturing (CMfg) thus emerges as the times require. The term "cloud manufacturing" was first coined by Li et al. [1]. CMfg is a special networked manufacturing mode that differs from traditional networked manufacturing modes such as ASP and MGrid. Traditional networked manufacturing is an independent and static system that lacks dynamics, intelligent clients, and an effective business model. Its shared resources are mostly software and other soft resources, seldom involving hardware resources or the overall manufacturing capacity of enterprises, and its concept emphasizes the centralized use of decentralized resources. As a new manufacturing paradigm, CMfg is a large-scale networked distributed manufacturing mode that simultaneously provides multiple users with customized manufacturing services on a CMfg system or platform by organizing online manufacturing resources, which are virtualized and encapsulated as manufacturing services. Both "integration of distributed resources" and "distribution of integrated resources" are reflected in CMfg. The advantage of CMfg is that the philosophy of "software as a service" is expanded to "manufacturing as a service", which enables the cloud system to virtualize and servitize manufacturing resources and capabilities, including not only soft resources but also hard resources.
In CMfg there are massive manufacturing resources, various users with individualized demands, heterogeneous manufacturing systems and platforms, and different data and file types, so CMfg is clearly a complex system running in a complex environment. There is a strong demand for the system to achieve stability, safety, reliability, efficiency, and effectiveness, and many theoretical and technological challenges remain for CMfg in practical applications.
In the past decades, through the integration of advanced technologies such as Cyber-Physical Systems (CPS) [2], Artificial Intelligence (AI) [3], the Internet of Things (IoT) [4], Big Data (BD) [5], supply chain management [6], and cloud computing [7], CMfg has developed rapidly and gradually come into focus. Many research institutions now work on CMfg, and many theoretical studies on its definition, architecture, resource modeling, QoS evaluation, and resource scheduling have been reported. More importantly, thanks to these technologies, more and more national governments are also focusing on CMfg and promoting its development. Taking the German government's "Industrie 4.0" as an example, any production system can seamlessly access the cloud platform to obtain remote maintenance or find personalized custom solutions.
Since the manufacturing services in CMfg are massive, users can acquire substantial production capability to fulfill their customized demands. A complex manufacturing task, in particular, is usually divided into several subtasks, and for every subtask there are many manufacturing services that can be selected. Although these candidate services have different performance or QoS, they all satisfy, or fall within the range of, the user's requirements. Such collaboration is essentially enabled by the composition of manufacturing services, i.e., cloud service composition for cloud manufacturing (CSCCM). CSCCM involves selecting appropriate and optimal manufacturing services from the candidates and assembling them together with logistics. In other words, CSCCM is an interconnected set of multiple specialized manufacturing services that offers the products or functionalities to solve a complex manufacturing task according to the user's requirements. Such composition is one of the most difficult combinatorial optimization problems, with NP-hard complexity. Tao et al. [8] divide existing service composition methods into five categories: business flow-based, AI-based, graph-based, agent-based, and QoS-aware service composition. Much current research focuses on QoS-aware service composition, because QoS attributes such as time, cost, reliability, and reputation deeply influence user satisfaction. Besides, many classical swarm intelligence and heuristic algorithms that have been widely studied in traditional manufacturing systems have also been introduced into CMfg, such as the genetic algorithm (GA) [9], simulated annealing (SA) [10], particle swarm optimization (PSO) [11], ant colony optimization (ACO) [12], the artificial bee colony (ABC) [13], chaos optimization (CO) [8], the differential evolution algorithm (DE) [14], and teaching-learning based optimization (TLBO) [15, 16].
Although the aforementioned algorithms and their improved versions can solve NP-hard problems to a certain degree, they do not handle mass tasks in CMfg well. When the task amount is very large and there are many candidate manufacturing services for the corresponding subtasks, several candidate services should be adopted for each subtask to achieve better optimization. Moreover, when the interfaces are highly standardized, multiple services can execute in a parallel and hybrid manner. The encoding methods and operations of the aforementioned algorithms act on single gene locations only, which is unsuitable for the scenario where multiple services are selected for one subtask: with multiple services, the transportation schemes between subtasks are diverse and must be considered when establishing the objective function. In addition, the global search ability of these algorithms is not ideal for the high-dimensional and complex objective functions in CMfg. So, in this paper, we propose a manufacturing service composition scheme named Multi-Batch Subtasks Parallel-Hybrid Execution Cloud Service Composition for Cloud Manufacturing (MBSPHE-CSCCM). Meanwhile, a novel optimization method named Improved Hybrid Differential Evolution and Teaching Based Optimization (IHDETBO) is proposed by adopting and improving the classical DE and TLBO algorithms.
The remainder of the paper is organized as follows. In Section 2, our previously proposed CMfg architecture is presented, and the problem statement is discussed. In Section 3, the basic concepts and operations of DE and TLBO algorithms are both summarized. In Section 4, the proposed algorithm IHDETBO is introduced in detail. In Section 5, the simulation experiments and case study are given, and the experimental results are discussed. In Section 6, the paper is summarized with concluding remarks and future work.
2. Problem Statement
In our previous work, we discussed the CMfg architecture [17–19]. As shown in Figure 1, there are three kinds of roles in CMfg: (1) the resource demander (RD), the demander of products, manufacturing resources, or services, for example, users or public institutions; (2) the resource provider (RP), the provider of products, manufacturing resources, or services, for example, enterprises or service cooperators; (3) the manager, who designs, develops, and maintains the CMfg equipment and related software such as professional middleware. From the system perspective, CMfg is composed of a cloud manufacturing platform (CMP) and the cloud end (CE); the latter contains cloud demanders (CD) and cloud providers (CP), which correspond to RD and RP, respectively. The CPs publish, update, cancel, and provide manufacturing resources and services through the CMP; the CDs, in turn, submit requirements to and obtain the corresponding products or services from the CMP, which functions like a great resource pool consisting of several sub-CMPs that can interact with each other. In addition, through cutting-edge technologies such as the Internet of Things (IoT), Big Data (BD), and Cyber-Physical Systems (CPS), manufacturing resources in a CP are encapsulated as cloud services (CS), which can then be accessed and obtained by CDs from the CMP. Thus, CMfg can provide users with manufacturing resources highly virtualized as services throughout the manufacturing life cycle [20].
The architecture of CMfg system [17].
Generally speaking, the lifecycle of CMfg has several phases, much like cloud computing: the definition and publication of CSs, the proposal of manufacturing task requirements, the matching of CSs, the composition and provision of CSs, the determination of the manufacturing contract, manufacturing and distribution, and the disposal of the manufacturing task [22]. The work proposed in this paper addresses the phase of composition and provision of CSs. As shown in Figure 2, after users have submitted a manufacturing task to the CMP, the CMP first decomposes the task into several subtasks according to the knowledge of domain experts, searches the cloud service pool (CSP) for proper CSs for the corresponding subtasks, and then selects and composes these CSs with optimization algorithms to satisfy users' QoS requirements such as time, cost, availability, and reputation. Because the total amount of the task may be very large and there are many candidate CSs meeting the performance requirements of the corresponding subtasks, several CSs may be chosen to complete the same subtask in parallel, and each of them delivers its production results to one or more CSs for the next subtask. Taking the first and second subtasks as an example (shown as SubT1 and SubT2 in Figure 2), they have two corresponding candidate CS sets $CSS_1=\{CS_{1,1},CS_{1,2},\ldots,CS_{1,m_1}\}$ and $CSS_2=\{CS_{2,1},CS_{2,2},\ldots,CS_{2,m_2}\}$, and the total amounts of these two subtasks are both T. Some of these CSs are allocated a production amount $T_{i,j}$, where $\sum_{j=1}^{m_1}T_{1,j}=\sum_{j=1}^{m_2}T_{2,j}=T$. So, some CSs in $CSS_1$ are allocated production tasks and deliver their production results to one or more CSs in $CSS_2$ after completing their own tasks. For example, in the composition scenario shown in Figure 2, three candidate CSs are chosen for the first subtask, i.e., $CS_{1,1}$, $CS_{1,2}$, and $CS_{1,j_1}$.
After completing their own tasks, $CS_{1,1}$ delivers its production results to $CS_{2,1}$ and $CS_{2,j_2}$; similarly, $CS_{1,2}$ to $CS_{2,1}$ and $CS_{2,2}$, and $CS_{1,j_1}$ to $CS_{2,j_2}$ and $CS_{2,m_2}$. Thus, a mass task can be transformed into multi-batch subtasks that are executed in a parallel-hybrid manner. Obviously, the key issue is CS composition with respect to users' QoS requirements, which is in essence an NP-hard combinatorial optimization problem.
Schematic of cloud service composition for cloud manufacturing.
3. Basic Concepts and Operations of DE Algorithm and TLBO Algorithm
MBSPHE-CSCCM is an NP-hard combinatorial optimization problem. Furthermore, with the rapidly increasing number of candidate CSs and the growing amount of cloud manufacturing tasks and subtasks, the search space is quite large, and it is hard to solve the MBSPHE-CSCCM problem at such a scale with traditional methods. In this paper, the DE and TLBO algorithms are adopted to design a new method termed IHDETBO; combining these two meta-heuristics enhances and balances the exploration and exploitation capacities. The basic concepts and operations of the two algorithms are detailed in the following sections.
3.1. DE Algorithm
DE was first proposed by Storn and Price [14]. It is a powerful evolutionary algorithm based on a stochastic search technique and is an efficient and effective global optimizer in the continuous search domain [23]. Similar to other evolutionary algorithms such as ABC, DE first randomly generates a population of NP n-dimensional initial vectors, the so-called individuals, i.e., $X_d^0=(x_{d,1}^0,x_{d,2}^0,\ldots,x_{d,j}^0,\ldots,x_{d,n}^0)$, $d=1,2,\ldots,NP$, generated by (1). Here, NP is a parameter of DE and indicates the population size:

(1) $x_{d,j}^0=x_j^{low}+\mathrm{rand}(0,1)\cdot(x_j^{up}-x_j^{low})$

where $j=1,2,\ldots,n$; $[x_j^{low},x_j^{up}]$ is the boundary of the jth variable; and $\mathrm{rand}(0,1)$ is a random number uniformly distributed over the interval $(0,1)$.
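For illustration, the initialization in Eq. (1) can be sketched in a few lines of NumPy (the function name and the example bounds below are ours, not from the paper):

```python
import numpy as np

def init_population(NP, low, up, seed=None):
    """Eq. (1): x_{d,j}^0 = x_j^low + rand(0,1) * (x_j^up - x_j^low)."""
    rng = np.random.default_rng(seed)
    low = np.asarray(low, dtype=float)
    up = np.asarray(up, dtype=float)
    # one uniform draw per individual and per dimension
    return low + rng.random((NP, low.size)) * (up - low)

# hypothetical bounds for a 3-dimensional search space
pop = init_population(NP=5, low=[-5.0, 0.0, 10.0], up=[5.0, 1.0, 20.0], seed=0)
print(pop.shape)  # (5, 3)
```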
Then the population evolves over generations through three types of operations, namely mutation, crossover, and selection, until one of the termination criteria is satisfied.
3.1.1. Mutation Operation
In the mutation operation of the kth generation, DE produces a mutant vector $U_d^k$. The seven most frequently used mutation operations are listed as follows:

(2) DE/rand/1: $U_d^k=X_{d1}^k+F(X_{d2}^k-X_{d3}^k)$
(3) DE/best/1: $U_d^k=X_{best}^k+F(X_{d1}^k-X_{d2}^k)$
(4) DE/current-to-best/1: $U_d^k=X_d^k+F(X_{best}^k-X_d^k)+F(X_{d1}^k-X_{d2}^k)$
(5) DE/best/2: $U_d^k=X_{best}^k+F(X_{d1}^k-X_{d2}^k)+F(X_{d3}^k-X_{d4}^k)$
(6) DE/rand/2: $U_d^k=X_{d1}^k+F(X_{d2}^k-X_{d3}^k)+F(X_{d4}^k-X_{d5}^k)$
(7) DE/current-to-rand/1: $U_d^k=X_d^k+K(X_{d1}^k-X_d^k)+F(X_{d2}^k-X_{d3}^k)$
(8) DE/current-to-pbest/1: $U_d^k=X_d^k+F(X_{pbest}^k-X_d^k)+F(X_{d1}^k-X_{d2}^k)$

where d1, d2, d3, d4, and d5 are distinct random integers uniformly generated from the set $\{1,2,\ldots,NP\}\setminus\{d\}$; $X_{best}^k$ is the best individual in the kth generation; $X_{pbest}^k$ is an individual randomly selected from the top $100p\%$ best individuals in the kth generation, with $p\in[5\%,20\%]$; $K$ is a control parameter randomly chosen within the range $[0,1]$; and $F$ is a fixed real number ($F\in[0,2]$) named the scale parameter (step size), which controls the amplification of the difference vectors.
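Two classical strategies from this family, DE/rand/1 and DE/best/1, can be sketched as follows (the function names are ours, and minimization is assumed when picking the best individual):

```python
import numpy as np

def mutate_rand_1(pop, d, F, rng):
    """DE/rand/1: U_d = X_{d1} + F * (X_{d2} - X_{d3}), with d1, d2, d3 distinct and != d."""
    others = [i for i in range(len(pop)) if i != d]
    d1, d2, d3 = rng.choice(others, size=3, replace=False)
    return pop[d1] + F * (pop[d2] - pop[d3])

def mutate_best_1(pop, fitness, d, F, rng):
    """DE/best/1: U_d = X_best + F * (X_{d1} - X_{d2})."""
    best = int(np.argmin(fitness))  # best individual under minimization
    others = [i for i in range(len(pop)) if i != d]
    d1, d2 = rng.choice(others, size=2, replace=False)
    return pop[best] + F * (pop[d1] - pop[d2])
```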
3.1.2. Crossover Operation
In the crossover operation, which follows the mutation operation of the kth generation, the trial/offspring vector $V_d^k=(v_{d,1}^k,v_{d,2}^k,\ldots,v_{d,j}^k,\ldots,v_{d,n}^k)$ is generated by mixing each pair consisting of the target vector $X_d^k$ and its corresponding mutant vector $U_d^k$. In the basic version, the binomial crossover operation is defined as follows:

(9) $v_{d,j}^k=\begin{cases}u_{d,j}^k,&\text{if }\mathrm{rand}(0,1)\le CR\text{ or }j=j_{rand},\\x_{d,j}^k,&\text{otherwise},\end{cases}$

where $\mathrm{rand}(0,1)$ is a uniform random number on the interval $(0,1)$; CR is the crossover constant set by the user within the interval $[0,1]$; and $j_{rand}\in\{1,2,\ldots,n\}$ is a randomly chosen index which ensures that the trial vector $V_d^k$ inherits at least one element from the mutant vector $U_d^k$ and thus differs from the target vector $X_d^k$.
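A minimal sketch of the binomial crossover in Eq. (9) (the function name is ours):

```python
import numpy as np

def binomial_crossover(x, u, CR, rng):
    """Eq. (9): take the mutant component when rand(0,1) <= CR or j == j_rand,
    otherwise keep the target component. The forced index j_rand guarantees
    that at least one component of the trial vector comes from the mutant."""
    n = x.size
    j_rand = rng.integers(n)
    mask = rng.random(n) <= CR
    mask[j_rand] = True
    return np.where(mask, u, x)
```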
3.1.3. Selection Operation
A greedy selection operation is used to select the individual from each pair of target vector $X_d^k$ and corresponding trial/offspring vector $V_d^k$ that survives into the next, (k+1)th, generation by comparing their fitness. For minimization problems, the individual with the smaller fitness is chosen; for maximization problems, the opposite is true.
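The greedy selection can be sketched as follows (the function name and fitness function are ours, for illustration):

```python
def greedy_select(x, v, f, minimize=True):
    """Greedy DE selection: the trial vector v replaces the target x
    only if it is at least as fit under the objective f."""
    fx, fv = f(x), f(v)
    keep_trial = (fv <= fx) if minimize else (fv >= fx)
    return v if keep_trial else x

# example with a simple sphere objective
f = lambda x: sum(xi * xi for xi in x)
print(greedy_select([2.0, 2.0], [1.0, 1.0], f))  # [1.0, 1.0]
```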
3.2. TLBO Algorithm
The TLBO algorithm was first proposed by Rao et al. [15, 16]. Based on the influence of a teacher on the output of learners in a class, teaching-learning is a motivated process in which an individual tries to learn something from others; the algorithm simulates the traditional teaching-learning phenomenon of a classroom [27]. TLBO thus has two fundamental modes of learning: (i) learning from the teacher (the teacher phase) and (ii) interacting with the other learners (the learner phase). The initialization is the same as in DE, using (1).
3.2.1. Teacher Phase
In the teacher phase, the algorithm simulates the learning of students from the teacher. The teacher (always the best individual of the entire population) puts maximum effort into raising the mean grade of the class towards his own level, and the learners gain knowledge according to the quality of teaching delivered by the teacher and the quality of the learners present in the class [27]. At any teaching-learning cycle k, let $X_{teacher}^k$ be the teacher and $X_{mean}^k=(\frac{1}{NP}\sum_{i=1}^{NP}x_{i,1}^k,\frac{1}{NP}\sum_{i=1}^{NP}x_{i,2}^k,\ldots,\frac{1}{NP}\sum_{i=1}^{NP}x_{i,j}^k,\ldots,\frac{1}{NP}\sum_{i=1}^{NP}x_{i,n}^k)$ be the mean individual, whose every element is the mean value of the corresponding dimension, where j and n have the same meaning as in DE. The teacher $X_{teacher}^k$ tries to improve the other individuals $X_d^k=(x_{d,1}^k,x_{d,2}^k,\ldots,x_{d,n}^k)$, $d\neq teacher$, by shifting their positions towards his own level. The difference between the result of the teacher and the mean result of the learners, for learner d, is calculated as follows:

(10) $\mathrm{Diff\_M}_{d,j}^k=r_d\,(x_{teacher,j}^k-T_F\,x_{mean,j}^k)$

where $r_d$ is a random number in the range $[0,1]$, and $T_F$ is the teaching factor, which decides the value of the mean to be changed; it can be either 1 or 2 and is decided randomly as follows:

(11) $T_F=\mathrm{round}(1+r_d)$
Based on $\mathrm{Difference\_Mean}_d^k=(\mathrm{Diff\_M}_{d,1}^k,\mathrm{Diff\_M}_{d,2}^k,\ldots,\mathrm{Diff\_M}_{d,n}^k)$, the individual $X_d^k$ in the current population is updated according to the following:

(12) $X_d'^k=X_d^k+\mathrm{Difference\_Mean}_d^k$

where $X_d'^k$ is the updated value, which is accepted only if it gives a better fitness value.
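The teacher phase of Eqs. (10)-(12), including the greedy acceptance, can be sketched as follows (the function name is ours; minimization is assumed):

```python
import numpy as np

def teacher_phase(pop, fitness, f, rng):
    """TLBO teacher phase, Eqs. (10)-(12): shift every learner towards the
    teacher (the best individual), keeping a move only if fitness improves."""
    NP, n = pop.shape
    teacher = pop[int(np.argmin(fitness))]       # best individual (minimization)
    mean = pop.mean(axis=0)
    for d in range(NP):
        TF = int(rng.integers(1, 3))             # Eq. (11): T_F is 1 or 2
        diff = rng.random(n) * (teacher - TF * mean)  # Eq. (10), element-wise
        cand = pop[d] + diff                     # Eq. (12)
        fc = f(cand)
        if fc < fitness[d]:                      # greedy acceptance
            pop[d], fitness[d] = cand, fc
    return pop, fitness
```

Because every candidate is accepted only on improvement, the best fitness in the population can never get worse across one call.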
3.2.2. Learner Phase
In the learner phase, which follows the teacher phase, the algorithm simulates the learning of the students (individuals) through interaction with each other via discussions, presentations, formal communications, and so on. A learner develops his knowledge if the other learners are better. In this phase of the kth generation, a learner $X_d^k$ randomly selects another learner $X_{d1}^k$; if the fitness value of $X_{d1}^k$ is better than that of $X_d^k$, then $X_d^k$ is updated by (13), otherwise by (14):

(13) $X_d'^k=X_d^k+r_d\,(X_{d1}^k-X_d^k)$
(14) $X_d'^k=X_d^k+r_d\,(X_d^k-X_{d1}^k)$

where $r_d$ is a random number within the range $[0,1]$, and $X_d'^k$ is accepted only if it gives a better fitness value.
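Analogously, the learner phase of Eqs. (13)-(14) can be sketched as follows (the function name is ours; minimization is assumed):

```python
import numpy as np

def learner_phase(pop, fitness, f, rng):
    """TLBO learner phase, Eqs. (13)-(14): each learner interacts with one
    randomly chosen partner, moving towards a better partner or away from
    a worse one; the move is kept only if fitness improves."""
    NP, n = pop.shape
    for d in range(NP):
        d1 = int(rng.choice([i for i in range(NP) if i != d]))
        r = rng.random(n)
        if fitness[d1] < fitness[d]:             # partner is better: Eq. (13)
            cand = pop[d] + r * (pop[d1] - pop[d])
        else:                                    # partner is worse: Eq. (14)
            cand = pop[d] + r * (pop[d] - pop[d1])
        fc = f(cand)
        if fc < fitness[d]:                      # greedy acceptance
            pop[d], fitness[d] = cand, fc
    return pop, fitness
```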
4. IHDETBO for Cloud Manufacturing
To solve the MBSPHE-CSCCM problem, the so-called IHDETBO algorithm is proposed by integrating an improved DE (named the IDE phase) and an improved teacher phase of TLBO (named the IT phase) to enhance and balance the exploration and exploitation capacities. First, block encoding and initialization are performed during population initialization. In the IDE phase, block mutation, block crossover, and block selection are performed; in addition, the factors F and CR are both improved and calculated with an adaptive strategy to enhance population diversity and pass better individuals into the next phase [28]. In the learner phase of canonical TLBO, a learner may select an inappropriate learner (a poor student) to learn from, which slows down convergence and weakens the local search. In contrast, in the teacher phase of canonical TLBO every learner is improved towards the teacher's level, and the algorithm exhibits better convergence performance. Therefore, the operations of the teacher phase are adopted and improved as the IT phase, and the factor TF is also improved to bring the simulation more in line with actual teaching-learning behavior. The algorithm is discussed in detail as follows.
4.1. Parameter Settings
To facilitate the discussion of the MBSPHE-CSCCM problem, the parameters are defined as follows.
Task: cloud manufacturing task.
K: the total amount of the cloud manufacturing task.
$SubT=\{subt_1,subt_2,\ldots,subt_i,\ldots,subt_n\}$: the set of subtasks decomposed by the CMP, where i is a natural number in the range $[1,n]$ and n is the total number of subtasks. Let the amount of every subtask $subt_i$ be $K_i$. Generally, $K_i\ge K$ and $K_i=\zeta K$ must both hold, where $\zeta$ is a positive integer.
$CS^i=\{cs_1^i,cs_2^i,\ldots,cs_j^i,\ldots,cs_{M_i}^i\}$: the set of candidate CSs for the corresponding subtask $subt_i$, where $j=1,2,\ldots,M_i$ and $M_i$ is the number of candidate CSs.
4.2. Block Encoding and Initialization
To address the MBSPHE-CSCCM problem, a new encoding method named block encoding is proposed. An MBSPHE-CSCCM scheme can be encoded as a chromosome by an integer array whose length equals the total number of CSs [21], where each integer lies in the range $[0,K_i]$. By adopting the calculation method of subtask height, a genebit partition method based on subtask rank is proposed, so that every genebit in the integer array corresponds one-to-one with a CS [29]. As shown in Figure 3, the rank of the corresponding subtask is its location in the cloud manufacturing chain, and the number of genebits in every rank equals the number of candidate CSs for the corresponding subtask. So, the genebits in the same rank can be seen as a block of the chromosome. The genebit index of $cs_m^n$ is calculated as follows:

(15) $index(cs_m^n)=\begin{cases}m+\sum_{i=1}^{n-1}M_i,&n>1,\\m,&n=1.\end{cases}$
Schematic of encoding [21].
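A minimal sketch of the genebit index calculation in Eq. (15) (the function name and the candidate-set sizes below are ours, for illustration; indices are 1-based as in the text):

```python
def genebit_index(m, n, M):
    """Eq. (15): chromosome position of the m-th candidate CS of the n-th
    subtask, where M[i] is the number of candidate CSs for subtask i+1."""
    if n == 1:
        return m
    return m + sum(M[:n - 1])

M = [3, 4, 2]                   # hypothetical candidate-set sizes for three subtasks
print(genebit_index(2, 2, M))   # 2nd candidate of 2nd subtask -> 2 + 3 = 5
```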
The initial population is generated randomly such that the sum over every rank equals the amount of the corresponding subtask; the initialization of every genebit is calculated as follows:

(16) $x_j^i=\dfrac{\mathrm{rand}_j(0,1)\cdot K_i}{\sum_{j=1}^{M_i}\mathrm{rand}_j(0,1)}$
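A sketch of Eq. (16) for one block (one rank): random weights are normalized so that the block sums to the subtask amount. The integer-rounding repair at the end is our own assumption, since the allocated amounts are integers in the encoding:

```python
import numpy as np

def init_block(Mi, Ki, rng):
    """Eq. (16): random non-negative allocation over the Mi candidate CSs of a
    subtask whose elements sum to the subtask amount Ki."""
    r = rng.random(Mi)
    alloc = r * Ki / r.sum()            # real-valued allocation per Eq. (16)
    alloc = np.floor(alloc).astype(int) # assumed rounding to integer amounts
    alloc[0] += Ki - alloc.sum()        # repair so the block total stays Ki
    return alloc

rng = np.random.default_rng(0)
block = init_block(Mi=4, Ki=100, rng=rng)
print(block.sum())  # 100
```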
4.3. IDE Phase
In the IDE phase there are still three operations: the mutation, crossover, and selection operations mentioned above. Based on block encoding and initialization, mutation and crossover are both executed rank by rank and are therefore named block mutation and block crossover, respectively. Additionally, the parameters F and CR are improved in the block mutation and block crossover operations, respectively.
4.3.1. Improved Block Mutation Operation
Let $X^{h,i}=\{x_d^{h,i}\mid d=1,2,\ldots,NP\}$, $i=1,2,\ldots,n$, be rank i of the hth generation, where $x_d^{h,i}=(x_{d,1}^{h,i},x_{d,2}^{h,i},\ldots,x_{d,M_i}^{h,i})$ is the integer array in the genebits of rank i of individual d. After this operation, the block mutant vector is $u_d^{h,i}=(u_{d,1}^{h,i},u_{d,2}^{h,i},\ldots,u_{d,M_i}^{h,i})$. To enhance the exploration capacity and population diversity, we employ the "rand/1/bin" mutation operation as in (17), which has a slow convergence speed but strong exploration capability [14]. Note that the sum of the elements of $u_d^{h,i}$ must equal that of $x_d^{h,i}$.

(17) $u_d^{h,i}=x_{d1}^{h,i}+F_d\,(x_{d2}^{h,i}-x_{d3}^{h,i})$
In the IHDETBO algorithm, the mutation factor $F_d$ is improved and calculated adaptively with the aim of generating diversified individuals. At each generation h, the $F_d$ of each individual $x_d^{h,i}$ is independently generated as in (18), according to a Cauchy distribution with location parameter $\mu_F$ and scale parameter 0.1, truncated to 1 if $F_d>1$ and regenerated if $F_d\le 0$:

(18) $F_d=\mathrm{randc}_d(\mu_F,0.1)$
Let $S_F$ be the set of all successful mutation factors in this generation. The location parameter $\mu_F$ of the Cauchy distribution is initialized to 0.5 and then updated at the end of each iteration as follows:

(19) $\mu_F=(1-c)\cdot\mu_F+c\cdot\mathrm{mean}_L(S_F)$

where c is a positive constant between 0 and 1, and $\mathrm{mean}_L(S_F)$ is the Lehmer mean, calculated as in (20):

(20) $\mathrm{mean}_L(S_F)=\dfrac{\sum_{F\in S_F}F^2}{\sum_{F\in S_F}F}$
According to existing research results [24], a truncated Cauchy distribution is introduced to generate the adaptive $F_d$ because it diversifies $F_d$ better and thus avoids the premature convergence that often occurs in greedy mutation strategies when $F_d$ is highly concentrated around a certain value, thereby enhancing population diversity. Additionally, the Lehmer mean of $S_F$ places more weight on larger successful mutation factors, which helps propagate larger $\mu_F$ values and further guards against premature convergence.
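The adaptive sampling and update of Eqs. (18)-(20) can be sketched as follows (function names are ours; the standard Cauchy draw uses the inverse-CDF transform):

```python
import numpy as np

def sample_F(mu_F, rng):
    """Eq. (18): draw F from Cauchy(mu_F, 0.1); truncate to 1 if F > 1,
    and redraw if F <= 0."""
    while True:
        # inverse-CDF sample of a Cauchy(mu_F, 0.1) variate
        F = mu_F + 0.1 * np.tan(np.pi * (rng.random() - 0.5))
        if F > 1:
            return 1.0
        if F > 0:
            return F

def update_mu_F(mu_F, S_F, c=0.1):
    """Eqs. (19)-(20): mu_F <- (1-c)*mu_F + c * Lehmer mean of successful F's."""
    if not S_F:
        return mu_F
    S = np.asarray(S_F, dtype=float)
    lehmer = (S ** 2).sum() / S.sum()
    return (1 - c) * mu_F + c * lehmer
```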
4.3.2. Improved Block Crossover Operation
Let $v_d^{h,i}=(v_{d,1}^{h,i},v_{d,2}^{h,i},\ldots,v_{d,M_i}^{h,i})$ be the block trial/offspring vector; the block crossover operation is executed rank by rank. Following (9), the block crossover operation is calculated as in (21). To keep the sum of the elements of $v_d^{h,i}$ unchanged, a fine-tuning operation is needed; i.e., every element of $v_d^{h,i}$ is increased or decreased proportionally.

(21) $v_{d,j}^{h,i}=\begin{cases}u_{d,j}^{h,i},&\text{if }\mathrm{rand}(0,1)\le CR_d\text{ or }j=j_{rand},\\x_{d,j}^{h,i},&\text{otherwise}.\end{cases}$
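The block crossover of Eq. (21) together with the proportional fine-tuning step can be sketched as follows (the function name is ours; the rescaling is one way to realize the "increase or decrease proportionally" rule):

```python
import numpy as np

def block_crossover(x, u, CR, rng):
    """Eq. (21) plus fine-tuning: binomial crossover within one block,
    then rescale the trial block proportionally so that its sum matches
    that of x (the allocated amount of the subtask must stay constant)."""
    n = x.size
    j_rand = rng.integers(n)
    mask = rng.random(n) <= CR
    mask[j_rand] = True
    v = np.where(mask, u, x).astype(float)
    if v.sum() > 0:
        v *= x.sum() / v.sum()   # proportional increase/decrease of every element
    return v
```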
In the IHDETBO algorithm, the crossover probability $CR_d$ is also improved and calculated adaptively, with the aim of passing better individuals into the following IT phase. At each generation h, the $CR_d$ of each individual $x_d^{h,i}$ is independently generated as in (22), according to a normal distribution with mean $\mu_{CR}$ and standard deviation 0.1, and then truncated to $[0,1]$:

(22) $CR_d=\mathrm{randn}_d(\mu_{CR},0.1)$
Let $S_{CR}$ be the set of all successful crossover probabilities in this generation. The mean $\mu_{CR}$ of the normal distribution is initialized to 0.5 and then updated at the end of each iteration as follows:

(23) $\mu_{CR}=(1-c)\cdot\mu_{CR}+c\cdot\mathrm{mean}_A(S_{CR})$

where c is a positive constant between 0 and 1, and $\mathrm{mean}_A(S_{CR})$ is the arithmetic mean of $S_{CR}$.
According to existing research results [24], better control parameter values tend to generate better individuals, which are more likely to survive; such values should therefore be propagated to the following generations. The set $S_{CR}$ records recent successful crossover probabilities, and $\mathrm{randn}_d(\mu_{CR},0.1)$ with a small standard deviation in (22) tends to generate new $CR_d$ values close to those recorded successes.
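The adaptation of Eqs. (22)-(23) can be sketched as follows (function names are ours):

```python
import numpy as np

def sample_CR(mu_CR, rng):
    """Eq. (22): draw CR from N(mu_CR, 0.1) and truncate to [0, 1]."""
    return float(np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0))

def update_mu_CR(mu_CR, S_CR, c=0.1):
    """Eq. (23): mu_CR <- (1-c)*mu_CR + c * arithmetic mean of successful CR's."""
    if not S_CR:
        return mu_CR
    return (1 - c) * mu_CR + c * (sum(S_CR) / len(S_CR))
```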
4.3.3. Selection Operation
After block mutation and block crossover, we obtain a trial/offspring individual $V_d^h=(v_d^{h,1},v_d^{h,2},\ldots,v_d^{h,n})$, and the abovementioned greedy selection operation is adopted to choose either $X_d^h$ or $V_d^h$ for the next phase.
4.4. IT Phase
In the IHDETBO algorithm, the teacher phase of TLBO is improved and adopted as the IT phase following the IDE phase. Based on the block operations illustrated in Section 4.3, the operation in this phase is also executed block by block. Additionally, the factor TF is improved to bring the simulation more in line with actual teaching-learning behavior.
4.4.1. Block Teaching Operation
Just as introduced in Section 3.2.1, the teacher (the best individual of the population) tries to disseminate knowledge among the learners (the other individuals), which in turn increases the knowledge level of the whole class (population). So, after the IDE phase of the hth generation, let the individual with the best fitness value be the teacher $X_t^h=(x_t^{h,1},x_t^{h,2},\ldots,x_t^{h,i},\ldots,x_t^{h,n})$ and let $X_m^h=(x_m^{h,1},x_m^{h,2},\ldots,x_m^{h,i},\ldots,x_m^{h,n})$ be the mean individual, where i and n have the same meaning as in Section 4.2 and $x_m^{h,i}=(\frac{1}{NP}\sum_{j=1}^{NP}x_{j,1}^{h,i},\frac{1}{NP}\sum_{j=1}^{NP}x_{j,2}^{h,i},\ldots,\frac{1}{NP}\sum_{j=1}^{NP}x_{j,M_i}^{h,i})$. The difference of block i between the result of the teacher and the mean result of the learners, for learner d, is calculated as follows:

(24) $\mathrm{Diff\_M}_{d,j}^{h,i}=r_d\,(x_{teacher,j}^{h,i}-T_{F,d}\,x_{mean,j}^{h,i})$
(25) $\mathrm{Difference\_Mean}_d^{h,i}=(\mathrm{Diff\_M}_{d,1}^{h,i},\mathrm{Diff\_M}_{d,2}^{h,i},\ldots,\mathrm{Diff\_M}_{d,M_i}^{h,i})$

where $r_d$ is a random number in the range $[0,1]$ and $T_{F,d}$ is the teaching factor, which can be calculated by (11). Then block i of individual d is updated by (26), and the updated individual $X_d'^h$ is accepted if it gives a better fitness value.

(26) $x_d'^{h,i}=x_d^{h,i}+\mathrm{Difference\_Mean}_d^{h,i}$
4.4.2. Improvement of TF
In the canonical TLBO algorithm, the teaching factor $T_{F,d}$ is either 1 or 2, which means that a learner learns nothing from the teacher or learns everything from the teacher, respectively. Obviously, this is not in line with reality. In actual teaching-learning, a learner may learn any proportion from the teacher, depending on the learner's learning ability, the teacher's teaching ability, or other factors, so the teaching factor $T_{F,d}$ is not always at an end state but varies in between [27, 30]. Therefore, $T_{F,d}$ is calculated as follows:

(27) $T_{F,d}=\begin{cases}1+\dfrac{f(X_t^h)-f(X_d^h)}{f(X_t^h)},&\text{for a maximization problem},\\1+\dfrac{f(X_d^h)-f(X_t^h)}{f(X_d^h)},&\text{for a minimization problem},\end{cases}$

where $f(X_t^h)$ and $f(X_d^h)$ are the fitness values of $X_t^h$ and $X_d^h$, respectively.
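The improved teaching factor of Eq. (27) can be sketched as follows (the function name is ours; positive fitness values are assumed so that the denominators are nonzero):

```python
def teaching_factor(f_teacher, f_learner, minimize=True):
    """Eq. (27): T_F grows with the gap between teacher and learner, so a
    learner far from the teacher absorbs a larger share of the teacher's
    knowledge; when learner equals teacher, T_F = 1."""
    if minimize:
        return 1.0 + (f_learner - f_teacher) / f_learner
    return 1.0 + (f_teacher - f_learner) / f_teacher

# minimization example: teacher fitness 1.0, learner fitness 2.0
print(teaching_factor(1.0, 2.0))  # 1.5
```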
5. Experiments and Discussions
In this section, the effectiveness of the proposed IHDETBO algorithm is examined on several benchmark functions, and the application of IHDETBO to MBSPHE-CSCCM is demonstrated by a case study. The experiments are run on a PC with an Intel® Core™ i5-3337U CPU at 1.80 GHz and 8.00 GB of RAM under Windows 7 (64-bit). The benchmark-function experiments are programmed in Matlab R2016a and the case study in Microsoft Visual C++ 6.0.
5.1. Experiments with Benchmark Functions
To investigate the performance of the proposed IHDETBO algorithm, six benchmark functions with different characteristics, dimensions, and search spaces are adopted. The results obtained by the IHDETBO algorithm are compared with those of other optimization algorithms, namely PSO, DE, and TLBO, under different dimensions and population sizes.
5.1.1. Benchmark Functions
To analyze and compare the performance and accuracy of the IHDETBO algorithm, we adopt the six benchmark functions shown in Table 1. These functions have different characteristics, namely unimodality (U), multimodality (M), separability (S), and non-separability (N), shown in column C of Table 1, and the global minimum of each function is 0. For a unimodal function the local minimum is also the global minimum, whereas a multimodal function has multiple local minima. It is therefore more difficult to find the global minimum of a multimodal function, because doing so requires better global search ability. Additionally, in non-separable functions the variables interact with one another, which is not the case for separable functions, so finding the optimum of a non-separable function is harder than for a separable one. The abilities of exploration, exploitation, and finding an optimum can thus be assessed with these functions, which can be seen as abstractions of practical engineering problems.
We test the proposed IHDETBO algorithm with the same parameter settings and compare the results with the PSO, DE, and TLBO algorithms. For the PSO algorithm, the parameter v is half of the search space. For the DE algorithm, the scale factor F and the crossover probability CR are 0.5 and 0.9, respectively [31]. The maximum number of iterations is 500, shown in column M of Table 2. The population size, an important parameter for heuristic algorithms, is set to 10, 20, and 50, shown in column N of Table 2. In addition, the dimensionality of the search space, another important factor, is set to 2, 5, and 10 for each population size, shown in column D of Table 2. All experiments are run 50 times independently.
Table 2: Results of the comparison test on the 6 benchmark functions with different parameter settings.

f    Algorithm  Metric  N=10,D=2   N=10,D=5   N=10,D=10  N=20,D=2   N=20,D=5   N=20,D=10  N=50,D=2   N=50,D=5   N=50,D=10
f1   IHDETBO    best    7.58E-154  3.56E-68   1.11E-36   2.52E-135  6.14E-54   1.27E-27   9.72E-120  6.39E-47   8.65E-23
                worst   2.63E-138  3.63E-55   8.63E-29   1.38E-122  2.95E-48   3.64E-22   5.31E-113  4.35E-43   4.44E-20
                mean    6.36E-140  7.66E-57   5.76E-30   2.84E-124  2.54E-49   1.46E-23   2.43E-114  5.53E-44   5.47E-21
                std     3.68E-139  5.07E-56   1.53E-29   1.93E-123  6.60E-49   5.19E-23   8.54E-114  9.57E-44   8.16E-21
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     PSO        best    0.00E+00   0.00E+00   2.78E+01   0.00E+00   0.00E+00   8.41E-01   0.00E+00   0.00E+00   2.76E-15
                worst   1.65E-43   5.00E+00   5.34E+03   2.03E-48   6.62E-30   1.62E+02   7.32E-55   6.56E-53   1.60E-03
                mean    4.17E-45   2.42E-01   9.88E+02   4.67E-50   3.42E-31   3.98E+01   5.41E-56   2.18E-54   4.60E-05
                std     2.34E-44   7.58E-01   9.09E+02   2.85E-49   1.23E-30   4.76E+01   1.47E-55   1.01E-53   2.27E-04
                FE      1.14E+04   1.23E+04   1.24E+04   2.22E+04   2.31E+04   2.42E+04   5.45E+04   5.45E+04   5.91E+04
     DE         best    4.55E-111  1.73E-02   3.64E+01   1.83E-113  5.16E-47   1.78E-15   5.66E-112  9.89E-44   9.08E-20
                worst   2.68E-01   6.28E+02   2.42E+03   3.01E-104  4.33E-06   4.66E+01   2.65E-104  2.60E-40   1.11E-17
                mean    5.40E-03   5.57E+01   4.51E+02   7.63E-106  9.90E-08   1.51E+00   1.07E-105  2.41E-41   1.56E-18
                std     3.75E-02   1.03E+02   5.24E+02   4.24E-105  6.11E-07   6.64E+00   4.31E-105  4.91E-41   1.84E-18
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     TLBO       best    1.39E-206  1.71E-140  2.09E-141  1.97E-195  2.50E-123  2.96E-117  3.53E-182  2.07E-114  5.37E-87
                worst   1.68E-189  3.79E-131  3.74E-130  1.10E-177  5.87E-116  8.28E-110  4.21E-174  5.35E-109  2.22E-84
                mean    4.12E-191  1.08E-132  9.72E-132  2.20E-179  1.05E-117  6.04E-111  1.57E-175  1.35E-110  3.58E-85
                std     0.00E+00   5.65E-132  5.27E-131  0.00E+00   8.25E-117  1.67E-110  0.00E+00   7.51E-110  5.36E-85
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
f2   IHDETBO    best    0.00E+00   1.44E-02   1.02E+00   0.00E+00   6.04E-04   2.27E+00   0.00E+00   1.39E-16   5.70E-03
                worst   3.18E-17   4.00E+00   7.30E+00   0.00E+00   5.72E-01   4.55E+00   0.00E+00   2.51E-01   4.01E+00
                mean    6.37E-19   5.27E-01   4.22E+00   0.00E+00   2.85E-01   3.72E+00   0.00E+00   3.40E-02   2.82E-02
                std     4.45E-18   6.57E-01   7.70E-01   0.00E+00   1.20E-01   3.11E-01   0.00E+00   5.54E-02   8.68E-01
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     PSO        best    0.00E+00   7.80E-05   1.00E+01   0.00E+00   0.00E+00   5.88E+00   0.00E+00   0.00E+00   7.65E-01
                worst   0.00E+00   4.63E+00   1.96E+02   0.00E+00   3.93E+00   5.76E+01   0.00E+00   3.93E+00   9.57E+00
                mean    0.00E+00   2.30E+00   4.18E+01   0.00E+00   3.15E-01   1.11E+01   0.00E+00   2.36E-01   5.57E+00
                std     0.00E+00   1.58E+00   3.48E+01   0.00E+00   1.07E+00   8.77E+00   0.00E+00   9.34E-01   2.22E+00
                FE      1.11E+04   1.24E+04   1.24E+04   2.16E+04   2.31E+04   2.42E+04   5.32E+04   5.49E+04   5.98E+04
     DE         best    0.00E+00   2.16E-02   6.77E+00   0.00E+00   1.27E-04   2.89E-01   0.00E+00   7.77E-30   1.63E+00
                worst   4.95E+00   8.75E+01   1.81E+02   2.97E-01   3.93E+00   2.59E+01   0.00E+00   1.06E+00   6.05E+05
                mean    2.48E-01   6.10E+00   4.40E+01   2.17E-02   1.32E+00   7.80E+00   0.00E+00   2.77E-01   4.19E+00
                std     7.36E-01   1.34E+01   3.47E+01   6.03E-02   1.10E+00   3.39E+00   0.00E+00   3.21E-01   1.24E+00
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     TLBO       best    5.03E-29   1.40E-04   5.18E+00   5.70E-29   2.73E-05   5.89E-01   9.67E-28   1.78E-05   1.03E-02
                worst   7.91E-21   2.06E+00   7.16E+00   3.08E-22   2.77E-01   5.11E+00   6.81E-24   3.60E-03   7.24E-01
                mean    3.27E-22   5.45E-01   6.49E+00   8.28E-24   7.20E-03   3.74E+00   2.66E-25   7.77E-04   8.77E-02
                std     1.31E-21   5.11E-01   4.53E-01   4.32E-23   3.86E-02   8.08E-01   9.64E-25   6.35E-04   1.23E-01
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
f3   IHDETBO    best    0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   8.30E-14   0.00E+00   0.00E+00   2.76E-11
                worst   0.00E+00   4.89E-15   2.82E-13   0.00E+00   4.89E-15   3.68E-11   0.00E+00   4.89E-15   4.25E-10
                mean    0.00E+00   3.82E-15   2.42E-14   0.00E+00   3.18E-15   4.71E-12   0.00E+00   2.26E-15   1.21E-10
                std     0.00E+00   1.63E-15   4.86E-14   0.00E+00   1.77E-15   7.58E-12   0.00E+00   1.56E-15   8.58E-11
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     PSO        best    0.00E+00   1.48E-06   5.34E+00   0.00E+00   0.00E+00   1.76E+00   0.00E+00   0.00E+00   1.16E+00
                worst   4.89E-15   7.50E+00   1.74E+01   0.00E+00   4.17E+00   1.25E+01   0.00E+00   2.32E+00   5.09E+00
                mean    1.47E-15   3.17E+00   1.06E+01   0.00E+00   1.02E+00   5.38E+00   0.00E+00   3.10E-01   2.88E+00
                std     6.96E-16   2.01E+00   2.71E+00   0.00E+00   2.00E+00   2.10E+00   0.00E+00   6.67E-01   9.53E-01
                FE      1.11E+04   1.22E+04   1.23E+04   2.16E+04   2.23E+04   2.41E+04   5.33E+04   5.40E+04   5.88E+04
     DE         best    0.00E+00   5.20E-03   2.72E+00   0.00E+00   0.00E+00   1.59E-10   0.00E+00   0.00E+00   1.56E-10
                worst   2.58E+00   1.41E+01   1.49E+01   0.00E+00   1.65E+00   2.58E+00   0.00E+00   4.89E-15   3.60E-09
                mean    5.87E-02   2.66E+00   8.67E+00   0.00E+00   6.58E-02   3.30E-01   0.00E+00   1.47E-15   9.13E-10
                std     3.62E-01   3.07E+00   2.59E+00   0.00E+00   3.23E-01   6.53E-01   0.00E+00   6.96E-16   6.01E-10
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     TLBO       best    0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   1.33E-15
                worst   4.89E-15   4.89E-15   8.44E-15   0.00E+00   4.89E-15   8.44E-15   0.00E+00   4.89E-15   4.89E-15
                mean    1.47E-15   4.03E-15   5.10E-15   0.00E+00   3.25E-15   5.03E-15   0.00E+00   3.32E-15   4.81E-15
                std     6.96E-16   1.52E-15   8.44E-12   0.00E+00   1.77E-15   9.95E-16   0.00E+00   1.76E-15   4.97E-16
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
f4   IHDETBO    best    0.00E+00   0.00E+00   2.27E-13   0.00E+00   0.00E+00   2.73E-11   0.00E+00   0.00E+00   1.39E-09
                worst   2.38E+02   4.75E+02   7.14E+02   1.18E+02   1.18E+02   3.57E+02   0.00E+00   0.00E+00   8.78E-08
                mean    2.37E+01   1.09E+02   3.02E+02   4.74E+00   1.42E+01   6.88E+01   0.00E+00   0.00E+00   1.71E-08
                std     5.31E+01   1.13E+02   1.84E+02   2.32E+01   3.85E+01   8.25E+01   0.00E+00   0.00E+00   1.65E-08
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     PSO        best    0.00E+00   1.23E-02   7.38E+02   0.00E+00   0.00E+00   4.85E+02   0.00E+00   1.18E+02   2.38E+02
                worst   3.57E+02   1.08E+03   2.54E+03   2.38E+02   8.69E+02   2.26E+03   1.18E+02   9.52E+02   1.92E+03
                mean    9.68E+01   5.72E+02   1.69E+03   8.29E+01   4.43E+02   1.39E+03   4.50E+01   3.88E+02   1.19E+03
                std     9.35E+01   2.41E+02   4.11E+02   7.21E+01   2.26E+02   3.85E+02   5.75E+01   1.89E+02   3.69E+02
                FE      1.08E+04   1.17E+04   1.23E+04   2.12E+04   2.18E+04   2.38E+04   5.21E+04   5.26E+04   5.59E+04
     DE         best    0.00E+00   3.86E-01   5.20E+02   0.00E+00   0.00E+00   1.18E+02   0.00E+00   0.00E+00   1.23E-02
                worst   2.74E+02   7.90E+02   1.82E+03   1.40E+02   6.25E+02   1.03E+03   0.00E+00   1.25E+02   1.26E+03
                mean    6.52E+01   3.94E+02   1.15E+03   1.80E+01   1.94E+02   5.98E+02   0.00E+00   1.32E+01   4.76E+02
                std     6.93E+01   1.93E+02   2.83E+02   4.23E+01   1.83E+02   2.34E+02   0.00E+00   3.62E+01   3.10E+02
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     TLBO       best    0.00E+00   1.18E+02   7.13E+02   0.00E+00   0.00E+00   3.55E+02   0.00E+00   0.00E+00   1.25E+02
                worst   2.39E+02   7.15E+02   1.58E+03   1.18E+02   6.51E+02   1.28E+03   0.00E+00   3.55E+02   1.04E+03
                mean    3.08E+01   3.22E+02   1.28E+03   1.18E+01   1.93E+02   7.33E+02   0.00E+00   7.88E+01   5.35E+02
                std     5.20E+01   1.52E+02   2.23E+02   3.55E+01   1.29E+02   2.16E+02   0.00E+00   9.06E+01   1.86E+02
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
f5   IHDETBO    best    0.00E+00   0.00E+00   3.92E-07   0.00E+00   0.00E+00   1.30E-05   0.00E+00   9.58E-13   6.32E-04
                worst   7.40E-03   1.72E-02   3.20E-02   0.00E+00   1.12E-05   2.84E-02   0.00E+00   1.75E-05   2.18E-02
                mean    1.48E-04   2.70E-03   7.90E-03   0.00E+00   6.10E-07   7.20E-03   0.00E+00   8.31E-07   8.80E-03
                std     1.00E-03   4.70E-03   7.80E-03   0.00E+00   2.23E-06   6.20E-03   0.00E+00   2.69E-06   5.50E-03
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     PSO        best    0.00E+00   7.88E-02   2.67E+00   0.00E+00   3.45E-02   1.95E-01   0.00E+00   7.40E-03   8.42E-02
                worst   7.89E-02   3.16E+00   7.86E+01   6.66E-02   8.06E-01   1.16E+01   3.95E-02   4.43E-01   1.37E+00
                mean    1.40E-02   4.95E-01   1.36E+01   9.20E-03   2.46E-01   1.70E+00   2.50E-03   1.60E-01   3.62E-01
                std     1.69E-02   5.05E-01   1.22E+01   1.37E-02   1.50E-01   1.81E+00   6.20E-03   9.92E-02   3.47E-01
                FE      1.09E+04   1.22E+04   1.23E+04   2.12E+04   2.20E+04   2.40E+04   5.22E+04   5.30E+04   5.89E+04
     DE         best    0.00E+00   3.41E-02   7.99E-01   0.00E+00   7.40E-03   9.90E-03   0.00E+00   2.80E-03   1.21E-09
                worst   5.75E-01   6.62E+00   3.85E+01   1.33E-02   1.58E-01   6.43E-01   0.00E+00   8.85E-02   4.96E-01
                mean    3.19E-02   6.20E-01   9.95E+00   1.20E-03   4.47E-02   8.61E-02   0.00E+00   4.52E-02   1.94E-01
                std     8.11E-02   1.17E+00   7.67E+00   3.00E-03   3.19E-02   1.00E-01   0.00E+00   1.98E-02   1.50E-01
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     TLBO       best    0.00E+00   2.28E-08   0.00E+00   0.00E+00   2.90E-03   0.00E+00   0.00E+00   7.66E-08   0.00E+00
                worst   7.40E-03   1.11E-01   1.69E-01   2.80E-03   7.15E-02   6.89E-02   8.72E-08   5.15E-02   4.92E-02
                mean    9.32E-04   3.86E-02   1.28E-02   1.17E-04   3.08E-02   9.90E-03   1.74E-09   1.76E-02   9.60E-03
                std     2.20E-03   2.87E-02   2.81E-02   5.42E-04   1.50E-02   1.74E-02   1.22E-08   1.08E-02   1.37E-02
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
f6   IHDETBO    best    0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   1.88E-13
                worst   0.00E+00   2.98E+00   3.98E+00   0.00E+00   9.95E-01   9.95E-01   0.00E+00   0.00E+00   5.38E-11
                mean    0.00E+00   2.79E-01   1.21E+00   0.00E+00   5.97E-02   1.99E-02   0.00E+00   0.00E+00   5.02E-12
                std     0.00E+00   5.98E-01   1.22E+00   0.00E+00   2.36E-01   1.39E-01   0.00E+00   0.00E+00   8.67E-12
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     PSO        best    0.00E+00   0.00E+00   1.27E+01   0.00E+00   0.00E+00   4.44E+00   0.00E+00   0.00E+00   2.00E+00
                worst   1.99E+00   3.08E+01   5.97E+01   9.95E-01   2.69E+01   4.12E+01   9.95E-01   1.09E+01   3.68E+01
                mean    4.18E-01   7.14E+00   3.59E+01   2.39E-01   5.63E+00   2.21E+01   3.98E-02   3.78E+00   1.44E+01
                std     5.30E-01   5.89E+00   1.16E+01   4.25E-01   4.20E+00   8.63E+00   1.95E-01   2.43E+00   7.14E+00
                FE      1.09E+04   1.22E+04   1.23E+04   2.12E+04   2.19E+04   2.40E+04   5.21E+04   5.30E+04   5.83E+04
     DE         best    0.00E+00   7.53E-02   3.25E+00   0.00E+00   0.00E+00   1.17E+00   0.00E+00   0.00E+00   1.22E+01
                worst   1.25E+00   1.77E+01   4.27E+01   0.00E+00   2.98E+00   2.12E+01   0.00E+00   1.13E-02   3.27E+01
                mean    3.19E-01   3.80E+00   1.64E+01   0.00E+00   5.40E-01   5.58E+00   0.00E+00   2.27E-04   2.34E+01
                std     4.60E-01   2.81E+00   8.16E+00   0.00E+00   7.31E-01   3.63E+00   0.00E+00   1.60E-03   5.54E+00
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
     TLBO       best    0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   0.00E+00   2.47E-12
                worst   0.00E+00   3.98E+00   1.19E+01   0.00E+00   1.99E+00   1.39E+01   0.00E+00   9.95E-01   5.97E+00
                mean    0.00E+00   8.65E-01   4.91E+00   0.00E+00   3.24E-01   4.32E+00   0.00E+00   2.04E-02   2.05E+00
                std     0.00E+00   1.27E+00   3.02E+00   0.00E+00   5.10E-01   2.81E+00   0.00E+00   1.39E-01   1.53E+00
                FE      1.00E+04   1.00E+04   1.00E+04   2.00E+04   2.00E+04   2.00E+04   5.01E+04   5.01E+04   5.01E+04
5.1.3. Discussion on the Experimental Results
Every experiment is run 50 times independently. The results of the comparison test on the 6 benchmark functions are shown in Table 2. The columns best, worst, mean, std, and FE give the best value, worst value, mean value, standard deviation, and mean number of function evaluations over the 50 runs, respectively. The mean value is the most informative index for validating algorithm performance. Additionally, to aid the analysis of the benchmark functions, their objective function landscapes for D=2 are shown in Figure 4, from which we can view their shapes and obtain their corresponding characteristics.
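The per-function statistics reported in Table 2 can be collected with a small helper like the following (a sketch; `run` is a hypothetical callable that performs one independent optimization run and returns its final best objective value together with its function-evaluation count):

```python
import statistics

def summarize(run, n_runs=50):
    """Execute an optimizer n_runs times independently and report the best,
    worst, mean, and standard deviation of the final objective values, plus
    the mean number of function evaluations (FE)."""
    results = [run() for _ in range(n_runs)]
    vals = [v for v, _ in results]   # final best value of each run
    fes = [fe for _, fe in results]  # function evaluations of each run
    return {
        "best": min(vals),
        "worst": max(vals),
        "mean": statistics.mean(vals),
        "std": statistics.stdev(vals),
        "FE": statistics.mean(fes),
    }
```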
Figure 4: Objective function landscape of the 6 benchmark functions for D=2.
According to the landscapes shown in Figure 4, the characteristics of the functions can be analyzed theoretically. The first benchmark function, f1, is a unimodal separable function. It is a very simple benchmark function and the easiest of the six in which to find the global minimum. It requires nearly no global search capability and mainly tests the local convergence speed of an algorithm. As shown in Table 2, the performance rank is TLBO>IHDETBO>DE>PSO. The second, f2, is a unimodal non-separable function. Although it is unimodal, its shape is spiral, so it is prone to oscillation, which makes it difficult to identify the search direction. Owing to this feature, finding the global minimum is difficult, and this benchmark function is often used to evaluate the global search capability of an algorithm. As shown in Table 2, the performance rank is IHDETBO>TLBO>DE>PSO. The third, f3, is a multimodal non-separable function. Although it is multimodal and has many local minima, most of them lie in a long, narrow region around the global minimum. Thanks to this feature, these local minima are not deceptive, so the global minimum is still easy to find, and the global search capability of an algorithm has little effect on the optimization result. As shown in Table 2, the performance rank is TLBO>IHDETBO>PSO>DE. The fourth, f4, is a multimodal separable function with very strong deception. Around the global minimum there are many local minima whose gradients are very similar to that of the global minimum, so an optimization algorithm may mistake one of them for the global minimum. It can therefore well evaluate the population diversity and global search capability of an algorithm. As shown in Table 2, the performance rank is IHDETBO>DE>TLBO>PSO, and IHDETBO performs much better than the others. The fifth, f5, is a typical non-linear multimodal non-separable function with a wide search space.
The variables in every dimension are closely related to and interact with each other, and there are many local minima, so it is usually considered a complex multimodal problem that is difficult for an optimization algorithm to handle. As shown in Table 2, the performance rank is IHDETBO>TLBO>PSO>DE. Similar to f5, the sixth function, f6, is also a typical non-linear multimodal non-separable function. In the D-dimensional search space there are about 10^D local minima, and the shapes of these irregular peaks are uneven and jump up and down, so traditional gradient-based algorithms often perform poorly on it and the global minimum is difficult to find. As shown in Table 2, the performance rank is IHDETBO>TLBO>PSO>DE, and IHDETBO performs much better than the others.
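For reference, two widely used functions exhibiting the characteristics discussed above can be written down directly. Sphere is a standard unimodal separable (f1-like) choice and Griewank a standard multimodal non-separable (f5-like) choice; they are illustrations of the function classes, not necessarily the authors' exact suite.

```python
import math

def sphere(xs):
    """Unimodal, separable: a single minimum 0 at the origin."""
    return sum(x * x for x in xs)

def griewank(xs):
    """Multimodal, non-separable: the product term couples all variables,
    producing many regularly spaced local minima; global minimum 0 at 0."""
    s = sum(x * x for x in xs) / 4000.0
    p = math.prod(math.cos(x / math.sqrt(i)) for i, x in enumerate(xs, 1))
    return 1.0 + s - p
```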
In summary, the classical DE algorithm has very strong global search capability, but its convergence speed is slow. In the classical TLBO algorithm, every individual tries its best to approach the teacher individual in the teacher phase; then, in the learner phase, positive learning and reverse learning are carried out when an excellent partner or a poor partner is selected, respectively. So, its local search capability is very strong and its convergence speed is very high, as can be seen from f1 and f3, but its performance is mediocre on complex deceptive benchmark functions such as f2, f4, f5, and f6. The algorithm proposed in this paper is divided into two phases, i.e., IDE and IT. In the IDE phase, the mutation factor Fd and the crossover probability CRd are both improved. In particular, the mutation factor Fd, generated from a Cauchy distribution, is very important for keeping the population diverse and improving the global search capability. The teacher phase of classical TLBO has very strong local search capability and leads to a very high local convergence speed; the improvement in the IT phase not only enhances the local search capability but also avoids losing the chance of finding better solutions through overreliance on the teacher individual, so as to better balance the exploration and exploitation capacities. According to the experimental results, the proposed algorithm performs better on high-dimensional non-linear multimodal benchmark functions, which are often considered mathematical models of complex engineering problems such as cloud service composition for CMfg.
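The Cauchy-based diversity mechanism can be illustrated with a JADE-style sampler in the spirit of [24]; the location and scale values here are illustrative assumptions, not the paper's exact Fd update rule.

```python
import math
import random

def sample_f(location=0.5, scale=0.1):
    """Draw a mutation factor from a Cauchy(location, scale) distribution.
    The heavy tails occasionally produce large factors, which helps preserve
    population diversity; values > 1 are clipped to 1 and values <= 0 are
    resampled (JADE-style truncation)."""
    while True:
        # Inverse-CDF sampling of the Cauchy distribution.
        f = location + scale * math.tan(math.pi * (random.random() - 0.5))
        if f > 1.0:
            return 1.0
        if f > 0.0:
            return f
```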
5.2. Case Study
CMfg is a complex manufacturing system. Taking car manufacturing as an example, the automobile industry is a large and complex manufacturing system involving more than 200 industry fields such as design, materials, electronic equipment, and so on. For every automaker, nearly 70% of spare parts are outsourced. In this paper, we take tire manufacturing as a case study; it involves raw material production, tire production, hub production, wheel assembly, vehicle assembly, and auto dealing. We can therefore decompose the task into 5 subtasks: raw material production is subtask 1; tire production and hub production are subtasks 2.1 and 2.2, which can be executed in parallel and are together regarded as subtask 2; wheel assembly is subtask 3; vehicle assembly is subtask 4; and auto dealing is subtask 5.
5.2.1. Case Data
The case data are from [21]. Assume that a user needs 1000 cars and submits the requirements to the CMP; the CMP then searches several candidate CSs for each subtask, as shown in Table 3. The amounts of the subtasks are 4000, 4000, 4000, 1000, and 1000, respectively, and the time consumption for each spare part (in hours) is shown in column t of Table 3.
Table 3: List of candidate CSs.

      Subtask 1   Subtask 2   Subtask 3   Subtask 4   Subtask 5
CS    t           t           t           t           t
1     0.80        0.0873      0.0078      0.3040      0.3333
2     0.70        0.0775      0.0071      0.2670
3     0.63        0.0685      0.0058      0.2812
4     0.61        0.0901      0.0065
5     0.57        0.0823      0.0069
6     0.64        0.0734      0.0069
7     0.71                    0.0060
8     0.76                    0.0063
9     0.55
5.2.2. Objective Function
The objective function is the goal of MBSPHE-CSCCM. Normally, it is significant to optimize the QoS according to the customer's preferences. For the sake of discussion, we take the production time as the optimization objective, so the objective function is defined as

\[
\min Z = T_{\max}, \tag{28}
\]

where $T_{\max}=\max\{T_1,T_2,\dots,T_n\}$ and $T_i$ is the production time of the $i$th batch subtask. Aiming at minimizing time consumption, the production results of each CS for subtask $i$ are delivered to the CS, among the candidate CSs for the following subtask $(i+1)$, whose production plan starts the earliest and whose related production time is the largest. So, we first establish the time matrix $T_{end}$:

\[
T_{end}=
\begin{pmatrix}
T_{11} & T_{12} & \cdots & T_{1N}\\
T_{21} & T_{22} & \cdots & T_{2N}\\
\vdots & \vdots & \ddots & \vdots\\
T_{M_1 1} & T_{M_2 2} & \cdots & T_{M_N N}
\end{pmatrix}
=
\begin{pmatrix}
(s_{11},x_{11},Te_{11}) & (s_{12},x_{12},Te_{12}) & \cdots & (s_{1N},x_{1N},Te_{1N})\\
(s_{21},x_{21},Te_{21}) & (s_{22},x_{22},Te_{22}) & \cdots & (s_{2N},x_{2N},Te_{2N})\\
\vdots & \vdots & \ddots & \vdots\\
(s_{M_1 1},x_{M_1 1},Te_{M_1 1}) & (s_{M_2 2},x_{M_2 2},Te_{M_2 2}) & \cdots & (s_{M_N N},x_{M_N N},Te_{M_N N})
\end{pmatrix}, \tag{29}
\]

where every element $T_{ji}$ of $T_{end}$ contains three variables: the first, $s_{ji}$, is the start time, initialized to 0; the second, $x_{ji}$, is the subtask amount of $CS_{ji}$, obtained from an individual of IHDETBO; and the third, $Te_{ji}$, is the end time of the subtask for each CS, initialized to $Te_{ji}=t_{ji}\cdot x_{ji}$, where $i=1,2,\dots,N$, $j=1,2,\dots,M_i$, and $t_{ji}$ is the corresponding time consumption for each spare part from Table 3. The transportation scheme between subtasks is then designed as Algorithm 1.
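The initialization step can be sketched as follows (a sketch only: for readability the nested lists index subtasks first and candidate CSs second, the unit times are taken from Table 3, and the allocation is a hypothetical decoding of an IHDETBO individual):

```python
def init_tend(t, x):
    """Build the initial time matrix: each cell is (s, x, Te) with start
    time s = 0 and end time Te = t * x for the allocated amount x."""
    return [[(0.0, x_ji, t_ji * x_ji) for t_ji, x_ji in zip(t_i, x_i)]
            for t_i, x_i in zip(t, x)]

# Unit times of the first two CSs of subtasks 1 and 2 (Table 3) and a
# hypothetical allocation:
t = [[0.80, 0.70], [0.0873, 0.0775]]
x = [[100, 200], [150, 150]]
tend = init_tend(t, x)
```

For example, CS2 of subtask 1 here starts at time 0 and ends at 0.70 x 200 = 140 hours.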
Algorithm 1: Algorithm of the objective function. (Here j' denotes the CS chosen in row n-1 and j the CS chosen in row n.)

for n = 2 : N
    while Σ_{j=1}^{M_{n-1}} x_{j,n-1} ≠ 0
        Examine row (n-1); choose T_{j',n-1} such that x_{j',n-1} ≠ 0 and Te_{j',n-1} is the smallest;
        Examine row n; choose T_{j,n} such that x_{j,n} ≠ 0, s_{j,n} is the smallest, and Te_{j,n} - s_{j,n} is the largest;
        if s_{j,n} < Te_{j',n-1}
            if x_{j',n-1} > x_{j,n}
                Te_{j,n} = s_{j,n} = Te_{j',n-1} + t_{j,n}·x_{j,n};
                x_{j',n-1} = x_{j',n-1} - x_{j,n};
                x_{j,n} = 0;
            else
                s_{j,n} = Te_{j',n-1} + t_{j,n}·x_{j',n-1};
                x_{j,n} = x_{j,n} - x_{j',n-1};
                x_{j',n-1} = 0;
                Te_{j,n} = s_{j,n} + t_{j,n}·x_{j,n};
            end
        else
            if x_{j',n-1} > x_{j,n}
                Te_{j,n} = s_{j,n} = s_{j,n} + t_{j,n}·x_{j,n};
                x_{j',n-1} = x_{j',n-1} - x_{j,n};
                x_{j,n} = 0;
            else
                s_{j,n} = s_{j,n} + t_{j,n}·x_{j',n-1};
                x_{j,n} = x_{j,n} - x_{j',n-1};
                x_{j',n-1} = 0;
                Te_{j,n} = s_{j,n} + t_{j,n}·x_{j,n};
            end
        end
    end
end
Examine column N; return the largest Te_{j,N};
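The batch hand-off idea in Algorithm 1 can be illustrated with a deliberately simplified two-stage sketch. This is one possible reading, in which the earliest-finishing upstream lot is handed to the earliest-available downstream service; the full algorithm's tie-breaking rules and multi-row bookkeeping are omitted, and the upstream and downstream totals are assumed equal.

```python
def two_stage_makespan(up, down):
    """Greedy batch hand-off between two consecutive production stages.
    `up` and `down` are lists of services, each a dict with unit time "t"
    and amount "x". All upstream services start at time 0, so service j
    finishes its lot at t_j * x_j; finished lots are delivered downstream
    and processed in batches of min(up_x, down_x) units, starting no
    earlier than the lot's arrival. Returns the latest downstream end time."""
    ups = [{"te": s["t"] * s["x"], "x": s["x"]} for s in up]
    downs = [{"te": 0.0, "x": s["x"], "t": s["t"]} for s in down]
    while any(u["x"] > 0 for u in ups):
        u = min((u for u in ups if u["x"] > 0), key=lambda v: v["te"])
        d = min((d for d in downs if d["x"] > 0), key=lambda v: v["te"])
        lot = min(u["x"], d["x"])
        start = max(d["te"], u["te"])   # wait until the lot has arrived
        d["te"] = start + d["t"] * lot  # process the received batch
        u["x"] -= lot
        d["x"] -= lot
    return max(d["te"] for d in downs)
```

With two upstream services (unit times 1.0 and 2.0 hours, 4 units each) feeding one downstream service (0.5 hours per unit, 8 units), the downstream service processes the first lot from hour 4 to hour 6, waits for the second lot until hour 8, and finishes at hour 10.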
5.2.3. Experimental Results
In this experiment, the parameters are set as follows: the population size N is 50, the positive constant c is 0.1, and the maximum iteration is 1000. Based on the proposed IHDETBO algorithm, the case data, and the objective function, the experimental results are shown in Tables 4 and 5. Table 4 shows the production times of the 50 schemes, which correspond to the individuals (integer arrays) in the last generation. We can conclude that individual No. 20 is the best one, with a production time of 444.208 hours. The subtask amount of every CS in the best scheme, indicated as an integer in the corresponding genebit of individual No. 20, is shown in Table 5.
Table 4: The production time of every scheme (individual) in the last generation.

No.   1        2        3        4        5        6        7        8        9
T     457.318  452.676  448.705  444.982  452.278  453.831  486.805  453.236  476.361
No.   10       11       12       13       14       15       16       17       18
T     483.416  450.803  445.757  449.945  487.402  450.2    486.736  449.795  451.612
No.   19       20       21       22       23       24       25       26       27
T     453.816  444.208  453.446  449.327  488.205  447.775  484.68   486.12   484.042
No.   28       29       30       31       32       33       34       35       36
T     485.717  471.496  484.962  450.663  453.67   447.113  449.442  448.972  452.468
No.   37       38       39       40       41       42       43       44       45
T     453.126  446.577  470.006  454.908  447.644  451.225  449.972  457.097  450.997
No.   46       47       48       49       50
T     482.804  478.697  458.643  454.673  453.376
Table 5: The subtask amount of every CS.

Subtask 1   CS      1    2    3    4    5    6    7    8    9
            Amount  326  464  435  504  546  420  368  462  475
Subtask 2   CS      1    2    3    4     5    6
            Amount  281  955  606  1026  810  322
Subtask 3   CS      1    2    3    4    5    6    7    8
            Amount  634  382  755  495  716  250  494  274
Subtask 4   CS      1    2    3
            Amount  553  307  140
Subtask 5   CS      1
            Amount  1000
Figure 5 shows the batch division and transportation scheme in detail. Each circle indicates a candidate CS for a subtask, and the number inside indicates the corresponding subtask amount. The direction of an arrow indicates the delivery destination for the next subtask, and the number on the arrow indicates the delivery amount. The circles marked in yellow indicate, for each subtask, the CS that is the last to complete its own task, and the related solid lines indicate the production line that takes the longest time. We can conclude that the cloud manufacturing task has been divided into several small batches executed in a parallel-hybrid manner. Obviously, thanks to MBSPHE-CSCCM, the production time is reduced considerably.
Figure 5: Schematic of the best scheme.
6. Conclusions and Future Work
With the intense competition in the global market and increasingly serious energy and environmental issues, the integration and sharing of manufacturing resources have become more and more important in the manufacturing industry. As one of the new manufacturing paradigms, CMfg has been proposed to address these problems and has gradually come into focus. In practice, CMfg is a large-scale networked distributed manufacturing mode. The manufacturing resources, which are often scattered all over the world, are characterized by massiveness, heterogeneity, complexity, and coarse granularity. Besides, the transportation among them is very complex because of today's advanced logistics. Generally speaking, CMfg is a typical complex system in a complex environment, in which the manufacturing resources are encapsulated as CSs. Because the total amount of a task may be very large in such a system, the service composition problem also becomes very complex. In this paper, we begin with a discussion of the state of the art of CMfg and then introduce the manufacturing scheme named MBSPHE-CSCCM, in which a massive task can be transformed into multi-batch subtasks that are executed in a parallel-hybrid manner. To address the service composition problem of MBSPHE-CSCCM, a novel optimization method, IHDETBO, is proposed. This method is divided into two phases. The first is the IDE phase: based on the basic concepts and operations of DE, the factors F and CR are both improved and calculated with an adaptive strategy to enhance population diversity and generate better individuals. The second is the IT phase: the teacher phase of classical TLBO is adopted, and the factor TF is also improved to make the simulation more consistent with actual practice. In addition, to adapt to the special conditions of CMfg, block operations including block encoding and initialization, block mutation, block crossover, block selection, and block teaching are also proposed.
Finally, with simulation experiments and a case study, we demonstrate the advantages of the proposed method. MBSPHE-CSCCM plays a very important role in CMfg. Besides, many other problems of CMfg still need to be studied, such as task decomposition, the evaluation of CS QoS, and CS selection based on performance matching, which deserve our further consideration.
Data Availability
The experimental data and case study data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was partly supported by the National Natural Science Foundation of China (Grants nos. 61701443, 61876168, and 61403342) and Zhejiang Provincial Natural Science Foundation of China (LY18F030020).
References

[1] B. H. Li, L. Zhang, S. L. Wang, F. Tao, J. W. Cao, X. D. Jiang, X. Song, and X. D. Chai, "Cloud manufacturing: a new service-oriented networked manufacturing model," 2010.
[2] R. Lovas, A. Farkas, A. C. Marosi, S. Ács, J. Kovács, Á. Szalóki, and B. Kádár, "Orchestrated platform for cyber-physical systems," 2018.
[3] A. Azizi, "Introducing a novel hybrid artificial intelligence algorithm to optimize network of industrial applications in modern manufacturing," 2017.
[4] L. Ribeiro and M. Hochwallner, "On the design complexity of cyberphysical production systems," 2018.
[5] D. Wang, J. Fan, H. Fu, and B. Zhang, "Research on optimization of big data construction engineering quality management based on RNN-LSTM," 2018.
[6] S. Cannella, R. Dominguez, J. M. Framinan, and B. Ponte, "Evolving trends in supply chain management: complexity, new technologies, and innovative methodological approaches," 2018.
[7] A. Bala and I. Chana, "Autonomic fault tolerant scheduling approach for scientific workflows in cloud computing," 2015, doi:10.1177/1063293X14567783.
[8] F. Tao, Y. Laili, L. Xu, and L. Zhang, "FC-PACO-RM: a parallel method for service composition optimal-selection in cloud manufacturing system," 2013, doi:10.1109/TII.2012.2232936.
[9] J. Wu, D. Bin, X. Feng, Z. Wen, and Y. Zhang, "GA based adaptive singularity-robust path planning of space robot for on-orbit detection," 2018.
[10] S. Chen and D. Tan, "A SA-ANN-based modeling method for human cognition mechanism and the PSACO cognition algorithm," 2018.
[11] S. M. Abd-Elazim and E. S. Ali, "A hybrid particle swarm optimization and bacterial foraging for power system stability enhancement," 2015, doi:10.1002/cplx.21601.
[12] Z.-J. Wang, Z.-Z. Liu, X.-F. Zhou, and Y.-S. Lou, "An approach for composite web service selection based on DGQoS," 2011, doi:10.1007/s00170-011-3230-9.
[13] A. S. Oshaba, E. S. Ali, and S. M. Abd Elazim, "PI controller design using artificial bee colony algorithm for MPPT of photovoltaic system supplied DC motor-pump load," 2016, doi:10.1002/cplx.21670.
[14] R. Storn and K. Price, "Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces," 1997, doi:10.1023/A:1008202821328.
[15] R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems," 2012, doi:10.1016/j.ins.2011.08.006.
[16] R. V. Rao and V. D. Kalyankar, "Parameter optimization of modern machining processes using teaching-learning-based optimization algorithm," 2013.
[17] Y.-W. Zhao and L.-N. Zhu, "Service-evaluation-based resource selection for cloud manufacturing," 2016, doi:10.1177/1063293X16646634.
[18] L. N. Zhu, Y. W. Zhao, C. Zhao, and G. J. Shen, "A multidimensional extension-based method for resource performance matching in cloud manufacturing," 2018.
[19] L. Zhu, P. Li, X. Yang, G. Shen, and Y. Zhao, "EE-RJMTFN: a novel manufacturing risk evaluation method for alternative resource selection in cloud manufacturing," 2018, doi:10.1177/1063293X18795210.
[20] F. Tao, L. Zhang, H. Guo, Y.-L. Luo, and L. Ren, "Typical characteristics of cloud manufacturing and several key issues of cloud service composition," 2011.
[21] L. Zhu, W. Wang, and G. Shen, "Resource optimization combination method based on improved differential evolution algorithm for cloud manufacturing," 2017.
[22] G. Breiter and M. Behrendt, "Life cycle and characteristics of services in the world of cloud computing," 2009.
[23] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," 2009, doi:10.1109/TEVC.2008.927706.
[24] J. Q. Zhang and A. C. Sanderson, "JADE: adaptive differential evolution with optional external archive," 2009, doi:10.1109/TEVC.2009.2014613.
[25] J. Zhou and X. Yao, "Multi-population parallel self-adaptive differential artificial bee colony algorithm with application in large-scale service composition for cloud manufacturing," 2017, doi:10.1016/j.asoc.2017.03.017.
[26] Y. Wang, Z. X. Cai, and Q. F. Zhang, "Differential evolution with composite trial vector generation strategies and control parameters," 2011, doi:10.1109/TEVC.2010.2087271.
[27] D. P. Rai, "Comments on 'A note on multi-objective improved teaching-learning based optimization algorithm (MO-ITLBO)'," 2017.
[28] J. Brest, S. Greiner, B. Bošković, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," 2006, doi:10.1109/TEVC.2006.872133.
[29] E. S. H. Hou, N. Ansari, and H. Ren, "A genetic algorithm for multiprocessor scheduling," 1994, doi:10.1109/71.265940.
[30] R. V. Rao and V. Patel, "An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems," 2013.
[31] J. Zhou and X. Yao, "Hybrid teaching-learning-based optimization of correlation-aware service composition in cloud manufacturing," 2017, doi:10.1007/s00170-017-0008-8.