The main aim of this paper is to present a new approach for hybridizing two powerful metaheuristics, one inspired by physics and the other by a biological phenomenon. The first, the Fireworks Algorithm (FWA), is based on physical laws and imitates the explosion of fireworks; the second, the Grey Wolf Optimizer (GWO), belongs to the swarm intelligence methods and is based on the behavior of the grey wolf. In this work we analyze the advantages and weaknesses of both methods and propose a hybridization, denoted FWA-GWO, that compensates for the weaknesses of each; it is presented in more detail in this work. In addition, we present simulation results obtained with three different metaheuristics (FWA, GWO, and FWA-GWO) on a set of 22 benchmark functions in total. Finally, a statistical study comparing the three algorithms through hypothesis testing (Z-test) is presented to support the conclusions of this work.
1. Introduction
In recent years, computer science has created a fertile environment [1] for research, especially on algorithms that aim at solving different optimization problems [2], which means maximizing or minimizing an objective depending on the goal and requirements of the particular problem [3].
At the present time, a family of recent algorithms having great impact is the so-called bioinspired algorithms [4], because they offer an interesting way of combining natural phenomena, mathematics, and computation. Combination is a key word among these algorithms; for example, in genetic algorithms [5], combination is performed with two operators, crossover and mutation.
Continuing with the combination concept, a related concept is hybridization [6], which we understand as a process in which discrete structures that can exist separately are combined to generate new structures, objects, and methods.
In recent works, it has been demonstrated that the hybridization of bioinspired algorithms yields very good results. In this regard, the main contribution of this work is the proposed hybridization between the Fireworks Algorithm (FWA) [7] and the Grey Wolf Optimizer (GWO) [8], which takes advantage of their best features and combines them to obtain a new hybrid algorithm with better overall performance for solving problems. This hybridization is described and explained in more detail in the following sections of this paper.
The remainder of the paper is organized as follows: we start with basic concepts in Section 2, which is organized in two parts, FWA in the first part and GWO in the second part. In Section 3, we present the proposed FWA-GWO method, then the simulation results are presented in Section 4, and finally, we conclude the paper, by mentioning possible future work in Section 5.
2. Basic Concepts
In this section, the basic concepts about FWA and GWO are presented to provide the reader with a background for understanding the proposed hybrid approach (FWA-GWO).
2.1. Fireworks Algorithm (FWA)
The FWA is a metaheuristic method inspired by the behavior of exploding fireworks [9]. Every firework performs an explosion process that generates a number of sparks, which are placed in a local region around the corresponding firework and represent possible solutions in the search space [10, 11].
2.1.1. Number of Sparks
Equations (1) are used to calculate the number of sparks [12, 13]:

$$\text{Minimize } f(x_i) \in \mathbb{R},\quad x_{i,\min} \le x_i \le x_{i,\max},$$
$$s_i = m\cdot\frac{y_{\max} - f(x_i) + \varepsilon}{\sum_{i=1}^{n}\left(y_{\max} - f(x_i) + \varepsilon\right)},\tag{1}$$
$$\hat{s}_i = \begin{cases}\operatorname{round}(a\cdot m) & \text{if } s_i < a\cdot m,\\[2pt] \operatorname{round}(b\cdot m) & \text{if } s_i > b\cdot m,\ a<b<1,\\[2pt] \operatorname{round}(s_i) & \text{otherwise.}\end{cases}$$
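The spark-count rule in (1) can be sketched in Python as follows. This is an illustrative sketch for minimization, not the authors' code; the values of m, a, and b used in the test below are assumptions chosen only for the example.

```python
def spark_counts(fitness, m=50, a=0.04, b=0.8, eps=1e-12):
    """Number of sparks per firework, following Eq. (1) (minimization).

    Better (lower) fitness gets more sparks; the round/clamp rules
    keep every count between round(a*m) and round(b*m).
    """
    y_max = max(fitness)
    denom = sum(y_max - f + eps for f in fitness)
    counts = []
    for f in fitness:
        s = m * (y_max - f + eps) / denom
        if s < a * m:
            counts.append(round(a * m))
        elif s > b * m:
            counts.append(round(b * m))
        else:
            counts.append(round(s))
    return counts
```

With fitness values [1.0, 2.0, 3.0], m = 10, a = 0.1, and b = 0.6, the best firework is clamped to 6 sparks and the worst is raised to 1.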
2.1.2. Explosion Amplitude
The explosion amplitude [14] is calculated for each firework by the following equation:

$$A_i = \hat{A}\cdot\frac{f(x_i) - y_{\min} + \epsilon}{\sum_{i=1}^{n}\left(f(x_i) - y_{\min} + \epsilon\right)}.\tag{2}$$
2.1.3. Generating Sparks
When a firework explodes, its sparks take different random directions; the number of randomly affected dimensions is calculated with the following equation:

$$z = \operatorname{round}(d\cdot\operatorname{rand}(0,1)).\tag{3}$$
2.1.4. Selection of Locations
After the explosion of the fireworks, to maintain the diversity of the sparks, n−1 locations are selected according to their distance to the other locations. The distance between x_i and the other locations, and the resulting selection probability, are given by

$$R(x_i) = \sum_{j\in K} d(x_i, x_j) = \sum_{j\in K} \left\lVert x_i - x_j\right\rVert,\tag{4}$$
$$p(x_i) = \frac{R(x_i)}{\sum_{j\in K} R(x_j)}.\tag{5}$$
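As a small illustration of the selection rule in (4) and (5), the sketch below computes selection probabilities for one-dimensional locations; it is a hedged reading of the equations, not the authors' implementation. Locations far from the crowd get a higher probability, which preserves diversity.

```python
def selection_probabilities(points):
    """Selection probabilities from Eqs. (4)-(5) for 1-D locations.

    R(x_i) is the summed distance from x_i to all other locations;
    isolated locations receive a higher selection probability.
    """
    R = [sum(abs(xi - xj) for xj in points) for xi in points]
    total = sum(R)
    return [r / total for r in R]
```

For the locations [0, 1, 10], the outlying point at 10 gets the largest probability (0.475), while the two crowded points share the rest.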
Figure 1 shows the general flowchart of the FWA. Each equation presented above was described by the original author of the method [7]; more details can be found in the original paper [11] and in the variants presented in [15–17].
General flowchart of the FWA.
2.2. Grey Wolf Optimizer
The Grey Wolf Optimizer (GWO) algorithm [18] is a metaheuristic created by Seyedali Mirjalili in 2014. The author takes advantage of the main features that the grey wolf exhibits in nature, based on the research of Muro et al. [19] on the hunting strategies of Canis lupus, the grey wolf. When the algorithm was designed, the following features were highlighted: the hierarchy within the pack [20] and the hunting mechanism.
Basically, the optimization process is guided by the three best solutions in the pack, that is, in the whole population. The best solution is called the alpha (α) wolf, the second and third best solutions are called the beta (β) and delta (δ) wolves, respectively, and the rest of the candidate solutions are called omega (ω) wolves.
In addition, the hunting mechanism of the grey wolf consists of the following main phases: first, pursue and approach the prey; then encircle and harass the prey until it stops moving; and finally attack the prey. In the algorithm this behavior is simulated by

$$\vec{D} = \left|\vec{C}\cdot\vec{X}_p(t) - \vec{X}(t)\right|,\tag{6}$$
$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A}\cdot\vec{D}.\tag{7}$$

In this case, (6) represents the distance between the best solution $\vec{X}_p(t)$, weighted by a random motion $\vec{C}$ that is defined in (9) as a random value in the range [0, 2], and the solution being evaluated, $\vec{X}(t)$.
The next position of the current solution is given by (7), which subtracts from the best solution the distance obtained in (6) multiplied by a weight $\vec{A}$, defined as

$$\vec{A} = 2\vec{a}\cdot\vec{r}_1 - \vec{a},\tag{8}$$
$$\vec{C} = 2\cdot\vec{r}_2.\tag{9}$$

The A and C coefficients determine how exploration and exploitation occur in the algorithm [21], both directly and indirectly; extensive information on these coefficients can be found in the paper where the method was originally proposed [22].
Equations (10) and (11) are the same as (6) and (7), respectively, but here the best solution is replaced by the leaders of the pack, namely, alpha, beta, and delta, as mentioned above:

$$\vec{D}_\alpha = \left|\vec{C}_1\cdot\vec{X}_\alpha - \vec{X}\right|,\quad \vec{D}_\beta = \left|\vec{C}_2\cdot\vec{X}_\beta - \vec{X}\right|,\quad \vec{D}_\delta = \left|\vec{C}_3\cdot\vec{X}_\delta - \vec{X}\right|,\tag{10}$$
$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1\cdot\vec{D}_\alpha,\quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2\cdot\vec{D}_\beta,\quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3\cdot\vec{D}_\delta,\tag{11}$$
$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}.\tag{12}$$

Finally, (12) gives the next position of the solution being evaluated, which is an average based on the three best wolves in the pack. These equations capture the main inspiration for the GWO algorithm: the hierarchical pyramid of leadership and the hunting mechanism. Figure 2 shows the flowchart of the algorithm.
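A minimal sketch of one GWO position update for one-dimensional positions, assuming the standard forms of (8)-(12); this is an illustrative function, not the original implementation.

```python
import random

def gwo_step(wolves, alpha, beta, delta, a):
    """One GWO update of 1-D positions, Eqs. (8)-(12).

    wolves: current 1-D positions; alpha, beta, delta: leader positions;
    a: coefficient that decreases from 2 to 0 over the iterations.
    """
    new_positions = []
    for x in wolves:
        parts = []
        for leader in (alpha, beta, delta):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a           # Eq. (8)
            C = 2 * r2                   # Eq. (9)
            D = abs(C * leader - x)      # Eq. (10)
            parts.append(leader - A * D)  # Eq. (11)
        new_positions.append(sum(parts) / 3)  # Eq. (12)
    return new_positions
```

Note that when a = 0 (end of the run), A = 0 in (8) and every wolf moves exactly to the average of the three leaders, which shows the pure exploitation limit of the update.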
General flowchart of the GWO.
3. Proposed FWA-GWO
The main goal of the hybridization of the two methods [6] is to take advantage of their main features. In this paper we use the following features of each method to achieve the hybridization:
FWA:
Initialization
Explosion amplitude
Locations
GWO:
Hierarchical pyramid
Interaction among the population
Next location of population
Figure 3 shows a general flowchart of the proposed hybridization between the FWA and GWO algorithms; the red blocks represent the features taken from the FWA and the blue blocks the features taken from the GWO.
General flowchart of the FWA-GWO.
In order to explain the performance of the FWA-GWO, we are presenting below the details of each block in the flowchart that we illustrated in Figure 3.
3.1. Select n Initial Packs and m Wolves
We start by selecting a number n of locations as in the FWA, but here n represents the number of packs in the population. In addition, we need to select the number of wolves in each pack, represented by m. It is important to mention that the conventional FWA uses a number of function evaluations as stopping criterion, whereas the GWO uses a number of iterations, so (13) shows the relation between both types of stopping criteria:

$$T = \frac{O}{n\cdot m},\tag{13}$$

where T is the total number of iterations, O is the number of function evaluations, n is the number of packs, and m is the number of wolves in each pack.
For example if we have 4 packs and 6 wolves for each pack, we have 24 possible solutions for each iteration in the algorithm; therefore, for 15,000 function evaluations the FWA-GWO algorithm will execute 625 iterations [23].
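The example above can be checked with a one-line helper sketching (13); integer division is assumed here because the counts divide evenly in the configurations used in this paper.

```python
def total_iterations(evaluations, packs, wolves_per_pack):
    """Eq. (13): iterations T from O function evaluations with n*m wolves."""
    return evaluations // (packs * wolves_per_pack)
```

For 15,000 evaluations, 4 packs of 6 wolves give 625 iterations; the other two configurations used later (5x6 and 8x5) give 500 and 375 iterations.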
3.2. Set n Initial Packs with m Wolves for the Search Space
This algorithm adds a way to select the n initial locations: depending on the number of packs, the search space is divided into the corresponding number of partitions. For example, consider a search space with lower and upper bounds defining the range [−100, 100]:

$$TS = ub - lb,\tag{14}$$

where TS is the total search space and ub and lb represent the upper and lower bounds, respectively.
We need subspaces in which each pack initializes its subpopulation, and this behavior is represented by

$$SS = \frac{TS}{n}.\tag{15}$$

SS is the width of the range assigned to each pack. These ranges are calculated with a general linear function:

$$f(n) = an + b,\tag{16}$$

where a = SS, n is the pack index, and b is a calculated constant such that f(1) = lb of the search space. The search ranges for the packs are then [f(1), f(2)], [f(2), f(3)], …, [f(n), f(n+1)], with f(n+1) = ub.
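The partition in (14)-(16) can be sketched as follows; this illustrative helper returns the (lower, upper) pair of each pack's subspace.

```python
def pack_ranges(lb, ub, n):
    """Per-pack initialization ranges from Eqs. (14)-(16).

    SS = (ub - lb) / n is the width of each subspace; pack k is
    initialized inside [f(k), f(k+1)], with f(1) = lb and f(n+1) = ub.
    """
    ss = (ub - lb) / n  # Eq. (15)
    return [(lb + k * ss, lb + (k + 1) * ss) for k in range(n)]
```

With 4 packs on [−100, 100] this reproduces the partition of the example in Figure 4: [−100, −50], [−50, 0], [0, 50], and [50, 100].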
In Figure 4, we can find an example of this initialization method with 4 packs and a search space in a range of [−100, 100] with 2 dimensions.
Example of initialization for each pack in FWA-GWO.
An advantage of this algorithm is the initialization of the population in subspaces of the search space: FWA-GWO partitions the search space of the function to be evaluated based on the number of packs, with the aim of covering most of the search space and thus ensuring exploration. Figure 5 shows the initialization of the population based on the leader.
Initialization of the population in FWA-GWO.
3.3. Update the Positions for Each Agent Based on Its Packs
As we mentioned above, this part simulates the update of the wolves based on the leaders of each pack; to better understand its operation, we illustrate this behavior graphically.
Figure 6 shows an example of how the positions are updated in FWA-GWO; it is a two-dimensional plot of a function in the range [−100, 100] with an optimum equal to zero. In Figure 6 we can also observe, in a graphical way, the convergence of all the search agents in the algorithm. The blue, red, aqua, and green points represent the members of the 4 packs in this example, and the magenta points represent the initialization of each pack.
Convergence of search agents in FWA-GWO.
On the other hand, to keep the equilibrium of the algorithm (FWA-GWO), we decided to partition the search process into 3 phases, dividing the total number of iterations into 3 equal parts. The first phase covers the interval from 0% to 33.3% of the iterations and is considered the exploration phase; the second phase covers the interval from 33.3% to 66.6%, in which the FWA-GWO algorithm alternates between exploring and exploiting; and the third phase, considered the exploitation phase, takes the remaining interval (66.6% to 99.9%) of the total number of iterations.
It is worth mentioning that the number of packs is not constant while FWA-GWO is running. In other words, the objective of having three phases in this algorithm is to reduce the number of packs according to the current phase. In the first phase (exploration) the algorithm works with the numbers of packs and wolves selected at the beginning; in the second phase, the number of packs is reduced by half according to (17); and in the third phase the FWA-GWO algorithm always finishes the optimization with only one big pack:

$$\bar{n} = \left\lfloor \frac{n}{2} \right\rfloor.\tag{17}$$

Table 1 shows examples of the relationship between the phases and the number of packs, based on the number of packs that were initialized.
Relationship between phases and number of packs in the FWA-GWO.

Phase   Number of packs
1       2   3   4   5   6   7   8
2       1   1   2   2   3   3   4
3       1   1   1   1   1   1   1
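The phase schedule above can be expressed as a small helper; this is a sketch that assumes floor division in (17), which matches the values shown in Table 1 for odd pack counts.

```python
def packs_in_phase(n_initial, progress):
    """Number of active packs given the search progress in [0, 1].

    Phase 1 (first third): all n packs (exploration).
    Phase 2 (second third): half the packs, Eq. (17).
    Phase 3 (last third): one big pack (exploitation).
    """
    if progress < 1 / 3:
        return n_initial
    if progress < 2 / 3:
        return max(1, n_initial // 2)
    return 1
```

For example, a run that starts with 7 packs works with 3 packs in the second phase and finishes with a single pack.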
In addition, we present this information (Table 1) in a graphical way to explain the relationship in more detail. Figure 7 represents phase 1, where n = 4 and m = 6.
Example of phase 1 in FWA-GWO.
In Figure 8 we present the second phase; the new value n = 2 is obtained from (17), and the new value of m is 12, because the population is divided among the new number of packs (n). Note that the total population is the same as was originally initialized.
Example of phase 2 in FWA-GWO.
Finally, in Figure 9 we can find only one big pack, as mentioned above. This is an example of how the population is distributed across the 3 phases of the algorithm; in every phase, each pack has its corresponding leaders (alpha, beta, and delta).
Example of phase 3 in FWA-GWO.
3.4. Mathematical Model of FWA-GWO
For the hybridization between FWA and GWO, we modified the GWO equations that calculate the distance used to update the next position of the current solution, introducing the explosion amplitude of the FWA into this distance (in place of coefficient C in (6)). Equation (18) shows these changes:

$$A_n = \begin{cases} A_1 = 0.5 & n = 1,\\[2pt] A_1 = 1,\ A_2 = 2 & n = 2,\\[2pt] \hat{A}\cdot\dfrac{f(x_n) - y_{\min} + \epsilon}{\sum_{i=1}^{n}\left(f(x_i) - y_{\min} + \epsilon\right)} & n \ge 3,\end{cases}\tag{18}$$

where A_n represents the explosion amplitude of each pack and n is the number of packs. When the number of packs is 1, the amplitude is 0.5; when the number of packs is 2, the amplitudes of the packs are 1 and 2, respectively [8]; and when the number of packs is equal to or greater than 3, we use the explosion amplitude formula of the FWA.
The parameters are set to 0.5, 1, and 2 when the number of packs is one or two because we are trying to keep the parameter values in the range [0, 2], since the parameter C in (6) of GWO, which these amplitudes replace, is the one that controls exploration and exploitation in the algorithm.
Equation (19) shows how the explosion amplitude parameter is normalized between 0 and 2 when the number of packs is greater than 2:

$$\hat{A}_n = \frac{2\cdot A_n}{\max(A_n)},\tag{19}$$

where $\hat{A}_n$ represents the normalized explosion amplitude of each pack and max(A_n) is the maximum of all the amplitudes.
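Equations (18) and (19) can be sketched together as follows. This is an illustrative reading of the equations, not the authors' code; note that the common factor Â in (18) cancels when normalizing with (19), so the sketch omits it.

```python
def pack_amplitudes(best_fitness, eps=1e-12):
    """Normalized per-pack explosion amplitudes, Eqs. (18)-(19).

    best_fitness: fitness of each pack leader (minimization).
    Fixed values are used for 1 or 2 packs; for 3 or more packs the
    FWA amplitude is computed and scaled into [0, 2].
    """
    n = len(best_fitness)
    if n == 1:
        return [0.5]
    if n == 2:
        return [1.0, 2.0]
    y_min = min(best_fitness)
    denom = sum(f - y_min + eps for f in best_fitness)
    A = [(f - y_min + eps) / denom for f in best_fitness]  # Eq. (18), A-hat omitted
    return [2 * a / max(A) for a in A]                     # Eq. (19)
```

With three or more packs, the worst pack always receives the maximum amplitude of 2, so it explores the widest region, while the best pack stays close to 0.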
The distance between the best wolf, weighted by a random motion, and the current wolf of each pack is calculated as follows:

$$\vec{D}_n = \left|\hat{A}_n\cdot\vec{X}_{p,n}(t) - \vec{X}_n(t)\right|,\tag{20}$$

where $\vec{D}_n$ is the distance obtained for each pack, $\vec{X}_{p,n}(t)$ is the leader of the pack, and $\vec{X}_n(t)$ is the current wolf of each pack.
The weight applied to each distance between the omega wolves and the leaders of the pack is defined as follows:

$$\vec{E}_n = 2\vec{a}\cdot\vec{r}_{1,n}.\tag{21}$$

In (21), $\vec{E}_n$ represents the weight used in (22), a is a value that decreases over the iterations in the range [2, 0], and $\vec{r}_{1,n}$ is a random value between 0 and 1 for each pack.
Equation (22) updates the next position of the current wolf:

$$\vec{X}_n(t+1) = \vec{X}_{p,n}(t) - \vec{E}_n\cdot\vec{D}_n,\tag{22}$$

where $\vec{X}_n(t+1)$ represents the next position, $\vec{X}_{p,n}(t)$ is the best wolf of each pack, and $\vec{D}_n$ and $\vec{E}_n$ are described in (20) and (21), respectively.
To introduce randomness into the method, (23) is used so that each leader produces different moves for each omega wolf:

$$\vec{C}_n = \hat{A}_n\cdot\vec{r}_{2,n}.\tag{23}$$

Equations (24) and (25) are the same as (10) and (11) in GWO, but now the leaders are represented per pack:

$$\vec{D}_{\alpha,n} = \left|\vec{C}_{1,n}\cdot\vec{X}_{\alpha,n} - \vec{X}_n\right|,\quad \vec{D}_{\beta,n} = \left|\vec{C}_{2,n}\cdot\vec{X}_{\beta,n} - \vec{X}_n\right|,\quad \vec{D}_{\delta,n} = \left|\vec{C}_{3,n}\cdot\vec{X}_{\delta,n} - \vec{X}_n\right|,\tag{24}$$
$$\vec{X}_{1,n} = \vec{X}_{\alpha,n} - \vec{E}_{1,n}\cdot\vec{D}_{\alpha,n},\quad \vec{X}_{2,n} = \vec{X}_{\beta,n} - \vec{E}_{2,n}\cdot\vec{D}_{\beta,n},\quad \vec{X}_{3,n} = \vec{X}_{\delta,n} - \vec{E}_{3,n}\cdot\vec{D}_{\delta,n},\tag{25}$$
$$\vec{X}_n(t+1) = \frac{\vec{X}_{1,n} + \vec{X}_{2,n} + \vec{X}_{3,n}}{3}.\tag{26}$$

In (26) we find the next position of the solution being evaluated, which, as in GWO, is an average of the three best wolves, but here it is computed per pack. Finally, Algorithm 1 presents the pseudocode of the FWA-GWO.
Algorithm 1: Pseudocode for the FWA-GWO.
Initialize n and m
Initialize the grey wolf population X_{n,i} (i = 1, 2, …, m) for each pack n
Initialize a, E_n and Â_n
Calculate the fitness of each search agent
X_{n,α} = the best search agent of pack n
X_{n,β} = the second best search agent of pack n
X_{n,δ} = the third best search agent of pack n
while (t < maximum number of iterations)
    for each search agent
        Update the position of the current search agent by Equation (26)
    end for
    Update a, Â_n and E_n
    Update n and m
    Calculate the fitness of all search agents
    Update X_{n,α}, X_{n,β} and X_{n,δ}
    t = t + 1
end while
return best(X_{1,α}, X_{2,α}, …, X_{n,α})
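The per-wolf update in (21) and (23)-(26) can be sketched as follows for one-dimensional positions; it is an illustrative sketch under the equations above, not the authors' implementation.

```python
import random

def update_wolf(x, leaders, A_hat, a):
    """Next position of one wolf from Eqs. (21) and (23)-(26).

    leaders: (alpha, beta, delta) positions of the wolf's pack;
    A_hat: the pack's normalized explosion amplitude, Eq. (19);
    a: coefficient decreasing from 2 to 0 over the iterations.
    """
    parts = []
    for leader in leaders:
        E = 2 * a * random.random()        # Eq. (21)
        C = A_hat * random.random()        # Eq. (23)
        D = abs(C * leader - x)            # Eq. (24)
        parts.append(leader - E * D)       # Eq. (25)
    return sum(parts) / 3                  # Eq. (26)
```

As with GWO, when a reaches 0 the weight E vanishes and the wolf moves to the average of its pack's three leaders, which is the pure exploitation limit of the hybrid update.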
4. Simulation Results and Discussion
In this section we present the benchmark functions used in this work.
Table 2 shows the equations of the first 13 benchmark functions [24–26] used in the tests with the algorithms (FWA, GWO, and FWA-GWO); they can be classified as unimodal and multimodal, respectively. Figure 10 shows the graphical representations of these benchmark functions in their 3D versions.
Examples of the unimodal and multimodal benchmark functions in their 3D versions.
In this paper we also used another set of 9 benchmark functions, called fixed-dimension multimodal functions; their equations are given in Table 3, together with the number of dimensions considered, the range of the search space, and the optimal value of each function. Finally, in Figure 11 we show the graphical representations of these benchmark functions in their 3D versions.
Examples of the fixed-dimension multimodal benchmark functions in their 3D versions.
For the experiments performed in this paper, we considered three different configurations of the FWA-GWO algorithm, described as follows:
Version 1
4 packs
6 wolves for each pack
625 iterations
15,000 function evaluations
Version 2
5 packs
6 wolves for each pack
500 iterations
15,000 function evaluations
Version 3
8 packs
5 wolves for each pack
375 iterations
15,000 function evaluations
For comparing the performance of all the algorithms, we performed hypothesis tests (Z-test) [22, 27, 28] with the following parameters:
μ1 = mean of the new method (FWA-GWO)
μ2 = mean of FWA or GWO
Claim: the mean of the FWA-GWO is lower than the mean of the original method
H0: μ1 ≥ μ2
Ha: μ1 < μ2 (claim)
α = 0.05
Critical value: Z0 = −1.645
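With 50 independent runs per method (the sample size reported in Section 4.3), the tabulated z-values are consistent with the standard two-sample Z statistic; the sketch below assumes that formula.

```python
import math

def z_value(mean_new, std_new, mean_ref, std_ref, n=50):
    """Two-sample Z statistic comparing mean results over n runs each."""
    return (mean_new - mean_ref) / math.sqrt(std_new**2 / n + std_ref**2 / n)
```

For example, the F5 row of Table 4 (GWO 28.3085 with standard deviation 8.5466 versus Hybrid V1 27.6283 with 0.5878) gives z ≈ −0.5614, matching the tabulated value; values below Z0 = −1.645 reject H0 in favor of the hybrid.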
The hypothesis test results are shown in the following tables, comparing FWA-GWO with FWA and FWA-GWO with GWO, respectively.
4.1. Comparison between GWO and FWA-GWO
In Tables 4–11, we are presenting a comparison and hypothesis tests between the GWO and the hybrid method for 30, 60, and 90 dimensions and the three different versions, respectively.
Comparison between GWO and the FWA-GWO in version 1 with 30 dimensions.

Function   GWO          STD          Hybrid V1    STD           Z-value
F1         1.45E-27     2.39E-27     6.43E-28     1.17E-27      -2.1364
F2         1.28E-15     1.01E-15     1.70E-16     1.63E-16      -7.6564
F3         1.96E-05     4.19E-05     8.47E-04     0.0025        2.3188
F4         8.40E-07     8.83E-07     0.0383       0.0584        4.6322
F5         28.3085      8.5466       27.6283      0.5878        -0.5614
F6         0.7575       0.3307       0.8856       0.4284        1.6727
F7         -843.5616    64.6298      -775.8908    98.3576       -4.0658
F8         13.2227      47.144       1.7697       6.7631        -1.7004
F9         154.0219     272.2662     9641.7985    11406.7060    5.8798
F10        4.6208       4.4183       1.7310       2.2942        -4.1045
F11        20.8930      0.0927       5.8606       9.4934        -11.1962
F12        0.0617       0.0779       0.0693       0.0297        0.6441
F13        0.0053       0.0103       0.00         0.00          -3.6561
Comparison between GWO and the FWA-GWO in version 2 with 30 dimensions.

Function   GWO          STD          Hybrid V2    STD           Z-value
F1         1.45E-27     2.39E-27     1.84E-22     7.49E-22      1.7398
F2         1.28E-15     1.01E-15     1.17E-13     1.20E-13      6.8015
F3         1.96E-05     4.19E-05     0.0044       0.0166        1.8497
F4         8.40E-07     8.83E-07     0.0409       0.0440        6.5681
F5         28.3085      8.5466       27.5222      0.5276        -0.6493
F6         0.7575       0.3307       0.5746       0.3179        -2.8192
F7         -843.5616    64.6298      -811.8833    98.7647       -1.8978
F8         13.2227      47.144       7.55E-05     1.91E-04      -1.9832
F9         154.0219     272.2662     15241.4387   16021.8731    6.6577
F10        4.6208       4.4183       1.3627       2.0485        -4.7306
F11        20.8930      0.0927       1.71E-11     4.23E-11      -1594.3538
F12        0.0617       0.0779       0.0705       0.0316        0.7432
F13        0.0053       0.0103       0            0             -3.6561
Comparison between GWO and the FWA-GWO in version 3 with 30 dimensions.

Function   GWO          STD          Hybrid V3    STD           Z-value
F1         1.45E-27     2.39E-27     1.78E-18     2.70E-18      4.6680
F2         1.28E-15     1.01E-15     4.19E-11     3.52E-11      8.4273
F3         1.96E-05     4.19E-05     0.0095       0.0241        2.7765
F4         8.40E-07     8.83E-07     0.0248       0.0207        8.4815
F5         28.3085      8.5466       27.7054      0.5847        -0.4979
F6         0.7575       0.3307       0.59297      0.35281       -2.4065
F7         -843.5616    64.6298      -753.7345    291.2693      -2.1289
F8         13.2227      47.144       0.1544       0.6904        -1.9599
F9         154.0219     272.2662     6439.18      23019.40      1.9305
F10        4.6208       4.4183       3.1066       3.9244        -1.8118
F11        20.8930      0.0927       1.33E-09     1.34E-09      -1594.3538
F12        0.0617       0.0779       0.0416       0.0249        -1.7389
F13        0.0053       0.0103       9.33E-17     2.21E-16      -3.6561
Comparison between FWA and the FWA-GWO in version 1 with 30 dimensions.

Function   FWA          STD          Hybrid V1    STD           Z-value
F1         0.0237       0.0148       6.43E-28     1.17E-27      -11.3515
F2         0.5501       0.2083       1.70E-16     1.63E-16      -18.6715
F3         2.19E-04     0.0005       8.47E-04     0.0025        1.7326
F4         0.0407       0.0078       0.0383       0.0584        -0.2951
F5         29.8533      1.6787       27.6283      0.5878        -8.8458
F6         1.0293       0.7131       0.8856       0.4284        -1.2221
F7         -851.1396    80.7011      -775.8908    98.3576       -4.1822
F8         0.3258       0.3166       1.7697       6.7631        1.5079
F9         9.0217       17.8576      9641.7985    11406.7060    5.9714
F10        1.6570       0.8116       1.7310       2.2942        0.2150
F11        0.0449       0.0081       5.8606       9.4934        4.3318
F12        0.0532       0.0318       0.0693       0.0297        2.6125
F13        7.04E-04     5.16E-04     0.00         0.00          -9.6465
Comparison between FWA and the FWA-GWO in version 2 with 30 dimensions.

Function   FWA          STD          Hybrid V2    STD           Z-value
F1         0.0237       0.0148       1.84E-22     7.49E-22      -11.3515
F2         0.5501       0.2083       1.17E-13     1.20E-13      -18.6715
F3         2.19E-04     0.0005       0.0044       0.0166        1.7642
F4         0.0407       0.0078       0.0409       0.0440        0.0250
F5         29.8533      1.6787       27.5222      0.5276        -9.3676
F6         1.0293       0.7131       0.5746       0.3179        -4.1179
F7         -851.1396    80.7011      -811.8833    98.7647       -2.1764
F8         0.3258       0.3166       7.55E-05     1.91E-04      -7.2752
F9         9.0217       17.8576      15241.4387   16021.8731    6.7226
F10        1.6570       0.8116       1.3627       2.0485        -0.9446
F11        0.0449       0.0081       1.71E-11     4.23E-11      -39.3428
F12        0.0532       0.0318       0.0705       0.0316        2.7297
F13        7.04E-04     5.16E-04     0            0             -9.6465
Comparison between FWA and the FWA-GWO in version 3 with 30 dimensions.

Function   FWA          STD          Hybrid V3    STD           Z-value
F1         0.0237       0.0148       1.78E-18     2.70E-18      -11.3515
F2         0.5501       0.2083       4.19E-11     3.52E-11      -18.6715
F3         2.19E-04     0.0005       0.0095       0.0241        2.7175
F4         0.0407       0.0078       0.0248       0.0207        -5.0876
F5         29.8533      1.6787       27.7054      0.5847        -8.5443
F6         1.0293       0.7131       0.59297      0.35281       -3.8782
F7         -851.1396    80.7011      -753.7345    291.2693      -2.2788
F8         0.3258       0.3166       0.1544       0.6904        -1.5961
F9         9.0217       17.8576      6439.18      23019.40      1.9752
F10        1.6570       0.8116       3.1066       3.9244        2.5577
F11        0.0449       0.0081       1.33E-09     1.34E-09      -39.3428
F12        0.0532       0.0318       0.0416       0.0249        -2.0307
F13        7.04E-04     5.16E-04     9.33E-17     2.21E-16      -9.6465
Comparison of the best results among the three methods with 60 dimensions.

Function   GWO          FWA          Hybrid V1    Hybrid V2    Hybrid V3
F1         1.03E-18     8.76E-03     1.45E-20     5.24E-16     1.49E-12
F2         2.04E-10     1.88E-01     2.69E-12     2.81E-10     2.32E-08
F3         0.0246       9.54E-08     0.0062       0.2206       0.0579
F4         3.29E-04     1.72E-02     1.0835       0.4721       0.2902
F5         56.0643      6.5639       56.5365      56.7458      56.8672
F6         2.4988       4.70E-01     2.9397       1.9858       1.5036
F7         -1103.01     -1123.702    -1022.80     -1114.42     -963.5091
F8         9.13E-05     1.27E-01     0.0016       6.75E-06     0.0062
F9         8429.33      2.6085       21956.43     6268.08      853.71
F10        2.22E-11     1.0527       2.27E-13     6.90E-11     5.10E-08
F11        20.9788      2.97E-02     1.75E-10     2.21E-09     1.63E-07
F12        0.0595       5.27E-03     0.0067       0.0782       8.48E-04
F13        0.00         1.46E-04     0.00         0.00         3.99E-14
Comparison of the best results among the three methods with 90 dimensions.

Function   GWO          FWA          Hybrid V1    Hybrid V2    Hybrid V3
F1         1.81E-14     2.91E-02     1.70E-15     1.27E-12     7.25E-09
F2         5.12E-08     1.59E-01     3.95E-10     3.34E-08     1.75E-06
F3         9.3068       1.23E-07     1.1431       3.5748       0.7638
F4         0.0198       2.40E-02     5.2841       1.2142       1.6948
F5         85.9349      89.8066      87.2861      87.5498      87.1859
F6         5.9709       1.7693       5.5327       5.1995       3.1307
F7         -1495.39     -1551.06     -2678.99     -1968.90     -1507.83
F8         2.33E-04     0.3271       0.0369       2.22E-04     0.1971
F9         57096.74     5.7946       42806.79     19650.57     10246.35
F10        2.5244       2.0417       5.19E-10     1.1558       9.9620
F11        21.1015      0.0235       2.85E-07     7.28E-07     1.32E-05
F12        0.1476       0.0167       0.0078       0.3672       0.0338
F13        2.66E-15     0.0004       7.77E-16     1.12E-14     1.02E-10
From Table 4 we can conclude based on the hypothesis test that, for the case of 30 dimensions, the FWA-GWO in version 1 is better in 7 of the 13 benchmark functions that are analyzed in this paper.
In Table 5 we show the results of a hypothesis test for the original GWO and the FWA-GWO in version 2 and in this case the proposed method is better in 6 of the 13 analyzed functions.
Finally, Table 6 shows the results of the hypothesis tests for version 3 of the FWA-GWO, in other words, when the algorithm has 8 packs and 5 wolves in each pack. We can conclude that the proposed hybrid approach (FWA-GWO) is better in 7 of the 13 functions that were tested in this paper.
In the following figures we are illustrating in a graphical way the results of the hypothesis tests, more specifically the z-values. The comparison is between the conventional algorithms and the proposed FWA-GWO with the different versions.
In this case the blue bar represents the z-value of the hypothesis test with 60 dimensions, the green bar represents the z-value of the hypothesis test with 90 dimensions, and the red bar represents the critical value of the hypothesis test described above, which in this case is −1.645. Therefore, whenever the blue and green bars are below −1.645, the hybrid method is better.
For 60 dimensions, the FWA-GWO in version 1 is better in 6 of the 13 analyzed benchmark functions, according to Figure 12, which also shows the z-values for 90 dimensions between the GWO and version 1 of the FWA-GWO; in this case the proposed method is better in 4 benchmark functions.
Comparison between GWO and the FWA-GWO in version 1 with 60 and 90 dimensions.
From Figure 13 we can conclude that, for 60 dimensions, the FWA-GWO in version 2 is better in only 3 of the 13 analyzed functions. For 90 dimensions, version 2 of the proposed method (FWA-GWO) does not perform well, because it is better than the original method in only 2 of the analyzed benchmark functions.
Comparison between GWO and the FWA-GWO in version 2 with 60 and 90 dimensions.
Finally, in Figure 14 we present the z-values for 60 and 90 dimensions, respectively. According to the hypothesis tests between GWO and version 3 of the FWA-GWO, the FWA-GWO is better in only 3 of the 13 analyzed functions with 60 dimensions, and in 4 of the 13 analyzed functions with 90 dimensions.
Comparison between GWO and FWA-GWO in version 3 with 60 and 90 dimensions.
4.2. Comparison between FWA and FWA-GWO
In addition, we present the hypothesis tests between the FWA and the FWA-GWO; Table 7 shows the averages, standard deviations, and Z-values for 30 dimensions with the FWA-GWO in version 1.
We can conclude from Table 7 that the FWA-GWO is better in 5 of the 13 benchmark functions that are analyzed in this paper.
Table 8 shows the results obtained with the FWA-GWO in version 2; in this case it has a better performance, since it outperforms the conventional FWA in 8 of the 13 total functions.
Finally, for 30 dimensions, Table 9 presents the last version of the FWA-GWO algorithm analyzed in this paper. We can conclude that, for 30 dimensions, this version has the best parameter configuration because, according to the hypothesis tests, the FWA-GWO is better than the FWA in 9 of the 13 benchmark functions that were analyzed.
Based on Figure 15, for 60 dimensions the FWA-GWO version 1 is better than the conventional FWA in 5 of the 13 benchmark functions analyzed in this paper, and for 90 dimensions version 1 is better in 4 of the analyzed benchmark functions.
Comparison between FWA and FWA-GWO in version 1 with 60 and 90 dimensions.
In Figure 16 we show, in a graphical way, the z-values of the hypothesis tests between FWA and the FWA-GWO in version 2. For 60 dimensions the performance is similar to that of version 1, since both versions are better than the conventional FWA in 5 of the 13 analyzed benchmark functions. Finally, for 90 dimensions the FWA-GWO is better in 5 of the 13 benchmark functions analyzed.
Comparison between FWA and FWA-GWO in version 2 with 60 and 90 dimensions.
Finally, in Figure 17 we present the results of the hypothesis tests between FWA and version 3 of the FWA-GWO. For 60 dimensions this version of FWA-GWO has better performance in only 4 benchmark functions, and for 90 dimensions it is better in 5 of the 13 benchmark functions analyzed in this paper.
Comparison between FWA and FWA-GWO in version 3 with 60 and 90 dimensions.
4.3. Comparison of the Best Solutions among the Three Methods
In Figures 12, 13, and 14 we can notice an interesting behavior: when the problem has 60 or 90 dimensions, the conventional GWO has better performance than the FWA-GWO on these benchmark functions, according to the hypothesis tests analyzed.
We can also mention that, in the comparison between the FWA and the FWA-GWO for 60 and 90 dimensions, the FWA-GWO has better performance in around half of the analyzed benchmark functions. It is important to note that we present only three configurations of the FWA-GWO, and the performance in this comparison could be significantly improved by changing these configurations.
In addition, we consider it very important to summarize in the following tables a brief comparison of only the best results obtained in the 50 independent executions of each method, for 60 and 90 dimensions, respectively. The main goal is to show that, even when a hypothesis test cannot prove better performance on some of the analyzed benchmark functions, the FWA-GWO still found a better result than the conventional algorithm.
Table 10 shows the best values of each method with 60 dimensions. We can conclude that, in general, the proposed FWA-GWO algorithm has better performance in 8 of the 13 analyzed functions, the FWA is better than the others in 4 of the 13 functions, and finally the GWO algorithm is better in 2 of the 13 benchmark functions analyzed in this paper. Among the hybrid configurations, version 1 is the best, with 5 functions, while versions 2 and 3 are each better in 2 benchmark functions.
Table 11 shows the general results for the best experiments of 50 runs that were executed with 90 dimensions and we can find that the FWA-GWO has better performance in 7 of the 13 analyzed benchmark functions. The FWA metaheuristic has better performance in 4 of the 13 functions and finally the GWO algorithm has better performance in 2 of 13 analyzed benchmark functions with 90 dimensions.
Based on Tables 10 and 11, we can draw an interesting conclusion: the FWA-GWO has better performance than the GWO and FWA, respectively, based on the best results obtained in each sample. This is very important because, for example, when designing fuzzy controllers, the final design, architecture, or plant is based on the best result obtained across all experiments or executions; for such optimization problems, the FWA-GWO therefore performs better than the conventional methods (GWO and FWA). Finally, to improve the results of the FWA-GWO in the hypothesis tests with 60 and 90 dimensions, respectively, other configurations of the FWA-GWO need to be tested, because the configurations presented in this paper were chosen randomly.
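The Z-test used for these comparisons can be reproduced directly from the per-algorithm averages and standard deviations over the 50 independent runs. The sketch below is a minimal implementation of the two-sample Z statistic; the numeric values passed to it are illustrative only, not results from the paper:

```python
import math

def z_test(mean1, std1, mean2, std2, n1=50, n2=50):
    """Two-sample Z statistic for comparing the mean results of two
    optimizers over independent runs. For minimization, a more
    negative Z indicates that sample 1 is significantly better."""
    return (mean1 - mean2) / math.sqrt(std1**2 / n1 + std2**2 / n2)

# Illustrative values only: average and standard deviation of 50 runs.
z = z_test(mean1=0.012, std1=0.004, mean2=0.015, std2=0.006)

# Reject H0 (equal means) at the 0.05 level for a one-tailed test
# when z < -1.645, the critical value of the standard normal.
significant = z < -1.645
```

This matches the usual setup for samples of 50 runs, where the normal approximation is appropriate.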
Finally, we present the results for the remaining benchmark functions (F14–F22) analyzed in this paper. These benchmark functions are called “fixed-dimension multimodal,” and their averages and standard deviations are presented in the following tables.
Tables 12 and 13 present a brief comparison of the averages of the GWO, the FWA, and the FWA-GWO in versions 1 and 2, respectively; the three methods achieve good performance on these benchmark functions.
Table 12: Comparison among GWO, FWA, and FWA-GWO version 1 on the fixed-dimension multimodal benchmark functions.

| Function | GWO avg | STD | FWA avg | STD | Hybrid V1 avg | STD |
| F14 | 4.0425 | 4.2528 | 0.9981 | 2.21E-04 | 2.7499 | 2.8626 |
| F15 | 3.37E-04 | 6.25E-04 | 6.03E-04 | 2.76E-04 | 7.42E-04 | 2.70E-04 |
| F16 | -1.0316 | 1.0316 | -1.0316 | 4.96E-05 | -1.0315 | 5.60E-04 |
| F17 | 3.0000 | 3.0000 | 3.0011 | 0.0018 | 3.0007 | 8.20E-04 |
| F18 | -3.8626 | 3.8628 | -0.2979 | 6.20E-06 | -3.8612 | 0.0024 |
| F19 | -3.2865 | 3.2506 | -3.2655 | 0.0599 | -3.3124 | 0.0449 |
| F20 | -10.1514 | 9.14015 | -10.1466 | 0.0054 | -5.3093 | 0.6601 |
| F21 | -10.4015 | 8.5844 | -10.3929 | 0.0144 | -5.2378 | 0.5854 |
| F22 | -10.5343 | 8.55899 | -10.5113 | 0.0481 | -5.3646 | 0.4713 |
Table 13: Comparison among GWO, FWA, and FWA-GWO version 2 on the fixed-dimension multimodal benchmark functions.

| Function | GWO avg | STD | FWA avg | STD | Hybrid V2 avg | STD |
| F14 | 4.0425 | 4.2528 | 0.9981 | 2.21E-04 | 4.1664 | 3.7214 |
| F15 | 3.37E-04 | 6.25E-04 | 6.03E-04 | 2.76E-04 | 7.00E-04 | 3.18E-04 |
| F16 | -1.0316 | 1.0316 | -1.0316 | 4.96E-05 | -1.0316 | 2.61E-05 |
| F17 | 3.0000 | 3.0000 | 3.0011 | 0.0018 | 3.0009 | 9.83E-04 |
| F18 | -3.8626 | 3.8628 | -0.2979 | 6.20E-06 | -3.8612 | 0.0022 |
| F19 | -3.2865 | 3.2506 | -3.2655 | 0.0599 | -3.3093 | 0.0484 |
| F20 | -10.1514 | 9.14015 | -10.1466 | 0.0054 | -5.5335 | 0.9876 |
| F21 | -10.4015 | 8.5844 | -10.3929 | 0.0144 | -5.3205 | 0.6697 |
| F22 | -10.5343 | 8.55899 | -10.5113 | 0.0481 | -5.5058 | 0.7676 |
In addition, Table 14 shows the comparison for the last version presented in this paper. The main goal of evaluating these benchmark functions is to verify that the methods work correctly and perform well on these types of problems; we present only a brief comparison of averages, without a hypothesis test, because the results are very similar.
Table 14: Comparison among GWO, FWA, and FWA-GWO version 3 on the fixed-dimension multimodal benchmark functions.

| Function | GWO avg | STD | FWA avg | STD | Hybrid V3 avg | STD |
| F14 | 4.0425 | 4.2528 | 0.9981 | 2.21E-04 | 4.1338 | 3.5058 |
| F15 | 3.37E-04 | 6.25E-04 | 6.03E-04 | 2.76E-04 | 9.05E-04 | 6.44E-04 |
| F16 | -1.0316 | 1.0316 | -1.0316 | 4.96E-05 | -1.0316 | 4.86E-06 |
| F17 | 3.0000 | 3.0000 | 3.0011 | 0.0018 | 3.0016 | 0.0019 |
| F18 | -3.8626 | 3.8628 | -0.2979 | 6.20E-06 | -3.8617 | 0.0017 |
| F19 | -3.2865 | 3.2506 | -3.2655 | 0.0599 | -3.3220 | 2.13E-05 |
| F20 | -10.1514 | 9.14015 | -10.1466 | 0.0054 | -5.1421 | 0.4826 |
| F21 | -10.4015 | 8.5844 | -10.3929 | 0.0144 | -5.3531 | 0.8978 |
| F22 | -10.5343 | 8.55899 | -10.5113 | 0.0481 | -5.3103 | 0.6953 |
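To make the nature of these fixed-dimension problems concrete, the sketch below implements the six-hump camel-back function, assuming the common benchmark numbering in which it appears as F16 (its known global minimum, about -1.0316, matches the values reported above); the definition and optimum locations follow the standard benchmark references [24–26].

```python
def six_hump_camel(x1, x2):
    """Six-hump camel-back function (assumed here to correspond to F16).
    Search domain: [-5, 5]^2. Global minimum ~= -1.0316, attained at
    (0.0898, -0.7126) and (-0.0898, 0.7126)."""
    return (4 * x1**2 - 2.1 * x1**4 + x1**6 / 3
            + x1 * x2 - 4 * x2**2 + 4 * x2**4)
```

Because the dimension of such functions is fixed by their definition, they are evaluated separately from the scalable unimodal and multimodal functions used in the 30/60/90-dimension experiments.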
4.4. Convergence of the FWA-GWO
The following figures present convergence plots comparing the GWO and the FWA-GWO on two different problems: on one problem the FWA-GWO (hybrid) has better performance according to the hypothesis test, and on the other it does not.
Figure 18 shows the convergence curve of the GWO and FWA-GWO algorithms, respectively, for the benchmark function number 4.
Convergence curve of GWO and FWA-GWO Algorithm with F4.
In Figure 19 we can notice the performance of the GWO and the FWA-GWO when we are using the benchmark function number 10. The red line represents the convergence curve of the GWO algorithm and the blue line represents the convergence of the FWA-GWO (hybrid).
Convergence curve of GWO and FWA-GWO with F10.
In conclusion, Figures 18 and 19 show that the FWA-GWO keeps finding new solutions throughout the iterations. This behavior is good because the algorithm always converges while avoiding local optima, and this is a result of the advantages of the hybridization between the GWO and FWA algorithms. On the other hand, Figure 19 shows that, in this problem (F10), the GWO algorithm falls into a local minimum and cannot escape it. In addition, other configurations can be tested with the FWA-GWO toolbox presented in [29].
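The alternation that produces this exploratory behavior can be illustrated schematically: in each iteration, a configuration parameter decides whether the population is updated with a GWO-style step (exploitation around the three best wolves) or perturbed with FWA-style explosion sparks (exploration). This is a simplified sketch under stated assumptions, not the paper's exact FWA-GWO operators; the function name, the `explore_ratio` parameter, and the simplified update rules are illustrative.

```python
import random

def fwa_gwo_sketch(f, dim, n=30, iters=200, explore_ratio=0.5, bound=100.0):
    """Schematic hybrid loop: some iterations apply a simplified GWO
    update (move toward the three best solutions), the rest apply
    simplified FWA sparks (random explosion around each solution).
    Illustrative only -- not the exact operators of the paper."""
    pop = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        pop.sort(key=f)
        alpha, beta, delta = pop[0], pop[1], pop[2]
        a = 2 * (1 - t / iters)  # GWO coefficient decreases linearly
        if random.random() < explore_ratio:
            # FWA-style phase: explosion sparks with shrinking amplitude.
            amp = bound * (1 - t / iters)
            new = [[xi + random.uniform(-amp, amp) for xi in x] for x in pop]
        else:
            # GWO-style phase: attraction to alpha, beta, and delta.
            new = [[(alpha[i] + beta[i] + delta[i]) / 3
                    + a * random.uniform(-1, 1) for i in range(dim)]
                   for _ in pop]
        # Greedy selection keeps the better of old and new positions.
        pop = [min(x, y, key=f) for x, y in zip(pop, new)]
    return min(pop, key=f)

best = fwa_gwo_sketch(lambda x: sum(v * v for v in x), dim=5)
```

Raising `explore_ratio` biases the run toward FWA-style exploration, lowering it toward GWO-style exploitation, which mirrors the ability to match phase emphasis to the problem.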
5. Conclusions
In this paper we have presented the hybridization of the two metaheuristics described above; the new hybrid method is called FWA-GWO. It is important to mention that an advantage of this new hybrid method is the ability to choose between the exploitation and exploration phases of the algorithm, which are two main features of metaheuristics, so it is very important to select these abilities according to the problem to be solved.
According to the results presented in this paper, we can conclude that for 30 dimensions the different versions of the hybrid method are better than the GWO and FWA, respectively, according to the presented hypothesis tests. For 60 and 90 dimensions the performance of the FWA-GWO in the hypothesis tests is poor, but an interesting behavior appears: if we analyze the best result over all executions, the different FWA-GWO versions achieve the best performance. It is important to mention that the configuration of the hybrid method for these types of problems can be significantly improved, and users can test other configurations of the FWA-GWO in the Toolbox.
Taking into account that the proposed FWA-GWO method obtained the best results in 60 and 90 dimensions, we can conclude that the FWA-GWO can perform better than the FWA or GWO algorithms in problems of control and of design of structures in computational intelligence [30–32].
We can also conclude that the FWA-GWO has diversity in its search for the optimal value, because within the total population the algorithm maintains a set of n possible best solutions. This feature is important because it helps the algorithm avoid local optima or false best values by always searching for different possible solutions. Finally, the proposed hybrid method (FWA-GWO) could easily be adapted to solve multiobjective problems.
As future work, we could test the FWA-GWO algorithm on plant control problems as a first option and, as a second option, dynamically adjust the parameters of the FWA-GWO algorithm with type-1 and interval type-2 fuzzy logic, respectively.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
References
[1] Bonabeau E., Dorigo M., Theraulaz G., Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, USA, 1999.
[2] Can U., Alatas B., "Physics based metaheuristic algorithms for global optimization," 2015, 1, 94–106.
[3] Wolpert D. H., Macready W. G., "No free lunch theorems for optimization," 1997, 1, 1, 67–82. doi:10.1109/4235.585893.
[4] Melián B., Moreno J., "Metaheurísticas: una visión global," 2003, 19, 7–28.
[5] Al Adwan F., Al Shraideh M., Saleem Al Saidat M. R., "A genetic algorithm approach for breaking of simplified data encryption standard," 2015, 9, 9, 295–304. doi:10.14257/ijsia.2015.9.9.26.
[6] Soto J., Melin P., Castillo O., "Optimization of the fuzzy integrators in ensembles of ANFIS model for time series prediction: the case of Mackey-Glass," Proceedings of the IEEE Conference on Norbert Wiener in the 21st Century (21CW), Boston, Mass, USA, June 2014, pp. 1–8. doi:10.1109/NORBERT.2014.6893880.
[7] Tan Y., Fireworks Algorithm: A Novel Swarm Intelligence Optimization Method, Springer-Verlag, Berlin, Heidelberg, Germany, 2015.
[8] Rodríguez L., Castillo O., Soria J., "A study of parameters of the grey wolf optimizer algorithm for dynamic adaptation with fuzzy logic," 2017, 667, 371–390. doi:10.1007/978-3-319-47054-2_25.
[9] Zheng Y., Song Q., Chen S., "Multiobjective fireworks optimization for variable-rate fertilization in oil crop production," 2013, 13, 11, 4253–4263. doi:10.1016/j.asoc.2013.07.004.
[10] Li J., Zheng S., Tan Y., "Adaptive fireworks algorithm," Proceedings of the IEEE Congress on Evolutionary Computation (CEC '14), Beijing, China, July 2014, pp. 3214–3221. doi:10.1109/CEC.2014.6900418.
[11] Tan Y., Zhu Y., "Fireworks algorithm for optimization," Springer-Verlag, Berlin, Heidelberg, Germany, 2010. doi:10.1007/978-3-662-46353-6.
[12] Li J., Zheng S., Tan Y., "The effect of information utilization: introducing a novel guiding spark in the fireworks algorithm," 2017, 21, 1, 153–166. doi:10.1109/TEVC.2016.2589821.
[13] Tan Y., Zheng S., "Dynamic search in fireworks algorithm," Proceedings of the IEEE Congress on Evolutionary Computation (CEC '14), July 2014, pp. 3222–3229. doi:10.1109/CEC.2014.6900485.
[14] Abdulmajeed N. H., Ayob M., "A firework algorithm for solving capacitated vehicle routing problem," 2014, 6, 1, 79–86.
[15] Li J., Tan Y., "The bare bones fireworks algorithm: a minimalist global optimizer," 2018, 62, 454–462. doi:10.1016/j.asoc.2017.10.046.
[16] Zhang B., Zheng Y.-J., Zhang M.-X., Chen S.-Y., "Fireworks algorithm with enhanced fireworks interaction," 2017, 14, 1, 42–55. doi:10.1109/TCBB.2015.2446487.
[17] Zheng S., Li J., Janecek A., Tan Y., "A cooperative framework for fireworks algorithm," 2017, 14, 1, 27–41. doi:10.1109/TCBB.2015.2497227.
[18] Mirjalili S., Mirjalili S. M., Lewis A., "Grey wolf optimizer," 2014, 69, 46–61. doi:10.1016/j.advengsoft.2013.12.007.
[19] Muro C., Escobedo R., Spector L., Coppinger R. P., "Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations," 2011, 88, 3, 192–197. doi:10.1016/j.beproc.2011.09.006.
[20] Rodriguez L., Castillo O., Soria J., Melin P., Valdez F., Gonzalez C. I., Martinez G. E., Soto J., "A fuzzy hierarchical operator in the grey wolf optimizer algorithm," 2017, 57, 315–328. doi:10.1016/j.asoc.2017.03.048.
[21] Heidari A. A., Pahlavani P., "An efficient modified grey wolf optimizer with Lévy flight for optimization tasks," 2017, 60, 115–134. doi:10.1016/j.asoc.2017.06.044.
[22] Rodriguez L., Castillo O., Soria J., "Grey wolf optimizer with dynamic adaptation of parameters using fuzzy logic," Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC 2016), Canada, July 2016, pp. 3116–3123. doi:10.1109/CEC.2016.7744183.
[23] Barraza J., Melin P., Valdez F., González C., "Fireworks algorithm (FWA) with adaptation of parameters using fuzzy logic," 2017, 667, 313–327. doi:10.1007/978-3-319-47054-2_21.
[24] Digalakis J. G., Margaritis K. G., "On benchmarking functions for genetic algorithms," 2001, 77, 4, 481–506. doi:10.1080/00207160108805080.
[25] Molga M., Smutnicki C., "Test functions for optimization needs," in press.
[26] Yang X. S., "Test problems in optimization," John Wiley & Sons, 2010, https://arxiv.org/abs/1008.0549v1.
[27] Barraza J., Melin P., Valdez F., Gonzalez C. I., "Fuzzy FWA with dynamic adaptation of parameters," Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC '16), July 2016, pp. 4053–4060.
[28] Larson R., Farber B., Pearson Education Inc., 2003.
[29] Toolbox FWA-GWO, "Hybridization between the FWA and GWO algorithms," 2017, http://www.hafsamx.org/melin/ToolFWAGWO/.
[30] Guha D., Roy P. K., Banerjee S., "Load frequency control of interconnected power system using grey wolf optimization," 2016, 27, 97–115. doi:10.1016/j.swevo.2015.10.004.
[31] Precup R.-E., David R.-C., Petriu E. M., "Grey wolf optimizer algorithm-based tuning of fuzzy control systems with reduced parametric sensitivity," 2017, 64, 1, 527–534. doi:10.1109/TIE.2016.2607698.
[32] Saremi S., Mirjalili S. Z., Mirjalili S. M., "Evolutionary population dynamics and grey wolf optimizer," 2015, 26, 5, 1257–1263. doi:10.1007/s00521-014-1806-7.