The mechanical resistance of a locomotive is crucial for power consumption, so it is desirable to keep this resistance at a minimum under optimal operating conditions. These optimal conditions can be found by particle swarm optimization with constraints. Particle swarm optimization is a widely preferred heuristic algorithm because of its advantages, such as few parameters, fast convergence, and a simple flow diagram. However, fast convergence can be misleading, causing the optimum solution to be missed in some cases. In the proposed study, Pareto analysis is used to prevent missing the target. A literature search shows various studies using this method; however, in all of them, the results of the particle swarm method were merely checked for compliance with Pareto’s 80/20 rule. Here, the validity of the Pareto analysis is taken as an assumption, and with its help, the coefficients of a locomotive’s mathematical equation were constrained before applying the particle swarm optimization method. In this way, a novel hybrid method has been created by including the Pareto optimality condition in particle swarm optimization. The results of this innovative hybrid Pareto-particle-swarm method are compared with the results found using the particle swarm method alone.
1. Introduction
Optimization is used in many fields of life. First, the mathematical model of the system is introduced; then this model is solved with different optimization methods. Optimization methods are classified as classical methods and heuristic optimization methods. Classical methods depend on an analytical formulation and differentiation of the problem, and come in two types: gradient-based and nongradient-based methods. When the problem size increases, classical methods may be insufficient; a study [1] shows that they are not sufficient for finding the optimum working points of the locomotive.
Heuristic optimization methods mimic nature. Many algorithms have been produced that reflect the behavior of living beings in nature. These include the Ant System [2], the Max-Min Ant System [3], Particle Swarm Optimization [4], Artificial Bee Colony [5], the Fruit Fly Optimization Algorithm [6], Cuckoo Search based on Lévy Flight [7], the Krill Herd Optimization Algorithm [8], Bacterial Foraging Behavior [9], the Bat Algorithm [10], the Firefly Algorithm [11], the Lion Algorithm [12], the Gray Wolf Algorithm [13], the Dolphin Algorithm [14], the Bush Colony Algorithm [15], the Artificial Algae Algorithm [16], the Virus Colony Search Algorithm [17], the Shark Olfaction Optimization Algorithm [18], and the Social Spider Algorithm [19]. Among all these algorithms, particle swarm optimization is the most cited swarm intelligence algorithm, with 5721 citations [20].
Particle Swarm Optimization (PSO) is a heuristic optimization method based on social information sharing; it was developed jointly by a social psychologist and an electrical engineer [4]. The method is an algorithm designed by considering the behaviors that fish and birds exhibit in order to survive as a swarm. The PSO algorithm is based on the communication between individuals in the swarm, and the search proceeds over generations, as in the genetic algorithm. Individuals are called particles, and the community of these particles is called the swarm.
Each particle tries to approach the particle with the best position in the swarm, based on the experience of its previous positions. The approach speed is randomized, on the assumption that the next step will be better than the previous one, and the process continues until the target (best position) is reached. Although PSO is similar to the genetic algorithm (GA), it is easier to use and responds better than GA in some studies.
The original PSO algorithm was designed for unconstrained optimization problems. However, real-life problems generally impose constraints on the problem’s input variables. To meet this need, several constrained PSO approaches have appeared in the literature. One of these studies is a dynamic particle swarm optimizer with escaping prey (DPSOEP), designed for constrained nonconvex and piecewise problems [21]. In other recent research, partially and fully constrained improved particle swarm optimization methods (PCIPSO and FCIPSO) were proposed for planning the optimal usage of a water reservoir [22]. A moth swarm algorithm with arithmetic crossover modification (MSA-AC) was also developed for solving constrained optimization problems and tested on the most common benchmark functions [23]. In a different study, the inequalities determining the constraints were expressed as a fuzzy set, and new constraint definitions for this fuzzy set were developed that can be interpreted by the constrained PSO [24]. Constrained PSO was also used for optimal trajectory planning of a space robot [25]. For hyperspectral images, the vertices of shapes represented by highly mixed pixels were found by Linear Mixture Model Constrained PSO (LMMC-PSO) [26]. In the most recent research on constrained PSO, a novel PSO method called CMPSOWV was developed by removing the velocity vector [27]. As the literature shows, many optimization problems have been handled by constrained PSO approaches. In this study, locomotive resistance is likewise handled as a constrained optimization problem.
The Pareto analysis was devised by the Italian economist Vilfredo Federico Damaso Pareto (1848–1923); it is a bar diagram used to distinguish the important causes of a problem from the less important ones. The analysis originated with Pareto’s observation that 80% of the land in Italy was owned by 20% of the population. The same distribution has since been found in many different areas. This statistical ratio (80/20), which can also be applied in engineering, is called the “Pareto Law,” especially in economics.
Hwang and Masud established the necessity of decision-making and classified the existing decision-making approaches to optimization into three categories: a priori, a posteriori, and progressive articulation of preferences [28]. A priori articulation of preferences involves the aggregation of objective functions using weights and, thereafter, determines the optimal solution. A posteriori articulation of preferences, in which decision-making is applied after an efficient set of solutions is obtained, is the selection of the best solution among the obtained solutions. A progressive articulation of preferences, which involves the process of decision-making along with the search process, is a guide for the search effort toward the regions of the Pareto front, where the probability of getting the compromise solution is the highest.
Normalization combines different related targets within a single objective function. Another way is to find the Pareto optimal front, which summarizes all Pareto optimal solutions. A Pareto optimal solution, by definition, is a solution in which no target can be improved without worsening at least one other target [29]. In effect, normalization gives the same importance to all variables, whereas Pareto analysis gives different levels of importance to each variable. Studies using swarm methods and Pareto analysis in the literature generally involve a posteriori articulation, as in the studies of Ulungu and Teghem [29] and Di Barba et al. [30]. Although there are many studies in the literature combining Pareto and swarm methods, specific studies can be listed as follows.
In one of the studies where swarm and Pareto methods were used together, the PSO algorithm was applied to multiobjective optimization problems with two or more objectives [31]. The algorithm revealed excellent performance concerning the number of solutions of the forward problem, together with a very reliable representation of the Pareto optimal front. In another study, Pareto dominance was adopted to evaluate the particles found throughout the search process in the swarm; with this approach, the nondominated solutions are stored in an external repository that constantly renews itself [32]. A further paper [33] presents an approach in which Pareto dominance is incorporated into PSO in order to allow this heuristic to handle problems with several objective functions. In another study, a simple evolution scheme for multiobjective optimization problems was introduced, called the Pareto Archived Evolution Strategy (PAES); it was argued that PAES may represent the simplest possible nontrivial algorithm capable of generating diverse solutions in the Pareto optimal set [34]. The objective of another paper was to introduce a novel Pareto-frontier Differential Evolution (PDE) algorithm to solve multiobjective functions; the solutions it provides for two standard test problems outperform the Strength Pareto Evolutionary Algorithm, one of the state-of-the-art evolutionary algorithms for multiobjective functions [35]. In other research, the particles of PSO were classified as nondominated, dominated, and Pareto-front particles for finding the optimal placement and sizing configuration of distributed generation in a radial distribution system [36]. A similar Pareto-front PSO approach was applied to the discrete time-cost trade-off problem [37]. A Pareto-glowworm swarm optimization method was developed for a quality-of-service problem in a wireless sensor network [38].
An improved Pareto artificial fish swarm algorithm (IAFSA) was developed for a disassembly line balancing problem [39]. In other research, Pareto analysis was used to select the best solver in the particle swarm [40]. A virtual Pareto front (vPF) structure was included to improve the multiobjective PSO method in recent research [41]. In one of the most recent studies, a novel Pareto-active learning method was added to surrogate-assisted PSO for solving multiobjective optimization problems [42].
In this study, the Pareto analysis was used to determine the regions where the probability of finding the solution is highest, that is, a progressive articulation of preferences was made, and the results were then found by introducing additional limits into the particle swarm optimization of a locomotive’s mathematical equation. The advantages of this innovative method are as follows:
The improvement of the algorithm is simple to implement by including a Pareto Optimality Check process to the PSO structure
The improvement reduces the necessary number of iterations to reach the optimal solution
Although the iteration number is small, the hybrid system gives better results
In the Methods section, previous work on reaching minimal locomotive motion resistance is first summarized; then the mathematical expression of the original PSO and a general explanation of the Pareto assumption are given. The optimization problem for the locomotive resistance is described in the YHT 65000 High-Speed Train section. Embedding the Pareto assumption in a constrained PSO algorithm is introduced in the Application of Pareto Assumption on Constrained PSO section. The optimization performances of pure constrained PSO and constrained PSO with the Pareto assumption are benchmarked in the Comparative Results section. Finally, the results are interpreted and suggestions are presented in the Conclusions section.
2. Methods
In the literature, some studies depend on Davis’ polynomial to find minimal locomotive motion resistance [43, 44]. Another study uses PSO to avoid the highest locomotive motion resistance; however, since the result cannot be obtained correctly with PSO alone, the method needs to be hybridized with a different method [45]. One study presents a flow chart of the logical steps followed to obtain optimal speed profiles with respect to minimum energy consumption; the program takes about 10 seconds to run [46]. Another study covers the theoretical analysis, modeling technique, and experimental verification of the effects of gradient, running, and curve resistance on train performance [47].
In our study, a new hybrid method was created by combining the advantages of two different methods: a metaheuristic algorithm and a statistical rule. Related studies have been performed before, and the main ones are summarized in the Introduction. However, unlike this study, previous studies evaluated the results of constrained PSO only by checking whether they comply with the 80/20 rule.
2.1. Mathematical Expression of PSO
In PSO, a set of particles with random positions and velocities is initialized in a search space of dimension D, where D equals the number of unknowns in the fitness function. The goal is to find the best value by updating the swarm over generations. At each iteration, each particle is updated according to two “best” values. The first is the best fitness value the particle itself has ever found; this value is stored in memory for later use and is referred to as “pbest,” the particle’s best value. The second is the best fitness value that any particle in the swarm has ever achieved; this is the global best value for the swarm and is called “gbest.” The velocities and positions are changed according to these newly assigned values. The swarm particle matrix is n×D in size, where n is the number of particles:

(1) X = [X11 ⋯ X1D; ⋮ ⋱ ⋮; Xn1 ⋯ XnD]
Each particle updates its velocity (the amount of change of position in each dimension) and its position according to (2) and (3):

(2) Vi^(k+1) = Vi^k + c1·rand1^k·(pbesti^k − xi^k) + c2·rand2^k·(gbest^k − xi^k)

(3) Xi^(k+1) = Xi^k + Vi^(k+1)
The working principle and flow chart of PSO are shown in Figures 1 and 2, respectively.
Working principle of particle swarm optimization [1].
PSO flow chart.
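The update rules (2) and (3) and the flow of Figure 2 can be sketched in a few lines of code. The following is a minimal illustrative Python implementation, not the study’s Matlab code; the sphere test function, the parameter values, and the small inertia weight w (a common stabilizing refinement absent from (2)) are assumptions:

```python
import random

def pso(f, dim, n_particles=20, n_iters=50, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    # Random initial positions and velocities for the n x D swarm matrix of (1)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [row[:] for row in X]                  # best position each particle has seen
    pbest_val = [f(p) for p in pbest]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position seen by the whole swarm

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equation (2): velocity update toward pbest and gbest
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Equation (3): position update
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:                 # update the particle's pbest
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:                # update the swarm's gbest
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

On a convex test function such as the sphere function used here, the swarm collapses onto the minimum at the origin within a few dozen iterations.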
2.2. Pareto Assumption
Pareto analysis can be generalized by three different approaches in the literature, due to Burell [48], Egghe [49], and Chen et al. [50]. In this study, Egghe’s formulation was chosen, expressed by (4) and (5):

(4) X = Xn^2

(5) Q = Qn

where n is the number of examples, X is the 20% rate, and Q is the 80% rate.
The Pareto analysis is explained in terms of the 80/20 rule, which was also mentioned in the Introduction. The application of the 80/20 rule is demonstrated in Figure 3, where the distribution of the input parameters is given by a decreasing probability mass function. The cumulative rate determines which parameters are critical and which can be neglected.
Diagram of the Pareto analysis.
From Figure 3, it is seen that problems 1 and 2 intersect with the cumulative distribution curve, which indicates that 80% of the problems will be solved by finding solutions to problems 1 and 2.
To obtain the results presented in Figure 3, the following preparations should be applied:
Define and list the parameters of the optimization problems
Sort the parameters in order of priority
Score the sorted parameters in descending order
Separate the most critical parameters from the others
Construct the diagram
The resulting ratio is not always 80/20; it is sometimes 70/30, 90/10, etc. It is not possible to interpret the ratio without constructing the diagram.
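The five preparation steps above can be sketched as a small routine that scores the parameters, builds the cumulative distribution, and returns the critical set at a chosen cutoff. The scores below are illustrative, not the study’s data:

```python
def pareto_critical(scores, cutoff=0.80):
    """Return the highest-scoring items whose cumulative share first reaches `cutoff`.

    `scores` maps parameter name -> score; the cumulative values correspond to
    the curve plotted over the bars in a Pareto diagram.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)  # descending order
    total = sum(scores.values())
    critical, cum = [], 0.0
    for name, s in ranked:
        critical.append(name)
        cum += s / total
        if cum >= cutoff:          # 80/20 (or 70/30, 90/10, ...) threshold reached
            break
    return critical, cum

# Illustrative scores: two causes dominate, as in Figure 3
crit, share = pareto_critical({"p1": 55, "p2": 30, "p3": 8, "p4": 4, "p5": 3})
# crit == ["p1", "p2"]
```

Changing `cutoff` reproduces the 70/30 or 90/10 variants mentioned above.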
3. YHT 65000 High-Speed Train
This train is a high-speed train model that is currently used in Turkey. It is a high-speed train set produced by the Spanish railway manufacturer Construcciones y Auxiliar de Ferrocarriles (CAF).
YHT 65000 trains are based on the Red Nacional de Ferrocarriles Españoles (RENFE) Class 120 trains used in Spain. A set consists of 6 cars as standard, and 6 cars were assumed in this study. However, the train has a modular structure, and 2 more cars can be added if desired; in addition, 2 sets can be combined to form a total of 12 cars. Figure 4 shows the YHT 65000 train in Turkey. The technical information of the high-speed train used in this study is given in Table 1.
YHT 65000 high-speed train [51].
Technical information of high-speed YHT 65000 train∗.
Main characteristics      YHT 65000
Power                     38400 kW
Locomotive load           297.25 ton
Axle load                 17 ton
Axle type                 —
Maximum velocity          275 km/h
Line gap                  1435 mm
Catenary type             AC 25 kV, 50 Hz
Traction motor power      AC 4800 kW
∗Obtained from Turkish State Railways (TCDD).
To find the mathematical model of the YHT 65000 high-speed train, four resistances that oppose the movement of high-speed trains are calculated. These are the combined resistance of cruise and locomotive, curve resistance, ramp resistance, and acceleration resistance. A fifth resistance, wind (aerodynamic) resistance, has a probabilistic behavior that depends on environmental conditions, so its effect is neglected in the constructed model. It should be noted that these resistances directly affect power consumption. In this case, the total resistance (RT) can be expressed by the following equations (G is the total load carried in tons):
3.1. Combined Resistance of Cruise and Locomotive
(6) Rs = (1.3953 − 0.0071·V + 0.0006·V²)·G
Since the combined resistance of cruise and locomotive is given for the high-speed train set as a whole, it consists of a single resistance provided by the manufacturer, inseparable into car and locomotive resistance.
3.2. Curve Resistance
(7) Rk = (650/(k − 55))·G
This curve resistance formula is known as the Röckl formula and is used for the 1435 mm track gauge. The k value in the equation is the curvature radius of the line in m.
3.3. Ramp Resistance
(8) Rr = r·G

where r is the ramp gradient in ‰. This value is taken as positive when climbing the ramp and negative when descending.
In order for the train set to move, it must overcome these resistances. A train can only move at “steady” velocity after overcoming them. At steady velocity, the acceleration (a) is zero, in accordance with Newton’s 1st Law. One more resistance must be overcome when the train wants to change its velocity; this resistance is called acceleration resistance.
3.4. Acceleration Resistance
(9) Ra = (4·V²/S)·G
The acceleration resistance given above is that of the train set; S refers to the travel distance along the line (denoted D in (11) and (12)).
While there is no acceleration in the first three resistance types, the fourth one arises when the speed changes. The power consumption is determined by applying the PSO method to the total resistance formula:

(10) R = Rs + Rk + Rr + Ra
Written in full, with V = a·t, the equation becomes

(11) R = [V²·(0.0006 + 4/D) − 0.0071·V + 1.3953 + 650/(k − 55) + r]·G

(12) R = [a²·t²·(0.0006 + 4/D) − 0.0071·a·t + 1.3953 + 650/(k − 55) + r]·G
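Equation (12) can be transcribed directly as a function of the six decision variables. The sketch below is in Python rather than the Matlab used in the study; the function name and argument order are assumptions:

```python
def total_resistance(a, t, D, k, r, G):
    """Total motion resistance of equation (12).

    a: acceleration, t: travel time (so V = a*t), D: travel distance,
    k: curvature radius in m, r: ramp in permille (signed), G: total load in tons.
    """
    V = a * t                                        # V = a*t
    cruise = 1.3953 - 0.0071 * V + 0.0006 * V ** 2   # cruise/locomotive term, eq. (6)
    accel = 4.0 * V ** 2 / D                         # acceleration term, eq. (9)
    curve = 650.0 / (k - 55.0)                       # Roeckl curve term, eq. (7)
    return (cruise + accel + curve + r) * G          # eq. (10): sum of all terms times G

# With a = 0 the speed-dependent terms vanish and only the static terms remain
R_static = total_resistance(0.0, 1800.0, 26983.0, 705.0, 1.1505, 287.0)
```

This function is exactly the objective that the constrained PSO minimizes over (a, D, G, k, r, t) in the following sections.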
4. Application of Pareto Assumption on Constrained PSO
The Pareto assumption for the speed and load parameters was obtained using E = (1/2)·m·V², the most general expression for the amount of energy used. While deriving these assumptions, the speed was progressively decreased from 240 km/h to 10 km/h in steps of 10 km/h, giving 24 speed levels. The load was likewise progressively decreased from 328.425 tons toward the empty weight of the train set, 287 tons, in steps of 1.5 tons, giving 24 load levels. The Pareto analyses of speed and load are demonstrated in Figures 5 and 6, respectively.
Pareto diagram of speed. ∗1: 240 km/h, 2: 230 km/h,..., 24:10 km/h.
Pareto diagram of load. ∗1: 328.418 ton, 2: 327.052 ton,..., 24: 297 ton.
In Figure 5, it is seen that the first 9 speed levels, down to and including 160 km/h, account for 60% of the energy consumption; these 9 levels maximize the energy consumption. They should therefore be excluded from the PSO search, which means that the speed must be lower than 160 km/h (44.44 m/s).
In Figure 6, it is seen that the first 15 load levels, down to and including 309.294 ton, account for 63.73% of the energy consumption; these 15 levels maximize the energy consumption. Excluding them from the PSO search would mean that the load must be lower than 309.294 ton.
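The speed-level analysis behind Figure 5 can be reproduced in outline: each of the 24 speed levels is scored by its kinetic energy E = (1/2)·m·V², and the cumulative share is scanned for the chosen cutoff. This is an illustrative Python sketch; the mass value is taken from Table 1 and the 60% cutoff from the text, but the exact weighting used to produce Figure 5 is not given in the paper, so the recovered cutoff level may differ from the 160 km/h read off the figure:

```python
def speed_level_shares(v_max_kmh=240.0, step_kmh=10.0, n_levels=24, mass_ton=297.25):
    """Cumulative kinetic-energy shares of the levels 240, 230, ..., 10 km/h."""
    speeds = [(v_max_kmh - i * step_kmh) / 3.6 for i in range(n_levels)]  # m/s, descending
    m = mass_ton * 1000.0                         # tons -> kg
    energies = [0.5 * m * v * v for v in speeds]  # E = 1/2 m V^2 for each level
    total = sum(energies)
    cum, shares = 0.0, []
    for e in energies:
        cum += e / total
        shares.append(cum)                        # cumulative curve of the Pareto diagram
    return speeds, shares

speeds, shares = speed_level_shares()
# Index of the first level whose cumulative share reaches the 60% cutoff
cutoff_level = next(i for i, s in enumerate(shares) if s >= 0.60)
```

The same scan with the load levels and their scores reproduces the 63.73% cutoff of Figure 6.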
From the inference made according to Figures 5 and 6, only the speed correction usefully reduces the energy consumed, because excluding 15 of the 24 load levels would be too severe a limitation. Such a limitation is not realistic: although trains can run at low speed, they cannot run at low load in practice.
For this reason, the algorithm can be hybridized by constraining only the speed in the swarm algorithm used for the optimization of the mathematical model. Algorithms 1–4 give the details of the application of the proposed model to the locomotive resistance minimization problem. Algorithm 3 details the improvement of PSO by Pareto analysis, and Algorithm 4 shows how the algorithm is restricted by the constraints.
Algorithm 1: Pareto-PSO algorithm for locomotive resistance minimization.
The application of Pareto optimality conditions to obtain the hybrid Pareto-PSO algorithm can also be expressed by the flow chart shown in Figure 7.
Pareto-PSO hybrid system flow chart.
This flow chart differs from the PSO flow chart by the process marked in red, which implements the constraints derived from the Pareto analysis.
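The red process of Figure 7 (the Pareto optimality check of Algorithm 3) can be sketched as a bound on the speed implied by a candidate particle. The clamping strategy and all names below are assumptions for illustration; the paper’s Algorithm 3 may reject rather than clamp violating candidates:

```python
V_MAX = 44.44  # m/s: the 160 km/h bound obtained from the Pareto analysis of Figure 5

def pareto_check(a, t, v_max=V_MAX):
    """Adjust a candidate acceleration so that the implied speed V = a*t
    stays within the Pareto-derived bound; travel time t is left untouched."""
    v = abs(a) * t
    if v <= v_max:
        return a                       # candidate already satisfies the bound
    return a * (v_max / v)             # scale a down so |a|*t lands exactly on the bound

# A particle proposing a = 0.03 m/s^2 over t = 1800 s implies V = 54 m/s > V_MAX
a_ok = pareto_check(0.03, 1800.0)
```

Applied after each position update of (3), this check keeps every particle inside the region that the Pareto analysis identified as low-consumption.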
5. Determination of PSO Parameters
In the PSO part of the proposed algorithm, two critical parameters must be chosen: the number of particles (Nparticles) and the number of iterations (Niterations). The parameter ctraining is a counter used for iteration tracking; it is initialized to 0 at the beginning and incremented after each iteration until Niterations is reached. The particle velocity vector V is initialized and updated randomly at each iteration, where the values of its elements are bounded by the difference vectors between the optimum solution vector and each particle’s solution vector from the previous iteration, as seen in Algorithm 1. V is used to update the solution vector Xswarm_i of each particle. However, this update step is limited by Algorithm 3 for the Pareto optimality check; Algorithm 3 is used only when Pareto’s assumption is applied and is omitted in the pure PSO algorithm. Moreover, for the constrained optimization, Xswarm_i is also limited by Algorithm 4.
Increasing the number of particles up to a limit accelerates reaching the optimum point. However, when the number of particles exceeds this upper limit, the complexity of the algorithm increases and deviations occur in learning the extremum point. Therefore, the number of particles should be bounded from above to avoid computational complexity, redundancy, and distortion in the convergence steps.
To find the optimum (Nparticles, Niterations) couple, the experiment was repeated by changing the number of particles from 1 to 20 and the number of iterations from 1 to 50 for PSO with and without the Pareto assumption. Experiments were carried out in the R2020a version of Matlab on a computer with a 2.8 GHz Intel Core i7 processor, 16 GB 1067 MHz DDR3 RAM, and the macOS High Sierra operating system.
The minimum resistance and elapsed time for the calculation in seconds are obtained as in Figures 8 and 9 for PSO without Pareto assumption.
Minimum resistances found by different (Nparticles, Niterations) configurations of PSO.
Elapsed computation time by different (Nparticles, Niterations) configurations of PSO.
As can be seen from Figure 8, the best solution for the minimum resistance was obtained with the smallest sufficient (Nparticles, Niterations) configuration when the number of particles was 20 and the number of iterations was 25 for the pure PSO method.
As seen in Figure 9, 22.4 milliseconds were spent for this simplest sufficient (Nparticles, Niterations) configuration of the pure PSO solution. The maximum durations were 56.44 and 54.55 milliseconds for the configurations (18, 50) and (20, 50), respectively. However, when Figure 8 is examined again, these maximum configurations do not provide the minimum resistance result. From Figure 9, it can be said that pure PSO shows a linear computational complexity in terms of both particle number and iteration number.
When the Pareto assumption was applied for the same experiments, the minimum resistances and elapsed computation time were obtained as in Figures 10 and 11.
Minimum resistances found by different (Nparticles, Niterations) configurations of Pareto-PSO.
Elapsed computation time by different (Nparticles, Niterations) configurations of Pareto-PSO.
It can be understood from Figure 10 that the best solution for the minimum resistance was obtained with the minimum (Nparticles, Niterations) configuration when the number of particles was 10 and the number of iterations was 12 for the Pareto-PSO method.
As seen in Figure 11, 9.2 milliseconds were spent for the simplest sufficient (Nparticles, Niterations) configuration of PSO with the Pareto assumption. The maximum durations were 69.9 and 56.02 milliseconds for the configurations (14, 49) and (20, 50), respectively. When Figure 10 is examined again, the minimum resistance can also be found when (20, 50) is chosen instead of the minimum configuration.
According to Figure 11, PSO with the Pareto assumption also shows a linear computational complexity in terms of both particle number and iteration number, similar to the pure PSO method. The inclusion of Algorithm 3 ensures that the optimum result is achieved with fewer iterations and particles, while adding no significant burden to the computational complexity for the same numbers of particles and iterations. For the optimum particle numbers obtained by the experiments (Nparticles,PSO = 20, Nparticles,Pareto-PSO = 10), the comparison of pure PSO and the Pareto assumption method is presented in detail in the Comparative Results section.
6. Comparative Results
In Figure 12, the convergences for the optimal conditions that satisfy the minimum resistance are illustrated for each training iteration of pure constrained PSO with 20 particles, which is the simplest PSO structure that can find minimum resistance as presented in Figure 8. In addition, the values reached for minimum resistance and elapsed time for each iteration are also presented.
Iteration numbers of PSO method for different variables.
In Figure 12, the iterations are limited to 50. The best acceleration, ramp, distance, power, travel time, curve, load, and elapsed time values found by PSO are detailed. The optimum resistance was reached at the 25th iteration with a 22.4-millisecond computation time; when the iterations were continued until the 50th iteration, the computation time reached 54.6 milliseconds.
In Figure 13, the convergences for the optimal conditions that satisfy the minimum resistance are illustrated for each training iteration of the hybrid Pareto-PSO method. The values reached for the minimum resistance and elapsed time are also presented.
Iteration numbers of Pareto-PSO hybrid method for different variables.
In Figure 13, the iterations are also limited to 50. The best acceleration, ramp, distance, power, travel time, curve, load, and elapsed time values found by the hybrid system are shown. The minimum resistance was reached at the 12th iteration with a 9.2-millisecond computation time; when the iterations were continued until the 50th iteration, the computation time reached 35.7 milliseconds. According to these results, the Pareto assumption on the constrained PSO method reduced the computation time to 41.07% of that of the minimum configuration giving the best solution of the constrained PSO method. Moreover, when both methods were run until the 50th iteration, the Pareto assumption reduced the computation time to 65.38%.
Both the constrained PSO and the constrained Pareto-PSO method could find the conditions that satisfy the minimum resistance of −7638 kW for their optimum (Nparticles, Niterations) configurations. This result is preserved by the constrained Pareto-PSO method when it is run until the 50th iteration, but not by the constrained PSO method: at the 50th iteration, the minimum resistance found by the constrained PSO increased from −7638 kW to −6670 kW.
Acceleration, distance, load, curve, ramp, and travel time values that can be seen for the increasing number of iterations of the pure constrained PSO are detailed in Table 2.
Values of a, D, G, k, r, and t in different iteration numbers for constrained PSO.

Iteration    a          D        G         k        r         t
1           −0.0276    26983    287       130      1.1505    1800
2           −0.0927    90676    315.62    345.97   3.8663    2896
3           −0.0371    94090    316.69    354.10   1.5462    2937.2
4           −0.0463    35362    302.45    232.49   1.9330    2357.3
5            0.0097    92365    296.45    277.32   −6.7304   2581.8
10           0.0373    47879    287       130      −12.052   1800
20           0.0027    300000   328.425   530      −26       3600
30           0.0058    88866    287.56    311.41   −26       2464.5
40           0.0334    26779    287       152.93   −26       1800
50          −0.0047    29192    289       525.92   −25.968   1876.7
Investigating the values found by constrained PSO and listed in Table 2, it can be said that distance, load, and ramp reached relatively stable states between the 30th and 40th iterations. Travel time and curve reached a stable state after the 12th iteration and held it until the 25th iteration; after the 25th iteration, the computed travel time and curve diverged from their stability point. Acceleration had not reached a stable state even by the 50th iteration.
Acceleration, distance, load, curve, ramp, and travel time values for the increasing number of iterations of constrained Pareto-PSO are given in Table 3.
Values of a, D, G, k, r, and t in different iteration numbers for constrained Pareto-PSO.

Iteration    a          D        G         k        r         t
1            0.0366    6263.7   287       130      −25.378   1800
2           −0.0419    1993.6   328.425   130      −26       1800
3            0.0278    300000   328.425   222.45   −26       1800
4            0.0278    300000   328.425   530      −26       1800
5            0.0278    300000   328.425   530      −26       1800
10          −0.0087    300000   328.425   530      −26       1800
20           0.0018    300000   328.425   530      −26       1800
30           0.0039    300000   328.425   530      −26       1800
40           0.0054    300000   328.425   530      −26       1800
50           0.0038    300000   328.425   530      −26       1800
Distance, load, curve, ramp, and time reached stable values at iteration 4. On the other hand, acceleration had not reached a stable value by the 50th iteration.
7. Conclusions
In this study, a novel hybrid methodology was developed using Pareto analysis and the PSO algorithm. First, a mathematical model of a high-speed train in Turkey (YHT 65000) was used as the objective function for optimization. Next, constrained PSO and the new hybrid of Pareto and constrained PSO were compared using the optimal values found for acceleration, ramp, distance, power, travel time, curve, load, and elapsed time in reaching the minimal motion resistance.
This new model has some advantages over constrained PSO. First, it has a very simple algorithm that adds only one process to the PSO flow chart. Second, it requires fewer iterations than PSO. When the experiments were repeated 100 times for both the constrained PSO method and the proposed hybrid Pareto-PSO method at their optimum numbers of particles (Nparticles,PSO = 20, Nparticles,Pareto-PSO = 10), PSO found an average minimum resistance of −6035.7 kW, with an average of 30.51 iterations and 29.4 milliseconds of computation time. The proposed hybrid method found an average minimum resistance of −7483.5 kW, with an average of 29.45 iterations and 16.3 milliseconds of computation time, for the optimum a (acceleration), D (distance), G (load), k (curve), r (ramp), and t (time) values. These performance results show that the proposed hybrid method reduces the calculation time of the original PSO to 55.44% on average while obtaining much more reliable values for the minimum resistance.
This novel hybrid method can easily be applied to the optimization of other mathematical equations. The system is not only simple to implement but also performs well in terms of both the result and the number of iterations. The method can likewise be adapted to conventional locomotives by substituting their constraints on acceleration, load, duration, travel time, ramp, curve, and travel distance for those of the high-speed locomotive chosen in this study. This makes the proposed approach not merely a special method for high-speed trains but a general one, applicable to motion-resistance optimization for all locomotives.
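In practice, adapting the method to another locomotive amounts to swapping the feasible box into which the constrained PSO projects its particles. A sketch of that projection step is shown below; the bound values are hypothetical placeholders, not the YHT 65000 constraints used in this study.

```python
# Hypothetical per-variable bounds (lower, upper); a conventional locomotive
# would substitute its own limits for these placeholder values.
BOUNDS = {
    "a": (-1.0, 1.0),      # acceleration
    "D": (0.0, 500000.0),  # travel distance
    "G": (0.0, 600.0),     # load
    "k": (0.0, 4000.0),    # curve
    "r": (-30.0, 30.0),    # ramp
    "t": (0.0, 7200.0),    # travel time
}

def clamp_position(position, bounds=BOUNDS):
    """Project a particle's position back into the feasible box after the
    PSO velocity update; this is the only locomotive-specific step."""
    return {name: min(max(value, bounds[name][0]), bounds[name][1])
            for name, value in position.items()}
```

Everything else in the hybrid Pareto-PSO loop is independent of the vehicle, which is what makes the generalization claimed above straightforward.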
Data Availability
No specific data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] Sertsoz M. Raylı sistemlerde güneş enerjisi destekli yenilikçi enerji modellemesi, optimizasyonu ve analizi [Innovative solar-energy-assisted energy modelling, optimization, and analysis in rail systems]. Ph.D. thesis, Enerji Sistemleri Mühendisliği, Bilecik Şeyh Edebali University, Bilecik, Turkey, 2018.
[2] Dorigo M, Maniezzo V, Colorni A. Ant system: optimization by a colony of cooperating agents. 1996;26(1):29-41. doi:10.1109/3477.484436.
[3] Stützle T, Hoos HH. The Max-Min Ant System and local search for combinatorial optimization problems. 2000. doi:10.1007/978-1-4615-5775-3_22.
[4] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, October 1995. IEEE. doi:10.1109/MHS.1995.494215.
[5] Karaboga D, Basturk B. Sayısal fonksiyon optimizasyonu için güçlü ve verimli bir algoritma: yapay arı kolonisi (ABC) algoritması [A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm]. 2007;39(3). doi:10.1007/s10898-007-9149-x.
[6] Pan W-T. A new fruit fly optimization algorithm: taking the financial distress model as an example. 2012;26:69-74. doi:10.1016/j.knosys.2011.07.001.
[7] Yang X-S, Deb S. Cuckoo search via Lévy flights. In: Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, December 2009. IEEE. doi:10.1109/NABIC.2009.5393690.
[8] Gandomi AH, Alavi AH. Krill herd: a new bio-inspired optimization algorithm. 2012;17(12):4831-4845. doi:10.1016/j.cnsns.2012.05.010.
[9] Passino KM. Biomimicry of bacterial foraging for distributed optimization and control. 2002;22(3):52-67. doi:10.1109/MCS.2002.1004010.
[10] Yang X-S. A new metaheuristic Bat-inspired algorithm. In: Proceedings of the International Workshop on Nature Inspired Cooperative Strategies for Optimization (NICSO 2008), Tenerife, Spain, November 2008.
[11] Yang X-S. Firefly algorithms for multimodal optimization. 2009;5792:169-178. doi:10.1007/978-3-642-04944-6_14.
[12] Rajakumar BR. The Lion's Algorithm: a new nature-inspired search algorithm. 2012;6:126-135. doi:10.1016/j.protcy.2012.10.016.
[13] Mirjalili S, Mirjalili SM, Lewis A. Grey Wolf Optimizer. 2014;69:46-61. doi:10.1016/j.advengsoft.2013.12.007.
[14] Kaveh A, Farhoudi N. A new optimization method: dolphin echolocation. 2013;59:53-70. doi:10.1016/j.advengsoft.2013.03.004.
[15] Mehrabian AR, Lucas C. A novel numerical optimization algorithm inspired from weed colonization. 2006;1(4):355-366. doi:10.1016/j.ecoinf.2006.07.003.
[16] Uymaz SA, Tezel G, Yel E. Artificial algae algorithm with multi-light source for numerical optimization and applications. 2015;138:25-38. doi:10.1016/j.biosystems.2015.11.004.
[17] Li MD, Zhao H, Weng XW, Han T. A novel nature-inspired algorithm for optimization: virus colony search. 2016;92:65-88. doi:10.1016/j.advengsoft.2015.11.004.
[18] Abedinia O, Amjady N, Ghasemi A. A new metaheuristic algorithm based on shark smell optimization. 2014;21(5):97-116. doi:10.1002/cplx.21634.
[19] Yu JJQ, Li VOK. A social spider algorithm for global optimization. 2015;30:614-627. doi:10.1016/j.asoc.2015.02.014.
[20] Erdoğmuş P. Doğadan esinlenen optimizasyon algoritmaları ve optimizasyon algoritmalarının optimizasyonu [Nature-inspired optimization algorithms and the optimization of optimization algorithms]. 2016;4:293-304.
[21] Chen J, Zheng J, Wu P, Zhang L, Wu Q. Dynamic particle swarm optimizer with escaping prey for solving constrained non-convex and piecewise optimization problems. 2017;86:208-223. doi:10.1016/j.eswa.2017.05.047.
[22] Moeini R, Babaei M. Constrained improved particle swarm optimization algorithm for optimal operation of large scale reservoir: proposing three approaches. 2017;8(4):287-301. doi:10.1007/s12530-017-9192-x.
[23] Duman S. A modified moth swarm algorithm based on an arithmetic crossover for constrained optimization and optimal power flow problems. 2018;6:45394-45416. doi:10.1109/access.2018.2849599.
[24] Tsekouras GE, Tsimikas J, Kalloniatis C, Gritzalis S. Interpretability constraints for fuzzy modeling implemented by constrained particle swarm optimization. 2018;26(4):2348-2361. doi:10.1109/tfuzz.2017.2774187.
[25] Wang M, Luo J, Yuan J, Walter U. Coordinated trajectory planning of dual-arm space robot using constrained particle swarm optimization. 2018;146:259-272. doi:10.1016/j.actaastro.2018.03.012.
[26] Xu M, Du B, Fan Y. Endmember extraction from highly mixed data using linear mixture model constrained particle swarm optimization. 2019;57(8):5502-5511. doi:10.1109/tgrs.2019.2899826.
[27] Ang KM, Lim WH, Isa NAM, Tiang SS, Wong CH. A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems. 2020;140:112882. doi:10.1016/j.eswa.2019.112882.
[28] Hwang CL, Masud ASM. Multiple Objective Decision Making: Methods and Applications. Berlin, Germany: Springer; 1979.
[29] Ulungu EL, Teghem J. Multi-objective combinatorial optimization problems: a survey. 1994;3(2):83-104. doi:10.1002/mcda.4020030204.
[30] Di Barba P, Farina M, Savini A. An improved technique for enhancing diversity in Pareto evolutionary optimization of electromagnetic devices. 2001;20(2):482-496. doi:10.1108/03321640110383366.
[31] Baumgartner U, Magele C, Renhart W. Pareto optimality and particle swarm optimization. 2004;40(2):1172-1175. doi:10.1109/tmag.2004.825430.
[32] Shubham A, Dashora Y, Tiwari MK, Young-Jun S. Interactive particle swarm: a Pareto-adaptive metaheuristic to multiobjective optimization. 2008;38(2):258-277. doi:10.1109/tsmca.2007.914767.
[33] Coello CAC, Pulido GT, Lechuga MS. Handling multiple objectives with particle swarm optimization. 2004;8(3):256-279. doi:10.1109/tevc.2004.826067.
[34] Knowles JD, Corne DW. Approximating the nondominated front using the Pareto archived evolution strategy. 2000;8(2):149-172. doi:10.1162/106365600568167.
[35] Abbass HA, Sarker R, Newton C. PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, Seoul, Korea, February 2001, pp. 971-978. doi:10.1109/CEC.2001.934295.
[36] Mahesh K, Nallagownden P, Elamvazuthi I. Advanced Pareto front non-dominated sorting multi-objective particle swarm optimization for optimal placement and sizing of distributed generation. 2016;9(12):982. doi:10.3390/en9120982.
[37] Aminbakhsh S, Sonmez R. Pareto front particle swarm optimizer for discrete time-cost trade-off problem. 2017;31(1). doi:10.1061/(asce)cp.1943-5487.0000606.
[38] Burugari VK, Periasamy PS. Multi QoS constrained data sharing using hybridized Pareto-glowworm swarm optimization. 2017;22(S4):9727-9735. doi:10.1007/s10586-017-1454-7.
[39] Zhang Z, Wang K, Zhu L, Wang Y. A Pareto improved artificial fish swarm algorithm for solving a multi-objective fuzzy disassembly line balancing problem. 2017;86:165-176. doi:10.1016/j.eswa.2017.05.053.
[40] Butcher SGW, Sheppard JW, Strasser S. Pareto improving selection of the global best in particle swarm optimization. In: Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, July 2018. IEEE, pp. 662-669. doi:10.1109/CEC.2018.8477683.
[41] Wu BL, Hu W, He ZN, Jiang M, Yen GG. A many-objective particle swarm optimization based on virtual Pareto front. 2018, pp. 78-85. doi:10.1109/CEC.2018.8477802.
[42] Lv Z, Wang L, Han Z, Zhao J, Wang W. Surrogate-assisted particle swarm optimization algorithm with Pareto active learning for expensive multi-objective optimization. 2019;6(3):838-849. doi:10.1109/jas.2019.1911450.
[43] Orellano A. Aerodynamics of High Speed Trains. Stockholm, Sweden: KTH; 2010.
[44] Arsene S, Sebeşan I. Analysis of the resistance to motion in the passenger trains hauled by the locomotive LE 060 EA 5100 kW. 2014;6(3):13-21. doi:10.13111/2066-8201.2014.6.3.2.
[45] Sertsöz M, Fidan M. A critical approach to the particle swarm optimization method for finding maximum points. 2020;4(2):111-122. doi:10.26900/jsp.4.009.
[46] Gkortzas P. Master's thesis, Mälardalen University, Västerås, Sweden, 2013.
[47] Rangelov VN. Gradient modelling with calibrated train performance models. 2012;127:123-134. doi:10.2495/CR120111.
[48] Burrell QL. The 80/20 rule: library lore or statistical law? 1985;41(1):24-39. doi:10.1108/eb026772.
[49] Egghe L. Pratt's measure for some bibliometric distributions and its relation with the 80/20 rule. 1987;38(4):288-297. doi:10.1002/(sici)1097-4571(198707)38:4<288::aid-asi9>3.0.co;2-q.
[50] Chen Y-S, Pete Chong P, Tong Y. Theoretical foundation of the 80/20 rule. 1993;28(2):183-204. doi:10.1007/BF02016899.
[51] TCDD HT65000. https://tr.wikipedia.org/wiki/TCDD_HT65000.