Particle Swarm Optimization Algorithm Based on Information Sharing in Industry 4.0

Intelligent manufacturing is an important part of Industry 4.0, and artificial intelligence technology is a necessary means to realize it. This requires the exploration of pattern recognition, computer vision, intelligent optimization, and other related technologies. The particle swarm optimization (PSO) algorithm is an optimization algorithm inspired by the foraging behavior of birds. PSO is an intelligent technique whose efficiency as an optimizer has been verified by extensive research and experiments. In this paper, the traditional PSO algorithm is compared with genetic algorithms (GA) to illustrate its performance. By analyzing the advantages and disadvantages of the traditional PSO algorithm, we improve it by introducing an information sharing mechanism and a competition strategy; the result is called information sharing based PSO (IPSO). The novel IPSO algorithm retains the rapid convergence speed of the traditional PSO while enhancing the global search capability. Our experimental results show that IPSO performs better than both the traditional PSO and the GA on benchmark functions, especially for difficult functions.


Introduction
The particle swarm optimization (PSO) algorithm is a bioinspired intelligent technique proposed by Kennedy in 1995 [1]. It is a metaheuristic optimization method inspired by the foraging behavior of birds [2]. Compared with genetic algorithms (GA), both are population based, but PSO has no genetic operators, which act only on individual chromosomes. Instead, PSO works by having the individuals exchange information with each other, so that the population as a whole is optimized. Starting from a group of random solutions, optimal solutions are obtained by repeated search. When the PSO algorithm was first published, it attracted extensive attention from scholars in the optimization field and soon became a focus of research there; many scientific achievements have been made in this field. Kulkarni and Venayagamoorthy used PSO to address wireless sensor network (WSN) issues such as optimal deployment, node localization, clustering, and data aggregation. Park et al. proposed an improved PSO framework employing chaotic sequences combined with the conventional linearly decreasing inertia weights and adopting a crossover operation scheme to increase both the exploration and exploitation capability of PSO. Wu et al. proposed a swarm-intelligence-based method, the particle swarm optimization (PSO) algorithm, to handle the elastic parameter inversion problem. Gong et al. proposed a discrete framework for the particle swarm optimization algorithm and, based on it, a multiobjective discrete particle swarm optimization algorithm to solve the network clustering problem. Chen et al. used particle swarm optimization to determine optimal power management. Delgarm et al.
proposed mono- and multiobjective particle swarm optimization (MOPSO) algorithms coupled with the EnergyPlus building energy simulation software to find a set of nondominated solutions that enhance building energy performance. Wu et al. introduced the particle swarm optimization algorithm into KNN multilabel classification, adjusted the Euclidean distance formula of the traditional KNN classification algorithm, and added a weight value to each feature. Mousavi et al. proposed a modified particle swarm optimization for solving the integrated location and inventory control problems in a two-echelon supply chain network. Chatterjee et al. proposed a particle swarm optimization-based approach to train neural networks (NN-PSO). The PSO is employed to find a weight vector with minimum root-mean-square error (RMSE) for the NN. The proposed NN-PSO classifier is capable of tackling the problem of predicting structural failure of multistoried reinforced concrete buildings by detecting the failure possibility of the multistoried RC building structure in the future.
Through research and experiments, the PSO algorithm has proven to be an efficient metaheuristic optimization algorithm [29]. The PSO algorithm does not need to model the optimization problem and has strong global search ability. But, like all metaheuristic algorithms, PSO cannot guarantee finding the optimal solution. PSO is special in that it does not need the gradient of the optimization problem; in other words, it is unlike classic deterministic optimization methods such as gradient descent and the quasi-Newton method. Therefore, the PSO algorithm is widely used in partially irregular optimization, high-noise optimization, time-varying optimization, and so on. However, the disadvantage of the traditional PSO algorithm is that it falls into local optima easily.
Thus, PSO needs to be improved to avoid this shortcoming. This paper proposes an information sharing based PSO algorithm (called IPSO; details are described in Section 3). Its implementation is as simple as the traditional PSO, but the experimental results on benchmark functions show that IPSO searches for global optima more efficiently. The main contributions of our work are as follows: (1) We study the performance of traditional PSO by comparing it with genetic algorithms. (2) We analyze the shortcoming of traditional PSO and develop an improved version, called IPSO; we verify that IPSO performs better than traditional PSO on benchmark functions. The rest of this paper is organized as follows. Section 2 formally introduces the traditional PSO algorithm and investigates its performance on 10 benchmark functions. In Section 3, we first analyze the reasons why traditional PSO cannot always achieve optimal solutions; based on this analysis, we develop a novel PSO (IPSO) and conduct experiments. Finally, Section 4 concludes the paper.

PSO Algorithm.
The PSO algorithm was first proposed by Eberhart and Kennedy in 1995, inspired by the flight behavior of groups of birds in search of food. PSO is an efficient metaheuristic optimization algorithm with wide applications in the optimization field. The traditional PSO algorithm works by generating a group (called a swarm) of candidate solutions (called particles). The movement of these particles follows certain rules that can be expressed by equations. The best location found by each particle and the best location found by the whole swarm are important factors: when a better location is found, the entire swarm moves toward it. This process repeats until a satisfactory solution (the best location) is found or the termination condition is reached.
Suppose that f: R^n → R is the function to be minimized. A candidate solution is a vector of real numbers, and each candidate solution has a real-valued fitness calculated using the objective function. The function's gradient is unknown. The purpose is to find a vector A satisfying f(A) ≤ f(B) for all B in the search space; that is, the global minimum that the algorithm is looking for is the solution A.
A global maximum can be sought in the same way by minimizing the function h = −f.
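The maximization-to-minimization reduction just stated is a one-liner in code. This is a minimal illustration; the helper name is ours, not from the paper:

```python
def as_minimization(h):
    # Maximizing h over the search space is equivalent to
    # minimizing f(x) = -h(x): argmax h == argmin (-h).
    return lambda x: -h(x)
```

For example, wrapping h(x) = −(x − 3)^2 (maximum 0 at x = 3) yields f(x) = (x − 3)^2, whose minimizer is the same point.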
In the traditional PSO algorithm, each solution in the search space can be thought of as a bird, called a "particle." The fitness value of each particle is calculated by the objective function. The state of each particle is determined by its velocity and location; the velocity of a particle determines its direction of flight. All the particles in the swarm search for the optimal solution through mutual cooperation and information exchange. The search process of the PSO algorithm is driven by updates of each particle's state. During initialization, the algorithm randomly generates a group of particles (random solutions), each carrying a velocity and a location. The velocity and location of each particle are then updated using equations (1) and (2). Equation (1) depends on the best location experienced by the current particle (called P_idb) and the best location found so far by the entire swarm (called P_gdb in equation (1)). The velocity and location updates repeat until the maximum number of iterations, set at the beginning of the algorithm, is reached. Through this iterative updating mechanism, the algorithm can find (or approach) the best solution in the solution space.
Because each particle is updated not only from its own best experienced location (P_idb) but also from the whole swarm's best location (P_gdb), the particles in the swarm must exchange information with each other. For the traditional PSO algorithm, a particle's velocity and location update equations are:

V_id = ω · V_id + η1 · rand() · (P_idb − X_id) + η2 · rand() · (P_gdb − X_id),   (1)

X_id = X_id + V_id.   (2)

Here, ω is the inertia weight, a scaling factor applied to the particle's previous velocity, with 0 < ω < 1. η1 and η2 are the accelerating factors; they are constants, typically η1 = η2 = 2. rand() is a function generating uniform random numbers. X_id denotes the location of particle id and V_id denotes its velocity. P_idb denotes the best location found so far by particle id, and P_gdb denotes the best location found so far by the whole swarm.
Equation (1) combines three parts. The first is the influence of the particle's previous velocity; it gives the particles the ability to explore, which enhances the global search ability of the algorithm. The second part, called the cognition part, represents the influence of the particle's own experience. The third part, called the social part, represents the particles' learning strategy: the process of learning from experience and cooperating socially with each other. The traditional PSO algorithm's flow is as follows. First, the algorithm randomly generates a group of particles, each with a random initial velocity V_id and a random location X_id. Then, each particle is evaluated by calculating its fitness value f. Find the best location P_idb of each particle and the global best location P_gdb with the best fitness value in the current swarm. According to equations (1) and (2), each particle updates its velocity and location, then recalculates its fitness value using the objective function at its current location. According to the recalculated fitness values, the algorithm updates each particle's best location P_idb and the swarm's best location P_gdb, and then applies equations (1) and (2) again. The velocity and location updates repeat until the maximum number of iterations, set at the beginning of the algorithm, is reached.
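The flow just described can be sketched in Python. This is a minimal reading of equations (1) and (2) with the paper's parameter names (ω, η1, η2), not the authors' original code; the initialization details are our own assumptions:

```python
import random

def pso(f, dim, bounds, swarm_size=30, max_iter=200, w=0.6, eta1=2.0, eta2=2.0):
    """Minimize f over a box using the traditional PSO update, equations (1)-(2)."""
    lo, hi = bounds
    # Random initial locations and small random initial velocities.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(swarm_size)]
    P_idb = [x[:] for x in X]          # each particle's best location so far
    P_gdb = min(P_idb, key=f)[:]       # the swarm's best location so far
    for _ in range(max_iter):
        for i in range(swarm_size):
            for d in range(dim):
                # Equation (1): inertia + cognition part + social part.
                V[i][d] = (w * V[i][d]
                           + eta1 * random.random() * (P_idb[i][d] - X[i][d])
                           + eta2 * random.random() * (P_gdb[d] - X[i][d]))
                # Equation (2): move the particle.
                X[i][d] += V[i][d]
            # Update the particle's best and the swarm's best locations.
            if f(X[i]) < f(P_idb[i]):
                P_idb[i] = X[i][:]
                if f(X[i]) < f(P_gdb):
                    P_gdb = X[i][:]
    return P_gdb, f(P_gdb)
```

On a simple convex function such as the sphere function, this sketch converges to a point near the origin within a few hundred iterations.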

Experiment Comparison.
The traditional PSO algorithm has a high convergence rate; Clerc presented a proof in his paper [2]. We select 10 benchmark functions to verify the high convergence rate of the PSO algorithm experimentally and compare it with the GA. The 10 benchmark functions include multimodal functions, which are difficult to solve; we chose them because we want to study not only the algorithm's convergence efficiency but also its ability to find global optimal solutions. The 10 benchmark functions are briefly described below. F1: Schaffer function (the function's graph is shown in Figure 1).
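The text does not reproduce the formula for F1, but a commonly used form of the Schaffer function (often called Schaffer F6, with global minimum 0 at the origin and highly multimodal concentric ridges) is the following; that this is exactly the variant plotted in Figure 1 is an assumption on our part:

```python
import math

def schaffer_f6(x, y):
    # Schaffer F6: global minimum 0 at (0, 0); values stay in [0, 1).
    # The sin term creates rings of local optima around the origin.
    r2 = x * x + y * y
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2
```

Its rings of near-optimal local minima are what make it a standard stress test for the global search ability of an optimizer.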

Wireless Communications and Mobile Computing
Function 5 (the function's graph shown in Figure 5).
Function 6 (the function's graph shown in Figure 6).
Function 7 (the function's graph shown in Figure 7).
Function 8 (the function's graph shown in Figure 8).
Function 9 (the function's graph shown in Figure 9).
Function 10 (the function's graph shown in Figure 10).
To verify the analysis of the high convergence rate of the PSO algorithm, we ran each algorithm 100 times on the above 10 benchmark test functions. The comparative experimental results are shown in Table 1. The comparison includes the optimal solution found and the number of times the optimal solution was found for each function. For example, for the difficult Function 6, GA cannot find the optimal solution at all; the best solution GA finds is −14.786954. But the PSO algorithm finds the optimal solution (−39.944506) five times in the 100 runs.
From the experimental results in Table 1, we can observe that the convergence rate of the PSO algorithm is significantly higher than that of GA, except on the simplest function, F9.
For Function 9, both algorithms find the optimal solution in 100% of the 100 runs. The following conclusions can be drawn from the experimental results in Table 1: the PSO algorithm has better global search ability than GA, and its convergence rate is better than GA's. Thus, compared with GA, we prefer to apply PSO to optimization problems such as the traveling salesman problem.
From Table 1, we also notice that although the PSO algorithm performs better than the GA, it cannot always find the optimal solution, except on the simplest function, F9. For functions F4, F5, and F6, PSO's success rate of reaching the optimal solutions is significantly better than GA's, but it is still not very high. We analyze the reason and improve the traditional PSO in the next section.
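The success-rate metric used in these comparisons (how many of 100 independent runs reach the known optimum) can be computed with a small harness like the following; the function names and the tolerance are our own illustrative choices, not from the paper:

```python
import random

def success_rate(optimizer, f, f_min, runs=100, tol=1e-6):
    """Run a stochastic optimizer repeatedly and count how often it
    reaches the known optimal value f_min within tolerance tol."""
    hits = 0
    for seed in range(runs):
        random.seed(seed)            # independent, reproducible runs
        _, best_val = optimizer(f)   # optimizer returns (location, value)
        if abs(best_val - f_min) <= tol:
            hits += 1
    return hits
```

Passing the same set of seeds to both algorithms makes the PSO-versus-GA comparison reproducible run for run.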

Why PSO Could Not Always Find Optima.
The above experimental results show that the reason the PSO algorithm converges quickly is that the cognition part and the social part in equation (1) guide particles to search only around P_gdb and P_idb. By analyzing the velocity and location update equations, it can be seen that when the best particle in the swarm falls into a local optimum, the algorithm's information sharing mechanism guides all the particles to that local optimum location, causing the entire swarm to converge to this solution alone. As the entire swarm falls into the local optimum, the values of the cognition part and the social part in equation (1) eventually become zero. At the same time, as the iterations proceed, the particles' velocities eventually drop to zero because the inertia weight is between 0 and 1 (0 < ω < 1).
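The geometric velocity decay described here is easy to verify numerically: once the cognition and social parts vanish, equation (1) degenerates to V ← ωV, so the speed shrinks by a factor of ω each iteration. A tiny illustration:

```python
def decayed_speed(v0, w, iters):
    # Once P_idb == P_gdb == X_id, equation (1) reduces to V <- w * V,
    # so after k iterations the speed is w**k * v0, which tends to zero
    # for any inertia weight 0 < w < 1.
    v = v0
    for _ in range(iters):
        v = w * v
    return v
```

With ω = 0.6, fifty iterations already reduce a unit speed below 1e-9, which is why a converged swarm effectively stops moving.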
When all the particles in the swarm have zero velocity, the swarm has converged to one location. At this point, if the current value of P_gdb is not the global optimum, the entire swarm is stuck at the current best solution and cannot jump out of the local optimum. This occurs especially frequently for multimodal functions. That is why the traditional PSO can achieve the optimal solution only five times out of 100 runs on function F6, although it is still better than the GA algorithm, which achieves it zero times out of 100 runs. To further verify this, we select three more multimodal functions (shown in Table 2 and Figures 11-13). In Table 2, S presents the range of the variables and f_min shows the minimum value of each function. As in Section 2, we ran the traditional PSO 100 times for each function. In our experiments, the traditional PSO could not achieve the optimal solution even once out of 100 runs for any of these functions.
Thus, we count the best, mean, and worst values achieved by the traditional PSO in Table 3. These experimental results confirm our conjecture: the traditional PSO algorithm falls into local optima easily. (The GA algorithm has the same problem.) The main reason is that it has no mechanism to force the particles to escape from a local optimum. This is the fatal disadvantage of the traditional PSO algorithm, and it is why traditional PSO cannot always find the optimal solution. With this guidance, we develop a new mechanism in the next subsection that forces the traditional PSO algorithm to avoid falling into local optima.

An Improved Version of PSO Algorithm.
In this section, we propose an improved version of the PSO algorithm, called IPSO, which aims to overcome the disadvantage of easily falling into local optima. The main requirement for the IPSO algorithm is a very strong global search ability. To achieve this, a new information sharing mechanism is introduced into the IPSO algorithm. This reverse mechanism lets the particles move in the direction opposite to the worst particle locations and the swarm's worst location. In this way, the global search space can be expanded and the probability of particles falling into local optima can be reduced.
In the traditional PSO algorithm, each particle's flying direction follows a certain rule: it always approaches the location of the best particle in the swarm. From the above analysis, the risk of doing this is that if the best particle in the swarm cannot find the optimal solution, the entire swarm cannot find it either. To avoid falling into local optimal solutions, the new algorithm gives each particle two flight directions (an active direction and a passive direction); this differs from the traditional PSO algorithm, in which particles have only one direction of flight. The passive direction is used for particles that fall into local optima. Since particles in the passive direction are not always worse than those in the active direction, the new algorithm selects the best candidate from the active and passive directions for the update operation. A competitive strategy is thus introduced into the new algorithm: it prompts the worst information to compete with the best. This not only reduces the probability of falling into local optima but also increases the probability of converging to the optimal solution, while the time complexity is only double that of the traditional PSO algorithm.
How does the new algorithm give each particle two flight directions (active and passive)? In the traditional PSO algorithm, each particle already has a planned flight direction; this direction is retained in the new algorithm as the active flight direction. The process that produces the passive flight direction mirrors the one that produces the active flight direction. When a particle searches for the optimal solution in the solution space, it does not know where the optimum is. In the traditional PSO algorithm, the best location of each particle and the best location of the entire swarm are recorded, and the particle's active direction is generated from these two best locations through equations (1) and (2). We amend this process to generate the passive flight direction: we record each particle's worst location and the worst location of the entire swarm, and from these two worst locations the new algorithm generates the passive flight direction through equations (13) and (14), the new velocity and location update equations of particles in the new algorithm.
V_id = ω · V_id + η1 · rand() · (X_id − P_idw) + η2 · rand() · (X_id − P_gdw),   (13)

X_id = X_id + V_id,   (14)

where P_idw represents the worst location that particle id has recorded and P_gdw represents the worst location that the entire swarm has recorded. The particle updating strategy of IPSO is to select the better of the two flight directions (active and passive) for each particle update. In this way, the search space of the particles is expanded and premature convergence to local optima is avoided; therefore, the probability of finding the global optimal solution is improved. Since IPSO needs to calculate the velocity and location of each particle twice, its time complexity is twice that of the traditional PSO. The framework of IPSO is given in Algorithm 1.
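The sign flip in the passive update can be isolated in a one-dimensional sketch. This is our reconstruction of equation (13) from the description in the text (the exact original form is not reproduced here), with the paper's parameter defaults:

```python
import random

def passive_velocity(v, x, p_idw, p_gdw, w=0.6, eta1=2.0, eta2=2.0):
    """One-dimensional sketch of the passive-direction velocity update:
    the cognition and social terms point from the worst locations toward
    the particle, so the particle is repelled from P_idw and P_gdw
    instead of being attracted to P_idb and P_gdb as in equation (1)."""
    return (w * v
            + eta1 * random.random() * (x - p_idw)    # away from particle's worst
            + eta2 * random.random() * (x - p_gdw))   # away from swarm's worst
```

If the particle sits on the positive side of both worst locations, the resulting velocity is non-negative: the particle keeps moving away from them.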

Results and Discussion
To verify the effect of the IPSO algorithm, we use the same 10 benchmark functions described before and compare IPSO with the traditional PSO algorithm. Again, both algorithms run 100 times on every function, with parameters set as ω = 0.6 and η1 = η2 = 2. Table 4 compares the experimental results.
From Table 4, we can see that our IPSO achieves remarkable results. It achieves the optimal solution every time on six of the 10 functions. On function F3 it almost always achieves the optimal solution (97 times out of 100 runs). On the remaining three difficult functions, F4, F5, and F6, our IPSO also significantly improves the success rate of finding the optimal solution. For example, on the most difficult function, F6 (again, a multimodal function), it achieves the optimal solution 18 times out of 100 runs, whereas the traditional PSO algorithm achieves it only five times.
For the last three multimodal functions (shown in Table 2), Table 5 shows the experimental results (the best, mean, and worst values). From Table 5, we can conclude that our IPSO also significantly improves on the performance of the traditional PSO algorithm. For these three difficult functions, our IPSO gets very close to the optimal solutions shown in the last column (only a tiny difference), and its best, mean, and worst values are also very close to each other.

Varied Parameters Setting.
To further verify the performance of our IPSO, we also carried out experiments with different parameter values and compared the results with the traditional PSO.
There are three parameters in the velocity update equation: ω, η1, and η2. Among them, η1 and η2 are the accelerating factors, and their value is constant; it is generally set as η1 = η2 = 2 [1]. We do not vary them here and keep them fixed at that setting. We change only the value of the inertia weight ω, which balances the global and local search capabilities: the larger the inertia weight ω, the more both our IPSO and the standard PSO favor global search. Table 6 shows the experimental results for inertia weight values from 0.1 to 0.9 on function F3. As in the previous experiments, each algorithm runs 100 times with the fixed accelerating factor setting η1 = η2 = 2. Table 6 shows that our IPSO always performs better than the traditional PSO algorithm under the different settings. It also shows that both our IPSO and the traditional PSO perform best when the inertia weight is ω = 0.6 or 0.7. As the inertia weight ω increases, the performance of both algorithms improves; however, once the inertia weight reaches 0.8 or more, our IPSO keeps its performance, while the traditional PSO gets worse as the inertia weight increases.

Step 1: randomly generate the particles and assign each particle a velocity and a location.
Step 2: evaluate each particle using the fitness value.
Step 3: for each particle, if its fitness value is less than the fitness value of its best location P_idb, update P_idb with the particle's location. Otherwise, if its fitness value is greater than the fitness value of its worst location P_idw, update P_idw with the particle's location.
Step 4: for each particle, if its fitness value is less than the best fitness value P_gdb of the entire swarm, update P_gdb using this particle. Otherwise, if its fitness value is greater than the worst fitness value P_gdw of the entire swarm, update P_gdw using this particle.
Step 5: for each particle, (1) compute a candidate update t in the active direction using equations (1) and (2); (2) compute a candidate update t′ in the passive direction using equations (13) and (14); (3) compare t and t′, and choose the better of them to update this particle.
Step 6: repeat from Step 2 until the optimal solution is found or the termination condition is satisfied.
ALGORITHM 1: The framework of IPSO (searching for the minimum value).
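Putting the steps of Algorithm 1 together, a compact Python sketch follows. It is our reading of the framework, not the authors' code: the passive update reconstructs equations (13) and (14) from the text, and the initialization details are assumptions:

```python
import random

def ipso(f, dim, bounds, swarm_size=30, max_iter=200, w=0.6, eta1=2.0, eta2=2.0):
    """Sketch of Algorithm 1: each particle computes an active candidate
    (attracted to the best locations) and a passive candidate (repelled
    from the worst locations) and keeps whichever has the better fitness."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    P_idb = [x[:] for x in X]          # per-particle best locations
    P_idw = [x[:] for x in X]          # per-particle worst locations
    P_gdb = min(X, key=f)[:]           # swarm's best location
    P_gdw = max(X, key=f)[:]           # swarm's worst location
    for _ in range(max_iter):
        for i in range(swarm_size):
            r1, r2, r3, r4 = (random.random() for _ in range(4))
            v_a, v_p, active, passive = [], [], [], []
            for d in range(dim):
                # Active direction: equations (1)-(2).
                va = (w * V[i][d] + eta1 * r1 * (P_idb[i][d] - X[i][d])
                      + eta2 * r2 * (P_gdb[d] - X[i][d]))
                # Passive direction: repelled from the worst locations.
                vp = (w * V[i][d] + eta1 * r3 * (X[i][d] - P_idw[i][d])
                      + eta2 * r4 * (X[i][d] - P_gdw[d]))
                v_a.append(va)
                v_p.append(vp)
                active.append(X[i][d] + va)
                passive.append(X[i][d] + vp)
            # Competition: keep the better of the two candidates (Step 5).
            if f(active) <= f(passive):
                X[i], V[i] = active, v_a
            else:
                X[i], V[i] = passive, v_p
            # Update the best/worst records (Steps 3 and 4).
            fx = f(X[i])
            if fx < f(P_idb[i]):
                P_idb[i] = X[i][:]
            if fx > f(P_idw[i]):
                P_idw[i] = X[i][:]
            if fx < f(P_gdb):
                P_gdb = X[i][:]
            if fx > f(P_gdw):
                P_gdw = X[i][:]
    return P_gdb, f(P_gdb)
```

Each iteration evaluates two candidate moves per particle, which is where the doubled time complexity relative to the traditional PSO comes from.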

Conclusions
An information sharing based PSO algorithm called IPSO is proposed in this paper. Compared with the traditional PSO algorithm, improvements have been made in two areas. First, it introduces a new information sharing mechanism: each particle's worst location and the entire swarm's worst location are also recorded. This new mechanism causes the particles to move in the direction opposite to the worst particle locations and the swarm's worst location. In this way, the global search space can be expanded and the probability of particles falling into local optima can be reduced.
Second, it introduces a particle competition strategy that prompts the worst information to compete with the best. This not only reduces the probability of falling into local optima but also increases the probability of converging to the optimal solution, while the time complexity is only double that of the traditional PSO algorithm.
In summary, in this paper we first investigated the performance of PSO by comparing it with GA and verified that PSO performs much better than GA. Then, we analyzed the shortcoming of the standard PSO and, guided by this analysis, proposed a new PSO algorithm (IPSO). Our experimental results showed that IPSO performs better than both the traditional PSO and the GA on benchmark functions; especially for difficult functions, IPSO significantly improves the success rate of finding the optimal solution.
Data Availability

The benchmark function data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.